SayCyberOnceMore

  • 1 Post
  • 31 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • Hmm, I set up a Proxmox machine a while back because, well, all the cool kids seemed to do it - and there was plenty of “support” on YouTube

    I found Incus and it just seemed better, but it was harder to find info on (back then) and seemed a little unready

    Now, I regret not sticking with my gut instinct, as I’ve got to basically rip out Proxmox to get Incus in, which means all my VMs are prisoners (and so are we: 1 VM is Home Assistant!)

    So, do you know if it’s possible to migrate my VMs across to Incus, or is it literally wipe drive, start again?

    (Obviously the data in each VM can be backed up & restored into new VMs)
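    For what it’s worth, Incus ships a standalone `incus-migrate` tool that can push an existing disk image into a new Incus VM - a rough sketch, assuming file-based Proxmox storage and made-up VM ids / paths:

    ```shell
    # Hypothetical VM id / paths - adjust to your Proxmox storage layout
    # 1. On Proxmox: stop the VM and find its disk image
    qm stop 100
    ls /var/lib/vz/images/100/        # e.g. vm-100-disk-0.qcow2

    # 2. Copy the disk somewhere the Incus host can reach
    scp /var/lib/vz/images/100/vm-100-disk-0.qcow2 incus-host:/tmp/

    # 3. On the Incus host: run the interactive migration tool, point it
    #    at the qcow2 and choose "virtual machine" mode
    incus-migrate
    ```

    (And for the Home Assistant VM specifically, HA’s built-in full backups are another escape hatch - restore one into a fresh Incus VM.)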



  • I think you’ve misunderstood

    Ok, OMV needs a separate (small) boot drive to install on (ie consider an M.2 / SSD on a USB adapter)

    But, then all your (large) storage is used for the NAS.

    OMV will run Docker containers, but their data would also be pointed to the large NAS storage.

    |  Small |   Large   |
    |--------+-----------|
    | OMV    | Your Files|
    | Docker | Data, etc |
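
    To make that concrete, here’s a hypothetical `docker run` - paths assume OMV’s usual `/srv/dev-disk-by-label-*` mount points, with Syncthing purely as an example container:

    ```shell
    # Small drive: OMV + Docker itself.  Large drive: everything below.
    # (paths are placeholders - match them to your OMV shared folders)
    docker run -d --name syncthing \
      -v /srv/dev-disk-by-label-data/appdata/syncthing:/var/syncthing \
      -v /srv/dev-disk-by-label-data/files:/data \
      -p 8384:8384 -p 22000:22000 \
      syncthing/syncthing
    ```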
    


  • I always prefer bare metal for the core NAS functionality. There’s no benefit in adding a hypervisor layer just to create an NFS / SMB / iSCSI share

    OMV comes with its own bare metal installer, based on Debian, so it’s as stable as a rock.

    If you’ve used it before, you’re probably aware that it needs its own drive to install on, then everything else is the bulk storage pool… I’ve used various USB / mSATA / M.2 drives over the years and found it’s a really good way to segregate things.

    I stopped using OMV when - IMO - “core” functions I was using (ie syncthing) became containers, because I have no use for that level of abstraction (but it’s less work for the OMV dev to maintain addons, so fair enough)

    So you don’t have to install Docker yourself - OMV handles it for you.

    How much OMV’s moved on, I don’t know, but I thought it would simplify your setup.


  • You should have all your data stored separately - it shouldn’t be locked inside containers - and using a VM hosted on a device to serve the data is a little convoluted

    I personally don’t like TrueNAS - I’m not a hater, it just doesn’t float my boat (but I suspect someone will rage-downvote me 😉)

    So, as an alternative approach, have a look at OpenMediaVault

    It’s basically a Debian-based NAS designed for DIY systems: it serves the local drives, but it also has Docker on board, so it feels like it might be a better fit for you.






  • You’ll probably need 2 devices: one actually connected to the external line (ie the modem part) and then your actual router / wifi access point(s).

    Personally, I have a Fritzbox router configured into bridge mode so it just deals with the line signal and passes all the PPPoE / internet comms to a pfSense box I built (ie anything… an old thin client, new microATX, etc…)

    I then have separate POE WAPs for wifi around the house, but pfSense can deal with radio drivers too if separate WAPs are too much today.

    This way, if something goes wrong I can always go back to a single domestic router, keep the family happy, download anything I need to fix my setup and then move forwards again.

    I like having separate components with an up/downgrade path





  • It depends on the sync / backup software

    Syncthing uses a stored list of hashes (which is why it takes a long time for the initial scan), then it can monitor filesystem activity for changes to know what to sync.

    Rsync compares source and destination files by size / modification time (or checksums with -c), then uses its delta-transfer algorithm - rolling checksums - to send only the parts of each file that actually changed

    Then, backup software does… whatever.

    Back in the day on FAT filesystems they used the archive bit in each file’s metadata, which was set whenever the file was written to and cleared by a backup. The next backup could then just grab the files with the bit set.

    Your current strategy is ok - just doing an offline backup after a bulk update - maybe it’s just a case of making that more robust by automating it…?

    I suspect you have quite a large archive as photos don’t compress well, and +2TBs won’t disappear with dedupe… so, it’s mostly about long term archival rather than highly dynamic data changes.

    So that +2TB… do you drop those files in amongst everything else, or do you have 2 separate locations ie, “My Photos” + “To Be Organised”?

    Maybe only back up “My Photos” once a quarter / year (for example), but fully sync “To Be Organised”… then you’ve reduced both the risk and the volume of backup data…?
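
    That split could be as simple as two rsync jobs on different schedules - a sketch with made-up paths and a placeholder `backup:` destination:

    ```shell
    # Nightly: mirror the churn-heavy folder (--delete keeps it an exact copy)
    rsync -a --delete /photos/to-be-organised/ backup:/photos/to-be-organised/

    # Quarterly: archive the stable collection - no --delete, so nothing
    # already at the destination can ever be removed by a bad source
    rsync -a /photos/my-photos/ backup:/photos/my-photos/
    ```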


  • The main point is that sync (like RAID) isn’t a backup. If ransomware got in and started encrypting all your files, how would you know / protect yourself…

    There’s a lot of focus on 3-2-1 backups, so offsite is good, but consider your Grandfather-Father-Son (G-F-S) rotation too - as long as this remote copy isn’t your only long-term backup option, then sync might be ok for you

    So, syncthing / rsync / etc is fine… but maybe just point it to your monthly / weekly / daily backup folder(s) rather than the main files?

    You also had some other suggestions I think, like zfs / btrfs snapshots… which would be a point in time copy of your files.
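
    For reference, both snapshot options are one-liners - pool / subvolume names here are made up:

    ```shell
    # zfs: instant point-in-time copy, named so a G-F-S rotation is obvious
    zfs snapshot tank/photos@2024-Q2
    zfs list -t snapshot tank/photos
    # rolling back discards anything written since the snapshot:
    zfs rollback tank/photos@2024-Q2

    # btrfs equivalent: a read-only snapshot of the subvolume
    btrfs subvolume snapshot -r /mnt/photos /mnt/snapshots/photos-2024-Q2
    ```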

    Or burn the photos to DVD / Bluray and store them at the other location? No power requirements there…


  • Wake on LAN won’t work remotely, so you’d either need to have access to a VPN at their location, or have a 2nd always on device that you can connect to and that could then WoL to your device… or… get a device with an IPMI which you remote into. (All non-VPN forms of remote connection are open to abuse)

    I suspect (guess) you’re not going to be able to setup a VPN, so perhaps an always on pi is going to be necessary - so maybe it’ll be that with drives set to spin down when idle?
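
    If you do end up with an always-on Pi, the WoL part itself is trivial - e.g. with the common `wakeonlan` / `etherwake` tools (the MAC address below is a placeholder):

    ```shell
    # Send a magic packet to the sleeping NAS from the always-on Pi
    wakeonlan aa:bb:cc:dd:ee:ff
    # or, specifying the interface (etherwake needs root):
    sudo etherwake -i eth0 aa:bb:cc:dd:ee:ff
    ```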

    OpenMediaVault was my preferred choice until everything went Docker on it, which was getting too complex for a NAS… so I just created my own, which powers on at certain times of the day and off again when CPU / network IO is low enough.

    Data transfer with syncthing is great, but I don’t really recommend sync for snapshot backups (if your files are all corrupted, it’ll happily sync those corruptions). I have enough space for a few versions of my files, so in theory I can roll back, but it’s certainly not a Grandfather-Father-Son strategy.




  • Ansible is an automation tool to setup systems to a known desirable end state.

    TBH, for a single device, it’s overkill, but you seem like someone who keeps good notes and has some custom files to copy across… you could convert your setup notes into an Ansible playbook, and it will also copy over your custom config files.

    For Ansible you define the desired outcome and it does “all” (kinda) the work for you… so… say you want Apache, MariaDB and PHP, it doesn’t matter if half are installed already, or not, or their dependencies - you just say:

    1. Do an update
    2. Install packages: A B C
    3. Copy my config files over
    4. Start the services
    5. Relax
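
    Those steps map almost one-to-one onto a playbook - a hypothetical sketch (Debian-style package names, made-up host and file paths):

    ```yaml
    - hosts: webserver
      become: true
      tasks:
        - name: Do an update
          ansible.builtin.apt:
            update_cache: yes
            upgrade: safe

        - name: Install packages
          ansible.builtin.apt:
            name: [apache2, mariadb-server, php]
            state: present

        - name: Copy my config files over
          ansible.builtin.copy:
            src: files/apache2.conf
            dest: /etc/apache2/apache2.conf

        - name: Start the services
          ansible.builtin.service:
            name: "{{ item }}"
            state: started
            enabled: true
          loop: [apache2, mariadb]
    ```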

    Yep, it’ll take 10 times as long to get it working up front, but the day you want to duplicate it / start on a fresh Pi / VM, it’s all there for you.

    I use it to set up all my Pi Zeros the same way (they’re doing BLE presence detection) and for their regular updates

    I’ve also got some VMs setup that way

    But… I tried it on a laptop and as it’s a single device I just ended up setting it up manually and now the ansible script is woefully out of date… just some balanced feedback.