

The hard links aren’t between the source and the backup; they’re between Friday’s backup and Saturday’s backup.
> If you want a “time travel” feature, your only option is to duplicate data.
Not true. Look at the --link-dest flag. Encryption, sure, rsync can’t do that, but incremental backups work fine and compression is better handled at the filesystem level anyway IMO.
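As a minimal sketch of how that works (the dates and paths are just placeholders):

# Files unchanged since Friday’s run are hard-linked into Saturday’s directory
# instead of being copied, so every dated folder browses like a full backup
# but only the changed files consume new space.
rsync -a --delete \
  --link-dest=/backups/2024-06-07/ \
  /data/ /backups/2024-06-08/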
There are two ways to maintain a persistent data store for Docker containers: bind mounts and docker-managed volumes.
A Docker managed volume looks like:
datavolume:/data
And then later on in the compose file you’ll have
volumes:
  datavolume:
When you start this container, Docker will create this volume for you in /var/lib/docker/volumes/ and will manage access and permissions. They’re a little easier in that Docker handles permissions for you, but they’re also kind of a PITA because now your compose file and your data are split apart in different locations and you have to spend time tracking down where the hell Docker decided to put the volumes for your service, especially when it comes to backups/migration.
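Put together, a minimal compose file using a managed volume looks something like this (the service name and image are placeholders):

services:
  myapp:
    image: example/myapp:latest   # placeholder image
    volumes:
      - datavolume:/data

volumes:
  datavolume:   # created and managed by Docker under /var/lib/docker/volumes/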
A bind mount looks like:
./datavolume:/data
When you start this container, if it doesn’t already exist, “datavolume” will be created in the same location as your compose file, and the data will be stored there. This is a little more manual, since some containers don’t set up permissions properly; once the volume is created, you may have to shut down the container and then chown the directory so the container can use it. But once up and running it makes things much more convenient, since now all of the data needed by that service is in a directory right next to the compose file (or wherever you decide to put it, since bind mounts let you put the directory anywhere you like).
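The same service with a bind mount instead, plus the one-time permissions fix some images need (the UID:GID is a placeholder; match whatever user the container actually runs as):

services:
  myapp:
    image: example/myapp:latest    # placeholder image
    volumes:
      - ./datavolume:/data         # lives right next to the compose file

# If the container can’t write to it:
#   docker compose down
#   chown -R 1000:1000 ./datavolume
#   docker compose up -d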
Also, with Docker-managed volumes you have to be VERY careful running your docker prune commands: if you run “docker system prune --volumes” while you have any stopped containers, Docker will wipe out all of the persistent data for them. That’s not an issue with bind mounts, since prune never touches a bind-mounted directory.
Docker is far cleaner than native installs once you get used to it. Yes, native installs are nice at first, but they aren’t portable, and unless the software is built specifically for the distro you’re running, you will very quickly run into dependency hell trying to set up your system to support multiple services that all want different versions of libraries. Plus, what if you want or need to move a service to another system, or restore a single service from a backup? Reinstalling a service from scratch and migrating over the libraries and config files in all of their separate locations can be a PITA.
It’s pretty much a requirement to start spinning up separate VMs for each service to get them to not interfere with each other and to allow backup and migration to other hosts, and managing 50 different VMs is much more involved and resource-intensive than managing 50 different containers on one machine.
Also, you said that native installs just need an apt update && apt upgrade, but that’s not true. For services that are in your package manager, sure, but most services do not have pre-built packages for every distro. For the vast majority, you have to git clone the source, then build from scratch and install. Updating those services is not a simple apt update && apt upgrade: you have to cd into the repo, git pull, then recompile and reinstall, and pray to god that the dependencies haven’t changed.
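For comparison, a typical from-source update ends up looking something like this (the path and build commands are placeholders; every project does it a little differently):

cd /opt/someservice          # placeholder: wherever you cloned the repo
git pull
make && sudo make install    # or whatever build system the project uses
sudo systemctl restart someservice
# ...and hope the dependency list didn’t change since the last release.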
docker compose pull/up/down is pretty much all you need; wrap it in a small shell script and you can bring up/down or update every service with a single command. Also, if you use bind mounts and place them in the service’s directory alongside the compose file, your entire service is self-contained in one directory. To back it up you just “docker compose down”, rsync the directory to the backup location, then “docker compose up”. To restore you do the exact same thing, just reverse the direction of the rsync. To move a service to a different host, you do the exact same thing, except the rsync and the docker compose up are run on the other system.
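A rough sketch of that backup flow (the paths are placeholders; reverse the rsync to restore, or run the rsync and the final up on another host to migrate):

#!/bin/sh
# Stop the stack so the data is quiescent, copy the whole directory, bring it back up.
STACK_DIR=/opt/myservice           # compose file + bind-mounted data live here
BACKUP_DIR=/mnt/backup/myservice   # backup destination

cd "$STACK_DIR" || exit 1
docker compose down
rsync -a --delete "$STACK_DIR/" "$BACKUP_DIR/"
docker compose up -d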
Docker lets you pack an entire service, with all of its dependencies, databases, config files, and data, into a single directory that can be backed up and/or moved to any other system with nothing more than a “down”, “copy”, and “up”, with zero interference with other services running on your system.
I have 158 containers running on my systems at home. With some wrapper scripts, management is trivial. The thought of trying to manage native installs on over a hundred individual VMs is frightening. The thought of trying to manage this setup with native installs on one machine, if that was even possible, is even more frightening.
Pretty much guaranteed you’ll spend an order of magnitude more time (or more) doing that than just auto-updating and fixing things on the rare occasions that they break. If you have a service that likes to throw out breaking changes on a regular basis, it might make sense to read the release notes and manually update that one, but not everything.
It’s literally one checkbox in the settings to shut those external media sources off
Either a lifetime pass, or you actually configured local access correctly instead of botching it (or ignoring it entirely) and then coming to Lemmy to complain.
Don’t stick your backups on a drive that’s plugged into the same machine as the primary copy; it defeats almost the entire purpose of having a backup.
I host my own via a Hetzner VPS and Mailcow. I use SMTP2GO as an outbound relay so I don’t have to worry about IP reputation issues. It’s all very straightforward, no issues to speak of. I use unique aliases for each account, so spam is a non-issue as well. If an alias gets leaked I just shut it down, no more spam.
As long as there’s a simple way to determine which containers use outdated images, I’m good
Yeah, you can either have it update the containers itself, or just print out their names. With a custom plugin you can make it output the names of any containers that have available updates in whatever format you like. This discussion on the GitHub page goes through some example scripts you can use to serve the list of containers with available updates over a REST API, to be pulled into any other system you like (e.g. the Homepage dashboard).
I use node_exporter (for machines/VMs) and cAdvisor (for Docker containers) + VictoriaMetrics + AlertManager/Grafana for resource usage tracking, visualization, and alerts.
For updates, I use a combination of dockcheck.sh and OliveTin with some custom wrappers to dynamically build a page with a button for every stack that includes a container with an update. Clicking the button applies the update and cycles the container. Once the container is updated, its button disappears from the page. So just loading the page will tell you how many and which containers have available updates and you can update them whenever you like from anywhere, including your phone/tablet, with one button click. I also have apt updates for VMs and hosts integrated onto this page, so I can update the host machines as well in the same way.
You seem to be missing/ignoring that sync will protect against data loss from lost/broken devices. When that happens, those connections are severed with no deletions propagating through them.
Only if you very carefully architect things to protect against it. I have absolutely seen instances where a drive had a fault and wouldn’t mount on the source, and a few hours later a poorly designed backup script saw the empty mount location on the source and deleted the entire backup. You have to be VERY CAREFUL when using a sync system as a backup. I don’t use Syncthing, but if it can be configured to do incremental backups with versioning, then you should absolutely choose that option.
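The usual guard against that failure mode is to refuse to run if the source isn’t actually mounted; a sketch with placeholder paths:

#!/bin/sh
SRC=/mnt/data             # placeholder source mount
DEST=/mnt/backup/data     # placeholder backup destination

# If the drive failed to mount, SRC is just an empty directory; bail out instead
# of mirroring that emptiness over the backup with --delete.
if ! mountpoint -q "$SRC"; then
    echo "Source not mounted, aborting backup" >&2
    exit 1
fi
rsync -a --delete "$SRC/" "$DEST/"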
You have to be joking with this. There is no way I’m letting that tracker-filled ransomware near any of my computers.
I believe he was talking about a mini PC with a single drive, not Microsoft’s “One Drive”.
> Simple mirroring doesn’t protect against bitrot. RAID 6 does.
Lots wrong with this statement. The way you protect against bitrot is with block-level checksumming, such as what you get natively with ZFS. You can get bitrot protection with a single drive that way. It can’t auto-recover, but it’ll catch the error and flag the affected file so you can replace it with a clean copy from another source at your earliest convenience. If you do want it to auto-recover, you simply need any level of redundancy. Mirror, RAIDZ1, RAIDZ2, etc. would all be able to clean the error automatically.
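For what it’s worth, catching (and, with redundancy, repairing) that on ZFS is just a scrub; “tank” here is a placeholder pool name:

zpool scrub tank        # re-reads every block and verifies its checksum
zpool status -v tank    # reports checksum errors and lists any affected files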
OPNSense is a great option for turning x86 hardware into a router. That said, I would not recommend combining your router with other functionality. The router should be a dedicated system that only does one thing. Leave your NAS and web services on another machine.
OliveTin, which gives you a clean web UI for pre-defined shell scripts, with a dynamically reloadable YAML configuration.
There are a ton of things you could use it for, but I use it for container and system updates. A pre-processor runs on a schedule and collects a list of all containers and systems on my network that have available updates, and generates the OliveTin YAML config with a button for each. Loading up the OliveTin webUI in a browser and clicking the corresponding button installs the update and cycles the container or reboots the host as needed. It makes it trivially easy to see which systems need updating at a glance, and to apply those updates from any machine on my network with a web browser, including my phone or tablet.
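To give a feel for it, the generated file is just OliveTin’s normal actions list; a trimmed-down sketch with a placeholder stack name and paths (the real buttons come from the pre-processor, not hand-written entries like this):

actions:
  - title: Update nextcloud stack   # placeholder stack name
    shell: docker compose -f /opt/nextcloud/compose.yml pull && docker compose -f /opt/nextcloud/compose.yml up -d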
I used to use 2FAS, but recently switched to a self-hosted instance of Ente