Canadian software engineer living in Europe.

  • 1 Post
  • 23 Comments
Joined 3 years ago
Cake day: June 7th, 2023

  • I’ve used FluxCD in the past and have looked into ArgoCD, but honestly I’ve not seen a big benefit from either. I use k8s both at home and at work, and in both cases we do “imperative” deploys: you run helm install ... either directly or via the CI, and stuff is deployed.

    So for example at my last job, our GitLab CI just had a section triggered exclusively for merges into master that ran helm install ... for all three environments. We had three values.yaml files, one for each environment, and when we wanted to deploy a new version, the process was:

    1. Create a tag for our release version (e.g. 1.2.3) and push it to the repo. This would trigger a build and push the resulting image to the container registry.
    2. Push an update to the repo with the new tag set in the appropriate Helm values file. If we wanted to deploy 1.2.3 to development but not yet to staging or production, then the tag: value in each of the environment files would look like this:
    • k8s/chart/environments/development.yaml: tag: 1.2.3
    • k8s/chart/environments/staging.yaml: tag: 1.2.2
    • k8s/chart/environments/production.yaml: tag: 1.2.2

    Once that change is pushed, the CI automatically applies it with helm install ... and makes sure that all three environments are what they’re supposed to be.
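    As a concrete sketch of that CI step: the release name (“myapp”) and the chart path are my assumptions for illustration, not from the post. This loop prints the command it would run for each environment so it can be inspected safely; drop the echo to actually deploy. helm upgrade --install is the idempotent form of helm install, so re-running it against an unchanged environment is harmless.

```shell
#!/bin/sh
# Sketch of the CI deploy step. The release name ("myapp") and the
# chart path are assumptions for illustration, not from the post.
# Prints each helm command; remove "echo" to actually deploy.
deploy_all() {
  for env in development staging production; do
    echo helm upgrade --install "myapp-$env" ./k8s/chart \
      -f "k8s/chart/environments/$env.yaml"
  done
}

deploy_all
```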

    As for dependent services, that should all be in your Helm chart so they’re stood up and torn down together. The specific case you mention about “Service A” being dependent on “Service B” but stood up before “Service B” is ready is a classic problem, but easily solved:

    The dependent service (“A” in this case) should have an entrypoint that checks for everything else before starting. Here’s what I’m using right now in a project:

    #!/bin/sh
    # Entrypoint wrapper: block until Postgres accepts TCP connections,
    # then hand off to the container's real command.

    while ! nc -z postgres 5432; do
      echo "Waiting for postgres..."
      sleep 0.1
    done
    echo "PostgreSQL started"

    # Signal readiness, e.g. for a file-based readiness check.
    touch /tmp/ready

    exec "$@"
    

    I’ve even got some code that checks that all the Django migrations have run, for the same situation. The Kubernetes philosophy is that any container should be able to die at any time and eventually be brought back up, and every container needs to be prepared for this. Typically this means that your containers should operate on the basis of “if I can’t work, die, and hope the problem is solved by the time Kubernetes redeploys me”.
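    That migration check fits the same entrypoint pattern. A minimal sketch: the generic wait_for helper is my naming, not from the post, and it assumes Django 3.1+, where manage.py migrate --check exits non-zero while unapplied migrations remain.

```shell
#!/bin/sh
# Generic "wait until ready" helper: retries a command until it exits 0.
# ("wait_for" is an illustrative name, not from the original post.)
wait_for() {
  until "$@"; do
    echo "Waiting for: $*" >&2
    sleep 1
  done
}

# In a Django entrypoint this might look like (assumes Django 3.1+,
# where "migrate --check" exits non-zero with unapplied migrations):
# wait_for python manage.py migrate --check
# exec "$@"
```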

  • Daniel Quinn@lemmy.ca to Selfhosted@lemmy.world · KitchenOwl Gone? (edited 27 days ago)

    A platform that’s down 10% of the time and that now has a reputation of locking people out of their accounts without reason for weeks at a time cannot, under any definition of the word, be considered “stable”.

    I just… don’t get it. This whole community, we’re supposed to be building stuff for ourselves and each other, and for some reason people keep going to bat for a company that demonstrably holds every one of us in contempt.

    Just… stop using their shitty tools already.

  • If the work is a “clean room” reverse engineering job, as in: you didn’t read the original source to produce your version but rather looked at the input and output and wrote new software with the same behaviour, then this new software is not a derivative work and you can use whatever license you like.

    The easy option is public domain, which is effectively a “this belongs to everyone” thing. There’s not much of a practical difference between this and MIT, in my understanding.

    Another option would be something that preserves the freedoms you attach to the software, like the GPL or the LGPL if you’re feeling less aggressive. These licences compel would-be modifiers to share their changes with everyone else, preventing (for example) companies from building their business on top of your work and then charging you for it.

    But basically, if you wrote it without referencing the original, it’s your work and you can do as you like. If you were referencing the original source though, then that’s a derivative work and you may be in violation of the copyright holder’s rights.


  • Honestly, I’d buy 6 external 20 TB drives and make 2 copies of your data (3 drives each), then leave them somewhere safe but not at home. If you have friends or family able to store them, that’d do, but a safety deposit box works too.

    If you want to make frequent updates to your backups, you could attach them to a Raspberry Pi and put it on Tailscale, then just rsync changes regularly. Of course, this means that wherever you’re storing the backup needs room for such a setup.

    I often wonder why there isn’t a sort of collective backup sharing thing going on amongst self hosters. A sort of “I’ll host your backups if you host mine” sort of thing. Better than paying a cloud provider at any rate.