Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

  • Kurious84@lemmings.world · +2 · 40 minutes ago

    Anything you want dedicated performance for, or that requires fine-tuning for a specific performance use case. They're out there.

  • missfrizzle@discuss.tchncs.de · +3 · 1 hour ago

    pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.

    and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.

    until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.

    /uj not really but that’d be sick as hell.

  • Lka1988@lemmy.dbzer0.com · +2 · 2 hours ago

    I run my NAS and Home Assistant on bare metal.

    • NAS: OMV on a Mac mini with a separate drive case
    • Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB zigbee adapter and 2) HAOS on bare metal is more flexible

    Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it’s Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.

  • Surp@lemmy.world · +1 · 2 hours ago

    What are you doing running your VMs on bare metal? Time is a flat circle.

    • missfrizzle@discuss.tchncs.de · +1 · 1 hour ago

      for work I have a cloud dev VM, in which I run WSL2. so there are at least two levels of VMs happening, maybe three honestly.

  • sepi@piefed.social · +17/−1 · 5 hours ago

    “What is stopping you from” <- this is a loaded question.

    We’ve been hosting stuff long before docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.

    I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.

    tl;dr docker is not an absolute necessity and your phrasing makes it seem like it’s the only way of self-hosting you are comfy with. People are and have been comfy with a ton of other things for a long time.

    • kiol@lemmy.world (OP) · +5/−1 · 5 hours ago

      Question is totally on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!

  • misterbngo@awful.systems · +1 · 2 hours ago

    Your phrasing of the question implies a poor understanding. There’s nothing preventing you from running containers on bare metal.

    My colo setup is a mix of classical and podman systemd units running on bare metal, combined with a little nginx for the domain and tls termination.

    I think you’re actually asking why folks would use bare metal instead of cloud, and here’s the truth: you’re paying for resiliency even if you don’t need it, which makes renting cloud infrastructure incredibly expensive. Most people can probably get away with a $10 VPS, but the AWS meme of needing 5 app servers, an RDS and a load balancer to run WordPress has rotted people. The server I paid a few grand for on eBay would cost me about as much monthly to rent from AWS. I’ve stuffed it full of flash with enough redundancy to lose half of it before going into the colo for replacement. I paid a bit upfront, but I’m set on capacity for another half decade plus; my costs are otherwise fixed.
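    A setup like that can be sketched with a Podman Quadlet unit, which lets systemd manage the container directly (the image, port, and file name here are hypothetical, not the poster's actual config):

```ini
# /etc/containers/systemd/webapp.container — hypothetical Quadlet unit
# (requires Podman 4.4+; systemd generates a webapp.service from this file)
[Unit]
Description=Example web app container

[Container]
Image=docker.io/library/nginx:stable
# Bind only to localhost; the host nginx terminates TLS and proxies here
PublishPort=127.0.0.1:8080:80

[Install]
WantedBy=multi-user.target
```

    After a `systemctl daemon-reload`, the container starts and stops like any other systemd unit.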

  • nucleative@lemmy.world · +12 · 6 hours ago

    I’ve been self-hosting since the '90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.

    But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.

    I love it because I can deploy instantly, oftentimes with a single command. Docker Compose allows for quickly nuking and rebuilding, and often saves your entire config to one or two files.

    And if you need to slap a traefik, a postgres, or some other service into your group of containers, it can now be done in seconds, completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.
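    That workflow can be sketched with a minimal Compose file (the image tags and names here are assumptions for illustration, not a specific recommendation):

```yaml
# compose.yaml — hypothetical stack: one app plus a bolted-on postgres
services:
  app:
    image: myapp:latest          # placeholder for your application image
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example # demo value only; use secrets in practice
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

    `docker compose up -d` deploys the whole stack, `docker compose down` nukes it, and moving to another VPS amounts to copying this file plus the volume data.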

    • roofuskit@lemmy.world · +5 · 5 hours ago

      Hey, you made my post for me, though I’ve been using Docker for a few years now. Never looking back.

  • atzanteol@sh.itjust.works · +59/−3 · 9 hours ago

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list, FFS. They’re just running in different cgroups that limit access to resources.

    Yes, I’ll die on this hill.
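    The claim is easy to check on any Linux host: every process reports the cgroup it runs in, and a container's main process is an ordinary host PID (a sketch; the container name "web" is hypothetical):

```shell
# Every Linux process belongs to a cgroup; containers just get their own.
cat /proc/self/cgroup    # the cgroup of this very shell

# For a running container (hypothetical name "web"), the main PID is a
# normal host PID, inspectable with the exact same mechanism:
#   pid=$(docker inspect -f '{{.State.Pid}}' web)
#   cat /proc/$pid/cgroup
```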

    • sylver_dragon@lemmy.world · +15 · 8 hours ago

      But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.
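      That namespace isolation is visible directly in /proc: each process holds a reference to every namespace it lives in (a sketch, assuming a Linux host):

```shell
# Each namespace a process belongs to appears as a symlink with an inode id.
ls -l /proc/self/ns/
# Two processes share a namespace iff these inode ids match; a containerized
# process shows different ids, e.g.: ls -l /proc/<container-pid>/ns/
readlink /proc/self/ns/mnt
```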

      • atzanteol@sh.itjust.works · +1 · 5 hours ago

        Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.

      • sugar_in_your_tea@sh.itjust.works · +1 · 6 hours ago

        kubernetes

        Kubernetes isn’t just resource isolation, it encourages splitting services across hardware in a cluster. So you’ll get more latency than VMs, but you get to scale the hardware much more easily.

        Those terms do mean something, but they’re a lot simpler than execs claim they are.

  • layzerjeyt@lemmy.dbzer0.com · +7/−1 · 6 hours ago

    Every time I have tried it, it just introduces a layer of complexity I can’t tolerate. I have struggled to learn everything required to run a simple Debian server. I don’t care what anyone says, docker is not simpler or easier. Maybe it is when everything runs perfectly, but it never does, so you have to consider the eventual difficulty of troubleshooting. And that would be made all the more cumbersome if I do not yet understand the fundamentals of a Linux system.

    However I do keep a list of packages I want to use that are docker-only. So if one day I feel up to it I’ll be ready to go.

      • layzerjeyt@lemmy.dbzer0.com · +1 · 3 hours ago

        I don’t know. Both? Probably? I tried a couple of things here and there. It was plain that bringing in docker would add a layer of obfuscation to my system that I am not equipped to deal with. So I rinsed it from my mind.

        If you think it’s likely that I followed some “how to get started with docker” tutorial that had completely wrong information in it, that just demonstrates the point I am making.

  • oortjunk@sh.itjust.works · +3 · 5 hours ago

    I generally abstract to docker anything I don’t want to bother with and just have it work.

    If I’m working on something that requires lots of back and forth syncing between host and container, I’ll run that on bare metal and have it talk to things in docker.

    E.g.: working on an app or a website in the language and framework of choice, while Postgres and Redis live in Docker. Only the app I’m messing with and its direct dependencies run outside.
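    That split can be sketched with a Compose file covering just the backing services (image tags and ports are assumptions); the app on the host then connects to localhost:

```yaml
# compose.yaml — only the backing services are containerized
services:
  postgres:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # host app connects to localhost:5432
    environment:
      POSTGRES_PASSWORD: dev    # demo value only
  redis:
    image: redis:7
    ports:
      - "127.0.0.1:6379:6379"   # host app connects to localhost:6379
```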

        • kiol@lemmy.world (OP) · +2 · 4 hours ago

          I see. There is no disrespect intended; it is a discussion-thread starter. My question about this: what would be a better phrasing for the subject of this post? Either way, the discussion seems to be going great. Cheers all. It isn’t a discussion of what is better; it is general curiosity about people running bare metal, because the topic seems to receive zero discussion. I am glad to see such people responding, positive or negative.

          • WhyJiffie@sh.itjust.works · +2 · 3 hours ago

            I think people are assuming you want to convert people to The Church of Docker, in their minds, if you know what I mean. I do not see it that way, but “what is stopping you from using virtualization” has such a tone as if everyone is supposed to virtualize, but something prevents them and they can’t.

            I think a better way to ask it would be “what are your reasons for sticking with bare metal?” or something like that.

            sidenote: it seems some people here have had quite bad experiences with docker. It has parts I don’t like either, but I’ve never had so many problems with it, and I’m hosting a dozen web services locally. Maybe their experience was from the early days?

  • brucethemoose@lemmy.world · +3 · edited · 5 hours ago

    In my case it’s performance and sheer RAM need.

    GLM 4.5 needs like 112GB RAM and absolutely every megabyte of VRAM from the GPU, at least without the quantization getting too compressed to use. I’m already swapping a tiny bit and simply cannot afford the overhead.

    I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.

    • kiol@lemmy.world (OP) · +1 · 5 hours ago

      Can anyone confirm whether containers would actually impact CPU-to-GPU transfers?

      • brucethemoose@lemmy.world · +1 · edited · 5 hours ago

        To be clear, VMs absolutely have overhead but Docker/Podman is the question. It might be negligible.

        And this is a particularly weird scenario (since prompt processing literally has to shuffle ~112GB over the PCIe bus for each batch). Most GPGPU apps aren’t so sensitive to transfer speed/latency.

  • 7rokhym@lemmy.ca · +1/−1 · 3 hours ago

    Not knowing about Incus (LXD). It’s a life changer. Would never run any service on bare metal again.

    Using GenAI to develop my Terraform and Ansible playbooks is magical. I also use it to document everything in beautiful HTML docs from the outputs. Amazing.

  • mesa@piefed.social · +7/−1 · edited · 6 hours ago

    All my services run on bare metal because it’s easy. And the backups work. It heavily simplifies the work, and I don’t have to worry about things like a virtual router, or using more CPU just to keep the container…contained and running. Plus a VERY tiny system can run:

    1. Peertube
    2. GoToSocial + client
    3. RSS
    4. search engine
    5. A number of custom sites
    6. backups
    7. Matrix server/client
    8. and a whole lot more

    Without a single docker container. It’s using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It’s been 4 years-ish and it has been working great. I used to over-complicate everything with docker + docker compose, but I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it’s not something I care about on my weekends.
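    The dd approach can be sketched as follows (the device and backup paths are assumptions; the source disk should be idle or unmounted while imaging):

```shell
# Whole-disk image backup sketch; /dev/sda and /mnt/backup are hypothetical:
#   dd if=/dev/sda of=/mnt/backup/disk.img bs=4M status=progress conv=fsync
# The same mechanics, demonstrated safely on a small throwaway file:
dd if=/dev/zero of=/tmp/demo.img bs=1M count=4 conv=fsync
ls -l /tmp/demo.img
```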

    I use docker, kub, etc…etc… all at work. And it’s great when you have the resources + coworkers to keep things up to date. But I just want to relax when I get home. And it’s not the end of the world if any of them go down.

      • mesa@piefed.social · +1 · 2 hours ago

        FreshRSS. Sips resources.

        The dd I do whenever I want; I have a script I tested a while back. The machine won’t be on then, yeah. It’s just a small image with the software.

    • Auli@lemmy.ca · +2/−1 · 6 hours ago

      Oh, so the other 80% of your RAM can sit there and do nothing? My RAM is always around 80% or so, as it’s caching stuff like it’s supposed to.

          • mesa@piefed.social · +1 · 2 hours ago

            Welp, OP did ask how we set it up. And for a family instance it’s good enough. The RAM was extra that came with the comp. I have other things to do than optimize my family home server. There’s no latency at all already.

            It spikes when peertube videos are uploaded and transcoded + matrix sometimes. Have a good night!

      • mesa@piefed.social · +2 · 6 hours ago

        Couple of custom bash scripts for the backups. I’ve used Ansible at work. It’s awesome, but my own stuff doesn’t require any robustness.

    • kiol@lemmy.world (OP) · +3/−1 · 5 hours ago

      Say more, what did that experience teach you? And, what would you do instead?