I spent the better part of three days pulling my hair out over a script that just wouldn’t cooperate. Logs, testing, asking DeepSeek: nothing worked.

I made a post here yesterday asking about agentic LLMs, and someone mentioned opencode.

I ran it from the code’s directory, asked it to find the bug, and within a minute it pointed out a stupid error I don’t think I would ever have found on my own. A silly little mistake, fixed in moments.

If a free model caught that instantly, it really puts things into perspective. Anthropic recently found 22 vulnerabilities in Firefox using their largest models. That’s not just fixing syntax; that’s hardening a massive browser against exploits.

I’m excited because the barrier to shipping stable code just dropped through the floor. But I’m also scared. Not of the tech itself, but of what happens when capitalists decide to fully automate labor. The game is changing fast.

The open‑source community is great at building tools. We need to get equally good at talking about who those tools really serve—and how we make sure they empower workers, not just replace them.

  • Sims@lemmy.ml · 8 hours ago

    Every early component of the first LLMs has exploded and become the target of immense resources and development. This tree of techniques will keep splitting and growing, and both the ‘slop’/quality complaints and a swath of jobs will disappear quicker than new ones develop.

    Commoners should indeed create an independent, open, and safe computing environment, and even basic societal functions: open communication, storage, development, food, and housing where possible. Everything else becomes possible now that this AI tool can work for us too. If we can think it, we can build it.

    But I think we should aim higher than “support workers” as the highest goal this time. Let’s go for “no workers needed” and build a society around that principle. We can easily design and maintain a societal environment that optimizes for our common well-being and prosperity, but we need the current archaic and combative win/lose architecture to crash and burn, so another can emerge.

    We have a short window to prepare/discover new institutions and processes for an open/safe society without Capitalist parasites in power…

  • mattreb@feddit.it · 13 hours ago

    But I’m also scared. Not of the tech itself, but of what happens when capitalists decide to fully automate labor.

    Don’t worry, that’s not gonna happen. A colleague of mine uses LLMs regularly for code reviews, but you have to put in the work to understand whether what they say makes sense and to filter out the false positives, which are not occasional.

  • Shimitar@downonthestreet.eu · 12 hours ago

    While it is a good use case, it also requires human verification. I run my code (C++) through clang analyzers and sometimes through open LLM models as well. Clang is pretty good; the LLMs hallucinate even on that.
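    To illustrate (this is a made-up example, not the actual bug from the original post): the classic one-character slip below is exactly the kind of thing clang-tidy’s `bugprone-assignment-in-if-condition` check flags instantly, while a tired human can read past it for days.

    ```cpp
    #include <iostream>

    int main() {
        int retries = 0;
        // The buggy version read:
        //   if (retries = 5) { ... }  // assignment: always "true", and clobbers retries
        // clang-tidy (bugprone-assignment-in-if-condition) flags that line immediately.
        if (retries == 5) {  // corrected: comparison, not assignment
            std::cout << "giving up\n";
        }
        std::cout << retries << '\n';  // prints 0: the branch was not taken
        return 0;
    }
    ```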

    Code review is a much better use case for LLMs than writing code, but it still requires a grain of salt.