• Zozano@aussie.zone · 7 days ago

    LLMs aren’t AI, let alone AGI.

    They’re fucking prediction engines with extra functions.

    • Onihikage@piefed.social · 7 days ago

      The best description I’ve ever heard of LLMs is “a blurry jpeg of the internet”. From the perspective of data compression and retrieval, they’re impressive… but they’re still a blurry jpeg. The image doesn’t change, you can only zoom in on different parts of it and apply extra filters, and there’s nothing you can truly do about the compression artifacts (what we call “hallucinations”). It can’t think, it can’t learn, it just is, and that’s all it will ever be.

    • unnamed1@feddit.org · 7 days ago

      So are we. Your definition of AI also seems off: it’s a field of computer science dealing with seemingly cognitive algorithms, basically everything that isn’t rule-based programming. I’ve worked in AI production for over ten years. It is absolutely valid, even necessary, to hate AI, but not to deny its technical functionality. As for the other reply to your comment: of course training a neural network is a form of learning, whether by reinforcement or by training data. ML had many applications for years before LLMs; it makes no sense to deny that it exists.

    • MojoMcJojo@lemmy.world · 7 days ago

      It’s an industrial-sized prediction engine. And when you apply that to bioscience, it predicts things that save lives.

  • IchNichtenLichten@lemmy.wtf · 6 days ago

    If I were an NVDA investor, I’d be worried. This clown is doing nothing but gaslighting and lying these days.

  • CeeBee_Eh@lemmy.world · 6 days ago

    This guy has completely lost the plot. I don’t think it’s possible to be even more disconnected from reality.

      • Modern_medicine_isnt@lemmy.world · 4 days ago

        Even the AI doesn’t say as many bullshit things as he does. Though I guess if you gave it the instruction “say anything that might make the NVIDIA stock price go up,” an AI might say the bullshit he does.

  • entropiclyclaude@lemmy.wtf · 6 days ago

    These fuckers will claim whatever nonsense to keep themselves relevant enough to take on more debt before they collapse.

    • Rekorse@sh.itjust.works · 5 days ago

      They are going to create a success story where someone becomes a billionaire with an AI doing everything. Then idiots will chase that dream for a hundred years and fill these rich fucks’ bank accounts.

    • awake@lemmy.wtf · 6 days ago

      Looking at their history, they’ve always been able to create markets for their GPUs, and AI has obviously been incredible for them. There will be a next hot thing after AI, and they’ll try to own that, too. The alternatives to CUDA aren’t there yet; ROCm is still lacking and fiddly. I see a lot of things happening, but NVIDIA collapsing for whatever reason is not one of them.

    • fierysparrow89@lemmy.world · 6 days ago

      I agree, they’re starting to sound desperate to keep their current momentum going. I think the bubble will burst soon. Things look solid until they’re not.

    • VindictiveJudge@lemmy.world · 6 days ago

      Fun fact: if true AGI were a thing, those AI programs would be people and not paying them for their work would be slavery.

      • CheeseNoodle@lemmy.world · 5 days ago

        This is honestly one of the scarier parts of the rhetoric: they’re basically implying they would happily enslave a sentient being.

        • a_gee_dizzle@lemmy.ca · 5 days ago

          In theory, you could imagine a totally unconscious intelligence that makes intelligent decisions but has no conscious experiences, i.e. is not sentient. Of course, I don’t know whether such a system is actually possible. But it is at least conceptually possible to separate the two ideas (consciousness/sentience vs. intelligence).

  • baller_w@lemmy.zip · 5 days ago

    Worth a read if anyone is interested: https://www.newyorker.com/magazine/2026/02/16/what-is-claude-anthropic-doesnt-know-either

    My favorite part is that Anthropic has a bot in the cafeteria that orders whatever staff request, and if its bank balance goes to zero or negative, it loses and has to close up shop.

    Thus far, nearly all employees have a 1” tungsten cube on their desk, which some managed to get for free with a fake 100%-off coupon.

    It’s a fun experiment in what happens when these agents start doing things in the real world, and I commend Anthropic for putting it on display. A real hype-train killer.

    As a technologist, I work with them all day, every day. I wouldn’t trust them to do my laundry without oversight, let alone run a business.