• tangeli@piefed.social
    25 days ago

    Even Scott Shambaugh writes as if there were no humans responsible:

    Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library.

    As far as I know, every ‘AI’ agent runs because one or more humans chose to commit resources to install, configure and run it, and those humans are therefore responsible for what it does. Until ‘AI’ agents and the systems they run on spontaneously emerge and evolve from inert matter without human intervention, that will remain the case. No agent does anything autonomously.

    People should be held accountable for what their ‘AI’ agents do.

    • thesmokingman@programming.dev
      25 days ago

      I think there’s still reasonable debate over whether or not a human actively triggered the agent to generate the hit piece.

      • tangeli@piefed.social
        25 days ago

        The causes might be more or less proximate to the generation, but the only ‘reasonable’ arguments I can think of are that the humans involved were mentally incompetent to know what they were doing, or so negligent that they were unaware they were installing, configuring and providing power and Internet access to the agent. And if they installed some device that was reasonably intended for some other purpose, unaware that its manufacturer had configured it to generate hit pieces, then the manufacturer should be held responsible for it doing so.

  • U7826391786239@lemmy.zip
    26 days ago

    “We are reinforcing our editorial standards following this incident.”

    how? where did the failure happen, and what specific steps are they taking to stop that? what are the consequences for the human being(s) responsible for the AI slop?

    when you wreck trust, it’s your problem to re-earn that trust. a one-sentence subtitle promising “i’ll never do it again” doesn’t fix this or increase trust, it makes the shit even worse