• mushroommunk@lemmy.today
    24 days ago

    I read recently in an article something that struck me as the heart of it and fits.

    “Generative AI sabotages the proof-of-work function by introducing a category of texts that take more effort to read than they did to write. This dynamic creates an imbalance that’s common to bad etiquette: It asks other people to work harder so one person can work—or think, or care—less. My friend who tutors high-school students sends weekly progress updates to their parents; one parent replied with a 3,000-word email that included section headings, bolded his son’s name each time it appeared, and otherwise bore the hallmarks of ChatGPT. It almost certainly took seconds to generate but minutes to read.” - Dan Brooks

    • Štěpán@lemmy.cafe
      24 days ago

      That’s something I’ve attempted to say more than once but never formulated this well.

      Every time I search for something tech-related, I have to spend a considerable amount of energy just trying to figure out whether I’m looking at a well-written technical document or crap resembling one. It’s especially hard when I’m very new to the topic.

      Paradoxically, AI slop has actually made me read the official documentation much more, as that’s now easier than doing this AI-checking. And also personal blogs, where it’s usually clearly visible that they are someone’s beloved little digital garden.

      • saltesc@lemmy.world
        24 days ago

        That’s something I’ve attempted to say more than once but never formulated this well.

        Did you try ChatGPT?

    • raspberriesareyummy@lemmy.worldOP
      24 days ago

      I had this “shower” thought when chatting with a friend and getting an obviously LLM-generated answer to a grammar question I had (needless to say, the LLM answer misunderstood the nuance of my question just as much as the friend had). Thank you for linking the article; I will share it with my friend to explain my strong reaction (“please never ever do that again”).

  • Blaster M@lemmy.world
    24 days ago

    Well, it’s common courtesy that if someone asks you something, you assume they’ve already asked Google or whatever, and they think you might have the answer they can’t find.

    • raspberriesareyummy@lemmy.worldOP
      24 days ago

      That, and for some questions (i.e. nuances), a personal opinion is much more relevant to the asker than some random slop explanation. In this case I wanted to know which word construct in Turkish comes closest to the English “[ so and so ] is [ whatever ], isn’t it?” vs. “[ so and so ] is not [ whatever ], is it?” – because Turkish has “isn’t it?” (değil mi? = not so?) but it doesn’t have “is it?”, mostly because “to be” is used much differently in the language.

      A Google result wouldn’t help me at all – the pure grammar answer is “there’s no form of ‘is it’ to be coupled with a negative assumption/assertion”. But does a language construct exist to convey the nuance of “the speaker assumes that something is NOT [ so and so ], and wants to ask for confirmation” vs. “the speaker assumes that something IS [ so and so ], and wants to ask for confirmation”?

      I still don’t know the answer, but it appears this nuance can’t be expressed in Turkish without talking around it in a longer sentence.

  • Sunsofold@lemmings.world
    23 days ago

    No love for LLMs from me but, flatly, no. Asking a question is soliciting a response. Their response is not the one you wanted, but it is solicited. It would be like asking for a dick pic from someone whose penis you were interested in seeing, and them responding with a generated image from one of the unfiltered image generators.
    The intellectual equivalent to an unsolicited dick pic is probably spam advertising. A piece of media is being sent to someone who did not request it, by someone who does not care if the recipient does not want to receive it.

  • owenfromcanada@lemmy.ca
    24 days ago

    I don’t quite get the equivalence there. I’d say an LLM response is more on par with responding with a link to lmgtfy.com or something.

    The intellectual equivalent of sending someone a dick pic would be a cold contact with LLM-generated text promoting or pushing something that you hadn’t otherwise shown interest in. Or like that friend from high school who messages you out of the blue, and you realize after a few messages that they’re trying to sell you their MLM garbage.

    • raspberriesareyummy@lemmy.worldOP
      24 days ago

      I don’t quite get the equivalence there.

      It’s garbage insulting your intellect and personal relationship with the sender. Whereas an unsolicited dick pic is garbage insulting your eyes and personal relationship with the sender.