• mushroommunk@lemmy.today
    3 months ago

    I recently read something in an article that struck me as the heart of it and fits.

    “Generative AI sabotages the proof-of-work function by introducing a category of texts that take more effort to read than they did to write. This dynamic creates an imbalance that’s common to bad etiquette: It asks other people to work harder so one person can work—or think, or care—less. My friend who tutors high-school students sends weekly progress updates to their parents; one parent replied with a 3,000-word email that included section headings, bolded his son’s name each time it appeared, and otherwise bore the hallmarks of ChatGPT. It almost certainly took seconds to generate but minutes to read.” - Dan Brooks

    • Štěpán@lemmy.cafe
      3 months ago

      That’s something I’ve attempted to say more than once but never formulated this well.

      Every time I search for something tech-related, I have to spend a considerable amount of energy just trying to figure out whether I’m looking at a well-written technical document or crap resembling it. It’s especially hard when I’m very new to the topic.

      Paradoxically, AI slop has made me read the official documentation much more, since that’s now easier than doing this AI-checking. The same goes for personal blogs, where it’s usually clearly visible that they are someone’s beloved little digital garden.

      • saltesc@lemmy.world
        3 months ago

        That’s something I’ve attempted to say more than once but never formulated this well.

        Did you try ChatGPT?

    • fizzle@quokk.au
      3 months ago

      The most annoying part: the recipient’s email client probably offered to summarise it with an LLM. My bot makes slop for your bot to interpret.

      It’s the most inefficient form of communication ever devised. Please decompress my prompt 1000x so the recipient can compress it back to my prompt.

      I will say, though, even a ChatGPT email tells you a lot about the sender.

    • raspberriesareyummy@lemmy.worldOP
      3 months ago

      I had this “shower” thought when chatting with a friend and getting an obviously LLM-generated answer to a grammar question I had (needless to say, the LLM answer misunderstood the nuance of my question just as much as the friend did before). Thank you for linking the article; I will share it with my friend to explain my strong reaction (“please never ever do that again”).

    • jjpamsterdam@feddit.org
      3 months ago

      Thank you for this great answer! It’s something I intuitively felt but couldn’t put my finger on with the same surgical precision you just did.

  • morto@piefed.social
    3 months ago

    Somehow, people don’t get that if we ask them something, it’s because we want their personal interpretation of it; otherwise, we would just use the internet ourselves.

    • raspberriesareyummy@lemmy.worldOP
      3 months ago

      Specifically this. In terms of learning a language, understanding some nuances absolutely requires an explanation from a native speaker who has a really good grasp of their language AND a talent for explaining. Both criteria are diametrically opposed to the average slop training data.

  • Blaster M@lemmy.world
    3 months ago

    Well, it’s common courtesy that if someone asks you something, you assume they already asked Google or whatever and think you might have the answer they can’t find.

    • raspberriesareyummy@lemmy.worldOP
      3 months ago

      That, and for some questions (i.e. nuances), a personal opinion is much more relevant to the asker than some random slop explanation. In this case I wanted to know which word construct in Turkish comes closest to the English “[ so and so ] is [ whatever ], isn’t it?” vs. “[ so and so ] is not [ whatever ], is it?”, because Turkish has “isn’t it?” (değil mi? = not so?) but it doesn’t have “is it?”, mostly because “to be” works much differently in the language.

      A Google result wouldn’t help me at all; the pure grammar answer is “there’s no form of ‘is it’ to be coupled with a negative assumption/assertion”. But does a language construct exist to convey the nuance of “the speaker assumes that something is NOT [so-and-so] and wants to ask for confirmation” vs. “the speaker assumes that something IS [so-and-so] and asks for confirmation”?

      I still don’t know the answer, but it appears this nuance can’t be expressed in Turkish without talking around it in a longer sentence.

  • owenfromcanada@lemmy.ca
    3 months ago

    I don’t quite get the equivalence there. I’d say an LLM response is more on par with responding with a link to lmgtfy.com or something.

    The intellectual equivalent of sending someone a dick pic would be a cold contact with LLM-generated text promoting or pushing something you didn’t otherwise show interest in. Or like that friend from high school who messages you out of the blue, and you realize after a few messages that they’re trying to sell you their MLM garbage.

    • raspberriesareyummy@lemmy.worldOP
      3 months ago

      I don’t quite get the equivalence there.

      It’s garbage insulting your intellect and personal relationship with the sender. Whereas an unsolicited dick pic is garbage insulting your eyes and personal relationship with the sender.

  • Sunsofold@lemmings.world
    3 months ago

    No love for LLMs from me but, flatly, no. Asking a question is soliciting a response. Their response is not the one you wanted, but it is solicited. It would be like asking for a dick pic from someone whose penis you were interested in seeing, and them responding with a generated image from one of the unfiltered image generators.
    The intellectual equivalent of an unsolicited dick pic is probably spam advertising: a piece of media sent to someone who did not request it, by someone who does not care whether the recipient wants to receive it.