• Peasley@lemmy.world · ↑1 · 5 days ago

    I have been a sponsor on Patreon almost since the account was opened (maybe 4 months in). It’s my longest-running Patreon sponsorship.

    I’ve gone ahead and cancelled. Many thanks to the developers, sorry it had to end like this.

  • abcdqfr@lemmy.world · ↑2 · 6 days ago

    Sincerely… if you give a single shit about AI in code, you should be able to tell it was used. If you can’t differentiate human-authored from AI-authored code, you don’t have a seat at the table. Jeer from the soap boxes. Code is not art; code is code. Get over it. Does it compile, run, and do the thing? Cool. Who the fuck cares who or what wrote it. Y’all are clutching pearls you can’t even define.

  • CoyoteFacts@piefed.ca · ↑289 ↓9 · 10 days ago

    Whether or not I use Claude is not going to change society

    This gives me shopping cart theory vibes. I don’t usually base my moral compass on whether my action will have some kind of measurable impact, but on whether I believe it’s the right thing to do. After the intense doubling down in that discussion thread, I’m definitely steering clear of Lutris. It costs me very little effort to avoid projects that do icky things I don’t want to encourage (even though it may not have a measurable impact).

    • rtxn@lemmy.world · ↑149 ↓1 · 10 days ago

      I can’t fix the problem, therefore I’ll be part of the problem.

      • Korhaka@sopuli.xyz · ↑28 ↓2 · 9 days ago

        At my job we have been told that we have to start using AI more. I can’t really see the point. The only tasks AI can help me with are pointless tasks from HR that shouldn’t exist in the first place. Monthly forms with questions like “how are you feeling emotionally?” used to take me ages to answer in corpo-bullshit-friendly terms, but locally hosted deepseek does it in seconds.
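For what it’s worth, driving a locally hosted model like that is usually just one HTTP call. The sketch below is illustrative only: it assumes an Ollama-style server on localhost:11434 and a `deepseek-r1` model tag, both of which are assumptions, not details from the comment.

```python
import json
import urllib.request

def build_payload(question: str, model: str = "deepseek-r1") -> dict:
    """Build a non-streaming generation request for an Ollama-style API.
    The model tag is an assumption; use whatever your local server lists."""
    return {
        "model": model,
        "prompt": ("Answer this HR survey question in polite, "
                   f"corporate-friendly language: {question}"),
        "stream": False,
    }

def ask_local_model(question: str, host: str = "http://localhost:11434") -> str:
    """POST the request to the local server and return the model's text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The point of doing it locally is that nothing in the form answer ever leaves your machine.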

        • toynbee@piefed.social · ↑21 · edited · 9 days ago

          When my work enabled Gemini, I asked it how to disable it. It said it couldn’t help me and asked if I had another question. I didn’t.

          That’s the only interaction I’ve willingly had with it.

        • Kanda@reddthat.com · ↑3 · 8 days ago

          The HR department will see that it’s not quality human-made HR slop, and the thought police will be with you shortly.

        • Pika@rekabu.ru · ↑2 ↓2 · 9 days ago

          In my experience, AI models are fairly good at contextual search. That’s the only thing I use them for.

          • Korhaka@sopuli.xyz · ↑4 · 9 days ago

            Yes, if we had documentation, then I suspect AI tools could be good at finding information in it.

    • Joelk111@lemmy.world · ↑37 · 10 days ago

      Lutris has always been a bit hit-or-miss for me; I avoided it unless it was the only option, as it only worked half the time. I don’t mean to suggest it shouldn’t exist, since anything that makes Linux easier to use is great, but I don’t use it at all in my current workflows.

      • CoyoteFacts@piefed.ca · ↑5 · 10 days ago

        I guess I’ve just been behind the times, but I’ve never had an incentive to switch. I just installed Faugus and transferred everything over, and it seems very slick. It seems to be missing one or two things, like per-game environment variables, but all the other important stuff seems to be there. I know what I’m doing with prefixes, so having all the knobs to turn is great, but honestly Linux gaming does not need most of those knobs nowadays.

    • blackbrook@mander.xyz · ↑31 ↓1 · 10 days ago

      Also, it is one thing to decide that something is not an ethical issue of concern; it is another thing to act with disrespect toward everyone with a different opinion.

      • FauxLiving@lemmy.world · ↑10 ↓15 · 10 days ago

        it is another thing to act with disrespect to everyone with a different opinion.

        Unless that opinion is ‘I like using AI’, then they deserved the disrespect.

      • MolochAlter@lemmy.world · ↑8 · 8 days ago

        Utilitarianism really falls at the first hurdle of any kind of evaluation of a moral system.

        It has no real prescriptive power because it demands that you correctly foresee the outcomes of your actions, something addressed by “The road to hell is paved with good intentions”, an adage at least 400 years old; and yet people still gravitate towards it as if society had not been explicitly cautioning us about that mindset forever.

        At this point I can’t help but look down on those who genuinely identify as utilitarian as either too young, too stupid, or actively malevolent and trying to find a way to justify their bad behaviours as errors rather than malice or negligence.

        • ns1@feddit.uk · ↑2 · 8 days ago

          I’d offer you a counterpoint (ignoring the issue with Lutris and AI for a minute):

          If you choose not to judge your own actions by the expected consequences of those actions for everyone involved, then how exactly are you supposed to judge them? If you’re following some rule that disagrees with the utilitarian view, then by definition it’s a rule that in your own opinion leads to a worse outcome for everyone.

          It’s of course completely fine to not be utilitarian, but trying to claim that all utilitarians are either stupid or evil is just incorrect.

          • MolochAlter@lemmy.world · ↑2 · edited · 7 days ago

            ignoring the issue with Lutris and AI for a minute

            Please by all means, I ignored it in the first place, I find this way more interesting.

            If you choose not to judge your own actions by the expected consequences of those actions for everyone involved, then how exactly are you supposed to judge them?

            Well, this is only half the problem. It’s a bad system because it demands the impossible of you (i.e. accurately predict the future) but it also has a really narrow interest in the dimensions of human morality.

            To directly answer the question however: you judge them by a set of principles, whichever you deem right, that you apply consistently across choices.

            When it comes to inter-personal choices, the vast majority of all questions can easily be answered by asking yourself “am I betraying some explicit or implicit bond of trust with someone (who has not done so themselves) by doing/saying this?” and if you are, you just stop.

            And to be clear, I don’t claim to follow this principle 100% of the time, I am not a saint, but that to me is the guiding principle when there are stakes to my behaviour, and it has not failed me yet.

            If you’re following some rule that disagrees with the utilitarian view, then by definition it’s a rule that in your own opinion leads to a worse outcome for everyone.

            (Emphasis added)

            At its core, the idea of utilitarian morality is to “maximise utility”, that is to do whatever does the most “good” to the highest number of people.

            This is, IMO, a terrible metric, and as a deontologist I am perfectly happy reaching a “worse” outcome by it.

            It is not particularly hard to see how, by applying this metric, you can justify any kind of scapegoating, abuse, and/or undue leniency on people that would deserve harsh punishment in any deontological or virtue based system, as soon as enough “good” is produced through it.

            There is a very dark, but apt, joke about this kind of approach to morality: that 9/10 people involved in it endorse gang rape.

            To me, morality is a qualitative assessment, not a quantitative one.

            It does not matter how many perpetrator lives will be ruined if they have earned their punishment, and it does not matter how much happier they would be to get away with the crime than the victim would suffer, comparatively.

            To do anything else would be to relinquish morality to the whims of the masses, because it implies that there is a threshold past which the abuse of the few becomes negligible due to the benefits it brings to the many.

            trying to claim that all utilitarians are either stupid or evil is just incorrect.

            To be fair I also stated they can be naïve; I was one too in my youth, until I learned and understood better.

  • db2@lemmy.world · ↑206 ↓20 · 10 days ago

    I’m now assuming it all is and deleting Lutris.

    What a moron.

    • bdonvr@thelemmy.club (OP) · ↑154 ↓1 · 10 days ago

      Oh yeah. Here’s another nugget:

      Sometimes, I generate some code with Claude and commit by hand

      Sometimes, I write code manually and ask Claude to commit

      Sometimes, I ask OpenClaw to generate some code, which doesn’t put the Co-Authorship

      Sometimes, the whole thing is AI generated from end to end

      This is also a somewhat recent addition to Claude Code. I was kinda surprised when I first noticed it but didn’t think much of it, I was like “meh, I guess we’re doing that now, whatever, some people might take issue with it, whatever”. Also, do keep in mind that I love trolling people coming in my projects to complain about my methods.

      For those who are anti-AI, it’s a safe assumption that any addition to the project has had some kind of AI interaction during the development process.

      https://github.com/lutris/lutris/discussions/6530#discussioncomment-16088355

      • mlfh@lemmy.sdf.org · ↑177 ↓4 · 10 days ago

        Sometimes, I ask OpenClaw to…

        This person should not be trusted with anything.

        • mavu@discuss.tchncs.de · ↑54 ↓2 · 10 days ago

          That is the real shame in all this. I’m certainly not updating lutris any more, because there is no way of knowing what you will install on your system.

          You can trust humans (as in: trusting is an option). You can never trust an LLM. And admitting that there might be unsupervised commits being installed on possibly thousands of PCs is terrifying.

          • entropicdrift@lemmy.sdf.org · ↑29 ↓1 · edited · 10 days ago

            Glad I use Heroic instead. Time to check what their AI policy is.

            Based on some PRs, they’re using github copilot to help with reviews but are generally against vibe coding

        • zr0@lemmy.dbzer0.com · ↑10 ↓2 · 10 days ago

          💯 this. I don’t mind using an LLM for certain tasks. We all do at the end of the day. However, OpenClaw is a different topic. This is just dangerous.

  • mesa@piefed.social · ↑183 ↓6 · edited · 10 days ago

    They are free to do what they want to on their repo.

    We are free to fork if need arises.

    Personally I don’t like projects not showing what AI has made. And most of Claude was made on stolen code. It’s against the open-source license they themselves use: https://github.com/lutris/lutris/blob/master/LICENSE

    But almost no one actually enforces the license until the big companies show up. I hope they change their minds, but until then, I’m going to stop using/contributing for a while.

    • Schadrach@lemmy.sdf.org · ↑1 · 5 days ago

      We are free to fork if need arises.

      …and how do you ensure your fork does not contain a single commit involving even a single line written by Claude? If you can’t, then isn’t your fork slop by default?

      And most of Claude was made on stolen code.

      Sure, it learned to code by reading lots of code, most of it publicly available online for anyone to read and learn from, but not explicitly licensed for a machine to read and learn from. I doubt it’s possible to teach an ML system (or, for that matter, a human being) to code without lots of example code. And any code you’ve ever read has an impact on the code you write afterward (same as any other creative endeavor); that’s why clean-room design exists as a defense against copyright infringement.

    • db2@lemmy.world · ↑50 ↓7 · 10 days ago

      Does anyone know which was the last version before the dev started shoveling slop into the repo? The utter dipshit invalidated even the ability to license after that point; those releases are wholly worthless.

      • e8CArkcAuLE@piefed.social · ↑12 ↓1 · 10 days ago

        Five years from now there are going to be two totally coevolved but distinct seed-lines for software: the ones with AI, and the ones without. How can you distinguish them? Did the humans who said they wrote them really write them? These problems aside, I suspect it will be forced to happen just from a security viewpoint; big companies won’t be able to get any kind of insurance anymore running AI-infested code.

    • nialv7@lemmy.world · ↑9 ↓25 · edited · 10 days ago

      It’s more nuanced than that. Claude is made from stolen code, but it generally isn’t going to copy its training data verbatim (unless specifically told to), so copyright-wise it’s more grey than strictly wrong. And though Claude is made from stolen code, the Lutris developers are writing something they give freely to the world; they are not profiting from the stolen code.

      Does this make it OK? I don’t know. What if they used an open-weights model rather than a closed one? Would that be more acceptable?

      • Miaou@jlai.lu · ↑17 ↓1 · 9 days ago

        No, open weights change nothing; the stolen training material is still the problem. Especially for a GPL project, a licence normally used to scare off corporate vultures. Why should anyone respect Lutris’s licence when they gave up on the authorship of their own product?

  • SavvyWolf@pawb.social · ↑129 ↓6 · 10 days ago

    “This works perfectly, which is why I’m removing all ways to audit what it has contributed.”

    • dev_null@lemmy.ml · ↑70 ↓25 · 10 days ago

      “because that’s the only way to use it without being harassed online”

      I disagree with his reasons for removing it, but they are pretty clear.

    • db2@lemmy.world · ↑10 ↓5 · 10 days ago

      “AI” has been known to reproduce code from other projects, and hence code under other licenses. Its output can’t become public domain unless all of that code was also public domain.

      • bss03@infosec.pub · ↑8 · 10 days ago

        I’d imagine there have been more nonsensical (than AI = public domain) legal decisions that have had the full force of law for decades.

        I recently dug around for a while, and if the copyright of works in the training data affects the copyright of outputs, no popular model can output anything that would even be close to acceptable for a contribution to an open-source project. Maybe if you trained a model exclusively on “The Stack” (NOT “The Pile”) and then included all the required attributions – but no ready-made model does that. All of the “open source” model frameworks that I could find included some amount of proprietary “pre-training” data that would also be an issue.

        If AI output is NOT affected by the copyright of training data… there might not BE a (legal) person that can hold any copyrights over it, which is pretty close to public domain.

        • DerHans@lemmy.world · ↑1 ↓1 · 8 days ago

          Good Sire, if we are talking only about the US, then that does not matter at all. Existing copyright law and established precedent (without involving AI) already cover this. The copyright of software is handled like that of literature, so the actual content is copyrighted - more specifically, the sequence of words. To violate the copyright of a protected work, one just has to reproduce this sequence. It is not relevant whether it was reproduced by an AI, a human, God, or your cat (:D). The only exception to this is fair use. Whether fair use applies must be considered on a case-by-case basis; four factors are used in making that decision. And that is assuming that portions of that code are not patented. If they are, then you are screwed no matter what (unless you are allowed to use that code).

          Anyhow, you are opening yourself up to litigation for sure.

          Now, is this a problem? Probably not. Copyright infringement is actually very, very hard to spot, especially without automated tools (looking right at you, YouTube). Even if it is spotted, the owners of the copyright must spend resources to enforce it. Considering that most of the code used in the training data is open source, most of these owners won’t have those resources, or at least aren’t using them (which is sad, because that also applies to infringement by companies). You cannot lose if no one sues. Whether you should risk it is anyone’s decision to make.

          For unprotected code… I guess you are right. It could go one way or the other, but it does not really matter that much. At worst, people can use your code without adhering to your license. That would not mark the end of a project; the protected-code case definitely would.

          Also, on another note: using copyrighted material in the training data of AI is considered fair use.

    • Alex@lemmy.ml · ↑9 ↓11 · 10 days ago

      There is no settled legal status for the output of AI systems, and it’s certainly something that needs clarification going forward. The law may treat asking an LLM to regurgitate its training data differently from following instructions in a local context. Human engineers are allowed to use “retained knowledge” from their experience even though they can’t bring their notebooks from previous careers; LLMs are just better at it.

      • hperrin@lemmy.ca · ↑22 ↓3 · edited · 10 days ago

        As of March 2, it has been settled. AI generated works must have substantial human creative input in order to be copyrightable. Prompting the AI does not meet that requirement.

        https://www.morganlewis.com/pubs/2026/03/us-supreme-court-declines-to-consider-whether-ai-alone-can-create-copyrighted-works

        In other words, if the AI wrote the code, and you didn’t change it since then, it’s not yours at all. It’s public domain, no question.

        • yucandu@lemmy.world · ↑8 ↓3 · 10 days ago

          Prompting the AI alone does not meet that requirement, i.e. you can’t say “draw me a picture of a cat” and then copyright the picture of the cat, claiming you made it.

          You can say “help me draw this left ear over here, now make the right ear up here, a little taller, darken the edges a bit”, all with prompts, but with your sufficient creative input.

          • hperrin@lemmy.ca · ↑14 ↓2 · edited · 10 days ago

            That’s not how the dev said he’s generating code. He said sometimes he does it without any intervention at all.

            Also, that’s potentially copyrightable. That hasn’t been settled.

        • Alex@lemmy.ml · ↑3 ↓1 · 10 days ago

          Glad it applies worldwide /s

          Slop can’t be copyrighted, great. We don’t want slop.

        • dgdft@lemmy.world · ↑5 ↓4 · edited · 10 days ago

          Your link doesn’t support what you’re saying in the slightest. Have whatever opinion you want, but don’t shovel up transparent bullshit to push your narrative.

          TFA is about a copyright on a work made by a purely autonomous device, and SCOTUS declining to hear a case doesn’t “settle” jack-shit.

          Quoting further:

          Thaler submitted an application to the US Copyright Office to register copyright in “A Recent Entrance to Paradise,” explicitly identifying the AI system as the author and stating the work was created without human intervention.

          For now, businesses and creators using AI should continue to rely on the longstanding human authorship requirement. Under current law, works made solely by autonomous AI are not eligible for copyright protection in the United States. Ongoing cases also consider the amount of human input, including prompting or post-generation editing, required to register copyright in an AI-generated work.[12]

          Companies should ensure a human contributes creatively and is named as the author in any copyright applications for AI-assisted works. To maximize protection, organizations should review their creative workflows and document human involvement in AI-assisted projects, particularly for commercial content. Organizations should continue to document the timing and scope of the use of AI in copyrightable works, for example by retaining prompts provided by the author. Internal policies should clarify attribution, ownership, the nature of creative input, and documentation requirements to avoid denied copyright applications.

          Iteratively working on a codebase by guiding an LLM’s design choices and feeding it bug reports is fundamentally different from this case you’re citing.

          • hperrin@lemmy.ca · ↑2 ↓2 · edited · 10 days ago

            If all you do is prompt the AI, “hey, fix bugs in this repo,” then you had no creative input into what it produces. So that kind of code would not be copyrightable, 100%. You can fight it in court, but the Supreme Court refusing to hear it means the lower court’s decision is settled law, and your chances of winning are essentially zero.

            Whether code where you hold its hand and basically pair program with it is copyrightable hasn’t been settled. Considering the dev said he does it both ways, the point is rather moot, since for sure, he doesn’t own the copyright to at least some of that AI generated code.

            OpenClaw is an autonomous system just like the one in that article, and the dev said that’s what he’s using at least some of the time. It generates and commits code without human intervention.

  • Shanmugha@lemmy.world · ↑66 ↓3 · edited · 9 days ago

    Been chewing this since yesterday. Okay, here is my two cents:

    • yes, what LLM companies are doing is a problem. So dropping anything that has anything to do with their products is a sane way to make a statement
    • yes, LLMs can be used effectively in development. Whether the Lutris author has been using them well, I don’t know. I guess I won’t bother to check either; I have other things to do
    • yes, doing the stunt with “good luck guessing what is what” is bullshit

    Net total, given I’ve already dropped GNOME because of their culture: I guess now I am dropping Lutris. Not because of AI per se, but because of the “fuck you” move

    • Skullgrid@lemmy.world · ↑10 ↓4 · edited · 9 days ago

      Net total, given I’ve already dropped GNOME because of their culture

      what was wrong with gnome’s culture?

      I use KDE BTW; I don’t want a Fisher-Price/Mac lookalike UI

      • Shanmugha@lemmy.world · ↑16 · 9 days ago
        • You want customisation? Use extensions
        • We broke extensions, because
        • Also, no API for extensions. Patch our code manually

        No integrity in that see I, so drop them I do (Yoda voice)

      • JustEnoughDucks@slrpnk.net · ↑9 · 8 days ago

        I already replaced Lutris with Heroic launcher + Proton and wine-ge a year ago.

        Lutris install scripts already didn’t work more than half the time for me, and Battle.net always got corrupted after a while on Lutris, forcing me to reinstall it every few months - it’s been going a year strong on Heroic.

        You can also always look at the Lutris install scripts and install those components in Heroic via winetricks. They were made by the community anyway.
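Concretely, replicating what a Lutris install script sets up usually comes down to running the same winetricks verbs against the game's prefix. A minimal sketch, where the prefix path, the verbs (`corefonts`, `vcrun2019`), and Heroic's prefix layout are all assumptions; check the actual Lutris install script for what it really installs:

```python
import os
import shutil
import subprocess

# Hypothetical prefix path -- check where Heroic actually keeps this game's prefix.
PREFIX = os.path.expanduser("~/Games/Heroic/Prefixes/battlenet")

def winetricks_cmd(verbs):
    """Build an unattended winetricks invocation (-q) for a list of verbs,
    e.g. the components a Lutris install script would have pulled in."""
    return ["winetricks", "-q", *verbs]

# Only attempt the install if winetricks exists and the prefix is real;
# WINEPREFIX points winetricks at Heroic's prefix instead of ~/.wine.
if shutil.which("winetricks") and os.path.isdir(PREFIX):
    subprocess.run(winetricks_cmd(["corefonts", "vcrun2019"]),
                   env={**os.environ, "WINEPREFIX": PREFIX}, check=True)
```

The same pattern works for any other verb the Lutris script lists; `winetricks list-all` shows what is available.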

      • aeiou_ckr@lemmy.world · ↑3 · 9 days ago

        For games, I have replaced it with Steam, as you can load non-Steam games and run them under Proton. I have had great success. Outside of games, I’m not sure.

        • KairuByte@lemmy.dbzer0.com · ↑1 · 8 days ago

          I’m pretty sure neither is purely for games? I mean, you don’t necessarily have to limit Steam to games. May as well try non-games and see what happens.

      • Shanmugha@lemmy.world · ↑1 · 9 days ago

        Didn’t look for one yet. As I understand it, there is a thing called Bottles that is worth a try

    • FauxLiving@lemmy.world · ↑9 ↓7 · 8 days ago

      but because of “fuck you” move

      The guy removed the attribution because he is being harassed.

      The ‘fuck you’ move is the people harassing an open source dev. Those people are the source of the bad behavior, not the guy who volunteers his time maintaining an open source project for everyone to use.

      The anti-AI crowd is toxic and needs to fuck off. It’s one thing to have an opinion; it’s another thing to harass volunteers because they’re using tools that the crowd has a hateboner for.

      • Shanmugha@lemmy.world · ↑1 ↓1 · edited · 8 days ago

        The guy removed the attribution because he is being harassed

        That may be, and he never mentions this in the now famous comment. Or was the message about Lutris being slop harassment? (The question is genuine; I am somewhere on the autistic spectrum.)

        The ‘fuck you’ move is the people harassing an open source dev

        That is not decent behaviour, no question. But his doing something preemptively about something he says he doesn’t see as an issue - that’s some bullshit. I am not against him using LLM tools, but I am not OK with someone who can’t just say “this is how I am doing things, these are my reasons, and they are enough for me, so fuck off (and/or be banned, if GitHub has such a thing)” and instead goes on with an ill-reasoned tirade. Before anyone brings it up: yes, he also mentions depression, which is no small thing, so demanding crystal-cut reasoning is also bullshit. But that is not my point; my point is that the guy needs some care and doesn’t look like he understands that, which means things are heading towards a disaster, sadly

        • FauxLiving@lemmy.world · ↑4 · 8 days ago

          That may be, and he never mentions this in the now famous comment. Or was the message about Lutris being slop harassment? (The question is genuine; I am somewhere on the autistic spectrum.)

          There were a lot of toxic conversations on Discord and on the forums for a while prior to his blowing up.

          The dev hasn’t made a secret of his mental health struggles and he probably could have handled the situation a bit better. But, in the end, he’s a guy making a tool that helps the entire community and even if you think AI tools run on the blood of sacrificed puppies, it isn’t okay to harass someone personally.

          Argue about water usage or power usage, copyright issues, etc… but as soon as they start attacking the person directly it has gone way too far. His response could have been better, but the blame should be completely on the anti-ai harassment squad and not the lack of PR skills of a volunteer developer.

          • Shanmugha@lemmy.world · ↑2 · edited · 8 days ago

            Blame for different things:

            Running around and cursing anyone using LLMs - that’s an idiotic thing to do, and he is not the one doing it, of course

            not the lack of PR skills of a volunteer developer

            That’s not what bothers me

            But, in the end, he’s a guy making a tool that helps the entire community

            While sacrificing his own life (time, energy, emotions, all it takes to keep doing it). That’s not worth it, damn it. Doing something just to say “good luck figuring this out on your own, if it bothers you that much, you stupid fucks” is a priority shift from “what is good for me/project/community” to “what to do with project to stop this toxic shit”. My answer is “Do nothing with the project. Get them to fuck off or get yourself out of their reach”. And my requirement of anyone in charge of anything is clarity

            Edit: word “sacrificing” is important. Not sharing out of abundance, not serving out of devotion, but cutting from what he has and needs for the benefit of others

            • FauxLiving@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              8 days ago

              Oh I agree he’s handled it badly, I just don’t fault him much.

              He’s just one guy who’s suddenly the target of tens or hundreds of people who’re directly harassing him everywhere that it is possible. He shouldn’t be put in that position and, as bad as his response is, he’s doing it in the context of a pressure and harassment campaign… not because he’s suddenly developed animosity for the community.

              His response is bad, but the people creating the situation are the ones that shoulder the blame… imo.

              • Shanmugha@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                7 days ago

                On that we agree completely. Screaming “N is bad because llm was used to build it” is utter idiocy

  • utopiah@lemmy.world
    link
    fedilink
    English
    arrow-up
    63
    arrow-down
    4
    ·
    edit-2
    8 days ago
    • their repo (checked the commit graphs and basically they did most of the work, the 2nd dev agrees with them, covers 90%+), their choice of governance
    • their repo, their choice of tooling
    • I genuinely believe they think they are doing “good enough” code and they are probably right about it in their context
    • they do have fair points on the economic power dynamics, namely that yes, Anthropic is slightly less bad than Meta, Google, OpenAI, Microsoft, etc. (… but IMHO honestly that’s a damn low bar)

    but also

    • obfuscation rather than discussion (closed the issue and limited to maintainers only) so clearly the signal is precisely “my repo, my choice”
    • no mention of the copyright or license washing
    • no mention of ecological impact

    so I would personally consider instead Bottles, GOG (have different problems), Steam (obviously not open source and basically monopolistic position), etc.

    Overall I think preventing discussion is unhealthy (even though sadly sometimes needed; here I lack context, maybe the issue poster did this numerous times on other platforms, and the title definitely was provocative) but removing provenance is NEVER a good choice. They want to use Claude on their repo? Absolutely fine (even though not to me) but hiding it makes it instantly untrustworthy to me. In fact I even argued in the past that even though I personally do not use GenAI/LLMs (for coding or otherwise) except for testing, it should always be disclosed, precisely so that others can make THEIR choice in consequence, including using or contributing, cf https://fabien.benetou.fr/Analysis/AgainstPoorArtificialIntelligencePractices

    • Mwa@thelemmy.club
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      9 days ago

      GOG (have different problems)

      but GOG is not open source either (if you use GOG Galaxy)

    • squaresinger@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      8 days ago

      Tbh, I think it’s a bit of pick your battles. GOG and Steam are mostly good companies, but it’s all closed source and I would bet money they are using AI to develop too. And they don’t even provide any way that you could check that, because their code isn’t open at all.

      Is that really better than some open source dev developing with AI in broad daylight?

      I totally understand why the Lutris dev shut down the discussion. The dev posted about struggling with mental health, and developing open source software is sadly really bad for your mental health. There’s just too many people who think that the code is public and thus they get a say in it, even if they didn’t contribute anything at all.

      As an open source dev, you contribute without getting anything in return, and then you have to justify your actions in front of random strangers who often get quite aggressive. It’s a really big problem in the FOSS sector. Look up e.g. the controversy around Marcel Bokhorst (M66B). He almost shut down all his great FOSS projects because of all the hassle he got from randos on the internet.

      • utopiah@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        8 days ago

        I agree. I still don’t think it’s right. I’m not sure how to do better. Overall, whether one is into developing (open source or not) or creating art or whatever, my one piece of advice is to cherry-pick whatever strangers are telling you. You only listen to the healthy advice and everything else must be like water off a duck’s back.

        • squaresinger@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          8 days ago

          In commercial development that’s easy. I don’t really care about the product I am working on. I am doing a good job working on it, but it’s not my baby. Also, I am a developer, so I develop. There’s customer support people who get paid to have customers scream at them.

          If this is your personal pet project, that you love and that you poured your soul in, that’s more difficult. Especially if you are already struggling with mental health.

          And I don’t like it when we say “only mentally stable people who don’t mind engaging with a toxic community deserve to be FOSS developers”. That’s just not right.

          • utopiah@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            8 days ago

            I don’t like it when we say “only mentally stable people who don’t mind engaging with a toxic community deserve to be FOSS developers”. That’s just not right.

            No idea where that came from, I surely didn’t mean nor suggest so.

            • squaresinger@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              8 days ago

              Overall, whether one is into developing (open source or not) or creating art or whatever, my one piece of advice is to cherry-pick whatever strangers are telling you. You only listen to the healthy advice and everything else must be like water off a duck’s back.

              This part here.

              Only taking healthy advice and ignoring everything else is something you can do if you are super mentally stable. If you aren’t this is often not possible.

    • FauxLiving@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      3
      ·
      8 days ago

      obfuscation rather than discussion (closed the issue and limited to maintainers only) so clearly the signal is precisely “my repo, my choice”

      There is discussion but it is limited to the people who contribute to the project, it is closed to the public because of the harassment campaign. Nobody wants to listen to a bunch of toxic children copy and paste the same opinions and insults.

      no mention of the copyright or license washing no mention of ecological impact

      The developers who use AI tools are not repeating the anti-AI memes, this isn’t surprising.

      It’s one thing if you want to not use the software, but contributing to or whitewashing this harassment campaign is toxic and needs to stop.

  • ClamDrinker@lemmy.world
    link
    fedilink
    English
    arrow-up
    51
    arrow-down
    3
    ·
    edit-2
    10 days ago

    I’m kind of torn on this, because on the one side I can see the developer’s troubles. If they have 30 years of experience and they considered the impact of using it, they will most likely know how to use it properly and ethically. Indeed many of the issues people have with AI are a kind of redirected anger, when really they are issues with capitalism, incompetence, or digital illiteracy. And the person posting the issue seems purely there to fan that flame rather than actually contribute. Something maintainers could use just as little as slop-authored PRs.

    But on the other hand, being open about the usage is a must. It’s the price to pay for going against the grain. If your ideals and means are pure, they should be defendable and scrutinizable to reasonable people, and there should be no issue with that in the long term. Hiding the usage will create doubt about authorship, and make defenses harder to point at, while it won’t stop the horde.

    • lama@lemmy.world
      link
      fedilink
      English
      arrow-up
      46
      arrow-down
      1
      ·
      10 days ago

      Yeah what rubs me wrong is that they went out of their way to hide it and are proud of it

    • tinsukE@lemmy.world
      link
      fedilink
      English
      arrow-up
      50
      arrow-down
      6
      ·
      9 days ago

      they will most likely know how to use it properly and ethically

      I’d argue that ethical use is not possible:

      • Models are trained on stolen/misappropriated/misused data
      • Training involves psychologically harmful work from ghost workers
      • Those services run on infrastructure that no one wants around, and wastefully contribute to climate change/global warming
    • deathbird@mander.xyz
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      1
      ·
      9 days ago

      Per the dev:

      “I don’t refuse to document anything, I’m just taking full ownership of all commits. Claude is not a person (sorry to all the people named Claude) and I don’t see the point of having it in commit messages. It has a “Sent from my iphone” vibe. It’s just advertising.”

      She also said something to the effect of “if you think the AI code I allow through is noticeably worse than my hand written code, you should be able to tell it apart without me labeling it. I’m tired of your bitching about it because of a tag rather than the actual content”

    • Lumisal@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      1
      ·
      9 days ago

      It’s at times like this I like to point out examples like surgeons Ben Carr(?) and Dr. Oz as counterexamples: you can be very knowledgeable about something but also very unwise or morally bankrupt at the same time.

    • FauxLiving@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      6
      ·
      8 days ago

      Is Step 1 - Be the target of a harassment campaign?

      He removed the attribution because people are harassing him, it’s one thing to not want to use the tool but harassing an open source dev is way over the line. I don’t care about your opinion on AI, it doesn’t justify harassment.

      The anti-AI crowd have, once again, gone way over the line. Nobody should be supporting this harassment.

  • Zos_Kia@jlai.lu
    link
    fedilink
    English
    arrow-up
    59
    arrow-down
    14
    ·
    8 days ago

    Oh great the campaign of harassment is continuing. Keep going guys, hopefully you can get another dev to quit a project, and I know none of the people commenting here have what it takes to fork and maintain it.

    You wouldn’t be doing anything different if you were getting paid by corporate interests to hurt the open source movement. Great job you can be proud of yourselves.

    • luciferofastora@feddit.org
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      7
      ·
      edit-2
      8 days ago

      Edit: To preface this, I concede that targeted harassment against individuals isn’t a good solution to the problems I have with the way the technology is being used.


      Others mention that some recent versions appear to have been unusable. If this is due to LLM-generated code and the dev doubles down on using it, I’m not sure there’s too much value in them carrying on development and burying more artificially generated footguns in there than human-written code tends to contain already.

      That aside, the climate, economic and social problems of the GenAI boom are hardly unknown. For the dev to ignore that is… distasteful. If they won’t quit using LLMs without also quitting the project, Lutris might end up another regrettable victim of the AI-Slopalypse.

      Opposing GenAI isn’t trying to hurt the Open Source movement, it’s trying to call out the false messiah that has deluded some people into believing it’s the future of software development.

      • Zos_Kia@jlai.lu
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        5
        ·
        8 days ago

        Opposing GenAI is free. Do it. It just consists of not using the software you don’t agree with. It’s great, I do it all the time.

        Coordinating attacks on social media to harass a developer is not great. It’s 4chan-like but at least the 4chan goblins know that they are the bad guys. This is just as slimy but with none of the self awareness.

        • luciferofastora@feddit.org
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          2
          ·
          8 days ago

          Opposing GenAI is free. Do it. It just consists of not using the software you don’t agree with.

          That doesn’t mitigate the environmental damage caused by others using it. I’m not opposed to the technology, nor strictly to its application, but to the irresponsible way it’s being handled currently.

          Coordinating attacks on social media to harass a developer is not great

          You’re right, I agree on that. Efforts should target the companies that offer it, rather than individuals.

          It’s 4chan-like but at least the 4chan goblins know that they are the bad guys. This is just as slimy but with none of the self awareness.

          I’m not sure the 4chan goblins actually believe they are bad guys so much as ironically embrace that image

          • Zos_Kia@jlai.lu
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            5
            ·
            8 days ago

            That doesn’t mitigate the environmental damage caused by others using it. I’m not opposed to the technology, nor strictly to its application, but to the irresponsible way it’s being handled currently.

            Well i guess that’s a great reason to harass individuals who never wished harm on anybody then 🤷

              • Zos_Kia@jlai.lu
                link
                fedilink
                English
                arrow-up
                5
                arrow-down
                4
                ·
                8 days ago

                Yeah i get it. It’s just that the whole situation pisses me off to no end. There are exponentially more people destructively contributing to this campaign, than people constructively contributing code to projects. Cause it’s easy and lazy and takes literally zero effort.

                The only effect is to punish developers for having successful projects. They’d be fine if they were just dicking around on toy projects, but they chose to do something that matters, and to do it for free, and now they have haters. A lot more haters than helpers too !

                We are collectively sending the message that it’s better not to stick your head out and publish open source code, and this will wreak havoc on the already overtaxed FOSS ecosystem. Corporate tech must be rubbing its hands in glee now that we’re doing what they never achieved in 30 years.

                • petrol_sniff_king@lemmy.blahaj.zone
                  link
                  fedilink
                  English
                  arrow-up
                  5
                  arrow-down
                  2
                  ·
                  8 days ago

                  that it’s better not to stick your head out and publish open source code

                  I like the implication that open source code must include AI, and so the only recourse is to reject… all open source projects?

                  You know, we had a FOSS without AI like ten years ago. I’d prefer to keep that one.

      • Retail4068@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        11
        ·
        edit-2
        8 days ago

         Y’all are just prejudiced. Making up what-ifs and whataboutism. If you think you can do better then fork it. But you can’t, and won’t.

        • luciferofastora@feddit.org
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          8 days ago

           whataboutism

          I don’t think pointing out problematic aspects of LLM use is whataboutism, given that the maintainer’s LLM use is the topic of conversation. A whataboutism would be “But what about Microsoft? They use GenAI too!” because that has nothing to do with this specific developer using it.

          This is simply about the reasons I disapprove of using GenAI in general and relying on LLMs for coding in particular.

          If you think you can do better then fork it. But you can’t, and won’t.

          There are a lot of things I can’t do myself. I don’t see how that should mean I can’t criticise the way they are done.

          It also doesn’t mean people have to stop using it entirely. Approval is not a binary. This isn’t a company we’re paying money to, it’s not an atrocity, and it’s not particularly large in scale (which is why making a witch-hunt out of it is dumb too).

    • Auli@lemmy.ca
      link
      fedilink
      English
      arrow-up
      2
      ·
      8 days ago

      Well, neither does the maintainer, it seems. And I don’t believe we look at all code. It is hard to understand someone else’s code, hell, it’s hard to understand your own after a while.

  • woelkchen@lemmy.world
    link
    fedilink
    English
    arrow-up
    45
    arrow-down
    7
    ·
    10 days ago

    Just assume everything is AI generated and feel free to ignore the GPLv3 because generated code doesn’t have any copyright. See how he reacts.

      • renegadespork@lemmy.jelliefrontier.net
        link
        fedilink
        English
        arrow-up
        33
        arrow-down
        2
        ·
        edit-2
        10 days ago

        The legal effect of AI generated code on software licenses is untested in court and AFAIK has no explicit laws. So really no one knows how it will work yet.

        • yucandu@lemmy.world
          link
          fedilink
          English
          arrow-up
          18
          ·
          10 days ago

          The US Copyright Office has updated its guidelines:

          If AI content is present, the Office will only register the work if the human contributions are sufficiently creative and if the AI-generated portions are supplementary or used as a tool under human direction. Essentially, they ask: “Is the work basically one of human authorship, with the computer merely assisting?” If yes, it can be protected (with a disclaimer that some content isn’t human-made). If no, if the AI’s role overshadows the human’s, then the work, or at least the AI-created portion, is not eligible for copyright.

          In Canada, where I live:

          So, can you claim copyright in an AI-generated work in Canada? As of 2025, the safest answer is: only if a human author contributed substantial creative effort to the final work. There needs to be some human “skill and judgment” or creative spark for a work to be protected.

           If the AI was just a tool in your hands (for instance, you used AI to enhance or assemble content that you guided), then your contributions are protected and you are the author of the overall work. But if an AI truly created the material with you providing little more than a prompt or idea, the law may treat that output as having no human author, and thus no copyright.

          For now, anyone using AI in creative projects should keep documentation of their own input and creative choices. Emphasize the parts of the work where you exercised judgment or selected elements because those are likely what copyright will cover. And remember that copyright in AI-generated content is a fast-moving area.

          https://www.foundationsoflaw.com/post/can-you-claim-copyright-in-ai-generated-works-in-canada

          Makes sense to me.

          • ClamDrinker@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            2
            ·
            edit-2
            10 days ago

             The thing is, many of these guidelines relate to finalized products fully created by AI. As in, the AI produced a written or drawn work that on its own is the product (e.g. an article or an image). This will probably apply to code in some reasonable way, but at the end of the day there are only so many ways to write code, since it’s syntax and not as flexible as natural language. It actually has to produce something that works, so there are far fewer possible arrangements.

             If you were to compare code written by two people at two companies doing a very similar project, you wouldn’t be surprised to find two pieces of code doing almost the same thing in the same syntax, barring syntactic sugar like naming and coding conventions. Neither will likely have violated the other’s copyright, since simultaneous invention is a thing. And if they happened to have similar prior experiences, it’s even more likely.

            Likewise, the way the code was incorporated into a project as a whole might sufficiently constitute a human contribution, and perhaps even the more important contribution. You likely wouldn’t retain the copyright on the specific snippet, but rarely are small code snippets enough on their own to claim copyright over to begin with. It’s the program or library or system as a whole that’s the finished product.

        • Hubi@feddit.org
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          10 days ago

          Just assume everything is AI generated

          This is the part that will definitely not work.

          • woelkchen@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            9 days ago

             If that AI slopper freaks out about the allegation of a complete lack of threshold of originality, it’s already a win.