What are your thoughts on Generative Machine Learning models? Do you like them? Why? What future do you see for this technology?
What about non-generative uses for these neural networks? Do you know of any field that could use such pattern recognition technology?
I want to get a feel for what are the general thoughts of Lemmy Users on this technology.
Kinda cool how much auto-correct and noise filters can do… kinda uncool to use the same as leverage against humanity.
I think that they’re neat, their development is fascinating to me, and they have their utility. But I am sick of executive and marketing types sloppily cramming them into every corner of every service just so they can tell their shareholders that it’s “powered by AI”.

So far, I’ll use a page or app dedicated to chatting with the LLM, and I’ve also found that GitHub Copilot in VS Code is pretty nifty sometimes for things like quickly generating docs that I can then just proofread and edit. But in most other applications and websites I either don’t use them at all, or I’m forced to and the experience is worse.

Recently, I’ve been having to work in Microsoft’s Power Platform a bit for a client (help me). Almost every page in the entire platform has an AI chatbot on the side that’s supposed to do some of the work for you. Don’t use it. It fucks up your shit. Ask it to do something and it will change your flow, or whatever you’re working with, using the wrong syntax that won’t even compile 9/10 times, with no opportunity to undo; the remaining 1/10 is logic errors. Ask it questions about the platform and not only will it not know anything, it will literally accuse you of not speaking English.
TL;DR I think they’re neat and useful IF they’re used responsibly and implemented well. Otherwise they’re a nuisance and an excuse to use a buzzword at best, or dangerous at worst.
AI is the perfect tool to generate propaganda and fake-news on a massive scale for government and secret services. Humans may live in bubbles divorced from reality because of it. It also is the perfect technology for censorship, sentiment analysis/monitoring and thought-control automation.
I love it for what I use it for, which is research, speeding up scripting and code writing, resume building, paraphrasing stupidly long news articles, teaching me Spanish and Japanese, bypassing the bullshit that passes for search engines these days, and talking my anxiety down. They cut through the noise and boost my productivity.
I’m in the sector, and there are legitimate time and effort savings when used correctly. Code refactoring gets a little smarter than a dumb script, boilerplate code is instantly generated, and real educational topics can be delved into and analyzed.
I don’t want to see it closed off, and I want the data used to train them made public. These LLMs have capabilities older scripted systems can never match.
Eventually they will replace workers. Our society is too self-centered to make that a good thing.
Capitalism will ruin any good opportunities with said technology. Much like every technology that preceded it.
They make me want to summon the ghost of General Ludd from his grave.
Most GenAI was trained on material they had no right to train on (including plenty of mine). So I’m doing my small part, and serving known AI agents an infinite maze of garbage. They can fuck right off.
Now, if we’re talking about real AI that isn’t just a server farm of disguised Markov chains in a trenchcoat, neural networks that weren’t trained on stolen data, that’s a whole different story.
I like to think somewhere researchers are working on actual AI and the AI has already decided that it doesn’t want to read bullshit on the internet
Let me know when we have some real AI to evaluate rather than products labeled as a marketing ploy. Anyone remember when everything had to be called “3D” because it was cool? I missed my chance to get 3D stereo cables.
It’s a glorified crawler that is incredibly inefficient. I don’t use it because I’ve been programmed to be picky about my sources and LLMs haven’t.
It’s a tool with some interesting capabilities. It’s very much in a hype phase right now, but legitimate uses are also emerging. Automatically generating subtitles is one good example of that. We also don’t know what the plateau for this tech will be. Right now there are a lot of advancements happening at rapid pace, and it’s hard to say how far people can push this tech before we start hitting diminishing returns.
For non-generative uses, using neural networks to look for cancerous tumors is a great use case: https://pmc.ncbi.nlm.nih.gov/articles/PMC9904903/
Another use case is using neural nets to monitor infrastructure, the way China is doing with their high-speed rail network: https://interestingengineering.com/transportation/china-now-using-ai-to-manage-worlds-largest-high-speed-railway-system
DeepSeek R1 appears to be good at analyzing code and suggesting potential optimizations, so it’s possible that these tools could work as profilers https://simonwillison.net/2025/Jan/27/llamacpp-pr/
I do think it’s likely that LLMs will become a part of more complex systems using different techniques in complementary ways. For example, neurosymbolics seems like a very promising approach. It uses deep neural nets to parse and classify noisy input data, and then uses a symbolic logic engine to operate on the classified data internally. This addresses a key limitation of LLMs, which is the ability to do reasoning in a reliable way and to explain how a solution was arrived at.
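To make the split concrete, here’s a toy sketch of that neurosymbolic idea. The “neural” stage is a hypothetical stand-in (a real system would use an actual trained net); the point is the division of labor: fuzzy input goes in, discrete facts come out, and explicit rules do the reasoning so every conclusion is traceable.

```python
# Toy neurosymbolic pipeline sketch. The classifier below is a stand-in for a
# neural net that would turn noisy input into discrete facts; everything after
# it is the symbolic layer, where each conclusion follows from explicit rules.

def neural_classify(text: str) -> set[str]:
    """Stand-in for the neural stage: extract discrete facts from noisy text."""
    facts = set()
    lowered = text.lower()
    if "quack" in lowered:
        facts.add("quacks")
    if "waddle" in lowered:
        facts.add("waddles")
    return facts

# Symbolic stage: (premises, conclusion) rules the engine can explain.
RULES = [
    ({"quacks", "waddles"}, "duck"),
]

def infer(facts: set[str]) -> list[str]:
    """Fire every rule whose premises are all present in the fact set."""
    return [conclusion for premises, conclusion in RULES if premises <= facts]

print(infer(neural_classify("It quacks and waddles around the pond.")))  # ['duck']
```

The appeal is that the rule list, unlike LLM weights, can be inspected and audited: you can point at exactly which premises produced a conclusion.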
Personally, I generally feel positively about this tech and I think it will have a lot of interesting uses down the road.
I personally hate the path that AI is going down. Generative AI steals art and scrapes text to create garbage on demand, using up power and computing resources that could be spent on better purposes, such as simulating protein folding for disease research (see Folding@home). u/yogthos@lemmy.ml gave some good uses of AI.
To be honest, I think it’s a severe mistake that AI is continuing to improve. As long as you aren’t gullible and know what to look for, you can tell when something is AI generated, but there are too many people who are easily fooled by AI-generated images and videos. When ChatGPT released, I thought it was a nice toy, but now that I know the methods by which such large-scale models obtain their training data, I can only resent it. So long as generative models continue to improve the accuracy of their text and images, so will my hatred towards them in turn.
P.S.: don’t use the term “AI art”, for the love of God. Art captures human emotions and experiences; machines can’t understand them, they are only silicon. Only humans can create art, nothing else.
I think it’s fine if used in moderation. I use mine for doing the mindless day-to-day stuff like writing cover letters or business-type emails. I don’t use it for anything creative though, just to free myself up to do that stuff.
I also suck at coding so I use it to write little scripts and stuff. Or at least to do the framework and then I finish them off.
It’s bullshit. It’s inauthentic. It can be useful for chewing through data, but even then the output can’t be trusted. The only people I’ve met who are absolutely thrilled by it are my bosses, who are two of the most frustrating, stupid, pig-headed, petty people I’ve ever met. I wish it would go away. I’m quitting my job next week, taking a big paycut and barely being able to pay the bills, specifically because those two people are unbearable. They also insist that I use AI as much as possible.
Mixed feelings. I decided not to study graphic design because I saw the writing on the wall, so I’m a little salty. I think they can be really useful for cutting back on menial tasks though. For example, I don’t see why people bitch about someone using AI for their cover letter as long as they proofread it afterwards. That seems like the kind of thing you’d want to automate, unlike art and human interaction.
I think right now I just kind of hate AI because of capitalism. Tech companies are trying to make it sound like they can do so many things they really can’t, and people are falling for it.
Writing a cover letter is a good exercise in self reflection
True, I just assumed that reflection was required in order to give the AI the prompt, and the AI was mainly used to format it correctly. I might be talking out of my ass here since I haven’t used it extensively.