Before hitting submit, I’d worry I’d made a silly mistake that would make me look a fool and waste their time.
Do they think the AI-written code Just Works™? Are they so detached from that code that they don’t feel embarrassment when it’s shit? It’s like calling yourself a fiction writer and putting “written by (your name)” on the cover of a book you didn’t write, and the book itself is nonsense.
AI bros have zero self-awareness or shame, which is why I keep saying the best tool for fighting this is making it socially shameful.
Somebody comes along saying “Oh look, the image is just genera…” and you cut them off with “Looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that, hahah. So anyway, what were you saying?”
LLM code generation is the ultimate Dunning-Kruger enhancer. They think they’re 10x ninja wizards because they can generate unmaintainable demos.
They’re not going to maintain it - they’ll just throw it back to the LLM and say “enhance”.
From what I have seen, Anthropic, OpenAI, etc. seem to be running bots that go around submitting updates to open source repos with little to no human input.
You guys, it’s almost as if AI companies are intentionally trying to kill FOSS projects by burying them in garbage code. Sounds like they took a page from Steve Bannon’s playbook: flood the zone with slop.
Can Cloudflare help prevent this?
yes.
literally yes.
It’s insane.
That’s how you know who never even tried to run the code.
that’s the annoying part.
LLM code can range from “doesn’t even compile” to “it actually works as requested”.
The problem is, depending on what exactly was done, the model will move mountains to actually get it running as requested. And it will absolutely trash anything in its way, from “let’s abstract this with 5 new layers” to “I’m going to refactor that whole class of objects to get this simple method in there”.
The requested feature might actually work. 100%.
It’s just very possible that it either broke other stuff or made the codebase less maintainable.
That’s why it’s important that people actually know the codebase and know what they/the model are doing. Just going “works for me, glhf” is not a good way to keep a maintainable codebase.
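To make that concrete, here’s a contrived sketch of what “5 new layers” around a one-line feature tends to look like. Every name in it is made up for illustration, not taken from any real PR:

```python
# Contrived sketch: the "5 new layers" pattern. All names are hypothetical.
from abc import ABC, abstractmethod


# What was asked for: a helper that doubles a value.
def double(x: int) -> int:
    return x * 2


# What the model delivers: the same behavior, buried under indirection.
class Operation(ABC):
    """Layer 1: an abstract strategy interface for one operation."""
    @abstractmethod
    def apply(self, x: int) -> int: ...


class DoubleOperation(Operation):
    """Layer 2: the single concrete strategy."""
    def apply(self, x: int) -> int:
        return x * 2


class OperationFactory:
    """Layer 3: a factory that can only ever build that one strategy."""
    def create(self, name: str) -> Operation:
        if name == "double":
            return DoubleOperation()
        raise ValueError(f"unknown operation: {name}")


class OperationExecutor:
    """Layer 4: a manager that wraps the factory."""
    def __init__(self, factory: OperationFactory) -> None:
        self._factory = factory

    def execute(self, name: str, x: int) -> int:
        return self._factory.create(name).apply(x)


def run_double(x: int) -> int:
    """Layer 5: a module-level facade over the manager."""
    return OperationExecutor(OperationFactory()).execute("double", x)


# Identical behavior, ~10x the surface area to review and maintain.
assert double(4) == run_double(4) == 8
```

The feature “works as requested” in both versions; only one of them is something you’d want to maintain.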
LOL. So true.
On top of that, an LLM can also take you on a wild goose chase. When it gives you trash, you tell it to find a way to fix it. It introduces new layers of complication and installs new libraries without ever really approaching a solution. It’s up to the programmer to notice a wild goose chase like that and pull the plug early on.
That’s a fun little mini-game that comes with vibe coding.