Dutch lawyers increasingly have to convince clients that they can't rely on AI-generated legal advice because chatbots are often inaccurate, the Financieele Dagblad (FD) found when speaking to several law firms. A recent survey by Deloitte showed that 60 percent of law firms see clients trying to perform simple legal tasks with AI tools, hoping to achieve a faster turnaround or lower fees.
That's not a new thing; doctors have had this for at least a decade with WebMD.
No, you don’t have cancer
I guess lawyers are feeling the pain that doctors experienced when WebMD became a thing.
I find it useless for even basic tasks. The fact that some people follow it blindly like a god is so concerning.
A lot of people are very stupid, and also very easily tricked/conned.
We are basically just finding all the people who were pretty much NPCs already, and now they're formalizing that.
To those people, the LLM probably genuinely is more intelligent and informed than they are.
George Carlin:
"Think of how stupid the average person is, and realize half of them are stupider than that."
I work in a health-care-adjacent industry, and you'd be surprised how many people blindly follow LLMs for medical advice.
My partner's midwife googled stuff in front of us and parroted the AI summary back to us when we asked if a specific drug was okay for pregnant people.
It's been doing wonders helping me improve the materials I produce so they fit certain audiences better. I can also use it to spot missing points and inconsistencies against the ton of documents we have in my shop when writing something. It's quite useful as a sparring partner so far.
It’s great when you have basic critical thinking skills and can use it like a tool.
Unfortunately, many people don’t have those and just use AI as a substitute for their own brain.
Yeah, well, the same applies to a lot of tools… I'm not certified to fly a plane, and look at me not flying one either… but I'm not shitting on planes…
But planes don’t routinely spit out false information.
I understand what you mean, but… look at Birgenair 301 and Aeroperu 603, at Qantas 72, and at the 737 MAX 8 crashes. Planes have spat out false data, and of the 5 cases mentioned, only one avoided disaster.
It is down to the humans in the cockpit to filter through the data and know what can be trusted. That could be similar to LLMs, except cockpits have a two-person team to catch errors and keep things safe.
So you found five examples in the history of human aviation. How often do you think AI hallucinates information? Because I can guarantee you it's a hell of a lot more frequent than that.
You should check out Air Crash Investigation, amigo, all 26 seasons. You'd be surprised what humans in metal life-support machines can cause when systems break down.
If you can't fly a plane, chances are you'll crash it. If you can't use LLMs, chances are you'll get shit out of them… the outcome of using a tool is directly correlated to one's ability?
Sounds logical enough to me.
Except with a plane, if you know how to fly it, you're far less likely to crash it. Even if you "can use LLMs", there's still a pretty strong chance you're going to get shit back due to their very nature. With one, the machine works with you; with the other, the machine is always working against you.
Nah, that's just plain wrong… you can also fantastically screw up flying a plane, but as long as you use LLMs safely, you're golden.
It also has no will of its own; it is not "working against you". Don't give those apps a semblance of intent.
Sure. However, the output of an LLM always looks plausible. And if you aren't a subject-matter expert, that plausible result looks very right. That's the difference: it's hard to spot the wrong things (even for experts).
So are a speedometer and an altimeter, until you reaaaaaaaaly need to understand them.
I mean, it all boils down to the proper tool with proper knowledge and ability. It's slightly exacerbated by the apparent simplicity, but if you look at it as a tool, it's no different.
Honestly, if you are that dependent on AI now, while it's still in a test phase, then you are already lost. AI won't make us smarter; if anything, it has the opposite effect.
I'm watching that happen in my industry (software development). There's a massive pressure campaign by damn near every employer in software dev to use LLM tools.
It’s causing developers to churn out terrible, fragile, unmaintainable code at a breakneck pace, while they’re actively forgetting how to code for themselves.