![](https://lemmy.world/pictrs/image/eb3e033e-3bc1-49f1-b02a-8886a4433b13.png)
![](https://lemmy.ml/pictrs/image/a64z2tlDDD.png)
My brother in Christ, building a bomb and doing terrorism is not a form of protected speech, and an overwrought search engine with a poorly attached ability to hold a conversation refusing to give you bomb making information is not censorship.
This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues surrounding sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general toxic data waste, and then train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation and adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.
Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you’re actively trying to produce a model to do illegal or unethical things, I don’t quite see the point of contention or what “censorship” could actually mean in this context.
Even if Edge were marginally better than Chrome (it’s not), allowing monopolistic practices simply for the sake of slightly evening out a corporate race to the bottom is not a good standard. The actual solution is a browser like Firefox, which has at least some remote respect for and business interest in user privacy, and to aggressively litigate both Microsoft and Google for using their dominant service platforms to cross-promote their other products to captive audiences.
The unintended part was people noticing and the story making it into the news cycle; everything else was very clearly exhaustively planned and intended.
Getting out of the Google frying pan and into the Microsoft fire is in no way better. Both options are exploitative anti-user monopolies, and both Chrome and Edge are the same browser engine under different corporate skins that aggressively violate your privacy in numerous ways for their own gain.
The fact that Microsoft’s increasingly aggressive use of its OS platform to artificially push its search and cloud products hasn’t triggered multiple huge antitrust cases is a pretty dire indicator of how little regulators are willing or able to safeguard the public from monopolistic behavior by large tech companies.
They are not being “honest”; they are reproducing flawed and problematic data patterns integrated into their models, because the capabilities they actually possess are dramatically less than companies and the general public seem happy to assume. LLMs aren’t magically going to become pop culture evil robots that want to kill us all, but what they have already become is tools for unethical corporate exploitation and the enablement of more advanced scams and disinformation campaigns.
I’ve found a very simple expedient to avoid any such issues: just don’t use things like ChatGPT in the first place. While they’re an interesting gadget, I have been extremely critical of the massively over-hyped pitches of how useful LLMs actually are in practice, and have regarded them with the same scrutiny and distrust as people trying to sell me expensive monkey pictures during the crypto boom. Just as I came out better off because I didn’t add NFTs to my financial assets back then, I suspect that not integrating ChatGPT or its competitors into my workflow now will end up being a solid bet, given that the current landscape of LLM-based tools is pretty much exclusively a corporate-dominated minefield, surrounded by countless dubious ethics questions and doubts about what these tools are even ultimately good for.