I’m actually quite happy to see some art with tits that are just plain nice. The giant bazongas everyone likes to stick on everything just don’t appeal to me, and can’t be comfortable for the bazonga-bearer either.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily technology, politics, and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.
If it’s not communicating anything, what’s the point?
My point is that if we turn up our gibberish dial now, then at least the LLMs will be learning the wrong thing and we retain some control.
We’d be covering ourselves in poop to prevent people from sitting next to us on the train. Sure, people will avoid sitting next to us, but in the meantime we’ll be covered in poop.
And then other people will learn the trick, cover themselves in poop too, and now everyone’s poopy and the trick stops working.
There is still a lot of understanding that we do automatically that an LLM will never do.
Are you willing to bet the convenience of comprehensible online discourse on that? “Automatically understanding stuff” is basically the one job of LLMs.
LLMs model language, and coming up with some kind of “gibberish” filter is simply inventing a new language. If there’s semantic meaning in it the LLMs will figure it out just like any other language, and if there isn’t semantic meaning then we’ve lost the ability to communicate entirely. I see no upside.
Well, the “at least for now” part is my point - if people start using “gibberish” to communicate or to hide their communication, that provides training material for LLMs to let them figure out how to use it too.
LLMs learn how to communicate based on existing examples of communication. As long as humans are communicating with each other somehow then LLMs will be able to train how to do that too. They have the same communication capabilities that we do at this point, so there’s not really any way we can make a secret clubhouse that they can’t figure out how to infiltrate.
Personally, I think there are two main routes we can go to deal with this. Either we can simply accept that there’s no way to be 100% sure we’re talking to a human any more and evaluate the value of our conversation based on the content of the words spoken rather than the composition of the entity generating them, or we could come up with some kind of “proof of personhood” system to allow people to label the text they write as coming from them.
The latter is extremely hard to do, of course, both from a technical and cultural perspective. And such a system would likely still allow someone’s “person token” to be sneakily used by AI, either by voluntarily delegating it (I could very well be retyping all of this out of a ChatGPT window) or through hackery.
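To make that concrete, here’s a rough sketch of what the signing half of a “person token” might look like, using Python’s cryptography library with an Ed25519 keypair. Everything here is illustrative, and note that it only proves “the keyholder signed this text”; binding the key to an actual human being is the hard part that this sketch deliberately skips.

```python
# Hypothetical "person token" flow: a person signs the text they post,
# and readers verify it against a public key published on their profile.
# This proves the text came from the keyholder, not that the keyholder
# is human, nor that a human (rather than an AI) composed the words.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # kept secret by the person
public_key = private_key.public_key()       # published for everyone to see

post = "I wrote this myself, honest.".encode("utf-8")
signature = private_key.sign(post)          # attached alongside the post

try:
    public_key.verify(signature, post)      # raises if the check fails
    print("Signature valid: the keyholder vouches for this text.")
except InvalidSignature:
    print("Signature invalid: altered text or wrong key.")
```

And as noted above, even if all of that worked perfectly, nothing stops the keyholder from signing text that ChatGPT wrote for them.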
So I’m inclined toward the former. If I’m chatting with someone and I’m having a good time doing it, and then later I find out it was a bot, why should that change how much fun I had?
I don’t see how that would be practical. People who aren’t “in on the joke”, as it were, will call out the gibberish and downvote it. If enough people are “in on the joke” then the whole forum becomes useless and some other forum will be created to fill the role of the original. The AI will train off of that one.
Basically, if you don’t want an AI training on your content, then don’t post your content in public where an AI will see it. The Fediverse is the last place you should be posting since its very nature is about openly broadcasting your content to whoever wants to see it.
You realize that this is only going to train LLMs how to recognize “gibberish”?
It’s more impressive when you use inpainting to preserve the beak, eye, and feet from the original source image.
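For anyone curious about the mechanics, here’s a hedged sketch of that kind of inpainting using the diffusers library. The model choice, file names, and prompt are all placeholders; the key idea is that the mask is black over the regions you want preserved (the beak, eye, and feet) and white where the model is allowed to repaint.

```python
# Sketch of mask-based inpainting with diffusers. Placeholder files:
# "bird.png" is the source image, "bird_mask.png" is black over the
# beak, eye, and feet (preserved) and white everywhere else (repainted).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # one commonly used inpainting model
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("bird.png").convert("RGB")
mask = Image.open("bird_mask.png").convert("RGB")

result = pipe(
    prompt="a bird made of stained glass",  # placeholder prompt
    image=source,
    mask_image=mask,
).images[0]
result.save("bird_inpainted.png")
```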
Yeah, I’ve got my own anecdote to chip in with on that: my dad was in the hospital for a month fighting a host of potentially fatal problems. There were ups and downs, but many of the problems were being addressed. Then the diagnosis finally came in that the root cause was advanced lymphoma and there was no realistic chance of “beating” it; he died later that very day.
I don’t think that it’s necessarily a question of “willing yourself to die” or “willing yourself to live,” but I do think that one can decide how much effort is worth putting into the fight versus deciding to relax and let it go. Whether consciously or subconsciously.
Yeah. Scientific papers may teach an AI about science, but Reddit posts teach AI how to interact with people and “talk” to them. Both are valuable.
The term AI was coined in 1956 at a computer science conference and was used to refer to a broad range of topics that certainly would include machine learning and neural networks as used in large language models.
I don’t get the “it’s not really AI” point that keeps being brought up in discussions like this. Are you thinking of AGI, perhaps? That’s the sci-fi “artificial person” variety, which LLMs aren’t able to manage. But that’s just a subset of AI.
By the time intergalactic navigation is relevant we’ll have likely dismantled Earth. The vast majority of it is just sitting there generating gravity, a huge waste of its potential.
I was going to suggest the Great Attractor or the Shapley Supercluster, but I think your suggestion is better. It’s more point-like and since it’s farther away (well outside of the reachable universe) it results in a more uniform set of directions over long distances.
Of course, cultural influence will be big. If these explorers are Terragen then most likely the Milky Way’s north/south direction will be pretty deeply ingrained in their coordinate systems. They might keep on using that, since it’s not like manual astrolabe-style navigation will ever be relevant at that level of technology.
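A quick back-of-the-envelope check of the “more uniform directions” point above: the worst-case shift in apparent direction to a reference beacon after travelling a perpendicular baseline is arctan(baseline / distance), so the farther the beacon, the steadier the compass. The distances below are rough round numbers, and the last entry is purely illustrative.

```python
# How much does the direction to each reference point shift if you travel
# one Milky Way diameter (~100,000 ly) perpendicular to it? Rough numbers.
import math

references_ly = {
    "Galactic Center": 2.6e4,
    "Great Attractor": 2.5e8,
    "Shapley Supercluster": 6.5e8,
    "beacon far outside the reachable universe": 6.0e10,  # illustrative
}

baseline_ly = 1.0e5  # one Milky Way diameter

for name, distance_ly in references_ly.items():
    shift_deg = math.degrees(math.atan2(baseline_ly, distance_ly))
    print(f"{name}: apparent direction shifts ~{shift_deg:.5f} degrees")
```

The Galactic Center swings by tens of degrees over that trip, while the extragalactic references barely move at all.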
If this isn’t a military battle then that makes Israel’s actions look even worse.
They were triggered indiscriminately. Israel had no way of knowing who was holding each pager or where it was located when it went off.
It’s complicated, but this might be considered a war crime. A key quote from the article:
A booby trap is defined as “any device designed or adapted to kill or injure, and which functions unexpectedly when a person disturbs or approaches an apparently harmless object,” according to Article 7 of a 1996 adaptation of the Convention on Certain Conventional Weapons, which Israel has adopted. The protocol prohibits booby traps “or other devices in the form of apparently harmless portable objects which are specifically designed and constructed to contain explosive material.”
The prohibition is presumably intended to make it less likely that a civilian or other uninvolved person will get injured or killed by one of these seemingly harmless objects. If you’re booby-trapping military equipment or military facilities then that’s not a problem, civilians wouldn’t be using those.
I’m Canadian. I would say that I don’t think much about it in terms of current events; I haven’t heard much in the news about it in recent years, and I take that as probably a good sign. There used to be a steady stream of bad news, and “no news” lies along the path between “bad news” and “good news.”
I did see a video recently about Iraq’s plans for a giant new port facility on that little tidbit of Persian Gulf shoreline it has, and a road/rail link from it up through to Turkey and thence onward into Europe. It sounded like a very optimistic development if it can be seen through to fruition, opening an alternative trade corridor to the Suez Canal. Anything that diversifies a country’s economy is a good thing, and anything that removes single points of failure in global shipping networks is also a good thing. I can’t imagine the Houthi obstruction of the Red Sea will still be a thing by the time that route opens up, but at least it’ll be an option if something like it happens again.
Not in every way. They’re cheaper and faster.
If you simply don’t want to engage in a discussion with him, that’s fine; let him know that you’re not interested in talking about it. You don’t have to justify your choices to him. If you want to use a particular browser, that’s fine, and if he spontaneously decides he needs to “talk you out of it,” that’s a dick move. Tell him that you don’t want to debate the subject; it’s no skin off his nose, so he shouldn’t try to engage you in one.
But if you’re asking “how can I convince him that he’s wrong”, well that is engaging in the debate. And if you’re going to engage in a debate you should try to be as open about it as you’d like your debate opponent to be in turn. Have you considered that perhaps he has some valid points and is not taking that position just to be contrarian?
Personally, I find that it’s pretty much impossible to talk someone with a strongly-held position out of that position. The value of Internet debates with people like that is that lots of spectators who don’t have such strongly-held positions may be watching, but when it’s a one-on-one situation it’s likely to be a futile and frustrating effort with no benefit. So I would advise going with the “don’t bother engaging” route. But of course, if you feel strongly that you want to engage, I can’t change your mind on that and won’t try. It’s your time to spend.
I think it’s generally pointless, spiteful, and only harms ordinary users who might someday have found value in coming across your old posts on Reddit from a search. It doesn’t harm Reddit itself; the “value” of your individual account is very small compared to their vast archive. And they still have it anyway, since deletion just removes it from the public-facing front end. If the reason you’re deleting it is that you don’t want AI to be trained on it, that ship sailed long ago. There are downloadable archives of Reddit floating around from which it will never be deleted.
So I wouldn’t bother.
That’s not what they’re arguing, not even close.
Username checks out.