Except people can and clearly have been doing so. Whether or not the comparison is fully accurate regarding cost-quality is another matter.
I was friends with her and I trusted that she would handle it well if she wasn’t interested. And turns out she indeed wasn’t interested, but we did talk about it and decided to just stay friends. It was a little awkward as my feelings for her still lingered a bit, but eventually that passed and I’m now with a wonderful girl who I think is a much better match for me.
We’re still friends to this day.
If producing an AGI is intractable, why does the human meat-brain exist?
Ah, but here we have to get a little pedantic: producing an AGI through currently known methods is intractable.
The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.
There’s also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but it would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.
And then there’s the argument you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, was long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years.
And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?
This is a gross misrepresentation of the study.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.
That’s not their argument. They’re saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.
Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
They’re not talking about achieving it in general; they only claim that no known techniques can bring it about in the near future, contrary to what the AI-hype people claim. Again, they prove this.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
That’s not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. a computer with limitless memory, infinite and perfect training data, the ability to sample without any bias, the assumption that current techniques can eventually create AGI, an AGI that only has to be slightly better than random chance rather than perfect, etc…), and then presented a computational proof that even this scenario leads to a contradiction with established results.
Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There’s a technical explanation in the paper that I’m not going to try and rehash since it’s been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction, as we have proof, hard mathematical proof, that no such polynomial-time algorithm can exist: the problem is NP-hard. Therefore, learning an AGI this way must also be NP-hard. And because every known AI learning method runs in tractable (polynomial) time, it cannot possibly be doing the work required, and so cannot lead to AGI. It’s not a strawman, it’s a hard proof of why it’s impossible, like proving that pi has infinite decimals or something.
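If it helps, this is the rough shape of the argument as I remember it, in my own notation rather than the paper’s, and glossing over the details of the actual reduction:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Rough sketch in my own notation (not the paper's); details of the reduction omitted.
If an idealised ``AI-by-Learning'' procedure ran in polynomial time, it would yield a
polynomial-time algorithm for Perfect-vs-Chance, i.e.
\[
  \text{Perfect-vs-Chance} \;\le_p\; \text{AI-by-Learning}.
\]
Perfect-vs-Chance is NP-hard, hence AI-by-Learning is NP-hard as well, and so
(assuming $\mathrm{P} \neq \mathrm{NP}$) no tractable learning procedure can solve it.
\end{document}
```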
Ergo, anyone who claims that AGI is around the corner either means “a good AI that can demonstrate some but not all human behaviour” or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we’d still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don’t offer a thought experiment, they provide a computational proof for this.
I won’t pretend I understand all the math and the notation they use, but the abstract/conclusions seem clear enough.
I’d argue what they’re presenting here isn’t the LLM actually “reasoning”. I don’t think the paper really claims that the AI does either.
I think the CoT process they describe here is somewhat analogous to a very advanced version of prompting an LLM with something like “Answer like a subject matter expert” and finding that it improves the quality of the answer.
They basically help break the problem into smaller steps and get the LLM to answer smaller questions based on those smaller steps. This likely also helps the AI because it was trained on these explained steps, or on smaller problems that it might string together.
I think it mostly helps to transform the prompt into something that is easier for an LLM to respond accurately to. And because each substep is less complex, the LLM has an easier time as well. But the mechanism to break down a problem is quite rigid and not something trainable.
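To make “breaking the problem into smaller steps” concrete, here’s a toy sketch of what I mean; `ask_llm` is a hypothetical stand-in for whatever model call you’d actually use, not a real API:

```python
# Toy sketch of CoT-style prompt decomposition, not how any particular product works.
# `ask_llm` is a hypothetical stand-in for an actual model call.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")

def answer_with_steps(question: str) -> str:
    # 1. Ask the model to split the question into simpler sub-questions.
    plan = ask_llm(
        "Break this question into a short numbered list of simpler sub-questions:\n"
        + question
    )
    sub_questions = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Answer each sub-question separately, feeding earlier answers back in as context.
    notes: list[str] = []
    for sub in sub_questions:
        context = "\n".join(notes)
        notes.append(ask_llm(f"Context so far:\n{context}\n\nAnswer this step: {sub}"))

    # 3. Combine the intermediate answers into a final answer.
    return ask_llm(
        "Using these intermediate answers:\n" + "\n".join(notes)
        + f"\n\nGive a final answer to: {question}"
    )
```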
It’s super cool tech, don’t get me wrong. But I wouldn’t say the AI is really “reasoning” here. It’s being prompted in a really clever way to increase the answer quality.
It’s not a direct response.
First off, the video is largely speculation; the author doesn’t really know how it works either (or at least doesn’t seem to claim to). They have a reasonable grasp of the broad strokes, but what they believe it implies may not be correct.
Second, the way O1 seems to work is that it generates a ton of less-than-ideal answers and picks the best one. It might then rerun that step until it reaches a sufficient answer (as the video says).
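If I’m reading that description right, it’s roughly this kind of sample-and-select loop (a toy sketch with hypothetical helper names; the real internals aren’t public):

```python
# Toy sketch of the "sample many answers, keep the best, maybe rerun" loop described
# in the video. `generate_candidate` and `score_answer` are hypothetical stand-ins;
# the real system's internals aren't public.

def generate_candidate(question: str) -> str:
    raise NotImplementedError("stand-in for sampling one answer from the model")

def score_answer(question: str, answer: str) -> float:
    raise NotImplementedError("stand-in for the model grading a candidate answer")

def best_of_n(question: str, n: int = 16, good_enough: float = 0.9, max_rounds: int = 4) -> str:
    best, best_score = "", float("-inf")
    for _ in range(max_rounds):
        for _ in range(n):
            candidate = generate_candidate(question)
            score = score_answer(question, candidate)
            if score > best_score:
                best, best_score = candidate, score
        if best_score >= good_enough:
            break  # a "sufficient" answer was found, stop rerunning
    return best
```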
The problem with this is that you still have an LLM evaluating each answer based on essentially word prediction, and the entire “reasoning” process is happening outside any LLM; its thinking process is not learned, but “hardcoded”.
We know that chaining LLMs like this can give better answers. But I’d argue this isn’t reasoning. Reasoning requires a direct understanding of the domain, which ChatGPT simply doesn’t have. This becomes evident when you ask it questions using terminology that appears in multiple domains; it has a tendency to mix them up, which you wouldn’t do if you truly understood what the words mean. It is possible to get a semblance of understanding of a domain in an LLM, but not in a generalised way.
It’s also evident from the fact that these AIs are apparently unable to come up with “new knowledge”. Such an AI is not able to infer new patterns or theories; it can only “use” what is already given to it. An AI like this would never be able to come up with E=mc² if it hadn’t been fed information about that formula before. Its LLM evaluator would dismiss any “ideas” that come close to it, because it has never seen them before and therefore judges them unlikely to be true/correct.
Don’t get me wrong, an AI like this may still be quite useful w.r.t. information it has been fed. I see the utility in this, and the tech is cool. But it’s still a very, very far cry from AGI.
This is true, but it’s specifically not what LLMs are doing here. It may come to some very limited, very specific reasoning about some words, but there’s no “general reasoning” going on.
Shareholders can demand external audits under threat of selling the stock. There’s plenty shareholders can do (and have done in the past). They don’t just sit idle, you know.
Shareholders seek to maximize profits. If that includes a lawsuit to squeeze out even more investments, then why not?
They never bothered to check whether Boeing did what it had to do security-wise. Only once it threatened their profits did they spring into action.
⛤
I think the current logo would work fine as a unicode character. I dislike the three anuses for a logo.
It’s additional space around components showing what’s behind it. So you’re seeing more stuff in between windows, making it look less organised imo. The “whitespace” isn’t really white here; it looks like another unnecessary element crammed in between two windows that might as well just sit neatly next to one another, making the windows slightly larger. I also like being able to move my mouse to the edge of things (e.g. the taskbar) without ending up in the whitespace, which causes misclicks for me.
Again, my opinion. Not stating absolute truths here.
I’m surprised you find that the gaps make things feel less cluttered. Imo it looks considerably more cluttered.
… No, you just use Windows’ built-in rollback feature. Which I think even auto-recovers these days if it detects a failure to boot after an update.
If you’re talking about Cube World by any chance, the dev is still working on it and posts semi-regular updates: https://nitter.net/wol_lay
Video file sizes are actually getting smaller all the time, but when filming we don’t save a neatly compressed video file. On-the-fly compression and encoding would help a ton in reducing camera video files, but is very expensive at the moment CPU-wise.
I often had an issue where an audio device wouldn’t show up or work. Just running the troubleshooter for it probably triggers some audio device rediscovery, which managed to fix it every single time I had the issue.
“OpenAI’s GovernGPT 11 gets better economic results than Meta’s ‘LincolnBot’, more at eleven”
I noticed my ability to keep my attention on a single subject increased dramatically after Reddit shit the bed and killed 3rd-party apps, making me effectively quit social media for a month or two.
I should really drop Lemmy as well; as much fun as it is, it’s constantly nagging my brain for attention. It’s better than Reddit imo, but short-form content really does make you less able to keep your mind focused. After all, a distraction is just a couple of taps away…
Try taking a break for a month and see how much you actually remember. In my experience it was depressingly little, and I’m not generally bad with languages at all.
Misspelled “Invidious” there :)
Looks neat though!