

Ooh perfect, now I can get crumbs all over my keyboard and say it’s part of the aesthetic, mom
yeah i mean, there's no way lol. even if the tech gets here that quickly, there's a 0% chance prices come down significantly on lower-capacity drives. these'll be at least $500 and possibly far, far more
Yeah this seems common. I had a friend who grew up with parents who alternated between English, Portuguese, Italian, and French, and he told me he wound up not being able to speak at all until he was over 2 years old. It didn’t affect him badly later on, and he always insisted it was worth it
Unless the game is solely bottlenecked by the CPU
yyyyep. Pretty much any mainline GPU made since 2010 should be able to handle TF2 just fine, and if you’ve got a 1060 or better I seriously doubt your GPU is the problem. The problem is that CPU usage is horribly optimized and it can only really utilize 2 threads (not cores, threads). After that it’s your clock speed that makes a difference.
I played competitively on a 680 and an overclocked i5-4690K @ 4.2 GHz until I finally upgraded last year, and would only dip below 100 FPS playing pubs on Halloween. In 6s I never went below 200.
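Since the bottleneck is CPU-side, most of the tuning people actually do for TF2 is in the config rather than the hardware. A rough sketch of the kind of autoexec.cfg settings commonly recommended (the specific values here are illustrative, not gospel):

```cfg
// autoexec.cfg -- sketch of common CPU-side tweaks; values are examples
fps_max 0            // uncap the framerate (or cap it just above your refresh rate)
mat_queue_mode 2     // force multi-threaded material queueing
cl_forcepreload 1    // preload assets up front to reduce mid-match hitching
```

None of this fixes the underlying two-thread limit; it just squeezes what the engine can already do.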
I mean, back in the day, I already used to get better performance when I booted into Ubuntu 16 instead of Windows. Not sure that's still the case with modern stock Ubuntu, but I imagine Mint would still do better thanks to having less bloat. I have trouble imagining that better shaders are going to help many people at all, though.
edit: the vulkan update probably won’t help. the switch to 64 bit has the potential to be HUGE.
This might help very slightly, but it isn't gonna fix it. It runs terribly on Windows too.
I recently read a neat little book called “Rethinking Consciousness” by S. A. Graziano. It has nothing to do with AI, but it's an attempt to describe how our myriad neural systems come together to produce our experience, how that might differ between animals with various types of brains, and how our experience might change if some systems aren't present. It sounds obvious, but the simpler the brain, the simpler the experience. For example, organisms like frogs probably don't experience fear. Both frogs and humans have a set of survival instincts that help us detect movement, classify it as threat or food or whatever, and immediately respond, but the emotional part of your brain that makes your stomach plummet just doesn't exist in them.
Humans automatically respond to a perceived threat in the same way a frog does–in fact, according to the book, the structures in our brains that dictate our initial actions in those instinctive moments are remarkably similar. You know how your eyes will automatically shift to follow a movement you see in the corner of your vision? A frog responds in much the same way. It’s not something you have to think about–often your eye will have darted over to the point of interest even before you realize you’ve noticed something. But your experience of that reaction is also much richer than it is possible for a frog’s to be, because we have far more layers of systems that all interact to produce what we call consciousness. We have a much deeper level of thought that goes into deciding whether that movement was actually important to us.
It's possible for us to continue to live even if we lose some parts of the brain–our personalities will change, our memory may get worse, or we may even lose things like our internal monologue, but we still manage to persist as conscious beings until our brains lose a large number of the overlying systems, or some very critical ones. Like the one that regulates breathing–though even that single function is somewhat shared between multiple systems, which is what lets you breathe manually (have fun with that).
All that to say: the things we're currently calling AI just don't have that complexity. At best, these generative models could fill out a fraction of the layers that would be useful for a conscious mind. We have developed very powerful language processing systems, at least in the sense of averaging out a vast quantity of data. Very powerful image processing. Audio processing. What we don't have–what, as near as I can tell, we haven't made any meaningful progress on at all–is a system to coalesce all of these processing systems into a whole. These systems still rely on a human to tell them what to process, for how long, and ultimately to check whether the result is reasonable. Processing all of those types of input simultaneously, choosing which ones to focus on in the moment, and continuously choosing an appropriate response? Barely even a pipe dream. And even all of that would be distinct from a system that forms anything like conscious thought.
Right now, when marketing departments say “AI,” what they’re describing is like that automatic response to movement. Movement detected, eye focuses. Input goes in, output comes out. It’s one small piece of the whole that’s required when science fiction writers say “AI.”
TL;DR no, the current generative model race is just tech stock market hype. The absolute best it can hope for is to reproduce a small piece of the conscious mind. It might be able to approximate the processing we’re capable of more quickly, but at a massively inflated energy expenditure, not to mention the research costs. And in the end it still needs a human double checking its work. We will need to develop a vast number of other increasingly complex systems before we even begin to approach a true AI.