• 46 Posts
  • 159 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • Ah, but those “intelligent” people cannot be very intelligent if they are not billionaires. After all, the AI companies know exactly how to assess intelligence:

    Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup’s profits, according to a new report from The Information. … The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect. (Source)


  • Doing it based on intent would create some difficult grey areas - for example, video game creators would have to make their games as compelling as possible without crossing some vague threshold and breaking the law. The second approach, regulating how different types of data can be used, sounds more promising.

  • It would be a single point of failure for many apps if F-Droid’s curators were dishonest or compromised - they could insert bad things into lots of packages without changing the public source code. But it also becomes the only point where malware or backdoors could be inserted that way; otherwise we’d have to trust every single developer to build honestly from their source code, which is what we’d be doing if they just put prebuilt binaries up there. I don’t know how rational I’m being, but the fact that F-Droid builds each app itself makes me trust its apps more.

  • Ah, I didn’t understand that you were asking about a fictional scenario. I don’t know about your main question, but I like your notion of the social integration of humanoid AGIs with unique life experiences, and your observation that there’s no need to assume AGI will be godlike and to be feared. Some framings of the alignment problem carry a strange assumption that AGI would be both smarter than us and yet less able to recognize nuance and complexity in values, and that it would therefore be likely to pursue some goals to the exclusion of others in ways so crude that we’d find them horrific.

    There’s no reason an AGI with a lived experience of socialization not dissimilar to ours couldn’t come to recognize the subtleties of social situations and respond appropriately and considerately. Your mortal, flesh-and-blood AI need not be some towering black box, occupied with its own business, whose judgements and actions we struggle to understand; if integrated into society, it would be motivated like any social being to find common ground for communication and understanding, and tolerable societal arrangements. Even if we’re less smart, that doesn’t mean it would automatically consider us unworthy of care - that assumption always smells like a projection of the personalities of the people who imagine it. And maybe it would have new ideas about those arrangements that could help us stop repeating the same political mistakes again and again.