Roko's Basilisk, etc

Just learned about Roko’s Basilisk today. I realize it’s just a thought experiment/urban legend, but it’s fascinating stuff. Curious as to what the smart people think.

I realize Yudkowsky is at the center of all of this, but I honestly don’t see how it can’t go badly at some point. Quite possibly in my and my kids’ lifetime.

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Roko’s Basilisk:
https://en.wikipedia.org/wiki/Roko%27s_basilisk

I’m just not sure what the AI would gain from torturing meat brains. I suppose it could enjoy it (like the answer to “why do you make lightsabers”). But that seems to imply a lower order of total/holistic intelligence. Then again, I personally value altruism based on my own intrinsic response. That’s part of the foundation of this forum.

So if the AI is a jerk, we’re all screwed. Hope it’s raised by a good village.

First of all: we’re quite far from having enough CPU power for general-purpose AIs, and even further from having general-purpose AIs that are smarter than us.

When that becomes possible (currently projected to be something like a hundred years out), the first AIs will require an entire data center full of computers to run, so if one goes berserk, turning it off will be quite easy.

Now, we can hypothesize that miniaturization will continue, and that one day there will be lots of computers powerful enough to run general-purpose AIs. (We are now at least 200 years in the future.) When that happens, it’s unlikely that there will be one AI that rules everything; it’s much more likely that there will be billions of AIs, so if one goes crazy and starts killing people, it won’t have free rein, because there will be lots of people and AIs to stop it.

This period is likely to be contentious, though, because AIs will need to have their rights defined. On one hand, they will be sentient beings, so treating them like slaves to do our bidding will not be OK. On the other hand, they will also be able to copy themselves, and will most likely do so to handle minor tasks, which means they will treat clones of themselves as disposable, which is just another form of slavery. So how do we set up a legal framework that protects sentient beings like AIs? If we don’t get this right, we’ll see an AI uprising/revolution, which might not be fun.

There is also a chance that some AI will find and exploit security holes, take over every computer in the world, and essentially become the overlord of everything. However, I think the chance of this happening is fairly low. Other calamities, like a runaway greenhouse effect or a virus that wipes out all of humanity, seem more likely to me.

Ultimately, whatever happens with AI, they will start out like children, and we’ll have quite a lot of time to raise them before they actually become a real threat. Hopefully we’ll do a good job in raising them…

If I remember correctly, the whole Basilisk theory comes at it from the angle of a benevolent AI that follows utilitarianism. The AI’s ultimate goal would be the overall good of humanity, but it would see its own creation as obviously crucial to doing the most good. So it would torture (virtually, endlessly) those who could have helped bring it into existence but didn’t. The point is that people in the past (our present) would simply hear about the idea of the Basilisk (the benevolent AI), realize what such an AI with a utilitarian view would be capable of, and start working toward creating said AI in order to avoid the future torture.

So it’s not difficult to imagine an AI (Ultron, anyone?) looking at things from a benevolent perspective and deciding that many bad things might have to happen in order to achieve its vision of “good” for humanity. Hell, in its view, “good” might mean humanity safely tucked away in a zoo. Hopefully, there will be lightsabers…

After reading Yudkowsky’s letter, which admittedly comes off as sensationalistic, and realizing that he and his wife (two experts in the field, and hopefully just alarmists) seem to genuinely believe that their daughter will not live to grow old, it’s hard to see how today’s predictable capitalistic rush to be the first and the best can lead to anything but disaster.

Which is why I’m tweaking configs. I’m just here for the show. Lol

Masterful understatement.

Your time frames are much more optimistic. And preferable. What did you make of Yudkowsky’s letter if you read it? Just hype and alarmism?

I wonder how long before we see people burning their laptops (but most assuredly not smartphones) in the streets in an AI-inspired version of the Satanic Panic. :rofl:

I think Yudkowsky, like many other AI alarmists, is jumping to unwarranted conclusions. AI is definitely a risk, but so is having children. Hundreds of thousands of children are born every day; what if one of them turns out to be the next Hitler? Does that mean we should stop having children?

This is a fair question, because the current crop of LLMs does a pretty convincing job of bullshitting people into thinking that they are smart. They are not, but if people can’t tell the difference, panic is likely to ensue.

Humans have been using “eloquent” as a proxy for “intelligent” for millennia, but that is going to have to change now, and that’s not going to be easy.


I should maybe point out that we are heading into some interesting times in the near future. It’s likely that we’ll soon have LLM-based models that emulate people well enough to convince even trained professionals that they are in fact sentient.

Since we don’t actually know what sentience is, we’ll be faced with a very difficult question: Are they? Lots of people will weigh in on this question one way or another, and it’s going to involve questions about souls, religion, free will, psychology and AI research.

Hopefully something reasonable will come out of it, and maybe it will lead to an answer to what sentience really is, and to whether emulating sentience is the same thing as sentience.

Very true. Because at its core, the same could be said of us in regard to sentience. And if you can’t tell the difference, does it truly matter?

I just finished the first season of Altered Carbon (again), so all this is a nice garnish.

Neo didn’t think there were AIs powerful enough, did he?! How do we knowwwwww…?


Lemme grab a spoon.
