First of all, we’re quite far from having enough CPU power for general-purpose AIs, and even further from having general-purpose AIs that are smarter than we are.
When that becomes possible, which is currently projected to be something like a hundred years from now, the first AIs will require an entire data center full of computers to run, so if one goes berserk, turning it off will be quite easy.
Now, we can hypothesize that miniaturization will continue, and that one day there will be lots of computers powerful enough to run general-purpose AIs. (We are now at least 200 years in the future.) When that happens, it’s unlikely that one AI will rule everything; it’s much more likely that there will be billions of AIs, so if one goes crazy and starts killing people, it won’t have free rein, because there will be lots of people and AIs to stop it.
This period is likely to be contentious, though, because AIs will need to have their rights defined. On one hand, they will be sentient beings, so treating them like slaves to do our bidding will not be OK. On the other hand, they will also be able to copy themselves, and will most likely do so to handle minor tasks, which means they will treat clones of themselves as disposable — essentially another form of slavery. So how do we set up a legal framework that protects sentient beings like AIs? If we don’t get this right, we’ll see an AI uprising or revolution, which might not be fun.
There is also a chance that some AI will find and exploit security holes to take over every computer in the world, essentially becoming the overlord of everything. However, I think the chance of this happening is fairly low. Other calamities, like a runaway greenhouse effect or a virus that wipes out all of humanity, seem more likely to me.
Ultimately, whatever happens with AI, these systems will start out like children, and we’ll have quite a lot of time to raise them before they become a real threat. Hopefully we’ll do a good job raising them…