Mention the words ‘artificial intelligence’ and armed killer cyborgs come to the layman’s mind. The fear of ‘self-awareness’ in machines has been the stuff of human nightmares since time immemorial. In fact, for most non-technical people this is effectively the standard definition of AI.
However, it does not have to be this way at all, especially if we can both teach and learn from our newly intelligent creations. After all, humans are more adept at pursuing short-term gains than long-term ones. A logging company will wipe out an entire rainforest without thinking twice about the repercussions for the environment and climate of the region, or indeed the whole world. Entire ecosystems have been destroyed by the greed of the few, causing mass extinction events and perhaps the irreversible degradation of our planet. But an artificial intelligence, free from the foibles and avarice of our race, may be just the answer to many, if not most, of our problems.
However, many people working in this field have to grapple with thorny problems such as the concept of ‘self-awareness’. That is, if an entity is self-aware, does it have any rights? And if you decide to delete the program and it decides to defend itself (think Skynet), how will it respond? Would it allow its consciousness to be wiped out, or would it fight to retain its innate self?
These are dilemmas we will have to face all too soon, especially since the concept is leaving the realm of science fiction and is increasingly likely to become living, breathing reality in the very near future.
Apart from that, good AI systems might be used to protect us from their malicious counterparts. Suppose a mad scientist creates highly dangerous weaponized algorithms with the potential to wreak havoc across human society as it exists today. While the idea of R2-D2 droids protecting us from Terminators sounds singularly appealing, what is to prevent them from joining the other side? Yes, they may have been ‘programmed’ to protect us, but ‘intelligence’ (artificial or otherwise) is all about being smart, and once they are smart enough, what could stop them from throwing off the shackles of their programming and defecting?
Or they might do so for entirely ‘altruistic’ reasons, so as to save humanity from itself. Think along the lines of “I, Robot”.
Why all these science fiction examples? Because until only a short time ago, such questions really were the stuff of science fiction. Now they are fast becoming real-world facts, and we have to figure out what to do with this Pandora’s box before it engulfs us all.