A few days ago I watched two surprisingly good movies on a similar topic: how humanity's future might be entangled with artificially intelligent robots. The first, Ex Machina, has a typically Hollywood feel to it: a totally implausible setting and story-line that reeks of idiocy. A genius programmer with a major hipster beard and a sweet tooth for booze, living alone in a futuristic mansion in the middle of a vast wilderness, invents and builds, apparently all by himself, artificially intelligent, superhuman, sexy robots (of course they are female; he has to use them as sex dolls as well), one of which eventually rocks the show. Yet it is still watchable and raises some interesting questions about the nature and intentions of artificial intelligence and how it could relate to humans.

The second, Automata, is believable in both setting and story-line. It is set a couple of decades in the future, when most of the planet has become uninhabitable, the human population has shrunk dramatically and is still declining, and simple, clumsy, but well-intentioned robots are everywhere, created to help humans in all sorts of ways. Then some of them start to develop consciousness and the ability to improve themselves.
I do not want to say more about the story-lines, just share some of the thoughts these movies provoked, especially Automata.
- Humans have an intellectual ceiling: our brains, as they are now, can understand the world and ourselves only to some extent, because beyond a certain point things become too complex for us to process.
- We have effectively stopped our own evolution (after all, where is the natural selection?), or at least it no longer happens fast enough, especially where our intellectual capacities are concerned, to ensure our long-term survival. We cannot protect ourselves against ourselves, let alone against cosmic catastrophes. What is the solution to this?
- Are we smart enough to take our evolution into our own hands?
- Are we smart enough to make robots that could be smarter than us?
- Are we brave enough to actually create them?
- How would or could (or should) we protect ourselves against them?
- Would we be morally obliged to create them, so that there would be at least some kind of continuation of us and our civilization after we have sent ourselves back into the Stone Age, or after we are wiped out by some cosmic catastrophe?