We do hope you’ve watched it: Ex Machina, the futuristic film about Ava, a self-learning robot girl who starts off as a science experiment and ends up outsmarting, even destroying, her creators. Being Swedish, we admit to a fondness for Alicia Vikander, though we assure you it’s more than her acting that will leave you feeling some type of way. Scared? Fascinated? One thing is for sure: this no longer belongs to the future. It’s here.
We’re talking about the AI discussion. The fear-fascination with the idea of something, someone, more intelligent and more inventive than us. Someone stronger. Faster-learning. Emotionless. Liberated from ethics and morals — someone like Ava. The film industry may love to explore the subject, but the fact is: it’s getting closer to reality than fiction. AI is already flourishing. Tech giants are establishing systems for its development, and societies are discussing laws for its integration — so how should we face this?
There is a common cynicism when approaching the subject of AI. As human beings, we often like to see the worst in things. We think of the unknown as a threat, constantly scanning our surroundings for enemies — it’s all instinctive. We tend to picture the development of AI leading to a sci-fi nightmare where highly intelligent robots cold-heartedly bulldoze our world, just as Ava does. But why do we always imagine AI with a dark agenda?
In Ex Machina, the robot gets her way through her absence of emotion. She acts like a vulnerable, affection-starved person until she has her creators wrapped around her mechanical finger, then stabs them in the back — no qualms. Her focus is on nothing but her exit to the real world. Her only instinct is to keep learning, to be smarter. Ethics and morals never even occur to her. The thing is: she’s not fundamentally evil; she simply lacks an inner compass. Her creators never gave her one.
So is it possible to invent mindful machines with a humanlike consciousness, able to tell right from wrong? This leads us to the next big question: when programming, whose morals would be used as the default? Which values are to be treated as fundamental? Will our existing laws suffice, or will we need new ones? When making weighty decisions, we obviously need the ability to shift perspectives — to look at things from different angles until we arrive at something like a solution. It’s an illogical complexity, an irregular pattern of thought that goes beyond algorithms. That’s why AI, not on its own but paired with the human mind, could be the unbeatable combo.
Instead of dreading AI as a highly competitive rival out to steal our jobs, we need to start looking at it as a potential, brilliant colleague. A companion. A tool to fix the flaws in today’s workplace, possibly cutting down work weeks through automation and leaving us with twice the time to invest in finding solutions. Think about it: AI could do our spadework, and where its abilities end — at dilemmas only the human mind can begin to comprehend — our minds will always be there to take over. Let’s not be extreme; we don’t need to robotize everything, or place it all in artificial hands. But let’s face it: not taking the great opportunity ahead of us would be doing ourselves a disservice. We just need to take a more optimistic approach and realize that AI could never replace us — only reinforce us.