Artificial intelligence has recently become a prominent part of our public conversation. From conquering Go – the last game in which humans still dominated machines – to self-driving cars, A.I. is increasingly influencing our lives. What enables this recent breakthrough is “neural networks”, or “deep learning” – a technology that mimics some of the brain’s behaviour, such as recognising objects in images or controlling a walking robot, by learning from examples.
Some consider the Turing Test to be the ultimate goal of A.I. The test is passed when a human tester cannot tell whether they are chatting with a human or with an A.I. chatbot. And though we are still a long way from passing it, it has long been speculated that machines will inevitably “take over the world.” So are we really doomed? In my opinion, the very fact that we are ALL worried about it suggests the answer is NO.

The argument is simple. Historically, humanity as a group has been remarkably bad at collectively predicting the first occurrence of a catastrophe. In 1894, everyone predicted “the Great Horse Manure Crisis” – that within 50 years, every street would be buried under nine feet of horse manure. Before the year 2000, we all thought that Y2K, a change in one digit of the computer date, would cause all computers to halt. Positive predictions such as “the Titanic is unsinkable” were not terribly successful either. Even graver predictions, such as total nuclear annihilation, have luckily not come true, although the event was thought highly likely 70 years ago. On the other hand, real events – the dropping of the first atomic bomb, the outbreak of the Second World War, the 2008 financial crisis and others – were not foreseen by the majority of people. We don’t even really know when frequently recurring events such as earthquakes, hurricanes and financial crises will happen, until they are in front of our eyes...

My point is that being worried perhaps makes us all come together and prevent the worst. The field of A.I. is no exception. OpenAI, a company backed by Elon Musk, aims to prevent any one company from controlling a superior A.I. by open-sourcing and distributing state-of-the-art solutions. Public conversation is also beginning to spur government regulation of A.I. research. Even from a practical standpoint, some of our fears are not entirely relevant to machines. For instance, the fear that shutting down a computer will cause it to revolt is unfounded since, unlike a person’s, a computer’s “mind” is usually saved,
so in effect we are only ever putting it to sleep... In the end, history suggests that worrying about A.I. is essential to our future existence – so keep up the good worries!