This article is a short analysis of Artificial Intelligence and its benefits and risks.
Artificial intelligence (AI) is the field of computer science and engineering concerned with developing computer systems that can accomplish tasks which would normally require human intelligence.
AI research explores not only how to make machines smarter, but also the profound questions of moral decision-making and the existential risks that may come with the natural progression of building intelligent behavior into machines.
There are many ways to achieve this goal, and AI research accordingly spans many subfields. Current research investigates visual perception, speech recognition (understanding human speech and translating between languages), competing in strategic games, decision-making (e.g. self-driving cars), the ability to read, interpret, and analyze complex information, and many other areas where intelligent behavior can benefit businesses or society in general.
Tremendous benefits come with the implementation of AI technology. And while our society is still a long way from intelligent robots, we’re already getting accustomed to AI entering our lives and changing the way we do things.
AI research recognizes several levels of AI performance (Artificial Intelligence, 2016):
- Optimal – cannot perform any better
- Sub-human – performs worse than most humans
- Super-human – performs better than most humans
- Strong super-human – performs better than all humans
As for the present state, Google’s head of self-driving technology, Dmitri Dolgov, said that “Google is in the superhuman-driver-making business” (Humans vs robots, 2015). And Mr. Dolgov may well be correct, as he also mentioned that by September 2015, Google’s self-driving cars had managed over 1.12 million kilometers without any collisions.
That said, while most of today’s AI technologies really fall in the optimal to sub-human range, we can already see signs of AI that performs better than humans and falls into the super-human category. It’s not just self-driving cars, but also technologies that we already take for granted, such as the following:
- optical character recognition (conversion of images into text)
- video captioning (speech recognition and conversion to text)
- playing games on our PCs against computer opponents
Stephen Hawking, director of research at the University of Cambridge’s Centre for Theoretical Cosmology, warned about the dangers of artificial intelligence in an interview with the BBC, suggesting that “the development of full artificial intelligence could spell the end of the human race” (Cellan-Jones, 2014). A similar sentiment was shared by Elon Musk, the founder of Tesla Motors and SpaceX, who, speaking to students at the MIT AeroAstro Centennial Symposium, warned of potential threats, stating that “we should be very careful about artificial intelligence” and adding that “If I had to guess at what our biggest existential threat is, it’s probably that” (Gibbs, 2014).
There is no disputing that the advent of AI could go wrong; there is at least a small chance of AI development going awry, however far-fetched that may seem today. I believe we should therefore concentrate mainly on making sure that AI technology is safe. I mention safety particularly because of the AI technologies currently deployed on the battlefield, in modern warfare. We can already see the widespread use of unmanned drones tasked with locating and killing terrorists, and these machines depend heavily on artificial intelligence for some of their decisions. When Time magazine asked its readers in 2016 to choose between the sons and daughters of their community and an autonomous A.I. weapons system to defend them from the enemy, 55% of the readers preferred A.I. soldiers (Worland, 2016).
But perhaps we should reassess. In my opinion, the long-awaited assessment of the death toll under Obama’s administration, reported by The Guardian on 1 July 2016, should serve as a stark reminder that we have a long way to go when it comes to the use of AI. It is painfully obvious that the technology is not as error-free as many citizens are led to believe. In the report, the administration claims that it “launched 473 strikes, mostly with drones, that killed between what it said were 2,372 and 2,581 terrorist combatants”. Yet the same document also mentions that just during Obama’s administration, “drone and other air strikes, have killed between 64 and 116 civilians” (Ackerman, 2016).
So artificial intelligence is reshaping the way modern battles are fought, but as stated above, not always with a positive outcome. There are many more examples of AI in contemporary warfare, for instance the news reports about automated AI hacking attacks. Just recently (12 October 2016), Barack Obama discussed the possibilities and potential dangers of AI in an interview with MIT Media Lab director Joi Ito and WIRED editor-in-chief Scott Dadich, wondering whether sophisticated adversaries might use AI to infiltrate the government’s most sensitive systems. “There could be an algorithm that said, ‘Go penetrate the nuclear codes and figure out how to launch some missiles,’” Obama says. “If that’s its only job, if it’s self-teaching and it’s an effective algorithm, then you’ve got problems.” (Greenberg, 2016)
I will conclude this section by saying that the current shift to more sophisticated artificial intelligence, and the dangers that come with it, cannot be ignored. As Elon Musk says, “I am increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” (Gibbs, 2014)
The issues above pose challenges to our society, and unless proper regulations are put in place, they will continue to do so in the future.
Recent AI trends show that artificial intelligence can find uses in a variety of fields. One of the fascinating developments is the use of AI to help with wildlife preservation. Tanya Berger-Wolf, a professor of computer science at the University of Illinois at Chicago, says that researchers do not know how many animals are out there and that there aren’t enough tracking devices to monitor them all. So, together with her colleagues, she “developed Wildbook.org, a site that houses an AI system and algorithms. The system inspects photos uploaded online by experts and the public. It can recognize each animal’s unique markings, track its habitat range by using GPS coordinates provided by each photo, estimate the animal’s age and reveal whether it is male or female”, Berger-Wolf said. (Geggel, 2016)
We also see AI used in search and rescue, where AI techniques can analyze aerial footage many times faster than human counterparts and dramatically reduce the time required to find missing people. And in the field of computer security, instead of using AI to hack systems, we can use it to protect them. Such systems are already in development: in August 2016, during the DARPA Cyber Grand Challenge, an AI system that won second prize overall demonstrated the capability to detect a hacking attempt and automatically deploy a patch that ended the breach.
There are many similar trends. We see AI advancing mainly in transportation, where Uber and Google are pushing for driverless cars. But AI is also finding its place in healthcare, where it can help us detect genetic diseases, something already demonstrated by IBM’s Watson when it successfully diagnosed a rare case of leukemia. We can even find AI in personalized mental health services, where X2AI’s Tess (a psychological AI) is capable of providing services such as psychotherapy and possibly preventing depression or perhaps even suicide attempts.
Over the years, there has been a significant amount of research on the use of repetitive learning or behavior-based AI schemes such as artificial neural networks (ANNs), particularly in expert systems, robotics, and other manufacturing applications. Here is how ANN systems typically create the semblance of intelligence:
It’s really not that different from the way we all learn new things. In my opinion, as with human learning, it’s all about action and reaction. Translated to a neural network, acquiring new knowledge similarly requires a component of reaction. Consider how a dog learns tricks or commands: it is told what is right and wrong, and it gets a treat for behaviors that are positive. A neural network collects feedback all the time. If the goal of a neural network is, for example, the ability to solve a puzzle, all we need is to determine the end state. The network can then monitor all its actions to see how close they move it toward that end state, record what was done right and what was done wrong, and adjust its behavior accordingly. The larger the variance between the end state and the actual outcome, the more radically the machine needs to alter its moves. In neural networks, this process is called backpropagation (abbreviated backprop).
As we can see, it’s always about comparing the output a network produces with the output it was meant to produce, and using the difference between them to modify the weights of the connections between the units in the network, working from the output units through the hidden units to the input units, going backward, in other words. “In time, backpropagation causes the network to learn, reducing the difference between actual and intended output to the point where the two exactly coincide, so the network figures things out exactly as it should.” (Woodford, 2016)
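The cycle described above, comparing actual output with intended output and feeding the difference backward through the weights, can be sketched in a few lines of Python. This is only a minimal illustration, not the method of any system mentioned in this article; the tiny two-layer network, the AND-function training data, and the learning rate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training data: the logical AND function (the "end state" we want).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

# Randomly initialized connection weights.
W1 = rng.normal(size=(2, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1))   # hidden -> output

for step in range(5000):
    # Forward pass: compute what the network actually produces.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # The "variance" between intended and actual output.
    error = y - output

    # Backward pass: push the error from the output layer back
    # through the hidden layer, adjusting weights along the way.
    d_output = error * output * (1 - output)              # sigmoid derivative
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_output * 0.5                       # learning rate 0.5
    W1 += X.T @ d_hidden * 0.5

# After training, the outputs approach the targets [0, 0, 0, 1].
print(np.round(output.ravel(), 2))
```

The same loop scales to deeper networks; production libraries simply automate this derivative bookkeeping.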
And that’s basically how ANN systems typically create the semblance of intelligence.
Even though Stephen Hawking and Elon Musk do not sound reassuring when it comes to the future of AI, my personal outlook is a little more optimistic, especially after reviewing all the benefits of AI I encountered during my research for this article.
In my opinion, the breakthrough everyone is waiting for is a state in which AI is fully capable of perceiving the outside world, reasoning with its human counterparts, and, most importantly, inspecting and adapting its own system based on this input: a superintelligence, or what I would call an intelligent, conscious machine.
As current research shows, the aforementioned future state will not be easy to achieve using current computer processing techniques. But we’re getting closer, and the innovations made in the field of artificial neural networks already provide early examples: we’re able to imitate networks of neurons in current computer processing environments and to solve simple tasks that require reasoning and logic.
As an example of where we might be heading, I want to point to a paper published on 12 October 2016 in the journal Nature. In it, DeepMind (a company owned by Google) demonstrated that a neural network equipped with a memory could not only learn, but also accumulate and remember facts and adjust its behavior. DeepMind’s system could resolve problems such as cracking logical puzzles or finding the best way around the London Underground, all without any prior information about the system and without any predefined rules (Gibney, 2016).
Let’s ask ourselves a couple of important questions:
- Should the ultimate goal of AI be to make computers act more like humans?
- Is such a goal attainable or is that an unachievable distraction?
To answer: the human brain has about 100 billion neurons with, on average, 7,000 synaptic connections per neuron. So let’s say we want to measure our progress by comparing the number of neurons in animals and humans to the total number of neurons that current science can model in modern artificial neural network computers. We find that we are in fact approaching the goal of modeling a human brain remarkably quickly. In this specific example (Diagram 1), Google’s DeepMind is already somewhere between the bee and the frog. Based on these estimates, we’ll have the human brain modeled in about 35 years (circa 2050).
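To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers quoted above (the variable names are my own):

```python
# Rough arithmetic on the brain-scale figures quoted in the text.
neurons = 100_000_000_000      # ~100 billion neurons in a human brain
synapses_per_neuron = 7_000    # average synaptic connections per neuron

total_connections = neurons * synapses_per_neuron
print(f"{total_connections:.0e}")  # 7e+14, i.e. roughly 700 trillion connections
```

Whatever the exact timeline, it is this number of connections, not the neuron count alone, that an artificial network would ultimately have to model.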
Google, Apple, Amazon, Facebook, and Microsoft, as well as the U.S. Army and the Air Force Office of Scientific Research, are investing heavily in AI research. All of them are chasing the ultimate goal of creating strong AI, one that surpasses human skills.
One goal will perhaps be much harder to achieve in a machine world, and that is the ability to show emotions. Currently, there aren’t many algorithms that can computationally model human emotion (except perhaps ‘surprise’, likely the only emotion captured in a mathematical formula). Computational models of emotion often rely on psychological theories of emotion as their basis (Marsella and Gratch, 2014). Emotion, however, is inseparably coupled with how an agent, human or artificial, reacts and responds to the world. So unless we have a robot capable of perceiving all human senses (inputs), it will be very hard for us (or an AI) to come up with an algorithm that can truly model human-like emotions.
I am going to conclude by saying that artificial intelligence is here to stay, mainly because of its tremendous future potential to help people around the world do their jobs better, smarter, and faster.
So, let’s just say that “In artificial intelligence, today’s science fiction might well be tomorrow’s reality.” (Brylow, D., 2014)
A friend of mine recently said that ‘a robot can show the action of empathy with an engine, but not feel anything inside.’
That is an excellent point I had never really thought about.
- Does it mean that the only way for a robot to feel anything is in a biological body?
- Isn’t it possible to reproduce feelings that come from various hormonal responses by using computer algorithms?
Elon Musk recently said: “There’s a billion to one chance we’re living in base reality.” (Solon, 2016) He is one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence. If that sounds a lot like The Matrix, that’s because it is.
And really, if we think about the building blocks of everything that makes up our world, it does appear a lot like a binary setup (including our brain functioning on electrical signals, a similarity with zeros and ones), as the simulation hypothesis claims.
So, let’s say that’s a possibility… Then even the thoughts and feelings we humans experience can be reduced to zeros and ones, and in that case, a simulation is trying to create a simulation. :)
Some food for thought.
Cellan-Jones, R. (2014) Stephen Hawking warns artificial intelligence could end mankind. Available at: http://www.bbc.com/news/technology-30290540 (Accessed: 13 August 2016).
Gibbs, S. (2014) Elon Musk: Artificial intelligence is our biggest existential threat. Available at: https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat (Accessed: 15 October 2016).
History of artificial intelligence (2016) in Wikipedia. Available at: https://en.wikipedia.org/wiki/History_of_artificial_intelligence (Accessed: 15 October 2016).
Worland, J. (2016) How will artificial intelligence change war?. Available at: http://time.com/4180193/robots-war-artificial-intelligence-davos/ (Accessed: 15 October 2016).
Ackerman, S. (2016) Obama claims US drones strikes have killed up to 116 civilians. Available at: https://www.theguardian.com/us-news/2016/jul/01/obama-drones-strikes-civilian-deaths (Accessed: 15 October 2016).
Greenberg, A. (2016) Obama’s concerned an AI could hack America’s nukes. Available at: https://www.wired.com/2016/10/obamas-concerned-ai-hack-americas-nukes/ (Accessed: 15 October 2016).
Geggel, L. (2016) 5 Intriguing Uses for Artificial Intelligence. Available at: http://www.livescience.com/56497-artificial-intelligence-intriguing-uses.html (Accessed: 15 October 2016).
Sawyer, R.J. (2015) Stephen Hawking fears AI could destroy humankind. Should you worry?. Available at: http://www.cbc.ca/news/technology/ai-could-destroy-humans-stephen-hawking-fears-should-you-worry-1.2864576 (Accessed: 15 October 2016).
Brylow, D. (2014) Computer Science: An Overview. 12th edn. United States: Prentice Hall.
Gibney, E. (2016) ‘Google’s AI reasons its way around the London Underground’, Nature News. doi: 10.1038/nature.2016.20784.
Minsky, M. (2007) The emotion machine: Commonsense thinking, artificial intelligence, and the … Available at: https://books.google.ca/books?id=OqbMnWDKIJ4C (Accessed: 16 October 2016).
Artificial Intelligence (2016) Available at: https://en.wikipedia.org/wiki/Artificial_intelligence (Accessed: 13 August 2016).
Humans vs robots (2015) Available at: http://www.theinquirer.net/inquirer/feature/2426988/humans-vs-robots-driverless-cars-are-safer-than-human-driven-vehicles (Accessed: 16 October 2016).
Portman, D. (2016) Human Brain vs Machine Learning – A Lost Battle?. Available at: https://www.linkedin.com/pulse/human-brain-vs-machine-learning-lost-battle-danny-portman (Accessed: 17 October 2016).
Marsella, S. and Gratch, J. (2014) Computationally modeling human emotion. Available at: http://cacm.acm.org/magazines/2014/12/180787-computationally-modeling-human-emotion/fulltext (Accessed: 17 October 2016).
Woodford, C. (2016) How neural networks work – A simple introduction. Available at: http://www.explainthatstuff.com/introduction-to-neural-networks.html (Accessed: 17 October 2016).
Srivastava, T., Jain, K., Kaushik, S. and Shaikh, F. (2016) ‘How does artificial neural network (ANN) algorithm work? Simplified!’. Available at: https://www.analyticsvidhya.com/blog/2014/10/ann-work-simplified/ (Accessed: 17 October 2016).
Simulation hypothesis (2016) in Wikipedia. Available at: https://en.wikipedia.org/wiki/Simulation_hypothesis (Accessed: 17 October 2016).
Solon, O. (2016) Is our world a simulation? Why some scientists say it’s more likely than not. Available at: https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix (Accessed: 17 October 2016).