By Katherine Pioro
The Terminator. I, Robot. Ex Machina. What do all of these films have in common?
Step into any cinema, and you’re bound to find at least one science fiction movie about robots and artificial intelligence (AI). More often than not, these artificially intelligent robots pose a grave threat to humans. Beneath the threat of robots lies the deeper-seated fear that technology imbued with human intelligence will eventually overtake mankind. This fear pervades literature as well, driving the plots of well-known works such as Aldous Huxley’s Brave New World, H.G. Wells’ The Time Machine, and Karel Čapek’s R.U.R.
But artificial intelligence is not reserved for the big screen and literature. The past decade has seen a huge increase in the number of software giants pouring money into AI-centric research and development programs. In 2014, Google acquired DeepMind, an AI company on a mission to “Solve Intelligence,” for more than $500 million. DeepMind’s claim to fame is its AlphaGo program, which beat the world champion of the complex board game Go. Go is far tougher for computers than traditional strategy games: while chess offers roughly 35 possible moves per turn, Go offers about 250. To handle this complexity, AlphaGo uses ‘neural networks,’ computer simulations designed to mimic the nonlinear processing of neurons. AlphaGo’s learning algorithm imitates neuronal behavior by forming connections on its own and self-correcting as it learns.
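The “self-correcting” idea can be shown in miniature. The sketch below is purely illustrative and has nothing to do with AlphaGo’s actual code: a single artificial neuron guesses, measures how wrong it was, and nudges its connection weights until it learns a simple rule (logical OR).

```python
# Toy illustration (not DeepMind's code) of a self-correcting artificial neuron.
import math
import random

# Training examples: learn the logical OR function.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
learning_rate = 0.5

def predict(x):
    """Weighted sum of inputs, squashed into the range (0, 1)."""
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1.0 / (1.0 + math.exp(-total))

for epoch in range(5000):
    for x, target in examples:
        output = predict(x)
        error = target - output  # how wrong was the guess?
        # Self-correction: shift each weight in the direction that reduces error.
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

for x, target in examples:
    print(x, "->", round(predict(x), 2), "(target:", target, ")")
```

After training, the neuron’s outputs sit close to the targets; real systems like AlphaGo stack millions of such units and correct them the same way, just at vastly greater scale.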
Google is not the only software giant focusing on artificial intelligence. In 2011, International Business Machines’ (IBM) supercomputer Watson beat two of Jeopardy!’s greatest champions. Watson runs on DeepQA, a software framework that combines natural language processing with evidence-based search to return a ranked list of possible answers. Today, IBM is exploring Watson’s potential in the medical field, where its supercomputing abilities are helping doctors diagnose diseases and train medical students. Watson’s potential is enormous – it can read every piece of medical literature on the internet in minutes and process more data in a day than a human can in a lifetime. Mark Kris, an oncologist at Memorial Sloan-Kettering Cancer Center in New York, predicts that Watson could eventually serve as “the world’s best second opinion.” But Watson’s abilities raise a question: with one doctor that knows everything, is there really a need for human doctors anymore?
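The “ranked list of possible answers” approach can also be sketched in a few lines. The example below is a hypothetical, drastically simplified stand-in for evidence-based answer ranking, not IBM’s DeepQA pipeline: candidate answers are scored by how many retrieved passages support them, then returned in ranked order with rough confidence values.

```python
# Illustrative sketch of evidence-based answer ranking (not IBM's DeepQA).
def rank_candidates(candidates, evidence_passages):
    """Score each candidate by the number of evidence passages that mention it,
    then normalize the scores into rough confidence values."""
    scores = {}
    for answer in candidates:
        support = sum(1 for passage in evidence_passages
                      if answer.lower() in passage.lower())
        scores[answer] = support
    total = sum(scores.values()) or 1
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return [(answer, score / total) for answer, score in ranked]

# Hypothetical question: "Which organ is affected by hepatitis?"
candidates = ["liver", "kidney", "heart"]
evidence = [
    "Hepatitis is an inflammation of the liver.",
    "Chronic hepatitis can cause scarring of the liver.",
    "The kidney filters blood.",
]
for answer, confidence in rank_candidates(candidates, evidence):
    print(f"{answer}: confidence {confidence:.2f}")
```

A real question-answering system replaces the simple string match with deep linguistic analysis and hundreds of evidence scorers, but the output has the same shape: a ranked list of answers, each with a confidence score.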
Technology has been a disruptor of the status quo since the dawn of the age of man. The discovery of fire allowed humans to cook their food and survive in extremely cold environments. The invention of the wheel during the Bronze Age directly contributed to the expansion of civilization, allowing for greater mobility in both trade and warfare. The invention of the steam engine during the Industrial Revolution of the 18th and 19th centuries eventually led to Henry Ford’s introduction of the assembly line in 1913 and the mechanization of tasks that once required human hands. For the past three million years, technology has almost exclusively automated manual processes. But in the past several decades, the world has seen a shift. By way of artificial intelligence, technology can now automate cognitive processes, thereby threatening the jobs of white-collar workers.
Artificial intelligence is already popping up in nearly every industry, from the legal field to the financial sector and beyond. In the legal field, ROSS, a research tool built on IBM’s Watson, serves as a researcher for BakerHostetler, a leading law firm. According to BakerHostetler, ROSS has freed up time for lawyers to focus on tasks that require human creativity.
In the field of journalism, artificial intelligence is helping to write financial reports and shorten articles. Since July 2014, The Associated Press has used artificial intelligence technology from Automated Insights, a natural language generation company, both to generate selected sports reports and to turn out thousands of quarterly earnings reports. The Associated Press maintains that this new development is about “using technology to free journalists to do more journalism and less data processing, not about eliminating jobs.”
Artificial intelligence is also taking the financial sector by storm. In November 2014, Goldman Sachs invested $15 million in a financial start-up called Kensho. Kensho gives analysts a simple, Google-style search bar that can answer upwards of 65 million financial analysis questions using advanced data mining and natural language processing. Kensho also responds to phone calls from investors, giving instant, data-driven feedback. Goldman executives generally share the optimistic view that artificial intelligence will supplement human work and give people more time to focus on problems that require creativity. Kensho’s founder, Daniel Nadler, disagrees; he predicts that automation software companies like Kensho will replace between one third and one half of all existing finance jobs within a decade.
Many companies employing artificial intelligence technologies are operating under the assumption that AI will do the “drudge work” and leave humans with the creative tasks. But this might not always be the case. A study conducted at Oxford University found that, due to the declining price of computers and increasing computing efficiency, 47% of all US jobs are at “high risk” of being automated within the next 20 years. President Obama’s February 2016 report to Congress echoed these somber sentiments. The report cited a study by the White House Council of Economic Advisers that examined jobs’ susceptibility to automation based on wage. It found that 62% of American jobs are at risk of takeover by robots, with the majority of at-risk jobs paying less than $40 an hour.
However, such a pessimistic view is not universal; a study by McKinsey & Company found that while 60% of all occupations could have 30% of their activities automated, fewer than five percent of occupations could be automated entirely. In other words, artificial intelligence is far more likely to redefine occupations than to eliminate them. Another report, by Deloitte, which evaluated census data for England and Wales going back to 1871, found that the growth of jobs in the creative, care, technology, and business service industries has more than offset the loss of jobs in the agricultural and manufacturing sectors. The study postulates that despite AI’s cognitive advantages, the general trend of short-term job displacement and long-term job growth will hold.
Clearly, there is a wide range of conflicting opinions as to how artificial intelligence will impact white-collar jobs. In general, there are two extreme positions: the first holds that while AI might displace some workers, the net result will be massive job growth; the second holds that artificial intelligence’s ability to automate cognitive processes will leave only a small number of high-paying jobs available in the future. It is true that artificial intelligence’s cognitive abilities separate it from the purely physical technological processes of the past. The exact effects of this separation remain to be seen.