AI is a buzzword in every business, but as a society our general understanding of how it will be applied remains low. ‘Artificial intelligence’, or AI, is the umbrella term for machine learning and other programs that draw on vast amounts of information to produce usable insights.
We hope it will benefit society beyond our imagination; our current fears lie predominantly with programmers passing their individual biases into these systems, thereby entrenching existing prejudices in our future technology.
By the year 2020, AI technologies are projected to have created an insights-driven market worth $1.2 trillion USD, and AI researchers are commanding salaries of well over $1 million USD – a measure of how prized this new expertise has become.
In the past decade, we have seen a worldwide AI arms race for patents and IP amongst leading tech companies. A 2017 report from global management consultancy McKinsey found that ‘tech giants including Baidu and Google spent between $20B to $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions’. U.S.-based companies ‘absorbed 66% of all AI investments in 2016’. China was second with 17% and growing fast.
Governments around the world are urgently allocating funds – including the Australian Federal government, which announced in last week’s Federal budget that it plans to spend $29.9 million to grow Australia’s capabilities in AI. This will support and plan for future investments “that improve our expertise and maintain our competitiveness in these technologies” through a “technology roadmap”.
To manage this, it is setting up a new national Ethics Framework and Standards Framework to help guide “the responsible development of these technologies”.
In April this year, the EU announced its “Charter on AI ethics”, stating that Europe’s secret weapon in the race against the U.S. and China on artificial intelligence is ethics. Whilst many European countries are creating their own AI strategies, EU officials insist that common rules must be agreed across the continent to boost consumer trust in European AI applications and help the region catch up with the competition.
Welcome to the dark side of Big Data
Machine learning can propagate discrimination by automating the very biases it is supposed to eliminate. Elon Musk has voiced his concern over AI’s apocalyptic potential, but the problem we face now is how to stop the inequalities of our society being amplified and affecting its most vulnerable people. Cathy O’Neil, author of Weapons of Math Destruction, says: “If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues.”
We have already seen programs that expose the darker side of the technology. Risk assessment software used by Broward County in the US was the subject of a damning report published by ProPublica in 2016. It showed the software made racially prejudiced decisions about the likelihood of individuals re-offending and was “particularly likely to falsely flag black defendants as future criminals, wrongly labelling them this way at almost twice the rate of white defendants”.
In 2016, Microsoft launched its chatbot ‘Tay’ on Twitter, designed to become smarter as more people interacted with it. However, it learned fast from a malicious subset of users and began sending out offensive and hurtful tweets that were deeply embarrassing for the company. “We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity,” wrote Peter Lee, Microsoft’s vice president of research.
Last Tuesday Google introduced its lifelike AI experience with Google Duplex – an AI-powered virtual assistant. CEO Sundar Pichai demonstrated how it conducted two effortlessly natural conversations, one with a hairdresser and another with a restaurant receptionist: it mimicked the hesitations and affirmations of human speech and made the required reservations without either person ever knowing they were talking to a program. Seeing and hearing the concept in action ‘floored’ and ‘unsettled’ the audience at Google’s annual developers’ conference and raised questions about the need for transparency when we interact with a machine. Belatedly, Google responded to the ethical outcry by saying it was “designing this feature with disclosure built-in, and [will] make sure the system is appropriately identified. What we showed at I/O was an early technology demo.” The technical advances are amazing, but the big tech companies can’t take people’s trust for granted, as demonstrated so effectively by Facebook and the Cambridge Analytica scandal.
Establishing human ethics in these systems is more urgent than ever, and Google should be at the forefront of this conversation. A few days ago, however, it was announced that Google employees had resigned in protest at the company signing a military contract with the Pentagon that would allow drones to better analyse their targets in warfare.
Anxiety over AI continues, but the industry moves steadily forward. Self-driving cars and trucks are here and will continue to be adopted, but whilst they will be far less flawed than human drivers – who do we blame when someone gets killed?
LG concept robots deliver goods, guide shoppers and check in guests; machines can read X-rays; and algorithms will respond to customer inquiries.
Humans out of work?
Technology is driving the progress of AI: increased processor speed, algorithm efficiency, data availability and cloud computing. As AI is increasingly applied to business opportunities and problems, the resulting investment will drive further algorithmic efficiency and greater potential. Currently, we apply AI to existing systems and processes to make them more efficient. Real progress will come when new products and business models are developed that couldn’t exist without AI. Many technical and non-technical roles will emerge in the future, but critically most of them will rely on your ability to be human.
Elizabethans were terrified that new machinery would put them out of work, but innovation created new jobs. Air traffic controllers, engine mechanics and pilots would have been inconceivable 100 years ago, and as new opportunities are created, a report from MIT argues that AI will easily create as many jobs as it eliminates. The possibilities are endless, and an “Empathy trainer” for AI devices or an “Ethics Compliance Officer for AI” might well be top of the list.
The new world
Elon Musk is building a new economy in spaceflight technologies, companies such as Airbnb and Uber are leading the explosion of the shared economy, and since 2000 nanotechnology jobs in medical science have grown to provide work for more than a million Americans.
The new world can blossom with human empathy and an abundance of fascinating new jobs. Humanity is central to the process. As technology becomes easier to use and systems less complicated, humans can draw on their creativity and unique understanding of the human mind. We can hand over mundane and repetitive tasks and use the technology to magnify human capabilities. As we develop a code of ethics for AI here in Australia and across the world, we must unite globally to police these processes and ensure that those predictions of an apocalyptic future never come to pass.