The Future of AI: Should We Be Afraid?

Science & Technology

Sarrah Jasmin discusses the potential, possibilities and pitfalls of artificial intelligence

Despite artificial intelligence (AI) already providing countless benefits to society, with the number ever-increasing, experts remain divided on the topic. SpaceX and Tesla CEO Elon Musk, for example, famously described AI as a “fundamental risk to the existence of human civilisation”, a concern Stephen Hawking has echoed.

AI can be narrow, which is what we see day-to-day in systems such as our laptops and mobile phones, or general, a more complex and flexible form of intelligence. Narrow AI covers systems trained to perform specific tasks, such as Siri on the iPhone or Tesla cars with vision sensors, while general AI would be able to perform complex tasks such as cutting hair or reasoning from gained experience; perhaps what you would visualise when you think of robots.

Although this type of AI does not exist yet, a survey conducted by renowned researchers Muller and Bostrom reported a 50% chance that general AI will have been developed between 2040 and 2050, rising to a 90% chance by 2075, causing both concern and excitement amongst experts.

AI has the capacity to change the world, and a special report from TechRepublic notes that autonomous transport technologies such as driverless cars and delivery robots will be extremely beneficial to our society. Already, companies have become advocates of AI robotics; General Motors and Ford have both said they would build driverless cars with no steering wheel or pedals by 2019 and 2021 respectively. Further, Google’s Waymo is already looking to establish a driverless taxi service in Arizona.

Another benefit of AI would be to improve the accuracy of facial recognition systems used with CCTV, a field which has already made huge technological advances in the past decade. Baidu, a Chinese tech firm, reports it can match a face with 99% accuracy, potentially decreasing crime rates by making it easier to track suspects. Facial recognition glasses are also being trialled by authorities in China, which may aid police in tracking suspects out on the streets.

Furthermore, healthcare systems such as the NHS can take advantage of AI technologies. For example, Google’s DeepMind system has been used by the NHS to find abnormalities, tumours and cancers in the head and neck, to allow for earlier and more effective therapeutic interventions.

However, there are drawbacks to this leap in technology. AI could erode privacy regulations and barriers, allowing for a more intrusive form of control. For example, the vast neural networks that underlie AI can replicate images and voices extremely realistically, which could allow an individual’s likeness to be exploited for crime and fraud.

Many jobs are also at risk of being lost if AI succeeds in completing tasks usually performed by human beings more effectively than they can. For example, Amazon’s ‘Just Walk Out Technology’ automatically detects when products are taken from or returned to shelves in the Amazon Go store in Seattle. When customers are done shopping, they can simply leave the store, and payment is taken from their Amazon account, rendering cashier work unnecessary. Amazon have also deployed robotic systems to carry products around their warehouses, replacing industrial workers. Why pay people to do a job when you can build technology to do it instead?

On a larger scale, a report entitled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation was recently co-released by the Electronic Frontier Foundation and a number of academic and civil society organisations, and suggests AI is facing exploitation by criminals and terrorists. The manipulation of various technologies, including Google’s AlphaGo, could be used to exploit code or hack government systems. Simpler approaches, like training a drone with facial recognition software to carry out a harmful task, are just as dangerous. Yet the most obvious concern is the ‘Singularity Phenomenon’: the warning, expressed by Stephen Hawking, that technology could advance so rapidly that it surpasses human capabilities and poses a substantial threat to the human race.

However, while many experts fear the advancement of general AI, there are alternative outlooks. Musk, for example, has set up a non-profit research company, OpenAI, to promote and develop friendly AI that benefits society as a whole. But the idea of AI advancing past humans may become reality sooner than we think; Muller and Bostrom’s research also predicted that, within 30 years of achieving general AI, a form of superintelligence will emerge and exceed the human race’s cognitive capabilities.

Overall, whilst the benefits and disadvantages of AI are highly disputed, it is clear that AI technologies are advancing every day. Although AI can be greatly beneficial to society, it is widely agreed that mitigating the misuse of AI technology is a priority, and we must proceed with caution.

Image credit: unsplash.com

The Future of AI: Should We Be Afraid? Reviewed on February 25, 2018.
