Machine Learning and A.I. – Are We Ready?


November 2, 2016


Artificial Intelligence by Jeff Rense and Roman Yampolsky

What is the true potential unleashed by machine learning? To what extent do self-training algorithms and sophisticated technological progress powered by artificial intelligence define both our seamless interaction with the rest of the online community and our exposure to newly created vulnerabilities?

Society is approaching a point where software developments complemented with cutting-edge machine learning algorithms have made it possible for a program to simulate the voice of an individual almost to perfection. These types of technological advances have allowed us to become the type of society that can build omnipresent assistants that can even adopt a different tone of voice for every single individual who relies on them (e.g. Google Assistant). It is true: the incorporation of machine learning and artificial intelligence into our software has an impact that can range from developing computer programs that have already beaten individuals at board games (e.g. AlphaGo) to micromanaging the individual customization of computer-synthesized voices. We are now at a stage where DeepMind, the developers of AlphaGo, have announced that they have designed a program that “mimics any human voice and sounds more natural than the best existing text-to-speech systems, reducing the gap with human performance by over 50 percent.”

Unfortunately, this is not the whole picture. As John Markoff rightly asks in his New York Times article, what if the software that can emulate the voice of a human being can also be used by a stranger to call you and speak to you with the voice of your aging mother, seeking your help because she forgot her banking password? What if these artificial intelligence-enabled programs were able to use the vulnerabilities of the online world against their own designers? Although these scenarios seem to belong to a distant future, it is important to realize that we are already seeing how “traditional” cybercrime can be boosted by such technological advances.

Cyberattacks 2.0

Let’s look at what happened on Friday, October 21st, 2016, when a cyberattack caused Twitter, Netflix, Spotify, Airbnb and other giants to suspend their services for a significant period of time. The incident was caused by what at first seemed an almost routine Distributed Denial of Service (DDoS) attack. In a nutshell, a DDoS is usually carried out by an agent who infects multiple systems with malicious software known as a Trojan, which forces them to send requests to specific websites or platforms in order to exhaust their request capacity, leaving them unable to respond to legitimate requests coming from the company’s actual clients. Given that the bandwidth of these companies’ websites is consumed by the fake requests from the infected systems, the end user attempting to reach the website only sees that the system is denying service due to lack of capacity.
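The flooding mechanism described above can be illustrated with a toy simulation: a server with a fixed request capacity, a swarm of infected machines filling that capacity, and a legitimate user turned away. The capacity figure and the `Server` class are purely illustrative assumptions, not details of the actual attack.

```python
# Toy simulation of a DDoS: compromised devices flood a server's
# limited request capacity so a legitimate request is denied.
# All numbers here are illustrative, not drawn from the real incident.

class Server:
    def __init__(self, capacity):
        self.capacity = capacity       # max concurrent requests handled
        self.active_requests = 0

    def handle_request(self, source):
        # Accept a request only while spare capacity remains.
        if self.active_requests < self.capacity:
            self.active_requests += 1
            return f"200 OK for {source}"
        return f"503 Service Unavailable for {source}"

server = Server(capacity=100)

# The botnet of infected devices fills the server's capacity...
for bot_id in range(150):
    server.handle_request(f"bot-{bot_id}")

# ...so the legitimate client is denied service.
print(server.handle_request("legitimate-user"))
```

The point of the sketch is that the server cannot distinguish a bot’s request from a real one; it simply runs out of room.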

The important point here, however, is not the DDoS attack itself; these types of attacks have been around for a long time. The innovation lies in the strategy and execution of this newest one. This time, the DDoS was orchestrated with systems as small as internet-connected devices such as cameras and home routers, which were infiltrated without the slightest hint of awareness from their owners. The only condition for these small gadgets to be vulnerable to the Trojan? Being connected to the Internet at the moment the software was spread.

The initial target of that Friday’s attack was not the average platform either: the DDoS was directed at Dyn, a company that offers domain registration and resolution services for giants such as Twitter, Airbnb and Spotify. Dyn acts as one of the middlemen between the end user and the company hosting the website, translating the friendly name typed into a laptop’s address bar into the numerical IP address that allows computers to communicate with each other. In this way, a DDoS attack on Dyn is significantly more powerful than an attack on any individual company’s website, making clear the increased vulnerability of centralized systems and the high risk underlying the development of the Internet of Things ecosystem.
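The name-to-address translation that a DNS provider like Dyn performs can be seen directly from Python’s standard library, which exposes the system resolver through `socket.gethostbyname`. The example below uses `localhost` so it runs without network access; a real lookup would pass a public domain name instead.

```python
# Minimal sketch of DNS resolution: turning a human-friendly hostname
# into the numerical IP address that machines use to talk to each other.
import socket

def resolve(hostname):
    """Return the IPv4 address the resolver reports for a hostname."""
    return socket.gethostbyname(hostname)

print(resolve("localhost"))  # 127.0.0.1
```

When a provider performing this translation is knocked offline, every site that depends on it becomes unreachable by name, even though the sites themselves are still running.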

These types of attacks show us that the great power brought by new technology must inevitably be complemented with great responsibility. The question remains, however, how capable our institutions, as they currently stand, are of enforcing this responsibility for anyone connected to the online world, and how effective their measures are at applying sanctions to the relevant parties. At the very least, we now know that as cyberattacks grow increasingly sophisticated, companies, conglomerates and even governments around the planet won’t be able to afford staying behind or underinvesting in technologies that can help prevent such system infringements at the institutional level – right where they can cause the most damage by affecting tens of millions of users at once.

What is the line between a healthy degree of technological progress and the promotion of a toxic environment that might undermine the structures and systems we currently use, enjoy and rely on from day to day? Every new development reminds us of the importance of this question, and the sooner we come up with an answer, the better off we will all be.