The Need for a Risk-free Artificial Intelligence

By Jeb Linton, Chief Security Architect and Head of Cognitive Security, IBM

AI Security Today

There has been much discussion in recent years of the dangers of future Artificial Intelligence (AI), from such luminaries as Stephen Hawking and Elon Musk. Organizations such as MIRI and FLI are doing important work on future AI, but what of the dangers we face today? This technology is understood by few, and by fewer still in the Security community, yet its use is growing by leaps and bounds. What are the implications? How must the field of Security evolve to keep up with AI as it matures?

"AI is starting to pervade IT in general, it's going to become weaponized rapidly by both sides in this race"

We may not have to worry about AI turning the whole world into paper clips just yet, but there are serious concerns for businesses and individuals using AI today. For clarity: I'm using AI by the broadest definition, to include all forms of Machine Learning (ML), but most of the recent explosive growth is in the use of Deep Learning (DL).

Attacks against Business AI systems

It is very likely that your business already uses AI today, and may be among the many that are not just using it but training it. If so, you are already aware that training a Deep Neural Net or other Supervised ML system is difficult and typically involves a tremendous effort in collection and curation of labeled training data. This data becomes the "ground truth" for the system, and the AI systems we have today do not know how to be skeptical of bad training data. For this reason, thorough curation and secure custody of training data are of paramount importance.
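One concrete control is to treat the curated training set like any other high-value asset and verify its integrity before every training run. Below is a minimal sketch of that idea in Python; the directory layout, manifest format, and function names are assumptions invented for illustration, not a specific product or standard.

```python
# Minimal sketch: a cryptographic manifest for a curated training set.
# Layout and naming here are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str, manifest_path: str) -> None:
    """Record a SHA-256 digest of every file in the training set."""
    digests = {
        str(p.relative_to(data_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(digests, indent=2))

def verify_manifest(data_dir: str, manifest_path: str) -> list:
    """Return files added, removed, or altered since the manifest was built."""
    recorded = json.loads(Path(manifest_path).read_text())
    current = {
        str(p.relative_to(data_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
    }
    return sorted(f for f in set(recorded) | set(current)
                  if recorded.get(f) != current.get(f))

# Usage: refuse to train if anything has drifted since curation.
# drift = verify_manifest("training_data/", "manifest.json")
# if drift:
#     raise RuntimeError("training data changed since curation: %s" % drift)
```

A manifest will not catch bad data that was poisoned before curation, but it does guarantee that what you train on is exactly what your curators signed off on.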

Researchers in the nascent field of Adversarial Machine Learning have shown that a Deep Learning system can be "tricked" by the introduction of a carefully crafted piece of training data into making tailored incorrect decisions. Typical examples involve misclassified photographs, but they prove that in principle, an adversary with sufficient knowledge of your Machine Learning system and the ability to introduce a bit of bad training data could cause it to make any sort of bad decision, whether in a demand-pricing algorithm or a self-driving car. Even adversaries without much knowledge of your system can use related techniques to de-tune your algorithm to the point where it loses value—spammers have been defeating spam filters in this way for years. The takeaway here is that you need to pay careful attention to the provenance of your training data, and to protect access to it and to the details of your ML systems in proportion to the damage your business could sustain if that data were polluted.
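To make the poisoning idea concrete, here is a toy sketch using scikit-learn; the two-cluster dataset, the target input, and the poison points are all invented for illustration, and real attacks on deep networks are far subtler than this.

```python
# Toy data-poisoning demonstration on a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training set: class 0 clustered at (-2, 0), class 1 at (+2, 0).
X_clean = np.vstack([rng.normal([-2.0, 0.0], 0.5, size=(50, 2)),
                     rng.normal([+2.0, 0.0], 0.5, size=(50, 2))])
y_clean = np.array([0] * 50 + [1] * 50)

# An input the attacker wants misclassified; it sits on the class-1 side.
target = np.array([[0.5, 0.0]])

clf = LogisticRegression().fit(X_clean, y_clean)
print("before poisoning:", clf.predict(target))  # typically [1]

# The attacker slips ten mislabeled points near the target into the
# training set; the model accepts them as ground truth.
X_poison = rng.normal([0.5, 0.0], 0.1, size=(10, 2))
y_poison = np.zeros(10, dtype=int)

clf_bad = LogisticRegression().fit(np.vstack([X_clean, X_poison]),
                                   np.concatenate([y_clean, y_poison]))
print("after poisoning:", clf_bad.predict(target))  # typically [0]
```

The model has no notion that the planted points are lies; it simply fits them, and the decision boundary moves exactly where the attacker wants it.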

Use of AI by Bad Actors

We have seen a couple of dangerous tipping points in the world of Security over the last decade or so. First came the rise of the professional bad actor and the development of nation-state-backed cyber attackers, many of whom make extra income by moonlighting as pure cyber criminals in their off-hours. This has led to the emergence of a rich, complex, and specialized black market for cybercrime products and services as bad actors began to make their livings on cybercrime—an entirely new form of organized crime, much larger in scope than the Mafia could have imagined in the last century. The CTO of one of the largest security firms has told me that more money is now made on cybercrime, worldwide, than is spent on security.

More recently, you will have noticed the emergence of very sophisticated robocallers being used to commit fraud over the phone, as well as legions of paid call-center workers committing phone fraud; all of this indicates that phone fraud is now a profitable business. You may also have found that robocallers are becoming more difficult to distinguish from human callers; this is due to the maturation of speech-to-text, text-to-speech, and directed-conversational Deep Learning systems. I predict that the call-center fraudsters will drop away rapidly in favor of AI robocallers, as fraud operators find it more cost-effective to deploy an AI with a convincing dialogue than to pay a call-center employee with a convincing accent.
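The technology stack behind such a robocaller is the same loop any automated voice agent runs. The skeleton below is a hypothetical sketch; every component name is an invented placeholder, not a real API.

```python
# Skeleton of a directed-conversation voice agent (hypothetical placeholders).

def transcribe(audio_chunk):
    """Speech-to-text model: caller audio in, text out."""
    raise NotImplementedError

def next_utterance(state, heard):
    """Directed-dialogue model: decide the next line, or None to hang up."""
    raise NotImplementedError

def synthesize(text):
    """Text-to-speech model: text in, audio out."""
    raise NotImplementedError

def handle_call(listen, speak):
    """One call: listen, decide what to say, say it, repeat until done."""
    state = {"turns": []}
    while True:
        heard = transcribe(listen())
        state["turns"].append(heard)
        reply = next_utterance(state, heard)
        if reply is None:
            break
        speak(synthesize(reply))
```

Each of those three boxes has matured dramatically in the last few years, which is precisely why the synthetic caller on the other end of the line is getting so hard to spot.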

Likewise, we confidently expect the use of AI to become common among bad actors. The majority of hackers use social engineering techniques to perform cyber attacks today, and the same AI tools useful for building a convincing phone-fraud system can and will be used for this purpose. We can also expect AI to be used to better target broad phishing campaigns, especially now that successful phishing has such a fast and direct payoff for bad guys deploying ransomware. Cybercriminals already have an efficient market for services, and researchers such as those at ZeroFOX have demonstrated that AI can get people to click on malicious links more efficiently than humans can; that is good reason to think bad actors are already doing so.

AI in the Cyber Security Arms Race

Cyber Security is and has always been an arms race. What's changing now is that as AI starts to pervade IT in general, it's going to become weaponized rapidly by both sides in this race. Having started the Cognitive Security initiative at IBM, I know from personal experience how the good guys are doing this, and we see strong hints that the bad guys are too. AI sophistication is already advancing quickly because of its value to businesses; however, nothing throws fuel on the fire of technology development quite like an arms race funded by crime syndicates and nation-states. In this competition, one side will be using leading-edge AI to deceive humans and commit crimes. This is not how we want Strong AI to emerge over time.

If there's a lesson to be learned here, other than keeping your security controls tight and your security education tighter today, perhaps it's that AI Safety is something we all need to be thinking about. Not immediately in terms of super-intelligent Strong AI, but in terms of AI that is already strong enough to trick and harm many of us, and which will get stronger at an increasing pace.