AI Is Changing Security—Here’s How
Cybercriminals have evolved from lone hackers targeting small websites and systems into dangerous entities launching large-scale cyberattacks that affect millions of people worldwide. In recent years, we’ve seen massive ransomware attacks like WannaCry and NotPetya cause hundreds of millions or even billions of dollars in damages and lost business.
Stolen credentials are now easily accessible on the black market at a nominal cost. Bank account details, credit card numbers, and full identity packets can be purchased for $40; credentials for hacked Windows RDP servers are available for just $20; and a company can take down a competitor’s website for an hour for $60. Additionally, malicious software such as bots and scripts is now easily replicated, extending cybercriminals’ reach well beyond their original targets.
Alongside the sheer size of these cyberattacks, there’s also growing concern around their sophistication. With the advent of artificial intelligence (AI) and machine learning technologies, a whole new realm of cyberthreats poses a significant risk to enterprises.
A little more on AI and machine learning
The concept of AI was first introduced over 60 years ago at the Dartmouth Workshop in 1956, where researchers explored human intelligence and how machines might one day mimic it. Within the context of today’s digital capabilities, machine learning has evolved into a discipline in which powerful computers ‘learn’ from the data sets that flow through their algorithms. While human input remains valuable for training machines, these algorithms streamline how a machine responds to certain events, making it ‘smarter’ after each instance.
We’ve also seen the emergence of deep learning. A subset of machine learning, deep learning is different in that it allows machines to correct themselves without human intervention—making it more scalable and accurate. This is the type of technology that powers innovations like image recognition and self-driving cars.
Deep learning operates using neural networks with multiple layers through which data propagates. Unfortunately, while it shows significant potential for businesses that want to rapidly identify and defend against threats, it’s also proving to be a useful tool for cybercriminals. This makes it even more critical for enterprises to understand these technologies and use them to mitigate these threats.
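To make “multiple layers through which data propagates” concrete, here’s a minimal sketch of a two-layer forward pass in plain NumPy. The shapes, random weights, and the “threat or not” framing are purely illustrative; a real network would learn its weights from data:

```python
# A minimal sketch of layered data propagation in a neural network:
# a two-layer forward pass in plain NumPy. Weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                           # input features

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer parameters
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)    # output layer parameters

h = np.maximum(0, W1 @ x + b1)                   # layer 1: ReLU activation
y = 1 / (1 + np.exp(-(W2 @ h + b2)))             # layer 2: sigmoid score
print(y)                                         # e.g. a "threat" probability
```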
The threats behind innovation
In the wake of new technology, new potential attacks emerge. Bad actors have now designed threats that either compromise AI functionalities or harness the capabilities of machine learning itself.
Attacks that compromise AI infrastructure
Data poisoning: If a malicious actor knows that an AI system is collecting training data, they can flood it with false or corrupted examples. As a result, the model is trained improperly, causing it to malfunction or to behave in ways it shouldn’t, such as classifying malware as safe.
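As a toy illustration of how poisoned labels degrade a model, here’s a minimal sketch using scikit-learn. The synthetic “malware vs. benign” dataset and every name in it are invented for this example:

```python
# A minimal sketch of label-flipping data poisoning with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "malware vs. benign" feature vectors.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the labels of a random fraction of training samples,
    as an attacker who controls part of the data feed might."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned {fraction:.0%}: test accuracy {clf.score(X_test, y_test):.3f}")
```

Even this toy run shows the pattern: as the poisoned fraction grows, test accuracy drops, and the model starts misclassifying inputs it would otherwise get right.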
Adversarial attacks: These occur during the inference stage, when the AI has already been trained and uses its insights to make decisions about new data. The attacker feeds the model carefully crafted inputs, often with perturbations imperceptible to humans, that disrupt how accurately the machine identifies images or other items. Adversarial attacks can also be chained with data poisoning: the attacker corrupts the training data first, then exploits the resulting weakness during the inference process.
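For a rough sense of how an inference-time evasion works, the sketch below applies the well-known Fast Gradient Sign Method (FGSM) to a toy linear classifier. The model and data are stand-ins, not any production system:

```python
# A minimal FGSM sketch against a logistic regression classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm(x, label, model, epsilon):
    """Nudge each feature in the direction that increases the loss.
    For log loss on a linear model, dLoss/dx = (p - label) * w."""
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    grad = (p - label) * model.coef_[0]
    return x + epsilon * np.sign(grad)

x, label = X[0], y[0]
x_adv = fgsm(x, label, clf, epsilon=0.5)
print("clean prediction:      ", clf.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```

The perturbation is small per feature, yet it can be enough to flip the model’s decision, which is exactly what makes these attacks hard to spot.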
Attacks that use AI
Spear phishing: To enhance existing phishing practices, bad actors use machine learning algorithms to craft personalized, enticing messages designed to encourage users to share sensitive information. For instance, hackers may generate emails disguised to look like they’re from a close relative or colleague.
Deepfakes: AI can combine and superimpose images and videos using a machine learning technique known as a generative adversarial network. These images and videos appear real, but are actually a composite of disparate sources. Their most common use so far has been to misrepresent politicians, but they can also be used to fabricate footage of other compromising activities. Deepfakes extend to audio as well: in one recent example, hackers are suspected of using an AI-generated voice to steal $243,000 from a U.K.-based energy firm.
Many experts worry that as this technology continues to advance, it will become increasingly difficult to differentiate legitimate video and audio footage from deepfakes, which may have serious social and political implications in coming years.
Mitigating cyberattacks with AI
In parallel to these growing threats, today’s wealth of data and computing power empowers security providers to create tools that mitigate and prevent these types of attacks. Machine learning allows for the creation of predictive models that differentiate between ‘good,’ ‘bad,’ and ‘normal’ behavior, making them intelligent enough to proactively block bots and identify malicious activity. This capability, combined with human input that helps train the machine, is changing the way that businesses deploy security.
Here are a few examples of how:
- Anti-spam: Machine learning enables businesses to build filters that automatically learn what spam email looks like and, likewise, what legitimate email looks like (a minimal sketch follows this list).
- Biometrics: AI is already widely used for facial and smile recognition, with many smartphone users relying on these advanced security techniques in the form of Apple’s Face ID.
- Threat detection: Pattern recognition can detect threats and viruses, enhancing security defenses and helping enterprises spot potential malicious activity faster.
- Adaptive security: AI quickly learns how users behave, which means it also identifies when they do things that are out of character—pointing to a potential account takeover.
- Natural language processing: AI collects information from news stories and research articles on cyber threats, which helps the machine learn the latest security anomalies, hacking techniques, and prevention strategies. This ensures businesses stay on top of the latest risks and continually evolve their security strategies.
- Bot detection: Bots, or automated social media accounts, have been used to sway political elections, manipulate stock markets, and spread health misinformation. However, deep learning techniques can detect bots earlier than previously possible, separate them from human-owned accounts, and minimize their threat.
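Here’s the anti-spam sketch promised above: a naive Bayes classifier over bag-of-words features, built with scikit-learn. The four-message training set is obviously illustrative; a real filter would learn from millions of emails:

```python
# A minimal sketch of a machine-learning spam filter:
# naive Bayes over word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",      # spam
    "Limited offer: cheap meds, act fast",   # spam
    "Meeting moved to 3pm, see agenda",      # legitimate
    "Quarterly report attached for review",  # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each email into word counts, then learn per-class word likelihoods.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Click here for your free prize"]))  # likely 'spam'
print(model.predict(["Agenda for tomorrow's review"]))    # likely 'ham'
```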
Where machine learning fits into Okta
Taking the fight back to cybercriminals means bolstering security defenses with AI and machine learning technologies. At Okta, we’ve incorporated these practices in our Adaptive Multi-Factor Authentication (MFA) product.
Risk-Based Authentication, a feature of Adaptive MFA, uses machine learning to deliver automated detection and response to identity-based attacks. Okta transforms data inputs and variables, such as device, location, IP address, and biometrics, into contextual behavior profiles, which inform quantifiable, actionable authentication and authorization decisions.
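To give a flavor of how contextual signals can feed an authentication decision, here’s a hypothetical sketch. The signal names, weights, and thresholds below are invented for illustration; they are not Okta’s implementation, which learns behavior profiles from data rather than hard-coding rules:

```python
# A hypothetical sketch of risk-based authentication logic.
# NOT Okta's implementation: all signals and thresholds are invented.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool       # device previously seen for this user
    known_location: bool     # location matches the user's history
    ip_reputation: float     # 0.0 (clean) .. 1.0 (known-bad)
    impossible_travel: bool  # geo unreachable since the last session

def risk_score(ctx: LoginContext) -> float:
    """Weighted sum of signals; a trained model would learn these
    weights from behavior profiles instead of hard-coding them."""
    score = 0.0
    score += 0.0 if ctx.known_device else 0.3
    score += 0.0 if ctx.known_location else 0.2
    score += 0.4 * ctx.ip_reputation
    score += 0.5 if ctx.impossible_travel else 0.0
    return min(score, 1.0)

def decide(ctx: LoginContext) -> str:
    """Map risk to an action: allow, step up to MFA, or deny."""
    score = risk_score(ctx)
    if score < 0.2:
        return "allow"
    if score < 0.6:
        return "require_mfa"
    return "deny"

print(decide(LoginContext(True, True, 0.0, False)))   # allow
print(decide(LoginContext(False, True, 0.1, False)))  # require_mfa
print(decide(LoginContext(False, False, 0.9, True)))  # deny
```

The point of the sketch is the shape of the decision, not the numbers: low-risk logins proceed silently, medium-risk logins trigger a step-up challenge, and high-risk logins are blocked outright.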
To learn more about how AI can work for you, try our Adaptive MFA or test out Risk-Based Authentication as part of our Early Access offerings, and read more about how we employ machine learning.