How AI impacts the SMB threat landscape

Artificial intelligence (AI), and specifically generative AI (GenAI), is helping jumpstart revolutions across industries. Unfortunately, some of this technology's earliest and most aggressive adopters have been cyberattackers. The exponential growth in GenAI capabilities has proven irresistible, allowing even novice criminals to conduct account takeover attacks faster and more effectively than ever before. According to Deep Instinct's Voice of SecOps report, 75% of security professionals witnessed increased attacks over the past 12 months, with 85% attributing this rise to bad actors using generative AI.

In this dynamic and rapidly changing security landscape, small and medium-sized businesses (SMBs) are particularly vulnerable. In our 2023 State of Secure Identity whitepaper, we found that SMBs face rates of fraudulent registration and credential stuffing attempts similar to those of enterprise-level organisations (without the enterprise-level staff resources to dedicate to the issue). Small businesses in particular faced meaningfully higher rates of multi-factor authentication (MFA) bypass attempts than other organisation sizes: 20% of all MFA events were bypass attempts, compared to around 9% for enterprise and medium-sized organisations. That adds up.

This blog post aims to shed light on the GenAI-enhanced account takeover threats SMBs face while providing practical insights for enhancing cybersecurity measures. 

Why SMBs are a sweet spot for account takeover attempts 

SMBs have always been in the crosshairs when it comes to cyberthreats. Company size matters less and less for available tech and infrastructure, yet according to the 2023 Verizon Data Breach Investigations Report, companies under 1,000 employees were still attacked more frequently than others. In fact, 69.9% of polled SMBs reported security incidents vs. 49.6% of bigger organisations. SMBs also saw notably higher rates of compromised data in an attack, with 38% of incidents leading to leaked data compared to 22% for larger businesses. Only about a third of SMBs employ a dedicated cybersecurity specialist.

Main account takeover threats to SMBs

Relative beginners can use publicly available Large Language Models (LLMs) to create and scale malicious tools and malware scripts more quickly and effectively than they could without GenAI. 
 

Multiple threads in an underground forum discussing how to use ChatGPT for fraud activity. Image source: Chris Fernando, Security Review, "ChatGPT is Being Used for Cyber Attacks," 2023.
 

You can see the perfect storm facing SMBs when countering rapidly evolving threats like targeted, personalised phishing (spear phishing) or voice cloning that makes social engineering more convincing. Bots, when used for attacks at automated or mass scale, can be a scourge in several ways, including fake signups, distributed denial-of-service (DDoS) attacks, automated brute-force attacks, and inundating users with scams and spam. AI-driven bots sound like an IT nightmare, but there's good news too: bot detection bolstered by machine learning (ML) is significantly better at reducing bot attacks.

It's possible to improve security even when working with limited resources. By identifying the most common threats, you can decide where to focus your efforts when fortifying your security posture against GenAI threats.

Phishing attacks

GenAI tools can be misused to create convincing, professional-looking phishing emails in a matter of seconds: not just generic phishing, but eerily human spear phishing targeted and customised to the individual. Attackers can build credible-seeming profiles, using GenAI to generate photorealistic headshots or mimic natural human speech at speed. Bad actors can also use GenAI tools to replicate a person's voice, allowing them to impersonate trusted staff such as senior leadership or executive assistants in voice phishing (vishing) and social engineering attacks.

Brute-force attacks

Trial-and-error logins have evolved well beyond attackers randomly guessing usernames and passwords. No username/password combo is 100% unhackable: given enough time, bad actors can eventually crack any combination. Attackers have long used automated tools to speed things along; with AI, these attacks are faster and more sophisticated still.
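
To put "given enough time" in perspective, here is a rough back-of-the-envelope sketch. The guess rate is an assumed figure for illustration, not a benchmark of any particular cracking setup.

```typescript
// Back-of-the-envelope: how long exhaustive guessing takes for different
// password shapes. GUESSES_PER_SECOND is an assumption for illustration only.
const GUESSES_PER_SECOND = 1e10;

function yearsToExhaust(charsetSize: number, length: number): number {
  const keyspace = Math.pow(charsetSize, length); // total possible combinations
  return keyspace / GUESSES_PER_SECOND / (60 * 60 * 24 * 365);
}

console.log(yearsToExhaust(26, 8));  // 8 lowercase letters: cracked in seconds
console.log(yearsToExhaust(94, 12)); // 12 printable ASCII chars: ~1.5 million years
```

Length and character variety push the time to exhaust the keyspace from seconds to geological scales, which is one reason attackers increasingly lean on stolen credentials rather than raw guessing.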

Credential stuffing

Credential stuffing is a popular type of brute-force attack in which attackers automatically inject username and password combinations exposed in previous breaches to fraudulently gain access to user accounts. Because so many people reuse passwords across services, a single credential breach combined with weak password policies can unlock accounts far beyond the site that was originally compromised.
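
One inexpensive countermeasure is to screen new or changed passwords against known breach corpora. The sketch below uses the public Have I Been Pwned "Pwned Passwords" range API with its k-anonymity model, so only the first five characters of the password's SHA-1 hash ever leave your server; treat it as a minimal illustration rather than a drop-in feature of any particular Identity product.

```typescript
import { createHash } from "node:crypto";

// Check a candidate password against the Have I Been Pwned "Pwned Passwords"
// range API. Only the first 5 hex characters of the SHA-1 hash are sent,
// never the password itself. Requires Node 18+ (global fetch).
async function isBreachedPassword(password: string): Promise<boolean> {
  const sha1 = createHash("sha1").update(password).digest("hex").toUpperCase();
  const prefix = sha1.slice(0, 5);
  const suffix = sha1.slice(5);

  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  if (!res.ok) throw new Error(`Range API returned ${res.status}`);

  // The response is a list of "SUFFIX:COUNT" lines sharing the same prefix.
  const body = await res.text();
  return body.split("\n").some((line) => line.startsWith(suffix));
}

// Example: reject breached passwords at signup or password change.
isBreachedPassword("P@ssw0rd").then((breached) => {
  if (breached) console.log("Password found in breach data; ask for another.");
});
```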

SMB security must-haves

Understanding the shifting technology landscape can empower businesses to meet emerging threats. When you know how malicious actors use AI, you can make smarter investments and bolster your security posture where you need it most. Before we get into specific security tools that counter AI-driven attacks, here's how you can stay safer regardless of the threat.

Keep software updated

Running unpatched software is one of the most common poor security practices, and it's how many data breaches start. Don't let attackers exploit a vulnerability you could have patched.

Implement MFA and phishing-resistant authentication 

If you read our SMB blog post about MFA, you probably expected this tip. Get solid, phishing-resistant MFA, and don't be afraid to tighten your authentication policies: block logins from anomalous locations, Tor exit nodes (the darknet), public VPNs and network anonymisers, and devices without assurance.
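
What does tightening policies look like in practice? Below is a minimal, vendor-neutral sketch of a pre-login policy check. The threat-intelligence lookups (isTorExitNode, isAnonymisingProxy), the example IPs, and the device-assurance flag are hypothetical stand-ins for the signals your Identity provider or gateway actually exposes.

```typescript
// A simplified pre-login policy check. The lookups and example IPs below are
// placeholders; real deployments would use threat-intel feeds and device-trust
// signals from the Identity provider.
interface LoginContext {
  userId: string;
  ip: string;
  country: string;          // resolved via GeoIP
  usualCountries: string[]; // where this user normally signs in from
  deviceManaged: boolean;   // device assurance (MDM / device trust) present
}

type Decision = { allow: boolean; reason?: string };

// Hypothetical stand-ins for threat-intelligence lookups (example data only).
const TOR_EXIT_NODES = new Set(["198.51.100.23"]);
const KNOWN_ANONYMISERS = new Set(["203.0.113.77"]); // public VPNs, proxies

const isTorExitNode = (ip: string) => TOR_EXIT_NODES.has(ip);
const isAnonymisingProxy = (ip: string) => KNOWN_ANONYMISERS.has(ip);

function evaluateLoginPolicy(ctx: LoginContext): Decision {
  if (isTorExitNode(ctx.ip)) return { allow: false, reason: "Tor exit node" };
  if (isAnonymisingProxy(ctx.ip)) return { allow: false, reason: "network anonymiser" };
  if (!ctx.deviceManaged) return { allow: false, reason: "device without assurance" };
  if (!ctx.usualCountries.includes(ctx.country)) {
    // Anomalous location: block here, or step up to phishing-resistant MFA.
    return { allow: false, reason: "anomalous location" };
  }
  return { allow: true };
}

// Example: a login from a Tor exit node is rejected before a session is issued.
console.log(evaluateLoginPolicy({
  userId: "u123",
  ip: "198.51.100.23",
  country: "GB",
  usualCountries: ["GB"],
  deviceManaged: true,
})); // -> { allow: false, reason: "Tor exit node" }
```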

Educate 

Proper education on identifying AI-enhanced threats can make every user a potential security first responder. Arm employees and end users with knowledge and examples of how attacks are becoming more personalised and sophisticated. Introduce training on the types of threats, spotting common red flags, and reporting suspicious activity; better a false positive than an unreported breach.

Monitor your security operations and posture 

You can't improve your security posture if you're not tracking it. Assess and monitor your system configurations and policies to keep track of your risk levels and remediation requirements. Features like Okta Insights can leverage data to continuously check your risk levels and help protect you from Identity attacks, provide end users with a simple way to report suspicious activity, and recommend ways to bolster your security posture. Okta Identity Threat Protection extends that continuous risk assessment, monitoring, and remediation to all your critical systems.

Attack protection 

Some technology that protects against AI-driven attacks can also boost your security posture against more traditional threats. No matter what, you're adding value to your business and securing your data with easy-to-use tools.

  • Bot Detection
    • For Customer Identity, Bot Detection helps you discern humans from bots. Easily configure Bot Detection levels based on your organisation's risk tolerance and business needs, triggering a CAPTCHA step in the login experience to eliminate bot and scripted traffic.
  • Brute Force Detection
    • GenAI takes the heavy lifting out of trial-and-error login attempts for attackers. With Workforce Identity, you can protect against password spraying, credential stuffing, and brute-force attacks with an ML-based model that detects organisations under attack and flags malicious IPs with ThreatInsight.
  • Automation to ensure consistency and reduce errors
    • Human error happens. By automating manual processes, you can reduce the risk of mistakes associated with manual Identity management. Guaranteed automatic deprovisioning for Workforce or automated threat alerts across systems and siloed data for Customer Identity are just a few possibilities.
  • Phishing-resistant MFA 
    • While MFA adoption is essential for reducing risk, not all forms of MFA are equally secure. MFA is considered phishing resistant when it can counter attempts to circumvent the authentication process, generally (but not exclusively) through phishing. At Okta, we champion FastPass, which enables cryptographically secure access across devices, browsers, and applications while simplifying login for the end user (a WebAuthn sketch after this list shows the general shape of such a flow). For Customer Identity, we also offer AI-based Adaptive MFA.
  • Security policies that check network, access patterns, location, and device
    • If having the tech is step one, step two is not being squeamish when setting security policies. Block logins from specific geos, the darknet (Tor exit nodes), and devices without assurance.
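
To make the phishing-resistant MFA point concrete, here is a minimal browser-side sketch of WebAuthn, the open standard behind passkeys and many phishing-resistant authenticators. The signed assertion is bound to the site's origin, so a look-alike phishing domain cannot replay it. The challenge shown is a placeholder; in a real flow it comes from your Identity provider, which also verifies the response.

```typescript
// Browser-side WebAuthn assertion. The authenticator signs a server-issued
// challenge that is bound to this site's origin, so a look-alike phishing
// domain cannot reuse the result. challengeFromServer is a placeholder for
// the random bytes your Identity provider issues per login attempt.
async function signInWithPhishingResistantMFA(
  challengeFromServer: ArrayBuffer
): Promise<PublicKeyCredential> {
  const credential = await navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer,
      timeout: 60_000,
      userVerification: "required", // require PIN or biometric on the authenticator
      // allowCredentials can list the user's registered credential IDs; omitting
      // it allows discoverable credentials (passkeys) to be used.
    },
  });

  // The signed assertion goes back to the server, which verifies the signature,
  // origin, and challenge before granting a session.
  return credential as PublicKeyCredential;
}
```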

Make AI work for you with Okta

We’ve been using AI in our tech at Okta for a while. It turns out Identity is well suited to the application of GenAI. Our AI-boosted Bot Detection, for example, uses machine learning to reduce malicious bot attacks by 79% with minimal negative impact on the legitimate end user. The GenAI wave might seem intimidating, but with the right tools you can meet it like a pro. 

We hope you now have a better understanding of how to set your business up for success against increasingly sophisticated cyberattacks. Curious to learn more or ready to make a change now? Check out our other resources, or click here to chat with a Sales specialist.

Further reading:

Okta for Small Business

Okta & AI: How Artificial Intelligence is Reshaping Identity and Access Management 

What the GenAI paradigm shift means for Identity