AI CERTS


AI is Making Cyberattacks More Sophisticated, Leaving Cybersecurity Teams Struggling to Keep Up

Cyberattacks are becoming more advanced as artificial intelligence (AI) plays a bigger role in automating and refining malicious activities. A new report by the Information Systems Audit and Control Association (ISACA) reveals that cybersecurity teams worldwide are struggling to keep up, especially as resources and funding for these teams lag behind the growing threat.


The ISACA report, which surveyed nearly 6,000 organizations across the globe, found that 39% of respondents have seen an increase in cyberattacks over the past year. Privacy breaches have also risen, with 15% of companies reporting more incidents compared to the previous year. These attacks are becoming increasingly sophisticated, putting a strain on cybersecurity professionals who are already underfunded and understaffed.

Europe Facing Critical Challenges

Cybersecurity teams in Europe appear to be facing some of the most significant challenges. Over 60% of European respondents said their cybersecurity teams are understaffed, and 52% reported that their budgets were insufficient to deal with the increasing volume and complexity of attacks.

One of the key drivers of this complexity is the rise of AI-powered attacks. Chris Dimitriadis, ISACA's chief global strategy officer, said AI has drastically changed the landscape of cybercrime, particularly through its role in enhancing ransomware attacks. Ransomware remains the most common type of attack, in which attackers encrypt or lock victims' data and demand payment to restore access.

"The sophistication of AI is making these attacks harder to detect," Dimitriadis explained. "AI-driven tools can analyze and generate highly personalized phishing emails, for example, that are nearly indistinguishable from legitimate communications." Previously, phishing attempts were often riddled with language errors or odd phrasing, but AI now enables attackers to mimic human communication in both tone and content with striking accuracy.
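Because language quality no longer reliably betrays phishing, defenders lean more heavily on technical signals that a well-written message body cannot hide, such as a mismatch between a trusted sender's identity and the actual sending domain. A minimal illustrative sketch (the domain names and headers below are invented for the example, not taken from the report):

```python
import re

# Hypothetical set of domains this organization treats as trusted senders.
TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}

def sender_domain(from_header: str) -> str:
    """Extract the domain from a From header like 'IT Support <it@example.com>'."""
    match = re.search(r"@([\w.-]+)>?\s*$", from_header)
    return match.group(1).lower() if match else ""

def looks_suspicious(from_header: str) -> bool:
    """Flag mail whose sending domain is not on the trusted list.

    This catches look-alike domains (e.g. 'examp1e.com' with a digit one)
    that a fluent, AI-generated message body would never reveal on its own.
    """
    return sender_domain(from_header) not in TRUSTED_DOMAINS

print(looks_suspicious("IT Support <it@examp1e.com>"))  # look-alike domain -> True
print(looks_suspicious("HR Team <hr@example.com>"))     # trusted domain -> False
```

Real mail filters combine many such signals (SPF/DKIM results, reply-to mismatches, link targets); the point is that these checks survive even when the prose itself is flawless.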

Generative AI and the New Wave of Cybercrime

Generative AI (GenAI) is a particularly concerning development. These systems can create content that mirrors human language and cultural nuances with such precision that victims are easily fooled. Hackers can use GenAI to craft highly targeted messages that incorporate accurate personal or organizational details, making them much more convincing.

"AI allows attackers to deeply understand their targets, crafting messages that resonate on a personal or business level," Dimitriadis said. "This makes traditional security training, like spotting phishing attempts, less effective because the content now looks so legitimate."

A separate investigation by Norwegian AI start-up Strise found that even large language models (LLMs) like ChatGPT can be manipulated for nefarious purposes. While the chatbot refuses direct requests for illegal advice, such as how to launder money, creative prompts can coax it into revealing useful details anyway. Strise's CEO, Marit Rødevand, explained that by asking ChatGPT to generate a fictional script about a character laundering money, the AI provided detailed advice.

"It was a real eye-opener," Rødevand said. "It's like having your own personalized corrupt financial adviser on your mobile 24/7." This raises concerns about the ability of AI systems to inadvertently assist criminals if proper safeguards are not in place.

Global Implications and State-Backed Threats

The global scale of AI-driven cybercrime is alarming, and even major corporations are feeling the effects. In February 2024, Microsoft and OpenAI revealed that hackers were leveraging AI tools to improve their attacks. These attacks are often backed by nation-states, with actors from Russia, North Korea, Iran, and China utilizing AI to refine their cyber tactics. The ability to use large language models to enhance research on targets and create more convincing phishing campaigns is a significant concern for governments and corporations alike.

Despite efforts to curb misuse, Microsoft and OpenAI admitted that it is nearly impossible to stop all instances of AI being used for malicious purposes. This highlights the need for more proactive cybersecurity measures to stay ahead of AI-driven threats.

Underfunded Cybersecurity Teams

The ISACA report paints a troubling picture for cybersecurity teams trying to protect their organizations. Over half of the surveyed teams reported that they are underfunded, making it difficult to invest in advanced technologies and strategies to combat evolving threats. Dimitriadis pointed out that cybersecurity is often seen as a cost center because it doesn’t directly contribute to revenue generation, leading to chronic underinvestment.

"Cybersecurity is still undervalued in many organizations," Dimitriadis said. "Decision-makers often don't realize the importance of proper cybersecurity measures until it's too late."

Another key finding from the report was the lack of training for staff on digital trust, with 71% of companies admitting they do not provide this essential education. This gap leaves organizations more vulnerable to social engineering attacks, where human error becomes the weak link in an otherwise secure system.

Combating the AI-Driven Threat

To counter the rise in AI-driven cyberattacks, organizations need to take proactive measures. One approach is to adopt future-proof technological platforms that can detect and mitigate threats early. Advanced threat detection systems powered by AI can help companies stay ahead of evolving attacks, but these technologies require significant investment.
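A common building block of such detection systems is statistical anomaly scoring: establish a baseline of normal activity, then flag deviations that are many standard deviations away from it. A toy sketch of the idea, assuming hourly failed-login counts for one account (all numbers invented for illustration, not drawn from the report):

```python
import statistics

def anomaly_score(history: list[int], current: int) -> float:
    """Z-score of the current value against the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against a flat baseline
    return (current - mean) / stdev

# Hypothetical hourly failed-login counts over a normal day.
baseline = [2, 3, 1, 2, 4, 3, 2, 3]

# A sudden spike, e.g. an automated credential-stuffing attempt.
score = anomaly_score(baseline, 40)
if score > 3.0:
    print(f"ALERT: failed logins {score:.1f} standard deviations above baseline")
```

Production systems replace this single z-score with learned models over many features, but the principle is the same: detect behavior that deviates from an established norm rather than matching known attack signatures.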

Additionally, companies must prioritize cybersecurity training and awareness programs. Even with sophisticated AI attacks, educating employees on recognizing suspicious activity remains crucial. Rødevand emphasized the need for a balanced approach, combining technological advancements with human vigilance.

Ultimately, as AI continues to evolve and reshape the cyber threat landscape, cybersecurity teams will need more funding, resources, and support to protect organizations effectively. Without these investments, businesses may find themselves outmatched by AI-enhanced attackers.

Looking Forward

The future of cybersecurity will likely depend on how quickly organizations adapt to AI-driven threats. Companies that fail to recognize the growing sophistication of cyberattacks could face significant financial losses, privacy breaches, and damage to their reputation. Investments in both technology and human expertise will be critical to staying ahead in this rapidly changing landscape.

Source: AI is making cyberattacks more sophisticated and cybersecurity teams are struggling to keep up