Artificial Intelligence and the New Frontier of Cybersecurity

CYRIN Newsletter

Today’s cybersecurity landscape resembles a high-stakes, ever-shifting chess match between defenders and cybercriminals. Everything changed when OpenAI launched ChatGPT in November 2022, revolutionizing access to and everyday use of AI while also widening the threat landscape. In short, AI upended the cybersecurity industry. The last three years have brought exciting advances for companies across many industries, but the use of AI by malicious actors has risen just as quickly, and the industry is struggling to keep up. This month’s newsletter focuses on how AI is rapidly reshaping cybersecurity and what this means for professionals and organizations navigating this challenging, constantly shifting environment. The key? Harnessing AI’s benefits while managing its weaknesses.

AI has two faces

Rohan Pinto, writing for Forbes, describes AI as a “double-edged weapon” in cybersecurity. While AI improves threat detection, monitoring, and response, making it central to modern defense strategies, its power is a threat as well as a safeguard: attackers answer each new AI advancement with AI-driven malware and targeted attacks of their own. In this evolving “arms race,” defenders must position themselves ahead of new threats rather than merely react to them. As the article reiterates, AI is here to stay and is already transforming cybersecurity. To underscore that point, 67% of IT and cybersecurity professionals have begun implementing AI to enhance security, while an additional 27% plan to test its capabilities.

In its Cyber Insights 2025 issue, SecurityWeek noted that “cybersecurity has always been a game of leapfrog advantage, with the attackers being proactive and defenders being reactive.” Now, however, AI is taking the scale and pace of this dynamic to a whole new level.

Role of defensive applications

Even before the rapid adoption of ChatGPT, cyberthreats had been growing more common and more sophisticated. AI is capable of analyzing enormous data sets, which can help uncover vulnerabilities before they become a problem while also “automating solutions.” In his Forbes article, Pinto writes that machine learning algorithms powered by AI can “detect network irregularities, identify phishing attempts and find zero-day vulnerabilities.” This happens in real time, stopping threats before they can infiltrate and compromise systems. AI can also analyze user activity patterns, flagging insider threats as they occur. Using historical data, AI can formulate “predictive models” that detect dangers before they materialize.
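To make that idea concrete, the sketch below shows one common way machine learning is applied to detecting network irregularities: an unsupervised anomaly detector trained on flow-level features. This is a minimal illustration, not a description of any vendor’s product; the feature set, the synthetic data, and the choice of scikit-learn’s IsolationForest are assumptions made for brevity.

```python
# Minimal sketch: unsupervised anomaly detection on network flow features.
# Assumptions: flows are summarized as (bytes, packets, duration, distinct ports);
# a real deployment would use richer features and labeled validation data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flow records: [bytes, packets, duration_s, distinct_ports]
normal = np.column_stack([
    rng.normal(50_000, 10_000, 2_000),   # typical byte counts
    rng.normal(40, 10, 2_000),           # typical packet counts
    rng.normal(1.5, 0.5, 2_000),         # typical flow duration
    rng.integers(1, 4, 2_000),           # few destination ports
])

# A handful of suspicious flows, e.g. a fast scan touching many ports
suspicious = np.array([
    [2_000, 300, 0.2, 180],
    [1_500, 250, 0.1, 220],
])

# Train on traffic assumed to be mostly benign, then score new flows
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.decision_function(suspicious)   # lower = more anomalous
flags = model.predict(suspicious)              # -1 = anomaly, 1 = normal

for flow, score, flag in zip(suspicious, scores, flags):
    print(f"flow={flow.tolist()} score={score:.3f} anomalous={flag == -1}")
```

In practice, defenders pair detectors like this with signature-based tools and human review, since an unsupervised model flags unusual traffic, not confirmed attacks.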

Offensive use of AI

While AI can be a powerful tool for protection, it has a flip side as a source of cyber threats. Attackers use AI to “create more sophisticated and elusive assault methods” that require new defense strategies. For example, polymorphic malware can rewrite its own code to evade detection, and AI-generated deepfakes are increasingly used for social engineering, including fraud and misinformation campaigns.

The emergence of AI in cybersecurity

Since its inception, AI has radically transformed cybersecurity, creating a potential crisis for cybersecurity professionals: a situation where analysts no longer lead the charge against threats and malicious actors.

However, a McKinsey report indicates that automating lower-risk tasks with AI agents, such as routine system monitoring and compliance checks, allows organizations to free up their teams to focus on high-priority threats. This targeted automation can improve efficiency and enhance overall risk management.

The McKinsey report goes on to say that in parallel, agentic AI is expected to accelerate Security Operations Center (SOC) automation, “where AI agents could soon work alongside humans in a semi-autonomous manner to identify, think through, and dynamically execute tasks such as alert triage, investigation, response actions, or threat research.”

The ability to quickly analyze patterns across vast data sets means AI can identify threats that a human operator might not notice. According to PurpleSec, it could analyze trends and forecast potential attacks that might otherwise slip under the radar, like “subtle shifts in network traffic.” This can help teams stay ahead, turning raw data into a strategic advantage. As a career path, AI could actually open new avenues of opportunity. It could lift analysts out of endless dashboard-watching, “shifting them to managing smart systems and hunting down tricky threats. It’s a chance to evolve from reactive firefighting to crafting proactive strategies.”
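As a rough illustration of the trend analysis described above, the sketch below flags “subtle shifts in network traffic” by comparing current volume against a rolling baseline. The window size, threshold, and synthetic data are assumptions chosen for demonstration, not a description of PurpleSec’s or anyone else’s tooling.

```python
# Illustrative sketch: flag subtle shifts in traffic volume against a rolling baseline.
# Assumption: hourly request counts are available; WINDOW and THRESHOLD are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
hourly_requests = rng.normal(1_000, 50, 96)        # four days of "normal" traffic
hourly_requests[-6:] += np.linspace(100, 400, 6)   # gradual suspicious ramp-up

WINDOW = 24       # hours used to build the baseline
THRESHOLD = 3.0   # standard deviations above baseline that trigger an alert

for hour in range(WINDOW, len(hourly_requests)):
    baseline = hourly_requests[hour - WINDOW:hour]
    zscore = (hourly_requests[hour] - baseline.mean()) / baseline.std()
    if zscore > THRESHOLD:
        print(f"hour {hour}: {hourly_requests[hour]:.0f} requests "
              f"(z={zscore:.1f}) - possible emerging attack")
```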

But just as AI can detect threats, malicious actors can use it to change those threats in real time, making certain anomalies harder to detect. Because attacker-controlled AI can learn and evolve on the fly, it can erode security safeguards and render previously reliable defenses obsolete. Every innovation can potentially create a new threat.

Forbes reported that “global cybercrime damages are projected to reach $10.5 trillion in 2025, with much of that growth attributed to the malicious use of AI.” This one-upmanship on both sides will not end any time soon. With greater advancements needed to respond to ever more sophisticated threats, AI will shape predictive cybersecurity designed not only to “detect and respond to existing threats but also to anticipate and mitigate potential attacks before they occur.” In addition to advanced phishing detection, AI could make possible “self-healing cybersecurity systems” that patch themselves without human involvement.

The criminal element

While we’re only three years out from the launch of ChatGPT, SecurityWeek quotes Kevin Robertson, co-founder and COO at Acumen Cyber, who warns that “by the end of 2025, it’s reasonable to assume that criminal organizations and adversarial nation-states will have developed their own generative AI systems similar to ChatGPT but devoid of ethical safeguards. These ungated AI models could be exploited to scrape vast amounts of data from platforms like LinkedIn, as well as compile credentials from dark web listings. The convergence of such technology with malicious intent may enable the production of finely targeted spear-phishing campaigns executed at unprecedented speed and scale.” Melissa Ruzzi, director of AI at AppOmni, believes that criminals will use available AI frameworks to create havoc: “It takes tremendous effort and skill to develop original AI models. Instead, I expect criminals will continue to use available models, particularly those with the least security guardrails.”

Fortune Business Insights reveals that cybercriminals are already investing heavily in AI systems, and it predicts more disruption and increasingly complex and sophisticated attacks. The firm values the global artificial intelligence in cybersecurity market at $26.55 billion in 2024 and projects it will grow from $34.10 billion in 2025 to $234.64 billion by 2032, a compound annual growth rate (CAGR) of 31.70%.
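That 31.70% figure follows directly from the standard compound-growth formula. The quick check below uses only the 2025 and 2032 projections quoted above (the dollar figures are Fortune Business Insights’; the few lines of Python are just illustrative arithmetic):

```python
# Quick arithmetic check of the quoted CAGR using the compound-growth formula:
# CAGR = (end_value / start_value) ** (1 / years) - 1
start, end, years = 34.10, 234.64, 2032 - 2025

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")   # ~31.7%, consistent with the quoted 31.70%
```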

AI in the job market

AI is changing the nature of the workplace in many innovative and exciting ways. That change has a darker side, however, when it comes to hiring and firing. MarketWatch recently reported that six in 10 managers are already using AI to help make decisions about promotions, raises and layoffs, and people are becoming increasingly worried about being replaced by AI.

Those who feel most at risk are in entry-level positions or analyst roles built around repetitive monitoring tasks that AI could easily complete. A recent article in SecureWorld may ease some of those worries, reporting that “AI is changing the nature of cybersecurity work but not eliminating it wholesale.” However, the “skills gap” is a real issue, as workers trained on traditional threats also need to be fluent in rapidly changing AI-powered technologies. “AI isn’t replacing cybersecurity talent. It’s redefining it. Our future advantage lies in how well we integrate human judgment with machine speed,” said Sanjay Sharma, CISO, Zafin. “The real risk isn’t AI taking jobs—it’s falling behind while others use it to move faster, smarter, and more secure.” In this climate, “upskilling” is becoming paramount for anyone interested in a career in cybersecurity. It’s no longer just about writing code or “writing rules,” but about “validating intent” to check for malicious actors.

AI in the marketplace

AI means big money and big changes. Various studies indicate that the global AI market for cybersecurity will grow at annual rates of roughly 25–30% over the next five to seven years. In short, AI is a game changer for cybersecurity in every industry, business and household, and will continue to be so for the foreseeable future.

How CYRIN can help

CYRIN will soon release our first AI product, based on our research into neural networks. It will be the first of many advanced labs we introduce in 2025 and 2026. At our core, we remain committed to providing the most advanced research and development in training humans, whether they are operating in conjunction with AI or trying to thwart AI-enabled threat actors.

We’ll continue to work with our industry partners to address major challenges and set up realistic scenarios that allow them to train their teams and prepare new hires for the threats they will face. Government agencies have been using CYRIN for years, training their front-line specialists on the real threats faced on their ever-expanding risk surface.

For educators, we consistently work with colleges and universities, both large and small, to create realistic training that matches the environment students will encounter when they graduate and enter the workforce. In an increasingly digitized world, experiential training is critical. Unless you get the “hands-on” feel for the tools and attacks and train on incident response in real-world scenarios, you just won’t be prepared when the inevitable happens. A full-blown cyberattack is not something you can prepare for after it hits.

The best time to plan and prepare is before the attack. Our training platform teaches fundamental solutions that integrate actual cyber tools from CYRIN’s labs, allowing you to practice 24/7, in the cloud, with no special software required. Our new programs, including Digital Twins, can create real-world conditions for you to practice before you must act. Cyber is a team effort; to see what our team can do for you, take a look at our course catalog, or better yet, contact us for further information and your personalized demonstration of CYRIN. Take a test drive and see for yourself!


Contact Us for details or to Set Up a CYRIN Demo
+1-800-850-2170 sales@cyrintraining.com

Watch CYRIN: The Next-Generation Cyber Range

Learn More About How CYRIN Online Training Can Benefit You