Recent news reports and the publicity surrounding ChatGPT make clear that it is having a major impact on the tech scene, with wider implications for many industries and people in ways that have yet to be imagined.
ChatGPT is an artificial-intelligence chatbot developed by San Francisco-based AI research company OpenAI. Released in November 2022, it can have conversations on topics from history to philosophy, generate lyrics in the style of Taylor Swift or Billy Joel, and suggest edits to computer programming code.
A distinguishing feature of ChatGPT is that it does not simply return an index of search results; instead, it uses its machine learning capabilities to explain complex topics and offer practical solutions.
To many, ChatGPT also represents a potential seismic shift in cybersecurity, exposing unique vulnerabilities that companies and industries are actively working to address. Just recently, OpenAI, backed by a multibillion-dollar investment from Microsoft Corp., rolled out GPT-4, the newest, “safer” model behind ChatGPT.
GPT-4 can generate text and accept image and text inputs — an improvement over its predecessor, which only accepted text — and performs at “human level” on various benchmarks.
The surge of attention around ChatGPT is prompting pressure inside tech giants, including Meta and Google, to move faster, potentially sweeping safety concerns aside. ChatGPT, along with text-to-image tools such as DALL-E 2 and Stable Diffusion, is part of a new wave of software called generative AI. These systems create works of their own by drawing on patterns they’ve identified in vast troves of existing, human-created content. The technology was pioneered at big tech companies like Google, which in recent years have grown more secretive, announcing new models or offering demos but keeping the full product under lock and key. Meanwhile, research labs like OpenAI have rapidly launched their latest versions, raising questions about how corporate offerings such as Google’s stack up.
Most of the focus on ChatGPT has been on the implications for content creation, but experiments are beginning to reveal that AI chatbots may shake up the cybersecurity world as well.
So, what is the cybersecurity industry’s take on all this? For starters, as of January 2023, ChatGPT had attracted more than a million users. Most have been using it for mundane tasks, but the industry is sitting up and taking note of the more sinister capabilities it has let loose.
Recently, TechCrunch asked the question, “could ChatGPT be abused by hackers with limited resources and zero technical knowledge?” Just weeks after ChatGPT debuted, Israeli cybersecurity company Check Point demonstrated how the web-based chatbot, when used in tandem with OpenAI’s code-writing system Codex, could create a phishing email capable of carrying a malicious payload. Check Point threat intelligence group manager Sergey Shykevich told TechCrunch that use cases like this illustrate ChatGPT’s “potential to significantly alter the cyber threat landscape,” adding that it represents “another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities.”
An article from NBC News titled “ChatGPT has thrown gasoline on fears of a U.S.-China arms race on AI” makes the point that tech companies like Google and Microsoft aren’t the only ones battling it out for supremacy in AI. It is also a battle between nation-states, specifically China and the U.S., to lead the technology race. The story notes: “AI has become increasingly intertwined with U.S. geopolitical strategy even as chatbots, digital artwork and other consumer uses are stealing the headlines. What’s at stake is a host of tools that countries hope to wield in a fight for global supremacy, according to current and former U.S. government officials and outside analysts. And it’s not just about military weapons like autonomous fighter jets. Some of the same advances that are powering ChatGPT may be useful for such varied geopolitical tools as large-scale propaganda machines, new kinds of cyberattacks, and ‘synthetic biology’ that could be important for economic growth.”
Historically, companies like Microsoft saw AI as an exciting evolution in the constantly changing computer and cyber universe: in theory, AI might perform many tasks that humans can, such as searching the web, generating ads, or writing memos.
While ChatGPT’s guardrails prevent it from doing straightforwardly malicious things, like detailing how to build a bomb or writing malicious code, multiple researchers have found ways to bypass those protections.
Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, said he was able to get the program to perform a number of offensive and defensive cybersecurity tasks, including crafting a World Cup-themed phishing email in “perfect English,” generating Sigma detection rules to spot cybersecurity anomalies, and writing evasion code that can bypass those same detection rules.
Most notably, Ozarslan was able to trick the program into writing ransomware for macOS, despite terms of use that explicitly prohibit the practice.
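For readers unfamiliar with the format Ozarslan mentioned, Sigma is an open, vendor-neutral YAML standard for describing log-based detections that can be translated into queries for many SIEM platforms. The sketch below is a minimal, hypothetical illustration of the format’s general shape, not Ozarslan’s actual output; the rule title, fields, and values are invented for this example.

```python
import yaml  # pip install pyyaml

# A hypothetical Sigma rule: a log source plus a detection condition
# over event fields. Real rules follow this same basic structure.
SIGMA_RULE = r"""
title: Suspicious PowerShell Encoded Command
status: experimental
description: Flags PowerShell started with an encoded command line, a common obfuscation technique
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
level: high
"""

# Parse the rule the way a converter or SIEM pipeline might ingest it.
rule = yaml.safe_load(SIGMA_RULE)
print(rule["title"])                   # Suspicious PowerShell Encoded Command
print(rule["detection"]["condition"])  # selection
```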
As much as AI could be used in nefarious ways, many are predicting that it could also provide incalculable help to cybersecurity defenders.
In a blog post, the Identity Management Institute projected that spending on cybersecurity would exceed $133 billion by 2022 and noted that businesses are using AI and machine learning to help manage that effort. The post suggests that using AI for cybersecurity can be both the best and worst of worlds.
“Speed is where AI excels the most by surpassing the human capacity to detect and mitigate threats. Seventy-five percent of cybersecurity executives agree AI allows them to respond to breaches faster, and the technology has been found to speed up evaluations of ‘breach-worthy’ vulnerabilities by 73%. Fifty-nine percent of cybersecurity professionals say AI streamlines the process of detecting and responding to critical system weaknesses, and enterprises using the technology are able to find and fix such weaknesses 40% faster.”
However, as the article goes on to say, “Ironically, speed is also a major drawback of AI. Hackers are embracing the machine learning algorithms behind the technology’s success to create nuanced attacks personalized for specific individuals. Because AI can be ‘taught’ with data sets, hackers can either create their own programs or manipulate existing systems for malicious purposes. Attacks executed with AI tend to be more successful, perhaps because the technology makes it easier to develop malware with the ability to evade even sophisticated threat detection.”
Help Net Security had a more optimistic take on ChatGPT and cybersecurity, noting that advances such as ChatGPT will move the industry forward in a much-needed way: “ChatGPT is a gold mine of insight that removes much of the work involved in research and problem-solving by enabling users to access the entire corpus of the public internet with just one set of instructions. This means, with this new resource at their fingertips, cybersecurity professionals can quickly and easily access information, search for answers, brainstorm ideas and take steps to detect and protect against threats more quickly.”
The article went on to say that, in theory, ChatGPT and similar AI models could help close the cybersecurity talent shortage by making individual security professionals significantly more effective, so much so that with AI, one person could accomplish what previously took multiple individuals to achieve. And just as TechCrunch indicated that inexperienced users could use ChatGPT to cause disruption, the effect could also work in reverse, enabling even junior personnel with limited cybersecurity experience to get the answers and knowledge they need almost instantaneously.
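To make that “answers at their fingertips” idea concrete, here is a minimal, purely illustrative sketch of how an analyst might triage a log line programmatically. It assumes the openai Python package’s pre-1.0 ChatCompletion interface and an OPENAI_API_KEY environment variable; the log line and prompt are invented for this example and are not drawn from any of the articles quoted above.

```python
import os
import openai  # pip install openai (pre-1.0 style API shown here)

openai.api_key = os.environ["OPENAI_API_KEY"]

# A hypothetical suspicious log line a junior analyst might paste in for triage.
LOG_LINE = (
    "Oct 12 03:14:07 web01 sshd[2817]: Accepted password for root "
    "from 203.0.113.45 port 52113 ssh2"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a security analyst assistant. Be concise."},
        {"role": "user",
         "content": "Explain why this log line might be suspicious and "
                    f"suggest next triage steps:\n{LOG_LINE}"},
    ],
    temperature=0.2,  # keep answers focused and repeatable for triage work
)

print(response.choices[0].message.content)
```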
SC Magazine reported that while ChatGPT may end up presenting concerns, some argue that this is a reality of most new innovations: creators should do their best to close off avenues of abuse, but it is impossible to fully prevent bad actors from using new technologies for harmful purposes. In some ways, this is expected with any technological advancement.
“Technology disrupts things, that’s its job. I think unintended consequences are a part of that disruption,” said Casey John Ellis, CTO, founder, and chairman of Bugcrowd, quoted in the same SC Magazine article. Ellis said he expects to see tools like ChatGPT used by bug bounty hunters and the threat actors they research over the next five to 10 years. “Ultimately it’s the role of the purveyor to minimize those things but also you have to be diligent.”
So, while a machine-learning chatbot has raised eyebrows in a community already skeptical about AI, in some ways it presents the same concerns as any new development.
At CYRIN we know that as technology changes, cybersecurity professionals need to develop the skills to evolve with it. We continue to develop “hands-on” training: our courses teach fundamental skills and integrate actual cyber tools from CYRIN’s labs, letting you practice 24/7, in the cloud, with no special software required. These tools and our virtual environment are ideal for a mobile, remote workforce. People can train at their own pace, with all the benefits of remote work, remote training, and flexibility. Cyber is a team effort; to see what our team can do for you, take a look at our course catalog, or better yet, contact us for further information and your personalized demonstration of CYRIN. Take a test drive and see for yourself!