4 min read

AI is changing cybersecurity as we know it. How to stay on top of it?

Explore the pivotal role of artificial intelligence in reshaping the cybersecurity landscape.
Server room and data center by luismmolina

Explore the pivotal role of artificial intelligence in reshaping the cybersecurity landscape.

As 2023 has drawn to a close, it's clear that artificial intelligence has had a profound impact on our digital world. This influence extends significantly into the realm of cybersecurity. AI is transforming the field at an unprecedented pace. Here are seven key strategies and trends to consider in 2024:

1. Understand the Risks of Language Models

No matter if you are currently building a next AI security unicorn or implementing a Large Language Models (LLMs) in your cybersecurity strategy, understanding the associated risks is essential. The OWASP Foundation, renowned for its expertise in cybersecurity threats, has recently published a comprehensive guide on LLM vulnerabilities. This is a must-read for any cybersecurity expert. One of the most common attack covered and widely reported in 2023 is "Prompt Leaking" where we trick LLM to reveal us some extra information from the initial prompt. For an in-depth analysis, refer to the Adversa Blog.

2. Look for the innovative practice. I will come from Devs (again)

For a last couple of years, cybersecurity has been re-shaped by innovative models figured out by the engineering teams. Traditionally, the central security teams within large organizations were primarily responsible for driving the adoption of security tools and practices. But a few years ago, this dynamic has changed, and security responsibilities are increasingly being assumed by Devs, DevOps, and Engineering teams. This trend is largely attributed to the adoption of cloud-native architectures and signifies a broader movement in the IT industry towards integrating development, operations, and security (DevSecOps). This evolution is expected to influence AI in cybersecurity, potentially leading to the emergence of new Cybersecurity-AI startups. These entities are likely to focus on the specific requirements of development and operations teams, fostering AI tools and solutions that are in sync with contemporary, agile, and cloud-centric IT frameworks. For an illustration of such an AI-driven security tool, view the GitHub Universe 2023 opening keynote about Github Copilot (starting from 21:21).

3. Extend your Toolkit

The realm of AI in cybersecurity is rapidly evolving, and these advancements are already making their way into mainstream use. For those utilizing major cloud services, AI integration is likely a current reality or an imminent change. A wave of novel AI functionalities is on the horizon. An example is Microsoft's development of Security Copilot, detailed in the Session of Charlie Bell and Vasu Jakkal at Microsoft Ignite. Concurrently, Google has enhanced their major security tools (Security Command Center) with advanced Sec-PaLM capabilities.

4. Embrace AI Governance

As AI becomes increasingly embedded in various sectors and matures, there's a growing need for standardized security controls and benchmarks. This is especially critical for those in roles such as security auditors or assessors, tasked with evaluating risks associated with AI systems. Here are some emerging governance initiatives that have a chance for wide adaption in the coming months:

  • Cloud Security Alliance (CSA) - Renowned for their comprehensive Cloud Control Matrix and the STAR certification program, CSA has recently launched the AI Safety Initiative. This is an opportune time to delve deeper as the CSA Virtual AI Summit 2024 is on the horizon (January 17-18, 2024). Notably, one of the sessions is intriguingly titled "How to Delay Building Skynet". So buckle up!
  • The OWASP Top 10 for LLMs - As previously mentioned, this list serves as an excellent resource for those aiming to comprehend and mitigate the security challenges of AI, especially in relation to language models.
  • Google's Secure AI Framework (SAIF) - Google's perspective on constructing responsible AI applications is encapsulated in SAIF. It's worth monitoring to observe how this framework evolves and influences the field.

5. Boot up your own assistant

If you are using a ChatGPT, it's a good idea to enhance and customise it to your specific needs. I created an article on how to turn ChatGPT it into the ultimate cybersecurity companion. Not too keen on feeding it your own data? No worries, there are pre-made options out there. Have a look at CloudSecGPT by Marco Lancini – it's a solid starting block to get your gears turning. And hey, if you're like me and enjoy a dash of humor in your AI, you might end up crafting something like Wintermute. It's all about making it work for you and maybe having a little fun along the way.

6. Protect your market value

Here's the thing. AI won't probably take your job in Cyber within next 2–3 years (the next decade is another story), but it can stop you from landing your next dream job. Why? The increasing reliance on candidate-screening algorithms in recruitment processes. Hilke Schellmann's eye-opening book, The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now, dives deep into how AI can sway your chances during the application process. It's a must-read.

What should cybersecurity professionals do in response? Until there's more oversight and regulation in this domain, it's imperative to adapt and stay ahead. Utilize AI tools like ChatGPT or Bard to refine your CV and prepare for interviews. Inputting the job description into these tools can generate a tailored list of interview questions, offering a strategic advantage.

But remember, AI can't beat the human touch in complex processes like career planning, especially during face-to-face interviews. Struggling to break into your next cybersecurity role? Check out www.breakincyber.com. Mike's got some serious skills in spicing up LinkedIn profiles and helping you nail that job interview.