Published: October 25, 2023 | Updated: September 11, 2024

The Future of AI in Cybersecurity: Is There a Space for Breakthrough?

In a recent conversation, Roman Reznikov, VP of Delivery, Digital Segment at Intellias, and Ed Adams, CEO at Security Innovation, discuss the dynamic intersection between artificial intelligence and the ever-evolving world of cybersecurity.

Join a riveting discussion at the intersection of artificial intelligence and cybersecurity as we engage in a conversation with Ed Adams, the CEO of Security Innovation and author of the book “See Yourself in Cyber: Security Careers Beyond Hacking.” The cybersecurity landscape has witnessed a significant transformation fueled by the rise of AI technologies. In this interview, we navigate the implications, challenges, and potential AI brings to the table, shedding light on data integrity, model accuracy, and software security. Watch the full interview that delves into the heart of cybersecurity and AI, shaping the way we protect our digital future.

FULL VIDEO TRANSCRIPT

Roman Reznikov: Please welcome Ed Adams, owner and CEO of Security Innovation.

Hello, Ed. Nice to meet you here.

Ed Adams: Hello, Roman. Thank you for having me.

Roman Reznikov: Today, we’re going to have a conversation about AI and its impact on the cybersecurity world. My first question to Ed would be about how AI has changed the landscape of cybersecurity. With the growing interest in AI across different domains, how can we address concerns arising from AI and the risks that companies might face using AI-driven cybersecurity tools?

Ed Adams: Obviously, artificial intelligence is very popular these days, and there’s a big rush for everyone to figure out how to use it, especially with cybersecurity gaining much interest. Regardless of how AI is used, it always comes down to three primary areas that organizations need to be concerned with:

  • The data used to train the algorithms.
  • The model for the AI systems.
  • The software itself.

Let me discuss each of these in detail.

Starting with the data used to train AI algorithms: AI is currently quite flawed. It often provides incorrect information and sometimes creates its own false sources, citing them as fact, which is disinformation or fake news. Beyond AI-created fake information, data stores inherit biases from their human creators, which need to be removed; otherwise, AI becomes biased by default. For example, if you ask an AI system what a nurse looks like, it might produce results with racial, gender, or other biases built into the information. Your AI is only as smart as the data it’s trained on. So that’s the data; that’s the most important area.

The second is the model used for the AI system. An AI model is a program or algorithm that uses a set of data to recognize certain patterns, allowing it to reach conclusions and make calculations or predictions when provided with enough information. This makes an AI model particularly suitable for solving complex problems or delivering higher efficiency, cost savings, and accuracy than human-based approaches. But just like humans doing a math problem, if the formula you use to calculate an answer is wrong, the answer will be wrong. For example, think about calculating the long side of a right triangle, a 90-degree triangle. If you use the formula a^3 + b^3 = c^3 instead of the correct formula, which is a^2 + b^2 = c^2, then you solve for c incorrectly. That’s the same problem with the models of AI systems.
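To see why a wrong model is so dangerous, here is a minimal Python sketch of the triangle example above: both functions run without error, but the flawed model quietly returns a wrong answer.

```python
import math

def hypotenuse_correct(a: float, b: float) -> float:
    # Correct model: a^2 + b^2 = c^2 (the Pythagorean theorem)
    return math.sqrt(a**2 + b**2)

def hypotenuse_wrong(a: float, b: float) -> float:
    # Plausible-looking but wrong model: a^3 + b^3 = c^3
    return (a**3 + b**3) ** (1 / 3)

# A classic 3-4-5 right triangle
print(hypotenuse_correct(3, 4))  # 5.0
print(hypotenuse_wrong(3, 4))    # ~4.498 -- wrong, yet returned without any error
```

An AI system built on a flawed model fails the same way: confidently and silently.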

So, you’ve got the data; that’s the first problem. Then there are the models. And the third is the security of the software itself because, after all, artificial intelligence is just software. It’s just a software program. It’s written by humans, which means it can be written insecurely. It can be attacked, hijacked, or altered just like any other piece of software. So those are the three areas that anyone has to be concerned about with respect to using AI, but in particular for cybersecurity.

Roman Reznikov: That’s an interesting topic. Do you see any products on the market that already use AI specifically for cybersecurity purposes right now?

Ed Adams: There are a lot of products that claim to be using AI. Whether they’re actually using AI, I don’t know; it depends on how transparent those vendors are. They may be using machine learning, which is not quite the same as AI, but there’s a lot of marketing hype around AI-equipped or AI-powered systems. So, ask for transparency, and if you’re concerned, ask about those three areas. What data are you using to train the system? What models are you using? And how secure is the software? How can you be confident in the security of that software?

Roman Reznikov: So, would it be correct to say that, as of now, there has been no breakthrough in the cybersecurity world despite the appearance of and growing interest in AI? It’s more or less the same technology stack as it was a couple of years ago.

Ed Adams: Well, technology has changed, and more so because it’s been made more accessible. But in terms of significant developments and transformations in artificial intelligence, I don’t think we’re there yet. All of the stuff about artificial intelligence is really cool. It’s neat. I love it. And today, in my opinion, it’s still mostly a nice little game where you can generate information faster, but the fact is that you still have to have prompts engineered in a certain way to get the right information.

But it tells you that you have to dig deeper into those training datasets to find the pearls you’re looking for. So, there’s still a massive opportunity for artificial intelligence, and we’re going to need to understand what training data is going into the models. Do you agree with it or not? Is it the right information? And until those things happen, to me it’s still just a cool toy. The data portion is going to become an increasingly serious security problem because a lot of these data stores are open, decentralized, and shared, which is great for gathering a lot of information.

But it also provides the opportunity for poisoning datasets, either intentionally or unintentionally. We’re going to have a data supply chain issue for AI, just like we have a software supply chain issue today. There was a recent presentation at the Black Hat conference by Will Pearce from NVIDIA. He gave a great presentation showing how you could spend just 60 US dollars to poison datasets in a way that could cause many AI models of consequence to not work correctly. That’s going to be a huge issue moving forward because it’s very tied into that software supply chain issue I was talking about.

So, if you already have a software supply chain issue, and AI is software, and now you have poisoned datasets that the AI is using, you have threats in depth, not defense in depth. I think there’s a lot of progress to be made with respect to AI. It’s getting there, but it makes cybersecurity, and AI cybersecurity in particular, even trickier.
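To make the poisoning idea concrete, here is a deliberately simplified Python sketch (a toy label-flipping attack on synthetic data, not the technique from the Black Hat talk): relabeling a slice of “malicious” training samples as “benign” quietly erodes the resulting model’s ability to detect real attacks.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for benign (0) vs. malicious (1) samples
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def detection_rate_after_poisoning(flip_fraction: float) -> float:
    """Relabel a fraction of 'malicious' training samples as 'benign', then
    measure how many truly malicious test samples the model still catches."""
    y_poisoned = y_train.copy()
    malicious_idx = np.flatnonzero(y_train == 1)
    n_flip = int(flip_fraction * len(malicious_idx))
    flipped = rng.choice(malicious_idx, size=n_flip, replace=False)
    y_poisoned[flipped] = 0  # the poison: attacks labeled as harmless
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return recall_score(y_test, model.predict(X_test))

for frac in (0.0, 0.2, 0.5):
    print(f"{frac:.0%} of malicious labels poisoned -> "
          f"detection rate {detection_rate_after_poisoning(frac):.2f}")
```

Real poisoning attacks are far more subtle and targeted, but the supply chain lesson is the same: if you cannot vouch for the training data, you cannot vouch for the model.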

Roman Reznikov: Oh, that’s an interesting point of view. So, we discussed that we haven’t seen any breakthroughs in cybersecurity yet, but what about the other side of the fence? Do you see any applications of AI by potential hackers, those who would like to attack your systems, or other malicious users? Are they using AI in those kinds of activities?

Ed Adams: Well, yes. Let me address AI for cybersecurity first, and then I will get into cybercrime. Even though artificial intelligence has not yet transformed the cybersecurity landscape, I believe it has the potential to. It has already helped some cybersecurity products and professionals by enhancing things like threat detection and threat response.

Machine learning algorithms can identify patterns, and anomalies within those patterns, faster than human analysts can, and they will find things that humans might miss. AI-powered tools can automate routine tasks, which reduces response times. However, you have to remember that cybercriminals are also using the same AI to create more sophisticated attacks, making ongoing innovation in AI crucial. So, I think this is just the continuation of the arms race in cybersecurity that we’ve always had.

There are areas where AI can be extremely useful today, and I mentioned threat detection before. If you’re implementing artificial intelligence, you can drive down the cost of threat detection: you can feed the datasets you’re getting from your SIEMs and all your other threat intelligence and threat detection systems into AI and let it analyze that network traffic and the behavioral anomalies. So that’s fantastic.
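As a rough sketch of what that can look like (the feature layout and numbers below are invented for illustration), an unsupervised model such as scikit-learn’s IsolationForest can be trained on baseline flow records and then used to flag traffic that deviates from the norm:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy stand-in for flow features exported from a SIEM:
# [bytes_sent, bytes_received, duration_seconds, distinct_ports]
baseline = rng.normal(loc=[500, 800, 30, 3], scale=[100, 150, 10, 1], size=(1000, 4))

# Fit the detector on historical "normal" traffic
detector = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

new_flows = np.array([
    [520, 790, 28, 3],       # ordinary-looking flow
    [50000, 200, 600, 40],   # exfiltration-like: huge upload, long-lived, many ports
])
print(detector.predict(new_flows))  # 1 = looks normal, -1 = flagged as anomalous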

AI can also help with a security problem that has plagued us for many years: patch management. Automating software patching and updates to mitigate vulnerabilities is huge because almost 80% of attacks today take advantage of known security vulnerabilities. So, if you can patch those vulnerabilities, you can better protect yourself.
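A minimal sketch of that idea (the advisory feed and host inventory below are hypothetical stand-ins, not a real vulnerability database) is a script that compares installed versions against known-fixed versions and reports what still needs patching:

```python
# Hypothetical feed of known vulnerabilities: package -> first fixed version
ADVISORIES = {
    "openssl": "3.0.12",
    "log4j-core": "2.17.1",
}

# Hypothetical inventory of what a host is actually running
INSTALLED = {
    "openssl": "3.0.8",
    "log4j-core": "2.17.1",
}

def version_tuple(version: str) -> tuple:
    """Turn '3.0.8' into (3, 0, 8) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def unpatched(installed: dict, advisories: dict) -> list:
    """Return (package, installed, fixed) for anything below the first fixed version."""
    return [
        (pkg, ver, advisories[pkg])
        for pkg, ver in installed.items()
        if pkg in advisories and version_tuple(ver) < version_tuple(advisories[pkg])
    ]

for pkg, have, need in unpatched(INSTALLED, ADVISORIES):
    print(f"PATCH NEEDED: {pkg} {have} -> upgrade to {need}")
```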

Then, there’s cloud security and AI-powered cloud security services. The cloud service providers, AWS, Azure, and so on, will probably be first with AI-powered cloud security tools. They can offer good protection for your data and your applications at very reasonable costs, so you’ve got a lot of advantages. Still, user education is vitally important, especially for smaller organizations that might look to artificial intelligence as a way to better compete. They have to invest in employee training to prevent things like phishing attacks and other human errors that can lead to breaches or allow their AI systems, the data, or the models to be compromised.

Roman Reznikov: Well, that’s an interesting dynamic. And if we look more precisely at this people area, how do you see the skill set of employees changing, taking into account today’s realities of AI usage? Specifically, how do you see the change for cybersecurity professionals? What new skills do they need to address this? Also, can you touch on the skill set of chief security officers? What new skills do they need right now to be ready for upcoming challenges?

Ed Adams: There is a lot of education that needs to happen, for sure. Just like with any emerging technology, executives like me want to embrace the latest technology, right? So many executives said, “Let’s move to the cloud,” and organizations will often adopt technology faster than they understand how to secure it, which causes a massive growth in the attack surface and the potential for data breaches and security vulnerabilities.

We talked earlier about how AI is changing the cybersecurity workforce slightly right now; it requires some professionals to gain skills in artificial intelligence. But the other areas that surround that are data science and software engineering. Software engineering is going to become more important in cybersecurity than it ever was, particularly for those who want to embrace artificial intelligence, because of what we talked about a few minutes ago: the data, the model, and the software itself. It’s all based on software.

So, you’ll see roles evolve, like AI ethics specialists and AI model auditors, which, to me, is a fascinating field: auditing AI models to ensure they are correct and accurate. There will be some new jobs created because of AI-powered cybersecurity, but we can’t forget all those so-called soft skills, like communication and collaboration. They’re going to be vitally important because AI is going to require interdisciplinary teams to communicate more effectively.

So, you need to be able to communicate and collaborate in terms that everyone can understand, and this is where your question about the CSO comes in. I think the Chief Information Security Officer is like the captain of a ship: they have to make sure that all the stations are operating effectively, and they have to be able to communicate. Here’s our major goal. Here are the objectives. Here’s our heading. Here’s when we want to arrive at our destination. Then it’s their job to ensure that all of the station staff are properly trained, know how to do their jobs, and can communicate with the stations they depend on, and that depend on them, so everyone understands what they’re doing.

So that’s why those soft skills become critical for success. But chief information security officers need to start to understand what artificial intelligence is, how they can leverage it, and how they can leverage it safely, which goes back to those three core elements. If CSOs don’t understand those three core elements and how AI might put what they’re doing at more risk, then they’re most likely going to adopt it in a way that is flawed.

Roman Reznikov: Look, I noted it down. I like this idea of an AI data auditor; it resonates greatly with me. You mentioned another job that could appear in the near future, but I can’t recall it. Could you remind me what it was?

Ed Adams: AI ethics specialists.

Roman Reznikov: That’s an interesting direction. I wonder if you see any risks for cybersecurity experts, for example, that AI may substitute for them in smaller or larger organizations in the future. Do you see any risk for cybersecurity as a specialty or a job domain from this technology stack?

Ed Adams: I don’t right now, honestly, and I know this is something that a lot of cybersecurity professionals are feeling a little bit of anxiety about. They’re wondering, am I going to be replaced with artificial intelligence? You know, is my job going to be at risk? I don’t think so.

I think, if anything, artificial intelligence is going to create more job opportunities, and certainly new job opportunities, with respect to cybersecurity. There are places where jobs will change. The SOC analyst’s job will change for sure. But you can never replace the human element of an analyst; you can just help them do their job better and faster with automation and artificial intelligence. And that’s where I think we’re going. This is going to be an evolution in cybersecurity, not a revolution, and I think there’s a big difference between those two.

Roman Reznikov: Awesome. AI ethics is an interesting topic, so maybe you can share your experience. Have you faced any concerns or situations where the usage of AI was unethical or caused problems? I’ve heard about various interesting cases and would be happy to hear about your experience so far.

Ed Adams: I think this is going to be a massively growing field in terms of interest, regulation, and litigation. Ethical considerations are absolutely paramount in artificial intelligence; really critical and important. Privacy concerns are right at the front of that, and not just personal data privacy. I talked about biases earlier, so the overlap of privacy and biases does come into play.

But for me, privacy concerns arise when artificial intelligence collects and analyzes sensitive data, which requires strict data handling and transparency. And biases can emerge: if AI algorithms unintentionally discriminate or misidentify threats, it demands continuous auditing and bias mitigation.

So that’s where I think the AI auditor and the AI ethics specialist are going to come in; it’s exactly in this space. Responsible AI use entails ensuring that automated systems don’t replace human oversight entirely. Again, that’s why I think cybersecurity professionals are not going to be replaced by AI.

Human judgment is essential in a complex security context that can never be replaced with AI. The challenge for us as cybersecurity professionals will be to balance automation and human control while maintaining data privacy and addressing biases. It is going to be crucial to harness AI effectively while upholding ethical standards in cybersecurity. This field is going to be looked at very, very deeply in the next ten years.

Roman Reznikov: And how do you, right now, see the application of AI in your particular company, Security Innovation? Do you have plans or something on your roadmap to augment your company’s services or products with AI?

Ed Adams: You might be surprised to hear, Roman, that the short answer is no. Right now, we’re still in the investigative stage. As a cybersecurity solution provider, we are very careful, deliberate, and conscientious, so we want to be very mindful of any new technology we adopt. If we’re going to use it to identify security vulnerabilities in our customers’ systems and then help them fix those vulnerabilities, we have to make sure that is done correctly, and any new technology adoption can put that accuracy at risk. That’s the one thing we cannot compromise.

So, we would love to do what we do faster. We’d love to do it with more accuracy, and we’d love to be able to analyze larger datasets to make sure we’re improving on that accuracy and speed. So, we are investigating, but we have not adopted AI presently, and we look at any technology that uses AI with a lot of scrutiny, as we do any new technology. A lot of cybersecurity solution providers are adopting AI and using it in their marketing collateral to position themselves as next-generation cybersecurity solutions. We’re not ready to do that yet here at Security Innovation.

Roman Reznikov: I’m not surprised. Over the past six months, I’ve spoken to dozens of technology leaders, and the majority of them responded precisely like you: “We do experiments. We are observing.” So that’s absolutely in line with the market.

That’s all the questions from my side. Thank you very much for the insightful conversation.

Ed Adams: Roman, it’s always a pleasure talking to you. Thank you so much for having me on your podcast, and I wish you all the best.

