CYBERSECURITY AND ARTIFICIAL INTELLIGENCE
Artificial intelligence and cybersecurity can both reinforce and undermine each other.
Summary
Cybersecurity and artificial intelligence are closely related. First, artificial
intelligence (AI) techniques can be used to improve the cybersecurity and resilience
of products, services, systems and, therefore, of companies and society (defense
approach). Second, AI is starting to be used by cybercriminals and other types of
cyber attackers to put cybersecurity at risk, perpetrate different types of attacks
and generate fake news (attack approach). Finally, AI systems are, in turn,
susceptible to cyberattacks, so secure, privacy-preserving AI systems must be
developed that we can trust and that are accepted by their users (trust
approach). Given the interaction between the two, it is necessary that
cybersecurity strategies take the use of AI into account.
In 1950 Alan Turing defined the
conditions that a machine had to meet in order to be considered intelligent,
but it was actually in 1956 that John McCarthy coined the term artificial
intelligence (AI) to refer to machines that perform tasks characteristic of
human intelligence and solve problems and achieve goals in a similar way to how
people do.
Although AI research continued
through the 1970s and into the 1980s, few wanted to invest their money in a
technology that was not delivering tangible results. It was in 1996, when IBM's
Deep Blue computer defeated the then world champion Garry Kasparov in a game of
chess, that AI began to be seen as offering possibilities for practical
application. In 2012 people began to talk about deep learning, when Google created
a system capable of identifying cats in images, and in 2015 AlphaGo became the
first machine to beat a professional player of the Chinese game Go.
Precisely the greater knowledge about the functioning of the brain acquired in
recent years, together with advances in microelectronics, the increase in computing
power, and the possibility of accessing large amounts of data and the ubiquitous
connection between systems, have made possible the great advances in AI that are
happening today. This has made AI one of the most widely used terms today,
generating the impression that a system that does not use AI in any of its variants
(machine learning, deep learning) cannot be considered relevant.
1. AI offers multiple application possibilities.
2. However, news is constantly emerging about erroneous decisions made by AI
systems, or about conclusions reached by AI systems through a process that is
unintelligible to humans.
3. This shows that such AI systems have not been designed to ensure impartiality
and transparency in decision-making. Like many other technologies, AI can be used
for both good and evil.
4. For this reason, we are going to explain the possible good and bad uses of AI in
the field of cybersecurity, and the dangers that its use can represent if the AI
has not been designed in a secure way.
Artificial intelligence in support of cybersecurity
AI can be used to help security professionals deal with the increasing complexity
of modern IT systems, Industry 4.0, Internet of Things (IoT) infrastructure and so
on, as well as with the sheer volume of data they generate, and to try to stay
ahead of cyber attackers. Cybersecurity faces multiple challenges, such as
intrusion detection, privacy protection, proactive defense, the identification of
anomalous behavior or the detection of sophisticated threats, but, above all, the
ever-changing threats that appear
continuously. Because of this, AI-based methods are being explored to
facilitate real-time analysis and decision-making for rapid detection and
reaction to cyberattacks. AI is also being used to develop self-adaptive
systems to automate responses to cyber threats.
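As a purely illustrative sketch of this kind of approach (not a technique described
in the article), the following Python example uses scikit-learn's IsolationForest to
flag anomalous network-flow records; the flow features, the data and the
contamination rate are assumptions made up for the example.

```python
# Minimal sketch: unsupervised anomaly detection on network-flow features.
# The feature set, the data and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes_sent, bytes_received, duration_s, n_packets]
normal_flows = rng.normal(loc=[2_000, 5_000, 1.0, 40],
                          scale=[500, 1_000, 0.3, 10],
                          size=(1_000, 4))

# A few simulated suspicious flows (e.g. large, exfiltration-like transfers)
suspicious_flows = np.array([
    [90_000, 1_000, 30.0, 900],
    [120_000, 500, 45.0, 1_500],
])

# Train only on traffic assumed to be normal, then score new flows.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

new_flows = np.vstack([normal_flows[:5], suspicious_flows])
labels = model.predict(new_flows)          # +1 = inlier, -1 = anomaly
scores = model.decision_function(new_flows)

for flow, label, score in zip(new_flows, labels, scores):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{status:7s} score={score:+.3f} flow={np.round(flow, 1)}")
```

In a real deployment such a detector would be trained on features extracted from
actual traffic, retrained regularly and combined with other signals before any
automated response is triggered.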
Indeed, AI can be used in all
stages of intelligent end-to-end security: identification, protection,
detection, response and recovery from incidents. In this sense, cybersecurity
can be considered just another application domain of AI, alongside energy,
transport, industry or health. In fact, this is not a new area of application for
AI: it has been used for some time to develop solutions that can detect and tackle
complex and sophisticated cyber threats while preventing data leaks. As ENISA
indicates, the use of AI in cyber threat intelligence (CTI) should continue to be
investigated in order to reduce the number of manual steps in the analyses performed
and to validate those analyses, that is, to support CTI throughout the entire life
cycle of security risk management and mitigation [5]. In the current pandemic
situation caused by COVID-19, what has been observed is the great capacity that
cybercriminals have shown to adapt quickly to the newly vulnerable context of
teleworking, taking advantage of household internet connections to access the
company's data and communication systems. Cybercriminals have
customized attack vectors with advanced credential theft methods, highly
targeted phishing attacks, sophisticated social engineering attacks, and
advanced malware concealment techniques, among others. As these techniques are
increasingly combined with AI, attacks will be more difficult to detect and
will be more successful, according to the aforementioned ENISA report.
The use of AI by cyber attackers
AI is already being used in
market applications to understand user behavior patterns and design commercial
campaigns using AI software available to everyone, so it would be very naive to
think that cybercriminals are not using it as well, in the simplest case, to
get to know their victims better and to identify the best moment to carry out a
criminal action with the greatest guarantees of success.
The use of AI depends on the
profile of the cyber attackers, ranging from the most harmless, associated with
cyber vandalism, to the most dangerous, such as those related to cyber terrorism,
cyber espionage or cyber warfare. The same variety can be found in the level of
sophistication and complexity of cyberattacks, which varies greatly from one to
another. Behind the most dangerous and sophisticated cyberattacks that can use
AI there may be highly specialized groups, funded by certain states, whose attacks
may be directed at critical infrastructure in another country or at generating
disinformation campaigns. If we analyze potential attackers from another
perspective, that of their way of operating, a similar diversity can be found.
In addition to the offensive use of AI by cyber attackers to learn the behavior
patterns of future victims, it can also be used to break passwords and CAPTCHAs
more quickly, build malware that avoids detection, hide it where it cannot be
found, adapt as quickly as possible to any countermeasures that may be taken,
automatically obtain information using natural language processing (NLP) methods,
and impersonate people through the generation of fake audio, video and text.
Attackers are also using generative adversarial networks (GANs) [6] to mimic normal
communications traffic patterns in order to distract from an attack and to quickly
find and extract sensitive data [7].
Are AI systems vulnerable to cyber-attacks?
Attackers can use AI systems not
only to make their decisions, but also to manipulate the decisions made by
others. In a simplified way, an AI system is still a software system in which data,
models and processing algorithms are used. In this sense, an AI system that has not
been developed with cybersecurity in mind from the design stage can be vulnerable
to cyberattacks that target the data, the model or the algorithm and cause unwanted
results or wrong decisions. Several companies have already suffered attacks on
commercial AI systems, such as Microsoft, which has observed a significant increase
in this type of attack over the last four years [8], or Tesla [9], Google [10] and
Amazon [11], to name a few.
The innovation network of the SPARTA project, within its SAFAIR research program,
has identified different attack tactics that can be carried out on AI systems, both
during the training phase of the system and during the operation phase.
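To make the operation-phase (evasion) case concrete, below is a small, purely
illustrative Python sketch, not taken from the SPARTA/SAFAIR material: a toy
logistic-regression "detector" is trained on synthetic data and then fooled by a
small gradient-guided perturbation of its input, in the spirit of the fast gradient
sign method. The data, the model and the attack budget are all assumptions made up
for the example.

```python
# Minimal sketch of an evasion (operation-phase) attack on a toy classifier,
# in the spirit of the fast gradient sign method (FGSM). Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# --- Train a tiny logistic-regression "detector" on synthetic data ---
X = rng.normal(size=(500, 4))
true_w = np.array([1.5, -2.0, 1.0, 0.5])          # hidden "ground truth" rule
y = (X @ true_w + rng.normal(scale=0.1, size=500) > 0).astype(float)

w, b, lr = np.zeros(4), 0.0, 0.1
for _ in range(2000):                              # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

def predict(x):
    """Probability that the input is classified as 'malicious'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# --- Craft an adversarial example from a typical "malicious" sample ---
mal = X[y == 1]
mal_scores = predict(mal)
x = mal[np.argsort(mal_scores)[len(mal_scores) // 2]]   # median-confidence sample
print("original score:", round(float(predict(x)), 3))   # high -> flagged as malicious

# The gradient of the score w.r.t. the input points towards "more malicious";
# stepping against its sign pushes the sample towards the "benign" side.
eps = 0.6                                          # attack budget (assumed)
grad_x = predict(x) * (1 - predict(x)) * w
x_adv = x - eps * np.sign(grad_x)

print("perturbed score:", round(float(predict(x_adv)), 3))        # drops sharply
print("max feature change:", round(float(np.max(np.abs(x_adv - x))), 3))
```

Training-phase attacks (for example, poisoning the training data so the model learns
a wrong decision boundary) follow the same general idea of targeting the data, the
model or the algorithm, but act before the system is ever deployed.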