1.         In the near future, as artificial intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated cyber-attacks. The rise of AI-enabled cyber-attacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses. Ironically, our best hope to defend against AI-enabled hacking is to use AI. But this is very likely to lead to an AI arms race, the consequences of which may be very troubling in the long term, especially as big government actors join the cyber wars.


In the future, as AIs increase in capability, we may anticipate that they will first reach and then overtake humans in all domains of performance, as we have already seen with games like chess and are now seeing with important human tasks such as driving. It is important for leaders to understand how that future situation will differ from our current concerns and what to do about it.



3.         I strongly feel that using AI to counter AI-enabled cyber-attacks is a practical approach, and I am hopeful that ongoing research will bring additional solutions for safely incorporating AI into the marketplace.


Justification of the study:


The market for cybersecurity keeps getting bigger, but the results are not getting better. Over the past decade, the amount of money spent on cyber defense has exploded from less than $10 billion to roughly $70 billion, according to Symantec CTO Amit Mital, and all forecasts show spending continuing to skyrocket. Yet the litany of devastating hacks continues to pile up. It is argued that cybersecurity is “basically broken”. The “bad guys” outnumber the “good guys” in the hacking world, and they make more money, too. It appears to be impossible to beat them, out-hire them, or outspend them. The increasing spread of mobile technology and open-source software is only making traditional cybersecurity more daunting, and the idea of a defendable “perimeter” is going away.


If one of today’s cybersecurity systems fails, the damage can be unpleasant but is tolerable in most cases: someone loses money or privacy. But for human-level AI or above, the
consequences could be catastrophic. A single failure of a superintelligent AI
(SAI) system could cause an existential risk event – an event that has the
potential to damage human well-being on a global scale. The risks are real, as
evidenced by the fact that some of the world’s greatest minds in technology and
physics, including Stephen Hawking, Bill Gates, and Elon Musk, have
expressed concerns about the potential for AI to evolve to a point where humans
could no longer control it.


Russian President Vladimir Putin, addressing students in September 2017, said: “Artificial intelligence is the future, not only for Russia, but for all humankind … It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” Tesla and SpaceX CEO Elon Musk has warned that “AI represents a fundamental risk to human civilisation”, arguing that artificial intelligence could be humanity’s greatest existential threat, potentially even starting a third world war.


The threat posed by unchecked AI development is considered so serious that an open letter signed by 116 founders of robotics and AI companies from 26 countries recently urged the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and to ban their use internationally. “Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.” Signatories of the 2017 letter include:

§  Elon Musk, founder of Tesla, SpaceX and OpenAI (USA)

§  Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)

§  Esben Østergaard, founder & CTO of Universal Robotics (Denmark)

§  Jérôme Monceaux, founder of Aldebaran Robotics, makers of Pepper robots (France)

§  Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)

§  Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

From the above, it appears that AI is one of the few beacons of hope in this grim picture. One crucial advantage an AI defense system would have is the ability to react instantly, in real time, to combat a hack. Humans may not even need to be in the loop; by the time people recognize an attack, it may already be too late to react. Thus my proposed research is at the intersection of AI and cybersecurity. In particular, I plan to research how we can use AI to combat cyber-attacks, especially AI-enabled cyber-attacks involving bad actors and failed or malevolent AI. In addition, I aim to carry out a limited study of how best we can regulate AI proliferation to our benefit.
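To make the real-time defence idea concrete, the following is a minimal sketch (all function names, features, and thresholds here are hypothetical illustrations, not this proposal's method): an automated monitor learns a baseline of normal traffic and flags a deviation the instant it occurs, with no human in the loop.

```python
import statistics

def learn_baseline(request_rates):
    """Learn the mean and standard deviation of normal per-second request rates."""
    return statistics.mean(request_rates), statistics.stdev(request_rates)

def is_anomalous(rate, baseline, threshold=3.0):
    """Flag a rate more than `threshold` standard deviations above the baseline."""
    mean, stdev = baseline
    return rate > mean + threshold * stdev

# Hypothetical usage: train on a window of normal traffic, then check live rates.
baseline = learn_baseline([100, 110, 95, 105, 98, 102])
assert not is_anomalous(104, baseline)  # within normal variation
assert is_anomalous(500, baseline)      # sudden spike: respond immediately
```

A real AI defence system would of course use far richer features and learned models, but the design point is the same: the check runs in microseconds, so a response (blocking, isolating, alerting) can be triggered faster than any human analyst could react.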


8.         The objectives of this thesis/project are to:

§  Identify prominent threat areas of AI-enabled cyber-attacks.

§  Ascertain ways to combat cyber-attacks using AI.

§  Suggest modalities to regulate undesired AI proliferation.





9.         In conducting the research, both qualitative and quantitative approaches are planned. Since the issues are highly technical and complex, an extensive study of the available literature, both electronic and non-electronic, will be carried out. Primary data will be derived from interviews with experts and policy makers. Secondary data will be collected from related surveys, reports, and academic and non-academic materials such as relevant laws, books, papers and publications. Global best practices will also be examined, drawing on secondary and other sources.