Stealing Artificial Intelligence: A Warning for the Singularity Institute

Maybe it’s because I’ve been reading a Bruce Schneier book lately (he’s a security expert), but I think that the Singularity Institute for Artificial Intelligence is facing a very real threat.

It is easy to imagine that at some point in the future, when the Institute is well into the implementation phase, many countries will consider AI a national security issue and will use their intelligence agencies to spy on work done in the field.

It can be extremely hard to defend yourself against such well-funded attackers; they can try to steal or hack hardware and software, bug your offices and homes, infiltrate your team, bribe employees, use social engineering, make things look like an accident or petty crime, and so on.

One tactic might be to wait until the Singularity Institute is almost done (in the pre-launch testing/auditing phase, for example), steal the code, and throw a lot of resources at it to be the first to launch a recursively improving artificial general intelligence. This could lead to disaster if whoever does this does not have benevolent intentions or is not as careful as the Institute would be.

My recommendation to the Singularity Institute is to make sure to have top security experts on the team and to prepare well in advance for the time when security becomes critical.

4 Responses to “Stealing Artificial Intelligence: A Warning for the Singularity Institute”

  1. Michael Anissimov Says:

    Most of the higher-ups (and the population in general) don’t believe general AI will be possible for many, many decades, if not centuries. So the threat is not that great unless we accidentally convince them that AI is possible in the closer term.

  2. Accelerating Future » Singularity-Related Activity for July Says:

    […] and you should consider checking out the latest posts. On his own blog, Michael Graham Richard considers the security risks that a late-stage AGI project might face. My response: if AGI researchers […]

  3. Jack Says:

    “So the threat is not that great unless we accidentally convince them that AI is possible in the closer term” – Michael Anissimov

Which puts us in a paradoxical situation.

Also, doesn’t the belief that a threat has a low probability raise the probability of that threat?

  4. Overcoming Bias: Torture or Dust Specks? « Michael Graham Richard Says:

    […] Yudkowsky, a research fellow over at the Singularity Institute for Artificial Intelligence (see my previous post warning them about security issues), asks a very interesting question over at Overcoming Bias: […]
