It is easy to imagine that at some point in the future, once the Institute is well into the implementation phase, many countries will come to treat AI as a national security issue and will direct their intelligence agencies to spy on work done in the field.
It can be extremely hard to defend yourself against such well-funded attackers: they can try to steal or hack hardware and software, bug your offices and homes, infiltrate your team, bribe employees, use social engineering, disguise their actions as accidents or petty crime, and so on.
One tactic might be to wait until the Singularity Institute is almost done (in the pre-launch testing and auditing phase, for example), steal the code, and throw enormous resources at it in order to be the first to launch a recursively self-improving artificial general intelligence. This could lead to disaster if whoever does so lacks benevolent intentions or is not as careful as the Institute would be.
My recommendation to the Singularity Institute is to bring top security experts onto the team and to prepare well in advance for the time when security becomes critical.