Artificially intelligent hackers will find vulnerabilities in rival systems while fixing their own
What if an autonomous hacking system – with no humans involved – could find and patch vulnerabilities in computer systems before cyber attackers could exploit them?
Well, that is the contest seven teams will be competing in at Darpa’s Cyber Grand Challenge in August.
Each team has already won $750,000 for qualifying, but they must now pit their hacking systems against six rivals in a game of “capture the flag”. The software must not only attack vulnerabilities in the other teams’ software but also discover and fix flaws in its own – all while maintaining its performance and functionality. The winning team takes home $2m in prize money.
Speaking at the RSA security conference in San Francisco, Giovanni Vigna, a professor of computer science at the University of California, Santa Barbara, explained: “Fully automated hacking systems are the final frontier. Humans can find vulnerabilities but can’t analyse millions of programs.”
Vigna is also the founder of hacking team Shellphish, which has built one of the systems – nicknamed Mechanical Phish – that will compete in the Cyber Grand Challenge.
“Hacking is usually just a bunch of guys around a table who are very tired, just typing on a laptop,” Vigna says, adding that it’s “not as sexy” as movies make it out to be. “We do this because we either want to attack somebody, defend by finding bugs before they are deployed, or because it’s fun.”
Organizations that lack an in-house team of highly skilled human “uber-hackers” and want to protect their networks could find robo-hackers extremely useful, as the machines can quickly identify and fix problems before anyone exploits them to disrupt online services or steal data.
Other groups, outside the Cyber Grand Challenge, are also working on hacking machines powered by artificial intelligence.
Konstantinos Karagiannis, chief technology officer of BT Americas, has been developing a hacking system that uses neural networks to mimic the way the human brain learns and solves problems.
He explained how an artificially intelligent program called MarI/O, with no prior knowledge and in just 34 tries, was able to learn a complete level of Super Mario World. The software was not taught anything about how to play the game; it was simply given a few basic parameters. MarI/O tried different things it “thought” would work, and when they did, it “learned”.
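The trial-and-error loop described above can be sketched in miniature. The snippet below is an illustrative toy, not MarI/O’s actual method: the function names and the scoring task (matching a hidden set of numbers, standing in for “how far through the level did this attempt get”) are hypothetical. It shows the core idea of trying random variations and keeping only the ones that score better.

```python
import random

def evaluate(policy, target):
    """Score a candidate policy; higher is better (0 = perfect match).
    A stand-in for measuring progress through a game level."""
    return -sum((p - t) ** 2 for p, t in zip(policy, target))

def learn_by_trial(target, tries=300, seed=1):
    """Trial-and-error search: randomly mutate the current best policy
    and keep the mutation only when it scores better."""
    rng = random.Random(seed)
    best = [0.0] * len(target)            # start with no prior knowledge
    best_score = evaluate(best, target)
    for _ in range(tries):
        candidate = [w + rng.gauss(0, 0.5) for w in best]  # random tweak
        score = evaluate(candidate, target)
        if score > best_score:            # it "worked", so "learn" it
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    hidden_goal = [1.0, -2.0, 0.5]        # hypothetical behaviour to discover
    policy, score = learn_by_trial(hidden_goal)
    print(round(-score, 3))               # error shrinks toward zero over tries
```

Real systems such as MarI/O use far richer techniques (evolving neural network structures rather than a flat list of weights), but the principle Karagiannis describes is the same: no instruction, just attempts scored against an outcome.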
“Using this approach a security scanner could identify intricate flaws using creative approaches you would have never thought of,” explained Karagiannis. “And it can be written with very modest hardware. A $1,000 GPU [graphics processing unit, typically used in gaming] can outrun a supercomputer that used to fill a building 10 years ago.”
Karagiannis hopes to demonstrate a proof of concept by the summer of 2016.
While robo-hackers could give security professionals a valuable weapon for their armory, the risk is that they could fall into the wrong hands. Karagiannis said he wouldn’t be surprised if criminal hackers adopted these techniques “within a year”.
Alex Rice, co-founder of security company HackerOne, agrees. “Anything that can be used to defensively find vulnerabilities can be used by criminals – they all end up becoming a double-edged sword,” he told the Guardian.
Despite this, Rice thinks the rise of automation in security is a good thing. “Everybody is struggling to keep up. There’s not a single organization that hasn’t had a compromise that was life-threatening, so clearly everything we’re doing is failing.”
He said the best solution would be to combine the skills of humans and machines. “Humans are much better at what we haven’t figured out yet.”
“Until we have fully sentient machines, they still have to be instructed by humans.”
Source: The Guardian