AI coders from Fast.ai create an algorithm that outperforms code from Google’s researchers
A small team of student AI (artificial intelligence) coders has outperformed code from Google’s researchers, an important benchmark reveals.
Students from Fast.ai, a non-profit group that creates learning resources and is dedicated to making deep learning “accessible to all”, have created an AI algorithm that beats code from Google’s researchers.
Researchers from Stanford measured the algorithm using a benchmark called DAWNBench, which uses a common image-classification task to track the speed of a deep-learning algorithm per dollar of compute power. According to the benchmark, the algorithm built by Fast.ai’s team had beaten Google’s code.
Fast.ai consists of part-time students who are eager to try out machine learning and turn it into a career in data science. It rents access to computers in Amazon’s cloud. The success of a small organization like Fast.ai matters because it is widely assumed that only those with huge resources can do advanced AI research.
The previous rankings were topped by Google’s researchers in a category for training on several machines, using a custom-built cluster of its own chips designed specifically for machine learning. The Fast.ai team was able to deliver something even faster, on more or less equivalent hardware.
“State-of-the-art results are not the exclusive domain of big companies,” says Jeremy Howard, one of Fast.ai’s founders and a prominent AI entrepreneur. Howard and his co-founder, Rachel Thomas, created Fast.ai to make AI more accessible and less exclusive.
Howard’s team has competed with the likes of Google by doing a lot of simple things, such as ensuring that the images fed to its training algorithm were cropped correctly. More information can be found in a detailed blog post. “These are the obvious, dumb things that many researchers wouldn’t even think to do,” Howard says.
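The kind of preprocessing detail Howard describes can be sketched in a few lines. The function below is a hypothetical illustration of consistent center-cropping, not Fast.ai’s actual pipeline:

```python
def center_crop_box(width, height):
    """Compute the (left, top, right, bottom) box that crops the largest
    centered square from an image. Cropping from a corner instead would
    cut away one side of the subject, hurting training accuracy."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# Example: a 640x480 photo is reduced to its central 480x480 square,
# which can then be resized to the network's input size (e.g. 224x224).
box = center_crop_box(640, 480)  # (80, 0, 560, 480)
```

Small details like this, applied uniformly across millions of training images, are exactly the “obvious, dumb things” Howard refers to.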
Recently, a collaborator at the Pentagon’s new Defense Innovation Unit developed the code needed to run the learning algorithm on several machines, to help the military work with AI and machine learning.
Although the work of Fast.ai is remarkable, huge amounts of data and significant compute resources are still important for several AI tasks, notes Matei Zaharia, a professor at Stanford University and one of the creators of DAWNBench.
The Fast.ai algorithm used 16 Amazon Web Services (AWS) instances and was trained on the ImageNet database in 18 minutes, at a total compute cost of around $40. While this is about 40 percent better than Google’s effort, the comparison is tricky because the hardware used was different, Howard says.
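The cost figure can be sanity-checked from the numbers in the article. The per-instance hourly rate below is merely implied by those numbers, not a quoted AWS price:

```python
instances = 16
minutes = 18
total_cost = 40.0  # approximate figure from the article, in dollars

# 16 instances running for 18 minutes amounts to 4.8 instance-hours.
instance_hours = instances * minutes / 60

# That implies roughly $8.33 per instance-hour across the cluster.
implied_hourly_rate = total_cost / instance_hours
```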
Jack Clark, director of communications and policy at OpenAI, a nonprofit, says Fast.ai has produced valuable work in other areas such as language understanding. “Things like this benefit everyone because they increase the basic familiarity of people with AI technology,” Clark says.