Microsoft India to set up 10 AI labs, train 5 lakh youths in the country


Microsoft to set up 10 AI labs, train 5 lakh youths and upskill 10,000 Indian developers

Microsoft India on Wednesday announced its plans to set up Artificial Intelligence (AI) labs in 10 universities in the country over the next three years. The company also plans to upskill over 10,000 developers to bridge the skills gap and enhance employability, and to train 5 lakh youths across the country in disruptive technologies.

Microsoft has 715 partners who are working with the company in India to help customers design and implement a comprehensive AI strategy.

“The next wave of innovation for India is being driven by the tech intensity of companies – how you combine rapid adoption of cutting edge tech with your company’s own distinctive tech and business capabilities,” Anant Maheshwari, President of Microsoft India, said at the ‘Media and Analyst Days 2019’ held in Bengaluru.

“We believe AI will enable Indian businesses and more for India’s progress, especially in education, skilling, healthcare, and agriculture. Microsoft also believes that it is imperative to build higher awareness and capabilities on security, privacy, trust, and accountability. The power of AI is just beginning to be realized and can be a game-changer for India.”

According to Microsoft, the company’s AI and cloud technologies have to date digitally transformed more than 700 customers, of which 60 percent are large manufacturing and financial services enterprises.

The Redmond giant has partnered with the Indian government’s policy think tank, NITI Aayog, to “combine the cloud, AI, research and its vertical expertise for new initiatives and solutions across several core areas including agriculture and healthcare and the environment,” said Microsoft India in an official press release.

“We are also an active participant along with CII in looking at building solution frameworks for application in AI across areas such as Education, skills, health, and agriculture,” the company added.

In December last year, Microsoft had announced a three-year “Intelligent Cloud Hub” collaborative programme in India, which will “equip research and higher education institutions with AI infrastructure, build curriculum and help both faculty and students to build their skills and expertise in cloud computing, data sciences, AI and IoT.”

Source: Microsoft

read more

Netflix to track and stop users from sharing their accounts with friends


A new AI could stop users from sharing their Netflix passwords with others

Synamedia, a UK company, is offering a new artificial intelligence (AI) service that will help pay-TV operators and video streaming platforms track shared passwords. The company is currently showcasing the solution at the Consumer Electronics Show (CES) 2019 in Las Vegas.

Netflix, Amazon Video, and HBO Now, for instance, are among the most popular video streaming services today.

The service, called Credentials Sharing Insights, uses AI, behavioral analytics, and machine learning to identify, monitor, and analyze credential-sharing activity across streaming accounts. In other words, it keeps tabs on casual password sharing between friends and family, as well as on criminal enterprises and individuals who make money by reselling login credentials for pay channels or streaming services.

“The way you secure OTT is evolving,” Jean Marc Racine, CPO and GM EMEA of Synamedia, explained in an interview with Variety. In the past, cable TV operators largely depended on secured hardware, such as locked-down devices and smart cards, to decrypt satellite TV.

However, with the content transitioning to streaming, operators are finding ways to make things simpler for end consumers. “Passwords are easy to share,” he argued.

How does the service work?

Synamedia’s Credentials Sharing Insights service analyzes streaming data from all of a provider’s users. It trains the AI-based system on factors such as the location from which an account is accessed, the time and duration of use, the content being watched, and the device being used.

For example, the service can determine whether users are viewing at their main home and a holiday home, or whether they have shared credentials with friends or grown-up children who live away from home. In the case of the latter, these users will be offered a premium shared account service that includes a pre-authorized level of password sharing and a higher number of concurrent users.

The service provider or platform then receives a probability score between 1 and 10, where “1” indicates a user who is unlikely to be sharing their password and “10” indicates a user who is very likely to be sharing it.
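Synamedia has not published how its score is computed, but the idea of condensing behavioral signals into a 1–10 sharing score can be sketched in a few lines. The feature names and weights below are invented purely for illustration:

```python
# Toy sharing-probability score: combine behavioral signals into a 1-10
# scale, where 1 means "unlikely to be sharing" and 10 "very likely".
# The features and weights here are invented for illustration only.
def sharing_score(distinct_locations, max_concurrent_streams, distinct_devices):
    raw = (
        0.5 * distinct_locations                    # accessed from many places
        + 1.5 * max(0, max_concurrent_streams - 1)  # simultaneous viewing
        + 0.3 * distinct_devices                    # unusually many devices
    )
    return max(1, min(10, round(raw)))              # clamp to the 1-10 scale

# A lone subscriber on two devices vs. an account streamed from five
# locations on eight devices with four concurrent streams.
print(sharing_score(1, 1, 2))   # -> 1
print(sharing_score(5, 4, 8))   # -> 9
```

A real system would learn such weights from labeled behavioral data rather than hand-tuning them, but the output shape — a single interpretable score per account — is the same.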

“Casual credentials sharing is becoming too expensive to ignore,” said Racine. “Our new solution gives operators the ability to take action. Many casual users will be happy to pay an additional fee for a premium, shared service with a greater number of concurrent users. It’s a great way to keep honest people honest while benefiting from an incremental revenue stream.”

Available as a cloud or on-premises offering, Synamedia Credentials Sharing Insights is already in trials with a number of pay-TV operators.

Media research firm Magid suggests that 26% of millennials share passwords for video streaming services, while consulting firm Parks Associates predicts that $9.9 billion of pay-TV revenues and $1.2 billion of OTT revenues will be lost to credentials sharing by the year 2021.

AT&T, Comcast, Disney, Verizon, and Sky are some of the biggest names currently using the Synamedia Credentials Sharing Insights service.

read more

10 Best Movies About Artificial Intelligence That You Must Watch

Best Movies About Artificial Intelligence

Science fiction movies have always attracted a sizeable audience. For more than five decades, a majority of sci-fi movies were built on the age-old storyline of a machine gaining control over human beings and destroying its creators.

Well, with the advent of artificial intelligence, the concept, storyline, and overall quality of sci-fi movies have drastically improved. So here are the ten best movies about artificial intelligence that are worth watching.

ALSO READ: 10 Best Horror Movies To Watch On Netflix Right Now

1. The Terminator

The first, and possibly the most popular, movie about artificial intelligence on the list is The Terminator. This classic AI movie is about a cyborg sent back from 2029 to 1984 on a mission to kill a waitress.

In the near future, this woman will give birth to the man who will lead humanity in its war against the machines, so another soldier is sent back from the future to protect her.

2. I, Robot

The next movie about artificial intelligence on the list is I, Robot. Directed by Alex Proyas, I, Robot is the story of a technophobic detective who investigates the apparent suicide of leading robotics scientist Dr. Alfred Lanning and, while investigating, discovers a real threat to mankind.

Overall I, Robot is an interesting movie about artificial intelligence and it’s totally worth watching.

3. Bicentennial Man

Directed by Chris Columbus, Bicentennial Man is another classic AI movie. It is the story of an android bought by a family for household work. Over time, the robot develops human traits and emotions.

The movie clearly depicts how even non-living machines can be friendly. Bicentennial Man is a must-watch movie about robots.


4. WALL-E

WALL-E is one of the most popular animated movies about artificial intelligence. Directed by Andrew Stanton, WALL-E is the story of a small waste-collecting robot who falls in love with another robot, EVE.

WALL-E is full of adventure and it’s a robot movie that you should definitely watch with your family.

5. Transcendence

Directed by Wally Pfister, Transcendence is the story of a scientist (played by Johnny Depp) who is working on a machine with complete human knowledge and human emotions. When anti-tech radicals attempt to kill the scientist, his teammates upload his consciousness to the machine.

Transcendence consists of a series of events that make it a must watch movie about artificial intelligence.

6. Ex Machina

Ex Machina is another interesting AI movie, depicting close human interaction with the most advanced artificially intelligent robot on the planet. Directed and written by Alex Garland, Ex Machina is the story of a 26-year-old programmer who falls in love with a female AI robot.

Ex Machina is a good movie about artificial intelligence and it’s totally worth watching.

7. The Matrix

The next artificial intelligence movie on the list is The Matrix. Directed by Lana Wachowski and Lilly Wachowski, The Matrix is the story of Neo, a programmer by day and a hacker by night. Morpheus, another hacker, reveals to him the truth about the Matrix and the human rebellion against the machines.

Neo then does everything he can to save humanity and fight the Matrix. Overall, The Matrix is a good action and sci-fi movie.

8. Tron 

Directed by Steven Lisberger, Tron is a good artificial intelligence movie. It is the story of computer hacker Kevin Flynn, who wants to prove that senior executive Ed Dillinger stole five video games he created. In the process, Flynn is pulled into the virtual world.

Tron is an independent system security program that could help Flynn to get out of the virtual world.

9. Metropolis

Metropolis is one of the oldest sci-fi movies in existence, and it was definitely ahead of its time. It is the story of a man, Freder, who accidentally discovers the underground world of workers running the machinery that keeps the utopian world above functioning.

Freder then sets out to help the workers in their struggle for a better life. Overall, this classic movie about AI will definitely impress you.

10. Star Trek: First Contact

The last movie about artificial intelligence on the list is Star Trek: First Contact. Directed by Jonathan Frakes, Star Trek: First Contact is the story of the Borg, who travel back in time to prevent humanity’s first contact with an alien species.

This action and sci-fi movie is full of adventures and you should definitely watch it.


So these were some of the best movies about artificial intelligence that are worth watching. Do share your personal recommendations about the best artificial intelligence movies in the comments section below.

read more

CBSE Will Soon Introduce Artificial Intelligence Courses For Classes 8, 9, 10


The Central Board of Secondary Education, often abbreviated as CBSE, is a national-level board of education in India. The board covers both public and private schools and is controlled and managed by the Union Government of India.

Well, CBSE is now planning to introduce Artificial Intelligence courses for classes 8, 9, and 10. So here’s everything you need to know about the introduction of Artificial Intelligence courses by CBSE.

ALSO READ: China builds an ‘artificial sun’ that is 6 times hotter than our ‘natural sun’

Artificial Intelligence Courses For Classes 8, 9, 10

CBSE has decided to introduce Artificial Intelligence as an elective subject for students of classes 8, 9, and 10. The board has also confirmed that it will help schools gather teaching and learning resources for Artificial Intelligence. The elective will be introduced from the next academic session.

According to an official, “The board held consultations with stakeholders, including with a school that was already teaching the subject. Consequently, after comprehensive discussions, the board has decided to include artificial intelligence as an optional subject”.

Ameeta Mulla Wattal, an eminent educationist and principal of Springdales School, New Delhi, stated that “If such subjects are brought in the school space, they will definitely help children look at technology in new ways”.

Artificial Intelligence And Its Importance

We have witnessed drastic technological advancements in the last decade. Artificial Intelligence, the ability of a machine to think on its own and act according to its environment, has played a major role in the development of modern technology.

In fact, every modern OEM is now focusing on improving the AI aspects of its products rather than making slight hardware improvements each year.


It is worth noting that around 20,299 schools in India and 220 schools in 25 foreign countries are affiliated with CBSE. Consequently, including Artificial Intelligence as an optional subject will definitely help many students understand the significance of, and developments in, the technological field.

read more

Unlock Any Smartphone With AI-generated ‘Master’ Fingerprints


This AI-Generated ‘Master’ Fingerprint Can Unlock Any Smartphone

Like a master key that can open any lock, researchers from the University of Michigan and New York University have created AI-generated ‘master’ fingerprints capable of unlocking most modern smartphones.

The research team presented their work in a paper titled DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution.

How Are Master Fingerprints Generated?

The fingerprints, dubbed “DeepMasterPrints” by the researchers, can be artificially generated using a machine learning algorithm.

They can be used to fool systems protected by fingerprint authentication without requiring any information about the target user’s fingerprints.

The artificially generated prints were able to match more than one in five real fingerprints in a database that should only have an error rate of one in a thousand.

DeepMasterPrints takes advantage of two flaws in fingerprint-based authentication systems. The first is that many fingerprint scanners do not read the entire finger at once.

Secondly, some fingertip features are more common than others, which means that scanners that read only partial prints are more likely to be tricked by common fingerprint characteristics.

The team trained a neural network to create artificial fingerprints and used evolutionary optimization methods to find their best DeepMasterPrints.

They used a common machine learning method called a “generative adversarial network” (GAN) to artificially create new fingerprints that matched as many partial prints from other fingerprints as possible.
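The second stage, searching the generator’s latent space for the most effective input, can be sketched as a toy. Everything below is a stand-in: the “generator” is an identity map, the “matcher” is a crude tolerance check rather than a real minutiae comparison, and plain hill climbing replaces the paper’s evolutionary optimizer:

```python
import random

random.seed(0)

# Stand-in generator: in the real work a GAN maps a latent vector to a
# partial fingerprint image; here we just pass the vector through.
def generate(latent):
    return list(latent)

# Stand-in matcher: a candidate "matches" an enrolled template if every
# feature is within tol, loosely mimicking a lenient partial-print check.
def match_count(candidate, enrolled, tol=0.5):
    return sum(
        all(abs(a - b) < tol for a, b in zip(candidate, template))
        for template in enrolled
    )

# Toy enrolled database of 200 four-feature "fingerprints".
enrolled = [[random.uniform(0, 1) for _ in range(4)] for _ in range(200)]

# Latent variable evolution, simplified to hill climbing: mutate the
# latent vector and keep any mutation that fools more enrolled templates.
best = [random.uniform(0, 1) for _ in range(4)]
init_score = best_score = match_count(generate(best), enrolled)
for _ in range(500):
    cand = [x + random.gauss(0, 0.1) for x in best]
    score = match_count(generate(cand), enrolled)
    if score > best_score:
        best, best_score = cand, score

print(init_score, "->", best_score)
```

The point of the sketch is only the shape of the attack: an optimizer that never sees a victim’s fingerprint can still push a generated template toward the “common” region of the database that matches many enrollees at once.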

The team points out that the attack using their AI-driven method can be distributed against random devices “with some probability of success.”

The researchers used a public NIST database of 54,000 fingerprints and 8,640 finger scans as input for training and improving their neural networks.

However, such attacks may not be able to break into your phone.

“A similar setup to ours could be used for nefarious purposes, but it would likely not have the success rate we reported unless they optimized it for a smartphone system,” lead researcher Philip Bontrager of the New York University engineering school told Gizmodo. “This would take a lot of work to try and reverse engineer a system like that.”

But if a hacker were able to mount such attacks against many fingerprint-protected accounts, the overall success rate of unlocking devices would be much higher.

According to Bontrager, “the underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.”

He and his team want their research to motivate companies to step up fingerprint-security efforts. “Without verifying that a biometric comes from a real person, a lot of these adversarial attacks become possible,” Bontrager said. “The real hope of work like this is to push toward liveness detection in biometric sensor.”

read more

Sony’s revamped aibo robotic dog to launch in the U.S.


Sony’s adorable aibo robot puppy returns with a hefty price of $2,899

Sony’s robotic pooch aibo is making a return to the U.S., with the “first litter” of aibos scheduled to be released in September, Sony announced in New York on Thursday.

For those unaware, aibo is a series of robotic pets designed and manufactured by the Japanese electronics company Sony. The first consumer model was introduced in May 1999 but was discontinued in January 2006. The new aibo is already a modest hit in Japan, where Sony has so far sold 20,000 units since its release there in January this year.

“The Aibo has been received very well in Japan. We want to keep the momentum when selling it in the United States,” Senior Vice President Izumi Kawanishi told a news conference in New York on Thursday.

The new version, which is the sixth-generation puppy from Sony, pairs “cutting-edge robotics” with a new, cloud-connected artificial intelligence (AI) engine and advanced image sensors, giving aibo the ability to learn and recognize faces. This will enable aibo to develop its own unique personality over time. In other words, no two robot pups will grow up to be exactly the same, as their development will be influenced by the approach of their owners.

“Each owner’s approach to raising their aibo shapes its personality, behavior and knowledge, creating a unique environment for growth. In fact, aibo is able to learn new tricks through owners’ interactions, experiences with changing seasons and different events. Not content to sit and wait to be beckoned, aibo will actively seek out its owners and can recognize their faces,” Sony says.

Also, the powerful on-board computer and advanced image sensors make aibo smarter and more lifelike. The cloud-connected AI engine enables it to recognize its owner’s face, detect smiles and words of praise, and learn new tricks over time. It can also react to being petted or scratched on the head.

So, how did Sony develop such a loveable, expressive and life-like robo-puppy?

The company said, it “integrated a wide range of sensors, cameras and actuators to bring aibo to life. Ultra-compact 1- and 2-axis actuators give aibo’s body the freedom to move along a total of 22 axes. Its adaptable behavior is made possible through deep learning AI technology in the form of built-in sensors that can detect and analyze sounds and images. aibo’s whimsical body language is expressed through a combination of eye, ear, and tail movements, as well as different voice sounds. In addition, two OLED displays are utilized for aibo’s eyes and give the appearance of blinking and closing, allowing for diverse, nuanced expressions.”

Using a Wi-Fi connection, aibo uploads all of its day-to-day experiences to the cloud, forming a memory database that enables its unique personality to grow and evolve over time.

“aibo keeps on growing and changing, constantly updating its data in the cloud,” Sony explained.

“Over time, your approach to nurturing aibo will gradually shape its personality – it could be a doting partner, a wild, fun-loving companion, or anywhere in between. It’ll even learn new tricks through interactions with other aibo, experiences with changing seasons and different events,” the company added.

Through aibo, Sony can showcase its existing technology. The image sensing and recognition technology already present in its televisions, cameras, and PlayStation Move enables aibo to better understand and remember its surroundings.

“This is truly a one-of-a-kind product designed to connect with its owners on an emotional level,” said Mike Fasulo, president and chief operating officer of Sony Electronics North America. “aibo’s charming personality, dog-like behaviors and ability to intelligently interact with family members help to create a personal bond. Bringing aibo back to the U.S. reflects Sony’s broader commitment to provide consumers with products that not only entertain them, but also enrich their lives.”

Costing a whopping $2,899, Sony’s First Litter Edition will be a limited, all-in-one aibo bundle that includes the autonomous robot pup, a three-year AI Cloud Plan, an assortment of toys for the doggy to play with (such as a pink ball and an aibone), an individually numbered commemorative dog tag, paw pads and a charging station. Users can also take their aibo experience to the next level with the “My aibo” app.

Sony says that aibo First Litter Edition will be available for purchase in September 2018 with delivery in time for the holidays. The company is exhibiting the aibo at Sony Square NYC from August 24th to October 14th. The exhibition is open to the public where they can go and experience aibo before it goes on sale next month.

read more

MIT’s image editing AI tool easily replaces the background in any image


This AI-assisted image editing tool from MIT can change the background in any image

For a long time, Photoshop has been the program many of us turn to for making our photos look great with quick fixes and creative enhancements.

Now, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an AI-assisted image editing tool that automates many parts of the editing process for photos. This will make the job of editing photos easier.

“Instead of needing an expert editor to spend several minutes tweaking an image frame-by-frame and pixel-by-pixel, we’d like to make the process simpler and faster so that image-editing can be more accessible to casual users,” says Yagiz Aksoy, a visiting researcher at MIT’s CSAIL.

“The vision is to get to a point where it just takes a single click for editors to combine images to create these full-blown, realistic fantasy worlds.”

Called “Semantic Soft Segmentation” (SSS), this method uses artificial intelligence (AI) to automatically separate objects in an image, enabling easy image editing. For instance, using this technique, you can change the look of the background or merge foreground and background images into an entirely new scene.

How does SSS work?

The tool studies the original image’s texture and color and combines this with information collected by a neural network about what the objects in the image actually are. The neural network processes the image features and detects “soft transitions” such as hair and grass.

“Once these soft segments are computed, the user doesn’t have to manually change transitions or make individual modifications to the appearance of a specific layer of an image,” says Aksoy. “Manual editing tasks like replacing backgrounds and adjusting colors would be made much easier.”

Pixels across the image are then associated with one another by color, and this information is combined with the features detected by the neural network to estimate image layers. This task is not simple, however; computers must be taught how to do it.

“The tricky thing about these images is that not every pixel solely belongs to one object,” added Aksoy, who presented the paper at the annual SIGGRAPH computer-graphics conference in Vancouver this past week. “In many cases, it can be hard to determine which pixels are part of the background and which are part of a specific person.”
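The notion of a pixel that does not belong solely to one object can be illustrated with a toy calculation. The layer prototypes and the "semantic" feature below are invented for the example; the real system derives its layers from neural-network features and spectral analysis of pixel affinities:

```python
import math

# Toy soft segmentation: each pixel gets a *soft* membership in every
# layer rather than a hard label. A pixel here is (r, g, b, f), where f
# stands in for a semantic feature a neural network would provide.
layers = {
    "person":     (0.8, 0.6, 0.5, 1.0),  # invented layer prototypes
    "background": (0.2, 0.4, 0.2, 0.0),
}

def soft_memberships(pixel, temperature=0.1):
    # Softmax over negative squared distances to each layer prototype.
    weights = {
        name: math.exp(-sum((p - q) ** 2 for p, q in zip(pixel, proto)) / temperature)
        for name, proto in layers.items()
    }
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# A pixel matching a prototype is assigned almost entirely to that layer;
# a pixel on a soft transition (hair, grass) sits between the prototypes,
# so its membership is split instead of being forced into one layer.
print(soft_memberships((0.8, 0.6, 0.5, 1.0)))   # almost entirely "person"
print(soft_memberships((0.5, 0.5, 0.35, 0.5)))  # split evenly between layers
```

Compositing with these fractional memberships, instead of a binary mask, is what keeps replaced backgrounds from leaving hard halos around hair and other fuzzy edges.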

Where can SSS be used?

According to Aksoy, SSS in its current iteration could be used by social platforms like Instagram and Snapchat to make their filters more realistic, mainly for changing the backgrounds on selfies or simulating specific kinds of cameras.

Plans for future

In the future, researchers have the plan to work on lessening the time it takes to compute an image from minutes to seconds and to make images even more realistic by improving the system’s ability to match colors and handle things like illumination and shadows.

Source: CSAIL

read more

5 Jobs Robots Will Never take from Humans



Are robots really coming to take our jobs? A line has been drawn by the latest AI breakthroughs, such as AlphaZero, which can beat you at chess, Go, and shogi, so human worries have never been as real as they are today. Some still think we have a long road ahead, pointing to technologies such as surgical robotics that are only in the earliest stages of deployment. These people, however, forget that they already use automated cashiers and ticket machines at train stations as a matter of routine, tasks that not long ago were performed by other human beings as fully paid jobs. Does that mean every task a robot performs exceeds what human abilities can achieve? With this question in mind, for all of you critics, believers, skeptics, and dreamers, today we will talk about certain job skills in which humans can still give AI a good run for its money.


Creativity

This one is a no-brainer: yes, machines can recognize your face in a photograph, but can they actually paint one? Computer programs are very effective at calculating a viable solution from a number of options, but when it comes to making their own creative choices, they fail miserably. Creating something from scratch is still something robots have yet to replicate, since even we humans do not fully understand what makes our brains spark with a new idea. Experts have gotten robots to produce works of art, recipes, and even inspirational quotes, but the results are mixed, to say the least. All of this means that anyone whose job is heavily based on a creative process, such as musicians, writers, and entrepreneurs, can stop breathing heavily: they can safely bet on being untouched for a long while.

Physical skills

You would think that if robots can make you a morning coffee, they could handle other mundane routines too? Think twice! We humans consider walking and picking up objects the most basic and straightforward of tasks, but for robots this is actually a huge challenge. If you look at any robotic hands currently in development, you will notice how slow and awkward their movements are, not to mention that their body parts respond poorly to touch, knowing neither whether they are holding an object nor how much pressure to apply to it. The same can be said of robots’ walking skills: they seriously struggle to cope with uneven terrain or a sudden gust of wind, to say nothing of their need for battery power. So anyone in physical skill-based work, from sport to crafting, can stop worrying.


Empathy

Even though robots can already analyze your face to tell how you are feeling, they fail to read your emotions in real time: even a small change of tone can confuse the program. There are AI applications already deployed in the medical field that can detect diseases on a scan more accurately and track your personal health. However, I can hardly imagine a machine that could deliver bad news better than a human does. Robots seem hard and cold, with their iron logic and calculations, no matter how friendly their pre-recorded automated voices may sound. Whether machines will ever be able to show empathy remains one of the biggest open questions in the AI field, and for now there is no remotely close prediction. So any job that requires empathy, such as therapists, caregivers, and primary care physicians, remains unlikely to be outsourced to robots any time soon.


Decision making

In classic science fiction movies, when a human asks a computer to calculate the probability of success, it usually reports a 99 percent chance of failure. Yet when the hero acts anyway, he succeeds, no matter how bad the odds. Fictional, of course, but for me it is still a perfect example of a person’s spontaneous nature, critical thinking, and ability to deliver a positive outcome under pressure. These traits are connected to the ideas of creativity and empathy discussed above, something computer programs drastically lack, since they only do what they have been told, work only with the materials they are given, and cannot adapt to quickly changing events. Robots can only work from data, whereas we humans can act from our gut, something certain situations seriously require. This brings me to the point that every decision-based job (such as serving on a jury) should be left to people, since we can take into account aspects that machines cannot.

Technical maintenance

Last but not least, until robots can build other robots to install and maintain them, humans will always be necessary. It is up to us to plan, design, implement, and manage robotics and AI systems. According to the software development company Elinext, a robot able to move or recognize simple commands takes at least two months of human work behind its code. And with rising demand for new robots, there will be dozens of new positions for people to keep these machines running. I mean, hey, if robots do eventually take over, we will need even more people to help maintain them all. Thus, I do not think that any job connected to technical maintenance, such as constructors or engineers, is at risk any time soon.

Closing thoughts

People are better than machines in so many different ways. So the next time you wonder whether a robot will be serving you at the unemployment office, just remember that we have always found ways to make machines work for us rather than against us. But if you are still deeply concerned about the future and consider robots a threat rather than a help, start working on your soft skills: strategic thinking, problem-solving, empathy, and creativity. That way, even if robots do take over, you will be one of the few people still well qualified for a job.

read more

Machine Learning Could Help Identify Author of an Anonymous Code


Machine Learning Algorithm That De-anonymizes Programmers From Source Code And Binaries

Researchers have found that machine learning can be used to help identify pieces of code, binaries, and exploits written by anonymous programmers, according to Wired. In other words, machine learning can ‘de-anonymize’ programmers from source code or binaries.

The study was presented by Rachel Greenstadt, an associate professor of computer science at Drexel University, and Aylin Caliskan, Greenstadt’s former Ph.D. student and now an assistant professor at George Washington University, at the DefCon hacking conference.

How To De-Anonymize Code

According to the researchers, code written in a programming language is not completely anonymous. Its abstract syntax trees contain stylistic fingerprints that can potentially identify programmers from code and binaries.

For the binary experiment, the researchers fed code samples into machine learning algorithms and extracted features such as choice of words, how the code is organized, and the length of the code. They then narrowed the features down to only those that actually differentiate developers from each other.

Examples of a programmer’s work are fed into the AI, which studies their coding structure. This trains an algorithm to recognize a programmer’s coding style from samples of their work.
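The training pipeline described above can be sketched with a deliberately tiny stand-in. The handful of syntax-tree node counts and the nearest-profile matching below are invented for illustration; the actual study uses far richer lexical, layout, and syntactic features and a trained classifier:

```python
import ast

# Toy stylometric fingerprinting: count a few Python AST node types in
# each author's code, normalize, and attribute an unknown sample to the
# nearest known author profile. Feature set is invented for illustration.
NODE_TYPES = (ast.ListComp, ast.For, ast.Lambda, ast.FunctionDef)

def style_vector(source):
    tree = ast.parse(source)
    counts = [0] * len(NODE_TYPES)
    for node in ast.walk(tree):
        for i, node_type in enumerate(NODE_TYPES):
            if isinstance(node, node_type):
                counts[i] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]  # normalize away code size

def attribute(sample, profiles):
    vec = style_vector(sample)
    def dist(author):
        return sum((a - b) ** 2 for a, b in zip(vec, profiles[author]))
    return min(profiles, key=dist)

# Known samples: one author prefers comprehensions, another explicit loops.
profiles = {
    "alice": style_vector("def f(xs):\n    return [x * 2 for x in xs]"),
    "bob": style_vector(
        "def f(xs):\n    out = []\n    for x in xs:\n        out.append(x * 2)\n    return out"
    ),
}

unknown = "def g(ys):\n    return [y + 1 for y in ys]"
print(attribute(unknown, profiles))  # -> "alice"
```

Even this crude version shows why renaming variables is not enough to stay anonymous: the structural habits survive the rename.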

For the testing, Caliskan and the other researchers used code samples from Google’s annual Code Jam competition. The AI successfully identified the programmers in the sample 83% of the time.

Where can it be used?

This approach could be used to identify malware authors or to investigate hacks. It could also reveal whether programming students copied code from others, or whether a developer violated a non-compete clause in their employment contract.

Privacy Implications

However, the approach also has privacy implications, especially for the thousands of developers who contribute open-source code to the world and choose, for their own reasons, to remain anonymous.

Future Work

Greenstadt and Caliskan plan to study how other factors affect a person’s coding style: what happens when members of the same organization collaborate on a project, whether people from different countries code in different ways, and whether the same attribution methods work uniformly across different programming languages.

“We’re still trying to understand what makes something really attributable and what doesn’t,” says Greenstadt. “There’s enough here to say it should be a concern, but I hope it doesn’t cause anybody to not contribute publicly on things.”

Source: Defcon

read more

Small group of students beat Google’s machine learning code

AI coders from created an algorithm that outdid code from Google’s researchers

A small team of student AI (artificial intelligence) coders at has outperformed code from Google’s researchers, an important benchmark reveals.

Students from, a non-profit group that creates learning resources and is dedicated to making deep learning “accessible to all”, have created an AI algorithm that beats code from Google’s researchers.

Researchers from Stanford measured the algorithm using a benchmark called DAWNBench, which uses a common image classification task to track the speed of a deep-learning algorithm per dollar of compute power. According to the benchmark, the algorithm built by’s team beat Google’s code.

 consists of part-time students who are eager to try out machine learning and turn it into a career in data science; the group rents access to computers in Amazon’s cloud. The success of a small organization like matters, because advanced AI research is often assumed to be possible only for those with huge resources.

The previous rankings were topped by Google’s researchers in a category for training on several machines, using a custom-built cluster of its own chips designed specifically for machine learning. The team was able to deliver something even faster, on more or less equivalent hardware.

“State-of-the-art results are not the exclusive domain of big companies,” says Jeremy Howard, one of’s founders and a prominent AI entrepreneur. Howard and his co-founder, Rachel Thomas, created to make AI more accessible and less exclusive.

Howard’s team has competed with the likes of Google by doing a lot of simple things well, such as ensuring that the images fed to its training algorithm were cropped correctly. More information can be found in a detailed blog post. “These are the obvious, dumb things that many researchers wouldn’t even think to do,” Howard says.

Recently, a collaborator at the Pentagon’s new Defense Innovation Unit developed the code needed to run the learning algorithm on several machines, to help the military work with AI and machine learning.

Although the work of is remarkable, huge amounts of data and significant compute resources remain important for many AI tasks, notes Matei Zaharia, a professor at Stanford University and one of the creators of DAWNBench.

The algorithm used 16 Amazon Web Services (AWS) instances and was trained on the ImageNet database in 18 minutes, at a total compute cost of around $40. While this is about 40 percent better than Google’s effort, the comparison is tricky because the hardware used was different, Howard notes.
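As a back-of-the-envelope check on those figures (using only the numbers quoted in this article, not the official DAWNBench entries, which break costs down differently), the hourly rate per instance implied by the reported cost is easy to compute:

```python
# Sanity check of the article's figures: 16 instances, 18 minutes, ~$40 total.
instances = 16
minutes = 18.0
total_cost = 40.0  # dollars, as reported

instance_hours = instances * minutes / 60.0          # 4.8 instance-hours
implied_rate = total_cost / instance_hours           # dollars per instance-hour
print(f"{instance_hours:.1f} instance-hours, ~${implied_rate:.2f}/instance-hour")
```

This kind of speed-per-dollar arithmetic is exactly what the DAWNBench benchmark tracks: not just how fast a model trains, but how much rented compute that speed costs.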

Jack Clark, director of communications and policy at the nonprofit OpenAI, says has produced valuable work in other areas, such as language understanding. “Things like this benefit everyone because they increase the basic familiarity of people with AI technology,” Clark says.

Source: MIT

read more