Just to let you know, we’re an affiliate for Amazon, Bluehost, CJ and Rakuten Marketing, and some of the links below are affiliate links, meaning that, at no additional cost to you, we may earn a commission if you click through and make a purchase. Thank you if you use our links, we really appreciate it!
Let’s face it, deep learning and artificial intelligence are not the easiest things to get your head around. There are lots of things to learn: you have to get your data or images into the right format, understand how things like deep convolutional neural networks work, have a decent grasp of the languages involved (such as Python), and often there’s a lot of complicated maths to understand. Not only that, even if you do understand all the ins and outs of AI, building your own custom machine learning model can be both time-intensive and complicated.
Fortunately, Google has come to the rescue yet again and released a new AI tool that lets you train a custom machine learning model. It’s not the first time Google has released image recognition tools; in fact, I wrote about Google’s object detection software last year. But this time you don’t have to write a single line of code!
The software is called Cloud AutoML Vision, and it basically helps users take advantage of the latest developments in artificial intelligence without needing all the machine learning know-how.
So, the basic premise is that you can take a bunch of images (a ton of them, seriously, up to 10,000), upload them along with their tags, and Google’s system will then build a model for you automatically. The great thing is, everything is handled through a drag and drop interface, from importing the data through to tagging it and training the model.
Once the model is built and the system thinks it is accurate enough, the model can then be used to classify what it thinks is in new images it’s given, ones it hasn’t seen before. And that’s the basic idea of image recognition and machine learning. You give the algorithm lots and lots of example images of the types of things you want it to recognize, along with the appropriate labels, and it learns to identify patterns.
So for example, if you were to provide lots of images of cars of different makes, models, colors etc., then given a car it hasn’t seen before, it should be able to recognize it as a car and not something else such as a ‘boat’.
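To make that idea concrete, here’s a tiny toy sketch of supervised classification in plain Python. This is NOT how AutoML Vision works internally (it uses deep neural networks and far more data); it just illustrates the principle described above: feed in labeled examples, build a model, then classify an unseen example. The 2x2 “images”, the brightness values and the `train`/`classify` helper names are all made up for illustration.

```python
# Toy illustration of "labeled examples in, predictions out".
# Each "image" is just a flat list of pixel brightness values.

def average(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: list of (pixels, label). Returns label -> average 'prototype'."""
    by_label = {}
    for pixels, label in examples:
        by_label.setdefault(label, []).append(pixels)
    return {label: average(vecs) for label, vecs in by_label.items()}

def classify(model, pixels):
    """Pick the label whose prototype is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], pixels))

# Fake 2x2 "images": bright ones are labeled "car", dark ones "boat".
training_data = [
    ([0.9, 0.8, 0.9, 0.7], "car"),
    ([0.8, 0.9, 0.8, 0.9], "car"),
    ([0.1, 0.2, 0.1, 0.2], "boat"),
    ([0.2, 0.1, 0.2, 0.1], "boat"),
]
model = train(training_data)
print(classify(model, [0.85, 0.9, 0.8, 0.8]))  # an unseen bright image -> car
```

With enough labeled examples, even this crude averaging picks up the pattern; AutoML Vision automates a vastly more powerful version of the same loop behind its drag and drop interface.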
In Google’s blog post, they mention that Disney, for example, has used the system to “build vision models to annotate our products with Disney characters, product categories and colors. These annotations are being integrated into our search engine to enhance the impact on Guest experience through more relevant search results, expedited discovery and product recommendations on shopDisney,” says Mike White, CTO and SVP for Disney Consumer Products and Interactive Media.
If you want to get access to AutoML Vision, developers currently have to apply for access to the alpha version. There’s no mention of cost yet, but it’s likely that businesses will have to pay to access their models via the AutoML API.
I think this is a really positive move by Google, because it allows companies to get on board with artificial intelligence and machine learning without having to recruit or train people in-house, which can be costly for most companies and can demand a lot of resources.
I guess the only difficulty companies may face is the data collection stage, because you need to be able to gather thousands of images and label them all correctly, or else you can end up with poor results. Even so, Google has done most of the hard work so you don’t have to.
H/T – TechCrunch