Google’s ML Kit for iOS and Android Developers

Introducing Google’s ML Kit

Apple regularly offers new tools and services to its developer community, and Google is not one to lag behind: its teams are constantly working to improve the developer experience through products such as Google Cloud, TensorFlow, and Firebase.

At Google I/O 2018, Google launched ML Kit for iOS and Android developers. The toolkit is designed to bring Google’s work in artificial intelligence to mobile apps, and with its ready-made models, Google has put enormous power in the hands of developers.

Difference Between Core ML and ML Kit

Google’s ML Kit offers developers a range of machine learning tools, but it competes directly with Apple’s Core ML. For developers, this is a golden situation: they get to explore two capable machine learning frameworks. The two differ in a few important ways:

With Core ML, you have to supply your own trained models. With ML Kit, you can either bring your own model or use Google’s pre-trained models, which is especially helpful for beginners.

If your model is too large to ship on-device, ML Kit can host it in the cloud via Firebase. Core ML models, by contrast, run only on the device.

Things You Can Do With ML Kit

Here is a list of the things you can do with Google’s ML Kit. It offers developers the following features:

Barcode Scanning

This feature enables your app to scan different kinds of barcodes. In a sample app, you might implement it in a BarcodeViewController: the detection code runs when the Choose Image button is tapped, and an options variable tells ML Kit which barcode formats to recognize. ML Kit can recognize the following formats: Codabar, Code 39, Code 93, UPC-A, UPC-E, Aztec, PDF417, QR code, and more.
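As a rough sketch, a hypothetical BarcodeViewController might configure and run the detector like this (the image-picking flow and the scanBarcode(in:) helper name are assumptions; exact API names vary by Firebase SDK version):

```swift
import UIKit
import Firebase

class BarcodeViewController: UIViewController {
    lazy var vision = Vision.vision()

    // Restrict recognition to the formats we care about; .all is also available.
    let options = VisionBarcodeDetectorOptions(formats: [.qrCode, .code39, .UPCA])

    // Called after the user has chosen an image (e.g. via UIImagePickerController).
    func scanBarcode(in image: UIImage) {
        let barcodeDetector = vision.barcodeDetector(options: options)
        let visionImage = VisionImage(image: image)

        barcodeDetector.detect(in: visionImage) { barcodes, error in
            guard error == nil, let barcodes = barcodes else { return }
            for barcode in barcodes {
                // rawValue holds the decoded payload, e.g. the URL inside a QR code.
                print(barcode.rawValue ?? "unreadable barcode")
            }
        }
    }
}
```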

Face Detection

The next thing you can do with ML Kit is face detection. We are not talking about simply drawing a box around a face; we are referring to detection that can, for example, tell whether the person is smiling. To implement this feature, you first define some constants:

import UIKit
import Firebase

// Enable classification so the detector reports smile probability, not just face bounds.
let options = VisionFaceDetectorOptions()
options.classificationType = .all

lazy var vision = Vision.vision()
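With those constants in place, a sketch of running the detector and reading the smile classification might look like this (detectSmile(in:) is a hypothetical helper name; exact API names vary by SDK version):

```swift
import UIKit
import Firebase

func detectSmile(in image: UIImage) {
    let options = VisionFaceDetectorOptions()
    options.classificationType = .all  // enable smile classification

    let vision = Vision.vision()
    let faceDetector = vision.faceDetector(options: options)
    let visionImage = VisionImage(image: image)

    faceDetector.detect(in: visionImage) { faces, error in
        guard error == nil, let faces = faces else { return }
        for face in faces where face.hasSmilingProbability {
            // A probability close to 1.0 means the face is very likely smiling.
            print("Smiling probability: \(face.smilingProbability)")
        }
    }
}
```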

Image Labeling

ML Kit can also label your images. Image labeling is easier to implement than face detection. You have two options: run the machine learning entirely on-device, or use Google’s Cloud Vision. The advantage of the cloud is that the model is updated automatically and tends to be more accurate, since it is easier to host a bigger, more accurate model in the cloud than on the device.
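As a sketch of the on-device path, labeling an image might look like this (labelImage(_:) is a hypothetical helper; exact API names vary by SDK version):

```swift
import UIKit
import Firebase

func labelImage(_ image: UIImage) {
    let vision = Vision.vision()

    // On-device labeler; vision.cloudLabelDetector() would use Cloud Vision instead.
    let labelDetector = vision.labelDetector()
    let visionImage = VisionImage(image: image)

    labelDetector.detect(in: visionImage) { labels, error in
        guard error == nil, let labels = labels else { return }
        for label in labels {
            // Print each label with its confidence score.
            print("\(label.label) (\(label.confidence))")
        }
    }
}
```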

Text Recognition

In the last few years, Optical Character Recognition (OCR) has become immensely popular on mobile phones. ML Kit makes text recognition very easy. It can call a model in the cloud just like image labeling, but we’ll work with the on-device API for now. Start by declaring the Vision instance and a text detector:

 

import UIKit
import Firebase

lazy var vision = Vision.vision()

// The on-device text detector, created from the Vision instance when needed.
var textDetector: VisionTextDetector?

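Building on those declarations, a sketch of running the on-device text detector could look like this (recognizeText(in:) is a hypothetical helper; exact API names vary by SDK version):

```swift
import UIKit
import Firebase

func recognizeText(in image: UIImage) {
    let vision = Vision.vision()
    let textDetector = vision.textDetector()  // on-device text detector
    let visionImage = VisionImage(image: image)

    textDetector.detect(in: visionImage) { features, error in
        guard error == nil, let features = features else { return }
        for feature in features {
            // Each feature is a block of recognized text.
            print(feature.text)
        }
    }
}
```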

Landmark Recognition

Just like the other four categories, you can implement landmark recognition with ML Kit. To do so, however, you have to switch the project to the cloud-based API, as on-device landmark recognition isn’t available.
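Because landmark recognition is cloud-only, a sketch would use the cloud detector (network access and a billing-enabled Firebase project are assumed; recognizeLandmarks(in:) is a hypothetical helper and exact API names vary by SDK version):

```swift
import UIKit
import Firebase

func recognizeLandmarks(in image: UIImage) {
    let vision = Vision.vision()

    // Cloud-based detector; there is no on-device equivalent for landmarks.
    let landmarkDetector = vision.cloudLandmarkDetector()
    let visionImage = VisionImage(image: image)

    landmarkDetector.detect(in: visionImage) { landmarks, error in
        guard error == nil, let landmarks = landmarks else { return }
        for landmark in landmarks {
            // Print the landmark name with its confidence score.
            print("\(landmark.landmark ?? "Unknown") (\(landmark.confidence ?? 0))")
        }
    }
}
```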

That wraps up our look at Google’s ML Kit. To learn more about its features, leave a comment and we’ll be back with more information.
