Introduction to Apple’s Machine Learning Framework – Core ML

Latest Version of Core ML

Core ML is a machine learning framework that Apple provides to its developers. It lets developers add machine learning capabilities to their apps in just a few lines of code. At WWDC 2018 in San Jose, Apple introduced a new version, Core ML 2. This release gives developers model size optimization, performance improvements, and the ability to customize their own Core ML models.

In the App Store, you will find plenty of great apps: some understand your text, while others recognize your workout routine from your motion. All of these apps are examples of powerful machine learning in action, and they work because their developers used Core ML to build those features.

Core ML has made it possible for developers to integrate machine learning into their apps. It is thanks to Core ML that today’s apps can recognize text and audio. Apple has gone the extra mile by enabling real-time image and language analysis through two frameworks built on top of Core ML: Vision and Natural Language. With the VNCoreMLRequest API and the NLModel API, you can greatly expand your app’s ML capabilities, since both Vision and Natural Language are built upon Core ML.
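To make this concrete, here is a rough Swift sketch of how an app might drive Core ML models through these two frameworks. The model class names (`MyClassifier`, `MySentimentClassifier`) are hypothetical stand-ins for the classes Xcode generates from your own `.mlmodel` files; `VNCoreMLRequest`, `VNImageRequestHandler`, and `NLModel` are the actual Vision and Natural Language APIs.

```swift
import CoreGraphics
import CoreML
import Vision
import NaturalLanguage

// Image classification through Vision. "MyClassifier" is a hypothetical
// Xcode-generated model class; substitute your own .mlmodel class.
func classify(_ image: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: MyClassifier().model)

    // VNCoreMLRequest scales and crops the image for the model, then runs it.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Top label: \(top.identifier) (confidence \(top.confidence))")
    }

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}

// Text classification through Natural Language. "MySentimentClassifier" is
// also a hypothetical generated model class wrapping a text classifier.
func sentiment(of text: String) throws -> String? {
    let nlModel = try NLModel(mlModel: MySentimentClassifier().model)
    return nlModel.predictedLabel(for: text)
}
```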

Integrated Parts of Core ML 2

In the latest version of Core ML, Apple has focused mainly on the models. The new release of Core ML revolves around three main areas:

  • Model Size
  • Performance of Model
  • Customizing Model

So, in this post, we will look at each of these areas one by one to get an overview.

Model Size – One of the advantages of Core ML is that everything runs on the device. When all processing happens on the user’s device, privacy and data protection improve. However, machine learning models tend to be large, and shipping them inside an app takes up more storage than a user’s device can comfortably spare.

So, Apple tries to solve this problem in Core ML through quantization, a technique in which numbers are stored and computed in a more compact form. A machine learning model is essentially a large collection of numbers that the device computes with, so if we store those numbers in less space, we reduce the size of the model.

Basically, Core ML reduces model size by altering three things: the number of models, the number of weights, and the size of the weights. Quantization targets the size of the weights. In iOS 11, Core ML stored model weights as 32-bit values; with iOS 12, Apple lets us store them as 16-bit or even 8-bit values.

For example: suppose the first time you visit a supermarket, you take the longer route to get there. By the second or third visit, you figure out a shortcut, because by then you know exactly where the supermarket is.
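Conceptually, 8-bit linear quantization works something like the toy Swift sketch below. This is only an illustration of the idea of storing weights in a more compact form, not how Apple’s conversion tools implement it internally.

```swift
// Toy illustration of 8-bit linear quantization: each 32-bit weight is
// mapped to one of 256 evenly spaced levels between the smallest and
// largest weight, shrinking storage to roughly a quarter of the original.
func quantizeTo8Bit(_ weights: [Float]) -> (levels: [UInt8], scale: Float, minWeight: Float) {
    guard let minW = weights.min(), let maxW = weights.max(), maxW > minW else {
        return (Array(repeating: UInt8(0), count: weights.count), 1, weights.first ?? 0)
    }
    let scale = (maxW - minW) / 255
    let levels = weights.map { UInt8((($0 - minW) / scale).rounded()) }
    return (levels, scale, minW)
}

// At load time, each 8-bit level is turned back into an approximate
// 32-bit weight before the model runs.
func dequantize(_ level: UInt8, scale: Float, minWeight: Float) -> Float {
    return Float(level) * scale + minWeight
}
```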

Performance – The next thing Apple focused on is the performance of the model. Since all of the machine learning work happens on a single device, predictions have to be fast and accurate. Making inference that efficient is a hard task, but Apple has managed it in Core ML.

For example, style transfer is a machine learning application that redraws one image in the style of another. If you have used the Prisma app before, that is a typical use of style transfer.
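One concrete way Core ML can speed things like this up is batch prediction, added in iOS 12: instead of looping over single predictions, you hand the framework a whole set of inputs at once. Here is a minimal sketch using `MLArrayBatchProvider` and `predictions(fromBatch:options:)`; how you build the individual feature providers depends on your own model.

```swift
import CoreML

// Minimal sketch of batch prediction: wrap many inputs in one batch so
// Core ML can schedule the whole set in a single call instead of running
// one prediction at a time.
func predictBatch(_ inputs: [MLFeatureProvider], using model: MLModel) throws -> MLBatchProvider {
    let batch = MLArrayBatchProvider(array: inputs)
    return try model.predictions(fromBatch: batch, options: MLPredictionOptions())
}
```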

Customizing – When you open up a neural network, you see numerous complex layers that only skilled data scientists and machine learning engineers can fully understand. We will not go too deep here; just know that with Core ML 2, you can work with the neural network yourself if you have the skills to do so.
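For instance, Core ML lets you plug your own code into a network by conforming to the `MLCustomLayer` protocol. The toy layer below just applies a ReLU element-wise on the CPU; a real custom layer would implement whatever operation the built-in layers cannot express. Treat it as a sketch of the protocol’s shape rather than a production implementation.

```swift
import CoreML

// A toy custom layer conforming to MLCustomLayer. It simply applies ReLU
// element-wise; the class name here is arbitrary and would have to match
// the layer's class name recorded in the model file.
@objc(ToyReLULayer) class ToyReLULayer: NSObject, MLCustomLayer {

    required init(parameters: [String : Any]) throws {
        super.init()
    }

    // This toy layer has no learned weights to load.
    func setWeightData(_ weights: [Data]) throws { }

    // The output has the same shape as the input.
    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        return inputShapes
    }

    // CPU evaluation path: clamp every element at zero.
    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        for (input, output) in zip(inputs, outputs) {
            for i in 0..<input.count {
                output[i] = NSNumber(value: max(0, input[i].floatValue))
            }
        }
    }
}
```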

Conclusion

So, in the latest version of Core ML, Apple has focused on smaller, faster, and customizable models. Core ML 2 has plenty of new features, but as a developer, you will probably use weight quantization the most. In this blog, we have only covered the main highlights of Core ML, as it holds many more features.
