June 19, 2019
Hands-On with the All-New Create ML App: Machine Learning for the Masses - Part 1
Apple announced a staggering number of updates at this year's WWDC, and if you take a look at their updated developer homepage, you'll see two things at the top you've probably heard a good deal about - SwiftUI and Project Catalyst. Also prominently featured, however, is something that hasn't garnered nearly as much coverage - Machine Learning.
Machine learning is becoming increasingly important, but it's still not well understood by everyone. Apple is aiming to change that and continue their "Everyone Can Code" push with this year's machine learning updates. These updates, especially the new Create ML app, make it easier than ever for someone with no prior machine learning knowledge to dive right in.
Create ML was first introduced last year as a framework integrated into Swift Playgrounds, but this year it becomes its own fully featured app. Now, using Create ML, someone with just a little bit of data (as simple as a folder of images) and no previous machine learning expertise can create a model in minutes. Experienced developers benefit too, thanks to new functionality such as interactive previews and live visualization of training progress and metrics.
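For context, the framework route from last year looked something like this in a macOS playground. This is a minimal sketch - the folder paths, model name, and metadata are all placeholders, and the "train" folder is assumed to contain one subfolder per label, each full of example images:

```swift
// Minimal sketch of training an image classifier with the CreateML framework
// in a macOS playground. Paths are hypothetical placeholders.
import CreateML
import Foundation

let trainingFolder = URL(fileURLWithPath: "/Users/you/Desktop/train")

// Train a classifier from the labeled subfolders (e.g. "chair" and "stool").
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingFolder))

// Export an .mlmodel file that can be dropped straight into an Xcode project.
try classifier.write(
    to: URL(fileURLWithPath: "/Users/you/Desktop/ChairOrStool.mlmodel"),
    metadata: MLModelMetadata(author: "You",
                              shortDescription: "Classifies furniture as a chair or a stool",
                              version: "1.0")
)
```

The new app wraps this same workflow in a graphical interface, so no code is required at all.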
What's New
Create ML is a whole new way to train custom machine learning models. It offers an extremely simplified experience for training a model using a new, straightforward workflow.
Before going any further, let's pause to define some terminology that Apple uses when discussing machine learning. A domain is a grouping of similar input/model types:
Previously, only three domains were available - Image, Text and Tabular. This year Apple has added Sound and Activity, bringing the full list to five with a total of nine model types. A model type is the specific kind of model you want Create ML to train and create. You will also see these referred to in Create ML as templates.
Model Types
- Image Classifier - classify an image based on its content
- Object Detector* - detect and locate multiple items within a single image
- Sound Classifier - classify the most dominant sound
- Activity Classifier* - classify motion data captured from a variety of sensors
- Text Classifier* - classify text based on its contents, for example into topics or categories
- Word Tagger* - mark words of interest in text
- Tabular Classifier, Tabular Regressor* - the most general types; they try multiple models, pick the best, and classify (or predict values from) tabular data
- Recommender* - make recommendations based on user behavior and interaction, on device
*Not available in the first beta
These additional model types let you take advantage of more sensors for input and make for more personalized, intelligent models.
All of these will be built into the all-new Create ML app. The app is especially powerful because it leverages a technique called transfer learning, which harnesses the intelligence built right into the OS to accelerate training. This also results in much smaller models, allowing deployment to nearly any device.
Enough talk, let's dive in...
Create ML
A colleague and I were sitting in the office discussing all of the machine learning announcements when an opportunity to test the new app out conveniently presented itself.
The following piece of furniture was in the office, and a friendly debate sprang up over whether it was a chair or a stool.
It has a back, a traditional trademark of a chair, but the rest of it feels very stool-like. Let's see if Create ML can help settle this debate.
Getting Set Up
The first thing you need to know about running the Create ML app is that it requires the betas of both macOS Catalina AND Xcode 11 to be installed.
The Create ML app is actually built into Xcode's suite of developer tools and can be launched from the Xcode menu > Developer Tools.
Upon launch, you'll be presented with a dialog to open an existing project (of which you'll have none), so simply select File > New Project to bring up the following screen:
*Notes and screenshots based on seed 1
On the left is where you'll see the list of domains discussed earlier. You may notice this screen looks slightly different than the one demoed at WWDC. Unfortunately, the first beta of Create ML only has two domains, each with one model type. Apple has stated on Twitter that the rest will be coming in later betas leading up to the public release.
Since we're trying to see what a model might tell us about a picture of our chair/stool, we'll select the Image Classifier template/model type and click Next. Name the project, choose where to save it, and click Create to open the main interface.
Create ML has a simple, intuitive interface that guides you through the process with tabs along the top. Your journey of creating a model moves left to right in three phases, beginning with input and ending with output, a concept everyone can understand. What someone new to ML might not yet understand are the basic concepts of model creation, which are built right into the tool.
Phase 1: Input
Having a clean, organized data set is key to any machine learning project. Once you have one, it's as easy as dragging it into Create ML, but let's talk about our data set for a minute. In order to have a model that can guess whether our item is a chair or a stool, we need to train it on what chairs and stools are. We gathered 25 images each of chairs and stools, put them into separate folders, and placed those two folders inside a single folder called "train". A quick note on organizing the data: it's recommended to use ~80% of your images for training and ~20% for testing. We'll add our testing images shortly, but for now, with this collection of training images, all you need to do is drag the folder into Create ML:
and let it do its magic:
You can see it knows we have 50 total items and has detected we have 2 classes. We'll leave the validation setting in the middle area set as Automatic for now and discuss that more later.
Now we'll drag in our test folder, which is simply another five images of each class that we'll use to test the trained model.
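If you'd rather not sort images into train and test folders by hand, the ~80/20 split can be scripted. This is just one way to do it with Foundation - the paths and split ratio are placeholders, and none of this is part of Create ML itself:

```swift
// Splits a folder of labeled images (one subfolder per class) into "train" and
// "test" folders using an ~80/20 ratio. Paths and ratio are hypothetical.
import Foundation

let source    = URL(fileURLWithPath: "/Users/you/Desktop/all-images")
let trainRoot = URL(fileURLWithPath: "/Users/you/Desktop/train")
let testRoot  = URL(fileURLWithPath: "/Users/you/Desktop/test")
let trainFraction = 0.8

let fm = FileManager.default
let classFolders = try fm.contentsOfDirectory(at: source, includingPropertiesForKeys: nil,
                                               options: .skipsHiddenFiles)
    .filter { $0.hasDirectoryPath }

for classFolder in classFolders {
    // Shuffle so the split isn't biased by file naming or creation order.
    let images = try fm.contentsOfDirectory(at: classFolder, includingPropertiesForKeys: nil,
                                            options: .skipsHiddenFiles).shuffled()
    let cutoff = Int(Double(images.count) * trainFraction)

    for (index, image) in images.enumerated() {
        let destinationRoot = index < cutoff ? trainRoot : testRoot
        let classDestination = destinationRoot.appendingPathComponent(classFolder.lastPathComponent)
        try fm.createDirectory(at: classDestination, withIntermediateDirectories: true)
        try fm.copyItem(at: image, to: classDestination.appendingPathComponent(image.lastPathComponent))
    }
}
```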
Create ML also lets you run multiple trainings at the same time, increasing efficiency.
Phase 2: Training
As mentioned earlier, Create ML's workflow has three phases: Input, Training, and Output. The input phase is now complete (indicated by its box being populated at the top) and all that's required for training is to click Run. As the training is running, you can view its progress live:
Our data set was relatively small, so the whole operation only took a few seconds, but you can imagine that for large data sets, this would be extremely valuable.
Once complete, a screen similar to the following will be presented:
You now have a fully functional model, ready to determine, based on what you taught it, whether the item is a chair or a stool. Let's pause for a minute to appreciate that. With relative ease, we now have a trained model that is ready to be deployed and used in an app. But let's look a little closer and see what Create ML has done for us. Click back on the Training tab:
Here we see the completed training status we watched live before. The table at the bottom is actually interactive, enabling filtering for various insights into your training data.
At the top, we expect to see 100% training accuracy because we told the model explicitly which images were chairs and which were stools. What isn't expected to be 100% is the validation accuracy. It actually seems a little low at 71%, and I'll tell you why. In the input phase above, we selected the option for automatic validation. What we learned is that this setting uses a random ~10% of your images to validate the training. Remember that we only gave it 25 images of each class, so 10% is only going to be a few images, and that's not a great subset for validation. In the current beta, there's not a ton of insight into which images were used for validation, but I'm hoping that's something we get down the road.
If you click the Testing tab, we can see our test data performed better, achieving 90% accuracy. The Testing tab, as well as the Validation tab, also provides interactive filtering, which is useful for larger data sets.
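For comparison, if you use the CreateML framework instead of the app, the same training, validation, and testing numbers can be pulled out programmatically. A sketch, using hypothetical "train" and "test" folders organized one subfolder per class, and assuming the MLImageClassifier metrics API behaves as in its original release:

```swift
// Train, then inspect the same metrics the app shows on its Training and Testing tabs.
// Folder paths are placeholders; each contains one subfolder per class label.
import CreateML
import Foundation

let trainFolder = URL(fileURLWithPath: "/Users/you/Desktop/train")
let testFolder  = URL(fileURLWithPath: "/Users/you/Desktop/test")

let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainFolder))

// classificationError is the fraction of misclassified images, so accuracy = 1 - error.
print("Training accuracy:   \((1 - classifier.trainingMetrics.classificationError) * 100)%")
print("Validation accuracy: \((1 - classifier.validationMetrics.classificationError) * 100)%")

// Evaluate against the held-out test set, mirroring the app's Testing tab.
let testMetrics = classifier.evaluation(on: .labeledDirectories(at: testFolder))
print("Test accuracy:       \((1 - testMetrics.classificationError) * 100)%")
```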
Let's look now at phase 3 of our process - the output.
Phase 3: Output
This screen may look a little empty at first glance, but it's hiding some powerful features.
First of all, the tab in the top right actually contains your model, and the .mlmodel file (only 17 KB!) can simply be dragged out of Create ML to wherever you need it.
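To give a sense of what "wherever you need it" might look like in practice, here's a rough sketch of consuming the exported model in an app through Vision. "ChairOrStool" is a hypothetical model name - Xcode generates a Swift class matching whatever you name the file - and this is just one common way to run a Core ML image classifier, not something the Create ML app does for you:

```swift
// Classifies a UIImage with the exported Create ML model via the Vision framework.
// "ChairOrStool" is the class Xcode generates for a hypothetical ChairOrStool.mlmodel.
import UIKit
import CoreML
import Vision

func classify(_ image: UIImage, completion: @escaping (String) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? ChairOrStool(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion("Unable to load image or model")
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Classification results come back ranked by confidence.
        guard let best = (request.results as? [VNClassificationObservation])?.first else {
            completion("No classification")
            return
        }
        completion("\(best.identifier) (\(Int(best.confidence * 100))% confidence)")
    }

    do {
        try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    } catch {
        completion("Classification failed: \(error)")
    }
}
```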
The main area of the screen is perhaps the most unique and amazing feature of Create ML - Preview.
For any model you create, you obviously want to see how it performs with new data it hasn't seen before (our chair/stool in this example). Previously, this might have required building the model into an app, deploying it, and then using that app to take a picture of our chair/stool. Now, with Create ML, it's as simple as drag and drop:
It's a chair! There's no need to elaborate on whether my colleague or I was correct. Beyond quickly settling a debate without needing to build a full app, this is truly a powerful feature. The ability to preview a trained model without waiting is a huge time saver compared to the traditional machine learning workflow. It allows users to perfect their model BEFORE deployment.
Create ML can handle multiple files at once as well. I went around the office and took some various pictures of (what I thought were) chairs and tested them against the model by simply dragging the folder in. The model was pretty accurate, identifying the chairs as chairs with 100% confidence almost every time. A couple of the pictures I took at different angles gave slightly different results:
The preview window gives you its top prediction and confidence, along with any other predictions and their confidence levels. These preview screens are also custom-built for each model type - the sound classifier's preview will show a waveform, for example.
There's one last feature to discuss for the preview functionality. Let's say my colleague or I wanted to test the model some more, perhaps with new angles or lighting conditions. With the Continuity Camera feature, you can connect your device and test the model live using your device's camera, right in Create ML. Unfortunately, this feature isn't available in the first beta so we couldn't try it, but we think it will be incredibly useful (when it's added, it can be accessed using File > Import from iPhone > Take Photo).
The preview is a great way to dynamically test your model, but you can also go back to the Testing tab and retest with different data to fine-tune your model. In addition, you can easily add more training data, customize the validation, and even add augmentations (discussed more in Part 2).
Beta Observations
Encountering a few issues is always the price you pay for the chance to test-drive new software, so here are a few notes from our time with the first beta:
- If you navigate to the Testing tab and retest with additional data, you'll get the aggregate accuracy score at the top, but no visibility into the performance of individual images like you get with the preview feature. Conversely, if you drag a folder of images into the preview, you don't get an aggregate score. We're hoping this functionality will eventually be shared between the Testing and Output tabs.
- We also couldn't find a way to delete an image from the preview pane, but dragging in another group will overwrite the previous one.
- When trying to train a large data set of around 800 images in 25 different classes, we kept receiving an "internal error" we couldn't get past until we pared down the size of the data set.
A Powerful New Tool for Everyone
Before WWDC, I had always been interested in the power of machine learning but felt intimidated since I had no experience with it. After spending a couple of days with Create ML, I'm hooked, and I think others will be too. It all but eliminates the barrier to entry and requires almost no prior knowledge whatsoever.
It's amazing that only a couple of years ago model creation was a complicated process, and today it's as simple as a drag, a drop, and the click of a button.
Machine learning is growing in adoption and will become an integral component of software going forward. Create ML is an amazing tool that makes it easy for anyone to create a model and start building.