At their Worldwide Developers Conference in 2019, Apple added object detection support to CreateML, their no-code machine learning app.
This means, in theory, you can get a trained model suitable for use in your iPhone application without writing a single line of code.
And that’s true, except for one thing: to train a machine learning model, you need data.
And to load that data into CreateML, you’ll need it in the proper format.
Unfortunately, Apple CreateML uses a proprietary JSON format that isn’t widely supported in labeling tools.
So, most users will need to write some boilerplate code to convert their annotations.
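That boilerplate tends to look something like the sketch below. This is a minimal Python example, assuming your existing labels are top-left-origin pixel boxes (a common convention in labeling tools); CreateML's annotation JSON instead describes each box by its center point plus width and height, in pixels. The image name and label here are placeholders.

```python
import json

def to_createml(image_name, boxes):
    """Convert (label, x_min, y_min, width, height) pixel boxes
    into one CreateML annotation entry.

    CreateML's object detection JSON expects `coordinates` to hold
    the box *center* plus width/height, all in pixels.
    """
    annotations = []
    for label, x_min, y_min, w, h in boxes:
        annotations.append({
            "label": label,
            "coordinates": {
                "x": x_min + w / 2,   # convert top-left x to center x
                "y": y_min + h / 2,   # convert top-left y to center y
                "width": w,
                "height": h,
            },
        })
    return {"image": image_name, "annotations": annotations}

# CreateML reads a JSON array with one entry per image.
entries = [to_createml("photo_001.jpg", [("mask", 40, 30, 80, 60)])]
print(json.dumps(entries, indent=2))
```

Multiply that by every labeling tool and every export format, and the "no-code" promise starts to slip.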
Luckily for us, Roboflow is the universal converter for computer vision, so by combining Roboflow with CreateML we can fully realize the vision of no-code model training.
You can collect data simply by taking pictures with your iPhone. Collect a variety of photos of your subject from different angles, with different backgrounds, and in different lighting.
Once you’ve got a hundred or so images, you’re ready to label them. Then drop your images and annotations into Roboflow and export in CreateML format.
As training progresses, we can see that the model is better at detecting faces with masks than faces without masks.
This is a pretty good start! I can drop this CoreML file directly into the Apple sample app and run it on my iPhone’s live camera feed.
But getting a model that is good enough to use in production requires experimentation and iteration.