During its product event on Tuesday, Apple announced new updates for the Apple Watch line, a new “Apple One” bundle of services, and plenty more — but what excited me most as a developer was the new iPad Air.
The new tablet comes with an updated bezel design that looks like the current iPad Pro’s, several color offerings, a Touch ID–capable wake button, and a suite of performance upgrades. The iPad Air will now have the A14 chip, with a 6-core CPU and a 16-core Neural Engine not previously available on an iPad. The Neural Engine can perform up to 11 trillion operations per second, which means that developers using Core ML in their apps will see an immense improvement in how fast those apps can run machine learning on-device.
One of the developers Apple featured to showcase the new ML capabilities was Karim Morsy, who created the app djay Pro AI. Karim explained how, with the new iPad Air, users can now DJ in the air without touching the iPad. The app uses the camera to track the user’s hand movements and then uses machine learning to translate those movements into spinning the turntables. When I saw that, I envisioned an app that could help you learn and improve your sign language by tracking your hand movements.
Game developers should be pumped, as well. Alexey Scherbakov, developer of War Robots, showed improved graphics and textures with the iPad Air.
I haven’t used Core ML in any of my own apps yet, but the new iPad Air may just be the push for me to dive in. If I were working on a photo-sharing app, I might use Core ML for an image classification feature to help users pre-populate tags for photo posts. An art app could use it to recommend color palettes or pairings to complement the colors in a user’s piece. A language-learning app could give users feedback on how well they’re speaking a new language. With faster processing times, Core ML lets developers take advantage of machine learning features without sacrificing performance.
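To make the photo-tagging idea concrete, here’s a minimal sketch of how that feature might look using the Vision framework’s built-in image classifier, `VNClassifyImageRequest`, which sits on top of Core ML. The helper name `suggestTags` and the confidence threshold are my own assumptions, not part of any shipping app:

```swift
import CoreGraphics
import Vision

// Hypothetical helper: classify an image and return label strings
// suitable for pre-populating a post's tags. `minConfidence` is an
// assumed tuning knob, not an API requirement.
func suggestTags(for image: CGImage,
                 minConfidence: Float = 0.3,
                 completion: @escaping ([String]) -> Void) {
    let request = VNClassifyImageRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNClassificationObservation]
        else {
            completion([])
            return
        }
        // Keep only the labels the classifier is reasonably confident about.
        let tags = observations
            .filter { $0.confidence >= minConfidence }
            .map { $0.identifier }
        completion(tags)
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    // Vision work belongs off the main thread in a real app.
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

The user would still confirm or edit the suggested tags before posting; the classifier just saves them the typing. This is exactly the kind of feature where the A14’s faster Neural Engine matters, since the classification can run on-device without a noticeable pause.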