Machine learning and augmented reality have become major areas of emphasis for Apple this year, and that carries important implications for businesses. Both topics received considerable attention at the June 5-9 Apple Worldwide Developers Conference (WWDC17) in San Jose, CA.

Each year, Apple uses WWDC to provide an update on software changes coming to iOS, watchOS, tvOS, and macOS. It is the premier event for Apple mobile developers and for businesses that have a stake in mobile applications. The conference is usually light on hardware announcements, but this year was a little different, with new iPads and Macs announced, as well as the new HomePod product line (an Amazon Echo competitor).

Among the updates and developments that CapTech believes businesses need to pay attention to:

Machine Learning

Throughout the conference, Apple dropped the term "machine learning," a phrase I don't recall hearing at all last year. It was so pervasive that the Twitterverse began joking about taking a drink each time machine learning was invoked in a presentation, and being wasted by the end of the talk.

Apple isn't focusing on building trained models on servers, but on running trained models directly on phones. In part, that reflects the company's desire to enable stronger user privacy. It means that businesses will need to train their models on servers and then deploy the trained models to devices such as iPhones. Moreover, Apple doesn't provide a built-in way to train or improve a model on the device itself.

By combining server-side training with on-device execution, businesses can bring real-time machine-learning capabilities to mobile apps: image recognition, natural language processing, decisioning, and sentiment analysis, all happening live on the device. For the user, that could enable such possibilities as receiving a warning that a newly written email conveys an angry tone. (Are you sure you want to send it?) It could provide better AI players in games. And it could enable businesses to train models on the data of specific customers.
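
To make the deployment model concrete, here is a minimal Core ML sketch. The model name (Sentiment.mlmodelc) and the "text"/"sentiment" feature names are hypothetical; they depend on how your model was trained and converted before being shipped in the app bundle.

```swift
import CoreML

// A minimal sketch of running a server-trained model on the device with Core ML.
func classifySentiment(of text: String) throws -> String? {
    // Load the compiled model that was trained offline and bundled with the app.
    guard let url = Bundle.main.url(forResource: "Sentiment", withExtension: "mlmodelc") else {
        return nil
    }
    let model = try MLModel(contentsOf: url)

    // Wrap the input in a feature provider keyed by the model's input name.
    let input = try MLDictionaryFeatureProvider(dictionary: ["text": text])

    // Inference happens entirely on the device; no data leaves the phone.
    let output = try model.prediction(from: input)
    return output.featureValue(for: "sentiment")?.stringValue
}
```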

  • CapTech's recommendation: See what data your company has and what models could be trained for on-device machine learning. Continue to focus on collecting data to train your models.

Augmented Reality (AR)

Apple's new AR framework, ARKit, was displayed prominently during the keynote presentation and follow-up sessions. Apple did a live demo in the relatively uncontrolled environment of a stage in front of thousands, which speaks to the company's confidence in the framework and the underlying technology.

The demo, which included virtual spaceships flying over the audience, was extremely impressive and went off without a hitch.

Apple provides three ways to build AR functionality into an app. All are somewhat gaming-centric but can be used for interesting commercial applications. A furniture retailer could show a potential customer how a couch will look in that person's living room. A car retailer could let a customer see a given model parked in their own driveway before buying. A shoe shopper could see how a new pair looks on their feet.
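
As an illustration, here is a minimal sketch of the furniture scenario using ARKit with SceneKit rendering. The view controller and the couch placeholder are hypothetical stand-ins for a real product catalog.

```swift
import UIKit
import ARKit
import SceneKit

// A minimal sketch: ARKit tracks the room while SceneKit renders a virtual
// product. The couch here is placeholder geometry; in practice you would
// load a detailed 3D model from your asset library.
class ShowroomViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking anchors virtual content to real-world positions;
        // plane detection finds horizontal surfaces such as floors.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Place the virtual couch at a position in the tracked scene.
    func placeCouch(at position: SCNVector3) {
        let couch = SCNNode(geometry: SCNBox(width: 2.0, height: 0.8,
                                             length: 0.9, chamferRadius: 0.05))
        couch.position = position
        sceneView.scene.rootNode.addChildNode(couch)
    }
}
```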

  • CapTech's recommendation: Consider how your product and service offerings can be enhanced by portraying them in an augmented-reality world. Start building a library of 3D models of your products so that when the opportunity presents itself you'll have assets at hand. Keep in mind that providing 3D models of products will take significant content curation.

Siri

Apple focused on Siri's intelligent-assistant functionality and gave little attention to its voice interface.

With Amazon Alexa and Google Assistant, developers can create their own interaction domains and intents, which is allowing those platforms to grow at the speed of the internet. In contrast, Siri's voice response is, in my opinion, being crippled by Apple's inability or unwillingness to let developers build more openly on the Siri platform.

The Apple HomePod appears promising as a speaker, but without a compelling and rich set of apps it will lag behind the Echo and Google Home.

Apple has improved the on-device natural language processing available to apps: an app can now collect audio from the microphone, transcribe it, and derive actionable meaning from what the user says.
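
As a sketch of what this enables, the following uses the unit-based NSLinguisticTagger API introduced in iOS 11 to pull named entities out of a transcript. The transcript string is a hypothetical stand-in for text obtained via the Speech framework.

```swift
import Foundation

// A minimal sketch of on-device natural language processing.
let transcript = "Remind me to call Acme Corporation when I land in Richmond."

let tagger = NSLinguisticTagger(tagSchemes: [.nameType], options: 0)
tagger.string = transcript

let range = NSRange(location: 0, length: transcript.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation, .joinNames]

// Enumerate named entities (people, places, organizations) in the transcript.
tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, _ in
    if let tag = tag, tag != .otherWord, let tokenRange = Range(tokenRange, in: transcript) {
        print("\(transcript[tokenRange]): \(tag.rawValue)") // e.g. "Acme Corporation: OrganizationName"
    }
}
```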

  • CapTech's recommendation: Keep an eye on voice-interface functionality. We don't expect Apple to cede the voice space to Amazon or Google without a fight, and one day it will open this platform. In the meantime, make sure your voice applications are built on an extensible architecture that can easily support new platforms from Apple and others.

App Store Changes

As of iOS 10.3, Apple allows app owners to respond to reviews in the App Store. While this isn't strictly new, enterprises should monitor reviews and stand ready to respond in a way that reflects positively on the brand and provides excellent customer service. Apple recommends that app owners read and respond to reviews every day.

Apple also announced that the App Store will support phased rollouts of app updates. This is a big win for enterprises that want better control over app deployments. Currently, when an app update is released, it is immediately available on any phone that already has the app. If you're deploying new services or a major new feature, a phased rollout will help you manage the risks of deployment. Details of how phased rollout will work are still sparse, so don't dispose of your feature-switch logic or A/B testing frameworks just yet; they may still be needed.
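
To make that last point concrete, here is a minimal sketch of a server-driven feature switch. The endpoint and flag names are hypothetical; the point is that your server, not the App Store release schedule, decides whether a risky feature is active.

```swift
import Foundation

// A minimal sketch of a remotely controlled feature flag.
struct FeatureFlags: Decodable {
    let newCheckoutEnabled: Bool
}

func fetchFlags(from url: URL, completion: @escaping (FeatureFlags) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        // Fall back to a safe default if the fetch or decode fails.
        let flags = data.flatMap { try? JSONDecoder().decode(FeatureFlags.self, from: $0) }
        completion(flags ?? FeatureFlags(newCheckoutEnabled: false))
    }.resume()
}
```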

  • CapTech's recommendation: Develop a strategy for responding to app reviews that is consistent with your brand.

Development Tool Changes

The Swift language continues to advance unabated with the release of Swift 4. Expect developers to need some time to migrate, although the move from Swift 3 to Swift 4 is relatively easy. If your apps are still on Swift 2, the migration will take longer because the tools for going from Swift 2 to 3 are not as robust as those for going from 3 to 4.

Xcode now supports refactoring of Swift, Objective-C, and C++ code. The long absence of this feature has undoubtedly contributed to a build-up of technical debt in the form of poorly structured code; developers can now refactor that debt away more efficiently.

Xcode provides enhanced debugging, testing, and analysis features to help developers produce more stable apps. In addition, Xcode now allows you to run multiple simulators at the same time, which will help reduce bottlenecks in CI/CD systems; previously, only one simulator, and therefore one test run, could execute at a time. At this point, there's no reason why the majority of functionality in an app cannot be validated with automated testing.
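
For example, a basic UI test like the following can validate an entire login flow. The flow and accessibility identifiers are hypothetical; waitForExistence(timeout:) is one of the testing additions new in Xcode 9.

```swift
import XCTest

// A minimal UI-test sketch for an app's (hypothetical) login flow.
class LoginUITests: XCTestCase {
    func testLoginShowsWelcomeScreen() {
        let app = XCUIApplication()
        app.launch()

        app.textFields["username"].tap()
        app.textFields["username"].typeText("demo")
        app.secureTextFields["password"].tap()
        app.secureTextFields["password"].typeText("secret")
        app.buttons["Sign In"].tap()

        // Wait for the post-login screen rather than asserting immediately.
        XCTAssertTrue(app.staticTexts["Welcome"].waitForExistence(timeout: 5))
    }
}
```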

Apple continues to improve its continuous-integration (CI) capabilities, but these are still no match for a general-purpose CI/CD tool such as Jenkins or Travis CI.

  • CapTech's recommendation: Give developers time to use the new tools in Xcode to improve overall app stability and user experience. At CapTech, we recently used one of these tools on a client app and found three significant defects in third-party libraries that were causing user-experience problems.

Looking Ahead

Nothing presented at WWDC17 was revolutionary, but Apple's mobile capabilities continue to evolve rapidly, allowing apps to do more and more things on the device. The emphasis on the device versus the server underscores Apple's drive to enable stronger user privacy.

While it's difficult to predict what WWDC18 will bring, my hope is that Apple will open up Siri for developers. That would immediately turn millions upon millions of iPhones into voice-enabled devices, leapfrogging both Google and Amazon in the voice-interface field and truly giving everyone something to talk about.