December 25, 2024

Apple’s practical approach to A.I.: No bragging, just features


  • On Monday at WWDC, Apple subtly touted just how much work it’s doing in state-of-the-art artificial intelligence and machine learning.
  • Unlike most tech companies doing AI, Apple performs sophisticated processing on its devices instead of relying on the cloud.
  • Because of its product emphasis, Apple usually doesn't talk about AI models and technology; it just shows new features that are quietly enabled by AI behind the scenes.
    Apple Park is seen ahead of the Worldwide Developers Conference (WWDC) in Cupertino, California, on June 5, 2023.

    On Monday during Apple's annual developers conference, WWDC, the company subtly touted just how much work it's doing in state-of-the-art artificial intelligence and machine learning.

    As Microsoft, Google, and startups like OpenAI embraced cutting-edge machine learning technologies like chatbots and generative AI, Apple appeared to be sitting on the sidelines.

    But on Monday, Apple announced several significant AI features, including improved iPhone autocorrect powered by a transformer language model, the same kind of technology that underpins ChatGPT. The feature will even learn from how the user texts and types to get better, Apple said.

    “In those moments where you just want to type a ducking word, well, the keyboard will learn it, too,” said Craig Federighi, Apple’s chief of software, joking about autocorrect’s tendency to use the nonsensical word “ducking” to replace a common expletive.
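    Apple didn't share details about its model, but the core idea behind transformer-based autocorrect can be sketched in a few lines: score each candidate word by how probable the surrounding sentence becomes under a language model, then keep the winner. The sketch below illustrates that idea only; it is not Apple's implementation. It borrows an off-the-shelf GPT-2 from Hugging Face as a stand-in, the candidate list and scoring are assumptions, and the per-user learning Apple described, which would amount to adapting the model to the user's own typing history, is omitted.

        # Rank autocorrect candidates by how likely the full sentence
        # becomes under a small transformer language model (GPT-2 here,
        # purely as a stand-in for an on-device model).
        import torch
        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        def sentence_score(text: str) -> float:
            """Mean log-likelihood of `text`; higher means more plausible."""
            ids = tokenizer(text, return_tensors="pt").input_ids
            with torch.no_grad():
                # Passing labels=ids makes the model return cross-entropy
                # loss, i.e. the negative mean log-likelihood of the text.
                loss = model(ids, labels=ids).loss
            return -loss.item()

        def autocorrect(context: str, typed: str, candidates: list[str]) -> str:
            """Keep the typed word or replace it, whichever fits the context best."""
            options = [typed] + candidates
            return max(options, key=lambda w: sentence_score(f"{context} {w}"))

        # Context makes the intended word beat both the typo and lookalikes.
        print(autocorrect("What a great day, the weather is so", "sunnny",
                          ["sunny", "sonny", "funny"]))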

    The biggest news on Monday was Apple's fancy new augmented reality headset, Vision Pro, but the company nonetheless showed how it's working on and paying attention to developments in state-of-the-art machine learning and artificial intelligence. OpenAI's ChatGPT may have reached more than 100 million users within two months of its launch last year, but Apple is now using the same underlying technology to improve a feature that more than 1 billion iPhone owners use every day.

    Unlike its rivals, which are building bigger models backed by server farms, supercomputers, and terabytes of data, Apple wants its AI models to run on the devices themselves. The new autocorrect feature is particularly impressive because it runs on the iPhone, while models like ChatGPT require hundreds of expensive GPUs working in tandem.
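    Some back-of-the-envelope arithmetic shows why that split matters. The parameter counts below are illustrative assumptions, not figures Apple or OpenAI have confirmed, but the scaling is the point: weight memory is simply the number of parameters times the bytes used per parameter.

        # Rough memory math for where a model can physically run.
        def weights_gb(params_billions: float, bytes_per_param: int) -> float:
            """Gigabytes needed just to hold the model weights."""
            return params_billions * 1e9 * bytes_per_param / 1e9

        # A GPT-3-class model (~175B parameters, an assumption) at 16-bit
        # precision needs roughly 350 GB for weights alone, hence the
        # racks of GPUs working in tandem.
        print(f"175B @ fp16: {weights_gb(175, 2):,.0f} GB")

        # A small on-device model (say 0.3B parameters) quantized to
        # 8 bits fits in about 0.3 GB, within an iPhone's memory budget.
        print(f"0.3B @ int8: {weights_gb(0.3, 1):.1f} GB")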

    On-device AI sidesteps many of the data privacy issues that cloud-based AI faces. When the model runs on the phone itself, Apple needs to collect less data to run it.

    It also ties in closely with Apple’s control of its hardware stack, down to its own silicon chips. Apple packs new AI circuits and GPUs into its chips every year, and its control of the overall architecture allows it to adapt to changes and new techniques.

    Apple’s practical approach to AI

    Apple doesn’t like to talk about “artificial intelligence” — it prefers the more academic phrase “machine learning” or simply talks about the feature the technology enables.

    Some of the other leading AI firms have leaders with academic backgrounds. That has led to an emphasis on showing your work: explaining how a model might improve in the future and documenting it so other people can study it and build on it.

    Apple is a product company, and it has been intensely secretive for decades. Instead of talking about the specific AI model, the training data, or how it might improve in the future, Apple simply mentions the feature and notes that there is cool technology working behind the scenes.

    One example of that on Monday was an improvement to AirPods Pro that automatically turns off noise cancelling when the user engages in conversation. Apple didn’t frame it as a machine learning feature, but it’s a difficult problem to solve, and the solution is based on AI models.
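    Apple hasn't published how conversation detection works, but the shape of the problem can be sketched: classify short audio frames for speech, then debounce the decision so a cough or a passing voice doesn't toggle the setting back and forth. In the sketch below, the frame classifier is a hypothetical stand-in faked with canned data so the loop runs on its own; a real system would put a learned on-device model there.

        # Conversation-aware noise control: per-frame voice detection
        # plus a sliding-window debounce so the state doesn't flap.
        from collections import deque

        def frame_contains_speech(frame: int) -> bool:
            # Hypothetical classifier output: speech during frames 30-79.
            return 30 <= frame < 80

        def run(num_frames: int = 120, window: int = 10, votes_needed: int = 7) -> None:
            anc_on = True
            recent = deque(maxlen=window)  # last `window` classifier votes
            for t in range(num_frames):
                recent.append(frame_contains_speech(t))
                talking = sum(recent) >= votes_needed  # require sustained speech
                if talking and anc_on:
                    anc_on = False
                    print(f"t={t}: conversation detected, noise cancelling off")
                elif not talking and not anc_on:
                    anc_on = True
                    print(f"t={t}: conversation over, noise cancelling back on")

        run()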

    In one of the most audacious features announced on Monday, Apple's new Digital Persona feature makes a 3D scan of the user's face and body, then uses it to recreate their likeness virtually on video calls they join while wearing the Vision Pro headset.

    Apple also mentioned several other new features that used the company’s skill in neural networks, such as the ability to identify fields to fill out in a PDF.

    One of the biggest cheers of the afternoon in Cupertino was for a machine learning feature that enables the iPhone to identify a user's specific pet, as opposed to other cats or dogs, and gather all of that pet's photos in a single folder.
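    One common way to build that kind of feature is to compare image embeddings from a vision model against a few reference photos of the specific animal, then collect everything above a similarity threshold. The sketch below fakes the embeddings with toy random vectors, and the 0.9 threshold is an arbitrary assumption; Apple hasn't said how its version works.

        # Group photos of one specific pet by embedding similarity.
        import numpy as np

        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        rng = np.random.default_rng(0)
        # Toy stand-ins: a real system would get these vectors from an
        # on-device neural network run over each photo.
        my_pet_refs = [rng.normal(1.0, 0.1, 8) for _ in range(3)]
        library = {
            "IMG_001": rng.normal(1.0, 0.1, 8),   # same pet: similar embedding
            "IMG_002": rng.normal(-1.0, 0.1, 8),  # different animal
        }

        pet_folder = [name for name, emb in library.items()
                      if max(cosine(emb, ref) for ref in my_pet_refs) > 0.9]
        print(pet_folder)  # -> ['IMG_001']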
