Artificial intelligence has become one of large companies' best allies. The technology expands and evolves rapidly and is embedded in every nook and cranny of the systems around us. For many people, artificial intelligence simply means virtual assistants like Siri, but that view falls far short of reality. John Giannandrea, Apple's head of artificial intelligence, says it is almost impossible to find a single place at Apple that doesn't use machine learning. In an interview, Giannandrea reflects on the importance of models and the power Apple draws from AI.
Giannandrea: ‘A better model does not imply big data’
The Ars Technica interview featured John Giannandrea, Apple's head of artificial intelligence. He joined Apple in 2018, leaving behind a long career at Google, where he worked on the artificial intelligence, research, and search teams. He is also a co-founder of two companies: Tellme Networks and Metaweb Technologies. He currently serves as Vice President of Machine Learning and Artificial Intelligence Strategy.
We made the Pencil, we made the iPad, and we created the software for both. These are unique opportunities to do really good work. What are we doing a really good job at? Letting someone take notes and be productive with their creative thoughts on digital paper. What interests me is seeing these experiences used at scale in the world.
One of the theses Giannandrea defends throughout the interview is the importance of offering experiences to users. He stressed that Apple builds its own software, its own hardware, and the connections between them, which avoids third-party interference and gives users the richest experience possible. He contrasts this with Google, his previous company, which does not offer consumers a single product built end-to-end and used at that scale.
Machine learning is found all over Apple
Giannandrea was also asked about the use of machine learning at Apple today. Software and hardware features that rely on AI are announced at every presentation, yet machine learning is rarely given the prominence it really has. Every corner of iOS is packed with artificial intelligence: from Siri to the Photos app, from taking pictures to using the Apple Pencil:
Machine learning is used to help iPad software distinguish between a user accidentally pressing their palm against the screen while drawing with the Apple Pencil, and intentional pressure intended to provide input.
Apple’s AI chief says ‘there are fewer and fewer places in iOS where we don’t use machine learning’. And that is the reality. Given that Apple is betting heavily on ARKit and other frameworks that lean on artificial intelligence, it is trying to bring the idea of machine learning integration to every developer.
Finally, the interview turned to augmented reality as another area where artificial intelligence is put to work:
Machine learning is widely used in augmented reality. The most difficult problem is what is called SLAM, that is, simultaneous localization and mapping. So it's trying to figure out, if you have an iPad with a lidar scanner and it's moving around, what does it see? And building up a 3D model of what it is actually seeing.
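To make the "mapping" half of SLAM a little more concrete, here is a deliberately simplified toy sketch (not Apple's implementation, and reduced to 2D): given the device's estimated pose, points observed in the device's local frame are transformed into one consistent world frame, so that the same landmark seen from two different poses lands on the same world coordinate. All names and numbers are made up for illustration.

```python
import math

def local_to_world(pose, point):
    """Transform a 2D point from the device frame into the world frame.

    pose:  (x, y, theta) -- estimated device position and heading
    point: (px, py)      -- the point as seen from the device
    """
    x, y, theta = pose
    px, py = point
    # Standard 2D rotation by the device heading, then translation
    # by the device position.
    wx = x + px * math.cos(theta) - py * math.sin(theta)
    wy = y + px * math.sin(theta) + py * math.cos(theta)
    return (wx, wy)

# The device observes the SAME landmark from two different poses.
pose_a = (0.0, 0.0, 0.0)           # at the origin, facing +x
pose_b = (1.0, 0.0, math.pi / 2)   # moved 1 m along x, turned 90 degrees

landmark_from_a = (1.0, 1.0)       # seen 1 m ahead and 1 m to the left
landmark_from_b = (1.0, 0.0)       # seen 1 m straight ahead

# Mapping both observations through their poses yields one consistent
# world coordinate for the landmark: approximately (1.0, 1.0).
print(local_to_world(pose_a, landmark_from_a))
print(local_to_world(pose_b, landmark_from_b))
```

Real SLAM, of course, must also *estimate* the poses while it maps (the "simultaneous localization" half), fusing camera, lidar, and motion-sensor data; this sketch only shows why a consistent pose estimate lets observations accumulate into a single 3D model.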