Reuters reports that Apple is significantly increasing the number of artificial intelligence specialists it employs in a bid to make Siri smarter, but that the company’s commitment to user privacy imposes constraints.

Machine learning relies heavily on large-scale data-crunching to figure out what users are likely to want to know. But while Google analyses the data of Android users en masse, Apple’s approach to privacy means that far less data is sent from the iPhone to its servers, making it more challenging to increase Siri’s intelligence … 

Craig Federighi, Apple’s SVP of software engineering, said of iOS 9 that it was “adding intelligence throughout the user experience in a way that enhances how you use your device but without compromising your privacy.”

While Apple does impose stricter controls on user data than most other companies, Siri’s servers still retain data for up to two years – albeit in anonymised form. A former employee said even that is unusually long for Apple, with most of its other services retaining data for far less time – as little as 15 minutes, in the case of Apple Maps.
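
To make the idea concrete, here’s a minimal sketch – written in Swift purely as an illustration, with no knowledge of Apple’s actual systems or schemas – of what “anonymised retention with a time limit” could look like: requests are stored under a random token rather than a user identity, and purged once a retention window passes. Every name here (AnonymisedQuery, QueryStore and so on) is hypothetical.

```swift
import Foundation

// Illustrative only: not Apple's code. Queries are kept for analysis under a
// random token rather than a user identity, and discarded once a retention
// window expires.
struct AnonymisedQuery {
    let deviceToken: UUID   // random token in place of any account identifier
    let transcript: String  // the request text, stripped of personal details
    let receivedAt: Date
}

struct QueryStore {
    let retention: TimeInterval   // how long records are kept before deletion
    var records: [AnonymisedQuery] = []

    init(retention: TimeInterval) {
        self.retention = retention
    }

    mutating func ingest(_ transcript: String) {
        // A fresh UUID stands in for identity, so records can't be tied back to a user.
        records.append(AnonymisedQuery(deviceToken: UUID(),
                                       transcript: transcript,
                                       receivedAt: Date()))
    }

    mutating func purgeExpired(now: Date = Date()) {
        // Drop anything older than the retention window.
        records.removeAll { now.timeIntervalSince($0.receivedAt) > retention }
    }
}

// Example: a store with a two-year window (the figure cited for Siri),
// versus one measured in minutes (the figure cited for Maps).
var siriStore = QueryStore(retention: 60 * 60 * 24 * 365 * 2)
siriStore.ingest("what's the weather tomorrow")
siriStore.purgeExpired()
```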

The ramp-up in Apple’s AI hiring may not be as dramatic as the Reuters piece makes it sound.

One former Apple employee in the area, who asked not to be named to protect professional relationships, estimated the number of machine learning experts had tripled or quadrupled in the past few years.

With Siri first launched in 2011, a tripling or quadrupling of what is likely to be a relatively small number of specialists over the course of several years sounds pretty unremarkable to me.

Though some AI graduates reportedly shy away from Apple because they want as much data as possible to play with, John Duchi, an assistant professor at Stanford University, said that others are likely to relish the challenge of combining intelligence with a high degree of privacy. “New flavors of problems are exciting,” he said.

Not everyone is as happy with Apple’s strong commitment to privacy as its customers. A former iAd exec complained that the company’s advertising efforts were hampered by its privacy policy, and lawyers have even suggested that the company’s use of strong encryption could leave it liable to terrorism-related lawsuits.