Fitness tracker use is exploding in the US, especially among affluent women

A recent Pew survey shows that 21% of US adults regularly wear a smartwatch or fitness tracker. Over half of them think it is acceptable for device makers to share user data with medical researchers.

According to survey results shared by the Pew Research Center, an American think-tank, smartwatch and fitness tracker adoption may have crossed the chasm from early adopters to the early majority: 21% of the surveyed panellists already regularly use a smartwatch or a specialised tracker to monitor their fitness.

Such a trajectory is in line with recent market data suggesting that total wearables shipment volume has nearly doubled from a year ago (though what counts as a wearable may be contested), with both wristbands and smartwatches growing by nearly 50%.

Adoption rates differ markedly between social groups. Penetration rises to nearly a third (31%) among those with a household income above $75,000, while among those with a household income below $30,000, only 12% regularly wear such a device. Beyond income, women, Hispanic adults, and respondents with at least a college degree are also more likely to wear such devices than men, other major ethnic groups, and those without a college degree.

Another survey question asks respondents whether it is acceptable for the maker of a fitness tracking app to share “their users’ data with medical researchers seeking to better understand the link between exercise and heart disease”. The responses are divided: 41% of all respondents said yes, 35% said no, and 22% were unsure. However, the share who consider such sharing acceptable rises to 53% among respondents who already regularly use such devices, compared with 38% among non-users.

Given the lack of a GDPR equivalent in the US, it is not much of a surprise that there is neither a consensus among users nor a standard industry practice on user data sharing. “Recently, some concerns have been raised over who can and should have access to this health data. Military analysts have also expressed concern about how third parties can use the data to find out where there is an American military presence,” Pew said in its press release.

Meanwhile, how useful the data tracked by these devices can be for medical research is itself debatable. For example, even the best of the devices, the Apple Watch, does not qualify as a medical device, despite being “FDA certified”.

The survey was conducted by Pew from 3 to 17 June 2019, with 4,272 qualified panellists responding.

You don’t need to understand AI to trust it, says German politician

The German government minister responsible for artificial intelligence has spoken about the European vision for AI, in particular how to grow it and gain the trust of non-expert users.

Prof. Dr. Ina Schieferdecker, a junior minister in Germany’s Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) whose portfolio includes artificial intelligence, recently attended an AI Camp in Berlin (KI-Camp in German, for “künstliche Intelligenz”). She was interviewed there by DW (Deutsche Welle, Germany’s answer to the BBC World Service) on how the German government and the European Union can help alleviate concerns about AI among ordinary users of the internet and information technologies.

Addressing the perception of AI as a “black box” and the demand for algorithms to be made transparent, Schieferdecker said she saw it differently. “I don’t believe that everyone has to understand AI. Not everyone can understand it,” she said. “Technology should be trustworthy. But we don’t all understand how planes work or how giant tankers float on water. So, we have learn (sic) to trust digital technology, too.”

Admittedly, not all Europeans share this way of looking at AI and non-expert users. Finland, the current holder of the rotating European presidency, believes that as many people as possible should understand what AI is about, not only to alleviate concerns but also to unleash its power more broadly. It has therefore decided to give 1% of its population AI training.

Schieferdecker also called for a communal approach to developing AI, involving the science, technology, education, and business sectors, and urged AI developers to consider users’ safety concerns and other basic principles from the beginning. This is very much in line with the EU’s “Ethics guidelines for trustworthy AI” published in April this year, whose first guideline states: “AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.” As we subsequently reported, those guidelines are too vague and lack tangible measurements of success.

Schieferdecker was more confident. She believed that when Germany, which has presumably heavily shaped the guidelines, assumes the European presidency in the second half of 2020, it “will try to pool Europe’s strengths in an effort to transform the rules on paper into something real and useable for the people.”

The interview also touched upon how user data, such as shopping or browsing records, are used by AI in opaque ways and the privacy concerns this may raise. Schieferdecker believed GDPR has “made a difference”, while admitting there are “issues here and there, but it’s being further developed.” She also claimed the government is working to achieve data sovereignty in some form and to “offer people alternatives to your Amazons, Googles, Instagrams”, without disclosing further details.

The camp took place on 5 December in Berlin as part of the Science Year 2019 programme (Wissenschaftsjahr 2019) and was co-organised by the BMBF and the German Informatics Society (Gesellschaft für Informatik, GI), a professional association. The interview was subjected to a vetting process by the BMBF before it could be published. As DW put it, “the text has been redacted and altered by the BMBF in addition to DW’s normal editorial guidelines. As such, the text does not entirely reflect the audio of the interview as recorded”.