Recently, Google announced that it is buying Fitbit, the fitness tracker company that, not long ago, led the fitness wearables market. The acquisition gives Google a foothold in the large, growing and lucrative smartwatch and fitness band arena, which had been missing from its product portfolio. Most importantly, it gives Google access to data on an area of consumer behaviour it previously lacked.
As the undisputed market leader in search engines, Google already knew what we search for online. Through online trackers, Google knew which websites we visit and the paths we follow to acquire products and services. By scanning e-mails exchanged via the Gmail platform, Google already had access to our electronic conversations, online subscriptions, electronic invoices and other documentary evidence of our digital lives. Furthermore, with its mapping service, Google was able to track where we travel, and what the traffic conditions are, in real time. Then, with its smart home products (e.g., Nest thermostats and Google Home speakers), Google gained access to the home, and could start learning how we live our private lives: the temperature we keep our homes at, how much energy we use, a family's schedule, and what we talk about at home.
Now, with the move into fitness tracking devices, Google can gain insight into the human body itself. It can track how much we move, when we move and how we move. It can find out how much we sleep, and whether we slept through the night or, on the contrary, had interrupted sleep. Google will be able to track our heart rate and other health indicators, and even figure out that someone is pregnant long before they are aware of it themselves.
Will customers care?
Google’s quest for more and more data is part of the expansion of what social psychologist Shoshana Zuboff describes as “Surveillance Capitalism”. Personal and contextual data are routinely and systematically collected, often without the subjects being aware of the extent and range of such data collection.
Data collected at the granular level of the individual, specific location, time of day, and what happened immediately before and after, offers valuable insight into the minutiae of everyday life. This insight, in turn, can be used by firms to optimise production, rationalise the use of limited resources, increase sales, maximise profits, or develop new value propositions and business models.
Not all firms are equally inclined to collect data about their customers, or to use personal data for the same purposes. Customer surveillance is closely linked to a company's business strategy and privacy orientation. For instance, Apple is known to collect data only for product improvement; Amazon, to maximise product sales within its own platform; and Google, to sell and deliver hyper-targeted advertising by third parties. Apple is therefore likely to follow a less aggressive data collection approach, and to protect customer data more fiercely, than Amazon or Google.
As my own research shows, when customers buy fitness wearables (such as movement tracking bands and smartwatches), they accept the collection and tracking of their health and fitness data in exchange for a better product. Customers choose brands or funding schemes (e.g., self-funded vs. employer-funded) that reflect those preferences. Likewise, when current Fitbit customers chose this brand, they accepted a particular trade-off between privacy and product functionality. With the Google acquisition, that trade-off is under threat. For the time being, Google is promising not to use Fitbit data for Google ads. However, as Google itself was quick to point out regarding its promise not to use Nest data for ad personalisation: "We can never say never".
What’s the problem with personalisation, anyway?
According to the defenders of personalisation, this activity is not only good for customers, it is even a necessity in this age of hyper-connectedness and information overload. They argue that personalisation can result in higher social welfare and aggregate consumer surplus, if customers get exactly what they need or want, instead of getting what McAfee and Brynjolfsson call the 'HiPPO' (the highest-paid person's opinion). Personalisation can also reduce the waste of resources, such as limited advertising budgets, inventories of perishable goods and services, or customers' time. After all, who would want to spend their precious evenings searching through Netflix's catalogue of 4,000 films and 47,000 TV episodes? Isn't it much better if Netflix presents us with a much smaller range of titles, carefully selected based on our past viewing and other information that it might have gathered about us? And what could possibly be wrong with Google realising that we have a cold and recommending that we stock up on medicine, or buy a lovely, warm blanket?
However, it has been shown that customer gains are by no means guaranteed: profiling leads to higher prices for high-valuation customers, thus eroding consumer surplus. As Ghoshal et al. show, when a firm improves its recommender system, customers end up paying higher prices, not just for the products offered by the firm doing the personalisation, but also for those offered by non-personalising firms.
But it's not just prices that are the problem. Personalisation is also a form of control by the firm. The online company controls what information we can see online, which products we are offered, and at what price, effectively reducing our ability to access information and options beyond those programmed into the algorithm. Profiling is also a form of discrimination, as it runs against the principle of equal treatment; and it can result in the exploitation of vulnerable individuals, for instance via exposure to radicalising materials or conspiracy theory videos on YouTube.
A further problem is that consumers have very limited ability to resist or even challenge personalisation. First, they may not even be aware that personalisation is taking place. For instance, some Facebook users may not know that they are missing out on certain job adverts because they are women, and the social network's algorithm makes it cheaper to target men than women.
Second, customers may be aware that profiling is taking place, but be unable to challenge the results, because the algorithms are proprietary and firms do not reveal how they came up with a particular result, or why. An example of this is the recent debacle over Apple's credit card, which offered much higher credit limits to men than to women, even when they had comparable credit scores.
Third, with so much of our daily lives taking place online, there are significant financial and social costs to opting out of these services. For instance, if your local club decides to drop its e-mail newsletter and move to Facebook, you can either give in or stay in the dark about what your club is doing and offering. Moreover, opting out does not guarantee that someone is safe from the consequences of profiling. Just think of the cases of family secrets revealed because a relative decided to take a genetic test.
I think that personalisation, much like attention, is better in limited doses, and always with full consent.