There is a new initiative in the UK, iRights, proposing guiding principles for the design and operation of digital spaces used by children, so that these young users can fully benefit from the opportunities presented by digital technologies. I think these principles are really helpful in moving the debate away from the ‘privacy vs. service’ perspective, and in identifying opportunities to improve the playing field for all consumers, not just children. Here, I propose an organising structure for the iRights and reflect on some of the issues to consider when implementing them.
The five principles
The principles are inspired by the UN Convention on the Rights of the Child. They aim to give ‘children clear rights so that they can flourish in a safe and supportive environment’ (page 1), and are summarised on the iRights website as follows:
I think that these principles refer to three different aspects of interacting with digital technology. So, I propose the following organising structure for the five iRights principles:
As a customer-profiling researcher, I am particularly interested in the last category (principles 1 and 2), and I have been thinking about the issues likely to arise from, or affect, the implementation of these principles:
– First order data vs. metadata, and the combination of datasets – In my own experience, some people worry a great deal about aspects such as whether to post their children’s real names, or photos of their faces, but give little thought to geo-location and other valuable metadata, and even less to how different ‘low value’ datasets may be combined to produce valuable insight (see the sketch after this list);
– Who is collecting what data – If I am completing a quiz, on a social network application, installed on my mobile phone, data are being collected by at least three different parties. In order to fully exercise my rights to know and to remove, I need to know who is collecting what data, what is happening to the data, and who it is being shared with. As this article discusses, by and large, we ‘don’t appreciate the extent of the information we may be handing over and what happens to it‘;
– Varying interpretations and implementations of the principles in different jurisdictions – These principles are likely to be interpreted very differently in different jurisdictions; we only need to look at data protection legislation within the EU for an example. Besides, there is always a gap between designing a law and getting companies to comply with it… and in the digital world, that gap can mean that embarrassing photos, posts and information go viral and cause widespread damage, as discussed here;
– Socio-economic differences – It is one thing to have these rights, but quite another to know how to act on them and, of course, to have the energy, ability and capacity to pursue them;
– Data being shared about the child by third parties – These principles refer only to the data shared by the children themselves. But what about the data shared by third parties? The sharenting phenomenon means that huge amounts of data are being shared about a child and, effectively, building that child’s story on their behalf, without them being able to influence it.
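To make the metadata point concrete, here is a minimal sketch (in Python, using the Pillow library; the file name is hypothetical and used for illustration only) of how the GPS coordinates that many phones quietly embed in photos can be read straight out of an image file. A parent may only think about the child’s face in the picture; the file itself may also record exactly where it was taken.

```python
# Minimal sketch: reading the GPS metadata embedded in a photo's EXIF block.
# Assumes the Pillow library is installed (pip install Pillow);
# "holiday_photo.jpg" is a hypothetical file name used for illustration only.
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD_TAG = 0x8825  # standard EXIF tag pointing to the GPS information block

def extract_gps(path):
    """Return a dict of GPS metadata embedded in a photo, or an empty dict if none."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(GPS_IFD_TAG)
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

print(extract_gps("holiday_photo.jpg"))
# e.g. {'GPSLatitudeRef': 'N', 'GPSLatitude': (...), 'GPSLongitudeRef': 'W', ...}
```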
What am I missing?