Beyond the ‘self-tracking’ craze: Towards a true technological enhancement of human intelligence

This mini-essay forms the basis of my contribution to the ‘self-tracking and the emergence of hybrid beings’ panel at the University of Liverpool’s Being Human Festival on 10 December 2015. The reader will see that I’m not especially enamoured of the framing of the topic.

 

Of the various ‘hybrid beings’ that are emerging between humans and increasingly intelligent machines, the ‘quantified self’ has taken centre stage – but mainly because it captures a pervasive phenomenon on which critical theory is well suited to offer commentary. The idea, already implicit in Foucault, involves the alienation of one’s sense of freedom to a range of possible metric values over which one acquires mastery. Many forests will undoubtedly be felled talking about the character and necessity of the ‘structured agency’ permitted by the quantified self, including the issues of privacy and security, all of which turn on the increasingly fluid boundaries dividing selves from one another. Perhaps the most interesting angle on self-tracking is from the standpoint of political economy, especially if mass technological unemployment – even among professionals – is a long-term consequence of our inhabiting a world of artificial intelligences. In that case, self-tracking may be key to how we earn a livelihood – namely, by selling our consumption patterns, perhaps administered in the form of automatic micropayments.

While some have written positively about the ‘enhanced’ character of human life that may result from self-tracking technologies, the ‘enhancement’ is mainly a matter of amplifying tendencies already registered to some degree by the subject. In contrast, a much more interesting sense in which smart technologies might enhance the human condition is captured by the idea of ‘data surfacing’, which has been proposed as a counterbalance to ‘data mining’, a long-standing technique in knowledge management systems.

In ‘data mining’, algorithms survey large data streams for specific patterns that the end-user has already identified as relevant. The traditional advantage of data-mining systems is that they ‘cut through the noise’ to get at what the end-user wants. This means, however, that the end-user never really sees the full range of data available. After all, the data may contain patterns of potential interest, but because the end-user did not specify them (no doubt out of ignorance), they were never programmed into the data-mining algorithm. Manifesting these latent patterns requires ‘data surfacing’ techniques, which consist of algorithms designed to structure large amounts of data in ways that enable the end-user to search for patterns inductively – that is, with a relatively loose sense of what might be of interest. Whereas data mining reinforces your cognitive biases, data surfacing genuinely extends your cognitive range – it lets you see things you hadn’t seen before.
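To make the contrast concrete, here is a minimal sketch in Python (using numpy and scikit-learn, with invented transaction data and thresholds purely for illustration). The ‘mining’ step returns only the records matching a pattern the analyst named in advance, while the ‘surfacing’ step structures the whole dataset into groupings the analyst can browse inductively.

```python
# Toy illustration of the contrast drawn above (hypothetical data and thresholds).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row is a (purchase amount, hour of day) pair for one transaction.
transactions = np.column_stack([
    rng.gamma(shape=2.0, scale=30.0, size=300),   # amounts
    rng.integers(0, 24, size=300),                # hours
])

# 'Data mining': return only records matching a pattern specified in advance
# (here, large late-night purchases). Nothing outside the query is ever seen.
def mine(data, min_amount=100.0, after_hour=22):
    mask = (data[:, 0] >= min_amount) & (data[:, 1] >= after_hour)
    return data[mask]

# 'Data surfacing' (in the sense characterised above): structure *all* the
# records into groupings the analyst can inspect, without naming a pattern.
def surface(data, n_groups=4):
    model = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(data)
    return {label: data[model.labels_ == label] for label in range(n_groups)}

flagged = mine(transactions)
groups = surface(transactions)
print(f"mining returned {len(flagged)} pre-specified matches")
for label, members in groups.items():
    print(f"surfaced group {label}: {len(members)} records, "
          f"mean amount {members[:, 0].mean():.1f}, mean hour {members[:, 1].mean():.1f}")
```

The particular algorithm is beside the point: any method that lays out the full dataset for inductive inspection, rather than answering a predefined query, would count as ‘surfacing’ in this sense.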

A characteristic feature of data-surfacing techniques is that they require significant human input throughout the life of the algorithm in order for the output to be represented in a way that enables end-users to gain maximum advantage. Thus, the great champion of data surfacing, the Silicon Valley data-analytics firm Palantir, prides itself on selling not only data-surfacing platforms but also the services of its own engineering staff, who are installed on site to help end-users discover things about the data at their disposal that they would perhaps never have thought of looking for before. While these services are compelling in the concrete context of anticipating the next cyber-attack, they underscore just as concretely how human intelligence may be literally expanded – and not merely replaced or disciplined – by machine intelligence.


Categories: Digital Sociology, Outflanking Platitudes, Rethinking The World, Social Theory
