In this RSA talk the pioneering online campaigner Eli Pariser discusses a crucial and, as yet, under-discussed danger facing the social media web: the expansion of filtering into every aspect of our online activity. Sites collect data on usage patterns, particularly our reactions to being presented with content and the actions (e.g. ‘like’, ‘share’, ‘+1’) we take in response to what we see.
Without collecting such data, any possibility of a semantic web is immediately foreclosed, because human meaning has to enter the processing system somewhere. Yet the sheer opacity with which these technologies are being developed, let alone how they are being implemented across the web, demands urgent political debate.
However it would be easy to be alarmist about this and throw the baby out with the bath water. The problem is not filtering per se but rather the private and opaque nature of this filtering. In so far as the development and roll-out of the technology is reliant on the corporate structures of capitalism, it’s difficult to avoid this entirely. But the demand shouldn’t be for liberation from the filter bubble these corporations have placed us in – it should be for them to make their technology available to us so that we can design and implement our own filtering bubbles, as part of our ongoing day-to-day interactions with the internet, driven by our awareness of what we do and do not want to see. Certainly the computational systems they’ve developed allow us to see connections of which we might not be consciously aware: I’ve come across rafts of fascinating reading through following Amazon’s ‘other customers who bought this also bought’ system. But this should be an opt-in system, rather than something imposed upon us.
It could be argued that there are political problems inherent in this as well – as Cass Sunstein plausibly argues in his Republic.com 2.0 – given the possibility that already politically divided societies are likely to become ever more polarized when individuals self-select all the content they encounter. However, firstly, filtering is necessary if we’re going to have any possibility of engaging productively and creatively with modern digital technology, simply because of the exponential trend of content growth that goes hand-in-hand with the mass uptake of social media tools. Secondly, the problems attached to it are contingent and emergent (i.e. they result from people doing this filtering badly in practice, often for reasons not of their own making) rather than being intrinsic to filtering itself. Thirdly, the sheer cultural value of web 2.0 demands new proficiencies on the part of its users: we can either retreat from information overload (see the growing trend for going offline), protectively lock ourselves into virtual bubbles of our own making, stay passively within the corporate infosphere* OR we can embrace the challenges that come from this revolution in human communication, using the tools available to us in order to dialogically develop a dynamic filtering orientation as we negotiate an ongoing path through human culture in the 21st century.
*Which I think is the main concern arising from the filter bubble as it presently stands.