Key Dialectics in Cloud Computing

Delivered to IFIP Workshop: “The challenges of virtuality and the cloud: the implications of social accountability and professional ethics”, Middlesex University, London, Feb. 2015

Since the central topic of this workshop is cloud computing, I should perhaps commence with a definition of what I understand “cloud computing” to refer to. The US National Institute of Standards and Technology Special Publication 800-145 defines the essential characteristics of cloud computing as being

  • The ability to provide services whenever desired without human intervention.
  • Being available to a wide range of client devices via networking technology.
  • The “virtualisation” of computing resources, such that digital operations are not linked to specific servers or locations.
  • Scalability – the capability of the systems to scale up or down in response to changes in demand. This is, of course, a necessary corollary of virtualisation.
  • Often, but not necessarily, Software as a Service. (Mell and Grance 2011)

Clearly this definition applies to many, if not most, internet systems and digital services, not merely to the virtualisation of server functions previously found in the traditional client-server network. Under this view, Facebook and Google search are both cloud services. I think this is both valid and important – confining discussion of cloud computing to data processing or file storage functions limits us to a few contingent uses of a wider system and obscures the essential factors we need to consider.

My short presentation is an attempt to identify some themes regarding the cloud by organising them into three dialectical axes:

  • The nature of the relationship between personal privacy and service provision.
  • The degree to which people who build or operate cloud-based services are ethically responsible for the actions or effects of those services.
  • The nature of the marketplace for those services.

AXIS 1

One axis is the necessity versus the contingency of reductions to privacy under new digital services.

That is to say, there is one body of opinion which holds that the erosion of personal privacy is a necessary and unavoidable consequence of, or precondition for, the delivery of digital services. This position tends to be a reflexive assumption within the development community, rarely stated formally, and it is a minority view in the literature, so detailed arguments as to why privacy must be reduced to enable cloud services are scarce. However, Professor Bergkamp’s paper “The Privacy Fallacy” (Bergkamp 2002), published in Computer Law & Security Report, marshals the arguments of this camp.

Opposing this is the view, to which I subscribe, that there is no necessary and unavoidable relationship between privacy consequences and functionality. One does not have to reduce privacy in order to extend services. Rather, it is always a question of choice, either in how the system is constructed or in the type of business model under which it operates, and there are always alternatives. Now, it may be argued that some of those alternatives are more expensive than the privacy-reducing models, or that alternatives are more technically challenging, and that may very well be the case. However, that, in and of itself, is not an argument for the necessity of privacy-reducing models, but rather an argument underpinning a particular business model or software approach.

There is a great deal of technical development under the paradigm of the first position; that is to say, those who build systems generally work on the basis that privacy is exchanged for digital services. However, there is also a growing body of work seeking to develop alternatives, whether in terms of governance, business model or code (such as the Privacy by Design movement). Here I recommend Langheinrich’s “Principles of Privacy-Aware Ubiquitous Systems” from the Third International Conference on Ubiquitous Computing (Langheinrich 2001), and I will return to this theme in more detail at the end.
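To make concrete the claim that alternatives exist at the level of code, here is a minimal, hypothetical sketch in Python (the function and field names are my own invention, not drawn from any system discussed here). A service that needs usage analytics can pseudonymise user identifiers and network addresses before anything is written to its logs; the functionality is still delivered, but the raw identifying data is never retained.

    import hashlib
    import os

    # A per-deployment secret salt; without one, the small space of user IDs
    # and IP addresses could simply be hashed again and matched.
    SALT = os.environ.get("LOG_SALT", "change-me")

    def pseudonymise(value):
        """Replace an identifier with a salted one-way hash."""
        return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

    def log_request(user_id, ip_address, path):
        """Build a log record that still supports usage analytics,
        but never stores the raw user ID or IP address."""
        return {
            "user": pseudonymise(user_id),       # stable pseudonym, not the real ID
            "network": pseudonymise(ip_address),
            "path": path,
        }

    print(log_request("alice@example.org", "192.0.2.17", "/search?q=privacy"))

The point is not that this particular technique is sufficient, only that the trade-off between privacy and service is a design decision, not a law of nature.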

AXIS 2

The next axis is concerned with the degree to which people who design, build or operate cloud services are ethically responsible for the consequences of the actions of those systems.

The competing positions are that, on the one hand, programmers and operators are not ethically responsible for the actions of autonomous systems and, on the other, that they are. It’s difficult to argue that the person holding a hammer is not ethically responsible for the consequences of whatever happens when the hammer hits something. However, with large, industrially produced, complex automated systems, especially those that include some form of weak AI functionality, arguments emerge in favour of the position that those who build the systems are not ethically responsible for the actions of those systems. This debate will no doubt sharpen as these systems become more powerful, more intelligent and more autonomous. The issue is discussed most frequently with regard to autonomous military systems, whose lethality makes the question of ethical responsibility both stark and urgent. However, the question is just as pertinent for any form of autonomous system, even simpler cloud-based decision systems such as those Google uses to order search results or those which undertake user profiling to drive targeted advertising.

Andreas Matthias’ paper “The Responsibility Gap”, published in Ethics and Information Technology in 2004 (Matthias 2004), offers a fairly straightforward account of the philosophical logic behind the position that programmers are not responsible for the actions of their autonomous systems, while Robert Sparrow’s “Killer Robots” (Sparrow 2007) presents the same view via an examination of the practicalities of creating and deploying autonomous systems.

In essence, I believe the “programmer is not responsible” position is founded on the proposition that one can only be responsible for effects which one can foresee. Under this view, as systems become more autonomous, one’s ethical responsibility for a system’s actions diminishes. Once a system begins to operate in a manner which could not be predicted by the programmer, the programmer ceases to be responsible for its actions. Positions within this camp tend to fall into either “the system is now a moral agent in its own right, therefore responsibility cannot be passed further up the causal chain to the programmer” or “there is no moral responsibility at all.” As I said, these arguments tend to treat moral responsibility as dependent on control and foreseeability. Thus one cannot be responsible for actions one has no control over, nor for consequences which are unforeseeable.

I believe this position mistakes the nature of the relationship between the creator of the system, the system, and the consequences of the system’s actions. The question of responsibility for individual unforeseeable acts by autonomous systems is, I believe, a red herring. Irrespective of the ethical connection between the creator and the system’s actions, there is no doubt the programmer is ethically responsible for the existence of the system itself, and for the fact that the system possesses autonomy. Thus, while the programmer may not be able to anticipate the details of any individual action, they are responsible for the fact that the system has the capacity to commit unforeseeable acts in general. In this sense, the programmer has knowingly and freely chosen to create something which will act in ways the programmer cannot foresee in detail. Under such a view the programmer becomes responsible for everything the system does. Obviously one needs more detail to justify this position, and obviously I don’t have the time to do so here. I would recommend “Software Agents, Anticipatory Ethics, and Accountability” by Deborah Johnson (Johnson 2011). This paper uses the concept of “anticipatory ethics” to develop a moral ontology which links programmers in detail to the consequences of their systems’ actions.

For balance I should also mention Dahiyat’s “Intelligent Agents and Liability”, which summarises both sides of the debate nicely and then positions itself firmly in the middle (Dahiyat 2010).

AXIS 3

The next axis concerns the (for want of a better word, and it really needs a better word) marketplace of cloud services.

Here I think the dominating dialectic is that of hierarchical versus flat. We see this in architecture, in standards and in organisational models. I think it is the pivot on which all our concerns rest, as I hope to show.

Firstly, let me recommend a wonderful short speech by Eben Moglen called “Freedom in the Cloud”, delivered to the Internet Society in 2010 (Moglen 2010). Allow me to quickly summarise his argument:

Cloud architecture works on a thin-client, fat-server model. Processing and most data storage are done on centralised servers servicing a wide variety of dumb clients. These servers maintain activity logs, which can be mined for behavioural data. Marketing companies learned they could use this data to understand, predict and influence user behaviour in order to sell advertising. As the perceived value of this information grew, it spurred the development of a secondary internet infrastructure of tracking services designed to add to the growing database of what we now call “user profiles.”
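To illustrate the mechanism Moglen describes, here is a deliberately simplified, hypothetical Python sketch (the log format and interest categories are invented for illustration). The activity logs that a fat server accumulates as a by-product of serving thin clients can be aggregated, per user, into exactly the kind of behavioural profile that advertising is then sold against.

    from collections import Counter, defaultdict

    # Hypothetical server-side activity log: (user_id, requested_resource)
    activity_log = [
        ("u1", "/news/politics/article-17"),
        ("u1", "/shop/running-shoes"),
        ("u1", "/news/politics/article-42"),
        ("u2", "/video/cookery"),
        ("u2", "/shop/kitchenware"),
    ]

    def build_profiles(log):
        """Aggregate raw request logs into per-user interest profiles."""
        profiles = defaultdict(Counter)
        for user, resource in log:
            category = resource.split("/")[1]   # crude interest category from the URL path
            profiles[user][category] += 1
        return profiles

    for user, interests in build_profiles(activity_log).items():
        print(user, interests.most_common(2))   # what an advertiser is effectively buying

A real deployment does this across vastly more requests and enriches it with the secondary tracking infrastructure mentioned above, but the underlying logic is no more mysterious than this.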

Thus we see that an architecture which concentrates processing power and data at centralised locations promotes a concentration of both technical proficiency and economic power, while also promoting a top-down, hierarchical organisational model and, in a global internet, the development of a limited number of very large monopoly service providers. This produces an extreme power asymmetry between those who own the services and those who use them.

That’s the essence of Moglen’s argument, but allow me to develop the theme further.

This combination of architecture and business model produces walled gardens: silos of private technology and proprietary data formats which are not compatible with, or accessible by, other systems or organisations. The patent system combines with a capitalist marketplace to reward such behaviour financially. If I am the sole owner of a system everyone wants, I can make money. If I create a system and then give it to everyone, I do not benefit. What I therefore need to do is lock everyone into my technology, and then I’ll “own” the market. Or so the thinking goes. IBM pioneered this model, Microsoft and Apple consciously imitated it, and the internet giants which have grown up since have followed it.

The effect of this is to lock data and services into a single provider. The provider becomes the gatekeeper over the knowledge of what they do and how they do it. Users cannot migrate to a competitor without significant effort and loss. This can extend to the most mundane level. If you close your account with Amazon, they will remove all the books from your Kindle. You cannot therefore switch to an alternative, such as Adobe Digital Editions, without re-purchasing your digital library. Different legal regimes permit different levels of access inside these walled gardens, but in no case I know of does a society have full knowledge or any substantive control. Such a system has no interest in open standards, interoperability, or a free flow of information. This lack of interoperability and open standards was why the internet and HTML were not developed by commercial enterprises. Early precursors of the web tried the same walled-garden approach, including America Online, CompuServe and Lotus Notes. It was only when Tim Berners-Lee gave HTML away that we broke free of this limiting system and gained the web. He gave it away because he saw things in exactly this way and believed that if he patented or sold HTML, it would become just another walled garden (Berners-Lee 1999).

However, as companies have developed services which sit atop these communally owned standards, they have developed further proprietary systems. Who knows what Google’s search algorithms are, or the details of Google’s user profiles? Who knows what data Facebook stores on people, never mind what format that data uses? So what we have is an open platform on top of which companies have built a new layer of walled gardens. The scale of the internet user base combines with a shared service delivery infrastructure to enable the rise of extremely large global monopolies, such as Google, Amazon and Facebook.

So what we end up with is the range of cloud services portioned out amongst a limited number of very large hierarchical organisations, each of which hides its use of data from public scrutiny and uses its monopoly position and data ownership as a competitive advantage. The net effect is that people are locked to service providers like serfs to their lord. However, unlike in the Middle Ages, there is no competing lord to flee to if you are unhappy with your lot.

Now, opposing this is a disparate range of alternatives. Each alternative tends to focus on one aspect of this system, such as technical architecture or business model. Technically, the existence of the internet is based on open standards, so alternatives have always been available at a technical level. Here we have the open-source and open-standards communities, such as the Free Software Foundation and the IETF. In addition, we have less obvious alternative architectures based on peer-to-peer (as opposed to client-server) models, such as the BOINC platform for community computing and the BitTorrent protocol. Standards like XML and RDF provide a means of breaking open walled gardens through data exchange, while people such as Chris Marsden in the UK or Robert McChesney in the USA have developed the rationale for them (Brown and Marsden 2013b; Brown and Marsden 2013a; McChesney 2014). We also have to acknowledge that the development of the hidden infrastructure of ubiquitous surveillance has spurred the development of a fairly mature alternative internet, the so-called DarkNet, and popularised anonymising systems such as Tor.
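As a small illustration of the data-exchange point, here is a hypothetical sketch using only Python’s standard library (the element names are my own, not any real provider’s export format). Once a profile can be serialised in an open, documented format such as XML, any competing service can read it, and the walled garden loses one of its walls.

    import xml.etree.ElementTree as ET

    # A profile as it might sit inside a provider's proprietary store.
    profile = {"user": "u1", "interests": ["politics", "running"]}

    def export_profile(p):
        """Serialise a user profile to a plain, openly documented XML format."""
        root = ET.Element("profile", attrib={"user": p["user"]})
        for interest in p["interests"]:
            ET.SubElement(root, "interest").text = interest
        return ET.tostring(root, encoding="unicode")

    exported = export_profile(profile)
    print(exported)

    # Any competing service can read it back with a standard XML parser.
    print([e.text for e in ET.fromstring(exported).findall("interest")])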

Alternatives to the prevailing organisational model are far less developed, but the early signs are there. There is a (relatively) long history of critical approaches to the prevailing models, such as we see in the works of Christian Fuchs (Fuchs 2012; Fuchs 2011; Fuchs and Sandoval 2014) and Mark Andrejevic (Andrejevic 2011). Recently the House of Lords called for the internet to be treated like a public utility rather than a marketplace of optional luxuries (The Select Committee on Digital Skills 2015). International bodies, such as the EU and UNESCO, are starting to call for wider civic involvement in determining how services are provided (UNESCO Secretariat 2014; Kroes and Buhr 2014) and for the development of alternative service provision models. For example, the outgoing Vice-President of the European Commission, Neelie Kroes, stated last November:

“Why should we have to give up our privacy for a ‘free’ service if we prefer to pay for that same service with cash and keep our privacy?” (Kroes and Buhr 2014).

SUMMARY

Let’s see if I can now draw these different axes together: what is their common basis? I believe they all point to differing perspectives on agency. On the first axis we have, at one extreme, those who hold that preservation of privacy and delivery of service are necessarily in opposition; that there is no possibility of agency in the relationship between privacy and service design. On our second axis, the position that there is no ethical connection between the creator of an autonomous system and that system’s effects is also a position of no agency. Here the lack of agency pertains not to the nature of the system, but to its consequences: in that I have no agency in the system’s actions, I am not ethically connected to them. Finally, on our third axis, we see a lack of agency in the broader internet culture, which accepts one particular architecture and the most obvious organisational and economic models. In all cases, we see one pole in each dialectic disempowering itself, primarily, I believe, because it simply failed to see that there was a choice, that it had agency, the power to act differently. My belief is that a key step is therefore inculcating in programmers and leaders of cloud services the understanding that they possess agency, that there are alternatives, and that they have the power to explore them.

Since 1992 cyberspace has been left to evolve as a commercial marketplace. It has followed the logic of the market in its development, without much thought as to alternatives and without much pushback against a single prevailing business model. Along the way it has become of central social importance. Now people are starting to question this reflexive acceptance and to develop alternatives.

And I guess that’s why we’re here today.


References

Andrejevic, Mark B. 2011. “Surveillance and Alienation in the Online Economy.” Surveillance & Society 8 (2): 278–87.

Bergkamp, Lucas. 2002. “The Privacy Fallacy.” Computer Law & Security Report 18 (1): 31–47.

Berners-Lee, Tim. 1999. Weaving the Web. New York, N.Y: HarperCollins.

Brown, Ian, and Christopher T. Marsden. 2013a. “Regulating Code: Towards Prosumer Law?” SSRN Electronic Journal. doi:10.2139/ssrn.2224263.

———. 2013b. “Interoperability as a Standard-Based ICT Competition Remedy.” In 8th International Conference on Standardization and Innovation in Information Technology 2013, 1–8. Sophia-Antipolis: IEEE. doi:10.1109/SIIT.2013.6774570.

Dahiyat, Emad Abdel Rahim. 2010. “Intelligent Agents and Liability: Is It a Doctrinal Problem or Merely a Problem of Explanation?” Artificial Intelligence and Law 18 (1): 103–21. doi:10.1007/s10506-010-9086-8.

Fuchs, Christian. 2011. “Web 2.0, Prosumption, and Surveillance.” Surveillance & Society 8 (3): 288–309.

———. , ed. 2012. Internet and Surveillance: The Challenges of Web 2.0 and Social Media. Routledge Studies in Science, Technology and Society 16. New York: Routledge.

Fuchs, Christian, and Marisol Sandoval, eds. 2014. Critique, Social Media and the Information Society. Routledge Studies in Science, Technology and Society 23. New York: Routledge/Taylor & Francis Group.

Johnson, Deborah G. 2011. “Software Agents, Anticipatory Ethics, and Accountability.” In The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight, edited by Gary E. Marchant, Braden R. Allenby, and Joseph R. Herkert, 7:61–76. Dordrecht: Springer Netherlands. http://www.springerlink.com/index/10.1007/978-94-007-1356-7_5.

Kroes, Neelie, and Carl-Christian Buhr. 2014. “Human Society in a Digital World.” Digital Minds for a New Europe. https://ec.europa.eu/commission_2010-2014/kroes/en/content/human-society-digital-world-neelie-kroes-and-carl-christian-buhr.

Langheinrich, Marc. 2001. “Privacy by Design – Principles of Privacy-Aware Ubiquitous Systems.” In Proceedings of the Third International Conference on Ubiquitous Computing, edited by Gregory Abowd, Barry Brumitt, and Steven Shafer, 273–91. LNCS. Atlanta, USA: Springer-Verlag. http://www.vs.inf.ethz.ch/publ/papers/privacy-principles.pdf.

Matthias, Andreas. 2004. “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.” Ethics and Information Technology 6 (3): 175–83. doi:10.1007/s10676-004-3422-1.

McChesney, Robert W. 2014. “Be Realistic, Demand the Impossible: Three Radically Democratic Internet Policies.” Critical Studies in Media Communication 31 (2): 92–99. doi:10.1080/15295036.2014.913806.

Mell, Peter, and Timothy Grance. 2011. The NIST Definition of Cloud Computing. Special Publication 800-145. National Institute of Standards and Technology.

Moglen, Eben. 2010. “Freedom in the Cloud.” Speech, New York, N.Y. http://www.softwarefreedom.org/events/2010/isoc-ny/FreedomInTheCloud-transcript.html.

Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24 (1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x.

The Select Committee on Digital Skills. 2015. Make or Break: The Digital Future. Report of Session 2014–15. UK: The Authority of the House of Lords. http://www.publications.parliament.uk/pa/ld201415/ldselect/lddigital/111/111.pdf.

UNESCO Secretariat. 2014. Internet Universality. United Nations Educational, Scientific and Cultural Organization (UNESCO).