Steve Torrance


Steve Torrance was trained as a philosopher at Sussex and Oxford, and has lectured at Middlesex and Sussex Universities.  Having retired, he now holds the position of Professor Emeritus in Cognitive Science at Middlesex University, and teaches part-time at Goldsmiths, University of London.  His research and teaching have been set in an interdisciplinary context for most of his career, but they still retain a philosophical emphasis.  In the 1980s he became interested in the issues raised by computing and artificial intelligence, and particularly in the implications of AI and robotics for the philosophy of cognition and consciousness, and for moral theory.  Since the mid-1990s he has held positions in departments of psychology and informatics.  He is a Visiting Senior Research Fellow in the School of Engineering and Informatics at the University of Sussex.

Much of Steve’s research has been concerned with enactive approaches to mind.   He has organized several meetings on enactive and sensorimotor approaches to perception and interaction, and has edited two special issues of the journal Phenomenology and the Cognitive Sciences on “Enacting Experience” (2005, 2007).  While at Sussex he worked closely with Ezequiel Di Paolo and others on enactivist approaches to interactive dynamics and joint sense-making.  Together with Ezequiel, Hanne De Jaegher and Tom Froese, he co-chaired a 2008 workshop, funded by the EUCognition Network, on Enactive Approaches to Social Cognition in Battle, at which several of the themes for today’s talk were incubated.

Steve’s research has also focused on the philosophical foundations of Machine Consciousness and Machine Ethics, for instance on the moral status of intelligent machines or robots in the context of psychological ascriptions of cognition, sentience, etc.  This bears on questions about the bounds of the ‘moral constituency’ – human, animal and, perhaps, artificial.  What is the relation between applying or withholding attributions of consciousness to artificial agents, and applying or withholding attributions of ethical status? It is clear from consideration of the constituency question that ethical status comes in two kinds – ethical agent (or ‘producer’) status and ethical patient (or ‘consumer’) status.  Working on the ethical agent/patient (or producer/consumer) duality, Steve is seeking to apply some of these reflections to an enactively-informed understanding of conflictual interpersonal interactions (for now, to small-scale conflicts; but with potential application to larger-scale group or political conflicts).

To summarize, then, Steve is interested, for this meeting, in exploring the question ‘How far can insights from enactive philosophy and from the conceptual foundations of ethics (particularly machine ethics) help in the understanding of conflict in interpersonal relations?’

Presentation title:  Ethics, Conflict and Inter-Enaction

Abstract: In this presentation I will develop some reflections concerning ‘inter-enaction’ and related notions, as well as some parallel work on the conceptual foundations of machine ethics.

Inter-enaction.  The notion of ‘inter-enaction’ was developed by Giovanna Colombetti and myself in a 2009 paper [1] and subsequently elaborated in a 2011 paper that I co-authored with Tom Froese [2].  ‘Inter-enaction’ arose out of the discussions on enaction, intersubjectivity and empathy, by Evan Thompson and colleagues at the turn of the millennium [3]; and, further, from work at Sussex by Ezequiel Di Paolo’s evolutionary robotics group.  The latter led to a seminal 2007 paper on Participatory Sense-Making (PSM) by Hanne De Jaegher and Di Paolo [4].   PSM takes the enactive/autopoietic view of an organism or agent elaborating, or co-constituting, through dynamic interaction with its environment, a world of significances, and extends it into the realm of interpersonally shared meaning creation, through social co-action.  A major idea explored in the research of Di Paolo’s group was that the process of interaction between two or more agents (humans, animals, robots) had its own autonomous dynamics, which could be experimentally isolated and measured.  PSM applied this more specifically to human relational dynamics.

Emotion, ethics. The notion of ‘inter-enaction’ imported several linked ideas.  One idea stressed the centrality of emotion, or affective inter-connection, as pervading social interactions: in this strand we explored the variety of ways in which we are affectively involved in interpersonal encounters.  A second strand, central to today’s presentation, was the ubiquity of ‘moral’ colouring in such interactions.  In traditional conceptions of the scope of the moral,  the latter is seen as a relatively circumscribed realm in human life.  By contrast the inter-enactive account sees human social relations as endemically suffused by implicit, affectively charged, moral (or quasi-moral) significances – these significances being continually reshaped and renegotiated in our unfolding relationships.

Inter-enaction and normative order.  The intertwining of the affective and moralistic character of the inter-enactive account was explored in [1].  The ethical aspect of inter-enaction was further elaborated in [2], where we sought to relate PSM, inter-enaction, and related notions, with their emphasis on local, inter-individual dynamics, to a broader, suprapersonal, notion of normative order, central to social theory as developed by Durkheim, Weber and others.

Interaction-oriented ethics.  Another theme in this work explored the idea of inter-enaction as a distinctive ethical paradigm –  ‘interaction-oriented ethics’ (IOE) – which could be seen as a competitor (or complement) to the trio of consequentialist, deontological and virtue ethics which dominate much discussion in contemporary moral theory [5].   Such ethical perspectives are usually conceived as individuocentric in nature: consequentialism deals with the balance of good over bad resulting from the acts of individual doers, or the aggregates of such acts; deontology deals with duties as they apply to individualized actors or group agencies; virtue ethics emphasizes the most appropriate qualities of character or dispositions that an individual might cultivate in life.  IOE, on the other hand, proposes a different – or an additional – focus for ethical analysis and appraisal:  the processes of interaction, and joint meaning-elaboration, between agents.   These inter-agent processes are considered as having their own autonomous dynamics that enable and constrain the individual moves within this interactive play.

Agent- and interaction-autonomy.  In this connection it is desirable to distinguish two notions of autonomy: agent-autonomy and interaction-autonomy.  Agent-autonomy refers to how the actions, intentions, feelings, choices, etc. of an individual agent enable it to maintain a trajectory allowing it to survive and flourish in its physical and social environment.  The autonomy of one agent modulates, and is modulated by, the autonomy of other agents in its interactional space. Interaction-autonomy, on the other hand, comprises the processes whereby two or more agents meet in their interactional space, and whereby that meeting takes on its own, more or less complex, independent dynamic.

As an illustration of interaction-autonomy within human relations, consider the ‘deadly embrace’ between two cars driving close together in the fast lane of a motorway.  The rear driver, wishing to overtake, draws up close, tail-gating the driver in front.  The leading driver may slow down or (more likely) speed up; for seconds or minutes the two cars are in a locked formation.  This encounter can obviously be described in individualistic terms:  one driver does X, to which the other driver responds with Y, and so on.   More likely than not, the individual feelings and actions of the participants in this scenario embody a strongly moralistic flavour – the approach of the rear driver is experienced as an invasion of personal space by the driver in front, which elicits a retributive reaction, etc.   However, from the point of view of interaction-dynamics, the (temporary) stability of the relationship between the two drivers can be described as an emergent process of locked co-action, with its own ‘life’, unfolding as a temporally structured play.  Importantly, it is not just a dynamics of movement which is being expressed in this situation, but also a dynamics of jointly constructed meanings (e.g. the collective persistence of the ‘fight’ for lane-dominance).

An individual-oriented ethics will interpret and appraise the situation by ascribing merit or blame to the actors.  An interaction-oriented ethics, by contrast, will take the interaction itself as a primary focus for understanding and appraisal – that is, by treating the interaction as a jointly unfolding product, rather than a sum of individual moves or holding positions.  The details of interaction-oriented ethical appraisal remain to be worked out – a complex project.  For example, one can differentiate overall styles of interaction – conciliatory, conflictual, etc.  One can have short-, medium- and long-term episodes of interaction, all with their distinctive properties.   One can describe the evolution of an interactional episode in terms of affective heat or coolness.  And so on.

Machine ethics.  One set of tools will be described that could play an important part in a fully worked-out inter-enactive ethical analysis.  This is derived from work done on the philosophical foundations of Machine Ethics [6,7].  In the philosophy of AI much discussion has traditionally centred on the conditions for authentic attributions of different human properties (thinking, consciousness, creativity, and so on) to artificial agents – ‘genuine’ thinking, etc. as opposed to courtesy or ‘as-if’ ascriptions.  In Machine Ethics a key issue concerns the conditions under which an AI – or indeed a natural being – might be accorded full-blooded moral status.  It turns out that very different kinds of conditions for moral status may be applicable depending on whether one is referring to a machine or robot as a moral agent (or ‘producer’) or as a moral patient (or ‘consumer’).

Producer-consumer relationships.  In order to make progress on this, one has to get clear what exactly the difference is between these two – between moral producer status and moral consumer status.  On one account, a moral producer is an agent which performs an act that is morally evaluable, and a moral consumer is any agent affected by that act. However, a number of complications arise.  Can one, for instance, be a moral producer and a moral consumer at the same time? In order to answer this, different examples can be offered.  One kind of example is that of being rewarded for a good action (or punished for a bad one).  In the simplest version of this case one is on the receiving end of an action (reward, punishment) – hence a consumer – but that one is such a recipient stems from a previous morally significant action one has performed (i.e., as a moral producer).  In another kind of example, when A scolds B because B is smoking, A’s complaint expresses A’s conception of herself as a (potential) consumer of a negative moral value.  But, in making that complaint, A is expressing a moral norm and, in that respect, performing an act of (as A believes) positive moral polarity (seeking to curb B’s antisocial behaviour), and thus playing a moral producer role.  In a different case, that of two lovers in amorous union, each actor both produces expressions of the other’s dearness and worth, and receives such expressions from the other.   As each lover’s gesture of endearment mutually reinforces the other’s matching gesture, the producer and consumer roles are fused in a certain way.  Similar things can be said about lovers’ quarrels, except here the polarities are negative, and each actor’s hostile move both answers and elicits hostility from the other, again in a mutually reinforcing way.

Implications of the producer/consumer framework.  A number of tentative conclusions can be drawn from these reflections on the producer/consumer duality.  Here are three. First, it seems clear that the producer/consumer framework may turn out to be subtle, intricate and powerful. Second, it seems to be applicable across a wide range of human interactions (not to mention a growing class of interactions between humans and machines that display humanoid qualities).  So the framework may help us to clarify the microstructure of interpersonal conflicts.   Third, it buttresses the theme, central to inter-enaction, that moralistic or quasi-moralistic affective relations are much more ubiquitous than is generally thought.

Steve Torrance can be contacted at


[1]  Colombetti, G. and Torrance, S. (2009)  Emotion and Ethics, an Inter-(en)active Approach.  Phenomenology and the Cognitive Sciences.  8(4):  505-526.

[2]  Torrance, S. and Froese, T.  (2011)  An Inter-enactive Approach to Agency:  Phenomenology, Dynamics and Sociality.  Humana Mente, 15:  21-53.

[3]  Thompson, E.  (2001)  Empathy and Consciousness.  In E. Thompson (ed.)  Between Ourselves. Second-Person Issues in the Study of Consciousness. Special issue of  Journal of Consciousness Studies.  8(5-7): 1-32.

[4]  De Jaegher, H. and Di Paolo, E. (2007)  Participatory Sense-making. An Enactive Approach to Social Cognition. Phenomenology and the Cognitive Sciences 6(4):  485-507.

[5]  Torrance, S.  (2008a)  Ethics and Interaction.   Background paper for Powder Mills workshop on Enactive Approaches to Social Cognition.

[6]  Torrance, S. (2008b)  Ethics and Consciousness in Artificial Agents.  AI & Society.  22(4):  495-521.  (Special Issue on Ethics and Artificial Agents.)

[7]  Torrance, S. and Chrisley, R.  (forthcoming)  Modelling Consciousness-dependent Expertise in Machine Medical Moral Agents.   In M. Pontier and S. Van Rysewyk (eds).  Medical Machine Agents, Springer, in preparation.