Identifying the normative challenges posed by technology’s ‘soft’ impacts1

Tsjalling Swierstra
Department of Philosophy, Maastricht University, The Netherlands
t.swierstra@maastrichtuniversity.nl
Abstract: In this paper I argue that we
can no longer afford to ignore technology’s so-called
‘soft’ impacts, as this type of impact is becoming
increasingly prominent in affluent societies where
people have sufficient resources to pursue
self-realization and where technologies are becoming
more and more ‘intimate’ as they pervade our life-world.
These soft impacts come with their own type of normative
challenges. The first challenge is to acknowledge the
mutual shaping of technology and morality that causes
soft impacts to be fundamentally morally ambiguous. The
second challenge is to anticipate soft impacts, which
requires a rich and thick description of our
morally-laden current practices in the light of
plausible technomoral change provoked by emerging
technologies. The third and last challenge is to avoid
both relativism and foundationalism, by opting for an
open and learning attitude vis-à-vis the ways new and
emerging technologies put our current morals into
question.
Keywords: Technology Assessment, ethics, soft
impacts, technomoral change, intimate technologies
Introduction
In the summer of 2014, the
Department of Philosophy and Religious Studies of
the Norwegian University of Science and Technology
in Trondheim organized an international seminar on
two important questions in the philosophy of
technology:
- “What are the
adequate methods for answering the normative
challenges posed by emerging technologies?
- How do we combine
empirical methods and philosophical theories in
answering particular research questions related
to technology development?”
In this paper I focus on
the first question, but in doing so also touch upon
the second one. My main claim is that in our times
new – so called ‘intimate’ – technologies have a
type of ‘soft’ impact (Swierstra & te Molder
2012) that differs markedly from the technological
risks – or ‘hard’ impacts – that we have more or
less learned to cope with. We are still in the
process of devising methods to deal with the new
normative challenges that come with these soft
impacts, although some promising approaches can
already be identified.
In the first section I explain the – gradual –
distinction between hard and soft impacts of
technology. In the next section I point to two
developments that help explain why soft impacts
are becoming ever more prominent in public
discussions about emerging technologies. In the
subsequent three sections I sketch how these soft
impacts come with their own type of normative
challenges. A first normative challenge regards the
essential moral ambiguity of soft impacts. I will
argue that this ambiguity follows to a large extent
from the phenomenon of technomoral change (Swierstra
et al. 2009). A second normative challenge is how to
devise adequate anticipations of these soft impacts.
I will argue that although such anticipation seems
to be a descriptive endeavour, it is in fact
normatively charged. A third normative challenge has
to do with the way a society can best come to terms
with soft impacts. I conclude with a brief summary
of my main propositions.
‘Hard’ and ‘soft’
impacts
We have by now learned –
often the hard way – that technology is not a
cornucopia, pouring forth an endless stream of
gifts. Some of these gifts prove to be poisonous, to
explode in our face, or to pollute and deplete our
environment. Technologies not only make our lives
more productive, comfortable, and longer, but can
also cause great harm to users and non-users alike.
Over the past century or so, developed societies
have struggled to devise methods, strategies and
institutions to deal with these technological
hazards. One key strategy is the retrospective or
prospective assessment of the impacts of
technologies. These assessments are meant to provide
policymakers with a cost-benefit analysis which
helps to decide whether the technology has to be
stimulated, regulated, modified, or even banned
altogether. As it is often easier to avoid future harm
than to repair past harms, these forms of Technology
Assessment (TA) are usually to some extent
anticipatory: what will be the impacts of our
current or emerging technologies? Especially in the
latter case TA can be used as input to guide
scientific and technological development while the
technologies in question are still malleable. Of
course, the assessment of possible future impacts of
technologies that are not yet fully developed can
only be highly speculative.
As no one can predict the future, an important
challenge is how to ensure that these anticipations
are somewhat authoritative and not easily dismissed.
An important first step to enhance TA’s authority,
at least with influential actors like technologists
and policymakers, was the development of methods to
quantify both the chance that a hazard would
occur and the undesirable outcome itself.
Quantification helped to transform fuzzy,
unmanageable ‘hazards’ into specific ‘risks’ that
could serve as the basis for policy decisions. But
quantification required that the ‘undesirable
outcome’ that constitutes the core of any hazard (or
risk), had to be defined in such a way that it lent
itself to such quantification. This was achieved – in a
second step – by interpreting ‘undesirable outcome’ in
terms of the harm principle formulated by John Stuart
Mill. This
famous principle has long served to demarcate the
domain of legitimate state intervention in civil
society: if clear and objective harm is done, the
state should step in and private actors (like
companies) should be held accountable to redress the
harm done. Clear and non-controversial values like
Safety, Health, and – more recently – the
Environment are the main values enabling the public
identification of technological harms. A third way
to ensure a TA’s authority was by focusing on
problems that can noncontroversially be ascribed to
technology actors and corporations. So, a strict
line was drawn: a risk is only a technological
risk if the harm is a direct causal
consequence of the technology in question.2
Together these three specifications –
quantifiability, clear and noncontroversial harm,
direct causation – helped to turn unmanageable
hazards into manageable risks. They also helped to
create a specific regime of accountability. If
impacts of technology meet these three
conditions, they are considered to be sufficiently
‘hard’ (in the sense of objective, rational, public,
and concrete) by technology and policy actors to
accept and allocate accountability. This
accountability regime provides actors with a
powerful incentive to avoid or reduce such ‘hard
impacts’ of technology. Given a legal and political
environment that is capable of holding technology
actors effectively accountable for the quantifiable,
noncontroversial harms caused by their technologies,
this discursive construction of ‘risk’ has thus been
a successful and powerful instrument in making TA
more authoritative, and technology safer, healthier,
and more sustainable.
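To make the quantification step concrete: in standard risk analysis the first two specifications are usually condensed into a single formula for expected harm. The following formalization is my own illustrative sketch, not a formula from the TA literature cited above:

\[
\text{Risk} = P(\text{undesirable event}) \times M(\text{harm})
\]

where \(P\) is the estimated probability that the hazardous event occurs and \(M\) is the quantified magnitude of the resulting harm (e.g. casualties or monetary damage). Only when both factors can be estimated, and the harm can be traced directly to the technology, does a hazard count as a ‘hard’ risk in the sense discussed here.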
However, technologies don’t exclusively have ‘hard
impacts’ like poisoning, exploding, polluting and
depleting. Almost two and a half millennia ago
Plato, in the Phaedrus, denounced the alphabet as a
technology that was destructive to true knowledge
and as being politically disruptive. So for a very
long time, we have been aware that technologies have
other types of impacts too. As a modern example, it
suffices to update Plato’s concerns about the
written text to recent concerns about the Internet.
The concerns that surfing the Internet will destroy
knowledge and will undermine intellectual virtues by
making us shallow and less concentrated (Carr 2011),
or that Facebook turns friendship into a travesty
(Turkle 2010), are not easy to quantify.
Furthermore, it is highly contested whether the new
ways ‘taught’ by the Internet are in fact better,
worse, or simply different than the old ways, so
there is no consensus whether harm is actually done.
And thirdly, even if we were to agree that something
important was lost, it would still be unlikely that
we could simply blame the Internet for this. It is
evident that users make different uses of the
Internet and are differently affected by it. As a
consequence, it is impossible to identify a direct
causal link between technology and impact. In brief:
impacts like these are qualitative rather than
quantitative; the core values at stake are unclear
or contested rather than clear instances of harm;
and the results are co-produced by the user rather
than being caused solely by the technology. The fact
that there can be different, disagreeing perspectives on
the nature and (un)desirability of a consequence is
referred to in the literature as ‘ambiguity’ (Renn
& Roco 2006); the fact that some consequences
are causally open because they are codetermined by
e.g. human behavior, is recognized as the problem of
‘indeterminacy’ (Stirling 2003).3
Impacts that are qualitative, ambiguous, and/or
indeterminate tend to fly under the radar of the
prevailing accountability regime. They are dismissed
by technology and policy actors as too fuzzy, or too
‘soft,’ to take seriously. As a consequence, it is
unclear who can be held accountable for them – if
anyone. As no regime is in place, soft impacts tend
to remain orphan impacts.
It should be stressed that the distinction between
hard and soft impacts is not neutral or descriptive.
Instead, it is a largely rhetorical distinction
brought into play by one group of powerful players
(policymakers and technology actors) for practical –
or strategic – purposes: for which impacts are they
willing to accept some degree of accountability, and
for which are they not? It is therefore to be
expected that the distinction is constantly contested
by other parties who seek a place for their
concerns on the public agenda. In the rest of this
article I adopt the hard-soft distinction as a
shorthand for this practical distinction, not as an
endorsement of it.
Furthermore, it should also be clear that the
hard-soft distinction is a gradual one, and, equally
important, that the demarcation line is drawn
differently at different times and places. Impacts
can (and do) score differently on each of the three
dimensions: they can be hard in one dimension but soft in
another. For instance, one may be able to quantify
the chance that certain genetically modified
organisms will spread in nature, without, however,
agreeing on whether this would in fact be a
problem. Or we may agree that it is very harmful to
play violent computer games all day, without
agreeing to what extent the game or the user is to
blame. Furthermore, the line between hard and soft
is not carved in stone. For a long time, doing
something ‘unnatural’ was considered to be a clear
and noncontroversial instance of harm. On the other
hand, if a technology failed to be sustainable, this
was hardly noticed. Nowadays, the tables have been
completely turned. Engineers and policymakers hardly
raise an eyebrow anymore if, for instance, a food
product or a way to produce food is deemed
‘unnatural’; instead, sustainability has been
‘hardened’ into an important criterion in the
prevailing accountability system. Similarly, as
differences in sexual moralities make abundantly
clear: what is considered a grave public harm in one
culture – e.g. female promiscuity, or homosexuality
– is considered to be a private lifestyle choice in
another. So, not only is the distinction between
hard and soft gradual, it is also linked to spatial
and temporal contexts and bound to change with them.
This implies that the corresponding accountability
regimes coevolve.
Why soft impacts
can no longer be ignored
So, at least since Plato we
have been aware that technologies have more than
‘hard’ impacts. It is even fair to say that in
traditional societies most of the concerns regarding
emerging technologies would in modern parlance be
labeled ‘soft’. People worried mostly about
technology’s disruptive effect on religion (e.g. the
printing of the Bible) or on existing power
hierarchies (e.g. the failed introduction of guns in
feudal Japan, as incompatible with the prevailing
system of martial honor).
It is only since the Industrial Revolution that
public attention has shifted from the impacts
typically worried about by priests, philosophers,
and artists, to the ever more manifest harms
inflicted by industrialization. Whereas the cultural
or spiritual impacts of technology became ever
softer and thus privatized, the impacts of
technology on our health, safety, and environment
began to dominate the political agendas. In our
modern, technological culture we put our trust in
numbers rather than in words (Porter 1995). Ours is
also a liberal culture built on the lessons of the
religious wars of the 16th century: a pragmatic way
to pacify social conflict is to distinguish between
the right and the good (Rawls 1988), between a thin,
procedural, binding morality and a thick,
substantive, and private morality (Walzer 1994). And
ours is a culture in which the dominant
instrumentalist perception regards technology as a
collection of docile, passive, neutral tools.
However, this era now seems to be ending. Our
society and our technologies are evolving in a
direction that makes the exclusive focus on hard
impacts increasingly untenable. I want to offer two
hypotheses to explain why technology’s soft impacts
are coming to the fore.
First, technological societies have become more
affluent. As an accompaniment of this growing
affluence our needs have moved upwards, according to
Maslow’s ‘hierarchy of needs’. Most citizens of
modern, Western, technological societies spend less
time than ever on satisfying their basic needs –
food, drink, safety, physical integrity – and more
time than ever on satisfying their ‘higher’ needs
like sexual intimacy, friendship, self-esteem,
recognition, and self-development. There is an
important ethical difference between the basic and
higher needs. The former needs give rise to a type
of ethics that is basically protective and
geared towards decreasing our vulnerability. The
latter needs give rise to a type of ethics that is
more aspirational, geared towards human
flourishing. If we apply this ethical shift to
technology, we can witness a gradual shift away from
a type of ethics that centers on avoiding or
protecting against technology’s harms, to a type of
ethics that seeks to elucidate how technology can
contribute to realizing positive goals like
happiness.
This increased public attention to higher needs
provides one explanation for the rising prominence
of technology’s soft impacts. Basic needs seem more
readily compatible with the discourse on
technology’s hard impacts than the higher needs are.
For one, harm is most easily and non-controversially
identified in terms of the lower needs. It is much
easier to reach agreement on minimizing harm than on
maximizing happiness because, as Karl Popper (1945, p.159)
pointed out: “Those who suffer can judge for
themselves, and the others can hardly deny that they
would not like to change places.” Secondly, basic
(preferably physical) needs seem to be more
objective and quantifiable than the higher, more
fuzzy, needs. And with regard to causality: in cases
where basic needs take center stage, it is easier to
establish that someone is simply a victim of
technology. In the case of the higher needs, one is
most often not simply a victim of the technology but
also to some extent a co-culprit.
In contrast, in the case of the higher needs the
hard impacts discourse (with its accompanying
accountability regime) becomes less convincing.
Calories may be quantifiable, but it is much less
clear how much knowledge, aesthetic pleasure and
self-development ‘healthy’ people should have each
day. And at what exact point can they claim to be
harmed if their higher needs are not fulfilled? And
to what extent are they themselves to blame for such
a lack?
Second, the technologies have changed too. Modern
technologies like ICT, biotechnology, and
neuroscience come ever nearer to our bodies, minds,
and life-worlds. In Chaplin’s Modern Times,
technology typically belonged to the (semi-)public
world of the factory or transportation.
Technology was not portrayed as pervading the
personal, private sphere. And although in the
decades following the film intellectuals were
concerned about the devastating effects of
mass-consumption on the human psyche, what they
worried over was technology’s products invading the
private sphere rather than technology itself.
Household technologies, radio and television are
important exceptions here. However, even granting
that, the private sphere was for decades perceived
as a low-tech environment.
This has radically changed over the last three or
four decades, especially because of the ICT
revolution. Many of today’s most eye-catching
technologies are ‘intimate’ (Van Est 2014). They are
no longer primarily ‘there’; they are now also very
much ‘here’. Biotechnology is turning our bodies,
including our brains, more and more into objects for
scientific study and technological manipulation. For
example, Deep Brain Stimulation deeply invades what
we perceive to be the core of our personality: our
brains. The same holds for all kinds of enhancement
drugs. We are also witnessing the rise of
Brain-Computer Interfaces – for instance bionic arms
connected to our nervous system or exoskeletons that
respond to our brainwaves – which further blur the
dividing line between humans and technologies.
Visionaries imagine a time when we will upload our
brains into the computer, thus completely merging
with the machine. Even if one dismisses these
fantasies as far-fetched, technologies are growing
increasingly important and influential in mediating
our contact with the object world (think of
the augmented reality of Microsoft’s HoloLens) and our
contact with the social world of fellow human beings
(think of the Internet, mobile phones, or telecare
systems). These modern technologies are also getting
ever more intimate in the sense that they invade our
privacy by amassing all kinds of data on a scale
that we are still struggling to understand (think
NSA or Google). Who can still be sure that Big Data
doesn’t know more about us than we know ourselves?
Finally, intimate technologies not only exist in us,
between us, and around us, but they also become like
us. Robots are starting to take over our tasks and
they sometimes eerily resemble us in appearance.
E-coaches give us personal advice – albeit generated
through algorithmic automation – about how to cope
with our problems.
These new ‘intimate technologies’ raise all kinds of
concerns, but not primarily that they will poison
us, explode in our faces, or pollute and deplete the
environment. Nor are they simply instruments that
enable us to pursue our personal conception of the
good life. Rather, they co-define the good life and
what we owe to each other. We are beginning to
become aware that these technologies co-shape our
norms and values; our identities and our mind; our
bodies and our emotions; our aspirations, hopes, and
ideals; our needs, wants, and desires; our rights,
obligations, duties, and responsibilities; our
virtues and dispositions. In other words, these new
technologies raise a different kind of concern than
the ‘hard’ ones we were used to.
As a result of these two developments – more
attention to higher needs, plus technology becoming
increasingly intimate – more people are concerned
about technology’s soft impacts. And when those in
positions of power discredit these concerns as too
‘soft’ to take seriously, that only adds to
those worries.
The normative
challenges raised by soft impacts
Soft impacts raise
different, and more difficult, normative challenges
than hard impacts. I want to focus on three of
those challenges that are particular to soft
impacts: their moral ambiguity; the difficulty of
describing them in such a way that they can become
the subject of public deliberation; and finally, how
to deal with them in a constructive way. First I
will turn to the moral ambiguity of soft impacts.
Hard impacts come with their own kind of normative
challenges, and sometimes these do indeed require
profound philosophical reflection. For example,
there are difficult epistemic and political issues
involved in establishing risk (Slovic 1999; Roeser
2006); there are normative questions about how to
weigh the risks against the possible benefits and
about who should carry the burden of proof for
establishing the risks of a technology
(precautionary principle); there are questions
relating to individual versus corporate/collective
responsibility (‘the problem of many hands’), and to
short term versus long term responsibility; and
finally there are normative challenges having to do
with the just distribution of benefits and costs.
Philosophers are still struggling with these
challenges, many of which have been on the
philosophical agenda since Hans Jonas formulated
them in the seventies (1973).
Important as these normative challenges of hard
impacts may be, compared to the challenges raised by
soft impacts they are only of a second-order
nature. This is because in the case of hard impacts
the core normative question, that is: whether the
technological impact itself is indeed (un)desirable,
is considered to be answered already. No one in her
or his right mind thinks hurting people or polluting
and depleting nature is a good thing. As a result of
this basic normative consensus, hard concerns come
with the expectation that they can be solved on
the basis of empirical facts: how large is the
chance that a certain event will occur? What is the
extent of the harm? And is the harm directly caused
by the technology under investigation? In this
sense, one could even ask whether hard impacts do
not primarily pose factual challenges rather
than normative ones. Granted that everyone
wants to minimize pain and agrees to delegate other
normative concerns to private discretion, all the
relevant practical-ethical questions turn into
questions of empirical fact. Indeed, one could
consider this the particular strength of hard
impacts. This makes them highly compatible with
the consequentialist-utilitarian and liberalist
discourse favored by policymakers and technology
actors.
By contrast, soft impacts are essentially defined by
the fact that they are morally ambiguous, and that
there is no consensus on the question of whether the
impact is good or not. This ambiguity can be the
result of conflicting values, e.g. when we are
confronted with a trade-off between privacy and
security in the case of surveillance techniques.
However, in the case of the intimate technologies
mentioned above, the moral ambiguity goes deeper as
it is caused by the destabilization of the normative
and moral routines that we rely on to assess the
(un)desirability of the impacts of those
technologies. As stated in the previous section,
these technologies affect our norms, values, and
aspirations. In short, the technologies themselves
turn our normative standards into a subject for
ethical doubt, deliberation and discussion.
A good example of the morally destabilizing effects
of an intimate technology is provided by the impacts
that condoms and the contraceptive pill had on
sexual morals. Looking through the lens of the
dominant moral standards of the fifties, these
technologies appeared extremely suspect as they
allowed women to discard moral standards regarding
chastity without being ‘punished’ by an unplanned
pregnancy and the resulting public shame, and they
allowed women to subvert the 'natural' hierarchy
between the sexes. Looking back we can observe two
things. On the one hand, the pessimists were proven
right: those technologies indeed allowed women to
break away from the restrictive sexual morality and
the gender hierarchies of the fifties. On the other
hand, the same technologies resulted in today’s
dominant sexual morality being considerably
different from the one that dominated fifty years
ago. As a result, the moral pessimism of that decade
no longer seems warranted from our perspective (at
least not on the grounds put forward then). What our
parents depicted as moral decay is now perceived as
sexual emancipation and liberation of women.
This example demonstrates that the moral standards
we apply to intimate technologies are not
independent from those technologies. In other words,
our morals co-evolve with the technologies they are
supposed to guide. Of course, this is not to suggest
that morals passively adapt to inevitable
technologies. Morals can and do influence
technological development – think about all the
attempts to make technologies safer and more
sustainable. But this does not alter the fact that the opposite
also holds. The phenomenon that technology and
morality mutually shape each other, I call
technomoral change. And acknowledging and dealing
with this technomoral change constitutes the first
normative challenge posed by technology’s soft
impacts.
Moral change is not caused by technological change,
but can be provoked by it. In the end, human beings
devise moral solutions to the problems of the world.
But technologies can fundamentally change the world,
solving or redefining old problems and creating new
ones. Established moral routines and ways of
understanding the world can thus become destabilized
and turn into problems. For a period of time we are
no longer sure what moral standards to apply to
those impacts, because the technologies in question
rob those standards of their self-evident relevance
and truth. Of course, in the end the destabilization
can result in a reaffirmation of the old morals or
even a deeper understanding of them (e.g. one could
argue that chastity only could become a truly moral
rather than prudential virtue after technology had
severed the link between sex and procreation and
removed the fear of pregnancy). Alternatively, a
shift towards a new broad moral consensus can
develop (e.g. few would nowadays justify different
moral standards for women and men – which is not to
say that such standards do not still exert influence
over us), or the issue can be left unresolved.
Anticipating soft
impacts
Identifying the
impacts of technology is important as a means to
establish whether a given technology is desirable or
not. In the case of new and emerging technologies, we
cannot identify the potential impacts yet, but we have
to try to anticipate them. We have seen that the way
to anticipate hard impacts is basically to calculate
‘chance multiplied by the amount of harm’. However,
this form of anticipation is not adequate in the case
of soft impacts. Here anticipation inevitably takes
the form of narrative. To understand why this is so,
we should look more closely at what soft impacts are
and how they manifest themselves. Soft impacts are
not, in contrast to hard impacts, typically
isolated events, such as physical harm or an
environmental disaster. They are rather changes in
practices.
The concept of practice has become ever more central
to modern philosophy. Since roughly halfway through
the 19th century, we have witnessed a far-reaching
albeit very gradual and uneven paradigm shift in
Western philosophy. Philosophy as a discipline has for
more than two millennia been defined by the Platonist
primacy of theoria over praxis. But philosophers have
been increasingly questioning this theoreticism,
aiming to give ontological, anthropological, epistemic
and ethical priority to praxis. Modern philosophy can
be understood as an increasingly bold exploration of
putting praxis first (Reckwitz 2002, Nicolini 2012,
Schatzki et al. 2001).
For Plato, the primary relation of humans to reality
was one of theoria, of contemplation. In his
philosophy the central sense is the eye, and human
beings are primarily knowledge subjects. Following
Plato, Western philosophy has since given priority to
the problem of true knowledge over the problem of
correct action – as it was believed that the latter
could only be based on the former. Furthermore, true
knowledge could only be about true objects, that is,
universals which could be perceived directly by one’s
‘mind’s eye’ (rationalism), or indirectly by one’s
real eye (empiricism). Central to all epistemic ideas
about true knowledge is the asymmetry between the
receiving subject and the object that presses itself
on the passive subject. Between those true
objects, only harmony and stability can exist. For
example, Platonic ideas cannot conflict, nor can the
laws of nature. As such, the domain of universals
starkly contrasts with the tangible world of
particulars in which we, as embodied beings, desire,
act, live, flourish, suffer, and die, and where peace
and harmony are at best isolated moments in a vast
ocean of conflict and strife.
All these elements of the theoria-paradigm have been
taken apart by philosophers for whom humans’ primary
relation to reality was not contemplation of
the world but praxis, the interaction with the
world that we inhabit as embodied beings. This basic
starting point is shared by philosophies as widely
divergent as Marxism, pragmatism, phenomenology,
post-structuralism, and Actor Network Theory.
Philosophers as radically different as Marx,
Nietzsche, Peirce/James/Dewey, Heidegger,
Merleau-Ponty, Wittgenstein, Arendt, Habermas,
Foucault, Rorty, Haraway, and Latour also share this
point of departure. For these philosophies and
philosophers, not Plato but the Aristotle of the Nicomachean
Ethics is the hero and main source of
inspiration. Not the (mind’s) eye, but the hand – the
touch – is the primary sense: we grasp at the world.
And whereas the relation between subject and object is
asymmetric in the case of contemplation, it is
symmetric in the case of touch: feeling a table
results from me pressing on it and the table
pressing back. This attempt to replace
asymmetric, hierarchical dichotomies with
symmetric and dynamic relations where two or more
entities mutually constitute each other belongs to the
common grammar of all philosophies that start from
praxis rather than theoria: the relation ontologically
precedes the separate entities. These philosophies
typically embrace heterogeneity (often including some
form of conflict), matter, particulars, the body, the
emotions, finitude, and so forth. They tend to
celebrate that we live in a human-made world, a
technotope, and that the distinction between humans
and artifacts has always been blurry at best. And if
they philosophize about knowledge, they will typically
point out its situated and constructed character; the
hard work necessary to produce it; its dependence on
the body (think of recent developments in Artificial
Intelligence; Brooks 1990); its being inseparable
from values and emotions; and its being acquired
through the use of material devices.
Of course, not all these elements are found in all the
philosophers and philosophies mentioned above. But the
different elements can be – and are being – assembled
into a fairly coherent philosophical approach that is
now commonly referred to as ‘practice philosophy’.
For a firmer grip on soft impacts, it is worthwhile to
see what practice-philosophers, most notably
pragmatists and phenomenologists, have had to say
about morality and ethics. They perceive morality as a
type of normativity that is distinguished by strong
evaluation: “discriminations of right and wrong,
higher or lower, which are not rendered valid by our
own desires, inclinations, or choices, but rather
stand independent of these and offer standards by
which they can be judged” (Taylor 1986). We are the
authors of morality, but we cannot change it at will.
Partly this is because morals are deeply constitutive
of our identity and engrained in us in the form of
dispositions to blame and praise, as is evidenced by
the strong link between our morals and our emotions.
For another part it is because morals precede the
individual. We may also very well be hard-wired for
altruism, as e.g. Frans de Waal (2009) argues. The
most basic component of morality is our experience of
our own vulnerability and capacity to flourish, which
enables us to recognize this vulnerability and this
capacity in other beings. Sayer (2011, p.145) puts it
like this: “People are ethical to the extent that they
are concerned about how to act with regard to others’
well-being as well as their own.” In my own words,
morals concern 1) how we should behave towards others,
with an eye to their well-being (rule ethics; what do
we owe one another), and 2) how we should behave
towards ourselves, with an eye to our well-being
(ethics of the good life).
Practice-oriented forms of ethics stress that norms
and values, including moral ones, are part and parcel
of our engagement with the world. And this engagement
is not primarily theoretical, as if we were constantly
applying explicit rules to practical questions, but
practical. To a large extent our normative and moral
know-how exists in the form of embodied knowledge, of
tacit understanding, tightly linked to our emotions
(e.g. compassion, gratitude, shame, guilt, pride,
hate, disgust, resentment, embarrassment, indignation,
humility). This know-how takes form in particular
attachments, commitments, and character dispositions
that make us value some things and detest other
things. In most cases the fact-value split carries
little weight: if I see someone in pain, I don’t first
establish whether she is in pain and then decide to help.
Witnessing innocent suffering immediately calls for
action: perception, assessment, and action are all
intermingled. Understanding morals in terms of lived
practice rather than in terms of a quasi-law book
containing generalized rules allows one to be
interested in how people live their morals in everyday
situations. In other words, one strives to understand
lay normativity (e.g. Boltanski & Thévenot 2006
(1991), Sayer 2011).
What does this digression teach us about the
anticipation of soft impacts? Soft impacts are
considerably less tangible than hard impacts, as they
manifest themselves in sometimes subtle changes in
practices, for example nursing, raising children, or
self-management. Soft impacts are not events
that can be calculated for plausibility, but they
involve changes in the manifold ways we relate to the
world, to our fellow beings, and to ourselves. And the
only way these impacts can be invoked and made
‘present’ for anticipatory reflection, is by using
thick description, or narrative. Regarding the
character of narrative, Ricoeur notes that
A story describes a sequence of actions
and experiences done by a certain number of people…
These people are presented either in situations that
change or as reacting to such change. In turn, these
changes reveal hidden aspects of the situation and
the people involved, and engender a new predicament
which calls for thought, action or both (Ricoeur
1988, 150).
This aptly describes why stories are such a good
vehicle for anticipating softer impacts of technology.
They match the three basic features of soft impacts:
their qualitative character, their moral ambiguity,
and the fact that they are always coproduced by
technologies and people who adapt their behaviors to
those technologies. They help us imagine how we
respond to changes in our environment, and invite us
to evaluate newly evolving practices from the
perspective of insiders. Moreover, stories allow for
contingencies and for different, competing responses
by different actors. So, the anticipation of soft
impacts typically takes the form of vignettes offering
snapshots of possible future practices, or scenarios
describing how current practices are destabilized
under the influence of technologies, and how they may
evolve in response to these technological challenges.
At first sight, ‘stories’ may seem to carry little
weight in heated discussions on the pros and cons of
new and emerging technologies. There are two reasons,
however, why we should not underestimate their power.
First, discussions about hard and soft impacts often
deal with the future – even in a double sense: the as
yet non-existing impacts of as yet non-existent (or
hardly existent) technologies. Stories are powerful ways to
present possible futures. Policymakers and technology
actors who dismiss stories as fantasies that should
best be ignored, willfully forget that all scientific
and technological developments are born in a cradle of
expectations, promises, and larger visions, which are
meant to mobilize professional, political, and
financial support (Brown & Michael 2003, Borup et
al. 2006). Secondly, the fact that scenarios rest on
the imagination does not deny that they are, and
should be, supported by empirical evidence, such as
what is technologically feasible, what society wants,
how technologies and practices interact, human
psychology, and so forth (Lucivero et al. 2011), in
order to gain sufficient plausibility to win the
public’s support. So, stories can be, and are,
critically tested for their plausibility.
To explore these changes in practices, we can draw on the so-called
philosophy of technological mediation (Verbeek 2010).
The basic starting point of this philosophical
approach is that technologies influence our
perceptional and practical relations to the world.
Perception is mediated, for instance, because
technologies can highlight some aspects of reality and
hide others. Action is mediated, for instance, because
technologies can enable or stimulate us to undertake
certain actions, and forbid or dissuade us from taking
others. This technological mediation doesn’t stop at
the door of our moral experience. Investigating how
our moral perception and action in the world are
mediated by technology makes us aware of recurring
patterns of technomoral change that can help us to
imagine plausible scenarios regarding future soft
impacts (Swierstra et al. 2009, Swierstra 2011,
Swierstra 2013).
For instance, technology can mediate our relation to
other stakeholders, as in the case when television
opens our hearts to the suffering of distant others.
Or vice versa, when military technology anonymizes the
victims and hides them from our hearing and seeing –
as in the case of bombs or military robots. In both
cases, our knowledge of, and interaction with, these
stakeholders is affected by technologies, with direct
results for our moral experience and judgment.
Similarly, technologies can make us more or less aware
of the consequences of our actions, which also affects
our morals. Take for example industrial production
technologies that ensure that pollution and depletion
primarily happen in faraway places or times, thus
hampering the public awareness of those consequences.
Or consider the opposite, such as large scale
computing that helps us establish that seemingly
innocuous actions – like using chlorofluorocarbons
(CFCs) as refrigerants and propellants – are actually
very harmful in the long run. Furthermore,
technologies can bring people closer together by
opening up new practical possibilities that result in
new relations and (inter)dependencies. Immanuel Kant
(1996, 1793) famously said that 'Ought implies Can,'
but the reverse can also often be true: if
technologies create new opportunities to do good or
avoid harm, new moral obligations and rights appear,
and the dividing line between bad luck and injustice
needs to be redrawn. So, our rights, obligations and
responsibilities are not immune from the technologies
that help shape our interactions with the world.
Technology also affects our ideas about the good life
and character, i.e. the domain of virtue or
eudaimonist ethics. Our desires and frustrations are
not independent of the technological landscape in
which we find ourselves, as technology directly
impacts what we do and do not perceive to be
achievable. Similarly, technologies also affect the
relation we entertain with ourselves, establishing who
we are and who we want to be. All true knowledge
starts with self-knowledge, according to Socrates, but
self-knowledge is becoming increasingly
technologically mediated. This holds true not only for
our bodies, but also for our minds, such as in
neurofeedback. General views of the world are closely
connected to our views of the self and the good life.
For example, is the world to be perceived as chaotic
or does it contain an order that we should strive to
emulate in our personal lives? Can the world be
controlled, or does it escape our grasp so that it is
therefore better to withdraw into ourselves, as the
Stoics maintained? Whether we perceive the world as
chaotic or ordered, as controllable or not, these
basic worldviews are also closely connected to our
science and technology. For a long time, for instance,
it was the progress of science and technology that
warranted an interventionist worldview. However, and
ironically, for many (post)moderns nowadays it is
exactly scientific and technological progress that
has made the world so complex as to escape our
rational grasp. True, the relation between technology
and worldview is never unequivocal or unilinear; the
same technology can allow competing worldviews. But it
is equally clear that worldviews are influenced by
science and technology in so far as they are
challenged by them.
These and other patterns of technological mediation of
moral experience allow us to some extent to draft
scenarios that explore plausible soft impacts of
emerging sciences and technologies. By this I mean
the ways emerging sciences and technologies may affect
existing practices, including the internal (moral)
rules, values and virtues implied therein. This is not
to say that such scenarios predict those impacts,
because soft impacts are by definition indeterminate:
they are co-shaped by how actors decide to respond to
the invitations, nudges or provocations that emerging
technologies bring. Exploring them in
advance enables society to be better prepared for the
technomoral future that will eventually unfold. How
will specific technologies mediate the relation
between actors and stakeholders? Will they help these
stakeholders to make themselves heard, or will those
stakeholders be all but silenced? Will the
technologies make us more or less aware of the
consequences of our choices? Will the technologies
create new opportunities, and if so, what new claims
regarding rights, obligations, and responsibilities
can be expected to accompany those novel
opportunities? Will the technologies influence how we
understand ourselves and/or how we can shape and
manipulate ourselves, and if so, how will those new
insights and affordances challenge prevailing
conceptions of the good life, of a good character, of
essential virtues, of essential goods? And how will
new technologies influence our worldview, our
foundational ideas about chaos and order, fate and
control? All of these questions can be asked in the
case of new and emerging technologies – not because
they will receive definitive and precise answers, but
because the questions provide a heuristic that can
spark private and collective imagination of how our
practices, and we, may be challenged by the emerging
technologies pervading our life-world.
One question remains at the end of this section: it
may be a challenge to pre-imagine soft impacts of
emerging technologies, but to what extent can it be
deemed a normative challenge? It is clear that this
type of anticipatory research has to combine empirical
methods and philosophical theories. To assess the way
emerging technologies may impact society requires both
philosophical theories explicating the technological
mediation of morality, and solid empirical knowledge
of existing practices – knowledge which usually
requires the input of the people who know these
practices inside out. But I also believe the
theoretical challenge is normative in two respects.
First, it is a challenge to create a public space for
these types of anticipations. In a liberal society
dominated by ‘tough-minded’ technologists and
policymakers, explorations of soft impacts are easily
discarded as being too fuzzy to take seriously. As a
result, concerns about soft impacts are typically
delegated to the private sphere of free individual
choice. We can no longer afford this privatization
strategy, since from the moment technologies enter our
life-world, soft impacts are both collective and real
and thus deserving of public attention. An easy
example is the Internet; although it does not explode
or poison us, it has fundamentally changed our
economic, academic, relational, caring, and other
practices. The new practices that have evolved in
relation to the Internet may have started from some
individual choices, but they have grown to become so
pervasive and solid, that it is hardly a realistic
option anymore for individuals to avoid this
technology. Soft impacts of emerging technologies
therefore require us to redraw the normative
boundaries between public and private matters, between
public reason and private choice.
The anticipation of soft impacts implies a normative
challenge in a second sense: the fundamental invisibility
of morals. This may seem counterintuitive at first,
but lay normativity/morality mainly exists in the form
of practical ways of doing and seeing, of tacit know-how
rather than explicit know-that. I often
illustrate this implicit character of morality to my
students by asking them to write down the moral norms
and values that they practice while in my class. What
they write down is not so important. What counts is
that this exercise makes the students realize that
we have conscious access to only a very limited set of
our own ‘norms’ and ‘values’. They vividly sense
that there must be a vast ocean of possible norms and
values at work, yet they can consciously
articulate only a very limited subset. It is
interesting to see what they do manage to write down,
for example: “Students should come on time;” “We
should listen respectfully to each other;” “Turn off
your mobile phone in class.” The interesting question
is then, why are only these norms conscious and
articulate? The answer is that these norms are visible
because they are in one respect or another
problematic. Regarding the examples mentioned above,
all students share the experience that these norms are
regularly disregarded by themselves or others. These
norms are visible, because they conflict with other
desires, preferences, or norms.
The point I am trying to make is one of hermeneutics:
of all the normativities and moralities that surround
us, we are only aware of the small subset that is
problematic. These normativities become problematic
when they conflict with competing normativities. I am
aware of a norm because it tends to conflict with some
of my preferences. Or because – as in the case of a
moral dilemma – it conflicts with another norm
demanding my loyalty. Or because in an intercultural
encounter I find out that what I considered to be
self-evident, is weird in the eyes of the other. Or
because a new technology disrupts my practical
routines, thus forcing me to reflect on the norms
embedded in those routines. This is a crucial finding
in the context of anticipating soft impacts of
emerging technologies. It basically implies that it is
impossible to first describe the existing practice,
and subsequently explore how this practice may be
disturbed by the emerging technology we seek to
understand. The normativity embedded in the current
practice only becomes visible because, and to the
extent that, it is contrasted with an alternative
practice that we can imagine resulting from the
introduction of a new technology. Clemens Driessen and
Michiel Korthals (2012) have described this mechanism
in some detail for two different emerging
technologies: so-called pig towers and in-vitro meat.
By introducing these technologies into the debate, it
became possible to articulate (some of) the meanings,
norms and values that are embedded in the current
practices of pig husbandry and eating meat. The
normative challenge inherent in devising scenarios on
the soft impacts of emerging technologies is thus to
articulate as precisely as possible, and in rich
detail, what the normative (including moral)
stakes inherent in these current practices are, and how
these stakes may be challenged by the new
technologies. At the same time, we must realize that
our articulation of these normativities is always
restricted by the specific ways they are disclosed by
the technologies in question.
Avoiding relativism
It is realistic to
assume that in a different technological environment,
our present morals will have co-evolved – not as
passive adaptation, but as the result of active coping
with new realities. But doesn’t this observation open
the door for moral relativism? This is the last
normative challenge posed by soft impacts.
Scenarios are a good way to explore soft impacts,
including technomoral change. But from what standpoint
can one judge the morals of a possible future? Here we
have to avoid two extremes. Moral presentism
simply favors current morals over the future ones we
imagined as part of the scenario exercise. This
precludes the possibility that our future selves, or
our children, might have learned something worth
knowing and applying in the present. Moral futurism
rests on the opposite mistake of precluding the
possibility that we presently possess a sharper
insight into rights and wrongs than our future selves
will. Once a technological opportunity exists, it is
hard to pause and critically reflect on the novel
rights and duties that this opportunity has called
into existence.
How can one avoid moral relativism, either in its form
of moral presentism or in its form of moral futurism?
The obvious answer is to look for moral principles
that are so basic that they are somehow immune to the
flow of time. Such foundationalism, however, is
incompatible with the turn from theoria to praxis, as
I sketched it above. Like relativism, foundationalism
risks deflating core intellectual virtues such as
openness, curiosity, reflexivity, creativity and
willingness to learn. These virtues are indispensable
in a technological culture, as they reflect its
dynamism. We do not need an Archimedean point to
decide on the best morality. In our search for (moral)
truth it suffices, as Hans-Georg Gadamer (1986)
pointed out decades ago in his book on philosophical
hermeneutics, to seek out conflicting perspectives
that invite us to question our prejudices. Moral
learning can occur whenever people are confronted with
‘strange’ or new conflicting morals – even when, as in
art, we have to devise these conflicting perspectives
in our imagination. Moral plurality invites
reflection, (self-)criticism, dialogue and the open
exchange of ideas. By developing technomoral
scenarios, we travel to future worlds where different
technologies and morals prevail. It is by seeking this
confrontation between present and imagined morality,
that we learn to guide technological change in a
manner both reflective and flexible, without reifying
either the present or the possible future.
Much of ethics is modeled on law, which is geared
towards taking the right decision at a certain moment,
e.g. left or right; guilty or innocent? In the case of
technology, this type of ethics always leads to the
choice: Prohibit or Allow? If prohibiting, then there
should be clear and present harm. If allowing, end of
discussion and reflection. However, if we take a
practice-based approach to ethics, alternative
paradigms crystallize. For example: how to deal with
your adolescent child? Or: how to coexist with your
neighbours and co-citizens? In these contexts Yes or
No choices make little sense. Here we are dealing with
evolving relations that require permanent negotiating
and updating: articulating and investigating what is
important in life and in society, checking whether
existing practices are still the best way to deal with
that challenge, and trying out different coping strategies
such as peaceful cohabitation, separation of domains,
outright conflict, or compromise. This is neither
relativism nor foundationalism; this is coping.
Conclusion
In this paper I have
argued that we can no longer afford to ignore
technology’s soft impacts, as this type of impact is
becoming increasingly prominent in affluent societies
where people have sufficient resources to pursue
self-realization and where technologies are becoming
ubiquitous in the life-world. These soft impacts come
with their own type of normative challenges. The first
challenge is to acknowledge the mutual shaping of
technology and morality. The second one is to
anticipate soft impacts, which requires a rich and
thick description of our morally-laden current
practices in the light of plausible technomoral change
provoked by emerging technologies. The third and last
challenge is to avoid both relativism and
foundationalism, by opting for an open and learning
attitude vis-à-vis the ways new and emerging
technologies put our current morals into question.
Notes
1 I want to thank the
anonymous reviewer for some very astute comments on this
article, to which I hope to have done justice in this
version.
2 If a party causes harm by
misusing a technology, that is usually not regarded as a
technological risk in the strict sense, but it can be
‘adopted’ as a responsibility of regulators.
3 Of course, ‘indeterminate
impacts’ is something of an oxymoron, as these ‘impacts’
are co-produced by society/the user, which/who is
therefore not a passive dupe of technology but a
co-producer of socio-technical phenomena.
References
Boltanski, L., & Thévenot, L. (2006). On justification: Economies of worth. Princeton: Princeton University Press. (Original version in French: 1991)
Borup, M., Brown, N., Konrad, K., & Van Lente, H. (2006). The sociology of expectations in science and technology. Technology Analysis & Strategic Management, 18(3-4), 285-298.
Brooks, R. A. (1990). Elephants don't play chess. Robotics and Autonomous Systems, 6(1), 3-15.
Brown, N., & Michael, M. (2003). A sociology of expectations: Retrospecting prospects and prospecting retrospects. Technology Analysis & Strategic Management, 15(1), 3-18.
Carr, N. (2011). The shallows: What the Internet is doing to our brains. New York: W.W. Norton & Company.
De Waal, F. (2009). Primates and philosophers: How morality evolved. Princeton: Princeton University Press.
Dewey, J. (1954). The later works. Athens, OH: Swallow.
Dewey, J. (1994). The moral writings of John Dewey (revised edition), edited by J. Gouinlock. Amherst, NY: Prometheus.
Driessen, C., & Korthals, M. (2012). Pig towers and in vitro meat: Disclosing moral worlds by design. Social Studies of Science, 42(6), 797-820.
Gadamer, H.-G. (1986). Wahrheit und Methode: Grundzüge einer philosophischen Hermeneutik (Band 1). Tübingen: J.C.B. Mohr (Paul Siebeck).
Jonas, H. (1973). Technology and responsibility: Reflections on the new tasks of ethics. Social Research, 40(1), 31-54.
Kant, I. (1996 [1793]). Religion within the boundaries of mere reason (trans. G. di Giovanni). In A. W. Wood & G. di Giovanni (eds.), Immanuel Kant: Religion and rational theology. Cambridge: Cambridge University Press, p. 92; as cited in Stern, R. (2004). Does 'ought' imply 'can'? And did Kant think it does? Utilitas, 16(1), 42-61.
Ketting, E. (2000). De invloed van orale anticonceptie op de maatschappij. Nederlands Tijdschrift voor Geneeskunde, 144, 283-286.
Lucivero, F., Swierstra, T., & Boenink, M. (2011). Assessing expectations: Towards a toolbox for an ethics of emerging technologies. NanoEthics, 5(2), 129-141.
Nicolini, D. (2012). Practice theory, work, and organization: An introduction. Oxford: Oxford University Press.
Popper, K. R. (1962). The open society and its enemies (two volumes). London: Routledge & Kegan Paul. (Original edition 1945)
Porter, T. M. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton: Princeton University Press.
Reckwitz, A. (2002). Toward a theory of social practices: A development in culturalist theorizing. European Journal of Social Theory, 5(2), 243-263.
Renn, O., & Roco, M. (2006). Nanotechnology risk governance (IRGC White Paper No. 2). Geneva: International Risk Governance Council.
Ricoeur, P. (1988). Time and narrative (3 vols.). Chicago: University of Chicago Press.
Roeser, S. (2006). The role of emotions in judging the moral acceptability of risks. Safety Science, 44(8), 689-700.
Sayer, A. (2011). Why things matter to people: Social science, values and ethical life. Cambridge: Cambridge University Press.
Schatzki, T. R., Knorr-Cetina, K., & von Savigny, E. (eds.) (2001). The practice turn in contemporary theory. Psychology Press.
Slovic, P. (1999). Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield. Risk Analysis, 19(4), 689-701.
Stirling, A. (2003). Risk, uncertainty and precaution: Some instrumental implications from the social sciences. In F. Berkhout, M. Leach, & I. Scoones (eds.), Negotiating change. London: Edward Elgar, 33-76.
Swierstra, T. (2011). Heracliteïsche ethiek. Omgaan met de soft impacts van technologie (inaugural lecture). Maastricht: Maastricht University. ISBN 978-905-681-3789.
Swierstra, T. (2013). Nanotechnology and technomoral change. Etica & Politica/Ethics & Politics, 15(1), 200-219.
Swierstra, T., Boenink, M., & Stemerding, D. (2009). Exploring techno-moral change: The case of the obesity pill. In P. Sollie & M. Düwell (eds.), Evaluating new technologies: Methodological problems for the ethical assessment of technology developments. Dordrecht: Springer, 119-138.
Swierstra, T., & Rip, A. (2007). Nano-ethics as NEST-ethics: Patterns of moral argumentation about new and emerging science and technology. NanoEthics, 1(1), 3-20.
Swierstra, T., & te Molder, H. (2012). Risk and soft impacts. In S. Roeser, R. Hillerbrand, P. Sandin, & M. Peterson (eds.), Handbook of risk theory. Dordrecht: Springer, 1049-1066.
Swierstra, T., & Waelbers, K. (2012). Designing a good life: A matrix for the technological mediation of morality. Science and Engineering Ethics, 18(1), 157-172.
Taylor, C. (1986). Sources of the self: The making of the modern identity. Cambridge, MA: Harvard University Press.
Turkle, S. (2010). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
Van der Burg, S. (2009). Taking the "soft impacts" of technology into account: Broadening the discourse in research practice. Social Epistemology, 23(3-4), 301-316.
Van Est, R., et al. (2014). From Bio to NBIC convergence – From medical practice to daily life. Report written for the Council of Europe, Committee on Bioethics. The Hague: Rathenau Instituut.
Verbeek, P. P. (2010). What things do: Philosophical reflections on technology, agency, and design. University Park, PA: Penn State Press.
Walzer, M. (1994). Thick and thin: Moral argument at home and abroad. Notre Dame: University of Notre Dame Press.