Etikk i praksis. Nordic Journal of Applied Ethics (2023), 17(2), 53-67
http://dx.doi.org/10.5324/eip.v17i2.5056
Early View publication date: 16 December 2023
Is it getting too personal?
On personalized advertising and autonomy
Sebastian Jon Holmen
Roskilde University, Department
of Communication and Arts, Denmark
It has
recently been suggested that
personalized advertising is
often more of an affront to
a person's autonomy and thus
more morally worrisome than
its generic counterpart
precisely because it
involves or takes advantage
of such personalization.
This paper argues that
central reasons put forward
to support this claim are
unpersuasive and that
generic and personalized
advertising should therefore
be treated as morally on par
in terms of their potential
to undermine consumer
autonomy. The paper then
suggests that, if this is
true, scholars who defend
the existence of moral
asymmetry between
personalized and generic
advertising in terms of
their effect on consumer
autonomy need to choose from
among three argumentative
avenues. However, none of
these avenues is likely to
be particularly attractive
for a defender of the
asymmetry.
Keywords: personalized advertising, generic advertising, autonomy
Introduction
Philosophers,
social scientists, law scholars and others
have recently argued that cases like Personalized,
where efforts to sell a product are
based on data points about specific
consumers’ or groups of consumers’
preferences, habits and cognitive
vulnerabilities may pose a threat to
their autonomy (for
overviews, see Tsamados et al. 2021;
Mittelstadt et al. 2016; Taddeo and
Floridi 2018). Indeed, some even argue that
such personalized advertising is
often more of an affront
to a person’s autonomy and thus more morally
worrisome than its generic counterpart –
precisely because it involves or takes
advantage of such personalization. Generic
advertising, on the other hand, is
based on statistical information regarding general
preferences, habits and cognitive
vulnerabilities. At a general level, a
person is autonomous if and only if (i) she
has the ability to identify what she has
good reason to do, (ii) is able to be moved
by her own reasoning, and (iii) can act in
accordance with these reasons (Buss
and Westlund 2018). While many theorists disagree
about how to more precisely specify the
content of these conditions, conditions (i)
and (ii) are related to what has been termed
the decisional dimension of
autonomy, that is, to the autonomous
formulation and determination of what ends
to pursue. Condition (iii) relates to the practical
dimension of autonomy. More
specifically, it stipulates that persons
must to some extent be able to pursue the
ends they have set for themselves.1
It has been argued that marketing efforts
like advertising can potentially compromise
especially the decisional dimension of
autonomy (see
e.g. Anker 2020).2
As we shall see in what follows, the
arguments positing an ethically relevant
difference between generic and personalized
advertising in terms of their effect on
autonomy can be understood to involve
conditions belonging to both the decisional
and practical dimensions of personal
autonomy. What I shall argue in this paper,
however, is that central reasons put forward
to support this claim are unpersuasive and
that generic and personalized advertising
should therefore be treated as morally on
par in terms of their potential to undermine
consumer autonomy. I will then suggest that
if this is true, scholars who defend the
existence of a moral asymmetry between
personalized and generic advertising in
terms of their effect on consumer autonomy
need to choose from among three
argumentative avenues. However, none of
these avenues seems particularly attractive
for friends of the asymmetry.
Predicting behaviour without consent
The
first autonomy-based concern specifically
raised in relation to personalized
advertising that we shall consider is
based on the fact that a company that
sends out such ads bases its product
choices on the user’s past behaviour and a
prediction of their future preferences.
This was exactly how we imagined that the
retailers would attempt to market their
bread makers in Personalized. Why
might one find this practice to be an
affront to a person’s autonomy? One reason
might be that individuals have not
provided informed consent to having their
future preferences predicted. Such predictions
would show improper respect for the
decisional dimension of an individual’s
autonomy. In their widely cited paper
laying out the potential ethical issues
involved in the collection and use of big
data, Herschel and Miori, for example,
claim that:

[…] with Big Data individuals are frequently represented as data points that are then used to manipulate what the person will view in the future. That is, information is presented to individuals online that Big Data calculations determine best reflects their projected preferences based upon their previous search and online page view history. This algorithmic manipulation presumes the will of the individual without their explicit consent. (Herschel and Miori 2017: 34, my italics)
As
I understand these authors, the crux of the
matter is that individuals should be
consulted before predictions about their
future preferences are made, to show proper
respect for their decisional autonomy. Many
might find this view intuitively appealing
because it tracks the views expounded in
other contexts, such as in biomedical ethics (Beauchamp
and Childress 2009). Showing the proper respect
for individuals’ decisional autonomy,
according to this view, requires obtaining
their informed consent about matters that
affect them. Upon further scrutiny, however,
it becomes clear that this view must be
rejected, because a claim to the effect that
there is a consent-based moral constraint on
making predictions about persons’ future
preferences is false. By way of example,
suppose that I go to a store to buy a pair
of shoes. I never buy my shoes anywhere
else, and I always buy shoes with laces.
Suppose further that the shoes I choose this
day do not come with shoelaces and
these must be purchased separately. Lastly,
suppose that, when I go to the counter to
pay for the shoes, the employee at the
counter – having observed my earlier laced
shoe purchases on several occasions –
presents me with a selection of shoelaces in
the belief that I will prefer to wear shoes
that can be laced. On Herschel and Miori’s
account, this would seem to be a morally
dubious action by the employee because he
has predicted my preferences (or, “presumed
my will”) without my consent. However,
surely that cannot be right, and hence, the
consent-based constraint should be rejected.3
It should be rejected, because a moral
constraint that invites us to consider acts
that are clearly morally innocuous as
morally dubious or bad – even wrong – is
itself highly dubious. Some might claim that
this conclusion is premature because
the consent-based constraint applies only to
algorithmic predictions, not the kind of
prediction my case relies on. However, this
seems ad hoc, absent a cogent
explanation of what makes these two types of
predictions relevantly different, and I
cannot think of what would comprise such an
explanation.4
In sum, if what I have
argued in this section is true, then
personalized ads are not more morally
dubious qua more autonomy violating
than generic ads because they assume
the will of the person to whom the ad
is served. This is so because the view
that there is a consent-based
constraint on making predictions about
a person’s future preferences is false. Personalization limits information and options
The second objection to personalized ads also turns on the personalized selection and presentation of information that it involves; however, it highlights that ads tailored to a person or group will result in diminished personal autonomy because of the information such ads do not offer regarding other purchasing alternatives.5 Obviously, all ads, whether generic or personalized, likely leave out some information about product alternatives, but the narrowing of alternatives might be more pronounced when the range is customized based on data about a consumer’s preferences or search history, for example. To provide a somewhat crude example, if I buy myself a new pair of golf clubs online, I might in the future be presented with ads for other types of golf equipment rather than, say, ads attempting to sell me rock-climbing gear. Alternatively, I might be presented only with ads for the particular brand of golf clubs that I have purchased rather than product alternatives. This filtering of information has led some scholars to sound the alarm. Jeannie Marie Paterson et al., for example, suggest that
[…] by removing alternative
options from consumers’ sight, targeted
advertising narrows their opportunities for
choice. This means consumers are making
decisions from a position of less than full
information, undermining the preconditions
for the exercise of autonomy. This may well
lead to a reduced number of and variation in
the overall options presented to consumers.
In other words, consumers may be constrained
in their own echo-chambers of advertising
that constrain their world view on the basis
of their constructed digital profiles. (Paterson
et al. 2021: 10)
In
a similar vein, Eliza Mik (2016) has argued that when the range
of product options that a consumer is
presented with is personalized, “[…] his
autonomy is limited as he is not given the
opportunity to choose from – or become aware
of – the full range of available options” (p.
21). I think we can discern two
somewhat overlapping but ultimately distinct
autonomy-related concerns from these quotes,
and it is not clear to me whether Paterson
et al. and Mik have both or only one of them
in mind. On the one hand, these authors seem
to take issue with the fact that
personalizing ads is likely to worsen the
available level of information about product
options, information that could be important
when consumers decide which product best
aligns with their ends. Call this the disclosure
objection. On the other hand, it seems
that the mere fact that fewer action
alternatives are presented to consumers
motivates them to raise a moral red flag in
the name of autonomy. Call this the option
objection. Let us consider each in
turn.
The disclosure objection derives some plausibility from the idea that disclosure of information relevant to a person’s medical decision-making, for example, is often highlighted as a key to respecting their autonomy (Beauchamp and Childress 2009: 121f). It is obvious why this is the case; without information about different medical courses of action, it is impossible for a patient to judge which one aligns best with his or her preferences, values or long-term goals. Indeed, and perhaps more relevant for the present context, Thomas Anker (2020) has persuasively argued that failing to disclose “[…] information that is relevant, proportionate, sufficient and understandable to the average, targeted consumer […]” (537) about a particular product fails to respect a consumer’s decisional autonomy. Paterson et al. and Mik may hold that something similar is true in regard to the range of commercial products presented.6 That is, personalized ads may often disclose too little information for a customer to make an informed choice about what product best aligns with their ends, such as their needs, preferences or values. However, I see at least two reasons why we should be sceptical of the disclosure objection. First, supposing that the personalized ads are well-informed, the products offered would likely be ones that do, in fact, align with the preferences and values of the receiver. Indeed, part of the very motivation for personalizing ads in the first place is presumably to attempt to offer us exactly the product(s) that are a good match for us, on the presumption that we will then more likely buy said product(s). Second, insisting, as Paterson et al. do, that “[…] making decisions from a position of less than full information […]” (Paterson et al. 2021: 10, my italics) undermines the exercise of personal autonomy goes too far. 
If full information were necessary for such decisions, then the fact that a person lacked knowledge about product alternative B at the time of purchasing product A would mean that their decision to buy A was non-autonomous, and this would be so even if they were aware of option alternatives C through Z. The problem is that this sets the bar far too high for what autonomy requires, and it implies that most (all?) of our decisions fail to be autonomous since most (all?) of us lack knowledge of at least some possible option alternatives. If this is right, then Paterson et al. and Mik have yet to offer us a plausible explanation for when personalized ads undermine consumer autonomy by restricting their knowledge of different purchasing options. One way that they might attempt to provide such an explanation would be to build on the work of Thomas Anker (2020), which similarly focuses on information about particular products. That is, perhaps they could attempt to argue that information concerning the range of available products presented to a consumer via a piece of advertising need not be complete (indeed, a complete list of products may undermine autonomy by causing informational overload), but nevertheless must meet certain conditions in order not to undermine autonomy. For example, the range of products presented must be relevant and proportional, that is, the information can be processed by the average targeted consumer “[…] within a reasonable period of time” (Anker 2020: 533). While I believe there may be something to this way of developing the objection in regard to the presentation of information about product alternatives in general, it is not entirely clear to me whether this approach would succeed in driving a wedge between personalized advertising and generic advertising in terms of their putative effect on consumer autonomy.
For such an objection to stick, it would need to be argued that only personalized ads fail to meet the specified condition(s) (whatever they may more precisely be), and I for one cannot think of any plausible conditions that would provide the grounds from which to argue this.
Let
us now turn to the option objection.
This objection, as will be recalled,
concerns the relationship between
reducing the range of options available
to a person and a reduction of their
autonomy. This objection shifts the
point of concern from the informational
background that consumers make
purchasing decisions against – that is,
the conditions for the proper formation
of autonomous choice regarding what ends
to pursue – to one concerning agents’
ability to act to attempt to achieve an
end they have autonomously set for
themselves. That is, the focus is moved
from the decisional dimension of
autonomy to its practical dimension.
According to a view popular among some
autonomy theorists, an agent is
practically autonomous only if he has
“[…] adequate options available for him
to choose from” (Raz 1988:
373; see also Hurka 1987). So, perhaps the concern
that Paterson et al. and Mik have in
mind can be formulated in the following
way: (1) presenting an agent with a
personalized and thus more limited range
of product options is a way of (2)
reducing his or her action alternatives
and (3) reducing action alternatives
undermines practical autonomy. However,
although the option objection thus
formulated does have some intuitive
appeal, it should, in my view,
be rejected. The reason is that the move
from (1) to (2) is illegitimate.7
Specifically, it is false that the
action alternatives in question – that
is, acquiring goods other than the ones
offered by the personalized advertising
– are reduced by the fact that a limited
range of products is being presented to
the agent. That is, in principle, the
options to purchase alternative products
remain completely unaffected by
personalized ads: the fact that I am
offered a type of shampoo by a
personalized ad has no effect on whether
I have the option to go out and buy
another type of shampoo. Hence, agents’
practical autonomy is not limited by
personalized ads, and the option
objection fails. However, perhaps
this is too quick. Some might want to
follow Hurka (1987) in arguing that to be
practically autonomous, “[i]t is not
sufficient for autonomous action that a
person has many options open. He must,
most obviously, know about the options”
(367). If this view is correct, it
implies that consumers’ beliefs about
their option alternatives are central to
their being practically autonomous. On
this basis, some commentators might want
to argue that personalized ads could
reasonably cause consumers to believe
that they have no action alternatives
(even if they do have these
alternatives). Thus, even if what I
argued above is correct, consumers'
practical autonomy might still be
undermined by personalized ads because
consumers are then effectively barred
from acting in pursuit of their
self-chosen ends. I am sympathetic to
the general view that agents’ beliefs
about their option alternatives matter
for an assessment of whether they are
practically autonomous. However, it is
ultimately an empirical question whether
personalized ads ever have the suggested
radical effects on individuals’ beliefs
(i.e., believing that there are no
available alternatives), a question for
which, to the best of my knowledge, we
do not yet have an answer. Furthermore,
and more importantly, if personalized
ads can affect practical autonomy by
affecting consumers’ beliefs about
action alternatives, then personalized
ads may sometimes enhance rather than
impede practical autonomy. For example,
if I receive a personalized ad offering
me an alternative product Y to a product
X that I already use, and I happen to
believe that X is the only product
available, then the offering of product
Y enhances my practical autonomy by
making me aware of an option alternative
that was already there. If this is true,
then on this interpretation of what
practical autonomy requires, we would
have another reason to reject the move
from (1) to (2).
To summarize briefly, in
this section I have argued that
concerns based on the idea that
consumer autonomy is undermined by the
information that personalized ads
leave out are unpersuasive.
Specifically, two somewhat overlapping
concerns that commentators have raised
about personalized ads – turning on
the restricted background information
for forming autonomous choice and the
limiting of options for exercising
autonomous choice – fail to establish
that personalized ads undermine
autonomy more than generic forms of
advertising do.
Taking advantage of
irrationality and vulnerability