
Review
Clicks and tricks: The dark art of online persuasion
Patrick Fagan
Internet users are inundated with attempts to persuade,
including digital nudges like defaults, friction, and reinforcement. When these nudges fail to be transparent, optional, and
beneficial, they can become ‘dark patterns’, categorised here
under the acronym FORCES (Frame, Obstruct, Ruse, Compel,
Entangle, Seduce). Elsewhere, psychological principles like
negativity bias, the curiosity gap, and fluency are exploited to
make social content viral, while more covert tactics including
astroturfing, meta-nudging, and inoculation are used to
manufacture consensus. The power of these techniques is set
to increase in line with technological advances such as predictive algorithms, generative AI, and virtual reality. Digital
nudges can be used for altruistic purposes including protection
against manipulation, but behavioural interventions have
mixed effects at best.
Addresses
Patrick Fagan, University of the Arts London, United Kingdom
Email address: pf@patrickfagan.co.uk (P. Fagan)
Introduction
Humans are ‘cognitive misers’ with limited conscious
brainpower, relying for most day-to-day decisions on
emotion and heuristics [1]. Far from making people
more sophisticated, the ubiquitous information age may
result in even less cognitive reflection thanks to an
outsourcing of careful thought to ‘the brain in the
pocket’ [2] and the effects of technology developments
on emotional dysregulation [3]. Few people are sufficiently equipped to manage the cognitive vulnerabilities
associated with online decision-making [4], and these vulnerabilities
may have a deleterious impact on mental health [5]. It is
crucial, therefore, to catalogue the influence strategies
used to alter netizens' decisions: the 'nudges', the
technological advances empowering them, and the
propagation of ideas online.
Current Opinion in Psychology 2024, 58:101844
This review comes from a themed issue on Nudges (2025)
Edited by Brian Bauer and N.B. Schmidt
For a complete overview of the section, refer to the themed issue on Nudges (2025)
Available online 10 July 2024
https://doi.org/10.1016/j.copsyc.2024.101844
2352-250X/Crown Copyright © 2024 Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Digital nudges
Two recent papers [6,7] have codified digital nudges,
which can be broadly categorised into seven types: information provision (such as adding labels or suggesting
alternatives); framing (such that some choices seem more
attractive or popular); salience, reminders, and prompts;
defaults and commitments; adding or reducing friction;
reinforcing behaviours with feedback, rewards, and punishments; and engaging users emotionally through elements like urgency, imagery, or anthropomorphism. The
papers each identified a category of ‘deceptive’ nudges,
though in practice these used specific psychological
principles found in the other categories.
Distinguishing between ‘good’ and ‘bad’ nudges, a study
of Uber drivers [8] used Thaler’s [9] litmus test: a
nudge is ‘bad’ if it is misleading or not transparent, if it is
hard to opt out of, and if it is unlikely to improve the
welfare of the person being nudged. For example, Uber’s
driver rating is a ‘bad’ nudge since the scoring is not
transparent, there is no opt-out, and it is not always in
the driver’s interest to try and please difficult passengers; while being able to earn badges (e.g., ‘Great conversation’), being clear, optional, and beneficial, is
‘good’. Driver satisfaction with these nudges concurred
with the categorisation as good or bad. Another paper
defined ‘dark’ nudges in similar terms, stating that they
are misleading (i.e., covert, deceptive, asymmetric, or
obscuring), restrictive, or unfair [10].
Accordingly, several studies [11,12,13,14] have
collated so-called ‘dark nudges’ or ‘dark patterns’. Six
key themes emerge: Frame, Obstruct, Ruse, Compel,
Entangle, Seduce (FORCES), as summarised in
Table 1. On effectiveness, one experiment found that
such dark patterns can indeed increase purchase
impulsivity [15].
Importantly, while nudges can be used nefariously
online, there is also potential for good. For example, they
can be used to help consumers make healthier grocery
choices, such as prefilling carts with healthy goods or
making healthier goods more visually salient [16].
Nudges can also ironically be used to counter manipulation. One experiment found that postponing a purchase decision, being distracted from it, or reflecting on
the reasons to buy or not, all reduced impulsive purchasing in the presence of dark nudges [15]. Elsewhere,
a smartphone app was designed to nudge people towards
Table 1
Author's typology of dark nudges based on recent reviews [11–14].

Frame (information is presented in a way that biases choice): extraneous reference prices (e.g., old sale price vs. new sale price); fake or ambiguous scarcity claims (e.g., low stock, limited time); fake social proof and parasocial pressure (e.g., high-demand labels, reviews, endorsements, testimonials); decoys (i.e., a product added to a set simply to make the others look more attractive); false hierarchies, in which one option is more visually salient than the others; confirmshaming (e.g., 'No thanks, I don't want to be a better marketer').

Obstruct (it is made harder for users to do what they intended to do): roach motel tactics, where it is easy to subscribe or gain access but hard (or impossible) to leave or log out; roadblocks to actions, like time delays to account deletion; price obfuscation (e.g., preventing price comparison, bundling prices, or using intermediate currencies); adding extra steps, making navigation or privacy policies labyrinthine, or hiding information; using a foreign language, complex wording, or jargon to inhibit understanding.

Ruse (users are tricked into making a choice other than what they intended): products sneaked into the basket, usually via an obscured opt-out button earlier in the process; drip pricing and hidden costs like delivery fees added to the basket at the end; ads with a delayed appearance so that users accidentally click on them when they meant to click something else; disguised ads (e.g., that look like a download button); ambiguous information causing users to get a different outcome to what they expected; bait and switch, where the user sets out to do one thing but something else happens instead; trick questions (e.g., a list of checkboxes where the first means opt out and the second means opt in); distraction, e.g., focusing attention on one element to distract from a small opt-out checkbox; sponsored adverts disguised as normal content.

Compel (users are forced to do something they may not have wanted to do): forced continuity, like automatically charging a credit card once a free trial comes to an end; grinding, where gamers are forced to repeat the same process in order to secure game elements like badges; forced registration to use a website, and pay-to-play; nagging (e.g., to buy the premium version of a service); Privacy Zuckering and Contact Zuckering, wherein users are tricked into sharing data or address book contacts; defaults and pre-selected options.

Entangle (users are kept occupied for longer than they may have intended): playing by appointment (users are forced to use a service at specific times lest they lose advantages or achievements); fake notifications (e.g., about content never interacted with) to draw users (back) in; pausing notifications rather than being able to stop them permanently; never-ending autoplay (e.g., a new video plays when the current one is finished); infinite scroll (i.e., new content continuously loads at the bottom of the feed); casino pull-to-refresh (i.e., users get an animated refresh of content by swiping down); time fog (e.g., hiding the smartphone clock so the amount of time spent in the app is not 'felt').

Seduce (users are engaged emotionally rather than rationally): highly emotive language or imagery; cuteness; pressured selling (e.g., under time pressure); bamboozlement, i.e., choice overload or information overload; guilty pleasures (i.e., personalised suggestions that prey on individual vulnerabilities).
more conscious social media use [17], detecting users’
swiping behaviours to infer their ‘infinite scroll’ and
‘pull-to-refresh’ habits and then encouraging them to
consider taking a break if needed. Across a 2-week
intervention with 17 users, the app reduced compulsive
pull-to-refreshes and reduced the average length of
scrolling sessions (though there was no impact on total
time spent on social media).
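To make the mechanism concrete, the core logic of such an intervention can be sketched as follows. This is an illustrative outline only, not the implementation reported in [17]; the thresholds, event names, and prompt wording are hypothetical.

import time

# Illustrative sketch only: thresholds, event names, and the prompt wording are
# hypothetical and not taken from the app evaluated in [17].
PULL_REFRESH_LIMIT = 5          # prompt after this many refreshes in a session
SESSION_LENGTH_LIMIT = 10 * 60  # seconds of continuous use before a prompt

class ConsciousUseMonitor:
    def __init__(self):
        self.session_start = time.time()
        self.pull_refresh_count = 0

    def on_pull_to_refresh(self):
        self.pull_refresh_count += 1
        return self._maybe_prompt()

    def on_scroll(self):
        return self._maybe_prompt()

    def _maybe_prompt(self):
        too_many_refreshes = self.pull_refresh_count >= PULL_REFRESH_LIMIT
        long_session = time.time() - self.session_start >= SESSION_LENGTH_LIMIT
        return "Consider taking a break?" if too_many_refreshes or long_session else None

monitor = ConsciousUseMonitor()
for _ in range(PULL_REFRESH_LIMIT):
    prompt = monitor.on_pull_to_refresh()
print(prompt)  # the nudge fires on the fifth refresh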
Technological advances
The usefulness of specific nudges can vary according to
the psychological make-up of their audience. A review
[18] argued that individual differences can be used to
make large-scale behaviour-change interventions more
personalised (either by matching content to audiences
or vice-versa) and thus more effective, pointing to evidence in the domains of political campaigning, health,
education, consumer psychology, and organisational
psychology. For example, one experiment demonstrated
the efficacy of personality-matched messages in politics
(e.g., “bring out the hero in you” for extraverts, and
“make a small contribution” for introverts) [19]. The
appeal of latent image features has been linked to the
Big Five via machine learning models (with, for example,
agreeableness being associated with preferred number
of people, introversion with level of detail, and neuroticism with number of cats), suggesting that communications can be targeted to audiences based on
aesthetics [20].
This personalised persuasion is contingent on being able
to detect the personality of the audience. An empirical
review, titled ‘Can Machines Read Our Minds’ [21],
highlighted how samples of behaviour taken from online
interactions (so-called ‘digital footprints’) could be
subjected to machine learning algorithms to automatically infer a wide range of psychological attributes, such
as the prediction of personality from gait, depression
from tweets, and sexual orientation from profile photos.
For instance, a computational text model based on
fiction-writing subreddits predicted personality with an
average performance of r = 0.33; examples of linguistic
markers included swear words for disagreeableness, the
word ‘game’ for introversion, and ‘damage’ for neuroticism [22].
Indeed, a meta-analysis of 21 studies demonstrated how
the Big Five personality traits could be predicted from
smartphone data (a phenomenon the authors called
‘digital phenotyping’), with an association of r = 0.35 for
extraversion, and associations ranging from r = 0.23 to
0.25 for the other four traits [23]. For instance, extraverts show a higher frequency of calling behaviours,
while neuroticism is linked to more time spent
consuming media. Meanwhile, a second meta-analysis
[24] similarly demonstrated how the Big Five could be
predicted from social media footprints, with correlations
ranging from 0.29 (agreeableness) to 0.40 (extraversion). Notably, targeting item-level nuances (e.g.,
gregariousness as a facet of extraversion) may lead to
small but meaningful improvements in prediction accuracy [25].
Putting it all together, a large Australian bank predicted
customers’ personality from their interactions’ text and
voice data, using this to send either personality-targeted
or generic advertising messages and finding a conversion
rate of 2.24% for the former and 1.24% for the
latter [26].
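The practical size of such a gap can be made explicit with a standard two-proportion z-test. The sketch below is purely illustrative: only the 2.24% and 1.24% conversion rates come from the study [26], while the exposure counts per condition are hypothetical placeholders.

from math import sqrt, erfc

# Illustrative only: the 2.24% and 1.24% conversion rates come from the study
# summarised above [26]; the exposure counts per condition are hypothetical.
n_targeted, n_generic = 10_000, 10_000
conv_targeted = round(0.0224 * n_targeted)   # 224 hypothetical conversions
conv_generic = round(0.0124 * n_generic)     # 124 hypothetical conversions

p1, p2 = conv_targeted / n_targeted, conv_generic / n_generic
p_pool = (conv_targeted + conv_generic) / (n_targeted + n_generic)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_targeted + 1 / n_generic))
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value under the normal approximation

print(f"relative lift: {(p1 - p2) / p2:.0%}, z = {z:.2f}, p = {p_value:.1e}")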
Of course, generative AI has enormous potential to make
this kind of personalised persuasion scalable: a series of
four studies [27] found that, across 33 messages tested,
61% of personalised messages produced by ChatGPT
were directionally and significantly more effective than
non-personalised equivalents (a proportion significantly
higher than chance).
Indeed, across three experiments and a sample of 4836
participants, messages created by ChatGPT were
persuasive across a range of political issues including gun
rights and carbon taxes; in fact, AI-generated messages
were as persuasive as those written by humans [28], a
finding echoed elsewhere [29]. There is also some
mixed evidence that artificial pictures and videos
created by generative AI (‘deepfakes’) can create false
memories: people were significantly more likely to
‘remember’ Jim Carrey having starred in a remake of
The Shining (which he hadn’t) if the prompt was
accompanied by a deepfake photo or video [30].
The persuasiveness of AI-generated content is mediated by its perceived verisimilitude and its perceived
creativity, since creative content is more attention-grabbing and engaging [31]; another mechanism is the
vividness advantage conferred on AI-generated photo or
video over text, which increases credibility and
engagement [32]. Concerningly, human ability to detect
‘deepfake’ images (of faces) is only just above chance
and immune to interventions, while confidence in this
ability is high [33].
Similarly, the burgeoning landscape of virtual and
augmented reality and the metaverse is fertile ground
for psychological influence. A meta-analysis of 39 social
studies found that virtual reality had a significantly
bigger impact on social attitudes around topics like
migration, mental health, and intergroup conflict than
did non-immersive interventions [34].
Propagation of ideas
Psychological principles are used to enhance the virality
of ideas, and one of the biggest predictors is emotional
arousal. An analysis of 3000 tweets from Austrian politicians found that high emotional arousal increased the
likelihood of being reshared [35], while an analysis of
10,141 influencer tweets found that sharing is higher if
emotions are more prevalent than argument quality
[36]. News is similarly more likely to be shared if the
headline uses surprise and exclamation marks [37].
Within emotions, there is a strong negativity bias. An
investigation of 51 million tweets about current affairs
hypothesised and found that message virality is driven
by three things: negativity, causal arguments (e.g.,
apportioning blame), and threats to personal or societal
values [38]. Tweets and fake news alike are more likely
to go viral if they involve strong negative emotions
[37,39]. In fact, an analysis of 105,000 news stories
found that each additional negative word in the headline
increased click-through rates by 2.3% [40].
This negativity bias may be recognised and used by bad
actors, with clickbait being more likely to contain
negative emotion [41] and fake news being more likely
to involve sensational content like crime [42].
However, ‘clickbaitiness’ may reduce sharing due to
perceptions of manipulative intent [41], and many
studies have reported that positive emotion makes
content more likely to be shared, perhaps because users
would rather be the messenger of positive news
[35,36,41]. Similarly, dominance has been found to be
the strongest predictor of sharing viral advertising, and a
follow-up study found this effect was mediated by a
feeling of psychological empowerment [43].
Besides emotion, various elements are used to increase
virality. Clickbait headlines (for example, ‘The Scary
New Science That Shows Milk Is Bad For You’) are more
likely to omit information, utilising a psychological
principle called the ‘curiosity gap’; and the more information that is omitted, the more likely a headline is to
be shared [41]. Other important features include:
interactive elements like URLs or mentions [44]; the
use of simplicity, like shorter words and more readable
language [37,45]; sensory language, where an additional
sensory word on TikTok has been associated with 11,030
additional likes or comments [46]; and authoritative
language (like more ‘we’ words and fewer negations or
swear words) [36].
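These findings point to headline features that are straightforward to operationalise. The following is a deliberately crude illustration of how such cues might be counted; the word lists and feature choices are invented for this sketch and are not those used in the cited studies.

import re

# Crude illustrative headline-feature extractor; the cue lists and the features
# chosen are invented here, not taken from [36,37,41,45,46].
SENSORY_WORDS = {"crisp", "bright", "loud", "sweet", "smooth", "scary"}
FORWARD_REFERENCES = {"this", "that", "these", "here's", "what", "why", "how"}  # curiosity-gap cues

def headline_features(headline: str) -> dict:
    words = re.findall(r"[a-z']+", headline.lower())
    return {
        "exclamation_marks": headline.count("!"),
        "curiosity_gap_cues": sum(w in FORWARD_REFERENCES for w in words),
        "sensory_words": sum(w in SENSORY_WORDS for w in words),
        "mean_word_length": sum(map(len, words)) / max(len(words), 1),  # simplicity proxy
    }

print(headline_features("The Scary New Science That Shows Milk Is Bad For You"))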
More covert tactics are often used online to inseminate
and crystallise public opinion. An empirical analysis of
the flat Earth community on YouTube [47] found a two-stage process: firstly, there is 'seeding', in which agents
insert deceptions into the discourse, disguised as legitimate information; secondly, there is ‘echoing’, in which
viewpoints become solidified through identity-driven
argumentation (e.g., ‘us vs them’).
Both seeding and echoing are facilitated by ‘astroturfing’
(i.e., inflating perceptions of the popularity of an
opinion or policy through bots or agents who mimic
genuine humans online; manufacturing consensus); a
paper in Scientific Reports found consistent patterns of
coordination for Twitter astroturfing across a wide range
of countries, political contexts, and time periods [48].
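One simple coordination signal of the sort such analyses build on is repeated co-sharing of the same message by the same pair of accounts within a narrow time window. The sketch below illustrates that idea only; it is not the detection method used in [48], and the window length and example data are hypothetical.

from collections import defaultdict
from itertools import combinations

# Illustrative sketch of one coordination signal (co-sharing the same message
# within a short time window); not the detection method of [48].
WINDOW_SECONDS = 60

def coordinated_pairs(shares):
    """shares: list of (account, message_id, unix_timestamp)."""
    by_message = defaultdict(list)
    for account, message_id, ts in shares:
        by_message[message_id].append((account, ts))

    pair_counts = defaultdict(int)
    for events in by_message.values():
        for (a1, t1), (a2, t2) in combinations(sorted(events, key=lambda e: e[1]), 2):
            if a1 != a2 and abs(t1 - t2) <= WINDOW_SECONDS:
                pair_counts[frozenset((a1, a2))] += 1
    return pair_counts

shares = [("acct_a", "m1", 0), ("acct_b", "m1", 30), ("acct_a", "m2", 500), ("acct_b", "m2", 520)]
print(coordinated_pairs(shares))   # accounts repeatedly co-sharing within the window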
On the one hand, astroturfing inseminates ideas and
gives the illusion of social proof: one experiment found
that adding just two contra-narrative comments underneath a news story shared on Facebook was enough to
significantly bias opinion away from the direction of the
news story, and that three interventions tested had no
mitigating effect in the long term [49]. A similar principle is ‘inoculation’, in which, for example, agents seed
a weakened version of an argument (e.g., as wrong, bad,
or ridiculous) into the discourse in order to prevent the
audience from engaging with or believing the idea when
they encounter it ‘in the wild’ [50].
On the other hand, astroturfing reinforces ideas via
polarising debate: an analysis of 309,947 tweets found
evidence of an organised network in which ‘cybertroops’
engaged in astroturfing by mixing disinformation or
polarised content with benign topics of interest to the
target group [51]. Indeed, more polarising influencers
produce stronger engagement for the brands who
sponsor them, since controversy is emotionally engaging
and since the influencer’s fans will rush to defend them
(and thus their own identity) from attacks via motivated
reasoning [52].
Importantly, the effectiveness of online nudges can be
dampened by audience scepticism and reactance, and
thus some have pointed to ‘meta-nudging’ as an alternative approach; it involves sending a nudge indirectly
via social influencers who, being trusted authorities, are
better placed to change behaviour and enforce norms
[53]. Indeed, followers can develop parasocial relationships with influencers, which in turn produce
feelings of connection and community and foster the
creation of personal identities [54], and make people
more likely to adopt influencers’ recommendations [55].
Even computer-generated influencers can have a psychological impact on audiences, wherein followers
anthropomorphise them, blurring the lines between the
real and the unreal and producing feelings of friendship,
belonging, and jealousy [56].
Impact on mental health
These tactics for digital persuasion encourage people to
engage more heavily [57] with technologies like social
media which may have deleterious effects on mental
health [58]. They may similarly ‘nudge’ people into
unhealthy behaviours like impulsive purchasing [15]
and online addiction [59]. Manipulation can also foster
feelings of helplessness and thus paranoia amongst its
target audiences [60]. On the other hand, well-crafted
behavioural interventions online do also have the potential to help people achieve better mental health by,
for example, nudging people into more conscious screen
time [17] or delivering psychologically-targeted mental
health campaigns [61].
Conclusion
Recent evidence demonstrates how users can be influenced online e not always to the benefit of their mental
health e through dark patterns, emotionally-charged
social media content, and the covert use of astroturfing and meta-nudging, while the risk of manipulation
looks set to grow in line with advances in predictive
algorithms, generative AI, and VR. Behavioural interventions appear to have small effects at best.
CRediT authorship contribution statement
Patrick Fagan: Conceptualization, Investigation,
Writing - original draft, Writing - review & editing.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
No data was used for the research described in the article.

References
References of particular interest have been highlighted as being of special interest.

1. Stanovich KE: Why humans are cognitive misers and what it means for the great rationality debate. In Routledge handbook of bounded rationality. Routledge; 2020:196–206.
2. Barr N, Pennycook G, Stolz JA, Fugelsang JA: The brain in your pocket: evidence that Smartphones are used to supplant thinking. Comput Hum Behav 2015, 48:473–480.
3. Haidt J: The anxious generation: how the great rewiring of childhood is causing an epidemic of mental illness. Random House; 2024.
4. Masur PK: Situational privacy and self-disclosure: communication processes in online environments. Springer; 2018.
5. Henzel V, Håkansson A: Hooked on virtual social life. Problematic social media use and associations with mental distress and addictive disorders. PLoS One 2021, 16, e0248406.
6. Bergram K, Djokovic M, Bezençon V, Holzer A: The digital landscape of nudging: a systematic literature review of empirical research on digital nudges. In Proceedings of the 2022 CHI conference on human factors in computing systems; 2022, April:1–16.
7. Bhatt E, Seetharaman P: Rethinking digital nudging: a taxonomical approach to defining and identifying characteristics of digital nudging interventions. AIS Trans Hum-Comput Interact 2023, 15:442–471.
8. Uzunca B, Kas J: Automated governance mechanisms in digital labour platforms: how Uber nudges and sludges its drivers. Ind Innovat 2023, 30:664–693.
9. Thaler RH: Misbehaving: the making of behavioral economics. WW Norton & Company; 2015.
10. Mathur A, Kshirsagar M, Mayer J: What makes a dark pattern... dark? Design attributes, normative considerations, and measurement methods. In Proceedings of the 2021 CHI conference on human factors in computing systems; 2021, May:1–18.
11. Gray CM, Santos C, Bielova N, Mildner T: An ontology of dark patterns knowledge: foundations, definitions, and a pathway for shared knowledge-building. In Proceedings of the 2024 CHI conference on human factors in computing systems; 2024, May:1–22.
12. Lacey C, Beattie A, Sparks T: Clusters of dark patterns across popular websites in New Zealand. Int J Commun 2023, 17:25.
13. Monge Roffarello A, Lukoff K, De Russis L: Defining and identifying attention capture deceptive designs in digital interfaces. In Proceedings of the 2023 CHI conference on human factors in computing systems; 2023, April:1–19.
14. Stavrakakis I, Curley A, O'Sullivan D, Gordon D, Tierney B: A framework of web-based dark patterns that can be detected manually or automatically. 2021.
15. Sin R, Harris T, Nilsson S, Beck T: Dark patterns in online shopping: do they work and can nudges help mitigate impulse buying? Behavioural Public Policy 2022:1–27.
16. Valenčič E, Beckett E, Collins CE, Seljak BK, Bucher T: Digital nudging in online grocery stores: a scoping review on current practices and gaps. Trends Food Sci Technol 2023, 131:151–163.
17. Monge Roffarello A, De Russis L: Nudging users towards conscious social media use. In Proceedings of the 25th international conference on mobile human-computer interaction; 2023, September:1–7.
18. Matz SC, Beck ED, Atherton OE, White M, Rauthmann JF, Mroczek DK, … Bogg T: Personality science in the digital age: the promises and challenges of psychological targeting for personalized behavior-change interventions at scale. Perspect Psychol Sci 2023, 17456916231191774.
19. Zarouali B, Dobber T, De Pauw G, de Vreese C: Using a personality-profiling algorithm to investigate political microtargeting: assessing the persuasion effects of personality-tailored ads on social media. Commun Res 2022, 49:1066–1091.
20. Matz SC, Segalin C, Stillwell D, Müller SR, Bos MW: Predicting the personal appeal of marketing images using computational methods. J Consum Psychol 2019, 29:370–390.
21. Burr C, Cristianini N: Can machines read our minds? Minds Mach 2019, 29:461–494.
22. Simchon A, Sutton A, Edwards M, Lewandowsky S: Online reading habits can reveal personality traits: towards detecting psychological microtargeting. PNAS Nexus 2023, 2, pgad191.
23. Marengo D, Elhai JD, Montag C: Predicting Big Five personality traits from smartphone data: a meta-analysis on the potential of digital phenotyping. J Pers 2023, 91:1410–1424.
24. Azucar D, Marengo D, Settanni M: Predicting the Big 5 personality traits from digital footprints on social media: a meta-analysis. Pers Indiv Differ 2018, 124:150–159.
25. Hall AN, Matz SC: Targeting item-level nuances leads to small but robust improvements in personality prediction from digital footprints. Eur J Pers 2020, 34:873–884.
26. Shumanov M, Cooper H, Ewing M: Using AI predicted personality to enhance advertising effectiveness. Eur J Market 2022, 56:1590–1609.
27. Matz SC, Teeny JD, Vaid SS, Peters H, Harari GM, Cerf M: The potential of generative AI for personalized persuasion at scale. Sci Rep 2024, 14:4692.
28. Bai H, Voelkel J, Eichstaedt J, Willer R: Artificial intelligence can persuade humans on political issues. 2023 [Pre-print].
29. Goldstein JA, Chao J, Grossman S, Stamos A, Tomz M: How persuasive is AI-generated propaganda? PNAS Nexus 2024, 3, pgae034.
30. Murphy G, Flynn E: Deepfake false memories. Memory 2022, 30:480–492.
31. Campbell C, Plangger K, Sands S, Kietzmann J: Preparing for an era of deepfakes and AI-generated ads: a framework for understanding responses to manipulated advertising. J Advert 2022, 51:22–38.
32. Lee J, Shin SY: Something that they never said: multimodal disinformation and source vividness in understanding the power of AI-enabled deepfake news. Media Psychol 2022, 25:531–546.
33. Bray SD, Johnson SD, Kleinberg B: Testing human ability to detect 'deepfake' images of human faces. Journal of Cybersecurity 2023, 9, tyad011.
34. Nikolaou A, Schwabe A, Boomgaarden H: Changing social attitudes with virtual reality: a systematic review and meta-analysis. Annals of the International Communication Association 2022, 46:30–61.
35. Pivecka N, Ratzinger RA, Florack A: Emotions and virality: social transmission of political messages on Twitter. Front Psychol 2022, 13, 931921.
36. Weismueller J, Harrigan P, Coussement K, Tessitore T: What makes people share political content on social media? The role of emotion, authority and ideology. Comput Hum Behav 2022, 129, 107150.
37. Esteban-Bravo M, Vidal-Sanz JM: Predicting the virality of fake news at the early stage of dissemination. Expert Syst Appl 2024, 248, 123390.
38. Mousavi M, Davulcu H, Ahmadi M, Axelrod R, Davis R, Atran S: Effective messaging on social media: what makes online content go viral? In Proceedings of the ACM web conference 2022; 2022, April:2957–2966.
39. Saquete E, Zubcoff J, Gutiérrez Y, Martínez-Barco P, Fernández J: Why are some social-media contents more popular than others? Opinion and association rules mining applied to virality patterns discovery. Expert Syst Appl 2022, 197, 116676.
40. Robertson CE, Pröllochs N, Schwarzenegger K, Pärnamets P, Van Bavel JJ, Feuerriegel S: Negativity drives online news consumption. Nat Human Behav 2023, 7:812–822.
41. Mukherjee P, Dutta S, De Bruyn A: Did clickbait crack the code on virality? J Acad Market Sci 2022, 50:482–502.
42. Nanath K, Kaitheri S, Malik S, Mustafa S: Examination of fake news from a viral perspective: an interplay of emotions, resonance, and sentiments. J Syst Inf Technol 2022, 24:131–155.
43. Wen TJ, Choi CW, Wu L, Morris JD: Empowering emotion: the driving force of share and purchase intentions in viral advertising. J Curr Issues Res Advert 2022, 43:47–67.
44. Chen S, Xiao L, Kumar A: Spread of misinformation on social media: what contributes to it and how to combat it. Comput Hum Behav 2023, 141, 107643.
45. Chatterjee S, Panmand M: Explaining and predicting clickbaitiness and click-bait virality. Ind Manag Data Syst 2022, 122:2485–2507.
46. Cascio Rizzo GL, Berger J, De Angelis M, Pozharliev R: How sensory language shapes influencer's impact. J Consum Res 2023, 50:810–825.
47. Diaz Ruiz C, Nilsson T: Disinformation and echo chambers: how disinformation circulates on social media through identity-driven controversies. J Publ Pol Market 2023, 42:18–35.
48. Schoch D, Keller FB, Stier S, Yang J: Coordination patterns reveal online political astroturfing across the world. Sci Rep 2022, 12:4572.
49. Zerback T, Töpfl F: Forged examples as disinformation: the biasing effects of political astroturfing comments on public opinion perceptions and how to prevent them. Polit Psychol 2022, 43:399–418.
50. Traberg CS, Roozenbeek J, van der Linden S: Psychological inoculation against misinformation: current evidence and future directions. Ann Am Acad Polit Soc Sci 2022, 700:136–151.
51. Arce García S, Said-Hung EM, Mottareale D: Astroturfing as a strategy for manipulating public opinion on Twitter during the pandemic in Spain. 2022.
52. Beheshti MK, Gopinath M, Ashouri S, Zal S: Does polarizing personality matter in influencer marketing? Evidence from Instagram. J Bus Res 2023, 160, 113804.
53. Dimant E, Shalvi S: Meta-nudging honesty: past, present, and future of the research frontier. Current Opinion in Psychology 2022, 47, 101426.
54. Hoffner CA, Bond BJ: Parasocial relationships, social media, & well-being. Current Opinion in Psychology 2022, 45, 101306.
55. Conde R, Casais B: Micro, macro and mega-influencers on Instagram: the power of persuasion via the parasocial relationship. J Bus Res 2023, 158, 113708.
56. Mrad M, Ramadan Z, Nasr LI: Computer-generated influencers: the rise of digital personalities. Market Intell Plann 2022, 40:589–603.
57. Mildner T, Savino GL: How social are social media? The dark patterns in Facebook's interface. arXiv preprint arXiv:2103.10725; 2021.
58. Braghieri L, Levy RE, Makarin A: Social media and mental health. Am Econ Rev 2022, 112:3660–3693.
59. Newall PW: Dark nudges in gambling. Addiction Res Theor 2019, 27:65–67.
60. Schaerer M, Foulk T, Du Plessis C, Tu MH, Krishnan S: Just because you're powerless doesn't mean they aren't out to get you: low power, paranoia, and aggression. Organ Behav Hum Decis Process 2021, 165:1–20.
61. Ahmad R, Siemon D, Gnewuch U, Robra-Bissantz S: Designing personality-adaptive conversational agents for mental health care. Inf Syst Front 2022, 24:923–943.

Further information on references of particular interest
11. A wide-ranging and detailed review of most dark patterns identified to date, categorised into a novel typology.
13. This paper takes a novel approach to dark patterns, looking specifically at those which capture and keep attention.
18. An important and recent review of personality-targeted behaviour-change interventions, led by a leading scholar in the field.
20. Novel, useful insights alluding to how to practically design communications to aesthetically appeal to personality-targeted groups.
21. A comprehensive review of how psychological traits can be inferred from data points.
26. A real-world study bridging machine learning-based personality prediction and targeted advertising.
27. A pioneering paper bringing personalised persuasion into the realm of generative AI.
34. A meta-analytical review on the power of virtual/augmented reality to persuade on important social issues.
50. A review by field-leading scholars on a burgeoning and important topic, psychological inoculation.
53. An interesting and novel concept that could prove revolutionary for behavioural science interventions.