No one sets out to be a doormat. Yet some people are chronically passive, always putting others' needs before their own. These are the folks who end up babysitting for an acquaintance instead of going to their yoga class. In the long run, being unable to express what you want is a recipe for perpetual dissatisfaction, because your needs always end up on the back burner. The good news is that people can learn to ask for the things they want, at home, at work, and even at a restaurant when their steak arrives burnt and they want a new one. Read on to discover how.
If every era has its addiction, the addiction of this era is voluntary confinement in the snare of the social networks crowded onto the surface of the small screen.

The screen of the computer, the cell phone, or the smart tablet is a cramped, narrow cell in which addicts spend long hours in anxiety, anticipation, and tension, emotions that bear directly on their psychological state and nervous system, whether they are aware of it or not.

A goldfish's attention span does not exceed ten seconds; a human's pathological addiction to his small screen stretches across long hours.

One person leaves his device on all night and wakes again and again, anxiously checking whether it rang or a message arrived while he slept.

Another goes mad if he finds himself somewhere without a network connected to the Internet.

Another forgets what is around him, and who is around him, and sinks into his screen: waiting, checking, typing, replying to whatever he has received, sealed off from his surroundings.

Another is desperate to find out what has happened to a colleague, a friend, or a sweetheart, following along, dispensing advice, or offering solutions and suggestions.

Another lies in wait for his device's ring so he can rush to open it and read the latest item from some site or app; he reads it alone, interrupts those around him to read them what he has just read, and passes the time checking his device, reading whatever arrives, and sharing it with everyone nearby.

Another is captive to his inflated ego, scanning websites and email for reactions to a text he published, a photo he sent, a piece he wrote, or a garish joke he circulated, to count how many people answered or engaged. If the number is small, he feels lonely, isolated, and cast off because the reactions to him were weak or nonexistent, and he is left with a sense of failure or inferiority.

Another wanders as if lost in a desert if he forgets his device and goes out without it, as though he had lost his balance, his mind, or his memory; he stays rattled, agitated, and tense, and does not settle until he returns and feels his phone back in his grip or his pocket.

Another spends long stretches recording voice messages, waiting for voice replies, then recording responses to the replies, slaughtering his time on trivia and trifles that butcher his life without his knowing it, or he knows, but he is unproductive, and therefore useless to his society.

Another props his phone on the steering wheel to type or to read, exposing himself to danger and others to accident, and the accident is often injurious, catastrophic, or even... fatal.

This obsession is the disease of the age, the product of ceaseless confinement in this small screen, night and day. It harms the mental health of modern man, now tethered to a smart machine whose greatest cleverness is that it imprisons a person in a space smaller than his brain and larger than his freedom, making him its captive, ruled by its Hulagu-like aura, jaws agape to devour his time, gnaw at his nerves, and lock his life inside a virtual cell.
Over the past several years, teenage suicide rates have spiked horrifically. Depression rates are surging and America’s mental health over all is deteriorating. What’s going on?
My answer starts with technology but is really about the sort of consciousness online life induces.
When communication styles change, so do people. In 1982, the scholar Walter Ong described the way, centuries ago, a shift from an oral to a printed culture transformed human consciousness. Once, storytelling was a shared experience, with emphasis on proverb, parable and myth. With the onset of the printing press it became a more private experience, the content of that storytelling more realistic and linear.
As L.M. Sacasas argues in the latest issue of The New Atlantis, the shift from printed to electronic communication is similarly consequential. I would say the big difference is this: Attention and affection have gone from being private bonds to being publicly traded goods.
That is, up until recently most of the attention a person received came from family and friends and was pretty stable. But now most of the attention a person receives can come from far and wide and is tremendously volatile.
Sometimes your online post can go viral and get massively admired or ridiculed, while at other times it can be completely ignored, leaving you feeling alone. Communication itself, once mostly collaborative, is now often competitive, with bids for affection and attention. It is also more manipulative: gestures designed to generate a response.
People ensconced in social media are more likely to be on perpetual alert: How are my ratings this moment? They are also more likely to feel that the amount of attention they are receiving is inadequate.
As David Foster Wallace put it in that famous Kenyon commencement address, if you orient your life around money, you will never feel you have enough. Similarly, if you orient your life around attention, you will always feel slighted. You will always feel emotionally unsafe.
New social types emerge in such a communications regime. The most prominent new type is the troll, and in fact, Americans have elected a troll as the commander in chief.
Trolls bid for attention by trying to make others feel bad. Studies of people who troll find that they score high on measures of psychopathy, sadism and narcissism. Online media hasn't made them vicious; they were vicious already. It has simply given them a platform to use their viciousness to full effect.
Trolls also score high on cognitive empathy. Intellectually, they understand other people’s emotions and how to make them suffer. But they score low on affective empathy. They don’t feel others’ pain, so when they hurt you, they don’t care.
Trolling is a very effective way to generate attention in a competitive, volatile attention economy. It’s a way to feel righteous and important, especially if you claim to be trolling on behalf of some marginalized group.
Another prominent personality type in this economy is the crybully. This is the person who takes his or her own pain and victimization and uses it to make sure every conversation revolves around himself or herself. “This is the age of the Cry-Bully, a hideous hybrid of victim and victor, weeper and walloper,” Julie Burchill wrote in The Spectator a few years ago.
The crybully starts with a genuine trauma. The terrible thing that happened naturally makes the crybully feel unsafe, self-protective and self-conscious to the point of self-absorption. The trauma makes that person intensely concerned about self-image.
The problem comes from the subsequent need to control any situation, the failure to see the big picture, the tendency to lash out in fear and anger as a way to fixate attention on oneself and obliterate others. Crybullying is at the heart of many of our campus de-platforming and censorship outrages.
Trolling, crybullying and other attention-grabbing tactics emerge out of a feeling of weakness, and create a climate that causes more pain, in which it is not safe to lead with vulnerability, not safe to test out ideas or do the things that create genuine companionship.
The internet has become a place where people communicate out of their competitive ego: I’m more fabulous than you (a lot of Instagram). You’re dumber than me (much of Twitter). It’s not a place where people share from their hearts and souls.
Of course, people enmeshed in such a climate are more likely to feel depressed and to suffer from mental health problems. Of course, they are more likely to see human relationships through the abuser/victim frame, and to be acutely sensitive to any power imbalance. Imagine you're 17 and people you barely know are saying nice or nasty things about your unformed self. It creates existential anxiety and hence fanaticism.
Two words loom large in this moment: trauma and equity. Trauma is living with the aftershocks of a bad event — or, more important, it is having no place to go where the aftershocks can be healed because the public conversation is unsafe. Equity is the dream of a world in which all are given equal attention and dignity. The dream is still out there, but it’s receding with every vicious attack done in its name.
He never stood a chance. His first mistake was looking for food alone; perhaps things would have turned out differently if he’d been with someone else. The second, bigger mistake was wandering too far up the valley into a dangerous wooded area. This was where he risked running into the Others, the ones from the ridge above the valley. At first, there were two of them, and he tried to fight, but another four crept up behind him and he was surrounded. They left him there to bleed to death and later returned to mutilate his body. Eventually, nearly 20 such killings took place, until there was no one left, and the Others took over the whole valley.
The protagonists in this tale of blood and conquest, first told by the primatologist John Mitani, are not people; they are chimpanzees in a national park in Uganda. Over the course of a decade, the male chimps in one group systematically killed every neighboring male, kidnapped the surviving females, and expanded their territory. Similar attacks occur in chimp populations elsewhere; a 2014 study found that chimps are about 30 times as likely to kill a chimp from a neighboring group as to kill one of their own. On average, eight males gang up on the victim.
If such is the violent reality of life as an ape, is it at all surprising that humans, who share more than 98 percent of their DNA with chimps, also divide the world into “us” and “them” and go to war over these categories? Reductive comparisons are, of course, dangerous; humans share just as much of their DNA with bonobos, among whom such brutal behavior is unheard of. And although humans kill not just over access to a valley but also over abstractions such as ideology, religion, and economic power, they are unrivaled in their ability to change their behavior. (The Swedes spent the seventeenth century rampaging through Europe; today they are, well, the Swedes.) Still, humankind’s best and worst moments arise from a system that incorporates everything from the previous second’s neuronal activity to the last million years of evolution (along with a complex set of social factors). To understand the dynamics of human group identity, including the resurgence of nationalism—that potentially most destructive form of in-group bias—requires grasping the biological and cognitive underpinnings that shape them.
Such an analysis offers little grounds for optimism. Our brains distinguish between in-group members and outsiders in a fraction of a second, and they encourage us to be kind to the former but hostile to the latter. These biases are automatic and unconscious and emerge at astonishingly young ages. They are, of course, arbitrary and often fluid. Today's "them" can become tomorrow's "us." But this is small consolation. Humans can rein in their instincts and build societies that divert group competition to arenas less destructive than warfare, yet the psychological bases for tribalism persist, even when people understand that their loyalty to their nation, skin color, god, or sports team is as random as the toss of a coin. At the level of the human mind, little prevents new teammates from once again becoming tomorrow's enemies.
The human mind’s propensity for us-versus-them thinking runs deep. Numerous careful studies have shown that the brain makes such distinctions automatically and with mind-boggling speed. Stick a volunteer in a brain scanner and quickly flash pictures of faces. Among typical white subjects in the scanner, the sight of a black man’s face activates the amygdala, a brain region central to emotions of fear and aggression, in under one-tenth of a second. In most cases, the prefrontal cortex, a region crucial for impulse control and emotional regulation, springs into action a second or two later and silences the amygdala: “Don’t think that way, that’s not who I am.” Still, the initial reaction is usually one of fear, even among those who know better.
This finding is no outlier. Looking at the face of someone of the same race activates a specialized, face-recognizing part of the primate brain called the fusiform cortex; it is activated less when the face in question is that of someone of another race. Watching the hand of someone of the same race being poked with a needle activates the anterior cingulate cortex, a region implicated in feelings of empathy; watching the same thing done to the hand of a person of another race produces less activation. Not everyone's face or pain counts equally.
At every turn, humans make automatic, value-laden judgments about social groups. Suppose you are prejudiced against ogres, something you normally hide. Certain instruments, such as the Implicit Association Test, will reveal your prejudice nonetheless. A computer screen alternates between faces and highly emotive terms, such as “heroic” or “ignorant.” In response, you are asked to quickly press one of two buttons. If the button pairings fit your biases (“press Button A for an ogre’s face or a negative term and Button B for a human face or a positive term”), the task is easy, and you will respond rapidly and accurately. But if the pairings are reversed (“press Button A for a human face or a negative term and Button B for an ogre’s face or a positive term”), your responses will slow. There’s a slight delay each time, as the dissonance of linking ogres with “graceful” or humans with “smelly” gums you up for a few milliseconds. With enough trials, these delays are detectable, revealing your anti-ogre bias—or, in the case of actual subjects, biases against particular races, religions, ethnicities, age groups, and body types.
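The timing logic behind such tests can be illustrated with a toy calculation. The sketch below is not the actual IAT scoring procedure, only a simplified version of its core idea: compare mean reaction times on bias-congruent versus bias-incongruent button pairings, scaled by the variability across all trials. All reaction times here are invented for illustration.

```python
import statistics

# Hypothetical per-trial reaction times in milliseconds, invented for
# illustration. "Congruent" trials pair categories the way the subject's
# bias expects (e.g., ogre face + negative term); "incongruent" trials
# reverse the pairing (ogre face + positive term).
congruent_rts = [612, 588, 634, 601, 595, 620, 607, 590]
incongruent_rts = [688, 702, 671, 695, 710, 683, 699, 676]

def iat_d_score(congruent, incongruent):
    """Simplified bias score: the difference in mean latency between
    incongruent and congruent blocks, divided by the standard deviation
    of all trials pooled together. A positive score means the subject
    was slower on bias-incongruent pairings."""
    pooled_sd = statistics.stdev(congruent + incongruent)
    delta = statistics.mean(incongruent) - statistics.mean(congruent)
    return delta / pooled_sd

print(iat_d_score(congruent_rts, incongruent_rts))  # positive for this subject
```

Dividing by the pooled variability, rather than reporting the raw millisecond gap, is what lets scores from fast and slow responders be compared on a common scale.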
Needless to say, many of these biases are acquired over time. Yet the cognitive structures they require are often present from the outset. Even infants prefer those who speak their parents’ language. They also respond more positively to—and have an easier time remembering—faces of people of their parents’ race. Likewise, three-year-olds tend to prefer people of their own race and gender. This is not because children are born with innate racist beliefs, nor does it require that parents actively or implicitly teach their babies racial or gender biases, although infants can pick up such environmental influences at a very young age, too. Instead, infants like what is familiar, and this often leads them to copy their parents’ ethnic and linguistic in-group categorizations.
Sometimes the very foundations of affection and cooperation are also at the root of humankind’s darker impulses. Consider oxytocin, a compound whose reputation as a fuzzy “cuddle hormone” has recently taken a bit of a hit. In mammals, oxytocin is central to mother-infant bonding and helps create close ties in monogamous couples. In humans, it promotes a whole set of pro-social behaviors. Subjects given oxytocin become more generous, trusting, empathic, and expressive. Yet recent findings suggest that oxytocin prompts people to act this way only toward in-group members—their teammates in a game, for instance. Toward outsiders, it makes them aggressive and xenophobic. Hormones rarely affect behavior this way; the norm is an effect whose strength simply varies in different settings. Oxytocin, however, deepens the fault line in our brains between “us” and “them.”
Put simply, neurobiology, endocrinology, and developmental psychology all paint a grim picture of our lives as social beings. When it comes to group belonging, humans don’t seem too far from the families of chimps killing each other in the forests of Uganda: people’s most fundamental allegiance is to the familiar. Anything or anyone else is likely to be met, at least initially, with a measure of skepticism, fear, or hostility. In practice, humans can second-guess and tame their aggressive tendencies toward the Other. Yet doing so is usually a secondary, corrective step.
FROM TURBANS TO HIPSTER BEARDS
For all this pessimism, there is a crucial difference between humans and those warring chimps. The human tendency toward in-group bias runs deep, but it is relatively value-neutral. Although human biology makes the rapid, implicit formation of us-them dichotomies virtually inevitable, who counts as an outsider is not fixed. In fact, it can change in an instant.
For one, humans belong to multiple, overlapping in-groups at once, each with its own catalog of outsiders—those of a different religion, ethnicity, or race; those who root for a different sports team; those who work for a rival company; or simply those who have a different preference for, say, Coke or Pepsi. Crucially, the salience of these various group identities changes all the time. Walk down a dark street at night, see one of "them" approaching, and your amygdala screams its head off. But sit next to that person in a sports stadium, chanting in unison in support of the same team, and your amygdala stays asleep. Similarly, researchers at the University of California, Santa Barbara, have shown that subjects tend to quickly and automatically categorize pictures of people by race. Yet if the researchers showed their subjects photos of both black and white people wearing two different colored uniforms, the subjects automatically began to categorize the people by their uniforms instead, paying far less attention to race. Much of humans' tendency toward in-group/out-group thinking, in other words, is not permanently tied to specific human attributes, such as race. Instead, this cognitive architecture evolved to detect any potential cues about social coalitions and alliances—to increase one's chance of survival by telling friend from foe. The specific features that humans focus on to make this determination vary depending on the social context and can be easily manipulated.
Even when group boundaries remain fixed, the traits people implicitly associate with “them” can change—think, for instance, about how U.S. perceptions of different immigrant groups have shifted over time. Whether a dividing line is even drawn at all varies from place to place. I grew up in a neighborhood in New York with deep ethnic tensions, only to discover later that Middle America barely distinguishes between my old neighborhood’s “us” and “them.” In fact, some actors spend their entire careers alternating between portraying characters of one group and then the other.
This fluidity and situational dependence is uniquely human. In other species, in-group/out-group distinctions reflect degrees of biological relatedness, or what evolutionary biologists call “kin selection.” Rodents distinguish between a sibling, a cousin, and a stranger by smell—fixed, genetically determined pheromonal signatures—and adapt their cooperation accordingly. Those murderous groups of chimps are largely made up of brothers or cousins who grew up together and predominantly harm outsiders.
Humans are plenty capable of kin-selective violence themselves, yet human group mentality is often utterly independent of such instinctual familial bonds. Most modern human societies rely instead on cultural kin selection, a process allowing people to feel closely related to what are, in a biological sense, total strangers. Often, this requires a highly active process of inculcation, with its attendant rituals and vocabularies. Consider military drills producing “bands of brothers,” unrelated college freshmen becoming sorority “sisters,” or the bygone value of welcoming immigrants into “the American family.” This malleable, rather than genetically fixed, path of identity formation also drives people to adopt arbitrary markers that enable them to spot their cultural kin in an ocean of strangers—hence the importance various communities attach to flags, dress, or facial hair. The hipster beard, the turban, and the “Make America Great Again” hat all fulfill this role by sending strong signals of tribal belonging.
Moreover, these cultural communities are arbitrary when compared to the relatively fixed logic of biological kin selection. Few things show this arbitrariness better than the experience of immigrant families, where the randomness of a visa lottery can radically reshuffle a child’s education, career opportunities, and cultural predilections. Had my grandparents and father missed the train out of Moscow that they instead barely made, maybe I’d be a chain-smoking Russian academic rather than a Birkenstock-wearing American one, moved to tears by the heroism during the Battle of Stalingrad rather than that at Pearl Harbor. Scaled up from the level of individual family histories, our big-picture group identities—the national identities and cultural principles that structure our lives—are just as arbitrary and subject to the vagaries of history.
REVOLUTION OR REFORM?
That our group identities—national and otherwise—are random makes them no less consequential in practice, for better and for worse. At their best, nationalism and patriotism can prompt people to pay their taxes and care for their nation's have-nots, including unrelated people they have never met and will never meet. But because this solidarity has historically been built on strong cultural markers of pseudo-kinship, it is easily destabilized, particularly by the forces of globalization, which can make people who were once the archetypes of their culture feel irrelevant and bring them into contact with very different sorts of neighbors than their grandparents had. Confronted with such a disruption, tax-paying civic nationalism can quickly devolve into something much darker: a dehumanizing hatred that turns Jews into "vermin," Tutsis into "cockroaches," or Muslims into "terrorists." Today, this toxic brand of nationalism is making a comeback across the globe, spurred on by political leaders eager to exploit it for electoral advantage.
In the face of this resurgence, the temptation is strong to appeal to people’s sense of reason. Surely, if people were to understand how arbitrary nationalism is, the concept would appear ludicrous. Nationalism is a product of human cognition, so cognition should be able to dismantle it, too.
Yet this is wishful thinking. In reality, knowing that our various social bonds are essentially random does little to weaken them. Working in the 1970s, the psychologist Henri Tajfel called this “the minimal group paradigm.” Take a bunch of strangers and randomly split them into two groups by tossing a coin. The participants know the meaninglessness of the division. And yet within minutes, they are more generous toward and trusting of members of their in-group. Tails prefer not to be in the company of Heads, and vice versa. The pull of us-versus-them thinking is strong even when the arbitrariness of social boundaries is utterly transparent, to say nothing of when it is woven into a complex narrative about loyalty to the fatherland. You can’t reason people out of a stance they weren’t reasoned into in the first place.
Modern society may well be stuck with nationalism and many other varieties of human divisiveness, and it would perhaps be more productive to harness these dynamics rather than fight or condemn them. Instead of promoting jingoism and xenophobia, leaders should appeal to people’s innate in-group tendencies in ways that incentivize cooperation, accountability, and care for one’s fellow humans. Imagine a nationalist pride rooted not in a country’s military power or ethnic homogeneity but in the ability to take care of its elderly, raise children who score high on tests of empathy, or ensure a high degree of social mobility. Such a progressive nationalism would surely be preferable to one built on myths of victimhood and dreams of revenge. But with the temptation of mistaking the familiar for the superior still etched into the mind, it is not beyond the human species to go to war over which country’s people carry out the most noble acts of random kindness. The worst of nationalism, then, is unlikely to be overcome anytime soon.
The New Masters of the Universe: Big Tech and the Business of Surveillance

By Paul Starr

“We are learning how to write the music, and then we let the music make them dance.”
In his 1944 classic, The Great Transformation, the economic historian Karl Polanyi told the story of modern capitalism as a “double movement” that led to both the expansion of the market and its restriction. During the eighteenth and early nineteenth centuries, old feudal restraints on commerce were abolished, and land, labor, and money came to be treated as commodities. But unrestrained capitalism ravaged the environment, damaged public health, and led to economic panics and depressions, and by the time Polanyi was writing, societies had reintroduced limits on the market.
Shoshana Zuboff, a professor emerita at the Harvard Business School, sees a new version of the first half of Polanyi’s double movement at work today with the rise of “surveillance capitalism,” a new market form pioneered by Facebook and Google. In The Age of Surveillance Capitalism, she argues that capitalism is once again extending the sphere of the market, this time by claiming “human experience as free raw material for hidden commercial practices of extraction, prediction, and sales.” With the rise of “ubiquitous computing” (the spread of computers into all realms of life) and the Internet of Things (the connection of everyday objects to the Internet), the extraction of data has become pervasive. We live in a world increasingly populated with networked devices that capture our communications, movements, behavior, and relationships, even our emotions and states of mind. And, Zuboff warns, surveillance capitalism has thus far escaped the sort of countermovement described by Polanyi.
Zuboff’s book is a brilliant, arresting analysis of the digital economy and a plea for a social awakening about the enormity of the changes that technology is imposing on political and social life. Most Americans see the threats posed by technology companies as matters of privacy. But Zuboff shows that surveillance capitalism involves more than the accumulation of personal data on an unprecedented scale. The technology firms and their experts—whom Zuboff labels “the new priesthood”—are creating new forms of power and means of behavioral modification that operate outside individual awareness and public accountability. Checking this priesthood’s power will require a new countermovement—one that restrains surveillance capitalism in the name of personal freedom and democracy.
THE RISE OF THE MACHINES
A reaction against the power of the technology industry is already underway. The U.S. Justice Department and the Federal Trade Commission are conducting antitrust investigations of Amazon, Apple, Facebook, and Google. In July, the FTC levied a $5 billion fine on Facebook for violating promises to consumers that the company made in its own privacy policies (the United States, unlike the European Union, has no general law protecting online privacy). Congress is considering legislation to limit technology companies’ use of data and roll back the broad immunity from liability for user-generated content that it granted them in the Communications Decency Act of 1996. This national debate, still uncertain in its ultimate impact, makes Zuboff’s book all the more timely and relevant.
The rise of surveillance capitalism also has an international dimension. U.S. companies have long dominated the technology industry and the Internet, arousing suspicion and opposition in other countries. Now, chastened by the experience of Russian interference in the 2016 U.S. presidential election, Americans are getting nervous about stores of personal data falling into the hands of hostile foreign powers. In July of this year, there was a viral panic about FaceApp, a mobile application for editing pictures of faces that millions of Americans had downloaded to see projected images of themselves at older ages. Created by a Russian firm, the app was rumored to be used by Russian intelligence to gather facial recognition data, perhaps to create deepfake videos—rumors that the firm has denied. Early last year, a Chinese company’s acquisition of the gay dating app Grindr stirred concern about the potential use of the app’s data to compromise individuals and U.S. national security; the federal Committee on Foreign Investment in the United States has since ordered the Chinese firm to avoid accessing Grindr’s data and divest itself entirely of Grindr by June 2020. It is not hard to imagine how the rivalry between the United States and China could lead not only to a technology divorce but also to two different worlds of everyday surveillance.
According to Zuboff, surveillance capitalism originated with the brilliant discoveries and brazen claims of one American firm. “Google,” she writes, “is to surveillance capitalism what the Ford Motor Company and General Motors were to mass-production-based managerial capitalism.” Incorporated in 1998, Google soon came to dominate Internet search. But initially, it did not focus on advertising and had no clear path to profitability. What it did have was a groundbreaking insight: the collateral data it derived from searches—the numbers and patterns of queries, their phrasing, people’s click patterns, and so on—could be used to improve Google’s search results and add new services for users. This would attract more users, which would in turn further improve its search engine in a recursive cycle of learning and expansion.
Google’s commercial breakthrough came in 2002, when it saw that it could also use the collateral data it collected to profile the users themselves according to their characteristics and interests. Then, instead of matching ads with search queries, the company could match ads with individual users. Targeting ads precisely and efficiently to individuals is the Holy Grail of advertising. Rather than being Google’s customers, Zuboff argues, the users became its raw-material suppliers, from whom the firm derived what she calls “behavioral surplus.” That surplus consists of the data above and beyond what Google needs to improve user services. Together with the company’s formidable capabilities in artificial intelligence, Google’s enormous flows of data enabled it to create what Zuboff sees as the true basis of the surveillance industry—“prediction products,” which anticipate what users will do “now, soon, and later.” Predicting what people will buy is the key to advertising, but behavioral predictions have obvious value for other purposes, as well, such as insurance, hiring decisions, and political campaigns.
Zuboff’s analysis helps make sense of the seemingly unrelated services offered by Google, its diverse ventures and many acquisitions. Gmail, Google Maps, the Android operating system, YouTube, Google Home, even self-driving cars—these and dozens of other services are all ways, Zuboff argues, of expanding the company’s “supply routes” for user data both on- and offline. Asking for permission to obtain those data has not been part of the company’s operating style. For instance, when the company was developing Street View, a feature of its mapping service that displays photographs of different locations, it went ahead and recorded images of streets and homes in different countries without first asking for local permission, fighting off opposition as it arose. In the surveillance business, any undefended area of social life is fair game.
This pattern of expansion reflects an underlying logic of the industry: in the competition for artificial intelligence and surveillance revenues, the advantage goes to the firms that can acquire both vast and varied streams of data. The other companies engaged in surveillance capitalism at the highest level—Amazon, Facebook, Microsoft, and the big telecommunications companies—also face the same expansionary imperatives. Step by step, the industry has expanded both the scope of surveillance (by migrating from the virtual into the real world) and the depth of surveillance (by plumbing the interiors of individuals’ lives and accumulating data on their personalities, moods, and emotions).
The surveillance industry has not faced much resistance because users like its personalized information and free products. Indeed, they like them so much that they readily agree to onerous, one-sided terms of service. When the FaceApp controversy blew up, many people who had used the app were surprised to learn that they had agreed to give the company “a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you.” But this wasn’t some devious Russian formulation. As Wired pointed out, Facebook has just as onerous terms of service.
Even if Congress enacts legislation barring companies from imposing such extreme terms, it is unlikely to resolve the problems Zuboff raises. Most people are probably willing to accept the use of data to personalize their services and display advertising predicted to be of interest to them, and Congress is unlikely to stop that. The same processes of personalization, however, can be used to modify behavior and beliefs. This is the core concern of Zuboff’s book: the creation of a largely covert system of power and domination.
MAKE THEM DANCE
From extracting data and making predictions, the technology firms have gone on to intervene in the real world. After all, what better way to improve predictions than to guide how people act? The industry term for shaping behavior is “actuation.” In pursuit of actuation, Zuboff writes, the technology firms “nudge, tune, herd, manipulate, and modify behavior in specific directions by executing actions as subtle as inserting a specific phrase into your Facebook news feed, timing the appearance of a BUY button on your phone, or shutting down your car engine when an insurance payment is late.”
Evidence of the industry’s capacity to modify behavior on a mass scale comes from two studies conducted by Facebook. During the 2010 U.S. congressional elections, the company’s researchers ran a randomized, controlled experiment on 61 million users. Users were split into three groups. Two groups were shown information about voting (such as the location of polling places) at the top of their Facebook news feeds; users in one of these groups also received a social message containing up to six pictures of Facebook friends who had already voted. The third group received no special voting information. The intervention had a significant effect on those who received the social message: the researchers estimated that the experiment led to 340,000 additional votes being cast. In a second experiment, Facebook researchers tailored the emotional content of users’ news feeds, in some cases reducing the number of friends’ posts expressing positive emotions and in other cases reducing their negative posts. They found that those who viewed more negative posts in their news feeds went on to make more negative posts themselves, demonstrating, as the title of the published article about the study put it, “massive-scale emotional contagion through social networks.”
The 2016 Brexit and U.S. elections provided real-world examples of covert disinformation delivered via Facebook. Not only had the company previously allowed the political consulting firm Cambridge Analytica to harvest personal data on tens of millions of Facebook users; during the 2016 U.S. election, it also permitted microtargeting of “unpublished page post ads,” generally known as “dark posts,” which were invisible to the public at large. These were delivered to users as part of their news feeds along with regular content, and when users liked, commented on, or shared them, their friends saw the same ads, now personally endorsed. But the dark posts then disappeared and were never publicly archived. Microtargeting of ads is not inherently illegitimate, but journalists are unable to police deception and political opponents cannot rebut attacks when social media deliver such messages outside the public sphere. The delivery of covert disinformation on a mass basis is fundamentally inimical to democratic debate.
Facebook has since eliminated dark posts and made other changes in response to public criticism, but Zuboff is still right about this central point: “Facebook owns an unprecedented means of behavior modification that operates covertly, at scale, and in the absence of social or legal mechanisms of agreement, contest, and control.” No law, for example, bars Facebook from adjusting its users’ news feeds to favor one political party or another (and in the United States, such a law might well be held unconstitutional). As a 2018 study by The Wall Street Journal showed, YouTube’s recommendation algorithm was feeding viewers videos from ever more extreme fringe groups. That algorithm and others represent an enormous source of power over beliefs and behavior.
Surveillance capitalism, according to Zuboff, is moving society in a fundamentally antidemocratic direction. With the advent of ubiquitous computing, the industry dreams of creating transportation systems and whole cities with built-in mechanisms for controlling behavior. Using sensors, cameras, and location data, Sidewalk Labs, a subsidiary of Google’s parent company, Alphabet, envisions a “for-profit city” with the means of enforcing city regulations and with dynamic online markets for city services. The system would require people to use Sidewalk’s mobile payment system and allow the firm, as its CEO, Dan Doctoroff, explained in a 2016 talk, to “target ads to people in proximity, and then obviously over time track them through things like beacons and location services as well as their browsing activity.” One software developer for an Internet of Things company told Zuboff, “We are learning how to write the music, and then we let the music make them dance.”
Such aspirations imply a radical inequality of power between the people who control the playlist and the people who dance to it. In the last third of her book, Zuboff takes her analysis up a level, identifying the theoretical ideas and general model of society that she sees as implicit in surveillance capitalism. The animating idea behind surveillance capitalism, Zuboff says, is that of the psychologist B. F. Skinner, who regarded the belief in human freedom as an illusion standing in the way of a more harmonious, controlled world. Now, in Zuboff’s view, the technology industry is developing the means of behavior modification to carry out Skinner’s program.
The emerging system of domination, Zuboff cautions, is not totalitarian; it has no need for violence and no interest in ideological conformity. Instead, it is what she calls “instrumentarian”—it uses everyday surveillance and actuation to channel people in directions preferred by those in control. As an example, she describes China’s efforts to introduce a social credit system that scores individuals by their behavior, their friends, and other aspects of their lives and then uses this score to determine each individual’s access to services and privileges. The Chinese system fuses instrumentarian power and the state (and it is interested in political conformity), but its emerging American counterpart may fuse instrumentarian power and the market.
The Age of Surveillance Capitalism is a powerful and passionate book, the product of a deep immersion in both technology and business that is also informed by an understanding of history and a commitment to human freedom. Zuboff seems, however, unable to resist the most dire, over-the-top formulations of her argument. She writes, for example, that the industry has gone “from automating information flows about you to automating you.” An instrumentarian system of behavior modification, she says, is not just a possibility but an inevitability, driven by surveillance capitalism’s own internal logic: “Just as industrial capitalism was driven to the continuous intensification of the means of production, so surveillance capitalists are . . . now locked in a cycle of continuous intensification of the means of behavioral modification.”
As a warning, Zuboff’s argument deserves to be heard, but Americans are far from mere puppets in the hands of Silicon Valley. The puzzle here is that Zuboff rejects a rhetoric of “inevitabilism”—“the dictatorship of no alternatives”—but her book gives little basis for thinking we can avoid the new technologies of control, and she has little to say about specific alternatives herself. Prophecy you will find here; policy, not so much. She rightly argues that breaking up the big technology companies would not resolve the problems she raises, although antitrust action may well be justified for other reasons. Some reformers have suggested creating an entirely new regulatory structure to deal with the power of digital platforms and improve “algorithmic accountability”—that is, identifying and remedying the harms from algorithms. But all of that lies outside this book.
The more power major technology platforms exercise over politics and society, the more opposition they will provoke—not only in the United States but also around the world. The global reach of American surveillance capitalism may be only a temporary phase. Nationalism is on the march today, and the technology industry is in its path: countries that want to chart their own destiny will not continue to allow U.S. companies to control their platforms for communication and politics.
The competition of rival firms and political systems may also complicate any efforts to reform the technology industry in the United States. Would it be a good thing, for example, to heavily regulate major U.S. technology firms if their Chinese rivals gained as a result? The U.S. companies at least profess liberal democratic values. The trick is passing laws to hold them to these values. If Zuboff’s book helps awaken a countermovement to achieve that result, we may yet be able to avoid the dark future she sees being born today.
“I will live twice more diligently now that you are gone,” she said. “Dear fans, I will be fine. Don’t worry about me.”
But on Sunday, six weeks after Sulli’s death, Ms. Goo herself was found dead in her Seoul home in what the police were calling a suicide. The suicides by two of K-pop’s most beloved stars have left fans in South Korea soul-searching over what has gone wrong in K-pop, their country’s most successful cultural export.
Lee Yong-pyo, the chief of the Seoul Metropolitan Police Agency, told reporters that Ms. Goo’s body was found by a maid on Sunday evening. Investigators also found a handwritten memo in which Ms. Goo expressed her despair, Mr. Lee said.
As grief-stricken fans flocked to the Seoul hospital where her body lay, her family was planning to hold the funeral in private.
Once popular mainly in Asian countries, K-pop girl groups and boy bands, like BTS, now command huge global followings. The genre has captured the imagination of fans around the world with its fusion of synthesized songs, video art, fashionable outfits and synchronized dance routines that mix teasing sexuality with doe-eyed innocence.
But entertainment industry experts have long warned about the dark side of the scandal-ridden K-pop industry, which has remained largely hidden behind its glamour.
Legions of young South Koreans train for years, often starting in their early teens, honing their singing skills and dance moves in hopes of impressing “star management” agencies that deem them good enough to debut their first song. Even after they make the cut to become K-pop idols, their star status rarely lasts long, as younger stars with cuter looks and fancier dance moves replace them. K-pop stars in their late 20s are already considered old, and these fading idols often try to carve out new roles in acting or as solo singers or talk-show regulars — a difficult transition that is often not successful.
The K-pop phenomenon is disseminated largely through YouTube, Instagram, Twitter and other social media channels, where its stars are exposed both to floods of fan letters and to hateful comments and cyberbullying on everything from their looks to their singing skills to their private lives.
“From an early age, they live a mechanical life, going through a spartan training regimen,” said Lee Hark-joon, a South Korean journalist who has produced a TV documentary on the making of a K-pop girl group and co-wrote the book “K-pop Idols: Popular Culture and the Emergence of the Korean Music Industry.” “They seldom have a chance to develop a normal school life or normal social relationships as their peers do.”
“Their fall can be as sudden and as dramatic as their rise to the height of fame,” and all at a young age, Mr. Lee added. “Theirs is a profession especially vulnerable to psychological distress — they are scrutinized on social media around the clock, and fake news about their private lives is spread instantly.”
In 2017, Sulli, a former member of the South Korean girl group f(x), attended a memorial for another K-pop star, Kim Jong-hyun, 27, who had killed himself after leaving a note that said he was consumed by depression.
Ms. Goo, 28, a former member of the wildly popular K-pop girl group Kara, had also struggled with online attacks. Trolls spread rumors that she owed her looks largely to plastic surgery. She admitted that she had gone under the knife for droopy eyes.
Things took a turn for the worse after she broke up with her boyfriend, the hair designer Choi Jong-beom, and rumors spread that there was video footage of the couple engaging in sex.
“I won’t be lenient on these vicious commentaries any more,” Ms. Goo wrote on her Instagram account in June, complaining about her “mental health” problems and “depression.” (After her death, such posts on her Instagram account were removed.)
“Is there no one out there with a beautiful mind who can embrace people who suffer?” she pleaded.
“Public entertainers like myself don’t have it easy — we have our private lives more scrutinized than anyone else and we suffer the kind of pain we cannot even discuss with our family and friends,” she said. “Can you please ask yourself what kind of person you are before you post a vicious comment online?”
The situation with her ex-boyfriend, Mr. Choi, became particularly contentious. She sued him last year, accusing him of threatening to spread the footage of them having sex. In August, he was sentenced to a year and a half in prison on charges of blackmail, coercion and inflicting bodily harm against Ms. Goo. But the court suspended his jail term, keeping him free.
Ms. Goo’s suicide has already resulted in soul-searching in South Korea. The number of people who supported an online petition to the office of President Moon Jae-in asking for harsher punishment for sexual harassment has more than doubled to 217,000 since her suicide was reported.
In her last Instagram message, Ms. Goo uploaded a photo of herself lying in bed. She wrote “Jalja,” or “sleep tight.”
What My Epilepsy Taught Me About the Value of Time
By Elizabeth Bruenig
We know more about epilepsy than ever. But I am still trying to reckon with mine.
Among the many special causes entrusted to the patronage of St. Valentine — beekeeping, love — is epilepsy, though no one seems to know exactly why. The great 20th-century psychiatrist Leo Kanner guessed in a 1930 paper on epileptic folklore that the association was earned by the similarity between the sound of Valentine’s name spoken in German and the epithet “fallende Sucht,” “the falling disease.” It may have been that over time, entreaties to Valentine from epileptics were answered with particular generosity. They needed all the help they could get. Kanner cites several other saints known to be patrons of epilepsy, whose names were given over time as euphemisms for the disease — St. John, St. Donato, St. Cornelius and scores more.
It has been some time now since epileptics had only the saints for recourse, though the path from superstition and desperation to social acceptance and medical improvements has been fraught. The bad old days are far behind us, but the essential features of the disease — the loneliness, the suffering, the search for meaning in it — remain much the same. I have been epileptic all my life, and I am still trying to reckon with it.
To have epilepsy is to have one of any number of underlying conditions. Epilepsy describes not the problem — which could be congenital; acquired, as through injury; or some combination of both — but its manifestation: having recurrent, unprovoked seizures. Roughly 3.4 million people in the United States have some form of active epilepsy. The kind I have, Janz syndrome, is among the more common.
There are remedies for seizures: medications, devices, surgeries and diets. All have varying levels of success depending on the syndrome and the type of seizure, and even the particular patient. And none are without long and miserable side-effect profiles. I have tried a half-dozen of these drugs and have hated them all, and a few years ago I decided to dispense with medication altogether. I have more seizures, but I’m happier, too.
It isn’t advised — and I wouldn’t necessarily advise it — but for me, the occasional trauma of seizures is preferable to the daily misery of headaches, nausea and incoherent drowsiness. I doubt most epileptics would be willing to go so far, especially those with catastrophic forms of the disease. If my condition were much worse, I would most likely find the viselike press of a permanent headache preferable to the alternative. Neurologists tend to be impatient with pickiness about medications, and perhaps they have a point. Things used to be much worse, and we ought to be grateful. But one at least imagines the saints to be sympathetic.
Which is not to say that premodern societies were especially solicitous regarding the welfare of epileptics. The historical record indicates that civilizations dating back to antiquity were aware of people who had seizures chronically and that they struggled to figure out what to make of them. Around 400 B.C., an anonymous physician compiled a monograph on the subject titled “On the Sacred Disease,” which was meant to dispel the apparently widespread belief that epilepsy had some magical aspect.
His effort to establish epilepsy as an ordinary medical phenomenon was valiant, but long in the vindication. By the Middle Ages, seizures had become associated less with prophetic insight and more with demonic activity, though some physicians held to the ancient idea of epilepsy as a natural disease. Supernatural explanations for seizures lasted through the Enlightenment, and then modernity bestowed its own strange gifts upon epileptics.
In the proceedings of the first annual meeting of the National Association for the Study of Epilepsy and the Care and Treatment of Epileptics, in Washington in May of 1901, a philanthropist listed only by the name I.F. Mack wondered how many such “hopeless, helpless, unfortunate creatures” there must have been in the United States, all in want of internment in residential colonies for their kind. By his count, thousands were already locked away in such centers, and thousands more would be over time.
In fact, Buck v. Bell, the 1927 Supreme Court decision that enshrined involuntary, eugenic sterilization in law, concerned a woman held at the Virginia State Colony for Epileptics and Feebleminded, though she herself was neither an epileptic nor feebleminded. Nevertheless, untold numbers of epileptics were sterilized against their will under the ruling, which has never been overturned, though forced sterilizations declined significantly in the second half of the 20th century.
Today, thanks in part to a broadening of civil rights and in part to advances in medical science, epilepsy is neither a spiritual gift nor a moral malady. Epileptics are not generally thought of as incapable of managing independent life, nor do any public structures remain that could house and treat them at any rate. Finding meaning in the disease, and sorting out how to live with it, is up to each one of us — alone. If the past saw epilepsy as a communal problem, either relating to gods and devils and their intentions for humankind or to the genetic quality of whole nations, it is now a distinctly private one.
And what does it mean?
It means that there are limits on the things I can do — I don’t drive, for instance — but it moreover means there are limits on my time. My greatest seizure trigger is sleep deprivation. Even an hour of missed sleep can be disastrous — but I still miss hours and hours of sleep. This condition has given me cause to reflect on what is worth rising early for or staying up late for. If my daughters are sick, it is worth the risk to hold a late vigil; if they stir at dawn on Christmas morning, it is worth the risk to see them delight in their presents. When friends arrive after midnight from out of town, or when election results on which everything depends (there seem to be many of these now) come in late or when I am in the middle of a conversation I don’t want to leave — all of these things come at a cost, and I am willing to pay it.
Willing because they are worth it. They are the conduits through which love flows into our lives — and so perhaps the dual patronage of St. Valentine is especially apt. This is what living with epilepsy is like, this ready payment for minutes and hours, because to give them up is to cease living. I used to think of this as a heavy tax levied by my epilepsy, but now I see that I simply misunderstood the value of time before. I see it even in the plain and uneventful moments: My daughter and I sit on the floor so that we can cook together, and she peers down as I chop an onion, her eyes reddening at the corners. Why does it hurt my eyes, she asks, when you cut up the onion. It’s how onions protect themselves, I tell her: Every living thing wants most of all to live. And I am no different.
‘Love Island’ Returns Amid Debate About Contestants’ Mental Health
By Anna Codrea-Rado
After two former participants killed themselves, the British Parliament said it would look into the ethics of reality TV.
LONDON — It seemed like business as usual when a new season of “Love Island” aired here Monday night. All the familiar elements of the cult reality show were there: the luxury villa in Spain, the skimpy swimsuits.
But as the credits rolled, the sunny atmosphere darkened and a black screen appeared with a photograph of Michael Thalassitis, a former contestant who killed himself in March. The episode had been dedicated to his memory.
Thalassitis was one of two former “Love Island” contestants whose suicides stirred a debate in Britain over the ethics of reality television and the duty that broadcasters have to care for contestants.
ITV, the production company behind “Love Island,” released new guidelines in May to promote contestants’ well-being. ITV said its producers would maintain regular contact with contestants for 14 months after broadcast. The contestants will also be offered “training on dealing with social media” and “advice on finance and adjusting to life back home.” ITV declined to comment for this article.
The committee leading the inquiry is seeking submissions from the public and broadcasters to decide whether enough support is offered during and after filming, and whether the government should take action.
Jo Hemmings, a psychologist who works on reality shows in Britain, said in an interview that a lack of regulations on programs like “Love Island” led to poor judgment. “There are a few recommendations knocking around, but nothing that obliges anyone to do anything,” she said.
“The things that make reality TV entertaining are things like conflict, distress, jeopardy, the unexpected,” she added. “None of these things are things we would promote in terms of mental health positivity.”
Hemmings said contestants needed ongoing support after the cameras stopped rolling because the effect of leaving the show was stress-inducing regardless of whether they became famous. “That is a really, really hard thing for people to take on in psychological terms,” she said.
But some mental health experts said blaming reality shows for mental health problems failed to address the complexity of the issue.
In England, suicide is the leading cause of death for men under the age of 45, according to government research. Honey Langcaster-James, a psychologist who worked on earlier seasons of “Love Island,” said that the focus of debates on mental health should be on the causes of the statistics, and that they should be more relevant to the wider population.
“On a big reality show, it’s not uncommon to have 24-hour access to psychological services,” she said. “People in everyday life don’t get access to that.”
While the debate in Britain about reality TV and mental health is being conducted in the news media and in Parliament, the genre faces similar controversies in other countries, but without the same level of scrutiny.
Melody Parks said she saw things that were “exploitative, inappropriate and unethical” while working in reality TV.
Parks, who worked on a number of American reality shows, including “The Real World” and “Bad Girls Club,” has since left the industry and retrained as a family therapist. She said her new line of work made her see these programs in a different light. “I’m more cognizant of how people are triggered, sometimes intentionally, in order to get an explosive reaction,” she said.
She added that she would like to see producers directly address questions of mental health in reality TV shows. “When someone has a meltdown or a fight, producers could encourage casts to share what triggered them, how the situation affected them,” Parks said.
Sometimes, this happens organically. In last year’s “Love Island,” a contestant talked about how her feelings about her body had been shaped by childhood bullying. And reality TV stars have used their fame to further the conversation around mental health after the show has aired. Nadiya Hussain, the 2015 winner of “The Great British Bake Off,” spoke candidly in interviews about her struggle with anxiety and appeared in a BBC documentary about her experiences.
“We were on the right path when everyone was talking about mental health,” said Mitchell, the former “Love Island” contestant, “but I just think that people are too fickle.”
“People have such short memories,” he said, predicting that some “Love Island” viewers would just go “back to trying to destroy the people who have signed up for it.”
“If this kind of show is still going to happen, it has to be done properly,” he added. “We can’t be sacrificing people’s lives and their mental health for the sake of seven weeks of TV.”