Just say run: How to overcome cynicism and inspire young people to run for office
Ask young people whether they would ever consider running for office and this is what you’ll hear:
These are not hypothetical responses. They are the words of a handful of the more than 4,000 high school and college students we surveyed and interviewed for our new book, Running from Office: Why Young Americans Are Turned Off to Politics. Having come of age in a political context characterized by hyper-partisanship, gridlock, stalemate, and scandal, the overwhelming majority of 13- to 25-year-olds view the political system as ineffective, broken, and downright nasty. As a consequence, nine out of 10 will not even consider running for office. They’d rather do almost anything else with their lives.
This should sound alarm bells about the health of our democracy. The United States has more than half a million elective positions. And most people who become candidates don’t go through life never thinking about politics and then wake up one morning and decide to throw their hats into the ring. The idea has usually been percolating for a long time, often since adolescence.
In the final chapter of the book, we offer a series of recommendations that could stimulate political ambition and help chart a new course. Here, we summarize three. They are all major endeavors and will require substantial funding and deep commitment from government officials, entrepreneurs, educators, and activists. But each has the potential to change young people’s attitudes toward politics. At the very least, we hope to trigger a national conversation about how to show the next generation that politics is about more than men behaving badly in the nation’s capital.
1. Launch the YouLead Initiative: Since John F. Kennedy signed an executive order in 1961 establishing the Peace Corps, the organization has sent hundreds of thousands of Americans abroad “to tackle the most pressing needs of people around the world.” AmeriCorps has deployed 800,000 Americans to meet similar domestic needs. And Teach for America has recruited thousands of citizens to “build the movement to eliminate educational inequity.” Together, these programs send a strong signal that the government values, and American society depends on, public service. If we want to put the next generation on the path to politics, then what better way than by demonstrating that running for office is just as valuable, effective, and noble a form of public service?
Whether developed as a government program, non-profit endeavor, or corporate project, a two-pronged national campaign—we call it the YouLead Initiative—could send a strong signal to young people that running for office is a worthwhile way to serve the community, country, and world. The first piece would entail a technologically savvy media campaign that changes perceptions of politics. When young people think about government, they conjure up images of self-interested, egotistical conservatives fighting self-interested, egotistical liberals in a broken system to the point of paralysis. Placing the spotlight on local and state-level leaders—most of whom are not professional politicians—would convey that many elected officials care about their communities and are making positive change. In addition, a series of fun public service announcements in which parents, teachers, public figures, and celebrities encourage young people to think about a future in politics would reinforce the message. Second, regional and state coordinators for YouLead could identify high school and college students who have already exhibited leadership success—those in student government, captains of sports teams, members of debate and mock trial teams, those participating in drama and music clubs. At a regional conference, they would be encouraged to channel their leadership capabilities into electoral politics. The program could even capitalize on their competitive spirit by hosting a national conference to which regional participants could apply.
2. Make Political Aptitude Part of the College Admission Process: The primary educational goal of most 12- to 17-year-olds is to attend college (85% of the high school students we surveyed planned to go). But the five key ingredients in a college application—high school grades, standardized test scores, extra-curricular activities, personal essays, and letters of recommendation—make it entirely possible for students to apply to, and be accepted at, even the most prestigious schools without any political interest or knowledge. You can’t find Iraq on a map? That’s okay. You don’t know the name of the vice president? No big deal. You’re unfamiliar with which political party controls Congress? Don’t worry about it. Why not link political aptitude to the college application process, either in the form of a new component to the SAT or ACT, an additional exam, or an essay about public affairs? The vehicle is almost incidental. What matters is that it would force young people to take news and political information seriously.
Linking college admission, even in some small way, to political awareness could pay off. We found that young people with more exposure to politics—at home, at school, with their friends, and through the media—are far more likely to be interested in running for office. Sure, they see the same negative aspects of contemporary politics as everyone else. But they also see some examples of politicians behaving well, elected officials solving problems, and earnest, well-meaning candidates aspiring to improve their communities. The habit of staying politically informed might fade once students submit their college applications, but it might not. And there is no downside for colleges and universities to take the position that to be successful citizens, students must be connected to the world around them. In fact, a similar approach has generated a sense of volunteerism among many high school students. Of the roughly 75% of high school seniors who do some sort of community service, many start these efforts with the hope of “impressing” college admission officers.
3. Develop the GoRun App: We live in the era of the app. You can upload photos, request an Uber, find cheap airline tickets, locate the closest Mexican restaurant, or listen to your favorite music with the simple touch of an app. And young people do. Eighty-one percent of people under the age of 25 sleep with their phone next to them on the bed; 74% reach for their smartphones as the first thing they do when they wake up; and 97% of teens regularly use smartphones in the bathroom to check messages. There’s no activity, time of day, or location that is out of bounds for young people’s smartphone and app use. So, let’s take advantage of the digital world young people inhabit by creating an app that helps them identify political offices and informs them about how to run for them.
Surprisingly, it is quite difficult to find out what elected positions exist in any given community, let alone determine the responsibilities associated with each or the nuts and bolts involved in running for them. No central database houses this information. The GoRun app would allow users to enter an address and receive a complete list of all the elected positions representing that residence—from the local school board all the way up to president of the United States. Clicking on each position would result in a description of the office, a set of core responsibilities, and information about the logistics and rules required to run. Figuring out how to become a candidate would literally be at our fingertips. Educators could easily incorporate the app into their curricula. And young people who are even the least bit curious about how to run for office would not have to engage in a fact-finding mission. This easy-to-access information would showcase the thousands of electoral opportunities that have nothing to do with dysfunction in Washington, DC.
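At its core, the app the authors describe is a lookup from a residence to the offices that represent it. The sketch below illustrates that idea in Python; the district key, geocoder, and office entries are entirely invented stand-ins for the real jurisdiction databases that would have to be assembled from state and local election records.

```python
# A minimal sketch of the GoRun idea: map a residence to every elected
# position representing it. All data here is hypothetical.

OFFICES_BY_DISTRICT = {
    "springfield-ward-3": [
        {"office": "School Board, District 2", "term_years": 4,
         "filing": "petition with 50 signatures"},
        {"office": "City Council, Ward 3", "term_years": 2,
         "filing": "filing fee plus residency requirement"},
    ],
}

def geocode(address):
    """Stand-in for a real geocoder resolving an address to its districts."""
    return "springfield-ward-3"  # hypothetical fixed result for illustration

def offices_for(address):
    """Return descriptions of every elected position for the given residence."""
    return OFFICES_BY_DISTRICT.get(geocode(address), [])

positions = offices_for("742 Evergreen Terrace")
```

A real implementation would return the full ladder of offices, from school board to president, for any address; the hard part is not the lookup but compiling the underlying data, since, as the authors note, no central database exists.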
Our political system has done a number on young people. It has turned them off to the idea of running for office, discouraged them from aspiring to be elected leaders, and alienated them from even thinking about a career in politics. Steering a new course will be difficult, but being creative about how we do it is the only choice we have.
Wikipedia Deploys AI to Expand Its Ranks of Human Editors
AARON HALFAKER JUST built an artificial intelligence engine designed to automatically analyze changes to Wikipedia.
Wikipedia is the online encyclopedia anyone can edit. In crowdsourcing the creation of an encyclopedia, the not-for-profit website forever changed the way we get information. It’s among the ten most-visited sites on the Internet, and it has swept tomes like World Book and Encyclopedia Britannica into the dustbin of history. But it’s not without flaws. If anyone can edit Wikipedia, anyone can mistakenly add bogus information. And anyone can vandalize the site, purposefully adding bogus information. Halfaker, a senior research scientist at the Wikimedia Foundation, the organization that oversees Wikipedia, built his AI engine as a way of identifying such vandalism.
"It turns out that the vast majority of vandalism is not very clever."
AARON HALFAKER, WIKIMEDIA
In one sense, this means less work for the volunteer editors who police Wikipedia’s articles. And it might seem like a step toward phasing these editors out, another example of AI replacing humans. But Halfaker’s project is actually an effort to increase human participation in Wikipedia. Although some predict that AI and robotics will replace as much as 47 percent of our jobs over the next 20 years, others believe that AI will also create a significant number of new jobs. This project is at least a small example of that dynamic at work.
“This project is one attempt to bring back the human element,” says Dario Taraborelli, Wikimedia’s head of research, “to allocate human attention where it’s most needed.”
Don’t Scare the Newbies
In the past, if you made a change to an important Wikipedia article, you often received an automated response saying you weren’t allowed to make the change. The system wouldn’t let you participate unless you followed a strict set of rules, and according to a study by Halfaker and various academics, this rigidity prevented many people from joining the ranks of regular Wikipedia editors. A 2009 study indicated that participation in the project had started to decline, just eight years after its founding.
“It’s because the newcomers don’t stick around,” Halfaker says. “Essentially, Wikipedians had traded efficiency of dealing with vandals and undesirable people coming into the wiki for actually offering a human experience to newcomers. The experience became this very robotic and negative experience.”
With his new AI project—dubbed the Objective Revision Evaluation Service, or ORES—Halfaker aims to boost participation by making Wikipedia more friendly to newbie editors. Using a set of open source machine learning algorithms known as SciKit Learn—code freely available to the world at large—the service seeks to automatically identify blatant vandalism and separate it from well-intentioned changes. With a more nuanced view of new edits, the thinking goes, these algorithms can continue cracking down on vandals without chasing away legitimate participants. It’s not that Wikipedia needs to do away with automated tools to attract more human editors. It’s that Wikipedia needs better automated tools.
“We don’t have to flag good-faith edits the same way we flag bad-faith damaging edits,” says Halfaker, who used Wikipedia as the basis for his PhD work in the computer science department at the University of Minnesota.
In the grand scheme of things, the new AI algorithms are rather simple examples of machine learning. But they can be effective. They work by identifying certain words, variants of certain words, or particular keyboard patterns. For instance, they can spot unusually large blocks of characters. “Vandals tend to mash the keyboard and not put spaces in between their characters,” Halfaker says.
Halfaker acknowledges that the service isn’t going to catch every piece of vandalism, but he believes it can catch most. “We’re not going to catch a well-written hoax with these strategies,” he says. “But it turns out that the vast majority of vandalism is not very clever.”
Wikipedia Articles That Write Themselves?
Elsewhere, the giants of the Internet—Google, Facebook, Microsoft, and others—are embracing a new breed of machine learning known as deep learning. Using neural networks—networks of machines that approximate the web of neurons in the human brain—deep learning algorithms have proven adept at identifying photos, recognizing spoken words, and translating from one language to another. By feeding photos of a dog into a neural net, for instance, you can teach it to identify a dog.
With these same algorithms, researchers are also beginning to build systems that understand natural language—the everyday way that humans speak and write. By feeding neural nets scads of human dialogue, you can teach machines to carry on a conversation. By feeding them myriad news stories, you can teach machines to write their own articles. In these cases, neural nets are a long way from real proficiency. But they point towards a world where, say, machines can edit Wikipedia.
"I'm not sure we'll ever get to the place where an algorithm will beat human judgment."
AARON HALFAKER, WIKIMEDIA
Will A.I. drive the human race off a cliff?
Artificial intelligence (A.I.) and machine learning have the potential to help people explore space, make our lives easier and cure deadly diseases.
But we need to be thinking about policies to prevent the technology from one day killing us all.
That's the general consensus from a panel discussion in Washington D.C. today sponsored by the Information Technology and Innovation Foundation.
"When will we reach general purpose intelligence?" said Stuart Russell, a professor of electrical engineering and computer sciences at U.C. Berkeley. "We're all working on pieces of it.... If we succeed, we'll drive the human race off the cliff, but we kind of hope we'll run out of gas before we get to the cliff. That doesn't seem like a very good plan.... Maybe we need to steer in a different direction."
Russell was one of five speakers on today's panel, which took on questions about A.I. and fears that the technology could one day become smarter than humans and run amok.
Just within the last year, high-tech entrepreneur Elon Musk and the world's most renowned physicist Stephen Hawking have both publicly warned about the rise of smart machines.
Hawking, who wrote A Brief History of Time, said in May that robots with artificial intelligence could outpace humans within the next 100 years. Late last year, he was even more blunt: "The development of full artificial intelligence could spell the end of the human race."
Musk, CEO of SpaceX as well as CEO of electric car maker Tesla Motors, also got a lot of attention last October when he said A.I. threatens humans. "With artificial intelligence, we are summoning the demon," Musk said during an MIT symposium at which he also called A.I. humanity's biggest existential threat. "In all those stories with the guy with the pentagram and the holy water, ...he's sure he can control the demon. It doesn't work out."
With movies like The Terminator and the TV series Battlestar Galactica, many people think of super intelligent, super powerful and human-hating robots when they think about A.I. Many researchers, though, point out that A.I. and machine learning are already used for Google Maps, Apple's Siri and Google's self-driving cars.
As for fully autonomous robots, that could be 50 years in the future -- and self-aware robots could be twice as far out, though it's impossible at this point to predict how technology will evolve.
"Our current A.I. systems are very limited in scope," said Manuela Veloso, a professor of computer science at Carnegie Mellon University, speaking on today's panel. "If we have robots that play soccer very well by 2050, they will only know how to play soccer. They won't know how to scramble eggs or speak languages or even walk down the corridor and turn left or right."
Robert D. Atkinson, president of the Information Technology and Innovation Foundation, said people are being "overly optimistic" about how soon scientists will build autonomous, self-aware systems. "I think we'll have incredibly intelligent machines but not intentionality," he said. "We won't have that for a very long, long time, so let's worry about it for a very long, long time."
Even so, Russell said scientists should be focused on, and talking about, what they are building for the future. "The arguments are fairly persuasive that there's a threat to building machines that are more capable than us," he added. "If it's a threat to the human race, it's because we make it that way. Right now, there isn't enough work going into making sure it's not a threat to the human race."
There's an answer for this, according to Veloso.
"The solution is to have people become better people and use technology for good," she said. "Texting is dangerous. People text while driving, which leads to accidents, but no one says, 'Let's remove texting from cell phones.' We can weigh this danger and make policy about texting and driving to keep the benefit of the technology available to the rest of the world."
It's also important to remember the potential benefits of A.I., she added.
Veloso pointed to the CoBot robots working on campus at Carnegie Mellon. The autonomous robots move around on wheels, guide visitors to where they need to go and ferry documents or snacks to people working there.
"I don't know if Elon Musk or Stephen Hawking know about these things, but I know these are significant advances," she said. "We are reaching a point where they are going to become a benefit to people. We'll have machines that will help people in their daily lives.... We need research on safety and coexistence. Machines shouldn't be outside the scope of humankind, but inside the scope of humankind. We'll have humans, dogs, cats and robots."
Will Artificial Intelligence Transform How We Grow and Consume Food? [Video]
Today, agriculture is more efficient than ever, but it is also more vulnerable to environmental, technological, and social pressures than ever before. Climate change, drought and other disasters, shifting energy landscapes, population growth, urbanization, GMOs, changes in the workforce, automation — these are just a handful of the factors that affect global access to food.
We'd all like to see a future in which people worldwide have a sufficient amount of safe and nutritious food to help them maintain healthy and active lives. Is this realistic in our lifetimes?
To borrow a phrase, artificial intelligence is eating the world, and it can help move us closer to this future of abundant food. Neil Jacobstein, Artificial Intelligence & Robotics Co-Chair at Singularity University, explains what AI can do for the future of food, as part of a video series on food as a Global Grand Challenge.
Wireless AI Device Tracks and Zaps the Brain, Takes Aim at Parkinson’s
Zapping the brain with implanted electrodes may sound like a ridiculously dangerous treatment, but for many patients with Parkinson’s disease, deep brain stimulation (DBS) is their only relief.
The procedure starts with open-skull surgery. Guided by MRI images, surgeons implant electrodes into deep-seated brain regions that contain malfunctioning neural networks. By rapidly delivering electrical pulses, DBS can dampen — or completely quiet — the severe motor tremors that invade Parkinson’s patients’ lives.
Yet getting the best result out of DBS is an infuriating process of trial-and-error. To fit the stimulation to each patient’s needs, clinicians often repeatedly tweak the treatment’s many parameters, such as amplitude, frequency, and how long each stimulation lasts. Feedback is based on the patient’s behavioral response, which is often subjective, and as the disorder progresses a program that works today may lose its therapeutic effects tomorrow.
A major cause of all this guesswork is that current generation devices can’t record how the brain is responding to the treatment, which leaves everyone in the dark.
It’s a costly problem that’s expected to spread.
In addition to Parkinson’s, DBS is being tested as a potential treatment for obsessive-compulsive disorder, Tourette’s syndrome, treatment-resistant depression and even Alzheimer’s disease. Despite its promising results, little is known about how electrical pulses work on neural networks to change behavior.
But now, a team led by Dr. Kendall Lee at the Mayo Clinic in Rochester, MN, engineered a closed-loop, wireless device called WINCS Harmony that can simultaneously measure neurotransmitter levels from multiple brain regions and adjust its stimulation pattern accordingly in real time.
Given that neurons communicate via electrical and chemical signaling, neurotransmitter levels act as a proxy for treatment efficacy. Combined with sophisticated artificial neural networks, this information fine-tunes the stimulation process automatically.
In addition, researchers gain insight into the mysterious mechanisms behind DBS that have eluded the field so far.
“It’s really a game-changer,” says Dr. Karen Davis, a neuroengineer at the Toronto Western Hospital, who uses DBS for pain management.
The team presented their results this week at Neuroscience 2015 in Chicago, the largest annual international gathering of neuroscientists, organized by the Society for Neuroscience.
Closing The Loop
Harmony builds upon previous DBS technology that has already been approved by the FDA for human use.
Previous devices have tried to capture stimulation-induced neural feedback by recording the neurons’ electrical responses, says Dr. J. Luis Lujan, the lead author of the study. The problem was that the signals from the stimulating and recording electrodes heavily interfered with each other.
It’s a common and terrible problem, says Lujan: the data was far too messy to use.
Instead, the team turned to fast-scan cyclic voltammetry, a chemical sensing technique originally developed for animal research. Every 10 milliseconds or so, the device applies a local voltage charge, which transiently pulls electrons out of neurotransmitters in the area. This generates a small electrical current that can be picked up by the electrode.
Since each neurotransmitter produces a unique current signature, the recordings can both identify what type it is and estimate its concentration.
This data is then wirelessly fed into a single-layer artificial neural network, which uses the electrical and chemical patterns as feedback to tweak the weight of each node in the network. This in turn changes DBS parameters to keep the brain in an optimally functional state.
As proof-of-concept, the team tested their device on three rats by measuring local dopamine levels in a brain region called the striatum.
“Dopamine is involved in many disorders that we want to treat with DBS, such as Parkinson’s,” said Lujan, “that’s why we tried it first.” But the device can also work on other transmitters such as serotonin, which is involved in depression.
By using data from 25 stimulation trials as the training set for the artificial neural network, the team showed that the device rapidly adjusted its stimulation patterns to reach a predefined optimal level. The system was highly resistant to errors: when researchers deliberately began the stimulation using an off-target pattern, Harmony rapidly adjusted and brought itself back on course.
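Stripped of the neurochemistry, the behavior described above is a feedback loop: measure the transmitter level, compare it to a target, and nudge the stimulation accordingly. The toy proportional controller below shows the idea in Python; the linear dopamine response, gain, and target are invented for illustration and are not the Mayo team's actual neural-network model.

```python
# A toy closed-loop stimulation controller in the spirit of Harmony's
# feedback design. Dynamics and gains here are hypothetical.

def measure_dopamine(amplitude):
    """Stand-in for the voltammetry readout: assume release
    rises linearly with stimulation amplitude."""
    return 0.8 * amplitude  # invented response curve

def run_controller(target=1.0, amplitude=3.0, gain=0.5, steps=50):
    """Proportional feedback: repeatedly nudge the stimulation
    amplitude so the measured level approaches the target."""
    for _ in range(steps):
        level = measure_dopamine(amplitude)
        error = target - level
        amplitude += gain * error  # adjust stimulation from chemical feedback
    return amplitude, measure_dopamine(amplitude)

# Start deliberately off-target, as in the error-resistance experiment:
# the loop still converges to the predefined optimal level.
amplitude, level = run_controller()
```

With these made-up numbers the loop settles at the amplitude whose readout equals the target, mirroring how Harmony "rapidly adjusted and brought itself back on course" after an off-target start; the real system replaces this single gain with a trained single-layer network acting on electrical and chemical patterns.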
Right now, in order for DBS to work, it has to be on 24-7, says Lujan. But patients don’t exhibit symptoms all the time, so we’re likely overstimulating the brain. And since we still don’t really understand what DBS is doing in the brain, we may be inadvertently damaging other brain functions without realizing it.
Harmony may also help illuminate what’s malfunctioning in the brain at the exact time point when patients exhibit symptoms.
For example, we can observe chemical signatures in the brain of Parkinson’s patients when they display tremors and compare to when they do not. That’s extremely valuable information, says Lujan.
Similarly, Harmony may also be able to monitor the brain for telltale signs that a bipolar patient is entering a manic episode, and automatically produce the stimulation pattern needed to stop the attack before it strikes.
The results are highly promising, but it will still be a few years before the device can be tested in humans, Lujan admitted. The team is working on shrinking the lunchbox-sized device so that it can be implanted directly into the brain along with the electrodes. To reduce the number of repeat brain surgeries, the device also has to be made more durable.
Then there’s basic neurobiology. Our current understanding of the brain networks underlying disorders such as depression is still relatively primitive.
But the team is hopeful.
Devices like Harmony are one of the best tools to help us understand how malfunctioning neural networks misfire and what DBS does to the brain, says Lujan. We’re starting with Parkinson’s disease because we know far more about the networks involved, and since the behavioral outcomes are easily observable motor symptoms, they are highly objective and easy to measure.
That doesn’t mean we’re scared to tackle the other ones though, laughed Lujan. We just want to push this to human patients as soon as possible.
“Do we have all the answers? Of course not!” he said. “But now we have the tools to figure it out.”
Women After All 
Book Excerpt from Women After All
In the introduction to his latest book, author Melvin Konner explains why he considers maleness a departure from normal physiology.
There is a birth defect that is surprisingly common, due to a change in a key pair of chromosomes. In the normal condition the two look the same, but in this disorder one is shrunken beyond recognition. The result is shortened life span, higher mortality at all ages, an inability to reproduce, premature hair loss, and brain defects variously resulting in attention deficit, hyperactivity, conduct disorder, hypersexuality, and an enormous excess of both outward and self-directed aggression. The main physiological mechanism is androgen poisoning, although there may be others. I call it the X-chromosome deficiency syndrome, and a stunning 49 percent of the human species is affected.
It is also called maleness.
My choice to call being male a syndrome and to consider it less normal than the usual alternative is not (as I will show you) an arbitrary moral judgment. It is based on evolution, physiology, development, and susceptibility to disease. Once in our distant past, all of our ancestors could reproduce from their own bodies; in other words, we were all basically female. When biologists ask why sex evolved, they are not asking rhetorically—the fact that sex feels good was a valuable addition. What they are really asking is: Why did those self-sufficient females invent males? It had to be a very big reason, since they were bringing in a whole new cast of characters that took up space and ate their fill, not to mention being quite annoying, but could not themselves realize the goal of evolution: creating new life.
We’ll consider this in chapter 2, but briefly, the best answer to the puzzle seems to be: to escape being wiped out by germs. When you make new life on your own, you basically clone yourself, and ultimately lots of your offspring and relatives have the same genes. The germ that gets one of you gets you all. Create males, and in due course there is much more variation. Mate with a male that’s a bit different from you, and you produce a creature different from both of you. Result: germs confounded. Meanwhile, you export the fiercest part of the competition. You do the reproducing, he doesn’t (except for his teensy donation), so he can duke it out with the other males and they can evolve faster. Your daughters inherit the variation, and they compete and evolve, too.
But it turns out you have created a sort of Frankenstein monster, after a certain point hard to control. Consider the lowly, graceful water striders that scoot over pond tops in summer. Females signal that they are ready to mate by causing ripples of a certain frequency to billow out in the water, and the ripples turn males on. But the females don’t take all comers. Female choice is vital. Males that don’t rate, they drive away. Yet males have their ways. They have evolved grasping antennae, perfectly shaped to get a grip on the female’s head. A male approaches from behind and secures his hold, then flips her and himself upside down. Using his rear legs, he positions their bodies. If he gets this far, she stops resisting. He is the one. Or one of the ones, at any rate. She mates several times a day and seems to play males against each other.
This is not an allegory of human mating; it is an illumination, more parallel than parable. Female choice is crucial in humans, too, but males didn’t evolve grasping antennae. They evolved strategies of seduction, including romance, patience, persistence, gifts, help, verbal praise, argument, promises, threats, family influence, and deception. Human females have protected themselves with skepticism, social alliances, and a tendency to stay aloof and keep men guessing. The man who talks the best game has usually convinced himself first, and (unlike the water striders, which do it physically) you might say they emotionally flip for each other. Sometimes males use force. In this they rely on superior physical strength, gained through eons of competition with other males for access to those very selective females.
Women, of course, compete as well, against men and among themselves, also with skills honed over eons. But the need to reproduce, with all its risk and cost, has kept them relatively levelheaded and dubious of men’s schemes. For most of the history of sexual reproduction, females have often stood by while males fought over them, physically or otherwise. They know that they won’t always be able to tell a lifelong pal from a sperm donor, and in many species one good sperm is all they want. But they, too, have to reproduce, and that means tolerating uncertainty and being prepared for contingencies. For us humans, the trouble is that men’s competitive antics and untold ages of imposing their will on women have created a world in peril from their rivalries. Females, whether water striders or women, might be forgiven for looking back with a jaded eye on whichever ancestor it was that gave birth to the first male.
Women have always had to struggle for equality, even in the small hunter-gatherer bands we evolved in. Yet with further cultural evolution, it got worse. With the rise of what we like to call civilization, men’s superior muscle fostered a vast military, economic, and political conspiracy, enabling them to exclude women from leading roles. Jealousy of women’s power to give sex—and, more importantly, to give life—led men to build worlds upon and against them for millennia. Or as Camille Paglia put it in Sexual Personae, “Male bonding and patriarchy were the recourse to which man was forced by his terrible sense of woman’s power.” Appealing myths about Amazons are just that: myths. Only women whose fathers, sons, or husbands gave them the scepters of power could wield it, and then only temporarily. Even in matrilineal societies, men had most of the power. The result was ten millennia in which we squandered half of the best talent in the human race. Brawn mattered for those one hundred centuries, but in spite of their greater strength, men had to make laws to suppress women, because on a truly level playing field, women were destined to compete successfully and very often win.
That is the other meaning of the quote from de Beauvoir: “The problem of woman has always been a problem of men.” Although I don’t agree with her that all differences between men and women are culturally determined, I fully accept that the majority of the differences we have seen throughout history are caused by male supremacy and the subordination of women. History is written by the victors, and the victors in the battle between the sexes have for many centuries been males; of course they have defined women downward and have invented and promulgated an “essential” inferiority of women as a part of femininity itself. That is the part that is not at all inherent in biology; rather, it is, literally, a man-made myth.
But millennial male dominance is about to come to an end. Glass ceilings are splintering into countless shards of light, and women are climbing male power pyramids in every domain of life. Even in the world’s most sexist societies, women and girls form a fundamentally subversive group that, as communications technology shows them other women’s freedoms, will undermine age-old male conceit and give them the sway of the majority they are.
The freer and more educated girls and women become, the fewer children they have; men are proven obstacles to family planning. Even in the poorest lands, the increasing availability of women’s suffrage, health services, microloans, and savings programs is giving them control over their destinies. As soon as that happens, they reduce the size and poverty of their families. It becomes clearer every year that the best way to spend an aid dollar in the developing world is to educate and empower women and girls. The consequences are manifold.
Replacing quantity with quality in childbearing will not save just women, or even just struggling, impoverished countries. It will save the planet and make it habitable for our species. It will greatly reduce the necessity for violence of all kinds, as it has already begun to do. Male domination has outlived any purpose it may once have had. Perhaps it played some role in our success as a species so far, but now it is an obstacle. Empowering women is the next step in human evolution, and as the uniquely endowed creatures we are, we can choose to help bring it about.
Excerpted from Women After All: Sex, Evolution, and the End of Male Supremacy by Melvin Konner. Copyright © 2015 by Melvin Konner. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.
Workplace Bullying a Costly Epidemic in the Enterprise 
Workplace bullying opens your organization up to poor productivity, lower retention rates and possible legal action. And it's not an isolated issue - the workplace statistics are shocking. Is your culture cultivating a bully mentality?
Old bullies never die, they just get … promoted. And older doesn't always mean wiser. Those bullies you remember from your school days don't always grow out of that behavior. Many, in fact, carry it with them into the workplace.
If you think bullying isn't happening in your organization, think again. According to a Zogby poll commissioned by the Workplace Bullying Institute (WBI) in January 2014, 27 percent of the 1,000 U.S. workers surveyed had been the target of bullying; an additional 21 percent had witnessed an incident or incidents of bullying in the workplace.
A recent Forbes article reported that an alarming number of respondents, 96 percent, admitted to being bullied in the workplace.
The issue is so prevalent that Gary Namie and his wife, Ruth Namie, created the Workplace Bullying Institute (WBI), an organization dedicated to eradicating workplace bullying.
Defining Workplace Bullying
Different people may have different ideas about what workplace bullying means, but the WBI offers this definition: "We have a fairly high threshold for the definition of bullying; we define it as repeated mistreatment: abusive conduct that is threatening, humiliating or intimidating; work sabotage; or verbal abuse. Even so, we consider it something of an epidemic," Gary Namie says.
Bullying Has Widespread Organizational Impact
Bullying in the workplace affects more than just the individual targeted. It has negative effects on an entire organization, according to Namie and WBI data. "Victims suffer from depression, anxiety and panic. They take more sick days, resulting in higher rates of absenteeism. They have higher rates of stress-related health problems, increasing employers' healthcare costs. They aren't as motivated, engaged or productive - why would they be?" says Namie.
Individuals who are bullied are more likely to leave your organization, and they certainly aren't going to recommend your company to their talented friends, family or professional contacts.
Julie Moriarity, general manager of corporate training and communications strategy at The Network, an enterprise governance, risk and compliance firm that works with organizations' ethics and compliance teams to prevent bullying, agrees. "Bullying has a hugely negative impact on an organization as a whole. Whether you're a direct victim or whether you're a witness, it's going to impact your ability to work in teams, it decreases productivity and it can affect businesses' ability to recruit and retain talent. A lot of the best employees come through referrals, and no one's going to refer their friends, family or colleagues to an abusive work environment," says Moriarity. In extreme situations, this can create a PR nightmare for businesses, which also may be subjected to expensive, high-visibility lawsuits if victims choose to sue their attackers.
Why Aren't Businesses Putting a Stop to It?
So, why aren't businesses doing more to stop bullying? One of the most obvious answers is that they're unaware it's happening, says Moriarity. Victims may be unwilling to report bullying by a supervisor or a colleague for several reasons, not the least of which is fear of being viewed as a troublemaker and losing their job.
"People don't have the courage or the power to be able to stand up to bullies in the workplace, because they're afraid of losing their jobs. Especially in a recovering economy, it can be terrifying to think of losing their livelihood if they're not believed," says Moriarity.
This fear is not unfounded, according to WBI data. In 56 percent of reported cases, the bully is the victim's boss; in 33 percent, a coworker. "Most of the time, the bully is in a position of power over the victim. The message is, 'I can treat you however I want, and you have to put up with it or you'll be out on the street - and don't even think about asking for an employment reference,'" says Namie.
What's worse, according to Moriarity, is the fear of retaliation by the bullies themselves if they discover they've been reported but no corporate action is taken. "If bullying is reported and an organization doesn't respond, what happens? The abuse escalates. It gets worse. Often, whistleblowers face retaliation, and not just from their bullies; sometimes they are fired from their job because they're seen as a troublemaker," says Moriarity.
In her role at The Network, Moriarity consults with businesses about the need to create a culture that does not tolerate bullying and, more importantly, to consistently, quickly and forcefully deal with bullies while protecting victims brave enough to speak out.
"In the ethics and compliance industry, we talk a lot about creating this kind of culture, but you can't just talk about it, you can't just pay lip service to the idea of having a respectful, safe and open workplace. If you don't have rules in place to enforce that, it doesn't do any good," says Moriarity.
The Catch-22: Bullies Are Often High Performers
What makes the situation even more complex is that some organizations actually benefit from bullying behavior, Namie says. There's a fine line between being an aggressive, hard-driving, high-performing go-getter who brings in profits and closes deals and being a bully, particularly if a worker has used aggressive bullying tactics in the past and been rewarded for them.
"Many of these bullies are praised for similar behavior, under different circumstances. They're pushy, they're ruthless, cunning and they'll do whatever it takes to get ahead, to win - and that helps them succeed and the business succeed," says Namie.
"It's true more often than not in the workplace that bullies 'fail up.' The challenge in a workplace is that a lot of times, the same qualities that make aggressive, nasty bullies also make them quite good at their job. If they're high performers, great salespeople, major earners, that's all the company's looking at. And, so, the victim's fighting an uphill battle to get the company to see past that person's performance," Moriarity says.
How to Address the Bullying Issue
One way to address the problem is through a corporate ethics and compliance officer, or a team dedicated to that function. However, this position or team must report either to the CEO, the highest echelons of leadership, or to a board of directors to ensure it's operating apolitically, fairly and objectively, according to Moriarity.
"An ethics officer, or a team of ethics and compliance specialists, must be an independent body; they have to be able to make hard, objective decisions - like firing or disciplining a bully - and have those enforced without being overruled or dismissed because of office politics or financial concerns," says Moriarity.
Organizations should have a reporting process in place, too, and be prepared to offer anonymity for victims. However, doing so can make investigating claims of bullying more difficult.
"Most organizations would prefer you go to your manager, but if that's not possible - sometimes, they are the bully - then, to Human Resources. Public companies often have a hotline or a reporting structure in place so they can track complaints and work to address them," says Moriarity.
If those options aren't available, or aren't effective, there may be legal recourse victims can take outside of the corporate structure, Namie says.
Pushing for Laws That Protect Victims
Namie and WBI have championed legislation in many states to help victims take legal action against businesses that turn a blind eye to bullying. For example, WBI has promoted the Healthy Workplace Bill, which sets out a clear definition of workplace bullying and protects both employers and employees.
In the absence of a legal imperative, WBI statistics show that even when bullies are reported, approximately 44 percent of businesses take no action to rectify the situation. While this willful ignorance might help companies in the short term, it ignores the long-term effects and far-reaching impact of the behavior, according to Moriarity.
"Businesses that do nothing are trading short-term gain for long-term pain, as I like to say. By the time they understand the damage that's being done to their productivity, their retention and their public reputation, it's much too late. Employees won't ever forget that; they'll always remember that your company chose to do what was financially right over what was morally right, and your business will suffer for it," says Moriarity.
World’s Data Could Fit on a Teaspoon-Sized DNA Hard Drive and Survive Thousands of Years 
The blueprint of every living thing on the planet is encoded in DNA. We know the stuff can hold a lot of information. But how much is a lot? We could theoretically encode the world's data (from emails to albums, movies to novels) on just a few grams of DNA. DNA already preserves life itself—now it might also preserve life as we live it.
According to New Scientist, a gram of DNA could theoretically store 455 exabytes of data. And Quartz drives the point home. If the world has about 1.8 zettabytes of data, according to a 2011 estimate, all the world's information would fit on a four-gram DNA hard drive the size of a teaspoon.
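That arithmetic is easy to check. A quick sketch using the figures above (455 exabytes per gram, 1.8 zettabytes of world data):

```python
# Back-of-the-envelope check of the DNA storage figures.
# 1 zettabyte = 1,000 exabytes.
CAPACITY_PER_GRAM_EB = 455   # exabytes per gram (New Scientist estimate)
WORLD_DATA_ZB = 1.8          # zettabytes of world data (2011 estimate)

world_data_eb = WORLD_DATA_ZB * 1000           # 1,800 exabytes
grams_needed = world_data_eb / CAPACITY_PER_GRAM_EB

print(f"{grams_needed:.2f} grams")  # prints "3.96 grams" - roughly a teaspoon
```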
Svalbard Global Seed Vault in Norway.
But why choose DNA (other than its massive storage potential)? Because in the right conditions, DNA can survive for thousands of years, long past the point at which traditional hard drives will have degraded.
Scientists at the Swiss Federal Institute of Technology in Zurich set out to determine just how long DNA might last.
The researchers found that DNA encapsulated in tiny, dry glass spheres and kept at a temperature of 10 °C would remain uncorrupted (and the data readable) for 2,000 years. At even lower temperatures—like those inside Norway's Svalbard Global Seed Vault—the data's longevity jumps to two million years.
But here's the kicker. Preserving data in DNA is very, very expensive. The Swiss researchers encoded 83 kilobytes of data, the medieval Swiss federal charter and the Archimedes Palimpsest, at a cost of $1,500. There are nearly two quintillion kilobytes in the world's 1.8 zettabytes. We could mortgage the global economy—and still be woefully short of cash.
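A naive linear extrapolation from the pilot's price tag makes the point (an assumption for illustration only; real per-byte costs would fall sharply at scale):

```python
# Naive linear extrapolation of the DNA encoding cost (illustrative only).
PILOT_KB = 83                  # kilobytes encoded in the Swiss pilot
PILOT_COST_USD = 1500          # cost of that pilot
WORLD_DATA_KB = 1.8e21 / 1e3   # 1.8 zettabytes expressed in kilobytes

cost = WORLD_DATA_KB / PILOT_KB * PILOT_COST_USD
print(f"${cost:.2e}")  # on the order of 10^19 dollars, far beyond world GDP
```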
But it's a pretty fascinating idea. More research and improving techniques may bring the cost closer to Earth. And of course, in practice, we would be far more judicious about what information we encode—using the technique as a time capsule or the ultimate backup for the modern world's most critical information.
Worm ‘Brain’ Uploaded Into Lego Robot 
Can a digitally simulated brain on a computer perform tasks just like the real thing?
For simple commands, the answer, it would seem, is yes it can. Researchers at the OpenWorm project recently hooked a simulated worm brain to a wheeled robot. Without being explicitly programmed to do so, the robot moved back and forth and avoided objects—driven only by the interplay of external stimuli and digital neurons.
While there are already similarly capable robots using traditional software, the research shows a digitally simulated brain can behave like its biological analog, and the demonstration has implications for big brain projects.
The BRAIN Initiative in the US and the Human Brain Project in Europe aim to map the human brain’s connections and, one day, to simulate the brain digitally. Such a simulation might yield insights into disease or breakthroughs in computer science.
But when it comes to simulating brains in silico, it's sensible to start simple. The OpenWorm project's simulated brain is based on the lowly C. elegans roundworm.
C. elegans is an eminently humble creature, and for that reason, an extensively researched one. Scientists published the first map of the synaptic connections, or connectome, of the brain of C. elegans in 1986 and a refined draft in 2006.
The worm’s brain contains 302 neurons and 7,000 synapses. The human brain, in comparison, has 86 billion neurons and 100 trillion synapses. Whether we’ll ever fully map the human brain (or should) is a hotly debated topic.
But since we've already mapped the C. elegans connectome, the researchers at OpenWorm thought they'd feed it stimuli using a few external sensors and give it a robotic body to carry out whatever motor instructions the brain provided.
The robot, as you can see in the video, moves a little like a Roomba, with one critical distinction—the Roomba’s collision avoidance mechanism was written in by programmers. The OpenWorm bot’s movements, on the other hand, were not.
How does it work? The brain cells in the worm’s connectome are labeled sensory neurons, motor neurons, and interneurons (connecting the two). The OpenWorm team simulated these neurons and their connections in software.
The digital neurons sum input signals and fire when they exceed a threshold (similar to but not exactly like the real thing).
Sensory neurons link to the robot’s sensors—a sonar sensor, for example, stands in for the worm’s nose. And the sim’s motor neurons drive the robot’s right and left motors as if they were right and left groups of muscles.
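The integrate-and-threshold idea described above can be sketched in a few lines. The class, names and thresholds here are purely illustrative, not OpenWorm's actual code:

```python
# Toy threshold neuron: sums weighted inputs and fires above a threshold.
# An illustration of the idea only - not the OpenWorm implementation.

class ThresholdNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.inputs = []        # list of (source_neuron, weight) pairs
        self.fired = False

    def connect(self, source, weight):
        self.inputs.append((source, weight))

    def step(self, external=0.0):
        # Sum external stimulus plus weights from currently firing inputs.
        total = external + sum(w for src, w in self.inputs if src.fired)
        self.fired = total > self.threshold
        return self.fired

# A two-neuron chain: a "nose" sensor drives a motor neuron.
nose = ThresholdNeuron(threshold=0.5)
motor = ThresholdNeuron(threshold=0.5)
motor.connect(nose, weight=1.0)

nose.step(external=1.0)   # sonar reports an obstacle: sensory neuron fires
print(motor.step())       # prints "True": the motor neuron fires in turn
```

In the real simulation, of course, 302 such units and 7,000 weighted connections interact at once, which is where the unprogrammed, worm-like behavior emerges.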
The fascinating thing? The robot behaves much like a real worm would, given similar sensory stimulation—tripping the nose sensor halts forward progress, and touching the front and rear sensors makes the robot move forward and back.
Now, the simulation isn't perfect, and the robot doesn't have every sensory input the real worm might have, but the OpenWorm bot seems to show that a simulated digital brain might behave like a biological brain does—and we might not have to understand it in detail to make it work. That is, behaviors might emerge of their own accord.
In this example, we’re talking very simple behaviors. But could the result scale? That is, if you map a human brain with similarly high fidelity and supply it with stimulation in a virtual or physical environment—would some of the characteristics we associate with human brains independently emerge? Might that include creativity and consciousness?
There’s only one way to find out.
Would telepathy help? 
Kat McGowan writes about health, medicine and science for magazines including Nautilus and Quanta, and is a contributing editor at Discover. She lives in New York City and California.
Edited by Pam Weintraub
Will the next generation of telepathy machines make us closer, or are there unforeseen dangers in the melding of minds?
Every modern generation has had its own idiosyncratic obsession with telepathy, the hope that one human being might be able to read another person’s thoughts. In the late 19th century, when spiritualism was in vogue, mind-reading was a parlour game for the fashionable, and the philosopher William James considered telepathy and other psychic phenomena legitimate subjects of study for the new science of psychology. By the 1960s, the Pentagon was concerned about Soviet telepathy research and reports that the Soviets had established remote communications with submarine commanders. In the 1970s, one ambitious Apollo 14 astronaut took it upon himself to try broadcasting his brainwaves from the moon.
In our technologically obsessed era, the search for evidence of psychic communication has been replaced by a push to invent computerised telepathy machines. Just last year, an international team of neurobiologists in Spain and France and at Harvard set up systems that linked one brain to another and permitted two people to communicate using only their thoughts. The network was basically one massive kludge, including an electroencephalography cap to detect the sender’s neural activity, computer algorithms to transform neural signals into data that could be sent through the internet and, at the receiving end, a transcranial magnetic stimulation device to convert that data into magnetic pulses that cross another person’s skull and activate certain clusters of neurons with an electrical field. With this contraption, the researchers were able to send a signal of 140 bits (the word ‘ciao’) from one person’s brain to another.
This apparatus is complex, expensive and extremely low-bandwidth, achieving a speed of about two bits per minute. Nonetheless, this study and others like it inspire a wave of hope that it might one day be possible to read another person’s thoughts. It’s easy to see why people won’t give up on the idea. Telepathy promises an intimate connection to other human beings. If isolation, cruelty, malice, violence and wars are fuelled by misunderstandings and communication failures, as many people believe, telepathy would seem to offer the cure.
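How low-bandwidth is that? A trivial calculation using the study's own figures:

```python
# Transfer time for the brain-to-brain message at the reported bandwidth.
MESSAGE_BITS = 140     # the word 'ciao', as encoded in the study
BITS_PER_MINUTE = 2    # approximate reported throughput

minutes = MESSAGE_BITS / BITS_PER_MINUTE
print(f"{minutes:.0f} minutes")  # prints "70 minutes" - over an hour per word
```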
But findings from affective neuroscience, social psychology and the new neuroscientific study of empathy suggest that tapping directly into other people’s thoughts would be a pretty bad idea. In the past decade or so, this research has revealed that we already have deep insights into what other people feel and think. We really do have a sixth sense, but it’s psychological rather than psychic, made up of an entirely natural and completely human blend of emotional intuition and clever reasoning.
The more we know about empathy and ordinary human mind-reading, the less it looks like a way to achieve world peace. Technologically assisted telepathy could exaggerate flaws in our moral thinking and saddle us with unbearable intimacy, encouraging us to tune out the suffering of the most vulnerable. Emotional mind-reading is no guarantee of kindness; it is also how psychopaths and bullies manipulate and torment their victims. This research suggests an entirely sensible, completely ordinary, not-at-all-clairvoyant prediction about the future: rather than a dreamy bliss of togetherness, artificial telepathy would be a nightmare.
This new appreciation of the limitations of empathy is in part the result of a surprising discovery about brain organisation. One of the general rules of neurobiology is that many mental tasks are handled by dedicated anatomical regions of the brain. The motor cortex, for example, controls body movements. The visual cortex is specialised for processing information from your eyes. Other parts are for remembering, for analysing objects, for feeling emotions, or for functions as specific as verbal fluidity or measuring rewards. So if you watch a ball roll down a hill, for example, parts of your brain involved in recognising objects, analysing movements, and ensuring that your eyes move in sync with an object all become active.
This rule of thumb still holds, but a big asterisk was put next to it in the 1990s, when a group of Italian scientists discovered that primate brains analyse the actions of other similar creatures in a special way. Their discovery was initially made with monkeys, but something similar happens in humans too: as you watch another person do something, your brain responds with a rough mental simulation of the action. When you see the Brazilian footballer Reynaldo kick a ball, for example, the parts of your motor cortex that would be involved in preparing your legs and feet to move and in co‑ordinating that movement also become active. When he runs or falls or leaps in joy, so do you – but only in your mind. Other people’s bodies seem to be inside your head, and that is the way you comprehend their motion. ‘We use ourselves as a heuristic, an approximation of the other,’ says the social neuroscientist Christian Keysers of the Netherlands Institute for Neuroscience and the University of Amsterdam, who worked with the Italian team in the late 1990s.
Keysers and others have since found that the same exception applies to sensations and emotions: you respond to other people’s experiences by recreating them in your own mind. You have a ‘vicarious brain’, he says, ‘a brain that uses a lot of its own private space to represent automatically the actions, sensations and emotions of others’. If you see an angry face, even just momentarily, the neurons that cause you to narrow your eyes and fix your jaw flicker with activity. In one series of neural imaging experiments, Keysers had people taste a disgusting liquid (quinine), listen to an appalling story about finding a maggot-infested dead rat in bed, or look at pictures of actors reacting with disgust. All three situations – seeing, imagining and experiencing – activated some of the same brain regions, albeit in slightly different ways. More recent findings suggest that our brains also simulate other people’s good feelings; seeing someone look pleased by a sugary drink or a happy event causes your own mind to respond in a similar way.
The vicarious experience of emotion might even affect your own mood. A baby often cries when it hears another baby crying, but for adults, this phenomenon of emotional contagion is more subtle. People who heard someone speaking in a sad tone of voice, for example, subsequently rated their moods a bit lower than those who heard neutral or cheerful voices. This is also how laughter or weeping spreads through a room, or crowds suddenly turn violent or panicky.
It’s not clear why our minds work this way, but Keysers and others point out that it is a good way to get fast insights with very little information: just one glance tells you what you need to know. For a social species like ours, anticipating what someone else is about to do can be a major advantage.
This intuitive, automatic fellow-feeling is not the only kind of mind-reading humans do; we also learn more deliberate, strategic methods to infer other people’s thoughts. In the first years of life, children become theorists of desire: they notice that other people’s mental states predict their actions, and they begin figuring out what other people want as a way to anticipate what they will do. By adulthood, we have learned to infer other people’s motives and thought processes, read hidden or mixed emotions, detect when people are faking their feelings, even pick up on irony. The cognitive neuroscientist Uta Frith considers this ‘theory of mind’ to be a hallmark of human cognition.
Together, the intuitive and the strategic components of fellow-feeling enable empathy, the ability to step into someone else’s mind and know what they feel. Being connected to other people in this way is a deep part of our nature; we might be selfish and competitive, but we are also hitched to one another, obliged to take on other people’s pleasures and their suffering. It seems to explain the strange human phenomenon of moral behaviour – the peculiar tendency of people to help one another even when it is risky and difficult. If your joy is my joy, and your pain my pain, this sort of altruism only makes sense.
But in case you hadn’t noticed, the sophisticated, multi-layered capacity for fellow-feeling doesn’t prevent people from behaving terribly towards one another. They fight, murder, abuse and steal. Even those who don’t purposely harm others often ignore other people’s pain and fail to help those who need it.
Maybe our natural ability to empathise just isn’t strong enough. Perhaps machine-assisted telepathy could help, amplifying the faint signal of compassion into an intense blast. For the moment, let’s just assume that the monumental technological and biological challenges could be resolved, and we could invent a device that would effectively transmit one person’s experience to another. What would happen if we turned up the volume on empathy?
To begin with, we might help psychopaths – people who ruthlessly exploit others – be even better at what they do. Research from Keysers and others reveals that these apparently cold-blooded predators are actually good at detecting emotions. In one 2013 study, Keysers asked 20 criminal psychopaths to watch short videos of two people either caressing or striking one another’s hands – a simple way to evoke an emotional response that in ordinary people activates brain regions associated with emotional processing. Initially, these participants did not have much activation in regions involved in feelings or pain. But when Keysers instructed them to empathise while watching, their patterns of neural activity became fairly normal.
His interpretation is that these violent predators can feel empathy, but often choose not to. They deploy the ability strategically in order to win over their victims and secure their trust, and then shut it down in order to swindle, rape and kill. So a mind-reading machine wouldn’t necessarily turn a psychopath into a creampuff. Instead, he might become an even more effective manipulator – more cunning, more perceptive, and harder to outwit.
The rest of us aren’t really so different: we also evade or down-regulate our mind-reading abilities when it becomes painful or inconvenient. In one 1988 experiment, psychologists in Canada set up a donation table in a busy corridor and monitored the pathways of passersby. If the table featured a picture of a dejected child, people veered far away from it, in order to avoid getting their heartstrings jerked. In another experiment, people told that a fellow participant had been given electric shocks downgraded their opinion of him – and justified it by concluding that he probably deserved it. Rather than feel his feelings, they found ways to emotionally distance themselves through rationalisations. In similar ways, people frequently underestimate the suffering of foreigners, people of other ethnicities, or prisoners.
These and other findings in the new science of empathy converge upon a new appreciation of how malleable empathy can be. It can be used for good or ill; it can be turned up or down. It is motivated, argues Jamil Zaki, the director of the Stanford Social Neuroscience Laboratory. ‘We tend to view it as something relatively automatic, but people exert control over their experiences of empathy,’ he says. Although it seems self-evident that people who feel more empathy will behave more morally, in practice there is only weak evidence that feeling someone else’s pain induces you to do something about it. Some data even indicates that people who sense others’ emotions most intensely tend to avoid situations that will expose them to deep suffering. Their own pain prevents them from helping those who need it the most.
Amplifying empathy is not even a sure-fire means of building trust or dissolving suspicion; other findings from empathy research suggest that encouraging people to consider the perspectives and thoughts of those they already distrust and dislike can backfire. ‘As a premise, it’s a terrible idea,’ says Zaki. ‘I don’t think that understanding what people are feeling would make you like them.’
He points to studies in which rivals are instructed to empathise with one another, which have the paradoxical effect of fostering unethical behaviour. In competitive negotiating scenarios devised by the psychologist Adam Galinsky of the Columbia Business School, for example, people who were told to think about the mindset of a rival became more likely to lie or cheat in order to win. Galinsky suspects this is because that act of mind-reading serves as a reminder that a rival is capable of being equally dishonest.
In other experiments, people asked to consider the feelings and perspectives of rival groups were more selfish, more intolerant, and judged outsiders more harshly. In a study pairing Mexican immigrants and white Americans, the neuroscientist Emile Bruneau of the Massachusetts Institute of Technology found that asking lower-status immigrants to take on the perspective of the dominant group tended to lower their opinions of the higher-status whites.
The more we know about empathy, the less it seems to guarantee moral rectitude. People generally feel more empathy toward members of their own racial, political or social ‘tribe’, and limit the amount they extend to outsiders. It directs you to respond to the needs of the person right in front of you and downgrade those who are abstract and far away. Empathy often biases you toward people who look and act like you, at the expense of those who do not. It is easy to manipulate, responding strongly to cuteness, proximity, or particularly heartbreaking details. ‘A morality based on empathy would lead to preferential treatment and grotesque crimes of omission,’ writes Jesse Prinz, a philosopher at the Graduate Center of the City University of New York.
A telepathy machine, if it could ever be built, would undoubtedly have wonderful applications. It could allow people who are immobilised by a stroke or neurological disease to communicate, or create incredible opportunities for artists to collaborate. But it seems unlikely that it could broadcast world peace. Empathy is too compromised, too complicated, and too subject to intentions and motivations to be a magic solution for our moral problems. It is far too human.
Zaki considers it extremely unlikely that a mind-reading machine could ever be built. (‘It’s a lot further on the horizon than people realise,’ he says.) He has a different invention in mind – a cognitive innovation rather than a technological one. As we come to understand the motivational nature of empathy, we should be able to figure out how to compensate for its limitations and biases. Zaki does not see empathy as a salve for all our moral problems, but he also believes that it is possible to make use of what it can do, which is provide a strong emotional motivation to act on behalf of others.
If we know that empathy favours the specific and familiar over the foreign and abstract, we can seek out, as our inspiration, personal details about someone far away who needs help. If empathy is easily overwhelmed and blocked by intense suffering, we could compensate by regulating how much information about tragedy we consume. In this way, we could hijack it, redirecting it away from in-group bias and toward morally courageous acts. We would strategically harness the power of what nature gave us – the remarkable ability to see into someone else’s mind and to feel what they are feeling – for the service of moral good.
Just as psychopaths turn down their empathy in order to prey upon people, we might learn to up-regulate empathy in exactly the right situations – to inspire us when abstract moral intentions aren’t enough. We can deliberately put ourselves in empathy’s way.
That might sound like a relatively modest goal, compared with past efforts that sought to read ghostly messages from the spiritual realm, or win the Cold War via psychic warfare, or use the internet to let neurons talk directly to one another. But this vision of telepathy – as a way to help people actually live up to the moral standards they believe in – could turn out to be exactly the one we need.