Neurociencia | Neuroscience




W


Why We Should Use Behavioral Economics to Design Technology That Doesn’t Kill Us [1667]

by System Administrator - Monday, February 15, 2016, 17:24
 

Why We Should Use Behavioral Economics to Design Technology That Doesn’t Kill Us

BY ALISON E. BERMAN

One hundred years ago, bad decision making accounted for less than 10 percent of human deaths. Nowadays, it accounts for a little over 44 percent. What has changed?

Behavioral economist Dan Ariely offered one explanation during his talk at Singularity University this month: With all the new technology we’ve invented, we’ve also created many new ways to kill ourselves.

Think about texting while driving, an activity that makes drivers six times more likely to cause an accident than driving drunk does. Most of us know it’s dangerous, but when we get behind the wheel and receive a new notification, we can’t seem to stop ourselves.

Though classical economics assumes humans are rational maximizers able to make wise decisions for our long-term health and wellbeing, research shows we’re not.

As Ariely puts it, we’re predictably irrational and programmed to prioritize daily desires and temptations over what’s best in the long term.

“In the future, we are all wonderful people,” says Ariely.

The problem is, we live in the present where we are horrible decision makers, with poor self-control—we eat the donut, forget to take our medication, and say we’ll put money in our savings account tomorrow. But tomorrow never happens.

Not only are humans poor decision makers, but the convenience of technology can tempt us away from making positive choices. Automated payment apps and marketplaces like the iTunes store aren’t designed to help us save for retirement, but to decrease decision-making time and increase feasibility—we click “buy” before considering where else our money could go.

“We create technology that hacks our ability to make good decisions, and the consequence is, we kill ourselves,” says Ariely.

It’s a frightening thought, but it’s not as grim as it sounds.

In fact, we can design technology to take into account the limits of human decision making and help us take actions that will benefit our future selves.

Positive behavior change is something technology companies across the board are taking on—from financial planning apps focused on putting savings away for college funds, to apps that help individuals with chronic diseases adhere to medication and healthy lifestyle routines.

Though it’s in our nature to make irrational choices, technology does not have to exploit this—rather, it can mend it.

Here are the two classical behavioral economics principles Ariely presented that, applied to technology, can support better decision making for the long haul.

1) Reward Substitution

Principle: Because we are not designed to care about what’s best for our future selves, we can create other, more immediate rewards to keep us on track. The rewards do not have to be large, only enough to help us stick with a positive behavior daily.

Example: Ariely points to global warming as the perfect example of this challenge. It is one of the greatest threats to humanity, yet most of the world is apathetic towards it because the dangers are too distant. On an individual level, saving for retirement is a similar long-term challenge. Using reward substitution, however, we can set incremental goals that are tied to short-term rewards in order to focus on smaller chunks of the savings journey.

Watch Ariely explain this principle in detail below.

2) Ulysses Contracts (or Self-Control Contracts)

Principle: Lower the dangers of weak self-control by stopping temptations before they arise.

Example: This idea shows up in Homer’s Odyssey. Ulysses (Odysseus in the Greek) orders his sailors to plug their ears and tie him to the mast so he can hear the sirens—whose deadly song lures ships onto the rocks—without wrecking his own ship. A present-day example is putting your cell phone in the trunk before you drive so you aren’t tempted to text behind the wheel.

Watch Ariely explain this principle in detail below.

Both principles show the power of creating and following through with rules, even in the face of our increasingly hyper-connected and impulse-driven lives.

As technology inserts itself into more areas of life, it is important to learn to identify which technologies have been designed with our best interests in mind, and which have been created to maximize our irrationality and fondness for making poor short-term choices.

The next time you look to download a new app or purchase the latest tech gadget, ask, “Has this been designed with my best interest in mind?” And if the answer is, “No,” consider putting your time and money elsewhere.

Image Credit: Shutterstock.com


Link: http://singularityhub.com


Why young people don’t want to run for office and how to change their minds [1301]

by System Administrator - Friday, July 10, 2015, 17:28
 

Just say run: How to overcome cynicism and inspire young people to run for office

Ask young people whether they would ever consider running for office and this is what you’ll hear:

  • Politicians are just liars.
  • Most politicians are hypocrites.
  • People in politics are two-faced.
  • It’s about lying, cheating, getting nothing done. That’s not how I want to spend my time.
  • I don’t even want to think about a career in politics.
  • I’d rather milk cows than run for office.

These are not hypothetical responses. They are the words of a handful of the more than 4,000 high school and college students we surveyed and interviewed for our new book, Running from Office: Why Young Americans Are Turned Off to Politics. Having come of age in a political context characterized by hyper-partisanship, gridlock, stalemate, and scandal, the overwhelming majority of 13 to 25 year olds view the political system as ineffective, broken, and downright nasty. As a consequence, nine out of 10 will not even consider running for office. They’d rather do almost anything else with their lives.

This should sound alarm bells about the health of our democracy. The United States has more than half a million elective positions. And most people who become candidates don’t go through life never thinking about politics and then wake up one morning and decide to throw their hats into the ring. The idea has usually been percolating for a long time, often since adolescence.

In the final chapter of the book, we offer a series of recommendations that could stimulate political ambition and help chart a new course. Here, we summarize three. They are all major endeavors and will require substantial funding and deep commitment from government officials, entrepreneurs, educators, and activists. But each has the potential to change young people’s attitudes toward politics. At the very least, we hope to trigger a national conversation about how to show the next generation that politics is about more than men behaving badly in the nation’s capital.

1. Launch the YouLead Initiative: Since John F. Kennedy signed an executive order in 1961 establishing the Peace Corps, the organization has sent hundreds of thousands of Americans abroad “to tackle the most pressing needs of people around the world.” AmeriCorps has deployed 800,000 Americans to meet similar domestic needs. And Teach for America has recruited thousands of citizens to “build the movement to eliminate educational inequity.” Together, these programs send a strong signal that the government values, and American society depends on, public service. If we want to put the next generation on the path to politics, then what better way than by demonstrating that running for office is just as valuable, effective, and noble a form of public service?

Whether developed as a government program, non-profit endeavor, or corporate project, a two-pronged national campaign—we call it the YouLead Initiative—could send a strong signal to young people that running for office is a worthwhile way to serve the community, country, and world. The first piece would entail a technologically savvy media campaign that changes perceptions of politics. When young people think about government, they conjure up images of self-interested, egotistical conservatives fighting self-interested, egotistical liberals in a broken system to the point of paralysis. Placing the spotlight on local and state-level leaders—most of whom are not professional politicians—would convey that many elected officials care about their communities and are making positive change. In addition, a series of fun public service announcements in which parents, teachers, public figures, and celebrities encourage young people to think about a future in politics would reinforce the message. Second, regional and state coordinators for YouLead could identify high school and college students who have already exhibited leadership success—those in student government, captains of sports teams, members of debate and mock trial teams, those participating in drama and music clubs. At a regional conference, they would be encouraged to channel their leadership capabilities into electoral politics. The program could even capitalize on their competitive spirit by hosting a national conference to which regional participants could apply.

2. Make Political Aptitude Part of the College Admission Process: The primary educational goal of most 12-17 year olds is to attend college (85% of the high school students we surveyed planned to go). But the five key ingredients in a college application—high school grades, standardized test scores, extra-curricular activities, personal essays, and letters of recommendation—make it entirely possible for students to apply to, and be accepted at, even the most prestigious schools without any political interest or knowledge. You can’t find Iraq on a map? That’s okay. You don’t know the name of the vice president? No big deal. You’re unfamiliar with which political party controls Congress? Don’t worry about it. Why not link political aptitude to the college application process, either in the form of a new component to the SAT or ACT, an additional exam, or an essay about public affairs? The vehicle is almost incidental. What matters is that it would force young people to take news and political information seriously.

Linking college admission, even in some small way, to political awareness could pay off. We found that young people with more exposure to politics—at home, at school, with their friends, and through the media—are far more likely to be interested in running for office. Sure, they see the same negative aspects of contemporary politics as everyone else. But they also see some examples of politicians behaving well, elected officials solving problems, and earnest, well-meaning candidates aspiring to improve their communities. The habit of staying politically informed might fade once students submit their college applications, but it might not. And there is no downside for colleges and universities to take the position that to be successful citizens, students must be connected to the world around them. In fact, a similar approach has generated a sense of volunteerism among many high school students. Of the roughly 75% of high school seniors who do some sort of community service, many start these efforts with the hope of “impressing” college admission officers.

3. Develop the GoRun App: We live in the era of the app. You can upload photos, request an Uber, find cheap airline tickets, locate the closest Mexican restaurant, or listen to your favorite music with the simple touch of an app. And young people do. Eighty-one percent of people under the age of 25 sleep with their phone next to them on the bed; 74% reach for their smartphones as the first thing they do when they wake up; and 97% of teens regularly use smartphones in the bathroom to check messages. There’s no activity, time of day, or location that is out of bounds for young people’s smartphone and app use. So, let’s take advantage of the digital world young people inhabit by creating an app that helps them identify political offices and informs them about how to run for them.

Surprisingly, it is quite difficult to find out what elected positions exist in any given community, let alone determine the responsibilities associated with each or the nuts and bolts involved in running for them. No central database houses this information. The GoRun app would allow users to enter an address and receive a complete list of all the elected positions representing that residence—from the local school board all the way up to president of the United States. Clicking on each position would result in a description of the office, a set of core responsibilities, and information about the logistics and rules required to run. Figuring out how to become a candidate would literally be at our fingertips. Educators could easily incorporate the app into their curricula. And young people who are even the least bit curious about how to run for office would not have to engage in a fact-finding mission. This easy-to-access information would showcase the thousands of electoral opportunities that have nothing to do with dysfunction in Washington, DC.
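
As a sketch of what the GoRun lookup might involve, here is a toy Python data model: an address (reduced here to a ZIP code) maps to the elected offices that represent it, each carrying a description and the rules for running. Every office name, field, and value below is a made-up placeholder; the authors' point is precisely that no such central database exists yet.

from dataclasses import dataclass

@dataclass
class Office:
    title: str
    level: str               # "local", "state", or "federal"
    responsibilities: str
    how_to_run: str          # filing deadlines, signature requirements, fees, etc.

# Toy lookup keyed by ZIP code; a real app would geocode a full address
# to its overlapping districts. All entries are illustrative placeholders.
OFFICES_BY_ZIP = {
    "55414": [
        Office("School Board Member", "local",
               "Sets district budget and policy",
               "File with the county clerk; gather petition signatures"),
        Office("State Representative", "state",
               "Writes and votes on state law",
               "File with the Secretary of State; pay a filing fee"),
        Office("President of the United States", "federal",
               "Heads the executive branch",
               "Ballot-access rules vary by state"),
    ],
}

def go_run(zip_code: str) -> None:
    # Print every office that represents the given ZIP code, with how to run for it.
    for office in OFFICES_BY_ZIP.get(zip_code, []):
        print(f"{office.title} ({office.level}): {office.responsibilities}. To run: {office.how_to_run}.")

go_run("55414")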

Our political system has done a number on young people. It has turned them off to the idea of running for office, discouraged them from aspiring to be elected leaders, and alienated them from even thinking about a career in politics. Steering a new course will be difficult, but being creative about how we do it is the only choice we have.

Link: http://www.brookings.edu


Wikipedia Deploys AI to Expand Its Ranks of Human Editors [1611]

by System Administrator - Sunday, December 6, 2015, 20:15
 

Wikipedia Deploys AI to Expand Its Ranks of Human Editors

by CADE METZ 

Aaron Halfaker just built an artificial intelligence engine designed to automatically analyze changes to Wikipedia.

Wikipedia is the online encyclopedia anyone can edit. In crowdsourcing the creation of an encyclopedia, the not-for-profit website forever changed the way we get information. It’s among the ten most-visited sites on the Internet, and it has swept tomes like World Book and Encyclopedia Britannica into the dustbin of history. But it’s not without flaws. If anyone can edit Wikipedia, anyone can mistakenly add bogus information. And anyone can vandalize the site, purposefully adding bogus information. Halfaker, a senior research scientist at the Wikimedia Foundation, the organization that oversees Wikipedia, built his AI engine as a way of identifying such vandalism.

"It turns out that the vast majority of vandalism is not very clever."

AARON HALFAKER, WIKIMEDIA

In one sense, this means less work for the volunteer editors who police Wikipedia’s articles. And it might seem like a step toward phasing these editors out, another example of AI replacing humans. But Halfaker’s project is actually an effort to increase human participation in Wikipedia. Although some predict that AI and robotics will replace as much as 47 percent of our jobs over the next 20 years, others believe that AI will also create a significant number of new jobs. This project is at least a small example of that dynamic at work.

“This project is one attempt to bring back the human element,” says Dario Taraborelli, Wikimedia’s head of research, “to allocate human attention where it’s most needed.”

Don’t Scare the Newbies

In the past, if you made a change to an important Wikipedia article, you often received an automated response saying you weren’t allowed to make the change. The system wouldn’t let you participate unless you followed a strict set of rules, and according to a study by Halfaker and various academics, this rigidity prevented many people from joining the ranks of regular Wikipedia editors. A 2009 study indicated that participation in the project had started to decline, just eight years after its founding.

“It’s because the newcomers don’t stick around,” Halfaker says. “Essentially, Wikipedians had traded efficiency of dealing with vandals and undesirable people coming into the wiki for actually offering a human experience to newcomers. The experience became this very robotic and negative experience.”

With his new AI project—dubbed the Objective Revision Evaluation Service, or ORES—Halfaker aims to boost participation by making Wikipedia more friendly to newbie editors. Using a set of open source machine learning algorithms known as scikit-learn—code freely available to the world at large—the service seeks to automatically identify blatant vandalism and separate it from well-intentioned changes. With a more nuanced view of new edits, the thinking goes, these algorithms can continue cracking down on vandals without chasing away legitimate participants. It’s not that Wikipedia needs to do away with automated tools to attract more human editors. It’s that Wikipedia needs better automated tools.

“We don’t have to flag good-faith edits the same way we flag bad-faith damaging edits,” says Halfaker, who used Wikipedia as the basis for his PhD work in the computer science department at the University of Minnesota.

In the grand scheme of things, the new AI algorithms are rather simple examples of machine learning. But they can be effective. They work by identifying certain words, variants of certain words, or particular keyboard patterns. For instance, they can spot unusually large blocks of characters. “Vandals tend to mash the keyboard and not put spaces in between their characters,” Halfaker says.

Halfaker acknowledges that the service isn’t going to catch every piece of vandalism, but he believes it can catch most. “We’re not going to catch a well-written hoax with these strategies,” he says. “But it turns out that the vast majority of vandalism is not very clever.”
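
As a rough illustration of the approach described above—scoring edits from word and character patterns using scikit-learn, the library named earlier—here is a minimal sketch in Python. The feature choice, toy training examples, and model are illustrative assumptions, not ORES's actual classifier.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Character n-grams pick up misspelled slurs, repeated letters, and
# "keyboard mashing" without a hand-written rule for each pattern.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)

# Toy labeled edits standing in for real revision diffs (1 = damaging).
edits = [
    "fixed a typo in the infobox",
    "added a citation to the 2006 journal article",
    "AAAAAAAAAAAAAA LOL U SUCK",
    "asdkjfhaskdjfhaksjdhfkajshdf",
]
labels = [0, 0, 1, 1]
model.fit(edits, labels)

# Score a new edit: estimated probability that it is damaging.
print(model.predict_proba(["zxcvzxcvzxcvzxcvzxcv"])[0, 1])

A production system would also score good-faith edits separately from damaging ones, which is exactly the distinction Halfaker draws above.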

Wikipedia Articles That Write Themselves?

Elsewhere, the giants of the Internet—Google, Facebook, Microsoft, and others—are embracing a new breed of machine learning known as deep learning. Using neural networks—networks of machines that approximate the web of neurons in the human brain—deep learning algorithms have proven adept at identifying photos, recognizing spoken words, and translating from one language to another. By feeding photos of a dog into a neural net, for instance, you can teach it to identify a dog.

With these same algorithms, researchers are also beginning to build systems that understand natural language—the everyday way that humans speak and write. By feeding neural nets scads of human dialogue, you can teach machines to carry on a conversation. By feeding them myriad news stories, you can teach machines to write their own articles. In these cases, neural nets are a long way from real proficiency. But they point towards a world where, say, machines can edit Wikipedia.

'"I'm not sure we'll ever get to the place where an algorithm will beat human judgment."

AARON HALFAKER, WIKIMEDIA

Link: http://www.wired.com


Will A.I. drive the human race off a cliff? [1304]

by System Administrator - Saturday, July 11, 2015, 23:36
 

Will A.I. drive the human race off a cliff?

By Sharon Gaudin

Artificial intelligence (A.I.) and machine learning have the potential to help people explore space, make our lives easier and cure deadly diseases.

But we need to be thinking about policies to prevent the technology from one day killing us all.

That's the general consensus from a panel discussion in Washington D.C. today sponsored by the Information Technology and Innovation Foundation.

"When will we reach general purpose intelligence?" said Stuart Russell, a professor of electrical engineering and computer sciences at U.C. Berkeley. "We're all working on pieces of it.... If we succeed, we'll drive the human race off the cliff, but we kind of hope we'll run out of gas before we get to the cliff. That doesn't seem like a very good plan.... Maybe we need to steer in a different direction."

Russell was one of the five speakers on the panel today that took on questions about A.I. and fears that the technology could one day become smarter than humans and run amok.

Just within the last year, high-tech entrepreneur Elon Musk and the world's most renowned physicist Stephen Hawking have both publicly warned about the rise of smart machines.

Hawking, who wrote A Brief History of Time, said in May that robots with artificial intelligence could outpace humans within the next 100 years. Late last year, he was even more blunt: "The development of full artificial intelligence could spell the end of the human race."

Musk, CEO of SpaceX as well as CEO of electric car maker Tesla Motors, also got a lot of attention last October when he said A.I. threatens humans. "With artificial intelligence, we are summoning the demon," Musk said during an MIT symposium at which he also called A.I. humanity's biggest existential threat. "In all those stories with the guy with the pentagram and the holy water, ...he's sure he can control the demon. It doesn't work out."

With movies like The Terminator and the TV series Battlestar Galactica, many people think of super intelligent, super powerful and human-hating robots when they think about A.I. Many researchers, though, point out that A.I. and machine learning are already used for Google Maps, Apple's Siri and Google's self-driving cars.

As for fully autonomous robots, that could be 50 years in the future -- and self-aware robots could be twice as far out, though it's impossible at this point to predict how technology will evolve.

"Our current A.I. systems are very limited in scope," said Manuela Veloso, a professor of computer science at Carnegie Mellon University, speaking on today's panel. "If we have robots that play soccer very well by 2050, they will only know how to play soccer. They won't know how to scramble eggs or speak languages or even walk down the corridor and turn left or right."

Robert D. Atkinson, president of the Information Technology and Innovation Foundation, said people are being "overly optimistic" about how soon scientists will build autonomous, self-aware systems. "I think we'll have incredibly intelligent machines but not intentionality," he said. "We won't have that for a very long, long time, so let's worry about it for a very long, long time."

Even so, Russell said scientists should be focused on, and talking about, what they are building for the future. “The arguments are fairly persuasive that there’s a threat to building machines that are more capable than us,” he added. “If it’s a threat to the human race, it’s because we make it that way. Right now, there isn’t enough work on making sure it’s not a threat to the human race.”

There's an answer for this, according to Veloso.

"The solution is to have people become better people and use technology for good," she said. "Texting is dangerous. People text while driving, which leads to accidents, but no one says, 'Let's remove texting from cell phones.' We can weigh this danger and make policy about texting and driving to keep the benefit of the technology available to the rest of the world."

It's also important to remember the potential benefits of A.I., she added.

Veloso pointed to the CoBot robots working on campus at Carnegie Mellon. The autonomous robots move around on wheels, guide visitors to where they need to go and ferry documents or snacks to people working there.

"I don't know if Elon Musk or Stephen Hawking know about these things, but I know these are significant advances," she said. "We are reaching a point where they are going to become a benefit to people. We'll have machines that will help people in their daily lives.... We need research on safety and coexistence. Machines shouldn't be outside the scope of humankind, but inside the scope of humankind. We'll have humans, dogs, cats and robots."

Link: http://www.computerworld.com


Will Artificial Intelligence Transform How We Grow and Consume Food? [1497]

by System Administrator - Thursday, October 8, 2015, 23:53
 

Will Artificial Intelligence Transform How We Grow and Consume Food? [Video]

By David J. Hill

Today, agriculture is more efficient than ever, but it's also more dependent on environmental, technological, and social issues like never before. Climate change, drought and other disasters, shifting energy landscapes, population growth, urbanization, GMOs, changes in the workforce, automation — these are just a handful of the factors that affect the global access to food.

We'd all like to see a future in which people worldwide have a sufficient amount of safe and nutritious food to help them maintain healthy and active lives. Is this realistic in our lifetimes?

To borrow a phrase, artificial intelligence is eating the world, and it can help move us closer to this future of abundant food. Neil Jacobstein, Artificial Intelligence & Robotics Co-Chair at Singularity University, explains what AI can do for the future of food, as part of a video series on food as a Global Grand Challenge.

 Video

[image courtesy of Shutterstock]

 

 


Wireless AI Device Tracks and Zaps the Brain, Takes Aim at Parkinson’s [1533]

by System Administrator - Wednesday, October 28, 2015, 14:24
 

Wireless AI Device Tracks and Zaps the Brain, Takes Aim at Parkinson’s

BY SHELLY FAN

Zapping the brain with implanted electrodes may sound like a ridiculously dangerous treatment, but for many patients with Parkinson’s disease, deep brain stimulation (DBS) is their only relief.

The procedure starts with open-skull surgery. Guided by MRI images, surgeons implant electrodes into deep-seated brain regions that contain malfunctioning neural networks. By rapidly delivering electrical pulses, DBS can dampen — or completely quiet — the severe motor tremors that invade Parkinson’s patients’ lives.

 

Yet getting the best result out of DBS is an infuriating process of trial-and-error. To fit the stimulation to each patient’s needs, clinicians often repeatedly tweak the treatment’s many parameters, such as amplitude, frequency, and how long each stimulation lasts. Feedback is based on the patient’s behavioral response, which is often subjective, and as the disorder progresses a program that works today may lose its therapeutic effects tomorrow.

A major cause of all this guesswork is that current generation devices can’t record how the brain is responding to the treatment, which leaves everyone in the dark.

It’s a costly problem that’s expected to spread.

In addition to Parkinson’s, DBS is being tested as a potential treatment for obsessive-compulsive disorder, Tourette’s syndrome, treatment-resistant depression and even Alzheimer’s disease. Despite its promising results, little is known about how electrical pulses work on neural networks to change behavior.

But now, a team led by Dr. Kendall Lee at the Mayo Clinic in Rochester, MN, engineered a closed-loop, wireless device called WINCS Harmony that can simultaneously measure neurotransmitter levels from multiple brain regions and adjust its stimulation pattern accordingly in real time.

Given that neurons communicate via electrical and chemical signaling, neurotransmitter levels act as a proxy for treatment efficacy. Combined with sophisticated artificial neural networks, this information fine-tunes the stimulation process automatically.

Researchers also gain insight into the mysterious mechanisms behind DBS that have eluded the field so far.

“It’s really a game-changer,” says Dr. Karen Davis, a neuroengineer at the Toronto Western Hospital, who uses DBS for pain management.

The team presented their results this week at Neuroscience 2015 in Chicago, the largest annual international gathering of neuroscientists, organized by the Society for Neuroscience.

Closing The Loop

Harmony builds upon previous DBS technology that’s already been approved by the FDA for human use.

Previous devices have tried to capture stimulation-induced neural feedback by recording the neurons’ electrical responses, says Dr. J. Luis Lujan, the lead author of the study. The problem was that the signals from the stimulating and recording electrodes heavily interfered with each other.

 

It’s a common and terrible problem, says Lujan; the data was far too messy to use.

Instead, the team turned to fast-scan cyclic voltammetry, a chemical sensing technique originally developed for animal research. Every 10 milliseconds or so, the device applies a local voltage charge, which transiently pulls electrons out of neurotransmitters in the area. This generates a small electrical current that can be picked up by the electrode.

Since each neurotransmitter produces a unique current signature, the recordings can both identify what type it is and estimate its concentration.

This data is then wirelessly fed into a single-layer artificial neural network, which uses the electrical and chemical patterns as feedback to tweak the weight of each node in the network. This in turn changes DBS parameters to keep the brain in an optimally functional state.

As proof-of-concept, the team tested their device on three rats by measuring local dopamine levels in a brain region called the striatum.

“Dopamine is involved in many disorders that we want to treat with DBS, such as Parkinson’s,” said Lujan, “that’s why we tried it first.” But the device can also work on other transmitters such as serotonin, which is involved in depression.

By using data from 25 stimulation trials as the training set for the artificial neural network, the team showed that the device rapidly adjusted its stimulation patterns to reach a predefined optimal level. The system was highly resistant to errors: when researchers deliberately began the stimulation using an off-target pattern, Harmony rapidly adjusted and brought itself back on course.
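
The closed-loop idea can be sketched in a few lines of Python: stimulate, measure a neurotransmitter proxy, and let a single layer of adaptive weights nudge the stimulation parameters toward a predefined target, as in the rat experiment described above. Everything here—the response model, the numbers, the update rule—is an invented illustration under those assumptions, not the WINCS Harmony algorithm.

import numpy as np

rng = np.random.default_rng(0)
target = 1.0                      # predefined optimal dopamine level (arbitrary units)
params = np.array([0.5, 0.5])     # [amplitude, frequency], normalized to 0..1
weights = np.array([0.2, 0.1])    # learned sensitivity of dopamine to each parameter
lr = 0.05

def measure_dopamine(p):
    # Stand-in for fast-scan cyclic voltammetry: the response rises with both
    # stimulation parameters, plus a little measurement noise.
    return 0.8 * p[0] + 0.6 * p[1] + rng.normal(0, 0.02)

for step in range(200):
    error = target - measure_dopamine(params)
    params = np.clip(params + lr * error * weights, 0.0, 1.0)  # adjust stimulation toward the target
    weights += 0.01 * error * params                            # adapt the single layer of weights from feedback

print("parameters:", params.round(2), "dopamine:", round(measure_dopamine(params), 2))

Start the loop with a deliberately off-target setting and it still converges, which is the error-resistance behavior the rat experiment reports.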

Looking ahead

Right now, in order for DBS to work, it has to be on 24-7, says Lujan. But patients don’t exhibit symptoms all the time, so we’re likely overstimulating the brain. Since we still don’t really understand what DBS is doing in the brain, we may be inadvertently damaging other brain functions without realizing.

 

Harmony may also help illuminate what’s malfunctioning in the brain at the exact time point when patients exhibit symptoms.

For example, we can observe chemical signatures in the brain of Parkinson’s patients when they display tremors and compare to when they do not. That’s extremely valuable information, says Lujan.

Similarly, Harmony may also be able to monitor the brain for telltale signs that a bipolar patient is entering a manic episode, and automatically produce the stimulation pattern needed to stop the attack before it strikes.

The results are highly promising, but it will be a few years before we can start testing the device in humans, admitted Lujan. The team is working on making the lunch box-sized device smaller, so that it can be directly implanted into the brain along with the electrodes. To reduce the number of recurrent brain surgeries, the device also has to be made more durable.

Then there’s basic neurobiology. Our current understanding of the brain networks underlying disorders such as depression is still relatively primitive.

But the team is hopeful.

Devices like Harmony are one of the best tools to help us understand how malfunctioning neural networks misfire and what DBS does to the brain, says Lujan. We’re starting with Parkinson’s disease because we know far more about the networks involved, and since the behavioral outcomes are easily observable motor symptoms, they are highly objective and easy to measure.

That doesn’t mean we’re scared to tackle the other ones though, laughed Lujan. We just want to push this to human patients as soon as possible.

“Do we have all the answers? Of course not!” he said. “But now we have the tools to figure it out.”

Image Credit: Shutterstock.com


Women After All [1110]

by System Administrator - Wednesday, February 18, 2015, 18:33
 

Book Excerpt from Women After All

In the introduction to his latest book, author Melvin Konner explains why he considers maleness a departure from normal physiology.

By Melvin Konner

There is a birth defect that is surprisingly common, due to a change in a key pair of chromosomes. In the normal condition the two look the same, but in this disorder one is shrunken beyond recognition. The result is shortened life span, higher mortality at all ages, an inability to reproduce, premature hair loss, and brain defects variously resulting in attention deficit, hyperactivity, conduct disorder, hypersexuality, and an enormous excess of both outward and self-directed aggression. The main physiological mechanism is androgen poisoning, although there may be others. I call it the X-chromosome deficiency syndrome, and a stunning 49 percent of the human species is affected.

It is also called maleness.

My choice to call being male a syndrome and to consider it less normal than the usual alternative is not (as I will show you) an arbitrary moral judgment. It is based on evolution, physiology, development, and susceptibility to disease. Once in our distant past, all of our ancestors could reproduce from their own bodies; in other words, we were all basically female. When biologists ask why sex evolved, they are not asking rhetorically—the fact that sex feels good was a valuable addition. What they are really asking is: Why did those self-sufficient females invent males? It had to be a very big reason, since they were bringing in a whole new cast of characters that took up space and ate their fill, not to mention being quite annoying, but could not themselves realize the goal of evolution: creating new life.

We’ll consider this in chapter 2, but briefly, the best answer to the puzzle seems to be: to escape being wiped out by germs. When you make new life on your own, you basically clone yourself, and ultimately lots of your offspring and relatives have the same genes. The germ that gets one of you gets you all. Create males, and in due course there is much more variation. Mate with a male that’s a bit different from you, and you produce a creature different from both of you. Result: germs confounded. Meanwhile, you export the fiercest part of the competition. You do the reproducing, he doesn’t (except for his teensy donation), so he can duke it out with the other males and they can evolve faster. Your daughters inherit the variation, and they compete and evolve, too.

But it turns out you have created a sort of Frankenstein monster, after a certain point hard to control. Consider the lowly, graceful water striders that scoot over pond tops in summer. Females signal that they are ready to mate by causing ripples of a certain frequency to billow out in the water, and the ripples turn males on. But the females don’t take all comers. Female choice is vital. Males that don’t rate, they drive away. Yet males have their ways. They have evolved grasping antennae, perfectly shaped to get a grip on the female’s head. A male approaches from behind and secures his hold, then flips her and himself upside down. Using his rear legs, he positions their bodies. If he gets this far, she stops resisting. He is the one. Or one of the ones, at any rate. She mates several times a day and seems to play males against each other.

This is not an allegory of human mating; it is an illumination, more parallel than parable. Female choice is crucial in humans, too, but males didn’t evolve grasping antennae. They evolved strategies of seduction, including romance, patience, persistence, gifts, help, verbal praise, argument, promises, threats, family influence, and deception. Human females have protected themselves with skepticism, social alliances, and a tendency to stay aloof and keep men guessing. The man who talks the best game has usually convinced himself first, and (unlike the water striders, which do it physically) you might say they emotionally flip for each other. Sometimes males use force. In this they rely on superior physical strength, gained through eons of competition with other males for access to those very selective females.

Women, of course, compete as well, against men and among themselves, also with skills honed over eons. But the need to reproduce, with all its risk and cost, has kept them relatively levelheaded and dubious of men’s schemes. For most of the history of sexual reproduction, females have often stood by while males fought over them, physically or otherwise. They know that they won’t always be able to tell a lifelong pal from a sperm donor, and in many species one good sperm is all they want. But they, too, have to reproduce, and that means tolerating uncertainty and being prepared for contingencies. For us humans, the trouble is that men’s competitive antics and untold ages of imposing their will on women have created a world in peril from their rivalries. Females, whether water striders or women, might be forgiven for looking back with a jaded eye on whichever ancestor it was that gave birth to the first male.

Women have always had to struggle for equality, even in the small hunter-gatherer bands we evolved in. Yet with further cultural evolution, it got worse. With the rise of what we like to call civilization, men’s superior muscle fostered a vast military, economic, and political conspiracy, enabling them to exclude women from leading roles. Jealousy of women’s power to give sex—and, more importantly, to give life—led men to build worlds upon and against them for millennia. Or as Camille Paglia put it in Sexual Personae, “Male bonding and patriarchy were the recourse to which man was forced by his terrible sense of woman’s power.” Appealing myths about Amazons are just that: myths. Only women whose fathers, sons, or husbands gave them the scepters of power could wield it, and then only temporarily. Even in matrilineal societies, men had most of the power. The result was ten millennia in which we squandered half of the best talent in the human race. Brawn mattered for those one hundred centuries, but in spite of their greater strength, men had to make laws to suppress women, because on a truly level playing field, women were destined to compete successfully and very often win.

That is the other meaning of the quote from de Beauvoir: “The problem of woman has always been a problem of men.” Although I don’t agree with her that all differences between men and women are culturally determined, I fully accept that the majority of the differences we have seen throughout history are caused by male supremacy and the subordination of women. History is written by the victors, and the victors in the battle between the sexes have for many centuries been males; of course they have defined women downward and have invented and promulgated an “essential” inferiority of women as a part of femininity itself. That is the part that is not at all inherent in biology; rather, it is, literally, a man-made myth.

But millennial male dominance is about to come to an end. Glass ceilings are splintering into countless shards of light, and women are climbing male power pyramids in every domain of life. Even in the world’s most sexist societies, women and girls form a fundamentally subversive group that, as communications technology shows them other women’s freedoms, will undermine age-old male conceit and give them the sway of the majority they are.

The freer and more educated girls and women become, the fewer children they have; men are proven obstacles to family planning. Even in the poorest lands, the increasing availability of women’s suffrage, health services, microloans, and savings programs is giving them control over their destinies. As soon as that happens, they reduce the size and poverty of their families. It becomes clearer every year that the best way to spend an aid dollar in the developing world is to educate and empower women and girls. The consequences are manifold.

Replacing quantity with quality in childbearing will not save just women, or even just struggling, impoverished countries. It will save the planet and make it habitable for our species. It will greatly reduce the necessity for violence of all kinds, as it has already begun to do. Male domination has outlived any purpose it may once have had. Perhaps it played some role in our success as a species so far, but now it is an obstacle. Empowering women is the next step in human evolution, and as the uniquely endowed creatures we are, we can choose to help bring it about.

Excerpted from Women After All: Sex, Evolution, and the End of Male Supremacy by Melvin Konner. Copyright © 2015 by Melvin Konner. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.


Workplace Bullying a Costly Epidemic in the Enterprise [985]

by System Administrator - Monday, November 10, 2014, 19:33
 

Workplace Bullying a Costly Epidemic in the Enterprise

By Sharon Florentine

Workplace bullying opens your organization up to poor productivity, lower retention rates and possible legal action. And it's not an isolated issue - the workplace statistics are shocking. Is your culture cultivating a bully mentality?

Old bullies never die, they just get … promoted. And older doesn’t always mean wiser. Those bullies you remember from your school days don’t always grow out of that behavior. Many, in fact, carry it with them into the workplace.

If you think bullying isn’t happening in your organization, think again. According to a Zogby poll commissioned by the Workplace Bullying Institute (WBI) in January 2014, 27 percent of the 1,000 U.S. workers surveyed had been the target of bullying; an additional 21 percent had witnessed an incident or incidents of bullying in the workplace.

A recent Forbes article reported that an alarming number of respondents, 96 percent, admitted to being bullied in the workplace.

The issue is so prevalent that Gary Namie and his wife, Ruth Namie, created the Workplace Bullying Institute (WBI), an organization dedicated to eradicating workplace bullying.

Defining Workplace Bullying

Different people may have different ideas about what workplace bullying means, but the WBI offers these thoughts: "We have a fairly high threshold for the definition of bullying; we define it as repeated mistreatment and abusive conduct that is threatening, humiliating, or intimidating, work sabotage or verbal abuse. Even so, we consider it something of an epidemic," Gary Namie says.

Bullying Has Widespread Organizational Impact

Bullying in the workplace affects more than just the individual targeted. It has negative effects on an entire organization, according to Namie and WBI data. "Victims suffer from depression, anxiety and panic. They take more sick days, resulting in higher rates of absenteeism. They have higher rates of stress-related health problems, increasing employers' healthcare costs. They aren't as motivated, engaged or productive - why would they be?" says Namie.

These individuals who are bullied are more likely to leave your organization and they certainly aren't going to recommend your company to their talented friends, family or professional contacts.

Julie Moriarity, general manager of corporate training and communications strategy at The Network, an enterprise governance, risk and compliance firm that works with organizations’ ethics and compliance teams to prevent bullying, agrees. “Bullying has a hugely negative impact on an organization as a whole. Whether you’re a direct victim or whether you’re a witness, it’s going to impact your ability to work in teams, it decreases productivity and it can affect businesses’ ability to recruit and retain talent. A lot of the best employees come through referrals, and no one’s going to refer their friends, family, colleagues to an abusive work environment,” says Moriarity. In extreme situations, this can create a PR nightmare for businesses, which also may be subjected to expensive, high-visibility lawsuits if victims choose to prosecute their attackers.

Why Aren't Businesses Putting a Stop to It?

So, why aren't businesses doing more to stop bullying? One of the most obvious answers is that they're unaware that it's happening, says Moriarity. Victims may be unwilling to report bullying if it's being done by their supervisor or by a colleague for several reasons, not the least of which is fear of being viewed as a troublemaker and losing their job.

"People don't have the courage or the power to be able to stand up to bullies in the workplace, because they're afraid of losing their jobs. Especially in a recovering economy, it can be terrifying to think of losing their livelihood if they're not believed," says Moriarity.

This fear is not unfounded, according to WBI data. Fifty-six percent of reported bullies are the victim's boss; 33 percent report that a coworker is the bully. "Most of the time, the bully is in a position of power over the victim. The message is, 'I can treat you however I want, and you have to put up with it or you'll be out on the street - and don't even think about asking for an employment reference,'" says Namie.

What's worse, according to Moriarity, is the fear of retaliation by the bullies themselves, if they discover they've been reported, but no corporate action is taken. "If bullying is reported and an organization doesn't respond, what happens? The abuse escalates. It gets worse. Often, whistleblowers face retaliation, not just by their bullies either; sometimes being fired from their job because they're seen as a troublemaker," says Moriarity.

In her role at The Network, Moriarity consults with businesses about the need to create a culture that does not tolerate bullying and, more importantly, to consistently, quickly and forcefully deal with bullies while protecting victims brave enough to speak out.

"In the ethics and compliance industry, we talk a lot about creating this kind of culture, but you can't just talk about it, you can't just pay lip service to the idea of having a respectful, safe and open workplace. If you don't have rules in place to enforce that, it doesn't do any good," says Moriarity.

The Catch-22: Bullies Are Often High Performers

What makes the situation even more complex is that some organizations are actually benefiting from bullying behavior, Namie says. There’s a fine line between being an aggressive, hard-driven, high-performing go-getter who brings in profits and closes deals and being a bully—particularly when a worker has used aggressive bullying tactics in the past and been rewarded for the behavior.

 

"Many of these bullies are praised for similar behavior, under different circumstances. They're pushy, they're ruthless, cunning and they'll do whatever it takes to get ahead, to win - and that helps them succeed and the business succeed," says Namie.

"It's true more often than not in the workplace that bullies 'fail up.' The challenge in a workplace is that a lot of times, the same qualities that make aggressive, nasty bullies also make them quite good at their job. If they're high performers, great salespeople, major earners, that's all the company's looking at. And, so, the victim's fighting an uphill battle to get the company to see past that person's performance," Moriarity says.

How to Address the Bullying Issue

One way to address the problem is through a corporate ethics and compliance officer, or a team dedicated to that function. However, this position or team must report either to the CEO, the highest echelons of leadership, or to a board of directors to ensure it's operating apolitically, fairly and objectively, according to Moriarity.

"An ethics officer, or a team of ethics and compliance specialists, must be an independent body; they have to be able to make hard, objective decisions - like firing or disciplining a bully - and have those enforced without being overruled or dismissed because of office politics or financial concerns," says Moriarity.

Organizations should have a reporting process in place, too, and be prepared to offer anonymity for victims. However, doing so can make investigating claims of bullying more difficult.

"Most organizations would prefer you go to your manager, but if that's not possible - sometimes, they are the bully - then, to Human Resources. Public companies often have a hotline or a reporting structure in place so they can track complaints and work to address them," says Moriarity.

If those options aren't available, or aren't effective, there may be legal recourse victims can take outside of the corporate structure, Namie says.

Laws Compel Action to Protect Victims

Namie and WBI have introduced legislation in many states to help victims take legal action against businesses that turn a blind eye to bullying. For example, WBI has introduced the Healthy Workplace Bill, which sets out a clear definition of workplace bullying and protects both employers and employees. Here are some more details on the Healthy Workplace Bill.

  • What the HWB Does for Employers
    • Precisely defines an "abusive work environment" -- it is a high standard for misconduct
    • Requires proof of health harm by licensed health or mental health professionals
    • Protects conscientious employers from vicarious liability risk when internal correction and prevention mechanisms are in effect
    • Gives employers the reason to terminate or sanction offenders
    • Requires plaintiffs to use private attorneys
    • Plugs the gaps in current state and federal civil rights protections
  • What the HWB Does for Workers
    • Provides an avenue for legal redress for health harming cruelty at work
    • Allows you to sue the bully as an individual
    • Holds the employer accountable
    • Seeks restoration of lost wages and benefits
    • Compels employers to prevent and correct future instances

In the absence of a legal imperative, WBI statistics show that even when bullies are reported, approximately 44 percent of businesses do not take any action to rectify the situation. This willful ignorance might help companies in the short term, but it ignores the long-term effects and far-reaching impact of the behavior, according to Moriarity.

"Businesses that do nothing are trading short-term gain for long-term pain, as I like to say. By the time they understand the damage that's being done to their productivity, their retention and their public reputation, it's much too late. Employees won't ever forget that; they'll always remember that your company chose to do what was financially right over what was morally right, and your business will suffer for it," says Moriarity.

Link: http://www.cio.com

 


World’s Data Could Fit on a Teaspoon-Sized DNA Hard Drive and Survive Thousands of Years [1124]

by System Administrator - Tuesday, February 24, 2015, 17:41
 

World’s Data Could Fit on a Teaspoon-Sized DNA Hard Drive and Survive Thousands of Years

BY JASON DORRIER

The blueprint of every living thing on the planet is encoded in DNA. We know the stuff can hold a lot of information. But how much is a lot? We could theoretically encode the world's data (from emails to albums, movies to novels) on just a few grams of DNA. DNA already preserves life itself—now it might also preserve life as we live it.

According to New Scientist, a gram of DNA could theoretically store 455 exabytes of data. And Quartz drives the point home. If the world has about 1.8 zettabytes of data, according to a 2011 estimate, all the world's information would fit on a four-gram DNA hard drive the size of a teaspoon.

 

Svalbard Global Seed Vault in Norway.

But why choose DNA (other than its massive storage potential)? Because in the right conditions, DNA can survive for thousands of years. Long past the time traditional hard drives have degraded.

Scientists at the Swiss Federal Institute of Technology in Zurich set out to find out just how long DNA might last.

Encapsulated in tiny, dry glass spheres, the researchers say that DNA kept at a temperature of 10 °C would remain uncorrupted (and the data readable) for 2,000 years. At even lower temperatures—like those kept at Norway's Svalbard Global Seed Vault—the data's longevity jumps to two million years.

But here's the kicker. Preserving data in DNA is very, very expensive. The Swiss researchers encoded 83 kilobytes—the medieval Swiss federal charter and the Archimedes Palimpsest—at a cost of $1,500. There are nearly two quintillion kilobytes in the world's 1.8 zettabytes. We could mortgage the global economy—and still be woefully short of cash.
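
A quick back-of-the-envelope check of these figures, sketched in Python using the numbers quoted above (and assuming a kilobyte is 1,024 bytes; the choice barely changes the result):

EXABYTE = 10**18
ZETTABYTE = 10**21

density = 455 * EXABYTE           # bytes per gram of DNA (New Scientist estimate)
world_data = 1.8 * ZETTABYTE      # bytes of data in the world (2011 estimate)

print(world_data / density, "grams of DNA")                      # ~4 grams, roughly a teaspoon

# Cost scaling from the Swiss experiment: 83 kilobytes encoded for $1,500.
cost_per_byte = 1500 / (83 * 1024)
print(f"${world_data * cost_per_byte:.1e} to encode everything")  # on the order of 10**19 dollars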

But it's a pretty fascinating idea. More research and improving techniques may bring the cost closer to Earth. And of course, in practice, we would be far more judicious about what information we encode—using the technique as a time capsule or the ultimate backup for the modern world's most critical information.

Learn more about preserving humankind's data on DNA hard drives at Quartz and New Scientist.

Image Credit: Shutterstock.com; Mari Tefre/Svalbard Global Seed Vault


Worm ‘Brain’ Uploaded Into Lego Robot [1018]

by System Administrator - Monday, December 15, 2014, 22:08
 

Worm ‘Brain’ Uploaded Into Lego Robot

BY JASON DORRIER

Can a digitally simulated brain on a computer perform tasks just like the real thing?

For simple commands, the answer, it would seem, is yes it can. Researchers at the OpenWorm project recently hooked a simulated worm brain to a wheeled robot. Without being explicitly programmed to do so, the robot moved back and forth and avoided objects—driven only by the interplay of external stimuli and digital neurons.

 

While there are already similarly capable robots using traditional software, the research shows a digitally simulated brain can behave like its biological analog, and the demonstration has implications for big brain projects.

The BRAIN Initiative in the US and the Human Brain Project in Europe aim to map the human brain’s connections and, one day, to simulate the brain digitally. Such a simulation might yield insights into disease or breakthroughs in computer science.

But when it comes to simulating brains in silico, it's sensible to start simple. The OpenWorm project's simulated brain is based on the lowly C. elegans roundworm.

 

C. elegans.

C. elegans is an eminently humble creature, and for that reason, an extensively researched one. Scientists published the first map of the synaptic connections, or connectome, of the brain of C. elegans in 1986 and a refined draft in 2006.

The worm’s brain contains 302 neurons and 7,000 synapses. The human brain, in comparison, has 86 billion neurons and 100 trillion synapses. Whether we’ll ever fully map the human brain (or should) is a hotly debated topic.

But since we’ve already mapped the C. elegans connectome—the researchers at OpenWorm thought they’d feed it stimuli using a few external sensors and give it a robotic body to carry out whatever motor instructions the brain provided.

The robot, as you can see in the video, moves a little like a Roomba, with one critical distinction—the Roomba’s collision avoidance mechanism was written in by programmers. The OpenWorm bot’s movements, on the other hand, were not.

How does it work? The brain cells in the worm’s connectome are labeled sensory neurons, motor neurons, and interneurons (connecting the two). The OpenWorm team simulated these neurons and their connections in software.

The digital neurons sum input signals and fire when they exceed a threshold (similar to but not exactly like the real thing).

Sensory neurons link to the robot’s sensors—a sonar sensor, for example, stands in for the worm’s nose. And the sim’s motor neurons drive the robot’s right and left motors as if they were right and left groups of muscles.

The fascinating thing? The robot behaves much like a real worm would, given similar sensory stimulation—tripping the nose sensor halts forward progress, touching the front and rear sensors makes the robot move forward and back.
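
Here is a minimal Python sketch of the mechanism just described: neurons that sum weighted inputs and fire past a threshold, a sensor standing in for the worm's nose, and motor neurons driving the wheels. The three-neuron wiring below is invented for illustration; the real OpenWorm model uses the mapped 302-neuron connectome.

import random

class Neuron:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.inputs = []            # list of (source, synaptic weight) pairs
        self.output = 0.0

    def connect(self, source, weight):
        self.inputs.append((source, weight))

    def update(self):
        # Sum weighted inputs and fire (output 1.0) if the total clears the threshold.
        total = sum(src.output * w for src, w in self.inputs)
        self.output = 1.0 if total >= self.threshold else 0.0

class Sensor:
    def __init__(self, value=0.0):
        self.output = value

nose = Sensor()                     # sonar standing in for the worm's nose
drive = Sensor(1.0)                 # constant "move forward" drive
avoid = Neuron()                    # interneuron: "obstacle ahead"
left_motor, right_motor = Neuron(), Neuron()

avoid.connect(nose, 1.0)
left_motor.connect(drive, 1.0)
right_motor.connect(drive, 1.0)
left_motor.connect(avoid, 0.5)      # obstacle: keep the left wheel turning...
right_motor.connect(avoid, -1.0)    # ...and stall the right wheel, so the robot turns away

for step in range(5):
    nose.output = 1.0 if random.random() < 0.4 else 0.0        # simulated sonar hit
    for n in (avoid, left_motor, right_motor):                 # interneuron first, then motors
        n.update()
    print(f"obstacle={nose.output:.0f}  left={left_motor.output:.0f}  right={right_motor.output:.0f}")

Note that the avoidance behavior is not written as an explicit rule on the robot side; it falls out of the wiring and thresholds, which is the point the OpenWorm demonstration makes at far larger scale.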

Now, the simulation isn’t perfect, and the robot doesn’t have every sensory input the real worm might have, but the OpenWorm bot seems to show that a stimulated digital brain might behave like a biological brain does—and we might not have to understand it in detail to make it work. That is, behaviors might emerge of their own accord.

In this example, we’re talking very simple behaviors. But could the result scale? That is, if you map a human brain with similarly high fidelity and supply it with stimulation in a virtual or physical environment—would some of the characteristics we associate with human brains independently emerge? Might that include creativity and consciousness?

There’s only one way to find out.

 Building the first digital life form. Open source.

Image Credit: Shutterstock.com; Kbradnam/Wikimedia Commons

