Neuroscience




Why companies fail to train future leaders [1624]

by System Administrator - Sunday, 27 December 2015, 19:42
 

The person anchoring the initiative should be a thought leader, not a programme coordinator. Photo: iStockphoto

Why companies fail to train future leaders

by Hema Ravichandar

A lack of real commitment and an entitlement culture are some of the reasons firms are unable to develop a leadership pipeline

The results of our first global benchmarking study conducted on the Baldrige framework had just come in, at the turn of the century. Software services firm Infosys had only recently listed on Nasdaq in 1999 and was eager to check out the robustness of its management practices for performance excellence on the framework’s key dimensions, including leadership, strategy, customers and knowledge management.

The results drove home the criticality of formal leadership development to create a strong pipeline of leaders to take the organization into the next decade and beyond of blistering growth. The emphasis was, first, on defining and deploying a leadership system and, then, on developing, mandating and institutionalizing leadership training for all levels of leaders. The responsibility for this focused effort was assigned directly to the chief executive officer (CEO), a true reflection of the importance given to leadership development in the framework.

Zoom out from the microcosm of one organization, to the macrocosm. In a recent worldwide survey of nearly 500 CEOs carried out by management consulting firm McKinsey & Co., 83% of executives felt that leadership development was one of the top three current and future priorities in their organizations. But wait a minute—only 7% felt that their organizations were current and future ready in having a qualified leadership pipeline to address the organizational mandate.

And that is precisely the point. Most organizations realize the need for, and importance of, leadership development and invest considerable financial and time resources in it. And yet the unsolved puzzle in so many cases globally is that the programmes companies so enthusiastically embrace fail to get institutionalized in the DNA of the organization or to last over the long haul.

So why do leadership development programmes falter? Research highlights several reasons for this failure. These include the inability to create context (the “why” of the programme), the lack of a culture that fosters leadership development, not integrating learnings into real work, not having a proper implementation plan, and even not building the “emotional courage in participants” to accept their shortcomings and undertake personal development.

Having seen several organizations as both a practitioner and a consultant, I believe these are the main reasons such programmes fail.

- When leadership development is adopted only in letter, and not in spirit.

Organizations do not hesitate to invest significant resources on the infrastructure—the facilities and campus, programmes, even faculty. Yet if the DNA of the organization remains essentially that of command and control mode, with tight decision making at the top, the efforts are bound to yield disappointing results. In contrast, when an organization culture encourages a high degree of empowerment, and allows leaders to make and learn from mistakes, the spirit of leadership development thrives.

- In the “line of fire” of the dreaded budget cuts.

Take the case of a fast-growing company and the formal leadership journey that it embarked on. Two years on, with considerable time and energy invested by leaders, the programme had been designed and was all set to roll. The detailing had been done, including the required proficiency levels for various competencies, for different roles, and the most appropriate development initiatives for each identified leader. Even the financial approvals were in place. And then the next quarter tanked, and financial considerations immediately saw the programme being shelved. Yet, that was probably the time when the programme was needed most to help the leaders steer the organization through turbulent times. It would have been more appropriate to continue with the initiatives while re-engineering the programme’s financial design more innovatively for cost-effectiveness.

- A model championed by one CEO is quietly buried by another.

This burial is even easier if there is, simultaneously, a departure of key stakeholders like the leadership development head or the chief human resources officer. And then, if the board wakes up and pushes the company for a leadership talent strategy, hey presto, we have an all-new framework with organization-wide assessments starting all over again. During high growth phases, initiatives like leadership development are easily consigned to the back-burner. If such cycles become the norm, it would come as no surprise if this blow-hot, blow-cold attitude turned the organization and talent cynical about the programme. Investing a significant portion of time and resources up front on the basis of a needs analysis, and designing a system that can withstand the test of business cycles and CEO transitions, is advisable.

- Initiatives great, target audience—wrong.

When the programme agenda is driven by an entitlement culture epitomized by seniority or favouritism, instead of embracing employees with the highest leadership potential, the output is bound to be flawed. Employees who have hit the ceiling described by the Peter Principle need to be firmly passed over in favour of talent with head space for growth, perhaps lying two levels down the hierarchy. This calls for firmness and courage from those driving the initiative in asking such “seniors” to step aside because they are truly beyond the scope of the initiative.

- Focus on the wrong areas for leadership development.

An organization that recently ran into a profitability crisis found that while earlier leaders were doing the right things, the next generation of leaders were doing only a sub-set of those right things. The company thus ended up being successful in one dimension but weak in others. The company had not consciously identified the leadership behaviours for success around which the next generation of leaders should have been developed.

Further, different roles and times require different types of leadership—project leadership, strategic leadership, change leadership, operational leadership, entrepreneurial leadership, transition leadership, even adversity leadership.

As a corollary, leaders performing those roles need to be measured for and developed in those competencies. A one-size-fits-all programme can only end up being below par.

- Ineffective programme management can trip the initiative.

A strong programme manager will pull it all together. A great design is not enough in itself. Without the seemingly mundane jobs of follow-up, closure and knowledge management, the results will be suboptimal. This should also involve smoothening any tensions that could arise in the triad of coach, boss and employee.

- Articulation-demonstration-reinforcement.

While positioning is important, there can be a loss of credibility if, in the final analysis, those who embrace the initiatives are not seen as being rewarded; in contrast, those who shun or ridicule it are. Take a leap of faith and up the responsibility quotient for the newly minted leaders. Else, it could be a case of “all developed, and nowhere to go”. Reinforce the programme by making individual development plans a living document, reviewing and refining as required.

And finally, please ensure that the person anchoring the initiative is a thought leader. Don’t settle for just a programme coordinator. Because it is the anchor’s knowledge of the secret sauce that goes into making great leaders, and conceptual clarity in designing appropriate interventions, that will ultimately decide the credibility and longevity of the programme.

“Leaders are made, they are not born. They are made by hard effort, which is the price all of us must pay to achieve any goal that is worthwhile,” said Vince Lombardi, the American football player, coach, and executive. Do ensure that your organization’s efforts to make leaders do not get derailed.

Hema Ravichandar is a strategic human resources consultant. She serves as an independent director and an advisory board member for several organizations. She was formerly the global head of HR for Infosys Ltd.


TOPICS: 


Why Digital Overload Is Now Central to the Human Condition [1637]

by System Administrator - Tuesday, 19 January 2016, 23:14
 

Why Digital Overload Is Now Central to the Human Condition

 

BY ALISON E. BERMAN

A mom pushes a stroller down the sidewalk while Skyping. A family of four sits at the dinner table plugged into their cell phones with the TV blaring in the background. You get through two pages in a book before picking up your laptop and scrolling through a bottomless stream of new content.

Information technology has created a hyper-connected, over-stimulated, distracted and alienated world. We’ve been living long enough with internet-connected computers and other mobile devices to have begun to take them for granted.

But already the next wave is coming, and it promises to be even more immersive.

As our lives are increasingly augmented and infused with new digital technologies—intelligent search engines, AI assistants, smartwatches, in-home virtual reality, and the like—it only makes sense that the conversation about whether this is all okay will keep getting louder.

 

A new report from ZenithOptimedia estimates people spent eight hours a day consuming media in 2015, and a recent comScore study reported that the time individuals spent on smartphones consuming digital media increased by 90% from 2013 to 2015 in the US.

These numbers aren’t necessarily a threat to humanity though—we actually get pleasure from consuming new information. In fact, our brains are wired to prioritize it. Even receiving ordinary new pieces of information, like a text message or an email after refreshing your browser, triggers the brain to release dopamine.

Our fondness for new information alone isn’t anything novel. Only now, the volume of new information, and our access to it, has increased faster than our ability to process and balance it.

Countless op-ed pieces debate the implications of being plugged into technology around the clock—is it uselessly barraging humanity, or elevating it? Both sides make valid points.

Video producer Chris Milk believes that VR will be the great empathy-inducing machine of our time, allowing us to step into the shoes of others like never before.

But others argue immersive new technologies cause social isolation—an idea also explored in recent science fiction stories where people spend more time in virtual reality than reality.

Though to some the growing noise around this subject feels like we’re beating a dead horse, this dialogue and tension is itself deeply human. And it’s been an ongoing discussion since the dawn of the industrial age.

In the 21st century, the exchange of ideas and knowledge is increasingly a marker of individual value and capital. Some even claim knowledge exchange represents a new advanced form of capitalism, where the democratization of ideas and knowledge are driving a portion of economic growth in the information age.

 

Think of it like this—mass information access is one of the new technologies of the 21st century transforming our economy, similar to how the new machinery adopted by factories and farms during the shift from the agricultural age to the industrial age transformed the economies of the 18th and 19th centuries.

The power of knowledge in the 21st century makes its availability that much more tempting, but we’re still trying to understand how much of it we can handle, or how readily we want to be able to consume it.

It’s hoped new technologies, like artificial intelligence, will learn to find and even anticipate the information we most value. The problem may be less about the amount of information and more about how we sort through it to find the good stuff.

But make no mistake, new technologies will bring new worries. And the speed at which technology is moving is sure to increase anxiety and debate. 

 

This back and forth dialogue is basic to the human condition. It’s a critical part of how we adapt to technology and find harmony between the old and new.

Ultimately, we each have to choose how we interact with technology and what our own comfort levels with it are. The greater awareness and honesty we have about its strengths and weaknesses, the better we can make healthy decisions about how to use it in our own lives—individually and collectively.

As new technologies appear, this integration doesn’t happen overnight; it takes years of everyday use by everyday people. And yes, lots of op-eds hashing it out too.


What are your thoughts on digital overload? We'd love to hear them. You can comment here or tell us on Twitter @SingularityHub and @DigitAlison.

Image Source: Shutterstock

Alison E. Berman

  • Staff Writer at Singularity University
  • Alison tells the stories of purpose-driven leaders and is fascinated by various intersections of technology and society. When not keeping a finger on the pulse of all things Singularity University, you'll likely find Alison in the woods sipping coffee and reading philosophy (new book recommendations are welcome).

RELATED TOPICS: 

Link: http://singularityhub.com

 


Why thinking you're ugly is bad for you [914]

by System Administrator - Tuesday, 7 October 2014, 15:29
 

 

About 10,000 people a month Google the phrase, “Am I ugly?” Meaghan Ramsey of the Dove Self-Esteem Project has a feeling that many of them are young girls. In a deeply unsettling talk, she walks us through the surprising impacts of low body and image confidence—from lower grade point averages to greater risk-taking with drugs and alcohol. She then shares the key things all of us can do to disrupt this reality.

Video

Meaghan Ramsey | Self-esteem advocate

The Global Director of the Dove Self-Esteem Project, Meaghan Ramsey believes in business growth that stems from real social change. Full bio

0:12 - This is my niece, Stella. She's just turned one and started to walk. And she's walking in that really cool way that one-year-olds do, a kind of teetering, my-body's-moving-too-fast-for-my-legs kind of way. It is absolutely gorgeous. And one of her favorite things to do at the moment is to stare at herself in the mirror. She absolutely loves her reflection. She giggles and squeals, and gives herself these big, wet kisses. It is beautiful. Apparently, all of her friends do this and my mom tells me that I used to do this, and it got me thinking: When did I stop doing this? When is it suddenly not okay to love the way that we look? Because apparently we don't. 

Ten thousand people every month google, "Am I ugly?" This is Faye. Faye is 13 and she lives in Denver. And like any teenager, she just wants to be liked and to fit in. It's Sunday night. She's getting ready for the week ahead at school. And she's slightly dreading it, and she's a bit confused because despite her mom telling her all the time that she's beautiful, every day at school, someone tells her that she's ugly. Because of the difference between what her mom tells her and what her friends at school, or her peers at school are telling her, she doesn't know who to believe. So, she takes a video of herself. She posts it to YouTube and she asks people to please leave a comment: "Am I pretty or am I ugly?" Well, so far, Faye has received over 13,000 comments. Some of them are so nasty, they don't bear thinking about. 

This is an average, healthy-looking teenage girl receiving this feedback at one of the most emotionally vulnerable times in her life. Thousands of people are posting videos like this, mostly teenage girls, reaching out in this way. But what's leading them to do this? Well, today's teenagers are rarely alone. They're under pressure to be online and available at all times, talking, messaging, liking, commenting, sharing, posting — it never ends. Never before have we been so connected, so continuously, so instantaneously, so young. And as one mom told me, it's like there's a party in their bedroom every night. There's simply no privacy. And the social pressures that go along with that are relentless. This always-on environment is training our kids to value themselves based on the number of likes they get and the types of comments that they receive. There's no separation between online and offline life. What's real or what isn't is really hard to tell the difference between. And it's also really hard to tell the difference between what's authentic and what's digitally manipulated. What's a highlight in someone's life versus what's normal in the context of everyday. And where are they looking to for inspiration? Well, you can see the kinds of images that are covering the newsfeeds of girls today. Size zero models still dominate our catwalks. Airbrushing is now routine. And trends like #thinspiration, #thighgap, #bikinibridge and #proana. 

For those who don't know, #proana means pro-anorexia. These trends are teamed with the stereotyping and flagrant objectification of women in today's popular culture. It is not hard to see what girls are benchmarking themselves against. But boys are not immune to this either. Aspiring to the chiseled jaw lines and ripped six packs of superhero-like sports stars and playboy music artists. But, what's the problem with all of this? Well, surely we want our kids to grow up as healthy, well balanced individuals. But in an image-obsessed culture, we are training our kids to spend more time and mental effort on their appearance at the expense of all of the other aspects of their identities. 

So, things like their relationships, the development of their physical abilities, and their studies and so on begin to suffer. Six out of 10 girls are now choosing not to do something because they don't think they look good enough. These are not trivial activities. These are fundamental activities to their development as humans and as contributors to society and to the workforce. Thirty-one percent, nearly one in three teenagers, are withdrawing from classroom debate. They're failing to engage in classroom debate because they don't want to draw attention to the way that they look. One in five are not showing up to class at all on days when they don't feel good about it. And when it comes to exams, if you don't think you look good enough, specifically if you don't think you are thin enough, you will score a lower grade point average than your peers who are not concerned with this. And this is consistent across Finland, the U.S. and China, and is true regardless of how much you actually weigh. So to be super clear, we're talking about the way you think you look, not how you actually look. Low body confidence is undermining academic achievement. But it's also damaging health. Teenagers with low body confidence do less physical activity, eat less fruits and vegetables, partake in more unhealthy weight control practices that can lead to eating disorders. 

They have lower self-esteem. They're more easily influenced by people around them and they're at greater risk of depression. And we think it's for all of these reasons that they take more risks with things like alcohol and drug use; crash dieting; cosmetic surgery; unprotected, earlier sex; and self-harm. The pursuit of the perfect body is putting pressure on our healthcare systems and costing our governments billions of dollars every year. And we don't grow out of it. Women who think they're overweight — again, regardless of whether they are or are not — have higher rates of absenteeism. Seventeen percent of women would not show up to a job interview on a day when they weren't feeling confident about the way that they look. Have a think about what this is doing to our economy. Imagine what the opportunity would look like if we could overcome this. Unlocking this potential is in the interest of every single one of us. But how do we do that? Well, talking, on its own, only gets you so far. It's not enough by itself. If you actually want to make a difference, you have to do something. And we've learned there are three key ways: The first is we have to educate for body confidence. We have to help our teenagers develop strategies to overcome image-related pressures and build their self-esteem. Now, the good news is that there are many programs out there available to do this. The bad news is that most of them don't work. I was shocked to learn that many well-meaning programs are inadvertently actually making the situation worse. 

So we need to make damn sure that the programs that our kids are receiving are not only having a positive impact, but having a lasting impact as well. And the research shows that the best programs address six key areas: The first is the influence of family, friends and relationships. The second is media and celebrity culture, then how to handle teasing and bullying, the way we compete and compare with one another based on looks, talking about appearance — some people call this "body talk" or "fat talk" — and finally, the foundations of respecting and looking after yourself. These six things are crucial starting points for anyone serious about delivering body-confidence education that works. Education is critical, but tackling this problem is going to require each and every one of us to step up and be better role models for the women and girls in our own lives. Challenging the status quo of how women are seen and talked about in our own circles. 

It is not okay that we judge the contribution of our politicians by their haircuts or the size of their breasts, or to infer that the determination or the success of an Olympian is down to her not being a looker. We need to start judging people by what they do, not what they look like. We can all start by taking responsibility for the types of pictures and comments that we post on our own social networks. We can compliment people based on their effort and their actions and not on their appearance. And let me ask you, when was the last time that you kissed a mirror? Ultimately, we need to work together as communities, as governments and as businesses to really change this culture of ours so that our kids grow up valuing their whole selves, valuing individuality, diversity, inclusion. We need to put the people that are making a real difference on our pedestals, making a difference in the real world. Giving them the airtime, because only then will we create a different world. A world where our kids are free to become the best versions of themselves, where the way they think they look never holds them back from being who they are or achieving what they want in life. Think about what this might mean for someone in your life. 

Who have you got in mind? Is it your wife? Your sister? Your daughter? Your niece? Your friend? It could just be the woman a couple of seats away from you today. What would it mean for her if she were freed from that voice of her inner critic, nagging her to have longer legs, thinner thighs, smaller stomach, shorter feet? What could it mean for her if we overcame this and unlocked her potential in that way? Right now, our culture's obsession with image is holding us all back. But let's show our kids the truth. Let's show them that the way you look is just one part of your identity and that the truth is we love them for who they are and what they do and how they make us feel. Let's build self-esteem into our school curriculums. Let's each and every one of us change the way we talk and compare ourselves to other people. And let's work together as communities, from grassroots to governments, so that the happy little one-year-olds of today become the confident changemakers of tomorrow. Let's do this. 

(Applause)

Link: http://www.ted.com/talks/meaghan_ramsey_why_thinking_you_re_ugly_is_bad_for_you

 

 


Why We Should Teach Kids to Code Biology, Not Just Software [1690]

by System Administrator - Saturday, 9 April 2016, 16:29
 

Why We Should Teach Kids to Code Biology, Not Just Software

By Sveta McShane

Almost ten years ago, Freeman Dyson ventured a wild forecast: 

“I predict that the domestication of biotechnology will dominate our lives during the next fifty years at least as much as the domestication of computers has dominated our lives during the previous fifty years.” Just recently, MIT researchers created a programming language for living cells that can be used even by those with no previous genetic engineering knowledge. This is part of a growing body of evidence pointing to an undeniable trend—Dyson’s vision is starting to come true.

Over the next several decades we will develop tools that will make biotechnology affordable and accessible to anyone—not just in a university or even biohacking lab—but literally at home.

“Domesticating” Computers

To appreciate the power of Dyson’s forecast, let’s first go back in time. Not so long ago, the only computers around were massive things that took up entire rooms or even floors of a building. They were complicated to use and required multiple university degrees just to make them do simple tasks.


Over the last 50 years, humans have collectively engineered countless tools—from programming languages to hardware and software—that allow anyone to operate a computer with no prior knowledge. Everyone from the age of 3 to 95 can pick up an iPad and intuitively begin using it.

The personal computer brought an explosion of business, art, music, movies, writing, and connectivity between people the likes of which we had never seen before.

Given accessible and affordable tools, the average person found lots of uses for her personal computer—uses that several decades ago we couldn’t even have imagined.

Now, we’re seeing a similar “domestication” happening in biotechnology. And likewise, we have no idea what our children will create with the biotech equivalent of a personal computer.

“Domesticating” Biotechnology

“What we’re finding over time is that biology isn’t this kind of mysterious unpredictable substrate; it just felt that way because we didn’t really have the tools to see what was going on.” – Christopher Voigt, MIT

Since 2003, when the human genome was sequenced and the cost of sequencing began to plummet, scientists and a rising number of citizen scientists have been building upon this accomplishment to create new tools that read, write, and edit DNA.

A lot of these tools have been built with “serious” science in mind, but many are also built for the casual tinkerer and biotech novice.

Today, just about anyone (even high school students) can...

  • Have their DNA sequenced. You can learn about your ancestry composition and predisposition to certain inherited conditions like cystic fibrosis and sickle cell anemia at 23andMe.
  • Read BioBuilder. BioBuilder is a recent book designed to teach high school and college students the fundamentals of biodesign and DNA engineering, complete with instructions on how to make your own glowing bacteria and other experiments.
  • Learn to use CRISPR. Take a class on how to use CRISPR for your own experiments at Genspace, a citizen science lab in NYC (or a similar class in many community science labs across the world). No experience necessary.
  • Join iGEM. iGEM is a worldwide synthetic biology organization initially created for college students and now open to entrepreneurs, community labs, and high schools.
  • Get started with “drag-and-drop” genetic engineering for free. Download Genome Compiler software for free and experiment with “drag-and-drop” genetic engineering.
  • Order synthetic DNA online, built to your design or from the registry of standard biological parts.
  • Buy equipment for your home biotech lab, like OpenqPCR or Open Trons.

The Next Generation of Biohackers

“[In the future] designing genomes will be a personal thing, a new art form as creative as painting or sculpture.” – Freeman Dyson


For most people, the words genetic engineering and biotechnology do not bring to mind a vision of a new generation of artists designing a new variety of flower or a new breed of pet.

If this trend of biotechnology “domestication” continues, however, the next generation of engineers might be writing code not just for apps, but also new species of plants and animals.

And the potential here is much larger and more important than tinkering with the color of bacteria and flowers or designing new pets.

Last year, an iGEM team from Israel proposed a project to “develop cancer therapy that is both highly specific for cancer cells, efficient, and personalized for each tumor and patient genetics.” (You can read about their results here.) Another team proposed to upcycle methanol into a universal carbon source. And last year’s first prize winner at the high school level set out to prevent tissue damage from chronic inflammation in the human body.


To be clear, these are lofty goals—but the point is young people are already working towards them. And if they are working to solve huge challenges using synthetic biology today, imagine what they will be able to achieve as adults, given improved tools.

Not only are teenagers already rewriting the code of life, their interest in doing more and learning more is quickly growing. So far, 18,000 people have participated in iGEM. The competition has grown from 5 teams in 2004 to 245 teams in more than 32 countries in 2014.

What Could Go Wrong?

If Dyson’s prediction proves to be correct, we are already raising a generation of designers, engineers, and artists who will use amazing new toolsets to create on a new canvas—life itself.

So, what could possibly go wrong?

In his 2007 New York Review of Books essay “Our Biotech Future,” Dyson questions the ethics of domesticating biology. He asks: Can it, or should it, be stopped? If we’re not going to stop it, what limits should be imposed? And by whom, and how, should those limits be enforced?

The comparison to computers is useful to a point, but biology is obviously much more complicated and there were fewer ethical questions when we were building the first microchips. 

Domesticating biotechnology means bringing it to the masses, and that means we’d have even less control over it than when it was limited to university or government-funded labs.

The answer to Dyson’s first question seems clear: This trend is not going to stop. There’s too much momentum. We have learned too much about how to control our own biology to turn back.

And this is all the more reason to teach the next generation early on about the power and ethics of rewriting the code of life.

Sveta covers biotech, synthetic biology and other interesting topics for Singularity Hub. She enjoys long walks on the beach, being underwater and climbing very large rocks. You can follow her @svm118.


Image credit: iGEM Foundation and Justin Knight

Link: http://singularityhub.com


Why We Should Use Behavioral Economics to Design Technology That Doesn’t Kill Us [1640]

by System Administrator - Thursday, 28 January 2016, 20:20
 

Why We Should Use Behavioral Economics to Design Technology That Doesn’t Kill Us

BY ALISON E. BERMAN

One hundred years ago, bad decision making accounted for less than 10 percent of human deaths. Nowadays, it accounts for a little over 44 percent. What has changed?

Behavioral economist Dan Ariely offered one explanation during his talk at Singularity University this month: with all the new technology we’ve invented, we’ve also created many new ways to kill ourselves.

Think about texting while driving, an activity that makes drivers six times more likely to cause an accident than while driving drunk. Most of us know it’s dangerous, but when we get behind the wheel and receive a new notification, we can’t seem to stop ourselves.

Though classical economics assumes humans are rational maximizers able to make wise decisions for our long-term health and wellbeing, research shows we’re not.

As Ariely puts it, we’re predictably irrational and programmed to prioritize daily desires and temptations over what’s best in the long term.

“In the future, we are all wonderful people,” says Ariely.

The problem is, we live in the present where we are horrible decision makers, with poor self-control—we eat the donut, forget to take our medication, and say we’ll put money in our savings account tomorrow. But tomorrow never happens.

Not only are humans poor decision makers, but the convenience of technology can tempt us away from making positive choices. Automated payment apps and marketplaces like the iTunes store aren’t designed to help us save for retirement, but to decrease decision-making time and increase feasibility—we click “buy” before considering where else our money could go.

“We create technology that hacks our ability to make good decisions, and the consequence is, we kill ourselves,” says Ariely.

It’s a frightening thought, but it’s not as grim as it sounds.

In fact, we can design technology to take into account the limits of human decision making and help us take actions that will benefit our future selves.

Positive behavior change is something technology companies across the board are taking on—from financial planning apps focused on putting savings away for college funds, to apps that help individuals with chronic diseases adhere to medication and healthy lifestyle routines.

Though it’s in our nature to make irrational choices, technology does not have to exploit this—rather, it can mend it.

Here are the two classical behavioral economics principles Ariely presented that, applied to technology, can support better decision making for the long haul.

1) Reward Substitution

Principle: Because humans are not designed to care about what’s best for ourselves in the future, we can create other immediate rewards to keep us on track. The rewards do not have to be large, only enough to help us stick with a positive behavior daily.

Example: Ariely points to global warming as the perfect example of this challenge. It is one of the greatest threats to humanity, yet most of the world is apathetic towards it because the dangers are too distant. On an individual level, saving for retirement is a similar long-term challenge. Using reward substitution, however, we can set incremental goals that are tied to short-term rewards in order to focus on smaller chunks of the savings journey.

Watch Ariely explain this principle in detail below.

2) Ulysses Contracts (or Self-Control Contracts)

Principle: Lower the dangers of weak self-control by stopping temptations before they arise.

Example: This idea shows up in Homer’s Odyssey. Ulysses (Odysseus in Greek) orders his sailors to plug their ears and to tie him to the mast so he can hear the sirens, whose deadly song tempts ships to steer into the rocks, without wrecking his own ship. A modern-day example is putting your cell phone in the trunk while driving so you aren’t tempted to text behind the wheel.

Watch Ariely explain this principle in detail below.

Both principles show the power of creating and following through with rules, even in the face of our increasingly hyper-connected and impulse-driven lives.

As technology inserts itself into more areas of life, it is important to learn to identify which technologies have been designed with our best interests in mind, and which have been created to maximize our irrationality and fondness for making poor short-term choices.

The next time you look to download a new app or purchase the latest tech gadget, ask, “Has this been designed with my best interest in mind?” And if the answer is, “No,” consider putting your time and money elsewhere.

Image Credit: Shutterstock.com

Alison E. Berman

  • Staff Writer at Singularity University
  • Alison tells the stories of purpose-driven leaders and is fascinated by various intersections of technology and society. When not keeping a finger on the pulse of all things Singularity University, you'll likely find Alison in the woods sipping coffee and reading philosophy (new book recommendations are welcome).

Link: http://singularityhub.com



Why young people don’t want to run for office and how to change their minds [1301]

by System Administrator - Friday, July 10, 2015, 17:28
 

Just say run: How to overcome cynicism and inspire young people to run for office

Ask young people whether they would ever consider running for office and this is what you’ll hear:

  • Politicians are just liars.
  • Most politicians are hypocrites.
  • People in politics are two-faced.
  • It’s about lying, cheating, getting nothing done. That’s not how I want to spend my time.
  • I don’t even want to think about a career in politics.
  • I’d rather milk cows than run for office.

These are not hypothetical responses. They are the words of a handful of the more than 4,000 high school and college students we surveyed and interviewed for our new book, Running from Office: Why Young Americans Are Turned Off to Politics. Having come of age in a political context characterized by hyper-partisanship, gridlock, stalemate, and scandal, the overwhelming majority of 13- to 25-year-olds view the political system as ineffective, broken, and downright nasty. As a consequence, nine out of 10 will not even consider running for office. They’d rather do almost anything else with their lives.

This should sound alarm bells about the health of our democracy. The United States has more than half a million elective positions. And most people who become candidates don’t go through life never thinking about politics and then wake up one morning and decide to throw their hats into the ring. The idea has usually been percolating for a long time, often since adolescence.

In the final chapter of the book, we offer a series of recommendations that could stimulate political ambition and help chart a new course. Here, we summarize three. They are all major endeavors and will require substantial funding and deep commitment from government officials, entrepreneurs, educators, and activists. But each has the potential to change young people’s attitudes toward politics. At the very least, we hope to trigger a national conversation about how to show the next generation that politics is about more than men behaving badly in the nation’s capital.

1. Launch the YouLead Initiative: Since John F. Kennedy signed an executive order in 1961 establishing the Peace Corps, the organization has sent hundreds of thousands of Americans abroad “to tackle the most pressing needs of people around the world.” AmeriCorps has deployed 800,000 Americans to meet similar domestic needs. And Teach for America has recruited thousands of citizens to “build the movement to eliminate educational inequity.” Together, these programs send a strong signal that the government values, and American society depends on, public service. If we want to put the next generation on the path to politics, then what better way than by demonstrating that running for office is just as valuable, effective, and noble a form of public service?

Whether developed as a government program, non-profit endeavor, or corporate project, a two-pronged national campaign—we call it the YouLead Initiative—could send a strong signal to young people that running for office is a worthwhile way to serve the community, country, and world. The first piece would entail a technologically savvy media campaign that changes perceptions of politics. When young people think about government, they conjure up images of self-interested, egotistical conservatives fighting self-interested, egotistical liberals in a broken system to the point of paralysis. Placing the spotlight on local and state-level leaders—most of whom are not professional politicians—would convey that many elected officials care about their communities and are making positive change. In addition, a series of fun public service announcements in which parents, teachers, public figures, and celebrities encourage young people to think about a future in politics would reinforce the message. Second, regional and state coordinators for YouLead could identify high school and college students who have already exhibited leadership success—those in student government, captains of sports teams, members of debate and mock trial teams, those participating in drama and music clubs. At a regional conference, they would be encouraged to channel their leadership capabilities into electoral politics. The program could even capitalize on their competitive spirit by hosting a national conference to which regional participants could apply.

2. Make Political Aptitude Part of the College Admission Process: The primary educational goal of most 12-17 year olds is to attend college (85% of the high school students we surveyed planned to go). But the five key ingredients in a college application—high school grades, standardized test scores, extra-curricular activities, personal essays, and letters of recommendation—make it entirely possible for students to apply to, and be accepted at, even the most prestigious schools without any political interest or knowledge. You can’t find Iraq on a map? That’s okay. You don’t know the name of the vice president? No big deal. You’re unfamiliar with which political party controls Congress? Don’t worry about it. Why not link political aptitude to the college application process, either in the form of a new component to the SAT or ACT, an additional exam, or an essay about public affairs? The vehicle is almost incidental. What matters is that it would force young people to take news and political information seriously.

Linking college admission, even in some small way, to political awareness could pay off. We found that young people with more exposure to politics—at home, at school, with their friends, and through the media—are far more likely to be interested in running for office. Sure, they see the same negative aspects of contemporary politics as everyone else. But they also see some examples of politicians behaving well, elected officials solving problems, and earnest, well-meaning candidates aspiring to improve their communities. The habit of staying politically informed might fade once students submit their college applications, but it might not. And there is no downside for colleges and universities to take the position that to be successful citizens, students must be connected to the world around them. In fact, a similar approach has generated a sense of volunteerism among many high school students. Of the roughly 75% of high school seniors who do some sort of community service, many start these efforts with the hope of “impressing” college admission officers.

3. Develop the GoRun App: We live in the era of the app. You can upload photos, request an Uber, find cheap airline tickets, locate the closest Mexican restaurant, or listen to your favorite music with the simple touch of an app. And young people do. Eighty-one percent of people under the age of 25 sleep with their phone next to them on the bed; 74% reach for their smartphones as the first thing they do when they wake up; and 97% of teens regularly use smartphones in the bathroom to check messages. There’s no activity, time of day, or location that is out of bounds for young people’s smartphone and app use. So, let’s take advantage of the digital world young people inhabit by creating an app that helps them identify political offices and informs them about how to run for them.

Surprisingly, it is quite difficult to find out what elected positions exist in any given community, let alone determine the responsibilities associated with each or the nuts and bolts involved in running for them. No central database houses this information. The GoRun app would allow users to enter an address and receive a complete list of all the elected positions representing that residence—from the local school board all the way up to president of the United States. Clicking on each position would result in a description of the office, a set of core responsibilities, and information about the logistics and rules required to run. Figuring out how to become a candidate would literally be at our fingertips. Educators could easily incorporate the app into their curricula. And young people who are even the least bit curious about how to run for office would not have to engage in a fact-finding mission. This easy-to-access information would showcase the thousands of electoral opportunities that have nothing to do with dysfunction in Washington, DC.
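The core lookup the authors describe is simple to model even though no such central database exists yet. The jurisdiction data and function below are purely hypothetical illustrations, not part of any real GoRun app; a working version would need a maintained registry of offices and real address geocoding:

```python
# Hypothetical registry: jurisdiction -> elected offices (illustrative data only).
OFFICES_BY_JURISDICTION = {
    "Springfield, IL": [
        {"office": "School Board Member", "term_years": 4,
         "filing": "Petition with signatures"},
        {"office": "City Council Member", "term_years": 4,
         "filing": "Petition plus filing fee"},
    ],
}

def offices_for(address):
    """Return every elected position for an address's jurisdiction.

    A real app would geocode the address; this sketch just matches
    on a trailing city/state string."""
    return [office
            for city, offices in OFFICES_BY_JURISDICTION.items()
            if address.endswith(city)
            for office in offices]
```

An address inside a known jurisdiction returns its full list of offices; an unknown address returns an empty list, which the app could surface as "no data yet for your area."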

Our political system has done a number on young people. It has turned them off to the idea of running for office, discouraged them from aspiring to be elected leaders, and alienated them from even thinking about a career in politics. Steering a new course will be difficult, but being creative about how we do it is the only choice we have.

Link: http://www.brookings.edu


Wikipedia Deploys AI to Expand Its Ranks of Human Editors [1611]

by System Administrator - Sunday, December 6, 2015, 20:15
 

Wikipedia Deploys AI to Expand Its Ranks of Human Editors

by CADE METZ 

AARON HALFAKER JUST built an artificial intelligence engine designed to automatically analyze changes to Wikipedia.

Wikipedia is the online encyclopedia anyone can edit. In crowdsourcing the creation of an encyclopedia, the not-for-profit website forever changed the way we get information. It’s among the ten most-visited sites on the Internet, and it has swept tomes like World Book and Encyclopedia Britannica into the dustbin of history. But it’s not without flaws. If anyone can edit Wikipedia, anyone can mistakenly add bogus information. And anyone can vandalize the site, purposefully adding bogus information. Halfaker, a senior research scientist at the Wikimedia Foundation, the organization that oversees Wikipedia, built his AI engine as a way of identifying such vandalism.

"It turns out that the vast majority of vandalism is not very clever."

AARON HALFAKER, WIKIMEDIA

In one sense, this means less work for the volunteer editors who police Wikipedia’s articles. And it might seem like a step toward phasing these editors out, another example of AI replacing humans. But Halfaker’s project is actually an effort to increase human participation in Wikipedia. Although some predict that AI and robotics will replace as much as 47 percent of our jobs over the next 20 years, others believe that AI will also create a significant number of new jobs. This project is at least a small example of that dynamic at work.

“This project is one attempt to bring back the human element,” says Dario Taraborelli, Wikimedia’s head of research, “to allocate human attention where it’s most needed.”

Don’t Scare the Newbies

In the past, if you made a change to an important Wikipedia article, you often received an automated response saying you weren’t allowed to make the change. The system wouldn’t let you participate unless you followed a strict set of rules, and according to a study by Halfaker and various academics, this rigidity prevented many people from joining the ranks of regular Wikipedia editors. A 2009 study indicated that participation in the project had started to decline, just eight years after its founding.

“It’s because the newcomers don’t stick around,” Halfaker says. “Essentially, Wikipedians had traded efficiency of dealing with vandals and undesirable people coming into the wiki for actually offering a human experience to newcomers. The experience became this very robotic and negative experience.”

With his new AI project—dubbed the Objective Revision Evaluation Service, or ORES—Halfaker aims to boost participation by making Wikipedia more friendly to newbie editors. Using a set of open source machine learning algorithms known as scikit-learn—code freely available to the world at large—the service seeks to automatically identify blatant vandalism and separate it from well-intentioned changes. With a more nuanced view of new edits, the thinking goes, these algorithms can continue cracking down on vandals without chasing away legitimate participants. It’s not that Wikipedia needs to do away with automated tools to attract more human editors. It’s that Wikipedia needs better automated tools.

“We don’t have to flag good-faith edits the same way we flag bad-faith damaging edits,” says Halfaker, who used Wikipedia as the basis for his PhD work in the computer science department at the University of Minnesota.

In the grand scheme of things, the new AI algorithms are rather simple examples of machine learning. But they can be effective. They work by identifying certain words, variants of certain words, or particular keyboard patterns. For instance, they can spot unusually large blocks of characters. “Vandals tend to mash the keyboard and not put spaces in between their characters,” Halfaker says.

Halfaker acknowledges that the service isn’t going to catch every piece of vandalism, but he believes it can catch most. “We’re not going to catch a well-written hoax with these strategies,” he says. “But it turns out that the vast majority of vandalism is not very clever.”
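The cues described above — unusually large blocks of repeated characters and text with no spaces — are simple enough to sketch in plain Python. The function names and thresholds below are illustrative assumptions, not part of ORES itself:

```python
def longest_char_run(text):
    """Length of the longest run of a single repeated character."""
    if not text:
        return 0
    best = run = 1
    for prev, cur in zip(text, text[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def looks_like_mashing(text, run_threshold=8, min_space_ratio=0.05):
    """Heuristic vandalism cue: a long repeated-character run, or a
    long stretch of text with almost no spaces (keyboard mashing)."""
    if not text:
        return False
    space_ratio = text.count(" ") / len(text)
    return (longest_char_run(text) >= run_threshold
            or (len(text) > 40 and space_ratio < min_space_ratio))
```

A string like "aaaaaaaaaaaa" trips the repeated-run check, while an ordinary sentence passes; a production system like ORES would feed many such signals into a trained classifier rather than rely on any single rule.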

Wikipedia Articles That Write Themselves?

Elsewhere, the giants of the Internet—Google, Facebook, Microsoft, and others—are embracing a new breed of machine learning known as deep learning. Using neural networks—networks of machines that approximate the web of neurons in the human brain—deep learning algorithms have proven adept at identifying photos, recognizing spoken words, and translating from one language to another. By feeding photos of a dog into a neural net, for instance, you can teach it to identify a dog.

With these same algorithms, researchers are also beginning to build systems that understand natural language—the everyday way that humans speak and write. By feeding neural nets scads of human dialogue, you can teach machines to carry on a conversation. By feeding them myriad news stories, you can teach machines to write their own articles. In these cases, neural nets are a long way from real proficiency. But they point towards a world where, say, machines can edit Wikipedia.

"I'm not sure we'll ever get to the place where an algorithm will beat human judgment."

AARON HALFAKER, WIKIMEDIA

Link: http://www.wired.com


Will A.I. drive the human race off a cliff? [1304]

by System Administrator - Saturday, July 11, 2015, 23:36
 

Will A.I. drive the human race off a cliff?

By Sharon Gaudin

Artificial intelligence (A.I.) and machine learning have the potential to help people explore space, make our lives easier and cure deadly diseases.

But we need to be thinking about policies to prevent the technology from one day killing us all.

That's the general consensus from a panel discussion in Washington D.C. today sponsored by the Information Technology and Innovation Foundation.

"When will we reach general purpose intelligence?" said Stuart Russell, a professor of electrical engineering and computer sciences at U.C. Berkeley. "We're all working on pieces of it.... If we succeed, we'll drive the human race off the cliff, but we kind of hope we'll run out of gas before we get to the cliff. That doesn't seem like a very good plan.... Maybe we need to steer in a different direction."

Russell was one of the five speakers on the panel today that took on questions about A.I. and fears that the technology could one day become smarter than humans and run amok.

Just within the last year, high-tech entrepreneur Elon Musk and the world's most renowned physicist Stephen Hawking have both publicly warned about the rise of smart machines.

Hawking, who wrote A Brief History of Time, said in May that robots with artificial intelligence could outpace humans within the next 100 years. Late last year, he was even more blunt: "The development of full artificial intelligence could spell the end of the human race."

Musk, CEO of SpaceX as well as CEO of electric car maker Tesla Motors, also got a lot of attention last October when he said A.I. threatens humans. "With artificial intelligence, we are summoning the demon," Musk said during an MIT symposium at which he also called A.I. humanity's biggest existential threat. "In all those stories with the guy with the pentagram and the holy water, ...he's sure he can control the demon. It doesn't work out."

With movies like The Terminator and the TV series Battlestar Galactica, many people think of super intelligent, super powerful and human-hating robots when they think about A.I. Many researchers, though, point out that A.I. and machine learning are already used for Google Maps, Apple's Siri and Google's self-driving cars.

As for fully autonomous robots, that could be 50 years in the future -- and self-aware robots could be twice as far out, though it's impossible at this point to predict how technology will evolve.

"Our current A.I. systems are very limited in scope," said Manuela Veloso, a professor of computer science at Carnegie Mellon University, speaking on today's panel. "If we have robots that play soccer very well by 2050, they will only know how to play soccer. They won't know how to scramble eggs or speak languages or even walk down the corridor and turn left or right."

Robert D. Atkinson, president of the Information Technology and Innovation Foundation, said people are being "overly optimistic" about how soon scientists will build autonomous, self-aware systems. "I think we'll have incredibly intelligent machines but not intentionality," he said. "We won't have that for a very long, long time, so let's worry about it for a very long, long time."

Even so, Russell said scientists should be focused on, and talking about, what they are building for the future. "The arguments are fairly persuasive that there's a threat to building machines that are more capable than us," he added. "If it's a threat to the human race, it's because we make it that way. Right now, there isn't enough work to making sure it's not a threat to the human race."

There's an answer for this, according to Veloso.

"The solution is to have people become better people and use technology for good," she said. "Texting is dangerous. People text while driving, which leads to accidents, but no one says, 'Let's remove texting from cell phones.' We can weigh this danger and make policy about texting and driving to keep the benefit of the technology available to the rest of the world."

It's also important to remember the potential benefits of A.I., she added.

Veloso pointed to the CoBot robots working on campus at Carnegie Mellon. The autonomous robots move around on wheels, guide visitors to where they need to go and ferry documents or snacks to people working there.

"I don't know if Elon Musk or Stephen Hawking know about these things, but I know these are significant advances," she said. "We are reaching a point where they are going to become a benefit to people. We'll have machines that will help people in their daily lives.... We need research on safety and coexistence. Machines shouldn't be outside the scope of humankind, but inside the scope of humankind. We'll have humans, dogs, cats and robots."

Link: http://www.computerworld.com


Will Artificial Intelligence Transform How We Grow and Consume Food? [1497]

by System Administrator - Thursday, October 8, 2015, 23:53
 

Will Artificial Intelligence Transform How We Grow and Consume Food? [Video]

By David J. Hill

Today, agriculture is more efficient than ever, but it's also more dependent on environmental, technological, and social issues like never before. Climate change, drought and other disasters, shifting energy landscapes, population growth, urbanization, GMOs, changes in the workforce, automation — these are just a handful of the factors that affect the global access to food.

We'd all like to see a future in which people worldwide have a sufficient amount of safe and nutritious food to help them maintain healthy and active lives. Is this realistic in our lifetimes?

To borrow a phrase, artificial intelligence is eating the world, and it can help move us closer to this future of abundant food. Neil Jacobstein, Artificial Intelligence & Robotics Co-Chair at Singularity University, explains what AI can do for the future of food, as part of a video series on food as a Global Grand Challenge.

[Video: Neil Jacobstein on what AI can do for the future of food]

[image courtesy of Shutterstock]

 

 

