Neurociencia | Neuroscience





3 Ways Exponential Technologies Are Impacting the Future of Learning [1576]

by System Administrator - Thursday, 3 December 2015, 17:15



“Simply put, we can’t keep preparing children for a world that doesn’t exist.”
-Cathy N. Davidson

Exponential technologies tend to move from a deceptively slow pace of development to a disruptively fast one. We often disregard or fail to notice technologies in the deceptive growth phase, until they begin changing the way we live and do business. Driven by information technologies, products and services become digitized, dematerialized, demonetized and/or democratized and enter a phase of exponential growth.

Nicole Wilson, Singularity University's vice president of faculty and curriculum, believes education technology is currently in a phase of deceptive growth, and we are seeing the beginning of how exponential technologies are impacting 1) what we need to learn, 2) how we view schooling and society and 3) how we will teach and learn in the future.


Watch Nicole Wilson, VP of Faculty and Curriculum at Singularity University, discuss the three ways in which exponential technologies are impacting how we teach and learn.

Exponential Technologies Impact What Needs to be Learned

In a 2013 white paper titled Dancing with Robots: Human Skills for Computerized Work, Richard Murnane and Frank Levy argue that in the computer age, the skills that are valuable in the new labor market are significantly different from what they were several decades ago.

Computers are much better than humans at tasks that can be organized into a set of rules-based routines. If a task can be reduced to a series of “if-then-do” statements, then computers or robots are the right ones for the job. However, there are many things that computers are not very good at and should be left to humans (at least for now).  Levy and Murnane put these into three main categories:

Solving unstructured problems. 
Humans are significantly more effective when the desired outcomes or set of information needed to solve the problem are unknowable in advance. These are problems that require creativity.  

Working with new information. 
This includes instances where communication and social interaction are needed to define the problem and gather necessary information from other people. 

Carrying out non-routine manual tasks.
While robots will continue to improve dramatically, they are currently not nearly as capable as humans in conducting non-routine manual tasks. 
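The "if-then-do" routines that Levy and Murnane say computers excel at can be made concrete with a small sketch. This is illustrative only; the task, thresholds, and categories are invented for the example, not taken from their paper.

```python
# A task reducible to fixed "if-then-do" rules -- the kind of routine
# work computers handle well, in contrast to the three human-skill
# categories above. All rule values here are hypothetical.

def route_invoice(amount: float, has_po_number: bool) -> str:
    """Route an invoice using fixed rules; no judgment required."""
    if amount < 1000 and has_po_number:
        return "auto-approve"
    elif amount < 1000:
        return "request-po"
    else:
        return "manager-review"

print(route_invoice(500, True))    # a small invoice with a PO is auto-approved
print(route_invoice(5000, True))   # a large invoice always goes to a manager
```

Unstructured problems resist exactly this treatment: there is no finite rule table that covers every case in advance.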

In the past three decades, jobs requiring routine manual or routine cognitive skills have declined as a percent of the labor market. On the other hand, jobs requiring solving unstructured problems, communication, and non-routine manual work have grown.

The best chance of preparing young people for decent paying jobs in the decades ahead is helping them develop the skills to solve these kinds of complex tasks.

What are these skills exactly?

In March, the World Economic Forum released their New Vision for Education Report, which identified a set of “21st century skills.” The report broke these into three categories: ‘Foundational Literacies’, ‘Competencies’ and ‘Character Qualities’.


The foundational literacies are the "basics": reading, writing, and science, along with more practical skills like financial literacy. Even in a world of rapid change, we still need to learn how to read, write, do basic math, and understand how our society works.

The competencies are often referred to as the 4Cs — critical thinking, creativity, communication and collaboration — the very things computers currently aren’t good at. Developing character qualities such as curiosity, persistence, adaptability and leadership help students become active creators of their own lives, finding and pursuing what is personally meaningful to them.

Exponential Technologies Impact How We View Schooling and Society 

In her book Now You See It, Cathy N. Davidson, co-director of the annual MacArthur Foundation Digital Media and Learning Competitions, says 65 percent of today's grade school kids will end up doing work that has yet to be invented.

Davidson, along with many other scholars, argues that the contemporary American classroom is still functioning much like the classroom of the industrial era — a system created as a training ground for future factory workers to teach tasks, obedience, hierarchy and schedules.

For example, teachers and professors often ask students to write term papers.  Davidson herself was disappointed when her students at Duke University turned in unpublishable papers, when she knew that the same students wrote excellent blogs online.

Instead of questioning her students, Davidson questioned the necessity of the term paper. “What if bad writing is a product of the form of writing required in school — the term paper — and not necessarily intrinsic to a student’s natural writing style or thought process? What if ‘research paper’ is a category that invites, even requires, linguistic and syntactic gobbledygook?”

And if term papers are starting to seem archaic, formal degrees might be the next to go.

Getting a four-year degree in any technology field makes little sense when the field will likely be radically different by the time the student graduates. Today, we’re seeing the rise of concepts like Mozilla's “open badges” and Udacity’s “nanodegrees.” Udacity recently reached a billion-dollar valuation, partially based on the promise of their new nanodegree program.

Exponential Technologies Impact How We Teach and Learn

Technologies like artificial intelligence, big data and virtual and augmented reality are all poised to change the way we teach and learn both in the classroom and outside of it.

The ed-tech company Knewton focuses on creating personalized learning paths for students by gathering data to determine what each student knows and doesn't know and how the student learns best. Knewton takes free, open content and uses an algorithm to bundle it into a uniquely personalized lesson for each student at any moment.
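Knewton's actual algorithm is proprietary; the toy sketch below only illustrates the general idea described above: select open content items targeting skills the student has not yet mastered, ordered by estimated difficulty. The content items, skill names, and difficulty scores are all invented.

```python
# Hypothetical open-content catalog; fields are assumptions for illustration.
content = [
    {"title": "Fractions intro", "skill": "fractions", "difficulty": 1},
    {"title": "Adding fractions", "skill": "fractions", "difficulty": 2},
    {"title": "Decimals intro", "skill": "decimals", "difficulty": 1},
]

def personalized_lesson(mastered_skills: set) -> list:
    """Bundle content the student still needs, easiest first."""
    todo = [c for c in content if c["skill"] not in mastered_skills]
    return sorted(todo, key=lambda c: c["difficulty"])

# A student who has mastered decimals gets only the fractions items.
lesson = personalized_lesson({"decimals"})
print([c["title"] for c in lesson])
```

A real adaptive system would also update its estimate of the student's knowledge after each interaction; this sketch omits that feedback loop.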


And while there's no lack of enthusiasm around the potential of using virtual reality to change education, we are just seeing the first baby steps toward what will eventually be the full use of the technology. Google Expeditions, which aims to take students on virtual field trips, and World of Comenius, a project that brought Oculus Rift headsets to a grammar school in the Czech Republic to virtually teach biology and anatomy, are just two examples of the many teams going through the process of trial and error to define what works for education in VR and what doesn't.

It’s clear that technologies undergoing exponential growth are shaping the skills we need to be successful, how we approach education in the classroom, and what tools we will use in the future to teach and learn. The bigger question is: How can we guide these technologies in a way that produces the kind of educated public we wish to have in the coming years?

To get updates on Future of Learning posts, sign up here.




3D Computer Interfaces Will Amaze—Like Going From DOS to Windows [1347]

by System Administrator - Tuesday, 15 September 2015, 20:55


By Jody Medich

Today, we spend as much time immersed in the digital world as we do the real world. But our computer interfaces are flawed. They force us to work in 2D rectangles instead of our native 3D spaces. Our brains have evolved powerful ways of processing information that depend on dimensionality — and flat screens and operating systems short circuit those tricks.

The result is a bit like reading War and Peace one letter at a time, or the classic example of a group of blind folks trying to define an elephant from the one piece they are touching. We're missing the big picture, missing all the spatial systems that help us understand — the space is just far too limited.

All this, however, is about to change in a big way.

With augmented and virtual reality, we’ll be able to design user interfaces as revolutionary as those first developed for the Xerox Alto, Apple Macintosh, and Microsoft Windows. These new 3D computer interfaces and experiences will help us leverage dimensionality, or space, just as we do in the real world — and they’ll do it with the dynamic power of digital technology and design.

How We Naturally Use Space to Think

Ultimately, humans are visual creatures.

We gather a large amount of information about our world by sight. We can tell which way the wind is blowing, for example, just by looking at a tree. A very large portion of our brains, therefore, is dedicated to processing visual information compared to linguistic information.

If you take your hand and put it to the back of your head, that’s about the size of your visual cortex. Driven by this large part of our brain, we learned to communicate via visual methods much earlier than language. And even after we formulated words, our language remained very visual. In the representation of letters on a spatial grid to create meaning, for example, it’s the underlying spatial structure that ultimately allows us to read.

Space is therefore very important to humans. Our brains evolved to be visual in 3D spaces. We are constantly creating a spatial map of our surroundings. This innate process is called spatial cognition; the acquisition of which helps us to recall memories, reveal relationships, and to think. It is key to sensemaking.

Spatial memory is, in effect, “free.” It allows us to offload a number of cognitively heavy tasks from our working memory in the following ways:

Spatial Semantics: We spatially arrange objects to make sense of information and its meaning. The spatial arrangement reveals relationships and connections between ideas.

External Memory: The note on the refrigerator, the photo of a loved one, or the place you always put your keys are all examples of how space compensates for our limited working memory.

Dimension: Dimension helps us to immediately understand information about an object. For example, you can easily tell the difference between War and Peace and a blank piece of paper just by looking at the two objects.

Embodied Cognition: Physically interacting with space through proprioception — this is the human sense that locates our body in space — is essential to understanding its volume, but it also helps us process thought.

How We Interact With Computers Today: The Graphical User Interface (GUI)

In 1973, Xerox PARC was a hotbed of technological innovation, and Xerox researchers were determined to figure out how to make interacting with computers more intuitive. Of course, they were fully aware of the way humans use visual and spatial tools.

People had a hard time remembering all the specialized linguistic commands necessary to operate a computer in command-line interface (think MS-DOS). So, researchers developed an early graphical user interface (GUI) on the Xerox Alto as a way to reduce the cognitive load by providing visual/spatial metaphors to accomplish basic tasks.

The Alto used a mouse and cursor, a bitmapped display, and windows for multitasking. No one thought the job was complete, but it was a great first step on the road to a simplified user interface. And most operating systems today testify to the power of GUI.

The Problem With Modern Operating Systems

The problem is 2D computing is flat. GUI was invented at a time when most data was linguistic. The majority of information was text, not photos or videos — especially not virtual worlds. So, GUI organization is based on filenames, not spatial semantics.

The resulting “magic piece of paper” metaphor creates a very limited sense of space and prevents development of spatial cognition. The existing metaphor:

  • Limits our ability to visually sort, remember and access
  • Provides a very narrow field of view on content and data relationships
  • Does not allow for data dimensionality

This means the user has to carry a lot of information in her working memory. We can see this clearly in the example of the piece of paper and War and Peace.

In modern operating systems, these objects look dimensionally to be exactly the same because they are represented by uniformly similar icons. The user has to know what makes each object different — even has to remember their names and where they are stored. Because of this, modern operating systems interrupt flow.

Multiple studies have focused on the cost of interruptions for software engineers. It turns out that any interruption can cause significant distraction, and once distracted, it takes nearly half an hour to resume the original task. In our operating systems, every task switch interrupts flow. The average user switches tasks three times a minute, and the more cognitively heavy the task switch, the more potent the interruption.

The Solution? The Infinite Space of Augmented and Virtual Reality

But now augmented and virtual reality are emerging. And with them, infinite space.

The good news is that spatial memory is free, even in virtual spaces. And allowing users to create spatial systems, such as setting up tasks across multiple monitors, has been shown to improve productivity by up to 40%.

So, how do we create a system that capitalizes on the opportunity?

The most obvious place to start is to enable development of virtual spatial semantics. We should build 3D operating systems that allow users to create spatial buckets to organize their digital belongings — or even better, allow users to overlay the virtual on their pre-existing real spatial semantics. People already have well established real world spatial memory, so combining the two will lead to even better multi-tasking.

For example, I have a place where I always put my keys (external memory). If I need to remember something the next time I leave the house, I leave it in that spot next to my keys. If I could also put digital objects there, I could become immensely more productive.

Further, by adding digital smarts to spatial semantics, users can change the structure dynamically. I can arrange objects in a certain way to find specific meaning, and with the touch of a button, instantly rearrange the objects into a timeline, an alphabetical list, or any other spatial structure that would help me derive meaning — without losing my original, valuable arrangement. SandDance at Microsoft Research (Steven Drucker, et al.) and Pivot by Live Labs are excellent examples of this type of solution.
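The key design point in the paragraph above is that derived views (timeline, alphabetical) never destroy the user's manual arrangement. A minimal sketch of that idea, with invented object fields, might look like this:

```python
# Sketch: keep a user's manual spatial arrangement intact while
# deriving alternative orderings on demand. The DigitalObject fields
# (name, created, position) are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class DigitalObject:
    name: str
    created: int        # e.g., a Unix timestamp
    position: tuple     # the user's manual (x, y) placement

objects = [
    DigitalObject("tax-return.pdf", 1668000000, (10, 40)),
    DigitalObject("beach-photo.jpg", 1500000000, (85, 12)),
    DigitalObject("draft.txt", 1700000000, (42, 77)),
]

# sorted() returns new lists, so the original arrangement survives.
timeline = sorted(objects, key=lambda o: o.created)
alphabetical = sorted(objects, key=lambda o: o.name)

print([o.name for o in alphabetical])
```

Because each view is a non-destructive projection of the same underlying objects, the user can flip between structures with a single action and always return to the original layout.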

And finally, the introduction of the z-plane enables digital object dimensionality.


Flat Icons


By applying sort parameters to the x- and y-axis, virtual objects take on meaningful dimension. But unlike the real world, where objects follow the rules of Euclidean geometry, in the virtual world dimension can be dynamic. The sort methods applied can be easily swapped out, depending on the user's need, to quickly and effectively change the dimension of the objects — allowing users to tell at a glance what is more or less pertinent to a query.


History x File Size | Favorites x Time Spent


Dimension also creates opportunities for virtual memory palaces.

Memory palaces are a mnemonic device from ancient times that enhance memory by using spatial visualization to organize and recall information. The subject starts with a memorized spatial location, then “walks” through the space in their mind, assigning a different item to specific features of the space. Later, to remember the items, the subject again “walks” through the space and is reminded of the items along the way.

With the advent of virtual 3D spaces, the same type of memory device can be created to allow users to organize, remember, and delineate large amounts of information with the added benefit of a digital “map” of the space; a map that can be dynamically rearranged and searched depending on the user needs in any given moment.

We Are On the Cusp of Another Technological Revolution

Humans are indeed visual creatures, and augmented and virtual reality is geared to help us use those abilities in the digital world. Through these technologies we can alleviate the heavy reliance on working memory needed to operate our tools, and enable instead our natural spatial cognitive abilities. We are on the cusp of another technological revolution — one in which we create superhumans, not supercomputers.

In her 20-year design career, Jody has created just about everything from holograms to physical products and R&D for over 300 companies. She's spent the last three years working on AR/VR, most notably as a principal experience designer on the HoloLens project at Microsoft and principal UX at Leap Motion. Previously, she co-founded and directed Kicker Studio, a design consultancy specializing in Natural User Interface and R&D for companies including Intel, Samsung, Microsoft, and DARPA. You can learn more about Jody's work here and follow her @nothelga.

To get updates on Future of Virtual Reality posts, sign up here.



by System Administrator - Monday, 1 September 2014, 20:10



Written By: Sveta McShane

Everyone has knick-knacks of sentimental value around their home, but what if your emotions could actually be shaped into household things?

A project recently unveiled at the Sao Paulo Design Weekend turns feelings of love into physical objects using 3D printing and biometric sensors. “Each product is unique and contains the most intimate emotions of the participants’ love stories,” explains designer Guto Requena.

"The LOVE PROJECT is a study in design, science and technology that captures the emotions people feel in relating personal love stories and transforms them into everyday objects. The project suggests a future in which unique products will bear personal histories in ways that encourage long life cycles, thus inherently combining deeply meaningful works with sustainable design."

As users recount the greatest love stories of their lives, sensors track heart rate, voice inflection, and brain activity. Data from their physical and emotional responses are collected and interpreted in an interface, transforming the various inputs into a single output.

The real time visualization of the data is modeled using a system of particles, where voice data determines particle velocity, heart rate controls the thickness of the particles, and the data from brain waves causes the particles to repel or attract each other. To shape these particles into the form of an everyday object such as a lamp, fruit bowl or vase, a grid of forces guides the particles as they flow along their course.
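The actual LOVE PROJECT pipeline is not public, but the mapping the article describes (voice data to particle velocity, heart rate to particle thickness, brain-wave data to attraction or repulsion) can be sketched. The function name, input fields, and scaling constants below are all invented for illustration.

```python
# Hedged sketch of biometric-to-particle-parameter mapping, per the
# article's description. Every constant here is a made-up placeholder.

def particle_params(voice_amplitude: float,
                    heart_rate_bpm: float,
                    brain_wave_coherence: float) -> dict:
    """Map three biometric signals to particle-system parameters."""
    return {
        # Louder voice -> faster particles.
        "velocity": voice_amplitude * 2.0,
        # Faster pulse -> thicker particles (normalized to resting ~60 bpm).
        "thickness": heart_rate_bpm / 60.0,
        # Coherence above 0.5 attracts particles; below 0.5 repels them.
        "attraction": (brain_wave_coherence - 0.5) * 2.0,
    }

print(particle_params(voice_amplitude=0.8,
                      heart_rate_bpm=90,
                      brain_wave_coherence=0.7))
```

In the real project, a grid of forces would then guide these particles into the silhouette of a lamp, bowl, or vase before the result is sent to the 3D printer.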

The final designs are then sent to a 3D printer, which can print in a variety of materials including thermoplastics, glass, ceramic or metal.


While this method is both creative and intriguing, are any of the produced objects meaningful if the viewer is unable to interpret the emotion from which the object was created?

Looking at the objects produced, it’s difficult to imagine them eliciting similar feelings in viewers as were felt by those recounting their tales of love. Furthermore, the data that produced the objects cannot be extracted, so that information is lost.


Other groups have begun to experiment with infusing printed objects with interactivity, where the object provides the user with information. Techniques to convert digital audio files into 3D-printable audio records have been developed, as well as a way to play a 'sound bite' on a 3D printed object by running a fingernail or credit card along printed ridges that produce a sound.

The “Love Project” is an interesting experiment that successfully includes the end user in the process of creating objects of meaning, while also democratizing and demystifying the use of interactive digital technologies. Yet it's a stretch to think that the aesthetics of the objects themselves can help us understand the mysterious human emotion of love.

What would be truly exciting is if we were able to transform intangible emotions into data, turn that data into a physical object, and interact with the object in a way that brings insight, meaning, and a new way of understanding and visualizing our emotional states. With designers like Requena and Neri Oxman finding new ways to integrate art and 3D printing, we're likely to see even more exciting projects at the interface of technology and expression on the horizon.

[Photo credits: Guto Requena]




3D reconstruction of neuronal networks provides unprecedented insight into organizational principles of sensory cortex [1223]

by System Administrator - Thursday, 7 May 2015, 21:14


Detail of 3D reconstruction of individual cortical neurons in rats. Credit: Max Planck Florida Institute for Neuroscience 

Researchers at the Max Planck Institute for Biological Cybernetics (Germany), VU University Amsterdam (Netherlands) and the Max Planck Florida Institute for Neuroscience (USA) have succeeded in reconstructing the neuronal networks that interconnect the elementary units of sensory cortex – cortical columns.

A key challenge in neuroscience research is identifying organizational principles of how the brain integrates sensory information from its environment to generate behavior. One of the major determinants of these principles is the structural organization of the highly complex, interconnected networks of neurons in the brain. Dr. Oberlaender and his collaborators have developed novel techniques to reconstruct anatomically realistic 3D models of such neuronal networks in the rodent brain. The resultant model has now provided unprecedented insight into how neurons within and across the elementary functional units of the sensory cortex – cortical columns – are interconnected. The researchers found that, in contrast to the decades-long focus on describing neuronal pathways within a cortical column, the majority of the cortical circuitry interconnects neurons across cortical columns. Moreover, these ‘trans-columnar’ networks are not uniformly structured. Instead, ‘trans-columnar’ pathways follow multiple highly specialized principles, which for example mirror the layout of the sensory receptors at the periphery. Consequently, the concept of cortical columns, as the primary entity of cortical processing, can now be extended to the next level of organization, where groups of multiple, specifically interconnected cortical columns form ‘intracortical units’. The researchers suggest that these higher-order units are the primary cortical entity for integrating signals from multiple sensory receptors, for example to provide anticipatory information about future stimuli.

3D model for studying cortex organization

Rodents are nocturnal animals that use facial whiskers as their primary sensory receptors to orient themselves in their environment. For example, to determine the position, size and texture of objects, they rhythmically move the whiskers back and forth, thereby exploring and touching objects within their immediate surroundings. Such tactile sensory information is then relayed from the periphery to the sensory cortex via whisker-specific neuronal pathways, where each individual whisker activates neurons located within a dedicated cortical column. The one-to-one correspondence between a facial whisker and a cortical column renders the rodent vibrissal system as an ideal model to investigate the structural and functional organization of cortical columns. In their April publication in Cerebral Cortex, Dr. Marcel Oberlaender, Dr. Bert Sakmann and collaborators describe how their research sheds light on the organization of cortical columns in the rodent brain through the systematic reconstruction of more than 150 individual neurons from all cell types (image [top] shows examples for each of the 10 cell types in cortex) of the somatosensory cortex’s vibrissal domain (the area of the cortex involved in interpreting sensory information from the rodent’s whiskers).


3D reconstruction of individual cortical neurons in rats – Top: Exemplary neuron reconstructions for each of the 10 major cell types of the vibrissal part of rat sensory cortex (dendrites, the part of a neuron that receives information from other neurons, are shown in red; axons are colored according to the respective cell type). Bottom: Superposition of all reconstructed axons (colored according to the respective cell type) located within a single cortical column (horizontal white lines in the center represent the edges of this column). The axons from all cell type project beyond the dimensions of the column, interconnecting multiple columns (white open circles) via highly specialized horizontal pathways. Credit: Max Planck Florida Institute for Neuroscience  

In particular, the researchers combined neuronal labeling in the living animal, with custom-designed high-resolution 3D reconstruction technologies and integration of morphologies into an accurate model of the cortical circuitry. The resultant dataset can be regarded as the most comprehensive investigation of the cortical circuitry to date, and revealed surprising principles of cortex organization. First, neurons of all cell types projected the majority of their axon – the part of the neuron that transmits information to other neurons – far beyond the borders of the cortical column they were located in. Thus, information from a single whisker will spread into multiple cortical columns (image [bottom] shows how axons of neurons located in one cortical column project to all surrounding columns [white circles]). Second, these trans-columnar pathways were not uniformly structured. Instead, each cell type showed specific and asymmetric axon projection patterns, for example interconnecting columns that represent whiskers with similar distance to the bottom of the snout. Finally, the researchers showed that the observed principles of trans-columnar pathways could be advantageous, compared to any previously postulated cortex model, for encoding complex sensory information.

According to Dr. Oberlaender, neuroscientist at the Max Planck Institute for Biological Cybernetics and guest scientist at the Max Planck Florida Institute for Neuroscience, “There has been evidence for decades that cortical columns are connected horizontally to neighboring columns. However, because of the dominance of the columnar concept as the elementary functional unit of the cortex, and methodological limitations that prevented researchers from reconstructing complete 3D neuron morphologies, previous descriptions of the cortical circuitry have largely focused on vertical pathways within an individual cortical column.”

The present study thus marks a major step forward to advance the understanding of the organizational principles of the neocortex and sets the stage for future studies that will provide extraordinary insight into how sensory information is represented, processed and encoded within the cortical circuitry. “Our novel approach of studying cortex organization can serve as a roadmap to reconstructing complete 3D circuit diagrams for other sensory systems and species, which will help to uncover generalizable, and thus fundamental aspects of brain circuitry and organization,” explained Dr. Oberlaender.

Note: Material may have been edited for length and content. For further information, please contact the cited source:







8 steps successful security leaders follow to drive improvement [1164]

by System Administrator - Tuesday, 17 March 2015, 13:48

8 steps successful security leaders follow to drive improvement



A Big Year for Biotech: Bugs as Drugs, Precision Gene Editing, and Fake Food [1626]

by System Administrator - Sunday, 3 January 2016, 21:31




Speculations around whether biotech stocks are in a bubble remain undecided for the second year in a row. But one thing stands as indisputable—the field made massive progress during 2015, and faster than anticipated.

For those following the industry in recent years, this shouldn’t come as a surprise.

In fact, according to Adam Feuerstein at The Street, some twenty-eight biotech and drug stocks grew their market caps to $1 billion or more in 2014, and major headlines like “Human Genome Sequencing Now Under $1,000 Per Person” were strewn across the web last year.

But 2015 was a big year in biotech too.

Cheeky creations like BGI’s micropig made popular headlines, while CRISPR/Cas9 broke through into the mainstream and will forever mark 2015 as a year that human genetic engineering became an everyday kind of conversation (and debate).

With the great leaps in biotech this year, we met with Singularity University’s biotech track chair, Raymond McCauley, to create a collaborative list on four categories where we saw progress within biotech.

While this list is not comprehensive (nor is meant to be), these are a few tangible milestones in biotech’s greatest hits of 2015.

Drag-and-drop genetic engineering is near

2015 will go down in the books as a historic year for genetic engineering. It seemed everyone was talking about, experimenting with, or improving the gene editing technology CRISPR-Cas9. CRISPR is a cheap, fast, and relatively simple way to edit DNA with precision. In contrast to prior, more intensive methods, CRISPR-Cas9 gives scientists a new level of control to manipulate DNA.

CRISPR appears to be broadly useful in plants, animals, and even humans. And although the focus of this technology is on virtuous pursuits, such as eliminating genetic diseases like Huntington’s disease, there has been widespread concern about additional applications—engineering superhuman babies, to name one.

In early December, hundreds of scientists descended on Washington DC to, in part, debate the ethics of genetic engineering with CRISPR. But the debate isn’t over by any means. The greater ethical implications of altering DNA are complex, and as Wired writer Amy Maxmen puts it, CRISPR-Cas9 gives us “direct access to the source code of life,” and thus, the ability to re-write what it means to be human.

Surrounding debates on CRISPR’s use has also been a patent war for the technology.

In April 2014, molecular biologist at the Broad Institute and MIT, Feng Zhang, earned the first patent for CRISPR-Cas9, but since then, one of the original creators of CRISPR at UC Berkeley, molecular biologist Jennifer Doudna, has been fighting back. Meanwhile, new CRISPR enzymes (beyond Cas9) were announced this fall.

All this took place as examples of the power of gene editing showed up in the headlines. In April, Chinese researcher Junjiu Huang and team used CRISPR to engineer human embryos. In November, the first-ever use of gene editing on a human patient cured a one-year-old girl of leukemia (using a different DNA-cutting enzyme called TALEN). And then there were those Chinese superdogs.

At large, we are only at the very beginning of what this technology will bring to medicine—and the debate on how we should best use our newfound power.

The FDA okays personal DNA testing  

Two years ago, 23andMe CEO Anne Wojcicki received a warning letter from the FDA, stating that selling the company's genetic testing service was violating federal law. The FDA further warned against giving consumers so much information on their personal health without a physician consultation.

Two years later, 23andMe broke through the FDA deadlock and announced their somewhat scaled back new product—Carrier Status Reports (priced at $199)—marking the first FDA-approved direct-to-consumer genetic test.

Whereas their original product tested for 254 diseases, the new version examines 36 autosomal recessive conditions and the “carrier status” of the individual being tested. The company now has a billion-dollar valuation and over a million clients served as of this year. The implications of affordable and FDA-approved consumer genetic testing are large, both for individuals and physicians.

“Fake food” became a real thing

Also known as food 2.0, a handful of companies using biotech methods to engineer new foods made headlines or hit the consumer market this year.

Most of us heard about that lab-grown hamburger that was priced in the six-figure range a few years back. Now, in vitro meat (animal meat grown in a lab using muscle tissue) is coming in at less than $12 a burger. Though the burger’s creator, Mark Post, says a consumer version is still a ways off, the effect could be significant. One of the biggest benefits? Less environmental impact from meat products—which require 2,400 gallons of water to produce just one pound of meat.
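For a sense of scale, the water figure above translates directly into per-burger terms. A back-of-the-envelope calculation, assuming a typical quarter-pound patty:

```python
# Back-of-the-envelope water footprint of a conventional hamburger,
# using the article's figure of ~2,400 gallons of water per pound of meat.
GALLONS_PER_POUND = 2400          # figure cited in the article
patty_ounces = 4                  # assumed quarter-pound patty

patty_pounds = patty_ounces / 16  # 16 ounces per pound
gallons = GALLONS_PER_POUND * patty_pounds
print(f"A {patty_ounces} oz patty embodies roughly {gallons:.0f} gallons of water")
```

By that arithmetic, a single quarter-pounder embodies about 600 gallons of water, which is the gap lab-grown and plant-based alternatives aim to close.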

Impossible Foods, meanwhile, is developing a "plant-based hamburger that bleeds," writes The Economist, in addition to plant-based egg and dairy products. To give the burgers a meat-like taste, Impossible Foods extracts an iron-rich molecule from plants, though the company has been extremely private about its method.

Mexico-based startup Eat Limmo is tackling healthy, sustainable, and affordable food from a different angle. Its patent-pending process extracts nutrients from the seeds and peels of fruit waste, recycling all the bits we typically discard into cost-effective, nutritious, and tasty (they say) ingredients for making food.

Microbiome drugs entered clinical trials

Talk about the microbiome isn’t anything new; however, this year Second Genome pushed a microbiome drug into clinical trials. One of the key conditions the company is addressing is inflammatory bowel disease (IBD).

Over five million people suffer from IBD worldwide, most cases due to ulcerative colitis and Crohn's disease. Both conditions currently have less-than-ideal treatment options, including harsh medications, steroids, and, in extreme cases, the removal of portions of patients' intestines.

The first drug the company is pushing through clinical trials addresses inflammation and pain in people suffering from IBD and has completed a phase one placebo-controlled, double-blind trial.

Another startup to watch is uBiome, a microbiome sequencing service that sells personal microbiome kits direct-to-consumer (think 23andMe style for the microbiome).

An article in Wired last month claimed that uBiome is planning to announce a partnership with the CDC, where they’ll be evaluating stool samples from roughly 1,000 hospital patients. According to Wired, “uBiome and the CDC have set out to develop something like a 'Microbiome Disruption Index' to track how treatments, like antibiotics, alter gut microbes.”

This is a major accomplishment for the startup, which began as a citizen science movement, so it's also a big win for the whole community.

Alison E. Berman

Staff Writer at Singularity University
Alison tells the stories of purpose-driven leaders and is fascinated by various intersections of technology and society. When not keeping a finger on the pulse of all things Singularity University, you'll likely find Alison in the woods sipping coffee and reading philosophy (new book recommendations are welcome).

A First Big Step Toward Mapping The Human Brain [1234]

by System Administrator - Monday, May 18, 2015, 22:55

Electrophysiological data collected from neurons in the mouse visual cortex forms the basis for the Allen Institute's Cell Type Database.  ALLEN INSTITUTE FOR BRAIN SCIENCE

A First Big Step Toward Mapping The Human Brain

by Katie M. Palmer

It’s a long, hard road to understanding the human brain, and one of the first milestones in that journey is building a … database.

In the past few years, neuroscientists have embarked on several ambitious projects to make sense of the tangle of neurons that makes the human experience human, and an experience. In Europe, Henry Markram—the Helen Cho to Elon Musk's Tony Stark—is leading the Human Brain Project, a $1.3 billion plan to build a computer model of the brain. In the US, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative hopes to, in its own nebulous way, map the dynamic activity of the noggin's 86 billion neurons.

Now, the Allen Institute for Brain Science, a key player in the BRAIN Initiative, has launched a database of neuronal cell types that serves as a first step toward a complete understanding of the brain. It’s the first milestone in the Institute’s 10-year MindScope plan, which aims to nail down how the visual system of a mouse works, starting by developing a functional taxonomy of all the different types of neurons in the brain.

“The big plan is to try to understand how the brain works,” says Lydia Ng, director of technology for the database. “Cell types are one of the building blocks of the brain, and by making a big model of how they’re put together, we can understand all the activity that goes into perceiving something and creating an action based on that perception.”

The Allen Cell Types Database, on its surface, doesn’t look like much. The first release includes information on just 240 neurons out of hundreds of thousands in the mouse visual cortex, with a focus on the electrophysiology of those individual cells: the electrical pulses that tell a neuron to fire, initiating a pattern of neural activation that results in perception and action. But understanding those single cells well enough to put them into larger categories will be crucial to understanding the brain as a whole—much like the periodic table was necessary to establish basic chemical principles.


A researcher uses a pipette to measure a cell's electrical traces while under a microscope.  ALLEN INSTITUTE FOR BRAIN SCIENCE

Though researchers have come a long way in studying the brain, most of the information they have is big-picture, in the form of functional scans that show activity in brain areas, or small-scale, like the expression of neurotransmitters and their receptors in individual neurons. But the connection between those two scales—how billions of neurons firing together results in patterns of activation and behavior—is still unclear. Neuroscientists don't even have a clear idea of just how many different cell types exist, which is crucial to understanding how they work together. "There was a lot of fundamental information that was missing," CEO Allan Jones says about the database project. "So when we got started, we focused on what we call a reductionist approach, really trying to understand the parts."

When it's complete, the database will be the first in the world to collect information from individual cells along four basic but crucial variables: cell shape, gene expression, position in the brain, and electrical activity. So far, the Institute has tracked three of those variables, taking high-resolution images of dozens of electrically stimulated neurons with a light microscope, while carefully noting their position in the mouse's cortex. "The important early findings are that there are indeed a finite number of classes," says Jones. "We can logically bin them into classes of cells."


Neurons in the database are mapped in 3D space.  Allen Institute for Brain Science

Next up, the Institute will accumulate gene expression data in individual cells by sequencing their RNA, and the overlap of all four variables will ultimately result in the complete cell type taxonomy. That classification system will help anatomists, physicists, and neuroscientists direct their study of neurons more efficiently and build more accurate models of cortical function. But the database isn't notable merely for its contents; how those contents were measured and aggregated is also crucial to the future of these big-picture brain-mapping initiatives.
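As a rough illustration of what taxonomy-building from per-cell measurements looks like, each cell can be represented as a feature vector and grouped by unsupervised clustering. The features and data below are invented, and plain k-means is only a stand-in for the Institute's far richer methods:

```python
# Toy cell-type clustering: represent each neuron by a feature vector and
# group similar cells with k-means. Features and values are invented; the
# Allen taxonomy uses shape, gene expression, position, and electrophysiology.
import random

def dist2(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean_point(pts):
    # Centroid of a cluster; None when the cluster is empty.
    if not pts:
        return None
    return tuple(sum(d) / len(pts) for d in zip(*pts))

def kmeans(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)   # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[nearest].append(p)
        # Keep the old centroid when a cluster comes up empty.
        centroids = [mean_point(c) or centroids[i] for i, c in enumerate(clusters)]
    return clusters

# Invented feature vectors: (spike rate in Hz, soma diameter in µm)
cells = [(80, 12), (75, 10), (82, 11),    # putative fast-spiking interneurons
         (12, 25), (15, 28), (10, 24)]    # putative pyramidal cells
clusters = kmeans(cells, k=2)
for i, members in enumerate(clusters):
    print(f"candidate type {i}: {members}")
```

With real data the feature space has many more dimensions, but the principle is the same: cells that cluster together across all measured variables become candidates for a shared type.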

To create a unified model of the brain, neuroscientists must collect millions of individual data points from neurons in the brain. To start, they take electrical readings from living neurons by stabbing them with tiny, micron-wide pipettes. Those pipettes deliver current to the cells—enough to get them to fire—and record the cell’s electrical output. But there are many ways to set up those electrical readings, and to understand the neural system as a whole, neuroscientists need to use the same technique every time to make sure that the electrical traces can be compared from neuron to neuron.
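The current-step protocol described above can be imitated with a toy model: inject a current step into a simulated neuron and record the resulting voltage trace. The sketch below uses a textbook leaky integrate-and-fire neuron with invented parameters; it is not the Allen Institute's actual pipeline.

```python
# Toy current-clamp protocol: inject a current step into a leaky
# integrate-and-fire model neuron and record its voltage trace.
# All parameters are illustrative.

def record_trace(i_amp_nA=0.3, t_ms=200.0, dt=0.1):
    tau, r_m = 20.0, 100.0                 # membrane time constant (ms), resistance (MΩ)
    v_rest, v_thresh, v_reset = -70.0, -50.0, -70.0   # potentials in mV
    v, trace, spikes = v_rest, [], 0
    steps = round(t_ms / dt)
    for step in range(steps):
        # Current is "on" only during the middle half of the recording.
        i = i_amp_nA if steps // 4 <= step < 3 * steps // 4 else 0.0
        v += dt / tau * (-(v - v_rest) + r_m * i)     # leaky integration
        if v >= v_thresh:                             # threshold crossing: spike
            spikes += 1
            v = v_reset
        trace.append(v)
    return trace, spikes

trace, spikes = record_trace()
print(f"recorded {len(trace)} samples, {spikes} spikes")
```

The point of standardizing the protocol, as the Institute did, is that two traces recorded this way are directly comparable: same step timing, same sampling, so differences reflect the neurons, not the rig.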


Each of the colored dots shown is a single neuron detailed in the cell database, clustered in the mouse visual cortex.  Allen Institute for Brain Science

The Allen Institute, in collaboration with other major neuroscience hubs—Caltech, NYU School of Medicine, the Howard Hughes Medical Institute, and UC Berkeley—has made sure to use the same electrical tracing technique on all of the neurons studied so far (they call it “Neurodata without Borders”). And while the data for this first set of mouse neurons was primarily generated at the Institute, those shared techniques will make future work more applicable to the BRAIN Initiative’s larger goals. “In future releases, we’ll be working with other people to get data from other areas of the brain,” says Ng. “The idea is that if everyone does things in a very standard way, we’ll be able to incorporate that data seamlessly in one place.”

That will become increasingly important as the Institute continues mapping not just mouse neurons, but human ones. It's easy to target specific regions in the mouse brain, getting electrical readings from neurons in a particular part of the visual cortex. It's not so easy to get location-specific neurons from humans. "These cells actually come from patients—people who are having neurosurgery for epilepsy, or the removal of tumors," says Ng. For a surgeon to get to the part of the brain that needs work, they must remove a certain amount of normal tissue that's in the way, and it's that tissue that neuroscientists are able to study.

Because they don’t get to choose exactly where in the brain that tissue comes from, scientists at Allen and other research institutes will have to be extra careful that their protocols for identifying the cells—by location, gene expression, electrical activity, and shape—are perfectly aligned, so none of those precious cells are wasted. All together, the discarded remnants of those human brains may be enough to reconstruct one from scratch.





A former Amazon engineer's startup wants to fix the worst thing about tech job interviews [1463]

by System Administrator - Saturday, September 26, 2015, 13:33

A former Amazon engineer's startup wants to fix the worst thing about tech job interviews

by Matt Weinberger

If there's one thing that techies hate, it's interviewing for a job in tech.

You'd think it'd be easier, right? After all, every company is soon to be a software company if you believe the Silicon Valley hype, which means that every company will soon need lots and lots more programmers.

The problem is that it's actually really hard to assess whether or not somebody is a good programmer.

Everybody thinks they're an expert, and it's often non-technical people, like those in HR, who get tapped to do the initial assessment.

And so two things tend to happen when you interview for a tech job: you either get completely insane "skill" tests of extremely basic knowledge that have little to do with the job at hand, or you face brainteasers, riddles, and other weird exercises designed to gauge your personality as much as your skills.

Regardless, the result is the same: really excellent coders find themselves without a job, while recruiters hire people with the wrong skills for the role.

That's a problem that HackerRank, a startup founded by ex-Amazon engineer and current CEO Vivek Ravisankar, wants to solve. 

On Amazon's Kindle team back in 2008, building the software that let people self-publish blogs to the e-reader store, Ravisankar had to conduct a lot of technical interviews. Over time, it became clear that the process was not great. 

"It's very hard to figure out how good a programmer is from looking at your resume," says Ravisankar, who left Amazon in 2009 to pursue the startup. 

HackerRank is a tool for automatically generating programming tests based on the skills a company wants to screen for, then scoring candidates with its own algorithm.

"Your 5 can be my 1.2," Ravisankar said.  
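One way to read that remark is that raw scores from different tests live on different scales and must be normalized before candidates can be compared. The sketch below uses simple z-score normalization as a stand-in; HackerRank's actual scoring algorithm is unpublished.

```python
# Normalizing raw test scores from different scales so candidates can be
# compared. Z-score normalization is a stand-in for HackerRank's own
# (unpublished) algorithm; the score lists are invented.
from statistics import mean, stdev

def normalize(scores):
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

easy_test = [5.0, 4.0, 4.5, 3.0]    # a generously scored 0-5 scale
hard_test = [1.2, 0.9, 1.5, 0.4]    # a stingy scale for the same candidates

# After normalization, a "5" on one test and a "1.2" on another can land
# at comparable points in their respective distributions.
print(normalize(easy_test))
print(normalize(hard_test))
```

Both normalized lists now have mean 0 and unit spread, so a recruiter can rank candidates across tests without knowing which scale was generous and which was stingy.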

It's a pretty simple idea, but it was profound enough to get HackerRank into the prestigious Y Combinator startup accelerator program. Today, one million developers use it to compete in challenges and gauge their own skills, HackerRank claims, while big companies like Amazon, Riot Games, and Evernote use it in their own recruiting efforts. So far, HackerRank has raised $12.4 million from venture capital firms like Khosla Ventures and Battery Ventures. 

Now, HackerRank has announced integrations with the super-popular recruitment software packages Oracle Taleo, Greenhouse, and Jobvite, so that recruiters can instantly test job seekers and see their scores right within those programs.

Again, simple, but profound — recruiters can see how good a candidate is straight from the software they're already using to find good future employees. 

There's an interesting side effect here, too. The traditional technical interview has the bad habit of turning away women and minority groups for the simple reason that they prioritize people who code in a similar way to the interviewer.

A more objective score given by HackerRank could remove that barrier, ensuring candidates' applications live and die by their own merit.

"We're bringing in a huge change in recruiting," Ravisankar says.


A Maker’s Guide to the Metaverse [1361]

by System Administrator - Wednesday, August 26, 2015, 22:50

A Maker’s Guide to the Metaverse

By Rod Furlan

The virtual reality renaissance that is now underway is creating much excitement surrounding the potential arrival of the “metaverse.” As I write, four great technology titans are competing to bring affordable head-mounted displays to market and usher VR into the mainstream [1].

While the term “metaverse” was coined by Neal Stephenson in his 1992 novel Snow Crash, current usage has diverged significantly from its original meaning. In popular contemporary culture, the metaverse is often described as the VR-based successor to the web.

Its recent surge in popularity is fueled by the expectation that the availability of affordable VR equipment will invariably lead to the creation of a network of virtual worlds that is similar to the web of today—but with virtual “places” instead of pages.

From a societal perspective, the metaverse is also associated with anticipation that as soon as VR technology reaches a sufficiently high level of quality, we will spend a significant portion of our private and professional lives in shared virtual spaces. Those spaces will be inherently more accommodating than natural reality, and this contextual malleability is expected to have a profound impact on our interpersonal relationships and overall cultural velocity.

Given its potential, it is no surprise the metaverse is a persistent topic in discussions about the future of virtual reality. In fact, it is difficult to find VR practitioners who can speculate about a plausible future where technological progress is unhindered and yet a metaverse is never created.

Still, there is little consensus on what a real-world implementation of the metaverse would be like.

Our research group, Lucidscape, was created to accelerate the advent of the metaverse by addressing the most challenging aspects of its implementation [2]. Our mandate is to provide the open source foundations for a practical, real-world metaverse that embraces freedom, lacks centralized control, and ultimately belongs to everyone.

In this article, which is part opinion and part pledge, I will share the tenets for what we perceive as an “ideal” implementation of the metaverse.  My goal is not to promote our work but to provoke thought and spark a conversation with the greater VR community about what we should expect from a real-world metaverse.

Tenet #1 – Creative freedom is not negotiable

“The first condition of progress is the removal of censorship.” – George Bernard Shaw

As the prerequisite technologies become available, the emergence of a proto-metaverse becomes all but inevitable. Nevertheless, it is too soon to know what kind of metaverse will arise—whether it will belong to everybody, embracing freedom, accessibility and personal expression without compromise, or be controlled and shaped by the will and whims of its creators.

To draw a relatable comparison, imagine a different world where the web functions akin to Apple’s iOS app store. In this world, all websites must be reviewed and approved for content before they are made available to users. In this impoverished version of the web, content perceived as disagreeable by its gatekeepers is suppressed, and users find themselves culturally stranded in a manicured walled garden.

While our (reasonably) free web has become a powerful driver of contemporary culture, I would argue that a content-controlled web would remain culturally impotent in comparison because censorship inevitably stifles creativity.

Some believe that censorship under the guise of curation is acceptable under a benevolent dictator. But let me again bring forth the common example of Apple, an adored company that has succumbed to the temptation of acting as a distorted moral compass for its customers by ruling that images of the human body are immoral while murder simulators are acceptable [3].

In contrast, the ideal metaverse allows everyone to add worlds to the network since there are no gatekeepers.

In it, human creativity is unshackled by the conventions and customs of our old world. Content creators are encouraged to explore the full spectrum of possible human experiences without the fear of censorship. Each world is a sovereign space that is entirely determined and controlled by its owner-creator.

Tenet #2 – Technological freedom is not negotiable either

“If the users do not control the program, the program controls the users” – Richard Stallman

The ideal metaverse is built atop a foundation of free software and open standards. This is of vital importance not only to enforce the right to creative freedom but to safeguard a nascent network from the risks of single-source solutions, attempts of control by litigation or even abuse by its own developers.

In the long term, a technologically free metaverse is also more likely to achieve a higher level of penetration and cultural relevance.

Tenet #3 – Dismantle the wall between creators and users

“Dismantle the wall between developers and users, to develop systems so easy to program that doing so would be a natural, simple aspect of use.” - The Xerox PARC Design Philosophy [4]

Most computer users have never written a program, and most web users have never created a website.

While creative and technological freedom are required, they are not sufficient to assure an inclusive metaverse if only a small portion of the user population can contribute to the network.

It is also necessary to break the wall that separates content creators from consumers by providing not only the means but also the incentives necessary to make each and every user a co-author in the metaverse network.

This empowerment begins with the outright rejection of the current "social contract" that delineates the submissive relationship between users and the computers they use. In the current model, user contributions are neither expected nor welcome, which in turn greatly diminishes the value of becoming algorithmically literate unless you intend to become a professional in the field.

However, in a metaverse where virtual components are easily inspected and modified in real-time [5], everyone could become a tinkerer first, and a maker eventually.

Thus, every aspect of the user experience in the ideal metaverse is an invitation to learn, create or remix. Worlds can be quickly composed by linking to pre-existing parts made available by other authors. The source code and other building blocks for each shared part is readily available for study or tinkering. While each world remains sovereign, visitors are nonetheless encouraged to submit contributions that can be easily accepted and incorporated [6].


To illustrate the benefits of embracing users as co-authors, imagine that you have published a virtual model of Paris in the ideal metaverse. Over time, your simulation gains popularity and becomes a popular destination for Paris lovers around the world. To your amazement, your visitors congeal into a passionate community that submits frequent improvements to your virtual Paris, effectively becoming your co-authors. [7]

Most importantly, the basic tools of creation [8] of the ideal metaverse are accessible to children and those who are not technologically inclined. By design, these tools allow users to learn by experimentation thus blurring the lines between purposeful effort and creative play. [9]

Tenet #4 – Support for worlds of unprecedented scale

Virtual worlds of today, with a single notable exception [10], can only handle smaller-scale simulations with no more than several dozen participants in the same virtual space. To overcome this limitation, world creators sacrifice the experience of scale by partitioning worlds into a multitude of smaller instances where only a limited number of participants may interact with each other.

In contrast, the simulation infrastructure of the ideal metaverse supports worlds of unprecedented scale (e.g., whole populated cities, planets, solar systems) while handling millions of simultaneous users within the same shared virtual space.

This is an incredibly difficult challenge because it requires maintaining a coherent state across a vast number of geographically separated machines in real-time. Even as networking technology advances, there are fundamental physical limits to possible improvements in total bandwidth and latency. [11]


Fulfilling this requirement will require algorithmic breakthroughs and the creation of a computational fabric that allows an arbitrary number of machines to join forces to simulate large seamless worlds while at the same time gracefully compensating for unfavorable network circumstances.

Scalability of this magnitude is not something that can be easily bolted onto a pre-existing architecture. Instead, the creators of the ideal metaverse must take this requirement into consideration from the very beginning of development.

Tenet #5 – Support for nomadic computation

Of all tenets proposed in this essay, this is the one that is most easily contested because it is motivated not by strict necessity but by the desire to create a network that is more than the sum of its parts.

The same way that the web required a new way of thinking about information, the ideal metaverse requires a new way of thinking about computation. One of the ways this requirement manifests itself is by our proposal for the support of safe nomadic computation.

In the ideal metaverse, a nomadic program is a fully autonomous participant with "rights" similar to those of a human user. Like any ordinary user, such programs can move from one server to the next on the network. To the underlying computational fabric, there is no meaningful distinction between human operators and nomadic programs, other than the fact that programs carry their source code and internal state along as they migrate to a new server.
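The mechanics of carrying source code and internal state can be sketched in a few lines. This is a toy illustration under loose assumptions (no sandboxing, no authentication, and a shared Python runtime on every server); all names are invented.

```python
# Minimal sketch of "nomadic computation": a program whose migration payload
# is its own source code plus its internal state. A real system would need
# sandboxing, authentication, and a common runtime across servers.
import json
import types

AGENT_SOURCE = """
def step(state):
    state['worlds_visited'] += 1
    return state
"""

def pack(state):
    # The payload a nomadic agent would send to the next server.
    return json.dumps({"source": AGENT_SOURCE, "state": state})

def unpack_and_run(payload):
    data = json.loads(payload)
    module = types.ModuleType("agent")
    exec(data["source"], module.__dict__)   # re-instantiate the carried code
    return module.step(data["state"])       # resume from the carried state

state = {"worlds_visited": 2}
state = unpack_and_run(pack(state))
print(state)   # the agent resumes exactly where it left off
```

The essential property is that the receiving server needs no prior knowledge of the program: code and state arrive together, which is what distinguishes a nomadic program from an ordinary remote user session.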

A powerful illustrative example of the potential for roaming programs is the approach taken by developer Hello Games in the development of “No Man’s Sky” [13].

By leveraging procedural content generation, a team of four artists generated a virtual universe containing over 18 quintillion planets. Unable to visit and evaluate those worlds one by one, they resorted to creating a fleet of autonomous virtual robots to explore them. Each robot documents its journey and takes short videos of the most remarkable things it encounters to share with the developers.
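Generating 18 quintillion planets is feasible because procedural worlds are derived, not stored: the same seed always reproduces the same world, so nothing needs to be saved. A minimal sketch of the idea (the attributes and ranges are invented, not No Man's Sky's actual generator):

```python
# Sketch of seed-based procedural generation: each planet is derived
# deterministically from its seed, so an entire universe can exist without
# being stored. Attribute names and ranges are invented for illustration.
import random

def generate_planet(seed):
    rng = random.Random(seed)            # same seed -> same planet, always
    return {
        "radius_km": rng.randint(2000, 12000),
        "atmosphere": rng.choice(["none", "thin", "thick", "toxic"]),
        "has_life": rng.random() < 0.05,
    }

# An "explorer robot" can visit any slice of the seed space on demand:
for seed in range(3):
    print(seed, generate_planet(seed))

# Determinism check: regenerating a seed yields the identical planet.
assert generate_planet(42) == generate_planet(42)
```

This determinism is also what makes the robot explorers useful: a robot only needs to report back interesting seeds, and the developers can regenerate those exact worlds to see them for themselves.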

While Hello Games’ robot explorers are not nomadic programs, the same idea could be implemented in the metaverse on a much grander scale. For example, more than merely visiting worlds in the network, nomadic programs can also interact with other users or programs, improve the worlds visited [14] or even act as the autonomous surrogate for a user who is currently offline.

Moreover, the infrastructure required for supporting nomadic computation can also be leveraged to offload work to the computers utilized by human visitors. This is beneficial because thousands of end-user machines running complex logic can create much richer experiences than what would be possible with server-side resources exclusively. [15]

The road ahead

The five tenets of the ideal metaverse shared in this article can be succinctly distilled to just two adjectives — free and distributed. Those are precisely the attributes that made the web widely successful, and the core values the metaverse must embrace to achieve a similar level of cultural relevance.

However, there are still many significant challenges ahead for the creation of a real-world metaverse.

From a hardware perspective, nothing short of a collapse of technological progress stands in the way of the required technologies being made available. Computing and networking performance continue to increase exponentially [16], and affordable head-mounted displays are just around the corner. [17]


From the software standpoint, there are a few groups already mobilizing to fulfill the promise of the metaverse. I have previously introduced our team at Lucidscape, and I feel compelled to mention our amazing friends at High Fidelity, since they are also hard at work building their own vision of the metaverse. Similarly noteworthy are the efforts of Improbable; even though they are developing proprietary technology, their work could be useful to the metaverse in the long run [18].

Overall, recent progress has been encouraging. Last year our team at Lucidscape ran the first large-scale test of a massively parallel simulation engine where over ten million entities were simulated on a cluster composed of 6,608 processor cores [19]. Meanwhile, High Fidelity has already released the alpha version of their open source platform for shared virtual reality, and as I write this, there are 44 domains in their network, which can be earnestly described as a proto-metaverse.

Stress Test #1 - Illustrative Cluster

Where imagination becomes reality

Nothing is as uniquely human as the capacity to dream. Our ability to imagine a better world gives us both the desire and the resolve to reshape the reality around us.

Neuroscientist Gerald Edelman eloquently defined the human brain as the place "where matter becomes imagination" [20]. It is a wondrous concept which is about to be taken a step further as the metaverse establishes itself as the place "where imagination becomes reality."

While in natural reality our capacity to imagine greatly outstrips our power to realize, virtual reality closes that gap and mainstream availability of VR will release an unfathomable amount of pent-up creative energy.

Our urge to colonize virtual worlds is easily demonstrated by success stories of video games that give users easy-to-use tools to create on their own. Media Molecule’s “Little Big Planet” receives over 5,000 submissions of user-created levels every day. Meanwhile, the number of Microsoft’s “Minecraft” worlds is estimated at the hundreds of millions.


While it is true that some of us may never find virtual reality to be as fulfilling as natural reality, ultimately we are not the ones who will realize the full potential of VR and the metaverse.

Today's children will be the first "virtual natives." Their malleable brains will adapt and evolve along with the virtual worlds they create and experience. Eventually they will learn to judge experiences exclusively on the amount of cognitive enrichment they offer, not on the arbitrary labels of "real" or "virtual."

In time, the metaverse will become humanity’s shared virtual canvas. In it, we will meet to create new worlds and new experiences that bypass the constraints of natural reality. Its arrival will set in motion a grand social experiment that will ultimately reveal the true nature of our species. [21]

How will our culture and morality evolve when reality itself becomes negotiable? Will we create worlds that elevate the human spirit to new heights? Or will we use virtual reality to satisfy our darkest desires?

To the disappointment of both the eternally optimistic and relentlessly pessimistic, the answer is likely to be a complex mixture of both.

The real world metaverse will be just as full of beauty and contain just as much darkness as the web we have today. It will be an honest mosaic portrait of experiences that is fully representative of our true cognitive identity as a species.

The problem you did not know you had

I would like to conclude by asking you to imagine a line representing your personal trajectory through life’s many possibilities. This line connects your birth to each of your most salient moments up to the current point in time, and it represents that totality of your life’s experience.

Each decision you made along the way pruned the tree of possibilities of the branches that were incompatible with the sum of your previous choices. For each door you opened, countless others were sealed shut because such is the nature of a finite human existence — causality mercilessly limits how much you can do with the time you have.

I, for example, decided to specialize in computer science, so it is unlikely that I will ever become an astronaut. Since I am male and musically challenged, I will also never know what it is like to be a female J-pop singer, or a person of a different race, or to be born in a different century. No matter what I do, those experiences are inaccessible to me in natural reality.

The goal of this exercise is to bring to your attention that no matter how rich of a life you have lived, the breadth of your journey represents an insignificantly narrow path through the spectrum of possible human experiences.

This is how natural reality limits you. It denies you access to the full spectrum of experiences your mind requires to achieve higher levels of wisdom, empathy and cognitive well-being.

This is the problem you did not know you had — and virtual reality is precisely the solution you did not know you needed.


[1] Namely: Oculus VR owned by Facebook, Google, Sony and HTC/Valve.

[2] Lucidscape is building a massively distributed computational fabric to power the metaverse (

[3] While I am not in any way opposed to violent video games, I want to make the point that by any reasonable moral scale, sex and nudity are inherently more acceptable than murder.

[4] Read more:

[5] This is conceptually similar to what was attempted by the developers of the Xerox Alto operating system because user changes are reflected immediately. See also [4]

[6] This mechanism would be conceptually similar to a "pull request":

[7] "Wikiworlds" would be a good cognitive shortcut for this co-authoring model: worlds that are like Wikipedia in the aspect that anyone can contribute.

[8] Emphasis is given to the fact that the basic tools must be accessible to non-technical users. Certainly, complex tools for power users are also of critical importance.

[9] This reflects my personal wish of seeing a whole generation of kids becoming algorithmically literate by "playing" on the metaverse.

[10] Eve Online (

[11] Read more:

[13] Read more:

[14] Imagine an autonomous builder program that travels around the metaverse and uses procedural content generation to suggest improvements to the visited worlds.

[15] Another important aspect of supporting nomadic computation is to minimize the cross-talk between servers as autonomous agents roam the metaverse. Since the execution of nomadic programs is local to the server it is currently visiting, a great deal of network bandwidth can be spared.

[16] Read more:

[17] Coming soon: Oculus CV, Sony Morpheus, Valve HTC Vive

[18] I would like to take this opportunity to invite the great minds at Improbable to consider building a free metaverse alongside Lucidscape and High Fidelity instead of limiting themselves to the scope of the video game industry.

[19] Read more:

[20] “How Matter Becomes Imagination” is the sub-title of “A Universe of Consciousness” by Nobel Prize winner Gerald Edelman and neuroscientist Giulio Tononi.

[21] It is this author’s opinion that technology does not change us, it merely enables us to act the way we wanted to all along.

Rod Furlan is an artificial intelligence researcher, a Singularity University alumnus and the co-founder of Lucidscape, a virtual reality research lab currently working on a new kind of massively distributed 3D simulation engine to power a vast network of interconnected virtual worlds. Read more here and follow him @rfurlan.

To get updates on Future of Virtual Reality posts, sign up here.

Image Credit: mediamolecule

Logo KW

A Movable Defense [1043]

de System Administrator - martes, 6 de enero de 2015, 12:46

A Movable Defense

In the evolutionary arms race between pathogens and hosts, genetic elements known as transposons are regularly recruited as assault weapons for cellular defense.

By Eugene V. Koonin and Mart Krupovic

JUMPERS: Transposable elements, which make up as much as 90 percent of the corn genome and are responsible for the variation in kernel color, may also be at the root of diverse immune defenses.


Researchers now recognize that genetic material, once simplified into neat organismal packages, is not limited to individuals or even species. Viruses that pack genetic material into stable infectious particles can incorporate some or all of their genes into their hosts’ genomes, allowing remnants of infection to remain even after the viruses themselves have moved on. On a smaller scale, naked genetic elements such as bacterial plasmids and transposons, or jumping genes, often shuttle around and between genomes. It seems that the entire history of life is an incessant game of tug-of-war between such mobile genetic elements (MGEs) and their cellular hosts.

MGEs pervade the biosphere. In all studied habitats, from the oceans to soil to the human intestine, the number of detectable virus particles, primarily bacteriophages, exceeds the number of cells at least tenfold, and maybe much more. Furthermore, MGEs and their remnants constitute large portions of many organisms’ genomes—as much as two-thirds of the human genome and up to 90 percent in plants such as corn.


MOBILE DNA: A false-color transmission electron micrograph of a transposon, a segment of DNA that can move around chromosomes and genomes


Despite their ubiquity and prevalence in diverse genomes, MGEs have traditionally been considered nonfunctional junk DNA. Starting in the middle of the 20th century, through the pioneering work of Barbara McClintock in plants, and over the following decades in a widening range of organisms, researchers began to uncover clues that MGE sequences are recruited for a variety of cellular functions, in particular for the regulation of gene expression. More-recent work reveals that many organisms also use MGEs for a more specialized and sophisticated function, one that capitalizes on the ability of these elements to move around genomes, modifying the DNA sequence in the process. Transposons seem to have been pivotal contributors to the evolution of adaptive immunity both in vertebrates and in microbes, which were only recently discovered to actually have a form of adaptive immunity—namely, the CRISPR-Cas (clustered regularly interspaced short palindromic repeats–CRISPR-associated genes) system that has triggered the development of a new generation of genome-manipulation tools.

Multiple defense systems have evolved in nearly all cellular organisms, from bacteria to mammals. Taking a closer look at these systems, we find that the evolution of these defense mechanisms depended, in large part, on MGEs—those same elements that are themselves targets of host immune defense.

Layers of defense

As cheaters in the game of life, stealing resources from their hosts, parasites have the potential to cause the collapse of entire communities, killing their hosts before moving on or dying themselves. But hosts are far from defenseless. The diversity and sophistication of immune systems are striking: their functions range from immediate and nonspecific innate responses to exquisitely choreographed adaptive responses that result in lifelong immune memory after an initial pathogen attack.1


Over the last two decades or so, it has become clear that nearly all organisms possess multiple mechanisms of innate immunity.2 Toll-like receptors (TLRs), common to most animals, recognize conserved molecules from microbial pathogens and activate the appropriate components of the immune system upon invasion. Even more widespread and ancient is RNA interference (RNAi), a powerful defense system that employs RNA guides, known as small interfering RNAs (siRNAs), to destroy invading nucleic acids, primarily those of RNA viruses. Conceptually, the biological function of siRNAs is analogous to that of TLRs: an innate immune response to a broad class of pathogens.

Prokaryotes possess their own suite of innate immune mechanisms, including endonucleases that cleave invader DNA at specific sites and enzymes called methylases that modify those same sites in the prokaryotes’ own genetic material to shield it from cleavage, a strategy known as restriction modification (RM).3 If overwhelmed by pathogens, many prokaryotic cells will undergo programmed cell death or go into dormancy, thereby preventing the spread of the pathogen within the organism or population. In particular, infected bacterial or archaeal cells can activate toxin-antitoxin (TA) systems to induce dormancy or cell death. Normally, the toxin protein is complexed with the antitoxin and thus inactivated. However, under stress, the antitoxin is degraded, unleashing the toxin to harm the cell.

Many viruses that infect microbes also encode RM and TA modules.4 These viruses are, in effect, a distinct variety of MGEs that sometimes have highly complex genomes. Viruses use RM systems for the very same purpose as their prokaryotic hosts: the methylase modifies the viral genome, whereas the endonucleases degrade any unmodified genomes in the host cell, thereby providing nucleotides for the synthesis of new copies of the viral genome. And the TA system can ensure retention of a plasmid or virus within the cell. The toxin and antitoxin proteins dramatically differ in their vulnerability to proteolytic enzymes that are always present in the cell: the toxin is stable whereas the antitoxin is labile. This does not matter as long as both proteins are continuously produced. However, if both genes are lost (for example, during cell division), the antitoxin rapidly degrades, and the remaining amount of the toxin is sufficient to halt the biosynthetic activity of the cell and hence kill it or at least render it dormant. A plasmid or virus that carries a TA module within its genome thus implants a self-destructing mechanism in its host that is activated if the MGE is lost. (See illustration.)
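The addiction logic of a TA module hinges on the different stabilities of the two proteins. As a minimal sketch, the dynamics can be caricatured with first-order production and decay (the model and all rate constants below are invented for illustration, not taken from the article):

```python
# Toy model of a toxin-antitoxin (TA) addiction module.
# Assumption: first-order kinetics with a labile antitoxin (fast decay)
# and a stable toxin (slow decay); rates are illustrative only.

def simulate_ta(steps, production_on_until, dt=1.0,
                k_decay_toxin=0.01, k_decay_antitoxin=0.5,
                k_production=1.0):
    """Track toxin vs antitoxin levels before and after gene loss."""
    toxin, antitoxin = 0.0, 0.0
    history = []
    for t in range(steps):
        producing = t < production_on_until  # TA genes still present?
        prod = k_production if producing else 0.0
        toxin += (prod - k_decay_toxin * toxin) * dt
        antitoxin += (prod - k_decay_antitoxin * antitoxin) * dt
        history.append((t, toxin, antitoxin))
    return history

# Genes are expressed for 100 steps, then lost (e.g., plasmid loss).
history = simulate_ta(steps=200, production_on_until=100)
t_end, toxin_end, antitoxin_end = history[-1]
# The labile antitoxin vanishes almost immediately after production
# stops, while the stable toxin persists and harms the cell.
print(toxin_end > 10 * antitoxin_end)
```

The asymmetry in decay rates is the whole mechanism: while both genes are expressed, levels stay balanced; once they are lost, only the toxin remains.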

When an MGE inserts into the host genome, it inevitably modifies that genome, typically using an MGE-encoded recombinase (also known as integrase or transposase) as a breaking-and-entering tool. Speaking in deliberately anthropomorphic terms, the MGEs do so for their own selfish purposes, to ensure their propagation within the host genome. However, given the ubiquity of MGEs across cellular life forms, it seems extremely unlikely that host organisms would not recruit at least some of these naturally evolved genome manipulation tools in order to exploit their remarkable capacities for their own purposes. Immune memory that involves genome manipulation is arguably the most obvious utility of these tools, and in retrospect, it is not surprising that unrelated transposons and their recombinases appear to have made key contributions to the origin of both animal and prokaryotic forms of adaptive immunity.

Guns for hire


DISRUPTING COLOR: The variations in color seen in this dahlia “flower” (actually a cluster of small individual flowers, or florets) can be caused by transposon-induced mutations.


Until recently, prokaryotes had been thought to entirely lack the sort of adaptive immunity that dominates defense against parasites in vertebrates. This view has been overturned in the most dramatic fashion by the discovery of CRISPR-Cas, the RNAi-like defense system found to be present in most archaea and many bacteria studied to date.5 In 2005, Francisco Mójica of the University of Alicante in Spain and colleagues,6 and independently, Dusko Ehrlich of the Pasteur Institute in Paris,7 discovered that some of the unique sequences inserted between the CRISPR repeats, known as spacers, were identical to pieces of bacteriophage or plasmid genomes. Combined with a detailed analysis of the predicted functions of Cas proteins, this discovery led one of us (Koonin) and his team to propose in 2006 that CRISPR-Cas functioned as a form of prokaryotic adaptive immunity, with memory of past infections stored in the genome within the CRISPR “cassettes”—clusters of short direct repeats, interspersed with similar-size nonrepetitive spacers derived from various MGEs—and to develop a detailed hypothesis about the mechanism of such immunity.8

Subsequent experiments from Philippe Horvath’s and Rodolphe Barrangou’s groups at Danisco Corporation,9 along with several other studies that followed in rapid succession, supported this hypothesis. (See “There’s CRISPR in Your Yogurt,” here.) It has been shown that CRISPR-Cas indeed functions by incorporating fragments of foreign bacteriophage or plasmid DNA into CRISPR cassettes, then using the transcripts of these unique spacers as guide RNAs to recognize and cleave the genomes of repeat invaders. (See illustration.) A key feature of CRISPR-Cas systems is their ability to transmit extremely efficient, specific immunity across many thousands of generations. Thus, CRISPR-Cas is not only a bona fide adaptive immunity system, but also a genuine machine of Lamarckian evolution, whereby an environmental challenge—a virus or plasmid, in this case—directly causes a specific change in the genome that results in an adaptation that is passed on to subsequent generations.10
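The acquire-then-recognize loop described above can be caricatured in a few lines of code. This is a deliberately crude sketch: the sequences, the spacer length, and plain substring matching are all illustrative assumptions, and it ignores PAM motifs, Cas protein complexes, and RNA guides entirely.

```python
import random

SPACER_LEN = 8  # illustrative; real spacers are typically ~30 nt

def acquire_spacer(cassette, invader_genome):
    """On first exposure, store a fragment of the invader in the cassette."""
    start = random.randrange(len(invader_genome) - SPACER_LEN + 1)
    cassette.append(invader_genome[start:start + SPACER_LEN])

def recognize(cassette, invader_genome):
    """A repeat invader is targeted if any stored spacer matches it."""
    return any(spacer in invader_genome for spacer in cassette)

random.seed(0)
phage = "ATGCCGTAGGCTTACGATCCGGAATTCCAG"   # made-up invader genome
other = "TTTTAAAACCCCGGGGTTTTAAAACCCCGG"   # made-up unrelated invader

cassette = []                     # the cell's heritable CRISPR memory
acquire_spacer(cassette, phage)   # survive first infection, keep a record
print(recognize(cassette, phage)) # True: the repeat invader is recognized
print(recognize(cassette, other)) # False: no memory of this one
```

The point of the sketch is the Lamarckian feature discussed in the text: the environmental challenge itself writes the record into the (heritable) cassette.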


A torrent of comparative genomic, structural, and experimental studies has characterized the extremely diverse CRISPR-Cas systems according to the suites of Cas proteins involved in CRISPR transcript processing and target recognition.5,11 While Type I and Type III systems employ elaborate protein complexes that consist of multiple Cas proteins, Type II systems perform all the necessary reactions with a single large protein known as Cas9. These findings opened the door for straightforward development of a new generation of genome-editing tools. Cas9-based tools are already used by numerous laboratories all over the world for genome engineering that is much faster, more flexible, and more versatile than any methodology that was available in the pre-CRISPR era.12

And it seems that humans are not the only species to have stolen a page from the CRISPR book: viruses have done the same. For example, a bacteriophage that infects pathogenic Vibrio cholerae carries its own adaptable CRISPR-Cas system and deploys it against another MGE that resides within the host genome.13 Upon phage infection, that rival MGE, called a phage-inducible chromosomal island-like element (PLE), excises itself from the cellular genome and inhibits phage production. But at the same time, the bacteriophage-encoded CRISPR-Cas system targets the PLE for destruction, ensuring successful phage propagation.

Consequently, in prokaryotes, all defense systems appear to be guns for hire that work for the highest bidder. Sometimes it is impossible to know with any certainty in which context, cellular or MGE, different defense mechanisms first emerged.

Transposon origins of adaptive immunity


THE ULTIMATE MOBILE ELEMENT: Bacteriophages converging on an E. coli cell can be seen injecting their genetic material (blue-green threads) into the bacterium. Often, viral DNA will be taken up by the host genome, where it can be passed through bacterial generations.


Recent evidence from our groups supports an MGE origin of the CRISPR-Cas systems. The function of Cas1—the key enzyme of CRISPR-Cas that is responsible for the acquisition of foreign DNA and its insertion into spacers within CRISPR cassettes—bears an uncanny resemblance to the recombinase activity of diverse MGEs, even though Cas1 does not belong to any of the known recombinase families. As a virtually ubiquitous component of CRISPR-Cas systems, Cas1 was likely central to the emergence of CRISPR-Cas immunity.

During a recent exploration of archaeal DNA dark matter—clusters of uncharacterized genes in sequenced genomes—we unexpectedly discovered a novel superfamily of transposon-like MGEs that could hold the key to the origin of Cas1.14 These previously unnoticed transposons contain inverted repeats at both ends, just like many other transposons, but their gene content is unusual. The new transposon superfamily is present in both archaeal and bacterial genomes and is highly polymorphic (different members contain from 6 to about 20 genes), with only two genes shared by all identified representatives. One of these conserved genes encodes a DNA polymerase, indicating that these transposons supply the key protein for their own replication. While diverse eukaryotes harbor self-synthesizing transposons of the Polinton or Maverick families, this is the first example in prokaryotes. But it was the second conserved protein that held the biggest surprise: it was none other than a homolog of Cas1, the key protein of the CRISPR-Cas systems.

We dubbed this new transposon superfamily “casposons” and naturally proposed that, in this context, Cas1 functions as a recombinase. In the phylogenetic tree of Cas1, the casposons occupy a basal position, suggesting that they played a key role in the origin of prokaryotic adaptive immunity.

In vertebrates, adaptive immunity acts in a completely different manner than in prokaryotes and is based on the acquisition of pathogen-specific T- and B-lymphocyte antigen receptors during the lifetime of the organism. The vast repertoire of immunoglobulin receptors is generated from a small number of genes via dedicated diversification processes known as V (variable), D (diversity), and J (joining) segment (V(D)J) recombination and hypermutation. (See illustration.) In a striking analogy to CRISPR-Cas, vertebrate adaptive immunity also seems to have a transposon at its origin. V(D)J recombination is mediated by the RAG1-RAG2 recombinase complex. The recombinase domain of RAG1 derives from the recombinases of a distinct group of animal transposons known as Transibs.15 The recombination signal sequences of the immunoglobulin genes, which are recognized by the RAG1-RAG2 recombinase and are necessary for bringing together the V, D, and J gene segments, also appear to have evolved via Transib insertion.
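The combinatorial side of V(D)J recombination can be sketched in a few lines. The segment counts below are rough, illustrative approximations of human heavy-chain gene numbers, and the enormous extra diversity from junctional insertions, deletions, and hypermutation is omitted.

```python
import random

# Toy pools of gene segments (counts are approximate and illustrative).
V = [f"V{i}" for i in range(1, 41)]   # ~40 variable segments
D = [f"D{i}" for i in range(1, 24)]   # ~23 diversity segments
J = [f"J{i}" for i in range(1, 7)]    # ~6 joining segments

def recombine(rng):
    """RAG1-RAG2 joins one V, one D, and one J segment into a receptor gene."""
    return (rng.choice(V), rng.choice(D), rng.choice(J))

# Combinatorial joining alone yields thousands of distinct receptors:
print(len(V) * len(D) * len(J))  # 40 x 23 x 6 = 5520 combinations
print(recombine(random.Random(1)))
```

Even this stripped-down version shows how a small number of genes generates a large receptor repertoire, which junctional diversification then expands by many orders of magnitude.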

The two independent origins of adaptive immune systems in prokaryotes and eukaryotes involving unrelated MGEs show that, in the battle for survival, organisms welcome all useful molecular inventions irrespective of who the original inventor was. Indeed, the origin of CRISPR-Cas systems from prokaryotic casposons and vertebrate V(D)J recombination from Transib transposons might appear paradoxical given that MGEs are primary targets of immune systems. However, considering the omnipresence and diversity of MGEs, it seems likely that even more Lamarckian-type mechanisms have, throughout the history of life, directed genomic changes in the name of host defense.16

Moreover, the genome-engineering capacity of immune systems provides almost unlimited potential for the development of experimental tools for genome manipulation and other applications. The utility of antibodies as tools for protein detection and of RM enzymes for specific fragmentation of DNA molecules has been central to the progress of biology for decades. Recently, CRISPR-Cas systems have been added to that toolkit as, arguably, the most promising of the new generation of molecular biological methods. It is difficult to predict what opportunities for genome engineering could be hidden within still unknown or poorly characterized defense systems.



Eugene V. Koonin is a group leader at the National Library of Medicine’s National Center for Biotechnology Information in Bethesda, Maryland. Mart Krupovic is a research scientist at the Institut Pasteur in Paris, France.


  1. T. Boehm, “Design principles of adaptive immune systems,” Nat Rev Immunol, 11:307-17, 2011.
  2. R. Medzhitov, “Approaching the asymptote: 20 years later,” Immunity, 30:766-75, 2009.
  3. K.S. Makarova et al., “Comparative genomics of defense systems in archaea and bacteria,” Nucleic Acids Res, 41:4360-77, 2013.
  4. J.E. Samson et al., “Revenge of the phages: defeating bacterial defences,” Nat Rev Microbiol, 11:675-87, 2013.
  5. R. Barrangou, L.A. Marraffini, “CRISPR-Cas systems: Prokaryotes upgrade to adaptive immunity,” Mol Cell, 54:234-44, 2014.
  6. F.J. Mójica et al., “Intervening sequences of regularly spaced prokaryotic repeats derive from foreign genetic elements,” J Mol Evol, 60:174-82, 2005.
  7. A. Bolotin et al., “Clustered regularly interspaced short palindrome repeats (CRISPRs) have spacers of extrachromosomal origin,” Microbiology, 151:2551-61, 2005.
  8. K.S. Makarova et al., “A putative RNA-interference-based immune system in prokaryotes: computational analysis of the predicted enzymatic machinery, functional analogies with eukaryotic RNAi, and hypothetical mechanisms of action,” Biol Direct, 1:7, 2006.
  9. R. Barrangou et al., “CRISPR provides acquired resistance against viruses in prokaryotes,” Science, 315:1709-12, 2007.
  10. E.V. Koonin, Y.I. Wolf, “Is evolution Darwinian or/and Lamarckian?” Biol Direct, 4:42, 2009.
  11. K.S. Makarova et al., “Evolution and classification of the CRISPR-Cas systems,” Nat Rev Microbiol, 9:467-77, 2011.
  12. H. Kim, J.S. Kim, “A guide to genome engineering with programmable nucleases,” Nat Rev Genet, 15:321-34, 2014.
  13. K.D. Seed et al., “A bacteriophage encodes its own CRISPR/Cas adaptive response to evade host innate immunity,” Nature, 494:489-91, 2013.
  14. M. Krupovic et al., “Casposons: a new superfamily of self-synthesizing DNA transposons at the origin of prokaryotic CRISPR-Cas immunity,” BMC Biology, 12:36, 2014.
  15. V.V. Kapitonov, J. Jurka, “RAG1 core and V(D)J recombination signal sequences were derived from Transib transposons,” PLOS Biol, 3:e181, 2005.
  16. E.V. Koonin, M. Krupovic, “Evolution of adaptive immunity from transposable elements combined with innate immune systems,” Nature Rev Genet, doi:10.1038/nrg3859, December 9, 2014.



Logo KW

A neural portrait of the human mind [907]

de System Administrator - sábado, 4 de octubre de 2014, 21:39

Nancy Kanwisher: A neural portrait of the human mind



Brain imaging pioneer Nancy Kanwisher, who uses fMRI scans to see activity in brain regions (often her own), shares what she and her colleagues have learned: The brain is made up of both highly specialized components and general-purpose "machinery." Another surprise: There's so much left to learn.


0:11 - Today I want to tell you about a project being carried out by scientists all over the world to paint a neural portrait of the human mind. And the central idea of this work is that the human mind and brain is not a single, general-purpose processor, but a collection of highly specialized components, each solving a different specific problem, and yet collectively making up who we are as human beings and thinkers. To give you a feel for this idea,

0:42 - imagine the following scenario: You walk into your child's day care center. As usual, there's a dozen kids there waiting to get picked up, but this time, the children's faces look weirdly similar, and you can't figure out which child is yours. Do you need new glasses? Are you losing your mind? You run through a quick mental checklist. No, you seem to be thinking clearly, and your vision is perfectly sharp. And everything looks normal except the children's faces. You can see the faces, but they don't look distinctive, and none of them looks familiar, and it's only by spotting an orange hair ribbon that you find your daughter.

1:22 - This sudden loss of the ability to recognize faces actually happens to people. It's called prosopagnosia, and it results from damage to a particular part of the brain. The striking thing about it is that only face recognition is impaired; everything else is just fine.

1:39 - Prosopagnosia is one of many surprisingly specific mental deficits that can happen after brain damage. These syndromes collectively have suggested for a long time that the mind is divvied up into distinct components, but the effort to discover those components has jumped to warp speed with the invention of brain imaging technology, especially MRI. So MRI enables you to see internal anatomy at high resolution, so I'm going to show you in a second a set of MRI cross-sectional images through a familiar object, and we're going to fly through them and you're going to try to figure out what the object is. Here we go.

2:23 - It's not that easy. It's an artichoke.

2:25 - Okay, let's try another one, starting from the bottom and going through the top. Broccoli! It's a head of broccoli. Isn't it beautiful? I love that.

2:34 - Okay, here's another one. It's a brain, of course. In fact, it's my brain. We're going through slices through my head like that. That's my nose over on the right, and now we're going over here, right there.

2:45 - So this picture's nice, if I do say so myself, but it shows only anatomy. The really cool advance with functional imaging happened when scientists figured out how to make pictures that show not just anatomy but activity, that is, where neurons are firing. So here's how this works. Brains are like muscles. When they get active, they need increased blood flow to supply that activity, and lucky for us, blood flow control to the brain is local, so if a bunch of neurons, say, right there get active and start firing, then blood flow increases just right there. So functional MRI picks up on that blood flow increase, producing a higher MRI response where neural activity goes up.

3:28 - So to give you a concrete feel for how a functional MRI experiment goes and what you can learn from it and what you can't, let me describe one of the first studies I ever did. We wanted to know if there was a special part of the brain for recognizing faces, and there was already reason to think there might be such a thing based on this phenomenon of prosopagnosia that I described a moment ago, but nobody had ever seen that part of the brain in a normal person, so we set out to look for it. So I was the first subject. I went into the scanner, I lay on my back, I held my head as still as I could while staring at pictures of faces like these and objects like these and faces and objects for hours. So as somebody who has pretty close to the world record of total number of hours spent inside an MRI scanner, I can tell you that one of the skills that's really important for MRI research is bladder control. (Laughter)

4:28 - When I got out of the scanner, I did a quick analysis of the data, looking for any parts of my brain that produced a higher response when I was looking at faces than when I was looking at objects, and here's what I saw. Now this image looks just awful by today's standards, but at the time I thought it was beautiful. What it shows is that region right there, that little blob, it's about the size of an olive and it's on the bottom surface of my brain about an inch straight in from right there. And what that part of my brain is doing is producing a higher MRI response, that is, higher neural activity, when I was looking at faces than when I was looking at objects. So that's pretty cool, but how do we know this isn't a fluke? Well, the easiest way is to just do the experiment again. So I got back in the scanner, I looked at more faces and I looked at more objects and I got a similar blob, and then I did it again and I did it again and again and again, and around about then I decided to believe it was for real. But still, maybe this is something weird about my brain and no one else has one of these things in there, so to find out, we scanned a bunch of other people and found that pretty much everyone has that little face-processing region in a similar neighborhood of the brain.
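The analysis described here amounts to a per-voxel contrast between conditions. A toy version with simulated data might look like the following (everything below is invented for illustration; real fMRI analyses fit a general linear model and correct for multiple comparisons):

```python
import random
import statistics

random.seed(42)
N_VOXELS = 1000
FACE_VOXEL = 123  # pretend this voxel sits in the face-selective region

def bold_response(voxel, condition):
    """Simulated MRI signal: one voxel responds more strongly to faces."""
    baseline = 100.0
    boost = 5.0 if (voxel == FACE_VOXEL and condition == "face") else 0.0
    return baseline + boost + random.gauss(0, 1)  # additive noise

def contrast_map(n_trials=50):
    """Per-voxel mean(face) - mean(object): the simplest possible contrast."""
    diffs = []
    for v in range(N_VOXELS):
        faces = [bold_response(v, "face") for _ in range(n_trials)]
        objects = [bold_response(v, "object") for _ in range(n_trials)]
        diffs.append(statistics.mean(faces) - statistics.mean(objects))
    return diffs

diffs = contrast_map()
# The face-selective voxel stands out far above the noise floor.
print(max(range(N_VOXELS), key=lambda v: diffs[v]))
```

Averaging over repeated trials is what makes the "blob" reliable: the 5-unit face boost dwarfs the trial-to-trial noise once fifty trials per condition are averaged.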

5:49 - So the next question was, what does this thing really do? Is it really specialized just for face recognition? Well, maybe not, right? Maybe it responds not only to faces but to any body part. Maybe it responds to anything human or anything alive or anything round. The only way to be really sure that that region is specialized for face recognition is to rule out all of those hypotheses. So we spent much of the next couple of years scanning subjects while they looked at lots of different kinds of images, and we showed that that part of the brain responds strongly when you look at any images that are faces of any kind, and it responds much less strongly to any image you show that isn't a face, like some of these.

6:34 - So have we finally nailed the case that this region is necessary for face recognition? No, we haven't. Brain imaging can never tell you if a region is necessary for anything. All you can do with brain imaging is watch regions turn on and off as people think different thoughts. To tell if a part of the brain is necessary for a mental function, you need to mess with it and see what happens, and normally we don't get to do that. But an amazing opportunity came about very recently when a couple of colleagues of mine tested this man who has epilepsy and who is shown here in his hospital bed where he's just had electrodes placed on the surface of his brain to identify the source of his seizures. So it turned out by total chance that two of the electrodes happened to be right on top of his face area. So with the patient's consent, the doctors asked him what happened when they electrically stimulated that part of his brain. Now, the patient doesn't know where those electrodes are, and he's never heard of the face area. So let's watch what happens. It's going to start with a control condition that will say "Sham" nearly invisibly in red in the lower left, when no current is delivered, and you'll hear the neurologist speaking to the patient first. So let's watch.

7:52 - (Video) Neurologist: Okay, just look at my face and tell me what happens when I do this. All right?

7:59 - Patient: Okay.

8:01 - Neurologist: One, two, three.

8:06 - Patient: Nothing. Neurologist: Nothing? Okay. I'm going to do it one more time. Look at my face. One, two, three.

8:19 - Patient: You just turned into somebody else. Your face metamorphosed. Your nose got saggy, it went to the left. You almost looked like somebody I'd seen before, but somebody different. That was a trip. (Laughter)

8:38 - Nancy Kanwisher: So this experiment — (Applause) — this experiment finally nails the case that this region of the brain is not only selectively responsive to faces but causally involved in face perception. So I went through all of these details about the face region to show you what it takes to really establish that a part of the brain is selectively involved in a specific mental process. Next, I'll go through much more quickly some of the other specialized regions of the brain that we and others have found. So to do this, I've spent a lot of time in the scanner over the last month so I can show you these things in my brain.

9:17 - So let's get started. Here's my right hemisphere. So we're oriented like that. You're looking at my head this way. Imagine taking the skull off and looking at the surface of the brain like that. Okay, now as you can see, the surface of the brain is all folded up. So that's not good. Stuff could be hidden in there. We want to see the whole thing, so let's inflate it so we can see the whole thing. Next, let's find that face area I've been talking about that responds to images like these. To see that, let's turn the brain around and look on the inside surface on the bottom, and there it is, that's my face area. Just to the right of that is another region that is shown in purple that responds when you process color information, and near those regions are other regions that are involved in perceiving places, like right now, I'm seeing this layout of space around me and these regions in green right there are really active. There's another one out on the outside surface again where there's a couple more face regions as well. Also in this vicinity is a region that's selectively involved in processing visual motion, like these moving dots here, and that's in yellow at the bottom of the brain, and near that is a region that responds when you look at images of bodies and body parts like these, and that region is shown in lime green at the bottom of the brain.

10:31 - Now all these regions I've shown you so far are involved in specific aspects of visual perception. Do we also have specialized brain regions for other senses, like hearing? Yes, we do. So if we turn the brain around a little bit, here's a region in dark blue that we reported just a couple of months ago, and this region responds strongly when you hear sounds with pitch, like these. (Sirens) (Cello music) (Doorbell) In contrast, that same region does not respond strongly when you hear perfectly familiar sounds that don't have a clear pitch, like these. (Chomping) (Drum roll) (Toilet flushing)

11:17 - Okay. Next to the pitch region is another set of regions that are selectively responsive when you hear the sounds of speech.

11:25 - Okay, now let's look at these same regions. In my left hemisphere, there's a similar arrangement — not identical, but similar — and most of the same regions are in here, albeit sometimes different in size.

11:35 - Now, everything I've shown you so far are regions that are involved in different aspects of perception, vision and hearing. Do we also have specialized brain regions for really fancy, complicated mental processes? Yes, we do. So here in pink are my language regions. So it's been known for a very long time that that general vicinity of the brain is involved in processing language, but we showed very recently that these pink regions respond extremely selectively. They respond when you understand the meaning of a sentence, but not when you do other complex mental things, like mental arithmetic or holding information in memory or appreciating the complex structure in a piece of music.

12:20 - The most amazing region that's been found yet is this one right here in turquoise. This region responds when you think about what another person is thinking. So that may seem crazy, but actually, we humans do this all the time. You're doing this when you realize that your partner is going to be worried if you don't call home to say you're running late. I'm doing this with that region of my brain right now when I realize that you guys are probably now wondering about all that gray, uncharted territory in the brain, and what's up with that?

12:57 - Well, I'm wondering about that too, and we're running a bunch of experiments in my lab right now to try to find a number of other possible specializations in the brain for other very specific mental functions. But importantly, I don't think we have specializations in the brain for every important mental function, even mental functions that may be critical for survival. In fact, a few years ago, there was a scientist in my lab who became quite convinced that he'd found a brain region for detecting food, and it responded really strongly in the scanner when people looked at images like this. And further, he found a similar response in more or less the same location in 10 out of 12 subjects. So he was pretty stoked, and he was running around the lab telling everyone that he was going to go on "Oprah" with his big discovery. But then he devised the critical test: He showed subjects images of food like this and compared them to images with very similar color and shape, but that weren't food, like these. And his region responded the same to both sets of images. So it wasn't a food area, it was just a region that liked colors and shapes. So much for "Oprah."

14:11 - But then the question, of course, is, how do we process all this other stuff that we don't have specialized brain regions for? Well, I think the answer is that in addition to these highly specialized components that I've been describing, we also have a lot of very general-purpose machinery in our heads that enables us to tackle whatever problem comes along. In fact, we've shown recently that these regions here in white respond whenever you do any difficult mental task at all — well, of the seven that we've tested. So each of the brain regions that I've described to you today is present in approximately the same location in every normal subject. I could take any of you, pop you in the scanner, and find each of those regions in your brain, and it would look a lot like my brain, although the regions would be slightly different in their exact location and in their size.

15:04 - What's important to me about this work is not the particular locations of these brain regions, but the simple fact that we have selective, specific components of mind and brain in the first place. I mean, it could have been otherwise. The brain could have been a single, general-purpose processor, more like a kitchen knife than a Swiss Army knife. Instead, what brain imaging has delivered is this rich and interesting picture of the human mind. So we have this picture of very general-purpose machinery in our heads in addition to this surprising array of very specialized components.

15:42 - It's early days in this enterprise. We've painted only the first brushstrokes in our neural portrait of the human mind. The most fundamental questions remain unanswered. So for example, what does each of these regions do exactly? Why do we need three face areas and three place areas, and what's the division of labor between them? Second, how are all these things connected in the brain? With diffusion imaging, you can trace bundles of neurons that connect to different parts of the brain, and with this method shown here, you can trace the connections of individual neurons in the brain, potentially someday giving us a wiring diagram of the entire human brain. Third, how does all of this very systematic structure get built, both over development in childhood and over the evolution of our species? To address questions like that, scientists are now scanning other species of animals, and they're also scanning human infants.

16:47 - Many people justify the high cost of neuroscience research by pointing out that it may help us someday to treat brain disorders like Alzheimer's and autism. That's a hugely important goal, and I'd be thrilled if any of my work contributed to it, but fixing things that are broken in the world is not the only thing that's worth doing. The effort to understand the human mind and brain is worthwhile even if it never led to the treatment of a single disease. What could be more thrilling than to understand the fundamental mechanisms that underlie human experience, to understand, in essence, who we are? This is, I think, the greatest scientific quest of all time.

17:33 - (Applause)

Brain researcher
Using fMRI to watch the human brain at work, Nancy Kanwisher’s team has discovered cortical regions responsible for some surprisingly specific elements of cognition.


by System Administrator - Friday, 1 August 2014, 23:10


Written by Cadell Last

In a new book, The Beginning and the End: The Meaning of Life in Cosmological Perspective, by philosopher Clement Vidal (@clemvidal), the two main trends of the universe – the trends of rising disorder and complexity – are explored. As a result, his investigation takes us to the most extreme conditions possible in our universe.

Continue reading on the site


A Step Closer to Human Genome Editing [1665]

by System Administrator - Monday, 15 February 2016, 16:37

UK Will Use CRISPR on Human Embryos — a Step Closer to Human Genome Editing


"It is human nature and inevitable in my view that we will edit our genomes for enhancements.”
J. Craig Venter

This week, Kathy Niakan, a biologist working at the Francis Crick Institute in London received the green light from the UK’s Human Fertilisation and Embryology Authority to use genome editing technique CRISPR/Cas9 on human embryos.

Niakan hopes to answer important questions about how healthy human embryos develop from a single cell to around 250 cells in the first seven days after fertilization.
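As a rough, hypothetical back-of-the-envelope check (not a calculation from the article): if every cell divided in step, going from one cell to about 250 would take roughly eight doubling rounds, since 2^8 = 256.

```python
import math

# Hypothetical order-of-magnitude check, assuming perfectly synchronous
# divisions (real embryonic cleavage is not synchronous): how many
# doubling rounds take 1 cell to ~250 cells?
cells_start, cells_end = 1, 250
rounds = math.log2(cells_end / cells_start)
print(f"~{rounds:.1f} doubling rounds")  # ~8.0 doubling rounds
```

Eight doublings over seven days works out to slightly more than one division round per day on average, under this simplified assumption.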

By removing certain genes during this early development phase using CRISPR/Cas9, Niakan and her team hope to understand what causes miscarriages and infertility, and in the future, possibly improve the effectiveness of in-vitro fertilization and provide better treatments for infertility.

The embryos used in the research will come from IVF patients who have surplus embryos and give consent for them to be used in research. The embryos will not be allowed to survive beyond 14 days and may not be implanted in a womb to develop further. The team still needs to have its plans reviewed by an ethics board, but if approved, the research could start in the next few months.

In an op-ed for Time magazine, J. Craig Venter writes that the experiments proposed at the Crick Institute are similar to previous gene knockouts in mice and other species. While some results may be of interest, Venter believes, most will be inconclusive, as the field has seen in the past.

He continues, “The only reason the announcement is headline-provoking is that it seems to be one more step toward editing our genomes to change life outcomes.”

Venter’s stance on the matter of genome editing echoes that of many other scientists in the field: Proceed with caution. 

In December 2015, The National Academies of Sciences, Engineering and Medicine held an International Summit on Human Genome Editing, and after several days of discussion, released a statement of conclusions.

In a nutshell, the group recommended that basic and preclinical research should continue with the appropriate legal and ethical oversight. If human embryos or germline cells are modified during research, they should not be used to establish a pregnancy.

In cases of clinical use, the group underscored a difference between editing somatic cells (cells whose genomes are not passed on to the next generation) versus germline cells (whose genomes are passed on to the next generation).

Somatic cell editing would include editing genes that cause diseases such as sickle-cell anemia. Because these therapies would only affect the individual, the group recommends these cases should be evaluated based on “existing and evolving” gene-therapy regulations.

It’s worth noting that governments across the world have significantly diverse ways of handling gene-therapy regulations. 

In the US, the National Institutes of Health (NIH) won’t fund genomic editing research involving human embryos. Research like Kathy Niakan’s is not illegal, as long as it is privately funded. In China, the government doesn’t ban any particular type of research, while countries like Italy and Germany are on the other side of the spectrum, where all human embryo research is banned.

The International Summit on Genome Editing concluded that today it would be “irresponsible to proceed with any clinical use of germline editing” until we have more knowledge of the possible risks and outcomes of doing so. 

In spite of that, the group also concluded that as “scientific knowledge advances and societal views evolve, the clinical use of germline editing should be revisited on a regular basis.” Similarly, Venter writes of the need for the scientific community to gain a better understanding of the “software of life before we begin re-writing this code.”

While the “proceed with caution” message from scientists is loud and clear, the age of programmable biology seems to be getting closer and closer.

Between Venter’s statement that it is inevitable that we will edit our genomes for enhancements and the suggestion that human germline editing should be ‘revisited’ as opposed to banned, it seems even the scientific community is assuming a future which includes human genome editing.

So, where do we go from here?

This brave new future seems equal parts exciting, frightening — and inevitable. At this stage, more research is critical — so when the time comes to rewrite the software of life, we do so with wisdom.


A step towards gene therapy against intractable epilepsy [1612]

by System Administrator - Monday, 7 December 2015, 11:42

3D model of DNA double helix. Credit: Peter Artymiuk / Wellcome Images

A step towards gene therapy against intractable epilepsy

by Nikitidou Ledri L et al.

By delivering genes for a certain signal substance and its receptor into the brains of test animals with chronic epilepsy, a research group at Lund University in Sweden, with colleagues at the University of Copenhagen, Denmark, has succeeded in considerably reducing the number of epileptic seizures in the animals. The experiment has been designed to mimic, as far as possible, a future treatment situation for human patients.

Many patients with epilepsy do not experience any improvement from existing drugs. Surgery can be an alternative for severe epilepsy, provided it is possible to localize and remove the epileptic focus in the brain where the seizures arise.

"There is a period between the detection of this focus and the operation when the gene therapy alternative could be tested. If it works well, the patient can avoid surgery. If it doesn't, surgery will go ahead as initially planned and the affected part will then be removed. With this approach, the experimental treatment will be safer for the patient," says Professor Merab Kokaia.

He and his group are working on a rat model that mimics temporal lobe epilepsy, the most common type of epilepsy. The test animals were given injections of the epilepsy-inducing substance kainate in the temporal lobe of one of the cerebral hemispheres. Most of the animals developed seizures of varying degrees, whereas some had no seizures at all, which Merab Kokaia considers a good result because it resembles the situation in humans: brain damage resulting from various accidents has very different consequences for different patients, with some developing epilepsy and others not.

The rats that developed epilepsy were then given gene therapy in the part of the brain where the kainate had been injected and the seizures arose. Genes were delivered for both the signal substance (neuropeptide Y) and one of its receptors. The idea was that the combination would produce a greater effect than delivering the gene for the signal substance alone: neuropeptide Y can bind to several different receptors, and in the worst case it binds to a receptor that promotes an increase in the number of seizures rather than a decrease.

The study results have so far been positive. The increase in seizure frequency seen in control animals treated with inactive genes was halted in animals treated with the combination of active genes, and in 80% of those animals the number of seizures was reduced by almost half.

"The test must be repeated in more animal studies, so that possible side effects, on memory for example, can be studied. But we regard this study as promising proof of concept, a demonstration that the method works," states Merab Kokaia.

He expects that the first gene therapy treatments will be carried out on patients who have already been selected for surgical procedures. In the long term, however, gene therapy would be of the greatest benefit to patients who cannot be operated on. Some patients with severe epilepsy have an epileptic focus so badly placed that an operation is out of the question, since it could impair, for example, speech or movement. These patients can never undergo a surgical procedure, but could be helped by gene therapy in the future.

Note: Material may have been edited for length and content. For further information, please contact the cited source.





A Vaulted Mystery [698]

by System Administrator - Tuesday, 5 August 2014, 23:12



A Vaulted Mystery

Nearly 30 years after the discovery of tiny barrel-shape structures called vaults, their natural functions remain elusive. Nevertheless, researchers are beginning to put these nanoparticles to work in biomedicine.

By Eufemia S. Putortì and Massimo P. Crippa

In the mid-1980s, biochemist Leonard Rome of the University of California, Los Angeles, (UCLA) School of Medicine and his postdoc Nancy Kedersha were developing new ways to separate coated vesicles of different size and charge purified from rat liver cell lysates when they stumbled upon something else entirely. They trained a transmission electron microscope on the lysate to check whether the vesicles were being divvied up correctly, and the resulting image revealed three dark structures: a large protein-coated vesicle, a small protein-coated vesicle, and an even smaller and seemingly less dense object. (See photograph below.) The researchers had no idea what the smallest one was.

“There were many different proteins and membrane-bound vesicles in the various fractions we analyzed,” Kedersha recalls, but this small vesicle was different. And it was “not a contaminant,” she says, as additional micrographs of partially purified vesicles revealed similar strange objects, always found in association with the coated vesicles. The ovoid particles displayed a distinct shape, which reminded the researchers of a raspberry, a hand grenade, or a barrel, and all were smaller than any known organelle. Rome gave his postdoc the green light to investigate further.


A MINI MYSTERY: An electron micrograph taken by researchers 30 years ago reveals one of the first looks at the nanoscale structures now known as vaults. Abbreviations: large coated vesicle (LCV), vault (V), small coated vesicle (SCV)


Kedersha designed a way to purify the mystery particles, based on a procedure previously described in the literature for isolating coated vesicles, then stained and imaged what she’d collected using electron microscopy. The tiny structures had a complex but consistent barrel-shape morphology and measured 35 by 65 nanometers—much smaller than lysosomes, which range in diameter from 100 to more than 1,000 nanometers (1 micrometer), or mitochondria, which are 0.5 to 10 micrometers long. Kedersha also treated the particles with various proteases, as well as enzymes to digest RNA and DNA, to assess their constituent molecules, finding evidence of three major proteins and an RNA component. With a total mass of approximately 13 megadaltons, they appeared to be the largest eukaryotic ribonucleoprotein particles ever discovered. By comparison, ribosomes measure just 20 to 25 nanometers in diameter and weigh in at just over 3 megadaltons.

Kedersha dubbed the structures “vaults,” after the arched shape of the very first particle she and Rome observed, reminiscent of the vaulted ceilings of cathedrals.1 To screen for these new nanostructures in other species, Kedersha developed an antibody against one of the vault proteins she’d discovered, and used it to purify vaults from species across the animal kingdom: the minibarrels were abundant in the cells of rabbits, mice, chickens, cows, bullfrogs, sea urchins, and several human cell lines—varying from 10,000 to 100,000 per cell. Remarkably, they all appeared to be similar in size, shape, and morphology to those Kedersha and Rome isolated from rat livers. Clearly, this was an important cellular structure, and there were no reports of anything like it in the literature.

The broad distribution and strong conservation of vaults in eukaryotic species suggest that their function is essential to cells, but that function remains unclear to this day. In fact, in the three decades that have passed since their discovery, vaults have gone largely unnoticed by the scientific community. But a handful of dedicated groups are making strides in understanding what vaults are and what they do, with clues emerging that hint at their roles in cargo transport, cellular motility, and drug resistance, among other possible functions.

Cracking the vault

Scientists have taken several approaches to deciphering the structure of the nanosize vaults, including cryo-electron and freeze-etch microscopy and three-dimensional image reconstruction. Such work has revealed a symmetrical central barrel with a cinched middle and a cap protruding from the barrel’s top and bottom. (See illustration.) Cross sections reveal a very thin shell surrounding a large, hollow interior. Interestingly, a vault’s interior is spacious enough to enclose molecules as large as ribosomal subunits, but researchers have not confirmed whether vaults ever house cellular cargo.
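The claim that the interior is spacious enough to enclose a ribosomal subunit can be sanity-checked against the dimensions quoted earlier. The following is a hypothetical geometric sketch, not from the article: it crudely models the vault as a solid cylinder (35 nm diameter, 65 nm long) and the ribosome as a 25 nm sphere, so the real cavity, reduced by the shell and caps, is somewhat smaller than this estimate.

```python
import math

# Crude volume estimate: vault as a cylinder, ribosome as a sphere.
# Dimensions are the ones given in the text; the thin shell and the
# end caps are ignored, so this overestimates the usable cavity.
vault_radius_nm = 35 / 2
vault_length_nm = 65
vault_volume = math.pi * vault_radius_nm**2 * vault_length_nm   # ~62,500 nm^3

ribosome_radius_nm = 25 / 2
ribosome_volume = (4 / 3) * math.pi * ribosome_radius_nm**3     # ~8,200 nm^3

print(f"vault/ribosome volume ratio: ~{vault_volume / ribosome_volume:.0f}x")
```

Even with the overestimate, a ratio of several-fold makes it geometrically plausible that a ribosome-size particle could fit inside.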

As Kedersha’s early analyses suggested, vaults are composed of multiple copies of at least four distinct components: three proteins and one RNA molecule. The major vault protein (MVP) accounts for some 75 percent of the particles’ mass, with each vault containing 78 copies of the protein. In fact, the expression of MVP in an insect cell line—insects themselves are one of the few eukaryotic organisms that don’t have vaults—results in the spontaneous formation of particles with morphologic characteristics similar to those of endogenous vaults.2 Another protein typically found in vaults is vault poly(ADP-ribose) polymerase (VPARP). VPARP and MVP mRNA transcripts are expressed in similar patterns in the cell, and subcellular fractionation studies point to a strong binding between the two proteins.
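These figures are internally consistent, as a quick hypothetical arithmetic check (not a calculation from the article) shows: a ~13-megadalton particle that is ~75 percent MVP by mass, distributed over 78 copies, implies an MVP monomer of roughly 125 kilodaltons, in the same ballpark as the Mr 110,000 protein later identified as human MVP.

```python
# Hypothetical consistency check using the figures quoted in the text:
# total vault mass ~13 MDa, ~75% of it MVP, 78 MVP copies per vault.
vault_mass_kda = 13_000      # 13 megadaltons expressed in kilodaltons
mvp_mass_fraction = 0.75
mvp_copies = 78

mvp_monomer_kda = vault_mass_kda * mvp_mass_fraction / mvp_copies
print(f"~{mvp_monomer_kda:.0f} kDa per MVP copy")  # ~125 kDa per MVP copy
```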


The third vault protein is TEP1, previously identified as the mammalian telomerase-associated protein 1, which binds RNA in the telomerase complex. TEP1-knockout mice exhibited no alterations in telomerase function, suggesting its role in the nucleus is redundant, but vaults purified from these animals revealed a complete absence of the fourth component of vaults: vault RNA (vRNA), a small untranslated RNA found at the tips of the particles. This work pointed to TEP1’s role in the recruitment and stabilization of vRNA.

The freeze-etching technique—which consists of physically breaking apart a frozen biological sample and then examining it with transmission electron microscopy—has revealed that vaults are not rigid, impermeable structures, but dynamic entities that are able to open and close, with a structure resembling a petaled flower.3 (See photograph below.) The “flowers” are usually seen in pairs, suggesting that an intact vault comprises two folded flowers with eight rectangular petals, each of which is connected to a central ring by a thin, short hook. (See illustration.)

The ability of vaults to open and close points to a possible function in cargo transport. At present, however, a definitive answer about the function of vaults remains elusive. In fact, in addition to cellular transport, more than a dozen roles for vaults have been proposed, including playing a part in multidrug resistance, cellular signaling, neuronal dysfunctions, and apoptosis and autophagy.

In search of function

Vaults are found in the cytoplasm, so far appearing to be completely excluded from the nucleus (except in sea urchins4). Within the cytoplasm, however, they are not randomly dispersed: they colocalize and interact with cytoskeletal elements, such as actin stress fibers and microtubules, and are also abundant in highly motile cells such as macrophages, suggesting the structures may help cells move around.


THE STRUCTURE OF VAULTS: Vaults are hollow, barrel-shape structures, measuring 35 x 65 nanometers. They are symmetrical, with a crease along the outside of the barrel’s middle and smaller caps on either end.


Vaults’ interactions with cytoskeletal elements also lend support to the idea that these particles act as cytoplasmic cargo transporters. Researchers hypothesize that vaults open, encapsulate molecules, then close and travel across the cytoplasm along microtubules or actin fibers before releasing their contents into the desired subcellular compartment.

In addition to the now well-characterized flower pattern of vault opening, Rome and colleagues have proposed two alternative hypotheses for how vaults might open: by separating at the waist, splitting into two completely dissociated halves, or by the raising of opposing petals on the two vault halves, hinging from the caps to open at the waist.5 (See illustration.) The latter may avoid destroying the integrity of the whole particle, potentially allowing vaults to repeatedly transport and release cargos. More recently, researchers have found evidence that the vaults “breathe” in solution, taking up and releasing proteins without ever fully opening.

Vaults also seem to be closely associated with nuclear pore complexes (NPC), protein conglomerations that span the inner and outer membranes of the nuclear envelope. This raises the possibility that vaults shuttle contents between the cytoplasm and nucleus. Interestingly, some structural characteristics of vaults, such as mass, diameter, and shape, are very similar to those of the NPC, although research has not yet conclusively established whether vaults actually form some sort of plug to stop up the NPC.

Researchers have also proposed a role for vaults in cancer cells’ ability to resist the pharmaceuticals doctors throw at them. In 1993, immunologist and experimental pathologist Rik Scheper of VU University in Amsterdam and colleagues found that a non-small-cell lung cancer cell line could be selected for resistance to the chemotherapy drug doxorubicin.6 The resulting cells overexpressed a large protein initially named lung resistance-related protein (LRP). Two years later, the group discovered that LRP was nothing other than human MVP,7 and the literature soon blossomed with papers on the possible role of vaults in chemotherapeutic drug resistance.

Experiments have yielded several observations that exclude a direct participation of MVP in such resistance, however. Knockdown of MVP does not affect cell survival, for instance, and upregulation of MVP does not increase resistance to anticancer drugs.8 Thus, while many clinical studies recognize MVP as a negative prognostic factor for response to chemotherapy, it remains to be seen whether vaults play a direct role in drug resistance or whether they are merely markers of a drug-resistance phenotype.

Putting vaults to work

While many questions about vaults remain, including whether they serve as cargo transporters for the cell, their large, hollow interiors have led some scientists to see the nanobarrels as potential tools for the delivery of biomaterials. A variety of strategies for encapsulating biomaterials already exists, including viruses, liposomes, peptides, hydrogels, and synthetic and natural polymers, but the use of these materials is often limited by insufficient payload, immunogenicity, lack of targeting specificity, and the inability to control packaging and release. Vaults, on the other hand, possess all the features of an ideal delivery vehicle. These naturally occurring cellular nanostructures have a cavity large enough to sequester hundreds of proteins; they are homogeneous, regular, highly stable, and easy to engineer; and, most of all, they are nonimmunogenic and totally biocompatible.


BLOOMING VAULTS: Splitting at the midsection, vaults appear to break open into two flower-shape structures (yellow arrows). Partially open vaults (orange arrows) are seen along the top of this electron micrograph image.


But the actual packaging of foreign materials into vaults remains challenging. In 2005, Rome and long-time UCLA collaborator Valerie Kickhoefer discovered a particular region at the VPARP’s C-terminus, named major vault protein interaction domain (mINT), which is responsible for binding VPARP to MVP. The researchers hypothesized that mINT acts as a kind of zip code directing VPARP to the inside of the vault and speculated that any protein tagged with the mINT sequence at the C-terminus could be packaged into vaults just like VPARP. Fusing the sequence to luciferase, the enzyme that makes fireflies glow, and expressing the construct in an insect cell line, they successfully generated vaults with the engineered protein packaged inside the central barrel in the same two rings typically formed of VPARP.9

Rome and his colleagues have since demonstrated that the technique can successfully incorporate any number of proteins into the tiny cellular particles, and even discovered that they can make changes to vault proteins to alter such packaging. For example, the addition of extra amino acids at the N-terminus of MVP produces vaults with the engineered protein packaged exclusively at the waist. Conversely, the addition of extra amino acids at the MVP C-terminus produces two blobs of densely packed protein at the ends of vaults. Vaults can also be engineered to bind antibodies or express cancer cell ligands on their surface, allowing for the precise delivery of biomaterials to target cells. Researchers believe that, once inside the body, the engineered vaults act as slow-release particles for whatever protein is packaged inside.


In collaboration with Rome, pathologist Kathleen Kelly’s group at UCLA is working to create a vault-based nasal spray that acts as a vaccine against Chlamydia infection.10 They engineered vaults to encase the major outer membrane protein (MOMP) of Chlamydia, which possesses highly immunogenic properties, then created a nasal spray to deliver the modified vaults to the nasal mucosa. After the immunization, they challenged female mice with a Chlamydia infection and found that the treatment significantly limited bacterial infection in mucosal tissue.

Vaults may also help fight cancer. The lymphoid chemokine CCL21 binds to the chemokine receptor CCR7 and serves as a chemoattractant for tumor-fighting cells of the immune system. Pulmonologist Steven Dubinett and immunologist Sherven Sharma of UCLA and their colleagues injected CCL21 into mice with a lung carcinoma, but because CCL21 is small, it rapidly dissipated out of the tumor and was relatively ineffective at drawing immune cells to the tumor. In collaboration with Rome’s group, the researchers tagged the chemokine with mINT to package it into vaults prior to injection, causing an increase in the number of leukocytic cells that infiltrated the tumor and, most importantly, leading to a significant decrease in tumor growth.11 Rome and colleagues have since started a company to advance this vault-based therapy through human trials. (See “Opening the Medical Vault” below.)

Three decades after their chance discovery, vaults remain mysterious. But researchers are not waiting for all the questions to be answered. Vault-based therapies show promise in treating a variety of diseases, and the success of such applications could give these nanosize barrels a big dose of recognition.

Eufemia S. Putortì recently completed her bachelor’s degree in medical and pharmaceutical biotechnology at the Vita-Salute San Raffaele University in Milan, Italy, with a thesis on the history and function of vaults. While an undergraduate, she interned in the lab of Massimo P. Crippa, a senior researcher at the institute, and is now working on her master’s degree in molecular and cellular medical biotechnology.


Opening the Medical Vault


TUMOR-FIGHTING VAULTS: By tagging CCL21 cytokines with the mINT sequence derived from the C-terminus of the VPARP protein, researchers have engineered vaults carrying the immune-activating proteins. These CCL21-loaded vaults have shown promise in a mouse model of lung cancer, and clinical trials testing the therapeutic in patients are anticipated by the end of next year.


Fifteen years ago, my postdoc Andy Stephen brought me a result that blew my mind. Because he needed to make large amounts of the major vault protein (MVP) in order to further study its properties, he had expressed the protein in insect cells, which, unlike most animal cells, lack vaults. To our great surprise, MVP was not only expressed at high levels, but it assembled within the insect cell cytoplasm into empty vault-like particles that appeared structurally identical to the naturally occurring vaults we had purified from other eukaryotes.

This discovery changed the direction of my laboratory. My colleague Valerie Kickhoefer and I began to engineer vault particles as nanoscale capsules for a wide range of applications. We identified a section of the vault poly(ADP-ribose) polymerase (VPARP) protein that binds with high affinity to the inside of vaults. Fusion of this sequence, called mINT, to any protein or peptide of interest facilitated its packaging into vaults and, thanks to a tight but reversible binding interaction, its slow release. Moreover, by fusing peptides to the C-terminus of MVP, we are able to engineer vaults with specific markers displayed on their surface, allowing the development of strategies for targeting vaults to cells or tissues.12

To develop such vaults for medical needs, I partnered with entrepreneur Michael Laznicka to form Vault Nano Inc. in the summer of 2013. The first vault-based therapeutic that we are moving forward is a human recombinant vault packaged with the CCL21 chemokine, which is normally produced in lymph nodes, where it attracts and activates T cells and dendritic cells. Injecting the recombinant vaults into a lung tumor model in mice, we observed that the attracted T cells and dendritic cells react with tumor antigens to halt tumor growth.11 Now, in collaboration with Steven Dubinett and Jay Lee here at UCLA, Vault Nano is moving the CCL21-vault into clinical studies, hoping to initiate a Phase 1 trial by the end of next year. If successful, the CCL21-vault therapeutic would be an off-the-shelf reagent that can harness the power of a patient’s own immune system to attack cancer.

With our UCLA collaborators Kathleen Kelly and Otto Yang, we are also pursuing the development of vault vaccines against Chlamydia and HIV. Current studies in animal models have demonstrated that when a pathogen-derived protein or peptide is packaged in vaults, the resulting nanocapsules can stimulate a robust immune response. With the help of Vault Nano, these studies will soon advance to the clinic.  —Leonard H. Rome

Leonard H. Rome is a professor of biological chemistry at UCLA’s David Geffen School of Medicine and chief scientific officer of Vault Nano Inc.


  1. N.L. Kedersha, L.H. Rome, “Isolation and characterization of a novel ribonucleoprotein particle: Large structures contain a single species of small RNA,” J Cell Biol, 103:699-709, 1986.
  2. A.G. Stephen et al., “Assembly of vault-like particles in insect cells expressing only the major vault protein,” J Biol Chem, 276:23217-20, 2001.
  3. N.L. Kedersha et al., “Vaults. III. Vault ribonucleoprotein particles open into flower-like structures with octagonal symmetry,” J Cell Biol, 112:225-35, 1991.
  4. D.R. Hamill, K.A. Suprenant, “Characterization of the sea urchin major vault protein: a possible role for vault ribonucleoprotein particles in nucleocytoplasmic transport,” Dev Biol, 190:117-28, 1997.
  5. M.J. Poderycki et al., “The vault exterior shell is a dynamic structure that allows incorporation of vault-associated proteins into its interior,” Biochemistry, 45:12184-93, 2006.
  6. R.J. Scheper et al., “Overexpression of a M(r) 110,000 vesicular protein in non-P-glycoprotein-mediated multidrug resistance,” Cancer Res, 53:1475-79, 1993.
  7. G.L. Scheffer et al., “The drug resistance-related protein LRP is the human major vault protein,” Nat Med, 1:578-82, 1995.
  8. K.E. Huffman, D.R. Corey, “Major vault protein does not play a role in chemoresistance or drug localization in a non-small cell lung cancer cell line,” Biochemistry, 44:2253-61, 2005.
  9. V.A. Kickhoefer et al., “Engineering of vault nanocapsules with enzymatic and fluorescent properties,” PNAS, 102:4348-52, 2005.
  10. C.I. Champion et al., “A vault nanoparticle vaccine induces protective mucosal immunity,” PLOS ONE, 4:e5409, 2009.
  11. U.K. Kar et al., “Novel CCL21-vault nanocapsule intratumoral delivery inhibits lung cancer growth,” PLOS ONE, 6:e18758, 2011.
  12. L.H. Rome, V.A. Kickhoefer, “Development of the vault particle as a platform technology,” ACS Nano, 7:889-902, 2013.



Logo KW


de System Administrator - martes, 10 de marzo de 2015, 23:36


by Mariano García-Izquierdo, Mariano Meseguer, Mª Isabel Soler and Mª Concepción Sáez

Universidad de Murcia | ENAE Business School

Over the past two decades, psychological harassment at work, or mobbing, has been one of the major research topics in work and organizational psychology. Since its creation in 2001, the Servicio de Ergonomía y Psicosociología Aplicada (Serpa) of the Universidad de Murcia has been working on this problem, which has great prominence and repercussions in the workplace and in society. This paper reviews the most interesting findings obtained by this research group and compares them with other results in the scientific literature. It begins with questions about the delimitation of the concept, methods of quantification, prevalence, and the sources of harassment behaviors. It then reviews antecedents and consequences, and discusses a personal resource, self-efficacy, that can moderate the effects of harassment behaviors on health. Finally, it comments on the main forms of intervention and offers some considerations for adequate prevention.

Please read the attached PDF file.

Logo KW

ADHD Isn’t a Disorder of Attention [1621]

de System Administrator - domingo, 27 de diciembre de 2015, 14:36

ADHD Isn’t a Disorder of Attention

By Margarita Tartakovsky, M.S. 

Many people think of ADHD as a disorder of attention or lack thereof. This is the traditional view of ADHD. But ADHD is much more complex. It involves issues with executive functioning, a set of cognitive skills, which has far-reaching effects.

In his comprehensive and excellent book Mindful Parenting for ADHD: A Guide to Cultivating Calm, Reducing Stress & Helping Children Thrive, developmental behavioral pediatrician Mark Bertin, MD, likens ADHD to an iceberg.

Above the water, people see poor focus, impulsivity and other noticeable symptoms. However, below the surface are a slew of issues caused by impaired executive function (which Bertin calls “an inefficient, off-task brain manager”).

Understanding the role of executive function in ADHD is critical for parents, so they can find the right tools to address their child’s ADHD. Plus, what may look like deliberate misbehaving may be an issue with ADHD, a symptom that requires a different solution.

And if you’re an adult with ADHD, learning about the underlying issues can help you better understand yourself and find strategies that actually work — versus trying harder, which doesn’t.

It helps to think of executive function as involving six skills. In Mindful Parenting for ADHD, Dr. Bertin models this idea after an outline from ADHD expert Thomas E. Brown. The categories are:

Attention Management

ADHD isn’t an inability to pay attention. ADHD makes it harder to manage your attention. According to Bertin, “It leads to trouble focusing when demands rise, being overly focused when intensely engaged, and difficulty shifting attention.”

For instance, in noisy settings, kids with ADHD can lose the details of a conversation, feel overwhelmed and shut down (or act out). It’s also common for kids with ADHD to be so engrossed in an activity that they won’t register anything you say to them during that time.

Action Management

This is the “ability to monitor your own physical activity and behavior,” Bertin writes. Delays in this type of executive function can lead to fidgeting, hyperactivity and impulsiveness.

It also can take longer to learn from mistakes, which requires being aware of the details and consequences of your actions. And it can cause motor delays, poor coordination and problems with handwriting.

Task Management

This includes organizing, planning, prioritizing and managing time. As kids get older, it’s task management (and not attention) that tends to become the most problematic.

Also, “Unlike some ADHD-related difficulties, task management doesn’t respond robustly to medication,” Bertin writes. This means that it’s important to teach your kids strategies for getting organized.

Information Management

People with ADHD can have poor working memory. “Working memory is the capacity to manage the voluminous information we encounter in the world and integrate it with what we know,” Bertin writes. We need to be able to temporarily hold information for everything from conversations to reading to writing.

This explains why your child may not follow through when you give them a series of requests. They simply lose the details. What can help is to write a list for your child, or give them a shorter list of verbal instructions.

Emotion Management

Kids with ADHD tend to be more emotionally reactive. They get upset and frustrated faster than others. But that’s because they may not have the ability to control their emotions and instead react right away.

Effort Management

Individuals with ADHD have difficulty sustaining effort. It isn’t that they don’t value work or aren’t motivated, but they may run out of steam. Some kids with ADHD also may not work as quickly or efficiently.

Trying to push them can backfire. “For many kids with ADHD, external pressure may decrease productivity … Stress decreases cognitive efficiency, making it harder to solve problems and make choices,” Bertin writes. This can include tasks such as leaving the house and taking tests.

Other Issues

Bertin features a list of other signs in Mindful Parenting for ADHD because many ADHD symptoms involve several parts of executive function. For instance, kids with ADHD tend to struggle with maintaining routines, and parents might need to help them manage these routines longer than other kids.

Kids with ADHD also have inconsistent performance. This leads to a common myth: If you just try harder, you’ll do better. However, as Bertin notes, “Their inconsistency is their ADHD. If they could succeed more often, they would.”

Managing time is another issue. For instance, individuals with ADHD may not initially see all the steps that are required for a project, thereby taking a whole lot more time. They may underestimate how long a task will take (“I’ll watch the movie tonight and write my paper before the bus tomorrow”). They may not track their time accurately or prioritize effectively (playing until it’s too late to do homework).

In addition, people with ADHD often have a hard time finishing what they start. Kids may rarely put things away, leaving cabinets open and leaving their toys and clothes all over the house.

ADHD is complex and disruptions in executive functioning affect all areas of a person’s life. But this doesn’t mean that you or your child is doomed. Rather, by learning more about how ADHD really works, you can find specific strategies to address each challenge.

And thankfully there are many tools to pick from. You can start by typing in “strategies for ADHD” in the search bar on Psych Central and checking out Bertin’s valuable book.

Child hearing noise photo available from Shutterstock

About Margarita Tartakovsky, M.S.

Margarita Tartakovsky, M.S., is an Associate Editor at Psych Central. She also explores self-image issues on her own blog Weightless and creativity on her blog Make a Mess: Everyday Creativity.

Logo KW


de System Administrator - sábado, 18 de octubre de 2014, 20:23


The Tribes, the Ni-Nis, and Generations X, Y and Z

by Hugo Lerner

Please read the attached whitepaper.


Logo KW

AI interns: Software already taking jobs from humans [1195]

de System Administrator - sábado, 4 de abril de 2015, 22:55


AI interns: Software already taking jobs from humans

People have talked about robots taking our jobs for ages. Problem is, they already have – we just didn't notice

FORGET Skynet. Hypothetical world-ending artificial intelligence makes headlines, but the hype ignores what's happening right under our noses. Cheap, fast AI is already taking our jobs; we just haven't noticed.

This isn't dumb automation that can rapidly repeat identical tasks. It's software that can learn about and adapt to its environment, allowing it to do work that used to be the exclusive domain of humans, from customer services to answering legal queries.

These systems don't threaten to enslave humanity, but they do pose a challenge: if software that does the work of humans exists, what work will we do?

In the last three years, UK telecoms firm O2 has replaced 150 workers with a single piece of software. A large portion of O2's customer service is now automatic, says Wayne Butterfield, who works on improving O2's operations. "Sim swaps, porting mobile numbers, migrating from prepaid onto a contract, unlocking a phone from O2" – all are now automated, he says.

Humans used to manually move data between the relevant systems to complete these tasks, copying a phone number from one database to another, for instance. The user still has to call up and speak to a human, but now an AI does the actual work.

To train, the AI watches and learns while humans do simple, repetitive database tasks. With enough training data, the AIs can then go to work on their own. "They navigate a virtual environment," says Jason Kingdon, chairman of Blue Prism, the start-up which developed O2's artificial workers. "They mimic a human. They do exactly what a human does. If you watch one of these things working it looks a bit mad. You see it typing. Screens pop-up, you see it cutting and pasting."

One of the world's largest banks, Barclays, has also dipped a toe into this specialised AI. It used Blue Prism to deal with the torrent of demands that poured in from its customers after UK regulators demanded that it pay back billions of pounds of mis-sold insurance. It would have been expensive to rely entirely on human labour to field the sudden flood of requests. Having software agents that could take some of the simpler claims meant Barclays could employ fewer people.

The back office work that Blue Prism automates is undeniably dull, but it's not the limit for AI's foray into office space. In January, Canadian start-up ROSS started using IBM's Watson supercomputer to automate a whole chunk of the legal research normally carried out by entry-level paralegals.

Legal research tools already exist, but they don't offer much more than keyword searches. This returns a list of documents that may or may not be relevant. Combing through these for the argument a lawyer needs to make a case can take days.

ROSS returns precise answers to specific legal questions, along with a citation, just like a human researcher would. It also includes its level of confidence in its answer. For now, it is focused on questions about Canadian law, but CEO Andrew Arruda says he plans for ROSS to digest the law around the world.

Since its artificial intelligence is focused narrowly on the law, ROSS's answers can be a little dry. Asked whether it's OK for 20 per cent of the directors present at a directors' meeting to be Canadian, it responds that no, that's not enough. Under Canadian law, no directors' meeting may go ahead with less than 25 per cent of the directors present being Canadian. ROSS's source? The Canada Business Corporations Act, which it scanned and understood in an instant to find the answer.
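The arithmetic behind ROSS's answer is easy to check by hand. A minimal, hypothetical sketch of the 25 per cent residency threshold follows; the function and its name are illustrative only, not ROSS's actual interface:

```python
def canadian_directors_meeting_ok(present: int, canadian: int) -> bool:
    """Illustrative check of the rule ROSS cited: a directors' meeting may
    proceed only if at least 25% of the directors present are Canadian.
    (Hypothetical helper, not part of any real legal-research API.)"""
    if present <= 0:
        raise ValueError("at least one director must be present")
    return canadian / present >= 0.25

# The question put to ROSS: 20% Canadian directors present.
print(canadian_directors_meeting_ok(present=10, canadian=2))  # False: 2/10 = 20%
```

The value of ROSS, of course, is not the division but locating and interpreting the governing statute in the first place.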

By eliminating legal drudge work, Arruda says that ROSS's automation will open up the market for lawyers, reducing the time they need to spend on each case. People who need a lawyer but cannot afford one would suddenly find legal help within their means.

ROSS's searches are faster and broader than any human's. Arruda says this means it doesn't just get answers that a human would have had difficulty finding, it can search in places no human would have thought to look. "Lawyers can start crafting very insightful arguments that wouldn't have been achievable before," he says. Eventually, ROSS may become so good at answering specific kinds of legal question that it could handle simple cases on its own.

Where Blue Prism learns and adapts to the various software interfaces designed for humans working within large corporations, ROSS learns and adapts to the legal language that human lawyers use in courts and firms. It repurposes the natural language-processing abilities of IBM's Watson supercomputer to do this, scanning and analysing 10,000 pages of text every second before pulling out its best answers, ranked by confidence.

Lawyers are giving it feedback too, says Jimoh Ovbiagele, ROSS's chief technology officer. "ROSS is learning through experience."

Massachusetts-based Nuance Communications is building AIs that solve some of the same language problems as ROSS, but in a different part of the economy: medicine. In the US, after doctors and nurses type up case notes, another person uses those notes to try to match the description with one of thousands of billing codes for insurance purposes.

Nuance's language-focused AIs can now understand the typed notes, and figure out which billing code is a match. The system is already in use in a handful of US hospitals.

Kingdon doesn't shy away from the implications of his work: "This is aimed at being a replacement for a human, an automated person who knows how to do a task in much the same way that a colleague would."

But what will the world be like as we increasingly find ourselves working alongside AIs? David Autor, an economist at the Massachusetts Institute of Technology, says automation has tended to reduce drudgery in the past, and allowed people to do more interesting work.

"Old assembly line jobs were things like screwing caps on bottles," Autor says. "A lot of that stuff has been eliminated and that's good. Our working lives are safer and more interesting than they used to be."

Deeper inequality?

The potential problem with new kinds of automation like Blue Prism and ROSS is that they are starting to perform the kinds of jobs that can be the first rung on the corporate ladder, which could result in deepening inequality.

Autor remains optimistic about humanity's role in the future it is creating, but cautions that there's nothing to stop us engineering our own obsolescence, or that of a large swathe of workers that further splits rich from poor. "We've not seen widespread technological unemployment, but this time could be different," he says. "There's nothing that says it can't happen."

Kingdon says the changes are just beginning. "How far and fast? My prediction would be that in the next few years everyone will be familiar with this. It will be in every single office."

Once it reaches that scale, narrow, specialised AIs may start to offer something more, as their computation roots allow them to call upon more knowledge than human intelligence could.

"Right now ROSS has a year of experience," says Ovbiagele. "If 10,000 lawyers use ROSS for a year, that's 10,000 years of experience."

This article appeared in print under the headline "You are being replaced"

Which jobs will go next?

Artificial intelligence is already on the brink of handling a number of human jobs (see main story). The next jobs to become human-free might be:

Taxi drivers: Uber, Google and established car companies are all pouring money into machine vision and control research. It will be held back by legal and ethical issues, but once it starts, human drivers are likely to become obsolete.

Transcribers: Every day hospitals all over the world fire off audio files to professional transcribers who understand the medical jargon doctors use. They transcribe the tape and send it back to the hospital as text. Other industries rely on transcription too, and slowly but surely, machine transcription is starting to catch up. A lot of this is driven by data on the human voice gathered in call centres.

Financial analysts: Kensho, based in Cambridge, Massachusetts, is using AI to instantly answer financial questions which can take human analysts hours or even days to answer. By digging into financial databases, the start-up can answer questions like: "Which stocks perform best in the days after a bank fails?" Journalists at NBC can already use Kensho to answer questions about breaking news, replacing a human researcher.


Logo KW


de System Administrator - miércoles, 24 de septiembre de 2014, 15:22


Written By: Steven Kotler

Big Brother is feeling you—literally.

A few months back, I wrote about Ellie, the world’s first AI-psychologist. Developed by DARPA and researchers at USC’s Institute for Creative Technologies, Ellie is a diagnostic tool capable of reading 60 non-verbal cues a second—everything from eye-gaze to face tilt to voice tone—in the hopes of identifying the early warning signs of depression and (part of the long-term goal) stemming the rising tide of soldier suicide.

And early reports indicate both that Ellie is good at her job and that soldiers like talking to an AI-psychologist more than they like talking to a human psychologist (AIs don’t judge).

More importantly, Ellie is part of the bleeding edge of an accelerating trend—what we could call the automation of psychology.

The goal in this story is to examine some surprising aspects of the long term ramifications of this trend, but before we get there a few more examples of exactly what’s been going on are helpful.

Our first example comes from Dartmouth, where computer scientist Andrew Campbell just announced that he successfully used data gathered by students’ smartphones to predict their states of mind and subsequent behavior.

Campbell’s original question was why some students fail when others succeed. His suspicion was that a myriad of factors like quality of student sleep and number of interpersonal relationships played an important role in success, so he built an app—known as The StudentLife app (built for Android)—that monitored things like sleep duration, number of phone conversations per day, length of those conversations, physical activity, location (meaning are you out in public or hiding in the dorm), etc. This data—combined with some machine learning algorithms—was used to make inferences about student mental health.

48 students ran this app for 10 weeks. The results were surprisingly accurate. For example, students who had more conversations were less likely to be depressed, students who were physically active were less likely to be lonely, and, surprisingly, there was no correlation between class attendance and academic success.
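The correlations Campbell describes are ordinary statistical correlations between sensor-derived features and mental-health scores. A minimal sketch with a hand-rolled Pearson coefficient over synthetic numbers invented for illustration (not the study's data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example numbers: more daily conversations, lower depression score.
conversations = [2, 4, 5, 7, 9, 12]      # conversations per day
depression = [18, 15, 14, 10, 8, 5]      # hypothetical screening score

print(round(pearson(conversations, depression), 2))  # strongly negative
```

A negative coefficient near -1 is what "more conversations, less depression" looks like numerically; the study's actual analysis used machine learning over many such features.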

As Campbell told New Scientist: “We found for the first time that passive and automatic sensor data obtained from phones without any action from the user, significantly correlates to student depression level, stress and loneliness, and with academic performance over the term.”

The point here is not that USC’s Ellie or Campbell’s app is the end-all, be-all of psychological diagnosis—but that it’s really just a matter of time. In the same way that researchers are hard at work on a portable, AI-driven, handheld medical diagnostic device (see the Qualcomm Tricorder X Prize), they’re getting down to work on similar breakthroughs in psychology.

Yet, diagnosis is only part of the issue. If we’re really talking about the automation of psychology, there’s still treatment to consider. And that’s where our second set of examples comes in.

Right now, a next wave of cheap, portable, and far more precise neurofeedback devices is hitting the market. One example is the Muse, a device Tim Ferriss recently put through its paces. The goal of his experiment was stress reduction and, after two weeks of Muse training, Mr. Ferriss reported that he was a much calmer person.

Of course, the Muse is one product. But dozens more are hitting the market. And in every variety.

A few weeks ago, when I was at USC to meet Ellie, I also got to demo an immersive VR-based protocol for the treatment of PTSD, developed by Rizzo. It’s an impressive piece of tech. With soldiers returning from combat, Rizzo’s protocol has already proven itself more effective than traditional methods.


And since brain science itself is advancing exponentially, all of this work is only going to continue to accelerate. In other words, we’re none too far away from a combination platter of automated psychological diagnosis and automated treatment protocols—which means that mental health care is currently being digitized and about to become democratized.

So here’s my question, sort of a little thought experiment. Let’s say this works. Let’s say that by 2025, Google or Facebook or someone like that will have succeeded in their mission to bring free wireless to the world. Let’s say that smartphones follow the same growth curve they’re currently on and, again by 2025, have become so cheap that just about anyone who wants one can have one. And let’s say that we manage to automate psychology successfully.

What are the results?

The easiest place to start is with the idea that we might soon live in a much happier world. I don’t mean this in a let’s-hold-hands-and-sing-Kumbaya kind of way. I mean that when psychologists at the University of Illinois and Shigehiro Oishi (UVA) conducted a study of more than 10,000 participants in 48 countries, they found that happiness is not just a global concern, but rather the global concern. In their study, the quest for happiness is more important to folks than making lots of money, living a meaningful life, or, even, going to heaven.

Moreover, while some sizable chunk of happiness appears to be genetic, there’s also really good research showing that 40 percent of our happiness is entirely within our control.

What all this suggests is that once we have available (i.e. democratized) mental health tools, people will use these tools to strive for happiness. And, if early results are anything to go by, they might just find a little more happiness as well.

So, again, what does the world look like when we’re all in better moods?

Recent research shows that happier people tend to make more money and spend less money. So, does this mean that happiness is good for the banking industry (where that extra money might go if it’s not spent) and bad for economic growth (because that money is not being spent)? Truthfully, we don’t know.

When it comes to the economics of happiness, the research usually looks at the impact of money on happiness and not vice versa. Check out this Atlantic article. The story sums up a lot of recent work, but again, moves from wealth to happiness and not the other way round.

More interesting, perhaps, is the question of unintended consequences. Consider the recent spate of work showing that happy people have a bunch of habits that unhappy people don’t. What we don’t yet know is whether these habits lead to more happiness or are the results of being happy, but—it seems safe to assume—some of these habits will turn out to be more the effect (of happiness) than the cause.

Thus, in a happier world, we should see more of these effects. And the results will make for a very different world.

Let’s start with the fact that happy people are more curious and, by extension, more prone to risk-taking (in an attempt to satisfy that extra curiosity). So a happier society should be a more innovative society, as the result of all that curiosity and risk-taking.

And this is merely a single example. Researchers have also found that happy people tend to be less skeptical, less jealous, more grateful, more extroverted, better rested, more future-oriented, more willing to feel (but not dwell upon) negative emotions, less fond of small talk, more generous, and, of course, more purpose-driven.

By no means is this a complete list. But the point here is that while many of these changes may be causes of happiness, a number of them are bound to end up being its results. And, again, with serious effect.


University of Texas psychologist David Buss has called jealousy “the most destructive emotion housed in the human brain.” His research, analyzing over a century’s worth of data, found jealousy to be the leading cause of spousal murder worldwide. So if it turns out that less jealousy is a by-product of more happiness (and not just a cause), are we looking at a future with far fewer domestic homicides? Less spousal abuse as well?

What about a more generous world? Or a less skeptical world? Or a world that dislikes small talk (imagine what happens to reality TV then)?

The point here is that while all this might sound a little hypothetical (admittedly, it is), the automation of psychology is already happening. All the data suggests that the democratization of mental health care should lead in the direction of a happier world. But what the data also suggests is that a happier world will be a far different world, meaning the impact of a shift in global mood will have some absolutely enormous socio-economic ramifications.

Stay tuned.

[Image credits: eggs in carton courtesy of Shutterstock, Digitalarti/Flickr, Sergey Galyonkin/Flickr]

This entry was posted in AI, Longevity And Health and tagged brain science, darpa, happiness, muse, neurofeedback.



AI: Artificial Imagination? [1120]

by System Administrator - Tuesday, 24 February 2015, 16:36

AI: Artificial Imagination?

by Margaret Boden

Professor of cognitive science at the University of Sussex, author of Mind As Machine, awarded an OBE in 2001.

Most of us are fascinated by creativity. New ideas in science and art are often hugely exciting – and, paradoxically, sometimes seemingly “obvious” once they’ve arrived. But how can that be? Many people, perhaps most of us, think there’s no hope of an answer. Creativity is deeply mysterious, indeed almost magical. Any suggestion that there might be a scientific theory of creativity strikes such people as absurd. And as for computer models of creativity, those are felt to be utterly impossible.

But they aren’t. Scientific psychology has identified three different ways in which new, surprising, and valuable ideas – that is, creative ideas – can arise in people’s minds. These involve combinational, exploratory, and transformational creativity. The information processes involved can be understood in terms of concepts drawn from Artificial Intelligence (AI). They can even be modelled by computers using AI techniques.

The first type of creativity involves unfamiliar combinations of familiar ideas. This is widely recognised. Indeed, it’s usually the only type that’s mentioned, even by people professionally committed to the study of creativity. Examples include puns, poetic imagery, and scientific analogies (the heart as a pump, the atom as a solar system).

The second, exploratory creativity, arises within a culturally accepted style of thinking. This may involve cooking, chemistry, or choreography, and, of course, it may concern either art or science. The notion that creativity is confined to the arts or to the “creative industries” is mistaken.

In exploratory creativity, the rules defining the style are followed, explored, and sometimes also tested in order to generate new structures that fit within that style. An example might be another impressionist painting, or another molecule within a particular area of chemistry. So rules aren’t the antithesis of creativity, as is widely believed. On the contrary, stylistic constraints make exploratory creativity possible.

The third and final form is transformational creativity. This grows out of exploratory creativity, when one or more of the previously accepted rules is altered in some way. It often happens when testing of the previous style shows that it cannot generate certain results which the person concerned wanted to achieve. The alteration makes certain structures possible which were impossible before.

For instance, the “single viewpoint” convention of classical portraiture implies that a face shown in profile must have only one eye. But cubism dropped that convention. Features visible from any viewpoint could be represented simultaneously – hence works such as Picasso’s The Weeping Woman (1937), which depicts its subject with two eyes on the same side of her face.

As that example reminds us, transformational creativity often produces results that aren’t immediately valued, except perhaps by a handful of people. That’s understandable, because one or more of the previously accepted rules has been broken.

All three types of creativity have been modelled by computers (and all have contributed to computer art). That is not to say that the computers are “really” creative. But it does demonstrate that they at least appear to be creative. Their performance would be regarded as creative if it were done by a person.

You might think that, with respect to combinational creativity, this isn’t surprising. After all, nothing could be simpler than to provide a computer with words, images, musical notes etc and get it to combine examples at random. Certainly, many of the results would be novel and surprising.

Well, yes. But would they be valuable? Most random word combinations, for instance, would be senseless. A practiced poet might be able to use them in a way that showed their relevance – to each other and/or to some other ideas that we find interesting. But the computer itself could not. Unless the programmer had provided clear criteria for judging relevance, the random word combinations couldn’t be pruned to keep only the valuable ones. There are no such criteria, at present – and I’m not holding my breath!

Those few AI models of creativity that do rely on novel combinations generally combine random choice with specific criteria chosen for the task at hand. For example, a joke-generating programme called JAPE churns out riddles like these: 

Q: What do you call a depressed train?

A: A low-comotive


Q: What do you get if you combine a sheep with a kangaroo?

A: A woolly jumper.

JAPE is really doing exploratory creativity. It has structured templates for eight types of joke, and explores the possibilities with fairly acceptable results.
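A toy version of that template-plus-lexicon approach can be sketched in a few lines. The single question/answer template and the two lexicon entries below are invented stand-ins for illustration; JAPE's actual grammar and word-association lexicon are far richer.

```python
# A minimal sketch of template-driven riddle generation in the spirit of JAPE.
# The template and lexicon here are hypothetical simplifications.

PUN_LEXICON = {
    # set-up concept: punning answer
    "depressed train": "low-comotive",
    "sheep crossed with a kangaroo": "woolly jumper",
}

def make_riddle(concept):
    """Fill one fixed question/answer template from the lexicon."""
    return f"Q: What do you call a {concept}? A: A {PUN_LEXICON[concept]}."

riddles = [make_riddle(concept) for concept in PUN_LEXICON]
```

The creativity here is exploratory in exactly Boden's sense: the template defines the style, and the program only explores combinations the style permits.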

Exploratory creativity in general is easier to model in computers than combinational creativity is. But that’s not to say it’s easy: the style of thinking concerned has to be expressed with the supreme clarity required by a computer programme. In JAPE, the style is the joke template. In other cases, it’s a way of writing music (from a Bach fugue to a Scott Joplin rag), or of drawing Palladian villas or human figures. All these, and more, have been achieved already.

Many people assume that transformational creativity is the most difficult of all for computer modelling – perhaps even impossible. After all, a computer can do only what its programme tells it to do. So how can there be any transformation?

There can be, if the programme can alter its own rules. Such programmes exist, and are used not only in some computer art but also in designing engines. They are evolutionary programmes, employing “genetic algorithms” inspired by mutations in biology to make random changes in their rules.

Some evolutionary programmes can also prune the results, selecting those which are closest to what the task requires, and using them to breed the next generation. That’s true of engine-design systems, for instance. Often, however, the selection is done by a human, because the programme can’t define suitable selection criteria to do the job automatically. In short, transformation isn’t the problem. The key problem again is relevance, or value.
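The mutate-then-prune loop described above can be sketched as a minimal evolutionary programme. Here the "rules" are just a bit-string, and the target and fitness function are invented stand-ins for a real design objective with automatic selection:

```python
import random

random.seed(1)

# Hypothetical design objective: evolve a bit-string to match a target.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(candidate):
    # Closeness to what the task requires: number of matching bits.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    # Random changes to the rules, analogous to biological mutation.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

# Start from a random population of candidate rule sets.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    # Prune: keep the best half, then breed mutated copies back in.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
```

When no automatic `fitness` function can be written, a human takes its place in the selection step, which is exactly the situation the text describes for much computer art.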

So creativity is, after all, scientifically intelligible, as I’ve argued in my books The Creative Mind: Myths and Mechanisms (2004) and Creativity and Art: Three Roads to Surprise (2010). But it’s not scientifically predictable. Human minds are far too rich, far too subtle, and far too idiosyncratic for that.





The Brain Loves Money [1757]

by System Administrator - Sunday, 16 April 2017, 23:39


The Brain Loves Money


Two people, face to face. One is given 10 euros to split between the two of them however she likes, but there is a catch: if the offer made by the one holding the 10 euros does not convince her partner, both walk away with nothing.

An offer of 1 euro should be enough for the person on the other side of the table to see the deal as profitable: 1 euro is more than 0 euros. And yet reason tends to collide with self-interest. Most individuals who have taken part in this experiment, repeated countless times, reject any offer below 4 euros. Either they receive half, or they sabotage their partner, even at a cost to themselves.

This small piece of theatre, lucrative for its participants, is known as the ultimatum game, and it shows that individuals often behave irrationally in markets. Revenge is just one of the faces selfishness wears when money enters the equation.
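The game can be sketched directly. The 4-euro rejection threshold below is an assumed stand-in for the behaviour reported in these experiments, not a universal constant:

```python
# A minimal sketch of the ultimatum game described above.
POT = 10
REJECTION_THRESHOLD = 4  # assumed cutoff, per the experiments cited

def play_round(offer):
    """Return the (proposer, responder) payoffs for a given offer."""
    if offer < REJECTION_THRESHOLD:
        return (0, 0)  # spite: both players walk away with nothing
    return (POT - offer, offer)

# A "rational" 1-euro offer is vetoed; a fair split goes through.
stingy = play_round(1)   # both get nothing
fair = play_round(5)     # an even split
```

The sketch makes the irrationality concrete: rejecting 1 euro leaves the responder strictly worse off in money, yet it is the modal behaviour.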

Money provides something that looks a lot like happiness. With a good salary you can obtain useful things, unforgettable experiences and dreams made real. That is why, since the arrival of capitalism, the human obsession has been to have more and more, a circumstance that also serves to set people apart from their neighbours and let them feel superior.

From the Spanish colonists in America to Donald Trump, people have been marked by an insatiable thirst for money, an eternal gold fever. In many cases the rich are rich because they are unscrupulous; after all, nobody becomes a multimillionaire by the sweat of their brow, but through deception, shortcuts and privileged information. Hence the image of Scrooge McDuck, an eccentric miser who delights in diving into the money stored in his vault.

Since the Soviet model collapsed, only questionable socialist experiments remain in a handful of countries, and capitalism has become humanity's common currency. Now all that matters is producing, earning and spending. That way of life changes people, who become capable of doing anything for money.

Science has repeatedly shown the havoc money wreaks on our brains, which are wired to be selfish and to always demand a fair share of the economic pie. One of the most relevant discoveries about how humans respond when filthy lucre is at stake comes from researchers at the University of Bonn, who set out to study reactions to situations of inequality from a neurological point of view.

The German researchers gathered a group of human guinea pigs and had them complete simple tasks for pay. When the second person received the same wage for the same work, all went well; but whenever the researchers paid different salaries for the same effort there was trouble, above all when one individual received less than another.

Not a single subject who was paid less than their partner was satisfied with the split, which is only logical. More than selfishness, that is a sense of justice.

It is in the opposite case that a glimpse of humanity appears, a hint that money does not corrupt as much as we think. Although the vast majority (39 of 64) were indifferent to earning more than their peers, only 9 were satisfied with their extra pay. For their part, 16 of the subjects considered it unfair to earn more and would have preferred a wage similar to that of their fellows.

Selfishness is frowned upon and, moreover, it is not always the most economical attitude. However much market models start from the assumption that citizens want what is best for themselves, game theory has shown time and again that cooperation is the better strategy, even when we don't feel like it.

The work of the mathematician John Forbes Nash, portrayed in the film A Beautiful Mind (2001), showed that each person makes the decisions that suit them best without regard for the others, as happens in the prisoner's dilemma. Two prisoners face police interrogation, and the police want to incriminate them. If one confesses and the other denies the crime, the informer goes free and his accomplice spends 6 years in jail. If both confess, they each spend 3 years behind bars, while if both deny it, they each serve only 1 year.

The best joint outcome is for both to deny the crime and serve just 1 year each, but since each wants to save his own skin, both inform on their partner. That, however, leads them to 3 years in prison.
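Those payoffs can be laid out explicitly to see why confession dominates. This is an illustrative sketch of the sentences given in the text (in years, so lower is better), not part of the original article:

```python
# Prisoner's dilemma payoffs: (prisoner A's sentence, prisoner B's sentence).
PAYOFFS = {
    ("confess", "confess"): (3, 3),
    ("confess", "deny"):    (0, 6),
    ("deny",    "confess"): (6, 0),
    ("deny",    "deny"):    (1, 1),
}

def best_reply(opponent_choice):
    """A's selfish best response, ignoring the joint outcome."""
    return min(("confess", "deny"),
               key=lambda me: PAYOFFS[(me, opponent_choice)][0])

# Whatever the other prisoner does, confessing is individually better,
# so both confess and serve 3 years, even though mutual denial gives 1.
```

Because `best_reply` returns "confess" against either choice, mutual confession is the equilibrium, which is exactly the 3-year outcome the text describes.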

This game is just one practical example of the importance of selfishness, a factor that mathematical models must take into account. As the ultimatum game also shows, the common good is a chimera and all that matters is money.




On Turning 80 [834]

by System Administrator - Monday, 13 October 2014, 16:18

Oliver Sacks reflects on old age

On Turning 80

I do not think of old age as an ever grimmer time that we must somehow endure, but as a time of leisure and freedom, freed from the factitious urgencies of earlier days.

Source: El País, Madrid

Last night I dreamed about mercury: huge, shining globules of quicksilver rising and falling. Mercury is element number 80, and my dream was a reminder that my birthday, too, would very soon be an 80th.

Ever since I was a boy, when I learned the atomic numbers, the elements of the periodic table and birthdays have been intertwined for me. At 11, I could say, “I am sodium” (element 11), and when I was 79, I was gold. A few years ago, when I gave a friend a bottle of mercury for his 80th birthday (a special bottle that could neither leak nor break), he gave me a peculiar look, but later he sent me a charming letter in which he joked, “I take a little every morning, for my health.”

80 years old! I can hardly believe it.

I feel glad to be alive: “I'm glad I'm not dead!”

I often have the feeling that life is about to begin, only to realize, a moment later, that it is nearly over. My mother was the sixteenth of 18 children; I was the youngest of her four sons, and almost the youngest of the vast number of cousins on her side of the family. I was always the youngest in my class at school. I have retained this feeling of being the youngest, even though I am now practically the oldest person I know.

At 41 I thought I would die: I had a bad fall and broke a leg while mountaineering alone. I splinted the leg as best I could and began to lever myself clumsily down the mountain, using only my arms. In the long hours that followed, I was assailed by memories, both good and bad. Most arose from gratitude: gratitude for what others had given me, and gratitude, too, at having been able to give something back (Awakenings had been published the year before).

At 80, with a handful of medical and surgical problems, though none of them disabling, I feel glad to be alive: “I'm glad I'm not dead!” is a phrase that bursts out of me when the day is perfect. (I tell this as a contrast to a story a friend told me. Walking through Paris with Samuel Beckett on a perfect spring morning, he said to him, “Doesn't a day like this make you glad to be alive?” To which Beckett answered, “I wouldn't go as far as that.”)

I am grateful that I have experienced many things, some wonderful, some horrible, and that I have been able to write a dozen books, to receive innumerable letters from friends, colleagues and readers, and to enjoy what Nathaniel Hawthorne called “an intercourse with the world.”

I am sorry I have wasted (and still waste) so much time; I am sorry to be as agonizingly shy at 80 as I was at 20; I am sorry that I speak no languages but my mother tongue, and that I have not travelled or experienced other cultures more widely.

I feel I should be trying to complete my life, whatever “completing a life” means. Some of my patients, at 90 or 100, intone the nunc dimittis: “I have had a full life, and now I am ready to go.” For some of them, this means going to heaven, and it is always heaven and never hell, though both Samuel Johnson and Boswell shuddered at the thought of going to hell, and raged at Hume, who believed in no such thing. I have no faith in (nor desire for) any postmortem existence, beyond the one I shall have in the memories of my friends, and the hope that some of my books may go on “speaking” to people after my death.

One's reactions have become slower, and yet one finds oneself full of life

The poet W. H. Auden often said that he planned to live to 80 and then “bugger off” (he lived only to 67). Though it is 40 years since his death, I often dream of him, just as I dream of Luria, and of my parents and former patients. All long gone, but I loved them and they were important in my life.

At 80, the spectre of dementia or stroke looms. A third of one's contemporaries are dead, and many more are trapped in tragic, minimal existences, with severe physical or mental ailments. At 80 the marks of decay are more than apparent. Reactions have become slower, names escape one more often, and one's energies must be husbanded; but even so, one often feels brimming with life, and not at all “old.” Perhaps, with luck, I will make it, more or less intact, for a few more years, and be granted the freedom to love and to work, the two most important things in life, as Freud insisted.

When my time comes, I hope I can die in harness, as Francis Crick did. When he was told, at 85, that he had a fatal cancer, he paused briefly, looked up at the ceiling, said, “Whatever has a beginning must have an end,” and went back to the thinking that had occupied him before. When he died, at 88, he was still fully engaged in his most creative work.

My father, who lived to 94, often said that his 80s had been one of the decades he had enjoyed most in his life. He felt, as I am beginning to feel now, not a shrinking but an enlargement of life and of mental perspective. One has had a long experience of life, and not only one's own, but others' too.

We have seen triumphs and tragedies, rises and falls, revolutions and wars, great achievements and deep ambiguities too. We have seen grand theories arise, only to be toppled by stubborn facts. One is more conscious that everything is transient, and also, perhaps, more conscious of beauty.

At 80 one can take a long view, and have a vivid, lived sense of history that was not possible at an earlier age. I can imagine, feel in my bones, what a century is like, which I could not do when I was 40, or 60. I do not think of old age as an ever grimmer time that we must somehow endure, but as a time of leisure and freedom, freed from the factitious urgencies of earlier days, free to explore whatever we wish, and to bind together the thoughts and feelings of a lifetime. I am looking forward to being 80.


When my time comes, I hope I can die in harness, as Francis Crick did

Oliver Sacks is a neurologist and writer. His books include The Mind's Eye, Awakenings and The Man Who Mistook His Wife for a Hat. His latest book, Hallucinations, is to be published shortly by Anagrama.

© Oliver Sacks, 2013 | Spanish translation by Eva Cruz



All the Brain-Boosting Goodness of Exercise…in a Pill? [1685]

by System Administrator - Sunday, 21 February 2016, 22:17

All the Brain-Boosting Goodness of Exercise…in a Pill?


Let's face it: love it or hate it, exercise is good for our brains.

Feeling stressed? Hit the trails: running boosts the fight-or-flight brain chemical norepinephrine and enhances our body's ability to respond to stress. Got the winter blues? Working out just 30 minutes a day, a few times a week, immediately ups overall mood and may combat depression. Can't think straight? 20 minutes of lifting enhances long-term memory by about 10% in healthy young adults. Getting forgetful? Distance running increases brain volume, augments the birth of new neurons and slows age-related brain deterioration, even in old age.


The evidence is overwhelming: regular exercise may be as close to a brain-boosting elixir as we can get (plus it’s free!).

There’s just one problem: many of us don’t like to exercise. Some people — due to chronic illnesses such as cancer or other conditions — physically can’t. Yet these people often stand to gain the most from the mental benefits of working out.

The solution seems obvious. What if there’s a way to distill the positive effects of exercise, put it into a glorious pill and let everyone reap its mental benefits?

Muscle power

Yes, an “exercise pill” sounds like old news — but in truth, the quest is just getting started.

Late last year, a collaboration between the University of Sydney and the University of Copenhagen collected muscle tissue from four men who biked intensely for 10 minutes, and found over 1,000 different molecular changes.

“We’ve created an exercise blueprint that lays the foundation for future treatments, and the end goal is to mimic the effects of exercise,” said Dr. Nolan Hoffman, one of the authors of the study.

There's more in the works. Recently the National Institutes of Health organized a huge multi-center clinical study to try to map out, in unprecedented detail, how exercise changes our genes, protein turnover, metabolism and epigenetics — that is, how genes are expressed — in muscles and fatty tissue.


“Identification of the mechanisms that underlie the link between physical activity and improved health holds extraordinary promise for discovery of novel therapeutic targets and development of personalized exercise medicine,” wrote the participating researchers in a report in Cell Metabolism.

With the help of bioinformatics and big data, the search has been fruitful: a team from Harvard announced a drug that converts your stubborn white fat to the metabolically active brown fat, transforming fat-storing cells into thermogenic fuel-torching engines; a molecule dubbed “compound 14” tricks your body into thinking it's low on energy, thereby pushing the cells' energy factories into overdrive in an attempt to make more.

When we look at benefiting the brain, however, our cornucopia of potential exercise pills rapidly dwindles. None of the candidates discovered so far can make us sleep, feel and think better in the way that exercise can.

Scientists are just starting to bridge the gap. One crucial question stands in the way: exercise directly acts on the body’s skeletal muscle and cardiovascular system. So how does the brain know that the body just worked out?

The mythical messenger

What are the molecules that signal from the body to the brain after exercise?

It's a question that has already spurred one startup, caught the eye of leading venture capitalists, and is potentially worth tens of millions of dollars.

It’s also one mired in controversy since the get-go.

In 2012, Dr. Bruce Spiegelman at Harvard Medical School announced the discovery of irisin, the bombshell “exercise mimetic” that thrilled the world. Named after the Greek messenger goddess Iris, irisin is a protein released by skeletal muscle after moderate running.

Irisin travels through the blood into the brain, where it stimulates brain cells to produce a nurturing protein called BDNF. BDNF is like mother’s milk for the brain: it coaxes the hippocampus — a brain region crucially involved in memory — to give birth to new neurons and adds to the brain’s computational powers. It also rebuilds worn-down structures within a neuron, and makes the brain more resilient to stressors that would usually cause cell death.

BDNF makes the brain bloom, and irisin seems to be the harbinger.

Spiegelman’s work was almost immediately scrutinized by the scientific community. Anonymous comments flooded PubPeer, a popular forum where scientists discuss published papers, questioning the validity of the data and even the existence of the protein.

Three years later, Spiegelman and his team shot back at the naysayers with a new study that quantified the levels of irisin in the bloodstream of human volunteers — confirming that it’s real — and showed that it is, in fact, released after exercise.

Irisin’s rollercoaster to fame (or notoriety) isn’t just a nerd fight.

For one, it shows how tough it is to pinpoint messengers that bridge the blood and brain. For another, “exercise pill” has somewhat of the same negative connotations as “anti-aging pills.” Many scientists believe that exercise acts on the body and brain in too many ways, making the quest for a single pill nothing more than a fool's errand. Others think “magical fat loss pills” have no place in respectable research.

But the potential impact of a true brain-boosting exercise pill is too strong to ignore. Amidst all the controversy, several pioneering teams are chipping away at the question.

One of the hottest new candidates is kynurenic acid, a muscle metabolite that protects the brain from stress-induced changes associated with depression, at least in mice.

Another is AICAR. Initially developed to protect the heart from injuries during surgery, AICAR boosts the expression of genes involved in oxygen metabolism and running endurance in mice. When used in old mice, it made them smarter and more agile.

Other examples include experimental drugs with somewhat unimaginative names, such as GW1516. These act locally on muscles but also affect the mind: mice taking GW1516 for a week performed much better on tests for learning and memory. When scientists peeked into their brains, they found abundant new neurons scattered all across the hippocampus.

Pills for treadmills?

Most of the research so far was done in mice. But the results are likely to work in humans as well — if you’re willing to bear side effects such as gout, heart valve defects and reduced blood flow to the brain and heart.

The side effects of most of these drug candidates are still unclear, but the temptation of a better body and brain is strong. Since 2013, a few drug candidates have made the (underground) leap into humans — specifically, endurance athletes. Cases of professional cyclists doping with GW1516 were so frequent that it prompted the World Anti-Doping Agency to issue a warning against the drug.


For the rest of us, a safe and effective exercise pill is still a ways off. At the moment, scientists are still trying to figure out what kind of exercise — distance running, high-intensity interval training or weight training — benefits the brain the most (spoiler: running gets the gold).

"We are at the early stages of this exciting new field," said Dr. Ismail Laher, a scientist at the University of British Columbia who recently published a think piece on the future of exercise pills.

“These medications have life-changing potential for some people,” agreed bioethicist Dr. Arthur Caplan at the NYU School of Medicine. But he warns against unintended consequences.

“If you install seat belts and airbags in cars people drive faster,” he said. “I just think (exercise pills) should be introduced gradually so that populations who really can’t exercise might see the benefits while harm to the general population can be minimized.”

It’s a field that’s rapidly moving forward, and well worth keeping tabs on. And maybe someday we’ll have an exercise-mimicking pill that, in terms of health, transforms couch potatoes into elite athletes.

But until then, for a natural body and brain-boost, hit the treadmill.






Alopecia [946]

by System Administrator - Saturday, 18 October 2014, 20:35

Trichology: solutions to hair loss from the different specialities

Hair loss is one of the main aesthetic concerns of the population, above all among men. In this monograph we examine the different solutions proposed by the range of disciplines that make up aesthetic medicine: Doctors Sergio Vañó (Dermatology), Jesús A. F. Tresguerres (Endocrinology and Nutrition), Inma González (Aesthetic Medicine) and Mauricio Verbauvede (Plastic Surgery) present the latest in hair-restoration treatments.

Please read the attached whitepaper.


Am I going to die? [890]

by System Administrator - Thursday, 25 September 2014, 16:07

Matthew O'Reilly: “Am I dying?” The honest answer. 



Matthew O’Reilly is a veteran emergency medical technician on Long Island, New York. In this talk, O’Reilly describes what happens next when a gravely hurt patient asks him: “Am I going to die?” 

0:11 - I've been a critical care EMT for the past seven years in Suffolk County, New York. I've been a first responder in a number of incidents ranging from car accidents to Hurricane Sandy.

0:20 - If you are like most people, death might be one of your greatest fears. Some of us will see it coming. Some of us won't. There is a little-known documented medical term called impending doom. It's almost a symptom. As a medical provider, I'm trained to respond to this symptom like any other, so when a patient having a heart attack looks at me and says, "I'm going to die today," we are trained to reevaluate the patient's condition.

0:44 - Throughout my career, I have responded to a number of incidents where the patient had minutes left to live and there was nothing I could do for them. With this, I was faced with a dilemma: Do I tell the dying that they are about to face death, or do I lie to them to comfort them? Early in my career, I faced this dilemma by simply lying. I was afraid. I was afraid if I told them the truth, that they would die in terror, in fear, just grasping for those last moments of life.

1:17 - That all changed with one incident. Five years ago, I responded to a motorcycle accident. The rider had suffered critical, critical injuries. As I assessed him, I realized that there was nothing that could be done for him, and like so many other cases, he looked me in the eye and asked that question: "Am I going to die?" In that moment, I decided to do something different. I decided to tell him the truth. I decided to tell him that he was going to die and that there was nothing I could do for him. His reaction shocked me to this day. He simply laid back and had a look of acceptance on his face. He was not met with that terror or fear that I thought he would be. He simply laid there, and as I looked into his eyes, I saw inner peace and acceptance. From that moment forward, I decided it was not my place to comfort the dying with my lies. Having responded to many cases since then where patients were in their last moments and there was nothing I could do for them, in almost every case, they have all had the same reaction to the truth, of inner peace and acceptance. In fact, there are three patterns I have observed in all these cases.

2:36 - The first pattern always kind of shocked me. Regardless of religious belief or cultural background, there's a need for forgiveness. Whether they call it sin or they simply say they have a regret, their guilt is universal. I had once cared for an elderly gentleman who was having a massive heart attack. As I prepared myself and my equipment for his imminent cardiac arrest, I began to tell the patient of his imminent demise. He already knew by my tone of voice and body language. As I placed the defibrillator pads on his chest, prepping for what was going to happen, he looked me in the eye and said, "I wish I had spent more time with my children and grandchildren instead of being selfish with my time." Faced with imminent death, all he wanted was forgiveness.

3:27 - The second pattern I observe is the need for remembrance. Whether it was to be remembered in my thoughts or their loved ones', they needed to feel that they would be living on. There's a need for immortality within the hearts and thoughts of their loved ones, myself, my crew, or anyone around. Countless times, I have had a patient look me in the eyes and say, "Will you remember me?"

3:53 - The final pattern I observe always touched me the deepest, to the soul. The dying need to know that their life had meaning. They need to know that they did not waste their life on meaningless tasks.

4:08 - This came to me very, very early in my career. I had responded to a call. There was a female in her late 50s severely pinned within a vehicle. She had been t-boned at a high rate of speed, critical, critical condition. As the fire department worked to remove her from the car, I climbed in to begin to render care. As we talked, she said to me, "There was so much more I wanted to do with my life." She felt she had not left her mark on this Earth. As we talked further, it would turn out that she was a mother of two adopted children who were both on their way to medical school. Because of her, two children had a chance they never would have had otherwise and would go on to save lives in the medical field as medical doctors. It would end up taking 45 minutes to free her from the vehicle. However, she perished before she could be freed.

5:03 - I believed what you see in the movies: that in those last moments it's strictly terror, fear. I have come to realize that, regardless of the circumstance, death is generally met with peace and acceptance, and that it's the littlest things, the littlest moments, the littlest things you brought into the world that give you peace in those final moments.

5:25 - Thank you.

5:27 - (Applause)



Love and Intelligence [845]

by System Administrator - Sunday, 7 September 2014, 20:26

When what makes us fall in love is intelligence

By Daniel Viñoles Zunino

What is it that particularly attracts us to the other sex? Chemistry between people certainly plays a primary role in our relationships, but there are also certain personality traits that draw us especially toward particular people.

The factors at play in attraction are obviously varied: most people are attracted by physical appearance, personality, charisma, friendliness and so on. But some people find intelligence to be the most sexually attractive trait in the opposite sex (we might even say the decisive one). These are the sapiosexuals; the word obviously comes from 'sapiens', meaning wise.

Although this attraction is not always connected with sexuality, in most cases it is, since it is the intellectual synergy itself that sparks the relationship. This is often seen in workplaces, and there can also be another side to the sapiosexual: the desire to be connected with intellectuals, even when the outcome is not an intimate encounter.

Sapiosexuals are far more attracted by a person's intelligence than by their physical appearance or their social or economic status.

In many domains, sexuality among them, much of who we are is rooted in our childhood and adolescence. What we live through during that stage of life forms the basis of who we will become as adults. Much depends, in particular, on three factors: the relationship with our opposite-sex parent, our first experience of love, and our first intimate encounter.

For example, women who were doted on by their fathers as girls expect and demand the same from their partners; at the other extreme, it is not unusual for women who had abusive fathers to end up with abusive partners. Likewise, if a boy had a narcissistic mother, it is not strange that as an adult most of his partners are narcissists.

Something similar can happen with intelligence: children whose parents' overriding demand was that they excel in that quality may, as adults, weigh that trait in a partner far more heavily than any other.

In short, perhaps what we look for in a partner is what we always wanted for ourselves (or what we were raised to be), or perhaps it can also be a catalyst for our deepest self.

To conclude...

Although it is not the only factor, a person's intelligence is undoubtedly a very important part of their sexual attractiveness.

In Plato's "Symposium", written around 390 BC, the central character of the work, Socrates, was neither beautiful nor rich, yet he seduced and captivated with his intelligence. This shows that the nature of relationships has not changed over time and that sapiosexuals are no modern phenomenon: we can assume that the arousal intelligence provokes in the opposite sex goes back at least 2,500 years.



An Emerging Science of Clickbait [1176]

by System Administrator - Sunday, 29 March 2015, 18:17

An Emerging Science of Clickbait

Researchers are teasing apart the complex set of links between the virality of a Web story and the emotions it generates.

In the world of Internet marketing and clickbait, the secret of virality is analogous to the elixir of life or the alchemy that turns lead into gold. It exists as a kind of Holy Grail that many search for and few, if any, find.

The key question is this: what is the difference between stories that become viral and those that don’t?

One idea is that the answer lies in the emotions stories generate for the people that read them. But what quality of emotion causes somebody to comment on a story or share it on social media?

Today, we can get an insight into this question thanks to the work of Marco Guerini at Trento Rise in Italy and Jacopo Staiano at Sorbonne Université in Paris. They have studied the data from two websites that allow readers to rate news stories according to the emotion each generates. That opens a fascinating window into the relationship between virality and emotion for the first time.

Psychologists have long categorized emotion using a three-dimensional scale known as the Valence-Arousal-Dominance model. The idea is that each emotion has a valence, whether positive or negative, and a level of arousal, which is high for emotions such as anger and low for emotions like sadness.

Dominance is the level of control a person has over the emotion. At one end of this spectrum are overwhelming emotions like fear and at the other, emotions that people can choose to experience, such as feeling inspired.

Every emotion occupies a point in this Valence-Arousal-Dominance parameter space.

Guerini and Staiano’s idea is that it is not an emotion itself that determines virality but its position in this parameter space.

It turns out that two news-based websites have recently begun to collect data that throws light on exactly this problem. One is a social news site that allows each user to rate the emotional value of each story using a "mood meter." The other, an Italian newspaper site, offers a similar function.

Together, these sites have some 65,000 stories rated by emotional quality. That’s a significant database to explore the link between emotion and virality, which they measure by counting the number of comments each story generates as well as the number of votes it gets on social media sites such as Facebook and Google Plus.

Finally, they mine the data looking for patterns of emotion associated with the most viral content.

The results make for interesting reading. Guerini and Staiano argue there is a clear link between virality and particular configurations of valence, arousal and dominance. “These configurations indicate a clear connection with distinct phenomena underlying persuasive communication,” they say.

But there is a curious difference between the emotions that drive commenting behavior compared to voting behavior. Guerini and Staiano say that posts generate more comments when they are associated with emotions of high arousal, such as happiness and anger, and with emotions where people feel less in control, such as fear and sadness.

By contrast, posts generate more social votes when associated with emotions people feel more in control of, such as inspiration.

Curiously, the valence of an emotion does not influence virality at all. In other words, people are just as likely to comment or vote on a post regardless of whether it triggers a positive or negative emotion.
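The reported pattern can be sketched as a simple rule over a story's dominant emotion in Valence-Arousal-Dominance space. The coordinate values and thresholds below are illustrative placeholders, not measurements from the study:

```python
# Hypothetical VAD coordinates in [0, 1] for a few emotions.
# These numbers are invented for illustration only.
EMOTIONS = {
    # name: (valence, arousal, dominance)
    "happiness":   (0.9, 0.8, 0.6),
    "anger":       (0.1, 0.9, 0.4),
    "fear":        (0.1, 0.7, 0.2),
    "sadness":     (0.2, 0.3, 0.2),
    "inspiration": (0.8, 0.5, 0.9),
}

def predicted_driver(emotion):
    """Return which virality signal an emotion is predicted to drive.

    Encodes the reported pattern: high dominance drives social votes;
    high arousal or low dominance drives commenting. Valence is ignored,
    matching the finding that it does not influence virality.
    """
    _valence, arousal, dominance = EMOTIONS[emotion]
    if dominance >= 0.7:
        return "votes"
    if arousal >= 0.6 or dominance <= 0.3:
        return "comments"
    return "neither"
```

On these toy coordinates, inspiration maps to votes while anger, fear, sadness and happiness map to comments, mirroring the qualitative split described above.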

Of course, this is by no means a recipe for online success. But it should provide some food for thought for Internet marketers, bloggers and journalists alike.

Anybody who has spent some time trawling the Internet will have come across headlines designed to manipulate emotion in a pretty crude way. But that may only be the beginning.

Guerini and Staiano’s work provides some much more detailed insights into the fundamental emotional drivers of virality and, as such, could be thought of as laying the foundations for an emerging science of clickbait.


  • Deep Feelings: A Massive Cross-Lingual Study on the Relation between Emotions and Virality




Analysis of the WISC-IV in a Sample of Students with Borderline Intellectual Functioning [1144]

by System Administrator - Tuesday, 10 March 2015, 23:23

Analysis of the WISC-IV in a Sample of Students with Borderline Intellectual Functioning

by Diego Jesús Luque, Eduardo Elósegui and Dolores Casquero

Universidad de Málaga, Spain

In children with Borderline Intellectual Functioning (BIF), analyzing cognitive functions through intelligence scales is always complex, all the more so when those scales can help explain the children's learning difficulties. The Wechsler Scale, through the functions and subtests grouped under Working Memory (WM) and Processing Speed (PS), provides relevant information on cognitive structure and functioning with respect to a possible psychoneurological dysfunction underlying specific learning difficulties. This paper studies the WISC-IV profile of 39 students with BIF, in particular the WM and PS indices and their association with the learning difficulties these students present. We conclude that these indices are an important explanatory basis for the students' difficulties, as well as a reference point for their psychoeducational intervention.


Please read the attached PDF.


Animal Brains Networked Into Organic Computer ‘Brainet’ [1324]

by System Administrator - Tuesday, 20 October 2015, 21:23

Animal Brains Networked Into Organic Computer ‘Brainet’

By Shelly Fan

Imagine a future where computers no longer run on silicon chips. The replacement?


Thanks to two separate studies recently published in Scientific Reports, we may be edging towards that future. In a series of experiments, scientists connected live animal brains into a functional organic computer. The “Brainet”, as they call it, could perform basic computational tasks—and do it better than each animal alone.

“Scientifically and technically, this is brilliantly done,” says Dr. Natasha Kovacevic, a brain-machine interface expert at the Rotman Research Institute who was not involved in the study, to Singularity Hub. “It’s amazing, but also scary that we can use live animals mechanistically as computer chips.”


The team, led by Dr. Miguel A. L. Nicolelis from Duke University, has long been pushing the boundaries of brain-machine interfaces—to the point machines are no longer even in the equation. A few years back, they broke ground when they developed a system that allowed monkeys to move a virtual arm with their brain waves alone, and “feel” whatever the digital avatar touched.

The new cutting-edge eschews arms—robotic or virtual—altogether, and goes directly brain-to-brain.

In 2013, Nicolelis and colleagues transferred information between two rat brains with the aid of a brain chip. They trained an “encoder” rat to press one of two potential levers upon seeing an LED light, while they recorded its cortical activity.

Next, the team used the recordings to stimulate the corresponding brain regions of a second rat that wasn't trained on the task. Impressively, this "decoder" rat picked the correct lever over 60% of the time—a result that, while imperfect, suggested it might be feasible to couple animals' brains together into a network.

Given that wiring multiple processors in parallel speeds up digital computers, the team wondered if forming a Brainet might likewise give biological computers a speed boost.

In the first study, the team implanted arrays of microelectrodes that both record signals and stimulate neurons into the brains of four rats. They then physically hooked the rat brains up using a brain-to-brain interface.

After giving all rats a short zap that acted as a “go” signal, the team monitored their brain waves and rewarded the animals with water if the brain waves oscillated in unison. The purpose? To see whether by synching brains up, subjects might be able to achieve a goal that no single brain can do individually, Nicolelis told Motherboard.

Through many trials over the next 1.5 weeks, the rats learned to synchronize their brain waves at will.


In one experiment, a kind of bizarre game of “telephone,” the scientists found they could transmit information sequentially through the Brainet. First, they hooked up three rats, then stimulated the first rat’s brain, recorded the resulting activity, and delivered it to the second rat. Not only did the second ­rat produce a similar brain activity pattern that was further passed down the chain, but the third rat reliably decoded and delivered the pattern back to the first animal, which reported the correct “message” in roughly 35% of the trials—around two times better than how each rat performed when having to do the same four-step task alone.

Essentially, Nicolelis turned rat brains into a meaty artificial network that could classify, store and transfer data. However, no “thinking” in the traditional sense occurred; the animals’ sensory cortices simply functioned like silicon processors.

In a separate study, scientists built upon previous work in the field of neuroprosthetics to see if a Brainet could control a digital arm better than its individual components.

The team implanted a large electrode array into three rhesus monkeys to record their brain activity, and then taught the animals to move a virtual arm in 3D space by picturing the motion in their heads. The monkeys were then given shared control over the arm, with each member in charge of only two out of the three dimensions.

Despite not being physically wired together, the monkeys’ brain activity synchronized, allowing them to match each other’s movement speed and collectively grab a digital ball with ease. The system was also resilient to slackers—even if one member dropped the ball (pun intended) and tuned out momentarily, the other two still managed to perform the task (just far less efficiently).

(Watch a video of the monkey Brainet in action below. The colored dots represent the three monkeys, the black dot is their average, and the circle is the virtual ball.)

Both studies show that when it comes to combining brainpower, 1+1=2.1. The same holds true for humans. When gamers combined their brainwaves through EEG to control a spacecraft simulator in a computer game, they did it better than each person alone.
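The intuition that a pooled Brainet beats its individual members is the same one behind ensemble voting: several independent, modestly accurate decoders outvote a single one. A minimal simulation sketch (illustrative only, not the papers' actual decoding model), using the ~60% single-decoder accuracy reported for the rat experiment:

```python
import random

random.seed(0)  # deterministic run for reproducibility

def noisy_decoder(truth, accuracy=0.6):
    """One 'brain': reports the correct binary choice (0 or 1)
    with probability `accuracy`, cf. the ~60% decoder rat."""
    return truth if random.random() < accuracy else 1 - truth

def brainet_vote(truth, n_brains=3):
    """Majority vote over several independent noisy decoders."""
    votes = [noisy_decoder(truth) for _ in range(n_brains)]
    return int(sum(votes) > n_brains / 2)

def accuracy_of(decider, trials=100_000):
    hits = 0
    for _ in range(trials):
        truth = random.randint(0, 1)
        hits += decider(truth) == truth
    return hits / trials

solo = accuracy_of(noisy_decoder)   # ~0.60
net = accuracy_of(brainet_vote)     # ~0.65 analytically (0.6^3 + 3*0.6^2*0.4)
```

With three independent 60% decoders, majority voting lifts expected accuracy to about 64.8%, a small but real gain: 1+1 equaling roughly 2.1, as the article puts it.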

With increasingly sophisticated devices that stimulate and record the brain non-invasively, it’s not hard to picture the possibility of wiring up human brains to solve thorny problems that baffle individual minds.

In fact, Kovacevic recently crowdsourced EEG data from over 500 volunteers as they collectively played a neurofeedback game at My Virtual Brain, a spectacle that combines science with art and music.

Brainets are certainly intellectually intriguing, Kovacevic acknowledges. Yet privacy and other ethical issues aside, there’s something disturbing about this image, she says. “My main concern is that we, as humans, are losing something of ourselves when we use sentient beings as simple computational tools.”

Just because we can do something, does it mean we should?

“With all due respect, in this case I vote no.”

Image Credit: Katie Zhuang/Nicolelis lab/Duke University


Animations [711]

by System Administrator - Tuesday, 20 October 2015, 21:25

by Janet Iwasa

How animations can help scientists test a hypothesis

3D animation can bring scientific hypotheses to life. Molecular biologist (and TED Fellow) Janet Iwasa introduces a new open-source animation software designed just for scientists.

Watch the video

0:11 - Take a look at this drawing. Can you tell what it is? I'm a molecular biologist by training, and I've seen a lot of these kinds of drawings. They're usually referred to as a model figure, a drawing that shows how we think a cellular or molecular process occurs. This particular drawing is of a process called clathrin-mediated endocytosis. It's a process by which a molecule can get from the outside of the cell to the inside by getting captured in a bubble or a vesicle that then gets internalized by the cell. There's a problem with this drawing, though, and it's mainly in what it doesn't show. From lots of experiments, from lots of different scientists, we know a lot about what these molecules look like, how they move around in the cell, and that this is all taking place in an incredibly dynamic environment.

1:02 - So in collaboration with a clathrin expert Tomas Kirchhausen, we decided to create a new kind of model figure that showed all of that. So we start outside of the cell. Now we're looking inside. Clathrin are these three-legged molecules that can self-assemble into soccer-ball-like shapes. Through connections with a membrane, clathrin is able to deform the membrane and form this sort of a cup that forms this sort of a bubble, or a vesicle, that's now capturing some of the proteins that were outside of the cell. Proteins are coming in now that basically pinch off this vesicle, making it separate from the rest of the membrane, and now clathrin is basically done with its job, and so proteins are coming in now — we've colored them yellow and orange — that are responsible for taking apart this clathrin cage. And so all of these proteins can get basically recycled and used all over again.



1:48 - These processes are too small to be seen directly, even with the best microscopes, so animations like this provide a really powerful way of visualizing a hypothesis.

1:59 - Here's another illustration, and this is a drawing of how a researcher might think that the HIV virus gets into and out of cells. And again, this is a vast oversimplification and doesn't begin to show what we actually know about these processes.

2:14 - You might be surprised to know that these simple drawings are the only way that most biologists visualize their molecular hypotheses. Why? Because creating movies of processes as we think they actually occur is really hard. I spent months in Hollywood learning 3D animation software, and I spend months on each animation, and that's just time that most researchers can't afford. The payoffs can be huge, though. Molecular animations are unparalleled in their ability to convey a great deal of information to broad audiences with extreme accuracy. And I'm working on a new project now called "The Science of HIV" where I'll be animating the entire life cycle of the HIV virus as accurately as possible and all in molecular detail. The animation will feature data from thousands of researchers collected over decades, data on what this virus looks like, how it's able to infect cells in our body, and how therapeutics are helping to combat infection.

3:16 - Over the years, I found that animations aren't just useful for communicating an idea, but they're also really useful for exploring a hypothesis. Biologists for the most part are still using a paper and pencil to visualize the processes they study, and with the data we have now, that's just not good enough anymore. The process of creating an animation can act as a catalyst that allows researchers to crystallize and refine their own ideas. One researcher I worked with who works on the molecular mechanisms of neurodegenerative diseases came up with experiments that were related directly to the animation that she and I worked on together, and in this way, animation can feed back into the research process.

3:56 - I believe that animation can change biology. It can change the way that we communicate with one another, how we explore our data and how we teach our students. But for that change to happen, we need more researchers creating animations, and toward that end, I brought together a team of biologists, animators and programmers to create a new, free, open-source software — we call it Molecular Flipbook — that's created just for biologists just to create molecular animations. From our testing, we've found that it only takes 15 minutes for a biologist who has never touched animation software before to create her first molecular animation of her own hypothesis. We're also building an online database where anyone can view, download and contribute their own animations. We're really excited to announce that the beta version of the molecular animation software toolkit will be available for download today. We are really excited to see what biologists will create with it and what new insights they're able to gain from finally being able to animate their own model figures.

4:59 - Thank you.

5:01 - (Applause)



Anticipatory Anxiety [1157]

by System Administrator - Sunday, 15 March 2015, 01:39

Anticipatory anxiety: symptoms and solutions to unfounded fears

by María Dolors Mas Delblanch

David is still trembling, sitting on the couch in my office, as he tells me: "I was home alone, finishing getting ready. I was meeting my girlfriend at eight; we were going to a formal dinner with our parents. As I put on my watch, I automatically checked the time: it was ten past eight and Marta had not arrived.

"At that moment, without expecting it, I started to think that Marta must have had an accident on her way over, and that as soon as the phone rang it would be the police asking me to come and identify her and... honestly, I thought so many things in one minute. My heart lurched and began pounding, I had to take off my tie because I was sweating profusely, I started to choke and to feel unsteady, as if my vision were going blurry. I was convinced I was going mad... my heart lurched again thirty seconds later, when the doorbell rang and Marta walked in, smiling and apologizing for being late...

"What happened to me? How can I control it? Because it has happened to me again other times. Will I ever be cured?"

What happened to David is a fairly typical situation that also happens to many other patients, and we call it anxiety, though in this case we will give it a surname: anticipatory. As we have seen in earlier articles on Siquia, anxiety is a positive, natural response of the organism that serves to defend it against a threat, whether real or perceived. Only when certain activation thresholds are exceeded does anxiety become a pathological response, manifesting as panic attacks, with a large residual component that tends to be somatized in the body, causing psychophysiological symptoms. This is when we say that the body hurts, since it does so in a completely nonspecific, vague way, affecting various organs and systems.

Anticipation is about imagining the future. When David checks the time and sees that ten minutes have passed since the hour agreed with his girlfriend, he starts thinking about what may have happened to her. He anticipates that something horrible must have occurred to keep her from being there at eight, as they had arranged. Then he imagines himself already in the situation where his worst expectations have come true, and everything he would have to go through. While he thinks this, his heart races and his breathing turns shallow and fast: the sensations that lead to a panic attack begin. The main function of anxiety, then, is to mobilize the organism against possible threats, real or perceived. In anticipatory anxiety, however, its function is to activate the organism before the possible danger occurs. In other words, it warns us: "forewarned is forearmed"... although all too often the danger turns out to be nonexistent.

Functions of anticipatory anxiety

Anticipatory anxiety is a process of cognitive appraisal which, drawing on experience among other things, predicts the consequences that a given event (Marta's lateness, in the example) produces in the patient's behavior (David's, in this case).

  • Primary appraisal: how, when and in what way something harms or benefits the patient
  • Secondary appraisal: what the patient themselves can do about it
  • Efficacy expectation: how capable the person believes they are of doing something to change the situation
  • Outcome expectation: what results the patient estimates as likely, producing a pleasant or unpleasant emotional state depending on whether the individual is affected positively or negatively.

Anticipation also has a motivational effect. According to Bandura (1986), "anticipatory thoughts that do not exceed the limits of reality have functional value because they motivate the development of competencies and of action plans." Anticipation is part of action, since it both regulates and induces behavior and emotions.

Thought has a considerable capacity to physiologically self-activate emotion. Anticipations of threat, harm or loss generate anxiety. These perceived thoughts, which does not mean they are unreal, can be as activating as the actual events themselves. The lungs, heart, stomach and muscles do not know what is happening and therefore make no decision involving action of their own. It is the higher nerve centers, especially but not only the cerebral cortex, that construe our reality, correctly or incorrectly, along with how it is affecting us, and that decide what we can do. The result is a psychophysiological response that allows the organism to respond.

Symptoms of anticipatory anxiety

The importance of anticipatory anxiety should not be underestimated: contrary to what some may think, it is a real problem that produces absolutely real symptoms. When the mind expects the worst ("Marta must have had an accident"), the body braces for the emotional impact of news it will never receive (though it does not know that yet, hence the tension, tachycardia...), and the worry ("she is ten minutes late, which is unlike her, on such an important day") is interpreted by the organism as a dangerous situation. If this way of thinking becomes habitual, the disorder tends to become chronic.

Besides making you feel anguished, anxiety can also toy with your mood, leaving you angry, confused, hopeless, irritable or sad, which can end up affecting your ability to concentrate and make decisions.

The physical symptoms of anticipatory anxiety include:

  • Muscle tension
  • Sweating
  • Palpitations and/or tachycardia
  • Headaches
  • Shortness of breath
  • A trembling voice
  • Dizziness and nausea
  • Digestive problems
  • A reduced ability to concentrate, which can lower performance.

The physical symptoms of anticipatory anxiety can be so intense that the person may think they are having a heart attack. This is because the first panic attack is usually sudden and unexpected, and yet it changes the patient's entire outlook: after suffering one, they begin to feel constant anticipatory anxiety out of fear of suffering another, just as happens to patients with phobias or specific anxiety.

Treatment of anticipatory anxiety

Broadly speaking, there are three types of treatment: psychotherapy, medication, or a combination of the two; we will say a little about them shortly. The most important thing is that both specific anxiety and anticipatory anxiety can be treated.

Much of the advice given for specific anxiety also works for anticipatory anxiety. Here are some tips to bear in mind:

1. Change your thinking. Anticipatory anxiety is a catastrophic or negative interpretation of an unknown outcome. Learn to entertain positive interpretations as well. Put another way: replace "self-fulfilling prophecy" thinking (predictions that themselves cause something to happen in our minds; in other words, laying down the very stone we will trip over) with realistic thinking (if something is, it is).

2. Exercise. Physical exercise has undeniable benefits on several levels: it distracts you, helps burn calories, relaxes you and releases the extra adrenaline produced when you get anxious.

3. Distract yourself. Finding an enjoyable activity can reduce anxiety by occupying your mind: reading a book, going for a walk or going to the movies, among others.

4. Learn relaxation techniques. There are relaxation, breathing, visualization, imagery and mindfulness techniques which you can practice with your psychologist and which can reduce your level of anticipatory anxiety. With the right exercises, you can even learn to calm an anxiety attack.

As for the formal treatment of anticipatory anxiety, anxiolytic medication is often recommended and can be very useful for relieving symptoms in the short term. However, as we noted in an earlier post, a genuine addiction to psychoactive drugs is developing, with few advantages and many side effects, some of them dangerous to health.

Psychotherapy, for its part, has proved quite effective, especially cognitive-behavioral therapy, which above all helps change thought patterns, the key to overcoming anticipatory anxiety.

To do so it uses several techniques, such as thought stopping, staying in contact with the here and now, visualizing how to face feared situations, or working out which part of one's anticipations of future events is real and which is not.

Steps for facing the unknown and uncertain and eliminating anticipatory anxiety

And that is precisely what this is about. If you constantly worry about what is going to happen in the future, whether an upcoming activity or your own future, you are probably suffering from anticipatory anxiety, which is linked to intolerance of uncertainty and the need for control. For this reason, it is associated with situations in which the person holds high expectations of his or her own performance.

The person mentally replays, over and over, catastrophic scenarios in which everything goes wrong. This continually repeated negative thinking favors the emergence of self-fulfilling prophecies, predictions that themselves cause something to happen. As we said a little earlier, they are the stone we place in our own path to trip over.

How to control anticipatory anxiety: practical advice

1. Stop the vicious circle of negative emotions. Almost every thought produces an emotion, so if we fill our minds with catastrophic ideas we will be very anxious. To interrupt anticipatory anxiety, then, you need to identify the negative emotions it produces and calm them. For example: breathe deeply and relax. Once you feel you have regained control over your emotions, you can analyze the situation rationally.

2. Identify the negative thoughts. What are you thinking? You can write it down on paper. You will notice that you focus more on the things that could go wrong than on the positive aspects. That is the cause of anticipatory anxiety.

3. Break down each thought. Take the thoughts one by one and consider the worst scenario that could occur. What can you expect in the worst case? How would you feel? Most of the time you will see that your answers are not as bad as you first assumed; what is really happening is that you are magnifying the consequences of that thought, and that is precisely what frightens you.

4. Shift your focus of attention. At this point, it is clear that you need to change your attitude, which means concentrating on the positive aspects. Things can still go wrong, of course, but as you well know, a positive attitude moves mountains. On the other hand, it is also clear that merely switching to a positive thought will not end your problem.

5. Prepare yourself for uncertainty. Life is, in fact, quite uncertain, and the sooner you accept this, the better. To eliminate anticipatory anxiety you need to learn to live with uncertainty without feeling uncomfortable, accepting that it is part of your life, of all our lives. A good strategy is to concentrate on the here and now, trying to curb the tendency to guess what will happen in the future.

In short, anticipatory anxiety consists of thinking that we are going to suffer greatly and feel intense fear; we then feel fear of the fear we believe we will feel. Hence the expression “fear of fear” has become so popular that it has been used as the title of a poetry book (by Hernán Narbona) and even of a hip-hop song. The rap, performed by the group Desplante with Diana Feria, begins: “This happens many times, when the fear of fearing is greater than everything you believe in…”.

Among the physical symptoms, we find:

  • Sleep disturbances
  • Fatigue

As for the psychosocial symptoms, we find:

  • Problems in interpersonal relationships, since patients are constantly tense and worried, sometimes for no apparent reason.

Leave your question for psychologist Dolors Mas or, if you would like more information, see how we can help you through Siquia's team of online psychologists.

About the author of this article

María Dolors Mas Delblanch is a psychologist in Badalona, licensed member No. 17222. Her specialties are anxiety, depression, and ADHD. She sees couples, as well as mothers and fathers with questions about raising their children. You can read more of her articles on these topics at Siquia and leave her your questions.


Logo KW

Anxiety: Comments on the Personal Stories Associated with It [1145]

by System Administrator - Tuesday, 10 March 2015, 23:30

Anxiety: Comments on the Personal Stories Associated with It

by Héctor Gamba-Collazos | Universidad Manuela Beltrán

Drawing on the notion of narrative, this text reflects on experiences of anxiety. It first stresses that the concept cannot in itself be considered a problem or a pathology; on the contrary, it refers to a set of reactions with great adaptive value, since it prepares the subject to face a threat. It then highlights that experiences of anxiety are not restricted to the physiological level but are tied to a particular narrative which, in cases where anxiety becomes problematic, revolves around environmental threats and personal inadequacy. The text concludes that the concept does not refer to an affliction of which the subject is a victim, but to an experience in which the individual is an active agent, with a repertoire of actions adopted and maintained as a function of the personal story he or she has constructed.

Please read the attached PDF.

Logo KW

Antioxidants Facilitate Melanoma Metastasis [1492]

by System Administrator - Thursday, 8 October 2015, 13:02


Antioxidants Facilitate Melanoma Metastasis

By Anna Azvolinsky

Two compounds boost the ability of melanoma cells to invade other tissues in mice, providing additional evidence that antioxidants can be beneficial to malignant cells as well as healthy ones.

Antioxidants decrease the levels of DNA-damaging, cancer-causing reactive oxygen species (ROS) that are formed during normal metabolism. Yet clinical trials that evaluated the health benefits of antioxidants like vitamin E and beta carotene have not found that these supplements can prevent cancer; some have even demonstrated an uptick in cancer risk associated with antioxidant supplementation.  

A team of researchers at the University of Gothenburg in Sweden has now shown that mice with melanoma fed an antioxidant had double the number of lymph node metastases and more malignant disease compared to animals with the same cancer who were not given antioxidants. The results, published today (October 7) in Science Translational Medicine, provide further evidence that antioxidants are likely not beneficial to the health of those with melanoma and other tumors and could, in fact, be harmful.

“Metastasis is really the most dangerous part of a cancer so we believe that melanoma patients and those who have an increased risk of this disease should be aware of the potential harm of antioxidants,” study coauthor Martin Bergo told The Scientist.

“This is a carefully designed, well-controlled, and beautifully executed study,” Dimitrios Anastasiou, who studies cancer metabolism at the U.K.’s Francis Crick Institute wrote in an email. “Antioxidants are easily accessible to the wider public . . . making them susceptible to potential misuse. These findings highlight the need for further robust studies that aim to clarify in which context antioxidants should be used or avoided.”

In January 2014, the Swedish group showed that antioxidants can accelerate the growth of primary lung tumors. For the present study, the researchers fed mice with early-stage malignant melanoma the antioxidant N-acetylcysteine (NAC). The researchers found that the sizes and number of primary melanoma tumors were the same between the control mice and the animals given antioxidants. But the latter group had twice as many lymph-node metastases “and when we looked inside the lymph nodes, those in the antioxidant-treated group contained more malignant cells,” said Bergo.

The team observed similar results working with cultured human melanoma cell lines. Adding either NAC or another antioxidant—a soluble vitamin E analog—to the culture didn’t affect the cells’ proliferation, but did increase their migration abilities and invasive properties. These properties were dependent on the production of glutathione, an antioxidant endogenous to cells.

Unlike the lung cancer study, which showed the antioxidants worked by reducing the activity of tumor suppressor p53, in the melanoma experiments, the antioxidants appeared to work by increasing the levels of reduced glutathione—which neutralizes ROS—and increasing levels of rhoA, an enzyme activated during cell migration and invasion. Combined, the results of both studies suggest that antioxidants can accelerate cancer progression through two apparently different mechanisms.

In an email, melanoma researcher Meenhard Herlyn of the Wistar Institute in Philadelphia noted that there’s more to learn about how antioxidants might affect people with melanoma. “Certainly the use of vitamin E and its analogs should be reconsidered if patients have already been diagnosed with a tumor,” he wrote.

While ROS can damage cells when present at high levels, they also help protect cells, including through the reversible oxidation of proteins, explained Anastasiou. “Various mechanisms exist to ensure that ROS levels are carefully balanced. Antioxidants may interfere with this balance, either within the tumor or its microenvironment, potentially disrupting regulatory pathways controlled by ROS,” he wrote in an email.

Bergo’s team will next test whether topical application of antioxidants—such as those found in lotions and sunscreens—have a similar effect on established melanoma tumors.

“Antioxidants can probably protect both healthy cells and tumor cells from free radicals,” Bergo noted. “Free radicals can slow down tumor proliferation and metastasis and antioxidants can help tumors overcome those limitations.”

“The challenges will be to understand how generally applicable are these observations to other tumor types and to translate these findings into clinically useful dietary guidelines,” noted Anastasiou.

K. Le Gal et al., “Antioxidants can increase melanoma metastasis in mice,” Science Translational Medicine, doi:10.1126/scitranslmed.aad3740, 2015.

Logo KW

Application of Genetically Humanized Mouse Models for Biomedical Research [989]

by System Administrator - Wednesday, 12 November 2014, 14:03


The Advantage and Application of Genetically Humanized Mouse Models for Biomedical Research

The use of genetically engineered mice in experimental medical research has led to significant advances in our understanding of human health and disease. 

From the development of transgenic and gene targeting methods to recent innovations in gene-editing technologies, manipulation of the mouse genome has become increasingly sophisticated.

This white paper discusses the available technologies used in the generation of genetically humanized mice and the favored applications of these models in biomedical research.

>> Download This White Paper to Learn: 

  • Transgenic Technologies
  • Types of DNA Vectors for Humanization
  • How to Apply Genetically Humanized Models in:
    • Efficacy and Safety Testing of Therapeutic Compounds and Biologics
    • Drug Metabolism and Disposition
    • Novel Therapeutic Approaches In Vivo
    • The Immune System

Please read the attached whitepaper.

Logo KW


by System Administrator - Friday, 26 September 2014, 13:16


by Servicio Andaluz de Salud

This leaflet, produced by the Servicio Andaluz de Salud, is meant as one more support tool for people experiencing depression or some type of anxiety disorder, or who for any reason have trouble organizing their activities. In this simple leaflet you will find useful advice, along with a daily activity plan that is simple but very effective.

It continues in the attached leaflet.

Logo KW

Are People in Silicon Valley Just Smarter? [1247]

by System Administrator - Wednesday, 17 June 2015, 10:25

Are People in Silicon Valley Just Smarter?


Why is Silicon Valley better at innovating than most of the world? Why is the number of successful startups there so high? Where will the next Mecca of tech-startup success emerge?

This post is about where and why innovation happens, and where it's going next.

It Started in a Coffee Shop

In the 18th century, coffeehouses had an enormous impact on Enlightenment culture.

As Steven Johnson says in his book Where Good Ideas Come From, "It's no accident that the Age of Reason accompanies the rise of caffeinated beverages."

The coffeehouse became the hub for information sharing.

Suddenly, commoners could interact with the royals, meet, mingle and share ideas.

In his book London Coffee Houses, Bryant Lillywhite explains it this way:

"The London coffee-houses provided a gathering place where, for a penny admission charge, any man who was reasonably dressed could smoke his long, clay pipe, sip a dish of coffee, read the newsletters of the day, or enter into conversation with other patrons.

"At the period when journalism was in its infancy and the postal system was unorganized and irregular, the coffeehouse provided a centre of communication for news and information . . . Naturally, this dissemination of news led to the dissemination of ideas, and the coffee-house served as a forum for their discussion."

Beyond the Coffee Shop

Today, researchers have recognized that the coffee-shop phenomenon is actually just a mirror of what occurs when people move from sparse rural areas to jam-packed cities.

As people begin living atop one another, so too do their ideas. And, as Matt Ridley aptly describes, innovation happens when these crowded ideas "have sex."

Geoffrey West, a physicist from Santa Fe Institute, found that when a city's population doubles, there is a 15 percent increase in income, wealth and innovation. (He measured innovation by counting the number of new patents.)
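West's finding corresponds to a superlinear power law, Y ≈ Y0 · N^β with β ≈ 1.15 (the exponent commonly cited for his urban-scaling work; the exact value varies by metric, so treat this sketch as an approximation rather than his published model). A few lines of Python make the arithmetic concrete:

```python
def scaled_output(y0, n, beta=1.15):
    """Superlinear urban scaling: total output Y = y0 * N**beta."""
    return y0 * n ** beta

# Double the population and compare total and per-capita output.
ratio = scaled_output(1.0, 2_000_000) / scaled_output(1.0, 1_000_000)
per_capita_gain = ratio / 2 - 1  # extra output per resident after doubling
print(f"total output x{ratio:.2f}, per-capita gain +{per_capita_gain:.0%}")
```

With β = 1.15, doubling the population roughly multiplies total output by 2.2, a per-capita gain of about 11 percent, in the same ballpark as the rounded 15 percent figure quoted above.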

Why Silicon Valley Is Getting It Right

My friend Philip Rosedale, the creator of Second Life and now CEO of High Fidelity, spent some time investigating why the Bay Area in particular has become such a hub for technology and innovation.

As Rosedale explains, "I think the magic of Silicon Valley is not in fostering risk-taking, but instead in making it safe to work on risky things...There are two things happening in Silicon Valley that are qualitatively different than anywhere else."

Those things are:

  1. The sheer density of tech “founders per capita” is 10 times greater than the norm for other cities (see figure below).
  2. There is a far greater level of information sharing between entrepreneurs.

Image: San Francisco has about twice the density of the next-highest city (Boston), and about five times the density of New York.

Rosedale goes on, "You can't walk down the street without (almost literally) running into someone else who is starting a tech company. While tech ventures are individually risky, a sufficiently large number of them close to each other makes the experience of working in startups safe for any one individual."

"I like to visualize this as a series of lily pads in a pond, occasionally submerging as their funding runs out," he explains. "If you are a frog, and there are enough other lily pads nearby, you'll do just fine."

"Beyond simply having a lot of people near you to work with, I believe that the openness and willingness to share inherent to Silicon Valley is a big driver in this effect."

Beyond the Next Coffee House

For entrepreneurial technology innovation to occur, you need two things: a densely packed population of tech-savvy entrepreneurs and a culture of freely sharing and building on ideas.

Rosedale, who is working on the key technologies to intimately and powerfully connect people using virtual worlds, points out, "If we create a virtual world, we can expect a sudden disruption as the biggest 'city' of the tech future goes 100 percent online."

Just as the coffeehouse is a pale comparison to today's high-density city, so too will today's city be a pale comparison to the coming high fidelity, virtual online innovation communities.

Imagine a near-term future where any entrepreneur, anywhere on the planet, independent of the language they speak (think instant translation), can grab their VR headset (e.g., Oculus, Hololens, Magic Leap) and immerse themselves in an extremely high-resolution and low-latency VR world filled with like-minded, creative, insightful and experienced entrepreneurs.

But this hyperconnected world is not happening in isolation from other changes.

As I've noted in previous posts, the number of people connected to the Internet is exploding, going from 1.8 billion in 2010 to 2.8 billion today, and as many as 5 billion by 2020.
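Taking those figures at face value (and treating "today" as 2015, an assumption based on this post's date), the implied compound annual growth rates are easy to check:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

# Internet users, in billions, from the figures above.
past = cagr(1.8, 2.8, 5)    # 2010 -> 2015: roughly 9% per year
needed = cagr(2.8, 5.0, 5)  # 2015 -> 2020: roughly 12% per year
print(f"historical ~{past:.1%}/yr, required ~{needed:.1%}/yr")
```

Reaching 5 billion by 2020 would thus require growth to accelerate somewhat beyond the 2010-2015 pace.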

The opportunities for collaborative thinking are growing exponentially, and since progress is cumulative, the resulting innovations are going to grow exponentially as well.

Ultimately, these virtual worlds will create massive, global virtual coffeehouses for entrepreneurs to meet, to innovate, to create businesses and solve problems.

It's for this reason (among many others) that I believe we are living during the most exciting time ever.

The tools we are developing will bring about an age of abundance, and we will be able to meet the needs of every man, woman and child on Earth.

Image Credit: Impact Hub/Flickr


Logo KW

Aristotelian Biology [894]

by System Administrator - Friday, 26 September 2014, 14:09

Aristotelian Biology

The ancient Greek philosopher was the first scientist.

By Armand Marie Leroi

In the Aegean Sea there is an island called Lesbos. It has pine-forested mountains, glades of chestnut trees, and valleys filled with blooming rhododendrons. Terrapins and eels navigate the rivers. In the spring and autumn, migrating birds pause there in the thousands as they travel between Africa and Europe.

And there is the Lagoon. Twenty-two kilometers long, 10 wide, Kolpos Kallonis nearly cuts the island in two. The richest body of water in the Aegean, it is famous for its pilchards, which are best eaten raw, washed down with ouzo. I first went to the Lagoon more than 10 years ago, but have returned many times since. I have done so because it was on its shores that my science—biology—was born.

Around 345 BC, Aristotle left Athens, where for 20 years he had studied and taught at Plato’s Academy. He traveled east, across the Aegean, married, and settled on Lesbos for three years. D’Arcy Thompson, the Scottish zoologist and Aristotle scholar, said it was the “honeymoon of his life.” It was there that Aristotle began to study the natural world and so turned himself into not merely the first biologist, but the first scientist. Other philosophers before him had speculated about the causes of the natural world, but he was the first to combine theory with empirical investigation.

Aristotle’s philosophical works, such as Metaphysics, Politics, Poetics, and his logical treatises loom over the history of Western thought like a mountain range. But he devoted nearly a third of his writings—a dozen volumes, thousands of pages—to living things. There is comparative zoology in Historia animalium, functional anatomy in The Parts of Animals, a book on growth, two on animal locomotion, and two on aging and death. There were books on plants, too, but they have been lost.

And then there is his greatest work of all. The Generation of Animals described how animals develop in the egg and womb and outlined a theory of inheritance. It was the best one around until the day, 2,300 years later, when Gregor Mendel published his “Experiments on Plant Hybridization.” Aristotle underpinned his biology with a physical and chemical theory and a scientific method that lies atop metaphysical bedrock. There’s a sense in which his entire philosophy was constructed in order to study living things.

Aristotle’s books are lecture notes for an epic biology course. They’re a hard read: terse and riddled with unfamiliar terms. He talks of the “soul” and you think of some mystical, immaterial substance that survives our mortal frames. But that’s Aquinas’s soul, not Aristotle’s: his soul is pure physiology. It’s the system that keeps us, and every living thing, alive. Taken together, I think that Aristotle’s biology is the greatest scientific edifice ever built by a single man. I’ll allow one challenger. And that is only because Charles Darwin gave us the idea that eluded Aristotle: evolution.

Aristotle’s biology is all but forgotten. It was the principal casualty of the Scientific Revolution. He was the giant who had to be slain so that we could pass through the gates of philosophy to reach the green fields of science that lay beyond. And yet, if, as a biologist, you read him, you realize how familiar it all seems—how so many of our ideas were first his.

But that’s no reason to read him. True, Max Delbrück said that Aristotle deserved a Nobel for having the idea of DNA—but that was just an affectionate joke. No, we should read him not for his science but for his example. He shows us how to transcend the ideas and theories that constrain our thoughts. He went down to the Lagoon’s shore, picked up a snail, and asked, “What’s inside?” It’s such a simple question, but it launched a science vast and beautiful. And that is what Aristotle gives us: the courage to seek and discover new worlds.

Armand Marie Leroi is a professor of evolutionary developmental biology at Imperial College London. Read an excerpt of The Lagoon: How Aristotle Invented Science.

Logo KW

Art [681]

by System Administrator - Tuesday, 5 August 2014, 00:45

Video: The Hidden Rules of Art (Las Reglas Ocultas del Arte) [681]

Logo KW

Neuroscience Article: The Value of a Hug [1392]

by System Administrator - Monday, 7 September 2015, 12:39

Neuroscience Article: The Value of a Hug

by Nse. Marita Castro

Summary: Homo sapiens sapiens is a social being that needs contact with other people for better physical and mental health.

“There is one garment that fits every body… a hug.”

We Homo sapiens sapiens are emotional and social beings, so contact with other people is very important for our physical and mental well-being.

Affection does us good, and a hug is a wonderful way to express it. Sometimes we cannot find the right words to show what we feel, and this gesture is an excellent way to do so.

If we think back over our past, we will surely find several situations in which a hug helped us get through a difficult moment, comforted us, or made us feel loved and protected.

The sensation we feel during and after a sincere hug is extremely pleasant, which raises the question of whether it can produce any positive change in our health.

This question led scientists from Carnegie Mellon University, the University of Virginia, and the University of Pittsburgh to team up to study the effects of social support and hugs in more than 400 adults.

For their research, the investigators used a questionnaire in which participants answered 12 items about their perception of the social support they received; in addition, the participants were called on 14 consecutive nights to talk by phone and record whether they had had interpersonal problems and how many hugs they had received that day.

The participants were then exposed to a virus that causes the common cold and were observed under quarantine conditions to assess whether a higher frequency of conflict increased the risk of infection.

The results showed the researchers that feeling cared for and supported by other people produced a protective response against the risk of infection, while conflict increased it. Among the conclusions, they also found that hugs markedly lower stress hormones, which contributes to better functioning of the immune system.

When this work was covered in Scientific American, its authors pointed out that the other side of the experience must be kept in mind as well: it is important to consider how loneliness affects health.

We would all feel better if we hugged, or let ourselves be hugged, more often; although doing so is natural, it is often missing from our daily routine. Perhaps this is due to today's lifestyle, full of “urgent matters and other priorities,” or because we are unaware of the immense value that giving and receiving affection has for our entire UCCM (body-brain-mind unit).

Characteristics of hugs:

  • They release neurotransmitters such as oxytocin that make us feel loved and protected;
  • They convey strength and security;
  • They release endorphins and dopamine (producing a sense of well-being);
  • They reduce blood pressure;
  • They help the immune system do its job properly;
  • They lower stress hormones;
  • They cost nothing;
  • They can be given in any context.

Without a doubt, today is a good day to start!

And even though it is virtual, here goes a big hug!

Read on the website / Download as PDF:

This article is free to use; we only ask that you cite the author and source (Asociación Educar).


  • Cohen S, Janicki-Deverts D, Turner RB, Doyle WJ. Does hugging provide stress-buffering social support? A study of susceptibility to upper respiratory infection and illness. Psychol Sci. 2015 Feb;26(2):135-47. doi: 10.1177/0956797614559284.
  • Robinson KJ, Hoplock LB, Cameron JJ. When in Doubt, Reach Out: Touch Is a Covert but Effective Mode of Soliciting and Providing Social Support. Social Psychological and Personality Science 1948550615584197, first published on May 12, 2015. doi: 10.1177/1948550615584197.


Nse. Marita Castro

General Director of Asociación Educar.

Trainer and advisor for classroom Neurosicoeducación projects at the following schools: River Plate, Río de la Plata Sur, Mecenas, Magnus, Redwood, Enriqueta Compte y Riqué, Instituto Pizzurno De Enseñanza Integral, Instituto Jesús María, and Instituto Idio+DelFabro.

Co-creator of the Teacher Training Course in Neuroscience (completed by more than 2,000 students).

Her courses and training programs have graduates in 34 countries.

Lecturer in Neuroethics and Neuroleadership at the Universidad Maimónides business school (2010-2013).

Lecturer in the Neurobiology of Learning workshops, Universidad Nacional de la Plata (UNLP) (2009-2010).


More articles:

Logo KW

Artificial Enzymes from Artificial DNA [1015]

by System Administrator - Wednesday, 10 December 2014, 19:30

Artificial Enzymes from Artificial DNA Challenge Life As We Know It


In the decade or so since the Human Genome Project was completed, synthetic biology has grown rapidly. Impressive advances include the first bacteria to use a chemically-synthesized genome and creation of a synthetic yeast chromosome.

Recently, scientists from the MRC Laboratory of Molecular Biology in Cambridge, led by Dr. Philip Hollinger, reported creating the first completely artificial enzymes that are functional. The breakthrough was published in the journal Nature and builds on the group's prior success in creating several artificial nucleotides.


Nucleotides, the building blocks of DNA and RNA, consist of a phosphate group, one of five nitrogenous bases (adenine, cytosine, guanine, thymine, or uracil), and a sugar (deoxyribose in DNA and ribose in RNA).

In their previous studies, Dr. Hollinger’s group investigated whether nucleotides that don’t exist in nature (to our knowledge) could function like natural nucleotides. They designed six artificial nucleotides, keeping the phosphate group and one of the five nitrogenous bases, but switching between different sugars or even entirely different molecules.

The group found they could incorporate the artificial nucleotides, called xeno-nucleic acids or XNAs, into DNA and they behaved like “regular” DNA. The XNAs could be copied, could encode information and transfer it, and even undergo Darwinian natural selection—naturally occurring nucleic acids no longer seemed so special.

Moving forward, the scientists were interested in whether XNAs could function as enzymes, the proteins in cells that regulate biochemical reactions. They got this idea because RNA sometimes functions as an enzyme—the 1982 discovery that RNA could encode information, replicate itself and catalyze reactions filled in a big gap in our understanding of life and how it may have started on our planet.

Dr. Hollinger’s group created four new XNAzymes that could cut and paste RNA just like enzymes called polymerases. One of the XNAzymes even works on XNAs, something we think may have happened with RNA in the early days of life on Earth.

Besides the novelty factor in these experiments, the results suggest exciting possibilities. First, even though XNAs have never been found in nature, it doesn’t mean they don’t exist.


It is possible that on other planets—in our own galaxy or others—life isn’t restricted to DNA and RNA as we know them here on Earth. Under the right conditions, intelligent life that uses XNAs or even more exotic molecules could come into existence. That’s quite an eye-opener and something we need to keep in mind as we probe other worlds for life.

Further, Dr. Hollinger and other scientists also believe that XNAzymes could have therapeutic uses.

Because they are not naturally occurring, our bodies haven’t evolved a system to break down XNAzymes. If researchers could design an XNAzyme that can degrade specific RNA, then targeting an overactive cancer gene becomes possible. They can even be designed to target the DNA and RNA that viruses use to infect a cell and force it to make more viruses.

However, before any of these possibilities are realized, there are many questions to answer.

For example, all the work done by Dr. Hollinger’s group has been in test tubes—can they get similar results in live cells? Also, since the XNAzymes are not degraded by the cell, can they design a system to turn the XNAzymes on and off? An unnatural, long-lasting molecule that cannot be degraded risks serious unintended consequences if there is no system to regulate it.

Whatever becomes of XNAs and XNAzymes as therapeutic agents, the results published thus far are quite exciting. Even if we don’t find alien beings with an XNA genome, will our technology allow us to create whole living systems using these and perhaps other novel genetic material we haven’t created yet?

For further reading and discussion, the researchers behind this work (including Dr. Hollinger) recently participated in a Reddit AMA. Or to talk the merits and risks of creating synthetic life check out our discussion post on the subject.


Logo KW

Artificial Intelligence Evolving From Disappointing to Disruptive [1002]

by System Administrator - Tuesday, 25 November 2014, 20:33

Summit Europe: Artificial Intelligence Evolving From Disappointing to Disruptive


Neil Jacobstein, Singularity University’s co-chair in AI and Robotics, has been thinking about artificial intelligence for a long time, and at a recent talk at Summit Europe, he wanted to get a few things straight. There’s AI, and then there’s AI.

Elon Musk recently tweeted this about Nick Bostrom’s book, Superintelligence: “We need to be super careful with AI. Potentially more dangerous than nukes.”

AI has long been a slippery term, its definition in near-constant flux. Ray Kurzweil has said AI is used to describe human capabilities just out of reach for computers—but when they master these skills, like playing chess, we no longer call it AI.

These days we use the term to describe machine learning algorithms, computer programs that autonomously learn by interacting with large sets of data. But we also use it to describe the theoretical superintelligent computers of the future.

According to Jacobstein, the former are already proving hugely useful in a range of fields—and aren’t necessarily dangerous—and the latter are still firmly out of reach.

The AI hype cycle has long been stuck at the stage of overpromise and underperformance.

Computer scientists predicted a computer would beat the world chess champion in a decade—instead it took forty years. But Jacobstein thinks AI is moving from a long period of disappointment and underperformance to an era of disruption.

What can AI do for you? Jacobstein showed participants a video of IBM’s Watson thoroughly dominating two Jeopardy champions—not because folks haven’t heard about Watson, but because they need to get a visceral feel of its power.

Jacobstein said Watson and programs like it don’t demonstrate intelligence that is “broad, deep, and subtle” like human intelligence, but they are a multi-billion dollar fulcrum to augment a human brain faced with zettabytes of data.

Our brains, beautiful and capable as they are, have major limitations that machines simply don’t share—speed, memory, bandwidth, and biases. “The human brain hasn’t had a major upgrade in over 50,000 years,” Jacobstein said.

Now, we’re a few steps away from having computer assistants that communicate like we do on the surface—speaking and understanding plain English—even as they manage, sift, and analyze huge chunks of data in the background.

Siri isn’t very flexible and still makes lots of mistakes, often humorous ones—but Siri is embryonic. Jacobstein thinks we’ll see much more advanced versions soon. In fact, with $10 million in funding, Siri’s inventors are already working on a sequel.

And increasingly, we’re turning to the brain for inspiration. IBM’s Project SyNAPSE, led by Dharmendra Modha, released a series of papers—a real tour de force according to Jacobstein—outlining not just a new brain-inspired chip, but a new specially tailored programming language and operating system too.

These advances, among others highlighted by Jacobstein, will be the near future of artificial intelligence, and they’ll provide a wide range of services across industries from healthcare to finance.

But what of the next generation? A better understanding of the brain driven by advanced imaging techniques will inspire the future’s most powerful systems: “We’ll understand the human brain like we understand the kidneys and heart.”


If you lay out the human neocortex, the part of the brain responsible for higher cognition, it’s the size of a large dinner napkin. Imagine building a neocortex outside the confines of the skull—the size of this room or a city.

Jacobstein thinks reverse engineering the brain in silicon isn’t unreasonable. And then we might approach the kind of superintelligence Musk is worried about. Might such a superintelligent computer become malevolent? Jacobstein says the concern is a realistic one.

“These new systems will not think like we do,” he said, “And that means we’ll have to exercise some control.”

Even if we don’t completely understand them, we’re still morally responsible for them—like our children—and it’s worth being proactive now. That includes planning diverse, layered controls on behavior and rigorous testing in a “sand box” environment, segregated and disconnected from other computers or the internet.

Ultimately, Jacobstein believes superintelligent computers could be a great force for good—finding solutions to very hard problems like energy, aging, or climate change—and we have a reasonable shot at capturing these benefits without the risks materializing.

“We have a very promising future ahead,” Jacobstein said, “I encourage you to build the future boldly, but do it responsibly.”



Artificial intelligence will replace traditional software approaches, Alphabet's Eric Schmidt says [1450]

by System Administrator - Thursday, 24 September 2015, 17:53

Artificial intelligence will replace traditional software approaches, Alphabet's Eric Schmidt says


Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’ [1579]

by System Administrator - Sunday, 15 November 2015, 22:29

Robots manufactured by Shaanxi Jiuli Robot Manufacturing Co on display at a technology fair in Shanghai Photograph: Imaginechina/Corbis

Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’

by Charles Arthur

A new report suggests that the marriage of AI and robotics could replace so many jobs that the era of mass employment could come to an end

If you wanted relief from stories about tyre factories and steel plants closing, you could try relaxing with a new 300-page report from Bank of America Merrill Lynch which looks at the likely effects of a robot revolution.

But you might not end up reassured. Though it promises robot carers for an ageing population, it also forecasts huge numbers of jobs being wiped out: up to 35% of all workers in the UK and 47% of those in the US, including white-collar jobs, seeing their livelihoods taken away by machines.

Haven’t we heard all this before, though? From the luddites of the 19th century to print unions protesting in the 1980s about computers, there have always been people fearful about the march of mechanisation. And yet we keep on creating new job categories.

However, there are still concerns that the combination of artificial intelligence (AI) – which is able to make logical inferences about its surroundings and experience – married to ever-improving robotics, will wipe away entire swaths of work and radically reshape society.

“The poster child for automation is agriculture,” says Calum Chace, author of Surviving AI and the novel Pandora’s Brain. “In 1900, 40% of the US labour force worked in agriculture. By 1960, the figure was a few per cent. And yet people had jobs; the nature of the jobs had changed.

“But then again, there were 21 million horses in the US in 1900. By 1960, there were just three million. The difference was that humans have cognitive skills – we could learn to do new things. But that might not always be the case as machines get smarter and smarter.”

What if we’re the horses to AI’s humans? To those who don’t watch the industry closely, it’s hard to see how quickly the combination of robotics and artificial intelligence is advancing. Last week a team from the Massachusetts Institute of Technology released a video showing a tiny drone flying through a lightly forested area at 30mph, avoiding the trees – all without a pilot, using only its onboard processors. Of course it can outrun a human-piloted one.

MIT has also built a “robot cheetah”. Add to that the standard progress of computing, where processing power doubles roughly every 18 months (or, equally, prices for capability halve), and you can see why people like Chace are getting worried.
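That cadence compounds fast. As a minimal sketch (the function name and sample horizons are illustrative, not from the article), an 18-month doubling period looks like this:

```python
# Sketch of exponential capability growth: doubling every 18 months.
def capability_multiplier(years, doubling_period_years=1.5):
    """Return how many times over capability has multiplied after `years`."""
    return 2 ** (years / doubling_period_years)

for years in (3, 6, 10, 15):
    print(f"after {years:2d} years: ~{capability_multiplier(years):.0f}x")
# after  3 years: ~4x
# after  6 years: ~16x
# after 10 years: ~102x
# after 15 years: ~1024x
```

Ten years of 18-month doublings is roughly a hundredfold gain, which is why an exponential technology can look deceptively slow for years and then disruptively fast all at once.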

Drone flies autonomously through a forested area

But the incursion of AI into our daily life won’t begin with robot cheetahs. In fact, it began long ago; the edge is thin, but the wedge is long. Cooking systems with vision processors can decide whether burgers are properly cooked. Restaurants can give customers access to tablets with the menu and let people choose without needing service staff.

Lawyers who used to slog through giant files for the “discovery” phase of a trial can turn it over to a computer. An “intelligent assistant” called Amy will, via email, set up meetings autonomously. Google announced last week that you can get Gmail to write appropriate responses to incoming emails. (You still have to act on your responses, of course.)

Further afield, Foxconn, the Taiwanese company which assembles devices for Apple and others, aims to replace much of its workforce with automated systems. The AP news agency gets news stories written automatically about sports and business by a system developed by Automated Insights. The longer you look, the more you find computers displacing simple work. And the harder it becomes to find jobs for everyone.

So how much impact will robotics and AI have on jobs, and on society? Carl Benedikt Frey, who with Michael Osborne in 2013 published the seminal paper The Future of Employment: How Susceptible Are Jobs to Computerisation? – on which the BoA report draws heavily – says that he doesn’t like to be labelled a “doomsday predictor”.

He points out that even while some jobs are replaced, new ones spring up that focus more on services and interaction with and between people. “The fastest-growing occupations in the past five years are all related to services,” he tells the Observer. “The two biggest are Zumba instructor and personal trainer.”

Frey observes that technology is leading to a rarefaction of leading-edge employment, where fewer and fewer people have the necessary skills to work in the frontline of its advances. “In the 1980s, 8.2% of the US workforce were employed in new technologies introduced in that decade,” he notes. “By the 1990s, it was 4.2%. For the 2000s, our estimate is that it’s just 0.5%. That tells me that, on the one hand, the potential for automation is expanding – but also that technology doesn’t create that many new jobs now compared to the past.”

This worries Chace. “There will be people who own the AI, and therefore own everything else,” he says. “Which means homo sapiens will be split into a handful of ‘gods’, and then the rest of us.

“I think our best hope going forward is figuring out how to live in an economy of radical abundance, where machines do all the work, and we basically play.”

Arguably, we might be part of the way there already; is a dance fitness programme like Zumba anything more than adult play? But, as Chace says, a workless lifestyle also means “you have to think about a universal income” – a basic, unconditional level of state support.

Perhaps the biggest problem is that there has been so little examination of the social effects of AI. Frey and Osborne are contributing to Oxford University’s programme on the future impacts of technology; at Cambridge, Observer columnist John Naughton and David Runciman are leading a project to map the social impacts of such change. But technology moves fast; it’s hard enough figuring out what happened in the past, let alone what the future will bring.

But some jobs probably won’t be vulnerable. Does Frey, now 31, think that he will still have a job in 20 years’ time? There’s a brief laugh. “Yes.” Academia, at least, looks safe for now – at least in the view of the academics.


Smartphone manufacturer Foxconn is aiming to automate much of its production facility. Photograph: Pichi Chuang/Reuters

The danger of change is not destitution, but inequality

Productivity is the secret ingredient in economic growth. In the late 18th century, the cleric and scholar Thomas Malthus notoriously predicted that a rapidly rising human population would result in misery and starvation.

But Malthus failed to anticipate the drastic technological changes - from the steam-powered loom to the combine harvester - that would allow the production of food and the other necessities of life to expand even more rapidly than the number of hungry mouths. The key to economic progress is this ability to do more with the same investment of capital and labour.

The latest round of rapid innovation, driven by the advance of robots and AI, is likely to power continued improvements.

Recent research led by Guy Michaels at the London School of Economics looked at detailed data across 14 industries and 17 countries over more than a decade, and found that the adoption of robots boosted productivity and wages without significantly undermining jobs.

Robotisation has reduced the number of working hours needed to make things; but at the same time as workers have been laid off from production lines, new jobs have been created elsewhere, many of them more creative and less dirty. So far, fears of mass layoffs as the machines take over have proven almost as unfounded as those that have always accompanied other great technological leaps forward.

There is an important caveat to this reassuring picture, however. The relatively low-skilled factory workers who have been displaced by robots are rarely the same people who land up as app developers or analysts, and technological progress is already being blamed for exacerbating inequality, a trend Bank of America Merrill Lynch believes may continue in future.

So the rise of the machines may generate huge economic benefits; but unless it is carefully managed, those gains may be captured by shareholders and highly educated knowledge workers, exacerbating inequality and leaving some groups out in the cold. Heather Stewart





by System Administrator - Sunday, 12 October 2014, 17:28


Written By: Peniel M. Dimberu

In one of the gutsiest performances in sports history, NFL quarterback Chris Simms had to be carted off the field after taking several vicious hits from the defense during a game in 2006. Remarkably, Simms returned to the game shortly thereafter and led his team on a scoring drive before having to leave the game for good.

As it turns out, Simms had ruptured his spleen and lost nearly five pints of blood.

While you can live without your spleen, it serves several important functions in the body including making antibodies and maintaining a reservoir of blood. It also works to keep the blood clean by removing old blood cells and antibody-coated pathogens.

Now, scientists from Harvard’s Wyss Institute for Biologically Inspired Engineering in Boston have developed an artificial spleen that has been shown to rapidly remove bacteria and viruses from blood. The technology could be useful in many scenarios, including protecting people who suffer from immunodeficiencies and those infected with difficult to treat pathogens like Ebola virus. It also has great potential to reduce the incidence of sepsis, a leading cause of death that results from an infection that the immune system tries but fails to control effectively.

In the 2013 sci-fi thriller Elysium, the filmmakers imagined a futuristic body scanner that can quickly identify and treat almost any disease. While we may be far from an all-in-one machine that can handle any ailment, the artificial spleen developed by a Harvard team led by Dr. Donald Ingber could play a part in such a machine.

Their work, published last month in the journal Nature Medicine, was demonstrated to be effective in removing more than 90% of bacteria from blood.

Wyss Institute Founding Director Don Ingber, Senior Staff Scientist Michael Super and Technology Development Fellow Joo Kang explain how they engineered the Mannose-binding lectin (MBL) protein to bind to a wide range of sepsis-causing pathogens and then safely remove the pathogens from the bloodstream using a novel microfluidic spleen-like device.

While this device has potential to be a major advance in treating infections, the way it works is relatively straightforward. In most animals, a protein called mannose-binding lectin (MBL) binds to mannose, a type of sugar. Mannose is found on the outer surface of many pathogens, including bacteria, fungi and viruses. It is even found on some toxins that are produced by bacteria and contribute to illness.


Wyss Institute microfluidic biospleen.

Dr. Ingber’s team took a modified version of MBL and coated magnetic nanobeads with it. As the infected blood filters through the device, the MBL from the nanobeads binds to most pathogens or toxins that are around. As the blood then moves out of the device, a magnet grabs the magnetic nanobeads that have attached to the pathogens and removes them from the blood.

The blood can then be put right back into the patient, much cleaner than before.

In their initial experiments, the researchers used rats that had been infected with two common bacteria, Escherichia coli and Staphylococcus aureus. One group of rats was left untreated and the other group had their blood filtered using the new device. After five hours, 89% of the treated rats had survived while only 14% of the untreated rats were still alive.

The researchers also tested whether the device could work for humans, who carry about five liters of blood in the average adult. In five hours of testing, moving one liter of blood infected with bacteria and fungi through the device per hour, it removed the vast majority of the infectious bugs.

While five hours is not a long time for patients who are hospitalized, it’s a bit long for patients who might be receiving outpatient treatment for an infection.

It is possible that as the design and function of the device is improved, it could work even faster than one liter per hour. The speed at which the artificial spleen is effective likely depends on several factors, including the pathogen load, the size of the patient (and thus their actual volume of blood) and the number of magnetic nanobeads in the device working to bind the pathogens.
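Those factors can be folded into a back-of-the-envelope model. The sketch below is my assumption, not the paper's model: it treats the patient's blood as a well-mixed reservoir recirculating through the device, so the pathogen load decays exponentially at a rate set by flow rate, blood volume, and per-pass capture efficiency (a stand-in for how many MBL-coated beads are working).

```python
import math

def fraction_remaining(hours, flow_l_per_hr=1.0, blood_volume_l=5.0,
                       capture_per_pass=0.9):
    """Well-mixed recirculation model (an assumption): each liter passing
    through the device loses `capture_per_pass` of its pathogens, so the
    total load decays exponentially over time."""
    clearance_rate = capture_per_pass * flow_l_per_hr / blood_volume_l  # per hour
    return math.exp(-clearance_rate * hours)

# Doubling the flow rate (or, equivalently, the capture efficiency)
# doubles the clearance rate for the same treatment time.
print(f"{1 - fraction_remaining(5):.0%} cleared in 5 h at 1 L/h")
print(f"{1 - fraction_remaining(5, flow_l_per_hr=2.0):.0%} cleared in 5 h at 2 L/h")
```

Under these assumed numbers a well-mixed reservoir clears more slowly than the near-complete single-pass removal measured in vitro, which is exactly why parameters like flow rate and bead count matter so much for bringing treatment time down.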

Currently, the researchers are extending their experiments by testing the artificial spleen on a larger model animal, the pig.

If the device eventually makes it to market, it might provide a big boost to our arsenal against infectious microorganisms. It can bring the numbers of rapidly dividing bugs down to a level that can then make it easy for drugs or even just the immune system to finish them off, an important advancement for people who suffer from an immunodeficiency for any number of reasons. This device could also help reduce our overuse of antibiotics and give us a strong weapon against antibiotic-resistant bugs.

It might even find use in developing countries like those in Western Africa, where we are currently witnessing the devastation of the Ebola virus outbreak.

However, while many infectious bugs have mannose on their surface, not all of them do. Perhaps the 2.0 version of the artificial spleen will include proteins that can bind to other molecules on the surface of problematic microorganisms, bringing us closer to the all-in-one healing machine imagined in the futuristic world of Elysium.

Image Credit: Wyss Institute/Vimeo




Assertiveness [695]

by System Administrator - Tuesday, 5 August 2014, 21:51

10 tips for being assertive while staying true to yourself


Assertiveness is usually defined as the ability to express opinions, feelings, attitudes and desires, and to claim one's own rights, at the right time, without excessive anxiety, and in a way that does not infringe on the rights of others.

Popular wisdom says that assertive people get ahead. They say what they think, ask for the resources they need, express their desires and feelings, and don't take no for an answer. But if you are not an assertive person, don't worry: you can learn to be assertive, ask for what you need and get what you want, all while staying true to yourself:

1. Start small. If the idea of being assertive makes you feel especially uneasy or insecure, start with low-risk situations. For example, if you order a hamburger and the waiter brings you grilled salmon, point out the mistake and send it back. If you go shopping with your partner and are trying to decide on a place to eat, state your opinion about where to go.

Once you feel comfortable in these low-risk situations, start raising the difficulty little by little.

2. Start saying no. On the road to becoming more assertive, NO is your best companion. You should say no more often. It is possible to be firm and decisive with a NO while remaining considerate. At first, saying no may make you anxious, but in time you will come to feel good about it, and quite liberated.

Some people will probably feel disappointed by this new situation. But remember that as long as you express your needs in a considerate way, you are in no way responsible for their reaction.

3. Be simple and direct. When you are asserting yourself, less is more. Make your requests simply and directly. There is no need for elaborate explanations (see below). It is enough to say politely what you think, feel or want.

4. Use "I" statements. When making a request or expressing disapproval, speak in the first person. Instead of saying, "You are so inconsiderate. You have no idea how hard my day has been. Why are you asking me to do all these chores?", say, "I'm exhausted today. I see you want me to do all these things, but I won't be able to get to them until tomorrow."

5. Don't apologize for expressing a need or desire. Unless you are asking for something plainly unreasonable, there is no reason to feel guilty or ashamed about expressing a need or desire. So stop apologizing when you ask for something. Just ask politely and wait to see how the other person responds.

6. Use body language and tone of voice. Look confident when making a request or stating a preference. Standing tall, leaning in slightly, smiling or keeping a neutral facial expression, and looking the person in the eye all convey confidence. Also be sure to speak clearly and loudly enough.

7. You don't have to justify or explain your opinion. When you make a decision or give an opinion that others disagree with, one way they will try to exert control over you is to demand a justification for your choice, opinion or behavior. If you cannot produce a good enough reason, they assume you must go along with what they want.

Non-assertive people, with their need to please, feel obliged to give an explanation or justification for every choice they make, even when the other person hasn't asked for one. They want to be sure everyone agrees with their choices, and in doing so they are effectively asking permission to live their own lives.

8. Be persistent. Sometimes you will face situations in which your requests initially get no response. Don't just tell yourself, "At least I tried." Often, to be treated fairly, you have to be persistent. For example, if your flight was cancelled, keep asking about other options, such as being transferred to another airline, so you can reach your destination on time.

9. Stay calm. If someone disagrees with or disapproves of your choice, opinion or request, don't get angry or defensive. It is better to look for a constructive response, or to decide to avoid that person in future situations.

10. Choose your battles. A common mistake on the road to becoming more assertive is trying to be firm all the time. Assertiveness is situational and contextual. There may be cases where being assertive gets you nowhere, and taking a more aggressive or passive stance is the better option.

Sometimes it is certainly necessary to hide your feelings. However, learning to express your opinions and, most importantly, to respect the validity of those opinions and desires, will make you a more confident person. The outcome of an assertive action may be exactly what you wanted, or perhaps a compromise, or maybe a refusal; but whatever the outcome, it will leave you feeling closer to being in control of your own life.



by System Administrator - Sunday, 31 August 2014, 03:21


Author: María Teresa Vallejo Laso

Have you ever stopped to think about how you react when you interact with someone for the first time?
Why does that person put you off, or win you over, at first sight?
Do you feel surprised?
Perhaps uncomfortable?
Will you respond to that person, or will they pass unnoticed by you?
Did you know that you arrive at all of these answers in just a few seconds?


1. Recognizing emotions: the first thing we do is assess the person's mood, which we work out from their face and from the body language we observe in gestures, movements, glances, posture, and so on. Our response may thus vary depending on whether we believe the person is distressed, happy or sad.

2. We filter the enormous amount of data reaching us about the person in question and reduce its complexity.

3. We summarize the important information we have about the person approaching us, and omit and forget many other details. For example, if we like their way of speaking, their clothing and the content of their conversation, we assign them an attribute (for example, "a pleasant person").

4. Next, we store the information we are receiving in memory, relate it to other information we already hold from earlier experiences, retrieve it, and apply it to the case at hand.

5. Then we try to go beyond the information obtained, in order to predict future events and thereby avoid or reduce surprise.

6. We organize the information we have and create categories to classify the person's behavior, appearance and other informative elements. We may categorize by physical attractiveness, personality, geographic origin, university degree, political ideology, and so on, either in a simple categorical system (e.g. friend vs. enemy, attractive vs. unattractive) or in a more complex one.

7. We look for the invariant elements of the stimuli we perceive, since aspects of behavior that strike us as superficial or unstable are of no interest to us.

8. The stimuli we perceive pass into our minds through a sieve. There we interpret them and, based on that interpretation, assign them a meaning. If we see someone helping an elderly person cross the street, we store that perception in memory along with the interpretation that this person is kind and helps others.

9. We try to discover what the person we are perceiving is really like, or what their true intentions are, since we all know that the goals and desires of the perceived person influence the information they present about themselves. That, combined with the ambiguity of much of the information, leads us to engage in an active process of getting to know them better.

10. We make a series of inferences. Since person perception involves the self, and since other people are similar to us, we can all imagine how a person feels when they are sad, when they fail an exam or when they receive good news, because we have lived through those experiences or similar ones.

11. Person perception usually takes place in interactions that are dynamic in character; that is, when we perceive another person, we are being perceived at the same time. Our presence, the feeling of being observed, or the context may lead the other person to manage the impression they want to make on us, presenting or emphasizing certain characteristics and omitting others. Moreover, our expectations or perceptions of the person we are perceiving influence our behavior toward them; that behavior in turn may influence the response the perceived person gives, closing a kind of vicious circle.


1. We infer psychological characteristics from the person's behavior, as well as from other attributes of the person observed. (For example: the person is alone, physically attractive, intelligent, from another city, a science lover, etc.)

2. We organize these inferences into a coherent impression. Following the example, the halo effect could come into play. This effect appears when a positive trait tends to carry other positive traits along with it, and a negative trait other negative qualities. In this way, the informative elements are organized as a whole in which each trait affects and is affected by all the others, generating a dynamic impression.

3. We combine all these elements, since in every impression, although all the traits relate to one another, some have a greater impact than others, serving as the binding elements of the impression.

4. We produce an overall image of the person. When we perceive others, we form global, unitary impressions of each person. However, the information we receive arrives fragmented into small informational pieces of very diverse kinds.

5. If some elements strike us as contradictory or mutually inconsistent, we resolve the contradictions. When we receive inconsistent information we can do two things. First, we can change the meaning of the characteristics. Second, we can infer new traits that reduce the contradictions. If we know that another person is intelligent, warm and a liar, we might deduce that they are a politician or a diplomat. With both mechanisms the result is the same: the resulting impression is unique and coherent.


Primacy effect:

- It is more likely to occur when perceivers commit themselves in some way to a judgment based on the first information, before receiving the additional information.

- When the first information is clearer, less ambiguous or more relevant to the judgment.

- When the first information is based on the stimulus person and/or the category.

- When the information in general refers to an entity that is not expected to change over time.

Recency effect:

- It occurs when the most recent informational elements carry greater weight.

- It appears when the recent information is easier to remember, or more vivid, than the first information.

- The last adjectives are discounted or ignored to the extent that they are inconsistent with the earlier, predominant information.

- We pay less attention to the last informational elements out of fatigue, or because we consider them less credible or important, perhaps thinking that precisely because they are less important, they were placed last.

When the information we have about a person contains both positive and negative elements, the negative ones weigh more heavily in the impression formed. That is why a negative first impression is harder to change than a positive one: traits carrying a negative evaluation seem easy to confirm and hard to disconfirm, whereas positively evaluated traits are hard to acquire but easy to lose.

What are the reasons for this?

- An egoistic motivation on the perceiver's part has been suggested, since a person with negative traits poses a greater degree of threat.

- Negative information has greater informational value. Given that most of us strive to present a positive image of ourselves, the positive information we supply clearly says little about us as unique, distinctive individuals.

- Since negative evaluations are less common, their impact on impressions is greater.


Self-presentation strategies:

1. Ingratiation: we try to appear attractive to others, wanting to be accepted and liked. This can be achieved, for example, by praising the other person or by agreeing with their opinions and behavior. Essentially, it consists of conforming to the perceiver's expectations.

2. Intimidation: with this strategy, people try to display the power they hold over the other person, by threatening or instilling fear. This tactic tends to occur in relationships that are not voluntary, since in a voluntary relationship the probability that the other will abandon the relationship is high. The perceiver frequently complies with the wishes of the perceived person in order to avoid negative consequences or the emotional upheaval of disagreement.

3. Self-promotion: it consists of displaying one's own skills and abilities while concealing one's defects. Sometimes this tactic gains effectiveness if the individual acknowledges minor flaws, or flaws already known to the perceivers, since this increases credibility. The problem with this tactic is that it is often difficult to make others believe that one has qualities one actually lacks.

4. Arousing in others a sense of moral duty, integrity, or even guilt (for example, when a coworker says to another, "Don't worry, go home; I'll finish the job, even though I'll miss my daughter's birthday"). Or sometimes, as a last resort, people display their weaknesses and their dependence on the other person.

5. A strategy frequently employed in domains involving competence or performance is self-handicapping, which consists of increasing the probability that a possible future failure will be attributed to external factors and a possible success to internal factors.

At times the desire for a favorable self-presentation leads people to associate themselves with the success of others, claiming some share of it. This is what is called basking in the reflected glory of others, that is, feeling pride in the victories or successes of others whom we follow. There is also distancing oneself from the failure of others. For example, among the followers of a sports team the expression "we won" is common after a victory, and "the team lost" after a defeat.

It would be wrong to think that these efforts by individuals to present particular images of themselves are attempts to present a "false" image, one that does not correspond to their deeper, authentic self. Choosing which aspect of our identity we present in a given situation can mean choosing among several equally true aspects of our identity. First, because we are limited by our own reality and cannot achieve everything we want: someone who is not intelligent may try to appear so, but will succeed only up to a point. Second, because sometimes our self gradually becomes what we project, especially when our appearance receives the approval of those around us.

When we perceive another person, we receive information of very diverse kinds:

- Physical appearance: what we perceive first in another person is, most of the time, their physical appearance (which includes not only anatomical features but also clothing and manner of moving). This information is crucial for forming an idea of their mood at that moment (emotion recognition), for knowing which social categories they belong to, and even for forming an idea of the personality traits that characterize them.

- Behavior: what the other person does is also one of the crucial sources of information. At the same time, however, behavior is not a very reliable indicator of the internal states, thoughts, and feelings of the perceived person.

- Personality traits: the personality characteristics of the perceived person are more important than their physical characteristics when a psychological assessment is being made. The reason seems to be that by discovering another person's stable dispositions, we also acquire a certain ability to predict their future behavior.

- Information about relationships (roles, social networks; for example, when we know that someone is a parent).

- The goals and objectives the person pursues (e.g., someone who seeks power), and information about contexts. The importance of each of these different types of content depends largely on the context, on the perceiver's goals, and on the characteristic itself. A very extreme characteristic (for example, a heavily made-up girl) can play a primary role in organizing all the subsequent information.

María Teresa Vallejo Laso

Bibliographic references:

  • Moya, M. Percepción de Personas
  • Echebarría, A. y Villareal, M, La percepción social



Assessing Research Productivity [1046]

de System Administrator - miércoles, 7 de enero de 2015, 14:49

Assessing Research Productivity

A new way of evaluating academics’ research output using easily obtained data

By Ushma S. Neill, Craig B. Thompson, and Donna S. Gibson

It can often be difficult to gauge researcher productivity and impact, but these measures of effectiveness are important for academic institutions and funding sources to consider in allocating limited scientific resources and funding. Much as in the lab, where it is important for the results to be repeatable, developing an algorithm or an impartial process to appraise individual faculty research performance over multiple disciplines can deliver valuable insights for long-term strategic planning. Unfortunately, the development of such evaluation practices remains at an embryonic stage.


Several methods have been proposed to assess productivity and impact, but none can be used in isolation. Beyond assigning a number to an investigator—such as the h-index, the number of a researcher’s publications that have received at least that same number of citations, or a collaboration index, which takes into account a researcher’s relative contributions to his or her publications—there are additional sources of data that should be considered. At our institution, Memorial Sloan Kettering Cancer Center (MSKCC) in New York City, there is an emphasis on letters of recommendation received from external expert peers, funding longevity, excellence in teaching and mentoring, and the depth of a faculty member’s CV. For clinicians, additional assessments of patient load and satisfaction are also taken into consideration by our internal committees evaluating promotions and tenure. Other noted evaluation factors include the number of reviews and editorials an individual has been invited to author; frequency of appearance as first, middle, or senior author in collaborations; the number of different journals in which the researcher has published; media coverage of his or her work; and the number of published but never-cited articles.
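
The h-index definition quoted above translates directly into a few lines of code. This is a minimal illustrative sketch (the function and variable names are my own, not from the article):

```python
def h_index(citations):
    """h-index: the largest h such that the researcher has at least
    h publications with at least h citations each."""
    # Rank papers by citation count (descending); h is the last rank
    # at which a paper's citations still meet or exceed its rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher whose papers were cited [10, 8, 5, 4, 3] times has an
# h-index of 4: four papers with at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```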

Here we propose a new bibliometric method to assess the body of a researcher’s published work, based on relevant information collected from the Scopus database and Journal Citation Reports (JCR). This method does not require intricate programming, and it yields a graphical representation of data to visualize the publication output of researchers from disparate backgrounds at different stages in their careers. We used Scopus to assess citations of research articles published between 2009 and 2014 by five different researchers, and by one retired researcher over the course of his career since 1996, a time during which this individual was a full professor and chair of his department. These six researchers included molecular biologists, an immunologist, an imaging expert, and a clinician, demonstrating that this apparatus could level the playing field across diverse disciplines.


ACROSS DISCIPLINES: A graphical display illustrates the publication productivity and impact of three researchers from disparate fields. The journal’s average impact for the year (gray squares) is compared to the impact of the researcher’s articles (red circles) in the same journal that year. Non-review journals ranked in the top 50 by impact factor, as determined by Journal Citation Reports, are noted in gold. This manner of representing journals equalizes researchers across disciplines, such that the impact of a particular manuscript can be appreciated by seeing whether the author’s red dot is higher or lower than the journal’s gray/gold one.

The metric we used calculates the impact of a research article as its number of citations divided by the publishing journal’s impact factor for that year, divided by the number of years since the article was published. The higher the number, the greater the work’s impact. This value is plotted together with the average impact of all research articles the journal published in that same year (average number of citations for all research articles published that year divided by the journal impact factor for that year divided by the number of years since publication). Publications in journals that rank in the top 50 by impact factor (not including reviews-only journals) are also noted.
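
The calculation described in this paragraph can be sketched as follows. This is a hedged illustration with made-up numbers, not the authors' actual code:

```python
def article_impact(citations, journal_if, years_since_pub):
    """Impact score described in the text: citations, normalized by the
    journal's impact factor for the publication year, per year since
    publication. Higher values indicate greater impact."""
    return citations / journal_if / years_since_pub

def journal_average_impact(all_article_citations, journal_if, years_since_pub):
    """Average impact of all research articles the journal published that
    year, computed the same way, for comparison (the gray/gold points)."""
    mean_citations = sum(all_article_citations) / len(all_article_citations)
    return mean_citations / journal_if / years_since_pub

# Hypothetical numbers: an article cited 60 times, in a journal with a
# 10.0 impact factor that year, published 3 years ago:
score = article_impact(60, 10.0, 3)                     # 2.0
peers = journal_average_impact([30, 20, 10], 10.0, 3)   # ~0.67
print(score > peers)  # the author's "red dot" sits above the journal average
```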


ACROSS AGES: This method of visualizing researchers’ productivity can be a useful tool for comparing scientists at different points in their career.

By developing such a graph for each scientist being evaluated, we get a snapshot of his or her research productivity. Across disciplines, the graphs allow comparison of total output (number of dots) as well as impact, providing answers to the questions: Are the scientists’ manuscripts being cited more than their peers’ in the same journal (red dots above gray)? How many of each researcher’s papers were published in leading scientific journals (gold squares)? The method also allows evaluation of early-career scientists and those who are further along in their careers. (See graphs at right, top.) For young researchers, evaluators can easily see if their trajectory is moving upward; for later-stage scientists, the graphs can give a sense of the productivity of their lab as a whole. This can, in turn, reveal whether their laboratory output matches their allocated institutional resources. While the impact factor may be a flawed measurement, using it as a normalization tool helps to remove the influence of the journal, and one can visualize whether the scientific community reacts to a finding and integrates it into scientific knowledge. This strategy also allows for long-term evaluations, making it easy to appreciate the productivity of an individual, in both impact and volume, over the course of his or her career.


LONG-TERM ANALYSIS: A nearly 20-year stretch (1996–2014) is shown for a newly retired faculty member after a productive research career. Note that this individual did not publish any articles in top 50 non-review journals as determined by Journal Citation Reports impact factors. Although this researcher published several papers before 1996, Scopus has limited reliability for citations prior to that year; therefore the analysis excluded these data.

Assessing research performance is an important part of any evaluation process. While no bibliometric indicators alone can give a picture of collaboration, impact, and productivity, this method may help to buttress other measures of scientific success.

Ushma S. Neill is director of the Office of the President at Memorial Sloan Kettering Cancer Center (MSKCC). Craig B. Thompson is the president and CEO of MSKCC, and Donna S. Gibson is director of library services at the center.


At the Heart of Facebook’s Artificial Intelligence, Human Emotions [1215]

de System Administrator - sábado, 2 de mayo de 2015, 20:56

At the Heart of Facebook’s Artificial Intelligence, Human Emotions

By Amir Mizroch

Yann LeCun, director of AI Research at Facebook and professor of computer science at New York University. | Facebook

Facebook Inc. doesn't yet have an intelligent assistant, like the iPhone's Siri.

But the social-networking company says it's aiming higher, in what has become one of the biggest battles raging between Silicon Valley's behemoths: How to commercialize artificial intelligence.

The once-niche field is aimed at figuring out how computers can make decisions on a level approaching that of human intelligence. Apple Inc.'s Siri, Microsoft Corp.'s Cortana and Google Inc.'s Google Now are all early manifestations. They are voice-recognition services that act as personal assistants on devices, helping users search for information–like finding directions or rating nearby restaurants. All three "learn" from their users, adapting to accents, for instance, and learning from previous searches about users' preferences.

Facebook thinks it can do better.

"Siri and Cortana are very scripted," says Yann LeCun, director of artificial-intelligence research at Facebook, in an interview. "There's only certain things they can talk about and dialogue about. Their knowledge base is fairly limited, and their ability to dialogue is limited," he said. "We're laying the groundwork for how you give common sense to machines."

Apple and Microsoft declined to comment.

Google Executive Chairman Eric Schmidt recently said the company was making progress in image and speech recognition, but admitted at a conference it was a "sore point" at the company that Siri was getting "all the credit."

Facebook's LeCun also sees promise in natural-language processing–machines understanding what is being said in speech in a more sophisticated way than Siri or Cortana. And he said image and video recognition is the "next frontier" at Facebook.

"It's clear that there's going to be a lot of progress in the way that machines can understand images and activities in video; personal interactions in video between people expressing emotions, and things like that," he said. A raised eyebrow might mean many different things in different contexts. After a computer sifts through reams of images of people raising an eyebrow, and what happens before or after, it can start to correlate that action.

The basic theory is that the more images the computer analyzes and correlates, the more precise it becomes, statistically. The goal is to approach the same level of correlation that the human brain makes as it processes images sent from a person's eyes.

"It's not just about looking at your face to determine your emotions, it's about understanding interactions between different people and figuring out if those people are friends, or angry at each other," LeCun said.

French-born LeCun, 55 years old, is one of the world's leading figures in artificial-intelligence research, specifically of a subset of the science called "machine learning," or mathematical algorithms that adjust, and improve, as they receive and analyze new data.

While working at AT&T in the late '80s and '90s, Mr. LeCun developed handwriting-recognition processing that was eventually used by banks to scan and verify checks. Pattern-recognition technology he developed significantly advanced the commercial applications of image and text recognition, and it is being used in search and voice-recognition products and services by Google and Microsoft.

Facebook hired LeCun in late 2013, luring him from New York University, where he remains a part-time professor, shuttling between campus and Facebook's nearby New York offices. LeCun now spends one day a week at NYU, and the rest at Facebook, where he heads the AI research lab. The lab, split between Menlo Park and New York, is currently 40 members strong, much larger than most university AI research departments–which traditionally have done the heavy lifting on AI research.

Facebook's AI research is currently being used in image tagging, predicting which topics will trend, and face recognition. All of these services require algorithms to sift through vast amounts of data, like pictures, written messages, and video, to make calculated decisions about their content and context. Facebook has a big advantage over the university labs that have toiled for decades in the field: it can vacuum up the reams of data required to "teach" machines to make correlations.

Facebook last week said its main social network increased to 1.44 billion monthly users, up from 1.39 billion in the 2014 fourth quarter. The company added that it now has 4 billion video streams every day.

"You can work on a project that may take a few years to develop into something useful, but we all know that if it succeeds it will have a big impact," LeCun said.





Atrocity and Solidarity [644]

de System Administrator - martes, 20 de octubre de 2015, 21:28

Human beings are capable of the most atrocious acts and of the greatest solidarity


Dr. Humberto Lucero recalls his teachers and his native Córdoba with veneration, and says he found in legal medicine the field in which to pursue his greatest passion: understanding human behavior.

Continue reading on the site


Augmented reality aims at big industry [1306]

de System Administrator - domingo, 12 de julio de 2015, 00:02

Chasing Brilliance / An Ars Technica feature

Toss your manual overboard—augmented reality aims at big industry

Papers, diagrams, and checklists would be replaced with intuitive visual tools.

by Lee Hutchinson

Dr. Manhattan manipulates this reactor's components in exploded view. GE is trying to do something similar with an augmented reality maintenance manual. | © Warner Bros.

For better or for worse, augmented reality ("AR") is charging forward in the consumer space—but there’s a place for AR in the industrial world as well. We’re not quite at the point of putting Microsoft HoloLens kits on the heads of roughnecks working out on oil rigs, but when it comes to complex machinery out in remote locations, augmenting what field engineers can see and do can have a tremendous impact on a company’s bottom line.

By way of example, GE is focusing efforts on constructing an extensible "field maintenance manual" intended to be used for industrial equipment. The use case being tested in the labs is with oil and gas; researchers in GE’s Research Center in Brazil are building software that they hope will replace the need to deal with bulky printed maintenance manuals—manuals which have to be kept up to date and which lack any kind of interactivity.

To learn a bit more about augmented reality in industry, Ars spoke with Dr. Camila Nunes, a scientist in charge of software and productivity analytics with GE Brazil. Nunes has an extensive background in oil and gas, having done graduate and postdoctoral work with Petrobras, the largest Brazilian energy corporation (and, indeed, the largest company in the Southern Hemisphere). At GE, Nunes works on bringing to life the interactive field maintenance manual concept.


To elaborate on how GE is using augmented reality in oil and gas, Nunes explained that frequently offshore oil and gas workers are called upon to install "Christmas trees," which are complex assemblies of pipes and valves used for a variety of purposes, including monitoring wells or injecting fluids into them. Owing to the wide range of functions, these are necessarily complex devices, and their installation can take more than a hundred hours under ideal conditions.

Currently, explained Nunes, the installation and servicing of a Christmas tree is done using paper manuals and checklists, and GE is aiming to change that by making things electronic and interactive. The interactive field maintenance manual concept is a multi-pronged beast, with a front end in the field and a back end that contains a tremendous variety of hardware and which can be continually updated as new equipment enters the field or new procedures are devised.

Nunes demonstrated this with a tablet in the augmented reality lab and a small 3D-printed duplicate of a piece of well hardware. The maintenance manual app used the tablet’s camera to figure out what kind of hardware it was looking at, and then was able to track the component as the tablet moved around it. The operator could look up installation procedures and see steps demonstrated in 3D on the parts each step involves, rather than having to refer to static printed diagrams.

What's more, the app enables an operator to take any part of the complex assembly being worked on and "explode" it, expanding it so that its interlinked component parts are visible. This kind of 3D exploded view is superior to a printed page because it can be zoomed and manipulated, and individual parts can be directly addressed in the app, rather than having to refer to additional printed pages.


A 3D-printed model of a well component from the lab, with a QR-like tag for easier recognition by the manual application.

Head-mounted versus hand-held

As things stand, the maintenance manual concept currently exists on tablets—iOS and Android. The back-end where the bulk of the data lives is based on Java Web Services, so when the tablet is connected to the Internet, it can pull up data on any piece of equipment in the entire library. However, in the field—particularly on sea-based drilling platforms—Internet access is never a sure thing. Because of this, the app can be preloaded with data on the pieces of equipment that the operator expects to have to work on. Then, it can function without a network connection.

Using commodity tablet hardware lets GE do things in software that would have been very difficult or impossible a decade ago; in particular, the GPUs of most consumer-grade tablets are good enough that weaving in OpenGL-based 3D renderings on top of the tablet’s live camera feed is trivial. There’s also enough CPU power available to perform image recognition tasks—in the lab, the 3D printed miniature parts had QR-like barcodes on them that the app could use to know what it was pointed at, but in the field, the manual features full image recognition and can identify equipment purely through the tablet’s camera feed.

RAM is an issue—at least for now. Nunes explained that it was relatively easy for the GE developers’ reach to exceed their grasp at first, as the tablets’ graphical and computational capabilities led the developers to try to do multiple things at once without regard for memory management. They’ve learned from their earlier efforts, though.

This is going to be a lot more important going forward, because GE wants to eventually transition the current tablet-based app into a head-mounted display, moving from tablet augmented reality to actual in-your-field-of-view augmented reality. Nunes couldn’t disclose which vendors GE was working with to make this happen, but she did say that nothing on the market today was quite good enough to make the concept work as well as it does in tablet form.


Any augmented reality app intended for use in offshore platforms (like P-51, pictured here) must be able to work in the absence of an Internet connection.

Ahead of the class

Moving away from paper and into an augmented reality maintenance manual has another tremendous benefit: if used properly, it can shorten the training cycle of equipment technicians.

The goal here would be to actually put the exact same kind of AR manual in technician training classrooms as in the field. Prospective technicians can learn to service equipment not just by reading books and working with mock-ups, but by actually walking through virtual procedures—the same procedures they’d use out at sea on a platform. Further, instructors can "share screens" with them and assist with troubleshooting—just as remote experts could help when out in the field (network connectivity permitting).

Augmented reality in industry faces a different set of challenges than it does in the consumer world. Where we're concerned primarily with consumer AR being intrusive and distracting to the everyday person doing everyday things, industrial AR is a lot like the military's use of the technology: it's acceptable to implement it with the expectation that the user will have some relevant training and job skills. Plus, as mentioned, AR can be incorporated into the actual job training, getting technicians used to using it throughout their entire job.

The AR maintenance manual concept looks like a win—and the applications are clearly there beyond oil and gas. Any kind of industrial setting with complex machinery that requires installation and maintenance checklists could benefit; GE hopes to be able to roll the technology out in the near term.

Lee Hutchinson / Lee is the Senior Reviews Editor at Ars and is responsible for the product news and reviews section. He also knows stuff about enterprise storage, security, and manned space flight. Lee is based in Houston, TX.




Augmented reality gets to work—and gets past the “Glassholes” [1177]

de System Administrator - domingo, 29 de marzo de 2015, 18:35

Augmented reality gets to work—and gets past the “Glassholes”

Mobile tech, Internet of things, cloud delivers info to workers' fingers, eyeballs.


Aural History [1377]

de System Administrator - martes, 1 de septiembre de 2015, 20:36

Aural History

The form and function of the ears of modern land vertebrates cannot be understood without knowing how they evolved.

By Geoffrey A. Manley

Unlike eyes, which are generally instantly recognizable, ears differ greatly in their appearance throughout the animal kingdom. Some hearing structures may not be visible at all. For example, camouflaged in the barn owl’s facial ruff—a rim of short, brown feathers surrounding the bird’s white face—are clusters of stiff feathers that act as external ears on either side of its head. These feather structures funnel sound collected by two concave facial disks to the ear canal openings, increasing the bird’s hearing sensitivity by 20 decibels—approximately the difference between normal conversation and shouting. Similar increases in sensitivity result from the large and often mobile external structures, or pinnae, of many mammals, such as cats and bats. Internally, the differences among hearing organs are even more dramatic.
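
As a side note on the arithmetic: decibels are logarithmic, so the 20-decibel gain quoted here corresponds to roughly a tenfold increase in effective sound-pressure amplitude. A quick sketch using the standard sound-pressure formula (the function name and example numbers are my own, for illustration only):

```python
import math

def pressure_ratio_to_db(ratio):
    """Convert a sound-pressure amplitude ratio to decibels (dB SPL uses
    20*log10 for pressure; power/intensity ratios use 10*log10)."""
    return 20 * math.log10(ratio)

# The barn owl's facial ruff adds ~20 dB of sensitivity, i.e. about a
# tenfold boost in effective sound-pressure amplitude at the ear:
print(pressure_ratio_to_db(10))  # → 20.0
```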

Although fish can hear, only amphibians and true land vertebrates—including the aquatic species that descended from them, such as whales and pinnipeds—have dedicated hearing organs. In land vertebrates belonging to the group Amniota, including lizards, birds, and mammals, sound usually enters through an external canal and impinges on an eardrum that is connected through middle-ear bones to the inner ear. There, hundreds or thousands of sensory hair cells are spread along an elongated membrane that acts as a spectral analyzer, with the result that each local group of hair cells responds best to a certain range of pitches, or sound frequencies. The hair cells then feed this information into afferent nerve fibers that carry the information to the brain. (See “Human Hearing: A Primer.”)


Together, these hair cells and nerve fibers encode a wide range of sounds that enter the ear on that side of the head. Two ears complete the picture, allowing animals’ brains to localize the source of the sounds they hear by comparing the two inputs. Although it seems obvious that the ability to process nearby sounds would be enormously useful, modern amniote ears in fact arose quite late in evolutionary history, and to a large extent independently in different lineages. As a result, external, middle, and inner ears of various amniotes are characteristically different.1 New paleontological studies and comparative research on hearing organs have revealed the remarkable history of this unexpected diversity of ears.

Divergence from a common origin

Amniote vertebrates comprise three lineages of extant groups that diverged roughly 300 million years ago: the lepidosaurs, which include lizards and snakes; the archosaurs, which include crocodilians and birds; and mammals, which include egg-laying, pouched, and placental mammals. By comparing the skulls of the extinct common ancestors of these three lineages, as well as the ears of the most basal modern amniotes, researchers have concluded that ancestral amniotes had a small (perhaps less than 1 millimeter in length) but dedicated hearing organ: a sensory epithelium called a basilar papilla, with perhaps a few hundred sensory hair cells supported by a thin basilar membrane that is freely suspended in fluid. These rudimentary structures evolved from the hair cells of vestibular organs, which help organisms maintain their balance by responding to physical input, such as head rotation or gravity. Initially, the hearing organ only responded to low-frequency sounds. On their apical surface, all hair cells have tight tufts or bundles of large, hairlike villi known as stereovilli (or, more commonly, stereocilia, even though they are not true cilia), which give hair cells their name. Between these stereovilli are proteinaceous links, most of which are closely coupled to sensory transduction channels that respond to a tilting of the stereovilli bundles caused by sound waves.

The amniote hearing organ evolved as a separate group of hair cells that lay between two existing vestibular epithelia. Low-frequency vestibular hair cells became specialized to transduce higher frequencies, requiring much faster response rates. This change is attributable in part to modifications in the ion channels of the cell membrane, such that each cell is “electrically tuned” to a particular frequency, a phenomenon still observed in some modern amniote ears. Moreover, the early evolution of these dedicated auditory organs in land vertebrates led to the loss of the heavy otolithic membrane that overlies the hair-cell bundles of vestibular organs and is responsible for their slow responses. What remains is the watery macromolecular gel known as the tectorial membrane, which assures that local groups of hair cells move synchronously, resulting in greater sensitivity.

Good high-frequency hearing did not exist from the start, however. For a period of at least 50 million years after amniotes arose, the three main lineages were most likely quite hard of hearing. They had not yet evolved any mechanism for absorbing sound energy from air; they lacked the middle ear and eardrum that are vital for the function of modern hearing organs. As such, ancestral amniotes most likely perceived only sounds of relatively low frequency and high amplitude that reached the inner ear via the limbs or, if the skull were rested on the ground, through the tissues of the head. It is unclear what kind of stimuli could have existed that would have led to the retention of such hearing organs for such a long time.

The magnificent middle ear

 CONVERGING ON THE EAR: Starting around 250 million years ago, the three amniote lineages—lepidosaurs (lizards and snakes), archosaurs (crocodilians and birds), and mammals—separately evolved a tympanic middle ear, followed by evolution of the inner ear, both of which served to increase hearing sensitivity. Despite the independent origin of hearing structures in the three lineages, the outcomes were functionally quite similar, serving as a remarkable example of convergent evolution.

During the Triassic period, some 250 to 200 million years ago, a truly remarkable thing happened. Independently, but within just 20 million to 30 million years of one another, all three amniote lineages evolved a tympanic middle ear from parts of the skull and the jaws.2

The tympanic middle ear is the assemblage of tiny bones that connects at one end to an eardrum and at the other end to the oval window, an aperture in the bone of the inner ear. Despite the temporal coincidence in the evolution of these structures in the three amniote lineages and the functional similarities of the adaptations, the groups were by this time so far separated that the middle ears evolved from different structures into two different configurations. The single middle-ear bone, the columella, of archosaurs and lepidosaurs derived from the hyomandibular, a bone that earlier had formed a large strut connecting the braincase to the outer skull. In modern representatives, the columella is long and thin, with several, usually cartilaginous extensions known as the extracolumella. One of these, the “inferior process,” connects the inner surface of the eardrum and the columella, which then connects to the footplate that covers the oval window of the inner ear. This two-part system forms a lever that, together with the pressure increase incurred by transmitting from the much larger eardrum to the footplate, greatly magnifies sound entering the inner ear.

In the mammals of the Triassic, the equivalent events were more complex, but the functional result was remarkably similar. Mammal ancestors reduced the number of bones in the lower jaw from seven to one and, in the process, formed a new jaw joint. Initially, the old and new jaw structures existed in parallel, but over time the old joint moved towards the rear of the head. This event, which at any other time would likely have led to the complete loss of the old joint bones, occurred simultaneously with the origin of the mammalian tympanic middle ear. Older paleontological and newer developmental evidence from Shigeru Kuratani’s lab at RIKEN in Japan indicates that the mammalian eardrum evolved at a lower position on the skull relative to that of the other amniotes, a position outside the old jaw joint.3 In time, the bones of this old joint, together with the hyomandibula, became the three bony ossicles (malleus, incus, and stapes) of the new middle ear. Like the middle ear of archosaurs and lepidosaurs, these ossicles form a lever system that, along with the large area difference between eardrum and footplate, greatly magnifies sound input.
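
The pressure gain produced by this arrangement—a large eardrum driving a much smaller footplate, boosted further by the ossicular lever—can be estimated with a couple of lines of arithmetic. The sketch below uses common textbook ballpark values for the human ear (roughly 55 mm² effective eardrum area, 3.2 mm² footplate area, lever ratio ~1.3); these figures are illustrative assumptions, not measurements from this article:

```python
import math

def middle_ear_gain_db(eardrum_area_mm2, footplate_area_mm2, lever_ratio):
    """Estimate the pressure gain of a tympanic middle ear, in decibels.

    Force collected over the large eardrum is concentrated onto the much
    smaller footplate (the area ratio), and the ossicular lever adds a
    further mechanical advantage (the lever ratio).
    """
    pressure_ratio = (eardrum_area_mm2 / footplate_area_mm2) * lever_ratio
    return 20.0 * math.log10(pressure_ratio)

# Illustrative human values: ~55 mm^2 eardrum, ~3.2 mm^2 footplate, lever ~1.3
gain = middle_ear_gain_db(55.0, 3.2, 1.3)  # roughly 27 dB
```

With these assumed values the gain comes out near 27 dB, which conveys why losing the eardrum or ossicular chain is so costly for sensitivity.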

Thus, remarkably, these complex events led independently to all modern amniotes possessing a middle ear that, at frequencies below 10 kHz, works equally effectively despite the diverse structures and origins. There is also evidence that the three-ossicle mammalian middle ear itself evolved at least twice—in egg-laying mammals such as the platypus, and in therians, which include marsupials and placentals—with similar outcomes.

Inner-ear evolution


PITCH PERFECT: The hearing organs of amniotes are organized tonotopically, with hair cells sensitive to high frequencies at the basal end of the papilla, grading into low-frequency hair cells at the apical end.


The evolution of tympanic middle ears kick-started the evolution of modern inner ears, where sound waves are converted into the electrical signals that are sent to the brain. The inner ear is least developed in the lepidosaurs, most of which retained a relatively small auditory papilla, in some just a few hundred micrometers long. Many lepidosaurs, predominantly diurnal species, also lost their eardrum. Snakes reduced their middle ear, limiting their hearing to frequencies less than 1 kHz, about two octaves above middle C. (For comparison, humans can hear sounds up to about 15 or 16 kHz.) Clearly, hearing was not under strong selective pressure in this group. There are a few exceptions, however. In geckos, for example, which are largely nocturnal, the papillar structure shows unique specializations, accompanied by high sensitivity and strong frequency selectivity. Indeed, the frequency selectivity of gecko auditory nerve fibers exceeds that of many mammals.

One part of the inner ear that did improve in lizards (but not in snakes) is the hair cells, with the papillae developing different areas occupied by two structural types of these sound-responsive cells. One of these hair cell groups responds to sounds below 1 kHz and perhaps corresponds to the ancestral version. The higher-frequency hair cells have a more specialized structure, particularly with regard to the size and height of the stereovilli, with bundle heights and stereovillus numbers varying consistently along the papilla’s length. Taller bundles with fewer stereovilli, which are much less stiff and therefore respond best to low frequencies, are found at one end of the membrane, while shorter, thicker bundles with more stereovilli that respond best to higher frequencies are found at the other end—a frequency distribution known as a tonotopic organization. Still, with the exception of one group of geckos, lizard hearing is limited to below 5 to 8 kHz.

In contrast to the relatively rudimentary lepidosaur inner ear, the auditory papilla of archosaurs (birds, crocodiles, and their relatives) evolved much greater length. Owls, highly proficient nocturnal hunters, boast the longest archosaur papilla, measuring more than 10 millimeters and containing many thousands of hair cells. As in lizards, archosaur hair cells show strong tonotopic organization, with a gradual change in the diameter and height of the stereovillar bundles contributing to the gradually changing frequency sensitivity along the papilla. In addition, the hair cells are divided along and across the basilar membrane, with tall hair cells (THCs) resting on the inner side and the apical end, most distant from the middle ear, grading into short hair cells (SHCs) on the outer side and at the basal end. Interestingly, many SHCs completely lack afferent innervation, which is the only known case of sensory cells lacking a connection to the brain. Instead of transmitting sensory information to the brain, these hair cells likely amplify the signal received by the inner ear. Despite the more complex anatomy, however, bird hearing is also generally limited to between 5 and 8 kHz, with the exception of some owls, which can hear up to 12 kHz.

The mammalian papilla, called the organ of Corti, also evolved to be larger—generally, but not always, longer than those of birds—but the extension in length varies in different lineages.4 Mammalian papillae also have a unique cellular arrangement. The papillae of modern egg-laying monotremes, which likely resemble those of the earliest mammals, include two groups of hair cells separated by numerous supporting pillar cells that form the tunnel of Corti. In any given cross section, there are approximately five inner hair cells (IHCs) on the inner side of the pillar cells, closer to the auditory nerve, and eight outer hair cells (OHCs) on the outer side. In therian mammals (marsupials and placentals), the numbers of each cell group have been much reduced, with only two pillar cells forming the tunnel in any given cross-section, and generally just a single IHC and three or four OHCs, though the functional consequences of this reduction remain unclear. About 90 percent of afferent fibers innervate IHCs, while only 10 percent or fewer innervate OHCs, despite the fact that OHCs account for some 80 percent of all hair cells. As with bird SHCs that lack afferent innervation, there are indications that the main function of OHCs is to amplify the physical sound signal at very low sound-pressure levels.

Therian mammals also evolved another key hearing adaptation: the cochlea. Shortly before marsupial and placental lineages diverged, the elongating hearing organ, which had always been curved, reached full circle. The only way to further increase its length was to form more than one full coil, a state that was reached roughly 120 million years ago. The result is hearing organs with 1.5 to 4 coils and lengths from 7 millimeters (mouse) to 75 millimeters (blue whale). Hearing ranges also diverged, partly depending on the size of the animal (larger mammals tend to have lower upper-frequency limits), but with a number of remarkable specializations, as expected in a lineage that radiated greatly during several evolutionary episodes.

As a result of these adaptations, most mammals have an upper frequency-response limit that well exceeds those of lepidosaurs and archosaurs. Human hearing extends to frequencies of about 15 kHz; a guinea pig can hear sounds up to about 45 kHz; and in the extreme cases of many bats and toothed whales, hearing extends into ultrasonic frequencies, sometimes as high as 180 kHz, allowing these animals to echolocate in air and water. This impressive increase in frequency limits is due to an extremely stiff middle ear, as well as a stiff cochlea. During early therian evolution, the bone of the canal surrounding the soft tissues invaded the supporting ridges of the basilar membrane, creating stiff laminae. Such bony ridges were retained in species perceiving ultrasonic frequencies, but tended to be reduced and replaced by softer connective-tissue supports in those with lower-frequency limits, such as humans.
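
The tonotopic layout of the mammalian cochlea described above is often summarized by Greenwood's empirical position-frequency function. The sketch below uses the published constants for the human cochlea; note that the formula's nominal ~20 kHz basal limit refers to the young-adult ceiling, somewhat above the 15–16 kHz typical figure cited earlier, and other species require different constants:

```python
def greenwood_frequency_hz(x, A=165.4, a=2.1, k=0.88):
    """Greenwood's empirical position-frequency map for the human cochlea.

    x is the fractional distance along the basilar membrane from the apex
    (0.0) to the base (1.0); returns the best frequency in Hz at that point.
    Defaults are the published human fit; other species use different A, a, k.
    """
    return A * (10 ** (a * x) - k)

apex = greenwood_frequency_hz(0.0)  # low frequencies at the apex (~20 Hz)
base = greenwood_frequency_hz(1.0)  # high frequencies at the base (~20 kHz)
```

The exponential form captures why each octave occupies a roughly constant length of membrane, from low-frequency hair cells at the apex to high-frequency ones at the base.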

Amplification within the ear


HAIRS OF THE EAR: Rows of inner-ear hair cells have villous bundles (blue) on their apical surface that convert sound waves to nervous signals sent to the brain.


In addition to the specialized structures of the middle and inner ears of amniotes that served to greatly increase hearing sensitivity, the hair cells themselves can produce active movements that further amplify sound stimuli. The evolutionarily oldest such active mechanism was discovered in the late 1980s by Jim Hudspeth’s group, then at the University of California, San Francisco, School of Medicine, working with frogs,5 and Andrew Crawford and Robert Fettiplace, then at the University of Cambridge, working with turtles.6 The amplification mechanism, called the active bundle mechanism, probably evolved in the ancestors of vertebrates and helped overcome the viscous forces of the surrounding fluids, which resist movement. When sound stimuli move the hair-cell bundle and thus open transduction channels to admit potassium ions, some calcium ions also enter the cell. These calcium ions bind to and influence the open transduction channels, increasing the speed with which these channels close. Such closing forces are exerted in phase with the incoming sound waves, increasing the distance that the hair cells move in response, and thereby increasing their sensitivity. It is likely that this mechanism operates in all vertebrate hair cells.5 In lizards, my group provided evidence that this bundle mechanism really does operate in the living animal.7

In 1985, a second mechanism of hair cell–driven sound amplification was discovered in mammalian OHCs by Bill Brownell’s group, then at the University of Florida School of Medicine. Brownell and his colleagues showed that mammalian OHCs, but not IHCs, changed their length very rapidly in phase with the signal if exposed to an alternating electrical field.8 Such fields occur when hair cells respond to sound. Subsequent experiments showed that the change in cell length is due to changes in the molecular configuration of a protein, later named prestin, which occurs in high density along the lateral cell membrane of OHCs. In mammals, the force produced by the OHCs is so strong that the entire organ of Corti, which includes all cell types that surround the hair cells and the basilar membrane itself, is driven in an up-and-down motion. This movement can amplify sounds by at least 40 dB, allowing very quiet noises to be detected. There is evidence for the independent evolution of specific molecular configurations of prestins that allow for the amplification of very high ultrasonic frequencies in bats and whales.9

Bird ears also appear to produce active forces that amplify sound. The SHCs have bundles comprising up to 300 stereovilli (about three times as many as the bundles of mammalian OHCs),10 and the movement of these bundles probably drives the movement of THCs indirectly via the tectorial membrane. Also, very recent data from the lab of Fettiplace, now at the University of Wisconsin–Madison, suggests that in birds, prestin (albeit in a different molecular form) may work in the plane across the hearing organ (i.e., not up and down as in mammals), perhaps reinforcing the influence of the bundle active mechanism on the THCs via the tectorial membrane.11

Remarkable convergence

Three hundred million years of evolution have resulted in a fascinating variety of ear configurations that, despite their structural diversity, show remarkably similar physiological responses. There are hardly any differences in sensitivity between the hearing of endothermal birds and mammals, and the frequency selectivity of responses is essentially the same in most lizards, birds, and mammals. The combined research efforts of paleontologists, anatomists, physiologists, and developmental biologists over several decades have clarified the major evolutionary steps in all lineages that modified the malleable middle and inner ears into their present-day kaleidoscopic variety of form, yet a surprising consensus in their function.

Geoffrey A. Manley is a retired professor from the Institute of Zoology at the Technical University in Munich, Germany. He is currently a guest scientist in the laboratory of his wife, Christine Köppl, at Oldenburg University in Germany.


  1. G.A. Manley, C. Köppl, “Phylogenetic development of the cochlea and its innervation,” Curr Opin Neurobiol, 8:468-74, 1998.
  2. J.A. Clack, “Patterns and processes in the early evolution of the tetrapod ear,” J Neurobiol, 53:251-64, 2002.
  3. T. Kitazawa et al., “Developmental genetic bases behind the independent origin of the tympanic membrane in mammals and diapsids,” Nat Commun, 6:6853, 2015.
  4. G.A. Manley, “Evolutionary paths to mammalian cochleae,” JARO, 13:733-43, 2012.
  5. A.J. Hudspeth, “How the ear’s works work: Mechanoelectrical transduction and amplification by hair cells,” C R Biol, 328:155-62, 2005.
  6. A.C. Crawford, R. Fettiplace, “The mechanical properties of ciliary bundles of turtle cochlear hair cells,” J Physiol, 364:359-79, 1985.
  7. G.A. Manley et al., “In vivo evidence for a cochlear amplifier in the hair-cell bundle of lizards,” PNAS, 98:2826-31, 2001.
  8. W.E. Brownell et al., “Evoked mechanical responses of isolated cochlear outer hair cells,” Science, 227:194-96, 1985.
  9. Y. Liu et al., “Convergent sequence evolution between echolocating bats and dolphins,” Curr Biol, 20:R53-R54, 2010.
  10. C. Köppl et al., “Big and powerful: A model of the contribution of bundle motility to mechanical amplification in hair cells of the bird basilar papilla,” in Concepts and Challenges in the Biophysics of Hearing, ed. N.P. Cooper, D.T. Kemp (Singapore: World Scientific, 2009), 444-50.
  11. M. Beurg et al., “A prestin motor in chicken auditory hair cells: Active force generation in a nonmammalian species,” Neuron, 79:69-81, 2013.
  12. C. Bergevin et al., “Salient features of otoacoustic emissions are common across tetrapod groups and suggest shared properties of generation mechanisms,” PNAS, 112:3362-67, 2015.


Autoaceptación | Self-Acceptance [694]

by System Administrator - Tuesday, August 5, 2014, 21:45

Self-Acceptance: The Importance of Admitting Our Mistakes


When we come face to face with our failures, it is hard not to deny the consequences of our mistakes, and very often we make our problems worse through the very behaviors we had been trying so hard to avoid.

According to a new study published in the Journal of Consumer Research, practicing self-acceptance may be the best way to raise our self-esteem and avoid self-deprecating behavior and its consequences.

The study's authors, Soo Kim and David Gal, describe the phenomenon as follows: "Consider a person who has just retired and realizes that their income will no longer be sufficient. That person is very likely to feel the urge to buy expensive things or to eat out more often than before, as a way of avoiding their problems. We introduce the idea that practicing self-acceptance is a more effective alternative to this kind of self-destructive behavior."

After running five different experiments, the authors confirmed that practicing self-acceptance reduces the likelihood of engaging in harmful behaviors and increases the likelihood of working on improving other, alternative skills.

In one of the studies, participants read about the concept of self-acceptance and were then asked to choose between a luxury magazine and a self-help and personal-growth book. As predicted, participants were more likely to pick the book over the magazine, indicating a desire to improve their overall well-being.

While the benefits of self-acceptance can help raise a person's self-esteem as a means of promoting well-being, the authors caution against unearned praise, which can instill unrealistic beliefs and expectations about one's abilities.

"When a person's beliefs and expectations are undermined, their self-esteem can be severely damaged. Unlike self-esteem, self-acceptance, which is by nature unconditional, can better prepare someone for the inevitable failures and is ultimately a less volatile alternative for promoting well-being," the authors concluded.

The article closes with an interesting paper by M.B. González-Fuentes and Patricia Andrade (Universidad Autónoma de México) which, under the title "Autoaceptación como factor de riesgo para el intento de suicidio en adolescentes" (self-acceptance as a risk factor for attempted suicide in adolescents), analyzes the influence of this variable in a sample of students aged 14 to 20, with notable results:

"The results obtained on the role of self-acceptance as a factor associated with attempted suicide are interesting because they show that the self-acceptance predictors of suicide attempts differed depending on the adolescents' sex."



Autoconfianza | Self-Confidence [696]

by System Administrator - Tuesday, August 5, 2014, 22:14

Self-Confidence: Keys to Improving It


By Mireia Navarro

Self-confidence can be defined as "trust in oneself," and it matters far more than it may seem at first glance: on it depends the sense of usefulness we attribute to ourselves in relation to the world around us.

Our level of self-confidence determines how we see ourselves, which in turn shapes our performance and our activities.

For example: how many times have we thought about making a recipe but never tried it because it looked difficult and we did not consider ourselves capable? How many times have we assumed we would not pass an exam? These are just two examples of the many everyday situations that depend on self-confidence.

This level of self-confidence is shaped by many things: our past, our present, and our expectations for the future; the experiences we have lived through and the lessons we have drawn from them; our personality...

Many factors influence our level of self-confidence, yet it is a malleable construct that can be improved. Good training is enough to raise our level of self-confidence and turn it into successes.

A low level of confidence is just as maladaptive as an excessive one. Moderate levels of self-confidence allow us to recognize our limitations, which is necessary to keep the flame of learning from experience alive. These moderate levels are what allow our personality to develop optimally.

How to improve our self-confidence

What can we do to improve our self-confidence? Just follow these steps with persistence:

-Know yourself. Take the time to notice the emotions you experience, and to understand them. This step is basic and necessary for pinpointing which factors drive your low self-confidence. Only with healthy self-esteem can healthy self-confidence be achieved.

-Build a world around you in which you are truly comfortable. Wear clothes that inspire confidence in you. Surround yourself with people who believe in you. Turn your negative self-talk into positive self-talk (for example, replace "I don't think I can do it" with "I'm going to try").

-For every negative trait of yours that comes to mind, think of a positive one.

-Attempt what you do not believe yourself capable of doing, but understand that failures are necessary in order to improve. Use your first attempts to extract what went well. Do not be discouraged if it does not work out the first time; it will show you what you need to improve. The second time will go better. After a few more tries, it may well succeed!

-Reward yourself for every success, however small. It will help you keep going. The reward should be in proportion to the success achieved: the greater the success, the greater the reward.

-Do not compare yourself to anyone. Other people have qualities you lack, but you also have quite a few that others do not.

-Be consistent. Do not get discouraged. Change happens little by little.

-If it is harder than you expected and you are about to give up, consult a professional, who can smooth the path for you.

Improving self-confidence is a matter of willingness and persistence. The path, from the first achievements onward, is very rewarding; it teaches us that the limitations we once had were self-imposed for no reason. It is a source of motivation that can change us completely. Starting to experience these changes is up to us.

Mireia Navarro holds a degree in Psychology (Universitat de València) and a master's in Psychology and Family Management, with experience in clinical and educational psychology, in-home psychology services, and intervention in natural settings.



Autoestima: Tres Estados | Self-Esteem: Three States [627]

by System Administrator - Wednesday, October 29, 2014, 14:14


by Maria Chevallier

This classification of the three states of self-esteem was proposed by Martín Ross in the book "El Mapa de la Autoestima" (2007, ISBN 978-84-686-3669-6; 2013, ISBN 978-9870267737) and has been gaining traction ever since; today it is widely used in work addressing the subject of self-esteem.

Continue reading on the site


Autoimmune Diseases [825]

by System Administrator - Wednesday, September 3, 2014, 22:39

Scientists discover how to ‘switch off’ autoimmune diseases

Aggressor cells are targeted by treatment causing them to convert to protector cells. Gene expression changes gradually during treatment, as illustrated by the colour changes in these heat maps. Credit: Dr Bronwen Burton

Scientists have made an important breakthrough in the fight against debilitating autoimmune diseases such as multiple sclerosis by revealing how to stop cells attacking healthy body tissue.

Rather than the body's immune system destroying its own tissue by mistake, researchers at the University of Bristol have discovered how cells convert from being aggressive to actually protecting against disease.

The study, funded by the Wellcome Trust, is published in Nature Communications.

It's hoped this latest insight will lead to the widespread use of antigen-specific immunotherapy as a treatment for many autoimmune disorders, including multiple sclerosis (MS), type 1 diabetes, Graves' disease and systemic lupus erythematosus (SLE).

MS alone affects around 100,000 people in the UK and 2.5 million people worldwide.

Scientists were able to selectively target the cells that cause autoimmune disease by dampening down their aggression against the body's own tissues while converting them into cells capable of protecting against disease.

This type of conversion has been previously applied to allergies, known as 'allergic desensitisation', but its application to autoimmune diseases has only been appreciated recently.

The Bristol group has now revealed how the administration of fragments of the proteins that are normally the target for attack leads to correction of the autoimmune response.

Most importantly, their work reveals that effective treatment is achieved by gradually increasing the dose of antigenic fragment injected.
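
The dose-escalation idea above can be sketched as a simple geometric schedule, in which each administration multiplies the previous dose by a fixed factor. The starting dose, factor, and number of steps below are invented for illustration and have no clinical meaning:

```python
def escalation_schedule(start_dose_ug, factor, steps):
    """Generate a geometric dose-escalation schedule (doses in micrograms).

    Models the general principle of gradually increasing the dose of an
    antigenic fragment: each successive administration is the previous
    dose multiplied by a fixed factor. Values are illustrative only.
    """
    return [start_dose_ug * factor ** i for i in range(steps)]

# e.g. starting at 0.1 ug and doubling over 8 administrations
doses = escalation_schedule(0.1, 2, 8)  # 0.1, 0.2, 0.4, ... up to 12.8
```

A fixed multiplicative step is one common way to span a wide dose range in few administrations; the actual clinical protocol is not specified in this article.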

In order to figure out how this type of immunotherapy works, the scientists delved inside the immune cells themselves to see which genes and proteins were turned on or off by the treatment.

They found changes in gene expression that help explain how effective treatment leads to conversion of aggressor into protector cells. The outcome is to reinstate self-tolerance whereby an individual's immune system ignores its own tissues while remaining fully armed to protect against infection.

By specifically targeting the cells at fault, this immunotherapeutic approach avoids the need for the immune suppressive drugs associated with unacceptable side effects such as infections, development of tumours and disruption of natural regulatory mechanisms.

Professor David Wraith, who led the research, said: "Insight into the molecular basis of antigen-specific immunotherapy opens up exciting new opportunities to enhance the selectivity of the approach while providing valuable markers with which to measure effective treatment. These findings have important implications for the many patients suffering from autoimmune conditions that are currently difficult to treat."

This treatment approach, which could improve the lives of millions of people worldwide, is currently undergoing clinical development through biotechnology company Apitope, a spin-out from the University of Bristol.

Note: Material may have been edited for length and content. For further information, please contact the cited source.

University of Bristol press release


Bronwen R. Burton, Graham J. Britton, Hai Fang, Johan Verhagen, Ben Smithers, Catherine A. Sabatos-Peyton, Laura J. Carney, Julian Gough, Stephan Strobel, David C. Wraith. Sequential transcriptional changes dictate safe and effective antigen-specific immunotherapy.   Nature Communications, Published Online September 3 2014. doi: 10.1038/ncomms5741



Automated Learning [1123]

by System Administrator - Tuesday, February 24, 2015, 17:17

Students, meet your new lesson planners. Photo by John Tlumacki/The Boston Globe via Getty Images.

Automated Learning

NEW YORK—Teacher John Garuccio wrote a multiplication problem on a digital whiteboard in a corner of an unusually large classroom at David A. Boody Intermediate School in Brooklyn.

About 150 sixth-graders are in this math class—yes, 150—but Garuccio’s task was to help just 20 of them, with a lesson tailored to their needs. He asked, “Where does the decimal point go in the product?” After several minutes of false starts, a boy offered the correct answer. Garuccio praised him, but did not stop there.

“Come on, you know the answer, tell me why,” Garuccio said. “It’s good to have the right answer, but you need to know why.”

A computer system picked this lesson for this group of students based on a quiz they’d taken a day earlier. Similar targeted lessons were being used by other teachers and students working together, in small groups, in an open classroom the size of a cafeteria. The computer system orchestrates how each math class unfolds every day, not just here, but for about 6,000 students in 15 schools located in four states and the District of Columbia.

As more schools adopt blended learning—methods that combine classroom teachers and computer-assisted lessons—some are taking the idea a step further and creating personalized programs of instruction. Technology can help teachers design a custom lesson plan for each student, supporters say, ensuring children aren’t bored or confused by materials that aren’t a good fit for their skill level or learning style.

At David A. Boody (I.S. 228)—a public school in a Brooklyn neighborhood where five of every six students qualify for free or reduced-price lunch—teachers use a program called Teach to One: Math. It combines small group lessons, one-on-one learning with a teacher, learning directly from software, and online tutoring. A nonprofit in New York City, New Classrooms Innovation Partners, provides the software and supports the schools that use it. New Classrooms evolved from a program created several years earlier at the New York City Department of Education.

One key feature of the program is apparent even before instruction begins. The entrance to the math class at David A. Boody looks a bit like a scene at an airport terminal. Three giant screens display daily schedule updates for all students and teachers. The area is huge, yawning across a wide-open space created by demolishing the walls between classrooms.

“There is a great deal of transparency here,” said Cathy Hayes, the school’s math director, explaining the idea behind the enormous classroom. “Children and teachers can see each other, and that transparency works to share great work. This isn’t a room where you can shut the door and contain what you are doing.”

The open design is meant to encourage collaboration; teachers can learn from each other by working in close proximity. Shelves, desks, and whiteboards divide the large room to create nooks for small-group instruction. Each area has a name, such as Brooklyn Bridge or Manhattan Bridge, which helps students know where they are supposed to go. Sometimes students are instructed by teachers; other times they work in sections monitored by teaching assistants. Some stations use pencil-on-paper worksheets with prompts to help guide students through group projects; stations in another part of the classroom are for independent computer-guided lessons and tutoring.

Dmitry Vlasov, an assistant math teacher, scanned his section of the room where more than a dozen children worked on small laptops independently. If needed, he would help a student who was stuck, but he said most students don’t require much more than a nudge.

“They treat it as if it were a game,” Vlasov said.

All this activity in the large, high-ceilinged room creates a constant buzz, which seemed to distract some students. Teachers had to remind them not to peer around the big room. Students seated in the back appeared to have trouble hearing the teacher.

Math class spans two 35-minute sessions, with students and teachers rotating to new stations after the first session. The school has about 300 students in the math program. Half of them report to class at one time. On a recent day in December, the classroom was staffed with one math director, five teachers, two teaching assistants, and a technology aide.

The program’s computer system coordinates where everyone will go based on how the children perform on a daily test at the end of class. The next day’s schedule is delivered electronically. Teachers, students, and parents can log on to a website to see it.
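
As a rough illustration of the kind of rule just described—an end-of-class quiz driving the next day's lesson and station assignment—here is a toy scheduler. The thresholds, station names, and decision logic are invented for illustration; New Classrooms' actual algorithm is proprietary and far more involved:

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    skill: str          # current target skill, e.g. "decimal multiplication"
    quiz_score: float   # fraction correct on today's end-of-class quiz

def next_day_plan(students, pass_mark=0.8):
    """Assign each student a lesson and station for tomorrow based on
    today's quiz. Thresholds and station names are hypothetical."""
    plan = {}
    for s in students:
        if s.quiz_score >= pass_mark:
            plan[s.name] = ("advance to next skill", "independent laptop station")
        elif s.quiz_score >= 0.5:
            plan[s.name] = (f"practice {s.skill}", "small-group station")
        else:
            plan[s.name] = (f"reteach {s.skill}", "teacher-led station")
    return plan

roster = [Student("Ana", "decimal multiplication", 0.9),
          Student("Ben", "decimal multiplication", 0.6),
          Student("Cal", "decimal multiplication", 0.3)]
plan = next_day_plan(roster)
```

Even this crude version shows the key property the article emphasizes: students in the same class can be routed to different lessons, paces, and instructional modes each day.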

“It’s easy to keep track of everyone,” said Amelia Tonyes, a teacher. “It helps to target students with what they need.”

The software used by the Teach to One system pulls lessons from a database created and curated by the program’s academic team. When Teach to One staffers discover a lesson that fits their program, they negotiate with the publisher to buy the lesson à la carte—sort of like buying one song from iTunes instead of buying the entire album. Staffers say they examined about 80,000 lessons to create a library of 12,000 from 20 vendors. A room in the office of the nonprofit is filled with stacks of math textbooks.

According to Joe Ventura, a spokesman for Teach to One’s parent company, New Classrooms, schools pay $225 per student each year in licensing fees to use the content in Teach to One’s curriculum—the materials that replace the typical textbook. They also pay between $50,000 and $125,000 a year for professional development and support services, but those fees typically decrease as schools gain more experience running the program, Ventura said.

Schools in New York City don’t pay the software licensing fees (although they pay some of the other costs), because the program originated within the city’s Department of Education; the nonprofit that now runs Teach to One was created by former Department of Education employees. Ventura said they are in negotiations for a new contract but could not elaborate on details. Requests for information from the Department of Education brought no reply.

Supporters say the program gives every student a custom-designed math class, allowing students to get different lessons, and move faster or slower than peers in the same class.

Skeptics of this type of program say more evidence is needed to justify the time and expense.

Larry Cuban, professor emeritus of education at Stanford University, recently devoted a three-part series on his blog to what he calls automation in schools. Quality teachers deliver not just information, but a human element that can’t be replicated by a machine, he concluded. In an interview, he was not enthusiastic about Teach to One, saying that, from what he’s seen, it looks like “a jazzed-up version of worksheets.”

Teach to One’s founders, hoping to demonstrate that the program works, commissioned a study by an education professor. The first year’s results were mixed. A second-year study, released late last year, documented significant progress in 11 of 15 schools in the program. Although students still scored lower on math tests than national averages—many are from disadvantaged populations—the growth in the Teach to One students’ scores outpaced national averages.

Changes made after the first year’s report, and increased teacher and student familiarity with the program, might explain the improved results, Ventura said. The updates included improvements to an online portal where students and parents can track progress and continue to work on lessons outside school; the addition of 130 multiday collaborative student projects; refinements to the algorithm that is used to select lessons, and expansion of the library of lessons.

The early research on Teach to One: Math shows schools that use the program have improved test scores, but could not conclude that Teach to One is the reason.

The author of the research on Teach to One: Math, Douglas Ready, an associate professor of education and public policy at Teachers College of Columbia University, was cautiously optimistic. (The Hechinger Report, in which this article first appeared, is an independently funded unit of Teachers College.) He noted, though, that the early research could not determine whether the improved math scores could be attributed to Teach to One or had been affected by some other factor in the schools.

The next step in evaluating the program will be supported by a $3 million U.S. Department of Education grant, announced this month. That grant will pay for deeper research into Teach to One, as well as an expansion of the program to more schools in one of its other cities, Elizabeth, New Jersey.

Gary Miron, a professor of evaluation, measurement, and research at Western Michigan University, is generally positive about the possibilities of blended learning. Asked by the Hechinger Report to review the research on Teach to One, he said the gains were “quite remarkable.” He noted, however, that there is a lot of pressure on schools to show gains on tests, which can lead to inflated scores.

That said, Miron is enthusiastic about the possibilities for blended learning programs that invest in both quality teaching and technology. Most online programs—in which students don’t ever report to a brick-and-mortar school—have less promising results, he said.

“That is the future, blended learning,” he said.

This story was written by The Hechinger Report, a nonprofit, independent news website focused on inequality and innovation in education. Read more about blended learning.

Nichole Dobo writes for the Hechinger Report.


Logo KW

Automation Is Eating Jobs, But These Skills Will Always Be Valued In the Workplace [1591]

de System Administrator - viernes, 20 de noviembre de 2015, 20:46

Automation Is Eating Jobs, But These Skills Will Always Be Valued In the Workplace


If you’d asked farmers a few hundred years ago what skills their kids would need to thrive, it wouldn’t have taken long to answer. They’d need to know how to milk a cow or plant a field. General skills for a single profession that only changed slowly—and this is how it was for most humans through history.

But in the last few centuries? Not so much.

Each generation, and even within generations, we see some jobs largely disappear, while other ones pop up. Machines have automated much of manufacturing, for example, and they’ll automate even more soon. But as manufacturing jobs decline, they’ve been replaced by other once unimaginable professions like bloggers, coders, dog walkers, or pro gamers.

In a world where these labor cycles are accelerating, the question is: What skills do we teach the next generation so they can keep pace?

More and more research shows that current curriculums, which teach siloed subject matter and specific vocational training, are not preparing students to succeed in the 21st century, a time of technological acceleration, market volatility, and uncertainty.

To address this, some schools have started teaching coding and other skills relevant to the technologies of today. But technology is changing so quickly that these new skills may not be relevant by the time students enter the job market.

In fact, in her book Now You See It, Cathy Davidson estimates that

“65 percent of children entering grade school this year (2011) will end up working in careers that haven't even been invented yet.”

Not only is it difficult to predict what careers will exist in the future, it is equally uncertain which technology-based skills will be viable 5 or 10 years from now, as Brett Schilke, director of impact and youth engagement at Singularity University, noted in a recent interview.

So, what do we teach?

Finland recently shifted its national curriculum to a new model called the “phenomenon-based” approach. By 2020, the country will replace traditional classroom subjects with a topical approach highlighting the four Cs—communication, creativity, critical thinking, and collaboration. These four skills “are central to working in teams, and a reflection of the 'hyperconnected' world we live in today,” Singularity Hub Editor-in-Chief David Hill recently wrote.

Hill notes the four Cs directly correspond to the skills needed to be a successful 21st century entrepreneur—when accelerating change means the jobs we’re educating for today may not exist tomorrow. Finland’s approach reflects an important transition away from the antiquated model used in most US institutions—a model created for a slower, more stable labor market and economy that no longer exists.

In addition to the four Cs, successful entrepreneurs across the globe are demonstrating three additional soft skills that can be integrated into the classroom—adaptability, resiliency and grit, and a mindset of continuous learning.

These skills can equip students to be problem-solvers, inventive thinkers, and adaptive to the fast-paced change they are bound to encounter. In a world of uncertainty, the only constant is the ability to adapt, pivot, and get back on your feet.

Like Finland, the city of Buenos Aires is embracing change.

Select high school curriculums in the city of Buenos Aires now require technological education in the first two years and entrepreneurship in the last three years. Esteban Bullrich, Buenos Aires’ minister of education, told Singularity University in a recent interview, “I want kids to get out of school and be able to create whatever future they want to create—to be able to change the world with the capabilities they earn and receive through formal schooling.”

The idea is to teach students to be adaptive and equip them with skills that will be highly transferable in whatever reality they may face once out of school, Bullrich explains. Embedding these entrepreneurial skills in education will enable future leaders to move smoothly with the pace of technology. In fact, Mariano Mayer, director of entrepreneurship for the city of Buenos Aires, believes these soft skills will be valued most highly in future labor markets.

This message is consistent with research highlighted in a World Economic Forum and Boston Consulting Group report titled, New Vision for Education: Unlocking the Potential of Technology. The report breaks out the core 21st-century skills into three key categories—foundational literacies, competencies, and character qualities—with lifelong learning as a proficiency encompassing these categories.


From degree gathering to continuous learning

This continuous learning approach, in contrast to degree-oriented education, represents an important shift that is desperately needed in education. It also reflects the demands of the labor market—where lifelong learning and skill development are what keep an individual competitive, agile, and valued.

Singularity University CEO Rob Nail explains, “The current setup does not match the way the world has and will continue to evolve. You get your certificate or degree and then supposedly you're done. In the world that we're living in today, that doesn't work.”

Transitioning the focus of education from degree-oriented to continuous learning holds obvious benefits for students. This shift in focus, however, will also help academic institutions sustain their value as education, at large, becomes increasingly democratized and decentralized.

Any large change requires we overcome barriers. And in education, there are many—but one challenge, in particular, is fear of change. 

“The fear of change has made us fall behind in terms of advancement in innovation and human activities,” Bullrich says.

“We are discussing upgrades to our car instead of building a spaceship. We need to build a spaceship, but we don't want to leave the car behind. Some changes appear large, but the truth is, it's still a car. It doesn't fly. That's why education policy is not flying.”

Education and learning are ready to be reinvented. It’s time we get to work.



Logo KW

Ayudando a niños y adolescentes a superar la violencia y los desastres: Qué pueden hacer los padres [625]

de System Administrator - viernes, 1 de agosto de 2014, 23:03



Logo KW

Back-up brains: The era of digital immortality [1080]

de System Administrator - sábado, 31 de enero de 2015, 20:35

Back-up brains: The era of digital immortality

 "It could change our relationship with death, providing some noise where there is only silence." — Aaron Sunshine

A few months before she died, my grandmother made a decision.

Bobby, as her friends called her (theirs is a generation of nicknames), was a farmer’s wife who not only survived World War II but also found in it justification for her natural hoarding talent. ‘Waste not, want not’ was a principle she lived by long after England recovered from a war that left it buckled and wasted. So she kept old envelopes and bits of cardboard cereal boxes for note taking and lists. She kept frayed blankets and musty blouses from the 1950s in case she needed material to mend. By extension, she was also a meticulous chronicler. She kept albums of photographs of her family members. She kept the airmail love letters my late grandfather sent her while he travelled the world with the merchant navy in a box. Her home was filled with the debris of her memories.

Yet in the months leading up to her death, the emphasis shifted from hoarding to sharing. Every time I visited my car would fill with stuff: unopened cartons of orange juice, balls of fraying wool, damp, antique books, empty glass jars. All things she needed to rehome now she faced her mortality. The memories too began to move out. She sent faded photographs to her children, grandchildren and friends, as well as letters containing vivid paragraphs detailing some experience or other.

On 9 April, the afternoon before the night she died, she posted a letter to one of her late husband’s old childhood friends. In the envelope she enclosed some photographs of my grandfather and his friend playing as young children. “You must have them,” she wrote to him. It was a demand but also a plea, perhaps, that these things not be lost or forgotten when, a few hours later, she slipped away in her favourite armchair.


The hope that we will be remembered after we are gone is both elemental and universal. The poet Carl Sandburg captured this common feeling in his 1916 poem Troths:

Yellow dust on a bumblebee’s wing, 
Grey lights in a woman’s asking eyes, 
Red ruins in the changing sunset embers: 
I take you and pile high the memories. 
Death will break her claws on some I keep.

It is a wishful tribute to the potency of memories. The idea that a memory could prove so enduring that it might grant its holder immortality is a romantic notion that could only be held by a young poet, unbothered by the aches and scars of age.

Nevertheless, while Sandburg’s memories failed to save him, they survived him. Humans have, since the first paintings scratched on cave walls, sought to confound the final vanishing of memory. Oral history, diary, memoir, photography, film and poetry: all tools in humanity’s arsenal in the war against time’s whitewash. Today we bank our memories onto the internet’s enigmatic servers, those humming vaults tucked away in the cooling climate of the far North or South. There’s the Facebook timeline that records our most significant life events, the Instagram account on which we store our likeness, the Gmail inbox that documents our conversations, and the YouTube channel that broadcasts how we move, talk or sing. We collect and curate our memories more thoroughly than ever before, in every case grasping for a certain kind of immortality.


Is it enough? We save what we believe to be important, but what if we miss something crucial? What if some essential context to our words or photographs is lost? How much better it would be to save everything, not only the written thoughts and snapped moments of life, but the entire mind: everything we know and all that we remember, the love affairs and heartbreaks, the moments of victory and of shame, the lies we told and the truths we learned. If you could save your mind like a computer’s hard drive, would you? It’s a question some hope to pose to us soon. They are the engineers working on the technology that will be able create wholesale copies of our minds and memories that live on after we are burned or buried. If they succeed, it promises to have profound, and perhaps unsettling, consequences for the way we live, who we love and how we die.

Carbon copy

I keep my grandmother’s letters to me in a folder by my desk. She wrote often and generously. I also have a photograph of her in my kitchen on the wall, and a stack of those antique books, now dried out, still unread. These are the ways in which I remember her and her memories, saved in hard copy. But could I have done more to save her?

San Franciscan Aaron Sunshine’s grandmother also passed away recently. “One thing that struck me is how little of her is left,” the 30-year-old tells me. “It’s just a few possessions. I have an old shirt of hers that I wear around the house. There's her property but that's just faceless money. It has no more personality than any other dollar bill.” Her death inspired Sunshine to sign up with a web service that seeks to ensure that a person’s memories are preserved after their death online.

It works like this: while you’re alive you grant the service access to your Facebook, Twitter and email accounts, upload photos, geo-location history and even Google Glass recordings of things that you have seen. The data is collected, filtered and analysed before it’s transferred to an AI avatar that tries to emulate your looks and personality. The avatar learns more about you as you interact with it while you’re alive, with the aim of more closely reflecting you as time progresses.

“It’s about creating an interactive legacy, a way to avoid being totally forgotten in the future,” says Marius Ursache, one of the service’s co-creators. “Your grand-grand-children will use it instead of a search engine or timeline to access information about you – from photos of family events to your thoughts on certain topics to songs you wrote but never published.” For Sunshine, the idea that he might be able to interact with a legacy avatar of his grandmother that reflected her personality and values is comforting. “I dreamt about her last night,” he says. “Right now a dream is the only way I can talk to her. But what if there was a simulation? She would somehow be less gone from my life.”


While Ursache has grand ambitions for the service (“it could be a virtual library of humanity”) the technology is still in its infancy. He estimates that subscribers will need to interact with their avatars for decades for the simulation to become as accurate as possible. He’s already received many messages from terminally ill patients who want to know when the service will be available – whether they can record themselves in this way before they die. “It’s difficult to reply to them, because the technology may take years to build to a level that’s useable and offers real value,” he says. But Sunshine is optimistic. “I have no doubt that someone will be able to create good simulations of people's personalities with the ability to converse satisfactorily,” he says. “It could change our relationship with death, providing some noise where there is only silence. It could create truer memories of a person in the place of the vague stories we have today.”

It could, I suppose. But what if the company one day goes under? As the servers are switched off, the people it homes would die a second death.


As my own grandmother grew older, some of her memories retained their vivid quality; each detail remained resolute and in place. Others became confused: the specifics shifted somehow in each retelling. This service and others like it counter the fallibility of human memory; they offer a way to fix the details of a life as time passes. But any simulation is a mere approximation of a person and, as anyone who has owned a Facebook profile knows, the act of recording one’s life on social media is a selective process. Details can be tweaked, emphases can be altered, entire relationships can be erased if it suits one’s current circumstances. We often give, in other words, an unreliable account of ourselves.

Total recall

What if, rather than simply picking and choosing what we want to capture in digital form, it was possible to record the contents of a mind in their entirety? This work is neither science fiction nor the niche pursuit of unreasonably ambitious scientists. Theoretically, the process would require three key breakthroughs. Scientists must first discover how to preserve, non-destructively, someone's brain upon their death. Then the content of the preserved brain must be analysed and captured. Finally, that capture of the person’s mind must be recreated on a simulated human brain.


First, we must create an artificial human brain on which a back-up of a human’s memories would be able to ‘run’. Work in the area is widespread. MIT runs a course on the emergent science of ‘connectomics’, the work to create a comprehensive map of the connections in a human brain. The US Brain project is working to record brain activity from millions of neurons while the EU Brain project tries to build integrated models from this activity.

Anders Sandberg from the Future of Humanity Institute at Oxford University, who in 2008 wrote a paper titled Whole Brain Emulation: A Roadmap, describes these projects as “stepping stones” towards being able to fully emulate the human brain.

“The point of brain emulation is to recreate the function of the original brain: if ‘run’ it will be able to think and act as the original,” he says. Progress has been slow but steady. “We are now able to take small brain tissue samples and map them in 3D. These are at exquisite resolution, but the blocks are just a few microns across. We can run simulations of the size of a mouse brain on supercomputers – but we do not have the total connectivity yet. As methods improve I expect to see automatic conversion of scanned tissue into models that can be run. The different parts exist, but so far there is no pipeline from brains to emulations.”


Investment in the area appears to be forthcoming, however. Google is heavily invested in brain emulation. In December 2012 the company appointed Ray Kurzweil as its director of engineering on the Google Brain project, which aims to mimic aspects of the human brain. Kurzweil, a divisive figure, is something of a figurehead for a community of scientists who believe that it will be possible to create a digital back-up of a human brain within their lifetime. A few months later, the company hired Geoff Hinton, a British computer scientist who is one of the world's leading experts on neural networks, essentially the circuitry of how the human mind thinks and remembers.

Google is not alone, either. In 2011 a Russian entrepreneur, Dmitry Itskov, founded ‘The 2045 Initiative’, named after Kurzweil’s prediction that the year 2045 will mark the point at which we’ll be able to back up our minds to the cloud. While the fruits of all this work are, to date, largely undisclosed, the effort is clear.


Neuroscientist Randal Koene, science director for the 2045 Initiative, is adamant that creating a working replica of a human brain is within reach. “The development of neural prostheses already demonstrates that running functions of the mind is possible,” he says. It’s not hyperbole. Ted Berger, a professor at the University of Southern California’s Center for Neuroengineering, has managed to create a working prosthetic of the hippocampus part of the brain. In 2011 a proof-of-concept hippocampal prosthesis was successfully tested in live rats and, in 2012, the prosthetic was successfully tested in non-human primates. Berger and his team intend to test the prosthesis in humans this year, demonstrating that we are already able to recreate some parts of the human brain.

Memory dump

Emulating a human brain is one thing, but creating a digital record of a human’s memories is a different sort of challenge. Sandberg is sceptical of whether such a simple process is viable. “Memories are not neatly stored like files on a computer to create a searchable index,” he says. “Memory consists of networks of associations that are activated when we remember. A brain emulation would require a copy of them all.”


Indeed, humans reconstruct information from multiple parts of the brain in ways that are shaped by our current beliefs and biases, all of which change over time. These conclusions appear at odds with any effort to store memories in the same way that a computer might record data for easy access. It is an idea based on, as one sceptic I spoke to (who wished to remain anonymous) put it, “the wrong and old-fashioned ‘possession’ view of memory”.
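The contrast Sandberg draws — files retrieved by exact name versus associations that activate one another — can be made concrete with a toy model. The memories and association weights below are invented purely for illustration:

```python
# Toy contrast between file-like and associative memory.
# File-like storage: retrieval requires the exact key.
files = {"grandmother": "farmhouse kitchen", "war": "ration books"}

# Associative memory: a cue spreads activation through weighted links,
# so one memory pulls related memories along with it.
associations = {
    "grandmother": {"farmhouse": 0.9, "letters": 0.7},
    "farmhouse": {"war": 0.5},
    "letters": {"war": 0.3},
}

def recall(cue, depth=2, activation=1.0, found=None):
    """Spread activation from a cue through the association graph."""
    if found is None:
        found = {}
    for memory, weight in associations.get(cue, {}).items():
        strength = activation * weight
        if strength > found.get(memory, 0.0):
            found[memory] = strength
            if depth > 1:
                recall(memory, depth - 1, strength, found)
    return found

print(recall("grandmother"))
```

A single cue surfaces a whole cluster of weighted memories rather than one stored value, which is why, as the sceptics note, copying a mind would mean copying the entire web of links, not extracting discrete records.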

There is also the troubling issue of how to extract a person’s memories without destroying the brain in the process. “I am sceptical of the idea that we will be able to accomplish non-destructive scanning,” says Sandberg. “All methods able to scan neural tissue at the required high resolution are invasive, and I suspect this will be very hard to achieve without picking apart the brain.” Nevertheless, the professor believes a searchable, digital upload of a specific individual’s memory could be possible so long as you were able to “run” the simulated brain in its entirety.


“I think there is a good chance that it could work in reality, and that it could happen this century,” he says. “We might need to simulate everything down to the molecular level, in which case the computational demands would simply be too large. It might be that the brain uses hard-to-scan data like quantum states (an idea believed by some physicists but very few neuroscientists), that software cannot be conscious or do intelligence (an idea some philosophers believe but few computer scientists), and so on. I do not think these problems apply, but it remains to be seen if I am right.”

If it could be done, then, what would preserving a human mind mean for the way we live?

Some believe that there could be unanticipated benefits, some of which can make the act of merely extending a person’s life for posterity seem rather plain by comparison. For example, David Wood, chairman of the London Futurists, argues that a digital back-up of a person’s mind could be studied, perhaps providing breakthroughs in understanding the way in which human beings think and remember.


And if a mind could be digitally stored while a person was still alive then, according to neuroscientist Andrew A Vladimirov, it might be possible to perform psychoanalysis using such data. “You could run specially crafted algorithms through your entire life sequence that will help you optimise behavioural strategies,” he says.

Yet there’s also an unusual set of moral and ethical implications to consider, many of which are only just beginning to be revealed. “In the early stages the main ethical issue is simply broken emulations: we might get entities that are suffering in our computers,” says Sandberg. “There are also going to be issues of volunteer selection, especially if scanning is destructive.” Beyond the difficulty of recruiting people who are willing to donate their minds in such a way, there is the more complicated issue of what rights an emulated mind would enjoy. “Emulated people should likely have the same rights as normal people, but securing these would involve legislative change,” says Sandberg. “There might be the need for new kinds of rights too. For example, the right for an emulated human to run in real-time so that they can participate in society.”


Defining the boundaries of a person’s privacy is already a pressing issue for humanity in 2015, where third-party corporations and governments hold more insight into our personal information than ever before. For an emulated mind, privacy and ownership of data becomes yet more complicated. “Emulations are vulnerable and can suffer rather serious breaches of privacy and integrity,” says Sandberg. He adds, in a line that could be lifted from a Philip K Dick novel: “We need to safeguard their rights”. By way of example, he suggests that lawmakers would need to consider whether it should be possible to subpoena memories.

Property laws

“Ownership of specific memories is where things become complex,” says Koene. “In a memoir you can choose which memories are recorded. But if you don't have the power to choose which of your memories others can inspect, it becomes a rather different question.” Is it a human right to be able to keep secrets?

These largely un-interrogated questions also begin to touch on more fundamental issues of what it means to be human. Would an emulated brain be considered human and, if so, does the humanity exist in the memories or the hardware on which the simulated brain runs? If it's the latter, there’s the question of who owns the hardware: an individual, a corporation or the state? If an uploaded mind requires certain software to run (a hypothetical Google Brain, for example) the ownership of the software license could become contentious.


The knowledge that one’s brain is to be recorded in its entirety might also lead some to behave differently during life. “I think it would have the same effect as knowing your actions will be recorded on camera,” says Sandberg. “In some people this knowledge leads to a tendency to conform to social norms. In others it produces rebelliousness. If one thinks that one will be recreated as a brain emulation then it is equivalent to expecting an extra, post-human life.”

Even if it were possible to digitally record the contents and psychological contours of the human mind, there are undeniably deep and complicated implications. But beyond this, there is the question of whether this is something that any of us truly want. Humans long to preserve their memories (or, in some cases, to forget them) because they remind us of who we are. If our memories are lost we cease to know who we were, what we accomplished, what it all meant. But at the same time, we tweak and alter our memories in order to create the narrative of our lives that fits us at any one time. To have everything recorded with equal weight and importance might not be useful, either to us or to those who follow us.


Where exactly is the true worth of the endeavour? Could it actually be the comforting knowledge for a person that they, to one degree or other, won’t be lost without trace? The survival instinct is common to all life: we eat, we sleep, we fight and, most enduringly, we reproduce. Through our descendants we reach for a form of immortality, a way to live on beyond our physical passing. All parents take part in a grand relay race through time, passing the gene baton on and on through the centuries. Our physical traits – those eyes, that hair, this temperament – endure in some diluted or altered form. So too, perhaps, do our metaphysical attributes (“what will survive of us is love,” as Philip Larkin tentatively put it in his 1956 poem, ‘An Arundel Tomb’). But it is the mere echo of immortality. Nobody lives forever; with death only the fading shadow of our life remains. There are the photographs of us playing as children. There are the antique books we once read. There is the blouse we once wore.

I ask Sunshine why he wants his life to be recorded in this way. “To be honest, I'm not sure,” he says. “The truly beautiful things in my life such as the parties I've thrown, the sex I've had, the friendships I’ve enjoyed. All of these things are too ephemeral to be preserved in any meaningful way. A part of me wants to build monuments to myself. But another part of me wants to disappear completely.” Perhaps that is true of us all: the desire to be remembered, but only the parts of us that we hope will be remembered. The rest can be discarded.


Despite my own grandmother’s careful distribution of her photographs prior to her death, many remained in her house. These eternally smiling, fading unknown faces evidently meant a great deal to her in life but now, without the framing context of her memories, they lost all but the most superficial meaning. In a curious way, they became a burden to those of us left behind.

My father asked my grandmother’s vicar (a kindly man who had been her friend for many years) what he should do with the pictures; to just throw the photographs away seemed somehow flippant and disrespectful. The vicar’s advice was simple. Take each photograph. Look at it carefully. In that moment you honour the person captured. Then you may discard it and be free.



Logo KW

Behavioral analytics vs. the rogue insider [1226]

de System Administrator - lunes, 11 de mayo de 2015, 23:35

Behavioral analytics vs. the rogue insider

By Taylor Armerding   

The promise of User Behavioral Analytics is that it can go beyond simply detecting insider threats to predicting them. Some experts say that creates a significant privacy problem.

The recent arrest by the FBI of a former employee of JP Morgan Chase for allegedly trying to sell bank account data, including PINs, ended well for the bank.

According to the FBI, the former employee, Peter Persaud, was caught in a sting operation when he attempted to sell the data to informants and federal agents.

But such things don’t always end so well for the intended victims. The arrest was yet another example of the so-called “rogue insider” threat to organizations.

And such incidents are providing increasing incentives to use technology to counter it.

The threat of employees going rogue – wittingly or not – is significant enough that some organizations are turning to behavior analytics that, according to its advocates, are able not only to detect insider security threats as they happen, but even predict them.

Such protection would likely be welcomed by most organizations, but it comes with an obvious consequence: worker privacy. Predicting security threats calls up images of “Minority Report,” the 2002 movie starring Tom Cruise, in which police arrested people before they committed crimes.

In that sci-fi world, it was “precogs” – psychics – who predicted the impending crimes. The IT version is User Behavior Analytics (UBA).

According to Gartner, “UBA is transforming security and fraud management practices because it makes it much easier for enterprises to gain visibility into user behavior patterns to find offending actors and intruders.”


Saryu Nayyar, CEO of Gurucul Solutions, said in a recent statement that her firm’s technology “continuously monitors hundreds of (employee behavior) attributes to detect and rank the risk associated with anomalous behaviors. It identifies and scores anomalous activity across users, accounts, applications and devices to predict risks associated with insider threats.”

This should not be a surprise. Data analytics are being applied to just about every challenge in the workplace, from marketing to efficiency. So it is inevitable that it would be used to counter what has always been the weakest link in the security chain – the human.

Americans have also been told for years that personal privacy is essentially dead. Still, some of them may not appreciate just how dead it is, or soon will be, in the workplace.

But Nayyar and others note that there should be no expectation of privacy in the workplace when it comes to corporate data.

“This technology is simply monitoring activity within a company’s IT systems,” she said. “It does not read emails or personal communications.”

She added that monitoring of employee behaviors by IT has been going on for a long time. “This is nothing new,” she said. “What’s different today is the use of big data analytics, machine learning algorithms and risk scoring being applied to these logs.”

Michael Overly, technology partner at Foley & Lardner LLP, said companies should notify their employees that, “business systems should not be used for personal or private communications and other activities, and that the systems and data can and likely will be reviewed, including through automated means.”


But he agreed with Nayyar that privacy is necessarily limited in the workplace. “Employees must understand that if they want privacy with regard to their online activities, they need to use a means other than their employer’s computers, like a smartphone or a home computer,” he said.

That is also the view of Troy Moreland, chief technology officer at Identity Automation. “In general, if employees are using employer-provided equipment, they have no right to privacy as long as it’s clearly expressed,” he said.

But Joseph Loomis, founder and CEO of CyberSponse, said such policies, if they are too heavy-handed, can cause morale problems. “I believe it’s justified,” he said, “it’s just that there are various opinions on what type of privacy someone is entitled to or not.”

He said it would likely take significant “training, education and explaining” to eliminate the feeling of a “Big Brother” atmosphere in the workplace.


Gabriel Gumbs, vice president of product strategy at Identity Finder, said he believes the potential for morale problems is real. “At the core of UBA is an unspoken distrust of everyone, not just the rogue employees,” he said.

Matthew Prewitt, partner at Schiff Hardin and chairman of its cybersecurity and data privacy practice, said one problem with predicting misconduct is that it can become self-fulfilling. “An employee who is viewed with mistrust and suspicion is more likely to become a rogue employee,” he said.

He agrees that there is a limited expectation of privacy in the workplace, especially on the corporate network. But he said a “creative advocate” for an employee could argue that, “UBA is so different from other types of monitoring that some sort of express reference to UBA needs to be provided in the notice.”

Loomis added that in states not governed by “right-to-work” laws, UBA, “will cause legal issues if one terminates without cause other than predictive intelligence.”

The much larger problem, these experts say, comes from unintentional rogues – those with too many access privileges, who use “shadow” IT, or who are simply lazy or careless.

“In our experience over-privileged scenarios account for approximately 65% of insider threat incidents, shadow IT 20% and carelessness 15%,” Nayyar said.

Moreland has a list of labels for such employees, including “access hoarders” who “gobble up as much access as they possibly can and refuse to relinquish any of it, even when it's no longer needed.”

Others, whom he calls “innovators,” are well intentioned – they are trying to be more productive – but one of the ways they do so is by circumventing IT policies.

Gumbs noted that the Verizon Data Breach Investigations Report found that, “privilege abuse is the most damaging of insider threats.”

But he added that not all abuse of access privileges is innocent, and does not necessarily mean an employee is over-privileged. “In the majority of cases, users had the proper level of privilege for their roles, they simply abused those privileges for personal or financial gain,” he said. In those cases, he and other experts say identity and access management can reduce the security risks significantly.

“Over-privilege is a substantial concern,” Overly said. “In general, the majority of users in businesses today are over-privileged. The concept of least privilege is seldom implemented properly and even more seldom addressed as personnel duties change and evolve over time.”

Dennis Devlin, cofounder, CISO and senior vice president of privacy practice at SAVANTURE, said he sees the same thing. “In my experience most individuals who have been with an organization for a long time are over-privileged,” he said. “Access privileges are accretive and tend to grow over time. The law of least privileges exists not just to prevent malicious access, but also to prevent accidental or inadvertent disclosure.”


He said better access management could reduce the need for intrusive monitoring. “Appropriate privileges keep individuals in their respective ‘swim lanes,’ reduce the need for excessive monitoring and make SIEM analysis much more effective,” he said.
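Devlin’s point about privileges accreting over time lends itself to a simple periodic access review. The sketch below is purely illustrative (the role names, baselines, and entitlements are hypothetical, not taken from any vendor’s product): it flags whatever a user holds beyond their current role’s baseline.

```python
# Hypothetical access-review sketch: flag privilege accretion by
# comparing a user's accumulated entitlements to their role's baseline.

ROLE_BASELINE = {
    "engineer": {"source_repo", "build_server"},
    "analyst": {"bi_dashboard", "report_store"},
}

def excess_privileges(role, entitlements):
    """Return the entitlements a user holds beyond their role's baseline."""
    return set(entitlements) - ROLE_BASELINE.get(role, set())

# An engineer who kept analyst access after changing roles is flagged:
excess_privileges("engineer", {"source_repo", "build_server", "bi_dashboard"})
# -> {"bi_dashboard"}
```

Running such a check on each role change would keep users in their “swim lanes” without continuous behavioral monitoring.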

Beyond the legal and morale questions, however, the jury is still out on how well UBA works.

Overly said in his experience, “it has a long way to go with regard to accuracy. All too often, the volume of false alarms causes the results to be disregarded when an actual threat is identified.”

Nayyar said it does work, through analysis of unusual or “anomalous” behaviors in things like geolocation, elevated permissions, connecting to an unknown IP or installing unknown software for backdoor access to sensitive data (see sidebar).

She provided an example of flagging rogue behavior: A software engineer who had resigned from a company and was leaving in a month exhibited behavior never seen before.

While on vacation, the employee, “logged in from a previously unseen IP address, accessed source code repository and downloaded sensitive files from a project he wasn't assigned to,” she said.

“Two days later, the engineer accessed multiple servers and moved the downloaded files to a NFS (Network File System) location, which he made mountable and attempted to sync the files to prohibited consumer cloud storage service.”

She said the user was flagged as soon as he created the NFS mount point, “based on predictive modeling, and his VPN connection was terminated.”

But as effective as that sounds, even advocates of UBA warn that, like any security tool, it is a “layer” of protection, not a guarantee.

“Perfection cannot be achieved,” Overly said. “If an insider is intent on causing harm to the business, it may be impossible to prevent it.”

What does UBA track?

According to Saryu Nayyar, CEO of Gurucul Solutions, User Behavioral Analytics can detect behavioral anomalies by monitoring activities including:
  1. Geo-location: Access to resources from different geographies, locations not seen before, or from unauthorized locations, etc. This is a very simple use-case.
  2. Elevated Permissions: Employee elevating their access privileges to perform a task they or their peers have never performed in the past.
  3. Device Outbound Access: Certain high-value assets might be connecting to an unknown IP/geo-location they shouldn't connect to. This behavior could be an anomaly when compared to past behavior or peer group behavior.
  4. Employees accessing resources and installing unknown software for backdoor access to sensitive data that can be transmitted outside network.
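The geolocation use case above (item 1) can be sketched as a toy detector that learns each user’s observed locations and flags first-time sightings. This is a deliberate simplification and not Gurucul’s algorithm; real UBA systems combine many such signals with machine learning and risk scoring, and would score nothing until a baseline-training window has passed.

```python
# Toy UBA signal: flag logins from countries a user has never been seen in.
# Field names and scores are hypothetical illustrations.

from collections import defaultdict

class GeoAnomalyDetector:
    def __init__(self):
        self.seen = defaultdict(set)  # user -> countries observed so far

    def score(self, user, country):
        """Return 1.0 for a never-before-seen location, else 0.0, then learn it."""
        anomalous = country not in self.seen[user]
        self.seen[user].add(country)
        return 1.0 if anomalous else 0.0

det = GeoAnomalyDetector()
det.score("alice", "UY")  # 1.0 — first sighting (real systems train first)
det.score("alice", "UY")  # 0.0 — now part of her baseline
det.score("alice", "RU")  # 1.0 — new geolocation, worth a closer look
```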



Logo KW

BioData World Congress 2015 [1433]

by System Administrator - Friday, September 18, 2015, 14:58

BioData World Congress 2015

Big Data Solutions is the leading big data initiative of Intel that aims to empower business with the tools, technologies, software and hardware for managing big data. Big Data Solutions is at the forefront of big data analytics, and today we talk to Bob Rogers, Chief Data Scientist at Intel, about his role, big data for genomics, and his contributions to the BioData World Congress 2015.

Please read the attached PDF

Logo KW

Biological Compass [1584]

by System Administrator - Monday, November 16, 2015, 20:56

Researchers found the genes that code for the MagR and Cry proteins in the retinas of pigeons. WIKIMEDIA, TU7UH

Biological Compass

By Bob Grant

A protein complex discovered in Drosophila may be capable of sensing magnetism and serves as a clue to how some animal species navigate using the Earth’s magnetic field.

A variety of different animal species possess remarkable navigational abilities, using the Earth’s magnetic field to migrate thousands of miles every year or find their way home with minimal or no visual cues. But the biological mechanisms that underlie this magnetic sense have long been shrouded in mystery. Researchers in China may have found a tantalizing clue to the navigational phenomenon buried deep in the fruit fly genome. The team, led by biophysicist Can Xie of Peking University, discovered a polymer-like protein, dubbed MagR, and determined that it forms a complex with a photosensitive protein called Cry. The MagR/Cry protein complex, the researchers found, has a permanent magnetic moment, which means that it spontaneously aligns in the direction of external magnetic fields. The results were published today (November 16) in Nature Materials.

“This is the only known protein complex that has a permanent magnetic moment,” said Peter Hore, a physical chemist at the University of Oxford, U.K., who was not involved in the research. “It’s a remarkable discovery.”

Xie and his colleagues called upon long-standing biochemical models that sought to explain animals’ magnetic sense to initiate the search for a physical magnetoreceptor. One of these involves molecules that incorporate oxides of iron in their structure and another involves Cry, which is known to produce radical pairs in some magnetic fields. “However, this only proved that Cry plays a critical role in the magnetoreceptive biological pathways, not necessarily that it is the receptor,” Xie wrote in an email to The Scientist. “We believe there [are] such universal magnetosensing protein receptors in an organism, and we set out to find this missing link.”

The researchers performed whole-genome screens of Drosophila DNA to search for a protein that might partner with Cry and serve as that magnetoreceptor. “We predicted the existence of a multimeric magnetosensing complex with the attributes of both Cry- and iron-based systems,” Xie wrote. “Amazingly, later on, our genome-wide screening and experiments showed this is real.”

Xie and his colleagues were “brave enough to go through the whole genome-wide search to hunt for this protein,” said James Chou, a biophysicist at Harvard Medical School. “Sometimes it works, sometimes you don’t get anything. Luckily, this time he got something big out of it.”

In 2012, after identifying MagR in Drosophila, Xie and his colleagues screened the genomes of several other animal species, finding genes for both Cry and MagR in virtually all of them, including in butterflies, pigeons, robins, rats, mole rats, sharks, turtles, and humans. “This protein is evolutionarily conserved across different classes of animals (from butterflies to pigeons, rats, and humans),” Xie wrote.

Determining that MagR and Cry were highly expressed and colocalized in the retinas of pigeons, Xie’s team focused on that species to conduct further experiments to ferret out the structure and behavior of the protein complex. Using biochemical co-purification, electron microscopy, and cellular experiments in the presence of a magnetic field, the researchers constructed a rod-shaped model of the MagR/Cry complex, and suggested a potential mechanism for how the complex might work in situ to sense magnetism. “It is quite convincing that this complex may be the magnetoreceptor, at least for the organism they have fished it out from,” Chou said. “I think it’s a great step forward to open this whole mystery.”

Cry likely regulates the magnetic moment of the rod-shaped complex, while the iron-sulfur clusters in the MagR protein are probably what give rise to the permanent magnetic polarity of the structure. “The nanoscale biocompass has the tendency to align itself along geomagnetic field lines, and to obtain navigation cues from a geomagnetic field,” Xie wrote. “We propose that any disturbance of this alignment may be captured by connected cellular machinery such as the cytoskeleton or ion channels, which would channel information to the downstream neural system, forming the animal’s magnetic sense (or magnetic ‘vision’).”
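Whether a nanoscale biocompass can align spontaneously in the Earth’s weak field comes down to a standard back-of-envelope comparison (basic magnetostatics, not a calculation from the paper itself) between the dipole’s orientation energy and thermal agitation:

```latex
% Orientation energy of a permanent magnetic moment m in a field B:
U(\theta) = -\mathbf{m}\cdot\mathbf{B} = -mB\cos\theta
% Spontaneous alignment against thermal noise requires roughly
mB \gtrsim k_B T
\quad\Longrightarrow\quad
m \gtrsim \frac{k_B T}{B}
\approx \frac{(1.4\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})}{50\,\mathrm{\mu T}}
\approx 10^{-16}\,\mathrm{J/T}
```

Whether a MagR/Cry rod actually attains a moment of this magnitude at body temperature is precisely the kind of open question behind Hore’s caution below.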

Hore was cautious about saying that the newly modeled complex is absolutely responsible for magnetoreception in animals. “I don’t think I would say that it’s game-changing, but it is very interesting and will prompt a lot of experimental and theoretical work,” he said. “It may be very relevant to magnetoreception, it’s just too soon to know.”

“It may not be very accurate because this is really just a model,” Chou agreed, “but I think it’s a good effort, and it will stimulate follow-up work on the structure.”

Of course, there may well be additional biological components that play into giving animals a magnetic sense. Pigeons, for example, sense the inclination of the Earth’s magnetic field rather than the absolute direction of the field, Hore points out. The MagR/Cry complex, as described in the paper, would be capable of detecting the absolute direction or intensity of a field, not the inclination.

But beyond the clues into how animals sense the Earth’s magnetic field for navigational purposes, the discovery may yield new biochemical tools that could be used by other researchers. Chief among these applications is the potential to use the MagR/Cry complex along with controlled magnetic fields to control the behavior of cells or whole organisms. Such a development would be “sort of the magnetic version of optogenetics,” Chou said.

Xie agreed. “It may give rise to magnetogenetics,” he wrote.

The study also generates multiple questions about the biological components surrounding the protein complex and how they contribute to magnetic sensation. “This is just the tip of the iceberg,” Chou said. “This opens up a lot of future projects to unveil how this polarity, or alignment with the Earth’s magnetic field, can transmit signal, whether it’s a neural signal or one that regulates transcription.”

Xie said he thinks that while the “biocompass model” he and his colleagues proposed may serve as a universal mechanism for animal magnetoreception, there may be more magnetoreceptors to be discovered. Additionally, evolution enhanced the magnetic sense in some species, especially migratory ones, which could have led to numerous variations on the theme. This may even extend to humans, he added. “I have a friend who has really good sense of directions, and he keep telling people that he can always know where is south, where is north, even to a new place he has never been,” Xie wrote in an email. “According to him, he felt there is a compass in his brain. I was laughed, but now I guess I understand what he meant. . . . However, human’s sense of direction is very complicated. Magnetoreception may play some roles.”

S. Qin et al., “A magnetic protein biocompass,” Nature Materials, doi:10.1038/nmat4484, 2015.


Logo KW

Biomarkers, apps help in suicide prevention [1397]

by System Administrator - Tuesday, September 8, 2015, 20:47

Report: Biomarkers, apps help in suicide prevention

Logo KW

Biomedical ecosystem focused on innovative knowledge encoded with SNOMED CT [1316]

by System Administrator - Tuesday, July 14, 2015, 21:29

Title: Biomedical ecosystem focused on innovative knowledge encoded with SNOMED CT

Presenter: Prog. Gustavo Tejera, SUEIIDISS and KW Foundation

October 25th-30th, 2015 - Radisson Montevideo, Uruguay


Doctors, Project Managers, Managers of Content’s Departments, Architects of the Digital Society, MBA, Engineers, Analysts of BIG DATA and IoT, Interoperability and Automation Specialists.


Transmitting the key concepts needed to create and share reusable content encoded with SNOMED CT. Such content can improve application logic and training at the point of care without its creators knowing how to program. It is the first step toward a social network 3.0 based on SNOMED CT.



SNOMED CT is a source of incremental knowledge to articulate episodes, processes, tasks, forms, descriptors, indicators, rules and agents in all layers of the electronic health record.

How can reusable components be built with SNOMED CT knowledge to improve application logic? How can they be exchanged? Web 3.0 is ready to start, but there are difficulties related to the training of professionals, the value of knowledge, and market rules.

In this study we present the experience of the KW Foundation in the development and implementation of HealthStudio (open source) at the National Cancer Institute, Sanatorium John Paul II, Medical Federation of the Interior, Uruguayan Medical Sanatorium, College of Nursing and Maryland University.

From a big data store of more than 20 million log events, we discovered how to involve users in building processes and content that increase contextual cognition in the care area as well as in logistical and administrative tasks.

By building reusable SNOMED CT knowledge, the healthcare community gains a great facilitator for the essential alignment (“light”), connection (“camera”), and interoperability (“action”).

We believe that an innovative, cognitive, community and incremental ecosystem is possible to build on the basis of SNOMED CT, ready for the generation and analysis of BIG DATA and the Internet of Things. But above all, it is essential to ensure that these tools allow the inclusion of all levels in the democratic construction of eHealth and Digital Society.


  1. SUEIIDISS web page, HL7 Uruguay.
  2. HL7 International web page:
  3. UMLS web page:
  4. KW Foundation web page, free sources for community’s contents builders.
  5. HealthStudio web page, free downloads for community’s contents builders.
  6. Book “The New Warriors in the Digital Society”, Gustavo Tejera, 2014.
  7. 2012 Laureate Computerworld Award, KW Social Network initiative.


Logo KW

Bionic Eye [903]

by System Administrator - Wednesday, October 1, 2014, 16:02

The Bionic Eye

Using the latest technologies, researchers are constructing novel prosthetic devices to restore vision in the blind.

By Various Researchers

See full infographic: JPG | PDF 

In 1755, French physician and scientist Charles Leroy discharged the static electricity from a Leyden jar—a precursor of modern-day capacitors—into a blind patient’s body using two wires, one tightened around the head just above the eyes and the other around the leg. The patient, who had been blind for three months as a result of a high fever, described the experience as a flame passing downwards in front of his eyes. This was the first time an electrical device—serving as a rudimentary prosthesis—successfully restored even a flicker of visual perception.

More than 250 years later, blindness is still one of the most debilitating sensory impairments, affecting close to 40 million people worldwide. Many of these patients can be efficiently treated with surgery or medication, but some pathologies cannot be corrected with existing treatments. In particular, when light-receiving photoreceptor cells degenerate, as is the case in retinitis pigmentosa, or when the optic nerve is damaged as a result of glaucoma or head trauma, no surgery or medicine can restore the lost vision. In such cases, a visual prosthesis may be the only option. Similar to cochlear implants, which stimulate auditory nerve fibers downstream of damaged sensory hair cells to restore hearing, visual prostheses aim to provide patients with visual information by stimulating neurons in the retina, in the optic nerve, or in the brain’s visual areas.

In a healthy retina, photoreceptor cells—the rods and cones—convert light into electrical and chemical signals that propagate through the network of retinal neurons down to the ganglion cells, whose axons form the optic nerve and transmit the visual signal to the brain. (See illustration.) Prosthetic devices work at different levels downstream from the initial reception and biochemical conversion of incoming light photons by the pigments of photoreceptor rods and cones at the back of the retina. Implants can stimulate the bipolar cells directly downstream of the photoreceptors, for example, or the ganglion cells that form the optic nerve. Alternatively, for pathologies such as glaucoma or head trauma that compromise the optic nerve’s ability to link the retina to the visual centers of the brain, prostheses have been designed to stimulate the visual system at the level of the brain itself. (See illustration.)

While brain prostheses have yet to be tested in people, clinical results with retinal prostheses are demonstrating that the implants can enable blind patients to locate and recognize objects, orient themselves in an unfamiliar environment, and even perform some reading tasks. But the field is young, and major improvements are still necessary to enable highly functional restoration of sight.

Henri Lorach is currently a visiting researcher at Stanford University, where he focuses on prosthetic vision and retinal signal processing.




Substitutes for Lost Photoreceptors

by Daniel Palanker

 SEEING IN PIXELS: Photovoltaic arrays of pixels can be implanted on top of the retinal pigment epithelium (shown here in a rat eye, right), where they stimulate activity in the retinal neurons downstream of damaged photoreceptors.


In the subretinal approach to visual prosthetics, electrodes are placed between the retinal pigment epithelium (RPE) and the retina. (See illustration.) There, they stimulate the nonspiking inner retinal neurons—bipolar, horizontal, and amacrine cells—which then transmit neural signals down the retinal network to the retinal ganglion cells (RGCs) that propagate to the brain via the optic nerve. Stimulating the retinal network helps preserve some aspects of the retina’s natural signal processing, such as the “flicker fusion” that allows us to see video as a smooth motion, even though it is composed of frames with static images; adaptation to constant stimulation; and the nonlinear integration of signals as they flow through the retinal network, a key aspect of high spatial resolution. Electrical pulses lasting several milliseconds provide selective stimulation of the inner retinal neurons and avoid direct activation of the ganglion cells and their axons, which would otherwise considerably limit patients’ ability to interpret the spatial layout of a visual scene.


The Boston Retinal Implant Project, a multidisciplinary team of scientists, engineers, and clinicians at research institutions across the U.S., is developing a retinal prosthesis that transmits information from a camera mounted on eyeglasses to a receiving antenna implanted under the skin around the eye using radiofrequency telemetry—technology similar to radio broadcast. The decoded signal is then delivered to an implanted subretinal electrode array via a cable that penetrates into the eye. The information delivered to the retina by this device is not related to direction of gaze, so to survey a scene a patient must move his head, instead of just his eyes.

The Alpha IMS subretinal implant, developed by Retina Implant AG in Reutlingen, Germany, rectifies this problem by including a subretinal camera, which converts light in each pixel into electrical currents. This device has been successfully tested in patients with advanced retinitis pigmentosa and was recently approved for experimental clinical use in Europe. Visual acuity with this system is rather limited: most patients test no better than 20/1000, except for one patient who reached 20/550.1 The Alpha IMS system also needs a bulky implanted power supply with cables that cross the sclera and requires complex surgery, with associated risk of complications.


STIMULATING ARRAY: This prototype suprachoroidal array, which is implanted behind the choroid, can be larger than prostheses inserted in front of or behind the retina.


To overcome these challenges, my colleagues and I have developed a wireless photovoltaic subretinal prosthesis, powered by pulsed light. Our system includes a pocket computer that processes the images captured by a miniature video camera mounted on video goggles, which project these images into the eye and onto a subretinally implanted photodiode array. Photodiodes in each pixel convert this light into pulsed current to stimulate the nearby inner retinal neurons. This method for delivering the visual information is completely wireless, and it preserves the natural link between ocular movement and image perception.

Our system uses invisible near-infrared (NIR, 880–915 nm) wavelengths to avoid the perception of bright light by the remaining functional photoreceptors. It has been shown to safely elicit and modulate retinal responses in normally sighted rats and in animals blinded by retinal degeneration.2 Arrays with 70 micrometer pixels restored visual acuity in blind rats to half the natural level, corresponding to 20/250 acuity in humans. Based on stimulation thresholds observed in these studies, we anticipate that pixel size could be reduced by a factor of two, improving visual acuity even further. Ease of implantation and tiling of these wireless arrays to cover a wide visual field, combined with their high resolution, opens the door to highly functional restoration of sight. We are commercially developing this system in collaboration with the French company Pixium Vision, and clinical trials are slated to commence in 2016.


FOLLOW THE LIGHT: A blind patient navigates an obstacle course without the assistance of her guide-dog, thanks to a head-mounted camera and a backpack computer, which gather and process visual information before delivering a representation of the visual scene via her suprachoroidal retinal prosthesis.


Fabio Benfenati of the Italian Institute of Technology in Genoa and Guglielmo Lanzani at the institute’s Center for Nanoscience and Technology in Milan are also pursuing the subretinal approach to visual prostheses, developing a device based on organic polymers that could simplify implant fabrication.3 So far, subretinal light-sensitive implants appear to be a promising approach to restoring sight to the blind.

Daniel Palanker is a professor in the Department of Ophthalmology and Hansen Experimental Physics Laboratory at Stanford University.


  1. K. Stingl et al., “Functional outcome in subretinal electronic implants depends on foveal eccentricity,” Invest Ophthalmol Vis Sci, 54:7658-65, 2013.
  2. Y. Mandel et al., “Cortical responses elicited by photovoltaic subretinal prostheses exhibit similarities to visually evoked potentials,” Nat Commun, 4:1980, 2013.
  3. D. Ghezzi et al., “A polymer optoelectronic interface restores light sensitivity in blind rat retinas,”Nat Photonics, 7:400-06, 2013.


Behind the Eye

by Lauren Ayton and David Nayagam

 NEW SIGHT: A recipient of a prototype suprachoroidal prosthesis tests the device with Bionic Vision Australia (BVA) researchers.


Subretinal prostheses implanted between the retina and the RPE, along with epiretinal implants that sit on the surface of the retina (see below), have shown good results in restoring some visual perception to patients with profound vision loss. However, such devices require technically challenging surgeries, and the site of implantation limits the potential size of these devices. Epiretinal and subretinal prostheses also face challenges with stability and the occurrence of adverse intraocular events, such as infection or retinal detachment. Due to these issues, researchers have been investigating a less invasive and more stable implant location: between the vascular choroid and the outer sclera. (See illustration.)

Like subretinal prostheses, suprachoroidal implants utilize the bipolar cells and the retinal network down to the ganglion cells, which process the visual information before relaying it to the brain. But devices implanted in this suprachoroidal location can be larger than those implanted directly above or below the retina, allowing them to cover a wider visual field, ideal for navigation purposes. In addition, suprachoroidal electrode arrays do not breach the retina, making for a simpler surgical procedure that should reduce the chance of adverse events and can even permit the device to be removed or replaced with minimal damage to the surrounding tissues.

Early engineering work on suprachoroidal device design began in the 1990s with research performed independently at Osaka University in Japan1 and the Nano Bioelectronics and Systems Research Center of Seoul National University in South Korea.2 Both these groups have shown proof of concept in bench testing and preclinical work, and the Japanese group has gone on to human clinical trials with promising results.3 Subsequently, a South Korean collaboration with the University of New South Wales in Australia continued suprachoroidal device development.

More recently, our groups, the Bionics Institute and the Centre for Eye Research Australia, working as part of the Bionic Vision Australia (BVA) partnership, ran a series of preclinical studies between 2009 and 2012.4 These studies demonstrated the safety and efficacy of a prototype suprachoroidal implant, made up of a silicone carrier with 33 platinum disc-shaped electrodes that can be activated in various combinations to elicit the perception of rudimentary patterns, much like pixels on a screen. Two years ago, BVA commenced a pilot trial, in which researchers implanted the prototype in the suprachoroidal space of three end-stage retinitis pigmentosa patients who were barely able to perceive light. The electrode array was joined to a titanium connector affixed to the skull behind the ear, permitting neurostimulation and electrode monitoring without the need for any implanted electronics.5 In all three patients, the device proved stable and effective, providing enough visual perception to better localize light, recognize basic shapes, orient in a room, and walk through mobility mazes with reduced collisions.6 Preparation is underway for future clinical trials, which will provide subjects with a fully implantable device with twice the number of electrodes.

Meanwhile, the Osaka University group, working with the Japanese company NIDEK, has been developing an intrascleral prosthetic device, which, unlike the Korean and BVA devices, is implanted in between the layers of the sclera rather than in the suprachoroidal space. In a clinical trial of this device, often referred to as suprachoroidal-transretinal stimulation (STS), two patients with advanced retinitis pigmentosa showed improvement in spatial resolution and visual acuity over a four-week period following implantation.3

Future work will be required to fully investigate the difference in visual perception provided by devices implanted in the various locations in the eye, but the initial signs are promising that suprachoroidal stimulation is a safe and viable clinical option for patients with certain degenerative retinal diseases.

Lauren Ayton is a research fellow and the bionic eye clinical program leader at the University of Melbourne’s Centre for Eye Research Australia. David Nayagam is a research fellow and the bionic eye chronic preclinical study leader at the Bionics Institute in East Melbourne and an honorary research fellow at the University of Melbourne.


  1. H. Sakaguchi et al., “Transretinal electrical stimulation with a suprachoroidal multichannel electrode in rabbit eyes,” Jpn J Ophthalmol, 48:256-61, 2004.
  2. J.A. Zhou et al., “A suprachoroidal electrical retinal stimulator design for long-term animal experiments and in vivo assessment of its feasibility and biocompatibility in rabbits,” J Biomed Biotechnol, 2008:547428, 2008.
  3. T. Fujikado et al., “Testing of semichronically implanted retinal prosthesis by suprachoroidal-transretinal stimulation in patients with retinitis pigmentosa,” Invest Ophthalmol Vis Sci, 52:4726-33, 2011.
  4. D.A.X. Nayagam et al., “Chronic electrical stimulation with a suprachoroidal retinal prosthesis: a preclinical safety and efficacy study,” PLOS ONE, 9:e97182, 2014.
  5. A.L. Saunders et al., “Development of a surgical procedure for implantation of a prototype suprachoroidal retinal prosthesis,” Clin & Exp Ophthalmol, 10.1111/ceo.12287, 2014.
  6. M.N. Shivdasani et al., “Factors affecting perceptual thresholds in a suprachoroidal retinal prosthesis,” Invest Ophthalmol Vis Sci, in press. 


Shortcutting the Retina

By Mark Humayun, James Weiland, and Steven Walston

TINY IMPLANTS: The Argus II retinal implant, which was approved for sale in Europe in 2011 and in the U.S. in 2013, consists of a 3 mm x 5 mm 60-electrode array (shown here) and an external camera and video-processing unit. Users of this implant are able to perceive contrasts between light and dark areas.


Bypassing upstream retinal processing, researchers have developed so-called epiretinal devices that are placed on the anterior surface of the retina, where they stimulate the ganglion cells that are the output neurons of the eye. This strategy targets the last cell layer of the retinal network, so it works regardless of the state of the upstream neurons. (See illustration.)

In 2011, Second Sight obtained approval from the European Union to market its epiretinal device, the Argus II Visual Prosthesis System, which allowed clinical trial subjects who had been blind for several years to recover some visual perception, such as basic shape recognition and, occasionally, reading ability. In 2013, the FDA approved the device, which uses a glasses-mounted camera to capture visual scenes and wirelessly transmits this information as electrical stimulation patterns to a 6 x 10 microelectrode array. The array is surgically placed in the macular region, responsible in a healthy retina for high-acuity vision, and covers an area of approximately 20° of visual space.

In a clinical trial of 30 patients, participants were able to locate a high-contrast square on a computer monitor more accurately with the device than without it, and when asked to track a moving high-contrast bar, roughly half could discriminate the direction of the bar’s movement better than without the system.1 The increased visual acuity has also enabled patients to read large letters, albeit at a slow rate, and has improved the patients’ mobility.2 With the availability of the Argus II, patients with severe retinitis pigmentosa have the first treatment that can actually improve vision. To date, the system has been commercially implanted in more than 50 patients.

Several other epiretinal prostheses have shown promise, though none have received regulatory approval. Between 2003 and 2007, Intelligent Medical Implants tested a temporarily implanted, 49-electrode prototype device in eight patients, who reported seeing spots of light when electrodes were activated. Most of these prototype devices were only implanted for a few months, however, and with no integrated camera, patients could not activate the device outside the clinic, limiting the evaluation of the prosthesis’s efficacy. This group has since re-formed as Pixium Vision, the company also collaborating with Daniel Palanker’s group at Stanford on a subretinal device, and has now developed a permanent epiretinal implant that is in clinical trials. The company is also planning trials of a 150-electrode device that it hopes will further improve visual resolution.

Future developments in this area will aim to improve the spatial resolution of the stimulated vision; increase the field of view that can be perceived; and increase the number of electrodes. Smaller electrodes would activate fewer retinal ganglion cells, which would result in higher resolution. These strategies will be rigorously tested, and, if successful, may enable retinal prostheses that provide an even better view of the world.

Mark Humayun is Cornelius J. Pings Chair in Biomedical Sciences at the University of Southern California, where James Weiland is a professor of ophthalmology and biomedical engineering. Steven Walston is a graduate student in the Bioelectronic Research Lab at the university.


  1. M.S. Humayun et al., “Interim results from the international trial of Second Sight’s visual prosthesis,” Ophthalmology, 119:779-88, 2012.
  2. L. da Cruz et al., “The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss,” Br J Ophthalmol, 97:632-36, 2013.


Into the Brain

By Collette Mann, Arthur Lowery, and Jeffrey V. Rosenfeld

DIRECT TO BRAIN: Gennaris’s bionic-vision system—which includes an eyeglasses-mounted camera that receives visual information (below), a small computerized vision processor (right), and 9 mm x 9 mm electronic tiles (far right) that are implanted into one hemisphere of the visual cortex at the back of the brain—is expected to enter human trials next year.


In addition to targeting the neurons of the eye, researchers have also targeted the brain to stimulate artificial vision in humans. Early experiments in epileptic patients with persistent seizures, performed by the German neurologists and neurosurgeons Otfrid Förster in 1929 and Fedor Krause and Heinrich Schum in 1931, showed that electrical stimulation of the occipital pole, the most posterior part of each brain hemisphere, resulted in sensations of light flashes, termed phosphenes. By the mid-1950s, Americans John C. Button, an osteopath and later MD, and Tracy Putnam, then chief of neurosurgery at Cedars-Sinai Hospital in Los Angeles, had implanted stainless steel wires connected to a simple stimulator into the cortices of four blind people, and the patients subsequently reported seeing flashes of light.

The first functional cortical visual prosthesis was produced in England in 1968, when Giles Brindley, a physiologist, and Walpole Lewin, a neurosurgeon, both at Cambridge University, implanted 80 surface electrodes embedded in a silicone cap in the right occipital cortex of a patient. Each electrode connected to one of 80 corresponding extracranial radio receivers, which generated simple, distinctly located phosphene shapes. The patient could point with her hand to their location in her visual field. When more than one electrode at a time was stimulated, simple patterns emerged.

The subsequent aim of the late William H. Dobelle was to provide patients with visual images comprising discrete sets of phosphenes—in other words, artificial vision. Dobelle had begun studying electrical stimulation of the visual cortex in the late 1960s with sighted patients undergoing surgery to remove occipital lobe tumors. He subsequently implanted surface-electrode arrays, first temporarily, then permanently, in the visual cortices of several blind volunteers. However, it was not until the early 2000s that the technology became available to connect a miniature portable camera and computer to the electrodes for practical conversion of real-world sights into electrical signals. With the resultant cortical stimulation, a patient was able to recognize large-print letters and the outline of images.



To elicit phosphenes, however, the surface electrodes used in these early cortical prostheses required large electrical currents (~3 mA–12 mA), which risked triggering epileptic seizures or debilitating migraines. The devices also required external cables that penetrated the skull, risking infection. Today, with the use of wireless technology, a number of groups are aiming to improve cortical vision prostheses, hoping to provide benefit to millions of people with currently incurable blindness.

One promising device from our group is the Gennaris bionic-vision system, which comprises a digital camera on a glasses frame. Images are transmitted into a small computerized vision processor that converts the picture into waveform patterns, which are then transmitted wirelessly to small electronic tiles that are implanted into the visual cortex located in the back of the brain. Each tile houses 43 penetrating electrodes, and each electrode may generate a phosphene. The patterns of phosphenes will create 2-D outlines of relevant shapes in the central visual field. The device is in the preclinical stage, with the first human trials planned for next year, when we hope to implant four to six tiles per patient to stimulate patterns of several hundred phosphenes that patients can use to navigate the environment, identify objects in front of them, detect movement, and possibly read large print.

Other groups currently developing cortical visual prostheses include teams at the Illinois Institute of Technology, the University of Utah, the École Polytechnique de Montréal in Canada, and Miguel Hernández University in Spain. All these devices follow the same principle of inducing phosphenes that can be visualized by the patient. Many technical challenges must be overcome before such devices can be brought to the clinic, however, including the need to improve implantation techniques. Beyond patient safety, accuracy and repeatability in inserting the device are important for maximizing results.

Development of bionic vision devices is accelerating rapidly due to collaborative efforts using the latest silicon chip and electrode design, computer vision processing algorithms, and wireless technologies. We are optimistic that a range of practical, safe, and effective bionic vision devices will be available over the next decade and that blind individuals will have the ability to “see” their world once again.

Collette Mann is the clinical program coordinator of the Monash Vision Group in Melbourne, Australia, where Arthur Lowery, a professor of electrical engineering, is the director. Jeffrey V. Rosenfeld is head of the Division of Clinical Sciences & Department of Surgery at the Central Clinical School at Monash University and director of the Department of Neurosurgery at Alfred Hospital, which is also in Melbourne.


Logo KW

Bionic Implants [991]

by System Administrator - Thursday, November 13, 2014, 12:31

Meet Bionic Amputee, Nigel Ackland


In 2006, Nigel Ackland had an accident. Working as a metal smelter at the time, his right hand was crushed in an industrial mixer. The hand was so severely damaged that six months later he was forced to have it amputated.

Speaking at Exponential Medicine, Ackland presented himself as an ordinary guy facing an extraordinary challenge—a distinction he shares with millions of fellow amputees. How do you put your life back together? How do you learn to live with your disability?

A year after Ackland lost his hand he said he was beset by fits of sudden raging anger. It was hard enough to physically adapt, but he noticed changes in how people treated him too. Strangers would avoid him or stare, with some mixture of pity, fear, or disgust. The guy he saw in the mirror was a physical and mental wreck.

“Psychologically, I was in a very dark place.”

Then something happened that changed Ackland’s life. He got a call from RSL Steeper, maker of the bebionic robotic hand. The company asked if he’d be interested in becoming the first amputee to test out their product.

Bebionic is a prosthetic hand that myoelectrically senses muscle twitches in an amputee’s stump. Depending on which muscles users twitch, the hand can perform a variety of functional or communicative grip patterns and hand positions.
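The control principle, mapping residual-muscle activity to grip patterns, can be sketched roughly in code. This is a toy illustration: the channel names, thresholds, and grip set are invented for the example and do not describe the bebionic hand's real control scheme.

```python
# Toy sketch of myoelectric control: the amplitudes of two EMG channels
# (normalized 0..1) select a grip. Channel names, thresholds, and the grip
# set are invented here and do not reflect the bebionic hand's actual logic.

def select_grip(flexor_amp, extensor_amp, threshold=0.3):
    """Pick a grip pattern from two muscle-channel amplitudes."""
    flexor_on = flexor_amp >= threshold
    extensor_on = extensor_amp >= threshold
    if flexor_on and extensor_on:
        return "point"        # co-contraction of both muscles
    if flexor_on:
        return "power_grip"   # flexor twitch alone
    if extensor_on:
        return "pinch"        # extensor twitch alone
    return "relaxed"          # no signal above threshold

print(select_grip(0.8, 0.1))
```

Real systems add signal filtering and per-user calibration on top of this basic idea, but the core mapping from twitch to grip is the same.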

We first wrote about Ackland and his bionic hand back in 2012—when he first began releasing video showing what he could do—and followed up again last year.

With practice and experience, Ackland is now a pro. But it isn’t just about what the technology enables him to do. His life has positively changed pretty much across the board.

“People still stop and stare,” he says, “But it’s not out of fear or pity anymore.”

He says people are more ready to accept him as he is, and of course, are full of curiosity about his bionic hand. They want to shake hands, see what he can do with it.

Ackland’s story offers a critical glimpse behind the scenes. The human element is all too often lost in the flush of excitement, in the feverish development of technology. But the point, ultimately, is to help people regain a sense of wholeness.

“I’m just an ordinary guy fortunate enough to wear an extraordinary piece of technology,” he says.

Ackland told us every thirty seconds someone becomes an amputee. Since he began testing the hand, another twenty people have joined him. And no doubt, the technology can’t come fast enough for everyone else that could benefit.

Encouragingly, the technology continues to develop. We’ve covered a number of prosthetic limbs—arms and legs—that intelligently respond to a user’s thoughts and intentions to move. One team, out of Case Western, is even working on a prosthetic that can provide the user with a rudimentary sense of touch.

In the coming years, we hope to see continued improvement and wider availability. Ackland is, undoubtedly, a powerful model of just how much folks stand to benefit.

“I do believe life changing doesn’t have to be life ending,” he told us.

Logo KW

Bipolar Disorder [1517]

by System Administrator - Tuesday, October 13, 2015, 17:33

Bipolar Disorder

by SearchMedica

Education on Circadian Cycle Improves Depression in Bipolar Disorder

By Mark L. Fuerst

In bipolar disorder, disturbances of biological rhythm often lead to mood swings and relapses, and impairments in biological rhythm may predict poor functioning and quality of life. These authors evaluated the effect of psychoeducation on biological rhythm and on the reduction of depressive, anxious, and manic symptoms at 12 months’ follow-up. This randomized clinical trial included 61 young adults aged 18 to 29 years who were diagnosed with bipolar disorder.

Biological rhythm was assessed with the Biological Rhythm Interview Assessment in Neuropsychiatry (BRIAN). This instrument was developed for the clinical evaluation of biological rhythm disturbance experienced by patients suffering from mental disorders. It consists of 21 items divided into 5 main areas related to circadian rhythm disturbance in psychiatric patients, namely sleep, activities, social rhythms, eating pattern, and predominant rhythm. In particular, the BRIAN assesses the frequency of problems related to the maintenance of circadian rhythm regularity.

The bipolar patients in the study were randomized to receive either a combined intervention of psychoeducation plus medication (32 patients) or treatment-as-usual with medication alone (29 patients). The combined intervention seemed to be more effective than treatment-as-usual in improving depressive symptoms post-intervention, as well as in regulating the sleep/social domain at 6 months’ follow-up.

The authors noted that improvement of depressive symptoms, as well as regulation of sleep and social activities, is known to prevent the onset of episodes of bipolar disorder and therefore to improve long-term outcomes.


Gambling Problems Associated with Bipolar Type 2 Disorder

By Mark L. Fuerst 

Bipolar disorder is associated with elevated rates of problem gambling, and mood disturbances such as hypomanic experiences are also associated with elevated rates of gambling problem symptoms. However, little is known about rates in the different presentations of bipolar illness.

The present authors set out to determine the prevalence and distribution of problem gambling in people with bipolar disorder in the United Kingdom. They used the Problem Gambling Severity Index to measure gambling problems in 635 bipolar disorder patients, with a particular focus on those with bipolar type 2 disorder.

The results show that moderate to severe gambling problems were 4 times higher in people with bipolar disorder than in the general population, and were associated with type 2 disorder, a history of suicidal ideation or attempt, and rapid cycling.

The major finding, the authors state, is that those with a diagnosis of bipolar type 2 disorder were at significantly higher risk of gambling problems than those with a diagnosis of bipolar type 1 disorder. The data suggest that mild mood elevation involving enhanced reward focus, sleeplessness, and distractibility constitutes a particular risk factor for a gambling problem among these patients.

In this study, patients with bipolar disorder who were at risk of problem gambling were likely to be younger and to have an earlier illness onset than patients at low risk, and were also more likely to work in service industries or to be unemployed. In contrast to previous studies in the general population and in bipolar disorder, there was not a higher prevalence of problem gambling in men compared with women.

In conclusion, the authors state that about 1 in 10 patients with bipolar disorder may be at moderate to severe risk of problem gambling, possibly associated with suicidal behavior and a course of rapid cycling. Given the elevated rates of gambling problems in bipolar type 2 disorder, they recommend that clinicians routinely assess gambling problems in these patients.




Logo KW

Bipolar patient develops mHealth app to help track mood disorders [1396]

by System Administrator - Tuesday, September 8, 2015, 20:33

Bipolar patient develops mHealth app to help track mood disorders

Logo KW

Blind Woman Receives Bionic Eye, Reads a Clock With Elation [1638]

by System Administrator - Tuesday, January 19, 2016, 23:46

Blind Woman Receives Bionic Eye, Reads a Clock With Elation



Recently, the BBC broadcast the reaction of Rhian Lewis, a 49-year-old blind mother of two, as she read a clock correctly using her right eye for the first time in 16 years.

Lewis was understandably emotional, and as the first patient in the UK to receive one of the world’s most advanced bionic eye implants—it was one for the books.

At the age of five, Lewis was diagnosed with retinitis pigmentosa, a condition that impairs the light-detecting cells—known as photoreceptors—in the retina, making it unable to absorb and process light. As a result, Lewis’s right eye is completely blind and her left eye has next to no vision.

Though the condition has no cure in the traditional sense, Lewis’s optic nerve and the brain circuitry necessary for vision have remained undamaged. The photoreceptors, then, are the only elements needing replacement.

The solution?

A tiny 3 x 3 mm retinal implant chip developed by the German engineering firm Retina Implant AG. The chip, part of an NHS study of the technology, was implanted into the back of Lewis’s right eye during a daylong procedure at Oxford’s John Radcliffe Hospital.

Retina Implant AG’s tiny chip contains 1,600 electrodes—equivalent to less than one percent of one megapixel—which capture light as it enters the eye and activate the nerve cells of the inner retina. These then send electrical signals to the brain through the optic nerve. A small computer is placed underneath the skin behind the ear, with an exterior magnetic coil sitting outside the skin to power the computer.

Though the technology is certainly improving, the device is not yet perfect, nor does it grant perfect vision.

Patients see flashes of light when the implant is turned on, and after a few weeks, the brain begins to make sense of these flashes, forming them into meaningful shapes. With a small handheld wireless device, Lewis uses dials to modify the sensitivity, contrast, and frequency of the implant.

The images she sees are not seamless—objects are grainy and appear only in black and white—though the implant is indeed life-altering for the blind.

“It’s been maybe eight years that I’ve had any sort of idea of what my children look like,” Lewis told The Guardian. “Now, when I locate something, especially like a spoon or a fork on the table, it’s pure elation. I just get so excited that I’ve got something right.”

Bionic eye technology isn’t new, though this is one of the best versions to date.

In 2012, Robin Miller was one of the first patients to receive an implant; yet his was built to last for 18 months, while Lewis’s implant may last up to 5 years. Also in 2012, Australian Dianne Ashworth, who suffered from the same condition as Lewis, received a retinal implant. While the technology was groundbreaking at the time, it contained a mere 24 electrodes, compared to the 1,600 in Lewis’s.

In 2013, the Argus II artificial retina, developed by Second Sight with contributions from the Lawrence Livermore National Laboratory, became the first medical visual prosthetic of its kind to receive FDA approval. Yet this device contained only 60 electrodes and required patients to wear a sunglasses-like visor.

The Retina Implant AG device Lewis tested out was itself version two. This version has 100 more electrodes, better resolution, lasts longer, and consumes less power.

With 285 million people estimated to be visually impaired worldwide, there is no question that this technology is in high demand and holds massive potential to change the lives of millions. As the technology matures in coming years, the implants are likely to provide better sight even as they become less invasive and longer lasting. And hopefully, they’ll become more affordable and accessible too.

Most significantly, the possibility of full facial recognition for those who haven’t seen loved ones in years no longer seems like a miracle, but a possibility in a not too distant future.


Alison E. Berman

  • Staff Writer at Singularity University
  • Alison tells the stories of purpose-driven leaders and is fascinated by various intersections of technology and society. When not keeping a finger on the pulse of all things Singularity University, you'll likely find Alison in the woods sipping coffee and reading philosophy (new book recommendations are welcome).



Logo KW


by System Administrator - Tuesday, October 7, 2014, 18:31



Written By: Peniel M. Dimberu

With the recent and highly publicized death of actor Robin Williams, depression is once again making national headlines. And for good reason. Usually, the conversation about depression turns to the search for effective treatments, which currently include cognitive behavioral therapy and drugs such as selective serotonin reuptake inhibitors (SSRIs).

However, an equally important issue is the timely and proper diagnosis of depression.

Currently, depression is diagnosed by a physical and psychological examination, but it mostly depends on self-reporting of subjective symptoms like depressed mood, lack of motivation, and changes in appetite and sleep patterns. Many people who might want to avoid a depression diagnosis for various reasons can fake their way through this self-reporting, making it likely that depression is actually under-diagnosed.

Therefore, an objective test could be an important development in properly diagnosing and treating depression. Scientists at Northwestern University may have developed such a diagnostic tool, one that requires no more than a simple test tube of blood.

A team of researchers, led by Dr. Eva Redei and Dr. David Mohr, found that blood levels of nine biomarkers—molecules found in the body and associated with particular conditions—were significantly different in depressed adults when compared to non-depressed adults.

The new study, published in the current issue of Translational Psychiatry, builds upon earlier work by Dr. Redei, who found the levels of 11 other biomarkers to be different in depressed versus non-depressed teenagers.

While this work is still in the early stages, it represents an important advancement in the correct diagnosis and treatment of major depressive disorder, which currently has a lifetime prevalence of 17% among US adults.

The new test looks at the levels of nine RNA markers in the patient’s blood. In case you forgot your molecular biology, RNA is the messenger molecule made using DNA as a template. In turn, cells then use RNA as a template to make proteins, which are the actual machines that do the work specified by the DNA.

The idea is that some genes can be overexpressed or underexpressed during depression and if those genes can be identified by looking at the RNA made from them, an objective way of identifying depression could be developed.

The researchers from Northwestern initially looked at the RNA levels of 26 genes in adolescents and young adults (ages 15-19). They found that 11 of them were significantly different in patients that had previously been diagnosed with depression versus those who were never diagnosed. They then tried to apply the same approach to adults and found significant differences in nine biomarkers.
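The statistical comparison at the heart of this approach, asking whether a transcript's measured level differs between two groups, can be sketched simply. The expression values below are invented for illustration and are not data from the study; the sketch computes Welch's t statistic for one hypothetical biomarker.

```python
# Toy sketch of a biomarker comparison: for one transcript, compute Welch's
# t statistic between two groups of measured RNA levels. All values here are
# invented for illustration; they are not data from the Northwestern study.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

depressed     = [5.1, 4.8, 5.6, 5.3, 4.9]   # hypothetical expression levels
non_depressed = [3.2, 3.5, 3.1, 3.6, 3.3]

print(round(welch_t(depressed, non_depressed), 2))
```

A large absolute t value flags the transcript as differentially expressed; in practice each gene's statistic would be converted to a p-value and corrected for testing many genes at once.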

Interestingly, the biomarkers that were found to differ in depressed versus non-depressed patients were actually different in young people and adults, implying that depression is genetically different when comparing different age groups.

Furthermore, the test was also able to identify differences between those who went through cognitive behavioral therapy and showed improvement versus those who showed no improvement after therapy. This kind of information can help predict the proper treatment regimen for patients, increasing their chance of going into remission.

While this work is still in the early stages and will take more studies to establish its accuracy and clinical usefulness, it represents an exciting development in diagnostics.

As medicine and genomics continue to change with the development of ever-increasingly powerful and cost-effective technology, we expect to get better at identifying and treating diseases before they take a significant toll.

These new blood tests have the potential to change our approach to diagnosing depression as well as selecting the proper treatment.

Sadly, many depression patients are resistant to the most common treatments so the need for new, more effective treatments is great. Finding differences in the expression of certain genes during depression could even lead to a clearer understanding of the disease, which in turn could lead to improved treatments down the road.

Depression is a thief that steals a productive and enjoyable life from far too many people around the world. Hopefully, this line of research will one day make it a thing of the past like smallpox.



Logo KW

Boom in gene-editing studies amid ethics debate over its use [1511]

by System Administrator - Monday, October 12, 2015, 20:43

In this photo provided by UC Berkeley Public Affairs, taken June 20, 2014 Jennifer Doudna, right, and her lab manager, Kai Hong, work in her laboratory in Berkeley, Calif. The hottest tool in biology has scientists using words like revolutionary as they describe the long-term potential: wiping out certain mosquitoes that carry malaria, treating genetic diseases like sickle-cell, preventing babies from inheriting a life-threatening disorder. "We need to try to get the balance right," said Doudna. She helped develop new gene-editing technology and hears from desperate families, but urges caution in how it's eventually used in people. (Cailey Cotner/UC Berkeley via AP) 

Boom in gene-editing studies amid ethics debate over its use

by Lauran Neergaard

The hottest tool in biology has scientists using words like revolutionary as they describe the long-term potential: wiping out certain mosquitoes that carry malaria, treating genetic diseases like sickle cell, preventing babies from inheriting a life-threatening disorder. 

It may sound like sci-fi, but research into genome editing is booming. So is a debate about its boundaries, what's safe and what's ethical to try in the quest to fight disease.

Does the promise warrant experimenting with human embryos? Researchers in China already have, and they're poised to in Britain.

Should we change people's genes in a way that passes traits to future generations? Beyond medicine, what about the environmental effects if, say, altered mosquitoes escape before we know how to use them?

"We need to try to get the balance right," said Jennifer Doudna, a biochemist at the University of California, Berkeley. She helped develop new gene-editing technology and hears from desperate families, but urges caution in how it's eventually used in people.

The U.S. National Academies of Science, Engineering and Medicine will bring international scientists, ethicists and regulators together in December to start determining that balance. The biggest debate is whether it ever will be appropriate to alter human heredity by editing an embryo's genes.

"This isn't a conversation on a cloud," but something that families battling devastating rare diseases may want, Dr. George Daley of Boston Children's Hospital told specialists meeting this week to plan the ethics summit. "There will be a drive to move this forward."

Laboratories worldwide are embracing a technology to precisely edit genes inside living cells—turning them off or on, repairing or modifying them—like a biological version of cut-and-paste software. Researchers are building stronger immune cells, fighting muscular dystrophy in mice and growing human-like organs in pigs for possible transplant. Biotech companies have raised millions to develop therapies for sickle cell disease and other disorders.

The technique has a wonky name—CRISPR-Cas9—and a humble beginning.


In this photo taken Sept. 9, 2015, Kevin Esvelt poses for a photo at Harvard University's Wyss Institute in Boston. The hottest tool in biology has scientists using words like revolutionary as they describe the long-term potential: wiping out certain mosquitoes that carry malaria, treating genetic diseases like sickle-cell, preventing babies from inheriting a life-threatening disorder. Esvelt's projects include genetically manipulating a mosquito species to fight malaria. (AP Photo/Rodrique Ngowi) 

Doudna was studying how bacteria recognize and disable viral invaders, using a protein she calls "a genetic scalpel" to slice DNA. That system turned out to be programmable, she reported in 2012, letting scientists target virtually any gene in many species using a tailored CRISPR recipe.

There are older methods to edit genes, including one that led to an experimental treatment for the AIDS virus, but the CRISPR technique is faster and cheaper and allows altering of multiple genes simultaneously.

"It's transforming almost every aspect of biology right now," said National Institutes of Health genomics specialist Shawn Burgess.

CRISPR's biggest use has nothing to do with human embryos. Scientists are engineering animals with human-like disorders more easily than ever before, to learn to fix genes gone awry and test potential drugs.

Engineering rodents to harbor autism-related genes once took a year. It takes weeks with CRISPR, said bioengineer Feng Zhang of the Broad Institute at MIT and Harvard, who also helped develop and patent the CRISPR technique. (Doudna's university is challenging the patent.)

A peek inside an NIH lab shows how it works. Researchers inject a CRISPR-guided molecule into microscopic mouse embryos, to cause a gene mutation that a doctor suspects of causing a patient's mysterious disorder. The embryos will be implanted into female mice that wake up from the procedure in warm blankets to a treat of fresh oranges. How the resulting mouse babies fare will help determine the gene defect's role.

Experts predict the first attempt to treat people will be for blood-related diseases such as sickle cell, caused by a single gene defect that's easy to reach. The idea is to use CRISPR in a way similar to a bone marrow transplant, but to correct someone's own blood-producing cells rather than implanting donated ones.

"It's like a race. Will the research provide a cure while we're still alive?" asked Robert Rosen of Chicago, who has one of a group of rare bone marrow abnormalities that can lead to leukemia or other life-threatening conditions. He co-founded the MPN Research Foundation, which has begun funding some CRISPR-related studies.


In this photo provided by UC Berkeley Public Affairs, taken June 20, 2014, Jennifer Doudna, right, and her lab manager, Kai Hong, work in her laboratory in Berkeley, Calif. Doudna helped develop the new gene-editing technology and hears from desperate families, but urges caution in how it's eventually used in people. (Cailey Cotner/UC Berkeley via AP)

So why the controversy? CRISPR made headlines last spring when Chinese scientists reported the first-known attempt to edit human embryos, working with unusable fertility clinic leftovers. They aimed to correct a deadly disease-causing gene but it worked in only a few embryos and others developed unintended mutations, raising fears of fixing one disease only to cause another.

If ever deemed safe enough to try in pregnancy, that type of gene change could be passed on to later generations. Then there are questions about designer babies, altered for other reasons than preventing disease.

In the U.S., the NIH has said it won't fund such research in human embryos.

In Britain, regulators are considering researchers' request to gene-edit human embryos—in lab dishes only—for a very different reason, to study early development.


Medicine aside, another issue is environmental: altering insects or plants in a way that ensures they pass genetic changes through wild populations as they reproduce. These engineered "gene drives" are in very early stage research, too, but one day might be used to eliminate invasive plants, make it harder for mosquitoes to carry malaria or even spread a defect that gradually kills off the main malaria-carrying species, said Kevin Esvelt of Harvard's Wyss Institute for Biologically Inspired Engineering.

No one knows how that might also affect habitats, Esvelt said. His team is calling for the public to weigh in and for scientists to take special precautions. For example, Esvelt said colleagues are researching a tropical mosquito species unlikely to survive cold Boston even if one escaped locked labs.

"There is no societal precedent whatsoever for a widely accessible and inexpensive technology capable of altering the shared environment," Esvelt told a recent National Academy of Sciences hearing.



CRISPR/Cas-derived technology offers the ability to dive into the genome and make a very precise change.

Chinese team performs gene editing on human embryo

by Bob Yirka

A team of researchers in China has announced that they have performed gene editing on human embryos. In their paper, uploaded to the open-access site Protein & Cell (after being rejected by Nature and Science), the researchers defended their research by pointing out that the embryos used were non-viable.

Only recently have scientists had a tool that allows for directly editing a genome. Called CRISPR, it allows a single (defective) gene to be removed from a genome and replaced with another, to prevent genetic diseases. CRISPR has been used to edit animal embryos and adult stem cells, but until now no one had used the technique to edit the genome of human embryos because of ethical concerns—or if they had, they did not acknowledge it publicly. The Chinese team's effort has crossed that ethical line, and the announcement will likely draw condemnation from some and stir a new round of debate over the ethics of conducting such research.

The researchers report that their goal was to see how well CRISPR would work on human embryos. To find out, they collected 86 doubly fertilized embryos from a fertilization clinic—embryos that have been fertilized by two sperm and cannot mature beyond a tiny clump of cells; they die naturally before developing further. The team reports that 71 of the embryos survived long enough for use in the CRISPR experiment. Unfortunately, the researchers found that the technique worked properly in just a fraction of the total, and only a small percentage of those relayed the new gene properly when they divided. They also found that the procedure sometimes spliced the wrong gene segment, inserting new genes in the wrong places—which in normal embryos could cause a new disease. Additionally, of those that were spliced and placed correctly, many were mosaic—a mix of old and new genes—which, in addition to potentially causing a new disease, could lead doctors to misidentify gene-splicing results in normal embryos.

The researchers conclude by suggesting that the problems they encountered should be investigated further before any type of clinical application is begun.                                                                                                    

Explore further:  'CRISPR' science: Newer genome editing tool shows promise in engineering human stem cells

More information: CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes, Protein & Cell, April 2015. DOI: 10.1007/s13238-015-0153-5.

ABSTRACT Genome editing tools such as the clustered regularly interspaced short palindromic repeat (CRISPR)-associated system (Cas) have been widely used to modify genes in model systems including animal zygotes and human cells, and hold tremendous promise for both basic research and clinical applications. To date, a serious knowledge gap remains in our understanding of DNA repair mechanisms in human early embryos, and in the efficiency and potential off-target effects of using technologies such as CRISPR/Cas9 in human pre-implantation embryos. In this report, we used tripronuclear (3PN) zygotes to further investigate CRISPR/Cas9-mediated gene editing in human cells. We found that CRISPR/Cas9 could effectively cleave the endogenous β-globin gene (HBB). However, the efficiency of homologous recombination directed repair (HDR) of HBB was low and the edited embryos were mosaic. Off-target cleavage was also apparent in these 3PN zygotes as revealed by the T7E1 assay and whole-exome sequencing. Furthermore, the endogenous delta-globin gene (HBD), which is homologous to HBB, competed with exogenous donor oligos to act as the repair template, leading to untoward mutations. Our data also indicated that repair of the HBB locus in these embryos occurred preferentially through the non-crossover HDR pathway. Taken together, our work highlights the pressing need to further improve the fidelity and specificity of the CRISPR/Cas9 platform, a prerequisite for any clinical applications of CRISPR/Cas9-mediated editing.

See also: The ISSCR has responded to the publication of gene editing research in human embryos

via Nature                                        


Logo KW

Brain Activity Identifies Individuals [1514]

de System Administrator - martes, 13 de octubre de 2015, 13:23

This image shows the functional connections in the brain that tend to be most discriminating of individuals. Many of them are between the frontal and parietal lobes, which are involved in complex cognitive tasks. EMILY FINN

Brain Activity Identifies Individuals

By Kerry Grens

Neural connectome patterns differ enough between people to use them as a fingerprint.

Neuroscientists have developed a method to pick out an individual solely by his connectome—a pattern of synchronized neural activity across numerous brain regions.  Researchers had observed previously that brain connectivity is a unique trait, but a new study, published today (October 12) in Nature Neuroscience, demonstrates that neural patterns retain an individual’s signature even during different mental activities.

“What’s unique here is they were able to show it’s not just the functional connectivity—which is how different brain regions are communicating over time when you’re not doing a specific task—but even how the brain is activated during a specific task that is also very fingerprint-like,” said Damien Fair, who uses neuroimaging to study psychopathologies at Oregon Health and Science University but wasn’t involved with the study.

Fair and others said individuated brain scans could be applied to better understand the diversity of mental illnesses often lumped into the same diagnosis. "We don't need to keep going at the average. We have the power to look at individuals," said Todd Braver of Washington University, who did not participate in the study. "To me I find that really exciting."

The research team, based at Yale School of Medicine, extracted data from the Human Connectome Project, which includes functional MRI (fMRI) data from about 1,200 people so far. The Yale team analyzed imaging data from 268 different brain regions in 126 participants. To create a connectome profile for each individual, the researchers measured how strongly the activity of each brain region correlated with the activity of every other brain region, creating an activity correlation matrix.

Each person, it turned out, had a unique activity correlation matrix. The team then used this profile to predict the identity of an individual in fMRI scans from another session.
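The matching idea is conceptually simple: vectorize each subject's correlation matrix and identify a new scan by its most similar stored profile. A minimal sketch in Python—the array shapes, noise model, and use of a plain Pearson-similarity match are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def connectivity_profile(timeseries):
    """Vectorize the upper triangle of a region-by-region activity
    correlation matrix (timeseries: regions x timepoints)."""
    corr = np.corrcoef(timeseries)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

def identify(target_profile, database_profiles):
    """Return the index of the stored subject whose profile correlates
    most strongly with the target profile."""
    scores = [np.corrcoef(target_profile, p)[0, 1] for p in database_profiles]
    return int(np.argmax(scores))

# Toy demo: two subjects, two scanning sessions each.
rng = np.random.default_rng(0)
base = [rng.standard_normal((268, 200)) for _ in range(2)]  # 268 regions
session1 = [connectivity_profile(b) for b in base]
# Session 2: same underlying signal plus session-to-session noise.
session2 = [connectivity_profile(b + 0.3 * rng.standard_normal(b.shape))
            for b in base]

matches = [identify(session2[s], session1) for s in range(2)]
print(matches)  # same-subject profiles should match across sessions
```

Because same-subject profiles stay far more similar across sessions than profiles from different people, the nearest-profile match recovers identity even from noisy repeat scans.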

Depending on the type of fMRI scan assessed, the researchers could nail someone’s identity with up to 99 percent accuracy. Scans taken during mental tasks, rather than resting, made it more difficult, and the accuracy dropped to below 70 percent.

“Even though brain function is always changing, and we saw it’s slightly harder to identify people when they are doing different things, people always looked most similar to themselves” than to another participant, said Emily Finn, a graduate student in Todd Constable’s lab and the lead author of the study.

Fair pointed out that one of the most individualized brain regions is the frontoparietal cortex, which helps to filter incoming information. He has found the same result in his own work on fingerprinting connectivity. “It really seems important for making an individual who we are,” he said.

The ability to identify individuals even during tasks on different days would be important for clinical applications. Mental disorders are often classified by phenotype, or symptoms, that may represent a variety of underlying causes. “These types of technologies I think are going to help us personalize mental health better,” Fair told The Scientist. “We’ll have more information to say specifically what’s happening in your brain.”

Finn’s group was also able to associate a person’s connectome with his or her “fluid intelligence.” This trait is measured by asking people to solve a problem or find a pattern without using language or math skills or learned information. Finn told The Scientist that stronger connections between the prefrontal and parietal lobes, brain regions already known to be involved in higher order cognition, were most indicative of higher fluid intelligence scores. The results “suggest levels of integration of different brain systems are giving rise to superior cognitive ability,” she said.

“It’s not just this idiosyncratic fingerprint that they’re talking about that basically allows you to differentiate one individual from another,” Braver said of the study, “but it pushes the idea that [the connectivity signature is] functionally relevant, that those things may be related to things that we think are interesting individual differences, like intelligence.”

Finn cautioned that the intelligence correlation is more a proof of concept to link brain connectivity with behaviors, rather than something having real-life applications. “Hopefully, we could replace that with some variable, like a neuropsychiatric illness or [predicting] who’s going to respond best to some treatment.”

E.S. Finn et al., “Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity,” Nature Neuroscience, doi:10.1038/nn.4135, 2015.




Logo KW

Brain Freeze [1513]

de System Administrator - martes, 13 de octubre de 2015, 13:29

TIGHT SQUEEZE: Chemical fixation compacts synapses in a mouse brain (left), compared to freezing, which maintains the extracellular space (blue; right). GRAHAM KNOTT

Brain Freeze

By Kerry Grens

A common tissue fixation method distorts the true neuronal landscape.

The paper

N. Korogod et al., “Ultrastructural analysis of adult mouse neocortex comparing aldehyde perfusion with cryo fixation,” eLife, 4:e05793, 2015.

The fix

Soaking brain tissue with chemical fixatives has been the go-to method of preserving specimens for decades. Yet few neuroscientists take into account the physical distortion that these chemicals cause. And even among those who do pay attention, “we don’t really know in quantitative terms how much really changes,” says Graham Knott, a morphologist at the École Polytechnique Fédérale de Lausanne in Switzerland.


Comparing fresh to fixed tissue, Knott and his colleagues found that chemical fixation shrank the tissue by 30 percent. “It raises the question of, ‘What on earth is going on if it shrinks that much?’” says Knott. To find out, they turned to an alternative preservation approach, rapid freezing and low-temperature resin embedding, which was shown in the 1960s to better capture the natural state of the brain. Using a high-pressure version of this cryo-fixation technique, they observed neurons swimming in extracellular space and smaller astrocytes than are seen in chemically fixed samples.


NIH investigator Kevin Briggman says Knott’s technique offers a much more accurate snapshot of the brain. An added bonus is that the elbow room around neurons afforded by cryo fixation makes it easier for automated methods to count cells or analyze structures. The only problem, he adds, is that, in contrast to chemical fixation, “you can’t freeze a whole mouse brain.”

The compromise

Briggman and Knott don’t advocate doing away with fixatives. Rather, Knott says, scientists who use them should consider their effects when interpreting data. “We need to use models that pay very careful attention to how tissue has reacted to chemicals.”

Logo KW

Brain Gain [1473]

de System Administrator - viernes, 2 de octubre de 2015, 22:45


Brain Gain

By Jef Akst

Young neurons in the adult human brain are likely critical to its function.

At a lab meeting of Fred “Rusty” Gage’s group at the Salk Institute for Biological Studies in the mid-1990s, the neuroscientist told his team that he wanted to determine whether new neurons are produced in the brains of adult humans. At the time, adult neurogenesis was well established in rodents, and there had been hints that primate brains also spawned new neurons later in life. But reports of neurogenesis in the adult human brain were sparse and had not been replicated. Moreover, the experiments had relied primarily on autoradiography, which revealed images of cell division but did not follow the fate of new cells, so researchers couldn’t be sure if they really became mature neurons.

Gage’s group, which included clinicians, was familiar with the use of bromodeoxyuridine (BrdU) to monitor the progression of certain cancers. BrdU is an artificial nucleoside that can stand in for thymidine (T) during DNA replication. As cells duplicate their genomes just before they divide, they incorporate BrdU into their DNA. To assess tumor growth, physicians inject the nucleoside substitute into a patient’s bloodstream, then biopsy the tumor and use an antibody to stain for BrdU. The number of BrdU-labeled cells relative to the total number of cells provides an estimate of how quickly the cancer is growing. “If that nucleotide is labeled in such a way that [we] can identify it, you can birthdate individual cells,” Gage says.

Because BrdU goes everywhere in the body, Gage and his colleagues figured that in addition to labeling the patients’ tumors, the artificial base would also label the cells of the brain. If the researchers could get their hands on brain specimens from patients who’d been injected with BrdU, perhaps it would be possible to see new brain cells that had been generated in adults. With a second antibody, they could then screen for cell-type markers to determine if the new cells were mature neurons. “If you can . . . [use] a second antibody to identify the fate of a cell, then that’s pretty definitive,” Gage says.

As Gage’s fellows and postdocs left the lab and got involved in clinical trials involving BrdU injections, they began to keep an eye out for postmortem brain samples that Gage could examine. In 1996, one of them came through. Neurologist Peter Eriksson, who at the time was working at the Sahlgrenska University Hospital in Gothenburg, Sweden, began sending Gage samples from the brains of deceased patients. Every few months, a new sample arrived. And while waiting for the next delivery, Gage and his team were “getting fresh tissue from the coroner’s office to practice staining fresh tissue,” he says, “so that when we got these valuable brains we could see [what] they were doing.”

Soon enough, a clear picture emerged: the human hippocampus, a brain area critical to learning and memory and often the first region damaged in Alzheimer’s patients, showed evidence of adult neurogenesis. Gage’s collaborators in Sweden were getting the same results. Wanting to be absolutely positive, Gage even sent slides to other labs to analyze. In November 1998, the group published its findings, which were featured on the cover of Nature Medicine.1

“When it came out, it caught the fancy of the public as well as the scientific community,” Gage says. “It had a big impact, because it really confirmed [neurogenesis occurs] in humans.”

Fifteen years later, in 2013, the field got its second (and only other) documentation of new neurons being born in the adult human hippocampus—and this time learned that neurogenesis may continue for most of one’s life.2 Neuroscientist Jonas Frisén of the Karolinska Institute in Stockholm and his colleagues took advantage of the aboveground nuclear bomb tests carried out by US, UK, and Soviet forces during the Cold War. Atmospheric levels of 14C have been declining at a known rate since such testing was banned in 1963, and Frisén’s group was able to date the birth of neurons in the brains of deceased patients by measuring the amount of 14C in the cells’ DNA.
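The dating logic rests on a simple lookup: a cell's genomic DNA fixes the atmospheric 14C level at the time of its birth, so matching a measured DNA value against the historical "bomb pulse" curve yields a birth year. A schematic sketch—the curve values below are made-up placeholders, not real atmospheric measurements:

```python
# Hypothetical atmospheric Delta-14C values (per mil) by year; the real
# bomb-pulse curve peaked around 1963 and has declined at a known rate since.
atmosphere = {1965: 700, 1975: 350, 1985: 200, 1995: 110, 2005: 60}

def estimate_birth_year(dna_delta14c):
    """Return the year whose atmospheric 14C level is closest to the level
    measured in the cells' genomic DNA (14C is fixed at cell birth)."""
    return min(atmosphere, key=lambda year: abs(atmosphere[year] - dna_delta14c))

print(estimate_birth_year(340))  # → 1975
```

In practice the published method also models ongoing cell turnover rather than assigning a single birth year, but the core inference is this curve-matching step.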

“What we found was that there was surprisingly much neurogenesis in adult humans,” Frisén says—a level comparable to that of a middle-aged mouse, the species in which the vast majority of adult neurogenesis research is done. “There is hippocampal neurogenesis throughout life in humans.”

But many details remain unclear. How do newly generated neurons in adults influence brain function? Do disruptions to hippocampal neurogenesis play roles in cognitive dysfunction, mood disorders, or even psychosis? Are there ways to increase levels of neurogenesis in humans, and might doing so be therapeutic? Researchers are now seeking to answer these and other questions, while documenting the extent and function of adult neurogenesis in mammals.

Breaking the mold


RODENT NEUROGENESIS: In rodents, there are two populations of neural stem cells in the adult brain. The majority of new neurons are born in the subventricular zone along the lateral ventricle wall and migrate through the rostral migratory stream (RMS) to the olfactory bulb. About one-tenth as many new neurons are produced in the subgranular zone of the dentate gyrus (white) of the hippocampus.
See full infographic: WEB | PDF

In the early 1960s, MIT neurobiologist Joseph Altman used a hypodermic needle to induce lesions in rat brains, while simultaneously injecting tritiated thymidine, a radioactive form of the nucleoside commonly used for tracking DNA synthesis and cell proliferation. He found evidence of new brain cells that had been born at the time of injection, including some neurons and their neuroblast precursors.3

Researchers were immediately skeptical of the results. Long-standing theory held that neurons in the brain that had been damaged or lost could not be replaced; Altman was suggesting the opposite. “They were really good, solid indications, but it was such a strong dogma that neurons couldn’t be generated in the adult brain,” says Frisén. “It wasn’t really until the ’90s, when new techniques came along, [that researchers] showed, yes, indeed, new neurons are added in the rodent brain.”

Those new techniques included BrdU, as well as neuron-specific protein markers and confocal imaging, which together enabled researchers to identify the newly generated cells. Multiple studies subsequently confirmed that neurogenesis occurs in limited regions of the rodent brain, specifically in the olfactory bulb and the dentate gyrus region of the hippocampus. (See illustration.) Research also revealed that the rate of neurogenesis decreases with stress, depression, and anxiety, but increases with exercise and enrichment.


HUMAN NEUROGENESIS: Researchers have also demonstrated that neurogenesis occurs in the adult human brain, though the locations and degree of cell proliferation appear to differ somewhat from rodents.
See full infographic: WEB | PDF

“The field grew enormously at this point,” Gage says, and its focus began to shift from whether new neurons were being produced—they were—to whether those cells formed connections with existing networks to become functional—they do. Turns out, “these newly born cells have 5,000 synapses on their dendrites,” Gage says—well within the range of other neurons in the brain.

But would those rodent results hold up in primates? All signs pointed to yes. In March 1998, Princeton University’s Elizabeth Gould and colleagues found evidence of neurogenesis in the dentate gyrus of adult marmoset monkeys—and the researchers determined that the rate of cell proliferation was affected by stress, just as in rodents.4 Six months later, Gage’s group published its findings based on the clinical samples of human brain tissue. “It was a surprise to me, and I think to most people,” Frisén says. And the point was hammered home with the Frisén group’s analysis of 14C in human brain samples.

“The human evidence now unequivocally suggests that the dentate gyrus in humans undergoes turnover in our lifetime,” says Amar Sahay of Harvard University. “It really begs the question what the functions are of these adult-born neurons.”

Young and excitable

The first step in understanding the function of the new neurons in the adult brain was to characterize the cells themselves. In the late 1990s and early 2000s, researchers delved into the cell biology of neurogenesis, characterizing the populations of stem cells that give rise to the new neurons and the factors that dictate the differentiation of the cells. They also documented significant differences in the behavior of young and old neurons in the rodent brain. Most notably, young neurons are a lot more active than the cells of established hippocampal networks, which are largely inhibited.5,6

“For a period of about four or five weeks, while [the newborn neurons] are maturing, they’re hyperexcitable,” says Gage. “They’ll fire at anything, because they’re young, they’re uninhibited, and they’re integrating into the circuit.”

To determine the functional role of the new, hyperactive neurons, researchers began inhibiting or promoting adult neurogenesis in rodents by various means, then testing the animals’ performance in various cognitive tasks. What they found was fairly consistent: the young neurons seemed to play a role in processing new stimuli and in distinguishing them from prior experiences. For example, if a mouse is placed in a new cage and given time to roam, then subjected to a mild shock, it will freeze for about 40 seconds the next time it is placed in that same environment, in anticipation of a shock. It has no such reaction to a second novel environment. But in an enclosure that has some features in common with the first, fear-inducing cage, the mouse freezes for 20 seconds before seemingly surmising that this is not the cage where it received the initial shock. Knock out the mouse’s ability to produce new neurons, however, and it will freeze for the full 40 seconds. The brain is not able to easily distinguish between the enclosures.

This type of assessment is called pattern separation. While some researchers quibble over the term, which is borrowed from computational neuroscience, most who study hippocampal neurogenesis agree that this is a primary role of new neurons in the adult brain. “While probably five or six different labs have been doing this over the last four or five years, basically everybody’s come to the same conclusion,” Gage says.



The basic idea is that, because young neurons are hyperexcitable and are still establishing their connectivity, they are amenable to incorporating information about the environment. If a mouse is placed in a new cage when young neurons are still growing and making connections, they may link up with the networks that encode a memory of the environment. Just a few months ago, researchers in Germany and Argentina published a mouse study demonstrating how, during a critical period of cellular maturation, new neurons’ connections with the entorhinal cortex, the main interface between the hippocampus and the cortex, and with the medial septum change in response to an enriched environment.7



“The rate at which [new neurons] incorporate is dependent upon experience,” Gage says. “It’s amazing. It means that the new neurons are encoding things when they’re young and hyperexcitable that they can use as feature detectors when they’re mature. It’s like development is happening all the time in your brain.”

Adding support to the new neurons’ role in pattern separation, Sahay presented findings at the 2014 Society for Neuroscience conference that neurogenesis spurs circuit changes known as global remapping, in which overlap between the populations of neurons that encode two different inputs is minimized.8 “We have evidence now that enhancing neurogenesis does enhance global remapping in the dentate gyrus,” says Sahay. “It is important because it demonstrates that stimulating neurogenesis is sufficient to improve this very basic encoding mechanism that allows us to keep similar memories separate.”


NEWBORN PHOTOS: Three different micrographs show new neurons (top and bottom, green; middle, white) generated in the adult mouse brain. Neurogenesis occurs in the dentate gyrus, a V-shape structure within the hippocampus. Bushy dendritic processes extend into a relatively cell-free region called the molecular layer (black in bottom photo) and axons project to the CA3 region of the hippocampus (green “stem” in bottom photo). Astrocytes are stained pink (top).COURTESY OF FRED H. GAGE

Pattern separation is likely not the only role of new neurons in the adult hippocampus. Experiments that have suppressed neurogenesis in adult rats have revealed impairments in learning in a variety of other tasks. More broadly, “we think it has to do with the flexibility of learning,” says Gerd Kempermann of the Center for Regenerative Therapies at the Dresden University of Technology in Germany.

Last year, for example, neuroscientist Paul Frankland of the Hospital for Sick Children in Toronto and his colleagues found evidence that newly generated neurons play a role in forgetting, with increased neurogenesis resulting in greater forgetfulness among mice.9 “If you think about what you’ve done today, you can probably remember in a great deal of detail,” he says. “But if you go back a week or if you go back a month, unless something extraordinary happened, you probably won’t remember those everyday details. So there’s a constant sort of wiping of the slate.” New hippocampal neurons may serve as the “wiper,” he says, “cleaning out old information that, with time, becomes less relevant.”

Conversely, Frankland’s team found, suppressing neurogenesis seems to reinforce memories, making them difficult to unlearn. “We think that neurogenesis provides a way, a mechanism of living in the moment, if you like,” he says. “It clears out old memories and helps form new memories.”

Neurogenesis in the clinic

While studying the function of hippocampal neurogenesis in adult humans is logistically much more difficult than studying young neurons in mice, there is reason to believe that much of the rodent work may also apply to people—namely, that adult neurogenesis plays some role in learning and memory, says Kempermann. “Given that [the dentate gyrus] is so highly conserved and that the mechanisms of its function are so similar between the species—and given that neurogenesis is there in humans—I would predict that the general principle is the same.”

And if it’s true that hippocampal neurogenesis does contribute to aspects of learning involved in the contextualization of new information—an ability that is often impaired among people with neurodegenerative diseases—it’s natural to wonder whether promoting neurogenesis could affect the course of Alzheimer’s disease or other human brain disorders. Epidemiological studies have shown that people who lead an active life—known from animal models to increase neurogenesis—are at a reduced risk of developing dementia, and several studies have found reduced hippocampal neurogenesis in mouse models of Alzheimer’s. But researchers have yet to definitively prove whether neurogenesis, or lack thereof, plays a direct role in neurodegenerative disease progression. It may be that neurogenesis has “nothing to do with the pathology itself, but [with] the ability of our brain to cope with it,” says Kempermann.

Either way, the research suggests that “the identification of pro-neurogenic compounds would have a therapeutic impact on cognitive dysfunction, specifically, pattern separation alterations in aging and early stages of Alzheimer’s disease,” notes Harvard’s Sahay. “There’s a growing list of genes that encode secreted factors or other molecules that stimulate neurogenesis. Identifying compounds that harness these pathways—that’s the challenge.”

The birth of new neurons in the adult hippocampus may also influence the development and progression of mood disorders. Several studies have suggested that reduced neurogenesis may be involved in depression, for instance, and have revealed evidence that antidepressants act, in part, by promoting neurogenesis in the hippocampus. When Columbia University’s René Hen and colleagues short-circuited neurogenesis in mice, the animals no longer responded to the antidepressant fluoxetine.10 “It was a very big surprise,” Hen says. “The hippocampus has really been always thought of as critical for learning and memory, and it is, but we still don’t understand well the connection to mood.”

Adult neurogenesis has also been linked to post-traumatic stress disorder (PTSD). While it is perhaps less obvious how young neurons might influence the expression of fear, Sahay says it makes complete sense, given the emerging importance of neurogenesis in distinguishing among similar experiences. “In a way, the hippocampus acts as a gate,” he says, with connections to the amygdala, which is important for processing fear, and the hypothalamus, which triggers the production of stress hormones, among other brain regions. “It determines when [these] other parts of the brain should be brought online.” If new neurons are not being formed in the hippocampus, a person suffering from PTSD may be less able to distinguish a new experience from the traumatic one that is at the root of his disorder, Sahay and his colleagues proposed earlier this year.11 “We think neurogenesis affects the contextual processing, which then dictates the recruitment of stress and fear circuits.”

Of course, the big question is whether researchers might one day be able to harness neurogenesis in a therapeutic capacity. Some scientists, such as Hongjun Song of Johns Hopkins School of Medicine, say yes. “I think the field is moving toward [that],” he says. “[Neurogenesis] is not something de novo that we don’t have at all—that [would be] much harder. Here, we know it happens; we just need to enhance it.” 


  1. P.S. Eriksson et al., “Neurogenesis in the adult human hippocampus,” Nat Med, 4:1313-17, 1998.
  2. K.L. Spalding et al., “Dynamics of hippocampal neurogenesis in adult humans,” Cell, 153:1219-27, 2013.
  3. J. Altman, “Are new neurons formed in the brains of adult mammals?” Science, 135:1127-28, 1962.
  4. E. Gould et al., “Proliferation of granule cell precursors in the dentate gyrus of adult monkeys is diminished by stress,” PNAS, 95:3168 -71, 1998.
  5. C. Schmidt-Hieber et al., “Enhanced synaptic plasticity in newly generated granule cells of the adult hippocampus,” Nature, 429:184-87, 2004.
  6. S. Ge et al., “A critical period for enhanced synaptic plasticity in newly generated neurons of the adult brain,” Neuron, 54:559-66, 2007.
  7. M. Bergami et al., “A critical period for experience-dependent remodeling of adult-born neuron connectivity,” Neuron, 85:710-17, 2015.
  8. K. McAvoy et al., “Rejuvenating the dentate gyrus with stage-specific expansion of adult-born neurons to enhance memory precision in adulthood and aging,” Soc Neurosci, Abstract DP09.08/DP8, 2014.
  9. K.G. Akers et al., “Hippocampal neurogenesis regulates forgetting during adulthood and infancy,” Science, 344:598-602, 2014.
  10. L. Santarelli et al., “Requirement of hippocampal neurogenesis for the behavioral effects of antidepressants,” Science, 301:805-09, 2003.
  11. A. Besnard, A. Sahay, “Adult hippocampal neurogenesis, fear generalization, and stress,” Neuropsychopharmacology, doi:10.1038/npp.2015.167, 2015.


Logo KW

Brain Genetics Paper Retracted [841]

de System Administrator - viernes, 5 de septiembre de 2014, 19:51

Brain Genetics Paper Retracted

A study that identified genes linked to communication between different areas of the brain has been retracted by its authors because of statistical flaws. 

By Anna Azvolinsky

The authors of a June PNAS paper that purported to identify sets of genes associated with a specific brain function last week (August 29) retracted the work because of flaws in their statistical analyses. “We feel that the presented findings are not currently sufficiently robust to provide definitive support for the conclusions of our paper, and that an extensive reanalysis of the data is required,” the authors wrote in their retraction notice.

The now-retracted study identified a set of gene ontologies (GO) associated with a brain phenotype that has been previously shown to be disturbed in patients with schizophrenia. Andreas Meyer-Lindenberg, director of the Central Institute of Mental Health Mannheim, Germany, and his colleagues had healthy volunteers perform a working memory task known to require communication between the hippocampus and the prefrontal cortex while scanning their brains using functional magnetic resonance imaging (fMRI). The volunteers also underwent whole-genome genotyping. Combining the fMRI and genomic data, the researchers identified groups of genes that appeared associated with communication between the two brain regions, which can be disturbed in some people with schizophrenia. The authors used gene set enrichment analysis to pick out genes associated with this brain phenotype, identifying 23 that could be involved in the pathology of the brain disorder.

The Scientist first learned of possible problems with this analysis when the paper was under embargo prior to publication. At that time, The Scientist contacted Paul Pavlidis, a professor of psychiatry at the University of British Columbia who was not connected to the work, for comment on the paper. He pointed out a potential methodological flaw that could invalidate its conclusions. After considering the authors’ analyses, Pavlidis reached out to Meyer-Lindenberg’s team to discuss the statistical issues he perceived.

The original analysis flagged a set of 11 genes in close proximity to one another within the genome using the same single nucleotide polymorphism (SNP), inflating the significance of the results. “The researchers found a variant near a genomic region that they say is correlated with the [working memory] task,” explained Pavlidis. “But instead of counting that variant once, that variant was counted 11 times.”
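The counting flaw Pavlidis describes can be illustrated with a toy sketch. This is not the authors' pipeline; the function, gene names, and SNP id below are hypothetical, invented only to show why mapping one significant variant to many clustered genes inflates an enrichment count.

```python
# Illustrative sketch (not the paper's actual code): a single significant
# SNP that sits near a cluster of genes from the same gene-ontology set
# contributes many "hits" unless SNP-level evidence is counted only once.

def enrichment_hits(snp_to_genes, gene_set, dedupe):
    """Count evidence for a gene set from SNP-to-gene assignments.

    With dedupe=False, one significant SNP mapped to several nearby
    genes is counted once per gene; with dedupe=True each SNP
    contributes at most one independent hit.
    """
    hits = 0
    for snp, genes in snp_to_genes.items():
        genes_in_set = [g for g in genes if g in gene_set]
        if not genes_in_set:
            continue
        hits += 1 if dedupe else len(genes_in_set)
    return hits

# One variant near a cluster of 11 genes, all in the same GO set
# (mirroring the 11-gene scenario described in the retraction).
snp_to_genes = {"rs_hypothetical": [f"GENE{i}" for i in range(11)]}
gene_set = {f"GENE{i}" for i in range(11)}

print(enrichment_hits(snp_to_genes, gene_set, dedupe=False))  # 11
print(enrichment_hits(snp_to_genes, gene_set, dedupe=True))   # 1
```

The same single association signal thus looks eleven times stronger than it is unless the analysis collapses gene-level hits back to independent variants.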

“When we re-analyzed the data, we saw that we had to retract because addressing the problem went beyond just an erratum,” Meyer-Lindenberg told The Scientist. “The analysis could not be used to make any conclusions with the required statistical confidence.”

Elizabeth Thomas, who studies the molecular mechanisms of neurological disorders at The Scripps Research Institute in La Jolla, California, and was not involved in the work, noted that the GO annotations used in the study were outdated. “GOs change every few months, and it’s unfortunate for researchers that rely on a certain set of annotations. It makes you wonder whether the papers published in the past five to 10 years are still relevant,” said Thomas. “This retraction raises the issue of how many papers may have falsely reported gene associations because of the constantly evolving changes in gene assemblies and boundaries. That’s really alarming to me.”

According to Meyer-Lindenberg, the researchers are re-evaluating their data using a different set of criteria and several updated sets of GO annotations.

In the meantime, however, Pavlidis lauded the researchers’ swift decision to retract. “What the authors did was the right thing,” he said.

Logo KW

Brain Imaging Can Predict the Success of Large Public Health Campaigns [1540]

de System Administrator - jueves, 29 de octubre de 2015, 21:23

Brain Imaging Can Predict the Success of Large Public Health Campaigns

Annenberg School for Communication - University of Pennsylvania

Related People: Emily Falk, Ph.D.; Matt O'Donnell, Ph.D.

 It’s a frustrating fact that most people would live longer if only they could make small changes: stop smoking, eat better, exercise more, practice safe sex. Health messaging is one important way to change behavior on a large scale, but while a successful campaign can improve millions of lives, a failed one can be an enormous waste of resources.

The red highlighted area depicts the brain area of interest in this study.

These same 40 images were then used in an email campaign sent to 800,000 smokers by the New York State Smokers Quitline. The email to each smoker contained one of the images randomly assigned, along with the identical message: “Quit smoking. Start Living.” It also provided a link where smokers could get free help to quit.

Not all images were created equal: Among those who opened the email, click through rates varied from 10% for the least successful images to 26% for the most successful.


Examples of three of the images used as part of the New York State Quitline campaign.

But interestingly, the negative anti-smoking images which elicited the most powerful brain response in the MPFC of 50 smokers in Michigan were also the most successful at getting the hundreds of thousands of New York smokers to click for help in quitting. (And research has shown that visits to a quit-smoking site correlate with the likelihood that someone actually will quit.)

“By their nature, messages about the risks we take with our health — whether it be by smoking, eating poorly, or not exercising — cause people to become defensive,” says Falk. “If you can get around that defensiveness by helping people see why advice might be relevant or valuable to them, your messaging will have a more powerful effect.”

By combining the self-reported survey data — what smokers said they found effective — with the brain responses from the fMRI, the researchers found that they could more accurately predict which messages would be effective than by knowing either on its own.
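The logic of that finding can be sketched with invented numbers: if self-report and brain response each capture a different, partly non-overlapping component of real-world success, a combination of the two correlates with the outcome better than either alone. The scores and the toy "ground truth" below are hypothetical, not the study's data.

```python
# Hypothetical sketch of combining predictors. Each per-message signal
# (self-report survey score, MPFC fMRI response) explains part of the
# click-through outcome; their combination explains more.

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented z-scored scores for five messages.
survey = [0.2, 1.0, -0.5, 0.8, -1.5]
fmri   = [1.1, -0.3, 0.4, 0.9, -2.1]
# Toy ground truth: the outcome reflects both signals equally.
ctr = [(s + f) / 2 for s, f in zip(survey, fmri)]

combined = [(s + f) / 2 for s, f in zip(survey, fmri)]
r_survey, r_fmri = pearson(survey, ctr), pearson(fmri, ctr)
r_both = pearson(combined, ctr)
print(r_both > max(r_survey, r_fmri))  # True: the combination predicts best
```

By construction the combined score matches the outcome perfectly here; the real study's gain was more modest, but the direction is the same.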

Using the brain to predict the success or failure of advertising campaigns has long been a holy grail for marketers. Although some have made claims to proprietary methods for doing so, the science behind it has been opaque. This study by Falk and her colleagues is among the first to demonstrate specific brain patterns to predict the success of public health campaigns.

“If you ask people what they plan to do or how they feel about a message, you get one set of answers,” says Falk. “Often the brain gives a different set of answers, which may help make public health campaigns more successful. My hope is that moving forward, we might be able to use what we learned from this study and from other studies to design messages that are going to help people quit smoking and make them healthier and happier in the long run.”

Co-authors on the study include Matthew B. O'Donnell from the University of Pennsylvania; Steve Tompson, Richard Gonzalez, Sonya Dal Cin, Victor Strecher, and Lawrence An of the University of Michigan; and K.M. Cummings of the Medical University of South Carolina.


This study was funded by The Michigan Center of Excellence in Cancer Communication Research (NIH-P50 CA101451), and the National Institutes of Health New Innovator Award (NIH 1DP2DA03515601).

Media contact: Julie Sloane, Annenberg School for Communication, 215-746-1798,


Logo KW

Brain Imaging Shows Why Kids with Autism Have Social Difficulties [1657]

de System Administrator - jueves, 4 de febrero de 2016, 00:19

Brain Imaging Shows Why Kids with Autism Have Social Difficulties

Scientists believe that children with autism spectrum disorder (ASD) have difficulties in social interactions at least partly due to an inability to understand other people’s thoughts and feelings through a process called “theory of mind,” or ToM.

A new innovative brain imaging study has uncovered new evidence explaining why ToM deficiencies are present in ASD children. The researchers found disruptions in the brain’s circuitry involved in ToM at multiple levels compared to typical brain functioning. The findings provide valuable insight into an important neural network tied to the social symptoms in children with ASD. 

“Reduced brain activity in ToM-related brain regions and reduced connectivity among these regions in children with autism suggest how deficits in the neurobiological mechanisms can lead to difficulties in cognitive and behavioral functioning, such as theory of mind,” said Marcel Just, the D.O. Hebb University Professor of Psychology at Carnegie Mellon University.

“Weaker coordination and communication among core brain areas during social thinking tasks in autism provides evidence for how different brain areas in autism struggle to work together as a team.”

The researchers used an approach first developed by Fulvia Castelli and her colleagues in the U.K.: short animated videos showing two geometric shapes moving around the screen. The shapes, such as a large red triangle and a small blue triangle, moved in ways that could be perceived as an interaction between them, such as coaxing or dancing.

The team demonstrated that “seeing” the interactions was in the mind of the beholder, or to be more specific, in the ToM circuitry of the viewer’s brain. Without ToM, it just looked like geometric shapes moving around the screen.

To better understand the neural mechanisms involved with ToM, the scientists asked 13 high-functioning children with ASD between the ages of 10 and 16, as well as 13 similarly aged children without ASD, to watch these short animated films. The children were asked to identify the thoughts and feelings, or mental states, of those triangles while having their brains scanned by an fMRI scanner.

The ASD children showed significantly reduced activation compared to the control group children in the brain regions considered to be part of the ToM network, such as the medial frontal cortex and temporo-parietal junction. Furthermore, the synchronization between such pairs of regions was lower in the autism group. 
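In fMRI work of this kind, "synchronization" between a pair of regions is commonly operationalized as the correlation of their activity time courses. The sketch below is a hypothetical illustration of that measure, not the study's analysis: the time series are synthetic, and the noise levels are invented to model a stronger versus weaker shared task-driven signal.

```python
# Hedged illustration: inter-region synchronization as the correlation
# of two regions' (here, MPFC and temporo-parietal junction) synthetic
# BOLD time courses. All signals and noise levels are invented.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
shared = np.sin(t / 10)  # common task-driven component

# Typical-development model: both regions strongly track the shared signal.
mpfc_typ = shared + 0.3 * rng.standard_normal(t.size)
tpj_typ  = shared + 0.3 * rng.standard_normal(t.size)

# ASD model: weaker shared drive, more region-specific noise.
mpfc_asd = 0.3 * shared + 1.0 * rng.standard_normal(t.size)
tpj_asd  = 0.3 * shared + 1.0 * rng.standard_normal(t.size)

sync_typ = np.corrcoef(mpfc_typ, tpj_typ)[0, 1]
sync_asd = np.corrcoef(mpfc_asd, tpj_asd)[0, 1]
print(sync_typ > sync_asd)  # True: reduced pairwise synchronization
```

Lower correlation between region pairs is exactly the "weaker coordination" the study reports in the ASD group.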

The findings support Just’s previous research in 2004 which discovered this lower synchronization. In later studies, Just continued to show how this theory accounted for many brain imaging and behavioral findings during tasks that are heavily linked to the frontal cortex.

“One reason this finding is so interesting is that the ‘actors’ in the films have no faces, facial expressions or body posture on which to base a judgment of an emotion or attitude,” said Rajesh Kana, associate professor of psychology at the University of Alabama at Birmingham.

“The neurotypical children managed to identify a social interaction without social cues, such as interpreting the large triangle nudging the smaller one as a parent’s attempt to encourage a child, but the ASD children were unable to make the connection.”

Until now, most research focused on the connectivity among core brain regions in ASD has focused on adults, limiting knowledge about how the disorder affects younger people.

“By studying children, we were able to show that it is possible to characterize the altered brain circuitry earlier in development, which could lead to designing earlier effective intervention programs that could train children to infer the intentions and thoughts that underlie physical interactions between people,” Just said. “For example, children could be trained to distinguish between a helpful nudge and a hostile poke.”

The findings are published in the journal Molecular Autism.


Logo KW


de System Administrator - lunes, 27 de octubre de 2014, 12:42


Written By: Jason Dorrier

Since it was awarded a one billion euro, decade-long research grant last year, the Human Brain Project has been the center of extreme excitement and heavy criticism. The project aims to simulate the human brain in silicon on a yet-to-be-assembled supercomputer of massive computational power. The goal? Understanding.

In a recent paper (more below), HBP researchers write, “The ultimate prime aim is to imitate and understand the native computations, algorithms, states, actions, and emergent behavior of the brain, as well as promote brain-inspired technology.”


The prospect is mind numbingly self-reflexive—the human brain folds its faculties of analysis in on themselves to understand and reproduce itself. An awe-inspiring idea in its seeming impossibility.

And maybe that’s partly why the project has been a magnet for early condemnation.

The problem, according to critics, is that our limited empirical and theoretical understanding isn’t yet at the level needed for even a simple human brain simulation, and that resources, therefore, would be better allocated on basic research for now.

Further, in an open letter with almost 800 signatures, a group of neuroscientists warns the HBP is veering from an endeavor of simulation dependent on and informed by empirical neuroscience to a venture that favors technology over scientific rigor.

So, is the HBP putting the billion-euro cart before the horse?

The letter was published in July. The same month, the Human Brain Project’s co-executive director, Richard Frackowiak, wrote a spirited defense of the project.

Frackowiak compared the criticism to a similar missive written in 1990, the first year of the Human Genome Project. That earlier letter accused the Human Genome Project of “mediocre science, terrible science policy”—criticism the project later rose above by successfully sequencing the first complete human genome in 2003.

The Human Brain Project, he said, will likewise overcome early “teething troubles” to open an era of unified brain research in which neuroscience, computing, and medicine work together to revolutionize our understanding of the brain.


Frackowiak said data isn’t the problem. The challenge is systematizing it, making sense of the nearly 100,000 annual neuroscience papers, the riot of patient data out of hospitals. The HBP, which Frackowiak describes as the CERN of neuroscience, is, among other things, an attempt to unite and organize resources.

But to do that you need your digital ducks in a row first and foremost.

For that reason, the HBP is intentionally heavy on computing in the beginning, and the early work is to devise an effective digital approach to organizing existing data. They hope to have a number of specialized databases up and running by 2016.

“Far from being sidelined, neuroscience remains front and centre in the HBP,” Frackowiak wrote. “The [information and communications technologies] tools are meant as a scaffold; a bridge to support a convergence of fields that is already underway.”

And as for funding, he went on, it sounds like a lot, but at €50 million annually (for the core project)—the HBP is only 5% of the European neuroscience budget. Yes, that’s a lot of money for one project, but alternatively, it allows that one project to “go big.”

How convincing has this argument been? To some scientists, not very it would seem. An additional several hundred signatories have joined the critical open letter since July.

Perhaps it’s no surprise, then, that two HBP researchers, Yadin Dudai and Kathinka Evers, attempt to set the parameters of the discussion as thoroughly as they can in a recent paper titled, “To Simulate or Not to Simulate: What Are the Questions?”

Published in the neuroscience journal, Neuron, Dudai and Evers ask: What is simulation; what is its role in science; what challenges face those attempting to simulate the brain; and what are realistic expectations for such an endeavor?

The paper is sufficiently humble in its approach, not claiming to bring definite answers or wade into the funding debate—though, in truth, that probably can’t be avoided—and it doesn’t shy away from identifying the project’s biggest challenges.

First, they say, simulation is an established (and increasingly crucial) scientific tool with a proven track record in neuroscience and other disciplines. In addition to following the maxim “only the one who makes something can fully understand it,” simulation allows us to artificially test hypotheses when testing the real system is costly, risky, or unethical.

But we need to temper our expectations a bit in terms of what brain simulation can do.


We do have data, they say, but probably not enough data. Not yet. Further, we don’t have enough high level theory either. We have known unknowns—but likely lots of undiscovered unknowns too.

Lacking solid benchmarks and top-down theory “may lead much effort astray.”

And our “gaps in understanding” may or may not be acceptable depending on the level of expected explanatory power. How far can we trust an incomplete simulation? Can it ever be complete?

Further, we need to define what we mean when we say “brain.” A useful simulation shouldn’t reproduce the brain in isolation. It is a complex adaptive system nested in another complex adaptive system—that is, the brain and the body are inseparable.

The mind arises in this interdependent body-brain relationship, and any simulation of it must not only take that link into account, but also acknowledge the limitations it imposes. We can only simulate so much of the body and its environment.

Of all the challenges the paper acknowledges, the least of them is computing power. The HBP’s brain simulation is expected to require exascale computing—a few orders of magnitude more than today’s most powerful supercomputers.

Dudai and Evers note, almost offhandedly, that it is likely the requisite computing power will be available before problems of incomplete knowledge are solved.

The least mundane part of the discussion (if the most esoteric) comes at the end of the paper when the authors question whether brain simulation must exhibit consciousness to be useful. Their surprising answer is, yes, at least for some of the project’s research objectives, like studying mental illness, which is closely related to consciousness.

Dudai and Evers write, “How adequate or informative can a simulation of, say, depression or anxiety be if there is no conscious experience in the simulation?”

There’s no guarantee consciousness will arise, proving its existence will be contentious, and such an outcome isn’t likely around the corner—still, Dudai and Evers believe it’s valuable to begin the discussion now in preparation for such possibilities.


Ultimately, the HBP’s success, they say, hinges on how well the project is timed with growing scientific knowledge. It ought to employ a strategy of detailed simulation—even down to the cellular level—tied together by higher level, general laws of the brain, as they’re discovered. Above all, it will necessarily need to advance step by step. Indeed, the HBP’s initial goal is simulating the lowly mouse cortex.

Simply? The HBP is a giant undertaking likely to evolve in the coming years.

As for the controversy, that too will probably continue in parallel. The Human Genome Project was criticized throughout much of its life and even pronounced a failure seven years in—and yet the project was finished two years ahead of schedule.

Other than the fact they’re both big science, it isn’t clear the two projects are perfectly analogous, but it’s also likely too early to call the game. Even as computing power is growing fast, so too are methods for observing and learning about the brain.

And the HBP’s timing and success aside, that the prospect of our brain successfully reproducing itself in silicon is even worthy of debate—that’s a breathtaking thought.

Image Credit: Human Brain Project



Logo KW

Brain Stimulation [803]

de System Administrator - viernes, 29 de agosto de 2014, 23:19

Prepare to Be Shocked

Four predictions about how brain stimulation will make us smarter
Logo KW

Brain-Computer Interfaces [1663]

de System Administrator - lunes, 15 de febrero de 2016, 12:19

Sci-Fi Short Imagines How Brain-Computer Interfaces Will Make Us “Connected”


Social life is defined by connections, and more than ever, the fabric of our social lives are woven digitally. Before the Internet picked up speed, local hubs like neighborhoods, churches, and community centers provided a variety of opportunities to develop relationships. Today, people often connect through social networks, where users exchange the familiarity of physical proximity for the transparency of real-time availability and exposure. But whether through physical or digital connections, there is a price to be paid for this togetherness.

Anyone concerned about increased surveillance in society has probably questioned whether sacrificing privacy is worth the perception of greater security. Just as this loss can be justified in the face of physical harm, so too does the value of privacy wane when faced with the mental anguish of loneliness.

What if technology provided the ultimate resolution of this existential crisis by allowing you to plug your brain into a boundless, cognitive melting pot with other humans?

Appealing as this may be, extreme connectedness would come at the price of privacy. For those wrestling with the existential crisis of modern life, mind-to-mind melding may be the only hope they feel they have left. "Connected", a sci-fi short film by Luke Gilford which debuted on Motherboard, gives us a brief glimpse into what the road looks like in a future that's arguably just around the corner.



Logo KW

Brain-Controlling Sound Waves Used to Steer Genetically Modified Worms [1466]

de System Administrator - sábado, 26 de septiembre de 2015, 14:56

Brain-Controlling Sound Waves Used to Steer Genetically Modified Worms

By Shelly Fan

Move over optogenetics, there’s a new cool mind-bending tool in town.

A group of scientists, led by Dr. Sreekanth Chalasani at the Salk Institute in La Jolla, California, discovered a new way to control neurons using bursts of high-pitched sound pulses in worms.

Dubbed “sonogenetics,” scientists say the new method can control brain, heart and muscle cells directly from outside the body, circumventing the need for invasive brain implants such as the microfibers used in optogenetics.

The Trouble With Light

Almost exactly a decade ago, optogenetics changed the face of neuroscience by giving scientists a powerful way to manipulate neuronal activity using light beams.


The concept is deceptively simple: using genetic engineering, scientists introduced light-sensitive protein channels into certain populations of neurons in mice that were previously impervious to light. Then, by shining light through microfiber cables implanted in the brain, the scientists can artificially activate specific neural networks, which in turn changes the behavior of the mouse.

Since then, the technique has clocked an impressive list of achievements, including making docile mice aggressive, giving them on-demand boners and even implanting fake scary memories. Last month, a group of neurologists from San Diego got FDA approval to begin the very first clinical trial that aims to use optogenetics in humans to treat degenerative blindness.

Yet optogenetics is not without faults. For one, the hardware that delivers light has to be threaded into the brain, which — unfortunately but unavoidably — physically traumatizes the brain. This makes its transition for human use difficult, particularly for stimulating deeper brain regions.

It’s not just an academic problem. Some incurable brain disorders, such as Parkinson’s disease, benefit from focused brain stimulation. Many scientists have predicted that optogenetics could be used for such disorders, but since the target brain area is deeply buried in the brain, clinicians have backed off from trying it in humans (so far).

What’s more, the stimulation isn’t as precise as scientists want. Just like particles in the atmosphere that scatter fading sunlight every evening, the physical components of the brain also scatter light. The result is that neuronal activation isn’t exactly targeted — in other words, scientists may be inadvertently activating neurons that they’d rather stay silent.

From Light to Sound

Chalasani and his colleagues believe that they have circumvented both issues by swapping light with sound.

The team was obviously deeply inspired by optogenetics. Instead of using the light-sensitive protein channelrhodopsin-2, the team hunted down a protein called TRP-4 that responds to mechanical stimulation such as vibrations. When blasted with ultrasound, TRP-4 opens a pore in the neuronal membrane, which allows ions to rush in — this biophysical response activates (or “fires”) a neuron.


C. elegans

Then, similar to optogenetics, the team delivered the genetic blueprint that encodes TRP-4 into the nematode worm C. elegans using a virus. (C. elegans, with only 302 clearly mapped neurons, is a neuroscience darling for reductionist studies.)

Sophisticated genetic tools restrict TRP-4 to only certain types of neurons — for example, those that control movement, sensation or higher brain functions such as motivation. By activating these neurons and watching the resulting behavior, scientists can then tease out which set of neurons are responsible for what behavior.

In one experiment, the team targeted motor neurons in the primitive worm. Realizing that plain old ultrasound wasn’t strong enough to activate the TRP-4-expressing neurons, the researchers embedded the worms in microbubbles to amplify the sound waves.

When transmitted into the worms through their opaque skin, the sound waves reliably activated motor neurons that were peppered with TRP-4. As a result, scientists could move the worms to preset goal destinations, as if controlling the worms with a joystick.

“In contrast to light, low-frequency ultrasound can travel through the body without any scattering,” said Chalasani in a press release. Unlike light, the sound pulses — too high-pitched for humans to hear — can be delivered to target neurons from the top of the skull, without the need for brain implants.

You can imagine putting an ultrasound cap on a person’s head and using that to switch neurons on and off, Chalasani speculates.

Dr. Stuart Ibsen, the lead author of the paper, agrees. “This could be a big advantage when you want to stimulate a region deep in the brain without affecting other regions,” he said.


So far, the technique has only been tested in the nematode worm, but Chalasani says that the team is hard at work transforming it for use in mammals such as mice.

Like optogenetics, scientists could insert TRP-4 into certain populations of neurons with genetic engineering. They could then directly inject microbubbles into the bloodstream of mice to amplify the ultrasonic waves inside the body, and activate targeted neurons with a cap that generates the sound waves.

Unlike optogenetics, however, sonogenetics has a bit of a time lag. It’s basic physics: sound travels slower than light, which means there will be a larger delay between when scientists give the “go” signal versus when neurons actually activate.

“If you're working on the cortex of a human brain, working on a scale of a tenth of a second, optogenetics is going to do better,” concedes Chalasani, but considering that sonogenetics is non-invasive, “these will be complementary techniques.”

With optogenetics dominating the mind-control game, it’s hard to say if sonogenetics will take off. But Chalasani is optimistic.

“When we make the leap into therapies for humans, I think we have a better shot with noninvasive sonogenetics approaches than with optogenetics,” he said.

Image Credit: Shutterstock.com, Wikimedia Commons; video and final image courtesy of the Salk Institute.


Brain-Sensing Headband [774]

by System Administrator - Wednesday, 20 August 2014, 18:13


Can this brain-sensing headband give you serenity?

By Sally Hayden, for CNN


- Ariel Garten's high-tech headband monitors brain activity

- Called 'Muse,' the device transmits information to your computer

- Can pour beer, control music volume, turn on lights just by thinking

- By tracking brain waves, could help users reduce stress


Mind games: Meet Ariel Garten, 35-year-old CEO of tech company InteraXon. The business has created a headband, called 'Muse,' which monitors brain activity. It claims to help reduce stress as the user focuses on their brain waves, which appear on a screen.

Drink it in: The headband has been used in a number of experiments, including one where a user urged a tap to pour beer through the power of concentration. 

Chairwoman: Garten even used Muse to power this levitating chair.

Music to the ears: In 2009, InteraXon orchestrated a brainwave-controlled musical and visual performance at the Ontario Premier's Innovation Awards.

Light show: For the 2010 Vancouver Winter Games, Muse users controlled a light show over Niagara Falls, similar to the one pictured in this 2013 display.


Editor's note: Leading Women connects you to extraordinary women of our time -- remarkable professionals who have made it to the top in all areas of business, the arts, sport, culture, science and more.

(CNN) -- Imagine a gadget that knows your mind better than you do.

Picture a device that can rank the activities in your life that bring you joy, or interject your typed words with your feelings.

One woman has helped create just that.

Ariel Garten believes that the brain -- with its 100 billion neurons that receive, register, and respond to thoughts and impulses -- has the power to accomplish almost anything, if only its power could be properly harnessed.

Her company InteraXon, which she co-founded with Trevor Coleman, has produced Muse, a lightweight headband that uses electroencephalography (EEG) sensors to monitor your brain activity, transmitting that information to a smartphone, laptop or tablet.

The high-tech headband has been used to pour beer, levitate chairs, or control the lights -- all without the wearer lifting a finger.

And in a world where technology is often blamed for raising stress levels, 35-year-old Garten believes her $300 headband could even help calm us down.

The Canadian -- who has worked as a fashion designer, art gallery director, and psychotherapist -- spoke to CNN about her influences and vision for the future of technology.

CNN: How does Muse help reduce stress?

Ariel Garten: Muse tracks your brain activity. Your brain sends electro-signals just like your heart does, and this headband is like a heart rate monitor.

As it tracks your brain activity, it sends that information to your computer, smartphone or tablet, where you can do exercises that track your brain activity in real time, and give you real time feedback to teach you how to calm and settle your mind.
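Garten compares Muse to a heart rate monitor that feeds brain activity back to the user in real time. As a rough illustration of how such biofeedback can be computed (a generic EEG band-power heuristic, not InteraXon's actual algorithm; the sampling rate, window length, and band limits here are assumed values), one might estimate relative alpha power per window:

```python
import numpy as np

def calm_score(eeg_window, fs=256.0):
    """Relative alpha (8-12 Hz) power in one EEG window, as a 0-1 'calm' proxy.

    Higher relative alpha power is commonly associated with a relaxed state;
    this is a generic biofeedback heuristic, not InteraXon's algorithm.
    """
    samples = np.asarray(eeg_window, dtype=float)
    samples = samples - samples.mean()            # remove DC offset
    spectrum = np.abs(np.fft.rfft(samples)) ** 2  # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    band = (freqs >= 8.0) & (freqs <= 12.0)       # alpha band
    total = (freqs >= 1.0) & (freqs <= 40.0)      # broadband reference
    return spectrum[band].sum() / spectrum[total].sum()

# Synthetic demo: a strong 10 Hz rhythm scores as "calmer" than broadband noise.
t = np.arange(0, 2.0, 1.0 / 256.0)
rng = np.random.default_rng(0)
alpha_rich = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
noisy = rng.standard_normal(t.size)
```

An app could poll this score every second and soften or intensify the audio feedback accordingly.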


The headband allows the wearer to see their brain activity when connected to a smartphone, tablet or laptop.

CNN: Technology is often blamed for making people stressed -- is there a certain irony in also using it to calm us down?

AG: Technology can definitely be responsible for making people stressed because it pulls at our attention, it distracts us, it increases the number of demands and in some ways decreases our own agency.

We're very interested in inverting that on its head and creating solutions that help you calm yourself; that can help you stay grounded, choose what to focus your attention on, and manage your own mind and your response to the world.

"Technology itself is not the evil, it's the way that it's implemented."

 Ariel Garten, CEO, InteraXon

Technology itself is not the evil, it's the way that it's implemented. Technology can have some great solutions for us. Look at all the amazing medical interventions that we have.

CNN: You've suggested Muse could provide medical benefits for children with ADD -- how?

AG: To be clear, Muse is not a medical device, it's a computer product. Exercises using Muse have suggested that they can help people with ADHD, by helping you increase your state of focused attention.

We've had amazing emails -- just recently we had an email from somebody who is 29 years old with ADHD and after just two days of using Muse had noticed a benefit. Three weeks out they sent me an email saying 'this is not a game changer, this is a life changer.'


The Muse headset up close.

CNN: Have you had interest in the product from any unexpected places?

AG: We've been contacted by a lot of sports stars and sports celebrities -- people wanting to use it to improve their sports game. We were surprised because we're so used to thinking of it as a cognitive tool.

"We can't read your thoughts, we can't read your mind"

Ariel Garten, CEO InteraXon

There's been quite a number of research labs using Muse, and they've been looking at applications in depression, epilepsy, and communications.

And then we've also had a lot of interest from companies interested in integrating our technology into their wellness and development programs. Companies like Google wanting to offer this to their employees to help improve their productivity and their wellness.

CNN: Do you have any reservations about the development of mind-mapping devices?

AG: In InteraXon we believe very strongly that you own all your own data. We have a very strict privacy policy. It's like a heart rate monitor, it's very binary so we can't read your thoughts, we can't read your mind. But we're very much into leading the way on the very responsible use of this technology.


Ariel Garten speaks at the What's Next panel at Engadget Expand.

CNN: What inspired you to get involved in this area?

AG: My background is in neuroscience, design and psychotherapy, and I'm very interested in helping people understand their own minds and use their minds more productively in their own life. Our brains get in our way in so many ways.

The things that we think, the feelings that we have, all of these things can be beautiful supports to our life and encourage the lives that we live. But they can also cause all kinds of anxiety, worries, all of these things that hold us back.

"As women, we are so good at holding ourselves back with the thoughts that are in our heads."

 Ariel Garten, CEO, InteraXon

Particularly women are a huge inspiration to me because we're so good at holding ourselves back with the thoughts that are in our heads. We're constantly worried about things like 'does this person think this way about me?' or 'have I done well enough?' or 'have I achieved as much as I'm supposed to?'

We have these dialogues within ourselves that can be really debilitating, and you know the answer is 'of course you're good enough,' and 'of course you've done well enough,' and 'of course you can achieve that.' And if you can learn to understand and gain control over your own internal dialogue, you can really learn to sort of undo the shackles that hold you back in your daily life, and your career, and your relationships.




Brain-to-text: decoding spoken phrases from phone representations in the brain [1255]

by System Administrator - Thursday, 25 June 2015, 16:33

Brain-to-text: decoding spoken phrases from phone representations in the brain

by Christian Herff, Dominic Heger, Adriana de Pesters, Dominic Telaar, Peter Brunner, Gerwin Schalk and Tanja Schultz

Cognitive Systems Lab, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany | New York State Department of Health, National Center for Adaptive Neurotechnologies, Wadsworth Center, Albany, NY, USA | Department of Biomedical Sciences, State University of New York at Albany, Albany, NY, USA | Department of Neurology, Albany Medical College, Albany, NY, USA

It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.

1. Introduction

Communication with computers or humans by thought alone, is a fascinating concept and has long been a goal of the brain-computer interface (BCI) community (Wolpaw et al., 2002). Traditional BCIs use motor imagery (McFarland et al., 2000) to control a cursor or to choose between a selected number of options. Others use event-related potentials (ERPs) (Farwell and Donchin, 1988) or steady-state evoked potentials (Sutter, 1992) to spell out texts. These interfaces have made remarkable progress in the last years, but are still relatively slow and unintuitive. The possibility of using covert speech, i.e., imagined continuous speech processes recorded from the brain for human-computer communication may improve BCI communication speed and also increase their usability. Numerous members of the scientific community, including linguists, speech processing technologists, and computational neuroscientists have studied the basic principles of speech and analyzed its fundamental building blocks. However, the high complexity and agile dynamics in the brain make it challenging to investigate speech production with traditional neuroimaging techniques. Thus, previous work has mostly focused on isolated aspects of speech in the brain.

Several recent studies have begun to take advantage of the high spatial resolution, high temporal resolution and high signal-to-noise ratio of signals recorded directly from the brain [electrocorticography (ECoG)]. Several studies used ECoG to investigate the temporal and spatial dynamics of speech perception (Canolty et al., 2007; Kubanek et al., 2013). Other studies highlighted the differences between receptive and expressive speech areas (Towle et al., 2008; Fukuda et al., 2010). Further insights into the isolated repetition of phones and words has been provided in Leuthardt et al. (2011b); Pei et al. (2011b). Pasley et al. (2012) showed that auditory features of perceived speech could be reconstructed from brain signals. In a study with a completely paralyzed subject, Guenther et al. (2009) showed that brain signals from speech-related regions could be used to synthesize vowel formants. Following up on these results, Martin et al. (2014) decoded spectrotemporal features of overt and covert speech from ECoG recordings. Evidence for a neural representation of phones and phonetic features during speech perception was provided in Chang et al. (2010) and Mesgarani et al. (2014), but these studies did not investigate continuous speech production. Other studies investigated the dynamics of the general speech production process (Crone et al., 2001a,b). A large number of studies have classified isolated aspects of speech processes for communication with or control of computers. Deng et al. (2010) decoded three different rhythms of imagined syllables. Neural activity during the production of isolated phones was used to control a one-dimensional cursor accurately (Leuthardt et al., 2011a). Formisano et al. (2008) decoded isolated phones using functional magnetic resonance imaging (fMRI). Vowels and consonants were successfully discriminated in limited pairings in Pei et al. (2011a). Blakely et al. (2008) showed robust classification of four different phonemes. 
Other ECoG studies classified syllables (Bouchard and Chang, 2014) or a limited set of words (Kellis et al., 2010). Extending this idea, the imagined production of isolated phones was classified in Brumberg et al. (2011). Recently, Mugler et al. (2014b) demonstrated the classification of a full set of phones within manually segmented boundaries during isolated word production.

To make use of these promising results for BCIs based on continuous speech processes, the analysis and decoding of isolated aspects of speech production has to be extended to continuous and fluent speech processes. While relying on isolated phones or words for communication with interfaces would improve current BCIs drastically, communication would still not be as natural and intuitive as continuous speech. Furthermore, to process the content of the spoken phrases, a textual representation has to be extracted instead of a reconstruction of acoustic features. In our present study, we address these issues by analyzing and decoding brain signals during continuously produced overt speech. This enables us to reconstruct continuous speech into a sequence of words in textual form, which is a necessary step toward human-computer communication using the full repertoire of imagined speech. We refer to our procedure that implements this process as Brain-to-Text. Brain-to-Text implements and combines understanding from neuroscience and neurophysiology (suggesting the locations and brain signal features that should be utilized), linguistics (phone and language model concepts), and statistical signal processing and machine learning. Our results suggest that the brain encodes a repertoire of phonetic representations that can be decoded continuously during speech production. At the same time, the neural pathways represented within our model offer a glimpse into the complex dynamics of the brain's fundamental building blocks during speech production.

2. Materials and Methods

2.1. Subjects

Seven epileptic patients at Albany Medical Center (Albany, New York, USA) participated in this study. All subjects gave informed consent to participate in the study, which was approved by the Institutional Review Board of Albany Medical College and the Human Research Protections Office of the US Army Medical Research and Materiel Command. Relevant patient information is given in Figure 1.

Figure 1. Electrode positions for all seven subjects.
Captions include age [years old (y/o)] and sex of subjects. Electrode locations were identified in a post-operative CT and co-registered to the preoperative MRI. Electrodes for subject 3 are shown on an average Talairach brain. Combined electrode placement in joint Talairach space allows comparison across subjects: subject 1 (yellow), subject 2 (magenta), subject 3 (cyan), subject 5 (red), subject 6 (green), and subject 7 (blue). Subject 4 was excluded from the joint analysis, as the data did not yield sufficient activations related to speech activity (see Section 2.4).

2.2. Electrode Placement

Electrode placement was solely based on clinical needs of the patients. All subjects had electrodes implanted on the left hemisphere and covered relevant areas of the frontal and temporal lobes. Electrode grids (Ad-Tech Medical Corp., Racine, WI; PMT Corporation, Chanhassen, MN) were composed of platinum-iridium electrodes (4 mm in diameter, 2.3 mm exposed) embedded in silicon with an inter-electrode distance of 0.6-1 cm. Electrode positions were registered in a post-operative CT scan and co-registered with a pre-operative MRI scan. Figure 1 shows electrode positions of all 7 subjects and the combined electrode positions. To compare average activation patterns across subjects, we co-registered all electrode positions in common Talairach space. We rendered activation maps using the NeuralAct software package (Kubanek and Schalk, 2014).

2.3. Experiment

We recorded brain activity during speech production in seven subjects using electrocorticographic (ECoG) grids that had been implanted as part of presurgical procedures preparatory to epilepsy surgery. ECoG provides electrical potentials measured directly on the brain surface at high spatial and temporal resolution, unfiltered by skull and scalp. ECoG signals were recorded by BCI2000 (Schalk et al., 2004) using eight 16-channel g.USBamp biosignal amplifiers (g.tec, Graz, Austria). In addition to the electrical brain activity measurements, we recorded the acoustic waveform of the subjects' speech. The subjects' voice data were recorded with a dynamic microphone (Samson R21s) and digitized using a dedicated g.USBamp in sync with the ECoG signals. The ECoG and acoustic signals were digitized at a sampling rate of 9600 Hz.

During the experiment, text excerpts from historical political speeches (the Gettysburg Address, Roy and Basler, 1955; JFK's Inaugural Address, Kennedy, 1989), a children's story (Crane et al., 1867) or Charmed fan fiction (Unknown, 2009) were displayed on a screen about 1 m from the subject. The texts scrolled across the screen from right to left at a constant rate. This rate was adjusted to be comfortable for the subject prior to the recordings (rate of scrolling text: 42–76 words/min). During this procedure, subjects were familiarized with the task.

Each subject was instructed to read the text aloud as it appeared on the screen. A session was repeated 2–3 times depending on the mental and physical condition of the subjects. Table 1 summarizes data recording details for every session. Since the amount of data of the individual sessions of subject 2 is very small, we combined all three sessions of this subject in the analysis.


 Table 1. Data recording details for every session.

We cut the read-out texts of all subjects into 21–49 phrases, depending on the session length, along pauses in the audio recording. The audio recordings were phone-labeled using our in-house speech recognition toolkit BioKIT (Telaar et al., 2014; see Section 2.5). Because the audio and ECoG data were recorded in synchronization (see Figure 2), this procedure allowed us to identify the ECoG signals that were produced at the time of any given phone. Figure 2 shows the experimental setup and the phone labeling.
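Cutting a recording into phrases along pauses, as described above, can be approximated with a simple frame-energy threshold. This sketch is an illustrative stand-in, not the segmentation actually used in the study; all parameters (`frame`, `min_pause`, `rel_thresh`) are assumed values:

```python
import numpy as np

def split_on_pauses(audio, fs, min_pause=0.3, frame=0.02, rel_thresh=0.02):
    """Split an audio signal into phrases at sustained low-energy pauses.

    A frame counts as silence when its RMS falls below `rel_thresh` times the
    loudest frame; `min_pause` seconds of consecutive silence ends a phrase.
    Thresholds are illustrative, not the values used in the study.
    """
    hop = int(frame * fs)
    n_frames = len(audio) // hop
    frames = np.asarray(audio[: n_frames * hop]).reshape(n_frames, hop)
    rms = np.sqrt((frames.astype(float) ** 2).mean(axis=1))
    silent = rms < rel_thresh * rms.max()

    phrases, start, quiet = [], None, 0
    for i, s in enumerate(silent):
        if not s and start is None:
            start, quiet = i, 0                       # phrase begins
        elif s and start is not None:
            quiet += 1
            if quiet * frame >= min_pause:            # pause long enough
                phrases.append((start * hop, (i - quiet + 1) * hop))
                start = None
        elif not s:
            quiet = 0                                 # brief dip, still speaking
    if start is not None:
        phrases.append((start * hop, n_frames * hop))
    return phrases  # list of (start_sample, end_sample) pairs
```

For example, a 1 s tone, 0.5 s of silence, then another 1 s tone yields two phrase segments.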

Figure 2. Synchronized recording of ECoG and acoustic data.
Acoustic data are labeled using our in-house decoder BioKIT, i.e., the acoustic data samples are assigned to corresponding phones. These phone labels are then imposed on the neural data.

2.4. Data Pre-Selection

In an initial data pre-selection, we tested whether speech activity segments could be distinguished from those with no speech activity in ECoG data. For this purpose, we fitted a multivariate normal distribution to all feature vectors (see Section 2.6 for a description of the feature extraction) containing speech activity derived from the acoustic data and one to feature vectors when the subject was not speaking. We then determined whether these models could be used to classify general speech activity above chance level, applying a leave-one-phrase-out validation.
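The pre-selection step above, with one multivariate normal fitted to speech-activity feature vectors and one to non-speech vectors and classification by higher likelihood, can be sketched as follows. This is a minimal illustration on synthetic data; the leave-one-phrase-out validation loop and the significance test from the paper are omitted:

```python
import numpy as np

def fit_gaussian(X):
    """Fit a multivariate normal: mean vector and (regularized) covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(X, mu, cov):
    """Log density of each row of X under N(mu, cov)."""
    d = X - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum("ij,jk,ik->i", d, inv, d)  # Mahalanobis terms
    return -0.5 * (quad + logdet + X.shape[1] * np.log(2 * np.pi))

def classify_speech(train_speech, train_silence, test_X):
    """Label test frames 1 (speech) or 0 (silence) by higher likelihood."""
    mu_s, cov_s = fit_gaussian(train_speech)
    mu_n, cov_n = fit_gaussian(train_silence)
    return (log_likelihood(test_X, mu_s, cov_s) >
            log_likelihood(test_X, mu_n, cov_n)).astype(int)
```

With well-separated synthetic classes this classifier labels nearly all frames correctly; real ECoG features are far noisier, which is why the paper validates against chance with a randomization test.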

Based on this analysis, both sessions of subject 4 and session 2 of subject 5 were rejected, as they did not show speech related activations that could be classified significantly better than chance (t-test, p > 0.05). To compare against random activations without speech production, we employed the same randomization approach as described in Section 2.11.

2.5. Phone Labeling

Phone labels of the acoustic recordings were created in a three-step process using an English automatic speech recognition (ASR) system trained on broadcast news. First, we calculated a Viterbi forced alignment (Huang et al., 2001), which is the most likely sequence of phones for the acoustic data samples given the words in the transcribed text and the acoustic models of the ASR system. In a second step, we adapted the Gaussian mixture model (GMM)-based acoustic models using maximum likelihood linear regression (MLLR) (Gales, 1998). This adaptation was performed separately for each session to obtain session-dependent acoustic models specialized to the signal and speaker characteristics, which is known to increase ASR performance. We estimated a MLLR transformation from the phone sequence computed in step one and used only those segments which had a high confidence score that the segment was emitted by the model attributed to them. Third, we repeated the Viterbi forced alignment using each session's adapted acoustic models yielding the final phone alignments. The phone labels calculated on the acoustic data are then imposed on the ECoG data.
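The core of a Viterbi forced alignment, finding the most likely monotonic assignment of frames to a known phone sequence, can be sketched as a dynamic program. This is a bare-bones illustration, not the ASR system used in the paper: transition and self-loop probabilities are folded out, and `frame_logl` stands in for the acoustic model scores:

```python
import numpy as np

def forced_align(frame_logl, phone_seq):
    """Viterbi forced alignment: assign each frame to one phone, in order.

    frame_logl: (T, P) array of per-frame log-likelihoods under each phone model.
    phone_seq:  indices into the P phone models, in spoken order.
    Returns a length-T list giving the phone-sequence position of each frame.
    Transition probabilities are omitted for simplicity.
    """
    T, S = frame_logl.shape[0], len(phone_seq)
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0, 0] = frame_logl[0, phone_seq[0]]
    for t in range(1, T):
        for s in range(S):
            stay = score[t - 1, s]                      # remain in same phone
            move = score[t - 1, s - 1] if s > 0 else -np.inf  # advance one phone
            best = s if stay >= move else s - 1
            score[t, s] = score[t - 1, best] + frame_logl[t, phone_seq[s]]
            back[t, s] = best
    # Trace back from the final phone at the final frame.
    path, s = [S - 1], S - 1
    for t in range(T - 1, 0, -1):
        s = back[t, s]
        path.append(s)
    return path[::-1]
```

Given the transcript's phone sequence and per-frame model scores, the returned path yields the phone boundaries that are then imposed on the ECoG data.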

Due to the very limited amount of training data for the neural models, we reduced the amount of distinct phone types and grouped similar phones together for the ECoG models. The grouping was based on phonetic features of the phones. See Table 2 for the grouping of phones.


Table 2. Grouping of phones.




Braingear Moves Beyond Electrode Swim Caps [998]

by System Administrator - Wednesday, 19 November 2014, 20:13

Exponential Medicine: Braingear Moves Beyond Electrode Swim Caps


If the last few decades in information technology have been characterized by cheaper, faster, and smaller computer chips, the next few decades will add cheaper, faster, and smaller sensors. Chips are the brains. Now they have senses.

Whereas most of these sensors have been available for years, it’s only relatively recently that they’ve gone from hulking million-dollar devices in labs to affordable consumer products embedded in smartphones and a growing list of accessories.

Most of these sensors are already incredibly cheap (on the order of a few dollars), tiny, increasingly accurate, and providing a profusion of software services on smartphones. But more recently, sensors have begun measuring the human body.

In recent years, we’ve seen smartphone motion sensors adapted for use in wearable devices to track physical activity or sleep. These have had uneven success with consumers. But the next steps may change that, as devices are able to accurately measure vital processes driven by the heart, lungs, blood—and even the brain.

Two brain activity devices were on display at Singularity University’s Exponential Medicine. The first, presented by InteraXon cofounder and CEO Ariel Garten, is the Muse headband, one of the first brain sensors available to consumers.


The Muse is a band worn across the forehead and secured behind the ears. Using seven electroencephalography (EEG) sensors strung across the band, the device measures levels of brain activity, sending the information to a smartphone.

What’s the point? The Muse is a biofeedback device. Coupled with early apps, it allows users to hear their brain—high levels of activity are sonified as crashing waves and wind, calm states as gentle, lapping waves and birdsong. By being consciously aware of brain activity, we can learn to control it, thus taming anxiety.

The Muse, born in an Indiegogo campaign almost two years ago, went on sale in August. At $299, it isn’t cheap, and neither is it inconspicuous. Even so, a few Exponential Medicine attendees joined Garten, wearing their Muse for the duration.

Garten’s Muse, however, isn’t the only compact brain sensing device out there. Philip Low presented on the iBrain, a similarly compact device—Low claims the latest iteration is the smallest brain sensor available—slung across the forehead.

In contrast to the Muse, iBrain is primarily used in research. Low said his goal was to fold as much brain sensing tech into a single device. He was only able to do this by writing specialized algorithms to consolidate all this in a single channel.

“We’re putting much less on people and getting much more out of them,” Low said. “And we can do this in their homes.”


The iBrain has been used to study a number of disorders including ALS, autism, and depression. Low said in one Navy study they used the device to make an accurate diagnosis of PTSD and SSRI treatment in one patient with data from the iBrain alone.

But Low thinks the device has potential beyond research—he thinks it might prove an effective brain-computer interface. Famously, Stephen Hawking experimented with the iBrain as a communication device. Another ALS patient, Augie Nieto, later used the system to move a cursor on a screen and select letters to form words.

The greatest strength of Low and Garten’s brain sensing technology is that it is non-invasive—some brain-computer interfaces require implants in the brain to work consistently—and in the coming years, Garten expects more improvement.

While EEG caps still reign supreme in most high fidelity brain experiments we’ve seen lately—like Adam Gazzaley’s Glass Brain or this pair of brain-to-brain communication experiments—better software and sensors may change that.

In her talk, Garten said she imagines her device, still highly conspicuous today, will be replaced by smaller, subtler versions, maybe even a series of patches. And whereas her device only goes one way—recording brain activity—future devices might also provide stimulation too. (Indeed, we’ve covered some such devices in the past.)

Today’s body sensing technology is a constellation of disconnected hardware of varying accuracy and sensitivity. But as the tech develops and disappears—woven into our clothes or wearable patches—it may become second nature to regularly look into our hearts or brains on a smartphone, and take action to right the ship.

Image Credit: InteraXon/Muse




Brain’s Role in Browning White Fat [1060]

by System Administrator - Friday, 16 January 2015, 12:01


Brain’s Role in Browning White Fat

By Anna Azvolinsky

Insulin and leptin act on specialized neurons in the mouse hypothalamus to promote conversion of white to beige fat.

Ever since energy-storing white fat was shown to convert to metabolically active beige fat, through a process called browning, scientists have been trying to understand how this switch occurs. The immune system has been shown to contribute to activation of brown fat cells. Now, researchers from Monash University in Australia and their colleagues have shown that insulin and leptin—two hormones that regulate glucose metabolism and satiety and hunger cues—activate “satiety” neurons in the mouse hypothalamus to promote the conversion of white fat to beige. The results are published today (January 15) in Cell.

Hypothalamic appetite-suppressing proopiomelanocortin (POMC) neurons are known to relay the satiety signals in the bloodstream to other parts of the brain and other tissues to promote energy balance. “What is new here is that one way that these neurons promote calorie-burning is to stimulate the browning of white fat,” said Xiaoyong Yang, who studies the molecular mechanisms of metabolism at the Yale University School of Medicine, but was not involved in the work. “The study identifies how the brain communicates to fat tissue to promote energy dissipation.”

“The authors show that [insulin and leptin] directly interact in the brain to produce nervous-system signaling both to white and brown adipose tissue,” said Jan Nedergaard, a professor of physiology at Stockholm University who also was not involved in the study. “This is a nice demonstration of how the acute and chronic energy status talks to the thermogenic tissues.”

Although the differences between beige and brown fat are still being defined, the former is currently considered a metabolically active fat—which converts the energy of triglycerides into heat—nestled within white fat tissue. Because of their energy-burning properties, brown and beige fat are considered superior to white fat, so understanding how white fat can be browned is a key research question. Exposure to cold can promote the browning of white fat, but the ability of insulin and leptin to act in synergy to signal to the brain to promote browning was not known before this study, according to author Tony Tiganis, a biochemist at Monash.

White fat cells steadily produce leptin, while insulin is produced by cells of the pancreas in response to a surge of glucose into the blood. Both hormones are known to signal to the brain to regulate satiety and body weight. To explore the connection between this energy expenditure control system and fat tissue, Garron Dodd, a postdoctoral fellow in Tiganis’s laboratory, and his colleagues deleted one or both of two phosphatase enzymes in murine POMC neurons. These phosphatase enzymes were previously known to act in the hypothalamus to regulate both glucose metabolism and body weight, each regulating either leptin or insulin signaling. When both phosphatases were deleted, mice had less white fat tissue and increased insulin and leptin signaling.

“These [phosphatase enzymes] work in POMC neurons by acting as ‘dimmer switches,’ controlling the sensitivity of leptin and insulin receptors to their endogenous ligands,” Dodd told The Scientist in an e-mail. The double knockout mice also had an increase in beige fat and more active heat-generating brown fat. When fed a high-fat diet, unlike either the single knockout or wild-type mice, the double knockout mice did not gain weight, suggesting that leptin and insulin signaling to POMC neurons is important for controlling body weight and fat metabolism.

The researchers also infused leptin and insulin directly into the hypothalami of wild-type mice, which promoted the browning of white fat. But when these hormones were infused but the neuronal connections between the white fat and the brain were physically severed, browning was prevented. Moreover, hormone infusion and cutting the neuronal connection to only a single fat pad resulted in browning only in the fat pad that maintained signaling ties to the brain. “This really told us that direct innervation from the brain is necessary and that these hormones are acting together to regulate energy expenditure,” said Tiganis.

These results are “really exciting as, perhaps, resistance to the actions of leptin and insulin in POMC neurons is a key feature underlying obesity in people,” said Dodd.

Another set of neurons in the hypothalamus, the agouti-related protein expressing (AgRP) or “hunger” neurons, are activated by hunger signals and promote energy storage. Along with Tamas Horvath, Yale’s Yang recently showed that fasting activates AgRP neurons that then suppress the browning of white fat. “These two stories are complementary, providing a bigger picture: that the hunger and satiety neurons control browning of fat depending on the body’s energy state,” said Yang. Activation of POMC neurons during caloric intake protects against diet-induced obesity while activation of AgRP neurons tells the body to store energy during fasting.

Whether these results hold up in humans has yet to be explored. Expression of the two phosphatases in the hypothalamus is known to be higher in obese people, but it is not clear whether this suppresses the browning of white fat.

“One of the next big questions is whether this increased expression and prevention of insulin plus leptin signaling, and conversion of white to brown fat perturbs energy balance and promotes obesity,” said Tiganis. Another, said Dodd, is whether other parts of the brain are involved in signaling to and from adipose tissue. 

G. Dodd et al., “Leptin and insulin act on POMC neurons to promote the browning of white fat,”Cell, doi:10.1016/j.cell.2014.12.022, 2015.  

Brain’s “Inner GPS” Wins Nobel [917]

by System Administrator - Tuesday, 7 October 2014, 19:49

Left to right: John O’Keefe, May-Britt Moser, Edvard Moser


Brain’s “Inner GPS” Wins Nobel

John O’Keefe, May-Britt Moser, and Edvard Moser have won the 2014 Nobel Prize in Physiology or Medicine “for their discoveries of cells that constitute a positioning system in the brain.”

By Molly Sharlach and Tracy Vence

O’Keefe, a professor of cognitive neuroscience at University College London, will receive one half of this year’s prize. Husband-and-wife team May-Britt and Edvard Moser, both professors at the Norwegian University of Science and Technology (NTNU), will share the second half.

Together, their work identified an inner positioning system within the brain: O’Keefe is being honored for his discovery of so-called place cells, while the Mosers are recognized for their later work identifying grid cells.

“The discoveries of John O’Keefe, May-Britt Moser and Edvard Moser have solved a problem that has occupied philosophers and scientists for centuries,” the Nobel Foundation noted in its press release announcing the award: “How does the brain create a map of the space surrounding us and how can we navigate our way through a complex environment?”

Menno Witter, the Mosers’ colleague at NTNU’s Kavli Institute for Systems Neuroscience/Centre for Neural Computation, first met the pair in the 1990s when they were students at the University of Oslo; Witter was an assistant professor at VU University Amsterdam.

Their work “is a very important contribution in terms of understanding at least part of the neural code that is generated in the brain that allows species—probably including humans—to navigate,” Witter told The Scientist. “We’re all very, very pleased, because it to us shows that what we’re doing . . . as a whole community is considered to be really important and prestigious. It is also, I think, a fabulous sign to the world that Norwegian science is really at a top level.”

Francesca Sargolini, a cognitive neuroscientist at Aix-Marseille University in France, worked with the Mosers when she was a postdoc. The lab had a “wonderful, stimulating atmosphere,” Sargolini told The Scientist. Discoveries made by O’Keefe and the Mosers have helped researchers understand “how the brain computes . . . information to make a representation of spaces, so we can use that information to move around in the environment and do what we do every day,” she added.

“This is a very well-deserved prize for John [O’Keefe] and the Mosers,” said Colin Lever, a senior lecturer in the department of psychology at Durham University in the U.K., who earned a PhD and continued postdoctoral research in O’Keefe’s lab.

“This is a fascinating area of research,” Lever continued. “What we’re discovering about the brain through spatial mapping is likely of greater consequence than just for understanding about space. . . . Indeed, it seems to support autobiographical memory in humans.”
Update (October 6, 11:57 a.m.): O’Keefe and Lynn Nadel met as graduate students at McGill University in Montreal. In 1978, when Nadel was a lecturer at University College London, the two coauthored the seminal book The Hippocampus as a Cognitive Map. “We pursued the spatial map story for some years together, and we still do so separately,” Nadel, who is now a professor of psychology at the University of Arizona, told The Scientist. “From my point of view, this award really recognizes the whole enterprise of looking at cognition in terms of brain function,” he added. “It’s pretty cool.”

Discovery of 'brain GPS' places neuroscientists in league of Nobel Laureates

Men are said to be better than women at creating maps in their brains. But as the three winners of the 2014 Nobel Prize in Medicine show, even mice have brain cells for navigation.


You may not think it, but research on the brains of mice and rats has revealed how humans find their way from one place to another.

It is for their work in this area that John O'Keefe of University College London, and May-Britt Moser of the Centre for Neural Computation in Trondheim and her husband Edvard Moser of the Kavli Institute for Systems Neuroscience in Trondheim have been announced as this year's winners of the Nobel Prize in Medicine and Physiology.

One half of the prize goes to British-American John O'Keefe, the other half to the Norwegian couple May-Britt and Edvard Moser, "for their discoveries of cells that constitute a positioning system in the brain," the Nobel Assembly at the Karolinska Institute in Stockholm said on Monday.

When the Nobel committee contacted May-Britt Moser to give her the news, she said she cried.

"I was in shock, and I am still in shock," she told the Karolinska Institute in an interview directly after the announcement. "This is so great!"


What works in mice, works in humans

"Absolutely thrilled"

Other European neuroscientists are celebrating the Nobel Prize committee's decision as well.

"It is fantastic," Michael Brecht, a brain researcher at the Bernstein Centre for Computational Neuroscience in Berlin, told DW.

Brecht, who says he knows all three Nobel Laureates personally, added "they are great people and very impressive research characters. This Nobel Prize is well-deserved."

Marianne Hafting Fyhn of the department of bioscience at University of Oslo worked under May-Britt and Edvard Moser in their research group at the Norwegian University of Science and Technology in Trondheim.

"Personally it is a wonderful day for me as well," she told DW. "[The Mosers] are extremely nice people. There was always a nice atmosphere in the lab and they work on a very high scientific standard."

At the Max Planck Institute for Brain Research in Frankfurt am Main spokesman Arjan Vink said "we are absolutely thrilled."

"Edvard Moser and John O'Keefe were only here at our institute less than two weeks ago," Vink said. The two scientists, not yet Nobel Laureates at the time, were there to speak at a symposium.


John O'Keefe discovered navigation in the brain back in the 1970s

Navigation through memory

Seldom are newly announced Nobel Laureates still engaged in everyday, active research. But that is the case with O'Keefe and the Mosers.

That said, John O'Keefe's first crucial discovery was more than 40 years ago.

In 1971, he watched rats moving freely in a room and recorded signals from nerve cells in a part of their brains called the hippocampus.

The hippocampus is responsible for assigning things to long-term memory.

O'Keefe discovered a new type of cell, which he called "place cells." He concluded that they form a virtual map of the room.

"At that time, scientists already assumed that the hippocampus had to be important [for orientation]," Brecht says. "But only O'Keefe had the crucial insight of how to study these cells. He realized you just had to get the animals going."

Andrew Speakman of University College London says O'Keefe's success is not down to luck.

"He is an amazing scientist and the best kind of enthusiastic, original and inspirational person," Speakman said.

Speakman worked with O'Keefe in the 1980s.

"Like tiles in a bathroom"

For May-Britt and Edvard Moser, "the Nobel Prize [has come] quite early," says their ex-PhD student Marianne Hafting Fyhn.

It was less than ten years ago, in 2005, when the couple discovered another key component of the brain's positioning system - and "made things hum," as Michael Brecht puts it.


They investigated brain regions neighboring the hippocampus and found so-called "grid cells".

These cells generate a coordinate system and allow for precise positioning and path-finding.

The cells fire when the animals are at certain locations "and these locations form a hexagonal pattern", Edvard Moser told DW in an interview earlier this year. "It is almost like the tiles in a bathroom."

Or like the grid on a city map.

"That there are grid patterns inside the animal's head was an amazing observation," Brecht says.
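The hexagonal firing pattern described above is often captured in computational models of grid cells as a sum of three plane-wave cosines whose directions are 60 degrees apart. The Python sketch below shows that textbook idealization; the parameter names and normalization are illustrative choices, not taken from the laureates' work.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y): the sum of
    three plane-wave cosines whose wave vectors point 60 degrees apart,
    which tiles the plane with a hexagonal lattice of firing fields.
    `spacing` is the distance between neighboring fields."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number giving that spacing
    rate = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3  # directions at 0, 60, 120 degrees
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        rate += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # The raw sum ranges over [-1.5, 3]; rescale it to [0, 1].
    return (rate + 1.5) / 4.5

# Firing is maximal at the lattice points, e.g. the cell's phase origin:
peak = grid_cell_rate(0.0, 0.0)  # → 1.0
```

Evaluating `grid_cell_rate` over a grid of (x, y) positions and plotting the result shows firing peaks arranged on a hexagonal lattice, much like the "tiles in a bathroom" Moser describes.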

The Mosers showed that orientation "is a constructive process and not something that the animal has learned."

May-Britt and Edvard Moser were awarded the Körber European Science Prize earlier this year for their discovery.

From mice to humans

"This internal map is general across all mammals that have been investigated," Edvard Moser said.

And it is a system that develops quite early in evolution.

"Most recently, grid cells have been shown also in humans."

Brecht says "it was to be expected" that sooner or later the Nobel Prize would be awarded to the discoveries linked to the brain's navigation system as this research is "relevant for the field of medicine."

In patients with Alzheimer's disease, for example, the brain regions involved with orientation are frequently affected at an early stage.

Patients often lose their way and fail to recognize their environment.

"Learning how this system works in the normal brain will have consequences for the diagnosis and treatment of Alzheimer's disease," Edvard Moser said in his DW interview.

But it is still just basic research, he says. "[There'll be] more in the future, but we will get there."


Bridging the Mental Healthcare Gap With Artificial Intelligence [1706]

by System Administrator - Monday, 17 October 2016, 11:41

Bridging the Mental Healthcare Gap With Artificial Intelligence


Artificial intelligence is learning to take on an increasing number of sophisticated tasks. Google Deepmind’s AI is now able to imitate human speech, and just this past August IBM’s Watson successfully diagnosed a rare case of leukemia.

Rather than viewing these advances as threats to job security, we can look at them as opportunities for AI to fill in critical gaps in existing service providers, such as mental healthcare professionals.

In the US alone, nearly eight percent of the population suffers from depression (that’s about one in every 13 American adults), and yet about 45 percent of this population does not seek professional care due to the costs.

There are many barriers to getting quality mental healthcare, from searching for a provider who’s within your insurance network to screening multiple potential therapists in order to find someone you feel comfortable speaking with. These barriers stop many people from finding help, which is why about ninety percent of suicide cases in the US are actually preventable.

But what if artificial intelligence could bring quality and affordable mental health support to anyone with an internet connection?

This is the mission of X2AI, a startup that built Tess AI to provide quality mental healthcare to anyone, regardless of income or location.

X2AI calls Tess a “psychological AI.” She provides a range of personalized mental health services—like psychotherapy, psychological coaching, and even cognitive behavioral therapy. Users can communicate with Tess through existing channels like SMS, Facebook messenger, and many internet browsers.

I had the opportunity to demo Tess at last year’s Exponential Medicine Conference in San Diego. I was blown away by how natural the conversation felt. In fact, a few minutes into the conversation I kept forgetting that the person on the other side of the conversation was actually a computer.

Now, a year later at Exponential Medicine 2016, the X2AI team is back and we’re thrilled with their progress. Here’s our interview with CEO and co-founder Michiel Rauws.

Since Tess was first created, how has the AI evolved and advanced? What has the system’s learning process been like?

The accuracy of the emotion algorithms has gone up a lot, and also the accuracy of the conversation algorithm, which understands the meaning behind what people say.

Is there a capability you are working on creating to take Tess’s conversational abilities to the next level?

We’re about to update our admin panel, so it will be very simple for psychologists to add their own favorite coping mechanisms into Tess. A coping mechanism is a specific way of talking through a specific issue with a patient.

Do you believe that users should know they are speaking with an AI? What are the benefits of having the human absent from a sensitive conversation?

Yes, they should absolutely be aware of that.

There’s quite some evidence out there that speaking with a machine takes away a feeling of judgment or social stigma. It’s available 24/7 and for as long as you want—you don’t pay by the hour.

The memory of a machine is also far better because it simply does not forget anything. In this way there is opportunity to connect dots that a human would not have thought of because they forgot part of the facts. There are also no waiting lists to get to talk to Tess.

One of the most important aspects is that, from a clinical standpoint, it is a huge advantage that Tess is always consistent and provides the same high quality work. She never has a bad day, or is tired from a long day of work.

The therapeutic bond is often mentioned as very important to the success of a treatment plan, as is the patient’s match with the therapist, otherwise he/she needs to look for another therapist. Tess, however, adapts herself to each person to ensure there will always be a match between Tess and the patient. And there is of course the part of Tess being scalable and exponentially improving.


Eugene Bann, co-founder and CTO, testing the AI in the field in Lebanon.

How does Tess handle receiving life-threatening information, such as being sent a suicidal message from a user?   

Patient safety always comes first. At all times when Tess is talking to the user she evaluates how the person is feeling with an emotion algorithm, which we’ve developed over the past 8 years, through a research firm called AEIR, now a subsidiary of X2AI.

In that way Tess always keeps track of how the user has been feeling and whether there is a downward trend, or whether a very negative conversation is going on. Then there is the conversation algorithm that uses natural language processing to understand what the user is actually talking about, to pick up expressions like, “I don’t want to wake up anymore in the morning.”

Once such a situation requires human intervention then there is a seamless protocol to let either one of our psychologists or one of the clients take over the conversation. You can learn more about this in our explanation of AI ethics and data security.
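X2AI has not published the internals of Tess, but the safety logic Rauws describes (phrase spotting via natural language processing, mood-trend tracking, and a handoff to a human when either flags trouble) can be illustrated with a minimal sketch. Every detail here (the phrase list, the scoring, the threshold) is a hypothetical stand-in:

```python
# A toy sketch only: the phrase list, sentiment scores, and threshold
# below are invented for illustration and are not X2AI's actual system.
RISK_PHRASES = [
    "don't want to wake up",
    "end my life",
    "hurt myself",
]

def needs_human(messages, sentiment_scores, trend_threshold=-0.3):
    """Flag a conversation for human takeover if any message contains a
    risk phrase, or if the average mood has dropped sharply over time."""
    for msg in messages:
        lowered = msg.lower()
        if any(phrase in lowered for phrase in RISK_PHRASES):
            return True
    if len(sentiment_scores) >= 2:
        # Crude trend: recent average mood minus earlier average mood.
        mid = len(sentiment_scores) // 2
        early = sum(sentiment_scores[:mid]) / mid
        late = sum(sentiment_scores[mid:]) / (len(sentiment_scores) - mid)
        return (late - early) < trend_threshold
    return False

flagged = needs_human(
    ["I don't want to wake up anymore in the morning"], [0.1])  # → True
```

A production system would of course use trained classifiers rather than a keyword list; the point is only the escalation structure: detect risk, then route the conversation to a human.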

Emotional wellbeing—stress, anxiety, depression—are big issues in the US, and there’s a lack of mental health services for those at risk. What needs to happen for Tess to scale to solve this?

We are very diligent in our approach to safely and responsibly moving towards deploying Tess at scale. Right now we are working with healthcare providers to allow them to offer Tess to support their treatment.

We also employ psychologists ourselves who create the content of Tess in the first place. Thanks to these psychologists we are able to offer behavioral health services directly to large employers or employer health plans, as the psychologists can take care of parts of the treatment and stand by whenever additional human intervention is required. This applies not only in the case of an emergency, but also when Tess does not manage to figure out how to make the person feel better about a certain difficult situation.

What shifts in public opinion around AI are needed for an AI like Tess to become a societal norm? How far away do you think we are from reaching this point?

AI can be useful today to actually help people and give people access to services they were not able to afford before.

Talking to a robot can get you a better experience than a person would, for the reasons I mentioned above. And if tasks that are handled by people now, but that a machine could handle as well or better, were actually handled by machines, then people could dedicate more of their time to problems machines cannot take care of.

In this way the entire healthcare system becomes more sustainable and affordable.

Want to keep up with coverage from Exponential Medicine? Get the latest insights here.



Broncemia [651]

by System Administrator - Saturday, 2 August 2014, 02:09

Arrogance as a style in the practice of medicine

That old and terrible disease, "broncemia." A masterful lecture by Dr. Francisco Occihuzzi delivered at TED Córdoba. Arrogance as a style in the practice of medicine: a pathetic way of distancing oneself from patients' pain and from the affection of one's colleagues.

Continue reading on the website


Bugs as Drugs: Seeking the Microbial Secret to Health [1600]

by System Administrator - Thursday, 3 December 2015, 19:33

Bugs as Drugs: Seeking the Microbial Secret to Health


Our body is, in essence, more ecosystem than organism.

The human body teems with trillions of microbes — bacteria, viruses and fungi — and at any moment, we may be carrying between one and three pounds of these micro-hitchhikers in colonies on our skin, groin, mouths, and sinuses. By far, however, the gut microbiome ecosystem — the largest and most complex — is the one that has both academic researchers and pharmaceutical companies hooked.

The reason? Our gut bugs may be the new frontier of a billion-dollar bioceutical industry.


Manipulating the microbiome for health is nothing new. The growing probiotics industry earns some $30 billion globally each year selling supplements and foods and drinks like yogurt or kombucha. The health claims are: improved intestinal health, buzzing energy, weight loss and “all natural” mood enhancement.

“It’s unregulated, unsupported,” says Dr. Martin Blaser, a pioneer of human microbiome studies at NYU and advisor to Second Genome, a startup based in the Bay Area.

But that’s set to change. For years, the boom in over-the-counter probiotics was more hype than science; now, mounting evidence is beginning to link conditions ranging from the physical — irritable bowel syndrome, Type 2 diabetes — to the mental — autism, Parkinson’s, depression — to the gut’s resident microbugs.

"The microbiome field has produced some of the most exciting science discoveries of the last five years, and its potential impact on human health is just too big to ignore," says Bernat Olle, chief operating officer at Vedanta Biosciences, a Boston startup that looks to treat autoimmune and inflammatory diseases by modulating the microbiome.

Dr. Justin Sonnenburg, a microbiologist at Stanford University, agrees. “Undoubtedly, the microbiome is a little drug factory in our intestine,” he says.

One with many unresolved mysteries, and a hell of a lot to offer.

Beyond the vanguard

The tantalizing links between gut microbes and health have only recently begun to be accepted by mainstream science.

The main roadblock is proving causality. “It's very difficult to tell if microbial differences you see associated with diseases are causes or consequences,” says Rob Knight, a microbiologist at the University of California, San Diego.


So far, the question’s been hard to answer. Most of the sophisticated, tightly controlled experiments were done in mice raised in completely sterile environments — hardly the best model, given that most humans are colonized at birth by resident microbes in our mothers’ vaginal canal.

Data in humans are far more limited and often correlational in nature. In varying degrees, our microbiome connects to a slew of metabolic diseases, such as obesity and diabetes. It also, crucially, acts as a communication channel between our immune and digestive systems.

The link to brain disorders is perhaps the most tantalizing. Children with autism, for example, often also suffer from gastrointestinal problems, as do patients with anxiety, depression, schizophrenia, and neurodegenerative disorders, suggesting malfunctioning gut microbes. But is this merely an association, or a cause of their illness?

Without figuring out causal relationships, there’s little use in pursuing a drug. But bioceutical companies like Second Genome have a workaround. The goal is not to target every disease, says CEO Peter DiLaura, it’s to take aim at just a few.

In particular, ones with large unmet therapeutics need and evidence showing a large microbiome-driven causation, explains DiLaura. Obesity, Type 2 diabetes and Crohn’s disease all fit the bill, and Second Genome is tackling each with laser focus.

Chemical exploration

If successful, Second Genome could relieve millions of people of their chronic disease. The key to unlocking the gut-health secret, said DiLaura, is tapping into the ancient language that connects host with microbe.

“We’re code breakers,” says DiLaura. For eons, we have only focused on the host side of things; with advances in genomic profiling and big data, we can finally tap into the conversation within and surrounding the microbiome.

Preliminary results hint at complex answers. The composition of the gut microbiome — the balance between “good” and “bad” bacteria — can influence health by regulating inflammation in the body.

We can easily change the microbiome composition with diet and antibiotics, sometimes in less than a week, says Sonnenburg. This works well in our favor. His team is currently working on a molecule called sialic acid, which prevents harmful bacteria from taking over the gut after heavy antibiotic use.

Other companies are taking “bugs as drugs” quite literally.

OpenBiome, a company based in Cambridge, Mass, is providing frozen stool samples from healthy, pre-screened individuals to hospitals. There, the samples are put into the colons of people suffering from the deadly — and otherwise untreatable — gut infection Clostridium difficile, which kills 14,000 Americans each year.

Early results have been nothing short of remarkable so far: in a 2011 review of 317 patients, fecal transplants cleared up the infection in 92% of cases. To facilitate long-term maintenance treatment, the team is working on capsules that patients can take orally.


You might be thinking eww: after all, it’s hard not to give a crap about swallowing poop pills. Researchers agree. The next step is to try to isolate and culture healthful strains of bacteria that bear the brunt of the therapeutic work in fecal matter, and make them into probiotics to stop recurrent infections or prevent one from happening altogether. So far, the effects are there but moderate, and scientists are tweaking the probiotic composition to further optimize treatment.

Arguably, however, the most pragmatic approach is looking at gut-bug bioactives — that is, proteins and metabolites secreted by the microbiome that impact health.

Second Genome is striding down this path. In close collaboration with academic advisors, the company is striving to find bioactives that are secreted by a healthy microbiome. So far, research is homing in on a class of chemicals called short-chain fatty acids. These molecules are constantly produced as gut microbes break down starchy foods, which — in ways yet uncovered — regulate our immune system, the integrity of our brain cells and an array of metabolic pathways.

Other molecules may combat the deadly effects of multiple sclerosis, a devastating degenerative brain disease currently without cure. Yet others may prove to be a new source of antibiotics, to amp up our rapidly dwindling antibiotic arsenal.

We’ve uncovered only the tip of the bioactive iceberg, and it’s a field ripe for discovery.

“People are eager to learn what exactly helpful bacteria are doing,” says Dr. Michael Fischbach, a microbiologist at the University of California, San Francisco, who uses machine learning to hunt down drug-making genes in the human microbiome.

“Nobody had anticipated that they have the capability to make so many different kinds of drugs,” says Fischbach. “We used to think that drugs were discovered by drug companies and prescribed by a physician and then they get to you.”

Forget that. The future of drug making may be right on — and inside — your body.





Bullying and Depression in Youths [1646]

by System Administrator - Wednesday, 3 February 2016, 17:03

Bullying and Depression in Youths

By Karen Dineen Wagner, MD, PhD 

During an evaluation, a 10-year-old girl who was depressed said that classmates called her stupid and laughed at her all the time. Fortunately, the teachers stopped this verbal bullying. However, unbeknownst to the teachers, the students would text disparaging comments to the girl throughout the school day. She was reluctant to inform the teachers because she thought it would worsen the situation.

A 14-year-old girl in treatment for depression reported that she was being bullied in school. She said that girls in her classes constantly made negative comments about her appearance, dress, and behavior. They excluded her from social activities despite her desire to participate. She said that the girls were telling lies about her to boys in the class, which was damaging her reputation. What upset her the most was that an online site had been created in which students were encouraged to write all the reasons they hated her. She cried and said that she could not avoid the bullying even outside of school. She believed the only way to escape the bullying was to not exist anymore, and she confirmed that she was suicidal.


Cyberbullying, which allows bullying to extend beyond face-to-face contact into electronic media, has received considerable recent attention. Hamm and colleagues [1] examined its effects via social media among children and adolescents. They included 36 studies of cyberbullying in their review. Most youths in these studies were middle and high school students, aged 12 to 18 years. The majority were female (55.8%).

Across these studies, 23% of the youths reported having been bullied online. The most common electronic social media platforms for bullying included message boards, social networking sites, blogs, Twitter, and Web pages. The most common types of cyberbullying were name-calling or insults, circulating pictures, and spreading gossip and rumors. Often relationship issues preceded the bullying. Girls were more likely to be cyberbullied than boys.

Adolescents who had been cyberbullied reported becoming more withdrawn, losing self-esteem, and feeling uneasy. There were adverse effects on relationships with family and friends. School grades worsened, there were more school absences, and behavior problems in school became common.

Depression was associated with cyberbullying. The adolescent’s level of depression increased significantly with exposure to cyberbullying. In some cases, cyberbullying was associated with self-harm behavior and suicidal ideation and attempts.

The most common strategies employed by the adolescents to deal with cyberbullying were to block the sender, ignore or avoid messaging, and protect personal information. Nearly 25% of the adolescents did not tell anyone about the cyberbullying. If they did tell someone, it was most likely to be a friend rather than an adult. Often adolescents perceived that nothing could be done to prevent the bullying, and if they told their parent about the bullying, they would lose access to the computer. The researchers suggest that increased awareness of the prevalence of cyberbullying and its adverse effects may lead to better prevention and management strategies.

The correlation between depression and bullying

In a recent study, Bowes and colleagues [2] examined the association between being bullied by peers at age 13 and the occurrence of depression at 18 years. The study comprised 6719 adolescents from the Avon Longitudinal Study of Parents and Children cohort in the UK. About 10% (n = 683) of the participants reported frequent bullying at age 13. The proportion of youths with depression increased with the frequency of bullying: 14.8% of the youths who were frequently bullied met criteria for depression, whereas 7.1% of youths who were occasionally bullied and 5.5% of youths who were not bullied met criteria for depression.



Bullying: The Uncomfortable Truth About IT [882]

by System Administrator - Friday, 19 September 2014, 15:34

Bullying: The Uncomfortable Truth About IT

Provided by  IDG CONNECT


IDG Connect has spoken to a range of bullying experts and surveyed a self-selecting sample of 650 IT professionals. This report blends new statistics with detailed feedback from over 400 testimonials and aims to shed light on this misunderstood and overlooked topic.

Bullying is rife in schools. In fact, its legacy can leave such a lasting impact on people’s lives that it has become a recurring theme in popular films, songs and books.

Bullying is also rampant in the workplace, although many individuals vehemently deny this is the case. This phenomenon is far more complicated than the childhood equivalent, but the results are equally devastating for victims and organisations alike. There is some evidence to suggest that things might be slightly worse in IT.

Over the last few months IDG Connect has been investigating this in more detail. We have spoken to numerous industry professionals, consulted a panel of experts and conducted a self-selecting survey of 650 IT professionals to gather over 400 in-depth personal testimonials. The results show 75% of our respondents claim to have been bullied at work and 85% have seen it happen to others.

These results in no way prove that things are worse in IT than elsewhere, but they do paint a pretty comprehensive picture of the problem. Above all, these findings highlight the fact that however these issues are defined, they are endemic throughout the IT workplace: if nothing else, they need to be acknowledged and discussed.

In the course of this short report, we look at what we have discovered and see if we can shed some light on this seemingly ongoing problem. We profile the bullies and the bullied and try to provide some context on why IT could be worse than other professions. This is an insidious problem; it is extremely difficult to pinpoint, yet it is everyone’s responsibility to understand what goes on – this is the only way it will ever be tackled.


Please read the attached whitepaper.





Calibración diagnóstica [642]