Neurociencia | Neuroscience





Machine Dreams [1253]

by System Administrator - Thursday, June 25, 2015, 16:03

This Is What Happens When Machines Dream

By Jason Dorrier

When we let our minds wander, sleeping or waking, they begin mixing and remixing our experiences to create weird images, hallucinations, even epiphanies.

These might be the result of idle daydreaming on the side of a hill, when we see a whale in the clouds. Or they might be more significant, like the famous tale that the chemist Friedrich Kekulé discovered the circular shape of benzene after daydreaming about a snake eating its own tail.

There is little doubt we are a species consumed by our dreams—that our ability to find unexpected new patterns in the noise is what makes us human and what makes us creative.

Maybe that’s why a set of incredibly dream-like images recently released by Google are causing such a stir. These particular images were dreamed up by computers.


Google calls the process by which the images were created inceptionism, recalling the movie, and likewise, the images themselves range from beautiful to bizarre.

So, what exactly is going on here? We recently wrote about the torrid advances in image recognition using deep learning algorithms. By feeding these algorithms millions of labeled images ("cat," "cow," "chair," etc.), they learn to recognize and identify objects in unlabeled images. Earlier this year, machines at Google, Microsoft, and Baidu beat a human benchmark at image recognition.
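The label-driven learning described above can be sketched in miniature. The snippet below is a toy nearest-centroid classifier, nothing like the deep networks Google or Microsoft actually used; the 8-pixel "images," the two labels, and the noise scale are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled "images": 8-pixel vectors scattered around a per-class template.
templates = {"cat": rng.normal(size=8), "chair": rng.normal(size=8)}
train = [(t + rng.normal(scale=0.1, size=8), label)
         for label, t in templates.items() for _ in range(20)]

# "Training": store the mean image per label (a nearest-centroid model).
centroids = {label: np.mean([x for x, l in train if l == label], axis=0)
             for label in templates}

def classify(image):
    # Label an unlabeled image by its closest learned centroid.
    return min(centroids, key=lambda label: np.linalg.norm(image - centroids[label]))

# An unlabeled image drawn near the "cat" template gets labeled correctly.
query = templates["cat"] + rng.normal(scale=0.1, size=8)
print(classify(query))
```

The point is only the shape of the pipeline: labeled examples in, a stored summary per class, then predictions on unlabeled inputs.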

In this case, Google reversed the process. They tasked their software with generating images based on the information already stored in its artificial neural network.

And here’s the fascinating bit: in a part of the experiment where the software was allowed to "free associate" and then forced into feedback loops to reinforce these associations—it found images and patterns (often mash-ups of things it had already seen) where none existed previously.

In some examples it interpreted leaves as birds or trees as buildings. In others, it created weird imaginary beasts in clouds—an “admiral-dog,” “pig-snail,” “camel-bird,” or “dog-fish.”


This tastes a little like our own creativity. We take in impressions, mash them up in our mind, and form complex ideas—some nonsensical, others more profound. But is it the same thing?

The easy answer: Of course not.

It’s Lady Lovelace’s objection as outlined by Alan Turing. Ada Lovelace, daughter of poet Lord Byron, wrote the earliest description of what we’d today call the software and programming of a modern universal computer. And she doubted machine creativity would ever exist.

“The Analytical Engine has no pretensions whatever to originate anything,” Lovelace wrote. “It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”

That is, machines do as we tell them. Nothing more.

Turing rephrased Lovelace’s objection as “a machine can never take us by surprise,” and he disagreed. His machines often surprised him, he said, mainly because he understood their underlying settings only in a general sense, while the specifics often conspired to create surprising results in practice.

Indeed, Google’s reason for running this experiment was that “we actually understand surprisingly little of why certain models work and others don’t.” In other words, we get the general idea, but we often don't know what's taking place in every step of the process.

Artificial neural networks, more or less based on the human brain, are made of hierarchical layers of artificial neurons. Each level is responsible for recognizing increasingly abstract image components. The first level, for example, might be tasked with finding edges and corners. The next level might look for basic shapes—all the way up until the final level makes an abstract leap to “fork” or “building.”
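That layer hierarchy can be sketched as a stack of transforms, each feeding the next. Below is a minimal toy forward pass; the layer widths, random weights, and ReLU activations are arbitrary assumptions, not the architecture of Google's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Widths for: input "pixels" -> edge features -> shape features -> class scores.
layer_sizes = [64, 32, 16, 10]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(image):
    # Collect each layer's activations: increasingly abstract representations.
    activations = [image]
    for w in weights:
        activations.append(relu(activations[-1] @ w))
    return activations

acts = forward(rng.normal(size=64))
print([a.shape for a in acts])  # one activation vector per level of abstraction
```

Each list entry stands in for one level of the hierarchy the paragraph describes, from raw input up to the final abstract leap.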

Running the algorithms in reverse is a way of finding out what they’ve learned.

In one part of the experiment, the researchers asked the algorithms to generate a specific image, say its conception of a banana, in random noise (think static on a television screen). This was a way of determining how well it knew bananas. In one instance, when asked to generate a dumbbell, the software repeatedly showed dumbbells attached to arms.

"In this case, the network failed to completely distill the essence of a dumbbell," Google's engineers wrote in a blog post. "Maybe it’s never been shown a dumbbell without an arm holding it. Visualization can help us correct these kinds of training mishaps."
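The "show me your banana" probe works by gradient ascent on the input: start from noise and repeatedly nudge the pixels to raise the target class's score. Here is a hedged toy version in which a single linear layer stands in for the trained network; the real procedure backpropagates through a deep model and adds smoothness priors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "trained network": one linear layer, 10 pixels -> 3 class scores.
# Orthonormal weight columns keep the toy example well-behaved.
W, _ = np.linalg.qr(rng.normal(size=(10, 3)))
target_class = 0  # pretend class 0 is "banana"

x = rng.normal(scale=0.01, size=10)  # start from near-pure noise
for _ in range(100):
    # For a linear model, the gradient of the target score with respect
    # to the input is just that class's weight column; ascend it.
    x += 0.1 * W[:, target_class]

scores = x @ W
print(scores.argmax())
```

After the loop, the input has been reshaped into whatever the model scores highest for the target class, which is exactly how "its conception of a banana" emerges from static.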

It got more interesting when they allowed the algorithm to look at an image and free associate. How abstract the result was depended on which layer of artificial neurons they queried.

The first, least abstract layer emphasized edges. This resulted in “ornament-like” patterns. Something you've probably already seen in a photo sketch app. But more abstract features emerged in higher layers. These were then further accentuated by induced feedback loops.

The researchers asked the network, “Whatever you see there, I want more of it!”
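That feedback loop (detect a faint pattern, amplify it, look again) can be caricatured with a linear "layer": repeatedly nudging the input toward whatever the layer already responds to drives the image toward the layer's preferred pattern. The filter bank, step size, and iteration count below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

F = rng.normal(size=(16, 16))  # toy "layer": a fixed linear filter bank

def layer(x):
    return F @ x

x = rng.normal(size=16)  # the starting "image"
for _ in range(200):
    a = layer(x)
    # Gradient of 0.5 * ||a||^2 with respect to x is F.T @ a: boost
    # whatever the layer currently sees, then renormalize to stay bounded.
    x += 0.01 * (F.T @ a)
    x /= np.linalg.norm(x)

# The loop drives x toward the pattern the layer responds to most strongly
# (the top singular direction of F) -- "whatever you see, more of it."
print(np.linalg.norm(layer(x)))
```

In a deep network the same trick, applied at a high layer, turns faint leaf-like or dog-like responses into the admiral-dogs and pig-snails described above.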


In one sense, these images are absolutely the result of a machine spitting out the contents of its database as directed by its programmers. Just as Lady Lovelace would have noted. And at the same time, they are undoubtedly surprising in a way Alan Turing would recognize.

Probably the most surprising aspect is just how much they resemble, in both process and output, something we ourselves might create—a daydreamer finding weird shapes in the clouds or an abstract artist visualizing otherworldly and contradictory landscapes. (Indeed, perhaps the desire to anthropomorphize machines is itself an ironic example of finding patterns where none exist.)

And it’s tempting to further extrapolate the process.

What happens when programs take in images, text, other sensory data—eventually rich experiences more akin to our own? Can a process like inceptionism incite them to remix these experiences into original ideas? Where do we draw the line between human and machine creativity?

Ultimately, it's a circular debate, and a distinction impossible to definitively prove.

At the least, as computers get better at abstract concepts, they'll help scientists or artists find new ideas. And maybe along the way we'll gain new insights into the inner workings of our own creative processes.

For now, we can enjoy these first few surprising baby steps.

Image Credit: Google Research


Managing Anger [1649]

by System Administrator - Wednesday, February 3, 2016, 20:33

Ten tips for managing anger

by Esther Canales Castellanos

“Life is too important to be taken seriously”

Oscar Wilde

Anger is an innate emotion that arises when someone or something has pushed past our threshold of patience or our tolerance for frustration. In other words, anger serves to set limits; it has a positive intent and is sometimes genuinely useful. The drawback is that its consequences can become unpleasant both for ourselves and for others.

It is also one of the most difficult emotions to control, because it involves a high level of arousal. To get through those moments when we are about to explode, I propose the following ten tips. You do not need to apply all of them every time you sense you are about to get angry; sometimes one of them is enough.

  1. Relax. Anger, as noted above, produces a heightened state of arousal in the body. When we get angry we tense up, speak faster, and raise our voice, and our adrenaline level rises dramatically. People who devote time every day to relaxation and meditation are far less prone to anger. If you do not have that routine, the simplest fix is to breathe deeply several times before reacting.

  2. Observe your behavior and thoughts. If lately you have been getting angry very often without a truly justified reason, it may be time to change habits in order to manage anxiety, stress, and worry.

  3. Analyze the real cause of your anger. The most important questions to ask yourself are: Is the reason for my anger really that important? Is it worth how bad I will feel afterward? What other situation, one that truly does matter, might I actually be angry about?

  4. Put yourself in the other person's shoes. This means trying to understand why the other person behaved a certain way; it does not mean justifying bad behavior. We often stop at the most immediate or visible facts, without considering that the other person may have acted without ill intent, may be going through a bad patch, or that there was simply a misunderstanding in communication. We tend to think we are the center of the world, but that is not true. It is important to rein in the automatic thoughts that rush to judge the other person.

  5. Interpret the situation correctly. If something angers us, it is because we are interpreting it as a threat. Interpreting the situation does not mean we have no reason to be angry; it means analyzing how justified our anger is and whether we have read the situation accurately.

  6. Focus on solving the problem instead of complaining. Some people spend their lives complaining as if that were the solution. Complaining sinks us deeper into the problem, draining our energy and time. The best way to solve a problem is not to complain but to look for the best solutions. Complaining only makes us feel more frustrated and blocks our ability to solve the problem.

  7. Trade aggressiveness for assertiveness. Shouting, swearing, or slamming doors may help you release tension in the moment, but it certainly damages your relationships with others and hurts you as well. Find healthier ways to channel your anger, such as going for a walk or doing an activity that burns energy. Others are under no obligation to understand your anger; they cannot read your mind, only see your behavior. Honestly, I would not want to be next to someone in that state, because emotions are highly contagious.

  8. Accept toxic people for what they are. Some people's behavior or comments make us feel bad. Accepting that such a person is simply like that, that they will have their reasons, and that those reasons are not our business, frees us. Some things have no solution within our reach. This does not mean we should not set limits; that is precisely what anger is for. We just need to find healthier ways to set those limits.

  9. Laugh at the situation. Life is too important to be taken seriously, as Oscar Wilde said. Of course, there are situations where you cannot laugh, and you will have to judge when humor is appropriate. But a sense of humor relaxes tension and lets us see the situation from another point of view. It helps us reframe the situation and gain perspective. You cannot laugh and be angry at the same time.

  10. Learn from the situation.

In short, the point is to manage the emotion of anger so that it does not rule you.

Aristotle said it long ago:

“Anyone can become angry, that is easy. But to be angry with the right person, to the right degree, at the right time, for the right purpose, and in the right way, that is certainly not so easy.”

Esther Canales Castellanos. Psychologist, PsEC Coaching Expert, and Economist.




Manipulative Microbiomes [1202]

by System Administrator - Thursday, April 16, 2015, 12:54

Manipulative Microbiomes

Gut bacteria control tumor growth via the mammalian immune system.

By Jenny Rood


The paper
M.R. Rutkowski et al., “Microbially driven TLR5-dependent signaling governs distal malignant progression through tumor-promoting inflammation,” Cancer Cell, 27:27-40, 2015.


CANCER COMMAND: The microbiome (as puppeteer) affects the immune system’s influence on tumor growth, in tandem with inflammatory cytokines (dogs).


The polymorphism
The gut microbiome influences nonintestinal cancer progression, but the role of toll-like receptor 5 (TLR5), an immune system protein that recognizes commensal bacteria, was a mystery. As more than seven percent of people have nonfunctional TLR5, José Conejo-Garcia of the Wistar Institute in Philadelphia and colleagues explored how TLR5 impacts tumor growth.


The controller
In a mouse model of sarcoma, the researchers found that tumors grew much faster in wild-type mice than in TLR5-deficient ones. Wiping out the mice’s microbiomes diminished the disparity, indicating that TLR5’s tumor-promoting effects are driven by commensal bacteria. TLR5-dependent signaling triggered the production of inflammatory interleukin 6 (IL-6), which in turn activated an immunosuppressive cascade—not switched on in TLR5-deficient animals—that promoted tumor growth.

The twist
In human breast cancers or mouse mammary carcinomas, where IL-6 levels are much lower, TLR5 had the opposite effect. Tumors in TLR5-deficient mice grew better, due to the upregulation of IL-17, a different inflammatory cytokine. As in the sarcoma model, any differences between TLR5-deficient and wild-type mice disappeared upon antibiotic treatment. “Cancer is a systemic disease that is very dramatically influenced by commensal bacteria,” Conejo-Garcia says.

The manipulation
“It’s very exciting to see this kind of progress,” says Susan Erdman of MIT. She says looking into the effects of human polymorphisms such as TLR5 “really contributes to the concept of personalized medicine in cancer therapy.” As a first step toward future treatments, study coauthor Melanie Rutkowski is exploring how manipulating the microbiome could inhibit tumor growth.


Manual on Emotional Dependency [1485]

by System Administrator - Tuesday, October 6, 2015, 19:20

Manual on Emotional Dependency

by Karemi Rodríguez Batista | PsicoK

Emotional dependency is an addiction to another person, usually one's partner. Someone suffering from dependency develops an outsized need for the other person, giving up their own freedom and starting down a deeply tortuous and unpleasant path in which, for every minute of false happiness, we shed liters upon liters of tears. What are the symptoms? What are the consequences of this condition? Psychologist Silvia Congost explains them in this manual and also offers a series of recommendations for gradually overcoming it. We hope you find it useful!

Please download the attached PDF document.


Meal Plans [771]

by System Administrator - Tuesday, August 19, 2014, 16:38


SPLIT DECISION: Recently speciated marine bacteria use different strategies—biofilm growth (red) or mobility (blue)—to obtain nutrients | ILLUSTRATION BY YUTAKA YAWATA, GLYNN GORICK, AND ROMAN STOCKER

Meal Plans

Bacterial populations’ differing strategies for responding to their environment can set genetic routes to speciation.

By Rina Shaikh-Lesko 

The paper
Y. Yawata et al., “Competition–dispersal tradeoff ecologically differentiates recently speciated marine bacterioplankton populations,” PNAS, 111:5622-27, 2014.

The background
Marine bacteria obtain nutrients from clustered particles that float in the resource-poor broth that is ocean water. Yutaka Yawata, a postdoctoral fellow in Roman Stocker’s lab at MIT, and colleagues wondered if differing strategies used by bacteria to secure these scarce nutrients could influence how populations adapt to their microenvironments and, ultimately, drive speciation, or whether speciation happens by more passive routes.

Diverging strains
Yawata and his team studied two recently diverged populations of Vibrio cyclitrophicus—labeled S and L—isolated from different depths in the same ocean region. They found the L population to be skilled at attaching to nutrient particles and developing into biofilms, whereas the S population could swiftly move to unexploited nutrient patches.  

The technique
The researchers used a microfluidic device to create chemical gradients that could be quickly altered to observe how the bacteria respond. They found that S bacteria responded much more quickly to changes in nutrient gradients, while L populations stayed attached to surfaces, where they created biofilms. “They are starting to become two different species on a genetic basis—and on a behavioral basis,” says Yawata.  

The implications
The finding demonstrates that bacteria can actively respond to their environment to secure resources and that they make strategic tradeoffs to do so, characteristics previously shown only in plants and animals. “Behaviors can actually play a role and be barriers to gene flow,” causing populations to diverge even at the microscopic scale, says marine microbiologist Linda Amaral-Zettler of the Marine Biological Laboratory in Woods Hole, Massachusetts.



by System Administrator - Tuesday, October 28, 2014, 16:04


Written By: Arlington Hewes

Not so long ago we covered a miniature, ant-sized computer chip designed to be embedded in everyday stuff to make it smarter. Instead of a cumbersome battery in need of constant recharging—the chip is powered wirelessly by radio waves.

Now, the same Stanford group, led by assistant professor of electrical engineering Amin Arbabian, is working on a sister chip destined to be implanted in the body to keep tabs on internal biological processes and distribute drugs and other therapies.

To date, medical implants bristle with wires or, when they’re wireless, are made bulkier by onboard batteries. Ideal implants would be wireless and battery-free.


Arbabian’s tiny Internet of Things chip powered by radio waves.

Unlike chips to be embedded in inanimate objects, implanted chips need to be fully compatible with the human body and present minimal health risk. To that end, Arbabian chose ultrasound to power his chip. Ultrasound is already safely used for sensitive procedures, like fetal imaging, and can provide the needed power.

How does it work? The chip houses a special piezoelectric material that flexes in response to incoming ultrasound waves, creating a small amount of electricity.

“The implant is like an electrical spring that compresses and decompresses a million times a second, providing electrical charge to the chip,” says Marcus Weber, a Stanford graduate student working on Arbabian’s team.

Arbabian’s team found their device responded to targeted ultrasound through three centimeters of chicken meat—their human flesh analog.

As it’s powered, the chip is designed to translate ultrasound to power, process commands for particular actions, and send back confirmation by radio. Such tasks might, in the future, include biosensing or delivering electric shocks to relieve pain or ease the worst symptoms in neurological conditions like Parkinson’s disease.

Currently, the chip is about the same size as the end of a ballpoint pen. The team is working to make the next generation of chips a tenth that size. The hope is such tiny implants might one day form a sensory network for in vivo brain research.

Arbabian’s chips may well prove forerunners to such implants, or the team’s wireless power tech might be combined with other cutting-edge implant designs.

Recently, for example, we wrote about a new graphene biosensor. The sensor is extraordinarily thin (four atoms across), transparent, flexible, and biocompatible. Because the graphene chip allows light to pass through, it lends itself to brain research using traditional imaging methods and cutting-edge optogenetics.

Whichever technology, or combination of technologies, wins out, it appears a new generation of digital technology is poised to be implanted—and that’s exciting news for pure research, medical diagnostics, and the humane therapies of tomorrow.

Image Credit: Stanford




Participatory Medicine [649]

by System Administrator - Saturday, August 2, 2014, 02:04

"Doctor, come down from the ivory tower!"

A model of care in which the patient is an active participant in decision-making. The new and urgent role that doctors and patients need to adopt in a world of chronic disease.

An excellent video of Professor Bas Bloem at TEDx Maastricht, which featured sessions on the future of health. Bas Bloem is a neurologist in the Department of Neurology, Radboud University Nijmegen Medical Centre, the Netherlands. His talk explains the transition from a Health 1.0 model to what he calls "participatory healthcare." Can patients be turned into active participants in the care of their own health? Can the doctor come down from the ivory tower?



Meditate for More Profitable Decisions [703]

by System Administrator - Wednesday, August 6, 2014, 17:45

Meditate for More Profitable Decisions

by Zoe Kinias, INSEAD Assistant Professor of Organisational Behaviour, and Andrew Hafenbrack, INSEAD PhD student in Organisational Behaviour with Jane Williams, Editor, Knowledge Arabia

It’s a practice rooted in Hinduism and adopted by beatniks seeking spiritual guidance. Now evidence shows meditation can improve business decisions and save your company from expensive investment mistakes.

Meditation has become an increasingly popular practice amongst the C-suite elite. And, with CEOs such as Rupert Murdoch (News Corp); Bill Ford (Ford Motor Company); Rick Goings (Tupperware); and Marc Benioff (Salesforce) all touting its benefits, executive coaches are picking up on the trend, introducing mindfulness techniques into programmes to calm the mind’s “chatter”, assist focus and manage stress. But new empirical evidence suggests it’s more than just a “feel good” exercise: as little as 15 minutes of meditation can actually help people make better, more profitable decisions by increasing resistance to the “sunk cost bias”.

The sunk cost bias - also known as the sunk cost fallacy or the sunk cost effect - is recognised as one of the most destructive cognitive biases affecting organisations today. Put simply, it’s the tendency to continue an endeavor once an investment has been made in an attempt to recoup or justify “sunk” irrecoverable costs. The phenomenon is not new; psychological scientists have been studying the “escalation of commitment” since the mid-1970s, noting its ability to distort rational thought and skew effective decision-making. Often it’s a subconscious action, which can result in millions of dollars being invested into a project, not because it’s a sound investment but because millions of dollars have already been spent.

Avoiding the trap

But it’s a mind trap you can avoid as suggested by the paper Debiasing the Mind Through Meditation: Mindfulness and the Sunk Cost Bias by Andrew Hafenbrack, INSEAD PhD student in Organisational Behaviour, Zoe Kinias, INSEAD Assistant Professor of Organisational Behaviour and Sigal Barsade, the Joseph Frank Bernstein Professor of Management at The Wharton School. Their research shows just 15 minutes of mindfulness meditation – such as concentrating on breathing or doing a body scan – helps raise resistance to this problematic decision process, and open the way to more rational thinking.

“Prior research shows the more we invest in something (financially, emotionally, or otherwise), the harder it is to give up that investment and the more inclined we are to escalate a commitment,” Hafenbrack notes. “In many cases negative emotions; fear, anxiety, regret, even guilt or worry over past decisions, subconsciously play a part in the decision-making process.”

Most noted examples include the U.S. military campaigns in Vietnam in the 1960s, and more recently in the Middle East, when mounting casualties made it increasingly difficult for the U.S. government to withdraw. On the business front companies regularly fall victim to the bias when faced with decisions on whether to pump money into a product after being scooped by a competitor, or to continue an investment as costs skyrocket beyond initial estimates. Sometimes it’s a matter of a product just not selling as well as expected, as was the case with the Concorde supersonic jet when France and Britain continued their investment long after it was known the aircraft was going to be unprofitable.

The sunk cost bias can also be exacerbated by anticipated regret, the result of thinking too much about what may, or may not, occur in the future.

Building resistance

It is the result, says Kinias, of both emotional and temporal processes. MRI brain scans show the mind’s natural state constantly jumps around, flicking between ideas, switching from the past to the future to the present in seconds. Through a series of studies Hafenbrack, Kinias and Barsade hypothesised, and found that mindfulness meditation, by focusing on the present, quiets this mind-wandering process, diminishing the negative feelings that distort thinking, thereby boosting resistance to the sunk cost bias.

They began the research with a correlational study demonstrating the link between trait mindfulness and an individual’s ability to resist sunk costs. As people vary in how mindful they are by disposition, volunteers were first assessed for this trait. They were then asked to make decisions based on ten scenarios. Some were business-related; others were simple choices like whether to attend a music festival that had been paid for when illness and bad weather made enjoyment unlikely. As expected, the results indicated that higher trait mindfulness translated into greater resistance to including sunk costs in the decision-making process.

 In subsequent studies the team looked at the causal relationship between mindfulness meditation and the sunk cost bias in both laboratory and online settings. In each case, one group of volunteers was led through a breathing meditation (a form of mindfulness meditation) while another (the control group) underwent a mind-wandering induction, a simple procedure replicating the normal mental state.

All volunteers were then given a sunk-cost dilemma and asked to make a decision. In each study – whether it was online or in the laboratory - volunteers who had undergone mindfulness practice were significantly more likely to resist the sunk cost bias.

Letting go

What was surprising, says Kinias, was the magnitude of the effect that came after such a short period of meditation. “In one of our experiments more than half the participants in the control condition committed the sunk cost bias whereas only 22 percent committed it following the 15 minute mindfulness meditation - that’s a pretty dramatic effect.”

“There may be cases when processing of the past can be useful for making decisions,” she concedes, “but what our research suggests is that people make better choices in the present moment when letting go of sunk costs is required to make the best decision.”

Zoe Kinias is an Assistant Professor of Organisational Behaviour at INSEAD. 




Memory and Forgetting: The Brain's Keys [619]

by System Administrator - Friday, August 1, 2014, 21:37

The brain's keys to memory and forgetting

by Karemi Rodríguez Batista

A masterful interview with Ignacio Morgado, Professor of Psychobiology at the Instituto de Neurociencia of the Universidad Autónoma de Barcelona, who explains in his latest book, from the standpoint of psychobiology, what happens in the brain when we learn; how memories are formed, stabilized, and endure; what, how, and when we remember; what forgetting is, why we forget, and much more.



Memory capacity of brain is 10 times more than previously thought [1643]

by System Administrator - Saturday, January 30, 2016, 15:35

Memory capacity of brain is 10 times more than previously thought

Source: Salk Institute

The brain's memory capacity is in the petabyte range, as much as the entire Web, new research indicates. The new work answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.

Salk scientists computationally reconstructed brain tissue in the hippocampus to study the sizes of connections (synapses). The larger the synapse, the more likely the neuron will send a signal to a neighboring neuron. The team found that there are actually 26 discrete sizes that can change over a span of a few minutes, meaning that the brain has a far greater capacity for storing information than previously thought. Pictured here is a synapse between an axon (green) and dendrite (yellow).

Credit: Salk Institute

Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.

"This is a real bombshell in the field of neuroscience," says Terry Sejnowski, Salk professor and co-senior author of the paper, which was published in eLife. "We discovered the key to unlocking the design principle for how hippocampal neurons function with low energy but high computation power. Our new measurements of the brain's memory capacity increase conservative estimates by a factor of 10 to at least a petabyte, in the same ballpark as the World Wide Web."

Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output 'wire' (an axon) from one neuron connects to an input 'wire' (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons.

"When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses," says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. "While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report."

Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses--with more surface area and vesicles of neurotransmitters--are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.

The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.

At first, the researchers didn't think much of this duplication, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which until then had been classified in the field only as small, medium and large.

To do this, the researchers used advanced microscopy and computational algorithms they had developed to image rat brains and to reconstruct the connectivity, shapes, volumes and surface areas of the brain tissue down to the nanometer scale.

The scientists expected the paired synapses to be roughly similar in size, but were surprised to discover that they were nearly identical.

"We were amazed to find that the difference in the sizes of the pairs of synapses were very small, on average, only about eight percent different in size. No one thought it would be such a small difference. This was a curveball from nature," says Bartol.

Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.

It was already known that synapse sizes span a factor of 60 from smallest to largest, and that most synapses are small.

But armed with the knowledge that synapses of every size can differ in increments as small as eight percent across that 60-fold range, the team determined there could be about 26 distinguishable categories of synapse size, rather than just a few.
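As a rough back-of-envelope check on that figure (the paper's own analysis uses signal detection theory and is more careful), one can ask how many categories fit into a 60-fold range if neighboring sizes must differ by about twice the eight percent variability to be told apart; the two-standard-deviation spacing here is an assumption for illustration, not the paper's method:

```python
import math

size_range = 60  # largest synapse is ~60x the smallest
cv = 0.08        # ~8 percent variability between paired synapses

# Assumption: two sizes are distinguishable when they differ by roughly
# two standard deviations, so successive categories are spaced by about
# 2*cv on a logarithmic scale across the full 60-fold range.
n_categories = math.log(size_range) / (2 * cv)
print(round(n_categories))  # -> 26
```

The sketch only shows why fine-grained increments multiply the category count: halving the variability would roughly double the number of distinguishable sizes.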

"Our data suggests there are 10 times more discrete sizes of synapses than previously thought," says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 "bits" of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.

"This is roughly an order of magnitude of precision more than anyone has ever imagined," says Sejnowski.

What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.

"We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses," says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.

The researchers calculated that for the smallest synapses, about 1,500 signaling events (roughly 20 minutes' worth of activity) are needed to produce a change in size and strength, while for the largest synapses only a couple hundred events (1 to 2 minutes) produce a change.

"This means that every 2 or 20 minutes, your synapses are going up or down to the next size. The synapses are adjusting themselves according to the signals they receive," says Bartol.

"Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers," says Harris. "The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms." Harris adds that the findings suggest more questions to explore, for example, if similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.

"The implications of what we found are far-reaching," adds Sejnowski. "Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us."

The findings also offer a valuable explanation for the brain's surprising efficiency. The waking adult brain runs on only about 20 watts of continuous power, as much as a very dim light bulb. The Salk discovery could help computer scientists build ultraprecise but energy-efficient computers, particularly ones that employ "deep learning" and artificial neural nets, techniques capable of sophisticated learning and analysis such as speech recognition, object recognition and translation.

"This trick of the brain absolutely points to a way to design better computers," says Sejnowski. "Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains." 

Story Source:

  • The above post is reprinted from materials provided by Salk Institute. Note: Materials may be edited for content and length. 

Journal Reference:

  1. Terrence J. Sejnowski et al. Nanoconnectomic upper bound on the variability of synaptic plasticity. eLife, January 2016. DOI: 10.7554/eLife.10778.001
