Hacking the Brain — Restoring Lost Abilities With the Latest Neurotechnologies
A few weeks ago, I wrote about Ray Kurzweil's wild prediction that in the 2030s, nanobots will connect our brains to the cloud, merging biology with the digital world.
Let's talk about what's happening today.
Over the past few decades, billions of dollars have been poured into three areas of research: neuroprosthetics, brain-computer interfaces and optogenetics.
All three areas of research are already transforming humanity and solving many of the problems that seem to have stumped our natural evolutionary processes.
This post is about the latest developments in these fields — from the most exciting applications today to the most game-changing applications of the future.
Neuroprosthetics, Brain-Computer Interfaces, and Optogenetics
Your brain is composed of 100 billion cells called neurons.
These cells make you who you are and control everything you do, think and feel.
In combination with your sensory organs (e.g., eyes, ears), these systems shape how you perceive the world.
And sometimes, they can fail.
That's where neuroprosthetics come into the picture.
The term "neuroprosthetics" describes the use of electronic devices to replace the function of impaired nervous systems or sensory organs.
They've been around for a while — the first cochlear implant was placed in 1957 to help deaf individuals hear — and since then, over 350,000 have been implanted around the world, restoring hearing and dramatically improving recipients' quality of life.
But the cochlear implant only hints at a very exciting field that researchers call the brain-computer interface, or BCI: the direct communication pathway between the brain (central nervous system, or CNS) and an external computing device.
The vision for BCI involves interfacing the digital world with the CNS for the purpose of augmenting or repairing human cognition.
And how we interface with the CNS is where it becomes interesting.
There are two approaches. The first is physically connecting wires and neurons with microscopic arrays of metallic pins that stick into the brain and electrically stimulate neurons and/or measure their electric potential when they fire.
The second, and far more interesting, approach is the arena of "optogenetics" — controlling neurons with light. Using this mechanism, a light-sensitive molecule is inserted into the cell surface of a neuron (usually through a viral vector). The light-sensitive molecule then allows an outside user to trigger or inhibit the neuron's firing by pulsing a specific frequency of light.
The entire BCI and neuroprosthetics field is just in its infancy today.
To get you thinking about the possibilities, here are a few of my favorite applications illustrating what we can do today.
Where we go in the future is really just mind-blowing.
The Future — Where Brain Research Is Going
As neuroscientist David Eagleman recently pointed out at TED, our experience of reality is constrained by our biology.
This doesn't have to be the case anymore as we develop new ways to send novel inputs or computational capabilities into the brain.
We could add new senses. (Imagine being able to "plug in" to the stock market, to sense how the market was doing.) We could develop wireless, brain-to-brain communication, something called synthetic telepathy, and send messages to each other by thinking them.
Our brains are a platform and the opportunities for new applications are almost endless.
These applications will challenge what it means to be human. And once we, as Ray Kurzweil predicts, connect our neocortices to the cloud, perhaps we'll become something far more than "human" altogether.
Hacking Victims Deserve Empathy, Not Ridicule
Every day for nearly two weeks, Troy Hunt, an Australian Internet security expert, has opened up his computer to find a plea for help from someone on the edge.
“I have contemplated suicide daily for the past week,” one person recently told Mr. Hunt. “My two beautiful children and my wife are keeping me alive. I am very worried that her family and others will find out, making it extremely difficult for her to stay with me.” Another wrote, “I imagine my grown kids finding out, my neighbors, friends, co-workers, and sometimes I just want to end it all before facing something like that.”
Mr. Hunt runs Have I Been Pwned?, a site that lets people determine if their data has been compromised in one of the online security breaches that have made headlines over the last few years. For the victims, most of those breaches resulted in little more than minor frustrations — changing a password, say, or getting a new credit card.
But the theft and disclosure of more than 30 million accounts from Ashley Madison, a site that advertises itself as a place for married people to discreetly set up extramarital affairs, is different. After the hacking, many victims have been plunged into the depths of despair. In addition to those contemplating suicide, dozens have told Mr. Hunt that they feared losing their jobs and families, and they expected to be humiliated among friends and co-workers.
There has been a tendency in the tech commentariat to minimize the Ashley Madison breach. The site has always seemed like a joke and possibly a scheme, and those who fell for it a testament to the Internet’s endless capacity to separate fools from their money.
But the victims of the Ashley Madison hacking deserve our sympathy and aid because, with slightly different luck, you or I could just as easily find ourselves in a similarly sorry situation. This breach stands as a monument to the blind trust many of us have placed in our computers — and how powerless we all are to evade the disasters that may befall us when the trust turns out to be misplaced.
“I feel reticent to blame people for ignorance or the consequences of their actions when they’re simply sitting there at home doing something perfectly reasonable in an environment where there was an expectation set for privacy,” Mr. Hunt told me. “I think what this does is demonstrate that everything you put online may become public.”
There are several steps to take to minimize future damage from hackings like this one. But first, we could all become a bit more tolerant of online lapses; maybe the way to solve the problem of rampant disclosure of private stuff is to strive to look away from the stuff when it leaks — and to give those who’ve been harmed the benefit of the doubt.
Second, we should all learn a little “opsec” — hackers’ jargon for “operational security,” or a guide for conducting yourself online to minimize the possibility of your secrets getting spilled. It wouldn’t hurt the tech industry to help us in that endeavor, building warnings and guidelines into the same machines that are leaking our secrets. Perhaps we should even start teaching opsec in schools.
So, Step 1: Even though it may be difficult, try to give people caught up in this breach the benefit of the doubt. Sure, some Ashley Madison users may have been unsavory, but some were not — and who among us doesn’t have something to hide?
“It’s easy to be snarky about Ashley Madison, but just because it’s unpopular or even immoral, it doesn’t mean this sort of activity shouldn’t be protected,” said Scott L. Vernick, a lawyer who specializes in digital privacy issues at the firm Fox Rothschild. “This gets at fundamental issues like freedom of speech and freedom of association — today it’s Ashley Madison, tomorrow it could be some other group that deserves protection.”
Everyone has some data — probably a lot of it — buried in their vast digital record that they would rather not disclose publicly. That problem will grow; in the last couple of decades, computers have come to function less as office tools than as friends and therapists. The digital world has become a place to offload your deepest fears and desires, to seek discreet counsel and surreptitious amusement under the veil of privacy offered by an LCD screen.
But much of that privacy is an illusion. If hackers can get at our fetishes on Ashley Madison, they can get at anything else — your nude selfies (don’t deny them), your embarrassing taste in music (Nickelback’s early stuff was great), your health records or whatever else you would prefer remained secret.
Given that inevitability, it might be best to approach disclosures like this one by consulting the Golden Rule. When you hear of some new breach, don’t sniff around the pilfered documents for other people’s secrets if you wouldn’t want others to dig into yours. Mr. Hunt’s website, Have I Been Pwned?, abides by this policy; he requires that people verify they own a particular email address before searching his Ashley Madison database.
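Mr. Hunt later extended this privacy-first design to his companion Pwned Passwords service, which lets you check whether a password has appeared in a breach without ever sending the password — or even its full hash — to the server. Here is a minimal sketch of that k-anonymity scheme; the range endpoint is real, while the example password is obviously one nobody should use:

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Check a password against Pwned Passwords without revealing it.

    Only the first 5 hex characters of the SHA-1 hash leave the machine
    (k-anonymity); matching the returned suffixes happens locally.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(times_pwned("password123"))  # a large count means: never use this
```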
Many other search sites are not as scrupulous, which Mr. Hunt said has inspired an army of busybodies to search for everyone they know. But he pointed out that the snoops might not have considered mitigating factors in this hacking. Ashley Madison did not ask users to verify email addresses, which means anyone could have signed up with someone else’s email address. Others may have logged on just once, because they were curious, joking, intoxicated or being ironic. In other words, a positive hit in the Ashley Madison database doesn’t tell the full story. And, anyway, the full story may be none of your business.
“Some people need to get a life,” Mr. Hunt said.
Of course, some people won’t get a life; they’ll want to search for you. That brings us to Step 2 in the plan for better privacy: When you’re online, act as if everything you do is public. If you’re engaging in anything that may one day come back to haunt you, take precautions: Create a fake name, fake email address, perhaps use a different device, and try to separate your underground identity from your true identity.
This is easier than it sounds. A person who goes by the handle thegrugq, the author of a blog called Hacker OPSEC (and whose real identity is, of course, a secret), has published several practical guides that explain how to protect your information online. If we, collectively, were to begin to take online security more seriously, such guides could be taught in schools — imagine a kind of home ec for computer security. It would be even better if our computers somehow warned us when we were violating these practices — say, a pop-up warning if your machine detected you were typing a work address on an adult site.
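To make that last suggestion concrete, here is a toy sketch of such a warning check; the domain list and site categories are hypothetical stand-ins, not part of any real browser API:

```python
# Hypothetical opsec warning; WORK_DOMAINS and the category labels are
# made-up examples, not a real browser or OS feature.
WORK_DOMAINS = {"mycompany.com"}           # assumed per-user configuration
SENSITIVE_CATEGORIES = {"adult", "dating"}

def should_warn(email: str, site_category: str) -> bool:
    """Warn before a work email address is submitted on a sensitive site."""
    domain = email.rpartition("@")[2].lower()
    return domain in WORK_DOMAINS and site_category in SENSITIVE_CATEGORIES

assert should_warn("alice@mycompany.com", "dating")
assert not should_warn("alice@gmail.com", "dating")
```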
Still, thegrugq counseled in an email, these precautions are not foolproof. “Security is a trade-off against efficiency, and that can be very painful,” thegrugq said. “Few people will reduce the ease with which they can do something just because it might have a future benefit (just ask economists)!”
But maybe the dangers will prompt us all to remain vigilant. “True online security is not just defending against compromise, it’s operating under the assumption that compromise will happen,” SwiftOnSecurity, a security expert who assumes the online persona of a security-minded version of the pop star Taylor Swift as a kind of Twitter-based performance art, told me in a private chat. She added: “Your online life will extend across 60+ years. Imagine the changes. Imagine the disasters. Imagine what the world shouldn’t know about you that someday it will.”
Tools for Beating “Anxi-stress”
by Esther Canales Castellanos
Anxiety and stress have become the great afflictions of our time, so much so that one could coin a new term for them: “anxi-stress.”
Healthy levels of “anxi-stress” can be beneficial when we face certain situations that demand immediate action and the deployment of our best resources. But on many occasions it is not necessary to spend so much energy on a task, nor is it productive to worry continually about something whose importance we are blowing out of proportion. These levels of anxiety and stress, even when they fall short of being pathological, keep us from living life serenely and satisfyingly.
If you want to change this tendency in your life, I propose a series of techniques that may prove useful for centering yourself in your present.
First, I invite you to write down the symptoms that tell you that you suffer from “anxi-stress.” These symptoms can be of three types: physical, mental-emotional, and behavioral.
Among the most frequent physical symptoms are aches and pains of every kind: back pain, headaches, stomachaches, and so on. Each person somatizes “anxi-stress” in some way, and that way tends to recur.
Tools that work very well for these physical symptoms include abdominal breathing exercises, gentle exercise, relaxation exercises, and natural remedies such as linden and valerian. The goal is to gradually clear out the stress hormones that have accumulated in the body. Everyone can find their own formula; it simply has to be consistent and, at the same time, rewarding.
As for mental-emotional symptoms, these require deeper work, and the professional help of a psychologist or a coach can be useful. This is because we have grown used to handling the situations of our daily lives in a particular way, and we need someone on the outside to shed light on them and teach us to look at things differently.
Some tools you can use on your own: review every area of your life, one by one, and check whether each is working the way you would like. If not, draw up your own plan so that, little by little, you can reach the goals you set for improving those areas. Small changes are cumulative. Another option is to keep a journal or workbook in which you record your impressions, achievements, positive situations that happen to you, the emotions that different situations stir in you, and so on. All of this helps you gain perspective and put the importance of each element of your life in proportion. Writing may also lead you to discover the true cause of your distress, so that you can work on resolving it.
Finally, it is important to learn to manage your behavioral symptoms. There are behaviors that generate “anxi-stress” and that end up becoming both cause and effect, as in a vicious circle.
Here are some tools we can put into practice.
First, I recommend paying special attention to cultivating your relationships, since they bring us satisfaction.
It is also important to draw up a personal productivity plan in which the point is not to do the greatest number of things but to give priority to quality. This means being truly wise about how we use our time. Sometimes we behave as if time were infinite, scattering our attention across countless tasks. We have to learn to say no and to choose what matters for our goals. To do that, we first have to be aware of our life’s purpose, of what is important to us. Sometimes there are countless things we would like to do; it happens to me all the time, as I have many interests. Deciding what to give up, and in exchange for what, is important, and it requires honest reflection on what we can and cannot do with the time we have.
Another tool I propose is to simplify everything you do as much as possible. When you set out to do something, think about the briefest way to do it without letting quality suffer. I can tell you right now that perfectionism, unless you are a surgeon, is the enemy of your effectiveness and happiness, and the number-one friend of your “anxi-stress.” You cannot control everything; learn to delegate, and stop trying to be perfect or to please everyone.
Above all, the best thing we can do against “anxi-stress” is to focus on the here and now and enjoy it.
Esther Canales Castellanos. Psychologist and Coach
Higher Ed Has Failed Students—Here’s How We Plan to Fix It
Higher education, in general, fails students in three ways.
This isn’t news.
Today, we almost take these challenges as immutable facts, but they don’t have to be. We can shift the tide by changing how and what we teach, and by making the most of technologies that are already here. My organization, Minerva, is one of the few working to address these problems — here are a few solutions we hope will make higher education more effective in the 21st century.
Preparing Students for Life After College
The standard curriculum has three parts: General education, a major, and electives. The problem is, as they are typically taught, none of these is very useful for students after graduation.
General education is supposed to prepare students for life after college, but often it consists of a set of breadth requirements that are neither designed with any particular goal in mind nor are part of a coherent program. The major is typically of no use to students after graduation. (How many economics majors become economists? How many sociology majors become sociologists?) And electives are typically just whatever happens to interest the faculty, with little thought about what is useful for students.
To tackle these problems, we’ve designed our entire curriculum around the goal of imparting “practical knowledge”— knowledge students can use to achieve their goals.
Practical knowledge is broad and generative. Kurt Lewin famously said, “There is nothing as practical as a good theory.” Practical knowledge is not vocational training nor is it focused on pre-professional instruction. Practical knowledge should give students the intellectual foundations to succeed at jobs that don’t even exist yet.
We have intentionally created a general education program during the first year that provides students with a set of cognitive tools they can use in varied situations. After the first year, we’ve designed a set of majors and concentrations that allow students to expand on this knowledge and apply it in more specific, “real-world” contexts. As students progress through the curriculum, they increasingly personalize it to help them achieve their goals.
We want our students to become leaders and innovators, able to adapt to a changing, increasingly global world. Given these goals, we identified the skills and attributes that are key to their future success.
To ensure that what we teach translates to the real world, we devised the capstone experience.
Kicking off in the first semester of junior year, the capstone challenges students to plan a novel solution to an important problem. Then, over the following three semesters, students practice all they have learned from their Minerva experience by carrying out an original capstone project in their chosen field (or fields), in preparation for their transition to the real world.
In addition, in their senior year, each student works with two other students and a professor to design a seminar on a topic of their choosing—and then they take the seminar. For most majors, students take two such seminars. No other university program allows students to personalize their instruction in this way.
Lastly, Minerva students change locations every semester after their first year in San Francisco.
They live and study in Berlin, Buenos Aires, Seoul, Bangalore, Istanbul, and London. We use each city as a campus, taking advantage of local resources and integrating them into the curriculum. This approach broadens the students’ perspectives and extends their learning environment into a wide range of diverse urban contexts.
Say No to In-Class Lectures: Making Learning Active
Traditionally, students read assigned materials and then attend class to hear their professor give a lecture. They take notes, go home, do an assignment, and repeat. This model is backward — that is, students should not be wasting time in the classroom being lectured at by the professor.
In a standard “flipped classroom,” homework is done in class — where the teacher and other students are available as resources — and lectures are provided before class.
This is a good start. But we’ve gone further.
At Minerva, minimal information transmission takes place in class. In our “radical flipped classroom,” we move homework, readings, and lectures to before class and reserve class time for active learning. Students apply the information acquired through lectures and homework in critical thinking, creative thinking, effective communication, and effective interaction. They take part in group problem solving, debates, role-playing exercises, and other activities that engage them.
This is challenging—but in a good way.
Students often prefer a traditional lecture format to active learning because lectures are easy: The student simply writes down what the professor says, memorizes it, and then does well on a test. Moreover, there’s the illusion of learning: The more notes, the more learned. Right? No. The vast majority of what was “learned” is soon forgotten. Active learning solidifies newly acquired knowledge by requiring students actually to use it after they’ve learned it.
Active learning does have an apparent drawback: Less material can be covered than in a traditional lecture format. But this drawback is more apparent than real. If retention is tested three months later, students who took part in active learning typically retain many times as much as students who received the material in just a lecture. Moreover, because active learning focuses on using information, it is an ideal fit to Minerva’s emphasis on practical knowledge.
Minerva’s active learning approach is complemented by programs that test and deepen students’ learning through practical application with community partners.
Whether working with the mayor’s office to reimagine the public use of San Francisco’s Market Street or partnering with an entrepreneurial incubator to evaluate submissions for funding, students are applying what they’ve learned as part of their curriculum.
Technology to Facilitate Learning, Measure Progress and Broaden Experience
All classes at Minerva are taught using a cloud-based system that was developed solely to conduct educational seminars, the Active Learning Forum (ALF).
We use ALF for two main reasons. First, it allows us to teach more effectively and helps students to learn more effectively. In particular, our use of active learning allows us to apply the science of learning systematically. For example, we know that rapid feedback is invaluable; we take advantage of this by recording all classes, which allows faculty to score students and give them feedback soon after class.
Second, ALF allows students to take classes and faculty to teach classes from anywhere in the world. This means that we can have students in the same seminar who are living in different cities and can bring their experiences into class for comparison/contrast exercises. It also means that we can recruit first-rate faculty who can teach from all over the world.
Lowering the Cost and Increasing Student Engagement
Finally, it’s worth taking a step back and considering Minerva in a broader context. As noted at the outset, higher education in general fails students in three ways: First, it does not prepare students to succeed in life after college. Everything we do at Minerva is focused on this goal.
Second, our peer institutions typically charge about four times what we do for tuition.
Because we don’t own buildings, have sports teams (or even a climbing wall!), and so on, we have far fewer expenses and can actually provide a much more intimate, substantially higher quality educational experience at a fraction of the cost.
And, finally, many students drop out, either never completing college or not attending classes (and instead just showing up for the test). We hope to motivate students to continue their journey toward mastery by restricting our classes to small, intimate seminars, by requiring engagement and participation from our students, by raising expectations as opposed to lowering them, and by having students live and travel together all over the world.
Combining rigorous academics, small seminars, and four years of immersive global study, we have built Minerva for the 21st century. We have not only changed what students learn, but how they learn. However, the only measure of our success will be the success of our students, not simply doing well in school but also doing well in life after graduation — professionally and personally.
Hyperconnection: A Problem of the Internet Age
by Nse. Marita Castro
Some people wonder whether it is normal to feel such a strong desire to stay connected to the social networks that capture more and more individuals every day.
A study published in the journal Psychological Science reported that the urge to interact on social media is harder to resist than the urges produced by cigarettes, alcohol, and sex.
Wilhelm Hofmann, a renowned researcher of self-control and of its importance to our quality of life, conducted an experiment with his team at the University of Chicago’s Booth School of Business, together with Kathleen Vohs of the University of Minnesota, to measure this impulse in 205 people between the ages of 18 and 85 in the German city of Würzburg.
The participants led lifestyles very similar to those of typical individuals in the United States. All of them received BlackBerrys and, for one week, had to identify and report the desires they experienced and how strong they were, including those triggered by the arrival of an email or an update from a social network.
Overall, cigarettes, food, sleep, and sex were the strongest cravings experienced, but when it came to resisting them, social media turned out to be the weakest link.
The difference is that in virtual environments the immediate pleasure comes with no effort and no cost, and in general none shows any short-term downside. In the long run, however, failing to control their use leaves people unable to manage their time, bringing serious problems, from the personal and familial to the professional.
The research also yielded other findings. It showed that as the day goes on the capacity for self-control declines, since fatigue wears it down, and that when someone managed to resist one desire, it later became harder not to give in to subsequent temptations.
These points had already been observed in earlier research, such as that of William Hedgcock of the University of Iowa, who showed that self-control is a limited resource because of the high energy cost it imposes on the brain. That is why, after a day full of situations that demand it, we commonly find self-control harder to exercise, may even fail to resist temptations, and tend to act less calmly in circumstances that call for calm.
The rise of hyperconnection has caught the attention not only of scientists but also of senior executives at technology companies, who told The New York Times of their concern about the negative consequences of this trend.
Stuart Crabb, a director at Facebook, advised disconnecting and, from time to time, setting aside computers and smartphones. He argued that people should watch the effect that excessive use has on their performance and their relationships.
For his part, Scott Kriens, chairman of Juniper Networks, one of the largest internet infrastructure companies, said that the powerful pull of these devices mainly reflects primitive human longings to connect and interact, but that those longings need to be managed so that users’ lives are not overwhelmed.
Likewise, Richard Fernandez, an executive at Google, noted that the risks of being too “hooked” on electronic devices are great. In his words, “consumers need an internal compass to balance the capabilities technology offers them for working and connecting against the quality of the life they live offline.”
In every case, excessive connection comes down to how hard it is to “disconnect”: being online delivers immediate pleasure, and switching off our devices sets off a sense of alarm in the brain, which fears it may be missing important information.
Our brain is built to evaluate stimuli from the outside world quickly, weighing their likely short-term consequences, something essential to our species’ survival in its early days. Sitting down to ponder whether a lion had good or bad intentions was not a viable option, nor was weighing whether or not we felt like eating in a world where famine was commonplace.
From our earliest days, our banks of emotional memories assess almost instantly whether an action has short-term benefits, which often means the possible long-term consequences are not taken into account. For the brain, weighing those consequences is a much longer and more energy-hungry path, since it requires activating areas of the prefrontal cortex, notably the dorsolateral prefrontal and anterior cingulate cortices.
That is why, when a rapid emotional response delivers immediate pleasure with no consequence in the here and now, it is very difficult to assess whether the action will bring us problems.
It is in moments when we are not emotionally caught up in the stimulus that we can weigh the possible future consequences and, in the case of social networks, also decide how much real time we will give to the activity.
I invite you to do a self-assessment, recording for two weeks the number of hours you devote to the various internet media.
This self-assessment, along with a review of the long-term effects, should be written down and worked through several times, so that what you reflected on and concluded becomes consolidated in memory and can be drawn on in future situations.
The capacity for reflection contributes to self-control and to the kind of perspective that lets us discern better and understand the pros and cons of being hyperconnected. Technology is marvelous and gives us many possibilities, but we must keep it in healthy balance with our lives; this is essential if it is to become a tool that contributes positively. Part of our education needs to address the capacity for self-regulation so that our lives unfold in harmony.
This article is free to use; we ask only that you credit the author and source (Asociación Educar).
How a Memory Is Made
Transcription factor levels dictate which neurons in a network store a memory.
[Image: the right hemisphere of a mouse brain stained with DAPI (blue), showing neurons in the insular cortex]
When a nasty taste makes the stomach turn, neurons in the brain’s insular cortex fire up to form a memory of the foul flavor. But only a subset of cells are involved in storing that memory. In mice learning to dislike saltwater, new memories favor neurons with high levels of the cyclic-AMP-response-element-binding protein (CREB), according to a study published today (November 13) in Current Biology.
A team of researchers at the University of California, Los Angeles (UCLA) examined the development of a conditioned taste aversion response in mice that overexpressed CREB in a subset of insular cortex neurons. Precise inactivation of the CREB-expressing neurons revealed that these cells were required for the mice to remember the bad-taste experience. CREB, which activates the transcription of genes that make neurons more excitable, may play a broad role in regulating memory allocation in the mammalian brain.
“There’s a huge amount of work on molecular mechanisms of memory storage, and there’s relatively little known about the processes that are important for memory allocation—which cells really code the memory,” said neuroscientist Dietmar Kuhl of the University of Hamburg in Germany, who was not involved in the study.
The idea that CREB determines which neurons form a memory makes sense, added Mauro Costa-Mattioli of Baylor College of Medicine. “The neurons are competing for the memory trace—not everyone can get it,” he said. “Those that are more excitable have more chances.”
The study builds on previous work by UCLA’s Alcino Silva and his team, which demonstrated in 2009 that CREB levels regulate the allocation of fear memory in a different brain region, the amygdala. “The amygdala and the [insular cortex] are two dramatically different structures in the brain,” said Silva. “We were hoping that the phenomenon of memory allocation was universal, but we had no way of knowing until this [study].”
To test the role of CREB in the insular cortex of mice, Silva and his colleagues used a viral vector to introduce two genes into excitatory neurons in this brain region. One gene directed the cells to churn out green fluorescent protein (GFP)-tagged CREB, while the other encoded a “designer receptor” called hM4Di, which is activated by a drug called clozapine-N-oxide (CNO). The drug has no effect on normal cells; its only activity is the selective silencing of neurons that express hM4Di.
Then, the researchers performed a conditioned taste aversion test: they presented the mice with both normal water and saltwater made with sodium chloride—mice, like humans, tend to enjoy the taste of salt. In another training session, the saltwater was instead made with lithium chloride, which induces nausea and discomfort. Three days later, when the animals were again given a choice between water with and without sodium chloride, mice that had tasted lithium chloride learned to avoid the saltwater. But in mice injected with CNO before the test, the CREB-expressing neurons were silenced, and the animals continued to prefer saltwater.
To test whether CREB mediates memory allocation by increasing neuronal activity, the team examined the expression of arc, which encodes a cytoskeleton-associated protein upregulated in excitatory neurons. In the insular cortices of mouse brains harvested after the conditioned taste aversion test, fluorescence in situ hybridization (FISH) showed that arc expression colocalized with GFP-CREB expression in the same cells, indicating bursts of activity in the neurons during memory retrieval.
“It never ceases to amaze me that we know enough about memory to be able to channel specific memories to specific cells,” said Silva. “We are able to genetically engineer the insular cortex so that we can determine ahead of time which neurons will be engaged in which memory. To me, that’s really magical.”
On the other hand, he said, these experiments are “limited, because in the real world, real memory is not about single strong memories.” Rather, said Silva, we remember events as “strings” of individual sensory memories. He said his team is now working to understand how these strings are formed, and CREB clearly plays a part.
For Costa-Mattioli, the demonstration that similar processes control memory allocation in different parts of the brain is “very tantalizing. . . . Perhaps this mechanism allows the strengthening of connections between brain areas.”
Y. Sano et al., “CREB regulates memory allocation in the insular cortex,” Current Biology, DOI: 10.1016/j.cub.2014.10.018, 2014.
How Aged Neurons In a Dish Can Accelerate Longevity Research
Aging insidiously leaves its mark on our brains.
With age, our well-oiled neuronal machinery slowly breaks down: gene expression patterns turn wacky, the nuclear membrane disintegrates, and neatly organized molecules inside the cells break out of their segregated compartments, turning the intracellular environment into a maladaptive, muddled molecular soup.
Yet the aging phenomenon has been very tough to study. Historically, scientists relied on fast-aging animal models — from fruit flies to primitive worms to mice — to tease out the biological mechanisms of aging, especially for tissues that are hard to biopsy from patients, including the brain.
Despite the value of these reductionist models, many life-lengthening treatments fail in clinical trials, and the leap from flies to mice to men remains insurmountable.
With the rise of induced pluripotent stem cell (iPSC) technology, scientists have been able to transform patients’ skin cells into iPSCs, which can then be coaxed into neurons for further study.
It’s a powerful technique: iPSCs derived from patients with Parkinson’s disease, for example, contain the same genetic underpinnings of the disease, which let scientists cheaply and efficiently test out theories and potential drug treatments—on human cells within the tightly controlled environment of a culture dish.
But the conversion process essentially turns back the clock: because iPSCs resemble early stage embryonic development, even when taken from an elder donor, they — and the neurons generated from them — lose their “aged” signature. Previous research found that their epigenetic landscapes, which dictate what genes are expressed where via small chemical attachments to the DNA, are reset to match that of a younger cell’s profile during the reprogramming process.
For example, genes that promote cell division are jacked up, whereas genes involved in inflammation — a strong correlate of aging — are tuned down. This discrepancy hardly makes them a good model of their aged donors.
It’s a thorny problem that’s plagued the field for decades. But now, in a study published last week in Cell Stem Cell, a team led by Salk Institute professor Dr. Fred “Rusty” Gage has offered a clever solution.
The answer is to eschew iPSCs altogether, and take the road less traveled.
Using a method first discovered a few years ago at Stanford University, the team realized that with a series of careful biochemical tweaks, they could directly coax skin cells into fully functional neurons. Since these “induced neurons” skipped the embryonic stage, they reasoned, maybe their age counters were not reset.
It was quite the gamble.
No one knew if — and how — cells created in this way differed from neurons that naturally aged in the human brain, says lead author Dr. Jerome Mertens.
To find out, the team took skin biopsies from 19 volunteers aged from infancy to 89 years old, and transformed those skin cells into neurons using both the iPSC method and the direct conversion approach.
They then compared these “test tube neurons” to neurons obtained from autopsies of age-matched controls by examining their transcriptomes — a bird’s-eye view of gene expression patterns that change with age.
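For readers curious what such a comparison looks like in practice, here is a minimal sketch of the general analysis — correlating each gene’s expression with donor age — using placeholder data in place of the study’s real expression matrix:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_donors, n_genes = 19, 1000            # 19 donors, as in the study
expr = rng.random((n_donors, n_genes))  # placeholder expression matrix
ages = rng.uniform(1, 89, n_donors)     # placeholder donor ages

# Correlate each gene's expression with donor age; genes with strong
# correlations (in real data) are candidate aging markers.
rho = np.array([stats.spearmanr(expr[:, g], ages)[0] for g in range(n_genes)])
candidates = np.argsort(-np.abs(rho))[:20]  # top 20 age-associated genes
```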
As expected, iPSC-generated neurons lost their life history, reverting back to a baby-like state.
In stark contrast, neurons generated by direct conversion significantly differed in their transcriptomes based on the donor’s age. “They actually show changes in gene expression that have been previously implicated in brain aging,” said Mertens.
It’s a first, and it could be a game changer.
He pointed to a protein called RanBP17, which shuttles proteins in-and-out of the nucleus. The decline of RanBP17 was previously hypothesized to play a role in brain aging, but the theory was difficult to model in animals. However, to the team’s excitement, this abnormal decrease was recapitulated in neurons directly converted from older patients.
Other cellular assays also showed that the lab-grown neurons retained telltale signs of aging. With age, the fragile membrane that separates the nucleus from other cellular components begins to fail, and proteins — no longer confined to their physiological working space — run amok.
This process could drive aging or increase our susceptibility to age-related neurodegenerative disorders, such as ALS and Alzheimer’s disease, but we don’t yet know, said Mertens, who is eager to put the new system to use. “With this new culture system, we can now directly test these ideas,” he said.
“The results are obviously going to have an impact,” agrees Dr. John Gearhart, an expert in regenerative medicine at the University of Pennsylvania, who wasn’t involved in the study.
Although the team only tried their cell-transforming method in neurons, Gage expects it to work for other organs as well, allowing scientists to create aged heart or liver cells. The technique could also be expanded to highly-structured 3D cultures such as organoids, and used to model aging human organs.
"We expect that the paradigm of direct conversion into age-equivalent cells can be very important for future studies of age-related diseases," said Gage.
Image Credit: Shutterstock.com
Let’s Shape AI Before AI Shapes Us
It’s time to have a global conversation about how AI should be developed
How Artificial Immune Systems May Be the Future of Cybersecurity
2015 was a year of jaw-dropping hacks.
From CIA director John Brennan’s private email to Sony, from the IRS to CVS, from Target to the notorious Ashley Madison, millions of people suffered from cybersecurity breakdowns across industries. According to the Ponemon Institute, the average cost of damages from data breaches in the US hit a staggering $6.5 million this year, up $600,000 from 2014.
Untallied are the personal costs to the hackers’ victims: the stress associated with leaked phone numbers, credit card information, social security numbers, and tax information, and the time spent getting their lives back on track.
The sophistication and scope of cyber threats are expected to further escalate, yet our defenses remain rudimentary, even medieval. Overwhelmingly, the current strategy is to define the threats, and then build strong defensive walls focused on keeping nefarious agents, viruses or programs out.
Once hackers tunnel through, however, our information is ripe for the picking. Without any means of tracking hackers as they plow through our systems, current defenses are incapable of sounding alarms until it’s too late.
What’s more, security walls are useless against hacks that arise from within, such as those initiated by disgruntled employees or through social engineering. After all, how do you find something when you don’t know what you’re looking for?
Yet according to cybersecurity company Darktrace, we are far from fighting a losing war. All we need is to look to biology for a little inspiration.
The battle between virus and host has played out inside our bodies for millions of years. Through evolution, nature has crafted us into highly sophisticated forts that block off outside invaders and viciously attack inside threats.
These are epic battles with multiple fronts. The skin, a highly sophisticated barrier, wards off most external insults trying to get in. Similar to a digital firewall, it’s tough, adaptive, and constantly renewed to reinforce its strength.
Yet all walls crumble.
In cybersecurity, a lost wall most likely means a lost battle. Biowarfare paints a different picture altogether.
Once nefarious agents break through, our internal defense — the immune system — kicks into high gear. In a way, our bodies are highly functional police states: the immune system constantly monitors our internal environment, ensuring that its billions of molecular citizens smoothly carry out their respective roles. It learns and memorizes what’s normal, so when something strange happens, however sophisticated or novel, it knows to react.
The similarity between cyber and biological warfare is tough to ignore: in both cases, we deal with evolving adversaries that grow in complexity and gradually vary their means of attack. And because the immune system discriminates between “self” and “other,” it is so effective that most of the time we aren’t even consciously aware that we’re under siege.
The biological immune system obviously works. So why not extend the metaphor a bit further and build a cyber immune system to protect our digital selves?
Since the early 1980s, computer scientists have toyed around with the idea of cyberimmunity. But at that time, AI still wasn’t up to the task — no algorithms could adaptively learn complex patterns and extrapolate to new ones.
With recent leaps forward in AI and deep learning, that’s set to change. Using these algorithms, scientists are starting to replicate the two main features of an adaptive immune system — learning and memory.
“Our system is self-learning, understanding what normal looks like and detecting emerging anomalies in real-time,” explains Darktrace in a promotional video.
Here’s how it works: The algorithms automatically model every device, user, and network within an enterprise, allowing the system to build a full understanding of how information normally flows. From this model, the program generates a “threat visualization interface” that topographically maps out the largest threats, letting cybersecurity analysts focus on the most serious or in-progress threats.
Like the immune system, Darktrace deals with a lot of noise from a system’s various components. The body handles this with a threshold response. When an injury reaches a certain level of severity, for example, the immune system activates cascades of molecular signals that recruit the cavalry — specialized immune cells such as the aptly named “killer T cell” — to the site of injury to clean up any potential infection.
A cyberimmune system works a little bit differently. To learn what’s normal, it silently sits in the background and monitors things for a few weeks before it’s ready to detect strange happenings. Rather than flagging all suspicious activity, which could lead to overwhelming false-positives, it churns out advice based on probabilities, continuously updating its results in the light of changing evidence.
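Darktrace’s actual models are proprietary, but a minimal sketch of the general idea — quietly learn a per-metric baseline during an observation period, then score deviations instead of hard-blocking — might look like this:

```python
import math

class OnlineBaseline:
    """Streaming mean/variance (Welford's algorithm) for one metric,
    e.g. the bytes a given device sends per hour."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def anomaly_score(self, x: float) -> float:
        """Standard deviations from the learned baseline; 0 while learning."""
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if std == 0.0 else abs(x - self.mean) / std

# Learn quietly on "normal" traffic first, then advise rather than block.
baseline = OnlineBaseline()
for observed in [120, 95, 130, 110, 105]:
    baseline.update(observed)
print(baseline.anomaly_score(900))  # a large score -> surface to an analyst
```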
The system can also automatically cut off infiltrating agents from sensitive information, setting up a “honey pot” scenario where it traps the hacker and observes how they behave — what information they’re after, how they work, and maybe even where they came from.
So far, according to The Long+Short, Darktrace works pretty well at picking out suspicious activity, including password compromises, anomalous internal file transfers, and infections with ransomware.
That said, the system isn’t perfect.
And some of that is due to inherent faults of the biological immune system that it was based on. Autoimmunity is an obvious one — in some cases, the infectious agent is so similar to components of our own body that the immune system loses its ability to distinguish between self and other. Instead, as it delivers its brutal attacks, it inadvertently also damages our own organs.
Along the same lines, could cyber autoimmunity ever become an issue?
There are already cases of anti-virus software identifying core computer code as malicious malware and shutting it down. As hackers become increasingly sophisticated in their attack strategy, it may be possible to change bits of the network so that they look suspicious and are blocked off by the cyberimmune algorithms. Like the HIV virus, which seeks out and shuts down our immune system, hackers may even opt to directly attack cyberimmunity rather than circumvent it.
The results could be just as deadly.
In the end, security will always be a cat-and-mouse game, and nothing is 100% safe. But having an automated learning system that continuously finds and quarantines new threats definitely gives us the upper hand. It’s likely Darktrace is simply a step towards future, more sophisticated biomimetic cybersecurity systems.
Image Credit: Shutterstock.com