Neurociencia | Neuroscience





Artificial Enzymes from Artificial DNA [1015]

by System Administrator - Wednesday, 10 December 2014, 19:30

Artificial Enzymes from Artificial DNA Challenge Life As We Know It


In the decade or so since the Human Genome Project was completed, synthetic biology has grown rapidly. Impressive advances include the first bacteria to use a chemically-synthesized genome and creation of a synthetic yeast chromosome.

Recently, scientists from the MRC Laboratory of Molecular Biology in Cambridge, led by Dr. Philip Hollinger, reported creating the first completely artificial enzymes that are functional. The breakthrough was published in the journal Nature and builds on prior success by the group in creating several artificial nucleotides.


Nucleotides, the building blocks of DNA and RNA, consist of a phosphate group, one of five nitrogenous bases (adenine, cytosine, guanine, thymine, or uracil), and a sugar (deoxyribose in DNA and ribose in RNA).

In their previous studies, Dr. Hollinger’s group investigated whether nucleotides that don’t exist in nature (to our knowledge) could function like natural nucleotides. They designed six artificial nucleotides, keeping the phosphate group and one of the five nitrogenous bases, but switching between different sugars or even entirely different molecules.
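The substitution the group made can be pictured as a simple data model. A minimal sketch in Python (the `Nucleotide` class and the sugar names are illustrative placeholders, not from any real bioinformatics library):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Nucleotide:
    base: str         # adenine, cytosine, guanine, thymine, or uracil
    sugar: str        # deoxyribose (DNA), ribose (RNA), or a synthetic substitute
    has_phosphate: bool = True

# A natural DNA nucleotide:
dna_a = Nucleotide(base="adenine", sugar="deoxyribose")

# An XNA keeps the phosphate group and the base but swaps the sugar
# for a synthetic molecule (the name below is just a placeholder):
xna_a = Nucleotide(base="adenine", sugar="synthetic-ring")

# Same base and phosphate, different backbone sugar:
assert xna_a.base == dna_a.base and xna_a.sugar != dna_a.sugar
```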

The group found they could incorporate the artificial nucleotides, called xeno-nucleic acids or XNAs, into DNA and they behaved like “regular” DNA. The XNAs could be copied, could encode information and transfer it, and even undergo Darwinian natural selection—naturally occurring nucleic acids no longer seemed so special.

Moving forward, the scientists were interested in whether XNAs could function as enzymes, the molecules that catalyze biochemical reactions in cells. They got this idea because RNA sometimes functions as an enzyme—the 1982 discovery that RNA could encode information, replicate itself and catalyze reactions filled in a big gap in our understanding of life and how it may have started on our planet.

Dr. Hollinger’s group created four new XNAzymes that can cut and join RNA, much as natural nucleases and ligases do. One of the XNAzymes even works on XNAs, something we think may have happened with RNA in the early days of life on Earth.

Besides their novelty, these results suggest exciting possibilities. First, even though XNAs have never been found in nature, that doesn’t mean they don’t exist.


It is possible that on other planets—in our own galaxy or others—life isn’t restricted to DNA and RNA as we know them here on Earth. Under the right conditions, intelligent life that uses XNAs or even more exotic molecules could come into existence. That’s quite an eye-opener and something we need to keep in mind as we probe other worlds for life.

Further, Dr. Hollinger and other scientists also believe that XNAzymes could have therapeutic uses.

Because they are not naturally occurring, our bodies haven’t evolved a system to break down XNAzymes. If researchers could design an XNAzyme that can degrade specific RNA, then targeting an overactive cancer gene becomes possible. They can even be designed to target the DNA and RNA that viruses use to infect a cell and force it to make more viruses.

However, before any of these possibilities are realized, there are many questions to answer.

For example, all the work done by Dr. Hollinger’s group has been in test tubes—can they get similar results in live cells? Also, since the XNAzymes are not degraded by the cell, can they design a system to turn the XNAzymes on and off? An unnatural, long-lasting molecule that cannot be degraded risks serious unintended consequences if there is no system to regulate it.

Whatever becomes of XNAs and XNAzymes as therapeutic agents, the results published thus far are quite exciting. Even if we don’t find alien beings with an XNA genome, will our technology allow us to create whole living systems using these and perhaps other novel genetic material we haven’t created yet?

For further reading and discussion, the researchers behind this work (including Dr. Hollinger) recently participated in a Reddit AMA. Or, to discuss the merits and risks of creating synthetic life, check out our discussion post on the subject.



Artificial Intelligence Evolving From Disappointing to Disruptive [1002]

by System Administrator - Tuesday, 25 November 2014, 20:33

Summit Europe: Artificial Intelligence Evolving From Disappointing to Disruptive


Neil Jacobstein, Singularity University’s co-chair in AI and Robotics, has been thinking about artificial intelligence for a long time, and at a recent talk at Summit Europe, he wanted to get a few things straight. There’s AI, and then there’s AI.

Elon Musk recently tweeted this about Nick Bostrom’s book, Superintelligence: “We need to be super careful with AI. Potentially more dangerous than nukes.”

AI has long been a slippery term, its definition in near-constant flux. Ray Kurzweil has said AI is used to describe human capabilities just out of reach for computers—but when they master these skills, like playing chess, we no longer call it AI.

These days we use the term to describe machine learning algorithms, computer programs that autonomously learn by interacting with large sets of data. But we also use it to describe the theoretical superintelligent computers of the future.

According to Jacobstein, the former are already proving hugely useful in a range of fields—and aren’t necessarily dangerous—and the latter are still firmly out of reach.

The AI hype cycle has long been stuck at the stage of overpromise and underperformance.

Computer scientists predicted a computer would beat the world chess champion in a decade—instead it took forty years. But Jacobstein thinks AI is moving from a long period of disappointment and underperformance to an era of disruption.

What can AI do for you? Jacobstein showed participants a video of IBM’s Watson thoroughly dominating two Jeopardy champions—not because folks haven’t heard about Watson, but because they need to get a visceral feel of its power.

Jacobstein said Watson and programs like it don’t demonstrate intelligence that is “broad, deep, and subtle” like human intelligence, but they are a multi-billion dollar fulcrum to augment a human brain faced with zettabytes of data.

Our brains, beautiful and capable as they are, have major limitations that machines simply don’t share—speed, memory, bandwidth, and biases. “The human brain hasn’t had a major upgrade in over 50,000 years,” Jacobstein said.

Now, we’re a few steps away from having computer assistants that communicate like we do on the surface—speaking and understanding plain English—even as they manage, sift, and analyze huge chunks of data in the background.

Siri isn’t very flexible and still makes lots of mistakes, often humorous ones—but Siri is embryonic. Jacobstein thinks we’ll see much more advanced versions soon. In fact, with $10 million in funding, Siri’s inventors are already working on a sequel.

And increasingly, we’re turning to the brain for inspiration. IBM’s Project SyNAPSE, led by Dharmendra Modha, released a series of papers—a real tour de force according to Jacobstein—outlining not just a new brain-inspired chip, but a new specially tailored programming language and operating system too.

These advances, among others highlighted by Jacobstein, will be the near future of artificial intelligence, and they’ll provide a wide range of services across industries from healthcare to finance.

But what of the next generation? A better understanding of the brain driven by advanced imaging techniques will inspire the future’s most powerful systems: “We’ll understand the human brain like we understand the kidneys and heart.”


If you lay out the human neocortex, the part of the brain responsible for higher cognition, it’s the size of a large dinner napkin. Imagine building a neocortex outside the confines of the skull—the size of this room or a city.

Jacobstein thinks reverse engineering the brain in silicon isn’t unreasonable. And then we might approach the kind of superintelligence Musk is worried about. Might such a superintelligent computer become malevolent? Jacobstein says it’s a realistic concern.

“These new systems will not think like we do,” he said, “And that means we’ll have to exercise some control.”

Even if we don’t completely understand them, we’re still morally responsible for them—like our children—and it’s worth being proactive now. That includes planning diverse, layered controls on behavior and rigorous testing in a “sand box” environment, segregated and disconnected from other computers or the internet.

Ultimately, Jacobstein believes superintelligent computers could be a great force for good—finding solutions to very hard problems like energy, aging, or climate change—and that we have a reasonable shot at securing these benefits without the risks coming to pass.

“We have a very promising future ahead,” Jacobstein said, “I encourage you to build the future boldly, but do it responsibly.”



Artificial intelligence will replace traditional software approaches, Alphabet's Eric Schmidt says [1450]

by System Administrator - Thursday, 24 September 2015, 17:53

Artificial intelligence will replace traditional software approaches, Alphabet's Eric Schmidt says


Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’ [1579]

by System Administrator - Sunday, 15 November 2015, 22:29

Robots manufactured by Shaanxi Jiuli Robot Manufacturing Co on display at a technology fair in Shanghai Photograph: Imaginechina/Corbis

Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’

by Charles Arthur

A new report suggests that the marriage of AI and robotics could replace so many jobs that the era of mass employment could come to an end

If you wanted relief from stories about tyre factories and steel plants closing, you could try relaxing with a new 300-page report from Bank of America Merrill Lynch which looks at the likely effects of a robot revolution.

But you might not end up reassured. Though it promises robot carers for an ageing population, it also forecasts huge numbers of jobs being wiped out: up to 35% of all workers in the UK and 47% of those in the US, including white-collar jobs, seeing their livelihoods taken away by machines.

Haven’t we heard all this before, though? From the luddites of the 19th century to print unions protesting in the 1980s about computers, there have always been people fearful about the march of mechanisation. And yet we keep on creating new job categories.

However, there are still concerns that the combination of artificial intelligence (AI) – which is able to make logical inferences about its surroundings and experience – married to ever-improving robotics, will wipe away entire swaths of work and radically reshape society.

“The poster child for automation is agriculture,” says Calum Chace, author of Surviving AI and the novel Pandora’s Brain. “In 1900, 40% of the US labour force worked in agriculture. By 1960, the figure was a few per cent. And yet people had jobs; the nature of the jobs had changed.

“But then again, there were 21 million horses in the US in 1900. By 1960, there were just three million. The difference was that humans have cognitive skills – we could learn to do new things. But that might not always be the case as machines get smarter and smarter.”

What if we’re the horses to AI’s humans? To those who don’t watch the industry closely, it’s hard to see how quickly the combination of robotics and artificial intelligence is advancing. Last week a team from the Massachusetts Institute of Technology released a video showing a tiny drone flying through a lightly forested area at 30mph, avoiding the trees – all without a pilot, using only its onboard processors. Of course it can outrun a human-piloted one.

MIT has also built a “robot cheetah” which can run and jump over obstacles, untethered. Add to that the standard progress of computing, where processing power doubles roughly every 18 months (or, equally, prices for capability halve), and you can see why people like Chace are getting worried.
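That doubling rate compounds quickly. A back-of-the-envelope sketch in Python (purely illustrative arithmetic, not a claim about any specific hardware):

```python
def growth_factor(years, doubling_months=18.0):
    """Capability multiple implied by a doubling every `doubling_months` months."""
    return 2.0 ** (years * 12.0 / doubling_months)

for years in (3, 10, 15):
    print(f"after {years} years: ~{growth_factor(years):.0f}x")
```

A doubling every 18 months compounds to roughly a hundredfold gain per decade, which is why a decade of "standard progress" can make yesterday's research demo routine.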

Drone flies autonomously through a forested area

But the incursion of AI into our daily life won’t begin with robot cheetahs. In fact, it began long ago; the edge is thin, but the wedge is long. Cooking systems with vision processors can decide whether burgers are properly cooked. Restaurants can give customers access to tablets with the menu and let people choose without needing service staff.

Lawyers who used to slog through giant files for the “discovery” phase of a trial can turn it over to a computer. An “intelligent assistant” called Amy will, via email, set up meetings autonomously. Google announced last week that you can get Gmail to write appropriate responses to incoming emails. (You still have to act on your responses, of course.)

Further afield, Foxconn, the Taiwanese company which assembles devices for Apple and others, aims to replace much of its workforce with automated systems. The AP news agency gets news stories written automatically about sports and business by a system developed by Automated Insights. The longer you look, the more you find computers displacing simple work. And the harder it becomes to find jobs for everyone.

So how much impact will robotics and AI have on jobs, and on society? Carl Benedikt Frey, who with Michael Osborne in 2013 published the seminal paper The Future of Employment: How Susceptible Are Jobs to Computerisation? – on which the BoA report draws heavily – says that he doesn’t like to be labelled a “doomsday predictor”.

He points out that even while some jobs are replaced, new ones spring up that focus more on services and interaction with and between people. “The fastest-growing occupations in the past five years are all related to services,” he tells the Observer. “The two biggest are Zumba instructor and personal trainer.”

Frey observes that technology is leading to a rarefaction of leading-edge employment, where fewer and fewer people have the necessary skills to work in the frontline of its advances. “In the 1980s, 8.2% of the US workforce were employed in new technologies introduced in that decade,” he notes. “By the 1990s, it was 4.2%. For the 2000s, our estimate is that it’s just 0.5%. That tells me that, on the one hand, the potential for automation is expanding – but also that technology doesn’t create that many new jobs now compared to the past.”

This worries Chace. “There will be people who own the AI, and therefore own everything else,” he says. “Which means homo sapiens will be split into a handful of ‘gods’, and then the rest of us.

“I think our best hope going forward is figuring out how to live in an economy of radical abundance, where machines do all the work, and we basically play.”

Arguably, we might be part of the way there already; is a dance fitness programme like Zumba anything more than adult play? But, as Chace says, a workless lifestyle also means “you have to think about a universal income” – a basic, unconditional level of state support.

Perhaps the biggest problem is that there has been so little examination of the social effects of AI. Frey and Osborne are contributing to Oxford University’s programme on the future impacts of technology; at Cambridge, Observer columnist John Naughton and David Runciman are leading a project to map the social impacts of such change. But technology moves fast; it’s hard enough figuring out what happened in the past, let alone what the future will bring.

But some jobs probably won’t be vulnerable. Does Frey, now 31, think that he will still have a job in 20 years’ time? There’s a brief laugh. “Yes.” Academia, at least, looks safe for now – at least in the view of the academics.


Smartphone manufacturer Foxconn is aiming to automate much of its production facility. Photograph: Pichi Chuang/Reuters

The danger of change is not destitution, but inequality

Productivity is the secret ingredient in economic growth. In the late 18th century, the cleric and scholar Thomas Malthus notoriously predicted that a rapidly rising human population would result in misery and starvation.

But Malthus failed to anticipate the drastic technological changes - from the steam-powered loom to the combine harvester - that would allow the production of food and the other necessities of life to expand even more rapidly than the number of hungry mouths. The key to economic progress is this ability to do more with the same investment of capital and labour.

The latest round of rapid innovation, driven by the advance of robots and AI, is likely to power continued improvements.

Recent research led by Guy Michaels at the London School of Economics looked at detailed data across 14 industries and 17 countries over more than a decade, and found that the adoption of robots boosted productivity and wages without significantly undermining jobs.

Robotisation has reduced the number of working hours needed to make things; but at the same time as workers have been laid off from production lines, new jobs have been created elsewhere, many of them more creative and less dirty. So far, fears of mass layoffs as the machines take over have proven almost as unfounded as those that have always accompanied other great technological leaps forward.

There is an important caveat to this reassuring picture, however. The relatively low-skilled factory workers who have been displaced by robots are rarely the same people who land up as app developers or analysts, and technological progress is already being blamed for exacerbating inequality, a trend Bank of America Merrill Lynch believes may continue in future.

So the rise of the machines may generate huge economic benefits; but unless it is carefully managed, those gains may be captured by shareholders and highly educated knowledge workers, exacerbating inequality and leaving some groups out in the cold. Heather Stewart





by System Administrator - Sunday, 12 October 2014, 17:28


Written By: Peniel M. Dimberu

In one of the gutsiest performances in sports history, NFL quarterback Chris Simms had to be carted off the field after taking several vicious hits from the defense during a game in 2006. Remarkably, Simms returned to the game shortly thereafter and led his team on a scoring drive before having to leave the game for good.

As it turns out, Simms had ruptured his spleen and lost nearly five pints of blood.

While you can live without your spleen, it serves several important functions in the body including making antibodies and maintaining a reservoir of blood. It also works to keep the blood clean by removing old blood cells and antibody-coated pathogens.

Now, scientists from Harvard’s Wyss Institute for Biologically Inspired Engineering in Boston have developed an artificial spleen that has been shown to rapidly remove bacteria and viruses from blood. The technology could be useful in many scenarios, including protecting people who suffer from immunodeficiencies and those infected with difficult-to-treat pathogens like the Ebola virus. It also has great potential to reduce the incidence of sepsis, a leading cause of death that results from an infection the immune system tries but fails to control effectively.

In the 2013 sci-fi thriller Elysium, the filmmakers imagined a futuristic body scanner that can quickly identify and treat almost any disease. While we may be far from an all-in-one machine that can handle any ailment, the artificial spleen developed by a Harvard team led by Dr. Donald Ingber could play a part in such a machine.

Their work, published last month in the journal Nature Medicine, was demonstrated to be effective in removing more than 90% of bacteria from blood.

Wyss Institute Founding Director Don Ingber, Senior Staff Scientist Michael Super and Technology Development Fellow Joo Kang explain how they engineered the Mannose-binding lectin (MBL) protein to bind to a wide range of sepsis-causing pathogens and then safely remove the pathogens from the bloodstream using a novel microfluidic spleen-like device.

While this device has potential to be a major advance in treating infections, the way it works is relatively straightforward. In most animals, a protein called mannose-binding lectin (MBL) binds to mannose, a type of sugar. Mannose is found on the outer surface of many pathogens, including bacteria, fungi and viruses. It is even found on some toxins that are produced by bacteria and contribute to illness.


Wyss Institute microfluidic biospleen.

Dr. Ingber’s team took a modified version of MBL and coated magnetic nanobeads with it. As the infected blood filters through the device, the MBL from the nanobeads binds to most pathogens or toxins that are around. As the blood then moves out of the device, a magnet grabs the magnetic nanobeads that have attached to the pathogens and removes them from the blood.

The blood can then be put right back into the patient, much cleaner than before.

In their initial experiments, the researchers used rats that had been infected with two common bacteria, Escherichia coli and Staphylococcus aureus. One group of rats was left untreated and the other group had their blood filtered using the new device. After five hours, 89% of the treated rats had survived while only 14% of the untreated rats were still alive.

The researchers also tested whether the device could work at human scale: an average adult has about five liters of blood. In five hours of testing, passing one liter of blood infected with bacteria and fungi through the device per hour, it removed the vast majority of the infectious bugs.

While five hours is not a long time for patients who are hospitalized, it’s a bit long for patients who might be receiving outpatient treatment for an infection.

It is possible that as the design and function of the device is improved, it could work even faster than one liter per hour. The speed at which the artificial spleen is effective likely depends on several factors, including the pathogen load, the size of the patient (and thus their actual volume of blood) and the number of magnetic nanobeads in the device working to bind the pathogens.
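How these factors trade off can be sketched with a simple well-mixed compartment model, in which blood cycles through the device and a fixed fraction of the pathogens in each pass is captured. This is an illustrative approximation only; the parameter values below are assumptions, not figures from the study:

```python
import math

def remaining_fraction(hours, flow_l_per_h=1.0, blood_volume_l=5.0,
                       per_pass_efficiency=0.9):
    """Fraction of pathogens still circulating after `hours` of filtration.

    Well-mixed model: dC/dt = -(flow/volume) * efficiency * C,
    so C(t)/C(0) = exp(-efficiency * flow * t / volume).
    """
    rate = per_pass_efficiency * flow_l_per_h / blood_volume_l
    return math.exp(-rate * hours)

for h in (1, 5, 10):
    removed = 100.0 * (1.0 - remaining_fraction(h))
    print(f"{h:2d} h: ~{removed:.0f}% of pathogens removed")
```

Under this simplified model, clearance slows as pathogen numbers fall, which is one reason a faster flow rate, a larger bead surface, or a higher per-pass capture efficiency would shorten treatment time.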

Currently, the researchers are extending their experiments by testing the artificial spleen on a larger model animal, the pig.

If the device eventually makes it to market, it might provide a big boost to our arsenal against infectious microorganisms. It can bring the numbers of rapidly dividing bugs down to a level that can then make it easy for drugs or even just the immune system to finish them off, an important advancement for people who suffer from an immunodeficiency for any number of reasons. This device could also help reduce our overuse of antibiotics and give us a strong weapon against antibiotic-resistant bugs.

It might even find use in developing countries like those in Western Africa, where we are currently witnessing the devastation of the Ebola virus outbreak.

However, while many infectious bugs have mannose on their surface, not all of them do. Perhaps the 2.0 version of the artificial spleen will include proteins that can bind to other molecules on the surface of problematic microorganisms, leading us closer to the all-in-one healing machine imagined in the futuristic world of Elysium.

Image Credit: Wyss Institute/Vimeo




Assertiveness [695]

by System Administrator - Tuesday, 5 August 2014, 21:51

10 tips for being assertive without ceasing to be yourself


Assertiveness is usually defined as the ability to express opinions, feelings, attitudes, and desires, and to assert one's own rights, at the appropriate time, without excessive anxiety, and in a way that does not infringe on the rights of others.

Popular wisdom says that assertive people get ahead. They say what they think, ask for the resources they need, express their wishes and feelings, and don't take no for an answer. But if you are not an assertive person, don't worry: you can become assertive, ask for what you need, and get what you want, without ceasing to be yourself:

1. Start small. If the idea of being assertive makes you feel especially uneasy or insecure, begin with low-risk situations. For example, if you order a hamburger and the waiter brings you grilled salmon, point out the mistake and send it back. If you go shopping with your partner and are trying to decide on a place to eat, state your opinion when choosing where to go.

Once you feel comfortable in these low-risk situations, gradually raise the difficulty.

2. Start saying no. On the road to becoming more assertive, NO is your best companion. You should say no more often. It is possible to be firm and decisive with a NO while remaining considerate. At first, saying no may make you anxious, but over time you will come to feel good about it, and quite liberated.

Some people will probably feel disappointed by this new situation. But remember that as long as you express your needs in a considerate way, you are in no way responsible for their reaction.

3. Be simple and direct. When asserting yourself, less is more. Make your requests simply and directly. There is no need for elaborate explanations (see below). It is enough to say politely what you think, feel, or want.

4. Use "I". When making a request or expressing disapproval, use "I". Always speak in the first person. Instead of saying: "You are very inconsiderate. You have no idea how hard today has been. Why are you asking me to do all these chores?", say "I am exhausted today. I see that you want me to do all these things, but I won't be able to do them until tomorrow."

5. Don't apologize for expressing a need or desire. Unless you are asking for something manifestly unreasonable, there is no reason to feel guilty or ashamed about expressing a need or desire. So stop apologizing when you ask for something. Just ask politely and wait to see how the other person responds.

6. Use body language and tone of voice. You should appear confident when making a request or stating a preference. Standing tall, leaning in slightly, smiling or keeping a neutral facial expression, and looking the person in the eye are actions that convey confidence. You should also make sure to speak clearly and loudly enough.

7. You don't have to justify or explain your opinion. When you make a decision or give an opinion that others disagree with, one way they will try to exert control over you is by demanding that you justify your choice, opinion, or behavior. If you cannot produce a good enough reason, they assume you must go along with what they want.

Non-assertive people, with their need to please, feel obliged to give an explanation or justification for every choice they make, even if the other person did not ask for one. They want to make sure everyone agrees with their choices, and in doing so they are effectively asking permission to live their own lives.

8. Be persistent. Sometimes you face situations in which your requests initially get no response. Don't just tell yourself: "At least I tried." Often, to be treated fairly, you have to be persistent. For example, if your flight was cancelled, keep asking about other options, such as being transferred to another airline, so you can reach your destination on time.

9. Stay calm. If someone disagrees with or disapproves of your choice, opinion, or request, don't get angry or defensive. It is better to look for a constructive response, or to decide to avoid that person in future situations.

10. Choose your battles. A common mistake on the road to becoming more assertive is trying to be firm all the time. Assertiveness is situational and contextual. There may be cases where being assertive will get you nowhere, and a more aggressive or a more passive stance is the better option.

Sometimes it is indeed necessary to hide your feelings. However, learning to express your opinions and, most importantly, to respect the validity of those opinions and desires, will make you a more confident person. An assertive action may get you exactly what you want, or perhaps a compromise, or maybe a rejection, but whatever the outcome, it will leave you feeling more in control of your own life.



by System Administrator - Sunday, 31 August 2014, 03:21


Author: María Teresa Vallejo Laso

Have you ever stopped to think about how you react when you interact with a person for the first time?
Why does that person put you off, or appeal to you, at first sight?
Do you feel surprised?
Perhaps uncomfortable?
Will you respond to that person, or will they go unnoticed by you?
Did you know that you can arrive at all these answers in just a few seconds?


1. Recognizing emotions: the first thing we do is assess the person's mood, which we infer from their face and from the body language we observe through gestures, movements, glances, posture, and so on. Our response may thus vary depending on whether we believe the person is distressed, happy, or sad.

2. We select from the enormous amount of data reaching us about the person in question and reduce its complexity.

3. We summarize the important information we have about the person approaching us and omit and forget many other details. For example, if we like their way of speaking, their clothing, and the content of their conversation, we assign them an attribute (for example, "a pleasant person").

4. Next, we store the information we are receiving in our memory, relate it to other information we already hold from previous experiences, retrieve it, and apply it to the case at hand.

5. Then we try to go beyond the information obtained, in order to predict future events and thereby avoid or reduce surprise.

6. We organize the information we have and create categories to classify the person's behavior, appearance, and other informative elements. We may categorize by physical attractiveness, personality, geographic origin, university degree, political ideology, and so on, either in a simple categorical system (e.g. friend-enemy, attractive-unattractive) or in a more complex one.

7. We look for the invariant elements of the stimuli we perceive, since aspects of behavior that seem superficial or unstable are of no interest to us.

8. The stimuli we perceive pass into our minds through a sieve. There we interpret them, and based on that interpretation we assign them a meaning. If we see someone helping an elderly person cross the street, we store that perception in memory together with the interpretation that the person is kind and helps others.

9. We try to discover what the person we are perceiving is really like, or what their true intentions are, since we all know that the goals and desires of the perceived person influence the information about themselves that they present. This, together with the ambiguity of much of that information, leads us to engage in an active process of getting to know them better.

10. We make a series of inferences. Since perceiving people involves our own self, and since other people are similar to us, we can all form an idea of how a person feels when they are sad, when they fail an exam, or when they receive good news, because we have lived through those experiences or similar ones.

11. Person perception usually occurs in interactions that are dynamic in character; that is, when we perceive another person, we are perceived at the same time. Our presence, the feeling of being observed, or the context may lead the other person to manage the impression they want to make on us, presenting or emphasizing certain characteristics and omitting others. Moreover, our expectations or perceptions of the person we perceive influence our behavior toward them; this behavior in turn can influence the response the perceived person gives, thus closing a kind of vicious circle.


1. We infer psychological characteristics from the person's behavior, as well as from other attributes of the observed person. (For example: the person is alone, physically attractive, intelligent, from another city, a science lover, etc.)

2. We organize these inferences into a coherent impression. Following the example, the halo effect could occur: a positive trait tends to carry other positive traits along with it, and a negative trait other negative qualities. In this way, the informative elements are organized as a whole in which each trait affects, and is affected by, all the others, generating a dynamic impression.

3. We combine all these elements, since in every impression, although all the traits are interrelated, some have a greater impact on the others and serve as elements that bind the impression together.

4. We produce a global image of the person. When we perceive others, we form global, unitary impressions of each person, even though the information we receive is fragmented into small pieces of very diverse kinds.

5. If there are elements that in our judgment are contradictory or mutually inconsistent, we resolve the contradictions. When we receive inconsistent information we can do two things: first, change the meaning of the characteristics; second, infer new traits that reduce the contradictions. If we know that another person is intelligent, affectionate, and a liar, we may deduce that they are a politician or a diplomat. With both mechanisms the result is the same: the resulting impression is unitary and coherent.


Primacy effect:

- It occurs more often when perceivers commit in some way to a judgment based on the first information, before they receive the additional information.

- When the first information is clearer, less ambiguous, or more relevant to the judgment.

- When the first information is based on the stimulus person and/or on the category.

- When the information in general refers to an entity that is not expected to change over time.

Recency effect:

- It occurs when the most recent items of information carry greater weight.

- It appears when the recent information is easier to recall, or more vivid, than the first information.

- The last adjectives are discounted or ignored to the extent that they are inconsistent with the earlier, predominant information.

- We pay less attention to the last items of information, out of fatigue or because we consider them less credible or important, perhaps thinking that they were placed last precisely because they are less important.

When the information we have about a person contains both positive and negative elements, the negative ones carry greater weight in the impression formed. That is why a negative first impression is more difficult to change than a positive one: traits that carry a negative evaluation seem to be easy to confirm and difficult to disconfirm, whereas positively evaluated traits are difficult to acquire but easy to lose.

What are the reasons for this?

- A self-serving motivation on the part of the perceiver has been suggested, since a person who possesses negative traits poses a greater degree of threat.

- Negative information has greater informational value: given that most of us strive to present a positive image of ourselves, the positive information we supply evidently says little about us as unique, distinctive individuals.

- Since negative evaluations are less common, their impact on impressions is greater.


1. Ingratiation: we try to appear attractive to others, wishing to be accepted and liked. This can be achieved, for example, by complimenting the other person or by agreeing with their opinions and behavior. Basically, it consists of conforming to the perceiver's expectations.

2. Intimidation: with this strategy, people try to show the power they wield over the other person, by threatening them or creating fear. This tactic usually appears in relationships that are not voluntary, since in a voluntary relationship the probability that the other party will abandon the relationship is high. The perceiver frequently conforms to the wishes of the perceived person in order to avoid the negative consequences, or the emotional turmoil, of disagreeing.

3. Self-promotion: this consists of displaying one's own skills and abilities while concealing one's defects. This tactic sometimes becomes more effective if the individual acknowledges minor flaws, or flaws already known to the perceivers, since this increases his or her credibility. The problem with this tactic is that it is often difficult to make others believe that one has certain qualities one actually lacks.

4. Appealing to others' sense of moral duty, integrity, or even guilt (for example, when one coworker says to another, "don't worry, go home, I'll finish the job, even if I miss my daughter's birthday"). Or sometimes, as a last resort, people display their weaknesses and their dependence on the other person.

5. A strategy frequently employed in domains related to competence or performance is self-handicapping, which consists of increasing the probability that a possible future failure will be attributed to external factors, and a possible success to internal ones.

Sometimes the desire for a favorable self-presentation leads people to associate themselves with the success of others, appropriating it in some way. This is known as basking in the reflected glory of others: feeling pride in the victories or successes of others whom we follow. Its counterpart, distancing oneself from the failure of others, also exists. For example, among the fans of a sports team the expression "we won" is common after a victory, and "the team lost" after a defeat.

It would be wrong to think that these efforts by individuals to present particular images of themselves are attempts to present a "false" image that does not correspond to their deepest, most authentic self. Choosing which aspect of our identity to present in a given situation can mean choosing among several equally true aspects of that identity. First, because we are limited by our own reality and cannot achieve everything we want: someone who is not intelligent may try to appear so, but will succeed only up to a point. Second, because sometimes our self gradually becomes what we pretend to be, especially when our appearance receives the approval of those around us.
When we perceive another person, we receive information of very diverse kinds:

- Physical appearance: what we initially perceive in another person is, most of the time, their physical appearance (which includes not only anatomical characteristics but also clothing and the way they move). This information is crucial for forming an idea of their mood at that moment (emotion recognition), for knowing which social categories they belong to, and even for forming an idea of which personality traits characterize them.

- Behavior: what the other person does is also one of the crucial sources of information. At the same time, however, behavior is not a very reliable indicator of the internal states, thoughts, and feelings of the perceived person.

- Personality traits: the personality characteristics of the perceived person are more important than their physical characteristics when a psychological assessment is made. The reason seems to be that by discovering another person's stable dispositions, we also acquire a certain predictive power over their future behavior.

- Information about relationships (roles, social networks; for example, when we know that someone is a parent).

- The goals and objectives the person pursues (e.g., someone who seeks power), and information about contexts. The importance of each of these different types of content depends largely on the context, on the perceiver's goals, and on the characteristic itself. A very extreme characteristic (for example, a heavily made-up girl) can play a leading role in organizing all the subsequent information.

María Teresa Vallejo Laso

References:

  • Moya, M. Percepción de Personas
  • Echebarría, A. y Villareal, M, La percepción social



Assessing Research Productivity [1046]

by System Administrator - Wednesday, January 7, 2015, 14:49

Assessing Research Productivity

A new way of evaluating academics’ research output using easily obtained data

By Ushma S. Neill, Craig B. Thompson, and Donna S. Gibson

It can often be difficult to gauge researcher productivity and impact, but these measures of effectiveness are important for academic institutions and funding sources to consider in allocating limited scientific resources and funding. Much as in the lab, where it is important for the results to be repeatable, developing an algorithm or an impartial process to appraise individual faculty research performance over multiple disciplines can deliver valuable insights for long-term strategic planning. Unfortunately, the development of such evaluation practices remains at an embryonic stage.


Several methods have been proposed to assess productivity and impact, but none can be used in isolation. Beyond assigning a number to an investigator—such as the h-index, the number of a researcher’s publications that have received at least that same number of citations, or a collaboration index, which takes into account a researcher’s relative contributions to his or her publications—there are additional sources of data that should be considered. At our institution, Memorial Sloan Kettering Cancer Center (MSKCC) in New York City, there is an emphasis on letters of recommendation received from external expert peers, funding longevity, excellence in teaching and mentoring, and the depth of a faculty member’s CV. For clinicians, additional assessments of patient load and satisfaction are also taken into consideration by our internal committees evaluating promotions and tenure. Other noted evaluation factors include the number of reviews and editorials an individual has been invited to author; frequency of appearance as first, middle, or senior author in collaborations; the number of different journals in which the researcher has published; media coverage of his or her work; and the number of published but never-cited articles.
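The h-index mentioned above is straightforward to compute from a researcher's citation counts. The following is a minimal sketch, assuming a plain list of per-paper citation counts; the function name and sample data are illustrative, not from the article:

```python
def h_index(citations):
    """Largest h such that the researcher has h papers with at least h citations each."""
    h = 0
    # Walk the citation counts from most- to least-cited; the h-index is
    # the last rank whose citation count still meets or exceeds the rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

By the same logic, a researcher whose papers were cited [1, 1, 1] times has an h-index of 1, which is one illustration of why the article cautions that no single number can be used in isolation.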

Here we propose a new bibliometric method to assess the body of a researcher’s published work, based on relevant information collected from the Scopus database and Journal Citation Reports (JCR). This method does not require intricate programming, and it yields a graphical representation of data to visualize the publication output of researchers from disparate backgrounds at different stages in their careers. We used Scopus to assess citations of research articles published between 2009 and 2014 by five different researchers, and by one retired researcher over the course of his career since 1996, a time during which this individual was a full professor and chair of his department. These six researchers included molecular biologists, an immunologist, an imaging expert, and a clinician, demonstrating that this apparatus could level the playing field across diverse disciplines.


ACROSS DISCIPLINES: A graphical display illustrates the publication productivity and impact of three researchers from disparate fields. The journal’s average impact for the year (gray squares) is compared to the impact of the researcher’s articles (red circles) in the same journal that year. Non-review journals ranked in the top 50 by impact factor, as determined by Journal Citation Reports, are noted in gold. Representing journals this way equalizes researchers across disciplines, so the impact of a particular manuscript can be appreciated by seeing whether the author’s red dot sits higher or lower than the journal’s gray/gold one.
See full infographic: JPG | PDF

The metric we used calculates the impact of a research article as its number of citations divided by the publishing journal’s impact factor for that year, divided by the number of years since the article was published. The higher the number, the greater the work’s impact. This value is plotted together with the average impact of all research articles the journal published in that same year (average number of citations for all research articles published that year divided by the journal impact factor for that year divided by the number of years since publication). Publications in journals that rank in the top 50 by impact factor (not including reviews-only journals) are also noted.
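In code, the metric described above reduces to two divisions. This is a sketch under stated assumptions: the function and parameter names are our own, and the article does not specify an implementation or how fractional years are handled.

```python
def article_impact(citations, journal_impact_factor, years_since_publication):
    """Citation count normalized by the journal's impact factor for the
    publication year and by the article's age in years.
    Higher values indicate greater impact."""
    return citations / journal_impact_factor / years_since_publication

# An article with 30 citations, published 3 years ago in a journal
# whose impact factor that year was 10.0:
print(article_impact(30, 10.0, 3))  # 1.0
```

The journal's own average for that year (the gray squares in the figures) is computed the same way, substituting the mean citation count of all research articles the journal published that year.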


ACROSS AGES: This method of visualizing researchers’ productivity can be a useful tool for comparing scientists at different points in their career.
See full infographic: JPG | PDF

By developing such a graph for each scientist being evaluated, we get a snapshot of his or her research productivity. Across disciplines, the graphs allow comparison of total output (number of dots) as well as impact, providing answers to the questions: Are the scientists’ manuscripts being cited more than their peers’ in the same journal (red dots above gray)? How many of each researcher’s papers were published in leading scientific journals (gold squares)? The method also allows evaluation of early-career scientists and those who are further along in their careers. (See graphs at right, top.) For young researchers, evaluators can easily see if their trajectory is moving upward; for later-stage scientists, the graphs can give a sense of the productivity of their lab as a whole. This can, in turn, reveal whether their laboratory output matches their allocated institutional resources. While the impact factor may be a flawed measurement, using it as a normalization tool helps to remove the influence of the journal, and one can visualize whether the scientific community reacts to a finding and integrates it into scientific knowledge. This strategy also allows for long-term evaluations, making it easy to appreciate the productivity of an individual, in both impact and volume, over the course of his or her career.


LONG-TERM ANALYSIS: A nearly 20-year stretch (1996–2014) is shown for a newly retired faculty member after a productive research career. Note that this individual did not publish any articles in top 50 non-review journals as determined by Journal Citation Reports impact factors. Although this researcher published several papers before 1996, Scopus has limited reliability for citations prior to that year; therefore the analysis excluded these data.
See full infographic: JPG | PDF

Assessing research performance is an important part of any evaluation process. While no bibliometric indicators alone can give a picture of collaboration, impact, and productivity, this method may help to buttress other measures of scientific success.

Ushma S. Neill is director of the Office of the President at Memorial Sloan Kettering Cancer Center (MSKCC). Craig B. Thompson is the president and CEO of MSKCC, and Donna S. Gibson is director of library services at the center.


At the Heart of Facebook’s Artificial Intelligence, Human Emotions [1215]

by System Administrator - Saturday, May 2, 2015, 20:56

At the Heart of Facebook’s Artificial Intelligence, Human Emotions

By Amir Mizroch

Yann LeCun, director of AI Research at Facebook and professor of computer science at New York University. (Image: Facebook)

Facebook Inc. doesn't yet have an intelligent assistant, like the iPhone's Siri.

But the social-networking company says it's aiming higher, in what has become one of the biggest battles raging between Silicon Valley's behemoths: How to commercialize artificial intelligence.

The once-niche field is aimed at figuring out how computers can make decisions on a level approaching that of human intelligence. Apple Inc.'s Siri, Microsoft Corp.'s Cortana and Google Inc.'s Google Now are all early manifestations. They are voice-recognition services that act as personal assistants on devices, helping users search for information–like finding directions or rating nearby restaurants. All three "learn" from their users, adapting to accents, for instance, and learning from previous searches about users' preferences.

Facebook thinks it can do better.

"Siri and Cortana are very scripted," says Yann LeCun, director of artificial-intelligence research at Facebook, in an interview. "There's only certain things they can talk about and dialogue about. Their knowledge base is fairly limited, and their ability to dialogue is limited," he said. "We're laying the groundwork for how you give common sense to machines."

Apple and Microsoft declined to comment.

Google Executive Chairman Eric Schmidt recently said the company was making progress in image and speech recognition, but admitted at a conference it was a "sore point" at the company that Siri was getting "all the credit."

Facebook's LeCun also sees promise in natural-language processing–machines understanding what is being said in speech in a more sophisticated way than Siri or Cortana. And he said image and video recognition is the "next frontier" at Facebook.

"It's clear that there's going to be a lot of progress in the way that machines can understand images and activities in video; personal interactions in video between people expressing emotions, and things like that," he said. A raised eyebrow might mean many different things in different contexts. After a computer sifts through reams of images of people raising an eyebrow, and of what happens before or after, it can start to correlate that action.

The basic theory is that the more images the computer analyzes and correlates, the more precise it becomes, statistically. The goal is to approach the same level of correlation that the human brain makes as it processes images sent from a person's eyes.

"It's not just about looking at your face to determine your emotions, it's about understanding interactions between different people and figuring out if those people are friends, or angry at each other," LeCun said.

French-born LeCun, 55 years old, is one of the world's leading figures in artificial-intelligence research, specifically in a subset of the field called "machine learning": mathematical algorithms that adjust, and improve, as they receive and analyze new data.

While working at AT&T in the late '80s and '90s, Mr. LeCun developed handwriting-recognition processing that was eventually used by banks to scan and verify checks. The pattern-recognition technology he developed significantly advanced the commercial applications of image and text recognition and is used in search and voice-recognition products and services by Google and Microsoft.

Facebook hired LeCun in late 2013, luring him from New York University, where he remains a part-time professor, shuttling between campus and Facebook's nearby New York offices. LeCun now spends one day a week at NYU, and the rest at Facebook, where he heads the AI research lab. The lab, split between Menlo Park and New York, is currently 40 members strong, much larger than most university AI research departments–which traditionally have done the heavy lifting on AI research.

Facebook's AI research is currently being used in image tagging, predicting which topics will trend, and face recognition. All of these services require algorithms to sift through vast amounts of data, like pictures, written messages, and video, to make calculated decisions about their content and context. Facebook has a big advantage over the university labs that have toiled for decades in the field: it can vacuum up the reams of data required to "teach" machines to make correlations.

Facebook last week said its main social network increased to 1.44 billion monthly users, up from 1.39 billion in the 2014 fourth quarter. The company added that it now has 4 billion video streams every day.

"You can work on a project that may take a few years to develop into something useful, but we all know that if it succeeds it will have a big impact," LeCun said.

More: Artificial-Intelligence Experts Are in High Demand




Atrocidad y Solidaridad (Atrocity and Solidarity) [644]

by System Administrator - Tuesday, October 20, 2015, 21:28

Human beings are capable of the most atrocious acts and of the greatest solidarity


Dr. Humberto Lucero remembers his teachers and his native Córdoba with veneration, and recalls that he found in legal (forensic) medicine the field in which to pursue his greatest passion: the study of human behavior.

Continue reading on the site
