Neurociencia | Neuroscience





3 Ways Exponential Technologies Are Impacting the Future of Learning [1576]

by System Administrator - Thursday, December 3, 2015, 17:15



“Simply put, we can’t keep preparing children for a world that doesn’t exist.”
-Cathy N. Davidson

Exponential technologies have a tendency to move from a deceptively slow pace of development to a disruptively fast pace. We often disregard or fail to notice technologies in the deceptive growth phase, until they begin changing the way we live and do business. Driven by information technologies, products and services become digitized, dematerialized, demonetized and/or democratized and enter a phase of exponential growth.
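The deceptive-then-disruptive pattern is easy to see in a toy comparison of linear and doubling growth. This snippet is our own illustration, not from the article:

```python
# Toy illustration: why exponential progress looks "deceptive" early on
# and "disruptive" later. Compare linear growth (+1 per step) with
# doubling growth (x2 per step).
linear, exponential = 0, 1
for step in range(1, 31):
    linear += 1
    exponential *= 2
    if step in (5, 10, 30):
        print(f"step {step:2d}: linear={linear}, exponential={exponential}")
# step  5: linear=5, exponential=32
# step 10: linear=10, exponential=1024
# step 30: linear=30, exponential=1073741824
```

At step 5 the two curves look comparable; by step 30 the doubling curve has left the linear one behind by eight orders of magnitude, which is exactly the transition the paragraph above describes.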

Nicole Wilson, Singularity University’s vice president of faculty and curriculum, believes education technology is currently in a phase of deceptive growth, and we are seeing the beginning of how exponential technologies are impacting 1) what we need to learn, 2) how we view schooling and society and 3) how we will teach and learn in the future.


Watch Nicole Wilson, VP of Faculty and Curriculum at Singularity University, discuss the three ways in which exponential technologies are impacting how we teach and learn.

Exponential Technologies Impact What Needs to be Learned

In a 2013 white paper titled Dancing with Robots: Human Skills for Computerized Work, Richard Murnane and Frank Levy argue that in the computer age, the skills that are valuable in the new labor market are significantly different from what they were several decades ago.

Computers are much better than humans at tasks that can be organized into a set of rules-based routines. If a task can be reduced to a series of “if-then-do” statements, then computers or robots are the right ones for the job. However, there are many things that computers are not very good at and should be left to humans (at least for now).  Levy and Murnane put these into three main categories:
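As a concrete (and entirely hypothetical) illustration of an "if-then-do" routine of the kind computers excel at, consider a rules-based expense approval. The function name, thresholds, and categories are invented for this sketch:

```python
# Hypothetical "if-then-do" routine: approving an expense report with
# fixed rules. Tasks reducible to this form are easy to automate;
# unstructured problems (no known rules in advance) are not.
def approve_expense(amount, has_receipt, category):
    if amount <= 0:
        return "reject"           # if the amount is invalid, then reject
    if not has_receipt and amount > 25:
        return "needs_receipt"    # if no receipt over $25, then ask for one
    if category not in {"travel", "meals", "supplies"}:
        return "manual_review"    # if unknown category, then escalate to a human
    return "approve"

print(approve_expense(120, True, "travel"))  # approve
```

Everything the function does is a fixed rule known in advance; judging whether an unusual expense is *reasonable*, by contrast, is exactly the kind of unstructured problem Levy and Murnane assign to humans.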

Solving unstructured problems. 
Humans are significantly more effective when the desired outcomes or set of information needed to solve the problem are unknowable in advance. These are problems that require creativity.  

Working with new information. 
This includes instances where communication and social interaction are needed to define the problem and gather necessary information from other people. 

Carrying out non-routine manual tasks.
While robots will continue to improve dramatically, they are currently not nearly as capable as humans in conducting non-routine manual tasks. 

In the past three decades, jobs requiring routine manual or routine cognitive skills have declined as a percent of the labor market. On the other hand, jobs requiring solving unstructured problems, communication, and non-routine manual work have grown.

The best chance of preparing young people for decent paying jobs in the decades ahead is helping them develop the skills to solve these kinds of complex tasks.

What are these skills exactly?

In March, the World Economic Forum released their New Vision for Education Report, which identified a set of “21st century skills.” The report broke these into three categories: ‘Foundational Literacies’, ‘Competencies’ and ‘Character Qualities’.


The foundational literacies are the “basics”: reading, writing, and science, along with more practical skills like financial literacy. Even in a world of rapid change, we still need to learn how to read, write, do basic math, and understand how our society works.

The competencies are often referred to as the 4Cs — critical thinking, creativity, communication and collaboration — the very things computers currently aren’t good at. Developing character qualities such as curiosity, persistence, adaptability and leadership help students become active creators of their own lives, finding and pursuing what is personally meaningful to them.

Exponential Technologies Impact How We View Schooling and Society 

In her book Now You See It, Cathy N. Davidson, co-director of the annual MacArthur Foundation Digital Media and Learning Competitions, says 65 percent of today’s grade school kids will end up doing work that has yet to be invented.

Davidson, along with many other scholars, argues that the contemporary American classroom is still functioning much like the classroom of the industrial era — a system created as a training ground for future factory workers to teach tasks, obedience, hierarchy and schedules.

For example, teachers and professors often ask students to write term papers. Davidson herself was disappointed when her students at Duke University turned in unpublishable papers, when she knew that the same students wrote excellent blogs online.

Instead of questioning her students, Davidson questioned the necessity of the term paper. “What if bad writing is a product of the form of writing required in school — the term paper — and not necessarily intrinsic to a student’s natural writing style or thought process? What if ‘research paper’ is a category that invites, even requires, linguistic and syntactic gobbledygook?”

And if term papers are starting to seem archaic, formal degrees might be the next to go.

Getting a four-year degree in any technology field makes little sense when the field will likely be radically different by the time the student graduates. Today, we’re seeing the rise of concepts like Mozilla's “open badges” and Udacity’s “nanodegrees.” Udacity recently reached a billion-dollar valuation, partially based on the promise of their new nanodegree program.

Exponential Technologies Impact How We Teach and Learn

Technologies like artificial intelligence, big data and virtual and augmented reality are all poised to change the way we teach and learn both in the classroom and outside of it.

The ed-tech company Knewton focuses on creating personalized learning paths for students by gathering data to determine what each student knows and doesn’t know and how the student learns best. Knewton takes any free, open content, and uses an algorithm to bundle it into a uniquely personalized lesson for each student at any moment.
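Knewton has not published its algorithm here; as a loose, hypothetical sketch of the idea, a greedy selector might bundle open content items that cover a student's unmastered concepts. All names and data below are invented for illustration:

```python
# Hypothetical sketch (not Knewton's actual algorithm): pick open content
# items that cover concepts the student hasn't mastered yet, preferring
# items that close the most gaps.
def build_lesson(mastered, content_items, max_items=3):
    gaps = lambda item: set(item["concepts"]) - mastered
    lesson = []
    remaining = list(content_items)
    while remaining and len(lesson) < max_items:
        # Greedily choose the item covering the most unmastered concepts.
        best = max(remaining, key=lambda item: len(gaps(item)))
        if not gaps(best):
            break  # everything left covers only mastered material
        lesson.append(best["title"])
        mastered = mastered | set(best["concepts"])
        remaining.remove(best)
    return lesson

items = [
    {"title": "Fractions intro", "concepts": {"fractions"}},
    {"title": "Ratios & fractions", "concepts": {"ratios", "fractions"}},
    {"title": "Decimals", "concepts": {"decimals"}},
]
print(build_lesson({"fractions"}, items))  # ['Ratios & fractions', 'Decimals']
```

The real system presumably weighs far richer signals (how the student learns best, item difficulty, past performance), but the gap-filling loop conveys the personalization idea.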


And while there’s no lack of enthusiasm around the potential of using virtual reality to change education, we are just seeing the first baby steps toward what will eventually be the full use of the technology. Google Expeditions, which aims to take students on virtual field trips and World of Comenius, a project which brought Oculus Rift headsets to a grammar school in the Czech Republic to virtually teach biology and anatomy are just two examples of many teams going through the process of trial and error to define what works for education in VR and what doesn’t. 

It’s clear that technologies undergoing exponential growth are shaping the skills we need to be successful, how we approach education in the classroom, and what tools we will use in the future to teach and learn. The bigger question is: How can we guide these technologies in a way that produces the kind of educated public we wish to have in the coming years?

To get updates on Future of Learning posts, sign up here.




3D Computer Interfaces Will Amaze—Like Going From DOS to Windows [1347]

by System Administrator - Tuesday, September 15, 2015, 20:55


By Jody Medich

Today, we spend as much time immersed in the digital world as we do the real world. But our computer interfaces are flawed. They force us to work in 2D rectangles instead of our native 3D spaces. Our brains have evolved powerful ways of processing information that depend on dimensionality — and flat screens and operating systems short circuit those tricks.

The result is a bit like reading War and Peace one letter at a time, or the classic example of a group of blind folks trying to define an elephant from the one piece they are touching. We're missing the big picture, missing all the spatial systems that help us understand — the space is just far too limited.

All this, however, is about to change in a big way.

With augmented and virtual reality, we’ll be able to design user interfaces as revolutionary as those first developed for the Xerox Alto, Apple Macintosh, and Microsoft Windows. These new 3D computer interfaces and experiences will help us leverage dimensionality, or space, just as we do in the real world — and they’ll do it with the dynamic power of digital technology and design.

How We Naturally Use Space to Think

Ultimately, humans are visual creatures.

We gather a large amount of information about our world by sight. We can tell which way the wind is blowing, for example, just by looking at a tree. A very large portion of our brains, therefore, is dedicated to processing visual information as compared to language.

If you take your hand and put it to the back of your head, that’s about the size of your visual cortex. Driven by this large part of our brain, we learned to communicate via visual methods much earlier than language. And even after we formulated words, our language remained very visual. In the representation of letters on a spatial grid to create meaning, for example, it’s the underlying spatial structure that ultimately allows us to read.

Space is therefore very important to humans. Our brains evolved to be visual in 3D spaces. We are constantly creating a spatial map of our surroundings. This innate process is called spatial cognition, which helps us recall memories, reveal relationships, and think. It is key to sensemaking.

Spatial memory is, in effect, “free.” It allows us to offload a number of cognitively heavy tasks from our working memory in the following ways:

Spatial Semantics: We spatially arrange objects to make sense of information and its meaning. The spatial arrangement reveals relationships and connections between ideas.

External Memory: The note on the refrigerator, the photo of a loved one, or the place you always put your keys are all examples of how space compensates for our limited working memory.

Dimension: Dimension helps us to immediately understand information about an object. For example, you can easily tell the difference between War and Peace and a blank piece of paper just by looking at the two objects.

Embodied Cognition: Physically interacting with space through proprioception — this is the human sense that locates our body in space — is essential to understanding its volume, but it also helps us process thought.

How We Interact With Computers Today: The Graphical User Interface (GUI)

In 1973, Xerox PARC was a hotbed of technological innovation, and Xerox researchers were determined to figure out how to make interacting with computers more intuitive. Of course, they were fully aware of the way humans use visual and spatial tools.

People had a hard time remembering all the specialized linguistic commands necessary to operate a computer in command-line interface (think MS-DOS). So, researchers developed an early graphical user interface (GUI) on the Xerox Alto as a way to reduce the cognitive load by providing visual/spatial metaphors to accomplish basic tasks.

The Alto used a mouse and cursor, a bitmapped display, and windows for multitasking. No one thought the job was complete, but it was a great first step on the road to a simplified user interface. And most operating systems today testify to the power of GUI.

The Problem With Modern Operating Systems

The problem is 2D computing is flat. GUI was invented at a time when most data was linguistic. The majority of information was text, not photos or videos — especially not virtual worlds. So, GUI organization is based on filenames, not spatial semantics.

The resulting “magic piece of paper” metaphor creates a very limited sense of space and prevents development of spatial cognition. The existing metaphor:

  • Limits our ability to visually sort, remember and access
  • Provides a very narrow field of view on content and data relationships
  • Does not allow for data dimensionality

This means the user has to carry a lot of information in her working memory. We can see this clearly in the example of the piece of paper and War and Peace.

In modern operating systems, these objects appear dimensionally identical because they are represented by uniformly similar icons. The user has to know what makes each object different, and even has to remember their names and where they are stored. Because of this, modern operating systems interrupt flow.

Multiple studies have focused on the interruption cost for software engineers. It turns out that any interruption can cause significant distraction, and once distracted, it takes nearly half an hour to resume the original task. In our operating systems, every task switch interrupts flow. The average user switches tasks three times a minute, and the more cognitively heavy the task switch, the more potent the interruption.

The Solution? The Infinite Space of Augmented and Virtual Reality

But now augmented and virtual reality are emerging. And with them, infinite space.

The good news is that spatial memory is free, even in virtual spaces. And allowing users to create spatial systems, such as setting up tasks across multiple monitors, has been shown to improve productivity by up to 40%.

So, how do we create a system that capitalizes on the opportunity?

The most obvious place to start is to enable development of virtual spatial semantics. We should build 3D operating systems that allow users to create spatial buckets to organize their digital belongings — or even better, allow users to overlay the virtual on their pre-existing real spatial semantics. People already have well established real world spatial memory, so combining the two will lead to even better multi-tasking.

For example, I have a place where I always put my keys (external memory). If I need to remember something the next time I leave the house, I leave it in that spot next to my keys. If I could also put digital objects there, I could become immensely more productive.

Further, by adding digital smarts to spatial semantics, users can change the structure dynamically. I can arrange objects in a certain way to find specific meaning and, with the touch of a button, instantly rearrange the objects into a timeline, an alphabetical list, or any other spatial structure that would help me derive meaning, without losing my original, valuable arrangement. SandDance at Microsoft Research (Steven Drucker, et al.) and Pivot by Live Labs are excellent examples of this type of solution.
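As a minimal sketch of this idea (hypothetical data and function names, not SandDance's or Pivot's actual code), derived views can be computed on demand while the user's original spatial arrangement stays untouched:

```python
# Derived views (timeline, alphabetical) are computed from the same
# objects without modifying the user's hand-made spatial layout ("pos").
from datetime import date

objects = [
    {"name": "photo.jpg",  "modified": date(2015, 6, 1),  "pos": (120, 40)},
    {"name": "notes.txt",  "modified": date(2015, 1, 15), "pos": (30, 200)},
    {"name": "budget.xls", "modified": date(2015, 3, 9),  "pos": (300, 90)},
]

def as_timeline(objs):
    return [o["name"] for o in sorted(objs, key=lambda o: o["modified"])]

def as_alphabetical(objs):
    return [o["name"] for o in sorted(objs, key=lambda o: o["name"])]

print(as_timeline(objects))      # ['notes.txt', 'budget.xls', 'photo.jpg']
print(as_alphabetical(objects))  # ['budget.xls', 'notes.txt', 'photo.jpg']
# The "pos" layout is never touched, so the spatial view can be restored.
```

Because each view is a pure function of the objects, switching between arrangements is instant and reversible, which is exactly the "touch of a button" behavior described above.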

And finally, the introduction of the z-plane enables digital object dimensionality.


Flat Icons


By applying sort parameters to the x- and y-axes, virtual objects take on meaningful dimension. But unlike the real world, where objects follow the rules of Euclidean geometry, in the virtual world dimension can be dynamic. The sort methods applied can be swapped out quickly and easily, depending on the user's need, to change the dimension of the objects, allowing users to tell at a glance what is more or less pertinent to a query.


History x File Size | Favorites x Time Spent


Dimension also creates opportunities for virtual memory palaces.

Memory palaces are a mnemonic device from ancient times that enhance memory by using spatial visualization to organize and recall information. The subject starts with a memorized spatial location, then “walks” through the space in their mind, assigning a different item to specific features of the space. Later, to remember the items, the subject again “walks” through the space and is reminded of the items along the way.

With the advent of virtual 3D spaces, the same type of memory device can be created to allow users to organize, remember, and delineate large amounts of information with the added benefit of a digital “map” of the space; a map that can be dynamically rearranged and searched depending on the user needs in any given moment.
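The memory palace technique itself is straightforward to model in code. This toy sketch (our own illustration, with hypothetical locations and items) captures the place-then-walk pattern described above:

```python
# Toy model of a memory palace: items are assigned to spatial locations
# along a memorized route, then recalled by "walking" the route in order.
class MemoryPalace:
    def __init__(self, route):
        self.route = route  # ordered spatial locations
        self.items = {}     # location -> item placed there

    def place(self, location, item):
        self.items[location] = item

    def walk(self):
        # Revisit locations in order, recalling the item (if any) at each.
        return [(loc, self.items.get(loc)) for loc in self.route]

palace = MemoryPalace(["front door", "hallway", "kitchen"])
palace.place("front door", "milk")
palace.place("kitchen", "eggs")
print(palace.walk())
# [('front door', 'milk'), ('hallway', None), ('kitchen', 'eggs')]
```

A virtual 3D version would add what the paragraph above calls the digital "map": the same structure, but searchable and dynamically rearrangeable rather than fixed in the user's head.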

We Are On the Cusp of Another Technological Revolution

Humans are indeed visual creatures, and augmented and virtual reality is geared to help us use those abilities in the digital world. Through these technologies we can alleviate the heavy reliance on working memory needed to operate our tools, and enable instead our natural spatial cognitive abilities. We are on the cusp of another technological revolution — one in which we create superhumans, not supercomputers.

In her 20-year design career, Jody has created just about everything from holograms to physical products and R&D for over 300 companies. She’s spent the last three years working on AR/VR, most notably as a principal experience designer on the HoloLens Project at Microsoft and principal UX at LEAP Motion. Previously, she co-founded and directed Kicker Studio, a design consultancy specializing in Natural User Interface and R&D for companies including Intel, Samsung, Microsoft, and DARPA. You can learn more about Jody's work here and follow her @nothelga.

To get updates on Future of Virtual Reality posts, sign up here.



by System Administrator - Monday, September 1, 2014, 20:10



Written By: Sveta McShane

Everyone has knick-knacks of sentimental value around their home, but what if your emotions could actually be shaped into household things?

A project recently unveiled at the Sao Paulo Design Weekend turns feelings of love into physical objects using 3D printing and biometric sensors. “Each product is unique and contains the most intimate emotions of the participants’ love stories,” explains designer Guto Requena.

"The LOVE PROJECT is a study in design, science and technology that captures the emotions people feel in relating personal love stories and transforms them into everyday objects. The project suggests a future in which unique products will bear personal histories in ways that encourage long life cycles, thus inherently combining deeply meaningful works with sustainable design."

As users recount the greatest love stories of their lives, sensors track heart rate, voice inflection, and brain activity. Data from their physical and emotional responses are collected and interpreted in an interface, transforming the various inputs into a single output.

The real-time visualization of the data is modeled using a system of particles, where voice data determines particle velocity, heart rate controls the thickness of the particles, and data from brain waves causes the particles to repel or attract each other. To shape these particles into the form of an everyday object such as a lamp, fruit bowl or vase, a grid of forces guides the particles as they flow along their course.
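The project's actual code isn't public; as a hypothetical sketch of the mapping just described (voice to velocity, heart rate to thickness, brain waves to attraction or repulsion), normalized biometric signals might drive particle parameters like this. The function name and scaling constants are our assumptions:

```python
# Hypothetical sketch of the biometric-to-particle mapping the article
# describes (not the LOVE PROJECT's actual code). Inputs are normalized
# to the 0..1 range.
def particle_params(voice_level, heart_rate, brain_activity):
    return {
        "velocity": 0.5 + 4.5 * voice_level,   # louder voice -> faster particles
        "thickness": 1.0 + 9.0 * heart_rate,   # higher pulse -> thicker particles
        # brain activity above 0.5 attracts particles, below 0.5 repels them
        "attraction": (brain_activity - 0.5) * 2.0,
    }

print(particle_params(voice_level=0.8, heart_rate=0.6, brain_activity=0.25))
```

A separate force grid (not shown) would then steer the resulting particle cloud into the target object shape, per the paragraph above.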

The final designs are then sent to a 3D printer, which can print in a variety of materials including thermoplastics, glass, ceramic or metal.


While this method is both creative and intriguing, are any of the produced objects meaningful if the viewer is unable to interpret the emotion from which the object was created?

Looking at the objects produced, it’s difficult to imagine them eliciting similar feelings in viewers as were felt by those recounting their tales of love. Furthermore, the data that produced the objects cannot be extracted, so that information is lost.


Other groups have begun to experiment with infusing printed objects with interactivity, where the object provides the user with information. Techniques to convert digital audio files into 3D-printable audio records have been developed, as well as a way to play a ‘sound bite’ on a 3D printed object by running a fingernail or credit card along printed ridges that produce a sound.

The “Love Project” is an interesting experiment that successfully includes the end user in the process of creating objects of meaning while also democratizing and demystifying the use of interactive digital technologies, yet it’s a stretch to think that the aesthetics of the objects themselves can help us understand the mysterious human emotion of love.

What would be truly exciting is if we were able to transform intangible emotions into data, turn that data into a physical object, and interact with the object in a way that brings insight, meaning, and a new way of understanding and visualizing our emotional states. With designers like Requena and Neri Oxman finding new ways to integrate art and 3D printing, we’re likely to see even more exciting projects at the interface of technology and expression on the horizon.

[Photo credits: Guto Requena]

This entry was posted in Art and tagged 3d printed audio records, 3d printing, biometrics, brain waves, Guto Requena, heart rate, Love Project, neri oxman, Sao Paulo Design Weekend, sensors.



3D reconstruction of neuronal networks provides unprecedented insight into organizational principles of sensory cortex [1223]

by System Administrator - Thursday, May 7, 2015, 21:14


Detail of 3D reconstruction of individual cortical neurons in rats. Credit: Max Planck Florida Institute for Neuroscience 

Researchers at the Max Planck Institute for Biological Cybernetics (Germany), VU University Amsterdam (Netherlands) and Max Planck Florida Institute for Neuroscience (USA) succeed in reconstructing the neuronal networks that interconnect the elementary units of sensory cortex – cortical columns.

A key challenge in neuroscience research is identifying organizational principles of how the brain integrates sensory information from its environment to generate behavior. One of the major determinants of these principles is the structural organization of the highly complex, interconnected networks of neurons in the brain. Dr. Oberlaender and his collaborators have developed novel techniques to reconstruct anatomically realistic 3D models of such neuronal networks in the rodent brain. The resultant model has now provided unprecedented insight into how neurons within and across the elementary functional units of the sensory cortex – cortical columns – are interconnected. The researchers found that, in contrast to the decades-long focus on describing neuronal pathways within a cortical column, the majority of the cortical circuitry interconnects neurons across cortical columns. Moreover, these ‘trans-columnar’ networks are not uniformly structured. Instead, ‘trans-columnar’ pathways follow multiple highly specialized principles, which for example mirror the layout of the sensory receptors at the periphery. Consequently, the concept of cortical columns as the primary entity of cortical processing can now be extended to the next level of organization, where groups of multiple, specifically interconnected cortical columns form ‘intracortical units’. The researchers suggest that these higher-order units are the primary cortical entity for integrating signals from multiple sensory receptors, for example to provide anticipatory information about future stimuli.

3D model for studying cortex organization

Rodents are nocturnal animals that use facial whiskers as their primary sensory receptors to orient themselves in their environment. For example, to determine the position, size and texture of objects, they rhythmically move the whiskers back and forth, thereby exploring and touching objects within their immediate surroundings. Such tactile sensory information is then relayed from the periphery to the sensory cortex via whisker-specific neuronal pathways, where each individual whisker activates neurons located within a dedicated cortical column. The one-to-one correspondence between a facial whisker and a cortical column renders the rodent vibrissal system as an ideal model to investigate the structural and functional organization of cortical columns. In their April publication in Cerebral Cortex, Dr. Marcel Oberlaender, Dr. Bert Sakmann and collaborators describe how their research sheds light on the organization of cortical columns in the rodent brain through the systematic reconstruction of more than 150 individual neurons from all cell types (image [top] shows examples for each of the 10 cell types in cortex) of the somatosensory cortex’s vibrissal domain (the area of the cortex involved in interpreting sensory information from the rodent’s whiskers).


3D reconstruction of individual cortical neurons in rats – Top: Exemplary neuron reconstructions for each of the 10 major cell types of the vibrissal part of rat sensory cortex (dendrites, the part of a neuron that receives information from other neurons, are shown in red; axons are colored according to the respective cell type). Bottom: Superposition of all reconstructed axons (colored according to the respective cell type) located within a single cortical column (horizontal white lines in the center represent the edges of this column). The axons from all cell type project beyond the dimensions of the column, interconnecting multiple columns (white open circles) via highly specialized horizontal pathways. Credit: Max Planck Florida Institute for Neuroscience  

In particular, the researchers combined neuronal labeling in the living animal, with custom-designed high-resolution 3D reconstruction technologies and integration of morphologies into an accurate model of the cortical circuitry. The resultant dataset can be regarded as the most comprehensive investigation of the cortical circuitry to date, and revealed surprising principles of cortex organization. First, neurons of all cell types projected the majority of their axon – the part of the neuron that transmits information to other neurons – far beyond the borders of the cortical column they were located in. Thus, information from a single whisker will spread into multiple cortical columns (image [bottom] shows how axons of neurons located in one cortical column project to all surrounding columns [white circles]). Second, these trans-columnar pathways were not uniformly structured. Instead, each cell type showed specific and asymmetric axon projection patterns, for example interconnecting columns that represent whiskers with similar distance to the bottom of the snout. Finally, the researchers showed that the observed principles of trans-columnar pathways could be advantageous, compared to any previously postulated cortex model, for encoding complex sensory information.

According to Dr. Oberlaender, neuroscientist at the Max Planck Institute for Biological Cybernetics and guest-scientist at the Max Planck Florida Institute for Neuroscience, “There has been evidence for decades that cortical columns are connected horizontally to neighboring columns. However, because of the dominance of the columnar concept as the elementary functional unit of the cortex, and methodological limitations that prevented from reconstructing complete 3D neuron morphologies, previous descriptions of the cortical circuitry have largely focused on vertical pathways within an individual cortical column.”

The present study thus marks a major step forward to advance the understanding of the organizational principles of the neocortex and sets the stage for future studies that will provide extraordinary insight into how sensory information is represented, processed and encoded within the cortical circuitry. “Our novel approach of studying cortex organization can serve as a roadmap to reconstructing complete 3D circuit diagrams for other sensory systems and species, which will help to uncover generalizable, and thus fundamental aspects of brain circuitry and organization,” explained Dr. Oberlaender.

Note: Material may have been edited for length and content. For further information, please contact the cited source:







8 steps successful security leaders follow to drive improvement [1164]

by System Administrator - Tuesday, March 17, 2015, 13:48

8 steps successful security leaders follow to drive improvement



A Big Year for Biotech: Bugs as Drugs, Precision Gene Editing, and Fake Food [1626]

by System Administrator - Sunday, January 3, 2016, 21:31




Speculations around whether biotech stocks are in a bubble remain undecided for the second year in a row. But one thing stands as indisputable—the field made massive progress during 2015, and faster than anticipated.

For those following the industry in recent years, this shouldn’t come as a surprise.

In fact, according to Adam Feuerstein at TheStreet, some twenty-eight biotech and drug stocks grew their market caps to $1 billion or more in 2014, and major headlines like, “Human Genome Sequencing Now Under $1,000 Per Person,” were strewn across the web last year.

But 2015 was a big year in biotech too.

Cheeky creations like BGI’s micropig made popular headlines, while CRISPR/Cas9 broke through into the mainstream and will forever mark 2015 as the year human genetic engineering became an everyday kind of conversation (and debate).

With the great leaps in biotech this year, we met with Singularity University’s biotech track chair, Raymond McCauley, to create a collaborative list of four categories where we saw progress in biotech.

While this list is not comprehensive (nor is meant to be), these are a few tangible milestones in biotech’s greatest hits of 2015.

Drag-and-drop genetic engineering is near

2015 will go down in the books as a historic year for genetic engineering. It seemed everyone was talking about, experimenting with, or improving the gene editing technology CRISPR-Cas9. CRISPR is a cheap, fast, and relatively simple way to edit DNA with precision. In contrast to prior, more intensive methods, CRISPR-Cas9 gives scientists a new level of control to manipulate DNA.
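As a toy illustration of the precision involved (not a lab protocol, and using a hypothetical, unrealistically short guide sequence for readability; real guides are about 20 nucleotides), Cas9 cuts only where a guide sequence is immediately followed by an "NGG" PAM motif:

```python
# Toy illustration of CRISPR-Cas9 targeting: Cas9 binds where the guide
# sequence is immediately followed by a PAM motif ("NGG", i.e. any base
# then two G's). This finds such target sites in a DNA string.
import re

def find_cas9_sites(dna, guide):
    # Lookahead requires the PAM right after the guide match.
    pattern = re.escape(guide) + r"(?=[ACGT]GG)"
    return [m.start() for m in re.finditer(pattern, dna)]

dna = "TTACGTTGACCTGGAAACGTTGACAGGCC"
# The guide occurs twice, but only the second copy has a PAM after it,
# so only one site is reported.
print(find_cas9_sites(dna, guide="CGTTGAC"))  # [17]
```

This sequence-matching step is only the targeting half of the story; the actual edit (cutting and repairing the DNA) is biochemistry the sketch doesn't model.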

CRISPR appears to be broadly useful in plants, animals, and even humans. And although the focus of this technology is on virtuous pursuits, such as eliminating genetic diseases like Huntington’s disease, there has been widespread concern about additional applications—engineering superhuman babies, to name one.

In early December, hundreds of scientists descended on Washington DC to, in part, debate the ethics of genetic engineering with CRISPR. But the debate isn’t over by any means. The greater ethical implications of altering DNA are complex, and as Wired writer Amy Maxmen puts it, CRISPR-Cas9 gives us “direct access to the source code of life,” and thus, the ability to re-write what it means to be human.

Surrounding debates on CRISPR’s use has also been a patent war for the technology.

In April 2014, Feng Zhang, a molecular biologist at the Broad Institute and MIT, earned the first patent for CRISPR-Cas9, but since then, one of the original creators of CRISPR at UC Berkeley, molecular biologist Jennifer Doudna, has been fighting back. Meanwhile, new CRISPR enzymes (beyond Cas9) were announced this fall.

All this took place as examples of the power of gene editing showed up in the headlines. In April, Chinese researcher Junjiu Huang and team used CRISPR to engineer human embryos. In November, the first-ever use of gene editing on a human patient cured a one-year-old girl of leukemia (using a different DNA-cutting enzyme called TALEN). And then there were those Chinese superdogs.

At large, we are only at the very beginning of what this technology will bring to medicine—and the debate on how we should best use our newfound power.

The FDA okays personal DNA testing  

Two years ago, 23andMe CEO Anne Wojcicki received a warning letter from the FDA stating that selling the company’s genetic testing service violated federal law. The FDA further warned against giving consumers so much information on their personal health without a physician consultation.

Two years later, 23andMe broke through the FDA deadlock and announced a somewhat scaled-back new product—Carrier Status Reports (priced at $199)—marking the first FDA-approved direct-to-consumer genetic test.

Whereas their original product tested for 254 diseases, the new version examines 36 autosomal recessive conditions and the “carrier status” of the individual being tested. The company now has a billion-dollar valuation and over a million clients served as of this year. The implications of affordable and FDA-approved consumer genetic testing are large, both for individuals and physicians.

“Fake food” became a real thing

Also known as food 2.0, a handful of companies using biotech methods to engineer new foods made headlines or hit the consumer market this year.

Most of us heard about that lab-grown hamburger that was priced in the six-figure range a few years back. Now, in vitro meat (animal meat grown in a lab using muscle tissue) is coming in at less than $12 a burger. Though the burger’s creator, Mark Post, says a consumer version is still a ways off, the effect could be significant. One of the biggest benefits? Less environmental impact: producing a single pound of conventional meat requires some 2,400 gallons of water.

Impossible Foods, meanwhile, is developing a “plant-based hamburger that bleeds,” writes The Economist, in addition to plant-based egg and dairy products. To give the burgers a meat-like taste, Impossible Foods extracts an iron-rich molecule from plants, though the company has been extremely private about its method.

Mexico-based startup Eat Limmo is tackling healthy, sustainable, and affordable food from a different angle. Its patent-pending process extracts nutrients from the seeds and peels of fruit waste, recycling the bits we typically discard into cost-effective, nutritious, and tasty (they say) ingredients for making food.

Microbiome drugs entered clinical trials

Talk about the microbiome isn’t anything new; however, this year Second Genome pushed a microbiome drug into clinical trials. One of the key conditions the company is addressing is inflammatory bowel disease (IBD).

Over five million people worldwide suffer from IBD, most of them with ulcerative colitis or Crohn's disease. Both conditions currently have less-than-ideal treatments, including harsh medications, steroids, and, in extreme cases, the removal of portions of patients’ intestines.

The first drug the company is pushing through clinical trials addresses inflammation and pain in people suffering from IBD and has completed a phase one placebo-controlled, double-blind trial.

Another startup to watch is uBiome, a microbiome sequencing service that sells personal microbiome kits direct-to-consumer (think 23andMe style for the microbiome).

An article in Wired last month claimed that uBiome is planning to announce a partnership with the CDC, where they’ll be evaluating stool samples from roughly 1,000 hospital patients. According to Wired, “uBiome and the CDC have set out to develop something like a 'Microbiome Disruption Index' to track how treatments, like antibiotics, alter gut microbes.”

This is a major accomplishment for a startup that began as a citizen science movement, and a big win for the whole community.

Alison E. Berman

Staff Writer at Singularity University
Alison tells the stories of purpose-driven leaders and is fascinated by various intersections of technology and society. When not keeping a finger on the pulse of all things Singularity University, you'll likely find Alison in the woods sipping coffee and reading philosophy (new book recommendations are welcome).
Logo KW

A First Big Step Toward Mapping The Human Brain [1234]

de System Administrator - lunes, 18 de mayo de 2015, 22:55

Electrophysiological data collected from neurons in the mouse visual cortex forms the basis for the Allen Institute's Cell Type Database.  ALLEN INSTITUTE FOR BRAIN SCIENCE

A First Big Step Toward Mapping The Human Brain

by Katie M. Palmer

It’s a long, hard road to understanding the human brain, and one of the first milestones in that journey is building a … database.

In the past few years, neuroscientists have embarked on several ambitious projects to make sense of the tangle of neurons that makes the human experience human. In Europe, Henry Markram—the Helen Cho to Elon Musk’s Tony Stark—is leading the Human Brain Project, a $1.3 billion plan to build a computer model of the brain. In the US, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative hopes to, in its own nebulous way, map the dynamic activity of the noggin’s 86 billion neurons.

Now, the Allen Institute for Brain Science, a key player in the BRAIN Initiative, has launched a database of neuronal cell types that serves as a first step toward a complete understanding of the brain. It’s the first milestone in the Institute’s 10-year MindScope plan, which aims to nail down how the visual system of a mouse works, starting by developing a functional taxonomy of all the different types of neurons in the brain.

“The big plan is to try to understand how the brain works,” says Lydia Ng, director of technology for the database. “Cell types are one of the building blocks of the brain, and by making a big model of how they’re put together, we can understand all the activity that goes into perceiving something and creating an action based on that perception.”

The Allen Cell Types Database, on its surface, doesn’t look like much. The first release includes information on just 240 neurons out of hundreds of thousands in the mouse visual cortex, with a focus on the electrophysiology of those individual cells: the electrical pulses that tell a neuron to fire, initiating a pattern of neural activation that results in perception and action. But understanding those single cells well enough to put them into larger categories will be crucial to understanding the brain as a whole—much like the periodic table was necessary to establish basic chemical principles.


A researcher uses a pipette to measure a cell’s electrical traces while under a microscope.  ALLEN INSTITUTE FOR BRAIN SCIENCE

Though researchers have come a long way in studying the brain, most of the information they have is big-picture, in the form of functional scans that show activity in brain areas, or small-scale, like the expression of neurotransmitters and their receptors in individual neurons. But the connection between those two scales—how billions of neurons firing together results in patterns of activation and behavior—is still unclear. Neuroscientists don’t even have a clear idea of just how many different cell types exist, which is crucial to understanding how they work together. “There was a lot of fundamental information that was missing,” CEO Allan Jones says about the database project. “So when we got started, we focused on what we call a reductionist approach, really trying to understand the parts.”

When it’s complete, the database will be the first in the world to collect information from individual cells along four basic but crucial variables: cell shape, gene expression, position in the brain, and electrical activity. So far, the Institute has tracked three of those variables, taking high-resolution images of dozens of electrically stimulated neurons with a light microscope, while carefully noting their position in the mouse’s cortex. “The important early findings are that there are indeed a finite number of classes,” says Jones. “We can logically bin them into classes of cells.”


Neurons in the database are mapped in 3D space.  Allen Institute for Brain Science

Next up, the Institute will accumulate gene expression data in individual cells by sequencing their RNA, and the overlap of all four variables ultimately will result in the complete cell type taxonomy. That classification system will help anatomists, physicists, and neuroscientists direct their study of neurons more efficiently and build more accurate models of cortical function. But it’s important to point out that the database isn’t merely important for its contents. How those contents were measured and aggregated also is crucial to the future of these big-picture brain mapping initiatives.
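The taxonomy-building step described above amounts to clustering cells by their measured features. Below is a minimal, hypothetical sketch of the idea using a toy k-means on two invented features (spike width and firing rate); the Allen Institute's actual taxonomy uses far more variables (morphology, gene expression, position, electrophysiology) and more sophisticated methods.

```python
def dist2(a, b):
    # squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    # component-wise mean of a list of feature vectors
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(len(pts[0])))

def kmeans(points, k, iters=20):
    # deterministic init: use the first k points as starting centroids
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        # keep the old centroid if a cluster ends up empty
        centroids = [mean(c) if c else centroids[j] for j, c in enumerate(clusters)]
    return centroids, clusters

# invented (spike_width_ms, firing_rate_hz) measurements for five cells
cells = [(0.4, 50.0), (0.45, 60.0), (1.1, 8.0), (1.0, 10.0), (0.5, 55.0)]
centroids, groups = kmeans(cells, k=2)
# the narrow, fast-spiking cells and the broad, slow-spiking cells
# separate into two putative classes
```

Run on these toy values, the fast-spiking and slow-spiking cells fall into separate groups, which is the flavor of "binning into classes" the quote describes.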

To create a unified model of the brain, neuroscientists must collect millions of individual data points from neurons in the brain. To start, they take electrical readings from living neurons by stabbing them with tiny, micron-wide pipettes. Those pipettes deliver current to the cells—enough to get them to fire—and record the cell’s electrical output. But there are many ways to set up those electrical readings, and to understand the neural system as a whole, neuroscientists need to use the same technique every time to make sure that the electrical traces can be compared from neuron to neuron.
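Why the "same technique every time" requirement matters can be sketched in a few lines: traces are only directly comparable when the stimulus protocol behind them is identical. The field names and values below are invented for illustration and are not the Allen Institute's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SweepProtocol:
    stimulus: str        # stimulus waveform, e.g. "long_square"
    amplitude_pa: float  # injected current, in picoamps
    duration_ms: float   # stimulus duration
    sampling_hz: int     # digitization rate

def comparable(a: SweepProtocol, b: SweepProtocol) -> bool:
    # Electrical traces can be compared neuron-to-neuron only when
    # every protocol parameter matches exactly.
    return a == b

lab_a = SweepProtocol("long_square", 150.0, 1000.0, 50000)
lab_b = SweepProtocol("long_square", 150.0, 1000.0, 50000)
lab_c = SweepProtocol("long_square", 150.0, 500.0, 50000)  # different duration
```

Here `lab_a` and `lab_b` produce comparable traces, while `lab_c` does not; standardizing on one protocol across institutions is what makes a shared database possible.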


Each of the colored dots shown is a single neuron detailed in the cell database, clustered in the mouse visual cortex.  Allen Institute for Brain Science

The Allen Institute, in collaboration with other major neuroscience hubs—Caltech, NYU School of Medicine, the Howard Hughes Medical Institute, and UC Berkeley—has made sure to use the same electrical tracing technique on all of the neurons studied so far (they call it “Neurodata without Borders”). And while the data for this first set of mouse neurons was primarily generated at the Institute, those shared techniques will make future work more applicable to the BRAIN Initiative’s larger goals. “In future releases, we’ll be working with other people to get data from other areas of the brain,” says Ng. “The idea is that if everyone does things in a very standard way, we’ll be able to incorporate that data seamlessly in one place.”

That will become increasingly important as the Institute continues mapping not just mouse neurons, but human ones. It’s easy to target specific regions in the mouse brain, getting electrical readings from neurons in a particular part of the visual cortex. It’s not so easy to get location-specific neurons from humans. “These cells actually come from patients—people who are having neurosurgery for epilepsy, or the removal of tumors,” says Ng. For a surgeon to get to the part of the brain that needs work, they must remove a certain amount of normal tissue that’s in the way, and it’s that tissue that neuroscientists are able to study.

Because they don’t get to choose exactly where in the brain that tissue comes from, scientists at Allen and other research institutes will have to be extra careful that their protocols for identifying the cells—by location, gene expression, electrical activity, and shape—are perfectly aligned, so none of those precious cells are wasted. All together, the discarded remnants of those human brains may be enough to reconstruct one from scratch.




Logo KW

A former Amazon engineer's startup wants to fix the worst thing about tech job interviews [1463]

de System Administrator - sábado, 26 de septiembre de 2015, 13:33

A former Amazon engineer's startup wants to fix the worst thing about tech job interviews

by Matt Weinberger

If there's one thing that techies hate, it's interviewing for a job in tech.

You'd think it'd be easier, right? After all, every company is soon to be a software company if you believe the Silicon Valley hype, which means that every company will soon need lots and lots more programmers.

The problem is that it's actually really hard to assess whether or not somebody is a good programmer.

Everybody thinks they're an expert, and it's often non-technical people, like HR, who get tapped to do the initial assessment.

And so two things tend to happen when you interview for a tech job: You either get completely insane "skill" tests on extremely basic knowledge that have little to do with the job at hand, or else they turn to brainteasers, riddles, and other weird stuff designed to gauge your personality as much as your set of skills. 

Regardless, the result is the same: Really excellent coders find themselves without a job, while recruiters hire people with the wrong skills for the job they're hired to do. 

That's a problem that HackerRank, a startup founded by ex-Amazon engineer and current CEO Vivek Ravisankar, wants to solve. 

On Amazon's Kindle team back in 2008, building the software that let people self-publish blogs to the e-reader store, Ravisankar had to conduct a lot of technical interviews. Over time, it became clear that the process was not great. 

"It's very hard to figure out how good a programmer is from looking at your resume," says Ravisankar, who left Amazon in 2009 to pursue the startup. 

HackerRank is a tool for automatically generating programming tests based on the skills a company wants to screen for, then scoring candidates with its own algorithm. 

"Your 5 can be my 1.2," Ravisankar said.  

It's a pretty simple idea, but it was profound enough to get HackerRank into the prestigious Y Combinator startup accelerator program. Today, one million developers use it to compete in challenges and gauge their own skills, HackerRank claims, while big companies like Amazon, Riot Games, and Evernote use it in their own recruiting efforts. So far, HackerRank has raised $12.4 million from venture capital firms like Khosla Ventures and Battery Ventures. 
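The "your 5 can be my 1.2" remark can be made concrete with a toy example. This is a hypothetical sketch, not HackerRank's actual algorithm: raw scores from different rubrics live on different scales, and even a simple min-max normalization to [0, 1] makes them comparable. All scales and scores below are invented.

```python
def normalize(score: float, scale_min: float, scale_max: float) -> float:
    # Map a raw rubric score onto a common [0, 1] scale.
    return (score - scale_min) / (scale_max - scale_min)

# Two companies score the same candidate on different rubrics:
company_a = normalize(5.0, 0.0, 10.0)   # a "5" on a 0-10 rubric
company_b = normalize(1.2, 0.0, 2.4)    # a "1.2" on a 0-2.4 rubric
# both describe the same relative skill level
```

Both normalized scores come out to 0.5, which is the point of a shared scoring service: a recruiter can compare candidates without knowing each test's internal scale.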

Now, HackerRank has announced integrations with the popular recruitment platforms Oracle Taleo, Greenhouse, and Jobvite, so that recruiters can instantly test job seekers and see their scores from right within those programs.

Again, simple, but profound — recruiters can see how good a candidate is straight from the software they're already using to find good future employees. 

There's an interesting side effect here, too. The traditional technical interview has the bad habit of turning away women and minority groups, for the simple reason that it favors people who code in a similar way to the interviewer.

A more objective score given by HackerRank could remove that barrier, ensuring candidates' applications live and die by their own merit.

"We're bringing in a huge change in recruiting," Ravisankar says.

Logo KW

A Maker’s Guide to the Metaverse [1361]

de System Administrator - miércoles, 26 de agosto de 2015, 22:50

A Maker’s Guide to the Metaverse

By Rod Furlan

The virtual reality renaissance that is now underway is creating much excitement surrounding the potential arrival of the “metaverse.” As I write, four great technology titans are competing to bring affordable head-mounted displays to market and usher VR into the mainstream [1].

While the term “metaverse” was coined by Neal Stephenson in his 1992 novel Snow Crash, current usage has diverged significantly from its original meaning. In popular contemporary culture, the metaverse is often described as the VR-based successor to the web.

Its recent surge in popularity is fueled by the expectation that the availability of affordable VR equipment will invariably lead to the creation of a network of virtual worlds that is similar to the web of today—but with virtual “places” instead of pages.

From a societal perspective, the metaverse is also associated with anticipation that as soon as VR technology reaches a sufficiently high level of quality, we will spend a significant portion of our private and professional lives in shared virtual spaces. Those spaces will be inherently more accommodating than natural reality, and this contextual malleability is expected to have a profound impact on our interpersonal relationships and overall cultural velocity.

Given its potential, it is no surprise the metaverse is a persistent topic in discussions about the future of virtual reality. In fact, it is difficult to find VR practitioners who can speculate about a plausible future where technological progress is unhindered and yet a metaverse is never created.

Still, there is little consensus on what a real-world implementation of the metaverse would be like.

Our research group, Lucidscape, was created to accelerate the advent of the metaverse by addressing the most challenging aspects of its implementation [2]. Our mandate is to provide the open source foundations for a practical, real-world metaverse that embraces freedom, lacks centralized control, and ultimately belongs to everyone.

In this article, which is part opinion and part pledge, I will share the tenets for what we perceive as an “ideal” implementation of the metaverse.  My goal is not to promote our work but to provoke thought and spark a conversation with the greater VR community about what we should expect from a real-world metaverse.

Tenet #1 – Creative freedom is not negotiable

“The first condition of progress is the removal of censorship.” – George Bernard Shaw

As the prerequisite technologies become available, the emergence of a proto-metaverse becomes all but inevitable. Nevertheless, it is too soon to know what kind of metaverse will arise—whether it will belong to everybody, embracing freedom, accessibility and personal expression without compromise, or be controlled and shaped by the will and whims of its creators.

To draw a relatable comparison, imagine a different world where the web functions akin to Apple’s iOS app store. In this world, all websites must be reviewed and approved for content before they are made available to users. In this impoverished version of the web, content perceived as disagreeable by its gatekeepers is suppressed, and users find themselves culturally stranded in a manicured walled garden.

While our (reasonably) free web has become a powerful driver of contemporary culture, I would argue that a content-controlled web would remain culturally impotent in comparison because censorship inevitably stifles creativity.

Some believe that censorship under the guise of curation is acceptable under a benevolent dictator. But let me again bring forth the common example of Apple, an adored company that has succumbed to the temptation of acting as a distorted moral compass for its customers by ruling that images of the human body are immoral while murder simulators are acceptable [3].

In contrast, the ideal metaverse allows everyone to add worlds to the network since there are no gatekeepers.

In it, human creativity is unshackled by the conventions and customs of our old world. Content creators are encouraged to explore the full spectrum of possible human experiences without the fear of censorship. Each world is a sovereign space that is entirely determined and controlled by its owner-creator.

Tenet #2 – Technological freedom is not negotiable either

“If the users do not control the program, the program controls the users” – Richard Stallman

The ideal metaverse is built atop a foundation of free software and open standards. This is of vital importance not only to enforce the right to creative freedom but to safeguard a nascent network from the risks of single-source solutions, attempts of control by litigation or even abuse by its own developers.

In the long term, a technologically free metaverse is also more likely to achieve a higher level of penetration and cultural relevance.

Tenet #3 – Dismantle the wall between creators and users

“Dismantle the wall between developers and users, to develop systems so easy to program that doing so would be a natural, simple aspect of use.” - The Xerox PARC Design Philosophy [4]

Most computer users have never written a program, and most web users have never created a website.

While creative and technological freedom are required, they are not sufficient to assure an inclusive metaverse if only a small portion of the user population can contribute to the network.

It is also necessary to break the wall that separates content creators from consumers by providing not only the means but also the incentives necessary to make each and every user a co-author in the metaverse network.

This empowerment begins with the outright rejection of the current “social contract” that delineates the submissive relationship between users and the computers they use. In the current model, user contributions are neither expected nor welcome which in turn greatly diminishes the value of becoming algorithmically literate unless you intend to become a professional in the field.

However, in a metaverse where virtual components are easily inspected and modified in real-time [5], everyone could become a tinkerer first, and a maker eventually.

Thus, every aspect of the user experience in the ideal metaverse is an invitation to learn, create or remix. Worlds can be quickly composed by linking to pre-existing parts made available by other authors. The source code and other building blocks for each shared part are readily available for study or tinkering. While each world remains sovereign, visitors are nonetheless encouraged to submit contributions that can be easily accepted and incorporated [6].


To illustrate the benefits of embracing users as co-authors, imagine that you have published a virtual model of Paris in the ideal metaverse. Over time, your simulation gains popularity and becomes a favorite destination for Paris lovers around the world. To your amazement, your visitors coalesce into a passionate community that submits frequent improvements to your virtual Paris, effectively becoming your co-authors. [7]

Most importantly, the basic tools of creation [8] of the ideal metaverse are accessible to children and those who are not technologically inclined. By design, these tools allow users to learn by experimentation thus blurring the lines between purposeful effort and creative play. [9]

Tenet #4 – Support for worlds of unprecedented scale

Virtual worlds of today, with a single notable exception [10], can only handle smaller-scale simulations with no more than several dozen participants in the same virtual space. To overcome this limitation, world creators sacrifice the experience of scale by partitioning worlds into a multitude of smaller instances where only a limited number of participants may interact with each other.

In contrast, the simulation infrastructure of the ideal metaverse supports worlds of unprecedented scale (e.g., whole populated cities, planets, solar systems) while handling millions of simultaneous users within the same shared virtual space.
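One common approach to that kind of scale is to shard a single seamless world by spatial region, so that each grid cell is owned by exactly one machine. The sketch below is hypothetical; the cell size and server count are invented for illustration, and a real engine would need far more (entity handoff at borders, interest management, fault tolerance).

```python
CELL_M = 100.0   # meters of world space per grid cell (invented)
SERVERS = 8      # machines in the simulation cluster (invented)

def owning_server(x: float, y: float) -> int:
    # All entities inside the same grid cell are simulated by the same
    # machine, so nearby participants interact without cross-server traffic.
    cell = (int(x // CELL_M), int(y // CELL_M))
    return hash(cell) % SERVERS

# Two positions in the same 100 m cell land on the same server:
same = owning_server(10.0, 10.0) == owning_server(99.0, 99.0)
```

The hard problems the essay alludes to live exactly at the seams of this picture: entities crossing cell boundaries, and interactions spanning cells owned by different machines.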

This is an incredibly difficult challenge because it requires maintaining a coherent state across a vast number of geographically separated machines in real-time. Even as networking technology advances, there are fundamental physical limits to possible improvements in total bandwidth and latency. [11]


Fulfilling this requirement will demand algorithmic breakthroughs and the creation of a computational fabric that allows an arbitrary number of machines to join forces to simulate large seamless worlds while gracefully compensating for unfavorable network circumstances.

Scalability of this magnitude is not something that can be easily bolted onto a pre-existing architecture. Instead, the creators of the ideal metaverse must take this requirement into consideration from the very beginning of development.

Tenet #5 – Support for nomadic computation

Of all tenets proposed in this essay, this is the one that is most easily contested because it is motivated not by strict necessity but by the desire to create a network that is more than the sum of its parts.

Just as the web required a new way of thinking about information, the ideal metaverse requires a new way of thinking about computation. One of the ways this requirement manifests itself is in our proposal for the support of safe nomadic computation.

In the ideal metaverse, a nomadic program is a fully autonomous participant with “rights” similar to those of a human user. Like any ordinary user, such programs can move from one server to the next on the network. To the underlying computational fabric, there is no meaningful distinction between human operators and nomadic programs, other than the fact that programs carry along their source code and internal state as they migrate to a new server.
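A minimal sketch of that migration model, with all names invented: the agent's internal state is serialized on the source server and rehydrated on the destination. A real system would also move (or reference) the agent's code and run it in a verified sandbox, which this toy version omits.

```python
import pickle

class Agent:
    """A toy 'nomadic' participant that remembers where it has been."""
    def __init__(self):
        self.visited = []

    def act(self, server_name):
        self.visited.append(server_name)

def migrate(agent, src, dst, servers):
    blob = pickle.dumps(agent)       # serialize internal state for transit
    servers[src].remove(agent)       # leave the source server
    arrived = pickle.loads(blob)     # rehydrate on the destination
    servers[dst].append(arrived)
    return arrived

servers = {"paris": [], "tokyo": []}
a = Agent()
servers["paris"].append(a)
a.act("paris")
a2 = migrate(a, "paris", "tokyo", servers)
a2.act("tokyo")                      # state traveled with the agent
```

The key property is that the agent's accumulated state (`visited`) survives the hop, which is what distinguishes a nomadic program from a stateless request.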

A powerful illustrative example of the potential for roaming programs is the approach taken by developer Hello Games in the development of “No Man’s Sky” [13].

By leveraging procedural content generation, a team of four artists has generated a virtual universe containing over 18 quintillion planets. Unable to visit and evaluate those worlds one by one, they resorted to creating a fleet of autonomous virtual robots to explore them. Each robot documents its journey and takes short videos of the most remarkable things it encounters to share with the developers.
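The trick behind numbers like 18 quintillion is that each world is a pure function of a seed: nothing is stored, everything is re-derived on demand. A hypothetical sketch, with invented attribute names and value ranges:

```python
import random

def planet(seed: int) -> dict:
    # A per-planet RNG seeded deterministically: the same seed always
    # regenerates exactly the same planet, so only the seed needs storing.
    rng = random.Random(seed)
    return {
        "radius_km": rng.randint(2000, 70000),
        "gravity_g": round(rng.uniform(0.1, 3.0), 2),
        "has_life": rng.random() < 0.05,
    }

# Revisiting a world is just recomputing it:
first_visit = planet(42)
second_visit = planet(42)   # identical to the first
```

This determinism is also what makes robot explorers feasible: a robot only needs to report a seed, and the developers can regenerate the exact world it saw.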

While Hello Games’ robot explorers are not nomadic programs, the same idea could be implemented in the metaverse on a much grander scale. For example, more than merely visiting worlds in the network, nomadic programs can also interact with other users or programs, improve the worlds visited [14] or even act as the autonomous surrogate for a user who is currently offline.

Moreover, the infrastructure required for supporting nomadic computation can also be leveraged to offload work to the computers utilized by human visitors. This is beneficial because thousands of end-user machines running complex logic can create much richer experiences than what would be possible with server-side resources exclusively. [15]

The road ahead

The five tenets of the ideal metaverse shared in this article can be succinctly distilled to just two adjectives — free and distributed. Those are precisely the attributes that made the web widely successful, and the core values the metaverse must embrace to achieve a similar level of cultural relevance.

However, there are still many significant challenges ahead for the creation of a real-world metaverse.

From a hardware perspective, nothing short of a collapse of technological progress stands in the way of the required technologies being made available. Computing and networking performance continue to increase exponentially [16], and affordable head-mounted displays are just around the corner. [17]


From the software standpoint, there are a few groups already mobilizing to fulfill the promise of the metaverse. I have previously introduced our team at Lucidscape, and I feel compelled to mention our amazing friends at High Fidelity, since they are also hard at work building their own vision of the metaverse. Similarly noteworthy are the efforts of Improbable; even though they are developing proprietary technology, their work could be useful to the metaverse in the long run [18].

Overall, recent progress has been encouraging. Last year our team at Lucidscape ran the first large-scale test of a massively parallel simulation engine where over ten million entities were simulated on a cluster composed of 6,608 processor cores [19]. Meanwhile, High Fidelity has already released the alpha version of their open source platform for shared virtual reality, and as I write this, there are 44 domains in their network, which can be earnestly described as a proto-metaverse.

Stress Test #1 - Illustrative Cluster

Where imagination becomes reality

Nothing is as uniquely human as the capacity to dream. Our ability to imagine a better world gives us both the desire and the resolve to reshape the reality around us.

Neuroscientist Gerald Edelman eloquently defined the human brain as the place “where matter becomes imagination” [20]. It is a wondrous concept which is about to be taken a step further as the metaverse establishes itself as the place “where imagination becomes reality.”

While in natural reality our capacity to imagine greatly outstrips our power to realize, virtual reality closes that gap and mainstream availability of VR will release an unfathomable amount of pent-up creative energy.

Our urge to colonize virtual worlds is easily demonstrated by success stories of video games that give users easy-to-use tools to create on their own. Media Molecule’s “Little Big Planet” receives over 5,000 submissions of user-created levels every day. Meanwhile, the number of Microsoft’s “Minecraft” worlds is estimated in the hundreds of millions.


While it is true that some of us may never find virtual reality to be as fulfilling as natural reality, ultimately we are not the ones who will realize the full potential of VR and the metaverse.

Today’s children will be the first “virtual natives.” Their malleable brains will adapt and evolve along with the virtual worlds they create and experience. Eventually they will learn to judge experiences exclusively on the amount of cognitive enrichment they offer, and not based on the arbitrary labels of “real” or “virtual.”

In time, the metaverse will become humanity’s shared virtual canvas. In it, we will meet to create new worlds and new experiences that bypass the constraints of natural reality. Its arrival will set in motion a grand social experiment that will ultimately reveal the true nature of our species. [21]

How will our culture and morality evolve when reality itself becomes negotiable? Will we create worlds that elevate the human spirit to new heights? Or will we use virtual reality to satisfy our darkest desires?

To the disappointment of both the eternally optimistic and relentlessly pessimistic, the answer is likely to be a complex mixture of both.

The real world metaverse will be just as full of beauty and contain just as much darkness as the web we have today. It will be an honest mosaic portrait of experiences that is fully representative of our true cognitive identity as a species.

The problem you did not know you had

I would like to conclude by asking you to imagine a line representing your personal trajectory through life’s many possibilities. This line connects your birth to each of your most salient moments up to the current point in time, and it represents the totality of your life’s experience.

Each decision you made along the way pruned the tree of possibilities of the branches that were incompatible with the sum of your previous choices. For each door you opened, countless others were sealed shut because such is the nature of a finite human existence — causality mercilessly limits how much you can do with the time you have.

I, for example, decided to specialize in computer science, so it is unlikely that I will ever become an astronaut. Since I am male and musically challenged, I will also never know what it is like to be a female J-pop singer, or a person of a different race, or someone born in a different century. No matter what I do, those experiences are inaccessible to me in natural reality.

The goal of this exercise is to bring to your attention that no matter how rich a life you have lived, the breadth of your journey represents an insignificantly narrow path through the spectrum of possible human experiences.

This is how natural reality limits you. It denies you access to the full spectrum of experiences your mind requires to achieve higher levels of wisdom, empathy and cognitive well-being.

This is the problem you did not know you had — and virtual reality is precisely the solution you did not know you needed.


[1] Namely: Oculus VR owned by Facebook, Google, Sony and HTC/Valve.

[2] Lucidscape is building a massively distributed computational fabric to power the metaverse.

[3] While I am not in any way opposed to violent video games, I want to make the point that by any reasonable moral scale, sex and nudity are inherently more acceptable than murder.

[4] Read more:

[5] This is conceptually similar to what was attempted by the developers of the Xerox Alto operating system because user changes are reflected immediately. See also [4]

[6] This mechanism would be conceptually similar to a "pull request":

[7] "Wikiworlds" would be a good cognitive shortcut for this co-authoring model: worlds that are like Wikipedia in that anyone can contribute.

[8] Emphasis is given to the fact that the basic tools must be accessible to non-technical users. Certainly, complex tools for power users are also of critical importance.

[9] This reflects my personal wish of seeing a whole generation of kids becoming algorithmically literate by "playing" on the metaverse.

[10] Eve Online

[11] Read more:

[13] Read more:

[14] Imagine an autonomous builder program that travels around the metaverse and uses procedural content generation to suggest improvements to the visited worlds.

[15] Another important aspect of supporting nomadic computation is to minimize the cross-talk between servers as autonomous agents roam the metaverse. Since the execution of nomadic programs is local to the server it is currently visiting, a great deal of network bandwidth can be spared.

[16] Read more:

[17] Coming soon: Oculus CV, Sony Morpheus, Valve HTC Vive

[18] I would like to take this opportunity to invite the great minds at Improbable to consider building a free metaverse alongside Lucidscape and High Fidelity instead of limiting themselves to the scope of the video game industry.

[19] Read more:

[20] “How Matter Becomes Imagination” is the sub-title of “A Universe of Consciousness” by Nobel Prize winner Gerald Edelman and neuroscientist Giulio Tononi.

[21] It is this author’s opinion that technology does not change us, it merely enables us to act the way we wanted to all along.

Rod Furlan is an artificial intelligence researcher, Singularity University alumnus and the co-founder of Lucidscape, a virtual reality research lab currently working on a new kind of massively distributed 3D simulation engine to power a vast network of interconnected virtual worlds. Read more here and follow him @rfurlan.

To get updates on Future of Virtual Reality posts, sign up here.

Image Credit: mediamolecule


A Movable Defense [1043]

by System Administrator - Tuesday, January 6, 2015, 12:46

A Movable Defense

In the evolutionary arms race between pathogens and hosts, genetic elements known as transposons are regularly recruited as assault weapons for cellular defense.

By Eugene V. Koonin and Mart Krupovic

JUMPERS: Transposable elements, which make up as much as 90 percent of the corn genome and are responsible for the variation in kernel color, may also be at the root of diverse immune defenses.


Researchers now recognize that genetic material, once simplified into neat organismal packages, is not limited to individuals or even species. Viruses that pack genetic material into stable infectious particles can incorporate some or all of their genes into their hosts’ genomes, allowing remnants of infection to remain even after the viruses themselves have moved on. On a smaller scale, naked genetic elements such as bacterial plasmids and transposons, or jumping genes, often shuttle around and between genomes. It seems that the entire history of life is an incessant game of tug-of-war between such mobile genetic elements (MGEs) and their cellular hosts.

MGEs pervade the biosphere. In all studied habitats, from the oceans to soil to the human intestine, the number of detectable virus particles, primarily bacteriophages, exceeds the number of cells at least tenfold, and maybe much more. Furthermore, MGEs and their remnants constitute large portions of many organisms’ genomes—as much as two-thirds of the human genome and up to 90 percent in plants such as corn.


MOBILE DNA: A false-color transmission electron micrograph of a transposon, a segment of DNA that can move around chromosomes and genomes


Despite their ubiquity and prevalence in diverse genomes, MGEs have traditionally been considered nonfunctional junk DNA. Starting in the middle of the 20th century, through the pioneering work of Barbara McClintock in plants, and over the following decades in a widening range of organisms, researchers began to uncover clues that MGE sequences are recruited for a variety of cellular functions, in particular for the regulation of gene expression. More-recent work reveals that many organisms also use MGEs for a more specialized and sophisticated function, one that capitalizes on the ability of these elements to move around genomes, modifying the DNA sequence in the process. Transposons seem to have been pivotal contributors to the evolution of adaptive immunity both in vertebrates and in microbes, which were only recently discovered to actually have a form of adaptive immunity—namely, the CRISPR-Cas (clustered regularly interspaced short palindromic repeats–CRISPR-associated genes) system that has triggered the development of a new generation of genome-manipulation tools.

Multiple defense systems have evolved in nearly all cellular organisms, from bacteria to mammals. Taking a closer look at these systems, we find that the evolution of these defense mechanisms depended, in large part, on MGEs—those same elements that are themselves targets of host immune defense.

Layers of defense

As cheaters in the game of life, stealing resources from their hosts, parasites have the potential to cause the collapse of entire communities, killing their hosts before moving on or dying themselves. But hosts are far from defenseless. The diversity and sophistication of immune systems are striking: their functions range from immediate and nonspecific innate responses to exquisitely choreographed adaptive responses that result in lifelong immune memory after an initial pathogen attack.1

Transposons seem to have been pivotal contributors to the evolution of adaptive immunity both in vertebrates and in microbes.

Over the last two decades or so, it has become clear that nearly all organisms possess multiple mechanisms of innate immunity.2 Toll-like receptors (TLRs), common to most animals, recognize conserved molecules from microbial pathogens and activate the appropriate components of the immune system upon invasion. Even more widespread and ancient is RNA interference (RNAi), a powerful defense system that employs RNA guides, known as small interfering RNAs (siRNAs), to destroy invading nucleic acids, primarily those of RNA viruses. Conceptually, the biological function of siRNAs is analogous to that of TLRs: an innate immune response to a broad class of pathogens.

Prokaryotes possess their own suite of innate immune mechanisms, including endonucleases that cleave invader DNA at specific sites and enzymes called methylases that modify those same sites in the prokaryotes’ own genetic material to shield it from cleavage, a strategy known as restriction modification (RM).3 If overwhelmed by pathogens, many prokaryotic cells will undergo programmed cell death or go into dormancy, thereby preventing the spread of the pathogen within the organism or population. In particular, infected bacterial or archaeal cells can activate toxin-antitoxin (TA) systems to induce dormancy or cell death. Normally, the toxin protein is complexed with the antitoxin and thus inactivated. However, under stress, the antitoxin is degraded, unleashing the toxin to harm the cell.
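The self/non-self logic of restriction modification can be sketched as a toy model. The recognition sequence, the example genomes, and the survival rule below are illustrative assumptions rather than details from the article; real RM systems act on double-stranded DNA with far richer enzymology.

```python
# Toy model of restriction modification (RM). A methylase marks the
# recognition sites in the host's own genome; the endonuclease cleaves
# any DNA carrying an unmarked site. SITE and the sequences here are
# illustrative, not taken from the article.

SITE = "GAATTC"  # hypothetical recognition sequence

def find_sites(dna):
    """Positions where the recognition sequence occurs."""
    return [i for i in range(len(dna) - len(SITE) + 1)
            if dna[i:i + len(SITE)] == SITE]

def survives(dna, methylated):
    """True if every recognition site is methylated (DNA escapes cleavage)."""
    return all(i in methylated for i in find_sites(dna))

host = "ATGAATTCCGTAGAATTCAA"
host_marks = set(find_sites(host))   # methylase shields self sites
invader = "TTGAATTCAAAT"             # foreign DNA arrives unmodified

print(survives(host, host_marks))    # True: host genome is protected
print(survives(invader, set()))      # False: invader is cleaved
```

The asymmetry is entirely in the methylation marks: the same endonuclease rule is applied to both molecules, but only self DNA carries the protective modification.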

Many viruses that infect microbes also encode RM and TA modules.4 These viruses are, in effect, a distinct variety of MGEs that sometimes have highly complex genomes. Viruses use RM systems for the very same purpose as their prokaryotic hosts: the methylase modifies the viral genome, whereas the endonucleases degrade any unmodified genomes in the host cell, thereby providing nucleotides for the synthesis of new copies of the viral genome. And the TA system can ensure retention of a plasmid or virus within the cell. The toxin and antitoxin proteins dramatically differ in their vulnerability to proteolytic enzymes that are always present in the cell: the toxin is stable whereas the antitoxin is labile. This does not matter as long as both proteins are continuously produced. However, if both genes are lost (for example, during cell division), the antitoxin rapidly degrades, and the remaining amount of the toxin is sufficient to halt the biosynthetic activity of the cell and hence kill it or at least render it dormant. A plasmid or virus that carries a TA module within its genome thus implants a self-destructing mechanism in its host that is activated if the MGE is lost. (See illustration.)
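The retention logic of a TA module can be caricatured with a few lines of bookkeeping. All rates, pool sizes, and the "free toxin" readout below are invented for illustration; only the qualitative asymmetry (stable toxin, labile antitoxin) reflects the mechanism described above.

```python
# Toy bookkeeping for a toxin-antitoxin (TA) module. The toxin is
# sequestered in a stable complex with its antitoxin; proteolysis
# continually destroys antitoxin, and only while the TA genes are
# present is fresh antitoxin made to re-neutralize the freed toxin.
# All rates and pool sizes are invented for illustration.

def free_toxin_after(plasmid_lost_at, steps=30):
    complexed, free_toxin, free_antitoxin = 10.0, 0.0, 0.0
    for t in range(steps):
        released = 0.3 * complexed        # proteases degrade antitoxin,
        complexed -= released             # liberating the stable toxin
        free_toxin += released
        if t < plasmid_lost_at:           # genes present: resynthesize
            free_antitoxin += released
        rebound = min(free_toxin, free_antitoxin)
        complexed += rebound              # fresh antitoxin re-binds toxin
        free_toxin -= rebound
        free_antitoxin -= rebound
    return free_toxin

print(free_toxin_after(plasmid_lost_at=999))  # 0.0: plasmid kept, cell fine
print(free_toxin_after(plasmid_lost_at=5))    # ~10: toxin unleashed
```

As long as the genes are retained, resynthesis exactly compensates degradation; once they are lost, the antitoxin pool collapses within a few steps and the accumulated free toxin models the self-destruct trigger.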

When an MGE inserts into the host genome, it inevitably modifies that genome, typically using an MGE-encoded recombinase (also known as integrase or transposase) as a breaking-and-entering tool. Speaking in deliberately anthropomorphic terms, the MGEs do so for their own selfish purposes, to ensure their propagation within the host genome. However, given the ubiquity of MGEs across cellular life forms, it seems extremely unlikely that host organisms would not recruit at least some of these naturally evolved genome manipulation tools in order to exploit their remarkable capacities for their own purposes. Immune memory that involves genome manipulation is arguably the most obvious utility of these tools, and in retrospect, it is not surprising that unrelated transposons and their recombinases appear to have made key contributions to the origin of both animal and prokaryotic forms of adaptive immunity.

Guns for hire


DISRUPTING COLOR: The variations in color seen in this dahlia “flower” (actually a cluster of small individual flowers, or florets) can be caused by transposon-induced mutations.


Until recently, prokaryotes had been thought to entirely lack the sort of adaptive immunity that dominates defense against parasites in vertebrates. This view has been overturned in the most dramatic fashion by the discovery of CRISPR-Cas, an RNAi-like defense system found to be present in most archaea and many bacteria studied to date.5 In 2005, Francisco Mójica of the University of Alicante in Spain and colleagues,6 and independently, Dusko Ehrlich of the Pasteur Institute in Paris,7 discovered that some of the unique sequences inserted between CRISPR repeats, known as spacers, were identical to pieces of bacteriophage or plasmid genomes. Combined with a detailed analysis of the predicted functions of Cas proteins, this discovery led one of us (Koonin) and his team to propose in 2006 that CRISPR-Cas functioned as a form of prokaryotic adaptive immunity, with memory of past infections stored in the genome within the CRISPR “cassettes”—clusters of short direct repeats, interspersed with similar-size nonrepetitive spacers, derived from various MGEs—and to develop a detailed hypothesis about the mechanism of such immunity.8

Subsequent experiments from Philippe Horvath’s and Rodolphe Barrangou’s groups at Danisco Corporation,9 along with several other studies that followed in rapid succession, supported this hypothesis. (See “There’s CRISPR in Your Yogurt,” here.) It has been shown that CRISPR-Cas indeed functions by incorporating fragments of foreign bacteriophage or plasmid DNA into CRISPR cassettes, then using the transcripts of these unique spacers as guide RNAs to recognize and cleave the genomes of repeat invaders. (See illustration.) A key feature of CRISPR-Cas systems is their ability to transmit extremely efficient, specific immunity across many thousands of generations. Thus, CRISPR-Cas is not only a bona fide adaptive immunity system, but also a genuine machine of Lamarckian evolution, whereby an environmental challenge—a virus or plasmid, in this case—directly causes a specific change in the genome that results in an adaptation that is passed on to subsequent generations.10
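The acquire-then-interfere loop described above can be sketched in a few lines. Spacer length, the example sequences, and the exact-substring match below are simplifying assumptions; real interference additionally involves PAM recognition, crRNA processing, and tolerated mismatches.

```python
# Toy sketch of CRISPR-Cas adaptive memory. A fragment of the invader's
# genome is stored as a "spacer" in the cassette; on reinfection, spacer
# transcripts guide cleavage of any DNA containing a stored fragment.
# Spacer length, sequences, and exact matching are simplifications.

SPACER_LEN = 8

class CrisprCassette:
    def __init__(self):
        self.spacers = []        # heritable record of past infections

    def acquire(self, invader_dna, position):
        """Adaptation: store a protospacer-derived fragment as a spacer."""
        self.spacers.append(invader_dna[position:position + SPACER_LEN])

    def interferes(self, dna):
        """Interference: guide RNAs recognize (and cleave) repeat invaders."""
        return any(spacer in dna for spacer in self.spacers)

cassette = CrisprCassette()
phage = "ATGCCGTTACGGATTACC"
cassette.acquire(phage, position=4)            # survive the first encounter
print(cassette.interferes(phage))              # True: repeat invader cleaved
print(cassette.interferes("GGGGTTTTCCCCAAAA")) # False: unrelated DNA ignored
```

Because the spacer list is part of the (simulated) genome, the acquired immunity is inherited by every descendant of the cell, which is what makes the mechanism Lamarckian in character.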

When a mobile genetic element (MGE) inserts into the host genome, it inevitably modifies that genome, typically using an MGE-encoded recombinase as a breaking-and-entering tool.

A torrent of comparative genomic, structural, and experimental studies has characterized the extremely diverse CRISPR-Cas systems according to the suites of Cas proteins involved in CRISPR transcript processing and target recognition.5,11 While Type I and Type III systems employ elaborate protein complexes that consist of multiple Cas proteins, Type II systems perform all the necessary reactions with a single large protein known as Cas9. These findings opened the door for the straightforward development of a new generation of genome-editing technology. Cas9-based tools are already used by numerous laboratories all over the world for genome engineering that is much faster, more flexible, and more versatile than any methodology that was available in the pre-CRISPR era.12

And it seems that humans are not the only species to have stolen a page from the CRISPR book: viruses have done the same. For example, a bacteriophage that infects pathogenic Vibrio cholerae carries its own adaptable CRISPR-Cas system and deploys it against another MGE that resides within the host genome.13 Upon phage infection, that rival MGE, called a phage inducible chromosomal island-like element (PLE), excises itself from the cellular genome and inhibits phage production. But at the same time, the bacteriophage-encoded CRISPR-Cas system targets PLE for destruction, ensuring successful phage propagation.

Consequently, in prokaryotes, all defense systems appear to be guns for hire that work for the highest bidder. Sometimes it is impossible to know with any certainty in which context, cellular or MGE, different defense mechanisms first emerged.

Transposon origins of adaptive immunity


THE ULTIMATE MOBILE ELEMENT: Bacteriophages converging on an E. coli cell can be seen injecting their genetic material (blue-green threads) into the bacterium. Often, viral DNA will be taken up by the host genome, where it can be passed through bacterial generations.


Recent evidence from our groups supports an MGE origin of the CRISPR-Cas systems. The function of Cas1—the key enzyme of CRISPR-Cas that is responsible for the acquisition of foreign DNA and its insertion into spacers within CRISPR cassettes—bears an uncanny resemblance to the recombinase activity of diverse MGEs, even though Cas1 does not belong to any of the known recombinase families. As a virtually ubiquitous component of CRISPR-Cas systems, Cas1 was likely central to the emergence of CRISPR-Cas immunity.

During a recent exploration of archaeal DNA dark matter—clusters of uncharacterized genes in sequenced genomes—we unexpectedly discovered a novel superfamily of transposon-like MGEs that could hold the key to the origin of Cas1.14 These previously unnoticed transposons contain inverted repeats at both ends, just like many other transposons, but their gene content is unusual. The new transposon superfamily is present in both archaeal and bacterial genomes and is highly polymorphic (different members contain from 6 to about 20 genes), with only two genes shared by all identified representatives. One of these conserved genes encodes a DNA polymerase, indicating that these transposons supply the key protein for their own replication. While diverse eukaryotes harbor self-synthesizing transposons of the Polinton or Maverick families, this is the first example in prokaryotes. But it was the second conserved protein that held the biggest surprise: it was none other than a homolog of Cas1, the key protein of the CRISPR-Cas systems.

We dubbed this new transposon superfamily “casposons” and proposed that, in this context, Cas1 functions as a recombinase. In the phylogenetic tree of Cas1, the casposons occupy a basal position, suggesting that they played a key role in the origin of prokaryotic adaptive immunity.

In vertebrates, adaptive immunity acts in a completely different manner than in prokaryotes and is based on the acquisition of pathogen-specific T- and B-lymphocyte antigen receptors during the lifetime of the organism. The vast repertoire of immunoglobulin receptors is generated from a small number of genes via dedicated diversification processes known as V (variable), D (diversity), and J (joining) segment (V(D)J) recombination and hypermutation. (See illustration.) In a striking analogy to CRISPR-Cas, vertebrate adaptive immunity also seems to have a transposon at its origin. V(D)J recombination is mediated by the RAG1-RAG2 recombinase complex. The recombinase domain of RAG1 derives from the recombinases of a distinct group of animal transposons known as Transibs.15 The recombination signal sequences of the immunoglobulin genes, which are recognized by the RAG1-RAG2 recombinase and are necessary for bringing together the V, D, and J gene segments, also appear to have evolved via Transib insertion.
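The combinatorial payoff of V(D)J joining is easy to estimate. The segment counts below are rough, commonly cited approximations for human heavy and kappa light chains, used only to convey scale; junctional insertions and deletions plus hypermutation multiply the diversity far beyond these numbers.

```python
# Back-of-the-envelope combinatorics for V(D)J recombination, with
# approximate human segment counts (assumed here for illustration).

from itertools import product

heavy = 40 * 23 * 6      # ~40 V, ~23 D, ~6 J heavy-chain segments
light = 40 * 5           # ~40 V, ~5 J kappa segments (no D in light chains)
paired = heavy * light   # independent heavy/light pairing

print(heavy)             # 5520 heavy-chain combinations
print(paired)            # over a million receptor pairings before mutation

# The same joining logic, enumerated explicitly for a tiny segment pool:
V, D, J = ["V1", "V2"], ["D1"], ["J1", "J2"]
receptors = ["-".join(segments) for segments in product(V, D, J)]
print(receptors)
```

A small number of germline genes thus yields a vast receptor repertoire, which is the point of recruiting a recombinase rather than encoding each receptor separately.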

The two independent origins of adaptive immune systems in prokaryotes and eukaryotes involving unrelated MGEs show that, in the battle for survival, organisms welcome all useful molecular inventions irrespective of who the original inventor was. Indeed, the origin of CRISPR-Cas systems from prokaryotic casposons and vertebrate V(D)J recombination from Transib transposons might appear paradoxical given that MGEs are primary targets of immune systems. However, considering the omnipresence and diversity of MGEs, it seems likely that even more Lamarckian-type mechanisms have, throughout the history of life, directed genomic changes in the name of host defense.16

Moreover, the genome-engineering capacity of immune systems provides almost unlimited potential for the development of experimental tools for genome manipulation and other applications. The utility of antibodies as tools for protein detection and of RM enzymes for specific fragmentation of DNA molecules has been central to the progress of biology for decades. Recently, CRISPR-Cas systems have been added to that toolkit as, arguably, the most promising of the new generation of molecular biological methods. It is difficult to predict what opportunities for genome engineering could be hidden within still unknown or poorly characterized defense systems.



Eugene V. Koonin is a group leader at the National Library of Medicine’s National Center for Biotechnology Information in Bethesda, Maryland. Mart Krupovic is a research scientist at the Institut Pasteur in Paris, France.


  1. T. Boehm, “Design principles of adaptive immune systems,” Nat Rev Immunol, 11:307-17, 2011.
  2. R. Medzhitov, “Approaching the asymptote: 20 years later,” Immunity, 30:766-75, 2009.
  3. K.S. Makarova et al., “Comparative genomics of defense systems in archaea and bacteria,” Nucleic Acids Res, 41:4360-77, 2013.
  4. J.E. Samson et al., “Revenge of the phages: defeating bacterial defences,” Nat Rev Microbiol, 11:675-87, 2013.
  5. R. Barrangou, L.A. Marraffini, “CRISPR-Cas systems: Prokaryotes upgrade to adaptive immunity,” Mol Cell, 54:234-44, 2014.
  6. F.J. Mójica et al., “Intervening sequences of regularly spaced prokaryotic repeats derive from foreign genetic elements,” J Mol Evol, 60:174-82, 2005.
  7. A. Bolotin et al., “Clustered regularly interspaced short palindrome repeats (CRISPRs) have spacers of extrachromosomal origin,” Microbiology, 151:2551-61, 2005.
  8. K.S. Makarova et al., “A putative RNA-interference-based immune system in prokaryotes: computational analysis of the predicted enzymatic machinery, functional analogies with eukaryotic RNAi, and hypothetical mechanisms of action,” Biol Direct, 1:7, 2006.
  9. R. Barrangou et al., “CRISPR provides acquired resistance against viruses in prokaryotes,” Science, 315:1709-12, 2007.
  10. E.V. Koonin, Y.I. Wolf, “Is evolution Darwinian or/and Lamarckian?” Biol Direct, 4:42, 2009.
  11. K.S. Makarova et al., “Evolution and classification of the CRISPR-Cas systems,” Nat Rev Microbiol, 9:467-77, 2011.
  12. H. Kim, J.S. Kim, “A guide to genome engineering with programmable nucleases,” Nat Rev Genet, 15:321-34, 2014.
  13. K.D. Seed et al., “A bacteriophage encodes its own CRISPR/Cas adaptive response to evade host innate immunity,” Nature, 494:489-91, 2013.
  14. M. Krupovic et al., “Casposons: a new superfamily of self-synthesizing DNA transposons at the origin of prokaryotic CRISPR-Cas immunity,” BMC Biology, 12:36, 2014.
  15. V.V. Kapitonov, J. Jurka, “RAG1 core and V(D)J recombination signal sequences were derived from Transib transposons,” PLOS Biol, 3:e181, 2005.
  16. E.V. Koonin, M. Krupovic, “Evolution of adaptive immunity from transposable elements combined with innate immune systems,” Nat Rev Genet, doi:10.1038/nrg3859, December 9, 2014.


