A Big Year for Biotech: Bugs as Drugs, Precision Gene Editing, and Fake Food 
Speculation about whether biotech stocks are in a bubble remains unresolved for the second year in a row. But one thing stands as indisputable—the field made massive progress during 2015, and faster than anticipated.
For those following the industry in recent years, this shouldn’t come as a surprise.
In fact, according to Adam Feuerstein at The Street, some twenty-eight biotech and drug stocks grew their market caps to $1 billion or more in 2014, and headlines like “Human Genome Sequencing Now Under $1,000 Per Person” were strewn across the web last year.
But 2015 was a big year in biotech too.
Cheeky creations like BGI’s micropig made popular headlines, while CRISPR/Cas9 broke through into the mainstream and will forever mark 2015 as a year that human genetic engineering became an everyday kind of conversation (and debate).
With the great leaps in biotech this year, we met with Singularity University’s biotech track chair, Raymond McCauley, to create a collaborative list of four areas where biotech made notable progress.
While this list is not comprehensive (nor is meant to be), these are a few tangible milestones in biotech’s greatest hits of 2015.
Drag-and-drop genetic engineering is near
2015 will go down in the books as a historic year for genetic engineering. It seemed everyone was talking about, experimenting with, or improving the gene editing technology CRISPR-Cas9. CRISPR is a cheap, fast, and relatively simple way to edit DNA with precision. In contrast to prior, more intensive methods, CRISPR-Cas9 gives scientists a new level of control to manipulate DNA.
CRISPR appears to be broadly useful in plants, animals, and even humans. And although the focus of this technology is on virtuous pursuits, such as eliminating genetic diseases like Huntington’s disease, there has been widespread concern about additional applications—engineering superhuman babies, to name one.
In early December, hundreds of scientists descended on Washington DC to, in part, debate the ethics of genetic engineering with CRISPR. But the debate isn’t over by any means. The greater ethical implications of altering DNA are complex, and as Wired writer Amy Maxmen puts it, CRISPR-Cas9 gives us “direct access to the source code of life,” and thus, the ability to re-write what it means to be human.
Alongside the debates over CRISPR’s use, there has also been a patent war over the technology.
In April 2014, Feng Zhang, a molecular biologist at the Broad Institute and MIT, earned the first patent for CRISPR-Cas9, but since then Jennifer Doudna, a molecular biologist and one of CRISPR’s original creators at UC Berkeley, has been fighting back. Meanwhile, new CRISPR enzymes (beyond Cas9) were announced this fall.
All this took place as examples of the power of gene editing showed up in the headlines. In April, Chinese researcher Junjiu Huang and team used CRISPR to engineer human embryos. In November, the first-ever use of gene editing on a human patient cured a one-year-old girl of leukemia (using a different DNA-cutting enzyme called TALEN). And then there were those Chinese superdogs.
At large, we are only at the very beginning of what this technology will bring to medicine—and the debate on how we should best use our newfound power.
The FDA okays personal DNA testing
Two years ago, 23andMe CEO Anne Wojcicki received a warning letter from the FDA stating that selling the company’s genetic testing service violated federal law. The FDA further warned against giving consumers so much information on their personal health without a physician consultation.
Two years later, 23andMe broke through the FDA deadlock and announced its somewhat scaled-back new product—Carrier Status Reports (priced at $199)—marking the first FDA-approved direct-to-consumer genetic test.
Whereas their original product tested for 254 diseases, the new version examines 36 autosomal recessive conditions and the “carrier status” of the individual being tested. The company now has a billion-dollar valuation and over a million clients served as of this year. The implications of affordable and FDA-approved consumer genetic testing are large, both for individuals and physicians.
“Fake food” became a real thing
Also known as food 2.0, a handful of companies using biotech methods to engineer new foods made headlines or hit the consumer market this year.
Most of us heard about that lab-grown hamburger that was priced in the six-figure range a few years back. Now, in vitro meat (animal meat grown in a lab using muscle tissue) is coming in at less than $12 a burger. Though the burger’s creator, Mark Post, says a consumer version is still a ways off, the effect could be significant. One of the biggest benefits? Less environmental impact from meat products—which require 2,400 gallons of water to produce just one pound of meat.
Impossible Foods, meanwhile, is developing a “plant-based hamburger that bleeds,” writes The Economist, in addition to plant-based egg and dairy products. To give the burgers a meat-like taste, Impossible Foods extracts an iron-rich molecule from plants, though the company has been extremely private about its method.
Mexico-based startup Eat Limmo is tackling healthy, sustainable, and affordable food from a different angle. Its patent-pending process extracts nutrients from the seeds and peels of fruit waste, recycling the bits we typically discard into cost-effective, nutritious, and tasty (they say) ingredients for making food.
Microbiome drugs entered clinical trials
Talk about the microbiome isn’t anything new; however, this year Second Genome pushed a microbiome drug into clinical trials. One of the key conditions the company is addressing is inflammatory bowel disease (IBD).
Over five million people suffer from IBD worldwide, and of those, a majority have ulcerative colitis or Crohn's disease. Both conditions currently have less-than-ideal treatment options, including harsh medications, steroids, and in extreme cases, the removal of portions of patients’ intestines.
The first drug the company is pushing through clinical trials addresses inflammation and pain in people suffering from IBD and has completed a placebo-controlled, double-blind phase one trial.
Another startup to watch is uBiome, a microbiome sequencing service that sells personal microbiome kits direct-to-consumer (think 23andMe style for the microbiome).
An article in Wired last month claimed that uBiome is planning to announce a partnership with the CDC, where they’ll be evaluating stool samples from roughly 1,000 hospital patients. According to Wired, “uBiome and the CDC have set out to develop something like a 'Microbiome Disruption Index' to track how treatments, like antibiotics, alter gut microbes.”
This is not only a major accomplishment for the startup—which began as a citizen science movement—but also a big win for the whole community.
Staff Writer at Singularity University
Alison tells the stories of purpose-driven leaders and is fascinated by various intersections of technology and society. When not keeping a finger on the pulse of all things Singularity University, you'll likely find Alison in the woods sipping coffee and reading philosophy (new book recommendations are welcome).
A First Big Step Toward Mapping The Human Brain 
Electrophysiological data collected from neurons in the mouse visual cortex forms the basis for the Allen Institute's Cell Type Database. ALLEN INSTITUTE FOR BRAIN SCIENCE
It’s a long, hard road to understanding the human brain, and one of the first milestones in that journey is building a … database.
In the past few years, neuroscientists have embarked on several ambitious projects to make sense of the tangle of neurons that makes the human experience human. In Europe, Henry Markram—the Helen Cho to Elon Musk’s Tony Stark—is leading the Human Brain Project, a $1.3 billion plan to build a computer model of the brain. In the US, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative hopes to, in its own nebulous way, map the dynamic activity of the noggin’s 86 billion neurons.
Now, the Allen Institute for Brain Science, a key player in the BRAIN Initiative, has launched a database of neuronal cell types that serves as a first step toward a complete understanding of the brain. It’s the first milestone in the Institute’s 10-year MindScope plan, which aims to nail down how the visual system of a mouse works, starting by developing a functional taxonomy of all the different types of neurons in the brain.
“The big plan is to try to understand how the brain works,” says Lydia Ng, director of technology for the database. “Cell types are one of the building blocks of the brain, and by making a big model of how they’re put together, we can understand all the activity that goes into perceiving something and creating an action based on that perception.”
The Allen Cell Types Database, on its surface, doesn’t look like much. The first release includes information on just 240 neurons out of hundreds of thousands in the mouse visual cortex, with a focus on the electrophysiology of those individual cells: the electrical pulses that tell a neuron to fire, initiating a pattern of neural activation that results in perception and action. But understanding those single cells well enough to put them into larger categories will be crucial to understanding the brain as a whole—much like the periodic table was necessary to establish basic chemical principles.
A researcher uses a pipette to measure a cell’s electrical traces under a microscope. ALLEN INSTITUTE FOR BRAIN SCIENCE
Though researchers have come a long way in studying the brain, most of the information they have is big-picture, in the form of functional scans that show activity in brain areas, or small-scale, like the expression of neurotransmitters and their receptors in individual neurons. But the connection between those two scales—how billions of neurons firing together results in patterns of activation and behavior—is still unclear. Neuroscientists don’t even have a clear idea of just how many different cell types exist, which is crucial to understanding how they work together. “There was a lot of fundamental information that was missing,” CEO Allan Jones says about the database project. “So when we got started, we focused on what we call a reductionist approach, really trying to understand the parts.”
When it’s complete, the database will be the first in the world to collect information from individual cells along four basic but crucial variables: cell shape, gene expression, position in the brain, and electrical activity. So far, the Institute has tracked three of those variables, taking high-resolution images of dozens of electrically stimulated neurons with a light microscope, while carefully noting their position in the mouse’s cortex. “The important early findings are that there are indeed a finite number of classes,” says Jones. “We can logically bin them into classes of cells.”
Neurons in the database are mapped in 3D space. Allen Institute for Brain Science
Next up, the Institute will accumulate gene expression data in individual cells by sequencing their RNA, and the overlap of all four variables ultimately will result in the complete cell type taxonomy. That classification system will help anatomists, physicists, and neuroscientists direct their study of neurons more efficiently and build more accurate models of cortical function. But it’s important to point out that the database isn’t merely important for its contents. How those contents were measured and aggregated also is crucial to the future of these big-picture brain mapping initiatives.
To create a unified model of the brain, neuroscientists must collect millions of individual data points from neurons in the brain. To start, they take electrical readings from living neurons by stabbing them with tiny, micron-wide pipettes. Those pipettes deliver current to the cells—enough to get them to fire—and record the cell’s electrical output. But there are many ways to set up those electrical readings, and to understand the neural system as a whole, neuroscientists need to use the same technique every time to make sure that the electrical traces can be compared from neuron to neuron.
Each of the colored dots shown is a single neuron detailed in the cell database, clustered in the mouse visual cortex. Allen Institute for Brain Science
The Allen Institute, in collaboration with other major neuroscience hubs—Caltech, NYU School of Medicine, the Howard Hughes Medical Institute, and UC Berkeley—has made sure to use the same electrical tracing technique on all of the neurons studied so far (they call it “Neurodata without Borders”). And while the data for this first set of mouse neurons was primarily generated at the Institute, those shared techniques will make future work more applicable to the BRAIN Initiative’s larger goals. “In future releases, we’ll be working with other people to get data from other areas of the brain,” says Ng. “The idea is that if everyone does things in a very standard way, we’ll be able to incorporate that data seamlessly in one place.”
That will become increasingly important as the Institute continues mapping not just mouse neurons, but human ones. It’s easy to target specific regions in the mouse brain, getting electrical readings from neurons in a particular part of the visual cortex. It’s not so easy to get location-specific neurons from humans. “These cells actually come from patients—people who are having neurosurgery for epilepsy, or the removal of tumors,” says Ng. For a surgeon to get to the part of the brain that needs work, they must remove a certain amount of normal tissue that’s in the way, and it’s that tissue that neuroscientists are able to study.
Because they don’t get to choose exactly where in the brain that tissue comes from, scientists at Allen and other research institutes will have to be extra careful that their protocols for identifying the cells—by location, gene expression, electrical activity, and shape—are perfectly aligned, so none of those precious cells are wasted. All together, the discarded remnants of those human brains may be enough to reconstruct one from scratch.
A former Amazon engineer's startup wants to fix the worst thing about tech job interviews 
If there's one thing that techies hate, it's interviewing for a job in tech.
You'd think it'd be easier, right? After all, every company is soon to be a software company if you believe the Silicon Valley hype, which means that every company will soon need lots and lots more programmers.
The problem is that it's actually really hard to assess whether or not somebody is a good programmer.
Everybody thinks they're an expert, and it's often non-technical people, like HR staff, who get tapped to do the initial assessment.
And so two things tend to happen when you interview for a tech job: you either get completely insane "skill" tests on extremely basic knowledge that have little to do with the job at hand, or else interviewers turn to brainteasers, riddles, and other weird stuff designed to gauge your personality as much as your skill set.
Regardless, the result is the same: really excellent coders find themselves without a job, while recruiters hire people whose skills don't match the role.
That's a problem that HackerRank, a startup founded by ex-Amazon engineer and current CEO Vivek Ravisankar, wants to solve.
On Amazon's Kindle team back in 2008, building the software that let people self-publish blogs to the e-reader store, Ravisankar had to conduct a lot of technical interviews. Over time, it became clear that the process was not great.
"It's very hard to figure out how good a programmer is from looking at your resume," says Ravisankar, who left Amazon in 2009 to pursue the startup.
HackerRank is a tool for automatically generating programming tests based on the skills a company wants to screen for, then scoring candidates with its own algorithm.
"Your 5 can be my 1.2," Ravisankar said.
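That remark hints at per-company score normalization. A minimal sketch of the idea (hypothetical, not HackerRank's actual algorithm): the same raw test results map to different scores depending on which skills a company weights.

```python
# Hypothetical sketch of per-company test scoring; the weights,
# skills, and 0-5 scale here are illustrative assumptions.

def score_candidate(results, weights, scale=5.0):
    """results: fraction of tests passed per skill (0.0-1.0).
    weights: how much each skill matters to this company.
    Returns a score on the company's 0..scale range."""
    total_weight = sum(weights.values())
    weighted = sum(results.get(skill, 0.0) * w for skill, w in weights.items())
    return round(weighted / total_weight * scale, 1)

results = {"algorithms": 0.9, "sql": 0.2}

# An algorithms-heavy company rates this candidate highly...
print(score_candidate(results, {"algorithms": 4, "sql": 1}))   # 3.8
# ...while a data-focused shop scores the same work much lower.
print(score_candidate(results, {"algorithms": 1, "sql": 4}))   # 1.7
```

One candidate, one set of raw results, two very different numbers; which is exactly why scores only make sense relative to a given company's weighting.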
It's a pretty simple idea, but it was profound enough to get HackerRank into the prestigious Y Combinator startup accelerator program. Today, one million developers use it to compete in challenges and gauge their own skills, HackerRank claims, while big companies like Amazon, Riot Games, and Evernote use it in their own recruiting efforts. So far, HackerRank has raised $12.4 million from venture capital firms like Khosla Ventures and Battery Ventures.
Now, HackerRank has announced integrations with the popular recruitment software suites Oracle Taleo, Greenhouse, and Jobvite, so that recruiters can instantly test job seekers and see their scores right within those programs.
Again, simple, but profound — recruiters can see how good a candidate is straight from the software they're already using to find good future employees.
There's an interesting side effect here, too. The traditional technical interview has a bad habit of turning away women and minority candidates for the simple reason that interviewers tend to favor people who code the way they do.
A more objective score given by HackerRank could remove that barrier, ensuring candidates' applications live and die by their own merit.
"We're bringing in a huge change in recruiting," Ravisankar says.
A Maker’s Guide to the Metaverse 
While the term “metaverse” was coined by Neal Stephenson in his 1992 novel Snow Crash, current usage has diverged significantly from its original meaning. In popular contemporary culture, the metaverse is often described as the VR-based successor to the web.
Its recent surge in popularity is fueled by the expectation that the availability of affordable VR equipment will invariably lead to the creation of a network of virtual worlds that is similar to the web of today—but with virtual “places” instead of pages.
From a societal perspective, the metaverse is also associated with anticipation that as soon as VR technology reaches a sufficiently high level of quality, we will spend a significant portion of our private and professional lives in shared virtual spaces. Those spaces will be inherently more accommodating than natural reality, and this contextual malleability is expected to have a profound impact on our interpersonal relationships and overall cultural velocity.
Given its potential, it is no surprise the metaverse is a persistent topic in discussions about the future of virtual reality. In fact, it is difficult to find VR practitioners who can speculate about a plausible future where technological progress is unhindered and yet a metaverse is never created.
Still, there is little consensus on what a real-world implementation of the metaverse would be like.
Our research group, Lucidscape, was created to accelerate the advent of the metaverse by addressing the most challenging aspects of its implementation. Our mandate is to provide the open source foundations for a practical, real-world metaverse that embraces freedom, lacks centralized control, and ultimately belongs to everyone.
In this article, which is part opinion and part pledge, I will share the tenets for what we perceive as an “ideal” implementation of the metaverse. My goal is not to promote our work but to provoke thought and spark a conversation with the greater VR community about what we should expect from a real-world metaverse.
Tenet #1 – Creative freedom is not negotiable
“The first condition of progress is the removal of censorship.” – George Bernard Shaw
As the prerequisite technologies become available, the emergence of a proto-metaverse becomes all but inevitable. Nevertheless, it is too soon to know what kind of metaverse will arise—whether it will belong to everybody, embracing freedom, accessibility and personal expression without compromise, or be controlled and shaped by the will and whims of its creators.
To draw a relatable comparison, imagine a different world where the web functions akin to Apple’s iOS app store. In this world, all websites must be reviewed and approved for content before they are made available to users. In this impoverished version of the web, content perceived as disagreeable by its gatekeepers is suppressed, and users find themselves culturally stranded in a manicured walled garden.
While our (reasonably) free web has become a powerful driver of contemporary culture, I would argue that a content-controlled web would remain culturally impotent in comparison because censorship inevitably stifles creativity.
Some believe that censorship under the guise of curation is acceptable under a benevolent dictator. But let me again bring forth the common example of Apple, an adored company that has succumbed to the temptation of acting as a distorted moral compass for its customers by ruling that images of the human body are immoral while murder simulators are acceptable.
In contrast, the ideal metaverse allows everyone to add worlds to the network since there are no gatekeepers.
In it, human creativity is unshackled by the conventions and customs of our old world. Content creators are encouraged to explore the full spectrum of possible human experiences without the fear of censorship. Each world is a sovereign space that is entirely determined and controlled by its owner-creator.
Tenet #2 – Technological freedom is not negotiable either
“If the users do not control the program, the program controls the users” – Richard Stallman
The ideal metaverse is built atop a foundation of free software and open standards. This is of vital importance not only to enforce the right to creative freedom but to safeguard a nascent network from the risks of single-source solutions, attempts of control by litigation or even abuse by its own developers.
In the long term, a technologically free metaverse is also more likely to achieve a higher level of penetration and cultural relevance.
Tenet #3 – Dismantle the wall between creators and users
“Dismantle the wall between developers and users, to develop systems so easy to program that doing so would be a natural, simple aspect of use.” - The Xerox PARC Design Philosophy 
Most computer users have never written a program, and most web users have never created a website.
While creative and technological freedom are required, they are not sufficient to ensure an inclusive metaverse if only a small portion of the user population can contribute to the network.
It is also necessary to break the wall that separates content creators from consumers by providing not only the means but also the incentives necessary to make each and every user a co-author in the metaverse network.
This empowerment begins with the outright rejection of the current “social contract” that delineates the submissive relationship between users and the computers they use. In the current model, user contributions are neither expected nor welcome, which in turn greatly diminishes the value of becoming algorithmically literate unless you intend to become a professional in the field.
However, in a metaverse where virtual components are easily inspected and modified in real time, everyone could become a tinkerer first, and a maker eventually.
Thus, every aspect of the user experience in the ideal metaverse is an invitation to learn, create, or remix. Worlds can be quickly composed by linking to pre-existing parts made available by other authors. The source code and other building blocks for each shared part are readily available for study or tinkering. While each world remains sovereign, visitors are nonetheless encouraged to submit contributions that can be easily accepted and incorporated.
To illustrate the benefits of embracing users as co-authors, imagine that you have published a virtual model of Paris in the ideal metaverse. Over time, your simulation gains popularity and becomes a popular destination for Paris lovers around the world. To your amazement, your visitors congeal into a passionate community that submits frequent improvements to your virtual Paris, effectively becoming your co-authors. 
Most importantly, the basic tools of creation of the ideal metaverse are accessible to children and those who are not technologically inclined. By design, these tools allow users to learn by experimentation, thus blurring the lines between purposeful effort and creative play.
Tenet #4 – Support for worlds of unprecedented scale
Virtual worlds of today, with a single notable exception, can only handle small-scale simulations with no more than several dozen participants in the same virtual space. To overcome this limitation, world creators sacrifice the experience of scale by partitioning worlds into a multitude of smaller instances where only a limited number of participants may interact with each other.
In contrast, the simulation infrastructure of the ideal metaverse supports worlds of unprecedented scale (e.g., whole populated cities, planets, solar systems) while handling millions of simultaneous users within the same shared virtual space.
This is an incredibly difficult challenge because it requires maintaining a coherent state across a vast number of geographically separated machines in real-time. Even as networking technology advances, there are fundamental physical limits to possible improvements in total bandwidth and latency. 
Fulfilling this requirement will require algorithmic breakthroughs and the creation of a computational fabric that allows an arbitrary number of machines to join forces to simulate large seamless worlds while at the same time gracefully compensating for unfavorable network circumstances.
Scalability of this magnitude is not something that can be easily bolted onto a pre-existing architecture. Instead, the creators of the ideal metaverse must take this requirement into consideration from the very beginning of development.
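One common starting point for this kind of computational fabric is spatial partitioning: cut the world into cells and let each server own the entities inside its cells. The sketch below is purely illustrative (the grid size, server names, and cell-to-server mapping are all assumptions, not Lucidscape's design); a real seamless world would also need hand-off and interest management at cell borders.

```python
# Toy sketch of spatial partitioning for a distributed world:
# the world is cut into fixed-size grid cells, and each cell is
# assigned to exactly one simulation server.

CELL_SIZE = 1000.0  # metres of world space per cell (assumed)

def owning_server(x, y, servers):
    """Route a world position to the server simulating its cell."""
    cell = (int(x // CELL_SIZE), int(y // CELL_SIZE))
    # Stable cell-to-server mapping; real systems rebalance for load.
    return servers[(cell[0] * 31 + cell[1]) % len(servers)]

servers = ["sim-a", "sim-b", "sim-c"]

# Entities in the same cell land on the same server, so they can
# interact without cross-machine coordination.
assert owning_server(10, 10, servers) == owning_server(999, 999, servers)
```

The hard part the essay alludes to is everything this sketch omits: interactions that straddle cell borders, migrating entities between servers without visible seams, and doing both under real network latency.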
Tenet #5 – Support for nomadic computation
Of all tenets proposed in this essay, this is the one that is most easily contested because it is motivated not by strict necessity but by the desire to create a network that is more than the sum of its parts.
The same way that the web required a new way of thinking about information, the ideal metaverse requires a new way of thinking about computation. One of the ways this requirement manifests itself is by our proposal for the support of safe nomadic computation.
In the ideal metaverse, a nomadic program is a fully autonomous participant with “rights” similar to those of a human user. Like any ordinary user, such programs can move from one server to the next on the network. To the underlying computational fabric, there is no meaningful distinction between human operators and nomadic programs other than the fact that programs carry along their source code and internal state as they migrate to a new server.
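The "carry along source code and state" idea can be sketched in a few lines. This is purely illustrative (the class, payload format, and example state are my assumptions, not a proposed protocol); a real system would also sandbox the code and verify it on arrival.

```python
import json

# Illustrative sketch: a "nomadic program" as a bundle of source code
# plus serialized state that one server can suspend and another resume.

class NomadicProgram:
    def __init__(self, source, state):
        self.source = source  # the code the program carries along
        self.state = state    # its internal state, e.g. worlds visited

    def suspend(self):
        """Pack code and state into a payload for transfer."""
        return json.dumps({"source": self.source, "state": self.state})

    @staticmethod
    def resume(payload):
        """Reconstitute the program on the receiving server."""
        data = json.loads(payload)
        return NomadicProgram(data["source"], data["state"])

bot = NomadicProgram("def explore(world): ...", {"visited": ["paris-sim"]})
payload = bot.suspend()                   # leaves server A as plain text...
arrived = NomadicProgram.resume(payload)  # ...and resumes on server B
arrived.state["visited"].append("mars-sim")
```

The point of the sketch is the symmetry the essay describes: once code and state travel together, the receiving server can treat the arrival much like a user logging in.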
A powerful illustrative example of the potential of roaming programs is the approach taken by developer Hello Games in the development of “No Man’s Sky.”
By leveraging procedural content generation, a team of four artists generated a virtual universe containing over 18 quintillion planets. Unable to visit and evaluate those worlds one by one, they created a fleet of autonomous virtual robots to explore them. Each robot documents its journey and takes short videos of the most remarkable things it encounters to share with the developers.
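The scale works because planets are derived deterministically from seeds rather than stored. A toy sketch of the principle (not Hello Games' actual method; the traits and ranges are invented for illustration):

```python
import random

# Toy sketch of seed-driven procedural generation: a planet's traits
# are derived deterministically from its seed, so an astronomically
# large universe needs no storage; revisiting a seed regenerates the
# identical planet.

def generate_planet(seed):
    rng = random.Random(seed)  # deterministic per-planet generator
    return {
        "radius_km": rng.randint(1_000, 12_000),
        "terrain": rng.choice(["ocean", "desert", "jungle", "ice"]),
        "moons": rng.randint(0, 5),
    }

# A 64-bit seed space alone covers ~1.8e19 (18 quintillion) planets.
assert generate_planet(42) == generate_planet(42)
```

Exploring robots then only need to enumerate seeds; the universe they wander exists as a function, not a database.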
While Hello Games’ robot explorers are not nomadic programs, the same idea could be implemented in the metaverse on a much grander scale. For example, more than merely visiting worlds in the network, nomadic programs could also interact with other users or programs, improve the worlds they visit, or even act as autonomous surrogates for users who are currently offline.
Moreover, the infrastructure required for supporting nomadic computation can also be leveraged to offload work to the computers utilized by human visitors. This is beneficial because thousands of end-user machines running complex logic can create much richer experiences than what would be possible with server-side resources exclusively. 
The road ahead
The five tenets of the ideal metaverse shared in this article can be succinctly distilled to just two adjectives — free and distributed. Those are precisely the attributes that made the web widely successful, and the core values the metaverse must embrace to achieve a similar level of cultural relevance.
However, there are still many significant challenges ahead for the creation of a real-world metaverse.
From a hardware perspective, nothing short of a collapse of technological progress stands in the way of the required technologies becoming available. Computing and networking performance continue to increase exponentially, and affordable head-mounted displays are just around the corner.
From the software standpoint, there are a few groups already mobilizing to fulfill the promise of the metaverse. I have previously introduced our team at Lucidscape, and I feel compelled to mention our amazing friends at High Fidelity, since they are also hard at work building their own vision of the metaverse. Similarly noteworthy are the efforts of Improbable.io; even though they are developing proprietary technology, their work could be useful to the metaverse in the long run.
Overall, recent progress has been encouraging. Last year our team at Lucidscape ran the first large-scale test of a massively parallel simulation engine, in which over ten million entities were simulated on a cluster of 6,608 processor cores. Meanwhile, High Fidelity has already released the alpha version of their open source platform for shared virtual reality, and as I write this, there are 44 domains in their network, which can be earnestly described as a proto-metaverse.
Where imagination becomes reality
Nothing is as uniquely human as the capacity to dream. Our ability to imagine a better world gives us both the desire and the resolve to reshape the reality around us.
Neuroscientist Gerald Edelman eloquently defined the human brain as the place “where matter becomes imagination.” It is a wondrous concept which is about to be taken a step further as the metaverse establishes itself as the place “where imagination becomes reality.”
While in natural reality our capacity to imagine greatly outstrips our power to realize, virtual reality closes that gap and mainstream availability of VR will release an unfathomable amount of pent-up creative energy.
Our urge to colonize virtual worlds is easily demonstrated by the success of video games that give users easy-to-use creation tools. Media Molecule’s “Little Big Planet” receives over 5,000 submissions of user-created levels every day. Meanwhile, the number of Microsoft’s “Minecraft” worlds is estimated in the hundreds of millions.
While it is true that some of us may never find virtual reality to be as fulfilling as natural reality, ultimately we are not the ones who will realize the full potential of VR and the metaverse.
Today’s children will be the first “virtual natives.” Their malleable brains will adapt and evolve along with the virtual worlds they create and experience. Eventually they will learn to judge experiences exclusively on the amount of cognitive enrichment they offer, not on the arbitrary labels of “real” or “virtual.”
In time, the metaverse will become humanity’s shared virtual canvas. In it, we will meet to create new worlds and new experiences that bypass the constraints of natural reality. Its arrival will set in motion a grand social experiment that will ultimately reveal the true nature of our species. 
How will our culture and morality evolve when reality itself becomes negotiable? Will we create worlds that elevate the human spirit to new heights? Or will we use virtual reality to satisfy our darkest desires?
To the disappointment of both the eternally optimistic and relentlessly pessimistic, the answer is likely to be a complex mixture of both.
The real world metaverse will be just as full of beauty and contain just as much darkness as the web we have today. It will be an honest mosaic portrait of experiences that is fully representative of our true cognitive identity as a species.
The problem you did not know you had
I would like to conclude by asking you to imagine a line representing your personal trajectory through life’s many possibilities. This line connects your birth to each of your most salient moments up to the current point in time, and it represents the totality of your life’s experience.
Each decision you made along the way pruned the tree of possibilities of the branches that were incompatible with the sum of your previous choices. For each door you opened, countless others were sealed shut because such is the nature of a finite human existence — causality mercilessly limits how much you can do with the time you have.
I, for example, decided to specialize in computer science, so it is unlikely that I will ever become an astronaut. Since I am male and musically challenged, I will also never know what it is like to be a female J-pop singer, a person of a different race, or someone born in a different century. No matter what I do, those experiences are inaccessible to me in natural reality.
The goal of this exercise is to bring to your attention that no matter how rich a life you have lived, the breadth of your journey represents an insignificantly narrow path through the spectrum of possible human experiences.
This is how natural reality limits you. It denies you access to the full spectrum of experiences your mind requires to achieve higher levels of wisdom, empathy and cognitive well-being.
This is the problem you did not know you had — and virtual reality is precisely the solution you did not know you needed.
 Namely: Oculus VR (owned by Facebook), Google, Sony, and HTC/Valve.
 Lucidscape is building a massively distributed computational fabric to power the metaverse (http://lucidscape.com)
 While I am not in any way opposed to violent video games, I want to make the point that by any reasonable moral scale, sex and nudity are inherently more acceptable than murder.
 This is conceptually similar to what was attempted by the developers of the Xerox Alto operating system, where user changes are reflected immediately.
 This mechanism would be conceptually similar to a "pull request": http://oss-watch.ac.uk/resources/pullrequest
 "Wikiworlds" would be a good cognitive shortcut for this co-authoring model: worlds that are like Wikipedia in the aspect that anyone can contribute.
 Emphasis is given to the fact that the basic tools must be accessible to non-technical users. Certainly, complex tools for power users are also of critical importance.
 This reflects my personal wish of seeing a whole generation of kids becoming algorithmically literate by "playing" on the metaverse.
 Eve Online (https://eveonline.com)
 Imagine an autonomous builder program that travels around the metaverse and uses procedural content generation to suggest improvements to the visited worlds.
 Another important aspect of supporting nomadic computation is to minimize the cross-talk between servers as autonomous agents roam the metaverse. Since the execution of nomadic programs is local to the server it is currently visiting, a great deal of network bandwidth can be spared.
 Read more: http://www.kurzweilai.net/the-law-of-accelerating-returns
 Coming soon: Oculus CV, Sony Morpheus, Valve HTC Vive
 I would like to take this opportunity to invite the great minds at Improbable to consider building a free metaverse alongside Lucidscape and High Fidelity instead of limiting themselves to the scope of the video game industry.
 “How Matter Becomes Imagination” is the sub-title of “A Universe of Consciousness” by Nobel Prize winner Gerald Edelman and neuroscientist Giulio Tononi.
 It is this author’s opinion that technology does not change us, it merely enables us to act the way we wanted to all along.
Rod Furlan is an artificial intelligence researcher, Singularity University alumnus, and co-founder of Lucidscape, a virtual reality research lab currently working on a new kind of massively distributed 3D simulation engine to power a vast network of interconnected virtual worlds. Read more here and follow him @rfurlan.
To get updates on Future of Virtual Reality posts, sign up here.
A Movable Defense
In the evolutionary arms race between pathogens and hosts, genetic elements known as transposons are regularly recruited as assault weapons for cellular defense.
Researchers now recognize that genetic material, once simplified into neat organismal packages, is not limited to individuals or even species. Viruses that pack genetic material into stable infectious particles can incorporate some or all of their genes into their hosts’ genomes, allowing remnants of infection to remain even after the viruses themselves have moved on. On a smaller scale, naked genetic elements such as bacterial plasmids and transposons, or jumping genes, often shuttle around and between genomes. It seems that the entire history of life is an incessant game of tug-of-war between such mobile genetic elements (MGEs) and their cellular hosts.
MGEs pervade the biosphere. In all studied habitats, from the oceans to soil to the human intestine, the number of detectable virus particles, primarily bacteriophages, exceeds the number of cells by at least tenfold, and maybe much more. Furthermore, MGEs and their remnants constitute large portions of many organisms’ genomes—as much as two-thirds of the human genome and up to 90 percent in plants such as corn.
© PROFESSOR STANLEY N. COHEN/SCIENCE SOURCE
Despite their ubiquity and prevalence in diverse genomes, MGEs have traditionally been considered nonfunctional junk DNA. Starting in the middle of the 20th century, through the pioneering work of Barbara McClintock in plants, and over the following decades in a widening range of organisms, researchers began to uncover clues that MGE sequences are recruited for a variety of cellular functions, in particular for the regulation of gene expression. More-recent work reveals that many organisms also use MGEs for a more specialized and sophisticated function, one that capitalizes on the ability of these elements to move around genomes, modifying the DNA sequence in the process. Transposons seem to have been pivotal contributors to the evolution of adaptive immunity both in vertebrates and in microbes, which were only recently discovered to actually have a form of adaptive immunity—namely, the CRISPR-Cas (clustered regularly interspaced short palindromic repeats–CRISPR-associated genes) system that has triggered the development of a new generation of genome-manipulation tools.
Multiple defense systems have evolved in nearly all cellular organisms, from bacteria to mammals. Taking a closer look at these systems, we find that the evolution of these defense mechanisms depended, in large part, on MGEs—those same elements that are themselves targets of host immune defense.
Layers of defense
As cheaters in the game of life, stealing resources from their hosts, parasites have the potential to cause the collapse of entire communities, killing their hosts before moving on or dying themselves. But hosts are far from defenseless. The diversity and sophistication of immune systems are striking: their functions range from immediate and nonspecific innate responses to exquisitely choreographed adaptive responses that result in lifelong immune memory after an initial pathogen attack.1
Transposons seem to have been pivotal contributors to the evolution of adaptive immunity both in vertebrates and in microbes.
Over the last two decades or so, it has become clear that nearly all organisms possess multiple mechanisms of innate immunity.2 Toll-like receptors (TLRs), common to most animals, recognize conserved molecules from microbial pathogens and activate the appropriate components of the immune system upon invasion. Even more widespread and ancient is RNA interference (RNAi), a powerful defense system that employs RNA guides, known as small interfering RNAs (siRNAs), to destroy invading nucleic acids, primarily those of RNA viruses. Conceptually, the biological function of siRNAs is analogous to that of TLRs: an innate immune response to a broad class of pathogens.
Prokaryotes possess their own suite of innate immune mechanisms, including endonucleases that cleave invader DNA at specific sites and enzymes called methylases that modify those same sites in the prokaryotes’ own genetic material to shield it from cleavage, a strategy known as restriction modification (RM).3 If overwhelmed by pathogens, many prokaryotic cells will undergo programmed cell death or go into dormancy, thereby preventing the spread of the pathogen within the organism or population. In particular, infected bacterial or archaeal cells can activate toxin-antitoxin (TA) systems to induce dormancy or cell death. Normally, the toxin protein is complexed with the antitoxin and thus inactivated. However, under stress, the antitoxin is degraded, unleashing the toxin to harm the cell.
Many viruses that infect microbes also encode RM and TA modules.4 These viruses are, in effect, a distinct variety of MGEs that sometimes have highly complex genomes. Viruses use RM systems for the very same purpose as their prokaryotic hosts: the methylase modifies the viral genome, whereas the endonucleases degrade any unmodified genomes in the host cell, thereby providing nucleotides for the synthesis of new copies of the viral genome. And the TA system can ensure retention of a plasmid or virus within the cell. The toxin and antitoxin proteins dramatically differ in their vulnerability to proteolytic enzymes that are always present in the cell: the toxin is stable whereas the antitoxin is labile. This does not matter as long as both proteins are continuously produced. However, if both genes are lost (for example, during cell division), the antitoxin rapidly degrades, and the remaining amount of the toxin is sufficient to halt the biosynthetic activity of the cell and hence kill it or at least render it dormant. A plasmid or virus that carries a TA module within its genome thus implants a self-destructing mechanism in its host that is activated if the MGE is lost. (See illustration.)
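The addiction logic of a TA module hinges on one asymmetry: the toxin is stable while the antitoxin is labile. A minimal toy simulation can make this concrete. All names, rates, and quantities below are my own illustrative assumptions, not measured kinetics; the only feature taken from the text is the stability asymmetry:

```python
# Toy model of a toxin-antitoxin (TA) addiction module.
# Assumed: per-step exponential decay with made-up rates; the one
# biologically grounded feature is that the toxin decays slowly
# (stable) while the antitoxin decays quickly (labile).

def ta_dynamics(steps, production, toxin=0.0, antitoxin=0.0):
    """Simulate protein levels; return (toxin, antitoxin, free_toxin)."""
    TOXIN_DECAY, ANTITOXIN_DECAY = 0.01, 0.5    # stable vs. labile
    TOXIN_SYNTH, ANTITOXIN_SYNTH = 1.0, 100.0   # antitoxin made in excess
    for _ in range(steps):
        if production:                  # both genes present and expressed
            toxin += TOXIN_SYNTH
            antitoxin += ANTITOXIN_SYNTH
        toxin *= 1.0 - TOXIN_DECAY
        antitoxin *= 1.0 - ANTITOXIN_DECAY
    free_toxin = max(0.0, toxin - antitoxin)    # toxin not neutralized
    return toxin, antitoxin, free_toxin

# While the module is retained, excess antitoxin neutralizes all toxin:
t_on, a_on, free_on = ta_dynamics(100, production=True)   # free_on == 0.0

# If the plasmid carrying both genes is lost, the labile antitoxin
# vanishes within a few steps while the stable toxin lingers, free
# to halt the cell's biosynthetic activity:
t_off, a_off, free_off = ta_dynamics(20, production=False,
                                     toxin=t_on, antitoxin=a_on)
```

Losing the module thus converts a harmless steady state into a lethal one without any new synthesis, which is what makes TA systems effective retention devices for plasmids and viruses.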
When an MGE inserts into the host genome, it inevitably modifies that genome, typically using an MGE-encoded recombinase (also known as integrase or transposase) as a breaking-and-entering tool. Speaking in deliberately anthropomorphic terms, the MGEs do so for their own selfish purposes, to ensure their propagation within the host genome. However, given the ubiquity of MGEs across cellular life forms, it seems extremely unlikely that host organisms would not recruit at least some of these naturally evolved genome manipulation tools in order to exploit their remarkable capacities for their own purposes. Immune memory that involves genome manipulation is arguably the most obvious utility of these tools, and in retrospect, it is not surprising that unrelated transposons and their recombinases appear to have made key contributions to the origin of both animal and prokaryotic forms of adaptive immunity.
Guns for hire
Until recently, prokaryotes had been thought to entirely lack the sort of adaptive immunity that dominates defense against parasites in vertebrates. This view has been overturned in the most dramatic fashion by the discovery of CRISPR-Cas, RNAi-based defense systems found to be present in most archaea and many bacteria studied to date.5 In 2005, Francisco Mójica of the University of Alicante in Spain and colleagues,6 and, independently, Dusko Ehrlich of the Pasteur Institute in Paris,7 discovered that some of the unique sequences inserted between CRISPR repeats, known as spacers, were identical to pieces of bacteriophage or plasmid genomes. Combined with a detailed analysis of the predicted functions of Cas proteins, this discovery led one of us (Koonin) and his team to propose in 2006 that CRISPR-Cas functioned as a form of prokaryotic adaptive immunity, with memory of past infections stored in the genome within the CRISPR “cassettes”—clusters of short direct repeats, interspersed with similar-size nonrepetitive spacers, derived from various MGEs—and to develop a detailed hypothesis about the mechanism of such immunity.8
Subsequent experiments from Philippe Horvath’s and Rodolphe Barrangou’s groups at Danisco Corporation,9 along with several other studies that followed in rapid succession, supported this hypothesis. (See “There’s CRISPR in Your Yogurt,” here.) It has been shown that CRISPR-Cas indeed functions by incorporating fragments of foreign bacteriophage or plasmid DNA into CRISPR cassettes, then using the transcripts of these unique spacers as guide RNAs to recognize and cleave the genomes of repeat invaders. (See illustration.) A key feature of CRISPR-Cas systems is their ability to transmit extremely efficient, specific immunity across many thousands of generations. Thus, CRISPR-Cas is not only a bona fide adaptive immunity system, but also a genuine machine of Lamarckian evolution, whereby an environmental challenge—a virus or plasmid, in this case—directly causes a specific change in the genome that results in an adaptation that is passed on to subsequent generations.10
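The core loop described above—store a fragment of a first-time invader, then use the stored spacers to recognize repeat invaders—can be sketched in a few lines of code. This is a deliberately naive sketch: the function names, the fixed 20-nucleotide spacer length, and plain substring matching are my own simplifications (real systems select protospacers adjacent to PAM motifs and cleave targets with Cas nucleases):

```python
# Naive sketch of CRISPR-style adaptive immunity.
SPACER_LEN = 20  # illustrative; real spacers vary in length

def acquire_spacer(cassette, invader_genome):
    """First exposure: store a fragment of the invader in the CRISPR cassette."""
    cassette.append(invader_genome[:SPACER_LEN])

def recognizes(cassette, incoming_genome):
    """A repeat invader is recognized if any stored spacer matches its genome."""
    return any(spacer in incoming_genome for spacer in cassette)

cassette = []
phage = "ATGCGTACGTTAGCCGATCAGGCTTACG"
assert not recognizes(cassette, phage)   # naive cell: no memory yet
acquire_spacer(cassette, phage)          # infection leaves a heritable record
assert recognizes(cassette, phage)       # the same phage is now targeted
assert not recognizes(cassette, "TTTTACGGTACCGGTATATATTTTCCCC")
```

Because the cassette is part of the genome, the acquired “memory” is inherited by daughter cells—the Lamarckian aspect noted above.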
When a mobile genetic element (MGE) inserts into the host genome, it inevitably modifies that genome, typically using an MGE-encoded recombinase as a breaking-and-entering tool.
A torrent of comparative genomic, structural, and experimental studies has characterized the extremely diverse CRISPR-Cas systems according to the suites of Cas proteins involved in CRISPR transcript processing and target recognition.5,11 While Type I and Type III systems employ elaborate protein complexes that consist of multiple Cas proteins, Type II systems perform all the necessary reactions with a single large protein known as Cas9. These findings opened the door for straightforward development of a new generation of genome-editing tools. Cas9-based tools are already used by numerous laboratories all over the world for genome engineering that is much faster, more flexible, and more versatile than any methodology available in the pre-CRISPR era.12
And it seems that humans are not the only species to have stolen a page from the CRISPR book: viruses have done the same. For example, a bacteriophage that infects pathogenic Vibrio cholerae carries its own adaptable CRISPR-Cas system and deploys it against another MGE that resides within the host genome.13 Upon phage infection, that rival MGE, called a phage inducible chromosomal island-like element (PLE), excises itself from the cellular genome and inhibits phage production. But at the same time, the bacteriophage-encoded CRISPR-Cas system targets PLE for destruction, ensuring successful phage propagation.
Consequently, in prokaryotes, all defense systems appear to be guns for hire that work for the highest bidder. Sometimes it is impossible to know with any certainty in which context, cellular or MGE, different defense mechanisms first emerged.
Transposon origins of adaptive immunity
© EYE OF SCIENCE/SCIENCE SOURCE. COLORIZATION BY MARY MADSEN
Recent evidence from our groups supports an MGE origin of the CRISPR-Cas systems. The function of Cas1—the key enzyme of CRISPR-Cas that is responsible for the acquisition of foreign DNA and its insertion into spacers within CRISPR cassettes—bears an uncanny resemblance to the recombinase activity of diverse MGEs, even though Cas1 does not belong to any of the known recombinase families. As a virtually ubiquitous component of CRISPR-Cas systems, Cas1 was likely central to the emergence of CRISPR-Cas immunity.
During a recent exploration of archaeal DNA dark matter—clusters of uncharacterized genes in sequenced genomes—we unexpectedly discovered a novel superfamily of transposon-like MGEs that could hold the key to the origin of Cas1.14 These previously unnoticed transposons contain inverted repeats at both ends, just like many other transposons, but their gene content is unusual. The new transposon superfamily is present in both archaeal and bacterial genomes and is highly polymorphic (different members contain from 6 to about 20 genes), with only two genes shared by all identified representatives. One of these conserved genes encodes a DNA polymerase, indicating that these transposons supply the key protein for their own replication. While diverse eukaryotes harbor self-synthesizing transposons of the Polinton or Maverick families, this is the first example in prokaryotes. But it was the second conserved protein that held the biggest surprise: it was none other than a homolog of Cas1, the key protein of the CRISPR-Cas systems.
We dubbed this new transposon superfamily “casposons” and naturally proposed that, in this context, Cas1 functions as a recombinase. In the phylogenetic tree of Cas1, the casposons occupy a basal position, suggesting that they played a key role in the origin of prokaryotic adaptive immunity.
In vertebrates, adaptive immunity acts in a completely different manner than in prokaryotes and is based on the acquisition of pathogen-specific T- and B-lymphocyte antigen receptors during the lifetime of the organism. The vast repertoire of immunoglobulin receptors is generated from a small number of genes via dedicated diversification processes known as V (variable), D (diversity), and J (joining) segment (V(D)J) recombination and hypermutation. (See illustration.) In a striking analogy to CRISPR-Cas, vertebrate adaptive immunity also seems to have a transposon at its origin. V(D)J recombination is mediated by the RAG1-RAG2 recombinase complex. The recombinase domain of RAG1 derives from the recombinases of a distinct group of animal transposons known as Transibs.15 The recombination signal sequences of the immunoglobulin genes, which are recognized by the RAG1-RAG2 recombinase and are necessary for bringing together the V, D, and J gene segments, also appear to have evolved via Transib insertion.
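A back-of-the-envelope calculation illustrates the scale of diversity that V(D)J recombination extracts from a small number of gene segments. The segment counts below are approximate textbook figures for the human immunoglobulin loci, used here only for illustration:

```python
# Approximate human immunoglobulin segment counts (illustrative figures).
V_HEAVY, D_HEAVY, J_HEAVY = 40, 23, 6
V_LIGHT, J_LIGHT = 40, 5          # light chains join only V and J segments

heavy_chains = V_HEAVY * D_HEAVY * J_HEAVY    # 5,520 combinations
light_chains = V_LIGHT * J_LIGHT              # 200 combinations

# Each receptor pairs one heavy and one light chain:
combinatorial = heavy_chains * light_chains
print(combinatorial)  # → 1104000, i.e., ~10^6 receptors from ~100 gene segments
```

Junctional diversity (imprecise joining and nucleotide additions) and somatic hypermutation then expand this combinatorial baseline by several more orders of magnitude.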
The two independent origins of adaptive immune systems in prokaryotes and eukaryotes involving unrelated MGEs show that, in the battle for survival, organisms welcome all useful molecular inventions irrespective of who the original inventor was. Indeed, the origin of CRISPR-Cas systems from prokaryotic casposons and vertebrate V(D)J recombination from Transib transposons might appear paradoxical given that MGEs are primary targets of immune systems. However, considering the omnipresence and diversity of MGEs, it seems likely that even more Lamarckian-type mechanisms have, throughout the history of life, directed genomic changes in the name of host defense.16
Moreover, the genome-engineering capacity of immune systems provides almost unlimited potential for the development of experimental tools for genome manipulation and other applications. The utility of antibodies as tools for protein detection and of RM enzymes for specific fragmentation of DNA molecules has been central to the progress of biology for decades. Recently, CRISPR-Cas systems have been added to that toolkit as, arguably, the most promising of the new generation of molecular biological methods. It is difficult to predict what opportunities for genome engineering could be hidden within still unknown or poorly characterized defense systems.
© KIMBERLY BATTISTA
Eugene V. Koonin is a group leader at the National Library of Medicine’s National Center for Biotechnology Information in Bethesda, Maryland. Mart Krupovic is a research scientist at the Institut Pasteur in Paris, France.
THE END OF THE UNIVERSE: A SEARCH BETWEEN RISING DISORDER AND COMPLEXITY
Written By Cadell Last
In a new book, The Beginning and the End: The Meaning of Life in a Cosmological Perspective, philosopher Clement Vidal (@clemvidal) explores the universe’s two main trends: rising disorder and rising complexity. His investigation takes us to the most extreme conditions possible in our universe.
Continue reading on the site http://kwfoundation.org
UK Will Use CRISPR on Human Embryos — a Step Closer to Human Genome Editing
This week, Kathy Niakan, a biologist working at the Francis Crick Institute in London, received the green light from the UK’s Human Fertilisation and Embryology Authority to use the genome editing technique CRISPR/Cas9 on human embryos.
Niakan hopes to answer important questions about how healthy human embryos develop from a single cell to around 250 cells, in the first seven days after fertilization.
By removing certain genes during this early development phase using CRISPR/Cas9, Niakan and her team hope to understand what causes miscarriages and infertility, and in the future, possibly improve the effectiveness of in-vitro fertilization and provide better treatments for infertility.
The embryos used in the research will come from patients who have surplus embryos from their IVF treatment and give consent for those embryos to be used in research. The embryos will not be allowed to develop beyond 14 days and may not be implanted in a womb to develop further. The team still needs to have its plans reviewed by an ethics board, but if approved, the research could start in the next few months.
In an op-ed for Time magazine, J. Craig Venter writes that the experiments proposed at the Crick Institute are similar to previous gene knockouts in mice and other species. While some results may be of interest, Venter believes, most will be inconclusive, as the field has seen in the past.
He continues, “The only reason the announcement is headline-provoking is that it seems to be one more step toward editing our genomes to change life outcomes.”
Venter’s stance on the matter of genome editing echoes that of many other scientists in the field: Proceed with caution.
In December 2015, The National Academies of Sciences, Engineering and Medicine held an International Summit on Human Genome Editing, and after several days of discussion, released a statement of conclusions.
In a nutshell, the group recommended that basic and preclinical research should continue with the appropriate legal and ethical oversight. If human embryos or germline cells are modified during research, they should not be used to establish a pregnancy.
In cases of clinical use, the group underscored a difference between editing somatic cells (cells whose genomes are not passed on to the next generation) versus germline cells (whose genomes are passed on to the next generation).
Somatic cell editing would include editing genes that cause diseases such as sickle-cell anemia. Because these therapies would only affect the individual, the group recommends these cases should be evaluated based on “existing and evolving” gene-therapy regulations.
It’s worth noting that governments across the world have significantly diverse ways of handling gene-therapy regulations.
In the US, the National Institutes of Health (NIH) won’t fund genomic editing research involving human embryos. Research like Kathy Niakan’s is not illegal, as long as it is privately funded. In China, the government doesn’t ban any particular type of research, while countries like Italy and Germany are on the other side of the spectrum, where all human embryo research is banned.
The International Summit on Genome Editing concluded that today it would be “irresponsible to proceed with any clinical use of germline editing” until we have more knowledge of the possible risks and outcomes of doing so.
In spite of that, the group also concluded that as “scientific knowledge advances and societal views evolve, the clinical use of germline editing should be revisited on a regular basis.” Similarly, Venter writes of the need for the scientific community to gain better understanding of the “software of life before we begin re-writing this code.”
While the “proceed with caution” message from scientists is loud and clear, the age of programmable biology seems to be getting closer and closer.
Between Venter’s statement that it is inevitable that we will edit our genomes for enhancements and the suggestion that human germline editing should be ‘revisited’ as opposed to banned, it seems even the scientific community is assuming a future which includes human genome editing.
So, where do we go from here?
This brave new future seems equal parts exciting, frightening — and inevitable. At this stage, more research is critical — so when the time comes to rewrite the software of life, we do so with wisdom.
Image Credit: Shutterstock.com
3D model of DNA double helix. Credit: Peter Artymiuk / Wellcome Images
A step towards gene therapy against intractable epilepsy
by Nikitidou Ledri L et al.
By delivering genes for a certain signal substance and its receptor into the brains of test animals with chronic epilepsy, a research group at Lund University in Sweden, with colleagues at the University of Copenhagen in Denmark, has succeeded in considerably reducing the number of epileptic seizures in the animals. The test was designed to mimic, as far as possible, a future situation involving the treatment of human patients.
Many patients with epilepsy do not experience any improvement from existing drugs. Surgery can be an alternative for severe epilepsy, provided it is possible to localize and remove the epileptic focus in the brain where the seizures arise.
"There is a period between the detection of this focus and the operation when the gene therapy alternative could be tested. If it works well, the patient can avoid surgery. If it doesn't, surgery will go ahead as initially planned and the affected part will then be removed. With this approach, the experimental treatment will be more secure for the patient", says Professor Merab Kokaia.
He and his group are working on a rat model that mimics temporal lobe epilepsy, the most common type of epilepsy. The test animals are given injections of the epilepsy-inducing substance kainate in the temporal lobe of one of the cerebral hemispheres. Most of the animals had seizures of varying degrees, whereas some had no seizures at all, which Merab Kokaia considers a good result, as it mirrors the situation in people: brain damage resulting from various accidents has very different consequences for different patients, with some developing epilepsy and others not.
The rats that developed epilepsy were then given gene therapy in the part of the brain in which the kainate had been injected, and where the seizures arose. Genes were delivered for both the signal substance (neuropeptide Y) and one of its receptors. The idea was that the combination would have a greater effect than delivering the gene for the signal substance alone: neuropeptide Y can bind to several different receptors, and in the worst case it binds to a receptor that promotes an increase in the number of seizures rather than a decrease.
The study results have so far been positive. Control animals treated with inactive genes showed an increase in seizure frequency; this increase was halted in the animals treated with the combination of active genes, and in 80 percent of those animals the number of seizures was reduced by almost half.
"The test must be repeated in more animal studies, so that the possible side effects, on memory for example, can be studied. But, we regard this study as promising proof of concept, a demonstration that the method works," states Merab Kokaia.
He expects that the first gene therapy treatments will be carried out on patients who have already been selected for surgical procedures. In the long term, however, gene therapy will be of the greatest benefit to those patients who cannot be operated on. Some patients with severe epilepsy have an epileptic focus so badly placed that an operation is out of the question because it could impair speech or movement, for example. These patients can never undergo a surgical procedure, but could be helped by gene therapy in the future.
Note: Material may have been edited for length and content. For further information, please contact the cited source.
©G.E. KIDDLER SMITH/CORBIS
A Vaulted Mystery
Nearly 30 years after the discovery of tiny barrel-shape structures called vaults, their natural functions remain elusive. Nevertheless, researchers are beginning to put these nanoparticles to work in biomedicine.
In the mid-1980s, biochemist Leonard Rome of the University of California, Los Angeles, (UCLA) School of Medicine and his postdoc Nancy Kedersha were developing new ways to separate coated vesicles of different size and charge purified from rat liver cell lysates when they stumbled upon something else entirely. They trained a transmission electron microscope on the lysate to check whether the vesicles were being divvied up correctly, and the resulting image revealed three dark structures: a large protein-coated vesicle, a small protein-coated vesicle, and an even smaller and seemingly less dense object. (See photograph below.) The researchers had no idea what the smallest one was.
“There were many different proteins and membrane-bound vesicles in the various fractions we analyzed,” Kedersha recalls, but this small vesicle was different. And it was “not a contaminant,” she says, as additional micrographs of partially purified vesicles revealed similar strange objects, always found in association with the coated vesicles. The ovoid particles displayed a distinct shape, which reminded the researchers of a raspberry, a hand grenade, or a barrel, and all were smaller than any known organelle. Rome gave his postdoc the green light to investigate further.
NANCY KEDERSHA AND LEONARD ROME
Kedersha designed a way to purify the mystery particles, based on a procedure previously described in the literature for isolating coated vesicles, then stained and imaged what she’d collected using electron microscopy. The tiny structures had a complex but consistent barrel-shape morphology and measured 35 by 65 nanometers—much smaller than lysosomes, which range in diameter from 100 to more than 1,000 nanometers (1 micrometer), or mitochondria, which are 0.5 to 10 micrometers long. Kedersha also treated the particles with various proteases, as well as enzymes to digest RNA and DNA, to assess their constituent molecules, finding evidence of three major proteins and an RNA component. With a total mass of approximately 13 megadaltons, they appeared to be the largest eukaryotic ribonucleoprotein particles ever discovered. By comparison, ribosomes measure just 20 to 25 nanometers in diameter and weigh in at just over 3 megadaltons.
Kedersha dubbed the structures “vaults,” after the arched shape of the very first particle she and Rome observed, reminiscent of the vaulted ceilings of cathedrals.1 To screen for these new nanostructures in other species, Kedersha developed an antibody against one of the vault proteins she’d discovered, and used it to purify vaults from species across the animal kingdom: the minibarrels were abundant in the cells of rabbits, mice, chickens, cows, bullfrogs, sea urchins, and several human cell lines—varying from 10,000 to 100,000 per cell. Remarkably, they all appeared to be similar in size, shape, and morphology to those Kedersha and Rome isolated from rat livers. Clearly, this was an important cellular structure, and there were no reports of anything like it in the literature.
The broad distribution and strong conservation of vaults in eukaryotic species suggest that their function is essential to cells, but that function remains unclear to this day. In fact, in the three decades that have passed since their discovery, vaults have gone largely unnoticed by the scientific community. But a handful of dedicated groups are making strides in understanding what vaults are and what they do, with clues emerging that hint at their roles in cargo transport, cellular motility, and drug resistance, among other possible functions.
Cracking the vault
Scientists have taken several approaches to deciphering the structure of the nanosize vaults, including cryo-electron and freeze-etch microscopy and three-dimensional image reconstruction. Such work has revealed a symmetrical central barrel with a cinched middle and a cap protruding from the barrel’s top and bottom. (See illustration.) Cross sections reveal a very thin shell surrounding a large, hollow interior. Interestingly, a vault’s interior is spacious enough to enclose molecules as large as ribosomal subunits, but researchers have not confirmed whether vaults ever house cellular cargo.
As Kedersha’s early analyses suggested, vaults are composed of multiple copies of at least four distinct components: three proteins and one RNA molecule. The major vault protein (MVP) accounts for some 75 percent of the particles’ mass, with each vault containing 78 copies of the protein. In fact, the expression of MVP in an insect cell line—insects themselves are one of the few eukaryotic organisms that don’t have vaults—results in the spontaneous formation of particles with morphologic characteristics similar to those of endogenous vaults.2 Another protein typically found in vaults is vault poly(ADP-ribose) polymerase (VPARP). VPARP and MVP mRNA transcripts are expressed in similar patterns in the cell, and subcellular fractionation studies point to a strong binding between the two proteins.
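The figures above—a ~13-megadalton particle, 75 percent of it MVP, in 78 copies—can be sanity-checked with simple arithmetic. The sketch below (an illustration, not a calculation from the original article) derives the implied mass of a single MVP monomer:

```python
# Back-of-envelope check of the composition figures quoted in the text:
# a vault weighs ~13 MDa, MVP accounts for ~75% of that mass,
# and each vault contains 78 copies of MVP.
total_mass_da = 13e6   # daltons
mvp_fraction = 0.75
mvp_copies = 78

mass_per_copy_kda = total_mass_da * mvp_fraction / mvp_copies / 1e3
print(f"implied MVP monomer mass ~ {mass_per_copy_kda:.0f} kDa")  # 125 kDa
```

The implied monomer mass lands around 125 kilodaltons—the right order of magnitude for a single large structural protein, so the quoted figures hang together.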
The third vault protein is TEP1, previously identified as the mammalian telomerase-associated protein 1, which binds RNA in the telomerase complex. TEP1-knockout mice exhibited no alterations in telomerase function, suggesting the protein's role in the telomerase complex is redundant, but vaults purified from these animals revealed a complete absence of the fourth component of vaults: vault RNA (vRNA), a small untranslated RNA found at the tips of the particles. This work pointed to TEP1’s role in the recruitment and stabilization of vRNA.
The freeze-etching technique—which consists of physically breaking apart a frozen biological sample and then examining it with transmission electron microscopy—has revealed that vaults are not rigid, impermeable structures, but dynamic entities that are able to open and close, with a structure resembling a petaled flower.3 (See photograph below.) The “flowers” are usually seen in pairs, suggesting that an intact vault comprises two folded flowers with eight rectangular petals, each of which is connected to a central ring by a thin, short hook. (See illustration.)
The ability of vaults to open and close points to a possible function in cargo transport. At present, however, a definitive answer about the function of vaults remains elusive. In fact, in addition to cellular transport, more than a dozen roles for vaults have been proposed, including playing a part in multidrug resistance, cellular signaling, neuronal dysfunctions, and apoptosis and autophagy.
In search of function
Vaults are found in the cytoplasm, so far appearing to be completely excluded from the nucleus (except in sea urchins4). Within the cytoplasm, however, they are not randomly dispersed: they colocalize and interact with cytoskeletal elements, such as actin stress fibers and microtubules, and are also abundant in highly motile cells such as macrophages, suggesting the structures may help cells move around.
Vaults’ interactions with cytoskeletal elements also lend support to the idea that these particles act as cytoplasmic cargo transporters. Researchers hypothesize that vaults open, encapsulate molecules, then close and travel across the cytoplasm along microtubules or actin fibers before releasing their contents into the desired subcellular compartment.
In addition to the now well-characterized flower pattern of vault opening, Rome and colleagues have proposed two alternative hypotheses for how vaults might open: by separating at the waist, splitting into two completely dissociated halves, or by raising opposing petals on the two vault halves, which hinge from the caps to open the particle at the waist.5 (See illustration.) The latter may avoid destroying the integrity of the whole particle, potentially allowing vaults to repeatedly transport and release cargos. More recently, researchers have found evidence that the vaults “breathe” in solution, taking up and releasing proteins without ever fully opening.
Vaults also seem to be closely associated with nuclear pore complexes (NPC), protein conglomerations that span the inner and outer membranes of the nuclear envelope. This raises the possibility that vaults shuttle contents between the cytoplasm and nucleus. Interestingly, some structural characteristics of vaults, such as mass, diameter, and shape, are very similar to those of the NPC, although research has not yet conclusively established whether vaults actually form some sort of plug to stop up the NPC.
Researchers have also proposed a role for vaults in cancer cells’ ability to resist the pharmaceuticals doctors throw at them. In 1993, immunologist and experimental pathologist Rik Scheper of VU University in Amsterdam and colleagues found that a non-small-cell lung cancer cell line could be selected for resistance to the chemotherapy drug doxorubicin.6 The resulting cells overexpressed a large protein initially named lung resistance-related protein (LRP). Two years later, the group discovered that LRP was nothing other than human MVP,7 and the literature soon blossomed with papers on the possible role of vaults in chemotherapeutic drug resistance.
Experiments have yielded several observations that exclude a direct participation of MVP in such resistance, however. Knockdown of MVP does not affect cell survival, for instance, and upregulation of MVP does not increase resistance to anticancer drugs.8 Thus, while many clinical studies recognize MVP as a negative prognostic factor for response to chemotherapy, it remains to be seen whether vaults play a direct role in drug resistance or whether they are merely markers of a drug-resistance phenotype.
Putting vaults to work
While many questions about vaults remain, including whether they serve as cargo transporters for the cell, their large, hollow interiors have led some scientists to see the nanobarrels as potential tools for the delivery of biomaterials. A variety of strategies for encapsulating biomaterials already exists, including viruses, liposomes, peptides, hydrogels, and synthetic and natural polymers, but the use of these materials is often limited by insufficient payload, immunogenicity, lack of targeting specificity, and the inability to control packaging and release. Vaults, on the other hand, possess all the features of an ideal delivery vehicle. These naturally occurring cellular nanostructures have a cavity large enough to sequester hundreds of proteins; they are homogeneous, regular, highly stable, and easy to engineer; and, most of all, they are nonimmunogenic and totally biocompatible.
But the actual packaging of foreign materials into vaults remains challenging. In 2005, Rome and long-time UCLA collaborator Valerie Kickhoefer discovered a particular region at VPARP’s C-terminus, named the major vault protein interaction domain (mINT), which is responsible for binding VPARP to MVP. The researchers hypothesized that mINT acts as a kind of zip code directing VPARP to the inside of the vault and speculated that any protein tagged with the mINT sequence at the C-terminus could be packaged into vaults just like VPARP. Fusing the sequence to luciferase, the enzyme that makes fireflies glow, and expressing the construct in an insect cell line, they successfully generated vaults with the engineered protein packaged inside the central barrel in the same two rings typically formed of VPARP.9
Rome and his colleagues have since demonstrated that the technique can successfully incorporate any number of proteins into the tiny cellular particles, and even discovered that they can make changes to vault proteins to alter such packaging. For example, the addition of extra amino acids at the N-terminus of MVP produces vaults with the engineered protein packaged exclusively at the waist. Conversely, the addition of extra amino acids at the MVP C-terminus produces two blobs of densely packed protein at the ends of vaults. Vaults can also be engineered to bind antibodies or express cancer cell ligands on their surface, allowing for the precise delivery of biomaterials to target cells. Researchers believe that, once inside the body, the engineered vaults act as slow-release particles for whatever protein is packaged inside.
In collaboration with Rome, pathologist Kathleen Kelly’s group at UCLA is working to create a vault-based nasal spray that acts as a vaccine against Chlamydia infection.10 They engineered vaults to encase the major outer membrane protein (MOMP) of Chlamydia, which possesses highly immunogenic properties, then created a nasal spray to deliver the modified vaults to the nasal mucosa. After the immunization, they challenged female mice with a Chlamydia infection and found that the treatment significantly limited bacterial infection in mucosal tissue.
Vaults may also help fight cancer. The lymphoid chemokine CCL21 binds to the chemokine receptor CCR7 and serves as a chemoattractant for tumor-fighting cells of the immune system. Pulmonologist Steven Dubinett and immunologist Sherven Sharma of UCLA and their colleagues injected CCL21 into mice with a lung carcinoma, but because CCL21 is small, it rapidly dissipated out of the tumor and was relatively ineffective at drawing immune cells to the tumor. In collaboration with Rome’s group, the researchers tagged the chemokine with mINT to package it into vaults prior to injection, causing an increase in the number of leukocytic cells that infiltrated the tumor and, most importantly, leading to a significant decrease in tumor growth.11 Rome and colleagues have since started a company to advance this vault-based therapy through human trials. (See “Opening the Medical Vault” below.)
Three decades after their chance discovery, vaults remain mysterious. But researchers are not waiting for all the questions to be answered. Vault-based therapies show promise in treating a variety of diseases, and the success of such applications could give these nanosize barrels a big dose of recognition.
Opening the Medical Vault
Fifteen years ago, my postdoc Andy Stephen brought me a result that blew my mind. Because he needed to make large amounts of the major vault protein (MVP) in order to further study its properties, he had expressed the protein in insect cells, which, unlike most animal cells, lack vaults. To our great surprise, MVP was not only expressed at high levels, but it assembled within the insect cell cytoplasm into empty vault-like particles that appeared structurally identical to the naturally occurring vaults we had purified from other eukaryotes.
This discovery changed the direction of my laboratory. My colleague Valerie Kickhoefer and I began to engineer vault particles as nanoscale capsules for a wide range of applications. We identified a section of the vault poly(ADP-ribose) polymerase (VPARP) protein that binds with high affinity to the inside of vaults. Fusion of this sequence, called mINT, to any protein or peptide of interest facilitated its packaging into vaults and, thanks to a tight but reversible binding interaction, its slow release. Moreover, by fusing peptides to the C-terminus of MVP, we are able to engineer vaults with specific markers displayed on their surface, allowing the development of strategies for targeting vaults to cells or tissues.12
To develop such vaults for medical needs, I partnered with entrepreneur Michael Laznicka to form Vault Nano Inc. in the summer of 2013. The first vault-based therapeutic that we are moving forward is a human recombinant vault packaged with the CCL21 chemokine, which is normally produced in lymph nodes, where it attracts and activates T cells and dendritic cells. Injecting the recombinant vaults into a lung tumor model in mice, we observed that the attracted T cells and dendritic cells react with tumor antigens to halt tumor growth.11 Now, in collaboration with Steven Dubinett and Jay Lee here at UCLA, Vault Nano is moving the CCL21-vault into clinical studies, hoping to initiate a Phase 1 trial by the end of next year. If successful, the CCL21-vault therapeutic would be an off-the-shelf reagent that can harness the power of a patient’s own immune system to attack cancer.
With our UCLA collaborators Kathleen Kelly and Otto Yang, we are also pursuing the development of vault vaccines against Chlamydia and HIV. Current studies in animal models have demonstrated that when a pathogen-derived protein or peptide is packaged in vaults, the resulting nanocapsules can stimulate a robust immune response. With the help of Vault Nano, these studies will soon advance to the clinic. —Leonard H. Rome