Neuroscience


W


Wall Street analysts say virtual reality can't go mainstream until these issues are fixed [1307]

by System Administrator - Sunday, 12 July 2015, 00:14
 

Oculus founder Palmer Luckey demonstrates the final version

Wall Street analysts say virtual reality can't go mainstream until these issues are fixed


Watch This Open Source AI Learn to Dominate Super Mario World in Just 24 Hours [1250]

by System Administrator - Thursday, 18 June 2015, 18:17
 

Watch This Open Source AI Learn to Dominate Super Mario World in Just 24 Hours

By Jason Dorrier

Recently, Google’s DeepMind—an artificial intelligence firm acquired for over $400 million in 2014—has been widely featured for demonstrations of an algorithm that teaches itself to play video games. In a paper, the DeepMind team said the software had learned to play Atari Breakout, and some 48 other games, as well as any human gamer.

But DeepMind isn’t the only clever algorithm able to beat classic games. In a new YouTube video, Seth Bling explains the magic behind software he developed to learn how to play Nintendo’s Super Mario World.

"This program started out knowing absolutely nothing about Super Mario World or Super Nintendos," Bling says. "In fact, it didn't even know that pressing 'right' on the controller would make the player go towards the end of the level."

Called MarI/O, the evolutionary algorithm learns by experience, playing the same level over and over with a simple goal (or fitness function)—getting as far to the right as it can as quickly as possible.

 

This chart shows MarI/O's learning curve as each generation moves closer to beating the level.

After each generation, it “breeds” the most successful approaches (even adding random mutations) and then tries again with the next generation of offspring. After about a day of learning and 34 generations, the algorithm was as good at completing the level as any speedy human gamer (see the chart above).
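
To make that loop concrete, here is a minimal, hedged sketch of the generational cycle described above: score each candidate with a fitness function, keep the best performers, and build the next generation by crossover plus random mutation. It is written in Python purely for illustration; Bling's actual MarI/O evolves neural network topologies with the NEAT algorithm inside an emulator, whereas this toy substitutes an invented "scripted button presses" genome and a stand-in fitness function so the code runs on its own.

import random

# Toy stand-in for MarI/O's setup: a genome is a scripted sequence of
# controller inputs, and fitness rewards net rightward progress.
# The real program evolves neural networks (NEAT) and scores them by the
# distance reached in an emulated level; everything below is illustrative.
BUTTONS = ["left", "right", "jump", "none"]
GENOME_LEN = 120        # frames of scripted input (assumed value)
POP_SIZE = 150          # individuals per generation (assumed value)
MUTATION_RATE = 0.03
GENERATIONS = 34        # roughly what the article reports MarI/O needed

def random_genome():
    return [random.choice(BUTTONS) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for an emulator run: +1 for each frame pressing "right",
    # -1 for "left", minus a small time penalty.
    x = sum(1 if b == "right" else -1 if b == "left" else 0 for b in genome)
    return x - 0.01 * GENOME_LEN

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [random.choice(BUTTONS) if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        print(f"generation {gen + 1}: best fitness {fitness(population[0]):.1f}")
        parents = population[: POP_SIZE // 5]      # breed from the top 20%
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    evolve()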

That’s pretty cool. But there are a few other items of note too.

The algorithm is based on NEAT (NeuroEvolution of Augmenting Topologies), a 13-year-old evolutionary approach to neural networks outlined in a 2002 research paper.

Of course, though the approach itself is old, we now have a lot more computing power at our beck and call. What will neural networks be able to learn in a few more years of Moore’s Law?

Also, unlike DeepMind’s proprietary code, the code for the MarI/O program is available to anyone who wants to play too. And in theory, it should also work with other left-to-right platform games.

But the best part of the video? Bling explains what's happening as a representation of the neural net is superimposed on each attempt at the level. Instead of simply being told you’re viewing a computer learning to play a video game—you get a much more informative (and fascinating) glimpse behind the curtain.

Image Credit: Seth Bling/YouTube


We Are 100%, For Sure, in the Middle of a Major Extinction Event [1257]

by System Administrator - Thursday, 25 June 2015, 16:56
 

We Are 100%, For Sure, in the Middle of a Major Extinction Event

by Kaleigh Rogers

Lots of people know about the risk of losing endangered animals like pandas and tigers, but they’re just the tip of the iceberg, according to a new study showing a major global extinction is currently underway.

The paper, published in Science Advances, is by no means the first to come to this conclusion. But the researchers here—from the National Autonomous University of Mexico, Stanford, Berkeley, Princeton, and the University of Florida—wanted to test the theory of a mass extinction event using very strict criteria. Even using conservative estimates, the researchers found that the rate of extinction in the last 115 years is as high as 50 times what it would be under normal circumstances.

“If we do nothing, in the next 50 years it will be a completely different world, something that humanity has never experienced,” Gerardo Ceballos, lead author of the study and a senior ecology researcher at the National Autonomous University of Mexico, told me over the phone.

Ceballos and his colleagues found vertebrate species have been disappearing at an alarming rate for the last 500 years, roughly since humans started to have a significant impact on the environment. Since 1500, at least 338 vertebrate species have gone extinct, and if species continue to disappear at this rate, the planet’s biodiversity could be significantly and permanently altered within three generations, the researchers warned.

In the history of the Earth, there have been five mass extinctions—periods when a large number of species die off within a short period of time. The most recent event, the Cretaceous-Tertiary mass extinction, wiped out the dinosaurs 66 million years ago.

Researchers have been trying to determine for years whether or not a sixth mass extinction is upon us already. Critics say scientists may be overestimating the rate at which current species are dying out, and underestimating the historic rate of extinction. So Ceballos and his colleagues wanted to conduct an analysis that used only very conservative estimations.

Extinction is a natural part of evolution. New species emerge, other species die out; it’s always happened, and it will continue to happen as long as there’s life on Earth. But the natural rate of extinction—called background extinction—is typically pretty low. Most studies on extinction rates peg it between 0.1 and 1 extinction per 10,000 species per 100 years (a measure called E/MSY, for extinctions per million species-years). In other words: for every 10,000 species on Earth, up to one of them will disappear every 100 years.

But these previous studies have faced criticism for relying on those numbers, so Ceballos and his colleagues undertook an extensive review of fossil records to estimate a new background extinction just for vertebrates. (According to the study, there’s not enough data to make decent estimates for non-vertebrate species). Their review found a background rate of 1.8 extinctions per 10,000 species per 100 years (1.8 E/MSY).

Next, the researchers collected data on all of the vertebrate species that are confirmed extinct, are considered extinct in the wild, or are believed to be extinct, according to the International Union for Conservation of Nature (IUCN), from 1500 to today. Other studies in the past have used a wide range of estimations for how many species have gone extinct, Ceballos said, like looking at how much a habitat has decreased and extrapolating the number of species that would have disappeared. But, again, these researchers wanted to keep their numbers conservative, so they limited their analysis to the IUCN list.

When the researchers compared the background extinction rate to the actual extinction rate, even with these more conservative estimates, the gulf between the two was staggering. Since 1900, with a natural extinction rate of 2 E/MSY, about nine species would be expected to have disappeared. In reality, 477 vertebrate species are believed to have gone extinct.

Even when the researchers only included species that were confirmed extinct, 198 species, the die-off was 22 times the background rate.
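
Those comparisons are easy to reproduce from the article's own numbers. The sketch below assumes roughly 39,000 assessed vertebrate species, a figure implied by (but not stated in) the article, and applies the conservative background rate of 2 E/MSY over 115 years.

# Back-of-the-envelope check of the article's comparison.
# SPECIES is an assumption (~39,000 assessed vertebrates) implied by,
# not stated in, the article.
SPECIES = 39_000
YEARS = 115
BACKGROUND_EMSY = 2              # extinctions per 10,000 species per 100 years

expected = BACKGROUND_EMSY * (SPECIES / 10_000) * (YEARS / 100)
print(f"expected under background rate: {expected:.0f} species")         # ~9

observed_all = 477               # extinct + extinct in the wild + presumed extinct
observed_confirmed = 198         # confirmed extinctions only

print(f"all categories:  {observed_all / expected:.0f}x the background rate")
print(f"confirmed only:  {observed_confirmed / expected:.0f}x the background rate")  # ~22x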

“Let’s say the extinction rate had been 2.2 or 2.3 E/MSY, that would not be a mass extinction. It’s just a little higher than normal,” Ceballos said. “But what we’re finding is many, many times more. That’s why we’re certain that it’s a mass extinction.”

 

This is a breakdown of how long it should have taken for the same number of species that died off over the last 115 years to become extinct had the extinction rate stayed the same as the background rate. The red figures use only the confirmed extinct numbers while the blue figures use confirmed extinct, extinct in the wild, and believed to be extinct. Source: Science Advances

The study goes on to assert that the timing of this mass extinction makes it most likely human-caused: it’s been happening as human culture has advanced and humans have had a greater impact on the world around us.

But why should humans really care? So we don’t have the Dodo bird or the Yemen gazelle. Who cares? But Ceballos explained that diversity is kind of like nature’s insurance policy, and the more diversity we have, the healthier we all are.

He pointed to a case a few years ago in Panama, when there was an outbreak of hantavirus: a virus carried by rodents that can spread to humans and make them very sick. In protected wildlife areas, where there were many diverse species of rodents, the incidence of hantavirus was really low. It didn’t spread as quickly because only some of the species were susceptible, Ceballos said. But in areas that had been impacted by forestry, and where only a handful of rodent species remained, the incidence of the virus was much higher.

“Losing one species in particular, maybe one mouse or one squirrel, would not have a huge impact,” Ceballos explained. “But the problem now is we’re losing so many species, the resiliency of the ecosystem collapses.”

Ceballos said it’s not too late for humans to right our wrongs and prevent every other species from vanishing around us. If we double down on conservation efforts and start reeling back the pressures on species—like habitat loss and climate change—Ceballos and his colleagues say we can stop the mass extinction in its tracks.

We could also do nothing, at our own peril, he said. There have been mass extinctions before, one wiping out 90 percent of life on Earth, and life went on.

“We know for sure that life will prevail,” Ceballos said. “What is at stake here is whether humanity will be able to survive.”


We can now edit our DNA. But let's do it wisely [1531]

by System Administrator - Tuesday, 20 October 2015, 13:45
 

We can now edit our DNA. But let's do it wisely

by Jennifer Doudna

Geneticist Jennifer Doudna co-invented a groundbreaking new technology for editing genes, called CRISPR-Cas9. The tool allows scientists to make precise edits to DNA strands, which could lead to treatments for genetic diseases … but could also be used to create so-called "designer babies." Doudna reviews how CRISPR-Cas9 works — and asks the scientific community to pause and discuss the ethics of this new tool.

Video

00:12 A few years ago, with my colleague, Emmanuelle Charpentier, I invented a new technology for editing genomes. It's called CRISPR-Cas9. The CRISPR technology allows scientists to make changes to the DNA in cells that could allow us to cure genetic disease.

00:32 You might be interested to know that the CRISPR technology came about through a basic research project that was aimed at discovering how bacteria fight viral infections. Bacteria have to deal with viruses in their environment, and we can think about a viral infection like a ticking time bomb -- a bacterium has only a few minutes to defuse the bomb before it gets destroyed. So, many bacteria have in their cells an adaptive immune system called CRISPR, that allows them to detect viral DNA and destroy it.

01:04 Part of the CRISPR system is a protein called Cas9, that's able to seek out, cut and eventually degrade viral DNA in a specific way. And it was through our research to understand the activity of this protein Cas9 that we realized that we could harness its function as a genetic engineering technology -- a way for scientists to delete or insert specific bits of DNA into cells with incredible precision -- that would offer opportunities to do things that really haven't been possible in the past.

01:42 The CRISPR technology has already been used to change the DNA in the cells of mice and monkeys, other organisms as well. Chinese scientists showed recently that they could even use the CRISPR technology to change genes in human embryos. And scientists in Philadelphia showed they could use CRISPR to remove the DNA of an integrated HIV virus from infected human cells.

02:09 The opportunity to do this kind of genome editing also raises various ethical issues that we have to consider, because this technology can be employed not only in adult cells, but also in the embryos of organisms, including our own species. And so, together with my colleagues, I've called for a global conversation about the technology that I co-invented, so that we can consider all of the ethical and societal implications of a technology like this.

02:39 What I want to do now is tell you what the CRISPR technology is, what it can do, where we are today and why I think we need to take a prudent path forward in the way that we employ this technology.

02:54 When viruses infect a cell, they inject their DNA. And in a bacterium, the CRISPR system allows that DNA to be plucked out of the virus, and inserted in little bits into the chromosome -- the DNA of the bacterium. And these integrated bits of viral DNA get inserted at a site called CRISPR. CRISPR stands for clustered regularly interspaced short palindromic repeats. (Laughter)

03:24 A big mouthful -- you can see why we use the acronym CRISPR.

03:27 It's a mechanism that allows cells to record, over time, the viruses they have been exposed to. And importantly, those bits of DNA are passed on to the cells' progeny, so cells are protected from viruses not only in one generation, but over many generations of cells. This allows the cells to keep a record of infection, and as my colleague, Blake Wiedenheft, likes to say, the CRISPR locus is effectively a genetic vaccination card in cells. Once those bits of DNA have been inserted into the bacterial chromosome, the cell then makes a little copy of a molecule called RNA, which is orange in this picture, that is an exact replicate of the viral DNA. RNA is a chemical cousin of DNA, and it allows interaction with DNA molecules that have a matching sequence.

04:24 So those little bits of RNA from the CRISPR locus associate -- they bind -- to a protein called Cas9, which is white in the picture, and form a complex that functions like a sentinel in the cell. It searches through all of the DNA in the cell, to find sites that match the sequences in the bound RNAs. And when those sites are found -- as you can see here, the blue molecule is DNA -- this complex associates with that DNA and allows the Cas9 cleaver to cut up the viral DNA. It makes a very precise break. So we can think of the Cas9 RNA sentinel complex like a pair of scissors that can cut DNA -- it makes a double-stranded break in the DNA helix. And importantly, this complex is programmable, so it can be programmed to recognize particular DNA sequences, and make a break in the DNA at that site.
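
That "programmable" matching step can be pictured with a toy script. The sketch below is not a bioinformatics tool and the sequences are invented; it only illustrates the idea of scanning one DNA strand for a site matching a 20-base guide sequence followed by the "NGG" PAM motif this Cas9 requires, and reporting roughly where the blunt double-stranded cut would fall (about three bases upstream of the PAM).

# Toy illustration of Cas9 target search: find a 20-base match to the
# guide RNA that is immediately followed by an "NGG" PAM, and report
# where the blunt double-strand break would fall (~3 bases from the PAM).
# Sequences are invented; a real search also scans the opposite strand.

def find_cut_sites(dna, guide):
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        target = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if target == guide and pam.endswith("GG"):   # PAM = NGG
            sites.append(i + len(guide) - 3)          # approximate cut position
    return sites

guide  = "GACCTTAGCATCGGATCCAA"                      # 20 bases, made up
genome = "ATATGC" + guide + "TGG" + "CCGTAAGTCA"     # one matching site
print(find_cut_sites(genome, guide))                 # -> [23]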

05:27 As I'm going to tell you now, we recognized that that activity could be harnessed for genome engineering, to allow cells to make a very precise change to the DNA at the site where this break was introduced. That's sort of analogous to the way that we use a word-processing program to fix a typo in a document.

05:48 The reason we envisioned using the CRISPR system for genome engineering is because cells have the ability to detect broken DNA and repair it. So when a plant or an animal cell detects a double-stranded break in its DNA, it can fix that break, either by pasting together the ends of the broken DNA with a little, tiny change in the sequence of that position, or it can repair the break by integrating a new piece of DNA at the site of the cut. So if we have a way to introduce double-stranded breaks into DNA at precise places, we can trigger cells to repair those breaks, by either the disruption or incorporation of new genetic information. So if we were able to program the CRISPR technology to make a break in DNA at the position at or near a mutation causing cystic fibrosis, for example, we could trigger cells to repair that mutation.

06:51 Genome engineering is actually not new, it's been in development since the 1970s. We've had technologies for sequencing DNA, for copying DNA, and even for manipulating DNA. And these technologies were very promising, but the problem was that they were either inefficient, or they were difficult enough to use that most scientists had not adopted them for use in their own laboratories, or certainly for many clinical applications. So, the opportunity to take a technology like CRISPR and utilize it has appeal, because of its relative simplicity. We can think of older genome engineering technologies as similar to having to rewire your computer each time you want to run a new piece of software. Whereas the CRISPR technology is like software for the genome, we can program it easily, using these little bits of RNA.

07:53 So once a double-stranded break is made in DNA, we can induce repair, and thereby potentially achieve astounding things, like being able to correct mutations that cause sickle-cell anemia or cause Huntington's Disease. I actually think that the first applications of the CRISPR technology are going to happen in the blood, where it's relatively easier to deliver this tool into cells, compared to solid tissues.

08:23 Right now, a lot of the work that's going on applies to animal models of human disease, such as mice. The technology is being used to make very precise changes that allow us to study the way that these changes in the cell's DNA affect either a tissue or, in this case, an entire organism.

08:42 Now in this example, the CRISPR technology was used to disrupt a gene by making a tiny change in the DNA in a gene that is responsible for the black coat color of these mice. Imagine that these white mice differ from their pigmented litter-mates by just a tiny change at one gene in the entire genome, and they're otherwise completely normal. And when we sequence the DNA from these animals, we find that the change in the DNA has occurred at exactly the place where we induced it, using the CRISPR technology.

09:18 Additional experiments are going on in other animals that are useful for creating models for human disease, such as monkeys. And here we find that we can use these systems to test the application of this technology in particular tissues, for example, figuring out how to deliver the CRISPR tool into cells. We also want to understand better how to control the way that DNA is repaired after it's cut, and also to figure out how to control and limit any kind of off-target, or unintended effects of using the technology.

09:55 I think that we will see a clinical application of this technology, certainly in adults, within the next 10 years. I think that it's likely that we will see clinical trials and possibly even approved therapies within that time, which is a very exciting thing to think about. And because of the excitement around this technology, there's a lot of interest in start-up companies that have been founded to commercialize the CRISPR technology, and lots of venture capitalists that have been investing in these companies.

10:30 But we have to also consider that the CRISPR technology can be used for things like enhancement. Imagine that we could try to engineer humans that have enhanced properties, such as stronger bones, or less susceptibility to cardiovascular disease or even to have properties that we would consider maybe to be desirable, like a different eye color or to be taller, things like that. "Designer humans," if you will. Right now, the genetic information to understand what types of genes would give rise to these traits is mostly not known. But it's important to know that the CRISPR technology gives us a tool to make such changes, once that knowledge becomes available.

11:17 This raises a number of ethical questions that we have to carefully consider, and this is why I and my colleagues have called for a global pause in any clinical application of the CRISPR technology in human embryos, to give us time to really consider all of the various implications of doing so. And actually, there is an important precedent for such a pause from the 1970s, when scientists got together to call for a moratorium on the use of molecular cloning, until the safety of that technology could be tested carefully and validated.

11:54 So, genome-engineered humans are not with us yet, but this is no longer science fiction. Genome-engineered animals and plants are happening right now. And this puts in front of all of us a huge responsibility, to consider carefully both the unintended consequences, as well as the intended impacts of a scientific breakthrough.

12:21 Thank you.

12:22 (Applause)

12:27 Bruno Giussani: Thank you. (Applause) Jennifer, this is a technology with huge consequences, as you pointed out. Your attitude about asking for a pause or a moratorium or a quarantine is incredibly responsible. There are, of course, the therapeutic results of this, but then there are the un-therapeutic ones and they seem to be the ones gaining traction, particularly in the media. This is one of the latest issues of The Economist -- "Editing Humanity." It's all about genetic enhancement, it's not about therapeutics. What kind of reactions did you get back in March from your colleagues in the science world, when you asked or suggested that we should actually pause this for a moment and think about it?

13:12 JD: My colleagues were actually, I think, delighted to have the opportunity to discuss this openly. It's interesting that as I talk to people, my scientific colleagues as well as others, there's a wide variety of viewpoints about this. So clearly it's a topic that needs careful consideration and discussion.

13:28 BG: There's a big meeting happening in December that you and your colleagues are calling, together with the National Academy of Sciences and others. What do you hope will come out of the meeting, practically?

13:38 JD: Well, I hope that we can air the views of many different individuals and stakeholders who want to think about how to use this technology responsibly. It may not be possible to come up with a consensus point of view, but I think we should at least understand what all the issues are as we go forward.

13:56 BG: Now, colleagues of yours, like George Church, for example, at Harvard, they say, "Yeah, ethical issues basically are just a question of safety. We test and test and test again, in animals and in labs, and then once we feel it's safe enough, we move on to humans." So that's kind of the other school of thought, that we should actually use this opportunity and really go for it. Is there a possible split happening in the science community about this? I mean, are we going to see some people holding back because they have ethical concerns, and some others just going forward because some countries under-regulate or don't regulate at all?

14:28 JD: Well, I think with any new technology, especially something like this, there are going to be a variety of viewpoints, and I think that's perfectly understandable. I think that in the end, this technology will be used for human genome engineering, but I think to do that without careful consideration and discussion of the risks and potential complications would not be responsible.

14:54 BG: There are a lot of technologies and other fields of science that are developing exponentially, pretty much like yours. I'm thinking about artificial intelligence, autonomous robots and so on. No one seems -- aside from autonomous warfare robots -- nobody seems to have launched a similar discussion in those fields, and calling for a moratorium. Do you think that your discussion may serve as a blueprint for other fields?

15:18 JD: Well, I think it's hard for scientists to get out of the laboratory. Speaking for myself, it's a little bit uncomfortable to do that. But I do think that being involved in the genesis of this really puts me and my colleagues in a position of responsibility, and I would say that I certainly hope that other technologies will be considered in the same way, just as we would want to consider something that could have implications in other fields besides biology.

15:45 BG: Jennifer, thanks for coming to TED.

15:46 JD: Thank you.

15:48 (Applause)

Link: http://www.ted.com


We have greater moral obligations to robots than to humans [1603]

by System Administrator - Thursday, 3 December 2015, 20:26
 

We have greater moral obligations to robots than to humans

by Eric Schwitzgebel

Down goes HotBot 4b into the volcano. The year is 2050 or 2150, and artificial intelligence has advanced sufficiently that such robots can be built with human-grade intelligence, creativity and desires. HotBot will now perish on this scientific mission. Does it have rights? In commanding it to go down, have we done something morally wrong?

The moral status of robots is a frequent theme in science fiction, back at least to Isaac Asimov’s robot stories, and the consensus is clear: if someday we manage to create robots that have mental lives similar to ours, with human-like plans, desires and a sense of self, including the capacity for joy and suffering, then those robots deserve moral consideration similar to that accorded to natural human beings. Philosophers and researchers on artificial intelligence who have written about this issue generally agree.

I want to challenge this consensus, but not in the way you might predict. I think that, if we someday create robots with human-like cognitive and emotional capacities, we owe them more moral consideration than we would normally owe to otherwise similar human beings.

Here’s why: we will have been their creators and designers. We are thus directly responsible both for their existence and for their happy or unhappy state. If a robot needlessly suffers or fails to reach its developmental potential, it will be in substantial part because of our failure – a failure in our creation, design or nurturance of it. Our moral relation to robots will more closely resemble the relation that parents have to their children, or that gods have to the beings they create, than the relationship between human strangers.

In a way, this is no more than equality. If I create a situation that puts other people at risk – for example, if I destroy their crops to build an airfield – then I have a moral obligation to compensate them, greater than my obligation to people with whom I have no causal connection. If we create genuinely conscious robots, we are deeply causally connected to them, and so substantially responsible for their welfare. That is the root of our special obligation.

Frankenstein’s monster says to his creator, Victor Frankenstein:

I am thy creature, and I will be even mild and docile to my natural lord and king, if thou wilt also perform thy part, the which thou owest me. Oh, Frankenstein, be not equitable to every other, and trample upon me alone, to whom thy justice, and even thy clemency and affection, is most due. Remember that I am thy creature: I ought to be thy Adam….

We must either only create robots sufficiently simple that we know them not to merit moral consideration – as with all existing robots today – or we ought to bring them into existence only carefully and solicitously.

Alongside this duty to be solicitous comes another, of knowledge – a duty to know which of our creations are genuinely conscious. Which of them have real streams of subjective experience, and are capable of joy and suffering, or of cognitive achievements such as creativity and a sense of self? Without such knowledge, we won’t know what obligations we have to our creations.

Yet how can we acquire the relevant knowledge? How does one distinguish, for instance, between a genuine stream of emotional experience and simulated emotions in an artificial mind? Merely programming a superficial simulation of emotion isn’t enough. If I put a standard computer processor manufactured in 2015 into a toy dinosaur and program it to say ‘Ow!’ when I press its off switch, I haven’t created a robot capable of suffering. But exactly what kind of processing and complexity is necessary to give rise to genuine human-like consciousness? On some views – John Searle’s, for example – consciousness might not be possible in any programmed entity; it might require a structure biologically similar to the human brain. Other views are much more liberal about the conditions sufficient for robot consciousness. The scientific study of consciousness is still in its infancy. The issue remains wide open.

If we continue to develop sophisticated forms of artificial intelligence, we have a moral obligation to improve our understanding of the conditions under which artificial consciousness might genuinely emerge. Otherwise we risk moral catastrophe – either the catastrophe of sacrificing our interests for beings that don’t deserve moral consideration because they experience happiness and suffering only falsely, or the catastrophe of failing to recognise robot suffering, and so unintentionally committing atrocities tantamount to slavery and murder against beings to whom we have an almost parental obligation of care.

We have, then, a direct moral obligation to treat our creations with an acknowledgement of our special responsibility for their joy, suffering, thoughtfulness and creative potential. But we also have an epistemic obligation to learn enough about the material and functional bases of joy, suffering, thoughtfulness and creativity to know when and whether our potential future creations deserve our moral concern. 

  • Eric Schwitzgebel is professor of philosophy at the University of California, Riverside. He blogs at The Splintered Mind and his latest book is Perplexities of Consciousness (2011).

Link: https://aeon.co


We Were Given Imagination [1298]

by System Administrator - Thursday, 9 July 2015, 20:02
 

We Were Given Imagination

with Rainn Wilson | XPRIZE Insights

https://youtu.be/YahSbI9OhPw

with Alexandra Dolan

https://youtu.be/kYB4rPOWC3o

The inspiration for this video was the following quote from an interview with Albert Einstein, who stated: "I believe in intuition and inspiration. Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution..." (Einstein, as cited in Viereck, 1929, p. 17).

with Josiah Mozloom


We're On The Brink Of A Revolution In Crazy-Smart Digital Assistants [1437]

by System Administrator - Sunday, 20 September 2015, 20:15
 

We're On The Brink Of A Revolution In Crazy-Smart Digital Assistants

by David Pierce

Here's a quick story you've probably heard before, followed by one you probably haven't. In 1979 a young Steve Jobs paid a visit to Xerox PARC, the legendary R&D lab in Palo Alto, California, and witnessed a demonstration of something now called the graphical user interface. An engineer from PARC used a prototype mouse to navigate a computer screen studded with icons, drop-down menus, and "windows" that overlapped each other like sheets of paper on a desktop. It was unlike anything Jobs had seen before, and he was beside himself. "Within 10 minutes," he would later say, "it was so obvious that every computer would work this way someday."

As legend has it, Jobs raced back to Apple and commanded a team to set about replicating and improving on what he had just seen at PARC. And with that, personal computing sprinted off in the direction it has been traveling for the past 40 years, from the first Macintosh all the way up to the iPhone. This visual mode of computing ended the tyranny of the command line—the demanding, text-heavy interface that was dominant at the time—and brought us into a world where vastly more people could use computers. They could just point, click, and drag.

In the not-so-distant future, though, we may look back at this as the wrong PARC-related creation myth to get excited about. At the time of Jobs' visit, a separate team at PARC was working on a completely different model of human-computer interaction, today called the conversational user interface. These scientists envisioned a world, probably decades away, in which computers would be so powerful that requiring users to memorize a special set of commands or workflows for each action and device would be impractical. They imagined that we would instead work collaboratively with our computers, engaging in a running back-and-forth dialog to get things done. The interface would be ordinary human language.

One of the scientists in that group was a guy named Ron Kaplan, who today is a stout, soft-spoken man with a gray goatee and thinning hair. Kaplan is equal parts linguist, psychologist, and computer scientist—a guy as likely to invoke Chomsky’s theories about the construction of language as he is Moore’s law. He says that his team got pretty far in sketching out one crucial component of a working conversational user interface back in the ’70s; they rigged up a system that allowed you to book flights by exchanging typed messages with a computer in normal, unencumbered English. But the technology just wasn’t there to make the system work on a large scale. “It would’ve cost, I don’t know, a million dollars a user,” he says. They needed faster, more distributed processing and smarter, more efficient computers. Kaplan thought it would take about 15 years.

“Forty years later,” Kaplan says, “we’re ready.” And so is the rest of the world, it turns out.

Today, Kaplan is a vice president and distinguished scientist at Nuance Communications, which has become probably the biggest player in the voice interface business: It powers Ford’s in-car Sync system, was critical in Siri’s development, and has partnerships across nearly every industry. But Nuance finds itself in a crowded marketplace these days. Nearly every major tech company—from Amazon to Intel to Microsoft to Google—is chasing the sort of conversational user interface that Kaplan and his colleagues at PARC imagined decades ago. Dozens of startups are in the game too. All are scrambling to come out on top in the midst of a powerful shift under way in our relationship with technology. One day soon, these companies believe, you will talk to your gadgets the way you talk to your friends. And your gadgets will talk back. They will be able to hear what you say and figure out what you mean.

If you’re already steeped in today’s technology, these new tools will extend the reach of your digital life into places and situations where the graphical user interface cannot safely, pleasantly, or politely go. And the increasingly conversational nature of your back-and-forth with your devices will make your relationship to technology even more intimate, more loyal, more personal.

But the biggest effect of this shift will be felt well outside Silicon Valley’s core audience. What Steve Jobs saw in the graphical user interface back in 1979 was a way to expand the popular market for computers. But even the GUI still left huge numbers of people outside the light of the electronic campfire. As elegant and efficient as it is, the GUI still requires humans to learn a computer’s language. Now computers are finally learning how to speak ours. In the bargain, hundreds of millions more people could gain newfound access to tech.

Voice interfaces have been around for years, but let’s face it: Thus far, they’ve been pretty dumb. We need not dwell on the indignities of automated phone trees (“If you’re calling to make a payment, say ‘payment’”). Even our more sophisticated voice interfaces have relied on speech but somehow missed the power of language. Ask Google Now for the population of New York City and it obliges. Ask for the location of the Empire State Building: good to go. But go one logical step further and ask for the population of the city that contains the Empire State Building and it falters. Push Siri too hard and the assistant just refers you to a Google search. Anyone reared on scenes of Captain Kirk talking to the Enterprise’s computer or of Tony Stark bantering with Jarvis can’t help but be perpetually disappointed.

Ask around Silicon Valley these days, though, and you hear the same refrain over and over: It’s different now.

One hot day in early June, Keyvan Mohajer, CEO of SoundHound, shows me a prototype of a new app that his company has been working on in secret for almost 10 years. You may recognize SoundHound as the name of a popular music-recognition app—the one that can identify a tune for you if you hum it into your phone. It turns out that app was largely just a way of fueling Mohajer’s real dream: to create the best voice-based artificial-intelligence assistant in the world.

The prototype is called Hound, and it’s pretty incredible. Holding a black Nexus 5 smartphone, Mohajer taps a blue and white microphone icon and begins asking questions. He starts simply, asking for the time in Berlin and the population of Japan. Basic search-result stuff—followed by a twist: “What is the distance between them?” The app understands the context and fires back, “About 5,536 miles.”

Then Mohajer gets rolling, smiling as he rattles off a barrage of questions that keep escalating in complexity. He asks Hound to calculate the monthly mortgage payments on a million-dollar home, and the app immediately asks him for the interest rate and the term of the loan before dishing out its answer: $4,270.84.

“What is the population of the capital of the country in which the Space Needle is located?” he asks. Hound figures out that Mohajer is fishing for the population of Washington, DC, faster than I do and spits out the correct answer in its rapid-fire robotic voice. “What is the population and capital for Japan and China, and their areas in square miles and square kilometers? And also tell me how many people live in India, and what is the area code for Germany, France, and Italy?” Mohajer would keep on adding questions, but he runs out of breath. I’ll spare you the minute-long response, but Hound answers every question. Correctly.

Hound, which is now in beta, is probably the fastest and most versatile voice recognition system unveiled thus far. It has an edge for now because it can do speech recognition and natural language processing simultaneously. But really, it’s only a matter of time before other systems catch up.

After all, the underlying ingredients—what Kaplan calls the “gating technologies” necessary for a strong conversational interface—are all pretty much available now to whoever’s buying. It’s a classic story of technological convergence: Advances in processing power, speech recognition, mobile connectivity, cloud computing, and neural networks have all surged to a critical mass at roughly the same time. These tools are finally good enough, cheap enough, and accessible enough to make the conversational interface real—and ubiquitous.

But it’s not just that conversational technology is finally possible to build. There’s also a growing need for it. As more devices come online, particularly those without screens—your light fixtures, your smoke alarm—we need a way to interact with them that doesn’t require buttons, menus, and icons.

At the same time, the world that Jobs built with the GUI is reaching its natural limits. Our immensely powerful onscreen interfaces require every imaginable feature to be hand-coded, to have an icon or menu option. Think about Photoshop or Excel: Both are so massively capable that using them properly requires bushwhacking through a dense jungle of keyboard shortcuts, menu trees, and impossible-to-find toolbars. Good luck just sitting down and cropping a photo. “The GUI has topped out,” Kaplan says. “It’s so overloaded now.”

That’s where the booming market in virtual assistants comes in: to come to your rescue when you’re lost amid the seven windows, five toolbars, and 30 tabs open on your screen, and to act as a liaison between apps and devices that don’t usually talk to each other.

You may not engage heavily with virtual assistants right now, but you probably will soon. This fall a major leap forward for the conversational interface will be announced by the ding of a push notification on your smartphone. Once you’ve upgraded to iOS 9, Android 6, or Windows 10, you will, by design, find yourself spending less time inside apps and more chatting with Siri, Google Now, or Cortana. And soon, a billion-plus Facebook users will be able to open a chat window and ask M, a new smart assistant, for almost anything (using text—for now). These are no longer just supplementary ways to do things. They’re the best way, and in some cases the only way. (In Apple’s HomeKit system for the connected house, you make sure everything’s off and locked by saying, “Hey Siri, good night.”)

At least in the beginning, the idea behind these newly enhanced virtual assistants is that they will simplify the complex, multistep things we’re all tired of doing via drop-down menus, complicated workflows, and hopscotching from app to app. Your assistant will know every corner of every app on your phone and will glide between them at your spoken command. And with time, they will also get to know something else: you.

Let’s quickly clear something up: Conversational tech isn’t going to kill the touchscreen or even the mouse and keyboard. If you’re a power user of your desktop computer, you’ll probably stay that way. (Although you might avail yourself more often of the ability to ask a virtual assistant things like “Where’s the crop tool, again?”)

But for certain groups of people, the rise of the conversational interface may offer a route to technological proficiency that largely bypasses the GUI. Very young people, for instance, are already skipping their keyboards and entering text through microphones. “They just don’t type,” says Thomas Gayno, cofounder and CEO of voice messaging app Cord. And elsewhere on the age spectrum, there are an enormous number of people for whom the graphical user interface never really worked in the first place. For the visually impaired, the elderly, and the otherwise technologically challenged, it has always been a little laughable to hear anyone describe a modern computer interface as “intuitive.”

Chris Maury learned this the hard way. In the summer of 2010, the then-24-year-old entrepreneur was crashing on a friend’s air mattress in Palo Alto and interning at a startup called ImageShack, having just dropped out of a PhD program to chase the Silicon Valley dream. And in the midst of his long commutes and fiendishly late nights, he realized his prescription eyeglasses weren’t cutting it anymore. An ordinary optometrist appointment led to a diagnosis of Stargardt’s disease, a degenerative condition that doctors told him would eventually leave him legally blind.

Maury, who had every intention of staying in tech, was immediately forced to consider how he might use a computer without his vision. But for the 20-some million people in the US who can’t see, there’s only one real option for staying connected to computers: a 30-year-old technology called a screen reader.

To use one of these devices, you move a cursor around your screen using a keyboard, and the machine renders into speech whatever’s being selected—a long URL, a drop-down menu—at a mind-numbing robotic clip. Screen reader systems can cost thousands of dollars and require dozens of hours of training. “It takes sometimes two sessions before you can do a Google search,” Maury tells me. And as digital environments have gotten more and more complex, screen readers have only gotten harder to use. “They’re terrible,” Maury says.

As his vision started to go downhill, Maury immersed himself in Blind Twitter (yes, there’s Blind Twitter) and the accessibility movement. He came to realize how pissed off some visually impaired people were about the technology available to them. And at the same time, he was faintly aware that the potential ingredients for something better—an interface designed first for voice—were, at that moment, cropping up all over Silicon Valley.

So he set out to redeem technology for blind people. Maury founded a company, Conversant Labs, in the hope of building apps and services that put audio first. Conversant’s first product is an iPhone app called SayShopping, which offers a way to buy stuff from Target.com purely through speech. But Maury has much bigger designs. Conversant Labs is releasing a framework for adding conversational interaction to apps for iOS developers before the end of the year. And Maury wants to build a prototype soon of a fully voice-based computing environment, as well as an interface that will use head movements to give commands. “That’s all possible right now,” he says. “It just needs to be built.”

One day in the fall of 2014, out of nowhere, Amazon announced a new product called the Echo, a cylindrical, talking black speaker topped with a ring of blue lights that glow when the device speaks. The gadget’s persona is named Alexa. At the sound of its “wake word,” the Echo uses something called far-field voice recognition to isolate the voice that is addressing it, even in a somewhat noisy room. And then it listens. The idea is that the Echo belongs in the middle of your living room, kitchen, or bedroom—and that you will speak to it for all sorts of things.

It’s a funny thing, trying to make sense of a technology that has no built-in visual interface. There’s not much to look at, nothing to poke around inside of, nothing to scroll through, and no clear boundaries on what it can do. The technology press was roundly puzzled by this “enigmatic” new product from Amazon. (At least one scribe compared the Echo to the mysterious black monolith from the beginning of 2001: A Space Odyssey.)

When I started using Alexa late last year, I discovered it could tell me the weather, answer basic factual questions, create shopping lists that later appear in text on my smartphone, play music on command—nothing too transcendent. But Alexa quickly grew smarter and better. It got familiar with my voice, learned funnier jokes, and started being able to run multiple timers simultaneously (which is pretty handy when your cooking gets a little ambitious). In just the seven months between its initial beta launch and its public release in 2015, Alexa went from cute but infuriating to genuinely, consistently useful. I got to know it, and it got to know me.

This gets at a deeper truth about conversational tech: You only discover its capabilities in the course of a personal relationship with it. The big players in the industry all realize this and are trying to give their assistants the right balance of personality, charm, and respectful distance—to make them, in short, likable. In developing Cortana, for instance, Microsoft brought in the videogame studio behind Halo—which inspired the name Cortana in the first place—to turn a disembodied voice into a kind of character. “That wittiness and that toughness come through,” says Mike Calcagno, director of Cortana’s engineering team. And they seem to have had the desired effect: Even in its early days, when Cortana was unreliable, unhelpful, and dumb, people got attached to it.

There’s a strategic reason for this charm offensive. In their research, Microsoft, Nuance, and others have all come to the same conclusion: A great conversational agent is only fully useful when it’s everywhere, when it can get to know you in multiple contexts—learning your habits, your likes and dislikes, your routine and schedule. The way to get there is to have your AI colonize as many apps and devices as possible.

To that end, Amazon, Google, Microsoft, Nuance, and SoundHound are all offering their conversational platform technology to developers everywhere. The companies know that you are liable to stick with the conversational agent that knows you best. So get ready to meet some new disembodied voices. Once you pick one, you might never break up.

David Pierce (@piercedavid) is a senior writer at WIRED.


Wearables Are Our Foray Into Empowering A Healthier Population [1286]

by System Administrator - Thursday, 9 July 2015, 19:24
 

Wearables Are Our Foray Into Empowering A Healthier Population

by Dr. Nick van Terheyden

In 1937, Sylvan Goldman, the owner of the Humpty-Dumpty grocery store chain, invented the shopping cart. Determined to reduce the cost of having to staff his stores with enough clerks to personally help each customer with over-the-counter purchases, he changed the paradigm. He created displays and shelves where people could help themselves, and, to prevent any inconvenience, he invented the shopping cart so they didn’t have to struggle with armfuls of goods. For all intents and purposes, the creation of the shopping cart was the byproduct of a much larger cost-saving initiative.

In many ways, healthcare is undergoing a similar change.  Faced with high and unsustainable costs, the industry needs to change and eventually to move toward a self-service model where patients are more actively involved in their own care. We have already begun to see this in the rise of urgent care facilities, which are reducing the volume of more costly Emergency Room visits; and this movement will continue as patients only see their physician when they are very ill or if they require specialized care.  While this will decrease volumes and help reduce overall healthcare costs, to maintain a healthy population people need to be able to accurately monitor their health in between visits with sufficient insight to know they are well.  Wearables are that first step of this evolution in healthcare.

The Washington Post recently published an interesting article on the wearable movement stating that it represents the latest efforts of large tech companies to “rebuild, regenerate, and reprogram the human body.” It is certainly a dramatic statement, and one that undercuts the reality that wearable technology is about empowering patients, not making them bionic.  It is about enabling them to measure and better understand themselves, and providing them with data they can use to make more informed decisions. In time, wearables will be able to help predict a wearer’s potential decline into illness and help them decide whether or not they need medical intervention or to visit their doctor. For instance, a shirt outfitted with a sensor could send a text message to the wearer that her blood pressure has been on the rise over the last six weeks, offer the choice to submit the trend report to her electronic health record (EHR), and suggest she consult her physician.
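
A minimal sketch of the kind of trend check such a garment's companion app might run is below. It assumes weekly systolic readings and flags a sustained least-squares rise over six weeks; the window, threshold, and message wording are invented for illustration, not clinical guidance.

# Sketch: flag a sustained rise in weekly systolic blood-pressure readings.
# The six-week window, 1 mmHg/week threshold and message are assumptions.

def slope(values):
    # Ordinary least-squares slope of the readings against week index.
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

def check_trend(weekly_systolic, threshold_mmhg_per_week=1.0):
    window = weekly_systolic[-6:]
    if len(window) == 6 and slope(window) > threshold_mmhg_per_week:
        return ("Your blood pressure has been rising for the last six weeks. "
                "Submit the trend to your health record and consult your physician?")
    return None

print(check_trend([118, 121, 120, 124, 127, 129]))   # triggers the alert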

The wearable movement is about ushering in an era of awareness, and not in a navel-gazing way.  It is about becoming more cognizant of our bodies and the impact our daily actions, such as taking the stairs over the elevator, have on our overall health.  It is about accountability, both to ourselves and to others— if we say we want to start exercising more, we can track it and measure it.  Wearables are just one example of how the industry is beginning to design scalable solutions that help people live better, healthier lives while simultaneously alleviating some of the pressures on our costly, complex, and overwhelmed healthcare system.  

From one to many: Scalable health IT solutions

A recent survey on patient expectations found that nearly 70 percent of patients are coming to their appointments with a list of questions for their physicians, and additionally, 40 percent consult an online source, such as WebMD, and 20 percent bring health data from personal health monitoring devices with them. These statistics point to a more engaged population and to the amazing possibilities wearables hold for us as a society.  

The future potential for us to aggregate data in order to drive better insight about larger health trends, such as cancer clusters, is also discussed in The Washington Post article; however, we can do so much more. The emergence of disciplines such as Social Design, a field dedicated to solving societal problems through the thoughtful application of design, highlights this trend of using data to drive scalable solutions and actionable insights. For instance, students are studying how to eradicate urban food deserts, which are neighborhoods that have no direct and consistent access to fresh fruits and vegetables. Just imagine the possibilities if we were to leverage the aggregate health data of individuals to determine how far they walk on average in order to strategically set up a mobile farmers market or health clinic.

This will, of course, take time, and perhaps I was too quick and tidy in my summary of the shopping cart’s history. After all, it was not a success at first. In fact, it was initially a basket. And when the design evolved into a cart, people didn’t like the shape, felt it was clunky, or unbefitting of them—there were myriad reasons for poor adoption. Goldman had to make several iterations in order to arrive at what we now know as the modern shopping cart, but the bottom line is that this is a natural phase of the innovation process.  And health IT is going through these modifications now. For instance, while we can track some health trends and vitals via wearables, we have no real way of driving meaningful action in a consistent way from them.  But that doesn’t mean that future is far off.  

The shopping cart changed our expectations as consumers, and even in a digital commerce world, it remains an icon for the empowered customer. We are the ones, after all, in control. We have access to the items we need and want. We can see our options, do our research, and carefully choose what we buy into and what we do not. We don’t need permission, but we can always ask for help. And this holds true for us as prosumers of healthcare: we, as patients, can choose our comfort level, ask our physicians about our personal data trends, and discuss the options for best managing our health. Wearables are our foray into empowering a healthier population in a cost-effective way.  

Dr. Nick van Terheyden is the CMIO at Nuance Communications, where his insider perspective allows him to put his medical and technology expertise to work for clients who are striving to raise the bar for healthcare delivery, paying attention not just to processes and systems, but to people. Follow him on Twitter.

Opinions expressed by HIT Consultant Contributors are their own.



Wearables Are Turning Your Pets and Other Animals Into Big Data [1629]

by System Administrator - Sunday, 3 January 2016, 21:56
 

 

Wearables Are Turning Your Pets and Other Animals Into Big Data

BY MARC PROSSER

Wearable and ingestible tech for animals is found on and in creatures such as bees and cows, and your dogs and cats. The amount of data generated by the devices is exploding, providing new insights into and a much better understanding of the lives of the creatures around us.

I do not think Mélinda will mind me sharing some very personal information about her: she struggled to produce milk in the time after her firstborn arrived.

Luckily, a new diet changed things for the better, and her milk production shot up.

I have never met Mélinda — and in all likelihood never will. She lives not far from Quebec in Canada, while I live in Tokyo, Japan.

The reason I know about her troubles, and could tell you intimate details about her current health and even what she is eating, is an electronic sensor that sits in her stomach.

Each time Mélinda passes a Wi-Fi point, data from the sensor about pH levels in her stomach and her temperature are transmitted to a local database, and from there can be sent all around the world.

In fact, I am pretty certain Mélinda does not mind her data being shared, as she is a Holstein Friesian cow.

She is one of an ever-increasing number of livestock, pets, and wild animals that are being equipped with wearable or ingestible tech.

The vast amounts of data generated by the devices are leading to scientific discoveries and new, more proactive approaches to how we treat and interact with animals.

In the data lies the promise of previously unobtainable levels of transparency and better understanding of everything from dairy production and the lives of our pets to the training of racehorses and flight patterns of individual bees.

Heads-Up on Your Horse

One of the biggest advantages of wearable and ingestible devices is their ability to provide information about an animal’s condition that we cannot easily observe.

For example, iNOVOTEC Animal Care’s solution, which is what Mélinda is equipped with, enables farmers to catch illnesses much earlier than without it, leading to healthier lives for their livestock, while also saving the farmers money.

It has also proven helpful for anticipating when a cow is in heat. Ingestibles and wearables capable of tracking a cow’s temperature and general activity can improve insemination success rates from roughly 50 percent to nearly 90 percent.
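
As a rough illustration of how temperature and activity readings can be combined to flag likely heat, here is a minimal rule-based sketch in Python. The thresholds, field names, and the idea of comparing against the cow's own baseline are illustrative assumptions, not the detection logic any vendor actually uses.

from dataclasses import dataclass

@dataclass
class DailyRecord:
    temperature_c: float    # body or rumen temperature
    activity_index: float   # e.g. step count, in arbitrary units

def likely_in_heat(today, baseline, temp_rise_c=0.3, activity_factor=1.5):
    # Flag likely heat when temperature rises a few tenths of a degree
    # above the animal's own baseline AND activity jumps sharply.
    # Both thresholds are placeholders chosen for illustration.
    temp_up = today.temperature_c - baseline.temperature_c >= temp_rise_c
    restless = today.activity_index >= activity_factor * baseline.activity_index
    return temp_up and restless

# Example: a 0.4 degree rise plus nearly double the usual activity.
baseline = DailyRecord(temperature_c=38.5, activity_index=100.0)
today = DailyRecord(temperature_c=38.9, activity_index=190.0)
print(likely_in_heat(today, baseline))  # True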

“For the farmer, part of the benefit is that it lowers losses. If cows get sick and are treated with antibiotics, for example, the milk or meat from those animals cannot be sold. If they receive feed that is improperly balanced or blended, their milk production drops. Now we get data relating to those parameters on a continual basis,” says Bia Thomas, one of the founders of iNOVOTEC.

In Australia, Pinker Pinker felt the benefits of wearable tech. The racehorse was a local celebrity that won the $3 million W.S. Cox Plate race.

Pinker’s training had been optimized using a device from E-Trakka, located under her saddle, which gathered data about heart rate, speed, and stride length during each training session.

“We combine the data gathering with online systems. One collects it and feeds into a central database. We have also developed an attachable display which clips onto the saddle and lets the rider see real-time data on the horse’s speed and heart rate,” explains Andrew Stuart, founder of E-Trakka.

The system allows for better, more precise training and prevents overtraining.

For example, one horse suffering from an illness with no outward signs showed a heart rate of 170 beats per minute during a training session, when it should have been around 110.

Training a horse while it is sick means risking permanent damage to the animal. There are even examples of racehorses suddenly dying after hard training, with autopsies revealing that they were sick at the time.
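
A toy version of that kind of check might compare a session's heart rate with the horse's own history at a comparable workload and flag large deviations. The tolerance value and function below are illustrative assumptions, not E-Trakka's actual method.

from statistics import mean

def heart_rate_alert(session_bpm, historical_bpm, tolerance_bpm=20.0):
    # Return True when the heart rate for this session exceeds the
    # horse's usual value at a similar workload by more than the
    # tolerance. The tolerance is a placeholder, not a veterinary rule.
    expected = mean(historical_bpm)
    return session_bpm - expected > tolerance_bpm

# The case described above: roughly 170 bpm where about 110 was expected.
history = [108, 112, 111, 109, 110]
print(heart_rate_alert(170, history))  # True -> pause training and call the vet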

Tracking Buzz

A different kind of wearable tech was used by “Buzz Aldrin” (sorry), who is sadly no longer with us, to help a group of scientists.

The Buzz in question, so nicknamed solely by me, was an Australian bee who, like her sisters, was equipped with a tiny GPS tracker by scientists from the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia’s national science agency.

Plotting the bees’ flight courses and behavior, the scientists were able to analyze the importance of various factors in relation to colony collapse disorder.

Turning Your Pets’ Growls and Barks into Human Speech

Wearable tech for animals is not only finding its way onto farms and wild animals. It is also entering your home.

Wearable ID tags and GPS locators can be attached to your pet’s collar, so you are always able to find your pet — or at least its collar.

Other systems track your pet’s movement when you are not home, and some promise to analyze your dog’s bark and “translate” it. There is even the company No More Woof, which is working on a device that interprets your dog’s brain waves and tells you exactly what your dog is thinking.

Admittedly, the device is very much in the experimental phase, and so far makes a dog look like a cross between Snowball from the animated series Rick and Morty and a confused telemarketer. It is, however, an illustration of the diversity of products and the cutting edge of wearables for animals.

The research company IDTechEx recently released the report "Wearable Technology for Animals 2015–2025," which tracked products from 141 manufacturers. The report predicts explosive growth in the market.

“For pets the uptake is, to a degree, already taking place. Wearable tech in relation to livestock is still moving at a somewhat slower pace, but we are definitely also seeing a marked increase in the market there. The slow uptake could, in part, be because of a cultural conservatism in farming,” says James Hayward, an analyst at IDTechEx.

Turning Animals Into Big Data

Many products are reaching the levels of sophistication, and the numbers of units shipped, at which they can cumulatively produce big data.

For example, E-Trakka has conducted more than 30,000 readings of horses in training, while iNOVOTEC has been gathering and analyzing data for years. Over its average 150-day life span, each of the company’s devices generates 21,600 independent measurements of pH and temperature.
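
As a quick back-of-the-envelope check of what that sampling rate implies (my arithmetic, not a figure supplied by the company), 21,600 readings spread over 150 days is about 144 readings a day, or one every ten minutes:

readings = 21_600
days = 150

per_day = readings / days               # 144 readings per day
minutes_between = (24 * 60) / per_day   # 10 minutes between readings
print(per_day, minutes_between)         # 144.0 10.0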

As with any industry in the throes of a big data revolution, the potential for advances is becoming apparent incrementally. One discovery leads to new questions, more insights, and new potential uses of wearables.

For livestock, the immediate benefits of big data are the identification and proactive treatment of diseases and ailments. This includes insights regarding the influence of feed change on the health of animals.

The next step is increased automation, where the computer systems automatically analyze data and alert farmers and potentially vets if there is a need to change the feed or if a cow is getting sick.

Beyond that lies a future of full transparency from farm to consumer.

Imagine a trip to the supermarket in the near future where you can scan a carton of milk with your phone and immediately see detailed information about where the milk is from, down to what farm produced it and the health of the animals on that farm.

“I think we really are in the early days of a shift where service providers like big dairy producers are going to be required to deliver that degree of transparency. The technology is fast approaching a point where it is possible,” Bia Thomas says.

For racehorses, wearables and data are transforming traditional training methods, making them much more efficient and minimizing the risk of harm to the animals.

E-Trakka is working on systems that let trainers share real-time data across the world, and it’s looking at the possibilities of working with data scientists in collaborative analysis of the data the company has gathered so far.

The data generated by “Buzz Aldrin” is perhaps the best example of how new avenues and possibilities are continuously opening up for wearable/ingestible tech and big data in relation to animals.

The GPS trackers have allowed scientists to analyze the effects of stress factors including disease, pesticides, air pollution, water contamination, diet, and extreme weather on the movements of bees and their ability to pollinate.

While this analysis is valuable in its own right, it becomes doubly so when looking at the Canadian company Bee Vectoring Technology, which is using bumblebees as a delivery method for natural pesticides. The method is far less invasive and much more efficient when it comes to delivering the pesticides where they are supposed to go.

The issue then becomes how to accurately track whether the bees have indeed gone to the places you want them to, and how to encourage them to fly to those places; hence the need for a wearable tracking device capable of generating big data.

This kind of data generation extends to many other species, and it is perhaps the most exciting long-term promise of wearable and/or ingestible devices for animals.

Combining data from different sources in an ecosystem is likely to generate new, detailed insights into how its parts interact and influence each other. It will thereby lead to a much deeper understanding of the ecosystem itself, and in turn of the effect we humans exert on the various ecosystems we are a part of and have created.

Image Credit: Shutterstock.com


Link: http://singularityhub.com


WELCOME TO BRAIN SCIENCE'S NEXT FRONTIER: VIRTUAL REALITY [1606]

by System Administrator - Thursday, December 3, 2015, 21:11

WELCOME TO BRAIN SCIENCE'S NEXT FRONTIER: VIRTUAL REALITY

BY TINA AMIRTHA

AMY ROBINSON, EXECUTIVE DIRECTOR AT THE STARTUP EYEWIRE, IS MAKING NEUROSCIENCE INTO A PLAYGROUND FOR THE HOT TECH DU JOUR.

Virtual reality is here, and brands of all stripes are embracing the tech. The New York Times and Google newly partnered to send more than 1 million cardboard VR viewers to Times subscribers at the beginning of November so they could watch the paper’s first VR documentaries on a smartphone. Magic Leap published a demo video online in October, causing more buzz around VR’s potential. Even Tommy Hilfiger now offers VR sets to its in-store customers so they can watch its recent New York Fashion Week show.
