Neurociencia | Neuroscience




B


BRAIN SIMULATION [967]

by System Administrator - Monday, October 27, 2014, 12:42
 

ARE EFFORTS AT TOTAL BRAIN SIMULATION PUTTING THE CART BEFORE THE HORSE?

Written By: Jason Dorrier

Since it was awarded a one billion euro, decade-long research grant last year, the Human Brain Project has been the center of extreme excitement and heavy criticism. The project aims to simulate the human brain in silicon on a yet-to-be-assembled supercomputer of massive computational power. The goal? Understanding.

In a recent paper (more below), HBP researchers write, “The ultimate prime aim is to imitate and understand the native computations, algorithms, states, actions, and emergent behavior of the brain, as well as promote brain-inspired technology.”

 

The prospect is mind-numbingly self-reflexive—the human brain folds its faculties of analysis in on themselves to understand and reproduce itself. An awe-inspiring idea in its seeming impossibility.

And maybe that’s partly why the project has been a magnet for early condemnation.

The problem, according to critics, is that our limited empirical and theoretical understanding isn’t yet at the level needed for even a simple human brain simulation, and that resources would therefore be better allocated to basic research for now.

Further, in an open letter with almost 800 signatures, a group of neuroscientists warns the HBP is veering from an endeavor of simulation dependent on and informed by empirical neuroscience to a venture that favors technology over scientific rigor.

So, is the HBP putting the billion-euro cart before the horse?

The letter was published in July. The same month, the Human Brain Project’s co-executive director, Richard Frackowiak, wrote a spirited defense of the project.

Frackowiak compared the criticism to a similar missive written in 1990, the first year of the Human Genome Project. That earlier letter accused the Human Genome Project of “mediocre science, terrible science policy”—criticism the project later rose above by successfully sequencing the first complete human genome in 2003.

The Human Brain Project, he said, will likewise overcome early “teething troubles” to open an era of unified brain research in which neuroscience, computing, and medicine work together to revolutionize our understanding of the brain.

 

Frackowiak said data isn’t the problem. The challenge is systematizing it, making sense of the nearly 100,000 annual neuroscience papers, the riot of patient data out of hospitals. The HBP, which Frackowiak describes as the CERN of neuroscience, is, among other things, an attempt to unite and organize resources.

But to do that you need your digital ducks in a row first and foremost.

For that reason, the HBP is intentionally heavy on computing in the beginning, and the early work is to devise an effective digital approach to organizing existing data. They hope to have a number of specialized databases up and running by 2016.

“Far from being sidelined, neuroscience remains front and centre in the HBP,” Frackowiak wrote. “The [information and communications technologies] tools are meant as a scaffold; a bridge to support a convergence of fields that is already underway.”

And as for funding, he went on, it sounds like a lot, but at €50 million annually (for the core project), the HBP is only 5% of the European neuroscience budget. Yes, that’s a lot of money for one project, but alternatively, it allows that one project to “go big.”

How convincing has this argument been? To some scientists, not very, it would seem. Several hundred additional signatories have joined the critical open letter since July.

Perhaps it’s no surprise, then, that two HBP researchers, Yadin Dudai and Kathinka Evers, attempt to set the parameters of the discussion as thoroughly as they can in a recent paper titled, “To Simulate or Not to Simulate: What Are the Questions?”

Published in the neuroscience journal Neuron, Dudai and Evers ask: What is simulation; what is its role in science; what challenges face those attempting to simulate the brain; and what are realistic expectations for such an endeavor?

The paper is sufficiently humble in its approach, not claiming to bring definite answers or wade into the funding debate—though, in truth, that probably can’t be avoided—and it doesn’t shy away from identifying the project’s biggest challenges.

First, they say, simulation is an established (and increasingly crucial) scientific tool with a proven track record in neuroscience and other disciplines. In addition to following the maxim “only the one who makes something can fully understand it,” simulation allows us to artificially test hypotheses when testing the real system is costly, risky, or unethical.

But we need to temper our expectations a bit in terms of what brain simulation can do.

 

We do have data, they say, but probably not enough data. Not yet. Further, we don’t have enough high level theory either. We have known unknowns—but likely lots of undiscovered unknowns too.

Lacking solid benchmarks and top-down theory “may lead much effort astray.”

And our “gaps in understanding” may or may not be acceptable depending on the level of expected explanatory power. How far can we trust an incomplete simulation? Can it ever be complete?

Further, we need to define what we mean when we say “brain.” A useful simulation shouldn’t reproduce the brain in isolation. It is a complex adaptive system nested in another complex adaptive system—that is, the brain and the body are inseparable.

The mind arises in this interdependent body-brain relationship, and any simulation of it must not only take that link into account, but also acknowledge the limitations it imposes. We can only simulate so much of the body and its environment.

Of all the challenges the paper acknowledges, the least of them is computing power. The HBP’s brain simulation is expected to require exascale computing—a few orders of magnitude more than today’s most powerful supercomputers.

Dudai and Evers note, almost offhandedly, that it is likely the requisite computing power will be available before problems of incomplete knowledge are solved.

The least mundane part of the discussion (if the most esoteric) comes at the end of the paper when the authors question whether brain simulation must exhibit consciousness to be useful. Their surprising answer is, yes, at least for some of the project’s research objectives, like studying mental illness, which is closely related to consciousness.

Dudai and Evers write, “How adequate or informative can a simulation of, say, depression or anxiety be if there is no conscious experience in the simulation?”

There’s no guarantee consciousness will arise, proving its existence will be contentious, and such an outcome isn’t likely around the corner—still, Dudai and Evers believe it’s valuable to begin the discussion now in preparation for such possibilities.

 

Ultimately, the HBP’s success, they say, hinges on how well the project is timed with growing scientific knowledge. It ought to employ a strategy of detailed simulation—even down to the cellular level—tied together by higher level, general laws of the brain, as they’re discovered. Above all, it will necessarily need to advance step by step. Indeed, the HBP’s initial goal is simulating the lowly mouse cortex.

Simply? The HBP is a giant undertaking likely to evolve in the coming years.

As for the controversy, that too will probably continue in parallel. The Human Genome Project was criticized throughout much of its life and even pronounced a failure seven years in—and yet the project was finished two years ahead of schedule.

Other than the fact they’re both big science, it isn’t clear the two projects are perfectly analogous, but it’s also likely too early to call the game. Even as computing power is growing fast, so too are methods for observing and learning about the brain.

And the HBP’s timing and success aside, that the prospect of our brain successfully reproducing itself in silicon is even worthy of debate—that’s a breathtaking thought.

Image Credit: Human Brain Project

This entry was posted in Artificial Intelligence, Brain, Computing, Future, Health, Singularity, Tech and tagged big science, Blue Brain Project, brain, brain research, brain simulation, Henry Markram, Human Brain Project, Human Genome Project, Kathinka Evers, neuroscience, Richard Frackowiak, Yadin Dudai.

Link: http://singularityhub.com


Brain Stimulation [803]

by System Administrator - Friday, August 29, 2014, 23:19
 

Prepare to Be Shocked

Four predictions about how brain stimulation will make us smarter

Brain-Computer Interfaces [1663]

by System Administrator - Monday, February 15, 2016, 12:19
 

Sci-Fi Short Imagines How Brain-Computer Interfaces Will Make Us “Connected”

BY DAVID J. HILL

Social life is defined by connections, and more than ever, the fabric of our social lives is woven digitally. Before the Internet picked up speed, local hubs like neighborhoods, churches, and community centers provided a variety of opportunities to develop relationships. Today, people often connect through social networks, where users exchange the familiarity of physical proximity for the transparency of real-time availability and exposure. But whether through physical or digital connections, there is a price to be paid for this togetherness.

Anyone concerned about increased surveillance in society has probably questioned whether sacrificing privacy is worth the perception of greater security. Just as this loss can be justified in the face of physical harm, so too does the value of privacy wane when faced with the mental anguish of loneliness.

What if technology provided the ultimate resolution of this existential crisis by allowing you to plug your brain into a boundless, cognitive melting pot with other humans?

Appealing as this may be, extreme connectedness would come at the price of privacy. For those wrestling with the existential crisis of modern life, mind-to-mind melding may be the only hope they feel they have left. "Connected", a sci-fi short film by Luke Gilford which debuted on Motherboard, gives us a brief glimpse into what the road looks like in a future that's arguably just around the corner.


Link: http://singularityhub.com


Brain-Controlling Sound Waves Used to Steer Genetically Modified Worms [1466]

by System Administrator - Saturday, September 26, 2015, 14:56
 

Brain-Controlling Sound Waves Used to Steer Genetically Modified Worms

By Shelly Fan

Move over optogenetics, there’s a new cool mind-bending tool in town.

A group of scientists, led by Dr. Sreekanth Chalasani at the Salk Institute in La Jolla, California, discovered a new way to control neurons using bursts of high-pitched sound pulses in worms.

Dubbed “sonogenetics,” scientists say the new method can control brain, heart and muscle cells directly from outside the body, circumventing the need for invasive brain implants such as the microfibers used in optogenetics.

The Trouble With Light

Almost exactly a decade ago, optogenetics changed the face of neuroscience by giving scientists a powerful way to manipulate neuronal activity using light beams.

 

The concept is deceptively simple: using genetic engineering, scientists introduced light-sensitive protein channels into certain populations of neurons in mice that were previously impervious to light. Then, by shining light through microfiber cables implanted in the brain, the scientists can artificially activate specific neural networks, which in turn changes the behavior of the mouse.

Since then, the technique has clocked an impressive list of achievements, including making docile mice aggressive, giving them on-demand boners and even implanting fake scary memories. Last month, a group of neurologists from San Diego got FDA approval to begin the very first clinical trial that aims to use optogenetics in humans to treat degenerative blindness.

Yet optogenetics is not without faults. For one, the hardware that delivers light has to be threaded into the brain, which — unfortunately but unavoidably — physically traumatizes the brain. This makes its transition for human use difficult, particularly for stimulating deeper brain regions.

It’s not just an academic problem. Some incurable brain disorders, such as Parkinson’s disease, benefit from focused brain stimulation. Many scientists have predicted that optogenetics could be used for such disorders, but since the target brain area is deeply buried in the brain, clinicians have backed off from trying it in humans (so far).

What’s more, the stimulation isn’t as precise as scientists want. Just like particles in the atmosphere that scatter fading sunlight every evening, the physical components of the brain also scatter light. The result is that neuronal activation isn’t exactly targeted — in other words, scientists may be inadvertently activating neurons that they’d rather stay silent.

From Light to Sound

Chalasani and his colleagues believe that they have circumvented both issues by swapping light with sound.

The team was obviously deeply inspired by optogenetics. Instead of using the light-sensitive protein channelrhodopsin-2, the team hunted down a protein called TRP-4 that responds to mechanical stimulation such as vibrations. When blasted with ultrasound, TRP-4 opens a pore in the neuronal membrane, which allows ions to rush in — this biophysical response activates (or “fires”) a neuron.

 

C. elegans

Then, similar to optogenetics, the team delivered the genetic blueprint that encodes TRP-4 into the nematode worm C. elegans using a virus. (C. elegans, with only 302 clearly mapped-out neurons, is a neuroscience darling for reductionist studies.)

Sophisticated genetic tools restrict TRP-4 to only certain types of neurons — for example, those that control movement, sensation or higher brain functions such as motivation. By activating these neurons and watching the resulting behavior, scientists can then tease out which set of neurons are responsible for what behavior.

In one experiment, the team targeted motor neurons in the primitive worm. Realizing that plain old ultrasound wasn’t strong enough to activate the TRP-4-expressing neurons, the researchers embedded the worms in microbubbles to amplify the sound waves.

When transmitted into the worms through their opaque skin, the sound waves reliably activated motor neurons that were peppered with TRP-4. As a result, scientists could move the worms to preset goal destinations, as if controlling the worms with a joystick.

“In contrast to light, low-frequency ultrasound can travel through the body without any scattering,” said Chalasani in a press release. Unlike light, the sound pulses — too high-pitched for humans to hear — can be delivered to target neurons from the top of the skull, without the need for brain implants.

You can imagine putting an ultrasound cap on a person’s head and using that to switch neurons on and off, Chalasani speculates.

Dr. Stuart Ibsen, the lead author of the paper, agrees. “This could be a big advantage when you want to stimulate a region deep in the brain without affecting other regions,” he said.

 

So far, the technique has only been tested in the nematode worm, but Chalasani says that the team is hard at work transforming it for use in mammals such as mice.

Like optogenetics, scientists could insert TRP-4 into certain populations of neurons with genetic engineering. They could then directly inject microbubbles into the bloodstream of mice to amplify the ultrasonic waves inside the body, and activate targeted neurons with a cap that generates the sound waves.

Unlike optogenetics, however, sonogenetics has a bit of a time lag. It’s basic physics: sound travels slower than light, which means there will be a larger delay between when scientists give the “go” signal versus when neurons actually activate.

“If you're working on the cortex of a human brain, working on a scale of a tenth of a second, optogenetics is going to do better,” concedes Chalasani, but considering that sonogenetics is non-invasive, “these will be complementary techniques.”

With optogenetics dominating the mind-control game, it’s hard to say if sonogenetics will take off. But Chalasani is optimistic.

“When we make the leap into therapies for humans, I think we have a better shot with noninvasive sonogenetics approaches than with optogenetics,” he said.

Image Credit: Shutterstock.com, Wikimedia Commons; video and final image courtesy of the Salk Institute.


Brain-Sensing Headband [774]

by System Administrator - Wednesday, August 20, 2014, 18:13
 

 

Can this brain-sensing headband give you serenity?

By Sally Hayden, for CNN

STORY HIGHLIGHTS

- Ariel Garten's high-tech headband monitors brain activity

- Called 'Muse,' the device transmits information to your computer

- Can pour beer, control music volume, turn on lights just by thinking

- By tracking brain waves, could help users reduce stress

 

Mind games: Meet Ariel Garten, 35-year-old CEO of tech company InteraXon. The business has created a headband which monitors brain activity, called 'Muse.' It claims to help reduce stress as the user focuses on their brain waves, which appear on a screen.

Drink it in: The headband has been used in a number of experiments, including one where a user urged a tap to pour beer through the power of concentration. 

Chairwoman: Garten even used Muse to power this levitating chair.

Music to the ears: In 2009, InteraXon orchestrated a brainwave-controlled musical and visual performance at the Ontario Premier's Innovation Awards.

Light show: For the 2010 Vancouver Winter Games, Muse users controlled a light show over Niagara Falls, similar to the one pictured in this 2013 display.


Editor's note: Leading Women connects you to extraordinary women of our time -- remarkable professionals who have made it to the top in all areas of business, the arts, sport, culture, science and more.

(CNN) -- Imagine a gadget that knows your mind better than you do.

Picture a device that can rank the activities in your life that bring you joy, or interject your typed words with your feelings.

One woman has helped create just that.

Ariel Garten believes that the brain -- with its 100 billion neurons that receive, register, and respond to thoughts and impulses -- has the power to accomplish almost anything, if only its power could be properly harnessed.

Her company InteraXon, which she co-founded with Trevor Coleman, has produced Muse, a lightweight headband that uses electroencephalography (EEG) sensors to monitor your brain activity, transmitting that information to a smartphone, laptop or tablet.

The high-tech headband has been used to pour beer, levitate chairs, or control the lights -- all without the wearer lifting a finger.

And in a world where technology is often blamed for raising stress levels, 35-year-old Garten believes her $300 headband could even help calm us down.

The Canadian -- who has worked as a fashion designer, art gallery director, and psychotherapist -- spoke to CNN about her influences and vision for the future of technology.

CNN: How does Muse help reduce stress?

Ariel Garten: Muse tracks your brain activity. Your brain sends electro-signals just like your heart does, and this headband is like a heart rate monitor.

As it tracks your brain activity, it sends that information to your computer, smartphone or tablet, where you can do exercises that track your brain activity in real time, and give you real time feedback to teach you how to calm and settle your mind.

 

The headband allows the wearer to see their brain activity when connected to a smartphone, tablet or laptop.
COURTESY INTERAXON

CNN: Technology is often blamed for making people stressed -- is there a certain irony in also using it to calm us down?

AG: Technology can definitely be responsible for making people stressed because it pulls at our attention, it distracts us, it increases the number of demands and in some ways decreases our own agency.

We're very interested in inverting that on its head and creating solutions that help you calm yourself; that can help you stay grounded, choose what to focus your attention on, and manage your own mind and your response to the world.

"Technology itself is not the evil, it's the way that it's implemented."

 Ariel Garten, CEO, InteraXon

Technology itself is not the evil, it's the way that it's implemented. Technology can have some great solutions for us. Look at all the amazing medical interventions that we have.

CNN: You've suggested Muse could provide medical benefits for children with ADD -- how?

AG: To be clear, Muse is not a medical device, it's a computer product. Exercises using Muse have suggested that they can help people with ADHD, by helping you increase your state of focused attention.

We've had amazing emails -- just recently we had an email from somebody who is 29 years old with ADHD and after just two days of using Muse had noticed a benefit. Three weeks out they sent me an email saying 'this is not a game changer, this is a life changer.'

 

The Muse headset up close.
COURTESY INTERAXON

CNN: Have you had interest in the product from any unexpected places?

AG: We've been contacted by a lot of sports stars and sports celebrities -- people wanting to use it to improve their sports game. We were surprised because we're so used to thinking of it as a cognitive tool.

"We can't read your thoughts, we can't read your mind"

Ariel Garten, CEO InteraXon

There's been quite a number of research labs using Muse, and they've been looking at applications in depression, epilepsy, and communications.

And then we've also had a lot of interest from companies interested in integrating our technology into their wellness and development programs. Companies like Google wanting to offer this to their employees to help improve their productivity and their wellness.

CNN: Do you have any reservations about the development of mind-mapping devices?

AG: In InteraXon we believe very strongly that you own all your own data. We have a very strict privacy policy. It's like a heart rate monitor, it's very binary so we can't read your thoughts, we can't read your mind. But we're very much into leading the way on the very responsible use of this technology.

 

Ariel Garten speaks at the What's Next panel at Engadget Expand.
COURTESY STEVE JENNINGS/GETTY IMAGES FOR ENGADGET

CNN: What inspired you to get involved in this area?

AG: My background is in neuroscience, design and psychotherapy, and I'm very interested in helping people understand their own minds and use their minds more productively in their own life. Our brains get in our way in so many ways.

The things that we think, the feelings that we have, all of these things can be beautiful supports to our life and encourage the lives that we live. But they can also cause all kinds of anxiety, worries, all of these things that hold us back.

"As women, we are so good at holding ourselves back with the thoughts that are in our heads."

 Ariel Garten, CEO, InteraXon

Particularly women are a huge inspiration to me because we're so good at holding ourselves back with the thoughts that are in our heads. We're constantly worried about things like 'does this person think this way about me?' or 'have I done well enough?' or 'have I achieved as much as I'm supposed to?'

We have these dialogues within ourselves that can be really debilitating, and you know the answer is 'of course you're good enough,' and 'of course you've done well enough,' and 'of course you can achieve that.' And if you can learn to understand and gain control over your own internal dialogue, you can really learn to sort of undo the shackles that hold you back in your daily life, and your career, and your relationships.


Link: http://edition.cnn.com/2014/08/18/tech/can-this-brain-sensing-headband/index.html


Brain-to-text: decoding spoken phrases from phone representations in the brain [1255]

by System Administrator - Thursday, June 25, 2015, 16:33
 

Brain-to-text: decoding spoken phrases from phone representations in the brain

by Christian Herff, Dominic Heger, Adriana de Pesters, Dominic Telaar, Peter Brunner, Gerwin Schalk and Tanja Schultz

Cognitive Systems Lab, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany | New York State Department of Health, National Center for Adaptive Neurotechnologies, Wadsworth Center, Albany, NY, USA | Department of Biomedical Sciences, State University of New York at Albany, Albany, NY, USA | Department of Neurology, Albany Medical College, Albany, NY, USA

It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
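The word error rates quoted above use the standard ASR metric: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded word sequence into the reference transcript, divided by the length of the reference. For readers unfamiliar with the metric, here is a minimal Python sketch of it (the generic definition, not the authors' evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with standard Levenshtein dynamic programming over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# 2 errors against 6 reference words -> WER of about 0.33
print(word_error_rate("four score and seven years ago",
                      "four score in seven years"))
```

A reported WER of 25% thus means roughly one word in four had to be corrected to recover the spoken sentence.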

1. Introduction

Communication with computers or humans by thought alone is a fascinating concept and has long been a goal of the brain-computer interface (BCI) community (Wolpaw et al., 2002). Traditional BCIs use motor imagery (McFarland et al., 2000) to control a cursor or to choose between a selected number of options. Others use event-related potentials (ERPs) (Farwell and Donchin, 1988) or steady-state evoked potentials (Sutter, 1992) to spell out texts. These interfaces have made remarkable progress in recent years, but are still relatively slow and unintuitive. The possibility of using covert speech, i.e., imagined continuous speech processes recorded from the brain, for human-computer communication may improve BCI communication speed and also increase their usability. Numerous members of the scientific community, including linguists, speech processing technologists, and computational neuroscientists have studied the basic principles of speech and analyzed its fundamental building blocks. However, the high complexity and agile dynamics in the brain make it challenging to investigate speech production with traditional neuroimaging techniques. Thus, previous work has mostly focused on isolated aspects of speech in the brain.

Several recent studies have begun to take advantage of the high spatial resolution, high temporal resolution and high signal-to-noise ratio of signals recorded directly from the brain [electrocorticography (ECoG)]. Several studies used ECoG to investigate the temporal and spatial dynamics of speech perception (Canolty et al., 2007; Kubanek et al., 2013). Other studies highlighted the differences between receptive and expressive speech areas (Towle et al., 2008; Fukuda et al., 2010). Further insights into the isolated repetition of phones and words have been provided in Leuthardt et al. (2011b) and Pei et al. (2011b). Pasley et al. (2012) showed that auditory features of perceived speech could be reconstructed from brain signals. In a study with a completely paralyzed subject, Guenther et al. (2009) showed that brain signals from speech-related regions could be used to synthesize vowel formants. Following up on these results, Martin et al. (2014) decoded spectrotemporal features of overt and covert speech from ECoG recordings. Evidence for a neural representation of phones and phonetic features during speech perception was provided in Chang et al. (2010) and Mesgarani et al. (2014), but these studies did not investigate continuous speech production. Other studies investigated the dynamics of the general speech production process (Crone et al., 2001a,b). A large number of studies have classified isolated aspects of speech processes for communication with or control of computers. Deng et al. (2010) decoded three different rhythms of imagined syllables. Neural activity during the production of isolated phones was used to control a one-dimensional cursor accurately (Leuthardt et al., 2011a). Formisano et al. (2008) decoded isolated phones using functional magnetic resonance imaging (fMRI). Vowels and consonants were successfully discriminated in limited pairings in Pei et al. (2011a). Blakely et al. (2008) showed robust classification of four different phonemes. Other ECoG studies classified syllables (Bouchard and Chang, 2014) or a limited set of words (Kellis et al., 2010). Extending this idea, the imagined production of isolated phones was classified in Brumberg et al. (2011). Recently, Mugler et al. (2014b) demonstrated the classification of a full set of phones within manually segmented boundaries during isolated word production.

To make use of these promising results for BCIs based on continuous speech processes, the analysis and decoding of isolated aspects of speech production has to be extended to continuous and fluent speech processes. While relying on isolated phones or words for communication with interfaces would improve current BCIs drastically, communication would still not be as natural and intuitive as continuous speech. Furthermore, to process the content of the spoken phrases, a textual representation has to be extracted instead of a reconstruction of acoustic features. In our present study, we address these issues by analyzing and decoding brain signals during continuously produced overt speech. This enables us to reconstruct continuous speech into a sequence of words in textual form, which is a necessary step toward human-computer communication using the full repertoire of imagined speech. We refer to our procedure that implements this process as Brain-to-Text. Brain-to-Text implements and combines understanding from neuroscience and neurophysiology (suggesting the locations and brain signal features that should be utilized), linguistics (phone and language model concepts), and statistical signal processing and machine learning. Our results suggest that the brain encodes a repertoire of phonetic representations that can be decoded continuously during speech production. At the same time, the neural pathways represented within our model offer a glimpse into the complex dynamics of the brain's fundamental building blocks during speech production.

2. Materials and Methods

2.1. Subjects

Seven epileptic patients at Albany Medical Center (Albany, New York, USA) participated in this study. All subjects gave informed consent to participate in the study, which was approved by the Institutional Review Board of Albany Medical College and the Human Research Protections Office of the US Army Medical Research and Materiel Command. Relevant patient information is given in Figure 1.

Figure 1. Electrode positions for all seven subjects.
Captions include age [years old (y/o)] and sex of subjects. Electrode locations were identified in a post-operative CT and co-registered to preoperative MRI. Electrodes for subject 3 are on an average Talairach brain. Combined electrode placement in joint Talairach space for comparison of all subjects. Subject 1 (yellow), subject 2 (magenta), subject 3 (cyan), subject 5 (red), subject 6 (green), and subject 7 (blue). Subject 4 was excluded from joint analysis as the data did not yield sufficient activations related to speech activity (see Section 2.4).

2.2. Electrode Placement

Electrode placement was solely based on clinical needs of the patients. All subjects had electrodes implanted on the left hemisphere and covered relevant areas of the frontal and temporal lobes. Electrode grids (Ad-Tech Medical Corp., Racine, WI; PMT Corporation, Chanhassen, MN) were composed of platinum-iridium electrodes (4 mm in diameter, 2.3 mm exposed) embedded in silicon with an inter-electrode distance of 0.6-1 cm. Electrode positions were registered in a post-operative CT scan and co-registered with a pre-operative MRI scan. Figure 1 shows electrode positions of all 7 subjects and the combined electrode positions. To compare average activation patterns across subjects, we co-registered all electrode positions in common Talairach space. We rendered activation maps using the NeuralAct software package (Kubanek and Schalk, 2014).

2.3. Experiment

We recorded brain activity during speech production of seven subjects using electrocorticographic (ECoG) grids that had been implanted as part of presurgical procedures preparatory to epilepsy surgery. ECoG provides electrical potentials measured directly on the brain surface at a high spatial and temporal resolution, unfiltered by skull and scalp. ECoG signals were recorded by BCI2000 (Schalk et al., 2004) using eight 16-channel g.USBamp biosignal amplifiers (g.tec, Graz, Austria). In addition to the electrical brain activity measurements, we recorded the acoustic waveform of the subjects' speech. The subjects' voice data was recorded with a dynamic microphone (Samson R21s) and digitized using a dedicated g.USBamp in sync with the ECoG signals. The ECoG and acoustic signals were digitized at a sampling rate of 9600 Hz.

During the experiment, text excerpts from historical political speeches (i.e., the Gettysburg Address, Roy and Basler, 1955), JFK's Inaugural Address (Kennedy, 1989), a children's story (Crane et al., 1867) or Charmed fan-fiction (Unknown, 2009) were displayed on a screen about 1 m from the subject. The texts scrolled across the screen from right to left at a constant rate. This rate was adjusted to be comfortable for the subject prior to the recordings (rate of scrolling text: 42–76 words/min). During this procedure, subjects were familiarized with the task.

Each subject was instructed to read the text aloud as it appeared on the screen. A session was repeated 2–3 times depending on the mental and physical condition of the subjects. Table 1 summarizes data recording details for every session. Since the amount of data of the individual sessions of subject 2 is very small, we combined all three sessions of this subject in the analysis.

 

 Table 1. Data recording details for every session.

We cut the read-out texts of all subjects into 21–49 phrases, depending on the session length, along pauses in the audio recording. The audio recordings were phone-labeled using our in-house speech recognition toolkit BioKIT (Telaar et al., 2014; see Section 2.5). Because the audio and ECoG data were recorded in synchronization (see Figure 2), this procedure allowed us to identify the ECoG signals that were produced at the time of any given phone. Figure 2 shows the experimental setup and the phone labeling.

Figure 2. Synchronized recording of ECoG and acoustic data.
Acoustic data are labeled using our in-house decoder BioKIT, i.e., the acoustic data samples are assigned to corresponding phones. These phone labels are then imposed on the neural data.
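The paper does not give the parameters of the pause-based phrase cutting described above. The sketch below shows one conventional way to do it, using a frame-level RMS energy threshold; the frame size, threshold, and minimum pause length are illustrative assumptions, not values from the paper:

```python
import numpy as np

def split_on_pauses(audio, sr=9600, frame_ms=25, min_pause_s=0.5, thresh=0.05):
    """Cut a mono recording into phrases at long low-energy stretches.
    A frame is 'silent' if its RMS energy is below `thresh` times the median
    frame energy; a silent run of at least `min_pause_s` ends a phrase."""
    frame = int(sr * frame_ms / 1000)
    n_frames = len(audio) // frame
    rms = np.sqrt(np.mean(
        audio[:n_frames * frame].reshape(n_frames, frame) ** 2, axis=1))
    silent = rms < thresh * np.median(rms)

    min_pause = int(min_pause_s * 1000 / frame_ms)   # pause length in frames
    phrases, start, run = [], 0, 0
    for i, is_silent in enumerate(silent):
        if is_silent:
            run += 1
            if run == min_pause:              # pause confirmed: close the phrase
                end = (i - min_pause + 1) * frame
                if end > start:
                    phrases.append(audio[start:end])
            if run >= min_pause:              # next phrase starts after the pause
                start = (i + 1) * frame
        else:
            run = 0
    if start < len(audio):                    # keep whatever trails the last pause
        phrases.append(audio[start:])
    return phrases
```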

2.4. Data Pre-Selection

In an initial data pre-selection, we tested whether speech activity segments could be distinguished from those with no speech activity in ECoG data. For this purpose, we fitted a multivariate normal distribution to all feature vectors (see Section 2.6 for a description of the feature extraction) containing speech activity derived from the acoustic data and one to feature vectors when the subject was not speaking. We then determined whether these models could be used to classify general speech activity above chance level, applying a leave-one-phrase-out validation.
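In outline, this pre-selection amounts to fitting one Gaussian per class (speech vs. no speech) and scoring held-out phrases. A minimal sketch of the leave-one-phrase-out procedure, assuming per-phrase feature matrices and frame-level speech labels have already been extracted (the actual feature definition is in Section 2.6):

```python
import numpy as np
from scipy.stats import multivariate_normal

def speech_vs_silence_accuracy(phrases):
    """Leave-one-phrase-out classification of speech vs. non-speech frames.
    `phrases` is a list of (features, labels) pairs: features is an
    (n_frames, n_dims) array of ECoG feature vectors, labels is a boolean
    array with True marking speech frames. Illustrative sketch only."""
    correct = total = 0
    for held_out in range(len(phrases)):
        train = [p for i, p in enumerate(phrases) if i != held_out]
        X = np.vstack([f for f, _ in train])
        y = np.concatenate([l for _, l in train])
        # One multivariate normal per class, fitted on the training phrases.
        models = {}
        for cls in (True, False):
            Xc = X[y == cls]
            models[cls] = multivariate_normal(
                mean=Xc.mean(axis=0), cov=np.cov(Xc, rowvar=False),
                allow_singular=True)
        Xt, yt = phrases[held_out]
        pred = models[True].logpdf(Xt) > models[False].logpdf(Xt)
        correct += np.sum(pred == yt)
        total += len(yt)
    return correct / total
```

The resulting accuracy would then be compared against the randomization baseline described in Section 2.11 to decide whether a session is kept.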

Based on this analysis, both sessions of subject 4 and session 2 of subject 5 were rejected, as they did not show speech related activations that could be classified significantly better than chance (t-test, p > 0.05). To compare against random activations without speech production, we employed the same randomization approach as described in Section 2.11.

2.5. Phone Labeling

Phone labels of the acoustic recordings were created in a three-step process using an English automatic speech recognition (ASR) system trained on broadcast news. First, we calculated a Viterbi forced alignment (Huang et al., 2001), which is the most likely sequence of phones for the acoustic data samples given the words in the transcribed text and the acoustic models of the ASR system. In a second step, we adapted the Gaussian mixture model (GMM)-based acoustic models using maximum likelihood linear regression (MLLR) (Gales, 1998). This adaptation was performed separately for each session to obtain session-dependent acoustic models specialized to the signal and speaker characteristics, which is known to increase ASR performance. We estimated a MLLR transformation from the phone sequence computed in step one and used only those segments which had a high confidence score that the segment was emitted by the model attributed to them. Third, we repeated the Viterbi forced alignment using each session's adapted acoustic models yielding the final phone alignments. The phone labels calculated on the acoustic data are then imposed on the ECoG data.
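A production forced aligner wraps HMM acoustic models, pronunciation dictionaries, and the MLLR adaptation described above, but the Viterbi core is a compact dynamic program: given the transcript's phone sequence and a per-frame score for each phone, find the best monotonic assignment of frames to phones. The toy version below is only meant to make that idea concrete and is far simpler than BioKIT's actual implementation:

```python
import numpy as np

def forced_alignment(loglik):
    """Viterbi forced alignment reduced to its core.
    loglik[t, k] = log-likelihood of frame t under the k-th phone of the
    known transcript. Frames map to phones monotonically and every phone
    covers at least one frame. Returns one phone index per frame."""
    T, K = loglik.shape
    score = np.full((T, K), -np.inf)   # best log-score ending at (frame, phone)
    back = np.zeros((T, K), dtype=int)
    score[0, 0] = loglik[0, 0]         # the first frame starts the first phone
    for t in range(1, T):
        for k in range(K):
            stay = score[t - 1, k]                               # remain in phone k
            advance = score[t - 1, k - 1] if k > 0 else -np.inf  # enter phone k
            if advance > stay:
                score[t, k], back[t, k] = advance + loglik[t, k], k - 1
            else:
                score[t, k], back[t, k] = stay + loglik[t, k], k
    path = [K - 1]                     # the last frame must end the last phone
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]
```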

Due to the very limited amount of training data for the neural models, we reduced the number of distinct phone types and grouped similar phones together for the ECoG models. The grouping was based on phonetic features of the phones. See Table 2 for the grouping of phones.

 

Table 2. Grouping of phones.

More: http://journal.frontiersin.org

 


Braingear Moves Beyond Electrode Swim Caps [998]

by System Administrator - Wednesday, November 19, 2014, 20:13
 

Exponential Medicine: Braingear Moves Beyond Electrode Swim Caps

BY JASON DORRIER

If the last few decades in information technology have been characterized by cheaper, faster, and smaller computer chips, the next few decades will add cheaper, faster, and smaller sensors. Chips are the brains. Now they have senses.

Whereas most of these sensors have been available for years, it’s only relatively recently that they’ve gone from hulking million-dollar devices in labs to affordable consumer products embedded in smartphones and a growing list of accessories.

Most of these sensors are already incredibly cheap (on the order of a few dollars), tiny, increasingly accurate, and providing a profusion of software services on smartphones. But more recently, sensors have begun measuring the human body.

In recent years, we’ve seen smartphone motion sensors adapted for use in wearable devices to track physical activity or sleep. These have had uneven success with consumers. But the next steps may change that, as devices are able to accurately measure vital processes driven by the heart, lungs, blood—and even the brain.

Two brain activity devices were on display at Singularity University’s Exponential Medicine. The first, presented by InteraXon cofounder and CEO Ariel Garten, is the Muse headband, one of the first brain sensors available to consumers.

 

The Muse is a band worn across the forehead and secured behind the ears. Using seven electroencephalography (EEG) sensors strung across the band, the device measures levels of brain activity, sending the information to a smartphone.

What’s the point? The Muse is a biofeedback device. Coupled with early apps, it allows users to hear their brain—high levels of activity are sonified as crashing waves and wind, calm states as gentle, lapping waves and birdsong. By being consciously aware of brain activity, we can learn to control it, thus taming anxiety.
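InteraXon has not published Muse's feedback algorithm, but the loop described above can be approximated with basic signal processing: estimate band powers from a short EEG window and map a calm-versus-busy score onto the soundscape. Everything in this sketch (the band limits, the alpha/beta ratio, the 256 Hz sampling rate) is an assumption for illustration, not the product's actual method:

```python
import numpy as np

def calm_score(eeg_window, sr=256):
    """Score how 'calm' a window of single-channel EEG looks, in [0, 1].
    Uses the classic heuristic that relaxed wakefulness shows relatively
    more alpha (8-12 Hz) power and a busy mind more beta (13-30 Hz)."""
    windowed = eeg_window * np.hanning(len(eeg_window))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / sr)

    def band_power(lo, hi):
        return power[(freqs >= lo) & (freqs < hi)].sum()

    alpha = band_power(8, 12)
    beta = band_power(13, 30)
    return alpha / (alpha + beta + 1e-12)   # 1.0 = calm, 0.0 = busy

# Feedback mapping: louder storm sounds for a busier mind.
# storm_volume = 1.0 - calm_score(latest_window)
```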

The Muse, born in an Indiegogo campaign almost two years ago, went on sale in August. At $299, it isn’t cheap, and neither is it inconspicuous. Even so, a few Exponential Medicine attendees joined Garten, wearing their Muse for the duration.

Garten’s Muse, however, isn’t the only compact brain sensing device out there. Philip Low presented on the iBrain, a similarly compact device—Low claims the latest iteration is the smallest brain sensor available—slung across the forehead.

In contrast to the Muse, iBrain is primarily used in research. Low said his goal was to fold as much brain sensing tech as possible into a single device. He was only able to do this by writing specialized algorithms to consolidate all this in a single channel.

“We’re putting much less on people and getting much more out of them,” Low said. “And we can do this in their homes.”

 

The iBrain has been used to study a number of disorders including ALS, autism, and depression. Low said in one Navy study they used the device to make an accurate diagnosis of PTSD and SSRI treatment in one patient with data from the iBrain alone.

But Low thinks the device has potential beyond research—it might prove an effective brain-computer interface. Famously, Stephen Hawking experimented with the iBrain as a communication device. And another ALS patient, Augie Nieto, later used the system to move a cursor on a screen and select letters to form words.

The greatest strength of Low and Garten’s brain sensing technology is that it is non-invasive—some brain-computer interfaces require implants in the brain to work consistently—and in the coming years, Garten expects more improvement.

While EEG caps still reign supreme in most high fidelity brain experiments we’ve seen lately—like Adam Gazzaley’s Glass Brain or this pair of brain-to-brain communication experiments—better software and sensors may change that.

In her talk, Garten said she imagines her device, still highly conspicuous today, will be replaced by smaller, subtler versions, maybe even a series of patches. And whereas her device only goes one way—recording brain activity—future devices might provide stimulation too. (Indeed, we’ve covered some such devices in the past.)

Today’s body sensing technology is a constellation of disconnected hardware of varying accuracy and sensitivity. But as the tech develops and disappears—woven into our clothes or wearable patches—it may become second nature to regularly look into our hearts or brains on a smartphone, and take action to right the ship.

Image Credit: Shutterstock.com; InteraXon/Muse

 

 


Brain’s Role in Browning White Fat [1060]

by System Administrator - Friday, January 16, 2015, 12:01
 

JUSTIN HEWLETT, MNHS MULTIMEDIA, MONASH UNIVERSITY

Brain’s Role in Browning White Fat

By Anna Azvolinsky

Insulin and leptin act on specialized neurons in the mouse hypothalamus to promote conversion of white to beige fat.

Ever since energy-storing white fat has been shown to convert to metabolically active beige fat, through a process called browning, scientists have been trying to understand how this switch occurs. The immune system has been shown to contribute to activation of brown fat cells. Now, researchers from Monash University in Australia and their colleagues have shown that insulin and leptin—two hormones that regulate glucose metabolism and satiety and hunger cues—activate “satiety” neurons in the mouse hypothalamus to promote the conversion of white fat to beige. The results are published today (January 15) in Cell.

Hypothalamic appetite-suppressing proopiomelanocortin (POMC) neurons are known to relay the satiety signals in the bloodstream to other parts of the brain and other tissues to promote energy balance. “What is new here is that one way that these neurons promote calorie-burning is to stimulate the browning of white fat,” said Xiaoyong Yang, who studies the molecular mechanisms of metabolism at the Yale University School of Medicine, but was not involved in the work. “The study identifies how the brain communicates to fat tissue to promote energy dissipation.”

“The authors show that [insulin and leptin] directly interact in the brain to produce nervous-system signaling both to white and brown adipose tissue,” said Jan Nedergaard, a professor of physiology at Stockholm University who also was not involved in the study. “This is a nice demonstration of how the acute and chronic energy status talks to the thermogenic tissues.”

Although the differences between beige and brown fat are still being defined, the former is currently considered a metabolically active fat—which converts the energy of triglycerides into heat—nestled within white fat tissue. Because of their energy-burning properties, brown and beige fat are considered superior to white fat, so understanding how white fat can be browned is a key research question. Exposure to cold can promote the browning of white fat, but the ability of insulin and leptin to act in synergy to signal to the brain to promote browning was not known before this study, according to author Tony Tiganis, a biochemist at Monash.

White fat cells steadily produce leptin, while insulin is produced by cells of the pancreas in response to a surge of glucose into the blood. Both hormones are known to signal to the brain to regulate satiety and body weight. To explore the connection between this energy expenditure control system and fat tissue, Garron Dodd, a postdoctoral fellow in Tiganis’s laboratory, and his colleagues deleted one or both of two phosphatase enzymes in murine POMC neurons. These phosphatase enzymes were previously known to act in the hypothalamus to regulate both glucose metabolism and body weight, each regulating either leptin or insulin signaling. When both phosphatases were deleted, mice had less white fat tissue and increased insulin and leptin signaling.

“These [phosphatase enzymes] work in POMC neurons by acting as ‘dimmer switches,’ controlling the sensitivity of leptin and insulin receptors to their endogenous ligands,” Dodd told The Scientist in an e-mail. The double knockout mice also had an increase in beige fat and more active heat-generating brown fat. When fed a high-fat diet, unlike either the single knockout or wild-type mice, the double knockout mice did not gain weight, suggesting that leptin and insulin signaling to POMC neurons is important for controlling body weight and fat metabolism.

The researchers also infused leptin and insulin directly into the hypothalami of wild-type mice, which promoted the browning of white fat. But when these hormones were infused but the neuronal connections between the white fat and the brain were physically severed, browning was prevented. Moreover, hormone infusion and cutting the neuronal connection to only a single fat pad resulted in browning only in the fat pad that maintained signaling ties to the brain. “This really told us that direct innervation from the brain is necessary and that these hormones are acting together to regulate energy expenditure,” said Tiganis.

These results are “really exciting as, perhaps, resistance to the actions of leptin and insulin in POMC neurons is a key feature underlying obesity in people,” said Dodd.

Another set of neurons in the hypothalamus, the agouti-related protein expressing (AgRP) or “hunger” neurons, are activated by hunger signals and promote energy storage. Along with Tamas Horvath, Yale’s Yang recently showed that fasting activates AgRP neurons that then suppress the browning of white fat. “These two stories are complementary, providing a bigger picture: that the hunger and satiety neurons control browning of fat depending on the body’s energy state,” said Yang. Activation of POMC neurons during caloric intake protects against diet-induced obesity while activation of AgRP neurons tells the body to store energy during fasting.

Whether these results hold up in humans has yet to be explored. Expression of the two phosphatases in the hypothalamus is known to be higher in obese people, but it is not clear whether this suppresses the browning of white fat.

“One of the next big questions is whether this increased expression and prevention of insulin plus leptin signaling, and conversion of white to brown fat perturbs energy balance and promotes obesity,” said Tiganis. Another, said Dodd, is whether other parts of the brain are involved in signaling to and from adipose tissue. 

G. Dodd et al., “Leptin and insulin act on POMC neurons to promote the browning of white fat,” Cell, doi:10.1016/j.cell.2014.12.022, 2015.

Brain’s “Inner GPS” Wins Nobel [917]

by System Administrator - Tuesday, October 7, 2014, 19:49
 

Left to right: John O’Keefe, May-Britt Moser, Edvard Moser

UCL, DAVID BISHOP; WIKIMEDIA, THE KAVLI INSTITUTE/NTNU

Brain’s “Inner GPS” Wins Nobel

John O’Keefe, May-Britt Moser, and Edvard Moser have won the 2014 Nobel Prize in Physiology or Medicine “for their discoveries of cells that constitute a positioning system in the brain.”

By Molly Sharlach and Tracy Vence


O’Keefe, a professor of cognitive neuroscience at University College London, will receive one half of this year’s prize. Husband-and-wife team May-Britt and Edvard Moser, both professors at the Norwegian University of Science and Technology (NTNU), will share the second half.

Together, their discoveries identified an inner positioning system within the brain: O’Keefe is being honored for his discovery of so-called place cells, while the Mosers are recognized for their later work identifying grid cells.

“The discoveries of John O’Keefe, May-Britt Moser and Edvard Moser have solved a problem that has occupied philosophers and scientists for centuries,” the Nobel Foundation noted in its press release announcing the award: “How does the brain create a map of the space surrounding us and how can we navigate our way through a complex environment?”

Menno Witter, the Mosers’ colleague at NTNU’s Kavli Institute for Systems Neuroscience/Centre for Neural Computation, first met the pair in the 1990s when they were students at the University of Oslo; Witter was an assistant professor at VU University Amsterdam. 
 
Their work “is a very important contribution in terms of understanding at least part of the neural code that is generated in the brain that allows species—probably including humans—to navigate,” Witter told The Scientist. “We’re all very, very pleased, because it to us shows that what we’re doing . . . as a whole community is considered to be really important and prestigious. It is also, I think, a fabulous sign to the world that Norwegian science is really at a top level.”
 
Francesca Sargolini, a cognitive neuroscientist at Aix-Marseille University in France, worked with the Mosers when she was a postdoc. The lab had a “wonderful, stimulating atmosphere,” Sargolini told The Scientist. Discoveries made by O’Keefe and the Mosers have helped researchers understand “how the brain computes . . . information to make a representation of spaces, so we can use that information to move around in the environment and do what we do every day,” she added.
 
“This is a very well-deserved prize for John [O’Keefe] and the Mosers,” said Colin Lever, a senior lecturer in the department of psychology at Durham University in the U.K., who earned a PhD and continued postdoctoral research in O’Keefe’s lab.
 
“This is a fascinating area of research,” Lever continued. “What we’re discovering about the brain through spatial mapping is likely of greater consequence than just for understanding about space. . . . Indeed, it seems to support autobiographical memory in humans.”
 
Update (October 6, 11:57 a.m.): O’Keefe and Lynn Nadel met as graduate students at McGill University in Montreal. In 1978, when Nadel was a lecturer at University College London, the two coauthored the seminal book The Hippocampus as a Cognitive Map. “We pursued the spatial map story for some years together, and we still do so separately,” Nadel, who is now a professor of psychology at the University of Arizona, told The Scientist. “From my point of view, this award really recognizes the whole enterprise of looking at cognition in terms of brain function,” he added. “It’s pretty cool.”
 

Discovery of 'brain GPS' places neuroscientists in league of Nobel Laureates

Men are said to be better than women at creating maps in their brains. But as the three winners of the 2014 Nobel Prize in Medicine show, even mice have brain cells for navigation.

 

You may not think it, but research on the brains of mice and rats has revealed how humans find their way from one place to another.

It is for their work in this area that John O'Keefe of University College London, and May-Britt Moser of the Centre for Neural Computation in Trondheim and her husband Edvard Moser of the Kavli Institute for Systems Neuroscience in Trondheim have been announced as this year's winners of the Nobel Prize in Medicine and Physiology.

One half of the prize goes to British-American John O'Keefe, the other half to the Norwegian couple May-Britt and Edvard Moser, "for their discoveries of cells that constitute a positioning system in the brain," the Nobel Assembly at the Karolinska Institute in Stockholm said on Monday.

When the Nobel committee contacted May-Britt Moser to give her the news, she said she cried.

"I was in shock, and I am still in shock," she told the Karolinska Institute in an interview directly after the announcement. "This is so great!"

 

What works in mice, works in humans

"Absolutely thrilled"

Other European neuroscientists are celebrating the Nobel Prize committee's decision as well.

"It is fantastic," Michael Brecht, a brain researcher at the Bernstein Centre for Computational Neuroscience in Berlin, told DW.

Brecht, who says he knows all three Nobel Laureates personally, added "they are great people and very impressive research characters. This Nobel Prize is well-deserved."

Marianne Hafting Fyhn of the department of bioscience at University of Oslo worked under May-Britt and Edvard Moser in their research group at the Norwegian University of Science and Technology in Trondheim.

"Personally it is a wonderful day for me as well," she told DW. "[The Mosers] are extremely nice people. There was always a nice atmosphere in the lab and they work on a very high scientific standard."

At the Max Planck Institute for Brain Research in Frankfurt am Main, spokesman Arjan Vink said "we are absolutely thrilled."

"Edvard Moser and John O'Keefe were only here at our institute less than two weeks ago," Vink said. The two - yet-to-be - Nobel Laureates were there to speak at a symposium.

 

John O'Keefe discovered navigation in the brain back in the 1970s

Navigation through memory

Seldom are newly announced Nobel Laureates still engaged in everyday, active research. But that is the case with O'Keefe and the Mosers.

That said, John O'Keefe's first crucial discovery was more than 40 years ago.

In 1971, he watched rats moving freely in a room and recorded signals from nerve cells in a part of their brains called the hippocampus.

The hippocampus is responsible for assigning things to long-term memory.

O'Keefe discovered a new type of cell, which he called "place cells." He concluded that they form a virtual map of the room.

"At that time, scientists already assumed that the hippocampus had to be important [for orientation]," Brecht says. "But only O'Keefe had the crucial insight of how to study these cells. He realized you just had to get the animals going."

Andrew Speakman of University College London says O'Keefe's success is not down to luck.

"He is an amazing scientist and the best kind of enthusiastic, original and inspirational person," Speakman said.

Speakman worked with O'Keefe in the 1980s.

"Like tiles in a bathroom"

For May-Britt and Edvard Moser, "the Nobel Prize [has come] quite early," says their former PhD student Marianne Hafting Fyhn.

It was less than ten years ago, in 2005, when the couple discovered another key component of the brain's positioning system - and "made things hum," as Michael Brecht puts it.

 

They investigated brain regions neighboring the hippocampus and found so-called "grid cells".

These cells generate a coordinate system and allow for precise positioning and path-finding.

The cells fire when the animals are at certain locations "and these locations form a hexagonal pattern", Edvard Moser told DW in an interview earlier this year. "It is almost like the tiles in a bathroom."

Or like the grid on a city map.

"That there are grid patterns inside the animal's head was an amazing observation," Brecht says.

The Mosers showed that orientation "is a constructive process and not something that the animal has learned."
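The hexagonal geometry can be illustrated with the standard textbook model of a grid cell, in which three cosine gratings oriented 60 degrees apart are summed. The Python sketch below is that idealized model, not the Mosers' own analysis code; the spacing and phase parameters are illustrative:

```python
import numpy as np

# Idealized grid-cell model: summing three cosine gratings 60 degrees apart
# produces firing fields arranged on a hexagonal lattice.
def grid_cell_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Relative firing rate at position (x, y); spacing is grid scale in meters."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)        # wave number for that spacing
    angles = [0, np.pi / 3, 2 * np.pi / 3]        # three directions, 60 deg apart
    total = sum(np.cos(k * ((x - phase[0]) * np.cos(a) +
                            (y - phase[1]) * np.sin(a)))
                for a in angles)
    return (total + 1.5) / 4.5                    # normalize roughly to [0, 1]

# Evaluate over a 2 m x 2 m box: the high-rate spots form a hexagonal pattern,
# "like the tiles in a bathroom."
xs, ys = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
rate_map = grid_cell_rate(xs, ys)
print("One firing-field peak at index:",
      np.unravel_index(np.argmax(rate_map), rate_map.shape))
```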

May-Britt and Edvard Moser were awarded the Körber European Science Prize earlier this year for their discovery.

From mice to humans

"This internal map is general across all mammals that have been investigated," Edvard Moser said.

And it is a system that emerged quite early in evolution.

"Most recently, grid cells have been shown also in humans."

Brecht says "it was to be expected" that sooner or later the Nobel Prize would be awarded for the discoveries linked to the brain's navigation system, as this research is "relevant for the field of medicine."

In patients with Alzheimer's disease, for example, the brain regions involved with orientation are frequently affected at an early stage.

Patients often lose their way and fail to recognize their environment.

"Learning how this system works in the normal brain will have consequences for the diagnosis and treatment of Alzheimer's disease," Edvard Moser said in his DW interview.

But it is still just basic research, he says. "[There'll be] more in the future, but we will get there."


Bridging the Mental Healthcare Gap With Artificial Intelligence [1706]

by System Administrator - Monday, 17 October 2016, 11:41
 

Bridging the Mental Healthcare Gap With Artificial Intelligence

BY ALISON E. BERMAN

Artificial intelligence is learning to take on an increasing number of sophisticated tasks. Google DeepMind's AI is now able to imitate human speech, and just this past August IBM's Watson successfully diagnosed a rare case of leukemia.

Rather than viewing these advances as threats to job security, we can look at them as opportunities for AI to fill in critical gaps in existing service providers, such as mental healthcare professionals.

In the US alone, nearly eight percent of the population suffers from depression (that’s about one in every 13 American adults), and yet about 45 percent of this population does not seek professional care due to the costs.

There are many barriers to getting quality mental healthcare, from searching for a provider within your insurance network to screening multiple potential therapists to find someone you feel comfortable speaking with. These barriers stop many people from getting help, even though about ninety percent of suicide cases in the US are considered preventable.

But what if artificial intelligence could bring quality and affordable mental health support to anyone with an internet connection?

This is the mission of X2AI, a startup that has built an AI called Tess to provide quality mental healthcare to anyone, regardless of income or location.

X2AI calls Tess a "psychological AI." She provides a range of personalized mental health services—like psychotherapy, psychological coaching, and even cognitive behavioral therapy. Users can communicate with Tess through existing channels like SMS, Facebook Messenger, and many internet browsers.

I had the opportunity to demo Tess at last year's Exponential Medicine conference in San Diego. I was blown away by how natural the conversation felt. In fact, a few minutes in, I kept forgetting that the "person" on the other side was actually a computer.

Now, a year later at Exponential Medicine 2016, the X2AI team is back and we’re thrilled with their progress. Here’s our interview with CEO and co-founder Michiel Rauws.

Since Tess was first created, how has the AI evolved and advanced? What has the system’s learning process been like?

The accuracy of the emotion algorithms has gone up a lot, and also the accuracy of the conversation algorithm, which understands the meaning behind what people say.

Is there a capability you are working on creating to take Tess’s conversational abilities to the next level?

We’re about to update our admin panel, so it will be very simple for psychologists to add their own favorite coping mechanisms into Tess. A coping mechanism is a specific way of talking through a specific issue with a patient.

Do you believe that users should know they are speaking with an AI? What are the benefits of having the human absent from a sensitive conversation?

Yes, users should be fully aware of that.

There’s quite some evidence out there that speaking with a machine takes away a feeling of judgment or social stigma. It’s available 24/7 and for as long as you want—you don’t pay by the hour.

The memory of a machine is also far better, because it simply does not forget anything. That creates opportunities to connect dots a human would have missed because they had forgotten part of the facts. There are also no waiting lists to talk to Tess.

One of the most important aspects is that, from a clinical standpoint, it is a huge advantage that Tess is always consistent and delivers the same high-quality work. She never has a bad day and is never tired from a long day of work.

The therapeutic bond is often cited as critical to the success of a treatment plan, as is the patient's match with the therapist; when the match is poor, the patient has to look for another therapist. Tess, however, adapts herself to each person to ensure there will always be a match between Tess and the patient. And Tess is, of course, scalable and exponentially improving.

 

Eugene Bann, co-founder and CTO, testing the AI in the field in Lebanon.

How does Tess handle receiving life-threatening information, such as a suicidal message from a user?

Patient safety always comes first. Whenever Tess is talking to a user, she evaluates how the person is feeling with an emotion algorithm, which we've developed over the past eight years through a research firm called AEIR, now a subsidiary of X2AI.

In that way Tess always keeps track of how the user has been feeling and whether there is a downward trend or, of course, a very negative conversation going on. Then there is the conversation algorithm, which uses natural language processing to understand what the user is actually talking about and to pick up expressions like, "I don't want to wake up anymore in the morning."

When such a situation requires human intervention, there is a seamless protocol for either one of our psychologists or one of our clients to take over the conversation. You can learn more about this in our explanation of AI ethics and data security.
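To make the described pipeline concrete, here is a hypothetical Python sketch of the two parts named above: an emotion score tracked over time for a downward trend, plus phrase-level detection that triggers a human handover. X2AI's actual algorithms are proprietary, so every class name, word list, and threshold below is invented for illustration:

```python
from collections import deque

# Invented for illustration; the real emotion and conversation algorithms
# are proprietary and far more sophisticated than keyword matching.
CRISIS_PHRASES = ["don't want to wake up", "end my life", "hurt myself"]

class SafetyMonitor:
    def __init__(self, window=5, downward_threshold=-0.3):
        self.recent_scores = deque(maxlen=window)   # rolling emotion scores
        self.downward_threshold = downward_threshold

    def score_emotion(self, message):
        """Toy stand-in for the emotion algorithm: -1 (negative) to 1 (positive)."""
        text = message.lower()
        negative = sum(w in text for w in ("sad", "hopeless", "alone"))
        positive = sum(w in text for w in ("better", "calm", "grateful"))
        return max(-1.0, min(1.0, 0.4 * (positive - negative)))

    def needs_human(self, message):
        """Escalate on explicit crisis language or a sustained downward trend."""
        text = message.lower()
        if any(phrase in text for phrase in CRISIS_PHRASES):
            return True
        self.recent_scores.append(self.score_emotion(message))
        if len(self.recent_scores) == self.recent_scores.maxlen:
            trend = self.recent_scores[-1] - self.recent_scores[0]
            return trend <= self.downward_threshold
        return False

monitor = SafetyMonitor()
if monitor.needs_human("I don't want to wake up anymore in the morning"):
    print("Hand the conversation to an on-call psychologist")
```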

Emotional wellbeing issues—stress, anxiety, depression—are widespread in the US, and there's a lack of mental health services for those at risk. What needs to happen for Tess to scale to address this?

We are very diligent in our approach to safely and responsibly moving towards deploying Tess at scale. Right now we are working with healthcare providers to allow them to offer Tess to support their treatment.

We also employ psychologists ourselves, who create Tess's content in the first place. Thanks to these psychologists we are able to offer behavioral health services directly to large employers or employer health plans: the psychologists can take care of parts of the treatment and stand by whenever additional human intervention is required, not only in an emergency but also when Tess cannot figure out how to make the person feel better about a difficult situation.

What shifts in public opinion around AI are needed for an AI like Tess to become a societal norm? How far away do you think we are from reaching this point?

AI can be useful today to actually help people and give people access to services they were not able to afford before.

Talking to a robot can be a better experience than talking to a person, for the reasons I mentioned above. And if tasks that are handled by people today, but that could be handled as well or better by a machine, were actually handled by machines, then people could dedicate more of their time to the problems machines cannot take care of.

In this way the entire healthcare system becomes more sustainable and affordable.

Want to keep up with coverage from Exponential Medicine? Get the latest insights here.

Link: http://singularityhub.com

