Neurociencia | Neuroscience


Glossary 


B


Back-up brains: The era of digital immortality [1080]

by System Administrator - Saturday, 31 January 2015, 20:35
 

Back-up brains: The era of digital immortality

 "It could change our relationship with death, providing some noise where there is only silence." — Aaron Sunshine

A few months before she died, my grandmother made a decision.

Bobby, as her friends called her (theirs is a generation of nicknames), was a farmer’s wife who not only survived World War II but also found in it justification for her natural hoarding talent. ‘Waste not, want not’ was a principle she lived by long after England recovered from a war that left it buckled and wasted. So she kept old envelopes and bits of cardboard cereal boxes for note-taking and lists. She kept frayed blankets and musty blouses from the 1950s in case she needed material to mend. By extension, she was also a meticulous chronicler. She kept albums of photographs of her family members. She kept, in a box, the airmail love letters my late grandfather sent her while he travelled the world with the merchant navy. Her home was filled with the debris of her memories.

Yet in the months leading up to her death, the emphasis shifted from hoarding to sharing. Every time I visited, my car would fill with stuff: unopened cartons of orange juice, balls of fraying wool, damp antique books, empty glass jars. All things she needed to rehome now that she faced her mortality. The memories too began to move out. She sent faded photographs to her children, grandchildren and friends, as well as letters containing vivid paragraphs detailing some experience or other.

On 9 April, the afternoon before the night she died, she posted a letter to one of her late husband’s old childhood friends. In the envelope she enclosed some photographs of my grandfather and his friend playing as young children. “You must have them,” she wrote to him. It was a demand but also a plea, perhaps, that these things not be lost or forgotten when, a few hours later, she slipped away in her favourite armchair.

 

The hope that we will be remembered after we are gone is both elemental and universal. The poet Carl Sandburg captured this common feeling in his 1916 poem Troths:

Yellow dust on a bumblebee’s wing, 
Grey lights in a woman’s asking eyes, 
Red ruins in the changing sunset embers: 
I take you and pile high the memories. 
Death will break her claws on some I keep.

It is a wishful tribute to the potency of memories. The idea that a memory could prove so enduring that it might grant its holder immortality is a romantic notion that could only be held by a young poet, unbothered by the aches and scars of age.

Nevertheless, while Sandburg’s memories failed to save him, they survived him. Humans have, since the first paintings scratched on cave walls, sought to confound the final vanishing of memory. Oral history, diary, memoir, photography, film and poetry: all tools in humanity’s arsenal in the war against time’s whitewash. Today we bank our memories onto the internet’s enigmatic servers, those humming vaults tucked away in the cooling climate of the far North or South. There’s the Facebook timeline that records our most significant life events, the Instagram account on which we store our likeness, the Gmail inbox that documents our conversations, and the YouTube channel that broadcasts how we move, talk or sing. We collect and curate our memories more thoroughly than ever before, in every case grasping for a certain kind of immortality.

 

Is it enough? We save what we believe to be important, but what if we miss something crucial? What if some essential context to our words or photographs is lost? How much better it would be to save everything, not only the written thoughts and snapped moments of life, but the entire mind: everything we know and all that we remember, the love affairs and heartbreaks, the moments of victory and of shame, the lies we told and the truths we learned. If you could save your mind like a computer’s hard drive, would you? It’s a question some hope to pose to us soon. They are the engineers working on the technology that will be able to create wholesale copies of our minds and memories that live on after we are burned or buried. If they succeed, it promises to have profound, and perhaps unsettling, consequences for the way we live, who we love and how we die.

Carbon copy

I keep my grandmother’s letters to me in a folder by my desk. She wrote often and generously. I also have a photograph of her in my kitchen on the wall, and a stack of those antique books, now dried out, still unread. These are the ways in which I remember her and her memories, saved in hard copy. But could I have done more to save her?

San Franciscan Aaron Sunshine’s grandmother also passed away recently. “One thing that struck me is how little of her is left,” the 30-year-old tells me. “It’s just a few possessions. I have an old shirt of hers that I wear around the house. There's her property but that's just faceless money. It has no more personality than any other dollar bill.” Her death inspired Sunshine to sign up with Eterni.me, a web service that seeks to ensure that a person’s memories are preserved after their death online.

It works like this: while you’re alive you grant the service access to your Facebook, Twitter and email accounts, upload photos, geo-location history and even Google Glass recordings of things that you have seen. The data is collected, filtered and analysed before it’s transferred to an AI avatar that tries to emulate your looks and personality. The avatar learns more about you as you interact with it while you’re alive, with the aim of more closely reflecting you as time progresses.
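Eterni.me has not published its implementation, so the sketch below is only a hypothetical illustration of the pattern the paragraph describes: aggregate a person's digital traces, filter the noise, and drive a conversational avatar from the result. The SocialPost structure, the filtering rule and the word-overlap retrieval strategy are all assumptions made for the example, written here in Python.

import re
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SocialPost:
    source: str        # e.g. "facebook", "twitter", "email"
    timestamp: datetime
    text: str

def tokenize(text):
    # Lowercase word set; crude, but enough for a toy similarity measure.
    return set(re.findall(r"[a-z']+", text.lower()))

def filter_posts(posts, min_words=3):
    # Drop trivial items (bare links, one-word reactions) before analysis.
    return [p for p in posts if len(tokenize(p.text)) >= min_words]

class LegacyAvatar:
    # Toy retrieval-based avatar: replies with the stored memory whose
    # wording overlaps most with the question.
    def __init__(self, posts):
        self.posts = filter_posts(posts)

    def reply(self, question):
        q = tokenize(question)
        best = max(self.posts, key=lambda p: len(q & tokenize(p.text)),
                   default=None)
        return best.text if best else "I don't remember anything about that."

avatar = LegacyAvatar([
    SocialPost("twitter", datetime(2014, 6, 1),
               "Nothing beats gardening on a sunny Sunday morning."),
    SocialPost("email", datetime(2013, 2, 14),
               "Waste not, want not: I still keep every envelope."),
])
print(avatar.reply("What did you like doing on a Sunday?"))

A real service would replace the overlap measure with learned language models, but the shape of the pipeline (ingest, filter, analyse, respond) is the part Ursache describes above.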

“It’s about creating an interactive legacy, a way to avoid being totally forgotten in the future,” says Marius Ursache, one of Eterni.me’s co-creators. “Your grand-grand-children will use it instead of a search engine or timeline to access information about you – from photos of family events to your thoughts on certain topics to songs you wrote but never published.” For Sunshine, the idea that he might be able to interact with a legacy avatar of his grandmother that reflected her personality and values is comforting. “I dreamt about her last night,” he says. “Right now a dream is the only way I can talk to her. But what if there was a simulation? She would somehow be less gone from my life.”

 

While Ursache has grand ambitions for the Eterni.me service (“it could be a virtual library of humanity”), the technology is still in its infancy. He estimates that subscribers will need to interact with their avatars for decades for the simulation to become as accurate as possible. He’s already received many messages from terminally ill patients who want to know when the service will be available – whether they can record themselves in this way before they die. “It’s difficult to reply to them, because the technology may take years to build to a level that’s useable and offers real value,” he says. But Sunshine is optimistic. “I have no doubt that someone will be able to create good simulations of people's personalities with the ability to converse satisfactorily,” he says. “It could change our relationship with death, providing some noise where there is only silence. It could create truer memories of a person in the place of the vague stories we have today.”

It could, I suppose. But what if the company one day goes under? As the servers are switched off, the people it homes would die a second death.

 

As my own grandmother grew older, some of her memories retained their vivid quality; each detail remained resolute and in place. Others became confused: the specifics shifted somehow in each retelling. Eterni.me and other similar services counter the fallibility of human memory; they offer a way to fix the details of a life as time passes. But any simulation is a mere approximation of a person and, as anyone who has owned a Facebook profile knows, the act of recording one’s life on social media is a selective process. Details can be tweaked, emphases can be altered, entire relationships can be erased if it suits one’s current circumstances. We often give, in other words, an unreliable account of ourselves.

Total recall

What if, rather than simply picking and choosing what we want to capture in digital form, it was possible to record the contents of a mind in their entirety? This work is neither science fiction nor the niche pursuit of unreasonably ambitious scientists. Theoretically, the process would require three key breakthroughs. Scientists must first discover how to preserve, non-destructively, someone's brain upon their death. Then the content of the preserved brain must be analysed and captured. Finally, that capture of the person’s mind must be recreated on a simulated human brain.

 

First, we must create an artificial human brain on which a back-up of a human’s memories would be able to ‘run’. Work in the area is widespread. MIT runs a course on the emergent science of ‘connectomics’, the work to create a comprehensive map of the connections in a human brain. The US BRAIN Initiative is working to record brain activity from millions of neurons, while the EU’s Human Brain Project tries to build integrated models from this activity.

Anders Sandberg from the Future of Humanity Institute at Oxford University, who in 2008 wrote a paper titled Whole Brain Emulation: A Roadmap, describes these projects as “stepping stones” towards being able to fully emulate the human brain.

“The point of brain emulation is to recreate the function of the original brain: if ‘run’ it will be able to think and act as the original,” he says. Progress has been slow but steady. “We are now able to take small brain tissue samples and map them in 3D. These are at exquisite resolution, but the blocks are just a few microns across. We can run simulations of the size of a mouse brain on supercomputers – but we do not have the total connectivity yet. As methods improve I expect to see automatic conversion of scanned tissue into models that can be run. The different parts exist, but so far there is no pipeline from brains to emulations.”

 

Investment in the area appears to be forthcoming, however. Google is heavily invested in brain emulation. In December 2012 the company appointed Ray Kurzweil as its director of engineering on the Google Brain project, which aims to mimic aspects of the human brain. Kurzweil, a divisive figure, is something of a figurehead for a community of scientists who believe that it will be possible to create a digital back-up of a human brain within their lifetime. A few months later, the company hired Geoff Hinton, a British computer scientist who is one of the world's leading experts on neural networks, essentially the circuitry of how the human mind thinks and remembers.

Google is not alone, either. In 2011 a Russian entrepreneur, Dmitry Itskov, founded ‘The 2045 Initiative’, named after Kurzweil’s prediction that the year 2045 will mark the point at which we’ll be able to back up our minds to the cloud. While the fruits of all this work are, to date, largely undisclosed, the effort is clear.

 

Neuroscientist Randal Koene, science director for the 2045 Initiative, is adamant that creating a working replica of a human brain is within reach. “The development of neural prostheses already demonstrates that running functions of the mind is possible,” he says. It’s not hyperbole. Ted Berger, a professor at the University of Southern California’s Center for Neuroengineering, has managed to create a working prosthetic of the hippocampus, a brain region central to memory formation. In 2011 a proof-of-concept hippocampal prosthesis was successfully tested in live rats and, in 2012, in non-human primates. Berger and his team intend to test the prosthesis in humans this year, demonstrating that we are already able to recreate some parts of the human brain.

Memory dump

Emulating a human brain is one thing, but creating a digital record of a human’s memories is a different sort of challenge. Sandberg is sceptical that so simple a process is viable. “Memories are not neatly stored like files on a computer to create a searchable index,” he says. “Memory consists of networks of associations that are activated when we remember. A brain emulation would require a copy of them all.”

 

Indeed, humans reconstruct information from multiple parts of the brain in ways that are shaped by our current beliefs and biases, all of which change over time. These conclusions appear at odds with any effort to store memories in the same way that a computer might record data for easy access. It is an idea based on, as one sceptic I spoke to (who wished to remain anonymous) put it, “the wrong and old-fashioned ‘possession’ view of memory”.
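Sandberg’s description of memory as “networks of associations” has a classic computational illustration: in an associative network, every stored pattern lives distributed across one shared set of connection weights, and recall means settling from a partial cue rather than fetching a file. The NumPy sketch below is a toy Hopfield-style network offered only as an analogy; nothing in it comes from Sandberg’s paper.

import numpy as np

# Two 8-element patterns "stored" in one weight matrix (Hebbian rule).
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])

W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)   # no self-connections

def recall(cue, steps=5):
    # Settle toward the nearest stored pattern from a partial cue.
    state = cue.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

cue = patterns[0].copy()
cue[0] = -cue[0]          # corrupt one element of the "memory"
print(recall(cue))        # settles back to the full first pattern

Note that copying or deleting one memory here is not a matter of locating its bytes: every weight in W carries a trace of every pattern, which is why Sandberg says an emulation would need a copy of them all.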

There is also the troubling issue of how to extract a person’s memories without destroying the brain in the process. “I am sceptical of the idea that we will be able to accomplish non-destructive scanning,” says Sandberg. “All methods able to scan neural tissue at the required high resolution are invasive, and I suspect this will be very hard to achieve without picking apart the brain.” Nevertheless, the professor believes a searchable, digital upload of a specific individual’s memory could be possible so long as you were able to “run” the simulated brain in its entirety.

 

“I think there is a good chance that it could work in reality, and that it could happen this century,” he says. “We might need to simulate everything down to the molecular level, in which case the computational demands would simply be too large. It might be that the brain uses hard-to-scan data like quantum states (an idea believed by some physicists but very few neuroscientists), that software cannot be conscious or do intelligence (an idea some philosophers believe but few computer scientists), and so on. I do not think these problems apply, but it remains to be seen if I am right.”
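Sandberg’s worry about molecular-level simulation can be made concrete with a back-of-envelope estimate. The sketch below uses commonly cited order-of-magnitude figures (about 10^15 synapses in a human brain); the timesteps and per-update operation counts are illustrative assumptions, not numbers from the article.

SYNAPSES = 1.0e15   # ~a quadrillion synapses, order-of-magnitude estimate

def flops_required(updates_per_second, ops_per_synapse_update):
    # Floating-point operations per second needed to advance
    # every synapse at the given timestep resolution.
    return SYNAPSES * updates_per_second * ops_per_synapse_update

# Spiking-network level: ~1 kHz timesteps, a few ops per synapse.
spiking = flops_required(1e3, 10)        # ~1e19 FLOP/s

# "Molecular" level: microsecond timesteps and vastly more state
# per synapse (both multipliers are loose assumptions).
molecular = flops_required(1e6, 1e4)     # ~1e25 FLOP/s

print(f"spiking-level:   ~{spiking:.0e} FLOP/s")
print(f"molecular-level: ~{molecular:.0e} FLOP/s")

Even the spiking-level figure is roughly ten times the capacity of an exascale supercomputer, and the molecular-level figure is another million times beyond that: the gap Sandberg is pointing to.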

If it could be done, then, what would preserving a human mind mean for the way we live?

Some believe that there could be unanticipated benefits, some of which can make the act of merely extending a person’s life for posterity seem rather plain by comparison. For example, David Wood, chairman of the London Futurists, argues that a digital back-up of a person’s mind could be studied, perhaps providing breakthroughs in understanding the way in which human beings think and remember.

 

And if a mind could be digitally stored while a person was still alive then, according to neuroscientist Andrew A Vladimirov, it might be possible to perform psychoanalysis using such data. “You could run specially crafted algorithms through your entire life sequence that will help you optimise behavioural strategies,” he says.

Yet there’s also an unusual set of moral and ethical implications to consider, many of which are only just beginning to be revealed. “In the early stages the main ethical issue is simply broken emulations: we might get entities that are suffering in our computers,” says Sandberg. “There are also going to be issues of volunteer selection, especially if scanning is destructive.” Beyond the difficulty of recruiting people who are willing to donate their minds in such a way, there is the more complicated issue of what rights an emulated mind would enjoy. “Emulated people should likely have the same rights as normal people, but securing these would involve legislative change,” says Sandberg. “There might be the need for new kinds of rights too. For example, the right for an emulated human to run in real-time so that they can participate in society.”

 

Defining the boundaries of a person’s privacy is already a pressing issue for humanity in 2015, where third-party corporations and governments hold more insight into our personal information than ever before. For an emulated mind, privacy and ownership of data becomes yet more complicated. “Emulations are vulnerable and can suffer rather serious breaches of privacy and integrity,” says Sandberg. He adds, in a line that could be lifted from a Philip K Dick novel: “We need to safeguard their rights”. By way of example, he suggests that lawmakers would need to consider whether it should be possible to subpoena memories.

Property laws

“Ownership of specific memories is where things become complex,” says Koene. “In a memoir you can choose which memories are recorded. But if you don't have the power to choose which of your memories others can inspect, it becomes a rather different question.” Is it a human right to be able to keep secrets?

These largely un-interrogated questions also begin to touch on more fundamental issues of what it means to be human. Would an emulated brain be considered human and, if so, does the humanity exist in the memories or the hardware on which the simulated brain runs? If it's the latter, there’s the question of who owns the hardware: an individual, a corporation or the state? If an uploaded mind requires certain software to run (a hypothetical Google Brain, for example) the ownership of the software license could become contentious.

 

The knowledge that one’s brain is to be recorded in its entirety might also lead some to behave differently during life. “I think it would have the same effect as knowing your actions will be recorded on camera,” says Sandberg. “In some people this knowledge leads to a tendency to conform to social norms. In others it produces rebelliousness. If one thinks that one will be recreated as a brain emulation then it is equivalent to expecting an extra, post-human life.”

Even if it were possible to digitally record the contents and psychological contours of the human mind, there are undeniably deep and complicated implications. But beyond this, there is the question of whether this is something that any of us truly want. Humans long to preserve their memories (or, in some cases, to forget them) because they remind us of who we are. If our memories are lost we cease to know who we were, what we accomplished, what it all meant. But at the same time, we tweak and alter our memories in order to create the narrative of our lives that fits us at any one time. To have everything recorded with equal weight and importance might not be useful, either to us or to those who follow us.

 

Where exactly is the true worth of the endeavour? Could it actually be the comforting knowledge for a person that they, to one degree or other, won’t be lost without trace? The survival instinct is common to all life: we eat, we sleep, we fight and, most enduringly, we reproduce. Through our descendants we reach for a form of immortality, a way to live on beyond our physical passing. All parents take part in a grand relay race through time, passing the gene baton on and on through the centuries. Our physical traits – those eyes, that hair, this temperament – endure in some diluted or altered form. So too, perhaps, do our metaphysical attributes (“what will survive of us is love,” as Philip Larkin tentatively put it in his 1956 poem, ‘An Arundel Tomb’). But it is the mere echo of immortality. Nobody lives forever; with death only the fading shadow of our life remains. There are the photographs of us playing as children. There are the antique books we once read. There is the blouse we once wore.

I ask Sunshine why he wants his life to be recorded in this way. “To be honest, I'm not sure,” he says. “The truly beautiful things in my life such as the parties I've thrown, the sex I've had, the friendships I’ve enjoyed. All of these things are too ephemeral to be preserved in any meaningful way. A part of me wants to build monuments to myself. But another part of me wants to disappear completely.” Perhaps that is true of us all: the desire to be remembered, but only the parts of us that we hope will be remembered. The rest can be discarded.

 

Despite my own grandmother’s careful distribution of her photographs prior to her death, many remained in her house. These eternally smiling, fading unknown faces evidently meant a great deal to her in life but now, without the framing context of her memories, they lost all but the most superficial meaning. In a curious way, they became a burden to those of us left behind.

My father asked my grandmother’s vicar (a kindly man who had been her friend for many years) what he should do with the pictures; to just throw the photographs away seemed somehow flippant and disrespectful. The vicar’s advice was simple. Take each photograph. Look at it carefully. In that moment you honour the person captured. Then you may discard it and be free.


Link: http://www.bbc.com


Behavioral analytics vs. the rogue insider [1226]

by System Administrator - Monday, 11 May 2015, 23:35
 

Behavioral analytics vs. the rogue insider

By Taylor Armerding   

The promise of User Behavioral Analytics is that it can go beyond simply detecting insider threats to predicting them. Some experts say that creates a significant privacy problem.

The recent arrest by the FBI of a former employee of JP Morgan Chase for allegedly trying to sell bank account data, including PINs, ended well for the bank.

According to the FBI, the former employee, Peter Persaud, was caught in a sting operation when he attempted to sell the data to informants and federal agents.

But such things don’t always end so well for the intended victims. The arrest was yet another example of the so-called “rogue insider” threat to organizations.

And such incidents are providing increasing incentives to use technology to counter it.

The threat of employees going rogue – wittingly or not – is significant enough that some organizations are turning to behavior analytics that, according to its advocates, are able not only to detect insider security threats as they happen, but even predict them.

Such protection would likely be welcomed by most organizations, but it comes at an obvious cost: worker privacy. Predicting security threats calls up images of “Minority Report,” the 2002 movie starring Tom Cruise, in which police arrested people before they committed crimes.

In that sci-fi world, it was “precogs” – psychics – who predicted the impending crimes. The IT version is User Behavior Analytics (UBA).

According to Gartner, “UBA is transforming security and fraud management practices because it makes it much easier for enterprises to gain visibility into user behavior patterns to find offending actors and intruders.”


Saryu Nayyar, CEO of Gurucul Solutions, said in a recent statement that her firm’s technology “continuously monitors hundreds of (employee behavior) attributes to detect and rank the risk associated with anomalous behaviors. It identifies and scores anomalous activity across users, accounts, applications and devices to predict risks associated with insider threats.”

This should not be a surprise. Data analytics are being applied to just about every challenge in the workplace, from marketing to efficiency. So it is inevitable that it would be used to counter what has always been the weakest link in the security chain – the human.

Americans have also been told for years that personal privacy is essentially dead. Still, some of them may not appreciate just how dead it is, or soon will be, in the workplace.

But Nayyar and others note that there should be no expectation of privacy in the workplace when it comes to corporate data.

“This technology is simply monitoring activity within a company’s IT systems,” she said. “It does not read emails or personal communications.”

She added that monitoring of employee behaviors by IT has been going on for a long time. “This is nothing new,” she said. “What’s different today is the use of big data analytics, machine learning algorithms and risk scoring being applied to these logs.”

Michael Overly, technology partner at Foley & Lardner LLP, said companies should notify their employees that, “business systems should not be used for personal or private communications and other activities, and that the systems and data can and likely will be reviewed, including through automated means.”

"Employees must understand that if they want privacy with regard to their online activities, they need to use a means other than their employer’s computers."

 

Michael Overly, technology partner, Foley & Lardner LLP

But he agreed with Nayyar that privacy is necessarily limited in the workplace. “Employees must understand that if they want privacy with regard to their online activities, they need to use a means other than their employer’s computers, like a smartphone or a home computer,” he said.

That is also the view of Troy Moreland, chief technology officer at Identity Automation. “In general, if employees are using employer-provided equipment, they have no right to privacy as long as it’s clearly expressed,” he said.

But Joseph Loomis, founder and CEO of CyberSponse, said such policies, if they are too heavy-handed, can cause morale problems. “I believe it’s justified,” he said. “It’s just that there are various opinions on what type of privacy someone is entitled to or not.”

He said it would likely take significant “training, education and explaining” to eliminate the feeling of a “Big Brother” atmosphere in the workplace.

"(UBA) will cause legal issues if one terminates without cause other than predictive intelligence."

 

Joseph Loomis, founder and CEO, CyberSponse

Gabriel Gumbs, vice president of product strategy at Identity Finder, said he believes the potential for morale problems is real. “At the core of UBA is an unspoken distrust of everyone, not just the rogue employees,” he said.

Matthew Prewitt, partner at Schiff Hardin and chairman of its cybersecurity and data privacy practice, said one problem with predicting misconduct is that it can become self-fulfilling. “An employee who is viewed with mistrust and suspicion is more likely to become a rogue employee,” he said.

He agrees that there is a limited expectation of privacy in the workplace, especially on the corporate network. But he said a “creative advocate” for an employee could argue that, “UBA is so different from other types of monitoring that some sort of express reference to UBA needs to be provided in the notice.”

Loomis added that in states not governed by “right-to-work” laws, UBA, “will cause legal issues if one terminates without cause other than predictive intelligence.”

The much larger problem, these experts say, comes from unintentional rogues – those with too many access privileges, who use “shadow” IT and/or who are simply lazy or careless.

“In our experience over-privileged scenarios account for approximately 65% of insider threat incidents, shadow IT 20% and carelessness 15%,” Nayyar said.

Moreland has a list of labels for such employees, including “access hoarders” who “gobble up as much access as they possibly can and refuse to relinquish any of it, even when it's no longer needed.”

Others, whom he calls “innovators,” are well-intentioned – they are trying to be more productive – but one of the ways they do so is by circumventing IT policies.

Gumbs noted that the Verizon Data Breach Investigations Report found that, “privilege abuse is the most damaging of insider threats.”

But he added that not all abuse of access privileges is innocent, and does not necessarily mean an employee is over-privileged. “In the majority of cases, users had the proper level of privilege for their roles, they simply abused those privileges for personal or financial gain,” he said. In those cases, he and other experts say identity and access management can reduce the security risks significantly.

“Over-privilege is a substantial concern,” Overly said. “In general, the majority of users in businesses today are over-privileged. The concept of least privilege is seldom implemented properly and even more seldom addressed as personnel duties change and evolve over time.”

Dennis Devlin, cofounder, CISO and senior vice president of privacy practice at SAVANTURE, said he sees the same thing. “In my experience most individuals who have been with an organization for a long time are over-privileged,” he said. “Access privileges are accretive and tend to grow over time. The law of least privileges exists not just to prevent malicious access, but also to prevent accidental or inadvertent disclosure.”

"In my experience most individuals who have been with an organization for a long time are over-privileged."

Dennis Devlin, cofounder, CISO and senior vice president of privacy practice, SAVANTURE

He said better access management could reduce the need for intrusive monitoring. “Appropriate privileges keep individuals in their respective ‘swim lanes,’ reduce the need for excessive monitoring and make SIEM analysis much more effective,” he said.

Beyond the legal and morale questions, however, the jury is still out on how well UBA works.

Overly said in his experience, “it has a long way to go with regard to accuracy. All too often, the volume of false alarms causes the results to be disregarded when an actual threat is identified.”

Nayyar said it does work, through analysis of unusual or “anomalous” behaviors in things like geolocation, elevated permissions, connecting to an unknown IP or installing unknown software for backdoor access to sensitive data (see sidebar).

She provided an example of flagging rogue behavior: a software engineer who had resigned from a company and was leaving in a month exhibited behavior never seen before.

While on vacation, the employee, “logged in from a previously unseen IP address, accessed source code repository and downloaded sensitive files from a project he wasn't assigned to,” she said.

“Two days later, the engineer accessed multiple servers and moved the downloaded files to an NFS (Network File System) location, which he made mountable, and attempted to sync the files to a prohibited consumer cloud storage service.”

She said the user was flagged as soon as he created the NFS mount point, “based on predictive modeling, and his VPN connection was terminated.”

But as effective as that sounds, even advocates of UBA warn that, like any security tool, it is a “layer” of protection, not a guarantee.

“Perfection cannot be achieved,” Overly said. “If an insider is intent on causing harm to the business, it may be impossible to prevent it.”

What does UBA track?

According to Saryu Nayyar, CEO of Gurucul Solutions, User Behavioral Analytics can detect behavioral anomalies by monitoring activities including:
  1. Geo-location: Access to resources from different geographies, locations not seen before, or from unauthorized locations, etc. This is a very simple use-case.
  2. Elevated Permissions: Employee elevating their access privileges to perform a task they or their peers have never performed in the past.
  3. Device Outbound Access: Certain high-value assets might be connecting to an unknown IP/geo-location they shouldn't connect to. This behavior could be an anomaly when compared to past behavior or peer group behavior.
  4. Employees accessing resources and installing unknown software for backdoor access to sensitive data that can be transmitted outside network.
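None of the vendors quoted here disclose their models, but use-case 1 above can be illustrated in a few lines of Python. The event format, risk weights and alert threshold below are hypothetical stand-ins; real UBA products combine hundreds of attributes with machine-learned baselines and risk scoring, as Nayyar describes.

from collections import defaultdict

class GeoAnomalyScorer:
    # Per-user baseline of locations seen so far, plus a crude
    # data-volume check (both thresholds are illustrative).
    def __init__(self, volume_threshold_mb=100):
        self.seen = defaultdict(set)
        self.volume_threshold_mb = volume_threshold_mb

    def score(self, user, location, downloaded_mb):
        risk = 0
        if self.seen[user] and location not in self.seen[user]:
            risk += 50   # use-case 1: access from a never-seen location
        if downloaded_mb > self.volume_threshold_mb:
            risk += 40   # unusual data volume for this toy model
        self.seen[user].add(location)   # update the behavioral baseline
        return risk

# Hypothetical access log: (user, country, megabytes downloaded)
events = [("eng42", "US", 2), ("eng42", "US", 5),
          ("eng42", "US", 1), ("eng42", "RO", 900)]

scorer = GeoAnomalyScorer()
for user, loc, mb in events:
    risk = scorer.score(user, loc, mb)
    if risk >= 50:
        print(f"ALERT user={user} loc={loc} downloaded={mb}MB risk={risk}")

Only the last event fires (an unseen country combined with an unusually large download), which mirrors, in miniature, the resigned-engineer case Nayyar recounts above.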


Link: http://www.csoonline.com


BioData World Congress 2015 [1433]

by System Administrator - Friday, 18 September 2015, 14:58
 

BioData World Congress 2015

Big Data Solutions is Intel's leading big data initiative, which aims to empower businesses with the tools, technologies, software and hardware for managing big data. Big Data Solutions is at the forefront of big data analytics, and today we talk to Bob Rogers, Chief Data Scientist at Intel, about his role, big data for genomics and his contributions to the BioData World Congress 2015.

Please read the attached PDF


Biological Compass [1584]

by System Administrator - Monday, 16 November 2015, 20:56
 

Researchers found the genes that code for the MagR and Cry proteins in the retinas of pigeons. WIKIMEDIA, TU7UH

Biological Compass

By Bob Grant

A protein complex discovered in Drosophila may be capable of sensing magnetism and serves as a clue to how some animal species navigate using the Earth’s magnetic field.

A variety of different animal species possess remarkable navigational abilities, using the Earth’s magnetic field to migrate thousands of miles every year or find their way home with minimal or no visual cues. But the biological mechanisms that underlie this magnetic sense have long been shrouded in mystery. Researchers in China may have found a tantalizing clue to the navigational phenomenon buried deep in the fruit fly genome. The team, led by biophysicist Can Xie of Peking University, discovered a polymer-like protein, dubbed MagR, and determined that it forms a complex with a photosensitive protein called Cry. The MagR/Cry protein complex, the researchers found, has a permanent magnetic moment, which means that it spontaneously aligns in the direction of external magnetic fields. The results were published today (November 16) in Nature Materials.

“This is the only known protein complex that has a permanent magnetic moment,” said Peter Hore, a physical chemist at the University of Oxford, U.K., who was not involved in the research. “It’s a remarkable discovery.”

To initiate the search for a physical magnetoreceptor, Xie and his colleagues called upon long-standing biochemical models that seek to explain animals’ magnetic sense. One of these involves molecules that incorporate oxides of iron in their structure, and another involves Cry, which is known to produce radical pairs in some magnetic fields. “However, this only proved that Cry plays a critical role in the magnetoreceptive biological pathways, not necessarily that it is the receptor,” Xie wrote in an email to The Scientist. “We believe there [are] such universal magnetosensing protein receptors in an organism, and we set out to find this missing link.”

The researchers performed whole-genome screens of Drosophila DNA to search for a protein that might partner with Cry and serve as that magnetoreceptor. “We predicted the existence of a multimeric magnetosensing complex with the attributes of both Cry- and iron-based systems,” Xie wrote. “Amazingly, later on, our genome-wide screening and experiments showed this is real.”

Xie and his colleagues were “brave enough to go through the whole genome-wide search to hunt for this protein,” said James Chou, a biophysicist at Harvard Medical School. “Sometimes it works, sometimes you don’t get anything. Luckily, this time he got something big out of it.”

In 2012, after identifying MagR in Drosophila, Xie and his colleagues screened the genomes of several other animal species, finding genes for both Cry and MagR in virtually all of them, including in butterflies, pigeons, robins, rats, mole rats, sharks, turtles, and humans. “This protein is evolutionarily conserved across different classes of animals (from butterflies to pigeons, rats, and humans),” Xie wrote.

Determining that MagR and Cry were highly expressed and colocalized in the retinas of pigeons, Xie’s team focused on that species to conduct further experiments to ferret out the structure and behavior of the protein complex. Using biochemical co-purification, electron microscopy, and cellular experiments in the presence of a magnetic field, the researchers constructed a rod-shaped model of the MagR/Cry complex, and suggested a potential mechanism for how the complex might work in situ to sense magnetism. “It is quite convincing that this complex may be the magnetoreceptor, at least for the organism they have fished it out from,” Chou said. “I think it’s a great step forward to open this whole mystery.”

Cry likely regulates the magnetic moment of the rod-shaped complex, while the iron-sulfur clusters in the MagR protein are probably what give rise to the permanent magnetic polarity of the structure. “The nanoscale biocompass has the tendency to align itself along geomagnetic field lines, and to obtain navigation cues from a geomagnetic field,” Xie wrote. “We propose that any disturbance of this alignment may be captured by connected cellular machinery such as the cytoskeleton or ion channels, which would channel information to the downstream neural system, forming the animal’s magnetic sense (or magnetic ‘vision’).”

Hore was cautious about saying that the newly modeled complex is absolutely responsible for magnetoreception in animals. “I don’t think I would say that it’s game-changing, but it is very interesting and will prompt a lot of experimental and theoretical work,” he said. “It may be very relevant to magnetoreception, it’s just too soon to know.”

“It may not be very accurate because this is really just a model,” Chou agreed, “but I think it’s a good effort, and it will stimulate follow-up work on the structure.”

Of course, there may well be additional biological components that play into giving animals a magnetic sense. Pigeons, for example, sense the inclination of the Earth’s magnetic field rather than the absolute direction of the field, Hore points out. The MagR/Cry complex, as described in the paper, would be capable of detecting the absolute direction or intensity of a field, not the inclination.

But beyond the clues into how animals sense the Earth’s magnetic field for navigational purposes, the discovery may yield new biochemical tools that could be used by other researchers. Chief among these applications is the potential to use the MagR/Cry complex along with controlled magnetic fields to control the behavior of cells or whole organisms. Such a development would be “sort of the magnetic version of optogenetics,” Chou said.

Xie agreed. “It may give rise to magnetogenetics,” he wrote.

The study also generates multiple questions about the biological components surrounding the protein complex and how they contribute to magnetic sensation. “This is just the tip of the iceberg,” Chou said. “This opens up a lot of future projects to unveil how this polarity, or alignment with the Earth’s magnetic field, can transmit signal, whether it’s a neural signal or one that regulates transcription.”

Xie said he thinks that while the “biocompass model” he and his colleagues proposed may serve as a universal mechanism for animal magnetoreception, there may be more magnetoreceptors to be discovered. Additionally, evolution enhanced the magnetic sense in some, especially migratory, species, which could have led to numerous variations on the theme. This may even extend to humans, he added. “I have a friend who has really good sense of directions, and he keep telling people that he can always know where is south, where is north, even to a new place he has never been,” Xie wrote in an email. “According to him, he felt there is a compass in his brain. I was laughed, but now I guess I understand what he meant. . . . However, human’s sense of direction is very complicated. Magnetoreception may play some roles.”

S. Qin et al., “A magnetic protein biocompass,” Nature Materials, doi:10.1038/nmat4484, 2015.

Link: http://www.the-scientist.com


Biomarkers, apps help in suicide prevention [1397]

by System Administrator - Tuesday, 8 September 2015, 20:47
 

Report: Biomarkers, apps help in suicide prevention


Biomedical ecosystem focused on innovative knowledge encoded with SNOMED CT [1316]

by System Administrator - Tuesday, 14 July 2015, 21:29
 

Title: Biomedical ecosystem focused on innovative knowledge encoded with SNOMED CT

Presenter: Prog. Gustavo Tejera, SUEIIDISS and KW Foundation

October 25th-30th, 2015 - Radisson Montevideo, Uruguay

Audience

Doctors, Project Managers, Managers of Content Departments, Architects of the Digital Society, MBAs, Engineers, Analysts of Big Data and IoT, Interoperability and Automation Specialists.

Objectives

Transmitting the key concepts needed to create and share reusable contents encoded with SNOMED CT. These contents can improve application logic and training at the point of care without their creators knowing how to program. It is the first step towards a social network 3.0 based on SNOMED CT.

Abstract

 

SNOMED CT is a source of incremental knowledge to articulate episodes, processes, tasks, forms, descriptors, indicators, rules and agents in all layers of the electronic health record.

How can reusable components be built with SNOMED CT knowledge to improve the logic of applications? How can they be exchanged? Web 3.0 is ready to start, but there are difficulties related to the training of professionals, the value of knowledge and market rules.

In this study we present the experience of the KW Foundation in the development and implementation of HealthStudio (open source) at the National Cancer Institute, Sanatorium John Paul II, Medical Federation of the Interior, Uruguayan Medical Sanatorium, College of Nursing and Maryland University.

In a big data store composed of more than 20 million log events, we discovered how to involve users in the construction of processes and contents that increase contextual cognition both in the care setting and in logistical and administrative tasks.

With the construction of reusable knowledge of SNOMED CT, the healthcare community gets a great facilitator for the essential alignment ("light"), connection ("camera") and interoperability ("action").

We believe that an innovative, cognitive, community and incremental ecosystem is possible to build on the basis of SNOMED CT, ready for the generation and analysis of BIG DATA and the Internet of Things. But above all, it is essential to ensure that these tools allow the inclusion of all levels in the democratic construction of eHealth and Digital Society.
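To make “knowledge encoded with SNOMED CT” concrete: each clinical statement in a reusable component carries a SNOMED CT concept identifier instead of free text, so any application can reuse and reason over it. The Python sketch below is a minimal illustration, not HealthStudio code; the record layout and the tiny hierarchy fragment are assumptions, though 38341003 and 49601007 are real SNOMED CT concept identifiers.

# A coded problem-list entry: meaning travels with the code, not the prose.
problem_entry = {
    "system": "http://snomed.info/sct",   # the SNOMED CT code system URI
    "code": "38341003",                   # hypertensive disorder
    "display": "Hypertensive disorder, systemic arterial (disorder)",
    "recorded": "2015-07-14",
}

# Tiny stand-in for the SNOMED CT is-a hierarchy: hypertension is-a
# disorder of cardiovascular system (49601007). A real system would
# ask a terminology server instead of a hand-made table.
IS_A = {"38341003": {"49601007"}}

def matches(entry, target_code):
    # True if the entry's code equals or is subsumed by target_code.
    code = entry["code"]
    return code == target_code or target_code in IS_A.get(code, set())

# A decision-support rule written once against the concept can be
# reused by every application that stores SNOMED CT-coded entries.
if matches(problem_entry, "49601007"):
    print("Cardiovascular disorder on record: schedule blood pressure follow-up.")

Because the rule tests the concept hierarchy rather than a string, it keeps working when another application records a different cardiovascular diagnosis, which is the kind of reusable, incremental knowledge the abstract describes.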

References

  1. SUEIIDISS web page http://www.sueiidiss.org, HL7 Uruguay.
  2. HL7 International web page: http://www.hl7.org.
  3. UMLS web page: www.nlm.nih.gov/research/umls.
  4. KW Foundation web page http://kwfoundation.org, free sources for community’s contents builders.
  5. HealthStudio web page http://simultec.org, free downloads for community’s contents builders.
  6. Book “The New Warriors in the Digital Society”, Gustavo Tejera, 2014.
  7. 2012 Laureate Computerworld Award, KW Social Network initiative.

Link: http://kwfoundation.org


Bionic Eye [903]

by System Administrator - Wednesday, 1 October 2014, 16:02
 

The Bionic Eye

Using the latest technologies, researchers are constructing novel prosthetic devices to restore vision in the blind.

By Various Researchers

STIMULATING VISION
© VIKTOR KOEN

In 1755, French physician and scientist Charles Leroy discharged the static electricity from a Leyden jar—a precursor of modern-day capacitors—into a blind patient’s body using two wires, one tightened around the head just above the eyes and the other around the leg. The patient, who had been blind for three months as a result of a high fever, described the experience as a flame passing downwards in front of his eyes. This was the first time an electrical device—serving as a rudimentary prosthesis—successfully restored even a flicker of visual perception.

More than 250 years later, blindness is still one of the most debilitating sensory impairments, affecting close to 40 million people worldwide. Many of these patients can be efficiently treated with surgery or medication, but some pathologies cannot be corrected with existing treatments. In particular, when light-receiving photoreceptor cells degenerate, as is the case in retinitis pigmentosa, or when the optic nerve is damaged as a result of glaucoma or head trauma, no surgery or medicine can restore the lost vision. In such cases, a visual prosthesis may be the only option. Similar to cochlear implants, which stimulate auditory nerve fibers downstream of damaged sensory hair cells to restore hearing, visual prostheses aim to provide patients with visual information by stimulating neurons in the retina, in the optic nerve, or in the brain’s visual areas.

In a healthy retina, photoreceptor cells—the rods and cones—convert light into electrical and chemical signals that propagate through the network of retinal neurons down to the ganglion cells, whose axons form the optic nerve and transmit the visual signal to the brain. (See illustration.) Prosthetic devices work at different levels downstream from the initial reception and biochemical conversion of incoming light photons by the pigments of photoreceptor rods and cones at the back of the retina. Implants can stimulate the bipolar cells directly downstream of the photoreceptors, for example, or the ganglion cells that form the optic nerve. Alternatively, for pathologies such as glaucoma or head trauma that compromise the optic nerve’s ability to link the retina to the visual centers of the brain, prostheses have been designed to stimulate the visual system at the level of the brain itself. (See illustration.)

While brain prostheses have yet to be tested in people, clinical results with retinal prostheses are demonstrating that the implants can enable blind patients to locate and recognize objects, orient themselves in an unfamiliar environment, and even perform some reading tasks. But the field is young, and major improvements are still necessary to enable highly functional restoration of sight.

Henri Lorach is currently a visiting researcher at Stanford University, where he focuses on prosthetic vision and retinal signal processing.

 

  

----------------------------------------------------------------------------

Substitutes for Lost Photoreceptors

by Daniel Palanker

 SEEING IN PIXELS: Photovoltaic arrays of pixels can be implanted on top of the retinal pigment epithelium (shown here in a rat eye, right), where they stimulate activity in the retinal neurons downstream of damaged photoreceptors.

PALANKER LAB, STANFORD UNIVERSITY

In the subretinal approach to visual prosthetics, electrodes are placed between the retinal pigment epithelium (RPE) and the retina. (See illustration.) There, they stimulate the nonspiking inner retinal neurons—bipolar, horizontal, and amacrine cells—which then transmit neural signals down the retinal network to the retinal ganglion cells (RGCs) that propagate to the brain via the optic nerve. Stimulating the retinal network helps preserve some aspects of the retina’s natural signal processing, such as the “flicker fusion” that allows us to see video as a smooth motion, even though it is composed of frames with static images; adaptation to constant stimulation; and the nonlinear integration of signals as they flow through the retinal network, a key aspect of high spatial resolution. Electrical pulses lasting several milliseconds provide selective stimulation of the inner retinal neurons and avoid direct activation of the ganglion cells and their axons, which would otherwise considerably limit patients’ ability to interpret the spatial layout of a visual scene.


The Boston Retinal Implant Project, a multidisciplinary team of scientists, engineers, and clinicians at research institutions across the U.S., is developing a retinal prosthesis that transmits information from a camera mounted on eyeglasses to a receiving antenna implanted under the skin around the eye using radiofrequency telemetry—technology similar to radio broadcast. The decoded signal is then delivered to an implanted subretinal electrode array via a cable that penetrates into the eye. The information delivered to the retina by this device is not related to direction of gaze, so to survey a scene a patient must move his head, instead of just his eyes.

The Alpha IMS subretinal implant, developed by Retina Implant AG in Reutlingen, Germany, rectifies this problem by including a subretinal camera, which converts light in each pixel into electrical currents. This device has been successfully tested in patients with advanced retinitis pigmentosa and was recently approved for experimental clinical use in Europe. Visual acuity with this system is rather limited: most patients test no better than 20/1000, except for one patient who reached 20/550 [1]. The Alpha IMS system also needs a bulky implanted power supply with cables that cross the sclera and requires complex surgery, with associated risk of complications.

 

STIMULATING ARRAY: This prototype suprachoroidal array, which is implanted behind the choroid, can be larger than prostheses inserted in front of or behind the retina.

COURTESY OF NESTLAIR PHOTOGRAPHY

To overcome these challenges, my colleagues and I have developed a wireless photovoltaic subretinal prosthesis, powered by pulsed light. Our system includes a pocket computer that processes the images captured by a miniature video camera mounted on video goggles, which project these images into the eye and onto a subretinally implanted photodiode array. Photodiodes in each pixel convert this light into pulsed current to stimulate the nearby inner retinal neurons. This method for delivering the visual information is completely wireless, and it preserves the natural link between ocular movement and image perception.

Our system uses invisible near-infrared (NIR, 880–915 nm) wavelengths to avoid the perception of bright light by the remaining functional photoreceptors. It has been shown to safely elicit and modulate retinal responses in normally sighted rats and in animals blinded by retinal degeneration [2]. Arrays with 70 micrometer pixels restored visual acuity in blind rats to half the natural level, corresponding to 20/250 acuity in humans. Based on stimulation thresholds observed in these studies, we anticipate that pixel size could be reduced by a factor of two, improving visual acuity even further. Ease of implantation and tiling of these wireless arrays to cover a wide visual field, combined with their high resolution, opens the door to highly functional restoration of sight. We are commercially developing this system in collaboration with the French company Pixium Vision, and clinical trials are slated to commence in 2016.
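The link between pixel size and acuity in the paragraph above follows from simple geometry: the finest resolvable feature of a pixelated implant is about one pixel wide, and roughly 5 micrometers on the human retina subtends one arcminute of visual angle, the feature size a 20/20 eye resolves. The Python sketch below is a textbook approximation for checking those numbers, not a calculation from Palanker's papers.

UM_PER_ARCMIN = 4.8   # ~4.8 micrometers of retina per arcminute (approximation)

def snellen_denominator(pixel_pitch_um):
    # Sampling limit: the finest feature is ~one pixel wide, and a
    # 20/20 eye resolves a 1-arcminute feature, so 20/X scales linearly.
    feature_arcmin = pixel_pitch_um / UM_PER_ARCMIN
    return 20 * feature_arcmin

for pitch in (70, 35):
    print(f"{pitch} um pixels -> ~20/{snellen_denominator(pitch):.0f}")

Seventy-micrometer pixels give a sampling limit near 20/290, consistent with the reported 20/250 equivalent, and halving the pitch to 35 micrometers would bring roughly 20/150 within reach.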

 

FOLLOW THE LIGHT: A blind patient navigates an obstacle course without the assistance of her guide-dog, thanks to a head-mounted camera and a backpack computer, which gather and process visual information before delivering a representation of the visual scene via her suprachoroidal retinal prosthesis.

COURTESY OF NESTLAIR PHOTOGRAPHY

Fabio Benfenati of the Italian Institute of Technology in Genoa and Guglielmo Lanzani at the institute’s Center for Nanoscience and Technology in Milan are also pursuing the subretinal approach to visual prostheses, developing a device based on organic polymers that could simplify implant fabrication [3]. So far, subretinal light-sensitive implants appear to be a promising approach to restoring sight to the blind.

Daniel Palanker is a professor in the Department of Ophthalmology and Hansen Experimental Physics Laboratory at Stanford University.

References

  1. K. Stingl et al., “Functional outcome in subretinal electronic implants depends on foveal eccentricity,” Invest Ophthalmol Vis Sci, 54:7658-65, 2013.
  2. Y. Mandel et al., “Cortical responses elicited by photovoltaic subretinal prostheses exhibit similarities to visually evoked potentials,” Nat Commun, 4:1980, 2013.
  3. D. Ghezzi et al., “A polymer optoelectronic interface restores light sensitivity in blind rat retinas,”Nat Photonics, 7:400-06, 2013.

----------------------------------------------------------------------------

Behind the Eye

by Lauren Ayton and David Nayagam

 NEW SIGHT: A recipient of a prototype suprachoroidal prosthesis tests the device with Bionic Vision Australia (BVA) researchers.

COURTESY OF BIONIC VISION AUSTRALIA'S BIONIC EYE PROJECT

Subretinal prostheses implanted between the retina and the RPE, along with epiretinal implants that sit on the surface of the retina (see below), have shown good results in restoring some visual perception to patients with profound vision loss. However, such devices require technically challenging surgeries, and the site of implantation limits the potential size of these devices. Epiretinal and subretinal prostheses also face challenges with stability and the occurrence of adverse intraocular events, such as infection or retinal detachment. Due to these issues, researchers have been investigating a less invasive and more stable implant location: between the vascular choroid and the outer sclera. (See illustration.)

Like subretinal prostheses, suprachoroidal implants utilize the bipolar cells and the retinal network down to the ganglion cells, which process the visual information before relaying it to the brain. But devices implanted in this suprachoroidal location can be larger than those implanted directly above or below the retina, allowing them to cover a wider visual field, ideal for navigation purposes. In addition, suprachoroidal electrode arrays do not breach the retina, making for a simpler surgical procedure that should reduce the chance of adverse events and can even permit the device to be removed or replaced with minimal damage to the surrounding tissues.

Early engineering work on suprachoroidal device design began in the 1990s with research performed independently at Osaka University in Japan [1] and the Nano Bioelectronics and Systems Research Center of Seoul National University in South Korea [2]. Both these groups have shown proof of concept in bench testing and preclinical work, and the Japanese group has gone on to human clinical trials with promising results [3]. Subsequently, a South Korean collaboration with the University of New South Wales in Australia continued suprachoroidal device development.

More recently, our groups, the Bionics Institute and the Centre for Eye Research Australia, working as part of the Bionic Vision Australia (BVA) partnership, ran a series of preclinical studies between 2009 and 2012 [4]. These studies demonstrated the safety and efficacy of a prototype suprachoroidal implant, made up of a silicone carrier with 33 platinum disc-shaped electrodes that can be activated in various combinations to elicit the perception of rudimentary patterns, much like pixels on a screen. Two years ago, BVA commenced a pilot trial, in which researchers implanted the prototype in the suprachoroidal space of three end-stage retinitis pigmentosa patients who were barely able to perceive light. The electrode array was joined to a titanium connector affixed to the skull behind the ear, permitting neurostimulation and electrode monitoring without the need for any implanted electronics [5]. In all three patients, the device proved stable and effective, providing enough visual perception to better localize light, recognize basic shapes, orient in a room, and walk through mobility mazes with reduced collisions [6]. Preparation is underway for future clinical trials, which will provide subjects with a fully implantable device with twice the number of electrodes.


Meanwhile, the Osaka University group, working with the Japanese company NIDEK, has been developing an intrascleral prosthetic device, which, unlike the Korean and BVA devices, is implanted in between the layers of the sclera rather than in the suprachoroidal space. In a clinical trial of this device, often referred to as suprachoroidal-transretinal stimulation (STS), two patients with advanced retinitis pigmentosa showed improvement in spatial resolution and visual acuity over a four-week period following implantation [3].

Future work will be required to fully investigate the difference in visual perception provided by devices implanted in the various locations in the eye, but the initial signs are promising that suprachoroidal stimulation is a safe and viable clinical option for patients with certain degenerative retinal diseases.

Lauren Ayton is a research fellow and the bionic eye clinical program leader at the University of Melbourne’s Centre for Eye Research Australia. David Nayagam is a research fellow and the bionic eye chronic preclinical study leader at the Bionics Institute in East Melbourne and an honorary research fellow at the University of Melbourne.

References

  1. H. Sakaguchi et al., “Transretinal electrical stimulation with a suprachoroidal multichannel electrode in rabbit eyes,” Jpn J Ophthalmol, 48:256-61, 2004.
  2. J.A. Zhou et al., “A suprachoroidal electrical retinal stimulator design for long-term animal experiments and in vivo assessment of its feasibility and biocompatibility in rabbits,” J Biomed Biotechnol, 2008:547428, 2008.
  3. T. Fujikado et al., “Testing of semichronically implanted retinal prosthesis by suprachoroidal-transretinal stimulation in patients with retinitis pigmentosa,” Invest Ophthalmol Vis Sci, 52:4726-33, 2011.
  4. D.A.X. Nayagam et al., “Chronic electrical stimulation with a suprachoroidal retinal prosthesis: a preclinical safety and efficacy study,” PLOS ONE, 9:e97182, 2014.
  5. A.L. Saunders et al., “Development of a surgical procedure for implantation of a prototype suprachoroidal retinal prosthesis,” Clin & Exp Ophthalmol, doi:10.1111/ceo.12287, 2014.
  6. M.N. Shivdasani et al., “Factors affecting perceptual thresholds in a suprachoroidal retinal prosthesis,” Invest Ophthalmol Vis Sci, in press. 

----------------------------------------------------------------------------

Shortcutting the Retina

By Mark Humayun, James Weiland, and Steven Walston

TINY IMPLANTS: The Argus II retinal implant, which was approved for sale in Europe in 2011 and in the U.S. in 2013, consists of a 3 mm x 5 mm 60-electrode array (shown here) and an external camera and video-processing unit. Users of this implant are able to perceive contrasts between light and dark areas.

© PHILIPPE PSAILA/SCIENCE SOURCE

To bypass upstream retinal processing entirely, researchers have developed so-called epiretinal devices that are placed on the anterior surface of the retina, where they stimulate the ganglion cells that are the eye's output neurons. Because this strategy targets the last cell layer of the retinal network, it works regardless of the state of the upstream neurons. (See illustration.)

In 2011, Second Sight obtained approval from the European Union to market its epiretinal device, the Argus II Visual Prosthesis System, which allowed clinical trial subjects who had been blind for several years to recover some visual perception, such as basic shape recognition and, occasionally, reading ability. In 2013, the FDA approved the device as well. The system uses a glasses-mounted camera to capture visual scenes and wirelessly transmits this information as electrical stimulation patterns to a 6 x 10 microelectrode array. The array is surgically placed over the macular region, responsible in a healthy retina for high-acuity vision, and covers an area of approximately 20° of visual space.
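Conceptually, the vision-processing step is a drastic downsampling: each camera frame must be reduced to a single stimulation level per electrode. The Python sketch below illustrates that idea under stated assumptions; the block averaging, the 300-microamp ceiling, and the linear brightness-to-current mapping are stand-ins for illustration, not Second Sight's actual processing.

    import numpy as np

    # Illustrative sketch of the camera-to-array step: block-average a
    # grayscale frame down to one value per electrode of the 6 x 10 array,
    # then map brightness linearly to a stimulation amplitude. The current
    # ceiling and linear mapping are assumptions, not the real processing.

    ROWS, COLS = 6, 10           # Argus II electrode grid
    MAX_AMP_UA = 300.0           # hypothetical per-electrode ceiling (microamps)

    def frame_to_stimulation(frame):
        """Reduce a 2-D grayscale frame (0-255) to a 6 x 10 amplitude map."""
        h, w = frame.shape
        bh, bw = h // ROWS, w // COLS
        trimmed = frame[:bh * ROWS, :bw * COLS]          # drop ragged edges
        blocks = trimmed.reshape(ROWS, bh, COLS, bw).mean(axis=(1, 3))
        return blocks / 255.0 * MAX_AMP_UA

    frame = np.random.randint(0, 256, size=(480, 640)).astype(float)
    amps = frame_to_stimulation(frame)
    print(amps.shape)            # (6, 10): one amplitude per electrode

With ten electrode columns spanning roughly 20° of visual field, each electrode covers on the order of 2° of visual angle, which helps explain why users perceive coarse light-dark contrasts rather than detailed images.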

In a clinical trial, 30 patients receiving the device were able to locate a high-contrast square on a computer monitor more accurately, and when asked to track a moving high-contrast bar, roughly half could discriminate the direction of the bar's movement better than without the system [1]. The increased visual acuity has also enabled patients to read large letters, albeit at a slow rate, and has improved the patients' mobility [2]. With the availability of the Argus II, patients with severe retinitis pigmentosa have the first treatment that can actually improve vision. To date, the system has been commercially implanted in more than 50 patients.

Several other epiretinal prostheses have shown promise, though none have received regulatory approval. Between 2003 and 2007, Intelligent Medical Implants tested a temporarily implanted, 49-electrode prototype device in eight patients, who reported seeing spots of light when electrodes were activated. Most of these prototype devices were implanted for only a few months, however, and with no integrated camera, patients could not activate the device outside the clinic, limiting evaluation of the prosthesis's efficacy. The group has since re-formed as Pixium Vision (the company currently collaborating with Daniel Palanker's group at Stanford to develop a subretinal device) and has now developed a permanent epiretinal implant that is in clinical trials. The company is also planning trials of a 150-electrode device that it hopes will further improve visual resolution.

Future developments in this area will aim to improve the spatial resolution of the stimulated vision; increase the field of view that can be perceived; and increase the number of electrodes. Smaller electrodes would activate fewer retinal ganglion cells, which would result in higher resolution. These strategies will be rigorously tested, and, if successful, may enable retinal prostheses that provide an even better view of the world.

Mark Humayun is Cornelius J. Pings Chair in Biomedical Sciences at the University of Southern California, where James Weiland is a professor of ophthalmology and biomedical engineering. Steven Walston is a graduate student in the Bioelectronic Research Lab at the university.

References

  1. M.S. Humayun et al., “Interim results from the international trial of Second Sight’s visual prosthesis,” Ophthalmology, 119:779-88, 2012.
  2. L. da Cruz et al., “The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss,” Br J Ophthalmol, 97:632-36, 2013.

----------------------------------------------------------------------------

Into the Brain

By Collette Mann, Arthur Lowery, and Jeffrey V. Rosenfeld

DIRECT TO BRAIN: Gennaris's bionic-vision system—which includes an eyeglasses-mounted camera that receives visual information (below), a small computerized vision processor (right), and 9 mm x 9 mm electronic tiles (far right) that are implanted into one hemisphere of the visual cortex at the back of the brain—is expected to enter human trials next year.

COURTESY OF MVG

In addition to the neurons of the eye, researchers have also targeted the brain to stimulate artificial vision in humans. Early experiments in epileptic patients with persistent seizures, performed by the German neurologists and neurosurgeons Otfrid Förster in 1929 and Fedor Krause and Heinrich Schum in 1931, showed that electrical stimulation of an occipital pole, the most posterior part of each brain hemisphere, resulted in sensations of light flashes, termed phosphenes. By the mid-1950s, Americans John C. Button, an osteopath and later MD, and Tracy Putnam, then Chief of Neurosurgery at Cedars-Sinai Hospital in Los Angeles, had implanted stainless steel wires connected to a simple stimulator into the cortices of four people who were blind, and the patients subsequently reported seeing flashes of light.

The first functional cortical visual prosthesis was produced in England in 1968, when Giles Brindley, a physiologist, and Walpole Lewin, a neurosurgeon, both at Cambridge University, implanted 80 surface electrodes embedded in a silicone cap in the right occipital cortex of a patient. Each electrode connected to one of 80 corresponding extracranial radio receivers, which generated simple, distinctly located phosphene shapes, and the patient could point with her hand to each phosphene's location in her visual field. When more than one electrode at a time was stimulated, simple patterns emerged.

The subsequent aim of the late William H. Dobelle was to provide patients with visual images comprising discrete sets of phosphenes—in other words, artificial vision. Dobelle had begun studying electrical stimulation of the visual cortex in the late 1960s with sighted patients undergoing surgery to remove occipital lobe tumors. He subsequently implanted surface-electrode arrays, first temporarily, then permanently, in the visual cortices of several blind volunteers. However, it was not until the early 2000s that the technology became available to connect a miniature portable camera and computer to the electrodes for practical conversion of real-world sights into electrical signals. With the resultant cortical stimulation, a patient was able to recognize large-print letters and the outline of images.

 


To elicit phosphenes, however, the surface electrodes used in these early cortical prostheses required large electrical currents (~3 mA–12 mA), which risked triggering epileptic seizures or debilitating migraines. The devices also required external cables that penetrated the skull, risking infection. Today, with the use of wireless technology, a number of groups are aiming to improve cortical vision prostheses, hoping to provide benefit to millions of people with currently incurable blindness.

One promising device from our group is the Gennaris bionic-vision system, which comprises a digital camera on a glasses frame. Images are transmitted to a small computerized vision processor that converts each picture into waveform patterns, which are then sent wirelessly to small electronic tiles implanted in the visual cortex at the back of the brain. Each tile houses 43 penetrating electrodes, and each electrode may generate a phosphene. The patterns of phosphenes will create 2-D outlines of relevant shapes in the central visual field. The device is in the preclinical stage, with the first human trials planned for next year, when we hope to implant four to six tiles per patient to stimulate patterns of several hundred phosphenes that patients can use to navigate the environment, identify objects in front of them, detect movement, and possibly read large print.
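A toy version of the image-to-phosphene conversion is sketched below in Python. The gradient-threshold edge detector, the square phosphene layout, and the four-tile count are assumptions chosen for illustration; the actual Gennaris vision-processing algorithms and electrode geometry are more sophisticated.

    import numpy as np

    # Toy image-to-phosphene pipeline: a gradient-threshold edge map
    # stands in for the real outline-extraction algorithms, and the
    # phosphenes are laid out on a notional square grid sized to the
    # available electrodes (43 per tile, four tiles assumed here).

    ELECTRODES_PER_TILE = 43
    N_TILES = 4                                  # four to six planned per patient

    def extract_outline(frame, thresh=30.0):
        """Binary edge map from local intensity gradients."""
        gy, gx = np.gradient(frame.astype(float))
        return np.hypot(gx, gy) > thresh

    def outline_to_phosphenes(edges, n_phosphenes):
        """Downsample the edge map to roughly one on/off value per phosphene."""
        side = int(np.sqrt(n_phosphenes))        # notional square layout
        h, w = edges.shape
        bh, bw = h // side, w // side
        coarse = edges[:bh * side, :bw * side].reshape(side, bh, side, bw)
        return coarse.mean(axis=(1, 3)) > 0.1    # on if enough edge pixels

    frame = np.random.randint(0, 256, size=(240, 320))
    phosphenes = outline_to_phosphenes(extract_outline(frame),
                                       ELECTRODES_PER_TILE * N_TILES)
    print(phosphenes.shape)                      # (13, 13): coarse outline map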


Other groups currently developing cortical visual prostheses include the Illinois Institute of Technology, the University of Utah, the École Polytechnique de Montréal in Canada, and Miguel Hernández University in Spain. All these devices follow the same principle of inducing phosphenes that the patient can visualize. Many technical challenges must be overcome before such devices can be brought to the clinic, however, including the need to improve implantation techniques: beyond patient safety, accurate and repeatable device insertion is essential for the best results.

Development of bionic vision devices is accelerating rapidly due to collaborative efforts using the latest silicon chip and electrode design, computer vision processing algorithms, and wireless technologies. We are optimistic that a range of practical, safe, and effective bionic vision devices will be available over the next decade and that blind individuals will have the ability to “see” their world once again.

Collette Mann is the clinical program coordinator of the Monash Vision Group in Melbourne, Australia, where Arthur Lowery, a professor of electrical engineering, is the director. Jeffrey V. Rosenfeld is head of the Division of Clinical Sciences & Department of Surgery at the Central Clinical School at Monash University and director of the Department of Neurosurgery at Alfred Hospital, which is also in Melbourne.

 


Logo KW

Bionic Implants [991]

by System Administrator - Thursday, 13 November 2014, 12:31
 

Meet Bionic Amputee, Nigel Ackland

BY JASON DORRIER

In 2006, Nigel Ackland had an accident. He was working as a metal smelter at the time, and his right hand was crushed in an industrial mixer. The hand was so severely damaged that six months later he was forced to have it amputated.

Speaking at Exponential Medicine, Ackland presented himself as an ordinary guy facing an extraordinary challenge—a distinction he shares with millions of fellow amputees. How do you put your life back together? How do you learn to live with a disability?

A year after losing his hand, Ackland said, he was still beset by fits of sudden, raging anger. It was hard enough to adapt physically, but he noticed changes in how people treated him too. Strangers would avoid him or stare with some mixture of pity, fear, or disgust. The guy he saw in the mirror was a physical and mental wreck.

“Psychologically, I was in a very dark place.”

Then something happened that changed Ackland’s life. He got a call from RSL Steeper, maker of the bebionic robotic hand. The company asked if he’d be interested in becoming the first amputee to test out their product.

Bebionic is a prosthetic hand that myoelectrically senses muscle twitches in an amputee’s stump. Depending on which muscles users twitch, the hand can perform a variety of functional or communicative grip patterns and hand positions.
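A minimal Python sketch of that control idea appears below. The two-channel setup, the thresholds, and the grip list are illustrative assumptions; the actual bebionic control scheme is proprietary and considerably more refined.

    # Minimal sketch of threshold-based myoelectric grip selection.
    # Channel names, thresholds, and the grip list are illustrative
    # assumptions; the actual bebionic control firmware is proprietary.

    GRIPS = ["open", "power_grip", "precision_grip", "point"]

    def select_grip(flexor_emg, extensor_emg, current_grip="open",
                    on_thresh=0.6, off_thresh=0.4):
        """Choose a grip from smoothed EMG envelopes (0-1) of two residual
        muscles: co-contraction cycles grips, a lone flexor signal closes
        the hand, and a lone extensor signal opens it."""
        if flexor_emg > on_thresh and extensor_emg > on_thresh:
            return GRIPS[(GRIPS.index(current_grip) + 1) % len(GRIPS)]
        if flexor_emg > on_thresh and extensor_emg < off_thresh:
            return current_grip if current_grip != "open" else "power_grip"
        if extensor_emg > on_thresh and flexor_emg < off_thresh:
            return "open"
        return current_grip                      # ambiguous signal: hold state

    print(select_grip(0.8, 0.1))                 # open hand closes to a power grip
    print(select_grip(0.9, 0.9, "power_grip"))   # co-contraction cycles onward

The design point the sketch captures is that a small number of reliably distinguishable muscle signals can be mapped onto a much larger repertoire of hand behaviors by treating grip selection as a simple state machine.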

We first wrote about Ackland and his bionic hand back in 2012—when he first began releasing video showing what he could do—and followed up again last year.

With practice and experience, Ackland is now a pro. But it isn't just about what the technology enables him to do. His life has changed positively pretty much across the board.

“People still stop and stare,” he says, “But it’s not out of fear or pity anymore.”

He says people are more ready to accept him as he is, and of course, are full of curiosity about his bionic hand. They want to shake hands, see what he can do with it.

Ackland’s story offers a critical glimpse behind the scenes. The human element is all too often lost in the flush of excitement, in the feverish development of technology. But the point, ultimately, is to help people regain a sense of wholeness.

“I’m just an ordinary guy fortunate enough to wear an extraordinary piece of technology,” he says.

Ackland told us that every thirty seconds someone becomes an amputee. Since he began testing the hand, another twenty people have joined him. And no doubt, the technology can't come fast enough for everyone else who could benefit.

Encouragingly, the technology continues to develop. We've covered a number of prosthetic limbs—arms and legs—that intelligently respond to a user's thoughts and intentions to move. One team, out of Case Western, is even working on a prosthetic that can provide the user with a rudimentary sense of touch.

In the coming years, we hope to see continued improvement and wider availability. Ackland is, undoubtedly, a powerful model of just how much folks stand to benefit.

“I do believe life changing doesn’t have to be life ending,” he told us.

Logo KW

Bipolar Disorder [1517]

by System Administrator - Tuesday, 13 October 2015, 17:33
 

Bipolar Disorder

by SearchMedica

Education on Circadian Cycle Improves Depression in Bipolar Disorder

By Mark L. Fuerst

In bipolar disorder, disturbances of biological rhythm often lead to mood swings and relapses, and impairments in biological rhythm may predict poor functioning and quality of life. These authors evaluated the effect of psychoeducation on biological rhythm and on the reduction of depressive, anxious, and manic symptoms at 12 months' follow-up. This randomized clinical trial included 61 young adults aged 18 to 29 years who were diagnosed with bipolar disorder.

Biological rhythm was assessed with the Biological Rhythm Interview Assessment in Neuropsychiatry (BRIAN), an instrument developed for the clinical evaluation of biological rhythm disturbance experienced by patients suffering from mental disorders. It consists of 21 items divided into 5 main areas related to circadian rhythm disturbance in psychiatric patients, namely sleep, activities, social rhythms, eating pattern, and predominant rhythm. In particular, the BRIAN assesses the frequency of problems related to the maintenance of circadian rhythm regularity (a scoring sketch follows at the end of this summary).

The bipolar patients in the study were randomized to receive either a combined intervention of psychoeducation plus medication (32 patients) or treatment-as-usual with medication alone (29 patients). The combined intervention appeared more effective than treatment-as-usual in improving depressive symptoms at post-intervention, as well as in regulating the sleep/social domain at 6 months' follow-up.

The authors noted that improvement of depressive symptoms, along with regulation of sleep and social activities, is known to prevent the onset of bipolar episodes and therefore to improve long-term outcomes.
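As a rough illustration of how a multi-domain instrument like the BRIAN is tallied, the Python sketch below computes per-domain and total scores. The 4-point rating scale and the item-to-domain split shown are assumptions for illustration only; consult the published instrument for the actual items and scoring rules.

    # Illustrative scoring for a BRIAN-style instrument: 21 items across
    # 5 domains, each rated on an assumed 1-4 frequency scale (higher =
    # more disturbance). The item counts per domain are assumptions.

    DOMAINS = {
        "sleep": 5,
        "activities": 5,
        "social_rhythms": 4,
        "eating_pattern": 4,
        "predominant_rhythm": 3,
    }                                            # 21 items in total

    def score_brian(responses):
        """responses maps each domain to its list of 1-4 item ratings;
        returns per-domain sums plus an overall total."""
        scores = {}
        for domain, n_items in DOMAINS.items():
            items = responses[domain]
            assert len(items) == n_items and all(1 <= r <= 4 for r in items)
            scores[domain] = sum(items)
        scores["total"] = sum(scores[d] for d in DOMAINS)
        return scores

    example = {d: [2] * n for d, n in DOMAINS.items()}
    print(score_brian(example)["total"])         # 42 for this uniform example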

 

Gambling Problems Associated with Bipolar Type 2 Disorder

By Mark L. Fuerst 

Bipolar disorder is associated with elevated rates of problem gambling, and mood disturbances such as hypomanic experiences are associated with elevated rates of gambling problem symptoms. However, little is known about rates in the different presentations of bipolar illness.

The present authors set out to determine the prevalence and distribution of problem gambling among people with bipolar disorder in the United Kingdom. They used the Problem Gambling Severity Index to measure gambling problems in 635 bipolar disorder patients, with a particular focus on those with bipolar type 2 disorder (the standard index banding is sketched at the end of this summary).

The results show that moderate to severe gambling problems were 4 times more common in people with bipolar disorder than in the general population and were associated with type 2 disorder, a history of suicidal ideation or attempts, and rapid cycling. The major finding, the authors state, is that those with a diagnosis of bipolar type 2 disorder were at significantly higher risk of gambling problems than those with a diagnosis of bipolar type 1 disorder. The data suggest that mild mood elevation involving enhanced reward focus, sleeplessness, and distractibility constitutes a particular risk factor for gambling problems among these patients.

In this study, patients with bipolar disorder who were at risk of problem gambling were likely to be younger and to have an earlier illness onset than patients at low risk, and they were also more likely to work in service industries or to be unemployed. In contrast to previous studies in the general population and in bipolar disorder, there was no higher prevalence of problem gambling in men than in women.

In conclusion, the authors state that about 1 in 10 patients with bipolar disorder may be at moderate to severe risk of problem gambling, possibly associated with suicidal behavior and a course of rapid cycling. Given the elevated rates of gambling problems in bipolar type 2 disorder, they recommend that clinicians routinely assess gambling problems in these patients.
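The "moderate to severe" wording maps onto the standard Problem Gambling Severity Index bands: nine items scored 0-3, with totals of 3-7 indicating moderate risk and 8 or more indicating problem gambling. The Python sketch below applies these commonly published cut-offs; treat it as a summary of the standard scoring rather than of the authors' exact analysis.

    # Standard PGSI banding: nine items, each scored 0-3, summed to a
    # 0-27 total and mapped to the commonly published risk categories.

    def pgsi_band(item_scores):
        """Return the PGSI risk category for nine item scores (each 0-3)."""
        assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
        total = sum(item_scores)
        if total == 0:
            return "non-problem"
        if total <= 2:
            return "low risk"
        if total <= 7:
            return "moderate risk"
        return "problem gambling"

    print(pgsi_band([1, 1, 1, 0, 0, 0, 0, 0, 0]))   # -> "moderate risk"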

 

Link: http://topics.searchmedica.com

 

Logo KW

Bipolar patient develops mHealth app to help track mood disorders [1396]

by System Administrator - Tuesday, 8 September 2015, 20:33
 

Bipolar patient develops mHealth app to help track mood disorders

