References


Full references for the vocabulary, events, chronicles, evidence, and other content used in KW Foundation projects related to biotechnology and neuroscience.


V


Virtual app and virtual desktop [1750]

by System Administrator - Monday, 10 April 2017, 17:24
 

Virtual app and virtual desktop access gains mobile traction

by Eddie Lockhart

Larger screens and better technology, including built-in 4G, are big reasons why VDI on mobile devices is becoming more realistic than ever. Browser-based clients also play a role.

Virtual desktops and applications are commonplace in many organizations, but the technology has struggled to catch on with mobile users.

Traditionally, mobile devices were just too small for virtual desktop access. Early smartphone screens were small, so when users did access virtual desktops with their smartphones, the interface was unwieldy; they had to scroll constantly to see everything on the screen.

Well, times have changed, thanks to the rise of larger phones, tablets and browser-based virtual resource delivery -- not to mention the advent of mobile thin and zero clients. Now mobile virtual application and virtual desktop access is more viable.

How mobility and virtualization are coming together

There have been a lot of technological advancements in everything from the devices users work with to the remote display technologies administrators use to deliver virtual resources. From a device standpoint, screens are larger and have better resolution than ever, so full virtual desktop access is easier for mobile users. And most devices come with 4G built-in, which means users can connect to virtual resources without a Wi-Fi connection.

"The big problem with mobile thin clients is that despite their name, they aren't really all that mobile."

Several products also aim to improve the mobile virtual desktop experience. On the remote display front, VMware designed Blast Extreme with mobile devices in mind. The protocol uses the graphics processing unit rather than the CPU to process graphics, which puts less strain on device batteries.

IT administrators can push individual apps to users' devices with tools such as Citrix XenApp, Microsoft App-V and VMware ThinApp instead of having to deliver full desktops. In addition, suites such as Citrix Cloud and VMware Workspace One combine desktop and app virtualization with enterprise mobility management, giving admins a single location to manage virtual and native mobile apps.

How browser-based clients fit into the equation

In the sea of devices and operating systems that make up most organizations these days, web browsers represent the common denominator. Thanks to browser-based virtualization clients, it doesn't matter whether a user is working with Windows 10 on a traditional PC or Apple iOS on an iPhone: as long as the device has a browser, the user can reach virtual resources.

The browser-based approach also makes life easier for admins dealing with virtual apps, because they only have to worry about updating workers' browsers rather than updating the multitude of devices they use.

In addition, any users who need to access multiple virtual desktops can easily do so with browser-based virtualization because they can open one desktop in one window and another desktop in a second window. They can then flip back and forth the same way they would between browser tabs.

What about mobile thin and zero clients?

Another option is to use thin clients. HP Inc., IGEL and Dell Wyse all offer their own lines of mobile thin clients. The big problem with mobile thin clients is that despite their name, they aren't really all that mobile because they come with a standard laptop body. They are obviously more portable than a desktop PC, but they cannot compare to a smartphone, 2-in-1 or tablet.

Mobile zero clients are easier to manage than thin clients because they have fewer settings, and remote desktops appear as if they run locally, which makes everything straightforward for users. Perhaps most importantly, neither data nor apps live on zero clients.

The clean-slate nature and portability of zero clients make it seem like they'd be perfect for mobile users. Traditional zero clients, however, do not come with Wi-Fi capabilities, so they're not particularly useful to users who need to connect to wireless internet on the go.

As a result, the mobile zero client market is still in its infancy because there simply aren't that many products that fit the bill. As of now, Toshiba and NCS Technologies Inc. are the only vendors that offer products categorized as mobile zero clients with Wi-Fi capabilities. Even with Wi-Fi in place, admins must install local browsers on users' mobile zero clients so they can access the internet.


Link: http://searchvirtualdesktop.techtarget.com


Virtual Application Patterns [794]

by System Administrator - Friday, 29 August 2014, 16:28
 

Understanding Patterns of Expertise: Virtual Application Patterns

Abstract

Enterprise IT departments strive to contribute to the competitiveness of the business organization, developing and deploying innovative applications that can help benefit the bottom line and drive top-line growth. Too often, however, IT managers find themselves unable to develop and deploy applications with the agility they would like. The skills needed to quickly design, test, configure and integrate applications into complex IT environments can be difficult to find, and expert IT staff can quickly become overwhelmed by demand.

Virtualization technology has helped drive efficiency improvements through consolidation of workloads and, to a lesser extent, through the management of systems and workloads.

Now virtual application patterns from IBM take these capabilities a significant step further. This paper shows how IT organizations can use patterns of expertise provided by virtual application patterns to speed deployments, reduce the risk of error, and help simplify and automate tasks across the management and maintenance lifecycle.

Please read the attached whitepaper.


Virtual CPU [472]

by System Administrator - Tuesday, 29 April 2014, 00:33
 

Virtual CPU (vCPU)

A virtual CPU (vCPU), also known as a virtual processor, is a physical central processing unit (CPU) that is assigned to a virtual machine (VM).

By default, virtual machines are allocated one vCPU each. If the physical host has multiple CPU cores at its disposal, however, then a CPU scheduler assigns execution contexts and the vCPU essentially becomes a series of time slots on logical processors.

Because processing time is billable, it is important for an administrator to understand how the cloud provider documents vCPU usage on an invoice. It is also important for the administrator to realize that adding more vCPUs will not automatically improve performance. This is because as the number of vCPUs goes up, it becomes more difficult for the scheduler to coordinate time slots on the physical CPUs, and the wait time can degrade performance.
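A quick way to reason about when extra vCPUs start to hurt is to watch the vCPU-to-physical-core overcommit ratio. Below is a minimal sketch of that bookkeeping in Python; the VM inventory and the 4:1 warning threshold are illustrative assumptions, not values from any hypervisor's documentation.

```python
# Minimal sketch: estimate vCPU overcommitment on a single host.
# The VM inventory and the 4:1 threshold are illustrative assumptions.

physical_cores = 16  # logical processors the scheduler can hand out

# vCPUs allocated to each VM on the host (hypothetical inventory)
vms = {"web-01": 2, "web-02": 2, "db-01": 8, "build-01": 16}

total_vcpus = sum(vms.values())
ratio = total_vcpus / physical_cores

print(f"{total_vcpus} vCPUs on {physical_cores} cores -> {ratio:.2f}:1 overcommit")
if ratio > 4:  # common rule-of-thumb ceiling; tune per workload
    print("High overcommit: expect scheduler wait (CPU ready) time to rise.")
```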

In VMware, vCPUs are part of the symmetric multi-processing (SMP) multi-threaded compute model. SMP also allows threads to be split across multiple physical or logical cores to improve performance of more parallel virtualized tasks. vCPUs permit multitasking to be performed sequentially in a multi-core environment.

This was last updated in April 2014

Contributor(s): Matthew Haughn
Posted by: Margaret Rouse
 

Virtual Desktop Infrastructure (VDI) [462]

by System Administrator - Thursday, 3 April 2014, 17:34
 

VDI

VDI itself is essentially a complex distributed application consisting of a set of components including clients, desktop virtual machines, connection brokers, load balancers, directory services, image composers and more. A key goal when planning and operating a VDI environment is to ensure that end users receive performance comparable to that of a physical desktop. This requires that the underlying virtual infrastructure have resources appropriately allocated at all times, including virtual machine placement, to assure performance for the distributed VDI application and its desktop end users.


VIRTUAL REALITY: THE NEXT GREAT MEDIA PLATFORM [898]

by System Administrator - Sunday, 28 September 2014, 22:26
 

VIRTUAL REALITY MAY BECOME THE NEXT GREAT MEDIA PLATFORM—BUT CAN IT FOOL ALL FIVE SENSES?

Written By: Jason Dorrier

Jason Silva calls technologies of media “engines of empathy.” They allow us to look through someone else’s eyes, experience someone else’s story—and develop a sense of compassion and understanding for them, and perhaps for others more generally.

But he says, while today cinema is “the cathedral of communication technology,” looking to the future, there is another great medium looming—virtual reality.

Expanding on the possibilities embodied in the Oculus Rift, Silva envisions a future when we inhabit not virtual realities but “real virtualities.” A time when we discard today’s blunt tools of communication to cloak ourselves in thought and dreams.

It’s an electrifying vision of the future, one many science fiction fans have imagined. At present, we’re nowhere near the full digital duplication and manipulation of reality Silva describes. But if we don’t dream a thing, it’ll never come to pass.

Sometimes we can see the long-term potential of a technology and are awed by it, even though we don’t know how to make it happen yet. All new technologies begin in the mind’s eye like this. “We live in condensations of our imagination,” Terence McKenna says.

Realization can take years; the engineering process can fizzle and reignite—go through a roller coaster of inflated expectations and extreme disillusion. Eventually, we get close enough to the dream to call it a sibling, if not an identical twin.

So, what will it take to get to Silva’s real virtuality? Let’s take a (brief) stroll through the five senses and see how close we are to digitally fooling them.

 

Sight

Two items crucial to immersive visuals are imperceptible latency (that is, no delay between our head moving and the scene before us adjusting) and high resolution.

With a high-performance PC and LED- and sensor-based motion tracking, the Oculus Rift has the first one almost nailed for seated VR. As you move your head, the scene in front of you adapts almost seamlessly—as it would in the real world. This is why the Rift is so exciting: it not only makes such immersion possible, it does so affordably.

But what about resolution? It’s acceptable, but could be better.

Currently, the Rift uses a high-definition display—the latest prototype is rumored to be about 2,600 pixels across. You can’t see the dark edges separating pixels (as you could in the first developer kit) but the graphics still aren’t as sharp as they could be.

Displays about 4,000 and even 8,000 pixels across (4K and 8K) are near, and they get us closer to ideal resolution—but even they won’t be enough.

“To get to the point where you can’t see pixels, I think some of the speculation is you need about 8K per eye [the Rift's screen is split in half] in our current field of view,” Oculus founder, Palmer Luckey, told Ars Technica. “And to get to the point where you couldn’t see any more improvements, you’d need several times that.”

He believes we can get to 8K per eye in the next decade. Televisions and mobile devices are the prime movers now, but depending on their success, VR systems may eventually be the motivation for developing the highest possible resolution screens.

Theoretically, how high? Recent research out of England shows the bleeding edge. Scientists there are developing flexible displays with pixels on the order of a few hundred nanometers across—150 times smaller than today’s.
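Luckey's figures can be sanity-checked with a quick pixels-per-degree calculation. The sketch below assumes a roughly 100-degree horizontal field of view per eye and the commonly cited figure of about 60 pixels per degree as the limit of 20/20 acuity; both numbers are assumptions for illustration, not from the article.

```python
# Back-of-the-envelope check of the "8K per eye" claim.
# Assumed: ~100 degree horizontal FOV per eye, ~60 px/deg acuity limit.

FOV_DEG = 100        # horizontal field of view per eye (assumption)
ACUITY_PPD = 60      # approx. 20/20 visual acuity in pixels per degree

panels = [
    ("Rift prototype, per eye", 2600 // 2),  # screen split between eyes
    ("4K per eye", 3840),
    ("8K per eye", 7680),
]

for name, width_px in panels:
    ppd = width_px / FOV_DEG
    verdict = "at/above" if ppd >= ACUITY_PPD else "below"
    print(f"{name}: {ppd:.0f} px/deg ({verdict} the ~{ACUITY_PPD} px/deg limit)")
```

On these assumptions, 8K per eye lands just past the acuity threshold, which is consistent with Luckey's estimate.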

 

Sound

Surround sound has been available for years in home entertainment systems. But immersive VR needs to move beyond basic directionality toward pinpoint accuracy in space. Further, sounds need to compensate and adapt for your movement.

This too is almost available, if not yet perfected. In Microsoft’s (recently shuttered) Silicon Valley lab, a research team combined head tracking technology like the Rift’s and a 3D-scanned physiological profile of a user’s head to deliver positional audio.

“Essentially we can predict how you will hear from the way you look,” Ivan Tashev, one of the researchers, told MIT Technology Review. “We work out the physical process of sound going around your head and reaching your ears.”

Sony is also working on positional audio for its virtual reality system (Project Morpheus). High-definition pinpoint sound using the same motion sensing and software tricks enabling the Rift, then, seems plausible in the near future.           

Touch, Taste, and Smell

Now, things get a little dicey. While we can imagine providing a sense of touch using jets of air, interactive body suits, or other peripherals—there isn’t anything yet that fulfills this particular requirement in a completely immersive way. Smell and taste may be just as difficult as touch to credibly recreate (sorry Smell-O-Vision and DigiScents fans).

Virtual Bodies

Transporting body parts into the virtual world for interaction is much closer. Groups are already working to adapt sensor-equipped devices like hand-held controllers, gloves, suits, and infrared 3D imaging systems (e.g., Kinect or Leap Motion) to link real and virtual bodies.

Unrestricted movement is a harder problem, though specialized treadmills or moving floors might allow us to walk the virtual world without running into a wall.

 

The Architects

As we develop the ability to walk through the door, we’ll need a place to visit. The earliest VR experiences have been bare-bones adaptations of video game worlds. Game developers are working to more completely adapt existing games for VR. And filmmakers are excited to try 360-degree filming for immersive moviemaking.

Meanwhile, Philip Rosedale, creator of Second Life, is developing a kind of sequel to Second Life for virtual reality’s next act. The software, called High Fidelity, will be compatible with a combination of body sensors and computer vision to reproduce gestures and facial expressions in a virtual body (or avatar) in a virtual world.

High Fidelity, like Second Life, will be open source all the way. That is, the world won’t be controlled from the top down but will instead blossom from the bottom up. Crowdsourced world building allows for otherwise impossible richness and complexity.

Computing Power

Anyone who’s ever been in Second Life knows rendering even a simple shared virtual world takes a fast internet connection and a powerful computer. High Fidelity has an interesting solution in mind for shared virtual worlds—instead of centralized servers, the job would be distributed across millions of user laptops and devices.

Distributed (super)computing, added to continued growth in processing power and faster fiber connections, could handle increasingly immersive, real-time worlds.

The Final Frontier

Stephen Wolfram says, “When there’s no reason something’s impossible, it ends up being possible.” We’ve been discussing external devices meant to fool the brain from the outside in—ultimately we may directly stimulate the brain itself.

As the understanding of our brains advances in tandem with the tech to influence them, perhaps we’ll learn to simulate thoughts, visions, and dreams Matrix-like.

The tantalizing tip of the iceberg? Scientists recently announced they’d successfully used EEG to record and transfer thoughts online between brains 5,000 miles apart.

The researchers involved in the project wrote, “We anticipate that computers in the not-so-distant future will interact directly with the human brain in a fluent manner, supporting both computer- and brain-to-brain communication routinely.”

Terence McKenna says this is the final frontier: “Our destiny is to become what we think, to have our thoughts become our bodies and our bodies become our thoughts.”

Image Credit: Shots of Awe/YouTube


Link: http://singularityhub.com

 

 


Virtualization and Private Clouds [448]

by System Administrator - Monday, 14 July 2014, 17:16
 

 

The distinction between a virtualized environment and a private cloud is getting less and less clear, and some say less important. Both virtualization and private clouds offer enterprises a range of benefits including cost savings, faster deployment, better use of IT infrastructure and reduced management, to name a few. Which technology an organization adopts depends upon its specific goals and characteristics, and often deploying virtualization leads to the development of a private cloud.

In this eGuide, CIO, along with sister publications Computerworld, InfoWorld and Network World, examines the distinctions between virtualization and private cloud, the benefits that both technologies offer, and how some companies are taking advantage of them.


Virtualization increases IP address consumption [1012]

by System Administrator - Sunday, 7 December 2014, 23:16
 

Three ways virtualization increases IP address consumption

by: Brien Posey

Virtual environments use at least twice as many IP addresses as physical ones because each virtual desktop and the endpoint used to access it need their own addresses. Luckily, IPAM tools can help you keep track of your addresses.

IP address consumption doubles when you deploy virtual desktops, so it's important that IP address management is on your radar.

When an organization begins working toward implementing VDI, it has a lot of things to consider: Is the storage connectivity fast enough? Do the host servers have enough memory? Will the end-user experience be acceptable?

These are all important questions, but one aspect of the preparation process that is sometimes overlooked is the effect that desktop virtualization will have on IP address consumption.

How virtualization consumes IP addresses

There are three primary ways that desktop virtualization affects IP address consumption. The first has to do with changes that you may need to make to your DHCP configuration.

Depending on how many virtual desktops you want to support, you may need to create additional DHCP scopes. You might even need to deploy some extra DHCP servers. This certainly isn't necessary in every situation, but it happens often enough to make it worth mentioning.

The second way IP address consumption becomes a factor is that the organization may suddenly consume far more IP addresses than it did prior to the desktop virtualization implementation. The reason for this is quite simple.

Consider an environment without virtual desktops. Each PC consumes an IP address, as do any backend servers. Shops implementing virtual desktops or VDI sometimes overlook the fact that desktop virtualization does not eliminate desktop hardware needs. Regardless of whether users connect via tablets, thin client devices or repurposed PCs, the endpoint consumes an IP address, and so does each virtual desktop.

This means that desktop virtualization effectively doubles IP address consumption on the client side. Each user consumes at least two IP addresses: The physical hardware uses one address and the virtual desktop uses another. There is no way to get around this requirement, so you must ensure that an adequate number of IP addresses are available to support virtual desktops and endpoints.

The third reason IP address consumption increases in a virtual desktop environment has to do with the way workers use virtual desktops. Employees can use virtual desktops on a wide variety of devices, such as PCs, smartphones and tablets. This gives workers the freedom to use the device that makes the most sense in a given situation. But IP address consumption does not mirror device use in real time.

When a device connects to the network, a DHCP server issues the device an IP address lease, but the lease isn't revoked when the device disconnects from the network. The lease remains in effect for a predetermined length of time, regardless of whether the device is still being used. As such, the IP address is only available to the device that leased it; it's not available for other devices to access during the lease period.

Desktop virtualization by its very nature leads to increased IP address consumption. The actual degree to which the IP addresses are consumed varies depending on device usage, however. From a desktop standpoint, you can expect the IP address consumption to double, but in organizations where workers use multiple devices, consumption can be even higher.
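That arithmetic is easy to fold into a quick capacity estimate: one address per endpoint device plus one per concurrent virtual desktop, with DHCP leases holding addresses even after a device disconnects. The sketch below is illustrative only; the user count and devices-per-user figure are assumptions.

```python
# Rough estimate of client-side IP demand after a VDI rollout.
# User count and devices-per-user are illustrative assumptions.

users = 500
desktops_per_user = 1     # concurrent virtual desktop sessions per user
devices_per_user = 1.5    # average endpoints per user (PC, tablet, phone)

desktop_ips = users * desktops_per_user
endpoint_ips = int(users * devices_per_user)
total_ips = desktop_ips + endpoint_ips

print(f"Virtual desktops: {desktop_ips} addresses")
print(f"Endpoints:        {endpoint_ips} addresses")
print(f"Total client IPs: {total_ips} (vs {users} before virtualization)")
# A /23 scope (510 usable addresses) that comfortably fit the old
# environment would be exhausted here, and long lease times stretch
# effective demand even further.
```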

How to protect the network against increased IP consumption

The first thing I recommend doing is implementing session limits. Remember, every virtual desktop that is powered up consumes an IP address. You can establish some degree of control over the IP address consumption by limiting the number of concurrent sessions that users are allowed to establish. If each user is only allowed to have one or two concurrent sessions, then you will consume fewer IP addresses (not to mention fewer host resources) than you would if each user could launch an unlimited number of virtual desktops.

I also recommend adopting an automated IP address management tool. There are a number of third-party options on the market. Windows Server 2012 and 2012 R2 also include IP address management software in the Microsoft IPAM feature.

Like any other form of resource consumption, IP address usage tends to evolve over time. To that end, it is extremely important to track IP address usage over the long term so you can project if or when your IP address pools are in danger of depletion.

An IP address management tool should also include an alerting mechanism that responds to situations where a DHCP pool runs low on addresses; the depletion of a DHCP scope can result in a service outage for some users. Using an automated software application to track scope usage is the best way to make sure that you are never caught off guard.


Link: http://searchvirtualdesktop.techtarget.com

 


VIRTUALLY ‘POSSESS’ ANOTHER PERSON’S BODY [604]

by System Administrator - Wednesday, 30 July 2014, 17:48
 

HOW TO VIRTUALLY ‘POSSESS’ ANOTHER PERSON’S BODY USING OCULUS RIFT AND KINECT

Written By: Jason Dorrier

Virtual reality can put you in another world—but what about another body? Yifei Chai, a student at the Imperial College London, is using the latest in virtual reality and 3D modeling hardware to virtually “possess” another person.

How does it work? One individual dons a head-mounted, twin-angle camera and attaches electrical stimulators to their body. Meanwhile, another person wears an Oculus Rift virtual reality headset streaming footage from their friend’s camera.

A Microsoft Kinect 3D sensor tracks the Rift wearer’s body. The system shocks the appropriate muscles to force the possessed person to lift or lower their arms. The result? The individual wearing the Rift looks down and sees another body, a body that moves when they move—giving the illusion of inhabiting another’s body.
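Conceptually, the loop is: read the Rift wearer's pose from the Kinect, map it to per-muscle activation levels, and drive the stimulators, repeating at the tracking rate. The sketch below is a hypothetical reconstruction of that loop; the class and function names, the channel mapping, and the 30 Hz rate are invented for illustration and are not Chai's actual code.

```python
# Hypothetical sketch of the possession control loop described above.
# StimulatorArray, read_kinect_pose(), and the channel mapping are
# invented for illustration; this is not Chai's implementation.

import time

ARM_CHANNELS = range(34)  # 34 arm/shoulder muscles, per the article

class StimulatorArray:
    """Stand-in for the electrical stimulator hardware."""
    def __init__(self):
        self.levels = {ch: 0.0 for ch in ARM_CHANNELS}

    def set_level(self, channel: int, level: float) -> None:
        self.levels[channel] = max(0.0, min(1.0, level))  # clamp drive level

def read_kinect_pose() -> dict:
    """Stand-in for Kinect skeletal tracking of the Rift wearer."""
    return {ch: 0.5 for ch in ARM_CHANNELS}  # target activation per muscle

stim = StimulatorArray()
for _ in range(90):                        # ~3 seconds of the loop at 30 Hz
    target = read_kinect_pose()            # Rift wearer's current pose
    for ch in ARM_CHANNELS:
        stim.set_level(ch, target[ch])     # drive the possessed person's arm
    time.sleep(1 / 30)                     # per-frame delay; this latency is
                                           # the action-reaction lag noted above
```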

The system is a rough prototype. There’s a noticeable delay between action and reaction, which lessens the illusion’s effectiveness (though it’s evidently still pretty spooky), and there’s a limit to how finely the possessor can control their friend.

Currently, Chai’s system stimulates 34 arm and shoulder muscles. He admits it’s gained a lot more attention than expected. Even so, he hopes to improve it with high-definition versions of the Oculus Rift and Kinect to detect subtler movements.

Beyond offering a fundamentally novel experience, Chai thinks virtual reality systems like his might be used to encourage empathy by literally putting us in someone else’s shoes. This is akin to donning an age simulation suit, which saddles youthful users with a range of age-related maladies from joint stiffness to impaired vision.

The idea is we’re more patient and understanding with people facing challenges we ourselves have experienced. A care worker, for example, might be less apt to become frustrated with a patient after experiencing their challenges firsthand.

Virtual reality might also prove a useful therapy—a way to safely experience uncomfortable situations to ease anxiety and build habits for real world interaction. Training away an extreme fear of public speaking, for example, might include a program of standing and addressing virtual audiences.

For all these applications, the more immersive and realistic, the better. However, not all of them necessarily require control of another person’s movements—and they might be just as effective (and simpler) using digital avatars instead of real people.

That said, I couldn’t watch the video without getting hypothetical.

Chai’s system only allows for the translation of coarse, delayed movement. But what if it could translate fine, detailed movement in real time? Such a futuristic system would be more than just a cool or therapeutic experience. It would be a way to transport skills anywhere, anytime at very nearly light speed.

Currently, a number of hospitals are using telepresence robots (basically a screen and camera on a robotic mount) to allow medical specialists to video chat live with patients, nurses, and doctors hundreds or thousands of miles away. This is a way to more efficiently spread expertise and talent through the system.

 

Now imagine having the ability to transport the hands of a top surgeon at the Mayo Clinic to a field hospital in Africa or a refugee camp in Lebanon. Geography would no longer limit those in need to the doctors nearby (often in short supply).

Virtual surgery could allow folks to volunteer their time without needing to travel to a war zone or move to a refugee camp full time.

But for such applications, it doesn’t make sense to use human surrogates. You’d need to embed stimulators body-wide to even approach decent control of a human. A robot, on the other hand, is designed from the ground up for external control.

And beyond medical applications, we could remotely control robotic surrogates in factories or on construction sites. Heck, in the event of alien invasion, maybe we’d even hook up to giant mechs to do battle on behalf of all humanity. But I digress.

Robots are still a long way from nimbly navigating the real world. And there are other difficult problems beyond mere movement and control. The Da Vinci surgical robot, for example, allows surgeons to perform surgery at a short distance, but it can’t yet translate fine touch sensations. Ideally, we’d translate movement, visuals, and sensation.

Will we control human or robot surrogates using virtual reality? Maybe not. The larger point, however, is the technology will likely find a broad range of applications beyond gaming and entertainment—many of which we’ve yet to fully imagine.

Image Credit: New Scientist/YouTube; BagoGames/Flickr

Link: http://singularityhub.com/2014/07/30/how-to-virtually-possess-another-persons-body-using-oculus-rift-and-kinect/


VisiCalc [389]

by System Administrator - Friday, 10 January 2014, 18:00
 

VisiCalc

VisiCalc was the first spreadsheet application available for personal computers. It is considered the application that turned the microcomputer from a hobby for computing enthusiasts into a serious business tool. More than 700,000 copies of VisiCalc were sold in six years.

Conceived by Dan Bricklin, refined by Bob Frankston, developed by their company Software Arts, and distributed in 1979 by Personal Software (later renamed VisiCorp) for the Apple II, it propelled the Apple from a hobbyist's toy to a much-desired financial tool. This probably motivated IBM to enter the PC market, which it had ignored until then.

According to Bricklin, he was watching a professor at Harvard Business School create a financial model on a blackboard. When the professor found an error or wanted to change a parameter, he had to tediously erase and rewrite a number of sequential entries in the table, leading Bricklin to realize that he could replicate the process on a computer with an 'electronic spreadsheet' that would show the results of the underlying formulas.

Source: http://es.wikipedia.org/wiki/VisiCalc


VISUAL MICROPHONE [741]

by System Administrator - Wednesday, 13 August 2014, 19:08
 

EAVESDROP ON CONVERSATIONS USING A BAG OF CHIPS WITH MIT’S ‘VISUAL MICROPHONE’

Written By: Jason Dorrier

MIT’s ‘visual microphone’ is the kind of tool you’d expect Q to develop for James Bond, or to be used by nefarious government snoops listening in on Jason Bourne. It’s like these things except for one crucial thing—this is the real deal.

Describing their work in a paper, researchers led by MIT engineering graduate student, Abe Davis, say they’ve learned to recover entire conversations and music by simply videoing and analyzing the vibrations of a bag of chips or a plant’s leaves.

The researchers use a high-speed camera to record items—a candy wrapper, a chip bag, or a plant—as they almost invisibly vibrate to voices in conversation or music or any other sound. Then, using an algorithm based on prior research, they analyze the motions of each item to reconstruct the sounds behind each vibration.

The result? Whatever you say next to that random bag of chips lying on the kitchen table can and will be held against you in a court of law. (Hypothetically.)

The technique is accurate to a tiny fraction of a pixel and can reconstruct sound based on how the edges of those pixels change in color due to sound vibration. It works equally well in the same room or at a distance through soundproof glass.
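A heavily simplified way to see the core idea is to treat the average brightness of an image patch, frame by frame, as a crude vibration signal and inspect its spectrum. The toy sketch below is not the MIT algorithm (which uses a multi-scale, phase-based decomposition of each frame); it only demonstrates the signal-recovery intuition, with the video replaced by a synthetic brightness series.

```python
# Toy illustration of the visual-microphone intuition: tiny brightness
# fluctuations in an image patch track the filmed object's vibration.
# NOT the MIT algorithm; the "video" here is a synthetic signal.

import numpy as np

fps = 2200  # high-speed capture rate, within the paper's 2,000-6,000 range

# Fake per-frame patch brightness: a 440 Hz vibration riding on a DC level.
t = np.arange(fps) / fps                               # one second of frames
patch_mean = 128 + 0.05 * np.sin(2 * np.pi * 440 * t)  # sub-pixel-scale wiggle

signal = patch_mean - patch_mean.mean()  # remove the DC offset

# The dominant frequency of the recovered signal should be ~440 Hz.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
print(f"Recovered dominant frequency: {freqs[spectrum.argmax()]:.0f} Hz")
```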

The results are impressive (check out the video below). The researchers use their algorithm to digitally reassemble the notes and words of “Mary Had a Little Lamb” with surprising fidelity, and later, the Queen song “Under Pressure” with enough detail to identify it using the mobile music recognition app, Shazam.

 

While the visual microphone is cool, it has limitations.

The group was able to make it work at a distance of about 15 feet, but they haven’t tested longer distances. And not all materials are created equal. Plastic bags, foam cups, and foil were best. Water and plants came next. The worst materials, bricks for example, were heavy and only poorly conveyed local vibrations.

Also, the camera matters. The best results were obtained from high-speed cameras capable of recording 2,000 to 6,000 frames per second (fps)—not the highest frame rate out there, but orders of magnitude higher than your typical smartphone.

Even so, the researchers were also able to reproduce intelligible sound using a special technique that exploits the way many standard cameras record video.

Your smartphone, for example, uses a rolling shutter. Instead of recording a frame all at once, it records it line by line, moving from side to side. This isn’t ideal for image quality, but the distortions it produces infer motion the MIT team’s algorithm can read.

The result is noisier than the sounds reconstructed using a high-speed camera. But theoretically, it lays the groundwork for reconstructing audio information, from a conversation to a song, using no more than a smartphone camera.

Primed by the news cycle, the mind is almost magnetically drawn to surveillance and privacy issues. And of course the technology could be used for both good and evil by law enforcement, intelligence agencies, or criminal organizations.

However, though the MIT method is passive, the result isn’t necessarily so different from current techniques. Surveillance organizations can already point a laser at an item in a room and infer sounds based on how the light scatters or how its phase changes.

And beyond surveillance and intelligence, Davis thinks it will prove useful as a way to visually analyze the composition of materials or the acoustics of a concert hall. And of course, the most amazing applications are the ones we can’t imagine.

None of this would be remotely possible without modern computing. The world is full of information encoded in the unseen. We’ve extended our vision across the spectrum, from atoms to remote galaxies. Now, technology is enabling us to see sound.

What other hidden information will we one day mine with a few clever algorithms?

Image Credit: MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)/YouTube


Link: http://singularityhub.com

