References


Full references for the vocabulary, events, chronicles, evidence, and other content used in KW Foundation projects related to biotechnology and neuroscience.


V


Virtualization increases IP address consumption [1012]

by System Administrator - Sunday, December 7, 2014, 23:16
 

Three ways virtualization increases IP address consumption

by: Brien Posey

Virtual environments use at least twice as many IP addresses as physical ones, because each virtual desktop and the endpoint used to access it each need their own address. Luckily, IPAM tools can help you keep track of your addresses.

IP address consumption doubles when you deploy virtual desktops, so it's important that IP address management is on your radar.

When an organization begins working toward implementing VDI, it has a lot of things to consider: Is the storage connectivity fast enough? Do the host servers have enough memory? Will the end-user experience be acceptable?

These are all important questions, but one aspect of the preparation process that is sometimes overlooked is the effect that desktop virtualization will have on IP address consumption.

How virtualization consumes IP addresses

There are three primary ways that desktop virtualization affects IP address consumption. The first has to do with changes that you may need to make to your DHCP configuration.

Depending on how many virtual desktops you want to support, you may need to create additional DHCP scopes. You might even need to deploy some extra DHCP servers. This certainly isn't necessary in every situation, but it happens often enough to make it worth mentioning.

The second way IP address consumption becomes a factor is that the organization may suddenly consume far more IP addresses than it did prior to the desktop virtualization implementation. The reason for this is quite simple.

Consider an environment without virtual desktops. Each PC consumes an IP address, as do any backend servers. Shops implementing virtual desktops or VDI sometimes overlook the fact that desktop virtualization does not eliminate desktop hardware needs. Regardless of whether users connect via tablets, thin client devices or repurposed PCs, the endpoint consumes an IP address, and so does each virtual desktop.

This means that desktop virtualization effectively doubles IP address consumption on the client side. Each user consumes at least two IP addresses: The physical hardware uses one address and the virtual desktop uses another. There is no way to get around this requirement, so you must ensure that an adequate number of IP addresses are available to support virtual desktops and endpoints.

The third reason IP address consumption increases in a virtual desktop environment has to do with the way workers use virtual desktops. Employees can use virtual desktops on a wide variety of devices, such as PCs, smartphones and tablets. This gives workers the freedom to use the device that makes the most sense in a given situation. But IP address consumption does not mirror device use in real time.

When a device connects to the network, a DHCP server issues the device an IP address lease, but the lease isn't revoked when the device disconnects from the network. The lease remains in effect for a predetermined length of time, regardless of whether the device is still being used. As such, the IP address is only available to the device that leased it; it's not available for other devices to access during the lease period.

Desktop virtualization by its very nature leads to increased IP address consumption. The actual degree to which the IP addresses are consumed varies depending on device usage, however. From a desktop standpoint, you can expect the IP address consumption to double, but in organizations where workers use multiple devices, consumption can be even higher.
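Those multipliers can be put into a back-of-the-envelope calculator. The sketch below is not from the article; the 20% headroom factor and the parameter defaults are assumptions meant to stand in for leases that linger after workers hop between devices.

```python
import math

def ip_addresses_needed(users, endpoints_per_user=1, sessions_per_user=1,
                        backend_servers=0, headroom=0.2):
    """Back-of-the-envelope estimate: every endpoint and every concurrent
    virtual desktop session holds its own DHCP lease, plus headroom for
    leases that outlive disconnected devices."""
    base = users * (endpoints_per_user + sessions_per_user) + backend_servers
    return math.ceil(base * (1 + headroom))

# 500 users, one endpoint and one virtual desktop each:
print(ip_addresses_needed(500))  # 1200
# The same users on two devices with two concurrent sessions each:
print(ip_addresses_needed(500, endpoints_per_user=2, sessions_per_user=2))  # 2400
```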

How to protect the network against increased IP consumption

The first thing I recommend doing is implementing session limits. Remember, every virtual desktop that is powered up consumes an IP address. You can establish some degree of control over the IP address consumption by limiting the number of concurrent sessions that users are allowed to establish. If each user is only allowed to have one or two concurrent sessions, then you will consume fewer IP addresses (not to mention fewer host resources) than you would if each user could launch an unlimited number of virtual desktops.

I also recommend adopting an automated IP address management tool. There are a number of third-party options on the market, and Windows Server 2012 and 2012 R2 also include IP address management software through the built-in Microsoft IPAM feature.

Like any other form of resource consumption, IP address usage tends to evolve over time. To that end, it is extremely important to track IP address usage over the long term so you can project if or when your IP address pools are in danger of depletion.

An IP address management tool should also include an alerting mechanism that responds to situations where a DHCP pool runs low on addresses; the depletion of a DHCP scope can result in a service outage for some users. Using an automated software application to track scope usage is the best way to make sure that you are never caught off guard.
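A minimal version of that alerting logic can be sketched in a few lines. The scope names, pool sizes, and the 85% threshold below are invented for illustration; a real tool would pull utilization from the DHCP server or an IPAM database rather than a hard-coded dict.

```python
def scope_alerts(scopes, threshold=0.85):
    """Return the names of DHCP scopes whose lease utilization has crossed
    the alert threshold. `scopes` maps name -> (leased, pool_size)."""
    return [name for name, (leased, size) in scopes.items()
            if leased / size >= threshold]

scopes = {
    "vdi-desktops": (480, 512),   # nearly exhausted: should alert
    "endpoints":    (200, 512),   # plenty of headroom
}
print(scope_alerts(scopes))  # ['vdi-desktops']
```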

More: 

Link: http://searchvirtualdesktop.techtarget.com

 


VIRTUALLY ‘POSSESS’ ANOTHER PERSON’S BODY [604]

by System Administrator - Wednesday, July 30, 2014, 17:48
 

HOW TO VIRTUALLY ‘POSSESS’ ANOTHER PERSON’S BODY USING OCULUS RIFT AND KINECT

Written By: Jason Dorrier

Virtual reality can put you in another world—but what about another body? Yifei Chai, a student at the Imperial College London, is using the latest in virtual reality and 3D modeling hardware to virtually “possess” another person.

How does it work? One individual dons a head-mounted, twin-angle camera and attaches electrical stimulators to their body. Meanwhile, another person wears an Oculus Rift virtual reality headset streaming footage from their friend's camera/view.

A Microsoft Kinect 3D sensor tracks the Rift wearer’s body. The system shocks the appropriate muscles to force the possessed person to lift or lower their arms. The result? The individual wearing the Rift looks down and sees another body, a body that moves when they move—giving the illusion of inhabiting another’s body.

The system is a rough prototype. There’s a noticeable delay between action and reaction, which lessens the illusion’s effectiveness (though it’s evidently still pretty spooky), and there’s a limit to how finely the possessor can control their friend.

Currently, Chai’s system stimulates 34 arm and shoulder muscles. He admits it’s gained a lot more attention than expected. Even so, he hopes to improve it with high-definition versions of the Oculus Rift and Kinect to detect subtler movements.

Beyond offering a fundamentally novel experience, Chai thinks virtual reality systems like his might be used to encourage empathy by literally putting us in someone else’s shoes. This is akin to donning an age simulation suit, which saddles youthful users with a range of age-related maladies from joint stiffness to impaired vision.

The idea is we’re more patient and understanding with people facing challenges we ourselves have experienced. A care worker, for example, might be less apt to become frustrated with a patient after experiencing their challenges firsthand.

Virtual reality might also prove a useful therapy—a way to safely experience uncomfortable situations to ease anxiety and build habits for real world interaction. Training away an extreme fear of public speaking, for example, might include a program of standing and addressing virtual audiences.

For all these applications, the more immersive and realistic, the better. However, not all of them necessarily require control of another person’s movements—and they might be just as effective (and simpler) using digital avatars instead of real people.

That said, I couldn’t watch the video without getting hypothetical.

Chai’s system only allows for the translation of coarse, delayed movement. But what if it could translate fine, detailed movement in real time? Such a futuristic system would be more than just a cool or therapeutic experience. It would be a way to transport skills anywhere, anytime at very nearly light speed.

Currently, a number of hospitals are using telepresence robots (basically a screen and camera on a robotic mount) to allow medical specialists to video chat live with patients, nurses, and doctors hundreds or thousands of miles away. This is a way to more efficiently spread expertise and talent through the system.

 

Now imagine having the ability to transport the hands of a top surgeon at the Mayo Clinic to a field hospital in Africa or a refugee camp in Lebanon. Geography would no longer limit those in need to the doctors nearby (often in short supply).

Virtual surgery could allow folks to volunteer their time without needing to travel to a war zone or move to a refugee camp full time.

But for such applications, it doesn’t make sense to use human surrogates. You’d need to embed stimulators body-wide to even approach decent control of a human. A robot, on the other hand, is designed from the ground up for external control.

And beyond medical applications, we could remotely control robotic surrogates in factories or on construction sites. Heck, in the event of alien invasion, maybe we’d even hook up to giant mechs to do battle on behalf of all humanity. But I digress.

Robots are still a long way from nimbly navigating the real world. And there are other difficult problems beyond mere movement and control. The Da Vinci surgical robot, for example, allows surgeons to perform surgery at a short distance, but it can’t yet translate fine touch sensations. Ideally, we’d translate movement, visuals, and sensation.

Will we control human or robot surrogates using virtual reality? Maybe not. The larger point, however, is the technology will likely find a broad range of applications beyond gaming and entertainment—many of which we’ve yet to fully imagine.

Image Credit: New Scientist/YouTube; BagoGames/Flickr

Link: http://singularityhub.com/2014/07/30/how-to-virtually-possess-another-persons-body-using-oculus-rift-and-kinect/


VisiCalc [389]

by System Administrator - Friday, January 10, 2014, 18:00
 

VisiCalc

VisiCalc was the first spreadsheet application available for personal computers. It is considered the application that turned the microcomputer from a hobby for computing enthusiasts into a serious business tool. More than 700,000 copies of VisiCalc were sold in six years.

Conceived by Dan Bricklin, refined by Bob Frankston, developed by their company Software Arts, and distributed in 1979 by Personal Software (later renamed VisiCorp) for the Apple II computer, it propelled the Apple from a hobbyist's toy to a highly sought-after financial tool. This probably motivated IBM to enter the PC market, which it had ignored until then.

According to Bricklin, he was watching one of his professors at Harvard Business School build a financial model on a blackboard. Whenever the professor found an error or wanted to change a parameter, he had to tediously erase and rewrite a series of sequential entries in the table. This led Bricklin to realize that he could replicate the process on a computer, using an 'electronic spreadsheet' to view the results of the underlying formulas.
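That 'electronic spreadsheet' insight (change one number and every dependent formula recomputes) can be sketched in a few lines of Python; the cell layout and the `evaluate` helper are illustrative toys, not VisiCalc's actual design.

```python
def evaluate(sheet):
    """Recalculate every cell. Formula cells are functions that look up
    other cells through `get`; results are memoized so chained formulas
    are each computed once."""
    cache = {}
    def get(cell):
        if cell not in cache:
            v = sheet[cell]
            cache[cell] = v(get) if callable(v) else v
        return cache[cell]
    for cell in sheet:
        get(cell)
    return cache

sheet = {
    "A1": 100, "A2": 250,
    "A3": lambda c: c("A1") + c("A2"),  # =A1+A2
    "A4": lambda c: c("A3") * 2,        # =A3*2
}
print(evaluate(sheet)["A4"])  # 700
sheet["A1"] = 120  # the professor's edit: change one number...
print(evaluate(sheet)["A4"])  # 740 -- every dependent formula updates
```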

Source: http://es.wikipedia.org/wiki/VisiCalc


VISUAL MICROPHONE [741]

by System Administrator - Wednesday, August 13, 2014, 19:08
 

EAVESDROP ON CONVERSATIONS USING A BAG OF CHIPS WITH MIT’S ‘VISUAL MICROPHONE’

Written By: Jason Dorrier

MIT’s ‘visual microphone’ is the kind of tool you’d expect Q to develop for James Bond, or to be used by nefarious government snoops listening in on Jason Bourne. It’s like these things except for one crucial thing—this is the real deal.

Describing their work in a paper, researchers led by MIT engineering graduate student Abe Davis say they've learned to recover entire conversations and music simply by videoing and analyzing the vibrations of a bag of chips or a plant's leaves.

The researchers use a high-speed camera to record items—a candy wrapper, a chip bag, or a plant—as they almost invisibly vibrate to voices in conversation or music or any other sound. Then, using an algorithm based on prior research, they analyze the motions of each item to reconstruct the sounds behind each vibration.

The result? Whatever you say next to that random bag of chips lying on the kitchen table can and will be held against you in a court of law. (Hypothetically.)

The technique is accurate to a tiny fraction of a pixel and can reconstruct sound based on how the edges of those pixels change in color due to sound vibration. It works equally well in the same room or at a distance through soundproof glass.
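The pipeline can be caricatured in a few lines of Python. The sketch below is a deliberately crude stand-in for the paper's sub-pixel motion analysis: it treats the average brightness of a small patch as the vibration signal and recovers a tone from synthetic frames. The 440 Hz tone, the 8x8 patch, and the 2,000 fps rate are invented for illustration.

```python
import numpy as np

def recover_signal(frames):
    """Crude stand-in for the paper's sub-pixel analysis: use the mean
    brightness of each frame as the vibration signal, minus its DC offset."""
    sig = frames.reshape(len(frames), -1).mean(axis=1)
    return sig - sig.mean()

# Simulate a 2,000 fps camera watching an 8x8 patch whose brightness is
# faintly modulated by a 440 Hz tone.
fps, n = 2000, 2000
t = np.arange(n) / fps
frames = 0.5 + 1e-3 * np.sin(2 * np.pi * 440 * t)[:, None, None] * np.ones((n, 8, 8))

sig = recover_signal(frames)
freqs = np.fft.rfftfreq(n, d=1 / fps)
peak = freqs[np.argmax(np.abs(np.fft.rfft(sig)))]
print(int(round(peak)))  # 440  (the hidden tone, recovered from video alone)
```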

The results are impressive (check out the video below). The researchers use their algorithm to digitally reassemble the notes and words of “Mary Had a Little Lamb” with surprising fidelity, and later, the Queen song “Under Pressure” with enough detail to identify it using the mobile music recognition app, Shazam.

 

While the visual microphone is cool, it has limitations.

The group was able to make it work at a distance of about 15 feet, but they haven’t tested longer distances. And not all materials are created equal. Plastic bags, foam cups, and foil were best. Water and plants came next. The worst materials, bricks for example, were heavy and only poorly conveyed local vibrations.

Also, the camera matters. The best results were obtained from high-speed cameras capable of recording 2,000 to 6,000 frames per second (fps)—not the highest frame rate out there, but orders of magnitude higher than your typical smartphone.

Even so, the researchers were also able to reproduce intelligible sound using a special technique that exploits the way many standard cameras record video.

Your smartphone, for example, uses a rolling shutter. Instead of recording a frame all at once, it records it line by line. This isn't ideal for image quality, but the distortions it produces encode motion that the MIT team's algorithm can read.

The result is noisier than sound reconstructed using a high-speed camera. But in principle, it lays the groundwork for reconstructing audio, from a conversation to a song, using no more than a smartphone camera.

Primed by the news cycle, the mind is almost magnetically drawn to surveillance and privacy issues. And of course the technology could be used for both good and evil by law enforcement, intelligence agencies, or criminal organizations.

However, though the MIT method is passive, the result isn’t necessarily so different from current techniques. Surveillance organizations can already point a laser at an item in a room and infer sounds based on how the light scatters or how its phase changes.

And beyond surveillance and intelligence, Davis thinks it will prove useful as a way to visually analyze the composition of materials or the acoustics of a concert hall. And of course, the most amazing applications are the ones we can’t imagine.

None of this would be remotely possible without modern computing. The world is full of information encoded in the unseen. We’ve extended our vision across the spectrum, from atoms to remote galaxies. Now, technology is enabling us to see sound.

What other hidden information will we one day mine with a few clever algorithms?

Image Credit: MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)/YouTube


Link: http://singularityhub.com


Visualization [529]

by System Administrator - Saturday, July 12, 2014, 17:43
 

The keys to visualization

Link: http://hipercognicion.blogspot.com/2010/07/visualizacion.html

By recreating scenes in the mind, visualization is one of the most powerful instruments we have for programming our future. Our present-day world has become a logocratic one, in which words seem to claim all the attention and images have been relegated to the world of dreams. It is true that words can stimulate sensations and mental responses, but it is even more true that images have an enormous capacity to shape our present and our future, acting as guides.
 
Without being aware of it, many people use visualization when, for example, they imagine how they will enjoy their vacation, how terrible it would be to have an accident, or what it would feel like to practice an extreme sport. But those spontaneous flashes are trifles compared with active, conscious visualization. How to work on visualization has already been covered at length in other entries (Self-hypnosis, Behavioral Guide, Sensations). Done properly, these images take on a character similar to memories. The subject acts as the protagonist of a given action, accompanied by positive or negative sensations depending on the images. It is these sensations that will guide our future: our behavior will be marked by them, avoiding a return to the negative sensations experienced during visualization while seeking out the positive experiences lived through it.

VOICE OVER IP THAT BOOSTS BUSINESS EFFICIENCY: TEN TIPS FOR GETTING IT RIGHT [1039]

by System Administrator - Monday, January 5, 2015, 19:37
 

VOICE OVER IP THAT BOOSTS BUSINESS EFFICIENCY

TEN TIPS FOR GETTING IT RIGHT

Business phone service with VoIP is the new face of advanced technology, providing the same telecommunication services, but at affordable rates. Internet telephony is being adopted by businesses worldwide due to its more cost-effective approach and enhanced services compared with traditional telephones. It also facilitates new communication platforms such as video and web conferencing, voice to email messaging and more, in addition to traditional simple calls. 

Selecting the ideal vendor based on your inbound and outbound call requirements is one of the most important decisions you will make for your business. In this guide, you’ll find ten tips for comparing top vendors and selecting a quality, affordable VoIP solution. But first, what are some of the advantages of VoIP for business?

Please read the attached whitepaper.


Vulnerability Management Solution [749]

by System Administrator - Thursday, August 14, 2014, 15:12
 

8 Best Practices for Selecting a Vulnerability Management Solution

The process of selecting a Vulnerability Management (VM) solution for your organization can be a tedious and difficult process. Selecting the right security solution for your business is incredibly important because not all solutions do what is required by your organization. However, there are ways to make this process easier. Download this whitepaper to learn about the 8 best practices for selecting a Vulnerability Management solution and see how you can save time and receive the exact VM solution that your company needs.

Please read the attached whitepaper.

W


Warp [323]

by System Administrator - Friday, January 3, 2014, 19:06
 

 

Warp drive (also called curvature propulsion, deformation drive, or distortion drive) is a theoretical form of faster-than-light propulsion. It would allow a spacecraft to travel at a speed equivalent to several multiples of the speed of light while avoiding the problems associated with relativistic time dilation. This type of propulsion is based on curving or distorting spacetime in a way that lets the ship 'approach' its destination. Warp drive does not allow, and cannot produce, instantaneous travel between two points at infinite speed, as has been suggested in some works of science fiction that rely on imaginary technologies such as the hyperdrive or the jump drive. One difference between warp propulsion and the use of hyperspace is that with warp propulsion the ship does not enter a different universe (or dimension): a small 'bubble' (the warp bubble) is simply created around the ship in spacetime, and spacetime distortions are generated so that the bubble 'moves away' from the point of origin and 'approaches' its destination. These distortions would consist of an expansion behind the bubble (pushing it away from the origin) and a contraction in front of it (drawing it toward the destination). The warp bubble would ride one of these spacetime distortions much as a surfer rides an ocean wave.

The use of spatial curvature as a means of transport has been the subject of theoretical treatment by several physicists (such as Miguel Alcubierre with his Alcubierre metric, and Chris Van Den Broeck).

Warp drive is famous as the method of travel used in the fictional Star Trek universe.

Feasibility of warp propulsion

Among the theoretical physicists who have analyzed this form of propulsion, there is no common design or hypothesis that defines a solid theory for travel by spacetime curvature. The best known of these designs is the Alcubierre drive ('The warp drive: hyper-fast travel within general relativity', published in 1994), which adopts a term from Star Trek jargon: the warp factor, a measure of the curvature (deformation) of spacetime that allows an object to travel faster than light thanks to the generated curvature. If spacetime is curved appropriately then, strictly speaking, the object or ship does not move at luminal speeds; in fact, it remains stationary within the interior space of the warp bubble. Because the ship is stationary inside the bubble, the crew would not be subject to large accelerations or decelerations, nor would time pass differently for them; that is, they would not experience time dilation, as they would if traveling through spacetime at speeds close to that of light. To an outside observer, a ship engaging its warp drive would seem to move faster than light and would disappear from view in a brief interval as the ship's spacetime expanded relative to that observer.
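Alcubierre's paper makes the 'stationary ship, moving spacetime' picture precise. In the standard form of his 1994 metric, the line element is

```latex
ds^2 = -dt^2 + \left[\,dx - v_s(t)\, f(r_s)\, dt\,\right]^2 + dy^2 + dz^2
```

where x_s(t) is the trajectory of the bubble's center, v_s(t) = dx_s/dt its coordinate velocity, r_s the distance from that center, and f(r_s) a shaping function equal to 1 inside the bubble and falling to 0 far from it. Inside the bubble (f = 1) the ship rides along with the local spacetime while remaining at rest with respect to it, which is exactly why the crew feels no acceleration and suffers no time dilation.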

Miguel Alcubierre points out that exotic matter (also called strange matter) is needed for a warp drive. The existence of exotic matter is not purely theoretical; the Casimir effect suggests that such matter exists. However, generating and sustaining exotic matter in order to produce a warp drive (or to hold open the 'throat' of a wormhole) is impracticable. Some methods or theories for creating and sustaining exotic matter imply that it would have to move locally faster than light (and imply the existence of so-called tachyons). Other theories suggest that this faster-than-light motion could be avoided, but at the cost of generating a naked singularity at the front of the warp bubble. By either route, creating and sustaining exotic matter, and the use of warp drives in general, violates, a priori, various energy conditions of quantum field theory. Alcubierre concluded that generating a warp bubble was unfeasible because, according to his initial calculations, creating it (and the required spacetime distortions) would take more energy than exists in the universe.

A later analysis by Dr. Van Den Broeck ('On the (im)possibility of warp bubbles', published in 1999), of the Catholic University of Leuven (Belgium), arrived at an energy requirement lower than Alcubierre's initial figure (reduced by a factor of 10^61). However, as Van Den Broeck himself noted, this does not make the proposal realistic: he calculated that transporting a few atoms would require energy equivalent to just under three solar masses.

[Videos: Black Holes | Black Holes - Hawking Radiation]

Nevertheless, a 2008 study by Richard K. Obousy and Gerald Cleaver of Baylor University (Texas), which examined the effects of a spacetime with extra dimensions (as predicted by string theory), lowered the energy needed to move a 1,000 m³ ship at superluminal speeds to 'only' 10^45 J (equivalent to the energy contained in the mass of Jupiter).

The same study estimates a theoretical maximum speed for a warp engine of 10^32 c, although this limit would be useless in practice, since reaching such an arbitrarily high speed would require more energy than is available in the universe.

As of the early 21st century, building a warp engine is far from becoming a reality, owing both to the state of technology and to the enormous energy its development would require. There also appear to be other theoretical obstacles to superluminal travel with this technology, such as the quantum instability of the bubble and Hawking radiation. However, there are no theoretical arguments against subluminal warp travel.

Source: http://es.wikipedia.org/wiki/Warp


Watch MIT’s Breakthrough 3D Printer Pour Molten Glass Like Honey [1405]

by System Administrator - Thursday, September 10, 2015, 23:07
 

Watch MIT’s Breakthrough 3D Printer Pour Molten Glass Like Honey

By Jason Dorrier

Glass and visions of the future go hand in hand. Towering skylines of glass and steel evoke a sense of progress like nothing else. And yet, the technology itself is ancient, and how we work glass is rooted firmly in the past.

We’ve automated glassmaking for basic items like bottles or windows, but creating beautiful and complicated glass shapes still requires the human touch and a lot of skill.

Machines, however, may soon rival or surpass even artisans.

To date, glass and 3D printers haven't mixed well. But MIT's Mediated Matter group recently unveiled a custom 3D printer able to make high quality glass products that are transparent to light—a feat other 3D printing techniques have yet to equal.

The technique recalls some of the earliest glassmaking methods, when artisans would coil molten glass around a sand core to form shapes. The difference, of course, is that MIT's printer automates the physical process and liberates the imagination with digital design.

Watching the machine work is more than a little enchanting.

This isn't the first 3D printer to print glass.

Methods used on materials with high melting points (e.g., metal or ceramics) also work with glass. However, products made in this manner are fragile and opaque. This is where MIT's printer excels. As described in a recent paper, the printer can print transparent glass with properties akin to conventional glass.

The key? It's all about temperature. While more conventional techniques fuse glass powder on a print bed, the MIT team figured out how to print with molten glass from start to finish.

To achieve this, they split the printer into temperature-controlled sections. At the top, a kiln and crucible are kept at a piping 1,800+ degrees Fahrenheit. The print space below is kept just above the melting point of glass. A heated aluminum oxide nozzle lets the glass travel from crucible to print space without sticking to the sides. Once complete, the final product is slowly cooled to prevent the glass from cracking (a process known as annealing).

Like other 3D printers, the process is automated and directed by design software. It currently takes soda lime glass (a common form) but could potentially make other kinds of glass at different temperatures.

In the paper, the team says challenges yet to be fully ironed out—the machine is still a work in progress—include how to "automatically start, stop, and cut glass filament." Solutions may include adding automated torching, compressed air, cutting shears, or a high temperature valve. With increasingly fine control, they'll be able to make more "intricate cross-sections and internal structures."

Also, although the printer can lay down glass with sub-millimeter precision (a conservative tolerance of 0.5 mm), the actual resolution of the product is limited by the width of the glass coil. (That is, instead of making a continuous surface, you still see layers of glass.)

But as the team further perfects the printer—what might they make?

MIT Mediated Matter's Neri Oxman, one of the paper's authors, sits at the nexus of engineering, research, and design. We've written about her weirdly beautiful 3D printed art in the past. When it comes to glass, she's imagining an intimate marriage of form and function from small-scale microfluidics to architectural work on grand scales.

“Could we design an all-glass building with internal channels and networks for airflow and water circulation?” Oxman asked Wired. “Can we surpass the great modern tradition of discrete formal and functional partitions and generate an all-in-one building skin?”

Starting from digital designs modeled to the finest detail, future architects might skin buildings in glass optimized for various lighting effects, glass that works more seamlessly with the environment, or even glass that transmits data over an entire building's surface by printing fiber optics into the glass itself.

Though such scenarios are still firmly in the future, Oxman says the team is eyeing these larger goals. And as methods for 3D printing diversify and mature, the possibilities at the frontier will only expand.

Image Credit: MIT Mediated Matter/Steven Keating


Watch These Drones Build a Rope Bridge—and Intrepid Researchers Walk Over It [1460]

by System Administrator - Saturday, September 26, 2015, 12:21
 

Watch These Drones Build a Rope Bridge—and Intrepid Researchers Walk Over It

By Jason Dorrier

Earlier this year, we wrote about a project to 3D print a bridge in Amsterdam. Said printer will move along a set of (self-printed) tracks, leaving a fully formed bridge in its wake. Now, machine bridge making is taking to the air—and swapping drones for 3D printers.

Dubbed the Aerial Construction project, ETH Zurich researchers equipped several quadcopters with spools of lightweight (but strong) rope, set up two scaffold anchors, measured and set their locations, and hit 'go'. Using knots, links, and braiding, the swooping drones wove the rope into a 7.4-meter bridge.

And then the researchers walked over it. (Happily, it held firm.)

The work adds to other experiments aimed at automating the building of large structures. A number of groups, for example, have been working to 3D print whole houses for years. Others are building bricklaying robots. And an old UPenn project used quadcopters to lift and place magnetically bound struts into a structure.

In this case, the researchers decided rope was a good building material because, at least for now, drones have a strict weight limit. Also, lifting and placing heavy, modular building materials requires greater precision. And of course, drones can work over empty space without supporting structures.

The dream scenario? Some intrepid scientist or National Geographic photographer is hacking through the jungle and encounters a deep chasm with no way to cross. Our explorer pops out a few handy bridge-drones, and away they go. (And undoubtedly there are many other cool applications.)

The reality is a little different.

 

ETH Zurich researcher tests drone-built rope bridge.

In this case, the drones use carefully placed, uniform steel scaffolding for anchors, not trees of unknown strength, growing at wild angles. Also, there’s the matter of positioning. The drones require a precise indoor tracking system so they can locate themselves in space and avoid crashing into their fellow flying workers.

And then there's the ever-present power problem. Drones have limited flight time due to battery limitations. The length and complexity of your bridge is tied to just how long the machines can stay airborne. And of course, if you're exploring the grey areas on the map, finding a charge will require sunshine at least.

But if the researchers can solve a few logistical problems, they can move outdoors. And with a few good anchors, hypothetically they could make jungle rope bridges.

Just not quite yet.

Image Credit: ETH Zurich

 
