Physician, biochemist, writer, urban consultant, citizen educator, science communicator, social innovator, designer of healthy cities, world athletics champion (JMS, Canary Islands).
References
Full references for the vocabulary, events, chronicles, evidence and other content used in KW Foundation projects related to biotechnology and neuroscience.
Arrhythmia
A cardiac rhythm disorder, or cardiac arrhythmia, is an alteration in the succession of heartbeats. It may be due to changes in heart rate, whether faster or slower (tachycardia or bradycardia); these rhythms are not necessarily irregular, just faster or slower. Very often, however, an arrhythmia involves an irregular rhythm, which occurs when there are anomalies in the heart's physiological pacemaker (the sinus node) or in the heart's conduction system, or when abnormal (ectopic) pacemaker zones appear.
"Bradyarrhythmias", or slow cardiac rhythm disorders, result from inadequate impulse generation in the sinus node or from a block in impulse propagation, and can cause loss of consciousness.
"Tachyarrhythmias", or accelerated cardiac rhythm disorders, may be of atrial origin; in that case they may still allow adequate cardiac output and are less dangerous than sustained ventricular arrhythmias, which more often cause collapse or death.
The Art of War
The Art of War is a book on military tactics and strategy, written by Sun Tzu, a famous Chinese military strategist.
The author of The Art of War was Sun Zi, or Sun Wu by his official name. Traditional accounts held that his descendant Sun Bin (孙膑/sūn bìn) also wrote a treatise on military tactics, titled Sun Bin's Art of War (孙膑兵法/sūn bìn bīngfǎ). Both Sun Zi and Sun Bin are referred to as Sun Zi in classical Chinese writings, and some historians believed the two were the same person; however, thanks to the discovery of several bamboo scrolls unearthed in 1972 at Silver Sparrow Mountain (Yinqueshan), in the city of Linyi, Shandong, it was confirmed that Sun Bin wrote his own Art of War.
The text is believed to have been written toward the last third of the 4th century BC. It became known in Europe at the end of the 18th century; its first appearance was the 1772 French edition published in Paris by the Jesuit Jean Joseph-Marie Amiot, under the title Art Militaire des Chinois.
It was, and continues to be, studied by military strategists who have led armies, but it has also been of great help to every warrior who has set out on the Way.
The Art of War is one of the oldest books ever written and the first known attempt at recording lessons of war. It is still frequently used today, because its teachings can be applied to many other areas where conflict is involved.
Artificial Intelligence Heading 2014 
Where is Artificial Intelligence Heading?
Between Google's January £400 million purchase of DeepMind and IBM's recent competition to find new uses for supercomputer Watson, the media spotlight seems to be gradually homing in on Artificial Intelligence (AI). We speak to professional insiders to find out if 2014 really is the year for AI.
"I have a list of things I expect people to do with Watson, but by unleashing it to people in Brazil and Africa and China, as well as Silicon Valley, who knows what they'll come up with. That's the intrigue behind having a contest," said Jerry Cuomo, IBM fellow and CTO for WebSphere about the IBM Watson Mobile Developer Challenge, which invites software developers to produce apps that make use of Watson's resources.
This certainly opens up a lot of scope for progression in Artificial Intelligence, especially when you consider the increased emphasis on machine learning and robotics from companies like Google, which has been gradually acquiring organisations in this space. In December there was Boston Dynamics, in January there was UK startup DeepMind, and then there were all those smaller deals like DNNresearch along with seven robotics companies at the tail end of 2013.
So where is Artificial Intelligence likely to go in the near term, medium term and long term?
Neil Lawrence, Professor of Machine Learning at the University of Sheffield who works with colleagues on DeepMind and Google says: “The investments we are seeing [by big companies] are very large because there is a shortage of expertise in this area. In the UK we are lucky to have some leading international groups, however the number of true experts in the UK still numbers in the tens rather than the hundreds.”
“The DeepMind purchase reflects this,” he continues. “Their staff was made up in large part by recent PhD graduates from some of these leading groups. Although even in this context the 400 million dollar price tag still seems extraordinary to many in the field. The year 2014 is not the year in which these developments happened, but it may be the year in which they've begun to impinge upon the public consciousness.”
“I think 2014 is the year where we see an increased use in AI,” agrees Lawrence Flynn, CEO of natural language interaction specialists Artificial Solutions. “But it will take time for AI implementations such as Watson and our own Teneo Network of Knowledge to become widely established [and] for AI to become commonplace.”
Dr Ben Medlock, Chief Technology Officer at SwiftKey, a smart text prediction software company clarifies: “I think AI will become increasingly visible in 2014 as the foundation of a new range of applications and products. We're excited about the potential of AI technology to make interaction with devices more personal, engaging and ‘human’. However, investment in such technologies is a long term commitment, and we're still far from reaching our full potential in this area. We should expect progress to continue well into the next decade and beyond.”
“Initially, AI will be mostly used for personalization,” says Flynn. “For instance, if you always choose sushi every time your mobile personal assistant offers you a choice of nearest restaurants, eventually it will stop giving you a choice and just the directions to the sushi bar. If you always fly business class with British Airways, then why bother the user with a choice of flights from other airlines.”
Lawrence is keen to stress “that it is not in industry where the breakthroughs have happened, but in academia.” He adds: “A particular focus of my own group is dealing with 'massive missing data': where most of the information we would like to have to base our decisions on is not available. Beyond my own area of research there are also key challenges in the areas of planning and reasoning. It is not yet clear to me how the recent breakthroughs will affect these areas.”
Medlock, meanwhile, feels that in the near future “[there is likely to be] an increased investment in businesses focused on AI, as the industry begins to understand that these technologies will underpin many [future] products.”
Lawrence thinks the long term future for AI “is very bright, but progress will be steady, not with large single steps forward, but across a number of applications.”
Flynn in turn stresses: “I don’t believe there will be one big AI moment that history will point to, it will just gradually start to become a normal part of our everyday lives. As devices, appliances, transportation [and so on] become intrinsically connected to each other and the internet, so AI will develop further to ensure seamless interaction between them all.”
“Expectations may currently be too high for the immediate future,” says Lawrence. “We are still many years away from achieving many of our goals in artificial intelligence research. The current successes have emerged from an area known as machine learning, a foundational technique that already underpinned much of the data driven decision making of the large internet companies.”
“The methodologies used have mainly emerged from a relatively small annual conference known as NIPS,” he adds. “The recent breakthroughs emerged from a group of NIPS researchers who received very far-sighted funding from the Canadian government (the Canadian Institute for Advanced Research NCAP program). The program spent a relatively small amount of money (tens of millions) on a carefully selected group of people. This group was led by Geoff Hinton (now of Google) and advised by Yann LeCun (now of Facebook).”
“In the UK, for example,” he continues, “large amounts of money are now promised, but it is not at all clear whether it will be well spent. Functional research operates rather like a well-tended garden: it needs an understanding of the right sort of plants and the ideal conditions for them. A sudden large increase in funding can have a similar effect to indiscriminate application of manure: something will grow, but it's not clear at the outset what it will be. When it comes to harvest time, will we have roses or dock leaves? The Canadian approach was to select the roses first, and then carefully tend them. Other countries would do well to follow a similar approach if they want to reap similar rewards.”
The view from industry is also similar. Medlock looks at the future in terms of the next couple of decades, and in this time frame he believes: “AI research will lead us towards more general solutions, able to take diverse inputs from a wide range of data sources and make powerful predictions that closely mimic higher order human reasoning. We will harness the rich streams of data harvested from personal/wearable devices and feed them into these general purpose AI problem solvers, providing support for important life decisions and enhancing our general health and wellbeing.”
Flynn, for his part, says: “[although] we are very excited by the possibilities that AI opens up in the next few years, ironically it’s likely that by the time AI is mainstream in every home, consumers won’t even think about it. As far as they are concerned, a product or service works how it’s supposed to, and most of the time that’s what people care about.”
The start may be slow but as Flynn concludes: “In the longer term [more than 20 years] I expect AI research to help us explore some of our deepest questions around life, purpose, consciousness and what it means to be human.”
It will be interesting to see whether this comes true within any of our lifetimes.
Kathryn Cave is Editor at IDG Connect
Artigas Arnal, José Gervasio 
José Gervasio Artigas Arnal (Montevideo, Governorate of Montevideo, 19 June 1764 - Asunción, Paraguay, 23 September 1850) was a soldier, statesman and Uruguay's foremost national hero. He received the titles "Chief of the Orientals" and "Protector of the Free Peoples". He was one of the most important statesmen of the Revolution of the Río de la Plata, and is therefore also honored in Argentina for his contribution to the country's independence and federalization.
As containers take off, so do security concerns 
Containers offer a quick and easy way to package up applications but security is becoming a real concern
Containers offer a quick and easy way to package up applications and all their dependencies, and are popular for testing and development.
According to a recent survey sponsored by container data management company ClusterHQ, 73 percent of enterprises are currently using containers for development and testing, but only 39 percent are using them in a production environment.
But this is changing: 65 percent said that they plan to use containers in production in the next 12 months, and they cited security as their biggest worry. According to the survey, just over 60 percent said that security was either a major or a moderate barrier to adoption.
Containers can be run within virtual machines or on traditional servers. The idea is somewhat similar to that of a virtual machine itself, except that while a virtual machine includes a full copy of the operating system, a container does not, making them faster and easier to load up.
The downside is that containers are less isolated from one another than virtual machines are. In addition, because containers are an easy way to package and distribute applications, many are doing just that -- but not all the containers available on the web can be trusted, and not all libraries and components included in those containers are patched and up-to-date.
According to a recent Red Hat survey, 67 percent of organizations plan to begin using containers in production environments over the next two years, but 60 percent said that they were concerned about security issues.
Isolated, but not isolated enough
Although containers are not as completely isolated from one another as virtual machines, they are more secure than just running applications by themselves.
"Your application is really more secure when it's running inside a Docker container," said Nathan McCauley, director of security at Docker, which currently dominates the container market.
According to the ClusterHQ survey, 92 percent of organizations are using or considering Docker containers, followed by LXC at 32 percent and Rocket at 21 percent.
Since the technology was first launched, McCauley said, Docker containers have had built-in security features such as the ability to limit what an application can do inside a container. For example, companies can set up read-only containers.
Containers also use name spaces by default, he said, which prevent applications from being able to see other containers on the same machine.
"You can't attack something else because you don't even know it exists," he said. "You can't even get a handle on another process on the machine, because you don't even know it's there."
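These built-in controls are exposed directly as flags on the Docker CLI; a minimal sketch (the image names and commands are illustrative, not from the article):

```shell
# Run a container with a read-only root filesystem; writes are
# only possible on an explicitly mounted tmpfs path.
docker run --rm --read-only --tmpfs /tmp alpine sh -c 'touch /tmp/ok'

# Drop all Linux capabilities and add back only what the app needs,
# further limiting what a compromised process inside can do.
docker run --rm --cap-drop ALL --cap-add NET_BIND_SERVICE nginx
```

Each container also gets its own namespaces by default, which is what produces the isolation McCauley describes: processes inside one container simply cannot see processes in another.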
However, container isolation doesn't go far enough, said Simon Crosby, co-founder and CTO at security vendor Bromium.
"Containers do not make a promise of providing resilient, multi-tenant isolation," he said. "It is possible for malicious code to escape from a container to attack the operating system or the other containers on the machine."
If a company isn't looking to get maximum efficiency out of its containers, however, it can run just one container per virtual machine.
This is the case with Nashua, NH-based Pneuron, which uses containers to distribute its business application building blocks to customers.
"We wanted to have assigned resourcing in a virtual machine to be usable by a specific container, rather than having two containers fight for a shared set of resources," said Tom Fountain, the company's CTO. "We think it's simpler at the administrative level."
Plus, this gives the application a second layer of security, he said.
"The ability to configure a particular virtual machine will provide a layer of insulation and security," he said. "Then when we're deployed inside that virtual machine then there's one layer of security that's put around the container, and then within our own container we have additional layers of security as well."
But the typical use case is multiple containers inside a single machine, according to a survey of IT professionals released Wednesday by container security vendor Twistlock.
Only 15 percent of organizations run one container per virtual machine. The majority of the respondents, 62 percent, said that their companies run multiple containers on a single virtual machine, and 28 percent run containers on bare metal.
And the isolation issue is still not figured out, said Josh Bressers, security product manager at Red Hat.
"Every container is sharing the same kernel," he said. "So if someone can leverage a security flaw to get inside the kernel, they can get into all the other containers running that kernel. But I'm confident we will solve it at some point."
Bressers recommended that when companies think about container security, they apply the same principles as they would apply to a naked, non-containerized application -- not the principles they would apply to a virtual machine.
"Some people think that containers are more secure than they are," he said.
McCauley said that Docker is also working to address another security issue related to containers -- that of untrusted content.
According to BanyanOps, a container technology company currently in private beta, more than 30 percent of containers distributed in the official repositories have high priority security vulnerabilities such as Shellshock and Heartbleed.
Outside the official repositories, that number jumps to about 40 percent.
Of the images created this year and distributed in the official repositories, 74 percent had high or medium priority vulnerabilities.
"In other words, three out of every four images created this year have vulnerabilities that are relatively easy to exploit with a potentially high impact," wrote founder Yoshio Turner in the report.
In August, Docker announced the release of the Docker Content Trust, a new feature in the container engine that makes it possible to verify the publisher of Docker images.
"It provides cryptographic guarantees and really leapfrogs all other secure software distribution mechanisms," Docker's McCauley said. "It provides a solid basis for the content you pull down, so that you know that it came from the folks you expect it to come from."
Red Hat, for example, which has its own container repository, signs its containers, said Red Hat's Bressers.
"We say, this container came from Red Hat, we know what's in it, and it's been updated appropriately," he said. "People think they can just download random containers off the Internet and run them. That's not smart. If you're running untrusted containers, you can get yourself in trouble. And even if it's a trusted container, make sure you have security updates installed."
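Content Trust is opt-in and controlled through an environment variable; a minimal sketch of how it changes pull behavior (the image tag is illustrative):

```shell
# With content trust enabled, docker pull/run refuse image tags
# that lack a valid publisher signature.
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest   # proceeds only if the tag is signed
```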
Security and management
According to Docker's McCauley, existing security tools should be able to work on containers the same way as they do on regular applications, and also recommended that companies deploy Linux security best practices.
Earlier this year Docker, in partnership with the Center for Internet Security, published a detailed security benchmark best practices document, and a tool called Docker Bench that checks host machines against these recommendations and generates a status report.
However, for production deployment, organizations need tools that they can use that are similar to the management and security tools that already exist for virtualization, said Eric Chiu, president and co-founder at virtualization security vendor HyTrust.
"Role-based access controls, audit-quality logging and monitoring, encryption of data, hardening of the containers -- all these are going to be required," he said.
In addition, container technology makes it difficult to see what's going on, experts say, and legacy systems can't cut it.
"Lack of visibility into containers can mean that it is harder to observe and manage what is happening inside of them," said Loris Degioanni, CEO at Sysdig, one of the new vendors offering container management tools.
Another new vendor in this space is Twistlock, which came out of stealth mode in May.
"Once your developers start to run containers, IT and IT security suddenly becomes blind to a lot of things that happen," said Chenxi Wang, the company's chief strategy officer.
Say, for example, you want to run anti-virus software. According to Wang, it won't run inside the container itself, and if it's running outside the container, on the virtual machine, it can't see into the container.
Twistlock provides tools that can add security at multiple points. It can scan a company's repository of containers, and it can scan containers as they are loaded, preventing vulnerable containers from launching.
"For example, if the application inside the container is allowed to run as root, we can say that it's a violation of policy and stop it from running," she said.
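A policy check like the one Wang describes can be sketched against image metadata of the kind `docker inspect` returns. The field names below follow Docker's image-config format, but the policy logic itself is an illustrative assumption, not Twistlock's actual implementation:

```python
def violates_root_policy(image_config: dict) -> bool:
    """Flag images whose main process would run as root.

    In Docker image metadata, an empty or missing "User" field,
    or an explicit "root" or "0", means the container runs as root.
    """
    user = image_config.get("Config", {}).get("User", "")
    return user in ("", "root", "0")

# Illustrative metadata in the shape of `docker inspect` output.
root_image = {"Config": {"User": ""}}       # no user set -> runs as root
hardened_image = {"Config": {"User": "app"}}  # dedicated unprivileged user

print(violates_root_policy(root_image))      # True: policy violation, block it
print(violates_root_policy(hardened_image))  # False: allowed to run
```

A real enforcement point would sit between the registry and the container engine, refusing to start any image for which the check returns True.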
Twistlock can monitor whether a container is communicating with known command-and-control hosts and either report it, cut off the communication channel, or shut down the container altogether.
And the company also monitors communications between the container and the underlying Docker infrastructure, to detect applications that are trying to issue privileged commands or otherwise tunnel out of the container.
According to IDC analyst Gary Chen, container technology is still so new that most companies are still figuring out what value containers offer and how they're going to use them.
"Today, it's not really a big market," he said. "It's still really early in the game. Security is something you need once you start to put containers into operations."
That will change once containers get more widely deployed.
"I wouldn't be surprised if the big guys eventually got into this marketplace," he said.
More than 800 million containers have been downloaded so far by tens of thousands of enterprises, according to Docker.
But it's hard to calculate the dollar value of this market, said Joerg Fritsch, research director for security and risk management at research firm Gartner.
"Docker has not yet found a way to monetize their software," he said, and there are very few other vendors offering services in this space. He estimates the market size to be around $200 million or $300 million, much of it from just a single services vendor, Odin, formerly the service provider part of virtualization company Parallels.
With the exception of Odin, most of the vendors in this space, including Docker itself, are relatively new startups, he said, and there are few commercial management and security tools available for enterprise customers.
"When you buy from startups you always have this business risk, that a startup will change its identity on the way," Fritsch said.
This story, "As containers take off, so do security concerns" was originally published by CSO.
Ashton-Tate
Ashton-Tate was a US software company, famous for developing the dBase database application. Ashton-Tate grew from a small garage company into a multinational with software development centers across the United States and Europe.
How animals choose their rulers. Do we humans imitate them?
In politics, most of us are 'Marxists' in tendency and interpretation.
Without doubt, the best guide to understanding politicians (or perhaps even more the all-too-common political hacks) was offered by this influential thinker, who once wisely declared: "Politics is the art of looking for trouble, finding it, making a false diagnosis and then applying the wrong remedies." We are speaking, of course, of the great Groucho Marx, the fabulous American comedian of German origin.
Hence the risky and all-too-common human behavior in electoral dynamics: when choosing our leaders, we seem to imitate the animals. Inexplicably, we almost always choose our executioners. The interpretation is in this video, based on a text by the Mexican poet Guillermo Aguirre y Fierro (author of "El brindis del bohemio"): a fable published 90 years ago that reads as if it were written today, "The Election of the Animals".
Asimov's Three Laws of Robotics 
Posted by: Margaret Rouse
Science-fiction author Isaac Asimov is often given credit for being the first person to use the term robotics, in a short story composed in the 1940s. In the story, Asimov suggested three principles to guide the behavior of robots and smart machines. Asimov's Three Laws of Robotics, as they are called, have survived to the present:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Isaac Asimov in 1965
Three Laws of Robotics
In science fiction, the Three Laws of Robotics are a set of rules written by Isaac Asimov, which most of the robots in his novels and stories are designed to obey. In that universe, the laws are "mathematical formulations imprinted on the positronic pathways of the brain" of the robots (lines of program code enforcing the laws, stored in the robot's main Flash EEPROM memory). First appearing in the story Runaround (1942), they state the following:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
This wording of the laws is the conventional form in which the humans of the stories state them; their real form would be a series of equivalent and much more complex instructions in the robot's brain.
Asimov attributed the three Laws to John W. Campbell, who supposedly formulated them during a conversation held on December 23, 1940. Campbell, however, maintained that Asimov already had them in mind, and that the two of them simply expressed them together in a more formal way.
The three laws appear in a great number of Asimov's stories: throughout his Robot series, in several related stories, and in the series of novels starring Lucky Starr. They have also been used by other authors working in Asimov's fictional universe, and references to them are frequent in other works, both in science fiction and in other genres.