References


Full references for the vocabulary, events, chronicles, evidence and other content used in KW Foundation projects related to biotechnology and neuroscience.


W


Web Content Management System [445]

by System Administrator - Friday, 14 March 2014, 14:13
 


How to Choose the Best Web Content Management System for Customer Experience Management: A Guide for Both Marketers and Developers

Choosing a Web CMS is about more than Content Management

We’ve come a long way since the days when a content management system (CMS) was simply a way to manage and update the content on your website. Today, a web CMS is just one type of technology you need to consistently deliver an excellent customer experience. While your web CMS is a crucial component, today you must look at it as part of a larger customer experience management capability.

Why the shift? It all starts with the connected, empowered customer who brings greater expectations and preferences about how and when he or she wishes to engage with a brand. Today’s customers expect a seamless, multichannel experience that anticipates their needs and wants. Companies that deliver this type of experience are building trust and loyalty that result in top- and bottom-line improvements including: greater return on marketing investment, increased conversions, higher revenues, and greater lifetime customer value.

To achieve these business outcomes, companies are embracing the discipline of customer experience management and investing in the technology that enables it. A customer experience management platform lets you drive consistency in the experiences that your customers have with your brand. And that’s where a web CMS comes in. A web CMS helps you achieve that consistency and deliver great web experiences. The rest of the customer experience management solution helps you deliver that content and consistency in other channels such as email and social.

Because your web CMS must interoperate seamlessly with the components of customer experience management, the CMS decision shouldn’t be made in a vacuum. This paper highlights the criteria – both from the marketers’ and the IT/developers’ perspective – that today’s organizations should consider when selecting a new web CMS as part of a broader customer experience management strategy.

“It is time to start thinking about WCM [web content management] beyond just managing content or siloed websites or experiences. Instead, we need to think of how WCM will interact and integrate with other solutions – like search, recommendations, eCommerce, and analytics – in the customer experience management (CXM) ecosystem in order to enable businesses to manage experiences across customer touchpoints.”

— Stephen Powers, Forrester


Web content management systems [733]

by System Administrator - Tuesday, 12 August 2014, 20:54
 

Web content management systems try to do it all

 

by: Lauren Horwitz

What happens at Mohegan Sun stays at Mohegan Sun -- or at least on the casino company's website.

Internet marketing manager Ryan Lee understands the importance of digital content to business all too well. He manages the websites for the Uncasville, Connecticut, and Wilkes-Barre, Pennsylvania, casinos. The Connecticut property is the second largest in the U.S. and home to some 40 restaurants, three casinos and several entertainment venues for acts like John Legend and Panic! At the Disco.

But managing the volume of content associated with so much entertainment is a big job. The website has to be updated regularly with concert schedules, restaurant menus and casino information. The site has to aggregate resources from various places and distribute them to the general public, but also personalize information for loyalty program members, who can get wind of special offers or book a hotel directly from the site. And MoheganSun.com has to deliver information to PCs, mobile devices and way-finding kiosks on the floor of the Connecticut property. The Mohegan Sun sites try to do it all for their users.
 

It's a paradox of sorts, where company websites are becoming the place to centralize communications through disparate channels such as Facebook and Twitter, blogs and YouTube, and mobile devices. Web content management systems (WCMs) are the principal weapon in this ground war to engage audiences, but they face new challenges as channels of communication -- and content itself -- grow more sophisticated and varied. WCM software enables nondevelopers to manage and self-publish digital content. With WCM software, content managers can reach audiences wherever they choose to communicate and wrangle these channels and audiences into one location on a company's website.

At Mohegan Sun, the company needed to corral all its disparate content and centralize it in a WCM, all while enabling rapid edits and ways to reach customers through new communication channels. "We needed a hub," Lee recalled. "With the quantity of content we're managing, it's imperative that it's all in one place." So in 2013, Mohegan Sun brought on Adobe Experience Manager to replace custom systems.

Now, Lee said, the company can create better brand consistency between the sites for the Pennsylvania and Connecticut properties and improve customer service through the site. "The customer experience is more fluid, rather than being bounced around," Lee said.

"A lot of users have catching up to do -- in terms of operations, marketing and editorial maturity -- to leverage what these tools can do."

Tony Byrne | President of Real Story Group

The channel paradox

The market for WCM technology is gathering steam. According to the Research and Markets report, "Global Web Content Management Systems Market 2014-2018," the WCM software market is due to grow 12.7% between 2013 and 2018. Web content management software has earned a reputation for flexibility and ease of use for nondeveloper types, such as editors and marketers.

But the power of these systems -- their ability to democratize the process -- can also be their downfall. "The major vendors are ahead of the vast majority of their customer bases," said Tony Byrne, president of Real Story Group, a technology research and analyst firm. "A lot of users have catching up to do -- in terms of operations, marketing and editorial maturity -- to leverage what these tools can do."

At Mohegan Sun, for example, the company is still shoehorning its legacy mobile content management system (CMS) into its larger WCM strategy. The company houses mobile versions of its website outside of Adobe. So, the regular version of the website doesn't automatically populate to the mobile version, because the mobile sites don't reside in Adobe Experience Manager.

"If a concert is canceled, you have to remember to cancel it in multiple places -- and all the different parts and pieces that you have to fill out," Lee noted. Having to make changes in multiple locations is time-consuming and introduces the risk of errors or inconsistencies. Lee said that as the company considers applying for additional gaming licenses in new territories, he would like to make a single environment the governing standard.

"It's all an effort we are moving toward," he said. "Having Adobe be the hub for everything digital -- mobile apps, way-finders, websites, mobile websites and whatever other channels come along -- hopefully we will be able to manage out of one environment."

Adobe Experience Manager offers analytics tools and social media monitoring tools as well, but Mohegan Sun isn't using those today. While these WCM systems offer lots of functionality, companies are often still laying the groundwork for more basic initiatives, such as bringing mobile content into their WCMs.

Content that drives users

New England Biolabs (NEB) in Ipswich, Massachusetts, has a range of content on its site, from research to reagents to other tools for the life sciences. Because NEB offers information as well as commercial products on its site, it needed a flexible Web content management system that can educate and inform its site users with articles, white papers and video but also function as an e-commerce site and sell products.

"People come to the website to learn something, and then buy the product," said Tanya Osterfield, digital marketing manager at the company. So the company needed a WCM that could take its research data and populate it on the site in different ways and in different formats.

"Our site features information like the temperature you would want to use one of our products at [during an experiment]." Osterfield said. "So that information appears on the product page but also in 42 tables and charts across the site." The company can use its WCM to publish information for a variety of different site purposes -- and different sites -- without having to publish the information in each location or to make changes in each location when they are necessary.

That is a major boon for NEB, because it helps ensure better data integrity and consistency. "We can be more trusted and accurate with our data because it's changed everywhere at one time," she said.
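
As a rough sketch of the single-source pattern Osterfield describes, assume each product's data (say, a recommended incubation temperature) lives in one authoritative record and every page or table renders from that record rather than keeping its own copy; editing the record once then changes every rendering. The data model and rendering functions below are invented for illustration and are not NEB's or any vendor's actual schema.

```python
# One authoritative content record per product -- the "single source".
PRODUCTS = {
    "enzyme-x": {
        "name": "Enzyme X",
        "incubation_temp_c": 37,
        "description": "A hypothetical reagent used here only as an example.",
    },
}

def render_product_page(product_id: str) -> str:
    """One consumer of the record: the product detail page."""
    p = PRODUCTS[product_id]
    return f"{p['name']}\n{p['description']}\nRecommended incubation: {p['incubation_temp_c']} °C"

def render_temperature_table_row(product_id: str) -> str:
    """Another consumer: a row in a site-wide reference table."""
    p = PRODUCTS[product_id]
    return f"| {p['name']} | {p['incubation_temp_c']} °C |"

if __name__ == "__main__":
    print(render_product_page("enzyme-x"))
    print(render_temperature_table_row("enzyme-x"))

    # A single edit to the record propagates to every rendering.
    PRODUCTS["enzyme-x"]["incubation_temp_c"] = 42
    print(render_temperature_table_row("enzyme-x"))
```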

New England Biolabs uses various channels, such as Facebook and Twitter, to promote its content, but always with the intent of driving people back to the site. "In every Facebook post, every tweet, there is a call to action," Osterfield said. "That is the funnel. You find something interesting on Twitter, you go to our site to learn more about it, and 'Here are some products that relate to it and that you can buy.'"

Osterfield said that NEB is also quite conscious of tailoring content and message to the channel it is operating in. So, for example, they do not promote products on Twitter but provide information on research and provide calls to action. "We don't get salesy on Twitter or Facebook. We want to give people a way to learn more."

When is doing it all doing too much?

Osterfield's instinct to tailor NEB's content for the audience and the environment is a best practice for Web content management systems.

But WCM systems can pose the risk of trying to be all things for all purposes, and that places responsibility at the feet of human beings to provide quality control. Content managers can't just sit back and let the software drive everything.

According to Real Story Group's Byrne, tailoring content for multiple environments is the name of the game, but it's also quite difficult. "The enterprise wants a single source of content that can be reused across different platforms," Byrne said. "But if you're going to republish things to Facebook, you're going to want to modify it for the context of the environment that you're in."

Byrne presented a hypothetical in which users may change a headline to make it more compelling but forget to preview its display for the mobile website. Headlines can get cut off or be difficult to read in that small real estate, and content managers need to think about an increasing number of details as the channels proliferate and WCM capabilities grow to mirror the environment.

Byrne notes that the goals of WCM can be at odds with one another.

"There is this classic tension between having a single source of content but then also doing things that are more customer-specific and more contextual," he said. "Those two are always fighting against each other. It's not a vendor problem, but vendors tend to gloss over it."

Link: http://searchcontentmanagement.techtarget.com


Western Union [65]

by System Administrator - Saturday, 4 January 2014, 17:59
 

Western Union was founded in Rochester, New York, in 1851, under the name The New York and Mississippi Valley Printing Telegraph Company. After acquiring a number of competing companies, the firm changed its name to Western Union Telegraph Company in 1856.

Source: http://es.wikipedia.org/wiki/Western_Union


Weta Workshop [439]

by System Administrator - Monday, 27 January 2014, 12:38
 


Weta Workshop is a practical special effects company located in the Miramar suburb of Wellington, New Zealand, that produces effects for television and film. It is one of the main divisions of the holding company Weta Limited and was the origin, in 1987, of that corporate group.

Although Weta had long produced creature and makeup effects for the television series Hercules: The Legendary Journeys and Xena: Warrior Princess, and effects for films such as Meet the Feebles and Heavenly Creatures, Weta Workshop's work gained worldwide prominence with director Peter Jackson's The Lord of the Rings trilogy, for which Weta produced sets, costumes, armor, weapons, creatures and miniatures. Weta's fantasy craftsmanship has also been used in The Chronicles of Narnia: The Lion, the Witch and the Wardrobe and The Chronicles of Narnia: Prince Caspian, a film series based on the books by C. S. Lewis and produced by Walden Media.

Weta is currently working on the costumes for Justice League of America, a film that also involves Weta Digital. Other projects include Jane and the Dragon and a live-action version of the anime Neon Genesis Evangelion. Weta is also working on the film adaptation of Halo, the popular video game by Bungie and Microsoft, and has built a functional, full-scale Warthog from the game. Weta also provided special effects for the Rock2Wgtn music festival held over Easter 2008.

Source: http://es.wikipedia.org/wiki/Weta_Workshop


WHAT HAPPENS WHEN ROBOTS KNOW US BETTER THAN WE KNOW OURSELVES? [597]

by System Administrator - Tuesday, 29 July 2014, 19:20
 

THE UNCANNIEST VALLEY: WHAT HAPPENS WHEN ROBOTS KNOW US BETTER THAN WE KNOW OURSELVES?

Written By: Steven Kotler

The “uncanny valley” is a term coined by Japanese roboticist Masahiro Mori in 1970 to describe the strange fact that, as robots become more human-like, we relate to them better—but only to a point. The “uncanny valley” is this point.

The issue is that, as robots start to approach true human mimicry, when they look and move almost, but not exactly, like a real human, real humans react with a deep and violent sense of revulsion.

This is evolution at work. Biologically, revulsion is a subset of disgust, one of our most fundamental emotions and the by-product of evolution’s early need to prevent an organism from eating foods that could harm that organism. Since survival is at stake, disgust functions less like a normal emotion and more like a phobia—a nearly unshakable hard-wired reaction.

Psychologist Paul Ekman discovered that disgust, alongside contempt, surprise, fear, joy, and sadness, is one of the six universally recognized emotions. But the deepness of this emotion (meaning its incredibly long and critically important evolutionary history) is why Ekman also discovered that in marriages, once one partner starts feeling disgust for the other, the result is almost always divorce.

Why? Because once disgust shows up, the brain of the disgust-feeler starts processing the other person (i.e. the disgust trigger) as a toxin. Not only does this bring on an unshakable sense of revulsion (i.e. a “get me the hell away from this toxic thing” response), it de-humanizes the other person, making it much harder for the disgust-feeler to feel empathy. Both spell doom for relationships.

Now, disgust comes in three flavors. Pathogenic disgust refers to what happens when we encounter infectious microorganisms; moral disgust pertains to social transgressions like lying, cheating, stealing, raping, killing; and sexual disgust emerges from our desire to avoid procreating with “biologically costly mates.” And it is both sexual and pathogenic disgust that create the uncanny valley.

To protect us from biologically costly mates, the brain’s pattern recognition has a hair-trigger mechanism for recognizing signs of low-fertility and ill-health. Something that acts almost human but not quite, reads—to our brain’s pattern recognition system—as illness.

And this is exactly what goes wrong with robots. When the brain detects human-like features—that is, when we recognize a member of our own species—we tend to pay more attention. But when those features don’t exactly add up to human, we read this as a sign of disease—meaning the close but no cigar robot reads as a costly mate and a toxic substance and our reaction is deep disgust.

 

Repliee Q2, photographed at Index Osaka. Note: the model for Repliee Q2 is probably the same as for Repliee Q1expo, NHK announcer Ayako Fujii.

But the uncanny valley is only the first step in what will soon be a much more peculiar process, one that will fundamentally reshape our consciousness. To explore this process, I want to introduce a downstream extension of this principle—call it the uncanniest valley.

The idea here is complicated, but it starts with the very simple fact that every species knows (and I’m using this word to describe both cognitive awareness and genetic awareness) its own species the best. This knowledge base is what philosopher Thomas Nagel explored in his classic paper on consciousness, “What Is It Like to Be a Bat?” In this essay, Nagel argues that you can’t ever really understand the consciousness of another species (that is, what it’s like to be a bat) because each species’ perceptual systems are hyper-tuned and hyper-sensitive to its own sensory inputs and experiences. In other words, in the same way that “game recognizes game” (to borrow a phrase from LL Cool J), species recognize species.

And this brings us to Ellie, the world’s first robo-shrink. Funded by DARPA and developed by researchers at USC’s Institute for Creative Technologies, Ellie is an early-iteration computer-simulated psychologist, a bit of complicated software designed to identify signals of depression and other mental health problems through an assortment of real-time sensors (she was developed to help treat PTSD in soldiers and hopefully decrease the incredibly high rate of military suicides).

At a technological level, Ellie combines a video camera to track facial expressions, a Microsoft Kinect movement sensor to track gestures and jerks, and a microphone to capture inflection and tone. At a psychological level, Ellie evolved from the suspicion that our twitches and twerks and tones reveal much more about our inner state than our words (thus Ellie tracks 60 different “features”—that’s everything from voice pitch to eye gaze to head tilt). As Albert Rizzo, a USC psychologist and one of the leads on the project, told NPR: “[P]eople are in a constant state of impression management. They’ve got their true self and the self that they want to project to the world. And we know that the body displays things that sometimes people try to keep contained.”

 

More recently, a new study just found that patients are much more willing to open up to a robot shrink than a human shrink. Here’s how Neuroscience News explained it: “The mere belief that participants were interacting with only a computer made them more open and honest, researchers found, even when the virtual human asked personal questions such as, ‘What’s something you feel guilty about?’ or ‘Tell me about an event, or something that you wish you could erase from your memory.’ In addition, video analysis of the study subjects’ facial expressions showed that they were also more likely to show more intense signs of sadness — perhaps the most vulnerable of expressions — when they thought only pixels were present.”

The reason for this success is pretty straightforward. Robots don’t judge. Humans do.

But this development also tells us a few things about our near future. First, while most people are now aware of the fact that robots are going to steal a ton of jobs in the next 20 years, the jobs that most people think are vulnerable are of the blue-collar variety. Ellie is one reason to disabuse yourself of this notion.

As a result of this coming replacement, two major issues are soon to arise. The first is economic. There are about 607,000 social workers in America, 93,000 practicing psychologists, and roughly 50,000 psychiatrists. But, well, with Ellie 2.0 in the pipeline, not for long. (It’s also worth noting that these professions generate about $3.5 billion dollars in annual income, which—assuming robo-therapy is much, much cheaper than human-therapy—will also vanish from the economy.)

But the second issue is philosophical, and this is where the uncanniest valley comes back into the picture. Now, for sure, this particular valley is still hypothetical, and thus based on a few assumptions. So let’s drill down a bit.

The first assumption is that social workers, psychologists and psychiatrists are a deep knowledge base, arguably one of our greatest repositories of “about human” information.

Second, we can also assume that Ellie is going to get better and better and better over time—no great stretch since we know all the technologies that combine to make robo-psychologists possible are, as was well-documented in Abundance, accelerating on exponential growth curves. This means that sooner or later, in the psychological version of the Tricorder, we’re going to have an AI that knows us as well as we know ourselves.

Third—and also as a result of this technological acceleration—we can also assume there will soon come a time when an AI can train up a robo-therapist better than a human can—again, no great stretch because all we’re really talking about is access to a huge database of psychological data combined with ultra-accurate pattern recognition, two already possible developments.

But here’s the thing—when you add this up, what you start to realize is that sooner or later robots will know us better than we know ourselves. In Nagel’s terms, we will no longer be the species that understands our species the best. This is the Uncanniest Valley.

And just as the uncanny valley produces disgust, I’m betting that the uncanniest valley produces a nearly unstoppable fear reaction—a brand new kind of mortal terror, the downstream result of what happens when self loses its evolutionarily unparalleled understanding of self.

Perhaps this will be temporary. It’s not hard to imagine that our journey to this valley will be fortuitous. For certain, the better we know ourselves—and it doesn’t really matter where that knowledge comes from—the better we can care for and optimize ourselves.

Yet I think the fear-response produced by this uncanniest valley will have a similar effect to disgust in relationships—that is, this fear will be extremely hard to shake.

But even if I’m wrong, one thing is for certain: we’re heading toward an inflection point almost without equal—the point in time when we lose a lot more of ourselves, literally, to technology, and another reason that life in the 21st century is about to get a lot more Blade Runner.

[Photo credits: Robert Couse-Baker/Flickr, Wikipedia, Steve Jurvetson/Flickr]


Link: http://singularityhub.com/2014/07/29/the-uncanniest-valley-what-happens-when-robots-know-us-better-than-we-know-ourselves/


What Kind of Thing Is Moore’s Law? [1200]

by System Administrator - Saturday, 11 April 2015, 20:45
 

What Kind of Thing Is Moore’s Law?

The trend has more to do with collective behavior than the laws of nature

What's the difference between a vulnerability scan, penetration test and a risk analysis? [1231]

by System Administrator - Saturday, 16 May 2015, 15:06
 

What's the difference between a vulnerability scan, penetration test and a risk analysis?

By Tony Martin-Vegue

Misunderstanding these important tools can put your company at risk – and cost you a lot of money

You’ve just deployed an ecommerce site for your small business or developed the next hot iPhone MMORPG. Now what?

Don’t get hacked!

Before you do anything, you need to understand the difference between a vulnerability scan, a penetration test and a risk analysis. Let’s examine the differences in depth and see how they complement each other.

 

Vulnerability assessment

Vulnerability assessments are most often confused with penetration tests, and the terms are often used interchangeably, but they are worlds apart.

Vulnerability assessments are performed by using an off-the-shelf software package, such as Nessus or OpenVAS, to scan an IP address or range of IP addresses for known vulnerabilities. For example, the software has signatures for the Heartbleed bug or missing Apache web server patches and will alert if these are found. The software then produces a report that lists the vulnerabilities found and (depending on the software and options selected) gives an indication of the severity of each vulnerability and basic remediation steps.

It’s important to keep in mind that these scanners use a list of known vulnerabilities, meaning they are already known to the security community, hackers and the software vendors. There are vulnerabilities that are unknown to the public at large and these scanners will not find them.
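
As a rough illustration of the signature-matching approach described above, here is a minimal sketch in Python: it grabs service banners from a list of hosts and compares them against a tiny, hand-written table of known-vulnerable versions, then prints findings with a severity. The hosts, ports and signature table are placeholder assumptions for illustration only; real scanners such as Nessus or OpenVAS use far larger, constantly updated signature databases and much more sophisticated checks.

```python
import socket

# Hypothetical signature table: banner substring -> (vulnerability, severity).
# Real scanners ship thousands of regularly updated signatures.
KNOWN_VULNERABLE = {
    "OpenSSL/1.0.1e": ("Heartbleed (CVE-2014-0160)", "Critical"),
    "Apache/2.2.22":  ("Missing Apache security patches", "High"),
}

def grab_banner(host, port, timeout=3.0):
    """Connect to a TCP port and return whatever banner the service sends."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # Many services (FTP, SMTP, SSH) announce themselves; for HTTP we ask.
            if port in (80, 8080):
                sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            return sock.recv(1024).decode(errors="replace")
    except OSError:
        return ""

def scan(hosts, ports=(21, 22, 25, 80)):
    """Produce a simple findings list: (host, port, vulnerability, severity)."""
    findings = []
    for host in hosts:
        for port in ports:
            banner = grab_banner(host, port)
            for signature, (vuln, severity) in KNOWN_VULNERABLE.items():
                if signature in banner:
                    findings.append((host, port, vuln, severity))
    return findings

if __name__ == "__main__":
    # "192.0.2.10" is a placeholder address (TEST-NET-1), not a real target.
    for host, port, vuln, severity in scan(["192.0.2.10"]):
        print(f"{host}:{port} - {vuln} [{severity}]")
```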

Penetration test

Many “professional penetration testers” will actually just run a vulnerability scan, package up the report in a nice, pretty bow and call it a day. Nope – this is only a first step in a penetration test. A good penetration tester takes the output of a network scan or a vulnerability assessment and takes it to 11 – they probe an open port and see what can be exploited.

For example, let’s say a website is vulnerable to Heartbleed. Many websites still are. It’s one thing to run a scan and say “you are vulnerable to Heartbleed” and a completely different thing to exploit the bug and discover the depth of the problem and find out exactly what type of information could be revealed if it was exploited. This is the main difference – the website or service is actually being penetrated, just like a hacker would do.

Similar to a vulnerability scan, the results are usually ranked by severity and exploitability with remediation steps provided.

Penetration tests can be performed using automated tools, such as Metasploit, but veteran testers will write their own exploits from scratch.
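
To make the difference concrete, the sketch below shows one deliberately benign, hypothetical version of that extra step: where a scan would merely report an open FTP port, the tester actually attempts an anonymous login and lists what it exposes, demonstrating real impact. The target address is a placeholder; this stands in for the kind of exploitation a tool like Metasploit, or a hand-written exploit, would perform, and should only ever be run against systems you are authorized to test.

```python
from ftplib import FTP, error_perm

def verify_anonymous_ftp(host, timeout=5.0):
    """Attempt anonymous login on an open FTP port and report what it exposes.

    A vulnerability scan would only flag port 21 as open or the banner as
    outdated; this step demonstrates actual impact by logging in and listing
    files, the way a penetration tester (or attacker) would.
    """
    ftp = FTP()
    try:
        ftp.connect(host, 21, timeout=timeout)
        ftp.login()                      # no arguments -> anonymous login
        listing = ftp.nlst()             # directory names visible to anyone
        return True, listing[:10]        # cap the sample for the report
    except (error_perm, OSError):
        return False, []
    finally:
        try:
            ftp.quit()
        except Exception:
            pass                         # connection may never have been made

if __name__ == "__main__":
    # "192.0.2.10" is a placeholder (TEST-NET-1); only test systems you own
    # or are explicitly authorized to assess.
    exploitable, files = verify_anonymous_ftp("192.0.2.10")
    print("Anonymous FTP login possible:", exploitable)
    if exploitable:
        print("Sample of exposed files:", files)
```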

Risk analysis

A risk analysis is often confused with the previous two terms, but it is also a very different animal. A risk analysis doesn't require any scanning tools or applications – it's a discipline that analyzes a specific vulnerability (such as a line item from a penetration test) and attempts to ascertain the risk – financial, reputational, business continuity, regulatory and other – to the company if the vulnerability were to be exploited.

Many factors are considered when performing a risk analysis: asset, vulnerability, threat and impact to the company. An example of this would be an analyst trying to find the risk to the company of a server that is vulnerable to Heartbleed.

The analyst would first look at the vulnerable server, where it is on the network infrastructure and the type of data it stores. A server sitting on an internal network without outside connectivity, storing no data but vulnerable to Heartbleed has a much different risk posture than a customer-facing web server that stores credit card data and is also vulnerable to Heartbleed. A vulnerability scan does not make these distinctions. Next, the analyst examines threats that are likely to exploit the vulnerability, such as organized crime or insiders, and builds a profile of capabilities, motivations and objectives. Last, the impact to the company is ascertained – specifically, what bad thing would happen to the firm if an organized crime ring exploited Heartbleed and acquired cardholder data?

A risk analysis, when completed, will have a final risk rating with mitigating controls that can further reduce the risk. Business managers can then take the risk statement and mitigating controls and decide whether or not to implement them.
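
The sketch below is one simplified, hypothetical way to express that analysis as a calculation: rate threat likelihood and business impact for a scenario, combine them into a rating, and show how a mitigating control changes the result. The scales, weights and example figures are assumptions for illustration; established frameworks such as FAIR or NIST SP 800-30 are considerably more nuanced.

```python
from dataclasses import dataclass

# Simple 1-5 ordinal scales; real frameworks use richer, often monetary, models.
RATING_LABELS = {1: "Very Low", 2: "Low", 3: "Moderate", 4: "High", 5: "Critical"}

@dataclass
class RiskScenario:
    asset: str          # e.g. "customer-facing web server with cardholder data"
    vulnerability: str  # e.g. "Heartbleed (CVE-2014-0160)"
    threat: str         # e.g. "organized crime"
    likelihood: int     # 1-5: how likely the threat exploits the vulnerability
    impact: int         # 1-5: financial / reputational / regulatory consequence

def risk_rating(scenario: RiskScenario, control_reduction: int = 0) -> str:
    """Combine likelihood and impact, then apply the effect of controls.

    control_reduction models a mitigating control (patching, segmentation,
    monitoring) as a reduction in likelihood.
    """
    likelihood = max(1, scenario.likelihood - control_reduction)
    score = likelihood * scenario.impact           # 1..25
    band = min(5, max(1, round(score / 5)))        # map back onto a 1-5 band
    return RATING_LABELS[band]

if __name__ == "__main__":
    internet_facing = RiskScenario(
        asset="customer-facing web server storing cardholder data",
        vulnerability="Heartbleed", threat="organized crime",
        likelihood=5, impact=5)
    internal_only = RiskScenario(
        asset="internal server, no stored data",
        vulnerability="Heartbleed", threat="insider",
        likelihood=2, impact=1)

    print("Internet-facing, no controls:", risk_rating(internet_facing))
    print("Internet-facing, patched and segmented:",
          risk_rating(internet_facing, control_reduction=3))
    print("Internal-only server:", risk_rating(internal_only))
```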

The three different concepts explained here are not exclusive of each other, but rather complement each other. In many information security programs, vulnerability assessments are the first step – they are used to perform wide sweeps of a network to find missing patches or misconfigured software. From there, one can either perform a penetration test to see how exploitable the vulnerability is or a risk analysis to ascertain the cost/benefit of fixing the vulnerability. Of course, you don’t need either to perform a risk analysis. Risk can be determined anywhere a threat and an asset is present. It can be data center in a hurricane zone or confidential papers sitting in a wastebasket.

It’s important to know the difference – each is significant in its own way and has vastly different purposes and outcomes. Make sure any company you hire to perform these services also knows the difference.

This article is published as part of the IDG Contributor Network.

Tony Martin-Vegue

Tony Martin-Vegue works for a large global retailer, leading the firm's cyber-crime program. His enterprise risk and security analyses are informed by his 20 years of technical expertise in areas such as network operations, cryptography and system administration. Tony holds a Bachelor of Science in Business Economics from the University of San Francisco and certifications including CISSP, CISM and CEH.

Link: http://www.csoonline.com

 

 


WHEN THE INTERNET SLEEPS [956]

by System Administrator - Monday, 20 October 2014, 17:19
 

WHEN THE INTERNET SLEEPS

Written By: Jason Dorrier

The internet is a little bit like an organism—a really huge organism, made up of over four billion IP addresses networked across the globe. How does the internet behave day to day? What are its natural cycles?

USC Viterbi School of Engineering project leader and computer science assistant professor, John Heidemann, decided to find out.

In collaboration with Lin Quan and Yuri Pradkin, Heidemann pinged 3.7 million IP address blocks—representing almost a billion unique IP addresses—every 11 minutes for two months earlier this year. They asked the simple question: When are these addresses active and when are they sleeping?
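
A minimal sketch of that kind of measurement, assuming a tiny, hypothetical list of placeholder addresses: probe each address at a fixed interval with the system ping command and record which ones answered, building a per-address activity timeline. The real study probed millions of address blocks from dedicated measurement infrastructure; this toy loop only illustrates the idea.

```python
import subprocess
import time
from datetime import datetime, timezone

ADDRESSES = ["192.0.2.1", "198.51.100.1"]   # placeholder (TEST-NET) addresses
INTERVAL_SECONDS = 11 * 60                   # the study probed every 11 minutes

def is_responsive(address: str) -> bool:
    """Send a single ICMP echo request via the system ping command."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", address],   # Linux-style flags
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def census(rounds: int = 3):
    """Record, per round, which addresses answered: 'awake' vs 'no response'."""
    timeline = []
    for round_number in range(rounds):
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        snapshot = {addr: is_responsive(addr) for addr in ADDRESSES}
        timeline.append((stamp, snapshot))
        if round_number < rounds - 1:
            time.sleep(INTERVAL_SECONDS)   # wait before the next probe round
    return timeline

if __name__ == "__main__":
    for stamp, snapshot in census(rounds=1):
        for addr, awake in snapshot.items():
            print(f"{stamp}  {addr}  {'awake' if awake else 'no response'}")
```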

The team found some interesting trends. IP addresses using home WiFi routers in countries like the US and Western Europe were consistently on (or awake) around the clock, whereas addresses in Eastern Europe, South America, and Asia tended to cycle more regularly with day and night.

Why is this important? Think of it as a method for differentiating between a “sleeping” internet and a “broken” internet.

“This data helps us establish a baseline for the internet,” says Heidemann, “To understand how it functions, so that we have a better idea of how resilient it is as a whole, and can spot problems quicker.”

The simplest use of the data may be akin to a health checkup, but there might be other interesting research outcomes too. For example, an “always on” internet may correlate with economic development. Over the years, we might be able to track how countries are doing, adding internet data to other broad statistics like GDP.

You might also have noticed there are big holes in the map in Africa, Asia, and South America. These in part correlate to low-population areas—but they also show where internet coverage is still spotty. Indeed, billions around the world still lack regular internet access (a situation Google and Facebook are intent on remedying).

Heidemann’s map is intriguing, in part, because it’s a striking visual representation of just how connected the planet already is—and just how much more connected it is likely to become over the next few years and decades.

Image Credit: USC Viterbi School of Engineering


Link: http://singularityhub.com


When the Toaster Shares Your Data With the Refrigerator, the Bathroom Scale, and Tech Firms [1274]

by System Administrator - Tuesday, 30 June 2015, 22:53
 

When the Toaster Shares Your Data With the Refrigerator, the Bathroom Scale, and Tech Firms

By Vivek Wadhwa

Your toaster will soon talk to your toothbrush and your bathroom scale. They will all have a direct line to your car and to the health sensors in your smartphone. I have no idea what they will think of us or what they will gossip about, but our devices will soon be sharing information about us — with each other and with the companies that make or support them.

It’s called the Internet of Things, a fancy name for the sensors embedded in our commonly used appliances and electronic devices, which will be connected to each other via WiFi, Bluetooth, or mobile-phone technology.  They will have computers in them to analyze the data that they gather and will upload this via the Internet to central storage facilities managed by technology companies. Just as our TVs are getting smarter, with the ability to stream Netflix shows, make Skype calls, and respond to our gestures, our devices will have increasingly sophisticated computers embedded in them for more and more purposes.
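
As a rough sketch of that data flow, the snippet below shows a hypothetical appliance taking a sensor reading, doing a trivial bit of local analysis, and uploading the result as JSON to its manufacturer's collection service. The endpoint URL, device identifier and field names are invented placeholders; real devices typically use vendor SDKs or purpose-built protocols such as MQTT.

```python
import json
import urllib.request
from datetime import datetime, timezone

# Placeholder endpoint -- a real device would use its vendor's actual API.
UPLOAD_URL = "https://example.com/api/v1/telemetry"

def read_temperature_celsius() -> float:
    """Stand-in for reading an actual hardware sensor."""
    return 21.5

def build_report(reading: float) -> dict:
    """Minimal on-device 'analysis': attach a timestamp and a derived flag."""
    return {
        "device_id": "toaster-0001",                      # invented identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "temperature_c": reading,
        "above_comfort_range": reading > 24.0,            # trivial local analysis
    }

def upload(report: dict) -> int:
    """POST the JSON report to the manufacturer's collection service."""
    request = urllib.request.Request(
        UPLOAD_URL,
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    report = build_report(read_temperature_celsius())
    try:
        print("Upload status:", upload(report))
    except OSError as error:   # the placeholder endpoint will not accept this
        print("Upload failed (expected with the placeholder URL):", error)
```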

The Nest home thermostat already monitors its users’ daily movements and optimizes the temperature in their homes. It reduces energy bills and makes their houses more comfortable. Technology companies say they will use the Internet of Things in the same way: to improve our energy usage, health, security, and lifestyle and habits.

Well, that is what they claim. In reality, companies such as Apple and Google want to learn all they can about us so that they can market more products and services to us — and sell our data to others. Google Search, Gmail, and Apple Maps monitor our life for that purpose but are free and very helpful; and so will the new features on our devices be inexpensive and useful. They will tell us when we need to order more milk, eat our medicine, rethink having that extra slice of cheesecake, and take the dog for a walk. It’s a Faustian bargain, but one that most of us will readily make.

The ability to collect such data will have a profound effect on the economy. McKinsey Global Institute, in a new report, “The Internet of Things: Mapping the value beyond the hype,” says that the economic impact of the Internet of Things could be $3.9 to $11.1 trillion per year by 2025 — 11 percent of the global economy. It will reach far beyond our homes and create value through productivity improvements, time savings, and improved asset utilization; by monitoring machines on the factory floor, the progress of ships at sea, and traffic patterns in cities; and through the economic value of reductions in disease, accidents, and deaths. It will monitor the natural world, people, and animals.

As CLSA analyst Ed Maguire explains it, when manufacturers connect their products, they gain insights into the distribution chain, into usage patterns, and into how to create iterative products. Turning electronic products into software-controlled machines makes possible continuous improvements both to the machines and to the business models for using them. The constant improvement in features that we see in our smartphones will become common on our other devices. Maguire says that companies will be able to “offer an experience or utility as a service that previously had to be purchased as a physical good.”

Everything will be connected, including cars, street lighting, jet engines, medical scanners, and household appliances. Rather than throwing appliances away when a new model comes out, we will just download new features. That is how the Tesla electric cars already work — they get software updates every few weeks that provide new features. Tesla’s latest software upgrades are enabling the cars to begin to drive themselves.

But the existence of all these sensors will create many new challenges. Businesses have not yet figured out how to use the data they already have. According to McKinsey, for example, oil rigs have as many as 30,000 sensors, but their owners examine only one percent of the data they collect. The data they do use mostly concern anomaly detection and control — not optimization and prediction, which would offer the greatest value.

Companies are also reluctant to change their business models, which they would need to do in order to offer better experiences and new methods of pricing. Sensor data will tell product manufacturers how much their products are used and will allow them to charge by usage. They will be able to bundle product upgrades and new services into usage charges. But that will mean accepting payment retrospectively rather than in advance, and will require them to build business operations that focus on data and software, with new organizational structures. So they will be reluctant to change. But creative new start-ups will take advantage of technology advances and put incumbents out of business. Note how Uber is using the technologies in our smartphones to disrupt the taxi industry. That is a prelude of things to come.

My greatest concerns in all this are the loss of privacy and confidentiality. Cameras are already recording our every move in city streets, in office buildings, and in shopping malls. Our newly talkative devices will keep track of everything we do, and our cars will know everywhere we have been. Privacy will be dead, even within our homes.

Already, there are debates about whether Facebook and Instagram can and should be able to legally use our likes and the pictures we upload for marketing purposes. Google reads our e-mails and keeps track of what we watch on YouTube in order to deliver advertisements to us. Will we be happy for the manufacturers of our refrigerators to recommend new flavors of ice cream; for our washing machines to suggest a brand of clothes to buy, or for our weighing machines to recommend new diet plans? They will have the data necessary for doing this — just as the maker of your smart TV is learning what shows you watch. Will we be happy for criminals and governments to hack our houses and learn even more about who we are and what we think?

I am not looking forward to having my bathroom scale tell my refrigerator not to order any more cheesecake, but know that it is an amazing — and scary — future that we are rapidly heading into.

 

Vivek Wadhwa is a fellow at Rock Center for Corporate Governance at Stanford University, director of research at Center for Entrepreneurship and Research Commercialization at Duke, and distinguished fellow at Singularity University.

His past appointments include Harvard Law School, University of California Berkeley, and Emory University. 

Image Credit: Ruta Production/Shutterstock.com


Where Were You 3 Minutes Ago? Your Apps Know [1175]

by System Administrator - Sunday, 29 March 2015, 18:08
 

Where Were You 3 Minutes Ago? Your Apps Know

By Elizabeth Dwoskin

Researchers used a “privacy nudge” to inform study participants when apps requested location data. | Carnegie Mellon

Dozens of smartphone apps collect so much location data that their publishers can plot users’ comings and goings in detail, a forthcoming peer-reviewed study found.

Computer scientists at Carnegie Mellon University concluded that a dozen or so popular Android apps collected device location – GPS coordinates accurate to within 50 meters – an average 6,200 times, or roughly every three minutes, per participant over a two-week study period.

The research comes at a time of increasing concern about electronic privacy. A 2014 Pew survey found that more than 90 percent of Americans feel they’ve lost control over personal data. While savvy users understand that using mobile devices entails some privacy tradeoffs – for example, a navigation app will reveal their location to the app’s publisher – most don’t realize the extent to which such information is collected and distributed, the researchers said.

The researchers recruited 23 users of Android version 4.3 from Craigslist and the Carnegie Mellon student body. Participants were allowed to use their own choice of apps after installing software that noted app requests for a variety of personal information; not only location but also contacts, call logs, calendar entries, and camera output. They weren’t told the purpose of the study and were screened to weed out people who had a technical background or strong views about privacy.
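
A minimal sketch of the tallying this kind of instrumentation enables, assuming a hypothetical log of (timestamp, app, data type) records like those the monitoring software collected: group location requests by app and compute how many there were and how far apart they came on average. The log entries and app names below are invented for illustration and are not the study's actual data.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical log records produced by the monitoring software:
# (ISO timestamp, app name, data type requested)
LOG = [
    ("2014-10-01T08:00:00", "weather_app", "location"),
    ("2014-10-01T08:10:00", "weather_app", "location"),
    ("2014-10-01T08:12:00", "deals_app",   "contacts"),
    ("2014-10-01T08:20:00", "weather_app", "location"),
]

def location_request_frequency(log):
    """Per app: count of location requests and the average gap in minutes."""
    per_app = defaultdict(list)
    for stamp, app, data_type in log:
        if data_type == "location":
            per_app[app].append(datetime.fromisoformat(stamp))

    summary = {}
    for app, times in per_app.items():
        times.sort()
        gaps = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]
        average_gap = sum(gaps) / len(gaps) if gaps else None
        summary[app] = {"requests": len(times), "avg_gap_minutes": average_gap}
    return summary

if __name__ == "__main__":
    for app, stats in location_request_frequency(LOG).items():
        print(app, stats)
```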

The researchers found that even apps that provided useful location-based services often requested the device’s location far more frequently than would be necessary to provide that service. The Weather Channel, for example, which provides local weather reports, requested device location an average 2,000 times, or every 10 minutes, during the study period. Groupon, which necessarily gathers location data to offer local deals, requested one participant’s coordinates 1,062 times in two weeks.

“Does Groupon really need to know where you are every 20 minutes?” asked Norman M. Sadeh, a Carnegie Mellon professor who co-authored the study. “The person would have to be accessing Groupon in their sleep.”

Groupon and the Weather Channel did not respond to requests for comment.

App publishers have ample incentive to gather as much location data as they can. Marketers pay 10% to 20% more for online ads that include location information, said Greg Stuart, chief executive of the Mobile Marketing Association. In previous research, Sadeh and his colleagues found that when an app requests location, 73% of the time it shares the information with an advertising network.

 

Privacy nudges no longer can be implemented on Android. | Bloomberg News

Location data can make ads more relevant to consumers, by making it possible to draw inferences about what audience members are interested in, Stuart said. The data can be used to show an ad for a store to a potential customer who is nearby, a technique that boosts store traffic 40%, according to Mobile Marketing Association research. Or it can be used to present ads for store items to shoppers who are already inside. Users often aren’t aware that their location played a role in being shown a particular ad, Stuart added.

Among the software that handled the most location data were programs pre-installed on the device that couldn’t be easily deleted. Google Play Services, which distributes information to a variety of apps, computed location an average 2,200 times during the study period.

Google declined to comment.

In addition to tallying app requests for personal data, the Carnegie Mellon researchers explored a conundrum: Despite these widespread worries about information leaks, few users take actions that would plug them, such as downloading privacy software or adjusting their device’s settings.

The researchers sent to study participants a daily message – a “privacy nudge,” as Sadeh called it – telling them how many times apps collected their personal data. After receiving the nudges daily, 95% of participants reported reassessing their app permissions and 58% chose to restrict apps from collecting data.

Privacy nudges no longer can be implemented on Android. Operating system updates since the study was concluded removed the software that gave the researchers access to logs of app requests for personal information.

Link: http://blogs.wsj.com

 

