References



Full references for vocabulary, events, chronicles, evidence, and other content used in KW Foundation projects related to biotechnology and neuroscience.


+1 [128]

by System Administrator - Thursday, January 2, 2014, 18:15
 

+1 refers to the next letter of the alphabet, as in the examples below and the short sketch that follows them.

        • H + 1 = I
        • A + 1 = B
        • L + 1 = M
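
A minimal Python rendering of the rule (a sketch; the helper name is invented, and behavior beyond Z is left undefined, just as in the entry):

    def plus_one(letter):
        """Return the next letter of the alphabet, e.g. plus_one('H') == 'I'."""
        return chr(ord(letter) + 1)

    assert plus_one("H") == "I"
    assert plus_one("A") == "B"
    assert plus_one("L") == "M"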

100G [557]

by System Administrator - Sunday, October 25, 2015, 21:55
 

Transitioning to 100G and Beyond: The Big Picture

As the industry moves to meet the enormous demand for data driven by video, mobile, and cloud, core networks need to transition from 10G to 100G and beyond. Consider some of these findings from Cisco’s 2013 Visual Networking Index (VNI) report:

      • Global network users will generate 3 trillion Internet video minutes per month, which translates to 6 million years of video per month, or 1.2 million video minutes every second, or more than two years’ worth of video every second.
      • Wi-Fi and mobile-connected devices will generate 68 percent of Internet traffic by 2017.
      • More traffic will traverse the Internet in 2017 than the total amount of traffic that traveled the Internet cumulatively from 1984 to 2012.
      • The “Internet of Things” has arrived, and applications such as digital health monitors, smart meters, asset and package tracking, chips for pets and livestock, and video surveillance are driving more and more traffic.
      • Globally, machine-to-machine (M2M) connections will grow three-fold, from 2 billion in 2012 to 6 billion by 2017. Annual global M2M IP traffic will grow 20-fold over the same period, from 197 petabytes in 2012 to 3.9 exabytes by 2017 (the quick check below confirms the arithmetic).
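
The headline numbers above are internally consistent, as a quick back-of-the-envelope check in Python shows (figures taken from the bullets; a month is approximated as 30 days):

    # Sanity-check the Cisco VNI figures quoted above.
    video_minutes_per_month = 3e12        # 3 trillion video minutes per month
    minutes_per_year = 365 * 24 * 60      # 525,600 minutes in a year
    seconds_per_month = 30 * 24 * 3600    # ~2.59 million seconds in a month

    print(video_minutes_per_month / minutes_per_year)   # ~5.7e6 -> "6 million years of video per month"
    print(video_minutes_per_month / seconds_per_month)  # ~1.16e6 -> "1.2 million video minutes every second"

    # M2M IP traffic: 197 petabytes (2012) -> 3.9 exabytes (2017)
    print(3.9e18 / 197e15)                              # ~19.8 -> "20-fold" growth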

Source: CISCO


11-S [42]

by System Administrator - Wednesday, January 15, 2014, 18:18
 

The attacks of September 11, 2001 (commonly called "11-S" in Spain and Latin America, and "9/11" in the English-speaking world) were a series of suicide terrorist attacks carried out that day in the United States by members of the jihadist network Al Qaeda, who hijacked commercial airliners and crashed them into several targets. The attacks killed nearly 3,000 people and injured some 6,000 more, destroyed the World Trade Center complex in New York, and severely damaged the Pentagon in Virginia. They were the prelude to the war in Afghanistan and to the adoption by the US government and its allies of the policy known as the "War on Terror".

Source: http://es.wikipedia.org/wiki/11S


2001: A Space Odyssey [124]

by System Administrator - Wednesday, January 15, 2014, 18:13
 


2001: A Space Odyssey (Spanish title: 2001: Odisea del Espacio) is a cult science fiction film directed by Stanley Kubrick. Released in 1968, it became a milestone for its style of visual storytelling, its revolutionary special effects, its scientific realism, and its avant-garde vision.

Source: http://es.wikipedia.org/wiki/2001:_Odisea_del_Espacio


2013 - Web Access Statistics for www.kwfoundation.org [411]

by System Administrator - Saturday, April 9, 2016, 16:42
 

[AWStats chart: 2013 web access totals for www.kwfoundation.org]


2014 State of the Network [780]

by System Administrator - Thursday, August 21, 2014, 17:57
 

2014 State of the Network

Whitepaper


2015’s Most Electrifying Emerging Tech? World Economic Forum Releases Annual List [1206]

by System Administrator - Thursday, April 16, 2015, 14:46
 

"Time is everything"

2015’s Most Electrifying Emerging Tech? World Economic Forum Releases Annual List

By Jason Dorrier

Writing lists forecasting technology is a bit like writing science fiction. Prerequisites include intimate knowledge of the bleeding edge of technology, the ability to separate signal from noise, and more than a little audacity. Looking back, they can appear adorably quaint or shockingly prescient.

In either case, they’re usually as much a reflection of and commentary on the present as they are a hint at the future. What problems seemed most pressing and which solutions were most exciting? The World Economic Forum's Top 10 Emerging Technologies list isn't old enough to make serious judgements about its track record. But looking back, each year’s contribution reminded me of the year in which it was written.

Enhanced education technology, for example, was included in 2012—when the launch of Coursera, edX, and Udacity led the New York Times to dub it “The Year of the MOOC [Massive Open Online Course].” 2013’s #1 spot, OnLine Electric Vehicles, was inspired by those South Korean buses charged by the road.

Two and three years on? Though much expanded, online education is still struggling to prove its worth, and road-charged vehicles remain a rarity. The former will get there, in my view, while the latter may lag in cities, where big infrastructure projects—especially on main arteries like roads—are so disruptive.

But right or wrong isn’t the point (yet). These are emerging technologies. The World Economic Forum expects most of these tools, often still in the research stage, to take anywhere from 10 to 30 years to have a broad impact. Ultimately, some will succeed, some will partially succeed, and some will fail or be replaced.

Debating what should or shouldn't have been included—that’s the fun part. To that end, we've summarized each entry on this year's list below. Go here for the list in full, and leave your thoughts in the comments.

1. "Fuel cell vehicles: Zero-emission cars that run on hydrogen"


Long-promised, fuel cell vehicles are finally here. Various car companies are aiming to bring new models to market. Though pricey at first ($70,000), if they prove popular, prices could fall in coming years.

Fuel cells combine the most attractive characteristics of gas-powered cars and electric cars.

Electric cars are criticized for low range and lengthy recharging. Fuel cell cars, meanwhile, go 400 miles on a tank of compressed hydrogen and take minutes to refuel. They are also clean burning—replacing toxic gases, like carbon monoxide and soot, with naught but water vapor.

Despite better fuel cell technology, there are a number of obstacles, largely shared by electric cars, that may prevent widespread adoption in the near term. These include the clean, large scale production of hydrogen gas, the transportation of gas over long distances, and the construction of refueling infrastructure.

2. "Next-generation robotics: Rolling away from the production line"

Like fuel cell vehicles, everyday robots have long commanded prime real estate in the imagination. But beyond factories, they have yet to break into the mainstream. Most are still big, dangerous, and dumb. That's changing thanks to better sensors, improved robotic bodies (often inspired by nature), increasing computing and networking power, and easier programming (it no longer takes a PhD).


These new, more flexible robots are capable of tasks beyond assembly lines. New applications range from weeding and harvesting on farms to helping patients out of bed in Japanese hospitals.

Prime fears include the concern robots will replace human workers or run amok. These risks may appear increasingly realistic. However, the list's writers note that prior rounds of automation tended to produce higher productivity and growth. Meanwhile, more familiarity and experience with robots may reduce fears, and a strong human-machine alliance is the more likely outcome.

3. "Recyclable thermoset plastics: A new kind of plastic to cut landfill waste"


There are two commonly used kinds of plastic: thermoplastics (which can be reshaped and thus recycled) and thermoset plastics (which can only be shaped once and are not recyclable). The latter category is prized for durability and is widely used.

But the tradeoff for toughness is that most of it ends up as landfill.

Just last year, however, researchers discovered recyclable thermoset plastics. The new category, dubbed poly(hexahydrotriazine)s, or PHTs, can be dissolved in strong acid and reused in new products. Achieving recyclability without sacrificing durability means they may replace previously unrecyclable components.

How quickly might this happen? The list's writers predict recyclable thermoset plastics will be ubiquitous by 2025. The move could significantly reduce the amount of plastic waste in landfill across the globe.

4. "Precise genetic-engineering techniques: A breakthrough offers better crops with less controversy"

New genetic engineering techniques are more precise and forego controversial techniques that rely on the bacterial transfer of DNA. The breakthrough CRISPR-Cas9 gene editing method, for example, uses RNA to disable or modify genes in much the same way such changes happen during natural genetic mutation. The technique can also accurately insert new sequences or genes into a target genome.


Another advance, RNA interference (RNAi), protects crops from viral infection, fungal pathogens, and pests and may reduce dependence on chemical pesticides. Major staples including wheat, rice, potatoes, and bananas may benefit from the tech.

The report predicts declining controversy as genetic engineering helps boost incomes of small farmers, feeds more people, and thanks to more precise techniques, avoids transgenic plants and animals (those with foreign genetic material). Meanwhile, the tech may make agriculture more sustainable by reducing needed resources like water, land, and fertilizer.

5. "Additive manufacturing: The future of making things, from printable organs to intelligent clothes"


3D printing has been used in industrial prototyping for years, but now it's beginning to branch out.

3D printed objects offering greater customization, like Invisalign's tailor-made orthodontic braces, for example, are coming to market. 3D bioprinting machines are placing human cells layer-by-layer to make living tissues (e.g., skin, bone, heart, and vascular). Early applications are in the testing of new drugs, but eventually they hope to print whole, transplantable organs.

Next steps include 3D printed integrated electronics, like circuit boards—though nanoscale parts, like those in processors, still face challenges—and 4D printed objects that transform themselves in accordance with environmental conditions, like heat and humidity. These might be useful in clothes or implants.

Potentially disruptive to the traditional manufacturing market, additive manufacturing is still confined to limited applications in cars, aerospace, and medicine. Still, fast growth is expected in the next decade.

6. "Emergent artificial intelligence: What happens when a computer can learn on the job?"


Artificial intelligence is on the rise. Microsoft, Google, Facebook, and others are developing the technology to allow machines to autonomously learn by sifting massive amounts of data.

Hand-in-hand with advanced robotics, AI will boost productivity, freeing us from certain jobs and, in many cases, even doing them better. It's thought that driverless cars, for example, will reduce accidents (which are often due to human error), and AI systems like Watson may help doctors diagnose disease.

The fear we may lose control of superintelligent machines remains a hot topic, as does the concern over increasing technological unemployment and inequality. The report notes the former may yet be decades away, and despite a possibly bumpy path, the latter may make human attributes, like creativity and emotional IQ, more highly valued. The future will challenge our conception of what it means to be human and force us to deeply consider the risks and benefits of giving machines human-like intelligence.

7. "Distributed manufacturing: The factory of the future is online—and on your doorstep"


Instead of gathering and assembling raw materials in a centralized factory and assembly line, distributed manufacturing spreads materials across many hubs and makes products close to the customer.

How would it work? "Replace as much of the material supply chain as possible with digital information."

If you've visited 3D printing marketplaces like Thingiverse or Shapeways, you've seen the future of manufacturing. Plans for a product are digitized in 3D modeling software and uploaded to the web. Tens of thousands of people can grab the files and make the product anywhere there's a 3D printer. This might be at a local business, university, or eventually, in our homes.

Distributed manufacturing may more efficiently use resources and lower barriers to entry, allowing for increased diversity (as opposed to today's heavily standardized, assembly line products). Additionally, instead of requiring trucks, planes, and ships to move things—we'll simply zap them over the internet. This may allow for goods to rapidly travel the whole globe, even to places not currently well served.

Risks include intellectual property violations, as we saw when music was digitized, and less control over dangerous items, like guns or other weapons. And not all items will be amenable to distributed manufacturing. Traditional methods will remain, but perhaps be much reduced in scope.

8. "'Sense and avoid' drones: Flying robots to check power lines or deliver emergency aid"


People are finding all manner of interesting non-military uses for drones: agriculture, news gathering, delivery, filming and photography. The drawback to date, however, is they still require a human pilot. The next step? Drones that fly themselves. To do this safely, they'll need to sense and avoid obstacles.

Early prototypes are already here. Just last year, Intel and Ascending Technologies showed off drones able to navigate an obstacle course and avoid people. Once autonomous, drones can undertake dangerous tasks (without needing a human to be near). This might include checking power lines or delivering supplies after a disaster. Or drones may monitor crops and allow more efficient use of resources like water and fertilizer.

Remaining challenges include making more robust systems capable of flight in adverse conditions. Once perfected, however, drones, like robots, will take the power of computing into the three-dimensional physical realm and "vastly expand our presence, productivity and human experience."

9. "Neuromorophic technology: Computer chips that mimic the human brain"


In some ways, the human brain remains the envy of today's most sophisticated supercomputers. It is powerful, massively parallel, and insanely energy efficient. Computers, no matter how fast they are, are still linear power hogs that substitute brute force for elegance. But what if we could combine the two?

Neuromorphic chips, like IBM's TrueNorth, are inspired by the brain and hope to do just that. Instead of shuttling data back and forth between stored memory and central processors, neuromorphic chips combine storage and processing into the same interconnected neuron-like components. This fundamentally different chip architecture may vastly speed processing and improve machine learning.

TrueNorth has a million "neurons" that, when working on some tasks, are hundreds of times more power efficient than conventional chips. Neuromorphic chips "should allow more intelligent small-scale machines to drive the next stage in miniaturization and artificial intelligence...[where] computers will be able to anticipate and learn, rather than merely respond in pre-programmed ways."
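
To make that architectural contrast concrete, here is a toy leaky integrate-and-fire neuron in Python. It is only a sketch of the event-driven, neuron-like units described above, not TrueNorth's actual design; the function name and constants are invented for illustration:

    # Toy leaky integrate-and-fire neuron: state (membrane potential) and
    # computation live in the same unit, and the output is a sparse spike train.
    def lif_neuron(inputs, leak=0.9, threshold=1.0):
        potential, spikes = 0.0, []
        for current in inputs:
            potential = potential * leak + current  # integrate input, leak charge
            if potential >= threshold:              # fire once the threshold is crossed...
                spikes.append(1)
                potential = 0.0                     # ...then reset
            else:
                spikes.append(0)
        return spikes

    print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.9, 0.6]))  # -> [0, 0, 1, 0, 0, 1]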

10. "Digital genome: Healthcare for an age when your genetic code is on a USB stick"


The cost to sequence the human genome has fallen exponentially since it was first sequenced. In the beginning, it cost tens or even hundreds of millions of dollars to sequence a single genome. Now, the cost is somewhere around $1,000, and a single machine can sequence tens of thousands of genomes a year.

As more people get their genome sequenced, the information can be stored on a laptop, USB stick, or shared online. Quick and affordable genetic testing promises to make healthcare—from the genetic components of heart disease to cancer—more individually tailored, targeted, and effective.
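
The "genome on a USB stick" framing survives a rough arithmetic check: at two bits per base, a raw human genome is well under a gigabyte. (A back-of-the-envelope sketch; real sequencing formats such as FASTQ or BAM are far larger because they also store quality scores and redundant reads.)

    bases = 3.2e9        # approximate number of base pairs in a human genome
    bits_per_base = 2    # four letters (A, C, G, T) need two bits each
    gigabytes = bases * bits_per_base / 8 / 1e9
    print(round(gigabytes, 1))  # -> 0.8 GB, small enough for any USB stick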

Prime concerns and challenges include security and privacy of personal information. Also, communication of genetic risks and educating people on what those risks mean will be critical. In aggregate, however, the list's authors say it is more likely the benefits of personalized medicine will outweigh the risks.

The Top 10 Emerging Technologies of 2015 list was compiled by the World Economic Forum's Meta-Council on Emerging Technologies. To learn more, be sure to check out the full list here.

Image Credit: Shutterstock.com


3 Ways to Use Social Media to Recruit Better Tech Talent [1007]

by System Administrator - Friday, December 5, 2014, 21:53
 

3 Ways to Use Social Media to Recruit Better Tech Talent

Is your company using social media to its full potential when it comes to finding new employees? Your competition probably is.

By Sharon Florentine

A whopping 93 percent of the 1,855 recruiting pros surveyed in Jobvite's 2014 Social Recruiting Survey use or plan to use social media in their recruiting efforts.

The reason why is simple and powerful. According to respondents, leveraging social media improves candidate quality by 44 percent over those using only "traditional" recruiting techniques like phone screenings and filtering resumes based solely on skills and experience.

Social media surfaces not only information about a candidate's experience and skills, but also a glimpse into their lifestyle, values and cultural fit, which is crucial for companies looking not just to recruit and hire, but also to engage employees and improve retention rates.

The Jobvite survey reveals that 80 percent of recruiters are using social media to evaluate a candidate's potential culture match. The emphasis on cultural fit is a major reason recruiters are doubling down on social media as a tool.

Use Social Media to Evaluate Cultural Fit

Social media's often used to highlight "what not to do" from a candidate's perspective (take down those photos of your bachelor weekend in Vegas, please), but what's often overlooked is its usefulness to recruiters and hiring managers as both a sourcing and a screening tool for new talent, especially when it comes to finding talent with that perfect cultural fit, says Yarden Tadmor, CEO and founder of anonymous job search and recruiting app Switch.

"Traditionally, social media's importance to recruiting has been limited to the way it is used to weed out candidates who might be a bad fit -- in other words, those unprotected tweets can do serious damage when recruiters are evaluating potential employees. But social media, whether staple networks like LinkedIn, Facebook and Twitter or burgeoning apps like our Switch … has become a convenient and comprehensive way for recruiters to find, 'like' and connect with candidates," says Tadmor.

Filtering candidates through the lens of their Facebook profiles, Twitter feeds and other platforms helps determine whether prospects would fit the culture of a company and, perhaps more importantly, if they would be willing to consider a move, Tadmor says.

"The impact social media has had on our recruiting is immeasurable. When we're on the fence about a candidate's resume, we use LinkedIn to find out how involved they are in the LinkedIn community and throughout the industry. This gives valuable insight that was previously unattainable, and are key ingredients of our prime candidates," says Cristin Sturchio, global head of Talent at COGNOLINK. Sturchio adds that when using LinkedIn as a screening tool, she and her team look for candidates who've gained endorsements, who belong to professional groups and follow relevant companies and people.

"This tells us that they are engaged and active in their profession, and are likely to be engaged and active as one of our employees. You can't find that kind of information on a resume, and if you can, it often gets lost in more pressing details," says Sturchio.

Use Social Media to Evangelize Your Business, Mission and Values

From a recruiting perspective, having a well-defined social media brand can help attract the best passive candidates, says Tadmor. In fact, according to the Jobvite research, companies know they have to sell their workplace cultures not just to attract the right candidates but to influence their decisions about where to work, and attract like-minded talent. In addition, continued use of social media will help companies attract the next-generation workforce, as millennials continue to use social and mobile technology in their career efforts, according to David Hirsch, managing partner of Metamorphic Ventures.

Hirsch says that social media is the ideal medium for employers to broadcast their social mission in order to attract high quality candidates. "With mobile being the dominant way that millennials communicate and operate, we fully expect the way that companies will find new talent will continue trending toward more use of social media, as connections are made based on geo-location proximity, interests, passions, experiences, extended network, etc.," says Hirsch.

As more millennials enter the workforce, Hirsch says apps like Switch will become more important for both employers and employees, allowing them to quickly sift through the "noise" and find their perfect "match" in a way that's more in line with how millennials will expect to experience their job searches and how recruiters should target prospects.

Use Social Media to Advertise Open Positions

Of course, recruiters and hiring managers are still using social media in a more traditional way, to post open positions or as a platform to reach broader segments of their industry in hopes of luring potential employees, says Seven Step RPO (Recruitment Process Outsourcing) president Paul Harty.

"A company might be using Facebook or Twitter to broadcast targeted industry related news for mechanical engineers. It also may have recruiters posting about mechanical engineers for those interested in related jobs, and then advertising those jobs through that commentary. That targeted outreach and profiling happens more than you think. Companies are finding people by Tweeting or posting a Facebook page to find the skills they are looking to acquire, regardless of position. However and wherever recruiters can find talent, they will leverage those channels to be where the talent community exists," says Harty.

Cognolink's Sturchio highlights that her organization also uses social media for job postings. "We also use Twitter to blast out our recruiting activities on campus, which allows us to find new candidates, promote our brand, and draw interest and awareness to find talent," says Sturchio.

Link: http://www.cio.com


37 Life Lessons in 37 Years [438]

by System Administrator - Sunday, July 5, 2015, 23:27
 

37 Life Lessons in 37 Years


Today is my 37th birthday. And, I must say, it's been a pretty interesting ride so far. As I look back over the years and many phases of my life, I realize how each stage, success, stumble, triumph and heartache has had a significant impact on where I stand right now. And despite the rough patches, I love it all.

From a shy yet studious little girl, to an artsy and somewhat rebellious teenager, to a happy-go-lucky big-dreaming 20-something with a bit of a wild side, my metamorphoses were plentiful in my early years. Now into my 30s, my heart has grown a few sizes larger and overflows with motherly love as I've discovered what matters most in life. And my entrepreneurial experiences have been a crash course in lessons of life, business and self that, at times, brought me to the brink of what I thought I could handle, only to be rewarded nicely for sticking it out and seeing it through.

As I continue to step more fully into myself each day and bring to light my mission of helping others build their own dreams with joy and ease, I've racked my brain to think of my top 37 life lessons so far. On this day of celebrating another trip around the sun, I share these with you and hope you find inspiration as I have. Enjoy!

Top 37 Life Lessons So Far...

      • 01. Happiness comes from within. We spend way too much of our lives looking for outside validation and approval that eludes us. Turns out, it's been an inside job all along. Go inward.
      • 02. Be grateful for everything. The good, the bad, the ugly. Our entire life is a precious gift. The pleasure, the pain -- it's all part of our path.
      • 03. Subtle shifts in perception will transform your entire life. When feeling fearful, angry, hurt, simply choose to see a situation differently.
      • 04. In being true to yourself, you can't possibly make everybody else happy. Still, it's better to risk being disliked for living your truth than to be loved for what you are pretending to be.
      • 05. The world is our mirror. What we love in others is a reflection of what we love about ourselves. What upsets us about others is a strong indication of what we need to look at more closely within ourselves.
      • 06. Everybody comes into our life for a reason. It is up to us to be open to the lesson they are meant to teach. The more someone rubs us the wrong way, the greater the lesson. Take notes.
      • 07. Trust. In troubled times, just know that the Universe has your back and everything is going to be alright. If you're not there yet, trust in hindsight you will understand. Your higher good is being supported, always.
      • 08. Never take things personally. What others do is a reflection of what's going on in their own life and probably has little or nothing to do with you.
      • 09. A walk in nature cures a lot. Taking in some fresh air and the beautiful landscape of this earth is amazingly head-clearing, grounding, and mood-lifting. Bonus: You can learn a whole lot about life in your observation of the awesomeness which is nature.
      • 10. Hurt people hurt people. Love them anyway. Although, it's totally okay to love them from a distance.
      • 11. You have to feel it to heal it. Bring your fears and weaknesses front and center and shine a blazing spotlight on them because the only way out is through. The hurt of facing the truth is SO worth it in the long run, I swear.
      • 12. Perfectionism is an illusion. A painful one at that. Ease up. Strive for excellence, sure, but allow yourself room to make mistakes and permission to be happy regardless of outcome.
      • 13. Take the blinders off. Don't become so laser-focused on your own goals and desires that you miss out on the beauty in life and the people around you. The world is stunningly beautiful when you walk around with eyes wide open.
      • 14. Celebrate the journey. It's not all about the destination. Savor all of your successes, even the small ones.
      • 15. Forgiveness is not so much about the other person. It's about you and for you so that you can gain the peace and freedom you deserve. Forgive quickly and often.
      • 16. We are all incredibly intuitive. When we learn to become still and listen, we can tap into some pretty amazing primal wisdom. Listen to the quiet whisper of your heart. It knows the way.
      • 17. Let your soul shine! Be authentic. There is nobody else on this earth just like you. Step into your truth wholeheartedly and live and breathe your purpose.
      • 18. We are powerful creators. Seriously, bad-asses. With intention, focus, and persistence -- anything is possible. Know this.
      • 19. I am full of light. You are full of light. We are all full of light. Some cast shadows on their own brightness. Be a beacon of light to others and show them the way.
      • 20. Don't take life too seriously! Nobody gets out alive anyway. Smile. Be goofy. Take chances. Have fun.
      • 21. Surround yourself with people who love and support you. And, love and support them right back! Life is too short for anything less.
      • 22. Learn the delicate dance. Have big beautiful dreams and vision. Chase them with much passion. But, also hold on to them all ever so lightly. Be flexible and willing to flow as life comes at you.
      • 23. Giving is the secret to receiving. Share your wisdom, your love, your talents. Share freely and be amazed at how much beauty in life flows back to you.
      • 24. On that note, be careful not to give too much. If you empty out your own cup completely, you will have nothing left to give. Balance is key.
      • 25. Say "YES!" to everything that lights you up. Say "no", unapologetically, to anything that doesn't excite you or you don't have the bandwidth for. Time is one of our most precious resources that we can never get back. Manage it wisely.
      • 26. Sometimes we outgrow friendships. It doesn't mean they're bad or you're bad. It just means you're on different paths. Hold them in your heart, but when they start to hurt or hold you back, it's time to give space or let go.
      • 27. Fear is often a very good indicator of what we really want and need in our life. Let it be your compass and enjoy the exciting adventure it leads you on.
      • 28. Overcoming your fears is one of the most empowering things you can ever do for yourself. You'll prove to yourself you can truly accomplish anything! Major self-confidence booster.
      • 29. Our bodies are our vehicle to our dreams. Treat them with love and fuel them with the best health to feel vibrant and energized. But, never obsess over image. Looks are subjective and will fade in time, anyway. Feeling good, healthy, and comfortable in our own skin is what matters most.
      • 30. Let those that you love know it often and enthusiastically. You can never say it or show it too much. Your time, total presence, love, and genuine concern for their wellness is the greatest gift of all.
      • 31. The present moment is where it's at. It's the only one promised to any of us. Learn from your past & enjoy the beautiful memories, but don't cling or let them haunt you. And, dream big and be excited about the future, but don't become obsessed. Love this moment, always.
      • 32. Life is full of highs and lows. We need them both to grow to our fullest potential. Just hang on tight and enjoy the ride.
      • 33. We are all connected as one human family. Nobody is better or worse than anyone else -- just at different stages of our journeys and dealing with life the best way we know how. Recognize that the other person is you.
      • 34. Practice daily gratitude for all the blessings in your life, large and small. Not only is this a high vibe practice that feels amazing, in practicing regularly you are creating space for even more abundance -- of joy, love, health, and prosperity.
      • 35. We are not the center of the universe, although our ego can make us feel that way at times. Step outside of that way of thinking and see the world and other people's perspective in a whole new beautiful light.
      • 36. The world needs more love, light, and laughter. Go be love.
      • 37. You are the guru. For much of our lives, we have been told what do, how to think, what looks good, what "success" is. You don't have to buy into any of it. Feel free to peel back the layers. Think for yourself. Break the mold. When you stop doing what everybody else wants you to do and start following your own intuition, you will be ridiculously happy.

In looking back at your own life, realize that every high and low is all part of your amazing story. Own it! Take cues and guidance from the universe and you will continue to go on an incredible ride as you fully step into your truth and power.

Age is just a number, but the higher it gets, the more wisdom and life experience we've amassed. You are never going to be younger than you are in this present moment again. So embrace it, love it, and enjoy it fully!

Here's to many more beautiful years of seeking-truth, questioning all that does not sit right, and making your greatest impact in the world! I look forward to adding more lessons as life continues to give me the opportunity to learn, grow, transform, share and expand. Hope you will too.

With much love,
Dawn

For more inspiration, visit the Dawnsense site and sign up for weekly love letters. Also, join the supportive Dawnsense community on Facebook, Twitter and Instagram.

Source: http://www.huffingtonpost.com/dawn-gluskin/life-lessons_b_4629195.html?ncid=edlinkusaolp00000009


3D Printed Electronic Devices Are Coming [1057]

by System Administrator - Thursday, January 15, 2015, 19:25
 

 

3D Printed Electronic Devices Are Coming

BY JASON DORRIER

The handheld computers we carry in our pockets represent almost unimaginable complexity. Batteries, sensors, chips, circuits, and touch displays in a space age shell, all painstakingly assembled by thousands of workers and shipped globally.

Smartphones have disrupted numerous industries. Up next? Disrupting how such modern marvels are made. What if you could dramatically reduce the number of parts and eliminate assembly—what if you could 3D print a smartphone?

Voxel8’s new 3D printer, which can print functioning electronic devices all in one piece, heralds just such a radical possibility. It should be noted that Voxel8’s printer can’t print every component yet. But that doesn’t lessen its significance.

How does it work? In one sense, just like your standard 3D printer, the device is guided by digital modeling software to build shapes out of consecutive layers of plastic. When instructed, the printer pauses and a second print nozzle lays down highly conductive circuits while the operator pops in key components like processor, battery, and motors.
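
One rough way to picture that interleaved workflow is as an ordered job plan that alternates printing steps with pauses for manual insertion. The structure and step names below are purely hypothetical, invented for illustration, and are not Voxel8's actual job format:

    # Hypothetical sketch of a mixed plastic/conductive-ink print job.
    print_job = [
        ("print_plastic", {"layers": (0, 40)}),          # build the lower shell
        ("print_trace",   {"net": "battery_to_board"}),  # lay the conductive-ink circuit
        ("pause",         {"insert": ["processor", "battery", "motors"]}),
        ("print_plastic", {"layers": (40, 80)}),         # enclose the components
    ]

    for step, params in print_job:
        print(step, params)  # a real controller would dispatch each step to hardware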

Voxel8’s demo video shows the device printing a working drone quadcopter start to finish.

Now, clearly, the Voxel8 printer can’t print a drone from the ground up, never mind a smartphone. But circuitry is the connective tissue of a device, and the printer can make antennas, electromagnetic coils, or stacked integrated circuits.

"No longer are you limited to planar PCBs [printed circuit boards]," the Voxel8 team writes on their website. "Now you can design the electronics to fit your part, rather than designing the part around the electronics."

And for more complete 3D printed electronic devices? Think of this as a proof of concept.

 3D printed electronics: Some assembly still required.

The Harvard research group behind the device has spent years neck deep in materials science. The conductive “ink” that forms their 3D printed circuits reportedly leaves competitors in the dust in terms of sheer conductivity. And in the future, Voxel8 says they plan to develop new “inks” for resistors, sensors, and batteries.

As the technology progresses, and the number of printable electronic components grows, it'll advance 3D printing beyond today's prototypes and plastic trinkets to useful tools and devices. But not yet.

The last few decades of industrial prototyping on big, expensive 3D printers were a bit like the early days of modern computing—the decades of military and industrial supercomputers that required a room and a PhD to operate, and only did a few things really well. But the last few years have looked more like the 70s and early 80s when hobbyists and hackers gave birth to personal computers.

Like early PCs, consumer 3D printers are clunky, slow, and in search of a reason for regular folks to buy them. We can expect them to get faster. And their interfaces will get smoother. But that killer application remains elusive.

 

CT scan of a 3D printed quadcopter showing circuit board wired to motors.

Expanding the materials 3D printers can use expands what their products can do. That’s why Voxel8 is exciting. Industrial 3D printers can already print with a variety of materials including different plastics and metals. MakerBot recently announced composite wood, stone, and metal filaments for its consumer line of printers at CES.

They showed off a hammer that looked and functioned like a traditional hammer (albeit, still not quite as sturdy). Printing tools and replacement parts at home might be an early reason to buy a 3D printer.

But add a variety of conductive inks and printable electronics to the mix, and we begin to see far more versatile 3D printed objects requiring little to no assembly. Maybe we’ll go to the local print shop for our devices.

Maybe we’ll print some of them on our desk.

In either case, it'll be one disruptive technology disrupting another. Why, for example, would Foxconn need to replace humans assembling iPhones with industrial robots, if the manufacturing and assembly process was radically simplified by advanced 3D printers? Or, eventually, perhaps we won't need a big, centralized manufacturing hub like Foxconn at all.

When I learned about Voxel8, I was reminded of William Gibson’s latest novel, The Peripheral, which takes place in both a near and far future. In the near future—virtually everything is 3D printed. Even the protagonist Flynne Fisher’s phone.

Perhaps Voxel8's electronics printing machine marks the beginning of all that. We're here to find out.

Image Credit: Voxel8

 Link: http://singularityhub.com


3D Printed Shoes [1748]

by System Administrator - Friday, April 7, 2017, 20:44
 

Adidas to Mass-Produce 3D Printed Shoes in Vats of Warm Liquid Goo

By Jason Dorrier

Adidas just announced they’re partnering with 3D printing company Carbon to mass-produce a line of shoes with 3D printed mid-soles (the spongy bit that cushions your foot). Called Futurecraft 4D, they aim to make 5,000 pairs by the end of the year, ramping up production to 100,000 pairs next year.

While 3D printing is often touted for its ability to customize products, Adidas will start with a single design to test the tech. Their ultimate goal, however, is to customize each shoe to fit the unique contours of a person’s foot.

This isn’t the company’s first foray into the world of 3D printing—an earlier model of the Futurecraft shoe, made with Materialise, previously sold for $333—nor are they the only shoe company pursuing the technology.

What’s interesting about this project is the challenges Adidas says Carbon’s technology can solve. And whereas 3D printed shoes have mostly arrived in small numbers, Adidas’ commitment to ramp up production is notable.

The idea of printing objects on demand is exciting, but the reality is more nuanced. 3D printing is slow and costly. Traditional manufacturing processes like injection molding still reign supreme for mass manufacturing at cost.

Adidas and Carbon are optimistic this may be changing for some products.

Of the 3D printers we’ve covered over the years, Carbon's is a personal favorite. Instead of stacking layers to make an object, Carbon uses light and heat to selectively harden liquid resin. The result is very sci-fi. A digital design made manifest is hoisted from a vat of high-tech goo in a single finished piece.

But Carbon's process has practical advantages too. For one, it’s relatively fast.

Printing soles used to take Adidas 10 hours. Now it takes 90 minutes. And they aim to further reduce print time to 20 minutes. Also, each sole is printed continuously in one piece, which eliminates weak spots where layers meet. And the sole’s honeycomb geometry—the properties of which vary over the sole's length—wouldn’t be possible with injection molding.

“Mechanical engineers have been taunting the world with the properties of these structures for years,” according to Carbon cofounder, Joseph DeSimone. “You can’t injection-mold something like that, because each strut is an individual piece.”

 

The technology also allows for faster, more complete prototyping. Adidas ran through some 50 designs before landing on their final choice.

A typical process, which would require copious retooling, would try out a handful of designs before moving on. By 3D printing both the prototypes and the final product, Adidas can skip tooling on both ends. And unlike prior prototypes, these are made of the same material as the final product, limiting the likelihood that it will perform differently.

In addition to Adidas, Nike, Under Armour, and New Balance have their own 3D printed shoe projects, but these have mostly been produced in small batches. While 100,000 pairs of shoes is a drop in the ocean relative to the hundreds of millions of pairs Adidas sells each year, it's a lot more than a few hundred pairs.

Whether the shoe itself catches on? We'll have to wait and see.

 

Jason Dorrier

Jason is managing editor of Singularity Hub. He cut his teeth doing research and writing about finance and economics before moving on to science, technology, and the future. He is curious about pretty much everything, and sad he'll only ever know a tiny fraction of it all.

Image Credit: Adidas

Link: https://singularityhub.com

 


3D Printed Titanium Ribs and Sternum

by System Administrator - Friday, September 11, 2015, 21:28
 

We Can Rebuild Him: Patient Receives 3D Printed Titanium Ribs and Sternum

By Jason Dorrier

It’s a bit like a Marvel superhero comic or a 70s sci-fi TV show—only it actually just happened. After having his sternum and several ribs surgically removed, a Spanish cancer patient took delivery of one titanium 3D printed rib cage—strong, light, and custom fit to his body.

It’s just the latest example of how 3D printing and medicine are a perfect fit.

The list of 3D printed body parts now includes dental, ankle, spinal, trachea, and even skull implants (among others). Because each body is unique, customization is critical. Medical imaging, digital modeling, and 3D printers allow doctors to fit prosthetics and implants to each person’s anatomy as snugly and comfortably as a well tailored suit.

In this case, the 54-year-old patient suffered from chest wall sarcoma, a cancer of the rib cage. His doctors determined they would need to remove his sternum and part of several ribs and replace them with a prosthetic sternum and rib cage.

 

This image shows how the 3D printed titanium implant attaches firmly to the patient's rib cage.

Titanium chest implants aren’t new, but the complicated geometry of the bone structure makes them difficult to build. To date, the flat-plate implants typically used have tended to come loose, raising the risk of complications down the road.

Now, we can do better. We have the technology.

Complexity is free with 3D printing. It’s as easy to print a simple shape as it is to print one with intricate geometry. And with a 3D model based on medical scans, it’s possible to make prosthetics and implants that closely fit a patient’s body.

But it takes more than your average desktop Makerbot to print with titanium.

 

The finished implant. Image credit: Anatomics.

The surgeons enlisted Australian firm Anatomics—the company that designed a 3D printed skull implant to replace nearly all of a patient’s cranium last year—and CSIRO’s cutting-edge 3D printing workshop, Lab 22, to design and manufacture the implant.

Lab 22 owns and operates a million-dollar Arcam printer. Most 3D printed metal parts use a technology called selective laser sintering, in which layers of powdered metal are fused with a laser beam. Instead of a laser, however, the Arcam printer uses a significantly more powerful electron beam technology developed for aerospace applications. (GE, for example, is printing titanium aluminide turbine blades with the tech.)

The surgeons worked closely with Anatomics to design the implant based on CT scans of the patient’s chest. Using a precise 3D model, the printer built the titanium implant—a sternum and eight rib segments—layer by layer. The final product is firmly attached to the patient's remaining rib cage with screws.

According to CSIRO’s Alex Kingsbury, “It would be an incredibly complex piece to manufacture traditionally, and in fact, almost impossible.”

Once complete, the team flew the implant to Spain for the procedure. All went to plan. The patient left hospital 12 days after the surgery and is recovering well.

While customization is widely used to illustrate 3D printing's power, it can often be more of a perk than a necessity. In many cases, traditional mass manufacturing methods still make more sense because they're cheaper and faster.

In some industries, however, customization is critical.

Aerospace firms, for example, are making 3D printed parts for jet and rocket engines—where rapid prototyping speeds up the design process, and cheap complexity and customization yields parts that can't be made any other way.

And nowhere is customization more useful than in medicine. From affordable custom prosthetics to tailor-made medical implants to bioprinted organs—the potential, in terms of improving and even saving lives, is huge.

We can't rebuild and replace every body part yet, but that's where we're headed.

Image Credit: CSIRO, Anatomics


3D-PRINTED BIO-BOTS [707]

by System Administrator - Wednesday, August 6, 2014, 22:55
 

 

TINY 3D-PRINTED BIO-BOTS ARE PROPELLED BY MUSCLE CELLS

By: Jason Dorrier

Robots come in all shapes and sizes—some are mechanical, and some aren’t. Last year, a team of scientists from the University of Illinois at Urbana-Champaign made a seven-millimeter-long 3D printed robot powered by the heart cells of a rat.

The device, made of 3D printed hydrogel—a water-based, biologically compatible gel—had two feet, one bigger than the other. The smaller, longer foot was coated in heart cells. Each time the cells contracted, the robot would crawl forward a few millimeters.

3D printing allowed the researchers to quickly fabricate and test new designs. But there was a problem. Because the heart cells beat spontaneously (like in a human heart), they couldn’t control the robot’s motion. So, the scientists designed a new bio-bot.

The new device is also made using a 3D-printed gel scaffold, but instead of heart cells, it uses skeletal muscle cells to move around. The contraction of the muscle cells is controlled by electric current. By varying the frequency of the current, researchers can make the bio-bots go faster and slower, or in absence of a current, turn them off.

The bio-bots’ overall design is also naturally inspired. The hydrogel is rigid enough to provide structural support, and at the same time, it can flexibly bend like a joint. The muscle cells are affixed to two tendon-like posts that serve double duty as the bot’s feet.

The researchers think the bio-bots may prove useful in medicine or in the environment.

“It’s exciting to think that this system could eventually evolve into a generation of biological machines that could aid in drug delivery, surgical robotics, ‘smart’ implants, or mobile environmental analyzers, among countless other applications,” said Caroline Cvetkovic, co-first author of the paper.

In the future, the researchers hope to make the hydrogel backbone capable of motion in multiple directions, instead of just a straight line. And they may integrate neurons to steer the bio-bots using light or chemical gradients.

“Our goal is for these devices to be used as autonomous sensors,” said study leader Rashid Bashir. “We want it to sense a specific chemical and move towards it, then release agents to neutralize the toxin, for example. Being in control of the actuation is a big step forward toward that goal.”

Image Credit: University of Illinois at Urbana-Champaign/YouTube


Link: http://singularityhub.com/2014/07/22/these-tiny-3d-printed-bio-bots-are-propelled-by-muscle-cells/


4 Customer Service Clichés That Actually Work [1777]

by System Administrator - Wednesday, August 9, 2017, 15:46
 

4 Customer Service Clichés That Actually Work

A wise woman once said: "Make a customer, not a sale." Much of the industry, however, seems to have taken this advice the wrong way, constantly addressing customers with generic phrases -- or should we say clichés.

But is customer service misusing these expressions, or is the method actually working in its favor? It is not that customers are amazed by such responses, since they are quite predictable; even so, what hides behind those words may be more powerful than it seems.

The Responsibilities of a Customer Service Agent

Somehow, being a live chat customer service representative gives the impression that the sole purpose of the job is to smile and try to coax the customer into buying your products. Governed by a "happier and happier" policy, agents sweet-talk a confused audience until it is ready to accept their service.

The reality, however, is that they are trained professionals whose job is to inform, to help, and to solve any problem a customer may have. Not only are they a crucial link between a business and its target audience, they also play an important role in raising conversion rates.

As the statistics suggest, 78% of customers give up on a purchase because of poor customer service, while 81% of people are more inclined to work with a business again after being pampered by good customer support.

And how much does this affect conversion rates? Quality service can increase profits by 125%, and organizations that choose to focus on their customers earn at least 60% more profit than their competition.

It is obvious, then, that the duties of a customer support job are far greater than popular opinion suggests. And although it looks simple, the list of responsibilities actually goes beyond the basic job description. For example:

  • Attracting potential customers;
  • Answering questions about products/services;
  • Suggesting information about related products/services;
  • Opening customer accounts;
  • Updating account information;
  • Resolving product/service problems and complaints;
  • Finding and explaining the best solution to a problem;
  • Ensuring that issues are fully resolved;

Customer Experience

The way customers are treated sets the tone for a business's growth potential. After all, consumers are just people who want to be respected, so if the company shows no interest, customers will remember being mistreated.

Moreover, besides losing potential customers, the organization will lose credibility. With today's instant online connections, bad reviews can put a business in serious jeopardy. It takes just one disappointing Yelp review for a company to lose up to 30 potential buyers, since 70% of global consumers trust online reviews.

That is why companies are shifting their focus to the customer experience more than ever. They are realizing that a business needs to invest in its customers, which is slowly bearing out the predictions that by 2018 roughly 50% of organizations will be motivated to devote their attention and spending to customer service innovation.

The Most Common Clichés

Despite the problems mentioned above, customer service agents still refuse to abandon the "classic" metaphors. Why? Because they are pleasant-sounding sound bites that evoke a particular response, and people are used to them.

There are countless famous sayings we hear even in everyday commercials, but here are some of the most common cliché tactics:

  • The highest quality at the lowest price;
  • If you buy now, you get another one for free;
  • Satisfaction guaranteed or your money back;
  • A professional customer service experience;

Do These Clichés Really Work?

Unfortunately, clichés really do work. Although they are nothing exciting or new, they are carved into our brains. Every time a customer hears these familiar lines, they trigger a positive reaction.

Let's analyze the meaning behind those words for a moment. "The highest quality at the lowest price" is everyone's dream, and it implies that the business cares about its customers. It simply makes you think you will get a high-quality product at a reasonable price, which is reason enough to agree to the purchase.

"If you buy now, you get the second product for free" is another cliché, frequently accompanied by "limited-time" or "one-time offer." It creates a rush, and customers feel they must hurry to make a payment or they will miss out.

The legal definition of "satisfaction guaranteed," meanwhile, describes the term as a sales or service contract in which the seller grants the buyer sole and unilateral discretion over whether the goods and services offered are acceptable.

This phrase lures customers with 100% guaranteed satisfaction, so why not take that kind of deal? The problem arises, however, when the business fails to keep its promises. This is a dangerous cliché to use because it can lead to serious business repercussions. That does not mean it fails to work, though: the promise makes people more motivated to proceed with the purchase.

Finally, there is a seemingly useless yet effective cliché: "professional customer service." Every customer service team should be professional, or at least try to be, yet potential customers still gravitate toward an organization that looks qualified. And adding the adjective "professional" can definitely turn people into potential customers.

Every business depends on customers, particularly very satisfied ones. To attract more of them, organizations often resort to small but clever tricks, using influential clichés to trigger an increase in revenue.

It is curious that a few common expressions can make such a big difference, but they really do work. They are an effective means of growing a customer base. The logical conclusion, then, is that companies will keep using them until the promise of customer service innovation becomes a new cliché within the industry.

Author bio: Robin is a Technical Support Executive. He is an expert in knowledge management and various knowledge base tools. Currently, he is a resident knowledge management expert at ProProfs. In his free time, Robin enjoys reading and traveling.

Link: https://www.theonlinecitizen.com


4 Fantásticos [428]

by System Administrator - Thursday, January 16, 2014, 19:44
 


Los 4 Fantásticos (The Fantastic Four) are a team of superheroes in the Marvel universe, created by writer Stan Lee and artist Jack Kirby in issue #1 of the comic The Fantastic Four (November 1961), published by the American publisher Marvel Comics.

A defining title of the so-called Silver Age of comic books, the series served as a vehicle for creators such as Roy Thomas, John Byrne, Steve Englehart, Walter Simonson, John Buscema, George Pérez, and Tom De Falco. It has also been adapted into animated series and live-action films.

Source: http://es.wikipedia.org/wiki/Los_4_Fantásticos


4 Ways Your Competitors Are Stealing Your IT Talent [1008]

by System Administrator - Friday, December 5, 2014, 22:01
 

4 Ways Your Competitors Are Stealing Your IT Talent

Savvy companies are shopping for talent in what is arguably the best place to find it -- their competition. As the talent war heats up, poaching tech professionals is becoming increasingly common. Here's how it's done and how to stop it.

By Sharon Florentine

One of the best places for your competitors to find great talent is within the walls of your company. If your best and brightest have been jumping ship to work for your biggest rival, it's important to know how they're being recruited, why they are being targeted and what you can do to stop it. Here's how your competitors may be poaching your talent.

They're Using Professional Search Tactics

Savvy companies know that the best talent is often already employed - with their competitors. Hiring a professional search firm -- or if that's not financially feasible, copying their subtle approach -- can lure away even the most content employees. As this Inc. Magazine article points out, targeting successful talent and then making contact via social networks like Facebook or LinkedIn, or at professional networking events, conferences or industry events with the promise of a "great opportunity" can pique their interest and entice them to consider a move.

They're Using Tools Like Poachable or Switch

One of the biggest challenges for hiring managers and recruiters is finding passive candidates, says Tom Leung, founder and CEO of anonymous career matchmaking service Poachable.

"Passive job finding - and searching for passive candidates - has a lot of interest for both candidates and for hiring managers and recruiters. As the economy rebounds and the technology market booms it remains difficult to match potential candidates with key open positions," Leung says. Employees and candidates are demanding higher pay from potential employers while, at the same time, STEM jobs are taking twice as long to fill as non-STEM jobs.

"When we asked hiring managers and recruiters what their biggest challenge was, they told us their weak spot was luring great talent that was already employed. Everybody seems to be doing a decent job of blasting out job postings, targeting active candidates, interviewing them, but this passive recruiting is where people get stuck," says Leung.

Passive candidates are already employed and aren't necessarily unhappy, Leung says, but if the right opportunity came up, they would consider making a move. That's where tools like Leung's Poachable and the new Switch solution come in.

"These folks might want to make a move, but they're too busy to check the job boards every day, and they're content where they are. What we do is help them discover what types of better, more fulfilling jobs are out there by asking them what 'dream job' would be tempting enough for them to move, and we help them find that," says Leung.

They're Offering Competitive Benefits and Perks

Flexible work schedules, job-sharing, opportunities to work remotely, subsidized child and elder care, employee-paid healthcare packages, on-site gym facilities, a masseuse and unlimited vacation time are all important if you want to attract talented IT professionals.

"Companies that acknowledge and accommodate the fact that their talent has a life separate from work tend to have more engaged, loyal and productive employees, says Dice.com president Shravan Goli.

A March 2014 study from Dice.com surveyed tech pros and found that benefits and perks like flexibility, free food and the ability to work with cutting-edge technology were key drivers of their decision to take a new position. "With approximately 2.9 percent unemployment rate in the IT industry, companies must get creative to attract and keep their top talent. Perks and benefits are one way they are looking beyond compensation," says Goli.

They're Offering Better Monetary Incentives

Your talent is one of your business' greatest assets, and if you're not doing everything you can to ensure they stay happy, especially where compensation is concerned, you could lose them - and be at a competitive disadvantage, according to the U.S. Small Business Administration.

 "All companies have valued employees - those they can't afford to lose because of their skill, experience and commitment to their work. One way you can help them resist the temptation to stray is to show that you are invested in their future," according to data from the SBA.

 The SBA advises giving these employees one-on-one time with management, discussing their professional goals and their importance, and sharing the company's vision for continued growth as well as the employee's role in that growth.

In addition, the SBA says, offering meaningful pay increases, a generous bonus structure and/or compensation like "long-term incentive plans tied to the overall success of the business, not just individual performance, can also send a clear message to your employees that they have a recognized and valuable role to play in your business as a whole."

Link: http://www.cio.com

Logo KW

50 Years of Moore’s Law [1189]

de System Administrator - sábado, 4 de abril de 2015, 21:57
 

Gordon Moore

50 Years of Moore’s Law

The glorious history and inevitable decline of one of technology’s greatest winning streaks

Fifty years ago this month, Gordon Moore forecast a bright future for electronics. His ideas were later distilled into a single organizing principle—Moore’s Law—that has driven technology forward at a staggering clip. We have all benefited from this miraculous development, which has forcefully shaped our modern world.

In this special report, we find that the end won’t be sudden and apocalyptic but rather gradual and complicated. Moore’s Law truly is the gift that keeps on giving—and surprising, as well.
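As a rough formalization (ours, not the article's), Moore's observation is usually written as a fixed doubling period T for the transistor count on a chip:

N(t) = N_0 \cdot 2^{t/T}, \qquad T \approx 2\ \text{years}

For example, starting from the roughly 2,300 transistors of 1971's Intel 4004, twenty doublings over forty years give on the order of a few billion transistors, which is about where flagship chips stood at the law's 40th anniversary.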

The Multiple Lives of Moore’s Law

Why Gordon Moore’s grand prediction has endured for 50 years

Logo KW

7 [303]

de System Administrator - sábado, 18 de enero de 2014, 16:20
 

The number 7 symbolizes thought, spirituality, consciousness, psychic analysis, and wisdom. It is the number of intellect, idealism, and repression. Those who identify with 7 love reading, studying, and learning. They tend to project their lives onto a sphere of idealism and intellectual activity. They have a gift for analysis and research, with an intelligent pursuit of knowledge; a scientific and inventive mind; they are studious, meditative, and charming; lovers of solitude and peace; perfectionists. On the negative side, they are reserved, harbor hidden motives, and are prone to arguing through silence or sarcasm; they tend toward isolation and inflexible positions, and distractions irritate them. The 7 is compatible, by complementarity, with the 3.

Fuente: http://numerologia.euroresidentes.es/nacimiento/numero/7

Logo KW

8 Ways AI Will Profoundly Change City Life by 2030 [1708]

de System Administrator - miércoles, 19 de octubre de 2016, 23:04
 

8 Ways AI Will Profoundly Change City Life by 2030

 

BY EDD GENT

How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.

The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, a computer scientist, former president of the Association for the Advancement of Artificial Intelligence, and managing director of Microsoft Research's main Redmond lab.

Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, comprised of experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame their report around the impact AI will have on the average American city. Here’s how they think it will affect eight key domains of city life in the next fifteen years.

1. Transportation

The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.

Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition towards similar on-demand approaches. Commutes will become a time to relax or work productively, encouraging people to live farther from work, which could combine with a reduced need for parking to drastically change the face of modern cities.

Mountains of data from increasing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.

Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key as this will be the public's first experience with physically embodied AI systems and will strongly influence public perception.

2. Home and Service Robots

Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots' on-board computing capacity.

Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft's Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.

But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.

 

3. Healthcare

AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.

If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.

At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.

Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.

 

4. Education

The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.

AI-enabled education systems will learn individuals’ preferences, but by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will increasingly widen educational access, making learning lifelong, enabling people to retrain, and increasing access to top-quality education in developing countries.

Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.

5. Low-Resource Communities

In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.

Investment in these areas is under-funded though, so how quickly these capabilities will appear is uncertain. There are fears that machine learning could inadvertently discriminate by correlating decisions with race or gender, or with surrogate factors like zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.

6. Public Safety and Security

By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.

Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.

 

7. Employment and Workplace

The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.

AI is more likely to replace tasks rather than jobs in the near term, and it will also create new jobs and markets, even if it's hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.

These structural shifts in the economy will require political rather than purely economic responses to ensure these riches are shared. In the short run, this may include resources being pumped into education and re-training, but longer term may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.

8. Entertainment

Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today. Breakthroughs in sensors and hardware will see virtual reality, haptics and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and they will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.

Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to unprecedented levels. There are concerns this could endow media conglomerates with unprecedented control over people’s online experiences and the ideas to which they are exposed.

But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. Democratizing the production of high-quality entertainment makes it nearly impossible to predict how highly fluid human tastes for entertainment will develop.


Image credit: Shutterstock

Link: http://singularityhub.com

Logo KW

8 ways to boost knowledge management for better employee productivity [1430]

de System Administrator - jueves, 17 de septiembre de 2015, 21:06
 

8 ways to boost knowledge management for better employee productivity

Posted by Ben Rossi

A huge amount of people’s working day is spent on ‘non-work’. Here’s how to make them more productive.

 

"Employees are more likely to share information and grow a company's productivity and competitive advantage when they feel heard"

A McKinsey & Company study from May 2014 found that the average interaction worker spends an estimated 28% of the workweek managing email and nearly 20% looking for internal information or tracking down colleagues who can help with specific tasks.

That’s right: information professionals and knowledge workers spend over one-quarter of their time looking for information, writing emails and collaborating internally.

This means that streamlining knowledge management could have a dramatic effect on the productivity of an organisation. Furthermore, making information accessible and well organised helps unlock the value of the collective knowledge held by employees.

Fortunately, this does not require investing in expensive new tools. The same McKinsey report said that most companies could double the current value they get from social tools by removing online hierarchies and creating an environment that is more open, direct, trusting and engaging.

>See also: Should central IT butt out of information management?

Here are eight ways to enhance knowledge management in an organisation.

1. Embrace the desire to socialise

Humans are social creatures. Employees have a natural tendency to socialise, and this does not have to be treated as slacking off or a distraction. Encouraging employees to form relationships encourages knowledge sharing because it is through these interactions that employees get to know each other.

Socialising enhances their awareness of each others’ strengths and weakness. They will know who to go to with specific queries and feel more comfortable reaching out, which helps them act faster and make better decisions.

2. Encourage dialogue and collaboration

Today’s employee wants to feel that their voice is heard within the organisation and they place a high premium on collaboration. They are active users of mobile and social technology, and do not want to stand on the sidelines – they want to get involved.

Employers cannot and should not fight this. Rather than bosses expounding about their ideas for hours, they should cultivate an atmosphere of open communication. Create opportunities for employees to share their thoughts and ideas with each other and allow for improvisation. Remember that true organisational change has to occur at every level.

3. Solicit feedback and questions

The old adage of “there is no such thing as a bad question” certainly holds true with knowledge management. Questions are how people learn, whether they are a CEO or an intern.

One of the best ways to get employees to share their knowledge and exchange insights is to seek feedback. Ask employees for help and solicit their opinions, expertise, and advice. Invite others to work with you, even to make small contributions. Be transparent by sharing what you are doing and why, and ask your team how they would do it differently. Lead by example.  

4. Centralise information

As mentioned above, an organisation has a goldmine of collective knowledge at its disposal. In addition to open communication, a centralised repository where that knowledge can live is important, so employees can access it when they need to.

Take advantage of a platform that facilitates and documents employee interactions. This enables staff to quickly locate conversations and/or colleagues who can provide the insights they need for projects or decisions.

5. Generate new ideas

Good ideas can come from anywhere. Open up crowdstorming and collaborative brainstorming to the entire organisation by crowdsourcing product and service ideas. This allows you to identify potential challenges, collect a broad range of perspectives, and develop solutions in an intuitive, user-friendly forum.

6. Establish immediate communication and sharing

Communication is not just important on the individual level. B2B supply chains also involve various teams, branches, vendors and more.

Part of effective knowledge management is ensuring that all these moving parts are able to easily talk to each other, because otherwise your workflows will hit roadblocks. Remove as many silos as you can and streamline communication. Breaking down barriers will drive productivity.  

7. Encourage a change mindset

Someone with a “change” or “growth” mindset approaches problems as opportunities. They embrace challenges, learn from their setbacks, don’t give up, and take control over their actions. For knowledge sharing to have the greatest results, this is the mindset you want to cultivate in your employees.

Leaders can do this by aligning the organisational structures and processes to support that vision. Set performance goals for individuals and for the organisation as a whole, and then motivate your team to achieve them. Leaders can also model change by setting examples of desired behaviors in day-to-day interactions, enlisting help from influential people within the organisation, and, most importantly, ensuring that teams are held accountable to the changes.

A change mindset involves helping employees grow. Develop their talent and skills by evaluating performance, rewarding high-performing individuals, and offering a range of educational opportunities so they can work on their weaknesses and hone their strengths.

Finally, secure commitment and understanding from your employees by making sure they know why changes need to happen and how they will be supported. Keep track of progress so it stays aligned with the company's overall mission and employees' daily work.

8. Tap into intrinsic motivation

Employees are more motivated to share knowledge when they find their work interesting, stimulating and enjoyable. The more motivated an employee feels, the more likely they are to share knowledge.

Instead of driving motivation through external feedback – which can leave workers feeling manipulated or controlled – inspire your team by encouraging autonomy. Autonomy is an essential part of motivation and job satisfaction, and employees who have some autonomy in what they do are more likely to feel enthusiastic about their work.

Areas such as scheduling, decision making and process management provide excellent opportunities for developing a confident, engaged team.

Ultimately, employees are more likely to share information and grow a company's productivity and competitive advantage when they feel heard, have access to the knowledge and resources they need, and have a positive environment with leaders who are committed to collaboration.

Sourced from Tim Eisenhauer, president of Axero Solutions

See also: The knowledge economy is sparking a new approach to STEM education

Link: http://www.information-age.com

Logo KW

8080 [332]

de System Administrator - domingo, 5 de enero de 2014, 14:43
 


The Intel 8080 was an early microprocessor designed and manufactured by Intel. The 8-bit CPU was launched in April 1974. It ran at 2 MHz and is generally considered the first truly usable microprocessor CPU design.


Several major manufacturers acted as second sources for the processor, among them AMD, Mitsubishi, NatSemi, NEC, Siemens, and Texas Instruments. Several unlicensed clones were also made in the Eastern Bloc, in countries such as the Soviet Union and East Germany.

Logo KW

9 [304]

de System Administrator - viernes, 14 de marzo de 2014, 14:39
 

The number 9 symbolizes artistic genius, a humanitarian spirit, and a tendency toward romance and heartfelt sentiment. It is the number of persistence, generosity, and drive. People who identify with 9 are warmly friendly and likeable, selfless, and do their work willingly. They have a talent for art and for writing. They tend to forgive the failings of others. They are capable of starting projects and working persistently through to completion. On the negative side, they are prone to self-flattery and scattered interests. They are possessive, careless with money, and need to be the center of attention.

Fuente: http://numerologia.euroresidentes.es/nacimiento/numero/9

Logo KW

A guide to Docker container networking [1445]

de System Administrator - miércoles, 23 de septiembre de 2015, 22:11
 

A guide to Docker container networking

By Brandon Butler 

Despite all the hype about containers, the application packaging technology is still evolving, especially as relates to networking.

In the past year though there have been significant advancements to Docker container networking functionality. At the same time Docker has built a plug-in architecture that allows more advanced network management tools to control containers.

Meanwhile, startups have developed custom platforms for managing containers, while traditional vendors such as Cisco and VMware have enabled their network management tools to control containers. So, the earliest container networking challenges are beginning to be solved, but there’s still more work to be done.

 

Basic challenges

There have always been container networking issues. Containers hosted on the same physical server can interact with one another and share data. But Docker developers didn’t initially build in the ability to migrate a container from one host to another, or connect one container to another on a different host.

“The biggest challenges have been in cross-container communications,” says Keith Townsend, a technology analyst and blogger. “From one container to another, that’s the biggest frustration that most networking professionals will encounter.”

Engineers at Docker, the company that develops the open source project of the same name, quickly realized they needed to fix this.

Batteries included, but swappable

The networking issues led Docker in March 2015 to buy startup SocketPlane, which aimed to bring software-defined networking capabilities natively to Docker. In June, Docker announced the integration of SocketPlane technology into the open source project. New networking capabilities use basic Linux bridging features and VXLANs (virtual extensible LAN) to allow containers to communicate with other containers in the same Swarm, which is Docker's moniker for a cluster of containers. Container networking across hosts had been solved.

At the same time, Docker also released libnetwork, the codename for an open source project that allows third-party network management products to be "plugged in" to replace the built-in Docker networking functionality. Virtual networking products like VMware's NSX, Cisco's ACI and more than a half-dozen others were the first supported third-party network tools.

"It sets up an abstraction," says Docker Senior Vice President of Product Scott Johnston. "It's a Layer 3 network overlay that allows containers to be attached to it."

Docker now has two flavors of network management. There is native, out-of-the-box functionality supplied by Docker thanks to the SocketPlane acquisition that allows for networking across hosts. If users want more advanced network functionality - such as spinning up new networks programmatically, setting network policies, installing firewalls, load balancers or other virtual apps on the network - then a variety of network management products can be used. Docker calls its approach "batteries included, but swappable." Johnston says he hopes to have a similar plug-in model for container storage soon too.
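To make this concrete, here is a minimal sketch using the Docker SDK for Python (our illustration, not from the article; the network and container names are made up, and overlay networks require Swarm mode, e.g. "docker swarm init"):

import docker

# Connect to the local Docker Engine (assumes Docker is installed and running).
client = docker.from_env()

# Create a multi-host-capable overlay network; attachable=True lets
# standalone containers join it. Use driver="bridge" for a single-host test.
net = client.networks.create("demo-overlay", driver="overlay", attachable=True)

# Run a container attached to that network; nginx is just an example image.
web = client.containers.run("nginx", name="web", detach=True,
                            network="demo-overlay")

print("container", web.name, "is attached to", net.name)

A second container started the same way on another Swarm node could then reach "web" by name over the VXLAN-backed overlay.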

Technology is the easy part

 

Docker SVP of Product Scott Johnston says when it comes to container networking, the technology is the easy part.

Johnston says these technology capabilities are the easy part. Getting developers that build apps in containers and the IT shop who will run them on the same page is an even bigger challenge.

Containerized apps have very different characteristics from traditional enterprise apps. Whereas in the past IT’s goal was to provide resilient systems that would not fail, now the priority is to provide instant-on capacity and agile, flexible networks.

“From a networking perspective, application delivery and performance is tied to how well the network infrastructure is able to support these new apps and use cases,” says Ken Owens, CTO of Cisco’s Cloud Infrastructure Services. “The role of the network engineer is to think about how things like programmable networking, software-defined networking, and network function virtualization can help.”

These tools allow for the automatic provisioning of network resources – instead of manual provisioning – which could soon be table-stakes requirements for organizations that truly embrace these new application paradigms.

This story, "A guide to Docker container networking," was originally published by Network World.

Brandon Butler, Senior Editor

Senior Writer Brandon Butler covers the cloud computing industry for Network World by focusing on the advancements of major players in the industry, tracking end user deployments and keeping tabs on the hottest new startups. He contributes to NetworkWorld.com and is the author of the Cloud Chronicles blog. Email him at bbutler@nww.com and follow him on Twitter @BButlerNWW.

Link: http://www.cio.com

Logo KW

A Guide to Jumpstarting Technology [1094]

de System Administrator - domingo, 15 de febrero de 2015, 22:41
 

A Guide to Jumpstarting Technology

Putting a technology plan in place today will help ensure your growth tomorrow

Technology evolves at breakneck speed, and it plays an increasingly important role in the success of all types of businesses, regardless of size. Any new business should include a technology blueprint as an integral part of its start-up plan; every existing business should adopt such a plan if it doesn’t already have one; and all businesses should revisit their technology plans on a regular basis. This guide looks at the components to include in a technology plan, best practices for designing and implementing it, and the advantages and benefits it can provide.

The best time to develop a comprehensive technology plan is during the start-up phase of a new business, but it’s never too late for any company to assay its technology needs and begin formulating a comprehensive strategy to address them now and well into the future.

Please read the attached whitepapers

Logo KW

A Path to Ubiquitous, Any-to-Any Video Communication [1219]

de System Administrator - miércoles, 6 de mayo de 2015, 16:04
 

Open Visual Communications Consortium

A Path to Ubiquitous, Any-to-Any Video Communication

Any Vendor. Any Network. Any Device.

Over the last several years, great strides have been made to improve video communication capabilities in the industry. Video over IP network technology has made video easier and faster to deploy. HD quality is now commonplace in video systems and clients. Management and infrastructure solutions deployed in enterprises and organizations have enabled video networks to be established and custom dial plans implemented, enabling a rich set of visual communication experiences for users within those organizations. As a result, video adoption has increased across enterprises and organizations around the world.

However, with growth have also come challenges. Those challenges have been most keenly experienced where enterprises or organizations have desired to have video communications across organizational boundaries. With voice and Internet traffic, one does not ponder how a network is connected because "it just works" when one makes a call or accesses websites outside an end-user domain.

With video, the opposite has been true. Typically, end users only communicate via video within their own organization. When communicating with outside parties, they often have to use awkward dial strings, and/or engage in manual planning and testing over the public Internet to have a video call. Even then a successful call can only be established if the IT departments of both companies have security or firewall policies that will allow the video call to take place with parties outside their organization. The customer may choose to use a managed or hosted video service provider to help facilitate that communication; however, this only moves the problem to the service provider, which goes through a manual process to plan, test, and validate that the desired far-end parties are reachable. Both end users and service providers must deal with a wide variety of technical issues when establishing video between different organizations or different service providers. These issues include network connections, network quality of service (QoS), NAT/firewall traversal, security policies, various signaling protocols, inconsistent dial strings, security rules within each organization impacting video, and incompatibilities between video endpoints. In addition, there are the operational considerations around coordination of the different types of management and scheduling systems and processes that exist within each service provider. Finally, the commercial considerations of termination and settlement between service providers must also be resolved.

This combination of technical and business challenges has relegated video communication to a collection of isolated islands. It’s easy to communicate within an island, but almost impossible to communicate between islands. The ability to resolve these issues and federate the islands doesn’t lie within the power of any one customer, one equipment manufacturer, one service provider, or even one standards body to solve. It requires a concerted effort of the industry driven by the needs of their end users.

The Open Visual Communications Consortium (OVCC) has been formed to address these issues. The mission of the OVCC group is to establish high-quality, secure, consistent, and easy-to-use video communication between these video "islands," thereby enabling a dramatic increase in the value of video communication to end customers worldwide.

This paper describes the OVCC organization, its purpose, and how it is addressing the B2B communications challenges and enabling businesses to open the door to faster decision-making, easier, more productive collaboration with partners and customers, streamlined supply chain management, and game-changing applications in education, healthcare, government and business.

Please read the attached whitepaper.

Logo KW

A tale of two women: same birthday, same Social Security number, same big-data mess [1569]

de System Administrator - domingo, 15 de noviembre de 2015, 18:52
 

A tale of two women: same birthday, same Social Security number, same big-data mess

by Katherine Noyes

The odds are higher than you might think, one company says

It’s a case that would seem to defy the odds many times over: Two Florida women born on the same day, in the same state, and given almost the same name. Though no one realized it at the time, it turns out they were also given the same Social Security number.

Joanna Rivera and Joannie Rivera only recently discovered the problem, according to a report this week, but in the meantime it’s caused no end of trouble for them. Credit applications have been denied; tax returns have been rejected.

Identity theft might have been a likely assumption, but in this case, it was something different.

After 25 years of confusion, the Social Security Administration reportedly has admitted its mistake at last: In 1990, two Florida hospitals created the same record for two babies with similar first names, the same last name and the same date of birth, and the administration gave them both the same Social Security number.

It’s not as uncommon as you might think. In fact, some 40 million SSNs are associated with multiple people, according to a 2010 study by ID Analytics.

Some, as in the Rivera case, are innocent mistakes caused by data-entry errors or bad handwriting, said Ken Meiser, vice president of identity solutions at ID Analytics.

 

Image credit: Social Security Administration

Others are “what we call identity manipulation,” whereby someone with a shaky credit history makes subtle changes to their identity so it’s not connected with their history, he said.

Then, of course, there’s impersonation by someone who either isn’t qualified for a SSN of their own or is trying to assume a different identity for other reasons.

Cases like that of the Rivera women are “really rare,” Meiser said. Nevertheless, “the question becomes, how do you build algorithms or recognize who is the legitimate owner of that SSN?”

Except in rare circumstances, the SSA is prohibited from trying to do that kind of verification, he said. So, it’s generally up to companies like ID Analytics or credit bureaus to serve that purpose.

“If you’re one of the folks who has had that duplication, it creates issues,” Meiser said. “It’s a really interesting challenge for everybody involved.”

In hindsight, the fact that two individuals were simultaneously asserting similar credentials but not living in exactly the same place might have been a tip-off, he said. “Both address A and address B were in play,” he said. “It probably should have triggered at least some concern.”

Also notable is the fact that the Internal Revenue Service didn’t raise a flag, since it presumably was getting W-2 forms from two different employers for somebody who was apparently holding two jobs but didn’t live in the same place, he said.
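The kind of consistency check Meiser describes can be sketched in a few lines of Python. This is a hypothetical illustration (the field names and records are invented, not real data): group records by SSN and flag any number asserted by more than one distinct identity.

from collections import defaultdict

# Invented sample records; a real system would draw these from credit files.
records = [
    {"ssn": "000-00-0000", "name": "Joanna Rivera",  "address": "A"},
    {"ssn": "000-00-0000", "name": "Joannie Rivera", "address": "B"},
    {"ssn": "000-00-0001", "name": "Jane Doe",       "address": "C"},
]

identities = defaultdict(set)
for rec in records:
    identities[rec["ssn"]].add((rec["name"], rec["address"]))

for ssn, ids in identities.items():
    if len(ids) > 1:  # same SSN, different name/address pairs "in play"
        print(f"review {ssn}: {len(ids)} distinct identities asserted")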

“When designing software, developers try to take into account all of the possible scenarios,” said Travis Smith, a security analyst with Tripwire. “However, humans are fallible,” and that can trip up otherwise well-designed software.

In any case, it’s an illustration of why it’s important for consumers to use due diligence. They can do it with free credit reports or identity protection services such as LifeLock, which is the parent company of ID Analytics.

“You should be reviewing those reports to see if there’s activity associated with your identity that you don’t recognize,” he said. “Either of these women could probably have seen the problem earlier if they had been doing that.”

The Social Security Administration did not immediately respond to a request for comment.

Katherine Noyes | U.S. Correspondent

Katherine Noyes has been an ardent geek ever since she first conquered Pyramid of Doom on an ancient TRS-80. Today she covers enterprise software in all its forms, with an emphasis on Linux and open source software.

More by Katherine Noyes

Link: http://www.pcworld.com

The dark side of layered security

By Maria Korolov

Sometimes, layered security can have unintended consequences and even make a company less secure than before

Layered security is currently considered a best practice for enterprises, since a single layer of defense against attackers is no longer enough. Sometimes, however, these layers can have unintended consequences and even make a company less secure than before.

Complexity

Jason Brvenik, principal engineer in the Cisco Security Business Group, said that he's seen organizations with as many as 80 different security technologies applied in layers.

"The proliferation of best of breed technologies creates security technology sprawl in pursuit of layered security and defense in depth," he said. "We see plenty of examples and sprawl and operational cost rising, where the technologies tend to conflict with each other."

Security practitioners have been talking about layered security for decades, said Brian Contos, Chief Security Strategist and SVP Field Engineer at Foster City, Calif.-based Norse Corp., a cybersecurity intelligence firm founded by former law enforcement and intel officials.

"While academically this makes sense," he added, "if done incorrectly, it leads to the number one enemy of security: complexity."

Without an overall plan in mind, it's easy to overspend on individual products, to buy overlapping systems, or to leave unsecured gaps between layers.

"It's very common for security organizations to jump at technologies that address 'the monster of the week' but don't have broader value," said Carson Sweet, co-founder and CEO at San Francisco-based CloudPassage, Inc. "Keeping long-term perspective is extremely important, especially with point vendors pounding at security buyers about the latest FUD."

Cisco's Brvenik pointed out another problem with purchasing too many technologies, that of unmanaged or undermanaged systems.

Companies buy a technology in order to meet a compliance need, or fill a security gap, or check off an item on a list, without budgeting or staffing the system's implementation or ongoing management. Then they forget about it, he said.

Not only is this a waste of money, but it actually hurts a company's security posture.

"You're creating opportunities for blind spots, because you think you mitigated that risk, but you haven't maintained a solid presence there," he said.

And even well-managed layers can create problems within an organization, said Jerry Irvine, CIO at Chicago-based security vendor Prescient Solutions.

Different security systems require different kinds of expertise, and the larger the organization, and the more systems there are in place, the more possibilities there are for conflicts -- especially when some of the systems are managed by different companies, such as outsourcers, cloud vendors, or other service providers.

Each security team focuses on its own security task, and this can interfere with that of other groups and with enterprise operations.

"Groups saddled with the responsibility of physical security may tighten down access controls to the point where applications and systems are affected, causing failure or extreme performance issues," Irvine said. "And when separate groups within the organization are responsible for the application they frequently open up access at the lower levels to assure connectivity, but increasing the overall vulnerability of the environment."

In fact, the more security layers are in place, the more likely it is that some will interfere with business operations, said Nathan Wenzler, executive director of security at Washington DC-based Thycotic Software Ltd.

Security products need to be configured then, once they're in place, they might need ongoing tuning, patching, or other kinds of maintenance. Administrators need to understand how the initial configuration and the subsequent changes might affect business processes, as well as other security systems, he said.

But most organizations only have so much expertise and time to go around.

"There's not enough time to implement them well, and keep managing them well," he said. "That becomes a challenge."

User pushback

Operations teams aren't the only ones who might try to fight back against too-restrictive security layers. Individual users can, as well, said Leah Neundorf, Senior Research Analyst at Cleveland-based security consulting firm SecureState LLC.

Say, for example, a company decides to use different credentials for different systems as part of its layered defense strategy.

Users are going to try to defeat that by using the same set of credentials for all systems, she said.

At a minimum, a company is going to want a set of credentials to access internal systems and another set of credentials to access email.

Users who use their email address as their account name for internal systems -- and the same password for both -- are creating a major security problem, since its so easy for outsiders to find out employees' email addresses.

She suggests that enterprises require different formats for user names and passwords to different systems.

"And make sure people understand the reasons you're putting these things in place," she said.

She also warned against credentials that give users access to, say, all the systems within a certain layer.

"Every admin doesn't have to have god rights," she said.

Integration

With each new security layer come integration challenges, where one product might interfere with the functioning of another, or create security policy conflicts.

"Sometimes interactions can have operational consequences," said Fred Kost, VP at Mountain View, Calif.-based security vendor HyTrust Inc. "It's critical for CSOs to test and validate layered security under different attack and load conditions. Clever attackers might use this to render some of an organization's layered security ineffective."

The tendency to buy best-of-breed systems from different vendors can also cause communication problems, forcing security analysts to learn to work with multiple systems instead of having one single view of a company's security situation.

The effort required might outweigh the benefits, said Usman Choudhary, chief product officer at Clearwater, Fla.-based security vendor ThreatTrack Security.

In particular, enterprises have to deal with systems that don't have a common data taxonomy and trying to correlate data after the fact can lead to gaps in coverage, he said. It also takes more time to deal with false positives and false negatives.

"These layered security challenges are the big problem in the cyber threat detection and mitigation space, and are the root cause of many of the recent breaches," he said. "Often the bad guys are very well aware of these issues and are able to exploit these gaps in the security solutions."

Link: http://www.csoonline.com

Logo KW

Abstracción [107]

de System Administrator - lunes, 6 de enero de 2014, 23:34
 

Abstraction (from the Latin abstrahere, "to separate") is a mental operation intended to conceptually isolate a specific property of an object and reflect on it. In programming, the term refers to an emphasis on "what does it do?" rather than "how does it do it?" (the hallmark of a "black box"). The common denominator in the evolution of programming languages, from the classic or imperative ones to the object-oriented ones, has been the level of abstraction each of them employs.

Fuente: http://es.wikipedia.org/wiki/Abstracción
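A minimal sketch in Python (our illustration, not from the source) of the "what" versus "how" distinction: the reverse function depends only on what a Stack does, while ListStack is one interchangeable answer to how it is done.

from abc import ABC, abstractmethod

class Stack(ABC):
    # The "what": an interface that exposes behavior, not storage.
    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    # One possible "how": a Python list as the backing store.
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

def reverse(values, stack):
    # Treats the stack as a black box; any Stack implementation works.
    for v in values:
        stack.push(v)
    return [stack.pop() for _ in values]

print(reverse([1, 2, 3], ListStack()))  # prints [3, 2, 1]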

Logo KW

Account Based Marketing (ABM) [920]

de System Administrator - miércoles, 8 de octubre de 2014, 10:51
 

Marketing Trends, Thoughts and Opinions

Account Based Marketing (ABM) and B2B Sales Support

Posted by: IDG Connect

Contrary to the traditional approach of mass marketing, Account Based Marketing (ABM) is now progressively becoming a part of marketing functions at IT service and solution providers. In order to yield a higher response rate and eventually generate a higher return on investment, the sales and marketing teams of B2B firms must work in harmony. Identifying the business problems and pain-points of target accounts is the prime focus for fine-tuning the communication to address their technology challenges. This strategy is practicable for both large and small & medium sized technology service providers when it comes to generating more business from existing large-sized accounts.

ABM is not confined to winning new projects and harvesting hefty returns. It also aims to build a one-to-one business relationship which is beyond a typical buyer-seller relation. ABM also helps re-connect with past and currently non-active clients. As ITSMA rightly points out, ABM serves as a platform for services providers and clients to give a boost to their business relationship, create awareness and generate demand for solutions. Direct touch with key Decision Making Units (DMU) at target accounts lies at the core of ABM which helps IT companies design winning technology solutions for their clients.

To drive successful ABM programs and improve the conversion ratio, it is essential for the marketing team to understand the departmental goals of the sales team while collaborating closely with it. Here the marketing team has a major role to play in demand generation activities and calls to action. Molding the message content to address the technology challenges of either an individual account or a group of target accounts plays a vital role. Thorough research of an account is necessary to service it and treat it like a market. All content, collateral and campaigns, including social media marketing, thought leadership marketing, conference presentations, exhibitions and so on, should ideally cater to the content needs of the account.

Collaterals could be specifically designed with content that answers all the key business problems of an account. The relevant technologies and experiences can then be highlighted in documents like product collaterals, web pages, domain brochures, corporate fact sheets, sales presentations, case studies, expo displays, printed ads, etc. This is significant as, for example, a stock-broking company looking for the automation of trading desk operations wouldn't be interested in looking at Healthcare domain technology solutions. Thus mentioning this information would not benefit the client or the service provider. Sales collaterals which detail Capital Markets as a domain practice, the level of experience, and relevant case studies showing facts and figures would appeal to the key decision makers of a target account. During sales presentations, rather than a film giving a general overview of all the domains a solution provider operates in, buyers will pay more attention to a domain-specific short film containing the details they are looking for.

Thought Leadership marketing can help an IT firm position itself as an expert in the industry. If used to target a specific account or a group of buyers, it can help a company build a distinctive image and credibility for buyers. Thought leadership content can be tweaked keeping in mind the needs of a target account by publishing industry insights, research studies, blogs, white papers, case studies, and informative videos etc. which are focused on any particular subject matter. The respect earned as a result of thought leadership marketing activities becomes a plus point at the time of sales pitch.

Engagement of target accounts through websites can give phenomenal results. Dynamic website content, displayed specifically for different accounts or buyers, can lead to significant growth in web traffic. DocuSign, a Digital Transaction Management service provider, witnessed a three-fold rise in page views from its target accounts alone.

Another medium for ABM is the new kid on the block – Social Media. Social media is a direct medium which helps IT service providers connect with rest of the world. Social media should be used to publish posts which talk about business issues of individual target accounts. For Banking sector accounts, it is appropriate to showcase only those horizontal services which are pertaining to banking operations.

While the mediums could be traditional or emerging, ABM is here to stay. It is a shift from mass to personalized and therefore, insights will be more useful than facts. Proactive over reactive is the mantra.

Kulwinder Singh is Director of Global Marketing & Corporate Communication at Synechron

Link: http://www.idgconnectmarketers.com

 

Logo KW

Actionable Security Intelligence [784]

de System Administrator - viernes, 22 de agosto de 2014, 16:19
 

The Growing Need for Real-time and Actionable Security Intelligence

By: Norse
 

Risk management has become an integral factor in many organizations because it prepares them for the worst that could happen. Large enterprise organizations that manage a lot of discrete data and client information must use all their resources to keep this information private and protected. However, security is constantly changing, and the ways that systems are being infiltrated have become more advanced. Download this whitepaper to learn how to properly select the right real-time security option for your organization and to see what actionable security intelligence could offer it.

Please read the attached whitepaper.
Logo KW

ACV en mujeres [482]

de System Administrator - jueves, 3 de julio de 2014, 23:56
 
New 2014 AHA/ASA guideline | 23 JUN 14

Stroke prevention in women

A new guideline with specific recommendations that take into account the particularities of the female sex.

Author: Cheryl Bushnell, MD, MHS, and Louise McCullough, MD, PhD. Stroke Prevention in Women: Synopsis of the 2014 AHA/ASA Guideline
 

Stroke in women: Synopsis of the recommendations of the 2014 American Heart Association/American Stroke Association guideline

Abstract

Description: In February 2014, the American Heart Association released its first guideline focused on stroke prevention in women. This new guideline highlights the risk factors for stroke that are unique to women, including oral contraception, hormone therapy, and pregnancy-associated disorders such as preeclampsia, which can have lasting consequences for a woman's health. It also addresses hypertension, atrial fibrillation, migraine with aura, and the epidemiology of stroke types that are predominant in women, such as aneurysmal subarachnoid hemorrhage and cerebral venous thrombosis.

Methods: The members of a multidisciplinary expert panel searched, reviewed, and critiqued the relevant English-language literature published between 1990 and May 2013. The panel devised evidence tables and drew up recommendations in accordance with American Heart Association guideline procedures and levels of evidence.

Recommendations: This synopsis of the guideline summarizes the evidence on risk factors for stroke in women and suggests prevention strategies. It also describes the new recommendations relevant to the identification and treatment of the hypertensive disorders of pregnancy that increase the risk of stroke.

Introduction

Sex differences are increasingly recognized in many areas of medicine, and stroke is no exception. An estimated 6.8 million people in the United States have had a stroke, the majority of whom are women (3.8 million) (1). At the time of stroke, women are older, more likely to live alone, and in poorer premorbid condition than men. After stroke, women are also more likely to be institutionalized and to have poorer recovery and worse quality of life than men (2-6).

Many risk factors for stroke are unique to women, such as pregnancy and its complications, hormonal contraception, and hormone replacement therapy for menopausal symptoms. Other risk factors are more common in women than in men, such as hypertension, atrial fibrillation, migraine headache with aura, depression, and psychosocial stress.

With these problems in mind, a sex-specific framework has been developed that consolidates the recommendations for stroke prevention in women, and the primary and secondary prevention guidelines (7-8) emphasize lifestyle-specific issues in more detail than previously published cardiovascular prevention guidelines (9).

Risk Factors for Stroke

Hypertension in nonpregnant women

Hypertension, the most modifiable risk factor for stroke, is more prevalent in women than in men (11). Hypertension is more often poorly controlled in older women; only 23% of women versus 38% of men over 80 years of age have a blood pressure below 140/90 mm Hg (12).

There is currently no evidence that antihypertensive treatments differ by sex in their effect on blood pressure response or stroke prevention, but many trials of antihypertensive agents do not report sex-specific analyses of efficacy or adverse-effect profiles.

Moreover, there are large gaps in the evidence regarding appropriate medication choices, treatment resistance, adherence, and hormone-dependent and hormone-independent approaches to blood pressure treatment by sex (13).

Atrial fibrillation

Sex differences in atrial fibrillation include a higher prevalence and a higher associated risk of thromboembolic events in women (14). This epidemiology has shaped the development of risk scores for patients with atrial fibrillation, with an additional point assigned for female sex in the CHA2DS2-VASc score (congestive heart failure/left ventricular dysfunction, hypertension, age 75 years or older, diabetes mellitus, stroke/transient ischemic attack/thromboembolism, vascular disease, age 65 to 74 years, sex category) (15).

The use of risk stratification tools that account for age and for sex-specific differences in stroke incidence is therefore recommended. Women, particularly those older than 75 years, should be actively screened for atrial fibrillation with pulse measurement and electrocardiography (class I, level of evidence B). Antiplatelet therapy is also suggested for women aged 65 years or younger with lone atrial fibrillation (13).
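
For illustration only (not part of the guideline text), the scoring system just described can be written as a short function; the component weights below follow the standard CHA2DS2-VASc definition cited above (15).

def cha2ds2_vasc(chf, hypertension, age, diabetes,
                 prior_stroke_tia_te, vascular_disease, female):
    """CHA2DS2-VASc stroke-risk score for atrial fibrillation (15)."""
    score = 0
    score += 1 if chf else 0                  # congestive heart failure / LV dysfunction
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_tia_te else 0  # stroke / TIA / thromboembolism
    score += 1 if vascular_disease else 0
    score += 1 if female else 0               # the extra point for female sex
    return score

# A 76-year-old woman with hypertension: 2 (age) + 1 (HTN) + 1 (sex) = 4
print(cha2ds2_vasc(False, True, 76, False, False, False, True))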

Migraine with aura

Women are four times more likely than men to have migraine (16). Although the absolute risk of stroke associated with migraine is low, the association between migraine with aura and stroke appears strongest in women younger than 55 years (17-18). Migraine frequency may also be associated with stroke (19).

We therefore suggest reducing migraine headache frequency as a possible strategy to lower stroke risk, although there is no evidence that specific treatment strategies (for example, calcium-channel blockers, β-blockers, and antiepileptic drugs) reduce stroke risk (13).

Given the synergistic relationship between smoking and migraine with aura, we recommend smoking-cessation treatment and counseling for people who smoke and have migraine. Finally, we encourage clinicians to caution women with migraine about oral contraceptive use (13).

Hormonal contraception

Oral contraceptive use is a risk factor for stroke in young women, increasing the risk 1.4- to 2.0-fold compared with that of women who do not use these agents (13). The absolute risk is low, approximately 2 events per 10,000 women per year with the lowest-dose formulation, according to a recent study from Denmark (20). Stroke risk among women using oral contraceptives rises exponentially with age, from 3.4 per 100,000 women aged 15 to 19 years to 64.4 per 100,000 women aged 45 to 49 years (20).

Factors that may further increase stroke risk include previous thromboembolic events, hypertension, smoking, hyperlipidemia, diabetes, and obesity. Accordingly, we recommend identifying women with these risk factors and intensifying efforts to manage modifiable risk factors in those who use oral contraceptives.

The guideline also addresses prothrombotic mutations and biological markers that increase stroke risk in a synergistic manner. Studies show that markers of endothelial dysfunction, such as von Willebrand factor and ADAMTS13 (a disintegrin and metalloproteinase with thrombospondin type 1 motif), increase stroke risk more than 10-fold in women who use oral contraceptives compared with those who do not (21).

Although many prothrombotic mutations increase stroke risk in women who use oral contraceptives, screening for these mutations before starting oral contraceptive therapy is not recommended, because of their low prevalence in otherwise healthy women, especially in the absence of a positive family history (13).

Additional research is needed to better characterize the risk of hemorrhagic stroke with oral contraceptive use, focusing on older women who may use these agents until menopause, members of underrepresented minority groups, genetic makeup, and parity. Study of clinically available biomarkers, such as von Willebrand factor, is warranted in broader populations of women.

Menopause and hormone replacement therapy

Menopause, particularly menopause at a younger age, and stroke risk may be related, but the evidence defining this relationship is inconsistent. Whether natural or surgical, the association of menopause with stroke risk is likewise uncertain. However, hormone therapy use in postmenopausal women is a stroke risk factor unique to women.

Overall, hormone therapy is associated with increased stroke risk and is not recommended for primary or secondary prevention of stroke. Many research gaps remain regarding the magnitude of harm and the trade-offs between the benefits and risks of hormone replacement therapy. These gaps concern the treatment of subgroups of women at high risk for stroke after menopause; the treatment of women early in the peri- or postmenopausal period; and the optimal timing, dose, type, and route of administration that might improve vascular health (13).

Depression and psychosocial stress

Several cohort studies and a meta-analysis have identified depression and psychosocial stress as factors that increase the risk of incident stroke by 25% to 45% in women (22-24). The odds ratios from studies that included both men and women are similar to those from studies that included only men or only women, making it difficult to establish conclusively that women with these conditions are at higher stroke risk than men. More research is needed to understand the subgroups of women at risk, such as those who receive treatment relative to those who do not, and the method of ascertaining depression and psychosocial stress (13).

Stroke prevention strategies

Healthy lifestyle

Counseling is advisable to maintain a healthy weight, eat a healthy diet, abstain from smoking, engage in regular physical activity, consume alcohol in moderation, and pursue activities and interventions aimed at achieving or maintaining normal blood pressure, cholesterol, and blood glucose levels. The guideline highlights the stroke risk of several high-risk conditions, such as obesity, physical inactivity, and the metabolic syndrome, but found few data suggesting that these conditions increase stroke risk disproportionately in women.

However, a recent meta-analysis of studies with more than 750,000 people and more than 12,000 strokes found that women with diabetes have a 27% higher relative risk of stroke than men with diabetes (25). The mechanisms underlying this increased risk are unknown but may be related to a more adverse cardiovascular risk profile during the prediabetic phase in women than in men (25).

This meta-analysis provides further evidence that recognizing risk factors for stroke, especially those that may disproportionately increase risk in women, is fundamental to stroke prevention.

Healthy lifestyle interventions, including regular physical activity, diets such as Dietary Approaches to Stop Hypertension (DASH), abstinence from smoking, moderate alcohol consumption (13), and recognition and treatment of diabetes, are very important. Until sex-specific strategies are tested, the recommendations for stroke prevention in terms of healthy lifestyle interventions are the same for men and women.

Carotid stenosis

Women with symptomatic carotid stenosis (ischemic stroke or transient ischemic attack ipsilateral to the carotid stenosis) may be less likely than men to receive carotid endarterectomy (26). The benefits and risks of carotid angioplasty and stenting are not known to differ between men and women. Data from CREST (Carotid Revascularization Endarterectomy Versus Stenting Trial) showed that women randomly assigned to angioplasty and stenting had a higher rate of periprocedural events than men, with a possible interaction between treatment assignment and sex (p = 0.064) (27).

There are clear sex differences in carotid artery plaque (women have fewer inflammatory features) and a higher risk of periprocedural complications with endarterectomy for asymptomatic stenosis. However, no current evidence suggests that women with symptomatic or asymptomatic carotid stenosis should be managed medically versus surgically (with endarterectomy or carotid artery stenting) any differently from men (13).

The guideline recommendations are therefore the same for both sexes. Many gaps remain in our understanding of sex-specific treatment of carotid disease, so future trials are needed to determine whether surgery is superior to aggressive medical therapy in women with symptomatic carotid stenosis.

Aspirin for stroke prevention

No convincing evidence suggests that a particular antiplatelet therapy, or a particular dose, is more or less beneficial in women than in men, but the protection afforded by aspirin may be specific to particular vascular conditions on the basis of sex. For example, results of the WHS (Women's Health Study), a trial of 100 mg of aspirin every other day versus placebo, showed that aspirin did not reduce the risk of myocardial infarction or death from cardiovascular causes but did decrease stroke (relative risk, 0.83 [95% CI, 0.69 to 0.99]), especially ischemic stroke (relative risk, 0.76 [CI, 0.63 to 0.93]) (28).

A meta-analysis of aspirin in primary prevention showed that women appear to be protected by aspirin from stroke, whereas men are protected from myocardial infarction (29). However, the ATT (Antithrombotic Trialists') Collaboration reported no evidence of a sex difference in any vascular outcome after adjustment for multiple comparisons (30).

Consistent with other published recommendations, our guideline suggests considering aspirin in women older than 65 years if blood pressure is controlled and the benefit of preventing ischemic stroke or myocardial infarction outweighs the risk of gastrointestinal bleeding and hemorrhagic stroke (13). Whether a woman younger than 65 years might benefit from aspirin could be addressed if a sex-specific risk score were available.

New recommendations

Pregnancy and its complications

The risk of stroke during pregnancy is quite low (about 34 per 100,000 deliveries) (31), but the risk is highest in the postpartum period. Although the postpartum time frame is traditionally defined as 6 weeks, a recent study showed that thrombotic events can occur up to 12 weeks after delivery (32). Suspicion of stroke, postpartum vasculopathy (posterior reversible encephalopathy syndrome or reversible cerebral vasoconstriction syndrome), or cerebral venous thrombosis should be heightened for women who develop new-onset headache, blurred vision, seizures, or any other neurologic signs or symptoms during the puerperium (13).

Preeclampsia and eclampsia

Preeclampsia occurs in approximately 5% of pregnancies. It is defined as high blood pressure in pregnancy associated with proteinuria (urine protein ≥ 300 mg/24 h) or with thrombocytopenia, hepatic dysfunction, progressive renal insufficiency, pulmonary edema, or new-onset cerebral or visual disturbances (33). The American Congress of Obstetricians and Gynecologists (formerly the American College of Obstetricians and Gynecologists) published an updated guideline (after ours was already in production) that changed the criteria for preeclampsia to include women without proteinuria if one of the other multisystem features is present (33).

Because of evidence that a history of preeclampsia is associated with a 2-fold higher risk of stroke and a 4-fold higher risk of hypertension later in life, we recommend documenting preeclampsia as a risk factor (class IIa, level of evidence C) (13).

Our intent is to raise awareness that women with a history of preeclampsia would probably benefit from lifestyle change, early cardiovascular risk assessment, and preventive interventions.

Although the evidence for an association between preeclampsia and later hypertension, with its attendant stroke risk, is clear, the current knowledge gap is identifying which women with preeclampsia will develop these complications. More research is needed to understand the biomarkers or other characteristics that may identify the women at highest risk (13).

Moderate hypertension in pregnancy

Another new recommendation is to consider treating women with a systolic blood pressure of 150 to 159 mm Hg or a diastolic blood pressure of 100 to 109 mm Hg of new onset during pregnancy (class IIa, level of evidence B).

This recommendation differs from that of the American Congress of Obstetricians and Gynecologists guideline, which recommends treatment only for patients with a blood pressure greater than 160/110 mm Hg (33). Our new recommendation is based on evidence that treating moderately elevated blood pressure in pregnancy is associated with a 50% reduction in the risk of severe hypertension (relative risk, 0.5 [CI, 0.41 to 0.61]) (34).

New studies, or reanalyses of existing data using the new definition of preeclampsia, may help evaluate the benefit of treating mild to moderate blood pressure elevations during pregnancy. Although safe and effective antihypertensive medications can be used during pregnancy, the risk to the fetus must also be considered (13).
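
As a worked illustration of the two thresholds just discussed (a sketch only, not clinical software), a new-onset reading can be checked against both cutoffs; treating the obstetric threshold as 160/110 mm Hg and above is an assumption made here to avoid a boundary gap.

def pregnancy_bp_category(systolic, diastolic):
    """Classify a new-onset blood pressure reading in pregnancy against
    the thresholds discussed above (illustrative only, not clinical advice)."""
    if systolic >= 160 or diastolic >= 110:
        return "severe: treatment recommended by both guidelines (33)"
    if 150 <= systolic <= 159 or 100 <= diastolic <= 109:
        return "moderate: consider treatment (class IIa, level of evidence B)"
    return "below the moderate-hypertension thresholds discussed here"

print(pregnancy_bp_category(155, 95))   # moderate
print(pregnancy_bp_category(165, 100))  # severe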

Conclusions

These guidelines offer recommendations for stroke prevention in women, emphasizing risk factors that are unique to women or more prevalent in them. Notably, we acknowledge many gaps in the literature that limit the ability to assign a strong level of evidence to sex-specific recommendations.

Some risk scores, such as the Framingham stroke risk score (35), take sex into account but do not allow risk to be calculated in people younger than 54 years. The goals of our guideline include identifying unique risk factors and facilitating the development of new sex-specific tools to estimate stroke risk.

We suggest that more accurate assessment of stroke risk is possible if events occurring in young adulthood that increase risk later in life, such as preeclampsia, are recognized and documented for that purpose.

In addition, risks exclusive to women (oral contraceptive use and hormone therapy) and established risk factors that are more prevalent in older women (hypertension and atrial fibrillation) must be recognized.

We hope this guideline stimulates further research to determine the best approaches to stroke prevention for both men and women.

Source: IntraMed

 
References
 
1 Go AS, Mozaffarian D, Roger VL, Benjamin EJ, Berry JD, Blaha MJ, et al, American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Executive summary: heart disease and stroke statistics—2014 update: a report from the American Heart Association. Circulation. 2014; 129:399-410.

 
2 Sturm JW, Donnan GA, Dewey HM, Macdonell RA, Gilligan AK, Srikanth V, et al. Quality of life after stroke: the North East Melbourne Stroke Incidence Study (NEMESIS). Stroke. 2004; 35:2340-5.
 
3 Bushnell CD, Reeves MJ, Zhao X, Pan W, Prvu-Bettger J, Zimmer L, et al. Sex differences in quality of life after ischemic stroke. Neurology. 2014; 82:922-31.

4 Gray LJ, Sprigg N, Bath PM, Boysen G, De Deyn PP, Leys D, et al, TAIST Investigators. Sex differences in quality of life in stroke survivors: data from the Tinzaparin in Acute Ischaemic Stroke Trial (TAIST). Stroke. 2007; 38:2960-4.
 
5 Gargano JW, Reeves MJ, Paul Coverdell National Acute Stroke Registry Michigan Prototype Investigators. Sex differences in stroke recovery and stroke-specific quality of life: results from a statewide stroke registry. Stroke. 2007; 38:2541-8.
 
6 Gall SL, Tran PL, Martin K, Blizzard L, Srikanth V. Sex differences in long-term outcomes after stroke: functional outcomes, handicap, and quality of life. Stroke. 2012; 43:1982-7.
 
7 Goldstein LB, Bushnell CD, Adams RJ, Appel LJ, Braun LT, Chaturvedi S, et al, American Heart Association Stroke Council. Guidelines for the primary prevention of stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2011; 42:517-84.
 
8 Furie KL, Kasner SE, Adams RJ, Albers GW, Bush RL, Fagan SC, et al, American Heart Association Stroke Council, Council on Cardiovascular Nursing, Council on Clinical Cardiology, and Interdisciplinary Council on Quality of Care and Outcomes Research. Guidelines for the prevention of stroke in patients with stroke or transient ischemic attack: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2011; 42:227-76.
 
9 Mosca L, Benjamin EJ, Berra K, Bezanson JL, Dolor RJ, Lloyd-Jones DM, et al. Effectiveness-based guidelines for the prevention of cardiovascular disease in women—2011 update: a guideline from the American Heart Association. Circulation. 2011; 123:1243-62.
 
10 Jacobs AK, Kushner FG, Ettinger SM, Guyton RA, Anderson JL, Ohman EM, et al. ACCF/AHA clinical practice guideline methodology summit report: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. Circulation. 2013; 127:268-310.

11 Giralt D, Domingues-Montanari S, Mendioroz M, Ortega L, Maisterra O, Perea-Gainza M, et al. The gender gap in stroke: a meta-analysis. Acta Neurol Scand. 2012; 125:83-90.
 
12 Lloyd-Jones DM, Evans JC, Levy D. Hypertension in adults across the age spectrum: current outcomes and control in the community. JAMA. 2005; 294:466-72.
 
13 Bushnell C, McCullough LD, Awad IA, Chireau MV, Fedder WN, Furie KL, et al, American Heart Association Stroke Council, Council on Cardiovascular and Stroke Nursing, Council on Clinical Cardiology, Council on Epidemiology and Prevention, and Council for High Blood Pressure Research. Guidelines for the prevention of stroke in women: a statement for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2014; 45:1545-88.
 
14 Fang MC, Singer DE, Chang Y, Hylek EM, Henault LE, Jensvold NG, et al. Gender differences in the risk of ischemic stroke and peripheral embolism in atrial fibrillation: the Anticoagulation and Risk Factors in Atrial Fibrillation (ATRIA) study. Circulation. 2005; 112:1687-91.
 
15 Lip GY, Nieuwlaat R, Pisters R, Lane DA, Crijns HJ. Refining clinical risk stratification for predicting stroke and thromboembolism in atrial fibrillation using a novel risk factor-based approach: the Euro Heart Survey on atrial fibrillation. Chest. 2010; 137:263-72.
 
16 Merikangas KR. Contributions of epidemiology to our understanding of migraine. Headache. 2013; 53:230-46.
 
17 Kurth T, Slomke MA, Kase CS, Cook NR, Lee IM, Gaziano JM, et al. Migraine, headache, and the risk of stroke in women: a prospective study. Neurology. 2005; 64:1020-6.
 
18 Schürks M, Rist PM, Bigal ME, Buring JE, Lipton RB, Kurth T. Migraine and cardiovascular disease: systematic review and meta-analysis. BMJ. 2009; 339:b3914.
 
19 Kurth T, Schürks M, Logroscino G, Buring JE. Migraine frequency and risk of cardiovascular disease in women. Neurology. 2009; 73:581-8.
 
20 Lidegaard Ø, Løkkegaard E, Jensen A, Skovlund CW, Keiding N. Thrombotic stroke and myocardial infarction with hormonal contraception. N Engl J Med. 2012; 366:2257-66.
 
21 Andersson HM, Siegerink B, Luken BM, Crawley JT, Algra A, Lane DA, et al. High VWF, low ADAMTS13, and oral contraceptives increase the risk of ischemic stroke and myocardial infarction in young women. Blood. 2012; 119:1555-60.
 
22 O'Donnell MJ, Xavier D, Liu L, Zhang H, Chin SL, Rao-Melacini P, et al, INTERSTROKE investigators. Risk factors for ischaemic and intracerebral haemorrhagic stroke in 22 countries (the INTERSTROKE study): a case-control study. Lancet. 2010; 376:112-23.
 
23 Pan A, Sun Q, Okereke OI, Rexrode KM, Hu FB. Depression and risk of stroke morbidity and mortality: a meta-analysis and systematic review. JAMA. 2011; 306:1241-9.
 
24 Pan A, Okereke OI, Sun Q, Logroscino G, Manson JE, Willett WC, et al. Depression and incident stroke in women. Stroke. 2011; 42:2770-5.
 
25 Peters SA, Huxley RR, Woodward M. Diabetes as a risk factor for stroke in women compared with men: a systematic review and meta-analysis of 64 cohorts, including 775 385 individuals and 12 539 strokes. Lancet. 2014.

26 Poisson SN, Johnston SC, Sidney S, Klingman JG, Nguyen-Huynh MN. Gender differences in treatment of severe carotid stenosis after transient ischemic attack. Stroke. 2010; 41:1891-5.
 
27 Howard VJ, Lutsep HL, Mackey A, Demaerschalk BM, Sam AD 2nd, Gonzales NR, et al, CREST investigators. Influence of sex on outcomes of stenting versus endarterectomy: a subgroup analysis of the Carotid Revascularization Endarterectomy Versus Stenting Trial (CREST). Lancet Neurol. 2011; 10:530-7.
 
28 Ridker PM, Cook NR, Lee IM, Gordon D, Gaziano JM, Manson JE, et al. A randomized trial of low-dose aspirin in the primary prevention of cardiovascular disease in women. N Engl J Med. 2005; 352:1293-304.
 
29 Berger JS, Roncaglioni MC, Avanzini F, Pangrazzi I, Tognoni G, Brown DL. Aspirin for the primary prevention of cardiovascular events in women and men: a sex-specific meta-analysis of randomized controlled trials. JAMA. 2006; 295:306-13.
 
30 Baigent C, Blackwell L, Collins R, Emberson J, Godwin J, Peto R, et al, Antithrombotic Trialists' (ATT) Collaboration. Aspirin in the primary and secondary prevention of vascular disease: collaborative meta-analysis of individual participant data from randomised trials. Lancet. 2009; 373:1849-60.
 
31 James AH, Bushnell CD, Jamison MG, Myers ER. Incidence and risk factors for stroke in pregnancy and the puerperium. Obstet Gynecol. 2005; 106:509-16.
 
32 Kamel H, Navi BB, Sriram N, Hovsepian DA, Devereux RB, Elkind MS. Risk of a thrombotic event after the 6-week postpartum period. N Engl J Med. 2014; 370:1307-15.
 
33 American College of Obstetricians and Gynecologists. Hypertension in pregnancy. Report of the American College of Obstetricians and Gynecologists Task Force on Hypertension in Pregnancy. Obstet Gynecol. 2013; 122:1122-31.
 
34 Abalos E, Duley L, Steyn DW, Henderson-Smart DJ. Antihypertensive drug therapy for mild to moderate hypertension during pregnancy. Cochrane Database Syst Rev. 2007; CD002252.
 
35 Wolf PA, D'Agostino RB, Belanger AJ, Kannel WB. Probability of stroke: a risk profile from the Framingham Study. Stroke. 1991; 22:312-8.


Adaptation to the Environment [562]

by System Administrator - Wednesday, July 16, 2014, 20:07
 

Adaptation to the environment

Source: http://hipercognicion.blogspot.com/2011/12/adaptacion-al-medio.html

Human beings have three different ways of adapting to the environment in which they live. The first is the evolutionary route, shared with all other living species. Through small genetic changes, organisms of the same species confront the environment in different ways. Those expressing characteristics that make them strong against the environment will survive and reproduce. Meanwhile, those that die before reproducing because their characteristics are incompatible with the environment cannot pass those traits on to the next generation. In this way, some genetic characteristics persist in the species while others disappear. This form of adaptation is extremely slow and requires many generations to take effect. It is the mechanism that gave rise, for example, to bipedalism, abstract reasoning, and the opposable thumb.

The second form of adaptation is physiological. The organism has several adaptation mechanisms that allow it to modify its structure and functioning to suit the environment. These processes can take anywhere from a few seconds to several years. Among the fastest are the opening of the iris to adapt to excessive light and the secretion of adrenaline to prepare the body for fight or flight. Slower processes include skin pigmentation in response to the intensity of sunlight and the muscle development that allows us to perform more intense work.

The preceding processes are shared with many animal species, but humans also have an adaptive mechanism that makes them unique: culture. Culture is wisdom accumulated and transmitted from generation to generation. Through this inherited knowledge, humans learn to protect themselves from the cold, build shelter, and obtain food. A human being who grew up isolated from his peers, and thus deprived of this cultural knowledge, would have some capacity to survive but would be at a clear adaptive disadvantage; his life expectancy would be markedly reduced, and with it his chances of ever reproducing.

Adaptation of the Human Body to the Environment [512]

by System Administrator - Saturday, July 12, 2014, 16:46
 

Adaptation of the human body to the environment

Source: http://hipercognicion.blogspot.com/2013/02/la-adaptacion-del-cuerpo-humano-al-medio.html

The human body has three forms of adaptation to the environment. The first is evolutionary, through the genes, and requires generational change to bear fruit. That is, if an individual or a group of individuals is subjected to environmental pressure, their descendants will be better adapted. In other words, it is not exactly the individual that adapts, but the species. This is a slow process, requiring many thousands of years for change to appear; it is responsible for bipedalism, our meat-eating diet, and the opposable thumb. There is a second type of adaptation, which affects the individual exclusively and is based on progressive changes that fit the body to the environment. Examples include skin pigmentation from sun exposure and the muscle gained through physical effort. Finally, there is a third class of physical adaptations to the environment, which are rapid or even instantaneous, usually reversible on the same time scale, and whose function is to let us adjust to changing conditions. These processes include the dilation of the pupil to adapt to darkness, the contraction or dilation of skin pores, and the secretion of adrenaline to cope with sudden exertion.


Addressing security with the board [853]

by System Administrator - Thursday, September 11, 2014, 13:46
 

Addressing security with the board: Tips for both sides of the table

Clearly security is a boardroom topic, but the trick is to get both sides on the same page

By Steve Ragan

In the boardroom, when it comes to addressing the topic of security, there's tension on both sides of the table.

It doesn't happen all the time, but when it does, the cause of the friction is usually security executives and board members – each with vastly different areas of expertise and interest – pushing to get what they want out of the discussion while keeping business goals intact.

Stephen Boyer, the co-founder and CTO of BitSight Technologies, a company that uses public data to rate the security performance of an organization, shared some thoughts with CSO recently, geared towards moving the discussions forward past the deadlock.

Since there are two sides to the issue, Boyer shared two sets of tips; one set for the board and the other set for the executives speaking to them.

As a board member

Frame expectations clearly

Communication goes both ways. It's essential to make sure the security team understands what information is required, how discussions should be framed, and the level of abstraction you require to make decisions. Otherwise, you risk sitting through conversations that fail to address the issues the business cares about most.

"In no way should every board member have to act as a security expert. But, in today’s world, cyber risks are a major part of managing risk in a business. Therefore board members need to make it known what they see as critical and how to begin those conversations," Boyer explained.

Are you talking about security or risk?

Performance is important, but instead of focusing on specific technologies, policies and procedures, evaluate what the business is doing to proactively mitigate cyber risks and what those risk levels are.

For example, are there risks in the supply chain that your organization could be ignoring? With each strategic decision made, are the organization's risks increasing or decreasing?

"Understanding the security performance of a company is important, but managing the risks associated with security is crucial. As in other business areas, boards need to be aware of the sources of risk and communicate clearly what is acceptable for the business. From there, it’s not up to the board to dictate what technologies and policies should be in place, but to guide their teams when it’s necessary to take action to reduce or transfer security risks," Boyer said.

Decide on the key indicators you want to monitor and be consistent

You don't need to be in the trenches to understand security posture if you choose the right data points to assess. Work with your team to choose meaningful, data-driven metrics that demonstrate both performance and effectiveness. It matters less how frequently you are attacked if your team is effectively remediating threats before they become an issue.

"One of the issues we can’t stress enough is that to arrive at insight and action ability, it’s important that all parties agree on a set of metrics that are objective and consistent.  The goal is to paint a clear picture of security performance over time, and to gain context about where your company sits relative to peers and competitors within your industry," Boyer added.

Focus on a fixed set of key indicators, and benchmark performance over time to gain valuable insight into the issues affecting your posture and effectiveness. Moreover, correlate performance changes with key events to gain an understanding about the impact of technology investments, headcount and policy decisions.

In short, shift the conversation from a numbers game to a performance review, as you would in other areas of the business.

As a security executive

Always provide context.

Historical trend data and peer comparisons are key points for helping leaders "get it" when the spotlight is turned on security performance.

Being able to show how your organization compares to others in your industry, provides context that is often lacking from discussions about cyber security. Your board members bring expertise from their personal experience - tying performance metrics back to companies they've managed or advised can help. 

Demonstrating that your company is more or less secure than others in your sector can help leaders justify strategic changes and investments that can improve your team's effectiveness.

"Context is key when it comes to security performance. If board members hear that overall security is going well, it gives them little information to bring cyber security into strategic decisions. A key way to add context to these discussions is through industry and peer benchmarking. If a security professional can tell the board, 'Here is where we are in relation to our industry and this is what I need for us to improve.' That is a strong and actionable statement," Boyer said.

Tell a story & teach a lesson.

Use this time to train your board members and fellow executives to be alert. Tell them what specific threats are targeting your company, what the attacks look like and what they can do to help avoid a breach.

If a peer has been breached and you fear you might also be a target, explain what conditions existed to allow the attack to happen and what you're doing to make the company more secure. By focusing on the specific threats your company is facing, instead of on issues you've already handled or the technical specifications of an attack, you can help prevent attacks from spreading.

"While conversations should stay high level when it comes to security, boards should be informed of major threats facing their company. In our recent analysis of the Education sector, we found that the Flashback virus was widespread on college networks," Boyer said.

"For a university security practitioner, this is crucial information to convey to the school’s board and more importantly, to answer the fundamental question: 'What does this threat mean for our business?'"

Answer the questions being asked.

Your metrics should paint a picture that people outside the security team can understand. Reduce the amount of technical jargon and stat charts on your slides and focus on measuring what matters to your audience. The end result should communicate whether you are more or less secure, and why.

"A lot of times we hear from companies that talking with security teams can be intimidating because not everyone in the room is a technological expert, or at the same level of awareness as the pros.  The way to face this challenge is to avoid walking into the room with an eye-chart packed presentation, but to instead focus on only showing the metrics that answer the questions your board is asking.  This ties back to knowing your audience and making sure you speak in a common language," Boyer said.

Link: http://www.csoonline.com


Addictions (Adicciones) [534]

by System Administrator - Saturday, July 12, 2014, 17:54
 

Understanding addictions

Source: http://hipercognicion.blogspot.com/2009/05/vicios.html

Let us call addictions those practices that produce immediate pleasure, that tend to carry delayed negative consequences, and that are usually extremely difficult to give up. Some addictions are well known, others less so. Almost everyone easily identifies smoking, drug use, or alcohol as addictions. Fewer people recognize as addictions uncontrolled eating, unrestrained sex, compulsive shopping, or obsessive cleaning. None of these activities is considered pathological in itself; however, any of them can spiral out of control if the subject's emotional stability runs into difficulties. The test for whether one of these activities has become uncontrolled is to attempt to abandon it temporarily. If the individual passes the test, it is most likely not an uncontrolled behavior but a more or less healthy habit. But if he is incapable of going several days without the activity, it has probably become pathological. Once the addiction is identified, we will probably want to rid ourselves of it as soon as possible, yet find ourselves unable. Any attempt to eliminate these activities from our lives will fail if we view them in isolation, without considering the rest of our circumstances. Put another way, we will not free ourselves of these activities unless we change the rest of our life. Addictions provide an immediate pleasure that is sought mainly when our lives are not pleasurable. Intervention should therefore focus on making our lives more pleasurable before withdrawing a behavior that supplies that immediate pleasure. It is common for a person to stop smoking, drinking, or eating compulsively upon starting a new stage in life, such as taking up a sport, beginning studies, or starting a new relationship. At Hipercognición we advocate a full life as the natural, spontaneous remedy for addictive behaviors.


DNA (ADN) [60]

by System Administrator - Wednesday, August 27, 2014, 21:13
 

 

Deoxyribonucleic acid, usually abbreviated DNA (ADN in Spanish), is a nucleic acid that contains the genetic instructions used in the development and functioning of all known living organisms and some viruses. It is responsible for hereditary transmission. The main role of the DNA molecule is the long-term storage of information.

Source: http://es.wikipedia.org/wiki/ADN


ADVANCED PERSISTENT THREATS [797]

by System Administrator - Thursday, June 25, 2015, 22:59
 

3 BIG DATA SECURITY ANALYTICS TECHNIQUES YOU CAN APPLY NOW TO CATCH ADVANCED PERSISTENT THREATS

By Randy Franklin Smith and Brook Watson
Commissioned by HP

In this unprecedented period of advanced persistent threats (APTs), organizations must take advantage of new technologies to protect themselves. Detecting APTs is complex because unlike intensive, overt attacks, APTs tend to follow a “low and slow” attack profile that is very difficult to distinguish from normal, legitimate activity—truly a matter of looking for the proverbial needle in a haystack. The volume of data that must be analyzed is overwhelming. One technology that holds promise for detecting these nearly invisible APTs is Big Data Security Analytics (BDSA).

In this technical paper, I will demonstrate three ways that the BDSA capabilities of HP ArcSight can help to fight APTs:

  1. Detecting account abuse by insiders and APTs
  2. Pinpointing data exfiltration by APTs
  3. Alerting you of new program execution (a small illustrative sketch follows below)

Please read the attached whitepapers.
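
As an illustration of technique 3, the sketch below flags the first execution of a program on each host. It is a minimal toy, not HP ArcSight's implementation; it assumes process-creation events (for example, Windows Security event 4688 or Sysmon event 1) have already been exported as records with host and image fields.

def new_program_alerts(events, baseline):
    """Yield an alert the first time a program image is seen on a given host."""
    for ev in events:
        key = (ev["host"], ev["image"].lower())
        if key not in baseline:
            baseline.add(key)
            yield "ALERT first execution of %s on %s" % (ev["image"], ev["host"])

baseline = set()   # in a real deployment, seed this from a learning period
events = [
    {"host": "ws01", "image": r"C:\Windows\System32\svchost.exe"},
    {"host": "ws01", "image": r"C:\Users\bob\AppData\Temp\x.exe"},
    {"host": "ws01", "image": r"C:\Windows\System32\svchost.exe"},  # no second alert
]
for alert in new_program_alerts(events, baseline):
    print(alert)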


Affection in Socialization [521]

by System Administrator - Saturday, July 12, 2014, 17:10
 

Affection in socialization

Source: http://hipercognicion.blogspot.com/2011/07/el-afecto-en-la-socializacion.html

Societies are built with each new member born into them. These members are the foundation that will one day bear the destiny of society, which is why we must strive to give them the best human quality for that task. Affection is the energy that drives the entire social machinery, and it originates in the special relationship between mother and child during the first months of life and, to a lesser extent, between father and child. It is during this stage that the child learns what affection is, what value it has, and why it must be earned. Children separated from their mothers during the first years of life are more vulnerable to all kinds of illness and may easily develop psychopathy, a disorder that leads the individual to live on the margins of society and to regard it as hostile. Affection is the universal language spoken by men and women within humanity; through it they build institutions, roles, and social structures. The meaning of life is oriented toward obtaining affection in its many manifestations: as love, as recognition, as admiration, and so on. Moreover, affection returns to the one who gives it. New members of society will seek their parents' affection in society, and society will offer them rewards in which they will see that affection, generalizations of parental affection.


Credit Rating Agencies [195]

by System Administrator - Wednesday, January 8, 2014, 18:28
 


Credit rating agencies are companies that, on behalf of a client, rate particular financial products or the assets of companies, states, or governments. Their ratings assess the risk of default and the deterioration of the issuer's solvency. For investors, they widen the range of options and provide easy-to-use measures. In general this increases market efficiency by reducing costs for both lender and borrower. This mechanism can affect the availability of venture capital, which is essential for entrepreneurs and startups. The rating determines the interest rate at which financing will be granted.

Source: http://es.wikipedia.org/wiki/Agencia_de_calificación_de_riesgos


AGESIC [202]

by System Administrator - Saturday, July 12, 2014, 17:57
 

AGESIC

AGESIC is Uruguay's Agency for Electronic Government and the Information Society (Agencia de Gobierno Electrónico y Sociedad de la Información), an executing unit reporting to the Presidency of the Republic. It has technical autonomy. Its purpose is to promote the Information Society, fostering inclusion and equity in the use of ICT (Agenda Digital Uruguay), and to lead the country's e-Government strategy and its implementation, as the basis for an efficient, citizen-centered State.

Website: http://agesic.gub.uy/


Agile Operations [1401]

de System Administrator - martes, 8 de septiembre de 2015, 21:30
 

eGuide: Agile Operations

In the application economy, constant application updates are table stakes. To gain a competitive advantage, you must deliver the very best user experience by ensuring those improvements are based on real user feedback and application and infrastructure performance - from mobile to mainframe, on-premise or in the cloud. End-to-end monitoring solutions from CA can give your enterprise the holistic monitoring and in-depth management capabilities it needs to turn this feedback into valuable functions and reduce mean-time-to-recover.

Read this eGuide to learn how you can enhance user experience by leveraging real-time insights from your entire application and infrastructure to drive improvements.

Please read the attached eGuide.


AI-powered Google services [1747]

by System Administrator - Friday, April 7, 2017, 15:08
 

The phone personalizes the machine-learning model locally, based on how it is used (A). Many users' updates are aggregated (B) to form a consensus change (C) to the shared model, after which the procedure is repeated. Image: Google


Smarter Android: AI-powered Google services will get better as you use them

By Nick Heath

The tech giant is testing whether its mobile services could use an approach called Federated Learning to improve their machine-learning models.

Google is introducing a new way for its AI-powered services to improve as people use them.

The tech giant is testing whether its mobile services could use an approach called Federated Learning to refine their underlying machine-learning models.

For each Google service, a machine-learning model is downloaded to a mobile device. Federated Learning allows these models to improve by learning from data on the phone, and then to summarize any local changes as a small update. This update is then encrypted and sent back to the Google cloud, where it is averaged with other user updates to improve the shared backend model.
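
A minimal sketch of the federated-averaging idea described above, under simplifying assumptions (a linear model, simulated devices, no encryption): each simulated phone computes only a weight delta from its private data, and the server averages those deltas into the shared model. This is illustrative, not Google's code.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally and return only the weight delta (the 'small update')."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient on local data
        w -= lr * grad
    return w - weights                     # summary of the local changes

def federated_round(weights, devices):
    """Average many device updates into one consensus change to the model."""
    updates = [local_update(weights, X, y) for X, y in devices]
    return weights + np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Simulate a few phones, each holding private local data that never leaves it.
devices = []
for _ in range(10):
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, devices)
print(w)  # approaches true_w without pooling any raw data in one place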

The continual refinement of the machine learning model stored on the phone benefits the end user, as improvements no longer depend solely on the improved machine learning models being downloaded to their phone.

Google says the approach also has the advantage of improving privacy, as all the training data remains on the device, and no individual updates are stored in the cloud. Updates will only be decrypted and averaged with those from other phones once hundreds or thousands of similar updates have been gathered.

"Federated Learning allows for smarter models, lower latency, and less power consumption, all while ensuring privacy," Google research scientists Brendan McMahan and Daniel Ramage said in a blog post.

"And this approach has another immediate benefit: in addition to providing an update to the shared model, the improved model on your phone can also be used immediately, powering experiences personalized by the way you use your phone."

Google is testing the Federated Learning approach in Gboard, a keyboard for Android handsets. In this instance, the machine learning model will remember which suggested inputs and information the user clicked on and use that data to improve future suggestions.

The blog post goes into some detail about the complexity of introducing the Federated Learning approach, including mentioning that the on-device training uses a miniature version of TensorFlow, Google's open-source software library for machine learning.

Google says that the Federated Learning approach can't be used to help solve every machine learning challenge, with exceptions including using labelled images to teach a machine to recognize the breed of dog in a photo.

In a somewhat similar move, last year Apple said it would approach machine learning in a way that respects personal data, by using what it called 'differential privacy'. This approach allows it to analyze customer data for trends without being able to identify any particular individuals: for example, to be able to spot trending words that need to be added to the QuickType keyboard suggestions.


Airbus Swears Its Pod/Car/Drone Is a Serious Idea Definitely [1734]

by System Administrator - Friday, March 24, 2017, 21:51
 

Credit: ITALDESIGN


Airbus Swears Its Pod/Car/Drone Is a Serious Idea Definitely

by JACK STEWART 

AIR TRAVEL NEVER involves air travel alone. City to city transport usually goes something like car or cab to train to shuttle to the terminal where you catch a plane, only to reverse the process at the other end, often with a little running somewhere along the line for good measure. Airbus came up with a crazy idea to change all of that with Pop.Up, a conceptual two-passenger pod that clips to a set of wheels, hangs under a quadcopter, links with others to create a train, and even zips through a hyperloop tube.

This crazy concept blurs the once-firm lines between planes, trains, and automobiles to let people take to the skies when traffic backs up. In other words, it’s a flying car. A really cool flying car, cooked up with help from Italdesign and unveiled at the Geneva Motor Show, but still, a flying car. Farfetched, yes, but Airbus says it is taking the idea seriously. “Adding the third dimension to seamless multi-modal transportation networks will, without a doubt, improve the way we live, and how we get from A to B,” said Mathias Thomsen, general manager for urban air mobility at Airbus.

Around town, the carbon fiber pod couples with an electric ground module and rolls along on a rather conventional four wheels. It’s autonomous, of course, because everything is in the future. Don’t want to creep along in gridlocked traffic? Simply summon an eight-rotor air module that resembles a supersized consumer drone. Clip in, take off, and enjoy a range that, should this technology ever actually work, will max out at about 60 miles. Whichever mode you choose, the Pop.Up parks itself at a recharging station upon arrival.

It all sounds crazy, but some big names see it happening. Dubai, the most superlative of emirates, plans to put people-carrying drones in service later this year. Last year, Uber said it could launch a flying car service within a decade. Such things will help wealthy (and brave) commuters skip over traffic, but don’t solve the last mile problem unless your destination boasts a rooftop landing pad. That’s where the Pop.Up idea enjoys an edge. It eliminates a key point of friction in multi-modal transportation: changing from one mode to the next. You just chill out in your pod.

Wanna hear the really crazy part? This scheme isn’t as far-fetched as you might think.

“This is getting to be less of a technology challenge,” says Pat Anderson, who is developing similar vehicle concepts at Embry Riddle Aeronautical University. He believes autonomous software and electric propulsion will subject the aviation industry to radical changes. Say goodbye to wings on tubes, at least for short distances: batteries and rotors will win the day. That doesn’t mean there aren’t hurdles, not the least of which are federal regulations.

“We designed these regulations in the 50s and 60s, and they go largely unchanged, due to inertia,” Anderson says. The General Aviation Manufacturers Association is pushing hard to move certification of novel planes away from the current fixed federal standards, and into a less prescriptive model. That will mean new ideas can be approved more quickly, similar to the way that safety devices in cars are. The FAA has already gotten on board with the changes, for small aircraft, but designing and applying rules for an entirely new class of aircraft will take years.

As humans pack into increasingly dense global mega-cities, they’ll need new ideas for transport to avoid gridlock. But Airbus and Italdesign haven’t committed to any sort of timeframe for this one, so (unless you live in Dubai) gazing up longingly through your car’s sunroof is as close to zooming through the sacré bleu sky as you’re likely to get, for now.

Link: https://www.wired.com


Alan Parsons [278]

by System Administrator - Monday, January 13, 2014, 20:04
 


Alan Parsons (London, 20 December 1948) is an English audio engineer, producer, composer, multi-instrumentalist, and performer.

Source: http://es.wikipedia.org/wiki/Alan_Parsons


Achieving Goals [516]

by System Administrator - Saturday, July 12, 2014, 16:55
 

Achieving goals

Link: http://hipercognicion.blogspot.com/2009/06/objetivos.html

Life is goals. When you run out of them, you die. If you want a long and stimulating life, you must distribute goals across your entire life project, or you risk exhausting them too early. You can imagine these goals as a wealth we all possess at birth. You can spread this wealth over your lifetime or squander it in the early years and deplete it. If you are in too much of a hurry to achieve a goal, or you try to reach it by shortcuts, you may attain it but you will not know how to value it; you will set ever more ambitious targets because none will satisfy you, and you will waste your life searching for meaning and wishing it were all over. And that will be because you left the best part along the way, which is enjoying each small step taken, each small goal achieved.


Alexander Graham Bell [61]

by System Administrator - Monday, February 3, 2014, 16:54
 


Alexander Graham Bell (Edinburgh, Scotland, United Kingdom, 3 March 1847 - Beinn Bhreagh, Canada, 2 August 1922) was a British scientist and inventor. He contributed to the development of telecommunications and aviation technology. His father, grandfather, and brother were all associated with work on elocution and speech (his mother and his wife were deaf), which profoundly influenced Bell's work and his research into hearing and speech. This led him to experiment with hearing devices. His research led him to seek the American patent for the telephone, which he obtained in 1876, although the device had already been developed earlier by the Italian Antonio Meucci, who was recognized as its inventor on 11 June 2002.

Source: http://es.wikipedia.org/wiki/Alexander_Graham_Bell


ALGORITHM HUNTS RARE GENETIC DISORDERS FROM FACIAL FEATURES IN PHOTOS [485]

by System Administrator - Tuesday, July 29, 2014, 21:11
 

 

ALGORITHM HUNTS RARE GENETIC DISORDERS FROM FACIAL FEATURES IN PHOTOS

Even before birth, concerned parents often fret over the possibility that their children may have underlying medical issues. Chief among these worries are rare genetic conditions that can drastically shape the course and reduce the quality of their lives. While progress is being made in genetic testing, diagnosis of many conditions occurs only after symptoms manifest, usually to the shock of the family.

A new algorithm, however, is attempting to identify specific syndromes much sooner by screening photos for characteristic facial features associated with specific genetic conditions, such as Down’s syndrome, Progeria, and Fragile X syndrome.

Researchers at the University of Oxford utilized machine learning to train the facial recognition software to identify features from a database of 2,878 images acquired via the web. Of these, 1,363 photos were of individuals with eight known developmental disorders, while the more than 1,500 other images were of controls. The software maps faces and accounts for lighting, image quality, and other factors, just as many other photo applications utilized by Google and Facebook can. Individuals with similar traits are clustered together, and the software improves as more individuals with related traits are identified.
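
As a rough illustration of the clustering idea just described (hypothetical code, not the Oxford software): given numeric face-feature vectors, a query face can be ranked against the mean feature vector of each labeled group. Feature extraction itself, landmark detection and lighting correction, is assumed to already exist.

import numpy as np

def centroids(features, labels):
    """Mean feature vector per labeled group (the cluster centers)."""
    return {lab: features[labels == lab].mean(axis=0) for lab in set(labels)}

def rank_syndromes(face, cents):
    """Rank candidate groups by distance from the face to each centroid."""
    dists = {lab: np.linalg.norm(face - c) for lab, c in cents.items()}
    return sorted(dists, key=dists.get)

# Toy data: 2-D "features" for two made-up groups, plus one query face.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array(["control"] * 50 + ["syndrome_A"] * 50)
print(rank_syndromes(np.array([3.8, 4.1]), centroids(X, y)))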

 

The work was published in the journal eLife.

“A diagnosis of a rare genetic disorder can be a very important step. It can provide parents with some certainty and help with genetic counseling on risks for other children or how likely a condition is to be passed on,’ stated lead researcher Christoffer Nellåker in the release. ‘A diagnosis can also improve estimates of how the disease might progress, or show which symptoms are caused by the genetic disorder and which are caused by other clinical issues that can be treated.”

It’s believed that 30-40% of rare genetic disorders, which may affect up to 1 in 17 people, involve facial phenotypes or changes to the skull or face, making this a promising approach for detection.

Nellåker added, “A doctor should in future, anywhere in the world, be able to take a smartphone picture of a patient and run the computer analysis to quickly find out which genetic disorder the person might have.”

Learn more about the research at University of Oxford news, “Computer-aided diagnosis of rare genetic disorders from family snaps” or read the study here.

[Image credits: University of Oxford, eLife]

This entry was posted in Genetics, Longevity And Health and tagged AI, facial features, facial recognition, genetic disorders, machine learning, University of Oxford.

Link: http://singularityhub.com/2014/07/09/algorithm-hunts-rare-genetic-disorders-from-facial-features-in-photos/


Algorithm (Algoritmo) [58]

by System Administrator - Thursday, January 2, 2014, 20:54
 

In mathematics, logic, computer science, and related disciplines, an algorithm (from the Greek and Latin dixit algorithmus, in turn from the name of the Persian mathematician al-Khwarizmi) is a predefined set of well-defined, ordered, and finite instructions or rules that allows an activity to be carried out through successive steps, leaving no doubt for whoever performs it. Given an initial state and an input, following the successive steps leads to a final state and yields a solution. Algorithms are the object of study of algorithmics.

Source: http://es.wikipedia.org/wiki/Algoritmo
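A classic concrete example is Euclid's algorithm for the greatest common divisor: a finite, well-defined sequence of steps that takes an input to a final state. A minimal Python rendering:

    def gcd(a, b):
        """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).

        Each step is well defined and the remainder strictly shrinks,
        so the procedure is guaranteed to terminate.
        """
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # initial state (48, 18) -> final state: 6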

Logo KW

ALM SaaS tools and services [618]

de System Administrator - viernes, 1 de agosto de 2014, 21:24
 

Five hot ALM SaaS tools and services

Explore various ALM cloud offerings

by Amy Reichert

The move to provide application lifecycle management tools via software as a service is on. The number of tools is currently somewhat limited, but this area should expand in the near future.

Unlike traditional software purchases, obtaining application lifecycle management (ALM) tools via software as a service (SaaS) carries no licensing or maintenance fees. Instead, ALM SaaS users typically pay a subscription to the service. The standard subscription includes the cost of maintenance, upgrades and support. Additionally, the software is not taking up space on your network but is accessible via the cloud. The advantages of an ALM cloud model are generally reduced cost, remote management of software updates, up-to-date security measures, customizable features, accessibility from nearly anywhere with an Internet connection and fast installation.

Buying into SaaS does not free software developers from concerns about security, data storage and uptime. In a traditional ALM software model of purchase and install, the IT organization assumes the ongoing responsibility for covering these concerns. With the cloud, the ALM service provider can be expected to meet the minimum requirements of the service-level agreement (SLA). Still, developers will have to ensure that the SLA covers their needs and verify that the provider keeps their end of the deal.

Here are five ALM cloud options for companies considering this service.

Reduce the costs of project management with HP ALM SaaS

 

Hewlett Packard (HP) offers ALM SaaS both directly and through their partners.

HP's services include load, performance and functional test management software via their Quality Center ALM tool. Subscribers receive access for a specific user level in multiple languages, support site access, secure sockets layer (SSL) data transmission, customization and migration support. There is no fixed time period or maintenance fee.

The benefits of HP ALM SaaS are reduced cost and maintenance for testing tools. There is also less hassle over who is responsible for keeping the system running. The testing tools are accessible from anywhere there is an Internet connection, so offshore and onshore teams use the same instance.

Gain accessibility and versatility with JIRA ALM SaaS

Atlassian offers JIRA services equipped with ALM SaaS.

JIRA ALM SaaS has several add-on tools, including Zephyr for test case management. SynapseRT is an add-on that includes traceability functionality for mapping requirements to test cases. Additionally, Behave for JIRA is useful for Agile teams that want to merge test management with user story or feature acceptance. With Behave, testers can use story acceptance criteria to create automated tests with Cucumber. Testers can choose to write either manual or automated tests.

The JIRA tools are accessible to the full development team, which may reduce the total number of tools the team ends up using. Test cases can be exported from JIRA and used with Cucumber directly if needed.

Collaborate and share with TOMOS

 

TOMOS claims to be a lightweight tool for ALM SaaS that allows fluid collaboration among the whole development team, including managers.

TOMOS is a centralized Web 2.0 application that is accessible with an Internet connection. All hosting, technical support and software improvements are included. The tool provides a large number of features, such as version and build management, test case development, test execution and defect management. The service also includes a collaboration space to store documents, requirements, screenshots and other project-related data, as well as metrics dashboards that give management informative visual updates on project status. Furthermore, TOMOS includes customizable reports and a social network for users to further collaborate and exchange information.

Microsoft shops embrace Visual Studio Online

 

Microsoft Visual Studio Online is the cloud-based SaaS version of Visual Studio and of Microsoft's ALM management tool, Team Foundation Server. It currently operates with two source control systems: Team Foundation Version Control and Git.

The SaaS ALM offering includes the most recent features, is updated on a continuous basis and is built on the Windows Azure cloud. Users can access the service with an existing Microsoft account. The cloud offering is missing functions that are part of the traditional purchase, including lab management, Structured Query Language (SQL) reporting, business intelligence and SharePoint. Microsoft's ALM SaaS offering does feature an automatically scalable build farm and scalable load testing servers. Updates to code are typically delivered on a three-week cycle. If the development team is already comfortable using Visual Studio and Team Foundation Server, it makes sense to move to the cloud version and avoid unnecessary disruption or the need for re-training.

Enjoy integrated management with TraceCloud's SaaS ALM

 

TraceCloud is an AWS-based ALM tool that includes an integrated management system for requirements, defects and testing. It features a built-in change control board for approval tracking and configurable metrics dashboards for management at the project, release, baseline, folder and user level.

Because TraceCloud is a SaaS plug-and-play RESTful web service built on JavaScript Object Notation (JSON), it integrates well with existing systems. As a SaaS ALM offering, support and maintenance are included in the subscription fee. TraceCloud offers security with 128-bit SSL, as well as encryption, and scalability through Amazon's elastic cloud infrastructure.
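Because the service is exposed as a RESTful JSON API, integrating it from most environments amounts to a simple authenticated HTTP call. The endpoint, token, and payload fields below are invented for illustration only, not TraceCloud's real API; consult the vendor's documentation for the actual interface:

    # Hypothetical REST/JSON integration sketch; the URL, token and
    # payload fields are assumptions, not TraceCloud's documented API.
    import requests

    payload = {"name": "Login must validate password", "release": "2.1"}
    response = requests.post(
        "https://example.tracecloud-instance.com/api/requirements",  # hypothetical
        json=payload,
        headers={"Authorization": "Bearer <api-token>"},
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())  # the created requirement, echoed back as JSON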

More on SaaS:

Mobile applications: Testing and monitoring using SaaS solutions
Developing applications on SaaS platforms: Factors to consider

Link: http://searchsoftwarequality.techtarget.com/

 

 

 

Logo KW

Alternative Data [1719]

de System Administrator - jueves, 23 de febrero de 2017, 23:40
 

Alternative Data

Alternative data is information gathered from non-traditional information sources. Analysis of alternative data can provide insights beyond that which an industry's regular data sources are capable of providing.

The question of what constitutes alternative data varies from industry to industry.  In banking, for example, a lender may traditionally rely on an applicant’s credit score to assess risk and determine the probability that a loan will be paid back. When the applicant has no prior credit history, however, alternative data that illustrates the applicant’s history of meeting financial obligations, such as paying a cell phone bill on time each month, can be useful information. A bank that includes alternative data sources in risk assessment for a loan may also factor the applicant’s history of paying rent on time and whether or not the applicant consistently makes more than the required minimum monthly payment on credit card bills.

In recent years, the increase of data from mobile devices, satellites, sensors and websites has led to large amounts of structured, semistructured and unstructured data, also known as big data. All that data has the potential to be mined for information and potentially help people make better data-driven decisions. In response to the demand for alternative data, some traditional research firms have branched out to become alt-data providers, selling corporate clients data from non-traditional sources along with services to analyze that data.

Once a data source begins to be used on a regular basis, it is no longer considered to be an alternative source. Current use cases for alternative data include:

  • Precision agriculture – farmers can analyze time-series images taken by drones to make more accurate predictions about crop yields.
  • Healthcare – public healthcare workers can mine social media for specific keywords to fine-tune initiatives that track the spread of influenza.
  • Investment firms – analysts can use satellite imagery to analyze the number of cars in a shopping mall parking lot and predict the state of a local economy.
  • Military – autonomous robots can be used as additional eyes and ears to help improve situational awareness.

Link: http://whatis.techtarget.com

A Taxonomy For Alternative Data

by CARRIE SHAW

We’re witnessing a data revolution. By now you know that we’ve produced more data in the last two years than throughout all of prior human history, and the pace is only increasing. That raises the question: how do you extract actionable trading signals from this brave new world of noise? It starts with understanding the landscape. Here is Quandl’s take on the taxonomy of alternative data for finance.

 

Link: https://blog.quandl.com

Logo KW

Amazon Auto Scaling [471]

de System Administrator - miércoles, 9 de abril de 2014, 16:55
 

Cloud computing is an exciting suite of technologies that has come to dominate the discussion of computing for two main reasons:

• It can provide the flexibility to quickly deploy computing environments that are not only properly configured for current needs, but that can also expand or contract according to future needs.
• It can help an organization save money.

Cloud computing can deliver flexibility and cost savings because it is uniquely capable of being scaled to the “right size” for a particular environment, no matter how frequently usage of the environment expands or contracts. Taking advantage of this flexibility can be difficult, however. Some companies try to do it manually, while others create a custom automated system, and either method can introduce new challenges into your environment:

• Depending on manual processes to start new servers, stop unneeded systems, or change allocated storage space is expensive, slow, and worst of all, error prone. Any savings from moving environments to the cloud can easily be erased by the costs of manual intervention.
• Creating an automated scaling system customized for your organization’s needs takes a long time, almost certainly costs more than planned for, and requires a risky test and deployment phase. Such a system also requires its own hardware, software, and support environment that scales itself for expansion or contraction.

As with so many of its other services and products, Amazon Web Services (AWS) created Auto Scaling to solve its own scaling issues, and now provides the service to its customers for free. On its own, Auto Scaling monitors your environment to ensure that the desired systems stay running. What is even more powerful is that you can tie the Amazon CloudWatch monitoring service into Auto Scaling, which allows your environment to automatically scale up or down based on current conditions:

• As load increases, Amazon CloudWatch can call Auto Scaling to add new computing capacity.
• As load decreases, Amazon CloudWatch can trigger Auto Scaling to shed computing capacity and reduce cost.

This paper describes what Auto Scaling is, when to use it, and provides an example of setting up Auto Scaling.
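As a minimal sketch of the CloudWatch-to-Auto Scaling wiring described above (using the boto3 SDK; the group name, policy name, and thresholds are placeholder assumptions), one scale-out rule might look like this:

    # Minimal boto3 sketch: scale out when average CPU stays above 70%.
    # Group name, policy name and thresholds are placeholder assumptions.
    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # A policy that adds one instance to the group when triggered.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # A CloudWatch alarm that fires the policy on sustained high CPU.
    cloudwatch.put_metric_alarm(
        AlarmName="asg-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )

A mirror-image alarm and a policy with ScalingAdjustment=-1 would shed capacity as load falls.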

Logo KW

Amazon Relational Database Service (Amazon RDS) [461]

de System Administrator - domingo, 7 de diciembre de 2014, 21:53
 

Database Management Systems (DBMS) are an integral part of almost every large-scale software system. DBMS are large, complex software suites that not only store and retrieve data, but also secure the data, allow backups of the data, replicate the data across multiple systems for greater reliability, and cache the data for faster access.

Such large database systems are difficult to set up, maintain, and expand. They are also challenging to clone, which is a critical step that allows Development (Dev) and Quality Assurance (QA) departments to use the same environment for developing and testing an organization’s products.

Meanwhile, companies are finding that the data they capture is becoming more and more valuable to their business, either as a way of measuring and improving their own operations, or as a product that can be sold. The process of extracting value from the data becomes complicated, however. This is because as the data becomes more valuable, more care should be taken with the DBMS infrastructure—and yet the people who are involved in running and maintaining the DBMS are the ones who can best help with exploiting the data for the business.

Amazon Relational Database Service (Amazon RDS) was created by Amazon Web Services (AWS) out of its own experience with these DBMS complications. Amazon RDS provides cost-effective DBMS deployment, quick and efficient scaling, and easy support of development and QA. 

This paper describes how you can set up Amazon RDS and use it as a drop-in replacement for traditional DBMS, with all the cost, scaling, and agility advantages of deploying software in the cloud.
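A minimal sketch of provisioning such a managed instance with the boto3 SDK (the identifier, instance class, and credentials are placeholder assumptions):

    # Minimal boto3 sketch: launch a managed MySQL instance on Amazon RDS.
    # Identifier, instance class and credentials are placeholder assumptions.
    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="mysql",
        DBInstanceClass="db.t3.micro",
        AllocatedStorage=20,          # gigabytes
        MasterUsername="admin",
        MasterUserPassword="change-me-now",
    )

    # Applications then connect to the instance's endpoint exactly as they
    # would to a self-managed MySQL server, making RDS a drop-in replacement.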

 MySQL

Logo KW

Amnesty International [362]

de System Administrator - lunes, 3 de febrero de 2014, 16:59
 

 Amnesty International

Our goal is realistic: we want to change the lives of many people! We want everyone to enjoy the rights enshrined in the Universal Declaration of Human Rights. And we know it is possible: activists around the world have proven it by resisting those who undermine human rights and by holding those in power to account.

Amnesty International began with one man's outrage and his courage to do something about it. After learning that two Portuguese students had been imprisoned in 1961 for raising a toast to freedom, the British lawyer Peter Benenson published an article, “The Forgotten Prisoners,” in the newspaper The Observer.

His article launched the worldwide campaign Appeal for Amnesty 1961, which drew a remarkable response. Reprinted in newspapers around the world, its call to action resonated with the values and aspirations of people everywhere. Thus Amnesty International was born.

Interactive timeline

Source: http://www.es.amnesty.org/quienes-somos/nuestros-objetivos/

Logo KW

Análisis de Datos [166]

de System Administrator - miércoles, 8 de enero de 2014, 13:08
 

Data analysis is a process for inspecting, cleaning, and transforming data in order to highlight and/or discover useful information that supports decision making. Some related concepts and tools are:

  • Business intelligence (BI) is the set of strategies and tools focused on managing and creating knowledge through the analysis of the data that exists in an organization or company.
  • In computing, a data warehouse is a subject-oriented, integrated, non-volatile, and time-variant collection of data that supports decision making. It is, above all, a complete repository stored in a database designed to favor OLAP analysis. The data warehouse lives in a database separate from the production one. It often contains large amounts of information grouped into tuples (small ordered sets of data), which are in turn arranged into logical units. Example of a tuple: {Postal Code, Diagnosis, Age, Sex, Ethnicity}.
  • OLAP is the acronym for On-Line Analytical Processing. It is a solution used in the field of business intelligence whose goal is to speed up queries over large amounts of data. To do so, it uses multidimensional structures (OLAP cubes) containing summarized data from large databases and/or transactional (OLTP) systems. It is used in business reporting, sales, marketing, management, data mining, and similar areas.
  • Data mining (the analysis stage of the “Knowledge Discovery in Databases,” or KDD, process) is a field of computer science concerned with discovering patterns in large volumes of data. It uses artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for later use. Beyond the raw analysis stage, it also covers database, processing, and data-management aspects, inference, interestingness metrics, computational complexity, post-processing of the discovered structures, visualization, and online updating.

Source: http://es.wikipedia.org/wiki/Datawarehouse
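To make the OLAP idea concrete, here is a small pandas sketch (with invented sample data) that summarizes a transactional table along two dimensions, much the way an OLAP cube slices data:

    # Small pandas sketch: a pivot table summarizing transactions along
    # two dimensions, much as an OLAP cube would. Data is invented.
    import pandas as pd

    sales = pd.DataFrame({
        "region":  ["North", "North", "South", "South"],
        "product": ["A", "B", "A", "B"],
        "amount":  [100, 250, 80, 300],
    })

    cube = sales.pivot_table(index="region", columns="product",
                             values="amount", aggfunc="sum")
    print(cube)  # totals by region x product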

Logo KW

Anatomy of a Virus [873]

de System Administrator - jueves, 18 de septiembre de 2014, 02:17
 

Anatomy of a Virus

A mass spectrometry-based analysis of influenza virions provides a detailed view of their composition.

By Molly Sharlach

A cross-section of an influenza virion, showing the locations and relative abundance of viral proteins (brightly colored) and host membrane and proteins (brown). Image credit: EDWARD HUTCHINSON

A spherical influenza virion is an orderly hodgepodge composed of hundreds of proteins that originate from both the virus and its host. A new mass spectrometry analysis, published today (September 16) in Nature Communications, has yielded the most complete picture to date of the identities, arrangements, and ratios of proteins in influenza virions.

Previously, only the most common and conspicuous influenza virion proteins were well characterized. Compared to the perfect icosahedrons of adenovirus, for example, irregular influenza particles are difficult to crystallize and to study structurally, according to the lead author of this latest work, Edward Hutchinson, a postdoctoral research assistant working with virologist Ervin Fodor at the University of Oxford. “When you look at influenza, you get things that look kind of like moldy kidney beans,” said Hutchinson. “Round shapes of varying morphologies with a fringe of glycoproteins around the edge.”

To sort out the virion components, Hutchinson and his colleagues used sensitive mass spectrometry. Comparing the intensities of ion spectra among protein fragments allowed them to quantify the relative abundance of each protein. They began this analysis with WSN, a flu strain from 1933 that—thanks to decades of growth in tissue culture—produces more uniform virions than clinical isolates. The researchers then compared the compositions of WSN and six other strains, including several recent clinical isolates. The strains shared a core set of viral and host proteins in consistent ratios, while other viral and host components varied.

Among these core proteins was viral NS1—so named because it has long been considered nonstructural. Among other activities, NS1 can inhibit the immune response triggered by the cytokine interferon. Finding NS1 in virions was an unexpected but logical result, said virologist Paul Digard of the University of Edinburgh, who was Hutchinson’s doctoral advisor but was not involved in the work. “NS1 is an important protein in terms of the biology of the virus, because it’s one of the things that counteracts the host defenses, so it’s always been a bit of a mystery” that it had not been detected within virions, Digard said. “It’s like turning up for a battle but not delivering your main weapon until halfway through.”

Separating viral particles from other cellular components using sucrose density gradient centrifugation presented a significant challenge, Hutchinson said. To further purify the virions, the team used a technique called hemadsorption, which takes advantage of influenza’s ability to bind to red blood cells.

Intriguingly, even uninfected host cells produced particles that shared characteristics with virions. This result lends support to the “Trojan exosome hypothesis,” which posits that viruses bud out from an infected cell using the host’s native machinery for exporting proteins or RNA in membrane-bound vesicles called exosomes.

“People have tended to think of influenza virion formation as something driven by the virus, something that the virus imposes on the cell,” Hutchinson said. “What we’re seeing here is almost more of a collaborative process: the virus is recruiting things which might be happening anyway in the cell.”

The variety of host proteins identified in virions could provide fodder for future studies, according to virologist Wendy Barclay of Imperial College London, who was not involved in the study but has collaborated with Fodor’s group on other work.

“It’s not surprising that there are host proteins in there, but this is the first real thorough description of which ones are there,” Barclay said. “What do they do? Are they just bystanders being carried along because the virus has taken over this exosome pathway in order to bud, or do they really help the virus along the way?”

The team also found differences in host protein composition between viruses grown in mammalian cells and in chicken eggs, which are typically used to grow viruses for vaccines. It’s unclear whether these differences are relevant for vaccine production, or for the zoonotic transmission of influenza from birds to humans.

“Is this to do with the differences between tissue culture and the inside of an egg, or is it to do with the differences between mammals and birds?” asked Hutchinson. “It will be very interesting to see what exactly determines the differences between virus particles, and whether that will pose barriers to the emergence of new mammalian strains from birds.”

Logo KW

ANCAP [106]

de System Administrator - lunes, 3 de febrero de 2014, 17:01
 

 ANCAP

The Administración Nacional de Combustibles, Alcohol y Portland (ANCAP) is the Uruguayan state-owned multinational company charged with operating and administering the national monopoly on alcohol, fuels, and Portland cement.

Source: http://es.wikipedia.org/wiki/Administración_Nacional_de_Combustibles,_Alcohol_y_Portland

Logo KW

Año Y [2]

de System Administrator - lunes, 30 de diciembre de 2013, 12:52
 

Year Y is when the central event of the trilogy “Rumbo a la Sinapsis Planetaria” takes place.

Logo KW

ANTEL ARENA [318]

de System Administrator - domingo, 15 de junio de 2014, 20:41
 

ANTEL ARENA - WINNING DESIGN


Source: http://www.subrayado.com.uy/Site/noticia/28412/antel-arena-estos-son-los-cinco-proyectos-ganadores

12/11/2013 17:37

This is what the Antel Arena will look like: the winning design has been chosen

Antel announced the winning design for the construction of the new multipurpose arena. There were 73 proposals and five finalists.

The main photo of the article shows the winning design, submitted by the architects Pablo Bachetta and José Flores and their team. The winners now have up to six months to present the executive project.

The new multipurpose arena arose from an agreement between Antel and the Montevideo city government, after the old Cilindro Municipal burned down and had to be demolished.

Antel plans to invest some US$ 40 million in the construction of this new sports and cultural venue, which, it says, will feature the latest telecommunications technology.

The opposition questioned the agreement between Antel and the city government, arguing that the construction and administration of a multipurpose arena is not among the inherent duties of the state telecommunications company under the specific mandates defined in the Constitution.

The construction of the new arena by Antel was flagged by the Tribunal de Cuentas de la República (TCR) for violating the Constitution. The National and Colorado opposition parties cite this in demanding that Antel abandon the project.

The opposition also accuses the president of Antel, Carolina Cosse (who announced the winning design this Tuesday), of seeking to make political capital ahead of a possible run for mayor of Montevideo.

Logo KW

Antivirus [220]

de System Administrator - lunes, 14 de julio de 2014, 19:07
 

In computing, antivirus programs are designed to detect and/or remove computer viruses. They emerged during the 1980s. A computer virus is malware (malicious software) intended to alter the normal operation of a device without the user's permission or knowledge. Viruses typically replace executable files with copies infected with their own code. Viruses can deliberately destroy the data stored on a device; others are more harmless and are merely annoying.

Source: http://es.wikipedia.org/wiki/Virus_informático
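A toy illustration of the signature-scanning idea behind classic antivirus engines (real products use far more sophisticated techniques; the "signatures" here are invented):

    # Toy signature scanner: classic antivirus engines match known byte
    # patterns ("signatures") inside files. These signatures are invented.
    KNOWN_SIGNATURES = {
        "demo-virus-1": b"\xde\xad\xbe\xef",
        "demo-virus-2": b"EVIL_PAYLOAD",
    }

    def scan(path):
        """Return the names of any known signatures found in the file."""
        with open(path, "rb") as f:
            data = f.read()
        return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

    print(scan("suspect.exe"))  # e.g. ['demo-virus-1'] or []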

Logo KW

Antonio Santi Giuseppe Meucci [63]

de System Administrator - sábado, 4 de enero de 2014, 17:56
 

Meucci

Antonio Santi Giuseppe Meucci (Florence, April 13, 1808 - New York, October 18, 1889) was the inventor of the telettrofono, later renamed the telephone, among other technical innovations.

Source: http://es.wikipedia.org/wiki/Antonio_Meucci

Logo KW

aPaaS [1339]

de System Administrator - viernes, 7 de agosto de 2015, 20:09
 

THE ULTIMATE GUIDE TO

aPaaS

by Mendix

Application Platform as a Service (aPaaS) is one of the hottest areas of cloud computing, and rightfully so, as it can dramatically improve the speed and ease with which organizations deliver custom business applications.

Please read the attached whitepaper.

Logo KW

API [556]

de System Administrator - lunes, 14 de julio de 2014, 18:53
 

API

An application programming interface (API) is code that allows two software programs to communicate with each other.

The API defines the correct way for a developer to write a program that requests services from an operating system or other application. APIs are implemented by function calls composed of verbs and nouns. The required syntax is described in the documentation of the application being called.

Typically, APIs are released for third-party development as part of a software development kit (SDK) or as an open API published on the Internet. If the applications are written in different languages or have been written for different platforms, middleware can provide messaging services so the two applications can communicate with each other.

Business interest in APIs grew with Web 2.0 mashups and executive dashboards that pulled data from two or more sources. Cloud computing has fueled even more interest, as companies experiment with ways to integrate a cloud provider's service with on-premises systems or other cloud services. 
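As a concrete illustration of a program requesting a service through a published API, here is a call to httpbin.org, a public HTTP test service (response fields beyond "origin" vary):

    # Calling a published web API: the program issues a documented request
    # and receives a structured response, without knowing anything about
    # the service's internal implementation.
    import requests

    response = requests.get("https://httpbin.org/ip", timeout=10)
    response.raise_for_status()
    print(response.json()["origin"])  # the caller's public IP address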

Source: TechTarget

 

Logo KW

APIs Cognitivas [1710]

de System Administrator - domingo, 22 de enero de 2017, 13:11
 

 

Cognitive APIs, drivers of digital innovation

by Melisa Osores

APIs are facilitating digital innovation in business by allowing apps to be integrated to take advantage of trends such as cloud, mobility, analytics, and cognitive computing.

Thanks to application programming interfaces (APIs), and specifically cognitive ones, a new economy has been created in which consumers hold the power, deciding immediately what they need, how they need it, and why they need it. So explained Darío Debarbieri, global marketing director for IBM MW's cloud unit, during his talk at the 26th GeneXus Meeting, held September 26-28 in Montevideo, Uruguay.

Debarbieri explained that although today's social and digital tools make it possible to follow consumers and gather a great deal of valuable information about them, the volume of that information is vast, which makes it impossible to analyze and understand it all and turn it into knowledge. According to the executive, almost 80% of information is unstructured, and if the trend continues this will rise to 93% by 2020.

Using the cognitive computing model, however, that information can be turned into knowledge and even personalized so that each consumer receives added value. “It is very difficult to reach that level of personalization of a customer without cognitive tools, whether through APIs, ecosystems, or even at a personal level,” he said.

APIs are everywhere, Debarbieri said, and they can not only improve the business but also generate new opportunities. They come in three types:

  1. Free. Focused essentially on correlating and enriching data, they create new business opportunities while keeping download and use free.
  2. Monetization. They charge per transaction; that is, they earn money and generate a commission for other companies or entities.
  3. Ecosystem. Applications that grow and strike agreements with other markets to offer a better service or an additional product; e-commerce APIs, for example.

“Thanks to the existence of APIs, the interface has been transformed. We moved from the computer to the smartphone, and from there to the voice commands (Siri, Echo, etc.) used to obtain information or products,” he noted.

The next step comes from cognitive computing. “Thanks to the current cognitive model, information transformed into knowledge means the needs of the client or consumer are anticipated more effectively,” Debarbieri commented. “Cognitive APIs learn from the consumer, anticipate their needs, and give us elements we translate into added value; that is, we can know whether our customer is adventurous, whether they like movies, what kind of music they listen to, and other elements that personalize the service or product we offer.”

“Today, cognitive APIs are within everyone's reach; they are secure, low-priced, and available to all, regardless of country or language. We only need to use them,” he concluded.

GeneXus Meeting focused on digital transformation

Under the motto “Dream Digital,” the 26th edition of the GeneXus Meeting (GX26) brought together more than 3,500 participants to discuss IT trends such as virtual reality, cognitive technology, and the internet of things, among other topics.

Its main speakers included Breogán Gonda and Nicolás Jodal, president and CEO of GeneXus, respectively; Eduardo Mangarelli, director of technology at Microsoft LATAM; Darío Debarbieri, CMO at IBM; Roberto Cruz, general manager of Cognitiva; Alberto Oppenheimer, director of solutions at SAP; Alexander Blauth, executive partner at Gartner; Carlos Clúa, director of commercial transformation at Banco Sabadell of Spain; Ramón Heredia, general director of Digital Bank of Colombia; Jorge Pedro Ramírez, systems development manager at TV Azteca of Mexico; and José Clastornik, executive director of Uruguay's Agency for e-Government and the Information and Knowledge Society, among others.

Dig deeper:

Link: http://searchdatacenter.techtarget.com/es/

Logo KW

APP - The Alan Parsons Project [405]

de System Administrator - lunes, 13 de enero de 2014, 20:13
 
Logo KW

Apple Inc. [116]

de System Administrator - jueves, 9 de enero de 2014, 15:02
 

 Apple

Apple Inc. is an American multinational company headquartered in Cupertino, California, that designs and produces electronic equipment and software. Its best-known hardware products include the Macintosh computer line and the iPod, iPhone, and iPad. On the software side, Apple makes the Mac OS X and iOS operating systems, the iTunes media browser, the iLife (multimedia creativity) and iWork (productivity) suites, Final Cut Studio (professional video editing), Logic Studio (audio editing), Xsan (data exchange between servers), Aperture (RAW image editing), and the Safari web browser. The company operates 370 of its own stores in nine countries, thousands of resellers, and an online store where its products are sold and technical support is provided. Apple Inc.'s shares are worth more than USD 575 billion.

Source: http://es.wikipedia.org/wiki/Apple

Logo KW

Application Delivery Controller [463]

de System Administrator - jueves, 3 de abril de 2014, 17:45
 
ADC
Application Delivery Controller


An application delivery controller (ADC) is a network device that manages client connections to complex Web and enterprise applications.

ADCs evolved from server load balancers (SLBs) and can be thought of as next-generation SLBs. A new name was needed to differentiate ADCs from SLBs and, unfortunately, the market has given them several names. They go by application switch, application front ends and application delivery controllers, with ADC slowly becoming the most common term.

Typically, ADCs are strategically placed behind a firewall and in front of one or more application servers to be a single point of control that can determine the security needs of an application and provide simplified Authentication, Authorization and Accounting (AAA).

An ADC can accelerate the performance of applications delivered over a wide area network (WAN) by implementing optimization techniques such as compression and reverse caching. With reverse caching, new user requests for static or dynamic Web objects can often be delivered from a cache in the ADC rather than having to be regenerated by the servers.

Virtual application delivery controllers are particularly useful in highly virtualized data centers and cloud computing environments where customers need to be able to scale capacity up and down as application demand fluctuates. Many ADC vendors are working with cloud providers to deliver cloud load balancers for application acceleration, availability assurance and rapid scalability.
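A highly simplified sketch of the load-balancing decision at the heart of an SLB or ADC (real controllers layer on health checks, session persistence, TLS offload, and the caching described above; this round-robin picker is only the core idea):

    # Greatly simplified core of a server load balancer: rotate client
    # connections across a pool of application servers (round robin).
    import itertools

    class RoundRobinBalancer:
        def __init__(self, servers):
            self._pool = itertools.cycle(servers)

        def pick(self):
            """Return the next back-end server to receive a connection."""
            return next(self._pool)

    balancer = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    for _ in range(4):
        print(balancer.pick())  # 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.1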

 
Logo KW

Apps IBM MobileFirst para iOS [1023]

de System Administrator - viernes, 19 de diciembre de 2014, 17:49
 

 

The first wave of IBM MobileFirst apps for iOS arrives

by IT-Cloud

This is a new class of solutions built for business, with support for cloud services, that brings IBM's big data and analytics capabilities to iPhone and iPad users in the enterprise. Designed exclusively for iPhone and iPad, the apps are now available to enterprise customers in banking, insurance, financial services, telecommunications, government, and airlines. This article details the solution suite.

By targeting new opportunities and priorities within industries, the IBM MobileFirst apps for iOS take enterprise mobility to a higher level. They will help employees access all of their company's capabilities wherever they interact with customers, faster, more easily, and more securely than before.

Designed exclusively for iPhone and iPad, the IBM MobileFirst apps for iOS are delivered in a secure environment, with embedded analytics and connections to the company's core processes. The apps can be customized for any organization and easily deployed, managed, and upgraded through IBM cloud services specific to iOS devices, with security for data, applications, and the device itself. This first suite of solutions is already available across multiple industries, while additional apps are being designed and developed, such as:

  • Plan Flight (travel and transportation): addresses every airline's biggest expense, fuel, by allowing pilots to view schedules, flight plans, and crew manifests in advance, report in-flight problems to ground crews, and make more informed decisions about discretionary fuel.
  • Passenger+ (travel and transportation): empowers flight crews to offer personalized service to passengers in flight, including special offers, rebooking, and baggage information.
  • Advise & Grow (banking and financial markets): puts bankers on the premises of their small-business clients, with secure authorization to access customer profiles and competitive analyses, gather analytics-based data for personalized recommendations, and complete secure transactions.
  • Trusted Advice (banking and financial markets): allows advisors to access and manage client portfolios and draw on powerful predictive analytics, in the client's kitchen or the local coffee shop rather than the advisor's office, with full ability to test recommendations using sophisticated modeling tools and to complete secure transactions.
  • Retention (insurance): gives agents access to customer profiles and history, including an analytics-based retention risk score, as well as alerts, reminders, and recommendations on the best next steps, and facilitates key transactions such as collecting digital signatures and premiums.
  • Case Advice (government): addresses workload and support issues among social workers who make critical decisions, one family or situation at a time, while on the move. The solution adjusts case priorities based on real-time, analytics-driven insights and addresses risk based on predictive analysis.
  • Sales Assist (retail): allows sales representatives to connect with customer profiles, make suggestions based on previous purchases and current selections, check inventory, locate items in the store, and ship items that are out of stock.
  • Pick & Pack (retail): combines proximity-based technology with back-end inventory systems to transform order fulfillment.
  • Expert Tech (telecommunications): leverages native iOS capabilities, including FaceTime for easy access to expertise and location services for route optimization, to deliver superior on-site service, more effective problem resolution, and greater productivity and customer satisfaction.

Link: IT-Cloud

Logo KW

Aprendizaje automático (machine learning) [1709]

de System Administrator - jueves, 5 de enero de 2017, 21:50
 

Machine learning (aprendizaje automático)

Published by: Margaret Rouse

Machine learning is a type of artificial intelligence (AI) that gives computers the ability to learn without being explicitly programmed. It focuses on developing computer programs that can change when exposed to new data.

The machine learning process is similar to data mining: both systems search through data looking for patterns. However, instead of extracting data for human comprehension, as data mining applications do, machine learning uses the data to detect patterns and adjust the program's actions accordingly. Machine learning algorithms are often classified as supervised or unsupervised. Supervised algorithms can apply what has been learned in the past to new data; unsupervised algorithms can draw inferences from data sets.

Facebook's news feed uses machine learning to personalize each member's feed. If a member frequently stops scrolling to read or “like” a particular friend's posts, the news feed will start showing more of that friend's activity earlier in the feed. Behind the scenes, the software simply uses statistical analysis and predictive analytics to identify patterns in the user's data, and uses those patterns to populate the news feed. Should the member stop reading, liking, or commenting on the friend's posts, that new data is folded into the data set and the news feed adjusts accordingly.

Link: http://searchdatacenter.techtarget.com
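A minimal scikit-learn sketch of the supervised/unsupervised distinction described above (toy data, not TechTarget's example):

    # Minimal scikit-learn sketch of the two algorithm families described
    # above. The data is a toy example.
    from sklearn.neighbors import KNeighborsClassifier  # supervised
    from sklearn.cluster import KMeans                  # unsupervised

    X = [[0, 0], [0, 1], [9, 9], [10, 9]]

    # Supervised: learn from labeled examples, then predict new data.
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, ["small", "small", "big", "big"])
    print(clf.predict([[1, 0], [8, 10]]))   # ['small' 'big']

    # Unsupervised: infer structure (clusters) without any labels.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)                        # two groups discovered from X alone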


Logo KW

Architecture for the Internet of Things [1013]

de System Administrator - martes, 9 de diciembre de 2014, 14:12
 

A Reference Architecture for the Internet of Things

The aim of this reference architecture is to provide architects and developers of IoT projects with an effective starting point that covers the major requirements of IoT projects and systems. This includes the devices as well as the server-side and cloud architecture required to interact with and manage the devices.

Please read the attached whitepaper.

Logo KW

Are You Ready for Mobile Capture? [1186]

de System Administrator - viernes, 3 de abril de 2015, 18:17
 

Are You Ready for Mobile Capture?

The Top 10 Questions to Ask Before You Make Your Move

By Kevin Craine

There is a lot of excitement about mobile capture these days... and for good reason. We live in an age of mobile computing: 58% of American adults use a smartphone, over 40% own a tablet, and mobile computing grew by over 80% last year alone. Indeed, there are more smartphones in the world than personal computers, and these numbers will only increase over the next few years. Mobile capture, the ability to capture document images and upload them on the fly, is a natural and inevitable outgrowth of the cultural and technological trend toward mobile computing, and one that organizations should evaluate seriously before they get left behind.

Please read the attached whitepaper.

Logo KW

Ares [427]

de System Administrator - jueves, 16 de enero de 2014, 15:50
 

 Ares

In Greek mythology, Ares (ancient Greek Ἄρης) is considered the Olympian god of war, though he is better understood as the personification of brute force and violence, and of the tumult, confusion, and horrors of battle, in contrast to his half-sister Athena, who represents deliberation and wisdom in the affairs of war and protects humans from its ravages. The Romans identified him with Mars, the Roman god of war and agriculture (whom they had inherited from the Etruscans), but Mars enjoyed much higher esteem among them.

Ares is depicted as the son of Zeus and Hera, although a later tradition, recorded by Ovid, holds that Hera conceived him upon touching a certain flower offered to her by the nymph Chloris, in what appears to be an imitation of the legend of the birth of Hephaestus. A similar legend exists about the birth of Eris, goddess of Discord. His birthplace and true home was placed far away, among the barbarous and warlike Thracians, and he fled there when he was caught lying with Aphrodite.

The Hellenes always distrusted Ares, perhaps because he was not swayed even by partisanship: sometimes he helped one side and sometimes the other, as his inclinations dictated. His destructive hand was seen even behind the devastation wrought by plagues and epidemics. This savage and bloodthirsty character made Ares hated by the other gods, including his own parents.

Source: http://es.wikipedia.org/wiki/Ares

Logo KW

Aries [31]

de System Administrator - lunes, 3 de febrero de 2014, 16:52
 

 Aries

Aries, the first sign of the zodiac, governs those born between March 22 and April 20. It is one of the cardinal signs of the horoscope, since it coincides with the start of one of the seasons of the year: between March 20 and 21, spring begins in the northern hemisphere and autumn in the southern hemisphere. The cardinal signs correspond to the principle of action and are governed by the law of power and will. They are characterized by activity, movement, energy, and exuberance. As the first sign, Aries also corresponds to the first house of the zodiac, the domicile of life, physical appearance, and personality. The first house represents personality in its pure state; it is the first contact with the world. Aries is the newborn, demanding and impatient; it is the infant who cannot wait, the character of the individual confronting himself. In this house take shape the tendencies, aspirations, and life achievements of those born under its influence.

Source: http://www.arcanos.com/aries.asp

Logo KW

Army to cut 17,000 civilian jobs [1308]

de System Administrator - lunes, 13 de julio de 2015, 17:57
 

Army to cut 17,000 civilian jobs

By Ryan McDermott

Budget constraints are forcing the Army to reduce its civilian workforce by 17,000 jobs as well as make a slew of military cuts, according to a July 9 statement from the service.

These cuts will impact nearly every Army installation, both in the continental United States and overseas, the statement says.

The reduction of 17,000 civilian Army employees will occur in fiscal 2018. The statement does not say if the cuts will come through attrition, voluntary retirement or actual layoffs.

The cuts will also reduce the Army's force from 490,000 to 450,000 soldiers. The reduction in force structure will occur in fiscal years 2016 and 2017.

As part of those reductions, the number of regular Army brigade combat teams, those ready to be deployed by the service, will continue to fall from a wartime high of 45 to 30 by the end of fiscal 2017.

"Budget constraints are forcing us to reduce the total Army," said Lt. Gen. Joseph Anderson, Army deputy chief of staff in the statement. "In the end, we had to make decisions based on a number of strategic factors, to include readiness impacts, mission command and cost."

If sequestration continues and budgets are cut further, the Army's end-strength will be reduced to 420,000 soldiers by the end of fiscal 2019.

The resulting force would be incapable of simultaneously meeting current deployment requirements and responding to overseas contingency requirements, the statement says.

For more:
- read the Army statement


More: http://www.fiercegovernment.com

Logo KW

Arrabal [320]

de System Administrator - martes, 21 de enero de 2014, 21:56
 

 Arrabal

The Arrabal in Tango
By Miguel Angel García

The words “arrabal” and “arrabalero” entered tango in 1919 at the hand of a poet, Celedonio Flores, and left it in the 1940s at the hand of another poet, Homero Manzi. Those were 30 years in which urban geography was one of the main themes of tango lyrics.

 Arrabal

“Arrabal” is a Castilian word of Arabic origin. It is not the Latin “suburb” (sub-urbis), which implies a territorial organization, nor the medieval “extramuros” (outside the walls), which by opposition implies a walled city, foreign to the American tradition. It is the city beside itself, an overflow of its substance forming an imprecise front upon the open country. The arrabal is defined more in time than in space: it is the growing of a city.

 Arrabal

Source: http://www.tangoreporter.com/nota-arrabal.html

Logo KW

Arritmia [440]

de System Administrator - lunes, 3 de febrero de 2014, 16:48
 

A heart rhythm disorder, or cardiac arrhythmia, is an alteration in the succession of heartbeats. It may be due to changes in heart rate, which can speed up or slow down (tachycardia or bradycardia) without necessarily being irregular, simply faster or slower. Very often, however, an arrhythmia involves an irregular rhythm, which occurs when anomalies appear in the heart's physiological pacemaker (the sinus node) or in the heart's conduction system, or when abnormal (ectopic) pacemaker sites appear.

 

“Bradyarrhythmias,” or slow heart rhythm disorders, result from inadequate impulse generation in the sinus node or from a block in the propagation of the impulse, and can cause loss of consciousness.

“Tachyarrhythmias,” or accelerated heart rhythm disorders, may be of atrial origin; in that case they may still allow adequate cardiac output, and they are less dangerous than sustained ventricular arrhythmias, which more often produce collapse or death.

Source: http://es.wikipedia.org/wiki/Arritmia

Logo KW

Arte de la Guerra [358]

de System Administrator - jueves, 18 de septiembre de 2014, 19:36
 

 Arte de la Guerra

The Art of War is a book on military tactics and strategy written by Sun Tzu, a famous Chinese military strategist.

The author of The Art of War was Sun Zi, or Sun Wu by his official name. Traditional accounts held that his descendant, Sun Bin (孙膑/sūn bìn), also wrote a treatise on military tactics, titled Sun Bin's Art of War (孙膑兵法/sūn bìn bīngfǎ). Both Sun Zi and Sun Bin are referred to as Sun Zi in classical Chinese writings, and some historians believed they were the same person. However, thanks to several bamboo scrolls unearthed in 1972 on Silver Sparrow Mountain (Yinqueshan), in the city of Linyi, Shandong, it has been confirmed that Sun Bin wrote his own art of war.

The text is believed to have been written toward the last third of the 4th century BC. It became known in Europe at the end of the 18th century; its first appearance was the 1772 French edition published in Paris by the Jesuit Jean Joseph-Marie Amiot under the title Art Militaire des Chinois.

It was, and continues to be, studied by the military strategists who have led armies, and it has also been of great help to every warrior who has set out on the Way.

The Art of War is one of the oldest books ever written and the first known attempt to set down lessons on war. Yet it is still frequently used today, because its teachings can be applied to many other areas where conflict is involved.

Source: http://es.wikipedia.org/wiki/El_arte_de_la_guerra

Logo KW

Arthur C. Clarke [127]

de System Administrator - martes, 7 de enero de 2014, 16:30
 

 Arthur C. Clarke

Sir Arthur Charles Clarke (December 16, 1917, Minehead, England - March 19, 2008, Colombo, Sri Lanka), better known as Arthur C. Clarke, was a British writer and scientist, the author of popular-science and science-fiction works such as 2001: A Space Odyssey and The Sentinel.

Source: http://es.wikipedia.org/wiki/Arthur_C._Clarke

Logo KW

Artificial Intelligence Heading 2014 [444]

de System Administrator - viernes, 4 de abril de 2014, 12:47
 

 AI 2014

TECHNOLOGY PLANNING AND ANALYSIS

Where is Artificial Intelligence Heading?

Between Google's January £400 million purchase of DeepMind and IBM's recent competition to find new uses for supercomputer Watson, the media spotlight seems to be gradually homing in on Artificial Intelligence (AI). We speak to professional insiders to find out if 2014 really is the year for AI.

"I have a list of things I expect people to do with Watson, but by unleashing it to people in Brazil and Africa and China, as well as Silicon Valley, who knows what they'll come up with. That's the intrigue behind having a contest," said Jerry Cuomo, IBM fellow and CTO for WebSphere about the IBM Watson Mobile Developer Challenge, which invites software developers to produce apps that make use of Watson's resources.

This certainly opens up a lot of scope for progression in Artificial Intelligence, especially when you consider the increased emphasis on machine learning and robotics from companies like Google, which has been gradually acquiring organisations in this space.  In December there was Boston Dynamics, in January there was UK startup DeepMind, and then there were all those smaller deals like DNNresearch along with seven robotics companies at the tail end of 2013.

So where is Artificial Intelligence likely to go in the near term, medium term and long term?

Neil Lawrence, Professor of Machine Learning at the University of Sheffield who works with colleagues on DeepMind and Google says: “The investments we are seeing [by big companies] are very large because there is a shortage of expertise in this area. In the UK we are lucky to have some leading international groups, however the number of true experts in the UK still numbers in the tens rather than the hundreds.”

“The DeepMind purchase reflects this,” he continues. “Their staff was made up in large part by recent PhD graduates from some of these leading groups. Although even in this context the 400 million dollar price tag still seems extraordinary to many in the field. The year 2014 is not the year in which these developments happened, but it may be the year in which they've begun to impinge upon the public consciousness.”

“I think 2014 is the year where we see an increased use in AI,” agrees Lawrence Flynn, CEO of natural language interaction specialists Artificial Solutions.  “But it will take time for AI implementations such as Watson and our own Teneo Network of Knowledge to become widely established [and] for AI to become commonplace.”

Dr Ben Medlock, Chief Technology Officer at SwiftKey, a smart text prediction software company clarifies: “I think AI will become increasingly visible in 2014 as the foundation of a new range of applications and products. We're excited about the potential of AI technology to make interaction with devices more personal, engaging and ‘human’. However, investment in such technologies is a long term commitment, and we're still far from reaching our full potential in this area. We should expect progress to continue well into the next decade and beyond.”

“Initially, AI will be mostly used for personalization,” says Flynn. “For instance, if you always choose sushi every time your mobile personal assistant offers you a choice of nearest restaurants, eventually it will stop giving you a choice and just the directions to the sushi bar. If you always fly business class with British Airways, then why bother the user with a choice of flights from other airlines.”

Lawrence is keen to stress “that it is not in industry where the breakthroughs have happened, but in academia.” He adds: “A particular focus of my own group is dealing with 'massive missing data': where most of the information we would like to have to base our decisions on is not available. Beyond my own area of research there are also key challenges in the areas of planning and reasoning. It is not yet clear to me how the recent breakthroughs will affect these areas.”

While Medlock feels in the near future “[there is likely to be] an increased investment in businesses focused on AI, as the industry begins to understand that these technologies will underpin many [future] products.”

Lawrence thinks the long term future for AI “is very bright, but progress will be steady, not with large single steps forward, but across a number of applications.”

Flynn in turn stresses: “I don’t believe there will be one big AI moment that history will point to, it will just gradually start to become a normal part of our everyday lives. As devices, appliances, transportation [and so on] become intrinsically connected to each other and the internet, so AI will develop further to ensure seamless interaction between them all.”

“Expectations may currently be too high for the immediate future,” says Lawrence. “We are still many years away from achieving many of our goals in artificial intelligence research. The current successes have emerged from an area known as machine learning, a foundational technique that already underpinned much of the data driven decision making of the large internet companies.”

“The methodologies used have mainly emerged from a relatively small annual conference known as NIPS,” he adds. “The recent breakthroughs emerged from a group of NIPS researchers who received very far-sighted funding from the Canadian government (the Canadian Institute for Advanced Research NCAP program). The program spent a relatively small amount of money (tens of millions) on a carefully selected group of people. This group was led by Geoff Hinton (now of Google) and advised by Yann LeCun (now of Facebook).”

“In the UK, for example,” he continues, “large amounts of money are now promised, but it is not at all clear whether it will be well spent. Functional research operates rather like a well-tended garden: it needs an understanding of the right sort of plants and the ideal conditions for them. A sudden large increase in funding can have a similar effect to indiscriminate application of manure: something will grow, but it's not clear at the outset what it will be. When it comes to harvest time, will we have roses or dock leaves? The Canadian approach was to select the roses first, and then carefully tend them. Other countries would do well to follow a similar approach if they want to reap similar rewards.”

The view from industry is also similar. Medlock looks at the future in terms of the next couple of decades, and in this time frame he believes: “AI research will lead us towards more general solutions, able to take diverse inputs from a wide range of data sources and make powerful predictions that closely mimic higher order human reasoning. We will harness the rich streams of data harvested from personal/wearable devices and feed them into these general purpose AI problem solvers, providing support for important life decisions and enhancing our general health and wellbeing.”

Flynn, for his part, says: “[Although] we are very excited by the possibilities that AI opens up in the next few years, ironically it’s likely that by the time AI is mainstream in every home, consumers won’t even think about it. As far as they are concerned, a product or service works how it’s supposed to, and most of the time that’s what people care about.”

The start may be slow but as Flynn concludes: “In the longer term [more than 20 years] I expect AI research to help us explore some of our deepest questions around life, purpose, consciousness and what it means to be human.”

It will be interesting to see whether this comes true within any of our lifetimes.

Kathryn Cave is Editor at IDG Connect

Source: http://www.idgconnect.com/blog-abstract/6015/where-artificial-intelligence-heading

 

 

Logo KW

Artigas Arnal, José Gervasio [41]

de System Administrator - sábado, 17 de septiembre de 2016, 12:41
 

 Artigas

José Gervasio Artigas Arnal (Montevideo, Gobernación de Montevideo, June 19, 1764 - Asunción, Paraguay, September 23, 1850) was a military leader and statesman and the foremost national hero of Uruguay. He received the titles "Chief of the Orientals" and "Protector of the Free Peoples". He was one of the most important statesmen of the Revolution of the Río de la Plata, and he is also honored in Argentina for his contribution to that country's independence and federalization.

Source: http://es.wikipedia.org/wiki/José_Gervasio_Artigas

 

Logo KW

As containers take off, so do security concerns [1447]

de System Administrator - jueves, 24 de septiembre de 2015, 16:20
 

As containers take off, so do security concerns

By Maria Korolov

Containers offer a quick and easy way to package up applications but security is becoming a real concern

Containers offer a quick and easy way to package up applications and all their dependencies, and are popular with testing and development.

According to a recent survey sponsored by container data management company Cluster HQ, 73 percent of enterprises are currently using containers for development and testing, but only 39 percent are using them in a production environment.

But this is changing: 65 percent said that they plan to use containers in production in the next 12 months, and they cited security as their biggest worry. According to the survey, just over 60 percent said that security was either a major or a moderate barrier to adoption.

Containers can be run within virtual machines or on traditional servers. The idea is somewhat similar to that of a virtual machine itself, except that while a virtual machine includes a full copy of the operating system, a container does not, making them faster and easier to load up.

The downside is that containers are less isolated from one another than virtual machines are. In addition, because containers are an easy way to package and distribute applications, many are doing just that -- but not all the containers available on the web can be trusted, and not all libraries and components included in those containers are patched and up-to-date.

According to a recent Red Hat survey, 67 percent of organizations plan to begin using containers in production environments over the next two years, but 60 percent said that they were concerned about security issues.

Isolated, but not isolated enough

Although containers are not as completely isolated from one another as virtual machines, they are more secure than just running applications by themselves.

"Your application is really more secure when it's running inside a Docker container," said Nathan McCauley, director of security at Docker, which currently dominates the container market.

According to the Cluster HQ survey, 92 percent of organizations are using or considering Docker containers, followed by LXC at 32 percent and Rocket at 21 percent.

Since the technology was first launched, McCauley said, Docker containers have had built-in security features such as the ability to limit what an application can do inside a container. For example, companies can set up read-only containers.
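
As a minimal sketch of that read-only option, using the Docker SDK for Python (docker-py; this assumes the SDK is installed, a local Docker daemon is running, and the Alpine image tag is an arbitrary choice):

    import docker  # Docker SDK for Python (docker-py)

    client = docker.from_env()  # connect to the local Docker daemon

    # Run a container whose root filesystem is mounted read-only, so a
    # compromised process cannot tamper with the image contents.
    output = client.containers.run(
        "alpine:3.18",
        command=["sh", "-c", "touch /x 2>/dev/null || echo root filesystem is read-only"],
        read_only=True,  # same effect as `docker run --read-only`
        remove=True,
    )
    print(output.decode().strip())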

Containers also use namespaces by default, he said, which prevent applications from being able to see other containers on the same machine.

"You can't attack something else because you don't even know it exists," he said. "You can even get a handle on another process on the machine, because you don't even know it's there."

However, container isolation doesn't go far enough, said Simon Crosby, co-founder and CTO at security vendor Bromium.

"Containers do not make a promise of providing resilient, multi-tenant isolation," he said. "It is possible for malicious code to escape from a container to attack the operation system or the other containers on the machine."

If a company isn't looking to get maximum efficiency out of its containers, however, it can run just one container per virtual machine.

This is the case with Nashua, NH-based Pneuron, which uses containers to distribute its business application building blocks to customers.

"We wanted to have assigned resourcing in a virtual machine to be usable by a specific container, rather than having two containers fight for a shared set of resources," said Tom Fountain, the company's CTO. "We think it's simpler at the administrative level."

Plus, this gives the application a second layer of security, he said.

"The ability to configure a particular virtual machine will provide a layer of insulation and security," he said. "Then when we're deployed inside that virtual machine then there's one layer of security that's put around the container, and then within our own container we have additional layers of security as well."

But the typical use case is multiple containers inside a single machine, according to a survey of IT professionals released Wednesday by container security vendor Twistlock.

Only 15 percent of organizations run one container per virtual machine. The majority of the respondents, 62 percent, said that their companies run multiple containers on a single virtual machine, and 28 percent run containers on bare metal.

And the isolation issue is still not figured out, said Josh Bressers, security product manager at Red Hat.

"Every container is sharing the same kernel," he said. "So if someone can leverage a security flaw to get inside the kernel, they can get into all the other containers running that kernel. But I'm confident we will solve it at some point."

Bressers recommended that when companies think about container security, they apply the same principles as they would apply to a naked, non-containerized application -- not the principles they would apply to a virtual machine.

"Some people think that containers are more secure than they are," he said.

Vulnerable images

McCauley said that Docker is also working to address another security issue related to containers -- that of untrusted content.

According to BanyanOps, a container technology company currently in private beta, more than 30 percent of containers distributed in the official repositories have high priority security vulnerabilities such as Shellshock and Heartbleed.

Outside the official repositories, that number jumps to about 40 percent.

Of the images created this year and distributed in the official repositories, 74 percent had high or medium priority vulnerabilities.

"In other words, three out of every four images created this year have vulnerabilities that are relatively easy to exploit with a potentially high impact," wrote founder Yoshio Turner in the report.

In August, Docker announced the release of the Docker Content Trust, a new feature in the container engine that makes it possible to verify the publisher of Docker images.

"It provides cryptographic guarantees and really leapfrogs all other secure software distribution mechanisms," Docker's McCauley said. "It provides a solid basis for the content you pull down, so that you know that it came from the folks you expect it to come from."

Red Hat, for example, which has its own container repository, signs its containers, said Red Hat's Bressers.

"We say, this container came from Red Hat, we know what's in it, and it's been updated appropriately," he said. "People think they can just download random containers off the Internet and run them. That's not smart. If you're running untrusted containers, you can get yourself in trouble. And even if it's a trusted container, make sure you have security updates installed."

Security and management

According to Docker's McCauley, existing security tools should be able to work on containers the same way they do on regular applications; he also recommended that companies deploy Linux security best practices.

Earlier this year Docker, in partnership with the Center for Internet Security, published a detailed security benchmark best practices document, and a tool called Docker Bench that checks host machines against these recommendations and generates a status report.

However, for production deployment, organizations need tools that they can use that are similar to the management and security tools that already exist for virtualization, said Eric Chiu, president and co-founder at virtualization security vendor HyTrust.

"Role-based access controls, audit-quality logging and monitoring, encryption of data, hardening of the containers -- all these are going to be required," he said.

In addition, container technology makes it difficult to see what's going on, experts say, and legacy systems can't cut it.

"Lack of visibility into containers can mean that it is harder to observe and manage what is happening inside of them," said Loris Degioanni, CEO at Sysdig, one of the new vendors offering container management tools.

Another new vendor in this space is Twistlock, which came out of stealth mode in May.

"Once your developers start to run containers, IT and IT security suddenly becomes blind to a lot of things that happen," said Chenxi Wang, the company's chief strategy officer.

Say, for example, you want to run anti-virus software. According to Wang, it won't run inside the container itself, and if it's running outside the container, on the virtual machine, it can't see into the container.

Twistlock provides tools that can add security at multiple points. It can scan a company's repository of containers, scan containers as they are loaded, and prevent vulnerable containers from launching.

"For example, if the application inside the container is allowed to run as root, we can say that it's a violation of policy and stop it from running," she said.

Twistlock can monitor whether a container is communicating with known command-and-control hosts and either report it, cut off the communication channel, or shut down the container altogether.

And the company also monitors communications between the container and the underlying Docker infrastructure, to detect applications that are trying to issue privileged commands or otherwise tunnel out of the container.

Market outlook

According to IDC analyst Gary Chen, container technology is still so new that most companies are still figuring out what value containers offer and how they're going to use them.

"Today, it's not really a big market," he said. "It's still really early in the game. Security is something you need once you start to put containers into operations."

That will change once containers get more widely deployed.

"I wouldn't be surprised if the big guys eventually got into this marketplace," he said.

More than 800 million containers have been downloaded so far by tens of thousands of enterprises, according to Docker.

But it's hard to calculate the dollar value of this market, said Joerg Fritsch, research director for security and risk management at research firm Gartner.

"Docker has not yet found a way to monetize their software," he said, and there are very few other vendors offering services in this space. He estimates the market size to be around $200 million or $300 million, much of it from just a single services vendor, Odin, formerly the service provider part of virtualization company Parallels.

With the exception of Odin, most of the vendors in this space, including Docker itself, are relatively new startups, he said, and there are few commercial management and security tools available for enterprise customers.

"When you buy from startups you always have this business risk, that a startup will change its identity on the way," Firtsch said.

This story, "As containers take off, so do security concerns" was originally published by  CSO.

Link: http://www.networkworld.com

Logo KW

Ashton-Tate [149]

de System Administrator - martes, 7 de enero de 2014, 22:31
 

Ashton-Tate fue una compañía de software de Estados Unidos, famosa por desarrollar la aplicación de base de datos dBase. Ashton-Tate creció desde una pequeña compañía de garaje hasta convertirse en una multinacional con centros de desarrollo de software repartidos por Estados Unidos y Europa.

Fuente: http://es.wikipedia.org/wiki/Ashton-Tate

Logo KW

Así eligen los animales a sus gobernantes ¿Los hombres los imitamos? [1240]

de System Administrator - miércoles, 27 de mayo de 2015, 10:28
 

This is how animals choose their rulers. Do we humans imitate them?

by Fabio Arévalo Rosero, MD

In politics, most of us are 'Marxists' in tendency and interpretation.

Without a doubt, the best guide to understanding politicians (or perhaps even more, the all-too-common political hacks) was given by this influential thinker, who once wisely declared: "Politics is the art of looking for problems, finding them, making a false diagnosis and then applying the wrong remedies." We are speaking, of course, of the great Groucho Marx, the fabulous American comedian of German origin.

Hence the risky and all-too-common human behavior in electoral dynamics: when choosing our leaders, we seem to imitate the animals. Inexplicably, we almost always choose our own executioners. The interpretation is in this video, whose content is based on a text by the Mexican poet Guillermo Aguirre y Fierro (author of "El brindis del bohemio"): a fable published 90 years ago that seems written today, "The Election of the Animals."

About the author:

Physician, biochemist, writer, urban consultant, civic educator, science communicator, social innovator, designer of healthy cities, world champion in athletics (JMS, Canary Islands).

Logo KW

Asimo [414]

de System Administrator - martes, 14 de enero de 2014, 01:04
 
Logo KW

Asimov's Three Laws of Robotics [1337]

de System Administrator - viernes, 7 de agosto de 2015, 19:27
 

Asimov's Three Laws of Robotics

Posted by: Margaret Rouse

Science-fiction author Isaac Asimov is often given credit for being the first person to use the term robotics in a short story composed in the 1940s. In the story, Asimov suggested three principles to guide the behavior of robots and smart machines. Asimov's Three Laws of Robotics, as they are called, have survived to the present:

1. Robots must never harm human beings or, through inaction, allow a human being to come to harm.
2. Robots must follow instructions from humans without violating rule 1.
3. Robots must protect themselves without violating the other rules.
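
The three rules form a strict priority ordering, which a short Python sketch can make concrete. This is purely illustrative (the function and its flags are hypothetical abstractions, not anything from Asimov's fiction):

    # Illustrative only: the Three Laws as an ordered decision rule.
    def action_permitted(harms_human: bool, ordered_by_human: bool,
                         endangers_self: bool) -> bool:
        """Decide whether a robot may take an action under the Three Laws."""
        if harms_human:        # Rule 1: absolute veto, nothing overrides it
            return False
        if ordered_by_human:   # Rule 2: obey, even at the robot's own risk
            return True
        if endangers_self:     # Rule 3: otherwise self-preservation applies
            return False
        return True

    print(action_permitted(False, True, True))   # True: obeying an order outranks self-preservation
    print(action_permitted(True, True, False))   # False: Rule 1 overrides any order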

Also see artificial intelligence, mechatronics, nanorobot, and robot.

Link: http://whatis.techtarget.com

Isaac Asimov in 1965

Three Laws of Robotics

In science fiction, the Three Laws of Robotics are a set of rules written by Isaac Asimov which most of the robots in his novels and short stories are designed to obey. In that universe, the laws are "mathematical formulations imprinted on the positronic pathways of the brain" of the robots (lines of code in the program that governs compliance with the laws, stored in the robot's main flash EEPROM memory). First appearing in the story "Runaround" (1942), they state the following:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[1]

This wording of the laws is the conventional form in which the humans of the stories state them; their true form would be a series of equivalent and far more complex instructions in the robot's brain.

Asimov attributed the three Laws to John W. Campbell, who supposedly drafted them during a conversation held on December 23, 1940. Campbell, however, maintained that Asimov already had them in mind, and that the two of them simply expressed them together in a more formal way.

The three laws appear in a great number of Asimov's stories: throughout his Robot series, in several related stories, and in the series of novels starring Lucky Starr. They have also been used by other authors working in Asimov's fictional universe, and references to them are frequent in other works, both in science fiction and in other genres.

Link: https://es.wikipedia.org

Logo KW

Assistant Robots [972]

de System Administrator - miércoles, 29 de octubre de 2014, 19:43
 

SERVICE ROBOTS WILL NOW ASSIST CUSTOMERS AT LOWE’S STORE

Written By: Jason Dorrier

Most folks don’t interact with robots in their daily lives, so unless you work in a factory, the tech can seem remote. But if you’re a San Jose local? Welcome to the future. Orchard Supply Hardware just hired a pair of bots to greet and engage customers.

Beginning this holiday season, the robots, dubbed OSHbot, will employ a suite of new technologies to field simple customer questions, identify items, search inventory, act as guides, and even summon Orchard Supply Hardware experts for a video chat.

OSHbot was developed in a collaboration between Orchard Supply Hardware's parent company Lowe's, Singularity University Labs, and robotics startup Fellow Robots.

Corporate groups, like Lowe’s Innovation Labs, join Singularity University Labs to extend their horizons, get a feel for technologies in the pipeline, and strike up mutually beneficial partnerships with startups immersed in those technologies.

“Lowe’s Innovation Labs is here to build new technologies to solve consumer problems with uncommon partners,” says Kyle Nel, executive director of Lowe’s Innovation Labs. “We focus on making science fiction a reality.”

The five-foot-tall gleaming white OSHbot has two video monitors, two lasers for navigation and obstacle avoidance, a 3D scanner (akin to Kinect, we imagine), natural language processing, and a set of wheels to navigate the store.

 

Customers walk up to OSHbot and ask where they can find a particular item, or if they don’t know the item’s name, they can show it to the 3D scanner. OSHbot matches it up with inventory and autonomously leads the customer up the right aisle, using its onboard sensors to navigate the store and avoid obstacles.

As the robot works, it creates a digital map of its environment and compares that map to the store’s official inventory map. Of course, memorizing long lists and locations is a skill particularly well suited to machines, and something humans struggle to do.

But humans are still a key part of the experience.

If a customer has a more complicated question, perhaps advice on a home improvement project or a product comparison, OSHbot is equipped to wirelessly connect to experts at other Orchard Supply Hardware stores for live video chat.

The robot speaks multiple languages—witness its fine Spanish in the video—and we think its video interface might prove a great helper for hearing impaired customers.

 

OSHbot is indeed cool—but it isn’t the first service robot we’ve seen.

In 2012, we covered a Korean robot, FURO, that answered traveler questions in multiple languages and served as a roving billboard in a Brazilian airport. Even further back, in 2010, we wrote about PAL robotics’ Rheem-H1 robot mall guide.

OSHbot isn’t the first service robot to employ autonomous navigation and obstacle avoidance either. Indeed, the RP-Vita robot, made by iRobot and InTouchHealth, is already traversing hospital hallways, connecting distant doctors with patients by video.

But OSHbot is significant for a few other reasons.

 

For one, it’s being adopted by Lowe’s, a big established firm in a sector of the economy—lumber, tools, and screws—you might not associate with robotics.

Lowe’s hiring robots is akin to office supply chain, Staples, announcing they’ll carry 3D printers or UPS stores offering 3D printing services to customers.

Just as 3D printing is doggedly entering the mainstream, so too is robotics.

Also, OSHbot ties together a number of technologies in a clever new package. That laser guidance system? It’s not so different from the tech used in Google’s self-driving cars. And 3D scanning? We’ve seen it in gaming, but recently it’s been miniaturized in Leap Motion’s infrared gesture controls or Google’s Project Tango.

When we first saw Project Tango smartphones with 3D scanning hardware, we speculated it wouldn’t be long before it appeared in robots. Indeed, one early adopter strapped a Tango smartphone to his drone. Now OSHbot is using similar tech to model and identify nails, screws, and tools in the hardware world.

And there’s room for improvement. Instead of a static creation, think of OSHbot as a kind of service platform on which its makers can hang other useful tech gadgetry.

 

Paired with 3D scanning capability, Marco Mascorro, CEO of Fellow Robots, suggests a future version might have a 3D printer to make parts on the spot.

We imagine other hardware might include a credit card scanner for checkout or NFC for mobile payments (think Apple Pay). It’d be just like those roving bands of Apple store employees with iPads—only, you know, with robots.

And why not add face detection software akin to RealEyes or IMRSV’s Cara?

These programs could allow the robot to gauge a customer’s attentiveness and even basic emotions. If the customer looks confused, the software would recognize the expression and ask if they need more specific help finding an item. Or perhaps the robot got it wrong, and they need to be guided to a different product altogether.

We think OSHbot has lots of potential—but it’s still a new creation.

The goal in San Jose is to put its potential to the test in the real world. There is no better way to find bugs in a system than daily interaction with the public. We expect there might be a few glitches (perhaps even comical ones). Voice recognition and natural language processing, for example, are vastly improved but still imperfect.

Also, the robot’s price tag will matter for wider adoption. Similar robots run into the tens of thousands of dollars, not including maintenance costs. But the trend in robotics has been rapidly falling prices—and a few (even pricey) robots might not only ease the burden on human employees, but attract a few new customers to boot.

Will OSHbot and other customer service robots increasingly make their way into our everyday lives? We think so. But fear not—they’re here to help.

Image Credit: Fellow Robots


Link: http://singularityhub.com

Logo KW

AT&T [55]

de System Administrator - jueves, 2 de enero de 2014, 20:50
 

The American Telephone and Telegraph Corporation (AT&T) was founded (March 3, 1885) to operate the first long-distance telephone network in the United States. Starting in New York, the network reached Chicago in 1892 and San Francisco in 1915.

Source: http://es.wikipedia.org/wiki/AT&T

Logo KW

ATLAS ROBOT [843]

de System Administrator - domingo, 7 de septiembre de 2014, 19:56
 

ATLAS ROBOT SLAVES AWAY IN MIT LAB, HAULING METAL SCAFFOLDING FOR US HUMANS

Written By: Jason Dorrier

The humanoid robot, Atlas, stands six feet tall and weighs three hundred pounds. The bot is built like an NFL offensive lineman, only substitute steel and hydraulics for muscle, ligament, and bone—and swap speed and agility for slow and awkward.

Atlas is a work in progress.

Still, take a moment to be impressed. Atlas can balance and walk on two legs. It can navigate uneven surfaces, like a field of rubble, and right itself after taking a hard knock from the side. The robot comically does calisthenics and pushups.

Robotics pioneer Hans Moravec says of robots and artificial intelligence: The hard problems are easy and the easy problems hard. A robot with the motor skills of a human infant, then, is more impressive than one that plays chess.

And Atlas is evolving. Instead of simply walking around or making viral internet videos, the robot is continuously becoming more useful, which is, of course, the point.

A recent video shows (a slightly morose, if we do say so) Atlas hauling a heavy piece of metal scaffolding around the lab. Presumably, this isn’t as easy as it appears. The weighty metal unbalances the robot on one side, making stability more difficult.

The DARPA-funded Atlas was developed by Boston Dynamics—one of eight firms snapped up in Google’s 2013 robotics buying spree—but it has since been handed over to MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

CSAIL is improving the Atlas software to make it faster and more autonomous. And a hardware change is coming too. Atlas currently trails a cord for power. CSAIL aims to cut the cord sometime this year or next (with the help of Boston Dynamics)—replacing external power with an onboard power source so Atlas can roam free.

But never fear. Atlas is no Terminator…not yet at least. The robot is aimed at disaster recovery and hazardous situations humans ought best avoid. DARPA has supplied a number of teams competing in the DARPA Robotics Challenge with an Atlas bot.

The DRC was inspired by the Fukushima nuclear meltdown—the worst of which may have been avoided had a robot capable of human-like tasks (driving a vehicle, turning off a valve, opening a door) been able to enter the radioactive environment.

At the DRC trials last December, another Google acquisition Schaft (now self-funded) outscored the competition. The finals will take place next June in Pomona California.

From there? More robots acting more like humans, of course.

Image Credit: DARPA/Wikimedia Commons


Link: http://singularityhub.com

Logo KW

Atractor de Lorenz [1730]

de System Administrator - jueves, 23 de marzo de 2017, 11:21
 

The Lorenz attractor, with values r = 28, σ = 10, b = 8/3.

Lorenz Attractor

Source: Wikipedia

The Lorenz attractor, a concept introduced by Edward Lorenz in 1963, is a three-dimensional, nonlinear, deterministic dynamical system derived from simplified equations for the convection rolls that arise in the dynamical equations of the Earth's atmosphere.

For certain values of the parameters a, b, c, the system exhibits chaotic behavior and displays what is now called a strange attractor; this was proved by Warwick Tucker in 2002. The strange attractor in this case is a fractal of Hausdorff dimension between 2 and 3. Grassberger (1983) estimated the Hausdorff dimension at 2.06 ± 0.01 and the correlation dimension at 2.05 ± 0.01.

The system also arises in lasers, in electrical generators and in certain water wheels.[1] The equations are:

dx/dt = a(y − x)
dy/dt = x(b − z) − y
dz/dt = xy − cz

where a is called the Prandtl number and b the Rayleigh number. The parameters satisfy a, b, c > 0; usually a = 10 and c = 8/3, while b is varied. The system exhibits chaotic behavior for b = 28, but shows periodic orbits for other values of b; for example, with b = 99.96 it becomes a torus knot of type T(3,2).
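
A minimal numerical sketch of these equations (assuming Python with NumPy and SciPy installed; the parameter names follow the text above):

    import numpy as np
    from scipy.integrate import solve_ivp

    A, B, C = 10.0, 28.0, 8.0 / 3.0   # a (Prandtl), b (Rayleigh), c

    def lorenz(t, state):
        # Right-hand side of the Lorenz system defined above.
        x, y, z = state
        return [A * (y - x), x * (B - z) - y, x * y - C * z]

    # Integrate two trajectories whose initial points differ by 1e-8.
    sol1 = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0],
                     dense_output=True, rtol=1e-9, atol=1e-12)
    sol2 = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0 + 1e-8],
                     dense_output=True, rtol=1e-9, atol=1e-12)

    t = np.linspace(0.0, 40.0, 401)
    separation = np.linalg.norm(sol1.sol(t) - sol2.sol(t), axis=0)
    print(separation[::100])  # grows by orders of magnitude: sensitive dependence

The rapidly growing separation between the two nearly identical runs is the sensitive dependence on initial conditions that characterizes chaos.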

 

Projection of a three-dimensional Lorenz attractor.

The butterfly shape of the Lorenz attractor may have inspired the name of the butterfly effect in chaos theory.


Bibliography

  • Lorenz, E. N. (1963). «Deterministic nonperiodic flow». J. Atmos. Sci. 20, pp. 130-141.
  • Frøyland, J. and Alfsen, K. H. (1984). «Lyapunov-exponent spectra for the Lorenz model». Phys. Rev. A 29, pp. 2928-2931.
  • Strogatz, Steven H. (1994). Nonlinear Dynamics and Chaos. Perseus Publishing.
  • Bergman, Jonas (2004). Knots in the Lorenz system. Undergraduate thesis, Uppsala University.
  • Grassberger, P. and Procaccia, I. (1983). «Measuring the strangeness of strange attractors». Physica D 9, pp. 189-208. doi:10.1016/0167-2789(83)90298-1.


Link: https://es.wikipedia.org

Logo KW

Attackers abuse legacy routing protocol to amplify distributed denial-of-service attacks [1278]

de System Administrator - viernes, 3 de julio de 2015, 14:50
 

Attackers abuse legacy routing protocol to amplify distributed denial-of-service attacks

By Lucian Constantin

Servers could be haunted by a ghost from the 1980s, as hackers have started abusing an obsolete routing protocol to launch distributed denial-of-service attacks.

DDoS attacks observed in May by the research team at Akamai abused home and small business (SOHO) routers that still support Routing Information Protocol version 1 (RIPv1). This protocol is designed to allow routers on small networks to exchange information about routes.

RIPv1 was first introduced in 1988 and was retired as an Internet standard in 1996 due to multiple deficiencies, including lack of authentication. These were addressed in RIP version 2, which is still in use today.

In the DDoS attacks seen by Akamai, which peaked at 12.8 gigabits per second, the attackers used about 500 SOHO routers that are still configured for RIPv1 in order to reflect and amplify their malicious traffic.

DDoS reflection is a technique that can be used to hide the real source of the attack, while amplification allows the attackers to increase the amount of traffic they can generate.

RIP allows a router to ask other routers for information stored in their routing tables. The problem is that the source IP (Internet Protocol) address of such a request can be spoofed, so the responding routers can be tricked into sending their information to an IP address chosen by attackers—like the IP address of an intended victim.

This is a reflection attack because the victim will receive unsolicited traffic from abused routers, not directly from systems controlled by the attackers.

But there’s another important aspect to this technique: A typical RIPv1 request is 24 bytes in size, but if the responses generated by abused routers are larger than that, attackers can generate more traffic than they otherwise could with the bandwidth at their disposal.

In the attacks observed by Akamai, the abused routers responded with multiple 504-byte payloads—in some cases 10—for every 24-byte query, achieving a 13,000 percent amplification.
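
A back-of-the-envelope sketch of that amplification arithmetic in plain Python (the payload counts are illustrative; the six-payload average is our inference from the reported figure, not a number from the Akamai report):

    REQUEST_BYTES = 24    # typical RIPv1 query
    PAYLOAD_BYTES = 504   # one full RIPv1 response payload (25 route entries)

    for payloads in (1, 6, 10):
        response = payloads * PAYLOAD_BYTES
        factor = response / REQUEST_BYTES
        print(f"{payloads:2d} payload(s) -> {response:5d} bytes, "
              f"{factor:6.1f}x ({factor * 100:,.0f}% amplification)")

    # Ten payloads yield a ~210x gain; the reported ~13,000 percent figure
    # corresponds to roughly six payloads per query on average.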

Other protocols can also be exploited for DDoS reflection and amplification if servers are not configured correctly, including DNS (Domain Name System), mDNS (multicast DNS), NTP (Network Time Protocol) and SNMP (Simple Network Management Protocol).

The Akamai team scanned the Internet and found 53,693 devices that could be used for DDoS reflection using the RIPv1 protocol. Most of them were home and small business routers.

The researchers were able to determine the device make and model for more than 20,000 of them, because they also had their Web-based management interfaces exposed to the Internet.

Around 19,000 were Netopia 3000 and 2000 series DSL routers distributed by ISPs, primarily from the U.S., to their customers. AT&T had the largest concentration of these devices on its network—around 10,000—followed by BellSouth and MegaPath, each with 4,000.

More than 4,000 of the RIPv1 devices found by Akamai were ZTE ZXV10 ADSL modems and a few hundred were TP-Link TD-8xxx series routers.

While all of these devices can be used for DDoS reflection, not all of them are suitable for amplification. Many respond to RIPv1 queries with a single route, but the researchers identified 24,212 devices that offered at least an 83 percent amplification rate.

To avoid falling victim to RIPv1-based attacks, server owners should use access control lists to restrict Internet traffic on UDP source port 520, the Akamai researchers said in their report. Meanwhile, the owners of RIPv1-enabled devices should switch to RIPv2, restrict the protocol’s use to the internal network only or, if neither of those options is viable, use access control lists to restrict RIPv1 traffic only to neighboring routers.

Link: http://www.networkworld.com

Logo KW

Auditoría de seguridad avanzada para redes [955]

de System Administrator - lunes, 20 de octubre de 2014, 14:24
 

How to perform an advanced network security audit

by Kevin Beaver

Enterprise networks have evolved considerably in recent years. Everything from mobile computing to the cloud (not to mention all the new internal systems, services and applications) is making today's networks more complex than ever. Even the very tools used to manage network security can expand the attack surface and create vulnerabilities of their own.

As many users find out the hard way, complexity is bad for security, yet it continues unabated.

One of the central challenges companies face today is not knowing, at any given moment, where things really stand with their network vulnerabilities and business risks. If a company is going to make tangible improvements in security, it has to take its network security testing to the next level. In this tip, I will explain how to get started.

Starting a next-generation network security audit

First of all, it is important to remember the differences between security audits, vulnerability scans, penetration tests and information risk assessments. There are similarities among them all (namely, finding and fixing network security flaws before they are exploited), but the approaches, tools, skill sets and time required for each vary considerably.

The most important thing to understand about security audits is what your company is trying to accomplish. We don't tell our radiologists and home inspectors to limit their scope based on what we assume we need; why limit the focus of network security testing to a single thing, such as breaking into the network from the outside, scanning a website for PCI DSS compliance, validating that audit logging and patching are being carried out, or ensuring that documented policies exist? A combination of all the traditional testing methods is warranted; I call it a "security assessment." And through scanning, manual analysis, physical walkthroughs and interviews, I have found this to be the best approach.

A fundamental law of network security is that you cannot secure what you do not acknowledge. Before an organization can truly state how secure its environment is, it must be sure it is looking at all the right areas, and doing so periodically and consistently. This does not mean the company has to look at every system, on every network segment, at all times, but it should focus on what is most important and most visible. In other words, at a minimum, look at the most sensitive information on the most critical systems and work out from there. If you are eventually able to assess the security of every system and application, both internal and external to the network, that's great. After all, if it has an IP address, a URL or (given the way things are these days) an on/off switch, it is probably a legitimate target for attack.

Tools of the trade

In order to get a true picture of security, stop depending on basic port and vulnerability scans. The business must be willing to invest in tools that allow it to dig deeper, which will require the following:

  • Network analyzers that can find protocol, system and user/host anomalies.
  • Password and encryption crackers that test the security of operating systems, mobile devices and full-disk encryption.
  • Personally identifiable information (PII) search tools that can find improperly secured files on network shares and unencrypted laptops.
  • SQL injection and proxy tools for deeper web application testing.
  • Database scanners that look for live databases that are unaccounted for.
  • Firewall rulebase analyzers, which can uncover basic network flaws that might otherwise be hard to find.
  • Source code analyzers to uncover problems lurking in web and mobile applications.
  • Exploitation tools to demonstrate what can happen when certain flaws are discovered.

Before you begin

Planning things well in advance is critical to a good security test. It sets expectations for everyone involved, including management, developers and IT operations staff. These are the main areas to address:

  • Who is sponsoring the test? Who will carry out the testing?
  • What will be tested? (That is: internal, external, with or without user authentication.)
  • When will it be tested?
  • Is manual analysis required? (Periodic reliance on automated scans is fine, but total reliance without ever performing manual analysis is dangerous, though all too common.)
  • Roughly how long will the test take? (It will take longer than you think, especially when web scanning is involved.)
  • Will full system information be provided, or will it be a blind test? (I prefer the former, to ensure a definitive scope and that nothing is overlooked.)
  • What is your contingency plan in case of a problem, such as denial of service on the network or a web vulnerability scanner that fills up a production database?

Running the tests

It could take an entire book to cover this phase of a network security audit, but here are four subphases your company will want to execute (a minimal code illustration of the enumeration step follows the list):

  1. Reconnaissance. To determine which systems, applications, people/processes, etc. should be analyzed.

  2. Enumeration. To see what is running where, so you can develop a plan of attack.

  3. Vulnerability discovery. To uncover specific flaws, such as weak passwords, SQL injection and missing software updates.

  4. Vulnerability demonstration. To show the significance of the vulnerabilities you find and how they matter in the context of your network and/or business environment.
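
As a minimal illustration of the enumeration subphase, here is a TCP connect probe using only Python's standard library (the host and port list are placeholders; never scan systems without written authorization):

    import socket

    HOST = "192.0.2.10"            # placeholder address (TEST-NET-1 documentation range)
    PORTS = [22, 80, 443, 3306]    # a few common services

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            is_open = s.connect_ex((HOST, port)) == 0  # 0 means the TCP handshake succeeded
            print(f"{HOST}:{port} {'open' if is_open else 'closed/filtered'}")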

Once again, manual analysis is a big and very important part of security testing. A monkey can run a vulnerability scanner; the real value comes when you use your wisdom and common sense to uncover and exploit the flaws that require human intervention.

Building an effective report

At this stage, the report you build outlines where and how things stand; however, this step is generally not taken as seriously as it should be. Even if you discover all the biggest security gaps and exploits, if those risks are not communicated well to others within the organization, they probably will not be acted upon and the cycle of security breaches will continue.

From my own experience, I have found that a formal report containing both an informative executive summary (that is, not just a couple of sentences of fluff) and a list of the technical details of each critical and non-critical finding works well; the shorter, the better. Never hand over reports exported from security testing tools and expect others to take the report seriously.

Anyone reading the final report needs to know three basic things:

  1. What the findings are.
  2. Where they are present.
  3. What can be done about them.

Even with screenshots and other technical details, you can often list two or three findings on a single page of the report. Keep in mind that the more detailed the report, the greater the chance that something will be overlooked. Refer to an appendix, supplementary test data (for example, PDF or HTML reports from the tools you used) or websites whenever possible.

Your time and your knowledge are all you have. Companies rely heavily on the results of network security assessments, so continuous improvement is a must. Sometimes it is about your tools, sometimes it is your methodology, but often it is just a matter of getting better at what you do. Considering all the assessments I perform each year, I have realized that if you fine-tune your approach to security testing, you can get much better results in a shorter period of time, even with all the complexity present in today's networks.

About the author: Kevin Beaver is an information security consultant, writer, professional speaker and expert witness with Principle Logic LLC. With over 25 years of experience in the industry, Kevin specializes in performing independent security vulnerability assessments of network systems, as well as web and mobile applications. He has authored or co-authored 11 books on information security, including the best-selling "Hacking For Dummies" and "The Practical Guide to HIPAA Privacy and Security Compliance." In addition, he is the creator of the "Security On Wheels" information security audiobooks and blog, which provide security learning for IT professionals on the go. You can find him at his website www.principlelogic.com and follow him on Twitter: @kevinbeaver.

Logo KW

Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020 [1192]

de System Administrator - lunes, 6 de abril de 2015, 19:04
 

Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020

Editor’s note: Tim Merel is the managing director of Digi-Capital.

Virtual reality and augmented reality are exciting – Google Glass coming and going, Facebook’s $2 billion for Oculus, Google’s $542 million into Magic Leap, not to mention Microsoft’s HoloLens. There are amazing early-stage platforms and apps, but VR/AR in 2015 feels a bit like the smartphone market before the iPhone. We’re waiting for someone to say “One more thing…” in a way that has everyone thinking “so that’s where the market’s going!”

A pure quantitative analysis of the VR/AR market today is challenging, because there’s not much of a track record to analyze yet. We’ll discuss methodology below, but this analysis is based on how VR/AR could grow new markets and cannibalize existing ones after the market really gets going next year.

AR is from Mars, VR is from Venus

VR and AR headsets both provide stereo 3D high-definition video and audio, but there’s a big difference. VR is closed and fully immersive, while AR is open and partly immersive – you can see through and around it. Where VR puts users inside virtual worlds, immersing them, AR puts virtual things into users’ real worlds, augmenting them.

You might think this distinction is splitting hairs, but that difference could give AR the edge over not just VR, but the entire smartphone and tablet market. There are major implications for Apple, Google, Microsoft, Facebook and others.

Where’s the beef?

VR is great for games and 3D films – that’s what it was designed for. However it is primarily a living room, office or seated experience, as you might bump into things if you walked down the street wearing a closed headset. It’s still a great technology with a ready and waiting user base of tens of millions among console, PC and MMO gamers, those who prefer 3D to 2D films, as well as niche enterprise users (e.g. medical, military, education). This has attracted a growing apps/games ecosystem around early players like Unity, Valve, Razer and others.

AR is great fun for games, but maybe not as much fun as VR when true immersion is required – think mobile versus console games. But that possible weakness for gamers is exactly why AR has the potential to play the same role in our lives as mobile phones with hundreds of millions of users. You could wear it anywhere and do anything. Where VR is like wearing a console on your face (Oculus), AR is like wearing a transparent mobile phone on it (Magic Leap, HoloLens).

Eye phone

AR could play a similar role to mobile across sectors, as well as a host of uses nobody has thought of yet. The sort of things you might do with AR include a-commerce (we just invented a new cousin to e-commerce and m-commerce), voice calls, web browsing, film/TV streaming in plain old 2D as well as 3D, enterprise apps, advertising, consumer apps, games and theme park rides. So while games feature prominently in most AR demos, they are only one of a multitude of potential uses for AR. See full analysis by sector here.

Real dollars

We forecast that AR/VR could hit $150 billion in revenue by 2020, with AR taking the lion's share at around $120 billion and VR at $30 billion.

So where do these big numbers come from?

We think VR’s addressable market is primarily core games and 3D films, plus niche enterprise users. VR could have tens of millions of users, with hardware price points similar to console. We anticipate consumer software/services economics similar to current games, films and theme parks, but don’t expect substantial additional data or voice revenues from VR. There could be meaningful enterprise VR revenues, but we think that AR could take more of that market.

We think AR’s addressable market is similar to the smartphone/tablet market. So AR could have hundreds of millions of users, with hardware price points similar to smartphones and tablets. This could drive large hardware revenues for device makers.

AR software and services could have similar economics to today’s mobile market, as they both cannibalize and grow it. A large AR user base would be a major revenue source for TV/film, enterprise, advertising, and consumer apps from Facebook to Uber to Clash of Clans. Amazon and Alibaba would have an entirely new platform for selling to a mass audience. Together with innovative applications nobody has thought of yet, AR’s scale could prove a bonanza for mobile networks’ voice and data businesses. Someone has to pay for all that mobile data.

The sector forecasts by market from 2016 are covered in detail here, and below is what the market could look like in 2020. Between now and then we’ll be refining regularly as more data becomes available, and expect a lot of debate about where the market is headed.

Vomit reality and Glassholes

It’s not all virtual beer and Skittles, as some VR applications give people motion sickness, and the privacy questions surrounding Google Glass raised a lot of scrutiny. So there are both technical and social issues to resolve as both markets become real.

One more thing…

From the perspective of current giants, there are pluses and minuses. Facebook placed an early bet on Oculus, which might win VR but not address the larger AR market. Google learned from Glass, and had the foresight to invest in Magic Leap. HoloLens could allow Microsoft to regain the glory it lost to Apple in the last decade. And Apple? We would love to see an augmented “One more thing…”

The full analysis is here.

Logo KW

Augmented reality could improve postal operations, says report [1198]

de System Administrator - sábado, 11 de abril de 2015, 20:29
 

Augmented reality could improve postal operations, says report

Logo KW

Augmented Reality, Not VR, Will Be the Big Winner for Business [1691]

de System Administrator - sábado, 9 de abril de 2016, 18:04
 

Augmented Reality, Not VR, Will Be the Big Winner for Business

By Marc Prosser

Sometimes exponential technologies hide in plain sight. Their innovation speed, market size, and number of practical uses seem to be progressing linearly, until suddenly they tilt upwards and turn on the boosters.

A case can be made that augmented reality (AR) in enterprises is just such an exponential technology. A growing number of companies are busy developing and releasing AR systems for enterprise settings.

Augmented and virtual reality analyst Digi-Capital’s numbers give a good indication of just how big AR is expected to be in a few short years. According to Digi-Capital, AR companies will generate $120 billion in revenue by 2020, compared to the $30 billion revenue expected for their ‘cousin’ companies in virtual reality.

Part of AR’s explosive growth can be traced to a wide array of uses in business settings. The tech is a fundamental component in the hardware and software revolution, known as Factory 4.0.


First Systems Are Go

While virtual reality is about creating closed, fully immersive digital experiences, augmented reality systems overlay sensory information, such as visuals and sounds, on the real world around you.

The best-known example is Google Glass—a kind of partial AR experience where a square display appears in a user’s field of view. The device never became the success with end-users that Google was hoping for.

However, with 20-20 hindsight (if you’ll pardon the terrible pun) Google Glass was partly responsible for kickstarting a wave of innovative new AR startups. Unlike Google, these companies focused solely on AR’s potential for enterprises.

One example is the Canadian company NGrain, whose solutions have been implemented in several major companies, including Lockheed Martin and Boeing.

Lockheed has used AR systems in relation to its F-35 and F-22 aircraft.

Using smart glasses or tablets, engineers and service personnel can overlay graphics that show data like maintenance records or assembly instructions on top of a piece of real-world equipment. The system can also compare a digital 3D model of an aircraft with an actual aircraft to identify any potential damage.

The introduction of AR let Lockheed Martin engineers work up to 30 percent faster.

Meanwhile, at Boeing, several teams are looking at using AR systems to perform running quality control of parts for several aircraft, including the 787 Dreamliner. AR systems also allow maintenance crews to carry out full airplane checks much quicker than was previously possible.

“Traditionally, these tasks are carried out consulting manuals and using paper and pen,” says Barry Po, director of product management at NGrain. “Using AR-systems lets you overlay any needed information, while you have both hands free, and our visual inspection and damage assessment software can make it much quicker to identify potential issues. The result is that the time it takes to do a full plane check can go from several days to a matter of hours.”

Other AR systems have been used to deliver on the job training.

Using images and instructional illustrations and video, augmented reality units can show a new employee how to complete a job task without needing an introduction course.

Further, data has shown workers using AR technology learn up to 95% quicker and tend to make fewer mistakes than workers trained using traditional methods.

Pipes Are Being Laid to Drive Broader Adoption

While AR in enterprises has shown impressive results, most of these come from initial pilot projects, using a limited number of devices.

AR is also facing a number of challenges, including a lack of industry standards which can make integrating AR units and software within current enterprise IT ecosystems difficult.

“Traditional software systems like ERP or WMS are not necessarily ready to integrate fully with the new technologies, like AR, that make up Factory 4.0,” Pete Wassel, CEO of Augmate, says.

AR companies have often run successful trials, instigated by a company CTO, but then hit a wall when attempting a full rollout.

Enterprise IT departments have often — and often understandably so — balked at the idea of introducing camera-equipped AR units that come with a host of potential security risks and integration headaches.

It is a situation that Augmate, along with other companies, has been working to solve.

Augmate is creating the backbone, or pipe systems, that make integration of AR into existing IT ecosystems smooth and safe. Its software systems have generated a lot of interest, not only within the AR industry, but also from individual enterprises and companies within the Internet of Things space.

AR’s Stealth Mode About to End

Enterprises are quickly becoming aware of the potential of AR, with two-thirds of companies recently interviewed by Tech Pro Research saying they were considering integrating AR solutions.

At the same time, the number of use case scenarios for AR is growing rapidly.

Training, maintenance, warehouse management, emergency response at breakdowns, co-worker location, damage assessment, work order creation, assembly product design, and marketing and sales are all being augmented.

The same goes for industry-specific tasks in a number of fields.

For example, in health care AR can assist with information during surgery, medical inspections, in relation to specific medical procedures, or simply to call up and immediately display a patient’s relevant medical history hands-free on a pair of smart glasses.

One of the biggest use cases across industries is remote maintenance and inspection. Using AR systems, experts will be able to give advice to on-site personnel in any of a number of situations. This would not only eliminate having to fly key personnel around the world but dramatically improve response times.

“It makes it possible to create what I call ‘John Madden’ guides, where experts are able to draw instructions and point things out in real time,” Pete Wassel says.

Companies and startups have been working on AR solutions for many of these specific tasks, and many are nearing full release, after spending time in either beta or stealth mode.

At the same time, the hardware capabilities of AR devices — field of vision, battery time, sturdiness, and ease of use — are improving rapidly. Also, motion sensor and eye tracking technology are improving, allowing for more hands-free use.

In short, it is a perfect scenario for rapid growth in enterprise AR.

A Future Beyond the Factory

While the coming years are likely to see the use of AR technology in enterprises explode, its enterprise heyday will likely end when it is supplanted by another exponential technology.

“Technology moves in cycles. I would think that AR in enterprises will have a good run of maybe 15 years,” Pete Wassel says. “After that, robots and AI will start to outcompete human workers and become the new dominant exponential technologies in enterprises.”


But by then, it will have likely diffused beyond enterprises and become part of our daily lives.

As a species, we build knowledge on what was discovered by previous generations. We quickly realized it was impractical to rely on memory alone to do this, so we invented the printed word.

Our accumulated knowledge grew to fill lexicons and then whole libraries. Computers and the Internet are, of course, powerful new methods of storing and recalling information.

Each iteration increases the amount of information stored and makes it more readily accessible.

Augmented reality looks like another step, seamlessly integrating the physical world with our stores of information. Imagine being able to call up information about, or perform a range of other actions on, every object around you through a layer of AR.

This is the true promise of AR beyond its near-term enterprise sweet spot.

The ability to right-click on reality.

Marc Prosser 

Marc is British, Danish, Geekish, Bookish, Sportish, and loves anything in the world that goes 'booiingg'. He is a freelance journalist and researcher living in Tokyo and writes about all things science and tech. Follow Marc on Twitter (@wokattack1).

Image credit: Shutterstock.com

Link: http://singularityhub.com

 

Logo KW

Austin Powers [337]

by System Administrator - Monday, February 3, 2014, 17:06
 


Sir Austin Danger Powers, KBE (born November 12, 1939), is a British spy created and played by Mike Myers. He is the protagonist of the film trilogy Austin Powers: International Man of Mystery, Austin Powers: The Spy Who Shagged Me, and Austin Powers in Goldmember.

Source: http://es.wikipedia.org/wiki/Austin_Powers_(personaje)

Logo KW

Autoconcepto y Autoestima [524]

by System Administrator - Saturday, July 12, 2014, 17:28
 

What are the differences between self-concept and self-esteem

Source: http://hipercognicion.blogspot.com/2013/08/que-diferencias-hay-entre-autoconcepto.html

Although the two concepts are closely related, their meanings differ noticeably. Both depend on personality and on the person's social impact, and both are subjective, but there are some differences between them.

Self-concept is the description a person makes of himself or herself, covering physical appearance as well as personality and aptitudes. It is a subjective judgment, which need not match objective truth or what others perceive. Thus, an individual may describe himself as tall, thin, and a good basketball player without anyone else agreeing on any of those traits. Extreme examples of this subjective perception are disorders such as anorexia.

Self-esteem, for its part, is the affective valuation people hold of themselves. Self-concept influences this valuation, so a positive self-concept can translate into good self-esteem. Self-esteem does not dwell on specific traits; it is a global appraisal that generates a kind of feeling toward ourselves. Someone with good aptitudes may show a negative self-concept if the social environment he moves in transmits one, which in turn may lead that person to develop negative self-esteem.

The fundamental difference, therefore, is that self-concept is a description of oneself in terms of different physical and mental traits, while self-esteem is the global valuation we have of ourselves.
Logo KW

Autodestrucción del Hardware [465]

by System Administrator - Friday, April 4, 2014, 11:58
 

 

Hardware with the Ability to Self-Destruct

IT-Sitio / 14.02.2014

After winning a US$3.4 million contract with the U.S. government, IBM will try to develop a series of hardware solutions capable of self-destructing under certain specific environmental conditions. What could this be used for? Here is the story.

The U.S. defense research agency DARPA is known, among other things, as the institution that funded the birth of the Internet. It now has a new project: developing hardware capable of self-destructing under certain conditions in its surroundings.

The firm awarded the contract to attempt this product is IBM, which will receive US$3.4 million for the development, named VAPR (Vanishing Programmable Resources).

As DARPA explains, "the hardware that is normally purchased is designed to last many years. However, there are occasions, and operations of the U.S. armed forces, in which the electronics should only last for the duration of the mission, to prevent it from falling into enemy hands."

Essentially, IBM will try to develop a system that can use radio frequencies to shatter a transparent covering, possibly made of some type of glass, that coats the key microchips of a computer. Such a PC could be, for example, on a battlefront, being used to encrypt and decrypt messages.

This was not DARPA's only commission. Although that grant went to another firm, SRI International, the goal of the second project is to develop a new type of battery that is likewise capable of self-destructing.

Logo KW

Autoengaño [553]

by System Administrator - Sunday, July 13, 2014, 23:51
 

Strategies of the Mind: Self-Deception

An injured man lies in a hospital bed, his head wrapped in bulky bandages. He has suffered a heavy blow to the head, and the lesion has affected the region of the brain that controls the movements of his left arm.

The doctor asks him, "Please raise your left arm."

The patient says yes, but his arm stays where it is. "...it's tangled in the sheet," he says.

The doctor assures him it is not tangled. At this point the patient may answer something as implausible as "well... maybe I'm a bit tired, I just don't feel like raising it right now." People who care for accident victims know how incongruent patients' answers can be when they are asked to do something they cannot do.

The inability to recognize an impossibility is a disorder called anosognosia, and it is part of a peculiar aspect of human psychology: our unlimited capacity for illusion.

Faced with the raw, unequivocal reality that a part of his body is paralyzed, a person can easily construct an alternative account rather than acknowledge the problem. He is not really lying; he sincerely believes in the validity of his claims.

Although this clinical phenomenon seems strange, in a certain sense we all do something similar almost every day. We would like to think we shape our beliefs to fit the reality around us, but there is a human impulse to do the opposite: we shape our reality to fit our beliefs, no matter how flimsy the justifications.

 

"Illusion" can be defined as an absurd belief that is nevertheless cherished despite overwhelming evidence against it. To be honest, who has not believed in something illusory at some point? In fact, a certain dose of self-deception is essential to our mental health.

 

As our lives advance, we form all kinds of beliefs and opinions about our surroundings and the world in general. These beliefs fall into two types. Instrumental beliefs are those that can be tested directly: "I need a hammer to drive a nail." Such convictions tend to be clearly verifiable; if I act on them and fail, I will have to reconsider them.

The other type, philosophical beliefs, are those that are not so easy to test. They are really ideas we hold for their emotional benefits, though they are not easy to demonstrate. For example, when we say "I live in the best country in the world" or "true love lasts a lifetime," we cannot really offer any evidence to support these ideas; we believe them because they meet our emotional needs.

 

Human beings pass off emotional beliefs as instrumental ones; we confuse the two very frequently. Just recall December 21, 2012, when many people believed the world really was about to end.

 

All for a Sense of Control

One of the most powerful needs human beings have is to feel in control. We all know the helplessness and stress caused by realizing we are in danger; believing we control our destiny helps relieve that negative experience, even when the belief is unfounded. Hence the enormous appeal of "magical thinking," the belief that thoughts and gestures by themselves can influence the world around us. Who does not know someone who wears his favorite soccer team's shirt thinking it will bring luck? Or who plays numbers with a special meaning, convinced they will win the lottery?

Is there a formula for avoiding self-deception? Perhaps, but the problem with illusion is that we do not want to escape it. If we woke up every morning and looked reality squarely in the face, we might feel like slitting our wrists, perhaps literally. Psychologists know that depressed people are less credulous than the rest; they are much more perceptive and aware of their own defects, something called "depressive realism."

Clearly, human beings are programmed to "keep the illusion alive." Self-deception is part of that strategy, so perhaps the best we can do is enjoy our illusions while we can and hope they do not cause us too many problems along the way.

Link: http://glosariodigital.blogspot.com/2014/07/estrategias-de-la-mente-el-autoengano.html

Logo KW

Avatar [419]

by System Administrator - Tuesday, January 14, 2014, 15:35
 

Avatar is a 2009 American science-fiction film written, produced, and directed by James Cameron and starring Sam Worthington, Zoe Saldaña, Sigourney Weaver, Stephen Lang, and Michelle Rodríguez.

Set in the year 2154, its events unfold on Pandora, a moon of the planet Polyphemus inhabited by a humanoid race called the Na'vi, with whom humans are in conflict because one of their clans is settled around a gigantic tree that covers an immense vein of a highly prized mineral that would solve Earth's energy problems: unobtainium. Jake Sully, a marine left paraplegic, is selected to take part in the Avatar program, a project that transfers the minds of scientists into artificial Na'vi bodies so that communication with the natives becomes easier. Despite the project's scientific purpose, Colonel Quaritch, who leads the defense of the human base on Pandora, persuades Jake to supply him with information about the natives in case force becomes necessary to make them leave. At first Jake carries out his mission professionally, but he falls in love with one of the natives, Neytiri, and realizes that they will never give up their land, making an armed conflict inevitable; he must decide whose side he is on.

Avatar's official budget was 237 million dollars, although some estimates place it between 280 and 310 million, plus another 150 million devoted to marketing.

The film opened on December 18, 2009 in much of Europe and in the United States, as well as in Chile, Mexico, Paraguay, Venezuela, and Uruguay, although in some countries it screened on different dates, both before and after the international premiere. Among them were Peru, Belgium, France, Indonesia, Jamaica, and Egypt, where it could be seen from December 16, while in Argentina, China, and Italy it opened on January 1, 2, and 15, respectively. In Cuba it premiered on February 6, 2010.

On its opening day Avatar took in approximately 27 million dollars, a figure that rose to 241 million after its first weekend at the box office. Seventeen days after release it became the fastest film ever to reach one billion dollars in receipts, and after three weeks it stood as the highest-grossing film of all time, surpassing Titanic (1997), also by James Cameron. Avatar passed that mark in under six weeks, becoming the highest-grossing film in cinema history to date and the first film to break the two-billion-dollar barrier.

Avatar was nominated for the 2010 Academy Award for Best Picture, and won the 2010 Academy Awards for Best Visual Effects, Best Art Direction, and Best Cinematography. The film was re-released in the United States on August 27, 2010 in 3D and IMAX 3D theaters, with some previously unseen scenes.

Source: http://es.wikipedia.org/wiki/Avatar_(película)

Logo KW

AWS CloudFormation [906]

by System Administrator - Wednesday, October 1, 2014, 19:21
 

AWS CloudFormation (Amazon Web Services CloudFormation)

Posted by Margaret Rouse

Amazon Web Services CloudFormation is a free service that provides developers with a simple way to create and manage an assortment of Amazon Web Service resources while keeping them organized.

CloudFormation has programming tools and features open APIs that enable third parties to develop things on top of Amazon Web Services (AWS), like applications. It is designed to provide users with a richer development environment, enabling them to extend and develop enterprise ready applications that would run on Amazon Web Services. This gives users the ability to build a dependable, highly available, and scalable AWS infrastructure for their application requirements.

 

Users can create a template in a simple JSON-formatted text file, or use a sample template, to define the AWS resources and any restrictions necessary to run an application. Templates can be stored in public or private locations and can be reused to create copies of a current stack or as a foundation to create a new one.

By using the AWS Management Console, AWS Command Line Interface or APIs, users can edit and deploy a template and its accompanying assortment of resources, referred to as a stack. CloudFormation allows users to see which resources make up each stack and gives users the power to edit any of the resources within a stack. Once the stack is built, users can customize template features using parameters like tuning settings for templates located in multiple regions to ensure consistency among deployed applications.
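
To make the template-and-stack workflow concrete, here is a minimal sketch in Python using the AWS boto3 SDK; the template body, stack name, and bucket name are hypothetical examples rather than anything from this article.

# Minimal sketch of the CloudFormation template/stack workflow (boto3).
# The template, stack name, and bucket name are hypothetical examples.
import json
import boto3

# A tiny JSON-style template defining a single AWS resource (an S3 bucket).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example stack with one S3 bucket",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-cloudformation-bucket"},
        }
    },
}

cloudformation = boto3.client("cloudformation")

# Deploy the template as a stack and block until creation completes.
cloudformation.create_stack(
    StackName="example-stack",
    TemplateBody=json.dumps(template),
)
cloudformation.get_waiter("stack_create_complete").wait(StackName="example-stack")

# List the resources that make up the stack, as the console would show them.
resources = cloudformation.describe_stack_resources(StackName="example-stack")
for resource in resources["StackResources"]:
    print(resource["LogicalResourceId"], resource["ResourceStatus"])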

CloudFormation reduces the user's operational burden by automating service provisioning steps and keeping them in sync. Users can adjust and edit AWS resources in an organized and reliable way once they're deployed, similar to version control, which tracks which software releases your infrastructure relies on.

AWS CloudFormation is designed to enable medium and large businesses with the need for more sophisticated development to deliver higher levels of security, more scalability and more programmability. These businesses will have more choices and provide more options when they’re trying to build and enhance their applications that would run on a platform like AWS. CloudFormation is best for businesses with a data center type of mentality where users develop their own applications as opposed to users who buy packages and don’t do much customization.

Amazon is building on a modern platform which might make their tools more efficient and less expensive than some of the traditional enterprise tools. At this stage of development, though, there are a few features users might find that don’t work exactly as advertised. Users also might find a need for certain functionality that has not yet been developed.

There is no extra charge for AWS CloudFormation. Users only pay for the AWS resources required to run their applications.

 

Link: http://searchaws.techtarget.com

Logo KW

AWStats [274]

by System Administrator - Monday, February 3, 2014, 17:11
 


AWStats is an open-source web analytics reporting tool. It works from the server's log (access) files and presents the data in tables and bar charts.
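
The kind of log-file analysis AWStats automates can be approximated in a few lines; the following Python sketch (not AWStats code, and with a hypothetical log path) tallies hits per requested page from a Common Log Format access file.

# Toy sketch of access-log analysis in the spirit of AWStats (not its code).
import re
from collections import Counter

# e.g.: 127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)')

hits = Counter()
with open("access.log") as log:  # hypothetical log file path
    for line in log:
        match = LINE.match(line)
        if match:
            ip, method, path = match.groups()
            hits[path] += 1

# Print the ten most requested pages, like one of AWStats' report tables.
for path, count in hits.most_common(10):
    print(f"{count:8d}  {path}")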

Source: http://es.wikipedia.org/wiki/Awstats

Logo KW

Azar [511]

by System Administrator - Saturday, July 12, 2014, 16:44
 

The Mystery of Chance

Source: http://hipercognicion.blogspot.com/2010/09/azar.html

Before the dice leave the player's hand, the position in which they will land is already determined. We could say the dice know it, but we do not. So should we speak of random events, or rather of events that are random to human beings? Every ending has a beginning, and every consequence has its corresponding, unique cause. Chance in itself, then, is nothing more than a demonstration of our inability to connect causes with consequences. Superstition toils in the search for those causes, with varying success. So does science, though it is still in its infancy when it comes to explaining the origin of certain complex events.

Every event has a logical explanation, a cause that gives rise to it. Chance lumps together all the events whose origin is impossible for us to predict, but that does not mean they lack a logical cause.

To increase the frequency of a phenomenon that is arbitrary to us, the best we can do is concentrate our thought on the final outcome, for only then will we have some degree of control over its result. Imagining a consequence is the best way to induce the causes that give rise to it. Perhaps we cannot consciously position the dice in our hand so that they land a certain way, but we can imagine them landing in that position, and we may begin to see the desired results appear.
Logo KW

Backing up virtual servers [596]

by System Administrator - Tuesday, July 29, 2014, 11:06
 

Backup challenges: Backing up virtual servers

by: Brien Posey

In the second part of this primer on backup challenges today, Brien Posey explores why backing up virtual servers creates data protection challenges.

Data growth, the need for more frequent data protection and a variety of other challenges have forced administrators to look for alternatives to traditional backups. The second part of this primer on backup challenges today explains why backing up virtual servers presents challenges for IT pros.

In recent years, backing up virtual servers has been another source of headaches for backup administrators. When server virtualization first gained mainstream acceptance, the backup process had a lot of catching up to do. Most backup applications were not virtualization aware, and backup administrators were only beginning to understand the ways in which virtual environments differed from physical server backups.

Initially, most backup administrators chose to back up virtual machines by deploying backup agents to each individual virtual machine. Ultimately, however, this approach proved to be inefficient at best. As virtual machines proliferated, managing large numbers of backup agents became challenging. Never mind the fact that, at the time, many backup products were licensed on a per-agent basis. Resource contention also became a huge issue since running multiple, parallel virtual machine backups can exert a significant load on a host server and the underlying storage.

Today, all of the major third-party backup vendors offer products that are virtualization aware. These products are generally able to perform host-level backups rather than requiring a separate backup agent for each virtual machine. Although this approach is vastly superior to that of deploying a separate agent onto each virtual machine, host-level backups present challenges of their own.

 

One such challenge stems from the fact that not all virtual machines are created equally. Some are difficult to protect using host-level backups. As previously mentioned, host-level backups tend to protect individual virtual machines through image backups. This process generally works by creating virtual machine snapshots and then tracking storage block modifications. In order for the process to work properly, however, the virtual machine must be running a supported operating system and the backup application must be compatible with all applications that are running on the virtual machine.
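
To illustrate the block-tracking idea in the simplest possible terms, here is a toy Python sketch: it hashes fixed-size blocks of a disk image and reports only the blocks that changed since the previous backup. This is a conceptual illustration with invented file names, not any vendor's implementation (real products typically track changed blocks in the hypervisor rather than rehashing the whole image).

# Toy illustration of changed-block tracking for image-level backups.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks

def block_hashes(path):
    # One SHA-256 digest per fixed-size block of the image file.
    digests = []
    with open(path, "rb") as image:
        while True:
            block = image.read(BLOCK_SIZE)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

def changed_blocks(old_hashes, new_hashes):
    # Indices of blocks that are new or differ from the previous backup.
    return [i for i, digest in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != digest]

previous = block_hashes("vm-disk-yesterday.img")  # hypothetical image files
current = block_hashes("vm-disk-today.img")
print("blocks to copy into the incremental backup:", changed_blocks(previous, current))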

Virtual machine mobility also makes host-level backups challenging. Both VMware and Microsoft offer products that allow a running virtual machine to be moved from one host server to another. This means that when a host-level backup is created, it is entirely possible that the host will be running a completely different set of virtual machines than it was running the last time that a backup was made.

Check out our entire data protection primer on identifying data backup solutions for today's challenges.

This was first published in April 2014

Link: http://searchdatabackup.techtarget.com/

 

Logo KW

Backoffice [269]

by System Administrator - Sunday, January 12, 2014, 22:32
 

The backoffice of a computing system is understood as the software and hardware infrastructure, "transparent" to the end user, that supports all the functionality present in the "frontoffice" interface (management, design, research, navigation, education).

Logo KW

Bank of America files patent for cryptocurrency wire transfer system [1443]

by System Administrator - Wednesday, September 23, 2015, 21:06
 

Bank of America files patent for cryptocurrency wire transfer system

Logo KW

Barcelona, base del nuevo Centro de Innovación global sobre IoE [865]

by System Administrator - Sunday, September 14, 2014, 21:59
 

Barcelona, home of the new global IoE Innovation Center

Cisco and Schneider Electric are the first two companies to confirm their participation in the global IoE Innovation Center that will open in Barcelona within the Smart City Campus. The site will be a platform for research, technological development, and new market opportunities applied to smart cities and the Internet of Things.

The planned site will occupy 1,720 square meters in the historic 19th-century Ca l'Alier building, in the heart of the 22@Barcelona district. The center is scheduled to open in the summer of 2016, and 40 researchers will work there on IoE and its market opportunities.

"The new Innovation Center located in the Smart City Campus will help us reinforce our commitment to a new model of economic growth based on technology, urban innovation, and advanced services, improving people's well-being and quality of life," commented Xavier Trías, mayor of Barcelona.

The project creates a space where companies can collaborate with customers, partners, start-ups, public administrations, and research and education bodies. The site will also include an IoE innovation laboratory for designing new solutions and services for cities, and a live demonstration area.

"These centers constitute powerful hubs capable of fostering innovative thinking and generating new strategies to define and build a new relationship with our cities and our world," commented Anil Menon, president of Smart Connected Communities and deputy general manager of Globalization at Cisco.

It will be the fifth center of its kind for Cisco, joining the existing centers in Rio de Janeiro (Brazil) and Songdo (South Korea) and those under construction in Germany and Canada.

Schneider Electric, for its part, wants this first SmartCity Center of Excellence to attract talent and innovation, and over the coming months it will work to establish collaboration ties with leading universities, among them IESE, which also hosts the Schneider Electric Chair of Sustainability and Business Strategy.

"With this center of excellence we are giving fresh impetus to our capacity to provide cities with advanced solutions, under the premises of innovation, public-private collaboration, working with the best partners, and attracting the best international talent," explained Julio Rodríguez, Executive Vice President of Operations at Schneider Electric.

Link: http://www.computerworld.es

Logo KW

Bart Kosko [1716]

by System Administrator - Thursday, February 23, 2017, 22:16
 

Bart Kosko | Fuzzy Logic

Source: Wikipedia

Bart Andrew Kosko (born February 7, 1960) is a writer and professor of electrical engineering and law at the University of Southern California (USC). He is notable as a researcher and popularizer of fuzzy logicneural networks, and noise, and author of several trade books and textbooks on these and related subjects of machine intelligence.

Kosko’s technical contributions have been in three main areas: fuzzy logic, neural networks, and noise.

In fuzzy logic, he introduced fuzzy cognitive maps, fuzzy subsethood, additive fuzzy systems, fuzzy approximation theorems, optimal fuzzy rules, fuzzy associative memories, various neural-based adaptive fuzzy systems, ratio measures of fuzziness, the shape of fuzzy sets, the conditional variance of fuzzy systems, and the geometric view of (finite) fuzzy sets as points in hypercubes and its relationship to the ongoing debate of fuzziness versus probability.

In neural networks, Kosko introduced the unsupervised technique of differential Hebbian learning, sometimes called the “differential synapse,” and most famously the BAM or bidirectional associative memory family of feedback neural architectures, with corresponding global stability theorems.
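
As a minimal textbook-style sketch of the BAM idea (not Kosko's own code), the following Python/NumPy snippet stores two bipolar pattern pairs in a correlation matrix and recalls a stored pair from a noisy input by bouncing activations between the two layers until they stabilize.

# Minimal sketch of a Bidirectional Associative Memory (BAM).
import numpy as np

def bipolar_sign(v):
    # Threshold to {-1, +1}; zeros are mapped to +1 to keep states bipolar.
    return np.where(v >= 0, 1, -1)

# Two associated pattern pairs, stored as bipolar vectors.
X = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
Y = np.array([[1, 1, -1], [-1, 1, 1]])

# Hebbian-style correlation encoding: W is the sum of outer products x y^T.
W = sum(np.outer(x, y) for x, y in zip(X, Y))

def recall(x):
    # Iterate x -> y -> x between the layers until a stable pair is reached.
    y = bipolar_sign(x @ W)
    while True:
        x_next = bipolar_sign(y @ W.T)
        y_next = bipolar_sign(x_next @ W)
        if np.array_equal(x_next, x) and np.array_equal(y_next, y):
            return x, y
        x, y = x_next, y_next

# A noisy version of the first X pattern settles onto the stored pair.
x, y = recall(np.array([1, -1, 1, 1]))
print(x, y)  # -> [ 1 -1  1 -1] [ 1  1 -1]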

In noise, Kosko introduced the concept of adaptive stochastic resonance, using neural-like learning algorithms to find the optimal level of noise to add to many nonlinear systems to improve their performance. He proved many versions of the so-called “forbidden interval theorem,” which guarantees that noise will benefit a system if the average level of noise does not fall in an interval of values. He also showed that noise can speed up the convergence of Markov chains to equilibrium.

 

More: https://en.wikipedia.org

Logo KW

Base de Datos [146]

by System Administrator - Tuesday, January 7, 2014, 21:17
 

A database (or data bank) is a set of data belonging to the same context and stored systematically for later use. In this sense, a library can be considered a database composed mostly of printed documents and texts, indexed for consultation. Today, thanks to technological development in fields such as computing and electronics, most databases are in digital (electronic) format. Programs called database management systems (DBMS) allow data to be stored and later accessed quickly and in a structured way. The properties of these DBMSs, as well as their use and administration, are studied within the field of computer science. The most common applications are in the management of companies and public institutions. Databases are also widely used in scientific settings to store experimental data.

Databases can contain many types of data (datatypes), some of which are protected by law in various countries. In Spain, for example, personal data is protected by the Organic Law on the Protection of Personal Data (LOPD).
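
As a minimal sketch of what a DBMS makes possible (structured storage plus fast, structured retrieval), the following Python snippet uses SQLite from the standard library; the table and rows are invented examples.

# Minimal sketch of storing and querying data through a DBMS (SQLite).
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE films (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO films VALUES (?, ?)",
    [("2001: A Space Odyssey", 1968), ("Avatar", 2009)],
)

# Structured retrieval instead of scanning documents by hand.
for title, year in conn.execute("SELECT title, year FROM films WHERE year < 2000"):
    print(title, year)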

Source: http://es.wikipedia.org/wiki/Base_de_datos

Logo KW

BASIC [139]

by System Administrator - Tuesday, January 7, 2014, 19:45
 

BASIC (Beginner's All-purpose Symbolic Instruction Code) is a family of high-level computer programming languages. The original BASIC, Dartmouth BASIC, was designed in 1964 by John George Kemeny and Thomas Eugene Kurtz at Dartmouth College in New Hampshire, United States, as a means of making computer programming easier for students and teachers outside the sciences.

Source: http://es.wikipedia.org/wiki/Basic

Logo KW

Batlle, Jorge [338]

by System Administrator - Monday, February 3, 2014, 17:15
 


Jorge Luis Batlle Ibáñez (Montevideo, October 25, 1927) is a Uruguayan lawyer, journalist, and politician.

He served as a national senator and deputy. After five candidacies he was elected President of the Oriental Republic of Uruguay, an office he held from March 1, 2000 to March 1, 2005. He belongs to the liberal wing of the Colorado Party. He is currently devoted entirely to private activity, although he still has a voice in national politics.

Source: http://es.wikipedia.org/wiki/Jorge_Batlle

Logo KW

Batman [122]

by System Administrator - Tuesday, January 7, 2014, 15:07
 


Batman (initially known as The Bat-Man) is a character created by the Americans Bob Kane and Bill Finger (only the former is officially credited). He is the property of DC Comics. His first appearance was in the story "The Case of the Chemical Syndicate" in Detective Comics No. 27, published by National Publications in May 1939.

Source: http://es.wikipedia.org/wiki/Batman

Logo KW

BATTERY-FREE CHIP FOR THE ‘INTERNET OF THINGS’ [872]

by System Administrator - Thursday, September 18, 2014, 01:34
 

BATTERY-FREE CHIP FOR THE ‘INTERNET OF THINGS’ THAT’S THE SIZE OF AN ANT

Written By: Jason Dorrier

As a concept, the Internet of Things has been around for a while. In theory, as chips get smaller and cheaper, we should be able to embed them in everyday items. Appliances, lighting, doors, climate control—all these things (and many more) get a chip and an internet connection. They can send data and receive commands.

In short, a world of dumb, inanimate objects wakes up to do our bidding.

But there are usually more than a few roadblocks between concept and execution. And two of the biggest challenges for the Internet of Things are power and cost.

Many of these chips won’t be easily accessible like your smartphone—so batteries in need of daily charging aren’t an ideal option. And if we’re to embed them in potentially billions or tens of billions of things, cost per chip needs to be minimal.

For the last few years, researchers and companies have been focused on solving these two problems. And a new generation of Internet of Things chips is emerging.

The latest is a battery-free chip the size of an ant that costs a few cents to make.

Amin Arbabian, the Stanford assistant professor of electrical engineering who led the team that developed the chip, says, “Cheap, tiny, self-powered radio controllers are an essential requirement for the Internet of Things.”

Arbabian’s contribution is an all-in-one chip. Its antennas are about 1/10 the size of a typical WiFi antenna and can send and receive signals. The onboard processor translates and performs incoming instructions.

And the most amazing part? No battery required. The chip is powered by the signals it uses to communicate.

To be clear, you can only harvest a very small amount of energy from radio waves—but the chip is incredibly power efficient. If it did need a battery, one AAA would run it for a century. But forget Energizer—this thing scavenges light for energy.

Arbabian worked with French semiconductor firm STMicroelectronics to make 100 of his chips as a proof-of-concept. He says they were able to pick up signals, harvest energy from them, and obey commands and send out instructions.

A potential drawback to the chip is that it only works over short distances. But Arbabian doesn’t think that’s necessarily a problem (in part, no doubt, because of the chip’s low cost). He thinks a web of the devices spaced a meter apart would form an interconnected net of digital control—the neurons of a smart house, for example.

Other groups are also working the same problem from a similar angle, like this recent battery-free, radio-powered chip out of the University of Washington. The chip breathes the WiFi signals sent between common devices like laptops, smartphones, and the central router. Although it’s currently limited to about 2 meters, the team says they’re working to extend its range to 20 meters.

Even as these cheap, low-power, battery-free chips work to enable the Internet of Things—they aren’t the only needed solution.

Other challenges lurk down the line, such as building a set of standards that ensures all these smart “things” play nice together, that they work in tandem like a professional symphony instead of your middle school band. And security, too, will be crucial. The last thing you want is a house or city as glitchy with malware and viruses as an early-2000s PC.

But first we need the brain. Cheap chips that live on energy harvested out of the air? Worth keeping an eye on. The world isn’t wired yet—but it will be soon enough.

Image Credit: Stanford


Link: http://singularityhub.com

Logo KW

Battling Big Data’s Tribal Knowledge Problem [1181]

by System Administrator - Wednesday, April 1, 2015, 22:21
 

Battling Big Data’s Tribal Knowledge Problem

by Alex Woodie

One of the practical data-related problems that people struggle with today is just finding the right piece of data hidden among various sources. If you’ve ever asked for a specific data set, only to be told “Ask Suzy in accounting,” you’ve experienced the tribal knowledge problem. The thing is, “Ask Suzy in accounting” doesn’t scale with big data.

The tribal knowledge problem affects all sorts of organizations using all sorts of techniques. No matter whether you use Hadoop, Teradata, or relational database management systems, there’s a level of obscurity surrounding data sources that confounds your ability to get the right piece of data.

Traditional master data management (MDM) and data governance techniques propose to solve the problem with a top-down approach and rigid categorization. But the speed at which data types and data sources are changing confounds this approach, says Satyen Sangani, who worked in Oracle’s data warehouse division before co-founding Alation, which today came out of stealth with a new product aimed at closing the tribal knowledge gap.

“While everybody is focused on problem of how to visualize the data and how to make the compute go faster and how do we store the information more efficiently, we’ve seen little about the fact that there’s just so much more data out there,” Sangani tells Datanami. “There’s a fundamental information relevance problem. How do you get the data when you need it, how do you sort through it, how do you filter down the data to get what you’re actually looking for? That’s fundamentally a problem that we see our customers deal with.”

Alation’s solution to the problem is to leverage the power of machine learning to automatically map the access paths that people naturally take to reach data. Its software installs an agent in each data source that reads query logs and certain metadata components to ascertain which pieces of data are most popular; it also takes a sample of the actual data for indexing purposes. It’s sort of like Google’s PageRank algorithm, but applied within a customer’s own data environment.
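
The log-driven ranking idea can be sketched in a few lines; the toy Python snippet below (an illustration built on assumptions, not Alation's actual algorithm) scores tables from a hypothetical query log by query count weighted by how many distinct people touch each table.

# Toy sketch of ranking data sets by observed usage from query logs
# (illustrative only; not Alation's actual algorithm).
import re
from collections import defaultdict

# Hypothetical query-log entries: (user, SQL text).
query_log = [
    ("suzy",  "SELECT * FROM accounting.invoices WHERE total > 100"),
    ("ravi",  "SELECT customer_id FROM accounting.invoices"),
    ("ravi",  "SELECT * FROM sales.leads"),
    ("maria", "SELECT count(*) FROM accounting.invoices"),
]

TABLE = re.compile(r"\bFROM\s+([\w.]+)", re.IGNORECASE)

users, hits = defaultdict(set), defaultdict(int)
for user, sql in query_log:
    for table in TABLE.findall(sql):
        users[table].add(user)
        hits[table] += 1

# Score = total queries weighted by the number of distinct users.
scores = {t: hits[t] * len(users[t]) for t in hits}
for table, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:4d}  {table}")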

 

Sorry Suzy, but you just don’t scale

This is a novel approach that yields powerful insights for search and discovery of data, Sangani says. “When you go into database system like Oracle or Teradata, it doesn’t tell you any of the information about who’s using it, when it’s used, what’s touching it, and what that data goes to,” he says. “All that information has to be effectively machine learned by either accessing the query logs or by more fundamentally asking people.” In other words, go see Suzy in accounting.

By understanding the larger context surrounding the data, Alation can help business analysts, IT administrators, or business managers figure out which data sets are the most pertinent for their needs. In that manner, it’s sort of like Yelp for business data, Sangani says.

“Yelp gives you a response and says ‘Here’s not only the result, but the reason the result is right is because 15 other people used this data, it was last refreshed on this date, there are 400 other people who are subscribing to this data, and there are 25 reports hanging off it,'” he says. “We give you a ton of context and it’s really that context that makes the search hum.”

 

In addition to building the core search and discovery foundation, Alation’s Postgres-based platform includes a collaboration layer that allows people to work together to better understand and annotate data sources as they explore them. In this manner, Alation hopes to engender more strategic data governance and management initiatives organically at customer sites, as an offshoot (or continuation of) the tactical application of search and discovery.

Alation is coming out of stealth today, but it already has a fairly impressive list of early adopters, including eBay, MarketShare, iHeart Media, Square, and Inflection. At eBay, the product is used by business analysts charged with analyzing petabytes of data sitting across multiple systems.

“What they [eBay] ended up doing was using Alation in order to present not just the data and the context around the data, but also as a knowledge management tool to say, ‘Hey, for calculating churn, here are the ways you can do it,’” Sangani says. “So it really helps the analysts accelerate their work process and work product by helping them find the information and getting them to the data sets and the answers they need.”

Currently Alation supports Hadoop (including HDFS and Hive); data warehouse platforms such as Teradata, Oracle, and IBM’s Netezza; and most relational database systems. The company has NoSQL data stores on its roadmap, as well as possibly object-based file systems.

Sangani doesn’t believe that machines can take the place of humans for data curation. MDM initiatives will continue. But machines can augment humans and make them and the MDM initiatives more powerful and successful.

 

Alation builds a sort of graph that maps the use of data in an enterprise

“The problem with MDM is it’s effectively all top down all human curation,” Sangani says. “Our observation has been that’s just not scalable. It never was scalable previously, which is why I founded this company. But it works even less now.

“If you think about the amount of data that’s being generated, that’s one factor. But the data is also becoming more complicated. On top of that, there’s just very little in resources to manage the data. There has to be more people managing the data, more people accessing and touching the data, to talk about it and describe it. But there also have to be automated techniques in order to enable people to do that management work faster.”

Whether Alation ultimately solves the data tribalism problem remains to be seen. What isn’t up for debate is the bona fides of the Redwood City company’s co-founders, including Sangani, who worked at Oracle for many years; Aaron Kalb, who helped build Siri at Apple; machine learning expert Feng Niu; and Venky Ganti, who developed artificial intelligence at Microsoft.


Link: http://www.datanami.com

 

 

 

 
Logo KW

Bayer y los Genéricos [474]

by System Administrator - Wednesday, May 7, 2014, 20:52
 


Bayer's CEO loses his temper in a discussion about patents. In his apology, he asserts that his aim is to treat everyone.

Marijn Dekkers, CEO of Bayer

The argument with the Indian authorities managed to make Bayer's CEO lose his composure. At stake was the patent on its anticancer drug Nexavar, a latest-generation treatment for liver and kidney cancers. "We did not create this medicine for Indians, but for Westerners who can pay for it," snapped Bayer CEO Marijn Dekkers. Several media outlets have since picked up the remark.

A former colleague of Dekkers, John LaMattina, later gave him the opportunity to apologize. In an article published in Forbes, the former Pfizer executive reproached him for his words, and so gave Dekkers the chance to explain himself. Bayer refers back to those statements.

"I regret that what was a quick response in the context of a discussion has come to light in a way I did not intend. Nothing could be more contrary to what I want and to what we do at Bayer," Dekkers says. As a company, "we want to improve people's health and quality of life, regardless of their origin or income." "In any case, I was especially frustrated by the Indian government's decision not to protect the Nexavar patent that the country's patent authority had granted us. I am convinced of our capacity to innovate, and in an open discussion at the meeting, while expressing my fundamental frustration, I should have made this clear."

"In any case, I maintain that there is no reason for a country to weaken the protection of intellectual property. Without new medicines, people in developing countries and in more prosperous ones alike will suffer," he adds. He concludes by noting that although generics manufacturers have "a crucial role to play," they "do not invest in research and do not produce new cures or treatments, either for developing markets or for developed ones."

The confrontation with India is one more chapter for a major laboratory in a country that rigorously applies a patent law under which it refuses to protect some cutting-edge products (in most cases because it does not allow modifications to the original patent as research on a medicine advances, for example the search for different salts). That is what led to the conflict with Novartis over another anticancer drug (Glivec). With this policy, the Indian government not only shortens the effective life of patents (which forces laboratories to lower prices to compete), but also protects its thriving generics industry.

Médecins Sans Frontières, a very combative organization that uses many generics in its health work, has expressed its wish that "the use of generics be protected." "Over the years to come, there continue to be legal initiatives and international agreements that seriously put the production of generics at risk," said José Antonio Bastos, the NGO's president, in a statement. The research and development of new medicines should not be "driven solely by the pharmaceutical industry's profit motive; there are other elements and other components. The industry should be much more a part of the solution than a part of the problem. At present, in some specific cases, Médecins Sans Frontières is involved in research into new products for diseases that affect patients with very few resources, with contributions from the pharmaceutical industry."

Source: http://sociedad.elpais.com/sociedad/2014/01/23/actualidad/1390497913_508926.html

Background

"We the poor are worth nothing"

Indian cancer patients fear being left without treatment if generics are banned

Ajay Kumar (left), next to a child suffering from leukemia in a hospital in New Delhi. / A.G.R.

Ajay Kumar does not know that the medicine keeping him alive could go up in price more than 13-fold, from about 150 to 2,000 euros a month. He has had chronic myeloid leukemia for a year and controls it with V-Net, a generic of Glivec, the anticancer drug from the Swiss pharmaceutical company Novartis.

If the multinational wins the suit it has filed against India's patent law, the generic will no longer be produced. The same could happen with other medicines. India exports more than half of its generics to poor countries.

Like Kumar, most of those who take the generic do not know how much the decision the court hands down on February 15 will matter in their lives. "They are too poor to find out. Their daily struggle to survive, against illness and hunger, keeps them busy," says Neelima Batra, a coordinator of the Cancer Patients Aid Association (CPAA).

The price of the generics is already insurmountable for people like Kumar. Leukemia changed his whole family's life. A year ago they were poor, but now they live in squalid conditions. He is 40 and was a day laborer in Bihar, a state in eastern India, earning 60 rupees a day (just over a euro). But he began to feel very tired, more and more so, until he could no longer work. At the hospital he was told he had cancer of the blood and was sent to the hospital in New Delhi. He took his wife and two children with him. Now the family's only income is that of their 12-year-old son, who works in a tea shop for 100 rupees a month (less than two euros).

"We can't even afford the little room at the rest house [a government shelter for the sick, which charges about 10 euro cents a day]," he says. For the medicine and food they depend on the CPAA. This NGO buys some generics and receives others from Indian pharmaceutical companies. In this way it helps some 5,000 patients: the ones who could lose access to the drug if India ends up protecting Glivec. "Unfortunately, our resources are limited. If medicines go up, we will be able to care for fewer people," says Batra.

Despite everything, Kumar considers himself fortunate: only slightly more than half of those who need the drug can obtain it. Some 20,000 take generics, and 6,750 take Glivec. Some 25,000 simply have no access. The main problem, according to the association's president, Yogendra Sapru, is that production is insufficient. In late 2003, forced by its commitments to the World Trade Organization, India granted Glivec exclusive marketing rights, and six of the nine generics companies had to stop production.

At the beginning of last year, after a challenge by the CPAA, the Indian government refused protection for the drug on the grounds that it is not an innovation. Since then the companies have resumed production, but "they still cannot cover requirements," Sapru states.

For Novartis, the 6,700 treatments it gives away through oncologists cover 99% of the patients taking Glivec in India. The attempted patent seeks to position the company among the growing middle and upper classes, while the firm asserts that it would not stop giving the medicine away to the overwhelmingly poor majority. This follows a "business logic that at the same time sustains a significant dimension of corporate social responsibility," says Jesús Acebillo, president of Novartis in Spain and head of emerging markets.

For the NGOs, the fact that the Swiss company gives the medicine away is not enough. Its stance would set the precedent for India to grant other patents for other medicines. Unlike cancer patients, those who are HIV-positive are more aware. They know from experience that competition among generics producers has reduced the price of antiretrovirals. First-line drugs are now free, but when they fail and patients need new combinations, the price rises to 170 euros per patient per month: an exorbitant sum for India, where the average person earns about 386 euros a year.

This is the case of Nareinder Tandon, who is HIV-positive. At first things were easy: he got the drugs for free. But two years ago he had to start taking newer ones. "I will die if the government grants the patents. But at the end of the day, we poor patients are worth nothing," he says.

Threatened Drugs

The lawsuit brought by the Swiss multinational Novartis against India's patent law goes beyond the protection of a single medicine. The country's generics laboratories have at least 11 other products in their portfolios that they would have to stop producing if the suit prospers. Of these, 10 are antivirals used as second-line treatment against HIV, and the remaining one fights cytomegalovirus, an infection associated with AIDS. If the suit prospered, these generics would be left off the market. In theory, the World Trade Organization allows a country to break patents in the face of a health crisis, but that case has not yet arisen. Pressure from NGOs has led laboratories to give away their products, or sell them cheaply, to the worst-affected countries.

Source: http://elpais.com/diario/2007/02/01/sociedad/1170284410_850215.html

Novartis demands that India's patent law be repealed as "unconstitutional"

The former Swiss president and the Nobel laureate Desmond Tutu ask the laboratory to withdraw the suit

Novartis's case against India's patent law opened yesterday with a new argument from the pharmaceutical company: "The law is unconstitutional." The trial, taking place in Chennai (formerly Madras), will decide whether the generics of Glivec, the drug that has revolutionized the treatment of myeloid leukemia, can continue to be produced in India, or whether Novartis is granted exclusive production and marketing rights. NGOs warn that much more is at stake: whether India can continue supplying generics to poor countries.

Until now the multinational had argued only that the law violates World Trade Organization (WTO) rules by bypassing the protection of intellectual property, as Soli Bhushan, the lawyer leading Novartis's defense and a former adviser to the Indian government on constitutional matters, explained to EL PAÍS.

India's patent law came into force in 2005, at the WTO's urging. It establishes that products registered from 1995 onward can be patented. The problem with imatinib, according to an official Indian document, is that it has two registration dates: one in 1993, when the pure substance was registered, and another in 1998, when a derivative was filed (in this case a salt, the mesylate), which was the one finally brought to market.

Substantial change

The NGOs and the patent office consider that there is no substantial change between the two formulations, so the older one must be taken into account, which leaves the molecule off patent in India. This decision rests on section 3(d) of the law, which establishes that "salts, esters, ethers [or other new chemical presentations of a substance]" may not be patented again "unless they have significantly different properties" than the original product. Such changes between the initially patented molecule and the one that is marketed are frequent, and laboratories defend them as "incremental innovation" that justifies a new patent, reports Emilio de Benito.

Novartis's lawyer rejects this interpretation. Bhushan previewed his position yesterday: "For the patent office, by its own criteria, to deny exclusivity to a medicine because it does not consider it an innovation is irrational." According to the lawyer, this rule entails arbitrariness.

The two judges asked the government to present its defense at the coming hearings. The official representatives confined themselves to listening to the arguments and said that on Monday they would present evidence to defend the law's constitutionality.

From the laboratory's office in Basel (Switzerland), a company spokesman indicated that "the 1993 patent is for synthesizing the imatinib molecule, and represented the first step in the development process of Glivec. It is important to understand that this molecule could not be taken by patients in pill form. The mesylate salt therefore had to be developed and, subsequently, its crystalline form, so that patients could receive treatment. This latter form is the medicine Glivec," and it is the one that should be patented, being the one registered in more than 40 countries.

But this is a battle beyond Glivec. Another 9,000 medicines are waiting to be granted exclusivity. Of these, 11 are for treating HIV, and there are several similar cases: molecules predating 1995 with later modifications.

Those asking Novartis not to challenge the law were joined yesterday by former Swiss president Ruth Dreifuss and Archbishop and Nobel Peace Prize laureate Desmond Tutu. "People, not profits, must be at the center of the law," said Tutu. The petition is also supported by the German cooperation minister, Frank-Walter Steinmeier; the former UN special envoy for HIV/AIDS in Africa, Stephen Lewis; and the new director of the Global Fund to Fight AIDS, Tuberculosis and Malaria, Michel Kazatchkine.

Source: http://elpais.com/diario/2007/02/16/sociedad/1171580403_850215.html

What is at stake in the battle over Glivec

The standoff between Novartis and the Indian government over the drug's patent exposes multiple paradoxes

A poor global power

As in the film Babel, many lives and many hopes are connected, sometimes unknowingly, by a single thread that many pull in opposite directions. On February 15, the High Court of Chennai, in India, will have to rule on one of those threads: the one connecting the most advanced medical research laboratories with the humble cots of hospitals in Nigeria and Cameroon. The Swiss multinational Novartis has filed a suit against the new Indian patent regulations under which it has been denied the intellectual property rights to one of its star drugs, Glivec.


Organizaciones como Oxfam-Intermon y Médicos sin Fronteras creen que si Novartis gana la apelación y se le reconoce la patente, todo el sistema actual de obtención de genéricos baratos para el tercer mundo se vendrá abajo. Novartis reclama su derecho a defenderse de una legislación que considera lesiva, y argumenta que si pierde la apelación, el sistema de protección de la propiedad intelectual, del que depende la innovación en medicina, se hundirá. Aunque ése es el pulso de fondo (véase EL PAÍS de ayer), el caso tiene muchos hilos, entre ellos el que maneja la cada vez más pujante industria de genéricos india. ¿A quién beneficia el pleito? ¿Quién sale perjudicado? ¿Depende de su resolución el acceso de los pobres a este y otros fármacos? Ninguna de las respuestas es unívoca porque este caso es un excelente paradigma de las contradicciones y complejidades del mundo globalizado.

El país, el pleito. Tras un periodo de fuertes presiones, en 1995 India adoptó los acuerdos de la Organización Mundial de Comercio que regulan los derechos de propiedad, los llamados acuerdos TRIPS, por los que se obligaba a reconocer a partir de 2005 las patentes de productos, incluidos los medicamentos. La patente de un fármaco da derecho al laboratorio que lo ha obtenido a comercializarlo en exclusiva durante 20 años. Vencida la patente, el fármaco puede ser copiado como genérico. Hasta 2005, la industria farmacéutica india había podido fabricar sin problemas copias de fármacos protegidos por patente, e incluso exportarlos a otros países. Con la nueva ley se reconocían las patentes de los productos surgidos a partir de 1995 y los anteriores que fueran verdaderas innovaciones.

But TRIPS granted governments "certain flexibility" to reconcile the patent system with public health objectives. This flexibility took concrete form in the 2001 Doha Declaration, which allowed poor countries to obtain patented medicines at lower prices under what is called a "compulsory license," justified by the country's state of need. The declaration specified that, to be covered by a patent, a medicine had to be "new, innovative and useful." This was intended to head off the ploys of some laboratories that try to stretch out patents by passing off simple improvements to the original drug as something new.

Legal change. The law enacted in India met these criteria, but at the end of 2004 there was a change of government, and the new one amended the law to give the Patent Office the power to decide when a product is new and when it is not. Exercising that power, the office refused to recognize the Glivec patent, on the grounds that the product was not new. It based this on the fact that the product submitted in India was very similar to the one Novartis had patented in the United States when clinical trials of the active ingredient, imatinib, began.

The drug. Is Glivec really a new product? It is. Glivec was approved as a medicine by the United States Food and Drug Administration only five years ago. It is one of the most significant therapeutic advances of recent years: it was in fact the first of the new drugs to emerge from the genetic revolution, the so-called targeted therapies.

The sequencing of the genome has made it possible to identify some of the genes involved in the molecular alterations of different tumors. The new drugs act on the specific genetic alteration of the tumor in question, so they do not have the adverse effects of chemotherapy and are much more effective. They are such an important innovation that they have opened a new phase in the fight against cancer: that of personalized treatments. Glivec's international patent expires in 2016. What the Indian patent office has done is compare the current, improved product with the original one, which obtained its patent before India was obliged to respect it. Novartis believes this is a legal ploy, which is why it has not focused on proving that the current form of Glivec is an innovation over the initial product, but has instead decided to challenge the law that grants the Indian patent office the discretion to judge whether a medicine is sufficiently new, innovative or useful. In doing so, it forces a clarification of how the Doha agreements should be applied in a country that can no longer be considered poor and that is one of the leading producers of generics.

Only for leukemia. Glivec matters more for its innovation than for its overall contribution to curing cancer, since the drug does not treat cancer in general, as has sometimes been claimed, but only one relatively uncommon type of leukemia: Philadelphia chromosome-positive chronic myeloid leukemia. For these patients, however, Glivec marks a before and an after: before, they died without remedy; now, they are cured.

Access for the poor. Like all new medicines, Glivec is extraordinarily expensive. Pharmaceutical companies try to recoup through their innovative products the enormous investment that researching new drugs requires, including the cost of the ones that fail, plus heavy advertising and promotion spending in rich countries. As well as a therapeutic target, Glivec is thus a colossal commercial target, and it has already become the Swiss multinational's second best-selling product after the antihypertensive Diovan.

The price difference between the patented product and the generic is abysmal: a year of treatment with Glivec costs close to 27,000 dollars, against 2,700 for treatment with the generic. If Novartis loses the case, will the poor have access to this medicine? No. The cost of the generic is five times the average salary, and India has no public health system that covers medicines, as Spain does. Novartis maintains that if the poor have access to Glivec, it is thanks to its free dispensation program, through which 6,500 patients in India and 18,000 worldwide receive the drug. Its corporate social responsibility effort includes an aid program to which it devoted 750 million dollars in 2006, 2% of its revenue.

The precedent. In India, Novartis is defending patent rights that are recognized in every advanced country. Glivec is not really a product of vital importance, considering that millions of people still die around the world from malaria, tuberculosis and other diseases that have long had cheap treatments. But with this lawsuit the company has placed itself at the eye of the storm of one of the great challenges of our time: how to guarantee the poor access to therapeutic advances.

The NGOs' view is that if Novartis wins the case, it will set a precedent. At this moment there are another 13 drugs in a position to claim patent recognition, among them some second-generation antiretrovirals that India now exports to other countries to treat AIDS and that in many cases are administered through the NGOs themselves, because governments lack the capacity to do it.

Are goodwill programs, whether run by multinationals or by NGOs, enough to give the poor access to medicines? Evidently not. Whose task is it, then? And with what instruments?

Source: http://elpais.com/diario/2007/02/06/salud/1170716401_850215.html

India refuses to patent two HIV drugs

The decision opens the door to generic versions of the medicines

Two of the antivirals most widely used in the world today to treat HIV infection, tenofovir and darunavir, will soon have generics on the market. The Indian authorities' decision to deny them patents allows other laboratories to manufacture them, presumably at lower prices.

The measure is not only humanitarian. It is also a way of securing business for Cipla, which brought the case to court, according to the online edition of the journal Nature, and for the other large Indian laboratories that specialize in manufacturing generics (they copy the composition and can sell more cheaply because they do not have to amortize research costs).

Tenofovir, above all, is a first-line drug against HIV. It inhibits the action of one of the enzymes (the small molecular machines that carry out processes inside cells) that the virus needs in order to reproduce: reverse transcriptase. According to the publication, the manufacturer, Gilead, is still weighing whether to continue the legal battle. Nor has there been any reaction from Tibotec, the manufacturer of darunavir (much less widely used, as it is considered a second-line option).

Naturally, the decision has been warmly welcomed by the NGOs that care for people with HIV around the world. In fact, for the first time, some of them, such as Médecins Sans Frontières, took part in the legal proceedings.

India has become the standard-bearer for the early use of generics. Under the WTO's TRIPS agreements, a country may override a drug patent if it declares a public health emergency, and there is no doubt that in many countries the HIV that causes AIDS (there are 33 million cases in the world) qualifies. But those are mostly poor countries with no capacity to manufacture a drug, much less to resist pressure from the countries where the pharmaceutical companies are based. That is why it is emerging powers such as India and Brazil that can afford to challenge the big multinationals. The multinationals, for their part, often grant licenses or offer their products at low prices in poor countries: to head off generic competition there (in countries that could not afford the originals anyway), above all to keep those cheaper products from reaching the rich countries, where the real business is done, and to discourage generics makers from having their production lines ready to launch the moment the patent expires. In this way they gain a period of exclusivity in which to recoup the enormous investment (estimated at more than 800 million euros) that bringing a drug to market costs.

Source: http://sociedad.elpais.com/sociedad/2009/09/04/actualidad/1252015208_850215.html

Logo KW

Bell Labs [49]

by System Administrator - Thursday, 2 January 2014, 20:37
 

Bell Laboratories (Bell Labs) comprise several scientific and technological research centers located in more than ten countries. Their origins go back to the Bell Telephone Laboratories, founded in 1925 in the state of New Jersey by AT&T.

Source: http://es.wikipedia.org/wiki/Laboratorios_Bell

Logo KW

Benefits and Business Value of An Interactive Intranet [1418]

by System Administrator - Tuesday, 15 September 2015, 20:54
 

Benefits and Business Value of An Interactive Intranet

Today, too many companies suffer from traditional intranets that are deserted landscapes of unconnected resources, one-way communication and little, if any, collaboration. It's time to combine the best of traditional intranets and ESNs into a single interactive intranet that can empower your workforce to deliver real business value. Discover how with the eBook The Future of Intranets and Enterprise Social Networks.

 

The Future of Intranets and Enterprise Social Networks

Why Businesses Need an Interactive Intranet

Today, too many companies suffer from traditional intranets that are deserted landscapes of unconnected resources, one-way communication and little, if any, collaboration. Across the spectrum, other companies are dealing with the chaotic collaboration that enterprise social networks unleash. It's time to combine the best of both of these technologies—and leave behind the worst—into a single interactive intranet, one that can empower your workforce to connect, communicate and collaborate to deliver real business value.

It's time to combine the best of traditional intranets and ESNs into a single interactive intranet, one that can empower your workforce to deliver real business value.

This eBook will help you determine what can be done to update your organization’s intranet to make it an interactive one, or implement an interactive intranet for the very first time.

In this e-book, written by Larry Hawes, Principal Analyst at Dow Brook Advisory Services, you will learn:

  • How intranets have evolved over time and why they’re no longer delivering value in today’s workplace
  • The three key interactions of an interactive intranet, and how they deliver more value than traditional intranets and ESNs
  • The benefits and business value of an interactive intranet to both IT and business stakeholders
Make your Intranet truly Interactive: Discover how with this E-Book.

Please read the attached eBook.

Logo KW

BEST PRACTICES FOR A SEAMLESS OMNICHANNEL CUSTOMER EXPERIENCE [1522]

by System Administrator - Saturday, 17 October 2015, 18:29
 

BEST PRACTICES FOR A SEAMLESS OMNICHANNEL CUSTOMER EXPERIENCE

by Genesys

It's natural to start designing an omnichannel customer experience (CX) with a vision. For example, do you envision creating a 360-degree view of the customer to accurately predict and anticipate customer needs? Or do you see creating an effortless experience that stays consistent as it transitions across multiple touchpoints and provides both added value and access to a live agent?

Once they have a vision, many companies jump right to functional requirements, skipping the important step of defining the actual customer experience. It is critical that you consider how to design a memorable experience in the customer lifecycle that maps to the needs of your particular customer segment and aligns to your brand values.

Here are six best practices to help you design and implement that experience.

Please read the attached eBook.

Logo KW

Best Practices for Security Monitoring [1220]

by System Administrator - Wednesday, 6 May 2015, 16:03
 

Best Practices for Security Monitoring

...You Can’t Monitor What You Can’t See

Most security professionals focus on policy, training, tools, and technologies to address network security. Security tools and technologies, however, are only as good as the network data they receive for analysis. With mounting Governance, Risk Management and Compliance (GRC) requirements, the need for network monitoring is intensifying.

A new technology can help – the Network Monitoring Switch. It provides exactly the right data to each security tool and enables monitoring with dynamic adaptation. The Network Monitoring Switch resolves issues security teams have in getting visibility into the network and getting the right data for analysis. This whitepaper, targeted at security professionals, will address network visibility and will focus on:

    • Monitoring inclusive of virtualized environments
    • Monitoring 10GE/40GE networks using existing 1GE/10GE security tools
    • Providing automated responses for adaptive monitoring
    • Improving incident remediation
    • Improving handling of sensitive data
    • Providing granular access control so the entire monitoring process is tightly controlled

Please read the attached whitepaper.

Logo KW

Better Career Negotiator [789]

by System Administrator - Tuesday, 26 August 2014, 15:17
 

8 Tips to Be a Better Career Negotiator

 

By Rich Hein

Whether it's job hunting or asking for a raise, many of us don't like to negotiate. But as professionals it's become a required skill, and you may only get one chance to make your case. Learn what it takes to be a confident negotiator.

A job search and a salary negotiation are often the most critical times when you have to speak up for yourself, because no one else will. Most of us negotiate all the time outside of work, like when buying a car or a home, for example. Preparation, confidence and thoughtfulness are all required to build the right negotiating strategy, and the good news is that, like any skill, the more you do it the better you will get.

At some point in our careers we all have to face certain obstacles like reassessing our skill set, searching for a new position, or plotting a career path. Negotiating skills fall into this category, and they don't necessarily come naturally to all of us. Many of us have to work at being good at it. To help you get up to speed, CIO.com worked with career strategists and hiring managers to help you avoid the common pitfalls that befall many IT pros as they negotiate for better compensation packages and new jobs.

What Do You Want?

Whether it's negotiating the finer points of a new job with a hiring manager or working with your current boss to increase your salary, before you get started it's important to take stock of what is truly important. We're all different. For some it's all about compensation, while others want more vacation or a better work-life balance to build family memories. Our experts agree that most anything is negotiable: working remotely, immediate 401k contributions, commitment to ongoing training and education, or in some cases relocation reimbursement. Critically evaluate your needs and consider the entire compensation package before deciding more money will make you happy. Roy J. West, Founder and CEO of The Roy West Companies, advises his clients that they have options but need to be sure they know what they are seeking, saying, "…always remember, 'Talent' has options. Those who are immensely talented have options. So, have crystalline-like clarity around what you are looking for in a company, a manager, a salary, and then go find them."

Know Your Worth

Many candidates go into an interview without being able to articulate what makes them a stand out candidate or what value they bring to the job. To negotiate with confidence you need to know what differentiates you. "The biggest mistake I see is that IT professionals don't go into the negotiation with an understanding of how they add value to the business. When they recognize how their work fits into the big picture, and are able to supply metrics to back it up, it helps them quantify the ROI they offer. In essence, they need to show that they are a "profit contributor" as opposed to being part of an expense center," says Stephen Van Vreede, Solutions Architect and founder of ITTechExec, a company that offers services focused on career lifecycle management and personal branding.

Know the Industry and Company

If you're after a job with another company, then you need to put your research cap on and find out all you can about the company, what their business is all about, the position you're applying for, who the hiring manager is and whatever other information you can glean. The bottom line is that the better informed you are, the more confident you will feel when negotiating.

There are many places you can go to learn about a company's culture and what past employees think of it, such as Glassdoor.com and Salary.com. But don't neglect sources like press and product releases, Google News, and trade sites and magazines. "Business-savvy IT pros tend to conduct more research on market trends and utilize that information to negotiate offers more actively. These hires are likely to have a deeper understanding of the value of their skillsets and use that to initiate a negotiation conversation," says John Reed, Senior Executive Director with Robert Half Technology.

For more information on how to research companies you're interested in please read, Top 8 Sites for Researching Your Next Employer.

Listen at Least as Much as You Speak

Hemingway once wrote, "When people talk, listen completely. Most people never listen." If you have taken the time to research and prepare, you will be eager to share, but listening is just as important. Listen and ask intelligent questions to gain further insight into what their actual needs are, and you may find common ground or an opportunity.

Negotiating Multiple Points

If you should find yourself in the position where you need to negotiate several points, like relocation reimbursement, salary and remote hours, make yourself a list and be prepared with a thoughtful answer should any one of them be denied. "…be strategic in your approach. Make sure you cohesively address every item that you want to discuss. Have some alternatives mentally prepared. Know what you will ask for instead if they deny one of the items," says Reed.

Don't Take it Personally

Try not to get too emotional throughout the process. Sometimes it won't work out and that is part of the journey. Instead, experts say, focus on what separates you from the pack and the value that you will add to the role. "Remember that this isn't a personal affront. Your approach should be to focus on your value-add. Regardless of your position, you should be bringing more value to the company than they are paying you in compensation," says Van Vreede.

Make Sure the Timing is Right

"There is an aspect of "timing" when asking for a salary increase, so it's important to be mindful of the big picture," says Reed. No one is saying that you don't deserve the raise, but consider for a moment that you know your company has a hiring freeze and there was a workforce reduction recently. This is a good example of a potentially bad time to negotiate for a salary increase. If you find yourself in this situation you may consider dusting off your resume. If nothing else it will give you confidence to know you are lining all your ducks up in a row.

Get it In Writing

Most experts agree that getting it in writing is a must, whether you're an IT contractor, a new hire or getting promoted. "I've seen too much over the years, so I would recommend getting the final offer in writing regardless of your employment status," says Van Vreede. If an organization won't give it to you in writing, then you probably want to reconsider their offer.

Finally, read through the entire agreement before signing. This isn't a software agreement; it's the blueprint for your professional life, and you should take the time to examine it.

Final Thought

Remember that a successful negotiation leaves everyone feeling like they have won something. Most times that means compromise for everyone in some fashion.

 

Rich Hein — Managing Editor

Rich Hein is Managing Editor for CIO.com. He covers IT careers. Follow everything from CIO.com on Twitter @CIOonline, on Facebook, and on Google +.

Link: http://www.cio.com

Logo KW

Beyond the Blueprint [837]

by System Administrator - Thursday, 4 September 2014, 14:07
 

A GENETIC WEB: Interacting species, such as this Gulf fritillary butterfly (Agraulis vanillae) feeding on goldenrod blooms at the edge of a salt marsh on St. Simons Island, Georgia, have an intimate relationship: the genes of one are likely influencing the phenotype of the other | © BEACHCOTTAGEPHOTOGRAPHY/ISTOCKPHOTO.COM

Beyond the Blueprint

In addition to serving as a set of instructions to build an individual, the genome can influence neighboring organisms and, potentially, entire ecosystems.

By Mark A. Genung, Jennifer A. Schweitzer, and Joseph K. Bailey

The relationship between an individual’s phenotype and genotype has been fundamental to the genetic analysis of traits and to models of evolutionary change for decades. Of course, scientists have long recognized that phenotype responds to nongenetic factors, such as environmental variation in nutrient availability or the presence of other, competing species. But by assuming that the genetic component of a particular trait is confined to your genes and only yours, scientists overlooked another important input: the genes of your neighbors.

Take field crickets as an example. To identify potential mates, female crickets listen with ears on their forelegs to the males’ songs, produced by the rubbing together of their forewings. Some males emit series of long, trill-like chirps, an advertisement of their fitness that females find very attractive. Songs dominated by short chirps have less pull. But female crickets don’t evaluate songs on their absolute merits; instead, their preferences are influenced by the songs they’ve heard in the past. Female crickets previously exposed only to songs with long chirps are less likely to respond to short-chirp songs than females that have been exposed to the songs of less-fit males already. The insects appear to be retaining information about available males and then using that information to assess the attractiveness of suitors.1

Choosing mates amidst competition is ubiquitous among animals, but the logistics of how such choice evolved is less straightforward: because male song type is largely determined by genetics, female mating behavior is under the influence of male genes. In other words, the females’ decision-making behaviors evolved based on the genetic composition of the entire social group. Such indirect genetic effects (IGEs), also called associative effects or extended phenotypes, are common and have profound implications for evolution. Beyond learning and behavior in social species, IGEs affect how organisms develop, how productive plants are, and whether individuals are attacked by predators, herbivores, and disease.

In some sense, examples of IGEs are intuitively obvious. No individual exists in a vacuum, isolated from the influences of others it encounters. Yet for decades, many prominent evolutionary theories assumed that all of the genetic influences on an individual’s phenotype came from genes within itself. What the field needs now is a clear framework that recognizes IGEs as additional factors in a population’s evolution, allowing for more-accurate predictions about how biological systems will change in the future. The genetic makeup of an individual not only influences phenotypes of individuals in its own species, but can have far-reaching effects on organisms at different trophic levels within its food web, impacting the dynamics of entire ecosystems. The role of commensal microbes in human health is a prime example of how IGEs can transcend species boundaries.

How IGEs affect evolutionary dynamics remains very much an open question. Recent theoretical strides in this area show how IGEs can greatly accelerate evolutionary change and hint at their hitherto unsuspected roles in such varied phenomena as animal mating rituals, the development of human agricultural systems, species range shifts in response to climate change, and even altruism. The influences of IGEs on diverse evolutionary processes are undoubtedly more complicated than most models can capture, and biologists must think creatively about new phenomena that IGEs may drive.

The social network

House cricket | © VITALII/ISTOCKPHOTO.COM

Examples of IGEs in social learning and behavior, such as cricket mating song preferences, are common. For example, contests between male red deer (Cervus elaphus) determine the population’s dominance hierarchy, such that the fitness of one male affects the outcome of contests with his opponents. Male dancing fiddler crabs (Uca terpsichores) build shelters that differ in quality, some of which provide better protection than others. Shelter quality determines which males are selected by female fiddler crabs, and the fitness of the female is in turn affected by the quality of the shelter, which is partly under the control of the male’s genes.

As a highly social species, humans are no exception. More than 50 years ago, Albert Bandura of Stanford University and his colleagues conducted the now-famous Bobo doll experiment, in which they exposed 3- to 6-year-old children to three different scenarios: an adult peacefully playing with toys while ignoring a weighted, inflatable toy called Bobo (that returns to a standing position after being disturbed); an adult yelling and striking the doll; or no adult present at all. When the children were given the choice of toys, they tended to mimic the actions of the adult they had observed: those who had seen an adult aggressively playing with the doll did the same, but played quietly with the other toys.2 Because behavioral traits such as aggressiveness are partially determined by genetics, this experiment suggested that the phenotypes of adults, and, in part, their genes, are a factor shaping the social choices made by children.

Recent research has also shown that when species interact, IGEs can have far-reaching effects. Last year, we published an article about two goldenrod species (Solidago altissima and S. gigantea) whose genomes affected not only neighboring plants, but also associated pollinators, and even the rate at which nutrients in dropped leaves are recycled through the ecosystem.3 We grew genetically identical individuals of each goldenrod species with neighbors of the same or different genetic identity to examine how unique combinations of plants fared. As expected, some clones were more productive than others—both in terms of above- and belowground biomass growth—and the more productive clones received more visits from pollinators. More surprisingly, clones predictably affected the productivity and chemical composition of their neighbors. For example, the neighbors of a particularly productive S. gigantea clone always devoted more resources to belowground biomass, a shift that was accompanied by higher levels of the complex polymer lignin. Increased lignin production by the plants neighboring the productive S. gigantea clone, in turn, made the neighbors’ leaf litter less attractive to microbial decomposers, such that it look longer for those nutrients to be cycled. The effects on pollinators were less clear: focal plants had more pollinators when grown next to particular genotypes, but a subsequent analysis indicated that it wasn’t simply due to plants producing more biomass. It’s possible that differences in the timing of floral displays between neighbors influenced visitation.

Also last year, Darren Rebar and Rafael Rodríguez of the University of Wisconsin–Milwaukee found additional evidence to support the idea that IGEs could have impacts across an ecosystem. They explored the interactions of treehoppers (Enchenopa binotata) and the nannyberry tree (Viburnum lentago), which serves as the insects’ host plant and primary environment.4 On the population level, as evolution acts on the plants’ genes, the treehoppers’ environment changes. And because each plant is genetically different—yielding larger or smaller leaves that provide better or worse hiding places for the arthropods, for example—a different mix of plants may select for different traits in their associated arthropod community.

By assuming that the genetic component of a particular trait is confined to your genes and only yours, scientists overlooked another important input: the genes of your neighbors.

For this experiment, the researchers raised a random sample of treehoppers on several clones of V. lentago and found that different clonal lines had varying effects on the arthropods’ traits related to mating and reproduction. Once again, the influence of the plants’ genomes on the treehoppers may seem obvious in retrospect, but this study provides some of the most direct evidence to date that IGEs operate between trophic levels of an ecosystem. This also reinforces the notion that IGEs are ubiquitous in natural systems, but are not always recognized as such. Moreover, this example illustrates an important consequence of IGEs: when an organism’s environment has a genetic component, that environment itself can evolve.

A changing landscape

 

SHARED GENETIC EFFECTS: Pollinators, such as this honeybee on a thyme flower, are under the influence not just of their own genomes, but the genes of their host plants as well. The genetic underpinnings of a plant’s nectar productivity, for example, could affect an insect’s physiology and behavior.| © GREGORY DUBUS/ISTOCKPHOTO.COM

One ecosystem in which IGEs undoubtedly have a major influence is the human gut. Researchers are developing microbes as tools to combat obesity, for example, and some are already used in skin-care products and for the treatment of Clostridium difficile infections. In each case, researchers are exploiting IGEs, targeting the genetic composition of the gut microbial community to effect a phenotypic change in the patient.

Recently, perturbations in the gut microbiome have been linked to cardiovascular disease, the leading cause of death worldwide. Red meat contains a molecule called carnitine that, when broken down by gut microbes, becomes trimethylamine-N-oxide (TMAO), a compound that causes plaque to build up and clog arteries. In April 2013, Stanley Hazen of the Cleveland Clinic and his colleagues enlisted omnivorous and vegan human volunteers to eat red meat and then tested differences in the activity of their gut microbes. The gut microbes of vegans didn’t break down carnitine into TMAO as fast as the bacterial community of meat eaters, suggesting that the function of the gut microbiome has evolved in response to host diet.5 These changes to the gut bacterial community have, in turn, affected people’s ability to digest certain foods, with implications for their health, such as susceptibility to heart disease.

We should not be surprised if future research continues to affirm the relationship between the genetic contents of our commensal bacterial communities and our own health. (See “The Body’s Ecosystem,” The Scientist, August 2014.) Indeed, humans have long recognized that altering the microbial composition of the gut may be beneficial. The idea of using fecal material to treat digestive issues dates as far back as the 4th century, when ancient Chinese practitioners created soups that included fecal material from healthy individuals for those suffering from digestive problems. Modern methods in fecal bacteriotherapy are more sophisticated in how the material is transferred, but the basic principle is the same. Use of these procedures in recalcitrant cases of C. difficile infection is approved as a treatment and has already produced positive outcomes for patients: Els van Nood of the University of Amsterdam, along with a group of other researchers, showed in 2013 that fecal transplants could be more effective at treating C. difficile than the antibiotic vancomycin.6

IGEs also may play a role in the migration of species to different geographic ranges as the Earth’s climate continues to change. In a paper published earlier this year, we showed that plants typically found at high elevations grow better when near other high-elevation individuals of the same species, harboring different high-elevation-adapted genotypes. The same was not true of low-elevation plants, which do not perform better in the presence of other low-elevation varieties.7 Mathematical models published earlier this year by one of us (Schweitzer) confirmed this by showing that plant-soil interactions—through which plants alter bacterial and fungal soil communities with subsequent impacts on plant fitness—can lead to soil properties that favor some individuals over others, thereby selecting for different plant and microbial traits over time.8

In addition, the feedback that can occur as a consequence of IGEs may affect the rate and direction of evolution, possibly speeding the process of local adaptation to novel environmental conditions. (See “Seeds of Hopelessness,” The Scientist, August 2014.) Global climate change is causing significant shifts in environmental conditions worldwide by altering temperature and precipitation patterns and fragmenting once-intact habitats. These changing environments are a hotbed for IGEs; we just have to know where to look and how to detect them.

Remodeling evolution

When it comes to modeling IGEs to explore their influence on evolution, scientists must develop a framework that is both conceptual—drawing from examples across scientific fields to develop a cohesive theory of how IGEs influence genetic change—and mathematical, allowing researchers to make precise, testable predictions. Although the idea that social interactions could accelerate evolution dates back to 1915, interest in the evolutionary importance of IGEs surged in the late 1950s, and some four decades later, scientists finally began writing mathematical models to formalize these predictions. In particular, papers from Allen Moore, then based at the University of Kentucky, and Jason Wolf, first at Kentucky and later at Indiana University, developed evolutionary models that incorporated IGEs and quantified how these affected rates of evolution. Moore and Wolf showed that for certain traits, such as behaviors that involve reciprocal responses like escalating aggression between individuals, IGEs can greatly accelerate the rate of evolutionary change.9,10 Because traits in a focal individual are also part of the “environment” for an interacting individual, when the focal individual’s traits change, so does the environment for an interacting individual—and vice versa. When traits of the focal and interacting individuals both change in the same direction, a positive feedback loop forms that accelerates evolutionary change.
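The core of this interacting-phenotypes framework can be stated compactly. The sketch below is our own gloss in standard quantitative-genetics notation, not a formula reproduced from the cited papers; z is a phenotype, a an additive genetic effect, e an environmental deviation, and \psi the interaction coefficient:

% focal individual i and social partner j each shape the other's phenotype
z_i = a_i + e_i + \psi z_j, \qquad z_j = a_j + e_j + \psi z_i

% solving the pair makes the feedback explicit
z_i = \frac{a_i + e_i + \psi\,(a_j + e_j)}{1 - \psi^2}

As \psi approaches 1 the denominator shrinks, so genetic effects in either partner are amplified; this is the positive feedback loop described above, and it is why IGEs can accelerate the response to selection.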

Evolutionary models that incorporate IGEs have developed to the point that they can inform how some societies have developed egalitarian or altruistic tendencies over time. Most animal populations are composed of genetically diverse organisms, some weaker and some stronger, resulting in the adoption of rigid dominance hierarchies. But this is not a universal structure of animal societies. Diverse species, from invertebrates to humans, have complex social structures in which individuals make sacrifices for the good of the group, or for the good of others.

 

IT TAKES A VILLAGE: Indirect genetic effects (IGEs) are ubiquitous in all ecosystems. Animals that feed or live on other species, for example, are under the influence of that species’ genes. And looking closely at any ecosystem will reveal hundreds more IGEs, which are overlooked by the majority of evolutionary models. It’s time for researchers to consider all the genes in an environment when exploring the evolution of a single species, rather than simply the genome of the focal organism.

According to mathematical models, the answer may involve genetically determined, group-level aversion to inequity that counteracts the tendency of strong individuals to demand tribute from the weak. Sergey Gavrilets of the University of Tennessee, Knoxville, modeled such a system in which the evolution of “helping” behavior was an emergent property of the model.11 In this model, a group of imagined organisms of the same species interact and are ranked from strongest to weakest. When an individual finds a resource, a competitor may demand that resource, and the finder must then decide whether to give in or resist. Gavrilets’s model assumes that there are significant risks associated with losing a contest for possession of the resource, suggesting that the weaker individual will often give in. But the distribution of resources throughout a large group affects every individual in that group, not just a bully and its victims. If the demanding individual has more resources, and is therefore likely to be stronger than the resisting individual, it can be beneficial for an observer to intercede on behalf of the resisting individual, provided that the risk of injury or other costs are not prohibitive. In other words, in Gavrilets’s model, the selfish impulse turns out to be an individual-level aversion to inequality—a desire that no one else be stronger than oneself. Such behavior represents an IGE through which the ultimate share of resources in a population depends on the simultaneous expression of genes in many interacting individuals.

When an organism’s environment has a genetic component, that environment itself can evolve.

Gavrilets’s model indicates that helping behavior can evolve in just 1,000 generations, a very short time span in the history of human culture, and can ensure a more equitable, although not perfectly even, distribution of resources even in the presence of oppressive individuals. Moreover, because the tendency of an individual to participate in a conflict is genetically determined, these “escalation thresholds” can evolve over time.

Such helping is one of many examples of behavior that could be described as moral decisions, a topic that has attracted the attention of scholars for millennia. Some of the most convincing work on the evolution of morality suggests that it is not only the intelligence of humans that promotes moral behavior, but the demands of social groups in which humans find themselves embedded—the expectation to behave a certain way and to follow certain rules. If this is indeed the case, many of these social demands likely evolved due to impacts of IGEs. Whether we study crickets, microbes, plants, or humans, IGEs are important components of many biological systems and key drivers of evolution of all life.

Mark A. Genung is a postdoctoral researcher in the lab of Joseph K. Bailey, who is an associate professor of ecology and evolutionary biology at the University of Tennessee, Knoxville. Jennifer A. Schweitzer is also an associate professor in the department.

References

  1. N.W. Bailey, M. Zuk, “Field crickets change mating preferences using remembered social information,” Biol Lett, 5:449-51, 2009.
  2. A. Bandura et al., “Transmission of aggression through imitation of aggressive models,” J Abnorm Soc Psych, 63:575-82, 1961.
  3. M.A. Genung et al., “The afterlife of interspecific indirect genetic effects: genotype interactions alter litter quality with consequences for decomposition and nutrient dynamics,” PLOS ONE, 8:e53718, 2013.
  4. D. Rebar, R.L. Rodríguez, “Trees to treehoppers: genetic variation in host plants contributes to variation in the mating signals of a plant-feeding insect,” Ecol Lett, 17:203-10, 2014.
  5. R.A. Koeth et al., “Intestinal microbiota metabolism of l-carnitine, a nutrient in red meat, promotes atherosclerosis,” Nat Med, 19:576-85, 2013.
  6. E. van Nood et al., “Duodenal infusion of donor feces for recurrent Clostridium difficile,” New Engl J Med, 368:407-15, 2013.
  7. J.K. Bailey et al., “Indirect genetic effects: an evolutionary mechanism linking feedbacks, genotypic diversity and coadaptation in a climate change context,” Func Ecol, 28:87-95, 2014.
  8. J.A. Schweitzer et al., “Are there evolutionary consequences of plant-soil feedbacks along gradients?” Func Ecol, 28:55-64, 2014.
  9. A.J. Moore et al., “Interacting phenotypes and the evolutionary process. 1. Direct and indirect genetic effects of social interactions,” Evolution, 51:1352-62, 1997.
  10. J.B. Wolf et al., “Evolutionary consequences of indirect genetic effects,” Trends Ecol Evol, 13:64-69, 1998.
  11. S. Gavrilets, “On the evolutionary origins of the egalitarian syndrome,” PNAS, 109:14069-74, 2012.
Logo KW

Big Data - Myths [454]

by System Administrator - Monday, 24 March 2014, 16:12
 

Myth 1: Big data only handles structured data

Big data is a term that has gained momentum in recent years and is now on everyone's lips. There is, however, a myth around the concept: is "big data" a synonym for "analysis of structured data," or does it cover more? This note clears up the doubt.

The term "big data" has been around since the 1990s, but it only really came into popular use a couple of years ago. The reason is that the amount of data the world collectively generates has grown exponentially.

Despite this, there is no single definition of big data. Although some suggest the term applies only to unstructured data, it is clear that, in practice, big data can include a combination of structured, semi-structured and unstructured data.
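As a concrete illustration, here is a minimal Python sketch, with made-up records, of the three shapes of data the myth conflates: structured rows with a fixed schema, semi-structured JSON whose fields can vary, and unstructured free text.

import csv
import io
import json

# Structured: fixed columns, every row has the same schema.
structured = io.StringIO("id,amount\n1,9.99\n2,12.50\n")
for row in csv.DictReader(structured):
    print(row["id"], float(row["amount"]))

# Semi-structured: self-describing, but fields are optional ("tags" may be absent).
record = json.loads('{"id": 3, "amount": 4.20, "tags": ["promo"]}')
print(record["id"], record.get("tags", []))

# Unstructured: no schema at all; you search it, mine it, or run NLP over it.
note = "Customer called to complain about a late delivery."
print("complain" in note.lower())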

Some measure the volume, velocity and variety of data entering a company. But there is no magic number for any of those measurements. What one organization considers big data may be trivial for a larger entity, such as a telecommunications corporation.

In reality, when it comes to your company, the only definition of big data that matters is the one that describes how you handle any massive flow of information, and how effective your company is at extracting important conclusions from it.

Don't be distressed by the hype you may hear in the market. There are many powerful approaches and tools you can use to get the most out of the data you work with. Focus instead on what you want to achieve with all the insight you have just gained.

Myth 2: Big data means dealing with new data

When we talk about big data, do we mean managing only new data, or does it cover all data, old and new? Read on as we debunk the myth.

It is true that massive amounts of new data are generated every day. An estimated 90% of all the data in the world has been generated in the last two years.

But for many companies, simply dealing with the data they already hold internally can be a big problem.

For example, a company might be using tools or hardware incapable of filtering a large data set effectively. As a result, a key report may take more than 24 hours to generate. By the time it is finished, the information is already too stale to use.

This example could be seen as a big data problem that already existed inside the company, even if it was never viewed that way. Solving this kind of internal difficulty can make an enormous difference to how the company meets its objectives.

Myth 3: The servers

Big data is best deployed on commodity servers with direct-attached storage. Myth or reality?

Don't fall into the trap of sweeping generalization. Deploying commodity servers with direct-attached storage can work if your company is a global search engine with 4,000 servers in its cluster. In that environment, losing one server is no big deal.

But in a typical enterprise that is just adopting a 20-node Apache Hadoop cluster, the impact of a server failure can be much more significant. There, deploying commodity servers creates unnecessary risk and extra administrative demands.

The better route is to choose a more reliable, enterprise-class server, such as the HP ProLiant Gen8 family. This class of server generally delivers better performance, simplified management, and built-in intelligence that can predict server problems before they happen.

Ultimately, the class of server you select should be based on how widely you need to scale and how dependable you need your system to be.

Myth 4: Big data and virtualization, worlds apart

You should never deploy your big data workloads in a virtualized environment. Myth or reality? Read on for the truth.

According to HP, this myth stems from the origins of the most popular big data tools and frameworks.

Many, like Hadoop, were created as open source and for a particular purpose: to scale out quickly and meet the needs of large search engine sites and social networking companies. For that kind of business, rapidly deploying thousands of cheap servers to satisfy such narrow needs makes a lot of sense, especially when results are measured in milliseconds.

But most companies need neither that kind of scale nor near-instantaneous performance. Virtualization can be a viable option in some of those situations, for example for development systems. Some might worry about the overhead, but if a report takes a few minutes to run instead of a couple of hours, the difference can be insignificant, especially when weighed against the benefits of virtualization.

A virtualized environment will be more flexible and can achieve better server utilization, provided the big data technology does not consume all the key CPU and memory resources.

The upfront cost of virtualizing can also be a factor, especially compared with deploying low-cost physical servers. But, once again, look at the whole picture. A company might only be able to afford 20 physical servers, for example, yet deploy 300 virtual machines on them.

Depending on the workload and the type of processing required, a virtualized environment may in the end be the better solution for some types of applications and companies.

Myth 5: The software

While big data advances as one of the strongest trends in the market, doubts and myths keep surfacing that are not as true as they seem. For example, it is assumed that big data software comes enterprise-ready and that, therefore, the channel has no work to do when implementing it. Myth or reality? We tell you.

Popular big data software platforms like Hadoop do their job well, but don't expect to find them loaded with the robust features and tools of a mature enterprise application.

Today, most companies run their key applications (such as Microsoft Exchange, customer relationship management or CRM systems, and Microsoft SharePoint) in a virtualized environment or a private cloud. Policies and procedures are in place to keep the infrastructure secure and properly governed. The environment has robust tools for needs such as security and encryption, backup and recovery, multi-tenancy, and tiered data storage.

However, the most popular big data frameworks and applications do not have these robust enterprise tools built in. The reason, as with commodity servers, is that most big data software grew out of open source efforts focused on the ability to scale quickly.

Many of the common tools found in enterprise software simply were not needed. As a result, an enterprise building a big data solution on a framework like Hadoop will need to create those tools itself. It also needs to rethink how it will run operations. For example, backing up as much as three petabytes of information may sound daunting (and it is), but in some industries it may be required by law. Restoring that data is even more daunting, considering the time it would take. Thankfully, there are workarounds.
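To make the scale concrete, here is a back-of-the-envelope calculation (our own arithmetic, not a figure from the source): restoring three petabytes over a single sustained 10 Gb/s link would take roughly four weeks.

# 3 PB restored over one saturated 10-gigabit link
bytes_total = 3 * 10**15
link_bytes_per_s = 10 * 10**9 / 8      # 10 Gb/s = 1.25 GB/s
seconds = bytes_total / link_bytes_per_s
print(seconds / 86400)                 # ~27.8 days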

The bottom line is that big data solutions may use a different operating model from the classic private cloud deployment. That means there will be differences and inconsistencies in how the two environments are managed. And that is not necessarily a bad thing; you just have to be aware of it.

Source: IT-Sitio

Logo KW

Big Data in a multi-platform world [1114]

by System Administrator - Thursday, 19 February 2015, 18:47
 

Managing big data in a multi-platform world

by Rodrigo Cerón

There is an increasingly common question in the digital world: what is big data, and how do you manage large-scale data sets effectively? To answer it, the first step is to analyze where the management complexity comes from and why big data management is in fashion.

The answer is very simple: the world is going digital. Today there are many disparate data sources, from applications such as CRM systems to a variety of platforms such as mobile devices, televisions and GPS systems, to name just a few. Companies are investing in digital initiatives such as websites, video delivered to mobile devices, and the use of multiple platforms to interact with customers, suppliers and audiences. These digital interactions generate massive amounts of information and data records that must be stored efficiently and exploited effectively. Conveniently, data storage speeds are advancing as well, enabling massive data collection. VISA, for example, monitors 11,000 global transactions per second, while an estimated 193,000 text messages are sent per second worldwide.

Now, what is the point of storing all this data if it cannot be used properly? Knowledge is power. We know all this information contains insights that, deciphered effectively, are the key to better decision making and will drive revenue growth. Most of the time, however, it is not at all clear how to take advantage of it.

The challenge lies in that "most of the time."

 

Companies are trapped on analytics platforms that were not built to work with this new generation of multi-platform information. These legacy systems are rigid and cannot be easily integrated with more flexible data architectures.

There is a shortage of big data solutions that can work with legacy data while remaining flexible enough to maximize insight from digital information.

Companies should not spend years manually connecting their digital dots while waiting for platforms to evolve. But in the meantime, how do you survive and flourish amid this data storm?

 

You need to select a tool that captures all the data directly at its source and does not lock you into predefined cubes. You need a platform ready to capture data across every type of digital device; a platform built to scale to billions of pieces of data without losing speed or efficiency.

With all this in mind, what should you look at when evaluating and selecting the best big data platform for your information needs? The following key characteristics should be reviewed specifically against your requirements:

  1. Flexibility: In a typical digital environment, customers interact with different digital assets (websites, mobile apps, videos, social media, etc.). The platform must capture and store this data without restrictions, with APIs available for easy input/output and no limits on segmentation, custom variables or correlations. These interactions can be collected in several ways, but the most effective is to capture them directly, or through an ETL (extract, transform and load) process that prepares the information to be fed into a traditional database (a minimal sketch follows this list).
  2. A unified platform: the platform should concentrate data from multiple platforms in one place. To extract more value from the information, an overlay layer providing web/digital analytics, business intelligence or other business applications is an efficient way to keep systems flexible and scalable for future data analysis requirements.
  3. Lean: it should give a single view of the data (audience measurement, video streams, with dashboards and reporting) without requiring multiple platforms; this also reduces TCO (total cost of ownership). Delivered as SaaS (software as a service), this kind of functionality also means fewer consulting hours to implement and maintain the platform.
  4. No hidden costs: the platform should have a clear billing scheme, something rarely seen with legacy platform vendors, who always turn up with surprises.
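Below is a minimal ETL sketch for the capture path described in point 1. Everything in it is illustrative rather than taken from the article (the file name, the field names, and SQLite standing in for the traditional database): it extracts interaction events stored as JSON lines, transforms each into a flat row, and loads the rows into a relational table.

import json
import sqlite3

conn = sqlite3.connect("interactions.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (user TEXT, channel TEXT, ts INTEGER)")

with open("events.jsonl") as f:                  # extract: raw interaction events
    for line in f:
        event = json.loads(line)
        row = (                                  # transform: flatten the nested record
            event["user"]["id"],
            event.get("channel", "web"),
            int(event["timestamp"]),
        )
        conn.execute("INSERT INTO events VALUES (?, ?, ?)", row)  # load

conn.commit()
conn.close()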

The amount of information will keep growing in size and complexity. Businesses tied to old analytics tools are struggling to gain insights quickly, and there is no time to wait years. There is a clear need for a platform built to handle the full range of digital assets and platforms. Choose a tool that maximizes the value of all your digital investments, not just the web but all the information generated across multiple platforms. With that, you will be ready to manage big data in a multi-platform world.

About the author: Rodrigo Cerón is an engineering graduate of the Instituto Tecnológico Autónomo de México (ITAM) and holds a diploma in digital marketing from the Universidad Iberoamericana (UIA). He is currently Marketing Manager for Latin America at comScore and vice president of Marketing and Advertising at the Mexican Internet Association (AMIPCI). Rodrigo has broad experience building marketing strategies and deep knowledge of the information technology, telecommunications and entertainment industries. You can reach him by email at rceron@comscore.com or through his Twitter account: @roceflo

Logo KW

Big Data Problems [747]

by System Administrator - Thursday, 14 August 2014, 13:33
 

What Kind of Big Data Problem Do You Have?

Everyone's still talking about big data - and for good reason. It's continuing to grow as the use of computers, mobile devices and the Internet continues to increase. It's expanding as more devices, homes and machines are outfitted with sensors, smart meters and GIS transmitters. And with the advent of low-cost storage, in-memory analytics and other computing technologies, it's possible to turn this big data into meaningful insights that empower organizations to make proactive, more informed decisions than ever before.


This paper will give you a crash course in how one company categorizes analytic problems and the best technologies to address them.

Please read the attached whitepaper.

Logo KW

Big data projects: To build or to buy? [721]

de System Administrator - sábado, 9 de agosto de 2014, 00:26
 

Big data projects: To build or to buy?

Logo KW

Big Data [232]

de System Administrator - viernes, 5 de diciembre de 2014, 21:41
 

 Big Data

"Big data" es un término aplicado a conjuntos enormes de datos que superan la capacidad del software estándar para ser capturados, gestionados y procesados en un tiempo razonable. Los tamaños del "big data" se hallan constantemente en aumento. En 2012 se dimensionaba su tamaño entre una docena de terabytes y varios petabytes en un solo data set (conjunto de datos). Los científicos encontraban limitaciones debido al volumen generado en ciertas áreas tales como: meteorología, genómica, simulaciones de procesos físicos, investigaciones biológicas y ambientales. Las dificultades también incluían a los motores de búsqueda en internet, sistemas financieros, datawarehouse e inteligencia de negocios, bitácoras, monitorización, seguridad, trazabilidad, datos geográficos, espionaje y más.

Fuente: http://es.wikipedia.org/wiki/Big_Data

An architect's guide: How to use big data

This essential guide shows how organizations are using big data and offers advice for IT professionals working with the technology.

Introduction

Employees in organizations of all sizes can be faced with the daunting task of figuring out how to use big data and how best to manage it. Some IT professionals are tasked with using technology such as graph databases to crunch large volumes of data. Other developers need to be able to use tools like Hadoop to build systems capable of handling varying flows of data.

This guide brings together a range of stories that highlight examples of how to use big data, management techniques, trends with the technology, and key terms developers need to know.

1. The cloud and big data

Techniques for working with big data and the cloud

What is big data? How should it be used? These are two questions people commonly ask when they begin working with large volumes of data. While use cases are evolving, there are some tips and tricks that can be gleaned from success stories.

The following is a collection of articles going over the basics of working with big data. 

Get answers to frequently asked questions about what big data is, why it can be a problem and how big data tools will be part of the solution. Continue Reading

Organizations are handling more and more data all the time, and a big problem is figuring out how to find an important piece of information in petabytes of big data. How can it be done? Cloud-based technologies that can burst and grow are becoming the standard solution. Continue Reading

Achieving an affordable database solution that is both scalable and performant has always been a challenge, but Amazon has put scalability and performance within the reach of all sizes of business with their NoSQL solutions that have grown out of their Dynamo-based big data systems. Continue Reading

Many organizations are finding that current IT setups cannot meet modern demands, and in some instances, using data grid technologies can help. Continue Reading

Data persistence can be problematic because it is often related to how an application is functioning. Continue Reading 

When designing big data applications, an important consideration is whether to use SOA or RESTful APIs to connect big data components and services to the rest of the application. Continue Reading

2. Using Hadoop

Popular tools for working with big data

It's difficult to discuss how to use big data and managing large volumes of data without discussing Hadoop, a Java-based framework. Megacorporations like Google and IBM have capitalized on the technology, but that doesn't mean smaller companies don't stand to benefit from it as well.

Read on for technical advice about working with Hadoop.

YARN represents the biggest architectural change in Hadoop since its inception over seven years ago. Now, Hadoop goes beyond MapReduce to provide scheduled processing while simultaneously processing big data. Continue Reading

MapReduce has matured, and so has Hadoop, and together under the umbrella of YARN, these powerful technologies are working together better than ever to deliver faster and more flexible big data solutions to the enterprise. Continue Reading

Learn about the next steps in big data trends for the enterprise in 2014. Continue Reading

3. Big data in marketing

Benefit from harnessing big data

Long gone are the days of marketers scratching their heads to determine ways to use big data to their advantage. Organizations of all sizes are learning the benefits of being able to analyze big data and turn that information into a powerful resource to reach consumers. With this movement comes the need for IT professionals to know how to build systems able to wrangle large volumes of data.

The following is a collection of articles that highlight the basics of using big data in marketing.

There is a shift taking place in the business world, as big data in marketing empowers customers over companies. Continue Reading

Researchers and business users alike analyze big data in order to glean insights as to what customers actually want and need. Continue Reading

Organizations are taking advantage of big data management tools as applications are required to handle a growing volume of data. Continue Reading

4. Graph database use cases

Make visual representations with big data

Implementing graph databases is one example of the ways organizations have learned to make meaningful use of big data. While the technology has its roots in social media, there are practical uses for graph databases that extend beyond Facebook. From dating sites to online retailers, the use cases are extensive.

Read on to learn more about big data and graph databases.

At Big Data Techcon 2014, software field engineer Max de Marzi makes a case for enterprise graph searches using the Neo4j database. Continue Reading

While the most commonly known graph database use cases involve social media, it's not the only market to make use of the technology. Continue Reading

Software field engineer Max De Marzi explains why graph searches and big databases can be of practical use to the enterprise. Continue Reading

5. Glossary

Common big data terms

This glossary provides common terms related to big data.

Big data is an evolving term that describes any voluminous amount of structured, semi-structured and unstructured data that has the potential to be mined for information. Although big data doesn't refer to any specific quantity, the term is often used when speaking about petabytes and exabytes of data. Continue Reading

Big data analytics is the process of examining large amounts of different data types, or big data, in an effort to uncover hidden patterns, unknown correlations and other useful information. Continue Reading

Big data management is the organization, administration and governance of large volumes of both structured and unstructured data. Continue Reading

A graph database, also called a graph-oriented database, is a type of NoSQL database that uses graph theory to store, map and query relationships.  A graph database is essentially a collection of nodes and edges. Each node represents an entity and each edge represents a relationship between two nodes. Continue Reading
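
To make the node-and-edge model concrete, here is a toy in-memory graph in Python. This is not Neo4j or any product's API; the entities and relationship names are made up:

    from collections import defaultdict

    class TinyGraph:
        """A toy graph store: nodes are entities, edges are named relationships."""
        def __init__(self):
            self.nodes = {}                 # node id -> properties
            self.edges = defaultdict(list)  # node id -> [(relation, other id)]

        def add_node(self, node_id, **props):
            self.nodes[node_id] = props

        def add_edge(self, src, relation, dst):
            self.edges[src].append((relation, dst))

        def neighbors(self, node_id, relation):
            """Traverse edges of one relationship type from a node."""
            return [dst for rel, dst in self.edges[node_id] if rel == relation]

    g = TinyGraph()
    g.add_node("alice", kind="person")
    g.add_node("bob", kind="person")
    g.add_edge("alice", "FRIEND_OF", "bob")
    print(g.neighbors("alice", "FRIEND_OF"))  # -> ['bob']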

Hadoop is a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation. Continue Reading

MapReduce is a software framework that allows developers to write programs that process massive amounts of unstructured data in parallel across a distributed cluster of processors or stand-alone computers.Continue Reading
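
A minimal sketch of the map and reduce phases in plain Python (no Hadoop involved); word count is the customary illustration, and in a real cluster the map calls would run in parallel across machines:

    from collections import defaultdict

    def map_phase(documents):
        """Map: emit (word, 1) pairs from each document."""
        for doc in documents:
            for word in doc.split():
                yield word, 1

    def reduce_phase(pairs):
        """Reduce: sum the counts for each key after grouping."""
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    docs = ["big data big ideas", "data beats opinions"]
    print(reduce_phase(map_phase(docs)))
    # -> {'big': 2, 'data': 2, 'ideas': 1, 'beats': 1, 'opinions': 1}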

NoSQL database, also called Not Only SQL, is an approach to data management and database design that's useful for very large sets of distributed data.   Continue Reading

Link: http://searchsoa.techtarget.com

Logo KW

Big data, big challenges: Hadoop in the enterprise [1454]

de System Administrator - jueves, 24 de septiembre de 2015, 19:01
 

Big data, big challenges: Hadoop in the enterprise

By Andrew C. Oliver

Fresh from the front lines: Common problems encountered when putting Hadoop to work -- and the best tools to make Hadoop less burdensome

As I work with larger enterprise clients, a few Hadoop themes have emerged. A common one is that most companies seem to be trying to avoid the pain they experienced in the heyday of JavaEE, SOA, and .Net -- as well as that terrible time when every department had to have its own portal.

To this end, they're trying to centralize Hadoop, in the way that many companies attempt to do with RDBMS or storage. Although you wouldn't use Hadoop for the same stuff you'd use an RDBMS for, Hadoop has many advantages over the RDBMS in terms of manageability. The row-store RDBMS paradigm (that is, Oracle) has inherent scalability limits, so when you attempt to create one big instance or RAC cluster to serve all, you end up serving none. With Hadoop, you have more ability to pool compute resources and dish them out.

Unfortunately, Hadoop management and deployment tools are still early stage at best. As awful as Oracle's reputation may be, I could install it by hand in minutes. Installing a Hadoop cluster that does more than "hello world" will take hours at least. Next, when you start handling hundreds or thousands of nodes, you'll find the tooling a bit lacking.

Companies are using devops tools like Chef, Puppet, and Salt to create manageable Hadoop solutions. They face many challenges on the way to centralizing Hadoop:

  • Hadoop isn't a thing: Hadoop is a word we use to mean "that big data stuff" like Spark, MapReduce, Hive, HBase, and so on. There are a lot of pieces.
  • Diverse workloads: Not only do you potentially need to balance a Hive:Tez workload against a Spark workload, but some workloads are more constant and sustained than others.
  • Partitioning: YARN is pretty much a clusterwide version of the process scheduler and queuing system that you take for granted in the operating system of the computer, phone, or tablet you're using right now. You ask it to do stuff, and it balances it against the other stuff it's doing, then distributes the work accordingly. Obviously, this is essential. But there's a pecking order -- and who you are often determines how many resources you get. Also, streaming jobs and batch jobs may need different levels of service. You may have no choice but to deploy two or more Hadoop clusters, which you need to manage separately. Worse, what happens when workloads are cyclical?
  • Priorities: Though your organization may want to provision a 1,000-node Spark cluster, it doesn't mean you have the right to provision 1,000 nodes. Can you really get the resources you need?

On one hand, many organizations have deployed Hadoop successfully. On the other, if this smells like building your own PaaS with devops tools, your nose is working correctly. You don't have a lot of choice yet. Solutions are coming, but none really solve the problems of deploying and maintaining Hadoop in a large organization yet:

  • Ambari: This Apache project is a marvel and an amazing thing when it works. Each version gets better and each version manages more nodes. But Ambari isn't for provisioning more VMs and does a better job provisioning than reprovisioning or reconfiguring. Ambari probably isn't a long-term solution for provisioning large multitenanted environments with diverse workloads.
  • Slider: Slider enables non-YARN applications to be managed by YARN. Many Hadoop projects at Apache are really controlled or sponsored by one of the major vendors. In this case, the sponsor is Hortonworks, so it pays to look at Hortonworks' road map for Slider. One of the more interesting developments is the ability to deploy Dockerized apps via YARN based on your workload. I haven't seen this in production yet, but it's very promising.
  • Kubernetes: I admit to being biased against Kubernetes because I can't spell it. Kubernetes is a way to pool compute resources Google-style. It brings us one step closer to a PaaS-like feel for Hadoop. I can see a potential future when you use OpenShift, Kubernetes, Slider, YARN, and Docker together to manage a diverse cluster of resources. Cloudera hired a Google exec with that on his resume.
  • Mesos: Mesos has some overlap with Kubernetes but competes directly with YARN, or more accurately YARN/Slider. The best way to understand the difference is that YARN is more like traditional task scheduling: a process gets scheduled against resources that YARN has available to it on the cluster. With Mesos, an application makes a request, Mesos makes an offer, and the process can "reject" that offer and wait for a better one, sort of like dating (a toy sketch of the two styles follows this list). If you really want to understand this in detail, MapR has a good walkthrough (though possibly the conclusions are a bit biased). Finally, there's a YARN/Mesos hybrid called Myriad. The hype cycle has burned out a bit quickly for Mesos.
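
To make the contrast concrete, here is a toy sketch in Python. This is not the real YARN or Mesos API; the function names, numbers and offer logic are illustrative assumptions only:

    import random

    def yarn_style(cluster_free_cpus, task_cpus):
        """YARN-style: the scheduler simply places the task if resources exist."""
        return "scheduled" if task_cpus <= cluster_free_cpus else "queued"

    def mesos_style(needed_cpus, max_rounds=5):
        """Mesos-style: the framework receives offers and may reject and wait."""
        for round_no in range(1, max_rounds + 1):
            offered = random.randint(1, 8)  # a resource offer from the master
            if offered >= needed_cpus:      # good enough, accept the offer
                return f"accepted {offered} cpus in round {round_no}"
        return "declined all offers"        # kept waiting for a better offer

    print(yarn_style(cluster_free_cpus=16, task_cpus=4))  # -> scheduled
    print(mesos_style(needed_cpus=6))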

What about opting for a Hadoop provider in the public cloud? Well, there are a few answers to that question. First, at a certain scale you begin to stop believing claims that Amazon is cheaper than having your own internal IT team maintain things. Second, many companies have (real or imagined) beliefs around data security and regulation that prevent them from going to the cloud. Third, uploading larger data sets may not be practical, based on the amount of bandwidth you can buy and the time you need it to be processed/uploaded. Finally, many of the same challenges (especially around diverse workloads) persist in the cloud.

After the vendor wars subside and the shrill pitch of multiple solutions in the marketplace fades, we'll eventually have a turnkey solution for dealing with multiple workloads, diverse services, and different use cases in a way that provisions both the infrastructure and service components on demand.

For now, expect a lot of custom scripting and recipes. Organizations that make large-scale use of this technology simply can't wait to start centralizing. The cost of building and maintaining disparate clusters outweighs the cost of custom-building or deploying immature technology.

Andrew C. Oliver | Columnist

Andrew C. Oliver is a professional cat herder who moonlights as a software consultant. He is president and founder of Mammoth Data (formerly Open Software Integrators), a big data consulting firm based in Durham, N.C.

 

Logo KW

Big Picture [175]

de System Administrator - miércoles, 8 de enero de 2014, 14:23
 

The expression "big picture" is used to mean the overall panorama, the situation as a whole, or a global view.

Logo KW

Bill Gates [91]

de System Administrator - lunes, 6 de enero de 2014, 20:07
 

 Bill Gates

William Henry Gates III (Seattle, Washington, October 28, 1955), better known as Bill Gates, is an American businessman and philanthropist, co-founder of the software company Microsoft. He recently regained the title of the world's richest man, with a fortune estimated at 72.7 billion dollars.

Source: http://es.wikipedia.org/wiki/Bill_gates

Logo KW

bio+ [57]

de System Administrator - jueves, 2 de enero de 2014, 20:53
 

The bio+ programming language took hold starting in year Y.

Logo KW

Biomimética [1058]

de System Administrator - viernes, 16 de enero de 2015, 10:53
 

La Biomimética en acción | Biomimicry in action

by Janine Benyus

Science writer, innovation consultant, conservationist
A self-proclaimed nature nerd, Janine Benyus' concept of biomimicry has galvanized scientists, architects, designers and engineers into exploring new ways in which nature's successes can inspire humanity. Full bio

Video

Janine Benyus has a message for inventors: when looking for design solutions, look to nature first. That is where you will find inspiration for waterproof, aerodynamic, solar-powered designs and much more. Here she reveals dozens of new products that borrow from nature's processes, with spectacular results.


Biomimicry in action

0:11 - If I could reveal anything that is hidden from us, at least in modern cultures, it would be to reveal something that we've forgotten, that we used to know as well as we knew our own names. And that is that we live in a competent universe, that we are part of a brilliant planet, and that we are surrounded by genius.

0:42 - Biomimicry is a new discipline that tries to learn from those geniuses, and take advice from them, design advice. That's where I live, and it's my university as well. I'm surrounded by genius. I cannot help but remember the organisms and the ecosystems that know how to live here gracefully on this planet. This is what I would tell you to remember if you ever forget this again. Remember this. This is what happens every year. This is what keeps its promise. While we're doing bailouts, this is what happened. Spring.

1:30 - Imagine designing spring. Imagine that orchestration. You think TED is hard to organize. (Laughter) Right? Imagine, and if you haven't done this in a while, do. Imagine the timing, the coordination, all without top-down laws, or policies, or climate change protocols. This happens every year. There is lots of showing off. There is lots of love in the air. There's lots of grand openings. And the organisms, I promise you, have all of their priorities in order.

2:20 - I have this neighbor that keeps me in touch with this, because he's living, usually on his back, looking up at those grasses. And one time he came up to me -- he was about seven or eight years old -- he came up to me. And there was a wasp's nest that I had let grow in my yard, right outside my door. And most people knock them down when they're small. But it was fascinating to me, because I was looking at this sort of fine Italian end papers. And he came up to me and he knocked. He would come every day with something to show me. And like, knock like a woodpecker on my door until I opened it up. And he asked me how I had made the house for those wasps, because he had never seen one this big. And I told him, "You know, Cody, the wasps actually made that." And we looked at it together. And I could see why he thought, you know -- it was so beautifully done. It was so architectural. It was so precise.

3:25 - But it occurred to me how, in his small life, he had already believed the myth that if something was that well done, we must have done it. How did he not know -- it's what we've all forgotten -- that we're not the first ones to build. We're not the first ones to process cellulose. We're not the first ones to make paper. We're not the first ones to try to optimize packing space, or to waterproof, or to try to heat and cool a structure. We're not the first ones to build houses for our young.

4:05 - What's happening now, in this field called biomimicry, is that people are beginning to remember that organisms, other organisms, the rest of the natural world, are doing things very similar to what we need to do. But in fact they are doing them in a way that has allowed them to live gracefully on this planet for billions of years. So these people, biomimics, are nature's apprentices. And they're focusing on function. What I'd like to do is show you a few of the things that they're learning. They have asked themselves, "What if, every time I started to invent something, I asked, 'How would nature solve this?'"

4:51 - And here is what they're learning. This is an amazing picture from a Czech photographer named Jack Hedley. This is a story about an engineer at J.R. West. They're the people who make the bullet train. It was called the bullet train because it was rounded in front, but every time it went into a tunnel it would build up a pressure wave, and then it would create like a sonic boom when it exited. So the engineer's boss said, "Find a way to quiet this train."

5:17 - He happened to be a birder. He went to the equivalent of an Audubon Society meeting. And he studied -- there was a film about kingfishers. And he thought to himself, "They go from one density of medium, the air, into another density of medium, water, without a splash. Look at this picture. Without a splash, so they can see the fish." And he thought, "What if we do this?" Quieted the train. Made it go 10 percent faster on 15 percent less electricity.

5:48 - How does nature repel bacteria? We're not the first ones to have to protect ourselves from some bacteria. Turns out that -- this is a Galapagos Shark. It has no bacteria on its surface, no fouling on its surface, no barnacles. And it's not because it goes fast. It actually basks. It's a slow-moving shark. So how does it keep its body free of bacteria build-up? It doesn't do it with a chemical. It does it, it turns out, with the same denticles that you had on Speedo bathing suits, that broke all those records in the Olympics,

6:22 - but it's a particular kind of pattern. And that pattern, the architecture of that pattern on its skin denticles, keeps bacteria from being able to land and adhere. There is a company called Sharklet Technologies that's now putting this on the surfaces in hospitals to keep bacteria from landing, which is better than dousing them with anti-bacterials or harsh cleansers, to which many, many organisms are now becoming resistant. Hospital-acquired infections are now killing more people every year in the United States than die from AIDS or cancer or car accidents combined -- about 100,000.

7:04 - This is a little critter that's in the Namibian desert. It has no fresh water that it's able to drink, but it drinks water out of fog. It's got bumps on the back of its wing covers. And those bumps act like a magnet for water. They have water-loving tips, and waxy sides. And the fog comes in and it builds up on the tips. And it goes down the sides and goes into the critter's mouth. There is actually a scientist here at Oxford who studied this, Andrew Parker. And now kinetic and architectural firms like Grimshaw are starting to look at this as a way of coating buildings so that they gather water from the fog, 10 times better than our fog-catching nets.

7:49 - CO2 as a building block. Organisms don't think of CO2 as a poison. Plants and organisms that make shells, coral, think of it as a building block. There is now a cement manufacturing company starting in the United States called Calera. They've borrowed the recipe from the coral reef, and they're using CO2 as a building block in cement, in concrete. Instead of -- cement usually emits a ton of CO2 for every ton of cement. Now it's reversing that equation, and actually sequestering half a ton of CO2 thanks to the recipe from the coral.

8:25 - None of these are using the organisms. They're really only using the blueprints or the recipes from the organisms. How does nature gather the sun's energy? This is a new kind of solar cell that's based on how a leaf works. It's self-assembling. It can be put down on any substrate whatsoever. It's extremely inexpensive and rechargeable every five years. It's actually a company that I'm involved in called OneSun, with Paul Hawken.

8:53 - There are many, many ways that nature filters water and takes salt out of it. We take water and push it against a membrane. And then we wonder why the membrane clogs and why it takes so much electricity. Nature does something much more elegant. And it's in every cell. Every red blood cell of your body right now has these hourglass-shaped pores called aquaporins. They actually export water molecules through. It's kind of a forward osmosis. They export water molecules through, and leave solutes on the other side. A company called Aquaporin is starting to make desalination membranes mimicking this technology.

9:35 - Trees and bones are constantly reforming themselves along lines of stress. This algorithm has been put into a software program that's now being used to make bridges lightweight, to make building beams lightweight. Actually G.M. Opel used it to create that skeleton you see, in what's called their bionic car. It lightweighted that skeleton using a minimum amount of material, as an organism must, for the maximum amount of strength.

10:10 - This beetle, unlike this chip bag here, this beetle uses one material, chitin. And it finds many, many ways to put many functions into it. It's waterproof. It's strong and resilient. It's breathable. It creates color through structure. Whereas that chip bag has about seven layers to do all of those things. One of our major inventions that we need to be able to do to come even close to what these organisms can do is to find a way to minimize the amount of material, the kind of material we use, and to add design to it. We use five polymers in the natural world to do everything that you see. In our world we use about 350 polymers to make all this.

11:03 - Nature is nano. Nanotechnology, nanoparticles, you hear a lot of worry about this. Loose nanoparticles. What is really interesting to me is that not many people have been asking, "How can we consult nature about how to make nanotechnology safe?" Nature has been doing that for a long time. Embedding nanoparticles in a material for instance, always. In fact, sulfur-reducing bacteria, as part of their synthesis, they will emit, as a byproduct, nanoparticles into the water. But then right after that, they emit a protein that actually gathers and aggregates those nanoparticles so that they fall out of solution.

11:47 - Energy use. Organisms sip energy, because they have to work or barter for every single bit that they get. And one of the largest fields right now, in the world of energy grids, you hear about the smart grid. One of the largest consultants are the social insects. Swarm technology. There is a company called Regen. They are looking at how ants and bees find their food and their flowers in the most effective way as a whole hive. And they're having appliances in your home talk to one another through that algorithm, and determine how to minimize peak power use.

12:33 - There's a group of scientists in Cornell that are making what they call a synthetic tree, because they are saying, "There is no pump at the bottom of a tree." It's capillary action and transpiration pulls water up, a drop at a time, pulling it, releasing it from a leaf and pulling it up through the roots. And they're creating -- you can think of it as a kind of wallpaper. They're thinking about putting it on the insides of buildings to move water up without pumps.

13:06 - Amazon electric eel -- incredibly endangered, some of these species -- create 600 volts of electricity with the chemicals that are in your body. Even more interesting to me is that 600 volts doesn't fry it. You know we use PVC, and we sheath wires with PVC for insulation. These organisms, how are they insulating against their own electric charge? These are some questions that we've yet to ask.

13:35 - Here's a wind turbine manufacturer that went to a whale. The humpback whale has scalloped edges on its flippers. And those scalloped edges play with flow in such a way that it reduces drag by 32 percent. These wind turbines can rotate in incredibly slow wind speeds, as a result.

13:57 - MIT just has a new radio chip that uses far less power than our chips. And it's based on the cochlea of your ear, able to pick up internet, wireless, television signals and radio signals, in the same chip. Finally, on an ecosystem scale.

14:19 - At Biomimicry Guild, which is my consulting company, we work with HOK Architects. We're looking at building whole cities in their planning department. And what we're saying is that, shouldn't our cities do at least as well, in terms of ecosystem services, as the native systems that they replace? So we're creating something called Ecological Performance Standards that hold cities to this higher bar.

14:48 - The question is -- biomimicry is an incredibly powerful way to innovate. The question I would ask is, "What's worth solving?" If you haven't seen this, it's pretty amazing. Dr. Adam Neiman. This is a depiction of all of the water on Earth in relation to the volume of the Earth -- all the ice, all the fresh water, all the sea water -- and all the atmosphere that we can breathe, in relation to the volume of the Earth. And inside those balls, life, over 3.8 billion years, has made a lush, livable place for us.

15:26 - And we are in a long, long line of organisms to come to this planet and ask ourselves, "How can we live here gracefully over the long haul?" How can we do what life has learned to do? Which is to create conditions conducive to life. Now in order to do this, the design challenge of our century, I think, we need a way to remind ourselves of those geniuses, and to somehow meet them again.

16:02 - One of the big ideas, one of the big projects I've been honored to work on is a new website. And I would encourage you all to please go to it. It's called AskNature.org. And what we're trying to do, in a TEDesque way, is to organize all biological information by design and engineering function.

16:21 - And we're working with EOL, Encyclopedia of Life, Ed Wilson's TED wish. And he's gathering all biological information on one website. And the scientists who are contributing to EOL are answering a question, "What can we learn from this organism?" And that information will go into AskNature.org. And hopefully, any inventor, anywhere in the world, will be able, in the moment of creation, to type in, "How does nature remove salt from water?" And up will come mangroves, and sea turtles and your own kidneys.

16:57 - And we'll begin to be able to do as Cody does, and actually be in touch with these incredible models,these elders that have been here far, far longer than we have. And hopefully, with their help, we'll learn how to live on this Earth, and on this home that is ours, but not ours alone. Thank you very much.(Applause)

Link: http://www.ted.com

 

Logo KW

Biomimicry Institute [1059]

de System Administrator - viernes, 16 de enero de 2015, 11:38
 

The Biomimicry Institute empowers people to create nature-inspired solutions for a healthy planet.

 

Can We Use Biomimicry To Design Cities? Janine Benyus Says Yes

Yesterday in a sunny corner of London, a select group of UK journalists, myself included, were treated to an enrapturing few hours in the company of biomimicry guru Janine Benyus. The group was brought together by sustainable business pioneers InterfaceFLOR, whose long standing working relationship with Benyus enabled them to arrange this exclusive press lunch at raw food restaurant Saf. It was appropriate that Janine spoke to us in the centre of the bustling metropolis as her latest work in the field of biomimicry is focused on using the discipline to inform the design and function of cities.

Biologist at the Design Table
Janine Benyus describes her job as being the "biologist at the design table", a career that she says didn't really exist 10-12 years ago. Since the publication of her book, Biomimicry: Innovation Inspired by Nature, in 1997, Benyus and her colleagues at the Biomimicry Guild have worked with some of the world's most successful companies including Walmart, Nike and Interface, helping them ask nature for inspiration on how to create more sustainable products and systems.

"After my book came out we expected environmentalists and conservationists to get in touch, but in fact it was big business who called. They wanted a biologist to come and talk about how life works. People woke up to the fact that there is a sustainable world in nature that we hadn't been using as a model"

 

The City as an Ecosystem
When I asked Janine yesterday what she was currently most excited about, after all these years of working in biomimicry, her unequivocal answer was cities. The Biomimicry Guild has teamed up with HOK, one of the largest architectural developers in the world, to work on city masterplans inspired by nature. "A company like HOK has a massive impact on huge areas of land, due to the scale and breadth of their work. So the question we asked was how can you have a city perform like an ecosystem?"

Ecological Performance Standards
Benyus and her team have been developing the concept of designing cities to an 'ecological performance standard', looking at the original topography of the locale and working out the metrics of how the natural environment should perform. "How many millimetres of soil, how many tons of carbon, how much water stored, how much air purified? It is not enough to have green roofs and walls, we need to ask how a building will store carbon. We need cities to perform like ecosystems, not just look like them."

Landscape Amnesia
Benyus is working with HOK on two very different city plans, one green field city in India and one retrofitting scheme in China. The city of Lang Fang is on the North China Plain, an area that Benyus describes as having 'landscape amnesia'. The natural ecological system of Lang Fang was a mixed deciduous forest, but that was over 4000 years ago. Without the density of the forest the communities in this area haven't been able to effectively capture water, so have been drawing down their aquifer for years. Additionally they have to move water about using a pipeline that pumps water up from the Yangtze River in the south.

Water Self Reliance
The question for Lang Fang, Benyus says, is "How can they be water self-reliant so they can recharge their aquifer and be the first Chinese city to turn away the pipe?" As she says, "That would be something the city could celebrate." According to Benyus this brief designed with HOK has completely changed the architectural plan of the city. Instead of concrete paths channelling potentially damaging storm water through the urban landscape, saving buildings but wasting this valuable resource, the city now has a plan to direct water back into the ground through strategically planted areas. "The masterplan is now beautiful with green ribbons flowing through the city, tracking and echoing the paleo-channels of old rivers that used to be there."

The Right Questions Lead to Innovation
Janine Benyus explains why formulating ambitious nature inspired briefs for city masterplanning is important. "If you're working in an area with dangerous levels of soil erosion, you should ask how can we have a policy of zero rainfall hitting bare ground? Then you can start designing a city with multiple green roofs, awnings and coverings. If on the other hand you are building in an area of severe water scarcity and the goal is to get 40% of water back in the ground, you should ask, how can we design permeable pavements?" These are bold design briefs and Benyus is proud to say that their goals with HOK, "have never been set so high or been so locally informed by ecological land types."

By asking the right questions designers can create uniquely demanding briefs which produce innovative solutions. Janine Benyus is a great believer in designers setting their sights as high as possible, "It's very powerful when companies have clear goals to work towards." And as Ray Anderson, the radical CEO at the helm of InterfaceFLOR, said when he first set out his ambition to create a sustainable road map for his company, "You can't tell me that it can't be done."

Janine Benyus hearts TreeHugger
We're delighted to say that Janine declared herself to be a great fan of TreeHugger yesterday and had some effusive praise for our in-depth coverage of biomimicry, name-checking Tim McGee for his great writing on the subject. And her parting words to us? "I really appreciate all the excellent work that you guys do. Keep on rocking in the free world TreeHugger!"


Link: http://www.treehugger.com

 

BIOMIMICRY CHALLENGE: FOR IBM, SMART DESIGN DRAWS WATER CONSERVATION INSPIRATION FROM ECOSYSTEMS

OUR BIOMIMICRY CHALLENGE, "WHAT WOULD YOU ASK NATURE?", DREW DOZENS OF REAL-WORLD BUSINESS PROBLEMS SUBMITTED BY COMPANIES FROM ALL OVER THE WORLD. WE ASSIGNED THREE CHALLENGES TO THREE FIRMS AND PAIRED THEM EACH WITH A BIOLOGIST. EACH TEAM IS NOW REPORTING THEIR BIO-INSPIRED SOLUTIONS.

You've probably seen ads for IBM's SmarterCity initiative, a program that uses the company's information technology to help municipal governments create healthier, more intelligent urban environments for their residents. Using their ability to collect and analyze data, IBM is able to provide information about elements of daily city life ranging from weather and traffic to water usage and air quality. But that data has largely been used to make policy and economic decisions. IBM appealed to our What Would You Ask Nature? biomimicry challenge, asking how nature could help these overlays of information guide residents toward making better personal decisions for the good of the city. A New York-based team at Smart Design accepted their challenge.

 

IBM Biomimicry Challenge from Smart Design on Vimeo.

 After having a discussion with IBM, and walking through some day-in-the-life exercises that explored issues facing urban dwellers, Smart chose to focus on water conservation. Because of the complexity surrounding its systems, water is often misunderstood, says Tucker Fort, Smart's director of industrial design. "But unlike something like energy, it's a finite resource." Water was also something that residents interacted with everyday, and since IBM's goal was to make cities more responsive and resilient, using a biomimetic approach for encouraging more responsible water usage could have a real impact when implemented across an entire municipal area. Smart zeroed in on urban water consumption to explore how nature could inspire relevant, everyday solutions for city inhabitants to conserve water.

 

To immerse themselves in a biomimetic mindset, Fort, along with director of interaction design Ted Booth, and their team consisting of Whitney Hopkins, Colin Kelly, Anton Ljunggren, and Stephanie Yung, were introduced to the emerging discipline of biomimicry by their BaDT (biologist at the design table) Mark Dorfman. After a biomimicry primer, the team engaged in a blindfolded exercise where they were encouraged to smell, taste, touch, and listen to nature—anything that would break them of their reliance on vision. This is something Dorfman calls "quieting our cleverness." "If I were to show you a pine cone, you would see it and immediately know what it is, and that might be the end of your curiosity and exploration," says Dorfman. "But if you're blindfolded and handed a pine cone, you'll have to explore its shape, texture, smell, before figuring out what it is." The hope is that this process will open the designer's mind to viewing living things through a functional lens—a way that is particularly relevant to solving design challenges.

 

Visits to two urban greenways, the High Line and Hudson River Park, near Smart's offices, provided an opportunity to use this sensual awareness while focusing intently on natural solutions. Reframing the challenge as a functional approach, the designers asked themselves questions like "How does nature store water?" and "How does nature collect water?" Through Dorfman's storytelling and by looking at examples on the AskNature.org site, they learned about examples that ranged from the corky tuber, a giant 700-pound water-filled tuber that grows under the ground yet throws out a few tiny shoots on the surface to alert animals about its water levels, to the ways that camels self-regulate water consumption due to availability.

Although the designers were inspired by the natural examples, they quickly found a more actionable solution in the Life's Principles, a chart created by the Biomimicry Guild to illustrate how the earth regulates and conserves resources within its own giant ecosystem. Examining this representation of nature's complex systems, the designers realized that their solution would not come directly from an organism, but from this entire system as a whole: These core principles for life could work as a metaphor for a city, inspiring and informing solutions to make a healthier and smarter environment.

 

After studying the Life's Principles chart, one truth became apparent: nature has strict boundaries when it comes to resources. Cities don't. Especially when it comes to water, organisms have very specific ways of dealing with conservation during times of scarcity. Grass will go dormant during droughts. Birds will conserve food for the good of the group. Animals and plants have a very intrinsic ability to monitor and self-regulate, whereas humans are so far removed from this cycle that they only pay attention when they experience what the designers named a "heart attack moment," when resources have been depleted and it's too late. "Animals regulate based on ambient conditions—they do for themselves but also for the good of the species," says Booth. "So how do you make those signals apparent? How can we bring out those concepts so we're looking at those boundaries and limits?"

But here's where Smart tapped their own knowledge of human behavior: "People hate the word 'boundary,'" says Fort. There's nothing that frustrates people more than strict regulations and limits. The team realized that to make their solution appeal to people, they needed to create "soft boundaries of encouragement" for water conservation: Using IBM's data, Smart could design feedback loops on each layer of the city's ecosystem that would create boundaries for individuals, communities, and cities.

 

Using the chart as their guide, the designers began framing a three-level approach that would provide tangible and relevant feedback loops in different layers: individual (organism), communal (species), and societal (species to species).

For the individual layer, Smart wanted a non-intrusive, yet tangible way to show residents how much water they were using. They created the Heartbeat Faucet, which provides feedback by pulsing after dispensing every 12-oz. cup of water—approximately 20-30 times a minute for the typical faucet. This metered pulse would allow users to see and feel how much water they are using each time they turn on the faucet, informing everyone in the household about their behavior. The faucet would also give IBM point-of-use metering for real-time analytics.
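
A quick arithmetic check of that pulse rate, as a sketch; the faucet flow rates below are illustrative assumptions, not figures from the article:

# Sanity check of the Heartbeat Faucet pulse rate quoted above.
# Assumption: a household faucet flows at roughly 1.9-2.8 gallons per
# minute (the US federal maximum is 2.2 gpm); these rates are
# illustrative, not from the original article.

OZ_PER_GALLON = 128
CUP_OZ = 12  # the faucet pulses once per 12-oz. cup dispensed

for flow_gpm in (1.9, 2.2, 2.8):
    pulses_per_minute = flow_gpm * OZ_PER_GALLON / CUP_OZ
    print(f"{flow_gpm} gal/min -> {pulses_per_minute:.0f} pulses/min")

# Prints roughly 20, 23, and 30 pulses per minute, matching the
# article's "approximately 20-30 times a minute."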

For the communal layer, Smart focused on the concept of "leveraging interdependence," or creating cooperative behavior. For this solution they looked at the idea of a Communal Reservoir, which would encourage groups to work together. In an urban environment, Smart decided, a reservoir would translate to the water used by an entire apartment building: Water usage is typically metered for a whole building, but each resident often has no idea how much or how little they're contributing to the bill. This program would track your building's water usage during a fixed time period and give a community reward—from municipal tax credits to flowers for the lobby—if you come in below your target. The elevator could then serve as the "town square," with displays that show the building's reservoir level and allow the community to modify behavior on a daily basis.

Finally, for the societal layer, Smart needed a way to quickly convey the city's water health to its residents and reconnect cities to the natural resources their inhabitants depend on for survival. "When people used to have to pick up their water they could see the stream or the well, and know to conserve," says Booth. "We have no real connection to that." They created the concept of MicroParks, tiny greenspaces throughout the city located next to fire hydrants retrofitted with solar-powered wireless water metering systems. The MicroParks' water feed could be manipulated to reflect the city's future water supply—a lush MicroPark would communicate a healthy water supply and a withering MicroPark would let residents know that conservation is critical. These miniature green spaces can forecast city water supply health and make behavior-changing daily connections with people in a positive way.

Smart was able to use the concepts of an entire ecosystem as a metaphor for a city environment, moving from very micro examples from nature into an application like the Life's Principles that applied nature in a very macro, big-picture way. This resulted in a solution that they felt was more appropriate than the traditional biomimicry approach of inscribing specific organisms' traits upon urban infrastructure and human behaviors. "We felt we lacked the credibility to say we can take the corky tuber and turn it into a reservoir," says Fort, referring to the team's limited time and experience in biomimicry. "But it was valuable to think about things from a different perspective."

Although Dorfman didn't expect the team to make the Life's Principles the centerpiece of their solution, he thought it made an excellent bridge for designers to work between life's technologies and human behavior. "Most often, I've used Life's Principles to evaluate a potential solution that we've proposed to a client, so it was quite interesting to see the team use Life's Principles as both the inspiration for ideas as well as a tool to evaluate its sustainability," he says. "It was actually quite exciting to see the design team run with it."

The designers will now use the Life's Principles as another tool in their toolkit, much like the way Smart already uses the principles of Universal Design, a process of designing products and environments that are usable by people with a wide range of abilities. It will remind them that nature is often the best role model for elegant solutions. And for this solution, it also helped them to realize a simple truth: That some of the world's largest challenges will be overcome by changing the behavior of each individual within a larger system. "The modern city is just like an ecosystem," says Fort. "It seems so obvious, but if you just go right below the surface, there are all these inspirations and connections that are so meaningful."

Special thanks to the Biomimicry Institute and the Biomimicry Guild for partnering with the Designers Accord on this challenge.


Link: http://www.fastcompany.com

 

Logo KW

BIOS [227]

de System Administrator - jueves, 9 de enero de 2014, 21:52
 


The acronym BIOS (Basic Input/Output System) was coined by Gary Kildall in 1975.

Source: http://es.wikipedia.org/wiki/BIOS

Logo KW

Biotech Incubators Boom [455]

de System Administrator - lunes, 24 de marzo de 2014, 16:51
 

Incubator Boom

From San Francisco to St. Louis, biotech incubators are proliferating across North America. Can they deliver on their promise of fueling the economy?

Source: The Scientist - By Kerry Grens | March 1, 2014


BIOTECH SUPPORT: The Helix Center Biotech Incubator in St. Louis, launched in July 2012, is currently home to more than a dozen young biotech companies. (ST. LOUIS ECONOMIC DEVELOPMENT PARTNERSHIP)

On the second floor of a low-slung office building west of downtown Chicago, piles of boxes await unpacking as the final touches, such as the meeting room's interactive SMART Board conferencing screen, are installed. In laboratories down the hall, fume hoods have been hung, benches wiped clean, and the autoclave put in place, ready for the building's first tenants to begin their research. EnterpriseWorks Chicago is the newest of the University of Illinois's business incubators, which offer cheap rent to entrepreneurs in the early stages of commercialization. At the new facility, enterprising life scientists can rent a desk in a shared office for $100 a month, and an additional $600 secures a spot in the wet lab. All members get access to the university's library, facilities, and expertise.

EnterpriseWorks Chicago is one of a number of biotech incubators popping up across the country. According to the National Business Incubation Association (NBIA), the number of business incubators has grown by two orders of magnitude in the past three decades, with 1,250 US incubators supporting up-and-coming businesses in 2012, compared with only a dozen such facilities in 1980. More than a third of US incubators cater to technology firms, according to NBIA, and life-science incubators are popping up not just in established biotech clusters, such as Boston and San Diego, but also in emerging communities like Albuquerque and New Orleans. (See map below, and also “Biotech on the Bayou,” The Scientist, October 2010.)

Mark Long, the president of biotech consultancy Long Performance Advisors, characterizes the current incubator trend as less of a “boom or bubble” and more of a “gold rush.” “I think a lot of people [in economic development] see biotech as the pot of gold at the end of the rainbow, and they see it as their salvation,” he says.

The Helix Center Biotech Incubator in St. Louis, for instance, launched in July 2012 in part to help local industry scientists start their own companies following layoffs at Pfizer and Monsanto, says Beth Noonan, the vice president of innovation and entrepreneurship at the St. Louis Economic Development Partnership. In addition, the incubator aims to capitalize on the intellectual wealth of the city’s heavy-hitting academic research facilities, such as Washington University and the Danforth Plant Sciences Center. And in Toronto, the virtual incubator Blueline Bioscience launched last fall to help build companies from the ground up by giving them access to funding, lab space, and expertise. The motivation is to feed what Blueline’s president Stefan Larson describes as the growing appetite of large pharmaceutical and biotech companies for purchasing small biotech businesses. “Big pharma and even big biotech firms are facing this problem of their pipelines drying up and their internal R&D efforts not being as productive as they need them to be, so they’re shifting huge amounts of dollars into what’s called external R&D,” Larson says. “That demand has surged dramatically in the last five years.”

Whether such incubators will deliver that pot of gold to local economies, however, remains a matter of debate. While many in politics, economic development, and business are convinced that investments in incubators offer handsome returns, there is scant evidence that incubated companies have a greater impact on the economy than those that go it alone. Nevertheless, incubators offer entrepreneurs a wealth of services, a network to plug into, and cutting-edge lab space that isn’t always easy to come by, and incubated biotech companies that survive to make a profit regularly cite this support as key to their success. Moreover, investments from both private funders and all levels of government continue to flow in, fueling a widespread incubator boom.

Return on investment

On a tour of the new EnterpriseWorks Chicago facility this January, I compliment executive director Kapila Viges on her choice of wall colors—bright lime green, warm buttercream, and a shiny slate gray. The latter, covering a wall in the shared office space and in the kitchen, actually works as a dry-erase board, Viges points out. “Everywhere you go you can think, innovate, and collaborate,” she says.

By offering month-to-month leases and easy access to investors, the incubator targets Chicago’s business embryos—ideas in need of testing before their creators are able to commit to setting up shop. EnterpriseWorks, like other life-science incubators, is more than just desks and bench space; it also provides entrepreneurs with access to services and mentorship. Similarly, at the Science Center in Philadelphia, one of the oldest life-science incubators, entrepreneurs can access a cadre of vetted professionals—a corporate attorney, say, or an advisor on regulatory approval. “An incubator is plugging you into that existing network and that community,” says Chris Laing, the vice president of science and technology at the Science Center.

But do biotech incubators really boost small business success? In 1997, Lawrence Molnar, director of the Center for Business Acceleration and Incubation Studies at the University of Michigan, and his colleagues collected information on Michigan companies that had gone through an incubator. They found that several years after the companies began operating on their own, 87 percent were still in business (NBIA Publications, 1996). “That’s an extremely high survival rate,” says Molnar, also the president of the Michigan Business Incubator Association. For comparison, the Small Business Administration says that about half of all new companies die within five years.

Another oft-cited study, commissioned by the US Economic Development Administration (EDA) and published in 2008, found that business incubators are more effective at creating jobs than investment in roads, buildings, and sewer projects (Grant Thornton, LLP and ASR Analytics, LLC, 2008). “For every $10,000 investment, business incubators created between 46 and 69 jobs,” says Matt Erskine, the acting assistant secretary of commerce for economic development at the EDA, “which is a pretty significant return.”

These studies are unlikely to reveal the whole story, however. In perhaps the most comprehensive analysis of incubators to date, Alejandro Amezcua of the Whitman School of Management at Syracuse University uncovered a much more nuanced reality. By comparing thousands of companies at US incubators with similar, nonincubated businesses, Amezcua found evidence that incubated companies tended to fail sooner than those that operated outside of an incubator. But Amezcua doesn’t necessarily think that the study, which he completed as part of his PhD research at Syracuse University, points to a negative impact of incubators on start-up survival. Rather, he suggests, his research may support the notion that an incubation period can help researchers recognize poor ideas faster, actually saving money in the long run. “An incubator could be helping an entrepreneur cut her losses sooner rather than later if her idea is not going to make it to the market,” he says.

Long says that culling is expected. “Part of the function of the incubator is to weed out the wheat from the chaff early on, which saves a lot of money.”

Amezcua also uncovered a number of other interesting and overlooked factors affecting the success of incubated businesses. For instance, specialized incubators in a crowded environment, say, biotech in San Diego, don’t appear to help companies as much as incubators that establish a new niche in a region. And characteristics of the entrepreneurs themselves also matter, with female entrepreneurs reaping more benefits from incubators than male executives, for example. The bottom line, Amezcua says, is “we really don’t know how well any of these things do.”


DIVERSE ASSISTANCE: The Science Center in Philadelphia is one of the oldest life-science incubators in the U.S., offering entrepreneurs access to a variety of vetted professionals, from corporate attorneys to regulatory advisors.

PHOTO BY CONRAD ERB, CONRAD ERB PHOTOGRAPHY/COURTESY OF THE UNIVERSITY CITY SCIENCE CENTER.

Without randomized, controlled studies comparing the success of companies with and without time spent in an incubator, “it’s notoriously difficult to put a firm, absolutely unassailable figure on returns,” admits Erskine. “You can’t say with absolute certainty that the return is going to be x percent or result in x number of jobs.”

But the uncertainty surrounding the impact of incubators hasn’t deterred investors, most of whom are more than willing to help finance a new incubator facility, says Jasper Welch, the CEO of NBIA. Moreover, once created, life-science incubators are typically easy to fill. After only a year and a half of operation, St. Louis’s Helix Center is at about 75 percent occupancy, and EnterpriseWorks Chicago had a couple of committed clients before the doors even opened.

The hard part is finding a steady stream of money to keep the operation afloat, especially given the fact that the start-ups themselves, with very little money and a lot of instability, tend to make bad tenants. Add to these challenges the recent tendency of investors to shy away from giving money to individual young biotechs. “The funding stream for life-science technologies, particularly early-stage technologies, has dried up considerably, and the appetite for risk has also been off significantly since the recession,” says Steve Tang, the CEO of Philadelphia’s Science Center.

For this reason, incubators are usually subsidized by local, state, and/or federal government funds. According to Erskine, the EDA has invested $60 million in business incubators since 2009. The promise of job growth—which Amezcua’s study did support as a benefit of incubators—is a prime attraction for communities. “We want to be a leader in growth industries and ride the benefit of that development,” says Chicago Deputy Mayor Steve Koch, who—along with Mayor Rahm Emanuel, industry representatives, and start-up supporters—has already hatched plans for yet another biotech incubator in the city. “Our perception is, the more we can foster the start-up community, the better.”

But while it’s entirely possible that incubators are indeed the economic drivers many expect them to be, the paucity of research demonstrating the benefit of incubators has critics such as Scott Shane, an entrepreneurship researcher at Case Western Reserve University, voicing concerns about spending so much taxpayer money on the ventures. “Every dollar that’s spent on an incubator means a dollar not spent on anything else” that might grow the economy, says Shane. He points out the irony of the situation: the companies that these incubators support are bound by regulatory authorities to demonstrate the safety and effectiveness of their products with good science, yet the policymakers who fund the incubators are not held to such standards to guide their decision making. “As an academic economist it bothers me that policymakers put money into policies without some evidence that the policy works.”

And with the growth of biotech incubators in the last few years, there is the risk that they might outpace the demand. “I think you’re going to continue to see activity,” says Welch at the NBIA. “Whether or not the market can support it, either through direct market response where the companies are successful or [through] subsidies, I don’t have an answer. At some point in time you probably reach an upper limit. Have we reached it yet? I’m not sure.”


Link: http://www.the-scientist.com/?articles.view/articleNo/39245/title/Incubator-Boom/

Logo KW

Biotecnología [24]

de System Administrator - lunes, 30 de diciembre de 2013, 17:43
 

Biotechnology is technology based on biology. It is used mainly in agriculture, pharmacy, food science, environmental science, and medicine. It is developed through a multidisciplinary approach involving several disciplines and sciences, such as biology, biochemistry, genetics, virology, agronomy, engineering, physics, chemistry, medicine, and veterinary science. The Organisation for Economic Co-operation and Development (OECD) defines biotechnology as the "application of principles of science and engineering to the treatment of organic and inorganic materials by biological systems to produce goods and services."

Source: http://es.wikipedia.org/wiki/Biotecnología

Logo KW

Birriel Valdi, Susana Lucía [280]

de System Administrator - lunes, 30 de diciembre de 2013, 18:25
 

Multi-instrumentalist and teacher (Montevideo, August 11, 1942).

Logo KW

Bisfenol-A, Plastic Planet [1077]

de System Administrator - jueves, 29 de enero de 2015, 12:21
 

Bisphenol A

 

Bisphenol A, usually abbreviated BPA, is an organic compound with two phenol functional groups. It is a difunctional building block (monomer) of many important plastics and plastic additives.

Suspected of being harmful to humans since the 1930s, bisphenol A and the risks of its use in consumer products were frequently highlighted in the media after several governments issued reports questioning its safety, which led some retail chains to withdraw products containing the compound. A 2010 report by the US FDA (Food and Drug Administration) raised further awareness regarding the exposure of fetuses, infants, and young children.

Bisphenol A is used mainly to make plastics. Products containing bisphenol A have been on the market for more than fifty years. It is a key monomer in the production of epoxy resin and of the most common form of polycarbonate plastic. Polycarbonate plastic, which is transparent and nearly shatterproof, is used to make a wide variety of common products, including baby bottles and water bottles, sports equipment, medical and dental devices, dental composites and sealants, organic eyeglass lenses, CDs and DVDs, and various household appliances. It is also used in the synthesis of polysulfones and polyether ketones, as an antioxidant in some plasticizers, and as a polymerization inhibitor in PVC. Epoxy resins containing bisphenol A are used as the lining of almost all food and beverage cans, although in Japan, owing to health concerns, the epoxy lining was replaced by a polyester film. BPA is also a precursor of the flame retardant tetrabromobisphenol A, and it is used as a fungicide. In addition, BPA is a color developer in thermal paper and in carbonless copy paper. BPA-based products are used in foundry molds and as coatings for water pipes.

World production of bisphenol A in 2003 was estimated at more than 2 million tonnes. In the United States it is manufactured by Bayer MaterialScience (a division of Bayer), Dow Chemical Company, GE Plastics, Hexion Specialty Chemicals, and Sunoco Chemicals. In 2004, these companies produced more than 1 million tonnes of bisphenol A, up from just 7,260 tonnes in 1991. In 2003, annual US consumption was 856,000 tonnes, of which 72% was used for polycarbonate plastic and 21% for epoxy resins. Less than 5% of total production is used in food-contact applications.

Some type 3 plastics may release bisphenol A.

Some type 7 plastics may release bisphenol A.

Affected groups

Those mainly affected are babies, both in the embryonic stage and during breastfeeding, since that is when hormones play a fundamental role. It is at these points in life that the mother passes on to the child, whether through the umbilical cord or through breast milk, the hormones that have accumulated in her body.

Toxicity

Bisphenol A is an endocrine disruptor. It can cause imbalances in the hormonal system at very low concentrations, with possible repercussions for health. Its toxic effects stem from the consumption of food contaminated by contact with materials containing the substance, such as packaging, cans, or containers of many kinds. The wide distribution of products containing bisphenol A, especially in developed countries, results in continuous exposure of the population, affecting all ages (from fetuses to the elderly). The continuous presence of this disruptor in the body has been linked to an increased risk of various disorders.

Toxic effects

Numerous alterations caused by bisphenol A in living beings have been described, based on a deregulation of the endocrine system and of the corresponding hormone production:

Effects on the male reproductive system

Numerous studies report an alteration of spermatogenesis that leads to lower sperm counts, reduced testosterone, and, in general, reduced male fertility. Other studies suggest a change in sexual behavior.

Effects on the female reproductive system

In women, changes occur in oocyte maturation, reducing their number and quality. Some studies also link exposure to bisphenol A with negative effects on the endometrium, polycystic ovaries, miscarriages, and premature births. In animals, there is evidence of ovarian cysts, endometriosis, early puberty, and disruption of the hypothalamic-pituitary-gonadal axis.

Effects on the brain and behavior

Various animal studies have confirmed the effect of bisphenol A on neuronal differentiation, the alteration of glutamine- and dopamine-mediated systems, and changes in the expression of estrogen receptors. It has also been linked to possible changes in maternal behavior (less attention to offspring), anxiety, reduced exploratory behavior, and feminization of males. In humans, documented changes include hyperactivity, increased aggressiveness, greater susceptibility to addictive substances, and problems with both learning and memory.

Effects on metabolism and the cardiovascular system

Higher concentrations of bisphenol A have been associated with cases of various heart diseases and with hypertension. In addition, exposure to bisphenol A leads to increased blood lipids, weight gain, and increased lipogenesis. It may also contribute to the onset of type 2 diabetes mellitus by increasing insulin resistance and the number of fat cells.

Effects on the thyroid

Animal studies conclude that bisphenol A can affect thyroid function by acting as a thyroid hormone antagonist. In amphibians, this effect translates into an inhibition of metamorphosis. Thyroid function is also affected in rodents. In humans, too few studies have been carried out to draw firm conclusions.

Effects on the immune system

In experimental animals, an induction of T lymphocytes and an increase in cytokine production have been demonstrated, thereby favoring allergic processes.

Effects on the intestine

Possible inflammation and alteration of intestinal permeability in animals.

Carcinogenic effects

When bisphenol A is metabolized by hydroxylation and subsequent oxidation, it forms an ortho-quinone that can form covalent bonds with DNA and produce mutagenic and teratogenic effects. These mutagenic effects could initiate several carcinogenic processes associated with bisphenol A:

  • Prostate cancer

The estrogenic activity of the substance can lead to an increase in prostate size and a decrease in the size of the epididymis.

  • Breast cancer

Greater mutagenic and carcinogenic susceptibility has been detected in women's mammary cells, owing to estrogenic stimulation of the development and division of the mammary glands.

Controversy

It must be stressed that, despite the toxic effects described above, these findings should not be taken as absolute. The number of experimental trials carried out in humans is not significant. Although there is a large body of scientific literature on animal studies, the difficulty of extrapolating those results to humans means the effects of bisphenol A on people remain uncertain.

On the other hand, like any other endocrine disruptor, this substance can have toxic effects at very low concentrations in the body, even well below the tolerable exposure limits set by the competent authorities. Industry defends the use of the substance on the grounds that evidence from human trials is lacking, while failing to acknowledge the numerous animal studies that demonstrate its dangers and that should be taken into account.

At present, EFSA maintains that the use of bisphenol A is safe for humans. Nevertheless, in view of the many animal studies that seem to show otherwise, the agency is re-evaluating its assessment of the risk level of bisphenol A. This new evaluation is being carried out in two stages: the first considers exposure to the substance, and the second the aspects concerning human health. The results will not be known until 2014.

Government and industry response

The Spanish and Japanese governments, other internationally regulated bodies, and companies in the plastics sector such as PlasticsEurope hold that the amounts of bisphenol A found in plastics are so small as to be negligible. However, studies carried out by various research groups conclude that what matters is not the amount of bisphenol A but the length of time this compound acts on us day after day; even at minimal levels, bisphenol A can interfere with endocrine functions. (Francisco Cimadevila, "Sobre los efectos del Bisfenol A," El Mundo, 24/09/2009, letters to the editor.)

In countries like Canada, where a large share of the articles on the subject are published, the issue was made known to the public, and the government immediately banned its use and sought a solution to the problem it posed. Since no alternative baby bottles existed on the market at the time the articles were published and the talks were given, the public was advised not to heat such bottles in the microwave or use them with hot liquids, which is when the toxins that later disturb normal hormonal function are released. Today, the company Toys 'R' Us distributes baby bottles and other infant-feeding items internationally in its stores.

On June 1, 2011, the European Union banned the sale of plastic baby bottles containing bisphenol A because of its possible harmful effects on health. Europe thus joined the list of countries that had already legislated on the matter, such as Canada, the first to classify BPA as a toxic substance, and some US states and cities. In November 2012, Peru likewise banned the sale of products containing bisphenol A under any circumstances, and in March 2012 Argentina prohibited the manufacture, import, and sale of baby bottles containing bisphenol A because the substance can have toxic effects on infants.

Link: http://es.wikipedia.org/wiki/Bisfenol_A

Logo KW

BitCoin [480]

de System Administrator - jueves, 3 de julio de 2014, 17:35
 

BITCOIN: GOING FROM DECEPTIVE TO DISRUPTIVE

 

Written By: Peter Diamandis

Bitcoin is moving from its Deceptive phase to a very Disruptive phase. This post is going to explain why, and what you may want to do.

I’ve been tracking Bitcoin since its inception, and my confidence has grown to the point where I’m now trading in a portion of my gold holdings for bitcoin, buying it and accepting bitcoin for the Abundance 360 CEO Summit.

What exactly is bitcoin?

For starters, bitcoin is a digital currency. As of right now, one bitcoin is equivalent to about $600 USD. Bitcoin is divisible down to 8 decimal places, or 0.00000001 BTC. You can buy things with bitcoin, sell things for bitcoin, and exchange bitcoin for other currencies (and vice versa). You can also "mine" it, but we'll get into that later.
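
As a quick illustration of that divisibility, here is a small hypothetical helper that converts a dollar amount into satoshis, the 0.00000001 BTC unit (the $600 rate is simply the figure quoted above and is long out of date):

# Hypothetical sketch: bitcoin amounts are usually handled as integer
# satoshis to avoid floating-point rounding of tiny fractions of a coin.

SATOSHIS_PER_BTC = 10**8  # 1 BTC = 100,000,000 satoshis (8 decimal places)

def usd_to_satoshis(usd: float, usd_per_btc: float = 600.0) -> int:
    """Convert a dollar amount to whole satoshis at a given exchange rate."""
    return round(usd / usd_per_btc * SATOSHIS_PER_BTC)

print(usd_to_satoshis(1.0))    # ~166,667 satoshis per dollar at $600/BTC
print(usd_to_satoshis(600.0))  # 100,000,000 satoshis, i.e. exactly 1 BTC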

At its core, bitcoin is a smart currency, designed by very forward-thinking engineers. It eliminates the need for banks, gets rid of credit card fees, currency exchange fees, money transfer fees, and reduces the need for lawyers in transactions… all good things.

Most importantly, it is an “exponential currency” that will change the way we think about money. Much the same way email changed the way we thought of mail. (Can you remember life before email?)

If you’ve followed my work, or participated in my Abundance 360 Summit, you understand that I teach and track exponential technologies using my “6 D’s” approach, looking for “user interface moments.”

Bitcoin is following the 6Ds and is on a path to go from deceptive to disruptive over the next 1 – 3 years. Allow me to explain.

Why Bitcoin is following the 6 D’s

1. DIGITIZED: Bitcoin is digitized money — it is a global, purely digital currency. Every bitcoin is traded, earned, sold, exchanged and bought in cyberspace. For this reason, it is living on Moore’s law and hopping on the exponential curve.

2. DECEPTIVE: Bitcoin software was released to the public in 2009 and for the first few years has been growing in its deceptive phase. Few heard about it, few used it and accepted it. In addition, the currency has been hard to use; therefore, it hasn’t had its “User Interface Moment” (the key transition from deceptive to disruptive). More soon.

3. DISRUPTIVE: As described below by my friend Barry Silbert (founder of Second Market), Bitcoin is about to enter its disruptive phase where its rate of acceptance and use will explode, as will its value. See below.

4. DEMATERIALIZING: Bitcoin is eliminating or dematerializing the use of physical money (bills and coins), even credit cards. But more than that, it is also dematerializing (read: eliminating) the need for central banks, lawyers and currency exchanges.

5. DEMONETIZING: Bitcoin eliminates middlemen (banks, lawyers, exchanges) and demonetizes the cost of transactions. No fees. It makes it cheaper to use, spread and share money.

6. DEMOCRATIZING: Bitcoin makes access to capital available to everyone, where there are no banks, no ATMs and no credit card suppliers. Ultimately, as we move (over the next 6 years) to a world of 7 billion digitally connected humans, Bitcoin makes currency available to anyone with a connection to the internet.

Bitcoin’s Evolution – Why it will be Disruptive Soon

My friend Barry Silbert (founder of Second Market) recently spoke as my guest at Singularity University's Exponential Finance conference about Bitcoin. He provided an excellent overview of its near-term trajectory, summarized below. His input has also put me on the lookout for the "User Interface Moment" – that moment in time when an entrepreneur designs a piece of interface software (think Marc Andreessen and Mosaic) that makes it easy to use bitcoin.

I’ll be reporting on those user interfaces, investing in those startups and helping to promote them.

Okay, now back to Barry Silbert’s insights. Barry outlined five phases for this digital currency that help explain where it’s been and where it’s going.

Phase 1: The period 2009 to 2011 was the early ‘experimentation phase’ for bitcoin (i.e. deceptive). Here the software is released to public and most technologists and hackers started playing with the code. During this phase, there was no apparent value to currency yet; mining bitcoin was easy and could be done by a single person on a MacBook or PC.

Phase 2: 2011 marked the beginning of the ‘early adopter’ phase (still deceptive). There was a lot of early hype and press around Silk Road (where you could buy drugs). The value went from less than $1 to over $30, then crashed. This spurs the first generation of bitcoin companies to build basic infrastructure: wallets, merchant processors, mining operations, exchanges, etc. – i.e. the early user interfaces.

Phase 3: 2012 thru mid-2014 marked the beginning of the ‘Venture Capital Phase.’ Folks like Marc Andreessen, Google Ventures, Benchmark and others have begun investing in Generation 2 Bitcoin companies. We are right in the middle of Phase 3 right now. Thousands of bitcoin companies are getting funding. Many of these are trying to create the “User-Interface Moment.”

Phase 4: Fall 2014 thru 2015 will likely see the start of the Wall Street Phase. Here we will begin to see institutional money acknowledging digital currencies as an asset class, and they will begin trading it, investing it and creating products around it. This marks the start of the disruptive phase.

Phase 5: Finally will come the ‘Mass Global Consumer Adoption’ phase — this is where bitcoin becomes a major player in the global economy. When consumers feel it is easy, safe and secure to use bitcoin. It won’t be possible until after the “User Interface Moment” materializes, but I believe, as does Barry, that this is only 1-2 years out.

So now what?

Learn, do, teach… Go experiment! Create a bitcoin wallet and buy some bitcoin. There is no better way to learn than by doing.

First, there are a few bitcoin exchanges where you can “buy” bitcoins with dollars (or other currencies). The most popular exchanges are:

For those of you in my Abundance 360 Community, we will be discussing bitcoin in more detail. We will talk about how they work, how you start investing, how you mine, how you get involved, how to create a wallet, and how to begin acquiring bitcoin.

If you aren’t a member yet, join us here: http://www.a360.com

Every weekend I send out a "Tech Blog" like this one. If you want to sign up, go to www.AbundanceHub.com and sign up for this and my Abundance blogs. Please forward this to your best clients, colleagues and friends — especially if they don't trust bitcoin yet.

[Credit: golden bitcoin courtesy of Shutterstock]

Link: http://singularityhub.com/2014/07/03/bitcoin-going-from-deceptive-to-disruptive/

Logo KW

Bizcocho [105]

de System Administrator - lunes, 3 de febrero de 2014, 17:57
 


The word bizcocho comes from the Latin bis coctus, meaning "cooked twice." The Uruguayan bizcocho is often linked to German pastry, more specifically to the krapfen. Bizcochos are made from different types of dough, prepared mainly with flour, yeast, salt, sugar, and fat. They come in different shapes, may be sweet or savory, and are optionally filled.

Source: http://es.wikipedia.org/wiki/Bizcochos

Logo KW

Blockchain [1784]

de System Administrator - viernes, 13 de octubre de 2017, 13:02
 

Understanding the blockchain in two minutes
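
The video embedded in the original entry is not reproduced here. As a rough two-minute illustration of the core idea, here is a minimal Python sketch (standard library only; the transaction strings are invented): each block commits to the hash of its predecessor, so tampering with any block invalidates every link after it.

import hashlib
import json
from typing import Optional

def block_hash(block: dict) -> str:
    """Hash a block's contents (sorted keys make the hash deterministic)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(data: str, prev: Optional[dict]) -> dict:
    """Each new block stores the hash of the previous block."""
    return {"data": data, "prev_hash": block_hash(prev) if prev else "0" * 64}

genesis = new_block("genesis", None)
b1 = new_block("Alice pays Bob 1 BTC", genesis)
b2 = new_block("Bob pays Carol 0.5 BTC", b1)

# Verification: recompute the chain of hashes. Editing `genesis` or `b1`
# changes their hashes and breaks every link that follows.
assert b1["prev_hash"] == block_hash(genesis)
assert b2["prev_hash"] == block_hash(b1)
print("chain intact")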

Logo KW

BMW and IBM: Cognitive Research for Cars of the Future [1744]

de System Administrator - domingo, 2 de abril de 2017, 23:24
 

IBM creates a smart wingman for self-driving cars

IBM technology creates smart wingman for self-driving cars

by Michael Cooney

IBM patents bring machine learning tech to make automated cars safer

IBM said that it has patented a machine learning technology that defines how to shift control of an autonomous vehicle between a human driver and a vehicle control processor in the event of a potential emergency.

Basically the patented IBM system employs onboard sensors and artificial intelligence to determine potential safety concerns and control whether self-driving vehicles are operated autonomously or by surrendering control to a human driver.

The idea is that if a self-driving vehicle experiences an operational glitch like a faulty braking system, a burned-out headlight, poor visibility, or bad road conditions, it could decide whether the on-board self-driving vehicle control processor or a human driver is in a better position to handle that anomaly. "If the comparison determines that the vehicle control processor is better able to handle the anomaly, the vehicle is placed in autonomous mode," IBM stated.
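
The article does not disclose how IBM's comparison is computed; as a purely hypothetical sketch, the handover decision might look something like the following, with invented anomaly types, confidence scores, and threshold logic.

# Hypothetical sketch of the control-handover comparison described above.
# The anomaly names and scores are invented for illustration; the patent's
# actual sensing and scoring are not described in the article.

# Estimated ability (0.0-1.0) of each party to handle a given anomaly.
PROCESSOR_CONFIDENCE = {"faulty_brakes": 0.9, "poor_visibility": 0.4}
HUMAN_CONFIDENCE = {"faulty_brakes": 0.5, "poor_visibility": 0.7}

def choose_controller(anomaly: str) -> str:
    """Return which party should control the vehicle for this anomaly."""
    processor = PROCESSOR_CONFIDENCE.get(anomaly, 0.5)
    human = HUMAN_CONFIDENCE.get(anomaly, 0.5)
    return "autonomous mode" if processor >= human else "human driver"

print(choose_controller("faulty_brakes"))    # autonomous mode
print(choose_controller("poor_visibility"))  # human driver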

“The technology would be a smart wingman for both the human and the self-driving vehicle,” said James Kozloski, manager, Computational Neuroscience and Multiscale Brain Modeling, IBM Research and co-inventor on the patent.

Kozloski also noted another recently issued IBM patent that will help automated cars and human-operated vehicles more safely interact in the future.

In a nutshell, this technology uses what's called automatic driver modeling to watch the behavior of a person driving a car, such as turn-signal use, speed, and other behaviors, in order to make assumptions about that car and pass those observations on to other autonomous cars in the vicinity.

A model of the driver is generated based on the behavior patterns of the driver measured by the one or more sensors of the autonomous vehicle. Previously stored behavior patterns of the driver are then retrieved from a database to augment the model of the driver. The model of the driver is then transmitted from the autonomous vehicle to nearby vehicles with autonomous interfaces, Kozloski said.

“The whole idea with both patents is to enable a safer environment for humans,” Kozloski said.

IBM says it has patented numerous inventions that, among other things, can help vehicles become:

  • Self-learning – powered by cognitive capability that continuously learns and gives advice based on behavior of the driver, passengers, and other vehicles
  • Self-socializing – connecting with other vehicles and the world around them
  • Self-driving – moving from limited automation to becoming fully autonomous
  • Self-configuring – adapting to a driver's personal preferences
  • Self-integrating – integrating into the IoT, connecting traffic, weather, and mobility events with changing location

Indeed, IBM is deeply involved in the self-driving car arena. In December, the company said it would collaborate with BMW and its own cognitive computer whiz Watson to develop self-driving cars that can adapt to driver preferences.

The IBM study, “Automotive 2025: Industry without borders,” amassed interviews with 175 executives from automotive OEMs, suppliers, and other leaders in 21 countries and found that by 2025 cars will be able to learn, heal, drive and socialize with other vehicles and their surrounding environment.

Some of the study’s more interesting observations included:

  • By 2025, the vehicle will be sophisticated enough to configure itself to a driver and other occupants.
  • Fifty-seven percent believe vehicle “social networks” would be in place where vehicles would communicate with each other, allowing vehicles to share not only traffic or weather conditions, but information specific to a given automaker. For instance, if a vehicle was experiencing some type of problem not recognized before, it could communicate with other vehicles of the same brand to seek help on what the issue might be.
  • Analytics capabilities will help vehicles identify and locate issues, schedule fixes and even help other vehicles with similar problems with minimal impact to the driver.
  • Like other smart devices, the vehicle will be an integrated component in the Internet of Things (IoT). It will collect and use information from others concerning traffic, mobility, weather and other events associated with moving around: details about driving conditions, as well as sensor-based and location-based information for ancillary industries, such as insurance and retail.
  • Seventy-four percent of respondents said that vehicles will have cognitive capabilities to learn the behaviors of the driver and occupants, the vehicle itself and the surrounding environment to continually optimize and advise. As the vehicle learns more about the driver and occupants, it will be able to expand its advice to other mobility services options. The report also underscores considerable skepticism about fully autonomous vehicles—where no driver is required and the vehicle is integrated into normal driving conditions. A mere 8% of executives see it becoming commonplace by 2025. Moreover, only 19% believe that a fully automated environment—meaning the driving system handles all situations without monitoring, and the driver can perform non-driving tasks—will be routine by 2025.
  • Eighty-seven percent of the participants felt partially automated driving, such as an expansion of today’s self-parking or lane change assist technologies would be commonplace. Moreover, 55% said highly automated driving, where the system recognizes its limitations and calls driver to take control, if needed, allowing the driver to perform some non-driving tasks in the meantime, would also be adapted by 2025.


Logo KW

Bobina de Tesla [81]

de System Administrator - lunes, 3 de febrero de 2014, 18:12
 

A Tesla coil is a type of resonant transformer, named after its inventor, Nikola Tesla, who patented it in 1891 at the age of 35. Tesla coils consist of a series of coupled resonant electric circuits. Nikola Tesla experimented with a wide variety of coils and configurations, so it is difficult to describe one specific construction that will satisfy everyone who talks about Tesla coils. The early coils and their later versions vary in configuration and mounting. Tesla coils generally create electrical discharges that reach distances on the order of meters, which makes them spectacular.
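
For reference, in standard circuit theory (a note added here, not part of the original entry), each of the coupled circuits in a Tesla coil is an LC resonator, and the primary and secondary are tuned to the same resonant frequency:

    f_0 = \frac{1}{2\pi\sqrt{LC}}

where L is the circuit's inductance and C its capacitance; energy transfers most efficiently between primary and secondary when both sides share the same f_0.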

Source: http://es.wikipedia.org/wiki/Bobina_de_tesla

Logo KW

Bolsa de Valores [71]

de System Administrator - lunes, 3 de febrero de 2014, 18:15
 

A stock exchange is a private organization that provides its members with the facilities they need, acting on their clients' instructions, to place orders and negotiate the purchase and sale of securities, such as shares of public companies, public and private bonds, certificates, participation titles, and a wide variety of investment instruments.

Source: http://es.wikipedia.org/wiki/Bolsa_de_valores

Logo KW

Borland Software Corporation [151]

de System Administrator - martes, 7 de enero de 2014, 22:52
 

Borland Software Corporation (formerly Borland International, Inc.) is a software company headquartered in Austin, Texas, United States. It was founded in 1983 by Niels Jensen, Ole Henriksen, Mogens Glad, and Philippe Kahn.

Source: http://es.wikipedia.org/wiki/Borland

Logo KW

Bosón de Higgs [324]

de System Administrator - viernes, 3 de enero de 2014, 18:58
 

Peter Higgs (b. 1929)

The Higgs boson or Higgs particle is an elementary particle proposed in the Standard Model of particle physics. It is named after Peter Higgs who, together with others, proposed in 1964 the mechanism now known as the Higgs mechanism to explain the origin of the mass of elementary particles. The Higgs boson is the quantum of the Higgs field (the smallest possible excitation of that field). According to the proposed model, it has no spin, electric charge, or color charge; it is very unstable and decays quickly, with a mean lifetime on the order of a zeptosecond. In some variants of the Standard Model there may be several Higgs bosons.

The existence of the Higgs boson and of the associated Higgs field would be the simplest of several mechanisms within the Standard Model of particle physics that attempt to explain why elementary particles have mass. This theory suggests that a field permeates all of space, and that elementary particles that interact with it acquire mass, while those that do not interact with it have none. In particular, the mechanism accounts for the enormous mass of the W and Z vector bosons, as well as for the masslessness of photons. The W and Z particles, like the photon, are bosons without intrinsic mass; the former exhibit an enormous mass because they interact strongly with the Higgs field, while the photon exhibits no mass because it does not interact with the Higgs field at all.
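
For reference, in standard textbook notation (a summary added here, not part of the original entry), the Higgs potential of the Standard Model and the resulting vacuum expectation value are:

    V(\phi) = \mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2, \qquad \mu^2 < 0,\ \lambda > 0

    v = \sqrt{-\mu^2/\lambda} \approx 246\ \mathrm{GeV}

Because \mu^2 < 0, the minimum of V lies away from \phi = 0, so the field settles at a nonzero value v throughout space; a particle coupling to the field with strength g acquires a mass proportional to g\,v, while the photon, which does not couple, remains massless.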

The Higgs boson has long been the object of an extensive search in particle physics.

On July 4, 2012, CERN announced the observation of a new particle "consistent with the Higgs boson," but more time and data would be needed to confirm it. On March 14, 2013, CERN, with twice as much data as had been available at the time of the discovery announcement in July 2012, found that the new particle looked more and more like the Higgs boson. The way it interacts with other particles and its quantum properties, together with its measured interactions with other particles, strongly indicate that it is a Higgs boson. The question remains whether it is the Higgs boson of the Standard Model, or perhaps the lightest of several bosons predicted by theories that go beyond the Standard Model.

On October 8, 2013, the Nobel Prize in Physics was awarded to Peter Higgs, together with François Englert, "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider."

Source: http://es.wikipedia.org/wiki/Bosón_de_Higgs

Logo KW

BPEL (Business Process Execution Language) [957]

de System Administrator - miércoles, 22 de octubre de 2014, 13:00
 

BPEL (Business Process Execution Language)

Posted by Margaret Rouse

BPEL (Business Process Execution Language) is an XML-based language that enables task-sharing in a distributed computing or grid computing environment, allowing web services in a service-oriented architecture (SOA) to interconnect and share data.

Programmers use BPEL to define how a business process that involves web services will be executed. BPEL messages are typically used to invoke remote services, orchestrate process execution and manage events and exceptions.
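
As a hypothetical sketch of what such a process looks like on the wire, the following Python snippet assembles the skeleton of a BPEL 2.0 process document using only the standard library. The receive/invoke/reply activities and the namespace are real BPEL 2.0 constructs; the process name, partner links, and operations are invented for illustration.

# Hypothetical sketch: build a minimal BPEL 2.0 process skeleton.
import xml.etree.ElementTree as ET

BPEL_NS = "http://docs.oasis-open.org/wsbpel/2.0/process/executable"
ET.register_namespace("", BPEL_NS)

def tag(name: str) -> str:
    """Qualify an element name with the BPEL executable-process namespace."""
    return f"{{{BPEL_NS}}}{name}"

process = ET.Element(tag("process"), name="OrderProcess")
seq = ET.SubElement(process, tag("sequence"))

# Wait for an incoming order, invoke a (made-up) shipping service,
# then reply to the original caller -- the invoke/orchestrate pattern
# described above.
ET.SubElement(seq, tag("receive"), partnerLink="client",
              operation="placeOrder", createInstance="yes")
ET.SubElement(seq, tag("invoke"), partnerLink="shipping", operation="ship")
ET.SubElement(seq, tag("reply"), partnerLink="client", operation="placeOrder")

print(ET.tostring(process, encoding="unicode"))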

BPEL is often associated with Business Process Model and Notation (BPMN), a standard for representing business processes graphically. In many organizations, analysts use BPMN to visualize business processes, and developers transform the visualizations to BPEL for execution.

BPEL was standardized by OASIS in 2004 after collaborative efforts to create the language by Microsoft, IBM and other companies.


Link: http://searchsoa.techtarget.com

Essential Guide

 

SOA BPM guide: Mobile, cloud drive BPM, BPEL changes

This guide provides advice for using SOA BPM and BPEL, plus reports noteworthy trends about the technology

Introduction

Innovative leaders know the important role technology plays in everyday business operations. Many managers immersed in IT are turning to business process management (BPM) as a means to move their organization forward. SOA BPM can be a powerful tool in keeping operations in sync with client wants and needs. More so than traditional management styles, BPM focuses on reducing human mistakes, promoting efficiency and integration with technology.

1. Things to know

Expert advice: Working with SOA BPM, BPEL

Technology leaders are using BPM as an efficient means to keep up with the ever-changing needs of their organization. Before you jump in to using BPM and/or BPEL, take a look at some advice to ensure your implementation goes smoothly.

News: How to avoid BPM project landmines

  • At the 2013 Gartner Conference, the keynote offered tips on avoiding politics in a BPM program.

Tip: How to test BPEL in SOA

  • Best practices for testing BPEL include using a testing suite and integrating BPEL fully into ALM.

Tip: How to develop BPEL composite apps

  • Business Process Execution Language can be used to frame a complete composite application strategy if it's used properly.

Tip: How to solve common BPEL problems

  • Business process execution language-related issues, whether technical or process-oriented, can be resolved with custom-code processes.

Tip: How to think of business design and transformation

  • Forrester Research Senior Analyst Clay Richardson describes three key steps for incorporating user experience into business process design.

 

2. BPM and BPEL developments

Top trends to watch for in BPM and BPEL

Technology is always evolving and the state of SOA BPM and BPEL isn't an exception. Read on to find out the latest industry trends.

  • Business process execution language hasn't changed much over the past few years, but it has received more attention from vendors.

Feature: The case for cloud-based DCM

  • Leaders from Eccentex discuss how knowledge workers and the BYOD trend may drive cloud-based dynamic case management.
  • It's important not to let solid governance practices be limited to SOA, because deficiencies in other areas impact efforts.

3. Mobile BPM

Look around in any public area and you're likely to see someone using a mobile device. So, it should come as no surprise that mobility has become an important development in BPM. Check out the videos in this section to learn how mobile is shaping the BPM landscape.

  • Red Hat's Rich Naszcyniec and JBoss' Jason Andersen delve into integration and BPM middleware platform trends.
  • In this videocast, Forrester Research Senior Analyst Clay Richardson describes why and how to mobilize critical business processes. Part 1 of 2. 
  • Richardson offers expert advice on developing a strategy for mobile BPM. Part 2 of 2. 

 4. Definitions

Must-know: Key BPM and BPEL terms

This glossary contains common terms related to SOA BPM and BPEL that any industry professional should be well-versed in.

5. Quiz

Test your Mobile BPM IQ

How much of this BPM and BPEL guide did you absorb? Find out if you still need to brush up with this six-question quiz.


Link: http://searchsoa.techtarget.com

Logo KW

Branded Content [1037]

de System Administrator - lunes, 5 de enero de 2015, 14:29
 

What is Branded Content, and how do you stop confusing it with Product Placement?

Source: Bloggin Zenith

The latest figures on advertising investment in our country show a marked decline from previous years. The way we consume media is not what it was twenty years ago (nor are the media themselves), something advertisers know better than anyone.

That is why their strategies increasingly aim at not losing an audience that is now resistant to traditional promotion. The key is advertising that is less intrusive, more personalized, and that brings some added value, so that the promotion itself is only a detail. This gives rise to two forms that can be found almost everywhere: Product Placement and Branded Content. So how do they differ?

There is always a beginning

We all remember Popeye, the cartoon character who loved eating spinach. What not everyone knows is that this animation was commissioned in 1929 by the US spinach producers' association to encourage consumption of the vegetable among the children of that generation. It was created by cartoonist Elzie Crisler Segar, who drew the first comic strip of the muscular sailor, published in The New York Evening Journal.

The cartoon's success carried over from comics to radio, film, and a television series produced by Hanna-Barbera. Even though more than 80 years have passed, this example remains valid today, showing that branded content is nothing new. But why isn't it Product Placement?

Product Placement vs. Branded Content

Product placement is a technique that has been used very effectively since the 1980s; it consists of explicitly including whatever we want to promote. It can appear in many forms, from simple set dressing, such as a carton of milk in a film, to the actors in a series explaining the qualities and advantages of the advertised object. Its origin, as one can deduce, lay in the declining effectiveness of traditional commercials, weakened by advertising saturation, channel zapping, and more recently by the audience segmentation of digital television.

This kind of advertising is subtler and more effective, and it tends to stay in viewers' memories for a long time. The most famous case is owed to Steven Spielberg, who in 1982 placed Reese's Pieces, chocolate candies made by The Hershey Company, in the film E.T., increasing that company's sales by 80%.

Branded Content, on the other hand, is a more advanced technique, and possibly the only way we will be able, in the future, to monetize entertainment products for the internet while also winning the approval (and therefore the money) of brands. Here we are no longer talking about inserting advertising into an audiovisual format, but about a format intended, from the moment of its conception, to advertise that brand.

In the words of Clara Ávila, social media consultant, branded content "is a content strategy in which the brand is just one more part of the experience. It seeks to entertain consumers, not to sell them a product. The message you send matters more than the advantages you have; we blur the line between what constitutes advertising and what constitutes entertainment. From there you gain engagement with your users (because they like you) and loyalty (because they follow you, because they associate your brand with this content)."

For this reason, Ávila insists that branded content "has to be innovative and has to surprise. That is, your message has to be an experience, something users enjoy. Traditional formats lose ground, new formats gain it: applications, video, and so on."

If your brand has no story to tell, then it is not interesting. You will contribute nothing and will not interest users. A good way to help consolidate the image a company wants to build through its advertising campaigns can be a web series in which the advertised product is the container or the content of the action, as we will see below. That said, in the words of El crítico de la tele, "the best branded content will be the kind you don't notice."

Some examples of Branded Content

Volvo's Branded Content campaign The Route shows that brands no longer sell products: they sell experiences. In this particular case, the production, starring Robert Downey Jr. and directed by Stephen Frears, showcases Volvo's vehicles by tying them to the experience of driving.

Many will think that this is, in a way, a form of prostituting what is in principle a work of art, since the creative decisions end up in the hands of people who care only about money. But, as El crítico de la tele points out, "don't forget one thing: the audience is not going to get any dumber. If the storytelling isn't good, they won't swallow it. Hence the wise decision by advertising agencies to keep working with the same screenwriters, directors and others who until now were working in film and television."

Nor is it only audiovisual spaces that get recreated. For the most recent Valentine's Day, Kit Kat launched a branded content app that tested couples' compatibility; the best-matched couple would win a trip to the city of lights. A campaign that clearly shows that apps are one more way for brands to bring their content to users, and not only through YouTube videos.

Unforgettable, without a doubt, and a great Spanish example of Branded Content was the "Arriba ese Ánimo" gala, created and staged by Zenith and Newcast, for which our agency received a Gold for Media Effectiveness at the 2012 Premios Eficacia for the media strategy developed for Campofrío's "Cómicos" campaign, along with the award for Creative Use of the Press Medium.

Now, if there is one brand that handles Branded Content like no other, it is Red Bull, which, thanks to Felix Baumgartner's jump from the stratosphere, live-streamed on YouTube, tops the 2012 ranking of the 100 most powerful brands in branded content. The energy drink beat technology giants such as Google (2nd place), Samsung (5th), Apple (11th) and Microsoft (37th). Red Bull's success story is one we will examine in the next post in this series.

Three keys to telling Product Placement and Branded Content apart

  • Focus: in Product Placement the product's presence is merely passive, whereas in Branded Content the product can (and should) be the main axis of the plot.
  • Control: in Branded Content it is the advertiser who controls the content, deciding the message the brand will convey. The brand sets the narrative course, a golden opportunity to communicate its differentiating values. In Product Placement, the product is always inserted subject to the needs of the script.
  • Credibility: at times, the obviousness of Product Placement makes viewers smile. The credibility of Branded Content lies precisely in its naturalness: it is pull content whose creativity and value make it non-intrusive.

Link: http://blogginzenith.zenithmedia.es

"El Branded Content es la nueva publicidad"

by Sara M. Calahorrano

The latest figures on advertising investment point to the need for a new advertising formula, such as Branded Content, that does not intrude on audiences and that is, moreover, a sustainable business model for agencies and an effective communication method for brands.

El Branded Content "es una forma de publicidad que no interrumpe al espectador mientras está consumiendo otro contenido", explica Eduardo Prádanos, Content Manager Havas Sports & Entertainment en Havas Media y director de cursos de Transmedia, Social TV y Branded Content. Su principal diferencia con la publicidad tradicional, añade, "es que son contenidos de entretenimiento o de interés informativo con relevancia para el público".

Javier Regueira, partner at Pop Up Música and author of the blog 'No Content no Brand', defines Branded Content as "a new way of understanding advertising communication." "If the traditional foundation of the commercials we all know was repetition and interruption in ad breaks, this new approach rests on exactly the opposite: encapsulating the brand message in an entertainment format the consumer actually wants to receive," he explains.

In short, Prádanos states, Branded Content "is communication financed by advertisers and of interest to consumers which, moreover, reflects a link between what the brand wants to show and what its target wants to consume." It is "the new advertising," Regueira concludes, "and it has a single goal: to offer consumers advertising they actually want to watch, so that it works on the cognitive level, the affective level and the level of purchase intent."

However, Prádanos notes, one should bear in mind that "when we talk about Branded Content we are not talking about campaigns, but about something far more cross-cutting, with benefits over the medium term."

And how do Branded Content and transmedia fit together? "They fit together naturally," says Martín Milone, creative director of El Cañonazo Transmedia. "Transmedia as a tool for a Branded Content campaign brings advantages, because it allows different levels of narration for the story and different levels of involvement for the consumer," he explains. The important thing, he adds, "is not to saturate, not to use every available channel, but the right ones for reaching your consumers." In short, Regueira notes, "the transmedia arena makes it possible to build the narrative in a non-intrusive way, with the user taking part."

In the entertainment industry, Branded Content "competes for the audience's attention with all other content, so the quality of the content must come first," explains Regueira. To achieve that, he adds, "agencies must devote more resources to production and fewer to media buying."

Milone, for his part, notes that Branded Content entertainment "allows the content enormous breadth and imaginative reach, as well as prompting reflection and sparking the imagination of viewers and consumers." He adds that, "without fear of being wrong, Branded Content will be one of the most common forms of television and film production in the coming years."

As for consumption on social media, Regueira maintains that "the important thing is not to attract followers, but for the content associated with the brand to build lasting relationships with consumers." Users, Milone explains, "adapt new habits to their old routines and, in this case, second screens are being added to television." However, Prádanos points out that "since standing out is ever harder, on social media only brands with a flawless strategy and implementation manage to improve engagement between brand and consumer."

Branded Content can also win back forgotten audiences. The formula, Regueira insists, "is to respect the essence of Branded Content: Branded on one side and Content on the other." He explains: "Branded, 'of the brand', because a brand is paying, so a piece of branded content must begin with strategic branding work, shaping a territory that embodies the brand's DNA and what it wants to communicate. As for the Content, there is no point in defining the elements the brand can draw on to tell its story if that story then proves irrelevant to the consumer. The content must therefore be created by content-creation specialists, whose profiles rarely coincide with those of traditional ad makers."

The challenge now, Regueira adds, "is to tell a story (to borrow a literary simile, something closer to prose), where before it was about producing a quick effect with a short piece (closer to poetry)."

Even so, "the law does not clearly delimit the scope of Branded Content," Regueira notes. "Until now, anything involving an economic transaction was considered advertising. But what happens when the brand producing a piece of Branded Content has not only paid for its broadcast but may also have been paid for the rights to that program?" In Regueira's view, "from an ethical standpoint, consumers do not need protecting from a piece of branded content that, far from assaulting them, they enjoy."

Link: http://www.expansion.com

For Your Consideration: Could a Branded Documentary Bring Home the Oscar?

Marketers back films that bear little to no promotion 

Illy's documentary A Small Section of the World focuses on Costa Rican women who run a sustainable coffee business.

In a remote Costa Rican village, a group of female entrepreneurs known as Asomobi (Asociación de Mujeres Organizadas de Biolley) has created a sustainable coffee production business.

Their inspirational story, told in the documentary A Small Section of the World, will debut in theaters in December, followed by a robust online and broadcast push distributed by FilmBuff, all brought to consumers by Italian coffee maker Illy.

Considering that it bears no branding (save for a shot of an Illy-sponsored conference and some scenes inside an Illy factory), the award-winning filmmakers don't want it to be labeled a "branded documentary." They believe in its merits regardless of how it was backed—so much so that they intend to submit it for consideration at the Academy Awards as well as advertising competitions like the Clio Awards and Cannes Lions.

"It really doesn't matter any longer if it's branded entertainment or entertainment," said Dominic Sandifer, Greenlight Media and Marketing president and co-executive producer. "What matters is if it's a great story." Getting a documentary bankrolled is harder than ever. At the same time, documentaries are in vogue thanks to the growth of online video channels like Netflix and the increasing demand for premium video content on the Web, said Marc Schiller, CEO of event and film marketing firm Bond. And brands are realizing they don't need to plaster their logos on a film to get their company's positioning across.

"It's the purest form of content marketing," said Rebecca Lieb, Altimeter Group analyst.

A Small Section of the World's director, Lesley Chilcott, who received an Oscar for co-producing the Al Gore documentary An Inconvenient Truth, admitted she was skeptical at the outset. Illy explained that the project had been in development for a year, ever since its agronomist visited Asomobi, which provides coffee for Illy. After investigating the story for herself and being assured that she would have final cut—the last approval on a movie—she came on board. "To be honest, at first I said no," she said. "How can I make a movie on coffee producers paid for by a coffee maker?"

Similarly, Patagonia sponsored DamNation, a film about the damage that outdated dams can create. DamNation bears minimal branding, and its directors were also granted final cut. After completing a short theatrical run to qualify for the Academy Awards, DamNation was released online. It will also be available on Netflix. "We're here to solve environmental problems," said Joy Howard, vp, marketing at Patagonia. "If we can show that, then people process what we're about, become loyal and commit to the brand."

Morgan Spurlock, who helmed the branded-content documentary The Greatest Movie Ever Sold, said filmmakers should be cautious about taking a marketer's money. Still, he's not against it, having himself partnered with Maker Studios on multiple brand-sponsored Web series in early 2015. "You can have a brand come in and be a part of something, but you have to know they want to exert some sort of influence," he said.

"[Brand backing] can hinder the film's ability to compete in that space," said Howard. "Something that had a huge budget is not viewed on the same footing as a documentary that has a scrappier background."

Chilcott understands there may be bias against A Small Section of the World. But as more brand marketers finance filmmaking, she hopes people will judge on merit, not on who is footing the bill. "I think in three to four years, this won't even be a story," she said.

Link: http://www.adweek.com
