Neurociencia | Neuroscience





Augmented reality aims at big industry [1306]

by System Administrator - Sunday, 12 July 2015, 00:02

Chasing Brilliance / An Ars Technica feature

Toss your manual overboard—augmented reality aims at big industry

Papers, diagrams, and checklists would be replaced with intuitive visual tools.

by Lee Hutchinson

Dr. Manhattan manipulates this reactor's components in exploded view. GE is trying to do something similar with an augmented reality maintenance manual |© Warner Bros.

For better or for worse, augmented reality ("AR") is charging forward in the consumer space—but there’s a place for AR in the industrial world as well. We’re not quite at the point of putting Microsoft HoloLens kits on the heads of roughnecks working out on oil rigs, but when it comes to complex machinery out in remote locations, augmenting what field engineers can see and do can have a tremendous impact on a company’s bottom line.

By way of example, GE is focusing efforts on constructing an extensible "field maintenance manual" intended to be used for industrial equipment. The use case being tested in the labs is with oil and gas; researchers in GE’s Research Center in Brazil are building software that they hope will replace the need to deal with bulky printed maintenance manuals—manuals which have to be kept up to date and which lack any kind of interactivity.

To learn a bit more about augmented reality in industry, Ars spoke with Dr. Camila Nunes, a scientist in charge of software and productivity analytics with GE Brazil. Nunes has an extensive background in oil and gas, having done graduate and postdoctoral work with Petrobras, the largest Brazilian energy corporation (and, indeed, the largest company in the Southern Hemisphere). At GE, Nunes works on bringing to life the interactive field maintenance manual concept.


To elaborate on how GE is using augmented reality in oil and gas, Nunes explained that frequently offshore oil and gas workers are called upon to install "Christmas trees," which are complex assemblies of pipes and valves used for a variety of purposes, including monitoring wells or injecting fluids into them. Owing to the wide range of functions, these are necessarily complex devices, and their installation can take more than a hundred hours under ideal conditions.

Currently, explained Nunes, the installation and servicing of a Christmas tree is done using paper manuals and checklists, and GE is aiming to change that by making things electronic and interactive. The interactive field maintenance manual concept is a multi-pronged beast, with a front end in the field and a back end that contains a tremendous variety of hardware and which can be continually updated as new equipment enters the field or new procedures are devised.

Nunes demonstrated this with a tablet in the augmented reality lab and a small 3D-printed duplicate of a piece of well hardware. The maintenance manual app used the tablet’s camera to figure out what kind of hardware it was looking at, and then was able to track the component as the tablet moved around it. The operator could look up installation procedures and see steps demonstrated in 3D on the parts each step involves, rather than having to refer to static printed diagrams.

What's more, the app enables an operator to take any part of the complex assembly being worked on and "explode" it, expanding it so that its interlinked component parts are visible. This kind of 3D exploded view is superior to a printed page because it can be zoomed and manipulated, and individual parts can be directly addressed in the app, rather than having to refer to additional printed pages.


A 3D-printed model of a well component from the lab, with a QR-like tag for easier recognition by the manual application.

Head-mounted versus hand-held

As things stand, the maintenance manual concept currently exists on tablets—iOS and Android. The back end, where the bulk of the data lives, is based on Java Web Services, so when the tablet is connected to the Internet, it can pull up data on any piece of equipment in the entire library. However, in the field—particularly on sea-based drilling platforms—Internet access is never a sure thing. Because of this, the app can be preloaded with data on the pieces of equipment that the operator expects to work on, and it can then function without a network connection.
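The preload-then-work-offline pattern described above can be sketched in a few lines of Python. This is a minimal illustration only; the names here (EquipmentCache, preload, lookup) are invented for the example and do not reflect GE's actual software.

```python
# Sketch of an offline-first equipment data store: preload manual data
# for the equipment the operator expects to service, then serve lookups
# from the local store when no connection is available.

class EquipmentCache:
    def __init__(self):
        self._store = {}    # equipment id -> manual data
        self.online = True  # simulated connectivity flag

    def preload(self, records):
        """Store data for equipment expected in the field."""
        self._store.update(records)

    def lookup(self, equipment_id):
        # Serve from the local store first, so the app keeps working
        # on a platform with no Internet access.
        if equipment_id in self._store:
            return self._store[equipment_id]
        if not self.online:
            raise LookupError(f"{equipment_id} not preloaded, no connection")
        # Online: a real app would fall back to the remote back end here.
        return None

cache = EquipmentCache()
cache.preload({"christmas-tree-042": {"install_steps": 118}})
cache.online = False
print(cache.lookup("christmas-tree-042")["install_steps"])  # prints 118
```

The design choice mirrors the article: connectivity is treated as the exception, not the rule, so a missing local entry with no connection is a hard failure rather than a silent network retry.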

Using commodity tablet hardware lets GE do things in software that would have been very difficult or impossible a decade ago; in particular, the GPUs of most consumer-grade tablets are good enough that weaving in OpenGL-based 3D renderings on top of the tablet’s live camera feed is trivial. There’s also enough CPU power available to perform image recognition tasks—in the lab, the 3D printed miniature parts had QR-like barcodes on them that the app could use to know what it was pointed at, but in the field, the manual features full image recognition and can identify equipment purely through the tablet’s camera feed.

RAM is an issue—at least for now. Nunes explained that it was relatively easy for the GE developers’ reach to exceed their grasp at first, as the tablets’ graphical and computational capabilities led the developers to try to do multiple things at once without regard for memory management. They’ve learned from their earlier efforts, though.

This is going to be a lot more important going forward, because GE wants to eventually transition the current tablet-based app into a head-mounted display, moving from tablet augmented reality to actual in-your-field-of-view augmented reality. Nunes couldn’t disclose which vendors GE was working with to make this happen, but she did say that nothing on the market today was quite good enough to make the concept work as well as it does in tablet form.


Any augmented reality app intended for use in offshore platforms (like P-51, pictured here) must be able to work in the absence of an Internet connection.

Ahead of the class

Moving away from paper and into an augmented reality maintenance manual has another tremendous benefit: if used properly, it can shorten the training cycle of equipment technicians.

The goal here would be to actually put the exact same kind of AR manual in technician training classrooms as in the field. Prospective technicians can learn to service equipment not just by reading books and working with mock-ups, but by actually walking through virtual procedures—the same procedures they’d use out at sea on a platform. Further, instructors can "share screens" with them and assist with troubleshooting—just as remote experts could help when out in the field (network connectivity permitting).

Augmented reality in industry faces a different set of challenges than it does in the consumer world. Where consumer AR must avoid being intrusive and distracting to everyday people doing everyday things, industrial AR is more like the military's use of the technology: it can be deployed with the expectation that the user will have relevant training and job skills. Plus, as mentioned, AR can be incorporated into job training itself, getting technicians accustomed to the tools throughout their careers.

The AR maintenance manual concept looks like a win—and the applications are clearly there beyond oil and gas. Any kind of industrial setting with complex machinery that requires installation and maintenance checklists could benefit; GE hopes to be able to roll the technology out in the near term.

Lee Hutchinson / Lee is the Senior Reviews Editor at Ars and is responsible for the product news and reviews section. He also knows stuff about enterprise storage, security, and manned space flight. Lee is based in Houston, TX.




Augmented reality gets to work—and gets past the “Glassholes” [1177]

by System Administrator - Sunday, 29 March 2015, 18:35


Mobile tech, Internet of things, cloud delivers info to workers' fingers, eyeballs.


Aural History [1377]

by System Administrator - Tuesday, 1 September 2015, 20:36


The form and function of the ears of modern land vertebrates cannot be understood without knowing how they evolved.

By Geoffrey A. Manley

Unlike eyes, which are generally instantly recognizable, ears differ greatly in their appearance throughout the animal kingdom. Some hearing structures may not be visible at all. For example, camouflaged in the barn owl’s facial ruff—a rim of short, brown feathers surrounding the bird’s white face—are clusters of stiff feathers that act as external ears on either side of its head. These feather structures funnel sound collected by two concave facial disks to the ear canal openings, increasing the bird’s hearing sensitivity by 20 decibels—approximately the difference between normal conversation and shouting. Similar increases in sensitivity result from the large and often mobile external structures, or pinnae, of many mammals, such as cats and bats. Internally, the differences among hearing organs are even more dramatic.
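For scale, the 20-decibel figure can be unpacked with the standard sound-pressure-level relation from acoustics (supplied here for context):

```latex
% Gain in decibels for a sound-pressure ratio p_1/p_0:
\Delta L = 20 \log_{10}\!\left(\frac{p_1}{p_0}\right)\,\mathrm{dB}
% The owl's 20 dB improvement thus means a tenfold increase in
% sound pressure reaching the ear:
\Delta L = 20\,\mathrm{dB} \;\Longrightarrow\; \frac{p_1}{p_0} = 10^{20/20} = 10
```

A tenfold increase in pressure (a hundredfold in power) is roughly the gap between conversational speech and shouting, consistent with the comparison above.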

Although fish can hear, only amphibians and true land vertebrates—including the aquatic species that descended from them, such as whales and pinnipeds—have dedicated hearing organs. In land vertebrates belonging to the group Amniota, including lizards, birds, and mammals, sound usually enters through an external canal and impinges on an eardrum that is connected through middle-ear bones to the inner ear. There, hundreds or thousands of sensory hair cells are spread along an elongated membrane that acts as a spectral analyzer, with the result that each local group of hair cells responds best to a certain range of pitches, or sound frequencies. The hair cells then feed this information into afferent nerve fibers that carry the information to the brain. (See “Human Hearing: A Primer.”)


Together, these hair cells and nerve fibers encode a wide range of sounds that enter the ear on that side of the head. Two ears complete the picture, allowing animals’ brains to localize the source of the sounds they hear by comparing the two inputs. Although it seems obvious that the ability to process nearby sounds would be enormously useful, modern amniote ears in fact arose quite late in evolutionary history, and to a large extent independently in different lineages. As a result, external, middle, and inner ears of various amniotes are characteristically different.1 New paleontological studies and comparative research on hearing organs have revealed the remarkable history of this unexpected diversity of ears.

Divergence from a common origin

Amniote vertebrates comprise three lineages of extant groups that diverged roughly 300 million years ago: the lepidosaurs, which include lizards and snakes; the archosaurs, which include crocodilians and birds; and mammals, which include egg-laying, pouched, and placental mammals. By comparing the skulls of the extinct common ancestors of these three lineages, as well as the ears of the most basal modern amniotes, researchers have concluded that ancestral amniotes had a small (perhaps less than 1 millimeter in length) but dedicated hearing organ: a sensory epithelium called a basilar papilla, with perhaps a few hundred sensory hair cells supported by a thin basilar membrane that is freely suspended in fluid. These rudimentary structures evolved from the hair cells of vestibular organs, which help organisms maintain their balance by responding to physical input, such as head rotation or gravity. Initially, the hearing organ responded only to low-frequency sounds. On their apical surface, all hair cells have tight tufts or bundles of large, hairlike villi known as stereovilli (or, more commonly, stereocilia, even though they are not true cilia), which give hair cells their name. Between these stereovilli are proteinaceous links, most of which are closely coupled to sensory transduction channels that respond to a tilting of the stereovilli bundles caused by sound waves.

The amniote hearing organ evolved as a separate group of hair cells that lay between two existing vestibular epithelia. Low-frequency vestibular hair cells became specialized to transduce higher frequencies, requiring much faster response rates. This change is attributable in part to modifications in the ion channels of the cell membrane, such that each cell is “electrically tuned” to a particular frequency, a phenomenon still observed in some modern amniote ears. Moreover, the early evolution of these dedicated auditory organs in land vertebrates led to the loss of the heavy otolithic membrane that overlies the hair-cell bundles of vestibular organs and is responsible for their slow responses. What remains is the watery macromolecular gel known as the tectorial membrane, which assures that local groups of hair cells move synchronously, resulting in greater sensitivity.

Good high-frequency hearing did not exist from the start, however. For a period of at least 50 million years after amniotes arose, the three main lineages were most likely quite hard of hearing. They had not yet evolved any mechanism for absorbing sound energy from air; they lacked the middle ear and eardrum that are vital for the function of modern hearing organs. As such, ancestral amniotes most likely perceived only sounds of relatively low frequency and high amplitude that reached the inner ear via the limbs or, if the skull were rested on the ground, through the tissues of the head. It is unclear what kind of stimuli could have existed that would have led to the retention of such hearing organs for such a long time.

The magnificent middle ear

 CONVERGING ON THE EAR: Starting around 250 million years ago, the three amniote lineages—lepidosaurs (lizards and snakes), archosaurs (crocodilians and birds), and mammals—separately evolved a tympanic middle ear, followed by evolution of the inner ear, both of which served to increase hearing sensitivity. Despite the independent origin of hearing structures in the three lineages, the outcomes were functionally quite similar, serving as a remarkable example of convergent evolution.

During the Triassic period, some 250 to 200 million years ago, a truly remarkable thing happened. Independently, but within just 20 million to 30 million years of one another, all three amniote lineages evolved a tympanic middle ear from parts of the skull and the jaws.2

The tympanic middle ear is the assemblage of tiny bones that connects at one end to an eardrum and at the other end to the oval window, an aperture in the bone of the inner ear. Despite the temporal coincidence in the evolution of these structures in the three amniote lineages and the functional similarities of the adaptations, the groups were by this time so far separated that the middle ears evolved from different structures into two different configurations. The single middle-ear bone, the columella, of archosaurs and lepidosaurs derived from the hyomandibular, a bone that earlier had formed a large strut connecting the braincase to the outer skull. In modern representatives, the columella is long and thin, with several, usually cartilaginous extensions known as the extracolumella. One of these, the “inferior process,” connects the inner surface of the eardrum and the columella, which then connects to the footplate that covers the oval window of the inner ear. This two-part system forms a lever that, together with the pressure increase incurred by transmitting from the much larger eardrum to the footplate, greatly magnifies sound entering the inner ear.

In the mammals of the Triassic, the equivalent events were more complex, but the functional result was remarkably similar. Mammal ancestors reduced the number of bones in the lower jaw from seven to one and, in the process, formed a new jaw joint. Initially, the old and new jaw structures existed in parallel, but over time the old joint moved toward the rear of the head. This event, which at any other time would likely have led to the complete loss of the old joint bones, occurred simultaneously with the origin of the mammalian tympanic middle ear. Older paleontological and newer developmental evidence from Shigeru Kuratani’s lab at RIKEN in Japan indicates that the mammalian eardrum evolved at a lower position on the skull relative to that of the other amniotes, a position outside the old jaw joint.3 In time, the bones of this old joint, together with the hyomandibula, became the three bony ossicles (malleus, incus, and stapes) of the new middle ear. Like the middle ear of archosaurs and lepidosaurs, these ossicles form a lever system that, along with the large area difference between eardrum and footplate, greatly magnifies sound input.

Thus, remarkably, these complex events led independently to all modern amniotes possessing a middle ear that, at frequencies below 10 kHz, works equally effectively despite the diverse structures and origins. There is also evidence that the three-ossicle mammalian middle ear itself evolved at least twice—in egg-laying mammals such as the platypus, and in therians, which include marsupials and placentals—with similar outcomes.

Inner-ear evolution


PITCH PERFECT: The hearing organs of amniotes are organized tonotopically, with hair cells sensitive to high frequencies at the basal end of the papilla, grading into low-frequency hair cells at the apical end.


The evolution of tympanic middle ears kick-started the evolution of modern inner ears, where sound waves are converted into the electrical signals that are sent to the brain. The inner ear is least developed in the lepidosaurs, most of which retained a relatively small auditory papilla, in some just a few hundred micrometers long. Many lepidosaurs, predominantly diurnal species, also lost their eardrum. Snakes reduced their middle ear, limiting their hearing to frequencies less than 1 kHz, about two octaves above middle C. (For comparison, humans can hear sounds up to about 15 or 16 kHz.) Clearly, hearing was not under strong selective pressure in this group. There are a few exceptions, however. In geckos, for example, which are largely nocturnal, the papillar structure shows unique specializations, accompanied by high sensitivity and strong frequency selectivity. Indeed, the frequency selectivity of gecko auditory nerve fibers exceeds that of many mammals.

One part of the inner ear that did improve in lizards (but not in snakes) is the hair cells, with the papillae developing different areas occupied by two structural types of these sound-responsive cells. One of these hair cell groups responds to sounds below 1 kHz and perhaps corresponds to the ancestral version. The higher-frequency hair cells have a more specialized structure, particularly with regard to the size and height of the stereovilli, with bundle heights and stereovillus numbers varying consistently along the papilla’s length. Taller bundles with fewer stereovilli, which are much less stiff and therefore respond best to low frequencies, are found at one end of the membrane, while shorter, thicker bundles with more stereovilli that respond best to higher frequencies are found at the other end—a frequency distribution known as a tonotopic organization. Still, with the exception of one group of geckos, lizard hearing is limited to below 5 to 8 kHz.

In contrast to the relatively rudimentary lepidosaur inner ear, the auditory papilla of archosaurs (birds, crocodiles, and their relatives) evolved much greater length. Owls, highly proficient nocturnal hunters, boast the longest archosaur papilla, measuring more than 10 millimeters and containing many thousands of hair cells. As in lizards, archosaur hair cells show strong tonotopic organization, with a gradual change in the diameter and height of the stereovillar bundles contributing to the gradually changing frequency sensitivity along the papilla. In addition, the hair cells are divided along and across the basilar membrane, with tall hair cells (THCs) resting on the inner side and the apical end, most distant from the middle ear, grading into short hair cells (SHCs) on the outer side and at the basal end. Interestingly, many SHCs completely lack afferent innervation, which is the only known case of sensory cells lacking a connection to the brain. Instead of transmitting sensory information to the brain, these hair cells likely amplify the signal received by the inner ear. Despite the more complex anatomy, however, bird hearing is also generally limited to between 5 and 8 kHz, with the exception of some owls, which can hear up to 12 kHz.

The mammalian papilla, called the organ of Corti, also evolved to be larger—generally, but not always, longer than those of birds—but the extension in length varies in different lineages.4 Mammalian papillae also have a unique cellular arrangement. The papillae of modern egg-laying monotremes, which likely resemble those of the earliest mammals, include two groups of hair cells separated by numerous supporting pillar cells that form the tunnel of Corti. In any given cross section, there are approximately five inner hair cells (IHCs) on the inner side of the pillar cells, closer to the auditory nerve, and eight outer hair cells (OHCs) on the outer side. In therian mammals (marsupials and placentals), the numbers of each cell group have been much reduced, with only two pillar cells forming the tunnel in any given cross-section, and generally just a single IHC and three or four OHCs, though the functional consequences of this reduction remain unclear. About 90 percent of afferent fibers innervate IHCs, while only 10 percent or fewer innervate OHCs, despite the fact that OHCs account for some 80 percent of all hair cells. As with bird SHCs that lack afferent innervation, there are indications that the main function of OHCs is to amplify the physical sound signal at very low sound-pressure levels.

Therian mammals also evolved another key hearing adaptation: the cochlea. Shortly before marsupial and placental lineages diverged, the elongating hearing organ, which had always been curved, reached full circle. The only way to further increase its length was to form more than one full coil, a state that was reached roughly 120 million years ago. The result is hearing organs with 1.5 to 4 coils and lengths from 7 millimeters (mouse) to 75 millimeters (blue whale). Hearing ranges also diverged, partly depending on the size of the animal (larger mammals tend to have lower upper-frequency limits), but with a number of remarkable specializations, as expected in a lineage that radiated greatly during several evolutionary episodes.

As a result of these adaptations, most mammals have an upper frequency-response limit that well exceeds those of lepidosaurs and archosaurs. Human hearing extends to frequencies of about 15 kHz; a guinea pig can hear sounds up to about 45 kHz; and in the extreme cases of many bats and toothed whales, hearing extends into ultrasonic frequencies, sometimes as high as 180 kHz, allowing these animals to echolocate in air and water. This impressive increase in frequency limits is due to an extremely stiff middle ear, as well as a stiff cochlea. During early therian evolution, the bone of the canal surrounding the soft tissues invaded the supporting ridges of the basilar membrane, creating stiff laminae. Such bony ridges were retained in species perceiving ultrasonic frequencies, but tended to be reduced and replaced by softer connective-tissue supports in those with lower-frequency limits, such as humans.

Amplification within the ear


HAIRS OF THE EAR: Rows of inner-ear hair cells have villous bundles (blue) on their apical surface that convert sound waves to nervous signals sent to the brain.


In addition to the specialized structures of the middle and inner ears of amniotes that served to greatly increase hearing sensitivity, the hair cells themselves can produce active movements that further amplify sound stimuli. The evolutionarily oldest such active mechanism was discovered in the late 1980s by Jim Hudspeth’s group, then at the University of California, San Francisco, School of Medicine, working with frogs,5 and Andrew Crawford and Robert Fettiplace, then at the University of Cambridge, working with turtles.6 The amplification mechanism, called the active bundle mechanism, probably evolved in the ancestors of vertebrates and helped overcome the viscous forces of the surrounding fluids, which resist movement. When sound stimuli move the hair-cell bundle and thus open transduction channels to admit potassium ions, some calcium ions also enter the cell. These calcium ions bind to and influence the open transduction channels, increasing the speed with which these channels close. Such closing forces are exerted in phase with the incoming sound waves, increasing the distance that the hair cells move in response, and thereby increasing their sensitivity. It is likely that this mechanism operates in all vertebrate hair cells.5 In lizards, my group provided evidence that this bundle mechanism really does operate in the living animal.7

In 1985, a second mechanism of hair cell–driven sound amplification was discovered in mammalian OHCs by Bill Brownell’s group, then at the University of Florida School of Medicine. Brownell and his colleagues showed that mammalian OHCs, but not IHCs, changed their length very rapidly in phase with the signal if exposed to an alternating electrical field.8 Such fields occur when hair cells respond to sound. Subsequent experiments showed that the change in cell length is due to changes in the molecular configuration of a protein, later named prestin, which occurs in high density along the lateral cell membrane of OHCs. In mammals, the force produced by the OHCs is so strong that the entire organ of Corti, which includes all cell types that surround the hair cells and the basilar membrane itself, is driven in an up-and-down motion. This movement can amplify sounds by at least 40 dB, allowing very quiet noises to be detected. There is evidence for the independent evolution of specific molecular configurations of prestins that allow for the amplification of very high ultrasonic frequencies in bats and whales.9

Bird ears also appear to produce active forces that amplify sound. The SHCs have bundles comprising up to 300 stereovilli (about three times as many as the bundles of mammalian OHCs),10 and the movement of these bundles probably drives the movement of THCs indirectly via the tectorial membrane. Also, very recent data from the lab of Fettiplace, now at the University of Wisconsin–Madison, suggests that in birds, prestin (albeit in a different molecular form) may work in the plane across the hearing organ (i.e., not up and down as in mammals), perhaps reinforcing the influence of the bundle active mechanism on the THCs via the tectorial membrane.11


Remarkable convergence

Three hundred million years of evolution have resulted in a fascinating variety of ear configurations that, despite their structural diversity, show remarkably similar physiological responses. There are hardly any differences in sensitivity between the hearing of endothermal birds and mammals, and the frequency selectivity of responses is essentially the same in most lizards, birds, and mammals. The combined research efforts of paleontologists, anatomists, physiologists, and developmental biologists over several decades have clarified the major evolutionary steps in all lineages that modified the malleable middle and inner ears into their present-day kaleidoscopic variety of form, a variety that nonetheless displays a surprising consensus in function.

Geoffrey A. Manley is a retired professor from the Institute of Zoology at the Technical University in Munich, Germany. He is currently a guest scientist in the laboratory of his wife, Christine Köppl, at Oldenburg University in Germany.


  1. G.A. Manley, C. Köppl, “Phylogenetic development of the cochlea and its innervation,” Curr Opin Neurobiol, 8:468-74, 1998.
  2. J.A. Clack, “Patterns and processes in the early evolution of the tetrapod ear,” J Neurobiol, 53:251-64, 2002.
  3. T. Kitazawa et al., “Developmental genetic bases behind the independent origin of the tympanic membrane in mammals and diapsids,” Nat Commun, 6:6853, 2015.
  4. G.A. Manley, “Evolutionary paths to mammalian cochleae,” JARO, 13:733-43, 2012.
  5. A.J. Hudspeth, “How the ear’s works work: Mechanoelectrical transduction and amplification by hair cells,” C R Biol, 328:155-62, 2005.
  6. A.C. Crawford, R. Fettiplace, “The mechanical properties of ciliary bundles of turtle cochlear hair cells,” J Physiol, 364:359-79, 1985.
  7. G.A. Manley et al., “In vivo evidence for a cochlear amplifier in the hair-cell bundle of lizards,” PNAS, 98:2826-31, 2001.
  8. W.E. Brownell et al., “Evoked mechanical responses of isolated cochlear outer hair cells,” Science, 227:194-96, 1985.
  9. Y. Liu et al., “Convergent sequence evolution between echolocating bats and dolphins,” Curr Biol, 20:R53-R54, 2010.
  10. C. Köppl et al., “Big and powerful: A model of the contribution of bundle motility to mechanical amplification in hair cells of the bird basilar papilla,” in Concepts and Challenges in the Biophysics of Hearing, ed. N.P. Cooper, D.T. Kemp (Singapore: World Scientific, 2009), 444-50.
  11. M. Beurg et al., “A prestin motor in chicken auditory hair cells: Active force generation in a nonmammalian species,” Neuron, 79:69-81, 2013.
  12. C. Bergevin et al., “Salient features of otoacoustic emissions are common across tetrapod groups and suggest shared properties of generation mechanisms,” PNAS, 112:3362-67, 2015.

Sea Lion: © iStock/LFStewart; Squirrel: © Erik Mandre/Shutterstock; Frog: ©Frank B. Yuwono/Shutterstock; Owl: ©XNature.Photography/Shutterstock; Lizard: ©Andrew Wijesuriya/Shutterstock; Bat: © iStock/GlobalP; Ostrich: © Jamen Percy/Shutterstock; Dog: ©Annette Shaff/Shutterstock; Lynx: © Dmitri Gomon/Shutterstock



Autoaceptación | Self-Acceptance [694]

by System Administrator - Tuesday, 5 August 2014, 21:45

Self-acceptance: The importance of admitting our mistakes


When we come face to face with our failures, it is hard not to deny the consequences of our mistakes, and all too often we end up making the problems worse through the very behaviors we have been struggling to avoid.

According to a new study published in the Journal of Consumer Research, practicing self-acceptance may be the best way to raise our self-esteem and avoid self-deprecating behaviors and their consequences.

The study's authors, Soo Kim and David Gal, describe this phenomenon as follows: "Consider a person who has just retired and realizes that their income will no longer be enough. Such a person is very likely to feel the urge to buy expensive things or to eat out more often than they used to, as a way of avoiding their problems. We introduce the idea that practicing self-acceptance is a more effective alternative to these kinds of self-destructive behaviors."

After conducting five different experiments, the authors confirmed that practicing self-acceptance reduces the likelihood of engaging in harmful behaviors and increases the likelihood of working to improve other, alternative skills.

En uno de los estudios, los participantes leyeron sobre el concepto de autoaceptación y luego se les pidió que eligieran entre una revista de lujo o un libro de autoayuda y crecimiento personal. Como se predijo, los participantes fueron más propensos a seleccionar el libro antes que la revista, lo que indica el deseo de mejorar su bienestar general.

Si bien los beneficios de la autoaceptación pueden ayudar a aumentar la autoestima de una persona como un medio para promover el bienestar, los autores advierten contra el uso de elogios inmerecidos, que pueden aportar creencias poco realistas y expectativas acerca de sus habilidades.

“Cuando se socavan las creencias y expectativas de una persona, se puede dañar muy negativamente su autoestima. A diferencia de la autoestima, la aceptación de uno mismo, que es de por sí incondicional, puede preparar mejor a alguien para los inevitables fracasos, y en última instancia, constituye una alternativa menos volátil para la promoción del bienestar “, concluyeron los autores.

Se completa el artículo con un interesante documento firmado por M.B. González-Fuentes y Patricia Andrade (Universidad Autónoma de México) que bajo el título “Autoaceptación como factor de riesgo para el intento de suicidio en adolescentes”, analiza la influencia de esta variable en una muestra de estudiantes de entre 14 y 20 años, con resultados relevantes:

“Los resultados obtenidos sobre el papel de la autoaceptación como factor asociado para intento de suicidio son interesantes porque muestran que los predictores de autoaceptación para el intento suicida fueron diferentes dependiendo del sexo de los adolescentes“.



Autoconfianza (Self-Confidence) [696]

by System Administrator - Tuesday, 5 August 2014, 22:14

Self-confidence: Keys to improving it


By Mireia Navarro

Self-confidence can be defined as “trust in oneself,” and it matters far more than it may seem at first glance: on it depends the sense of usefulness we attribute to ourselves in relation to the world around us.

Our level of self-confidence shapes the view we have of ourselves, which in turn molds our performance and our activities.

For example: How many times have we thought about making a recipe but never made it, because it looked difficult and we didn't consider ourselves capable? How many times have we assumed we wouldn't pass an exam? These are just two of the many everyday situations that depend on self-confidence.

This level of self-confidence is shaped by many things: our past, our present, and our expectations for the future; the experiences we have lived through and the lessons we have drawn from them; our personality...

Many factors influence our level of self-confidence; nevertheless, it is a malleable construct that can be improved. With the right training, we can raise our self-confidence and turn it into success.

Too little self-confidence is as maladaptive as too much. Moderate levels of self-confidence allow us to recognize our own limitations, which is necessary to keep the flame of learning from experience alive. These moderate levels are what let our personality develop optimally.

How to improve our self-confidence

What can we do to improve our self-confidence? Simply follow these steps with persistence:

-Know yourself. Take the time to notice and understand the emotions you experience. This step is basic and necessary for identifying what is driving your low self-confidence. Only with healthy self-esteem can healthy self-confidence be achieved.

-Build a world around you in which you are truly comfortable. Wear clothes that inspire confidence in you. Surround yourself with people who believe in you. Turn your negative self-talk into positive self-talk (for example, replace “I don't think I can do it” with “I'm going to try”).

-For every negative trait of yours that comes to mind, think of a positive one.

-Attempt what you don't believe you can do, but understand that failures are necessary for improvement. Use your first attempts to take away what went well. Don't be discouraged if it doesn't work out the first time; it will show you where you need to improve. The second time will go better. And after a few more tries, it may well go right!

-Reward yourself for every success, however small. It will help you keep going. The reward should match the success achieved: the greater the success, the greater the reward.

-Don't compare yourself with anyone. Other people have qualities you don't have, but you also have some that they lack.

-Be consistent. Don't get discouraged. Change happens little by little.

-If it is harder than you expected and you are about to give up, consult a professional, who can smooth the path for you.

Improving self-confidence is a matter of willingness and persistence. From the first achievements onward, the journey is deeply rewarding; it teaches us that the limitations we used to have were self-imposed without reason. It becomes a source of motivation that can change us completely. Starting to experience these changes is up to us.

Mireia Navarro holds a degree in Psychology (Universitat de València) and a Master's in Psychology and Family Management. She has experience in clinical and educational psychology, provides in-home psychology services, and works on interventions in natural settings.



Autoestima: Tres Estados (Self-Esteem: Three States) [627]

by System Administrator - Wednesday, 29 October 2014, 14:14


by Maria Chevallier

This classification of the three states of self-esteem was proposed by Martín Ross in the book “El Mapa de la Autoestima” (“The Map of Self-Esteem”; 2007 edition, ISBN 978-84-686-3669-6; 2013 edition, ISBN 978-9870267737) and has since gained traction; it is now widely used in work addressing the topic of self-esteem.

Continue reading on the site


Autoimmune Diseases [825]

by System Administrator - Wednesday, 3 September 2014, 22:39

Scientists discover how to ‘switch off’ autoimmune diseases

Aggressor cells are targeted by treatment causing them to convert to protector cells. Gene expression changes gradually during treatment, as illustrated by the colour changes in these heat maps. Credit: Dr Bronwen Burton

Scientists have made an important breakthrough in the fight against debilitating autoimmune diseases such as multiple sclerosis by revealing how to stop cells attacking healthy body tissue.

Rather than the body's immune system destroying its own tissue by mistake, researchers at the University of Bristol have discovered how cells convert from being aggressive to actually protecting against disease.

The study, funded by the Wellcome Trust, is published in Nature Communications.

It's hoped this latest insight will lead to the widespread use of antigen-specific immunotherapy as a treatment for many autoimmune disorders, including multiple sclerosis (MS), type 1 diabetes, Graves' disease and systemic lupus erythematosus (SLE).

MS alone affects around 100,000 people in the UK and 2.5 million people worldwide.

Scientists were able to selectively target the cells that cause autoimmune disease by dampening down their aggression against the body's own tissues while converting them into cells capable of protecting against disease.

This type of conversion has been previously applied to allergies, known as 'allergic desensitisation', but its application to autoimmune diseases has only been appreciated recently.

The Bristol group has now revealed how the administration of fragments of the proteins that are normally the target for attack leads to correction of the autoimmune response.

Most importantly, their work reveals that effective treatment is achieved by gradually increasing the dose of antigenic fragment injected.

In order to figure out how this type of immunotherapy works, the scientists delved inside the immune cells themselves to see which genes and proteins were turned on or off by the treatment.

They found changes in gene expression that help explain how effective treatment leads to conversion of aggressor into protector cells. The outcome is to reinstate self-tolerance whereby an individual's immune system ignores its own tissues while remaining fully armed to protect against infection.

By specifically targeting the cells at fault, this immunotherapeutic approach avoids the need for the immune suppressive drugs associated with unacceptable side effects such as infections, development of tumours and disruption of natural regulatory mechanisms.

Professor David Wraith, who led the research, said: "Insight into the molecular basis of antigen-specific immunotherapy opens up exciting new opportunities to enhance the selectivity of the approach while providing valuable markers with which to measure effective treatment. These findings have important implications for the many patients suffering from autoimmune conditions that are currently difficult to treat."

This treatment approach, which could improve the lives of millions of people worldwide, is currently undergoing clinical development through biotechnology company Apitope, a spin-out from the University of Bristol.

Note: Material may have been edited for length and content. For further information, please contact the cited source.

University of Bristol press release


Bronwen R. Burton, Graham J. Britton, Hai Fang, Johan Verhagen, Ben Smithers, Catherine A. Sabatos-Peyton, Laura J. Carney, Julian Gough, Stephan Strobel, David C. Wraith. Sequential transcriptional changes dictate safe and effective antigen-specific immunotherapy.   Nature Communications, Published Online September 3 2014. doi: 10.1038/ncomms5741



Automated Learning [1123]

by System Administrator - Tuesday, 24 February 2015, 17:17

Students, meet your new lesson planners. Photo by John Tlumacki/The Boston Globe via Getty Images.

Automated Learning

NEW YORK—Teacher John Garuccio wrote a multiplication problem on a digital whiteboard in a corner of an unusually large classroom at David A. Boody Intermediate School in Brooklyn.

About 150 sixth-graders are in this math class—yes, 150—but Garuccio’s task was to help just 20 of them, with a lesson tailored to their needs. He asked, “Where does the decimal point go in the product?” After several minutes of false starts, a boy offered the correct answer. Garuccio praised him, but did not stop there.

“Come on, you know the answer, tell me why,” Garuccio said. “It’s good to have the right answer, but you need to know why.”

A computer system picked this lesson for this group of students based on a quiz they’d taken a day earlier. Similar targeted lessons were being used by other teachers and students working together, in small groups, in an open classroom the size of a cafeteria. The computer system orchestrates how each math class unfolds every day, not just here, but for about 6,000 students in 15 schools located in four states and the District of Columbia.

As more schools adopt blended learning—methods that combine classroom teachers and computer-assisted lessons—some are taking the idea a step further and creating personalized programs of instruction. Technology can help teachers design a custom lesson plan for each student, supporters say, ensuring children aren’t bored or confused by materials that aren’t a good fit for their skill level or learning style.

At David A. Boody (I.S. 228)—a public school in a Brooklyn neighborhood where five of every six students qualify for free or reduced-price lunch—teachers use a program called Teach to One: Math. It combines small group lessons, one-on-one learning with a teacher, learning directly from software, and online tutoring. A nonprofit in New York City, New Classrooms Innovation Partners, provides the software and supports the schools that use it. New Classrooms evolved from a program created several years earlier at the New York City Department of Education.

One key feature of the program is apparent even before instruction begins. The entrance to the math class at David A. Boody looks a bit like a scene at an airport terminal. Three giant screens display daily schedule updates for all students and teachers. The area is huge, yawning across a wide-open space created by demolishing the walls between classrooms.

“There is a great deal of transparency here,” said Cathy Hayes, the school’s math director, explaining the idea behind the enormous classroom. “Children and teachers can see each other, and that transparency works to share great work. This isn’t a room where you can shut the door and contain what you are doing.”

The open design is meant to encourage collaboration; teachers can learn from each other by working in close proximity. Shelves, desks, and whiteboards divide the large room to create nooks for small-group instruction. Each area has a name, such as Brooklyn Bridge or Manhattan Bridge, which helps students know where they are supposed to go. Sometimes students are instructed by teachers; other times they work in sections monitored by teaching assistants. Some stations use pencil-on-paper worksheets with prompts to help guide students through group projects; stations in another part of the classroom are for independent computer-guided lessons and tutoring.

Dmitry Vlasov, an assistant math teacher, scanned his section of the room where more than a dozen children worked on small laptops independently. If needed, he would help a student who was stuck, but he said most students don’t require much more than a nudge.

“They treat it as if it were a game,” Vlasov said.

All this activity in the large, high-ceilinged room creates a constant buzz, which seemed to distract some students. Teachers had to remind them not to peer around the big room. Students seated in the back appeared to have trouble hearing the teacher.

Math class spans two 35-minute sessions, with students and teachers rotating to new stations after the first session. The school has about 300 students in the math program. Half of them report to class at one time. On a recent day in December, the classroom was staffed with one math director, five teachers, two teaching assistants, and a technology aide.

The program’s computer system coordinates where everyone will go based on how the children perform on a daily test at the end of class. The next day’s schedule is delivered electronically. Teachers, students, and parents can log on to a website to see it.
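The article describes the scheduler only at a high level; New Classrooms' actual algorithm is proprietary and not published. As a purely illustrative sketch, with all function names and score thresholds invented for the example, a quiz-driven station assignment might look like:

```python
# Hypothetical sketch of quiz-driven scheduling (NOT Teach to One's actual
# algorithm): each student's next-day station is derived from their latest
# daily-quiz score. Thresholds here are illustrative assumptions.

def assign_station(score: float) -> str:
    """Map a daily quiz score (0-100) to tomorrow's station type."""
    if score < 60:
        return "small-group lesson with a teacher"   # needs re-teaching
    elif score < 85:
        return "independent computer-guided lesson"  # practice with software
    else:
        return "collaborative project station"       # ready to apply the skill

def build_schedule(quiz_scores: dict[str, float]) -> dict[str, str]:
    """Produce the next day's schedule from all students' quiz results."""
    return {student: assign_station(score)
            for student, score in quiz_scores.items()}

schedule = build_schedule({"Ana": 52.0, "Ben": 78.0, "Cara": 91.0})
print(schedule["Ana"])  # small-group lesson with a teacher
```

The real system additionally balances room capacity, teacher availability, and lesson sequencing across 15 schools, which a per-student lookup like this does not capture.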

“It’s easy to keep track of everyone,” said Amelia Tonyes, a teacher. “It helps to target students with what they need.”

The software used by the Teach to One system pulls lessons from a database created and curated by the program’s academic team. When Teach to One staffers discover a lesson that fits their program, they negotiate with the publisher to buy the lesson à la carte—sort of like buying one song from iTunes instead of buying the entire album. Staffers say they examined about 80,000 lessons to create a library of 12,000 from 20 vendors. A room in the office of the nonprofit is filled with stacks of math textbooks.

According to Joe Ventura, a spokesman for New Classrooms, Teach to One’s parent organization, schools pay $225 per student each year in licensing fees to use the content in Teach to One’s curriculum—the materials that replace the typical textbook. They also pay between $50,000 and $125,000 a year for professional development and support services, but those fees typically decrease as schools gain more experience running the program, Ventura said.
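Using the fee figures quoted above, the annual cost for a school the size of the one profiled (about 300 students in the math program) works out as follows:

```python
# Worked example of the fee structure reported in the article. The 300-student
# enrollment matches the school profiled; actual fees vary by contract.

students = 300
license_fee_per_student = 225                         # annual licensing, USD
support_low, support_high = 50_000, 125_000           # annual PD/support range

licensing = students * license_fee_per_student
print(f"Licensing: ${licensing:,}")                   # Licensing: $67,500
print(f"Total: ${licensing + support_low:,} to ${licensing + support_high:,}")
# Total: $117,500 to $192,500
```

So licensing alone is roughly $67,500 per year at that enrollment, with support services pushing the total into six figures before the per-year fees taper off.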

Schools in New York City don’t pay the software licensing fees (although they pay some of the other costs), because the program originated within the city’s Department of Education; the nonprofit that now runs Teach to One was created by former Department of Education employees. Ventura said they are in negotiations for a new contract but could not elaborate on details. Requests for information from the Department of Education brought no reply.

Supporters say the program gives every student a custom-designed math class, allowing students to get different lessons, and move faster or slower than peers in the same class.

Skeptics of this type of program say more evidence is needed to justify the time and expense.

Larry Cuban, professor emeritus of education at Stanford University, recently devoted a three-part series on his blog to what he calls automation in schools. Quality teachers deliver not just information, but a human element that can’t be replicated by a machine, he concluded. In an interview, he was not enthusiastic about Teach to One, saying that, from what he’s seen, it looks like “a jazzed-up version of worksheets.”

Teach to One’s founders, hoping to demonstrate that the program works, commissioned a study by an education professor. The first year’s results were mixed. A second-year study, released late last year, documented significant progress in 11 of 15 schools in the program. Although students still scored lower on math tests than national averages—many are from disadvantaged populations—the growth in the Teach to One students’ scores outpaced national averages.

Changes made after the first year’s report, and increased teacher and student familiarity with the program, might explain the improved results, Ventura said. The updates included improvements to an online portal where students and parents can track progress and continue to work on lessons outside school; the addition of 130 multiday collaborative student projects; refinements to the algorithm that is used to select lessons, and expansion of the library of lessons.

The author of the research on Teach to One: Math, Douglas Ready, an associate professor of education and public policy at Teachers College of Columbia University, was cautiously optimistic. (The Hechinger Report, in which this article first appeared, is an independently funded unit of Teachers College.) He noted, though, that while schools using the program showed improved test scores, the early research could not determine whether those gains were attributable to Teach to One or to some other factor in the schools.

The next step in evaluating the program will be supported by a $3 million U.S. Department of Education grant, announced this month. That grant will pay for deeper research into Teach to One, as well as an expansion of the program to more schools in one of its other cities, Elizabeth, New Jersey.

Gary Miron, a professor of evaluation, measurement, and research at Western Michigan University, is generally positive about the possibilities of blended learning. Asked by the Hechinger Report to review the research on Teach to One, he said the gains were “quite remarkable.” He noted, however, that there is a lot of pressure on schools to show gains on tests, which can lead to inflated scores.

That said, Miron is enthusiastic about the possibilities for blended learning programs that invest in both quality teaching and technology. Most online programs—in which students don’t ever report to a brick-and-mortar school—have less promising results, he said.

“That is the future, blended learning,” he said.

This story was written by The Hechinger Report, a nonprofit, independent news website focused on inequality and innovation in education. Read more about blended learning.

Nichole Dobo writes for the Hechinger Report.



Automation Is Eating Jobs, But These Skills Will Always Be Valued In the Workplace [1591]

by System Administrator - Friday, 20 November 2015, 20:46

Automation Is Eating Jobs, But These Skills Will Always Be Valued In the Workplace


If you’d asked farmers a few hundred years ago what skills their kids would need to thrive, it wouldn’t have taken long to answer. They’d need to know how to milk a cow or plant a field. General skills for a single profession that only changed slowly—and this is how it was for most humans through history.

But in the last few centuries? Not so much.

Each generation, and even within generations, we see some jobs largely disappear, while other ones pop up. Machines have automated much of manufacturing, for example, and they’ll automate even more soon. But as manufacturing jobs decline, they’ve been replaced by other once unimaginable professions like bloggers, coders, dog walkers, or pro gamers.

In a world where these labor cycles are accelerating, the question is: What skills do we teach the next generation so they can keep pace?

More and more research shows that current curriculums, which teach siloed subject matter and specific vocational training, are not preparing students to succeed in the 21st century: a time of technological acceleration, market volatility, and uncertainty.

To address this, some schools have started teaching coding and other skills relevant to the technologies of today. But technology is changing so quickly that these new skills may not be relevant by the time students enter the job market.

In fact, in her book Now You See It, Cathy Davidson estimates that

“65 percent of children entering grade school this year (2011) will end up working in careers that haven't even been invented yet."

Not only is it difficult to predict what careers will exist in the future, it is equally uncertain which technology-based skills will be viable 5 or 10 years from now, as Brett Schilke, director of impact and youth engagement at Singularity University, noted in a recent interview.

So, what do we teach?

Finland recently shifted its national curriculum to a new model called the “phenomenon-based” approach. By 2020, the country will replace traditional classroom subjects with a topical approach highlighting the four Cs—communication, creativity, critical thinking, and collaboration. These four skills “are central to working in teams, and a reflection of the ‘hyperconnected’ world we live in today,” Singularity Hub Editor-in-Chief David Hill recently wrote.

Hill notes the four Cs directly correspond to the skills needed to be a successful 21st century entrepreneur—when accelerating change means the jobs we’re educating for today may not exist tomorrow. Finland’s approach reflects an important transition away from the antiquated model used in most US institutions—a model created for a slower, more stable labor market and economy that no longer exists.

In addition to the four Cs, successful entrepreneurs across the globe are demonstrating three additional soft skills that can be integrated into the classroom—adaptability, resiliency and grit, and a mindset of continuous learning.

These skills can equip students to be problem-solvers, inventive thinkers, and adaptive to the fast-paced change they are bound to encounter. In a world of uncertainty, the only constant is the ability to adapt, pivot, and get back on your feet.

Like Finland, the city of Buenos Aires is embracing change.

Select high school curriculums in the city of Buenos Aires now require technological education in the first two years and entrepreneurship in the last three years. Esteban Bullrich, Buenos Aires’ minister of education, told Singularity University in a recent interview, “I want kids to get out of school and be able to create whatever future they want to create—to be able to change the world with the capabilities they earn and receive through formal schooling.”

The idea is to teach students to be adaptive and equip them with skills that will be highly transferable in whatever reality they may face once out of school, Bullrich explains. Embedding these entrepreneurial skills in education will enable future leaders to move smoothly with the pace of technology. In fact, Mariano Mayer, director of entrepreneurship for the city of Buenos Aires, believes these soft skills will be valued most highly in future labor markets.

This message is consistent with research highlighted in a World Economic Forum and Boston Consulting Group report titled, New Vision for Education: Unlocking the Potential of Technology. The report breaks out the core 21st-century skills into three key categories—foundational literacies, competencies, and character qualities—with lifelong learning as a proficiency encompassing these categories.


From degree gathering to continuous learning

This continuous learning approach, in contrast to degree-oriented education, represents an important shift that is desperately needed in education. It also reflects the demands of the labor market—where lifelong learning and skill development are what keep an individual competitive, agile, and valued.

Singularity University CEO Rob Nail explains, “The current setup does not match the way the world has and will continue to evolve. You get your certificate or degree and then supposedly you’re done. In the world we’re living in today, that doesn’t work.”

Transitioning the focus of education from degree-oriented to continuous learning holds obvious benefits for students. This shift in focus, however, will also help academic institutions sustain their value as education at large becomes increasingly democratized and decentralized.

Any large change requires that we overcome barriers. And in education, there are many—but one challenge in particular is fear of change.

“The fear of change has made us fall behind in terms of advancement in innovation and human activities,” Bullrich says.

“We are discussing upgrades to our car instead of building a spaceship. We need to build a spaceship, but we don't want to leave the car behind. Some changes appear large, but the truth is, it's still a car. It doesn't fly. That's why education policy is not flying.”

Education and learning are ready to be reinvented. It’s time we get to work.




Ayudando a niños y adolescentes a superar la violencia y los desastres: Qué pueden hacer los padres (Helping Children and Adolescents Cope with Violence and Disasters: What Parents Can Do) [625]

by System Administrator - Friday, 1 August 2014, 23:03

