Archive for the ‘News’ Category

Loud Bass Music and Sudden Arrhythmic Death Syndrome

Friday, February 5th, 2010

A student died of SADS at a freshers’ party after complaining that the music was “getting to his heart”.

Sudden Arrhythmic Death Syndrome is a disorder of the electrical system of the heart that can lead to the death of apparently healthy people without any warning.

Is it possible that loud bass music could bring on such a condition? Herewith a recent article:

Student who complained of loud bass music ‘getting to his heart’ dropped dead at freshers’ party day after enrolling at university

By Daily Mail Reporter
Last updated at 5:50 PM on 8th December 2009

Tom Reid, 19, is believed to have suffered from a heart disorder that affects young people

A brilliant student collapsed and died at a freshers’ party after complaining that the loud bass music was ‘getting to his heart’, an inquest heard.

Apparently healthy Tom Reid, 19, had enrolled at university only the day before he suddenly dropped dead of a heart condition in a crowded London club.

Today, his parents told of their anguish at the death of the ‘amazing’ linguistics student, from Leeds, which came just hours after they had shared a meal with him.

A coroner recorded a verdict of natural causes and said the award-winning public speaker had suffered from Sudden Arrhythmic Death Syndrome (SADS), a disorder of the electrical system of the heart that affects young people.

Halina and Anthony Reid had driven to University College London with Tom’s belongings on Sunday, September 27 and the family shared a farewell lunch.

During the meal, Tom had made a passing comment about occasionally suffering heart palpitations in response to his mother saying she had experienced irregular heartbeats, St Pancras Coroner’s Court heard.

Mr Reid, a sales engineer, said: ‘Basically he said “Mum, mine sometimes does that.”

‘It was a remark. It wasn’t a complaint.’

That night he went out with a friend to Koko in Camden, North London, and after complaining of a ‘fast and irregular’ heartbeat he was pronounced dead at University College Hospital in the early hours of the next morning.

Tom’s friend Alisha Riseley said on the night of his death they went to the club and as it filled up they had been pushed towards the speakers.

She said: ‘Tom said he felt like the bass was getting to his heart and we went to stand at the back.’ 

He told her: ‘My heart feels funny, I think the bass is affecting me. Oh God, I feel very weird. My heart is beating so fast.’

After Tom fell ill at about 1.30am, the pair went to see a medic at the club shortly after 2am.

Miss Riseley told the court the medic had ‘preferred’ Tom to go to hospital but said he could also go home and hope he felt better.

The student had initially been keen not to ‘make a fuss’, but while he was still weighing up his options he suddenly collapsed in a side room at the club.

Ms Riseley said: ‘He suddenly leant to the side and keeled over as though he fainted.’

The medic started CPR and then paramedics tried to shock his heart into action six or seven times but his pulse only returned for 30 seconds each time, the inquest heard.

Tom was rushed to hospital and despite further treatment he was pronounced dead at 3.11am, less than two hours after first complaining of feeling unwell.

Toxicology tests showed no drugs or alcohol in his blood, while a friend said he had only bought two drinks during the night.

In a statement, Mr Reid and his wife said: ‘As parents we are totally devastated over the enormous loss of our beloved son – it has created a void which can never, ever be filled.

‘Tom was an amazing individual. He loved life and he loved his family and friends.

‘Academically brilliant, he achieved a highly prestigious place at UCL, studying linguistics.

‘He achieved the very highest grades at A level and simultaneously, he was awarded a national award for public speaking – the Voice Of The Future.

‘Voted “personality of the year” at his leavers’ prom, he was adored by all. He had a brilliant life ahead of him. We were, and we remain, tremendously proud of our precious son.

‘His death nearly destroyed us too. Every child is precious – Tom was our world.’

The statement added: ‘What a sad indictment of our society that it was automatically assumed that Tom’s death was alcohol related just because he was in a club with friends having fun.’

The family said they were ‘horrified’ to learn that SADS claims the lives of at least 12 apparently fit and healthy young people each week.

They added that they had been working with the charity Cardiac Risk in the Young (CRY), which works with people affected by SADS.

Pathologist Dr Sian Hughes told the court Tom’s heart was ‘structurally normal’ and showed no signs of coronary disease. She thought he had died of an ‘undetectable condition in his heart’.

She said SADS, a new condition that is still being probed by cardiologists, was the most likely cause of the sudden death.

The court heard the condition can be genetic and the family were warned to have regular check-ups.

Coroner Dr Andrew Reid said: ‘I hope some lessons may be learnt from his tragic death, although those lessons may be of limited consolation to his bereaved parents and relatives.

‘The nature and circumstances of his death have implications for his first-degree relatives. It may be something that some or all of them need to keep in their minds.’

He added that cardiologists were observing ‘more and more’ new irregular heart rhythms that come under the SADS umbrella.

Power steering your hearing

Tuesday, December 1st, 2009

A new nano-scale motor was recently discovered in the ear by researchers at the University of Utah College of Engineering. According to these scientists, the ear has a mechanical amplifier in it that uses electrical power to do mechanical amplification.

Herewith the article:

Power steering for your hearing

Study: Ears have tiny ‘flexoelectric’ motors to amplify sound

IMAGE: Richard Rabbitt, professor and chair of bioengineering at the University of Utah, led a study indicating that a mechanism known as “flexoelectricity” works within the cochlea of the ear to…

SALT LAKE CITY – Utah and Texas researchers have learned how quiet sounds are magnified by bundles of tiny, hair-like tubes atop “hair cells” in the ear: when the tubes dance back and forth, they act as “flexoelectric motors” that amplify sound mechanically.

“We are reporting discovery of a new nanoscale motor in the ear,” says Richard Rabbitt, the study’s principal author and a professor and chair of bioengineering at the University of Utah College of Engineering. “The ear has a mechanical amplifier in it that uses electrical power to do mechanical amplification.”

“It’s like a car’s power steering system,” he adds. “You turn the wheel and mechanical power is added. Here, the incoming sound is like your hand turning the wheel, but to drive, you need to add power to it. These hair bundles add power to the sound. If you did not have this mechanism, you would need a powerful hearing aid.”

The new study is scheduled for publication Wednesday, April 22 in PLoS ONE, a journal published by the Public Library of Science. The first author is Katie Breneman, a bioengineering doctoral student at the University of Utah. The study was coauthored by William Brownell, a professor of otolaryngology (ear, nose and throat medicine) at Baylor College of Medicine in Houston.

The researchers speculate flexoelectrical conversion of electricity into mechanical work also might be involved in processes such as memory formation and food digestion.

Dancing Cells and Hair-like Tubes in Your Ears

Previous research elsewhere indicated that hair cells within the cochlea of the inner ear can “dance” – elongate and contract – to help amplify sounds.

The new study shows sounds also may be amplified by the back-and-forth flexing or “dancing” of “stereocilia,” which are the 50 to 300 hair-like nanotubes projecting from the top of each hair cell.

Such flexing converts an electric signal generated by incoming sound into mechanical work – namely, more flexing of the stereocilia – thereby amplifying the sound by what is known as a flexoelectric effect.

“Dancing hairs help you hear,” says Breneman. The study “suggests sensory cells in the ear are compelled to move when they hear sounds, just like a music aficionado might dance at a concert. In this case, however, they’ll dance in response to sounds as minuscule as the sound of your own blood flow pulsating in your ear.”

In a yet-unpublished upcoming study, Rabbitt, Breneman and Brownell find evidence the hair cells themselves – like the stereocilia bundles atop those cells – also amplify sound by getting longer and shorter due to flexoelectricity.

IMAGE: University of Utah bioengineering doctoral student Katie Breneman and a large laboratory model of the cochlea, the part of the inner ear where incoming sound vibrations are converted into nerve…

Rabbitt and Brownell estimate the combined flexoelectric amplification – by both hair cells and the hair-like stereocilia atop hair cells – makes it possible for humans to hear the quietest 35 to 40 decibels of their range of hearing. Rabbitt says the flexoelectric amplifiers are needed to hear sounds quieter than the level of comfortable conversation.
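Decibels are logarithmic, so the 35 to 40 decibel range attributed to this amplification represents a large boost. A quick Python sketch using the standard dB conversions (textbook acoustics formulas, not taken from the study itself) shows its size:

```python
def db_to_pressure_ratio(db: float) -> float:
    """Sound-pressure ratio corresponding to a level difference in decibels."""
    return 10 ** (db / 20.0)

def db_to_power_ratio(db: float) -> float:
    """Acoustic power (intensity) ratio for the same decibel difference."""
    return 10 ** (db / 10.0)

# 35-40 dB works out to roughly a 56-100x boost in sound pressure,
# or about 3,000-10,000x in acoustic power.
print(db_to_pressure_ratio(40.0))  # -> 100.0
print(db_to_power_ratio(40.0))     # -> 10000.0
```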

“The beauty of the amplifier is that it allows you to hear very quiet sounds,” Brownell says. Rabbitt says that because hair cells die as people age, older people often “need a hearing aid because amplification by the hair cells is not working.”

Because hair-like stereocilia also are involved in our sense of balance, the flexing of stereocilia not only contributes to hearing, but “also likely is involved in our sense of gravity, motion and orientation – all the things needed to have balance,” Rabbitt says.

The new study is part of an effort by researchers to understand the amazing sensitivity of human hearing. Rabbitt says the hair cells are so sensitive they can detect sounds almost as small as those caused by Brownian motion, which is the irregular movement of particles suspended in gas or liquid and bombarded by molecules or atoms.

An Amplifier for All Sorts of Ears

Hair cells are inside the inner ears of many animals. They are within the ear’s cochlea, which is the spiral, snail-shell-shaped cavity where incoming sound vibrations are converted into nerve impulses and sent to the brain. Incoming sounds must be amplified because incoming sound waves are “damped” by fluid that fills the inner ear.

Hair cells are about 10 microns wide, and 30 to 100 microns long. By comparison, a human hair is roughly 100 microns wide. A micron is one-millionth of a meter. The hair-like stereocilia tubes poking out the top of a hair cell are each a mere 1 to 10 microns long and about 200 nanometers wide, or 200 billionths of a meter wide.

Brownell says the new study shows how the flexoelectric effect “can account for the amplification of sound in the cochlea.”

Stereocilia essentially are membranes that have been rolled into tiny tubes, so “the fact that a membrane can generate acoustic [mechanical] energy is novel,” says Brownell. “Imagine hearing a soap bubble talk.”

Flexoelectricity in a membrane was noted a few decades ago when a researcher in Europe showed that flexing or bending a simple membrane in a laboratory generated an electrical field. Then, in 1983, Brownell showed that a hair cell from a guinea pig’s ear changed in length when an electric field was applied to it in a lab dish.

IMAGE: The illustration shows a cross-section of part of the cochlea, the fluid-filled part of the inner ear that converts vibrations from incoming sounds into nerve signals that travel to the…

The length of stereocilia changes along the coiled length of the cochlea. Different lengths are sensitive to different frequencies of sound. And different animals have different ranges of stereocilia lengths.

Breneman and colleagues devised math formulas and used computer simulations to arrive at the new study’s key finding: The flexoelectric amplifier can explain why varying lengths of stereocilia predict which sound frequencies are heard most easily by a variety of animals, from humans to bats, mice, turtles, chickens and lizards.

“They found that a longer stereocilium was more efficient if it was receiving low-frequency sounds,” while shorter stereocilia most efficiently amplified high-frequency sound, Brownell says.

Breneman says scientists now know of five ways the ears amplify sound, and “what makes this one unique is that it would be present in the stereocilia bundles of all hair cells, not only outer hair cells.”

The cochleae of humans and other mammals have “inner hair cells” that sense sound passively and active “outer hair cells” that amplify sounds. Other higher animals have hair cells, without a distinction between inner and outer.

Because the new study shows the dancing hair-like stereocilia act like an amplifier on any hair cell, “it explains how this amplifier may work in all higher animals like birds and reptiles, not just humans,” Rabbitt says.

How the Amplifier Works in the Inner Ear – and Perhaps Elsewhere

When sound enters the cochlea and reaches the hair cells, sound pressure makes the hair-like stereocilia tubes “pivot left or right similar to the way a signpost bends in heavy wind,” Breneman says.

The tops of the tubes are connected to each other by protein filaments. Where each filament comes in contact with the top end of a stereocilium tube, there is an “ion channel” that opens and closes as the bundle of stereocilia sway back and forth.

When the channel opens, electrically charged calcium and potassium ions flow into the tubes. That changes the electric voltage across the membrane encasing each stereocilium, making the tubes flex and dance even more.

Such flexoelectricity amplifies the sound and ultimately releases neurotransmitter chemicals from the bottom of the hair cells, sending the sound’s nerve signal to the brain, Breneman says.

“We’ve got these nanotubes – stereocilia – moving left and right and converting electrical power [from ions] into mechanical amplification of sound-induced vibrations in the ear,” Rabbitt says. He says the “flexoelectric motor” is the collective movement of the stereocilia in response to sound.

Brownell says the new study – showing that sound is amplified by “dancing” membrane tubes atop hair cells – adds to growing evidence that membranes do not “just sit there,” but instead are “dynamic structures capable of doing work using a mechanism called flexoelectricity.”

Brownell and Rabbitt note that stereocilia involved in amplifying hearing have similarities with other tube-like structures in the human body, such as villi in the gut, dendritic spines on the signal-receiving ends of nerve cells and growth cones on the signal-transmitting axon ends of growing nerve cells.

So they speculate flexoelectricity may play a role in how villi in the intestines help absorb food and how nerves grow and repair themselves.

“There is some evidence that dendrites and axons change their diameter during intracellular voltage changes, and that could well have flexoelectric origins,” says Rabbitt. “Any time you have a membrane with small diameter – like in axons, dendrites and synaptic vesicles [located between nerve cells], there will be large flexoelectric forces and effects. Therefore, the flexoelectric effect may be at work in things like learning and memory. But that’s pretty speculative.”


Send us more interesting stuff.

Take a moment and share this:

Bookmark and Share

Blind boy uses his ears to ‘see’

Tuesday, October 6th, 2009

We recently had a post regarding echolocation: How to speak like dolphins and how to see with your hearing (it contains a few great video clips).

This time round the BBC reported on a boy from Dorset who has learned to use echoes to picture the world around him – similar to the sonar techniques used by bats and dolphins. He clicks his tongue on the roof of his mouth, and from the sound that returns he works out the distance, shape, density and position of objects. The echolocation technique has helped Lucas, who was born blind, play basketball and rock climb.

Lucas Murray video

He was taught the system by blind Californian Daniel Kish, 43, who founded the World Access for the Blind charity. Lucas’s parents Sarah and Iain saw Mr Kish on TV and asked him to visit.

Mr Kish said: “Lucas is one of the first in the UK to use this technique. He is able to click his tongue and determine where things are around him and what things are around him, and he is able to travel comfortably without holding on to people.

“The click basically emanates a sound which bounces off the environment, a bit like the flash of a camera.”

Lucas tells distance by timing how long the echo takes to return, and he works out an object’s location by which ear the sound reaches first. He picks up its density and shape from the intensity of the sound bouncing back. An object moving away creates a lower pitch and one moving closer a higher pitch. Mr Kish said Lucas determines the qualities of an object by the characteristics of the sound that comes back.

“He does play basketball; he is able to make it into the hoop by clicking. He is actually pretty good at that,” Mr Kish added. “He is doing very well and his mobility is amazing, the best for his age in the UK.”
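The arithmetic behind these cues is simple: the click travels out and back, so the distance is half of the speed of sound times the delay, and whichever ear hears the echo first gives the side. A minimal Python sketch (the speed of sound and the delay values are illustrative, not measurements from the report):

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 degrees C

def echo_distance(delay_s: float) -> float:
    """Distance to an object from the round-trip delay of a click's echo."""
    # The sound travels out and back, so halve the total path length.
    return SPEED_OF_SOUND * delay_s / 2.0

def bearing_side(interaural_delay_s: float) -> str:
    """Which side an object is on, from which ear the echo reaches first.

    Positive delay means the right ear heard it first (an illustrative
    sign convention).
    """
    if interaural_delay_s > 0:
        return "right"
    if interaural_delay_s < 0:
        return "left"
    return "straight ahead"

# An echo returning after 20 ms puts the object roughly 3.4 m away.
print(echo_distance(0.020))
print(bearing_side(0.0002))
```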


A couple of videos on how Daniel Kish implements echolocation (including mountain biking!!):


Discovery of different ion-pore location on hair cells in the inner ear

Wednesday, September 30th, 2009

New imaging suggests that instead of being on the sides of the tallest stereocilia, ion channels sit on top of the shorter projections, scientists at Stanford University report.

Millions of people suffer from hearing loss and deafness, and until scientists understand the molecular basis of normal hearing, it’s difficult to understand what can go wrong. “We need to know specifically how hearing works,” said Anthony Ricci, PhD, associate professor of otolaryngology, “or we can’t come up with better treatments.”

Herewith the full article:

Discovery of ion-pore location on cell alters long-accepted model of hearing

Scientists thought they had a good model to explain how the inner ear translates vibrations in the air into sounds heard by the brain. Now, based on new research from the School of Medicine, it looks like parts of the model are wrong.

Anthony Ricci, PhD, associate professor of otolaryngology, and colleagues at the University of Wisconsin and the Pellegrin Hospital in France found that the ion channels responsible for hearing aren’t located where scientists previously thought. The discovery turns old theories upside down, and it could have major implications for the prevention and treatment of hearing loss.

“I had thought that the channels were in a very different place,” said Peter Gillespie, PhD, professor of otolaryngology at Oregon Health and Science University, who was not involved with the study. “This changes how we look at all sorts of previous data.” The findings were published online in Nature Neuroscience on March 29.

Ricci explained, “Location is important, because our entire theory of how sound activates these channels depends on it. Now we have to re-evaluate the model that we’ve been showing in textbooks for the last 30 years.”

Ion channels on the inner ear “hair cell” aren’t located where scientists had thought. New imaging suggests that instead of being on the sides of the tallest stereocilia, ion channels sit on top of the shorter projections.

Deep inside the ear, specialized cells called “hair cells” sense vibrations in the air. The cells contain tiny clumps of hair-like projections, known as stereocilia, which are arranged in rows by height and connected by thin filaments called “tip links.” Sound vibrations cause the stereocilia to bend slightly, and scientists think the movement opens small pores, called ion channels. As positively charged ions rush into the hair cell, mechanical vibrations are converted into an electrochemical signal that the brain interprets as sound.

But after years of searching, scientists still haven’t identified the ion channels responsible for this process. To pinpoint the channels’ location, Ricci and colleagues squirted rat stereocilia with a tiny water jet. As pressure from the water bent the stereocilia, calcium flooded into the hair cells. The researchers used ultra-fast, high-resolution imaging to record exactly where calcium first entered the cells. Each point of entry marked an ion channel.

The results were surprising: Instead of being on the tallest rows of stereocilia, like scientists previously thought, Ricci’s team found ion channels only on the middle and shortest rows.

“It doesn’t mean that all our old ideas were wrong, but it means we haven’t put the pieces together in the proper way yet,” Ricci said.

Bundles of hair-like projections, called stereocilia, on a cell in the inner ear detect vibrations in the air and translate them into sound. In mammals, three rows of stereocilia are arranged by height, as shown above.

Ion channels on hair cells not only convert mechanical vibrations into signals for the brain, but they also help protect the ear against sounds that are too loud. Through a process called adaptation, the ear adjusts the sensitivity of its ion channels to match the noise level in the environment. Most people are already familiar with this phenomenon, Ricci said, though they might not realize it. “If you watch TV in bed and you have the sound turned down low, you can hear fine when you’re going to sleep,” he said. “But then when you get up in the morning and turn on the news, you have to turn the volume up.”

That’s because at night, when everything is quiet, the ear turns up its amplifier to hear softer sounds. “But when you get up in the morning,” Ricci said, “and the kids are running around and the dog is barking, the ear has to reset its sensitivity so you can hear in noisier conditions without hurting your ear.”

Defects in the ear’s adaptation process put people at risk for both age-related and noise-related hearing loss. Understanding adaptation is a fundamental step in preventing hearing loss, said Robert Jackler, MD, the Edward C. and Amy H. Sewall Professor in Otorhinolaryngology at Stanford.

“Many forms of hearing loss and deafness are due to disturbances in the molecular biology of the hair cell,” Jackler said. “When you understand the nuts and bolts of how the hair cell works, you can understand how it goes wrong and can set about learning how to fix it.”

The study was funded by grants from the National Institute on Deafness and Other Communicative Disorders. Other scientists have attempted similar experiments in the past, but they used less sensitive imaging techniques. “Our microscope took images at 500 frames per second,” said Ricci, who led the imaging experiments. “That’s much faster than it’s ever been done before.”

Ricci and colleagues also used hair cells from rats, while previous experiments had been done in bullfrogs. Because mammals have fewer, more widely spaced rows of stereocilia, the team was able to determine the precise location of the ion channels.

“They chose their experimental preparation quite wisely,” Gillespie said. “The ear is really hard to get at, because it’s a tiny organ, it’s encased in very hard bone and there are very few hair cells.”

But Ricci’s study wasn’t just a triumph in experimental protocol. Millions of Americans suffer from hearing loss and deafness, and until scientists understand the molecular basis of normal hearing, it’s difficult to understand what can go wrong. “We need to know specifically how hearing works,” Ricci said, “or we can’t come up with better treatments.”




NHS job cuts

Thursday, September 3rd, 2009

Gordon Brown says he is going to protect the NHS, yet the next moment taxpayer money is spent on management consultants who advise cutting 10% of NHS staff. According to Shadow Health Secretary Andrew Lansley MP, this does not seem like good advice.

To read more, have a look at the following two links: the first comments on the outcome of the study, and the second on ministers ruling out NHS job cuts.

In short the study concludes:

The NHS would need to slash its workforce by around 10% to help meet planned savings of £20 billion, it has been reported.

A study, commissioned from consultancy firm McKinsey and Company, said the workforce would need to be cut by 137,000 to meet efficiency savings by 2014.

It said clinical staff would have to go alongside administrators.

The report recommends a range of possible actions, such as a recruitment freeze starting in the next two years, a reduction in medical school places from October, and an early retirement programme to encourage older GPs and community nurses to make way for “new blood/talent”.

The report was presented to the Department of Health in March this year; it carries the department’s logo and has been disseminated among senior NHS managers.

The study said £2.4 billion could be saved if hospitals with the lowest levels of staff productivity raised them nearer to the average.


Why birds fly into jet engines

Thursday, July 9th, 2009

A strange phenomenon but here is an interesting video and article on the subject.

By Dr. S. Allen Counter

While delivering a lecture at the University of Alaska, Fairbanks, years ago, I was asked to address a group of students who had recently lost their fathers in a military airplane crash at nearby Elmendorf Air Force Base. Their jet had been brought down because of a “bird strike” – birds flying into the aircraft’s engine. Twenty-four people died.

It was a difficult assignment, but I overcame my emotions and expressed condolences. A student asked, “Why do birds collide with airplanes, and how can we prevent such collisions?”

That is a question that the aviation world has tried to answer for years. Hundreds of deaths occur worldwide each year as a result of bird strikes, with a cost of more than $600 million annually to US aviation alone, according to the Federal Aviation Administration.

The issue is made even more pressing by the recent crash landing of US Airways Flight 1549 in the Hudson River following an apparent collision with birds.

In the 1980s, before my trip to Alaska, I undertook a biological study of why birds cannot get out of the way of aircraft. My investigation took me from Logan International Airport to a sea gull nesting area on Monomoy Island.

The field maintenance crew at Logan allowed me and one of my students to ride with them down the runways each morning on their brightly colored trucks, with a designated person on the back firing a shotgun to scare away the sea gulls perched on the runways for warmth.

To my utter dismay, large jets would take off at closer-than-comfortable distances just behind our truck. The field crew acknowledged worrying that the birds would become disoriented and fly right into the plane’s engine.

Like most passengers, I never knew that such a risky exercise existed just outside the plane on which I comfortably sat. To see these birds, including owls and geese, so close during takeoff gave me tremendous discomfort.

Logan staffers captured several sea gulls for my study. Suspecting that the collisions had to do with sound localization – the ability to tell where a sound was coming from – and hearing, I examined the birds’ inner ears and brains for clues.

The findings were remarkable.

By placing electrodes in the section of the brain that responded to sounds, I discovered that the most sensitive region of the birds’ hearing was in the 1 to 3 kilohertz range – which, interestingly, is also the peak acoustic noise output of a modern jet engine.

In other words, the intense jet noise may interfere with a bird’s ability to hear by overstimulating the bird’s inner-ear hearing receptors. The brain wave responses also showed diminished hearing capacity in older birds.



Prevent swimmer’s ear this summer

Tuesday, June 30th, 2009

With temperatures soaring like they have been the past few days, many of us will resort to swimming to cool down. Here are some handy hints from the experts on preventing swimmer’s ear.


Swimmer’s Ear – Otitis Externa

Pediatric Basics

By Vincent Iannelli, M.D.

Children with swimmer’s ear (otitis externa) have inflammation in their external ear canal. It is usually caused by water irritating the skin inside the ear, which then becomes infected with bacteria or, more rarely, a fungus.

Symptoms of Swimmer’s Ear

Ear pain is the most common symptom of swimmer’s ear. Unlike the pain of a middle ear infection (otitis media), which might follow a cold, the ear pain from swimmer’s ear is made worse by tugging on your child’s outer ear. Looking inside your child’s ear, your pediatrician will likely see a red, swollen ear canal with some discharge.

Diagnosis of Swimmer’s Ear

The diagnosis of swimmer’s ear is usually made when a child has the classic symptom of outer ear pain that is made worse by tugging on the child’s ear. Swimmer’s ear can be confused with a middle ear infection, especially when your pediatrician is not able to see your child’s eardrum.

Treatments for Swimmer’s Ear

Once your child has swimmer’s ear, it is not the time to use alcohol-based ear drops, which are often used to prevent swimmer’s ear. They will likely burn and make your child’s ear feel even worse. Instead, swimmer’s ear is usually treated with antibiotic ear drops, either with or without added steroids (which some experts think can reduce inflammation and make symptoms go away faster). Common otic (ear) drops that are used to treat swimmer’s ear include:

  • Ciprodex*
  • Cipro HC*
  • Cortane-B*
  • Cortisporin*
  • Domeboro Otic
  • Floxin
  • Vosol
  • Vosol HC*

*antibiotic ear drops that include a steroid.

Although expensive, Floxin, Ciprodex, and Cipro HC are most commonly prescribed, as they have fewer side effects, can be used just twice a day, and may provide better coverage against the bacteria that cause swimmer’s ear.

For mild cases of swimmer’s ear, you might ask your pediatrician if you can first try a solution of half strength white vinegar ear drops (half water/half white vinegar) twice a day — a common home remedy that some parents try.

Pain relievers, including acetaminophen (Tylenol) or ibuprofen (Motrin or Advil), can also be used to reduce your child’s pain until his ear drops start working.

If there is enough swelling that ear drops can’t get into your child’s ear, your pediatrician may place an ear wick inside his ear canal.

Prevention of Swimmer’s Ear

In general, you can prevent swimmer’s ear by keeping water out of your kids’ ears. Fortunately, that doesn’t mean that your kids can’t swim and enjoy the water. Instead, use an over-the-counter ear-drying agent that contains isopropyl alcohol (rubbing alcohol), such as Auro-Dri or Swim Ear, or one with acetic acid and aluminum acetate (Star-Otic). If you like, you might also create your own homemade swimmer’s ear prevention solution by mixing equal parts of rubbing alcohol and white vinegar, and putting it in your child’s ears after he swims.

Although some experts think that earplugs are irritating and can lead to swimmer’s ear, you can also keep water out of your kids’ ears by using a barrier, like earplugs, including Mack’s AquaBlock Earplugs or their Pillow Soft silicone Earplugs. If your kids have a hard time keeping their earplugs in, consider also using the Aqua-Earband or Ear Band-It neoprene swimmer’s headband.

What You Need To Know

  • Swimmer’s ear is usually caused by an infection with the Pseudomonas aeruginosa or Staphylococcus aureus bacteria.
  • You can often prevent swimmer’s ear by keeping water out of your child’s ears.
  • Pools that are poorly maintained are more likely to spread swimmer’s ear.
  • Swimmer’s ear can be treated with prescription antibiotic drops, either with or without steroids.
  • Once your child is better, you should continue to use his ear drops for an additional two or three days, during which time he stays out of the water.
  • Ear wax may be protective against swimmer’s ear, so don’t aggressively remove wax from your child’s ear. Cleaning your child’s ears with a cotton-tip applicator may also put him at greater risk for swimmer’s ear.
  • In addition to swimming, kids can be at risk for getting swimmer’s ear if they get water in their ears when bathing or showering.
  • Oral antibiotics are rarely needed to treat uncomplicated cases of swimmer’s ear.
  • Malignant otitis externa is a rare complication of swimmer’s ear.
  • Fungal infections and noninfectious disorders, including eczema, psoriasis, seborrheic dermatitis, and allergic contact dermatitis, can also cause otitis externa, and should be suspected in chronic cases of swimmer’s ear.


Send us more interesting stuff.

OAEs to replace passwords?

Tuesday, June 9th, 2009

Scientists at the University of Southampton, UK are researching the possible use of OAEs as part of biometrics. By stimulating the ear with clicks, they hope to capture the unique OAE of a person’s ear and use it as an identification tool. Here’s the article:

Biometrics Turns Your Ear Into Your Password

What is your mother’s maiden name? What was your high school mascot? What town were you born in? Who cares! Pretty soon, your body could be the only password you’ll ever need.


Image courtesy of U. Southampton

As the field of biometrics takes off, the securities that protect your identity – from your credit cards to your property – will increasingly focus on your own flesh and blood. And it’s not all fingerprint scanners and voice recognition. New technologies are allowing your identity to be confirmed (or revealed) from a greater distance, more quickly, and without necessarily asking your permission.

Take your ear. No, that’s not a van Gogh joke; it’s called otoacoustic emission (OAE), and it’s the center of one of the many fascinating frontiers in biometric technology. Every time an auditory stimulus strikes your ear, hair cells along the spiral-shaped cochlea (part of the inner ear) translate the vibrations into action potentials, which are relayed on to the temporal lobe in the brain. With the right stimulus – a series of clicks, for example – these hair cells make some noise of their own as they expand and contract: otoacoustic emission.

Everyone’s inner ear has a unique structure, sort of like your fingerprint. Subtle differences in the cochlea translate into subtle differences in the OAE it produces. Dr. Stephen Beeby, an engineer at the University of Southampton, UK, is leading a research project to capture these unique sounds and use them as part of biometrics. By stimulating the ear with clicks, he hopes to capture the unique OAE of a person’s ear and use it as an identification tool.

If supersensitive microphones capable of sensing an OAE were built into your cell phone, your identity could be confirmed from afar. This way, your bank or cell phone company could know for sure that you were you, and not that guy who found your credit card on the sidewalk. If the technology is perfected and fully implemented, your cochlear-ID could secure every phone purchase you make. It could also shut down your cell phone or MP3 player the moment they touch foreign ears. Just in case that German guy who stole my iPod is reading: your days are numbered, mein Freund.
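The verification step described here — matching a freshly captured OAE against a stored template — can be sketched as a simple threshold test. The sketch below is purely illustrative: the feature vectors, the cosine-similarity measure, and the 0.95 threshold are all assumptions for the sake of the example, not details from the Southampton project.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled, probe, threshold=0.95):
    """Accept the probe only if it is close enough to the enrolled template."""
    return cosine_similarity(enrolled, probe) >= threshold

# Hypothetical OAE features (e.g., response energy in a few frequency bands)
enrolled = [0.82, 0.40, 0.13, 0.55, 0.29]
genuine  = [0.80, 0.42, 0.12, 0.57, 0.28]   # same ear, slight measurement noise
impostor = [0.10, 0.75, 0.60, 0.05, 0.44]   # a different ear

print(verify(enrolled, genuine))   # True: similarity is nearly 1.0
print(verify(enrolled, impostor))  # False: similarity falls below threshold
```

Raising or lowering the threshold trades off the two failure modes the researchers worry about: a stricter threshold reduces false matches but makes genuine users (say, with waxy or infected ears) more likely to be rejected.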

Introducing new biometric techniques is no easy task. The researchers will have to show that OAE signals do not change over time, thereby providing a consistent biometric over an individual’s lifetime. They will also need to prove that their technology has a low rate of false-match mistakes before it sees widespread use. There are still some bugs to work out: excessive wax or ear infections can muddle the signal, and apparently alcohol also dampens the sound of the OAE. That could make that late-night pizza delivery harder than you thought.

Audiologists have even suggested that they can distinguish gender, and even ethnic groups, by looking at OAE signals. This broadens the scope of how this technology could eventually be used; it also raises a whole host of ethical questions that surround biometric technology. How many aspects of our identity do we want to reveal with the sound of a click? I suppose you could always thwart your would-be identifier with speaker phone.

The research is funded through mid-2010, so we should be seeing their final product – bugs worked out – by then. Researchers hope to sell their microphones and software to electronics companies, and you shouldn’t be surprised to see this technology hitting the market soon after. So if you’re always forgetting your passwords, you might be able to – ahem – lend an ear instead. 


Send us more interesting stuff.

Take a moment and share this:

Bookmark and Share

Stem Cell Research Poll – Right or Wrong?

Wednesday, May 20th, 2009

You’ve read the previous article, “New Stem Cell Therapy May Lead To Treatment For Deafness”.

Please participate in the following poll: As an audiologist, do you agree with auditory stem cell research?

(Only click on one of the images once)


Yes, I agree with Stem Cell Research


No, I do not agree with Stem Cell Research

I’ll send you the results in next week’s email.

New Stem Cell Therapy May Lead To Treatment For Deafness

Wednesday, May 20th, 2009

The University of Sheffield has successfully isolated human auditory stem cells from foetal cochleae and found they had the capacity to differentiate into sensory hair cells and neurons. These have the potential for a variety of applications….

ScienceDaily (Mar. 23, 2009) — Deafness affects more than 250 million people worldwide. It typically involves the loss of the sensory receptors, called hair cells for their “tufts” of hair-like protrusions, and their associated neurons. The transplantation of stem cells that are capable of producing functional cell types might be a promising treatment for hearing impairment, but no human candidate cell type has been available to develop this technology.

Cross section of the cochlea, showing hair cell nerves. (Credit: Courtesy of Wikimedia Commons)

A new study led by Dr. Marcelo N. Rivolta of the University of Sheffield has successfully isolated human auditory stem cells from fetal cochleae (the auditory portion of the inner ear) and found they had the capacity to differentiate into sensory hair cells and neurons.
The researchers painstakingly dissected and cultured cochlear cells from 9- to 11-week-old human fetuses. The cells were expanded and maintained in vitro for up to one year, with continued division for the first 7 to 8 months and up to 30 population doublings, which is similar to other non-embryonic stem cell populations, such as bone marrow. Gene expression analysis showed that all cell lines expressed otic markers that lead to the development of the inner ear as well as markers expressed by pluripotent embryonic stem cells, from which all tissues and organs develop.


They were able to formulate conditions that allowed for the progressive differentiation into neurons and hair cells with the same functional electrophysiological characteristics as cells seen in vivo.
“The results are the first in vitro renewable stem cell system derived from the human auditory organ and have the potential for a variety of applications, such as studying the development of human cochlear neurons and hair cells, as models for drug screening and helping to develop cell-based therapies for deafness,” say the authors.
Although the hair cell-like cells did not show the typical formation of a hair bundle, the authors suggest that future studies will aim to improve the differentiation system. They are currently working on using the knowledge gleaned from this study to optimize the differentiation of human embryonic stem cells into ear cell types.
“Although considerable information has been obtained about the embryology of the ear using animal models, the lack of a human system has impaired the validation of such information,” the authors note.
“Access to human cells that can differentiate should allow the exploration of features unique to humans that may not be applicable to animal models,” says Donald G. Phinney, co-editor of the journal. The protocol they developed to expand and isolate human fetal auditory stem cells may be able to be adapted for deriving clinical-grade cells with potential therapeutic applications.
Dr Ralph Holme, director of biomedical research for Royal National Institute for Deaf and Hard of Hearing People, said: “There are currently no treatments to restore permanent hearing loss so this has the potential to make a difference to millions of deaf people.”
The study is published in the April issue of Stem Cells.

Article source:

Send us more interesting stuff.