Medical News

New pacemaker offers heart failure hope

Medical News - Tue, 06/24/2014 - 18:19

A new pacemaker which synchronises heart rate with breathing could "revolutionise" the lives of people with heart failure, The Daily Telegraph reports.

Pacemakers are small electronic devices, implanted in the body, that help keep the heart beating regularly. They are normally used in people with conditions that disrupt the beating of the heart, such as sick sinus syndrome or heart block.

Current pacemakers actually make the heart beat "too regularly", as a healthy heart shows slight variations in rate that are synchronised with our breathing.

This latest research tested a more advanced form of pacemaker, known as an artificial central pattern generator (ACPG), that aims to restore the natural synchronisation of heart rate with breathing. The generator is designed to receive nerve signals from the diaphragm (a muscle used to expand and contract the lungs) and then transmit the signals to the vagus nerve, which controls heart rate.

The particular area of medical interest for the ACPG is slightly different from the current use of pacemakers. It is thought that the ACPG could be used in people with heart failure, as previous research has demonstrated that this natural synchronisation is lost in heart failure and may be associated with poor health outcomes.

The results of this early laboratory study were promising, with the technology able to coordinate a rat's heart rate with its breathing pattern.

 

Where did the story come from?

The study was carried out by researchers from the Universities of Bath and Bristol, and the University of São Paulo in Brazil. It was partly supported by the EPSRC (UK) – Higher Education Investment Fund.

The research was published in the peer-reviewed medical journal Journal of Neuroscience Methods.

The study was actually published back in 2013, but has hit the headlines now, as the British Heart Foundation has said it is to provide funding to allow the researchers to continue their analysis of ACPGs.

The Daily Telegraph’s reporting of the study is of a good quality and includes a discussion with experts, who generally view this new development in a positive light.

The associate medical director at the British Heart Foundation is quoted as saying that, “this study is a novel and exciting first step towards a new generation of smarter pacemakers. More and more people are living with heart failure, so our funding in this area is crucial. The work from this innovative research team could have a real impact on heart failure patients’ lives in the future”.

 

What kind of research was this?

This was laboratory research concerned with the design of a new pacemaker that is able to synchronise the heart rate with breathing pattern, as happens naturally. 

Pacemakers are fitted in people that have conditions that disrupt the normal beating of the heart.

The researchers say that all mammals have what are called “central pattern generators” (CPGs). These contain small groups of nerve cells that regulate biological rhythms and coordinate motor rhythms, such as breathing, coughing and swallowing.

The CPG in the brainstem (the bottom part of the brain that connects to the spinal cord) is said to coordinate the heartbeat with our breathing pattern.

This phenomenon is known as “respiratory sinus arrhythmia” (RSA) – an alteration in the normal heart rate that naturally occurs during our breathing cycle.

In people with heart failure (a disease process with many causes, where the heart is unable to pump enough blood to meet the body’s demands), RSA is lost, and this is said to be a prognostic indicator for poor outcome.

The aim of this latest study was to try to build an artificial (silicon) CPG that could generate these rhythms. It was then tested in rats, to see whether it was able to alter the rat’s heart rate during the respiratory cycle.

 

What did the research involve?

The researchers describe how they developed the artificial CPG in preparation for live testing in rats.

The laboratory process is complex, but essentially the rats were anaesthetised and their body systems artificially manipulated. The CPG was connected to the phrenic nerve, which supplies the diaphragm, and the vagus nerve, which controls automatic processes in various body organs, including the heart rate.

The CPG received signals from the phrenic nerve, which were then processed electronically in the CPG, to produce voltage oscillations that stimulated the vagus nerve to control the heart rate.

The researchers monitored the heart using an electrocardiogram (ECG). They also looked at what happened when they injected a chemical (sodium cyanide) to stimulate the respiratory rate via sensory receptors.

The artificial CPG circuit was designed so it could provide three-phase stimulation, stimulating the vagus nerve during inspiration, early expiration and late expiration.

 

What were the basic results?

In rats, the heart rate naturally oscillates in rhythm with respiration, giving a natural RSA with a period of 4.1 seconds and an amplitude of around 0.08Hz (the size of the swing in heart rate, which is itself measured here in beats per second).
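To make sense of the units (a back-of-envelope reading on our part, not a calculation taken from the paper): a period of 4.1 seconds corresponds to a breathing-cycle frequency of

\[ f = \frac{1}{T} = \frac{1}{4.1\,\text{s}} \approx 0.24\,\text{Hz} \]

while the 0.08Hz amplitude describes how far the heart rate, itself expressed in beats per second (Hz), swings either side of its mean during each breath.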

In the laboratory, using the artificial CPG, the artificial RSA varied depending on the timing of impulses during the breathing cycle. The artificial CPG had the strongest influence when the vagus nerve was stimulated during the first inspiratory phase. This caused the heart rate to roughly halve, from 4.8 to 2.5 beats per second. The researchers report that, during stimulation, the heart rate fell by around 3 beats per second for every second of stimulation and, during recovery after stimulation, it climbed back to its resting value at around 1 beat per second for every second.

The CPG had a similar effect when the vagus nerve was stimulated during the early expiratory phase, but less of an effect when stimulated during late expiration: the heart rate fell more slowly (at around 1 beat per second for every second of stimulation) and only reached between 2.5 and 4 beats per second, rather than 2.5.

When they used the chemical to stimulate respiration, they found that this caused an increased burst rate of phrenic nerve activity, such that there was an increased rate of stimulation to the vagus nerve, allowing less time for the heart rate to recover. The heart rate was still synchronised to the respiratory rate, but the voltage oscillations had weaker amplitude. 

 

How did the researchers interpret the results?

The researchers conclude that their study shows neurostimulation using an ACPG can augment RSA (improve synchronisation between heart rate and breathing). They suggest this opens a new line of therapeutic possibilities for an artificial device that can restore RSA in people with cardiovascular conditions such as heart failure, where the synchronisation of heart rate with respiration has been lost.

 

Conclusion

This laboratory research describes the complex design and animal testing of an ACPG that aims to restore the natural synchronisation of the heart rate with the breathing pattern. Naturally in the body, our heart rate alters slightly as we breathe in and out (RSA).

In people with heart failure (a disease process with many causes, where the heart is unable to pump enough blood to meet the body’s demands), RSA is described as being "lost", and previous research has suggested this to be a prognostic indicator for poor outcome.

This research described the development of an ACPG and its testing in rats. The generator received incoming signals from the phrenic nerve, which connects to the diaphragm, and then produced voltage oscillations that stimulated the vagus nerve, which controls heart rate.

The results were promising, demonstrating that the technology was able to coordinate the heart rate with breathing pattern. The heart rate varied, depending on the stage during breathing that the vagus nerve was stimulated.

When stimulated during the inspiratory phase, it decreased heart rate by around 50% of the normal rate, but had little effect on heart rate during the late expiratory phase.

Overall, this technique shows promise, but having only so far been tested in rats in the laboratory, it is far too early to tell if and when it will be developed for testing in humans and, importantly, whether it would actually have any effect on health outcomes.

We expect that further animal trials would be required, with positive results, before a phase I trial in humans is carried out.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Next generation pacemaker will 'revolutionise' patients' lives, scientists say. The Daily Telegraph, June 23 2014

Links To Science

Nogaret A, Zhao L, Moraes DJA, et al. Modulation of respiratory sinus arrhythmia in rats with central pattern generator hardware. Journal of Neuroscience Methods. Published online January 15 2013

Categories: Medical News

Menthol cigs 'encourage teens to smoke more'

Medical News - Tue, 06/24/2014 - 14:30

"Menthol cigarettes ARE more addictive," the Mail Online claims, based on a survey of 5,000 teenagers. The 2010-11 Canadian school survey found that 16% of teenagers aged 14 to 18 smoke cigarettes.

The research found that teenagers who smoked menthol cigarettes were on average smoking around 60% more cigarettes per week than teenagers who smoked regular cigarettes (43 compared with 27).

The researchers' unproven speculation is that this difference is because menthol cigarettes are less "harsh" on the throat. But it's important to note that this type of study is not able to prove that menthol cigarettes are more addictive than normal ones.

While the media has focused on menthol smokers being almost three times more likely to intend to continue to smoke, a worrying 89% of all smokers planned to continue.

In the UK the number of teenagers who smoke is in decline, but it is estimated that 10% of 15-year-olds are smokers.

If you are a teenage smoker, read more about why you should quit and the best ways to go about it.

You can also download the free NHS Choices stop smoking iPhone app.

 

Where did the story come from?

The study was carried out by researchers from the University of Waterloo and Concordia University in Canada, and was funded by the Canadian Cancer Society Research Institute.

It was published in the peer-reviewed medical journal, Cancer Causes and Control.

The study was accurately summarised by the Mail Online in the main body of the reporting, although the headline was inaccurate.

The study does not prove menthol cigarettes are more addictive than normal cigarettes. It shows that teenagers who smoke menthol cigarettes smoke more than those who only smoke non-menthol cigarettes.

To prove an increased level of addiction, researchers would need groups of participants to try to quit smoking either type of cigarette and then assess how difficult each group found it on average.

 

What kind of research was this?

The researchers analysed results from the 2010-11 Canadian Youth Smoking Survey. This was a cross-sectional survey of adolescents aged 14 to 18, performed every two years across schools in Canada.

Cross-sectional studies can provide information on prevalence patterns and suggest associations, but they cannot prove cause and effect.

 

What did the research involve?

The Canadian Youth Smoking Survey was sent to all state schools in Canada. The students were asked to provide details of the number and type of cigarettes they smoked on the days they smoked, and the number of cigarettes they smoked in the previous week.

The students' intent to continue smoking was assessed by their response to the question, "At any time in the next year do you think you will smoke a cigarette?"

Other data included whether they were allowed to smoke at home and if their parent, guardian or friends smoked.

Menthol cigarette smokers were defined as those who indicated smoking at least one menthol cigarette in the last 30 days. This definition could therefore include both those who exclusively smoked menthol cigarettes and those who also smoked non-menthol cigarettes.

The researchers then analysed the data of adolescents who smoked.

 

What were the basic results?

There were responses from 56% of the schools, with 73% of the students filling out the questionnaire. Of the 31,396 students, 5,035 were smokers. The researchers analysed the results for 4,736 of these.

Menthol cigarettes were used by 32% of current smokers in the previous 30 days. Menthol smokers smoked on average 6.86 cigarettes per day, compared with 4.59 for non-menthol smokers. When this was estimated per week, menthol smokers smoked on average 42.74 cigarettes, compared with 26.33 for non-menthol smokers.
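As a quick arithmetic check of our own, the "around 60%" difference quoted in the headlines follows directly from these weekly averages:

\[ \frac{42.74 - 26.33}{26.33} \approx 0.62 \]

that is, menthol smokers smoked roughly 60% more cigarettes per week than non-menthol smokers.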

Menthol smokers were more likely to intend to continue smoking in the next year than non-menthol smokers (odds ratio [OR] 2.95, 95% confidence interval [CI] 2.24 to 3.90). However, approximately 89% of all smokers intended to continue smoking.

Adolescents living in homes with a total ban on smoking smoked on average 1.64 fewer cigarettes per day (95% CI -2.49 to -0.79). The average number of cigarettes smoked per day was higher for:

  • males – 1.10 more (95% CI 0.40 to 1.81)
  • those with a smoking parent or guardian – 2.11 more (95% CI 1.30 to 2.92)
  • those with at least one friend who smokes – 2.36 more (95% CI 1.42 to 3.29)

 

How did the researchers interpret the results?

The researchers concluded that, "Adolescent menthol smokers smoke more cigarettes and report their intent to continue smoking in the next year more frequently than non-menthol smokers."

They go on to advise that, "The findings of this study, along with existing evidence, suggest the need for banning menthol in Canada, in part because of its significant effect on youth smoking."

 

Conclusion

This large Canadian survey has worryingly found that despite the dangers of smoking, 16% of teenagers aged between 14 and 18 are smoking, and 89% of them intend to continue. The number of cigarettes smoked was higher in teenagers who smoked menthol cigarettes.

This included those who reported at least one menthol cigarette smoked in the past 30 days, so they may not have been exclusive smokers of menthol cigarettes.

These findings reflect the widespread belief that menthol cigarettes are a more attractive option for teenagers.

However, the survey relies on self-reporting, so the number of cigarettes smoked is likely to be underestimated. It is unclear why 27% of the students did not participate in the survey, or why the researchers only included 4,736 of the 5,035 students who smoked.

Nevertheless, these results show that more targeted awareness campaigns of the health risks of cigarette smoking are required for this age group in Canada.

Recent results from the Office for National Statistics in the UK showed a trend in the right direction for youth smoking. The percentage of 15-year-olds who are regular smokers has reduced from 28% in 1994 to 10% in 2012.

Hopefully, the proposed ban of menthol cigarettes will stop UK teenagers falling for the misconception that menthol cigarettes are "lighter on the throat" and are therefore in some way healthier.

If you smoke, there is a wide range of aids and advice that can help you quit.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Menthol cigarettes ARE more addictive: Teenagers who smoke them get through twice as much tobacco every week. Mail Online, June 23 2014

Links To Science

Azagba S, Minaker LM, Sharaf MF, et al. Smoking intensity and intent to continue smoking among menthol and non-menthol adolescent smokers in Canada. Cancer Causes and Control. Published online June 10 2014

Categories: Medical News

Possible pesticide link to autistic disorders

Medical News - Mon, 06/23/2014 - 18:19

“Pregnant women who live near fields sprayed with pesticides can run more than three times the risk of having a child with autism,” the Mail Online reports.

US researchers conducted a study that examined whether living in close proximity to where four common classes of agricultural pesticides were used while pregnant was associated with a higher risk of the mother’s offspring having autism spectrum disorder (ASD), or a similar developmental disorder.

Data on pesticide use was “mapped” to the mother’s place of residence while she was pregnant.

The main findings of the study were that living within about 1.25km of where pesticides were used at any point during pregnancy (compared with no exposure) was associated with a 60% higher risk of the child having ASD.

Despite these seemingly alarming findings, it is important to note that causation cannot be established.

It is also worth noting that this study analysed data from California, a region with high pesticide usage, so the findings may represent the extreme end of likely exposures.

From what is known about ASD, it is unlikely that a single environmental factor, such as exposure to pesticides, can cause the condition. It is currently thought that the condition arises through a complex mix of genetic and environmental factors.

 

Where did the story come from?

The study was carried out by researchers from the University of California in the US and was funded by various grants and the University of California’s Davis Division of Graduate Studies and MIND Institute.

The study was published in Environmental Health Perspectives, a peer-reviewed open-access journal, so it is freely available to read online.

The story was picked up by the Mail Online. The headline, “Crop sprays 'raise risk of autism in unborn children'” is alarmist, as no cause and effect link has been proven.

However, the paper does provide some useful reaction quotes from independent experts. For example, the National Autistic Society is quoted as saying that “the development of autism is much more complicated than the researchers had suggested”.

 

What kind of research was this?

This was a type of exploratory research that used data from a wider study (the Childhood Autism Risks from Genes and Environment study or CHARGE study) and linked it with data obtained on pesticide use in California. The researchers say California is the top agricultural producing state in the US and that each year, approximately 200 million pounds of active pesticide ingredients are used throughout the state.

The CHARGE study is a population-based case-control study of over 1,600 children aged between two and five years, born in California. Cases (children with diagnosed ASD or developmental delay) are matched to controls (people without these conditions). The ongoing CHARGE study aims to look at a range of factors that may contribute to autism and developmental delay by asking parents extensive questions about environmental exposures during pregnancy.

 

What did the research involve?

In this latest study, the researchers aimed to investigate the association between living near to where agricultural pesticides were being used during pregnancy and the risk of ASD and developmental delay in the offspring.

They were also interested in seeing whether possible exposure to pesticides during different stages of pregnancy was associated with a higher risk.

Previous studies suggest that any type of exposure to a particular substance that occurs during the first trimester of pregnancy can have the biggest influence on subsequent development. 

Parents of participants in the CHARGE study were asked to report all addresses where they lived, from three months prior to conception up to the time of delivery.

Based on previous studies, the researchers chose to investigate the following groups of pesticides:

  • organophosphates
  • carbamates
  • organochlorines
  • pyrethroids

Data on pesticides was obtained from a publicly available yearly pesticide report about pesticide use in California in areas such as parks, golf courses, cemeteries and rangeland.

Pesticide use in post-harvest treatment of agricultural commodities, in poultry and fish production, and in some livestock applications was also measured.

The researchers report excluding home and garden use, and most industrial and institutional uses of pesticides, though it is unclear from this description specifically what has been excluded.

The data includes the date of use, the location to the square mile, and the amount of each chemical used.

In this latest study, mapping software was then used to determine a geographical picture for this pesticide use using radii of 1.25km, 1.5km and 1.75km around each place of residence. 

Each pregnancy was then assigned an exposure profile, based on pesticide use near where the mother lived and the days of pregnancy on which the pesticide use occurred.

Statistical techniques were used to estimate the risk of exposure to agricultural pesticides by comparing confirmed cases of ASDs or developmental delay with a control group of children who had typical development.

Adjustments were made for some confounders (e.g. paternal education, home ownership, maternal place of birth, child's race/ethnicity, maternal prenatal vitamin intake and year of birth).

 

What were the basic results?

The main findings of this study were:

  • approximately one-third of mothers lived within a 1.5km (just under one mile) radius of where one of the four classes of agricultural pesticide were used
  • of the pesticides evaluated, organophosphates were the most commonly used agricultural pesticide near the home during pregnancy, followed by pyrethroids

In the analyses of any exposure during pregnancy vs. no exposure:

  • mothers of children with autism spectrum disorder were 60% more likely to have had organophosphates applied near the home (1.25km distance; adjusted odds ratio [aOR] 1.60, 95% confidence interval [CI] 1.02 to 2.51) than mothers of children with typical development (see the note on odds ratios after this list). This risk was found to be higher for exposure to organophosphates during the third trimester of pregnancy (OR 2.0, 95% CI 1.1 to 3.6)
  • risk for developmental delay was increased for children of mothers who lived nearby to where carbamate pesticides were being used (1.25km distance; aOR 2.48, 95% CI 1.04 to 5.91), but no specific period during pregnancy was identified as being associated with a greater risk
  • children of mothers living near to where pyrethroid insecticide was used just prior to conception or during the third trimester were found to be at greater risk for both ASDs and developmental delay (ORs ranged from 1.7 to 2.3)
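As a reading aid of our own (this explanation is not taken from the paper): in a case-control study, the adjusted odds ratio compares the odds of pesticide exposure among cases with the odds among controls, so the "60% more likely" phrasing is simply the odds ratio restated:

\[ \text{aOR} = \frac{\text{odds of exposure among ASD cases}}{\text{odds of exposure among controls}} = 1.60 \;\Rightarrow\; 60\%\ \text{higher odds} \]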

 

How did the researchers interpret the results?

The researchers concluded that the children of mothers who live near agricultural areas, or who are otherwise exposed to organophosphate, pyrethroid or carbamate pesticides during pregnancy, may be at increased risk of neurodevelopmental disorders.

 

Conclusion

Overall, this exploratory study provides some limited evidence of a possible link between living in close proximity to where four common classes of pesticides are used during pregnancy and their offspring having ASDs. However, it does not provide evidence of causation. The exact causes of ASDs are largely unknown, although it is thought that several complex genetic and environmental factors are involved. There may be many other factors at play that the researchers did not take into account.

There is also the possibility that there is no association at all between ASDs and pesticide use, and that these were chance findings.

Though the original sample size was fairly large, the study included only 144 children with ASDs whose mothers were exposed to pesticides at any time during pregnancy or pre-conception. When further dividing this sample of 144 children into the specific pesticide they were exposed to, and the trimester of pregnancy at which they were exposed, the numbers become smaller still. When conducting statistical analyses using small sample numbers, this increases the possibility of chance findings.

The number of children with developmental delay who had been exposed to any pesticide before birth was smaller still – only 44 children.  

It is also worth noting that this study analysed data of the top agricultural state in the US: California. Because of this, more agricultural pesticides are used in this state than any other, meaning that the findings may not be generalisable to areas with different pesticide use, or to urban areas where different pesticides are used.

The authors also report some limitations to their study, including the fact that the approach used to obtain exposure to pesticides may not have included all potential sources of exposure to each of the classes of pesticides of interest. This is because not all pesticide use was captured in the publicly available report the researchers used to capture this exposure data.

In addition to this, information on the hours the mother spent in the home or elsewhere was not available, which may also contribute to errors in estimating pesticide exposure.

As stated, it is also unclear which type of industrial and institutional uses of pesticides were excluded.

The exact causes of ASDs are largely unknown, although it is thought that several complex genetic and environmental factors are involved. This study adds to the growing literature in this area.

Aside from following the standard recommendations about healthy living during pregnancy, there is no proven method that can prevent a child from developing ASD or similar disorders.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Crop sprays 'raise risk of autism in unborn children': Campaigners claim odds can be increased by around 60%. Mail Online, June 23 2014

Links To Science

Shelton JF, Geraghty EM, Tancredi DJ, et al. Neurodevelopmental Disorders and Prenatal Residential Proximity to Agricultural Pesticides: The CHARGE Study. Environmental Health Perspectives. Published online June 23 2014

Categories: Medical News

Stress 'causes damage to the heart,' study finds

Medical News - Mon, 06/23/2014 - 14:17

"Stress is already known to be bad for the heart, but now scientists have discovered why it is so harmful," The Times reports.

A new US study now offers a plausible model of how chronic psychological stress could lead to heart damage. It involved both mice and junior doctors.

Researchers checked the blood of a small group of doctors after a week at work in intensive care. After a week of this stressful work, their white blood cell count had increased.

Similarly, when mice were exposed to chronic stress (tilting their cage for an extended period of time), they also showed increased levels of white blood cells.

This finding is of interest and possible concern. Previous research suggested inflammatory white blood cells might be involved in the process of causing the rupture of fatty atherosclerotic plaques in the arteries of people with heart disease, which causes a heart attack.

However, this research is very far from providing conclusive proof that stress leads to the development of heart disease, or directly causes heart attacks.

 

Where did the story come from?

The study was carried out by researchers from Harvard Medical School in the US and the University Heart Center in Germany, and was funded by the US National Institutes of Health and Deutsche Forschungsgemeinschaft.

It was published in the peer-reviewed medical journal, Nature Medicine.

The Daily Mail's topline suggestion was that "groundbreaking research" provides proof, but this "proof" is far from definitive. Only later on in the article does the newspaper explain that the only human element to the study involved examining the blood cell counts of a small sample of medical staff exposed to chronic stress.

None of these people had a heart attack or stroke, and a change in their white blood cell count is not proof that they were more likely to develop heart disease or have a heart attack. Directly attributing stress as the cause of these changes in their white blood cell count is even harder to prove.

 

What kind of research was this?

This was a laboratory study that aimed to look at the association between psychosocial stress and atherosclerosis, where a fatty build-up of cholesterol and other cellular material leads to hardening and narrowing of the arteries.

When atherosclerosis develops in the arteries that supply the heart muscle, this is known as coronary heart disease.

The researchers looked at the effect stress has on the white blood cells of the immune system. They did this by analysing blood samples from a small number of medical staff exposed to stressful situations, as well as examining the immune cells of mice exposed to stress. 

A heart attack is caused when atherosclerotic plaques rupture or break apart, leading to a clotting process that can then completely block the artery. This cuts off the oxygenated blood supply to an area of heart muscle.

The chest pain of angina often develops in situations when the heart is trying to work faster (when exercising, for example) and so needs more oxygen, but it cannot get enough oxygen because of these blockages in the arteries. The pain is the result of the muscle being starved of oxygen.

Triggers of angina can therefore include not only physical activity, but also emotional stress such as anger, as this can cause the heart rate to speed up.

However, a plaque rupture causing a heart attack can happen at any time and will not necessarily be linked to any trigger.

This scientific study is loosely concerned with stress and plaque ruptures, though it did not directly look at coronary heart disease or heart attacks.

Rather, it looked at whether stress could alter the activity of hematopoietic stem cells, which give rise to all other blood cells. These include:

  • red blood cells, which carry oxygen
  • platelets, which are involved in blood clotting
  • white blood cells, which form the immune system (the researchers were particularly interested in these)

The theory was that stress may be associated with an increase in white blood cell levels, possibly because of an increase in the activity of hematopoietic stem cells.

The researchers say previous research has suggested the infiltration of atherosclerotic plaques with certain inflammatory white blood cells may be involved in the process of plaque rupture, and so lead to a heart attack.  

 

What did the research involve?

This research involved both human and animal study.

In the first part of the study, the researchers recruited 29 medical residents (equivalent to registrar grade [specialty trainee] doctors in the UK) working in a hospital intensive care unit. As you can imagine, this is a challenging, fast-paced work environment that frequently involves the responsibility of life-or-death decisions.

The researchers asked the doctors to complete the Cohen's Perceived Stress Scale (a widely used method of assessing self-reported levels of stress) both on and off duty. At the same time, the researchers also took blood samples to look at their white blood cell count. 

The second part of the study involved mice. The researchers exposed mice to different levels of chronic stress in behavioural experiments to see what effect this had on their white blood cell count. These stress tests included tilting the cage at an angle for an extended period of time and periods of isolation in a confined space followed by crowding.

The researchers wanted to see whether any increase in white blood cell count was actually being caused by an increase in the activity of the hematopoietic stem cells. To do this they examined samples of the mice's bone marrow.

The researchers next investigated whether any increase in hematopoietic stem cell activity could be being caused by the stress hormone noradrenaline, which is involved in the "fight or flight" response.

Noradrenaline is a very similar hormone to adrenaline, with very similar functions, although they are not identical chemicals.

A final part of their study involved looking at mice genetically engineered to develop atherosclerosis.

 

What were the basic results?

The researchers found the medical residents' perception of stress was, not surprisingly, higher when they were working compared with when they were off duty.

Comparing blood samples taken on and off duty, they also found that the residents had higher numbers of certain white blood cells (neutrophils, monocytes and lymphocytes) after they had spent one week working in intensive care.

When the researchers further explored the theory in mice, they found that the animals similarly demonstrated an increase in levels of certain white blood cells (neutrophils and monocytes) when exposed to stress in behavioural experiments.

There was also increased activity of hematopoietic stem cells in the bone marrow of stressed mice. The researchers found noradrenaline levels increased in the bone marrow of stressed mice compared with non-stressed control mice. This suggests the hormone may be involved in increasing hematopoietic stem cell activity.

When the researchers carried out further tests in stressed mice genetically engineered to lack noradrenaline receptors, these mice didn't demonstrate the same increases in stem cell activity, suggesting that they were "protected" from stress.

The researchers then looked at mice genetically engineered to develop atherosclerosis, exposing them to six weeks of chronic stress. They found stress was, as expected, associated with increased stem cell activity and increased numbers of certain white blood cells.

When they examined the mice's heart blood vessels in the laboratory, they found the atherosclerotic plaques were infiltrated with increased numbers of white cells.

 

How did the researchers interpret the results?

The researchers conclude that chronic stress interferes with the production of blood cells, and has interactions with the immune system and the process of atherosclerosis.

They say that with their observations in mice reflecting those in humans, "These data provide further evidence of the hematopoietic system's role in cardiovascular disease and elucidate a direct biological link between chronic variable stress and chronic inflammation".

 

Conclusion

This research investigates the received wisdom that psychological stress is associated with coronary heart disease.

It found 29 medical residents working in a stressful intensive care unit setting had increased levels of white blood cells, which form part of the immune system. The researchers also found exposing mice to chronic stress similarly increased their levels of certain white blood cells.

When they examined the bone marrow of stressed mice, they found this increase in the number of white blood cells seemed to be mediated by an increase in the activity of hematopoietic stem cells, which produce all other types of blood cells.

In further study of the mice, the researchers found evidence the chemical noradrenaline (very similar to adrenaline) seemed to be responsible for this increased stem cell activity. They also found there was an increase in white blood cells in the fatty plaques of stressed mice predisposed to coronary artery disease.

Overall, these observations in mice and humans provide a plausible model of how chronic stress may lead to increased hematopoietic stem cell activity.

This in turn may lead to an increased white blood cell count. These white blood cells may possibly then infiltrate the fatty plaques of coronary heart disease (if they have built up), leading them to rupture and cause a heart attack.

However, there are a lot of maybes:

  • We don't know what these people's white cell counts looked like in the longer term over the full duration of their working life.
  • We do not know whether the activity of stem cells in their bone marrow was responsible for the slight increase in white cell levels. If it was, we do not know whether stress hormones were directly responsible for this activity.
  • As far as we know, none of these participants actually had heart disease. If fatty atherosclerotic plaques were present in the heart arteries of these people, we do not know whether an increase in white blood cells would actually be involved in the process of causing these plaques to rupture and so cause a heart attack.
  • We also don't know whether the elevation of white blood cells as a result of chronic stress could be involved in the development of heart disease in the first place. Still, the most well-established risk factors for the development of atherosclerotic plaques are high cholesterol, smoking and high blood pressure, in addition to unmodifiable factors such as being male, increased age and hereditary factors. A person's white blood cell count does not have a firm association with the development of heart disease.
  • These results do not alter the well-established association between emotional stress and coronary heart disease. In people who have atherosclerotic plaques built up in their heart arteries, emotional stress, like physical activity, causes an increase in heart rate and so an increased demand for oxygen in their heart muscle. The blood cannot flow past the blockages in the heart arteries well enough to meet the oxygen demands of the muscle, which causes angina pain in people with heart disease.

Overall, this is a valuable scientific study that furthers our understanding of how stress – through white blood cell count – could potentially be involved in plaque rupture, which causes heart attack.

However, the study is far from conclusive. Other lifestyle risk factors for heart disease, most notably high cholesterol and smoking, are well established.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Revealed: why stress is bad for the heart. The Times, June 23 2014

How stress damages the heart. The Daily Telegraph, June 22 2014

Proof that stress really does cause heart attacks: Adrenaline can increase white blood cell production which can cause ruptures. Daily Mail, June 23 2014

Links To Science

Heidt T, Sager HB, Courties G, et al. Chronic variable stress activates hematopoietic stem cells. Nature Medicine. Published online June 22 2014

Categories: Medical News

Pre-teen girls have 'easy access' to e-cigarettes

Medical News - Fri, 06/20/2014 - 18:19

“Children 'addicted to sweet tasting e-cigarettes',” is the alarmist headline in the Daily Mirror. A survey has found that a few pre-teen girls in Wales have experimented with the devices, but there is no evidence of widespread addiction.

The headline is based on an opinion piece written by Kelly Evans, a Director of the organisation Social Change UK, following the release of a report from the same organisation. The report looked at the prevalence of smoking, and the attitudes and behaviours towards the practice (including e-cigarettes), among girls aged 11 and 12 across North Wales.

Five focus groups were carried out, and the main findings were that most girls were aware of e-cigarettes at this age and many had also tried them.

The opinion piece focused on whether or not e-cigarettes are a gateway to taking up tobacco smoking. It is this opinion piece, rather than the underlying report, that the UK media have predominantly reported on.

Unfortunately, the media have failed to inform the reader that both the opinion piece (written by a single author) and the underlying report have not been peer-reviewed, so this should be taken into consideration when interpreting what has been presented. 

 

What are e-cigarettes?

E-cigarettes, or electronic cigarettes, are electrical devices that mimic real cigarettes in that they provide a nicotine dose in a vapour (which is why the habit is often known as “vaping”).

The vapour is considered much less harmful than tobacco smoke as it does not contain many of the cancer-causing substances (carcinogens) that make smoking so dangerous. We don’t yet know the long-term effects of vaping on the body. There are other potential drawbacks to using them, including:

  • E-cigarettes aren’t currently regulated as medicines, so you can’t be sure of their ingredients or how much nicotine they contain – whatever it says on the label.
  • The amount of nicotine you get from an e-cigarette can change over time.
  • They aren’t proven as safe. In fact, some e-cigarettes have been tested by local authority trading standards departments and been found to contain toxic chemicals, including some of the same cancer-causing agents produced from tobacco.

 

Who produced the opinion piece and report?

The opinion piece, called: "E-cigarettes, children and adults who like gummy bears? Are e-cigarettes a good thing?" was written by a Director at Social Change UK. It was based on a report called “Smoking in girls aged 11 to 12 years in North Wales”, also written by Social Change UK. The report is part of a wider social marketing campaign funded by Public Health Wales and sponsored by the North Wales Tobacco Control Alliance.

Social Change UK is a social research and campaign company that works in the UK, Europe and Australia. According to its website, it carries out social research, as well as designing campaigns that build emotional connections and encourage people to think and act.

 

What points does the opinion piece make?

This opinion piece discusses the debate around e-cigarettes and reports that not enough is known about the role they play in introducing people to smoking (also known as “gateway effects”). The role of marketing is discussed, and a point is made by the author that over the last 12 months the marketing of e-cigarettes has shifted from being an “aid” to stopping smoking to something desirable in its own right. She says there are over 300 flavours now available, including bubblegum, milkshake, red bull (a taste like the energy drink) and gummy bear. The author also delves into whether children should be stopped from trying to buy them.

Also discussed is the fact that e-cigarettes are not regulated in the same way that tobacco is. Plans to review this regulation from the Medicines and Healthcare Products Regulatory Agency (MHRA) are mentioned, as is the government’s plan to ban the sale of e-cigarettes to those under the age of 18.

 

Do the opinion piece and report provide any new evidence?

The opinion piece cites previous research carried out by the anti-tobacco public health charity Action on Smoking and Health (ASH) Wales in 2014, which reportedly found that 79.6% of 13 to 18-year-olds in Wales were aware of e-cigarettes. Figures on how many young people are actually using them are less clear, however.

Also cited in this opinion piece are the findings from a survey carried out by Social Change UK in June 2014. Importantly, this research has not been peer-reviewed, so findings of the survey should be interpreted with caution.

A survey was carried out of headteachers and teachers from 72 schools across England, and 53 of these schools reported that they were confiscating up to 10 e-cigarettes a week.

Five focus groups were also carried out across Wales, involving girls aged 11 and 12 in areas with high levels of deprivation and high adult smoking prevalence; however, it is not reported how many girls were included in these focus groups.

According to these focus groups, most girls were aware of e-cigarettes at this age, and many had also tried them. No specific figures have been reported.

Attitudes towards e-cigarettes varied, and understanding of what they were and what they do differed among the focus groups. A large number of girls (again, figures not reported) described e-cigarettes as “not as bad” as cigarettes, and some did not believe they could be harmful.

It was reported that in one Welsh town, almost all the girls had tried e-cigarettes at least once and found they were easy to purchase from shops, parents and friends.

Other anecdotal figures are also provided in this opinion piece, citing some cases of poisoning from the chemicals in e-cigarette cartridges, as well as findings from electrical safety testing of e-cigarettes carried out by six North Wales Trading Standards services. These tests revealed a 100% failure rate against safety regulations.

It is unclear how robust these figures are and if they are representative.

According to recent data provided by the National Poisons Information Service (NPIS), there were 29 reported cases of e-cigarette poisoning in 2012 in the UK. It's possible that many of these were due to very young children mistakenly drinking the vaping liquid, rather than through using the devices in the prescribed manner.

 

Is there any evidence that e-cigs are acting as a gateway?

This opinion piece does not include any robust evidence that e-cigarettes are acting as a gateway.

A recent survey carried out by ASH found that non-smokers are not taking up the e-cigarette habit. However, the data only spanned 2010 to 2014, meaning that longer-term smoking trends are unknown, so it is too early to be complacent.

To draw further conclusions about the potential gateway effects of e-cigarettes, longer-term studies (such as prospective cohorts) are required, and as the introduction of e-cigarettes is only fairly recent, it may be some time before these sorts of figures become available.

 

How accurate is the media’s reporting of the study?

This opinion piece has been widely covered in the UK media. The Independent, the BBC and the Mail Online have focused their coverage on the findings from the focus groups with girls aged 11 and 12.

However, The Daily Mirror has taken a slightly different stance with the headline: “Children are ‘addicted to sweet tasting e-cigarettes’ after milkshake and bubblegum flavours are launched". This is inaccurate reporting, as the report provides no evidence of widespread addiction amongst young children. 

Unfortunately, the majority of the media's reporting has failed to explicitly indicate to the reader that this is an opinion piece based on a report that has not been peer-reviewed.

Read more about e-cigarettes at NHS Choices.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Children 'addicted to sweet tasting e-cigarettes' after milkshake and bubblegum flavours launched. Daily Mirror, June 19 2014

Preteen girls have easy access to e-cigarettes but don't know the health risks, study shows. The Independent, June 19 2014

Girls aged 11 'get e-cigarettes easily' in north Wales. BBC News, June 19 2014

Girls as young as 11 find e-cigarettes 'highly appealing' - and their use among children is 'widespread'. Mail Online, June 19 2014 

Categories: Medical News

Sun tanning 'addictive' suggests study

Medical News - Fri, 06/20/2014 - 14:30

“Sunbathing 'may be addictive' warning,” BBC News reports.

Researchers have investigated why, despite all the evidence of the harm it can cause (namely increased skin cancer risk), people continue to want a tan. Is it purely for aesthetic purposes, or is it due to one of the leading reasons people persist in self-destructive behaviour, addiction?

Researchers exposed shaven mice to UV light five days a week for six weeks. These mice had increased levels of chemicals that can trigger a feeling of euphoria – similar to an opiate-like high – as well as increased tolerance to pain.

At the end of six weeks the mice had withdrawal symptoms and increased tolerance to morphine injections. Repeat experiments in mice genetically engineered so that they could not produce beta endorphins removed all of these effects.

This suggests that it was these naturally occurring endorphins, driven by UV exposure, that were having the effects in the first group of mice.

An obvious limitation of the study is that mice are nocturnal animals, so UV exposure, especially of shaven mice, may have a dramatic effect on their endorphin pathways that may not correspond to what happens in humans.

 

Where did the story come from?

The study was carried out by researchers from Harvard Medical School and was funded by the National Institutes of Health, Melanoma Research Alliance, the US-Israel Binational Science Foundation and the Dr Miriam and Sheldon G Adelson Medical Research Foundation.

The study was published in the peer-reviewed scientific journal Cell and has been released on an open-access basis so it is free to read online.

The media coverage is generally representative of this research, though the BBC’s modest headline that “Sunbathing may be addictive” is probably most appropriate. The Daily Mail’s alternative that “Sunbathing … is like heroin use” is a little over-the-top, to put it mildly. And only very far into its coverage does the Mail reveal that the study was in mice.

Both the BBC and the Mail do include useful quotes from independent experts, who make the case that the findings of the study may not be applicable to humans.

 

What kind of research was this?

This was an animal study aiming to see how beta endorphins could be involved in an addiction to ultraviolet (UV) light.

UV light is a well-established risk factor for skin cancers, including malignant melanoma, the most serious type of skin cancer.

Excess exposure to UV light through sunbathing or the use of sunbeds has long been recognised to increase the risk of skin cancer, but despite the health warnings, these activities remain popular. This has led to speculation about whether it is simply an aesthetic preference for tanned skin, or an actual biological addiction. The researchers say that previous studies have suggested there could be an addictive process involved.

When skin is exposed to UV light, a particular protein called pro-opiomelanocortin (POMC) is broken down into smaller pieces called peptides. One of these is a hormone called alpha-melanocyte-stimulating hormone (α-MSH), which mediates the tanning process by stimulating pigment cells to produce a brown/black pigment. Another is beta endorphin, which is one of the body’s naturally occurring opioids. Opioids bind to opioid receptors, resulting in pain relief.

Synthetic opioid medications include the drugs morphine and diamorphine (heroin), which are not only very powerful painkillers, but are known to be associated with tolerance (where increasing doses are required to give the same effect) and dependence (withdrawal symptoms when the medication is removed).

Therefore, naturally occurring beta endorphins are believed to play a role in both pain relief and the reinforcement and reward system that underlies addiction. This study aimed to see whether exposing mice to UV light may cause changes in beta endorphin levels that result in opioid-related effects, including an increased pain threshold, tolerance to synthetic opioids and symptoms of dependence.

 

What did the research involve?

Mice had their backs shaved and were then exposed to ultraviolet B (UVB) light five days a week, for six weeks. UVB is thought to be one of the most dangerous wavelengths of light produced by the sun as it can penetrate the skin to a deeper level (this is not to say that other wavelengths are safe).

This model of exposure was said to be approximately equivalent to 20-30 minutes of ambient midday sun exposure in Florida during the summer for a fair-skinned person. A control group was given mock UVB exposure. Blood samples were taken once a week to measure beta endorphin levels. They also had weekly measurements of tail elevation (Straub testing), which is an indicator of the activity of the opioid system in rodents.

Mice also received tests to measure their mechanical and thermal pain thresholds. One test involved poking the paws with fibres of increasing strength to see at what point the paw was withdrawn. Another involved similarly testing paw response (such as jumping or licking) when exposed to a hot plate.

The researchers tested whether any of these effects could be reversed by injecting the mice with naloxone, which is a drug used in medicine to block the actions of opioids (it is used to treat people who have had an opioid overdose).

After the full six weeks of UVB exposure or mock exposure the mice again received injections of naloxone to see if they demonstrated opioid withdrawal symptoms (such as shaking, teeth chattering, rearing up, diarrhoea).

After the full six weeks of UVB exposure or mock exposure, the researchers also tested the tolerance of mice to the synthetic opioid morphine. Increasing doses of morphine were given to see at what dose they could tolerate exposure to the hot plate.

As a final part to the study the researchers repeated the tests in a group of mice that had been genetically engineered so they lacked the POMC gene that enables them to produce beta endorphins.

 

What were the basic results?

The researchers found that blood levels of beta endorphins started to increase after only one week of UVB exposure. Levels remained elevated for the full six weeks of exposure, returning to normal levels one week after the exposure had stopped. There was no increase in the mock-UV-treated mice.

The mice exposed to UVB also demonstrated increased thresholds to mechanical and heat pain, which corresponded with the elevated beta endorphins levels. No change in threshold was seen in the mock-exposed mice. The painkilling effect was reversed by giving the UV-exposed mice naloxone.

By the second week of UVB exposure, mice also demonstrated an increase in tail rigidity and elevation (as would be seen if the mice had been given an opioid drug), which remained for the six weeks of exposure. This effect diminished two weeks after UVB exposure had stopped. The effect was also reversed by giving the UV-exposed mice naloxone.

After the six weeks of exposure to UVB light, administration of naloxone caused several of the classic withdrawal symptoms, though these symptoms were lower in magnitude than has been observed in previous studies where mice have been treated with synthetic opioids.

The researchers also found that the mice exposed to six weeks of UVB demonstrated increased opioid tolerance, requiring significantly higher doses of morphine than the mock-exposed mice in order to tolerate the hot plate.

When repeating the tests in mice genetically engineered so that they could not produce beta endorphins, none of the effects were seen. When these mice were exposed to UVB light for six weeks they did not have increased pain thresholds and did not show signs of opioid withdrawal or opioid tolerance. This suggested, as expected, that it was the naturally occurring beta endorphin opioids that were having the effects.

 

How did the researchers interpret the results?

The researchers conclude that their findings show that chronic UV exposure stimulates the production of enough of the naturally occurring beta endorphin to cause opioid effects, and allowed the mice to develop both opioid tolerance and physical dependence.

 

Conclusion

This animal study demonstrates how continuous exposure to UV light leads to an increase in the skin’s production of beta endorphins, which are naturally occurring opioids. In mice, this resulted in increased pain thresholds and signs of opioid dependence and tolerance.

It is not known whether this mouse model could indicate an identical biological response when humans are exposed to UV light, but it may give us an idea.

The researchers suggest that the “hedonic action” of beta endorphins may have increased human liking for sun exposure, and so may contribute to the ongoing increase in the number of new cases of skin cancer.

However, it could be the case that the popularity of sun tanning is mainly due to cultural reasons: current thinking is that tanned skin is (incorrectly) seen to be healthier. In previous cultures and in previous times, such as pre-revolutionary 18th century France, having very pale skin was seen as the ideal.

Sun exposure among regular sunbathers could be a true biological addiction or an aesthetic liking for tanned skin, or possibly a combination of the two.

Leaving this question aside, common sense should tell us of the known harms of excessive UV light exposure. UV light exposure is a well-established risk factor for skin cancers.

Take care to avoid excessive exposure of the skin to UV light, particularly in the hot summer months, including using an appropriate sunscreen, covering up with a hat and sunglasses, and avoiding exposure during the hottest times of the day.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Sunbathing 'may be addictive' warning. BBC News, June 20 2014

Sunbathing IS addictive: Soaking up UV rays creates rush of feel-good chemicals 'like heroin use'. Daily Mail, June 20 2014

Sun worship offers clue to post-holiday blues. The Times, June 20 2014

Sunbathing as addictive as heroin, say researchers. The Daily Express, June 20 2014

Links To Science

Fell GL, Robinson KC, Mao J, et al. Skin β-Endorphin Mediates Addiction to UV Light. Cell. Published online June 19 2014

Categories: Medical News

MMR jab unlikely to harm young babies

Medical News - Fri, 06/20/2014 - 14:17

A young couple's baby was given the MMR jab by mistake "potentially putting her life at risk", The Daily Telegraph website reports misleadingly.

Giving a baby the wrong vaccine is a serious mistake; fortunately, the error was quickly noticed and the baby appears not to have been seriously harmed.

Unfortunately, the Telegraph has taken a sensationalist approach by quoting the most extreme possible reaction – anaphylaxis – without stating that this is extremely rare and treatable.

The Telegraph's coverage says, “Newborns under six months must not be given the vaccine because they 'don't respond well' to it, according to NHS guidelines.”

The paper also goes on to say, “The NHS website does not specify what can happen to babies under six months if they are given the MMR vaccine.”

Unfortunately, the paper has taken words out of context, giving a misleading impression that there is some additional risk to young babies. There is no evidence of additional risk.

 

What will happen to the baby given the MMR vaccine by mistake?

It is unclear from the media coverage what has happened to the baby, although the Telegraph reports that she displayed side effects of sleepiness and appetite loss.

A statement from NHS London said: "We are investigating the concerns raised by this family about their child’s vaccination and are currently establishing the facts. Whilst it is not routine or advisable to vaccinate a child with MMR at two months of age, there is no clinical risk to the baby."

 

Is the MMR vaccine a risk for babies under 12 months of age?

No. Whilst giving the MMR vaccine at this age is not usually recommended, there is no evidence that doing so would place the baby’s life at risk.

It is also untrue that guidelines say that babies should never be given the vaccine. Earlier vaccination may be recommended if there is a measles outbreak in the local area.

 

What are the side effects of the MMR vaccine?

It's quite common for children to get a mild form of mumps or measles after the MMR vaccination, as well as some bruise-like spots. These can be worrying for parents, but will go away.

In around 1 in 1,000 cases, children may have seizures (fits) several days after vaccination. In extremely rare cases, a child can have a severe allergic reaction (anaphylaxis), but if the child is treated quickly, they can make a full recovery.

Call NHS 111 if you are worried about your child's health.

Read more about how to report vaccine side effects.

 

Why should I have my child vaccinated?

The fact is vaccines save lives. In 1967 – the year before measles vaccination – there were more than 460,000 cases and 99 deaths. By 1997 – the year before the MMR health scare – there were fewer than 4,000 cases and just 3 deaths.

Complications of diseases such as measles include brain infections, loss of vision, liver infection and meningitis.

It has been shown that public scares about vaccination have reduced the uptake of vaccines. This in turn has led to outbreaks of diseases such as measles that we were beginning to think had been confined to the history books.

For this reason, it is disappointing to see one family's shocking and unfortunate story presented in this way.

 

What vaccinations should a two-month-old baby have?

By two months old, your baby will be due his or her first set of vaccinations. These are:

  • the 5-in-1 (DTaP/IPV/Hib) vaccine – protecting against diphtheria, tetanus, whooping cough, polio and Hib
  • the pneumococcal (PCV) vaccine
  • the rotavirus vaccine

Read more information about the NHS vaccination schedule.

 

Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Two month old baby 'given MMR jab'. The Daily Telegraph, June 19 2014

Categories: Medical News

Antidepressant suicide warnings 'backfired'

Medical News - Thu, 06/19/2014 - 18:19

“Antidepressant suicide warnings ‘may have backfired’,” BBC News reports.

During 2003 and 2004, there were high-profile media reports in the US that children and adolescents who were prescribed antidepressants had an increased risk of suicidality (thoughts and attempts).

This led the Food and Drug Administration (FDA), which is responsible for regulating drugs in the US, to issue warnings about all antidepressants (these warnings were modified in 2007).

This latest research studied antidepressant prescribing patterns for 10 million people over this period, as well as reported suicide attempts (both fatal and non-fatal).

The study found that two years after the warnings, antidepressant prescriptions for adolescents had decreased by almost a third, and by a quarter in young adults.

There was also a corresponding increase in drug overdoses of a fifth in adolescents and a third in young adults over the same period.

Thankfully, there was no change in the overall rate of completed suicides, as the majority of these overdoses did not prove fatal.

Antidepressants remain an important part of the treatment for depression and other mental health problems, and should not be stopped abruptly.

If you are suffering from suicidal thoughts, you should see your GP as soon as possible or call the Samaritans on 08457 90 90 90.

 

Where did the story come from?

The study was carried out by researchers from Harvard Medical School, Boston; Group Health Research Institute, Seattle; the University of Washington; Center for Health Policy and Health Services Research, Detroit; Center for Applied Health Research, Texas; and several Kaiser Permanente Research Institutes across the US. It was funded by the National Institute of Mental Health and the Health Delivery Systems Center for Diabetes Translational Research.

The study was published in the peer-reviewed medical journal BMJ. The article has been published on an open-access basis, meaning it is free to read online.

The media’s coverage of the story has been fair, with the BBC providing comments from experts highlighting the powerful impact the media can have on prescription practices.

A case could be made that some sections of the media have been guilty of scaremongering about the potential risks of a treatment or intervention, without considering the benefits. The most infamous example of this in recent years was the scare stories about the MMR vaccine being linked to autism – a claim that turned out to be baseless.

 

What kind of research was this?

This was an ecological study looking at trends in antidepressant use, suicide attempts and completed suicides in young people before and after the FDA issued warnings about potential risks of these drugs.

It aimed to see if there were any changes according to age group before and after the FDA issued the warnings about all antidepressants increasing suicidality (thoughts and attempts) in adolescents over the 2003 to 2004 period.

They also wanted to see if there were any further changes when this warning was extended to include young adults in 2007. 

The researchers report that the FDA warning was based on a meta-analysis of studies, which showed that the relative risk for suicidal thoughts or behaviour for young people on antidepressants compared to a placebo was almost double.

The relative risk was found to be 1.95 (95% confidence interval [CI] 1.28 to 2.98), though the overall increase in absolute risk was still low.
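To illustrate why a near-doubled relative risk can still mean a small absolute risk, here is a minimal Python sketch. The 2% placebo-group baseline is an assumed figure for illustration only – the article does not report the actual baseline risk.

    # Illustrative only: converts a relative risk into an absolute risk
    # difference, using an ASSUMED 2% baseline risk of suicidality on placebo.
    baseline_risk = 0.02          # assumption, not a figure from the study
    relative_risk = 1.95          # reported by the FDA meta-analysis
    treated_risk = baseline_risk * relative_risk
    print(f"Risk rises from {baseline_risk:.1%} to {treated_risk:.1%}, "
          f"an absolute increase of {treated_risk - baseline_risk:.1%}")
    # -> Risk rises from 2.0% to 3.9%, an absolute increase of 1.9%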

The researchers wanted to investigate if the warnings and media coverage were associated with changes in antidepressant use and suicidal behaviour.

An ecological study is a study of a population or community, rather than a study of individuals. Common types of ecological study include geographical comparisons, time-trend analysis or studies of migration.

A before and after study is a comparison of particular characteristics in a population, before and after an intervention or event. An example of this would be a public health campaign, such as a healthy eating campaign.

 

What did the research involve?

Data was obtained from 11 healthcare organisations that care for around 10 million people in 12 US states. This included inpatient and outpatient details, antidepressant prescriptions, drug overdoses and suicide deaths for all:

  • adolescents aged 10 to 17
  • young adults aged 18 to 29
  • adults aged 30 to 64

They compared the levels from 2000 to 2003 (before the warnings) and up to 2010 (after the warnings).

 

What were the basic results?

The study included 1.1 million adolescents, 1.4 million young adults and 5.0 million adults.

In 2006, compared to 2003 to 2004 when the warnings were first issued:

  • antidepressant use fell in adolescents by 31.0% (95% CI 29.0% to 33.0%)
  • antidepressant use fell in young adults by 24.3% (95% CI 23.2% to 25.4%)
  • antidepressant use fell in adults by 14.5% (95% CI 12.9% to 16.0%)
  • drug overdose of psychotropic medication (medication that can affect the working of the brain) increased in adolescents by 21.7% (95% CI 4.9% to 38.5%)
  • drug overdose of psychotropic medication increased in young adults by 33.7% (95% CI 26.9% to 40.4%)
  • there was no significant increase in drug overdose in adults
  • there was no increase in completed suicides in any group

There was no additional change in antidepressant use or suicidality after the warning was modified in 2007. After 2008, the level of antidepressants being prescribed began to increase again.

 

How did the researchers interpret the results?

The researchers concluded that “safety warnings about antidepressants and widespread media coverage decreased antidepressant use”, and that there were “simultaneous increases in suicide attempts among young people". Therefore, they say “it is essential to monitor and reduce possible unintended consequences of FDA warnings and media reporting”.

 

Conclusion

Following the FDA's warnings that antidepressants can increase suicidality, this study saw a decrease in the prescribing of antidepressants to adolescents and young people, and an overall increase in psychotropic medication overdoses. Thankfully, however, there was no change in completed suicide rates.

Strengths of this study include the very large number of people included in the analysis. The researchers used the same parameters for assessing antidepressant prescriptions, overdoses requiring medical attention and death due to suicide throughout the study period. Although this will not capture all of the attempted overdoses, the data collection was consistent, so trends in the rates should be comparable.

However, the authors report several limitations, including the fact:

  • they could only take into account overdoses that required medical attention
  • the sample was almost exclusively of people with medical insurance, so the results may not be applicable to uninsured people in the US (who tend to be poorer and/or come from an ethnic minority)

Further limitations of this study are that it looked at the population as a whole and did not look at any difference according to:

  • sex, race, ethnicity or socioeconomic status
  • diagnosis or severity of illness
  • other confounding factors, such as the recession

The study only looked at the incidence of antidepressant use, psychiatric drug overdose and number of completed suicides across the whole population. The study design means that it was not possible to link any of these factors together. For example, it did not measure how many people taking antidepressants took an overdose and how many completed suicide. Therefore, although this study is interesting from a population basis, the results cannot be directly applied to individuals.

In addition, the study has only looked at overdose and suicide as outcomes. It did not examine the length of illness, impact or quality of life – all of which may be improved through the appropriate use of antidepressants.

Treatment for depression and suicidal ideation needs to be tailored to the individual, and may include antidepressants, talking therapies, increased social support and practical help. Antidepressants remain an important part of treatment for depression and other mental health problems, and should not be stopped abruptly.

As the modified FDA recommendation in 2007 puts it, a balance needs to be struck between the potential increased risk of suicidal ideation when commencing antidepressants and the risks of suicide if antidepressants are not used.

Close supervision and awareness of the risks are needed when antidepressants are first prescribed.

Current UK recommendations state that if antidepressants are recommended for a person under the age of 18, they should be used in combination with a talking therapy, such as cognitive behavioural therapy (CBT), and not as the sole treatment.

If you are suffering from suicidal thoughts, it is advisable to see your GP or to call a helpline such as the Samaritans, on 08457 90 90 90.

Read more about getting help if you are thinking about suicide, as well as spotting the warning signs of suicidal thinking and behaviour in others.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Antidepressant suicide warnings 'may have backfired'. BBC News, June 19 2014 

Links To Science

Lu CY, Zhang F, Lakoma MD, et al. Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study. BMJ. Published online June 18 2014

Categories: Medical News

New weapon found in the war against superbugs

Medical News - Thu, 06/19/2014 - 14:00

"British university makes antibiotic resistance breakthrough," The Independent reports after new research found a method that could be used to attack the outer membrane of bacteria. This may help combat the threat of antibiotic resistance.

The study involved a class of bacteria called Gram-negative bacteria, some of which have developed resistance to antibiotics over time.

This is of concern because some Gram-negative bacteria cause serious conditions such as food poisoning (often caused by E. coli and salmonella) and meningitis.

If antibiotic resistance continues to increase, these types of infection could eventually become untreatable using current drugs.

Gram-negative bacteria have an exterior membrane (coating) that protects them from attacks by the human immune system and antibiotic drugs.

Until now little has been known about this defensive barrier, but using the UK's synchrotron facility (think of it as a giant microscope), scientists say they have discovered how it is built.

It may now be possible to find ways to attack the membrane, which would kill the bacteria cells. The advantage of this approach is that by targeting the membranes, rather than the bacteria themselves, there is less chance of resistance evolving.

Although it is early days, this method could eventually lead to the development of new drugs against multi-drug-resistant bacteria.

 

Where did the story come from?

The study was carried out by researchers from the University of East Anglia, the University of St Andrews, Diamond Light Source and the University of Oxford in the UK, and Sichuan Agriculture University, Sichuan University, Wuhan Technical College of Communications and Sun Yat-sen University in China.

There is no information about external funding, although some researchers were supported by the Wellcome Trust and the China Scholarship Council.

The study was published in the peer-reviewed journal Nature.

This story was widely covered in the UK press. Most of the coverage was fair and included useful quotes from the researchers involved, although the tone of the reporting was perhaps more optimistic than is warranted at present.

Some papers also got some basic technical details incorrect; "schoolboy errors" to use an old football cliché (it is the World Cup after all).

For example, the Metro reported that the technique could be used to tackle MRSA. MRSA is in fact a Gram-positive type of bacteria and this study only involved Gram-negative types.

The Daily Telegraph, on the other hand, talked about a "bug responsible for E. coli and salmonella", but E. coli and salmonella are not conditions caused by a single bug: although they belong to the same class of bacteria, they are entirely different species.

 

What kind of research was this?

This was a laboratory study of the outer membrane of Gram-negative bacteria and the biological processes used to build it. The researchers point out that these bacteria have an outer coating made up of a compound called lipopolysaccharide (LPS).

The building of this protective outer coating depends on several "transport" proteins – which the BBC called "bricklayer" proteins – two of which are called LptD and LptE. These are both crucial to the transport and insertion of LPS, but so far this process has been poorly understood.

The researchers say these two proteins would be a "particularly attractive" target for new drugs, which would not have to enter into the bacteria. However, the development of such drugs is hampered by the lack of a detailed model of the LptD-LptE "complex".

 

What did the research involve?

Researchers were able to map the structure of these proteins for the first time using special X-ray equipment at Diamond Light Source in Oxfordshire, the UK's national synchrotron science facility.

Synchrotrons are a type of particle accelerator, similar to the famous CERN accelerator that was used to detect the Higgs boson. They produce extremely powerful X-rays that can provide detailed images of extremely small objects.

The researchers conducted several experiments to examine the structure of the proteins and the way they work in transporting LPS to the outer membrane.

 

What were the basic results?

The scientists found that the two proteins form a "barrel and plug" structure to transport and insert LPS into the outer surface of the bacteria.

If this process is blocked, the bacteria will become vulnerable to the external environment, as well as the immune system, making them likely to die quickly.

 

How did the researchers interpret the results?

The researchers say their findings help us understand how the outer membrane of Gram-negative bacteria is built.

It may have "significant potential" for the development of novel drugs against multi-drug-resistant bacteria, they say.

 

Conclusion

Antibiotic resistance is already causing thousands of deaths annually and is now considered a major threat, ranking alongside terrorism and climate change.

Gram-negative bacteria such as E. coli, salmonella and Klebsiella are particularly resistant to antibiotics. This study shines a useful light on how such bacteria build a protective outer coating against attack.

It is still early days, but the findings could pave the way for the development of new drugs that attack this process.

As Mark Fielder, professor of medical microbiology at Kingston University, said: "The work reported is at a very early stage, but does offer some potentially useful information in the fight against bacterial resistance.

"What is needed now is the development of a usable inhibitor that can be tested against Gram-negative clinical strains of bacteria to see if there is a longer term value to the research published today."

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

British university makes antibiotic resistance breakthrough. The Independent, June 18 2014

Breakthrough in the war on superbugs: British scientists decode defence mechanism of bacteria in discovery that could pave way for new drugs. Mail Online, June 19 2014

Scientists devise way to stop fatal superbugs taking over in 20 years. The Times, June 19 2014

Scientists say weakness in 'superbug' bacteria could herald new treatments. The Guardian, June 19 2014

Bacteria 'bricklayer' protein set for attack. BBC News, June 19 2014

Could this finally be the solution to the superbug threat? Metro, June 18 2014

Scientists find Achilles' heel of antibiotic resistant bacteria. The Daily Telegraph, June 18 2014

Research breakthrough into antibiotic resistance. ITV News, June 19 2014

Links To Science

Dong H, Xiang Q, Gu Y, et al. Structural basis for outer membrane lipopolysaccharide insertion. Nature. Published online June 18 2014

Categories: Medical News

Key protein in childbirth contractions identified

Medical News - Wed, 06/18/2014 - 14:17

"Australian scientists discover protein that triggers child birth," the Mail Online reports. The protein (β-inhibitory protein) is thought to cause the uterus to contract and could possibly be used to induce labour in obese women.

The website reports on a study looking at what causes the muscles of the uterus (womb) to contract during labour. Researchers found that increased levels of a β-inhibitory protein in the uterine muscle reduce the activity of certain potassium channels, leading to stronger contractions.

However, the study did not prove these mechanisms trigger the complex process of childbirth. Nor did it uncover any mechanisms that have any bearing on preventing premature births.

In this snapshot of women undergoing caesarean section, the study found that lean women had increased levels of β-inhibitory protein activity, while women with a body mass index (BMI) over 30 – classed as obese – had lower levels (associated with weaker contractions).

The researchers suggest they might be able to suppress this gene in the future. This could increase the strength of contractions in obese women, who may be more at risk of failure to progress during labour. However, this potential treatment was not assessed in the study.

 

Where did the story come from?

The study was carried out by Australian researchers from Monash University, the University of Melbourne, the Royal Women's Hospital, Victoria, and the University of Newcastle, New South Wales.

It was funded by the National Health and Medical Research Council of Australia and was published in the peer-reviewed medical journal, Nature Communications.

The Mail Online's reporting on the study was both inaccurate and confusing. Claims that the trigger for labour has been definitely identified are incorrect. This was a small study that can only suggest an association and not prove cause and effect.

Also, somewhat oddly, the Mail claims that the research could be used to prevent premature births. While it could conceivably lead to a medication that could be used to induce labour in women who are overdue, it is difficult to see how it could lead to a treatment to prevent premature births.

 

What kind of research was this?

This was a cross-sectional study comparing the activity of uterine muscle cells in women at term who were not in labour with that in women at term who were in labour.

It aimed to see if the muscle cells' capacity for longer, stronger contractions differed between the two groups. The researchers then analysed whether any differences were associated with increased body mass index (BMI).

As this was a cross-sectional study, it can only assess the activity and muscle contractibility at one point in time. It cannot prove that the gene expression was responsible for labour not progressing in these women. But this study can provide associations that may be used in further research.

Previous research has found that the uterine muscle of obese women is less able to contract than that of women of a normal weight. This can lead to failure of the labour to progress and, ultimately, the need for a caesarean section.

The researchers wanted to investigate whether this was a result of a problem in the ability of the uterus muscle to contract.

A gene called "human ether-à-go-go-related gene" (hERG) produces a type of potassium channel found in heart muscle. These channels are essential building blocks of muscle activity, in much the same way that brain cells are essential for thinking.

The channels influence the length of time between muscle contractions. When the time period is short (a short time for the muscle to relax in between contractions), the contractions are weak.

During pregnancy, it is important that the muscle cells do not contract strongly so that the foetus can grow. However, strong contractions are required during labour.

This study wanted to see if these hERG potassium channels are also present in the muscle cells of the uterus.

 

What did the research involve?

The researchers recruited women who required a caesarean section so that they could compare muscle contractibility before and during labour using tissue samples.

They looked at biopsies of the uterine muscle of women who had undergone planned caesarean sections but had not gone into spontaneous labour, comparing these samples with those from women who required an emergency caesarean section.

The researchers recruited a group of 43 women with singleton pregnancies undergoing planned caesarean section between 37 and 40 weeks who had no signs of labour.

These planned caesareans happened if:

  • a woman had had a previous caesarean section
  • a woman had had a third or fourth-degree tear
  • the baby was in breech presentation (head up)

The second group were 27 women undergoing emergency caesarean section after the spontaneous beginning of labour at term.

These emergency caesareans happened if:

  • there were signs of foetal distress or foetal "compromise"
  • there was failure to progress in labour

Women were excluded from the study if they had an infection, high blood pressure or diabetes.

After delivery, all women were given an injection of oxytocin into the bloodstream. Oxytocin is a hormone that controls bleeding following delivery and stimulates the flow of milk.

Biopsies of the uterine muscle were taken three to five minutes after the oxytocin was administered. The samples were used for protein analysis and electrophysiology and conduction studies. The results from each group were then compared.

 

What were the basic results?

The researchers found that hERG potassium channels are present in the muscle of the uterus. When they are blocked with a drug (dofetilide), the muscle takes longer to relax between contractions so the contractions are stronger.

In women who had gone into spontaneous labour before they had a caesarean section, the level of activity of the potassium channels was reduced compared with women who had not yet started labour. This means that stronger contractions were possible – a necessity for successful labour.

This reduced activity was associated with a higher number of β-subunits of the hERG potassium channel. This subunit inhibits the potassium channel, in contrast to the α-subunit, which promotes its activity.

The level of activity of the potassium channel had not reduced in 14 out of 16 women with a BMI over 30 whose labour had started but failed to progress. These women required a caesarean section.

These women also had higher activity levels in the potassium channels because they had proportionately lower levels of the inhibitory β-subunit.

The researchers say that the β-subunit is increased by oestrogen and that oestrogen levels can be dysfunctional in the pregnancies of women with a high BMI. They also report a link between higher cholesterol levels and hERG function.

 

How did the researchers interpret the results?

The researchers concluded that, "hERG proteins, both α-pore-forming and β-inhibitory subunits, are present in human [uterine muscle] in late pregnancy.

"The levels of β-inhibitory subunit are elevated in labour tissues and are associated with a decrease in hERG activity and an increase in contraction duration.

"These changes that occurred in lean labouring women did not occur in obese labouring women, and could explain the increased incidence of failure to progress in labour, necessitating caesarean delivery in obese women."

 

Conclusion

This study has found that hERG potassium channels, which have a role in the speed and strength of heart muscle contractions, are also present in uterine muscle in late pregnancy.

The study suggests that the activity of the potassium channels falls in normal labour because of an increased number of β-inhibitory subunits. This fall makes longer and stronger contractions possible.

The opposite was seen in obese women who had started labour but failed to progress: these women had a lower proportion of the β-inhibitory subunit relative to the α-subunit. The researchers say this could be because oestrogen, which normally boosts the β-subunit, can be dysfunctional in the pregnancies of women with a high BMI, and because higher cholesterol levels are linked to hERG function.

A drug that inhibits the α-subunit could theoretically prolong the ability of the muscles to contract, but this was not looked at in this study.

The drug (dofetilide) used on the muscle cells in the laboratory in this study is licensed for people with atrial fibrillation. However, its effects and safety have not been studied in pregnant women.

There are several limitations of the findings of this study. It was based on a small number of women and presumed that the level of activity of the potassium channels in the uterine muscle was related to the stage of pregnancy.

Rather than being a single cause of labour failing to progress, a change in the ability of the muscle to contract is likely to be one of several factors that influence childbirth.

We need to see more research before any changes to the care of pregnant women or a new drug treatment would be advised.

If you are pregnant and overweight, trying to lose weight during your pregnancy is not recommended (unless specifically advised by a health professional).

The best way to protect your and your baby's health is to go to all your antenatal appointments so that the midwife, doctor and any other health professionals can keep an eye on you both.

They can manage the risks that you might face related to your weight, and act to prevent – or deal with – any problem.

Analysis by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Labour pains explained: Australian scientists discover protein that triggers child birth - and breakthrough could lead to drugs that could prevent premature births. Mail Online, June 18 2014

Links To Science

Parkington HC, Stevenson J, Tonta MA, et al. Diminished hERG K+ channel activity facilitates strong human labour contractions but is dysregulated in obese women. Nature Communications. Published online June 17 2014

Categories: Medical News

Office screen work linked to dry eye syndrome

Medical News - Wed, 06/18/2014 - 14:00

“Staring at computer screens all day ‘changes your eyes’, scientists say,” is the headline in The Independent. This follows reports that people who stare at a computer screen experience changes in their tear fluid that are typical of the symptoms of dry eye syndrome (also known as dry eye disease).

Dry eye syndrome is a condition where the eyes do not produce enough tears. This, in turn, can cause eye pain and irritation.

This latest study involved 96 office workers in Japan. They were assessed for signs and symptoms of dry eye syndrome, and questioned on the amount of time they spent in front of a visual display terminal (VDT).

Only a few (9%) met the criteria for dry eye syndrome, but a much greater proportion had signs and symptoms of dry eyes.

An association was found between time spent using a computer screen at work and dry eyes.

However, it’s important to note that although the study has demonstrated an association, it cannot prove causation. Therefore, we cannot definitely say that using computers caused these symptoms. 

It is also important to note that this was a very small sample, of just 96 people.

If you regularly use a computer, make sure your computer workstation is set up correctly to minimise eye strain. Your screen should stand at eye level, or just below it. It is also advisable to look away from the screen every five minutes for a few seconds and take a few blinks. 

 

Where did the story come from?

The study was carried out by researchers from Keio University in Tokyo, Kyoto Prefectural University of Medicine in Kyoto and Santen Pharmaceutical Co, Ltd in Osaka, as well as Harvard Medical School in Boston, US. Support was provided by Grant-in-Aid for Young Scientists from the Ministry of Health, Labour and Welfare, and the Ministry of Education, Science, Sports and Culture, with additional facilities support from Santen Pharmaceutical Co, Ltd.

The study was published in the peer-reviewed medical journal JAMA Ophthalmology.

The overall reporting of the story by The Independent is accurate, but its headline: “Staring at computer screens all day ‘changes your eyes’” is not strictly correct. While it is true that an association has been found, causation cannot be proven.

It is also important to note that the study was partly funded by Santen Pharmaceutical Co, Ltd which manufactures around 40% of the eye medications available in Japan.

 

What kind of research was this?

This was a cross-sectional study of a Japanese population of office workers, which aimed to examine the relationship between the concentration of the protein mucin 5AC in tears and the amount of time the person spent in front of a VDT.

In the eye, tears are produced by the lacrimal glands under the eyelid, which produce a salt water fluid, while other glands produce oils. The researchers report that the watery tear fluid contains dissolved mucin proteins, which are produced by cells in the conjunctiva (the thin layer of tissue that covers the inside of the eyelids and white part of the eye).

Mucins are very hydrophilic (“water-liking”) and help to hold water onto the eye’s surface. Previous studies have shown that the concentration of mucin 5AC in the tears is much lower in people with dry eye syndrome.

It has been reported that prolonged use of VDTs is a risk factor for dry eyes and is associated with low levels of mucin 5AC. This study aimed to look at the associations between the number of hours working on a VDT, the severity of dry eye syndrome and the frequency of the symptoms.

The main limitation of such a cross-sectional study is that, despite being able to demonstrate associations, it cannot prove causation.

 

What did the research involve?

The researchers selected two large companies in the Japanese stock market and recruited 96 individuals who were willing to take part in the clinical examinations, out of a potential 561.

They gave participants a questionnaire on dry eye syndrome (said to be widely used in Japan), which included 12 questions with frequency responses – always, often, sometimes or never.

Responses of “always” or “often” were considered to be positive responses to the particular symptom being questioned.

They additionally answered questions on their age, sex, height, smoking status, contact lens use and VDT use, categorised as short (less than 5 hours), intermediate (5-7 hours) or long (more than 7 hours).
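As a simple sketch, this grouping of daily VDT use could be coded as follows; how the study handled values of exactly five or seven hours is not stated, so the boundary handling here is an assumption.

    def categorise_vdt_use(hours_per_day: float) -> str:
        """Groups daily VDT use into the study's three categories."""
        if hours_per_day < 5:
            return "short"
        if hours_per_day <= 7:        # boundary handling assumed
            return "intermediate"
        return "long"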

Participants completed clinical examinations to assess the composition of the tears and the function of the eye's surface. The concentration of mucin 5AC in tear samples was assessed in the laboratory. 

Dry eye syndrome was diagnosed according to the most recent Japanese diagnostic criteria for the condition (a rough code sketch follows the list below). The criteria include:

  • presence of symptoms (more than 1 of the 12 questions answered “always” or “often”)
  • signs of disturbance of the tear film: a Schirmer test I value of less than 5mm (this test measures the depth of moisture on some special filter paper placed on the lower eyelid) and/or a tear break-up time of 5 seconds or less
  • signs of damage to the lining of the surface of the eye and conjunctiva (as indicated by fluorescein or lissamine green staining scores of 3 points or more)
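As flagged above, here is a rough Python sketch of how these three elements might combine for a "definite" diagnosis. The thresholds come from the article; the requirement that all three elements are present, and how "probable" cases are graded, are assumptions, as the article does not spell them out.

    def meets_dry_eye_criteria(positive_symptoms: int, schirmer_mm: float,
                               tear_breakup_s: float, staining_score: int) -> bool:
        """Illustrative check against the criteria summarised above."""
        has_symptoms = positive_symptoms >= 1                 # 1+ of 12 answered "always"/"often"
        tear_film_disturbed = schirmer_mm < 5 or tear_breakup_s <= 5
        surface_damage = staining_score >= 3                  # fluorescein/lissamine green points
        return has_symptoms and tear_film_disturbed and surface_damage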

 

What were the basic results?

The 96 individuals were 63% male, with an average age of 41.7 years. The average duration of VDT use was 8.2 hours per day.

Most participants had some signs of disturbance of the tear film: 82% of the sample had a tear break-up time of less than 5 seconds and 21% had Schirmer test I values of less than 5mm. However, only a few had signs of damage to the lining of the surface of the eye and conjunctiva.

Nine people (9%) met the criteria for definite dry eye syndrome; it affected a higher proportion of women (5; 13.9%) than men (4; 6.7%). However, over half of the total sample (55; 57%) had sufficient signs for probable dry eye syndrome.

The average concentration of mucin 5AC was significantly lower in people with definite dry eye syndrome (3.5ng/mg) than in people without dry eye syndrome (8.2ng/mg).

The average mucin 5AC concentration was also significantly lower in people with VDT usage longer than 7 hours a day (5.9ng/mg) compared to people with VDT usage of less than 5 hours a day (9.6ng/mg).

The average mucin 5AC concentration was also lower in the people reporting symptoms of eye strain, excess tearing or dry eye sensation, compared to people not reporting these symptoms.

 

How did the researchers interpret the results?

The study suggests that office workers with prolonged VDT use had a low concentration of mucin 5AC in their tears, as did those with symptoms of eye strain.

The researchers went on to say that mucin 5AC concentration in the tears may be lower in people with dry eye syndrome than people without.

 

Conclusion

This small cross-sectional study of 96 office workers in Japan found that while only a few participants (9%) met the criteria for dry eye syndrome, a much greater proportion had signs and symptoms of dry eyes.

The concentration of mucin protein in the tears has previously been associated with dry eye conditions and with prolonged use of VDTs. As the researchers had suspected, people with dry eye disease had a lower concentration of mucin protein in their tears, as did people who worked at a computer for more than seven hours per day and those reporting symptoms of eye strain, dry eyes or excess watering of the eyes.

The findings are perhaps not that unexpected. When we work for long hours at a computer screen, we tend to stare fixedly at the same distance for long periods of time and often don’t blink as much as is necessary.

However, it’s important to note that although the study has demonstrated an association, it cannot prove causation: we cannot say that computer use definitely caused these symptoms. For example, we don’t know how long the participants had had these various problems, how long they had been working at a computer screen, whether they had symptoms beforehand, or how much they engaged in other activities that may have had an influence (e.g. TV viewing, playing computer games or reading for long periods).

Many of Japan’s citizens spend several hours a day staring at screens, meaning that the association detected in the study may not apply to other nations and cultures.

It is also important to note that this was a very small sample of just 96 participants. When dividing people into categories – for example, by the presence of different symptoms, by definite or probable dry eye syndrome, or by hours spent using a visual display terminal – the numbers become even smaller. This could reduce the reliability of the associations between mucin concentrations and the factors mentioned above.

A sample of a different or larger group could give different results. Study of other non-office populations or office workers of different age groups would also be useful as a comparison.

Overall, the study shows a very plausible association between prolonged use of a VDT and dry eyes, but it still cannot prove causation.

If you are experiencing symptoms such as dryness, grittiness or soreness that get worse throughout the day, you should see your GP. Dry eye syndrome can lead to complications if left untreated.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Staring at computer screens all day 'changes your eyes', scientists say. The Independent, June 17 2014

Links To Science

Uchino Y, Uchino M, Yokoi N, et al. Alteration of Tear Mucin 5AC in Office Workers Using Visual Display Terminals. JAMA Ophthalmology. Published online June 5 2014

Categories: Medical News

Can a 'microwave helmet' really detect strokes?

Medical News - Tue, 06/17/2014 - 14:26

“Microwave helmet ‘can spot a stroke’,” reports BBC News.

There are two types of stroke. The majority of strokes are caused by a clot stopping blood flow to an area of the brain. This type of stroke can be treated with anti-clotting medications that break up or dissolve blood clots. However, this type of treatment is disastrous if the stroke turns out to have been caused by bleeding into the brain.

Currently the only way to tell the difference is for a patient to have a scan in hospital. Going to hospital and waiting for a scan can delay treatment, and the sooner treatment is given, the less damage the stroke is likely to do.

The headline on the BBC was prompted by a proof-of-concept study that has shown that a “microwave scattering” technique can distinguish between the two types of strokes. The helmet device used by researchers is portable and could therefore be used by paramedics and other health professionals before a patient gets to hospital. This could allow treatment to start vital minutes earlier.

In the studies, when the cut-off was set to identify all haemorrhagic strokes, some people with ischaemic strokes were misclassified. But the researchers hope that information from a larger data-set from an ongoing clinical study will allow them to better differentiate between the two.

This early stage research is encouraging, but further work is required before NHS ambulances are equipped with “microwave helmets” for people who may have had strokes.

 

Where did the story come from?

The study was carried out by researchers from Chalmers University of Technology, University of Gothenburg, Sahlgrenska University Hospital and MedTechWest, all in Gothenburg, Sweden.

It was funded by VINNOVA (Swedish Government Agency of Innovation Systems) within the VINN Excellence Centre Chase, by SSF (Swedish Foundation for Strategic Research) within the Strategic Research Centre Charmant, and by the Swedish Research Council. It was published in the peer-reviewed journal IEEE Transactions on Biomedical Engineering.

The research was well reported by the BBC.

 

What kind of research was this?

This study was a description of the background, design and signal analysis of two microwave-based stroke detection systems, and a proof-of-concept clinical study on people who’d had a stroke and healthy people.

The researchers wanted to develop a new way of diagnosing stroke that was capable of differentiating between ischaemic stroke (caused by clots stopping blood getting to the brain) and haemorrhagic stroke (caused by bleeding into the brain). They wanted it to be used when patients arrive at A&E or by paramedics to enable appropriate anti-clotting medication to be started as soon as possible in people who have a stroke caused by a blood clot (ischaemic stroke). The need to distinguish between the two types of stroke is vital, as giving anti-clotting treatment to someone with a haemorrhagic stroke could be disastrous.

The researchers were interested in applying “microwave scattering” to this problem. They developed two prototype helmets with 10 or 12 microwave patch antennas. One at a time, each antenna is used as a transmitter, with the remaining antennas in receiving mode.

Microwave scattering can detect strokes because the scattering properties of white and grey matter are different from those of blood. The power output of the imaging systems was about 1mW, about 100 times lower than the 125mW transmitted by mobile phones.
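The transmit/receive pattern described above can be sketched in a few lines of Python. The names here are hypothetical, and a real system would record complex scattering parameters across many frequencies rather than a single value per antenna pair.

    def collect_scattering_data(n_antennas: int, measure):
        """One measurement cycle: each antenna transmits in turn while
        every other antenna records the scattered signal."""
        data = {}
        for tx in range(n_antennas):          # 10 or 12 antennas in the prototypes
            for rx in range(n_antennas):
                if rx != tx:
                    data[(tx, rx)] = measure(tx, rx)   # hypothetical hardware call
        return data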

 

What did the research involve?

In the first clinical study, 20 patients diagnosed with acute stroke were studied in a specialist hospital clinic between seven and 132 hours after stroke onset. Of the 20 patients, nine had a haemorrhagic stroke and 11 had an ischaemic stroke. In this study the first prototype was used, which was based on a bicycle helmet and had 10 patch antennas.

In the second clinical study, 25 patients with stroke were studied on a hospital ward, between four and 27 hours after stroke onset. Of the 25 patients, 10 had a haemorrhagic stroke and 15 had an ischaemic stroke. In addition, 65 healthy people were imaged. In this study the second prototype was used, which was a custom-built helmet with 12 patch antennas.

The signals obtained were analysed by a computer algorithm.

 

What were the basic results?

In the first clinical study, if the cut-off was set to identify all patients with haemorrhagic stroke, four of the 11 patients with ischaemic stroke were misclassified with haemorrhagic stroke.

In the second clinical study, when the cut-off was set to identify all patients with haemorrhagic stroke, one of the 15 patients with ischaemic stroke was misclassified with haemorrhagic stroke.
The technique was even better at distinguishing between patients with haemorrhagic stroke and healthy people.
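In screening terms, the counts from the second study correspond to the following sensitivity and specificity at that cut-off; the terminology and arithmetic are added here for illustration, as the paper's own analysis is more involved.

    # Counts reported above for the second clinical study.
    haemorrhagic = 10        # all correctly flagged at this cut-off
    ischaemic = 15
    misclassified = 1        # ischaemic strokes wrongly flagged as haemorrhagic

    sensitivity = haemorrhagic / haemorrhagic              # 100%: no bleeds missed
    specificity = (ischaemic - misclassified) / ischaemic  # 14/15
    print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
    # -> sensitivity 100%, specificity 93%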

 

How did the researchers interpret the results?

The researchers conclude that: “The relative simplicity and size of microwave-based systems compared to CT [computed tomography] or MRI [magnetic resonance imaging] scanners make them easily applicable in a pre-hospital setting. We suggest that microwave technology could result in a substantial increase of patients reaching a stroke diagnosis in time for introduction of thrombolytic [anti-clotting] treatment.

“The socioeconomic ramifications of such a development are obvious not only in the industrial world but also, and perhaps even more so, in the developing world,” they said.

 

Conclusion

This study has shown that haemorrhagic strokes could potentially be distinguished from ischaemic strokes by analysing microwave scattering measurements.

While the two types of stroke can already be accurately diagnosed by CT or MRI scans in hospital, the “microwave helmet” development is important because it could potentially be used before someone arrives in hospital. This would avoid any time delay and allow people with ischaemic stroke to receive the anti-clotting medication that they need as soon as possible, potentially reducing the extent of damage the stroke causes.

The technique isn’t perfect yet, but the researchers are hopeful that information from a larger data-set from an ongoing clinical study will improve the predictive power of the algorithms.

They also say that “introduction of pre-hospital thrombolytic [anti-clotting] treatment based on a microwave scan diagnosis will have to await studies of larger clinical cohorts”.

So while this early stage research is encouraging, further work is required before “microwave helmets” are used to distinguish between ischaemic and haemorrhagic strokes. More work is also needed to prove whether they could improve the care and treatment of people who’ve had a stroke.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Microwave helmet 'can spot a stroke'. BBC News, June 17 2014

Links To Science

Persson M, et al. Microwave-based stroke diagnosis making global pre-hospital thrombolytic treatment possible. IEEE Transactions on Biomedical Engineering. Published online June 16 2014

Categories: Medical News

Do doctors confuse ME with a heart problem?

Medical News - Tue, 06/17/2014 - 14:17

“ME: one third of patients ‘wrongly diagnosed’,” says The Daily Telegraph, which has reported on a new study of a condition called postural tachycardia syndrome (PoTS).

In PoTS, the heart rate increases by more than 30 beats per minute on standing, causing dizziness, fatigue and other symptoms that affect a person's quality of life (for a full list, read the PoTS UK advice on PoTS symptoms).
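That headline heart-rate criterion can be expressed as a one-line check; treat this as a sketch only, since real diagnosis also depends on symptoms, their duration and the exclusion of other causes.

    def meets_pots_heart_rate_criterion(lying_bpm: float, standing_bpm: float) -> bool:
        """True if the heart rate rises by more than 30 beats per
        minute on standing, the threshold described above."""
        return standing_bpm - lying_bpm > 30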

Researchers wanted to find out some of the characteristics of people who had PoTS, and whether the condition has any similar characteristics to chronic fatigue syndrome (CFS). The media headlines have used this latest study to focus on previous research that found around a third of people with a diagnosis of CFS/myalgic encephalitis (ME) actually had PoTS.

It is unclear from this latest report whether these people had been “wrongly diagnosed”, or whether they had both PoTS and CFS. In the new study, 20% of people with PoTS also had a diagnosis of CFS.

Unfortunately, there is no cure for either condition – instead, doctors help people to manage their symptoms and reduce the impact on their quality of life. Beta-blockers were the most commonly used treatment for the symptoms of PoTS in this study, but a total of 21 different treatment regimes were used across the 136 participants, and some patients received no treatment at all.

The researchers speculate that those not on any treatment may have previously experienced side effects or no improvement from medication. The results also showed very little difference in symptoms, regardless of whether medication was taken.

 

Where did the story come from?

The study was carried out by researchers from Newcastle University and was funded by the UK National Institute for Health Research (NIHR) Biomedical Research Centre in Ageing.

The study was published in the peer-reviewed medical journal BMJ Open. The article is open-access, meaning the research is free to view and download online.

The media reported that a third of people are being wrongly diagnosed with CFS when they have PoTS, which is “treatable”. However, this is not what the latest study found. Instead, it highlights that very little is known about both conditions, although it did find that there is a degree of crossover between PoTS and CFS/ME. Both are difficult to manage with current treatment options.

However, The Independent did correctly cover the results of this latest study.

 

What kind of research was this?

This was a cross-sectional survey of people with a diagnosis of PoTS and people with CFS.

The aim was to raise awareness of PoTS in health professionals, and to see if there were any differences between people with PoTS seen in one centre, compared to those who were members of the largest UK support group for people with PoTS – PoTS UK.

The researchers compared all of these people with PoTS to people with CFS who did not have PoTS. A previous cross-sectional study in 2008 found that 27% of those with a diagnosis of CFS also had PoTS.

 

What did the research involve?

The researchers sent questionnaires to:

  • people diagnosed with PoTS who had attended their clinics
  • all members of the support group PoTS UK 
  • people with CFS without PoTS

The researchers looked for differences in demographics, time taken for a diagnosis to be made, symptoms and treatments. The participants with PoTS were asked to complete two initial questionnaires about:

  • background details, including education level, age of symptom onset, age at diagnosis of PoTS and current medication
  • symptoms of PoTS, potential precipitants to the illness and any other illnesses

All of the participants, those with PoTS and those with CFS, were sent the following six validated symptom assessment tools:

  • fatigue impact scale
  • Epworth sleepiness scale
  • orthostatic grading scale
  • hospital anxiety and depression scale
  • patient-reported outcomes measurement information system, health assessment questionnaire (PHAQ)
  • cognitive failures questionnaire

 

What were the basic results?

The researchers recruited 136 people with PoTS (52 out of 87 from the clinics and 84 of the 170 members of PoTS UK).

The researchers found that patients with PoTS are predominantly women, young, well-educated and have significant and debilitating symptoms that considerably impact a person's quality of life.

The researchers noted several differences between their cohort of patients and all members of the support group. There were differences in:

  • age
  • gender
  • employment
  • working hours

20% of the total sample of people with PoTS also had a diagnosis of CFS (27 out of the 136 participants), and this went up to 42% in those recruited from the clinic (22 out of 52). Of people with PoTS who did not report CFS, 43% of them would have met the diagnostic criteria.

Compared to those with both PoTS and CFS, people diagnosed only with PoTS:

  • had significantly more symptoms before diagnosis, the majority of which were palpitations, dizziness, memory impairment, breathlessness, lightheadedness and muscle aches
  • were more likely to receive disability benefits

The researchers also compared people with PoTS to people who had CFS. It is not known how many people with CFS completed the questionnaires, but the study reports that they were matched to the people with PoTS in terms of age and gender.

Comparing all people with PoTS (with and without CFS) to people with just CFS, it was found that:

  • there were similar high levels of symptom burden in each group
  • the symptoms related to low blood pressure, such as dizziness, were more severe in the PoTS group
  • anxiety and depression scores were significantly higher in the CFS group

On analysing the questionnaire responses from people with PoTS, the researchers found that:

  • there were 21 different treatment “regimes” for PoTS
  • the most common treatment was beta-blockers
  • 27% of people were receiving no treatment for their PoTS
  • there was no difference in fatigue severity, daytime sleepiness, cognitive or autonomic symptoms or level of functional impairment between those being treated and those having no treatment
  • people without medication scored slightly higher on the anxiety and depression scale

 

How did the researchers interpret the results?

The researchers concluded that postural tachycardia syndrome (PoTS) “is a condition that is associated with significant symptoms that impact on quality of life. Currently, there are no evidence-based treatments for PoTS, and its underlying pathogenesis [reason for disease development], natural history and associated features are not fully understood.

“We would suggest that increasing awareness of this debilitating disease is important to improve understanding, diagnosis and management of PoTS”.

 

Conclusion

This study has described some of the demographics and symptoms of people with PoTS and has also shown that there is a degree of overlap in people diagnosed with CFS and PoTS.

However, in contrast to the media's coverage, this study does not show whether patients were misdiagnosed. It also does not show that there are effective treatments for PoTS. In fact, very little difference in symptoms was found between people having treatment and those having none.

There are several other limitations to this study, including the fact that:

  • it was based on a small number of participants
  • there was a low response rate to the questionnaires, which could bias results, either because people were too ill to respond or were too busy
  • it is unclear how well people with PoTS were matched to people with CFS; 20% of the people with PoTS also had a diagnosis of CFS, which would compromise the results

However, the researchers set out to raise awareness of PoTS among health professionals, so that it is recognised more often and further research can be conducted to improve treatment options. Judging from the media coverage garnered, they have almost certainly achieved this.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Little-known heart condition found affecting mostly educated young women. The Independent, June 17 2014

Third of ME cases 'wrongly diagnosed': Experts says thousands thought to have chronic fatigue actually have similar condition that can be treated. Mail Online, June 17 2014

ME: one third of patients wrongly diagnosed. The Daily Telegraph, June 17 2014

Links To Science

McDonald C, et al. Postural tachycardia syndrome is associated with significant symptoms and functional impairment predominantly affecting young women: a UK perspective. BMJ Open. Published June 16 2014

Categories: Medical News

Warning issued over washing raw chicken

Medical News - Mon, 06/16/2014 - 14:30

"Don't wash chicken before cooking it, warns Food Standards Agency," The Guardian reports. The Food Standards Agency (FSA) has issued the advice as many people do not realise that washing raw poultry can spread bacteria, leading to an increased risk of food poisoning.

The bacterium in question, campylobacter, is the most common cause of food poisoning in the world and affects about 280,000 people in the UK each year.

New guidance is intended to remind people that washing raw chicken before cooking increases the likelihood of infection through splashing the bacteria on to work surfaces, clothing and cooking equipment. This is known as cross-contamination.

Washing is therefore not recommended – it is also unnecessary as thorough cooking will kill any bacteria.

 

Who has issued the advice?

The Food Standards Agency (FSA) has issued the guidance, which coincides with the start of this year's Food Safety Week. The FSA is the government department responsible for food safety and food hygiene across the UK.

It is trying to raise awareness of campylobacter infection because a survey of around 7,000 people found that while more than 90% of the public had heard of food poisoning from salmonella and E. coli, only 28% had heard of campylobacter. In fact, campylobacter causes more food poisoning than all the other major causes put together.

 

What is the advice?

The FSA is asking people to stop washing chicken before cooking it to reduce the incidence of campylobacter. The advice itself is not new, but the call has been issued after a survey found that 44% of people still wash chicken before cooking.

Other measures that are being taken by the FSA include:

  • working with farmers and producers to minimise the rates of campylobacter in chickens
  • minimising the levels of contamination in slaughterhouses and processors
  • ensuring caterers use preventative measures to reduce the risk of infection
  • asking TV production companies to ensure that programmes do not show anyone washing raw chicken

In addition, the major supermarkets have agreed to:

  • provide clearer information on their packs of raw chicken and turkey
  • have features on campylobacter in their magazines

 

What are the symptoms of campylobacter infection?

The typical symptoms are abdominal pain and diarrhoea, which occur two to five days after infection.

The diarrhoea can sometimes contain blood. Other symptoms of campylobacter infection can include fever, headache, nausea and vomiting. The illness usually lasts for three to six days.

 

What are the potential dangers of campylobacter?

Campylobacter infection can be fatal in young children, the elderly and people who have a lowered immune system. Lowered immunity can occur as a result of health conditions such as HIV, or as a side effect of certain treatments, such as chemotherapy.

Complications of the infection include:

  • irritable bowel syndrome
  • reactive arthritis – pain and swelling in the joints that can develop in the weeks after infection
  • Guillain-Barré syndrome – a rare condition in which the immune system attacks the peripheral nerves

 

When should I seek medical advice for suspected food poisoning?

Most cases of food poisoning do not require medical treatment. However, you should seek medical advice if you have any of the following signs or symptoms:

  • vomiting that lasts more than two days
  • an inability to keep liquids down for more than a day
  • diarrhoea that lasts for more than three days
  • blood in your vomit
  • blood in your stools
  • seizures (fits)
  • changes in your mental state, such as confusion
  • double vision
  • slurred speech
  • signs of severe dehydration, such as a dry mouth, sunken eyes, and an inability to pass urine, or passing small amounts of dark, strong-smelling urine

Always contact your GP if you get food poisoning during pregnancy. Extra precautions may be needed.

 

What is the best way to prepare raw chicken?

The FSA advises the public to:

  • cover and chill raw chicken
  • not wash raw chicken
  • wash used utensils
  • cook chicken thoroughly

Other measures that will reduce the risk of infection include:

  • washing hands with soap and warm water before cooking, after touching raw food, after touching the bin, and after going to the toilet
  • keeping raw food away from ready-to-eat foods
  • using different chopping boards for raw and ready-to-eat foods
  • keeping raw meat in a clean, sealed container on the bottom shelf of the fridge so it cannot drip on other foods

Read more advice about food safety.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Don't wash chicken before cooking it, warns Food Standards Agency. The Guardian, June 16 2014

Washing chicken 'spreads infection'. BBC News, June 16 2014

Should I wash raw chicken? Experts say no as new deadly bug is found. Daily Mirror, June 16 2014

Don't wash chicken before cooking, new guidance warns. The Daily Telegraph, June 16 2014

Don't wash chicken ...it splashes bugs all over the kitchen: Health experts' warning over food poisoning. Daily Mail, June 16 2014

Washing raw chicken 'can cause food poisoning,' Food Standards Agency warns. The Independent, June 16 2014

Washing raw chicken 'increases chances of food poisoning'. Metro, June 16 2014

Stop washing raw chicken, experts warn. ITV News, June 16 2014

Categories: Medical News

'Bionic' pancreas could be used to treat diabetes

Medical News - Mon, 06/16/2014 - 14:00

“An artificial pancreas could allow thousands of diabetes patients to live normal lives,” the Mail Online reports.

People with type 1 diabetes require lifelong insulin, as their body does not produce any. Insulin is a hormone that plays a key role in regulating the body’s blood sugar levels.

In a new study, the safety and effectiveness of a “closed-loop” insulin delivery system have been assessed.

Compared to a standard insulin pump, where the insulin delivery is programmed, the closed-loop system continuously measures sugar levels and automatically makes fine adjustments to insulin delivery in response. In effect, it acts like an artificial pancreas.

It can be challenging to keep insulin delivery at the right level: blood sugar must be controlled within the normal range while avoiding it dropping too low (hypoglycaemia), particularly overnight.

The device improved blood sugar control overnight – importantly, it was not associated with hypoglycaemic episodes.

However, one of the trial’s limitations was its small size. In addition to this, it only examined the effects of the overnight closed-loop system compared with the standard pump over four periods of four weeks each. Longer-term studies examining the safety and effectiveness of this system in larger numbers of people with type 1 diabetes are now needed.

 

Where did the story come from?

The study was carried out by researchers from the Universities of Cambridge, Sheffield and Southampton, and King’s College London. It was funded by Diabetes UK.

The study was published in the peer-reviewed medical journal The Lancet Diabetes and Endocrinology.

While the Mail Online’s reporting of the study is broadly accurate, its headline: “Artificial pancreas could help stem the diabetes epidemic: Device could help patients lead normal lives by stopping need for constant insulin” is potentially misleading on multiple levels.

Firstly, “artificial pancreas” could be misinterpreted to mean that this is an artificial organ that is surgically transplanted into the person and can produce insulin to take the place of their own pancreas. In reality, the “closed-loop” insulin delivery system is designed to be worn outside of the body.

Secondly, the “diabetes epidemic” is usually taken to mean type 2 diabetes, which is associated with lifestyle factors such as being obese and lack of exercise. It is true that some people with type 2 diabetes can go on to need insulin; however, this particular study looked at people with type 1 diabetes.

The rise of people with type 2 diabetes can rightly be described as an “epidemic”. In contrast, the number of people who develop type 1 diabetes (which usually starts during childhood) in any given year has remained relatively static (around 24 in every 100,000 children).

Neither would this treatment “stem” the number of new cases of either type of diabetes.

Thirdly, the Mail says the treatment would “stop need for constant insulin”, which is not the case. In fact, this overnight closed-loop system delivers constant insulin. It has also only been used overnight, meaning the person continued delivering their insulin as normal during the day.

 

What kind of research was this?

This was a randomised crossover trial that aimed to see whether the use of a novel overnight insulin delivery system would help to improve blood glucose (sugar) control in people with type 1 diabetes.

Type 1 diabetes is an autoimmune condition where the body starts to produce antibodies that attack and destroy the insulin-producing cells in the pancreas. The body can therefore not make insulin, so the person relies on lifelong insulin injections to control their blood sugar. Type 1 diabetes most commonly develops in childhood.

It is different from type 2 diabetes, which is where the pancreas still produces insulin, but it either cannot produce enough, or the cells of the body are no longer sensitive enough to the actions of insulin to adequately control blood sugar. Type 2 diabetes is usually controlled by diet and medication, though some people with poor control also end up needing insulin injections, similar to people with type 1 diabetes.

As the researchers say, one of the main challenges with type 1 diabetes is maintaining the right level of blood sugar control; people with the condition often face complex daily insulin regimens and regular blood sugar monitoring.

One of the most common risks is the blood sugar becoming very low (hypoglycaemia), which can cause varied symptoms, including agitation, confusion and altered behaviour, progressing to loss of consciousness. Hypoglycaemic episodes often occur at night and after drinking alcohol, making them a particular risk for young people with diabetes.

This study was looking at an overnight “closed-loop” insulin delivery system – in other words, an artificial pancreas.

A standard insulin pump is a small device connected to the body that delivers insulin under the skin, without the need for repeated injections.

The wearer adjusts and programmes the amount of insulin to be delivered, according to their blood sugar levels.

The closed-loop system is different: a real-time sensor continuously monitors the person’s sugar level (by measuring the level in the interstitial fluid that surrounds body cells) overnight and then automatically increases or decreases insulin delivery in response to this, as would normally happen in the human body with a healthy pancreas.

Studies to date have suggested that the system is a safe and feasible option, and decreases the risk of hypoglycaemia.

This crossover randomised controlled trial aimed to see whether four weeks of unsupervised use of the overnight closed-loop system would improve blood sugar control in adults with type 1 diabetes.

The crossover design meant that the participants acted as their own controls, first receiving insulin with either the closed-loop system or a standard insulin pump (control), then swapping over to the other system.

 

What did the research involve?

The study recruited 25 adults (18 years or over, with an average age of 43) with type 1 diabetes, who were used to using an insulin pump, monitoring their blood sugar and self-adjusting their insulin.

All participants first took part in a two to four week run-in period, where they were trained in the use of the insulin pumps and continuous sugar monitoring, and their treatment was optimised.

The trial was then divided into two subsequent four-week treatment periods, with a three to four week wash-out period in between, during which participants continued their normal diabetes care regimen.

In the two treatment periods, the participants received continuous sugar monitoring and were randomly assigned to receive overnight insulin delivery with either the closed-loop system or a standard insulin pump (control).

The study was open-label, meaning that participants and researchers knew which system was being used.

The participants received the treatment unsupervised and at home, though they stayed in the research clinic for the first night that they used the closed-loop system.

They were instructed to start the closed-loop system at home after their evening meal and discontinue it before breakfast the next morning.

The closed-loop system calculates a new insulin infusion rate every 12 minutes in response to the monitored glucose level.
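To make the control loop concrete, here is a minimal sketch in Python of the kind of periodic adjustment described above. The proportional rule, target value and gain are our own illustrative assumptions; the article does not describe the trial system's actual algorithm.

```python
# A minimal sketch of the closed-loop idea described above: every 12 minutes,
# read the monitored glucose level and nudge the insulin infusion rate towards
# a target. The simple proportional rule below is illustrative only, NOT the
# algorithm used in the trial.

TARGET_MMOL_L = 6.0  # an assumed set point within the 3.9-8.0 mmol/l range
GAIN = 0.05          # illustrative adjustment per mmol/l of error

def next_infusion_rate(current_rate, glucose_mmol_l):
    """Return an adjusted infusion rate given the latest sensor reading."""
    error = glucose_mmol_l - TARGET_MMOL_L
    # Raise delivery when glucose is above target, lower it when below,
    # never letting the rate go negative.
    return max(0.0, current_rate + GAIN * error)

# The device repeats this adjustment on a fixed 12-minute cycle overnight:
rate = 1.0  # illustrative starting basal rate
for reading in [9.2, 8.1, 7.0, 5.5, 4.2]:  # example sensor values, mmol/l
    rate = next_infusion_rate(rate, reading)
    print(f"glucose {reading} mmol/l -> infusion rate {rate:.2f}")
```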

The primary outcome examined was the time the person spent in the target optimum sugar range (3.9 to 8.0mmol/l) between midnight and seven in the morning.

Of the 25 people randomised, one person withdrew from the study, meaning that only 24 were available for analysis.

 

What were the basic results?

The time that participants spent in the target optimum sugar range during the seven-hour overnight period was higher when using the closed-loop system (52.6% of the time) than when using the control pump (39.1%), a statistically significant difference of 13.5 percentage points.
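Converted into time – our arithmetic, not a figure reported by the study – that difference amounts to roughly an extra hour per night spent in the target range:

\[
0.135 \times 7 \times 60 \approx 57 \text{ minutes}
\]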

The closed-loop system improved time spent in the target range in all but three participants. It also reduced the average overnight sugar level and the time spent above the target range, without increasing the time spent with a hypoglycaemic sugar level. The time spent with hypoglycaemia overnight (less than 3.9mmol/l) was no different between the closed-loop and standard insulin pumps. The closed-loop system was found to deliver 30% more insulin during the night than the standard insulin pump.

There was no difference in total daily insulin delivery. However, over the full 24-hour period, when participants used the closed-loop system overnight their 24-hour blood sugar level was significantly reduced (by 0.5mmol/l) and their time spent within the target range was increased. Participants were also observed to have significantly lower levels of HbA1c (glycated haemoglobin – a longer-term indicator of blood sugar control over the past weeks to months).

There were no severe adverse effects associated with using the closed-loop system.

 

How did the researchers interpret the results?

The researchers conclude that “unsupervised overnight closed-loop insulin delivery at home is feasible and could improve [blood sugar] control in adults with type 1 diabetes”.

 

Conclusion

It can be a challenge for people with type 1 diabetes to keep insulin delivery at the right level, which is necessary to keep blood sugar within the normal range. Avoiding periods of hypoglycaemia is particularly difficult overnight.

A further challenge is that the symptoms of type 1 diabetes usually develop during childhood. This means that children, especially teenagers, can often find the need to stick to a particular treatment “regime” and regularly monitor their blood sugar quite restrictive. However, without such treatment recommendations, they can be at risk of complications, such as hypoglycaemia.

Because of this difficulty, a device to help simplify the treatment of type 1 diabetes would be welcomed.

The device in question, the closed-loop insulin delivery system, automatically makes fine adjustments to insulin delivery in response to the glucose level being continuously measured.

This crossover randomised controlled trial demonstrated that the closed-loop system improved blood sugar control overnight.

Though the closed-loop system was only used overnight, the effects also extended into the day, significantly reducing participants’ 24-hour sugar levels.

Importantly, it was not associated with hypoglycaemic episodes.

This study is also said to be the first to monitor the safety and effectiveness of the closed-loop system when used unsupervised in the person’s own home over a four-week period. The participants continued all their daily activities and dietary patterns as normal during the study period, allowing the system to be assessed in a real-life situation without additional restrictions placed on the person.

However, there are some limitations, most notably the small sample size of only 25 participants. In addition to this, though the study period was fairly long, at four weeks, it was not long enough to monitor longer-term effects.

In particular, as the researchers acknowledge, HbA1c reflects blood sugar control over the lifetime of the red blood cell – around four months – rather than four weeks.

This means that the short study design cannot reliably indicate whether closed-loop monitoring would influence longer-term blood sugar control as indicated by HbA1c.

A further limitation is that the technique was only used at night, between midnight and 7am, when each participant was resting/sleeping. It is unclear whether the technique would be responsive enough to cope with daytime activities that require greater adjustment of insulin control, such as eating and exercise.

Therefore, unfortunately, an insulin delivery system that would completely remove any need for the person to monitor their blood sugar or adjust their own insulin does not seem to be on the cards, at least for the immediate future. 

Despite these limitations, the results of this small study are encouraging. Studies involving a greater number of people and taking place over a longer duration are now required.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Artificial pancreas could help stem the diabetes epidemic: Device could help patients lead normal lives by stopping need for constant insulin. Mail Online, June 16 2014

Links To Science

Thabit H, Lubina-Solomon A, Stadler M, et al. Home use of closed-loop insulin delivery for overnight glucose control in adults with type 1 diabetes: a 4-week, multicentre, randomised crossover study. The Lancet Diabetes and Endocrinology. Published online June 16 2014

Categories: Medical News

Are 'career girl' lifestyle abortions on the rise?

Medical News - Fri, 06/13/2014 - 12:39

Media sources have reported on the "career girl" abortion (Daily Mail) or "lifestyle" abortion (The Daily Telegraph) after figures released by the Department of Health showed that increasing numbers of women aged 25 to 29, or who have long-term partners or children already, are having abortions.

The media stories follow the release of government abortion statistics for England and Wales in 2013. These provide information on abortion (medically known as "termination of pregnancy") rates and the characteristics of the women having abortions.

While the total number of abortions in England and Wales has increased, the rate of abortions has actually decreased. And encouragingly, rates among teenagers have fallen. The highest abortion rate in 2013 was among women aged 22, up from age 21 in 2012.

The Department of Health report provides us with facts only, not explanations. It does not discuss the possibilities for the changes in trends seen, and all media reports of "career girl" or "lifestyle" abortions suggesting abortion is sometimes used as a method of contraception are pure speculation, with no foundation in this report.

 

What do the statistics show?

Any doctor performing an abortion is legally required to notify the chief medical officers (CMOs) within 14 days of the abortion. The statistics in this Department of Health report come from abortion notification forms returned to the CMOs of England and Wales, and can therefore be considered reliable.

The total number of abortions in 2013 was 185,331, an increase of 0.1% on 2012 (185,122) and a 2.1% rise from 2003 (181,582). But despite the increase in the overall number of abortions, rates among women have actually decreased.
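A quick check of the arithmetic behind those percentage changes (our verification of the report's figures):

\[
\frac{185{,}331 - 185{,}122}{185{,}122} \approx 0.1\%, \qquad
\frac{185{,}331 - 181{,}582}{181{,}582} \approx 2.1\%
\]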

The age-standardised abortion rate was 15.9 per 1,000 women aged between 15 and 44 years old resident in England and Wales. This was 0.8% lower than in 2012 and 4.7% lower than in 2003 (16.7). It is the lowest abortion rate for 16 years.

However, the age group with the highest abortion rate seems to be getting older. In 2013, it was highest for women aged 22 (at 30 per 1,000 women). The previous year, it was highest for women aged 21 (at 31 per 1,000).

Teenage abortion rates in 2013

The abortion rates among the teenage population are lower and have been falling: in 2013, the rate among under-16s was 2.6 per 1,000, and the rate among under-18s was 11.7 per 1,000.

These figures are both lower than the previous year (3.0 and 12.8 per 1,000 respectively) and 10 years ago (3.9 and 18.2 per 1,000 respectively).

 

What are the reasons for these abortions?

As the Department of Health report states, the Abortion Act 1967 permits the termination of a pregnancy (abortion) by a registered medical practitioner subject to certain conditions.

A legally induced abortion must be certified by two registered medical practitioners and is carried out on the grounds of one of five reasons (A to E):

  • A: risk to the life of the woman if the pregnancy is continued
  • B: risk of serious permanent injury to the physical or mental health of the woman
  • C: risk of injury to the physical or mental health of the woman (and the pregnancy had not exceeded its 24th week)
  • D: risk to the physical or mental health of any existing children in the family (and the pregnancy had not exceeded its 24th week)
  • E: risk that the child would be born with physical or mental abnormalities such that they would be seriously handicapped

Most abortions related to unwanted or unplanned pregnancy will usually be carried out on grounds C or, rarely, grounds D (for example, if the woman has other children who she feels she wouldn't be able to care for adequately if she had another).

As the report demonstrated, in 2013 97% of abortions (180,680) were carried out on grounds C. Virtually all of these (99.8%) were carried out specifically on the grounds of threat to the woman's mental health, rather than her physical health.

Meanwhile, only 1% of abortions in 2013 were carried out on grounds E – that the child would be born disabled. Nearly half of the 2,732 abortions carried out on grounds E were because of identified congenital abnormalities, a third of these being caused by chromosomal abnormalities (most commonly, Down's syndrome).

 

Where do the media reports of 'career girl abortions' come from?

The report included varied information about the lifestyle characteristics of women undergoing abortion in 2013. The media has made sweeping statements about women's motivations and needs for abortion loosely based on these findings.

Facts that may have influenced the headlines include:

  • Most abortions (81%) were carried out for single women, a proportion that has risen since 2003 (76%). More than 60% of these "single" women were reported to be with a partner. A further 16% of women having an abortion were reported to be married or in a civil partnership.
  • Just over a third of women (37%) having an abortion had had one or more previous abortions. The proportion this applied to increased with age, for obvious reasons: 18% of women aged 18 or 19 had had a previous abortion, rising to 34% of 20-24 year olds, 44% of 25-29 year olds and 47% of 30-34 year olds; among those over 35 the figure was 45%.
  • Just over half of women (53%) undergoing abortion had had a previous pregnancy resulting in birth, while 18% had had a previous pregnancy resulting in miscarriage.
  • Presumably the most important factor influencing the tone of the headlines was the finding that 43,578 women aged between 25 and 29 had an abortion in 2013, compared with 41,882 in 2012. The proportion of women in this age group having an abortion who had previously given birth increased from 52% in 2012 to 53% in 2013.

However, the report provides figures only. The Department of Health report does not discuss possibilities for changes in trends, and all media reports of "career girl" or "lifestyle" abortions suggesting abortion is sometimes used as a method of contraception are pure speculation, with no foundation in this report.

As the figures show, while the total number of abortions in England and Wales has increased, the rate per 1,000 women has actually decreased.

Teenage abortion rates have also fallen, which is encouraging. The reason for this decrease is not known – it could be the result of improved sexual health awareness and education, but again this is speculation.

The fact that the proportion of women who have had previous abortions increases with age is, as the report says, likely just a result of older women having had more years in which pregnancy was possible.

The information on the proportion of women having an abortion who are single, with a partner or married similarly tells us little about women's lifestyle choices.

Overall, the report only provides us with facts, not explanations.

 

Where can I get advice on abortion and pregnancy?

You can get help and advice on contraception, pregnancy and abortion from various sources, including your GP or local sexual health clinic.

Online support services include FPA and Brook, or you can call the sexual health advice line on 0300 123 7123.

 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

'Lifestyle' abortions warning as serial termination numbers surge. The Daily Telegraph, June 12 2014

Rise of the 'career girl' abortion: Number of women aged 25 to 29 having abortions increases by 20% in a decade. Daily Mail, June 12 2014

Links To Science

Department of Health Abortion Statistics, England and Wales: 2013 (PDF, 2.83Mb). Published June 12 2014

Categories: Medical News

Hip replacement cement linked with deaths

Medical News - Fri, 06/13/2014 - 12:39

"Toxic NHS hip implants blamed for more than 40 deaths," The Daily Telegraph reports. Other media sources similarly report how surgical "cement" used in some hip replacements has been linked to the deaths.

This news is based on a study looking at risk of death or severe harm associated with partial hip replacements involving cement for people with a fracture at the top of the thigh bone (fractured neck of femur).

The practice of using cement to fix the replacement joint securely in place is a clinical decision made by surgeons, based on their experience and the patient's characteristics.

In 2009, the National Patient Safety Agency (NPSA) alerted health professionals to the risk of bone cement implantation syndrome (BCIS), which can happen when cement is used.

In BCIS, insertion of cement somehow leads to some fat and bone marrow contents being released into the bloodstream (venous embolisation). This in turn risks blocking the blood flow, potentially causing respiratory and cardiac arrest.

This study looked at the number of cases of BCIS reported between 2005 and 2012. There were 62 cases of death or severe harm caused by BCIS in this period – around one case for every 2,900 partial hip replacements for fractured neck of femur.

Worryingly, three-quarters of these incidents occurred after 2009, suggesting that the precautionary measures regarding use of cement advised by the NPSA had either not been implemented or were not effective.

However, this study is not able to fully appraise the risks and benefits of using cement or not.

 

Where did the story come from?

The study was carried out by researchers from Imperial College London, including Sir Liam Donaldson, the former chief medical officer.

It is reported to be part of a research programme at Imperial College funded by National Health Service (NHS) England to develop incident reporting in the NHS.

The study has been published in the peer-reviewed medical journal BMJ Open and is open access, so it is freely available to read online.

The Daily Telegraph's headline "Toxic NHS hip implants blamed for more than 40 deaths" has somewhat misfired. It is not the implants themselves that have been called into question, but the cement used to hold them in place. The cement is not made by the NHS, and it is almost certain that similar practices are used in the UK private sector, as well as healthcare systems in other countries.

Once past the headlines, the media reporting is representative of this research, although The Telegraph included a response from NHS England, while The Guardian and The Independent chose to take the researchers' words at face value.

This is not the first time there have been concerns about hip replacements. In 2012, some brands of metal-on-metal hip implants were recalled over safety concerns.

 

What kind of research was this?

This was a patient safety surveillance study that aimed to estimate the risk of death or severe harm in people undergoing partial hip replacement surgery for a fracture at the top of the thigh bone (fractured neck of femur).

A partial hip replacement (hemiarthroplasty) involves replacing only the top "ball" part of the thigh bone that is fractured, as opposed to a total hip replacement (often carried out because of osteoarthritis, for example), which involves replacing the "socket" part of the joint as well.

Around 75,000 fractures of the neck of femur are said to occur in the UK every year – most are related to osteoporosis. The researchers report that in 2012, 22,000 people in the UK received a partial hip replacement following such a fracture.

In these operations, cement is often used to fix the replacement metal "ball" and its stem securely into the bone, but there is considerable debate about this practice.

An alternative is to not use cement, and instead allow the bone to gradually mesh with the surface of the implant.

The decision to use cement or not usually comes down to the surgeon's choice and the characteristics of the patient.

In 2009, the National Patient Safety Agency (NPSA) had accumulated an increasing number of reports attributing severe harm and sudden death to the cement used in partial hip replacements.

The specific concern – bone cement implantation syndrome (BCIS) – is said to be caused by the cementation process somehow leading to some fat and bone marrow contents being released into the venous bloodstream (venous embolisation).

This in turn can potentially cause blockages in the bloodstream, leading to low blood pressure and respiratory and cardiac arrest. The exact way that cementation may cause this to happen is poorly understood.

The identified clusters of incidents led to guidance being given to health professionals about additional precautions for the use of cement (related to patient assessment, anaesthetic technique and surgical technique). However, as the researchers say, there was no firm direction about whether to use cement or not.

Since the alert, further research studies have looked into the number of incidents reported. The current study examines the number of incidents of BCIS reported to the National Reporting and Learning System (NRLS), a patient safety incident and reporting system set up by the NHS in 2003.

 

What did the research involve?

The researchers looked for all incidents reported by NHS hospitals in England and Wales between January 2005 and December 2012 where the incident report clearly described severe patient harm associated with cement use in partial hip replacement for fractured neck of femur.

To identify potential cases, the researchers searched the report text for key words such as "cement" combined with terms indicating death during the operation, "cardiac arrest", low blood pressure, "fat embolus" or "collapse", along with words related to orthopaedics and hip replacement surgery.

They specifically looked for reports classified as "death", "severe harm" or "moderate harm". The identified incidents were then separately reviewed and verified by two researchers.
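To illustrate this kind of screening step, here is a minimal sketch assuming a simple keyword match; the term list is adapted from the examples quoted above and is not the study's actual search strategy.

```python
# Illustrative keyword screen for free-text incident reports, loosely based on
# the terms quoted above. The study's real search strategy was more elaborate;
# the term list here is an assumption for demonstration only.

CEMENT_TERM = "cement"
HARM_TERMS = ["cardiac arrest", "fat embolus", "collapse",
              "hypotension", "death"]

def is_candidate_report(report_text: str) -> bool:
    """Flag a report mentioning cement together with any harm-related term."""
    text = report_text.lower()
    return CEMENT_TERM in text and any(term in text for term in HARM_TERMS)

# Example: this report would be passed to the two reviewers for verification.
print(is_candidate_report(
    "Cemented hemiarthroplasty for fractured NOF; cardiac arrest on table."
))  # True
```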

The main outcomes the researchers were interested in were the number of reported deaths, cardiac arrests and near cardiac arrests per year. They also looked at the timing of the patient's deterioration and its relation to cement insertion.

They specifically looked at the number of reports that occurred before and after the 2009 NPSA alert on the potential risk of cement was issued.

 

What were the basic results?

Over the eight-year period (January 2005 to December 2012), there were 360 potential reports identified, of which 62 were judged by the two reviewers to clearly report severe harm or death specifically associated with the use of cement in partial hip replacement for fractured neck of femur.

Of these 62 incidents:

  • two-thirds (41 of 62) were deaths, of which most (33) occurred on the operating table
  • 14 involved a cardiac arrest from which the person was resuscitated
  • 7 involved near cardiac arrests from which the person recovered

In the majority of cases (55/62, 89%) the person deteriorated during or within a few minutes of cement insertion.

Overall, there was one incident of BCIS for every 2,900 partial hip replacements for fractured neck of femur carried out over the period. There was a general increase in the number of incidents reported each year between 2005 and 2012. Nearly three times as many incidents were reported after the NPSA alert was issued in 2009 compared with before.
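For a sense of scale, the quoted rate implies roughly 180,000 such operations over the period – a back-calculation from the two figures above, not a number reported in the study:

\[
62 \times 2{,}900 \approx 180{,}000
\]

This tallies with the researchers' figure of around 22,000 partial hip replacements in 2012 alone.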

 

How did the researchers interpret the results?

The researchers conclude that the incident reports identified provide evidence that cement use in partial hip replacement for fractured neck of femur in England and Wales can be associated with death or severe harm as a result of BCIS.

They note that three-quarters of the deaths identified have occurred since the 2009 alert, when the NPSA publicised the issue and encouraged the use of mitigation measures related to patient assessment, anaesthetic technique and surgical techniques.

The researchers suggest that the reports show that there has been incomplete implementation or effectiveness of these mitigation measures.

They go on to say that there is a need for stronger evidence that weighs the risks and benefits of cement in partial hip replacement for fractured neck of femur.

 

Conclusion

This is valuable research highlighting that, between 2005 and 2012, there were 62 cases of severe patient harm or death from bone cement implantation syndrome (BCIS) as a result of cement use in partial hip replacement for fractured neck of femur.

Notably, the 2009 alert by the National Patient Safety Agency (NPSA) on the potential for this risk does not appear to have reduced the number of cases. In fact, the number of cases clearly increased year by year over the study period.

The reason for the apparent ineffectiveness of the alert is not known. The researchers can't say whether the suggested measures related to patient assessment, anaesthetic technique and surgical techniques have not been taken up by professionals, or have just not been effective.

It is also possible that an increased awareness of the risk of BCIS after the NPSA alert led to more serious harms and deaths being reported as potentially associated with cement use.

As the researchers further acknowledge, the incidence of 1 in every 2,900 partial hip replacements for fractured neck of femur could even be an underestimate, as there may have been under-reporting to the National Reporting and Learning System (NRLS) that provided the data for this study.

Also, as the researchers say, this study of reported incidents is not able to fully appraise the benefits and risks of cement use in partial hip replacements, so its findings need to be considered alongside information on the use of cement collected through other sources.

Professor Sir Liam Donaldson, former chief medical officer and a prominent advocate of patient safety, was involved in this study, and is quoted in The Telegraph as saying: "We want to see this whole question about the use of cement opened up again and further research and evaluation of the risks."

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Toxic NHS hip implants blamed for more than 40 deaths. The Daily Telegraph, June 13 2014

Cement used in hip replacement linked to surgery deaths. The Independent, June 13 2014

Use of cement in hip replacements questioned by researchers. The Guardian, June 13 2014

Links To Science

Rutter PD, et al. What is the risk of death or severe harm due to bone cement implantation syndrome among patients undergoing hip hemiarthroplasty for fractured neck of femur? A patient safety surveillance study. BMJ Open. Published online June 12 2014

Categories: Medical News

'Super-mums' at risk of depression

Medical News - Thu, 06/12/2014 - 15:00

“’Super-mums’ … may be more likely to suffer from depression, researchers say,” the Mail Online reports. A US study found a possible association between concern about being perceived as a perfect parent and maternal depression risk.

The researchers developed a 24-item questionnaire, which they called the Rigidity of Maternal Beliefs Scale (RMBS), designed to assess how fixed a mother’s beliefs about motherhood are.

Women with a high RMBS score had very fixed notions about the role of motherhood and the responsibilities it brings.

For example, they agreed strongly with statements such as “I should do everything for my baby myself” and “Having negative thoughts about my baby means something is wrong with me” – beliefs that are unlikely to match the messy reality of bringing up a baby.

The researchers did find that women with a high RMBS score had an increased tendency to develop postnatal depression.

This was a small study, but the thinking underpinning it seems plausible. Mothers who assume that motherhood is always going to be joyous may be more likely to become depressed when confronted with the reality of the situation.

Having a baby is certainly joyful but it is never easy. It’s important new parents feel they can call on others for support – rather than believing they must do everything themselves.

 

Where did the story come from?

The study was carried out by researchers from the University of Michigan and Florida State University, and was funded by the University of Michigan.

The study was published in the peer-reviewed journal Depression and Anxiety.

It was covered fairly, if in rather loose terms, by the Mail Online.

The site did not explain the study’s aim – to design and test a measure of women’s beliefs and how these are related to postnatal depression.

 

What kind of research was this?

The researchers say that perinatal (or postnatal) depression has a negative impact on women, parenting and children’s development. Yet little is known about how far maternal beliefs or attitudes are associated with depression.

They suggest that “rigid” beliefs such as believing you must “fix” all parenting difficulties yourself may be associated with lower mood during the postnatal period.

Their aim here was to create and test a questionnaire for pregnant women and new mothers, examining their beliefs in three areas closely related to mood and behaviour:

  • whether a mother thinks she is competent (maternal self-efficacy)
  • whether she believes babies get hurt or sick easily (perceptions of child vulnerability)
  • whether she internalises societal beliefs about what good mothers should do and feel (perceptions of societal expectations)

They then aimed to test whether the results from the questionnaire could identify women at risk of postnatal depression.

 

What did the research involve?

The researchers initially developed a 30-item measure, called the Rigidity of Maternal Beliefs Scale (RMBS). They did this after consulting expert clinicians and researchers in the field of women’s mental health, reviewing the existing literature and interviewing depressed women.

They also created a seven-point answer scale, ranging from 1 (strongly disagree) to 7 (strongly agree), with higher scores suggesting more rigid beliefs and lower scores more flexibility.

After piloting the measure with a small group of depressed women, they removed six of the items, resulting in 24 final items.
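For illustration, scoring a scale like this is typically a matter of summing the item responses. The sketch below assumes simple summation of the 24 items with no reverse-scored items – an assumption, as the article does not give the scoring rules.

```python
# Illustrative scoring of a Likert-style scale like the RMBS: 24 items, each
# answered from 1 (strongly disagree) to 7 (strongly agree), summed so that
# higher totals suggest more rigid maternal beliefs. The real instrument's
# scoring rules are not described in the article, so this is a sketch only.

def rmbs_total(responses):
    """Sum 24 item responses after validating they sit on the 1-7 scale."""
    if len(responses) != 24:
        raise ValueError("expected 24 item responses")
    if any(not 1 <= r <= 7 for r in responses):
        raise ValueError("each response must be between 1 and 7")
    return sum(responses)

# A uniformly neutral respondent (all 4s) scores 96; the possible range runs
# from 24 (most flexible) to 168 (most rigid).
print(rmbs_total([4] * 24))
```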

The RMBS was designed to cover four inter-related areas of belief:

  • perceptions of societal expectations of mothers – beliefs about the responsibilities of motherhood – such as “I should do everything for my baby myself” and “I should be able to figure out and fix parenting difficulties myself”
  • role identity – beliefs about the experience of motherhood, such as “being a mother should be positive” and “babies get hurt or sick easily”
  • maternal confidence – how confident (or not) they feel about being a mother and how this level of confidence compares to other mothers
  • maternal dichotomy – beliefs about what makes a “good” or “bad” parent, both in terms of individual thinking and how others perceive them, such as “if my baby misbehaves, then others will think I am a bad parent”

The questionnaire was sent out to women twice – once during pregnancy and again after the baby’s birth.

The women were also asked to complete a validated questionnaire to assess depressive symptoms.

They were also asked to fill out a further eight-item self-report questionnaire, called the parenting sense of competence scale (PSOC).

Women were eligible to participate in the study if they were pregnant, over the age of 18, fluent in English and had no adoption plan. The prenatal questionnaires were mailed out to 273 women who met the criteria; 134 returned them, giving a response rate of 49%. Of those, 113 women (84%) also returned postnatal questionnaires, participating at both study time points.

They analysed the results, looking at women’s scores on the new measure, their scores on the parental competence scale and their scores on the depression scale.

 

What were the basic results?

The researchers found their 24-item scale was a reliable, valid measure for predicting postnatal depression. Higher scores on the Rigidity of Maternal Beliefs Scale were associated with a higher risk of developing postnatal depression.

They confirmed that the questionnaire could be divided into four areas reflecting a mother’s Perception of Society’s Expectations, Role Identity, Maternal Confidence and Maternal Dichotomy (mothers’ belief that they are categorised as “good” or “bad” based on how their child behaves).

 

How did the researchers interpret the results?

They say their results suggest that the RMBS could be used as a valid, reliable measure to examine these areas of maternal beliefs, and to identify those at risk of postnatal depression. They argue that the RMBS should now be tested on a larger, more diverse sample of women.

 

Conclusion

This was a small study of relatively highly educated, high-income women, most of them with partners, so whether its findings are generalisable to all new mothers is uncertain.

The study did not take account of stressful events that can adversely affect mental health, such as relationship or financial difficulties.

Still, many experts agree with the thinking underpinning the study.

Having unrealistic beliefs and expectations about the experience of motherhood could make a woman more vulnerable to depression if she is unable to come to terms with the reality of the situation, especially if she does not seek help and support from others.

Having a baby can bring great joy, but with that joy can come an immense amount of stress, which can trigger depression. As one expert in the field put it: “I’m not surprised that some mothers develop depression. What surprises me is that all mothers don’t develop depression.”

If you are concerned about your mood you should ask yourself two questions:

  • During the past month, have you often been bothered by feeling down, depressed or hopeless?
  • During the past month, have you often been bothered by taking little or no pleasure in doing things that normally make you happy?

If the answer to either of these is yes, then it is possible you have postnatal depression. You should contact your GP for advice.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

'Super-mums' who worry about being the perfect parent are more likely to develop depression, experts claim. Mail Online, June 12 2014 

Links To Science

Thomason E, Flynn HA, Himle JA, Volling BL. Are women's parenting-specific beliefs associated with depressive symptoms in the perinatal period? Development of the rigidity of maternal beliefs scale. Depression and Anxiety. Published online May 29 2014

Categories: Medical News

High-factor sunscreen doesn’t cut melanoma risk

Medical News - Thu, 06/12/2014 - 14:00

“High-factor sun cream cannot…protect against the deadliest form of skin cancer,” The Guardian reports. Research involving mice with a predisposition to develop melanoma found that sunscreen only delayed, rather than prevented, the onset of melanoma.

Malignant melanoma occurs when cells that produce melanin – pigment that darkens the skin – rapidly divide and grow uncontrollably.

A mutation in a gene crucial for cell growth, BRAF, has been found in several cancers, including around half of melanoma cases. Mice in this study were given this mutation, and all of them developed melanoma when exposed to UV light.

Sunscreen factor 50 delayed the onset and reduced the number of tumours, but did not prevent melanoma.

The study also found that in the mice with the BRAF mutation, UV light damaged another part of the DNA that stops cells dividing too rapidly – a tumour suppressor gene called TP53. Sunscreen did not prevent this damage, which meant that the cells could grow unchecked.

Mutations in the BRAF gene found in melanomas are not the inherited type, and in humans may be caused by UV exposure and other environmental factors.

It should not be interpreted from this study that sunscreen is useless, but you cannot rely on it solely, especially if you have risk factors for melanoma, such as pale skin and having lots of moles.

Sunscreen should be used in combination with other preventative methods, such as wearing appropriate clothing when the sun is at its hottest.

 

Where did the story come from?

The study was carried out by researchers from the University of Manchester, the Institute of Cancer Research and the Royal Surrey County Hospital. It was funded by Cancer Research UK, the Wenner-Gren Foundations and a FEBS Long-Term Fellowship.

The study was published in the peer-reviewed medical journal Nature.

It was accurately covered in the UK media, with many news sources including useful quotes from independent experts about the research's implications. 

 

What kind of research was this?

This was a laboratory study that used mice to look at how effective sunscreen is at reducing the risk of developing melanoma, following exposure to UV light.

Melanoma is the most malignant form of skin cancer. It is the fifth most common cancer in the UK, with 13,348 new cases occurring each year, according to figures from 2011.

Melanoma occurs when melanocytes grow uncontrollably. These are the cells that produce the protective pigment melanin, which gives skin its colour. People with darker skin have more active melanocytes, which transfer more melanin to other cells to protect them from UV light.

A mutation in the BRAF gene, which regulates the growth and division of cells, has been found in melanoma. BRAF is known as an “oncogene”, as a mutated version can cause normal cells to become cancerous. Several different BRAF gene mutations have been found in melanoma and in some cancers of the colon, rectum, ovary and thyroid.

It is not known how UV light causes melanoma, but an abnormal BRAF gene has commonly been found at an early stage in the development of melanoma. The researchers wanted to study the process, so used mice that had this particular BRAF gene mutation (called BRAF [V600E]).

Another gene, tumour protein 53 (TP53), makes a protein called tumour suppressor 53 (Trp53) that stops cells dividing too rapidly or uncontrollably. If there is a mutation in this gene, there is no safety check and the cells can grow and multiply unchecked, causing a tumour. Trp53 has been implicated in non-melanoma skin cancer, but was not thought to be involved in melanoma.

 

What did the research involve?

Mice with the BRAF gene mutation in their melanocytes were used in a variety of experiments and compared to mice without the BRAF mutation.

The backs of the mice were shaved and one half was protected with a cloth.

Newborn mice were given a single exposure to UV light at a dose that would mimic mild sunburn in humans. Those also given the BRAF mutation were compared to those without.

Adolescent mice were given the BRAF mutation and then either:

  • not exposed to UV light
  • given weekly exposure to UV light for up to six months
  • given repeated exposure to UV light 30 minutes after factor 50 sunscreen had been applied

 

What were the basic results?

The newborn mice given the BRAF mutation developed melanoma. This was found to be due to the inflammatory response of the skin.

In the adolescent mice given the BRAF mutation:

  • melanoma occurred in 70% of the mice with no UV exposure after about 12.6 months. They had, on average, 0.9 tumours (this somewhat unusual average is due to the fact that some mice had no tumours – much like the famous example of 2.4 children)
  • all mice developed melanoma after repeated UV exposure within 7 months. They had, on average, 3.5 tumours each; 98% of them were on the skin exposed to the UV light
  • all mice given sunscreen developed melanoma within 15 months. They had, on average, 1.5 tumours each, which were more common on the sunscreen-protected skin than the cloth-protected skin

Mice without the BRAF gene mutation did not develop melanoma after exposure to UV rays.

UV light caused damage to the DNA. This was evidenced by finding mutations in the Trp53 tumour suppressor protein in 40% of cases. These mutant Trp53 proteins increased the BRAF-driven growth of the melanoma.

 

How did the researchers interpret the results?

The researchers conclude that this study reveals “two UVR melanoma pathways: one driven by inflammation in neonates and one driven by UVR-induced mutations in adults”. They also found that “sunscreen (UVA superior, UVB sun protective factor [SPF] 50) delayed the onset of UVR-driven melanoma, but only provided partial protection”. They “advocate combining it with other sun avoidance strategies, particularly in at-risk individuals with BRAF-mutant naevi [moles]”.

 

Conclusion

This study found that in mice given the BRAF mutation, sunscreen did not prevent them from developing melanoma, although it did delay it and reduce the number of tumours. The mechanism for this appears to include damage to a tumour suppressor gene, TP53, which has previously been implicated in other skin cancers. Sunscreen did not prevent mutations occurring in this gene, but did reduce the number of mutations.

The study's authors accept that sunscreen protects against squamous cell carcinoma – a type of skin cancer – but that there was uncertainty around its ability to protect against malignant melanoma, a second type of skin cancer. This study indicated that sunscreen did reduce the risk of the mice developing melanoma, but the protection was not complete. These preliminary findings in mice will need to be confirmed in humans for the results to be more credible and reliable.

These results are applicable only to those with an existing mutation in the BRAF gene. Mutations in the BRAF gene can be inherited, but these are not thought to be linked to skin cancers. Acquired mutations in the BRAF gene do increase the risk of melanoma, and can be present in moles; people with these mutations have a heightened risk of skin cancer. The complication is that UV light might itself cause this mutation, setting off a cycle of cell and DNA damage leading to cancer. This means that overexposure to the sun increases your risk of skin cancer whether you have the mutation or not.

People with known risk factors for melanoma should use high-factor sunscreen in combination with other preventative methods, such as wearing appropriate clothing and staying in the shade when the sun is at its hottest (between 11am and 3pm). If you are desperate for a tan, fake is the best way to go.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

High-factor sunscreen cannot give complete protection against skin cancer. The Guardian, June 12 2014

Skin cancer: Sunscreen 'not complete protection'. BBC News, June 11 2014

Wearing sunscreen may NOT prevent skin cancer, study claims. Mail Online, June 12 2014

Sunscreen does not protect against deadliest skin cancer. The Daily Telegraph, June 12 2014

Links To Science

Viros A, Sanchez-Laorden B, Pedersen M, et al. Ultraviolet radiation accelerates BRAF-driven melanomagenesis by targeting TP53. Nature. Published online June 11 2014

Categories: Medical News

Making mosquitoes male may manage malaria

Medical News - Wed, 06/11/2014 - 18:19

"Mosquitoes modified to only give birth to males in bid to wipe out malaria," The Daily Telegraph reports after new research has found an innovative way of tackling the global problem of malaria.

The technique used in this latest research is both brutal and elegant. Female mosquitoes, which spread malaria to humans through their bite, were genetically modified so that their offspring were overwhelmingly (95%) male. This male-only trait was inherited and repeated with future generations, and has the potential to wipe the species out.

It is not yet known if the genetically modified mosquitoes are able to compete with wild mosquitoes in their natural environment, as the studies have so far only been conducted in cages in a laboratory.

If the mosquitoes can have an effect in the wild, in the short-term this could reduce the spread of malaria by cutting the number of female mosquitoes. In the long-term, the species could potentially be completely eliminated.

Future studies would have to ensure that wiping out the type of mosquito that carries malaria does not upset the ecosystem and cause more problems.

A famous example of this type of ecological upset is the introduction of cane toads in Australia to manage the beetle population. The toads proved highly adaptive to the environment and are now a major pest.

 

Where did the story come from?

The study was carried out by researchers from Imperial College London, the University of Perugia in Italy, and the Fred Hutchinson Cancer Research Center in the US.

It was funded by the US National Institutes for Health and the European Research Council.

The study was published in the peer-reviewed medical journal, Nature Communications. It is open access, so it is free to read online.

The UK media's coverage was good, with The Guardian providing expert comments on the study, balanced by a quote from Dr Helen Wallace, director of GeneWatch UK, regarding the potential risks of disrupting the ecosystem.

 

What kind of research was this?

This was a laboratory study of mosquitoes that aimed to find a way of reducing their numbers, as female mosquitoes – which bite humans – spread malaria.

The number of female mosquitoes in a population and their speed of reproduction are both believed to determine population size. If there were a way to increase the proportion of male offspring, this could therefore reduce the population size.

Previous attempts in caged experiments using naturally occurring mutations – which gave a higher number of male offspring in two types of mosquito called Aedes and Culex – were unsuccessful because the females had a natural resistance to them.

The researchers aimed to genetically modify mosquitoes using a synthetic enzyme, based on the naturally occurring mutations, to damage the X chromosome in males. This would mean that they are potentially only able to pass on the Y chromosome during reproduction, thereby only producing male offspring.

 

What did the research involve?

The researchers investigated how well different enzymes damaged the X chromosome of male mosquitoes in the laboratory, and then performed various experiments using live mosquitoes.

They created an enzyme that targets and damages the X chromosome in males of the mosquito species Anopheles gambiae, which carries malaria.

The researchers ensured that the process damaged only the X chromosome in the male mosquito and left the Y chromosome intact, so that the offspring were not sterile.

If they were sterile, they would not be able to reproduce and the effects of the genetically modified mosquitoes would be limited to one generation.

This would then require an impractically large number of genetically modified mosquitoes to be produced and released for there to be any impact on numbers.

The researchers performed various experiments to see if the genetic mutation would be passed on to future generations.

They tested the level of damage to the X chromosome caused by various enzymes at different temperatures until they found the optimal genetic modification – one that produced mostly male offspring without affecting fertility.

 

What were the basic results?

The offspring of genetically modified male mosquitoes were more than 95% male. The enzyme that damages the X chromosome was inherited by these males, causing them in turn to have mostly male offspring.

In five independent cage experiments, introducing three times as many genetically modified males as normal males suppressed the wild-type mosquito population. In four of the five cages, all mosquitoes were eliminated within six generations.

The small fraction of female offspring produced by the genetically modified males went on to have mostly female offspring themselves when fertilised by wild male mosquitoes.

Male offspring had a 50% chance of carrying the genetic modification. When these males were crossed with wild female mosquitoes, however, they were still more likely to produce male offspring.
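
Taken together, these figures can be turned into a toy cage simulation. The sketch below is again our illustration, not the study's model: it assumes each female mates with one random male and leaves three surviving offspring, uses the 95% male bias and 50% trait transmission reported above, and assumes distorter males are topped up to three times the wild male count each generation – a reading of the cage protocol that is ours, not a detail confirmed by the article.

import random

# Toy cage simulation (our illustration, not the study's model).
# Assumptions: random mating; BROOD surviving offspring per female;
# distorter fathers sire 95% sons, half of whom inherit the trait
# (figures from the article); distorter males topped up each
# generation to 3x the wild males (our assumption about the protocol).

MALE_BIAS = 0.95    # share of a distorter father's offspring that are male
TRANSMISSION = 0.5  # chance a son inherits the distorter trait
BROOD = 3           # surviving offspring per female (illustrative)

def step(wild_m, gm_m, females):
    """Advance the cage population by one generation of random mating."""
    males = wild_m + gm_m
    if males == 0 or females == 0:
        return 0, 0, 0  # no possible matings; the population is gone
    new_wild_m = new_gm_m = new_f = 0
    for _ in range(females):
        gm_dad = random.random() < gm_m / males
        for _ in range(BROOD):
            if gm_dad and random.random() < MALE_BIAS:
                # son of a distorter father; may inherit the trait
                if random.random() < TRANSMISSION:
                    new_gm_m += 1
                else:
                    new_wild_m += 1
            elif gm_dad:
                new_f += 1  # the rare daughter of a distorter father
            elif random.random() < 0.5:
                new_wild_m += 1
            else:
                new_f += 1
    return new_wild_m, new_gm_m, new_f

wild_m, gm_m, females = 50, 150, 50  # 3 distorter males per wild male
for gen in range(1, 13):
    wild_m, gm_m, females = step(wild_m, gm_m, females)
    gm_m = max(gm_m, 3 * wild_m)     # assumed top-up release
    print(f"gen {gen:2d}: wild males={wild_m:4d}, females={females:4d}")
    if females == 0:
        print("no females left - the caged population collapses")
        break

With these illustrative numbers, the female count roughly halves each generation, so the simulated cage usually empties within six or so generations – broadly in line with what the researchers observed.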

 

How did the researchers interpret the results?

The researchers concluded that, "Distorter male mosquitoes can efficiently suppress caged wild-type mosquito populations, providing the foundation for a new class of genetic vector control strategies."

However, they acknowledge that, "The robustness of these traits under variable natural conditions remains to be studied."

 

Conclusion

This study found that genetically modifying the X chromosome in male mosquitoes can cause more than 95% of their offspring to be male in caged experiments. The genetic modification is inherited by these offspring, who then produce similarly high proportions of male offspring.

While these results are promising, it is not clear whether the small fraction of female offspring would be enough to eventually reverse the process and create mosquitoes resistant to the effects of the enzyme.

These studies were performed only on the species Anopheles gambiae, which carries malaria. It is not yet known what effect reducing or eliminating this species would have on the population size of other mosquitoes, or on the wider ecosystem.

This would need to be considered carefully before any genetically modified species was released into the environment. Our ecosystem is incredibly complex, so tinkering with it could lead to a range of unexpected and unwanted consequences.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Mosquitoes modified to only give birth to males in bid to wipe out malaria. The Daily Telegraph, June 10 2014

GM mosquito designed to kill off females may limit spread of malaria. The Times, June 11 2014

GM mosquitoes a 'quantum leap' towards tackling malaria. The Guardian, June 10 2014

GM lab mosquitoes may aid malaria fight. BBC News, June 10 2014

Could malaria be wiped out by GM mosquitoes? Scientists find a way to kill off disease-carrying female of the species. Daily Mail, June 10 2014

New GM Mosquito 'Could Help Defeat Malaria'. Sky News, June 10 2014

Links To Science

Galizi R, Doyle LA, Menichelli M, et al. A synthetic sex ratio distortion system for the control of the human malaria mosquito. Nature Communications. Published online June 10 2014

Categories: Medical News