Medical News

Does happiness have a smell and is it contagious?

Medical News - Thu, 04/16/2015 - 17:00

"Humans can smell when other people are happy, researchers discover," The Independent reports; somewhat over-enthusiastically.

In a new study, Dutch researchers investigated whether happiness could be "spread" to others via body odours, through a process known as "chemosignalling".

Nine men provided sweat specimens during three sessions that aimed to make them feel happy, fearful or neutral. Film and TV clips were used to induce these feelings.

Thirty-five female students were then asked to smell the samples and their reactions were captured.

The women were more likely to have a happy facial muscle response if the sample was taken while the men watched happy clips. A fearful response was more likely if the sample was taken in the fear condition. The women seemed able to tell whether sweat had come from men in the happy or fearful condition rather than the neutral condition, but could not distinguish the happy and fearful conditions from each other.

It is not possible from such a small study to say with certainty that any changes were due to the smell.

The hypothesis that emotions could be spread via odours may be plausible to anyone who has been in a sweaty mosh-pit, rave, or the middle-aged equivalent, a post-wedding disco.

But while interesting, this study does not prove that body odours can transmit happy or sad feelings to others.

 

Where did the story come from?

The study was carried out by researchers from Utrecht University in the Netherlands, Koç University in Turkey, the Institute of Psychology in Lisbon and Unilever research institutes in the UK and Netherlands. It was funded by Unilever, the Netherlands Organisation for Scientific Research and the Portuguese Foundation for Science and Technology. (We seriously hope Unilever are not considering bringing any sweat-based products to market).

The study was published in the peer-reviewed medical journal Psychological Science.

The UK media reported the research accurately in terms of the actual story, though it seems some headline writers went out on a limb. For example, The Daily Telegraph’s headline "You can actually smell joy", while a delightful prospect, is unproven.

Also, the media did not explain any of the limitations in the study design.

 

What kind of research was this?

This was an experimental study of the effect of body odours in transferring human emotion from one person to another. Previous research has suggested that negative emotions, especially fear, can be conveyed to others through bodily odours, so-called chemosignals.

Chemosignalling is a recognised phenomenon in some animal species, such as rodents and deer. It is still a matter of debate whether chemosignalling occurs in humans.

The researchers aimed to see whether positive emotions can also be transferred through chemosignals: in essence, whether smelling the sweat of someone in a happy state could induce happiness in the smeller.

 

What did the research involve?

Sweat samples were taken from men during conditions designed to make them feel fearful, happy or neutral. Women were then asked to smell the samples and their emotional reaction was measured by their facial expression and reported emotion. Their level of attention was also tested, as researchers say that "happiness broadens the attentional scope" while fear narrows it.

Nine healthy Caucasian men of average age 22 provided sweat samples. The samples were collected using armpit pads during three separate sessions, each one week apart.

In the first session the researchers tried to induce fear in the men by showing them nine film clips.

The second session aimed to make the men feel happy, and included a clip of "The Bare Necessities" from The Jungle Book and the opera scene from The Intouchables (a "feelgood" film about the growing friendship between a disabled man and an ex-prisoner).

The final session involved neutral TV clips such as weather reports. The men washed their armpits before the sessions commenced and the pads were frozen after the sessions.

The men were asked to abstain from the following activities for two days before each session to avoid "contamination" of the sweat samples:

  • drinking alcohol
  • sexual activity
  • eating garlic or onions
  • excessive exercise

Whether the sessions induced the desired emotional effect in the men was assessed using a Chinese symbol task and a questionnaire. The Chinese symbol task involves looking at Chinese symbols and rating each one as more or less pleasant than the average Chinese character. The task is meant to give an indication of the viewer's emotional state: people in a happier mood tend to rate the characters as more pleasant.

The questionnaire asked the men to rate how angry, fearful, happy, sad, disgusted, neutral, surprised, calm or amused they felt, each on a scale of one (not at all) to seven (very much). The men were paid 50 euros for participating.

The sweat pads were thawed, cut up and placed in vials to create happy, neutral or fearful samples. Each sample type was placed under the nose of 35 female students. Their facial expressions in the five seconds after smelling the vials were captured using electromyographic (EMG) pads, devices that record the electrical activity produced by muscles (for example, showing whether the women smiled or grimaced).

The students also completed the Chinese symbol task and other tests to measure their level of attention while smelling each vial.

After all vials had been smelled, the women were asked to rate them for how pleasant and how intense they found them. They were also asked to say whether they thought the samples came from happy, fearful or neutral individuals. They were paid 12 euros for participating.

All the men and women recruited were heterosexual, to try to standardise the chemosignals emitted by the men and the responses from the women.

 

What were the basic results?

The combined test results for the men suggested that the happiness condition induced mainly positive feelings and the fear condition mainly negative feelings:

  • the men reported feeling happier and more amused in the happy condition
  • feelings of fear and disgust were higher in the fear condition
  • the men had lower levels of arousal in the neutral condition

In the females, a happy facial muscle EMG response was more likely if the male sample was taken in a happy condition. If the sample was taken in the fear condition, the EMG was more likely to show a fear response in the women. The women performed better in the tests measuring wider attention ability when they smelled sweat provided in the happy condition. The sample condition had no effect on the Chinese symbol task or the reported odour intensity. Women could tell if the sweat had come from men in the happy or fearful condition compared to the neutral condition.

 

How did the researchers interpret the results?

The researchers concluded that "exposure to sweat from happy senders elicited a happier facial expression than did sweat from fearful or neutral senders". They say: "humans appear to produce different chemosignals when experiencing fear (negative affect) than when experiencing happiness (positive affect)".

 

Conclusion

The findings from this small experimental study suggest that smelling sweat produced during different emotional states can influence people’s feelings.

However, the study has many limitations and cannot prove this theory. It only looked at sweat samples from nine men, and all of the testers were female students. The researchers say this was deliberate, because men sweat more and women have a better sense of smell and greater sensitivity to emotional signals. Nevertheless, this means we do not know if similar results would be found for men smelling female sweat, or within the same sex.

We also don't know whether results would be similar if the women had been with the men at the time, smelling the sweat directly from their bodies rather than from a vial placed under the nose.

The study aimed to assess the feelings induced by the smell through facial muscle changes, reported mood and attention. It is not possible from such a study to say with any certainty that any changes were due to the smell.

Other confounding factors could have caused the effects.

In real-life situations, where people are together and more than just smell is involved, emotional responses are due to a combination of thoughts, feelings, environmental factors and all of the senses.    

While interesting, this study does not prove that body odours can transmit happy or sad feelings to others.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Humans can smell when other people are happy, researchers discover. The Independent, April 16 2015

Why happiness is infectious: you can actually smell joy. The Daily Telegraph, April 15 2015

Links To Science

De Groot JHB, Smeets MAM, Rowson PJ, et al. A Sniff of Happiness. Psychological Science. Published online April 13 2015

Categories: Medical News

Middle age 'starts at 60' claims media

Medical News - Thu, 04/16/2015 - 14:45

“Middle age begins at 60, say researchers,” The Times reports. A new population modelling study estimates that due to increased lifespan, what was once regarded as elderly should be seen as middle-aged, and this trend will continue into the future.

Traditionally, medical professionals, particularly epidemiologists, regarded 65 as the age at which somebody becomes elderly. This was based on the expectation that they probably only had a few years left to live.

As this study argues, however, this expectation is no longer valid.

Improvements in life expectancy and health mean that categorising someone as old because they've turned 65 no longer makes sense.

Instead, they suggest looking at how long a person may have left to live, based on average life expectancy, which in the UK is currently around 79 years for men and 82 for women (this is expected to rise in the future).

This means that people in their late 60s with a life expectancy of 10 to 15 years would not count as old, and the proportion of the population considered old would be smaller.

While healthy living may contribute to longer lifespans, the study doesn't suggest that we hit middle age later. Using the new definitions, middle age lasts longer, with old age postponed to our last decade-and-a-half of life.

 

Where did the story come from?

The study was carried out by researchers from Stony Brook University in the US and the International Institute for Applied Systems Analysis in Austria. It was funded by the European Research Council.

The study was published in the peer-reviewed medical journal PLOS One, which is an open-access journal, meaning that it can be read for free online.

The media focused on the comments made by the researchers to explain why they had done the study, rather than the content of the research paper itself, with much discussion of how people now stay healthier for longer. The Times' headline said that middle age now starts at 60, which is not claimed anywhere in the study. The Daily Telegraph seems to think that living longer stops you ageing – "baby boomers refuse to grow old" – sadly, this is not the case.

The Mail Online did a better job of explaining the arguments behind the research, although they said that "the proportion of old people actually falls over time" using the new analysis. However, this was not borne out by the figures.

 

What kind of research was this?

This was an analysis of population data using the cohort component method. It involved making different calculations of possible future scenarios from information about the age and sex of European populations. The researchers used assumptions about future birth, death and migration rates, and how these could change over time. The results and conclusions all relate to what happens to ageing at population level, so they can't be used to predict what might happen to individuals.

 

What did the research involve?

Researchers took international population data and calculated what would happen to the proportion of people in a country considered old, and to the median (average) age of the population. They first used conventional measures, then their own new measures. The new measures are designed to take into account the fact that older people now, and in the future, are likely to be healthier, with a longer life expectancy, and are less dependent on others than they used to be. The researchers wanted to see what effect these new measures would have on how we think about the age of a population.

Researchers based their calculations on information from the European Demographic Data Sheet 2014, which includes statistics about the populations of European countries. Conventional measures of old age and median age are based on chronological age in years, with 65 often taken as the point at which someone is classed as old. Because life expectancy is rising, by this measure, the proportion of the population classed as old will go up over time, and will rise faster as life expectancy improves.

However, people aged 65 and over may be fit, independent and working, so this measure may not be useful for governments wanting to plan future pension provision or health and care costs.

The researchers call their new measure "prospective age". They say that people should only be considered old when their remaining life expectancy falls below 15 years, because it is in the last remaining years of life that people are most likely to be dependent and to have health problems.

Life expectancy varies between countries, because it is calculated from the average age of death of men and women in each country. It usually rises over time, as medicine and healthcare improve.

They also looked at median age, the age that splits the population into two halves of equal size. As people live longer, the median age increases. However, the researchers argue, this does not take changing life expectancy into account. Instead, they calculate prospective median age, which reflects how long people have left to live, not just how long they have already lived.

Prospective median age is the age, in a fixed reference year, at which remaining life expectancy is the same as the remaining life expectancy at the median age in the year being studied. Again, this changes over time.
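
To make the contrast between the two definitions of old age concrete, here is a minimal sketch in Python using made-up illustrative numbers (not the study's data). It applies the conventional rule (old means aged 65 or over) and the prospective rule described above (old means fewer than 15 years of remaining life expectancy) to the same hypothetical population:

```python
# A minimal sketch (made-up illustrative numbers, not the study's data)
# contrasting the conventional old-age rule (aged 65 or over) with the
# prospective rule (fewer than 15 years of remaining life expectancy).

# Hypothetical remaining life expectancy, in years, by age band
remaining_life_expectancy = {60: 24, 65: 20, 70: 16, 75: 12, 80: 9, 85: 6}

# Hypothetical population counts by age band (thousands);
# the key 0 stands for everyone under 60, lumped together
population = {0: 6000, 60: 900, 65: 800, 70: 700, 75: 500, 80: 300, 85: 150}

def share_old(population, is_old):
    """Share of the total population counted as old under a given rule."""
    total = sum(population.values())
    old = sum(count for age, count in population.items() if is_old(age))
    return old / total

# Conventional rule: old means aged 65 or over
conventional = share_old(population, lambda age: age >= 65)

# Prospective rule: old means fewer than 15 years of expected life remaining
prospective = share_old(
    population, lambda age: remaining_life_expectancy.get(age, 99) < 15
)

print(f"Old under the 65-and-over rule:        {conventional:.1%}")  # 26.2%
print(f"Old under the 15-years-remaining rule: {prospective:.1%}")   # 10.2%
```

Because the prospective threshold moves upwards as life expectancy rises, the share of the population counted as old is smaller than under the fixed 65-and-over cut-off, which is the pattern seen in the study's results for Germany below.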

The researchers compared the conventional measures and the prospective measures of the percentage of the German population considered old in 2013, 2030 and 2050, under three scenarios:

  • one in which life expectancy did not increase
  • one in which it increased by 0.7 years per decade
  • one in which it increased by 1.4 years per decade

The European Demographic Data Sheet assumes a 1.4 years per decade increase. The researchers also calculated the median age and the prospective median age of the German population under those three scenarios.

 

What were the basic results?

The proportion of people considered old in the future would be smaller, based on the researchers' prospective age measures, compared to current measures based on chronological age.

Using standard measures, the proportion of the German population considered old would rise from 20.7% in 2013 to 27.8% in 2050 with no increase in life expectancy, or to 33% with the predicted life expectancy increase. However, using the prospective definition of old age (a remaining life expectancy of 15 years or less), the proportion considered old would be 14.8% in 2013, and in 2050 would be 20.5% with no increase in life expectancy or 19.7% with the predicted increase.

The conventional median age of the German population would rise from 46.5 years in 2013 to 49.3 years in 2050 with no increase in life expectancy, or to 52.6 years with the predicted improvements. Using prospective median age, which takes time left to live into account, it would actually fall to 45.6 years by 2050 with the predicted improvements in life expectancy.

 

How did the researchers interpret the results?

The researchers say their results demonstrate that conventional measures of population ageing are "incomplete" because they do not take into account rises in life expectancy and what this means for people's lifestyles. In their measures, the old age threshold changes over time as life expectancy changes.

They say their prospective measures show that "faster increases in life expectancy lead to lower population ageing". In other words, although people live longer, they don't hit the threshold of being considered old as soon – so the population as a whole is middle-aged for longer.

They admit that some of the thresholds chosen for their study are arbitrary. For example, they could have used 60 for the conventional old age threshold, or used a prospective old age threshold of 10 remaining years of life. They say that the "major trends" would have been the same if they had done that, although they do not show this data.

 

Conclusion

This study is an interesting analysis of population data, which shows how looking at figures from a different perspective can change our view. We are used to hearing about "ageing Britain" and how the increasing numbers of older people could be a drain on the country's resources. This study considers whether our definitions of old age are too rigid and need to be revisited.

In the paper, the researchers focus on results for Germany, but they have done calculations for 40 European countries, including the UK. This shows that the proportion of people in the UK aged 65 or over, given expected improvements in life expectancy, would rise from 17.2% in 2013 to 24.9% in 2050. However, the proportion in the last 15 years of their life would rise from 10.9% in 2013 to 13.7%. That still represents a large and increased proportion of the population considered old.

While it’s true that, on average, people are living longer, healthier lives than in the past, the study can only make predictions based on assumptions that may or may not turn out to be correct. The paper did not go into those assumptions, so we don't know whether, for example, they factored in the possible impact of being unable to treat infections because of rising antibiotic resistance, or the increased numbers of people with diabetes due to obesity.

Studies like these make for interesting headlines and give governments a new way of thinking about how to plan for our ageing population. However, they are no predictor of what will happen to any of us on an individual basis as we get older.

While there is no guarantee of your future lifespan, you can try to live longer by reducing your risks of getting some of the most common causes of premature death:

Read about reducing your risk of premature death.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Middle age begins at 60, say researchers. The Times, April 16 2015

Why 60 is the new middle age: Our longer, healthier lives means we aren't classed as elderly until at least 70. Mail Online, April 16 2015

Sixty is the new 40: Healthy living means we now hit middle age later. Daily Mirror, April 15 2015

Middle age now lasts until 74 as baby boomers refuse to grow old. The Daily Telegraph, April 15 2015

Links To Science

Sanderson WC, Scherbov S. Faster Increases in Human Life Expectancy Could Lead to Slower Population Aging. PLOS One. Published online April 15 2015

Categories: Medical News

DNA changes in sperm may help explain autism

Medical News - Wed, 04/15/2015 - 15:30

"DNA changes could explain why autism runs in families, according to study," The Independent reports. Research suggests a set of changes in a father's DNA – known as methylation – is linked to autism spectrum disorder (ASD) in their offspring.

Methylation is a chemical process that can influence the effect genes have on the body (gene expression), essentially switching certain genes off. It changes how DNA is read rather than the DNA sequence itself, and its effects can be helpful or harmful. These types of changes are known as epigenetic changes.

In this small study of 44 men and their offspring, researchers scanned for epigenetic changes at 450,000 points on the DNA molecule. They compared the DNA results with the child's score on an ASD prediction test at one year of age, and then looked for regions of DNA where changes were linked to a higher or lower risk of ASD.

The researchers found 193 areas of DNA in the men's sperm where methylation levels showed a statistically significant association with the children's scores on the ASD risk test.

Researchers hope the study will help them see how epigenetic changes might affect ASD risk. At present, there is no genetic test for ASD and the causes are poorly understood. The study suggests ways ASD risk could be handed down in families without specific gene mutations being involved.

We're still a long way from understanding the causes of ASD, and many cases can occur in children with no family history of the condition, but this study gives researchers new avenues to explore.  

Where did the story come from?

The study was carried out by researchers from Johns Hopkins University and Bloomberg School of Public Health, the Lieber Institute for Brain Development, George Washington University, Kaiser Permanente research division, the University of California and Drexel University.

It was funded by the US National Institutes for Health and the charity Autism Speaks.

The study was published in the peer-reviewed medical journal the International Journal of Epidemiology.

Both The Independent and Mail Online covered the study well, explaining the research and outlining its limitations.  

What kind of research was this?

This was an observational study that compared changes to the chemicals attached to DNA in fathers' sperm (epigenetic changes) with early signs that a baby may go on to develop ASD.

It also looked at the DNA of people who had died to see whether the same changes were associated with having ASD.

This small study investigated links between epigenetic changes and the risk of ASD among children whose parents already had at least one child with the condition. However, it can't tell us whether these DNA changes cause ASD.  

What did the research involve?

Families who already had at least one child with ASD and where the mother was pregnant with another child were enrolled into the study.

The researchers took sperm samples from 44 fathers. Twelve months after the babies were born, the infants were tested for early signs suggesting they might have ASD.

The researchers analysed the sperm samples, comparing the DNA of fathers whose children's test results showed a higher risk of ASD with that of fathers whose children were at lower risk.

They chose to study families with at least one child with ASD, because the condition is thought to run in families. They wanted a group of children who were more likely than the general population to have ASD, so they could do a smaller study and still get useful results.

The babies were tested using the Autism Observation Scale for Infants (AOSI). This test does not show whether or not the babies have ASD. It looks at behaviour such as eye contact, eye tracking, babbling and imitation, and gives scores from 0 to 18, with a higher score meaning the baby is at higher risk of having ASD.

Other studies have found that babies with high AOSI scores at around 12 months are more likely to be diagnosed with ASD when they get older, but the test is not a 100% effective screening tool.

The fathers' sperm was analysed for epigenetic changes – these are changes to the chemicals attached to the DNA molecule, but not the genes themselves. These chemicals can affect how the genes work.

In this case, researchers looked for methylation of DNA. They used two different methods of analysing sperm, so they could check the accuracy of the primary method.

The researchers used a technique called "bump hunting" to search for regions of DNA where the levels of methylation were associated with the AOSI scores of the children.
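
The study itself used genome-wide methylation data and dedicated statistical software. Purely as an illustration of the general idea, the sketch below scans a list of per-site association statistics (invented numbers) for runs of neighbouring sites that all exceed a cutoff and reports them as candidate regions, or "bumps":

```python
# A much-simplified sketch of the "bump hunting" idea (not the authors'
# pipeline): given a per-site association statistic ordered along the genome,
# find runs of adjacent sites that all exceed a cutoff and report them as
# candidate regions. The cutoff, positions and statistics are invented.

def find_bumps(site_positions, site_stats, cutoff=2.0, min_sites=3):
    """Return (start, end) positions of runs of at least min_sites
    consecutive sites whose statistic exceeds the cutoff."""
    bumps, run = [], []
    for pos, stat in zip(site_positions, site_stats):
        if stat > cutoff:
            run.append(pos)
        else:
            if len(run) >= min_sites:
                bumps.append((run[0], run[-1]))
            run = []
    if len(run) >= min_sites:
        bumps.append((run[0], run[-1]))
    return bumps

# Hypothetical positions along a chromosome and association statistics
positions = [100, 150, 200, 250, 300, 350, 400, 450]
stats     = [0.5, 2.4, 2.9, 3.1, 0.3, 2.2, 1.1, 0.4]
print(find_bumps(positions, stats))  # [(150, 250)]
```

In the study, the per-site measure reflected how strongly methylation at that site was associated with the children's AOSI scores, and the candidate regions were then assessed for statistical significance.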

Once they had identified the regions, they looked at DNA in samples of brain tissue taken from people after death, some of whom had ASD, to see if they could spot similar patterns. 

What were the basic results?

The researchers found 193 areas of DNA in the men's sperm where the association between methylation levels and the children's AOSI scores was statistically significant. In 73% of these regions, an AOSI score indicating a higher risk of ASD was linked to lower levels of methylation.

Looking at these regions, the researchers found they overlapped genes that were important for the formation and development of nerve cells and cell movement.

They also found some – but not all – of the DNA regions identified as important in sperm analysis could also be associated with having ASD in DNA taken from brain tissue.  

How did the researchers interpret the results?

The researchers say they saw a strong relationship between epigenetic changes and increased chances of having ASD within this group of children. They said the difference in methylation was "quite substantial" and concentrated in areas of DNA associated with nerve cell development.

They point to a region of DNA that contains a group of genes thought to cause Prader-Willi syndrome, a genetic condition that has some similarities to ASD but is much rarer (affecting no more than 1 in every 15,000 children). This was one of the regions strongly associated with epigenetic changes.

The researchers say the results suggest that epigenetic changes to the father's DNA in this region "confer risk of autism spectrum disease among offspring, at least among those with an older affected sibling". 

Conclusion

This study found that epigenetic changes to a father's DNA seem to be linked to an increased chance of his child developing ASD in families where there is already one child with the condition.

ASD tends to run in families, and some studies have identified genes that may increase the chances of developing the condition. However, there is no clear genetic explanation in most cases of ASD. Research like this helps scientists to investigate other ways that the condition could be handed down.

The study raises a lot of questions. It can't tell us what causes the epigenetic changes to the DNA, or how they affect the way DNA works. Also, when the researchers looked at epigenetic changes to DNA in people's brains, they didn't find changes in many of the regions identified in the sperm analysis.

This was a fairly small study, relying on only 44 sperm samples. The researchers themselves say the results need to be confirmed in larger studies. We also can't say whether these results would apply to the general population. They may only be valid for families where one child already has the condition. 

Learning more about the genetics of ASD will hopefully lead to new treatments. This study may offer up one more piece of a very complicated, yet-to-be-solved, puzzle.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

DNA changes could explain why autism runs in families, according to study. The Independent, April 15 2015

Sperm 'may hold clues to autism': Link is found between father's DNA and symptoms. Mail Online, April 15 2015

Links To Science

Feinberg JI, Bakulski KM, Jaffe AE, et al. Paternal sperm DNA methylation associated with early signs of autism risk in an autism-enriched cohort. The International Journal of Epidemiology. Published online April 14 2015

Categories: Medical News

Paracetamol may blunt feelings of pleasure as well as pain

Medical News - Wed, 04/15/2015 - 11:44

"Paracetamol may dull emotions as well as physical pain, new study shows'," The Guardian reports.

The story comes from research testing whether the over-the-counter painkiller paracetamol can blunt not just the feeling of pain, but also emotions.

In two experiments, each involving just over 80 participants, half were given a normal dose of paracetamol while the other half took a placebo pill. They were then asked to view photos commonly used by researchers to test both positive and negative emotional responses. These included, for instance, unpleasant pictures of crying, malnourished children and pleasant images, such as children playing with cats.

The study found that those who had taken paracetamol reported slightly less intense reactions to the photos than those who had taken a placebo pill. They also found the photos less emotionally arousing.

The researchers speculated that paracetamol may affect signalling pathways inside the brain, which may have an effect on mood.

However, far more research is needed before any conclusions can be drawn as to whether the painkiller can dull emotional reactions, particularly to real-life events.

If you are taking paracetamol on a long-term basis due to a chronic pain condition, and feel that you are less emotionally engaged than you used to be, you could discuss alternative treatment options with your GP.

 

Where did the story come from?

The study was carried out by researchers from Ohio State University and was funded by the National Science Foundation Graduate Research Fellowship and the National Center for Advancing Translational Sciences.

The study was published in the peer-reviewed journal Psychological Science.

Most newspapers reported the research accurately, if uncritically, although The Guardian mentioned at the end of its story that the differences between the two groups were not large.

Confusingly for UK readers, the Mail Online’s report used the US generic name for paracetamol, which is acetaminophen, and also the US brand Tylenol. Although these were the names used in the US paper, it is usual practice to use UK generic names when reporting research for a UK audience.

 

What kind of research was this?

This was a randomised controlled trial (RCT) to test whether taking paracetamol can blunt emotional reactions to negative and positive images.

The authors say the drug has recently been shown to blunt people’s reactions to a range of emotionally negative stimuli, in addition to reducing physical pain. For example, they say it has been found to blunt feelings of hurt in social relationships and reduce the discomfort felt in making difficult decisions. They suggest that this may be due to its neurochemical effects on the brain, and the drug may reduce positive reactions as well as negative ones.

 

What did the research involve?

The researchers carried out two studies, involving 82 college students in the first and 85 in the second. The students were randomly assigned to be given either 1,000 milligrams of paracetamol (the maximum single dose) or an identical-looking placebo, both in liquid form. They then waited 60 minutes for the drug to take effect.

Participants then viewed 40 photographs selected from a database (International Affective Picture System) used by researchers to elicit emotional responses. These consisted of 10 extremely unpleasant photos (such as crying, malnourished children), five moderately unpleasant, 10 "neutral" images (such as a cow in a field), five moderately pleasant images and 10 extremely pleasant images (for example, young children playing with cats).

In the first study, after viewing each photo, participants were asked to rate how positive or negative the photo was on a scale of -5 (extremely negative) to +5 (extremely positive). They were then asked to view all 40 images again in a different, random order and asked to rate how much the photo made them feel an emotional reaction, from 0 (little or no emotion) to 10 (an extreme amount of emotion).

In the second study, participants viewed the same 40 images in a different randomised order and made the same evaluation and emotional-reaction judgements as in the first study. Additionally, participants in this second study also reported how much blue they saw in each photo, using an 11-point scale from 0 (the picture has no blue colour) to 10 (the picture is 100% blue). This was to test whether paracetamol blunts people's broader judgements "of magnitude", not just of emotional content.

The researchers then calculated average scores for participants’ evaluations and emotional arousal toward all 40 pictures, and for evaluation and emotional arousal towards neutral, positive and negative images.
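
As a rough illustration of that scoring step, the following sketch (with invented ratings, not the study's data) groups each photo's evaluation rating (-5 to +5) and arousal rating (0 to 10) by image category and averages them:

```python
# A rough illustration (invented ratings, not the study's data) of averaging
# evaluation (-5 to +5) and emotional arousal (0 to 10) ratings by category.

from statistics import mean

# (category, evaluation rating, arousal rating) for a few hypothetical photos
ratings = [
    ("negative", -4, 8), ("negative", -3, 7),
    ("neutral",   0, 1), ("neutral",   1, 2),
    ("positive",  4, 6), ("positive",  3, 5),
]

# Per-category averages
for category in ("negative", "neutral", "positive"):
    evaluations = [e for c, e, a in ratings if c == category]
    arousals = [a for c, e, a in ratings if c == category]
    print(f"{category}: evaluation {mean(evaluations):+.1f}, "
          f"arousal {mean(arousals):.1f}")

# Overall averages across all photos
print(f"overall: evaluation {mean(e for _, e, _ in ratings):+.1f}, "
      f"arousal {mean(a for _, _, a in ratings):.1f}")
```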

 

What were the basic results?

Results in both studies showed that, overall, participants who took paracetamol rated all the photographs less intensely than those in the placebo group.

In other words, they evaluated unpleasant stimuli (anything that triggers a psychological or physical response) less negatively, and pleasant stimuli less positively than those who took a placebo.

They also rated both positive and negative images as less emotionally arousing than those taking a placebo.

There was no difference between the two groups in the rating of the degree of colour in each image.

 

How did the researchers interpret the results?

The researchers conclude that paracetamol reduces the intensity of both negative and positive emotions. "Rather than being labelled as merely a pain reliever, acetaminophen [paracetamol] might be better described as an all-purpose emotion reliever," they argue.

They speculate that the drug elicits neurochemical changes which affect evaluative psychological processes and that it might change sensitivity to emotional stimuli more generally – for example, causing someone to feel less joy at a wedding.

 

Conclusion

This small study found some slight differences in the way a group of people taking paracetamol reacted to a range of images, compared to a group of people taking a placebo.

Though an RCT is the "gold standard" of studies for determining whether a medication causes an effect, both groups need to be evenly matched for a variety of potentially confounding factors.

There is little information about the participants, beyond the fact that they were all students given course credits for taking part in the study. It is not clear whether the groups were matched in terms of age, sex, ethnicity, whether they had children, or indeed whether they liked cats.

The study is interesting, but no conclusions can be drawn as to whether paracetamol can dull emotional reactions to real life events.

While learning more about different chemicals’ effects on the brain and mood could lead to new treatments, this research has no immediately obvious clinical implications.

Therefore, further research is required into the potential side effects of this popular and effective painkiller.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Paracetamol may dull emotions as well as physical pain, new study shows. The Guardian, April 14 2015

Paracetamol kills feelings of pleasure as well as pain. The Daily Telegraph, April 14 2015

Paracetamol can dull positive and negative emotions, study finds. The Independent, April 14 2015

Paracetamol dulls pleasure as well as pain. The Times, April 15 2015

How Tylenol blunts your emotions: Popular painkiller can reduce feelings of sadness AND happiness, claims study. Mail Online, April 14 2015

Links To Science

Durso GRO, Luttrell A, Way BM. Over-the-Counter Relief From Pains and Pleasures Alike - Acetaminophen Blunts Evaluation Sensitivity to Both Negative and Positive Stimuli. Psychological Science. Published online April 10 2015

Categories: Medical News

No proof that bad relationships raise blood pressure

Medical News - Tue, 04/14/2015 - 15:00

"If you have ever blamed your partner for making your blood boil, a new study could be the evidence you need to prove it's true," Mail Online reports. But the association between stress and blood pressure is much less clear-cut than the Mail suggests.

The study involved 1,356 older married couples in the US. They completed two sets of assessments four years apart. The assessments asked questions about their stress levels and marital satisfaction, and also measured their blood pressure. The researchers then looked at how these factors were related to each other. 

The results were quite a mixed bag, which makes it difficult to draw any firm conclusions from them. They generally suggest that husbands had higher blood pressure if their wives were more stressed.

If wives were stressed, their blood pressure was lower if their husbands were also stressed. Poor relationship quality was only detrimental to blood pressure if both partners felt negative about the relationship.

But this study has many limitations, including the difficulty in establishing whether blood pressure changes were definitely seen after stress or relationship problems. We also cannot tell whether a person actually had clinically high blood pressure.

Overall, this study will be of interest to social scientists, but provides no proof that the stress of a bad relationship causes high blood pressure. 

Where did the story come from?

The study was carried out by researchers from the University of Michigan. Data for the study was drawn from the Health and Retirement Study, which is funded by the US National Institute on Aging.

It was published in the Psychological Sciences and Social Sciences series of The Journals of Gerontology.

Mail Online took the results of this study at face value and did not consider its limitations, or explain that there is no proof of cause and effect. 

What kind of research was this?

This was an ongoing cohort study that gathered data on marital status and psychosocial health at one time point, and then looked at whether this was associated with changes in blood pressure over time.

Stress in its various forms has often been thought to have various detrimental effects on health and wellbeing. This study aimed to look at chronic stress associated with a poor marital relationship, and specifically how this was associated with changes in blood pressure.

The researchers expected to see evidence that more stress was linked to higher blood pressure, but also wanted to see if the effects differed between men and women.

The main problem with a study like this is that it can't prove cause and effect, as there are likely to be many other unmeasured factors involved (confounders).

What did the research involve?

The study used participants in the ongoing nationally representative Health and Retirement Study (HRS) in the US, which includes people born before 1954.

Participants are interviewed every two years. In 2006, psychosocial questionnaires were given in face-to-face interviews. These included an assessment of partner relationships and stress. Participants also had body measures taken, including blood pressure.

Chronic stress was assessed by asking participants whether any of seven stressful events had been ongoing for at least 12 months:

  • physical or emotional problems (in a spouse or child)
  • problems with a family member's alcohol or drug use
  • difficulties at work
  • financial strain
  • housing problems
  • problems in a close relationship
  • helping at least one sick, limited, or frail family member or friend on a regular basis

They responded either "no, it didn't happen" or "yes, it did". If they responded "yes", they rated the event as "not upsetting", "somewhat upsetting", or "very upsetting".

They also completed a set of questions specifically looking at relationship quality, including the following questions:

  • How often does your spouse or partner make too many demands on you?
  • How often does he or she criticise you?
  • How often does he or she let you down when you are counting on them?
  • How often does he or she get on your nerves?

This study used data from the repeat assessments taken four years later in 2010 to see if blood pressure and psychosocial factors changed over time, and how they were associated with each other.

The researchers took the potential confounders of age, ethnicity, education, length of marriage and use of blood pressure medication into account. 

What were the basic results?

A total of 1,356 married couples completed the two assessments in 2006 and 2010. The average age was 66 for men and 63 for women, and the couples had been married for an average of 36 years.

Average blood pressure (looking at only the upper systolic figure) was slightly higher for husbands (132 in 2006 and 134 four years later) than for wives (127 to 129).

Just over a third of husbands and just under a third of wives were classified as having high blood pressure at both time points. Blood pressure was shown to significantly increase over time in both partners.

Overall, couples reported low levels of chronic stress and low relationship quality, though wives tended to report more of both of these problems than husbands.

The most common problems were the ongoing health problem of a spouse or child, ongoing financial strain, and helping at least one sick or disabled person.

The researchers also found significant associations between reported chronic stress, gender and blood pressure. Some of the findings included:

  • husbands had higher blood pressure when their wives reported higher stress
  • husbands reporting greater stress had lower blood pressure if their wives reported lower stress
  • wives reporting greater stress had lower blood pressure if their husbands reported more stress

This was interpreted as meaning that husbands appear to be more stressed by their wives' stress than the reverse. Wives' stress, meanwhile, seemed to be "buffered" by more stress in the husband.

Looking specifically at questions on relationship quality, the researchers found that if one partner reported negative relationship quality, their blood pressure was higher if the other partner also reported negative relationship quality.

Blood pressure was lower if the partner reported less negative relationship quality. There were no significant effects by gender.

The researchers interpreted this as meaning that higher levels of negative relationship quality are only detrimental when both partners feel negative about the relationship. 

How did the researchers interpret the results?

The researchers concluded that their findings indicate that in a marriage, "(a) stress and relationship quality directly affect the cardiovascular system, (b) relationship quality moderates the effect of stress, and (c) the [couple] rather than only the individual should be considered when examining marriage and health".

Conclusion

Overall, this study looking at the relationships between reported chronic stress, relationship quality and blood pressure in a group of married couples will be of interest to social researchers. But readers should not read too much into these findings.

Though it is quite plausible that ongoing stress can have a detrimental effect on your health (particularly your mental health), this study does not prove that the stress of a bad relationship affects blood pressure.

This study had many limitations:

  • It only looked at general associations between stress and relationship quality and blood pressure. It doesn't tell us whether psychosocial factors were associated with clinically meaningful changes in blood pressure, such as a person developing high blood pressure and requiring medication.
  • It is difficult to establish a clear temporal relationship by only assessing psychosocial factors and blood pressure at just two time points. For example, we cannot say if a change in blood pressure was caused by the onset of stress or relationship quality problems. 
  • The study was only able to ask fairly general questions about chronic stress and satisfaction in the relationship. These questions are unlikely to be able to capture the true nature of these issues and the extent of the effect this is having on the partner.
  • It has not been able to take into account the complex influence that personality, physical and mental health, and lifestyle factors are likely to be having on any association between stress, marriage quality and health.
  • This was a specific population sample of older married couples from the US who were married for a considerable length of time. The results may not apply to other nationalities, younger people, people married for less time, or people (of any genders) in a committed relationship who are not married.

This study provides no reliable evidence that you can blame your partner for your high blood pressure, as the media suggests.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

High blood pressure? Blame your partner! Chronic stress of a bad relationship can negatively affect your health, experts warn. Mail Online, April 13 2015

Links To Science

Birditt KS, Newton NJ, Cranford JA, Ryan LH. Stress and Negative Relationship Quality among Older Couples: Implications for Blood Pressure. The Journals of Gerontology. Published online April 7 2015

Categories: Medical News

Breath test shows promise in diagnosing stomach cancer

Medical News - Tue, 04/14/2015 - 14:30

"A simple breath test could help predict whether people with gut problems are at high risk of developing stomach cancer," BBC News reports. The test is designed to detect a distinctive pattern of chemicals associated with stomach cancer.

The study involved 484 people with a known diagnosis – 99 who had established stomach cancer and others who had different stages of pre-cancer.

Pre-cancer is when abnormal changes have affected certain cells and these changes could trigger cancer at a later date. Not all cases of pre-cancer will progress to "full-blown" cancer.

Overall, the study found that the breath analyser had fairly high accuracy for distinguishing between established cancer and pre-cancer. However, it was less reliable at distinguishing between the different severities of pre-cancer.

The researchers suggest that this could possibly provide a new method of screening for stomach cancer, allowing a method of surveillance for people with pre-cancer. However, it is far too early to say whether this idea could come to fruition.

The breath test could potentially be of value when combined with other methods in the diagnosis of stomach cancer or pre-cancer. However, further study will need to confirm that the test is reliable and that it gives any additional benefit over standard methods.   

Stomach cancer is fairly uncommon in the UK (with an estimated 7,300 new cases each year) and is not currently screened for. Even if the test were demonstrated to be accurate, many issues would need to be considered before introducing this as a screening test for the general population, including the cost-effectiveness and other risks and benefits.

 

Where did the story come from?

The study was carried out by researchers from the Israel Institute of Technology and the University of Latvia. It was funded by the European Research Council and the Latvian Council of Science. The study was published in the peer-reviewed medical journal Gut.

The UK media’s reporting of the study was accurate and informative.

 

What kind of research was this?

This was a cross-sectional study, which aimed to look at the use of different types of breath analyser for distinguishing between stomach (gastric) cancer and early pre-cancerous lesions.

As the researchers say, there are well-recognised pre-cancerous changes in stomach cancer, only a minority of which will actually progress to cancer. However, there is currently no non-invasive tool to reliably detect these lesions and stratify their risk of cancerous development. Current diagnostic methods, such as endoscopy (where a camera attached to a tube is passed into the stomach), can be expensive, time-consuming and not particularly pleasant for the patient (although having an endoscopy is usually a pain-free experience).

An emerging approach is the detection of volatile organic compounds (VOCs) in exhaled breath. These are chemicals that develop due to the biological changes associated with both pre-cancer and stomach cancer.

The potential benefits are that it is non-invasive, pain-free and does not have any side effects.

The researchers propose an approach of possibly distinguishing between and classifying different pre-cancerous lesions by analysing breath samples.

 

What did the research involve?

The research involved 484 people recruited from the University Hospital in Latvia, all of whom had known diagnostic status. This included 99 who were diagnosed with stomach cancer and 325 who had pre-cancerous conditions. These were graded in risk/severity from 0 to IV on the OLGIM staging system (Operative Link on Gastric Intestinal Metaplasia assessment). This is a validated system that assesses both the extent of abnormal change and potential "aggression" of the pre-cancer.

A further seven had more abnormal cell changes at high risk of developing into cancer (dysplasia). They also included 53 people with stomach ulcers (non-cancerous).

Exhaled breath samples were collected from the participants after they had fasted for 12 hours and refrained from smoking. Two breath samples were collected from each person and analysed using two different methods. The first was gas chromatography linked to mass spectrometry (GCMS), which quantifies the individual VOCs present in each patient group. The second was a nanoarray sensor method, which looked at the overall pattern of VOCs in the exhaled breath rather than quantifying specific VOCs. A nanoarray consists of an array of extremely small sensors that respond to the mixture of chemicals in a sample.

The researchers looked at how reliable the methods were at distinguishing people with gastric cancer from the pre-cancerous and non-cancerous conditions. Analyses were adjusted for various potential confounding factors, including patient age, gender, smoking, alcohol and use of medications to reduce stomach acid production.

 

What were the basic results?

Using the first chemical analysis method (GCMS), the researchers found that of 130 VOCs analysed, the concentrations of eight of them were significantly different between the patient groups. However, no single VOC could reliably distinguish between the groups.

Using the second nanoarray method, the researchers found that the pattern analyser had a high level of accuracy for distinguishing between gastric cancer and the OLGIM stages of pre-cancerous lesion.

For distinguishing people with gastric cancer from those at any pre-cancerous stage, the test had very high specificity (98% – i.e. almost all people without cancer accurately tested as not having cancer).

It had lower sensitivity, at 73% (i.e. the proportion of people with cancer who accurately tested as having cancer).

Looking by specific OLGIM stage, the test was slightly more reliable for distinguishing between people with gastric cancer and early OLGIM stages 0-II (sensitivity 97%, specificity 84%), than it was at distinguishing between people with gastric cancer and later OLGIM stages III-IV (sensitivity 93%, specificity 80%).

The test was much less reliable, however, at distinguishing between the different stages of pre-cancerous lesion. For distinguishing between stomach ulcers and stomach cancer, the specificity and sensitivity were both 87%.
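
For readers unfamiliar with these two measures, here is a minimal sketch of how sensitivity and specificity are calculated, using hypothetical counts chosen to match the headline figures above rather than the study's actual data:

```python
# A minimal sketch (hypothetical counts, not the study's data) of how
# sensitivity and specificity are calculated for a diagnostic test.

def sensitivity(true_positives, false_negatives):
    """Proportion of people WITH the disease whom the test correctly flags."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Proportion of people WITHOUT the disease whom the test correctly clears."""
    return true_negatives / (true_negatives + false_positives)

# Illustrative example: 100 people with cancer, 100 without.
# A test with 73% sensitivity misses 27 of the 100 cancers;
# a test with 98% specificity wrongly flags 2 of the 100 without cancer.
print(f"Sensitivity: {sensitivity(73, 27):.0%}")   # 73%
print(f"Specificity: {specificity(98, 2):.0%}")    # 98%
```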

 

How did the researchers interpret the results?

The researchers say that: "Nanoarray analysis could provide the missing non-invasive screening tool for gastric cancer and related pre-cancerous lesions, as well as for surveillance of the latter."

 

Conclusion

This is a useful proof of concept study that has demonstrated how the measurement of VOCs in exhaled breath may be of use in distinguishing different stages of pre-cancerous change from established stomach cancer. The researchers show that the new nanoarray system that looks at the pattern of VOCs in exhaled breath has high accuracy for distinguishing cancer from pre-cancer. However, it was less reliable at distinguishing between different stages of pre-cancer.

The researchers suggest the nanoarray system has possible benefits in that it is non-invasive, quick, easy to use and inexpensive. They suggest it could potentially provide a new method of screening for stomach cancer and pre-cancer, allowing surveillance of people with pre-cancer who may be at different levels of risk of developing cancer in the future. However, it is too early to say whether this will come to fruition.

So far, this study has only examined the breath analyser in a sample of people with known diagnostic status. It would next need to be tested in samples of people with stomach symptoms and no established diagnosis, to see how accurate it was at indicating the diagnosis. It would also need to show whether it offers any benefits compared to current diagnostic methods.

Stomach cancer is not currently screened for in the UK. Even if further study confirms that this test is reliable, the balance of benefits against risks needs to be carefully considered before introducing any new potential screening test for cancer.

Overall, the research is of value, but further study is needed before it is known whether this could one day be introduced as a screening test for stomach cancer or pre-cancerous changes.

It is more likely that the test would be used to assess patients with symptoms associated with stomach cancer, who would then go on to have further testing for stomach cancer.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Breath test 'could give clues to stomach cancer risk'. BBC News, April 14 2015

Now a breath test to stop stomach cancer: Screening spots chemical signals that are linked to development of tumours. Mail Online, April 14 2015

Breath test can reveal stomach cancer scientists believe. ITV News, April 14 2015

Breath test to predict risk of stomach cancer developed by scientists. The Daily Telegraph, April 13 2015

New breath test could be used to detect stomach cancer. The Independent, April 14 2015

Breath sample predicts stomach cancer. The Times, April 14 2015

Simple breath test could soon screen for stomach cancer following breakthrough. Daily Express, April 14 2015

Links To Science

Amal H, Leja M, Funka K, et al. Detection of precancerous gastric lesions and gastric cancer through exhaled breath. Gut. Published online April 13 2015

Categories: Medical News

Can a facelift make you more likeable?

Medical News - Mon, 04/13/2015 - 15:00

"Having plastic surgery can make you more likeable," the Mail Online reports. It says cosmetic facial surgery not only makes you look younger, but could also improve what people think of your character. As the Mail Online reports, women who received surgery "were rated as more attractive, feminine, and trustworthy".

This headline is based on a study carried out by plastic surgeons, which asked volunteers to rate the before and after photos of 30 women who had facial plastic surgery to make them look younger.

It found that, on average, the post-surgery photos were rated slightly, but significantly, better for femininity, attractiveness and four personality traits, including likeability (but not trustworthiness).

However, this study has a number of limitations, which means its results are not conclusive. For example, the study was relatively small. The results also may not apply to all people who have had facial surgery, or be in line with the opinions of all people who viewed the before and after results.

In addition, the differences in scores were relatively small – between 0.36 and 0.39 on a seven-point scale. It's unclear whether this would have any real-life impact on people's interaction with the women if they saw them in person. A much larger study is needed to confirm these findings.

Many would argue that resorting to cosmetic surgery to boost your perceived likeability by a small amount is a drastic step. If you are considering plastic surgery, you should think carefully about the reasons why you want it and discuss your plans with your GP first.

Where did the story come from?

The study was carried out by researchers from Georgetown University Hospital and other surgical and research centres in the US.

No sources of funding were reported, and the authors reported no conflicts of interest. However, two of the study authors performed the facial rejuvenation surgeries on the women.

The study was published in the peer-reviewed medical journal, JAMA Facial Plastic Surgery.

The Mail Online does not point out any of this study's limitations. Its headline suggests that perceived trustworthiness was improved after surgery. But this difference was not statistically significant, meaning that we cannot confidently rule out this result occurring by chance.

What kind of research was this?

This was a cross-sectional study looking at whether people's perceptions of women's personalities changed after they had facial rejuvenation surgery.

While this type of plastic surgery is focused on making women look younger, the researchers wanted to see if people also changed their judgements about the women's personalities based on their photos alone.

This study design seems appropriate to the question, though it has many limitations in terms of the way it was applied, including the small sample size.    

What did the research involve?

The researchers used before and after photos of 30 white women who had undergone facial rejuvenation surgery. They split these photos into six groups, each with five pre-surgery and five post-surgery photos (not of the same women).

They asked volunteers to rate the photos for their views on the women's femininity, attractiveness and six personality traits. The researchers then assessed how women scored based on their post-surgery compared with their pre-surgery photos.

The women whose photos were used had surgery between 2009 and 2013, including procedures such as:

  • facelift
  • eyelid surgery (to remove loose skin above the eyes or bags under the eyes)
  • eyebrow lift
  • neck lift
  • chin implant

To be included, the women's photos had to show well-matched, neutral facial expressions. The women had given permission for their photos to be used for research purposes.

The volunteers who rated the photos online did not know what the aim of the study was. Each set of photos was shown to at least 50 volunteers, and at least 24 responses were received for each set.

The volunteers were asked to rate the women on how much they thought they had the following personality traits on a seven-point scale, ranging from "strongly disagree" to "strongly agree", based on facial photos only:

  • aggressiveness
  • extroversion
  • likeability
  • trustworthiness
  • risk-seeking
  • social skills

The volunteers were not shown the same woman before and after surgery, to avoid direct comparisons.

Doctors, nurses or other healthcare workers with experience of facial analysis or facial plastic surgery were not allowed to take part.

The researchers compared the average scores for the pre- and post-surgery photos for each woman individually and overall. They also assessed the women according to what type of surgery they had. 

What were the basic results?

Overall, the researchers found the women's post-surgery photos scored better than their pre-surgery photos on the seven-point scale for:

  • likeability – post-surgery photos scored 0.36 points higher on average
  • social skills – post-surgery photos scored 0.38 points higher on average
  • attractiveness – post-surgery photos scored 0.36 points higher on average
  • femininity – post-surgery photos scored 0.39 points higher on average

There were no statistically significant differences in:

  • trustworthiness
  • aggressiveness
  • extroversion
  • risk-seeking

When looking at individual surgeries, the only two procedures associated with significant changes in scores were facelift (22 women) and lower eyelid surgery (13 women).

The researchers did not find differences in results by women's age, pre-surgery attractiveness scores, number of surgical procedures, or operating surgeon. 

How did the researchers interpret the results?

The researchers concluded that, "Facial plastic surgery changes the perception of patients by those around them."

They say that although the surgery is generally aimed at making people look younger, the study found it also affected people's views on a woman's likeability, social skills, attractiveness, and femininity. 

Conclusion

This study suggests that people's perceptions of women's femininity, attractiveness and certain personality traits can improve after they receive facial surgery that aims to make them look younger.

However, there are a number of points to bear in mind:

  • The study was relatively small, assessing only 30 women (average age not reported) and only up to 50 people rating each set of photos. The women were also all white and operated on by the same two surgeons. The results may not be applicable to all people who have these kinds of surgeries or to all people viewing the results.
  • It was not clear how many women's photos were assessed for inclusion, or whether the person selecting which photos to use knew the purpose of the study. Ideally, they would have been blinded to the purpose of the study so this could not influence their selection, either consciously or subconsciously.
  • All patients reportedly had to agree to have their photos used, but it was unclear whether this meant every patient operated on, or just those who had their photos used in the study. If they were asked after surgery, women whose surgeries had a good result may have been more likely to allow their photos to be used.
  • Results may depend on how much younger the woman looks or how natural the results look. Ideally, researchers also would have assessed people's perceptions of the women's ages and whether they had facial surgery or looked natural, and how these factors affected personality assessment. In the three "after" photos shown in the research paper, the women look relatively natural, without obvious signs of having had facial surgery.
  • The researchers carried out a lot of statistical tests and there is a chance that some of them yielded significant results just by chance.
  • It was unclear exactly how many women had each surgery, and therefore whether the analyses by type of surgery had enough "power" to detect differences between the groups. Some women had multiple surgeries, making it difficult to separate their effects.
  • The differences seen in the scores were relatively small – between 0.36 and 0.39 on a seven-point scale. It is unclear whether a difference of this size would have any real-life impact on people's interaction with the women, or whether they would express similar views if they saw the women in person.
  • The photos shown as examples in the research paper were not identical in terms of what the women were wearing (clothes or make-up) – ideally, these would have been standardised.

Overall, this small study gives some indication that people may judge photographs of women who have had facial surgery differently in terms of attractiveness, femininity and personality, but it is not conclusive. A much larger study is needed to confirm this.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Having plastic surgery can make you more LIKEABLE: Women who went under the knife were rated as more attractive, feminine and trustworthy. Mail Online, April 10 2015

Links To Science

Reilly MJ, Tomsic JA, Fernandez SJ, et al. Effect of Facial Rejuvenation Surgery on Perceived Attractiveness, Femininity, and Personality. JAMA Facial Plastic Surgery. Published online April 9 2015

Categories: Medical News

How dogs could sniff out prostate cancer

Medical News - Mon, 04/13/2015 - 13:31

"Dogs trained to detect prostate cancer with more than 90% accuracy," The Guardian reports. Two trained bomb-sniffing dogs also proved remarkably successful in detecting compounds associated with prostate cancer in urine samples.

This headline is based on research that trained two explosive-detection sniffer dogs to identify the urine samples of men with prostate cancer. The dogs were then tested on urine samples from 362 men with the condition and 540 controls without the condition, most of whom were men.

One dog correctly identified all the samples from men with prostate cancer, and the other dog identified 98.6% of them. The dogs incorrectly identified between one and four percent of the control samples as being from men with prostate cancer ("false positives").

Some of the samples in the study were used for training the dogs and assessing their performance, and ideally the study would be repeated with entirely new samples to confirm the results.

This study suggests dogs can be trained to differentiate between urine samples from men known to have prostate cancer and people without the condition. But further testing should be carried out to test whether the dogs can accurately detect men with prostate cancer who are not yet known to have the disease.

It seems unlikely that dogs would be routinely used on a widespread basis to detect prostate cancer. If researchers can identify the exact chemical(s) the dogs are detecting in urine, they could try to develop methods to detect them.

Read more about potential warning signs for prostate cancer and when you should see your GP.

Where did the story come from?

The study was carried out by researchers from the Humanitas Clinical and Research Center and other centres in Italy. Sources of funding were not reported.

It was published in the peer-reviewed medical publication Journal of Urology on an open-access basis, so it is free to read online or download.

This study has been covered by a range of news outlets, no doubt owing to the appeal of any story involving dogs.

Most news sources illustrated the story with photos of the wrong breeds of dog, but The Independent got it right by showing a German Shepherd. The Daily Mirror suggested that the control group was all male, when this was not the case. 

What kind of research was this?

This was a cross-sectional study that tested whether sniffer dogs could correctly differentiate between urine samples from men known to have or not have prostate cancer.

This type of study is suitable for an early-stage assessment of the promise of a new test. If successful, researchers would need to go on to test samples of men who are currently undergoing assessment for suspected prostate cancer, rather than those already known to have the disease. This would better assess how the dogs would perform in a real-world clinical situation.

The researchers say there is a need for a better way to detect prostate cancer. A blood test for prostate specific antigen (PSA) can indicate whether a man might have prostate cancer.

But PSA is also raised in non-cancerous conditions, such as infection or inflammation, so the test also picks up a lot of men who do not have the disease (false positives).

A raised PSA level alone is not a reliable test for prostate cancer. It needs to be combined with an examination and other invasive tests (for example, a biopsy) to determine whether a man has the condition.

Other studies have suggested sniffer dogs can detect the odour of certain chemicals in the urine of men with prostate cancer.

However, not all tests with dogs have been successful, possibly because of variations in how the dogs were trained and differences in the populations tested. The researchers wanted to test rigorously trained sniffer dogs to see how they would perform. 

What did the research involve?

The researchers trained two sniffer dogs to identify urine samples from men with prostate cancer. They then allowed the dogs to sniff urine samples from men with or without prostate cancer and indicate which ones had the prostate cancer smell.

The urine samples were collected from 362 men with prostate cancer at different stages of the disease, which had been detected in various ways. The control samples were from 418 men and 122 women who were either healthy, or had a different type of cancer or another health problem.

The dogs taking part in the study were two three-year-old female German Shepherd explosive detection dogs called Zoe and Liu. They were trained using a standard procedure to identify prostate cancer samples using 200 urine samples from the cancer group and 230 from the control group.

In the first stages of training, urine samples from healthy women and women with other forms of cancer were used as the control samples to make certain there would be no chance of the sample being from a man with undetected prostate cancer. The next stages of training first used samples from young healthy men, and then older healthy men.

After the training, the researchers tested the dogs on all of the samples from the men with prostate cancer and controls in batches of six random samples. The researcher analysing the results did not know which samples were from men with prostate cancer. 

What were the basic results?

One dog correctly identified all the prostate cancer urine samples and only incorrectly identified seven (1.3%) of the non-prostate cancer samples as coming from men with prostate cancer (false positives).

The other dog correctly identified 98.6% of the prostate cancer urine samples and missed the other 1.4% (five samples). She incorrectly identified 13 (2.4%) of the non-prostate cancer samples as coming from men with prostate cancer. The false positive results all came from men.
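
As a rough check of these percentages – our own arithmetic using the sample sizes given above (362 cancer samples and 540 controls), not a calculation taken from the paper itself:

```python
# Back-of-envelope check of the detection figures quoted above,
# using 362 prostate cancer samples and 540 control samples.
cancer_samples, control_samples = 362, 540

dogs = {"dog 1": {"missed": 0, "false_positives": 7},
        "dog 2": {"missed": 5, "false_positives": 13}}

for name, result in dogs.items():
    sensitivity = (cancer_samples - result["missed"]) / cancer_samples * 100
    false_positive_rate = result["false_positives"] / control_samples * 100
    print(f"{name}: detected {sensitivity:.1f}% of cancer samples, "
          f"{false_positive_rate:.1f}% false positives")
# dog 1: detected 100.0% of cancer samples, 1.3% false positives
# dog 2: detected 98.6% of cancer samples, 2.4% false positives
```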

How did the researchers interpret the results?

The researchers concluded that a trained sniffer dog can identify chemicals specific to prostate cancer in urine with a high level of accuracy.

They say further studies are needed to investigate how well the dog sniffing test would perform in a real-world sample of men undergoing investigation for possible prostate cancer. 

Conclusion

This study found highly trained sniffer dogs are capable of differentiating between urine samples from men known to have prostate cancer and people without the condition. The study's strengths are the rigorous training of the dogs and the large number of samples tested.

The samples tested were all from people already known to either have or not have prostate cancer, and included some samples used in the dogs' training. Ideally, the study would be repeated with completely new samples to confirm the results.

If the results are confirmed, the next step would be to test whether the dogs can accurately detect men with prostate cancer who are not yet known to have the disease. For example, the dogs could be used to assess the urine of men who have raised PSA levels but a negative biopsy who are being monitored to see if they develop the condition.

The researchers noted they could not completely rule out that a small number of men in the control group had undetected prostate cancer. The risk would be low as they were either young or had no family history of prostate cancer, no prostate enlargement detected on digital rectal examination, and low PSA levels.

It seems unlikely that dogs would ever be routinely used on a widespread basis to detect prostate cancer. However, if researchers can identify the exact chemical(s) the dogs are detecting in urine, they could try to develop methods of detecting these chemicals.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Dogs trained to detect prostate cancer with more than 90% accuracy. The Guardian, April 11 2015

Dogs can sniff out prostate cancer almost every time. The Daily Telegraph, April 11 2015

How DOGS can help diagnose prostate cancer: Canines SNIFF out disease with 98% accuracy. Daily Mail, April 11 2015

Prostate cancer detected by dogs with more than 90% accuracy. The Independent, April 11 2015

Dogs can detect prostate cancer with 98 percent reliability, study finds. Metro, April 21 2015

Dogs found to have '98% reliability rate' in sniffing out prostate cancer in men research finds. ITV News, April 11 2015

Amazing cancer-sniffing dogs detect prostate tumours in men 98% of the time. Daily Mirror, April 11 2015

Links To Science

Taverna G, Tidu L, Grizzi F, et al. Olfactory System of Highly Trained Dogs Detects Prostate Cancer in Urine Samples. The Journal of Urology. Published online April 2015

Categories: Medical News

Can plucking hairs stimulate new hair growth?

Medical News - Fri, 04/10/2015 - 15:00

"Plucking hairs 'can make more grow'," BBC News reports, while the Daily Mail went as far as saying scientists have found "a cure for baldness". But before you all reach for your tweezers, this discovery was made in mice, not humans.

The study that prompted the headlines involved looking at hair regeneration in mice. The results showed hair regeneration depended on the density at which hairs were removed. Researchers describe how the hairs seemed to have a "sense and response" process that works around a threshold.

If hair removal – specifically plucking – was below this threshold, there was no biological response to repair and regrow the hair, and the mice remained bald.

However, once the plucking threshold was crossed, the plucked hair regrew – and often more hair regrew than was there originally. This effect is known as quorum sensing.

Quorum sensing is a biological phenomenon where, as the result of a range of different signalling devices, individual parts of a group are aware of the total population of that group. This means they can respond to changes in population values in different ways.

One example is the formation of new ant nests. A worker ant can tell when one part of the new nest is almost full, and will then lead other ants to other parts of the nest.

But we don't know whether the same thing would happen in people. It is certainly too early to claim that plucking hairs can cure baldness, as the Daily Mail headline suggests: that may actually do more harm than good.

Philip Murray of Dundee University, one of the authors of the study, said: "It would be a bit of a leap of faith to expect this to work in bald men without doing more experiments." 

Where did the story come from?

The study was carried out by researchers from the University of Southern California in collaboration with colleagues based in Taiwan, China and Scotland.

It was funded by the US National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), the National Science Council of Taiwan (NSC), the Taipei Veterans General Hospital, and several research grants.

The study reports that an invention number for "Enhance hair growth via plucking" was disclosed to the University of Southern California, which suggests that someone – possibly one of the authors – might have patented the idea, or a patent is pending.

The study was published in the peer-reviewed journal, Cell.

Generally, the media reported the story as if this study directly applies to people before revealing that all the research was done in mice. The Daily Mail even claimed in its headline that this research offered a cure for baldness, which was misleading. 

What kind of research was this?

This was an animal study using mice to explore the biology of hair regeneration. Hair loss, or alopecia, has many different symptoms and causes, and can be an issue for both men and women.

The study involved plucking hair from the backs of mice. This might have some similarity with people, but it's clearly not completely the same.

Researchers tend to use mice as a first step in their research when they have a theory they want to investigate without subjecting humans to experiments.

If the experiments in mice look helpful – say, in curing baldness – the researchers eventually try it in people. But the results in people aren't always the same as results in mice, so we shouldn't let our hopes climb too high. 

What did the research involve?

The study team plucked hairs from the backs of mice and studied the biological reaction. They analysed different skin cell behaviour, what chemical signals were sent to neighbouring cells, and how different repair systems were activated at different times.

They plucked hairs at different densities – that is, plucking hair close together or far apart to see if this affected any of the repair responses. 

What were the basic results?

The researchers found plucking was able to stimulate hairs to grow back, sometimes more than were there originally, but only after a certain threshold. Below this threshold, not enough signals were produced to kick-start the hair regeneration systems.

Mice usually have a hair density of between 45 and 60 hairs per square mm, probably much more than even the hairiest adults. A look at a selection of hair transplant websites suggests natural human hair density varies between 70 and 120 hairs per square cm – less than a tenth of the density of mice.
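
To make that comparison concrete, here is the unit conversion behind it – illustrative arithmetic only, using the figures quoted above:

```python
# The mouse figures are per square millimetre, the human figures per square centimetre.
mouse_per_mm2 = (45, 60)                               # hairs per mm^2
mouse_per_cm2 = tuple(d * 100 for d in mouse_per_mm2)  # 1 cm^2 = 100 mm^2
human_per_cm2 = (70, 120)                              # hairs per cm^2 (hair transplant site figures)

print(mouse_per_cm2)                          # (4500, 6000)
print(human_per_cm2[1] / mouse_per_cm2[0])    # ~0.027 - well under a tenth of mouse density
```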

The researchers found they needed to pluck more than 10 hairs per square mm to stimulate regrowth, otherwise a bald patch remained. If they plucked all of the hairs, the same number grew back.

However, when they plucked 200 hairs from a diameter of 3mm, they found around 450 grew back. The new hairs grew back in the plucked area, but also nearby. When they plucked 200 hairs from a diameter of 5mm, this regenerated 1,300 hairs.
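
As a rough illustration of the plucking densities these figures imply – assuming roughly circular patches, which is our own simplification rather than anything stated in the paper:

```python
import math

def plucking_density(hairs_plucked, patch_diameter_mm):
    """Hairs plucked per square millimetre of a circular patch (illustrative)."""
    patch_area_mm2 = math.pi * (patch_diameter_mm / 2) ** 2
    return hairs_plucked / patch_area_mm2

print(round(plucking_density(200, 3), 1))  # ~28.3 per mm^2 - well above the ~10 per mm^2 threshold
print(round(plucking_density(200, 5), 1))  # ~10.2 per mm^2 - only just above the threshold
```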

Based on these biological observations, the researchers believe each hair follicle was acting as a sensor for a wider skin area to assess the level of damage through hair loss.

Input from each follicle fed into a collective biological circuit, which was able to quantify injury strength. Once a threshold was reached, a regeneration mechanism was activated. This type of system is often referred to as quorum sensing. 

How did the researchers interpret the results?

The researchers made no mention of the human implications of this study. They concluded that the sense and response system they uncovered "is likely to be present in the regeneration of tissue and organs beyond the skin". 

Conclusion

This study showed that hair regeneration in mice depends on the density at which hairs are removed. The researchers describe a sense and response mechanism working around a threshold.

If hair removal, specifically plucking, was below this threshold, there was no biological response to repair and regrow the hair, and the mice remained bald. But once the plucking threshold was crossed, the plucked hair regrew – and often more hair regrew than was there originally.

The main limitation with this research is it did not involve humans, so we don't know whether the same thing would happen in people. It might actually be unlikely.

For example, people with trichotillomania, a condition where they impulsively pull out their hair, end up with patches of hair loss and balding that does not regrow. There may be specific stress-related reasons why this is the case, but it is a reminder not to take these mouse results at face value.

It is certainly too early to advise hair plucking as a cure for baldness, as the Daily Mail's headline suggests. That may do more harm than good. The "cure for baldness" headline is also misguided, as the study was about hair regeneration after recent plucking. The findings are less relevant to those with longer-term hair loss, either in mice or people.

Philip Murray of Dundee University, one of the authors of the study, summed this up in The Guardian when he said: "It would be a bit of a leap of faith to expect this to work in bald men without doing more experiments."

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Plucking hairs 'can make more grow'. BBC News, April 10 2015

Bald truth: plucking hair out can stimulate growth, study finds. The Guardian, April 9 2015

Cure for thinning hair? Scientists find plucking stimulates huge growth spurt. The Daily Telegraph, April 9 2015

At last, a cure for baldness! Scientists discover how to regrow hair (as long as you're prepared to pull it all out first). Daily Mail, April 9 2015

Links To Science

Chen C, Wang L, Plikus MV, et al. Organ-Level Quorum Sensing Directs Regeneration in Hair Stem Cell Populations. Cell. Published online April 9 2015

Categories: Medical News

Middle-age spread 'seems to reduce dementia risk'

Medical News - Fri, 04/10/2015 - 14:30

"Being overweight 'reduces dementia risk'," BBC News reports. The story comes from a cohort study of nearly 2 million UK adults aged over 40. It showed that being overweight or obese was linked to a lower risk of dementia up to 20 years later, compared with people who were a healthy weight. Underweight people were at a higher risk of dementia.

This result is surprising as it contradicts the current consensus of opinion, including the advice on this website, that obesity may be a risk factor for some types of dementia.

In the best scientific tradition, this study raises more questions than it answers. But it is important not to overlook the many serious health risks associated with obesity, such as heart disease and diabetes.

As one of the key authors, Dr Qizilbash, rightly says, the findings are "not an excuse to pile on the pounds or binge on Easter eggs … You can't walk away and think it's OK to be overweight or obese. Even if there is a protective effect, you may not live long enough to get the benefits".

In conclusion, a single study is unlikely to lead to a change in clinical guidelines, but it is likely to prompt further research into the issue. 

Where did the story come from?

The study was carried out by researchers from the London School of Hygiene and Tropical Medicine and OXON Epidemiology, a London/Madrid-based clinical research company.

The study reports no funding for the work and the authors declare no conflicts of interest.

It was published in the peer-reviewed medical journal, The Lancet Diabetes & Endocrinology.

Generally, the media reported the story accurately and responsibly, taking a range of angles. The Daily Telegraph outlined how "a middle age spread may protect against dementia"; The Guardian said "underweight people face significantly higher risk"; while The Independent went with a lack of risk angle, saying that, "being overweight may not increase dementia risk" as previously thought. All accurately reflect the results of the underlying study.

Much of the news outlined how these findings contradict previous research, but may be more reliable because the study was bigger and more robust. Most also cautioned against taking this to mean that being overweight or obese is somehow good for your health, and said the link between dementia and obesity was an open case, needing more research to find out what's going on.

What kind of research was this?

This was a retrospective cohort study looking at body mass index (BMI) and dementia, using information from UK GP records.

BMI is a measure of weight relative to height, calculated by dividing weight in kilograms by the square of height in metres. The main four BMI categories – underweight, healthy weight, overweight and obese – are based on whether your weight is likely to affect your health.

Being in the healthy weight category means your weight is unlikely to affect your health, whereas being overweight or underweight is likely to increase your chance of death and disease. People who are obese are more likely to suffer death and disease than people who are overweight.

This type of study cannot prove cause and effect, but can give us an idea of possible links. One of the disadvantages of using existing GP records is you can only use the information that has already been collected. This might not include all the information you would want to collect as a researcher, such as changes in body weight, physical activity levels, diet, and other lifestyle factors.  

What did the research involve?

The researchers analysed more than 1.9 million UK GP records to see whether BMI was linked to a recorded diagnosis of dementia.

The cohort of people analysed were all over 40, had no previous diagnosis of dementia, and had to have a BMI measure recorded in their GP notes between 1992 and 2007. Everyone else was excluded.

Eligible medical records were reviewed to see if people went on to develop dementia, changed GP practice, or died up to July 2013. The average time elapsed between the single BMI measurement and any of these events was nine years. Some had records spanning 20 years. 

The team split the people into standard BMI categories and calculated their relative risk of developing dementia. The categories were:

  • underweight: BMI less than 20kg/m2
  • healthy weight: BMI 20 to less than 25kg/m2
  • overweight: BMI 25 to 30kg/m2
  • obese: BMI greater than 30kg/m2, actually divided into three subcategories of obesity: class I, II and III
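
As a minimal sketch of how BMI is calculated and mapped onto these categories – the function and the example weights are purely illustrative:

```python
def bmi_category(weight_kg, height_m):
    """Calculate BMI and assign one of this study's four categories (illustrative)."""
    bmi = weight_kg / height_m ** 2   # BMI = weight (kg) divided by height (m) squared
    if bmi < 20:
        return f"BMI {bmi:.1f}: underweight"
    if bmi < 25:
        return f"BMI {bmi:.1f}: healthy weight"
    if bmi <= 30:
        return f"BMI {bmi:.1f}: overweight"
    return f"BMI {bmi:.1f}: obese"

print(bmi_category(58, 1.75))   # BMI 18.9: underweight (using this study's cut-off of 20)
print(bmi_category(70, 1.75))   # BMI 22.9: healthy weight
print(bmi_category(95, 1.75))   # BMI 31.0: obese
```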

The analysis adjusted for a range of known confounders already recorded in the GP records, including:

  • age
  • gender
  • smoking
  • alcohol consumption
  • history of heart attack, stroke or diabetes
  • recent use of statins or drugs to treat high blood pressure

What were the basic results?

Dementia affected 45,507 people, just over 2 out of every 100 taking part (crude prevalence 2.32%).

Compared with people of a healthy weight, underweight people had a 34% higher risk of dementia (rate ratio [RR] 1.34, 95% confidence interval [CI] 1.30 to 1.39).

Compared with people of a healthy weight, overweight people had a 19% lower risk of dementia (RR 0.81, 95% CI 0.79 to 0.83). The incidence of dementia continued to fall marginally for every increasing BMI category, with very obese people (BMI greater than 40kg/m2) having a 33% lower dementia risk than people of a healthy weight (RR 0.67, 95% CI 0.60 to 0.74).
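
For readers unfamiliar with rate ratios, the percentages above are simply the rate ratios re-expressed as changes relative to the healthy weight group – our own arithmetic, not an additional analysis:

```python
# Converting the rate ratios quoted above into percentage changes in risk.
def percent_change(rate_ratio):
    change = (rate_ratio - 1) * 100
    direction = "higher" if change > 0 else "lower"
    return f"RR {rate_ratio}: {abs(change):.0f}% {direction} risk than healthy weight"

print(percent_change(1.34))  # underweight: 34% higher risk
print(percent_change(0.81))  # overweight: 19% lower risk
print(percent_change(0.67))  # BMI over 40: 33% lower risk
```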

These patterns stayed stable throughout two decades of follow-up, after adjustment for potential confounders and allowance for the J-shape association of BMI with mortality. 

How did the researchers interpret the results?

The research team says: "Our study shows a substantial increase in the risk of dementia over two decades in people who are underweight in mid-life and late-life.

"Our findings contradict previous suggestions that obese people in mid-life have a higher subsequent risk of dementia. The reasons for and public health consequences of these findings need further investigation." 

Conclusion

This cohort study of more than 1.9 million UK adults aged over 40 links being overweight or obese to a lower risk of dementia, compared with healthy weight people. Underweight people were at a higher risk of dementia.

The study has many strengths, such as its large size and applicability to the UK. However, the authors note their results buck the trend of other research, which found being overweight or obese was linked to an increased risk. They suggest their study is probably more reliable, as the previous studies were smaller.

They aren't quite sure what this means, and say: "The reasons for and public health consequences of these findings need further investigation."

It's important to realise that this finding doesn't mean that gaining weight will somehow protect you against dementia. Many dietary, environmental and genetic factors are likely to influence both BMI and dementia, so the relationship is complex.

However, we do know that being overweight or obese is bad for your health. The same is true for people who are underweight as they are not getting the nutrients their body needs, which may be one of the reasons why they were found to have an increased risk of dementia in this study.

Dr Liz Coulthard, Consultant Senior Lecturer in Dementia Neurology at the University of Bristol, said: "We do know that obesity carries many other risks, including high blood pressure, heart disease, diabetes and increased rates of some types of cancer. So maintaining a healthy weight is recommended."

However, there are limitations to bear in mind with this study that may have affected the findings to some degree.

Selection bias

First is the possibility of selection bias. Around half (48%) of eligible people did not have a BMI record, so were excluded from the study. A further third (31%) with BMI records were excluded for not having at least 12 months of previous health records. The study team were aware of this, saying: "If BMI is more likely to be measured in people with comorbidities than in healthy people, which might in turn be associated with dementia risk, then some bias is possible." But they went on to say this is unlikely.

Confounders

Residual confounding is also a possibility. The researchers had to use variables collected in the GP records, which didn't cover everything they would have wanted. For example, they adjusted for anti-hypertensive medicines and statins but not for blood pressure and blood lipid values, which, they say, do affect the associations of BMI with heart attack and stroke.

Unavailable data

Other unavailable potential confounders, such as physical activity level, socioeconomic status and ethnic origin, might also have influenced the recorded association between BMI and dementia. We can't say to what extent.

Maintaining a healthy weight is recommended to reduce the risk of heart disease, diabetes and some cancers. This study suggests the benefits of this may not extend to reducing the risk of dementia, but the relationship is likely to be complex and is not yet fully understood.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Being overweight 'reduces dementia risk'. BBC News, April 10 2015

Middle-age spread may protect against dementia. The Daily Telegraph, April 10 2015

Underweight people face significantly higher risk of dementia, study suggests. The Guardian, April 10 2015

Could being skinny in middle age raise your risk of dementia? Underweight people third more likely to develop diseases. Mail Online, April 10 2015

Being overweight may not increase dementia risk and could protect against mental decline. The Independent, April 10 2015

Links To Science

Qizilbash N, Gregson J, Johnson ME, et al. BMI and risk of dementia in two million people over two decades: a retrospective cohort study. The Lancet Diabetes & Endocrinology. Published online April 9 2015

Categories: Medical News

'Marathon men' make better sexual partners, media claims

Medical News - Thu, 04/09/2015 - 17:31

"Marathon runners are the best in bed," is the spurious claim in Metro.

The headline is based on a study that only looked at long-distance runners’ finger ratios – said to be a marker for high testosterone levels – not reported partner sexual satisfaction (or as other sources report, high sperm counts and "reproductive fitness").

The study is based on the concept of what is known as 2D:4D ratio – a measurement of the ratio between the length of the index finger (second digit) and the ring finger (fourth digit).
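
As a toy example of how such a ratio is worked out – the finger lengths here are invented purely for illustration:

```python
# Hypothetical measurements, purely to illustrate how a 2D:4D ratio is calculated.
index_finger_mm = 72.0   # second digit (2D)
ring_finger_mm = 75.5    # fourth digit (4D)

ratio = index_finger_mm / ring_finger_mm
print(round(ratio, 3))   # 0.954 - a relatively longer ring finger gives a ratio below 1,
                         # the "lower", more "masculine" pattern discussed below
```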

Previous research suggests that men with a low 2D:4D ratio (when their ring finger is comparatively longer) may have been exposed to higher levels of testosterone in the womb, which is linked to the potential for reproductive success.

Researchers wanted to see if running prowess in males could be a sign of their evolutionary reproductive potential (as measured by their 2D:4D ratio).

They found that men with more "masculine" digit ratios – i.e. longer ring fingers – did better in the 2013 Robin Hood half marathon in Nottingham than those with the "least masculine" ratios. The same link was found in women, albeit to a lesser degree.

Researchers did not look at whether these more "masculine" men were judged to be more attractive by women.

Where did the story come from?

The study was carried out by researchers from the University of Cambridge and the Institute of Child Health, London. There was no external funding.

The study was published in the peer-reviewed medical journal PLOS ONE. This is an open-access journal, so the study is free to read online.

Reporting of this study by the UK media was almost universally poor, with many sources making claims that were not supported by the study:

  • Mail Online: "Those who run endurance races get more dates and have a higher sex drive" – unproven
  • Metro: "Marathon runners are the best in bed" – unproven
  • The Daily Telegraph: "Good runners are likely to have had ancestors who were excellent hunters… creating a biological advantage for their descendants and passing on the best genes" – unproven

At least the Daily Mirror and Huffington Post tempered their coverage with a "may" and a "probably".

None of the media coverage made it clear that the study was using running ability as a proxy for hunting prowess in pre-agricultural societies and had little or nothing to do with modern relationships.

 

What kind of research was this?

This was an observational study that aimed to test the researchers' theory that physical prowess at endurance running is associated with male reproductive fitness. In this study, the researchers used the digit ratio to predict reproductive success. This is the ratio between index and ring finger, which is a marker of hormone exposure in the womb.

The researchers explain that the high value placed by females on male ability to acquire resources has been well documented, especially in pre-industrial societies. Before agriculture developed, hunting ability may have provided an important way of demonstrating male resourcefulness and seems to be linked to fertility, offspring survivorship and number of mates.

There are several theories that try to explain this link; one is that hunting success is a reliable signal for underlying traits such as athleticism, intelligence or generosity in distributing meat.

In "persistence hunting" – one of the earliest forms of human hunting – catching prey often required running over long distances. This may act as a reliable signal of reproductive potential, say the researchers.

Since increased testosterone exposure in the womb is associated with reproductive success, an association between testosterone and endurance running would make running prowess a reliable signal of male reproductive potential, they argue.

 

What did the research involve?

The researchers recruited to their study 439 men and 103 women taking part in the Robin Hood half marathon in Nottingham in 2013. Participants ranged between the ages of 19 and 35, and were all white (Caucasian). The half marathon, they say, was chosen for its appropriateness to pre-agricultural, hunter-associated running and reflects endurance running ability.

All competitors wore small electronic chips to guarantee accurate race timings.

Photocopies were taken of each athlete's left and right hands on finishing the race, and these were used at a later date to measure the 2D:4D ratios.

The digit ratios were measured using special electronic callipers and were taken twice from each photocopy, to ensure accuracy.

The researchers then analysed the results, looking for an association between the digit ratio and the race time in each sex.

 

What were the basic results?

They found that among the men there was a "significant positive correlation" between right and left hand 2D:4D ratio and marathon time, with higher levels of performance associated with a lower, more "masculine", digit ratio. The correlation strengthened after controlling for age. The same was true of the female sample, but to a lesser degree.

 

How did the researchers interpret the results?

The researchers say their results support the theory that endurance running ability may signal reproductive potential in men through its association with prenatal exposure to testosterone. Running prowess, they suggest, could act as a reliable signal of male reproductive potential.

 

Conclusion

The chain of reasoning in this study – linking distance runners' digit ratios to successful hunting and, in turn, to male reproductive potential – is a little tenuous.

This was an observational study using long-distance runners as a proxy for hunters and digit ratio as a proxy for reproductive potential. The most it can show is an association between the two.

It should also be noted that:

  • the study did not assess any non-runners
  • the runners' ability was measured in only one race
  • many qualities contribute to marathon running success, including muscle strength and mental endurance
  • the study only included Caucasians, so the results may not apply to people of other ethnicities

This is an interesting study, but does not prove that long-distance runners are more fertile or more attractive.

Ways to increase your fertility include stopping smoking, drinking alcohol in moderation and maintaining a healthy weight through healthy eating and exercise.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Marathon runners are the best in bed. Metro, April 8 2015

Why long-distance runners make the best partners. The Daily Telegraph, April 8 2015

Male long-distance runners may find it easier to attract women. Daily Mirror, April 8 2015

How long distance running makes men attractive: Those who run endurance races get more dates and have a higher sex drive. Mail Online, April 8 2015

Male Long-Distance Runners Are (Probably) More Attractive To Women, Says Science. The Huffington Post, April 8 2015

Links To Science

Longman D, Wells JCK, Stock JT. Can Persistence Hunting Signal Male Quality? A Test Considering Digit Ratio in Endurance Athletes. PLOS ONE. Published online April 8 2015

Categories: Medical News

Short people may have an increased risk of heart disease

Medical News - Thu, 04/09/2015 - 15:00

"Shorter people at greater risk of heart disease, new research finds," reports The Guardian.

It reports that a study of nearly 200,000 people has found that for every 2.5 inches (6.35cm) less in height, there is a 13.5% increased risk of coronary heart disease or CHD (also known as coronary artery disease).

This means that someone who is 5ft (1.52m) tall would have roughly a 32% increased risk of CHD compared with someone who is 5ft 6in (1.68m).
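
That 32% figure appears to come from scaling the 13.5% estimate linearly with the height difference – a back-of-envelope calculation on our part, not one taken from the paper:

```python
# Scaling the quoted 13.5% increase per 2.5 inches (6.35cm) linearly
# across the gap between 5ft (152.4cm) and 5ft 6in (167.6cm).
risk_increase_per_step = 13.5          # % higher CHD risk per 6.35cm shorter
height_gap_cm = 167.6 - 152.4          # 15.2cm

steps = height_gap_cm / 6.35
print(round(risk_increase_per_step * steps, 1))  # ~32.3, i.e. roughly a 32% increased risk
```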

Previous research had identified a link between shorter adult height and an increased risk of CHD, but why this might be was not known. It is thought that environmental factors could be involved. For example, a person fed a poor diet in childhood could grow up both shorter than average and less healthy.

This current study attempted to create a clearer picture by looking for genetic variations linked to short stature that were also linked to CHD.

Through sophisticated statistical analysis, they measured the association between shorter height due to these variants and CHD. Oddly, there was no association for women.

It should be noted that this type of study can indicate potential reasons for the associations (such as shortness being associated with high cholesterol) but cannot prove that shorter height directly causes CHD.

While you can put on a pair of "killer" or Cuban heels, there is not much you can do about your genetics. Ways you can reduce your CHD risk include stopping smoking, drinking alcohol in moderation and maintaining a healthy weight through diet and exercise. These steps should help keep your cholesterol and blood pressure at a healthy level.

 

Where did the story come from?

The study was carried out by researchers from the University of Leicester, the University of Cambridge and numerous other institutes and universities across the UK and internationally. It was funded by the British Heart Foundation, the UK National Institute for Health Research, the European Union and the Leducq Foundation.

The study was published in the peer-reviewed The New England Journal of Medicine.

The UK media accurately reported the study. The Guardian helpfully put the results of the study into context with a quote from one of the authors, Sir Nilesh Samani, who said: "The findings are relative, so a tall person who smokes will very likely be at much higher risk of heart disease than somebody who is smaller". He was also quoted by BBC News as saying: "In the context of major risk factors this [short stature] is small – smoking increases the risk by 200-300% – but it is not trivial."

 

What kind of research was this?

This was a case-control study, which compared the genetic make-up of people with and without coronary heart disease (CHD). It specifically looked at genetic variations associated with height, and aimed to see if there was an association between "genetically determined height" and the risk of CHD. The researchers also studied whether genetically determined height was associated with cardiovascular risk factors.

Previous research identified the link between shorter adult height and increased risk of CHD but the exact reason why was not known. This type of study investigates whether genetics could be a potential reason for the association, but cannot prove that shorter height causes CHD, or rule out other factors contributing to the association.

 

What did the research involve?

The researchers compared genetic variations that are associated with height in people with and without CHD.

The researchers used data on 65,066 people who had CHD (cases) and 128,383 people with no history of CHD (controls) that had been collected from a number of different studies, and pooled in a previous meta-analysis. This meta-analysis identified 180 DNA sequence variations that were estimated to account for 10% of the difference in people’s heights.

In the current study they measured the association between each DNA variant and height. They then measured the association between each DNA variant and CHD. From this, they calculated whether there was an association between height determined by each DNA variant and CHD. As this association was very small for each DNA variant, the researchers then combined all of the DNA variant results to obtain an overall association for what they termed "genetically determined height" and risk of CHD. They performed separate analyses for men and women.

The researchers then looked for any associations between the genetically determined height and the following risk factors for CHD:

  • high blood pressure
  • high LDL "bad" cholesterol
  • low HDL "good" cholesterol
  • high triglyceride level (a type of fat)
  • type 2 diabetes
  • increased body mass index (BMI)
  • high blood sugar
  • low insulin sensitivity
  • smoking

 

What were the basic results?

The average age of participants was 57.3 years and the majority of cases were male (73.8%) compared to only half of the controls (49.8%).

Most of the 180 individual genetic variants that have been associated with height had no statistically significant association with the risk of CHD. The researchers had expected this, as each variant is associated with only a very small effect.

When all of the results were combined, for each 6.5cm decrease in "genetically determined height" there was a 13.5% increased risk of CHD (95% confidence interval (CI) 5.4% to 22.1%).

When looking at men and women separately, there was an association in men, but no significant association between the genetically determined height and CHD in women.

Among the risk factors for CHD, the height-related variants were only associated with LDL (bad) cholesterol and high triglyceride levels. They estimated that 19% of the association between shorter height and CHD could be accounted for by high LDL cholesterol and 12% by high triglycerides.

 

How did the researchers interpret the results?

The researchers concluded that using a genetic approach there is "an association between genetically determined shorter height and an increased risk of CHD". They suggest that this may in part be due to "the association between shorter height and an adverse lipid profile [levels of total cholesterol, high-density lipoprotein (HDL) cholesterol, triglycerides, and the calculated low-density lipoprotein (LDL) cholesterol.]".

 

Conclusion

Previous observational studies have suggested a link between shorter height and CHD. What was not clear was the extent to which this might be due to genetic factors or confounding by socioeconomic and lifestyle factors.

The current study aimed to assess the potential role of genetics, and reduce the possibility of socioeconomic factors influencing the results. To do this the researchers calculated the association between "genetically determined height" and CHD, using 180 genetic variations previously found to be associated with height in Europeans. This reduces the influence of socioeconomic factors as genetic variations are present from birth.

They found an association between genetically determined shorter height and increased risk of CHD. They also found that the genetic variants were associated with high LDL cholesterol and triglycerides and this could at least partly account for the increased risk of CHD. It remains unclear exactly how the genetic variants identified influence cholesterol, triglycerides or CHD. It is also not known if the results would be applicable to people not of European descent.

Interestingly, there was no significant association for women. The researchers say this could be because there were too few women with CHD in the analysis.

Though the study design aims to reduce the possibility of confounding, the researchers note that they cannot rule out the possibility of different behaviours in shorter people having an impact on the results. The study also does not completely rule out other factors influencing the overall link between height and CHD.

Whatever your height, you should remain vigilant about the risk of CHD, which remains the leading killer in the UK.

You cannot change your genetics, but factors that you can control to reduce the risk of CHD include stopping smoking, drinking alcohol in moderation and maintaining a healthy weight through diet and exercise. These steps should help keep your cholesterol and blood pressure at a healthy level.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Shorter people at greater risk of heart disease, new research finds. The Guardian, April 8 2015

Short people's 'DNA linked to increased heart risk'. BBC News, April 9 2015

Short people more likely to develop coronary heart disease. The Daily Telegraph, April 8 2015

Short People More Likely To Get Heart Disease. Sky News, April 8 2015

Short people 'at higher risk of heart disease', study finds. ITV News, April 8 2015

Short people at greater risk of heart attack, says study. The Independent, April 8 2015

Links To Science

Nelson CP, Hamby SE, Saleheen D, et al. Genetically Determined Height and Coronary Artery Disease. The New England Journal of Medicine. Published online April 8 2015

Categories: Medical News

No such thing as baby brain, study argues

Medical News - Wed, 04/08/2015 - 18:00

"'Baby brain' is a stereotype and all in the mind,
the Mail Online reports.

The headline is prompted by a US study that aimed to see if "baby brain" (aka "mumnesia") – alleged memory lapses and problems with concentration during pregnancy – is a real phenomenon or just a myth.

The study recruited 21 women in the third trimester of pregnancy. A second group of 21 women who had never been pregnant were recruited to act as a control. The women completed a variety of tests to measure their memory, attention and problem solving ability. The tests were repeated several months later and the two groups compared.

Though the pregnant women reported greater memory difficulties, there were no differences in the results of the tests between the two groups.

The researchers say this shows that pregnancy and childbirth do not affect the ability to "think straight". However, we do not know what the level of performance would have been for the pregnant women before they were pregnant. It is also possible that the small numbers of women in each group could have affected the results. The findings could be completely different with a different sample of women.

This study does not provide conclusive evidence that pregnancy has no effect on memory and attention.

Given that pregnancy can often cause tiredness, it would be surprising if some women didn't have temporary problems with memory and concentration. 

 

Where did the story come from?

The study was carried out by researchers from Brigham Young University in Utah. It was funded by the Brigham Young University College of Family, Home and Social Sciences, and the Women’s Research Institute at Brigham Young University.

The study was published in the peer-reviewed Journal of Clinical and Experimental Neuropsychology.

The Mail Online reported the story reasonably accurately, but did not explain the major limitation of the study's design – that it does not take into account the memory and problem solving abilities of the women before they became pregnant.

 

What kind of research was this?

This was a case-control study that aimed to see if cognitive ability (memory and problem solving) changed in pregnancy and after childbirth. Previous research has found mixed results, with some studies indicating improved cognitive abilities during pregnancy and some showing a reduction or no difference.

This type of study can show associations, but cannot prove that any cognitive differences are due to the pregnancy, as other factors could cause the results.

 

What did the research involve?

The researchers recruited 21 pregnant women and a control group of 21 healthy women who had never been pregnant. The women completed a variety of tests to measure their cognitive ability. The tests were repeated several months later and the two groups compared.

The women were given 10 neuropsychological tests, which measured their memory, attention, language, executive abilities (such as problem solving) and visuospatial skills (the ability to process and interpret visual information about where objects are). They also filled out questionnaires to assess their mood, and levels of anxiety, quality of life, enjoyment and satisfaction.

Each test was conducted when the pregnant women were in their third trimester and repeated between three and six months after giving birth. The non-pregnant women were also tested twice, with a similar time gap between the tests.

Women were excluded from the study if they had a history of:

  • learning disabilities
  • attention deficit hyperactivity disorder (ADHD)
  • psychotic or bipolar disorder
  • epilepsy
  • stroke
  • traumatic brain injury
  • substance abuse/dependence

The results were then analysed during and after pregnancy, and compared to the controls. Further analysis was performed in the pregnancy group, comparing women in their first pregnancy with women who had previously given birth.

 

What were the basic results?

The pregnant women were older on average, with a mean age of 25, compared with 22 for the control group. Eleven of the pregnant women and nine of the controls were students. 

The main results were:

  • No difference between the groups in terms of language ability or memory, though the pregnant women reported worse memory than controls.
  • No difference between tests of attention and visuospatial ability, with higher scores for both groups in the second session of tests.
  • Executive functioning also improved for both groups. For one of the tests, the Trail Making Test, the pregnant women were slower at Part A both during and after pregnancy. Part A measures visual scanning and processing speed by asking the participant to draw a line as quickly as possible between consecutive numbers randomly written on paper. Part B measures scanning and processing speed, but also mental flexibility by requiring the person to join each consecutive number and letter: 1-A-2-B-3-C etc. There was no difference in scores for Part B between the groups.

Pregnant women reported a lower quality of life and were more likely to have depressive symptoms compared to controls. The results were as follows:

  • Six pregnant women had mild symptoms of depression during pregnancy. One of them continued to have mild symptoms after birth. These women performed similarly to the control women in the neuropsychological tests.
  • One woman had moderate symptoms of depression during pregnancy and developed severe symptoms by the second test after birth.
  • No women in the control group had significant symptoms of depression.

There were no differences between women in their first pregnancy compared to women who had previously given birth.

 

How did the researchers interpret the results?

The researchers say their "findings suggest no specific cognitive differences between pregnant/postpartum women and never-pregnant controls". This was despite the pregnant/postpartum women reporting more memory difficulties.

 

Conclusion

The researchers conclude that although the pregnant women reported memory problems, these did not show up on their tests. However, this does not take into account their pre-pregnancy ability. The women may have performed better before they got pregnant, which is why they are now reporting memory problems. None of these women were tested before they got pregnant, which is the major limitation of the study.

The researchers say that because there were a similar number of students in each group, the control group was a good enough representation of how the pregnant women would have performed pre-pregnancy. However, there is wide variation in cognitive ability, even among students. There is no information about cognitive abilities, other than the length of time each group spent in education. This was an average of 16 years for the pregnancy group, compared to 15 for the control group. The range was the same for each group, at 13 to 18 years.

The other limitation of the study is the small number of women in each group, which limits the strength of the results and makes it more likely that they could occur by chance. A different or larger sample of women could give completely different results.

It is unclear why the pregnant women were slower at the Trail Making Test Part A compared to the controls, but not with Part B. It is likely that the small sample size contributed to this anomaly.

The study highlights the importance of recognising low mood and symptoms of depression in pregnant women and in the months after giving birth. Read more about low mood and depression during pregnancy, and low mood and depression after pregnancy.

In conclusion, this study does not provide conclusive evidence that pregnancy has no effect on memory and attention.

Pregnancy can cause tiredness, particularly during the first trimester, and looking after a newborn baby can be exhausting work. So you shouldn't be surprised if you do have the occasional memory lapse or loss of concentration. Dads may not be immune to "baby brain" after the baby is born, either.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

"Baby brain doesn't exist and the condition is all in mum's mind," say scientists. Daily Mirror, April 7 2015

'Baby brain' DOESN'T exist: Tests reveal pregnant women and new mothers suffer no decline - leading scientists to declare the condition is 'all in the mind'. Mail Online, April 8 2015

Links To Science

Logan DM, Hill KR, Jones R, et al. How do memory and attention change with pregnancy and childbirth? A controlled longitudinal examination of neuropsychological functioning in pregnant and postpartum women. Journal of Clinical and Experimental Neuropsychology. Published online May 12 2014

Categories: Medical News

Do diet soft drinks actually make you gain weight?

Medical News - Wed, 04/08/2015 - 14:10

"Is Diet Coke making you fat? People who drink at least one can a day have larger waist measurements," the Mail Online reports. A US study found an association between the daily consumption of diet fizzy drinks and expanded waist size.

This study included a group of older adults aged 65 or over from San Antonio, Texas. Researchers asked participants about their consumption of diet soft drinks and measured their body mass index (BMI) and waist circumference. They then looked at whether this was associated with changes in body measures over the next nine years.

The study found people who drank diet soft drinks every day had a greater increase in waist circumference at later assessments compared with those who never drank them (3.04cm gain versus 0.77cm). Daily drinkers also had a slight gain in BMI (+0.05kg/m2) compared with a minimal loss in non-drinkers (-0.41kg/m2).

The hypothesis that diet drinks can actually make you fatter is not a new one – we covered a similar study back in January 2014. The problem with this field of research is it is very difficult to prove cause and effect. As with this study, people who regularly drink diet drinks may be overweight to start with and they could be drinking diet drinks in an effort to lose weight.

This study will add to the variety of research examining the potential harms or benefits of artificial sweeteners or diet drinks. But it does not prove that drinking diet drinks will make you fat.

If you are trying to lose weight, good old-fashioned tap water is a cheaper, calorie-free alternative to diet drinks. 

Where did the story come from?

The study was carried out by researchers from the University of Texas Health Science Center in the US, and was funded by the US National Institute on Aging, the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Center for Research Resources. The authors declare no conflicts of interest. 

It was published in the peer-reviewed Journal of the American Geriatrics Society.

The Mail Online's coverage of this study seems overly conclusive, suggesting it provides evidence that drinking diet fizzy drinks causes people to become overweight. But this has not been proven, and the Mail did not consider this study's many limitations in their reporting.

It also includes an error in its story, describing the study of 749 people "in which 466 participants survived". In fact, this is the number of people who had body measurement data available for at least one of the follow-up assessments – it reflects retention in the study, not survival.

Furthermore, by stating that "Large waistlines linked to diabetes, stroke, heart attack and cancer", the Mail implies that this study found a higher waist circumference was linked to the development of these diseases. However, health outcomes were not assessed in this study.

And, somewhat unfairly, Diet Coke was singled out as the main culprit. The study actually included any kind and brand of diet fizzy drink. 

What kind of research was this?

This was a prospective cohort study that aimed to look at the link between diet soft drink intake and waist circumference.

The researchers discuss how concerns about high sugar intake over the past few decades have led to an increase in the consumption of artificial sweeteners. But the potential detrimental health effects of sweeteners have often been debated.

Some studies found no evidence of either benefit or harm from sweeteners and diet drinks, while others found links to cardiovascular and metabolic risk factors, such as weight gain, obesity, high blood pressure and diabetes.

This study aimed to examine the effect artificially sweetened diet drinks have on weight changes over time by looking at people taking part in an ongoing cohort study.

The main limitation with this type of study, however, is that it is not able to prove cause and effect, as the relationship is likely to be influenced by various other factors (confounders).

What did the research involve?

This research included a group of older Mexican and European American people taking part in the San Antonio Longitudinal Study of Aging (SALSA). This community-based study aimed to look at cardiovascular risk factors in people who were aged 65 or over at the start of the study (1992-96).

The first follow-up assessments were conducted an average of seven years later (2000-01), with two further follow-ups at 1.5-year intervals (2001-03, then 2003-04). The study included 749 people, with an average follow-up time of 9.4 years.

Assessments included measurements of participants' height, weight, waist circumference, fasting blood glucose levels, physical activity, and presence of diabetes. Dietary questionnaires were given at baseline and included the consumption of diet soft drinks.

People were asked the number of cans or bottles of diet soft drinks they consumed a day, week, month or year, and were categorised into three intake groups: non-users, occasional users (more than zero but less than one a day), and daily users (one or more a day) of diet soft drinks.

The researchers looked at the relationship between diet fizzy drink intake at the start of the study, and changes in BMI and waist circumference from when the study started to each follow-up point. Analyses were adjusted for age, gender, ethnicity, socio-demographics, diabetes, smoking status, and leisure activity.

Despite the large initial cohort size, only 384 people (51%) had data available on soft drink intake at baseline and body measurements at the first and second follow-ups, reducing to 291 (39%) by the third follow-up.  

What were the basic results?

The researchers found people who drank diet drinks at the start of the study also had significantly higher BMIs at the beginning of the study compared with non-users. They also tended to have higher waist circumference compared with non-users, though not significantly so.

The proportion of daily users who were overweight or obese at the start of the study was 88%, compared with 81% of occasional users and 72% of non-users.

Overall, the researchers found that for people who returned for one or more follow-ups, changes in BMI varied according to diet soft drink intake. Non-users experienced a minimal decrease in BMI (average 0.41kg/m2 decrease), as did occasional users (0.11kg/m2 decrease), while daily users had a slight increase (0.05kg/m2 gain).

Changes in waist circumference, meanwhile, were much more notable, with daily diet soft drink users experiencing a gain four times that of non-users. Average waist circumference gains at each interval were 0.77cm for non-users, 1.76cm for occasional users, and 3.04cm for daily users.  

How did the researchers interpret the results?

The researchers concluded that, "In a striking dose-response relationship, increasing diet soda intake was associated with escalating abdominal obesity, a potential pathway for cardiometabolic risk in this ageing population." 

Conclusion

This prospective study found that people who drank diet soft drinks every day experienced greater waist circumference gain over up to nine years of follow-up compared with those who never drank diet drinks (3.04cm gain versus 0.77cm).

They also experienced a minimal gain in BMI (+0.05kg/m2) over follow-up, compared with a minimal loss in non-users of diet drinks (-0.41kg/m2).

However, this study certainly does not prove that diet drinks, and diet drinks alone, are responsible for these small increases in waist circumference and BMI.

People who drank diet drinks tended to have higher BMIs and waist circumferences than non-users to start with. At the start of the study, when diet soft drink consumption was assessed, 88% of those drinking them daily were overweight or obese, compared with 72% of those who never drank them.

Though daily users experienced slightly greater gains in BMI and waist circumference, they also tended to have higher body measurements to start with. It is possible that people with weight concerns consume diet drinks in an effort to manage their weight.

There may be a variety of unhealthy lifestyle behaviours that contributed to the gain in body measures during the study. For example, the researchers adjusted their analyses for leisure-time physical activity, but did not consider food intake, apart from diet drinks, or look at total energy intake.

Overall, it is not possible to say from this analysis that the diet drinks are the cause of the changes in body measures, as various other unmeasured health and lifestyle factors could be having an influence.

Other points to bear in mind with this study are:

  • This was an older age cohort of people above 65, so we don't know how representative the results would be for younger groups.
  • This was a specific sample of people from San Antonio in Texas, and we don't know whether their health, lifestyle and environmental influences may differ from other population groups.
  • Despite the initial sample size being fairly large at 749, data on drink consumption and body measurements was only available for about half of these people. The results may have been different had data been available for the full cohort.
  • We don't know the significance of the small changes in BMI and waist circumference observed.
  • We don't know whether continued daily consumption of diet soft drinks in the longer term would be associated with continuously increasing body measures, or whether this would have direct health effects (such as in terms of cardiovascular disease).
  • The effects observed in this study can't be attributed to specific artificial sweeteners or specific diet soft drink brands.

The researchers' statement that there is a "striking dose-response relationship" between soda consumption and obesity seems overly bold given this study's limitations.

This study does not prove that drinking diet drinks will cause you to become fat. If you are trying to lose weight, we recommend that you ditch the expensive diet drinks and stick to water.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Is Diet Coke making you fat? People who drink at least one can a day have larger waist measurements. Mail Online, April 7 2015

Links To Science

Fowler SPG, Williams K, Hazuda HP. Diet Soda Intake Is Associated with Long-Term Increases in Waist Circumference in a Biethnic Cohort of Older Adults: The San Antonio Longitudinal Study of Aging. Journal of the American Geriatrics Society. Published online March 17 2015

Categories: Medical News

Superbug 'could kill 80,000 people' experts warn

Medical News - Tue, 04/07/2015 - 18:00

"Superflu pandemic is biggest danger to UK apart from a terrorist attack – and could kill 80,000 people," is the warning in The Independent. A briefing produced by experts outlines how antibiotic resistance could pose a significant threat (PDF, 440kb) to public health.

"Up to 80,000 people in Britain could die in a single outbreak of an infection due to a new generation of superbugs," reports The Daily Telegraph – one of many news sources reporting on these estimated figures from the government.

 

Why are superbugs in the news again today?

The news is based on the threat of antimicrobial resistant microbes (sometimes called "superbugs" in the media) described in the government’s 2015 National Risk Register of Civil Emergencies (NRR). This is reported to be the first time the NRR has covered this threat.

 

What is the National Risk Register?

The NRR is an assessment of the risks of civil emergencies facing the UK over the next five years, and is produced every two years. The NRR report is a public-facing version of a classified internal government report called the National Risk Assessment (NRA). Civil emergencies are events or situations which threaten serious damage to human welfare or the environment in the UK, or threaten serious damage to national security.

In producing the report, the government assesses how likely an event is, and what the impact of it might be. The report considers events that have at least a 1 in 20,000 chance of happening in the next five years, and that would require government intervention. The report also covers issues that are longer-term or broader than single events, but which also have the potential to adversely impact society. The threat of antimicrobial resistance (AMR) is one such longer-term issue.

 

What is antimicrobial resistance and why is it a risk?

AMR is a global health threat.

Antimicrobials are drugs used to treat an infectious organism, and include antibiotics (used to treat bacteria), antivirals (for viruses), antifungals (for fungal infections) and antiparasitics (for parasites).

When antimicrobials are no longer effective against infections they were previously effective against, this is called antimicrobial resistance. Regular exposure to antimicrobials prompts the bacteria or other organisms to change and adapt to survive these drugs.

Fewer new antibiotics are now being developed, so we have fewer options, and stronger drugs in our antibiotic armoury have to be used once common infections become resistant. This means we are facing a possible future without effective antibiotics.

 

What could the impact be?

The report states that the cases of infection where AMR poses a problem are "expected to increase markedly over the next 20 years". It estimates that if a widespread outbreak were to happen, around 200,000 people could be affected by a bacterial blood infection resistant to existing drugs, and 80,000 of these people could die. It also says that many deaths could be expected from other forms of resistant infections.

 

What about "superflu"?

The Independent’s headline suggests that it is "superflu" which could kill 80,000, and that it is the "biggest danger to the UK apart from a terrorist attack". The headline appears to conflate two parts of the report. 

The 80,000 figure appears to come from the estimates of the potential impact of a resistant bacterial blood infection reported above, not specifically "superflu". The report does note that flu pandemics would become more serious without effective treatments, but does not give an estimate of how many people antimicrobial resistant pandemic flu could kill.

Flu pandemics (not specifically antimicrobial resistant flu) are also one of the specific risks assessed by the report. They are given a maximum relative impact score of five out of five, which is the same score as catastrophic terrorist attacks.

The report estimates that pandemic flu could infect half of the UK population and lead to between 20,000 and 750,000 additional deaths.

A flu pandemic was estimated as having a relative likelihood of between 1 in 2 and 1 in 20, and was reported to "[continue] to represent the most significant civil emergency risk".

 

How did the report assess the risk of AMR?

The report did not specify how it arrived at the specific AMR impact figures, but it does give its overall methods. The risks are identified by consulting experts both within and outside the government, and the devolved administrations. For each risk, the report selects a "reasonable worst case" scenario – something that would be a serious challenge and could plausibly occur. The likelihood of an event (such as pandemic flu) was based on information such as historical analyses and modelling where possible, along with scientific expertise. The impact score for an event was assessed on a scale of 0 to 5 (0 least and 5 greatest) and averaged across 5 areas (a simple worked illustration follows this list):

  • deaths
  • illness or injury
  • social disruption
  • economic harm
  • psychological impact
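
To make the scoring method concrete, here is a minimal illustrative sketch of averaging five area scores into an overall impact score. The sub-scores used are invented for the example and are not taken from the report.

```python
# Illustrative only: the five area scores below are hypothetical examples,
# not figures from the National Risk Register.
impact_areas = {
    "deaths": 5,
    "illness or injury": 4,
    "social disruption": 3,
    "economic harm": 4,
    "psychological impact": 3,
}

# The NRR describes the overall impact score as the average of the five
# area scores, each rated from 0 (least) to 5 (greatest).
overall_impact = sum(impact_areas.values()) / len(impact_areas)
print(f"Overall impact score: {overall_impact:.1f} out of 5")  # 3.8 out of 5
```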

 

What is being done about this threat?

The report notes that AMR is a global problem that needs international action to be tackled. The report describes some of the actions being taken:

  • The government and devolved administrations are working with international partners to get support for joint action internationally.
  • Government departments, the NHS and other partners are working together to implement the UK five-year Antimicrobial Resistance Strategy published in 2013.
  • The impact of actions to reduce the spread of AMR is being measured and reported on by a cross-government high level steering group.
  • There is an ongoing independent review of AMR, chaired by the economist Jim O’Neill. Two reports from this review have already been released. Further reports are expected in 2015, and in 2016 the review will recommend actions to be agreed internationally to deal with AMR.

 

What can you do to help reduce the spread of AMR?

People can help cut antibiotic (or wider antimicrobial) resistance by recognising that many common infections, such as coughs, colds and stomach upsets, are often viral infections that will go away after a short period without treatment (known as "self-limiting" infections). These infections do not need antibiotics, as antibiotics have no effect on viruses.

If you are prescribed an antibiotic (or other antimicrobial), it is also important to make sure you take the full course as prescribed, even if you feel better before you finish the course.

This will reduce the chances of the organisms being exposed to the drug but then surviving, which encourages the development and spread of resistance to the drug.

Taking the course as prescribed will also increase the chances of you getting better. By not taking a full course, you may find that the infection comes back and requires further antibiotic prescriptions, which further increases the chances of resistant organisms developing.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Superflu pandemic is biggest danger to UK apart from a terrorist attack – and could kill 80,000 people. The Independent, April 6 2015

Drug-resistant superbug could kill 80,000 in one outbreak as antibiotics lose their strength. Daily Mirror, April 6 2015

Superbug ‘could kill 80,000 Britons’, according to government report. Metro, April 6 2015

New breed of superbugs which are resistant to antibiotics could kill 80,000 Britons in one outbreak as scientists warn even catching flu could have 'serious' impact. Mail Online, April 6 2015

Tens of thousands of lives threatened by rise in drug-resistant superbugs, experts warn. ITV News, April 6 2015

The FLU could kill you: Shock report warns modern drugs will STOP working within 20 YEARS. Daily Express, April 6 2015

Categories: Medical News

Vigorous exercise 'may help prevent early death'

Medical News - Tue, 04/07/2015 - 15:00

"Short bursts of vigorous exercise helps prevent early death," The Independent reports after an Australian study found vigorous exercise, such as jogging, reduced the risk of premature death.

The study involved adults aged 45 to 75 years old followed up over 6.5 years. Those who did more vigorous activity (as part of their general total moderate to vigorous activity levels) were less likely to die during follow-up than those who did no vigorous activity.

This large study was well designed, and the researchers also tried to take factors into account that they knew could influence the results (confounders).

But, as with all studies, there are some limitations – for example, the researchers only asked about physical activity once and this may have changed over time.

These results also bear out the proven benefits of exercise, regardless of how much of it is vigorous, and support current recommendations for the amount of physical activity people should do.

While doing some vigorous activity may bring some benefits, it is important that people set themselves realistic targets they can safely achieve.  

Where did the story come from?

The study was carried out by researchers from James Cook University and other universities in Australia. It was funded by the Heart Foundation of Australia.

The study was published in the peer-reviewed medical journal JAMA Internal Medicine.

The coverage in the papers is variable. While all the papers are right in saying that vigorous exercise may be beneficial, there is some misreporting. The Daily Telegraph's headline says that, "Swimming, gardening or golf 'not enough to prevent early death'," which is not true.

Gentle swimming and vigorous gardening both fell under "moderate activity", and even those who just did moderate activity had a lower risk of death than those who did no moderate to vigorous activity at all.

The Telegraph also talks about the effects on heart disease and diabetes, but these outcomes were not assessed by this study.

The Daily Express helpfully includes a quote noting that, "There is no question that some exercise is better than nothing. But the more intensive the activity, the less likely people will come back to it, so the question is how do we get people to do some – and then those who do some to do a bit more?"

However, at the end of the story, they then include a video of "chubby guy dancing in Speedos to holiday exercise class" for people's amusement, which is not likely to encourage people to take up exercise.

The Independent refers to "short bursts" of vigorous exercise being beneficial, but the study itself did not assess length of the bursts.

The paper does include a note of caution from one study author, however, who said that, "For those with medical conditions, for older people in general and for those who have never done any vigorous exercise before, it's always important to talk to a doctor first." 

What kind of research was this?

This was a prospective cohort study assessing whether achieving more moderate to vigorous activity through vigorous activity specifically was associated with a reduced risk of death during follow-up.

While we know that physical activity is associated with longer life, it is not clear whether vigorous activity is better than moderate activity.

While a recent systematic review suggested that vigorous activity may reduce the risk of death more than moderate activity, some of the studies included did not take overall activity into account.

This means these studies were not able to rule out that some of the effect of vigorous exercise was because people who did more vigorous activity tended to do more physical activity overall.

The current study wanted to avoid this problem. A prospective cohort study is the best way to assess this question. It's unlikely to be feasible to carry out a randomised controlled trial to successfully answer this question, as it's difficult to get people to agree to stick to a specific exercise pattern for a long time.

But the main limitation of a cohort study is that factors other than the factor of interest (such as overall activity, in this case) could potentially influence the results, so the researchers need to take these into account in their analyses. 

What did the research involve?

The researchers enrolled adults aged 45 and over from New South Wales. At the start of the study, participants were asked how much physical activity they did and how intense this activity was.

They were then followed up over about 6.5 years, and the researchers identified who died in this period.

The researchers then analysed whether the proportion of the total moderate to vigorous physical activity (MVPA) a person did that was vigorous was associated with their risk of death.

The participants were enrolled as part of the 45 and Up study in 2006-09. Potential participants were selected at random from the Australian national medical insurance (Medicare) database, which includes all citizens and permanent residents of the country.

This study did not include people aged over 75, as it was mainly interested in earlier preventable deaths.

Participants filled out a questionnaire at the start of the study on their MVPA in the past week. They were asked how much of this activity was:

  • vigorous – anything that "made you breathe harder or puff and pant", such as jogging, cycling, aerobics or competitive tennis, but not household chores or gardening
  • moderate – gentle swimming, social tennis, vigorous gardening or housework

Participants also reported how much walking they did, and this was included in their total MVPA.

Those who died between the start of the study and June 2014 were identified through the New South Wales Registry of Births, Deaths, and Marriages.

The main analyses in this study included 204,542 people who reported doing at least some MVPA. The researchers took factors that could affect the results (potential confounders) into account, including:

  • total MVPA
  • age
  • sex
  • educational level
  • marital status
  • area of  residence (urban or rural)
  • body mass index (BMI)
  • physical function (whether the person had any physical limitations)
  • smoking status
  • alcohol consumption
  • fruit and vegetable consumption

What were the basic results?

During the study, 7,435 of the 217,755 participants died:

  • 8.3% of those who did no MVPA
  • 4.8% of those who did 10 to 149 minutes of MVPA a week
  • 3.2% of those who did 150 to 299 minutes of MVPA a week
  • 2.6% of those who did 300 minutes or more of MVPA a week

After taking potential confounders into account, this meant that compared with those who did no MVPA, the risk of death during the 6.5 years of follow-up was:

  • 34% lower in those who did 10 to 149 minutes of MVPA a week (hazard ratio [HR] 0.66, 95% confidence interval [CI] 0.61 to 0.71)
  • 47% lower in those who did 150 to 299 minutes of MVPA a week (HR 0.53, 95% CI 0.48 to 0.57)
  • 54% lower in those who did 300 minutes or more of MVPA a week (HR 0.46, 95% CI 0.43 to 0.49)

Among those who did at least some MVPA, doing more of that activity as vigorous activity was associated with a reduced risk of death during follow-up:

  • 3.8% of those who did no vigorous activity died
  • 2.4% of those who did vigorous activity that accounted for less than 30% of their total MVPA died – a 9% reduction relative to those who did none (HR 0.91, 95% CI 0.84 to 0.98)
  • 2.1% of those who did vigorous activity that accounted for 30% or more of their total MVPA died – a 13% reduction relative to those who did none (HR 0.87, 95% CI 0.81 to 0.93)
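
For readers unfamiliar with hazard ratios, the percentage reductions quoted above come directly from them: a hazard ratio (HR) of 0.66, for example, corresponds to a (1 − 0.66) × 100 = 34% lower risk. A minimal sketch of that conversion, using the hazard ratios reported in the study:

```python
# Convert a hazard ratio (HR) into the percentage risk reduction quoted
# in the text: reduction (%) = (1 - HR) * 100.
hazard_ratios = {
    "10-149 min MVPA/week vs none": 0.66,
    "150-299 min MVPA/week vs none": 0.53,
    "300+ min MVPA/week vs none": 0.46,
    "<30% of MVPA vigorous vs no vigorous activity": 0.91,
    ">=30% of MVPA vigorous vs no vigorous activity": 0.87,
}

for comparison, hr in hazard_ratios.items():
    reduction = (1 - hr) * 100
    print(f"{comparison}: HR {hr:.2f} -> {reduction:.0f}% lower risk")
```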

The researchers found similar results when they looked at people with different BMIs, people who did different amounts of MVPA, and in people with or without cardiovascular disease or diabetes.  

How did the researchers interpret the results?

The researchers concluded there was an "inverse dose-response relationship" between the proportion of MVPA done as vigorous activity and the risk of death during follow-up.

They say this suggests that vigorous activity "should be endorsed in clinical and public health activity guidelines to maximise the population benefits of physical activity". 

Conclusion

This large study suggests that in middle to older age, doing more of your total moderate to vigorous activity as vigorous activity could help to reduce your risk of death.

This study's size is one of its strengths, with more than 200,000 people taking part. The fact that information on activity was collected at the start of the study, rather than asking people to recollect what they did in the past, is also beneficial.

The researchers also tried to take factors into account that they knew could influence their results, including cardiovascular medical conditions such as coronary heart disease, or other conditions that reduced people's ability to participate in physical activity, such as type 2 diabetes.

But, as with all studies, there are some limitations:

  • The researchers only asked about physical activity once, and people's activities may have been different before or after the week that was assessed.
  • The study only included those aged 45 to 75, and results may not apply to older individuals.
  • All lifestyle measures were reported by the participants themselves, and there may be some inaccuracies – the authors state people tend to be better at reporting vigorous activity than other types of activity.
  • The results may still be influenced by confounders the authors did not measure –  for example, only fruit and vegetable intake was assessed as a sign of a healthy diet, but other dietary aspects could have had an effect.

While the results suggest that doing more vigorous activity is beneficial, there are some points to think about. For example, the people who were doing more vigorous activity may also have done more vigorous activity in their younger years, and it may be that this consistency is the important factor.

The study also did not directly compare just moderate activity with vigorous activity. Further research is likely to assess these and other questions.

Importantly, the results highlight the beneficial effect of doing some moderate to vigorous activity, regardless of how much of it is vigorous. This supports current recommendations for exercise.

While doing some vigorous activity may add some benefit, it is important that people set themselves realistic targets they can safely achieve.

If it has been a while since you last exercised, the NHS Choices Couch to 5K running programme is one way to safely raise your fitness levels.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Short bursts of vigorous exercise helps prevent early death, says study. The Independent, April 6 2015

Swimming, gardening or golf 'not enough to prevent early death'. The Daily Telegraph, April 6 2015

How to live for longer: Brisk exercise slashes risk of early death. Daily Express, April 7 2015

Links To Science

Gebel K, Ding D, Chey T, et al. Effect of Moderate to Vigorous Physical Activity on All-Cause Mortality in Middle-aged and Older Australians. JAMA Internal Medicine. Published online April 6 2015

Categories: Medical News

Sedentary lifestyle – not watching TV – may up diabetes risk

Medical News - Thu, 04/02/2015 - 18:31

“Experts claim being a couch potato can increase the risk of developing diabetes,”  the Daily Express reports.

A study of people at high risk of diabetes produced the sobering result that each hour of time spent watching TV increased the risk of type 2 diabetes by 2.1% (after being overweight was taken into account).

The study originally compared two interventions aimed at reducing the risk of developing diabetes compared to placebo. It involved 3,000 participants who were overweight, had high blood sugar levels and insulin resistance. These are early indications that they may be developing diabetes (often referred to as pre-diabetes). The interventions were either metformin (a drug used to treat diabetes) or a lifestyle intervention of diet and exercise.

This study used data collected from the original trial to see if there was a link between increased time spent watching the TV and risk of developing diabetes.

Across all of the groups they found a slightly increased risk, which was 3.4% per hour of TV watching when being overweight was not taken into account.

The findings may not be reliable, as researchers did not take other risk factors into account, such as family history of diabetes, use of other medication or smoking status. They also relied on self-reported TV watching times, which may not be very accurate.

That said, lack of exercise is a known risk factor for a range of chronic diseases – not just diabetes. Read more about why sitting too much is bad for your health.

 

Where did the story come from?

The study was carried out by researchers from the University of Pittsburgh, George Washington University, Pennington Biomedical Research Center and several other US universities. It was funded by many different US National Health Institutes and three private companies: Bristol-Myers Squibb, Parke-Davis and LifeScan Inc.

The main funding source was the National Institute of Diabetes and Digestive and Kidney Diseases of the US National Institutes of Health. One of the authors has a financial interest in a company called Omada, which develops online behaviour change programmes, with a focus on diabetes.

The study was published in the peer-reviewed medical journal Diabetologia.

The UK media has focused on the statistic that the risk of getting diabetes increases by 3.4% per hour of TV watched. However, this figure does not take into account the risk factor of being overweight. When this is accounted for, the increased risk is less, at 2.1%.

The Daily Express’s online headline "Watching too much TV can give you diabetes" would not be our preferred wording. Some readers may take it as a statement that their TV sends out dangerous rays that increase their blood sugar levels. A more accurate, if slightly less striking, headline would be "Sedentary behaviour increases your diabetes risk".

 

What kind of research was this?

This study looked at data from a randomised controlled trial that aimed to test whether lifestyle changes or the diabetes drug metformin reduced the risk of developing diabetes compared to placebo (dummy pill). It was conducted on over 3,000 people at high risk of diabetes. The trial found that metformin reduced the risk by 31% and that the lifestyle intervention reduced it by 58% compared to placebo.

This study aimed to see if the lifestyle intervention, which aimed to increase physical activity, had any effect in reducing the amount of self-reported time spent sitting. As a secondary outcome, the researchers looked at data from each group to see if there was an association between time spent sitting and the risk of diabetes. As this was not one of the aims of the study, the results of this type of secondary analysis are less reliable.

Critics of this approach argue that it is akin to "moving the goalposts": the researchers fail to get a striking result for their stated aim, so they shift the focus to a secondary outcome that does produce one.

 

What did the research involve?

Over 3,000 adults at high risk of diabetes were randomly allocated to take metformin, a placebo, or have a lifestyle intervention, from 1996 to 1999. They were followed up for an average of 3.2 years to see if any of the interventions reduced the risk of developing diabetes.

The lifestyle group had an "intensive" lifestyle intervention focusing on a healthy diet and exercise. The aim for this group was to achieve 7% weight loss and do at least 150 minutes of moderate intensity activity per week (the recommended minimal activity levels for adults). They were advised to limit inactive lifestyle choices, such as watching the TV. People given the metformin or placebo were also advised about a standard diet and had exercise recommendations. The study took place over 2.8 years.

A variety of measures were recorded, including weight and annual blood sugar tests. Each year, the participants were interviewed using a Modifiable Activity Questionnaire. This recorded self-reported estimates of leisure, TV watching and work-related activity.

In this analysis, the researchers compared the amount of time each person reported they spent watching the TV at the start and end of the study in each group.

 

What were the basic results?

Across all of the treatment groups, every hour per day of watching TV raised the risk of diabetes by 2.1%, after adjusting for age, sex, physical activity and weight. When the results did not take increased weight into account, the risk was higher, at 3.4% per hour.

By the end of the study, people in the lifestyle intervention group watched less TV. At the start of the study, each group reported watching a similar amount of TV – around 2 hours and 20 minutes per day. Three years later, people in the lifestyle group watched on average 22 minutes less per day. Those in the placebo group watched 8 minutes less, but those on metformin did not change their TV watching significantly.

 

How did the researchers interpret the results?

The researchers concluded that although it was not a primary goal of the study, "the lifestyle intervention was effective at reducing sedentary time". They report that "in all treatment arms, individuals with lower levels of sedentary time had a lower risk of developing diabetes". They advise that "future lifestyle intervention programmes should emphasise reducing television watching and other sedentary behaviours, in addition to increasing physical activity".

 

Conclusion

This study has found an association between TV watching and an increased risk of developing diabetes. However, there are many potential confounding factors that were not taken into account in the analysis. This includes other medical conditions, medication, family history of diabetes and smoking. 

Additionally, all of the participants were at high risk of developing diabetes. They were overweight at the start of the study, had high blood sugar levels and insulin resistance – therefore, the study does not show whether this association would be found in people at low or moderate risk.

The original study did not set out to see if increased TV watching was associated with increased risk of developing diabetes; this was an afterthought, using the data that had been collected. This makes the results less reliable.

A further limitation is that the study is reliant on self-reporting the amount of time spent watching TV. This was estimated for the previous year, which is unlikely to be entirely accurate.

Watching TV is not "going to give you diabetes" as the Express had confusingly stated, but it is important to compensate for time spent being a couch potato by exercising regularly, eating a healthy diet and trying to achieve or maintain a healthy weight.

Read more about reducing your type 2 diabetes risk.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Watching too much TV can give you diabetes, experts warn. Daily Express, April 2 2015

How watching TV can increase your risk of diabetes: Every hour spent slumped in front of the screen can raise chance of developing the condition by 3.4%. Mail Online, April 2 2015

Secret to cutting diabetes risk is to turn off your television says new research. Daily Mirror, April 1 2015

Every Hour Spent In Front Of The TV Could Increase Diabetes Risk, Study Warns. The Huffington Post, April 2 2015

Links To Science

Rockette-Wagner B, Edelstein S, Venditti EM, et al. The impact of lifestyle intervention on sedentary time in individuals at high risk of diabetes (zip file 206kb). Diabetologia. Published online April 1 2015

Categories: Medical News

New Down’s syndrome test more accurate than current screening

Medical News - Thu, 04/02/2015 - 16:00

“Blood test for Down’s syndrome 'gives better results'," reports BBC News today. The test, which is based on spotting fragments of "rogue DNA", achieved impressive results in a series of trials.

A study of over 15,000 women found that the new blood test more accurately identifies pregnancies with Down's syndrome than the test currently used.

Down's syndrome is caused by having an extra chromosome (the packages of DNA containing information to grow and develop). The new test is able to detect small fragments of DNA from the baby floating about in the mother’s blood, called cell-free DNA (cfDNA).

This blood test measures the amount of DNA from each of the chromosomes in the mother’s blood, and from that it can see whether there are any extra copies.

The cfDNA test performed significantly better than the current test across a range of screening measures for Down’s syndrome, but was not 100% accurate. Importantly, it had a much lower false positive rate than the current test; a false positive is where a healthy baby is wrongly identified as having Down’s. A false positive result often leads to an unnecessary further diagnostic test that carries a small risk of causing a miscarriage.

The test is not yet available on the NHS, but it is being reviewed and a decision is expected later this year. It can be accessed privately at a cost of between £400 and £900.

 

Where did the story come from?

The study was carried out by researchers from the University of California, the Perinatal Diagnostic Center in San Jose, Sahlgrenska University Hospital in Sweden, and several other US institutions. It was funded by Ariosa Diagnostics and the Perinatal Quality Foundation.

The study was published in the peer-reviewed New England Journal of Medicine.

BBC News accurately reported on the study and provided expert opinion from both Great Ormond Street Hospital and the Down's Syndrome Association. Both organisations highlight the need for women to be given clear information about screening, so they can make an informed decision.

 

What kind of research was this?

This was a diagnostic study, which compared a new antenatal screening test with standard screening for three genetic conditions, including Down’s syndrome.

Normally, people have 23 pairs of chromosomes. However, in these three genetic conditions, there is an extra copy of one of the chromosomes. In Down’s syndrome, there is an extra chromosome 21 (trisomy 21); Edwards' syndrome has an extra chromosome 18 (trisomy 18); and Patau’s syndrome has an extra chromosome 13 (trisomy 13). In most cases, this happens by chance and isn’t inherited from the parents. This is why all mothers-to-be are offered screening to see whether this has happened.

Currently, all pregnant women in the UK are offered screening for these conditions, which involves a two-step process. The test offered depends on how far along the pregnancy is. Women between 11 and 14 weeks pregnant are offered a blood test plus an ultrasound scan, called a combined test. Women between 14 and 20 weeks of pregnancy are offered a different blood test. This is less accurate than the combined test.

If either of these tests indicates an increased risk of having a baby with Down’s, Edwards’ or Patau’s syndromes, the woman will be offered either chorionic villus sampling (CVS) or amniocentesis to find out. Both of these tests involve taking samples from the mother’s abdomen, which can be uncomfortable, although not usually painful. This increases the risk of miscarriage, which occurs in one in 100 women (1%).

The new test detects short fragments of the baby’s DNA floating about in the mother’s blood, called cell-free DNA (cfDNA). By measuring the level of each of the chromosomes, it is possible to see if there are more chromosomes 21, 18 or 13.

The researchers had previously performed proof of principle studies of cfDNA in women at high risk of having a baby with one of these conditions. They now wanted to see how accurate the test was in a large sample of women with any level of risk.

 

What did the research involve?

The researchers recruited 15,841 pregnant women eligible for screening for Down’s, Edwards’ or Patau’s syndromes. All were tested using the new cfDNA blood test and the standard combined test. The results of the two tests were compared to see which was more accurate at picking up any of the three trisomy conditions.

Women were enrolled in the study between March 2012 and April 2013 from 35 medical centres across the US, Canada and Europe. They were eligible to participate if they were aged 18 or older, and had a singleton pregnancy between weeks 10 and 14.3 at the time of screening.

A blood test for cfDNA was taken at the same time as the standard screening tests. The blood sample was then analysed at a laboratory without the analysts knowing any clinical details about the pregnancy, other than the gestational age and mother’s age (the sample was blinded). The results were not given to the mother or clinician.

The researchers then obtained the outcome of the pregnancy and compared the accuracy of the standard test results with the new cfDNA test. This included terminated pregnancies and miscarriages where a genetic test had confirmed whether or not the baby had a trisomy condition.

They originally enrolled 18,955 women, but excluded 3,114, due to:

  • not meeting the inclusion criteria
  • withdrawal from the study (by either the woman or the investigator)
  • sample handling errors
  • no standard screening result
  • no cfDNA result
  • being lost to follow-up

 

What were the basic results?

The new test outperformed the current one at detecting Down’s syndrome. Results were similar for Edwards' and Patau’s syndromes, but tended to be less accurate.

One of the most important measures of whether a new screening test is any good is the positive predictive value (PPV) – the proportion of positive test results that turn out to be true positives. This depends not only on the test’s accuracy, but also on the number of false positives and on how common the condition is.

In rare conditions, like these chromosomal conditions, the false positives are important, because they represent a potentially large group of women who could be sent to have further invasive diagnostic tests they might not need.

The PPV of the new test for Down’s syndrome was 80.9% – significantly higher than the 3.4% scored for the combined test. The PPV difference was lower for women deemed at lower risk of having a baby with Down’s syndrome (76.0% for the new test v 50.0% for the current test).

The detailed results for Down’s syndrome (trisomy 21) were:

  • cfDNA screening identified all 38 babies with Down’s syndrome (sensitivity 100%, 95% confidence interval (CI) 90.7 to 100)
  • standard screening identified 30 out of 38 babies with Down’s syndrome (sensitivity 78.9%, 95% CI 62.7 to 90.4)
  • the cfDNA test was positive in nine pregnancies that did not have Down’s syndrome (false positive rate 0.06%, 95% CI 0.03 to 0.11)
  • standard screening was positive in 854 pregnancies that did not have Down’s syndrome (false positive rate 5.4%, 95% CI 5.1 to 5.8)
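
As a rough check, the headline figures for Down’s syndrome can be reproduced from these counts. The short sketch below assumes the standard definitions (sensitivity = detected cases divided by all cases; PPV = true positives divided by all positive results) and approximates the number of unaffected pregnancies as the total screened minus the Down’s cases.

```python
# Reproduce the reported Down's syndrome screening figures from the counts above.
# cfDNA: 38 cases detected, 0 missed, 9 false positives.
# Standard combined test: 30 detected, 8 missed, 854 false positives.

def screening_metrics(true_pos, false_neg, false_pos, unaffected):
    sensitivity = true_pos / (true_pos + false_neg)   # proportion of cases detected
    false_pos_rate = false_pos / unaffected           # unaffected pregnancies flagged
    ppv = true_pos / (true_pos + false_pos)           # positive results that are correct
    return sensitivity, false_pos_rate, ppv

unaffected = 15841 - 38  # approximation: total screened minus Down's cases

for name, tp, fn, fp in [("cfDNA", 38, 0, 9), ("standard screening", 30, 8, 854)]:
    sens, fpr, ppv = screening_metrics(tp, fn, fp, unaffected)
    print(f"{name}: sensitivity {sens:.1%}, false positive rate {fpr:.2%}, PPV {ppv:.1%}")
```

This gives a sensitivity of 100% and a PPV of about 80.9% for the cfDNA test, against 78.9% and 3.4% for standard screening – matching the figures reported above.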

Results for Edwards' syndrome (trisomy 18) were:

  • cfDNA identified nine out of 10 cases (sensitivity 90%, 95% CI 55.5 to 99.7)
  • standard testing identified eight out of 10 (sensitivity 80%, 95% CI 44.4 to 97.5)
  • cfDNA wrongly diagnosed Edwards' syndrome in one case (false positive rate 0.01%, 95% CI 0 to 0.04)
  • standard testing was positive in 49 pregnancies that did not have Edwards' syndrome (false positive rate 0.31%, 95% CI 0.23 to 0.41)

The results for Patau’s syndrome (trisomy 13) were:

  • cfDNA screening identified both babies (sensitivity 100%, 95% confidence interval (CI) 15.8 to 100)
  • standard screening identified one out of the two babies (sensitivity 50.0%, 95% CI 1.2 to 98.7)
  • the cfDNA test was positive in two pregnancies that did not have Patau’s syndrome (false positive rate 0.02%, 95% CI 0 to 0.06)
  • standard screening was positive in 28 pregnancies that did not have Patau’s syndrome (false positive rate 0.25%, 95% CI 0.17 to 0.36)

 

How did the researchers interpret the results?

The researchers concluded that "the performance of cfDNA testing was superior to that of traditional first trimester screening for the detection of trisomy 21". They say that further cost benefit studies are now needed. The researchers also caution that "as emphasised by professional societies, the use of cfDNA testing and other genetic tests requires an explanation of the limitations and benefits of prenatal test choices to the patient".

 

Conclusion

This large study has shown that the new cfDNA test is better than current standard screening at detecting three trisomy conditions during pregnancy. The confidence in accurately identifying affected pregnancies was strongest for Down’s syndrome. There were much wider confidence intervals for the other two conditions.

The cfDNA test was not 100% accurate, as there were false positive results for each condition, though far fewer than with standard screening.

Around 3% of the cfDNA tests did not produce a result. Careful consideration and further research may be needed to decide the best approach in these cases. Should these women all be sent for the next stage of diagnostic tests as a precaution, have the test repeated, or be offered the standard test instead?

The authors admit that, had they included these "no result" cases in their main analysis, the performance of the cfDNA test would have been lower. How much lower we don’t know, as they don’t appear to have presented an analysis of this scenario.

The potential benefit of the test is that it could reduce the number of women being sent for the CVS or amniocentesis testing, which carry their own risks. As the authors say: "Before cfDNA testing can be widely implemented for general prenatal aneuploidy screening, careful consideration of the screening method and costs is needed."

This test is not yet available on the NHS, though it is being considered under an evaluation project run by Great Ormond Street Hospital. In that evaluation study, which is being carried out in women at low risk, if the test result suggests a trisomy is highly likely, or is inconclusive, women are offered the invasive tests to confirm the result. This is because of the potential for false positive results, which previous research found occurred in around one in 300 women (0.3%), and false negative results, where the diagnosis is missed in around two out of 100 affected babies.

At present, the test is only offered by private clinics and costs £400 to £900. It takes two weeks to get the result, as the sample is sent to the US. Details of private clinics can be easily found via any internet search engine.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Blood test for Down's syndrome 'gives better results'. BBC News, April 1 2015

Links To Science

Norton ME, Jacobsson B, Swamy GK, et al. Cell-free DNA Analysis for Noninvasive Examination of Trisomy. The New England Journal of Medicine. Published online April 1 2015

Categories: Medical News

Concerns raised about increased e-cigarette use in teenagers

Medical News - Wed, 04/01/2015 - 15:30

"E-cigarettes: Many teenagers trying them, survey concludes," BBC News reports after a survey of around 16,000 English teenagers found one in five teens had tried an e-cigarette.

The concern is that rather than using e-cigarettes as a device to stop smoking, teenagers with no history of smoking could be using e-cigarettes because of their novelty value. This hypothesis seems to be borne out by the survey finding that 16% of teen e-cig users said they had never smoked conventional cigarettes.

While e-cigarettes are undoubtedly far safer than cigarettes, this does not mean they are 100% safe. Nicotine is a powerful substance and it is unclear what long-term effects it may have, especially on a teenage brain and nervous system that is still developing.

The study also found a strong association between alcohol misuse, such as binge drinking, and access to e-cigarettes. Other experts fear e-cigs could act as a potential gateway to smoking among children.

Legislation banning the sale of e-cigarettes to under-18s is expected to be introduced later this year.

One limitation of the study, however, is that it relied on a voluntary, self-reported questionnaire, so it is prone to selection and reporting bias. This makes its findings less reliable.

One final message you may want to convey to your children is that a nicotine addiction brings no useful benefits, but it can be expensive (especially for a teenager) and its long-term effects are unclear. 

Where did the story come from?

The study was carried out by researchers from Liverpool John Moores University, Public Health Wales, Health Equalities Group, and Trading Standards North West.

It was published in the peer-reviewed journal BMC Public Health. BMC Public Health is an open-access journal, so the study is free to read online.

It was covered broadly accurately in the papers, although reports focused on the number of non-smokers who had reportedly used e-cigarettes.

This raised fears in the press that the devices may become a gateway drug to tobacco, rather than concerns about the number of young smokers who reported using them.

The study's limitations, such as the issue of selection bias (which could either lead to an over- or underestimation of the true figure) and the fact the sample may not be representative of England, were not discussed.  

What kind of research was this?

This was a cross-sectional survey of more than 16,000 school students in northwest England looking at reported use of e-cigarettes, conventional smoking, alcohol consumption and other factors.

The authors say that while e-cigarettes are marketed as a healthier alternative to tobacco, they contain the addictive drug nicotine.

The battery-powered devices, which can be bought online and in some pubs, chemists and newsagents, deliver a hit of addictive nicotine and emit water vapour to mimic the feeling and look of smoking.

The vapour is considered potentially less harmful than cigarette smoke and is free of some of its damaging substances, such as tar. 

What did the research involve?

The researchers used a cross-sectional survey of 16,193 school students aged 14 to 17 in northwest England. This is part of a biennial survey conducted in partnership with Trading Standards, whose remit includes enforcing regulations on the sale of age-restricted products in the UK.

The survey includes detailed questions on:

  • age
  • gender
  • alcohol use (drinking frequency, binge drinking frequency, drink types consumed, drinking location, drinking to get drunk)
  • smoking behaviours (smoking status, age of first smoking)
  • how alcohol and tobacco were accessed
  • parental smoking
  • involvement in violence when drunk

In 2013, the survey included a question about e-cigarettes for the first time, asking students if they had ever tried or bought them.

The questionnaire was given to students by teachers during normal school lessons between January and April 2013. Students completed the questionnaire themselves voluntarily and anonymously. The researchers excluded questionnaires where data was incomplete or spoiled.

The researchers also collected information on deprivation using both home and school postcodes, assigning participants to one of five groups (quintiles). They used standard statistical methods to analyse associations between e-cigarette access and other factors.
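
The article does not spell out which statistical methods were used, but adjusted odds ratios (AORs) like those reported in the results below are typically obtained from logistic regression. The following Python sketch is purely illustrative: the data are simulated and the variable names are invented; only the general technique is assumed.

  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  rng = np.random.default_rng(0)
  n = 2000  # hypothetical number of survey respondents

  # Simulated survey responses (illustrative only, not the study's data)
  df = pd.DataFrame({
      "weekly_drinker": rng.integers(0, 2, n),  # 1 = drinks alcohol at least weekly
      "male":           rng.integers(0, 2, n),
      "parent_smokes":  rng.integers(0, 2, n),
  })

  # Simulate e-cigarette access with higher odds among weekly drinkers
  logit_p = -2.0 + 0.6 * df["weekly_drinker"] + 0.3 * df["male"] + 0.4 * df["parent_smokes"]
  df["ecig_access"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

  # Logistic regression: exponentiated coefficients are adjusted odds ratios
  model = smf.logit("ecig_access ~ weekly_drinker + male + parent_smokes", data=df).fit(disp=False)
  print(np.exp(model.params))

Each exponentiated coefficient is the odds ratio for that factor after adjusting for the others, which is how a figure such as "AOR 1.89" should be read.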

What were the basic results?

The main findings are summarised below:

  • one in five children (19.2%) who responded said they had "accessed" e-cigarettes
  • over one-third (35.8%) of those who reported accessing e-cigarettes were regular smokers, 11.6% smoked when drinking, 13.6% were ex-smokers, and 23.3% had tried smoking but didn't like it
  • 15.8% of teenagers who accessed e-cigarettes had never smoked conventional cigarettes
  • e-cigarette access was also associated with being male, having parents or guardians that smoke, and students' alcohol use
  • compared with non-drinkers, teenagers who drank alcohol at least weekly and binge drank were more likely to have accessed e-cigarettes (adjusted odds ratio [AOR] 1.89)
  • the link between e-cigarettes and alcohol was particularly strong among those who had never smoked tobacco (AOR 4.59)
  • among drinkers, e-cigarette access was related to drinking to get drunk, alcohol-related violence, consumption of spirits, self-purchase of alcohol from shops or supermarkets, and accessing alcohol by recruiting adult proxy purchasers outside shops

How did the researchers interpret the results?

The researchers say their findings suggest teenagers are accessing e-cigarettes more for experimentation and recreation than as an aid to quitting smoking.

There is an urgent need for controls on the promotion and sale of e-cigarettes to children, the researchers argue, although they also point out that those most likely to obtain e-cigarettes may already be familiar with "illicit methods" of accessing age-restricted substances. 

Conclusion

As the authors point out, this cross-sectional survey had a number of limitations:

  • it did not record how frequently e-cigarettes were reportedly accessed
  • it cannot tell us whether children who reported both conventional smoking and e-cigarette access had accessed e-cigarettes before or after using conventional cigarettes
  • it is possible that, as the questionnaire was voluntary, it suffered from selection bias, with only certain students completing it
  • students may have under- or over-reported their smoking and drinking behaviours

The survey should not be considered representative of all 14- to 17-year-olds in England or in the northwest. However, the finding that one in five children reported having access to e-cigarettes, and that many of them are non-smokers, is a clear cause for concern.

Legislation banning the sale of e-cigarettes to under-18s is expected to be introduced later this year.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

E-cigarettes: Many teenagers trying them, survey concludes. BBC News, March 31 2015

Four in 10 teenage e-cigarette users would not have smoked, warn health experts. The Daily Telegraph, March 31 2015

Are teenagers trying e-cigarettes as a trendy novelty? ITV News, March 31 2015

E-cigarettes: why are young people vaping? Channel 4 News, March 31 2015

One in five teens have tried e-cigs: Fears youngsters will move on to real cigarettes after getting taste for nicotine. Mail Online, March 31 2015

Links To Science

Hughes K, Bellis MA, Hardcastle KA, et al. Associations between e-cigarette access and smoking and drinking behaviours in teenagers. BMC Public Health. Published online March 31 2015

Categories: Medical News

Paracetamol 'not effective' for lower back pain or arthritis

Medical News - Wed, 04/01/2015 - 14:31

"Paracetamol doesn't help lower-back pain or arthritis, study shows," The Guardian reports on a new review.

The review found no evidence that paracetamol had a significant effect, compared with placebo (a dummy treatment), in relieving pain and disability from acute lower back pain, and found it was only minimally effective in osteoarthritis.

Before you start clearing out your medicine cabinet, the results of this review are not as clear-cut as reported.

The findings for lower back pain are based on three randomised controlled trials (RCTs), which, when grouped together, found no difference in pain relief, disability or quality of life between paracetamol and placebo. However, each of these studies has limitations. Two were small, and the third only looked at acute lower back pain up to six weeks, a stage at which paracetamol may not be a strong enough painkiller.

The review did find that paracetamol slightly improved pain and disability from osteoarthritis of the hip or knee compared to placebo.

The study does not prove that paracetamol is no better than placebo for other types of back pain, such as chronic back pain (pain that persists for more than six weeks).

The National Institute for Health and Care Excellence (NICE) recommends that people with persistent back pain and recurrent back pain should stay physically active to manage and improve the condition.

Paracetamol is recommended as a first choice of painkiller because it has few side effects. NICE recommends that if this is not effective, stronger or different types of painkillers should be offered.

This guidance is currently under review, and this will take into account any new research such as the results of this study.

 

Where did the story come from?

The study was carried out by researchers from the University of Sydney, St Vincent’s Hospital, the University of New South Wales, and Concord Hospital in Sydney. It was funded by the National Health and Medical Research Council.

The study was published in the peer-reviewed British Medical Journal (BMJ) on an open-access basis so is free to read online (PDF 673kb).

The UK media reported the story accurately but did not explain any of the limitations of the study.

 

What kind of research was this?

This was a systematic review of all RCTs assessing the effectiveness of paracetamol for back pain and osteoarthritis of the hip or knee compared to placebo. The researchers also performed a meta-analysis. This is a statistical technique that combines the results of the RCTs to give an overall measure of effectiveness.

Pooling the results of multiple studies can give a more precise estimate of effectiveness than the individual studies alone, which may, for example, be too small to detect an effect on their own.
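
As a minimal illustration of how pooling works (the study's own statistical methods are not reproduced here, and the figures below are invented), an inverse-variance, fixed-effect meta-analysis weights each trial's effect estimate by the inverse of its variance:

  import numpy as np

  # Hypothetical mean differences in pain score (treatment minus placebo)
  # and their standard errors from three trials; the numbers are invented.
  effects = np.array([-0.5, 0.2, -0.1])
  std_errs = np.array([0.4, 0.3, 0.2])

  # Inverse-variance (fixed-effect) pooling: weight each trial by 1 / SE^2
  weights = 1.0 / std_errs**2
  pooled = np.sum(weights * effects) / np.sum(weights)
  pooled_se = np.sqrt(1.0 / np.sum(weights))

  ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
  print(f"pooled effect {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")

Larger trials (with smaller standard errors) receive more weight, which is why pooling can reveal effects that individual small trials are unable to detect.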

This type of research is good at summarising all the research on a question and calculating an overall treatment effect, but relies on the quality and availability of the RCTs.

Paracetamol is currently recommended in clinical guidelines as the first-line drug for pain relief in back pain and osteoarthritis of the hip and knee. The researchers wanted to assess whether this recommendation is backed up by the evidence.

 

What did the research involve?

A systematic review and meta-analysis was performed to identify and pool all RCTs that have assessed paracetamol compared to placebo for back pain and osteoarthritis of the hip and knee.

The following medical databases were searched for RCTs published up until December 2014: Medline, Embase, AMED, CINAHL, Web of Science, LILACS, International Pharmaceutical Abstracts, and Cochrane Central Register of Controlled Trials. A search was also made for unpublished studies, and authors were contacted for further information where required.

Three reviewers selected all relevant RCTs that reported on any of the following outcomes:

  • pain intensity
  • disability status
  • quality of life

Trials were excluded if a specific serious cause of the back pain had been identified (such as a tumour or infection), if they looked at post-operative pain, or if they studied people with rheumatoid arthritis.

The quality of each RCT was assessed using a standardised "risk of bias" assessment. The strength of the body of evidence as a whole was summarised using the internationally recognised GRADE approach (Grading of Recommendations Assessment, Development and Evaluation).

A meta-analysis was then performed to pool the results of trials in people with the different conditions, using appropriate statistical methods. This included an analysis of whether the RCTs were similar enough to be combined. The researchers also performed a "secondary exploratory analysis", which looked at whether various factors may have biased the results.
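
Checking whether trials are similar enough to be combined is commonly done with heterogeneity statistics such as Cochran's Q and I². The sketch below continues the invented figures from the earlier example; the study's own heterogeneity methods are not detailed in this article.

  import numpy as np

  effects = np.array([-0.5, 0.2, -0.1])   # invented trial effect estimates
  std_errs = np.array([0.4, 0.3, 0.2])    # invented standard errors

  weights = 1.0 / std_errs**2
  pooled = np.sum(weights * effects) / np.sum(weights)

  # Cochran's Q: weighted squared deviations of trial effects from the pooled effect
  q = np.sum(weights * (effects - pooled) ** 2)
  df_q = len(effects) - 1

  # I^2: percentage of variation across trials due to heterogeneity rather than chance
  i_squared = max(0.0, (q - df_q) / q) * 100
  print(f"Q = {q:.2f}, I^2 = {i_squared:.0f}%")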

 

What were the basic results?

The systematic review included 13 moderate- to high-quality RCTs, 12 of which were included in the meta-analysis:

  • three trials investigated short-term use of paracetamol for lower back pain (including 1,825 people)
  • 10 trials assessed paracetamol compared to placebo for osteoarthritis of the knee or hip (including 3,541 people)
  • no trials were found for neck pain

No significant difference was found between paracetamol and placebo in the short-term control of lower back pain in terms of:

  • pain intensity
  • disability
  • quality of life

Paracetamol slightly improved pain and disability from osteoarthritis of the hip or knee compared to placebo.

People experienced a similarly small number of side effects when taking paracetamol or placebo. However, people taking paracetamol were four times more likely to have abnormal liver function tests than those taking placebo. The review did not describe how abnormal the tests were or how quickly the tests returned to normal after stopping paracetamol.

 

How did the researchers interpret the results?

The researchers concluded that "paracetamol is ineffective in the treatment of lower back pain and provides minimal short term benefit for people with osteoarthritis". They call for "reconsideration of recommendations to use paracetamol for patients with lower back pain and osteoarthritis of the hip or knee in clinical practice guidelines".

 

Conclusion

This systematic review and meta-analysis suggests paracetamol may not be effective for some people with lower back pain and may be of only limited help to people with osteoarthritis of the hip and knee.

Strengths of the study include:

  • the systematic review only contained the "gold standard" type of trials – RCTs
  • existing published RCTs comparing paracetamol with a placebo were likely to have been identified, as a large number of databases were searched from the beginning of their records up to December 2014. There were also two independent reviewers, which reduces the risk of any slipping through the net
  • they also searched for unpublished studies, reducing the risk of publication bias in their results (trials are less likely to be published if their results do not show a clear benefit)
  • the quality of evidence was appropriately assessed

However, as noted above, this type of research is reliant on the availability of relevant RCTs.

So while the review itself was well-conducted, the actual body of new evidence found about lower back pain was small.

In this case, the results for back pain were limited to three studies in specific populations. Non-specific lower back pain (i.e. back pain without an obvious cause) is complex in nature and these small studies may not be representative of all people who experience lower back pain.

First study

The first study was small, including 36 adults who had been taking strong (opioid) painkillers for chronic back pain for at least six months. While they remained on these painkillers, the study found no difference in pain relief between intravenous injections of paracetamol, placebo, or the non-steroidal anti-inflammatory drugs (NSAIDs) diclofenac and parecoxib.

Second study

The second study assessed the effect of paracetamol on acute back pain in 113 people after two and four days of use, compared with 20 people on placebo. The small study size limits the strength of the results. It may be that paracetamol was not a strong enough painkiller at that point in the course of the back pain, but might have been during the recovery phase.

Third study

The main outcome of the third study was whether paracetamol sped up recovery from acute lower back pain compared to placebo. How effective paracetamol was at pain relief was a secondary outcome, so it may not have been as reliably assessed.

Some people will find paracetamol helps relieve the pain, with relatively few side effects compared to other types of painkillers. The NICE guideline recommends paracetamol as a first-line pain relief drug for lower back pain that has lasted for at least six weeks, along with other measures such as staying active. It recommends that if this does not provide adequate pain relief, an NSAID should be offered.

NICE is currently updating its guidance on lower back pain and will take the results of this review into account.

NICE’s guidance also recommends paracetamol as a first-line pain relief drug for osteoarthritis; however, it does note that an evidence review suggested paracetamol may not be as effective for these people as originally thought. NICE plans to review this guidance (a draft is expected in 2016) and may revise its recommendations at that point, but for now has kept the existing guidance.

If you find that a prescribed treatment does not seem to be working, you should not suddenly stop taking it (unless advised to). Instead, contact your GP or the doctor in charge of your care to discuss alternative drug (as well as non-drug) options.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Paracetamol doesn't help lower-back pain or arthritis, study shows. The Guardian, March 31 2015

Paracetamol 'does not help back pain or arthritis'. The Daily Telegraph, March 31 2015

Paracetamol ‘no good for back pain'. BBC News, March 31 2015

Paracetamol for back pain? It's no better than a placebo: Experts say treatment does nothing to improve recovery time, sleep or quality of life. Daily Mail, April 1 2015

Paracetamol is ineffective against lower back pain says study in top medical journal. Daily Mirror, March 31 2015

Paracetamol 'doesn't work on back pain'. ITV News, March 31 2015

Links To Science

Machado GC, Maher CG, Ferreia PH, et al. Efficacy and safety of paracetamol for spinal pain and osteoarthritis: systematic review and meta-analysis of randomised placebo controlled trials (PDF, 672kb). BMJ. Published online March 31 2015

Categories: Medical News