Medical News

Links between hay fever, asthma and prostate cancer inconclusive

Medical News - Wed, 05/20/2015 - 16:00

"Men with hay fever are more likely to have prostate cancer – but those with asthma are more likely to survive it," the Daily Mirror reports. Those were the puzzling and largely inconclusive findings of a new study looking at these three conditions.

Researchers looked at data on around 50,000 middle-aged men, following them up for 25 years to see whether asthma or hay fever at the start of the study was associated with a diagnosis of prostate cancer, or with fatal prostate cancer, during follow-up.

The findings weren't as conclusive as the headline suggests. The researchers did find hay fever was associated with a small (7%) increased risk of prostate cancer development. There was some suggestion asthma may be associated with a decreased risk of getting prostate cancer or fatal prostate cancer. However, these links were only of borderline statistical significance, meaning there was a high risk they could have been the result of chance.

And the links between hay fever and fatal prostate cancer weren't significant at all, meaning there was no evidence that men with hay fever were more likely to die from the disease (so no need to worry if you are affected).

The possibility that inflammation, or the immune system more generally, could be associated with risk of prostate cancer is plausible, but this study tells us little about how different immune profiles could influence cancer risk.

 

Where did the story come from?

The study was carried out by researchers from Johns Hopkins Bloomberg School of Public Health and other institutions in the US. It was funded by grants from The National Cancer Institute and The National Heart, Lung and Blood Institute. The study was published in the peer-reviewed International Journal of Cancer.

The Daily Mirror has taken an uncritical view of the research findings and fails to make clear to its readers that the findings were mainly based on borderline statistically significant or non-significant results. These don't provide firm proof of links between asthma or hay fever and prostate cancer or lethal prostate cancer. 

 

What kind of research was this?

This was a prospective cohort study looking into how the immune system might be involved in the development of prostate cancer.

The study authors say emerging research suggests inflammation, and the immune response in general, may be involved in the development of prostate cancer. As they say, one way to explore this is by looking at the links between prostate cancer and conditions that have a particular immune profile. Two such immune-mediated conditions are asthma and allergies, such as hay fever.

Previous studies looking at links between the conditions gave inconsistent results. This study examined the link in a prospective cohort of almost 50,000 cancer-free men, to see whether they developed prostate cancer and which factors were associated with it. Cohort studies such as these can demonstrate associations, but they cannot prove cause and effect, as many other unmeasured factors may be involved.

 

What did the research involve?

The cohort was called the Health Professionals Follow-Up Study. In 1986 it enrolled 47,880 cancer-free men, then aged 40-75 years (91% white ethnicity), who were followed up for 25 years.

Every two years, men completed questionnaires on medical history and lifestyle, and filled in food questionnaires every four years.

At study enrolment they were asked whether they had ever been diagnosed with asthma, hay fever or another allergy and, if so, the year it started. In subsequent questionnaires they were asked about new asthma diagnoses and asthma medications, but hay fever was only questioned at study start.

Men reporting a diagnosis of prostate cancer on follow-up questionnaires had this confirmed through medical records. The researchers also used the National Death Index to identify cancer deaths.

The researchers looked at the associations between prostate cancer and reported asthma or hay fever, particularly looking at the link with "lethal" prostate cancer. This was defined as being prostate cancer either diagnosed at a later stage when the cancer had already spread around the body (so expected to be terminal), or being the cause of death.

They adjusted their analyses for potential confounders of:

  • age
  • body mass index (BMI)
  • ethnicity
  • smoking status
  • physical activity
  • diabetes
  • family history of prostate cancer

 

What were the basic results?

Five percent of the cohort had a history of asthma at study start and 25% had hay fever. During the 25-year follow-up there were 6,294 cases of prostate cancer. Of these, 798 were expected to be lethal, including 625 recorded deaths.

After adjusting for confounders, there was a suggestion that having asthma at study start was associated with a lower risk of developing prostate cancer. We say a suggestion because the 95% confidence interval (CI) of the result included 1.00, making it of borderline statistical significance (relative risk (RR) 0.89, 95% CI 0.78 to 1.00). This means the finding may have been down to chance alone.
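To illustrate how these confidence intervals are read, here is a minimal sketch in Python (not part of the study itself) that classifies a relative risk by whether its 95% CI excludes, touches, or contains the "no effect" value of 1.00:

```python
def classify_rr(rr, ci_low, ci_high):
    """Classify a relative risk by its 95% confidence interval.

    A CI that contains 1.00 means "no difference in risk" is still
    plausible, so the result is not statistically significant; a CI
    whose boundary sits exactly on 1.00 is borderline.
    """
    if ci_low > 1.0 or ci_high < 1.0:
        return "significant"
    if ci_low == 1.0 or ci_high == 1.0:
        return "borderline"
    return "not significant"

# Figures reported in the study
print(classify_rr(0.89, 0.78, 1.00))  # asthma and prostate cancer -> borderline
print(classify_rr(1.07, 1.01, 1.13))  # hay fever and prostate cancer -> significant
print(classify_rr(1.10, 0.92, 1.33))  # distant hay fever onset -> not significant
```

Note that "borderline" here is a reading convention, not a formal statistical category: a boundary exactly on 1.00 simply means the evidence is too weak to rule out chance.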

Hay fever, by contrast, was associated with an increased risk of developing prostate cancer, which did just reach statistical significance (RR 1.07, 95% CI 1.01 to 1.13).

Looking at lethal prostate cancer, there was again a suggestion that asthma was associated with decreased risk, but this again appeared of borderline statistical significance (RR 0.67, 95% CI 0.45 to 1.00). Hay fever was not this time significantly associated with risk of lethal prostate cancer.

The researchers then looked at ever having a diagnosis of asthma, this time not only looking at the 5% already diagnosed at study start, but also the 4% who developed the condition during follow-up. Again they found that ever having a diagnosis of asthma was associated with a decreased risk of lethal prostate cancer, but this was only of borderline statistical significance (RR 0.71, 95% CI 0.51 to 1.00).

The researchers also considered the time of diagnosis. They report that onset of hay fever in the distant past (more than 30 years ago) "was possibly weakly positively associated with risk of lethal" prostate cancer. However, this link was not statistically significant (RR 1.10, 95% CI 0.92 to 1.33).

 

How did the researchers interpret the results?

The researchers conclude: "Men who were ever diagnosed with asthma were less likely to develop lethal and fatal prostate cancer." They add: "Our findings may lead to testable hypotheses about specific immune profiles in the [development] of lethal prostate cancer."

 

Conclusion

The researchers' suggestion that this research is "hypothesis generating" is the most apt. It shows a possible link between immune profiles and prostate cancer, but doesn't prove it or explain the underlying reasons for any such link.

This single study does not provide solid evidence that asthma or hay fever would have any influence on a man's risk of developing prostate cancer or dying from it, particularly when you consider the uncertain statistical significance of several of the findings.

Links suggesting asthma may be associated with a lower risk of total or lethal prostate cancer were all only of borderline statistical significance, meaning we can have less confidence that these are true links.

Links with hay fever were similarly far from convincing. Though the researchers found a 7% increased risk of developing prostate cancer with hay fever, this only just reached statistical significance (95% CI 1.01 to 1.13). The links between hay fever and risk of lethal prostate cancer that hit the headlines weren't significant at all, so they provide no evidence for a link.

Even if there is a link between asthma and allergy and prostate cancer risk, it's still possible this could be influenced by unmeasured health and lifestyle factors that have not been adjusted for.

Other limitations to this prospective cohort include its predominantly white sample, particularly given that prostate cancer is known to be more common in black African or black Caribbean men.

The results may not be applicable to these higher-risk populations. Also, though prostate cancer diagnoses were confirmed through medical records and death certificates, there is the possibility for inaccurate classification of asthma or allergic conditions as these were self-reported.

The possibility that inflammation, or the immune system more generally, could be associated with risk of prostate cancer is definitely plausible. For example, history of inflammation of the prostate gland is recognised to be possibly associated with increased prostate cancer risk. Therefore, study into how different immune profiles could have differing cancer risk is a worthy angle of research into prostate cancer.

However, the findings of this single cohort should not be of undue concern to men with hay fever or, conversely, suggest that men with asthma have protection from the disease. 

Links To The Headlines

Men with hay-fever more likely to have prostate cancer – but those with asthma more likely to survive. Daily Mirror, May 19 2015

Links To Science

Platz EA, Drake CG, Wilson KM, et al. Asthma and risk of lethal prostate cancer in the Health Professionals Follow-Up Study. International Journal of Cancer. Published online February 27 2015

Categories: Medical News

Children of the 90s more likely to be overweight or obese

Medical News - Wed, 05/20/2015 - 15:48

"Children of the 90s are three times as likely to be obese as their parents and grandparents," the Mail Online reports. A UK survey looking at data from 1946 to 2001 found a clear trend of being overweight or obese becoming more widespread in younger generations. A related trend was that the threshold from normal weight to overweight was crossed at a younger age in younger generations.

The study examined 273,843 records of weight and height for 56,632 people in the UK from five studies undertaken at different points since 1946. The results found children born in 1991 or 2001 were much more likely to be overweight or obese by the age of 10 than those born before the 1980s, although the average child was still of normal weight.

The study also found successive generations were more likely to be overweight at increasingly younger ages, and the heaviest people in each group got increasingly more obese over time. These results won't come as much of a surprise given the current obesity epidemic.

The findings point to a potential public health emergency in the making. Obesity-related complications such as type 2 diabetes, heart disease and stroke can be both debilitating and expensive to treat. The researchers called for urgent, effective interventions to buck this trend. 

Where did the story come from?

The study was carried out by researchers from University College London and was funded by the Economic and Social Research Council.

It was published in the peer-reviewed journal, PLOS Medicine. The journal is open access, meaning the study can be read online free of charge.

The Mail Online focused on the risk to children, saying children were more likely to be obese.

But the figures in the study were for obesity and being overweight combined. We don't know how the chances of obesity alone changed over time because there were too few obese children in the earliest cohorts to carry out the calculations.

BBC News gave a more accurate overview of the study and the statistics.  

What kind of research was this?

This was an analysis of data from five large long-running cohort studies carried out in England ranging around 50 years in total. It aimed to see how people's weight changed over time through childhood and adulthood, and how this compared across generations.

Studies like this are useful for looking at patterns and telling us what has changed and how, but can't tell us why these changes arose.  

What did the research involve?

Researchers used data from cohort studies that recorded the weight and height of people born in 1946, 1958, 1970, 1991 and 2001.

They used the data to examine how the proportion of people who were normal weight, overweight or obese changed over time for the five birth cohort groups. They also calculated the chances of becoming overweight or obese at different ages, across childhood and adulthood, for the five groups.

The researchers used data from 56,632 people, with 273,843 records of body mass index (BMI) recorded at ages ranging from 2 to 64. BMI is calculated for adults as weight in kilograms divided by height in metres squared.

For children, BMI is assessed differently to account for the way children grow, using a reference population to decide whether children are underweight, normal weight, overweight or obese at specific ages.
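The adult formula can be shown in a few lines of Python. This is only an illustration of the arithmetic; the category cut-offs below are the standard adult bands and, as the study notes, do not apply to children, who need age-specific reference charts:

```python
def bmi(weight_kg, height_m):
    """Adult BMI: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def adult_category(bmi_value):
    """Standard adult BMI bands (not valid for children)."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    return "obese"

# A hypothetical adult weighing 85kg at 1.75m tall
print(round(bmi(85, 1.75), 1))        # -> 27.8
print(adult_category(bmi(85, 1.75)))  # -> overweight
```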

To keep the populations as similar as possible across the cohort studies, the researchers only included data on white people, as there were few non-white people in the earliest study. Immigration of non-white people to the UK did not begin in any significant numbers until the 1950s.

For each of the five cohort studies, men were analysed separately from women, and children were analysed separately from adults. Each cohort was divided into 100 equal centiles, or sub-groups, according to BMI – for example, the 50th centile is the group where half the people in the study have a higher BMI and half have a lower BMI.

Tracking the 50th centile over time can show whether the average person in the group is normal weight or overweight at certain ages. Higher centiles, such as the 98th centile, show the BMI of the heaviest people in the group, where only 2% of people in the group had a higher BMI and 97% had a lower BMI.  
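The centile idea described above can be sketched simply: a person's centile is the share of the cohort with a lower BMI. This Python fragment is a hypothetical illustration of the concept, not the study's actual method:

```python
def bmi_centile(bmi_value, cohort_bmis):
    """Centile of a BMI value within a cohort: the percentage of the
    cohort with a lower BMI (so the 50th centile is the median)."""
    below = sum(1 for b in cohort_bmis if b < bmi_value)
    return round(100 * below / len(cohort_bmis))

# Hypothetical mini-cohort of five BMI readings
print(bmi_centile(25, [20, 22, 24, 26, 28]))  # -> 60
```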

What were the basic results?

The study found that:

  • People born in the more recent birth cohorts were more likely to be overweight at younger ages. The age at which the average (50th centile) subgroup became overweight was 41 for men born in 1946, 33 for men born in 1958, and 30 for men born in 1970. For women, the age fell from 48 to 44, then to 41 across the three birth cohorts.
  • The chances of becoming overweight in childhood increased dramatically for children born in 1991 or 2001. For children born in 1946, the chance of being overweight or obese at the age of 10 was 7% for boys and 11% for girls. For children born in 2001, the chance was 23% for boys and 29% for girls. However, the average children (50th centile) remained in the normal weight range in all five birth cohorts.
  • The biggest changes in weight were seen at the top end of the spectrum. The heaviest people from the group born in 1970 (98th centile) reached a higher BMI earlier in life than the people born in earlier birth cohorts.  
How did the researchers interpret the results?

The researchers say their results show children born after the 1980s are more at risk of being overweight or obese than those born before the 1980s.

They say this is because of their exposure to an "obesogenic environment", with easy access to high-calorie food. They say the changes in obesity over time among older cohorts also support the theory that changes to the food environment in the 1980s are behind the rise in obesity.

They go on to warn that if trends persist, modern day and future generations of children will be more overweight or obese for more of their lives than previous generations, and this could have "severe public health consequences" as they will be more likely to get illnesses such as coronary heart disease and type 2 diabetes. 

Conclusion

The study shows how, while the whole population of England has become heavier over the past 70 years, different generations have been affected in different ways. People born in 1946 were, on average, normal weight until their 40s, but this group has since seen their weight rise and they are now, on average, overweight.

By the time they reached 60, 75% of men and 66% of women from this group were overweight or obese. People born in 1946 from the heaviest cohorts, who were already overweight in early adulthood, are now likely to be obese or very obese.

For people born since 1946, the chance of being overweight as young adults, adolescents or children has been increasing. The chances of being overweight or obese by the age of 40 was 65% for men born in 1958 (45% for women) and 67% for men born in 1970 (49% for women). The chances of children born in 2001 being overweight or obese by the age of 10 are almost three times that of the children born in 1946.

We can deduce from the figures that something may have happened during the 1980s – the decade when the earliest birth cohort's average group moved from normal weight to overweight – to increase the chances of people of all ages becoming overweight or obese.

What these figures can't tell us is what that was, despite the researchers' assertion this was a change to an obesogenic environment. Still, it seems plausible that a combination of high-calorie, low-cost food and an increasingly sedentary lifestyle – both in terms of working life and recreation – contributed to this trend.

This study has some limitations. Of the five studies, four were national studies across the UK, while one (the 1991 study) was limited to one area of England, so may not be representative of the UK as a whole.

More importantly, the five studies used differing methods to record height and weight at different time points. Some records were self-reported, which means they rely on people accurately recording and reporting their own height and weight.

We know that being overweight and obese is bad for our health. These conditions increase the chances of a range of illnesses, including heart disease, diabetes and some cancers. We also know children who are overweight tend to grow up to be overweight or obese adults, so increasing their chances of illness.

This study gives us more information about who is at risk of becoming overweight and at which ages, which may help health services develop better strategies for turning the tide of obesity.

Links To The Headlines

Children of the 90s are THREE times as likely to be obese as their parents and grandparents. Mail Online, May 19 2015

UK children becoming obese at younger ages. BBC News, May 20 2015

Links To Science

Johnson W, Li L, Kuh D, Hardy R. How Has the Age-Related Process of Overweight or Obesity Development Changed over Time? Co-ordinated Analyses of Individual Participant Data from Five United Kingdom Birth Cohorts. PLOS Medicine. Published online May 19 2015

Categories: Medical News

Stem cells could provide a treatment for a 'broken heart'

Medical News - Tue, 05/19/2015 - 17:00

"Scientists believe they may have discovered how to mend broken hearts," reports the Daily Mirror.

While it may sound like the subject of a decidedly odd country and western song, the headline actually refers to damage to the heart muscle.

A heart attack occurs when the muscle of the heart becomes starved of oxygen, causing it to be damaged. If there is significant damage, the heart can become weakened and unable to effectively pump blood around the body. This is known as heart failure and can cause symptoms such as shortness of breath and fatigue.

The heart contains "dormant" stem cells, and researchers want to learn more about them to work out ways to get them to help repair damaged heart tissue.

In this new laboratory and animal study, researchers identified a characteristic genetic "signature" of adult mouse heart stem cells. This led to them being more easily identified than they have been previously, making them easier to "harvest" for study.

Injections of these cells into damaged mouse hearts were shown to improve heart function, even though very few of the donor cells remained in the heart.

These findings will help researchers to study these cells better, for example investigating whether they could be chemically triggered to repair the heart without removing them first. While the hope is that this research could lead to treatments for human heart damage, as yet the results are just in mice.

The researchers also note that they need to find out whether human hearts have the equivalent cells.

Where did the story come from?

The study was carried out by researchers from Imperial College London and other UK and US universities. It was funded by the British Heart Foundation, European Commission, European Research Council and the Medical Research Council, with some of the researchers additionally supported by the UK National Heart and Lung Institute Foundation and Banyu Life Science Foundation International.

The study was published in the peer-reviewed scientific journal Nature Communications. It is open access, meaning it can be read for free online.

The Mirror’s main report covers the story reasonably, but one of its subheadings – that scientists have identified a protein that if injected can stimulate heart cell regeneration – is not quite right. The researchers have not yet been able to utilise a protein to stimulate heart regeneration. They have just used a specific protein on the surface of the stem cells to identify the cells. So it was the cells, and not the protein, that were used in regeneration.

The Daily Telegraph’s coverage of the study is good and includes some useful quotes from the lead researcher Professor Michael Schneider. The article also makes it clear that this study only involved mice.

What kind of research was this?

This was laboratory and animal research studying the adult stem cells in mice that can develop into heart cells.

A number of diseases cause (or are caused by) damage to the heart. For example, heart attacks occur when some heart muscle cells do not get enough oxygen and die – usually due to a blockage in the coronary arteries that supply the heart muscle with oxygen-rich blood. There are "dormant" stem cells in the adult heart that can generate new heart muscle cells, but are not active enough to completely repair damage.

Researchers are starting to test ways to encourage the stem cells to repair heart damage fully. In this study, the researchers were studying these cells very closely, to understand whether all heart stem cells are the same, or whether there are different types and what they do. This information could help them to identify the right type of cells and conditions they need to fix heart damage.

This type of research is a common early step in understanding how the biology of different organs works, with the aim of eventually being able to develop new treatments for human diseases. Much of human and animal biology is very similar, but there can be differences. Once researchers have developed a good idea of how the biology works in animals, they will then carry out experiments to check to what extent this applies to humans.

What did the research involve?

The researchers obtained stem cells from adult mouse hearts and studied their gene activity patterns. They then went on to study which of these cell types could develop into heart muscle cells in the lab, and which could successfully produce heart muscle cells that could integrate into the heart muscle of living mice.

The researchers started by identifying a population of adult mouse heart cells that is known to contain stem cells. They separated these out into different groups, some of which are known to contain stem cells, then further separated each group into single cells and studied exactly which genes were active in each cell. They looked at whether the cells showed very similar gene activity patterns (suggesting they were all the same type of cell, doing the same things), or whether there were groups of cells with different gene activity patterns. They also compared these activity patterns to young heart muscle cells from newborn mice.

Once they identified a group of cells that looked like the cells that could develop into heart muscle cells, they tested whether they would be able to grow and maintain these in the lab. They also injected the cells into the damaged hearts of mice to see if they formed new heart muscle cells. They also carried out various other experiments to further characterise the cells that form new heart muscle cells.

What were the basic results?

The researchers found distinct groups of cells with different gene activity patterns. One particular group of these cells was identified as the cells that have started to develop into heart muscle cells. These cells were referred to as Sca1+ SP cells, and one of the genes they expressed produces a protein called PDGFRα, which is found on the surface of these cells. These cells grew and divided well in the lab, and the offspring cells maintained the characteristics of the original Sca1+ SP cells.

When the researchers injected samples of the offspring cells into damaged mouse hearts, they found that between 1% and 8% of the cells remained in the heart muscle tissue the day after the injection. Over time, most of these cells were lost from the heart muscle, but some remained (about 0.1% to 0.5% at two weeks).

By two weeks, some (10%) of the remaining cells were showing signs of developing into immature muscle cells. At 12 weeks, more of the remaining cells (50%) were showing signs of being muscle cells. These cells were also showing signs of being more developed and forming muscle tissue. However, there were only a few of these donor cells in each heart (5 to 10 cells). Some of the donor cells also appeared to have developed into the two sorts of cells found in blood vessels.

Mice whose hearts had been injected with the donor cells showed better heart function at 12 weeks than those who had a "dummy" injection with no cells. The size of the damaged area was smaller in those with donor cell injections, and the heart was able to pump more blood.

Further experiments showed the researchers that they could identify and separate out the cells that specifically develop into heart muscle cells by looking for the PDGFRα protein on their surface. The cells identified in this way grew well in the lab, and when injected into the heart they could integrate into the heart muscle and showed signs of developing into muscle cells after two weeks.

How did the researchers interpret the results?

The researchers concluded that they had developed a way to identify and separate out a specific subset of adult mouse heart stem cells that can generate new heart muscle cells. They say that, at the very least, this will help them to study these cells in mice more easily. If a human equivalent of these cells exists, they may also be able to use this knowledge to obtain stem cells from adult heart tissue.

Conclusion

This laboratory and animal study has identified a characteristic genetic "signature" of adult mouse heart stem cells. This has allowed them to be more easily identified than they have been previously. Injections of these cells have also been shown to be able to improve heart function after heart muscle damage in mice.

These findings will help researchers to study these cells more closely in the lab and investigate how they can prompt them to repair damaged heart muscle, possibly without removing them from the heart first. While the hope is that this research could lead to treatments for human heart damage, for example after a heart attack, as yet the results are just in mice. The researchers themselves note that they now need to find out whether human hearts have the equivalent cells.

Many researchers are working on the potential uses of stem cells to repair damaged human tissue, and studies such as this are important parts of this process.

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Stem cells could be used to repair previously irreversible damage from heart attacks, say scientists. Daily Mirror, May 18 2015

'Heartbreak' stem cells could repair damage of heart attack. The Daily Telegraph, May 19 2015

Links To Science

Noseda M, Harada M, McSweeney S, et al. PDGFRα demarcates the cardiogenic clonogenic Sca1+ stem/progenitor cell in adult murine myocardium. Nature Communications. Published online May 18 2015

Categories: Medical News

Bioengineering advances raise fears of 'home-brew heroin'

Medical News - Tue, 05/19/2015 - 15:00

The Daily Mirror carries the alarming headline that, "Heroin made in home-brew beer kits could create epidemic of hard drug abuse". It says scientists are "calling for urgent action to prevent criminal gangs gaining access to [this] new technology" following the results of a study involving genetically modified yeast.

This study did not actually produce heroin, but an important intermediate chemical in a pathway that produces benzylisoquinoline alkaloids (BIAs). BIAs are a group of plant-derived chemicals that include opioids, such as morphine.

BIAs have previously been made from similar intermediate chemicals in genetically engineered yeast. Researchers hope that by joining these two parts of the pathway, they will get yeast that can produce BIAs from scratch. This could be cheaper and easier than current production methods, which often still involve extraction from plants.

But because morphine can be refined into heroin using standard chemical techniques and yeast can be grown at home, this has led to concerns about the potential misuse of this discovery.

So, will this lead to a rash of "Breaking Bad"-style heroin labs in criminals' garages and spare rooms? We doubt it – at least in the near future. A yeast strain that can produce morphine has not yet been made; it would need to be specially genetically engineered, and could not simply be the unmodified home-brewing yeast available off the shelf.

Still, it may be worth raising awareness about the potential need for regulation of opioid-producing strains. 

Where did the story come from?

The study was carried out by researchers from the University of California and Concordia University in Canada.

It was funded by the US Department of Energy, the US National Science Foundation, the US Department of Defense, Genome Canada, Genome Quebec, and a Canada Research Chair.

The study was published in the peer-reviewed journal, Nature Chemical Biology. It is open access, meaning it can be read online for free.

The Daily Mirror's reporting takes a sensationalist angle – the picture caption, for example, reads: "Home-brewed heroin is on the rise, scientists warn". No heroin was made in this study, and complete opioid-producing strains of yeast have not been made yet – home-brewing heroin from yeast is not yet possible, much less on the rise.

The possibility of home-brewing comes from a commentary on the article in Nature, which discusses the findings of this and related studies. This commentary also discusses the potential legal implications and the ways the risks could be reduced – for example, scientists could produce only yeast strains that make weaker opioids. But its authors acknowledge that the risk of criminals making opiate-producing yeast strains themselves is low.

The Guardian and BBC News take a slightly more restrained approach, suggesting that home-brew heroin may be a problem in the future but it certainly is not an issue now. The BBC also points out that producing medicines in microbes is not a new thing. 

What kind of research was this?

This laboratory research studied whether a group of chemicals called benzylisoquinoline alkaloids (BIAs) could be produced in yeast. BIAs include a range of chemicals used as drug treatments in humans. This includes opioids used for pain relief, as well as antibiotics and muscle relaxants.

Opioids are among the oldest known drugs, first identified as natural products of the opium poppy. Morphine is an opioid derived from poppies; it, along with other derivatives and man-made versions of opioids, is used to treat pain.

Opioids also produce euphoria and can be addictive. The illegal drug heroin is an opiate produced by chemically processing morphine to make it more potent.

The researchers say many of these compounds are still made from plants such as the opium poppy, as they are chemically very complex and therefore difficult and expensive to make from scratch in the lab.

However, now we know much more about how the chemicals are made in plants, it may be possible to genetically engineer microbes in the lab to produce these chemicals in industrial quantities.

The researchers say the yeast S. cerevisiae – sometimes known as baker's or brewer's yeast – has been used to produce BIAs in the lab from intermediate chemicals in the BIA production pathway. The earlier steps in the pathway have not yet been managed in yeast, although they have in bacteria.

In this study, the researchers wanted to see if they could produce the intermediate chemical (S)-reticuline in yeast. This has been tried before, but was not successful. 

What did the research involve?

The researchers knew they needed one particular type of protein called a tyrosine hydroxylase, which would work in yeast to perform the first step in the process of making (S)-reticuline.

They developed a system to allow them to quickly screen a large group of known tyrosine hydroxylases to identify one that would work in yeast. The tyrosine hydroxylase is needed to produce the intermediate chemical dopamine.

The researchers then needed other proteins that convert dopamine and another chemical already present in yeast into another intermediate chemical, and then carry out the other chemical steps needed to form (S)-reticuline. They identified proteins they needed for these stages from the opium poppy and the Californian poppy.

Finally, they genetically engineered yeast cells to produce tyrosine hydroxylase and all of the other proteins needed, and tested whether the yeasts were able to produce (S)-reticuline.  

What were the basic results?

The researchers were able to identify tyrosine hydroxylase from the sugar beet that worked in yeast, allowing them to produce the intermediate chemical dopamine. They used genetic engineering to make a version of this protein in yeast that worked even better than the original one.

They were also able to produce the other proteins they needed in yeast. A yeast strain producing all of these proteins was able to produce (S)-reticuline, the chemical intermediate needed in the production of opioids. 

How did the researchers interpret the results?

The researchers concluded that coupling their work with the work already done, and improving on the yield of the process, "will enable low-cost production of many high-value BIAs".

They say that, "Because of the potential for illicit use of these products, including morphine and its derivatives [such as heroin], it is critical that appropriate policies for controlling such strains be established so that we garner the considerable benefits while minimising the potential for abuse." 

Conclusion

This laboratory study has successfully managed to produce an important intermediate chemical in the pathway that produces benzylisoquinoline alkaloids (BIAs), a group of plant-derived chemicals that include opioids.

BIAs such as morphine have previously been made from similar intermediate chemicals in genetically engineered yeast, but this is the first time the earlier stages have been completed successfully in yeast. The researchers hope that by joining these two parts of the pathway, they will get yeast that can produce BIAs from scratch.

This study has not completed this final step, however. Researchers will need to test this before they know that it will be successful. They acknowledge that further optimisation of their method to produce more of the intermediate chemical is needed before it could be used to produce BIAs.

This study has generated media coverage speculating about the possibility of "home-brew heroin" creating an "epidemic of hard drug use". But the researchers did not produce heroin or any other opioid, only an intermediate chemical. These yeasts have been specially genetically engineered, and the experiments are not the sort of thing most people are going to be able to easily replicate in their garage.

While the likelihood of such strains being successfully made for criminal use seems very small, at least in the short to medium term, criminals can be resourceful. Considering the potential implications of this research and whether policies are needed, both nationally and internationally, may be prudent.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Home-brewed heroin? Scientists create yeast that can make sugar into opiates. The Guardian, May 18 2015

'Home-brewed morphine' made possible. BBC News, May 19 2015

Heroin made in home-brew beer kits could create epidemic of hard drug abuse. Daily Mirror, May 18 2015

Links To Science

DeLoache WC, Russ ZN, Narcross L, et al. An enzyme-coupled biosensor enables (S)-reticuline production in yeast from glucose. Nature Chemical Biology. Published online May 18 2015

Categories: Medical News

No proof orange juice boosts brain power

Medical News - Mon, 05/18/2015 - 16:00

"Drinking orange juice every day could improve brain power in the elderly, research shows," the Mail Online reports. Despite the encouraging words from the media, the small study this headline is based on does not provide strong evidence that an older person would see any noticeable difference in their brain power if they drink orange juice for two months.

The study involved 37 healthy older adults, who were given orange juice or orange squash daily for eight weeks before switching to the other drink for the same amount of time. The 100% orange juice contains more flavonoids, a type of plant compound that has been suggested to have various health benefits.

The researchers gave participants a whole battery of cognitive tests before and after each eight-week period. Both drinks caused very little change on any of the test results and were not significantly different from each other on any of the tests individually.

The researchers also carried out analyses where they combined the test results and looked at the statistical relationships between the drink given and when the test was given. On this occasion, they did find a significant result – overall cognitive function (the pooled result of all the tests combined) was better after the juice than after the squash.

But the overall pattern of the results doesn't seem very convincing. This study does not provide conclusive evidence that drinking orange juice has an effect on brain function.  

Where did the story come from?

The study was carried out by researchers from the University of Reading, and was funded by Biotechnology and Biological Sciences Research Council grants and the Florida Department of Citrus, also known as Florida Citrus.

Florida Citrus is a government-funded body "charged with the marketing, research and regulation of the Florida citrus industry", a major industry in the state. Florida Citrus was reported to have helped design the study.

The study was published in the peer-reviewed American Journal of Clinical Nutrition.

The Mail Online took the study at face value without subjecting it to any critical analysis. Looking into the research reveals rather unconvincing evidence that drinking orange juice would have any effect on a person's brain function.  

What kind of research was this?

This was a randomised crossover trial that aimed to compare the effects of 100% orange juice, which has high flavanone content, and orange-flavoured cordial, which has low flavanone content, on cognitive function in healthy older adults.

Flavonoids are pigments found in various plant foods. It has been suggested they have various health benefits – for example, some studies have suggested that high consumption of flavonoids can have beneficial effects on cognitive function. Flavanones are the specific type of flavonoids found in citrus fruits. This trial investigated the effect of flavanones in orange juice.

This was a crossover trial, meaning the participants acted as their own control, taking both the high- and low-flavanone drinks in random order, a few weeks apart. The crossover design effectively increases the sample size tested, and is appropriate if the interventions are not expected to have a lasting impact on whatever outcome is being tested.

What did the research involve?

The study recruited 37 older adults (average age 67) who were given daily orange juice or orange squash for eight weeks in a random order, with a four-week "washout" period in between. They were tested to see whether the drinks differed in their effect on cognitive function.

All participants were healthy, without significant medical problems, did not have dementia and had no cognitive problems. In random order, they were given:

  • 500ml 100% orange juice containing 305mg natural flavanones daily for eight weeks
  • 500ml orange squash containing 37mg natural flavanones daily for eight weeks

The drinks contained roughly the same calories. The participants were not told which drink they were drinking, and the researchers assessing the participants also did not know.

Before and after each of the eight-week periods, participants visited the test centre and had data collected on height, weight, blood pressure, health status and medication. They also completed a large battery of cognitive tests assessing executive function (thinking, planning and problem solving) and memory.

The researchers analysed change in cognitive performance from baseline to eight weeks for each drink, and compared the effects of the two drinks. 

What were the basic results?

On the whole, the two drinks caused very minimal change from baseline on any of the individual tests. There was no statistically significant difference between the two drinks when comparing score change from baseline on any of the tests individually.

There was only a single significant observation when looking at the individual tests at the end of treatment (rather than change from baseline). Scores on a test of immediate episodic memory were higher eight weeks after drinking 100% orange juice compared with squash (9.6 versus 9.1). However, when the change from baseline was compared, there was no significant difference between the drinks.

The researchers also carried out a statistical analysis looking at the interactions between the drink given and the testing occasion. In this analysis, they did find an interaction between drink and testing occasion for global cognitive function (all test results combined): overall, global cognitive function was significantly better at the eight-week visit after orange juice intake. 

How did the researchers interpret the results?

The researchers concluded that, "Chronic daily consumption of flavanone-rich 100% orange juice over eight weeks is beneficial for cognitive function in healthy older adults."

They further say that, "The potential for flavanone-rich foods and drinks to attenuate cognitive decline in ageing and the mechanisms that underlie these effects should be investigated." 

Conclusion

Overall, this small crossover study does not provide conclusive evidence that drinking orange juice has an effect on brain function.

A wide variety of cognitive tests were performed in this study before and after the two drinks (orange juice and squash). The individual test results do not indicate any large effects. Notably, both drinks caused very little change from baseline on any of the test results, and were not significantly different.

The only significant results were found for overall cognitive function when combining test results and looking at statistical interactions. The fact a consistent effect wasn't seen across individual measures and the different analyses means the results are not very convincing.

The trial is also quite small, including only 37 people. These participants were also a specific sample of healthy older adults who volunteered to take part in this trial, and none of them had any cognitive impairment, so the results may not be applicable to other groups.

While the participants were not told what they were drinking and the drinks were given in unlabelled containers, they did have to dilute them differently. This and the taste of the drinks may have meant the participants could tell the drinks apart. The researchers did ask participants what they thought they were drinking, and although about half said they did not know, most of those who gave an opinion (16 out of 20) got it right.

The study also only compared high-flavanone orange juice with a low-flavanone orange drink. There was no comparison with a flavanone-free drink, or with foods or drinks containing other types of flavonoid.

The possible health benefits of flavonoids, or flavanones specifically, will continue to be studied and speculated about. However, this study can't conclusively tell us that they have an effect on brain power.

A good rule of thumb is that what's good for the heart is also good for the brain – taking regular exercise, eating a healthy diet, avoiding smoking, maintaining a healthy weight, and drinking alcohol in moderation.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Two daily glasses of orange juice 'boosts elderly brainpower': Significant improvements can be seen in less than two months, research shows. Mail Online, May 16 2015

Orange juice 'improves brain function'. The Daily Telegraph, May 15 2015

Links To Science

Kean RJ, Lamport DJ, Dodd GF, et al. Chronic consumption of flavanone-rich orange juice is associated with cognitive benefits: an 8-wk, randomized, double-blind, placebo-controlled trial in healthy older adults. The American Journal of Clinical Nutrition. Published online January 14 2015

Categories: Medical News

Drug combination for cystic fibrosis looks promising

Medical News - Mon, 05/18/2015 - 15:00

"A 'groundbreaking' new therapy for cystic fibrosis could hugely improve patients' quality of life," The Daily Telegraph reports after a combination of two drugs – lumacaftor and ivacaftor – was found to improve lung function.

The headline is prompted by a trial of a new treatment protocol for cystic fibrosis, a genetic condition caused by a mutation in a gene that normally produces a protein controlling the salt balance of cells. The faulty protein leads to thick mucus build-up in the lungs and other organs, causing a persistent cough, poor weight gain and regular lung infections.

The prognosis for cystic fibrosis has improved dramatically over the past few decades, but the condition is still life-limiting. This new drug combination works together to make the faulty cell protein work better.

More than 1,000 people with cystic fibrosis were given the new protocol or a placebo for 24 weeks. The treatment led to meaningful improvements in lung function compared with the placebo. It also reduced the number of lung infections, improved quality of life, and helped people gain weight.

Further study of the drugs' effects in the longer term will be needed, in addition to collecting more information on side effects.

But this treatment won't work for all people with cystic fibrosis. There are various gene mutations, and this treatment only targeted the most common one, which affects around half of people with the condition.  

Where did the story come from?

The study was carried out by researchers from various international institutions, including the University of Queensland School of Medicine in Australia, and Queens University Belfast.

There were various sources of funding, including Vertex Pharmaceuticals, which makes the new treatment.

The study was published in the peer-reviewed New England Journal of Medicine.

The UK media provided balanced reporting of the study, including the caution that the treatment would only work in around half of people with cystic fibrosis. Researchers were quoted as saying that although they hope this could improve survival for people with cystic fibrosis, they don't know this for sure.

However, some of the reporting focusing on the quality of life improvements does not take note of the researchers' caution that, overall, these improvements fell short of what was considered meaningful.

The media also debated the cost of the treatment protocol. The Guardian reports one year's course of lumacaftor alone costs around £159,000. The new treatment protocol is being assessed by the National Institute for Health and Care Excellence (NICE) to see if it is a cost effective use of resources.  

What kind of research was this?

This was a randomised controlled trial (RCT) aiming to investigate the effects of a new treatment for cystic fibrosis.

Cystic fibrosis is a hereditary disease caused by mutations in a gene called cystic fibrosis transmembrane conductance regulator (CFTR). The protein made by the CFTR gene affects the balance of chloride and sodium inside the cells.

In people with cystic fibrosis, the CFTR protein does not work. This causes mucus secretions in the lungs and other parts of the body to be too thick, leading to symptoms such as a persistent cough and frequent chest infections.

There is no cure for cystic fibrosis, and current management focuses on breaking down mucus and controlling the symptoms with treatments such as physiotherapy and inhaled medicines.

We have two copies of all of our genes – one inherited from each parent. To develop cystic fibrosis, you need to inherit two abnormal copies of the CFTR gene. One in 25 people carry a copy of the abnormal CFTR gene. If two people carrying an abnormal gene have a child and the child receives the abnormal gene from both parents, they will develop cystic fibrosis.

This trial looked at the effects of a treatment that helps the abnormal CFTR protein work better, called lumacaftor. It was tested in combination with another treatment called ivacaftor, which also boosts the activity of CFTR proteins.

There are various different types of CFTR gene mutations. One, called Phe508del, is the most common and affects 45% of people with the condition. Lumacaftor specifically corrects the abnormality caused by the Phe508del mutation, so this trial only included people with this mutation. An RCT is the best way of examining the safety and effectiveness of a new treatment. 

What did the research involve?

This study reports the pooled results of two identical RCTs that have investigated the effects of two different doses of lumacaftor, in combination with ivacaftor, for people with cystic fibrosis who have two copies of the Phe508del CFTR mutation.

The study recruited 1,122 people aged 12 or older; 559 in one of the trials and 563 in the other. Participants in both trials were randomly assigned to one of three study groups:

  • 600mg of lumacaftor every 24 hours in combination with 250mg of ivacaftor every 12 hours
  • 400mg of lumacaftor every 12 hours in combination with 250mg of ivacaftor every 12 hours
  • placebo pills every 12 hours

The placebo pills looked just like the lumacaftor and ivacaftor pills and were taken in the same way, so researchers and participants could not tell whether they were taking placebo or not. All treatments were taken for 24 weeks.

The main outcome examined was how well the participants' lungs worked, measured as a change in percentage of predicted forced expiratory volume (FEV1). This is the amount of air that can be forcibly exhaled in the first second after a full in-breath, which provides a well-validated method of assessing lung health and function.

The percentage of predicted FEV1 shows how much you exhale as a percentage of what you would be expected to, based on your age, sex and height.

The researchers also looked at the change in body mass index (BMI) and in people's quality of life in terms of their lung function, as reported in the patient-reported Cystic Fibrosis Questionnaire – Revised (CFQ-R).

The study analysis included all patients who received at least one dose of the study drug, which was 99% of all participants.  

What were the basic results?

At the start of the study, the average FEV1 of participants was 61% of what was predicted (what it ought to be). There were no differences between the randomised groups in terms of age, sex, lung function, BMI or current cystic fibrosis treatments used.

Lumacaftor-ivacaftor significantly improved how well the participants' lungs worked compared with placebo in both trials, and at both doses. The change in percentage of predicted FEV1 ranged between 2.6% and 4.0% across the two trials compared with placebo over the 24 weeks.

There were also significant improvements compared with placebo in BMI (range of improvement 0.13 to 0.41 units), and respiratory quality of life (1.5 to 3.9 points on the CFQ-R). There was also a reduced rate of lung infections in the treatment groups.

There was similar reporting of side effects across the two treatment groups and the placebo groups (around a quarter of participants experienced side effects). The most common adverse event was lung infection.

However, the proportion of participants who stopped taking part in the study as a result of side effects was slightly higher among the drug treatment groups (4.2%) compared with placebo groups (1.6%).

The specific reasons for discontinuation varied between individuals – some stopped because of shortness of breath or wheezing, some because of blood in their sputum, some because of a rash, and so on. 

How did the researchers interpret the results?

The researchers concluded that, "These data show that lumacaftor in combination with ivacaftor provided a benefit for patients with cystic fibrosis [who carried two copies of] the Phe508del CFTR mutation." 

Conclusion

This trial has demonstrated that this new treatment combination could be effective in improving lung function for people with cystic fibrosis who have two copies of the common Phe508del CFTR mutation.

The trial has many strengths, including its large sample size and the fact it captured outcomes at six months for almost all participants. The improvements in lung function were seen while the participants continued to use their standard cystic fibrosis treatments. As the researchers suggest, this indicates the treatment could be a beneficial add-on to normal care to further improve symptoms.

The results seem very promising, but there are limitations to consider. Though the lung function improvements were said to be clinically meaningful, improvements in quality of life relating to lung function fell short of what is considered clinically meaningful (four points and above on the CFQ-R scale).

The trial only included people with well-controlled cystic fibrosis, and effects of the treatment might not be as good for people with poorer disease control. The treatment combination would also only be suitable for people with the Phe508del CFTR mutation.

This trial only included people with two copies of this mutation, which is only the case in around 45% of people with the condition. Whether the treatment would benefit people who carry one copy of the Phe508del mutation and a different second CFTR mutation is not yet clear, and people with two non-Phe508del mutations would not be expected to benefit from this treatment.

The effects of this treatment combination will need to be studied in the longer term, beyond six months – for example, to see whether it could prolong life. Further information will need to be collected on side effects and how commonly they cause people to stop treatment.

Though this treatment targets the abnormal protein that causes symptoms, as one of the study authors notes in The Guardian, it is not a cure. The lead researcher, Professor Stuart Elborn, was quoted as saying: "It is not a cure, but it is as remarkable and effective a drug as I have seen in my lifetime."

Overall, the results of this trial show promise for this new treatment for people with cystic fibrosis who carry two copies of this specific gene mutation.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

'Groundbreaking' new treatment for cystic fibrosis. The Daily Telegraph, May 17 2015

Cystic fibrosis treatment found to improve lives of sufferers in trials. The Guardian, May 17 2015

Cystic fibrosis drug offers hope to patients. BBC News, May 17 2015

New treatment brings hope on cystic fibrosis: Thousands of patients could have lives transformed by new drug for most common form of the condition. Mail Online, May 18 2015

Links To Science

Wainwright CE, Elborn JS, Ramsey BW, et al. Lumacaftor–Ivacaftor in Patients with Cystic Fibrosis Homozygous for Phe508del CFTR. The New England Journal of Medicine. Published online May 17 2015

Categories: Medical News

Does holding your breath during an injection make it less painful?

Medical News - Fri, 05/15/2015 - 15:00

"Hate injections? Holding your breath can make the pain of jabs more bearable," the Mail Online reports. A team of Spanish researchers mechanically squeezed the fingernails of 38 willing volunteers to cause them pain.

For one round of experiments, the group were told to hold their breath before and during the pain squeeze. In the second round, they had to breathe in slowly while the pain was applied. Those holding their breath reported slightly lower pain ratings overall than those breathing in slowly.

The hypothesis underpinning this technique is that holding your breath increases blood pressure, which in turn reduces nervous system sensitivity, meaning you have a reduced perception of any pain signals.

But before you try this out, it's worth saying the pain perception differences were very small – a maximum 0.5 point difference on a scale from 0 to 10.

Also, the pain scores under the experimental breathing styles weren't compared with those of people breathing normally, so we don't actually know whether either was beneficial overall at reducing pain perception, only how they compared with one another.

We wouldn't advise changing your breathing habits in an attempt to avoid pain based on the results of this study.  

Where did the story come from?

The study was carried out by researchers from University of Jaén in Spain, and was funded by the Spanish Ministry of Science and Innovation.

It was published in the peer-reviewed journal, Pain Medicine.

Generally, the Mail Online reported the story accurately. In their article, the lead study author explained that holding your breath won't work for an unexpected injury, such as standing on a pin or stubbing a toe. But it might work if you start holding your breath before the pain kicks in – for example, anticipating the sting of an injection.

The Mail added balance by indicating other scientists were critical of the findings. They said the pain reduction was very small, and pointed out that holding your breath might make your muscles more tense, which could worsen pain in some circumstances, such as childbirth.  

What kind of research was this?

This human experimental study looked at whether holding your breath affects pain perception.

The researchers explain that holding your breath immediately after a deep inhalation slows your heart rate and increases your blood pressure. This stimulates pressure-sensing receptors called baroreceptors to send signals to the brain to reduce blood pressure.

This happens through reduced activity of the sympathetic nervous system, which is involved in the "fight or flight" response to danger. When working as it should, this feedback loop ensures blood pressure doesn't get too high.

The researchers say the dampening down of this part of the nervous system might also reduce sensitivity to pain. In this study, the researchers wanted to test their theory that increasing your blood pressure through holding your breath would reduce your perception of pain.   

What did the research involve?

Researchers used a machine to squeeze the fingernails of 38 healthy adult volunteers at different pressures to stimulate pain. Before the squeeze, the group were told to inhale slowly or to hold their breath after a deep breath in.

The researchers analysed ratings of pain in the two breathing styles to see if there was a difference. Volunteers were pre-tested to find a nail squeeze pressure they found painful and three personalised pain intensity thresholds.

Two breathing styles were tested and compared in each person. One involved breathing in slowly for at least seven seconds while the pain was applied. The other involved inhaling deeply, holding your breath while the pain was applied, before exhaling for seven seconds without actively forcing the breath out.

All participants practised both breathing styles before the experiment began, until they were confident they could perform them properly. Once they had established their breathing, each volunteer had one fingernail mechanically squeezed for five seconds. After the squeeze, participants could breathe normally.

They were asked to rate the pain on a Likert scale ranging from 0 (not at all painful) to 10 (extremely painful). The experiment was repeated on the same person using three pain intensity thresholds for each breathing condition.

Volunteers knew the experiment was about pain and breathing, but they were not told which breathing experiment the study team expected to work. 

What were the basic results?

Ratings of pain intensity were consistently higher in the slow inhalation condition than in the breath-holding condition. This held true for each of the three pain intensities tested.

Both breathing styles slowed heart rates, but this happened a little quicker, and the difference was larger, in the breath-holding condition. 

How did the researchers interpret the results?

The researchers concluded that, "During breath-holding, pain perception was lower relative to the slow inhalation condition; this effect was independent of pain pressure stimulation."

On the implications of their findings, they said: "This simple and easy-to-perform respiratory manoeuvre may be useful as a simple method to reduce pain in cases where an acute, short-duration pain is present or expected (e.g. medical interventions involving needling, bone manipulations, examination of injuries, etc.)." 

Conclusion

This small human experimental study used a fingernail-squeezing machine to cause pain to 38 willing volunteers. It found those instructed to hold their breath before the pain stimulus consistently rated their pain lower than those told to breathe slowly.

The difference between the two breathing conditions was very small, although statistically significant. The biggest pain difference seen was less than 0.5 points on a 10-point scale. How important this is to doctors or patients is debatable.

Similarly, the study compared two artificial breathing conditions against one another. They did not compare these against pain scores in people breathing normally throughout. This would have been useful, as it would give us an idea of whether one or both of the breathing types were any better than breathing normally.

On this point, the Mail Online reported that, "On a scale of 1 to 10, the pain experienced by volunteers fell by half a point from 5.5 to 5 when they held their breath". It wasn't completely clear whether they were talking about the difference between the two groups, or the absolute pain reduction experienced related to normal breathing.

This figure wasn't clear in the published research, so may have come from an interview. If true, it again highlights the rather small reduction in pain found.

The volunteers knew they were taking part in a pain study related to breathing. Participants' general expectations about the likely effects of the two breathing conditions therefore might have biased the results. Larger studies involving study blinding and randomisation would reduce the chance of this bias and others.

Overall, this study shows that changing your breathing pattern might affect your pain perception – but at such a small level that it might not be useful in any practical way.

There may also be dangers in holding your breath in an attempt to control pain. For example, you might feel lightheaded and pass out, or tense your muscles, which can make injections more difficult.

If you are worried about having an injection, tell the health professional beforehand. They can take steps to make the experience less distressing.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hate injections? Holding your breath can make the pain of jabs more bearable. Mail Online, May 14 2015

Links To Science

Reyes del Paso G, de Guevara ML, Montoro CI. Breath-Holding During Exhalation as a Simple Manipulation to Reduce Pain Perception. Pain Medicine. Published online April 30 2015

Categories: Medical News

Single mothers have 'worse health in later life'

Medical News - Fri, 05/15/2015 - 14:30

The Daily Telegraph today tells us that: "Single mothers in England [are] more likely to suffer ill health because their families 'do not support them'."

This is a half-truth. The large international study behind the headline – involving 25,000 people from England, the US and 13 other European countries – found a link between single motherhood between the ages of 16 and 49 and worse health in later life. But it did not find this was because families do not support them.

This claim appears to be prompted by a trend the researchers spotted in the study: health risks were more pronounced in northern European countries and the US, while in southern European countries they were less pronounced.

The researchers speculated that southern European countries have more of a tradition of informal support, where grandparents, aunts, uncles, cousins and so on all pitch in with childcare duties. Or, as the proverb puts it, "it takes a village to raise a child".

While this hypothesis is plausible, it is also unproven, and was not backed up with any new robust data on social support as part of the study.

The study was very large and diverse, so the link between single motherhood and later ill health appears real. However, the reasons and causes behind it are still to be worked out.

 

Where did the story come from?

The study was carried out by researchers from universities in the US, China, the UK and Germany, and was funded by the US National Institute on Aging.

The study was published in the peer-reviewed Journal of Epidemiology & Community Health.

The media reporting was only partially accurate, as most outlets took the claim about social support at face value. The link between single motherhood and later ill health was supported by the body of this study, but the study did not collect any information on social support, so this explanation, although plausible, was not based on direct evidence.

 

What kind of research was this?

The study investigated whether single motherhood before the age of 50 was linked to poorer health later in life, and whether the link was stronger in countries with weaker "social [support] safety nets". To do this, the researchers analysed data collected from past cohort and longitudinal studies across 15 countries.

The researchers say single motherhood is known to be linked to poorer health, but it wasn't known whether this link varied between countries.

Analysing previously collected data is a practical and legitimate study method. A limitation is that the original information was collected for specific reasons that usually differ from the research aims when coming to use it later. This can mean some information that would ideally be analysed is not there. In this study, the researchers couldn’t get information on social support networks, which they thought might explain some of their results.

 

What did the research involve?

The research team analysed health and lifestyle information on single mothers under 50 collected from existing large health surveys. The single mothers' health was documented into older age and compared across 15 countries.

Data was available from 25,125 women aged over 50 who participated in the US Health and Retirement Study; the English Longitudinal Study of Ageing; or the Survey of Health, Ageing and Retirement in Europe (SHARE). Thirteen of the 21 countries represented by SHARE (Denmark, Sweden, Austria, France, Germany, Switzerland, Belgium, The Netherlands, Italy, Spain, Greece, Poland, Czech Republic) had collected relevant data. With the US and England on board, this gave 15 countries for final analysis.

The researchers used data on number of children, marital status and any limitations on women's capacity for routine daily activities (ADL), such as personal hygiene and getting dressed, and instrumental daily activities (IADL), such as driving and shopping. Women also rated their own health.

Single motherhood was classified as having a child under the age of 18 and not being married, rather than living with a partner.

 

What were the basic results?

Single motherhood between the ages of 16 and 49 was linked to poorer health and disability in later life in several different countries. The risks were highest for single mothers in England, the US, Denmark and Sweden.

Overall 22% of English mothers had experienced single motherhood before age 50, compared with 33% in the US, 38% in Scandinavia, 22% in western Europe and 10% in southern Europe.

While single mothers had a higher risk of poorer health and disability in later life than married mothers, associations varied between countries.

For example, risk ratios for ADL limitations were significant in England, Scandinavia and the US but not in western Europe, southern Europe and eastern Europe.

Women who became single mothers before the age of 20, who were single mothers for more than eight years, or whose single motherhood resulted from divorce or non-marital childbearing had a higher risk.

 

How did the researchers interpret the results?

The researchers concluded that: "Single motherhood during early adulthood or mid-adulthood is associated with poorer health in later life. Risks were greatest in England, the US and Scandinavia."

Although they didn't have good data to back it up, they suggested social support and networks may partially explain the findings. For example, regions such as southern Europe, which the researchers say have a strong cultural emphasis on family bonds, did not show higher health risks.

They add: "Our results identify several vulnerable populations. Women with prolonged spells of single motherhood; those whose single motherhood resulted from divorce; women who became single mothers at young ages; and single mothers with two or more children, were at particular risk."

 

Conclusion

This large retrospective study of over 25,000 women linked single motherhood between the ages of 16 and 49 with worse health in later life. This is not a new finding. What was new was that the link varied across different countries. Risks were estimated as greatest in England, the US and Scandinavia for example, but were less consistent in other areas of Europe.

The research team thought this might be caused by differences in how social networks support single mothers in different countries, such as being able to rely on extended families. But they had no data to directly support this. They did not have information on, for example, socioeconomic status, social support or networks during single motherhood, so could not analyse whether these were important causes. They also did not know whether any of the women they classed as single were actually in non-marital or same-sex partnerships, which may have affected the results.

Health status in later life is likely to be linked to a complex set of interrelated factors. Being a single mum may be one; social networks might be another. But based on this study we don't yet know for sure, or the mechanisms by which this could lead to worse health.

Studies that collect information on levels of social support alongside health outcomes for single women would be able to tell us whether this is the likely cause, but getting this data may not be easy.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Single mothers more likely to suffer ill health, study finds. The Independent, May 14 2015

'Health risk' for single mothers. Mail Online, May 15 2015

Single mothers in England more likely to suffer ill health because their families 'do not support them'. The Daily Telegraph, May 14 2015

Links To Science

Berkman LF, Zheng Y, Glymour MM, et al. Mothering alone: cross-national comparisons of later-life disability and health among women who were single mothers. Journal of Epidemiology and Community Health. Published online May 14 2015

Categories: Medical News

Cannabis-extract pills 'not effective' for dementia symptoms

Medical News - Thu, 05/14/2015 - 17:00

"Cannabis pills 'do not help dementia sufferers'," reports The Daily Telegraph. Previous research suggested one of the active ingredients in cannabis – tetrahydrocannabinol (THC) – can have effects on the nervous system and brain, such as promoting feelings of relaxation.

In this study, researchers wanted to see if THC could help relieve some of the behavioural symptoms of dementia, such as mood swings and aggression.

They set up a small trial involving 50 dementia patients with behavioural symptoms. They found taking a pill containing a low dose of THC for three weeks did not reduce symptoms any more than a dummy pill. Other studies suggested the substance might have benefits, but these studies were not as well designed as this trial.

The study was small, which reduces its ability to detect differences between groups. But the trend was for a greater reduction of symptoms in the placebo group than the THC group, suggesting that THC would not be expected to be better, even with a larger group.

People taking the THC pills did not show more of the typical side effects expected, such as sleepiness or dizziness. This led researchers to suggest the dose of THC may need to be higher to be effective. Further studies would be needed to determine whether a higher dose would be effective, safe and tolerable. 

Where did the story come from?

The study was carried out by researchers from Radboud University Medical Center and other research centres in the Netherlands and the US.

It was funded by the European Regional Development Fund and the Province of Gelderland in the Netherlands. The study drug was provided by Echo Pharmaceuticals, but they did not provide any other funding or have any role in performing the study.

The study was published in the peer-reviewed medical journal, Neurology.

The Daily Telegraph covered this story well.  

What kind of research was this?

This was a randomised controlled trial (RCT) looking at the effects of tetrahydrocannabinol (THC), one of the active ingredients in cannabis, on neuropsychiatric symptoms in people with dementia.

This was a phase II trial, meaning it is a small-scale test in people with the condition. It aims to check safety and get an early indication of whether the drug has an effect.

The researchers say they had also carried out a similar trial with a lower dose of THC (3mg daily), which did not have an effect, so they increased the dose in this trial to 4.5mg daily.

People with dementia often have neuropsychiatric symptoms, such as being agitated or aggressive, delusions, anxiety, or wandering. 

The researchers report that existing drug treatments for dementia have a delicate balance of benefits and harms, and non-drug treatments are therefore preferred, but they have limited evidence of effectiveness and may be difficult to put into practice.

An RCT is the best way to assess the effects of a treatment. Randomisation is used to create well-balanced groups, so the treatment is the only difference between them. This means any differences in outcome can be attributed to the treatment itself and not to other confounding factors.

What did the research involve?

The researchers enrolled 50 people with dementia and neuropsychiatric symptoms. They randomly assigned them to take either a THC pill or an identical-looking inactive placebo pill for three weeks. They assessed symptoms over that time and looked at whether these differed in the two groups.

The trial initially intended to assess people who also had pain, but the researchers could not find enough people with both symptoms to participate, so they focused on neuropsychiatric symptoms. It also intended to recruit 130 people, but did not reach this number because of delays in getting approval for the trial in some centres.

About two-thirds (68%) of the participants had Alzheimer's disease and the rest had vascular dementia or mixed dementia. They all had experienced neuropsychiatric symptoms for at least a month. Both groups were taking similar neuropsychiatric drugs, including benzodiazepines, and continued to take these drugs during the study period. 

People with a major psychiatric disorder or severe aggressive behaviour were excluded. Just over half (52%) lived in a special dementia unit or nursing home. The participants were aged about 78 years on average.

The pills contained 1.5mg of THC (or none in the case of placebo) and were taken three times a day for three weeks. Neither the participants nor the researchers assessing them knew which pills they were taking, which stops them influencing the results.

The researchers assessed the participants' symptoms at the start of the study, two weeks later, and then at the end of the study. They used a standard questionnaire, which asked the caregiver about symptoms in 12 areas, including agitation or aggression and unusual movement behaviour, such as pacing, fidgeting or repeating actions such as opening and closing drawers.

The researchers used a second method to measure agitated behaviour and aggression, and also measured quality of life and the participants' ability to carry out daily activities. They also assessed whether the participants experienced any side effects from treatment. The researchers then compared the results of the two groups.   

What were the basic results?

Three participants did not complete the study: one in each group stopped treatment because they experienced side effects, and one in the placebo group withdrew their consent for taking part.

Both the placebo and the THC pill groups had a reduction in neuropsychiatric symptoms during the trial. There was no difference in the reduction between the groups. The groups also did not differ in a separate measure of agitation and anxiety, quality of life, or ability to carry out daily activities.

Two-thirds of people taking THC (66.7%) experienced at least one side effect, as did over half of those taking placebo (53.8%). The sorts of side effects previously reported with THC, such as sleepiness, dizziness and falls, were actually more common with placebo. 

How did the researchers interpret the results?

The researchers concluded they found no benefit of 4.5mg oral THC for neuropsychiatric symptoms in people with dementia after three weeks of treatment.

They suggested the dose of THC used may be too low because the participants did not experience the expected side effects of THC, such as sleepiness. 

Conclusion

This small phase II trial has found no benefit of taking THC pills (4.5mg a day) for neuropsychiatric symptoms in people with dementia in the short term.

The authors say this contrasts with previous studies, which have found some benefit. However, they note the previous studies were also limited in that they were even smaller, did not have control groups, or did not collect data prospectively.

The study was small, which reduces its ability to detect differences between groups. However, the non-significant trend was for a greater reduction of symptoms in the placebo group than the THC group.

The researchers note the improvement in the placebo group was "striking" and may be the result of factors such as the attention and support received from the study team, expectations of the participants on the effects of THC leading to perceived improvement, and training of the nursing personnel in the study.

While the authors suggest the dose of THC may need to be higher, further studies are needed to determine whether a higher dose would be effective and safe.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Cannabis pills 'do not help dementia sufferers'. The Daily Telegraph, May 13 2015

Links To Science

van den Elsen G, Ahmed AIA, Verkes R, et al. Tetrahydrocannabinol for neuropsychiatric symptoms in dementia. Neurology. Published online May 13 2015

Categories: Medical News

Could testing grip strength predict heart disease risk?

Medical News - Thu, 05/14/2015 - 14:40

"Poor grip can signal chances of major illness or premature death," the Mail Online reports. An international study has provided evidence that assessing grip strength could help identify people who are at higher risk of cardiovascular events such as a heart attack.

The study authors wanted to see whether muscle strength, measured by grip, can predict the chances of getting a range of illnesses, and of dying, in high-, medium- and low-income countries. To find out, they tested 142,861 people across 17 countries and tracked what happened to them over the course of four years. The study found that the chances of dying during this period were higher for people with weaker grips, as were the chances of having a heart attack or stroke.

The grip test predicted death from any cause better than systolic blood pressure did, but the blood pressure test was better at predicting whether someone would have a heart attack or stroke.

Grip tests may be a quick way of assessing someone's chances of having cardiovascular disease, or dying from it, but the study doesn't tell us whether muscle weakness causes illness, or the other way around.

It is unlikely that a "grip test" would replace standard protocols for diagnosing cardiovascular diseases, which rely on a combination of risk assessment methods and tests such as electrocardiogram (ECG) and coronary angiography. However, such a test could be useful in areas of the world where access to medical resources is limited.

Where did the story come from?

The study was carried out by researchers from 23 different universities or hospitals, in 17 different countries. It was led by researchers at McMaster University in Ontario, Canada, and funded by grants from many different national research institutes and pharmaceutical companies. The study was published in the peer-reviewed medical journal The Lancet.

The media reported the study reasonably accurately, although the Mail and The Daily Telegraph seemed to confuse the maximum grip strength measured by a dynamometer with the strength of a person's handshake, which isn't the same thing. You would hope that somebody shaking your hand wouldn't try to grip it as powerfully as possible.

What kind of research was this?

This was a longitudinal population study carried out in 17 countries, with high, medium and low income levels. It aimed to see whether muscle strength, measured by grip, could predict someone's chances of illness or death from many different causes. As this was an observational study, it can't tell us whether grip strength is a cause of illness or death, but it can show us whether the two things are linked.

What did the research involve?

Researchers recruited 142,861 people from households across the 17 countries included in the study. They tested their grip strength and took other measurements, including their weight and height, and asked questions about their:

  • age
  • diet
  • activity levels
  • education
  • work
  • general health

They checked up with them every year for an average of four years, to find out whether they were still alive and whether they had developed certain illnesses. After four years, the researchers used the data to calculate whether grip strength was linked to a higher or lower risk of having died or developed an illness.

The researchers aimed to get an unbiased sample of people from across the countries involved. They tried to get documentary evidence about cause of death, if people had died. However, if that was not available, they asked a standard set of questions of the people in their household to try to ascertain the cause of death. Most people in the study had their grip strength tested in both hands, although some at the start of the study had only one hand tested.

The data was analysed in a number of different ways to check whether the results were consistent across different countries and within the same country.

One big problem with this type of study is reverse causation. This means that the thing being measured – in this case, grip strength – could be either a cause or a consequence of illness. So someone with a weak grip might have weak muscles because they are already ill with a fatal disease.

To try to get around this, the researchers ran one analysis excluding everyone who had died within six months of being enrolled in the study, and another excluding everyone with cardiovascular disease or cancer at the start of the study. The results were adjusted to take account of age and gender, because older people and women, on average, have weaker muscle strength than younger people and men.

What were the basic results?

The researchers had follow-up data for 139,691 people, of whom 3,379 (2%) died during the study. After adjusting their figures, researchers found that people with lower grip strength were more likely to have died during the study, whether from any cause, cardiovascular disease or non-cardiovascular disease. People with low grip strength were also more likely to have had a heart attack or stroke. There was no link between grip strength and diabetes, hospital admission for pneumonia or chronic obstructive pulmonary disease (COPD), injury from falling, or breaking a bone. The results did not change significantly when excluding people who died within six months, or who had cancer or cardiovascular disease at the start.

Grip was measured in kilograms (kg) and adjusted for age and height. Average values for men ranged from 30.2kg in low-income countries to 38.1kg in high-income countries. On average, across all study participants, a 5kg reduction in grip strength was associated with a 16% increase in the chance of death (hazard ratio 1.16, 95% confidence interval 1.13 to 1.20). Grip strength alone was more strongly associated with the chance of dying from cardiovascular disease (such as a heart attack or stroke) than systolic blood pressure – a more commonly used measurement. However, blood pressure was better at predicting whether someone was going to have a heart attack or stroke.
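For readers curious how a "per 5kg" figure like this behaves: under the proportional hazards (log-linear) model typically used for such analyses, a hazard ratio reported per one step size can be rescaled to another. The sketch below is illustrative arithmetic only, not a calculation taken from the study itself.

```python
# Illustrative only: rescaling a hazard ratio reported per 5kg decrement
# in grip strength to other decrements, assuming a log-linear
# (proportional hazards) model as is standard in such analyses.
hr_per_5kg = 1.16  # reported: 16% higher chance of death per 5kg weaker grip

def scaled_hr(hr: float, reported_step: float, new_step: float) -> float:
    """Rescale a hazard ratio from one unit step to another:
    HR_new = HR_reported ** (new_step / reported_step)."""
    return hr ** (new_step / reported_step)

hr_per_1kg = scaled_hr(hr_per_5kg, 5, 1)    # per 1kg decrement
hr_per_10kg = scaled_hr(hr_per_5kg, 5, 10)  # per 10kg decrement
print(round(hr_per_1kg, 3))   # ~1.03 (about a 3% increase per kg)
print(round(hr_per_10kg, 3))  # 1.16 squared, ~1.346
```

This is why the risk does not simply double for a 10kg difference: hazard ratios multiply rather than add across unit steps.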

How did the researchers interpret the results?

The researchers said their findings showed that muscle strength is a strong predictor of death from cardiovascular disease and a moderately strong predictor of having a heart attack or stroke. They say that muscle strength predicts the chances of death from any cause, including non-cardiovascular disease, but not the chances of getting non-cardiovascular illness.

They go on to say that these findings suggest muscle strength may predict what happens to people who get illnesses, rather than just predicting whether they get ill. When they looked at what happened to people who got ill, whether from cardiovascular disease or other causes, those who had low grip strength were more likely to die than those with high grip strength. 

They say they cannot tell from the study why there is an association between muscle strength and chances of getting cardiovascular disease. They say further research is needed to see whether improving muscle strength would cut the chances of having heart attacks or strokes. 

Conclusion

These are interesting results from a range of very different countries, showing that people with low muscle strength may be at higher risk of dying prematurely than other people. Earlier studies in high-income countries had already suggested that this was the case, but this is the first study to show it holds true across countries from high to low incomes.

The study also shows that Europeans, and men from high-income countries, on average, have higher grip strength than people from lower-income countries. Interestingly, women from middle-income regions, such as China and Latin America, had slightly higher muscle strength than women from high-income countries.

What we don't know from the study is why and how muscle strength is linked to the chances of death. It might seem obvious that people who are weak and frail are more at risk of death than other people, but we don't know whether this is because they are already ill, or whether their weak muscle strength makes them more vulnerable to getting ill, or less able to survive illness if they do get sick.

Importantly, the study doesn't tell us what can be done for people with low muscle strength. Should we all be doing weight training to increase our strength, or would that make no difference? Low muscle strength may reflect lots of things, such as the amount of exercise people do, what type of diet they eat, their age and occupation. We know that muscle strength declines as we age, but we don't know the effect of trying to halt this decline.

Should doctors routinely measure people's grip strength to test their health? The researchers say it is a better predictor of cardiovascular death than blood pressure, and could be easily used in lower-income countries. But raised blood pressure and cholesterol both increase the risk of cardiovascular disease, and there are treatments available to get them under control. Simply measuring a person’s grip strength would miss this opportunity and not lead to any preventive strategies.

The "grip test" could be used in poorer countries as a quick way to identify people who might be at risk of heart attack or stroke, who could then benefit from follow-up testing.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

What your handshake says about your health: Poor grip can signal chances of major illness or premature death. Mail Online, May 14 2015

Palm 'holds secrets of future health'. BBC News, May 14 2015

Handshake strength 'could predict' heart attack risk. The Daily Telegraph, May 14 2015

Why testing your grip could save your life. Daily Express, May 14 2015

Links To Science

Leong DP, Teo KK, Rangarajan S, et al. Prognostic value of grip strength: findings from the Prospective Urban Rural Epidemiology (PURE) study. The Lancet. Published online May 13 2015

Categories: Medical News

Study finds seasons may affect immune system activity

Medical News - Wed, 05/13/2015 - 14:25

"Winter immune boost may actually cause deaths," The Guardian reports. A new gene study suggests there may be an increase in inflammation levels during winter, which can protect against infection but could also make the body more vulnerable to other chronic diseases.

The study looked at gene expression (the process of using a gene to make a protein) in blood samples taken from 1,315 children and adults in different months throughout the year in a range of different countries. The researchers found increased activity of some of the genes involved in inflammation during the winter, and decreased activity in the summer.

The authors concluded that this seasonal change in the immune system could, for example, contribute to the worsening of some autoimmune disorders during the winter, such as rheumatoid arthritis.

But the immune system is extremely complex, and different genes showed different seasonal expression patterns. There were also important discrepancies in expression patterns in different parts of the world. Saying the immune system is "weaker" in certain seasons at this stage is therefore oversimplifying the findings of this research.

It is also likely these seasonal changes could at least in part be a response to changes in infections and allergens, such as pollen in the summer, but this type of study cannot prove cause and effect. Further research into this area is required before any practical application of these results can be found. 

Where did the story come from?

The study was carried out by researchers from the University of Cambridge and the London School of Hygiene and Tropical Medicine in the UK, and the Technical University of Munich and the Technical University of Dresden in Germany.

It was funded by various institutions, including the National Institute for Health Research, Cambridge Biomedical Research Centre, the UK Medical Research Council (MRC), The Wellcome Trust, and the UK Department for International Development.

The study was published in the peer-reviewed journal Nature Communications. This is an open-access journal, so the study is free to read online.

The media, on the whole, reported the story accurately, though the total number of people who had gene expression analyses was 1,315, not more than 16,000, as reported.

Many of the news sources talked of the immune system being "stronger", "weaker" or "boosted". These terms are, arguably, overly simplistic and not representative of the findings of this research. It is probably better to think of the overall pattern of immune activity changing from season to season, rather than the immune system going from "weak" to "strong", and back to "weak" again.

The Mail Online also reported that it is believed the amount of daylight we get "plays a role" in this increased immune activity. They say this "could explain why the seasonal effect was weaker in people from Iceland, where the extremely long summer days and short, dark winter days may upset the process". But this seems contradictory – if daylight plays a role, you would expect a greater seasonal effect in Iceland. 

What kind of research was this?

This research combined several observational studies that looked at the level of immune system activity at different times of the year in people from around the world.

It aimed to see if there was seasonal variation in the:

  • gene expression of inflammatory proteins and receptors such as interleukin-6 (IL-6) and C-reactive protein (these proteins are associated with autoimmune conditions such as rheumatoid arthritis)
  • number of each type of white cell in the blood (white cells fight different types of infections)

As these were observational studies, they can only show an association between the different seasons and the immune system. They cannot prove that the season causes the immune system to become more or less active, as there could be other factors (confounders) causing any results seen.

What did the research involve?

The researchers looked at the gene expression of nearly 23,000 genes in one type of white blood cell in samples of blood taken from children and adults at different times of the year.

They measured the number of each type of white cell in blood samples from healthy adults from the UK and The Gambia taken during different months. They then looked at gene expression in samples of fat tissue from women in the UK.

Gene expression of 22,822 genes was analysed in samples from 109 children genetically at risk of developing type 1 diabetes. The samples came from the German BABYDIET study, where babies had a blood test taken every three months until the age of three.

Gene expression was measured from blood samples taken at different times of the year from:

  • 236 adults with type 1 diabetes from the UK
  • adults with asthma but no reported current infection from Australia (26 people), the UK/Ireland (26 people), the US (37 people) and Iceland (29 people)

The researchers then measured the number of each type of white cell in blood samples taken from 7,343 healthy adults from the UK and 4,200 healthy children and adults from The Gambia. They wanted to see if there were seasonal changes in the types of white cells in the blood.

Finally, they looked at gene expression in fat tissue samples taken from 856 women from the UK. They did this to see whether only cells in the immune system showed variation in gene expression with the seasons. 

What were the basic results?

In the first group of children from Germany, the researchers found almost a quarter of all genes (23%, about 5,000 genes) showed seasonal variation in the white blood cells assessed. Some genes were more active in the summer and others in winter.

When looking at all of the population groups they tested, 147 genes were found to show the same seasonal variation in the blood samples taken from the children and adults from the UK/Ireland, Australia and the US.

Again, some genes were more active in the summer and others in the winter. They included a gene encoding a protein that controls the production of anti-inflammatory proteins, which was found to be more active in the summer months.

Other genes involved in promoting inflammation were more active in the winter. Seasonal genes from the samples of Icelandic people did not show the same pattern.

The numbers of different types of white blood cells from the UK samples also showed seasonal variation. Lymphocytes, which mostly fight viral infections, were highest in October and lowest in March. Eosinophils, which have many immune functions, including allergic reactions, were highest in the summer.

There were also seasonal patterns in the numbers of different types of white blood cell from people in The Gambia, but these were different from those in the UK. All white cell types increased during the rainy season.

The researchers also found some genes showed seasonal variation in their activity in fat cells.  

How did the researchers interpret the results?

The researchers say their results indicate that gene expression and the composition of the blood vary with season and geographical location.

They say the increased gene expression of inflammatory proteins in the European winter may help explain why some autoimmune conditions, such as type 1 diabetes, are more likely to start in the winter.

Conclusion

This research found seasonal variations in gene expression in one type of white blood cell. Some genes became more active in the summer months, while others became more active in the winter.

For example, one gene involved in the body's anti-inflammation response was increased during the summer, while some involved in inflammation were increased in the winter.

The researchers also found seasonal variation in the numbers of each type of white cell. These patterns were different in samples taken from people in the UK, compared with people from The Gambia.

Because of the observational nature of each study, it is not possible to say for certain that the time of year caused the results seen. The immune system is affected by a variety of factors, such as current and past infections, stress and exposure to allergens.

For example, it is not surprising that the number of eosinophils was highest in the UK during the summer months, when the allergen pollen (linked to hay fever) is most abundant.

Concurrent illness may have confounded the results of the gene expression studies, as they were performed on adults with either type 1 diabetes or asthma and children at increased risk of type 1 diabetes.

The immune system is extremely intricate, involving a wide range of different genes, proteins and cells that have complex interactions, as shown in this study. Further research into this area is required before any practical application of these results can be found.

The most season-specific health advice we can offer at this point is to wrap up warm in winter, avoid getting sunburnt during the summer, and take the opportunity to safely top up your vitamin D throughout the year.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Winter immune boost may actually cause deaths, study suggests. The Guardian, May 12 2015

Seasons affect 'how genes and immune system work'. BBC News, May 12 2015

Body's immune response is stronger in the summer, Cambridge University study suggests. The Daily Telegraph, May 12 2015

We really ARE healthier in the summer: 'Serious illnesses strike more in winter because our immune systems are weaker'. Mail Online, May 12 2015

Links To Science

Dopico XC, Evangelou M, Ferreira RC, et al. Widespread seasonal gene expression reveals annual differences in human immunity and physiology. Nature Communications. Published online May 12 2015

Categories: Medical News

Doctors issue warning about overtreating patients

Medical News - Wed, 05/13/2015 - 14:00

"NHS tests and drugs 'do more harm than good'," is the headline in The Telegraph, while The Guardian warns: "Doctors to withhold treatments in campaign against 'too much medicine'."

Both of these alarmist headlines are reactions to a widely reported opinion piece from representatives of the UK's Academy of Medical Royal Colleges (AMRC) in the BMJ about the launch of a campaign to reduce overdiagnosis and overtreatment in the UK.

However, the article does not suggest that doctors should "withhold" effective treatments, or say that all, or most, NHS tests and drugs do more harm than good.  

Who wrote the opinion piece?

The piece was written by a group of doctors representing the UK's AMRC. The academy represents all medical royal colleges in the UK.

The authors include Dr Aseem Malhotra, consultant clinical associate at the AMRC, Dr Richard Lehman, senior research fellow at the University of Oxford, and Professor Sir Muir Gray, the founder of the NHS Choices Behind the Headlines service.

The piece marks the launch of the Choosing Wisely campaign in the UK. The campaign is already underway in the US and Canada. Its purpose is to ask medical organisations to identify five commonly used tests or treatments in their specialities that may be unnecessary, and should be questioned and discussed with their patients.

An example given on the website for the US Choosing Wisely campaign is the routine use of X-rays for first-line management of acute lower back pain. As these types of cases usually resolve by themselves, the use of X-rays could be seen as both a waste of time and money.

The piece is published as an open-access article, meaning it can be read for free online in the peer-reviewed medical journal BMJ. Read the article in full on the BMJ website.

What arguments does the piece make?

The piece argues that some patients are diagnosed with conditions that will never cause symptoms or death (overdiagnosis) and are then treated for these conditions unnecessarily (overtreatment).

In addition, the authors say, some treatments are used with little evidence that they help, or despite being more expensive, complex or lengthy than other acceptable treatments.

They say overdiagnosis and overtreatment are driven by "a culture of 'more is better', where the onus on doctors is to 'do something' at each consultation".

The idea that doing nothing may actually be the best option could be a concept alien to many doctors as a result of medical culture and training.

The article says this culture is caused by factors such as:

  • NHS England's system of payment by results, which rewards doctors for carrying out investigations and providing treatment – though a case could be made that this is more of a problem in private healthcare systems, such as that of the US, where the incentive to provide often expensive investigations and treatment is much higher
  • patient pressures, partly driven by lack of shared information and decision making with patients
  • misunderstanding of health statistics, which means doctors, for example, overestimate the benefits of treatment or screening

Overtreatment matters, the authors say, because it exposes people to the unnecessary risk of side effects and harms, and because it wastes money and resources that could be spent on more appropriate and beneficial treatments.  

What evidence do the authors use to support their argument?

The authors cite various studies and sources to support their arguments. They point to patterns of variation in the use of medical and surgical interventions around the country, which do not relate to a need for these procedures.

They say the National Institute for Health and Care Excellence (NICE) has identified 800 clinical interventions commissioners could stop paying for because the available evidence suggests they do not work or have a poor balance of benefits and risks.

It should be pointed out the BMJ piece did not report specific evidence estimating how common overdiagnosis or overtreatment are in the UK as a whole.

The authors also note that a study of the effect of the GP payment system introduced in 2004 – which provides financial incentives for a range of activities, such as recording blood pressure, testing for diabetes and prescribing statins to people at risk of heart disease – found these tests and treatments are now more common, but this did not seem to have reduced the levels of premature death in the population.

Finally, the piece cites a study that found fewer people chose to have an angioplasty when told that, although it can improve symptoms, it does not reduce the future chances of a heart attack, compared with people who had not been told this explicitly.

It is important to point out the evidence presented in this article did not seem to be gathered via a systematic method (a systematic review). This means there could have been evidence that countered the authors' argument that was overlooked or not included.

The authors acknowledge there is no evidence that the Choosing Wisely campaign has had any effect in reducing the use of low-value medical procedures in the US. 

How accurate is the media reporting?

While the actual articles in the UK press are, in most cases, accurate, some of the headlines are alarmist and not particularly useful.

The Independent gives a good overview of the campaign and sets it into context, with information from NICE and examples from the US.

Several of the newspapers highlight specific tests and treatments that might be targeted by the campaign. For example, the Guardian reports that, "Doctors are to stop giving patients scores of tests and treatments, such as X-rays for back pain and antibiotics for flu, in an unprecedented crackdown".

This is premature – the first step that will be taken by medical organisations such as the royal colleges is to identify the top five lists of treatments or tests they consider to have dubious value, before discussing whether to reduce their use or, in some cases, not use them at all.

Once these have been identified, the piece calls for sharing this information with doctors and patients to help them discuss the benefits and harms of the identified treatments and tests more fully.

Does it make any recommendations?

The AMRC makes four recommendations:

  • Doctors should provide patients with resources to help them better understand the potential harms of medical tests and treatments.
  • Patients should be encouraged to ask whether they really need a test or treatment, what the risks attached to it are, and whether there are simpler, safer options. They should also be encouraged to ask what happens if they do nothing about their condition.
  • Medical schools should teach students better about risk and the overuse of tests and treatments, and organisations responsible for postgraduate training should ensure practising doctors receive the same education.
  • Those responsible for paying hospitals and doctors should consider a different payment system that doesn't encourage overtreatment.

In addition, the authors say the clinical, patient and healthcare organisations participating in the Choosing Wisely campaign are to work together to develop top five lists of tests or interventions of questionable value. They will then promote discussion about these interventions.

For an up-to-date, unbiased and entirely transparent overview of your options for testing or treating a particular condition, go to the NHS Choices Health A-Z.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Doctors urged to stop 'over-treating'. BBC News, May 13 2015

Stop handing out so many drugs, doctors are warned: 'Over-treating' patients is wasting the NHS money and can cause harm. Mail Online, May 13 2015

Millions of patients are being put at risk with pills and surgery they don't really need. Daily Mirror, May 12 2015

NHS tests and drugs 'do more harm than good'. The Daily Telegraph, May 12 2015

'Over-treating' patients is wasteful, unnecessary and can cause them harm, campaign claims. The Independent, May 12 2015

Doctors to withhold treatments in campaign against 'too much medicine'. The Guardian, May 12 2015

Links To Science

Malhotra A, Maughan D, Ansell J, et al. Choosing Wisely in the UK: the Academy of Medical Royal Colleges’ initiative to reduce the harms of too much medicine. BMJ. Published online May 12 2015


Hormone oestrogen linked to male breast cancer

Medical News - Tue, 05/12/2015 - 16:00

"Men with high oestrogen more likely to develop breast cancer," reports the Daily Telegraph.

This headline is based on an international study looking at potential risk factors for male breast cancer. This is a much rarer cancer compared to female breast cancer – an estimated 350-400 UK cases per year for men compared to 50,000 cases in women.

It is known that the hormone oestrogen can trigger the development of some types of female breast cancer. Men as well as women produce oestrogen, but at much lower levels, so researchers wanted to see if there was a similar connection.

This study compared blood samples taken from 101 men who went on to develop breast cancer, with 217 men who didn’t.

It found that men with the highest levels of one form of the hormone oestrogen were about two-and-a-half times more likely to develop the condition than those with the lowest levels.

The study used a good design and approach, and the findings seem plausible, given what is known in women. However, it is still difficult to say whether a raised oestrogen level is directly raising the risk of breast cancer, or if both could be the result of another underlying factor.

Learning more about the causes of male breast cancer might help to find ways to prevent it or find new treatments in the long term.

 

Where did the story come from?

The study was carried out by researchers from the National Cancer Institute in the US, and other research centres in the US, Europe and Canada. It was part of the Male Breast Cancer Pooling Project, and was funded by various international sources, including the National Cancer Institute in the US, and Cancer Research UK and the UK Medical Research Council.

The study was published in the peer-reviewed Journal of Clinical Oncology.

The Telegraph covers this study reasonably well.

 

What kind of research was this?

This was a nested case-control study looking at whether levels of sex hormones are related to a man’s risk of developing breast cancer.

Breast cancer can occur in men, but is very rare. In the UK, about 350 men are reported to be diagnosed with the condition each year. This makes the condition difficult to study, and this is why researchers came together to form an international collaboration, so that they could identify more cases than they would be able to by working alone.

Men and women both produce the sex hormones oestrogen and testosterone – but at different levels. In women, breast cancer is known to be influenced by these hormones. The roles these hormones play in male breast cancer are not known.

A nested case-control study is the most feasible way of looking for possible risk factors for rare diseases. Being "nested" means that information is collected on risk factors in a prospective fashion in a larger group of people, and then people who develop the condition are identified. These people are the "cases" and a matched group of people with similar characteristics, but without the condition, are the "controls".

 

What did the research involve?

The researchers identified 101 men with breast cancer (cases), and 217 similar men without the condition were selected as controls. They analysed blood samples that had been collected from the men before their diagnosis, and compared hormone levels to see if there were any differences between cases and controls.

The participants were identified through seven cohort studies that recruited men without breast cancer. The men provided blood samples, and these were stored. They were then followed up to see if they developed breast cancer. When a case was identified, the researchers selected up to 40 control men from their cohort who were similar to the affected man in terms of race, year of birth, year they entered the study, and how long they had been followed up for.

The researchers then analysed the stored samples to measure the levels of various forms of the steroid sex hormones oestrogen and testosterone. They compared levels in men who later went on to develop breast cancer and controls, to see if they differed. They took into account factors that might affect results (potential confounders) such as:

  • age when the blood sample was taken
  • race
  • body mass index (BMI)
  • date of the blood sample

 

What were the basic results?

The researchers found that for the male sex hormones (androgens such as testosterone) there were no differences in levels between men who went on to develop breast cancer, and those who did not.

However, men who developed breast cancer did have higher levels of the hormone oestradiol (one form of oestrogen) than controls. Men who had the highest oestradiol levels were about two-and-a-half times more likely to develop the condition than those with the lowest levels (odds ratio (OR) 2.47, 95% confidence interval (CI) 1.10 to 5.58).
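For readers curious about how figures like this are produced, an odds ratio and its confidence interval can be calculated from a simple 2x2 table of cases and controls. The sketch below is illustrative only: the counts are made up (the study's raw table is not reproduced here), and it uses the standard Woolf (log) method rather than whatever adjusted model the researchers actually fitted.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf method) from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts for illustration (NOT the study's data):
# 30 of 101 cases vs 25 of 217 controls in the highest oestradiol group
or_, lower, upper = odds_ratio_ci(30, 101 - 30, 25, 217 - 25)
print(f"OR = {or_:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

A confidence interval that excludes 1 (as in the study's 1.10 to 5.58) indicates a statistically significant association, though a lower bound so close to 1 means the true effect could be modest.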

 

How did the researchers interpret the results?

The researchers conclude that their results support a role for oestradiol (oestrogen) in the development of breast cancer in men. They report that this is similar to the level of effect seen in postmenopausal women.

 

Conclusion

This study has identified that oestrogen may play a role in the development of breast cancer in men. The study’s strengths include the prospective collection of data, and the relatively large group of cases, given how rare the disease is.

One of the main limitations of this type of study is that other factors may influence results. In this study, this risk was minimised by matching controls to cases within each country, and by adjusting for various confounders in the analyses. Despite this, some unmeasured confounders may still have an effect. For example, breast cancer in a first-degree relative (parent or sibling) was five times more common in men who developed breast cancer, and there was no information on whether any of the men carried a high-risk form of the BRCA genes, which increase the risk of cancer.

In addition, only one blood sample appeared to be tested for each man, and at various times before their diagnosis. It is possible that the single sample taken may not be representative of levels over a longer period.

It is difficult to say from this type of study whether oestrogen levels are directly causing an increase in risk. The authors note that it is not clear how higher levels of oestrogen might increase breast cancer risk.

Overall, the findings of this study seem plausible, given what is known about breast cancer in women, and could increase knowledge about possible risk factors for male breast cancer.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Men with high oestrogen more likely to develop breast cancer. The Daily Telegraph, May 11 2015

Links To Science

Brinton LA, Key TJ, Kolonel LN, et al. Prediagnostic Sex Steroid Hormones in Relation to Male Breast Cancer Risk. Journal of Clinical Oncology. Published online May 11 2015


Scientists 'amazed' at spread of typhoid 'superbug'

Medical News - Tue, 05/12/2015 - 15:00

“Antibiotic-resistant typhoid is spreading across Africa and Asia and poses a major global health threat,” BBC News reports.

Typhoid fever is a bacterial infection. If left untreated, it can lead to potentially fatal complications, such as internal bleeding.

Uncommon in the UK (there were 33 confirmed UK cases in the first quarter of 2015 and it is thought most of these were contracted abroad), it is more widespread in countries where there is poor sanitation and hygiene.

The headline is based on a study that looked at the genetics of the bacterium that causes typhoid fever, Salmonella typhi, to trace its origins.

The study analysed genetic data from almost 2,000 samples of Salmonella typhi collected between 1903 and 2013. It was looking for a strain called H58 that is often antibiotic-resistant. It found that this strain was likely to have originated in South Asia around the early 1990s, and has spread to other countries in Africa and Southeast Asia. It accounted for about 40% of samples collected each year. Over two-thirds of the H58 samples had genes that would allow them to be resistant to antibiotics.

It would be complacent to assume that this is just a problem for people in the developing world, as antibiotic resistance is a major threat facing human health worldwide. Studies such as this help researchers to identify and track how such bacteria spread. This may help them to use existing antibiotics more effectively, by identifying where specific types of resistance are common.

 

Where did the story come from?

The study was carried out by a large number of researchers from international institutions, including the Wellcome Trust Sanger Institute, in the UK. The researchers were also funded by a wide range of international organisations, including the Wellcome Trust and Novartis Vaccines Institute for Global Health.

The study was published in the peer-reviewed journal Nature Genetics.

The news sources cover this story reasonably. Some reporting implies that it is the H58 strain that is killing 200,000 people a year, but this study has not assessed this.

The 200,000 figure seems to be taken from information provided by the US’s Centers for Disease Control and Prevention (CDC), and is an estimate of all types of typhoid fever, not just the H58 strain.

 

What kind of research was this?

This was a genetic study looking at the origins and spread of the H58 strain of Salmonella typhi – the bacteria that causes typhoid fever. This strain is often found to be antibiotic-resistant.

The typhoid bacteria are spread by ingestion of food or water contaminated with faecal matter from an infected person. This means the disease is a problem in countries where there is poor sanitation and hygiene. Typhoid fever is uncommon in the UK, and most cases in this country are in people who have travelled to high-risk areas where the infection still occurs, including the Indian subcontinent, South and Southeast Asia, and Africa. The researchers say that 20-30 million cases of typhoid are estimated to occur each year worldwide.

Typhoid fever has been traditionally treated with the antibiotics chloramphenicol, ampicillin and trimethoprim-sulfamethoxazole. Since the 1970s, strains of typhoid have started to emerge that are resistant to these antibiotics (called multidrug-resistant strains). Different antibiotics, such as fluoroquinolones, have been used since the 1990s, but strains resistant to these antibiotics have been identified recently in Asia and Africa. One such strain, H58, is becoming more common, and was the focus of this study.

 

What did the research involve?

The researchers used genetic sequence data from 1,832 samples of Salmonella typhi bacteria collected across the world. They used this data to assess when the H58 strain (which has identifiable genetic characteristics) had arisen and how it had spread.

They first identified which of the samples belonged to the H58 strain, and in what year it was first identified. They also looked at what proportion of the samples collected each year belonged to this strain, to see if it was becoming more common.

Over time, DNA accumulates changes, and the researchers used computer programmes to analyse the genetic changes present in each sample to identify how each strain is likely to be related to the others. By combining this information with the origin and year of each sample, the researchers developed an idea of how the strain had spread.
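The core idea can be shown with a toy example. The sequences below are short hypothetical strings, not real Salmonella typhi genomes, and this is vastly simpler than the phylogenetic software the researchers used – but it captures the principle that samples sharing more recent ancestry have accumulated fewer differences:

```python
def hamming(s1, s2):
    """Count positions at which two equal-length sequences differ."""
    assert len(s1) == len(s2)
    return sum(x != y for x, y in zip(s1, s2))

# Hypothetical variant profiles for four samples (names are invented)
samples = {
    "fiji_1992":   "ACGTACGTAC",
    "kenya_2010":  "ACGTACGTAT",
    "malawi_2012": "ACGTACGTTT",
    "uk_1905":     "TTGAACCTAC",
}

# Sort all pairs by distance: the smallest distances suggest
# the most closely related samples
names = list(samples)
pairs = sorted(
    (hamming(samples[a], samples[b]), a, b)
    for i, a in enumerate(names)
    for b in names[i + 1:]
)
for dist, a, b in pairs:
    print(dist, a, b)
```

In the real analysis, genome-wide differences are combined with each sample's collection date and location to build a dated evolutionary tree, from which the timing and direction of spread can be inferred.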

 

What were the basic results?

The researchers found that nearly half of their samples (47%) belonged to the H58 strain. The first sample identified as part of this strain came from Fiji in 1992, and the strain continued to be identified up to the most recent samples, from 2013. H58 strain samples were identified from 21 countries in Asia, Africa and Oceania, showing that it is now widespread. Overall, 68% of these H58 samples had genes that would allow them to be antibiotic-resistant.

There were some very genetically closely related samples found in different countries, suggesting that there was human transfer of the bacteria between these countries. Their genetic analyses suggested that the strain was initially located in South Asia, and then spread to Southeast Asia, western Asia and East Africa, as well as Fiji.

There was evidence of multiple transfers of the strain from Asia to Africa. The H58 strain accounted for 63% of samples from eastern and southern Africa. The analysis suggested that there had been a recent wave of transmission of the H58 strain from Kenya to Tanzania, and on to Malawi and South Africa. This had not previously been reported, and the researchers described it as an “ongoing epidemic of H58 typhoid across countries in eastern and southern Africa”.

Multidrug resistance was reported to be common among H58 samples from Southeast Asia in the 1990s and, more recently, samples from this region have acquired mutations which have made them less susceptible to fluoroquinolones. These resistant strains have become more common in the area, and the researchers suggested this is due to the use of fluoroquinolones to treat typhoid fever over this period, giving the resistant strains a survival advantage.

In South Asia, there are lower rates of multidrug resistance in recent samples than in Southeast Asia. In Africa, most samples showed multidrug resistance to the older antibiotics, but not fluoroquinolones, as these are not frequently used there.

 

How did the researchers interpret the results?

The researchers say that their analysis is the first of its kind for the H58 typhoid strain, and that the spread of this strain “requires urgent international attention”. They say that their study “highlights the need for longstanding routine surveillance to capture epidemics and monitor changes in bacterial populations as a means to facilitate public health measures, such as the use of effective antimicrobials and the introduction of vaccine programs, to reduce the vast and neglected morbidity and mortality caused by typhoid”.

 

Conclusion

This study has provided information about the spread of a strain of typhoid called H58, which is commonly antibiotic-resistant, by looking at the genetics of samples collected between 1903 and 2013. It has shown that the strain was likely to have arisen in South Asia and then spread to Southeast Asia and Africa. The strain showed different patterns of antibiotic resistance in different regions – likely driven by different patterns in the use of antibiotics.

While this study has not estimated the number of cases or deaths worldwide attributable to this strain specifically, there are reported to be 20-30 million cases of typhoid fever globally each year.

The spread of antibiotic resistance is a major threat to human health, and studies like this can help us monitor resistant strains and target treatment more effectively.

Read more about the battle against antibacterial resistance and how we can all help do our bit.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Drug-resistant typhoid 'concerning'. BBC News, May 11 2015

Antibiotic-resistant typhoid spreading in silent epidemic, says study. The Guardian, May 11 2015

Deadly typhoid superbug poses global threat after 'rapidly spreading' through Asia and Africa, experts warn. Mail Online, May 11 2015

Links To Science

Wong VK, Baker S, Pickard DJ, et al. Phylogeographical analysis of the dominant multidrug-resistant H58 clade of Salmonella Typhi identifies inter- and intracontinental transmission events. Nature Genetics. Published online May 11 2015


Claims a 'sweet tooth' increases your Alzheimer’s risk too simplistic

Medical News - Mon, 05/11/2015 - 15:30

"Could cake and chocolate lead to Alzheimer's disease?" The Daily Telegraph asks.

In a series of animal experiments, researchers attempted to see whether high blood glucose could be involved in the development of amyloid protein plaques in the brain, a characteristic hallmark of Alzheimer’s disease. These plaques are abnormal "clumps" of protein that are thought to gradually destroy healthy brain cells.

Some studies have suggested that people with high blood glucose levels and those with type 2 diabetes may be at greater risk of the disease, and this study aimed to see why that might be the case.

The experiments found that giving the mice a sugar solution over a number of hours led to an increased concentration of amyloid in the fluid surrounding the brain cells. The effect was more pronounced in older mice.

The study has only looked at the short-term effects, and not at whether high glucose levels affected longer-term plaque formation or symptoms in the mice.

At this stage, it is not conclusively proven that type 2 diabetes is a risk factor for Alzheimer’s disease, or that you have increased risk of the disease if you have a high-sugar diet.

However, sticking to current healthy eating and activity recommendations is a good way to increase your chances of staying healthy.

 

Where did the story come from?

The study was carried out by researchers from Knight Alzheimer’s Disease Research Center and Washington University School of Medicine in the US, and was funded by the National Institutes of Health. The study was published in the peer-reviewed Journal of Clinical Investigation. It is an open-access study, so it is free to read online or download as a PDF.

The Daily Express accurately describes the methods of the study, but doesn’t make it clear until later in its article that the study was on mice. The Daily Telegraph was more upfront about this fact.

The Telegraph piece also includes information on a related study into green tea and Alzheimer’s disease. We have not analysed that study, so we cannot say how accurate the Telegraph’s reporting of it was.

 

What kind of research was this?

This was animal research that aimed to look into why there might be a link between blood glucose and risk of dementia, specifically Alzheimer’s disease.

The causes of Alzheimer’s disease are still not fully understood. Increasing age is the most well-established factor to date, and there is the possibility of hereditary factors. The influence of health and lifestyle factors is uncertain. Some previous studies have suggested that glucose levels in the blood may have an impact on the development of the beta-amyloid "plaques" and the tau protein "tangles" in the brain that are the hallmarks of the disease. This is supported by other studies that have suggested people with type 2 diabetes are more likely to develop Alzheimer’s disease. Therefore, this research aimed to look at whether there was a biological reason for this. 

Animal studies can provide a valuable indication of how disease processes may work, but the process may not be exactly the same in humans.

 

What did the research involve?

The researchers carried out experiments to control the blood glucose level of a genetically engineered mouse model of Alzheimer’s disease and looked at the effect on the composition of the fluid surrounding the brain cells.

The research involved three-month-old mice, which would normally be too young to have beta-amyloid protein deposits in the brain. Under anaesthetic, the researchers gained access to the large vein and artery in the neck, and guided a catheter through the blood vessel into one region of the brain (the hippocampus). Once the mice were awake again, these tubes allowed the researchers to infuse glucose into the brain, and to sample the fluid around the brain cells while the mice were still awake and moving around.

In their experiments, the researchers withheld food from the mice for several hours before a glucose solution was gradually infused into the brain over four hours.

Fluid around the brain cells was sampled every hour during the infusion to look at levels of glucose, beta-amyloid protein and lactate (a compound involved with the metabolism of the brain) – the latter was used as a marker of brain cell activity. The brain was also examined after death.

Other experiments involved infusing glucose into older (18-month-old) mice that would already be expected to have some beta-amyloid build-up.

They also tried infusing different drugs to examine in more depth precisely what biological mechanisms were occurring in the brain that could be causing these effects.

 

What were the basic results?

In the main experiments in younger mice, the glucose infusion almost doubled glucose concentration in the brain fluid and increased the concentration of beta-amyloid by 25%. Lactate levels also increased, suggesting an increase in brain cell activity.

In the older mice, glucose infusion raised the concentration of beta-amyloid even higher – by around 45%.

 

How did the researchers interpret the results?

The researchers found that increased blood glucose levels affect brain cell activity, leading to increased beta-amyloid in the fluid surrounding the brain cells in young mice that would normally have minimal beta-amyloid. In aged mice, the effect was even more pronounced.

They further suggest that "during the preclinical period of Alzheimer’s disease, while individuals are cognitively normal, our findings suggest that repeated episodes of transient [high blood glucose], such as those found in [type 2 diabetes], could both initiate and accelerate plaque accumulation".

 

Conclusion

This animal study supports the theory that elevated blood sugar might influence the development of beta-amyloid plaques in the brain – one of the characteristic hallmarks of Alzheimer’s disease. As the researchers say, glucose could similarly be involved in their development in humans.

However, at this stage, we cannot extrapolate these short-term results in mice much further. While animal studies provide a valuable indication of how disease processes may work in humans, the process may not be exactly the same. The study has not looked at the long-term effects of raised glucose on plaque formation in these Alzheimer’s-model mice, and how long raised levels need to be present to have an effect.

Even if the development of amyloid plaques in the human brain could be affected by levels of glucose, we don’t understand the intricacies of how this might happen or whether it could be prevented. Body cells – particularly those in the brain – need glucose, so avoiding it altogether is clearly not an option.

Currently, it has not been conclusively proven that type 2 diabetes is a risk factor for Alzheimer’s disease, or that you have increased risk of disease development by having a high-sugar diet. However, high-calorie diets are well established to be a risk factor for overweight and obesity, which are linked to many chronic health conditions, including type 2 diabetes. Sticking to current diet and activity recommendations can help to maintain good health.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Could cake and chocolate lead to Alzheimer's disease? The Daily Telegraph, May 6 2015

Got a sweet tooth? It could put you at risk of Alzheimer's. Daily Express, May 6 2015

Links To Science

Macauley SL, Stanley M, Caesar EE, et al. Hyperglycemia modulates extracellular amyloid-β concentrations and neuronal activity in vivo. The Journal of Clinical Investigation. Published online May 4 2015

Categories: Medical News

Overweight diabetics 'live longer' than slimmer diabetics

Medical News - Mon, 05/11/2015 - 15:00

“Overweight diabetics are 13 per cent less likely to die prematurely than those of a normal weight or those who are obese,” the Mail Online reports.

A new study followed over 10,000 English older adults with type 2 diabetes for 10 years. It examined how their body mass index (BMI) was linked to risk of later cardiovascular disease events such as heart attack and stroke, and death from any cause.

It found that overweight people had a 13% reduced risk of death compared with people who had a normal BMI. Risk of death was no different between obese people and those with a normal BMI.

However, it also found that people who were overweight or obese had an increased risk of cardiovascular disease events that required hospitalisation.

Great care must be taken before jumping to the conclusion that being overweight could be good for people with type 2 diabetes. As seen in this study, being overweight or obese increased the risk of diabetes complications, which could have an adverse impact on quality of life, even if not fatal.

The findings could also have been influenced by various factors other than BMI, including how well people’s diabetes is controlled. Further study is needed to uncover the biological mechanism, if there is a real link.

Current advice remains the same – whatever your current health, aim for a healthy BMI through a balanced diet and regular exercise.

 

Where did the story come from?

The study was carried out by researchers from the University of Hull, Imperial College London and Federico II University in Naples, Italy. Financial support was provided by the National Institute for Health Research, the Hull York Medical School at the University of Hull, and Imperial College London.

The study was published in the peer-reviewed journal Annals of Internal Medicine.

The Mail’s coverage takes the findings at face value, suggesting that being overweight or obese could prolong life for people with type 2 diabetes. However, the study does not prove this and there are known adverse health risks of being overweight or obese.

 

What kind of research was this?

This was a prospective cohort study that aimed to investigate whether body weight has an influence on prognosis (what happens to health over time) in people with type 2 diabetes.

The link between obesity and increased risk of cardiovascular disease is well established. However, some other studies have suggested that in people with established cardiovascular disease, obesity could somehow offer a survival advantage. This observation has been named the "obesity paradox" – as it goes against what one would expect. The researchers wanted to investigate whether a similar link might be seen between obesity and survival in people with type 2 diabetes. The main limitation of this type of study is that there may be unmeasured confounding factors influencing any apparent relationship.

 

What did the research involve?

The study included adults diagnosed with type 2 diabetes who attended the outpatient clinic of a single NHS hospital in England, with a follow-up period of around 10 years. The researchers analysed whether participants’ BMIs were linked to their risk of cardiovascular events or death from any cause.

The participants had attended the clinic between 1995 and 2005, and had their data entered into a patient registry. A total of 10,568 people with type 2 diabetes (54% men) were included.

At the first visit, data was collected on age, duration of diabetes, height, weight, blood pressure, smoking history and other significant illnesses (eg cancer, lung or kidney disease). All of these factors were adjusted for in the analyses, to try to remove their effects.

Participants were followed for an average of 10.6 years, until the end of 2011. The main outcome examined was all-cause mortality (death from any cause). Cardiovascular events, such as heart attack, stroke or heart failure, were also examined.

 

What were the basic results?

Average BMI at the study's start was 29, which is in the overweight range, and participants had a median age of 63 years.

During follow-up, 35% of participants died, 9% had a heart attack, 7% a stroke and 6% had heart failure. Overweight or obese participants (BMI >25) had a significantly higher risk of heart attack or heart failure than people of normal BMI (18.5 to 24.9). Risk of stroke was significantly increased in obese people (BMI >30), but not those who were overweight.

However, all-cause mortality risk was not increased for people who were overweight or obese.

Obese people had no significant difference in mortality risk compared with people with a normal BMI. Meanwhile, overweight people actually had decreased mortality risk compared with people with a normal BMI (hazard ratio (HR) 0.87, 95% confidence interval (CI) 0.79 to 0.95).

Meanwhile, underweight people had increased mortality risk compared with people with a normal BMI (HR 2.84, 95% CI 1.97 to 4.10), though there was no difference in their risk of cardiovascular disease events.

 

How did the researchers interpret the results?

The researchers conclude: "In this cohort, patients with type 2 diabetes who were overweight or obese were more likely to be hospitalised for cardiovascular reasons. Being overweight was associated with a lower mortality risk, but being obese was not."

 

Conclusion

This large prospective cohort following over 10,000 older adults with type 2 diabetes for 10 years has found that while being overweight or obese is linked to increased risk of cardiovascular events, being overweight is linked to reduced risk of death. This is similar to the "obesity paradox" seen in some other studies, where being overweight or obese is associated with a survival benefit in people with established cardiovascular disease.

The researchers note that 16 other studies have assessed the same question and found conflicting results. Their study aimed to improve on the methods in these studies, and its large sample size and prospective design, following people for 10 years, are strengths. However, caution must be taken before concluding from the findings of this cohort that "being FAT", as Mail Online states, is a good thing for people with type 2 diabetes.

There are important points to note:

  • The cohort demonstrated significantly increased risk of cardiovascular disease events, such as heart attack and heart failure, for overweight or obese people with type 2 diabetes compared to healthy weight individuals. This is consistent with what is already known about the risks of overweight and obesity for cardiovascular disease.
  • The researchers adjusted their analyses for various factors, including age, blood pressure, other illnesses and smoking history. However, other unmeasured confounding factors (confounders) could still be influencing the association between mortality and BMI – for example, other lifestyle factors (exercise, diet and alcohol) or health (including mental health), disability and quality of life factors. We also don’t know about diabetes medications each person was taking or how well controlled their diabetes was. If these factors differed between people with different BMIs, these could be influencing results rather than BMI itself.
  • The study has also only looked at BMI but not at other measures of body fat, such as distribution of fat mass, or body weight in terms of fat mass and non-fat mass. Analysing these measures might be a way to confirm whether the BMI findings seem robust.
  • As the researchers say, they have not specifically examined cause of death. An analysis of causes of death might help to understand why this difference is seen, and whether being overweight is having some protective effect.
  • The study has looked at cardiovascular diseases and mortality only; the researchers have not looked at the development of other overweight- and obesity-linked diseases that may have had a detrimental effect on health.
  • Though a large sample size, this is still a sample of older people with diabetes from a single UK region. Different results may have been obtained from other, more diverse, samples.

The reasons behind the apparent link are not yet known, and further study is needed into the possible biological mechanism. This study does not prove that being overweight is having a direct beneficial effect on risk of death in people with type 2 diabetes. The authors themselves caution against "promoting preconceptions about the ideal BMI" until further research is done to untangle the "obesity paradox".

For now, the advice regarding weight remains the same – whatever your current state of health, aim for a healthy BMI through a balanced diet and regular exercise.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Could being FAT help diabetics live longer? Those who are overweight are 13% less likely to die early than their slimmer counterparts. Mail Online, May 5 2015

Links To Science

Costanzo P, Cleland JGF, Pellicori P, et al. The Obesity Paradox in Type 2 Diabetes Mellitus: Relationship of Body Mass Index to Prognosis: A Cohort Study. Annals of Internal Medicine. Published online May 5 2015

Categories: Medical News

Eating little and often 'no better for dieters than fewer feasts'

Medical News - Fri, 05/08/2015 - 13:01

"Eating little and often – like Jennifer Aniston – could help dieters achieve a healthy weight loss," reports the Mirror. Meanwhile, the Mail Online urges us to "Forget three square meals a day – eating six smaller portions is better for your waistline".

But don't rush to change how often you eat: the claims are based on a tiny study that has been overstated and misinterpreted by the media. In fact, women lost a similar amount of weight regardless of the number of daily meals they ate.

In the study, 11 obese women ate the same low number of calories in either two meals or six meals a day. They lost around the same amount of weight with both diets.

They did retain their non-fat mass (the weight of the body in muscle, organs and bone) better when they were on six meals a day, but the authors warn against drawing firm conclusions from this.

The two-meal pattern seemed to improve levels of "good" cholesterol more than the six-meal pattern. Whether either of these differences would lead to any health benefits for the women was not assessed.

Overall, this study is too small to prove whether six or two meals a day is better for dieters. What is important is to choose an approach to weight loss or healthy weight maintenance that works for you that you can stick to.

Where did the story come from?

The study was carried out by researchers from California State University and other research centres in the US. It was funded by the University of New Mexico.

Nutrisystem Inc, a commercial weight loss company that provides home delivery of calorie-controlled food portions for weight loss, donated all food products used in the study.

The study was published in the peer-reviewed medical journal, Nutrition Research.

The Mirror and the Mail Online have very similar coverage, suggesting that the stories may be based on the same press release. They both say that "Those who ate six meals a day had healthier levels of glucose, insulin and cholesterol". But this is not true.

When the women ate two meals a day, they had better levels of "good" cholesterol than when they ate six meals a day. The levels of other blood fats, glucose and insulin were generally very similar between the groups, and any slight differences were not large enough to rule out having occurred by chance.

What kind of research was this?

This was a crossover randomised controlled trial assessing whether splitting calories into two or six meals had different effects on body composition and blood markers of health.

In crossover trials, the same group of people receives both of the interventions being compared, in a random order.

This approach is suitable if the effects of the interventions are not long lasting; therefore, it is likely to be a better way to look at short-term effects on blood markers than the long-term effect on weight loss.

What did the research involve?

The researchers recruited 15 adult women who were obese but not diabetic. They randomly assigned them to eat a reduced-calorie diet as either two or six meals a day over two weeks. They then had a two-week break before switching to the other meal pattern.

The researchers measured various blood markers and the women's body compositions during the different parts of the study.

In each part of the study, the food products were the same and delivered to participants in pre-packaged portions. The meals gave about 1,200 calories per day.

During the break, the participants ate four times a day (three meals and a snack). Fluid consumption was not strictly controlled during the trial.

What were the basic results?

Eleven women (73%) completed the study; the four who withdrew did so because of non-compliance with the diet, time constraints or family issues.

Overall, the women lost weight during the study and reduced their body mass index (BMI), waist circumference, fat mass and percentage of body fat. Their calorie intake reduced from an average of 2,207 calories a day to 1,200 calories.

Women lost similar amounts of weight after the two meals a day period (2.7% loss) and the six meals a day period (2.0% loss). When the women ate two meals a day, they lost more fat-free mass (3.3% loss) than when they ate six meals a day (1.2% gain).

The researchers did not find any difference between fat mass loss, resting metabolic rate, or the levels of insulin, glucose or most fats in the blood when the women were on the different meal frequencies.

"Good" cholesterol (HDL, or high-density lipoprotein) levels increased more when the women were eating two meals a day (1.3% increase) than when they were eating six meals a day (0.12% increase).

How did the researchers interpret the results?

The researchers concluded that calorie restriction was an effective way of losing weight.

Consuming these calories in two meals a day was associated with improved "good" cholesterol levels.

Conversely, consuming the calories in six meals a day preserved fat-free mass during weight loss. Whether either of these changes would have a beneficial impact on health is unclear.

Conclusion

This small crossover trial found little difference between eating the same low number of calories over six meals a day as opposed to two meals a day.

Both patterns resulted in similar weight loss, but the women lost less non-fat weight when eating six meals a day, suggesting that they may have, for example, lost less muscle.

However, the authors themselves suggest their body composition findings should be interpreted with care. They did not impose strict fluid replacement rules, and the method they used for measuring body composition could have been affected by how hydrated the women were during the trial.

This was also a very small study (15 obese women), and almost a quarter dropped out before the study finished. The study size may have limited its ability to identify important differences between the groups.

The study was also very short, with each meal frequency tested over a fortnight. The results may not be representative of what would be seen in more diverse groups of people, over a longer period of time, or what would happen if people had to prepare their own meals.

While the news has suggested the findings show that six meals a day is "better", it is not possible to clearly say this from the results. It is unclear whether the difference in body composition seen is reliable and would have any effect on health.

The only other difference was that women had increased levels of "good" cholesterol during the two meal a day period. While this seems to favour the two-meal pattern, whether this difference would be maintained or have a beneficial impact on health is not clear.

Overall, very little can be concluded from this study. What we can say is that obese women eating a calorie-controlled diet can lose weight, and how they split these calories up does not seem to have much impact on their weight loss in the short term.

Some of the participants reported being more "comfortable" with the two meals a day pattern, while others reported the opposite. Reaching and maintaining a healthy weight brings health benefits, and people should use whatever meal frequency they find helps them achieve this. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Want to lose weight? Eat SIX small meals a day say nutrition experts in new study. Mirror Online, May 7 2015

Forget three square meals a day - eating SIX smaller portions is better for your waistline, say experts. Mail Online, May 8 2015

Links To Science

Alencar MK, et al. Increased meal frequency attenuates fat-free mass losses and some markers of health status with a portion-controlled weight loss diet. Nutrition Research. Published March 17 2015

Categories: Medical News

Media hypes molecular blood pressure regulation discovery

Medical News - Fri, 05/08/2015 - 12:35

The Mail Online hails a "breakthrough in treating high blood pressure", saying scientists have discovered how the body regulates it, which could "slash risk of heart attacks and stroke".

But there's a hint of hype around this news as, perhaps surprisingly, the research that prompted this story did not test any new treatments for high blood pressure.

Instead, studies in the laboratory and in mice found genetically engineered mice lacking a protein called ERp44 had low blood pressure. This led the researchers to do other experiments, showing how the protein works with another protein called ERAP1, which is involved in controlling blood pressure.

Overall, this finding has increased researchers' knowledge of how blood pressure is controlled at a molecular level. While it is likely these processes in mice are similar to those of humans, further study would be needed to confirm this.

Even if it is confirmed, as yet the researchers have developed no drugs to target these proteins. Any new treatment aiming to do so would need to be thoroughly tested in the laboratory before it would be safe enough to test on humans. 

Where did the story come from?

The study was carried out by researchers from the RIKEN Brain Science Institute and other research centres in Japan.

It was funded by JST International Cooperative Research Project-Solution Oriented Research for Science and Technology, the Japan Society for the Promotion of Science, Scientific Research C, The Moritani Scholarship Foundation, and RIKEN.

The study was published in the peer-reviewed journal Molecular Cell.

The Mail Online headline overstates these findings in two ways – first, this experiment is only in mice and needs to be confirmed in humans. Second, we don't yet know if these findings will lead to treatments for human high blood pressure or other conditions. 

What kind of research was this?

This laboratory and animal research studied the function of a protein known as ERp44. Researchers wanted to know more about this protein, which is already known to be involved in helping make sure other cell proteins are made properly and controlling how they are secreted from the cell.

Often, when the function of a protein is not fully understood, researchers start by genetically engineering mice to lack the protein. They then look at what happens to these mice to find out more.

This is what this study has done. This type of study can suggest ways human diseases might be treated, but is at a very early stage and no drugs were involved. 

What did the research involve?

The researchers genetically engineered mice to lack the ERp44 protein. They studied the health and development of these mice, and looked at exactly what knock-on effect a lack of ERp44 had on the cells.

They also identified which proteins ERp44 normally interacts with, and studied the effect of removing one of these interacting proteins from the blood of mice lacking ERp44.

What were the basic results?

The researchers found baby mice that lacked the ERp44 protein produced less urine and had changes in the internal structure of their kidneys. Adult mice lacking ERp44 had low blood pressure.

These findings were similar to those known to occur in mice with low levels of the blood pressure-controlling hormone angiotensin. The researchers found angiotensin was being broken down more quickly than normal in ERp44-lacking mice.

The researchers then looked for proteins that interacted with ERp44. They found a protein called ERAP1 and showed how this protein formed a bond with the ERp44 protein. Experiments in cells in the lab suggested ERp44 was stopping ERAP1 from being released from the cells.

This led the researchers to believe that more ERAP1 would be released in ERp44-lacking mice, and this could be responsible for the breakdown of the angiotensin.

To test this, they removed the ERAP1 from blood samples from ERp44-lacking mice using antibodies. As they expected, these ERAP1-depleted samples did not show as much breakdown of angiotensin.

The researchers also found that in mice experiencing severe infection (which usually causes a large drop in blood pressure), the cells produce more ERp44 and ERAP1, and these form more of the ERp44-ERAP1 "complex".

These mice have less of a drop in their blood pressure than mice genetically engineered to have half the normal levels of ERp44. This suggests the extra ERp44-ERAP1 complex helps normal mice stop their blood pressure dropping too low during infection. 

How did the researchers interpret the results?

The researchers concluded they had shown that "ERp44 is required for suppressing the release of excess ERAP1 into the bloodstream in order to prevent unfavourable [low blood pressure]."

They reported how variations in the gene encoding ERAP1 have been associated with low blood pressure, psoriasis and a skeletal problem called ankylosing spondylitis, and that, "development of specific drugs targeting ERAP1 activity may contribute to treatment of these diseases". 

Conclusion

This animal research has identified a role for certain proteins in controlling blood pressure. Studies like this give clues as to how human biology works and how it can be fixed when it goes wrong.

While the researchers suggest drugs targeting the proteins identified could help develop drugs to treat abnormal blood pressure, these drugs have not yet been developed.

Researchers will need to develop such chemicals and thoroughly test their effects in animals first before they can be tested in humans.

As such, this is early stage research, and there has not been a "treatment breakthrough" yet, because no treatment exists. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Breakthrough in treating high blood pressure, as scientists discover how the body regulates it - and say discovery could slash risk of heart attacks and stroke. Mail Online, May 7 2015

Links To Science

Hisatsune C, et al. ERp44 Exerts Redox-Dependent Control of Blood Pressure at the ER. Molecular Cell. Published May 7 2015

Categories: Medical News

Probiotic yoghurts 'may help' hay fever

Medical News - Thu, 05/07/2015 - 15:00

"Is YOGURT the secret to easing hay fever? Probiotics can 'relieve sneezing and itchy eyes'," the Daily Mail reports. New research found initial, but not definitive, evidence that probiotics may offer some relief from this common allergic condition for some people.

Hay fever affects around one in five people, causing frequent sneezing, a runny nose and itchy eyes. It happens when an allergic irritant sets off an immune response in the mucosa of the nasal passages, causing an allergic reaction. Most often, people are sensitive to seasonal allergens such as pollen, hence the name hay fever. However, some people can get symptoms all year round (this is known as perennial allergic rhinitis).

There has been a lot of interest in whether probiotics – "healthy" bacteria that live in the gut – can relieve symptoms.

This review identified 23 trials investigating the effect of probiotic supplements on allergic rhinitis. These studies were highly variable in terms of their study populations, the probiotics used, outcomes measured and, importantly, results. While most studies found some benefit for at least one outcome, others found no benefit at all.

The authors concluded that probiotics may have a beneficial effect when added to other allergic rhinitis treatments, but that higher-quality, larger studies with standardised measures of effects are still needed.

 

Where did the story come from?

The study was carried out by researchers from the University School of Medicine in Nashville, US. The study's funding is not mentioned. It was published in the peer-reviewed medical journal International Forum of Allergy and Rhinology.

Most of the UK media coverage reported the study’s headline results uncritically, suggesting that yoghurt could be a cure for hay fever symptoms. However, not all yoghurt is probiotic, and it is not clear whether the people in these studies were taking these probiotics in the form of yoghurt or capsules. The study was not specific to people with hay fever, as it included people with allergic rhinitis not associated with seasonal allergens. The study did not find an overall effect on standard symptom scores, and the study authors did not recommend probiotics as a standalone cure.

 

What kind of research was this?

This was a systematic review that searched the literature to identify randomised controlled trials (as well as two randomised crossover studies) investigating the effects of probiotics on allergic rhinitis. Hay fever is a type of allergic rhinitis that medical specialists refer to as "seasonal allergic rhinitis".

Where study designs and measured outcomes were similar enough, it pooled the results of these studies in meta-analysis. The review aimed to see whether probiotic supplements helped people with allergic rhinitis.

Systematic reviews of randomised controlled trials are usually a good source of reliable evidence to show whether a treatment is helpful. However, the review is only as good as the studies that have been carried out.

 

What did the research involve?

Researchers searched for randomised controlled trials that studied the effect of probiotics on allergic rhinitis, according to pre-defined specifications, and summarised the results of the studies that met their quality standards. They then carried out a meta-analysis, pooling the results of the studies that had used standardised clinical measures of allergic rhinitis treatment, to get an overall picture of the treatment effect.

The researchers found 153 studies, 42 of which were relevant. They excluded 19 studies, mainly because they didn't give results using standardised clinical outcome measures. The remaining 23 studies were mostly double-blind randomised controlled trials, with two randomised crossover studies. These studies, which included 1,919 participants, were included in the review.

The outcomes measured included two measures of symptom control. These were the Rhinitis Quality of Life Questionnaire (RQLQ), which includes questions about how much symptoms affect people's daily activities, and the Rhinitis Total Symptom Score (RTSS). Some studies had also measured blood levels of immunoglobulin E (IgE) – a naturally occurring antibody involved in allergic reactions.

Where possible, they pooled trial results for these different measures to get an overall picture of the effect of probiotics.

 

What were the basic results?

The review found that 17 of the 23 studies reported a significant improvement in at least one of the outcomes measured for people taking probiotics, while six studies showed no benefit from probiotics.

Results of the meta-analysis were mixed. The only measure that showed a clear beneficial effect from taking probiotics was the RQLQ. When the results were pooled from the four studies that measured RQLQ in a way that allowed direct comparison, the study found a mean difference in score compared with placebo of -2.23 points (95% confidence interval (CI) -4.07 to -0.40). The researchers say a reduction of 0.5 or more is considered important.

The researchers found no statistically significant difference between placebo and probiotic treatment for the RTSS (pooled analysis of four trials) or IgE scores (from eight trials).
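Pooled estimates of this kind are typically produced by inverse-variance weighting, where more precise studies count for more. As a rough illustration of how a pooled mean difference and its 95% confidence interval are calculated, here is a minimal sketch using made-up per-study numbers (illustrative values only, not the review's actual data):

```python
import math

# Hypothetical (mean difference, standard error) pairs for four trials --
# illustrative values only, not the figures from this review.
studies = [(-3.0, 1.0), (-2.5, 0.8), (-1.0, 1.5), (-0.5, 2.0)]

# Fixed-effect inverse-variance weighting: each study is weighted by 1/SE^2,
# so more precise (smaller-SE) studies contribute more to the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate, then a 95% confidence interval.
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled MD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

A confidence interval that excludes zero (as here, and as with the review's RQLQ result) indicates a statistically significant pooled effect; an interval spanning zero, as with the RTSS and IgE results, does not.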

 

How did the researchers interpret the results?

The researchers were cautious about the results. They said that differences between the studies, such as the different types of probiotic used and the different study population sizes, meant it was possible that the positive effect of probiotics on quality of life they found "may be at least partially due to confounding factors and differences between studies". They point out that the two older, smaller studies showed a bigger effect, while two bigger, more recent studies found no effect or a small effect.

They say their meta-analysis "suggests that probiotics have the potential to alter disease severity, symptoms and quality of life" for people with allergic rhinitis, but that the evidence is not strong enough to recommend using probiotics alone to relieve it.

 

Conclusion

This review has identified 23 trials investigating the effect of probiotics upon allergic rhinitis, which most people experience as hay fever. Overall, it found some evidence that taking probiotic yoghurts or supplements could improve the quality of life of people with allergic rhinitis, compared to taking placebo. However, it didn't find a direct effect on overall symptoms, or on levels of IgE in the blood.

The review of the data showed some of the problems with research into probiotics in relation to allergies. Many different strains of probiotic organisms were used across the studies, although most were from the genera Bifidobacterium or Lactobacillus. It's possible that some strains work well and others don't work at all. It's also unclear from the review what form these probiotics were taken in – for example, as yoghurt or yoghurt drinks, or as capsules or tablets. This could affect their absorption and effects.

The populations included in these studies are also likely to be highly variable. The age categories, for example, ranged from young children in some, to middle-aged adults in others. We also don’t know what they were specifically suffering from. For example, some could have had hay fever, while others could have had an allergy to dust mites or animal fur.

Only a few studies reported their results using standardised measures, making it hard to pool data from different studies. Though 23 studies were identified, pooled analyses for effects on symptoms and quality of life came from only four studies each.

The review showed that 17 of the 23 studies included found at least one positive clinical outcome for patients taking probiotics. However, this did not translate into convincing results on symptom scores when results from four of these studies were pooled. Pooled results on quality of life were positive, though without further information it is not possible to tell how much effect there may be on the person’s daily life. The researchers say that a reduction of 0.5 or more is considered important, so the 2.23 reduction in score should make a difference. However, if probiotics had no effect on rhinitis symptoms, it would be interesting to know in what ways they were helping a person’s quality of life.

Allergic rhinitis, or hay fever specifically, is a common problem in the UK, and treatments don't help everyone. While the evidence for probiotics is not strong enough to recommend them as treatment, the researchers said that few people reported any adverse effects from taking them. Some people taking probiotics reported diarrhoea, abdominal pain or flatulence (wind), but so did similar numbers of people taking placebos.

Overall, the review cannot answer with certainty how much benefit probiotics may have, and as the researchers say, better-quality evidence is needed.

Other treatments for hay fever, such as antihistamine medication, have proved to be effective for many people.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Is YOGURT the secret to easing hay fever? Probiotics can 'relieve sneezing and itchy eyes'. Daily Mail, May 7 2015

Could probiotic yoghurt be the key to fighting hay fever? The Daily Telegraph, May 6 2015

Eating a pot of yogurt could keep hay fever symptoms at bay. Daily Express, May 6 2015

Links To Science

Zajac AE, Adams AS, Turner JH. A systematic review and meta-analysis of probiotics for the treatment of allergic rhinitis. International Forum of Allergy & Rhinology. Published online April 20 2015

Categories: Medical News

Smartphone app used to scan blood for parasites

Medical News - Thu, 05/07/2015 - 15:00

"A smartphone has been used to automatically detect wriggling parasites in blood samples," BBC News reports. It is hoped the customised device could help in programmes to get rid of parasites in parts of Africa.

In certain regions of Africa, two parasitic diseases – river blindness and elephantiasis – are a major health problem affecting millions. Both of these diseases can be treated with a drug called ivermectin.

But if you give somebody ivermectin and they also have high numbers of a less harmful parasite called Loa loa (African eye worm) inside their body, it can trigger potentially deadly side effects.

This has hampered large-scale ivermectin treatment programmes aimed at eradicating river blindness and elephantiasis in some areas, as people need to have time-consuming tests for Loa loa levels before they can be treated.

The new device – a standard iPhone hooked up to a specially designed lens module – allows people with minimal training to quickly measure Loa loa levels in a sample of blood.

This study found the device performed similarly to standard, more time-consuming, laboratory tests performed by trained technicians.

But this was a small pilot study in just 33 people, and larger studies are needed to confirm the technique's accuracy.

The development of a technique that could be carried out quickly in the field without much specialised equipment could be an important step forward in treating these parasitic diseases.

The researchers speculate the device could also be used to detect other moving disease-causing parasites in the blood. 

Where did the story come from?

The study was carried out by researchers from the University of California, the National Institute of Allergy and Infectious Diseases in the US, the Centre for Research on Filariasis and other Tropical Diseases, the University of Yaoundé in Cameroon, and the University of Montpellier in France.

It was funded by the Bill and Melinda Gates Foundation, the University of California, the US Agency for International Development, the Purnendu Chatterjee Chair Fund, and the National Institute of Allergy and Infectious Diseases.

Some of the researchers hold patents or have applied for patents relating to this new approach, and two hold shares in the company that developed the device.

The study was published in the peer-reviewed journal, Science Translational Medicine.

The BBC's coverage was fair and included a comment from an independent expert in the UK. 

What kind of research was this?

This laboratory study looked at whether a mobile phone video microscope could accurately detect and measure the amount of a parasitic worm called Loa loa (African eye worm) in a drop of a patient's blood.

In certain regions of Africa, parasitic diseases are a major public health problem affecting millions of people. In particular, an infection called onchocerciasis, or river blindness, is the second most common cause of infectious blindness worldwide, and can also result in disfiguring skin disease.

Lymphatic filariasis leads to elephantiasis, which is marked by disfiguring swelling and is the second leading cause of disability worldwide.

Both these diseases can be treated with the antiparasitic drug ivermectin, but this can have dangerous side effects for patients who are also infected with Loa loa.

When there are high numbers of microscopic Loa loa worms in a patient's blood, treatment with ivermectin can lead to severe and sometimes fatal brain damage. The authors say this has led to suspension of mass public health campaigns to administer ivermectin in central Africa.

At present, the standard method of assessing Loa loa levels involves trained technicians manually counting the worms using conventional laboratory microscopes. This process is impractical for health professionals working in communities where they do not have access to labs, or in large ivermectin treatment campaigns.

This study tested a new method researchers developed for detecting Loa loa, which uses a smartphone camera and avoids the need to send samples to a lab. 

What did the research involve?

To test the accuracy of the new technique, researchers compared it with gold standard microscope analysis in a laboratory. They did this for blood samples taken from 33 people in Cameroon, who were all over the age of six and were potentially infected with Loa loa.

The new technique uses a mobile phone-based video microscope that automatically detects the tell-tale wriggling movement of the worms. It examines a fingerprick sample of blood using time-lapse photography and uses this characteristic movement to count the worms.

The process uses an iPhone 5 camera attached to a 3D-printed plastic base, where the sample of blood is positioned. Control of the device is automated through an app the researchers developed for the purpose.

The patients' blood was taken from a finger prick and then loaded into two rectangular capillaries to obtain duplicate measurements. A series of videos was taken of each sample by the mobile phone software.

The researchers say it takes a minute to prick the finger and load the blood into the capillary, and the whole process takes two minutes at most, from inserting the sample to the phone displaying the results.

In total, 5 or 10 videos were taken of each sample, resulting in some 300 videos. Sixteen of these were excluded from the analysis either because of inconsistent counts or device malfunction.

Blood was also taken from each patient for gold standard laboratory analysis for Loa loa worms. These samples were transported to a central laboratory for assessment by two independent technicians.

The counts from this analysis were used to assess whether the Loa loa worm count was below the level at which it was safe to treat patients with ivermectin. This was called the treatment threshold.

The researchers then compared results from the smartphone microscope with those from the laboratory.   

What were the basic results?

The researchers found the Loa loa worm count measured by the mobile phone video was very similar to the results from the laboratory. Compared with the laboratory analysis, among the smartphone samples:

  • there were no false negatives – that is, there were no patients who had a worm count above the safe treatment threshold of gold standard methods who were incorrectly identified as safe for treatment by the smartphone technique
  • there were two false positives – that is, two patients whose worm count fell below the safe treatment threshold by gold standard methods were incorrectly identified as not safe for treatment by the smartphone technique

This meant the mobile phone device had:

  • 100% sensitivity – this measures how good the test is at identifying those with an unsafe worm count and who should not be treated with ivermectin
  • 94% specificity – this measures how good the test is at identifying those with a safe worm count who could be treated with ivermectin; this means 6% of people tested would be told their worm levels were unsafe when in fact they were safe  
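The two bullet points above boil down to a standard confusion-matrix calculation. Here is a minimal sketch of it, using illustrative cell counts chosen to reproduce the reported 100%/94% figures (the study's exact counts are not given in this summary):

```python
# Illustrative confusion-matrix counts (assumed, not the study's exact table).
# "Positive" = worm count above the safe treatment threshold (unsafe to treat).
true_pos = 10   # unsafe by gold standard, correctly flagged unsafe by the phone
false_neg = 0   # unsafe by gold standard, missed by the phone (none reported)
false_pos = 2   # safe by gold standard, wrongly flagged unsafe by the phone
true_neg = 31   # safe by gold standard, correctly cleared by the phone

# Sensitivity: proportion of truly unsafe patients the device catches.
sensitivity = true_pos / (true_pos + false_neg)
# Specificity: proportion of truly safe patients the device correctly clears.
specificity = true_neg / (true_neg + false_pos)

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```

For a safety screen like this, zero false negatives (100% sensitivity) is the critical property: no patient who is unsafe to treat slips through, at the cost of a small fraction of safe patients being flagged unnecessarily.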

How did the researchers interpret the results?

The researchers say this new technology could be used at the point of care to identify patients who could not be treated safely using ivermectin.

They say this would allow the mass drug treatment for both river blindness and elephantiasis in central Africa to be resumed.   

Conclusion

This study suggests a new smartphone-based approach could provide a quick way of measuring levels of infection with the Loa loa worm in blood samples, and with a high level of accuracy.

This technique could allow assessment of people's infection in communities without easy access to the laboratory testing that is usually used to detect the worms.

This is important, as people with high levels of this infection can suffer potentially fatal side effects with the drug ivermectin, which is used to treat two other parasitic infections.

It's worth bearing in mind that this was a pilot study in only 33 people using a prototype device. The new device will require more refinement and testing to make sure it performs well enough before it can be put into practice. 

The test seemed to correctly pick up all people with worm levels that would make ivermectin unsafe, but did class 6% of people as having unsafe levels when in fact laboratory tests found they had safe levels. This means that 6% of people might miss out on ivermectin unnecessarily.

If its accuracy is confirmed, this new approach could allow health workers to quickly determine on site whether it is safe to give someone ivermectin for the treatment of river blindness or elephantiasis.

Elephantiasis is a leading cause of preventable disability in the developing world, while river blindness is the second leading cause of infection-related blindness. Approaches that allow cheap, effective and safe mass treatment programmes could have an important impact on health.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Smartphone 'scans' blood for parasites. BBC News, May 7 2015

Links To Science

D'Ambrosio MV, Bakalar M, Bennuru S, et al. Point-of-care quantification of blood-borne filarial parasites with a mobile phone microscope. Science Translational Medicine. Published online May 6 2015

Categories: Medical News