Medical News

'Social jet lag' linked to obesity and 'unhealthy' metabolism

Medical News - Wed, 01/21/2015 - 14:30

"Social jet lag is driving obesity" is the misleading headline in The Daily Telegraph. A new study only found a link between "social jet leg", obesity, and metabolic markers that may indicate a person has an increased risk of obesity-related diseases, such as type 2 diabetes. A cause and effect relationship was not found.

Social jet lag is the term used to describe the difference in someone's sleep patterns between work days and free days – also known as having a lie-in at the weekend.

The researchers' hypothesis was that regularly disrupting our sleep patterns could upset the body clock (circadian rhythms), which could then have a harmful effect on the metabolism.

The study of more than 800 non-shift workers found people with a greater difference in sleep patterns between free days and work days were more likely to be obese and "metabolically unhealthy" (have markers for obesity-related diseases) than those with little or no difference between these timings.

But the study does not prove regular lie-ins cause obesity or obesity-related diseases, as it assessed sleep patterns and health at the same time. It is possible with this type of study that the reverse is true – that obesity and any associated health conditions may cause people to lie in more.

Overall, this study provides no proof having a lie-in will affect your health, though the occasional early-morning Saturday stroll may improve both your fitness and wellbeing.

 

Where did the story come from?

The study was carried out by researchers from the Medical Research Council (MRC) and the University of London in the UK, Duke University and the University of North Carolina in the US, and the University of Otago, New Zealand.

It was funded by the US National Institute on Aging and the MRC.

The study was published in the peer-reviewed International Journal of Obesity.

The quality of the UK's media coverage of the study was mixed. The Independent correctly mentioned there was no proof social jet lag causes obesity, but none of the papers mentioned the possibility of reverse causation: that obesity makes people more likely to lie in, rather than lie-ins causing obesity.

The Daily Telegraph's choice of headline was particularly unhelpful, as it implied social jet lag was now a proven partial cause of the obesity epidemic and the related complications. This is not the case.

 

What kind of research was this?

This was a cross-sectional analysis of a cohort study that aimed to look at the association between social jet lag and both obesity and metabolic markers that may indicate obesity-related disease. Social jet lag is a measure of the discrepancy in sleep timing between our work and free days.

The researchers say travel-induced jet lag results in problems with circadian rhythms (the body's internal clock), which causes temporary problems with metabolic rate (the rate at which the body uses up energy).

However, they suggest social jet lag can become chronic throughout someone's life and therefore have longer term consequences for metabolism, possibly increasing the risk of metabolic syndrome. Metabolic syndrome is the medical term for a combination of diabetes, high blood pressure and obesity.

The researchers also say recent research found people with higher social jet lag and a greater discrepancy between internal and social clocks were found to have a higher self-reported body mass index (BMI).

They consider it possible that if our internal clocks are at odds with external schedules, this may partly underlie the increase in obesity seen in the last few decades.

Cross-sectional studies look at all data at the same time, so they cannot be used to see if one factor (in this case, social jet lag) has caused the others (in this case, obesity or metabolic markers).

 

What did the research involve?

This study included 815 non-shift workers who were participants in an ongoing health study in New Zealand (the Dunedin Longitudinal Study), which is following more than 1,000 people born between 1972 and 1973 to investigate links between health and behaviour.

At the age of 38, each participant was asked to fill in a standard questionnaire to assess social jet lag, as well as sleep duration and chronotype (their "natural" preference in sleep timing). 

Social jet lag was measured by subtracting each person's midpoint of sleep on work days from their midpoint of sleep on free days (assuming five work days and two free days a week as standard).

So, for example, if someone slept from 12am to 8am on workdays, the midpoint was 4am. If they then slept from 1am to 11am on free days, the midpoint was 6am, giving a social jet lag of two hours.
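To make this arithmetic concrete, here is a minimal Python sketch of the calculation. This is our own illustration, not code from the study; the convention of counting hours from midnight at the start of the night (so times after midnight are written as 24 plus the hour) is an assumption made for simplicity.

def sleep_midpoint(bed_hour, wake_hour):
    """Midpoint of a sleep episode.

    Hours are counted from midnight at the start of the night, so times
    after midnight are given as 24 + hour (a 1am bedtime is 25, an 8am
    wake time is 32). This keeps bedtime smaller than wake time.
    """
    return (bed_hour + wake_hour) / 2

def social_jet_lag(work_midpoint, free_midpoint):
    """Social jet lag in hours: free-day midpoint minus work-day midpoint."""
    return free_midpoint - work_midpoint

# The worked example above: sleep 12am-8am on work days (midpoint 4am)
# and 1am-11am on free days (midpoint 6am).
work = sleep_midpoint(bed_hour=24, wake_hour=32)  # 28.0, i.e. 4am
free = sleep_midpoint(bed_hour=25, wake_hour=35)  # 30.0, i.e. 6am
print(social_jet_lag(work, free))                 # 2.0 hours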

Researchers also measured participants' height and weight to calculate BMI, with obesity defined as a BMI of 30 or more. Waist circumference and fat mass were also measured.

The researchers also assessed whether participants had markers of metabolic syndrome, a disorder associated with diabetes and obesity.

They assessed five biomarkers, and people with "high-risk values on three or more" were defined as having metabolic syndrome. These were:

  • waist circumference (88cm or more for women, 102cm or more for men)
  • high blood pressure (130/85mm Hg or higher)
  • low levels of high-density lipoprotein (HDL, or "good") cholesterol
  • high triglycerides (another blood fat)
  • high blood levels of glycated haemoglobin (an indicator of blood glucose control – a marker for diabetes)

They also assessed blood levels of an inflammatory marker called C-reactive protein.

The authors say recent research has shown a subset of obese individuals who are "metabolically healthy". They therefore created a measure for obesity status with three levels:

  • non-obese (a BMI of below 30)
  • healthy obese (a BMI of 30 or above, but no metabolic syndrome)
  • unhealthy obese (a BMI of 30 or above and metabolic syndrome)
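
As a minimal illustration of how these definitions combine (a sketch under our own assumptions, not the researchers' code), the classification could be expressed as:

def has_metabolic_syndrome(high_risk_markers):
    """Metabolic syndrome: high-risk values on 3 or more of the 5 biomarkers
    listed above (waist circumference, blood pressure, HDL cholesterol,
    triglycerides, glycated haemoglobin)."""
    return high_risk_markers >= 3

def obesity_status(bmi, high_risk_markers):
    """The researchers' three-level obesity status."""
    if bmi < 30:
        return "non-obese"
    if has_metabolic_syndrome(high_risk_markers):
        return "unhealthy obese"
    return "healthy obese"

# Hypothetical participant: BMI of 32 with high-risk values on 2 of the 5 markers.
print(obesity_status(bmi=32, high_risk_markers=2))  # "healthy obese"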

Researchers also asked people about their current smoking status (since smoking is positively associated with social jet lag and may also keep weight low) and socioeconomic status, assessed by their current or most recent occupation.

They were then allocated to one of six categories (from 1 – unskilled labourer to 6 – professional). Those not working were rated according to their educational status.

Researchers analysed their results to determine if social jet lag was associated with "unhealthy" obesity. They created three models, with one adjusting the figures for potential confounders, including smoking, socioeconomic status, sleep duration, and sleep preferences.

 

What were the basic results?

The researchers report social jet lag was associated with numerous measures of metabolic dysfunction and obesity, with higher social jet lag levels in "metabolically unhealthy" obese individuals.

Among metabolically unhealthy obese individuals, social jet lag was additionally associated with high blood levels of glycated haemoglobin and CRP (an indicator of inflammation).

Individuals with higher social jet lag scores were more likely to be obese (odds ratio [OR] 1.2, 95% confidence interval [CI] 1.0 to 1.5) and to meet the researchers' criteria for metabolic syndrome (OR 1.3, 95% CI 1.0 to 1.6) – though both of these risk increases are only of borderline statistical significance.

 

How did the researchers interpret the results?

The researchers say the findings are consistent with the possibility that, "living against our internal clock may contribute to metabolic dysfunction and its consequences".

They suggest a two-hour difference in sleep patterns at the weekend is the "threshold" for a higher BMI and other biomarkers, although they also point out this association was weakened or non-significant once smoking and socioeconomic status were taken into account.

Further research is needed, they say, to determine the physiological mechanisms underlying these associations.

 

Conclusion

The study involved 815 non-shift workers. It found people with a greater difference in sleep patterns between free days and work days (so-called "social jet lag") were more likely to be obese and "metabolically unhealthy" (have markers for obesity-related diseases) than those with little or no difference between these timings.

This study adds to previous research in both animals and humans that has explored the possible effects altering the body clock may have on our metabolism and on being overweight or obese. A recent UK survey found a link between shift work and chronic diseases, which we discussed at the end of 2014.

However, this new study cannot prove regular lie-ins cause obesity or obesity-related diseases.

The study is cross-sectional, assessing sleep patterns and health at the same time. It is possible with this type of study that the reverse is true – that obesity and any associated health conditions may cause people to lie in more whenever possible.

There may be many underlying factors this study has not taken into account that are influencing the apparent relationship between obesity, metabolic markers, and higher levels of social jet lag.

For example, the study did not take account of people's diets or their exercise levels, which are two key factors that influence BMI and may also influence our sleep patterns.

The increased risks of obesity and metabolic syndrome with social jet lag were only of borderline statistical significance in any case, which further indicates the overall lack of strength in these associations.

Experts tend to agree it is best to keep to a regular sleep schedule on weekdays and weekends to prevent sleep problems. Whether following this advice can also keep the weight off is uncertain. Overall, this study provides no proof having a lie-in will affect your health.

Still, we can't help but agree with the recommendations of one of the authors of the study, as quoted on the Mail Online website: "I don't want to tell people not to have a lie-in because I enjoy one myself," lead study author Michael Parsons said. He then went on to recommend that employers could offer flexible hours, so staff could synchronise their week days with their weekends.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Social jet lag is driving obesity and illness, say scientists. The Daily Telegraph, January 20 2015

Fancy a lie-in on weekends? New study finds it could lead to obesity and diabetes. The Independent, January 20 2015

Looking forward to your Saturday lie-in? Careful, it may be a health hazard: Changes in sleep pattern between work days and weekend can raise chance of obesity and diabetes. Daily Mail, January 21 2015

Links To Science

Parsons MJ, Moffitt TE, Gregory AM, et al. Social jetlag, obesity and metabolic disorder: investigation in a cohort study. International Journal of Obesity. Published online December 22 2014


Becoming healthier may motivate your partner to join in

Medical News - Tue, 01/20/2015 - 15:30

“Fitness ‘rubs off on your partner’,” BBC News reports.

This headline is based on a study of more than 3,000 married couples aged 50 and over in the UK, where at least one of the partners smoked, was inactive, or was overweight or obese at the start of the study. It followed them up and looked at their and their partner’s behaviours over time.

It found that a person was more likely to change their unhealthy behaviours if their partner did too, more so than if they had a partner who was always healthy, or one who remained unhealthy.

These behaviours included quitting smoking if they smoked, increasing physical activity levels and losing some weight.

There are some limitations to the study. For example, while the researchers took into account some factors that could affect the results, others – such as unmeasured health conditions – could still be having an impact.

Still, the findings seem plausible; working together as a team to improve health, whether it be just you or your partner, or in a larger exercise or weight loss group, may help in practical ways (such as eating the same foods), as well as boosting motivation and confidence levels.

 

Where did the story come from?

The study was carried out by researchers from University College London. Funding was provided by the US National Institute on Aging and a consortium of UK government departments co-ordinated by the Office for National Statistics. Additional support for the authors was provided by the British Heart Foundation and Cancer Research UK.

The study was published in the peer-reviewed medical journal JAMA Internal Medicine.

The coverage of this study in the news has been generally reasonable. The BBC’s headline “Fitness ‘rubs off on your partner’” may make it sound like you don’t have to do anything to get fitter – as long as your partner is – but unfortunately this is not the case.

 

What kind of research was this?

This was an analysis of data from an ongoing cohort study of older adults called the English Longitudinal Study of Ageing (ELSA). It aimed to look at the effect of a partner’s behaviour on a person making healthy behaviour changes.

If a person has unhealthy behaviours (such as eating unhealthily), their partner is also likely to, and if one of them changes this behaviour then the other often does too.

In this study the researchers specifically wanted to look at whether there was a difference in the effect of having a partner who is consistently healthy (e.g. had always eaten healthily) and one who had unhealthy behaviour but then makes a positive change (e.g. starts eating healthily).

While other studies have assessed the impact of partners changing behaviour, few have assessed this specific question.

This type of study is the best way of looking at the impact of behaviour that people choose themselves in real life. The main limitation to this type of study is that factors other than the one the researchers are looking at (called confounders) could also have an effect. The researchers can take steps in their analyses to reduce the effect of potential confounders, but they can never be entirely sure they have accounted for every confounder.

 

What did the research involve?

The ELSA study started prospectively collecting data on adults aged 50 and over in England in 1998.

For the current study researchers looked at information on 3,722 married couples who lived together, where at least one had an unhealthy behaviour or characteristic at the start of the study (smoking, physically inactive, or overweight or obese). They then looked at whether their partner’s behaviour over time had an influence on whether the person changed their unhealthy behaviours.

Participants in ELSA had taken part in the Health Survey for England in 1998, 1999 and 2001. All household members aged 50 and over, as well as their partners, were invited for interview. Those who enrolled completed a computer-assisted interview and self-administered questionnaires every two years from 2002. Smoking and physical activity were assessed in every questionnaire. Every four years, a nurse also visited the participants in their homes for a health assessment, which included measuring height and weight.

For the current study, the researchers analysed data for the first two consecutive assessments that the person and their partner completed. They looked at smoking, physical activity and weight in people and their partners, and whether individuals:

  • quit smoking (said they smoked at the first assessment but not at the second assessment)
  • became active after being inactive (said they took part in moderate to vigorous activity less than once a week at the first assessment, but took part more often than this at the second assessment)
  • lost weight (were overweight or obese at the first assessment and had lost at least 5% of their body weight by the second assessment)

A partner was considered “consistently” healthy if they did not have the unhealthy behaviour at either the first or the second assessment.

Couples where the partner moved from a healthy behaviour to a less healthy behaviour were excluded from the analyses, as there were so few of them.

The researchers took into account a number of potential confounders in their analyses, including:

  • age
  • gender
  • socioeconomic status (household non-pension wealth)
  • health conditions (cancer, diabetes, heart disease, stroke, heart attack, or other long-standing illness that limited their activities)

 

What were the basic results?

At the start of the study:

  • 13.9% of men and 14.8% of women smoked
  • 31.2% of men and 35.5% of women were physically inactive
  • 77.3% of men and 67.6% of women were overweight or obese

By the next assessment overall:

  • 17% of smokers had quit
  • 44% of inactive individuals had become active
  • 15% of overweight or obese individuals had lost at least 5% of their body weight

The researchers found that when one partner changed to a healthier behaviour, the other person was more likely to also change to a healthier behaviour than if their partner had remained unhealthy. This was the case across all three behaviours:

  • If their partner stopped smoking, 50% of women and 48% of men also stopped smoking, compared to only 8% stopping smoking if their partner kept smoking.
  • If their partner became more physically active, 66% of women and 67% of men also became more physically active, compared to 24% of women and 26% of men becoming more active if their partner remained inactive.
  • If their partner lost weight, 36% of women and 26% of men also lost weight, compared to 15% of women and 10% of men if their partner did not lose weight.

Having a consistently healthy partner also increased the likelihood that a person would stop smoking or become more active, but not the likelihood of losing weight. For all three behaviours, having a partner who changed to a healthier behaviour was associated with a greater likelihood of a person themselves changing behaviour than having a partner with consistently healthy behaviour. The impact of a partner’s behaviour was limited to that specific behaviour (e.g. smoking, or activity, or weight loss) and was not associated with changes in other behaviours in the other partner.

 

How did the researchers interpret the results?

The researchers conclude that “men and women are more likely to make a positive health behavior (sic) change if their partner does too, and with a stronger effect than if the partner had been consistently healthy in that domain”. They suggest that involving partners in programmes aiming to get a person to change their behaviour might improve the outcomes of these programmes.

 

Conclusion

This cohort study has found that individuals with unhealthy behaviours such as smoking, being inactive or being overweight are most likely to change these behaviours if their unhealthy partner also changes these behaviours.

Having a partner who has consistently healthy behaviours was also associated with a greater likelihood of change in smoking and activity compared to a consistently unhealthy partner, but less so than having a partner who changed behaviour.

There were some limitations to the study, including that:

  • The study took into account some confounders, such as age and some health conditions, but other factors could also be having an effect – such as unmeasured health conditions or events. For example, there could have been a mutual life event experienced by both partners that motivated the change, such as the death of a friend or relative from lung cancer leading to quitting smoking.
  • As both partners were assessed at the same time it is not possible to say which person changed first, or whether they both changed together.
  • Smoking and physical activity were reported by the participants themselves and not verified, so may not be accurate.
  • Weight was measured by a nurse and was therefore more likely to be accurate.
  • Behaviours were assessed only twice, either two or four years apart. If a person changed between those assessments but then reverted to their original behaviour this would not have been picked up, and it is not possible to say how long the changes lasted.
  • Results may not apply to younger couples, as the study was restricted to couples with at least one partner aged 50 or over at the start of the study.

It is known that social support from family, friends or other groups can be an important component in people changing their behaviours.

This study supports this concept, and suggests that the impact may be greatest, for partners at least, if that partner is also changing their behaviour.

Our Find Services section can provide details of exercise, stop smoking and weight loss services, many of which are free, in your local area.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Fitness 'rubs off on your partner'. BBC News, January 19 2015

Let’s quit together: health kicks are easier if your partner signs up too. The Guardian, January 19 2015

The best way to get in shape? Get your partner involved: Couples who ditch bad habits are more likely to succeed than those going it alone. Mail Online, January 19 2015

Couples who get fit together more likely to succeed. The Daily Telegraph, January 19 2015

Links To Science

Jackson SE, Steptoe A, Wardle J. The Influence of Partner’s Behavior on Health Behavior Change - The English Longitudinal Study of Ageing. JAMA Internal Medicine. Published online January 19 2015


Does moderate drinking reduce heart failure risk?

Medical News - Tue, 01/20/2015 - 14:10

"Seven alcoholic drinks a week can help to prevent heart disease," the Daily Mirror reports. A US study suggests alcohol consumption up to this level may have a protective effect against heart failure.

This large US study followed more than 14,000 adults aged 45 and older for 24 years. It found those who drank up to 12 UK units (7 standard US "drinks") per week at the start of the study had a lower risk of developing heart failure than those who never drank alcohol.

The average alcohol consumption in this lower risk group was about 5 UK units a week (around 2.5 pints of low-strength 3.6% ABV lager a week).

At this level of consumption, men were 20% less likely to develop heart failure compared with people who never drank, while for women it was 16%.

The study benefits from its large size and the fact data was collected over a long period of time.

But studying the impact of alcohol on outcomes is fraught with difficulty. These difficulties include people not all having the same idea of what a "drink" or "unit" is.

People may also intentionally misreport their alcohol intake. We also cannot be certain alcohol intake alone is giving rise to the reduction in risk seen.

Steps you can take to help reduce your risk of heart failure – and other types of heart disease – include eating a healthy diet, achieving and maintaining a healthy weight, and quitting smoking (if you smoke).

 

Where did the story come from?

The study was carried out by researchers from Brigham and Women's Hospital in Boston, and other research centres in the US, the UK and Portugal.

It was published in the peer-reviewed European Heart Journal.

The UK media generally did not translate the measure of "drinks" used in this study into UK units, which people might have found easier to understand.

The standard US "drink" in this study contained 14g of alcohol, and a UK unit is 8g of alcohol. So the group with the reduced risk actually drank up to 12 units a week.

The reporting also makes it seem as though 12 units – what is referred to in the papers as "a glass a day" – is the optimal level, but the study cannot tell us this.

While consumption in this lower risk group was "up to" 12 units per week, the average consumption was about 5 units per week. This is about 3.5 small glasses (125ml of 12% alcohol by volume) of wine a week, not a "glass a day".
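For readers who want to do the conversion themselves, here is a short Python sketch of the arithmetic. It is our own illustration using the figures above (14g of alcohol per US "drink", 8g per UK unit, and UK units as millilitres x ABV% / 1,000), not analysis code from the study.

GRAMS_PER_US_DRINK = 14  # as defined in this study
GRAMS_PER_UK_UNIT = 8    # standard UK definition

def us_drinks_to_uk_units(drinks):
    """Convert a number of standard US drinks to UK units of alcohol."""
    return drinks * GRAMS_PER_US_DRINK / GRAMS_PER_UK_UNIT

def volume_to_uk_units(volume_ml, abv_percent):
    """UK units in a drink of a given volume and strength."""
    return volume_ml * abv_percent / 1000

print(us_drinks_to_uk_units(7))     # 12.25 units - the "up to" figure
print(us_drinks_to_uk_units(3))     # 5.25 units - roughly the group average
print(volume_to_uk_units(125, 12))  # 1.5 units - a small glass of 12% wine
print(us_drinks_to_uk_units(3) / volume_to_uk_units(125, 12))  # ~3.5 glasses a week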

And the poor old Daily Express got itself into a right muddle. At the time of writing, its website is actually running two versions of the story. 

One story claims moderate alcohol consumption was linked to reduced heart failure risk, which is accurate. 

The other story claims moderate alcohol consumption protects against heart attacks, which is not accurate, as a heart attack is an entirely different condition to heart failure.

 

What kind of research was this?

This was a large prospective cohort study looking at the relationship between alcohol consumption and the risk of heart failure.

Heavy alcohol consumption is known to increase the risk of heart failure, but the researchers say the effects of moderate alcohol consumption are not clear.

This type of study is the best way to look at the link between alcohol consumption and health outcomes, as it would not be feasible (or arguably ethical) to randomise people to consume different amounts of alcohol over a long period of time.

As with all observational studies, other factors (confounders) may be having an effect on the outcome, and it is difficult to be certain their impact has been entirely removed.

Studying the effects of alcohol intake is notoriously difficult for a range of reasons. Not least is what can be termed the "Del Boy effect": in one episode of the comedy Only Fools and Horses, the lead character tells his GP he is a teetotal fitness fanatic when in fact the opposite is true – people often misrepresent how healthy they are when talking to their doctor.

 

What did the research involve?

The researchers recruited adults (average age 54 years) who did not have heart failure in 1987 to 1989, and followed them up over about 24 years.

Researchers assessed the participants' alcohol consumption at the start of and during the study, and identified any participants who developed heart failure.

They then compared the likelihood of developing heart failure among people with different levels of alcohol intake.

Participants came from four communities in the US, and were aged 45 to 64 years old at the start of the study. The current analyses only included black or white participants. People with evidence of heart failure at the start of the study were excluded.

The participants had annual telephone calls with researchers, and in-person visits every three years.

At each interview, participants were asked if they currently drank alcohol and, if not, whether they had done so in the past. Those who drank were asked how often they usually drank wine, beer, or spirits (hard liquor).

It was not clear exactly how participants were asked to quantify their drinking, but the researchers used the information collected to determine how many standard drinks each person consumed a week.

A drink in this study was considered to be 14g of alcohol. In the UK, 1 unit is 8g of pure alcohol, so this drink would be 1.75 units in UK terms.

People developing heart failure were identified by looking at hospital records and national death records. This identified those recorded as being hospitalised for, or dying from, heart failure.

For their analyses, the researchers grouped people according to their alcohol consumption at the start of the study, and looked at whether their risk of heart failure differed across the groups.

They repeated their analyses using people's average alcohol consumption over the first nine years of the study.

The researchers took into account potential confounders at the start of the study, including:

  • age
  • health conditions, including high blood pressure, diabetes, coronary artery disease, stroke and heart attack
  • cholesterol levels
  • body mass index (BMI)
  • smoking
  • physical activity level
  • educational level (as an indication of socioeconomic status)

 

What were the basic results?

Among the participants:

  • 42% never drank alcohol
  • 19% were former alcohol drinkers who had stopped
  • 25% reported drinking up to 7 drinks (up to 12.25 UK units) per week (average consumption in this group was about 3 drinks per week, or 5.25 UK units)
  • 8% reported drinking 7 to 14 drinks (12.25 to 24.5 UK units) per week
  • 3% reported drinking 14 to 21 drinks (24.5 to 36.75 UK units) per week
  • 3% reported drinking 21 drinks or more (36.75 UK units or more) per week

People in the various alcohol consumption categories differed from each other in a variety of ways. For example, heavier drinkers tended to be younger and have lower BMIs, but be more likely to smoke.

Overall, about 17% of participants were hospitalised for, or died from, heart failure during the 24 years of the study.

Men who drank up to 7 drinks per week at the start of the study were 20% less likely to develop heart failure than those who never drank alcohol (hazard ratio [HR] 0.80, 95% confidence interval [CI] 0.68 to 0.94).

Women who drank up to 7 drinks per week at the start of the study were 16% less likely to develop heart failure than those who never drank alcohol (HR 0.84, 95% CI 0.71 to 1.00).

But at the upper level of the confidence interval (1.00), there would be no actual difference in risk reduction.

People who drank 7 drinks a week or more did not differ significantly in their risk of heart failure compared with those who never drank alcohol.

Those who drank the most (21 or more drinks per week for men, and 14 or more for women) were more likely to die from any cause during the study.

 

How did the researchers interpret the results?

The researchers concluded that, "Alcohol consumption of up to 7 drinks [about 12 UK units] per week at early middle age is associated with lower risk for future HF [heart failure], with a similar but less definite association in women than in men."

 

Conclusion

This study suggests drinking up to about 12 UK units a week is associated with a lower risk of heart failure in men compared with never drinking alcohol.

There was a similar result for women, but the results were not as robust and did not rule out the possibility of there being no difference.

The study benefits from its large size (more than 14,000 people) and the fact it collected its data prospectively over a long period of time.

However, studying the impact of alcohol on outcomes is fraught with difficulty. These difficulties include people not being entirely sure what a "drink" or a "unit" is, and reporting their intakes incorrectly as a result.

In addition, people may intentionally misreport their alcohol intake – for example, if they are concerned about what the researchers will think about their intake.

Also, people who do not drink may avoid alcohol for reasons linked to their health, so may have a greater risk of being unhealthy.

Other limitations are that while the researchers did try to take a number of confounders into account, unmeasured factors could still be having an effect, such as diet.

In addition, these confounders were only assessed at the start of the study, and people may have changed over the study period (such as taking up smoking). 

The study only identified people who were hospitalised for, or died from, heart failure. This misses people who had not yet been hospitalised or died from the condition.

The results also may not apply to younger people, and the researchers could not look at specific patterns of drinking, such as binge drinking.

Although no level of alcohol intake was associated with an increased risk of heart failure in this study, the authors note few people drank very heavily in their sample. Excessive alcohol consumption is known to lead to heart damage.

The study also did not look at the incidence of other alcohol-related illnesses, such as liver disease. Deaths from liver disease in the UK have increased 400% since 1970, due in part to increased alcohol consumption, as we discussed in November 2014.

The NHS recommends that:

  • men should not regularly drink more than 3-4 units of alcohol a day
  • women should not regularly drink more than 2-3 units a day
  • if you've had a heavy drinking session, avoid alcohol for 48 hours

Here, "regularly" means drinking this amount every day or most days of the week.

The amount of alcohol consumed in the study group with the reduced risk was within the UK's recommended maximum consumption limits.

But it is generally not recommended that people take up drinking alcohol just for any potential heart benefits. If you do drink alcohol, you should stick within the recommended limits.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Seven alcoholic drinks a week can help to prevent heart disease, new research reveals. Daily Mirror, January 20 2015

A drink a day 'cuts heart disease risk by a fifth' researchers claim...so don't worry about having a dry January. Mail Online, January 19 2015

A drink a night 'is better for your heart than none at all'. The Independent, January 19 2015

Glass of wine a day could protect the heart. The Daily Telegraph, January 20 2015

Daily drink 'cuts risk' of middle-age heart failure. The Times, January 20 2015

Drinking half a pint of beer a day could fight heart failure. Daily Express, January 20 2015

Links To Science

Gonçalves A, Claggett B, Jhund PS, et al. Alcohol consumption and risk of heart failure: the Atherosclerosis Risk in Communities Study. European Heart Journal. Published online January 20 2015


Shell shock remains 'unsolved'

Medical News - Mon, 01/19/2015 - 13:30

The Mail Online tells us shell shock has been "solved" after scientists claimed they have pinpointed the brain injury that causes pain, anxiety and breakdowns in soldiers.

The Mail's claim is prompted by a study that carried out autopsies on five military veterans who had a history of blast exposure to see what type of brain damage this might have caused.

Four out of five of these people showed signs of what is called diffuse axonal injury, where there is damage to the long nerve fibres that carry electrical signals throughout the brain. This nerve fibre damage seemed to have accumulated in "honeycomb" patterns.

However, we cannot conclude with any degree of certainty that blast injury was the direct and only cause of this damage, as these results are clouded by several factors.

Three of the five veterans died from an opiate overdose. People without a military background who died from an overdose also showed this nerve fibre damage, as did people who had suffered other types of brain injury, such as from a traffic accident – albeit without the honeycomb pattern.

This means it is difficult to know how much other factors contributed to this nerve fibre damage. In short, shell shock has not been "solved", as the Mail Online would have us believe.

 

Where did the story come from?

The study was carried out by researchers from the Johns Hopkins University School of Medicine in the US.

Funding was provided by the Johns Hopkins Alzheimer's Disease Research Center, the Kate Sidran Family Foundation, and the Sam and Sheila Giller family.

The study was published in the peer-reviewed medical journal, Acta Neuropathologica Communications on an open-access basis, so it is free to read online or download as a PDF.

The Mail Online coverage does not acknowledge that we cannot draw any firm conclusions on cause and effect from the results of this small study.

Claims stating shell shock has been "solved" are simplistic and cannot be supported by the results of such a small study, where multiple confounding factors are involved.

 

What kind of research was this?

This was a laboratory study that aimed to look at the brain changes that may occur from exposure to blast injury during military deployment.

The researchers say there are thought to be 250,000 veterans of conflicts in Iraq and Afghanistan with traumatic brain injury, many resulting from a blast.

This is a complex form of injury said to incorporate "the direct effects of overpressure wave (primary injury), the gunshot-like effects of debris and shrapnel showering the head (secondary injury), the fall impact from translocation of the body by the overpressure wave (tertiary injury), as well as flash burns from the intense heat and asphyxiation or inhalation injuries".

Though there is a 100-year history of blast injuries, starting with those resulting from artillery shelling during the First World War, there is still a lack of understanding of the actual physical damage and injury it causes the brain.

Recent animal studies suggest these blasts cause what is called diffuse axonal injury. Diffuse means the injury is spread throughout the brain, rather than being isolated to one specific area.

It usually results from acceleration or deceleration forces moving the brain within the skull, similar to what may occur through vigorous shaking, which causes tearing injuries to the long nerve fibres (axons) that transmit signals throughout the brain.

Diffuse axonal injury is one of the most common types of traumatic brain injury, and effects can range from concussion to coma and death. 

This study conducted autopsies of veterans who had a history of blast injury to see whether there was any evidence of diffuse axonal injury.

 

What did the research involve?

The study included five male veterans with a history of blast injury who died at an average age of 28. Three died from an opiate or alcohol overdose. Similarly aged control subjects used as a comparison included:

  • six people who died from an opiate overdose (four females, two males)
  • six people who died from a lack of oxygen to the brain (three males, three females)
  • five people who died from another type of traumatic brain injury, such as falls or road traffic accidents (all male)
  • seven people who died with no history of traumatic brain injury, overdose or oxygen starvation

The researchers carried out brain autopsies on these people, particularly looking for evidence of amyloid precursor protein (APP), which is said to accumulate when there is diffuse axonal injury.

 

What were the basic results?

The researchers found four out of five of the blast injury cases showed evidence of APP accumulation in the nerve fibres in various parts of the brain, most predominantly in the frontal area.

These areas of damage were described to have formed into irregularly shaped "honeycomb" patterns.

The one person who did not show these abnormalities was said to have died from a gunshot wound to the head, and had a history of exposure to several IED (improvised explosive device) attacks.

Three out of four of these cases with APP accumulation in the nerve fibres died from an opiate overdose. When compared with six non-military people who also died from opiate overdose, five of these controls were also found to have a few APP abnormalities, but they were significantly fewer in number.

Also, compared to the war veterans, none of these controls displayed the same "honeycomb" distribution of nerve fibre damage. 

The controls who died from other, non-military traumatic brain injury showed quite a different pattern of nerve fibre damage from both the veterans and those who had died from an opiate overdose.

Their nerve fibre abnormalities tended to be "thick with prominent undulations and bulbs", while the non-military controls who died from an opiate overdose tended to have thin, straight abnormalities.

The controls who died as a result of a lack of oxygen to the brain showed quite variable APP accumulation – two showed APP abnormalities, four did not.

The controls without any history of traumatic brain injury, oxygen starvation or overdose did not show any APP abnormalities at all.

 

How did the researchers interpret the results?

The researchers say that: "Our findings demonstrate that many cases with history of blast exposure are featured by APP [nerve fibre damage] that may be related to blast exposure, but an important role for opiate overdose, [lack of oxygen to the brain before death], and concurrent blunt traumatic brain injury events in war theatre or elsewhere cannot be discounted."

 

Conclusion

This research aimed to shed light on the type of brain damage that blast exposure during military conflict may cause.

Previous research suggested blast exposure can cause diffuse axonal injury, where the forces acting upon the brain cause tearing and damage of the long nerve fibres that connect different parts of the brain.

This study found some supportive evidence suggesting this might be the case. Four of the five veterans with a history of blast injury did show this type of nerve fibre damage.

Researchers also observed a distinctive "honeycomb" pattern of nerve fibre damage, which was not present in other controls.

However, it cannot be concluded with much certainty that blast injury was the direct and only cause of this damage, as these results are clouded by several factors. Three of these five veterans died from an opiate overdose.

Non-military people who also died from an overdose still showed this nerve fibre damage, albeit in a different pattern. Similarly, people who suffered other types of traumatic brain injury also had this type of nerve fibre damage, though again with a different pattern.

Therefore, as the researchers acknowledge, it is difficult to rule out the influence that opiate overdose, lack of oxygen to the brain around the time of death, and other non-blast trauma may have had upon these brain changes in this military sample.

It is also not known whether these nerve fibre injuries had any effect on the person's subsequent health and brain function, or whether the injury was related to their cause of death in any way.

This is likely to depend on the severity of the brain damage: as is already recognised, diffuse axonal injury can encompass a wide extent of brain damage, from mild concussion to death.

The reliability of this study's conclusions would be improved if the results were replicated in a larger number of people, or in studies that better accounted for the wide range of other confounders (such as associated injuries or causes of death) that could explain the difference observed.

Although this study is of interest, the small sample sizes examined here – both the military personnel and the various control groups – make it difficult to draw any firm conclusions about the type of damage and subsequent health effects that may result from blast injuries during military conflict.

If you serve, or have served, in the armed forces and think your experiences have taken a psychological toll, there is help and support available. Read more about accessing healthcare for military personnel and veterans.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Shell shock solved: Scientists pinpoint brain injury that causes pain, anxiety and breakdowns in soldiers. Mail Online, January 16 2015

The mystery of shellshock solved: Scientists identify the unique brain injury caused by war. The Independent,  January 15 2015

Links To Science

Ryu K, Horkayne-Szakaly I, Xu L, et al. The problem of axonal injury in the brains of veterans with histories of blast exposure. Acta Neuropathologica Communications. Published online November 25 2014


Could 'DNA editing' lead to designer babies?

Medical News - Mon, 01/19/2015 - 12:54

"Rapid progress in genetics is making 'designer babies' more likely and society needs to be prepared," BBC News reports.

The headline is prompted by advances in “DNA editing”, which may eventually lead to genetically modified babies (though that is a very big “may”).

The research in question involved the technique of intracytoplasmic sperm injection (ICSI), where a mouse sperm cell was injected into a mouse egg cell. At the same time, the researchers injected an enzyme (Cas9) capable of cutting bonds within DNA, alongside “guide” RNA to direct the enzyme to its target location in the genome. This system then “cut out” targeted genes.

So far, the techniques have only been tested in animals and for “cutting out” very specific genes (currently, under UK law, germ-line modification of human DNA is illegal).

Although this is very early stage research, the potential uses could be vast. They range from arguably more “worthy” uses, such as editing out genes linked to genetic conditions such as cystic fibrosis, to opening up the possibility for a whole manner of cosmetic or “designer” uses – such as choosing your baby’s eye colour.

Such a possibility is always going to be controversial and lead to much ethical debate. As the researchers say, the possibility that these findings could one day lead to similar tests using ICSI techniques in human cells suggests that it is time to start giving this careful consideration.

 

Where did the story come from?

The study was carried out by researchers from the University of Bath and was funded by the Medical Research Council UK and an EU Reintegration Grant.

The study was published in the peer-reviewed scientific journal Scientific Reports. The study is open access, so it is free to read online or download as a PDF.

The BBC accurately reports this study, including quotes from experts about the possible implications.

 

What kind of research was this?

This was laboratory and animal research, which aimed to explore whether the DNA of mammals can be “edited” around the time of conception.

The researchers explain how recent research has developed the use of an enzyme that cuts bonds within DNA (Cas9). This enzyme is guided to its target location in the genome by “guide” RNA (gRNA). To date, the Cas9 system has been used to introduce targeted DNA mutations into various species including yeast, plants, fruit flies, worms, mice and pigs.

In mice, Cas9 has been used successfully to introduce mutations in single-cell embryos, called pronuclear embryos. This is the stage where the egg has just been fertilised and the two pronuclei – one from the mother and one from the father – are seen in the cell. Such early targeting of the embryo’s genome directly leads to an offspring with the introduced genetic mutation.

However, it is unknown whether Cas9 and gRNA could be used to introduce genetic change immediately before the pronuclei are formed (that is, when the sperm cell is fusing with the egg cell, but before the genetic material from the sperm has formed the paternal pronucleus). Therefore, in this study, the researchers aimed to see whether it was possible to use Cas9 to edit the paternal mouse DNA immediately following ICSI. 

 

What did the research involve?

Briefly, the researchers collected egg cells and sperm cells from 8-12 week old mice. In the laboratory, the sperm were injected into the egg cells using the ICSI technique.

The Cas9 and gRNA system was used to introduce targeted gene mutations. This was tried in two ways: firstly, by a one-step injection, where the sperm cell was injected in a Cas9 and gRNA solution; and secondly, where the egg cell was first injected with Cas9 and then the sperm was subsequently injected in a gRNA solution.

The sperm cell that they used had been genetically engineered to carry a certain target gene (eGFP). They were using the Cas9 and gRNA system to see whether it could “edit out” this gene. Therefore, the researchers examined the subsequent stages of blastocyst development (a mass of cells that develops into an embryo) to see whether the system had introduced the required genetic change.

They followed the studies targeting eGFP with studies targeting naturally occurring genes.

Resulting embryos were transferred back to the female to grow and develop.

 

What were the basic results?

Following ICSI, around 90% of fertilisations developed to the blastocyst stage.

When the researchers first carried out a fertilisation using the male sperm that had been genetically engineered to carry the eGFP gene, about half of the resulting blastocysts had a functioning copy of this gene (i.e. they made the eGFP protein). When the sperm were simultaneously injected with the Cas9 and gRNA system to “edit” this gene, none of the resulting blastocysts showed a functioning copy of this gene. 

When they next tested the effect of pre-injecting the egg cell with Cas9, and then injecting the sperm cell with gRNA, they found that this was also effective at editing the gene. In fact, subsequent tests showed that this sequential method was more effective at “editing” than the one-step injection method.

When the eGFP gene was introduced into the egg cell rather than the sperm, and then the Cas9 and gRNA system introduced in the same way, only 4% of the resulting blastocysts demonstrated a functioning copy of this gene.

When next testing the naturally occurring genes, they chose to target a gene called Tyr because mutations to this gene in black mice resulted in a loss of pigment to the coat and eyes. When the Cas9 and gRNA system was similarly used to target this gene, loss of pigment was transmitted to the offspring.

 

How did the researchers interpret the results?

The researchers conclude that their experiments show that injecting egg cells with sperm, along with Cas9 and guide RNA, “efficiently produces embryos and offspring with edited genomes”.

 

Conclusion

This laboratory research using sperm and egg cells from mice demonstrates the use of a system to produce targeted alterations in the DNA – a process the media like to call “genetic editing”. The editing happened just before the genetic material of the egg and sperm cell fuse together.

The system makes use of an enzyme (Cas9) capable of cutting bonds within DNA, and a “guide” molecule targeting it to the correct genetic location. So far, the techniques have only been tested in animals, and for “editing out” a small number of genes.

However, though this is very early stage research, the results do unavoidably lead to questions about where such technology could lead. ICSI techniques are already widely used in the field of assisted human reproduction. ICSI is where a single sperm is injected into the egg cell, as in this study, as opposed to in vitro fertilisation (IVF), where an egg cell is cultured with many sperm to allow fertilisation to take place “naturally”.

Therefore, the use of ICSI makes it theoretically possible that this study may one day lead to similar techniques being possible to edit the human DNA around the time of fertilisation and so prevent inherited diseases, for example.

As the research importantly states: “this formal possibility will require exhaustive evaluation”.

Such a possibility is always going to be controversial and lead to much ethical and moral debate over whether such steps are “correct” and where they could possibly then lead to (such as altering other non-disease aspects of inheritance, like personal traits).

As one of the lead researchers reports to BBC News, extreme caution will be needed with any further developments. However, they consider that the time is right to think about this, as it is an issue that the UK’s Human Fertilisation and Embryology Authority (HFEA) – the body that monitors UK research involving human embryos – is likely to have to face at some point.

While the possibility of DNA editing in humans may seem like the stuff of science fiction, our Victorian ancestors would have felt the same way about organ transplants.

A spokesman for the HFEA is quoted in BBC News as saying: “We keep a watchful eye on scientific developments of this kind and welcome discussions about future possible developments…It should be remembered that germ-line modification of nuclear DNA remains illegal in the UK”. They say that new legislation would be needed from Parliament “with all the open and public debate that would entail” for there to be any change in the law.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

'Designer babies' debate should start, scientists say. BBC News, January 19 2015

Links To Science

Suzuki T, Asami M, Perry ACF. Asymmetric parental genome engineering by Cas9 during mouse meiotic exit. Scientific Reports. Published online December 23 2014


Wearing killer high heels could lead to osteoarthritis, study warns

Medical News - Fri, 01/16/2015 - 14:30

"Killer heels could lead to osteoarthritis in knees," The Daily Telegraph reports. An analysis of the walking patterns (gait) of 14 women found evidence that walking in high heels puts the knees under additional strain. Over time, this may potentially lead to osteoarthritis: so-called wear and tear arthritis, where damage to a joint causes stiffness and pain.

The main finding was that wearing high heels (3.8cm and 8.3cm were tested) changed the walking gait, especially around the knee joint area.

Hypothetically, the changes in knee dynamics seen in this study could potentially cause strain on the joint, damaging the cartilage inside the knee, thus increasing the possibility of knee osteoarthritis in later life.

However, the study did not follow participants over time to see if they went on to develop arthritis, so it provides no direct evidence that wearing high heels causes knee osteoarthritis further down the line.

There are many factors linked to developing osteoarthritis in later life, most notably obesity, joint injuries and repetitive stress. Based on this study alone, it is not clear whether footwear is an important additional factor in the mix.

That said, we suspect that wearing high-heels all day, seven days a week, won’t do wonders for your feet.

 

Where did the story come from?

The study was carried out by researchers from Stanford University Medical Center (US) and was funded by the National Institutes of Health.

The study was published in the peer-reviewed medical journal the Journal of Orthopaedic Research.

The UK’s media reporting was factually accurate, although it did not highlight any of the limitations of the research. Coverage tended to assume the study had found a causal link between heel height and osteoarthritis in later life, which was not the case.

 

What kind of research was this?

The research team outline that knee osteoarthritis is about twice as prevalent in women as men and that wearing high-heeled shoes might be contributing to the higher risk in women.

This was an experimental study examining whether high-heeled walking, with and without additional weight, produces gait changes similar to those associated with increased risk of knee osteoarthritis.

The team were testing two theories.

Firstly, that there are significant changes to knee movement and forces during walking that increase in magnitude as heel height increases; and secondly, that the changes to knee movement during walking in high heels are made more extreme by a 20% increase in weight.

The study was set up to tell us whether women walk differently with heels and with added weight. It was not designed to prove that any change would cause more knee damage, specifically osteoarthritis in the future, but this was the research team’s working assumption.

 

What did the research involve?

The research involved 14 healthy female volunteers whose walking pattern – called their gait – was analysed while wearing different footwear. The researchers compared “flat athletic shoes” – presumably trainers – with high heels of two heights: 3.8cm (1.5 inches) and 8.3cm (3.2 inches). Each participant underwent measurements nine times in total for each shoe. This included walking at three different speeds.

A second part of the study looked at whether adding weight to the person affected their walking pattern still further. The researchers achieved this by studying the women’s gait with and without a vest that added 20% to their total body weight. Women with the added weight were tested wearing the different footwear.

The study analysis compared the gait parameters between the different footwear and for the added weight, to look for changes to the women’s normal walking style.

The authors were aware that walking speed affects measures of walking pattern, so performed additional analysis to account for potential differences in walking speed.

 

What were the basic results?

The bottom line was that there were some significant walking pattern changes linked to the two heel heights tested, and the 20% extra weight. For example, when wearing heels, women tended to bend their knees more during specific phases of their walk.

Women walked slower in heels, but weight did not affect walking speed. 

 

How did the researchers interpret the results?

The researchers said that “Many of the changes observed with increasing heel height and weight were similar to those seen with ageing and OA [osteoarthritis] progression,” and that, “This suggests that high heel use, especially in combination with additional weight, may contribute to increased OA risk in women."

 

Conclusion

The main finding of this study was that wearing high heels affected the way women walk compared to flat shoes. Although not a surprise, the study's findings could still be unreliable, as it involved only 14 women. A study of more people would improve confidence in the findings.

The issue that grabbed the headlines was the possibility that this might lead to a higher risk of knee osteoarthritis later in life.

While the study authors do say that “Many of the changes observed with increasing heel height and weight were similar to those seen with ageing and OA [osteoarthritis] progression”, this does not prove cause and effect. The study itself does not provide evidence on whether heels actually cause an increase in joint disease of any kind, only that they affect the way women walk. Other factors – such as how often the women wear heels, at what height, and at what age they start and stop wearing them – could also influence any association between footwear and joint problems in later life.

There is potentially a different way of assessing the theory that heels may explain the different prevalence of knee osteoarthritis in men and women in later life. You could study knee osteoarthritis rates in men who regularly wear high heels (for example, transvestites and panto performers) to see whether they develop osteoarthritis at rates similar to women who wear heels to the same extent.

Overall, this small study gives researchers more information on the precise gait changes that occur when women wear heels or carry added weight. However, it does not add to our understanding of whether wearing heels is causally related to joint problems in later life.

That said, there have been reports of an association between “over-wearing” high heels and foot problems such as corns and calluses. Most foot care specialists would recommend saving your killer heels for special occasions, and sticking to flats or trainers for the daily commute. Read more advice about foot care.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Killer heels could lead to osteoarthritis in knees, warn scientists. The Daily Telegraph, January 14 2015

How 3.5inch heels could prematurely age your joints: Walking in stilettos this high causes changes to the gait seen in ageing and those with arthritic knees. Mail Online, January 15 2015

Links To Science

Titchenal MR, Asay JL, Favre J, et al. Effects of high heel wear and increased weight on the knee during walking. Journal of Orthopaedic Research. Published online December 22 2014

Categories: Medical News

Study finds care home residents 'more likely' to be dehydrated

Medical News - Fri, 01/16/2015 - 14:30

"Care home residents five times more likely to be left thirsty," The Independent reports after an analysis of some London hospital admission records found people admitted from care homes were five times more likely to be dehydrated than people coming from their own homes.

Equally serious was the discovery that dehydration at admission was associated with a higher risk of dying while in hospital.

Much of the media seized on anecdotal reports that dehydration was the result of staff restricting access to fluids so residents were less likely to wet themselves during the night or ask to go to the toilet.

But anecdotal reports cannot be verified and, in terms of evidence-based medicine, carry little weight.

The study did not explore, or provide any hard evidence of, why care home residents are more likely to be dehydrated.

While it would be complacent to discount suspected poor standards of care in certain homes, other factors may also be involved. For example, many people with dementia have reduced thirst and are reluctant to drink.

The truth is we do not yet know what is behind the higher dehydration levels in patients coming from care homes. Finding an explanation is the crucial next step.

 

Where did the story come from?

The study was carried out by researchers from Barnet and Chase Farm NHS Trust (London), the University of Oxford, and the London School of Hygiene and Tropical Medicine.

It was funded by a Wellcome Trust Investigator Award.

The study was published in The Journal of the Royal Society of Medicine, a peer-reviewed medical journal.

The media generally reported the findings of the story accurately, but many fell into the trap of reporting the study authors' speculation as if it was proven fact.

For example, the Daily Mail had the headline, "Lives of care home patients put at risk through lack of water: Staff 'don't want them going to the toilet at night'." This accusation is unproven.

The reasons why patients were dehydrated were not investigated as part of this study; the study’s authors put forward plausible explanations for their observations.

They also raised concerns about potential poor care standards, but none of this speculation is based on new evidence. Additional work is needed to find out the reasons behind this worrying statistic.

 

What kind of research was this?

This was a cross-sectional study looking at the risk of dehydration on admission to hospital in older people living in care homes, compared with those who were living in their own home.

The researchers state older people are at a higher risk of dehydration, and dehydration is associated with worse outcomes while in hospital.

They also say mild to moderate dehydration in older people is easily missed, and is often only detected once individuals are admitted to hospital and have their electrolytes measured, revealing sodium imbalances. Abnormally high sodium levels can be a sign of dehydration.

A study like this can tell us whether a person is likely to have been dehydrated on admission to hospital, but it cannot tell us why this was, as there are many possible reasons.

 

What did the research involve?

The study team were granted permission to analyse information already collected on 21,610 people over the age of 65 who were admitted to an NHS hospital in London between January 2011 and December 2013.

The team obtained data on patients' ages, the type of admission (emergency or planned), and whether they lived in a care home or their own home.

They also had information on whether the person was dehydrated when they were admitted to hospital and whether they subsequently died in hospital.

The main analysis looked for links between whether a person was admitted from a care home and dehydration and death.

The team used hypernatraemia (plasma sodium levels of more than 145 mmol/L) to measure dehydration. This measure of sodium levels in the blood is a pretty accurate indicator of whether a person has had enough water or not.

Certain conditions make hypernatraemia more likely, such as prolonged vomiting or diarrhoea, sweating, and high fevers with inadequate replacement of the fluid lost. Some drugs and hormonal conditions can also increase the level of sodium in the blood.

 

What were the basic results?

The results came in two parts. The crude results presented did not take into account any influencing factors (confounders), while the adjusted results did.

But the adjustment did not include the reason for the admission, only whether it was planned or an emergency.

Initial crude findings showed patients admitted from care homes had a 10 times higher prevalence of hypernatraemia than those who lived in their own home (12.0% versus 1.3%, respectively; odds ratio [OR] 10.5, 95% confidence interval [CI] 8.43-13.0).

From this, the research team worked out around one in three cases of dehydration on admission would be avoided if people who lived in care homes were properly hydrated (population attributable fraction 36.0%).
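To give a feel for how figures like these are arrived at – this is an illustrative sketch, not the authors’ own calculation – the snippet below computes a crude odds ratio from the two reported prevalences and then applies Levin’s standard formula for the population attributable fraction. The proportion of admissions coming from care homes is a made-up value used purely for illustration.

```python
# Illustrative only: reproduces the *type* of calculation reported above,
# not the study's exact figures.

def odds(prevalence):
    """Convert a prevalence (proportion) into odds."""
    return prevalence / (1 - prevalence)

# Reported prevalences of hypernatraemia on admission
care_home = 0.120   # 12.0% of care home admissions
own_home = 0.013    # 1.3% of own-home admissions

crude_or = odds(care_home) / odds(own_home)
print(f"Crude odds ratio = {crude_or:.1f}")  # about 10.3, close to the reported 10.5

# Levin's formula for the population attributable fraction (PAF):
# PAF = p_e * (RR - 1) / (1 + p_e * (RR - 1)), where p_e is the proportion of
# the population exposed (here, admitted from a care home) and RR is the
# relative risk (the odds ratio is used as an approximation).
p_exposed = 0.06    # HYPOTHETICAL exposure proportion, for illustration only
paf = p_exposed * (crude_or - 1) / (1 + p_exposed * (crude_or - 1))
print(f"Illustrative PAF = {paf:.0%}")
```

With that made-up exposure figure the formula happens to land near the reported 36%, but the study authors will have calculated the fraction from their actual data.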

Of note, 61.9% of people in nursing homes suffered from dementia, which can make it challenging for carers to ensure residents are properly hydrated, compared with 14.7% of people in their own homes.

After accounting for age, gender, mode of admission and dementia, the adjusted results found care home residents were around five times more likely to be admitted with hypernatraemia than people who lived in their own homes (adjusted OR 5.32, 95% CI: 3.85-7.37).

Care home residents were also about twice as likely to die while in hospital (adjusted OR: 1.97, 95% CI: 1.59-2.45).

 

How did the researchers interpret the results?

The researchers' interpretation was simple and stark: "Patients admitted to hospital from care homes are commonly dehydrated on admission and, as a result, appear to experience significantly greater risks of in-hospital mortality [death while at hospital]."

 

Conclusion

This research showed older people living in care homes were five times more likely to be admitted to hospital with dehydration than patients who lived in their own homes.

The research team and media expressed great concern this might be a result of poor-quality care in care homes.

While the study was able to show there is a worrying variation in dehydration levels linked to care homes, it was not able to provide evidence to explain these statistics.

There are many possible explanations for these results, many of which are highlighted by the study authors and the media. This study does not provide any direct evidence supporting any of these explanations, which are speculative at this stage.

The analysis attempted to correct for the finding that people in care homes were slightly older, more likely to be admitted as emergency cases, and far more had dementia. This made a large difference to the relative risk, taking it from 10 times more likely to five times more likely.

This hints at the possibility that people in care homes may be more unwell or have more complex illness and care issues than people who live in their own homes, which may influence their ability to remain hydrated. This is an alternative explanation to the conclusion that the care provided by care homes is inadequate.

The analyses also did not take into account the reason why patients were admitted to hospital, which would have clarified this issue. It is possible these factors (residual confounding) and other unmeasured factors (bias) may still be influencing the results to some degree. 

This type of study is useful in flagging up potential care issues for further investigation by care regulators. In the UK, this job falls to the Care Quality Commission (CQC).

The Independent informs us that, "The CQC said ensuring residents get enough food and drink was central to their inspections of care homes," reassuring readers that, "Deputy chief inspector of adult social care in London, Sally Warren, said information on dehydration supplied by Dr Wolff [the author of this study] had been shared with local inspectors."

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Care home residents five times more likely to be left thirsty, study reveals. The Independent, January 16 2015

Care home staff may leave pensioners dehydrated to stop incontinence, Oxford University warns. The Daily Telegraph, January 16 2015

Lives of care home patients put at risk through lack of water: Staff 'don't want them going to the toilet at night'. Daily Mail, January 16 2015

Scandal of dehydration in care homes exposes neglect. The Times, January 16 2015

Links To Science

Wolff A, Stuckler D, McKee M. Are patients admitted to hospitals from care homes dehydrated? A retrospective analysis of hypernatraemia and in-hospital mortality. The Journal of the Royal Society of Medicine. Published online January 15 2015

Categories: Medical News

Inactivity 'twice as deadly' as obesity

Medical News - Thu, 01/15/2015 - 14:30

“Lack of exercise is twice as deadly as obesity,” The Daily Telegraph reports. The headline is prompted by a Europe-wide study on obesity, exercise and health outcomes.

Researchers wanted to see how many deaths could theoretically be avoided if inactive people became more active, compared to how many would be avoided if obese people lost weight.

Researchers calculated that if activity levels were increased so that no-one was classed as inactive, then this could reduce early deaths by more than 7%. This compares to avoiding obesity, which could reduce deaths by nearly 4%. In 2008, say the researchers, 676,000 deaths were attributable to physical inactivity, compared with 337,000 deaths attributable to obesity.

This large study also found that among inactive individuals, even small increases in activity may be of benefit, whatever their weight or waist size.

So should we concentrate purely on physical activity and stop worrying about losing weight?

In practice it’s hard to disentangle the two, since exercise, along with diet, helps maintain a healthy weight. Also, obesity is an established risk factor for diseases such as type 2 diabetes, which is best tackled with a combination of diet and exercise.

So it would be a bad idea to ignore the risks of obesity, whatever your levels of physical activity.

 

Where did the story come from?

The study was carried out by researchers from a number of academic centres in Europe, including the Universities of Cambridge, Oxford and London. It was funded by grants from many bodies across Europe, including the EU and in the UK the Department of Health, the Medical Research Council, Cancer Research UK, the Wellcome Trust, the Stroke Association, the British Heart Foundation and the Food Standards Agency.

The study was published in the peer-reviewed American Journal of Clinical Nutrition and has been made available on an open-access basis, so it is free to read online or download as a PDF.

It was covered fairly by the UK media, although the aims and design of the study were more complex than some of the reporting suggests.

 

What kind of research was this?

This was a large cohort study following 334,161 European men and women for an average of about 12 years, looking at levels of physical activity, body mass index (BMI) and waist circumference (a measure of abdominal adiposity) and the risk of early death.  

The researchers say that lack of physical activity has long been associated with an increased risk of death, independent of people’s BMI. Their aim was to find out if either BMI or waist circumference had any influence on the association between physical activity and the risk of early death.

They also compared how many deaths could theoretically be avoided if inactive or obese individuals were more active or non-obese respectively.

 

What did the research involve?

The researchers used data from an ongoing European study (the EPIC study) of more than half a million participants from 23 centres in 10 countries – Sweden, Denmark, Norway, the Netherlands, the UK, France, Germany, Spain, Italy and Greece. They were recruited to the study between 1992 and 2000.

Participants were aged between 25 and 70. Those who reported having heart disease, stroke or cancer at recruitment were excluded from this current analysis, as were those with missing data in areas such as physical activity and lifestyle.

Participants’ height, weight and waist circumference were measured at the start of the study. From this data they were categorised as normal weight (BMI 18.5-24.9), overweight (BMI 25-29.9) or obese (BMI of 30 or over). Waist circumference was considered high if it was over 102cm for men and over 88cm for women.
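As a simple sketch of how these cut-offs translate into categories – the function and thresholds below follow the standard WHO-style bands described above, and the names are our own, not taken from the EPIC study – the classification might look like this:

```python
def bmi_category(weight_kg, height_m):
    """Classify BMI using the standard WHO-style bands described above."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25:
        return "normal weight"
    elif bmi < 30:
        return "overweight"
    return "obese"

def high_waist(waist_cm, sex):
    """High waist circumference: over 102cm for men, over 88cm for women."""
    return waist_cm > (102 if sex == "male" else 88)

# Example: a 1.70m, 80kg woman with a 90cm waist
print(bmi_category(80, 1.70), high_waist(90, "female"))  # overweight True
```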

Participants self-reported their levels of occupational, recreational and household physical activity using a validated questionnaire. Levels of physical activity at work were categorised as sedentary (e.g. office work), standing (e.g. hairdresser, security guard), physical (e.g. plumber, nurse) or heavy manual work (e.g. bricklayer).

Recreational activity was assessed as the amount of hours per week spent cycling, jogging, swimming and other physical exercise.

The researchers assessed overall activity levels by combining occupational and recreational activity levels. Physical activity was then categorised into four groups:

  • active
  • moderately active
  • moderately inactive
  • inactive

Researchers collected data on participants’ deaths from all causes between 2008 and 2010 from official records in each country, held at regional or national level.

They then examined associations between physical activity, obesity, waist circumference (WC) and deaths from all causes. They adjusted their results for sex, smoking, education and alcohol intake.

 

What were the basic results?

The analysis included 116,980 men (average age 52.6 years) and 217,181 women (average age 51.2 years). There were 11,086 deaths among the men and 10,352 deaths among the women.

The risk of early death was reduced by 16-30% in people who were moderately inactive, compared with those who were inactive, whatever their BMI or waist circumference.

In normal weight and overweight people, higher levels of physical activity were associated with a further reduction in risk.

The researchers calculate that avoiding all inactivity could theoretically reduce all-cause mortality by 7.35% (95% confidence interval (CI), 5.88-8.83%).

Avoiding obesity could theoretically reduce all-cause mortality by 3.66% (95% CI, 2.30-5.01%).

 

How did the researchers interpret the results?

The researchers say that the greatest reduction in risk of death was in the moderately inactive groups, compared to those who were totally inactive. This reduction in risk was found across all groups at all levels of BMI and waist size.

Physical inactivity may theoretically be responsible for twice as many deaths as a high BMI, they say.

Efforts to encourage even small increases in activity may be of benefit.

 

Conclusion

This study’s strengths included its large size and long follow-up period. Researchers also took into account a large number of factors (called confounders) that might have influenced the risk of death, such as diet, smoking history and alcohol intake, although it is still possible that both measured and unmeasured confounders influenced mortality rates.

The study had one important limitation. It only measured people’s BMI (calculated by combining their weight and height) and their physical activity once, at the start of the study. It is quite possible that people’s BMI changed over time, and that this would have had an effect on mortality rates. For example, if physical activity helped reduce obesity over time, it is not possible to say that physical activity reduced the risk of mortality, independent of people’s weight.

Also, the calculations of the number of deaths that could be avoided by reducing physical inactivity or obesity are hypothetical.

It would be a bad idea to ignore the risks of obesity, whatever your levels of physical activity.

Obesity is an established risk factor for a range of conditions such as diabetes and cardiovascular disease and it is best tackled by both diet and exercise. But no-one would argue with the notion that everyone should be encouraged to increase levels of physical activity, whatever their size.

An ideal way to gradually raise your activity levels is our Couch to 5K programme, which can turn a couch potato into a successful five kilometre runner over the course of nine weeks. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Lack of exercise is twice as deadly as obesity, Cambridge University finds. The Daily Telegraph, January 15 2015

Inactivity 'kills more than obesity'. BBC News, January 15 2015

Physical inactivity kills twice as many as obesity, new study claims. The Independent, January 15 2015

Inactivity 'deadlier than obesity'. Mail Online, January 15 2015

Scientists recommend 20-minute daily walk to avoid premature death. The Guardian, January 14 2015

No exercise kills TWICE as many people as obesity, shock research reveals. Daily Mirror, January 14 2015

Links To Science

Ekelund U, Ward HA, Norat T, et al. Physical activity and all-cause mortality across levels of overall and abdominal adiposity in European men and women: the European Prospective Investigation into Cancer and Nutrition Study (EPIC). The American Journal of Clinical Nutrition. Published online January 14 2015

Categories: Medical News

'Hibernation protein' could help repair dementia damage

Medical News - Thu, 01/15/2015 - 14:00

"Neurodegenerative diseases have been halted by harnessing the regenerative power of hibernation," BBC News reports. Researchers have identified a protein used by animals coming out of hibernation that can help rebuild damaged brain connections – in mice.

Research found the cooling that occurs in hibernation reduces the number of nerve connections in the brain, but these regrow when an animal warms up.

A protein called RNA-binding motif protein 3 (RBM3) increases during the cooling, and it appears this protein is part of a pathway involved in the regrowth.

When the level of RBM3 was increased without cooling, researchers found the protein protected against the loss of nerve connections in mice with early-stage rodent forms of Alzheimer’s disease and a prion infection similar to Creutzfeldt-Jakob disease (CJD). The diseases progressed more quickly when the level of RBM3 was lowered.

This same protein is increased in humans when they are given therapeutic hypothermia, where the body temperature is reduced to 34C as a protective treatment after events such as a heart attack.

The hope is that restoring neural connections (synapses) in humans could halt, or even reverse, the effects of dementia and associated neurodegenerative diseases. But this research is still very much in the early stages.

 

Where did the story come from?

The study was carried out by researchers from the University of Leicester and the University of Cambridge, and was funded by the Medical Research Council.

It was published in the peer-reviewed journal, Nature.

On the whole, the media reported the study accurately, but the Mail Online got carried away when they said a drug produced from this research "given in middle age … could keep the brain healthy for longer".

The experiments have only been done in mice so far, and no drug has been developed to target the pathway in humans.

 

What kind of research was this?

This was an animal study that looked at the effects of hibernation on the brain synapses (nerve connections) of mice.

Normally, synapses in the brain go through a cycle of forming, being removed, and forming again. Various toxic processes can cause more degeneration, and in some conditions the synapses are not re-formed.

This leads to a reduction in the number of synapses, as occurs in conditions such as Alzheimer's disease, which are associated with symptoms such as memory loss and confusion.

A similar loss of synapses occurs when animals hibernate, but they are renewed when the animal warms up at the end of hibernation. Previous research found this also happens when rodents are cooled in a laboratory setting.

Researchers found the production of many proteins does not occur at these low temperatures, but some proteins called "cold-shock proteins" are stimulated – one of these is RBM3.

Here, the researchers wanted to further investigate whether this protein plays a role in the regeneration of synapses. They hope it might be key to understanding how we could restart the process of synapse renewal in humans.

 

What did the research involve?

Three groups of mice were studied during hibernation induced in the laboratory setting:

  • normal (wild type) mice – controls
  • mice with a rodent form of Alzheimer's disease
  • mice with a prion disease, similar to Creutzfeldt-Jakob disease (CJD)

Some mice were cooled to 16-18C for 45 minutes and then gradually warmed back to their normal body temperature.

Their brains were studied at various stages of the cooling and rewarming process to count the number of synapses and measure the level of RBM3.

Some mice with the prion disease were not cooled so they could be used as a comparison to see if the cooling process had any effect on the course of the disease.

The other mice were also not cooled, but their levels of RBM3 were chemically increased or decreased to see what effect this had on their brains.

 

What were the basic results?

Normal mice and mice with the very early stages of a rodent form of Alzheimer's disease (at two months) and a prion disease (at four and five weeks after infection) lost synapses as they were cooled down, but recovered them as they warmed up.

They also all had increased levels of RBM3 during the cooling stage. These levels of RBM3 stayed elevated for up to three days afterwards.

The prion-infected mice did not succumb to the disease as quickly as mice that had been infected but not cooled.

They survived for seven days longer on average (91 days compared with 84 days). This suggests the cooling process gave some protection against the prion disease.

Mice with more advanced disease – the rodent form of Alzheimer’s disease for three months, or the prion disease for six weeks – also lost synapses when they were cooled, but were not able to regrow them on warming up.

They did not have increased levels of RBM3. There was no difference in survival between these prion-infected mice and the prion-infected mice that were not cooled.

In mice where RBM3 levels were artificially reduced, both types of disease worsened more quickly and synapses were lost faster.

Reducing RBM3 levels in mice without these diseases also reduced the number of synapses, and the mice had memory problems.

When RBM3 production was stimulated in one region of the brain (the hippocampus) in mice with prion infection, this reduced the number of synapses that were lost and increased their survival.

 

How did the researchers interpret the results?

The researchers concluded the protein RBM3 is involved in the pathway of synapse regeneration in mice. They found stimulating the protein was protective against synapse loss in mice with a rodent form of Alzheimer's disease and a prion disease. They hope that, with further research, this might be a new avenue for drug development for humans.

 

Conclusion

The researchers have shown how cooling is protective against the loss of synapses in the early stages of rodent forms of Alzheimer's disease and a form of prion disease. Cooling also increased how long prion-infected mice survived.

But cooling was not protective in the later stages of the diseases. The researchers found this may in part be because of the protein RBM3, which is stimulated during cooling. They found levels of RBM3 increased in the early stages of the diseases when the mice were cooled, but did not in the later stages.

Stimulating this protein without cooling the mice also slowed down the loss of synapses and improved survival in mice with a prion infection.

The results also showed the disease processes sped up when RBM3 levels were reduced. The researchers say this indicates RBM3 is likely to be involved in the maintenance of synapse connections under normal conditions, not just during hibernation.

It is already known from other studies that similar increases in RBM3 occur when humans are given therapeutic hypothermia, where the body temperature is reduced to 34C as a protective treatment – for instance, after a heart attack.

It may be the case that if this pathway is stimulated in humans, it could be a new avenue of research for the treatment of neurodegenerative disorders such as Alzheimer's disease.

This is intriguing research, but still very much in its early stages. There is much we don't know about Alzheimer's disease and other related diseases, though there is evidence that taking steps to maintain a healthy blood flow to the brain by taking regular exercise and eating a healthy diet may lower the risk (as well as help prevent heart disease).

Read more about dementia prevention.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hibernating hints at dementia therapy. BBC News, January 15 2015

Do squirrels hold key to preventing Alzheimer's? Breakthrough after scientists discover putting the brain into 'hibernation' could help prevent the devastating disease. Daily Mail, January 14 2015

How hibernating animals could help fight Alzheimer's disease. The Daily Telegraph, January 14 2015

Why hibernating bears could be good news in the fight against dementia. The Independent, January 14 2015

Links To Science

Peretti D, Bastide A, Radford H, et al. RBM3 mediates structural plasticity and protective effects of cooling in neurodegeneration. Nature. Published online January 14 2015

Categories: Medical News

How therapy and exercise 'may help some with CFS'

Medical News - Wed, 01/14/2015 - 15:00

"Chronic fatigue syndrome patients' fear of exercise can hinder treatment," The Guardian reports.

Chronic fatigue syndrome (CFS) is a long-term condition that causes persistent and debilitating fatigue. We do not know what causes the condition and there is no cure, though many people improve over time.

Treatments for CFS aim to reduce symptoms, but some people find certain treatments help, while others don't.

The news coverage is further analysis of a trial from 2011, which investigated four different treatments for CFS.

This study suggested adding either cognitive behavioural therapy (CBT) or graded exercise therapy (GET) to a person's medical care saw some improvements in their symptoms of fatigue and physical function.

CBT is a type of "talking therapy" designed to change patterns of thinking and behaviour, while GET is a structured exercise programme that aims to gradually increase how long a person can carry out a physical activity.

The current analysis assessed a range of possible factors to see whether these might explain how CBT and GET improved symptoms.

The findings suggested the treatments could be having an effect at least in part by helping to reduce fear avoidance beliefs, such as worrying exercise would make symptoms worse.

However, this study does have limitations, including the fact the researchers have looked at a lot of different possible factors, and some of the statistical associations may arise by chance.

The researchers aim to use these findings to help them improve these treatments or develop new ones.

As the authors make clear, it is important to note this study did not look at what causes CFS.

 

Where did the story come from?

The study was carried out by researchers from King's College London and other UK universities.

It was funded by the UK Medical Research Council, the Department of Health for England, the Scottish Chief Scientist Office, the Department for Work and Pensions, the National Institute for Health Research (NIHR), the NIHR Biomedical Research Centre for Mental Health at the South London and Maudsley NHS Foundation Trust, and the Institute of Psychiatry, Psychology and Neuroscience at King's College London.

The study was published in the peer-reviewed medical journal, Lancet Psychiatry.

The UK news headlines covering this complex study have all tended to miss the point slightly. The headlines either focus on the already published results (The Independent), or talk about "fear of exercise" exacerbating CFS (The Daily Telegraph and the Daily Mail) or hindering treatment (The Guardian).

This study did not look at what causes or "exacerbates" CFS, or hinders treatment. It assessed how CBT and GET might have improved fatigue and physical function.

It found at least part of the treatments' effects seemed to be down to reducing people's "fear avoidance beliefs", such as worrying exercise would make their symptoms worse.

The Daily Telegraph's suggestion that the study says "people suffering from ME [myalgic encephalopathy] should get out of bed and exercise if they want to alleviate their condition" is particularly unhelpful, and feeds the idea that people with CFS are "lazy": this is not the case.

CFS is a serious condition that can cause long-term illness and disability, and it is not reasonable to suggest people with CFS should simply get up and do some exercise.

People living with CFS need to talk to their doctors about what is appropriate for them and, if an exercise programme is recommended as part of their treatment, make sure it is done in a structured way. If anything, attempting to exercise before the body is ready can set back the rehabilitation process.

 

What kind of research was this?

This was an analysis of data from a randomised controlled trial of different treatments for CFS, which attempted to investigate how these treatments might work.

The trial was called PACE (adaptive Pacing, graded Activity and Cognitive behaviour therapy; a randomised Evaluation trial). It compared four different treatments in 641 people with CFS:

  • specialist medical care alone
  • specialist medical care with adaptive pacing therapy, which involves balancing periods of activity with periods of rest
  • specialist medical care with cognitive behaviour therapy (CBT)
  • specialist medical care with graded exercise therapy (GET)

These treatments are described in more detail in our analysis of this study from 2011.

It found adding CBT or GET to medical care gave moderate improvements in physical function and fatigue compared with medical care alone.

In this study, researchers wanted to see if they could identify what factors (mediators) CBT and GET might be influencing to give rise to these improvements.

The researchers had planned these "secondary" analyses of the PACE trial in advance, so they were able to collect all the relevant data they needed during the trial.

This is a more robust approach than carrying out ad hoc analyses after the study is completed. These secondary analyses tend to be used to generate hypotheses that can be further investigated in future studies.

 

What did the research involve?

The researchers carried out analyses of the PACE trial data to identify possible mediators (factors through which the treatments might be having their effects).

This essentially involved looking at whether the effects of CBT or GET were still statistically significant if the researchers adjusted for the potential mediators in their analyses.

The idea is that if CBT or GET work by changing one or more of the mediators, adjusting the analyses to essentially "remove" changes in these mediators will also reduce or remove the effects of CBT or GET on the outcomes.

They also looked at the effect of CBT and GET on these mediators, and the relationship between the mediators and the outcomes.
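To make that logic concrete, here is a minimal sketch of the general “fit the model with and without the candidate mediator” idea, using simulated data and ordinary regression. This is not the specific statistical modelling used in the PACE analysis, and all variable names and numbers are hypothetical.

```python
# Minimal sketch of the mediation logic described above, NOT the PACE authors'
# actual models. All data here are simulated and purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
treatment = rng.integers(0, 2, n)  # 0 = medical care alone, 1 = e.g. CBT
fear_avoidance_12wk = 10 - 3 * treatment + rng.normal(0, 2, n)
fatigue_52wk = 20 + 1.5 * fear_avoidance_12wk - 2 * treatment + rng.normal(0, 3, n)
df = pd.DataFrame({"treatment": treatment,
                   "fear_avoidance_12wk": fear_avoidance_12wk,
                   "fatigue_52wk": fatigue_52wk})

# Total effect of treatment on the 52-week fatigue score
total = smf.ols("fatigue_52wk ~ treatment", data=df).fit()

# Same model, adjusting for the candidate mediator measured at 12 weeks
adjusted = smf.ols("fatigue_52wk ~ treatment + fear_avoidance_12wk", data=df).fit()

# If the treatment coefficient shrinks once the mediator is included, part of
# the treatment effect may be operating through that mediator.
prop_mediated = 1 - adjusted.params["treatment"] / total.params["treatment"]
print(f"Proportion of the effect explained by the mediator = {prop_mediated:.0%}")
```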

At the start and various other points during the PACE trial, the researchers measured certain factors they thought could be potential mediators.

Most of these mediators were measured using the Cognitive Behavioural Responses Questionnaire (CBRQ), while a few were measured using specific tests.

These factors included participants’ levels of:

  • fear avoidance beliefs – such as being afraid exercising would make symptoms worse
  • symptom focusing – thinking a lot about symptoms
  • catastrophising – such as believing they would never feel right again
  • embarrassment avoidance beliefs – such as being embarrassed by symptoms
  • damage beliefs – such as the belief that symptoms show they are damaging themselves
  • avoidance or resting behaviour – such as staying in bed to control symptoms
  • all-or-nothing behaviour – behaviour characterised by periods of high activity and subsequent long periods of resting
  • self-efficacy – feelings of control over symptoms and the disease
  • sleep problems – measured using the Jenkins Sleep Scale
  • anxiety and depression – measured using the Hospital Anxiety and Depression Scale (HADS)
  • fitness and perceived exertion – measured using a step test
  • walking ability – measured as the maximum distance a person could walk in six minutes

For their analyses, the researchers took into account the participants' level of these mediators 12 weeks into the trial. The exception was the walk test, which was assessed at 24 weeks.

The researchers looked at whether these factors mediated the effects of CBT and GET on the outcomes at 52 weeks: physical function and fatigue, measured using the physical function subscale of the Short Form (SF)-36 and the Chalder Fatigue Scale, respectively.

Individuals with missing data were excluded from the analyses. The researchers also adjusted for a range of potential confounders in their analyses.

 

What were the basic results?

The researchers found fear avoidance beliefs appeared to be the strongest mediator of the effects of both CBT and GET on physical function and fatigue compared with specialist medical care. It seemed to account for up to 60% of their effect on these outcomes.

For GET, adjusting for participants’ increase in exercise tolerance (how far they could walk in six minutes) substantially reduced the estimated effect of GET, but not that of CBT.

A number of other factors also seemed to be mediators of CBT or GET (compared with specialist medical care alone or adaptive pacing therapy), but the effects tended to be smaller. Fitness and perceived exertion did not appear to be mediating the effects of treatment.

 

How did the researchers interpret the results?

The researchers concluded fear avoidance beliefs were the most important mediators of the effects of CBT and GET.

They say that: "Changes in both beliefs and behaviour mediated the effects of both CBT and GET, but more so for GET."

 

Conclusion

This study has tried to pick apart how cognitive behavioural therapy (CBT) and graded exercise therapy (GET) affected fatigue and physical function in the PACE randomised controlled trial (RCT).

Its findings suggest this could partly be a result of CBT and GET reducing fear avoidance beliefs, such as the fear that exercise would make symptoms worse. But these treatments were less effective in cases where fear avoidance beliefs remained.

The researchers also identified other factors (mediators) that seemed to be playing a role, such as GET increasing the maximum distance an individual could walk in the six-minute walk test.

The study’s strengths include that this was a pre-planned analysis of an RCT, and that after the treatments were started the mediators and outcomes were measured in temporal order (one after the other). The latter means it is plausible that the treatments influenced the mediators, which then influenced the outcomes.

The authors acknowledge that the outcomes were showing changes by 12 weeks when the mediators were measured, so it is possible that they were both affecting each other. However, without measurements of the mediators before 12 weeks they were not able to look at this more closely to see if they could be certain which change came first.

The study only measured some potential mediators, and the authors note they could not rule out the possibility unmeasured factors are influencing the results. They did adjust for a range of confounders to try to reduce this chance, however.

Another potential limitation was the main analysis excluded participants with missing data. This is appropriate if those with missing data are missing at random, but if particular types of people – such as those for whom the treatments are not working as well – are more likely to be missing data, this can bias the results.

The researchers did a separate analysis that included incomplete data to look at whether this might be a problem, and this did not differ very much from the original analysis. This suggested missing data was not having a large effect.

The analyses also only included mediators and outcomes assessed at one point, although they were measured multiple times. The authors say they are analysing this additional data, as well as looking at the mediators together, rather than singly. They say the multiple analyses may have made it more likely some of their significant findings were down to chance.

Overall, this analysis has given researchers an idea of how CBT and GET could have been having an effect in the PACE trial. They hope this knowledge could help them improve these treatments or develop new ones. Any new or adapted treatments will need to be tested in RCTs to know how effective and safe they are.


Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Chronic fatigue syndrome patients' fear of exercise can hinder treatment. The Guardian, January 14 2015

Chronic fatigue victims 'suffer fear of exercise': Patients are anxious activities such as walking could aggravate the condition. Daily Mail, January 14 2015

Sufferers of chronic fatigue syndrome 'can benefit from exercise'. The Independent, January 14 2015

ME: fear of exercise exacerbates chronic fatigue syndrome, say researchers. The Daily Telegraph, January 14 2015

Links To Science

Chalder T, Goldsmith KA, White PD, et al. Rehabilitative therapies for chronic fatigue syndrome: a secondary mediation analysis of the PACE trial. The Lancet Psychiatry. Published online January 13 2015

Categories: Medical News

Under-80 cancer deaths 'eliminated by 2050' claim

Medical News - Wed, 01/14/2015 - 12:54

“Cancer deaths will be eliminated for all under 80 by 2050,” The Independent reports. This is the optimistic prediction contained in a paper written by specialists in pharmacy from University College London (UCL).

The paper is an opinion piece (PDF, 2.1Mb) that points out that deaths from the most common cancers have fallen by nearly a third in the last two decades. This is due to factors such as the decline in smoking, more effective early diagnosis, and better drug and surgical treatments. However, it argues that a further reduction in death rates requires more advances in areas such as screening, genetic testing, cancer awareness programmes and innovative treatments.

It claims that further advances in prevention are needed, including the use of aspirin to reduce the risk of colorectal cancer and more effective treatment for late-stage cancers, so that people with advanced diseases can continue to live fulfilling lives.

In particular, it says that "winning the cancer war" depends on reforming a healthcare culture that discourages the reporting of "minor" symptoms that can indicate a serious disease, since all cancers are most effectively treated at an early stage.

To play “devil’s advocate”, you could argue that while certain trends are improving, such as a reduction in smokers, others are worsening, such as the number of people who are now obese. And, as a recent study we discussed last year found, the UK is now one of the leading countries for the number of obesity-related cancers, such as bowel cancer.

Our advice is not to be complacent. It is unlikely that a cure for cancer will be available in the near future. Therefore, core cancer prevention recommendations, such as avoiding smoking, taking regular exercise and eating a healthy diet, remain unchanged.

 

Where does the report come from?

The report has been researched and written by academics from the School of Pharmacy at UCL. It is unclear whether the report has been peer-reviewed. The study was funded by Boots UK.

There is a potential conflict of interest, as several of the measures suggested for improving prevention, early detection and treatment revolve around community pharmacies. While community pharmacies, such as Boots, do provide an essential service, they are not charities.

 

What type of study is it?

The study is a narrative review. This is a type of study that usually gives a comprehensive overview of a topic, rather than addressing a specific question, such as how effective a treatment is for a particular condition.

It does not report on how the search for literature was carried out or how it was decided which studies were relevant to include. Because of this, it is not a systematic review, where all of the relevant evidence is included based on pre-specified criteria. This means there could be gaps in the evidence that is presented.

 

What are the figures?

The basic UK figures provided are:

  • 325,000 new cases of cancer in 2013
  • 150,000 deaths from cancer in 2013

The incidence of cancer increases with age. The yearly risk is:

  • 1 in 5,000 in people aged 20 or less
  • 1 in 100 for people in their 50s 
  • 1 in 30 for people over 65

In 2011, nearly half of new cases were in people aged 70 or over, and more than half of the deaths were in people over the age of 75.

As cancer is more common in old age, the ageing UK population means the incidence of cancer is higher than at any time in history. However, despite the increased number of cases, the death rate is improving. For example, deaths from the "top four cancers" (breast, lung, bowel and prostate) have fallen by 30% in the last 20 years.

The authors highlight the following factors that have contributed to this improvement:

  • reduced smoking
  • more effective early diagnosis
  • better cancer treatments

What changes are proposed to improve cancer prevention?

The authors suggest:

  • improving access to weight management programmes, such as through local pharmacies
  • continuing smoking cessation services
  • better screening for "pre-cancers", such as bowel polyps
  • testing for genetic vulnerabilities, such as being a BRCA gene carrier
  • improving access to immunisations, such as HPV and Hep B vaccination
  • reducing the risk of bowel cancer by encouraging people in their 50s to take low-dose aspirin

 

What measures do they say could improve cancer survival rates?

The report says that there is room to improve the number of cancers that are identified at an earlier stage when there is more chance of a cure. They quote an estimate that 5,000-10,000 lives could be saved if the UK had the same rate as “the best in the world”. A component of this would be improving awareness of early symptoms and encouraging people to see a healthcare professional, including a community pharmacist.

They support continued research into more effective methods for diagnosis and better treatments. They also suggest improvements in supportive care provided for people with more advanced and metastatic cancers (cancers that have spread) or survivors who have long-term side effects as a result of the cancer treatments.

 

Conclusion

Most of the recommendations in this paper are already part of cancer prevention strategy and best practice guidelines.

The advice that people in their 50s should take low-dose aspirin is controversial. While there is some evidence of a protective effect, as we discussed last year, this has to be balanced against side effects such as peptic ulcers and bleeding from the stomach, particularly in older people. It's important to see your GP before deciding to take aspirin regularly.

This review could be considered to be over-optimistic. Recommendations regarding cancer prevention remain unchanged.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Cancer deaths will be eliminated for all under 80 by 2050, new research predicts. The Independent, January 14 2015

Under-80s cancer deaths 'eradicated within 30 years': Death rates from most common cancers will continue to drop, report claims. Mail Online, January 14 2015

How cancer death rates have dropped since 1991. The Daily Telegraph, January 14 2015

Cancer deaths under 80 ‘will be eradicated’. The Times, January 14 2015

Categories: Medical News

Napping 'key' to babies' memory and learning

Medical News - Tue, 01/13/2015 - 17:30

"The key to learning and memory in early life is a lengthy nap, say scientists," BBC News reports.

The scientists were interested in babies' abilities to remember activities and events.

They carried out a study involving 216 babies, who took part in trials to see whether napping affected their memory for a new activity.

The babies first watched the researchers taking a mitten off a hand puppet, shaking it, and putting it back on. Half had a nap shortly afterwards and half did not.

Babies who had the nap were able to mimic more of the activities when they played with the hand puppet four hours later. This was also true when the babies were tested 24 hours after being shown the puppets. This may suggest napping shortly after a new activity or event helps to consolidate that memory.

The researchers speculate sleep may help "strengthen" the impact of recent memories as they are stored in the hippocampus, an area of the brain associated with long-term memory retention.

The study suggests napping is important for memory in babies. While sleep is important for memory in adults, this study was only in infants, so you can't use it as an excuse if caught napping at work.

 

Where did the story come from?

The study was carried out by researchers from the Ruhr University Bochum in Germany and the University of Sheffield.

It was funded by a grant from the Deutsche Forschungsgemeinschaft (German Research Foundation).

The study was published in the peer-reviewed journal, Proceedings of the National Academy of Sciences USA (PNAS).

In general, BBC News reported the story accurately, but its headline doesn't make it clear that the research is in babies.

 

What kind of research was this?

This was a randomised controlled trial assessing whether napping shortly after being taught a new activity influences how well babies remember how to do that activity.

Previous studies assessing sleep and memory in babies have mostly been observational, and could not determine whether the sleep patterns might be directly influencing memory.

This study overcomes this by directly assessing the impact of napping on developing a specific memory in a controlled experiment.

Randomly allocating the babies into groups is the best way of ensuring the groups are well balanced and the only thing differing between them is whether or not they had a nap.

 

What did the research involve?

The researchers enrolled babies aged six months or a year old. The babies were randomly allocated to have a nap or no nap after being taught a new activity involving playing with hand puppets.

They were then tested to see if they could remember and repeat the activity, either four hours later (experiment one) or 24 hours later (experiment two). The researchers then assessed whether the babies who had naps remembered the activity better.

The activity involved the researchers showing the babies a hand puppet wearing a mitten with a bell in it. The researcher removed the mitten from the puppet and shook it to ring the bell to draw attention to the mitten. They then replaced the mitten.

This was all done out of the babies' reach, and repeated three times for the year-old babies and six times for the six-month-olds.

The test involved showing the baby the puppet again, but this time within arm's reach. The researchers recorded if the baby removed the puppet's mitten, attempted to shake the mitten, and attempted to replace the mitten within 90 seconds of being shown the puppet.

The babies scored one point for each of the three actions they tried to replicate. The researchers and parents did not verbally or physically encourage the babies to remove the mitten, and the bell in the mitten had been removed so its sound did not prompt the babies to grab the mitten.

For the "nap" group, the researchers scheduled the activity to occur just before they were about to have a nap. For the "no nap" group, it was scheduled for just after they had had a nap, so they would not be due to have another nap in the next four hours.

A nap was considered as at least 30 minutes of uninterrupted sleep, and the babies wore small motion detectors to see if they were awake or asleep. Caregivers also recorded the babies' sleep patterns. The researchers used both of these sources to assess nap timing and duration.
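As a small illustration of that nap criterion (at least 30 minutes of uninterrupted sleep), the sketch below checks a list of sleep bout durations; the data format and function are our own invention, not the actigraphy software used in the study.

```python
def had_nap(sleep_bouts_minutes, threshold=30):
    """Return True if any single uninterrupted sleep bout lasted at least
    `threshold` minutes.

    `sleep_bouts_minutes` is a hypothetical list of uninterrupted sleep bout
    durations (in minutes), as might be derived from the motion detector and
    caregiver diary.
    """
    return any(bout >= threshold for bout in sleep_bouts_minutes)

# Example: two short dozes plus one 45-minute bout counts as a nap
print(had_nap([10, 45, 5]))   # True
print(had_nap([10, 20, 15]))  # False
```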

Caregivers were told not to influence their baby's sleep patterns for the study, and babies whose sleep patterns were not compatible with the group they were assigned to were excluded. This may have unbalanced the groups. Other babies were excluded for various reasons, such as tester error.

In their first experiment, the researchers compared the nap and no nap groups with babies who had not been shown the hand puppet activity, but just tested to see what they did naturally when shown the hand puppet.

In total, 120 babies took part in experiment one (test at four hours after hand puppet activity), and 96 babies took part in experiment two (test at 24 hours after hand puppet activity).

They looked at whether nap timing had an impact on how well the baby performed. 

 

What were the basic results?

The researchers found the babies who took naps shortly after the hand puppet activity were better able to remember it after four and 24 hours.

 

How did the researchers interpret the results?

The researchers say that to their knowledge, this is the first evidence sleep enhances the ability of babies to retain memories of new behaviours in their first year of life.

 

Conclusion

This study suggests napping shortly after an event may help babies up to the age of one to remember those events.

The study was well designed. The design means differences seen between the groups should be attributable to the naps and not other factors.

The fact some babies were excluded – for example, if they did not have naps as intended – might lead to some imbalances in the groups that could influence results. However, it is difficult to tell if this is the case.

The main assessors of the babies' performance were not blinded as to which group they were in, and therefore could theoretically have influenced results consciously or subconsciously.

However, an independent assessor who was blinded to the groups carried out an assessment of half of the test sessions and showed a high level of agreement with the main assessor. This showed assessor bias was not likely to explain the findings.

Sleep is important for brain function and memory in older children and adults, and this study supports a similar role in younger children.

There are no official UK guidelines on exactly how much sleep babies and children should get. One rough guide is that babies aged around 12 months need around 2.5 hours of sleep during the day and 11.5 hours a night. Read more about How much sleep do kids need?

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Regular naps are 'key to learning'. BBC News, January 13 2015

Links To Science

Seehagen S, Konrad C, Herbert JS, et al. Timely sleep facilitates declarative memory consolidation in infants. PNAS. Published online January 12 2015

Categories: Medical News

Could brain protein help people 'sleep off' the flu?

Medical News - Tue, 01/13/2015 - 14:55

"Scientists…believe that a nasal spray could be produced which boosts a protein so sufferers could sleep off the flu," The Daily Telegraph reports.

As yet, the research has been confined to assessing the role of one protein – in mice.

The paper reports on complex research in mice on a protein called AcPb, which researchers thought could be playing a role in regulating normal sleep and the body's response to flu infection.

They found mice genetically engineered to lack the protein could not catch up on sleep as well after sleep deprivation.

Also, while normal mice slept more if they were infected with an adapted flu virus, the mice lacking AcPb slept less. They also showed worse signs of the flu and were more likely to die as a result of the infection.

The researchers have shown if you remove the AcPb protein, mice don't fight the flu virus as well. This does not necessarily mean giving mice more of the protein would make them fight it better.

While the news suggests there may be a possibility of an effective flu treatment, we are a long way off knowing whether this is the case. 

Differences between the species may mean the normal role of the protein may not be exactly the same in humans. We also don't know if giving humans (or indeed mice) extra doses of the protein would be safe or effective.

When it comes to flu, prevention is better than (the non-existent) cure. Check to see if you need the flu jab, and always maintain good hygiene if you're unwell.

 

Where did the story come from?

The study was carried out by researchers from the University of Washington and Washington State University Spokane. It was funded by the US National Institutes of Health.

It was published in the peer-reviewed journal, Brain, Behavior, and Immunity.

The Telegraph overemphasises the implications of this animal research for humans. This in part seems to be prompted by the scientists envisaging a "nasal spray" of the protein to treat humans – something that has not been developed or tested in this study.

The Telegraph says that, "The protein will also fight the H1N1 bird flu strain, which swept across the world in the 2009 pandemic". This mouse research did use an adapted strain of H1N1 flu virus – and it was an H1N1 flu virus that caused so-called "swine flu" (not bird flu).

But this is very early-stage research, and we have no idea whether it will result in useful treatments for seasonal flu, let alone any future potential flu pandemic.

 

What kind of research was this?

This was an animal study in mice, looking at the role of a protein called AcPb on sleep and the body's response to the flu virus.

The researchers wanted to test what role the AcPb protein plays in a pathway (a chain of biochemical events) that affects how our bodies regulate our sleep while we are well and during infection. AcPb is primarily found in the brain.

Animal experiments such as these are used when researchers cannot carry out similar studies in humans because of ethical and safety concerns.

Other animals are similar enough to humans to help researchers get an insight into how our bodies work. But there are differences between species, and not all findings in rats or mice will be representative of what happens in humans.

Researchers therefore ideally need to go on to test their hypotheses from animal studies in humans.

 

What did the research involve?

The researchers looked at how mice genetically engineered to lack the AcPb protein differed from normal mice.

They tested their responses to sleep deprivation at different time points, and also to a form of the human H1N1 flu virus adapted to infect mice.

 

What were the basic results?

When normal mice were deprived of sleep at any time, they "caught up" on that sleep later. Mice genetically engineered to lack the AcPb protein (AcPb "knockout" mice) were less able to catch up on sleep after sleep deprivation.

Levels of the AcPb protein naturally fluctuate during the day, and the extent to which the AcPb knockout mice were able to catch up on sleep depended on exactly where in this fluctuation cycle they were.

When exposed to the flu virus, normal mice slept more than they normally did, but AcPb knockout mice slept less than they normally would, and also less than normal mice.

The knockout mice also suffered from the effects of the flu on their body temperature and activity, and were more likely than normal mice to die after exposure to the flu virus.

 

How did the researchers interpret the results?

The researchers concluded the AcPb protein plays a role in regulating sleep and the body's defences against viral attack.

 

Conclusion

This complex study suggests the AcPb protein is playing a role in regulating normal sleep and the response to flu infection in mice.

At this stage, the implications of this research for humans are unclear, as differences between the species may mean the results would not be exactly the same in humans.

While The Telegraph suggests this "could finally lead to an effective treatment for the [flu], which until now has eluded experts", we are a long way off knowing whether this is the case.

What the researchers have shown – in mice – is if you remove this protein, mice don't fight the virus as well. This does not necessarily mean giving mice more of the protein would make them fight it better. It also doesn't mean giving more of the protein wouldn't have side effects.

Overall, this research is at a very early stage, with much more animal research needed before we know if we are closer to a flu treatment.

There is currently no cure for the flu, so the most effective weapon against it is prevention, such as good basic hygiene procedures and the flu jab.

The jab is recommended for people who are at risk of developing serious complications if they catch the flu, such as the over 65s, pregnant women, and people with a serious long-term illness.

Read about the influenza vaccination and who should get it.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Treatment for flu possible as scientists find healing protein. The Daily Telegraph, January 12 2015

Links To Science

Davis CJ, Dunbrasky D, Oonk M, et al. The neuron-specific interleukin-1 receptor accessory protein is required for homeostatic sleep and sleep responses to influenza viral challenge in mice. Brain, Behavior, and Immunity. Published online November 4 2014

Categories: Medical News

Blood test may tell you the 'best' way to quit smoking

Medical News - Mon, 01/12/2015 - 14:40

“A blood test could help people choose a stop-smoking strategy that would give them the best chance of quitting,” BBC News reports. The test measures how quickly an individual breaks down nicotine inside their body, which is known as the nicotine-metabolite ratio (NMR).

Researchers wanted to see whether people with “normal” and “slow” NMR responded differently to stop smoking treatments, and if the blood test could eventually be used as an aid to help guide people to the best treatments to help them quit smoking.

They first tested people and categorised them as slow or normal metabolisers of nicotine. These people were then randomised to an 11-week treatment plan of a placebo, nicotine patches or the stop smoking drug varenicline. All treatments were given in addition to behavioural counselling.

Overall, they found that varenicline was more effective at helping a “normal metaboliser” to quit than patches. For slower metabolisers, there was no difference in the effectiveness of the two treatments, but they tended to get more side effects with varenicline.

Importantly, the differences in quit rates were seen at the end of the 11-week treatment period, and a significant proportion of participants had started smoking again by the six- and 12-month follow-ups. How to maintain quit rates in the longer term is therefore an issue that needs to be resolved.

There are multiple methods available that can help you quit smoking. If one doesn’t work out for you, then you can always try another one.

 

Where did the story come from?

The study was carried out by researchers from the University of Pennsylvania and other institutions in the US and Canada. Funding was provided by the National Institutes of Health, Canadian Institutes of Health Research, Abramson Cancer Center, Centre for Addiction and Mental Health Foundation, and Pennsylvania Department of Health.

A number of the researchers had received grants from the pharmaceutical company Pfizer, which manufactures and sells varenicline. This arguably represents a conflict of interest (which was made clear in the study).

The study was published in the peer-reviewed medical journal The Lancet Respiratory Medicine.

The UK media accurately reported the findings of the study.

 

What kind of research was this?

This was a randomised controlled trial aiming to see whether a new biological marker could help pick the most appropriate method of stopping smoking for someone.

The researchers say that there is considerable variability in a person’s treatment response and side effects to different treatments for tobacco dependence. This provides a strong incentive to try to find biomarkers that may indicate the optimal treatment for a particular individual. In this study, they identified a genetically-informed biomarker of nicotine clearance – the ratio of two breakdown products of nicotine (3ʹ-hydroxycotinine and cotinine). They referred to this as the nicotine-metabolite ratio (NMR).

In this study, people were assigned to placebo, nicotine patch or varenicline (all in addition to behavioural counselling) and they looked at how good the NMR was at predicting the response to each treatment.

 

What did the research involve?

The research included 1,246 smokers. People were excluded if they used e-cigarettes, were taking other stop smoking treatments, or had a history of substance misuse or other significant medical problems. At enrolment, all provided blood samples for testing of their NMR. Based on the results, they were categorised as either “slow” or “normal” metabolisers of nicotine, using a pre-defined cut-off level.

The smokers were randomised to three groups, stratified by their NMR status to ensure a similar balance of slow and normal metabolisers in each group. All groups also received behavioural counselling. The participants were divided as follows:

  • placebo patch and placebo pill (408 people)
  • nicotine patch and placebo pill (418 people)
  • placebo patch and varenicline pill (420 people)

Treatment was given double-blind, with neither participants nor investigators aware of treatment allocation or NMR status. The stop smoking treatment period lasted 11 weeks.

The main endpoint was seven-day point prevalence abstinence at 11 weeks, defined as no self-reported smoking (“not even a puff”, as the study put it) for at least seven days before the telephone assessment, verified in person by carbon monoxide testing. Participants lost to follow-up were counted as smokers. Further follow-ups were conducted at six and 12 months. The main aim was to compare the efficacy of the nicotine patch with varenicline by NMR group (normal vs. slow metabolisers).

 

What were the basic results?

At the end of the 11-week treatment, slow metabolisers had quit rates of 17.2% using a placebo, 27.7% using a nicotine patch and 30.4% using varenicline.

Normal metabolisers had quit rates of 18.6% using placebo, 22.5% using nicotine patch and 38.5% using varenicline.

From these results, varenicline was significantly more effective than the nicotine patch for normal metabolisers. Their odds of abstinence with varenicline were more than double those with the nicotine patch (odds ratio [OR] 2.17, 95% confidence interval [CI] 1.38 to 3.42). Slow metabolisers were no more likely to achieve abstinence with varenicline than with the nicotine patch (OR 1.13, 95% CI 0.74 to 1.71).

Among normal metabolisers, the number of people who needed to enter the stop smoking programme for one of them to achieve abstinence 11 weeks later (the number needed to treat, or NNT) was 26 using nicotine patches and just 4.9 with varenicline.

For slow metabolisers, the NNT was not much different: 10.3 for nicotine patch and 8.1 for varenicline.
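
To make the arithmetic behind these figures more concrete, here is a minimal illustrative sketch in Python – not taken from the study – showing how NNTs and a crude odds ratio can be derived from the quit rates quoted above. It assumes the NNTs are calculated against the placebo group; the published figures (26, 4.9, 10.3, 8.1 and the OR of 2.17) come from the exact, adjusted trial data, so these rounded approximations differ slightly.

```python
# Illustrative only: back-of-the-envelope calculations using the rounded quit
# rates quoted above. Assumes the NNTs are relative to placebo; the trial's
# exact (unrounded, adjusted) data give slightly different published values.

def nnt(treatment_rate, control_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (treatment_rate - control_rate)

def crude_odds_ratio(rate_a, rate_b):
    """Unadjusted odds ratio comparing two abstinence rates."""
    return (rate_a / (1 - rate_a)) / (rate_b / (1 - rate_b))

# Normal metabolisers: placebo 18.6%, nicotine patch 22.5%, varenicline 38.5%
print(round(nnt(0.225, 0.186)))                  # ~26 per extra quitter with patches
print(round(nnt(0.385, 0.186), 1))               # ~5.0 per extra quitter with varenicline
print(round(crude_odds_ratio(0.385, 0.225), 2))  # ~2.16, close to the reported 2.17

# Slow metabolisers: placebo 17.2%, nicotine patch 27.7%, varenicline 30.4%
print(round(nnt(0.277, 0.172), 1))               # ~9.5 (published figure: 10.3)
print(round(nnt(0.304, 0.172), 1))               # ~7.6 (published figure: 8.1)
```

The small discrepancies are a consequence of rounding; the sketch is only meant to show how the reported measures relate to the quit rates.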

The researchers also found that slow metabolisers experienced more severe side effects on varenicline (relative to placebo) than normal metabolisers did.

 

How did the researchers interpret the results?

The researchers conclude that “Treating normal metabolisers with varenicline and slow metabolisers with nicotine patch could optimise quit rates while minimising side effects”.

 

Conclusion

This is a well-conducted randomised controlled trial, which found that use of the nicotine-metabolite ratio (NMR) may be helpful in indicating which stop smoking treatment may be best for different people. For those with a normal NMR, varenicline was more effective than a nicotine patch. For slower metabolisers, there was no significant difference in the effectiveness of the two treatments, but they tended to get more side effects with varenicline.

The study benefits from its large size, double-blind design and high follow-up rates.

However, there are still questions to be answered. For example, the difference in abstinence rates between varenicline and nicotine patch for normal metabolisers was significant at the end of 11-week treatment. But by six months, abstinence rates had generally deteriorated across all treatment groups and for both slow and normal metabolisers. Abstinence rates for normal metabolisers given varenicline were still significantly higher than those given nicotine patch at six months, but this difference was no longer significant at the 12-month follow-up.

How to maintain quit rates in the longer term after cessation of treatment still appears to be an issue that needs resolving, regardless of treatment or nicotine metabolism type.

Overall, the research is promising. Whether the NMR is something that will ever be brought into wider use when deciding on the most appropriate smoking cessation therapy is not currently known. Even if it is, other factors are also still likely to guide treatment decisions, such as individual preference or previously tried treatments.

What the study does highlight is that there are multiple methods available to help you quit smoking. These range from nicotine replacement products, such as patches or gum, to medications such as varenicline and bupropion (Zyban). While currently unlicensed as a stop smoking treatment (and with little direct scientific evidence of their effectiveness), e-cigarettes have also helped many people quit or cut down.

The important thing is not to get discouraged if your first attempt at quitting smoking is unsuccessful. Try a different method, as this could be better suited to you.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Blood test to help smokers 'find best way to quit'. BBC News, January 12 2015

Quit smoking success 'predictable'. Mail Online, January 12 2015

Ability to quit smoking 'predictable'. ITV News, January 12 2015

Links To Science

Lerman C, Schnoll RA, Hawk LW, et al. Use of the nicotine metabolite ratio as a genetically informed biomarker of response to nicotine patch or varenicline for smoking cessation: a randomised, double-blind placebo-controlled trial. The Lancet Respiratory Medicine. Published online January 11 2015

Categories: Medical News

Does contraceptive jab make HIV more likely?

Medical News - Mon, 01/12/2015 - 14:05

"Contraceptive injections moderately increase a woman's risk of becoming infected with HIV," The Guardian reports.

The headline was prompted by an analysis of 12 studies that looked at whether the use of hormonal contraception, such as the oral contraceptive pill, increases the risk of contracting HIV.

All of the studies involved were conducted in sub-Saharan Africa in low- and middle-income countries.

Researchers found a link between a common injectable form of contraception called depot medroxyprogesterone acetate (Depo-Provera) and the risk of HIV. No link was found with other types of hormonal contraception.

But these results do not prove the depot injection directly increases the risk of HIV. The studies included varied in their design and methods, and have several potential sources of bias.

Any link could be down to behavioural patterns rather than medical reasons. For example, women who know they have an effective long-term contraceptive may forget about the risks of sexually transmitted infections.

Hormonal contraception, including injections or oral tablets, can be an extremely effective form of contraception. But it won't protect you against sexually transmitted infections.

It is worth discussing with your health professional and making sure you are using the method that is most effective, convenient and safest for you, depending on your circumstances.  

 

Where did the story come from?

The study was carried out by researchers from the University of California and received no financial support.

It was published in the peer-reviewed medical journal, The Lancet.

The Mail Online correctly reports the main findings of this study, but would benefit from highlighting that the findings do not prove a causal association between the depot injection and HIV risk, a point clearly made by the researchers in the original publication.

The Guardian's reporting of the study is more measured and highlights how for women in poorer countries, an unwanted pregnancy may pose a greater threat to health and wellbeing than HIV. Rates of maternal death occurring during or shortly after pregnancy remain high in many sub-Saharan countries.

 

What kind of research was this?

This was a systematic review that aimed to search the global literature for studies examining whether the use of hormonal contraception, such as the oral contraceptive pill or contraceptive injections, increases the risk of contracting HIV.

The researchers say previous research into whether there could be an associated risk has produced inconsistent results. They pooled the results of the different studies in a meta-analysis.

A systematic review and meta-analysis is the best way of identifying and looking at all evidence that has addressed the particular question of interest.

But this type of research is always going to have some limitations reflecting the strength and quality of the underlying studies being reviewed.

It is unlikely a trial would be conducted that would allocate women to hormonal contraception or not, purely to see if this increased their risk of getting HIV.

Instead, the studies are likely to be observational or trials that have primarily been investigating other things.

This means there is potential that associations are being influenced by confounders. In short, other factors linked to contraceptive use, such as lifestyle behaviours, are themselves influencing the risk of HIV, rather than contraceptives directly. 

 

What did the research involve?

The researchers built on the findings of a previous 2012 World Health Organization (WHO) review.

For the current review, they searched one literature database for English-language articles published from December 2011 onwards that included the terms "hormonal contraception", "HIV/acquisition", "injectables", "progestin", and "oral contraceptive pills".

They included studies that assessed hormonal contraceptives, included women without HIV at the study start, and were prospective in nature (following people over time).

Eligible studies were also required to have followed up at least 70% of their participants, have adjusted at least for a woman's age and condom use (to try to minimise confounding from these factors), and been conducted in a low- or middle-income country.

Separate researchers individually assessed the methods and quality of the eligible studies, and extracted data.

 

What were the basic results?

A total of 12 studies met the criteria for being included. All of these studies were conducted in low- or middle-income African countries.

These studies were large, including from 400 to more than 8,000 women each, and lasted between one and three years.

What were the studies investigating?

Three of the 12 studies included were observational studies designed specifically to examine any connection between contraception and HIV, while the other studies included women taking part in trials investigating interventions for HIV prevention.

Who was involved in the studies?

Most of the 12 studies included looked at women aged 25 to 40 in the general population, while two looked specifically at women at high risk of HIV (commercial sex workers or women whose partner was HIV positive).

What contraceptives did the studies examine?

Some of the studies looked at women taking oral hormonal contraception (either combined pill or progestogen only).

In some the women were taking the injectable progestogen depot medroxyprogesterone acetate, and in the remaining studies the women were taking another type of injectable progestogen (norethisterone enanthate).

Most trials compared these hormonal types of contraception with a non-hormonal method of contraception, or with no method of contraception at all.

What were the specific results for the contraceptive injection?

Pooled results of 10 studies of depot medroxyprogesterone acetate found it was associated with a 40% increased risk of HIV (hazard ratio (HR) 1.40, 95% confidence interval (CI) 1.16 to 1.69).

This risk was slightly lower when restricted to only the studies of women in the general population (pooled HR 1.31, 95% CI 1.10 to 1.57) rather than those at high risk of contracting HIV.

There was no evidence of an increased risk of HIV in women taking the other injectable progestogen, norethisterone enanthate (pooled HR 1.10, 95% CI 0.88 to 1.37), nor was any increased risk of HIV found with the use of oral contraceptive pills (HR 1.00, 95% CI 0.86 to 1.16).

 

How did the researchers interpret the results?

The researchers concluded their findings "show a moderate increased risk of HIV acquisition for all women using depot medroxyprogesterone acetate, with a smaller increase in risk for women in the general population.

"Whether the risks of HIV observed in our study would merit complete withdrawal of depot medroxyprogesterone acetate needs to be balanced against the known benefits of a highly effective contraceptive."

 

Conclusion

This is a well-conducted systematic review that tried to identify all studies investigating the possible link between hormonal contraceptive use and HIV.

It did not find an association between HIV risk and oral hormonal contraceptive use, nor with one type of injectable progestogen contraceptive.

But it did find an increased risk of HIV in studies where women used a commonly used injectable form of contraception called depot medroxyprogesterone acetate.

The review had strict inclusion criteria, but the possibility of selection bias and confounding from other factors still cannot be ruled out.

Only three out of the 12 studies directly set out to look at whether hormonal contraceptive use was linked to HIV. And these were still observational studies, meaning the women chose their method of contraception.

The other nine studies were not designed to look for this association.

As women in all of the 12 studies included chose their method of contraception, this could mean there are other differences – such as health and lifestyle – between the women who chose to use this type of contraception and those who chose to use non-hormonal methods. So the contraception may not have been the sole or direct cause of the link.

Two of the studies also included high-risk women, such as commercial sex workers or women whose partner was HIV positive. Exclusion of these studies decreased the association between depot contraceptive injection use and HIV, although the link remained statistically significant.

As the researchers themselves acknowledge, the studies can't say whether the association between hormonal contraception and HIV is "causal". And this is crucial to bear in mind when looking at this review.

Other limitations of this research include:
  • As the authors also say, it is difficult for them to be sure of the timing of contraception use in relation to subsequent HIV infection.
  • Although the studies included contraceptive methods that are used in the UK, none of them were UK-based and all were conducted in sub-Saharan Africa. The prevalence of HIV in these countries is much higher than it is in the UK, so the baseline risk of contracting HIV is already much greater. The 40% increase in risk seen with the depot injection is a relative increase, which in the UK would apply to a comparatively very small baseline risk (see the illustrative sketch below).
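
To illustrate that last point, here is a minimal sketch using hypothetical baseline risks. The baseline figures are assumptions chosen purely for illustration – they are not taken from the review – and a hazard ratio is treated here as an approximate risk multiplier.

```python
# Hypothetical illustration: the same 40% relative increase produces very
# different absolute increases depending on the baseline risk of HIV.
# The baseline figures below are assumed for illustration only.

RELATIVE_INCREASE = 0.40  # the 40% increase reported for the depot injection

assumed_baselines = {
    "high-prevalence setting (assumed 1% per year)": 0.01,
    "low-prevalence setting (assumed 0.05% per year)": 0.0005,
}

for setting, baseline in assumed_baselines.items():
    raised = baseline * (1 + RELATIVE_INCREASE)
    extra_per_100k = (raised - baseline) * 100_000
    print(f"{setting}: {baseline:.3%} -> {raised:.3%} "
          f"(about {extra_per_100k:.0f} extra infections per 100,000 women per year)")
```

In other words, a relative increase that matters a great deal in a high-prevalence setting translates into a much smaller absolute change where the baseline risk is low.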

Contraceptive injections such as Depo-Provera are extremely effective – estimated to have a failure rate of less than 1 in 330. But they provide no protection against sexually transmitted infections.

Only barrier methods such as condoms protect against HIV and other sexually transmitted infections, such as chlamydia and genital warts.

Talk to your GP if you're unsure you're using the most effective and convenient contraceptive for you. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Contraceptive injection raises risk of HIV, research warns. The Guardian, January 9 2015

Women on the contraceptive jab have a greater risk of contracting HIV than those using other forms of birth control or NONE at all. Mail Online, January 9 2015

Links To Science

Ralph LJ, McCoy SI, Shu K, et al. Hormonal contraceptive use and women's risk of HIV acquisition: a meta-analysis of observational studies. The Lancet Infectious Diseases. Published online January 9 2015

Categories: Medical News

'Bionic' spinal implant helped paralysed rats walk

Medical News - Fri, 01/09/2015 - 15:00

"Elastic implant 'restores movement' in paralysed rats," BBC News reports after researchers developed an implant that can be used to treat damaged spinal cords in rats.

The spinal cord, which is present in all mammals, is a bundle of nerves that runs from the brain through the spine, before branching off to different parts of the body.

It is the main "communication route" the brain uses to control the body, so damage usually results in some degree of paralysis or sensory loss, depending on the extent of the injury.

This promising research developed a novel spinal cord implant that has been able to restore movement in paralysed rats. The implant is made of a flexible material that is able to integrate and move with the spinal cord.

This overcomes problems found with previously tested rigid and inflexible implants, which have caused inflammation and quickly stopped working.

The implant works by delivering both electrical and chemical signals, and enabled the rats to walk again for the six weeks of testing.

However, the research is mainly "proof of concept" at this stage, showing the technique works in animals – at least in the short term. It remains to be seen whether implants are safe and effective at restoring movement in people with paralysis. 

 

Where did the story come from?

The study was carried out by researchers from École Polytechnique Fédérale de Lausanne in Switzerland and other institutions in Switzerland, Russia, Italy and the US.

Financial support was provided by various organisations, including the Bertarelli Foundation, the International Paraplegic Foundation, and the European Research Council.

It was published in the peer-reviewed journal Science.

Of all the UK coverage, BBC News reported the research most accurately, and included quotes about the promising nature of the research, but also due caution about the long timeline ahead before it is known whether such implants could be used in people.

Other headlines, such as that in The Times, arguably offer premature hope of a new treatment that can help the paralysed walk again. 

 

What kind of research was this?

This animal research aimed to develop a new flexible spinal implant to restore movement after a spinal cord injury.

Implants are just one of the ways medical science is exploring how to help people who have spine injuries regain sensation and movement.

In the past, electrical implants for the spinal cord encountered problems because spinal cord tissue is soft and flexible, while the implants of old were often rigid and inflexible.

The researchers expected implants with mechanical properties matching those of the host tissue would work better and for longer.

Here, they designed and developed a new soft electrical implant, which has the shape and elasticity of the dura mater, the outermost layer of the protective membranes (meninges) that cover the brain and spinal cord.

The device was tested in paralysed rats. Animal studies are a valuable first step in the development of treatments that may one day be used in people.

However, the road ahead is a long one in terms of developing the treatment for testing in people, hopefully followed by trials of its safety and effectiveness.

 

What did the research involve?

The researchers developed a silicone implant they called electronic dura mater, or e-dura. This implant has interconnecting channels that transmit electrical signals and can also deliver drugs. It was made for surgical insertion just beneath the dura mater layer.

They first tested the long-term functionality of this soft implant compared with conventional stiff implants. Long-term meant testing the device for six weeks.

Each type of implant was inserted into the lower part of the spinal cord of healthy rats. The rats were then assessed using specialised movement recordings, and the rats with the soft spinal implant were able to behave and move as normal.

However, rats with the stiff implants started to demonstrate problems with their movement one to two weeks after surgery, which only deteriorated further up to six weeks.

When examining the rats' spinal cords after the implants were removed at six weeks, the researchers found rats with the stiff implants displayed significant deformity and inflammation in the spinal cord. None of these adverse effects were observed in the rats that had the soft implant.

They followed this with a series of further tests of the mechanics and functioning of the soft implant, both in the laboratory using a model of spinal cord tissue and in further tests in healthy rats.

The researchers also examined the ability of e-dura to restore movement after spinal cord injury.

The rats received a spinal cord injury that led to permanent paralysis of both hind legs. The e-dura implant was then surgically inserted in the spinal cord, and drug therapy and electrical stimulation were delivered through the electrode to see how it worked. 

 

What were the basic results?

Most of the results in the publication relate to the initial developmental stages of the device. When it came to the paralysed rats, relatively little was said.

However, what the researchers did say is the combination of electrical and chemical stimulation through the implant enabled the paralysed rats to move both of their hind legs again and walk, apparently as normal (though this isn't specifically stated).

The e-dura implant was able to bring about these effects for the six-week period it was tested. 

 

How did the researchers interpret the results?

The researchers concluded they have developed a soft implant that shows long-term biointegration and functioning with the spinal cord.

The implants matched the demanding mechanical properties of spinal tissue, with only a limited inflammatory reaction compared with that seen with other implants.

When used in paralysed rats, the implant allowed electrical and chemical stimulation to be delivered to restore movement over an extended period of time.

 

Conclusion

This is promising research that demonstrates how a new spinal cord implant has been able to restore movement in paralysed rats.

The e-dura implant is a breakthrough in that it overcomes a lot of the problems presented by previous rigid and inflexible implants. Instead, it is made of a flexible material that is able to integrate with spinal cord tissue.

The study demonstrated long-term functionality in rats and few side effects over the six-week testing period.

Rats given a serious spinal cord injury, who were consequently permanently paralysed, were able to walk again after the implant was surgically placed in their spinal cord. The implant works by delivering both electrical and chemical signals.

However, this research is still in the very early stages. While the findings are promising, there is a long way to go before we know whether these implants can be developed to successfully help humans with spinal injuries.

If the implants were developed for human testing, they would need to go through several stages of safety and effectiveness testing to see whether they worked at restoring movement in paralysed people.

It also needs to be seen how they would function in the much longer term, beyond just a few weeks.

Loss of movement is only one of the ways a person can be affected by permanent paralysis of both legs.

We do not know whether this implant would have any effect on loss of bladder, bowel or sexual function, for example.

These effects can have as much of a detrimental effect on quality of life as loss of physical movement.

But, overall, this is promising early-stage research and future developments are awaited with anticipation.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Elastic implant 'restores movement' in paralysed rats. BBC News, January 9 2015

'Cyborg' spinal implant could help paralysed walk again. The Daily Telegraph, January 8 2015

The tiny ribbon that could help the paralysed walk again: 'Cyborg' implant can delivers electric shocks and drugs directly to the spine and even read brain activity. Mail Online, January 9 2015

Links To Science

Minev IR, Musienko P, Hirsch A, et al. Electronic dura mater for long-term multimodal neural interfaces. Science. Published online January 9 2015

Categories: Medical News

How 'baby talk' may give infants a cognitive boost

Medical News - Fri, 01/09/2015 - 13:30

"Say 'mama'! Talking to babies boosts their ability to make friends and learn,” the Mail Online reports. In a review, two American psychologists argue that even very young infants respond to speech and that "baby talk" is essential for their development.

It is important to stress that a review of this sort is not the same as fresh evidence.

The review must largely be considered to be the authors’ opinion based on the studies they have looked at. The methods and quality of these underlying studies informing this review are also unknown, so we cannot say how solid this evidence is.

That said, the authors’ arguments would chime with most parents’ instinctive beliefs: regularly talking to your baby is a “good thing”. Regularly talking to your baby is likely to have many benefits, not least in helping their understanding of speech and strengthening the bond between parent and baby.

However, whether talking to your baby has greater effects on their learning capacity or ability to make friends in the future is something that cannot be proven by this review.

 

Where did the story come from?

The study was written by two psychologists from New York University and Northwestern University in the US. The work was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development of the National Institutes of Health, and the National Science Foundation. It was published in the peer-reviewed scientific journal Trends in Cognitive Sciences.

The Mail Online reports the review accurately, but doesn’t recognise the important limitations of this review in relation to its absent methods, which means it must largely be considered to be the authors’ opinions.

 

What kind of research was this?

This was a narrative review discussing a selection of evidence about the effects of exposure to human speech during the first year of a baby’s life. They discuss how this affects not only their speech and language development, but potentially their cognitive ability and social capacity as well.

The authors provide no methods for their review. This does not appear to be a systematic review, where the authors have systematically searched the global literature to identify all evidence related to this topic. It is not known how the authors selected the studies they chose to discuss, and whether other relevant evidence was left out. Therefore, this review must largely be considered to be the opinion of the authors.

While we find the possibility highly unlikely, this sort of unsystematic review may have been subject to what is known as “cherry-picking” – where evidence that does not support the authors’ arguments is deliberately ignored.

 

What do the authors discuss?

The researchers say it has been thought that listening to speech is mainly beneficial for infants in helping them to develop language. However, they say that new evidence suggests that benefits lie beyond just language acquisition. 

They say that from the first months of life, listening to speech promotes the acquisition of fundamental psychological processes, including:

  • pattern learning – the ability to recognise both visual and verbal patterns, such as “ma-ma-ma”
  • the formation of object categories – the ability to place external objects into categories, such as being able to tell the difference between a white van and a white sheep
  • identifying people to communicate with
  • acquiring knowledge about social interactions
  • development of social cognition – the ability to interpret, recognise and respond appropriately to other people’s feelings and emotions

They also discuss the idea that as babies grow, they specifically favour human speech over other vocalisations, such as laughing or sneezing. They discuss the different nerve cell responses to human speech compared with other sounds, and how speech particularly activates certain areas of the brain. The researchers then go on to discuss the more intricate patterns of how babies learn the rules and patterns of speech as they grow, such as understanding repetitive sequences of different syllables.

The authors present findings of some experiments that aim to see how speech helps babies to learn object categorisation. Babies aged three to 12 months viewed different objects (such as animals) accompanied by listening to either speech or sounds/tones. This found that those listening to speech were better able to categorise similar objects than those who had heard only tones accompanying the objects.

The discussion then turned to how speech may enable babies to identify “potential communicative partners”. That is, they develop the knowledge to treat people and objects differently (for example smiling and making sounds at people). Babies also develop an understanding of how speech conveys information and intentions, even if they cannot understand what is being conveyed. 

 

What do the authors conclude?

The authors conclude: “Before infants begin talking, they are listening to speech. We have proposed that even before infants can understand the meaning of the speech that surrounds them, listening to speech transforms infants’ acquisition of core cognitive capacities […]. What begins as a natural preference for listening to speech actually provides infants with a powerful natural mechanism for learning rapidly about the objects, events and people that populate their world”.

They say that further research is needed into the range of cognitive and social processes that are and are not facilitated by speech, and the mechanisms underlying this.

 

Conclusion

This is an interesting narrative review that challenges the belief that speaking to babies is only beneficial for their own speech and language acquisition. The authors present what they describe as new evidence suggesting the benefits may extend far beyond this. They argue that speaking to babies may help develop their cognitive abilities – for example, in the experiments described, accompanying speech helped babies categorise objects – and may enhance their social capacity, such as recognising people to communicate with and understanding that speech conveys thoughts and intentions.

Much of this discussion is plausible, but the limitations of this review must be noted. The authors provide no methods on how they have searched for, reviewed and selected the evidence they discuss. We don’t know whether all evidence relevant to the topic has been considered, or whether a biased account has been given. Therefore, this review must largely be considered to be the authors’ opinions based on the studies that they have looked at. The methods and quality of these underlying studies informing this review are also unknown, so we cannot say how solid the evidence is.

It makes sense that regularly talking to your baby is beneficial, not least by helping their understanding of speech and strengthening the bond between the two of you.

There is also evidence that babies born into “speech-poor” environments, where they do not receive regular exposure to spoken language, may have delayed development.

However, whether talking to your baby will turn them into a new Mozart or Einstein, or make them super-popular in later life, cannot be proven by this review.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Say 'mama'! Talking to babies boosts their ability to make friends and learn, psychologists claim. Mail Online, January 8 2015

Links To Science

Vouloumanos A, Waxman SR. Listen up! Speech is for thinking during infancy. Trends in Cognitive Sciences. Published online November 7 2014

Categories: Medical News

Can eating like a Viking 'reduce obesity risks'?

Medical News - Thu, 01/08/2015 - 15:15

"A Nordic diet could reduce the dangers of being overweight, a study suggests," The Daily Telegraph reports. The headline comes from the results of a small randomised controlled trial.

Half the people in the trial were put on the Nordic diet, which consists of wholegrain products, vegetables, root vegetables, berries, fruit, low-fat dairy products, rapeseed oil, and three servings of fish a week.

The other half acted as a control group and ate a diet of low-fibre grain products, butter-based spreads, and a limited intake of fish.

Researchers found people on the Nordic diet developed reduced activity (expression) in 128 genes associated with inflammation of their abdominal fat compared with controls.

Inflammation may cause some of the adverse health effects associated with being overweight, such as insulin resistance, which is a risk factor for type 2 diabetes.

However, changes in gene expression are not the same as proven changes in clinical outcomes. The study did not find any correlation between these changes in gene expression and clinical measurements of risk factors, such as blood pressure or cholesterol. 

Nevertheless, it is plausible that the Nordic diet has a protective effect – it is relatively similar to the Mediterranean diet (with a bit more herring and a bit less pasta), which has been associated with a reduced risk of chronic diseases.

 

Where did the story come from?

The study was carried out by researchers from a number of academic institutions in Finland, Norway, Sweden, Iceland and Denmark.

Funding came from several sources in these countries, including research foundations and academic institutes. Several commercial companies provided food products for the study participants.

The study was published in the peer-reviewed American Journal of Clinical Nutrition.

The Daily Telegraph and the Mail Online's coverage was accurate, but both overstated the results of the study, failing to point out that research into gene activity alone is not enough to show the health benefits of a diet.

 

What kind of research was this?

This was a randomised controlled trial, which is the best way to determine the effects of an intervention.

The trial was designed to look at whether a Nordic diet had an effect on the activity of genes in abdominal fat just beneath the skin (adipose tissue) in obese people.

It also aimed to see whether any changes in gene expression were associated with clinical and biochemical effects.

In previous research, "dysfunctional adipose tissue" had been proposed as an important link between obesity and its adverse health effects, such as insulin resistance and an unhealthy balance of blood fats.

However, little is known about how diet influences adipose tissue inflammation at the molecular level.

 

What did the research involve?

Researchers recruited 200 adults to the trial, although only 166 completed it. Participants had to be between the ages of 30 and 65, with a body mass index (BMI) of 27 to 38. A BMI of 25 or above is considered overweight, while a BMI of 30 or above is considered obese.
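
As a reminder of how BMI is calculated – weight in kilograms divided by the square of height in metres – here is a minimal sketch; the person in the example is hypothetical, not a study participant.

```python
# BMI = weight in kilograms divided by height in metres squared.
# The example values below are hypothetical.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

example = bmi(88, 1.75)   # a hypothetical 88 kg person who is 1.75 m tall
print(round(example, 1))  # 28.7 -- overweight (25+) and within the trial's 27-38 range
```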

Participants also had to have at least two other features of metabolic syndrome, a condition characterised by features such as high blood pressure, high blood sugar and abnormal blood fat levels, which is often associated with diabetes.

For a period of 18 to 24 weeks, 104 people were put on the Nordic diet, comprising wholegrain products, berries, fruits and vegetables, rapeseed oil, three fish meals a week, and low-fat dairy products. They also avoided sugar-sweetened products.

96 people were put on the control diet, comprising low-fibre cereal products and dairy fat-based spreads, with a limited amount of fish.

A clinical nutritionist or a dietitian gave instructions about the diets. The participants' dietary intake was monitored throughout using regular food records.

To reduce any confounding factors, the study participants were advised to keep their body weight and physical activity unchanged, and to continue their current smoking habits, alcohol consumption and drug treatment during the study.

Researchers took biopsy samples of the participants' adipose tissue at the beginning and end of the study, and extracted RNA, which is used to carry out DNA's genetic instructions.

A test called a transcription analysis was performed to study the expression of genes in the tissue.

Researchers also took various other clinical and biochemical measurements, including levels of blood sugar, cholesterol and triglycerides.

 

What were the basic results?

56 participants were included in the final analysis – 31 from the Nordic diet group and 25 from the control group.

Participants were excluded from this analysis if their body weight changed by more than 4kg, if they started to use statins, if their BMI was over 38, or if their adipose tissue samples were of poor quality.

The researchers report differences between the two groups in the activity of 128 genes.

Many of these genes were associated with pathways relating to the immune response, with a slightly reduced activity among people in the Nordic diet group and increased activity among people in the control diet group.

There were no differences between the groups in terms of clinical or biochemical measurements.

 

How did the researchers interpret the results?

The researchers say their study indicates that the Nordic diet reduced the activity of genes associated with inflammation in adipose tissue when compared with the control diet group.

The quality of diet may be an important factor for regulating adipose tissue inflammation independent of weight change, they say.

 

Conclusion

This study found that the activity of certain genes, some of which are associated with inflammation, was different in obese people who ate a Nordic diet compared to those on a control diet.

Yet there was little correlation between these findings and any changes in measurements of risk factors such as participants' cholesterol or blood pressure. The authors concede that the clinical relevance of their findings is unclear.

As the authors say, one limitation is that volunteers in the study may have had healthy eating habits before the study began.

If these volunteers had been randomised to the control diet group, they may have modified their diet to become more unhealthy, and therefore changes in gene expression would seem to be more evident in this group.

Being overweight or obese increases the risk of chronic illness such as diabetes, heart disease and some cancers, so it's important to maintain a healthy weight.

The Nordic diet is being touted as one of the latest trends in healthy eating. Whether it is a proven method to prevent chronic diseases is uncertain, but it does appear to be based on sensible nutritional principles, such as eating lots of wholegrains, fruit and vegetables, while cutting down on saturated fats.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

High blood pressure? Eat like a Viking. The Daily Telegraph, January 7 2015

High blood pressure? Go Nordic: Eating like a Viking can reduce the damaging effects of being overweight - and stave off diabetes. Mail Online, January 7 2015

Links To Science

Kolehmainen M, Ulven SM, Paananen J, et al. Healthy Nordic diet downregulates the expression of genes involved in inflammation in subcutaneous adipose tissue in individuals with features of the metabolic syndrome. The American Journal of Clinical Nutrition. Published online November 19 2014

Categories: Medical News

New 'game-changing' antibiotic discovered

Medical News - Thu, 01/08/2015 - 14:30

“New class of antibiotic could turn the tables” on antibiotic resistance, reports The Guardian – just one of many headlines proclaiming the discovery of a “super-antibiotic”. For once, such enthusiastic headlines might be largely justified.

The study in the spotlight shows the discovery of a new antibiotic, teixobactin, and is exciting for two main reasons.

Firstly, teixobactin proved effective against certain types of drug-resistant bacteria, such as MRSA and tuberculosis (TB), in mouse models. The way it works – by attacking cell walls rather than proteins – also suggests bacteria would find it hard to evolve resistance to it. It is potentially the first new antibiotic to be discovered in over 20 years.

Secondly, the mechanism of discovery is potentially revolutionary. The research team used a device known as an iChip to make bacteria in soil “lab-ready” for use. Previously, only 1% of the organisms in soil could be grown and studied in the laboratory. This leaves 99% of bacteria as an untapped source of new antibiotics useful to people. Unlocking this natural reservoir of antibiotic production could potentially lead to the discovery of many more antibiotics in the future.

We now need to wait for tests in humans to make sure that teixobactin works and is safe. Also, teixobactin only appears to be effective against a subset of bacteria (Gram-positive bacteria), so it would not help against infections caused by Gram-negative bacteria such as E. coli.

This is genuinely exciting news, but only time will tell whether this is a historical moment of similar magnitude to that of Alexander Fleming’s original discovery of penicillin in 1928.

 

Where did the story come from?

The study was carried out by researchers from the US, Germany and the UK, and was funded by the US National Institutes of Health, the Charles A King Trust, German Research Foundation and the German Centre for Infection Research.

Many of the authors declare financial conflicts of interest, as they are employees and consultants of NovoBiotic Pharmaceuticals, a biotech firm with an interest in creating new drugs.

The study was published in the peer-reviewed science journal Nature.

The study attracted widespread attention from both the UK and international media. Generally, the media reported the story accurately, with many highlighting that while the study was promising, no human tests had yet taken place.

 

What kind of research was this?

This was a laboratory and mice study looking for new antibiotics.

Antibiotics – chemicals that kill bacteria – were first found in the early 20th century. This led to an explosion of antibiotic discovery that revolutionised medicine, and provided cures for previously incurable diseases. It also led to a marked decrease in complications arising from infection during surgical procedures we now regard as routine and safe, such as caesarean sections.

However, there have been no new antibiotic discoveries for decades. Existing antibiotics are becoming less effective because some bacteria are not killed by them and these bacteria can spread over time; these are so-called "drug-resistant bacteria".

Most people are aware of the "superbugs", such as MRSA and C. difficile, which are a leading cause of hospital-based infections. There are other candidates out there, such as extensively drug-resistant TB, which can take up to two years to treat. Therefore, the problem of drug-resistant bacteria is serious and growing, and could pose one of the greatest threats to public health in the 21st century.

This research sought to identify new bacteria from soil, which is heaving with micro-organisms harbouring naturally occurring antibiotics. Amazingly, the researchers tell us, only 1% of the organisms in soil can be grown and studied in the laboratory. This means the remaining 99% are potentially an untapped source of new antibiotics.

The team sought to devise a new way of growing and studying some of the soil micro-organisms, to screen them for any that display antibiotic properties and could be turned into new drugs.

 

What did the research involve?

The team designed and tested a number of methods to grow (culture) previously un-growable (unculturable) micro-organisms from soil.

This included making a device (iChip) that could be immersed in soil to “trick” the organisms into growing, while still allowing the team to isolate the micro-organisms for further study. This was used alongside a range of chemical growth factors to encourage and maintain growth.

When successful, they screened the newly cultured organisms for any signs that they were producing antibiotics. A number of new chemicals that looked promising were found and then tested in mice, including mice infected with methicillin-resistant Staphylococcus aureus (MRSA).

 

What were the basic results?

The results revealed a number of striking new discoveries:

  • Researchers could successfully grow a range of new organisms from the soil, which had never been done before.
  • Some of these newly grown organisms naturally produced antibiotics.
  • One such antibiotic, named teixobactin, was particularly promising and was subsequently studied intensely in the laboratory and in mice.
  • Tests in mice revealed teixobactin was effective against Gram-positive bacteria including MRSA and the bacteria that cause TB. However, it was not effective against Gram-negative bacteria such as E.coli, which have an extra layer of cell wall protection.
  • Teixobactin inhibited cell wall synthesis via a mechanism that bacteria are unlikely to develop resistance to, as it is so fundamental to their normal survival.
  • Backing this up, when teixobactin was used against the bacteria Staphylococcus aureus or Mycobacterium tuberculosis, no drug-resistant bacteria were found or developed. This is unusual, as most tests reveal some naturally occurring resistance over time.

 

How did the researchers interpret the results?

The research team simply concluded that: “The properties of this compound [teixobactin] suggest a path towards developing antibiotics that are likely to avoid development of resistance.”

 

Conclusion

This study describes the discovery of teixobactin and is exciting for two reasons. Teixobactin itself shows effectiveness against MRSA and TB in mouse models and has properties suggesting drug resistance may be unlikely to develop. This is encouraging for its potential future development as a treatment for human diseases caused by Gram-positive bacteria.

Also, the mechanism of discovery shows great promise. The research team devised a completely new way of growing micro-organisms from soil that could not previously be grown. These micro-organisms, 99% of which are unknown to science, have the potential to produce natural antibiotics. Therefore, this discovery opens up the possibility that many more antibiotics can be found in the future. This is encouraging as there has been a lack of new antibiotic discoveries since the 1980s, while at the same time, the problem of drug-resistant bacteria has been growing.

While this discovery is undoubtedly good news, there are a number of moderating factors to bear in mind:

  • We don’t know what proportion of the 99% of currently ungrowable bacteria this new method will help to unleash, and what proportion of them might yield useful antibiotics.
  • Teixobactin has so far only been trialled in the lab and in mice. We need to await tests in humans before we can be sure it works and is safe.
  • Teixobactin looks effective against a subset of bacteria only (Gram-positive bacteria) so is not a cure-all for bacterial diseases.

With these limitations in mind, for once a study matches up to the media hype, as it discovered a promising new antibiotic candidate (teixobactin) and shows us a method that has the potential to lead to many more.

It is early days, but we could potentially be heading into a future where antibiotic resistance is a thing of the past.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

New class of antibiotic could turn the tables in battle against superbugs. The Guardian, January 8 2015

Antibiotics: US discovery labelled 'game-changer' for medicine. BBC News, January 7 2015

Teixobactin discovery: Scientists create first new antibiotic in 30 years - and say it could be the key to beating superbug resistance. The Independent, January 8 2015

First new antibiotic in 30 years discovered in major breakthrough. The Daily Telegraph, January 8 2015

Is this the answer to doctors' prayers? Super-antibiotic that could wipe out diseases from MRSA to TB is hailed as a 'game-changer' by scientists. Mail Online, January 8 2015

Scientists may have found first new antibiotic in 25 years. The Times, January 8 2015

New antibiotic could work for 30 years – if used right. New Scientist, January 7 2015

Links To Science

Ling LL, Schneider T, Peoples AJ, et al. A new antibiotic kills pathogens without detectable resistance. Nature. Published online January 7 2015

Categories: Medical News

Could meal-in-a-pill 'trick' body into losing weight?

Medical News - Wed, 01/07/2015 - 15:00

“Weight loss drug fools body into reacting as if it has just eaten,” The Guardian reports. The drug, fexaramine (or Fex), stimulates a protein involved in metabolism that is usually activated when the body begins eating, though it has only been tested in mice.

Researchers found that obese mice given Fex stayed the same weight despite continuing to eat the same amount of a high-fat diet. However, contrary to some media claims, they did not actually lose any weight. The drug had no effect on mice of normal weight.

The protein that is stimulated, FXR (farnesoid X receptor), is present in many organs of the body and plays a complex role in metabolism that is not fully understood. 

Previous drugs developed to activate this protein have shown conflicting results, possibly because they entered the bloodstream and so acted on all of the organs. Fex has been developed so that it appears to be barely absorbed into the bloodstream, and so only acts on the FXR in the intestines. This produced better results in obese mice and should also reduce the risk of side effects.

Further animal and primate studies will need to be conducted before the drug would be allowed to progress to human trials, but these are promising results. However, even if these trials passed with flying colours, we would estimate that it would take at least 5-10 years before any drug based on this research came to market.

 

Where did the story come from?

The study was carried out by researchers from the Salk Institute for Biological Studies in California and several other institutes in the US, Australia and Switzerland. It was funded by the US National Institutes of Health, the Glen Foundation for Medical Research, the Leona M. and Harry B. Helmsley Charitable Trust, Ipsen/Biomeasure, the California Institute for Regenerative Medicine, the Ellison Medical Foundation, the National Health and Medical Research Council of Australia, and the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

A financial conflict of interest was reported. Many of the contributing authors “are co-inventors of FXR molecules and methods of use, and may be entitled to royalties from their use”.

The study was published in the peer-reviewed journal Nature Medicine.

In general, the media have reported the story accurately, pointing out that it is in the early stages of development and that it has only been tested on mice. However, as mentioned, headlines such as the Daily Mirror’s “'Imaginary meal' diet pill tricks body into losing weight”, or The Daily Telegraph’s assertion that the “pill makes you feel full” are inaccurate. None of the mice lost weight and none of their appetites were suppressed.

 

What kind of research was this?

This was an animal study to test whether a new drug could improve the metabolism of mice. The researchers conducted a variety of experiments with the drug, comparing the responses of mice given it with those of mice given a placebo.

The drug was created to mimic an effect of eating food. Food causes bile acids to be secreted and this activates a protein called FXR (farnesoid X receptor).

FXR plays a complex role in metabolism that is not fully understood. It is present in many organs of the body, including the kidney, stomach, intestines, gall bladder, liver and both white and brown fat cells.

Previously, drugs have been developed to activate FXR, but they encountered problems because they activated FXR in all of the organs, which gave conflicting outcomes. For example, mice of normal weight given these drugs had improved glucose tolerance, whereas obese mice put on more weight and had even poorer glucose tolerance. It was not clear why this happened, so the researchers wanted to investigate whether activating FXR only in the intestines would improve metabolism.

They developed Fex to activate intestinal FXR in the way food does, without it being absorbed into the general circulation, to see if this made a difference. They also say that limiting absorption means there would be less potential for side effects.

 

What did the research involve?

The researchers developed a drug called Fex and performed a number of tests using mice.

They first tested the absorption of Fex into the general circulation. They gave mice either a Fex pill by mouth or an injection of Fex into the fluid that surrounds the abdominal organs. The researchers then measured the level of FXR activation in each organ.

The researchers then gave normal weight mice either a Fex pill or a placebo for 35 days. They then compared their weight, metabolic rate and sensitivity to insulin.

Lastly, mice were fed a high-fat diet (60% fat) for 14 weeks to make them obese. The researchers then gave them different doses of the Fex pill or a placebo for five weeks. They compared their weight, metabolic rate, extent of unhealthy white fat and healthy brown fat, and markers of tissue inflammation.

 

What were the basic results?

The oral Fex pill activated FXR in the intestine and did not activate it in the liver or kidneys. The researchers say this shows that it was only minimally absorbed into the general circulation. This was in comparison to the injection of Fex into the abdominal cavity, which stimulated FXR in the intestine as well as the liver and kidneys.

There was no difference in weight gain (which was small in both groups) or other metabolic measurements between normal weight mice given oral Fex for five weeks and normal weight mice given placebo.

In obese mice, the Fex pill caused an improvement in metabolism compared to placebo, including:

  • reduced weight gain
  • increased sensitivity to insulin
  • more unhealthy white fat turning into healthy brown fat
  • reduced inflammation

These obese mice weighed 34 grams at the start of the experiment (normal weight mice weigh around 28 grams). They continued on the high-fat diet (60% fat) for five weeks. Those given placebo increased in weight to 44 grams, but those given the highest dose of Fex did not gain any more weight. None of these mice lost weight. The researchers report that there was no difference in appetite or food consumption between the mice given Fex and those given placebo.

 

How did the researchers interpret the results?

The researchers concluded that Fex might be a “promising” approach to stimulating FXR in order to improve metabolism. They say that the “absence of a change in food intake is notable, as failure of appetite control is a major reason for weight gain”. They say that, as this drug appears to improve metabolism without any change in food intake, it “may offer a viable alternative for obesity treatments”. They also point out that, as Fex is only minimally absorbed and only stimulates the intestinal FXR, it offers “improved safety profiles” by not circulating around the whole of the body.

 

Conclusion

This animal study has shown that a new drug called Fex prevents obese mice from further weight gain, despite remaining on a high-fat diet. There were also other metabolic improvements, including improved sensitivity to insulin and a reduction of unhealthy white fat cells. There were no differences in measures of metabolism between mice of normal weight given Fex or placebo, although both groups gained a small amount of weight.

This preliminary study suggests that, unlike previous drugs that stimulated FXR via the general circulation and showed conflicting results, targeting FXR in the intestine alone benefits obese mice. As the drug has only been tested in mice for five weeks, there is limited information on what side effects it might have in humans.

Further animal and primate studies will need to be conducted before the drug would progress to human trials, but these are promising results.

As the drug appears to improve metabolic function rather than promote weight loss, it could be a candidate to treat diseases of the metabolism, such as type 2 diabetes or metabolic syndrome (where a person has a combination of diabetes, high blood pressure and obesity).

Due to the length of time it takes to bring a drug to market, as well as the chance of a drug proving to be ineffective or unsafe in humans, we can’t envisage Fex (or a variant) appearing in your local pharmacy anytime soon.

In the meantime, tips to help you lose weight can be found here and you can get online support here.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Weight loss drug fools body into reacting as if it has just eaten. The Guardian, January 5 2015

Could an 'imaginary meal' pill solve the obesity crisis? Drug tricks the body into feeling full - AND lowers cholesterol and blood sugar. Mail Online, January 5 2015

Diet pill that makes you feel full proven to keep weight off in mice, scientists say. The Independent, January 5 2015

'Imaginary meal' diet pill tricks body into losing weight. Daily Mirror, January 5 2015

'Imaginary meal' pill makes you feel full and burns fat. The Daily Telegraph, January 5 2015

An 'imaginary meal' pill to lose weight? Scientists reveal latest weapon to battle obesity. Daily Express, January 5 2015

Links To Science

Fang S, Suh JM, Reilly SM, et al. Intestinal FXR agonism promotes adipose tissue browning and reduces obesity and insulin resistance. Nature Medicine. Published online January 5 2015

Categories: Medical News