Medical News

Easy access to takeaways 'increases obesity risk'

Medical News - Fri, 03/14/2014 - 13:00

"Encountering too many takeaway outlets near our homes, workplaces and even on the daily commute to work could be increasing our risk of obesity," The Independent reports.

The headline is based on a new study looking at whether the density of fast food outlets in some areas is contributing towards the obesity epidemic. Researchers looked at the availability of fast food outlets in the area around people's workplace and home, as well as along their commuting route.

The researchers then looked at how this related to how much fast food people said they ate and their body mass index (BMI). They found that increased exposure to fast food outlets was generally associated with increased fast food consumption and marginally increased BMI.

The work environment appeared to give the strongest results – people who had the most takeaways near their workplace ate an additional 5.3g of takeaway food per day and had a BMI score 0.92 higher than those least exposed.

It seems rational to expect that an increased prevalence of fast food outlets is linked to increased consumption, but the design of this fascinating study can't prove this is the case.

Nobody forces us to eat junk food. Most fast food outlets also have healthy alternatives. Read more about healthy eating out.

 

Where did the story come from?

The study was carried out by researchers from the University of Cambridge and was published in the peer-reviewed British Medical Journal. The article has been published on an open access basis, so it is freely available to access online.

The study was undertaken by the Centre for Diet and Activity Research, a UK Clinical Research Collaboration Public Health Research Centre of Excellence.

Additional funding was provided by the British Heart Foundation, Cancer Research UK, the Economic and Social Research Council, the Medical Research Council, the National Institute for Health Research and the Wellcome Trust under the UK Clinical Research Collaboration.

The media's reporting was generally of good quality, providing an accurate summary of the research. However, none of the coverage mentioned the inherent limitation of the cross-sectional study design: it cannot prove cause and effect, only highlight an association.

 

What kind of research was this?

This was a cross-sectional study that included a large population sample. Researchers looked at the takeaway food outlets available near to where the individuals in the sample lived and worked so they could look at whether this was associated with their body weight and eating habits.

The researchers say that the food environment of our neighbourhood – the so-called "foodscape" – has been considered as an influence on our health and diet.

Over the past 10 years, our consumption of food away from the home in the UK has increased by almost a third, and the number of takeaway outlets has increased dramatically. This could be creating what is known as an "obesogenic" environment (one that increases the risk of residents becoming obese).

It is thought that these social and environmental trends could be contributing to rising levels of people who are overweight or obese. It follows that modifying the availability of fast food outlets could be an element in influencing nutrition and health in the UK.

However, this cross-sectional study from one region in the UK can only demonstrate associations. It cannot prove that takeaway or fast food outlets contribute to the cause of the obesity problem, although many may think it is common sense that they would. As the Mail Online's headline put it, "Another study from the University of the Obvious: People who live or work near takeaways are twice as likely to be obese."

Still, there are likely to be a combination of several factors in our lifestyle, diet and activity that are contributing to the nation's growing waistlines – the foodscape may be an additional factor.

 

What did the research involve?

This research included a sample of 5,442 working adults (aged 29-62 years) who were participating in the Fenland study, an ongoing population cohort study based in Cambridgeshire in the UK.

It looked at whether there were fast food outlets near to where participants lived and worked, and compared this with their self-reported consumption of takeaway food and their BMI.

From the full sample of almost 10,500 people in the Fenland study, the researchers excluded those with incomplete data on their work and travel or who worked outside the county.

Participants' home and work addresses were mapped by postcode. Their home and work neighbourhoods were defined as circular regions with a one-mile straight line radius centred on the postcode.

Accurate data on food outlet locations was sourced from 10 local councils covering the study area, again mapped by postcode.

The participants also recorded their commuting route and distance, and the researchers looked at accessible takeaway food outlets along these routes. They used a 100-metre "buffer zone" if they were walking or cycling, and a 500-metre buffer if they were travelling by car.

Participants completed questionnaires relating to their general lifestyle and medical history, and were weighed and measured by trained researchers. They also completed food frequency questionnaires.

The researchers were mainly interested in how much energy-dense foods from takeaway outlets people reported eating. Using food frequency questionnaires, the researchers estimated people's daily intake (in grams) of:

  • pizza
  • burgers
  • fried food (such as fried chicken)
  • chips

Together, these foods gave an indication of the grams per day of takeaway-type food consumption.

The researchers also looked at the participants' body mass index (BMI). They then looked at the associations between these diet and BMI outcomes and the takeaway food environment around a person's home and work, as well as along travel routes.

Their models took into account various possible confounders, including:

  • age
  • sex
  • household income and educational level (a proxy for socioeconomic status)
  • car ownership
  • daily energy intake and physical activity
  • smoking status

 

What were the basic results?

On average, the full sample was exposed to 9.3 takeaway food outlets at home, 13.8 at work and 9.3 along commuting routes. People were therefore exposed to 48% more takeaway food outlets at work than at home.

The researchers found that there was a positive association between exposure to takeaway food outlets and the consumption of takeaway food. The link was strongest in the work environment, where there was a dose-response relationship (exposure goes up, consumption goes up).

People most exposed to takeaway food outlets at work consumed an additional 5.3g per day of takeaway food (95% confidence interval (CI) 1.6 to 8.7g) compared with those least exposed.

At home, people in the most exposed areas ate 4.9g per day more than those least exposed, but there was less evidence for a dose-response relationship. There was also little evidence for an association between exposure across travel routes and consumption of fast food.

However, when exposure in all environments was combined, those most exposed consumed 5.7g per day more fast food than those least exposed.

There was also a dose-response relationship between exposure to fast food outlets at work and BMI (as you might expect, given that greater exposure was also linked to eating more fast food). People most exposed had a significantly higher BMI, with a difference of 0.92kg/m2 compared with those least exposed.

Again, when all exposure environments were considered together, those with the highest exposure had a BMI 1.21kg/m2 higher than those least exposed.

There was no difference by gender for either takeaway consumption or BMI.

 

How did the researchers interpret the results?

The researchers conclude that, "Exposure to takeaway food outlets in home, work, and commuting environments combined was associated with marginally higher consumption of takeaway food, greater body mass index, and greater odds of obesity.

"Government strategies to promote healthier diets through planning restrictions for takeaway food could be most effective if focused around the workplace."

 

Conclusion

This research found that increased exposure to fast food outlets, particularly around work, is associated with increased fast food consumption and marginally increased BMI.

The research benefits from including a large population sample and from taking into account various possible confounders that could influence the association between fast food outlet exposure, consumption and BMI, including markers of socioeconomic status and diet and lifestyle in general.
 
The finding that people were exposed to almost 50% more takeaway food outlets around their work locations than their home is perhaps not surprising. Most people live in residential areas, while their work locations will often be in towns and city centres, where there are many more food outlets. It also may be expected that the more takeaway food outlets that people are exposed to, the more they are likely to eat.

However, this remains a cross-sectional study conducted in only one region of the UK, which can only demonstrate associations and cannot prove cause and effect. The availability of fast food outlets in our environment may certainly be a contributor, but it is likely to be a combination of several factors in our lifestyle, diet and activity that is contributing to the obesity epidemic.

While the study has attempted to adjust for several potential confounders, it possibly has not been able to account for all factors that may be having an influence.

When interpreting this study, it's important to note that it was only conducted in one very rural region of the UK, and different results may be found in other places. Also, despite the best efforts of the researchers, there may be some inaccuracies in how they determined people's exposure to fast food, as well as in the people's reporting of their food consumption.

It should also be pointed out that the study didn't look at consumption of soft drinks, which are commonly sold in fast food outlets and can contain a significant amount of calories.

Nevertheless, the researchers' suggestion that, "Policies designed to improve diets and body weight by restricting takeaway food access may work, and could be most successful if focused around the workplace" seems reasonable.

It is possible that a voluntary scheme, where employees are rewarded with small treats or prizes for staying out of the local burger joint, could work.

But ultimately, your risk of obesity comes down to the choices you make. You choose where and what you eat. The good news is that it is easy to make healthy food swaps and choose healthier options while eating out.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

People who live or work near takeaways 'twice as likely to be obese'. The Independent, March 14 2014

Takeaway clampdowns 'may combat obesity epidemic'. BBC News, March 14 2014

Living near glut of takeaways doubles chances of obesity. The Daily Telegraph, March 14 2014

Another study from the University of the Obvious: People who live or work near takeaways are twice as likely to be obese. Mail Online, March 13 2014

Links To Science

Burgoine T, Forouhi NG, Griffin SJ, et al. Associations between exposure to takeaway food outlets, takeaway food consumption, and body weight in Cambridgeshire, UK: population based, cross sectional study. BMJ. Published online March 13 2014

Categories: Medical News

Statins side effects are minimal, study argues

Medical News - Thu, 03/13/2014 - 14:30

“Cholesterol-lowering statins have almost no side effects,” The Guardian reports. A new UK study argues that the majority of reported side effects are actually due to the nocebo effect – symptoms that are “all in the mind”. 

The researchers looked at the combined results of 29 studies and found there was no difference in the incidence of common side effects in the treated group compared to those in the placebo group. However, there was a slightly higher occurrence of diabetes.

Statins slightly reduced the risk of death from any cause, as well as the risk of heart attack and stroke in people with or without vascular disease. 

However, the research did not include analysis for some reported side effects of statins, such as memory problems, blurred vision, ringing in the ears or skin problems.

The frequently reported side effect of muscle weakness was only considered if there was also a 10-fold rise in a muscle enzyme associated with muscle injury. Muscle aches, in particular, were no more common in the statin group than the placebo group.

This research has provided a novel approach to assessing the risks and benefits of using statins. Arguably, it provides the most comprehensive research yet on the number of people thought to have genuine side effects, and the risks and benefits of taking statins in both low- and high-risk groups for cardiovascular diseases such as heart attacks.

However, some headlines – such as “Statins are safe” – have overstated the case. There is no such thing as an entirely "safe" drug for everyone who takes it. If a drug doesn’t have side effects, it doesn’t work.

If you have any concerns about taking statins, you should discuss this with your GP or health advisor.

 

Where did the story come from?

The study was carried out by researchers from Imperial College London and the London School of Hygiene and Tropical Medicine. They say they did not receive any grants from any funding agency in the public, commercial or not-for-profit sectors. The authors are supported by the British Heart Foundation, the National Institute for Health Research and the Wellcome Trust.

The study was published in the peer-reviewed medical journal European Journal of Preventive Cardiology.

The media reported that this study shows that statins have no side effects in comparison to placebo.

This is misleading, as the research was aiming to ask a different question: “What proportion of symptomatic side effects in patients taking statins are genuinely caused by the drug?”

And the researchers were more cautious in their conclusion.

The study has not comprehensively looked at all side effects, and it gives no indication of the severity or frequency of the side effects experienced.

The media also did not report how small the benefits of statins were found to be in this study. This is an important consideration for people who want to make an informed choice when weighing up the risks and benefits of statin treatment.

 

What kind of research was this?

This was a meta-analysis of double-blind randomised controlled trials. This means the researchers added together and analysed the results of all studies that met their inclusion criteria. Double-blind randomised controlled trials are the gold standard for studies of whether a drug works or not, as they compare a drug directly with a placebo (dummy), and neither the participant nor the clinician knows which one they are taking. This removes any bias that could affect the results.

Studies of safety are often based on long-term observational studies, often without a placebo. The approach of reviewing randomised trials for safety data, as used by these researchers, would be particularly good at checking on differences between a drug and placebo.

 

What did the research involve?

The researchers found studies comparing statins to placebo and pooled the results to see if statins increase the risk of side effects, compared to rates in the placebo arm.

Two large databases were searched for relevant studies comparing statins with placebo for cardiovascular disease prevention. Studies were excluded if they compared statins with standard therapy or no treatment. The researchers also excluded studies that mainly included people on renal dialysis or with organ transplants, or in which other non-statin medication was also started. This was because people in these categories did not represent the majority of people treated with statins.

They separately analysed studies of primary cardiovascular disease prevention (i.e. in people who had not had a heart attack or stroke) and secondary cardiovascular disease prevention (reducing the risk of a further heart attack or stroke in people who have already had one or the other).

They recorded any serious events for each trial and pooled the results, including:

  • mortality of any cause
  • fatal heart attack
  • non-fatal heart attack
  • fatal stroke
  • non-fatal stroke
  • any life-threatening condition
  • any hospitalisation

They also recorded other side effects, but only if they were reported in at least two trials and the sample size was at least 500 people:

  • increased liver enzymes
  • newly diagnosed diabetes mellitus
  • myopathy symptoms (muscular weakness)
  • muscle aches
  • increased creatine kinase (a muscle enzyme that rises during muscle injury) to more than 10 times the upper limit of normal
  • back pain
  • newly diagnosed cancer
  • kidney problems
  • insomnia
  • gastrointestinal disturbance, nausea
  • dyspepsia (indigestion), diarrhoea or constipation
  • fatigue
  • headache
  • suicide

They performed internationally recognised statistical analyses to pool the results. They then calculated the risk of experiencing each side effect for participants taking statins and for participants taking placebo, and subtracted the placebo risk from the statin risk to find the absolute increase in risk attributable to statins. By doing this, they could work out what proportion of the symptoms reported on statins would have occurred anyway, without the medication.

The researchers reported risks as “absolute risks”, calculating the reduction (or increase) in risk by subtracting the risk in one arm from that in the other. This allows a direct comparison of the possible risks and benefits.

 

What were the basic results?

They found 14 randomised controlled trials, which included 46,262 people without previous heart disease or stroke (primary prevention). They also found 15 randomised controlled trials, including 37,618 people who already had heart disease or stroke (secondary prevention). The trials lasted between six months and 5.4 years, and the people included were mostly men.

In the studies of people who had not already had a heart attack or stroke, the rate of new-onset diabetes was 2.7% for people on statins and 2.2% for those on placebo.

The difference between rates on treatment and on placebo is 0.5% (95% confidence interval [CI] 0.1 to 1%), meaning there was a small, statistically significant increase in the rate of developing diabetes with a statin.

This means that for every 100 new cases of diabetes mellitus diagnosed in people taking statins, around 20 could be attributed to the medicine; the rest would have been expected to occur anyway. In people who had already suffered a heart attack or stroke, only one study reported new-onset diabetes, and no significant effect was seen.
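As a rough illustration of where the researchers’ “one in five” figure comes from (this calculation is inferred from the rates above, rather than spelled out in the study), the proportion of new diabetes cases on statins that can be attributed to the drug is approximately:

\[
\frac{2.7\% - 2.2\%}{2.7\%} = \frac{0.5}{2.7} \approx 0.19 \text{, or roughly 1 in 5}
\]

The remaining four in five cases occurred at a similar rate in the placebo group, so would have been expected to develop regardless of treatment.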

In the studies on people who had not already suffered from a heart attack or stroke, the risk of death from any cause on statins was 0.5% (CI -0.9 to -0.2%) less than the risk on placebo. The risk of a heart attack was 1% (CI -1.4 to -0.7%) less and the risk of stroke 0.3% (CI -0.5 to -0.1%) less.

In the studies on people who had already suffered from a heart attack or stroke, the reduction in absolute risk of death from any cause was even more: 1.4% (CI -2.1 to -0.7%) less compared to placebo. Statins also significantly reduced the risk of a heart attack by 2.3% (CI -2.8 to -1.7%) and the risk of stroke was 0.7% (-1.2 to -0.3%) less.

The proportion of people developing symptoms or other blood test abnormalities was as follows:

  • Liver enzymes rose in 0.4% of people on statins in both groups of studies. No symptoms were reported, and it is unclear whether this was harmful.
  • There was no significant difference between taking statins or placebo for any of the other adverse events or side effects listed above.
  • Muscle weakness (myopathy) was only recorded if the muscle enzyme creatine kinase was raised to more than 10 times the upper limit of normal, so it was found in just 16/19,286 people on statins and 10/17,888 on placebo in the primary prevention group. Muscle aches, recorded as a separate category, were experienced by 1,744/22,058 (7.9%) of people on statins and 1,646/21,624 (7.6%) on placebo.

 

How did the researchers interpret the results?

The researchers concluded that, at the doses tested in these 83,880 patients, only a small minority of symptoms reported on statins are genuinely due to the statins; almost all reported symptoms occurred just as frequently when patients were given placebo. New-onset diabetes mellitus was the only potentially or actually symptomatic side effect whose rate was significantly higher on statins than on placebo; nevertheless, only one in five of these new cases was actually caused by statins.

 

Conclusion

This meta-analysis pooled results from 29 studies and showed a very small increased risk of newly diagnosed diabetes mellitus (an absolute increase of 0.5%). This increase is the same size as the reduction in the risk of death from any cause seen with statins, compared with placebo, in people taking them to prevent a first heart attack or stroke.

The researchers point out some limitations to the meta-analysis:

  • Not every study reported on every side effect, meaning that the number of participants differed for each category of side effect. Side effect categories were only included if they were reported in at least two trials with a combined sample size of at least 500 people, so there may be numerous other side effects that were not covered by this research.
  • New onset diabetes was only documented in 3 of the 29 trials, though the numbers were still reasonably large.
  • Many trials do not state clearly how and how often adverse events were assessed. This is particularly important, as it is not clear from this type of analysis how often the side effects were experienced or the severity.

Side effects not covered by this review include memory problems, blurred vision, ringing in the ears and skin problems.

Anecdotally, muscle aches or weakness is one of the main reasons people stop taking statins. In this review, muscle weakness was only counted if the person also had a more than 10-fold increase in creatine kinase level (indicating muscle damage). Muscle aches were recorded separately, as they are more common and not always experienced alongside muscle weakness. No firm conclusions can therefore be drawn from this meta-analysis about whether statins affect the risk of muscle weakness when there is less than a 10-fold increase in creatine kinase levels.

This research was limited to studying the side effects reported in the included studies. Although it was not a comprehensive study of all side effects, it has provided a novel approach to assessing the balance of risks and benefits.

It provides extremely useful data on the proportion of people expected to have genuine side effects and the balance of risks and benefits when taking statins in both low- and high-risk groups.

There are other ways you can lower your cholesterol levels, such as eating a healthy diet low in saturated fat and taking regular exercise.

Read more about preventing high cholesterol.  

Analysis by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

Statin side-effects minimal, study finds. The Guardian, March 13 2014

Statins do NOT have major side effects, claims study: Research finds users less likely to suffer maladies than control group. Mail Online, March 13 2014

Statins found to have 'almost no side effects'. ITV News, March 13 2014

Statin heart pills are safe, according to experts. Daily Express, March 13 2014

Links To Science

Finegold JA, Manisty CH, Goldacre B, et al. What proportion of symptomatic side effects in patients taking statins are genuinely caused by the drug? Systematic review of randomized placebo-controlled trials to aid individual patient choice. European Journal of Preventive Cardiology. Published online March 12 2014

Categories: Medical News

Oxytocin nasal spray tested for anorexia

Medical News - Thu, 03/13/2014 - 14:30

“A kiss to 'cure' anorexia: 'Love hormone' can help reduce sufferers' obsession with food and weight,” is the unsupported claim on the Mail Online.

This story involved the media’s favourite hormone, oxytocin, which, depending on what pop-science source you read, has been dubbed the “love”, “cuddle” or “kissing” hormone, as it is associated with intense emotions (both positive and negative).

The study found that 31 South Korean women with anorexia given an intranasal spray containing the hormone oxytocin paid less attention to images of food and fatter body shapes, but not to other weight-related images, 45 minutes later. Oxytocin had no effect on how much fruit juice the women could drink at the end of the study.

It is at best unclear whether these short term effects would lead to any improvement in the symptoms of anorexia. The results also may not be indicative of what would be found in a more diverse and larger group of people with anorexia.

This is far from convincing evidence that oxytocin could offer a treatment or “cure” for anorexia as implied by the headlines.

And even if oxytocin was effective, it doesn’t necessarily mean patients would use it.

From the current evidence, the mainstay of effective anorexia treatment involves psychological therapies. 

 

Where did the story come from?

The study was carried out by researchers from Inje University and other universities in the Republic of Korea as well as King’s College London.

It was funded by the National Research Foundation of Korea and the Ministry of Education in the Republic of Korea, with the UK author being part funded by the National Institute for Health Research.

The study has been accepted for publication in the peer-reviewed journal Psychoneuroendocrinology.

A similar study, carried out by some of the same researchers, looked at the effect of oxytocin on attention to images of faces showing different emotions. It has been published in the open access journal PLoS One.

This study also found some effect of oxytocin on attention to faces showing disgust and anger.

The media’s interest in this story may owe more to an interest in being able to use the phrase “love hormone” than to the strength of the study findings, which the headlines overstate.

These findings are of a short term effect in a very small group of women on an outcome of, at best, unclear clinical relevance.

Any mention of a possible “cure” is arguably irresponsible journalism as it may give false hope to those who are concerned about family members or friends affected by anorexia.

 

What kind of research was this?

This was a randomised crossover study looking at the impact of an oxytocin nasal spray on response to food and weight related images in people with anorexia.

Oxytocin is a hormone released in high levels during childbirth. It is also released during sex, and is thought to be involved in helping people to form bonds.

Its role in these areas has led to it being called various things in the popular media, such as the “bonding hormone” or “love hormone”. However, there is also evidence that it is associated with less “cuddly” emotions such as envy and hostility to strangers.

The hormone also has effects on appetite, and fear and reward pathways in the brain, so the researchers wanted to test whether the hormone might have a beneficial effect on anorexia.

Drugs based on this hormone are already used medically to help induce labour, and it has been tested as a possible treatment in some mental health conditions, such as generalised anxiety disorder, postnatal depression and autistic spectrum disorder.

 

What did the research involve?

The researchers tested oxytocin in women with and without anorexia. They gave them a nasal spray containing the hormone or an inactive solution (placebo), and then tested their responses to food and weight related images and offered them a fruit drink. The researchers then assessed whether responses and drink consumption differed after oxytocin or placebo, and whether this depended on whether the women had anorexia or not.

The study enrolled 31 women with anorexia from South Korea, who were inpatients and outpatients at an early stage of treatment for the condition. The control group were 33 healthy female university student volunteers without anorexia.

The women all received oxytocin and placebo four to seven days apart. The order in which they received the drugs was randomly selected. The women self-administered a nasal spray containing either oxytocin or placebo, and the doctors and women did not know which the nasal spray contained.

Forty-five minutes after administering the spray, the women took part in an experiment to test which of two alternative images gained their attention. They were shown paired photos on either side of a screen: one related to food, weight or body shape, and the other to an unrelated subject.

The food and weight related images were:

  • Food: High or low calorie foods, or neutral foods (not specified).
  • Body shape: Parts of women’s bodies (such as thighs or stomach) of different shapes – some fatter, some slimmer, and some neutral (images of eyes or elbows that are not associated with body shape).
  • Weight: Women weighing themselves, weighing scales or other weight related images.

Each of these photos was matched with an image expected to have a similar effect (positive, negative or neutral) but not relating to food, weight, or body shape. For example, this could include photos of kittens (positive), snakes (negative), or birds (neutral).

The images were displayed on either side of a screen for a second, then a symbol flashed up on one side of the screen. The women had to press a button to identify which symbol they had seen as quickly as they could. The idea is that women respond more quickly when the symbol appears on the side of the screen showing the picture they were focusing on, so the experiment aimed to see whether they were focusing on the food and weight picture or the unrelated subject.

The implication is that if the women with anorexia focused less on food, weight and body shape related images after oxytocin than after placebo, the hormone may be having a beneficial effect on their mindset.

At the end of these tests, the women were asked to drink as much as they could of a 190ml carton of apple juice. The researchers compared the effects of oxytocin and placebo on their experiments, as well as various psychological measures taken during the day.

 

What were the basic results?

Women with anorexia and those without the condition showed similar responses to the images after placebo.

The women with anorexia paid less attention to the food images (negative, positive, or neutral) and negative shape images after the oxytocin nasal spray than after the placebo spray. Oxytocin did not have an effect on the women’s responses to the weight images.

The healthy women showed some slight differences from the women with anorexia in some of their responses to oxytocin, but these did not reach statistical significance.

Oxytocin had no effect on how much juice the women allowed themselves to drink at the end of the day.

 

How did the researchers interpret the results?

The researchers concluded that the effects they saw with oxytocin suggested that it could potentially reduce anorexia symptoms. They say that a study evaluating oxytocin as a treatment for anorexia is needed.

 

Conclusion

The study found that oxytocin nasal spray may reduce the short term attention to food and body shape images in women with anorexia.

This was a small study including 31 South Korean women being treated for anorexia. While it showed some potential effect on the women’s attention to food and negative body shape images in the very short term, it is not clear whether this would result in an alleviation of their anorexia symptoms. Oxytocin did not have an impact on the women’s consumption of fruit juice at the end of the study, so any potential to affect eating behaviour in these women (in either a negative or positive way) remains unproven.

The small size and very select sample used in the study (women from one centre in South Korea) means the results may not be representative of the wider population with anorexia, particularly in other countries. Risk factors for anorexia are thought to be highly culturally specific and may vary from country to country.

Also, the study carried out multiple statistical tests, which increases the likelihood that some of the significant results arose by chance.

This study presents far from convincing evidence that oxytocin could offer a treatment or “cure” for anorexia, as implied by the headlines.

The current evidence suggests that the most effective treatments are talking therapies such as cognitive behavioural therapy.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

'Love hormone' may treat anorexia. BBC News, March 13 2014

A kiss to 'cure' anorexia: 'Love hormone' can help reduce sufferers' obsession with food and weight. Mail Online, March 13 2014

Links To Science

Kim Y, Kim C, Cardi V, et al. Intranasal oxytocin attenuates attentional bias for eating and fat shape stimuli in patients with anorexia nervosa. Psychoneuroendocrinology. Published online March 13 2014

Categories: Medical News

Breast cancer exercise advice may change

Medical News - Wed, 03/12/2014 - 13:58

“Exercise advised for lymphoedema after breast cancer,” reports BBC News.

The headlines follow the publication of new draft recommendations by the National Institute for Health and Care Excellence (NICE), which is updating its guidance on the diagnosis and treatment for people with advanced breast cancer. The move follows new evidence on the safety and benefits of exercise for breast cancer-related lymphoedema. NICE has developed two new draft recommendations, which have been published for consultation, to directly address this issue.

 

What is lymphoedema?

Lymphoedema is a painful and incurable condition that causes chronic swelling.

It occurs when lymph nodes or vessels are damaged or blocked, leaving lymph fluid unable to pass through them. Fluid then builds up and causes swelling.

Lymphoedema may occur as a result of cancer treatment – such as surgery or radiotherapy – or because cancer cells block the lymph system.

NICE estimates that each year in the UK, nearly 10,000 people with breast cancer will go on to develop lymphoedema in the arm, following treatment. Cancer Research UK estimates that 20% of people develop lymphoedema after breast cancer treatment that includes surgery to remove lymph nodes or radiotherapy to the lymph nodes in the armpit.

 

Why has NICE released new draft guidance on lymphoedema?

NICE has released new draft guidance to provide clearer advice to patients on exercise and breast cancer-related lymphoedema.

Previously, people with, or who were at risk of, breast cancer-related lymphoedema were advised to be cautious with the affected (or potentially affected) arm, and to avoid strenuous activities, such as demanding exercise or carrying heavy weights.

NICE carried out a review of the evidence on exercise and people who have or who are at risk of developing breast cancer-related lymphoedema. The review found a lack of evidence to suggest avoiding such exercise is beneficial, with the majority of studies being short-term and without sufficient follow-up or patient-focused outcomes, such as quality of life.

 

What are the draft recommendations?

The draft recommendations say that doctors and nurses should discuss with people who have, or who are at risk of, breast cancer-related lymphoedema that: 

  • current evidence indicates that exercise does not prevent, cause or worsen lymphoedema
  • exercise may improve their quality of life

 

Do the recommendations include specific types or frequency of exercise?

No. The NICE recommendations are based on studies that included resistance exercises, aerobic exercise, stretching, weightlifting and water-based exercise.

 

How was this reported in the media?

The BBC accurately covered the story, including helpful advice from a nurse from Breast Cancer Care.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Exercise advised for lymphoedema after breast cancer. BBC News, March 12 2014

Categories: Medical News

Obese 11-year-old girls 'get lower school grades'

Medical News - Wed, 03/12/2014 - 12:58

"Study links obesity in teenage girls with lower academic results," BBC News reports. Previous studies have reported that child and adolescent obesity has a wide variety of adverse consequences in both the short and long term.

A large study of UK secondary school pupils has now looked at whether being overweight or obese at the age of 11 influences educational attainment on SAT tests at 11 and 13 years of age and GCSE grades achieved at 16 years of age.

The researchers found an association between obesity at 11 years of age and poorer academic achievement in GCSE exams five years later in girls, even after adjustment for a wide range of factors that could have influenced the results (confounders).

They say that the difference in academic achievement was equivalent to one-third of a grade at age 16, which would be sufficient to lower average attainment to a grade D instead of a grade C. However, the association between obesity and academic attainment was less clear in boys.

The reasons for an association between obesity and academic achievement in girls are unknown, but causes may include the health effects of obesity that could lead to girls missing school. 

 

Where did the story come from?

The study was carried out by researchers from the universities of Dundee, Strathclyde, Bristol and Georgia. It was funded by the UK Medical Research Council, the Wellcome Trust, the University of Bristol and the BUPA Foundation.

The study was published in the peer-reviewed International Journal of Obesity. The article is open access, meaning it can be accessed for free from the journal's website.

The results of this study were well reported in the media.

 

What kind of research was this?

This was a cohort study that aimed to determine whether being obese at 11 years old was associated with poorer academic attainment at 11, 13 and 16 years old. It also aimed to look for factors that might explain the relationship seen.

Cohort studies are the ideal study design to address this sort of question, although they cannot show a cause and effect relationship as there could be other factors (confounders) that are responsible for any association seen.

 

What did the research involve?

The researchers looked at the association between weight status at 11 years old and academic attainment assessed by national tests at 11, 13 and 16 years of age in 5,966 children who were part of the Avon Longitudinal Study of Parents and Children.

Children were only included in the study if they did not have a psychiatric diagnosis or special educational needs.

The researchers measured the weight and height of children when they were 11 and 16 years old, and their body mass index (BMI) was also calculated.

Healthy weight was defined by BMI "z-scores". Z-scores show how different an individual's BMI is from the average in the UK population (more precisely, it shows the number of standard deviations different it is).
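As a simplified sketch (in practice, children's BMI z-scores are calculated against an age- and sex-specific reference population, and the exact reference used by this study is not described here), a z-score takes the form:

\[
z = \frac{\text{individual's BMI} - \text{average BMI of the reference population}}{\text{standard deviation of BMI in the reference population}}
\]

For example, a z-score of 2 means a BMI two standard deviations above the reference average.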

There are no agreed cut-offs in the UK, but the World Health Organization (WHO) defines being overweight as more than one standard deviation higher than average and obesity as more than two standard deviations. This study has used a lower level to define obesity. In this study:

  • normal weight was defined as a BMI z-score of less than 1.04
  • being overweight was defined as a BMI z-score of 1.04 to 1.63
  • being obese was defined as a BMI z-score of 1.64 or more

Academic attainment was assessed from performance in English, Maths and Science on Key Stage 2 tests at age 10/11, Key Stage 3 tests at 13/14 and GCSEs at age 15/16. The researchers concentrated their analyses on English attainment.

The researchers looked at the association between weight status and academic attainment. Boys and girls were analysed separately, as differences have been seen in academic attainment between the sexes.

The researchers also looked at whether the association seen between weight status and academic attainment could be explained by:

  • depressive symptoms at age 11
  • IQ at age 8
  • age of girls when their periods started

The researchers adjusted their analyses for a large number of potential confounders.

 

What were the basic results?

At 11 years of age, 71.4% of children were a healthy weight, 13.3% were overweight and 15.3% were obese.

After adjusting for all potential confounders, girls who were obese at the age of 11 had lower English marks at 13 and 16 years of age compared with girls who were a healthy weight. There was no significant difference in marks for overweight girls compared with healthy weight girls.

The association between obesity and English marks was less clear in boys, with a significant association only being seen between obesity at the age of 11 and lower English marks at 11.

Neither depressive symptoms nor IQ explained the relationship between weight status at 11 and academic attainment at 16 years old for boys or girls. The age at which girls had their first period also did not explain the relationship between weight status at 11 and academic attainment at 16 years old for girls.

The researchers also looked at changes in weight status. Changes in weight status had no effect on English attainment at GCSE in boys.

However, girls who were overweight or obese at age 11 and 16, or who changed from being overweight at 11 to obese at 16 had poorer English attainment at GCSE than girls who remained a healthy weight.

Girls who went from a healthy weight to being overweight, and girls who went from being overweight to a healthy weight, showed no difference in English attainment at GCSE compared with girls who remained a healthy weight.

 

How did the researchers interpret the results?

The researchers conclude that, "For girls, obesity in adolescence has a detrimental impact on academic attainment five years later". They say that, "Being obese at 11 predicted lower attainment by one-third of a grade at age 16. In the present sample, this would be sufficient to lower average attainment to a grade D instead of a grade C."

The researchers go on to say: "Mental health, IQ and age of menarche [period] did not mediate this relationship, suggesting that further work is required to understand the underlying mechanisms. Parents, education and public health policy makers should consider the wide-reaching detrimental impact of obesity on educational outcomes in this age group."

 

Conclusion

This large UK-based cohort study has looked at whether being overweight or obese at age 11 influences educational attainment on SAT tests at 11 and 13 years of age and GCSE grades achieved at 16 years of age.

It found an association between obesity at 11 years of age and poorer academic achievement in GCSE exams five years later in girls, even after adjustment for a wide range of confounders.

The researchers found that depression, IQ and the age girls started menstruating could not explain the association. The association between obesity and academic attainment was less clear in boys.

The reasons for an association between obesity and academic achievement in girls are unknown. The researchers suggest that obese girls may miss school because obesity affects physical and mental health. They also suggest that obese children's academic attainment may suffer because they tend to be stigmatised by other children or teachers, or because excess fat might affect brain function.

Strengths of this study include the large sample size, cohort design, adjustment of analyses for a wide range of confounders, and the fact that BMI and educational attainment were objectively assessed. However, it should be noted that the researchers defined childhood obesity at a lower BMI level than that used by the World Health Organization.

However, as the researchers noted, despite the adjustment for a range of confounders, cohort studies cannot prove a cause-and-effect relationship, as there might be other confounders that are responsible for the association seen. They also point out that many of the confounders were only measured at one point and they were unable to adjust for change, such as changes in depressive symptoms.

The researchers suggest that future studies should explore the influence of other potential factors that could explain the association seen, including self-esteem, absenteeism, school environment and the role of the teacher.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Study links obesity in teenage girls with lower academic results. BBC News, March 11 2014

Link between obesity & poor grades in adolescent girls. ITV News, March 11 2014

Girls who are obese at 11 'get lower GCSE results': Effect of weight can be difference between C and D grade. Daily Mail, March 11 2014

Links To Science

Booth JN, et al. Obesity impairs academic attainment in adolescence: findings from ALSPAC, a UK cohort (PDF, 983kb). International Journal of Obesity. Published online March 11 2014

Categories: Medical News

Low-level drinking in early pregnancy 'harms baby'

Medical News - Tue, 03/11/2014 - 14:57

"Drinking alcohol early in pregnancy, even in small amounts, increases chances of harming your baby," reports The Independent, one of several news outlets to report on the latest study on the risks of drinking during pregnancy.

The study of 1,303 pregnant women aged between 18 and 45 years old found that women who drank less than two units a week during the first 12 weeks of pregnancy were at increased risk of complications, including premature birth.

While the risk at the individual level is still low, researchers from the University of Leeds concluded that the first trimester was the most "vulnerable period" of a woman's pregnancy.

They also found a more general association between drinking alcohol throughout pregnancy and worse birth outcomes, such as lower birth weight or foetal growth restriction.

"Our results highlight the need for endorsing the abstinence-only message, and further illuminate how timing of exposure is important in the association of alcohol with birth outcomes," say the study's authors.

The study, published in the Journal of Epidemiology and Community Health, suggests that women who are planning to conceive and who are pregnant should abstain from alcohol.

However, for now it is unclear whether consideration of this study alongside other evidence will lead to a change to government recommendations on drinking during pregnancy.

There currently remains some uncertainty around whether there is a safe level of alcohol consumption during pregnancy, or whether it should be avoided altogether.

The Department of Health recommends that pregnant women and women trying to conceive should avoid alcohol completely, and, if they choose to drink, should never drink more than one to two units once or twice a week and never get drunk.

The most recent National Institute for Health and Care Excellence (NICE) guidance on Antenatal Care (2008) specifically advises that women trying to conceive, and those in the first three months of pregnancy, should avoid drinking alcohol, as this may be associated with an increased risk of miscarriage.

 

Where did the story come from?

The study was carried out by researchers from the University of Leeds and was funded by the Food Standards Agency. It was published in the peer-reviewed Journal of Epidemiology and Community Health.

The results of the research were well reported by BBC News and The Independent. The Daily Telegraph's headline about "middle-class mothers' alcohol risk to babies" was based on the finding that women reporting alcohol intake above two units per week were more likely to be older, have a university degree, be of European origin, and were less likely to live in a deprived area.

However, the researchers did not conclude that "middle-class mothers are at greater risk of having premature and smaller babies as they drink too much in pregnancy", as the paper reports.

 

What kind of research was this?

This was a prospective cohort study carried out in Leeds that aimed to determine the association between alcohol intake before and during pregnancy with birth weight and gestational age.

It also aimed to examine whether there was a different effect depending on the timing of exposure during pregnancy.

There currently remains some uncertainty around whether there is a safe level of alcohol consumption during pregnancy or whether it should be avoided altogether.

Cohort studies are the ideal study design to address this question. However, despite the fact that the researchers adjusted for a number of confounders, it is possible there are other factors that explain the association seen in the research.

 

What did the research involve?

The researchers studied 1,303 pregnant women aged between 18 and 45 years old.

The women were asked about their alcohol consumption using food frequency questionnaires. The women completed the questionnaire three times:

  • when they were enrolled into the study – they were asked about consumption from 4 weeks prior to pregnancy through to week 12 of pregnancy
  • at week 28 of pregnancy – they were asked about consumption between weeks 13 and 28 of pregnancy
  • after they had given birth – they were asked about consumption between weeks 29 and 40 of pregnancy

The reported frequency and type of alcohol drunk was used to calculate the units consumed per week.

Based on their responses, women were categorised as non-drinking, drinking two or fewer units per week, or drinking more than two units per week.

Birth outcomes were obtained from hospital maternal records. The primary outcome was birth weight, but the researchers also looked at birth centiles (birth size compared with other babies, adjusted for maternal height, weight, ethnicity, how many children she had, and the birth weight and gender of the baby), premature birth (before 37 weeks of gestation) and babies being small for gestational age (below the 10th growth centile).

The researchers then looked at the association between drinking status and birth outcomes. They adjusted for confounders that included the mother's salivary cotinine (a biomarker of smoking status), pre-pregnancy weight, height, age, ethnicity, how many children she had, caffeine intake, and education.

 

What were the basic results?

Approximately three-quarters of women before pregnancy and more than half in the first trimester reported alcohol intakes of more than two units per week.

Just over a quarter of women reported drinking more than two units per week in the second or third trimester.

Alcohol consumption four weeks before pregnancy

Drinking more than two units per week four weeks before pregnancy was associated with lower birth weight (105.7g lower) and a 7.7 decrease in birth centile compared with not drinking. Drinking two units or less per week was not associated with lower birth weight or birth centile compared with not drinking.

Alcohol consumption during the first trimester of pregnancy

Drinking more than two units per week during the first trimester of pregnancy was associated with an approximate 100g reduction in birth weight and an 8.2 decrease in birth centile compared with not drinking.

Drinking more than two units per week was associated with increased odds of having a baby small for its gestational age (odds ratio (OR) 2.0, 95% confidence interval (CI) 1.2 to 3.4) and a premature birth (OR 3.5, 95% CI 1.1 to 11.2).

Drinking two units or less per week during the first trimester of pregnancy was associated with an approximate 100g reduction in birth weight and a 5.8 decrease in birth centile compared with not drinking. Drinking two units or less per week was also associated with increased odds of having a preterm birth (OR 4.6, 95% CI 1.4 to 14.7).

Alcohol consumption during the second and third trimesters of pregnancy

Drinking more than two units per week during the second trimester of pregnancy was associated with an approximate 100g reduction in birth weight, and during the third trimester of pregnancy was associated with an approximate 50g reduction in birth weight compared with not drinking. 

 

How did the researchers interpret the results?

The researchers concluded that, "Maternal alcohol intake during the first trimester was found to have the strongest association with foetal growth and gestational age. Women who adhered to guidelines in this period were still at increased risk of adverse birth outcomes even after adjustment for known risk factors.

"Maternal alcohol intakes which exceeded the recommendations in the period leading up to pregnancy were also found to be associated with foetal growth, suggesting that the [period around conception] could be particularly sensitive to the effects of alcohol on the foetus.

"Our results highlight the need for endorsing the abstinence-only message, and further illuminate how timing of exposure is important in the association of alcohol with birth outcomes, with the first trimester being the most vulnerable period."

 

Conclusion

This cohort study suggests that women who are planning to conceive and who are pregnant should abstain from alcohol. Drinking more than two units of alcohol per week was associated with adverse birth outcomes in all trimesters.

In addition, the study found that women who reported limiting their drinking to two units or less per week during the first trimester of pregnancy were also at increased risk of adverse birth outcomes.

A strength of this study is that it assessed alcohol intake at three time points, covering different periods of pregnancy. Alcohol intake was assessed during pregnancy and not after birth, reducing the possibility of recall bias. In addition, the analysis adjusted for a number of confounders and used an objective measure of smoking.

However, the study was originally designed to assess the impact of caffeine intake on birth outcomes. Alcohol intake was self-reported, and there is the possibility of under-reporting. Few women had data on alcohol consumption in the third trimester (30% of the original cohort).

Also, despite the fact that the researchers adjusted for a number of confounders, it is possible there are other factors that explain the association seen.

This research provides further evidence for not drinking during pregnancy. However, for now it is unclear whether consideration of this study alongside other evidence will lead to a change in NICE or Department of Health recommendations around alcohol use during pregnancy.

In the UK, the Department of Health currently recommends that pregnant women and women trying to conceive should avoid alcohol altogether, and, if they choose to drink, should never drink more than one to two units once or twice a week, and never get drunk.

The most recent NICE guidance on Antenatal Care (2008) specifically advises that women trying to conceive, and those in the first three months of pregnancy, should avoid drinking alcohol, as this may be associated with an increased risk of miscarriage.

After this, if women choose to drink during pregnancy they should drink no more than one to two units once or twice a week. NICE says that, "Although there is uncertainty regarding a safe level of alcohol consumption in pregnancy, at this low level there is no evidence of harm to the unborn baby."

The organisation advises that getting drunk or binge drinking should be avoided throughout pregnancy, as this may be harmful to the unborn baby. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Light drinking 'is preterm risk'. BBC News, March 11 2014

Middle-class mothers' alcohol risk to babies. The Daily Telegraph, March 10 2014

 

Drinking alcohol early in pregnancy, even in small amounts, increases chances of harming your baby, study finds. The Independent, March 10 2014

Links To Science

Nykjaer C, Alwan NA, Greenwood DC. Maternal alcohol intake prior to and during pregnancy and risk of adverse birth outcomes: evidence from a British cohort. Journal of Epidemiology and Community Health. Published March 10 2014

Categories: Medical News

Children's diets contain 'too much salt'

Medical News - Tue, 03/11/2014 - 13:03

“Children eat too much salt,” reports The Daily Telegraph, one of several news outlets to report on new findings that have led researchers to call for further reductions in the salt content of processed food.

The news is based on the results of a study assessing the salt intake of 340 children in south London by keeping a record of everything they ate and drank over 24 hours and measuring the salt content in their urine.

It found that on average, children aged from five to six ate 3.75g salt/day, children aged from eight to nine ate 4.72g/day and 13 to 17 year olds ate 7.55g/day.

Bread, breakfast cereals and other cereal-based products were the biggest source of dietary salt (36%), followed by meat products (19%) and milk and milk products (11%).

Compared with the Government’s recommendations, the five and six-year-olds exceeded their 3g maximum, the eight and nine-year-olds stayed within their 5g limit, and the 13 to 17-year-olds exceeded their 6g limit.

This study provides a single snapshot of the salt levels of a relatively small sample of children from south London only. It doesn’t tell us whether the same results would consistently be obtained if salt levels were measured on a number of different occasions in these children, or in a sample of children from another area. It also cannot tell us whether there are any detrimental health effects of this level of salt consumption.

Where did the story come from?

The study was carried out by researchers from St George’s University of London and Queen Mary University of London and was funded by the British Heart Foundation. It was published in the peer-reviewed medical journal Hypertension.

What kind of research was this?

This was a cross-sectional study assessing dietary salt intake by measuring levels of salt in urine among five to six year olds, eight to nine year olds and 13 to 17 year olds in south London, and identifying the main sources of salt in children’s diets.

Cross-sectional studies are the ideal study design to assess current salt intake; however, they only provide a single snapshot. They don’t tell us whether the same results would consistently be obtained if salt levels were measured on a number of different occasions. They also cannot tell us whether there are any detrimental health effects of this salt consumption.

What did the research involve?

Dietary salt intake was assessed in 340 children by analysing the level of sodium in urine collected over 24 hours.

The Scientific Advisory Committee on Nutrition (SACN) recommends a maximum intake of salt of 6g a day for adults.

Although there are existing recommendations for salt intake for children, researchers devised their own daily upper limits based on body size:

  • 2g for three to four year old children
  • 3g for five to eight year old children
  • 4g for nine to 11 year old children
  • 5g for 12-15 year old children
  • 6g for young adults aged 16 years and over

The children’s dietary salt intake was compared to the study’s own maximum salt intake recommendations.

To identify the sources of salt in their diet, children were also provided with a digital camera and asked to photograph everything they ate over 24 hours. The parent or child was also asked to keep a detailed dietary record of the food and drinks consumed. Details of recipes used were requested, as well as brand names of products, cooking methods, and portion sizes using household measures.

What were the basic results?

The average salt intakes were:

  • 3.75g/day for five to six year olds
  • 4.72g/day for eight to nine year olds
  • 7.55g/day for 13 to 17 year olds.

Compared to the study’s own maximum daily intake recommendations:

  • 66% of five to six year old children ate too much salt
  • 73% of eight to nine year olds ate too much salt
  • 73% of 13 to 17 year olds ate too much salt

The main sources of dietary salt were cereal-based products, such as bread, breakfast cereals, biscuits, cakes, pasta and rice (36%); meat products, including chicken, turkey, sausages and ham (19%); and milk and milk products, such as cheese (11%).

How did the researchers interpret the results?

The researchers concluded “this study demonstrates that salt intake in children in south London is high, with most of the salt coming from processed foods. Much further effort is required to reduce the salt content of manufactured foods.”

Conclusion

This study has found that salt intake in 340 children in south London is high. The researchers have proposed new maximum recommendations for salt intake based on recommendations for adults adjusted for differences in body size. Based on the study's own recommendations on salt intake, around two-thirds of five to six year olds, and three-quarters of those aged eight to nine and 13 to 17 had too much salt in their diet.

The scope of the study has some limitations. It provides a single snapshot of the salt levels of a relatively small sample of children from south London. It doesn’t tell us whether the same results would consistently be obtained if salt levels were measured on a number of different occasions in these children, or in a sample of children from another area. It also cannot tell us whether there are any detrimental health effects of this level of salt consumption.

 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Children's diets 'far too salty'. BBC News, March 11 2014 

Children eat too much salt, researchers find. The Daily Telegraph, March 11 2014

Three in four children are at risk from eating too much salt because of high levels in bread and breakfast cereals. Mail Online, March 10 2014

Links To Science

Marrero NM, Feng JH and MacGregor GA. Salt Intake of Children and Adolescents in South London. Hypertension. Published March 10 2014

Categories: Medical News

Are e-cigarettes a 'gateway to nicotine addiction'?

Medical News - Mon, 03/10/2014 - 13:02

"E-cigarettes are encouraging a new generation to become hooked on nicotine," reports the Mail Online.

E-cigarettes are devices that deliver a heated aerosol ("vapour") of nicotine in a way that mimics conventional cigarettes. But they have lower levels of toxins such as tar than a conventional tobacco cigarette. They are marketed as a safer alternative to regular smoking, or as a way to quit.

Today's headlines followed a survey of thousands of US teenagers (who were under 15 on average, meaning that those who smoked cigarettes were underage).

It found that those who had tried e-cigarettes were more likely to have smoked conventional cigarettes and less likely to be abstaining from conventional smoking than those who hadn't tried e-cigarettes.

However, it also found those who tried e-cigarettes were more likely to want to quit conventional smoking.

On average tobacco smokers die significantly younger and spend more of their shorter lives ill. Because e-cigarettes can be marketed to young people, there is a worry that if they did lead to more conventional smoking, they could have a potentially disastrous impact on public health.

This current study does suggest that e-cigarettes may not be the harmless alternative some believe, and may be acting as a "gateway drug" to conventional smoking.

However, it does not prove that is the case. It is quite plausible that existing teenage smokers are also trying e-cigarettes for a variety of reasons.

The debate about the safety and regulation of e-cigarettes is likely to continue until more robust long-term evidence emerges.

 

Where did the story come from?

The study was carried out by researchers from the Center for Tobacco Research and Education at the University of California, San Francisco, and was funded by the US National Cancer Institute.

It was published in the peer-reviewed medical journal, JAMA Pediatrics.

The Mail Online coverage was balanced and discussed the pros and cons of e-cigarettes. It also usefully brought in some wider research from 75,000 Korean adolescents "which also found that adolescents who used e-cigarettes were less likely to have stopped smoking conventional cigarettes".

 

What kind of research was this?

This was a cross-sectional study looking at whether e-cigarette use was linked to conventional cigarette smoking behaviour among US adolescents.

E-cigarettes are devices that deliver a heated aerosol of nicotine in a way that mimics conventional cigarettes while delivering lower levels of toxins, such as tar, than a conventional combusted cigarette. They are often marketed as a safer alternative to regular smoking, or as a way of helping people quit traditional smoking.

The devices are not currently regulated in the US or the UK, meaning there are limited or vague rules concerning appropriate advertising. The researchers say e-cigarettes are being aggressively marketed using the same messages and media channels that cigarette companies used to market conventional cigarettes in the 1950s and 1960s. These include targeting young people to get a new generation of smokers hooked on nicotine for life.

The researchers outline how studies have demonstrated that youth exposure to cigarette advertising causes youth smoking. Meanwhile, electronic cigarettes can be sold in flavours such as strawberry, liquorice or chocolate, which are banned in cigarettes in the US because they appeal to youths.

Given the potential for a new generation to be hooked on nicotine and then tobacco smoking in this unregulated environment, the researchers wanted to investigate whether e-cigarettes were associated with regular smoking behaviour in adolescents.

 

What did the research involve?

The researchers used existing smoking data collected from US middle and high school students in 2011 (17,353 students) and 2012 (22,529) during the large US National Youth Tobacco Survey. They analysed whether use of e-cigarettes was linked with conventional tobacco smoking and smoking abstinence behaviour.

The National Youth Tobacco Survey was described as an anonymous, self-administered, 81-item, pencil-and-paper questionnaire that included:

  • indicators of tobacco use (cigarettes, cigars, smokeless tobacco, kreteks [southeast Asian clove cigarettes], pipes, and "emerging" tobacco products)
  • tobacco-related beliefs
  • attitudes about tobacco products
  • smoking cessation
  • exposure to secondhand smoke
  • ability to purchase tobacco products
  • exposure to pro-tobacco and anti-tobacco influences

Smoking behaviour was categorised as:

  • conventional cigarette experimenters – adolescents who responded "yes" to the question "Have you ever tried cigarette smoking, even one or two puffs?"
  • ever-smokers of conventional cigarettes – those who replied "100 or more cigarettes (five or more packs)" to the question "About how many cigarettes have you smoked in your entire life?"
  • current smokers of conventional cigarettes – those who had smoked at least 100 cigarettes and smoked in the past 30 days
  • ever e-cigarette users – adolescents who responded "electronic cigarettes or e-cigarettes, such as Ruyan or NJOY" to the question "Which of the following tobacco products have you ever tried, even just one time?"
  • current e-cigarette users – those who responded "e-cigarettes" to the question "During the past 30 days, which of the following tobacco products did you use on at least one day?"

Data on intention to quit smoking in the next year, previous quit attempts and abstinence from conventional cigarettes was also collected. The analysis was adjusted for potential confounding factors such as race, gender and age.

 

What were the basic results?

The main analysis included 92.0% of respondents (17,353 of 18,866) in 2011 and 91.4% of respondents (22,529 of 24,658) in 2012 who had complete data on conventional cigarette use, e-cigarette use, race, gender and age. The mean age was 14.7, and 5.6% of respondents reported ever or current conventional cigarette smoking (of these, 5% currently smoked).

In 2011, 3.1% of the study sample had tried e-cigarettes (1.7% dual ever use, 1.5% only e-cigarettes) and 1.1% were current e-cigarette users (0.5% dual use, 0.6% only e-cigarettes).

In 2012, 6.5% of the sample had tried e-cigarettes (2.6% dual use, 4.1% only e-cigarettes) and 2.0% were current e-cigarette users (1.0% dual use, 1.1% only e-cigarettes).

Ever e-cigarette users were significantly more likely to be male, white and older. The rates of having ever tried e-cigarettes and of current e-cigarette use approximately doubled between 2011 and 2012.

The main analysis found use of e-cigarettes was significantly associated with:

  • higher odds of ever or current cigarette smoking
  • higher odds of established smoking
  • higher odds of planning to quit smoking among current smokers
  • among e-cigarette experimenters, lower odds of abstinence from conventional cigarettes

 

How did the researchers interpret the results?

The researchers' interpretation was clear: "Use of e-cigarettes does not discourage, and may encourage, conventional cigarette use among US adolescents."

They added that, "In combination with the observations that e-cigarette users are heavier smokers and less likely to have stopped smoking cigarettes, these results suggest that e-cigarette use is aggravating rather than ameliorating the tobacco epidemic among youths. These results call into question claims that e-cigarettes are effective as smoking cessation aids."

 

Conclusion

This study found US adolescents who use e-cigarettes are more likely to smoke conventional cigarettes. They also have lower odds of abstaining from conventional cigarettes than those who don't try e-cigarettes. On the flip side, e-cigarette users were more likely to report planning to quit conventional smoking.

The research sample was large, so is likely to provide a relatively accurate picture of the smoking behaviour of US adolescents.

These results suggest that e-cigarettes may not discourage conventional cigarette smoking in US adolescents, and may encourage it. However, because of the cross-sectional nature of the information, it cannot prove that trying e-cigarettes causes adolescents to take up conventional smoking. There may be other factors at play.

And indeed, smoking tobacco cigarettes may lead teenagers to try e-cigarettes, rather than the other way around. For example, in the past, the type of person who wanted to try smoking could only try conventional cigarettes; nowadays, e-cigarettes are an option too.

Retrospectively trying to work out if they would have taken up conventional smoking had they not tried e-cigarettes first is not possible. This question would require a cohort study that tracks behaviour over time. You would then be able to see which smoking method they took up first and if one led to the other. This was not possible using the data the researchers had to hand in the current study.

Conventional smoking has been a public health priority for many decades because, on average, smokers die significantly younger (more than a decade in some groups) and they spend more of their shorter lives ill. Consequently, any product that may increase the rates of conventional smoking among the young – such as e-cigarettes – has serious and widespread health consequences.

Currently, regulation around e-cigarettes is minimal, but there are plans to introduce stricter rules in the UK. In the meantime, this study provides some evidence that e-cigarettes may not be the harmless, safe alternative some believe, and may be acting as a gateway drug to conventional smoking.

The research stops short of proving this, so the debate on whether e-cigarettes should be treated similarly to conventional cigarettes, through advertising and sales restrictions, is likely to continue.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

E-cigarettes are 'encouraging a new generation to become hooked on nicotine'. Daily Mail, March 7 2014

Links To Science

Dutra LM and Glantz SA. Electronic Cigarettes and Conventional Cigarette Use Among US Adolescents. JAMA Pediatrics. Published March 6 2014

Categories: Medical News

Claims new blood test can detect Alzheimer's disease

Medical News - Mon, 03/10/2014 - 12:50

“Blood test that can predict Alzheimer's,” was the headline used by BBC News, the Daily Mail and The Guardian today. Similar coverage was seen across many of the front pages of other newspapers.

These headlines reflected new research showing how a simple blood test may be able to detect early signs of cognitive decline and mild Alzheimer’s disease.

US researchers discovered a panel of 10 biomarkers that, with 90% accuracy, could distinguish people who would progress to have either mild cognitive impairment or mild Alzheimer’s disease within two to three years, from those who wouldn’t.

While promising, the results were only based on a small group of adults over 70 years old who were studied over five years. Of those who developed mild cognitive impairment or mild Alzheimer’s disease, only 28 people had the test. Consequently, it is not clear if the test has any predictive power in the wider population, is applicable to younger adults, or can predict the disease more than two to three years in advance.

The Daily Mail outlined how, while the research was a breakthrough, experts had warned it would bring “ethical concerns”. This is an important point, because there is currently no cure for Alzheimer’s disease, so some people may prefer not to know they might get it. The current unrefined test also means that at least one in 10 would be wrongly told they will go on to develop the condition; given the severity of the disease, this may cause significant needless worry.

 

Where did the story come from?

The study was carried out by researchers from a range of US universities and medical institutions and was funded by the US National Institutes of Health.

The study was published in the peer-reviewed medical journal, Nature Medicine.

The media reporting was generally balanced, with many highlighting the clear ethical question of whether there is any benefit in telling people they are likely to develop a serious condition that currently has no cure. Most media sources correctly acknowledged the need for more research to confirm the usefulness of the test, and that a useable test may be many years away.

However, while this research is exciting, it is still in an early stage and so front page coverage in four national newspapers is perhaps a little over-the-top.

 

What kind of research was this?

This was a cohort study looking to see if a blood test could detect Alzheimer’s disease before symptoms developed.

Alzheimer’s disease causes a progressive dementia. It affects more than 35 million individuals worldwide and is expected to affect 115 million by 2050.

There are currently no cures for the disease and no treatments to improve symptoms to any significant degree. This is partly because, at the moment, it is only possible to diagnose Alzheimer’s when symptoms such as memory loss show up. Unfortunately, this is usually long after the brain has deteriorated at a cellular level, meaning the disease is well underway by the time it is diagnosed.

Current tests for detecting early disease involve invasive medical procedures, which are also time-consuming and often expensive. Discovering new tests and treatments targeting early stages of Alzheimer’s, before any outwardly obvious symptoms occur (known as pre-clinical disease), is a hot topic for research. Theoretically, detecting the disease early on will enable more options to be used to stop or slow down the progress of the disease.

 

What did the research involve?

The researchers recruited a group of people aged 70 or over, analysed their blood and recorded their cognitive abilities over the next five years for signs of decline. They examined participants’ blood samples to see if anything in the blood could be used to predict who among the cognitively normal group would develop mental impairment and who would not.

The researchers enrolled 525 people over the five years and subjected them to a number of questionnaires to assess their mental health, including memory, verbal reasoning, attention and functional capacities. Based on this, they were divided into two groups:

  • a healthy control group showing “normal” cognitive abilities
  • a group with memory problems at the start of the study, defined as amnestic mild cognitive impairment (aMCI) or mild Alzheimer’s disease (AD)

The control group was selected to match the memory-impaired group on the basis of age, sex and education.

The analysis investigated how people’s mental health scores changed each year over the five-year follow-up period. Specifically, the researchers wanted to know how many healthy controls went on to develop aMCI or mild Alzheimer’s disease. The main analysis looked for differences in the blood samples of people who went on to develop aMCI or AD and those who did not.

 

What were the basic results?

The researchers analysed 126 blood samples, including samples from 18 people who had developed aMCI or mild Alzheimer’s disease during the study period. The blood tests pointed towards a way of distinguishing between those who would develop cognitive impairment and those who would not.

After further investigation, the researchers found a set of 10 lipids (fats) in the blood could predict the conversion of people with normal cognitive abilities to either amnestic mild cognitive impairment or Alzheimer’s disease within a two to three year timeframe with more than 90% accuracy.

Once they had the panel of 10 fats that predicted disease development, they tested it on a further group of 41 participants to validate their results. This included 10 people who developed aMCI or mild Alzheimer’s disease during the study period. Similar results were found, confirming the initial findings.

The sensitivity and specificity of the test in the validation experiments were both 90%.

 

How did the researchers interpret the results?

Based on the biochemical tests, the researchers believed the panel of 10 blood fats detected may reflect a worsening of cell membrane integrity contributing to the disease. They concluded the panel of 10 lipids could act as a test that may provide an indication of early deterioration of brain function in the pre-clinical stage of Alzheimer’s disease (when the person does not yet have symptoms).

The researchers said they had found and validated a way of assessing blood samples that distinguish cognitively normal participants who will progress to have either aMCI or AD within two to three years from those who won’t. They said that their defined panel of markers featured biochemicals that have essential structural and functional roles in the integrity and functionality of cell membranes.

 

Conclusion

This small cohort study has presented a collection of 10 biomarkers that predicted, with 90% accuracy, which cognitively normal participants would progress to have either aMCI or mild Alzheimer’s disease within two to three years and which would not; only 28 of the participants tested had actually progressed.

This represents a proof-of-concept that an easily administered blood test may provide a way of detecting Alzheimer’s disease at a pre-clinical stage.

The main limitation to bear in mind when interpreting this study is the relatively old age of the group studied (over 70) and the short predictive range investigated. This means the test was only able to detect who would develop cognitive decline in the next two to three years. For that reason, the study does not provide any information on whether the test can predict the disease any earlier, for example by testing the blood of people in their 50s. This will inevitably be the subject of further study.

The Daily Mail outlines how “experts called the breakthrough a real step forward, but warn it will bring with it ethical concerns”. This is an important point to consider because there is currently no cure for Alzheimer’s disease.

As The Independent put it, “Would anyone welcome being told that they are going to develop – and very likely die from – an incurable disorder that will eventually rob them of their memories, emotions and personality over a period of many years?”

The reaction to the news will certainly be different for different individuals, but could be emotionally and psychologically damaging for some.

Along a similar line, the current test was 90% accurate. This means that roughly one in 10 people who would not go on to develop the condition could be wrongly told that they will, causing needless worry.
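
As a hypothetical illustration of that point (a minimal sketch, assuming the reported 90% figure reflects the test's specificity, and taking a nominal group of 1,000 people who would not in fact go on to develop the condition – neither number comes from the study itself):

  # Illustrative only: how many unaffected people a test with 90% specificity
  # would wrongly flag. The 1,000-person group and the reading of "90% accurate"
  # as 90% specificity are assumptions for this sketch, not figures from the study.

  specificity = 0.90
  people_who_would_not_develop_it = 1000

  false_positives = round(people_who_would_not_develop_it * (1 - specificity))
  print(f"{false_positives} of {people_who_would_not_develop_it} would be wrongly told")
  # prints: 100 of 1000 - in other words, roughly one in 10

This is only a sketch of the arithmetic; the proportion of people wrongly alarmed (or wrongly reassured) in practice would also depend on how common the condition is in the group being tested.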

The researchers rightly point out that the test “requires external validation using similar rigorous clinical classification before further development for clinical use. Such additional validation should be considered in a more diverse demographic group than our initial cohort”.

Ultimately, this research provides proof-of-concept that a blood test may predict early stage Alzheimer’s disease, but it’s too early to say whether this test in particular is definitely effective, or could soon be used in mainstream clinical practice. Time, and more research, will tell. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

New blood test to detect who is on the brink of dementia. The Times, March 10 2014

Blood test that predicts Alzheimer's. The Independent, March 10 2014

Breakthrough blood test that can predict the onset of Alzheimer's. Daily Express, March 10 2014

Blood test that can predict Alzheimer's: Elderly could be given early warning. Daily Mail, March 10 2014

Blood test can predict Alzheimer's, say researchers. BBC News, March 10 2014

Blood test to spot Alzheimer's Disease developed - and researchers say it's at least 90% accurate. Daily Mirror, March 10 2014

Links To Science

Mapstone M, et al. Plasma phospholipids identify antecedent memory impairment in older adults. Nature Medicine. Published online March 9 2014

Categories: Medical News

Antibiotic resistance 'toolkit' launched

Medical News - Fri, 03/07/2014 - 17:00

"Antibiotics: 'national threat' from steep rise in patients who are resistant to drugs,” The Daily Telegraph reports. The Mail Online reports that there were, “600 reported cases of drugs failing because of resistant bacteria last year”.

What isn’t made very clear is that these 600 cases were of one very specific form of antibiotic-resistant bacteria called carbapenemase-producing Enterobacteriaceae (CPE).

Enterobacteriaceae is a large group of types of bacteria. The group includes harmless bacteria that live in the gut, as well as bacteria such as E. coli and Salmonella that can cause food poisoning. Enterobacteriaceae can also cause infection if they enter the wrong part of the body, such as the bloodstream.

Some of these bacteria have developed resistance against a group of strong antibiotics called carbapenems, which are normally used to treat the most serious infections.

The resistant CPE bacteria produce an enzyme (carbapenemase) that breaks down the antibiotic and makes it ineffective.

This is potentially serious as carbapenems are essentially a weapon of last resort in our “antibiotic armoury”. If carbapenem resistance became widespread then the consequences for public health could be akin to a return to the pre-antibiotic era.

To address the concern, Public Health England has released a “toolkit” – a series of recommendations to help health staff limit the spread of CPE in hospitals.

 

What is antibiotic resistance?

Antibiotics are drugs used to treat infections caused by bacteria. Sometimes bacteria develop the ability to survive antibiotic treatment; this is called antibiotic resistance.

When a strain of bacteria becomes resistant to an antibiotic, that antibiotic will no longer be effective for treating the infections the bacteria cause. Antibiotic resistance is one of the most significant threats to patient safety in Europe. Read more about antibiotic resistance.

 

What are CPE bacteria?

Carbapenemase-producing Enterobacteriaceae are bacteria that live in the gut and are normally harmless. However, if they get into other parts of the body, such as the bladder or bloodstream, they can cause an infection.

Carbapenems are strong antibiotics similar to penicillin. They are used by doctors as a ‘last resort’ to treat some infections when other antibiotics have failed. Some Enterobacteriaceae make enzymes called carbapenemases that allow them to break down these antibiotics, and this makes them resistant. Only a few strains of Enterobacteriaceae produce carbapenemases currently, but this number is growing.

 

What does the CPE public health toolkit aim to do?

Public Health England’s advice for healthcare professionals in England focuses on early detection of CPE, as well as advice on how to manage or treat CPE, and control their spread in hospitals and residential care homes.

Public Health England has produced information leaflets for healthcare professionals to give to people who have been identified as being carriers or infected with CPE, or who are in contact with people who are infected.

 

Why do we need help managing these antibiotic-resistant bacteria?

There are currently only a few strains of carbapenemase-producing Enterobacteriaceae (CPE), but this number is growing. In 2006, there were five patients reported to Public Health England as having CPE, but by 2013 this number had risen to more than 600. These numbers include people who were just carriers of CPE, as well as those with infections.

Public Health England wants to act quickly to minimise the spread of CPE, as rapid spread could mean doctors are less able to rely on carbapenem antibiotics. This could pose an increasing threat to public health.

 

What information does Public Health England offer patients?

Public Health England’s CPE toolkit explains what CPE are, and the importance of carbapenem resistance. It explains that:

  • there is an increased chance of picking up CPE if you have been a patient in a hospital abroad or in a UK hospital that has had patients carrying the bacteria, or if you have been in contact with a CPE carrier elsewhere
  • if a doctor or nurse suspects that you are a CPE carrier, they will arrange for screening to see if you are a carrier
  • screening usually involves taking a rectal swab or giving a sample of faeces
  • while a patient is waiting for the screening result, and if they are found to have CPE, they will be kept in a single room with its own toilet facilities. This is to limit the potential for spread of CPE to other people through contaminated faeces
  • the most important way for a patient and visitors to reduce spread of CPE is to regularly wash hands well with soap and water, especially after going to the toilet
  • patients who have a CPE infection need to be treated with antibiotics, but those who just carry CPE in their gut do not need antibiotics

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Antibiotics: 'national threat' from steep rise in patients who are resistant to drugs. The Daily Telegraph, March 6 2014

Fears that cases of antibiotic-resistant bacteria could get 'out of control'. The Independent, March 6 2014

Antibiotic resistance soars: Cases of gut bacteria not destroyed by drugs increase by 12,000% in seven years. Mail Online, March 6 2014

Categories: Medical News

'Peeing' in pool may create harmful byproducts

Medical News - Fri, 03/07/2014 - 12:58

“Peeing in the pool could be bad for your health,” the Mail Online reports. As well as being unpleasant and socially unacceptable, new research suggests that a chemical in wee can react with chlorinated swimming pool water, creating potentially harmful byproducts.

The study in question used lab tests to study the reaction between a chemical found in urine (uric acid) and the chlorine in swimming pools. Researchers found that the combination of these substances can form some potentially harmful chemicals, known as nitrogen-containing disinfection byproducts (N-DBPs).

N-DBPs found at low levels in swimming pools have been linked to eye and throat irritation. At high levels, they can adversely affect the nervous and cardiovascular systems.

These byproducts were already known to be in chlorinated pools and to be formed from the reaction between chlorine and organic chemicals, such as those found in body fluids. This latest study confirms that uric acid is one of the potential sources of these chemicals.

The Mail’s coverage of this study is, primarily, an excuse to run an amusing story about weeing in pools, rather than to report on new research. It shouldn't need a study to tell us that weeing in a pool is not the most hygienic or polite of habits.

Swimming in a pool, with lifeguards to protect you, is a great form of exercise. If you choose to swim in open water, find out how to stay safe when swimming in the great outdoors.

 

Where did the story come from?

The study was carried out by researchers from China Agricultural University in Beijing and Purdue University in the USA. It was funded by the Chinese Universities Scientific Fund, the National Natural Science Foundation of China and the National Swimming Pool Foundation in the USA. 

The study was published in the peer-reviewed journal Environmental Science and Technology.

The Mail Online reports the study fairly, quoting a lot of information directly from the scientific paper itself. We suspect that a Chinese study published in a relatively obscure environmental health journal would not have garnered such coverage if it didn’t cover such a topic as public urination.

 

What kind of research was this?

Chlorine is used to disinfect pools, but it can react with other chemicals in the water – such as human bodily fluids – to produce potentially harmful chemicals. This was a laboratory study looking at the chemical reactions that occur as a result of the mixing of chlorine in pools and a chemical called uric acid, which is found mainly in urine, but also in sweat. 

Previous studies have found that, on average, swimmers release between 0.2 and 1.8 litres of sweat (up to more than 3 pints) and between 25 and 117 millilitres of urine per swim (up to about half a cup of urine).

This study tells us about the chemical reactions that may occur in pools, but didn't look into the health effects of these. The researchers note in their introduction that nitrogen-containing disinfection byproducts (the substance produced by the reaction) “tend to be more genotoxic, cytotoxic and carcinogenic”.

 

What did the research involve?

In a lab, the researchers mixed chlorinated water with uric acid – or mixtures of chemicals designed to replicate human bodily fluids – under different conditions. They then monitored these to see if certain potentially harmful chemicals, called volatile nitrogen-containing disinfection by-products (N-DBPs), were formed, and how much of them there were. The word “volatile” means these chemicals easily form gases and can therefore be breathed in.

The researchers also collected water samples from swimming pools in China and analysed them in the lab. In some experiments, extra chlorine or uric acid was added to the pool water to see what chemicals were produced.

The two N-DBPs the researchers looked at (cyanogen chloride and trichloramine) are known to be formed at low levels as a byproduct of chlorination in pools. These chemicals are irritants and potentially harmful to the lungs, heart and the central nervous system above certain levels of exposure. It was already known that these chemicals can form as a result of the reaction between chlorine and amino acids (building blocks of protein that are also found in bodily fluids). However, it was not previously known whether chlorine has a similar effect when mixed with uric acid.

 

What were the basic results?

The researchers found that the reaction between the chlorinated water and uric acid in the lab produced both cyanogen chloride and trichloramine.

Swimming pool water analysis showed both cyanogen chloride and trichloramine in all samples. Adding extra uric acid to the swimming pool water led to more cyanogen chloride forming, but the effects on trichloramine levels were less consistent.

Experiments with solutions mimicking body fluids suggested that chlorination of uric acid may account for a considerable proportion of the cyanogen chloride formed in pools, but less of the trichloramine.

 

How did the researchers interpret the results?

The researchers concluded that as most uric acid is introduced into pools by urination, reducing this habit could lead to benefits in both pool and air chemistry.

 

Conclusion

This study suggests that certain, potentially harmful, byproducts of pool water chlorination result, in part, from a reaction to the uric acid found in urine.

The media coverage of this study was likely driven more by the excuse to run an amusing story about weeing in pools than by the study itself. The byproducts in question were already known to exist in pools, and to be formed from the reaction between chlorine and organic chemicals, such as those found in body fluids. The current study confirms that uric acid is one of the potential sources of these chemicals.

The only swimming pool water tested in this study was from China, and the exact types of disinfectant chemical used, levels of chlorine and extent of weeing in the pool may differ in pools from different countries.

At best, the practice of weeing in a pool is socially unacceptable; at worst, it may be a potential health hazard.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

Peeing in the pool could be bad for your health: Researchers warn unhygenic habit could trigger chemical reactions that cause respiratory problems. Mail Online, March 7 2014

Links To Science

Lian L, E Yue, Li J, Blatchley ER. Volatile disinfection byproducts resulting from chlorination of uric acid: Implications for swimming pools. Environmental Science and Technology. Published online February 25 2014

Categories: Medical News

HIV 'gene hack' offers new treatment hope

Medical News - Thu, 03/06/2014 - 14:48

“HIV gene therapy using GM cells hailed a success after trial,” reports The Guardian, while the BBC tells us that an “immune upgrade” could offer “HIV shielding”.

These headlines come following a small trial that examined whether it was safe to inject genetically modified white blood cells into people with HIV. This was achieved, but the study did not show whether HIV could actually be treated.

This was the first human trial of the technique and involved 12 people who already had HIV. They were all taking antiretroviral (anti-HIV) medication and had undetectable levels of the HIV virus in their blood. A type of white cell in their blood was genetically modified and then multiplied in the lab.

This genetic modification was done to imitate a rare, naturally occurring mutation which, when two copies are present, makes people highly resistant to HIV infection.

Researchers injected the modified blood cells into each of the 12 people with HIV. They did this to test the safety of the treatment. There was only one serious transfusion reaction, although many of the participants experienced milder reactions, including fever, chills and bone pain.

The researchers also looked at the effectiveness of the genetically modified cells by asking six of the participants to stop their antiretroviral medication for 12 weeks, starting four weeks after the infusion. The researchers then looked at what happened to these participants while they were not taking their HIV medication, and what happened when they restarted it. The effects varied between the six individuals.

This study provides some hope that genetically “edited” immune cells could be used to treat people with HIV, but it’s too soon to draw any strong conclusions as to whether it will be an effective treatment.

 

Where did the story come from?

The study was carried out by researchers from: the University of Pennsylvania; the Albert Einstein College of Medicine, Bronx; and Sangamo BioSciences, Richmond, California. It was funded by the National Institute of Allergy and Infectious Diseases; the Penn Center for AIDS research; and Sangamo BioSciences.

The study was published in peer-reviewed medical journal the New England Journal of Medicine.

The media reported the trial responsibly; however, there were a couple of inaccuracies.

The reduction in HIV viral level occurred after the levels had shot up when six participants stopped taking their antiretroviral medications. The HIV viral levels reached a peak six to eight weeks after the treatment was stopped, and then only gradually declined in the three participants who neither restarted medication immediately nor already had the genetic mutation on one strand of their own DNA. This was not due to replication of the injected genetically modified T helper cells, as their numbers were steadily reducing.

 

What kind of research was this?

This was a phase one trial of a new potential treatment for HIV. It was non-randomised (the participants were specifically selected), and the participants and doctors were aware they were having the treatment. There was a selected group of people who were not given the treatment and acted as controls, but these people were not reported on in the journal article.

Phase one trials are the first ones carried out for a new treatment in humans. They are usually very small and are performed to test treatment safety. If successful, larger phase two trials and phase three trials are carried out to look further at safety and start to examine the effectiveness.

 

What did the research involve?

12 people with HIV infection were given genetically modified CD4 T cells. These are a type of white blood cell and are often called "T helper cells", as they send messages to other immune cells. The aim of the study was to assess the safety and side effects of the potential treatment, with a secondary aim of assessing the effect on the immune system and HIV resistance.

The genetic modification was performed to mimic a naturally occurring DNA mutation that some people have, which is thought to affect around 1% of the population. This mutation, when present on both copies of a section of DNA, has been found to make these people resistant to the most common strains of HIV. In people with HIV who have this mutation on one of the strands of DNA, the progression of their disease to AIDS is slower. There has also been one case of a person who had a stem cell transplant from a donor who had the mutation on both copies, and the HIV virus has been undetectable in them for more than four years without any antiviral therapy (the standard HIV treatment).

Following this discovery, research in mice using genetically modified T helper cells showed that the modified cells functioned normally and were able to divide and multiply in response to the usual stimuli. The cells were also protected from HIV infection and reduced HIV RNA levels in the blood.

The main aim of this study was to assess the safety of the potential treatment in humans. The secondary aim was to assess the immune system and whether there was any HIV resistance.

12 people with HIV entered the study between May 2009 and July 2012. The inclusion criteria were that they were taking antiretroviral medication and were “aviraemic” (meaning that the level of HIV RNA was undetectable in their blood). The participants were split into two groups of six.

The participants gave a blood sample. From this, the T helper cells were genetically modified and multiplied. The cells were then injected back into their veins as an infusion. The infusion contained about 10 billion T helper cells, 11-28% of which had been genetically modified.

The participants were closely monitored for the first four weeks. The first group of six then stopped their antiretroviral treatment for 12 weeks. All participants were monitored for 36 weeks, and they are now in a 10-year follow-up study.

 

What were the basic results?

In terms of the primary aim of safety:

  • One participant suffered from a serious reaction. They had fever, chills, joint pain and back pain within 24 hours of the infusion, which was diagnosed as a transfusion reaction.
  • There were 82 mild and 48 moderate adverse events reported, but the researchers report that 71 of them were not related to the study drug.
  • The most common adverse event was a milder version of the transfusion reaction.
  • Garlic-like body odour was common and is due to the metabolism of a drug used in the process.

For the secondary aim of immunity to HIV:

  • In all 12 participants, the number of T helper cells was significantly higher one week after the infusion (from 448 cells per cubic millimetre to 1,517), and 13.9% of them were genetically modified. It took on average 48 weeks for the cells to reduce by half, which suggests that the immune system did not reject them.
  • The genetically modified T helper cells went from the blood stream into the soft tissue, where the majority of this type of cell usually resides.
  • Virus levels became detectable in the blood of all six of the group who stopped treatment. Two of them restarted the antiretroviral treatment after eight weeks. The viral levels in three of the participants gradually reduced after a peak at eight weeks, before the antiretroviral treatment was restarted at 12 weeks. It then took 4-20 weeks for the viral levels to become undetectable.
  • The viral level in one of the patients who stopped antiretroviral treatment rose, but became undetectable before restarting treatment. It was found that he already had the genetic mutation in one strand of his DNA.

 

How did the researchers interpret the results?

The researchers concluded that genetically modified CD4 T-cell infusions are safe within the study's limits, but that the size of the study was too small to generalise this finding. The immune system did not reject the genetically modified T helper cells.

 

Conclusion

This phase one trial showed that the infusion of genetically modified T helper cells was achieved reasonably safely in 12 people with chronic HIV.

It isn’t clear if it could be an effective treatment for HIV, as the virus became detectable in the blood of all six participants who stopped taking their antiretroviral treatment. Although the levels of the virus then began to reduce after eight weeks, it only came back to undetectable levels in the person who already had one DNA strand of the genetic mutation. It took several weeks for this to happen in the other five people.

The primary aim of the study was to determine the safety of the treatment in humans, rather than to determine immunity to HIV. It may be that a different dose of cells is more effective. Further studies in larger numbers of people will now be needed to further examine the treatment’s safety and to look at its possible effectiveness and what factors and characteristics in a person might influence this.

Analysis by Bazian. Edited by NHS Choices
Follow Behind the Headlines on Twitter
Join the Healthy Evidence forum.

Links To The Headlines

Immune upgrade gives 'HIV shielding'. BBC News, March 6 2014

HIV gene therapy using GM cells hailed a success after trial. The Guardian, March 6 2014

Could AIDS be cured by modifying patients' genes? New treatment could spell the end of daily drug regime. MailOnline, March 6 2014

Links To Science

Tebas P, et al. Gene Editing of CCR5 in Autologous CD4 T Cells of Persons Infected with HIV. New England Journal of Medicine. Published online March 6 2014

Categories: Medical News

WHO says halving sugar target has extra benefit

Medical News - Thu, 03/06/2014 - 12:58

“Halve sugar intake, say health experts,” The Daily Telegraph reports, while The Guardian tells us that “a can of coke a day is too much sugar”.

The widespread media reports follow new draft international guidelines looking at the healthy maximum recommended levels of sugars in the diet.

Currently, people are advised to have less than 10% of their total energy intake from sugars. However, the new draft guidelines from the World Health Organization (WHO), state that a reduction to below 5% of total energy intake would have “additional benefits”.

An intake of 5% is equivalent to around 25 grams (six teaspoons) of sugar a day for a healthy adult. The WHO’s suggested limits apply to all sugars, including “hidden” sugars added to foods by manufacturers, as well as sugars that are naturally present in fruit juices and honey.
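
As a rough, illustrative sketch of the arithmetic behind that figure (it assumes a reference intake of about 2,000 kcal a day, roughly 4 kcal per gram of sugar and about 4g of sugar per teaspoon – figures assumed here for illustration, not taken from the draft guideline):

  # Illustrative only: converting a percentage-of-energy target into grams of sugar.
  # The 2,000 kcal reference intake, 4 kcal per gram of sugar and 4g per teaspoon
  # are assumptions for this sketch, not values taken from the WHO draft guideline.

  def sugar_limit_grams(daily_kcal=2000, percent_of_energy=5, kcal_per_gram=4):
      return daily_kcal * (percent_of_energy / 100) / kcal_per_gram

  grams = sugar_limit_grams()        # 25.0g at the 5% target
  teaspoons = grams / 4              # roughly six teaspoons
  print(f"About {grams:.0f}g a day, or roughly {teaspoons:.0f} teaspoons")

Running the same sketch with percent_of_energy=10 gives about 50g a day, the rough equivalent of the existing 10% target under the same assumptions.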

 

Why has the WHO published new draft guidelines on sugar?

The WHO has produced its draft guideline on recommended daily sugar intake to get feedback from the public and experts, before it finalises its advice.

The new consultation is needed because the existing WHO guideline was published 12 years ago. This advice, based on the best evidence available in 2002, says that sugars should make up less than 10% of total energy intake.

 

Why has a new draft guideline on sugar been produced?

The WHO says there is growing concern that consumption of sugars – particularly in the form of sugar-sweetened drinks – may result in:

  • reduced intake of foods containing more nutritionally adequate calories 
  • an increase in total calorie intake, leading to an unhealthy diet and weight gain

These may in turn increase the risk of conditions such as cardiovascular disease and diabetes.

The WHO points out that much of the sugars we eat and drink are “hidden” in processed foods that are not usually seen as sweets. For example, a tablespoon of ketchup contains around 4 grams (around a teaspoon) of sugars. A single can of sugar-sweetened fizzy drink contains up to 40 grams (around 10 teaspoons) of sugar.

The WHO also highlights the role sugars play in dental disease (tooth decay). This, the WHO argues, is globally the most common non-infectious disease in the world today.

The new guidelines on sugar intake aim to help both children and adults to reduce their risk of unhealthy weight gain and dental decay, the WHO says. The WHO’s recommendations, once agreed, can act as a benchmark for health policy officials to assess people’s current sugar intake and to help them develop measures to reduce sugar intake.

 

What does the new draft WHO sugar guideline say?

There is no change in the WHO’s new draft guideline from the existing target of having less than 10% of your total energy intake per day from sugars. However, the new guideline does suggest that a reduction to below 5% of total energy intake per day would have “additional benefits”. This target of 5% of your total energy intake is equivalent to around 25 grams (around six teaspoons) of sugar per day for an adult of normal BMI.

The suggested limits on intake of sugars in the draft guideline apply to all sugars that are added to food by manufacturers, cooks or consumers. They also apply to sugars naturally present in honey, syrups, fruit juices and fruit concentrates. These sugars include glucose, fructose and sucrose (table sugar).

Dr Francesco Branca, WHO's nutrition director, reportedly told a news conference that the 10% target was a "strong recommendation" while the 5% target was "conditional", based on current evidence. "We should aim for 5% if we can," said Dr Branca.

 

What evidence is the new draft guideline based on?

WHO commissioned two systematic reviews that it says informed the development of the draft guideline. One is a study from the University of Otago in New Zealand which looked at sugar intake and body weight. This review, which included 68 studies, found that in adults:

  • advice to reduce free sugars was associated with an average 0.80kg reduction in weight
  • advice to increase intake was associated with a corresponding 0.75kg increase

However, the evidence was less consistent in children than in adults. The researchers say the effect seems to be due to changes in total energy intake rather than to the type of carbohydrate eaten, as replacing sugars with other carbohydrates did not result in any change in body weight.

A second review, from the University of Newcastle, looked at the effect of restricting sugar intake on dental caries. It included 55 studies and found “moderate quality” evidence showing that the incidence of caries is lower when free-sugars intake is less than 10%. With a 5% cut-off, a “significant relationship” was observed, although this evidence was judged to be of very low quality.

 

Was media coverage of the WHO draft sugar guidelines accurate?

The media’s reporting of the new guidelines was factually accurate. Some papers chose to include comments critical of the WHO’s “lack of clarity”. According to BBC News, UK campaigners say it is a "tragedy" that the WHO has taken 10 years to think about changing its advice and are in favour of 5% becoming the firm recommendation.

The Independent reports one expert saying he suspected “dirty work” on the part of food and drinks companies might lie behind the WHO’s “less than resounding” message. However, The Independent also reports other experts who have pointed out that a limit of less than 5% would be “ambitious and challenging” and “untried and untested”.

The news follows a post-Christmas surge in stories about the harms from sugar, including the launch in January this year of the new campaign group Action on Sugar. The news also comes after Dame Sally Davies, England’s Chief Medical Officer, reportedly suggested a tax on sugar to help combat growing levels of obesity.

 

Should I cut down on sugar?

Most adults and children in the UK eat too much sugar. According to Public Health England, surveys show that the average intake of sugar for adults is 11.6% of energy, and for children it is 15.2%. This is well above the current recommendation of less than 10% of energy intake from sugar.

Sugars are added to a wide range of foods, such as sweets, cakes, biscuits, chocolate, and some fizzy drinks and juice drinks. These are the sugary foods that you should cut down on. Fruit and vegetables also contain naturally occurring sugars, but these foods also have a range of other nutrients that make them a much better option.

You can find out more information about the relative proportions of food we should be eating in the Eatwell Plate.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

People should cut their sugar intake to just six teaspoons a day, says World Health Organisation. Daily Mail, March 6 2014

World Health Organisation advises halving sugar intake. The Daily Telegraph, March 6 2014

WHO: Daily sugar intake 'should be halved'. BBC News, March 5 2014

Is sugar the new evil? Arguments for and against the grain. The Independent, March 5 2014

A can of Coke a day is too much sugar, warns WHO. The Guardian, March 6 2014

Links To Science

Systematic reviews cited by the WHO

Te Morenga L, Mallard S and Mann J. Dietary sugars and body weight: systematic review and meta-analyses of randomised controlled trials and cohort studies. BMJ. Published online January 15 2013

Moynihan PJ and Kelly SAM. Effect on Caries of Restricting Sugars Intake. Systematic Review to Inform WHO Guidelines. Journal of Dental Research. 2014;93:8-18

 

Categories: Medical News

Parental smoking 'ages' children’s arteries

Medical News - Wed, 03/05/2014 - 14:48

“Passive smoking causes lasting damage to children's arteries, prematurely ageing their blood vessels by more than three years,” BBC News reports.

The news is based on emerging evidence that exposure to secondhand smoke damages children’s arteries. This is concerning, as thickening of the blood vessel walls (a sign of atherosclerosis) is known to increase the risk of heart attacks and strokes in later life.

Parents of approximately 3,700 children from Finland and Australia were asked whether neither, one or both of them smoked.

Up to 25 years later, the grown-up children underwent ultrasound scans on the large carotid arteries in their necks to estimate their carotid intima media thickness (IMT). A high IMT indicates thickening of the artery wall, which can restrict the flow of blood through the carotid arteries to the brain; if an artery becomes blocked, this can trigger a stroke.

The researchers found that having a high carotid IMT in adulthood was significantly more likely in those who were exposed to both parents smoking. However, having only one parent who smoked was not linked to greater carotid IMT in adulthood.

The study does have limitations, including its reliance on self-reporting; however, the results add to already existing evidence that exposing children to passive smoking can increase the risk of both short-term and long-term conditions.

 

Where did the story come from?

The study was carried out by researchers from the University of Tasmania in Australia and the Universities of Turku and Tampere in Finland, along with other Australian and Finnish institutions. It was funded by various organisations: the Commonwealth Departments of Sport, Recreation and Tourism, and Health; the National Heart Foundation; the Commonwealth Schools Commission; the National Health and Medical Research Council; the Heart Foundation; the Tasmanian Community Fund; and Veolia Environmental Services.

The study was published in the peer-reviewed European Heart Journal and is available on an open access basis, so it is free to read online or download.

Both BBC News and the Mail Online website provide an accurate summary of the study.

 

What kind of research was this?

This study was a secondary analysis of two independent, long-running prospective studies carried out in Australia and Finland, which followed participants for up to 25 years.

The aim was to assess the role of exposure to parental smoking in childhood or adolescence on carotid IMT in adulthood.

 

What did the research involve?

The study involved 2,401 children from Finland (aged 3-18 years) and 1,375 children from Australia (aged 9-15 years), with a follow-up of up to 25 years. The Finnish study also analysed the association between cumulative exposure to parental smoking for 3 years and carotid IMT in adulthood.

In both studies, researchers determined the self-reported smoking status of the participants’ parents at the beginning of the studies. Parental smoking was reported as:

  • neither parent
  • one parent
  • two parents

Self-reported current smoking status of the participants (children) was also recorded at the beginning of the study. Questionnaires also gathered other information on the participants, including height, weight (to calculate their body mass index) and physical activity level. 

At follow up (up to 25 years later), questionnaires gathered information on the participants’ total years of schooling, current smoking status, physical activity levels, alcohol consumption and cardiovascular risk factors.

At follow up, ultrasound measurements were carried out on the participants (who were now adults) to measure the thickness of their carotid artery walls (IMT). The carotid IMT measurements were made by members of the research team who did not know whether the participants had been exposed to passive smoke during childhood.

 

What were the basic results?

The main findings of this study were that:

  • carotid IMT in adulthood was significantly greater in those exposed to both parents smoking in childhood compared to those whose parents did not smoke (adjusted marginal means 0.647 mm (standard deviation 0.022) vs. 0.632 mm (standard deviation 0.021)). This association remained significant after adjustment for all potential confounders of both the participants and parents including age, sex, parental education and the child’s smoking status at the start of the study and at follow up.
  • cardiovascular risk factors of the participants in adulthood were also considered in the adjustments 
  • having just one parent (mother or father) who smoked was not linked to carotid IMT in adulthood (pooled analysis of both studies)
  • in one of the studies, greater exposure to parental smoking over a three-year period was linked to a significantly greater carotid IMT in adulthood

The researchers determined that the effect of exposure to both parents smoking on participants’ vascular age was equivalent to being 3.3 years older (95% confidence interval (CI) 1.31 to 4.48) than the participants’ actual age in adulthood (pooled analysis of both studies).

 

How did the researchers interpret the results?

The researchers concluded there is an extensive effect of exposure to parental smoking on children’s vascular health up to 25 years later. They state that there must be continued efforts to reduce smoking among adults, in order to protect young people and to reduce the prevalence of cardiovascular diseases, such as heart attacks, across the population.

Dr Seana Gall, one of the researchers from the Menzies Research Institute Tasmania, is reported in the media as saying: "Our study shows that exposure to passive smoke in childhood causes a direct and irreversible damage to the structure of the arteries."

She went on to say: "Parents, or even those thinking about becoming parents, should quit smoking. This will not only restore their own health, but also protect the health of their children into the future."

 

Conclusion

Overall, this secondary analysis provides preliminary evidence that exposure to parental smoking during childhood and adolescence may affect the artery walls in adulthood.

The researchers attempted to adjust for potential factors that could influence risk (confounders), such as:

  • age
  • sex
  • height
  • weight
  • smoking status
  • physical activity levels
  • alcohol consumption
  • schooling level of the parent(s)

In their analysis, they also took into consideration cardiovascular risk factors of the participants in adulthood.

There are some limitations to the study, which are worth noting. Parental smoking status was self-reported and not objectively measured by the researchers, so there is a possibility that parents did not accurately report their actual smoking status. This is also the case for the reporting of the participants’ smoking status. The study was a secondary analysis of two larger cohort studies, so the original cohorts were not necessarily designed with this specific outcome in mind.

Despite these limitations, however, there is a substantial body of evidence showing that passive smoking is harmful, especially to children. Tobacco smoke contains around 70 cancer-causing chemicals and hundreds of other toxins.

If you do choose to smoke, you should do it outside and well away from your children. There is plenty of free advice and treatment that could help you kick the habit.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

Passive smoking 'damages children's arteries'. BBC News, March 5 2014

Passive smoking 'ages children's arteries': Exposure means youngsters are at greater risk of heart attacks and strokes in later life. Mail Online, March 5 2014

Links To Science

Gall S, Huynh QL, Magnussen CG, et al. Exposure to parental smoking in childhood or adolescence is associated with increased carotid intima-media thickness in young adults: evidence from the Cardiovascular Risk in Young Finns study and the Childhood Determinants of Adult Health Study. European Heart Journal. Published online March 4 2014

Categories: Medical News

High protein diet not as bad for you as smoking

Medical News - Wed, 03/05/2014 - 12:58

“People who eat diets rich in animal protein carry similar cancer risk to those who smoke 20 cigarettes each day,” reports The Daily Telegraph.

We have decades of very good evidence that smoking kills and – fortunately for meat lovers – this latest unhelpful comparison with high protein diets largely appears to be a triumph of PR spin.

The warning was raised in a press release about a large study which found that for people aged 50-65, eating a lot of protein was associated with an increased risk of dying.

However, the study, which assessed the diets of Americans over a single 24-hour period (rather than over the long term), found that in those aged over 65 a high protein diet was actually associated with a reduced risk of death from any cause or from cancer. These differing findings meant that, overall, there was no increase in the risk of death from any cause, or of dying from cancer, with a high protein diet.

There are several reasons to be cautious when interpreting the results of this study, including that the researchers did not take into account important factors such as physical activity in their study.

The claim in much of the media, that a high protein diet in middle-aged people is “as dangerous as smoking” is unsupported.

We need to eat protein, we do not need to smoke.

 

Where did the story come from?

The study was carried out by researchers from the University of Southern California (USC) and other research centres in the US and Italy. It was funded by the US National Institutes of Health, the National Institute on Aging, and the USC Norris Cancer Center. The study was published in the peer-reviewed journal Cell Metabolism and has been made available on an open access basis to read for free.

In general, reporting of the results of the study was reasonable. However, the prominence given to the story (which featured as a front page lead in The Daily Telegraph and The Guardian) in the UK media seems disproportionate.

The headlines suggesting a high protein diet is “as harmful as smoking” were not based on a specific finding of the study and could be seen as unnecessary fear-mongering. This is particularly of note given that the effects of a high protein diet were found to differ dramatically by age.

To be fair to the UK’s journalists, this comparison was raised in a press release, issued by the University of Southern California. Unfortunately this PR hype appears to have been taken at face value.

 

What kind of research was this?

This study looked at the relationship between the amount of protein consumed and subsequent risk of death among middle aged and older adults. It used data collected in a previous cross-sectional study and information from a national register of deaths in the US.

While the data used allowed researchers to identify what happened to people over time, this wasn’t the original purpose of the data collection. This means that some information on what happened to people may be missing, as researchers had to rely on national records rather than keeping close track of the individuals as part of the study.

 

What did the research involve?

The researchers had data on protein consumption for 6,381 US adults aged 50 and over (average age 65). They then identified which of these people died over the following 18 years (up to 2006) using national records. The researchers carried out analyses to see whether people who ate more protein in their diets were more likely to die in this period than those who ate less protein.

The information on protein consumption was collected as part of the third National Health and Nutrition Examination Survey (NHANES). These surveys are designed to assess the health and nutritional status of people in the US. The participants are selected to be representative of the general US population. As part of the survey they reported their food and drink intake over the past 24 hours using a computerised system. The system then calculated how much of different nutrients they consumed.

Each person’s level of protein consumption was calculated as the proportion of calories consumed from protein. Protein intake was classed as:

  • High – 20% or more of calories from protein (1,146 people) 
  • Moderate – 10 to 19% of calories from protein (4,798 people) 
  • Low – less than 10% of calories from protein (437 people)
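
As a rough sketch of how this classification works: protein provides about 4kcal per gram, so the percentage of calories from protein can be estimated from grams of protein and total calories. The function and example values below are ours for illustration; only the three thresholds come from the study.

```python
# Classify protein intake as the share of total calories that comes from protein.
# Assumption: protein provides roughly 4 kcal per gram; thresholds are those used in the study.
def protein_category(protein_grams, total_kcal):
    """Return 'high', 'moderate' or 'low' using cut-offs of >=20%, 10-19% and <10%."""
    percent_from_protein = (protein_grams * 4) / total_kcal * 100
    if percent_from_protein >= 20:
        return "high"
    if percent_from_protein >= 10:
        return "moderate"
    return "low"

# Example: 90 g of protein in an 1,800 kcal day is 20% of calories, so it counts as "high".
print(protein_category(90, 1800))  # high
```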

The researchers used the US National Death Index to identify any of the survey participants who died up to 2006, and the recorded cause of death. The researchers looked at whether proportion of calories consumed from protein was related to risk of death overall, or from specific causes. As well as overall deaths, they were also interested in deaths specifically from cardiovascular disease, cancer, or diabetes. The researchers also looked at whether the relationship differed in people aged 50-65 years, and older individuals, and whether it was influenced by fat, carbohydrate or animal protein intake.

The analyses took into account factors (confounders) that could influence the results, including:

  • age
  • ethnicity
  • education
  • gender
  • "disease status"
  • smoking history
  • participants’ dietary changes in the last year
  • participants’ attempted weight loss in the last year
  • total calorie consumption

The researchers also carried out studies to look at the effect of proteins and their building blocks (amino acids) in yeast and mice.

 

What were the basic results?

On average, the participants consumed 1,823 calories over the day:

  • 51% from carbohydrates
  • 33% from fat
  • 16% from protein (11% from animal protein).

Over 18 years, 40% of participants died; 19% died from cardiovascular diseases, 10% died from cancer, and about 1% died from diabetes.

Overall, there was no association between protein intake and risk of death from any cause, or death from cardiovascular disease or cancer. However, moderate or high protein consumption was associated with an increased risk of death related to complications associated with diabetes. The authors noted that the number of people dying from diabetes-related causes was low, so larger studies were needed to confirm this finding.

The researchers found that results for death from any cause and from cancer seemed to vary with age. Among those aged 50-65, those who ate a high protein diet were 74% more likely to die during follow up than those who ate a low protein diet (hazard ratio (HR) 1.74, 95% confidence interval (CI) 1.02 to 2.97). People in this age group who ate a high protein diet were more than four times as likely to die from cancer during follow up than those who ate a low protein diet (HR 4.33, 95% CI 1.96 to 9.56).

The results were similar once the researchers took into account the proportion of calories consumed from fat and carbohydrates. Further analyses suggested that animal protein was responsible for a considerable part of this relationship, particularly for death from any cause.

However, the opposite effect of high protein intake was seen among those aged over 65. In this age group high protein intake was associated with:

  • a 28% reduction in the risk of death during follow up (HR 0.72, 95% CI 0.55 to 0.94)
  • a 60% reduction in the risk of death from cancer during follow up (HR 0.40, 95% CI 0.23 to 0.71)

 

How did the researchers interpret the results?

The researchers concluded that low protein intake during middle age followed by moderate to high protein consumption in older adults may optimise health and longevity.

 

Conclusion

This study has found a link between high protein intake and increased risk of death among people aged 50-65, but not older adults. There are some important points to bear in mind when thinking about these results:

  • The human data used was not specifically collected for the purpose of the current study. This meant that the researchers had to rely on the completeness of, for example, national data on deaths and causes of death. As a result, the deaths of some participants may have been missed.
  • Information on food intake was only collected for one 24-hour period, and this may not be representative of what people ate over time. Most people (93%) reported that it was typical of their diet at the time, but this may have changed over the 18 years of follow up.
  • The researchers took into account some factors that could affect results, but not others, such as physical activity.
  • Although the study was reasonably large, numbers in some comparisons were relatively low, for example, there were not many diabetes-related deaths and only 437 people overall ate a low protein diet. The broad confidence intervals for some of the results reflect this.
  • Many news sources have suggested that a high protein diet is “as bad for you” as smoking. This is not a comparison that is made in the research paper, therefore its basis is unclear. While we do need some protein in our diets, we don’t need to smoke, so this is not a helpful comparison.
  • While the authors suggested that people eat a low protein diet in middle age and switch to a high protein diet once they get older, it is not possible to say from the study whether this is what the older participants actually did, as their diets were only assessed once.
  • Ideally the findings need to be confirmed in other studies set up to specifically address the effects of higher protein diets, particularly the strikingly different results for different age groups.

While certain diet plans, such as the Atkins diet or the “caveman diet”, have promoted the idea of eating a high-protein diet for weight loss, relying on a single type of energy source in your diet is probably not a good idea. Consumption of some high-protein foods, such as red meat and processed meat, is already known to be associated with an increased risk of bowel cancer.

 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

High-protein diet 'as bad for health as smoking'. The Daily Telegraph, March 4 2014

Diets high in meat, eggs and dairy could be as harmful to health as smoking. The Guardian, March 4 2014

Eating too much meat and eggs is ‘just as bad as smoking’, claim scientists. The Independent, March 4 2014

Eating lots of meat and cheese in middle age is 'as deadly as SMOKING'. Daily Mail, March 4 2014

Eating lots of meat and cheese could be as bad for you as smoking, report reveals. Daily Mirror, March 4 2014

Meat And Cheese 'As Bad For You As Smoking'. Sky News, March 4 2014

Links To Science

Levine ME, Suarez JA, Brandhorst S, et al. Low Protein Intake Is Associated with a Major Reduction in IGF-1, Cancer, and Overall Mortality in the 65 and Younger but Not Older Population. Cell Metabolism. Published online March 4 2014

Categories: Medical News

Do short people also have smaller IQs?

Medical News - Tue, 03/04/2014 - 14:42

“They’re already called ‘vertically challenged’ – but are short people intellectually challenged too?” is the headline in the Mail Online. The website reports on a gene study which found taller people were more likely to have a genetic makeup associated with increased intelligence.

The study analysed 6,815 unrelated people and found some relationship between height and intelligence, although this relationship was not very strong. They also found evidence that this relationship could be due to shared genetic factors. The researchers hope this and future studies will help them better understand the links between height, IQ, and health.

Perhaps the most important thing to highlight is that the link between height and IQ is not clear cut – so it would be unfair to equate being shorter with being “intellectually challenged”.

 

Where did the story come from?

The study was carried out by researchers from the University of Edinburgh and other Universities as part of Generation Scotland – a collaboration between the University Medical Schools and National Health Service in Aberdeen, Dundee, Edinburgh and Glasgow. It was funded by the Chief Scientist Office of the Scottish Government Health Directorates and the Scottish Funding Council, the UK Medical Research Council, Alzheimer Scotland and the BBSRC.

The study was published in the peer-reviewed journal Behavior Genetics and has been published on an open access basis so it is free to read online or download.

Unsurprisingly, the UK media’s reporting focuses on the alleged link between height and IQ. Determining whether there was a relationship between height and IQ was not the main aim of the study and the association between these factors was limited.

 

What kind of research was this?

This was a cross-sectional study which looked at whether any relationship between height and general intelligence – in a large sample of unrelated adults – could be explained by shared genetics.

Traits may be correlated because they are controlled by some of the same genes, or because of other, non-genetic factors, for example if they are developmentally or structurally related.

 

What did the research involve?

The researchers took blood samples from 6,815 unrelated people and extracted DNA from the samples.

Using this DNA they looked at specific single-nucleotide polymorphisms (SNPs) – places where a single letter of the DNA code differs across the population. A change to a single “letter” of DNA can have a significant impact on how an organism develops.

Participants had their general intelligence assessed using four cognitive tests (processing speed, verbal declarative memory, executive function and vocabulary), and had their height measured.

The researchers then looked at whether there was a correlation between height and intelligence. They then used computer analysis to see whether there was evidence that this correlation was due to shared genetics (a genetic correlation).

 

What were the basic results?

After the researchers adjusted for age and sex, they found that height showed some correlation with general intelligence. This meant that there was some tendency for height to increase as intelligence increased – a “phenotypic correlation” (a correlation of observable characteristics). However, this relationship was not particularly strong.

The researchers then looked at the genetics. They found that 58% of the variability in height in people in their sample and 28% of the variation in intelligence were related to the SNPs that they had assessed.

The researchers found a genetic correlation between height and general intelligence. They estimated that 71% of the phenotypic correlation (correlation between observed height and intelligence) was explained by the same genetic variants.

 

How did the researchers interpret the results?

The researchers concluded that they had found a “modest” genetic correlation between height and intelligence, with, they said, “the majority of the phenotypic correlation being explained by shared genetic influences.”

 

Conclusion

This study found some relationship between height and intelligence, and found evidence to suggest that this may be due to shared genetic influences on these traits.

Importantly, the association between height and intelligence was relatively small, meaning the link between the two is not clear cut. So it would be unfair to suggest, as some headlines have, that being short equates to being “intellectually challenged”.

It is also important to note that it’s not clear to what extent the results are due to the way in which traits affect how humans choose a mate, as opposed to the same genes directly affecting height and IQ.

Greater height and IQ have both been linked to better health outcomes, and researchers hope their findings might help them to understand why this is. At the moment, however, the findings do not have any direct implication.

There is not much you can do about how tall you are, aside from buying some killer heels or Cuban boots, but there are plenty of ways you can keep your brain active.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

You’re coming up short in the IQ stakes, titch. The Sunday Times, March 2 2014

They're already called 'vertically challenged' - but are short people intellectually challenged too? Mail Online, March 3 2014

 

Links To Science

Marioni RE, Batty GD, Hayward C, et al. Common Genetic Variants Explain the Majority of the Correlation Between Height and Intelligence: The Generation Scotland Study. Behavior Genetics. Published online February 20 2014

Categories: Medical News

Angry outbursts may up heart attack risk

Medical News - Tue, 03/04/2014 - 12:58

“Having a hot temper may increase your risk of having a heart attack or stroke, according to researchers,” BBC News reports.

Research has found that people prone to attacks of rage have a higher risk of experiencing a serious cardiovascular event, such as a heart attack or stroke.

Researchers conducted a systematic review, collating the findings of previous studies. Their results suggest that outbursts of anger increase the risk of a serious cardiovascular event by almost fivefold (4.74, to be exact).

However, there are a few flaws to the research. The most pertinent is that some of the studies included in the review collected information after the cardiovascular event occurred. This type of information gathering is prone to what is known as recall bias.

For example, if a person has a heart attack in the afternoon, they may be more likely to remember the driver that “cut them up” at a roundabout that morning than someone who didn’t experience a cardiovascular event.

There could also be other factors that link anger and cardiovascular events and are responsible for the association seen.

Finally, no studies used in the analysis looked at the general population and whether anger levels increased their likelihood of having a cardiovascular event. The researchers said the results “should not be assumed to indicate the true causal effect of anger episodes on cardiovascular events”.

That said, frequent outbursts of anger are not good for either your mental or physical health. Read more advice about ways you can control your anger.

 

Where did the story come from?

The study was carried out by researchers from Harvard Medical School, Harvard School of Public Health and New York-Presbyterian Hospital/Weill Cornell Medical Centre, New York. It was funded by the US National Institutes of Health.

The study was published in the peer-reviewed European Heart Journal. The article is open access, meaning it can be accessed for free from the publisher’s website.

The results of the research were well reported by the UK media. They put the small risk of a cardiovascular event into its proper context, while also pointing out that unresolved anger can impact on a person’s health.

 

What kind of research was this?

This was a systematic review that aimed to find out whether episodes of anger are linked to a short-term risk of experiencing a cardiovascular event, such as a heart attack, stroke or disturbances in heart rhythm.

Systematic reviews are the best way of determining what is known about a topic.

However, it’s worth considering that this systematic review contained case-crossover studies. In these, information on whether the participants were angry or calm was obtained during two different periods of time: immediately before the cardiovascular event and at an earlier time.

The level and frequency of anger just prior to a cardiovascular event was then compared to the level of anger in the same people at an earlier time.

Although case-crossover studies can show a link between two things, there could be other factors at play. In this instance, there could be numerous reasons for the link between anger and cardiovascular events. As mentioned previously, case-crossover studies are also prone to recall bias.

 

What did the research involve?

The researchers searched databases of literature to identify all studies that had evaluated whether outbursts of anger are linked to a short-term risk of heart attack, stroke or disturbances in heart rhythm. For studies to be included, they had to evaluate triggers occurring within one month of the cardiovascular event.

The results of these studies were then combined to see if anger was associated with the short-term risk of one of these events. The researchers calculated incidence rate ratios, which compared the number of cardiovascular events in the two hours following outbursts of anger with the number of cardiovascular events that weren’t preceded by anger.

 

What were the basic results?

The researchers identified and included nine case-crossover studies:

  • four of anger outbursts and heart attack/acute coronary syndrome (various heart conditions including heart attack and unstable angina caused by reduced blood flow to a part of the heart)
  • two of anger outbursts and stroke
  • one of anger outbursts and ruptured intracranial aneurysm
  • two of anger outbursts and ventricular arrhythmia (abnormal heart rhythm)

The studies were substantially different – they were performed in different countries and collected information about anger episodes differently.

The researchers found there was a 4.74 times higher risk of heart attack/acute coronary syndrome in the two hours following outbursts of anger compared to normal (95% Confidence Interval [CI] 2.50 to 8.99).

The risk of having a stroke was not significantly higher in the two hours following anger compared to normal (95% CI 0.82 to 16.08).

The one study that assessed the risk of ruptured intracranial aneurysms found a statistically significant increased risk in the hour following an outburst of anger (95% CI 1.59 to 24.90).

The two studies that analysed whether outbursts of anger are associated with ventricular arrhythmia were too different to be combined, but both studies found that anger episodes significantly increase a person’s risk.

Researchers point out that although the relative risks of cardiovascular events following an anger outburst are large, the impact on an individual’s absolute risk of a cardiovascular event may be small.

This is because the initial baseline risk of experiencing a cardiovascular event is small.

That said, when we consider the increased risk at population levels, it seems that frequent outbursts of anger do take a toll on the public's health.

The researchers calculate, based on the combined estimate of a 4.74 times higher rate of heart attack/acute coronary syndrome in the two hours following outbursts of anger, that:

  • one episode of anger per month would result in one excess cardiovascular event per 10,000 people per year at low (5%) 10-year cardiovascular risk
  • one episode of anger per month would result in four excess cardiovascular events per 10,000 people per year at high (20%) 10-year cardiovascular risk
  • five episodes of anger per day would result in approximately 158 excess cardiovascular events per 10,000 per year for people at low 10-year cardiovascular risk
  • five episodes of anger per day would result in approximately 657 excess cardiovascular events per 10,000 per year among people at high 10-year cardiovascular risk
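
To illustrate the kind of arithmetic behind these figures, here is a minimal back-of-envelope sketch, not the authors' method. It assumes the baseline 10-year cardiovascular risk is spread evenly over time and that risk is only elevated during the two hours after each outburst; the function, parameter values and outputs are ours for illustration and differ somewhat from the published estimates, which were derived from the study data. The point it demonstrates is the one made above: a large relative risk confined to a brief window adds very little absolute risk for any one person, but can add up at a population level.

```python
# Back-of-envelope illustration (NOT the authors' exact calculation) of how a large
# relative risk over a brief window translates into a small absolute risk.
# Assumptions: baseline 10-year risk spread evenly over time; risk elevated only
# for the 2 hours after each outburst.
HOURS_PER_10_YEARS = 10 * 365.25 * 24

def excess_risk_per_episode(rate_ratio, ten_year_risk, window_hours=2):
    """Extra probability of an event attributable to one anger episode."""
    baseline_rate_per_hour = ten_year_risk / HOURS_PER_10_YEARS
    return baseline_rate_per_hour * window_hours * (rate_ratio - 1)

# Someone at low (5%) 10-year cardiovascular risk, rate ratio 4.74:
per_episode = excess_risk_per_episode(4.74, 0.05)
print(f"Extra risk per outburst: about {per_episode:.7f}")  # roughly 4 in a million
print(f"Per 10,000 such people having one outburst a month: about "
      f"{per_episode * 12 * 10_000:.1f} extra events a year")
# This simplified sketch gives a figure of the same order as, but below, the authors'
# published estimate of about one excess event per 10,000 people per year.
```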

 

How did the researchers interpret the results?

The researchers concluded that, “there is a higher risk of cardiovascular events shortly after outbursts of anger”.

 

Conclusion

This systematic review found there is an increased risk of cardiovascular events, including heart attack and disturbances in heart rhythm, shortly after outbursts of anger.

This is based on results from nine case-crossover studies. In these, information on feelings of anger in the period before the cardiovascular event, as well as in an earlier period, was collected retrospectively. The risk of having a cardiovascular event after an episode of anger was then calculated.

The researchers point out several limitations to their review, including the fact that:

  • participants were asked to remember angry outbursts hours or days after the cardiovascular event – in one of the studies this was two weeks later. It’s possible that this could be inaccurate if someone has just experienced an unpleasant or life-threatening event. They were also asked to recall periods of anger in the preceding 6-12 months. There may also be selective recall, forgetting or being unaware of the frequency of other angry outbursts. In one study, people were asked to recall any angry outbursts on the same day and time of the previous week, which may have been difficult to accurately remember
  • the studies used different methods to record outbursts of anger. Some studies recorded this using an interview, while others used a questionnaire – these methods can result in people responding differently
  • a further limitation is that the studies only included people who had suffered a cardiovascular event. None of them looked at the general population to assess any link between the number of angry outbursts and the risk of a cardiovascular event

Although case-crossover studies can demonstrate a link, there could be other factors that link anger and cardiovascular events and are responsible for the association seen. The researchers concluded that “the results should not be assumed to indicate the true causal effect of anger episodes on cardiovascular events”.

Anger is a normal, healthy emotion. However, if you find yourself getting intensely angry on a regular basis, you may have an anger management problem. Read more about possible treatment options that could help you control your anger.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

Angry people 'risking heart attacks'. BBC News, March 4 2014

Angry outbursts cause fivefold increase in heart attack risk. The Daily Telegraph, March 4 2014

Lose your temper and risk a heart attack - up to two HOURS after calming down. Daily Mail, March 4 2014

Links To Science

Mostofsky E, Penner EA, Mittleman MA. Outbursts of anger as a trigger of acute cardiovascular events: a systematic review and meta-analysis. European Heart Journal. Published online March 3 2014

Categories: Medical News

Childhood nightmares link to psychotic experiences

Medical News - Mon, 03/03/2014 - 14:48

“Regular nightmares in childhood may be an early warning sign of psychotic disorders,” BBC News reports. While many children have the occasional nightmare, a history of regular nightmares could be the sign of something more serious, the news reports.

The study in question followed more than 6,000 UK children and found that those whose mothers reported them as having regular nightmares over at least one period up to age nine were significantly more likely to report having had a “psychotic experience" at age 12.

While the news reports may be understandably worrying for parents, it is worth bearing in mind that the findings need to be confirmed in further studies.

Also, the findings don’t suggest that having regular nightmares definitely means your child will have psychotic experiences. In addition, reporting a single psychotic experience at age 12 would not mean that a child definitely had a psychotic disorder such as schizophrenia, or would go on to develop one later on.

The authors note that it is not possible to say whether nightmares are directly causing the increase in risk of psychotic experiences. This means that it is not clear whether stopping the nightmares (if this were possible) will have an effect on risk of these experiences.

 

Where did the story come from?

The study was carried out by researchers from King’s College London and other research centres in the UK. It was funded by the UK Medical Research Council, Wellcome Trust, University of Bristol, and Economic and Social Research Council. The study was published in the peer-reviewed medical journal Sleep.

The BBC News headline “Childhood nightmares may point to looming health issues” is unnecessarily frightening for parents. The figure quoted by BBC News about the risk associated with nightmares (“a three-and-a-half times” increase in risk) comes from an analysis that cannot tell us whether the sleep problems or the psychotic experiences came first, and therefore cannot tell us which might be contributing to the other.

The Mail Online provides a better summary of the results in its story.

 

What kind of research was this?

This was a prospective cohort study looking at the possibility of a link between sleep disorders and later psychotic experiences in childhood. This is the most appropriate study design for assessing this question.

The research was part of an ongoing birth cohort study called the Avon Longitudinal Study of Parents and Children (ALSPAC). This ongoing study looks at factors that determine a person’s health from childhood onwards.

The researchers also carried out some cross-sectional analyses, but these cannot tell us which factor came first, and therefore which might be influencing the other.

Therefore, these analyses cannot answer the question of whether frequent nightmares might increase psychosis risk or whether psychotic experiences might increase risk of nightmares.

 

What did the research involve?

The researchers assessed whether the children had any problems with sleep (such as difficulty getting to sleep, nightmares, night terrors, or sleepwalking) between the ages of two-and-a-half and nine years, and at age 12. They also assessed whether the children had experienced psychotic experiences at age 12. They then analysed whether children with sleep problems were more likely to report psychotic experiences.

The study aimed to recruit all pregnant women living in the Avon region who were due to give birth between April 1st 1991 and the end of 1992. They recruited 14,775 women who gave birth to a live baby.

The mothers completed questionnaires about their and their child’s health and development from the time of recruitment. Sleep problems were assessed in six postal questionnaires sent at intervals between the ages of two-and-a-half and nine years, and in a standard face-to-face interview when the child was aged 12 years.

The questionnaires asked the mothers whether their child experienced regular problems going to sleep, nightmares, or sleep walking. The interview asked the children whether they had nightmares, or whether someone had told them they had shown signs of night terrors or sleep walking, in the past six months. If they answered yes, they were asked more questions to gain further information.

At age 12, the children also had a face-to-face, semi-structured interview to find out whether they had any psychotic experiences. These experiences could be:

  • Hallucinations: seeing or hearing something that wasn’t there
  • Delusions: for example feeling spied on, persecuted, that their thoughts were being read, or having delusions of grandeur
  • Thought interference: feeling that someone was inserting thoughts into their mind or removing thoughts, or that other people could hear their thoughts

These types of experiences can be symptoms of serious mental health conditions such as schizophrenia, or can be triggered by physical illnesses or substance use.

The current study included the 6,796 children whose mothers had completed at least three questionnaires about sleep problems up to the age of nine, as well as the child interview about psychotic experiences at age 12 years.

The researchers then looked at whether children with sleep problems were more likely to report psychotic experiences. They took into account factors that might influence this association (confounders), including:

  • family adversity during the pregnancy
  • child IQ
  • evidence of neurological problems
  • mental health diagnoses (made at age seven)
  • behavioural problems

 

What were the basic results?

According to the mothers’ reports, between the ages of two-and-a-half and nine years, about three-quarters of children experienced at least some nightmares. About a fifth of children (20.7%) had regular nightmares reported at one time point in this period; 17% had regular nightmares reported at two time points, and 37% had regular nightmares reported at three or more time points.

At age 12, 36.2% reported at least one sleep problem (nightmares, night terrors, or sleep walking). At this age, 4.7% of children reported having had a psychotic experience that was judged not to be related to fever or substance use, and was not experienced when the child was falling asleep or waking up.

Children who were reported as experiencing regular nightmares at one time point between the ages of two-and-a-half and nine years had higher odds of reporting psychotic experiences at age 12 than those who never had regular nightmares (odds ratio (OR) 1.16, 95% confidence interval (CI) 1.00 to 1.35).

The more persistent the nightmares were, the greater the increase in the odds. For example, those who were reported as having regular nightmares in at least three time periods between the ages of two and a half and nine years had a 56% increase in odds of a psychotic experience (OR 1.56).

Problems getting to sleep, or night waking between the ages of two and a half and nine years were not associated with psychotic experiences at age 12.

Children who reported any sleeping problems at the age of 12 (nightmares, night terrors, or sleep walking) were also at higher odds of reporting psychotic experiences than those without these problems (OR 3.62, 95% CI 2.57 to 5.11).

 

How did the researchers interpret the results?

The researchers conclude that nightmares and night terrors in childhood, but not other sleeping problems, are associated with reporting psychotic experiences at age 12 years.

 

Conclusion

The study has found that children who have regular nightmares between the ages of two-and-a-half and nine were more likely to report a psychotic experience (for example a hallucination or delusion) at age 12. While the study was relatively large and well designed, it does have limitations. As with all research findings, they ideally need to be confirmed by other studies.

Parents reading this article should not become unduly distressed by thinking that their child’s nightmares mean they will develop psychosis later on in life. Firstly, while a lot of children experienced nightmares at some point up to the age of nine (almost three-quarters), very few reported having had a psychotic experience at age 12 (about one in twenty).

In addition, a single psychotic experience at age 12 would not mean that the child had a diagnosis of a psychotic disorder, or guarantee that they would go on to develop psychosis later on.

Thankfully, psychosis is uncommon, affecting around one in 100 people, and mostly at ages 15 or over. Cases among children under the age of 15 are rare.

Finally, as the authors themselves note, it is not possible to say whether nightmares are directly causing the increase in risk of psychotic experiences.

There are some other points to note:

  • Although BBC News reports that night terrors were mostly experienced between the ages of three and seven years, night terrors in this study were only specifically assessed at age 12. At younger ages the researchers only asked about nightmares, problems getting to sleep, and night waking.
  • The analysis of the link between sleep problems at age 12 (such as night terrors) and psychotic experiences at the same age is cross-sectional, and therefore it is not possible to say which factor came first – the sleep problem or the psychotic experience.
  • The figure from these analyses (a 3.5 times increase in risk) is much higher than the increase in risk of having a psychotic experience at age 12 after having nightmares from age two and a half to nine years, which was only 16%.
  • The study relied on mothers’ reports of children’s sleep problems up to the age of nine years and did not delve into the frequency or severity of sleep problems. It is possible that this may have led to some inaccuracies – for example, some children with sleep problems may be missed.
  • Although the researchers tried to take into account some factors that may have influenced results (potential confounders), others may also be having an effect, such as total amount of sleep a child had.

Read more about common sleep problems in children.

If your child is experiencing persistent sleep problems then ask your GP for advice.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

Childhood nightmares may point to looming health issues. BBC News, March 1 2014

Children who have lots of nightmares at risk of suffering hallucinations and psychosis as teenagers. Mail Online, March 2 2014

Childhood nightmares 'increase risk' of health problems. ITV News, March 1 2014

Children who have nightmares at greater risk of psychosis in later life. Daily Mirror, March 1 2014

Links To Science

Fisher HL, Lereya ST, Thompson A, et al. Childhood Parasomnias and Psychotic Experiences at Age 12 Years in a United Kingdom Birth Cohort. Sleep. Published online March 1 2014

 

Categories: Medical News

How seaweed could slow the obesity tidal wave

Medical News - Mon, 03/03/2014 - 14:00

“Seaweed could be key to weight loss, study suggests,” BBC News reports.

UK researchers have looked at alginates that occur naturally in “kelp” seaweed (the variety that resembles large blades). They found that these alginates may help reduce the amount of fat the body digests.

Their study showed that, in the lab, certain types of alginates can slow down the activity of a fat-digesting enzyme called pancreatic lipase. The researchers believe that if the alginates can block this enzyme, less fat would be absorbed by the body, which could help stop people becoming obese.

However, the research did not draw any definitive conclusions; most importantly, it cannot show that weight loss would occur in humans (or even in mice). It's also unclear whether any potential effect from seaweed extract would lead to an improvement in weight-related health issues, such as a reduced risk of diabetes.

Even if the alginates studied were successful in achieving weight loss, this does not mean they are safe to consume. Ultimately, ingesting a substance that slows down fat absorption is unlikely to have the same health benefits as a well-balanced diet and exercise – this is a tried and tested lifestyle choice for maintaining a healthy weight.

Nonetheless, the market for quick-fix weight loss treatments is large and extremely profitable, so research into seaweed extract will almost certainly continue.

 

Where did the story come from?

The study was carried out by researchers from Newcastle University and was funded through a BBSRC CASE studentship (a grant programme for bioscience researchers) with industrial sponsors Technostics Ltd.

The study was published in the peer-reviewed science journal Food Chemistry.

The UK media’s reporting of the study was generally accurate, though much of the reporting gives the impression the alginates studied had proven to be an effective weight loss supplement in humans.

 

What kind of research was this?

This was a laboratory study investigating how a compound called alginate could influence the digestion of fat.

Alginates are chemicals that can be extracted from the cell walls of brown seaweed or from certain bacteria. Using alginate as a food additive is not a new concept, but this latest news covers new territory: their potential as an anti-obesity treatment.

In industrialised countries, dietary fats can account for 40% of energy intake, with triacylglycerol (TAG) being the major component. An enzyme called lipase, excreted from the pancreas, plays an important role in the digestion of fats in the body, so reducing pancreatic lipase activity would reduce fat breakdown, resulting in lower amounts being absorbed by the body. This would mean more of the fat passes straight through the body, rather than accumulating under the skin or around the organs, where it can harm health.

Laboratory research like this is useful for establishing proof of a particular concept, but many more tests are needed for potential food additives. Experiments on humans are more important and would provide more information about the potential risks and rewards of using alginate as a food additive or a weight loss agent.

 

Links To The Headlines

Seaweed could be key to weight loss, study suggests. BBC News, March 1 2014

Want to lose weight? Eat more SEAWEED. Daily Mirror, March 1 2014

Could seaweed stop the tide of obesity? Scientists creating supplement that blocks fat. The Independent, March 1 2014

For a guilt-free way to enjoy cake, just add seaweed: Extract stops fat being broken down and absorbed by the body. Mail Online, March 1 2014

Seaweed on your sausages could keep weight down. The Times, March 1 2014

Links To Science

Wilcox MD, Brownlee IA, Richardson JC, et al. The modulation of pancreatic lipase activity by alginates. Food Chemistry. Published online March 1 2014

 

Categories: Medical News

Claims of 'anti-ageing pill' may be premature

Medical News - Fri, 02/28/2014 - 14:48

The Daily Telegraph and Daily Express both carry headlines about how a “pill” to help humans live longer could be on the cards. However, while the substance being studied shows promise, the research only involved mice.

Researchers were looking at a chemical called SRT1720 which activates a particular protein called Sirtuin 1 (SIRT1). Previous research has demonstrated that activating SIRT1 can have health benefits in various organisms, and it has been proposed as an anti-ageing protein.

This study focused on comparing the lifespan, health and diseases of mice fed the same diet, but with or without the addition of SRT1720.

Overall they found mice fed a normal diet but with the supplement had a longer natural lifespan on average (about five weeks longer).

During their lifetime, additional tests also suggested they had improved muscle function and coordination, improved metabolism, improved glucose tolerance, decreased body fat and cholesterol.

All in all this suggests that giving the mice this supplement could protect them from the equivalent of metabolic syndrome, a series of risk factors associated with conditions such as heart disease and type 2 diabetes.

This is interesting research but, as it only involved mice, the normal caveats regarding animal studies apply. Importantly, the researchers did not look at whether SRT1720 may cause side effects or complications, so it is currently unclear whether it would be safe in humans, let alone effective.

The SIRT1 protein could be a possible candidate in the quest to find an “elixir of life” but these are very early days.

 

Where did the story come from?

The study was carried out by researchers from the US National Institute on Aging, part of the National Institutes of Health, and other research institutions in the US and Australia. Funding was provided by the National Institute on Aging, National Institutes of Health. Some of the researchers involved in the study are employed by Sirtris, a company with a declared commercial interest in developing a SIRT1 activator.

The study was published in the peer-reviewed scientific journal Cell Reports. The article is open access, meaning it can be read for free on the journal’s website.

The media is rather premature in concluding from this early stage research in mice that a life-prolonging pill could be on the cards anytime soon, though it is true that the research has findings worthy of further study.

Also, unlike The Daily Telegraph, the Daily Express article does not make it apparent that the research was carried out in mice.

 

What kind of research was this?

This was an animal study in mice which centred on a chemical called SRT1720 which is thought to activate a particular protein, Sirtuin 1 (SIRT1). SIRT1 is known to play an important role in maintaining homeostatic balance (keeping the various systems of the body on a “stable footing”).

Previous research has demonstrated that SIRT1 activation can have health benefits in various organisms, and it has been proposed as an anti-ageing protein. It has been suggested that pharmacological interventions that aim to increase SIRT1 activity could slow the onset of ageing as well as delaying the onset of diseases associated with ageing – such as heart disease.

Previous research has shown that treating mice with small molecule activators of SIRT1, such as SRT1720, can counteract the detrimental effects of a high-fat diet. This is thought to be achieved by improving insulin sensitivity and reducing oxidative damage at the cellular level.

However, most of the previous research in mice has focused on reversing the effects of a poor diet.

This research aimed to see whether activating SIRT1 using SRT1720 can improve health and lifespan in mice fed a normal diet.

 

What did the research involve?

The researchers used 28-week-old male mice separated into four groups of 100. They were fed on four diets:

  • standard diet (carbohydrate: protein: fat ratio of 64:19:17 percent of kcal)
  • the standard diet supplemented with the SRT1 activator molecule – SRT1720
  • high-fat diet (carbohydrate: protein: fat ratio of 16:23:61)
  • the high fat diet supplemented with SRT1720

The SRT1720 supplements were included in the diets at an approximate daily dose of 100mg/kg body weight. The mice had their body weight and food intake monitored biweekly.

Mice received various tests during the study, including having their metabolic rate measured after they had been on the diets for about six months, and their body fat mass and glucose tolerance measured when they had been on the diets for almost a year.

They also had exercise testing when between one and two years of age. The animals lived out their lifespan and then their organs were examined after death.

 

What were the basic results?

The researchers found that survival differed significantly between the two groups of mice fed the standard diet – average lifespan was increased by 8.8% (around five weeks) when mice were given the SRT1720 supplement. Survival was significantly lower in the mice fed the high-fat diet, but the SRT1720 supplement still increased their lifespan, by 21.7% (around 22 weeks). Overall, statistical analyses showed that the supplement significantly reduced the risk of death.

Also, the SRT1720 supplement decreased the body weight of both the standard diet and high fat diet mice, compared with their counterparts on the same diets, despite the fact that the mice were consuming the same number of calories.

However, the supplement only decreased percentage of body fat in those mice on the standard diet; in those on the high fat diet the supplement had no effect on fat mass percentage.

In the standard diet mice, the SRT1720 supplement also had beneficial effects on their metabolism and led to a noticeably improved performance on an activity test. This suggests they had better balance and muscle function, though a similar effect was not seen in the mice on the high fat diet.

There was also some suggestion that the supplement improved insulin sensitivity and glucose balance, and also lowered blood cholesterol in mice fed the standard diet. Cataract formation in the eyes was also reduced.

There was no difference in the number of diseases seen on autopsy examination after death between animals given the supplement and those not. However, the researchers say that as the average age at autopsy was around 10 weeks later in those given the supplements, it could be that SRT1720 delays the onset of diseases allowing mice to live a longer life without disease.

How did the researchers interpret the results?

The researchers conclude that their results show that supplementation with a molecule that activates SRT1 improves health and extends lifespan in mice maintained on a standard diet. They say their work “highlights the importance of examining the therapeutic value of small molecule activators of SIRT1 in general health and longevity”.

 

Conclusion

Previous research has demonstrated that SIRT1 protein could be a potential target for treatments to try and prolong life and prevent diseases of ageing. However, much animal research to date has focused on demonstrating how activators of this protein can reverse the detrimental effects of a high fat diet.

Therefore, though the current study also included mice fed a high fat diet, the main aim of researchers was to see what the effects could be when mice were fed their normal diet.

They found generally promising results. Overall they found that mice fed a normal diet supplemented with SRT1720, a chemical which is thought to activate SIRT1, had a longer natural lifespan (about five weeks longer on average). During their lifetime, additional tests also suggested they had improved muscle function and coordination, improved metabolism, improved glucose tolerance, decreased body fat and cholesterol.

All in all this suggests that giving the mice this supplement could protect them from the equivalent of metabolic syndrome in humans, and reduce the risk of diseases such as heart disease and diabetes. This is potentially important, as these types of disease are now a leading cause of disability and death in the developing world.

This research is at a very early stage, and we don’t know whether a treatment could be developed for testing in humans, and if it was, whether it would be safe or effective.

Though, given the potential billions of pounds that could be made from a safe and effective anti-ageing drug, we would be extremely surprised if this study did not lead to further research into SRT1720 and SIRT1.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

New pill to fight ageing: Could drug be secret to a longer life? Daily Express, February 28 2014

Pill could help humans live longer. The Daily Telegraph, February 27 2014

Links To Science

Mitchell SJ, Martin-Montalvo A, Mercken EM, et al. The SIRT1 Activator SRT1720 Extends Lifespan and Improves Health of Mice Fed a Standard Diet. Cell Reports. Published online February 27 2014

Categories: Medical News