Medical News

Obese 11-year-old girls 'get lower school grades'

Medical News - Wed, 03/12/2014 - 12:58

"Study links obesity in teenage girls with lower academic results," BBC News reports. Previous studies have reported that child and adolescent obesity has a wide variety of adverse consequences in both the short and long term.

A large study of UK secondary school pupils has now looked at whether being overweight or obese at the age of 11 influences educational attainment on SAT tests at 11 and 13 years of age and GCSE grades achieved at 16 years of age.

The researchers found an association between obesity at 11 years of age and poorer academic achievement in GCSE exams five years later in girls, even after adjustment for a wide range of factors that could have influenced the results (confounders).

They say that the difference in academic achievement was equivalent to one-third of a grade at age 16, which would be sufficient to lower average attainment to a grade D instead of a grade C. However, the association between obesity and academic attainment was less clear in boys.

The reasons for an association between obesity and academic achievement in girls are unknown, but possible causes include health effects of obesity that lead to girls missing school.

 

Where did the story come from?

The study was carried out by researchers from the universities of Dundee, Strathclyde, Bristol and Georgia. It was funded by the UK Medical Research Council, the Wellcome Trust, the University of Bristol and the BUPA Foundation.

The study was published in the peer-reviewed International Journal of Obesity. The article is open access, meaning it can be accessed for free from the journal's website.

The results of this study were well reported in the media.

 

What kind of research was this?

This was a cohort study that aimed to determine whether being obese at 11 years old was associated with poorer academic attainment at 11, 13 and 16 years old. It also aimed to look for factors that might explain the relationship seen.

Cohort studies are the ideal study design to address this sort of question, although they cannot show a cause and effect relationship as there could be other factors (confounders) that are responsible for any association seen.

 

What did the research involve?

The researchers looked at the association between weight status at 11 years old and academic attainment assessed by national tests at 11, 13 and 16 years of age in 5,966 children who were part of the Avon Longitudinal Study of Parents and Children.

Children were only included in the study if they did not have a psychiatric diagnosis or special educational needs.

The researchers measured the weight and height of children when they were 11 and 16 years old, and their body mass index (BMI) was also calculated.

Weight status was defined using BMI "z-scores". A z-score shows how far an individual's BMI is from the UK population average — more precisely, the number of standard deviations away it is.

There are no agreed cut-offs in the UK, but the World Health Organization (WHO) defines being overweight as more than one standard deviation above average and obesity as more than two standard deviations above. This study used a lower cut-off to define obesity. In this study:

  • normal weight was defined as a BMI z-score of less than 1.04
  • being overweight was defined as a BMI z-score of 1.04 to 1.63
  • being obese was defined as a BMI z-score of 1.64 or more
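These cut-offs amount to a simple classification rule. A minimal sketch in Python — the reference mean and standard deviation below are illustrative placeholders, not values from the study:

```python
def bmi_z_score(bmi, ref_mean, ref_sd):
    """Number of standard deviations a BMI lies from the reference average."""
    return (bmi - ref_mean) / ref_sd

def weight_status(z):
    """Classify a BMI z-score using the cut-offs reported in the study."""
    if z >= 1.64:
        return "obese"
    elif z >= 1.04:
        return "overweight"
    return "healthy weight"

# Illustrative reference values (not the study's): mean BMI 18.0, SD 3.0
z = bmi_z_score(bmi=24.0, ref_mean=18.0, ref_sd=3.0)  # z = 2.0
print(weight_status(z))  # obese
```

Note that under the WHO definition mentioned above, obesity would require a z-score above 2, so a child with a z-score of 1.8 would count as obese in this study but only as overweight under the WHO cut-off.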

Academic attainment was assessed from performance in English, Maths and Science in Key Stage 2 tests at age 10/11, Key Stage 3 tests at age 13/14, and GCSEs at age 15/16. The researchers concentrated their analyses on English attainment.

The researchers looked at the association between weight status and academic attainment. Boys and girls were analysed separately, as differences have been seen in academic attainment between the sexes.

The researchers also looked at whether the association seen between weight status and academic attainment could be explained by:

  • depressive symptoms at age 11
  • IQ at age 8
  • age of girls when their periods started

The researchers adjusted their analyses for a large number of potential confounders.

 

What were the basic results?

At 11 years of age, 71.4% of children were a healthy weight, 13.3% were overweight and 15.3% were obese.

After adjusting for all potential confounders, girls who were obese at the age of 11 had lower English marks at 13 and 16 years of age compared with girls who were a healthy weight. There was no significant difference in marks for overweight girls compared with healthy weight girls.

The association between obesity and English marks was less clear in boys: a significant association was seen only between obesity at the age of 11 and lower English marks at 11.

Neither depressive symptoms nor IQ explained the relationship between weight status at 11 and academic attainment at 16 years old for boys or girls. The age at which girls had their first period also did not explain the relationship between weight status at 11 and academic attainment at 16 years old for girls.

The researchers also looked at changes in weight status. Changes in weight status had no effect on English attainment at GCSE in boys.

However, girls who were overweight or obese at both 11 and 16, or who went from being overweight at 11 to obese at 16, had poorer English attainment at GCSE than girls who remained a healthy weight.

Girls who went from a healthy weight to being overweight, and girls who went from being overweight to a healthy weight, showed no difference in English attainment at GCSE compared with girls who remained a healthy weight.

 

How did the researchers interpret the results?

The researchers conclude that, "For girls, obesity in adolescence has a detrimental impact on academic attainment five years later". They say that, "Being obese at 11 predicted lower attainment by one-third of a grade at age 16. In the present sample, this would be sufficient to lower average attainment to a grade D instead of a grade C."

The researchers go on to say: "Mental health, IQ and age of menarche [period] did not mediate this relationship, suggesting that further work is required to understand the underlying mechanisms. Parents, education and public health policy makers should consider the wide-reaching detrimental impact of obesity on educational outcomes in this age group."

 

Conclusion

This large UK-based cohort study has looked at whether being overweight or obese at age 11 influences educational attainment on SAT tests at 11 and 13 years of age and GCSE grades achieved at 16 years of age.

It found an association between obesity at 11 years of age and poorer academic achievement in GCSE exams five years later in girls, even after adjustment for a wide range of confounders.

The researchers found that depression, IQ and the age girls started menstruating could not explain the association. The association between obesity and academic attainment was less clear in boys.

The reasons for an association between obesity and academic achievement in girls are unknown. The researchers suggest that obese girls may miss school because obesity affects physical and mental health. They also suggest that obese children's academic attainment may suffer because they tend to be stigmatised by other children or teachers, or because excess fat might affect brain function.

Strengths of this study include the large sample size, cohort design, adjustment of analyses for a wide range of confounders, and the fact that BMI and educational attainment were objectively assessed. However, it should be noted that the researchers defined childhood obesity at a lower BMI level than that used by the World Health Organization.

However, as the researchers noted, despite the adjustment for a range of confounders, cohort studies cannot prove a cause-and-effect relationship, as there might be other confounders that are responsible for the association seen. They also point out that many of the confounders were only measured at one point and they were unable to adjust for change, such as changes in depressive symptoms.

The researchers suggest that future studies should explore the influence of other potential factors that could explain the association seen, including self-esteem, absenteeism, school environment and the role of the teacher.

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Study links obesity in teenage girls with lower academic results. BBC News, March 11 2014

Link between obesity & poor grades in adolescent girls. ITV News, March 11 2014

Girls who are obese at 11 'get lower GCSE results': Effect of weight can be difference between C and D grade. Daily Mail, March 11 2014

Links To Science

Booth JN, et al. Obesity impairs academic attainment in adolescence: findings from ALSPAC, a UK cohort (PDF, 983kb). International Journal of Obesity. Published online March 11 2014

Categories: Medical News

Low-level drinking in early pregnancy 'harms baby'

Medical News - Tue, 03/11/2014 - 14:57

"Drinking alcohol early in pregnancy, even in small amounts, increases chances of harming your baby," reports The Independent, one of several news outlets to report on the latest study on the risks of drinking during pregnancy.

The study of 1,303 pregnant women aged between 18 and 45 years old found that women who drank less than two units a week during the first 12 weeks of pregnancy were at increased risk of complications, including premature birth.

While the risk at the individual level is still low, researchers from the University of Leeds concluded that the first trimester was the most "vulnerable period" of a woman's pregnancy.

They also found a more general association between drinking alcohol throughout pregnancy and worse birth outcomes, such as lower birth weight or foetal growth restriction.

"Our results highlight the need for endorsing the abstinence-only message, and further illuminate how timing of exposure is important in the association of alcohol with birth outcomes," say the study's authors.

The study, published in the Journal of Epidemiology and Community Health, suggests that women who are planning to conceive and who are pregnant should abstain from alcohol.

However, for now it is unclear whether consideration of this study alongside other evidence will lead to a change to government recommendations on drinking during pregnancy.

There currently remains some uncertainty around whether there is a safe level of alcohol consumption during pregnancy, or whether it should be avoided altogether.

The Department of Health recommends that pregnant women and women trying to conceive should avoid alcohol completely, and, if they choose to drink, should never drink more than one to two units once or twice a week and never get drunk.

The most recent National Institute for Health and Care Excellence (NICE) guidance on Antenatal Care (2008) specifically advises that women trying to conceive, and those in the first three months of pregnancy, should avoid drinking alcohol, as this may be associated with an increased risk of miscarriage.

 

Where did the story come from?

The study was carried out by researchers from the University of Leeds and was funded by the Food Standards Agency. It was published in the peer-reviewed Journal of Epidemiology and Community Health.

The results of the research were well reported by BBC News and The Independent. The Daily Telegraph's headline about "middle-class mothers' alcohol risk to babies" was based on the finding that women reporting alcohol intake above two units per week were more likely to be older, have a university degree, be of European origin, and were less likely to live in a deprived area.

However, the researchers did not conclude that "middle-class mothers are at greater risk of having premature and smaller babies as they drink too much in pregnancy", as the paper reports.

 

What kind of research was this?

This was a prospective cohort study carried out in Leeds that aimed to determine the association between alcohol intake before and during pregnancy with birth weight and gestational age.

It also aimed to examine whether there was a different effect depending on the timing of exposure during pregnancy.

There currently remains some uncertainty around whether there is a safe level of alcohol consumption during pregnancy or whether it should be avoided altogether.

Cohort studies are the ideal study design to address this question. However, despite the fact that the researchers adjusted for a number of confounders, it is possible there are other factors that explain the association seen in the research.

 

What did the research involve?

The researchers studied 1,303 pregnant women aged between 18 and 45 years old.

The women were asked about their alcohol consumption using food frequency questionnaires. The women completed the questionnaire three times:

  • when they were enrolled into the study – they were asked about consumption from 4 weeks prior to pregnancy through to week 12 of pregnancy
  • at week 28 of pregnancy – they were asked about consumption between weeks 13 and 28 of pregnancy
  • after they had given birth – they were asked about consumption between weeks 29 and 40 of pregnancy

The reported frequency and type of alcohol drunk was used to calculate the units consumed per week.

Based on their responses, women were categorised as non-drinking, drinking two or fewer units per week, or drinking more than two units per week.
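As a rough illustration of how reported drinks convert into the study's weekly categories: a UK alcohol unit is 10ml of pure alcohol, so units = volume in ml × ABV% ÷ 1000. The drink sizes below are assumptions for illustration, not items from the study's questionnaire:

```python
def uk_units(volume_ml, abv_percent):
    """One UK unit = 10 ml of pure alcohol."""
    return volume_ml * abv_percent / 1000

def weekly_category(units_per_week):
    """The three drinking categories used in the study."""
    if units_per_week == 0:
        return "non-drinking"
    elif units_per_week <= 2:
        return "two or fewer units per week"
    return "more than two units per week"

# A single 175 ml glass of 12% ABV wine per week is already 2.1 units
units = uk_units(175, 12)  # 2.1
print(weekly_category(units))  # more than two units per week
```

This shows how low the "more than two units per week" threshold sits in practice: one standard glass of wine a week is enough to cross it.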

Birth outcomes were obtained from hospital maternal records. The primary outcome was birth weight, but the researchers also looked at birth centiles (birth size compared with other babies, adjusted for maternal height, weight, ethnicity, how many children she had, and the birth weight and gender of the baby), premature birth (before 37 weeks of gestation) and babies being small for gestational age (below the 10th growth centile).

The researchers then looked at the association between drinking status and birth outcomes. They adjusted for confounders that included the mother's salivary cotinine (a biomarker of smoking status), pre-pregnancy weight, height, age, ethnicity, how many children she had, caffeine intake, and education.

 

What were the basic results?

Approximately three-quarters of women before pregnancy and more than half in the first trimester reported alcohol intakes of more than two units per week.

Just over a quarter of women reported drinking more than two units per week in the second or third trimester.

Alcohol consumption four weeks before pregnancy

Drinking more than two units per week in the four weeks before pregnancy was associated with lower birth weight (105.7g lower) and a 7.7-point decrease in birth centile compared with not drinking. Drinking two or fewer units per week was not associated with lower birth weight or birth centile.

Alcohol consumption during the first trimester of pregnancy

Drinking more than two units per week during the first trimester of pregnancy was associated with an approximate 100g reduction in birth weight and an 8.2-point decrease in birth centile compared with not drinking.

Drinking more than two units per week was associated with increased odds of having a baby small for its gestational age (odds ratio (OR) 2.0, 95% confidence interval (CI) 1.2 to 3.4) and a premature birth (OR 3.5, 95% CI 1.1 to 11.2).

Drinking two units or fewer per week during the first trimester of pregnancy was associated with an approximate 100g reduction in birth weight and a 5.8-point decrease in birth centile compared with not drinking. Drinking two units or fewer per week was also associated with increased odds of a preterm birth (OR 4.6, 95% CI 1.4 to 14.7).
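The odds ratios above compare the odds of an outcome between exposed and unexposed groups; a 95% confidence interval that excludes 1.0 indicates a statistically significant association. A minimal sketch of how an odds ratio and its Wald confidence interval come out of a 2×2 table — the counts below are invented for illustration, and the study's own ORs came from adjusted regression models, not a crude calculation like this:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Crude OR and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without
    c = unexposed with outcome, d = unexposed without
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Invented counts: 20/80 premature births among drinkers vs 10/140 among non-drinkers
or_, lo, hi = odds_ratio_ci(20, 80, 10, 140)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # OR 3.50 (95% CI 1.56 to 7.85)
```

Because the whole interval lies above 1.0, this hypothetical result would count as significant; an interval straddling 1.0 (say, 0.9 to 4.1) would not.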

Alcohol consumption during the second and third trimesters of pregnancy

Drinking more than two units per week during the second trimester of pregnancy was associated with an approximate 100g reduction in birth weight, and during the third trimester of pregnancy was associated with an approximate 50g reduction in birth weight compared with not drinking. 

 

How did the researchers interpret the results?

The researchers concluded that, "Maternal alcohol intake during the first trimester was found to have the strongest association with foetal growth and gestational age. Women who adhered to guidelines in this period were still at increased risk of adverse birth outcomes even after adjustment for known risk factors.

"Maternal alcohol intakes which exceeded the recommendations in the period leading up to pregnancy were also found to be associated with foetal growth, suggesting that the [period around conception] could be particularly sensitive to the effects of alcohol on the foetus.

"Our results highlight the need for endorsing the abstinence-only message, and further illuminate how timing of exposure is important in the association of alcohol with birth outcomes, with the first trimester being the most vulnerable period."

 

Conclusion

This cohort study suggests that women who are planning to conceive and who are pregnant should abstain from alcohol. Drinking more than two units of alcohol per week was associated with adverse birth outcomes in all trimesters.

In addition, the study found that women who reported limiting drinking to less than two units per week during the first trimester of pregnancy were also at increased risk of adverse birth outcomes.

This study has the strength that it associated alcohol intake at three time points, covering different periods of pregnancy. Alcohol intake was assessed during pregnancy and not after birth, reducing the possibility of recall bias. In addition, it adjusted for a number of confounders and used an objective measure of smoking.

However, the study was originally designed to assess the impact of caffeine intake on birth outcomes. Alcohol intake was self-reported, and there is the possibility of under-reporting. Few women had data on alcohol consumption in the third trimester (30% of the original cohort).

Also, despite the fact that the researchers adjusted for a number of confounders, it is possible there are other factors that explain the association seen.

This research provides further evidence for not drinking during pregnancy. However, for now it is unclear whether consideration of this study alongside other evidence will lead to a change in NICE or Department of Health recommendations around alcohol use during pregnancy.

In the UK, the Department of Health currently recommends that pregnant women and women trying to conceive should avoid alcohol altogether, and, if they choose to drink, should never drink more than one to two units once or twice a week, and never get drunk.

The most recent NICE guidance on Antenatal Care (2008) specifically advises that women trying to conceive, and those in the first three months of pregnancy, should avoid drinking alcohol, as this may be associated with an increased risk of miscarriage.

After this, if women choose to drink during pregnancy they should drink no more than one to two units once or twice a week. NICE says that, "Although there is uncertainty regarding a safe level of alcohol consumption in pregnancy, at this low level there is no evidence of harm to the unborn baby."

The organisation advises that getting drunk or binge drinking should be avoided throughout pregnancy, as this may be harmful to the unborn baby. 

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Light drinking 'is preterm risk'. BBC News, March 11 2014

Middle-class mothers' alcohol risk to babies. The Daily Telegraph, March 10 2014

 

Drinking alcohol early in pregnancy, even in small amounts, increases chances of harming your baby, study finds. The Independent, March 10 2014

Links To Science

Nykjaer C, Alwan NA, Greenwood DC. Maternal alcohol intake prior to and during pregnancy and risk of adverse birth outcomes: evidence from a British cohort. Journal of Epidemiology and Community Health. Published March 10 2014


Children's diets contain 'too much salt'

Medical News - Tue, 03/11/2014 - 13:03

“Children eat too much salt,” reports The Daily Telegraph, one of several news outlets to report on new findings that have led researchers to call for further reductions in the salt content of processed food.

The news is based on the results of a study assessing the salt intake of 340 children in south London by keeping a record of everything they ate and drank over 24 hours and measuring the salt content in their urine.

It found that on average, children aged from five to six ate 3.75g salt/day, children aged from eight to nine ate 4.72g/day and 13 to 17 year olds ate 7.55g/day.

Bread, breakfast cereals and other cereal-based products were the biggest source of dietary salt (36%), followed by meat products (19%) and milk and milk products (11%).

Measured against the government's recommended maximums, five and six-year-olds (3.75g a day) exceeded their 3g limit, eight and nine-year-olds (4.72g a day) stayed within their 5g limit, and 13 to 17-year-olds (7.55g a day) exceeded the 6g adult limit.

This study provides a single snapshot of the salt intake of a relatively small sample of children from south London only. It doesn't tell us whether the same results would be obtained if salt levels were measured on a number of different occasions in these children, or in a sample of children from another area. Nor can it tell us whether this level of salt consumption has any detrimental health effects.

Where did the story come from?

The study was carried out by researchers from St George’s University of London and Queen Mary University of London and was funded by the British Heart Foundation. It was published in the peer-reviewed medical journal Hypertension.

What kind of research was this?

This was a cross-sectional study assessing dietary salt intake, measured via levels of sodium in urine, among five to six year olds, eight to nine year olds and 13 to 17 year olds in south London, and identifying the main sources of salt in children's diets.

Cross-sectional studies are the ideal study design to assess current salt intake; however, they only provide a single snapshot. They don’t tell us whether the same results would consistently be obtained if salt levels were measured on a number of different occasions. They also cannot tell us whether there are any detrimental health effects of this salt consumption.

What did the research involve?

Dietary salt intake was assessed in 340 children by analysing the level of sodium in urine collected over 24 hours.

The Scientific Advisory Committee on Nutrition (SACN) recommends a maximum intake of salt of 6g a day for adults.

Although there are existing recommendations for salt intake for children, the researchers devised their own daily upper limits based on body size:

  • 2g for three to four year old children
  • 3g for five to eight year old children
  • 4g for nine to 11 year old children
  • 5g for 12-15 year old children
  • 6g for young adults aged 16 years and over

The children’s dietary salt intake was compared to the study’s own maximum salt intake recommendations.
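That comparison amounts to a lookup against the study's age-banded limits listed above. A minimal sketch (the handling of ages outside the listed bands is an assumption, not something the study specifies):

```python
# The study's own daily salt limits (grams) by age band
SALT_LIMITS = [
    (3, 4, 2.0),     # 3 to 4 year olds
    (5, 8, 3.0),     # 5 to 8 year olds
    (9, 11, 4.0),    # 9 to 11 year olds
    (12, 15, 5.0),   # 12 to 15 year olds
    (16, 120, 6.0),  # 16 years and over (adult limit)
]

def salt_limit(age):
    """Daily salt limit in grams for a given age, per the study's bands."""
    for low, high, limit in SALT_LIMITS:
        if low <= age <= high:
            return limit
    raise ValueError("age below the youngest band in the study")

def eats_too_much_salt(age, intake_g):
    """True if a child's measured daily intake exceeds their age-band limit."""
    return intake_g > salt_limit(age)

print(eats_too_much_salt(6, 3.75))   # True: 3.75g exceeds the 3g limit
print(eats_too_much_salt(14, 7.55))  # True: 7.55g exceeds the 5g limit
```

Applied to each child's 24-hour urine measurement, this kind of rule yields the "ate too much salt" percentages reported in the results below.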

To identify the sources of salt in their diet, children were also given a digital camera and asked to photograph everything they ate over 24 hours. The parent or child was also asked to keep a detailed dietary record of the food and drinks consumed. Details of recipes were requested, as well as brand names of products, cooking methods, and portion sizes using household measures.

What were the basic results?

The average salt intakes were:

  • 3.75g/day for five to six year olds
  • 4.72g/day for eight to nine year olds
  • 7.55g/day for 13 to 17 year olds.

Compared to the study’s own maximum daily intake recommendations:

  • 66% of five to six year old children ate too much salt
  • 73% of eight to nine year olds ate too much salt
  • 73% of 13 to 17 year olds ate too much salt
     

The main sources of dietary salt were cereal-based products, such as bread, breakfast cereals, biscuits, cakes, pasta and rice (36%); meat products, including chicken, turkey, sausages and ham (19%); and milk and milk products, such as cheese (11%).

How did the researchers interpret the results?

The researchers concluded “this study demonstrates that salt intake in children in south London is high, with most of the salt coming from processed foods. Much further effort is required to reduce the salt content of manufactured foods.”

Conclusion

This study has found that salt intake in 340 children in south London is high. The researchers have proposed new maximum recommendations for salt intake based on recommendations for adults adjusted for differences in body size. Based on the study's own recommendations on salt intake, around two-thirds of five to six year olds, and three-quarters of those aged eight to nine and 13 to 17 had too much salt in their diet.

The scope of the study has some limitations. It provides a single snapshot of the salt intake of a relatively small sample of children from south London. It doesn't tell us whether the same results would be obtained if salt levels were measured on a number of different occasions in these children, or in a sample of children from another area. Nor can it tell us whether this level of salt consumption has any detrimental health effects.

 

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Children's diets 'far too salty'. BBC News, March 11 2014 

Children eat too much salt, researchers find. The Daily Telegraph, March 11 2014

Three in four children are at risk from eating too much salt because of high levels in bread and breakfast cereals. Mail Online, March 10 2014

Links To Science

Marrero NM, Feng JH and MacGregor GA. Salt Intake of Children and Adolescents in South London. Hypertension. Published March 10 2014


Are e-cigarettes a 'gateway to nicotine addiction'?

Medical News - Mon, 03/10/2014 - 13:02

"E-cigarettes are encouraging a new generation to become hooked on nicotine," reports the Mail Online.

E-cigarettes are devices that deliver a heated aerosol ("vapour") of nicotine in a way that mimics conventional cigarettes. But they have lower levels of toxins such as tar than a conventional tobacco cigarette. They are marketed as a safer alternative to regular smoking, or as a way to quit.

Today's headlines followed a survey of thousands of US teenagers (who were under 15 on average, meaning that those who smoked cigarettes were underage).

It found that those who had tried e-cigarettes were more likely to have smoked conventional cigarettes and less likely to be abstaining from conventional smoking than those who hadn't tried e-cigarettes.

However, it also found those who tried e-cigarettes were more likely to want to quit conventional smoking.

On average, tobacco smokers die significantly younger and spend more of their shorter lives in poor health. Because e-cigarettes can be marketed to young people, there is a worry that, if they lead to more conventional smoking, they could have a disastrous impact on public health.

This current study does suggest that e-cigarettes may not be the harmless alternative some believe, and may be acting as a "gateway drug" to conventional smoking.

However, it does not prove that is the case. It is quite plausible that existing teenage smokers are also trying e-cigarettes for a variety of reasons.

The debate about the safety and regulation of e-cigarettes is likely to continue until more robust long-term evidence emerges.

 

Where did the story come from?

The study was carried out by researchers from the Center for Tobacco Research and Education at the University of California, San Francisco, and was funded by the US National Cancer Institute.

It was published in the peer-reviewed medical journal, JAMA Pediatrics.

The Mail Online coverage was balanced and discussed the pros and cons of e-cigarettes. It also usefully brought in some wider research from 75,000 Korean adolescents "which also found that adolescents who used e-cigarettes were less likely to have stopped smoking conventional cigarettes".

 

What kind of research was this?

This was a cross-sectional study looking at whether e-cigarette use was linked to conventional cigarette smoking behaviour among US adolescents.

E-cigarettes are devices that deliver a heated aerosol of nicotine in a way that mimics conventional cigarettes while delivering lower levels of toxins, such as tar, than a conventional combusted cigarette. They are often marketed as a safer alternative to regular smoking, or as a way of helping people quit traditional smoking.

The devices are not currently regulated in the US or the UK, meaning there are limited or vague rules concerning appropriate advertising. The researchers say e-cigarettes are being aggressively marketed using the same messages and media channels that cigarette companies used to market conventional cigarettes in the 1950s and 1960s. These include targeting young people to get a new generation of smokers hooked on nicotine for life.

The researchers outline how studies have demonstrated that youth exposure to cigarette advertising causes youth smoking. Meanwhile, electronic cigarettes can be sold in flavours such as strawberry, liquorice or chocolate, which are banned in cigarettes in the US because they appeal to youths.

Given the potential for a new generation to be hooked on nicotine and then tobacco smoking in this unregulated environment, the researchers wanted to investigate whether e-cigarettes were associated with regular smoking behaviour in adolescents.

 

What did the research involve?

The researchers used existing smoking data collected from US middle and high school students in 2011 (17,353 students) and 2012 (22,529) during the large US National Youth Tobacco Survey. They analysed whether use of e-cigarettes was linked with conventional tobacco smoking and smoking abstinence behaviour.

The National Youth Tobacco Survey was described as an anonymous, self-administered, 81-item, pencil-and-paper questionnaire that included:

  • indicators of tobacco use (cigarettes, cigars, smokeless tobacco, kreteks [southeast Asian clove cigarettes], pipes, and "emerging" tobacco products)
  • tobacco-related beliefs
  • attitudes about tobacco products
  • smoking cessation
  • exposure to secondhand smoke
  • ability to purchase tobacco products
  • exposure to pro-tobacco and anti-tobacco influences

Smoking behaviour was categorised as:

  • conventional cigarette experimenters – adolescents who responded "yes" to the question "Have you ever tried cigarette smoking, even one or two puffs?"
  • ever-smokers of conventional cigarettes – those who replied "100 or more cigarettes (five or more packs)" to the question "About how many cigarettes have you smoked in your entire life?"
  • current smokers of conventional cigarettes – those who had smoked at least 100 cigarettes and smoked in the past 30 days
  • ever e-cigarette users – adolescents who responded "electronic cigarettes or e-cigarettes, such as Ruyan or NJOY" to the question "Which of the following tobacco products have you ever tried, even just one time?"
  • current e-cigarette users – those who responded "e-cigarettes" to the question "During the past 30 days, which of the following tobacco products did you use on at least one day?"

Data on intention to quit smoking in the next year, previous quit attempts and abstinence from conventional cigarettes was also collected. The analysis was adjusted for potential confounding factors such as race, gender and age.

 

What were the basic results?

The main analysis included 92.0% of respondents (17,353 of 18,866) in 2011 and 91.4% of respondents (22,529 of 24,658) in 2012 who had complete data on conventional cigarette use, e-cigarette use, race, gender and age. The mean age was 14.7 years, and 5.6% of respondents reported ever or current conventional cigarette smoking (of these, 5% currently smoked).

In 2011, 3.1% of the study sample had tried e-cigarettes (1.7% dual ever use, 1.5% only e-cigarettes) and 1.1% were current e-cigarette users (0.5% dual use, 0.6% only e-cigarettes).

In 2012, 6.5% of the sample had tried e-cigarettes (2.6% dual use, 4.1% only e-cigarettes) and 2.0% were current e-cigarette users (1.0% dual use, 1.1% only e-cigarettes).

Ever e-cigarette users were significantly more likely to be male, white and older. The rates of having ever tried e-cigarettes and of current e-cigarette use approximately doubled between 2011 and 2012.
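As a quick sanity check, the "approximately doubled" claim can be verified from the survey percentages quoted above (a back-of-envelope calculation, not part of the study's own analysis):

```python
# Back-of-envelope check of the "approximately doubled" claim,
# using the survey percentages quoted in the text above.
ever_2011, ever_2012 = 3.1, 6.5        # % who had ever tried e-cigarettes
current_2011, current_2012 = 1.1, 2.0  # % who were current e-cigarette users

print(round(ever_2012 / ever_2011, 2))        # prints 2.1 (roughly a doubling)
print(round(current_2012 / current_2011, 2))  # prints 1.82 (just under double)
```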

The main analysis found use of e-cigarettes was significantly associated with:

  • higher odds of ever or current cigarette smoking
  • higher odds of established smoking
  • higher odds of planning to quit smoking among current smokers
  • among e-cigarette experimenters, lower odds of abstinence from conventional cigarettes

 

How did the researchers interpret the results?

The researchers' interpretation was clear: "Use of e-cigarettes does not discourage, and may encourage, conventional cigarette use among US adolescents."

They added that, "In combination with the observations that e-cigarette users are heavier smokers and less likely to have stopped smoking cigarettes, these results suggest that e-cigarette use is aggravating rather than ameliorating the tobacco epidemic among youths. These results call into question claims that e-cigarettes are effective as smoking cessation aids."

 

Conclusion

This study found US adolescents who use e-cigarettes are more likely to smoke conventional cigarettes. They also have lower odds of abstaining from conventional cigarettes than those who don't try e-cigarettes. On the flip side, e-cigarette users were more likely to report planning to quit conventional smoking.

The research sample was large, so is likely to provide a relatively accurate picture of the smoking behaviour of US adolescents.

These results suggest that e-cigarettes may not discourage conventional cigarette smoking in US adolescents, and may encourage it. However, because of the cross-sectional nature of the information, it cannot prove that trying e-cigarettes causes adolescents to take up conventional smoking. There may be other factors at play.

And indeed, the association could run in the other direction: an interest in smoking may lead teenagers to take up e-cigarettes. In the past, the type of person who wanted to try smoking could only try conventional cigarettes; nowadays, e-cigarettes are an option too.

Retrospectively trying to work out if they would have taken up conventional smoking had they not tried e-cigarettes first is not possible. This question would require a cohort study that tracks behaviour over time. You would then be able to see which smoking method they took up first and if one led to the other. This was not possible using the data the researchers had to hand in the current study.

Conventional smoking has been a public health priority for many decades because, on average, smokers die significantly younger (more than a decade in some groups) and they spend more of their shorter lives ill. Consequently, any product that may increase the rates of conventional smoking among the young – such as e-cigarettes – has serious and widespread health consequences.

Currently, regulation around e-cigarettes is minimal, but there are plans to introduce stricter rules in the UK. In the meantime, this study provides some evidence that e-cigarettes may not be the harmless, safe alternative some believe, and may be acting as a gateway drug to conventional smoking.

The research stops short of proving this, so the debate on whether e-cigarettes should be treated similarly to conventional cigarettes, through advertising and sales restrictions, is likely to continue.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

E-cigarettes are 'encouraging a new generation to become hooked on nicotine'. Daily Mail, March 7 2014

Links To Science

Dutra LM and Glantz SA. Electronic Cigarettes and Conventional Cigarette Use Among US Adolescents. JAMA Pediatrics. Published March 6 2014

Categories: Medical News

Claims new blood test can detect Alzheimer's disease

Medical News - Mon, 03/10/2014 - 12:50

“Blood test that can predict Alzheimer's,” was the headline used by BBC News, the Daily Mail and The Guardian today. Similar coverage was seen across many of the front pages of other newspapers.

These headlines reflected new research showing how a simple blood test may be able to detect early signs of cognitive decline and mild Alzheimer’s disease.

US researchers discovered a panel of 10 biomarkers that, with 90% accuracy, could distinguish people who would progress to have either mild cognitive impairment or mild Alzheimer’s disease within two to three years, from those who wouldn’t.

While promising, the results were only based on a small group of adults over 70 years old who were studied over five years. Of those who developed mild cognitive impairment or mild Alzheimer’s disease, only 28 people had the test. Consequently, it is not clear if the test has any predictive power in the wider population, is applicable to younger adults, or can predict the disease more than two to three years in advance.

The Daily Mail outlined how, while the research was a breakthrough, experts had warned it would bring “ethical concerns”. This is an important point, because there is currently no cure for Alzheimer’s disease, so some people may prefer not to know they might get it. The current unrefined test also means at least one in 10 would be wrongly told they will go on to develop the condition; given the severity of the disease, this may cause significant needless worry.

 

Where did the story come from?

The study was carried out by researchers from a range of US universities and medical institutions and was funded by the US National Institutes of Health.

The study was published in the peer-reviewed medical journal Nature Medicine.

The media reporting was generally balanced, with many highlighting the clear ethical question of whether there is any benefit in telling people they are likely to develop a serious condition that currently has no cure. Most media sources correctly acknowledged the need for more research to confirm the usefulness of the test, and that a useable test may be many years away.

However, while this research is exciting, it is still in an early stage and so front page coverage in four national newspapers is perhaps a little over-the-top.

 

What kind of research was this?

This was a cohort study looking to see if a blood test could detect Alzheimer’s disease before symptoms developed.

Alzheimer’s disease causes a progressive dementia. It affects more than 35 million individuals worldwide and is expected to affect 115 million by 2050.

There are currently no cures for the disease and no treatments to improve symptoms to any significant degree. This is because, at the moment, it is only possible to diagnose Alzheimer’s when symptoms such as memory loss show up. Unfortunately, this is usually long after the brain has deteriorated at a cellular level, meaning the disease is well underway by the time it is diagnosed.

Current tests for detecting early disease involve invasive medical procedures, which are also time consuming and often expensive. Discovering new tests and treatments targeting the early stages of Alzheimer’s, before any outwardly obvious symptoms occur (known as pre-clinical disease), is a hot topic for research. Theoretically, detecting the disease early on will allow more options to be used to stop or slow down its progress.

 

What did the research involve?

The researchers recruited a group of people aged 70 or over, analysed their blood, and recorded their cognitive abilities over the next five years for signs of decline. The researchers examined participants’ blood samples to see if anything in the blood could predict who among the cognitively normal group would develop mental impairment and who would not.

The researchers enrolled 525 people over the five years and gave them a number of questionnaires to assess their cognitive abilities, including memory, verbal reasoning, attention and functional capacities. Based on this, they were divided into two groups:

  • a healthy control group showing “normal” cognitive abilities
  • a group with memory problems at the start of the study, defined as amnestic mild cognitive impairment (aMCI) or mild Alzheimer’s disease (AD)

The control group was selected to match the memory-impaired group on the basis of age, sex and education.

The analysis investigated how people’s cognitive scores changed each year over a five-year follow-up period. Specifically, the researchers wanted to know how many healthy controls went on to develop aMCI or mild Alzheimer’s disease. The main analysis looked for differences between the blood samples of people who went on to develop aMCI or AD and those who did not.

 

What were the basic results?

The researchers analysed 126 blood samples, including from 18 people who had developed aMCI or mild Alzheimer’s disease during the study period. The blood tests pointed towards a way of distinguishing between those that would develop cognitive impairment and those that would not.

After further investigation, the researchers found a set of 10 lipids (fats) in the blood could predict the conversion of people with normal cognitive abilities to either amnestic mild cognitive impairment or Alzheimer’s disease within a two to three year timeframe with more than 90% accuracy.

Once they had the panel of 10 fats that predicted disease development, they tested it on a further group of 41 participants to validate their results. This included 10 people who developed aMCI or mild Alzheimer’s disease during the study period. Similar results were found, confirming the initial findings.

The sensitivity and specificity of the test in the validation experiments was 90%.

 

How did the researchers interpret the results?

Based on the biochemical tests, the researchers believed the panel of 10 blood fats detected may reflect a worsening of cell membrane integrity contributing to the disease. They concluded the panel of 10 lipids could act as a test that may provide an indication of early deterioration of brain function in the pre-clinical stage of Alzheimer’s disease (when the person does not yet have symptoms).

The researchers said they had found and validated a way of assessing blood samples that distinguish cognitively normal participants who will progress to have either aMCI or AD within two to three years from those who won’t. They said that their defined panel of markers featured biochemicals that have essential structural and functional roles in the integrity and functionality of cell membranes.

 

Conclusion

This small cohort study presented a panel of 10 biomarkers that distinguished, with 90% accuracy, the 28 cognitively normal participants who progressed to either aMCI or mild Alzheimer’s disease within two to three years from those who did not.

This represents a proof-of-concept that an easily administered blood test may provide a way of detecting Alzheimer’s disease at a pre-clinical stage.

The main limitation to bear in mind when interpreting this study is the relatively old age of the group studied (over-70s) and the short predictive range investigated. This means the test was only shown to detect who would develop cognitive decline in the next two to three years. For that reason, the study does not provide any information on whether the test can predict the disease any earlier, for example by testing the blood of people in their 50s. This will inevitably be the subject of further study.

The Daily Mail outlines how “experts called the breakthrough a real step forward, but warn it will bring with it ethical concerns”. This is an important point to consider because there is currently no cure for Alzheimer’s disease.

As The Independent put it, “Would anyone welcome being told that they are going to develop – and very likely die from – an incurable disorder that will eventually rob them of their memories, emotions and personality over a period of many years?”

The reaction to the news will certainly be different for different individuals, but could be emotionally and psychologically damaging for some.

Along a similar line, the current test was 90% accurate. This means at least one in 10 people free of the condition would be wrongly told they will go on to develop it, causing needless worry; depending on how common the condition is among those tested, the proportion of positive results that are wrong could be considerably higher.

The researchers cannily point out that the test “requires external validation using similar rigorous clinical classification before further development for clinical use. Such additional validation should be considered in a more diverse demographic group than our initial cohort”.

Ultimately, this research provides proof-of-concept that a blood test may predict early stage Alzheimer’s disease, but it’s too early to say whether this test in particular is definitely effective, or could soon be used in mainstream clinical practice. Time, and more research, will tell. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

New blood test to detect who is on the brink of dementia. The Times, March 10 2014

Blood test that predicts Alzheimer's. The Independent, March 10 2014

Breakthrough blood test that can predict the onset of Alzheimer's. Daily Express, March 10 2014

Blood test that can predict Alzheimer's: Elderly could be given early warning. Daily Mail, March 10 2014

Blood test can predict Alzheimer's, say researchers. BBC News, March 10 2014

Blood test to spot Alzheimer's Disease developed - and researchers say it's at least 90% accurate. Daily Mirror, March 10 2014

Links To Science

Mapstone M, et al. Plasma phospholipids identify antecedent memory impairment in older adults. Nature Medicine. Published online March 9 2014

Categories: Medical News

Antibiotic resistance 'toolkit' launched

Medical News - Fri, 03/07/2014 - 17:00

"Antibiotics: 'national threat' from steep rise in patients who are resistant to drugs," The Daily Telegraph reports. The Mail Online reports that there were "600 reported cases of drugs failing because of resistant bacteria last year".

What isn’t made very clear is that these 600 cases were of one very specific form of antibiotic-resistant bacteria called carbapenemase-producing Enterobacteriaceae (CPE).

Enterobacteriaceae is a large group of bacteria. It includes harmless species that live in the gut, as well as bacteria, such as E. coli and Salmonella, that can cause food poisoning. Enterobacteriaceae can also cause infection if they enter the wrong part of the body, such as the bloodstream.

Some of these bacteria have developed resistance against a group of strong antibiotics called carbapenems, which are normally used to treat the most serious infections.

The resistant CPE bacteria produce an enzyme (carbapenemase) that breaks down the antibiotic and makes it ineffective.

This is potentially serious as carbapenems are essentially a weapon of last resort in our “antibiotic armoury”. If carbapenem resistance became widespread then the consequences for public health could be akin to a return to the pre-antibiotic era.

To address the concern, Public Health England has released a “toolkit” – a series of recommendations to help health staff limit the spread of CPE in hospitals.

 

What is antibiotic resistance?

Antibiotics are drugs used to treat infections caused by bacteria. Sometimes bacteria develop the ability to survive antibiotic treatment; this is called antibiotic resistance.

When a strain of bacteria becomes resistant to an antibiotic, that antibiotic will no longer be effective for treating the infections the strain causes. Antibiotic resistance is one of the most significant threats to patient safety in Europe. Read more about antibiotic resistance.

 

What are CPE bacteria?

Carbapenemase-producing Enterobacteriaceae are bacteria that live in the gut and are normally harmless. However, if they get into other parts of the body, such as the bladder or bloodstream, they can cause an infection.

Carbapenems are strong antibiotics similar to penicillin. They are used by doctors as a ‘last resort’ to treat some infections when other antibiotics have failed. Some Enterobacteriaceae make enzymes called carbapenemases that allow them to break down these antibiotics, and this makes them resistant. Only a few strains of Enterobacteriaceae produce carbapenemases currently, but this number is growing.

 

What does the CPE public health toolkit aim to do?

Public Health England’s advice for healthcare professionals in England focuses on early detection of CPE, as well as advice on how to manage or treat CPE, and control their spread in hospitals and residential care homes.

Public Health England has produced information leaflets for healthcare professionals to give to people who have been identified as being carriers or infected with CPE, or who are in contact with people who are infected.

 

Why do we need help managing these antibiotic-resistant bacteria?

There are currently only a few strains of carbapenemase-producing Enterobacteriaceae (CPE), but this number is growing. In 2006, there were five patients reported to Public Health England as having CPE, but by 2013 this number had risen to more than 600. These numbers include people who were just carriers of CPE, as well as those with infections.

Public Health England wants to act quickly to minimise the spread of CPE, as rapid spread could mean doctors are less able to rely on carbapenem antibiotics. This could pose an increasing threat to public health.

 

What information does Public Health England offer patients?

Public Health England’s CPE toolkit explains what CPE are, and the importance of carbapenem resistance. It explains that:

  • there is an increased chance of picking up CPE if you have been a patient in a hospital abroad or in a UK hospital that has had patients carrying the bacteria, or if you have been in contact with a CPE carrier elsewhere
  • if a doctor or nurse suspects that you are a CPE carrier, they will arrange for screening to see if you are a carrier
  • screening usually involves taking a rectal swab or giving a sample of faeces
  • while a patient is waiting for the screening result, and if they are found to have CPE, they will be kept in a single room with its own toilet facilities. This is to limit the potential for spread of CPE to other people through contaminated faeces
  • the most important way for a patient and visitors to reduce spread of CPE is to regularly wash hands well with soap and water, especially after going to the toilet
  • patients who have a CPE infection need to be treated with antibiotics, but those who just carry CPE in their gut do not need antibiotics

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Antibiotics: 'national threat' from steep rise in patients who are resistant to drugs. The Daily Telegraph, March 6 2014

Fears that cases of antibiotic-resistant bacteria could get 'out of control'. The Independent, March 6 2014

Antibiotic resistance soars: Cases of gut bacteria not destroyed by drugs increase by 12,000% in seven years. Mail Online, March 6 2014

Categories: Medical News

'Peeing' in pool may create harmful byproducts

Medical News - Fri, 03/07/2014 - 12:58

“Peeing in the pool could be bad for your health,” the Mail Online reports. As well as being unpleasant and socially unacceptable, new research suggests that a chemical in wee can react with chlorinated swimming pool water, creating potentially harmful byproducts.

The study in question used lab tests to study the reaction between a chemical found in urine (uric acid) and the chlorine in swimming pools. Researchers found that the combination of these substances can form some potentially harmful chemicals, known as nitrogen-containing disinfection byproducts (N-DBPs).

N-DBPs found at low levels in swimming pools have been linked to eye and throat irritation. At high levels, they can adversely affect the nervous and cardiovascular systems.

These byproducts were already known to be in chlorinated pools and to be formed from the reaction between chlorine and organic chemicals, such as those found in body fluids. This latest study confirms that uric acid is one of the potential sources of these chemicals.

The Mail’s coverage of this study is, primarily, an excuse to run an amusing story about weeing in pools, rather than to report on new research. It shouldn't need a study to tell us that weeing in a pool is not the most hygienic or polite of habits.

Swimming in a pool, with lifeguards to protect you, is a great form of exercise. If you choose to swim in open water, find out how to stay safe when swimming in the great outdoors.

 

Where did the story come from?

The study was carried out by researchers from China Agricultural University in Beijing and Purdue University in the USA. It was funded by the Chinese Universities Scientific Fund, the National Natural Science Foundation of China and the National Swimming Pool Foundation in the USA. 

The study was published in the peer-reviewed journal Environmental Science and Technology.

The Mail Online reports the study fairly, quoting a lot of information directly from the scientific paper itself. We suspect that a Chinese study published in a relatively obscure environmental health journal would not have garnered such coverage if it didn’t cover such a topic as public urination.

 

What kind of research was this?

Chlorine is used to disinfect pools, but it can react with other chemicals in the water – such as human bodily fluids – to produce potentially harmful chemicals. This was a laboratory study looking at the chemical reactions that occur as a result of the mixing of chlorine in pools and a chemical called uric acid, which is found mainly in urine, but also in sweat. 

Previous studies have found that, on average, swimmers release between 0.2 and 1.8 litres of sweat (up to more than 3 pints) and between 25 and 117 millilitres of urine per swim (up to about half a cup of urine).

This study tells us about the chemical reactions that may occur in pools, but didn't look into the health effects of these. The researchers note in their introduction that nitrogen-containing disinfection byproducts (the substance produced by the reaction) “tend to be more genotoxic, cytotoxic and carcinogenic”.

 

What did the research involve?

In a lab, the researchers mixed chlorinated water with uric acid – or mixtures of chemicals designed to replicate human bodily fluids – under different conditions. They then monitored these to see if certain potentially harmful chemicals, called volatile nitrogen-containing disinfection by-products (N-DBPs), were formed, and how much of them there were. The word “volatile” means these chemicals easily form gases and can therefore be breathed in.

The researchers also collected water samples from swimming pools in China and analysed them in the lab. In some experiments, extra chlorine or uric acid was added to the pool water to see what chemicals were produced.

The two N-DBPs the researchers looked at (cyanogen chloride and trichloramine) are known to be formed at low levels as a byproduct of chlorination in pools. These chemicals are irritants and potentially harmful to the lungs, heart and the central nervous system above certain levels of exposure. It was already known that these chemicals can form as a result of the reaction between chlorine and amino acids (building blocks of protein that are also found in bodily fluids). However, whether chlorine had a similar effect when mixed with uric acid was not known.

 

What were the basic results?

The researchers found that the reaction between the chlorinated water and uric acid in the lab produced both cyanogen chloride and trichloramine.

Swimming pool water analysis showed both cyanogen chloride and trichloramine in all samples. Adding extra uric acid to the swimming pool water led to more cyanogen chloride forming, but the effects on trichloramine levels were less consistent.

Experiments with solutions mimicking body fluids suggested that chlorination of uric acid may account for a considerable proportion of the cyanogen chloride formed in pools, but less of the trichloramine.

 

How did the researchers interpret the results?

The researchers concluded that as most uric acid is introduced into pools by urination, reducing this habit could lead to benefits in both pool and air chemistry.

 

Conclusion

This study suggests that certain, potentially harmful, byproducts of pool water chlorination result, in part, from a reaction to the uric acid found in urine.

The media coverage of this study is likely to be more of an excuse to run an amusing story about weeing in pools than a considered report on the study itself. The byproducts in question were already known to exist in pools, and to be formed from the reaction between chlorine and organic chemicals, such as those found in body fluids. The current study confirms that uric acid is one of the potential sources of these chemicals.

The only swimming pool water tested in this study was from China, and the exact types of disinfectant chemical used, levels of chlorine and extent of weeing in the pool may differ in pools from different countries.

At best, the practice of weeing in a pool is socially unacceptable; at worst, it may be a potential health hazard.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Peeing in the pool could be bad for your health: Researchers warn unhygenic habit could trigger chemical reactions that cause respiratory problems. Mail Online, March 7 2014

Links To Science

Lian L, E Yue, Li J, Blatchley ER. Volatile disinfection byproducts resulting from chlorination of uric acid: Implications for swimming pools. Environmental Science and Technology. Published online February 25 2014

Categories: Medical News

HIV 'gene hack' offers new treatment hope

Medical News - Thu, 03/06/2014 - 14:48

“HIV gene therapy using GM cells hailed a success after trial,” reports The Guardian, while the BBC tells us that an “immune upgrade” could offer “HIV shielding”.

These headlines come following a small trial that examined whether it was safe to inject genetically modified white blood cells into people with HIV. This was achieved, but the study did not show whether HIV could actually be treated.

This was the first human trial of the technique and involved 12 people who already had HIV. They were all taking antiretroviral (anti-HIV) medication and had undetectable levels of the HIV virus in their blood. A type of white cell in their blood was genetically modified and then multiplied in the lab.

This genetic modification was done to imitate a rare, naturally occurring mutation which, when two copies are present, makes people highly resistant to HIV infection.

Researchers injected the modified blood cells into each of the 12 people with HIV. They did this to test the safety of the treatment. There was only one serious transfusion reaction; many of the participants experienced milder reactions, including fever, chills and bone pain.

The researchers also looked at the effectiveness of the genetically modified cells by asking six of the participants to stop their antiretroviral medication for 12 weeks, starting four weeks after the infusion. The researchers then looked at what happened to participants while they were not taking their HIV medication, and what happened when they restarted it. The effects varied across the six individuals.

This study provides some hope that genetically “edited” immune cells could be used to treat people with HIV, but it’s too soon to draw any strong conclusions as to whether it will be an effective treatment.

 

Where did the story come from?

The study was carried out by researchers from: the University of Pennsylvania; the Albert Einstein College of Medicine, Bronx; and Sangamo BioSciences, Richmond, California. It was funded by the National Institute of Allergy and Infectious Diseases; the Penn Center for AIDS research; and Sangamo BioSciences.

The study was published in peer-reviewed medical journal the New England Journal of Medicine.

The media reported the trial responsibly; however, there were a couple of inaccuracies.

The reduction in HIV viral level occurred after the levels had shot up when six participants stopped taking their antiretroviral medications. The HIV viral levels peaked six to eight weeks after treatment was stopped, and then only gradually declined in the three participants who did not immediately restart medication or who already had the genetic mutation on one strand of their own DNA. This was not due to replication of the injected genetically modified T helper cells, as their numbers were steadily reducing.

 

What kind of research was this?

This was a phase one trial of a new potential treatment for HIV. It was non-randomised (the participants were specifically selected), and the participants and doctors were aware they were having the treatment. There was a selected group of people who were not given the treatment and acted as controls, but these people were not reported on in the journal article.

Phase one trials are the first ones carried out for a new treatment in humans. They are usually very small and are performed to test treatment safety. If successful, larger phase two trials and phase three trials are carried out to look further at safety and start to examine the effectiveness.

 

What did the research involve?

Twelve people with HIV infection were given genetically modified CD4 T cells. These are a type of white blood cell, often called “T helper cells” because they send messages to other immune cells. The aim of the study was to assess the safety and side effects of the potential treatment, with a secondary aim of assessing the effect on the immune system and HIV resistance.

The genetic modification was performed to mimic a naturally occurring DNA mutation that some people have, thought to affect around 1% of the population. This mutation, when present on both copies of a section of DNA, has been found to make carriers resistant to the most common strains of HIV. In people with HIV who have the mutation on one strand of DNA, progression of the disease to AIDS is slower. There has also been one case of a person who had a stem cell transplant from a donor with the mutation on both copies; the HIV virus has been undetectable in this person for more than four years without any antiretroviral therapy (the standard HIV treatment).

Following this discovery, research in mice using genetically modified T helper cells showed that the cells functioned normally and were able to divide and multiply in response to usual stimuli. The cells were also protected from HIV infection and reduced HIV RNA levels in the blood.

The main aim of this study was to assess the safety of the potential treatment in humans. The secondary aim was to assess the immune system and whether there was any HIV resistance.

Twelve people with HIV entered the study between May 2009 and July 2012. The inclusion criteria were that they were taking antiretroviral medication and were “aviraemic” (meaning that the level of HIV RNA was undetectable in their blood). The participants were split into two groups of six.

The participants gave a blood sample. From this, the T helper cells were genetically modified and multiplied. The cells were then injected back into their veins as an infusion. The infusion contained about 10 billion T helper cells, 11-28% of which had been genetically modified.

The participants were closely monitored for the first four weeks. The first group of six then stopped their antiretroviral treatment for 12 weeks. All participants were monitored for 36 weeks, and they are now in a 10-year follow-up study.

 

What were the basic results?

In terms of the primary aim of safety:

  • One participant suffered from a serious reaction. They had fever, chills, joint pain and back pain within 24 hours of the infusion, which was diagnosed as a transfusion reaction.
  • There were 82 mild and 48 moderate adverse events reported, but the researchers report that 71 of them were not related to the study drug.
  • The most common adverse event was a milder version of the transfusion reaction.
  • Garlic-like body odour was common and is due to the metabolism of a drug used in the process.

For the secondary aim of immunity to HIV:

  • In all 12 participants, the number of T helper cells was significantly higher one week after the infusion (rising from 448 to 1,517 cells per cubic millimetre), and 13.9% of them were genetically modified. On average, it took 48 weeks for the number of modified cells to fall by half, which suggests that the immune system did not reject them.
  • The genetically modified T helper cells went from the blood stream into the soft tissue, where the majority of this type of cell usually resides.
  • Virus levels became detectable in the blood of all six participants in the group who stopped treatment. Two of them restarted antiretroviral treatment after eight weeks. In three participants, viral levels peaked at eight weeks and then gradually reduced before antiretroviral treatment was restarted at 12 weeks; it then took 4-20 weeks for viral levels to become undetectable again.
  • The viral level in one of the patients who stopped antiretroviral treatment rose, but became undetectable again before treatment was restarted. This participant was found to already carry the genetic mutation on one copy of his DNA.
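The 48-week half-life reported above can be illustrated with a simple exponential-decay sketch (the decay model and the function name are illustrative assumptions, not taken from the paper):

```python
def fraction_remaining(weeks, half_life_weeks=48):
    """Fraction of modified T helper cells remaining after `weeks`,
    assuming simple exponential decay with the reported 48-week half-life."""
    return 0.5 ** (weeks / half_life_weeks)

# After one half-life (48 weeks), half the modified cells remain
print(fraction_remaining(48))   # 0.5
# After two half-lives (96 weeks), a quarter remain
print(fraction_remaining(96))   # 0.25
```

This is only a back-of-envelope model; the study reports the half-life as an average, not a precise decay curve.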

 

How did the researchers interpret the results?

The researchers concluded that genetically modified CD4 T-cell infusions are safe within the study's limits, but that the size of the study was too small to generalise this finding. The immune system did not reject the genetically modified T helper cells.

 

Conclusion

This phase one trial showed that the infusion of genetically modified T helper cells was achieved reasonably safely in 12 people with chronic HIV.

It isn’t clear whether this could be an effective treatment for HIV, as the virus became detectable in the blood of all six participants who stopped taking their antiretroviral treatment. Although virus levels then began to reduce after eight weeks, they only returned to undetectable levels before treatment was restarted in the person who already carried the genetic mutation on one copy of his DNA. In the other five people, it took several weeks after treatment was restarted for this to happen.

The primary aim of the study was to determine the safety of the treatment in humans, rather than to determine immunity to HIV. It may be that a different dose of cells is more effective. Further studies in larger numbers of people will now be needed to further examine the treatment’s safety and to look at its possible effectiveness and what factors and characteristics in a person might influence this.

Analysis by Bazian. Edited by NHS Choices
Follow Behind the Headlines on Twitter
Join the Healthy Evidence forum.

Links To The Headlines

Immune upgrade gives 'HIV shielding'. BBC News, March 6 2014

HIV gene therapy using GM cells hailed a success after trial. The Guardian, March 6 2014

Could AIDS be cured by modifying patients' genes? New treatment could spell the end of daily drug regime. MailOnline, March 6 2014

Links To Science

Tebas P, et al. Gene Editing of CCR5 in Autologous CD4 T Cells of Persons Infected with HIV. New England Journal of Medicine. Published online March 6 2014

Categories: Medical News

WHO says halving sugar target has extra benefit

Medical News - Thu, 03/06/2014 - 12:58

“Halve sugar intake, say health experts,” The Daily Telegraph reports, while The Guardian tells us that “a can of coke a day is too much sugar”.

The widespread media reports follow new draft international guidelines looking at the healthy maximum recommended levels of sugars in the diet.

Currently, people are advised to have less than 10% of their total energy intake from sugars. However, the new draft guidelines from the World Health Organization (WHO) state that a reduction to below 5% of total energy intake would have “additional benefits”.

An intake of 5% is equivalent to around 25 grams (six teaspoons) of sugar a day for a healthy adult. The WHO’s suggested limits apply to all sugars, including “hidden” sugars added to foods by manufacturers, as well as sugars that are naturally present in fruit juices and honey.

 

Why has the WHO published new draft guidelines on sugar?

The WHO has produced its draft guideline on recommended daily sugar intake to get feedback from the public and experts, before it finalises its advice.

The new consultation is needed because the existing WHO guideline was published 12 years ago. That advice, based on the best evidence available in 2002, says that sugars should make up less than 10% of total energy intake.

 

Why has a new draft guideline on sugar been produced?

The WHO says there is growing concern that consumption of sugars – particularly in the form of sugar-sweetened drinks – may result in:

  • reduced intake of foods containing more nutritionally adequate calories 
  • an increase in total calorie intake, leading to an unhealthy diet and weight gain

These may in turn increase the risk of conditions such as cardiovascular disease and diabetes.

The WHO points out that much of the sugars we eat and drink are “hidden” in processed foods that are not usually seen as sweets. For example, a tablespoon of ketchup contains around 4 grams (around a teaspoon) of sugars. A single can of sugar-sweetened fizzy drink contains up to 40 grams (around 10 teaspoons) of sugar.

The WHO also highlights the role sugars play in dental disease (tooth decay). This, the WHO argues, is the most common non-infectious disease in the world today.

The new guidelines on sugar intake aim to help both children and adults to reduce their risk of unhealthy weight gain and dental decay, the WHO says. The WHO’s recommendations, once agreed, can act as a benchmark for health policy officials to assess people’s current sugar intake and to help them develop measures to reduce sugar intake.

 

What does the new draft WHO sugar guideline say?

There is no change in the WHO’s new draft guideline from the existing target of having less than 10% of your total energy intake per day from sugars. However, the new guideline does suggest that a reduction to below 5% of total energy intake per day would have “additional benefits”. This target of 5% of your total energy intake is equivalent to around 25 grams (around six teaspoons) of sugar per day for an adult of normal BMI.
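The 5%-to-25-gram conversion works out as follows (a back-of-envelope sketch assuming a 2,000 kcal daily intake and the standard 4 kcal per gram of sugar; the function name is illustrative):

```python
def sugar_grams(total_kcal, percent_energy, kcal_per_gram=4):
    """Grams of sugar corresponding to a given share of daily energy,
    using the standard 4 kcal per gram of carbohydrate."""
    return total_kcal * (percent_energy / 100) / kcal_per_gram

# A 2,000 kcal diet at the 5% draft target:
print(sugar_grams(2000, 5))    # 25.0 grams (around six teaspoons)
# The same diet at the existing 10% target:
print(sugar_grams(2000, 10))   # 50.0 grams
```

The exact gram figure will vary with a person’s total energy needs, which is why the WHO expresses the target as a percentage rather than a fixed amount.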

The suggested limits on intake of sugars in the draft guideline apply to all sugars that are added to food by manufacturers, cooks or consumers. It also applies to sugars naturally present in honey, syrups, fruit juices and fruit concentrates. These sugars include glucose, fructose and sucrose (table sugar).

Dr Francesco Branca, WHO's nutrition director, reportedly told a news conference that the 10% target was a "strong recommendation" while the 5% target was "conditional", based on current evidence. "We should aim for 5% if we can," said Dr Branca.

 

What evidence is the new draft guideline based on?

The WHO commissioned two systematic reviews that it says informed the development of the draft guideline. One is a study from the University of Otago in New Zealand, which looked at sugar intake and body weight. This review, which included 68 studies, found that in adults:

  • advice to reduce free sugars was associated with an average 0.80kg reduction in weight
  • advice to increase intake was associated with a corresponding 0.75kg increase

However, the evidence was less consistent in children than in adults. The researchers say the effect seems to be due to a change in total energy intake, as replacing sugars with other carbohydrates did not result in any change in body weight.

A second review, from the University of Newcastle, looked at the effect of restricting sugar intake on dental caries. It included 55 studies and found “moderate quality” evidence showing that the incidence of caries is lower when free-sugars intake is less than 10%. With a 5% cut-off, a “significant relationship” was observed, although this evidence was judged to be of very low quality.

 

Was media coverage of the WHO draft sugar guidelines accurate?

The media’s reporting of the new guidelines was factually accurate. Some papers chose to include comments critical of the WHO’s “lack of clarity”. According to BBC News, UK campaigners say it is a "tragedy" that the WHO has taken 10 years to think about changing its advice and are in favour of 5% becoming the firm recommendation.

The Independent reports one expert saying he suspected “dirty work” on the part of food and drinks companies might lie behind the WHO’s “less than resounding” message. However, The Independent also reports other experts who have pointed out that a limit of less than 5% would be “ambitious and challenging” and “untried and untested”.

The news follows a post-Christmas surge in stories about the harms from sugar, including the launch in January this year of the new campaign group Action on Sugar. The news also comes after Dame Sally Davies, England’s Chief Medical Officer, reportedly suggested a tax on sugar to help combat growing levels of obesity.

 

Should I cut down on sugar?

Most adults and children in the UK eat too much sugar. According to Public Health England, surveys show that sugars make up 11.6% of adults’ energy intake on average, and 15.2% of children’s. This is well above the current recommendation of less than 10% of energy intake from sugar.

Sugars are added to a wide range of foods, such as sweets, cakes, biscuits, chocolate, and some fizzy drinks and juice drinks. These are the sugary foods that you should cut down on. Fruit and vegetables also contain naturally occurring sugars, but these foods also have a range of other nutrients that make them a much better option.

You can find out more information about the relative proportions of food we should be eating in the Eatwell Plate.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

People should cut their sugar intake to just six teaspoons a day, says World Health Organisation. Daily Mail, March 6 2014

World Health Organisation advises halving sugar intake. The Daily Telegraph, March 6 2014

WHO: Daily sugar intake 'should be halved'. BBC News, March 5 2014

Is sugar the new evil? Arguments for and against the grain. The Independent, March 5 2014

A can of Coke a day is too much sugar, warns WHO. The Guardian, March 6 2014

Links To Science

Systematic reviews cited by the WHO

Te Morenga L, Mallard S and Mann J. Dietary sugars and body weight: systematic review and meta-analyses of randomised controlled trials and cohort studies. BMJ. Published online January 15 2013

Moynihan PJ and Kelly SAM. Effect on Caries of Restricting Sugars Intake. Systematic Review to Inform WHO Guidelines. Journal of Dental Research. 2014;93:8-18

 

Categories: Medical News

Parental smoking 'ages' children’s arteries

Medical News - Wed, 03/05/2014 - 14:48

“Passive smoking causes lasting damage to children's arteries, prematurely ageing their blood vessels by more than three years,” BBC News reports.

The news is based on emerging evidence that exposure to secondhand smoke damages children’s arteries. This news is concerning, as the thickening of blood vessel walls (atherosclerosis) is known to increase the risk of heart attacks and strokes in later life.

Parents of approximately 3,700 children from Finland and Australia were asked whether neither, one or both of them smoked.

Up to 25 years later, the grown-up children underwent ultrasound scans on the large carotid arteries in their necks to estimate their carotid intima media thickness (IMT). A high IMT can indicate thickening of the artery walls, which can restrict the flow of blood through the carotid arteries to the brain; these arteries can potentially become blocked, triggering a stroke.

The researchers found that a high carotid IMT in adulthood was significantly more likely in those who were exposed to both parents smoking. However, having only one parent who smoked was not linked to a greater carotid IMT in adulthood.

The study does have limitations, including its reliance on self-reporting; however, the results add to already existing evidence that exposing children to passive smoking can increase the risk of both short-term and long-term conditions.

 

Where did the story come from?

The study was carried out by researchers from the University of Tasmania in Australia and the Universities of Turku and Tampere in Finland, along with other Australian and Finnish institutions. It was funded by various organisations: the Commonwealth Departments of Sport, Recreation and Tourism, and Health; the National Heart Foundation; the Commonwealth Schools Commission; the National Health and Medical Research Council; the Heart Foundation; the Tasmanian Community Fund; and Veolia Environmental Services.

The study was published in the peer-reviewed European Heart Journal and is available on an open access basis, so it is free to read online or download.

Both BBC News and the Mail Online website provide an accurate summary of the study.

 

What kind of research was this?

This study was a secondary analysis of two independent long-running prospective studies carried out in Australia and Finland, which followed participants for up to 25 years.

The aim was to assess the role of exposure to parental smoking in childhood or adolescence on carotid IMT in adulthood.

 

What did the research involve?

The study involved 2,401 children from Finland (aged 3-18 years) and 1,375 children from Australia (aged 9-15 years), with a follow-up of up to 25 years. The Finnish study also analysed the association between cumulative exposure to parental smoking for 3 years and carotid IMT in adulthood.

In both studies, researchers determined the self-reported smoking status of the participant’s parents at the beginning of the studies. Parental smoking was reported as:

  • neither parent
  • one parent
  • two parents

Self-reported current smoking status of the participants (children) was also recorded at the beginning of the study. Questionnaires also gathered other information on the participants, including height, weight (to calculate their body mass index) and physical activity level. 

At follow up (up to 25 years later), questionnaires gathered information on the participant’s total years of schooling, current smoking status, physical activity levels, alcohol consumption and cardiovascular risk factors.

At follow up, ultrasound measurements were carried out on the participants (who were now adults) to measure the thickness of their carotid artery walls (IMT). The carotid IMT measurements were made by members of the research team who did not know whether the participants had been exposed to passive smoke during childhood, reducing the risk of bias.

 

What were the basic results?

The main findings of this study were that:

  • carotid IMT in adulthood was significantly greater in those exposed to both parents smoking in childhood, compared with those whose parents did not smoke (adjusted marginal means 0.647 mm (standard deviation 0.022) vs 0.632 mm (standard deviation 0.021)). This association remained significant after adjustment for all potential confounders of both the participants and parents, including age, sex, parental education and the child’s smoking status at the start of the study and at follow up.
  • cardiovascular risk factors of the participants in adulthood were also considered in the adjustments 
  • having just one parent (mother or father) who smoked was not linked to carotid IMT in adulthood (pooled analysis of both studies)
  • in one of the studies, greater exposure to parental smoking over a three-year period was linked to a significantly greater carotid IMT in adulthood

The researchers determined that the effect of exposure to both parents smoking on participants’ vascular age was equivalent to being 3.3 years older (95% confidence interval (CI) 1.31 to 4.48) than the participants’ actual age in adulthood (pooled analysis of both studies).

 

How did the researchers interpret the results?

The researchers concluded there is an extensive effect of exposure to parental smoking on children’s vascular health up to 25 years later. They state that there must be continued efforts to reduce smoking among adults, in order to protect young people and to reduce the prevalence of cardiovascular diseases, such as heart attacks, across the population.

Dr Seana Gall, one of the researchers from the Menzies Research Institute Tasmania, is reported in the media as saying: "Our study shows that exposure to passive smoke in childhood causes a direct and irreversible damage to the structure of the arteries."

She went on to say that: "Parents, or even those thinking about becoming parents, should quit smoking. This will not only restore their own health, but also protect the health of their children into the future."

 

Conclusion

Overall, this secondary analysis provides preliminary evidence that exposure to parental smoking in childhood and adolescence affects the artery walls in adulthood.

The researchers attempted to adjust for potential factors that could influence risk (confounders), such as:

  • age
  • sex
  • height
  • weight
  • smoking status
  • physical activity levels
  • alcohol consumption
  • schooling level of the parent(s)

In their analysis, they also took into consideration cardiovascular risk factors of the participants in adulthood.

There are some limitations to the study, which are worth noting. Parental smoking status was self-reported and not objectively measured by the researchers, so there is a possibility that parents may not have accurately reported their actual smoking status. This is also the case for the reporting of the participants’ smoking status. The study was a secondary analysis of two larger cohort studies, which were probably not originally designed with the same outcome of interest as the current study.

Despite these limitations, however, there is a substantial body of evidence showing that passive smoking is harmful, especially to children. Tobacco smoke contains around 70 cancer-causing chemicals and hundreds of other toxins.

If you do choose to smoke, you should do it outside and well away from your children. There is plenty of free advice and treatment that could help you kick the habit.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

Passive smoking 'damages children's arteries'. BBC News, March 5 2014

Passive smoking 'ages children's arteries': Exposure means youngsters are at greater risk of heart attacks and strokes in later life. Mail Online, March 5 2014

Links To Science

Gall S, Huynh QL, Magnussen CG, et al. Exposure to parental smoking in childhood or adolescence is associated with increased carotid intima-media thickness in young adults: evidence from the Cardiovascular Risk in Young Finns study and the Childhood Determinants of Adult Health Study. European Heart Journal. Published online March 4 2014

Categories: Medical News

High protein diet not as bad for you as smoking

Medical News - Wed, 03/05/2014 - 12:58

“People who eat diets rich in animal protein carry similar cancer risk to those who smoke 20 cigarettes each day,” reports The Daily Telegraph.

We have decades of very good evidence that smoking kills and – fortunately for meat lovers – this latest unhelpful comparison with high protein diets largely appears to be a triumph of PR spin.

The warning was raised in a press release about a large study which found that for people aged 50-65, eating a lot of protein was associated with an increased risk of dying.

However, the study, which assessed the diets of Americans over a single 24-hour period (rather than long-term), found that in those aged over 65, a high protein diet was actually associated with a reduced risk of death from any cause or from cancer. These differing findings meant that, overall, a high protein diet was not associated with an increased risk of death from any cause or from cancer.

There are several reasons to be cautious when interpreting the results of this study, including that the researchers did not take into account important factors such as physical activity in their study.

The claim in much of the media, that a high protein diet in middle-aged people is “as dangerous as smoking” is unsupported.

We need to eat protein, we do not need to smoke.

 

Where did the story come from?

The study was carried out by researchers from the University of Southern California (USC) and other research centres in the US and Italy. It was funded by US National Institutes of Health, National Institute on Aging, and the USC Norris Cancer Center. The study was published in the peer-reviewed journal Cell Metabolism and has been made available on an open access basis to read for free.

In general, reporting of the results of the study was reasonable. However, the prominence given to the story (which featured as a front page lead in The Daily Telegraph and The Guardian) in the UK media seems disproportionate.

The headlines suggesting that a high protein diet is “as harmful as smoking” did not reflect a specific finding of the study and could be seen as unnecessary fear-mongering. This is particularly of note given that the effects of a high protein diet were found to differ dramatically by age.

To be fair to the UK’s journalists, this comparison was raised in a press release issued by the University of Southern California. Unfortunately, this PR hype appears to have been taken at face value.

 

What kind of research was this?

This study looked at the relationship between the amount of protein consumed and subsequent risk of death among middle-aged and older adults. It used data collected in a previous cross-sectional study and information from a national register of deaths in the US.

While the data used allowed researchers to identify what happened to people over time, this wasn’t the original purpose of the data collection. This means that some information on what happened to people may be missing, as researchers had to rely on national records rather than keeping close track of the individuals as part of the study.

 

What did the research involve?

The researchers had data on protein consumption for 6,381 US adults aged 50 and over (average age 65). They then identified which of these people died over the following 18 years (up to 2006) using national records. The researchers carried out analyses to see whether people who ate more protein in their diets were more likely to die in this period than those who ate less protein.

The information on protein consumption was collected as part of the third National Health and Nutrition Examination Survey (NHANES). These surveys are designed to assess the health and nutritional status of people in the US, and participants are selected to be representative of the general US population. As part of the survey, they reported their food and drink intake over the past 24 hours using a computerised system, which then calculated how much of different nutrients they consumed.

Each person’s level of protein consumption was calculated as the proportion of calories consumed from protein. Protein intake was classed as:

  • High – 20% or more of calories from protein (1,146 people) 
  • Moderate – 10 to 19% of calories from protein (4,798 people) 
  • Low – less than 10% of calories from protein (437 people)
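The three bands above can be sketched as a simple classification function (the thresholds are as reported in the article; the function name is illustrative):

```python
def protein_band(percent_calories_from_protein):
    """Classify protein intake by its share of daily calories,
    mirroring the study's three bands: high (>=20%),
    moderate (10-19%) and low (<10%)."""
    if percent_calories_from_protein >= 20:
        return "high"
    elif percent_calories_from_protein >= 10:
        return "moderate"
    return "low"

# The study's average intake was 16% of calories from protein:
print(protein_band(16))   # moderate
```

Note that the banding depends on the *proportion* of calories from protein, not the absolute amount eaten, so two people eating the same grams of protein could fall into different bands.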

The researchers used the US National Death Index to identify any of the survey participants who died up to 2006, and the recorded cause of death. The researchers looked at whether proportion of calories consumed from protein was related to risk of death overall, or from specific causes. As well as overall deaths, they were also interested in deaths specifically from cardiovascular disease, cancer, or diabetes. The researchers also looked at whether the relationship differed in people aged 50-65 years, and older individuals, and whether it was influenced by fat, carbohydrate or animal protein intake.

The analyses took into account factors (confounders) that could influence the results, including:

  • age
  • ethnicity
  • education
  • gender
  • "disease status"
  • smoking history
  • participants’ dietary changes in the last year
  • participants’ attempted weight loss in the last year
  • total calorie consumption

The researchers also carried out studies to look at the effects of proteins and their building blocks (amino acids) in yeast and mice.

 

What were the basic results?

On average, the participants consumed 1,823 calories over the day:

  • 51% from carbohydrates
  • 33% from fat
  • 16% from protein (11% from animal protein).

Over 18 years, 40% of participants died; 19% died from cardiovascular diseases, 10% died from cancer, and about 1% died from diabetes.

Overall, there was no association between protein intake and risk of death from any cause, or death from cardiovascular disease or cancer. However, moderate or high protein consumption was associated with an increased risk of death related to complications associated with diabetes. The authors noted that the number of people dying from diabetes-related causes was low, so larger studies were needed to confirm this finding.

The researchers found that results for death from any cause and from cancer seemed to vary with age. Among those aged 50-65, those who ate a high protein diet were 74% more likely to die during follow up than those who ate a low protein diet (hazard ratio (HR) 1.74, 95% confidence interval (CI) 1.02 to 2.97). People in this age group who ate a high protein diet were more than four times as likely to die from cancer during follow up than those who ate a low protein diet (HR 4.33, 95% CI 1.96 to 9.56).

The results were similar once the researchers took into account the proportion of calories consumed from fat and carbohydrates. Further analyses suggested that animal protein was responsible for a considerable part of this relationship, particularly for death from any cause.

However, the opposite effect of high protein intake was seen among those aged over 65. In this age group high protein intake was associated with:

  • a 28% reduction in the risk of death during follow up (HR 0.72, 95% CI 0.55 to 0.94)
  • a 60% reduction in the risk of death from cancer during follow up (HR 0.40, 95% CI 0.23 to 0.71)
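The hazard ratios quoted in this section translate into the percentage figures as follows (a simple conversion sketch; the function name is illustrative, and this ignores the wide confidence intervals around each estimate):

```python
def percent_change_from_hr(hazard_ratio):
    """Express a hazard ratio as a percentage increase (positive)
    or reduction (negative) in risk relative to the comparison group."""
    return (hazard_ratio - 1) * 100

# HR 1.74 for 50- to 65-year-olds on a high protein diet:
print(round(percent_change_from_hr(1.74)))   # 74, i.e. "74% more likely"
# HR 0.72 for over-65s on a high protein diet:
print(round(percent_change_from_hr(0.72)))   # -28, i.e. "28% reduction"
```

A hazard ratio of exactly 1 would mean no difference in risk between the two diet groups.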

 

How did the researchers interpret the results?

The researchers concluded that low protein intake during middle age followed by moderate to high protein consumption in older adults may optimise health and longevity.

 

Conclusion

This study has found a link between high protein intake and increased risk of death among people aged 50-65, but not older adults. There are some important points to bear in mind when thinking about these results:

  • The human data used was not specifically collected for the purpose of the current study. This meant that the researchers had to rely on the completeness of, for example, national data on deaths and causes of death, so the deaths of some participants may have been missed.
  • Information on food intake was only collected for one 24-hour period, and this may not be representative of what people ate over time. Most people (93%) reported that it was typical of their diet at the time, but this may have changed over the 18 years of follow up.
  • The researchers took into account some factors that could affect results, but not others, such as physical activity.
  • Although the study was reasonably large, numbers in some comparisons were relatively low; for example, there were not many diabetes-related deaths, and only 437 people overall ate a low protein diet. The broad confidence intervals for some of the results reflect this.
  • Many news sources have suggested that a high protein diet is “as bad for you” as smoking. This is not a comparison that is made in the research paper, therefore its basis is unclear. While we do need some protein in our diets, we don’t need to smoke, so this is not a helpful comparison.
  • While the authors suggested that people eat a low protein diet in middle age and switch to a high protein diet once they get older, it is not possible to say from the study whether this is what the older participants actually did, as their diets were only assessed once.
  • Ideally the findings need to be confirmed in other studies set up to specifically address the effects of higher protein diets, particularly the strikingly different results for different age groups.

While certain diet plans, such as the Atkins diet or the “caveman diet” have promoted the idea of eating a high-protein diet for weight loss, relying on a single type of energy source in your diet is probably not a good idea. Consumption of some high-protein foods such as red meat and processed meat is already known to be associated with increased risk of bowel cancer.

 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

High-protein diet 'as bad for health as smoking'. The Daily Telegraph, March 4 2014

Diets high in meat, eggs and dairy could be as harmful to health as smoking. The Guardian, March 4 2014

Eating too much meat and eggs is ‘just as bad as smoking’, claim scientists. The Independent, March 4 2014

Eating lots of meat and cheese in middle age is 'as deadly as SMOKING'. Daily Mail, March 4 2014

Eating lots of meat and cheese could be as bad for you as smoking, report reveals. Daily Mirror, March 4 2014

Meat And Cheese 'As Bad For You As Smoking'. Sky News, March 4 2014

Links To Science

Levine ME, Suarez JA, Brandhorst S, et al. Low Protein Intake Is Associated with a Major Reduction in IGF-1, Cancer, and Overall Mortality in the 65 and Younger but Not Older Population. Cell Metabolism. Published online March 4 2014

Categories: Medical News

Do short people also have smaller IQs?

Medical News - Tue, 03/04/2014 - 14:42

“They’re already called ‘vertically challenged’ – but are short people intellectually challenged too?” is the headline in the Mail Online. The website reports on a gene study which found taller people were more likely to have a genetic makeup associated with increased intelligence.

The study analysed 6,815 unrelated people and found some relationship between height and intelligence, although it was not very strong. The researchers also found evidence that this relationship could be due to shared genetic factors. They hope this and future studies will help them better understand the links between height, IQ and health.

Perhaps the most important thing to highlight is that the link between height and IQ is not clear cut – so it would be unfair to equate being shorter with being “intellectually challenged”.

 

Where did the story come from?

The study was carried out by researchers from the University of Edinburgh and other universities as part of Generation Scotland – a collaboration between the university medical schools and the National Health Service in Aberdeen, Dundee, Edinburgh and Glasgow. It was funded by the Chief Scientist Office of the Scottish Government Health Directorates, the Scottish Funding Council, the UK Medical Research Council, Alzheimer Scotland and the BBSRC.

The study was published in the peer-reviewed journal Behavior Genetics and has been published on an open access basis so it is free to read online or download.

Unsurprisingly, the UK media’s reporting focuses on the alleged link between height and IQ. Determining whether there was a relationship between height and IQ was not the main aim of the study and the association between these factors was limited.

 

What kind of research was this?

This was a cross-sectional study which looked at whether any relationship between height and general intelligence – in a large sample of unrelated adults – could be explained by shared genetics.

Traits may be correlated because they are controlled by some of the same genes, or because of other, non-genetic factors – for example, if they are developmentally or structurally related.

 

What did the research involve?

The researchers took blood samples from 6,815 unrelated people and extracted DNA from the samples.

Using this DNA they looked at specific single-nucleotide polymorphisms (SNPs) – places where a single letter of the DNA code differs across the population. A change to a single “letter” of DNA can have a significant impact on how an organism develops.

Participants had their general intelligence assessed using four cognitive tests (processing speed, verbal declarative memory, executive function and vocabulary), and had their height measured.

The researchers then looked at whether there was a correlation between height and intelligence. They then used computer analysis to see whether there was evidence that this correlation was due to shared genetics (a genetic correlation).

 

What were the basic results?

After the researchers adjusted for age and sex, they found that height showed some correlation with general intelligence. This meant that there was some tendency for height to increase as intelligence increased – a “phenotypic correlation” (a correlation of observable characteristics). However, this relationship was not particularly strong.

The researchers then looked at the genetics. They found that 58% of the variability in height in people in their sample and 28% of the variation in intelligence were related to the SNPs that they had assessed.

The researchers found a genetic correlation between height and general intelligence. They estimated that 71% of the phenotypic correlation (correlation between observed height and intelligence) was explained by the same genetic variants.
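To make these percentages concrete, here is a minimal Python sketch of the standard bivariate decomposition, in which the genetic contribution to a phenotypic correlation is the genetic correlation scaled by the square root of the two heritabilities. The heritability figures (58% and 28%) are taken from the study; the phenotypic correlation of 0.16 is an assumed, illustrative value, not a figure from this article.

```python
import math

# SNP-based variance explained, as reported in the study
h2_height = 0.58
h2_intelligence = 0.28

# Illustrative phenotypic correlation between height and intelligence
# (assumed value for this sketch)
r_phenotypic = 0.16

# 71% of the phenotypic correlation was attributed to shared genetic variants
genetic_part = 0.71 * r_phenotypic

# Under the bivariate decomposition, the genetic contribution to the
# phenotypic correlation is r_genetic * sqrt(h2_x * h2_y), so:
r_genetic = genetic_part / math.sqrt(h2_height * h2_intelligence)
# r_genetic ≈ 0.28 – a modest genetic correlation, consistent with the
# researchers' description of their finding
```

The point of the sketch is simply that "71% of a weak phenotypic correlation" still corresponds to a modest, not large, genetic correlation.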

 

How did the researchers interpret the results?

The researchers concluded that they had found a “modest” genetic correlation between height and intelligence, with, they said, “the majority of the phenotypic correlation being explained by shared genetic influences.”

 

Conclusion

This study found some relationship between height and intelligence, and found evidence to suggest that this may be due to shared genetic influences on these traits.

Importantly, the association between height and intelligence was relatively small, meaning the link between the two is not clear cut. So it would be unfair to suggest, as some headlines have, that being short equates to being “intellectually challenged”.

It is also important to note that it’s not clear to what extent the results are due to the way in which traits affect how humans choose a mate, as opposed to the same genes directly affecting height and IQ.

Greater height and IQ have both been linked to better health outcomes, and researchers hope their findings might help them to understand why this is. At the moment, however, the findings do not have any direct implications.

There is not much you can do about how tall you are, aside from buying some killer heels or Cuban boots, but there are plenty of ways you can keep your brain active.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

You’re coming up short in the IQ stakes, titch. The Sunday Times, March 2 2014

They're already called 'vertically challenged' - but are short people intellectually challenged too? Mail Online, March 3 2014

 

Links To Science

Marioni RE, Batty GD, Hayward C, et al. Common Genetic Variants Explain the Majority of the Correlation Between Height and Intelligence: The Generation Scotland Study. Behavior Genetics. Published online 20 February 2014

Categories: Medical News

Angry outbursts may up heart attack risk

Medical News - Tue, 03/04/2014 - 12:58

“Having a hot temper may increase your risk of having a heart attack or stroke, according to researchers,” BBC News reports.

Research has found that people prone to attacks of rage have a higher risk of experiencing a serious cardiovascular event, such as a heart attack or stroke.

Researchers conducted a systematic review, collating the findings of previous studies. Their results suggest that outbursts of anger increase the risk of a serious cardiovascular event almost fivefold (a rate ratio of 4.74, to be exact).

However, there are a few flaws to the research. The most pertinent is that some of the studies included in the review collected information after the cardiovascular event occurred. This type of information gathering is prone to what is known as recall bias.

For example, if a person has a heart attack in the afternoon, they may be more likely to remember the driver that “cut them up” at a roundabout that morning than someone who didn’t experience a cardiovascular event.

There could also be other factors that link anger and cardiovascular events and are responsible for the association seen.

Finally, no studies used in the analysis looked at the general population and whether anger levels increased their likelihood of having a cardiovascular event. The researchers said the results “should not be assumed to indicate the true causal effect of anger episodes on cardiovascular events”.

That said, frequent outbursts of anger are not good for either your mental or physical health. Read more advice about ways you can control your anger.

 

Where did the story come from?

The study was carried out by researchers from Harvard Medical School, Harvard School of Public Health and New York-Presbyterian Hospital/Weill Cornell Medical Centre, New York. It was funded by the US National Institutes of Health.

The study was published in the peer-reviewed European Heart Journal. The article is open access, meaning it can be accessed for free from the publisher’s website.

The results of the research were well reported by the UK media. They put the small risk of a cardiovascular event into its proper context, while also pointing out that unresolved anger can impact on a person’s health.

 

What kind of research was this?

This was a systematic review that aimed to find out whether episodes of anger are linked to a short-term risk of experiencing a cardiovascular event, such as a heart attack, stroke or disturbances in heart rhythm.

Systematic reviews are one of the best ways of summarising what is known about a topic.

However, it’s worth considering that this systematic review contained case-crossover studies. In these, information on whether the participants were angry or calm was obtained during two different periods of time: immediately before the cardiovascular event and at an earlier time.

The level and frequency of anger just prior to a cardiovascular event was then compared to the level of anger in the same people at an earlier time.

Although case-crossover studies can show a link between two things, there could be other factors at play. In this instance, there could be numerous reasons for the link between anger and cardiovascular events. As mentioned previously, case-crossover studies are also prone to recall bias.

 

What did the research involve?

The researchers searched databases of literature to identify all studies that had evaluated whether outbursts of anger are linked to a short-term risk of heart attack, stroke or disturbances in heart rhythm. For studies to be included, they had to evaluate triggers occurring within one month of the cardiovascular event.

The results of these studies were then combined to see if anger was associated with the short-term risk of one of these events. The researchers calculated incidence rate ratios, which compared the number of cardiovascular events in the two hours following outbursts of anger with the number of cardiovascular events that weren’t preceded by anger.

 

What were the basic results?

The researchers identified and included nine case-crossover studies:

  • four of anger outbursts and heart attack/acute coronary syndrome (various heart conditions including heart attack and unstable angina caused by reduced blood flow to a part of the heart)
  • two of anger outbursts and stroke
  • one of anger outbursts and ruptured intracranial aneurysm
  • two of anger outbursts and ventricular arrhythmia (abnormal heart rhythm)

The studies were substantially different – they were performed in different countries and collected information about anger episodes differently.

The researchers found there was a 4.74 times higher risk of heart attack/acute coronary syndrome in the two hours following outbursts of anger compared to normal (95% Confidence Interval [CI] 2.50 to 8.99).

The risk of having a stroke was not significantly higher in the two hours following anger compared to normal (95% CI 0.82 to 16.08).

The one study that assessed the risk of ruptured intracranial aneurysms found a statistically significant increased risk in the hour following an outburst of anger (95% CI 1.59 to 24.90).

The two studies that analysed whether outbursts of anger are associated with ventricular arrhythmia were too different to be combined, but both studies found that anger episodes significantly increase a person’s risk.

The researchers point out that although the relative risks of cardiovascular events following an anger outburst are large, the impact on an individual’s absolute risk of a cardiovascular event may be small.

This is because the initial baseline risk of experiencing a cardiovascular event is small.

That said, when we consider the increased risk at population levels, it seems that frequent outbursts of anger do take a toll on the public's health.

They calculate, based on the combined estimate of a 4.74 times higher rate of heart attack/acute coronary syndrome in the two hours following outbursts of anger, that:

  • one episode of anger per month would result in one excess cardiovascular event per 10,000 people per year at low (5%) 10-year cardiovascular risk
  • one episode of anger per month would result in four excess cardiovascular events per 10,000 people per year at high (20%) 10-year cardiovascular risk
  • five episodes of anger per day would result in approximately 158 excess cardiovascular events per 10,000 per year for people at low 10-year cardiovascular risk
  • five episodes of anger per day would result in approximately 657 excess cardiovascular events per 10,000 per year among people at high 10-year cardiovascular risk
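The published excess-event figures come from the paper's own modelling, but the logic can be sketched with a rough back-of-envelope calculation: scale the baseline event rate over the two-hour post-anger window by the rate ratio minus one, then multiply by the number of outbursts per year. This simplified sketch reproduces the order of magnitude of the figures above, not the exact published numbers.

```python
def excess_events_per_10000(episodes_per_year, ten_year_risk,
                            rate_ratio=4.74, window_hours=2.0):
    """Rough excess cardiovascular events per 10,000 people per year
    attributable to anger outbursts, given a 10-year baseline risk."""
    # Convert 10-year risk to an approximate annual risk
    annual_risk = 1 - (1 - ten_year_risk) ** 0.1
    # Baseline event rate per person-hour
    base_rate = annual_risk / (365 * 24)
    # Extra events caused by one outburst: baseline rate over the 2-hour
    # window, multiplied by the *excess* rate ratio (4.74 - 1)
    extra_per_episode = base_rate * window_hours * (rate_ratio - 1)
    return episodes_per_year * extra_per_episode * 10_000

# One outburst a month at low (5%) 10-year risk: under one excess event
# per 10,000 people per year
monthly_low = excess_events_per_10000(12, 0.05)

# Five outbursts a day at high (20%) 10-year risk: hundreds of excess
# events per 10,000 people per year
daily_high = excess_events_per_10000(5 * 365, 0.20)
```

The key insight the arithmetic captures is that a large relative risk applied to a tiny baseline rate, over a short window, yields a small absolute risk for any one person – but a meaningful number of events once frequent outbursts are multiplied across a population.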

How did the researchers interpret the results?

The researchers concluded that, “there is a higher risk of cardiovascular events shortly after outbursts of anger”.

 

Conclusion

This systematic review found there is an increased risk of cardiovascular events, including heart attack and disturbances in heart rhythm, shortly after outbursts of anger.

This is based on results from nine case-crossover studies. In these, information on feelings of anger in the period before the cardiovascular event, as well as an earlier period, were collected retrospectively. The risk of having a cardiovascular event after an episode of anger was then calculated.

The researchers point out several limitations to their review, including the fact that:

  • participants were asked to remember angry outbursts hours or days after the cardiovascular event – in one of the studies, this was two weeks later. Recall may be inaccurate when someone has just experienced an unpleasant or life-threatening event. Participants were also asked to recall periods of anger in the preceding 6-12 months, where there may be selective recall, forgetting, or simply being unaware of the frequency of angry outbursts. In one study, people were asked to recall any angry outbursts on the same day and time of the previous week, which may have been difficult to remember accurately
  • the studies used different methods to record outbursts of anger. Some studies recorded this using an interview, while others used a questionnaire – these methods can result in people responding differently
  • the studies only included people who had suffered a cardiovascular event – none of them looked at the general population or drew any link between the number of angry outbursts and the risk of a cardiovascular event

Although case-crossover studies can demonstrate a link, there could be other factors that link anger and cardiovascular events and are responsible for the association seen. The researchers concluded that “the results should not be assumed to indicate the true causal effect of anger episodes on cardiovascular events”.

Anger is a normal, healthy emotion. However, if you find yourself getting intensely angry on a regular basis, you may have an anger management problem. Read more about possible treatment options that could help you control your anger.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

Angry people 'risking heart attacks'. BBC News, March 4 2014

Angry outbursts cause fivefold increase in heart attack risk. The Daily Telegraph, March 4 2014

Lose your temper and risk a heart attack - up to two HOURS after calming down. Daily Mail, March 4 2014

Links To Science

Mostofsky E, Penner EA, Mittleman MA. Outbursts of anger as a trigger of acute cardiovascular events: a systematic review and meta-analysis. European Heart Journal. Published online March 3 2014

Categories: Medical News

Childhood nightmares link to psychotic experiences

Medical News - Mon, 03/03/2014 - 14:48

“Regular nightmares in childhood may be an early warning sign of psychotic disorders,” BBC News reports. While many children have the occasional nightmare, a history of regular nightmares could be the sign of something more serious, the news reports.

The study in question followed more than 6,000 UK children and found that those whose mothers reported them as having regular nightmares over at least one period up to age nine were significantly more likely to report having had a “psychotic experience" at age 12.

While the news reports may be understandably worrying for parents, it is worth bearing in mind that the findings need to be confirmed in further studies.

Also, the findings don’t suggest that having regular nightmares definitely means your child will have psychotic experiences. In addition, reporting a single psychotic experience at age 12 would not mean that a child definitely had a psychotic disorder such as schizophrenia, or would go on to develop one later on.

The authors note that it is not possible to say whether nightmares are directly causing the increase in risk of psychotic experiences. This means that it is not clear whether stopping the nightmares (if this were possible) will have an effect on risk of these experiences.

 

Where did the story come from?

The study was carried out by researchers from King’s College London and other research centres in the UK. It was funded by the UK Medical Research Council, Wellcome Trust, University of Bristol, and Economic and Social Research Council. The study was published in the peer-reviewed medical journal Sleep.

The BBC News headline “Childhood nightmares may point to looming health issues” is unnecessarily frightening for parents. The figure quoted by BBC News about the risk associated with nightmares (a “three-and-a-half times” increase) comes from an analysis that cannot tell us whether the sleep problems or the psychotic experience came first – and therefore it cannot tell us which might be contributing to the other.

The Mail Online provides a better summary of the results in its story.

 

What kind of research was this?

This was a prospective cohort study looking at the possibility of a link between sleep disorders and later psychotic experiences in childhood. This is the most appropriate study design for assessing this question.

The research was part of an ongoing birth cohort study called the Avon Longitudinal Study of Parents and Children (ALSPAC). This ongoing study looks at factors that determine a person’s health from childhood onwards.

The researchers also carried out some cross-sectional analyses, but these cannot tell us which factor came first, and therefore which might be influencing the other.

Therefore, these analyses cannot answer the question of whether frequent nightmares might increase psychosis risk or whether psychotic experiences might increase risk of nightmares.

 

What did the research involve?

The researchers assessed whether the children had any problems with sleep (such as difficulty getting to sleep, nightmares, night terrors, or sleepwalking) between the ages of two-and-a-half and nine years, and at age 12. They also assessed whether the children had experienced psychotic experiences at age 12. They then analysed whether children with sleep problems were more likely to report psychotic experiences.

The study aimed to recruit all pregnant women living in the Avon region who were due to give birth between April 1st 1991 and the end of 1992. They recruited 14,775 women who gave birth to a live baby.

The mothers completed questionnaires about their and their child’s health and development from the time of recruitment. Sleep problems were assessed in six postal questionnaires sent at intervals between the ages of two-and-a-half and nine years, and in a standard face-to-face interview when the child was aged 12 years.

The questionnaires asked the mothers whether their child experienced regular problems going to sleep, nightmares, or sleep walking. The interview asked the children whether they had nightmares, or whether someone had told them they had shown signs of night terrors or sleep walking in the past six months. If they answered yes, they were asked more questions to gain further information.

At age 12, the children also had a face-to-face semi-structured interview to find out whether they had any psychotic experiences. These experiences could be:

  • Hallucinations: seeing or hearing something that wasn’t there
  • Delusions: for example feeling spied on, persecuted, that their thoughts were being read, or having delusions of grandeur
  • Thought interference: feeling that someone was inserting thoughts into their mind or removing thoughts, or that other people could hear their thoughts

These types of experiences can be symptoms of serious mental health conditions such as schizophrenia, or can be triggered by physical illnesses or substance use.

The current study included the 6,796 children whose mothers had completed at least three questionnaires about sleep problems up to the age of nine, as well as the child interview about psychotic experiences at age 12 years.

The researchers then looked at whether children with sleep problems were more likely to report psychotic experiences. They took into account factors that might influence this association (confounders), including:

  • family adversity during the pregnancy
  • child IQ
  • evidence of neurological problems
  • mental health diagnoses (made at age seven)
  • behavioural problems

 

What were the basic results?

According to the mothers’ reports, between the ages of two-and-a-half and nine years, about three-quarters of children experienced at least some nightmares. About a fifth of children (20.7%) had regular nightmares reported at one time point in this period; 17% had regular nightmares reported at two time points, and 37% had regular nightmares reported at three or more time points.

At age 12, 36.2% reported at least one sleep problem (nightmares, night terrors, or sleep walking). At this age, 4.7% of children reported having had a psychotic experience that was judged not to be related to fever or substance use, and was not experienced when the child was falling asleep or waking up.

Children who were reported as experiencing regular nightmares at one time point between the ages of two-and-a-half and nine years had slightly higher odds of reporting psychotic experiences at age 12 than those who never had regular nightmares (odds ratio (OR) 1.16, 95% confidence interval (CI) 1.00 to 1.35).

The more persistent the nightmares were, the greater the increase in the odds. For example, those who were reported as having regular nightmares in at least three time periods between the ages of two and a half and nine years had a 56% increase in odds of a psychotic experience (OR 1.56).
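Because psychotic experiences were reported by only 4.7% of children overall, these odds ratios are close to relative risks, and they can be translated into approximate absolute risks. The sketch below treats the overall 4.7% figure as the baseline risk in the unexposed group – an assumption for illustration, not a figure the study reports for that group.

```python
def risk_given_or(baseline_risk, odds_ratio):
    """Convert an odds ratio into an absolute risk, given a baseline risk."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    exposed_odds = baseline_odds * odds_ratio
    return exposed_odds / (1 + exposed_odds)

# Assumed baseline: 4.7% of children reported a psychotic experience
one_period = risk_given_or(0.047, 1.16)     # ≈ 5.4% – nightmares at one time point
persistent = risk_given_or(0.047, 1.56)     # ≈ 7.1% – nightmares at three or more
```

On these assumptions, even the "56% increase in odds" for persistent nightmares corresponds to an absolute risk rising from roughly 1 in 20 to roughly 1 in 14 – a real but modest difference.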

Problems getting to sleep, or night waking between the ages of two and a half and nine years were not associated with psychotic experiences at age 12.

Children who reported any sleeping problems at the age of 12 (nightmares, night terrors, or sleep walking) were also at higher odds of reporting psychotic experiences than those without these problems (OR 3.62, 95% CI 2.57 to 5.11).

 

How did the researchers interpret the results?

The researchers conclude that nightmares and night terrors in childhood, but not other sleeping problems, are associated with reporting psychotic experiences at age 12 years.

 

Conclusion

The study has found that children who have regular nightmares between the ages of two-and-a-half and nine were more likely to report a psychotic experience (for example a hallucination or delusion) at age 12. While the study was relatively large and well designed, it does have limitations. As with all research findings, they ideally need to be confirmed by other studies.

Parents reading this article should not become unduly distressed by thinking that their child’s nightmares mean they will develop psychosis later on in life. Firstly, while a lot of children experienced nightmares at some point up to the age of nine (almost three-quarters), very few reported having had a psychotic experience at age 12 (about one in twenty).

In addition, a single psychotic experience at age 12 would not mean that the child had a diagnosis of a psychotic disorder, or guarantee that they would go on to develop psychosis later on.

Thankfully, psychosis is uncommon, affecting around one in 100 people, and mostly at ages 15 or over. Cases among children under the age of 15 are rare.

Finally, as the authors themselves note, it is not possible to say whether nightmares are directly causing the increase in risk of psychotic experiences.

There are some other points to note:

  • Although BBC News reports that night terrors were mostly experienced between the ages of three and seven years, night terrors in this study were only specifically assessed at age 12. At younger ages the researchers only asked about nightmares, problems getting to sleep, and night waking.
  • The analyses of the link between sleep problems at age 12 (such as night terrors) and psychotic experiences at the same age is cross-sectional, and therefore it is not possible to say which factor came first – the sleep problem or the psychotic experience.
  • The figure from these analyses (a 3.5 times increase in risk) is much higher than the increase in risk of having a psychotic experience at age 12 after having nightmares from age two-and-a-half to nine years, which was only 16%.
  • The study relies on mothers’ reports of children’s sleep problems up to the age of nine years and did not delve into the frequency or severity of sleep problems. It is possible that this may lead to some inaccuracies – for example, some children with sleep problems may be missed.
  • Although the researchers tried to take into account some factors that may have influenced results (potential confounders), others may also be having an effect, such as total amount of sleep a child had.

Read more about common sleep problems in children.

If your child is experiencing persistent sleep problems then ask your GP for advice.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

Childhood nightmares may point to looming health issues. BBC News, March 1 2014

Children who have lots of nightmares at risk of suffering hallucinations and psychosis as teenagers. Mail Online, March 2 2014

Childhood nightmares 'increase risk' of health problems. ITV News, March 1 2014

Children who have nightmares at greater risk of psychosis in later life. Daily Mirror, March 1 2014

Links To Science

Fisher HL, Lereya ST, Thompson A, et al. Childhood Parasomnias and Psychotic Experiences at Age 12 Years in a United Kingdom Birth Cohort. Sleep. Published online March 1 2014

 

Categories: Medical News

How seaweed could slow the obesity tidal wave

Medical News - Mon, 03/03/2014 - 14:00

“Seaweed could be key to weight loss, study suggests,” BBC News reports.

UK researchers have looked at alginates that occur naturally in “kelp” seaweed (the variety that resembles large blades). They found that these alginates may help reduce the amount of fat the body digests.

Their study showed that, in the lab, certain types of alginates can slow down the activity of a fat-digesting enzyme called pancreatic lipase. The researchers believe that if the alginates can block this enzyme, less fat would be absorbed by the body, which could help stop people becoming obese.

However, the research does not support any definitive conclusions – most pertinently, weight loss would not necessarily occur in humans (or even in mice). It's also unclear whether any potential effect from seaweed extract would lead to an improvement in weight-related health issues, such as a reduced risk of diabetes.

Even if the alginates studied were successful in achieving weight loss, this does not mean they are safe to consume. Ultimately, ingesting a substance that slows down fat absorption is unlikely to have the same health benefits as a well-balanced diet and exercise – this is a tried and tested lifestyle choice for maintaining a healthy weight.

Nonetheless, the market for quick-fix weight loss treatments is large and extremely profitable, so research into seaweed extract will almost certainly continue.

 

Where did the story come from?

The study was carried out by researchers from Newcastle University and was funded through a BBSRC CASE studentship (a grant programme for bioscience researchers) with industrial sponsors Technostics Ltd.

The study was published in the peer-reviewed science journal Food Chemistry.

The UK media’s reporting of the study was generally accurate, though much of the reporting gives the impression the alginates studied had proven to be an effective weight loss supplement in humans.

 

What kind of research was this?

This was a laboratory study investigating how a compound called alginate could influence the digestion of fat.

Alginates are chemicals that can be extracted from the cell walls of brown seaweed or from certain bacteria. Using alginate as a food additive is not a new concept, but this latest news covers new territory: their potential as an anti-obesity treatment.

In industrialised countries, dietary fats can account for 40% of energy intake, with triacylglycerol (TAG) being the major component. An enzyme called lipase, secreted by the pancreas, plays an important role in the digestion of fats, so reducing pancreatic lipase activity would reduce fat breakdown, resulting in lower amounts being absorbed by the body. More of the fat would then pass straight through the body, rather than accumulating under the skin or around the organs, where it can harm health.

Laboratory research like this is useful for establishing proof of a particular concept, but many more tests are needed for potential food additives. Experiments on humans are more important and would provide more information about the potential risks and rewards of using alginate as a food additive or a weight loss agent.

 

Links To The Headlines

Seaweed could be key to weight loss, study suggests. BBC News, March 1 2014

Want to lose weight? Eat more SEAWEED. Daily Mirror, March 1 2014

Could seaweed stop the tide of obesity? Scientists creating supplement that blocks fat. The Independent, March 1 2014

For a guilt-free way to enjoy cake, just add seaweed: Extract stops fat being broken down and absorbed by the body. Mail Online, March 1 2014

Seaweed on your sausages could keep weight down. The Times, March 1 2014

Links To Science

Wilcox MD, Brownlee IA, Richardson JC, et al. The modulation of pancreatic lipase activity by alginates. Food Chemistry. Published online March 1 2014

 

Categories: Medical News

Claims of 'anti-ageing pill' may be premature

Medical News - Fri, 02/28/2014 - 14:48

The Daily Telegraph and Daily Express both carry headlines about how a “pill” to help humans live longer could be on the cards. However, while the substance being studied shows promise, the research only involved mice.

Researchers were looking at a chemical called SRT1720 which activates a particular protein called Sirtuin 1 (SIRT1). Previous research has demonstrated that activating SIRT1 can have health benefits in various organisms, and it has been proposed as an anti-ageing protein.

This study focused on comparing the lifespan, health and diseases of mice fed the same diet, but with or without the addition of SRT1720.

Overall they found mice fed a normal diet but with the supplement had a longer natural lifespan on average (about five weeks longer).

During their lifetime, additional tests also suggested they had improved muscle function and coordination, improved metabolism, improved glucose tolerance, and decreased body fat and cholesterol.

All in all this suggests that giving the mice this supplement could protect them from the equivalent of metabolic syndrome, a series of risk factors associated with conditions such as heart disease and type 2 diabetes.

This is interesting research, but as it only involved mice, the normal caveats regarding animal studies apply. Importantly, the researchers did not look at whether SRT1720 may cause side effects or complications. So it is currently unclear whether SRT1720 would be safe in humans, let alone effective.

The SIRT1 protein could be a possible candidate in the quest to find an “elixir of life” but these are very early days.

 

Where did the story come from?

The study was carried out by researchers from the US National Institute on Aging, part of the National Institutes of Health, and other research institutions in the US and Australia. Funding was provided by the National Institute on Aging. Some of the researchers involved in the study are employed by Sirtris, a company with a declared commercial interest in developing a SIRT1 activator.

The study was published in the peer-reviewed scientific journal Cell. The article is open access (unlike most of Cell’s content), meaning it can be accessed for free from the journal’s website.

The media is rather premature in concluding from this early-stage research in mice that a life-prolonging pill could be on the cards anytime soon, though the research does have findings worthy of further study.

Also, unlike The Daily Telegraph, the Daily Express article does not make it apparent that the research was in mice.

 

What kind of research was this?

This was an animal study in mice which centred on a chemical called SRT1720 which is thought to activate a particular protein, Sirtuin 1 (SIRT1). SIRT1 is known to play an important role in maintaining homeostatic balance (keeping the various systems of the body on a “stable footing”).

Previous research has demonstrated that SIRT1 activation can have health benefits in various organisms, and it has been proposed as an anti-ageing protein. It has been suggested that pharmacological interventions that aim to increase SIRT1 activity could slow the onset of ageing as well as delaying the onset of diseases associated with ageing – such as heart disease.

Prior study has shown that treating mice with small molecule activators of SIRT1 such as SRT1720 can counter the detrimental effects of a high-fat diet. This is achieved by improving insulin sensitivity and reducing oxidative damage (damage at the cellular level).

However, most of the previous research in mice has focused on reversing the effects of a poor diet.

This research aimed to see whether activating SIRT1 using SRT1720 can improve health and lifespan in mice fed a normal diet.

 

What did the research involve?

The researchers used 28-week-old male mice separated into four groups of 100. They were fed on four diets:

  • standard diet (carbohydrate: protein: fat ratio of 64:19:17 percent of kcal)
  • the standard diet supplemented with the SIRT1 activator molecule – SRT1720
  • high-fat diet (carbohydrate: protein: fat ratio of 16:23:61)
  • the high fat diet supplemented with SRT1720

The SRT1720 supplements were included in the diets at an approximate daily dose of 100mg/kg body weight. The mice had their body weight and food intake monitored biweekly.
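The stated dose is simple to translate into an absolute amount: at roughly 100mg per kilogram of body weight, a 30g mouse would get about 3mg of SRT1720 a day. A minimal sketch of that arithmetic (the 30g body weight is an illustrative assumption, not a figure from the study):

```python
def daily_dose_mg(dose_mg_per_kg, body_weight_g):
    """Convert a mg/kg daily dose into an absolute daily dose in mg
    for an animal whose body weight is given in grams."""
    return dose_mg_per_kg * body_weight_g / 1000  # 1,000g per kg

# Hypothetical 30g mouse at the study's approximate 100mg/kg daily dose
print(daily_dose_mg(100, 30))  # 3.0 (mg per day)
```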

Mice received various tests during the study: their metabolic rate was measured after they had been on the diets for about six months, and their body fat mass and glucose tolerance were measured when they had been on the diets for almost a year.

They also had exercise testing when between one and two years of age. The animals lived out their lifespan and then their organs were examined after death.

 

What were the basic results?

The researchers found that survival differed significantly between the two groups of mice fed the standard diet – average lifespan was increased by 8.8% (around five weeks) when mice were given the SRT1720 supplement. In the mice fed the high-fat diet, survival was significantly lower overall, but the SRT1720 supplement still increased lifespan by 21.7% (around 22 weeks). Overall statistical analyses showed that the supplement significantly reduced the risk of death.

Also, the SRT1720 supplement decreased the body weight of both the standard diet and high fat diet mice, compared with their counterparts on the same diets, despite the fact that the mice were consuming the same number of calories.

However, the supplement only decreased percentage of body fat in those mice on the standard diet; in those on the high fat diet the supplement had no effect on fat mass percentage.

In the standard diet mice, the SRT1720 supplement also had beneficial effects on their metabolism and led to a noticeably improved performance on an activity test. This suggests they had better balance and muscle function, though a similar effect was not seen in the high-fat diet mice.

There was also some suggestion that the supplement improved insulin sensitivity and glucose balance, and also lowered blood cholesterol in mice fed the standard diet. Cataract formation in the eyes was also reduced.

There was no difference in the number of diseases seen on autopsy examination after death between animals given the supplement and those not. However, the researchers say that as the average age at autopsy was around 10 weeks later in those given the supplements, it could be that SRT1720 delays the onset of diseases allowing mice to live a longer life without disease.

How did the researchers interpret the results?

The researchers conclude that their results show that supplementation with a molecule that activates SIRT1 improves health and extends lifespan in mice maintained on a standard diet. They say their work “highlights the importance of examining the therapeutic value of small molecule activators of SIRT1 in general health and longevity”.

 

Conclusion

Previous research has demonstrated that SIRT1 protein could be a potential target for treatments to try and prolong life and prevent diseases of ageing. However, much animal research to date has focused on demonstrating how activators of this protein can reverse the detrimental effects of a high fat diet.

Therefore, though the current study also included mice fed a high fat diet, the main aim of researchers was to see what the effects could be when mice were fed their normal diet.

They found generally promising results. Overall they found that mice fed a normal diet supplemented with SRT1720, a chemical which is thought to activate SIRT1, had a longer natural lifespan (about five weeks longer on average). During their lifetime, additional tests also suggested they had improved muscle function and coordination, improved metabolism, improved glucose tolerance, decreased body fat and cholesterol.

All in all this suggests that giving the mice this supplement could protect them from the equivalent of metabolic syndrome in humans, and reduce the risk of diseases such as heart disease and diabetes. This is potentially important, as these types of disease are now a leading cause of disability and death in the developing world.

This research is at a very early stage, and we don’t know whether a treatment could be developed for testing in humans, and if it was, whether it would be safe or effective.

However, given the potential billions of pounds that could be made from a safe and effective anti-ageing drug, we would be extremely surprised if this study did not lead to further research into SRT1720 and SIRT1.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

New pill to fight ageing: Could drug be secret to a longer life? Daily Express, February 28 2014

Pill could help humans live longer. The Daily Telegraph, February 27 2014

Links To Science

Mitchell SJ, Martin-Montalvo A, Mercken EM, et al. The SIRT1 Activator SRT1720 Extends Lifespan and Improves Health of Mice Fed a Standard Diet. Cell Reports. Published online February 27 2014

Categories: Medical News

Draft regulations on 'three parent' IVF published

Medical News - Fri, 02/28/2014 - 14:45

Draft regulations on what is known as “three person IVF” – or mitochondria replacement – have been published by the Department of Health. If accepted by Parliament, the UK could become the first country to carry out the procedure, which can be used to prevent a group of life-threatening conditions known as mitochondrial diseases.

 

What are mitochondrial diseases?

Almost all of the genetic material in our bodies is inside the cell nucleus that contains 23 chromosomes inherited from our mother and 23 inherited from our father. However, there is also a small amount of genetic material contained in cellular structures called mitochondria, which produce the cell’s energy. Unlike the rest of our DNA, this small amount of genetic material is passed to the child only from the mother. There are a number of rare diseases caused by gene mutations in the mitochondria. Women carrying these mutations will pass them directly to their child, with no influence from the father.

The IVF technique being considered aims to prevent these “mitochondrial diseases” by replacing the mother’s mitochondria with healthy mitochondria from a donor, thereby creating a healthy embryo. The child would then have the genetic material of three people – the majority still from the mother and father, but with around 1% of mitochondrial DNA coming from a donor. 

 

What is mitochondria replacement?

There are two IVF mitochondria replacement techniques currently at the research stage, called pronuclear transfer and spindle transfer. These are the techniques under debate.

Pronuclear transfer involves an egg during the process of fertilisation. In the laboratory, the nucleus of the egg and the nucleus of the sperm (the pronuclei), which have not yet fused together, are taken from the fertilised egg cell containing the “unhealthy” mitochondria and placed into a donor fertilised egg cell that has had its own pronuclei removed. This early-stage embryo would then be placed into the mother’s body. The new embryo would contain the transplanted chromosomal DNA from both of its parents, but would have “donor” mitochondria from the other egg cell.

The alternative mitochondria replacement technique of spindle transfer involves egg cells prior to fertilisation. The nuclear DNA from an egg cell with “unhealthy” mitochondria is removed and placed into a donor egg cell that contains healthy mitochondria and has had its own nucleus removed. This “healthy” egg cell can then be fertilised.

Pronuclear transfer and spindle transfer are said to be potentially useful for the few couples whose child may have severe or lethal mitochondrial disease, and who would have no other option for having their own genetic child. It is estimated that in the UK, around 10-20 couples a year could benefit from these treatments. 

 

How many children does mitochondrial disease affect?

It is estimated that around 1 in 200 children are born each year with some form of mitochondrial disease. Some of these children will have mild or no symptoms, but others can be severely affected – with symptoms including muscle weakness, intestinal disorders and heart disease – and have reduced life expectancy.

 

What ethical concerns have been raised about the techniques?

There are obvious ethical implications from creating an embryo with genetic material from three parents.

Among the questions raised are:

  • Should the details of the donor remain anonymous or does the child have the right to know who their “third parent” is?
  • What would be the long-term psychological effects on the child knowing it was born using donated genetic tissue?

Opponents of these types of treatments cite what can be broadly summarised as the “slippery slope” argument; this suggests that once a precedent has been set for altering the genetic material of an embryo prior to implantation in the womb, it is impossible to predict how these types of techniques might be used in the future.

Similar concerns were raised, however, when IVF treatments were first used during the 1970s; today, IVF is generally accepted.  

How do I make my views known?

The draft guidelines contain a number of questions for consideration.

You can send your responses to these questions to:

Mitochondrial Donation Consultation
Department of Health
Room 109
Richmond House 
79 Whitehall
London
SW1A 2NS

Alternatively, comments can be sent by email to: mitochondrial.donation@dh.gsi.gov.uk

When responding, please state whether you are responding as an individual or representing the views of an organisation. If you are responding on behalf of an organisation, please make it clear who the organisation represents and, where applicable, how the views of members were collected.   

Edited by NHS Choices.
Follow Behind the Headlines on Twitter.

Links To The Headlines

Three-person baby details announced. BBC News, February 27 2014

100 babies a year in UK will have three parents: Births 'as early as 2015' in world first. Mail Online, February 27 2014

Ministers take next step towards approving IVF procedures that would allow three-parent babies. The Independent, February 27 2014

Revolutionary new IVF treatment means babies could be born with DNA of three parents. Daily Express, February 27 2014

 

Categories: Medical News

Stethoscopes could spread hospital infections

Medical News - Fri, 02/28/2014 - 14:42

"Stethoscopes 'more contaminated' than doctors' hands," BBC News reports after new Swiss research has suggested that the much-used instrument may spread bacteria inside hospitals, including MRSA.

The BBC reports on an observational study involving 71 patients carried out at a Swiss university teaching hospital. Doctors were asked to perform a routine physical examination of these patients. None of the patients had an active skin infection, but around half were known to be colonised with MRSA before the examination took place.

After the examination, four areas of the doctors' dominant hand (or glove) and their stethoscopes were pressed onto culture media (liquid or gel designed to support the growth of bacteria) to see how many bacteria were grown in the laboratory. Hands (or gloves) and stethoscopes were sterilised prior to the examinations, so the researchers would only find bacteria transferred onto them after the single examination.

Overall, the study found that after the examinations, the most contaminated areas were the fingertips, followed by the diaphragm (the round "listening part") of the stethoscope. The diaphragm was more contaminated than other regions of the hand, such as the skin around the base of the thumb and little finger or the back of the hand.

The study serves as an important reminder for doctors and other health professionals about the dangers of cross-contamination. Transferring equipment from one patient to another without disinfecting the items in-between could pose as much of a risk as unwashed hands. This study has only investigated stethoscopes, but the results could just as easily apply to other hospital equipment, such as blood pressure cuffs and thermometers. 

 

Where did the story come from?

The study was carried out by researchers from the University of Geneva Hospitals and was funded by the University of Geneva Hospitals and the Swiss National Science Foundation. There were no reported conflicts of interest.

The study was published in the peer-reviewed medical journal, Mayo Clinic Proceedings.

The reporting of the study is generally accurate, but all of the sources that reported on it (BBC News, ITV News and the Mail Online website) made the mistake of claiming that stethoscopes were more contaminated than doctors' hands. This is not strictly true.

What the study actually found was that fingertips were most contaminated, followed by the "listening part" of the stethoscope that comes into contact with patients' skin.

 

What kind of research was this?

This was an observational study carried out at a Swiss university teaching hospital. After normal physical examination of patients, doctors' hands (or the gloves they used during the examination) and stethoscopes were pressed onto culture media (a substance that can support the growth of bacteria) to see what bacteria were grown in the laboratory over a period of five months.

As the researchers say, the transmission of bacteria and other micro-organisms between patients poses a significant risk to the health of patients staying in hospitals and increases the risk of death.

There is a wealth of evidence that healthcare workers' hands are one of the main routes of cross-contamination. However, there is a lack of evidence supporting the role that medical equipment such as stethoscopes play as sources of contamination.

The researchers say that they aimed to compare doctors' hands and stethoscopes immediately after examination to see if stethoscopes could pose as much of a risk for cross-contamination as unwashed hands.

 

What did the research involve?

The study was conducted between January and March 2009 at the University of Geneva Hospitals. The researchers included a sample of adult patients from medical or orthopaedic wards who were in a stable medical condition and did not have an obvious skin infection. However, they also included a sample of people who were found to be colonised with methicillin resistant Staphylococcus aureus (MRSA) on standard hospital admission screening.

Three doctors were involved in the tests and the study involved two phases. In the first, they wore sterile gloves to ensure the initial bacterial count on their hands would be zero. This study specifically involved the people free from MRSA and aimed to look at the total count of (aerobic) bacteria after examination.

In the second phase, the doctor examined the patients without gloves, but prior to the examination they used alcohol hand rub following the technique laid out by the World Health Organization (WHO), which recommends rubbing the hand rub in for 30 seconds.

This part of the study specifically involved the people with MRSA colonisation and aimed to look at transmission of MRSA.

The stethoscopes used by the doctors were sterilised prior to each patient examination.

After the examinations, four regions of the physicians' dominant gloved or ungloved hand were sampled for bacteria. Two sections of the stethoscope were also tested, including the diaphragm and the tube attached to it.

Sampling was done by pressing the regions being studied onto culture plates. After culture for up to 24 hours, the researchers examined the total count of (aerobic) bacterial and MRSA colonies.

 

What were the basic results?

The first study included 33 patients without MRSA (64% male, average age 62). The second study included 38 patients with MRSA colonisation (58% male, average age 72). Around a third of the patients in each study were receiving antibiotics.

In the first study, of the regions tested, the fingertips were most heavily contaminated with bacteria, with a median of 467 colony forming units per 25cm2.

Colony-forming units (CFUs) are an estimate of the number of viable bacteria; in this case, the number of bacteria contained in an area of 25 square centimetres, roughly a 5cm by 5cm square.

Fingertip testing was then followed by testing of the diaphragm of the stethoscope (median 89 CFUs/25cm2).

Further testing involved:

  • regions around the base of the thumb and little finger (around 35 CFUs/25cm2)
  • the stethoscope tube (18 CFUs/25cm2)
  • the back of the hand, which was the least contaminated (8 CFUs/25cm2)

On statistical comparison, the contamination level of the stethoscope diaphragm was significantly lower than the contamination level of the fingertips, but significantly higher than around the base of the thumb or little finger or the back of the hand.

In the second study, where 38 patients with MRSA were examined, the pattern of contamination was similar, though with lower colony levels. The most heavily contaminated region was the fingertips (12 CFUs/25cm2), followed by the stethoscope diaphragm (7 CFUs/25cm2), then around the thumb or little finger.

However, the stethoscope tube and back of the hand had no MRSA. There was also no significant difference between contamination of the stethoscope diaphragm and fingertips.

In both studies, the level of contamination on the stethoscope was related to the level of contamination on the fingertips.

 

How did the researchers interpret the results?

The researchers concluded that, "These results suggest that the contamination level of the stethoscope is substantial after a single physical examination and comparable to the contamination of parts of the physician's dominant hand."

 

Conclusion

This study demonstrates that after a patient examination with sterile hands and stethoscope, the part of a doctor's hands most highly contaminated with bacteria was the fingertips, followed by the diaphragm of the stethoscope.

This part of the stethoscope was more contaminated than other regions of the hand, including the skin around the base of the thumb and little finger, or the back of the hand. The pattern was similar when looking at MRSA and total bacterial count in general.

It must be acknowledged that this study was small, involving the examination of only 71 patients by just three doctors at a single Swiss hospital over a period of five months.

However, the scenario examined – where hands and stethoscope were sterilised prior to use, and the patients involved were in a stable medical condition and did not have an active skin infection – should mean they are fairly representative of the "best situation" that could be found if similar tests were carried out in hospitals elsewhere.

In other "less than best" situations, such as where doctors' hands and equipment haven't been completely sterilised prior to use, levels of contamination could be much higher than those seen here. As the researchers say, no piece of equipment used on patient wards can be fully sterile, and most objects in the healthcare environment will yield some micro-organisms when sampled.

However, what is difficult to say is the clinical significance of detecting these levels of contamination. This study didn't test whether transferring the level of bacteria contamination detected on fingertips and stethoscopes would result in infection if it was then transferred to another patient without sterilisation.

But it is plausible that if repeated examinations were conducted without sterilisation in-between, the contamination would get worse and may be more likely to pose an infection risk, particularly to vulnerable patients.  

A useful follow-on to this study would be to investigate how effective different methods for decontaminating stethoscopes are at reducing bacterial counts. That is, while clear WHO guidance is in place informing the process by which hands need to be sanitised to make them "safe", similar guidance for other equipment, such as stethoscopes, is not available and would be useful.

Overall, this study serves as an important reminder for doctors and other health professionals about the potential risks of cross-contamination if hospital equipment and hands are not disinfected between one patient and the next.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Stethoscopes 'more contaminated' than doctors' hands. BBC News, February 27 2014

Doctors' stethoscopes can be dirtier than their hands - and are helping to spread bugs such as MRSA. Mail Online, February 27 2014

Stethoscopes 'more contaminated' than doctor's hands. ITV News, February 27 2014

Links To Science

Longtin Y, Schneider A, Tschopp C, et al. Contamination of Stethoscopes and Physicians' Hands After a Physical Examination. Mayo Clinic Proceedings. Published online February 27 2014

Categories: Medical News

Caesareans linked to obesity in offspring

Medical News - Thu, 02/27/2014 - 14:48

Babies born by caesarean section are more likely to be obese adults, the Daily Mail reports, after an analysis found a link between caesarean section and obesity in later life. However, a direct link between the two remains unproven.

The paper reports on a large analysis that studied links between delivery method and weight of offspring in later life. It found that the chances of being overweight in adulthood were 26% higher for babies born by caesarean section, while the chance of being obese was 22% higher. 

However, the research does not prove that being born by caesarean section makes people obese – there are a number of other determining factors. For example, many obese women require caesareans for medical reasons, and if a mother is obese, there is a higher risk of the child also becoming obese.

The analysis looked at studies that dated mainly from the 1930s to the 1970s. Back then, there were fewer grounds for performing a caesarean section, so a serious medical reason would probably have been present. This may, in part, account for the association seen, as caesareans have since become far more common. 

The number of caesarean sections has risen 100% since 1990 and obesity rates are also soaring, meaning that further research may be beneficial.

 

Where did the story come from?

The study was carried out by researchers from the University of London and was funded by Imperial College London.

It was published in the peer-reviewed open access medical journal PLOS One, meaning it can be read for free online.

The story was covered fairly in the media, with several papers pointing out that other determining factors may explain the links between caesarean sections and adult obesity.

The Daily Telegraph made a technical error when it reported that a healthy body mass index (BMI) is between 25 and 29.9. A BMI between 18.5 and 24.9 is classified as healthy, while a BMI between 25 and 29.9 is overweight.
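For reference, BMI is weight in kilograms divided by the square of height in metres. A minimal sketch of the bands quoted above (the example figures are hypothetical):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Classify a BMI value using the bands quoted in the article."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy"     # 18.5 to 24.9
    if value < 30:
        return "overweight"  # 25 to 29.9
    return "obese"           # 30 and above

# Hypothetical example: a 75kg adult who is 1.7m tall
print(round(bmi(75, 1.7), 1))      # 26.0
print(bmi_category(bmi(75, 1.7)))  # overweight
```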

What kind of research was this?

This was a systematic review and meta-analysis, looking at the association between delivery method and obesity in adult offspring. The authors point out that over the last 20 years, there have been increases worldwide in both obesity and caesarean sections. In England, adult obesity rose from 16.4% to 26% between 1995 and 2010, while there was a 100% increase in caesarean sections between 1990 and 2008.
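It is worth noting the difference between percentage points and relative change in figures like these: the rise in English adult obesity from 16.4% to 26% is 9.6 percentage points, but a relative increase of almost 60%, while the "100% increase" in caesareans means the number doubled. A minimal sketch of that arithmetic:

```python
def relative_increase(old, new):
    """Relative change between two values, as a percentage
    of the starting value."""
    return (new - old) / old * 100

# English adult obesity, 1995 to 2010 (figures quoted in the review)
print(round(relative_increase(16.4, 26.0)))  # 59

# A doubling, as with caesarean sections 1990-2008, is a 100% increase
print(relative_increase(10, 20))  # 100.0
```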

Previous research suggests that a caesarean section delivery may be linked to health problems in childhood, such as asthma and type 1 diabetes. An association between a caesarean section delivery and adulthood obesity has also been mooted.

 

What did the research involve?

The researchers conducted a systematic review of studies reporting adult BMI, height, weight, incidence of being overweight or obese and the mode of delivery.

In this type of study, researchers identify and assess all the high-quality evidence they find on a specific question. They have pre-specified criteria and use methods that minimise bias, to produce more reliable findings.

The researchers searched several electronic databases for any article on the subject published before March 2012. Research published in all languages and from all countries was included.

The studies were independently screened for eligibility by two reviewers, and the quality of each study was assessed using a validated scale.

Researchers also carried out a meta-analysis of the association between the mode of delivery, the BMI of offspring and whether they were overweight or obese in adulthood. They separately analysed the data for male and female offspring.

They divided caesarean sections into those carried out as an emergency and those that were planned.

 

What were the basic results?

The researchers identified 35 studies on the topic, 15 of which were suitable for inclusion in their review. The 15 studies had a combined population of 163,753, from 10 countries. The age at which adult offspring had their BMI measured ranged from 18 to 69.6 years. The caesarean sections occurred between 1934 and 1989, with only one study looking at more recent caesarean sections.

Compared to those born by vaginal delivery, adults who had been born by caesarean section had:

  • an average increase in BMI of 0.44kg/m2
  • a 26% greater chance of being overweight (odds ratio (OR) 1.26; 95% confidence interval (CI) 1.16 to 1.38)
  • a 22% greater chance of being obese (OR 1.22; 95% CI 1.05 to 1.42)
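The "greater chance" figures come straight from the odds ratios: an OR of 1.26 corresponds to 26% higher odds, and a result is conventionally treated as statistically significant when the confidence interval excludes 1 (no effect). A minimal sketch of that interpretation:

```python
def describe_odds_ratio(or_value, ci_low, ci_high):
    """Express an odds ratio as a percentage change in odds, and flag
    whether the confidence interval excludes 1 (the no-effect value)."""
    pct_change = (or_value - 1) * 100
    significant = ci_low > 1 or ci_high < 1
    return pct_change, significant

# Overweight result from the review: OR 1.26, CI 1.16 to 1.38
pct, sig = describe_odds_ratio(1.26, 1.16, 1.38)
print(f"{pct:.0f}% greater odds, significant: {sig}")  # 26% greater odds, significant: True
```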

Similar results were found when separate analyses were carried out for men and women.   

How did the researchers interpret the results?

The researchers conclude that there is a strong association between caesarean sections and the chances of having a higher BMI, being overweight and being obese in adulthood.

Given the rising number of caesarean sections being carried out worldwide, they say further research is urgently required to determine whether the method directly leads to a high risk of being overweight or obese in adulthood, or whether other factors are involved.

In an accompanying press release, Professor Neena Modi from the Department of Medicine at Imperial College London, the report's senior author, said: "There are good reasons why caesarean sections may be the best option for many mothers and their babies, and caesarean sections can, on occasion, be life-saving. However, we need to understand the long-term outcomes, in order to provide the best advice to women who are considering caesarean delivery”.

 

Conclusion

Given the increase in both weight problems and caesarean sections, any association between the two is an important topic for further research.

As the researchers point out, there are plausible mechanisms by which caesarean section delivery might influence adult body weight in offspring. The types of healthy bacteria in the gut differ in babies born by caesarean section and vaginal delivery, which may affect later health.

Also, the way the baby is compressed during vaginal birth appears to influence which genes are “switched on”, and this could have a long-term effect on metabolism.

However, as the researchers point out, their study did not adjust its findings for factors (called confounders), which could have influenced the results. 

There are a number of underlying factors that can both make a caesarean section necessary and increase obesity risk in the offspring.

They include a high BMI in the mother, gestational diabetes (diabetes during pregnancy) and lower socioeconomic status.

Even if there is a definite association between caesarean sections and an increased risk of obesity, it is possible to offset that risk by encouraging your child to eat healthily and take regular exercise.

Research suggests that healthy habits ingrained during early childhood are likely to persist into adulthood. Read more about child health.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter.
Join the Healthy Evidence forum.

Links To The Headlines

C-section babies are more likely to be obese as adults: Chance of being overweight increases by 25% compared with those born naturally. Daily Mail, February 27 2014

C-section birth 'link to later obesity'. BBC News, February 26 2014

Caesarean raises risk of obesity. The Daily Telegraph, February 27 2014

Links To Science

Darmasseelane K, Hyde MJ, Santhakumaran S, et al. Mode of Delivery and Offspring Body Mass Index, Overweight and Obesity in Adult Life: A Systematic Review and Meta-Analysis. PLOS One. Published online February 27 2014

Categories: Medical News

Is breast milk really best, American study asks

Medical News - Thu, 02/27/2014 - 14:10

"Breast milk is 'no better for a baby than bottled milk' and it increases the risk of asthma, expert claims," reports the Mail Online. The news comes from a large US study of children aged 4 to 14 looking at whether breastfeeding is associated with better health and academic outcomes.

The researchers argue that the majority of mothers who choose to breastfeed in developed countries are white middle-class women. It could be that this privileged position in society, rather than breastfeeding itself, accounts for the improved outcomes claimed to be associated with breastfeeding.

They found that overall, breastfed children had statistically better outcomes in 9 out of 11 areas. Unexpectedly, an association between breastfeeding and higher rates of asthma was also found.

But when they looked at children from the same family who had been fed differently (one bottle-fed, one breastfed), they found no statistically significant differences in outcomes for breastfed and bottle-fed children.

The researchers conclude that there is little evidence that breastfeeding improves outcomes. However, it is more likely that the influence of the children's genes and environment played a bigger role than whether or not they had been breastfed.

There is conflicting previous research on the association between breastfeeding and asthma, but the Department of Health and Asthma UK recommend breastfeeding where possible. Although breastfeeding is still the preferred option, the lack of a significant difference between siblings who were fed differently in this study should help allay the fears of mothers who are unable to breastfeed.

 

Where did the story come from?

The study was carried out by researchers from the department of sociology at Ohio State University and was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

It was published in the peer-reviewed journal, Social Science and Medicine.

The Mail Online generally reported the story accurately.

 

What kind of research was this?

This was a cohort study that used data from the US National Longitudinal Survey of Youth (NLSY). It aimed to see if breastfeeding made a difference to outcomes for children between the ages of 4 and 14 after socioeconomic factors were taken into account.

As this was a cohort study, it can only show an association and cannot prove that breastfeeding was the cause of any differences found. These could be related to other factors, called confounders. The only way to prove causation would be to conduct a randomised controlled trial, but this would be unethical.

 

What did the research involve?

The researchers took the NLSY data and studied physical, behavioural and academic outcomes to compare children who had been breastfed with those who had been bottle-fed. They then compared results across the whole sample, a sibling sample and "discordant siblings" (siblings who were fed differently) to determine whether any differences were due to breastfeeding or to socioeconomic factors.

They looked at data from 8,237 children born after 1978 who, or whose parents, were interviewed between 1986 and 2010. Twins and triplets were excluded. Two subgroups of this sample were analysed:

  • 7,319 siblings (more than one child per mother)
  • 1,773 discordant siblings (siblings who were fed differently as babies)

The researchers say that studying discordant sibling data (comparisons within the same families) should eliminate the effect of socioeconomic status on the results.

They also wanted to see if differences could be seen in later childhood, so looked at data from the age of 4 to 14 in terms of:

Physical health:

  • asthma

Behaviour:

  • hyperactivity
  • parental attachment
  • compliance

Academic achievement:

  • reading comprehension
  • vocabulary recognition
  • maths ability
  • memory-based intelligence
  • scholastic competence (academic performance)

They analysed the data to take the following confounders into account:

  • child age
  • maternal age
  • birth order
  • marital status
  • region
  • maternal smoking and alcohol use during pregnancy
  • prenatal care
  • maternal educational achievement
  • total family income
  • maternal employment status
  • insurance coverage

 

What were the basic results?

All three groups were comparable on the confounding factors listed above. In the whole sample, breastfed children had statistically significantly better outcomes in most areas after adjusting for confounders. However, there was an association between breastfeeding and asthma, and there was no difference in compliance.

In the sibling sample – chosen to see whether having a sibling made a difference to the results – the findings were similar, but there were no statistically significant differences in hyperactivity, attachment or scholastic competence, although compliance was better in the breastfed children.

When only discordant siblings were analysed, there were no statistically significant differences in any outcomes between breastfed and bottle-fed children, including for asthma.

 

How did the researchers interpret the results?

The researchers concluded that these findings "suggest that the relationship between breastfeeding and long-term childhood outcomes may not be as consistent and straightforward as once thought … The risks associated with a failure to breastfeed are drastically overstated…

"Once between-family differences are taken into account, we find relatively little empirical evidence to support the notion that breastfeeding results in improved health and wellbeing for children between 4 and 14 years of age."

 

Conclusion

This study does not alter the current body of research, which has shown the beneficial effects of breastfeeding. There were statistically significant differences in health, behaviour and academic outcomes in the full cohort, although there was an association between breastfeeding and asthma.

It is unclear why this reverse trend was found in this study, but it does not show that breastfeeding causes asthma or that bottle feeding prevents it.

The study did not show a significant difference in outcomes between siblings within the same family who were fed differently. This may be because genetic and environmental factors have more influence on these outcomes than breastfeeding at an individual level.

There are a number of confounding factors that were not adjusted for in this study, including the reasons why feeding style changed within a family. These could include maternal factors, such as breast disease, or the baby being unable to breastfeed, for example because of a cleft palate.

Another factor to consider is that maternity leave in the US is generally unpaid. This could mean that most women who can afford to take time off to care for and breastfeed their baby are on a high income. It could be the case that breastfeeding would still be of significant benefit for children born to women on a lower income in the UK.

Other important childhood outcomes in which breastfeeding has previously been found to be beneficial were not measured, including allergies, immune status and diabetes.

Importantly, breastfeeding also brings benefits to the mother, such as reducing the risk of breast cancer and ovarian cancer.

Breastfeeding is still the preferred option where possible. However, as the study acknowledges, some mothers are unable to breastfeed for a range of reasons, and it is important that they are not stigmatised.

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Breast milk is 'no better for a baby than bottled milk' - and it INCREASES the risk of asthma, expert claims. Mail Online, February 26 2014

Links To Science

Colen CG, Ramey DM. Is Breast Truly Best? Estimating the Effects of Breastfeeding on Long-term Child Health and Wellbeing in the United States Using Sibling Comparisons. Social Science & Medicine. Published online January 29 2014
