'Angelina Jolie effect' doubled breast gene tests

Medical News - Fri, 09/19/2014 - 14:30

“Referrals to breast cancer clinics more than doubled in the UK after Angelina Jolie announced she had had a double mastectomy,” BBC News reports. NHS services saw a sharp rise in referrals from women worried about their family history of breast cancer.

In May 2013, actress Angelina Jolie announced that she had decided to undergo a double mastectomy followed by breast reconstruction surgery, as gene testing estimated she had an 87% chance of developing breast cancer.

Examination of trends in genetic testing clinics in the UK showed that there was a peak in referral rates in June and July, with numbers standing at around two-and-a-half times higher than the previous year. There was almost a doubling in requests for predictive genetic tests for cancer risk genes, and many more enquiries about preventative mastectomy. Researchers were also encouraged to find that all referrals to genetic or family history clinics were appropriate (that is, the so-called “worried well” weren’t diverting resources from where they were needed).

This study can’t prove a direct cause and effect, but the evidence seems compelling.

The researchers also speculate that, as Angelina Jolie is seen as a glamorous icon, her decision may have reassured women who fear that preventative surgery would make a woman less attractive.

The actress would have been well within her rights to keep details of her health confidential, particularly knowing the media interest it would create. She deserves congratulation for her decision to speak out and help destigmatise mastectomies.

 

Where did the story come from?

The study was carried out by researchers from the University Hospital of South Manchester NHS Trust, and the Manchester Centre for Genomic Medicine at St. Mary’s Hospital. Financial support was provided by the Genesis Breast Cancer Prevention Appeal and Breast Cancer Campaign.

The study was published in the peer-reviewed medical journal Breast Cancer Research on an open-access basis, so it is free to read online.

The UK media’s reporting was generally accurate, though the Daily Mirror got a little confused with its headline: “'Angelina Jolie effect' credited for huge rise in double mastectomies to reduce breast cancer risk”.

The effect did cause a rise in the number of women being tested to see if a double mastectomy was required. However, the research didn't look at the number of operations carried out. As most of the tests would have actually proved negative, the impact on the number of operations is unlikely to have been a “huge rise”.

 

What kind of research was this?

This was a review of breast cancer-related referrals to family history clinics and genetics services within the UK for 2012 and 2013, to see how the trends changed between the two years.

As the researchers discuss, it is common for news items related to a particular health service to lead to a short-term temporary increase in interest. There is rarely a long-lasting effect once the media attention has died down. For example, the 2009 death of reality TV star Jade Goody from cervical cancer led to a short-lived increase in the number of young women attending cervical cancer screening appointments.

In 2013, there was said to be “unprecedented publicity of hereditary breast cancer” in the UK. This was associated with two things. First came the release of draft guidance from the National Institute for Health and Care Excellence (NICE) on familial (hereditary) breast cancer in January, followed by the final publication in June 2013. Second, and seemingly more significant, were the high-profile news reports that broke in May 2013 of actress Angelina Jolie’s decision to undergo a double mastectomy on finding that she had inherited a faulty BRCA1 gene – putting her at high risk of developing breast cancer.

Studies suggested that the news stories were associated with increases in attendance at hereditary breast cancer clinics and genetics services in the US, Canada, Australia, New Zealand and the UK. This study assessed the potential impact of the “Angelina Jolie effect” by comparing UK referrals due to breast cancer family history in 2012 with those in 2013.

 

What did the research involve?

This research looked at referrals specific to breast cancer for 21 centres in the UK. This included 12 of 34 family history clinics invited to participate, and nine of 19 regional genetics centres. Centres that did not supply data reported either that the data were not available or that they were unable to collate them. Monthly referrals to each centre for 2012 and 2013 were assessed, and the trends analysed.

 

What were the basic results?

The results show that overall referral rates were 17% higher in the period January to April 2013 than they had been in the previous year (the draft NICE guidance on familial breast cancer hit the media in January 2013, prior to final publication in June). However, there was nearly a 50% rise in May 2013, which was too early to have been associated with the final publication of NICE guidance, and coincided with the media reports about Angelina Jolie.

In June and July 2013, referrals to the clinics totalled 4,847 – two-and-a-half times as many as in the same period the previous year (1,981 in 2012). From August to October, they were around twice as high as they had been in the same period the previous year. The referral rates then settled down again to being 32% higher in November and December 2013 than in November and December 2012.

In total, referrals rose from 12,142 in 2012 to 19,751 in 2013. There was almost a doubling in requests for BRCA1/2 testing, and many more enquiries about preventative mastectomies.
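As a rough check, the headline multiples can be reproduced from the figures quoted above. Here is a minimal arithmetic sketch in Python (the variable names are ours; the numbers are those reported in the study):

```python
# Sanity check of the reported referral figures - illustrative arithmetic
# using only the numbers quoted in this article, not study code.
jun_jul_2013 = 4847   # referrals, June and July 2013
jun_jul_2012 = 1981   # referrals, June and July 2012
total_2013 = 19751    # referrals across the whole of 2013
total_2012 = 12142    # referrals across the whole of 2012

print(f"June-July 2013 vs 2012: {jun_jul_2013 / jun_jul_2012:.2f}x")  # ~2.45, i.e. two-and-a-half times
print(f"Whole-year rise: {total_2013 / total_2012 - 1:.0%}")          # ~63% more referrals in 2013
```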

Encouragingly, internal reviews from specific centres show that there was no increase in inappropriate referrals.

 

How did the researchers interpret the results?

The researchers conclude that, “the Angelina Jolie effect has been long-lasting and global, and appears to have increased referrals to centres appropriately”.

 

Conclusion

This is an interesting study that reviewed how the trends in breast cancer-related referrals to breast cancer family history clinics and genetics centres in the UK changed between 2012 and 2013. The overall results show an increase in 2013, with particular peaks following high-profile media events – most notably, news of Angelina Jolie’s decision to have a double mastectomy in May of that year.

However, there are a couple of points to bear in mind when interpreting these results.

Firstly, the study did not have data available from all family history clinics and genetics centres in the UK, and the results are only representative of 40% of those who would have been eligible to participate. Therefore, it is not known whether the trends would be the same were data available from all services. However, this is still a sizeable and reasonably representative sample, so it is likely to give a good indication of the national picture.

Studies such as this can assess trends, but it is still not possible to know the direct cause of any changes. As this study says, there were two related events that received media attention in 2013: the publication of NICE guidance on familial breast cancer (pre-publication in January and final publication in June); and the higher-profile news reports in May of Angelina Jolie’s decision to have a double mastectomy due to her high risk of developing familial breast cancer.

While it may be plausible that the rises in referral rates to family history and genetics clinics were associated with this increased media attention, particularly the “Angelina effect”, it still cannot be proven that this is the only cause. Alternatively, the rise could partly reflect a gradual year-on-year increase in people’s health awareness.

It would be interesting to see how trends changed in years prior to 2012. It would also be interesting to know what has happened to the trend in referral rates through 2014. 

Overall, the particular peaks in referral rates in June and July 2013 suggest that the news related to Angelina Jolie, perhaps combined with the publication of NICE guidance on familial breast cancer testing around this time, is highly likely to be associated with the increased referral rates.

This is not surprising, given the influence the media is known to have on health behaviour.

It is also encouraging to know that all referrals to genetic or family history clinics were appropriate, suggesting that the media attention is likely to have had a positive effect in increasing health awareness.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Breast cancer test 'Angelina Jolie effect' found. BBC News, September 19 2014

The Angelina effect: Surge in women going for breast cancer checks after actress speaks out about her mastectomy. Daily Mail, September 19 2014

The 'Angelina Jolie effect': Her mastectomy revelation doubled NHS breast cancer testing referrals. The Independent, September 19 2014

Angelina Jolie's breast cancer announcement doubled number of women being tested: study. The Daily Telegraph, September 19 2014

'Angelina Jolie effect' credited for huge rise in double mastectomies to reduce breast cancer risk. Daily Mirror, September 19 2014

Angelina Jolie's op sparks huge surge in the number of cancer tests. Daily Express, September 19 2014

Links To Science

Evans DGR, Barwell J, Eccles DM, et al. The Angelina Jolie effect: how high celebrity profile can have a major impact on provision of cancer related services. Breast Cancer Research. Published online September 19 2014


Chokeberry extract 'boosts pancreas cancer chemo'

Medical News - Thu, 09/18/2014 - 14:30

“Wild berries native to North America may have a role in boosting cancer therapy,” BBC News reports.

It has been found – in a laboratory study using pancreatic cancer cells – that chokeberry extract may enhance the effectiveness of chemotherapy drugs used to treat pancreatic cancer.

Researchers tested an extract of chokeberry – a plant found on the eastern side of North America – on pancreatic cancer cells. They examined what happened to these cells in the laboratory when they were treated with chemotherapy alone, chokeberry extract alone, or with a combination of both.

Researchers found that adding the chokeberry extract to gemcitabine (a chemotherapy drug used in the treatment of pancreatic cancer) was more effective at halting the growth of cancer cells than the drug alone.

Pancreatic cancer is a condition with notoriously poor prognosis, and the possibility of any new treatment on the horizon is encouraging. However, it is uncertain whether these positive lab results would translate to a real-world setting. It is expected that, based on these promising results, further studies will look into the possibility of human trial(s).

For now, people with pancreatic cancer should not consider taking these chokeberry extracts or supplements, based on this very early-stage research. "Herbal remedies" should never be assumed to be safe, and some can react unpredictably with chemotherapy drugs.

 

Where did the story come from?

The study was carried out by researchers from Middlesex University, the University of Southampton, Portsmouth University and King’s College Hospital. It was funded by the Malaysian Ministry of Higher Education and a US charitable organisation called Have a Chance Inc.

The study was published in the peer-reviewed Journal of Clinical Pathology.

The BBC’s coverage was fair, pointing out that research was at an early stage and including independent comments from cancer experts on the need for human trials. The Daily Telegraph’s coverage only included comments from the study’s authors.

 

What kind of research was this?

This was a laboratory study in which scientists conducted various experiments examining the effect of adding extracts of chokeberry to pancreatic cancer cells.

The researchers point out that pancreatic cancer has a very poor outlook and a high mortality rate, with only 1-4% of those with the cancer surviving to five years. Only 10-20% of people with pancreatic cancer are suitable for surgery, and pancreatic cancer cells are resistant to both chemotherapy and radiotherapy.

Researchers say that many studies have explored the use of dietary agents, particularly antioxidant substances called polyphenols, found in fruits and vegetables. This is because of their ability to promote apoptosis – programmed cell death – in a variety of cancer cells. Previous studies have also shown that a number of polyphenols, including those from chokeberry extracts, have potential anticancer properties in malignant brain tumours.

Chokeberry (Aronia melanocarpa) is a shrub found in North American wet woods and swamps. Extracts and supplements are popular for their apparent health-giving qualities, including their high level of antioxidants.

 

What did the research involve?

Researchers used a line of pancreatic cancer cells called AsPC-1, which were cultured in the laboratory. In a number of experiments, they assessed how well the cells grew when treated with:

  • the chemotherapy drug gemcitabine alone at different doses (gemcitabine is one of the drugs sometimes given to people after they have had surgery to remove their pancreatic cancer, to try and prevent it returning)
  • differing levels of chokeberry extract
  • a combination of gemcitabine with chokeberry extract

They also carried out experiments to examine how chokeberry extract might cause the death of cancer cells, and at what concentration it caused cell death. As a control, they also tested chokeberry extract on the healthy cells that line blood vessels. These are taken from the veins of the umbilical cord and are often used in laboratory studies.

 

What were the basic results?

Researchers found that gemcitabine in combination with chokeberry extract was more effective at killing cancer cells than gemcitabine by itself. This difference in effect was also present when using lower doses of gemcitabine.

The analysis indicated that when incubated with gemcitabine for 48 hours, a concentration of one microgram per millilitre of chokeberry extract was required to induce cell death. Generally, the higher the concentration of chokeberry extract used in combination with gemcitabine, the more cancer cells were killed.

However, chokeberry extract alone without gemcitabine was not effective at killing the cancer cells at the concentrations tested.

Healthy cells were unaffected by chokeberry extract up to a concentration of 50 micrograms per millilitre.

 

How did the researchers interpret the results?

The researchers say that chokeberry extract and other micronutrients should be considered as part of cancer therapy. More specifically, they suggest that elements in chokeberry extract may have “supra-additive effects” when used in combination with at least one conventional anti-cancer drug.

In an accompanying press release, Bashir Lwaleed, at the University of Southampton, comments: "These are very exciting results. The low doses of the extract greatly boosted the effectiveness of gemcitabine when the two were combined. In addition, we found that lower doses of the conventional drug were needed, suggesting either that the compounds work together synergistically [where the whole is greater than the sum of its parts], or that the extract exerts a 'supra-additive' effect. This could change the way we deal with hard-to-treat cancers in the future."

 

Conclusion

It is now commonly thought that the antioxidants found in fruits and vegetables may have many health benefits, including reducing the risk of some cancers.

Pancreatic cancer is a condition with notoriously poor prognosis, and the possibility of any new treatment on the horizon is encouraging. This study found that when pancreatic cancer cells in the laboratory were directly treated with a combination of the chemotherapy drug gemcitabine and chokeberry extract, adding the extract enhanced the cancer-killing potential compared to the chemotherapy drug alone.

However, directly adding an extract to cells in the laboratory is a lot different from people actually taking chokeberry extracts themselves. Though these are promising findings, it is too early to say whether the micronutrients found in this extract could be effective in the treatment of pancreatic cancer. Further scientific study will be needed before initial developments could progress to the next stage of trials in people with pancreatic cancer, to see whether chokeberry extract might enhance the effects of chemotherapy. 

For now, as experts importantly highlight, people with pancreatic cancer should not consider taking these chokeberry extracts in the form of a herbal remedy or supplement, based on this very early-stage research.

Herbal remedies, just like pharmaceutical medicines, will have an effect on the body and can be potentially harmful.

They should therefore be used with the same care and respect as pharmaceutical medicines. Being "natural" doesn't necessarily mean they're safe to take.

Read more about herbal medicines and supplements.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Berries in cancer therapy trial. BBC News, September 18 2014

Study: Berries could boost standard cancer treatment. The Daily Telegraph, September 18 2014

Links To Science

Thani NAA, Keshavarz S, Lwaleed BA, et al. Cytotoxicity of gemcitabine enhanced by polyphenolics from Aronia melanocarpa in pancreatic cancer cell line AsPC-1. Journal of Clinical Pathology. Published online September 17 2014


Do artificial sweeteners raise diabetes risk?

Medical News - Thu, 09/18/2014 - 14:00

"Artificial sweeteners may promote diabetes, claim scientists," reports The Guardian. But before you go clearing your fridge of diet colas, the research in question – extensive as it was – was mainly in mice.

The researchers' experiments suggest artificial sweeteners, particularly saccharin, change the bacteria that normally live in the gut and help to digest nutrients.

These changes could reduce the body's ability to deal with sugar, leading to glucose intolerance, which can be an early warning sign of type 2 diabetes.

Assessments in human volunteers suggested the findings might also apply to people. But human studies so far are limited.

The researchers only directly tested the effect of saccharin in an uncontrolled study on just seven healthy adults over the course of a week. It is far too early to claim with any confidence that artificial sweeteners could be contributing to the diabetes "epidemic".

In the interim, if you are trying to reduce your sugar intake to control your weight or diabetes, you can always try to do so without using artificial sweeteners. For example, drinking tap water is a far cheaper alternative to diet drinks.

 

Where did the story come from?

This study was carried out by researchers at the Weizmann Institute of Science and other research centres in Israel.

It was funded by the Weizmann Institute and the Nancy and Stephen Grand Israel National Center for Personalized Medicine, as well as grants from various research funders globally.

The study was published in the peer-reviewed medical journal Nature.

The Guardian covered this study well, avoiding sensationalising the results. The paper and other media outlets, including the Daily Mail, included balanced quotes from various experts that highlight the study's limitations.

However, The Guardian reports the daily amount of saccharin used in the study in humans "was enough to sweeten around 40 cans of diet cola", but it is unclear where this estimate came from. Saccharin is not commonly used in diet drinks any longer, with aspartame being the preferred choice of most manufacturers. 

The Daily Express only included quotes from the study author (for) and a representative of the British Soft Drinks Association (against), which – as you would expect – polarised the debate.

 

What kind of research was this?

This was animal and human research looking at the effect of artificial sweeteners on bacteria in the gut and how this influences glucose metabolism.

Animal research is often one of the first steps in investigating theories about the biological effects of substances. It allows researchers to carry out studies that could not be done in humans.

Because of differences between species, results in animals may not always reflect what happens in humans, but they allow researchers to develop a better idea of how things might work.

They can then use this knowledge to develop ways to test their theories using information that can be obtained in humans. The researchers behind this study carried out both animal and early human tests of their theories, but the human part was relatively limited, as the focus was on the animal research.

The researchers carried out a cross-sectional analysis of artificial sweetener exposure and indicators of metabolic problems and gut bacteria. This approach is not able to determine whether the sweetener could be contributing to the outcomes seen, or vice versa.

The researchers also tested the short-term effect of saccharin on people who never consumed the sweetener, but without a control group.

 

What did the research involve?

The researchers compared the effect of consuming the artificial sweeteners against water, glucose and sucrose on glucose tolerance in lean mice and obese mice (mice eating a high-fat diet). Glucose tolerance testing assesses how quickly the body can clear glucose from the blood after glucose is eaten.

The body normally responds by quickly taking glucose up into cells for use and storage. If the body is slow to do this, this is called glucose intolerance. Severe glucose intolerance in humans indicates diabetes.

The researchers carried out various experiments to test whether the changes seen might relate to the artificial sweeteners having an effect on the bacteria in the gut, and exactly what these effects were.

They then carried out tests to see whether artificial sweetener consumption could have similar effects in humans. They did this by cross-sectionally assessing long-term artificial sweetener consumption and various indicators of glucose metabolism problems in a sample of 381 people who were not diabetic.

They also tested the effects of commercial saccharin given to seven healthy adult volunteers who did not normally consume saccharin. This was given over the course of six days at the US Food and Drug Administration's (FDA) maximum acceptable level (5mg per kg of body weight), equivalent to 120mg a day.

 

What were the basic results?

The researchers found both lean and obese mice consuming the artificial sweeteners saccharin, sucralose or aspartame in their water over 11 weeks developed glucose intolerance, while those consuming just water, glucose or sucrose did not.

Saccharin had the greatest effect on glucose intolerance, and the researchers focused most of their experiments on this sweetener. It caused glucose intolerance within five weeks when given at a dose equivalent to the FDA maximum acceptable daily intake in humans.

The researchers found the mice consuming the artificial sweeteners did not differ in their liquid and food consumption or their walking and energy expenditure compared with the controls. These factors were therefore considered to not be causing the glucose intolerance.

However, treating mice with antibiotics stopped the artificial sweeteners having this effect. Mice with no gut bacteria developed glucose intolerance when the researchers transplanted gut bacteria taken from mice consuming saccharin, or gut bacteria that had been incubated with saccharin in the lab. These results suggest the sweeteners were having some effect on the gut bacteria, which was causing the glucose intolerance.

The researchers also found drinking saccharin changed the types of bacteria in the mice's guts. Drinking water, glucose or sucrose did not have this effect.

The bacteria in the gut are involved in helping to digest nutrients. The specific changes seen in mice consuming saccharin suggest the sweeteners could be increasing the amount of energy that could be harvested from these nutrients.

In their human studies, the researchers found:

  • Long-term artificial sweetener consumption in 381 people who were not diabetic was associated with greater waist circumference, waist to hip ratio, levels of glucose in the blood after fasting, and worse glucose tolerance.
  • People who consumed artificial sweeteners had a different gut bacteria composition from people who did not consume artificial sweeteners.
  • Four out of seven healthy adult volunteers who did not normally consume artificial sweeteners developed worse glucose tolerance after consuming the maximum US FDA-recommended level of saccharin for six days. These four people showed gut bacteria differences compared with the three people who did not show an effect, both before and after consuming the saccharin.
  • Transferring gut bacteria from the volunteers who showed a response into bacteria-free mice caused the mice to develop glucose intolerance. This was not seen when gut bacteria from the non-responding volunteers were transferred.

 

How did the researchers interpret the results?

The researchers concluded that consuming artificial sweeteners increases the risk of glucose intolerance in mice and humans by changing the gut bacteria and therefore affecting their function.

They say their findings suggest artificial sweeteners "may have directly contributed to enhancing the exact epidemic [obesity and diabetes] that they themselves were intended to fight".

 

Conclusion

This fascinating and controversial study in mice and humans suggests artificial sweeteners, particularly saccharin, could lead to glucose intolerance by having an effect on gut bacteria. The fact that both the animal and human experiments seem to support this adds some weight to the findings.

However, the researchers' investigations in humans are currently limited. They assessed the link between long-term artificial sweetener consumption and various indicators of metabolic problems, such as fat around the waist, using a cross-sectional design. This cannot establish which came first and therefore which could be influencing the other. Also, the only confounder in humans that seemed to be considered was body mass index.

The researchers also only directly tested the effect of one artificial sweetener (saccharin) in an uncontrolled study on just seven healthy adults over the course of a week. Saccharin is less commonly used than other artificial sweeteners, and the participants also consumed it at the maximum US FDA-recommended level (equivalent to 120mg a day).

The findings suggest – at least in the short term – saccharin may only affect glucose response in some people, depending on their gut bacteria. Larger studies, which also incorporate a control group, are needed to see whether they support the results and whether other sweeteners have similar effects.

Some earlier human studies have found links between artificial sweeteners and weight gain and increased diabetes risk. However, it has generally been assumed that this reflects reverse causation: people who already have problems with their weight are more likely to choose calorie-free sweeteners, and it is their weight that puts them at greater risk, rather than the sweeteners themselves.

This study raises the intriguing possibility that artificial sweeteners could also be directly affecting how our bodies respond to sugar. However, this research is only in its early stages, and we cannot say for certain whether artificial sweeteners are contributing to the diabetes epidemic.

In the interim, if you are trying to reduce your sugar intake, you can do so without replacing sugar with artificial sweeteners.

For people trying to lose weight and those with diabetes who are trying to control their blood sugar, it is important to do what works for them as this is more likely to be sustainable in the long term.

For some people, choosing food and drinks containing artificial sweeteners rather than sugar may help with these goals.

At this stage, it is far too early to drop artificial sweeteners from the arsenal of sugar alternatives that could be used to fight the diabetes and obesity epidemic.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Artificial sweeteners may promote diabetes, claim scientists. The Guardian, September 17 2014

Sweeteners 'linked to rise in obesity and diabetes'. The Independent, September 17 2014

Low-calorie sweeteners found in diet drinks RAISE the risk of obesity and diabetes by affecting how the body processes sugar. Daily Mail, September 18 2014

Artificial food sweeteners linked to diabetes. Daily Express, September 17 2014

Sweeteners 'could cause obesity' scientists warn. The Daily Telegraph, September 17 2014

Artificial sweeteners linked to glucose intolerance. New Scientist, September 17 2014

Links To Science

Suez J, Korem T, Zeevi D, et al. Artificial sweeteners induce glucose intolerance by altering the gut microbiota. Nature. Published online September 17 2014


Cosmetics blamed for raised child asthma risk

Medical News - Wed, 09/17/2014 - 14:10

"Chemicals in make-up and perfumes fuelling rise in children with asthma," reports the Mail Online.

One scientist, the website claims, suggests that women should take measures such as checking the contents of their make-up and avoiding using plastic containers for food.

This story is based on research following 300 inner-city children in the US and their mothers from the time of their pregnancy to age 11. The women's urine was tested in the third trimester for a group of chemicals called phthalates as a measure of the child's potential exposure in the womb.

They found the children of mothers who had the highest levels of exposure to two phthalates (butylbenzyl phthalate [BBzP] and di-n-butyl phthalate [DnBP]) in pregnancy were more likely to report asthma-like symptoms such as wheezing between the ages of 5 and 11, and to have current asthma.

Crucially, BBzP and DnBP are among several phthalates that have been banned from children's toys and cosmetics in the EU. The Daily Telegraph reports that from 2015 BBzP will be routinely banned. Countries outside the EU may have different legislation on the use of these chemicals.

The study's relatively small size means the size of the potential impact on risk is uncertain. Another limitation is that the study only looked at African American and Dominican inner-city women, so the results may not apply to women more generally.

It's also difficult to say for certain whether the phthalates are directly causing the increase in asthma cases. The authors themselves acknowledge that the findings need to be treated with caution until they are checked in other studies.

 

Where did the story come from?

The study was carried out by researchers from Columbia University and other research centres in the US. It was funded by the National Institute of Environmental Health Sciences.

The study was published in the peer-reviewed journal Environmental Health Perspectives.

The Daily Telegraph and The Guardian both crucially note the restrictions on the use of these phthalates in the EU. The Guardian states the US has fewer restrictions on phthalate use.

This difference may contribute to the Mail Online's reports that US scientists are "urging parents to reduce the risk by avoiding using plastic containers, perfume and heavily scented washing detergents".

The researchers do not make these suggestions in their research paper, which instead urges caution in interpreting its results, although one of the authors is quoted in the Mail Online as making some suggestions for reducing exposure.

This may cause unnecessary concern, given that the Mail Online does not report on the existing, and impending, restrictions on the use of these chemicals in the EU. It is worth bearing in mind that many of the Mail Online's readers are based in the US, so this content may have been aimed at them.

 

What kind of research was this?

This was a prospective cohort study looking at whether exposure to chemicals called phthalates while in the womb is linked to a child's risk of developing asthma.

Phthalates are found in many consumer products, such as food packaging materials and various household products, including some beauty products. As such, people may consume some phthalates in their food or through the wider environment.

Previous studies suggested phthalates in the environment and in the body may be associated with asthma, but no studies have looked at the impact of exposure to these chemicals in the womb.

This type of study is the best way to assess whether there is an association between an earlier exposure and a later outcome in humans. While such research can provide evidence of an association, it is not possible to say for certain whether the exposure directly causes the outcome.

To weigh up whether the exposure is causing the outcome, researchers need to draw on a wide range of evidence, including human and animal studies. All or most of the evidence needs to support the possibility that the exposure causes the outcome before researchers can be relatively confident this is the case.

 

What did the research involve?

The researchers collected urine from 300 pregnant women and measured the levels of various phthalates in these samples as an indication of the exposure of the foetus to these chemicals.

They then followed up the women's children when they were aged 5 to 11 to identify anyone who had developed asthma. They analysed whether higher levels of exposure to phthalates was linked with an increased risk of developing asthma.

Pregnant African American or Dominican women were enrolled to take part in the Columbia Center for Children's Environmental Health (CCCEH) longitudinal birth cohort study between 1998 and 2006. To be eligible, they had to have lived in Northern Manhattan or the South Bronx for at least one year before their pregnancy.

Women who smoked or took illegal drugs, who had not received prenatal care early in their pregnancy, or who had medical conditions such as diabetes or HIV were not eligible to participate. Of the 727 women taking part in the CCCEH study, 300 had provided all of the samples and information needed to be analysed.

The women provided urine samples for testing in their third trimester of pregnancy, and the children provided samples at ages three, five and seven.

Researchers measured four chemicals (called metabolites) in the samples, formed during the breakdown of four different types of phthalates. These phthalates have long chemical names, which are abbreviated to DEHP, BBzP, DnBP and DEP.

They also measured levels of another type of chemical called bisphenol A, which is also found in consumer plastics and has suggested links to various illnesses.

The mothers were sent asthma questionnaires five times when the children were between the ages of five and 11. These asked about whether the children had asthma symptoms or took asthma medication over the previous year.

The first time the mother reported that their child had symptoms that could indicate asthma (such as wheeze or whistling in the chest, or a cough lasting more than a week) or took asthma medications, the child was referred for a standard assessment by a doctor, including lung function tests.

Based on this assessment, the children were classified as having current asthma or not current asthma (despite a history of symptoms).

The researchers also assessed various factors that could impact results (confounders) as they were thought to be associated with people's phthalate exposure or asthma. This included things such as:

  • exposure to household tobacco smoke prenatally or after birth
  • maternal asthma
  • financial hardship during pregnancy (lack of food, housing, gas, electricity, clothing, or medicine)
  • prenatal bisphenol A exposure
  • child exposure to phthalates after birth (as measured in the child's urine)

They took these factors into account in their analyses, which looked at whether the level of prenatal exposure to phthalates was related to a child's risk of developing asthma.

 

What were the basic results?

Just over half of the children (51%) were assessed by a doctor because they had been reported to have wheezing or other asthma-related symptoms, or to have used asthma medications. After assessment, 31% were judged to have current asthma and 20% to not have current asthma.

The levels of prenatal exposure to two phthalates, called butylbenzyl phthalate (BBzP) and di-n-butyl phthalate (DnBP), showed significant association with having a history of asthma-like symptoms and having current asthma.

Compared with children whose mothers had the lowest levels of these phthalates prenatally (levels in the bottom third of measurements), children whose mothers had the highest levels (levels in the top third of measurements) were:

  • about 40% more likely to have a history of asthma symptoms (relative risk [RR] 1.39 and 1.44 for the two different phthalates; confidence intervals [CI] showed the links were statistically significant)
  • about 70% more likely to have current asthma (RR 1.72 and 1.78 for the two different phthalates; CI showed the links were statistically significant) – see the worked conversion after this list
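As a rough guide to reading these figures, a relative risk (RR) above 1 can be converted into a "percentage more likely" by subtracting 1. A minimal illustration in Python, using the point estimates quoted in the list above (the article does not say which RR belongs to which phthalate, so the labels are generic, and the confidence intervals are not reproduced here):

```python
# Converting a relative risk (RR) into an approximate "% more likely"
# figure. The RRs below are the point estimates quoted above.
def pct_increase(rr: float) -> float:
    """Percentage increase in risk implied by a relative risk."""
    return (rr - 1) * 100

for outcome, rr in [("history of asthma symptoms", 1.39),
                    ("history of asthma symptoms", 1.44),
                    ("current asthma", 1.72),
                    ("current asthma", 1.78)]:
    print(f"{outcome}: RR {rr} -> about {pct_increase(rr):.0f}% more likely")
```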

Analyses suggest the levels of prenatal exposure to the other two phthalates, called DEHP and DEP, were not associated with a history of asthma symptoms or current asthma. The children's levels of exposure to the phthalates from ages three to seven were not associated with childhood asthma.

 

How did the researchers interpret the results?

The researchers concluded that, "prenatal exposure to BBzP and DnBP may increase the risk of asthma among inner-city children". They note that, as this is the first study to find this, the results need to be interpreted cautiously until they are replicated in other studies.

 

Conclusion

This study, analysing 300 inner-city women and their children, suggests there may be a link between exposure to certain phthalate chemicals prenatally and a child's risk of asthma and asthma symptoms between the ages of 5 and 11.

The strength of this study is its design – prospectively setting out the data it wanted to collect, gathering it in a standardised way, and following up the participants over time.

Many studies looking at the links between chemical exposures and adverse outcomes measure both at the same time, meaning it is not clear whether one came before, and therefore might directly influence, the other.

This study also had children with reported asthma symptoms assessed by a doctor to confirm their diagnosis, which is likely to be more accurate than relying solely on parental reporting.

The study does have its limitations, however:

  • The study was relatively small and in a very select group of women (of African American and Dominican ethnicity, living in inner-city areas). Results may not be representative of what might be found in a larger, more diverse, sample.
  • The small sample size also means it's hard to be precise about the level of risk that could be associated with the chemicals: the lower ends of the confidence intervals suggest the increase in the risk of asthma symptoms could be as little as about 5%, and for current asthma as little as about 15%.
  • Phthalate metabolites in the pregnant women's urine were only measured once, in the third trimester, and this may not be representative of exposures throughout the whole pregnancy. The researchers report that studies comparing levels of these chemicals in people's urine over time show only "moderate" consistency.
  • As with all studies of this type, other factors may have an effect on the results (confounders). The authors did take into account a range of potential confounders, but their effect may not be completely removed, and unmeasured factors may also be having an effect.

These are early findings on this particular association, and it's not possible to say for certain whether these chemicals are definitely having an effect on the child's asthma risk. The authors of the study themselves are appropriately cautious, suggesting that their findings need to be confirmed in other studies before firm conclusions can be drawn.

The study also did not assess the sources of the women's exposure to phthalates. The researchers say that, based on previous studies, PVC products could be a likely "substantial source" of BBzP exposure in the home.

If evidence accumulates that chemicals used in consumer products may be associated with health risks, it's likely that government agencies will review this evidence and come to a decision about whether their use needs to be limited.

Phthalates are a group of chemicals that are being extensively studied, and there are already EU-wide regulatory controls on their use.

For example, there is a ban on using six phthalates, including BBzP and DnBP, in toys and products for children under the age of three. BBzP and DnBP are also banned in cosmetics in the EU.

The UK Food Standards Agency also says there has been a move away from using phthalates in some food packaging in Europe, and has assessed the levels of phthalates in food and the associated potential risks.

The Daily Telegraph reports BBzP "will be among three chemicals whose use is routinely banned by the EU" from 2015. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Some household plastics could increase risk of childhood asthma, study finds. The Guardian, September 17 2014

Chemicals in make-up and perfumes fuelling rise in children with asthma. Daily Mail, September 17 2014

Asthma risk from exposure to chemicals in the womb. The Daily Telegraph, September 17 2014

Links To Science

Whyatt RM, Perzanowski MS, Just AC, et al. Asthma in Inner-City Children at 5-11 Years of Age and Prenatal Exposure to Phthalates: The Columbia Center for Children’s Environmental Health Cohort. Environmental Health Perspectives. Published online September 17 2014


HPV urine test could screen for cervical cancer

Medical News - Wed, 09/17/2014 - 14:00

"A simple urine test which can detect the human papilloma virus (HPV) could offer women a much less invasive alternative to [current] cervical cancer screening," The Independent reports.

Research found urine-based testing for HPV DNA showed signs it might be accurate enough to provide a viable screening method, given further research and development.

The papers report on a review of 14 diverse studies involving 1,443 women. All of the studies looked at the accuracy of using a self-administered urine test designed to detect HPV DNA. HPV is a group of viruses, some of which can cause cervical cancer in women.

The advantage of such a self-administered urine test is it may improve uptake of cervical screening. As the researchers speculate, some women may be put off by current screening methods (which involve using a tool to painlessly remove a sample of cells from the cervix) as they may find it embarrassing and time consuming.

This drop-off in the number of women attending screening, especially younger women, is of concern, as around 3,000 cases of cervical cancer are diagnosed each year in the UK.

The review findings are promising, but need to be followed up by further investigation and the standardisation of the urine testing method so the potential of using these tests as a screening tool can be assessed.

 

Where did the story come from?

The study was carried out by researchers from Barts and The London School of Medicine and Dentistry (England), the Clinical Biostatistics Unit at Hospital Ramon y Cajal (Spain), and CIBER Epidemiologia y Salud Publica (Spain).

The publication stated the study did not receive any funding.

The study was published in the peer-reviewed British Medical Journal as an open access article, so it is free to read online.

Generally, the media reported the story accurately but tended to focus on the new urine test as a replacement for the current smear test.

An alternative angle, and perhaps a more likely scenario, would be that the test would be used in tandem with the current smear test, providing an additional option for some women and adding more choice.

In any case, an initial "positive" urine sample result would more than likely be followed up by current cervical screening methods to confirm or disprove the preliminary result.

 

What kind of research was this?

This was a systematic review and meta-analysis to determine the accuracy of testing for HPV DNA in urine to detect cervical HPV in sexually active women.

HPV is one of the most common sexually transmitted infections. Infection with specific strains of HPV has been associated with the development of cervical cancer, a preventable and treatable disease.

Current routine screening uses a cervical cytology-based method to detect cells likely to develop into cancer – precancerous cervical intraepithelial neoplasia (CIN).

Cervical screening has traditionally relied on samples of cervical cells taken from the cervix (neck of the womb/uterus) using a spatula under direct vision by a health professional.

Despite screening, cervical cancer is still the most common malignancy in women aged under 35, the publication states. It says there has been a downward trend in coverage of screening in the under 35s, which may partly be because the current screening using cervical cytology sampling is invasive, time consuming and requires a clinician.

Less invasive, more convenient ways of screening are therefore desirable, such as a urine test. According to the authors, this has led to the rigorous evaluation of HPV DNA testing of cervical samples as a potential method of primary screening, and HPV testing is now set to replace cytology in several national screening programmes.

 

What did the research involve?

The review team searched for studies assessing the accuracy of urine HPV DNA tests in sexually active women. Data was collected relating to patient characteristics, study context, risk of bias, and test accuracy.

The researchers pooled the results of the different studies to estimate the overall test accuracy to detect HPV DNA in general, but also to detect HPV subtypes linked to a higher risk of cervical cancer.

To find relevant literature, the team searched several electronic databases from study inception to December 2013, then manually searched reference lists of these articles for further relevant articles and contacted topic experts. No language restrictions were placed on the literature search.

Studies were included where the detection of HPV DNA in urine was compared with its detection in the cervix in any sexually active woman concerned about HPV infection or the development of cervical cancer. Studies were excluded if a different or no reference standard was used, or if they were of a case-control design.

 

What were the basic results?

The search identified 16 relevant research papers based on 14 studies involving 1,443 women in total. The main results were:

  • Urine detection of any HPV had a pooled sensitivity (the proportion of the urine tests correctly showing HPV was present) of 87% (95% confidence interval [CI], 78% to 92%) and specificity (the proportion of the urine tests correctly showing HPV was absent) of 94% (95% CI, 82% to 98%) – a worked example of these two measures follows this list.
  • Urine detection of high-risk HPV had a pooled sensitivity of 77% (95% CI, 68% to 84%) and specificity of 88% (95% CI, 58% to 97%).
  • Urine detection of HPV 16 and 18 subtypes – some of the subtypes most likely to cause cancer – had a pooled sensitivity of 73% (95% CI, 56% to 86%) and specificity of 98% (95% CI, 91% to 100%).
  • Most studies tested for HPV DNA in first void urine samples – that is, a sample from the first urine passed in the morning after waking. Other studies used midstream samples or random samples taken at any time of day.
  • Meta-analysis showed an increase in sensitivity when urine samples were collected as first void compared with random or midstream.
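To make the accuracy measures concrete, the sketch below shows how sensitivity and specificity are calculated from a 2x2 table of test results. The counts are hypothetical – chosen only so the output mirrors the pooled "any HPV" estimates above – and are not data from the review:

```python
# Sensitivity and specificity from a 2x2 confusion matrix. The counts
# are hypothetical, picked to mirror the pooled "any HPV" estimates
# quoted above (87% sensitivity, 94% specificity).
true_pos, false_neg = 87, 13   # women with cervical HPV: urine test positive / negative
true_neg, false_pos = 94, 6    # women without cervical HPV: urine test negative / positive

sensitivity = true_pos / (true_pos + false_neg)  # proportion of true infections the test detects
specificity = true_neg / (true_neg + false_pos)  # proportion of non-infections the test correctly clears

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")  # 87%, 94%
```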

 

How did the researchers interpret the results?

The authors commented that, "Our review demonstrates the accuracy of detection of HPV in urine for the presence of cervical HPV. When cervical testing for HPV is sought, urine-based testing should be an acceptable alternative to increase coverage for subgroups that are hard to reach.

"However, results must be interpreted with caution owing to variation between individual studies for participant characteristics, lack of standardised methods of urine testing, and the surrogate nature of cervical HPV for cervical disease."

 

Conclusion

This systematic review and meta-analysis indicates that urine tests for detecting HPV DNA might be feasible for screening women for cervical cancer based on an evidence base of 14 diverse studies involving 1,443 women.

While it is feasible this type of test might be useful for screening, there were many limitations in the evidence base reviewed. This means its effectiveness as a screening tool is still up for debate and is unproven.

Issues include:

  • the large variation between individual studies for participant characteristics
  • the large variation in estimates of test sensitivity and specificity between individual studies
  • the lack of standardised methods of urine testing and collection
  • the surrogate nature of detecting cervical HPV DNA to predict cervical disease

This ultimately meant a relatively diverse set of screening tests, participants and results was lumped together to give a summary result of test accuracy. The pooled result may therefore not be a good representation of the underlying studies, as they are not a uniform group.

The BMJ editorial summed up how future research could address many of these limitations. "If serious consideration is to be given to using urine HPV testing in cervical screening programmes, then further evaluation is essential, including an adequately powered, high-quality prospective study comparing urine testing with vaginal self-sampling and reporting the detection of high grade CIN [pre-cancer] as the primary endpoint.

"Participants could do both tests without the quality of one sample being reduced by the other. The study could be performed in women attending for routine screening, with urine and vaginal samples collected before the 'gold standard' cervical sample. Ideally, samples would be obtained using standardised protocols and tested using a single validated HPV test."

On the flip side, a strength of this study was the systematic review's search protocol, which seems robust and appeared to have a good chance of identifying all the relevant literature.

We agree with the study authors and the BMJ editorial that these findings are promising, but need to be followed up by further investigation and standardisation of urine testing used in this way.

The benefits of such a test, if successful, are potentially large. For example, it may increase screening rates, ultimately saving lives through the early detection of cancer. Women may be more comfortable, and find it more convenient, to test for HPV using a self-administered urine test rather than the current smear test, which requires a visit to a medical establishment, with all of its attendant connotations (such as the need to make an appointment and the potential emotional effects).

However, because the urine test has not been proven to work as a screening tool, it is not available routinely on the NHS. In the meantime, there are three main ways to reduce the risk of cervical cancer: vaccination, current cervical cancer screening (the smear test), and safe sex using a condom.

Find out more information about cervical screening.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

New urine test could replace invasive smear tests. The Independent, September 17 2014

Could the smear test be replaced with a urine sample? Scientists hope less invasive screening method could boost attendance. Daily Mail, September 18 2014

HPV urine test 'should be used instead of cervical smear'. ITV News, September 17 2014

A cervical cancer screening you take at home: Breakthrough in HPV battle. Daily Express, September 17 2014

Links To Science

Pathak N, Dodds J, Zamora J, Khan K. Accuracy of urinary human papillomavirus testing for presence of cervical HPV: systematic review and meta-analysis. BMJ. Published online September 16 2014


Sugar intake guideline 'needs lowering'

Medical News - Tue, 09/16/2014 - 15:00

“Sugar intake must be slashed further,” reports BBC News today.

The news reports follow an ecological study estimating the burden of disease caused by sugar-related tooth decay in adults and children across a life course, in a number of different countries.

It calculated that the burden would be significantly reduced by setting a target limit of less than 3% of total energy intake from sugar. This is much lower than the current figure outlined by the World Health Organization (WHO), which says that sugars should be less than 10% of a person’s daily calorie intake.

This reassessment of the target figure is not official from either the WHO or Public Health England, but has led to widespread media reports stating, “action needed to curb sugar” (Mail Online), while others have outlined possible sugar bans in schools and hospitals (The Daily Express and The Daily Telegraph) or sugar-related taxes. These angles were not put forward in the academic publication, which only suggested new, lower targets for sugar intake should be developed. It did not specify how to achieve them.

Potential limitations of the study include the accuracy of the sugar intake estimates and the percentage of total intake derived from sugar. This may or may not affect their overall conclusion that the existing target, of less than 10%, should be lowered.

On its own, this study does not appear robust enough to lead to policy changes.

 

Where did the story come from?

The study was carried out by researchers from University College London, who reported that no external funds were required for these analyses, interpretation or the writing of the paper.

The study was published in the peer-reviewed medical journal BMC Public Health. It is an open access journal, so it can be read for free online.

The reporting of the study was generally accurate across media outlets, with most coverage bringing in other issues around sugar bans, sugar taxes and other potential control measures in schools. These were not proposed in the original publication, so their source is unclear.

 

What kind of research was this?

This was an ecological study of national data on sugar intake and dental decay in many countries around the world, to assess the burden of disease in adults and children. 

Tooth decay is a common problem that occurs when acids in your mouth dissolve the outer layers of your teeth. It is also known as dental decay or dental caries. Although levels of tooth decay have decreased over the last few decades, it is still one of the most widespread health problems in the UK.

Sugar is a known cause of tooth decay, but the research team say no analysis has been made of the lifetime burden of dental decay by sugar. They wanted to estimate this and also see whether the WHO goal of less than 10% of total energy intake from sugar is optimal and compatible with low levels of dental decay.

 

What did the research involve?

The researchers gathered information on the prevalence and incidence of dental caries from nationally representative datasets. They then looked for links with national estimates of sugar intake from dietary surveys, or from national intake as assessed from the UN Food and Agriculture Organization Food Balance Sheet.

Analysis looked at countries where sugar intake had changed due to wartime restrictions or as part of a broader nutritional transition linked to becoming a more industrialised nation. The main analysis established a dose response relationship between sugar consumption and risk of dental decay across a life course. This was different to many previous studies that focused on the impact in children only. The impact of fluoride, in the water supply or applied through toothpaste, on the dose response relationship was also considered.

Sugar intake was defined differently in different national dietary surveys, but generally referred to sucrose consumption, often termed “non-milk extrinsic sugars”. In the US, fructose syrups are included, and in the UK, the term “non-milk extrinsic sugars” is used to define these non-lactose disaccharides, with maltose making a negligible contribution. The statistics do not take account of sugars contained in dried fruit.

Estimates of national sugar consumption were used to calculate the proportion of total energy a person might be getting from sugar each day, and were based on an estimate of average global energy intake (men, women and children) of 2,000 calories per day.
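
To put these percentages into everyday terms, the short sketch below converts energy-share targets into approximate grams of sugar per day. It is a minimal illustration only, assuming the study's generic 2,000-calorie daily intake and the standard conversion of roughly 4 calories per gram of sugar; none of the code comes from the paper.

```python
# Illustrative only: convert "% of total energy from sugar" targets
# into approximate grams of sugar per day.
ENERGY_INTAKE_KCAL = 2000    # generic daily intake assumed in the study
KCAL_PER_GRAM_SUGAR = 4      # standard energy value of sugar

for target_percent in (3, 5, 10):
    kcal_from_sugar = ENERGY_INTAKE_KCAL * target_percent / 100
    grams_per_day = kcal_from_sugar / KCAL_PER_GRAM_SUGAR
    print(f"<{target_percent}% of energy is roughly under {grams_per_day:.0f}g of sugar a day")
```

On these assumptions, the WHO's current figure of less than 10% works out at around 50g of sugar a day, the pragmatic 5% target at around 25g, and the study's 3% cut-off at roughly 15g.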

 

What were the basic results?

Detailed information from Japan indicated that dental decay was directly related to sugar intake as it increased from 0% to 10% of total daily energy intake; this rise was associated with a 10-fold increase in dental caries over several years.

Adults aged over 65 had nearly half of all tooth surfaces affected by caries, even when they lived in water-fluoridated areas, where high proportions of people used fluoridated toothpastes. This did not occur in countries where the intake of sugar was less than 3% of total daily energy intake.

Therefore, the cut-off they calculated to reduce the burden of disease caused by sugar was a daily intake of less than 3% of total energy intake. They suggested that less than 5% might be a more pragmatic target for policy makers. The current WHO recommendation is less than 10%.

 

How did the researchers interpret the results?

The researchers concluded that, “there is a robust log-linear relationship of [dental] caries to sugar intakes from 0% to 10% sugar [of total energy]. A 10% sugar intake induces a costly burden of caries. These findings imply that public health goals need to set sugar intakes ideally <3%, with <5% as a pragmatic goal, even when fluoride is widely used. Adult as well as children’s caries burdens should define the new criteria for developing goals for sugar intake.”

 

Conclusion

This ecological study looked at national data sets to estimate the burden of disease caused by sugar-related tooth decay in adults and children across a life course. It calculated that the burden would be significantly reduced by setting a target limit of less than 3% of total energy intake coming from sugar. This is much lower than the current figure outlined by the WHO, which states that sugar should be less than 10% of a person's daily calorie intake.

This reassessment of the target figure is not official, but has led to widespread media reports stating, “action needed to curb sugar” (Mail Online), with others outlining possible sugar bans in schools and hospitals (Express and Telegraph) or sugar-related taxes. These angles were not put forward in the academic publication, which only went as far as suggesting that new, lower targets for sugar intake should be developed. It did not specify how the reduction could or should occur.

The study has several potential limitations that reduce its reliability and call into question the precision of its estimates and the 3% cut-off. In particular, its estimates of sugar intake, and especially of the percentage of total energy derived from sugar, are likely to be imprecise. For the latter, it used a generic figure of 2,000 calories per day for men, women and children alike, which may not accurately represent intake across a very diverse demographic of people from a range of different countries.

The severity of the health effects of sugar has long been debated and was somewhat popularised in the 1972 book “Pure, White and Deadly” by Professor John Yudkin. Discussions since then have considered whether more restrictions should be placed on sugar, given the many estimates of its widespread negative effect on health in terms of weight gain, tooth decay, diabetes and contribution to other diseases.

This has also included debate around whether the food and drinks industries should do more (through voluntary or mandatory mechanisms) to reduce the sugar content of their products, particularly those marketed at children, in a similar vein to efforts to reduce the salt and saturated fat content of food in the 1980s and 90s.

On its own, this study does not appear robust enough to lead to policy changes; however, the debate is clearly underway, as some media reports indicated both the WHO and advisors in England may be considering a cut in their recommendations for sugar consumption.

These considerations are likely to be based on much stronger or broader evidence than this single study.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Sugar intake must be slashed further, say scientists. BBC News, September 16 2014

Action needed to curb sugar intake. Mail Online, September 16 2014

Ban sugary foods in schools and hospitals, doctors say. The Daily Telegraph, September 16 2014

Sugar ban: Junk food should be axed from school to stop tooth decay. The Daily Express, September 16 2014

Links To Science

Sheiham A, James WPT. A reappraisal of the quantitative relationship between sugar intake and dental caries: the need for new criteria for developing goals for sugar intake. BMC Public Health. Published online September 16 2014

Categories: Medical News

Brain scans offer fresh insights into ADHD

Medical News - Tue, 09/16/2014 - 14:50

"Doctors could soon diagnose ADHD in children with a brain scan," is the over-exuberant headline from the Mail Online.

The underlying research, based on comparing the brain scans of 133 people with attention deficit hyperactivity disorder (ADHD) with people without the condition, highlighted areas of brain connectivity that were different in the two groups. These differences may be a result of the slower maturation of these connections in people with ADHD. 

These regions of the brain have previously been associated with some of the symptoms characteristic of the condition, such as impulsivity. This suggests these areas may be involved in the development of ADHD.

The study authors' conclusions were considered and did not suggest that improvements in ADHD diagnosis were imminent based on these results alone. They called for further research to confirm and validate their findings and to develop further understanding of the neurological basis of ADHD.

If you think you or your child may have ADHD, you might want to consider speaking to your GP about the condition.

 

Where did the story come from?

The study was carried out by researchers from the Department of Psychiatry at the University of Michigan, and was funded by the US National Institutes of Health, a University of Michigan Center for Computational Medicine pilot grant, and the John Templeton Foundation.

It was published in the peer-reviewed journal, Proceedings of the National Academy of Sciences (PNAS).

The Mail Online coverage was generally accurate, but their headline suggesting that "Doctors could soon diagnose ADHD in children with a brain scan" read too much into these early-stage results.

The researchers neither tested nor validated the use of brain scans, either alone or coupled with current diagnostic methods, as a way of diagnosing ADHD.

 

What kind of research was this?

This was a case-control study comparing the brain scans of children and young adults with ADHD with those of typically developing control participants without ADHD.

The researchers state individuals with ADHD have delays in brain maturation. This study aimed to investigate this in detail by establishing which parts of the brain, and which connections between different parts of the brain circuitry, were delayed in people with ADHD.

 

What did the research involve?

The research involved comparing the brain scans of 133 people diagnosed with ADHD, the cases (age range 7.2 to 21.8 years), with 443 typically developing controls (age range selected to match cases). The analysis compared the connectivity between a number of distinct areas of the brain to look for differences between the cases and controls.

The scans assessed functional connectivity to gauge which areas of the brain were functionally connected to other areas. They referred to this approach as a "connectomic" method.

This is slightly different from many previous studies, which mainly looked at whether certain areas are active or not, or at the relative sizes of different areas of the brain. The analysis took account of age differences in the two samples.

 

What were the basic results?

The scans showed differences between the brain connectivity maturation of people with ADHD and those without.

Those with ADHD had a lag in the maturation of connections in a specific brain network region called the default mode network, a poorly understood structure whose functions are uncertain.

They also had delays in connections between the default mode network and two other areas called task-positive networks, which deal with tasks requiring attention: the frontoparietal network and ventral attention network.

The research team indicated these areas of brain connectivity and interaction have previously been associated with the behavioural characteristics of ADHD, such as impulsivity, providing some degree of external validity for the importance of this region.

 

How did the researchers interpret the results?

The researchers stated their results suggest "maturational lag of regulatory control networks contributes to inattention and/or impulsivity across different clinical populations, and they invite new research aimed at direct comparative investigation".

 

Conclusion

This research, based on comparing the brain scans of people with ADHD with those without, highlighted areas of brain connectivity that were different in the two groups. These regions have previously been associated with some of the symptoms characteristic of ADHD. 

The study's authors were considered in their conclusions and did not suggest that improvements in ADHD diagnosis could be made based on their results. They called for further research to confirm and validate their findings and to develop further understanding of the neurological basis of ADHD.

It is feasible this sort of technology might be used to aid the diagnosis of ADHD or other mental health conditions in the future, but this is very speculative based on what is a relatively small early-stage study.

Larger studies comparing more diverse groups of people with and without ADHD could shed more light on whether this sort of scan could be used as a diagnostic tool.

This is just one avenue of research – a related aim of this type of scanning is to generally increase understanding of the neurological basis of ADHD, which could then lead to new treatments.

ADHD is currently diagnosed through a formal assessment performed by a health professional such as a psychiatrist, a doctor specialising in children's health, a learning disability specialist, a social worker, or an occupational therapist with expertise in ADHD.

If you think you or your child may have ADHD, you might want to consider speaking to your GP about the condition first.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Doctors could soon diagnose ADHD in children with a brain scan. Mail Online, September 15 2014

Links To Science

Sripada CS, Kessler D, Angstadt M. Lag in maturation of the brain's intrinsic functional architecture in attention-deficit/hyperactivity disorder. PNAS. Published online September 15 2014

Categories: Medical News

'Rebooted' stem cells may lead to new treatments

Medical News - Mon, 09/15/2014 - 15:00

"Scientists have managed to 'reset' human stem cells," the Mail Online reports. It is hoped studying these cells will provide more information about the mechanics of early human development.

This headline comes from a laboratory study that reported finding a way to turn the clock back on human stem cells so they exhibit characteristics more similar to seven- to nine-day-old embryonic cells.

These more primitive cells are, in theory, capable of making all and any type of cell or tissue in the human body, and are very valuable for researching human development and disease.

Previous research efforts have successfully engineered early-stage stem cells capable of making several cell and tissue types, called pluripotent stem cells.

However, pluripotent stem cells engineered in the laboratory are not perfect and display subtle differences to natural stem cells.

This study involved using biochemical techniques to return pluripotent human stem cells to a more primitive "ground-state" stem cell.

If this technique is confirmed as reliable and can be replicated in other studies, it could ultimately lead to new treatments, although this possibility is uncertain.

While the immediate impact is probably minimal, it's hoped this research may lead to advances in the years to come.

 

Where did the story come from?

The study was carried out by researchers from the University of Cambridge, the University of London and the Babraham Institute.

It was funded by the UK Medical Research Council, the Japan Science and Technology Agency, the Genome Biology Unit of the European Molecular Biology Laboratory, European Commission projects PluriMes, BetaCellTherapy, EpiGeneSys and Blueprint, and the Wellcome Trust.

The study was published in the peer-reviewed journal Cell as an open access article, so it's available to read online for free.

The Mail Online's coverage was accurate and reflected many of the facts summarised in the press release issued by the Medical Research Council. Interviews with the research's authors and other scientists in the field added useful extra insight to interpret and contextualise the findings.

 

What kind of research was this?

This was a laboratory study to develop and test a new technique to return pluripotent human stem cells to an earlier, more pristine developmental state.

Pluripotent stem cells are early developmental cells capable of becoming several different cell types. Some stem cells are said to be totipotent (capable of becoming all types of cell), such as early embryonic stem cells shortly after fertilisation.

These types of cells are very valuable in developmental science research as they allow the study of developmental processes in the laboratory that aren't possible to study in a foetus shortly after conception.

As the MRC press release explains: "Capturing embryonic stem cells is like stopping the developmental clock at the precise moment before they begin to turn into distinct cells and tissues.

"Scientists have perfected a reliable way of doing this with mouse cells, but human cells have proved more difficult to arrest and show subtle differences between the individual cells. It's as if the developmental clock has not stopped at the same time and some cells are a few minutes ahead of others."

The aim of this study was therefore to devise and test a way of turning back the clock in human pluripotent stem cells so they exhibit more totipotent characteristics. This was also termed returning the pluripotent cells to "ground-state" pluripotency.

 

What did the research involve?

This research took existing human pluripotent stem cells and subjected them to a battery of laboratory-based experiments in an effort to produce stable stem cells showing a more ground-state pluripotency.

This chiefly involved culturing the human stem cells in a range of biological growth factors and other chemical stimuli designed to coax them into earlier phases of development. Extensive monitoring of the cell characteristics, such as self-replication, gene and protein activity (expression), occurred along the way.

 

What were the basic results?

The main findings include:

  • Short-term expression of proteins NANOG and KLF2 was able to put into action a biological pathway leading to the "reset" of pluripotent stem cells to an earlier state. The MRC press release indicated this was equivalent to resetting the cells to those found in an embryo before it implants in the womb, at around seven to nine days old.
  • Inhibiting well-established biochemical signalling pathways involving extracellular signal-regulated kinases (ERK) and protein kinase C (both proteins involved in cell regulation) sustained the "rewired" state, allowing cells to stay in the arrested developmental state.
  • The reset cells could self-renew – a key feature of stem cells – without biochemical ERK signalling, and their observable characteristics and genetics remained stable.
  • DNA methylation – a naturally occurring way of regulating gene expression associated with cellular differentiation – was also dramatically reduced, suggesting a more primitive state.

These features, the authors commented, distinguished these reset cells from other types of embryo-derived or induced pluripotent stem cell, and aligned them more closely with the ground-state (totipotent) embryonic stem cells seen in mice.

 

How did the researchers interpret the results?

The researchers indicate their findings demonstrate the "feasibility of installing and propagating functional control circuitry for ground-state pluripotency in human cells". They added the reset can be achieved without permanent genetic modification.

The research group explained the theory that a "self-renewing ground state similar to rodent ESC [embryonic stem cells] may pertain to primates is contentious", but "our findings indicate that anticipated ground state properties may be instated in human cells following short-term expression of NANOG and KLF2 transgenes. The resulting cells can be perpetuated in defined medium lacking serum products or growth factors."

 

Conclusion

This laboratory study showed human pluripotent stem cells could be coaxed into a seemingly more primitive developmental state, exhibiting some of the key features of an equivalently primitive embryonic stem cell in mice – namely, the ability to stably self-renew and to develop into a range of other types of cell.

If replicated and confirmed by other research groups, this finding may be useful to developmental biologists in their efforts to better understand human development and what happens when it goes wrong and causes disease. But this is the hope and expectation for the future, rather than an achievement that has been realised using this new technique.

Sounding a note of caution, Yasuhiro Takashima of the Japan Science and Technology Agency and one of the authors of the study, commented on the Mail Online website: "We don't yet know whether these will be a better starting point than existing stem cells for therapies, but being able to start entirely from scratch could prove beneficial."

This is the start rather than the end of the journey for this new technique and the cells derived from it. The technique will need to be replicated by other research groups in other conditions to ensure its reliability and validity.

The cells themselves will also need to be studied further to see whether they really have the stability and versatility expected of true primitive stem cells under different conditions and over longer time horizons. This will include looking for any subtle or unusual behaviour further down the development line, as has been found with other types of stem cell thought to be primitive.

Overall, this study is important to biologists and medical researchers as it potentially gives them new tools to investigate human development and associated diseases. For the average person the immediate impact is minimal, but may be felt in the future if new treatments arise.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

British scientists 'reset' human stem cells to their earliest state: 'Major step forward' could lead to development of life-saving medicines. Mail Online, September 11 2014

Ultimate human stem cells created in the lab. New Scientist, September 12 2014

Links To Science

Takashima Y, Guo G, Loos R et al. Resetting Transcription Factor Control Circuitry toward Ground-State Pluripotency in Human. Cell. Published online September 11 2014

Categories: Medical News

Could meditation help combat migraines?

Medical News - Mon, 09/15/2014 - 14:00

“Daily meditation may be the most effective way of tackling migraine,” the Daily Express reports.

This headline is not justified, as it was based on a small pilot study involving just 19 people.

It showed that an eight-week "mindfulness-based stress reduction course" (a combination of meditation and yoga-based practices) led to benefits in measures of headache duration and subsequent disability in 10 adult migraine sufferers, compared to nine in a control group who received usual care.

There were no statistically significant differences found for the arguably more important measures of migraine frequency (migraines per month) and severity. However, the study may have been too small to reliably detect any differences in these outcomes. Both groups continued to take any migraine medication (preventative or for treatment during a headache) they were already taking before the trial.

Overall, this trial showed weak and tentative signs that mindfulness-based stress reduction might be beneficial in a very small group of highly select adults with migraines. However, we will only be able to say it works with any confidence after much larger studies have been carried out.

 

Where did the story come from?

The study was carried out by researchers from Wake Forest School of Medicine, North Carolina (US) and Harvard Medical School, Boston. It was funded by the American Headache Society Fellowship and the Headache Research Fund of the John Graham Headache Center, Brigham and Women’s Faulkner Hospital.

The study was published in the peer-reviewed journal Headache.

One of the study authors reported receiving research support from GlaxoSmithKline, Merck and Depomed. All other authors report no conflicts of interest.

The Daily Express’ coverage of this small study arguably gave too much prominence and validity to the findings, indicating they were reliable: “The ancient yoga-style technique lowers the number of attacks and reduces the agonising symptoms without any nasty side effects”.

Many of the limitations associated with the study were not discussed, including the fact that some of the findings may have been chance, due to the small sample size.

To be fair, the researchers themselves were forthright in highlighting the limitations of their research.

 

What kind of research was this?

This was a small randomised controlled trial (RCT) investigating the effects of a standardised eight-week mindfulness-based stress reduction course in adults with migraines.

Stress is known to be associated with headaches and migraines, but the research group said solid evidence on whether stress-reducing activities might reduce the occurrence or severity of migraines was lacking. Because of this, they designed a small RCT to test one such activity – an eight-week mindfulness-based stress reduction course.

This was a small pilot RCT. These are usually designed to provide proof of concept that something might work and is safe before moving on to larger trials involving more people. The larger trials are designed to reliably and robustly prove effectiveness and safety. Hence, on their own, pilot RCTs rarely provide reliable evidence of effectiveness.

 

What did the research involve?

Researchers took a group of 19 adults who had been diagnosed with migraines (with or without aura) and randomly divided them into two groups. One group (n=10) received an eight-week mindfulness-based stress reduction course, while the others (n=9) received “usual care” – they were asked to continue taking any migraine medication they had, and not to change the dose during the eight-week trial.

During the mindfulness trial, participants were also allowed to continue to take any medication they usually would. The main outcome of interest was change in migraine frequency from the start of the trial to eight weeks. Secondary measures included change in headache severity, duration, self-efficacy, perceived stress, migraine-related disability/impact, anxiety, depression, mindfulness and quality of life from the start to the end of the eight-week trial period. 

The standardised mindfulness-based stress reduction course met for eight weekly two-hour sessions, plus one “mindfulness retreat day” comprising six hours led by a trained instructor, and followed a method created by Dr Jon Kabat-Zinn. The intervention is based on systematic and intensive training in mindfulness meditation and mindful hatha yoga in the context of mind/body medicine. Participants were encouraged to practice at home for 45 minutes per day, on at least five additional days per week, to build a daily mindfulness practice. Compliance was monitored through class attendance and daily logs of home practice.

To be included in the trial, participants had to have reported between 4 and 14 migraine days per month, have more than a year of migraine history, be over 18, be in good general health, and be able and willing to attend weekly mindfulness sessions and to practice every day at home for up to 45 minutes. Exclusion criteria included already participating in yoga or meditation practice and having a major physical or mental illness.

All participants in both groups were taking medications for their headaches.

At the end of the eight-week period, the control group were offered the mindfulness course as a courtesy for their participation in the trial. In an attempt to blind the control group to treatment allocation, they were told there were two start periods for the eight-week trial and that they were simply in the second, continuing usual care in the interim.

For all final analyses, migraines were more precisely defined as those headaches that were more than 4 hours long with a severity of 6 to 10, based on patient diary information.

The study aimed to recruit 34 people, but only recruited 19, so was underpowered to detect statistically significant differences in the outcomes assessed.

All participants kept a daily headache diary for 28 days before the study began.

What were the basic results?

All nine people completed the eight-week stress reduction course, averaging 34 minutes of daily meditation. In both groups, more than 80% took daily prophylactic migraine medication, such as propranolol, and 100% used abortive medication, such as triptans, when a migraine struck. There were no adverse events recorded, suggesting the intervention was safe, at least in the short term.

The main findings were:

Primary outcome

Mindfulness participants had 1.4 fewer migraines per month compared to controls (intervention: 3.5 migraines during the 28-day run-in, reduced to 1.0 per month during the eight-week study; control: 1.2 to 0 migraines per month; 95% confidence interval (CI) [−4.6, 1.8]), an effect that did not reach statistical significance in this pilot sample. The lack of statistical significance means the result could be due to chance alone.

Secondary outcomes

Headaches were less severe (−1.3 points/headache on a 0-10 scale, [−2.3, 0.09], on the borderline of statistical significance) and shorter (−2.9 hours/headache, [−4.6, −0.02], statistically significant) in the intervention group compared to the controls.

Scores on the Migraine Disability Assessment and the Headache Impact Test-6 (a widely used test that assesses the impact of migraines on quality of life and day-to-day function) dropped in the intervention group compared with the control group (−12.6, [−22.0, −1.0] and −4.8, [−11.0, −1.0], respectively), both statistically significant changes. Self-efficacy and mindfulness also improved in the intervention group compared with the controls (13.2 [1.0, 30.0] and 13.1 [3.0, 26.0], respectively), and these were also statistically significant findings.
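
As a side note on how to read these figures: a difference between groups is conventionally judged statistically significant at the 5% level when its 95% confidence interval excludes zero (the no-difference value). The minimal sketch below, which simply re-checks the intervals quoted above and is not code from the study, shows the rule in action.

```python
# Illustrative: a between-group difference is conventionally
# "statistically significant" if its 95% CI excludes zero.
results = {
    "migraines per month": (-1.4, -4.6, 1.8),
    "severity (0-10 scale)": (-1.3, -2.3, 0.09),
    "duration (hours)": (-2.9, -4.6, -0.02),
}

for outcome, (estimate, lower, upper) in results.items():
    # Significant only if the whole interval lies on one side of zero
    significant = lower > 0 or upper < 0
    print(f"{outcome}: {estimate} [{lower}, {upper}] -> "
          f"{'significant' if significant else 'not significant'}")
```

Run on these numbers, only the duration interval lies wholly below zero, which is why duration was the clearly significant outcome while migraine frequency was not.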

 

How did the researchers interpret the results?

The researchers indicated the mindfulness-based stress reduction course was “safe and feasible for adults with migraines. Although the small sample size of this pilot trial did not provide power to detect statistically significant changes in migraine frequency or severity, secondary outcomes demonstrated this intervention had a beneficial effect on headache duration, disability, self-efficacy and mindfulness. Future studies with larger sample sizes are warranted to further evaluate this intervention for adults with migraines”.

 

Conclusion

This pilot RCT, based on just 19 adult migraine sufferers, showed an eight-week mindfulness-based stress reduction course led to benefits for headache duration, disability, self-efficacy and mindfulness measures, compared to a control group who received usual care. There were non-significant benefits observed for measures of migraine frequency and severity. Both groups continued to take any migraine medication (prophylactic or for treatment during a headache) they were already taking before the trial.

The research group themselves were very reasonable in their conclusions and called for larger trials to investigate this issue further. As they acknowledge, relatively little can be said with reliability based on this small pilot study alone. This is because the findings of small studies often cannot be generalised to the wider population.

For example, what are the chances the experience of a group of nine people will represent the experiences of the UK population as a whole who could be different ages, have different attitudes and expectations of meditation and have different medical backgrounds? 

Also, larger trials are able to more accurately estimate the magnitude of any effect, whereas small studies are more vulnerable to chance or extreme findings. Taken together, a pilot study of this size cannot and does not prove that "mindfulness-based stress reduction" is beneficial for migraine sufferers. This point may have been missed by those reading The Daily Express’ coverage, which appeared to accept some of the positive findings at face value and assume widespread effectiveness, without considering the limitations inherent in a pilot RCT of this size.

It is also worth noting that participants were recruited if they suffered between 4 and 14 migraines per month, but the actual frequency of headaches was much lower for all participants during both the run-in period and the eight-week study period. Indeed, some participants in each group had no headaches during each period. This further reduces the ability of the study to show any significant difference between the groups.

Overall, the eight-week mindfulness-based stress reduction course showed tentative signs that it might be beneficial in a very small group of highly select adults with migraines. However, we will only be able to say it is beneficial with any confidence after much larger studies have been carried out. Until then, we simply don’t know if this type of course will help migraine sufferers, hence the Daily Express’ headline is premature.

That said, adopting a psychological approach to chronic pain conditions, rather than relying on medication alone, can help improve symptoms in some people. Read more about coping with chronic pain.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Daily meditation could help 8m conquer pain of migraines. Daily Express, September 13 2014

How To Cure A Migraine? Study Says Meditation Might Be The Answer. Huffington Post, September 12 2014

Links To Science

Wells RE, Burch R, Paulsen RH, et al. Meditation for Migraines: A Pilot Randomized Controlled Trial. Headache – the Journal of Head and Face Pain. Published online July 18 2014

Categories: Medical News

Pregnant drink binges harm kids' mental health

Medical News - Fri, 09/12/2014 - 13:29

“Binge drinking ONCE during pregnancy can damage your child's mental health and school results,” says the Mail Online. 

The headline follows an analysis of results from a study including thousands of women and their children. In analyses of up to 7,000 children, researchers found that children of women who engaged in binge drinking at least once in pregnancy, but did not drink daily, had slightly higher levels of hyperactivity and inattention problems. These children also scored on average about one point lower in exams.

The results appear to suggest potential for some links, particularly in the area of hyperactivity/inattention. However, the differences identified were generally small, and weren’t always statistically significant after taking into account potential confounders. The links also weren’t always found across both boys and girls, or across both teachers’ and parents’ assessment of the child.

It’s already official advice for women to avoid binge drinking or getting drunk when pregnant. Pregnant women are especially advised to avoid alcohol in the first three months of pregnancy. If women choose to drink alcohol, official advice is to stick to at most two units (preferably one) and no more than twice a week (preferably once).

 

Where did the story come from?

The study was carried out by researchers from the University of Nottingham and other research centres in the UK and Australia. The ongoing study is funded by the Medical Research Council, the Wellcome Trust and the University of Bristol. The study was published in the peer-reviewed journal European Child & Adolescent Psychiatry.

The media covered the research reasonably well, although reports sometimes referred generally to effects on children’s mental health, which may make readers think they are referring to diagnoses of mental health conditions, which is not the case.

The study looked at teacher- and parent-rated levels of problems in areas such as “hyperactivity” and conduct, but did not assess whether the children had psychiatric diagnoses, such as ADHD.

 

What kind of research was this?

This research was part of a prospective cohort study, the Avon Longitudinal Study of Parents and Children (ALSPAC). The current analysis looked at the effect of binge drinking in pregnancy on children's mental health and school achievement at age 11. ALSPAC researchers recruited 85% of the pregnant women in the Avon region due to give birth between 1991 and 1992, and have been assessing these women and their children regularly since.

The researchers reported that previous analyses of this study have suggested that there was a link between binge drinking in pregnancy and the child having poorer mental health at ages four and seven as rated by their parents, particularly girls.

A prospective cohort study is the most appropriate and reliable study design for assessing the impact of binge drinking in pregnancy on the child’s heath later in life. For studies of this type, the main difficulty is trying to reduce the potential impact of factors other than the factor of interest (binge drinking) that could affect results. The researchers do this by measuring these factors and then using statistical methods to remove their effect in their analyses. This may not entirely remove their effect, and unknown and unmeasured factors could be having an effect, but it is the best way we have to try and isolate the impact of interest alone.

 

What did the research involve?

The researchers assessed the women’s alcohol consumption by questionnaire at 18 and 32 weeks into their pregnancy. They assessed the children's mental health at age 11 using parent and teacher questionnaires, and their school performance using national examination results. They then analysed whether children of mothers who had engaged in binge drinking during pregnancy differed from children of mothers who had not.

Of the over 14,000 pregnant women in the study, 7,965 provided information on their alcohol consumption at both 18 and 32 weeks. They were asked about:

  • how many days in the past four weeks she had drunk at least four units of alcohol
  • how much and how often she had drunk alcohol in the past two weeks or around the time the baby first moved (only asked at 18 weeks)
  • how much she currently drank in a day (only asked at 32 weeks)

The researchers used this information to determine if the women:

  • had engaged in binge drinking at least once in pregnancy (defined as four or more units/drinks in a day) 
  • drank at least one drink a day at either 18 or 32 weeks

The children’s mental health was assessed using a commonly used standard questionnaire given to teachers and parents. This questionnaire (called the “Strengths and Difficulties Questionnaire”) gives an indication of the level of problems in four areas: 

  • emotional
  • conduct
  • hyperactivity/inattention
  • peer relationships

The Strengths and Difficulties Questionnaire also gives an overall score, and this is what the researchers focused on, as well as the conduct and hyperactivity/inattention scores. The researchers also obtained the children’s results on standard Key Stage 2 examinations taken in the final year at primary school. The researchers had information on 4,000 children for hyperactivity and conduct problems, and just under 7,000 children for academic results.

When the researchers carried out their analyses to look at the effect of binge drinking, they took into account a range of factors that could potentially influence results (potential confounders). These included:

  • mother’s age in pregnancy
  • parents’ highest education level
  • smoking in pregnancy
  • drug use in pregnancy
  • maternal mental health in pregnancy
  • whether the parents owned their house
  • whether the parents were married
  • whether the child was born prematurely
  • the child’s birthweight
  • the child’s gender

 

What were the basic results?

The researchers found that about a quarter of women (24%) reported having engaged in binge drinking at least once in pregnancy. Over half (59%) of the women who reported binge drinking at 18 weeks in their pregnancy also reported having engaged in binge drinking at 32 weeks.

Less than half of the women (about 44%) who had engaged in binge drinking reported doing so on more than two occasions in the past month. Women who had engaged in binge drinking were more likely to have more children, to also smoke or use illegal drugs in pregnancy, to have experienced depression in pregnancy, to have a lower level of education, to be unmarried and to be in rented accommodation.

Initial analyses showed children of mothers who had engaged in binge drinking at least once in pregnancy had higher levels of parent- and teacher-rated problems, and worse school performance, than children of mothers who had not. The average difference in the three problem scores was less than one point (possible score range 0 to 10 for conduct and hyperactivity/inattention problems, and 0 to 40 for the total problems score), and the average KS2 score was 1.82 points lower.

However, once the researchers took into account potential confounding factors, these differences were no longer large enough to rule out the possibility of having occurred by chance (that is, they were no longer statistically significant).

The researchers repeated their analyses for girls and boys separately. They found that even after adjustment, girls whose mothers had engaged in binge drinking in pregnancy did have higher levels of parent-rated conduct, hyperactivity/inattention and total problems (average score difference less than one point).

When the researchers looked at binge drinking and daily drinking separately, after adjustment they found children of women who had engaged in binge drinking in pregnancy, but did not drink daily, had higher levels of teacher-rated hyperactivity/inattention problems (average score 0.28 points higher) and lower KS2 scores (average 0.81 points lower).

 

How did the researchers interpret the results?

The researchers concluded that occasional binge drinking in pregnancy appears to increase risk of hyperactivity/inattention problems and lower academic performance in children at age 11, even if the women do not drink daily.

 

Conclusion

This prospective cohort study has suggested that even occasional binge drinking in pregnancy may increase the risk of hyperactivity/inattention problems and lower academic performance when the children reach 11 years old.

The strengths of the study are its design – a wide and representative population sample, with data collected prospectively – and its use of standardised questionnaires to assess the children’s outcomes.

Assessing the impact of alcohol in pregnancy on children’s outcomes is difficult. This is partly because assessing alcohol consumption is always difficult. People may not want to report their true consumption, and even if they do, there are difficulties in accurately remembering past consumption. In addition, as this link can only be assessed by observational studies (ethically you couldn’t do a trial that randomised pregnant women to binge drink), it is always possible that additional factors are having an effect.

The study found that women who had engaged in binge drinking in pregnancy were also more likely to have other unhealthy behaviours, such as smoking, and to be socioeconomically disadvantaged. The researchers tried to remove the effects of all of these factors, but this may not entirely remove the effect.

This latest study carried out a large number of analyses looking at different outcomes. The differences identified were generally small, and weren’t always large enough to be statistically significant after taking into account potential confounders. They also weren’t always found across both boys and girls, or across both teachers’ and parents’ assessments of the child. However, the results do appear to suggest potential for some links, particularly in the area of hyperactivity/inattention.

The researchers note that even with small individual effects, the effect across the population as a whole can be considerable. The small effect may also reflect that it represents an average effect across all levels of binge drinking – ranging from one to many times.

We may never have completely concrete proof of an exact level at which harm occurs, and under which alcohol consumption in pregnancy is safe. Therefore, we have to work with the best information that is available. There is growing evidence that as well as how much we drink, the pattern of how we drink may be important.

Current UK recommendations from the National Institute for Health and Care Excellence (NICE) already advise that women who are pregnant should avoid binge drinking or getting drunk. It is also recommended that:

  • women who are pregnant should avoid alcohol in the first three months of pregnancy
  • if women choose to drink alcohol later in pregnancy, they should drink no more than two (preferably only one) UK units, no more than twice (preferably once) a week.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Binge drinking ONCE during pregnancy can damage your child's mental health and school results. Daily Mail, September 11 2014

Prenatal alcohol consumption linked to mental health problems. The Guardian, September 11 2014

This is how much alcohol you can have during pregnancy before it harms newborn’s mental health. Metro, September 10 2014

Links To Science

Sayal K, et al. Prenatal exposure to binge pattern of alcohol consumption: mental health and learning outcomes at age 11. European Child & Adolescent Psychiatry. Published September 11 2014

Categories: Medical News

Weight discrimination study fuels debate

Medical News - Fri, 09/12/2014 - 12:41

Much of the media has reported that discriminatory “fat shaming” makes people who are overweight eat more, rather than less.

The Daily Mail describes how, “telling someone they are piling on the pounds just makes them delve further into the biscuit tin”. While this image may seem like a commonsense “comfort eating” reaction, the headlines are not borne out by the science.

In fact, the news relates to findings for just 150 people who perceived any kind of weight discrimination, including threats and harassment, and poorer service in shops – not just friendly advice about weight.

The research in question looked at body mass index (BMI) and waist size for almost 3,000 people aged over 50 and how it changed over a three- to five-year period. The researchers analysed the results alongside the people’s reports of perceived discrimination. But because of the way the study was conducted, we can’t be sure whether the weight gain resulted from discrimination or the other way around (or whether other unmeasured factors had an influence).

On average, the researchers found that the 150 people who reported weight discrimination had a small gain in BMI and waist circumference over the course of the study, while those who didn’t had a small loss.

Further larger-scale research into the types of discrimination that people perceived may bring more answers on the best way to help people maintain a healthy weight.
 

Where did the story come from?

The study was carried out by researchers from University College London, and was funded by the National Institute on Aging and the Office for National Statistics. Individual authors received support from ELSA funding and Cancer Research UK. The study was published in the peer-reviewed journal Obesity.

The media in general have perhaps overinterpreted the meaning from this study, given its limitations. The Daily Telegraph’s headline says, “fat shaming makes people eat more”, but the study hasn’t examined people’s dietary patterns, and can’t prove whether the weight gain or discrimination came first.

 

What kind of research was this?

This was an analysis of data collected as part of the prospective cohort study, the English Longitudinal Study of Ageing (ELSA). This analysis looked at the associations between perceived weight discrimination and changes in weight, waist circumference and weight status.

The researchers say that negative attitudes towards people who are obese have been described as “one of the last socially acceptable forms of prejudice”. They cite the common perception that discrimination against overweight and obese people encourages them to lose weight, but suggest it may actually have a detrimental effect.

A cohort study is a good way of examining how a particular exposure is associated with a particular later outcome. However, in the current study the way in which the data was collected meant that it was not possible to clearly determine whether the discrimination or the weight gain came first.

As with all studies of this kind, finding that one factor has a relationship with another does not prove cause and effect. There may be many other confounding factors involved, making it difficult to say how and whether perceived weight discrimination is directly related to the person’s weight. The researchers did make adjustments for some of these factors in analyses, to try and remove their effect.

 

What did the research involve?

The English Longitudinal Study of Ageing is a long-term study started in 2001/02. It recruited adults aged 50 and over and has followed them every two years. Weight, height and waist circumference have been objectively measured by a nurse every four years.

Questions on perceptions of discrimination were asked only once, in 2010/11, and were completed by 8,107 people in the cohort (93%). No body measures were taken at this time, but they were taken one to two years before (2008/09) and after (2012/13) this. Complete data on body measurements and perceptions of discrimination were available for 2,944 people.

The questions on perceived discrimination were based on those previously established in other studies and asked how often in your day-to-day life: 

  • you are treated with less respect or courtesy
  • you receive poorer service than other people in restaurants and stores
  • people act as if they think you are not clever
  • you are threatened or harassed
  • you receive poorer service or treatment than other people from doctors or hospitals

Respondents could choose one of a range of answers for each – from “never” to “almost every day”. The researchers report that because few people reported any discrimination, they grouped responses to indicate any perceived discrimination versus no perceived discrimination. People who reported discrimination in any situation were asked to indicate what they attributed this experience to, from a list of options including weight, age, gender and race.

The researchers then looked at the change in BMI and waist circumference between the 2008/09 and 2012/13 assessments, and at how this was related to perceived weight discrimination reported at the midpoint. Normal weight was classed as a BMI less than 25, overweight between 25 and 30, “obese class I” between 30 and 35, “obese class II” 35 to 40, and “obese class III” a BMI above 40.
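
For reference, BMI is calculated as weight in kilograms divided by the square of height in metres. The following minimal sketch (an illustration of that formula and the bands just described, not code from the study) shows how a measurement would be classified.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def weight_status(bmi_value: float) -> str:
    """Classify a BMI value using the bands described in the study."""
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    if bmi_value < 35:
        return "obese class I"
    if bmi_value < 40:
        return "obese class II"
    return "obese class III"

# Example: 80kg at 1.75m gives a BMI of about 26.1 ("overweight")
print(weight_status(bmi(80, 1.75)))
```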

In their analyses the researchers took into account age, sex and household (non-pension) income, as an indicator of socioeconomic status.

 

What were the basic results?

Of the 2,944 people for whom complete data was available, 150 (5.1%) reported any perceived weight discrimination, ranging from 0.7% of normal-weight individuals to 35.9% of people in obesity class III. There were various differences between the 150 people who perceived discrimination and those who didn’t. People who perceived discrimination were significantly younger (62 versus 66 years), had a higher BMI (35 versus 27) and a larger waist circumference (112cm versus 94cm), and were less wealthy.

On average, people who perceived discrimination gained 0.95kg in weight between the 2008/09 and 2012/13 assessments, while people who didn’t perceive discrimination lost 0.71kg (an average difference between the groups of 1.66kg).

There were significant changes in the overweight group (a gain of 2.22kg among those perceiving any discrimination versus a loss of 0.39kg in the no-discrimination group), and in the obese group overall (a loss of 0.26kg in the discrimination group versus a loss of 2.07kg in the no-discrimination group). There were no significant differences in any of the obesity subclasses.

People who perceived weight discrimination also gained an average 0.72cm in waist circumference, while those who didn’t lost an average of 0.40cm (an average difference of 1.12cm). However, there were no other significant differences by group.

Among people who were obese at the first assessment, perceptions of discrimination had no effect on their risk of remaining obese (odds ratio (OR) 1.09, 95% confidence interval (CI) 0.46 to 2.59), with most obese people staying obese at follow-up (85.6% at follow-up versus 85.0% before). However, among people who were not obese at baseline, perceived weight discrimination was associated with higher odds of becoming obese (OR 6.67, 95% CI 1.85 to 24.04).
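
To unpack the odds ratio figures: for a ratio measure, the no-effect value is 1 rather than 0, so a 95% CI that spans 1 indicates a non-significant result, while a very wide interval signals an imprecise estimate. The sketch below (illustrative only, reusing the two ORs quoted above rather than reproducing the study's analysis) makes both checks explicit.

```python
# Illustrative: for an odds ratio (OR), "no effect" is 1, not 0.
# A 95% CI spanning 1 means the result is not statistically significant;
# a large upper/lower ratio flags an imprecise estimate.
odds_ratios = {
    "remaining obese": (1.09, 0.46, 2.59),
    "becoming obese": (6.67, 1.85, 24.04),
}

for outcome, (or_value, lower, upper) in odds_ratios.items():
    significant = lower > 1 or upper < 1
    imprecision = upper / lower  # rough width of the interval
    print(f"{outcome}: OR {or_value} [{lower}, {upper}] -> "
          f"{'significant' if significant else 'not significant'}, "
          f"CI width ratio {imprecision:.1f}")
```

On these numbers, the "becoming obese" interval is very wide (its upper bound is around 13 times its lower bound), which is the imprecision the conclusion below refers to.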

 

How did the researchers interpret the results?

The researchers conclude that their results, “indicate that rather than encouraging people to lose weight, weight discrimination promotes weight gain and the onset of obesity. Implementing effective interventions to combat weight stigma and discrimination at the population level could reduce the burden of obesity”.

 

Conclusion

This analysis of data collected as part of the large English Longitudinal Study of Ageing finds that people who reported experiencing discrimination as a result of their weight had a small gain in BMI and waist circumference over the study years, while those who didn’t had a small loss.

There are a few important limitations to bear in mind. Most importantly, this study could not determine whether the weight changes or the discrimination came first. And, finding an association between two factors does not prove that one has directly caused the other. The relationship between the two may be influenced by various confounding factors. The authors tried to take into account some of these, but there are still others that could be influencing the relationship (such as the person’s own psychological health and wellbeing).

As relatively few people reported weight discrimination, results were not reported or analysed separately by the type or source of the discrimination. Therefore, it is not possible to say what form the discrimination took or whether it came from health professionals or the wider population.

People’s perception of discrimination and the reasons for it may be influenced by their own feelings about their weight and body image. These feelings themselves could also be having a detrimental effect against them being able to lose weight. This does not mean that discrimination does not exist, or that it should not be addressed. Instead, both factors may need to be considered in developing successful approaches to reducing weight gain and obesity.

Another important limitation of this study is that despite the large initial sample size of this cohort, only 150 people (5.1%) perceived weight discrimination. When further subdividing this small number of people by their BMI class, this makes the numbers smaller still. Analyses based on small numbers may not be precise. For example, the very wide confidence interval around this odds ratio for becoming obese highlights the uncertainty of this estimate.

Also, the findings may not apply to younger people, as all participants were over the age of 50.

Discrimination based on weight or other characteristics is never acceptable and is likely to have a negative effect. The National Institute for Health and Care Excellence has already issued guidance to health professionals, noting the importance of non-discriminatory care of overweight and obese people.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Fat shaming 'makes people eat more rather than less'. The Daily Telegraph, September 11 2014

Telling someone they're fat makes them eat MORE: People made to feel guilty about their size are six times as likely to become obese. Mail Online, September 11 2014

‘Fat shaming’ makes people put on more weight, study claims. Metro. September 10 2014

Links To Science

Jackson SE, Beeken RJ, Wardle J. Perceived weight discrimination and changes in weight, waist circumference, and weight status. Obesity. Published online September 11 2014

Categories: Medical News

'Food addiction' doesn't exist, say scientists

Medical News - Thu, 09/11/2014 - 14:30

“Food is not addictive ... but eating is: Gorging is psychological compulsion, say experts,” the Mail Online reports.

The news follows an article in which scientists argue that – unlike drug addiction – there is little evidence that people become addicted to the substances in certain foods.

Researchers argue that instead of thinking of certain types of food as addictive, it would be more useful to talk of a behavioural addiction to the process of eating and the “reward” associated with it.

The article is a useful contribution to the current debate over what drives people to overeat. It’s a topic that urgently needs answers, given the soaring levels of obesity in the UK and other developed countries. There is still a good deal of uncertainty about why people eat more than they need. The way we regard overeating is linked to how eating disorders are treated, so fresh thinking may prove useful in helping people overcome compulsive eating habits.

 

Where did the story come from?

The study was carried out by researchers from various universities in Europe, including the Universities of Aberdeen and Edinburgh. It was funded by the European Union.

The study was published in the peer-reviewed journal Neuroscience & Biobehavioral Reviews on an open-access basis, so it is free to read online. However, the online article that has been released is not the final version, but an uncorrected proof.

Press coverage was fair, although the article was treated somewhat as if it was the last word on the subject, rather than a contribution to the debate. The Daily Mail’s use of the term “gorging” in its headline was unnecessary, implying sheer greed is to blame for obesity. This was not a conclusion found in the published review.

What kind of research was this?

This was not a new piece of research, but a narrative review of the scientific evidence for the existence of an addiction to food. It says that the concept of food addiction has become popular among both researchers and the public, as a way to understand the psychological processes involved in weight gain.

The authors of the review argue that the term food addiction – echoed in terms such as “chocaholic” and “food cravings” – has potentially important implications for treatment and prevention. For this reason, they say, it is important to explore the concept more closely.

They also say that “food addiction” may be used as an excuse for overeating, and as a way of placing blame on the food industry for producing so-called “addictive foods” high in fat and sugar.

What does the review say?

The researchers first looked at the various definitions of the term addiction. Although they say a conclusive scientific definition has proved elusive, most definitions include notions of compulsion, loss of control and withdrawal syndromes. Addiction, they say, can be either related to an external substance (such as drugs) or to a behaviour (such as gambling).

In formal diagnostic categories, the term has largely been replaced. Instead it is often replaced with “substance use disorder” – or, in the case of gambling, a “non-substance-related disorder”.

One classic finding on addiction is the alteration of central nervous system signalling, involving the release of chemicals with “rewarding” properties. These chemicals, the authors say, can be released not just by exposure to external substances, such as drugs, but also by certain behaviours, including eating.

The authors also outline the neural pathways through which such reward signals work, with neurotransmitters such as dopamine playing a critical role.

However, the authors of the review say that labelling a food or nutrient as “addictive” implies it contains certain ingredients that could make an individual addicted to it. While certain foods – such as those high in fat and sugar – have “rewarding” properties and are highly palatable, there is insufficient evidence to label them as addictive. There is no evidence that single nutritional substances can elicit a “substance use disorder” in humans, according to current diagnostic criteria.

The authors conclude that “food addiction” is a misnomer, proposing instead the term “eating addiction” to underscore the behavioural addiction to eating. They argue that future research should try to define the diagnostic criteria for an eating addiction, so that it can be formally classified as a non-substance related addictive disorder.

“Eating addiction” stresses the behavioural component, whereas “food addiction” appears more like a passive process that simply befalls the individual, they conclude.

Conclusion

There are many theories as to why we overeat. These theories include the existence of the “thrifty gene”, which has primed us to eat whenever food is present and was useful in times of scarcity. There is also the theory of the “obesogenic environment”, in which calorie-dense food is constantly available.

This is an interesting review that argues that, in terms of treatment, the focus should be on people’s eating behaviour – rather than on the addictive nature of certain foods. It does not deny the fact that for many of us high-fat, high-sugar foods are highly palatable.

If you think your eating is out of control, or you want help with weight problems, it’s a good idea to visit your GP. There are many schemes available that can help people lose weight by sticking to a healthy diet and regular exercise.

If you are feeling compelled to eat, or finding yourself snacking unhealthily, why not check out these suggestions for food swaps that could be healthier.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Sugar 'not addictive' says Edinburgh University study. BBC News, September 9 2014

Food is not addictive ... but eating is: Gorging is psychological compulsion, say experts. Daily Mail, September 10 2014

Fatty foods are NOT addictive – but eating can be, Scottish scientists reveal. Daily Express, September 10 2014

Links To Science

Hebebrand J, Albayrak O, Adan R, et al. “Eating addiction”, rather than “food addiction”, better captures addictive-like eating behaviour. Neuroscience and Biobehavioural Reviews. Published online September 6 2014

Categories: Medical News

Bacteria found in honey may help fight infection

Medical News - Thu, 09/11/2014 - 14:00

“Bacteria found in honeybee stomachs could be used as alternative to antibiotics,” reports The Independent.

The world desperately needs new antibiotics to counter the growing threat of bacteria developing resistance to drug treatment. A new study has found that 13 bacterial strains living in honeybees’ stomachs can reduce the growth of drug-resistant bacteria, such as MRSA, in the laboratory.

The researchers examined antibiotic-resistant bacteria and yeast that can infect human wounds, such as MRSA and some types of E. coli. They found each to be susceptible to some of the 13 honeybee lactic acid bacteria (LAB). These LAB were more effective if used together.

However, while the researchers found that the LAB could have more of an effect than existing antibiotics, they did not test whether this difference was likely to be due to chance, so few solid conclusions can be drawn from this research.

The researchers also found that each LAB produced different levels of toxic substances that may have been responsible for killing the bacteria.

Unfortunately, the researchers had previously found that the LAB are only present in fresh honey for a few weeks before they die, and are not present in shop-bought honey.

However, the researchers did find low levels of LAB-produced proteins and free fatty acids in shop-bought honey. They went on to suggest that these substances might be key to the long-held belief that even shop-bought honey has antibacterial properties, but that this warrants further research.

 

Where did the story come from?

The study was carried out by researchers from Lund University and Sophiahemmet University in Sweden. It was funded by the Gyllenstierna Krapperup’s Foundation, Dr P Håkansson’s Foundation, Ekhaga Foundation and The Swedish Research Council Formas.

The study was published in the peer-reviewed International Wound Journal on an open-access basis, so it is free to read online.

The study was accurately reported by The Independent, which appears to have based some of its reporting on a press release from Lund University. This press release confusingly introduces details of separate research into the use of honey to successfully treat wounds in a small number of horses.

 

What kind of research was this?

This was a laboratory study looking at whether substances present in natural honey are effective against several types of bacteria that commonly infect wounds. Researchers want to develop new treatments because of the growing problem of bacteria developing antibiotic resistance. In this study, the researchers chose to focus on honey, as it has been used “for centuries … in folk medicine for upper respiratory tract infections and wounds”, but little is known about how it works.

Previous research has identified 40 strains of LAB that live in honeybees’ stomachs (stomach bacteria are commonly known as “gut flora”). Thirteen of these LAB strains have been found to be present in all species of honeybees and in freshly harvested honey on all continents – but not in shop-bought honey.

Research has suggested that the 13 strains work together to protect the honeybee from harmful bacteria. This study set out to further investigate whether these LAB might be responsible for the antibacterial properties of honey. The researchers did this by testing the LAB in the laboratory on bacteria that can cause human wound infections.

 

What did the research involve?

The 13 LAB strains were cultivated and tested against 13 multidrug-resistant bacteria and one type of yeast that had been grown in the laboratory from chronic human wounds.

The bacteria included MRSA and one type of E. coli. The researchers tested each LAB strain for its effect on each type of bacteria or yeast, and then all 13 LAB strains were tested together. They did this by placing a disc of material containing the LAB at a particular place in a gel-like substance called agar, and then placing bacteria or yeast onto the agar.

If the LAB had antibiotic properties, it would be able to stop the bacteria or yeast from growing near it. The researchers could identify the LABs with stronger antibiotic properties by seeing which stopped the bacteria or yeast growing over the largest distance.

The researchers compared the results with the effect of the antibiotic commonly used for each type of bacteria or yeast, such as vancomycin and chloramphenicol. They then analysed the type of substances that each LAB produced, in an attempt to understand how they killed the bacteria or yeast.

The researchers then looked for these substances in samples of different types of shop-bought honey, including Manuka, heather, raspberry and rapeseed honey, and a sample of fresh rapeseed honey that had been collected from a bee colony.

 

What were the basic results?

Each of the 13 LABs reduced the growth of some of the antibiotic-resistant wound bacteria. The LABs were more effective when used together. The LABs tended to stop bacteria and yeast growing over a larger area than the antibiotics, suggesting that they were having more of an effect. However, the researchers did not do statistical tests to see if these differences were greater than might be expected purely by chance.
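
The researchers did not report such a test, but purely as an illustration of what one might look like: with replicate measurements of the inhibition zones, a simple non-parametric test could indicate whether the larger zones are unlikely to be due to chance. A minimal sketch using entirely hypothetical zone diameters (none of these numbers come from the study):

    # Hypothetical comparison of inhibition-zone diameters (mm) - illustration only,
    # none of these numbers come from the study
    from scipy.stats import mannwhitneyu

    lab_zones = [14, 16, 15, 17, 16]         # hypothetical replicates, LAB mixture
    antibiotic_zones = [11, 12, 10, 13, 12]  # hypothetical replicates, antibiotic
    stat, p_value = mannwhitneyu(lab_zones, antibiotic_zones, alternative="greater")
    print(p_value)  # a small p-value would suggest the difference is unlikely to be chance

Without replicate measurements and a check of this kind, a larger zone for the LAB could simply reflect measurement variability.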

The 13 LABs produced different levels of lactic acid, formic acid and acetic acid. Five of them also produced hydrogen peroxide. All of the LABs also produced at least one other toxic chemical, including benzene, toluene and octane. They also produced some proteins and free fatty acids. Low concentrations of nine proteins and free fatty acids produced by LABs were found in shop-bought honeys.

 

How did the researchers interpret the results?

The researchers conclude that LAB living in honeybees “are responsible for many of the antibacterial and therapeutic properties of honey. This is one of the most important steps forward in the understanding of the clinical effects of honey in wound management”.

They go on to say that “this has implications not least in developing countries, where fresh honey is easily available, but also in western countries where antibiotic resistance is seriously increasing”.

 

Conclusion

This study suggests that 13 strains of LAB taken from honeybees’ stomachs are effective against a yeast and several bacteria that are often present in human wounds. Although the experiments suggested that the LABs could inhibit the bacteria more than some antibiotics, they did not test whether this difference was larger than might be expected by chance. All of the tests were done in a laboratory environment, so it remains to be seen whether similar effects would be seen when treating real human wounds.

There were some aspects of the study that were not clear, including the antibiotic dose that was used and whether the dose used was optimal, or had already been used in the clinical setting where the species were collected. The authors also report that an antibiotic was used as a control for each bacterium and the yeast, but this is not clearly presented in the tables of the study, making it difficult to assess whether this is correct.

The study has shown that each LAB produces a different amount or type of potentially toxic substances. It is not clear how these substances interact to combat the infections, but it appears that they work more effectively in combination.

Low concentrations of some of the substances that could be killing the bacteria and yeast were found in shop-bought honey, but this study does not prove that they would have antibacterial effects. In addition, as the researchers point out, shop-bought honey does not contain any LABs.

Antibiotic resistance is a big problem that reduces our ability to combat infections. This means there is a lot of interest in finding new ways to combat bacteria. Whether this piece of research will contribute to that is currently unclear, but finding these new treatments will be crucial.

 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Bacteria found in honeybee stomachs could be used as alternative to antibiotics, scientists claim. The Independent, September 10 2014

Links To Science

Olofsson TC, Butler E, Markowicz P, et al. Lactic acid bacterial symbionts in honeybees – an unknown key to honey's antimicrobial and therapeutic activities. International Wound Journal. Published online September 8 2014

Categories: Medical News

Hundreds report waking up during surgery

Medical News - Wed, 09/10/2014 - 14:30

“At least 150, and possibly several thousand, patients a year are conscious while they are undergoing operations,” The Guardian reports. A report suggests “accidental awareness” during surgery occurs in around one in 19,000 operations.
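
The two headline figures are consistent with each other, as a rough arithmetic check shows (this is an illustration, not a calculation from the report itself):

    # Back-of-envelope check of the headline figures
    operations_per_year = 3_000_000   # approximate number of operations covered by the audit
    reported_rate = 1 / 19_000        # reported incidence of accidental awareness
    print(round(operations_per_year * reported_rate))  # about 158, in line with "at least 150"

The “possibly several thousand” upper estimate reflects earlier studies suggesting rates as high as one in 600, which would imply around 5,000 cases in 3 million operations; patient reports are likely to undercount true cases.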

The report containing this information is the Fifth National Audit Project (NAP5) report on Accidental Awareness during General Anaesthesia (AAGA) – that is, when people are conscious at some point during general anaesthesia. This audit was conducted over a three-year period to determine how common AAGA is.

People who regain consciousness during surgery may be unable to communicate this to the surgeon due to the use of muscle relaxants, which are required for safety during surgery. This can cause feelings of panic and fear. Sensations that the patients have reported feeling during episodes of AAGA include tugging, stitching, pain and choking.

There have been reports that people who experience this rare occurrence may be extremely traumatised and go on to experience post-traumatic stress disorder (PTSD).

However, as the report points out, psychological support and therapy given quickly after an AAGA can reduce the risk of PTSD.

 

Who produced the report?

The Royal College of Anaesthetists (RCoA) and the Association of Anaesthetists of Great Britain and Ireland (AAGBI) produced the report. It was funded by anaesthetists through their subscriptions to both professional organisations.

In general, the UK media have reported on the study accurately and responsibly.

The Daily Mirror’s website points out that you are far more likely to die during surgery than wake up during it – a statement that, while accurate, is not exactly reassuring.

 

How was the research carried out?

The audit was the largest of its kind, with researchers obtaining the details of all patient reports of AAGA from approximately 3 million operations across all public hospitals in the UK and Ireland. After the data was made anonymous, a multidisciplinary team studied the details of each event. This team included patient representatives, anaesthetists, psychologists and other professionals.

The team studied 300 of the more than 400 reports they received. Of these, 141 were considered to be certain or probable cases. In addition, 17 cases were due to drug error, where patients received the muscle relaxant but not the general anaesthetic, causing “awake paralysis” – a condition similar to sleep paralysis, when a person wakes during sleep but is temporarily unable to move or speak. Seven cases of AAGA occurred in the intensive care unit (ICU) and 32 cases occurred after sedation rather than general anaesthesia (sedation causes a person to feel very drowsy and unresponsive to the outside world, but does not cause loss of consciousness).

 

What were the main findings?

The main findings were:

  • one in 19,000 people reported AAGA
  • half of the reported events occurred during the initiation of general anaesthesia, and half of these cases were during urgent or emergency operations
  • about one-fifth of cases occurred after the surgery had finished, and were experienced as being conscious but unable to move
  • most events lasted for less than five minutes
  • 51% of cases caused the patient distress
  • 41% of cases resulted in longer-term moderate to severe psychological harm from the experience
  • people who had early reassurance and support after an AAGA event often had better outcomes

The awareness was more likely to occur:

  • during caesarean section and cardiothoracic surgery
  • in obese patients
  • if there was difficulty managing the patient’s airway at the start of anaesthesia
  • if there was interruption in giving the anaesthetic when transferring the patient from the anaesthetic room to the theatre
  • if certain emergency drugs were used during some anaesthetic techniques

 

What recommendations have been made?

Sixty-four recommendations were made, covering national, institutional and individual health professional level factors. The main recommendations are briefly outlined below.

They recommend having a new anaesthetic checklist in addition to the World Health Organization (WHO) Safer Surgical Checklist, which is meant to be completed for each patient. This would be a simple anaesthesia checklist performed at the start of every operation. Its purpose would be to prevent incidents caused by human error, monitoring problems, and interruptions to the administration of the anaesthetic drugs.

To reduce the experience of waking but being unable to move, they recommend that a type of monitor called a nerve stimulator should be used, so that anaesthetists can assess whether the neuromuscular drugs are still having an effect before they withdraw the anaesthetic.

They recommend that hospitals look at the packaging of each type of anaesthetic and related drugs that are used, and consider ordering some from different suppliers, to avoid multiple drugs of similar appearance. They also recommend that national anaesthetic organisations look for solutions to this problem with the suppliers.

They recommend that patients be informed of the possibility of briefly experiencing muscle paralysis when they are given the anaesthetic medications and when they wake up at the end, so that they are more prepared for its potential occurrence. In addition, patients who are undergoing sedation rather than general anaesthesia should be better informed of the level of awareness to expect.

The other main recommendation was for a new structured approach to managing any patients who experience awareness, to help reduce distress and longer-term psychological difficulties – called the Awareness Support Pathway.

 

How does this affect you?

As Professor Tim Cook, Consultant Anaesthetist in Bath and co-author of the report, has said: “It is reassuring that the reports of awareness … are a lot rarer than incidences in previous studies”, which have been as high as one in 600. He also states that “as well as adding to the understanding of the condition, we have also recommended changes in practice to minimise the incidence of awareness and, when it occurs, to ensure that it is recognised and managed in such a way as to mitigate longer-term effects on patients”.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Awareness during surgery can cause long-term harm, says report. The Guardian, September 10 2014

Some patients 'wake up' during surgery. BBC News, September 10 2014

Three patients each week report WAKE UP during an operation because they are not given enough anaesthetic. Mail Online, September 10 2014

Hundreds of people wake up during operations. The Daily Telegraph, September 10 2014

More than 150 people a year WAKE UP during surgery: How does it happen? Daily Mirror, September 10 2014

Categories: Medical News

Prescription sleeping pills linked to Alzheimer's risk

Medical News - Wed, 09/10/2014 - 13:30

“Prescription sleeping pills … can raise chance of developing Alzheimer's by 50%,” reports the Mail Online.

This headline is based on a study comparing the past use of benzodiazepines, such as diazepam and temazepam, in older people with or without Alzheimer’s disease. It found that the odds of developing Alzheimer’s were higher in people who had taken benzodiazepines for more than six months.

Benzodiazepines are a powerful class of sedative drugs. Their use is usually restricted to treating cases of severe and disabling anxiety and insomnia. They are not recommended for long-term use, because they can cause dependence.

It’s also important to note that this study only looked at people aged 66 and above, therefore it is not clear what the effects are in younger people. Also, it is possible that the symptoms these drugs are being used to treat in these older people, such as anxiety, may in fact be early symptoms of Alzheimer’s. The researchers tried to reduce the likelihood of this in their analyses, but it is still a possibility.

Overall, these findings reinforce existing recommendations that a course of benzodiazepines should usually last no longer than four weeks.

 

Where did the story come from?

The study was carried out by researchers from the University of Bordeaux, and other research centres in France and Canada. It was funded by the French National Institute of Health and Medical Research (INSERM), the University of Bordeaux, the French Institute of Public Health Research (IRESP), the French Ministry of Health and the Funding Agency for Health Research of Quebec.

The study was published in the peer-reviewed British Medical Journal on an open-access basis, so it is free to read online.

The Mail Online makes the drugs sound like they are “commonly used” for anxiety and sleep disorders, when they are used only in severe, disabling cases. It is also not possible to say for sure that the drugs are themselves directly increasing risk, as suggested in the Mail Online headline.

 

What kind of research was this?

This was a case control study looking at whether longer-term use of benzodiazepines could be linked to increased risk of Alzheimer’s disease.

Benzodiazepines are a group of drugs used mainly to treat anxiety and insomnia, and it is generally recommended that they are used only in the short term – usually no more than four weeks.

The researchers report that other studies have suggested that benzodiazepines could be a risk factor for Alzheimer’s disease, but there is still some debate. In part, this is because anxiety and insomnia in older people may be early signs of Alzheimer’s disease, and these may be the cause of the benzodiazepine use. In addition, studies have not yet been able to show that risk increases with increasing dose or longer exposure to the drugs (called a “dose-response effect”) – something that would be expected if the drugs were truly affecting risk. This latest study aimed to assess whether there was a dose-response effect.

Because the suggestion is that taking benzodiazepines for a long time could cause harm, a randomised controlled trial (seen as the gold standard in evaluating evidence) would be unethical.

As Alzheimer’s takes a long time to develop, following up a population to assess first benzodiazepine use, and then whether anyone develops Alzheimer’s (a cohort study) would be a long and expensive undertaking. A case control study using existing data is a quicker way to determine whether there might be a link.

As with all studies of this type, the difficulty is that it is not possible to determine for certain whether the drugs are causing the increase in risk, or whether other factors could be contributing.

 

What did the research involve?

The researchers used data from the Quebec health insurance program database, which includes nearly all older people in Quebec. They randomly selected 1,796 older people with Alzheimer’s disease who had at least six years’ worth of data in the system prior to their diagnosis (cases). They randomly selected four controls for each case, matched for gender, age and a similar amount of follow-up data in the database. The researchers then compared the number of cases and controls who had started taking benzodiazepines at least five years earlier, and the doses used.

Participants had to be aged 66 or over, and living in the community (that is, not in a care home) between 2000 and 2009. Benzodiazepine use was assessed using the health insurance claims database. The researchers identified all prescription claims for benzodiazepines, and calculated an average daily dose for each benzodiazepine used in the study. They then used this to calculate how many days’ worth of that average dose each person had been prescribed, giving a standard measure of exposure across the different drugs.
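
This is similar to the standard “defined daily dose” approach used in prescription database research. A minimal sketch of the idea, with hypothetical drugs and doses (none of these figures come from the study):

    # Converting prescription claims into days' worth of a standard dose - illustrative only
    average_daily_dose_mg = {"diazepam": 10, "temazepam": 20}  # hypothetical average doses

    prescriptions = [           # (drug, total mg dispensed) - hypothetical claims records
        ("diazepam", 300),
        ("temazepam", 1200),
    ]

    days_exposed = sum(total_mg / average_daily_dose_mg[drug]
                       for drug, total_mg in prescriptions)
    print(days_exposed)  # 90.0 days' worth, which this study would class as "up to 90 days"

This makes a person’s cumulative exposure comparable whether they took one benzodiazepine or several different ones.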

Some benzodiazepines act over a long period, as they take longer to be broken down and eliminated from the body, while others act over a shorter period. The researchers also noted whether people took long- or short-acting benzodiazepines; those who took both were classified as having taken the longer-acting form.

People starting benzodiazepines within five years of their Alzheimer’s diagnosis (or the equivalent date for the controls) were excluded, as these are more likely to be cases where the symptoms being treated are early signs of Alzheimer’s.

In their analyses, the researchers took into account whether people had conditions which could potentially affect the results, including:

  • high blood pressure
  • heart attack
  • stroke
  • high cholesterol
  • diabetes
  • anxiety
  • depression
  • insomnia

 

What were the basic results?

Almost half of the cases (49.8%) and 40% of the controls had been prescribed benzodiazepines. The proportion of cases and controls taking less than six months’ worth of benzodiazepines was similar (16.9% of cases and 18.2% of controls). However, taking more than six months’ worth of benzodiazepines was more common in the cases (32.9% of cases and 21.8% of controls).

After taking into account the potential confounders, the researchers found that having used a benzodiazepine was associated with an increased risk of Alzheimer’s disease (odds ratio (OR) 1.43, 95% confidence interval (CI) 1.28 to 1.60).
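
As a reminder of what this means: in a case-control study the odds ratio is the odds of exposure among cases divided by the odds of exposure among controls. A rough, unadjusted version of the calculation can be reconstructed from the proportions reported above (a simplified sketch – the study’s figure of 1.43 was adjusted for confounders):

    # Rough unadjusted odds ratio from the reported proportions - illustrative check only
    cases = 1796
    controls = 4 * cases                  # four matched controls per case
    exposed_cases = 0.498 * cases         # 49.8% of cases had used benzodiazepines
    exposed_controls = 0.40 * controls    # 40% of controls had
    odds_cases = exposed_cases / (cases - exposed_cases)
    odds_controls = exposed_controls / (controls - exposed_controls)
    print(round(odds_cases / odds_controls, 2))  # about 1.49, close to the adjusted 1.43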

There was evidence that risk increased the longer the drug was taken, indicated by the number of days’ worth of benzodiazepines a person was prescribed:

  • having less than about three months’ (up to 90 days) worth of benzodiazepines was not associated with an increase in risk
  • having three to six months’ worth of benzodiazepines was associated with a 32% increase in the odds of Alzheimer’s disease before adjusting for anxiety, depression and insomnia (OR 1.32, 95% CI 1.01 to 1.74) but this association was no longer statistically significant after adjusting for these factors (OR 1.28, 95% CI 0.97 to 1.69)
  • having more than six months’ worth of benzodiazepines was associated with a 74% increase in the odds of Alzheimer’s disease, even after adjusting for anxiety, depression or insomnia (OR 1.74, 95% CI 1.53 to 1.98)
  • the increase in risk was also greater for long-acting benzodiazepines (OR 1.59, 95% CI 1.36 to 1.85) than for short-acting benzodiazepines (OR 1.37, 95% CI 1.21 to 1.55).

 

How did the researchers interpret the results?

The researchers concluded that “benzodiazepine use is associated with an increased risk of Alzheimer’s disease”. The fact that a stronger association was found with longer periods of taking the drugs supports the possibility that the drugs may be contributing to risk, even if they may also be an early marker of the onset of Alzheimer’s disease.

 

Conclusion

This case-control study has suggested that long-term use of benzodiazepines (over six months) may be linked with an increased risk of Alzheimer’s disease in older people. These findings are reported to be similar to those of previous studies, but add weight to them by showing that risk increases with increasing length of exposure to the drugs, and with benzodiazepines that remain in the body for longer.

The strengths of this study include that it could establish when people started taking benzodiazepines and when they had their diagnosis using medical insurance records, rather than having to ask people to recall what drugs they have taken. The database used is also reported to cover 98% of the older people in Quebec, so results should be representative of the population, and controls should be well matched to the cases.

The study also tried to reduce the possibility that the benzodiazepines could be being used to treat symptoms of the early phase of dementia, by only assessing use of these drugs that started at least five years before Alzheimer’s was diagnosed. However, this may not remove the possibility entirely, as some cases of Alzheimer’s take years to progress, which the authors acknowledge.

All studies have limitations. As with all analyses of medical records and prescription data, there is the possibility that some data is missing or not recorded, that there may be a delay in recording diagnoses after the onset of the disease, or that people may not take all of the drugs they are prescribed. The authors considered all of these issues and carried out analyses where possible to assess their likelihood, but concluded that they seemed unlikely to be having a large effect.

Some factors that could affect Alzheimer’s risk were not taken into account because the data was not available – for example, smoking and alcohol consumption habits, socioeconomic status, education and genetic risk.

It is already not recommended that benzodiazepines are used for long periods, as people can become dependent on them. This study adds another potential reason why prescribing these drugs for long periods may not be appropriate.

If you are experiencing problems with insomnia or anxiety (or both), doctors are likely to start with non-drug treatments as these tend to be more effective in the long term. 

Read more about alternatives to drug treatment for insomnia and anxiety.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Anxiety and sleeping pills 'linked to dementia'. BBC News, September 10 2014

Sleeping pills taken by millions linked to Alzheimer's. The Daily Telegraph, September 10 2014

Prescription sleeping pills taken by more than one million Britons 'can raise chance of developing Alzheimer's by 50%'. Daily Mail, September 10 2014

Sleeping pills can increase risk of Alzheimer's by half. Daily Mirror, September 10 2014

Sleeping pills linked to risk of Alzheimer’s disease. Daily Express, September 10 2014

Links To Science

De Gage SB, Moride Y, Ducruet T, et al. Benzodiazepine use and risk of Alzheimer’s disease: case-control study. BMJ. Published online September 9 2014

Categories: Medical News

Sibling bullying linked to young adult depression

Medical News - Tue, 09/09/2014 - 14:00

“Being bullied regularly by a sibling could put children at risk of depression when they are older,” BBC News reports.

A new UK study followed children from birth to early adulthood. Analysis of more than 3,000 children found those who reported frequent sibling bullying at age 12 were about twice as likely to report high levels of depressive symptoms at age 18.

The children who reported sibling bullying were also more likely to be experiencing a range of challenging situations, such as being bullied by peers, maltreated by an adult, and exposed to domestic violence. While the researchers did take these factors into account, they and other factors could still be having an impact. This means it is not possible to say for certain that frequent sibling bullying is directly causing later mental health problems. However, the results do suggest that it could be a contributor.

As the authors suggest, interventions to target sibling bullying, potentially as part of a programme targeting the whole family, should be assessed to see if they can reduce the likelihood of later psychological problems.

 

Where did the story come from?

The study was carried out by researchers from the University of Oxford and other universities in the UK. The ongoing cohort study was funded by the UK Medical Research Council, the Wellcome Trust and the University of Bristol, and the researchers also received support from the Jacobs Foundation and the Economic and Social Research Council.

The study was published in the peer-reviewed medical journal Pediatrics. The article has been published on an open-access basis so it is available for free online.

This study was well reported by BBC News, which gave the percentage of children in each group (those who had been bullied and those who had not) who developed high levels of depression or anxiety. This helps people get an idea of how common these outcomes actually were, rather than just saying by how many times the risk is increased.

 

What kind of research was this?

This was a prospective cohort study that assessed whether children who experienced bullying by their siblings were more likely to develop mental health problems in their early adulthood. The researchers say that other studies have found bullying by peers to be associated with increased risk of mental health problems, but the effect of sibling bullying has not been assessed.

A cohort study is the best way to look at this type of question, as it would clearly not be ethical to expose children to bullying in a randomised way. A cohort study allows researchers to measure the exposure (sibling bullying) before the outcome (mental health problems) has occurred. If the exposure and outcome are measured at the same time (as in a cross-sectional study), researchers can’t tell whether the exposure could be contributing to the outcome or vice versa.

 

What did the research involve?

The researchers were analysing data from children taking part in the ongoing Avon Longitudinal Study of Parents and Children. The children reported on sibling bullying at age 12, and were then assessed for mental health problems when they were 18 years old. The researchers then analysed whether those who experienced sibling bullying were more at risk of mental health problems.

The cohort study recruited 14,541 women living in Avon who were due to give birth between 1991 and 1992. The researchers collected information from the women, and followed them and their children over time, assessing them at intervals.

When the children were aged 12 years they were sent a questionnaire including questions on sibling bullying, which was described as “when a brother or sister tries to upset you by saying nasty and hurtful things, or completely ignores you from their group of friends, hits, kicks, pushes or shoves you around, tells lies or makes up false rumours about you”. The children were asked whether they had been bullied by their sibling at home in the last six months, how often, what type of bullying and at what age it started.

When the children reached 18 they completed a standardised computerised questionnaire asking about symptoms of depression and anxiety. They were then categorised as having depression or not, and any form of anxiety or not, based on the criteria in the International Classification of Diseases (ICD-10). The teenagers were also asked whether they had self-harmed in the past year, and how often.

The researchers also used data on other factors that could affect risk of mental health problems, collected when the children were eight years of age or younger (potential confounders), including any emotional or behaviour problems at age seven, the children’s self-reported depressive symptoms at age 10, and a range of family characteristics. They took these factors into account in their analyses.

 

What were the basic results?

A total of 3,452 children completed both the questionnaires about sibling bullying and mental health problems. Just over half of the children (52.4%) reported never being bullied by a sibling, just over a tenth (11.4%) reported being bullied several times a week, and the remainder (36.1%) reported being bullied but less frequently. The bullying was mainly name calling (23.1%), being made fun of (15.4%), or physical bullying such as shoving (12.7%).

Children reporting bullying by a sibling were more likely to:

  • be girls
  • report frequent bullying by peers
  • have an older brother
  • have three or more siblings
  • have parents from a lower social class
  • have a mother who experienced depression during pregnancy
  • be exposed to domestic violence or mistreatment by an adult
  • have more emotional and behavioural problems at age seven

At 18 years of age, those who reported frequent bullying (several times a week) by a sibling at age 12 were more likely to experience mental health problems than those reporting no bullying:

  • 12.3% of the bullied children had clinically significant depression symptoms compared with 6.4% of those who were not bullied
  • 16.0% experienced anxiety compared with 9.3% 
  • 14.1% had self-harmed in the past year compared with 7.6%

After taking into account potential confounders, frequent sibling bullying was associated with increased risk of clinically significant depression symptoms (odds ratio (OR) 1.85, 95% confidence interval (CI) 1.11 to 3.09) and increased risk of self-harm (OR 2.26, 95% CI 1.40 to 3.66). The link with anxiety did not reach statistical significance after adjusting for potential confounders.
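
For illustration, an unadjusted odds ratio can be reconstructed from the depression percentages reported above; comparing it with the adjusted figure shows how much of the raw association is accounted for by the confounders (a simplified sketch, not the study’s own analysis):

    # Unadjusted odds ratio for depression from the reported percentages - illustrative only
    p_bullied, p_not_bullied = 0.123, 0.064
    odds_ratio = (p_bullied / (1 - p_bullied)) / (p_not_bullied / (1 - p_not_bullied))
    print(round(odds_ratio, 2))  # about 2.05; adjusting for confounders reduces this to 1.85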

 

How did the researchers interpret the results?

The researchers concluded that “being bullied by a sibling is a potential risk factor for depression and self-harm in early adulthood”. They suggest that interventions to address this should be designed and tested.

 

Conclusion

The current study suggests that frequent sibling bullying at age 12 is associated with depressive symptoms and self-harm at age 18. The study’s strengths include the fact that it collected data prospectively using standard questionnaires, and followed children up over a long period. It was also a large study, although a lot of children did not complete all of the questionnaires.

The study does have limitations, which include:

  • As with all studies of this type, the main limitation is that although the study did take into account some other factors that could affect the risk of mental health problems, they and other factors could still be having an effect.
  • The study included only one assessment of bullying, at age 12. Patterns of bullying may have changed over time, and a single assessment might miss some children exposed to bullying.
  • Bullying was only assessed by the children themselves. Collecting parental reports, or those of other siblings, might offer some confirmation of reports of bullying. However, bullying may not always take place when others are present.
  • The depression assessments were by computerised questionnaire. This is not equivalent to a formal diagnosis of depression or anxiety after a full assessment by a mental health professional, but it does indicate the level of symptoms a person is experiencing.
  • A large number of the originally recruited children did not complete the questionnaires assessed in the current study (more than 10,000 of the 14,000+ babies starting the study). This could affect the results if certain types of children were more likely to drop out of the study (for example, those experiencing more sibling bullying). However, the children who dropped out after age 12 did not differ in their sibling bullying levels from those who stayed in the study, and analyses using estimates of their data did not have a large effect on the results. The researchers therefore considered that this loss to follow-up did not appear to be affecting their analyses.

While it is not possible to say for certain that frequent sibling bullying is directly causing later mental health problems, the study does suggest that it could be a contributor. It is also clear that the children experiencing such sibling bullying are also more likely to be experiencing a range of challenging situations, such as being bullied by peers, maltreated by an adult, and exposed to domestic violence.

As the authors say, the findings suggest that interventions to target sibling bullying, potentially as part of a programme targeting the whole family, should be assessed to see if they can reduce the likelihood of later psychological problems.

Read more about bullying, how to spot the signs and what to do if you suspect your child is being bullied (or is a bully themselves).

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Sibling bullying increases depression risk. BBC News, September 8 2014

Lasting toll of bullying by a sibling: Brothers or sisters who are regularly picked on 'more likely to be depressed or take an overdose'. Daily Mail, September 9 2014

Links To Science

Bowes L, Wolke D, Joinson C, et al. Sibling Bullying and Risk of Depression, Anxiety, and Self-Harm: A Prospective Cohort Study. Pediatrics. Published online September 8 2014

Categories: Medical News

Regular walking breaks 'protect arteries'

Medical News - Tue, 09/09/2014 - 13:29

“Just a five-minute walk every hour helps protect against damage of sitting all day,” the Mail Online reports.

A study of 12 healthy but inactive young men found that if they sat still without moving their legs for three hours, the walls of their main leg artery showed signs of decreased flexibility. However, this was “prevented” if the men took five-minute light walking breaks about every hour.

Less flexibility in the walls of the arteries has been linked to atherosclerosis (hardening and narrowing of the arteries), which increases the risk of heart disease.

However, it is not possible to say from this small and short-term study whether taking walking breaks would definitely reduce a person’s risk of heart disease.

There is a growing body of evidence that spending more time in sedentary behaviour such as sitting can have adverse health effects – for example, a 2014 study found a link between sedentary behaviour and increased risk of chronic diseases.

While this study may not be definitive proof of the benefits of short breaks during periods of inactivity, having such breaks isn’t harmful, and could turn out to be beneficial.

 

Where did the story come from?

The study was carried out by researchers from the Indiana University Schools of Public Health and Medicine. It was funded by the American College of Sports Medicine Foundation, the Indiana University Graduate School and School of Public Health.

The study has been accepted for publication in the peer-reviewed journal Medicine & Science in Sports & Exercise.

The coverage in the Mail Online and the Daily Express is accurate though uncritical, not highlighting any of the research's limitations.

 

What kind of research was this?

This was a small crossover randomised controlled trial (RCT) assessing the effect of breaks in sitting time on one measure of cardiovascular disease risk: flexibility of the walls of arteries.

The researchers report that sitting for long periods of time has been associated with increased risk of chronic diseases and death, and this may be independent of how physically active a person is when they are not sitting. This is arguably more an issue now than it would have been in the past, as a lot of us have jobs where sitting (sedentary behaviour) is the norm.

Short breaks from sitting are reported to be associated with improvements in waist circumference, and in levels of fats and sugar in the blood.

A randomised controlled trial is the best way to assess the impact of an intervention on outcomes.

 

What did the research involve?

The researchers recruited 12 inactive, but otherwise healthy, non-smoking men of normal weight. These men were asked to sit for two three-hour sessions. During one session (called SIT), they sat on a firmly cushioned chair without moving their lower legs. In the other (called ACT), they sat on a similar chair but got up and walked on a treadmill next to them at a speed of two miles an hour for five minutes, three times during the session. The sessions were carried out between two and seven days apart, and the order in which each man took part in these sessions was allocated at random.

The researchers measured how rapidly the walls of the superficial femoral artery recovered from being compressed by a blood pressure cuff for five minutes. The femoral artery is the main artery supplying blood to the leg. The “superficial” part refers to the part that continues down the thigh after a deeper branch has divided off near the top of the leg.

The researchers took these measurements at the start of each session, and then at hourly intervals. The person taking the measurements did not know which type of session (SIT or ACT) the participant was taking part in. The researchers compared the results obtained during the SIT and ACT sessions, to see if there were any differences.

 

What were the basic results?

The researchers found that the widening of the artery in response to blood flow (called flow-mediated dilation) reduced over three hours spent sitting without moving. However, getting up for five-minute walks in this period stopped this from happening. The researchers did not find any difference between the trials in another measure of what is going on in the arteries, called the “shear rate” (a measurement of how well a fluid flows through a channel such as a blood vessel).
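
Flow-mediated dilation is conventionally expressed as the percentage increase in artery diameter after the cuff is released, relative to the resting diameter. A minimal sketch with hypothetical ultrasound measurements (the actual diameters were not reported in this summary):

    # Flow-mediated dilation (FMD) as a percentage change in diameter - hypothetical values
    baseline_diameter_mm = 5.0   # resting superficial femoral artery diameter
    peak_diameter_mm = 5.3       # peak diameter after the cuff is released
    fmd_percent = (peak_diameter_mm - baseline_diameter_mm) / baseline_diameter_mm * 100
    print(f"{fmd_percent:.1f}%")  # 6.0% - a fall in this value over a session suggests stiffer artery walls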

 

How did the researchers interpret the results?

The researchers concluded that light hourly activity breaks taken during three hours of sitting prevented a significant reduction in the speed of the main leg artery recovering after compression. They say that these findings are “the first experimental evidence of the effects of prolonged sitting on human vasculature, and are important from a public health perspective”.

 

Conclusion

This small and very short-term crossover randomised controlled trial has suggested that sitting still for long periods of time causes the walls of the main artery in the leg to become less flexible, and that having five-minute walking breaks about every hour can prevent this.

The big question is: does this have any effect on our health?

The flexibility of arteries (or in this case, one particular artery) is used as what is called a “proxy” or “surrogate” marker for a person’s risk of cardiovascular disease. However, just because these surrogate markers improve, this does not guarantee that a person will have a lower risk of cardiovascular disease. Longer-term trials are needed to determine this.

The potential adverse effects of spending a lot of time sitting, independent of a person’s physical activity, is currently a popular area of study. Standing desks are becoming increasingly popular in the US, so people spend most of their working day on their feet. Some even bring a treadmill into their office (see this recent BBC News report on desk treadmills).

Researchers are particularly interested in whether taking breaks from unavoidable periods of sitting could potentially reduce any adverse effects, but this research is still at an early stage. In the interim, it is safe to say that having short breaks from periods of inactivity isn’t harmful, and could turn out to be beneficial.

There has been a rapid advancement in human civilisation over the past 10,000 years. We have bodies that evolved to spend a large part of the day on our feet, hunting and gathering, but we now have lifestyles that encourage us to sit around all day. It could be that this mismatch is partially to blame for the rise in non-infectious chronic diseases, such as type 2 diabetes and heart disease.

If you feel brave enough, why not take on the NHS Choices 10,000 steps a day challenge, which should help build stamina, burn excess calories and give you a healthier heart.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Here's a good excuse to get up from your desk: Just a five-minute walk every hour helps protect against damage of sitting all day. Mail Online, September 8 2014

Walking five minutes at work can undo damage of long periods of sitting. Daily Express, September 8 2014

Links To Science

Thosar SS, Bielko SL, Mather KJ, et al. Effect of Prolonged Sitting and Breaks in Sitting Time on Endothelial Function. Medicine & Science in Sports & Exercise. Published online September 8 2014

Categories: Medical News

Ebola vaccine hope after successful animal study

Medical News - Mon, 09/08/2014 - 13:29

“Hopes for an effective Ebola vaccine have been raised after trials of an experimental jab found that it gave monkeys long-term protection,” The Guardian reports. An initial animal study found that a new vaccine boosted immunity.

Ebola is an extremely serious and often fatal viral infection that can cause internal bleeding and organ failure.

It can be spread via contaminated body fluids such as blood and vomit.

Researchers tested vaccines based on chimpanzee viruses, which were genetically modified to not be infectious and to produce proteins normally found in Ebola viruses. As with all vaccines, the aim is to teach the immune system to recognise and attack the Ebola virus if it comes into contact with it again.

They found that a single injection of one form of the vaccine protected macaques (a common type of monkey) against what would usually be a lethal dose of Ebola five weeks later. If they combined this with a second booster injection eight weeks later, then the protection lasted for at least 10 months.

The quest for a vaccine is a matter of urgency, due to the current outbreak of Ebola in West Africa.

Now that these tests have shown promising results, human trials have started in the US. Given the ongoing threat of Ebola, this type of vaccine research is important in finding a way to protect against infection.

 

Where did the story come from?

The study was carried out by researchers from the National Institutes of Health (NIH) in the US, and other research centres and biotechnology companies in the US, Italy and Switzerland. Some of the authors declared that they claimed intellectual property on gene-based vaccines for the Ebola virus. Some of them were named inventors on patents or patent applications for either chimpanzee adenovirus or filovirus vaccines.

The study was funded by the NIH and was published in the peer-reviewed journal Nature Medicine.

The study was reported accurately by the UK media.

 

What kind of research was this?

This was animal research that aimed to test whether a new vaccine against the Ebola virus could produce a long-lasting immune response in non-human primates.

The researchers were testing a vaccine based on a chimpanzee virus from the family of viruses that causes the common cold in humans, called adenovirus. The researchers were using the chimpanzee virus rather than the human one, as the chimpanzee virus is not recognised and attacked by the human immune system.

The virus is essentially a way to get the vaccine into the cells, and is genetically engineered to not be able to reproduce itself, and therefore not spread from person to person or through the body. Other studies have tested chimp virus-based vaccines for other conditions in mice, other primates and humans.

To make a vaccine, the virus is genetically engineered to produce certain Ebola virus proteins. The idea is that exposing the body to the virus-based vaccine “teaches” the immune system to recognise, remember and attack these proteins. Later, when the body comes into contact with the Ebola virus, it can then rapidly produce an immune response to it.

This type of research in primates is the last stage before a vaccine is tested in humans. Primates are used in these trials due to their biological similarities to humans, which means the results are more likely to predict how humans will respond.

 

What did the research involve?

Chimpanzee adenoviruses were genetically engineered to produce either a protein found on the surface of the Zaire form of the Ebola virus, or both this protein and another found on the Sudan form of the Ebola virus. These two forms of the Ebola virus are reported to be responsible for more deaths than other forms of the virus.

They then injected these vaccines into the muscle of crab-eating macaques and looked at whether they produced an immune response when later injected with the Ebola virus. This included looking at which vaccine produced a greater immune response, how long this effect lasted and whether giving a booster injection made the response last longer. The individual experiments used between four and 15 macaques.

 

What were the basic results?

In their first experiment, the researchers found that macaques given the vaccines survived when injected with what would normally be a lethal dose of Ebola virus five weeks after vaccination. Using a lower dose protected fewer of the vaccinated macaques.

The vaccine used in these tests was based on a form of the chimpanzee adenovirus called ChAd3. Vaccines based on another form of the virus called ChAd63, or on another type of virus called MVA, did not perform as well at protecting the macaques. A detailed assessment of the macaques' immune responses suggested that this might be due to the ChAd3-based vaccine producing a bigger response in one type of immune system cell (called T-cells).

The researchers then looked at what happened if vaccinated monkeys were given a potentially lethal dose of Ebola virus 10 months after vaccination. They did this with groups of four macaques given different doses and combinations of the vaccines against both forms of Ebola virus, given as a single injection or with a booster. They found that a single high-dose vaccination with the ChAd3-based vaccine protected two of the four macaques. All four macaques survived if they were given an initial vaccination with the ChAd3-based vaccine, followed by an MVA-based booster eight weeks later. Other approaches performed less well.

 

How did the researchers interpret the results?

The researchers concluded that they had shown short-term immunity against the Ebola virus could be achieved in macaques with a single vaccination, and long-term immunity if a booster was given. They state that: “This vaccine will be beneficial for populations at acute risk during natural outbreaks, or others with a potential risk of occupational exposure.”

 

Conclusion

This study has shown the potential of a new chimpanzee adenovirus-based vaccine against the Ebola virus, tested in macaques. The quest for a vaccine is seen as urgent, due to the ongoing outbreak of Ebola in West Africa. Animal studies such as this are needed to ensure that any new vaccines are safe, and that they look like they will have an effect. Macaques were used for this research because they, like humans, are primates – therefore, their responses to the vaccine should be similar to what would be expected in humans.

Now that these tests have shown promising results, the first human trials have started in the US, according to reports by BBC News. These trials will be closely monitored to determine the safety and efficacy of the vaccine in humans as, unfortunately, this early success does not guarantee that it will work in humans. Given the ongoing threat of Ebola, this type of vaccine research is important to protect against infection.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hopes raised as Ebola vaccine protects monkeys for 10 months. The Guardian, September 7 2014

Vaccine gives monkeys Ebola immunity. BBC News, September 7 2014

Breakthrough as experimental Ebola vaccines protect monkeys from epidemic for 10 months. Mail Online, September 7 2014

Links To Science

Stanley DA, Honko AN, Asiedu C, et al. Chimpanzee adenovirus vaccine generates acute and durable protective immunity against ebolavirus challenge. Nature Medicine. Published online September 7 2014

Categories: Medical News

Wearing a bra 'doesn't raise breast cancer risk'

Medical News - Mon, 09/08/2014 - 03:00

“Scientists believe they have answered the decades long debate on whether wearing a bra can increase your risk of cancer,” reports The Daily Telegraph.

There is an "urban myth" that wearing a bra disrupts the workings of the lymphatic system (an essential part of the immune system), which could lead to a build-up of toxins inside breast tissue, increasing the risk of cancer. New research suggests that this fear may be unfounded.

The study compared the bra-wearing habits of 1,044 postmenopausal women who had one of two common types of breast cancer with those of 469 women who did not have breast cancer. It found no significant difference between the groups in bra-wearing habits, such as when a woman started wearing a bra, whether she wore an underwired bra, and how many hours a day she wore a bra.

The study had some limitations, such as relatively limited matching of characteristics between the women with and without cancer. Also, as almost all women wear a bra, the researchers could not compare women who never wore a bra with those who did.

Despite the limitations, as the authors of the study say, the findings provide some reassurance that your bra-wearing habits do not seem to increase risk of postmenopausal breast cancer.

While not all cases of breast cancer are thought to be preventable, maintaining a healthy weight, moderating your consumption of alcohol and taking regular exercise should help lower your risk.

 

Where did the story come from?

The study was carried out by researchers from Fred Hutchinson Cancer Research Center in the US.

It was funded by the US National Cancer Institute.

The study was published in the peer-reviewed medical journal Cancer Epidemiology Biomarkers & Prevention.

The Daily Telegraph and the Mail Online covered this research in a balanced and accurate way.

However, suggestions that women who wore bras were compared with “their braless counterparts” are incorrect. Only one woman in the study never wore a bra, and she was not included in the analyses. The study was essentially comparing women who all wore bras, but who started at different ages, wore them for different lengths of time during the day, or wore different types (underwired or not).

 

What kind of research was this?

This was a case-control study looking at whether wearing a bra increases risk of breast cancer.

The researchers say there has been some suggestion in the media that bra wearing might increase risk, but that there is little in the way of hard evidence to support the claim.

A case-control study compares what people with and without a condition have done in the past, to get clues as to what might have caused the condition.

If women who had breast cancer wore bras more often than women who did not have the disease, this might suggest that bras could be increasing risk. One of the main limitations of this type of study is that it can be difficult for people to remember what has happened to them in the past, and people with a condition may remember things differently from those who don’t have the condition.

Also, it is important that researchers make sure that the group without the condition (the controls) comes from the same population as the group with the condition (the cases).

This reduces the likelihood that differences other than the exposure of interest (bra wearing) could contribute to the condition.

 

What did the research involve?

The researchers enrolled postmenopausal women with breast cancer (cases) and without it (controls) from one area in the US. They interviewed the women to gather detailed information about their bra wearing over the course of their lives, as well as asking about other factors. They then statistically assessed whether the cases had different bra-wearing habits to the controls.

The cases were identified using the region’s cancer surveillance registry data for 2000 to 2004. Women had to be between 55 and 74 years old when diagnosed. The researchers identified all women diagnosed with one type of invasive breast cancer (lobular carcinoma or ILC), and a random sample of 25% of the women with another type (ductal carcinoma). For each ILC case, a control woman who was aged within five years of the case’s age was selected at random from the general population in the region. The researchers recruited 83% of the eligible cases (1,044 of 1,251 women) and 71% of eligible controls (469 of 660 women).

The in-person interviews asked about various aspects of past bra wearing (up to the point of diagnosis with cancer, or the equivalent date for controls):

  • bra sizes
  • age at which they started regularly wearing a bra
  • whether they wore a bra with an underwire
  • number of hours per day a bra was worn
  • number of days per week they wore a bra at different times in their life
  • whether their bra-wearing patterns ever changed during their life

Only one woman reported never wearing a bra, and she was excluded from the analysis.

The women were also asked about other factors that could affect breast cancer risk (potential confounders), including:

  • whether they had children
  • body mass index (BMI)
  • medical history
  • family history of cancer
  • use of hormone replacement therapy (HRT)
  • demographic characteristics

The researchers compared bra-wearing characteristics between cases and controls, taking into account potential confounders. The potential confounders were found not to have a large effect on results (a 10% change in the odds ratio [OR] or less), so results adjusting for these were not reported. When the researchers analysed only the data for women who had not changed their bra-wearing habits over their lifetime, the results were similar to the overall results, so these were also not reported.
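
The “10% change” check is a common rule of thumb for deciding whether a variable is genuinely confounding an association. Here is a minimal Python sketch of the idea; the odds ratio values are made up for illustration and are not taken from the study:

    # Rule-of-thumb confounder check: compare the crude (unadjusted)
    # odds ratio with the odds ratio after adjusting for a candidate
    # confounder; if it shifts by more than about 10%, the variable is
    # treated as a meaningful confounder. (Illustrative numbers only.)

    def relative_change(crude_or: float, adjusted_or: float) -> float:
        """Proportional change in the odds ratio after adjustment."""
        return abs(adjusted_or - crude_or) / crude_or

    crude_or = 1.90      # hypothetical unadjusted odds ratio
    adjusted_or = 1.84   # hypothetical odds ratio after adjusting for BMI

    if relative_change(crude_or, adjusted_or) <= 0.10:
        print("Change of 10% or less: little confounding; report crude OR")
    else:
        print("Change over 10%: adjust for this variable")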

 

What were the basic results?

The researchers found that some characteristics varied between groups – cases were slightly more likely than controls to:

  • have a current BMI less than 25
  • be currently using combined HRT
  • have a close family history of breast cancer
  • have had a mammogram in the past two years
  • have experienced natural menopause (as opposed to medically induced menopause)
  • have no children

The only bra characteristic that showed some potential evidence of being associated with breast cancer was cup size (which will reflect breast size). Women who wore an A cup bra were more likely to have invasive ductal cancer than those with a B cup bra (OR 1.9, 95% confidence interval [CI] 1.0 to 3.3).

However, the confidence interval shows that this increase in risk was only borderline significant: its lower bound reaches 1.0, meaning it remains possible that the odds in both groups are equivalent (an odds ratio of 1). If a smaller bra cup size were truly associated with increased breast cancer risk, the researchers would also expect to see risk falling steadily as cup sizes got bigger. They did not see this trend across the other cup sizes, suggesting that there wasn’t a true relationship between cup size and breast cancer risk.
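
To illustrate how an odds ratio and its 95% confidence interval are derived, here is a minimal Python sketch. The cell counts are hypothetical (the study’s full breakdown is not reproduced here); they are chosen only so the output lands near the reported OR of 1.9:

    import math

    # Hypothetical 2x2 table: A cup vs B cup wearers among ductal
    # carcinoma cases and controls. Illustrative numbers only.
    cases_a, cases_b = 40, 110       # cases wearing A cup / B cup
    controls_a, controls_b = 17, 88  # controls wearing A cup / B cup

    # Odds ratio: odds of the exposure (A cup) among cases divided
    # by the odds of the exposure among controls.
    odds_ratio = (cases_a / cases_b) / (controls_a / controls_b)

    # Approximate 95% CI on the log scale (Woolf method):
    # log(OR) +/- 1.96 * sqrt(1/a + 1/b + 1/c + 1/d).
    se = math.sqrt(1/cases_a + 1/cases_b + 1/controls_a + 1/controls_b)
    low = math.exp(math.log(odds_ratio) - 1.96 * se)
    high = math.exp(math.log(odds_ratio) + 1.96 * se)

    print(f"OR = {odds_ratio:.2f}, 95% CI {low:.2f} to {high:.2f}")
    # A lower bound at (or below) 1.0 is why such a result is
    # described as only borderline significant.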

None of the other bra-wearing characteristics were statistically significantly different between cases with either type of invasive breast cancer and controls.

 

How did the researchers interpret the results?

The researchers concluded that their findings “provided reassurance to women that wearing a bra does not seem to increase the risk of the most common histologic types of postmenopausal breast cancer”.

 

Conclusion

This study suggests that past bra-wearing characteristics are not associated with breast cancer risk in postmenopausal women. The study does have some limitations:

  • There was only limited matching of the cases and controls, which could mean that other differences between the groups contributed to the results. The potential confounders assessed were reported not to have a large impact on the results, which suggests that the lack of matching may not have had a large effect, but these adjusted results were not published, so readers cannot assess this for themselves.
  • Controls were not selected for the women with invasive ductal carcinoma, only those with invasive lobular carcinoma.
  • Most women wear bras, differing only in their bra-wearing habits (e.g. when they started wearing a bra, or whether they wore an underwired one), so it wasn’t possible to compare the effect of wearing a bra versus not wearing a bra at all.
  • It may be difficult for women to accurately remember bra-wearing habits from long ago, for example, exactly when they started wearing a bra. As long as cases and controls have the same likelihood of these inaccuracies in their reporting, this should not bias results. However, if women with cancer remember their bra wearing differently, for example, because they think it may have contributed to their cancer, this could bias results.
  • There were relatively small numbers of women in the control group, and once they were split up into groups with different characteristics, the number of women in some groups was relatively small. For example, only 17 women in the control group wore an A cup bra. These small numbers may mean some figures are less reliable.
  • The findings are limited to breast cancer risk in postmenopausal women.

While this study has limitations, as the authors acknowledge, it does provide some level of reassurance for women that bra wearing does not seem to increase the risk of breast cancer.

While not all cases of breast cancer are thought to be preventable, maintaining a healthy weight, moderating your consumption of alcohol and taking regular exercise should help lower your risk. Read more about how to reduce your breast cancer risk.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Wearing a bra does not increase breast cancer risk, study finds. The Daily Telegraph, September 5 2014

Wearing a bra will NOT cause breast cancer - even if it's underwired or you wear it all year long, study finds. Mail Online, September 5 2014

Links To Science

Chen L, Malone KE, Li CI. Bra Wearing Not Associated with Breast Cancer Risk: A Population-Based Case-Control Study. Cancer Epidemiology Biomarkers & Prevention. Published online September 5 2014

Categories: Medical News

Gay people have 'poorer health' and 'GP issues'

Medical News - Fri, 09/05/2014 - 14:30

“Lesbians, gays and bisexuals are more likely to have longstanding mental health problems,” The Independent reports, as well as “bad experiences with their GP”. A UK survey found striking disparities between their responses and those of heterosexual respondents.

The news is based on the results of a survey in England of more than 2 million people, including over 27,000 people who described themselves as gay, lesbian or bisexual.

It found that sexual minorities were two to three times more likely to report having longstanding psychological or emotional problems and significantly more likely to report fair/poor health than heterosexuals.

People who described themselves as bisexual had the highest rates of reported psychological or emotional problems. The researchers speculate that this could be due to a “double discrimination” effect: homophobia from the straight community, combined with being stigmatised by the gay and lesbian communities as not being “properly gay” (biphobia).

Sexual minorities were also more likely to report unfavourable experiences with nurses and doctors in a GP setting.

Unfortunately this study cannot tell us the reasons for the differences reported in either health or relationships with GPs.

The results of this survey would certainly seem to suggest that there is room for improvement in the standard and focus of healthcare offered to gay, lesbian and bisexual people.

 

Where did the story come from?

The study was carried out by researchers from the RAND Corporation (a non-profit research organisation), Boston Children’s Hospital/Harvard Medical School and the University of Cambridge. The study was funded by the Department of Health (England).

The study was published in the peer-reviewed Journal of General Internal Medicine. This article is open-access so is free to read online.

The results of this study were well reported by The Independent and The Guardian.

 

What kind of research was this?

This was a cross-sectional study that aimed to compare the health and healthcare experiences of sexual minorities with heterosexual people of the same gender, adjusting for age, race/ethnicity and socioeconomic status.

A cross-sectional study collects data at one point in time so it is not able to prove any direct cause and effect relationships. It can be useful in highlighting possible associations that can then be investigated further.

 

What did the research involve?

The researchers analysed data from the 2009/10 English General Practice Patient Survey.

The survey was mailed to 5.56 million randomly sampled adults registered with a National Health Service general practice (it is estimated that 99% of England’s adult population are registered with an NHS GP). In all, 2,169,718 people responded (39% response rate).

People were asked about their health, healthcare experiences and personal characteristics (race/ethnicity, religion and sexual orientation).

The question about sexual orientation is also used in UK Office for National Statistics social surveys: “Which of the following best describes how you think of yourself?” The response options were:

  • heterosexual/straight
  • gay/lesbian
  • bisexual
  • other
  • I would prefer not to say

Of the respondents, 27,497 described themselves as gay, lesbian or bisexual.

The researchers analysed the responses to questions concerning health status and patient experience.

People were asked about their general health status (“In general, would you say your health is: excellent, very good, good, fair, or poor?”) and whether they had one of six long-term health problems, including a longstanding psychological or emotional condition.

The researchers looked to see whether people had reported:

  • having “no” trust or confidence in the doctor
  • “poor” or “very poor” to at least one of the doctor communication measures of giving enough time, asking about symptoms, listening, explaining tests and treatments, involving in decisions, treating with care and concern, and taking problems seriously
  • “poor” or “very poor” to at least one of the nurse communication measures
  • being “fairly” or “very” dissatisfied with care overall

The researchers compared the responses from sexual minorities and heterosexuals of the same gender after controlling for age, race/ethnicity and deprivation.
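
In practice, comparisons made “after controlling for” factors like these are usually done with logistic regression. The study does not publish its code, so the sketch below is a hypothetical Python illustration on simulated data, with invented column names and coefficients:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate survey-like data; all values and column names are invented.
    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "orientation": rng.choice(["straight", "gay", "bisexual"],
                                  size=n, p=[0.95, 0.03, 0.02]),
        "age": rng.integers(18, 85, size=n),
        "deprivation": rng.integers(1, 6, size=n),  # e.g. an area quintile
    })

    # Build an outcome where sexual minorities have higher odds of
    # reporting fair/poor health, mirroring the direction of the findings.
    logit_p = (-2.0
               + 0.7 * (df["orientation"] != "straight")
               + 0.01 * df["age"]
               + 0.10 * df["deprivation"])
    df["poor_health"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    # Logistic regression of poor health on orientation, controlling
    # for age and deprivation, with heterosexuals as the reference group.
    model = smf.logit("poor_health ~ C(orientation, Treatment('straight'))"
                      " + age + deprivation", data=df).fit(disp=False)
    print(np.exp(model.params))  # exponentiated coefficients = odds ratios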

 

What were the basic results?

Both male and female sexual minorities were two to three times more likely to report having a longstanding psychological or emotional problem than their heterosexual counterparts. Problems were reported by 5.2% of heterosexual men, compared with 10.9% of gay men and 15.0% of bisexual men, and by 6.0% of heterosexual women, compared with 12.3% of lesbian women and 18.8% of bisexual women.

Both male and female sexual minorities were also more likely to report fair/poor health. This was reported by 19.6% of heterosexual men, compared with 21.9% of gay men and 26.4% of bisexual men, and by 20.5% of heterosexual women, compared with 24.9% of lesbian women and 31.6% of bisexual women.
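
As a quick check on the “two to three times” figure, dividing the reported percentages gives the relative proportions directly (strictly these are ratios of proportions rather than the adjusted comparisons the study made, but at percentages this low the two are similar):

    # Reported rates of longstanding psychological or emotional
    # problems, as percentages (taken from the figures above).
    rates = {
        "men":   {"heterosexual": 5.2, "gay": 10.9, "bisexual": 15.0},
        "women": {"heterosexual": 6.0, "lesbian": 12.3, "bisexual": 18.8},
    }

    for gender, groups in rates.items():
        baseline = groups.pop("heterosexual")
        for group, rate in groups.items():
            print(f"{group} {gender}: {rate / baseline:.1f}x the heterosexual rate")
    # gay men: 2.1x, bisexual men: 2.9x,
    # lesbian women: 2.1x, bisexual women: 3.1x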

Negative healthcare experiences were uncommon in general, but sexual minorities were about one-and-a-half times more likely than heterosexual people to report unfavourable experiences with each of four aspects of primary care:

  • no trust or confidence in the doctor was reported by 3.6% of heterosexual men compared with 5.6% of gay men (4.3% of bisexual men, a difference from heterosexual men that was not statistically significant), and by 3.9% of heterosexual women compared with 5.3% of lesbian women and 5.3% of bisexual women
  • poor/very poor doctor communication was reported by 9.0% of heterosexual men compared with 13.5% of gay men and 12.5% of bisexual men, and by 9.3% of heterosexual women compared with 11.7% of lesbian women and 12.8% of bisexual women
  • poor/very poor nurse communication was reported by 4.2% of heterosexual men compared with 7.0% of gay men and 7.3% of bisexual men, and by 4.5% of heterosexual women compared with 7.8% of lesbian women and 6.7% of bisexual women
  • being fairly/very dissatisfied with care overall was reported by 3.8% of heterosexual men compared with 5.9% of gay men and 4.9% of bisexual men, and by 3.9% of heterosexual women compared with 4.9% of lesbian women (4.2% of bisexual women, a difference from heterosexual women that was not statistically significant)

 

How did the researchers interpret the results?

The researchers concluded that “sexual minorities suffer both poorer health and worse healthcare experiences. Efforts should be made to recognise the needs and improve the experiences of sexual minorities. Examining patient experience disparities by sexual orientation can inform such efforts”.

 

Conclusion

This study found that sexual minorities were two to three times more likely to report having longstanding psychological or emotional problems, and significantly more likely to report fair/poor health, than heterosexuals.

Sexual minorities were also more likely to report unfavourable experiences with nurses and doctors in a GP setting.

It should also be noted that response rates to the survey were low, with only 39% of those surveyed responding. It is unknown whether the results would have been different if more people had responded.

While potential reasons for these disparities may include the stress induced by homophobic attitudes, or the suspicion that a GP disapproves of their patient’s sexuality, these speculations are unproven.

As it stands, this study cannot tell us the reasons for the differences reported. However, it would suggest that healthcare providers need to do more to meet the needs of gay, lesbian and bisexual people.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Gay people more likely to have mental health problems, survey says. The Independent, September 4 2014

Gay people report worse experiences with GPs. The Guardian, September 4 2014

Links To Science

Elliott MN, Kanouse DE, Burkhart Q, et al. Sexual Minorities in England Have Poorer Health and Worse Health Care Experiences: A National Survey. Journal of General Internal Medicine. Published online September 4 2014

Categories: Medical News