Medical News

Wet wipes may help spread hospital bugs

Medical News - Tue, 06/09/2015 - 15:00

"A new study finds that detergent wipes are spreading bugs in hospitals," The Daily Telegraph reports. This isn't strictly true, as the study didn't do any tests in hospitals. But through laboratory experiments, researchers found seven commonly used brands of wet wipe could transfer bacteria from one surface to another.

Researchers tested seven detergent cleaning wipes they say are used in UK hospitals. They looked at three common causes of hospital-acquired infection: Staphylococcus aureus, a common cause of skin infections; Clostridium difficile, which can upset the digestive system; and Acinetobacter baumannii, which is usually harmless for most people, but can be very dangerous for people with a weakened immune system.

They found using the same wipe on different surfaces seemed to help these three germs spread. The study also found large variation in the ability of the different types of wipes to kill these three germs.

The authors mention a "one wipe, one surface, one direction approach," but they suspect people use the wipes on multiple surfaces in reality. As this was an experimental study, we do not know whether the use of wipes in this way would have a real-world impact and, if so, what that impact would be. We also don't know how wipes compare with other cleaning methods.

Still, this study does reinforce the importance of infection control in hospital, something that staff, visitors and patients can help to maintain by taking simple steps such as washing their hands frequently. 

Where did the story come from?

The study was carried out by researchers from Cardiff University and was funded by the university. It was published in the American Journal of Infection Control, a peer-reviewed journal.

Generally, the UK media reported the story accurately given the information they were presented with. However, an inconsistency did creep in at the research reporting stage and was replicated in most of the subsequent press coverage.

The researchers made the statement: "All the wipes repeatedly transferred large number of S. aureus on to three consecutive surfaces except wipe G, for which transfer of bacteria was below the limit of detection for this test."

But in the conclusion, this was summarised to: "All of the wipes repeatedly transferred bacteria and spores on to multiple surfaces". This shortened version in the conclusion made it into most of the media coverage. 

What kind of research was this?

This study looked at the effectiveness of detergent cleaning wipes at removing germs from surfaces.

The researchers say the majority of current UK infection control policies advocate the use of detergent and water, or microfibre and water, for cleaning soiled or contaminated surfaces, adding that detergent wipes (wet wipes) are increasingly being used.

However, the team claim there is no good information about the ability of wet wipes to remove disease-causing microbes, or whether they could subsequently transfer microbes from one surface to another. 

What did the research involve?

The study picked seven detergent wipes currently used in healthcare facilities in the UK, and tested how good they were at removing three microbes from a stainless steel surface.

The microbes of choice were Staphylococcus aureus, Acinetobacter baumannii, and Clostridium difficile, representing common – and sometimes lethal – sources of hospital-acquired infections.

The wipes were tested to see how good they were at:

  • removing micro-organisms from the surfaces
  • preventing bacteria transfer when the same wipe was used to clean three consecutive surfaces

After using a 10-second "standard wiping protocol", the researchers measured the ability of the wipes to kill bacteria and spores using a standardised European method of assessment for chemical disinfectants.

Wiping experiments were independently repeated three times to get an average, and were analysed using appropriate methods. 
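Standardised disinfectant-efficacy tests of this kind typically report results as a log10 reduction in viable organisms, averaged across independent repeats. As a minimal sketch of that arithmetic, using entirely hypothetical colony counts rather than the study's own data:

```python
import math

def log10_reduction(cfu_before, cfu_after):
    """Log10 reduction in colony-forming units (CFU) after wiping.
    A reduction of 3 means 99.9% of organisms were removed or killed."""
    return math.log10(cfu_before) - math.log10(cfu_after)

# Hypothetical CFU counts from three independent repeats of one wipe test
repeats = [(1_000_000, 800), (1_000_000, 1_200), (1_000_000, 1_000)]
reductions = [log10_reduction(before, after) for before, after in repeats]
mean_reduction = sum(reductions) / len(reductions)
print(round(mean_reduction, 2))  # → 3.01, i.e. ~99.9% of organisms removed
```

The log scale is used because bacterial counts span many orders of magnitude; a wipe that leaves 0.1% of organisms behind can still leave thousands of viable cells on a heavily contaminated surface.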

What were the basic results?

The detergent wipes tested in this study showed big differences in their ability to remove the three germs from surfaces after a 10-second wipe.

They performed quite differently depending on the germs tested. Broadly speaking, the wipes were able to remove a lot of Acinetobacter baumannii, but performed worse for Staphylococcus aureus and Clostridium difficile spores.

Almost all wipes also repeatedly transferred significant amounts of bacteria or spores over three consecutive surfaces, except for one, which registered no transfer.

Even then, the research team say the percentage of total micro-organisms transferred from the wipes after wiping was low for a number of wipes. 

How did the researchers interpret the results?

The team said: "Because detergent cleaning is advocated in many national guidance documents, it is imperative that such recommendations and guidance take into account the wipe limitations found in this study.

"The issue of potential transfer on to multiple surfaces needs to be addressed to avoid the potential spread of microbial pathogens." 

Conclusion
This research suggests detergent cleaning wipes used in UK hospitals and the home show large variability in their ability to kill three selected microbes, including Staphylococcus aureus and Clostridium difficile.

Researchers tested seven commonly used wipes and found they varied a lot in their ability to kill the bugs. More worryingly, it looked like the wipes were picking up the germs rather than killing them – in almost all the wipes tested, the bugs were spread if they were used on a different surface.

The implication of this is that wipes shouldn't be used on consecutive surfaces. The authors mention that "a one wipe, one surface, one direction approach" is recommended, but they suspect people use them on multiple surfaces in reality.

This is a single study, so we don't know for sure that its results are reliable. There were some inaccuracies – for example, in how the study estimated the starting level of contamination in the tests.

The best way to find out would be to repeat the experiments, ideally using wiping protocols used in hospitals, and on the most common surfaces. Only stainless steel surfaces were tested here. Extending the number of bugs tested would also improve the study, as only three specific types were tested.

It also wasn't clear whether the amount of contamination after wiping was enough to cause or significantly raise the risk of infection. We don't know how often the wipes are used in hospital, or whether they are used alongside other more effective cleaning methods.

Hospital infection can be life threatening, so ensuring cleaning practices are evidence-based and effective is likely to be a priority. This research highlights how some wipes may not be as effective for specific germs on specific surfaces as we might assume.

Hospital cleaning protocols are continuously being assessed and refined, so this study will undoubtedly add to this process. 

Links To The Headlines

Wet wipes could be spreading bacteria in homes and hospitals. The Daily Telegraph, June 9 2015

Superbugs 'spread by hospital wet wipes'. BBC News, June 8 2015

Hospital bugs 'spread by use of wet wipes to clean wards' according to first study of its kind. Daily Mail, June 8 2015

Links To Science

Ramm L, Siani H, Wesgate R, Maillard J. Pathogen transfer and high variability in pathogen removal by detergent wipes. American Journal of Infection Control. Published online May 18 2015

Categories: Medical News

New blood test for viral infections shows promise

Medical News - Mon, 06/08/2015 - 16:00

"New test uses a single drop of blood to reveal entire history of viral infections," The Guardian reports.

Every time you are infected by a virus, your immune system produces specific types of antibodies in response. These antibodies remain in your body long after the infection has gone. The new test, called VirScan, is able to assess all these antibodies, building up a detailed immune "history" of viral infections.

Researchers looked at how well the test performed on blood samples from more than 500 people from North and South America, Africa and Asia.

The test correctly identified most of the people with known infections – though there were cases of both false negatives (saying an infection was not present even though it was) and false positives (wrongly diagnosing infection when there was none).

The test could theoretically be expanded to cover other types of organisms that cause human disease, such as bacteria, but this has not been tested yet. The test will also need to be updated as new viruses are discovered or as they change.

This test should be thought of as being at an early stage, likely to undergo further development and testing before it is ready for wider use.  

Where did the story come from?

The study was carried out by researchers from Harvard University and other research centres in the US, Europe, Peru, Thailand and South Africa.

It was funded by the US National Institutes of Health, the International AIDS Vaccine Initiative, the South African Research Chairs Initiative, the Victor Daitz Foundation, the Howard Hughes Medical Institute, the HIVACAT program and CUTHIVAC, the Thailand Research Fund, and the Chulalongkorn University Research Professor Program, NSF.

Some of the authors of the study are listed as inventors on a patent application related to the techniques used in the study (the use of phage display libraries to detect antiviral antibodies).

The study was published in the peer-reviewed journal, Science.

BBC News covered this story well and did not overstate the potential uses of the technique. Experts quoted in the story caution that while this technology may prove very useful in research, it may not be appropriate for diagnosing individual patients with diseases such as HIV.

The Mail Online suggested the test could be used to "help doctors diagnose patients with 'mystery illnesses'." But we do not yet know how this test performs compared with existing diagnostic methods for viral diseases.

Doctors and diagnostic laboratories would need to know that the new test performs as well as existing methods before considering it for diagnostic purposes, including the identification of "mystery illnesses". 

What kind of research was this?

This laboratory study aimed to develop a new blood test that could detect all of a person's previous viral infections at once.

Existing tests for viruses tend to look for a specific single virus and do not detect other viral infections. These tests tend to be based on detecting a virus' genetic material in our blood or how our immune system responds.

Once a viral infection has been successfully fought off by the body, its genetic material may not be detected, but an immune "memory" of the virus can last for decades. This research looked at developing a test for any virus based on looking at our immune memory of previous viral infections.

The researchers hoped this would help them better study the interaction between our immune system and these viruses. It is thought this interaction may influence the development of diseases involving the immune system, such as type 1 diabetes, and potentially even help the immune system fight other infections.  

What did the research involve?

Our immune system makes special proteins called antibodies to fight viruses and other infections. These antibodies work by "recognising" and binding to specific proteins and other molecules produced by the virus.

The immune system remembers the viruses it has been exposed to and continues to produce antibodies against them at a low level, even after the virus has been removed from the body. The researchers took advantage of this in developing their new test.

The researchers started by generating almost 100,000 bits of protein from more than 1,000 strains of all 206 different viral species identified as infecting humans. They were able to do this using genetic information from these viruses, as these sequences carry instructions for making all of the viruses' proteins.

The proteins were made in viruses that typically infect bacteria, called bacteriophages or just phages. These bacteriophages were genetically engineered to each produce one small bit of protein from a human virus, and thousands were then placed on a tiny microchip.

The researchers then took blood samples from 569 participants from four countries (the US, Peru, Thailand and South Africa) on four different continents. They extracted the part of the blood that contains antibodies (the serum) and washed a small amount (less than a microlitre) of this over the microchip.

When antibodies recognise a viral protein they have been exposed to before, they bind to it. This response allowed the researchers to identify which of the bacteriophages had antibodies bound to them, and how much.

They then assessed what viral protein each of those bacteriophages was producing and which virus it came from. These were the viruses the person would have been exposed to in the past.

The researchers particularly looked for cases where the person's antibodies recognised more than one piece of protein from a given virus, as this would give greater confidence that the person really had been exposed to this virus. They also developed ways to help tell antibody reactions apart from related viruses that produce similar proteins.

They then compared which viruses people in different countries had been exposed to. Some of the participants had known viral infections, such as HIV or hepatitis, so the researchers checked how well this test picked these up.  

What were the basic results?

The researchers found the VirScan test was able to detect 95% or more of known infections with HIV or hepatitis C that had already been diagnosed with existing single virus tests.

VirScan was also able to correctly differentiate between different forms of the hepatitis C virus in 69% of people with known infections. Similar results were found for its ability to detect and differentiate between similar herpes simplex viruses (HSV1 and HSV2).
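The 95% figure is the test's sensitivity: the proportion of people with a known infection that it correctly flags. A minimal sketch with hypothetical counts (not the study's data) shows how sensitivity and its counterpart, specificity, are calculated from the false negatives and false positives described earlier:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of people WITH the infection the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of people WITHOUT the infection the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 100 people with a known infection, 100 without
print(sensitivity(true_pos=96, false_neg=4))  # 0.96 → 96% of infections detected
print(specificity(true_neg=98, false_pos=2))  # 0.98 → 2% false positive rate
```

A test can score well on one measure and poorly on the other, which is why both false negatives and false positives matter when judging a screening tool.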

The researchers found the participants had antibodies against an average of 10 viral species. Younger participants tended to have had fewer viral exposures than older participants from the same country.

This is what would be expected, as they have had less time to be exposed. The pattern of different infections seen in participants from different countries was also similar to what was expected.

The researchers found there were some bits of viral protein that people who had been exposed to that virus almost always produced antibodies against. This suggests that these bits of protein are particularly good at causing a similar immune response in different people and therefore might be useful in making vaccines.

The researchers also found some "false positives" where their test appeared to be detecting pieces of viral protein because of their similarity to proteins from bacteria. 

How did the researchers interpret the results?

The researchers concluded that the VirScan test provides a way to study all current and past viral infections in people using a small sample of blood. The method can be performed in samples from large numbers of people at the same time and is able to distinguish between related viruses.

They said: "VirScan may prove to be an important tool for uncovering the effect of host-virome interactions on human health and disease, and could easily be expanded to include new viruses as they are discovered, as well as other human pathogens, such as bacteria, fungi, and protozoa [single celled micro-organisms that cause diseases such as malaria]." 

Conclusion
This research has developed a test that is able to identify past viral infections using a small sample of blood, giving an insight into a person's history of viral infections. The test could theoretically be expanded to cover other types of organisms that cause human disease, such as bacteria.

No test is perfect, however, and there were some cases where a known infection was not identified (false negative) and where an infection was picked up that was not thought to have really occurred (false positive). The test also detects antibodies generated in response to vaccination, rather than infection alone.

Antibody response also reduces over time, so the test may not be able to identify all previous infections. The researchers thought this was why they detected less exposure to some common viral infections, such as flu, than they expected.

The use of shorter bits of protein may also mean that some antibodies that recognise larger sections of the protein, or only recognise the protein after it has other molecules added to it, may not be identified.

While the test showed promise for telling different related viral strains apart, the researchers note it won't be as good at this as some genetic tests.

The test is reported to potentially cost only $25 per sample, but it is unclear whether this included the cost of all the machines needed to carry out the testing. Not all diagnostic labs may have access to these machines.

This test should be thought of as being at an early stage. While it might be able to cover other organisms, this has not been tested yet. The researchers suggest it could eventually be used as a first-stage rapid screen for viral infections, which could be followed up by more specific diagnostic tests. Again, more research will be needed to test this.

VirScan will also need to be updated as new viruses are discovered or as viruses change. For now, it is likely to have further development and largely be used as a research tool, rather than for diagnosing disease.

Links To The Headlines

New test uses a single drop of blood to reveal entire history of viral infections. The Guardian, June 4 2015

This blood test can tell you every virus you’ve ever had. The Independent, June 7 2015

The simple blood test that reveals EVERY virus you've ever had - and could help doctors diagnose patients with 'mystery illnesses'. Mail Online, June 5 2015

Revolutionary £16 blood test reveals every virus you've ever had - breakthrough could help GPs fight future diseases. Daily Mirror, June 5 2015

Test unravels history of infection. BBC News, June 4 2015

Meet The Blood Test VirScan That Can Reveal Every Virus You've Ever Been Infected With. The Huffington Post, June 5 2015

Links To Science

Xu G, Kula T, Xu Q, et al. Comprehensive serological profiling of human populations using a synthetic human virome. Science. Published online June 5 2015

Categories: Medical News

'Missing link' between brain and immune system discovered

Medical News - Mon, 06/08/2015 - 14:00

“Newly discovered vessels beneath skull could link brain and immune system,” The Guardian reports. It has been suggested that the discovery, which has been described as textbook-changing, could lead to new treatments for a range of neurological conditions.

Until now, it was thought that the brain was not connected to the lymphatic system. This is an essential part of the immune system that helps fight infection, while draining excess fluid from tissue.

In this study, scientists discovered previously unknown lymphatic vessels in the outer layers of the brain. These vessels appeared to link the brain and spinal cord with the rest of the body’s immune system. The study used both mice and human samples: vessel structure was investigated in the mice, and the observations were followed up in the human samples.

Further study will be needed to confirm that the system works the same in humans, but the discovery may require a reassessment of our assumptions about lymph drainage in the brain and its role in diseases involving brain inflammation or degeneration, such as Alzheimer’s disease and multiple sclerosis.

It is too early to say whether the findings could one day have any implications for the treatment of these types of conditions.


Where did the story come from?

The study was carried out by researchers from The University of Virginia and was funded by Fondation pour la Recherche Médicale and by The National Institutes of Health. The study was published in the peer-reviewed scientific journal Nature.

The study has generated a great deal of media excitement, both in the UK and internationally.

This excitement seems largely driven by quotes released by the researchers themselves, such as Professor Kevin Lee, who was widely quoted as saying: “The first time these guys showed me the basic result, I just said one sentence: ‘They’ll have to change the textbooks’.”

However, media reports such as the Mail Online's claim that “[the discovery] could help treat conditions such as autism and Alzheimer’s” are premature, as this cannot be concluded from this stage of the research.


What kind of research was this?

This was an animal study using mice to investigate the structure and function of lymphatic vessels in the brain.

It is said to have been previously understood that the central nervous system (brain and spinal cord) did not have a typical lymphatic drainage system. Lymph is the immune fluid that circulates round the body, containing white blood cells to fight infection and destroy abnormal cells.

This study aimed to look at the circulation of lymph in the mouse brain, potentially creating a greater understanding of the workings of the brain and disease processes. However, mice and humans do not have identical biology, so the findings may not be directly applicable. 


What did the research involve?

The scientists used adult mice to look at brain structure and the circulation of lymph.

The study involved complex laboratory techniques. This included the use of a fluorescent antibody to assess the alignment of cells within the brain, examination for markers associated with a lymphatic drainage system and looking at the functional capacity of identified vessels to carry lymphatic fluid to and from the brain. 

Human samples taken from the brain at autopsy were used to investigate any structures found in mice.


What were the basic results?

The scientists found that the outer protective layers of the mouse brain (the meninges) showed cells that were clearly lined up, which suggested that these were vessels with a unique function. These cells showed the characteristic features of functional lymphatic vessels. These vessels appeared able to carry both fluid and immune cells from the fluid surrounding the brain and spinal cord (the cerebrospinal fluid), and were connected to the lymph nodes in the neck.

The location of these vessels may be the reason they have not been discovered to date, which would explain the longstanding belief that there is no lymphatic drainage system in the brain.


How did the researchers interpret the results?

The researchers state: “The presence of a functional and classical lymphatic system in the central nervous system [brain and spinal cord] suggests that current dogmas regarding brain tolerance and the immune privilege of the brain should be revisited”. This new understanding may mean current thinking about how the brain works needs to be reassessed. The researchers go on to say it could be the malfunction of these vessels that could be the cause of a variety of brain disorders, such as multiple sclerosis and Alzheimer’s disease.


Conclusion
This mouse study has examined the circulation of lymph in the brain. It discovered previously unknown lymphatic vessels in the outer layers of the mouse brain. If accurate, the findings may call for a review of how the immune system in the brain functions, and shed new light on its role in brain diseases involving brain inflammation or degeneration.

Though animal research can give a good insight into biological and disease processes, and how they may work in humans, the processes in humans and mice are not identical. Further studies are needed to confirm these findings and to assess whether this knowledge is transferable to humans.

As such, it is too early to say whether the findings could one day have any implications for the treatment of degenerative brain conditions such as multiple sclerosis or Alzheimer’s.  

Links To The Headlines

Newly discovered vessels beneath skull could link brain and immune system. The Guardian, June 5 2015

Landmark discovery about the brain 'will have scientists rewriting textbooks' - and could help treat conditions such as autism and Alzheimer's. Mail Online, June 5 2015

Links To Science

Louveau A, Smirnov I, Keyes TJ, et al. Structural and functional features of central nervous system lymphatic vessels. Nature. Published online June 1 2015

Categories: Medical News

Depression 'starts in the womb' claim is unproven

Medical News - Fri, 06/05/2015 - 16:30

“The seeds of depression can be sown in the womb,” is the claim in the Mail Online.

While a new study did find that depression during pregnancy was linked to an increased risk of depression in adult offspring, a range of factors could be contributing.

The study analysed data collected from 103 pregnant mothers whose mental health was assessed through interviews during pregnancy and up to the time their child was 16. The children also answered questions of a similar nature about their mental health once they reached the age of 25. The researchers also assessed whether they had experienced maltreatment.

The odds of children whose mothers were depressed during pregnancy developing depression themselves in adulthood were about three times that of children whose mothers were not depressed during pregnancy. They also had about twice the odds of experiencing maltreatment as a child (not necessarily by the mother). 

Analyses suggested that the increased maltreatment might explain the link seen between maternal depression in pregnancy and depression in offspring as adults.

The researchers also make various suggestions as to why the links seen might exist. These included the possibility that maternal depression could impact on the child’s development by increasing levels of stress hormones in the womb, speculation that the Mail seems to have taken as proven fact.

In conclusion, it is not possible to say with certainty that maternal depression during pregnancy was directly causing the increase in depression risk seen.

Irrespective of this, it is important that women who experience depression during pregnancy get appropriate treatment and support. 


Where did the story come from?

The study was carried out by researchers from King’s College London and was funded by the Psychiatry Research Trust; the National Institute for Health Research (NIHR)/Wellcome Trust King’s Clinical Research Facility; the NIHR Biomedical Research Centre at South London and Maudsley National Health Service Foundation Trust; the Institute of Psychiatry, Psychology & Neuroscience, King’s College London; and the Medical Research Council United Kingdom.

The study was published in the peer-reviewed medical journal The British Journal of Psychiatry. It has been made available on an open-access basis, so it is free to read online or download as a PDF.

The Mail's reporting of the study is likely to add unnecessarily to expectant mothers’ concerns, as it does not highlight the limitations to the research, and the fact the research doesn't show cause and effect, or whether other factors are playing a role.

Also, the suggestion that “Screening pregnant women for [sic] condition [depression] could stop it being passed on” has not been tested in this study.


What kind of research was this?

This was a prospective cohort study called the South London Child Development Study (SLCDS), which started in 1986. It aimed to assess whether a child’s exposure to a mother’s depression during and after pregnancy was linked to their risk of depression in adulthood, and also their risk of maltreatment as a child.

Previous research has shown a link between postnatal depression in the mother and later depression in the child, but no prospective studies have attempted to assess the link between a mother’s depression while pregnant and depression of the child when they reach adulthood.

A prospective cohort study is the best way of conducting such a study, but it still has limitations. Most important of these is the possibility that factors other than the one of interest (maternal depression) are contributing to links seen. When such studies follow up people over a long time period, as this study did, they are also prone to participants being lost to follow-up, which can bias results.


What did the research involve?

The researchers recruited expectant mothers in 1986 at 20 weeks into their pregnancy. They assessed their mental health during and after the pregnancy, up until the child was 16 years old. They also assessed whether the child was maltreated, and the child’s mental health when they reached 25. The researchers then analysed whether maternal depression at any stage was associated with the child’s depression or maltreatment.

Standardised one-to-one interviews were carried out with expectant mothers alone at 20 and 36 weeks of pregnancy, and together with their children at 4, 11, 16 and 25 years. The following were assessed in these interviews:

  • maternal depression during pregnancy (at 20 and 36 weeks)
  • maternal postnatal depression (3, 12 and 48 months after birth)
  • maternal depression during offspring’s childhood (4, 11 and 16 years)
  • offspring maltreatment (up to age 17)
  • offspring depression in adulthood (18 to 25 years of age)

The researchers also collected information on other factors that may have contributed to or altered findings (potential confounders) so they could take these into account in their analyses.

Of the 153 women who completed the first interview, 103 (67%) completed the study and had their data analysed.


What were the basic results?

Of the mothers in the sample, 34% experienced depression during pregnancy and 35% suffered postnatal depression. Maltreatment was reported in 35% of offspring and about 38% met the criteria for depression in adulthood.

Before taking into account any potential confounders, children exposed to maternal depression in pregnancy had 3.4 times the odds of developing depression as adults compared to children who had not been exposed (odds ratio (OR) 3.4, 95% confidence interval (CI) 1.5 to 8.1). When taking into account child maltreatment and exposure to maternal depression when aged 1 to 16 years old, this association did not remain.

Children exposed to maternal depression in pregnancy were more likely to experience maltreatment as a child (OR 2.4, 95% CI 1.0 to 5.7). Analyses suggested that the maltreatment might be the “link” between maternal depression in pregnancy and offspring depression in adulthood.
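Odds ratios like these are calculated from a 2x2 table of exposure against outcome, with the confidence interval computed on the log scale. The sketch below uses hypothetical counts chosen only to illustrate the arithmetic; they are not the study's actual data:

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table, with an approximate 95% CI:
                  outcome   no outcome
        exposed      a          b
        unexposed    c          d
    The CI is computed on the log scale (Woolf method)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts picked only to yield an OR near the reported 3.4
or_, lo, hi = odds_ratio_with_ci(a=18, b=17, c=16, d=52)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # 3.4 1.4 8.2
```

Note how wide the interval is with a sample of around 100: small studies such as this one produce imprecise estimates, which is one reason the authors' conclusions need replication.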


How did the researchers interpret the results?

The researchers conclude that the study “shows that exposure to maternal depression during pregnancy increases offspring vulnerability for developing depression in adulthood”. The authors also state: “By intervening during pregnancy, rates of both child maltreatment and depressive disorders in the young adults could potentially be reduced. All expectant women could be screened for depression and those identified offered prioritised access to psychological therapies – as is currently recommended by the UK guidelines on perinatal mental health.”


Conclusion
This prospective cohort study found a link between depression in the mother during pregnancy and child maltreatment and depression in adulthood. The results suggested that the child maltreatment might be the intermediate “step” or “link” between maternal and offspring depression.

The study has strengths and limitations. Its main strength is that it prospectively followed women and their children over a long time period. The prospective nature of the study is the best way to collect such information, and allowed the study to use standardised diagnostic interviews to collect consistent information from participants.

The main limitation to the study is that we cannot be certain that the links seen are due to a direct effect of maternal depression during pregnancy. While the researchers did explore and take into account some potential confounders, other factors could be contributing. It is likely that a range of environmental and potentially genetic factors may be playing a role, especially for a condition as complex as depression, so it is difficult to disentangle their effects.

Another limitation is the study’s small sample size, and the fact that about a third of participants did not complete it. Also, the rates of depression in the study were relatively high, which the authors suggest might reflect the urban population studied. This means that the results may not be representative of the whole population and therefore may not be generalisable to other groups.

As data was collected by interview, and in some cases concerned events from the past, participants may not have answered truthfully or recalled the information accurately, and this could affect the results.

This study has found some associations, but we should be cautious about what we conclude from them. It does, however, highlight that many women experience depression in pregnancy, and ensuring this is treated appropriately is important to the health and wellbeing of the mother, as well as that of her child and family.

As the authors mention in their article, the use of antidepressants in expectant mothers is an area of debate, due to the potential for effects on the developing baby. Doctors may decide to prescribe them in situations where the benefits are considered to outweigh the potential risks.

It is also important to note that there are other forms of treatment available, such as talking therapies, including cognitive behavioural therapy. Pregnant women who are concerned that they may be depressed should not be afraid to talk to their healthcare professional about this, to ensure they get appropriate care.

Links To The Headlines

Could depression start in the WOMB? Children of mothers suffering mental illness in pregnancy are 'three times more likely to develop the condition'. Mail Online, June 5 2015

Links To Science

Plant DT, Pariante CM, Sharp D, Pawlby S. Maternal depression during pregnancy and offspring depression in adulthood: role of child maltreatment. British Journal of Psychiatry. Published online June 4 2015

Categories: Medical News

Children with autism may be supersensitive to change

Medical News - Fri, 06/05/2015 - 16:00

"People with autism … are over-sensitive to the world," the Mail Online reports. It reports on an animal study involving a rat model of autism, where a chemical is used to mimic the development of autism in rats. The study found the "autistic" rats showed signs of anxiety and withdrawal when placed in unpredictable environments.

Researchers compared the rats when they were reared in one of three environments: a standard cage, or two types of enriched environment with toys and treats – one where these "enrichments" stayed the same and another where they changed unpredictably.

Overall, they found rats tended to do better in the predictable enriched environment than the standard or unpredictable enriched ones on various tests of sociability, behaviour and emotional response.

This study lends support to what is already generally understood about autism and autism spectrum disorder (ASD) – many people on the spectrum prefer stability and consistency in their environment and activities, and can often find changes to previously set routines upsetting.

However, it is too early to draw further conclusions from the results of this study. The causes of these developmental conditions are not clearly understood, and this rat model is unlikely to be entirely representative of humans with autism. This means we don't know how applicable the findings are or whether they could lead to new treatments. 

Where did the story come from?

The study was carried out by researchers from Yale University, the University of Michigan and the University of Louisiana at Lafayette in the US. It was supported by the Swiss National Science Foundation.

The study was published in the peer-reviewed scientific journal, Frontiers in Neuroscience. This is an open access journal, so the study is free to read online or download as a PDF.

The Mail Online's reporting of this study is reasonable, and it indicates at the start of the article that the research was carried out in rats, not humans.

What kind of research was this?

This was an animal study using a rat model of autism. It aimed to investigate environmental effects on behaviour and protein expression in the brain.

Autism spectrum disorder (ASD) is a lifelong developmental condition where those affected typically have difficulties with social interaction and communication, and often have quite rigid routines and activities.

People with autism often have some degree of intellectual impairment, while people with Asperger's syndrome usually have normal intelligence or heightened intelligence in some areas. There is no current agreement about whether there are particular underlying disease changes in the brains of people with ASD.

Because people with ASD usually have a preference for a consistent environment and activities, behavioural therapies often focus on these areas. This research aimed to focus on the environment that the child – or, in this case, the rat – grows up in.

The researchers investigated the theory that predictable environments would prevent distressed reactions, while unpredictable enriched environments would lead to abnormal behaviours.     

What did the research involve?

The study used a rat model of autism. Exposing unborn rats to an antiepileptic drug called valproic acid (VPA) has been shown to create behaviour similar to that seen in people with autism.

In this study, one group of unborn rats were exposed to VPA (given to the mother), while another group of control rats were exposed to inactive saline (salt water) injections.

When the rats were born, the researchers then tested the effect of housing the two groups of rats in one of three different environments:

  • standard laboratory conditions – standard bedding, housed in groups of three rats per cage, with the cages kept in a shared room
  • predictably enriching conditions – a constant setting of extra toys, treats, smells and a running wheel, with six rats per cage (a larger cage than standard); the cages were also kept in an isolated room
  • unpredictably enriching conditions – same as for the predictably enriching conditions, but the stimuli were regularly changed during the week

The researchers then looked at the effect that the pre-birth exposure and the subsequent environment after birth had on behavioural outcomes such as sociability, pain perception, fear response and general anxiety. They also looked at the effect on an overall measure of "emotionality", which incorporated five of the other behavioural scores.   

What were the basic results?

The researchers found pre-birth exposure and the subsequent environment had an effect on the rats' social behaviour.

In the standard environment, the VPA rats showed a reduced preference for being social (assessed by how much they sniffed another rat) compared with the control rats, but the two rat groups did not differ in the unpredictable enriched environment.

In the predictable enriched environment, the sociability and exploration of the VPA rats was increased relative to the control rats, in which it was reduced.

Pre-birth exposure and the subsequent environment had no effect on the rats' pain perception.

When looking at fear response (as indicated by the rats "freezing" in response to the expectation of a shock), VPA rats showed more fear than controls in the standard environment, but did not differ in the predictable enriched environment.

In the unpredictable enriched environment, the VPA rats showed a similar or heightened fear response compared with VPA rats in the standard environment. 

Looking at general anxiety (measured by exploring new environments), VPA rats generally explored less than control rats in the standard environments, though they tended towards higher exploration in the predictable enriched environments. 

In both rat groups, overall "emotionality" was increased by enrichment, but it increased to a greater extent in the VPA compared with control rats. In the VPA rats, "emotionality" scores were reduced in the predictable enriched environments. 

How did the researchers interpret the results?

The researchers concluded that, "Rearing in a predictable environment prevents the development of hyper-emotional features in an autism risk factor, and demonstrates that unpredictable environments can lead to negative outcomes, even in the presence of environmental enrichment." 

Conclusion

Overall, this study in a rat model of autism seems to support what is generally already understood about ASD: affected individuals often feel more comfortable with set patterns, routines and environments, and may find unpredictability more challenging.

However, it is hard to draw many solid conclusions from this study, particularly because it is difficult to know exactly how representative this rat model of autism is of humans with autism.

Animal research can often give a good insight into biological and disease processes and how they may work in humans, but animals and humans are not identical. With a complex condition such as autism, which does not have a clearly established cause or causes, it is difficult to fully replicate the condition in animals.

The researchers report the VPA model is a well-validated model of autism in rats and has some of the characteristics seen in people with autism. But it is likely differences still exist, so we can't be certain how applicable the findings are.

The study generally supports what is already understood about ASD, and may lend support to environmental and behavioural therapeutic approaches. However, we certainly can't say at this stage whether environmental manipulation in humans would have the ability to prevent or cure ASD.

Links To The Headlines

People with autism have 'supercharged' brains: Those with the condition are 'over-sensitive to the world - and not impaired'. Mail Online, June 4 2015

Links To Science

Favre MR, La Mendola D, Meystre J, et al. Predictable enriched environment prevents development of hyper-emotionality in the VPA rat model of autism. Frontiers in Neuroscience. Published online June 2 2015

Categories: Medical News

Five-year 'death test' for older adults launched online

Medical News - Thu, 06/04/2015 - 15:30

"Are you dying to know? Scientists develop death test to predict if you'll make it to 2020," The Daily Telegraph reports. The test is based on analysis of data collected from the UK Biobank.

This is essentially a huge ongoing cohort study that collected data from almost 500,000 middle- to older-age adults in the UK over an average of five years. This data was then used to create an online death risk calculator.

The researchers looked at around 650 different measurements, including blood tests, family history, health and medical history to work out which were most strongly associated with risk of death over the next five years.

This was then used to create an online risk calculator of death. For this, the researchers focused on factors that were easy for people to self-report. For example, you probably have no clue what size your red blood cells are or what your cholesterol level is, but you do know how many children you have.

The factors included in the tool do not necessarily cause death, but are associated with an increase in risk. Many of these factors cannot be changed, such as already having a longstanding illness, but some can be, such as smoking – the strongest predictor of death in people with no medical illness.

Researchers hope the calculator may motivate people to make improvements to their health, or help doctors identify people who could be targeted for interventions to reduce their risk. Studies will be needed to assess whether the tool does lead to these effects.  

Where did the story come from?

The study was carried out by researchers from the Karolinska Institutet and Uppsala University in Sweden, and was funded by the Knut and Alice Wallenberg Foundation and the Swedish Research Council.

The Knut and Alice Wallenberg Foundation is the largest private financier of research in Sweden, and their goal is to "promote scientific research, teaching and education beneficial to the Kingdom of Sweden".

The study used data from the UK Biobank, a registered charity set up in the UK by The Wellcome Trust, the Medical Research Council, the Department of Health, the Scottish Government, and the Northwest Regional Development Agency, with additional funding from the Welsh Assembly Government, the British Heart Foundation and Diabetes UK.

The study was published in the peer-reviewed journal The Lancet on an open access basis, so it is free to view online.

Most of the UK media described the questions included in the online prediction tool UbbLE (UK Longevity Explorer). They also provided expert opinion that highlighted hopes the tool may help people make healthier lifestyle choices, while pointing out that most of the predictive factors used in it do not directly cause disease.

But some news sources suggested the online calculator predicts whether you'll die within five years – this is not the case. The test does not tell you categorically whether you will die or not; it only gives you a percentage chance based on your characteristics.  

What kind of research was this?

This research used data from a large cohort study of middle- and older-age people from the UK. The researchers wanted to work out the association between multiple measurements of health and socioeconomic status and risk of death over the next five years.

They also planned to create an online tool using the strongest predictors that could be self-reported to allow people to assess their individual risk.

As the analysis is based on a cohort study, the results do not prove cause and effect, and this was not the aim of this particular study. It wanted to identify predictors of risk of death, not necessarily things that cause death directly. 

What did the research involve?

The researchers used information from the large prospective UK Biobank cohort study. By collecting data on the cohort over many years and allowing scientists access to this information, the Biobank aims to improve the prevention, diagnosis and treatment of a wide range of serious and life-threatening illnesses, including cancer, heart disease, stroke, diabetes, arthritis, osteoporosis, eye disorders, depression, and forms of dementia.

UK Biobank recruited 500,000 people aged 40 to 69 into the study between 2006 and 2010. The participants filled out questionnaires and had numerous baseline measurements taken at one of 21 assessment centres across Scotland, England and Wales. In all, there were 655 measurements, which were grouped into 10 categories:

  • blood tests
  • cognitive function
  • early-life factors
  • family history
  • health and medical history
  • lifestyle and environment
  • physical measures
  • psychosocial factors
  • sex-specific factors
  • sociodemographics

The researchers used information for 498,103 people and identified any of those people who died up to February 2014, or December 2012 for Scottish participants. Central NHS registers were used to obtain cause of death.

The researchers then analysed the 655 measurements separately for women and men to determine their association with risk of death over the five years of follow-up.

They also calculated the association between each measurement and risk of death for three different age groups:

  • 40 to 52 years old
  • 53 to 62 years old
  • over 62 years old

The risks associated with all 655 measurements were then displayed for women and men on the UbbLE website on pages called the association explorer.

The researchers used the measurements showing the strongest association with risk of death that could be self-reported to create an online mortality risk calculator. This meant excluding any blood tests or physical measurements that a person could not easily, quickly and reliably take themselves.  

What were the basic results?

A total of 8,532 people died during the study period, about 1.7% of participants. This rate was lower than in the general UK population, which might suggest the people who took part were generally healthier than the population as a whole.

The most common causes of death were:

  • cancer (53% of male and 69% of female deaths)
  • cardiovascular diseases, such as a heart attack or coronary heart disease (26% of male and 13% of female deaths)
  • lung cancer in men (10% of male deaths, 546 cases)
  • breast cancer in women (15% of female deaths, 489 cases)

The strongest predictor of death over five years was:

  • self-reported health in men
  • a previous history of cancer in women

Other examples of strong predictors of death from various categories included:

  • self-reported walking pace – for example, men aged 40 to 52 reporting a slow pace had a 3.7 times higher risk of dying within five years than those who reported walking at a steady average pace
  • red blood cell size
  • pulse rate
  • forced expiratory volume in one second (as a measure of lung function)

When the researchers excluded people with serious illnesses, smoking was the strongest predictor of death.

From these results, the researchers created a prediction tool based on 13 questions for men and 11 questions for women. Based on a person's responses and death rates for the general UK population, the tool estimates how likely a person is to die in the next five years. 
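To give a feel for how a questionnaire-based risk calculator of this kind can work, here is a toy sketch. The baseline figure and the multipliers below are invented for illustration; they are not the coefficients used by the actual UbbLE tool:

```python
# Toy sketch of a self-reported mortality risk score.
# Baseline and multipliers are invented; this is NOT the real UbbLE model.
BASELINE_5YR_RISK = 0.017  # about 1.7% of participants died during follow-up

RISK_MULTIPLIERS = {
    "current_smoker": 2.0,     # hypothetical multiplier
    "slow_walking_pace": 1.8,  # hypothetical multiplier
    "previous_cancer": 2.5,    # hypothetical multiplier
}

def five_year_risk(answers):
    """Combine yes/no answers into an estimated 5-year risk of death."""
    risk = BASELINE_5YR_RISK
    for factor, present in answers.items():
        if present:
            risk *= RISK_MULTIPLIERS[factor]
    return min(risk, 1.0)  # a probability can never exceed 100%

print(five_year_risk({"current_smoker": True, "slow_walking_pace": False,
                      "previous_cancer": False}))  # 0.034, i.e. 3.4%
```

A real calculator would be calibrated against observed death rates rather than using simple multipliers, but the inputs-to-percentage shape is the same.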

How did the researchers interpret the results?

The authors concluded that, "The prediction score we have developed accurately predicts five-year all-cause mortality and can be used by individuals to improve health awareness, and by health professionals and organisations to identify high-risk individuals and guide public policy." 

Conclusion

This large study has identified numerous risk factors associated with a person's risk of death within five years. Researchers used this information to develop an online tool that predicts someone's risk of death within the next five years. The study's strengths include its large sample size and the prospective nature of the study design.

But there are some limitations. There may be some bias in the type of people who volunteered to take part. The death rate was lower than that of the average population in this age group, which may indicate that the participants were more interested in their health and so had healthier lifestyles. This may limit whether the results apply to the population as a whole.

The study only included people from the UK between the ages of 37 and 73, and results may not apply to people outside that age range or from other countries. For example, the data is reliant on self-reporting, and people from other age groups or countries might interpret some of these concepts differently. This may not be an issue for some factors, but it might be an issue for others, such as estimating walking pace or level of health.

This study did not aim to assess whether the factors directly increased risk of death – rather, it set out to identify factors that are associated with and can predict death risk when combined.  Also, several factors in the calculator cannot be changed, such as current and past health. But smoking – a factor that can be altered – was the factor most strongly predictive of death over the next five years.

As the media pointed out, other aspects of an unhealthy lifestyle, such as poor diet, excess alcohol intake and being overweight, are not included in the online risk calculator.

This is most probably because other factors had stronger associations, and because the chosen questions made the risk questionnaire easier to complete while still giving a good overall indication of risk. Questions in the risk calculator, such as, "In general, how would you rate your overall health?" are affected by multiple factors, such as obesity and alcohol intake.

The authors say they hope the online tool may help people make positive lifestyle changes. However, they also acknowledge that online health information may increase overdiagnosis and anxiety. Follow-up studies are needed to determine the effects of the calculator – for example, to what extent it motivates people to change their lives and what impact this has.

Overall, one message the study highlights is the importance of stopping smoking. Find out how you can quit smoking.

Links To The Headlines

Are you dying to know? Scientists develop death test to predict if you'll make it to 2020. The Daily Telegraph, June 4 2015

Ubble: the online test to predict if you'll die within five years. The Guardian, June 4 2015

Death test: Scientists claim questionnaire can help 40 to 70 year olds judge risk of dying within five years. Daily Mirror, June 4 2015

Will YOU die in the next five years? Take this simple online test to determine your chances of survival. Daily Mail, June 4 2015

Will You Die In Next Five Years? Take Online Test. Sky News, June 4 2015

Links To Science

Ganna A, Ingelsson E. 5 year mortality predictors in 498 103 UK Biobank participants: a prospective population-based study. The Lancet. Published online June 3 2015

Categories: Medical News

Breast cancer screening 'cuts deaths by 40%' expert panel says

Medical News - Thu, 06/04/2015 - 03:00

“Women who undergo breast cancer screening cut their risk of dying from the disease by 40%, according to a global panel of experts,” The Guardian reports.

Breast cancer screening reduces deaths from the condition by spotting cases of breast cancer at an early stage when they are still curable.

Critics argue that this benefit is outweighed by the problem of overdiagnosis, where women are diagnosed as having cancer and treated when the cancer would never have caused any harm. This treatment carries the usual impacts and side effects for these overdiagnosed individuals, but does not offer them any benefit.

The balance of benefits and risks from breast cancer screening is a hotly debated topic. The latest attempt to settle the debate is a new review published by the International Agency for Research on Cancer (IARC): a working group of cancer experts from across the world.

The review has been published in the peer-reviewed medical journal The New England Journal of Medicine.

The IARC concluded, based on an evaluation of the available evidence, that the benefit of inviting women aged 50 to 69 years of age for mammography screening outweighs the potential harms. In the UK, women in this age group are invited for this screening every three years.


How was the report developed?

IARC brought together a working group of 29 international experts from 16 countries to assess the benefits and harms associated with breast cancer screening. These experts were selected based on their areas of expertise and for not having any known conflicts of interest.

IARC staff searched for available studies on breast cancer screening, and the experts added any other relevant studies they were aware of in their areas. The experts reviewed and debated this evidence in their specialist areas, and came to an initial conclusion. This conclusion was then reviewed by the working group as a whole and a consensus position reached.


Why was the report needed?

This report was part of the IARC’s ongoing work to review and evaluate the effects of preventing different cancers. They had last reviewed the evidence on breast cancer screening in 2002. As new research continues to be carried out, it is important to consider this new evidence, and whether it affects their conclusions. Particular areas they highlighted as needing consideration were:

  • improvements in treatments for late-stage breast cancer
  • concerns around overdiagnosis (diagnoses of breast cancer that would never have been diagnosed otherwise and would never have caused the women any harm)
  • what age groups of women should be offered screening and how frequently
  • effects of screening through self or health professional breast examination, or approaches other than mammography
  • screening in women at high risk of breast cancer


What evidence did the expert group consider on mammography?

In their last report in 2002, the IARC concluded that the evidence for the efficacy of mammography screening in women aged 50 to 69 years old was sufficient, based on the available randomised controlled trials (RCTs). Reassessment of all available RCTs up to the time of the current assessment by the expert group confirmed that this was still the case.

The expert group also considered evidence from recent, high-quality observational studies, as the RCTs were carried out more than 20 years ago and there have been improvements in screening and treatment since then. They focused on cohort studies with a long duration which used the best methods for avoiding confounding and other potential limitations. 

Case-control studies were also considered, particularly in areas where there were no cohort studies. Twenty cohort studies and 20 case-control studies from high-income countries were considered for assessing the effectiveness of mammography.


What did the group conclude about mammography?

Overall, the group concluded that the benefits of mammography screening outweigh the adverse effects for women who are 50 to 69 years of age.

The results of 40 case-control and cohort studies from high-income countries suggested that women in this age group who went for screening had around a 40% reduction in risk of death from breast cancer. If all women who were invited for screening were considered, the average reduction in risk of death from breast cancer was 23%. The evidence did not clearly show how frequently women needed to be screened to gain maximum benefit.
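The 40% figure is a relative reduction, which by itself says nothing about the absolute risks involved. A quick illustration of the arithmetic, using hypothetical risk figures:

```python
# Relative risk reduction = 1 - (risk in screened / risk in unscreened).
# The two risk figures below are hypothetical, purely to show the arithmetic.
r_unscreened = 0.005  # e.g. 5 breast cancer deaths per 1,000 women over the period
r_screened = 0.003    # e.g. 3 deaths per 1,000 screened women

reduction = 1 - r_screened / r_unscreened
print(round(reduction, 2))  # 0.4 -> a 40% relative reduction
```

Note that in this hypothetical the same 40% relative reduction corresponds to an absolute difference of only 2 deaths per 1,000 women, which is why absolute figures matter when weighing benefits against harms.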

There was judged to be sufficient evidence that women aged 70 to 74 years who went for screening also had a reduced risk of death from breast cancer. Evidence in women aged under 50 was limited, meaning that conclusions could not be drawn.

There was sufficient evidence that mammography screening does lead to overdiagnosis. Once women have been identified as having breast cancer, it is impossible to tell which of them have been “overdiagnosed”, but there are ways to estimate the proportion of women it affects. The studies assessed by the expert group estimated that 1% to 11% of women identified as having breast cancer through screening are overdiagnosed.

There was also sufficient evidence that women experience short-term adverse psychological effects if they are given a false positive result on mammography (that is, a positive result that turns out not to be breast cancer on further investigation). Studies from organised screening programmes suggested that about 1 in 5 women who are screened 10 times between the ages of 50 and 70 years would be expected to have a false positive. Less than 5% of false positives lead to an invasive procedure, such as a needle biopsy.
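The "about 1 in 5 women over 10 screens" figure can be reproduced with basic probability arithmetic, assuming, as a simplification, that each screen's false positive risk is independent of the others (the per-screen rate below is illustrative):

```python
# If each mammogram has false-positive probability p, the chance of at least
# one false positive across 10 screens is 1 - (1 - p)**10.
# Independence between screens is a simplifying assumption; p is illustrative.
p = 0.022  # roughly 2.2% per screen

at_least_one = 1 - (1 - p) ** 10
print(round(at_least_one, 2))  # 0.2 -> about 1 in 5
```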


What were the expert group’s other conclusions?

The group also drew conclusions on the other issues they were covering in their report. For a lot of the issues they were interested in, they concluded that the evidence was as yet limited or inadequate to be able to draw firm conclusions. For example, the evidence about whether breast self-examination could reduce death from breast cancer if taught and practised competently and regularly was judged as being inadequate. The full report, including the conclusions, is available from the IARC website.


Does this mean that all scientists now agree and the debate is over?

Probably not. Evaluating the evidence relating to breast cancer screening is complex, and different scientists have analysed and interpreted it in different ways. For example, a Cochrane review from 2013 estimated that the overdiagnosis rate could be as high as 30% based on RCT evidence.

The current report is the considered opinion of the IARC, based on their evaluation of the evidence available to date. However, this doesn’t mean that all other scientists will agree, as they may interpret the studies and weigh up the benefits and harms differently. The IARC will continue to review their conclusions as new evidence becomes available.

What is important is that women who are invited for screening are provided with clear information, so they know the potential benefits and risks, and about the best estimation of their chances of experiencing these. This enables them to make decisions about whether they want to attend screening.

Sarah Williams from Cancer Research UK sums this up in a quote on the BBC website: "There isn't one definitive answer to the question of how the benefits and harms of breast screening stack up – individual women will have different views on the factors that matter most to them, and also there are a number of different ways to bring together and interpret the evidence.”  

Links To The Headlines

Breast cancer screening cuts chance of dying from disease by 40%, say experts. The Guardian, June 3 2015

Breast cancer screening beneficial, scientists reassure. BBC News, June 4 2015

Breast cancer screening cuts deaths by 40 per cent: Analysis of 10 million patients finds regular mammograms saves lives among middle-aged women. Mail Online, June 4 2015

Links To Science

Lauby-Secretan B, Scoccianti C, Loomis D, et al. Breast-Cancer Screening — Viewpoint of the IARC Working Group. The New England Journal of Medicine. Published online June 3 2015

Categories: Medical News

Can a single-shot therapy session cure insomnia?

Medical News - Wed, 06/03/2015 - 16:00

"Insomnia could be cured with one simple therapy session, new study claims," The Independent reports. UK researchers have been looking at whether cognitive behavioural therapy (CBT) delivered in a single one-hour session can combat acute insomnia.

CBT is a type of talking therapy that uses a problem-solving approach to tackle unhelpful patterns of thinking and behaviour. For example, many people with insomnia develop feelings of anxiety and stress related to not being able to sleep, which can make the problem worse – a vicious circle.

A course of CBT is already an established treatment for insomnia, but this trial aimed to see whether a single one-hour session of CBT could be effective.

Forty adults with short-term insomnia (less than three months) were randomised to a single CBT session or waiting list (no treatment) as a control. Four weeks later, 60% of the CBT group had "remission" of their insomnia (defined as falling below a pre-specified level of insomnia severity on a sleep index) compared with 15% of the control group.

The results show promise, but this was a small sample, and the participants may not be representative of all people with insomnia. The study also had a short follow-up period, and it is unknown whether these effects would be sustained beyond a month after the session.

Importantly, brief-form CBT was only compared with no treatment and not with a longer course of CBT or another treatment. A similar trial would be required to see how the brief intervention compares with the alternatives.  

Where did the story come from?

The study was carried out by researchers from Northumbria University, Newcastle University and the University of Pittsburgh in the US.

The study was not reported to have received any external funding. One of the authors reports receiving educational grants from UCB Pharma and Transport for London, and has consulted for the BBC.

The study was published in the peer-reviewed medical journal, Sleep.

In general, the media has been overoptimistic at this stage. While the results of this small study into the use of a single CBT session for insomnia are promising, questions remain and more study is needed.

Even just going on the results of this study alone, the treatment can't be described as a "cure", despite some media headlines. Not all people showed improvement and we don't know whether the effects lasted longer than a month in those who did improve. 

What kind of research was this?

This was a randomised controlled trial (RCT) that aimed to examine the use of a single session of cognitive behavioural therapy (CBT) for insomnia. CBT is a type of talking therapy that examines thought and behaviour patterns, beliefs and attitudes, and helps people find ways to cope by looking at things differently.

Up to 15% of the population are reported to suffer from chronic insomnia, though many more report problems with sleeping. Standard CBT for insomnia usually involves sessions delivered over six to eight weeks and has been demonstrated to be effective.

However, people sometimes have difficulties sticking to long treatment courses, and access to qualified therapists can be limited in some parts of the country.

This study aimed to look at the effects of a single session of CBT for insomnia, accompanied by a self-help booklet, compared with a no treatment or waiting list control condition. This treatment was specifically targeted at people who had insomnia for only a relatively short period of time – less than three months. 

What did the research involve?

The study recruited people with short-term insomnia, and randomised them to a single session of CBT with the accompanying self-help booklet or waiting list control. Researchers then compared participants' insomnia four weeks later to assess the effect of CBT.

Potential participants were recruited from the north-east of the UK and were assessed to see whether they met diagnostic criteria for acute insomnia (duration of less than three months).

Participants also had to have not previously tried CBT for insomnia and not be taking sleeping medications. A total of 40 adults (average age 32, 55% female) were enrolled who had varying underlying causes for their insomnia.

Most (31 out of 40) reported some form of non-medical stress as a cause (such as family, relationship or work problems) and the rest had insomnia related to health problems such as sleep apnoea or depression.

In the group randomised to treatment, the CBT session lasted for around one hour and was delivered on a one-to-one basis by a single experienced therapist.

The therapy included education on sleep and changes in sleep needs throughout life to challenge any misconceptions, and examination of the person's sleep diary, which they had completed when they enrolled in the study. From this diary, the researchers worked out each individual's "sleep efficiency" – the percentage of the time they spent in bed trying to sleep that they actually spent asleep.

The focus then moved to what was called "sleep-restriction titration", where the person was directed about changing time in bed according to sleep efficiency. This involved reducing time in bed by 15 minutes if a person had less than 85% sleep efficiency, increasing it by 15 minutes if they had more than 90% sleep efficiency, and not changing if sleep efficiency was 85-90%.
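The titration rule described above is simple enough to express as a short function. The sketch below is purely illustrative (the function names and units are ours, not the researchers'), assuming time in bed is tracked in minutes:

```python
# Illustrative sketch of the "sleep-restriction titration" rule described
# above -- not the study's actual protocol code. Assumes minutes throughout.

def sleep_efficiency(minutes_asleep: float, minutes_in_bed: float) -> float:
    """Percentage of the time spent in bed trying to sleep that was actually spent asleep."""
    return 100.0 * minutes_asleep / minutes_in_bed

def titrate_time_in_bed(current_minutes_in_bed: float, efficiency: float) -> float:
    """Adjust prescribed time in bed by 15 minutes, using the 85%/90% thresholds."""
    if efficiency < 85:
        return current_minutes_in_bed - 15   # inefficient sleeper: restrict time in bed
    if efficiency > 90:
        return current_minutes_in_bed + 15   # efficient sleeper: extend time in bed
    return current_minutes_in_bed            # 85-90%: leave unchanged

# Example: 6 hours asleep out of 8 hours in bed is 75% efficiency,
# so prescribed time in bed would be reduced from 480 to 465 minutes.
```

Note the direction of the adjustment: counterintuitively, poor sleep efficiency leads to less time in bed, the idea being to concentrate sleep and strengthen the association between bed and sleeping.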

Sleep diaries were again assessed at one week and four weeks after the session, and at four weeks participants also completed the Insomnia Severity Index (ISI).

This index measures the nature, severity and effects of insomnia on a scale, with each question response ranging from 0 (not a problem) to 4 (a severe problem).

The total possible test score is 28, with a higher score showing more severe insomnia. People whose scores reduced to 10 or less were considered to be in "remission" from insomnia.
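As a concrete illustration of this scoring rule: the published ISI has seven items, each scored 0 to 4, which gives the maximum of 28 mentioned above. A minimal sketch of the remission cut-off used in this study (function names are ours):

```python
# Illustrative sketch of the remission definition used in the study.
# The Insomnia Severity Index (ISI) has seven items, each rated
# 0 (not a problem) to 4 (a severe problem), for a maximum total of 28.

REMISSION_CUTOFF = 10  # total scores of 10 or less counted as "remission"

def isi_total(item_scores):
    """Sum the seven item ratings into a total severity score (0-28)."""
    assert len(item_scores) == 7 and all(0 <= s <= 4 for s in item_scores)
    return sum(item_scores)

def in_remission(item_scores):
    """True if the total ISI score falls at or below the study's cut-off."""
    return isi_total(item_scores) <= REMISSION_CUTOFF
```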

People in the "waiting list" control group received no treatment during the study. At the end of the four-week study, participants in both groups were offered a full course of CBT for insomnia.  

What were the basic results?

At the start of the study, there was no difference between the two groups on any characteristics or their ISI scores (average score 14.6 points).

At four-week follow-up, there was a significant difference in ISI scores between the groups. Average ISI score was 9.6 points in the CBT group and 12.7 points in the control group. Remission of insomnia according to the ISI score was achieved by 60% of the treatment group (12/20) compared with 15% of the control group (3/20).   

When examining the sleep diaries, outcomes were also better for the CBT group compared with the control group. The CBT group had significant improvements in how long it took for them to get to sleep (sleep latency), how often they woke after falling asleep, and sleep efficiency. 

After the study, 70% of people in the control group (14/20) requested a full course of CBT, compared with only 5% in the treatment group (1/20). Forty percent of the treatment group (8/20) requested a single booster CBT session, mainly so they could talk about ways to prevent relapsing. 

How did the researchers interpret the results?

The researchers concluded that, "This single session of cognitive behavioural therapy for insomnia is sufficiently efficacious for a significant proportion of those with insomnia."

They say there may be the possibility of introducing this brief form of CBT into the "stepped care model" for insomnia, where people start with lower-intensity treatments and move on to more intense treatments if these don't work.  

Conclusion

This RCT demonstrates that a single one-hour session of CBT led to remission at one-month follow-up for 60% of people with acute insomnia, compared with 15% with a waiting list control.

A course of six to eight weeks of CBT is already a recommended treatment for insomnia, and the results of this study suggest promise for a briefer intervention. This may be better if it makes it more likely people will accept treatment and stick with it. Shorter sessions would also be easier to provide, as they need fewer resources.

However, there are important points to bear in mind before taking this study as conclusive proof of the effectiveness of a single CBT session for insomnia:

  • The study was small, involving only 20 people in each of the treatment and waiting list control groups. The results need to be confirmed in a much larger trial. 
  • These were a specific group of people: young adults (average age 32) who had insomnia lasting for less than three months (mostly as the result of work or relationship stress) who were all willing to take part in the study and try CBT. They were also not taking any sleep medications. Results in this group may not apply to other types of people who have insomnia, so care should be taken when generalising to populations, such as those with chronic sleep problems and the elderly.
  • Follow-up was only for one month. We don't know if there would be a lasting effect or whether further booster sessions would be needed to maintain effectiveness.
  • Comparison was made to a waiting list control – people who received no insomnia treatment and knew this. These people may not have been happy with the fact they were not receiving treatment and this could affect their rating of their insomnia. Also, we can't say how single-session CBT performs compared with alternatives. Ideally, a trial would need to compare the short version of CBT with the full course or other alternatives to compare their effects.
  • The researchers say the number of requests for a full course of CBT at the end of the study was "used as a gross indicator of treatment acceptability". The majority of people in the control group wished to have a full CBT course, but only one of the treatment group wanted a full course. It is difficult to know what to interpret from this – for example, whether people in the CBT group didn't find it acceptable to be treated for longer and wanted no further CBT, or whether they felt they'd already gained enough benefit. However, the fact 8 out of 20 wanted a booster session may suggest they didn't feel the need for a full course and preferred to have another brief session.

Overall, the research suggests promise for brief CBT interventions for insomnia, which may well have a place in the treatment of the condition.

However, at this stage questions remain and larger studies are needed, particularly in comparison with other treatments, such as a full treatment course of CBT.

If you are affected by persistent insomnia, self-help techniques, such as avoiding tea, coffee or alcohol in the evening and taking at least 30 minutes of exercise a day, may help.

If the problem persists, see your GP. There may be an underlying medical condition contributing towards your sleep problems. Your GP can also refer you to a CBT therapist. 

Links To The Headlines

Insomnia could be cured with one simple therapy session, new study claims. The Independent, June 2 2015

The course that can CURE insomnia: One hour therapy session that 'banishes anger from the bedroom' helped 73% of people. Mail Online, June 1 2015

Insomnia can be cured with therapy, claims university - here are the tools YOU can use to get better SLEEP. Daily Mirror, June 2 2015

Links To Science

Ellis JG, Cushing T, Germain A. Treating Acute Insomnia: A Randomized Controlled Trial of a "Single-Shot" of Cognitive Behavioral Therapy for Insomnia. Sleep. Published online June 1 2015

Categories: Medical News

Poor sleep quality linked to Alzheimer's disease

Medical News - Wed, 06/03/2015 - 15:00

"Sleepless nights … could raise your odds of developing Alzheimer's," is the claim in the Daily Mail. A new US study did find a link between poor sleep quality and higher levels of clumps of abnormal proteins in the brain (known as beta-amyloid plaques), but no cause and effect relationship between sleep quality and Alzheimer’s disease was proven.

This small study involved 26 healthy older adults who were analysed with a brain scan to measure the amount of protein plaques in their brain. The researchers did find an association with increased amounts of plaques and reduced deep sleep during the night. This in turn was associated with reduced ability to remember word-pair associations from the night before.

It is important to note that these are just associations as this was a cross-sectional study. The study cannot prove that the plaques caused the poor sleep or poor performance on the memory test, or that poor sleep caused plaque development. Various unmeasured factors could account for the results, such as difficulty sleeping in a laboratory.

Additionally, despite the media headlines, this study cannot show whether improving the quality of sleep would reduce the risk of Alzheimer's disease or slow its progression. The participants did not have any symptoms of dementia and were only assessed at one time point.


Where did the story come from?

The study was carried out by researchers from the University of California, California Pacific Medical Center and the Lawrence Berkeley National Laboratory. It was funded by the US National Institutes of Health.

The study was published in the peer-reviewed medical journal Nature Neuroscience.

Some of the UK media’s reporting of the study was inaccurate. For example, the Daily Mirror reported that adults "deprived of regular sleep had the highest levels of beta-amyloid", when this was not assessed in the study. The participants’ sleep pattern was only monitored for one night; the researchers did not formally assess their usual sleep pattern or use it in their calculations. The Mirror's claim that "the study also revealed a 'vicious cycle' in which the protein not only corrodes memory, but also disrupts sleep further" was not a finding of the study – it was speculation by the authors.

The Daily Mail also overstated the findings of the study and did not report on its limitations.


What kind of research was this?

This was a cross-sectional study looking for a link between beta-amyloid plaques, poor sleep and memory deficits. This type of study cannot prove cause and effect but can further our knowledge of associations between these factors.

Beta-amyloid precursor protein is a large protein that is found on the surface of cells and is essential for growth and repair of nerve cells. However, it can be broken up into fragments, one of them called beta-amyloid. These beta-amyloid proteins attach to each other, forming long fibrils that accumulate to form plaques. This occurs in normal ageing, but to a much greater extent in Alzheimer’s disease. The plaques usually start to appear in the grey matter, called the cerebral cortex. The plaques are associated with memory loss but the exact mechanisms for this are not known.

The researchers wanted to explore their theory that the plaques may be causing memory loss through interrupting non-rapid eye movement (NREM) sleep. This part of the sleep cycle occurs during:

  • Stage one: when you start to go to sleep
  • Stage two: light sleep
  • Stage three: deep sleep, when the body repairs and regrows tissues and strengthens the immune system

During a night’s sleep, after around 90 minutes, NREM sleep changes into rapid eye movement (REM) sleep for about 10 minutes. REM sleep is when dreams occur. The cycle is then repeated, going back to NREM sleep, with progressively longer periods of REM sleep as the night progresses.


What did the research involve?

The researchers recruited 26 older adults with no cognitive impairment. The study involved participants having a brain scan to measure the amount of beta-amyloid plaques, and performing a word-pair task before and after a night of sleep in the laboratory to test their ability to lay down memory.

The study participants did not have any symptoms of dementia, mental health conditions or reported sleep problems. Each participant underwent a positron emission tomography (PET) brain scan to estimate the amount of beta-amyloid protein build-up in the grey matter of the brain.

The participants then performed a word-pair task before and after a night of sleep in the laboratory. The amount of REM and NREM sleep was measured using an electroencephalogram (EEG) – a test that measures the electrical activity of the brain. The word-pair task consisted of learning a series of word pairs. Short-delay memory was tested by asking the participants to remember some of the word pairs after 10 minutes. Long-delay memory was tested the following day, when they were asked to remember the rest of the word pairs. This recall was performed during a functional magnetic resonance imaging (fMRI) scan, so that the researchers could see which areas of the brain were active, such as the hippocampus, which is involved in memory.


What were the basic results?

Increased beta-amyloid protein in the medial prefrontal cortex of the brain was associated with reduced NREM sleep. In particular, it was associated with less slow wave activity below one hertz (Hz) – a frequency band believed to be important for memory consolidation. These results remained significant after adjusting for age and volume of grey matter. Beta-amyloid protein in other areas of the brain was not associated with NREM slow wave activity below one Hz.

Reduced NREM slow wave sleep and increased beta-amyloid protein in the medial prefrontal cortex of the brain were associated with poorer overnight memory, as well as with increased activity in the hippocampus area of the brain.

The amount of beta-amyloid protein was not directly associated with poorer ability to form new memories. The link was only formed when reduced NREM sleep was included in the statistical analyses.


How did the researchers interpret the results?

The researchers concluded that their data "implicate sleep disruption as a mechanistic pathway through which β-amyloid [beta-amyloid] pathology may contribute to hippocampus-dependent cognitive decline in the elderly". They say that "cortical Aβ [beta-amyloid] pathology is associated with impaired generation of NREM slow wave oscillations that, in turn, predict the failure in long-term hippocampus-dependent memory consolidation".

The researchers go on to speculate from previous animal studies that NREM sleep disruption increases the build-up of beta-amyloid plaques and that this then reduces the amount of NREM sleep, creating a vicious cycle. However, they are clear that this is a hypothesis and has not been proven by this study.

Conclusion

This small study of 26 healthy older adults has found a link between the build-up of protein plaques in the brain, poor quality sleep and difficulty in laying down memory overnight.

The main limitations of this study are the cross-sectional study design. This means that the study cannot prove that the increased beta-amyloid plaques caused the poor NREM sleep or that either caused the memory difficulties. Similarly it does not show that poor sleep quality increases plaque build-up and so could be associated with Alzheimer’s development. Other factors could have accounted for the results seen, such as poor sleep from trying to sleep in a laboratory setting.

Additionally, despite the media claims, the study was taken at one point in time and so cannot show that increased NREM sleep would reduce the risk of dementia such as Alzheimer’s disease or slow its progression.

Overall, this is an interesting piece of research, but further studies over a long period of time are needed to better understand the associations seen. That said, improving the quality of your sleep may have numerous health benefits, and tips can be found on our Better sleep hub.

Links To The Headlines

Are sleepless nights to blame for Alzheimer's disease? Lack of shut-eye may cause long-term memory loss. Daily Mail, June 1 2015

How to fight Alzheimer's: New study says improving sleep could slash rise of cruel disease. Daily Express, June 2 2015

Sleep Could Help Stave Off Alzheimer's And Memory Loss, According To New Study. Huffington Post, June 2 2015

Links To Science

Mander BA, Marks SM, Vogel JW, et al. β-amyloid disrupts human NREM slow waves and related hippocampus-dependent memory consolidation. Nature Neuroscience. Published online June 1 2015

Categories: Medical News

Office workers of England - stand up for your health!

Medical News - Tue, 06/02/2015 - 17:48

Workers have been warned to "stand up for at least two hours a day in [the] office," according to The Daily Telegraph. It says these are the first official health guidelines on the issue.

The guidance comes from a panel of experts, commissioned by Public Health England, which provides recommendations aimed at helping employers know what to aim for when trying to make workplaces less sedentary and more active. They say that this could potentially improve productivity and profitability, for example by reducing sickness.

This guidance has been prompted by a growing body of evidence that sedentary behaviour can increase the risk of a range of chronic diseases such as obesity, type 2 diabetes and high blood pressure. Some experts have gone as far as saying that "sitting is the new smoking".

Dr Ann Hoskins, Deputy Director for Health and Wellbeing, Healthy People, Public Health England said: "This research supports the Chief Medical Officer’s recommendations to minimise how much we sit still. Being active is good for your physical and mental health. Simple behaviour changes to break up long periods of sitting can make a huge difference."

Read more about why sitting too much is bad for your health


Who has asked workers to stand up?

The recommendations have come from an international group of experts. They were invited to provide these recommendations by Public Health England (PHE) and a UK community interest company (Active Working CIC). The guidance is published in the British Journal of Sports Medicine and has been made available on an open-access basis so it is free to read online or download as a PDF.


Why was the guidance needed?

The aim was to provide employers and staff working in office environments with initial guidance on how to combat the potential risks of long periods of seated office work. The authors of the article report that more evidence has recently been published about the links between sedentary behaviour, including at work, and cardiovascular disease, diabetes and some cancers. These conditions are leading causes of ill health and death. As a result, they say the guidance aims to support employers and staff who want to make their working environments less sedentary and more active.


How was the guidance developed?

The experts based their guidance on the available evidence. This included long-term epidemiological studies looking at the effects of sedentary behaviour, and studies of getting workers to stand or move more often.

They ranked the quality of the studies using the American College of Sports Medicine system, which grades evidence from the highest quality grading of A (overwhelming data from randomised controlled trials (RCTs)) down to D (a consensus judgement from the panel). They based their recommendations on the best available evidence.

They say that the key evidence used in developing their guidance was:

  • data from a longer-term retrospective national health and fitness survey, which found that standing (or having some movement) for more than two hours a day at work was associated with lower health risks; those standing for at least four hours had the lowest risks. This was independent of a person’s physical activity
  • data from a number of observational or short term interventional studies where there were changes in "cardiometabolic" and ergonomic risk factors (such as energy expenditure, blood glucose, insulin, muscle function and joint sensations), when the total accumulated time spent standing or having some movement was more than two hours a day

In preparing their recommendations they used other experts as a "sounding board". The guidance was also externally peer reviewed when it was submitted for publication.


What were the recommendations?

The recommendations for workers in mainly desk-based occupations were broadly that:

  • The initial aim should be to work towards getting at least two hours a day of standing and light walking during working hours, and eventually work up to a total of four hours per day.
  • Seated work should be regularly broken up with standing work and vice versa. Sit–stand adjustable desk stations were highly recommended.
  • Similar to avoiding remaining in a static seated position for a long time, remaining in a static standing posture should also be avoided.
  • Movement does need to be checked and corrected on a regular basis, especially if a person experiences musculoskeletal sensations. Occupational standing and walking have not been shown to cause low back and neck pain, and can provide relief.
  • People new to adopting more standing-based work may have some musculoskeletal sensations and fatigue as part of the process of adapting to this. If these cannot be relieved either by changing posture or walking for a few minutes, then the worker should rest, including sitting, in a posture that relieves the sensations. If discomfort persists, then medical advice should be sought.
  • Employers should promote the message to their staff that prolonged sitting, across work and leisure time, may significantly increase one’s risk of cardiometabolic diseases and premature death.

The evidence behind the recommendations was ranked between B (limited data from RCTs and high quality observational data) and D (expert opinion). The experts acknowledge that more evidence is required to add greater certainty to their recommendations. They call for longer term, prospective and large-scale RCTs to assess standing and light activity interventions in real office environments and their effect on long-term health outcomes. They note that future refinements to their recommendations will be needed as more evidence is published.   

Links To The Headlines

Stand up for at least two hours a day in office, workers warned. The Daily Telegraph, June 1 2015

Office workers 'should stand up for two hours per shift to combat serious health risks'. Daily Mirror, June 1 2015

Links To Science

Buckley JP, Hedge A, Yates T, et al. The sedentary office: a growing case for change towards better health and productivity. Expert statement commissioned by Public Health England and the Active Working Community Interest Company. The British Journal of Sports Medicine. Published online June 1 2015

Categories: Medical News

Cats blamed for children's poor reading skills

Medical News - Tue, 06/02/2015 - 16:00

"Cats could be making children stupid," reports the Daily Telegraph. It says that a parasite called Toxoplasma gondii, which is carried by cats, could be affecting performance at school.

Toxoplasma gondii is a common parasite that can be found in many mammals, including cats. It can be contracted by humans if they come into contact with the faeces of infected cats, or by consuming contaminated food or water.

An infection with Toxoplasma gondii is known as toxoplasmosis.

While toxoplasmosis does not usually cause symptoms in healthy adults, some researchers have argued that the parasites might have an effect on the brain. For example, a 2012 study that we discussed linked cat ownership with an increased risk of suicide.

This latest study involved just over 1,700 secondary school age children in the US. It found a link between having been exposed to toxoplasma and lower scores on two cognitive tests – one of reading ability and one of verbal memory.

However, there were no differences in performance on maths or visuospatial tests (the ability to process visual information about the position of objects). While the researchers took into account some factors that could affect results, such as family income, removing their impact completely is likely to be difficult.

The study did not assess how the children had been exposed to toxoplasma – whether through cats or contaminated food. Overall, this study should not cause undue alarm among families with cats. Regardless of the results of this study, good hygiene around family pets is always a good idea. Pregnant women are already advised to avoid cat faeces to reduce their chances of passing an infection to the unborn baby.


Where did the story come from?

The study was carried out by researchers from the University of Iowa and Florida International University. It received no specific funding.

The study was published in the peer-reviewed medical journal Parasitology.

Rather unusually, the headline in the Telegraph is more cautious than some of its article text. This study cannot prove cause and effect, and the headline appropriately talks about the parasite being "linked to" learning difficulties, but the article says that "cats could be making children stupid".

The article does include a note of caution from the authors that longitudinal studies are needed.

Mail Online’s reporting of the study is accurate and provides useful background information about toxoplasmosis. Both papers take the opportunity to show cute pictures of kittens and cats (the real purpose of the internet).


What kind of research was this?

This was a cross-sectional study looking at whether infection with the parasite Toxoplasma gondii is linked to poorer cognitive performance in school age children.

Toxoplasma gondii is a single-celled parasite that is said to affect about a third of the world’s population. As noted in the news, it can be carried by cats and transmitted to humans through contact with infected cat faeces. It can also be passed on by drinking contaminated water, eating contaminated undercooked meat or unwashed vegetables, or from mother to baby.

Toxoplasma can cause serious illness if passed on from a pregnant woman to her foetus, or in people whose immune systems are compromised. However, in most people with healthy immune systems infection does not cause noticeable symptoms, and the infection is considered "latent" or inactive.

There has been some suggestion, however, that infection might be causing more subtle behavioural or cognitive changes that are currently not attributed to the infection. No studies have ever looked at this possibility in children, so the researchers wanted to see whether children with toxoplasma infection might show different cognitive performance to those without the infection.

This kind of cross-sectional study can only tell us about whether certain characteristics (cognitive function in this case) are different in certain types of people (those with or without toxoplasma infection in this case). As it does not assess which factor comes first, we can’t say for certain that infection could potentially be causing any differences seen. That is, we’d need to know whether children’s cognitive performance was different before the infection or just afterwards.


What did the research involve?

Researchers used data from an ongoing cross-sectional study called the National Health and Nutrition Examination Survey (NHANES). Among other assessments, this study tested children in the US aged 12 to 16 years old for signs of toxoplasma infection, and also tested their cognitive abilities. They then compared cognitive test results between those with and without toxoplasma infection.

The NHANES selects a sample representative of the US population as a whole. Data analysed in the current study was collected between 1988 and 1994, as part of the third NHANES study. Children were tested for a specified level of antibodies to toxoplasma, which indicated that they had been infected at some point. Blood samples were also tested for presence of antibodies against other forms of infection (such as hepatitis B and C or herpes viruses), and for levels of various vitamins. The NHANES also collected other information, for example on family income and ethnicity.

The children completed standard reading and maths tests, and tests assessing reasoning and various aspects of memory and other cognitive functions. Children with learning disabilities were not included in the current study. The researchers analysed whether children with evidence of toxoplasma differed in their cognitive test scores from those without evidence of infection. They took into account factors that could affect results (potential confounders), and they also looked at whether results differed in boys and girls, or in those with different levels of vitamins in their blood.


What were the basic results?

The researchers analysed data for 1,755 children. They found that 7.7% of children showed evidence of having been exposed to toxoplasma infection. Children with toxoplasma infection were more likely to be from families whose main language was not English and to have signs of other infections. They also tended to be poorer.

Having been exposed to toxoplasma infection was associated with lower reading skill scores and verbal memory scores, after adjustment for potential confounders. There was no relationship between toxoplasma infection and maths or visuospatial reasoning in these adjusted analyses.

Toxoplasma infection seemed to be associated with a greater difference in verbal memory in children with lower vitamin E concentrations in their blood. None of the other vitamin concentrations or gender seemed to affect the link.


How did the researchers interpret the results?

The researchers concluded that: "Toxoplasma seropositivity may be associated with reading and memory impairments in school age children", and that "serum vitamin E seems to modify the relationship".

Conclusion

This cross-sectional study found an association between toxoplasma infection in secondary school age children in the US and certain measures of cognitive function (reading and verbal memory), but not others (maths or visuospatial reasoning).

The study included a relatively large national sample (just over 1,700 children), which was selected to be representative of the US population as a whole. Its main limitation is its cross-sectional design. As the authors note, this means they cannot establish that infection with toxoplasma was present before any differences in cognitive function arose, so they cannot draw conclusions about whether toxoplasma might be directly causing the differences seen. Cohort studies are needed to gain a better understanding of whether the link might be due to a direct effect of toxoplasma.

In addition, while the researchers took into account various factors that might affect results, this may not completely remove their effects. For example, the children with toxoplasma infection tended to come from poorer families. While the researchers took into account one measure of socioeconomic status (family income), removing its impact completely is likely to be difficult.

The researchers also carried out a lot of analyses, and not all were statistically significant. When many statistical tests are carried out, some will show a link just by chance. Also, while there was an association between toxoplasma and some cognitive test results, there was none for others.

This study should not cause undue alarm among families with cats, as it is not possible to say with certainty that toxoplasma is causing the differences seen. Regardless of the results of this study, good hygiene around family pets is always a good idea. Pregnant women are already advised to avoid cat faeces to reduce their chances of toxoplasma infection and transmission to the foetus.

Pet ownership may have benefits, such as boosting a child’s quality of life and teaching them about responsibility. It is important to reinforce good hygiene rules around pets, such as making sure children stay away from animal waste and always wash their hands after handling a pet, especially before eating.

Read more advice about pet hygiene in the home

Links To The Headlines

Parasite in cats linked to learning difficulties in children. The Daily Telegraph, June 1 2015

Is your CAT making your child stupid? Mind-controlling feline parasite is linked to poor memory and reading skills. Mail Online, June 1 2015

Links To Science

Mendy A, Vieira ER, Albatineh AN, Gasana J. Toxoplasma gondii seropositivity and cognitive functions in school-aged children. Parasitology. Published online May 20 2015

Categories: Medical News

Can 'sleep training' make people less racist and sexist?

Medical News - Mon, 06/01/2015 - 17:00

“Levels of unconscious racist and sexist bias have been reduced by manipulating the way the brain learns during sleep,” BBC News report.

This was a study looking into inherent unconscious biases related to gender and race/ethnicity, and whether they could be reversed. Forty white university students did an “implicit association test”.

The exact format of this test is hard to describe briefly, but it generally looks at how quickly and accurately people can group certain words and concepts, and whether some groupings took them longer to get right (e.g. grouping female and scientific words). The test showed an inherent tendency to find it easier to link female and artistic rather than scientific words.

This is possibly due to the influence of an inherent cultural bias that women “don’t do science” (which is nonsense). The students also seemed to find it easier to link Black faces with “bad” (negative) words (like virus) rather than “good” words (like sunshine).

They were then given computer-based training to counter this bias, accompanied by a sound cue. After the training, they were tested again and their responses showed less bias.

When they then took a 90-minute nap with the sound cues replayed to them through headphones, they still showed a reduction in the bias on re-testing after they woke up. The effect was still demonstrated one week later.

This small study, while interesting, does not allow any firm conclusions to be drawn. We don’t know whether the test results represent real bias or discrimination in everyday life, or whether the sleep training would have any effect on real-life interactions.


Where did the story come from?

The study was carried out by researchers from the psychology departments of the University of Texas and Princeton University in the US, and was funded by the National Science Foundation and National Institutes of Health.

The study was published in the peer-reviewed journal Science on an open-access basis, so it is free to read online or download as a PDF.

BBC News gives a good account of this study, including the researchers' caution when drawing any implications from the sleep training and applying to real life: "We didn't have people interact with or make decisions about other people, so that sort of experiment is needed to know the full effects of the methods we used."


What kind of research was this?

This was an experimental study looking at ways to reduce biases related to gender and race/ethnicity.

These biases were described as being largely unconscious, where people are unaware of them, rather than people being actively racist or sexist.

The study aimed to look at whether these biases could be altered through training and then consolidated during sleep. Sleep is the time when memories are consolidated and newly acquired information is preserved.

Such experimental research can be useful for exploring theories related to sociology and psychology but is only an early stage of research, and much more would need to be done to assess whether the approach tested works in real-life situations.


What did the research involve?

The study included 40 men and women from a university, all of white ethnicity. The researchers first tested participants' gender and race bias using an implicit association test (IAT). The test involves getting people to associate words with different faces shown to them on a screen, and it showed that participants had inherent/implicit gender and race biases. On one test, images of female faces were more often associated with artistic rather than scientific words, and the opposite was found for male faces. On another test, images of black faces were more often associated with bad rather than good words, with the opposite found for white faces.

The researchers then gave them computer-based training to reduce bias. Participants viewed several types of face-word pairs but were required to respond only to pairings that countered the typical bias. They were played sounds for each pairing that went against the typical bias, as a form of "reinforcement" for the "counter-bias" messages. The students were given the IAT test again after the training to see what effect it had.

They then looked at whether taking a 90-minute nap, during which the counter-bias sound cues were repeatedly played, had an effect on the tests when these were given again after the participants woke up.


What were the basic results?

The research found that the counter-bias training was effective in reducing participants' implicit gender and race bias in the IAT compared with that shown at the start of the study.

The sleep test showed that when participants napped while the counter-bias sound cues were played to them, they still showed reduced implicit gender and race bias on re-testing after waking. If sleep wasn't accompanied by the sound cues, however, the tests showed the same biases as at the start of the study.

Testing one week later showed that the reduction in bias in the IAT in those who had listened to the counter-bias sound cues during sleep was still there.


How did the researchers interpret the results?

The researchers conclude that "memory reactivation during sleep enhances counter-stereotype training and maintaining a bias reduction is sleep-dependent".

Conclusion

This small experimental study suggests that training involving sound cues to counter gender and race biases – and then consolidating this by replaying the sounds during sleep – could have an effect in changing these biases. The use of sound cues to alter behaviour has proved effective in the past, most famously in the animal experiments of Ivan Pavlov.

However, any such interpretations should be made very cautiously at this stage, as this experiment may not be representative of real-life situations.

This was a small study involving 40 university students, all of white ethnicity. Both their biases, and the effects of the training, may not be applicable to different populations. The results may also have been different had a much larger sample of a similar white university population group been tested.

This was a very specific image-word association test, with specific sound-cue sleep training to try to counter the bias. Though the training showed a reduction in bias on the test up to one week later, there are many unknowns. We don’t know how long the effect would last, or whether training would have to be continually reinforced, for example.

We also don't know whether the biases apparent on this testing actually related to any discriminatory attitudes or behaviour in real-life situations. Furthermore, we don't know whether the effect of training would translate to making a difference in real-life situations. In real-life situations, perceptions and behaviour are likely to be influenced by many factors.

Overall this research will be of interest to the fields of psychology and sociology into biases and possible ways to reverse them.

The researchers speculate that the technique could be used to change other “unhealthy habits” such as smoking, unhealthy eating, negative thinking and selfishness.

There may be ethical issues with such techniques. For example, the definition of an “unhealthy” habit or viewpoint may not always be universally agreed, and some people may prefer to have the free will to think the way they do or make their own choices without this being manipulated.

Such speculations will remain so until the researchers can demonstrate that their sleep training technique has a measurable “real world” effect that persists over time.

Links To The Headlines

Sleep training 'may reduce racism and sexism'. BBC News, May 29 2015

Could SLEEP make you less racist? Gender and racial bias can be 'erased' during a nap, claims study. Daily Mail, May 29 2015

Gender and racial bias can be 'unlearnt' during sleep, new study suggests. The Guardian, May 28 2015

Links To Science

Hu X, Antony JW, Creery JD, et al. Unlearning implicit social biases during sleep. Science. Published online May 29 2015

Categories: Medical News

Immunotherapy drug combo could combat melanoma

Medical News - Mon, 06/01/2015 - 14:50

"New era in the war on cancer," is the somewhat over-hyped headline on the front of the Daily Mail. The new era refers to the use of immunotherapy – using drugs to coax the immune system into attacking cancerous cells while leaving healthy cells unharmed.

The results of a number of studies on immunotherapy have recently been presented at a conference in Chicago. The study we are looking at investigated the use of a combination of two immunotherapy drugs, ipilimumab and nivolumab, in treating advanced melanoma, the most serious type of skin cancer.

The trial involved 945 people with advanced melanoma who were given either of these drugs alone or in combination. Overall, people taking the combination lived longer without the disease progressing (average 11.5 months) compared with either drug alone (average 6.9 months with nivolumab and 2.9 months with ipilimumab). People whose tumours displayed the protein PD-L1, which interacts with nivolumab's target, did just as well with nivolumab alone as with the combination.

The study is ongoing and it’s not yet known whether people taking the combination treatment live longer overall than those taking the individual drugs. Side effects such as severe diarrhoea were quite common, affecting more than half of the people taking the combination. Therefore it will also be important to compare people’s quality of life while taking the different drugs and combinations.

The results, while promising, are unfortunately not a cure, but may provide another option for people with this hard to treat cancer.


Where did the story come from?

The study was carried out by researchers from The Royal Marsden Hospital, London, South West Wales Cancer Institute, Singleton Hospital, Swansea, and other European and international institutions. Funding was provided by Bristol-Myers Squibb, which manufactures the drugs being tested.

The study was published in the peer-reviewed New England Journal of Medicine on an open-access basis so it is free to read online or download as a PDF.

Some of the wide media coverage of this study arguably moves into the realm of hype. Much of the reporting could give the impression that immunotherapy is a new discovery. In fact, it was first used in the late 1980s, and has been used in the treatment of various conditions.

Several papers cover immune therapies more broadly, describing the results of several studies in different cancers. A lot of these results have been reported at the American Society of Clinical Oncology’s annual conference in Chicago and an overview of the results can be found here.

Overall, although this particular study of immune therapy for advanced melanoma gives promising results, it is not demonstrating a cure as some of the headlines suggested. 

Some sources do carry quotes from independent experts warning about unrealistic expectations about immunotherapy. Prof Karol Sikora, the dean of the University of Buckingham's medical school, is quoted by the BBC as saying: "You would think cancer was being cured tomorrow. It's not the case, we've got a lot to learn."


What kind of research was this?

This was a randomised controlled trial (RCT) investigating a drug combination to treat advanced melanoma, which has a notoriously poor outlook.

Over recent years there have been advances in the development of treatments for advanced melanoma, particularly drugs that work via the immune system (immunotherapies). The drugs used in this study are synthetic antibodies based on ones that occur naturally in the body. They are designed to attach to specific target proteins, helping the immune system to destroy cancerous cells.

One antibody drug called ipilimumab is licensed for the treatment of melanoma too advanced for surgical removal, or that has spread to other parts of the body (metastatic). A newer antibody treatment, nivolumab, has also recently been approved for the treatment of advanced melanoma following studies demonstrating its effectiveness.

This RCT investigated whether the combination of ipilimumab and nivolumab worked better than either drug used alone.


What did the research involve?

The international study, carried out in multiple research centres, enrolled 945 adults with advanced melanoma that was unsuitable for surgical removal and/or had spread elsewhere in the body. They were randomised to one of three treatment groups:

  • ipilimumab alone
  • nivolumab alone
  • ipilimumab plus nivolumab in combination

All treatments were given directly by infusion into the bloodstream. The people having just one drug were also given an inactive infusion (placebo) to match the schedule of infusions they would have had if they were also taking the second drug. This was to make the study "double blind", meaning that neither the patients nor the assessors examining the outcomes knew which treatment the patients had been given.

The patients were assessed at 12 weeks, then every six weeks for 49 weeks, then every 12 weeks until their disease progressed or treatment stopped.

The main outcomes the researchers assessed were progression-free survival and overall survival. This study presents only the results for progression-free survival, which is the amount of time a person lives without their disease getting worse (progressing) and without dying.

At the time of this assessment the patients had been followed up for an average of one year. Follow-up for the outcome of overall survival is ongoing and so the trial remains blinded (patients and assessors still not knowing what treatment they had).

Analyses were by intention to treat, where participants were assessed according to their assigned groups regardless of whether they completed treatment.


What were the basic results?

At follow-up of around one year, 16% of the ipilimumab-only group were still receiving treatment, 37% of the nivolumab-only group, and 30% of the combination group. The main reasons for stopping treatment were disease progression in the two single drug groups, and side effects in the combination group.

Average (median) progression-free survival was significantly longer for people taking both ipilimumab and nivolumab than those taking either drug alone:

  • ipilimumab and nivolumab: 11.5 months
  • ipilimumab alone: 2.9 months
  • nivolumab alone: 6.9 months

Participants had a 58% reduced risk of death or disease progression during follow-up with the combination compared with ipilimumab alone (hazard ratio (HR) 0.42, 99.5% confidence interval (CI) 0.31 to 0.57), and a 26% reduced risk with the combination compared with nivolumab alone (HR 0.74, 95% CI 0.60 to 0.92). People taking nivolumab alone also had a 43% reduced risk of death or disease progression during follow-up compared with people taking ipilimumab alone (HR 0.57, 99.5% CI 0.43 to 0.76).
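The percentage reductions quoted above correspond to 1 minus the hazard ratio, expressed as a percentage. A minimal sketch of that arithmetic (the function name is ours; the hazard ratios are those reported in the trial):

```python
# Convert a hazard ratio (HR) to the percentage reduction in risk
# of death or disease progression, as quoted in the text (1 - HR).
def percent_risk_reduction(hazard_ratio):
    return round((1 - hazard_ratio) * 100)

# Hazard ratios reported in the trial
print(percent_risk_reduction(0.42))  # combination vs ipilimumab alone: 58
print(percent_risk_reduction(0.74))  # combination vs nivolumab alone: 26
print(percent_risk_reduction(0.57))  # nivolumab alone vs ipilimumab alone: 43
```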

The nivolumab antibody targets a particular protein called PD-1. The presence of a related protein that binds to PD-1, called PD-L1, has been reported to predict how well people respond to drugs targeting PD-1. In this study, people whose tumours displayed the PD-L1 protein showed equally good progression-free survival with nivolumab alone or the drug combination (both an average of 14 months), compared with 3.9 months for ipilimumab alone. For people whose tumours didn’t express PD-L1, progression-free survival was best with the combination (11.2 months); they didn’t get as much benefit from nivolumab alone (5.3 months) or ipilimumab alone (2.8 months).

Overall, 19% of the ipilimumab group showed a response to their treatment, 44% of the nivolumab group, and 58% of the combination group.

Severe side effects were seen in 55% of those taking the combination, 27% of the ipilimumab group and 16% of the nivolumab group. The most common of these side effects were diarrhoea and bowel inflammation.


How did the researchers interpret the results?

The researchers conclude: "Among previously untreated patients with metastatic melanoma, nivolumab alone or combined with ipilimumab resulted in significantly longer progression-free survival than ipilimumab alone. In patients with PD-L1–negative tumours, the combination was more effective than either agent alone."

Conclusion

This randomised controlled trial examined the effects of different immune treatments for advanced melanoma.

It demonstrated that, overall, people taking the combination of two antibody treatments – ipilimumab and nivolumab – lived longer without their disease progressing compared with either drug alone. People whose tumours expressed the PD-L1 protein did just as well with nivolumab alone as with the combination, and even people whose tumours didn’t show this protein did better with the combination than with ipilimumab alone.

Ipilimumab is currently a licensed immune treatment for advanced melanoma. Nivolumab has recently been recommended for approval for the treatment of advanced melanoma by the European Medicines Agency, which regulates medicines in the European Union. It is not yet available as a treatment though, as it still needs to be granted the final marketing authorisation for this use by the European Commission.

The trial has several strengths, including its relatively large sample size, blinding of participants and assessors to treatment allocation (which should reduce bias), and that the nivolumab was compared with the currently licensed antibody treatment ipilimumab.

Overall the results are promising, but headlines stating immune "milestones", "breakthroughs" or even cures for terminal cancer should be viewed cautiously. Though many of these headlines do cover other studies investigating immune treatments for cancer, this study does not demonstrate that this drug combination cures advanced melanoma. So far it has only been shown to prolong the period before the disease progressed. Also, not all people responded to the drug combination. The study is ongoing and it has yet to be seen whether the drug combination, or nivolumab alone, can increase how long people live overall.

Side effects of the drugs were also a considerable problem. After one year, only a relatively small proportion of people in all treatment groups were still taking the drugs. In the combination group people had often stopped taking the drugs because of side effects. It will be important to compare people’s quality of life while taking these different drugs and their combination.

Another factor that has been overlooked in some sections of the media is cost. Both ipilimumab and nivolumab are expensive, with a typical course of combined treatment estimated to cost more than $200,000 (around £131,000 at today’s exchange rates).

When considering new treatments, the effect gained from a particular treatment needs to be balanced against its cost and compared with existing treatments.  

The further study results on overall survival and quality of life will need to be considered alongside cost if and when the National Institute for Health and Care Excellence (NICE) assesses whether to recommend nivolumab, or the combination, as a cost-effective treatment option for advanced melanoma.

Links To The Headlines

New era in the war on cancer: Revolutionary treatment that will save thousands hailed as 'biggest breakthrough since chemotherapy'. Daily Mail, June 1 2015

Cancer drug combination 'shrinks 60% of melanomas'. BBC News, June 1 2015

Doctors hail 'spectacular' cancer treatment breakthrough. ITV News, June 1 2015

Immunotherapy: 'Major milestone' as drugs stimulate body to fight off deadly cancers. The Independent, May 31 2015

Immunotherapy: the big new hope for cancer treatment. The Guardian, June 1 2015

'Cure for terminal cancer' found in game-changing drugs. The Daily Telegraph, June 1 2015

Welcome to the ‘new era’ of cancer treatment. Metro, June 1 2015

Links To Science

Larkin J, Chiarion-Sileni V, Gonzalez R, et al. Combined Nivolumab and Ipilimumab or Monotherapy in Untreated Melanoma. The New England Journal of Medicine. Published online May 31 2015

Categories: Medical News

Did the English smoking ban stop 90,000 children getting ill?

Medical News - Fri, 05/29/2015 - 12:55

"90,000 children spared illness by smoking ban," reports the Daily Mail. This impressive-seeming statistic is based on research looking at how many under-14s ended up in hospital with respiratory infections in the years before and after the July 2007 smoking ban in England and Wales.

Researchers analysed data on more than 1.6 million children who were admitted to NHS hospitals for respiratory infections – excluding asthma cases – between 2001 and 2012. To put this into context, children are unlikely to be admitted to hospital with a mere cold or sniffle – these are likely to be children very ill with bronchitis, pneumonia, laryngitis or tonsillitis.

The researchers calculated that the rate of admission for any respiratory tract infection fell by 3.5% immediately after the introduction of the smoking ban. It then continued to fall by 0.5% each year.

The biggest immediate reduction in admissions was for lower respiratory tract infection (such as pneumonia), which reduced by 13.8%.

While this study can't prove the smoking ban definitely caused the drop in the numbers of children needing hospitalisation, the research appears robust and we have confidence that the findings are likely to be accurate. The researchers accounted for potential confounding factors, including the introduction of the pneumococcal vaccine in 2006.

And while this study shows the smoking ban is associated with around 11,000 fewer hospital admissions for respiratory tract infections in children a year, it cannot tell us the potential wider benefits of the smoking ban on children's health.

Now is the best time to quit smoking

Where did the story come from?

The study was carried out by researchers from Maastricht University and Sophia Children's Hospital, both in The Netherlands; Brigham and Women's Hospital in the US; and the University of Edinburgh and Imperial College London in the UK.

It was funded by the Thrasher Research Fund, the Netherlands Lung Foundation, the International Pediatric Research Foundation, and The Commonwealth Fund.

The study was published in the peer-reviewed European Respiratory Journal.

The media reported the study accurately, though some did not point out the limitations of this type of study, in that it cannot prove cause and effect.

The Guardian's figure of 11,000 children a year saved from hospital admissions reflects the estimates in the research. The Mail's figure of 90,000 appears to be an extrapolation of this figure over the eight years since the ban. The research only went up to 2012, so this figure can't be verified, but the Mail's headline claim is unlikely to be wildly inaccurate. 
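As a back-of-the-envelope check (our arithmetic, not a figure from the study), extrapolating roughly 11,000 fewer admissions a year over the eight years since the ban does land close to the Mail's headline number:

```python
# Rough check of the Mail's headline figure: the study's estimate of
# ~11,000 fewer admissions a year, projected over the 8 years between
# the ban (July 2007) and the news coverage (2015).
admissions_avoided_per_year = 11_000
years_since_ban = 2015 - 2007  # = 8

estimate = admissions_avoided_per_year * years_since_ban
print(estimate)  # 88000 - close to the reported "almost 90,000"
```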

What kind of research was this?

This was an observational study looking at the number of admissions for respiratory tract infections in children before and after the smoking ban was introduced in July 2007. This type of study is useful when it is not feasible to perform a randomised trial, though it cannot prove cause and effect.

The scientists knew secondhand smoke exposure was a major risk factor for respiratory tract infections. However, no-one had yet studied the direct impact of the smoking ban in England on children's health in this way. 

What did the research involve?

The researchers used the Hospital Episode Statistics database to collect data. They looked at the monthly number of hospital admissions for children aged 0 to 14 in England and Wales with respiratory tract infections between January 1 2001 and December 31 2012. 

The researchers calculated the admission rate as the number of admissions divided by the number of children at risk of admission, to account for any changes in the number of children in the population over time.

The rate of admission to hospital before and after the smoking ban was introduced on July 1 2007 was compared, taking some potential confounding factors into account.

The results were adjusted using standard statistical analysis to take into account:

  • age group (0 to 4, 5 to 9 and 10 to 14 years)
  • gender
  • socioeconomic deprivation
  • level of urbanisation
  • region
  • introduction of the pneumococcal vaccine in 2006

What were the basic results?

There were 1,660,652 admissions for acute respiratory tract infections among 0 to 14 year-olds during the study period. Most occurred in children up to age four (85%), and admission was more likely with increased deprivation.

Overall, respiratory tract infection admissions reduced by 3.5% immediately after the introduction of the smoking ban (admission rate -3.5%, 95% confidence interval [CI] -4.7 to -2.3%). The rate continued to reduce each year thereafter by 0.5% (admission rate -0.5%, 95% CI -0.9 to -0.1%).

The biggest immediate reduction in admissions was for lower respiratory tract infections (such as pneumonia) – these were reduced by 13.8% (admission rate -13.8%, 95% CI -15.6 to -12%).

The smoking ban was associated with an estimated 54,489 fewer hospital admissions over the next five years. 
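This five-year total works out at roughly 11,000 fewer admissions a year, the figure the researchers quote. A minimal sketch (variable names are ours):

```python
# The study estimated 54,489 fewer hospital admissions over the five
# years following the ban. Averaging over five years recovers the
# "around 11,000 per year" figure the researchers quote.
total_fewer_admissions = 54_489
per_year = total_fewer_admissions / 5

print(round(per_year))  # 10898, i.e. roughly 11,000 a year
```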

How did the researchers interpret the results?

The researchers concluded that, "The introduction of national smoke-free legislation in England was associated with ~ [around] 11,000 fewer hospital admissions per year for RTIs in children."

They say their research demonstrated "an immediate reduction in lower RTI admissions, and a more gradual, incremental reduction in upper RTI admissions". 

Conclusion

This observational study found an association between the introduction of the 2007 smoking ban in public places in England and Wales, and a reduction in children's hospital admissions for respiratory tract infections.

The study included data on a large number of admissions for respiratory tract infections in children, using nationwide official hospital statistics to gather this information. This gives us confidence that the findings are generalisable, as using national data limits selection bias.

The researchers took several potential confounding factors into account when analysing their results, including:

  • the introduction of the pneumococcal vaccine for children in 2006
  • seasonal variations
  • temperature 
  • levels of air pollution

They only included children up to the age of 14 in an attempt to limit the effect of adolescent first-hand smoking. A cut-off point had to be used, though the researchers recognised that some children under the age of 14 do smoke.

However, this study has limitations because of its design. As it is a "before and after" type of study, there may have been other factors that changed that may have influenced the results.

  • Hospital admissions may have reduced over time as a result of improvements in the treatments available for respiratory tract infections in children – for example, at home, through a pharmacist, GP or A&E services, or because of preventative measures.
  • Private hospital admissions were not included and their use may have changed over the study period. However, this is believed to be less than 1% of total admissions.
  • Admissions for children with asthma were excluded so they did not bias results, but children with other conditions that increase their risk of respiratory tract infections were included. The number of children with these conditions – such as cystic fibrosis – may have changed over the study period.
  • Epidemics such as H1N1 (swine) flu in 2009 will have increased the number of admissions. The authors say that despite this increase in infections in the "after" stage of the study, there was still an overall reduction in the number of admissions for respiratory tract infections in children after the smoking ban.

The study could not directly measure the impact the smoking ban had on people smoking in their homes or cars, which is one of the main sources of secondhand smoke for children.

However, the authors cited several credible studies that found the number of smoke-free homes increased significantly after smoking bans.

Overall, this study shows the smoking ban is associated with around 11,000 fewer hospital admissions for respiratory tract infections in children a year.

If you still haven't given up smoking, now is as good a time as any and you can get free help to quit from the NHS.

Links To The Headlines

Smoking ban hailed for cut in children's illness. The Guardian, May 29 2015

Smoking ban in England 'cuts child hospital admissions'. BBC News, May 29 2015

Thousands of children spared hospital treatment by England's smoking ban. Mirror, May 29 2015

Almost 90,000 children spared illness by smoking ban. Daily Mail, May 29 2015

Links To Science

Been JV, et al. Smoke-free legislation and childhood hospitalisations for respiratory tract infections. European Respiratory Journal. Published May 28 2015

Categories: Medical News

Media reckons science now proves 'carbs' are fine again

Medical News - Fri, 05/29/2015 - 12:32

"Eat more 'good' carbohydrates and less protein for a longer life," reports the Mirror.

It seems like only last week that the media was advising us to eat fewer carbohydrates. The reality is that neither today's "pro-carbs" nor the recent "anti-carbs" news stories have changed government food advice.

Today's news refers to a short study of different diets in a relatively small number of mice – not people. It's always a fairly safe bet that studies in mice have few implications for the British human public – and today's news is no exception.

And the study did not look at "good" or "bad" carbohydrates, nor the effect of the diets on lifespan. Instead, mice were randomly assigned one of three diets, with either unlimited access to food or restricted calories. All the diets had 20% fat content, combined with either a high, medium or low protein to carbohydrate ratio.

As expected, mice fed calorie restricted diets lost the most weight and had good metabolic status. Mice fed unlimited low protein, high carbohydrate diets ate the most food and had the highest energy intake, but did not put on as much weight as the other two groups of mice fed unlimited food. The researchers say this was because they burned off more energy. These mice also had improved insulin and cholesterol levels, similar to those observed in mice fed calorie restricted diets.

Find out the truth about carbs.


Where did the story come from?

The study was carried out by researchers from the University of Sydney, Australia and the National Institutes of Health in Baltimore, US. It was funded by the National Health and Medical Research Council of Australia, the Ageing and Alzheimer’s Institute of Concord Hospital and the US National Institutes of Health.

The study was published on an open-access basis in the peer-reviewed journal Cell Reports, so it is free to read online. 

Both the Mirror and the Daily Mail reported that "healthy" carbs such as fruit and veg may be the key to living longer. While this may be the case, neither the type of carbohydrates nor lifespan were studied in this piece of mouse-based research.


What kind of research was this?

This was an animal study using mice. The researchers wanted to directly compare the effects of different diets on indicators of health.

The authors point to previous research in which restricting calories by 30 to 50% increased "healthspan", delayed the onset of ageing and age-associated diseases and improved metabolic health in most animal species that have been tested. They wanted to see if other diets could have the same effects without the calorie restriction, because most people find it difficult to limit the amount of calories they consume.


What did the research involve?

The researchers randomly assigned 90 mice to have either calorie restricted diets or diets with unlimited access to food for eight weeks. At the end of the eight weeks, the researchers compared the weight and metabolic status of mice between the groups.

All of the diets had 20% fat content and the same number of calories per gram. The diets were split into:

  • low protein, high carbohydrate
  • medium protein, medium carbohydrate
  • high protein, low carbohydrate

Mice fed the calorie restricted diets were given 40% less food than the average amount the other group of mice ate.
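The diet design can be sketched with a little arithmetic: fat is fixed at 20% of the diet, so protein and carbohydrate share the remaining 80% in a chosen ratio. The specific ratios below are illustrative assumptions, not the paper's actual formulations.

```python
# Sketch of the trial's diet design: fat fixed at 20%, with protein and
# carbohydrate splitting the remaining 80% in a chosen ratio.
# The ratios used here are made up for illustration.

def diet_split(protein_to_carb_ratio, fat_pct=20):
    remaining = 100 - fat_pct
    protein = remaining * protein_to_carb_ratio / (1 + protein_to_carb_ratio)
    carb = remaining - protein
    return {"protein": round(protein, 1), "carb": round(carb, 1), "fat": fat_pct}

for label, ratio in [("low protein, high carb", 1 / 7),
                     ("medium protein, medium carb", 1 / 3),
                     ("high protein, low carb", 1.0)]:
    print(label, diet_split(ratio))
```

Whatever the ratio, the three percentages always sum to 100, so only the protein-to-carbohydrate balance varies between the diets.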


What were the basic results?

After eight weeks:

  • Mice fed calorie restricted diets lost the most weight.
  • Mice fed unlimited low protein, high carbohydrate diets ate the most food and had the highest energy intake, but did not put on as much weight as the other two groups of mice fed unlimited food. They had the highest level of energy expenditure of all the groups studied.
  • Mice fed unlimited low protein, high carbohydrate diets also had improved insulin and cholesterol levels, similar to those observed in calorie restricted mice, whichever diet those mice were restricted on.
  • Mice fed unlimited high protein, low carbohydrate diets had higher insulin levels and impaired glucose tolerance compared with mice in the other groups.


How did the researchers interpret the results?

The researchers concluded that after eight weeks, mice fed unlimited low protein, high carbohydrate diets showed similar metabolic improvements to those seen under calorie restriction, despite their increased energy intake. Importantly, these mice also didn't develop the increased body fat and fatty liver that is observed with longer-term low protein, high carbohydrate feeding. They say these results "suggest that it may be possible to titrate the balance of macronutrients to gain some of the metabolic benefits of [calorie restriction], without the challenge of a 40% reduction in caloric intake".


Conclusion

This study has found that over a short time period, mice fed a diet that’s low in protein and high in carbohydrates gained less weight than those fed diets with higher levels of protein. It also found that mice lost weight regardless of the amount of protein and carbohydrate if the number of calories was restricted.

The researchers say that the mice fed unlimited low protein, high carbohydrate diets did not gain as much weight because they burned off more calories. In this study, their "metabolic status" improved compared to mice with unlimited higher protein diets. However, previous research has shown that low protein, high carbohydrate diets consumed over longer time periods have been associated with weight gain, increased body fat and fatty liver.

While this study has produced interesting findings, its use is severely limited because it had no control group to compare the diets against.

It is also not clear how the results of this study would apply to humans. Calorie restriction does cause weight loss in humans, but in the long term it can reduce the metabolic rate. Humans are also more complex and less likely to naturally burn off any excess calories consumed from a low protein, high carbohydrate diet.

In conclusion, this study in mice does not change current human dietary advice: eat plenty of starchy carbohydrates, fresh fruit and vegetables, some milk and dairy foods, and lean meat and fish (or other protein), with foods high in sugar, fat or salt eaten less often and in small amounts.

The eatwell plate highlights the different types of food that make up our diet, and shows the proportions we should eat them in to have a well balanced and healthy diet.

Links To The Headlines

Could eating MORE carbs be the key to a longer life? Diet low in protein 'produced same benefits as cutting calories by 40%'. Daily Mail, May 28 2015

Eat more 'good' carbohydrates and less protein for a longer life, say researchers. Daily Mirror, May 28 2015

Links To Science

Solon-Biet SM, et al. Dietary Protein to Carbohydrate Ratio and Caloric Restriction: Comparing Metabolic Outcomes in Mice. Cell Reports. Published May 28 2015

Categories: Medical News

Is gut bacteria responsible for the 'terrible twos' in toddlers?

Medical News - Thu, 05/28/2015 - 16:00

"Terrible twos?" asks the Mail Online, going on to say that, "the bacteria in your child's gut may be to blame for their bad behaviour". The story is based on research that showed links between the types of bacteria in stool samples from two-year-old children, and their behaviour and temperament.

Researchers have become increasingly interested in how the population of bacteria in the gut (known as gut microbiota) affects health.

Studies have already linked gut bacteria to conditions including obesity, allergies and bowel disease. Now researchers are interested in finding out if gut bacteria is also linked to mental health – for example, depression and anxiety.

So they took stool samples from 75 children in Ohio in the US, and their mothers filled in questionnaires about their temperament and behaviour. They wanted to see whether aspects of a child's temperament were linked to the bacteria in the gut.

The researchers found that both boys and girls who had greater diversity of bacteria in their gut were likely to have higher scores for "surgency" – a term used to describe a combination of impulsive behaviour and high levels of activity.

While the study found a link, it is impossible to say whether the bacteria actually caused the behaviour, or whether other factors are responsible for the link seen. This is very early exploratory research, so we can't draw too many conclusions from it.

And we certainly wouldn't advise trying to alter your toddler's gut microbiota to improve their behaviour. Just stick to a few minutes on the naughty step.  

Where did the story come from?

The study was carried out by researchers from Ohio State University in the US and was funded by grants from the university, the National Institutes of Health, and the National Center for Advancing Translational Sciences. It is published in the peer-reviewed medical journal Brain, Behavior and Immunity.

The Mail Online ignored warnings in the study that it cannot show whether bacteria causes differences in temperament or behaviour, claiming that it showed how "abundance and diversity of certain bacteria can impact a child's mood", and that parents should "blame the bacteria" in the child's gut if their toddler is "acting up".

The study did not look at "acting up" or bad behaviour, but at temperament scales, which included how extrovert and physically active a child is. 

What kind of research was this?

This was a cross-sectional study. It aimed to see whether gut microbiota (the range and amount of bacteria living in the gut) were linked to a child's temperament.

Cross-sectional studies cannot determine which factor came first – in this case, whether the differences in bacteria were present before the children developed a particular temperament. This means they cannot say which factor might potentially be influencing the other.

In addition, observational studies such as this can't show whether one thing definitely causes another, just whether the two happen to be linked in some way. Much more evidence, from a range of different studies and study designs, is needed before scientists are happy to conclude that one thing is likely to cause the other. 

What did the research involve?

Researchers sent online questionnaires to 79 mothers who volunteered for the study to assess their child's temperament, diet and feeding behaviour. Children were all aged 18 to 27 months.

The mothers then collected stool samples from the children's nappies, which were sent to the researchers for analysis. The researchers used statistical modelling to work out whether the diversity of bacteria, or the abundance of any particular types, was linked to particular types of temperament.

Of the 79 children tested, only 75 were included in the final analysis. In two cases the stool samples could not be analysed; the reasons for exclusion of the other two were not clear, but may relate to questionnaires showing results outside the usual expected range.

The researchers used a number of techniques to look at the variety of bacteria, how common these bacteria were in each sample, how many different types of bacteria were present in each stool sample, and what proportion they were in relative to each other.
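The article doesn't say which diversity measures the researchers used; a common choice in microbiome work is the Shannon index, sketched here with made-up bacterial counts.

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index: higher values mean more types of bacteria,
    more evenly represented in the sample."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Made-up counts of four bacterial types in a stool sample
even_sample = [25, 25, 25, 25]   # four types, evenly represented
skewed_sample = [85, 5, 5, 5]    # dominated by a single type

print(shannon_diversity(even_sample))    # maximum for 4 types: ln(4) ≈ 1.39
print(shannon_diversity(skewed_sample))  # lower: less diverse
```

A sample dominated by one type scores lower than an evenly mixed one, which is the sense in which children with "greater diversity of bacteria" differed in this study.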

The researchers used several statistical models to assess the relationship between stool sample results and questionnaire results. They looked at three main aspects of temperament.

The first, called negative affectivity, measures traits including fear, fidgeting, discomfort, shyness, sensitivity to surroundings and how easily the child can be soothed.

The second, called surgency, measures impulsive behaviour, how active a child is, how much pleasure they get from exciting situations, how sociable they are and how excited they get when anticipating pleasure.

The third, called effortful control, looks at a child's ability to stop doing something when told to, transfer their attention from one activity to another, take pleasure in normal activities and focus on a task.

Girls and boys tend to differ in their results on this questionnaire, with boys showing more surgency and girls more effortful control. Because of this, the researchers analysed the results for boys and girls separately.  

What were the basic results?

As expected, there were differences between boys and girls in the questionnaire scores for temperament. However, there was not much overall difference between boys and girls in the population of bacteria in their guts.

The researchers found both boys and girls who had greater diversity of bacteria in their gut were likely to have higher scores for "surgency". This link was stronger for boys, especially when the researchers looked at the individual scores for sociability and pleasure from exciting situations. Among girls only, they found lower levels of bacterial diversity were linked to higher scores for effortful control.

Having more of particular types of bacteria seemed to link to traits including sociability, pleasure from exciting situations and activity, but for boys and not girls. Girls who had more of one particular type of bacteria were likely to have higher scores for fear.

The researchers looked at whether the diet the children ate or how long they'd been breastfed could explain the links between gut microbiota and temperament. Although they found some links with how much vegetables or meat children ate, they say this did not explain away the links found between bacteria and temperament. 

How did the researchers interpret the results?

The researchers said they were "unable to determine" from the study whether the links they found were down to the effect of temperament on the gut bacteria, the effect of the gut bacteria on temperament, or a combination of the two.

But they went on to say that if later studies show gut bacteria does influence behaviour, this might give doctors the chance to treat children early to prevent later health problems, including mental health problems. 

Conclusion

This study found an intriguing link between the bacteria that live inside children's guts, and their personalities and behaviour. It's important to remember we don't know why this relationship exists, or whether it is the result of one factor directly causing the other.

For example, toddlers who are more active could have an increased exposure to bacteria, rather than bacteria leading to increased activity.

The researchers suggest there could be various explanations. For example, stress hormones can change the acidity of the gut, which could affect the bacteria that grow there. Bacteria in the gut can affect us through physical illness and may also affect how we feel or behave.

The study was small, and it only sampled bacteria living in the gut that get passed out of the body in stools. There are many other bacteria that live in the wall of the gut, which might also be important. However, it is difficult and painful to take samples of these bacteria.

The study also relied on the mother's assessment of the child's temperament. While that is important, having an assessment from fathers and impartial observers might help make the results more representative of the child's temperament as a whole, as children often behave differently in different situations.

While the researchers did try to take into account some factors that might influence both behaviour and gut bacteria (such as some aspects of diet), it is possible that these or other factors are in fact behind the association seen.

The relationship between the gut and the brain is an area of research that is attracting a lot of attention. That said, the idea that gut bacteria could impact our behaviour or mental health is not one that has gained widespread acceptance, and a lot more research is needed before it would be. 

Links To The Headlines

Terrible twos? Why the bacteria in your child's gut may be to blame for their bad behaviour. Mail Online, May 27 2015

Links To Science

Christian LM, Galley JD, Hade EM, et al. Gut microbiome composition is associated with temperament during early childhood. Brain, Behavior and Immunity. Published online November 10 2014

Categories: Medical News

New discovery about how breast cancer spreads into bones

Medical News - Thu, 05/28/2015 - 11:48

"Certain breast cancers spread to the bones using an enzyme that drills 'seed holes' for planting new tumours, research has shown," The Guardian reports. The hope is drugs currently available – or possibly modified versions of them – could block the effects of this enzyme.

This largely animal and lab-based study has identified how a protein called lysyl oxidase (LOX), which some breast cancer tumours secrete, helps cancer spread to bones.

Analysis of data collected on human tumours found that in breast cancers not responsive to oestrogen, high levels of LOX production were associated with an increased risk of spread to the bones. This suggests the findings may apply to some human breast cancers as well.

Blocking the LOX protein in mice reduced the spread of cancer to the bones. Reducing the ability of the protein to create "holes" in the bone using a drug called a bisphosphonate also stopped cancer cells forming metastases in the bone.

Bisphosphonates are already used to treat osteoporosis (weakened bones) and reduce the risk of fracture in people with cancers that affect their bones. Researchers hope these drugs could also be used in people with breast cancer to reduce spread to the bone.

This will need to be tested before we can be certain that it works, but the fact these drugs are already used in humans should speed up the start of this testing process.  

Where did the story come from?

The study was carried out by researchers from the University of Copenhagen and other research centres in Denmark and the UK, including the University of Sheffield.

It was funded by Cancer Research UK, the Biotech Research and Innovation Centre, the University of Sheffield, the National Institute for Health Research Sheffield Clinical Research Facility, Breast Cancer Campaign, the Danish Cancer Society, The Lundbeck Foundation, the Velux Foundation, and the Novo Nordisk Foundation.

The study was published in the peer-reviewed journal Nature, and has been made available on a semi open-access basis – you can read it online, but you can't print or download it.

The UK media covered this story reasonably, although their headlines don't make it clear that such a drug would specifically be expected to stop spread to the bone and not necessarily other areas of the body.

The drug would also not be expected to have any effect on the breast tumour itself, so it would need to be combined with other treatments.  

What kind of research was this?

This was mainly a laboratory and animal study looking at how breast cancer affects the bones. Breast cancer can spread to the bone and cause the surrounding bone to be destroyed (lesions). This can cause serious complications, and the spread also makes the cancer harder to treat successfully.

The researchers wanted to investigate exactly how the breast cancer cells spread to bone and what happens within the bone when they do. They hope that by understanding this process better they may be able to find ways to stop it. This type of research is an appropriate way to study this type of question.   

What did the research involve?

Previous research suggests lower levels of oxygen within breast cancer tumours are associated with poorer outcomes for the patient. The researchers carried out a wide range of experiments to look at why this might be the case and unravel the biology behind this.

The researchers first looked at data on 344 women with information on the pattern of gene activity in their breast tumours, and also information on whether their tumours had later spread to the bone or elsewhere in the body.

They looked at whether a particular gene activity pattern that indicated low oxygen levels in the tumour was associated with tumour spread. An additional set of data from another 295 women was used to confirm the initial findings.

The researchers then looked at which proteins were secreted by breast cancer cells when they were exposed to low oxygen conditions in the lab. These proteins may play a role in helping the cancer spread by "preparing" other tissues for the cancer.

They then went on to study this protein in various experiments in mice. The mice were injected with mouse breast (mammary gland) cancer cells, which spread to the bones and other tissues.

The researchers looked at what effect increasing the levels of this protein had and what effect blocking it had on spread to the bone.

Bone is constantly being broken down and reformed by cells within it, so the researchers looked at what effect the protein had on the balance of these actions within the bone.

They also looked at the effect of a bisphosphonate drug on the formation of lesions. Bisphosphonates are drugs used to treat osteoporosis (thinning of the bones). They do this by reducing the number of bone-digesting cells, allowing the bone-building cells to take over the balance.  

What were the basic results?

The researchers found low oxygen conditions within the breast tumour were associated with cancer spread (metastases) in women with one form of breast cancer (oestrogen receptor-negative breast cancer).

It was most strongly associated with spread to the bone. This relationship was not seen in those with oestrogen receptor-positive breast cancer. 

They then looked at breast cancer cells from oestrogen receptor-negative tumours in the laboratory, including cells that had spread to bone. They found a protein called lysyl oxidase (LOX) was released in high levels in low oxygen conditions, particularly in the cells that spread to the bone.

When looking back at the data they had on breast cancer tumour gene activity and outcome, higher activity of the gene encoding LOX was found to be associated with bone metastasis in oestrogen receptor-negative breast cancer.

In mice, the researchers found cancer cells were more likely to spread to the bone and form lesions when high levels of LOX were present. Injecting the mice with cancer cells producing lower amounts of LOX, or blocking the activity of LOX with antibodies, reduced the ability of the cancer cells to form lesions in the bone.

The researchers found high levels of LOX upsets the normal balance of bone formation and "digestion". It encourages more bone-digesting cells to form, overwhelming the action of the bone-forming cells and causing small lesions of destroyed bone to start to form. These lesions are then colonised by circulating tumour cells, allowing the formation of bone metastases.

The researchers found giving the mice with tumours a bisphosphonate stopped bone lesions forming, but did not affect the growth of the original tumour. Bisphosphonates also reduced the ability of injected cancer cells to settle in the bone and develop bone metastases if they were given to mice at the time of injection. 

How did the researchers interpret the results?

The researchers concluded they have discovered new information about the way bone metastases form from breast tumours. They say this opens up the possibility of developing new treatments for breast cancer.

They suggest that: "Bisphosphonate treatment of patients with high-LOX-expressing tumours after surgery could prevent the establishment and growth of circulating tumour cells within the bone." 

Conclusion

This research has identified how breast tumours create conditions that allow them to spread into the bone. Most of this research was in mice, but initial experiments suggest these findings may apply in humans as well. Researchers are likely to carry out further study to confirm this.

As part of their research, researchers found a bisphosphonate – a drug that can reduce bone breakdown – was able to reduce bone metastases in mice.

These drugs are already used to treat osteoporosis and people who have advanced malignancies affecting their bones. This means that getting approval for human studies testing the effect of these drugs on the spread of breast cancer to the bone should be easier than if a completely new drug was being tested.

However, we will not know for certain whether it is effective in humans until these trials are carried out. If it does work, there will still be a lot to investigate – for example, the best dose or length of treatment to use, or when best to give it.

Researchers may also try to develop alternative ways to disrupt this pathway and stop or reduce tumour spread to the bones. New treatments would require a longer time to develop and reach the human testing stage.

Such treatments would be aimed at reducing spread to the bone, but would not be expected to have any effect on the main breast tumour itself or on spread to other parts of the body, such as the brain or lungs. This means it would need to be combined with other treatments, such as chemotherapy and surgery.

This study adds another piece of knowledge to the overall picture we have of breast cancer biology, and opens up another avenue for investigation in the search for new approaches to treatment. 

Links To The Headlines

Breast cancer could be 'stopped in its tracks' by new technique, say scientists. The Guardian, May 28 2015

Breast cancer 'alters bone to help it spread'. BBC News, May 28 2015

Thousands of breast cancer patients could be saved by £1 osteoporosis drug after scientists discover it stops the disease spreading to the bones. Daily Mail, May 28 2015

Scientists discover mechanism that could stop breast cancer spreading to bones. The Independent, May 27 2015

Breast cancer could be 'stopped in tracks' by cheap drugs. The Daily Telegraph, May 27 2015

Links To Science

Cox TR, Rumney RMH, Schoof EM, et al. The hypoxic cancer secretome induces pre-metastatic bone lesions through lysyl oxidase. Nature. Published online May 27 2015

Categories: Medical News

Modified herpes virus 'could combat skin cancer'

Medical News - Wed, 05/27/2015 - 16:00

"Patients with aggressive skin cancer have been treated successfully using a drug based on the herpes virus," The Guardian reports. A new study suggests a novel form of immunotherapy could be effective for treating some cases of advanced skin cancer.

This was a large trial examining the use of a new immune treatment called talimogene laherparepvec (T-VEC) for advanced melanoma (the most serious type of skin cancer) that could not be removed surgically.

T-VEC is a modified derivative of the herpes virus that causes cold sores. It is injected directly into the tumour and causes the production of a chemical called granulocyte-macrophage colony-stimulating factor (GM-CSF), which stimulates an immune response to fight the cancer.

T-VEC injections were compared with injections of GM-CSF only, which is sometimes used to treat people who have poor immunity caused by cancer treatment. 

The trial found, overall, significantly more people responded to treatment for more than six months with T-VEC (16.3%) than with GM-CSF injections (2.1%).

It also improved overall survival, but this only just reached statistical significance, meaning we can have less confidence in this effect. Average survival was 23.3 months with T-VEC, compared with 18.9 months with GM-CSF.

While these results are encouraging, media claims of a cure for advanced melanoma are misguided. Further research is needed to see how T-VEC compares with existing treatments. It is also not known whether the treatment would work for other types of cancer. 

Where did the story come from?

The study was carried out by a large collaboration of researchers from institutions in North America, including the University of Utah and the Cancer Institute of New Jersey.

It was funded by Amgen, the developers of the technology. The individual researchers report many affiliations with pharmaceutical companies, including Amgen.

The study was published in the peer-reviewed Journal of Clinical Oncology.

The quality of the reporting of this study is somewhat patchy. For example, The Guardian's statement that, "Patients with aggressive skin cancer have been treated successfully using a drug based on the herpes virus" needs to be set in the correct context.

The study showed only about one in five people given the treatment responded positively to it, so it will not work for everyone.

The claims made by the Daily Express, talking about a cure, are also not supported by the results of this study.  

What kind of research was this?

This was a randomised controlled trial (RCT) investigating treating melanoma with an injectable form of immune therapy.

The immune therapy under investigation is called T-VEC. It is a genetically engineered derivative of the herpes simplex virus type 1 (HSV-1), which causes cold sores.

The derivative is designed to selectively replicate within tumours and produce granulocyte-macrophage colony-stimulating factor (GM-CSF). GM-CSF is an important chemical produced during the natural immune response.

It recruits other white blood cells to fight infection or abnormal cells. Injecting a treatment that produces GM-CSF within a tumour should, in theory, boost the immune response to fight the tumour.

This study looked at whether injecting T-VEC directly into melanoma resulted in a better response compared with an injection of GM-CSF. GM-CSF injections are given under the skin, rather than directly into a tumour.

In normal medical practice, GM-CSF injections are used in the treatment of low white blood cell count (for example, in people receiving chemotherapy) to combat reduced immune system function.  

What did the research involve?

This was an international multicentre trial conducted in 64 different locations across North America, the UK and South Africa.

It included 436 adults (average age 63-64) with advanced melanoma that was not suitable for treatment by surgical removal, but could be directly injected with a treatment. People were randomised to receive either T-VEC injections into the tumour or GM-CSF injections under the skin.

T-VEC was given as a first dose, another three weeks later, then once every two weeks. GM-CSF was given once daily for 14 days in 28-day cycles.

Treatment was continued regardless of disease progression for 24 weeks, and after 24 weeks continued until there was disease progression, lack of response, remission or intolerability. At one year, people with stable or responsive disease could continue for a further six months.

The main outcome was disease response rate, defined as complete or partial response that started within the first 12 months and lasted continuously for at least six months. Response was measured through clinical assessment of the visible tumour and body scans.

Other outcomes included overall survival from the time of randomisation, best overall response, and duration of response.

Participants knew which treatment they were receiving, but assessors who examined the outcomes did not know. Analyses were by intention to treat (by the randomised treatment regardless of completion). 

What were the basic results?

Average duration of treatment was 23 weeks for T-VEC and 10 weeks for GM-CSF, and average follow-up time from randomisation to final analysis was just under two years.

Disease response rate was significantly better in people given T-VEC (16.3%) compared with those given GM-CSF (2.1%). This was an almost nine-fold increased odds of response (odds ratio [OR] 8.9, 95% confidence interval [CI] 2.7 to 29.2).

Among the people who responded, average time to response was 4.1 months in the T-VEC group and 3.7 months in the GM-CSF group. Average time to treatment failure was significantly longer in the T-VEC group (8.2 months) than in the GM-CSF group (2.9 months).

Average survival was 23.3 months with T-VEC, compared with 18.9 months with GM-CSF. Overall, this was a borderline significant reduction in risk of death, which included the possibility there was no difference (HR 0.79, 95% CI 0.62 to 1.00).
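The reported odds ratio can be roughly checked from the response rates above, and the "borderline" survival result simply reflects a 95% confidence interval whose upper end touches 1.00 (no difference). A quick sketch (the small gap from the reported 8.9 reflects rounding and model adjustment):

```python
# Rough check of the reported odds ratio from the response rates above:
# 16.3% of T-VEC patients responded vs 2.1% of GM-CSF patients.
def odds(p):
    return p / (1 - p)

or_unadjusted = odds(0.163) / odds(0.021)
print(round(or_unadjusted, 1))  # ≈ 9.1, close to the reported OR of 8.9

# A 95% confidence interval that includes 1.0 ("no difference") is what
# makes a result borderline: the hazard ratio's interval was 0.62 to 1.00.
hr_ci = (0.62, 1.00)
borderline = hr_ci[0] < 1.0 <= hr_ci[1]
print(borderline)  # True: the interval just touches "no effect"
```

Because the interval reaches exactly 1.00, the survival benefit cannot be distinguished with confidence from no benefit at all, which is why the article treats it cautiously.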

The most common side effect using T-VEC was fever, which affected around half of those treated. This compared with less than 10% of those treated with GM-CSF.

Fatigue affected half of the T-VEC group compared with just over a third of the GM-CSF group. Cellulitis was the only serious side effect to occur in a notably larger proportion of the T-VEC group.

How did the researchers interpret the results?

The researchers concluded that T-VEC is the first oncolytic immunotherapy (a virus-based immune treatment) to demonstrate benefit against melanoma in a phase III clinical trial.

They say it gave a significantly higher disease response rate and a borderline significant improvement in overall survival, making this a "novel potential therapy for patients with metastatic melanoma". 

Conclusion

This randomised controlled trial has demonstrated the effectiveness of a novel injectable immune treatment for advanced melanoma that cannot be surgically removed.

The trial has various strengths, including its large sample size, analysis by intention to treat, and blinding of assessors to treatment assignment, which should have reduced the risk of bias.

It demonstrated that, overall, significantly more people responded to treatment with T-VEC than GM-CSF injections. It also improved survival by an average of 4.4 months, but this only just reached statistical significance, meaning we can have less confidence in this effect.

There are several points to bear in mind, however:

  • T-VEC boosts GM-CSF production within the tumour to enhance the immune response, and was therefore compared with GM-CSF injections. However, GM-CSF is not used as a treatment for advanced melanoma. Ideally, the treatment would need to be compared with treatments for advanced melanoma that are currently available – for example, chemotherapy, radiotherapy, and particularly other immune therapies, such as the antibody treatment ipilimumab.
  • The treatment has not been shown to "cure" melanoma. Most of the people in this study died during the two years of follow-up, although those receiving T-VEC generally lived slightly longer.
  • The treatment is a genetically engineered derivative of herpes simplex type 1 virus. But this is not the same as having been infected with herpes simplex. For example, people shouldn't wrongly interpret the headlines to think getting cold sores offers protection against melanoma or other types of cancer.
  • It is not known whether this treatment only has potential for treating advanced melanoma, or whether it could also have uses against other types of cancer.

Overall, the results of this trial into a potential new immune treatment for advanced melanoma are promising, but more research will be needed.

As with most conditions, prevention is more effective than cure when it comes to melanoma. Avoid overexposure to the sun and artificial sources of ultraviolet light, such as sunbeds, to reduce your risk of skin cancer.

Read more about protecting your skin from the sun

Links To The Headlines

Virotherapy: skin cancer successfully treated with herpes-based drug. The Guardian, May 27 2015

Cold sore virus 'treats skin cancer'. BBC News, May 26 2015

Genetically engineered virus 'cures' patients of skin cancer. The Daily Telegraph, May 27 2015

Herpes Virus Offers Hope Over Skin Cancer. Sky News, May 27 2015

World first as scientists use cold sore virus to attack cancer cells. The Independent, May 27 2015

Could HERPES halt the spread of skin cancer? Genetically modified version of virus 'kick-starts the immune system and kills diseased cells'. Daily Mail, May 27 2015

Why HERPES virus could hold key to finding a skin cancer cure. Daily Express, May 27 2015

Links To Science

Andtbacka RHI, Kaufman HL, Collichio F, et al. Talimogene Laherparepvec Improves Durable Response Rate in Patients With Advanced Melanoma. Journal of Clinical Oncology. Published online May 26 2015

Categories: Medical News

Combined contraceptive pills 'increase risk of blood clots'

Medical News - Wed, 05/27/2015 - 11:48

"Women who take the latest generations of contraceptive pills are at a greater risk of potentially lethal blood clots," The Times reports. While the increase in risk is statistically significant, it is very small in terms of individual risk.

The combined oral contraceptive pill, commonly referred to as "the pill", is already well known to be linked to increased risk of blood clots in the veins, such as deep vein thrombosis (DVT), as we discussed back in 2014.

A new study, using two large GP databases, set out to refine the assessment of the risk. It identified women who had had a venous blood clot, matched them by age to unaffected women, and examined use of the pill in the previous year.

Overall, it found that use of any combined contraceptive pill almost tripled the risk of a blood clot, though the baseline risk is small. Risk was generally higher with the newer third generation pills than with older pills. Encouragingly, risk was lowest for pills containing levonorgestrel, which is by far the most commonly prescribed. This pill carried a risk of around six extra cases of blood clot for every 10,000 women prescribed.

Risk was more than double this for pills containing desogestrel, gestodene, drospirenone and cyproterone, though these aren't usually pills of first choice in practice, and are generally used when there are reasons to treat other symptoms, such as acne.

The combined oral contraceptive pill remains a safe and effective form of contraception for most women, but it is not suitable for all – such as women with a history of heart disease or high blood pressure. Read more about who can, and who shouldn't, use the combined oral contraceptive pill.


Where did the story come from?

The study was carried out by researchers from the Division of Primary Care at the University of Nottingham. It received no external funding. The study was published in the peer-reviewed British Medical Journal (BMJ) as an open-access article, meaning it can be read online or downloaded by anyone for free.

The reporting of the study by the UK media was accurate and refreshingly took steps to put the small increase in risk in context.


What kind of research was this?

This was a case-control study of women identified through two general practice databases in the UK. The researchers were aiming to look at the link between use of the combined oral contraceptive pill ("the pill") and risk of blood clots in the veins (e.g. deep vein thrombosis, or DVT), specifically taking into account the type of progestogen in the pill.

Use of the pill is already well known to be associated with increased risk of blood clots in the veins (venous thromboembolism). Different types of pill combine different types of the hormone progestogen with another hormone called oestrogen. It is recognised that the different progestogens have differing influences on the risk of blood clots, though previous studies have not been able to quantify the risks of the different pills, particularly the newer ones.

This case-control study investigated this by looking at women diagnosed with a blood clot, matching them to unaffected women and then looking at type of pill used.


What did the research involve?

The study used two large GP databases, QResearch and Clinical Practice Research Datalink (CPRD), both of which have previously been used to look into links between different drugs and blood clot risk. QResearch covers 618 general practices in the UK, and CPRD covers 722.

Researchers identified women aged 15-49 years registered between 2001 and 2013 who had a first instance of a venous blood clot. They matched these "cases" with up to five unaffected age-matched "controls" from the same database. They excluded women who were pregnant around the time, who had had a hysterectomy or sterilisation, or who had a history of using blood-thinning medicine (suggesting a history of, or susceptibility to, blood clots).

Use of the combined oral contraceptive pill was examined in the year prior to the blood clot record. They included all the most commonly used preparations in the UK, containing the different types of progestogen. They also included the combination of oestrogen with cyproterone acetate (brand name Dianette), which acts as a contraceptive pill, but its main indication is for the treatment of acne. They looked at when the pill had been used in relation to the time of the clot (e.g. current or past use) and how long it had been used for. 

They took into account potential confounding factors that may influence clot risk, including:

  • chronic medical conditions (e.g. cancer, heart or lung disease, arthritis or inflammatory conditions)
  • recent immobilisation such as through trauma, surgery or hospital admission
  • smoking and alcohol
  • obesity
  • polycystic ovary syndrome (associated with pill use and increased risk of clots)
  • social deprivation


What were the basic results?

After exclusions they had a sample of 5,500 cases and 22,396 controls in the QResearch database, and 5,062 cases and 19,638 matched controls in the CPRD database. The incidence of venous blood clots in the two databases was around six per 10,000 women per year. Just over half (58%) the blood clots in the two databases were DVTs.

In the two databases 28-30% of cases had used the pill in the past year, compared with 16-18% of controls. Overall, any pill use in the past year was associated with an almost tripled risk of venous blood clot compared to no use (adjusted odds ratio (OR) 2.97, 95% confidence interval (CI) 2.78 to 3.17).
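
As an illustration of how such an odds ratio is calculated, the exposure proportions above can be turned into a crude estimate. This sketch uses the midpoints of the quoted ranges; the crude figure comes out lower than the published OR of 2.97, partly because the published analysis was matched by age and adjusted for confounders using the individual-level data:

```python
# Crude odds ratio of venous blood clot for pill use, from the
# exposure proportions quoted above (midpoints of the reported ranges).
# The published OR (2.97) is higher because it comes from a matched,
# confounder-adjusted analysis of the individual-level data.

def odds(p):
    """Convert a proportion into odds."""
    return p / (1 - p)

exposed_cases = 0.29     # midpoint of "28-30% of cases used the pill"
exposed_controls = 0.17  # midpoint of "16-18% of controls"

crude_or = odds(exposed_cases) / odds(exposed_controls)
print(round(crude_or, 2))  # roughly 2.0
```
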

The most common pill was one containing the progestogen levonorgestrel, which accounted for roughly half of prescriptions in cases and controls.

By type of progestogen the researchers found the following to be associated with lower risk:

  • levonorgestrel (OR 2.38, 95% CI 2.18 to 2.59)
  • norethisterone (OR 2.56, 95% CI 2.15 to 3.06)
  • norgestimate (OR 2.53, 95% CI 2.17 to 2.96)

The following were associated with higher risks:

  • desogestrel (OR 4.28, 95% CI 3.66 to 5.01)
  • gestodene (OR 3.64, 95% CI 3.00 to 4.43)
  • drospirenone (OR 4.12, 95% CI 3.43 to 4.96)
  • cyproterone acetate (OR 4.27, 95% CI 3.57 to 5.11)

Pills are sometimes classed by the generation in which they were developed. The second list contains the newer "third generation" pills, while the first list mostly contains earlier generations. The exception is norgestimate, which appears in the first list but is also third generation.

The number of extra cases of venous blood clot per year was lowest for levonorgestrel and norgestimate (both six extra per 10,000 women) and highest for desogestrel and cyproterone (both 14 extra per 10,000 women).
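
These "extra cases" figures can be understood with some simple arithmetic: for a rare outcome, the odds ratio approximates the relative risk, so the excess is roughly the baseline rate in non-users multiplied by (OR − 1). In the sketch below, the non-user baseline rate is an illustrative assumption chosen only to show the arithmetic, not a figure reported in the study:

```python
# Approximate extra blood-clot cases per 10,000 women per year.
# For a rare outcome, the odds ratio approximates the relative risk, so:
#   extra cases ≈ baseline rate in non-users × (OR − 1)
# NOTE: the non-user baseline below is an illustrative assumption,
# not a figure reported in the study.

BASELINE_PER_10K = 4.3  # assumed clot rate in non-users, per 10,000 per year

def extra_cases(odds_ratio, baseline=BASELINE_PER_10K):
    """Excess cases per 10,000 women per year, rare-outcome approximation."""
    return baseline * (odds_ratio - 1)

print(round(extra_cases(2.38)))  # levonorgestrel: roughly 6 extra per 10,000
print(round(extra_cases(4.28)))  # desogestrel: roughly 14 extra per 10,000
```
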


How did the researchers interpret the results?

The researchers conclude: "In these population-based, case-control studies using two large primary care databases, risks of venous thromboembolism associated with combined oral contraceptives were, with the exception of norgestimate, higher for newer drug preparations than for second generation drugs."

Conclusion

It is already well known that the combined oral contraceptive pill ("the pill") is associated with increased risk of venous blood clots. It is also already known that risk can differ according to the type of progestogen in the pill. This study adds further evidence helping to quantify these risks.

The study has numerous strengths. It has used two large GP databases covering large samples of the UK population, and containing reliable information on medical diagnoses and prescriptions made. The analyses were also adjusted for various confounders known to be associated with risk of blood clots.

It demonstrates that pill use in the previous year was associated with an almost tripled risk of venous blood clot, with risk generally higher for the newer pills than the older ones – though there were some exceptions.

Encouragingly, preparations containing levonorgestrel – which is by far the most common pill prescribed – had the lowest associated risk; around six extra cases of blood clot for every 10,000 women prescribed.

The preparations associated with the highest risks in this study – desogestrel, gestodene, drospirenone and cyproterone – were already recognised to be linked to higher risk, though this study has helped to better quantify these risks. These are not usually pill preparations of first choice in practice and are normally used when there are specific indications (e.g. women who have acne, particularly those taking cyproterone), or who have had side effects with other preparations.

The organisation in charge of regulating medicines in the UK, the Medicines and Healthcare products Regulatory Agency (MHRA), quantified the risks of the combined pill last year and came up with very similar results.

That review said the benefits of the pill far outweigh the risks, but added: "Prescribers and women should be aware of the major risk factors for thromboembolism, and of the key signs and symptoms."

This study again highlights the need for careful prescribing of the combined oral contraceptive pill, taking into account the individual woman's risk factors, such as lifestyle and medical history. Women should also be aware of the signs and symptoms of venous blood clots, such as DVT. If a woman taking the pill experiences unexplained swelling or pain in the leg, or sudden breathlessness and/or chest pain, she should seek medical help immediately.

The combined pill may not be suitable for you if you have a history of certain chronic diseases, such as heart disease, diabetes or high blood pressure. Alternative methods of contraception, such as the contraceptive implant, may be a more suitable option.

Your GP should be able to advise you on the safest method for your individual circumstances.

Links To The Headlines

New generation of Pill raises blood clot risk. The Times, May 27 2015

Newer contraceptive pills raise risk of blood clot four fold. The Daily Telegraph, May 26 2015

Newer contraceptive pills linked to 'higher blood clot risk'. ITV News, May 26 2015

Women on newer types of contraceptive pill 'double their risk of getting blood clot'. Daily Mirror, May 27 2015

Links To Science

Vinogradova Y, Coupland C, Hippisley-Cox J. Use of combined oral contraceptives and risk of venous thromboembolism: nested case-control studies using the QResearch and CPRD databases. BMJ. Published online May 26 2015

Categories: Medical News

Obesity in teen boys may increase bowel cancer risk in later life

Medical News - Tue, 05/26/2015 - 16:00

"Teenage boys who become very obese may double their risk of getting bowel cancer by the time they are in their 50s," The Guardian reports. A Swedish study found a strong association between teenage obesity and bowel cancer risk in later adulthood.

The study involved over 230,000 Swedish males, who were conscripted into the military aged 16 to 20 years old. Those who were in the upper ranges of overweight and those who were obese at that time were about twice as likely to develop bowel cancer over the next 35 years as those who were a normal weight.

This study has a number of strengths, including its size, the fact that body mass index (BMI) was objectively measured by a nurse and that the national cancer registry in Sweden captures virtually all cancer diagnoses. However, it was not able to take into account the boys' diets or smoking habits – both of which affect bowel cancer risk.

Obesity in adulthood is already known to be a risk factor for bowel cancer, therefore the possibility that a person being obese from an early age also increases risk seems plausible. Maintaining a healthy weight at all ages will have a range of health benefits, such as reducing your risk of developing conditions including heart disease and type 2 diabetes, as well as a number of cancers.  

Where did the story come from?

The study was carried out by researchers from Harvard School of Public Health and other research centres in the US, Sweden and the UK.

The study and researchers were funded by the National Cancer Institute, Harvard School of Public Health, Örebro University and the UK Economic and Social Research Council (ESRC).

The study was published in the peer-reviewed medical journal Gut.

The UK media covered this study reasonably well, but did not discuss any of its limitations.  

What kind of research was this?

This was a cohort study looking at whether there was a link between body mass index (BMI) and inflammation in adolescence, and risk of colorectal (bowel) cancer later in life.

Being obese and having long-lasting (chronic) signs of inflammation in the body as an adult have been linked to increased bowel cancer risk. However, few studies have assessed the effect of obesity in adolescence specifically, and none are said to have looked at the impact of inflammation in adolescence.

This type of study is the best way to look at the link between a possible risk factor and an outcome, as people cannot be randomly assigned to have, for example, higher or lower body mass index (BMI) or inflammation.

However, as people are not randomly allocated, it does mean that a group of people with an exposure are likely to differ in other ways from those without that exposure.

It is difficult to disentangle the effects of each of these differences, but researchers can try to single out the effect of the factors they are interested in if they have enough information about the differences between the groups. 

What did the research involve?

The researchers used data on BMI and inflammation collected from a very large group of Swedish adolescents and young men taking part in compulsory military service.

They used a national cancer registry to identify any of these men who later developed bowel cancer. They then analysed whether those who had higher BMIs or inflammation as youths were at greater risk.

The researchers analysed data from 239,658 men aged between 16 and 20 years old. These men had medical examinations when they were enlisted in compulsory military service between 1969 and 1976.

The marker (or sign) of inflammation the researchers had information on was the erythrocyte (red blood cell) sedimentation rate, or ESR. This measurement increases when there is inflammation.

Sweden has a national registry recording cancer cases diagnosed in the country, and researchers used this to identify men in the study who developed cancer from their enlistment up to January 2010. This gave an average of 35 years of follow-up for the men.

The researchers analysed whether BMI or signs of inflammation in late adolescence were linked to later risk of bowel cancer. They took into account confounding factors measured at the time of conscription that might affect results, including:

  • age
  • household crowding
  • health status
  • blood pressure
  • muscle strength
  • physical working capacity
  • cognitive function

What were the basic results?

The researchers identified 885 cases of bowel cancer.

Compared with those with a healthy weight BMI (from 18.5 to less than 25), those who were:

  • underweight (BMI less than 18.5) or at the lower end of the overweight category (BMI 25 to less than 27.5) did not differ in their risk of bowel cancer
  • at the upper end of the overweight category (BMI 27.5 to less than 30) had about twice the risk of developing bowel cancer during follow-up (hazard ratio [HR] 2.08, 95% confidence interval [CI] 1.40 to 3.07)
  • obese (BMI 30 or more) were also more than twice as likely to develop bowel cancer during follow-up (HR 2.38, 95% CI 1.51 to 3.76)

Adolescents with "high" levels of inflammation were more likely to develop bowel cancer than those with "low" levels (HR 1.63, 95% CI 1.08 to 2.45).

However, when those who developed bowel cancer or inflammatory bowel disease (Crohn's disease or ulcerative colitis) in the first 10 years of follow-up were excluded, this link was no longer statistically significant.

This suggested that the link with inflammation might at least in part be due to some men with high levels of inflammation already being in the early stages of inflammatory bowel disease, which is itself linked to higher bowel cancer risk. 

How did the researchers interpret the results?

The researchers concluded that, "late-adolescent BMI and inflammation, as measured by ESR, may be independently associated with future CRC [colorectal cancer] risk". 

Conclusion

This large cohort study found obesity in adolescence is linked to later colorectal cancer risk in men.

The very large size of this study is its main strength, along with the fact that BMI was objectively measured by a nurse, and that the national cancer registry in Sweden is estimated to record virtually all cancer cases.

As with all studies, there are limitations. For example, the study:

  • only had information on BMI at one time point, and could not tell whether the men maintained their BMIs or not
  • did not have information on diet or smoking, and these are known to impact bowel cancer risk
  • only analysed one marker for inflammation – results may differ for other markers
  • findings may not apply to women

Obesity in adulthood is already known to be a risk factor for bowel cancer, so the possibility that being obese from an early age also increases risk seems plausible.

Research suggests that you can help lower your risk of bowel cancer through steps such as maintaining a healthy weight, eating a healthy diet and not smoking.

Also, adults can take part in the NHS Bowel Screening Programme offered at specific ages (age 55 for one form of screening, and ages 60 to 74 for another).

Links To The Headlines

Obese teenage boys could have higher risk of bowel cancer, study says. The Guardian, May 25 2015

Obesity in adolescence linked to bowel cancer risk, says study. BBC News, May 26 2015

'Bowel cancer risk' for obese teens. Mail Online, May 26 2015

Links To Science

Kantor ED, Udumyan R, Signorello LB, et al. Adolescent body mass index and erythrocyte sedimentation rate in relation to colorectal cancer risk. Gut. Published online May 18 2015

Categories: Medical News