Medical News

Weight discrimination study fuels debate

Medical News - Fri, 09/12/2014 - 12:41

Much of the media has reported that discriminatory “fat shaming” makes people who are overweight eat more, rather than less.

The Daily Mail describes how, “telling someone they are piling on the pounds just makes them delve further into the biscuit tin”. While this image may seem like a commonsense “comfort eating” reaction, the headlines are not borne out by the science.

In fact, the news relates to findings for just 150 people who perceived any kind of weight discrimination, including threats and harassment, and poorer service in shops – not just friendly advice about weight.

The research in question looked at body mass index (BMI) and waist size for almost 3,000 people aged over 50 and how it changed over a three- to five-year period. The researchers analysed the results alongside the people’s reports of perceived discrimination. But because of the way the study was conducted, we can’t be sure whether the weight gain resulted from discrimination or the other way around (or whether other unmeasured factors had an influence).

On average, the researchers found that the 150 people who reported weight discrimination had a small gain in BMI and waist circumference over the course of the study, while those who didn’t had a small loss.

Further larger-scale research into the types of discrimination that people perceived may bring more answers on the best way to help people maintain a healthy weight.
 

Where did the story come from?

The study was carried out by researchers from University College London, and was funded by the National Institute on Aging and the Office for National Statistics. Individual authors received support from ELSA funding and Cancer Research UK. The study was published in the peer-reviewed journal Obesity.

The media in general have perhaps overinterpreted the meaning from this study, given its limitations. The Daily Telegraph’s headline says, “fat shaming makes people eat more”, but the study hasn’t examined people’s dietary patterns, and can’t prove whether the weight gain or discrimination came first.

 

What kind of research was this?

This was an analysis of data collected as part of the prospective cohort study, the English Longitudinal Study of Ageing (ELSA). This analysis looked at the associations between perceived weight discrimination and changes in weight, waist circumference and weight status.

The researchers say that negative attitudes towards people who are obese have been described as “one of the last socially acceptable forms of prejudice”. They cite the common perception that discrimination against people who are overweight or obese may encourage them to lose weight, but suggest it may in fact have a detrimental effect.

A cohort study is a good way of examining how a particular exposure is associated with a particular later outcome. However, in the current study the way in which the data was collected meant that it was not possible to clearly determine whether the discrimination or the weight gain came first.

As with all studies of this kind, finding that one factor has a relationship with another does not prove cause and effect. There may be many other confounding factors involved, making it difficult to say how and whether perceived weight discrimination is directly related to a person’s weight. The researchers did adjust for some of these factors in their analyses, to try to remove their effect.

 

What did the research involve?

The English Longitudinal Study of Ageing is a long-term study started in 2001/02. It recruited adults aged 50 and over and has followed them every two years. Weight, height and waist circumference have been objectively measured by a nurse every four years.

Questions on perceptions of discrimination were asked only once, in 2010/11, and were completed by 8,107 people in the cohort (93%). No body measures were taken at this time, but they were taken one to two years before (2008/09) and after (2012/13) this. Complete data on body measurements and perceptions of discrimination were available for 2,944 people.

The questions on perceived discrimination were based on those previously established in other studies and asked how often in your day-to-day life: 

  • you are treated with less respect or courtesy
  • you receive poorer service than other people in restaurants and stores
  • people act as if they think you are not clever
  • you are threatened or harassed
  • you receive poorer service or treatment than other people from doctors or hospitals

Respondents could choose one of a range of answers for each question, from “never” to “almost every day”. The researchers report that because few people reported any discrimination, they grouped responses to indicate any perceived discrimination versus no perceived discrimination. People who reported discrimination in any situation were asked to indicate what they attributed this experience to, from a list of options including weight, age, gender and race.

The researchers then looked at changes in BMI and waist circumference between the 2008/09 and 2012/13 assessments, and how these related to perceived weight discrimination reported at the midpoint. Normal weight was classed as a BMI below 25, overweight as 25 to 29.9, “obese class I” as 30 to 34.9, “obese class II” as 35 to 39.9, and “obese class III” as a BMI of 40 or above.

In their analyses the researchers took into account age, sex and household (non-pension) income, as an indicator of socioeconomic status.

 

What were the basic results?

Of the 2,944 people for whom complete data was available, 150 (5.1%) reported any perceived weight discrimination, ranging from 0.7% of normal-weight individuals to 35.9% of people in obesity class III. There were various differences between the 150 people who perceived discrimination and those who didn’t: people who perceived discrimination were significantly younger (62 versus 66 years), had a higher BMI (35 versus 27) and larger waist circumference (112cm versus 94cm), and were less wealthy.

On average, people who perceived discrimination gained 0.95kg between the 2008/09 and 2012/13 assessments, while people who didn’t perceive discrimination lost 0.71kg (an average difference between the groups of 1.66kg).

There were significant differences in the overweight group (a gain of 2.22kg among those perceiving any discrimination versus a loss of 0.39kg in the no-discrimination group) and in the obese group overall (a loss of 0.26kg in the discrimination group versus a loss of 2.07kg in the no-discrimination group). There were no significant differences in any of the obesity subclasses.

People who perceived weight discrimination also gained an average of 0.72cm in waist circumference, while those who didn’t lost an average of 0.40cm (an average difference of 1.12cm). There were no other significant differences by group.

Among people who were obese at the first assessment, perceived discrimination was not significantly associated with the odds of remaining obese (odds ratio (OR) 1.09, 95% confidence interval (CI) 0.46 to 2.59), with most obese people staying obese at follow-up (85.6% versus 85.0%). However, among people who were not obese at baseline, perceived weight discrimination was associated with higher odds of becoming obese (OR 6.67, 95% CI 1.85 to 24.04).

 

How did the researchers interpret the results?

The researchers conclude that their results, “indicate that rather than encouraging people to lose weight, weight discrimination promotes weight gain and the onset of obesity. Implementing effective interventions to combat weight stigma and discrimination at the population level could reduce the burden of obesity”.

 

Conclusion

This analysis of data collected as part of the large English Longitudinal Study of Ageing finds that people who reported experiencing discrimination as a result of their weight had a small gain in BMI and waist circumference over the study years, while those who didn’t had a small loss.

There are a few important limitations to bear in mind. Most importantly, this study could not determine whether the weight changes or the discrimination came first. And, finding an association between two factors does not prove that one has directly caused the other. The relationship between the two may be influenced by various confounding factors. The authors tried to take into account some of these, but there are still others that could be influencing the relationship (such as the person’s own psychological health and wellbeing).

As relatively few people reported weight discrimination, results were not reported or analysed separately by the type or source of the discrimination. Therefore, it is not possible to say what form the discrimination took or whether it came from health professionals or the wider population.

People’s perception of discrimination, and the reasons they attribute it to, may be influenced by their own feelings about their weight and body image. These feelings could themselves make it harder for a person to lose weight. This does not mean that discrimination does not exist, or that it should not be addressed. Instead, both factors may need to be considered when developing successful approaches to reducing weight gain and obesity.

Another important limitation of this study is that despite the large initial sample size of this cohort, only 150 people (5.1%) perceived weight discrimination. Further subdividing this small number of people by BMI class makes the numbers smaller still. Analyses based on small numbers may not be precise: for example, the very wide confidence interval around the odds ratio for becoming obese (1.85 to 24.04) highlights the uncertainty of this estimate.

Also, the findings may not apply to younger people, as all participants were over the age of 50.

Discrimination based on weight or other characteristics is never acceptable and is likely to have a negative effect. The National Institute for Health and Care Excellence has already issued guidance to health professionals, noting the importance of non-discriminatory care of overweight and obese people.

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Fat shaming 'makes people eat more rather than less'. The Daily Telegraph, September 11 2014

Telling someone they're fat makes them eat MORE: People made to feel guilty about their size are six times as likely to become obese. Mail Online, September 11 2014

‘Fat shaming’ makes people put on more weight, study claims. Metro. September 10 2014

Links To Science

Jackson SE, Beeken RJ, Wardle J. Perceived weight discrimination and changes in weight, waist circumference, and weight status. Obesity. Published online September 11 2014


'Food addiction' doesn't exist, say scientists

Medical News - Thu, 09/11/2014 - 14:30

“Food is not addictive ... but eating is: Gorging is psychological compulsion, say experts,” the Mail Online reports.

The news follows an article in which scientists argue that – unlike drug addiction – there is little evidence that people become addicted to the substances in certain foods.

Researchers argue that instead of thinking of certain types of food as addictive, it would be more useful to talk of a behavioural addiction to the process of eating and the “reward” associated with it.

The article is a useful contribution to the current debate over what drives people to overeat. It’s a topic that urgently needs answers, given the soaring levels of obesity in the UK and other developed countries. There is still a good deal of uncertainty about why people eat more than they need. The way we regard overeating is linked to how eating disorders are treated, so fresh thinking may prove useful in helping people overcome compulsive eating habits.

 

Where did the story come from?

The study was carried out by researchers from various universities in Europe, including the Universities of Aberdeen and Edinburgh. It was funded by the European Union.

The study was published on an open-access basis in the peer-reviewed journal Neuroscience and Biobehavioral Reviews, so it is free to read online. However, the article released online is not the final version, but an uncorrected proof.

Press coverage was fair, although the article was treated somewhat as if it was the last word on the subject, rather than a contribution to the debate. The Daily Mail’s use of the term “gorging” in its headline was unnecessary, implying sheer greed is to blame for obesity. This was not a conclusion found in the published review.

What kind of research was this?

This was not a new piece of research, but a narrative review of the scientific evidence for the existence of an addiction to food. It says that the concept of food addiction has become popular among both researchers and the public, as a way to understand the psychological processes involved in weight gain.

The authors of the review argue that the term “food addiction” – echoed in terms such as “chocaholic” and “food cravings” – has potentially important implications for treatment and prevention. For this reason, they say, it is important to explore the concept more closely.

They also say that “food addiction” may be used as an excuse for overeating, shifting the blame onto the food industry for producing so-called “addictive foods” high in fat and sugar.

What does the review say?

The researchers first looked at the various definitions of the term addiction. Although they say a conclusive scientific definition has proved elusive, most definitions include notions of compulsion, loss of control and withdrawal syndromes. Addiction, they say, can be either related to an external substance (such as drugs) or to a behaviour (such as gambling).

In formal diagnostic classifications, the term “addiction” has largely been replaced by “substance use disorder” – or, in the case of behaviours such as gambling, a non-substance-related disorder.

One classic finding on addiction is the alteration of central nervous system signalling, involving the release of chemicals with “rewarding” properties. These chemicals, the authors say, can be released not just by exposure to external substances, such as drugs, but also by certain behaviours, including eating.

The authors also outline the neural pathways through which such reward signals work, with neurotransmitters such as dopamine playing a critical role.

However, the authors of the review say that labelling a food or nutrient as “addictive” implies it contains certain ingredients that could make an individual addicted to it. While certain foods – such as those high in fat and sugar – have “rewarding” properties and are highly palatable, there is insufficient evidence to label them as addictive. There is no evidence that single nutritional substances can elicit a “substance use disorder” in humans, according to current diagnostic criteria.

The authors conclude that “food addiction” is a misnomer, proposing instead the term “eating addiction” to underscore the behavioural addiction to eating. They argue that future research should try to define the diagnostic criteria for an eating addiction, so that it can be formally classified as a non-substance related addictive disorder.

“Eating addiction” stresses the behavioural component, whereas “food addiction” appears more like a passive process that simply befalls the individual, they conclude.

Conclusion

There are many theories as to why we overeat. These include the “thrifty gene” hypothesis, which suggests we are primed to eat whenever food is present because this was useful in times of scarcity, and the theory of the “obesogenic environment”, in which calorie-dense food is constantly available.

This is an interesting review that argues that in terms of treatment the focus should be on people’s eating behaviour – rather than on the addictive nature of certain foods. It does not deny the fact that for many of us high fat, high sugar foods are highly palatable.

If you think your eating is out of control, or you want help with weight problems, it’s a good idea to visit your GP. There are many schemes available that can help people lose weight by sticking to a healthy diet and regular exercise.

If you are feeling compelled to eat, or finding yourself snacking unhealthily, why not check out these suggestions for food swaps that could be healthier.


Links To The Headlines

Sugar 'not addictive' says Edinburgh University study. BBC News, September 9 2014

Food is not addictive ... but eating is: Gorging is psychological compulsion, say experts. Daily Mail, September 10 2014

Fatty foods are NOT addictive – but eating can be, Scottish scientists reveal. Daily Express, September 10 2014

Links To Science

Hebebrand J, Albayrak O, Adan R, et al. “Eating addiction”, rather than “food addiction”, better captures addictive-like eating behaviour. Neuroscience and Biobehavioral Reviews. Published online September 6 2014


Bacteria found in honey may help fight infection

Medical News - Thu, 09/11/2014 - 14:00

“Bacteria found in honeybee stomachs could be used as alternative to antibiotics,” reports The Independent.

The world desperately needs new antibiotics to counter the growing threat of bacteria developing resistance to drug treatment. A new study has found that 13 bacteria strains living in honeybees’ stomachs can reduce the growth of drug-resistant bacteria, such as MRSA, in the laboratory.

The researchers examined antibiotic-resistant bacteria and yeast that can infect human wounds such as MRSA and some types of E. coli. They found each to be susceptible to some of the 13 honeybee lactic acid bacteria (LAB). These LAB were more effective if used together.

However, while the researchers found that the LAB could have more of an effect than existing antibiotics, they did not test whether this difference was likely to be due to chance, so few solid conclusions can be drawn from this research.

The researchers also found that each LAB produced different levels of toxic substances that may have been responsible for killing the bacteria.

Unfortunately, the researchers had previously found that the LAB are only present in fresh honey for a few weeks before they die, and are not present in shop-bought honey.

However, the researchers did find low levels of LAB-produced proteins and free fatty acids in shop-bought honey. They went on to suggest that these substances might be key to the long-held belief that even shop-bought honey has antibacterial properties, but that this warrants further research.

 

Where did the story come from?

The study was carried out by researchers from Lund University and Sophiahemmet University in Sweden. It was funded by the Gyllenstierna Krapperup’s Foundation, Dr P Håkansson’s Foundation, Ekhaga Foundation and The Swedish Research Council Formas.

The study was published in the peer-reviewed International Wound Journal on an open-access basis, so it is free to read online.

The study was accurately reported by The Independent, which appears to have based some of its reporting on a press release from Lund University. This press release confusingly introduces details of separate research into the use of honey to successfully treat wounds in a small number of horses.

 

What kind of research was this?

This was a laboratory study looking at whether substances present in natural honey are effective against several types of bacteria that commonly infect wounds. Researchers want to develop new treatments because of the growing problem of bacteria developing antibiotic resistance. In this study, the researchers chose to focus on honey, as it has been used “for centuries … in folk medicine for upper respiratory tract infections and wounds”, but little is known about how it works.

Previous research has identified 40 strains of LAB that live in honeybees’ stomachs (stomach bacteria are commonly known as “gut flora”). Thirteen of these LAB strains have been found in all species of honeybee and in freshly harvested honey on all continents – but not in shop-bought honey.

Research has suggested that the 13 strains work together to protect the honeybee from harmful bacteria. This study set out to further investigate whether these LAB might be responsible for the antibacterial properties of honey. They did this by testing them in the laboratory setting on bacteria that can cause human wound infections.

 

What did the research involve?

The 13 LAB strains were cultivated and tested against 13 multi-drug resistant bacteria, and one type of yeast that had been grown in the laboratory from chronic human wounds.

The bacteria included MRSA and one type of E. coli. The researchers tested each LAB strain for its effect on each type of bacteria or yeast, and then all 13 LAB strains were tested together. They did this by placing a disc of material containing the LAB at a particular place in a gel-like substance called agar, and then placing bacteria or yeast onto the agar.

If a LAB strain had antibiotic properties, it would stop the bacteria or yeast from growing near it. The researchers could then identify the LAB strains with stronger antibiotic properties by seeing which stopped the bacteria or yeast growing at the greatest distance.

The researchers compared the results with the effect of the antibiotic commonly used for each type of bacteria or yeast, such as vancomycin and chloramphenicol. They then analysed the type of substances that each LAB produced, in an attempt to understand how they killed the bacteria or yeast.

The researchers then looked for these substances in samples of different types of shop-bought honey, including Manuka, heather, raspberry and rapeseed honey, and a sample of fresh rapeseed honey that had been collected from a bee colony.

 

What were the basic results?

Each of the 13 LABs reduced the growth of some of the antibiotic-resistant wound bacteria. The LABs were more effective when used together. The LABs tended to stop bacteria and yeast growing over a larger area than the antibiotics, suggesting that they were having more of an effect. However, the researchers did not do statistical tests to see if these differences were greater than might be expected purely by chance.

The 13 LABs produced different levels of lactic acid, formic acid and acetic acid. Five of them also produced hydrogen peroxide. All of the LABs also produced at least one other toxic chemical, including benzene, toluene and octane. They also produced some proteins and free fatty acids. Low concentrations of nine proteins and free fatty acids produced by LABs were found in shop-bought honeys.

 

How did the researchers interpret the results?

The researchers conclude that LAB living in honeybees “are responsible for many of the antibacterial and therapeutic properties of honey. This is one of the most important steps forward in the understanding of the clinical effects of honey in wound management”.

They go on to say that “this has implications not least in developing countries, where fresh honey is easily available, but also in western countries where antibiotic resistance is seriously increasing”.

 

Conclusion

This study suggests that 13 strains of LAB taken from honeybees’ stomachs are effective against a yeast and several bacteria that are often present in human wounds. Although the experiments suggested that the LABs could inhibit the bacteria more than some antibiotics, the researchers did not test whether this difference was statistically significant, so it may have occurred by chance. All of the tests were done in a laboratory environment, so it remains to be seen whether similar effects would occur when treating real human wounds.

There were some aspects of the study that were not clear, including the antibiotic dose used and whether it was optimal, or matched doses already used in the clinical settings where the samples were collected. The authors also report that an antibiotic was used as a control for each of the bacteria and the yeast, but this is not clearly presented in the study’s tables, making it difficult to assess whether this is correct.

The study has shown that each LAB produces a different amount or type of potentially toxic substances. It is not clear how these substances interact to combat the infections, but it appears that they work more effectively in combination.

Low concentrations of some of the substances that could be killing the bacteria and yeast were found in shop-bought honey, but this study does not prove that they would have antibacterial effects. In addition, as the researchers point out, shop-bought honey does not contain any LABs.

Antibiotic resistance is a big problem that reduces our ability to combat infections. This means there is a lot of interest in finding new ways to combat bacteria. Whether this piece of research will contribute to that is currently unclear, but finding these new treatments will be crucial.

 


Links To The Headlines

Bacteria found in honeybee stomachs could be used as alternative to antibiotics, scientists claim. The Independent, September 10 2014

Links To Science

Olofsson TC, Butler E, Markowicz P, et al. Lactic acid bacterial symbionts in honeybees – an unknown key to honey's antimicrobial and therapeutic activities. International Wound Journal. Published online September 8 2014


Hundreds report waking up during surgery

Medical News - Wed, 09/10/2014 - 14:30

“At least 150, and possibly several thousand, patients a year are conscious while they are undergoing operations,” The Guardian reports. A report suggests “accidental awareness” during surgery occurs in around one in 19,000 operations.

The report containing this information is the Fifth National Audit Project (NAP5) report on Accidental Awareness during General Anaesthesia (AAGA) – that is, when people are conscious at some point during general anaesthesia. This audit was conducted over a three-year period to determine how common AAGA is.

People who regain consciousness during surgery may be unable to communicate this to the surgeon due to the use of muscle relaxants, which are required for safety during surgery. This can cause feelings of panic and fear. Sensations that the patients have reported feeling during episodes of AAGA include tugging, stitching, pain and choking.

There have been reports that people who experience this rare occurrence may be extremely traumatised and go on to experience post-traumatic stress disorder (PTSD).

However, as the report points out, psychological support and therapy given quickly after an AAGA can reduce the risk of PTSD.

 

Who produced the report?

The Royal College of Anaesthetists (RCoA) and the Association of Anaesthetists of Great Britain and Ireland (AAGBI) produced the report. It was funded by anaesthetists through their subscriptions to both professional organisations.

In general, the UK media have reported on the study accurately and responsibly.

The Daily Mirror’s website points out that you are far more likely to die during surgery than wake up during it – a statement that, while accurate, is not exactly reassuring.

 

How was the research carried out?

The audit was the largest of its kind, with researchers obtaining the details of all patient reports of AAGA from approximately 3 million operations across all public hospitals in the UK and Ireland. After the data was made anonymous, a multidisciplinary team studied the details of each event. This team included patient representatives, anaesthetists, psychologists and other professionals.

The team studied 300 of the more than 400 reports they received. Of these, 141 were considered certain or probable cases of AAGA. A further 17 cases were due to drug error – the patient receiving the muscle relaxant but not the general anaesthetic, causing “awake paralysis”, a state similar to sleep paralysis, in which a person wakes during sleep but is temporarily unable to move or speak. Seven cases of AAGA occurred in the intensive care unit (ICU), and 32 occurred after sedation rather than general anaesthesia (sedation makes a person very drowsy and unresponsive to the outside world, but does not cause loss of consciousness).

 

What were the main findings?

The main findings were:

  • one in 19,000 people reported AAGA
  • half of the reported events occurred during the initiation of general anaesthetic, and half of these cases were during urgent or emergency operations
  • about one-fifth of cases occurred after the surgery had finished, and were experienced as being conscious but unable to move
  • most events lasted for less than five minutes
  • 51% of cases caused the patient distress
  • 41% of cases resulted in longer-term moderate to severe psychological harm from the experience
  • people who had early reassurance and support after an AAGA event often had better outcomes

The awareness was more likely to occur:

  • during caesarean section and cardiothoracic surgery
  • in obese patients
  • if there was difficulty managing the patient’s airway at the start of anaesthesia
  • if there was interruption in giving the anaesthetic when transferring the patient from the anaesthetic room to the theatre
  • if certain emergency drugs were used during some anaesthetic techniques

 

What recommendations have been made?

A total of 64 recommendations were made, covering factors at national, institutional and individual health professional levels. The main recommendations are briefly outlined below.

They recommend a new anaesthetic checklist, in addition to the World Health Organization (WHO) Safer Surgical Checklist that should already be completed for each patient. This simple anaesthesia checklist would be performed at the start of every operation, to prevent incidents caused by human error, monitoring problems, and interruptions to the administration of the anaesthetic drugs.

To reduce the experience of waking but being unable to move, they recommend that a type of monitor called a nerve stimulator should be used, so that anaesthetists can assess whether the neuromuscular drugs are still having an effect before they withdraw the anaesthetic.

They recommend that hospitals look at the packaging of each type of anaesthetic and related drugs that are used, and consider ordering some from different suppliers, to avoid multiple drugs of similar appearance. They also recommend that national anaesthetic organisations look for solutions to this problem with the suppliers.

They recommend that patients be informed of the possibility of briefly experiencing muscle paralysis when they are given the anaesthetic medications and when they wake up at the end, so that they are more prepared for its potential occurrence. In addition, patients who are undergoing sedation rather than general anaesthesia should be better informed of the level of awareness to expect.

The other main recommendation was for a new structured approach to managing any patients who experience awareness, to help reduce distress and longer-term psychological difficulties – called the Awareness Support Pathway.

 

How does this affect you?

As Professor Tim Cook, Consultant Anaesthetist in Bath and co-author of the report, has said: “It is reassuring that the reports of awareness … are a lot rarer than incidences in previous studies”, which have been as high as one in 600. He also states that “as well as adding to the understanding of the condition, we have also recommended changes in practice to minimise the incidence of awareness and, when it occurs, to ensure that it is recognised and managed in such a way as to mitigate longer-term effects on patients”.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Awareness during surgery can cause long-term harm, says report. The Guardian, September 10 2014

Some patients 'wake up' during surgery. BBC News, September 10 2014

Three patients each week report WAKE UP during an operation because they are not given enough anaesthetic. Mail Online, September 10 2014

Hundreds of people wake up during operations. The Daily Telegraph, September 10 2014

More than 150 people a year WAKE UP during surgery: How does it happen? Daily Mirror, September 10 2014

Categories: Medical News

Prescription sleeping pills linked to Alzheimer's risk

Medical News - Wed, 09/10/2014 - 13:30

“Prescription sleeping pills … can raise chance of developing Alzheimer's by 50%,” reports the Mail Online.

This headline is based on a study comparing the past use of benzodiazepines, such as diazepam and temazepam, in older people with or without Alzheimer’s disease. It found that the odds of developing Alzheimer’s were higher in people who had taken benzodiazepines for more than six months.

Benzodiazepines are a powerful class of sedative drugs. Their use is usually restricted to treating cases of severe and disabling anxiety and insomnia. They are not recommended for long-term use, because they can cause dependence.

It’s also important to note that this study only looked at people aged 66 and above, therefore it is not clear what the effects are in younger people. Also, it is possible that the symptoms these drugs are being used to treat in these older people, such as anxiety, may in fact be early symptoms of Alzheimer’s. The researchers tried to reduce the likelihood of this in their analyses, but it is still a possibility.

Overall, these findings reinforce existing recommendations that a course of benzodiazepines should usually last no longer than four weeks.

 

Where did the story come from?

The study was carried out by researchers from the University of Bordeaux, and other research centres in France and Canada. It was funded by the French National Institute of Health and Medical Research (INSERM), the University of Bordeaux, the French Institute of Public Health Research (IRESP), the French Ministry of Health and the Funding Agency for Health Research of Quebec.

The study was published in the peer-reviewed British Medical Journal on an open access basis, so it is free to read online.

The Mail Online makes the drugs sound like they are “commonly used” for anxiety and sleep disorders, when they are used only in severe, disabling cases. It is also not possible to say for sure that the drugs are themselves directly increasing risk, as suggested in the Mail Online headline.

 

What kind of research was this?

This was a case control study looking at whether longer-term use of benzodiazepines could be linked to increased risk of Alzheimer’s disease.

Benzodiazepines are a group of drugs used mainly to treat anxiety and insomnia, and it is generally recommended that they are used only in the short term – usually no more than four weeks.

The researchers report that other studies have suggested that benzodiazepines could be a risk factor for Alzheimer’s disease, but there is still some debate. In part, this is because anxiety and insomnia in older people may be early signs of Alzheimer’s disease, and these may be the cause of the benzodiazepine use. In addition, studies have not yet been able to show that risk increases with increasing dose or longer exposure to the drugs (called a “dose-response effect”) – something that would be expected if the drugs were truly affecting risk. This latest study aimed to assess whether there was a dose-response effect.

Because the suggestion is that taking benzodiazepines for a long time could cause harm, a randomised controlled trial (seen as the gold standard in evaluating evidence) would be unethical.

As Alzheimer’s takes a long time to develop, following up a population to assess first benzodiazepine use, and then whether anyone develops Alzheimer’s (a cohort study) would be a long and expensive undertaking. A case control study using existing data is a quicker way to determine whether there might be a link.

As with all studies of this type, the difficulty is that it is not possible to determine for certain whether the drugs are causing the increase in risk, or whether other factors could be contributing.

 

What did the research involve?

The researchers used data from the Quebec health insurance program database, which includes nearly all older people in Quebec. They randomly selected 1,796 older people with Alzheimer’s disease who had at least six years’ worth of data in the system prior to their diagnosis (cases). They randomly selected four controls for each case, matched for gender, age and a similar amount of follow-up data in the database. The researchers then compared the number of cases and controls who had started taking benzodiazepines at least five years earlier, and the doses used.

Participants had to be aged over 66 years old, and be living in the community (that is, not in a care home) between 2000 and 2009. Benzodiazepine use was assessed using the health insurance claims database. The researchers identified all prescription claims for benzodiazepines, and calculated an average dose for each benzodiazepine used in the study. They then used this to calculate how many average daily doses of the benzodiazepine were prescribed for each person. This allowed them to use a standard measure of exposure across the drugs.

Some benzodiazepines act over a long period, as they take longer to be broken down and eliminated from the body, while others act over a shorter period. The researchers also noted whether people took long- or short-acting benzodiazepines; those who took both were classified as having taken the longer-acting form.

People starting benzodiazepines within five years of their Alzheimer’s diagnosis (or equivalent date for the controls) were excluded, as these cases are more likely to potentially be cases where the symptoms being treated are early signs of Alzheimer’s.

In their analyses, the researchers took into account whether people had conditions which could potentially affect the results, including:

  • high blood pressure
  • heart attack
  • stroke
  • high cholesterol
  • diabetes
  • anxiety
  • depression
  • insomnia

 

What were the basic results?

Almost half of the cases (49.8%) and 40% of the controls had been prescribed benzodiazepines. The proportion of cases and controls taking less than six months' worth of benzodiazepines was similar (16.9% of cases and 18.2% of controls). However, taking more than six months' worth of benzodiazepines was more common in the cases (32.9% of cases and 21.8% of controls).

After taking into account the potential confounders, the researchers found that having used a benzodiazepine was associated with an increased risk of Alzheimer’s disease (odds ratio (OR) 1.43, 95% confidence interval (CI) 1.28 to 1.60).

There was evidence that risk increased the longer the drug was taken, indicated by the number of days’ worth of benzodiazepines a person was prescribed:

  • having less than about three months’ (up to 90 days) worth of benzodiazepines was not associated with an increase in risk
  • having three to six months’ worth of benzodiazepines was associated with a 32% increase in the odds of Alzheimer’s disease before adjusting for anxiety, depression and insomnia (OR 1.32, 95% CI 1.01 to 1.74) but this association was no longer statistically significant after adjusting for these factors (OR 1.28, 95% CI 0.97 to 1.69)
  • having more than six months’ worth of benzodiazepines was associated with a 74% increase in the odds of Alzheimer’s disease, even after adjusting for anxiety, depression or insomnia (OR 1.74, 95% CI 1.53 to 1.98)
  • the increase in risk was also greater for long-acting benzodiazepines (OR 1.59, 95% CI 1.36 to 1.85) than for short-acting benzodiazepines (OR 1.37, 95% CI 1.21 to 1.55).
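For readers unfamiliar with odds ratios, the headline figure above can be roughly reproduced by hand. The sketch below is illustrative only: it computes a crude (unadjusted) odds ratio from the proportions reported earlier (32.9% of cases versus 21.8% of controls with more than six months' use), whereas the published OR of 1.74 was adjusted for confounders such as anxiety, depression and insomnia.

```python
# Crude (unadjusted) odds ratio from the reported proportions.
# Illustrative only: the study's published OR of 1.74 is adjusted
# for confounders, which this back-of-envelope sum cannot do.

def odds(proportion):
    """Convert a proportion (between 0 and 1) to odds."""
    return proportion / (1 - proportion)

cases_over_6_months = 0.329     # cases with >6 months' benzodiazepine use
controls_over_6_months = 0.218  # controls with >6 months' use

crude_or = odds(cases_over_6_months) / odds(controls_over_6_months)
print(round(crude_or, 2))  # crude OR of about 1.76
```

The crude figure comes out close to the adjusted 1.74, but that closeness is not guaranteed: adjustment for confounders can move the estimate in either direction.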

 

How did the researchers interpret the results?

The researchers concluded that, “benzodiazepine use is associated with an increased risk of Alzheimer’s disease”. The fact that a stronger association was found with longer periods of taking the drugs supports the possibility that the drugs may be contributing to risk, even if the drugs may also be an early marker of the onset of Alzheimer’s disease.

 

Conclusion

This case control study has suggested that long-term use of benzodiazepines (over six months) may be linked with an increased risk of Alzheimer’s disease in older people. These findings are reported to be similar to those of previous studies, but add weight to them by showing that risk increases with increasing length of exposure to the drugs, and with those benzodiazepines that remain in the body for longer.

The strengths of this study include that it could establish when people started taking benzodiazepines and when they had their diagnosis using medical insurance records, rather than having to ask people to recall what drugs they have taken. The database used is also reported to cover 98% of the older people in Quebec, so results should be representative of the population, and controls should be well matched to the cases.

The study also tried to reduce the possibility that the benzodiazepines could be being used to treat symptoms of the early phase of dementia, by only assessing use of these drugs that started at least five years before Alzheimer’s was diagnosed. However, this may not remove the possibility entirely, as some cases of Alzheimer’s take years to progress, which the authors acknowledge.

All studies have limitations. As with all analyses of medical records and prescription data, there is the possibility that some data are missing or not recorded, that there may be a delay in recording diagnoses after the onset of the disease, or that people may not take all of the drugs they are prescribed. The authors considered all of these issues and carried out analyses where possible to assess their likelihood, but concluded that they seemed unlikely to be having a large effect.

There were some factors which could affect Alzheimer’s risk, which were not taken into account because the data was not available (for example, smoking and alcohol consumption habits, socioeconomic status, education or genetic risk).

It is already not recommended that benzodiazepines are used for long periods, as people can become dependent on them. This study adds another potential reason why prescribing these drugs for long periods may not be appropriate.

If you are experiencing problems with insomnia or anxiety (or both), doctors are likely to start with non-drug treatments as these tend to be more effective in the long term. 

Read more about alternatives to drug treatment for insomnia and anxiety.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Anxiety and sleeping pills 'linked to dementia'. BBC News, September 10 2014

Sleeping pills taken by millions linked to Alzheimer's. The Daily Telegraph, September 10 2014

Prescription sleeping pills taken by more than one million Britons 'can raise chance of developing Alzheimer's by 50%'. Daily Mail, September 10 2014

Sleeping pills can increase risk of Alzheimer's by half. Daily Mirror, September 10 2014

Sleeping pills linked to risk of Alzheimer’s disease. Daily Express, September 10 2014

Links To Science

De Gage SB, Moride Y, Ducruet T, et al. Benzodiazepine use and risk of Alzheimer’s disease: case-control study. BMJ. Published online September 9 2014

Categories: Medical News

Sibling bullying linked to young adult depression

Medical News - Tue, 09/09/2014 - 14:00

“Being bullied regularly by a sibling could put children at risk of depression when they are older,” BBC News reports.

A new UK study followed children from birth to early adulthood. Analysis of more than 3,000 children found those who reported frequent sibling bullying at age 12 were about twice as likely to report high levels of depressive symptoms at age 18.

The children who reported sibling bullying were also more likely to be experiencing a range of challenging situations, such as being bullied by peers, maltreated by an adult, and exposed to domestic violence. While the researchers did take these factors into account, they and other factors could still be having an impact. This means it is not possible to say for certain that frequent sibling bullying is directly causing later mental health problems. However, the results do suggest that it could be a contributor.

As the authors suggest, interventions to target sibling bullying, potentially as part of a programme targeting the whole family, should be assessed to see if they can reduce the likelihood of later psychological problems.

 

Where did the story come from?

The study was carried out by researchers from the University of Oxford and other universities in the UK. The ongoing cohort study was funded by the UK Medical Research Council, the Wellcome Trust and the University of Bristol, and the researchers also received support from the Jacobs Foundation and the Economic and Social Research Council.

The study was published in the peer-reviewed medical journal Pediatrics. The article has been published on an open-access basis so it is available for free online.

This study was well reported by BBC News, which reported the percentage of children in each group (those who had been bullied and those who had not) who developed high levels of depression or anxiety. This helps people to get an idea of how common these things actually were, rather than just saying by how many times the risk is increased.

 

What kind of research was this?

This was a prospective cohort study that assessed whether children who experienced bullying by their siblings were more likely to develop mental health problems in their early adulthood. The researchers say that other studies have found bullying by peers to be associated with increased risk of mental health problems, but the effect of sibling bullying has not been assessed.

A cohort study is the best way to look at this type of question, as it would clearly not be ethical for children to be exposed to bullying in a randomised way. A cohort study allows researchers to measure the exposure (sibling bullying) before the outcome (mental health problems) has occurred. If the exposure and outcome are measured at the same time (as in a cross sectional study) then researchers can’t tell if the exposure could be contributing to the outcome or vice versa.

 

What did the research involve?

The researchers were analysing data from children taking part in the ongoing Avon Longitudinal Study of Parents and Children. The children reported on sibling bullying at age 12, and were then assessed for mental health problems when they were 18 years old. The researchers then analysed whether those who experienced sibling bullying were more at risk of mental health problems.

The cohort study recruited 14,541 women living in Avon who were due to give birth between 1991 and 1992. The researchers collected information from the women, and followed them and their children over time, assessing them at intervals.

When the children were aged 12 years they were sent a questionnaire including questions on sibling bullying, which was described as “when a brother or sister tries to upset you by saying nasty and hurtful things, or completely ignores you from their group of friends, hits, kicks, pushes or shoves you around, tells lies or makes up false rumours about you”. The children were asked whether they had been bullied by their sibling at home in the last six months, how often, what type of bullying and at what age it started.

When the children reached 18 they completed a standardised computerised questionnaire asking about symptoms of depression and anxiety. They were then categorised as having depression or not and any form of anxiety or not, based on the criteria in the International Classification of Diseases (ICD 10). The teenagers were also asked whether they had self-harmed in the past year, and how often.

The researchers also used data on other factors that could affect risk of mental health problems, collected when the children were eight years of age or younger (potential confounders), including any emotional or behaviour problems at age seven, the children’s self-reported depressive symptoms at age 10, and a range of family characteristics. They took these factors into account in their analyses.

 

What were the basic results?

A total of 3,452 children completed both the questionnaires about sibling bullying and mental health problems. Just over half of the children (52.4%) reported never being bullied by a sibling, just over a tenth (11.4%) reported being bullied several times a week, and the remainder (36.1%) reported being bullied but less frequently. The bullying was mainly name calling (23.1%), being made fun of (15.4%), or physical bullying such as shoving (12.7%).

Children reporting bullying by a sibling were more likely to:

  • be girls
  • report frequent bullying by peers
  • have an older brother
  • have three or more siblings
  • have parents from a lower social class
  • have a mother who experienced depression during pregnancy
  • be exposed to domestic violence or mistreatment by an adult
  • have more emotional and behavioural problems at age seven

At 18 years of age, those who reported frequent bullying (several times a week) by a sibling at age 12 were more likely to experience mental health problems than those reporting no bullying:

  • 12.3% of the bullied children had clinically significant depression symptoms compared with 6.4% of those who were not bullied
  • 16.0% experienced anxiety compared with 9.3% 
  • 14.1% had self-harmed in the past year compared with 7.6%

After taking into account potential confounders, frequent sibling bullying was associated with increased risk of clinically significant depression symptoms (odds ratio (OR) 1.85, 95% confidence interval (CI) 1.11 to 3.09) and increased risk of self-harm (OR 2.26, 95% CI 1.40 to 3.66). The link with anxiety did not reach statistical significance after adjusting for potential confounders.
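A side note on interpreting these numbers: an odds ratio is not quite the same as a relative risk, although the two are close when the outcome is uncommon. The sketch below is illustrative only, using the unadjusted depression proportions reported above (12.3% of frequently bullied children versus 6.4% of those not bullied); the published OR of 1.85 is lower than either crude figure because it is adjusted for confounders.

```python
# Crude relative risk vs crude odds ratio for depression at age 18,
# from the unadjusted proportions reported above. Illustrative only:
# the study's published OR of 1.85 is adjusted for confounders.

bullied = 0.123      # depression in those frequently bullied by a sibling
not_bullied = 0.064  # depression in those reporting no sibling bullying

relative_risk = bullied / not_bullied
odds_ratio = (bullied / (1 - bullied)) / (not_bullied / (1 - not_bullied))

print(round(relative_risk, 2))  # about 1.92
print(round(odds_ratio, 2))     # about 2.05
```

Because depression affected only around one in ten participants, the crude odds ratio (about 2.05) is only modestly larger than the crude relative risk (about 1.92); the gap widens as an outcome becomes more common.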

 

How did the researchers interpret the results?

The researchers concluded that “being bullied by a sibling is a potential risk factor for depression and self-harm in early adulthood”. They suggest that interventions to address this should be designed and tested.

 

Conclusion

The current study suggests that frequent sibling bullying at age 12 is associated with depressive symptoms and self-harm at age 18. The study’s strengths include the fact that it collected data prospectively using standard questionnaires, and followed children up over a long period. It was also a large study, although a lot of children did not complete all of the questionnaires.

The study does have limitations, which include:

  • As with all studies of this type, the main limitation is that although the study did take into account some other factors that could affect the risk of mental health problems, they and other factors could still be having an effect.
  • The study included only one assessment of bullying, at age 12. Patterns of bullying may have changed over time, and a single assessment might miss some children exposed to bullying.
  • Bullying was only assessed by the children themselves. Also collecting parental reports, or those of other siblings, might offer some confirmation of reports of bullying. However, bullying may not always take place when others are present.
  • The depression assessments were by computerised questionnaire. This is not equivalent to a formal diagnosis of depression or anxiety after a full assessment by a mental health professional, but it does indicate the level of symptoms a person is experiencing.
  • A large number of the originally recruited children did not end up completing the questionnaires assessed in the current study (more than 10,000 of the 14,000+ babies starting the study). This could affect the results if certain types of children were more likely to drop out of the study (e.g. those with more sibling bullying). However, the children who dropped out after age 12 did not differ in their sibling bullying levels from those who stayed in the study, and analyses using estimates of their data did not have a large effect on results. The researchers therefore considered that this loss to follow-up did not appear to be affecting their analyses.

While it is not possible to say for certain that frequent sibling bullying is directly causing later mental health problems, the study does suggest that it could be a contributor. It is also clear that the children experiencing such sibling bullying are also more likely to be experiencing a range of challenging situations, such as being bullied by peers, maltreated by an adult, and exposed to domestic violence.

As the authors say, the findings suggest that interventions to target sibling bullying, potentially as part of a programme targeting the whole family, should be assessed to see if they can reduce the likelihood of later psychological problems.

Read more about bullying, how to spot the signs and what to do if you suspect your child is being bullied (or is a bully themselves).

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Sibling bullying increases depression risk. BBC News, September 8 2014

Lasting toll of bullying by a sibling: Brothers or sisters who are regularly picked on 'more likely to be depressed or take an overdose'. Daily Mail, September 9 2014

Links To Science

Bowes L, Wolke D, Joinson C, et al. Sibling Bullying and Risk of Depression, Anxiety, and Self-Harm: A Prospective Cohort Study. Pediatrics. Published online September 8 2014

Categories: Medical News

Regular walking breaks 'protect arteries'

Medical News - Tue, 09/09/2014 - 13:29

“Just a five-minute walk every hour helps protect against damage of sitting all day,” the Mail Online reports.

A study of 12 healthy but inactive young men found that if they sat still without moving their legs for three hours, the walls of their main leg artery showed signs of decreased flexibility. However, this was “prevented” if the men took five-minute light walking breaks about every hour.

Less flexibility in the walls of the arteries has been linked to atherosclerosis (hardening and narrowing of the arteries), which increases the risk of heart disease.

However, it is not possible to say from this small and short-term study whether taking walking breaks would definitely reduce a person’s risk of heart disease.

There is a growing body of evidence that spending more time in sedentary behaviour such as sitting can have adverse health effects – for example, a 2014 study found a link between sedentary behaviour and increased risk of chronic diseases.

While this study may not be definitive proof of the benefits of short breaks during periods of inactivity, having such breaks isn’t harmful, and could turn out to be beneficial.

 

Where did the story come from?

The study was carried out by researchers from the Indiana University Schools of Public Health and Medicine. It was funded by the American College of Sports Medicine Foundation, the Indiana University Graduate School and School of Public Health.

The study has been accepted for publication in the peer-reviewed journal Medicine & Science in Sports & Exercise.

The coverage in the Mail Online and the Daily Express is accurate though uncritical, not highlighting any of the research's limitations.

 

What kind of research was this?

This was a small crossover randomised controlled trial (RCT) assessing the effect of breaks in sitting time on one measure of cardiovascular disease risk: flexibility of the walls of arteries.

The researchers report that sitting for long periods of time has been associated with increased risk of chronic diseases and death, and this may be independent of how physically active a person is when they are not sitting. This is arguably more an issue now than it would have been in the past, as a lot of us have jobs where sitting (sedentary behaviour) is the norm.

Short breaks from sitting are reported to be associated with a lower waist circumference and with improvements in levels of fats and sugar in the blood.

A randomised controlled trial is the best way to assess the impact of an intervention on outcomes.

 

What did the research involve?

The researchers recruited 12 inactive, but otherwise healthy, non-smoking men of normal weight. These men were asked to sit for two three-hour sessions. During one session (called SIT), they sat on a firmly cushioned chair without moving their lower legs. In the other (called ACT), they sat on a similar chair but got up and walked on a treadmill next to them at a speed of two miles an hour for five minutes, three times during the session. The sessions were carried out between two and seven days apart, and the order in which each man took part in these sessions was allocated at random.

The researchers measured how rapidly the walls of the superficial femoral artery recovered from being compressed by a blood pressure cuff for five minutes. The femoral artery is the main artery supplying blood to the leg. The “superficial” part refers to the part that continues down the thigh after a deeper branch has divided off near the top of the leg.

The researchers took these blood pressure measurements at the start of each session, and then at hourly intervals. The person taking measurements did not know which type of session (SIT or ACT) the person was taking part in. The researchers compared the results obtained during the SIT and ACT sessions, to see if there were any differences.

 

What were the basic results?

The researchers found that the widening of the artery in response to blood flow (called flow-mediated dilation) reduced over three hours spent sitting without moving. However, getting up for five-minute walks in this period stopped this from happening. The researchers did not find any difference between the trials in another measure of what is going on in the arteries, called the “shear rate” (a measurement of how well a fluid flows through a channel such as a blood vessel).

 

How did the researchers interpret the results?

The researchers concluded that light hourly activity breaks taken during three hours of sitting prevented a significant reduction in the speed of the main leg artery recovering after compression. They say that this is “the first experimental evidence of the effects of prolonged sitting on human vasculature, and are important from a public health perspective”.

 

Conclusion

This small and very short-term crossover randomised controlled trial has suggested that sitting still for long periods of time causes the walls of the main artery in the leg to become less flexible, and that having five-minute walking breaks about every hour can prevent this.

The big question is: does this have any effect on our health?

The flexibility of arteries (or in this case, one particular artery) is used as what is called a “proxy” or “surrogate” marker for a person’s risk of cardiovascular disease. However, just because these surrogate markers improve, this does not guarantee that a person will have a lower risk of cardiovascular disease. Longer-term trials are needed to determine this.

The potential adverse effects of spending a lot of time sitting, independent of a person’s physical activity, is currently a popular area of study. Standing desks are becoming increasingly popular in the US, so people spend most of their working day on their feet. Some even bring a treadmill into their office (see this recent BBC News report on desk treadmills).

Researchers are particularly interested in whether taking breaks from unavoidable periods of sitting could potentially reduce any adverse effects, but this research is still at an early stage. In the interim, it is safe to say that having short breaks from periods of inactivity isn’t harmful, and could turn out to be beneficial.

There has been a rapid advancement in human civilisation over the past 10,000 years. We have bodies that evolved to spend a large part of the day on our feet, hunting and gathering, but we also now have lifestyles that encourage us to sit around all day. It could be that this mismatch is partly to blame for the rise in non-infectious chronic diseases, such as type 2 diabetes and heart disease.

If you feel brave enough, why not take on the NHS Choices 10,000 steps a day challenge, which should help build stamina, burn excess calories and give you a healthier heart.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Here's a good excuse to get up from your desk: Just a five-minute walk every hour helps protect against damage of sitting all day. Mail Online, September 8 2014

Walking five minutes at work can undo damage of long periods of sitting. Daily Express, September 8 2014

Links To Science

Thosar SS, Bielko SL, Mather KJ, et al. Effect of Prolonged Sitting and Breaks in Sitting Time on Endothelial Function. Medicine & Science in Sports & Exercise. Published online September 8 2014

Categories: Medical News

Ebola vaccine hope after successful animal study

Medical News - Mon, 09/08/2014 - 13:29

“Hopes for an effective Ebola vaccine have been raised after trials of an experimental jab found that it gave monkeys long-term protection,” The Guardian reports. An initial animal study found that a new vaccine boosted immunity.

Ebola is an extremely serious and often fatal viral infection that can cause internal bleeding and organ failure.

It can be spread via contaminated body fluids such as blood and vomit.

Researchers tested vaccines based on chimpanzee viruses, which were genetically modified to not be infectious and to produce proteins normally found in Ebola viruses. As with all vaccines, the aim is to teach the immune system to recognise and attack the Ebola virus if it comes into contact with it again.

They found that a single injection of one form of the vaccine protected macaques (a common type of monkey) against what would usually be a lethal dose of Ebola five weeks later. If they combined this with a second booster injection eight weeks later, then the protection lasted for at least 10 months.

The quest for a vaccine is a matter of urgency, due to the current outbreak of Ebola in West Africa.

Now that these tests have shown promising results, human trials have started in the US. Given the ongoing threat of Ebola, this type of vaccine research is important in finding a way to protect against infection.

 

Where did the story come from?

The study was carried out by researchers from the National Institutes of Health (NIH) in the US, and other research centres and biotechnology companies in the US, Italy and Switzerland. Some of the authors declared that they claimed intellectual property on gene-based vaccines for the Ebola virus. Some of them were named inventors on patents or patent applications for either chimpanzee adenovirus or filovirus vaccines.

The study was funded by the NIH and was published in the peer-reviewed journal Nature Medicine.

The study was reported accurately by the UK media.

 

What kind of research was this?

This was animal research that aimed to test whether a new vaccine against the Ebola virus could produce a long-lasting immune response in non-human primates.

The researchers were testing a vaccine based on a chimpanzee virus from the family of viruses that causes the common cold in humans, called adenovirus. The researchers were using the chimpanzee virus rather than the human one, as the chimpanzee virus is not recognised and attacked by the human immune system.

The virus is essentially a way to get the vaccine into the cells, and is genetically engineered to not be able to reproduce itself, and therefore not spread from person to person or through the body. Other studies have tested chimp virus-based vaccines for other conditions in mice, other primates and humans.

To make a vaccine, the virus is genetically engineered to produce certain Ebola virus proteins. The idea is that exposing the body to the virus-based vaccine “teaches” the immune system to recognise, remember and attack these proteins. Later, when the body comes into contact with the Ebola virus, it can then rapidly produce an immune response to it.

This type of research in primates is the last stage before the vaccine is tested in humans. Primates are used in these trials due to their biological similarities to humans. This high level of similarity means that results in primates are more likely to predict how humans will respond.

 

What did the research involve?

Chimpanzee adenoviruses were genetically engineered to produce either a protein found on the surface of the Zaire form of the Ebola virus, or both this protein and another found on the Sudan form of the Ebola virus. These two forms of the Ebola virus are reported to be responsible for more deaths than other forms of the virus.

The researchers then injected these vaccines into the muscle of crab-eating macaques and looked at whether they produced an immune response when the animals were later injected with the Ebola virus. This included looking at which vaccine produced a greater immune response, how long this effect lasted, and whether giving a booster injection made the response last longer. The individual experiments used between four and 15 macaques.

 

What were the basic results?

In their first experiment, the researchers found that macaques given the vaccines survived when injected with what would normally be a lethal dose of Ebola virus five weeks after vaccination. Using a lower dose protected fewer of the vaccinated macaques.

The vaccine used in these tests was based on a form of the chimpanzee adenovirus called ChAd3. Vaccines based on another form of the virus called ChAd63, or on another type of virus called MVA, did not perform as well at protecting the macaques. A detailed assessment of the macaques' immune responses suggested that this might be due to the ChAd3-based vaccine producing a bigger response in one type of immune system cell (called T-cells).

The researchers then looked at what happened if vaccinated monkeys were given a potentially lethal dose of Ebola virus 10 months after vaccination. They did this with groups of four macaques given different doses and combinations of the vaccines against both forms of Ebola virus, given as a single injection or with a booster. They found that a single high-dose vaccination with the ChAd3-based vaccine protected two of the four macaques. All four of the vaccinated macaques survived if they were given an initial vaccination with the ChAd3-based vaccine, followed by an MVA-based booster eight weeks later. Other approaches performed less well.

 

How did the researchers interpret the results?

The researchers concluded that they had shown short-term immunity against the Ebola virus could be achieved with a single vaccination in macaques, and long-term immunity if a booster was given. They state that: “This vaccine will be beneficial for populations at acute risk during natural outbreaks, or others with a potential risk of occupational exposure.”

 

Conclusion

This study has shown the potential of a new chimpanzee adenovirus-based vaccine against the Ebola virus, tested in macaques. The quest for a vaccine is seen as urgent, due to the ongoing outbreak of Ebola in West Africa. Animal studies such as this are needed to ensure that any new vaccines are safe, and that they look like they will have an effect. Macaques were used for this research because they, like humans, are primates – therefore, their responses to the vaccine should be similar to what would be expected in humans.

Now that these tests have shown promising results, the first human trials have started in the US, according to reports by BBC News. These trials will be closely monitored to determine the safety and efficacy of the vaccine in humans as, unfortunately, this early success does not guarantee that it will work in humans. Given the ongoing threat of Ebola, this type of vaccine research is important to protect against infection.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hopes raised as Ebola vaccine protects monkeys for 10 months. The Guardian, September 7 2014

Vaccine gives monkeys Ebola immunity. BBC News, September 7 2014

Breakthrough as experimental Ebola vaccines protect monkeys from epidemic for 10 months. Mail Online, September 7 2014

Links To Science

Stanley DA, Honko AN, Asiedu C, et al. Chimpanzee adenovirus vaccine generates acute and durable protective immunity against ebolavirus challenge. Nature Medicine. Published online September 7 2014

Categories: Medical News

Wearing a bra 'doesn't raise breast cancer risk'

Medical News - Mon, 09/08/2014 - 03:00

“Scientists believe they have answered the decades long debate on whether wearing a bra can increase your risk of cancer,” reports The Daily Telegraph.

There is an "urban myth" that wearing a bra disrupts the workings of the lymphatic system (an essential part of the immune system), which could lead to a build-up of toxins inside breast tissue, increasing the risk of cancer. New research suggests that this fear may be unfounded.

The study compared the bra-wearing habits of 1,044 postmenopausal women with two common types of breast cancer with those of 469 women who did not have breast cancer. It found no significant difference between the groups in bra wearing habits such as when a woman started wearing a bra, whether she wore an underwired bra, and how many hours a day she wore a bra.

The study had some limitations, such as relatively limited matching of the characteristics of women with and without cancer. Also, as almost all women wear a bra, the researchers could not compare women who never wore a bra with those who did.

Despite the limitations, as the authors of the study say, the findings provide some reassurance that your bra-wearing habits do not seem to increase risk of postmenopausal breast cancer.

While not all cases of breast cancer are thought to be preventable, maintaining a healthy weight, moderating your consumption of alcohol and taking regular exercise should help lower your risk.

 

Where did the story come from?

The study was carried out by researchers from Fred Hutchinson Cancer Research Center in the US.

It was funded by the US National Cancer Institute.

The study was published in the peer-reviewed medical journal Cancer Epidemiology Biomarkers & Prevention.

The Daily Telegraph and the Mail Online covered this research in a balanced and accurate way.

However, suggestions that women who wore bras were compared with “their braless counterparts” are incorrect. Only one woman in the study never wore a bra, and she was not included in the analyses. The study essentially compared women who all wore bras, but who started at different ages, wore them for different lengths of time during the day, or wore different types (underwired or not).

 

What kind of research was this?

This was a case-control study looking at whether wearing a bra increases risk of breast cancer.

The researchers say there has been some suggestion in the media that bra wearing might increase risk, but that there is little in the way of hard evidence to support the claim.

A case-control study compares what people with and without a condition have done in the past, to get clues as to what might have caused the condition.

If women who had breast cancer wore bras more often than women who did not have the disease, this might suggest that bras could be increasing risk. One of the main limitations to this type of study is that it can be difficult for people to remember what has happened to them in the past, and people with a condition may remember things differently than those who don’t have the condition.

Also, it is important that researchers make sure that the group without the condition (the controls) are coming from the same population as the group with the condition (cases).

This reduces the likelihood that differences other than the exposure of interest (bra wearing) could contribute to the condition.

 

What did the research involve?

The researchers enrolled postmenopausal women with (cases) and without breast cancer (controls) from one area in the US. They interviewed them to find out detailed information about their bra wearing over the course of their lives, as well as other questions. They then statistically assessed whether the cases had different bra-wearing habits to the controls.

The cases were identified using the region’s cancer surveillance registry data for 2000 to 2004. Women had to be between 55 and 74 years old when diagnosed. The researchers identified all women diagnosed with one type of invasive breast cancer (lobular carcinoma or ILC), and a random sample of 25% of the women with another type (ductal carcinoma). For each ILC case, a control woman who was aged within five years of the case’s age was selected at random from the general population in the region. The researchers recruited 83% of the eligible cases (1,044 of 1,251 women) and 71% of eligible controls (469 of 660 women).

The in-person interviews asked about various aspects of past bra wearing (up to the point of diagnosis with cancer, or the equivalent date for controls):

  • bra sizes
  • age at which they started regularly wearing a bra
  • whether they wore a bra with an underwire
  • number of hours per day a bra was worn
  • number of days per week they wore a bra at different times in their life
  • whether their bra-wearing patterns ever changed during their life

Only one woman reported never wearing a bra, and she was excluded from the analysis.

The women were also asked about other factors that could affect breast cancer risk (potential confounders), including:

  • whether they had children
  • body mass index (BMI)
  • medical history
  • family history of cancer
  • use of hormone replacement therapy (HRT)
  • demographic characteristics

The researchers compared bra-wearing characteristics between cases and controls, taking into account potential confounders. The potential confounders were found to not have a large effect on results (10% change in odds ratio [OR] or less), so results adjusting for these were not reported. If the researchers just analysed data for women who had not changed their bra-wearing habits over their lifetime, the results were similar to overall results, so these were also not reported.

 

What were the basic results?

The researchers found that some characteristics varied between groups – cases were slightly more likely than controls to:

  • have a current BMI less than 25
  • be currently using combined HRT
  • have a close family history of breast cancer
  • have had a mammogram in the past two years
  • have experienced natural menopause (as opposed to medically induced menopause)
  • have no children

The only bra characteristic that showed some potential evidence of being associated with breast cancer was cup size (which will reflect breast size). Women who wore an A cup bra were more likely to have invasive ductal cancer than those with a B cup bra (OR 1.9, 95% confidence interval [CI] 1.0 to 3.3).

However, the confidence intervals show that this increase in risk was only just statistically significant, as it is just possible that the risk in both groups is equivalent (an odds ratio of 1). If a lower bra cup size were truly associated with increased breast cancer risk, the researchers would expect to see risk fall as cup sizes got bigger. However, they did not see this trend across the other cup sizes, suggesting that there wasn’t a true relationship between cup size and breast cancer risk.
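As an illustration of how an odds ratio and its confidence interval of this kind are calculated, the short Python sketch below works through a hypothetical 2×2 table (made-up counts, not the study’s published data) using the standard log-scale (Woolf) method. An interval whose lower bound only just clears 1.0 corresponds to a result that is, as in the study, only just statistically significant.

```python
import math

# Hypothetical 2x2 table (illustrative counts only - not the study's data)
# rows = bra cup size (A vs B), columns = cases vs controls
a, b = 30, 17    # A cup: cases, controls
c, d = 120, 130  # B cup: cases, controls

# Odds ratio: odds of being an A cup wearer among cases
# divided by the same odds among controls
odds_ratio = (a * d) / (b * c)

# 95% confidence interval on the log scale (Woolf's method)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")

# The result is statistically significant only if the interval excludes 1.0
significant = not (ci_low <= 1.0 <= ci_high)
```

With these made-up counts the interval’s lower bound lands just above 1.0, mirroring the borderline finding described above: formally significant, but weak enough that chance cannot confidently be ruled out.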

None of the other bra-wearing characteristics were statistically significantly different between cases with either type of invasive breast cancer and controls.

 

How did the researchers interpret the results?

The researchers concluded that their findings “provided reassurance to women that wearing a bra does not seem to increase the risk of the most common histologic types of postmenopausal breast cancer”.

 

Conclusion

This study suggests that past bra-wearing characteristics are not associated with breast cancer risk in postmenopausal women. The study does have some limitations:

  • There was only limited matching of the cases and controls, which could mean that other differences between the groups may be contributing to results. The potential confounders assessed were reported to not have a large impact on the results, which suggests that the lack of matching may not be having a large effect, but these results were not shown to allow assessment of this by the reader.
  • Controls were not selected for the women with invasive ductal carcinoma, only those with invasive lobular carcinoma.
  • As most women wear bras but differ in their bra-wearing habits (for example, when they started wearing a bra, or whether they wore an underwired bra), it wasn’t possible to compare the effect of wearing a bra versus not wearing one at all.
  • It may be difficult for women to remember their bra-wearing habits a long time ago, for example, exactly when they started wearing a bra, and their estimations may not be entirely accurate. As long as both cases and controls have the same likelihood of these inaccuracies in their reporting, this should not bias results. However, if women with cancer remember their bra wearing differently, for example, if they think it may have contributed to their cancer, this could bias results.
  • There were relatively small numbers of women in the control group, and once they were split up into groups with different characteristics, the number of women in some groups was relatively small. For example, only 17 women in the control group wore an A cup bra. These small numbers may mean some figures are less reliable.
  • The findings are limited to breast cancer risk in postmenopausal women.

While this study does have limitations as the authors say, it does provide some level of reassurance for women that bra wearing does not seem to increase risk of breast cancer.

While not all cases of breast cancer are thought to be preventable, maintaining a healthy weight, moderating your consumption of alcohol and taking regular exercise should help lower your risk. Read more about how to reduce your breast cancer risk.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Wearing a bra does not increase breast cancer risk, study finds. The Daily Telegraph, September 5 2014

Wearing a bra will NOT cause breast cancer - even if it's underwired or you wear it all year long, study finds. Mail Online, September 5 2014

Links To Science

Chen L, Malone KE, Li C I. Bra Wearing Not Associated with Breast Cancer Risk: A Population-Based Case-Control Study. Cancer Epidemiology Biomarkers and Prevention. Published online September 5 2014

Categories: Medical News

Gay people have 'poorer health' and 'GP issues'

Medical News - Fri, 09/05/2014 - 14:30

“Lesbians, gays and bisexuals are more likely to have longstanding mental health problems,” The Independent reports, as well as “bad experiences with their GP”. A UK survey found striking disparities between the responses of sexual minorities and those of heterosexuals.

The news is based on the results of a survey in England of more than 2 million people, including over 27,000 people who described themselves as gay, lesbian or bisexual.

It found that sexual minorities were two to three times more likely to report having longstanding psychological or emotional problems and significantly more likely to report fair/poor health than heterosexuals.

People who described themselves as bisexuals had the highest rates of reported psychological or emotional problems. The researchers speculate that this could be due to a “double discrimination” effect; homophobia from the straight community as well as being stigmatised by the gay and lesbian communities as not being “properly gay” (biphobia).

Sexual minorities were also more likely to report unfavourable experiences with nurses and doctors in a GP setting.

Unfortunately this study cannot tell us the reasons for the differences reported in either health or relationships with GPs.

The results of this survey would certainly seem to suggest that there is room for improvement in the standard and focus of healthcare offered to gay, lesbian and bisexual people.

 

Where did the story come from?

The study was carried out by researchers from the RAND Corporation (a non-profit research organisation), Boston Children’s Hospital/Harvard Medical School and the University of Cambridge. The study was funded by the Department of Health (England).

The study was published in the peer-reviewed Journal of General Internal Medicine. This article is open-access so is free to read online.

The results of this study were well reported by The Independent and The Guardian.

 

What kind of research was this?

This was a cross-sectional study that aimed to compare the health and healthcare experiences of sexual minorities with heterosexual people of the same gender, adjusting for age, race/ethnicity and socioeconomic status.

A cross-sectional study collects data at one point in time so it is not able to prove any direct cause and effect relationships. It can be useful in highlighting possible associations that can then be investigated further.

 

What did the research involve?

The researchers analysed data from the 2009/10 English General Practice Patient Survey.

The survey was mailed to 5.56 million randomly sampled adults registered with a National Health Service general practice (it is estimated that 99% of England’s adult population are registered with an NHS GP). In all, 2,169,718 people responded (39% response rate).

People were asked about their health, healthcare experiences and personal characteristics (race/ethnicity, religion and sexual orientation).

The question about sexual orientation is the same one used in UK Office for National Statistics social surveys: “Which of the following best describes how you think of yourself?”:

  • heterosexual/straight
  • gay/lesbian
  • bisexual
  • other
  • I would prefer not to say

Of the respondents, 27,497 people described themselves as gay, lesbian, or bisexual.

The researchers analysed the responses to questions concerning health status and patient experience.

People were asked about their general health status (“In general, would you say your health is: excellent, very good, good, fair, or poor?”) and whether they had one of six long-term health problems, including a longstanding psychological or emotional condition.

The researchers looked to see whether people had reported:

  • having “no” trust or confidence in the doctor
  • “poor” or “very poor” to at least one of the doctor communication measures of giving enough time, asking about symptoms, listening, explaining tests and treatments, involving in decisions, treating with care and concern, and taking problems seriously
  • “poor” or “very poor” to at least one of the nurse communication measures
  • being “fairly” or “very” dissatisfied with care overall

The researchers compared the responses from sexual minorities and heterosexuals of the same gender after controlling for age, race/ethnicity and deprivation.

 

What were the basic results?

Both male and female sexual minorities were two to three times more likely to report having a longstanding psychological or emotional problem than heterosexual counterparts. Problems were reported by 5.2% heterosexual men compared to 10.9% gay men and 15% bisexual men and by 6.0% heterosexual women compared to 12.3% lesbian women and 18.8% bisexual women.

Both male and female sexual minorities were also more likely to report fair/poor health. Fair/poor health was reported by 19.6% heterosexual men compared to 21.9% gay men and 26.4% bisexual men and by 20.5% heterosexual women compared to 24.9% lesbian women and 31.6% bisexual women.

Negative healthcare experiences were uncommon in general, but sexual minorities were about one-and-a-half times more likely than heterosexual people to report unfavourable experiences with each of four aspects of primary care:

  • no trust or confidence in the doctor was reported by 3.6% heterosexual men compared to 5.6% gay men (4.3% bisexual men, difference compared to heterosexual men not statistically significant) and by 3.9% heterosexual women compared to 5.3% lesbian women and 5.3% bisexual women
  • poor/very poor doctor communication was reported by 9.0% heterosexual men compared to 13.5% gay men and 12.5% bisexual men and by 9.3% heterosexual women compared to 11.7% lesbian women and 12.8% bisexual women
  • poor/very poor nurse communication was reported by 4.2% heterosexual men compared to 7.0% gay men and 7.3% bisexual men and by 4.5% heterosexual women compared to 7.8% lesbian women and 6.7% bisexual women
  • being fairly/very dissatisfied with care overall was reported by 3.8% heterosexual men compared to 5.9% gay men and 4.9% bisexual men and by 3.9% heterosexual women compared to 4.9% lesbian women (4.2% bisexual women, difference compared to heterosexual women not statistically significant)

 

How did the researchers interpret the results?

The researchers concluded that “sexual minorities suffer both poorer health and worse healthcare experiences. Efforts should be made to recognise the needs and improve the experiences of sexual minorities. Examining patient experience disparities by sexual orientation can inform such efforts”.

 

Conclusion

This study has found that sexual minorities were two to three times more likely to report having longstanding psychological or emotional problems, and significantly more likely to report fair/poor health, than heterosexuals.

Sexual minorities were also more likely to report unfavourable experiences with nurses and doctors in a GP setting.

It should also be noted that response rates to the survey were low, with only 39% of people responding. It is unknown whether the results would have been different if more people had responded.

While potential reasons for these disparities may include the stress induced by homophobic attitudes, or the suspicion that a GP disapproves of their patient’s sexuality, these speculations are unproven.

As it stands, this study cannot tell us the reasons for the differences reported. However, it would suggest that healthcare providers need to do more to meet the needs of gay, lesbian and bisexual people.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Gay people more likely to have mental health problems, survey says. The Independent, September 4 2014

Gay people report worse experiences with GPs. The Guardian, September 4 2014

Links To Science

Elliott MN, Kanouse DE, Burkhart Q, et al. Sexual Minorities in England Have Poorer Health and Worse Health Care Experiences: A National Survey. Journal of General Internal Medicine. Published online September 4 2014

Categories: Medical News

1 in 5 child deaths 'preventable'

Medical News - Fri, 09/05/2014 - 14:00

“One in five child deaths ‘preventable’,” reports BBC News.

The headline was prompted by the publication of a three-part series of papers on child death in high-income countries published in The Lancet.

The reviews outlined the need for child death reviews to identify modifiable risk factors, and described patterns of child mortality at different ages across five broad categories: perinatal causes, congenital abnormalities, acquired natural causes, external causes, and unexplained deaths. They also described contributory factors to death across four broad domains: biological and psychological factors, the physical environment, the social environment, and health and social service delivery.

Although the series did report that one in five child deaths are preventable, it should be noted that this was not a new figure and was published by the government in 2011.

Leading causes of preventable child deaths in the UK highlighted by the authors include accidents, abuse, neglect and suicide.

The authors also argue that child poverty and income inequality have a significant effect on risk factors for preventable child death, and they are quoted in the media calling for the government to do more to tackle child poverty.

 

Where did the story come from?

The series of papers was written by researchers from the University of Warwick in collaboration with researchers from universities and research institutes around the world. The source of funding for this series of three papers was not reported.

The series was published in the peer-reviewed medical journal The Lancet. All three papers are open-access so are free to read online (though you will need to register with The Lancet website):

 

Child health reviews

The first paper in the series discussed child death reviews, which have been developed in several countries. These aim to develop a greater understanding of how and why children die, which could lead to the identification of factors that could potentially be modified to reduce further deaths.

In England, multiagency rapid-response teams investigate all unexpected deaths of children aged 0-18 years. However, lessons learned from child death reviews are yet to be translated into large-scale policy initiatives, although local actions have been taken.

The researchers also report that it has not yet been assessed whether child death reviews have led to a reduction in national child death rates.

They also suggest that child death reviews could be extended to child deaths in hospital.

 

Patterns of death in England and Wales

The second paper in the series discussed the pattern of child death in England and Wales at different ages across five broad categories (perinatal causes, congenital abnormalities, acquired natural causes, external causes, and unexplained deaths).

It found that more than 5,000 infants, children and adolescents die every year in England and Wales.

Mortality is highest in infancy, dropping to very low rates in the middle childhood years, before rising again in adolescence.

Patterns of mortality vary with age and sex; perinatal and congenital causes predominate in infancy, with acquired natural causes (for example infections or neurological, respiratory and cardiovascular disorders) becoming prominent in later childhood and adolescence.

More than 50% of adolescent deaths occur from external causes, which included traffic deaths, non-intentional injuries (for example, falls), fatal maltreatment and death from assault, suicide and deliberate self-harm.

Deaths of children diagnosed with life-limiting disorders (disorders that are likely to reduce a child’s lifespan) might account for 50% or more of all child mortality in England and Wales.

 

Why do children die in high-income countries?

In the third review of the series the researchers summarised the results of key studies that described contributory factors to child death across four broad domains:

  • Intrinsic (genetic and biological) factors associated with child mortality, including sex, ethnic origin, gestation and growth characteristics, disability and behaviour.
  • The physical environment – for example the home and surrounding area, including access to firearms (a particular problem in the US) and poisons.
  • The social environment – for example socioeconomic status, parental characteristics, parenting behaviours, family structures and social support.
  • Service delivery – the delivery of healthcare (including national policy, healthcare services and the individual doctor) and of other welfare services (such as housing, welfare benefits and social care).

 

What do the researchers suggest?

In an accompanying editorial the researchers suggest that:

  • co-ordinated strategies that reduce antenatal and perinatal risk factors are essential
  • further research is needed into preventative interventions for preterm birth
  • efforts are needed to prevent child deaths due to acquired natural causes, including improved recognition of severity of illness
  • preventative strategies involving collaboration between health authorities and other agencies, including social, education, environmental, police and legal services, industry, and consumer groups are needed to prevent deaths due to external causes

 

Conclusion

A case could be made that this series of reports is more in the realm of political debate than health and medicine.

The lead author, Dr Peter Sidebotham, is quoted in The Daily Telegraph as saying: "It needs to be recognised that many child deaths could be prevented through a combination of changes in long-term political commitment, welfare services to tackle child poverty, and healthcare services.

"Politicians should recognise that child survival is as much linked to socioeconomic policies that reduce inequality as it is to a country's overall gross domestic product and systems of healthcare delivery."

While most of us would agree that reducing child poverty and income inequality is a good thing, exactly how we go about achieving these goals is a matter of heated debate.

Those on the Right of the political spectrum have argued that stimulating the economic activity of the free market will provide opportunities to lift people out of poverty. Those on the Left have argued that redistributing wealth through taxation can help create a safety net that stops children falling into poverty.

Seeing as this argument has been raging for centuries, we do not expect a resolution to the debate anytime soon.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

One in five child deaths 'preventable'. BBC News, September 5 2014

One in five child deaths in England is preventable, study finds. The Guardian, September 5 2014

Fifth of child deaths are preventable. The Daily Telegraph, September 5 2014

1 in 5 child deaths 'could be prevented and Government must take immediate action'. Daily Mirror, September 5 2014

One In Five Child Deaths Are 'Preventable'. Sky News, September 5 2014

Links To Science

Fraser J, Sidebotham P, Frederick J, et al. Learning from child death review in the USA, England, Australia, and New Zealand. The Lancet. Published online September 5 2014

Sidebotham P, Fraser J, Fleming P, et al. Patterns of child death in England and Wales. The Lancet. Published online September 5 2014

Sidebotham P, Fraser J, Covington T, et al. Understanding why children die in high-income countries. The Lancet. Published online September 5 2014

Categories: Medical News

How immunotherapy may treat multiple sclerosis

Medical News - Thu, 09/04/2014 - 14:05

“Breakthrough hope for MS treatment as scientists discover how to ‘switch off’ autoimmune diseases,” reports the Mail Online.

Autoimmune disorders, such as multiple sclerosis (MS), occur when the body’s immune system attacks and destroys healthy body tissue by mistake.

The “holy grail” of treatment is to make the immune system tolerant to the part of the body that it is attacking, while still allowing the immune system to work effectively.

Previous studies in mice have shown that tolerance can be achieved by repeatedly exposing mice with autoimmune disorders to fragments of the components that the immune system is attacking and destroying. The immune cells that were attacking the healthy tissue convert into regulatory cells that actually dampen the immune response.

This process is similar to the process that has been used to treat allergies (immunotherapy).

It is known that doses of the fragments of the components that the immune system is attacking need to start low before increasing – this is known as the dose-escalation protocol.

A new mouse study has found that a carefully calibrated dose-escalation protocol causes changes in gene activity (gene expression). These changes lead the attacking immune cells to express regulatory genes and become suppressive. So rather than attacking healthy tissue, they are now ready to protect against further attacks on it.

The researchers hope that some of the changes in immune cells and in gene expression that they have identified can be used in clinical studies to determine whether immunotherapy is working.

 

Where did the story come from?

The study was carried out by researchers from the University of Bristol and University College London and was funded by the Wellcome Trust, MS Society UK, the Batchworth Trust and the University of Bristol.

The study was published in the peer-reviewed journal Nature Communications. This article is open-access and can be read for free.

Although most of the media reporting was accurate, it should be noted that the current study focused on how dose-escalation therapy works rather than revealing it as a new discovery.

The principles underpinning immunotherapy and similar treatments have been known for many years.

 

What kind of research was this?

This was an animal study that aimed to improve the understanding of how dose-escalation therapy works so that it can be made more effective and safer.

Animal studies are the ideal type of study to answer this sort of basic science question.

 

What did the research involve?

Most of the experiments were performed in mice that were engineered to develop autoimmune encephalomyelitis, which has similarities to multiple sclerosis (MS).

In this mouse model, more than 90% of a subset of immune cells called CD4+ T cells recognise myelin basic protein, which is found in the myelin sheath that surrounds nerve cells. This causes the immune system to attack the myelin sheath, damaging it, which causes nerve signals to slow down or stop.

The researchers injected the mice subcutaneously (under the skin) with a peptide (small protein) that corresponded to the region of myelin basic protein that was recognised by the CD4+ T cells.

The researchers initially wanted to see what the maximum dose of peptide that could be tolerated was, and what dose was most effective at inducing tolerance.

They then did further experiments in which they increased the dose of peptide and compared that with just giving the same dose of peptide on multiple days.

Finally, they looked at what genes were being expressed or repressed in CD4+ T cells during dose-escalation.

 

What were the basic results?

The researchers found that the maximum dose of peptide that could be tolerated safely by the mice was 8µg (micrograms).

The tolerance to the peptide increased as peptide dose increased. This means that when the mice were re-challenged with peptide, the immune response was lower in mice that received 8µg of peptide compared to mice that had received lower doses.

The researchers found that dose escalation was critical for effective immunotherapy. If mice received 0.08µg on day 1, 0.8µg on day 2, and 8µg on day 3, they could then tolerate doses of 80µg with no adverse effects. In addition, this dose escalation protocol suppressed activation and proliferation of the CD4+ T cells in response to the peptide.

The researchers then looked at the gene expression within CD4+ T cells during dose escalation. They found that each escalating dose of peptide treatment modified the genes that were expressed. Genes that are associated with an inflammatory response were repressed and genes that are associated with regulatory processes were induced.

How did the researchers interpret the results?

The researchers concluded that “these findings reveal the critical importance of dose escalation in the context of antigen-specific immunotherapy, as well as the immunological and transcriptional signatures associated with successful self-antigen escalation dose immunotherapy”.

They go on to say that “with the immunological and transcriptional evidence provided in this study, we anticipate that these molecules can now be investigated as surrogate markers for antigen-specific tolerance induction in clinical trials”.

 

Conclusion

This study, using a mouse model of MS, found that the dose-escalation protocol is extremely important for inducing tolerance – in this case, to a small fragment of myelin basic protein.

Escalation dose immunotherapy minimised immune system activation and proliferation during the early stages, and caused changes in gene expression that caused the attacking immune cells to express regulatory genes and to become suppressive.

The researchers hope that some of the changes in immune cells and in gene expression that they have identified can be used in clinical studies of tolerance-inducing treatments for autoimmune disorders to determine whether therapy is working.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Breakthrough hope for MS treatment as scientists discover how to 'switch off' autoimmune disease. Mail Online, September 4 2014

Could a cure for MS and diabetes be on the way? Daily Express, September 4 2014

Links To Science

Burton BR, Britton GJ, Fang H, et al. Sequential transcriptional changes dictate safe and effective antigen-specific immunotherapy. Nature Communications. Published online September 3 2014

Categories: Medical News

Claims e-cigarettes are a 'gateway to cocaine'

Medical News - Thu, 09/04/2014 - 14:00

“E-cigarettes could lead to using cocaine and cannabis scientists say,” the Daily Mirror reports.

In an article sure to prove controversial, two neuroscientists argue that nicotine may "prime" the brain to become addicted to harder drugs, such as cocaine.

The story comes from an article that argues that nicotine alters the brain’s circuitry, lowering the threshold for addiction to other substances such as cannabis and cocaine. Electronic cigarettes, the authors point out, are “pure nicotine delivery devices”, which could increase drug addiction among young people.

The “gateway drug” hypothesis is that use of certain (usually legal) drugs such as nicotine and alcohol can lead to the use of hard illegal drugs such as cocaine. This article argues that nicotine is such a drug and includes previous research by the authors that tested this hypothesis in a mouse model. 

The authors’ argument is based on the assumption that e-cigarette (or other nicotine) users will go on to use drugs such as cocaine. This assumption is unproven. While it is true that most cocaine users are also smokers, this does not equate to stating that most smokers use cocaine.

The article is of interest but it does not prove that e-cigarettes are a “gateway” to the use of drugs such as cocaine.

 

Where did the story come from?

The article is the print version of a lecture given by two researchers from Columbia University in the US. Their previous work in this area has been funded by the Howard Hughes Medical Institute, the National Institutes of Health and the National Institute on Drug Abuse, all in the US. One of the researchers, Professor Eric Kandel, shared the 2000 Nobel Prize in Physiology or Medicine for his discoveries related to the molecular basis of memory.

The article was published in the peer-reviewed New England Journal of Medicine on an open access basis so it is free to read online.

Coverage in the UK media was accurate but uncritical.

 

What kind of research was this?

This was not research but an article, based on a lecture, that presented evidence in favour of the theory that nicotine “primes” the brain for the use of other drugs such as cannabis and cocaine.

The authors say that while studies have shown that nicotine use is a gateway to the use of cannabis and cocaine in human populations, it has not been clear how nicotine accomplishes this. They say they have brought “the techniques of molecular biology” to bear on the question, revealing the action of nicotine in the brains of mice.

 

What did the article involve?

The authors first explain the gateway hypothesis (developed previously by one of them), which argues that in western societies there is a “well defined” developmental sequence of drug use that starts with a legal drug and proceeds to illegal drugs. Specifically, it says, the use of alcohol and tobacco precedes the use of cannabis, which in turn precedes the use of cocaine and other illicit drugs. They then review their own studies in which they tested the gateway hypothesis in a mouse model.

Using this model, they examined addictive behaviour, brain “plasticity” (changes to the structures of the brain) and the activity of a specific gene associated with addiction, in various experiments in which mice were exposed to both nicotine and cocaine.

One of the behavioural experiments they report on, for example, shows that mice given nicotine for seven days, followed by four days of nicotine and cocaine, were significantly (98%) more active than controls.

They also say they found that exposing mice brains to nicotine appeared to increase the “rewarding” properties of cocaine by encouraging production of the neurotransmitter dopamine.

Other experiments they report on found that nicotine given to mice before cocaine increased the expression of a gene that magnifies the effect of cocaine.

This “priming” effect, they say, does not occur unless nicotine is given repeatedly and in close conjunction with cocaine.

They then report on studies that they say show that nicotine also primes human brains to respond to cocaine, with the rate of cocaine dependence highest among users who started using cocaine after having smoked cigarettes.

Their conclusion is that, in humans, nicotine affects the circuitry of the brain in a manner that enhances the effects of a subsequent drug and that this “priming effect” happens if cocaine is used while using nicotine.

This effect is likely to occur, they argue, whether the exposure is from tobacco smoking, passive smoking or e-cigarettes.

They also argue that e-cigarettes are increasingly used by adolescents and young adults, with the potential for creating a new generation of people addicted to nicotine. “Whether e-cigarettes will prove to be a gateway to the use of combustible cigarettes and illicit drugs is uncertain but it is clearly a possibility.”

 

Conclusion

The article is of interest but it does not prove that e-cigarettes are a “gateway” to the use of drugs such as cocaine. The authors present evidence, much of it from their own research, in support of this hypothesis, but it remains just that – a hypothesis.

You could also make the point that it is somewhat unfair to demonise e-cigarettes in this way. Any product containing nicotine, such as patches or gum, could also be classed as a “gateway drug”, but as these release nicotine slowly, they are not thought to be as “addictive”.

Also, as the authors point out, the “gateway drug” hypothesis is not universally accepted by addiction specialists. There is another hypothesis that the use of multiple drugs reflects a general tendency to drug use and that it is this tendency to addiction, rather than the use of a particular drug, that increases the risk of progressing to another drug.

From 2016 e-cigarettes are likely to be classed as "medicines", which means they will face stringent checks by medicine regulator the MHRA, and doctors will be able to prescribe them to smokers to help them cut down or quit. Tighter regulation will ensure the products are safe and effective.

If you want to try a safer alternative to cigarettes but are concerned about the uncertainties surrounding e-cigarettes, you may wish to consider a nicotine inhalator. This licensed quit smoking aid, available on the NHS, consists of just a mouthpiece and a plastic cartridge. It’s proven to be safe, but the nicotine vapour only reaches the mouth rather than the lungs, so you don’t get the quick hit of nicotine that comes with e-cigarettes.

It is well known that nicotine is addictive. Despite the risk of addiction and other uncertainties, e-cigarettes are likely to be safer than cigarettes (or other tobacco products). There is no conclusive evidence that using e-cigarettes will increase your risk of developing a drug addiction.


Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

E-cigarettes could lead to using cocaine and cannabis scientists say. Daily Mirror, September 4 2014

How an e-cigarette could lead to cocaine. The Daily Telegraph, September 3 2014

E-cigarettes could act as 'gateway' to harmful illegal drugs raising the risk of addiction. Mail Online, September 4 2014

Effects of using E-cigarettes 'stronger in adolescents'. ITV News, September 4 2014

Links To Science

Kandel ER, Kandel DB. A Molecular Basis for Nicotine as a Gateway Drug. The New England Journal of Medicine. Published online September 4 2014

Categories: Medical News

What is proton beam therapy?

Medical News - Wed, 09/03/2014 - 16:00

Proton beam therapy has been discussed widely in the media in recent days.

This is due to the controversy surrounding the treatment of a young boy called Ashya King, who has medulloblastoma, a type of brain cancer.

Ashya was reportedly taken abroad by his parents to receive proton beam therapy.

But what does proton beam therapy involve, and can it treat cancer effectively?

 

How does proton beam therapy work?

Proton beam therapy is a type of radiotherapy.

Conventional radiotherapy uses high energy beams of radiation to destroy cancerous cells, but surrounding tissue can also be damaged. This can lead to side effects such as nausea, and can sometimes disrupt how some organs function.

Proton beam therapy uses beams of protons (sub-atomic particles) to achieve the same cell-killing effect. A "particle accelerator" is used to speed up the protons. These accelerated protons are then beamed into cancerous cells, killing them.

Unlike conventional radiotherapy, in proton beam therapy the beam of protons stops once it "hits" the cancerous cells. This means that proton beam therapy results in much less damage to surrounding tissue.

 

Who can benefit from proton beam therapy?

Proton beam therapy is useful for treating types of cancer in critical areas – when it is important to reduce damage to surrounding tissue as much as possible. For example, it is used most often to treat brain tumours in young children whose brains are still developing.

Proton beam therapy can also be used to treat adult cancers where the cancer has developed near a place in the body where damage would cause serious complications, such as the optic nerve.

These types of cancer make up a very small proportion of all cancer diagnoses. Even if there were unlimited access to proton beam therapy, its use would not be recommended in most cases.

Cancer Research UK estimates that only one in 100 people with cancer would be suitable for proton beam therapy.

 

Is proton beam therapy effective?

It is important not to assume that newly emerging treatments are more effective than existing treatments.

Proton beam therapy may cause less damage to healthy tissue, but it is still unclear whether it is as good at destroying cancerous tissue as conventional radiotherapy.

As proton beam therapy is usually reserved for very rare types of cancer, it is hard to gather systematic evidence about its effectiveness when compared to radiotherapy.

People who travel abroad from the UK to receive proton beam therapy usually respond well. But these people have specifically been selected for treatment as they were seen as "optimal candidates" who would benefit the most. Whether this benefit would apply to more people with cancer is unclear.

We cannot say with any conviction that proton beam therapy is “better” overall than radiotherapy.

 

Is proton beam therapy available in the UK?

Generally not. The NHS is building two proton beam centres, one in London and one in Manchester, which are expected to open in 2018. There is an existing low energy proton machine used specifically to treat some eye cancers at the NHS Clatterbridge Cancer Centre in Merseyside. This low energy machine cannot be used to treat most brain tumours as the low energy beam cannot penetrate far enough.

The NHS sends patients abroad if their care team thinks they are ideally suited to receive proton beam therapy. Around 400 patients have been sent abroad since 2008 – most of these patients were children. Read NHS England's advice for families of children being referred for proton beam therapy at overseas clinics (PDF, 1.39Mb).

Some overseas clinics providing proton beam therapy heavily market their services to parents who are understandably desperate to get treatment for their children. Proton beam therapy can be very costly and it is not clear whether all children treated privately abroad are treated appropriately.

It is important not to lose sight of the fact that conventional radiotherapy is, in most cases, both safe and effective with a low risk of complications. While side effects of radiotherapy are common they normally pass once the course of treatment has finished.

Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

What access to proton beam therapy do UK patients have? BBC News, September 1 2014

Ashya: Opinion Divided On Proton Beam Therapy. Sky News, September 2 2014

Categories: Medical News

Missing breakfast linked to type 2 diabetes

Medical News - Wed, 09/03/2014 - 14:30

"Skipping breakfast in childhood may raise the risk of diabetes," the Mail Online reports. A study of UK schoolchildren found that those who didn’t regularly eat breakfast showed early risk markers for type 2 diabetes.

The study found that children who did not usually eat breakfast had 26% higher insulin resistance than children who always ate breakfast. High insulin resistance increases the risk of type 2 diabetes, which is why the results of this study are important. It should be pointed out that while the levels were higher in children who skipped breakfast, they were still within normal limits.

The researchers questioned more than 4,000 children aged nine and 10 about whether they usually ate breakfast, and took a fasting blood sample for a variety of measurements, including their blood sugar level and insulin level. 

The results suggest that eating breakfast may reduce the risk of higher insulin resistance levels, but due to the cross-sectional design of the study (a one-off assessment), it cannot prove that skipping breakfast causes higher insulin resistance or type 2 diabetes. And, as the researchers point out, even if a direct cause and effect relationship was established, it is still unclear why skipping breakfast would make you more prone to diabetes.

Despite this limitation of the study, eating a healthy breakfast high in fibre has many health benefits and should be encouraged.

 

Where did the story come from?

The study was carried out by researchers from St George’s University Hospital in London, the University of Oxford, the Medical Research Council Human Nutrition Research in Cambridge and University of Glasgow School of Medicine. It was funded by Diabetes UK, the Wellcome Trust, and the National Prevention Research Initiative. The authors declared no conflict of interest.

The study was published in the peer-reviewed medical journal PLOS Medicine. This is an open access journal so the study is free to read online.

The UK media generally reported the study accurately, although claims the study “tracked” children over time are inaccurate. Researchers used a one-off questionnaire and blood test, and none of the results showed that the children were insulin resistant – they just had higher levels within the normal range.

Also, the Mail Online’s headline “Youngsters who don't eat morning meal more likely to be insulin dependent” appears to have been written by someone without any grasp of human biology. All humans are insulin dependent.

 

What kind of research was this?

This was a cross-sectional study of nine- and 10-year-old children in England. It aimed to see if there was a link between eating breakfast and markers for type 2 diabetes, in particular insulin resistance and high blood sugar levels. Higher fasting insulin levels are seen when the body becomes insulin resistant, which is a risk factor for developing type 2 diabetes. As this was a cross-sectional study, it cannot prove that not eating breakfast causes children to be at higher risk of type 2 diabetes, but it can show that there is an association.

 

What did the research involve?

The researchers used information collected from 4,116 children who had participated in the Child Heart and Health Study in England (CHASE) between 2004 and 2007. This study invited children aged nine and 10 from 200 randomly selected schools in London, Birmingham and Leicester to take part in a survey looking at risk factors for type 2 diabetes and cardiovascular disease.

This included questionnaires, measures of body fat and a fasting blood sample, taken eight to 10 hours after their last meal.

One of the questions related to how often they ate breakfast, with the following possible responses:

  • every day
  • most days
  • some days
  • not usually

Children from the last 85 schools were also interviewed by a research nutritionist to determine their food and drink intake in the previous 24 hours.

They analysed the data looking for an association between breakfast consumption and insulin resistance and higher blood sugar levels, adjusting the results to take into account age, sex, ethnicity, day of the week and month, and school.

 

What were the basic results?

Of the 4,116 children:

  • 3,056 (74%) ate breakfast daily
  • 450 (11%) had breakfast most days
  • 372 (9%) had breakfast some days
  • 238 (6%) did not usually have breakfast

Compared to children who ate breakfast every day, children who did not usually have breakfast had:

  • 26% higher fasting insulin levels
  • 26.7% higher insulin resistance
  • 1.2% higher HbA1c (glycated haemoglobin – the proportion of haemoglobin with glucose attached, which is a marker of average blood glucose concentration; higher levels increase the risk of diabetes)
  • 1% higher glucose (blood sugar) level

These results remained significant even after taking into account the child’s fat mass, socioeconomic status and physical activity levels.

In the subset of children asked about their food intake over the previous 24 hours, children eating a high fibre breakfast had lower insulin resistance than those eating other types of breakfasts such as toast or biscuits.

 

How did the researchers interpret the results?

The researchers concluded that “children who ate breakfast daily, particularly a high fibre cereal breakfast, had a more favourable type 2 diabetes risk profile. Trials are needed to quantify the protective effect of breakfast on emerging type 2 diabetes risk”.

 

Conclusion

This well designed study found that children who did not usually eat breakfast had 26% higher insulin resistance than children who always ate breakfast, though the level was still within normal limits.

Higher levels indicate a risk of type 2 diabetes, which is why the results of this study are important.

Strengths of the study include the large sample size, the multi-ethnic mix of participants and the use of accurate body fat measurements, rather than relying on body mass index (BMI) alone.

A limitation is that, due to the cross-sectional design, the study cannot prove that skipping breakfast causes diabetes, although it does suggest an association with increased risk. The study also relied on children's self-reporting of their usual breakfast intake.

Eating a healthy breakfast rich in fibre has been linked to many health benefits and is thought to contribute to maintaining a healthy weight. As the researchers point out, further studies will be required to verify the link, such as through following children over time to see which ones develop diabetes.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Children who skip breakfast 'more likely to suffer diabetes': Youngsters who don't eat morning meal more likely to be insulin dependent. Mail Online, September 3 2014

Breakfast lowers risk of diabetes. The Times, September 3 2014

Children who skip breakfast might raise diabetes risk. New Scientist, September 2 2014

Links To Science

Donin AS, Nightingale CM, Owen CG, et  al. Regular Breakfast Consumption and Type 2 Diabetes Risk Markers in 9- to 10-Year-Old Children in the Child Heart and Health Study in England (CHASE): A Cross-Sectional Analysis. PLoS Medicine. Published online September 2 2014

Categories: Medical News

Lumpectomy 'as effective as double mastectomy'

Medical News - Wed, 09/03/2014 - 13:29

“Double mastectomy for breast cancer 'does not boost survival chances' – when compared to breast-conserving surgery," The Guardian reports.

The news is based on the results of a large US cohort study of women with early stage breast cancer in one breast.

It found that the 10-year mortality associated with bilateral mastectomy (removal of both breasts) was the same as with breast-conserving surgery (also known as lumpectomy, where the cancer and a border of healthy tissue are removed) plus radiotherapy.

Unilateral mastectomy (removal of the affected breast) was associated with a slightly increased risk of 10-year mortality, although the absolute difference was only 4%.

In the UK, bilateral mastectomy may be recommended for women at high risk of breast cancer due to family history, or because of a gene mutation (for example mutations in the BRCA1 and BRCA2 genes). A bilateral mastectomy can then be followed by breast reconstruction surgery, restoring the original look of the breasts.

Disadvantages of a bilateral mastectomy compared to a lumpectomy include a longer recovery time and a higher risk of complications.

This study suggests that bilateral mastectomy may not be associated with any significant survival benefit over breast conserving therapy plus radiotherapy for most women.

It is important to note that the outcome for individual patients may vary, and the type of surgery a woman with breast cancer receives will depend on a number of factors, including her personal wishes and feelings.

 

Where did the story come from?

The study was carried out by researchers from Stanford University School of Medicine and the Cancer Prevention Institute of California. This study was funded by the Jan Weimer Junior Faculty Chair in Breast Oncology, the Suzanne Pride Bryan Fund for Breast Cancer Research at Stanford Cancer Institute, and the National Cancer Institute Surveillance, Epidemiology, and End Results Program. The collection of cancer incidence data was supported by the California Department of Health Services, the National Cancer Institute Surveillance, Epidemiology, and End Results Program and the Centers for Disease Control and Prevention National Program of Cancer Registries.

The study was published in the peer-reviewed medical journal JAMA. This article is open access so it is free to read and download.

The results of this study were well covered by the UK media. However, the headlines could be misconstrued as stating that there are no benefits associated with double mastectomies.

In fact, the headlines refer to the fact that double mastectomies weren’t associated with a significantly different survival benefit compared with breast-conserving therapy plus radiotherapy – not that they offered no survival benefit compared with no treatment.

 

What kind of research was this?

This was a cohort study that aimed to better understand the use of and outcomes after different treatment options for women diagnosed with early stage unilateral breast cancer (cancer in one breast).

Treatment options for breast cancer include surgery, radiotherapy, chemotherapy, hormone therapy and biological treatments.

In this study, the researchers were interested in different surgical options: unilateral mastectomy (removal of the breast with the cancer), bilateral mastectomy (removal of both breasts) and breast-conserving therapy with radiotherapy.

As this is a cohort study it cannot show that the type of surgery was the cause of poorer outcomes. A randomised controlled trial would be required for this. However, the researchers state that as bilateral mastectomy is an elective procedure for unilateral breast cancer, women who want this option are unlikely to accept randomisation to a less extensive surgical procedure in a trial.

 

What did the research involve?

The researchers identified women who had been diagnosed with early stage breast cancer (stage 0-III cancer) in one breast between 1998 and 2011 from the California Cancer Registry. Stage 0 breast cancer is localised and non-invasive, while stage III cancer is invasive and has spread to the lymph nodes.

The researchers followed these women for an average of 89.1 months.

The researchers looked for factors associated with the women receiving different types of surgical treatment.

They then looked to see how many women had died, and how many women had died from breast cancer, to see if the risk was different for women who had received different surgical treatment options.

The researchers adjusted their analyses for the following confounders:

  • age
  • race/ethnicity
  • tumour size
  • grade
  • histology (how the cells look under the microscope)
  • whether the cancer had spread to the lymph nodes
  • oestrogen receptor/progesterone receptor status
  • whether women also received chemotherapy and/or radiotherapy
  • neighbourhood socioeconomic status
  • marital status
  • insurance status
  • the socioeconomic composition of patients at the reporting hospital
  • whether women received care at a US National Cancer Institute designated cancer centre
  • year of diagnosis

 

What were the basic results?

The researchers identified 189,734 women who had been diagnosed with stage 0-III cancer in one breast between 1998 and 2011 from the California Cancer Registry. Of these, 6.2% underwent bilateral mastectomy, 55.0% received breast-conserving surgery with radiotherapy and 38.8% had a unilateral mastectomy.

The percentage of women who received bilateral mastectomy increased from 2.0% in 1998 to 12.3% in 2011, an annual increase of 14.3%. The increase in bilateral mastectomy rate was greatest among women younger than 40 years: the rate increased from 3.6% in 1998 to 33% in 2011.

The researchers compared the 10-year mortality (the percentage of women who don’t survive for 10 years) of women who had received breast-conserving surgery with radiotherapy, unilateral mastectomy and bilateral mastectomy.

  • 10-year mortality with breast-conserving surgery with radiotherapy was 16.8%
  • 10-year mortality with unilateral mastectomy was 20.1%
  • 10-year mortality with bilateral mastectomy was 18.8%

The researchers found that there was no significant mortality difference with bilateral mastectomy compared with breast-conserving surgery with radiotherapy (hazard ratio [HR] 1.02, 95% confidence interval [CI] 0.94 to 1.11), although unilateral mastectomy was associated with increased mortality (HR 1.35, 95% CI 1.32 to 1.39). The results for risk of death from breast cancer were similar.

The researchers also found that there were significant differences in the women who received the different surgical options.

Compared to women who received breast-conserving therapy plus radiotherapy, women were more likely to receive bilateral mastectomy if they:

  • were younger than 50 years old
  • were unmarried
  • were non-Hispanic white women
  • were diagnosed between 2005 and 2011 (vs. 1998 to 2004)
  • had a larger tumour, lymph node involvement, lobular histology (where cancer develops inside milk producing glands), higher grade or oestrogen receptor-/progesterone receptor-negative status (where cancer does not respond to hormonal treatments)
  • did not receive adjuvant treatment (chemotherapy and/or radiotherapy)
  • had private health insurance
  • came from neighbourhoods with higher socioeconomic status
  • received care at a National Cancer Institute designated cancer centre, or a hospital predominantly serving patients with lower socioeconomic status

Compared to women who received breast-conserving therapy plus radiotherapy, women were more likely to receive unilateral mastectomy if they:

  • were any age apart from 50 to 64 years old
  • were from a racial/ethnic minority
  • were married
  • were diagnosed between 1998 and 2004 (vs. 2005 to 2011)
  • had a larger tumour, lymph node involvement, lobular histology, higher grade, or oestrogen receptor-/progesterone receptor-negative status
  • did not receive adjuvant therapy (chemotherapy and/or radiotherapy)
  • had public/Medicaid insurance
  • came from neighbourhoods with lower socioeconomic status
  • received care at a hospital predominantly serving patients with lower socioeconomic status, and at hospitals that were not a National Cancer Institute designated cancer centre

 

How did the researchers interpret the results?

The researchers concluded that “use of bilateral mastectomy increased significantly throughout California from 1998 through 2011 and was not associated with lower mortality than that achieved with breast-conserving surgery plus radiotherapy. Unilateral mastectomy was associated with higher mortality than were the other two surgical options”.

 

Conclusion

This large US cohort study of women with early stage breast cancer in one breast has found no 10-year mortality benefit associated with bilateral mastectomy (removal of both breasts) compared with breast-conserving surgery (also known as lumpectomy, where the cancer and a border of healthy tissue are removed) plus radiotherapy.

Unilateral mastectomy was associated with a slightly increased risk of 10-year mortality, although the absolute difference was only 4%.

However, as there were significant differences between the patients receiving the different surgical options, it is likely that the increased risk associated with unilateral mastectomy is due to incomplete adjustment for some of the measured factors, unmeasured factors (for example, the presence of other diseases such as diabetes), or differences in access to care.

This study suggests that bilateral mastectomy may not be associated with any significant survival benefit compared to breast-conserving surgery with radiotherapy for the population of women with unilateral breast cancer.

However, as this was a cohort study it cannot prove that there was no significant survival difference; this would require a randomised controlled trial.

It is important to note that the outcome for individual patients may vary, and the type of surgery a woman with breast cancer receives will depend on a number of factors, including her personal wishes and feelings.

Ultimately, if you have been told you may require breast surgery, the choice of surgery will be down to you. Questions you may wish to ask your surgeon include:

  • What are the risks of the cancer recurring?
  • What are the risks of complications with each type of surgery?
  • What would be the likely impact on my quality of life for each type of surgery?
  • How will surgery affect the appearance of my breasts?
  • Are there any viable non-surgical options?

Read more about preparing for surgery.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Double mastectomy for breast cancer 'does not boost survival chances'. The Guardian, September 2 2014

Double mastectomy 'doesn't boost chance of surviving cancer': Women who have less drastic surgery live just as long. Daily Mail, September 3 2014

Double mastectomies may not reduce cancer survival rates, study shows. The Independent, September 2 2014

Breast cancer survival rate 'no greater' after double mastectomy despite rise in breast removals. Daily Mirror, September 2 2014

Links To Science

Kurian AW, Lichtensztajn DY, Keegan THM, et al. Use of and Mortality After Bilateral Mastectomy Compared With Other Surgical Treatments for Breast Cancer in California, 1998-2011. JAMA. Published online September 3 2014

Categories: Medical News

Could watching action films make you fat?

Medical News - Tue, 09/02/2014 - 14:00

“Couch potatoes captivated by fast-paced action films eat far more than those watching more sedate programmes,” The Independent reports.

A small US study found that people snacked more when watching action-packed movies.

The study took 94 US student volunteers and randomly assigned them in groups to watch 20 minutes of either the action film “The Island” with sound, the same film without sound or “Charlie Rose”, a long-running American talk show.

They were provided with unlimited snacks of M&Ms, cookies, carrots and grapes.

People watching the action film with sound ate 65% more calories than those watching the talk show.

Researchers discussed the hypothesis that the frequent visual and audio variations in “The Island” (a style of filming that director Michael Bay, best known for the "Transformers" films, has become notorious for) may be distracting. This means participants may have been unaware of how much they were snacking.

However, this does not prove that action films make you fat. The study appeared to allow students to gather themselves into groups before being assigned to what they would watch. This could have meant the groups were not adjusted for factors such as food preferences, physical activity or when the students had last eaten, which could all have influenced results.

The study does remind us, however, that we need to pay attention to what we eat, including food we consume while distracted, as it all counts towards our daily calorie intake.

 

Where did the story come from?

The study was carried out by researchers from Cornell University in New York and Vanderbilt University in Nashville. It was funded by Cornell University.

The study was published in the peer-reviewed medical journal JAMA Internal Medicine.

The UK media reported the story accurately, but did not highlight any of its weaknesses. However, The Independent did helpfully publish advice from England’s Chief Medical Officer that people should do a minimum of 150 minutes (2.5 hours) of moderate activity a week.

 

What kind of research was this?

This was a randomised controlled trial that aimed to see if people ate more snacks depending on the type of TV content they were watching.

While randomising participants is the best way to get groups that are balanced in their characteristics, this study only gave limited details of how this was done. This makes it difficult to know exactly how well the randomisation worked, and if the groups were truly balanced.

 

What did the research involve?

The researchers recruited 94 undergraduate students, gathered in groups of up to 20 people, then randomly assigned them to watch TV for 20 minutes, which was either:

  • an excerpt from action movie “The Island”
  • the same excerpt from “The Island”, but without any sound
  • an interview programme (talk show) called “Charlie Rose” – a celebrity focused talk show

During the 20 minutes, four snacks were made available: M&Ms, cookies, carrots and grapes. Participants were allowed to eat as much as they wanted. The amount of snacking per person was calculated by weighing the snacks before and after the 20-minute programme.

The researchers then analysed the results by type of TV show and sex of the participant.

 

What were the basic results?

Participants watching the action film with sound ate 98% more food by weight than those watching the talk show (206.5g versus 104.3g). This equated to 65% more calories (kcal) consumed in the action film with sound group (354.1kcal versus 214.6kcal).

Those watching the action film without sound also ate significantly more snacks than people watching the talk show – 36% more grams of food (142.1g versus 104.3g) and 46% more calories (314.5kcal versus 214.6kcal).

Males ate more than females in all three groups.
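The percentages quoted above follow directly from the average gram and calorie figures; as an illustrative check (our arithmetic, not the study's code):

```python
def percent_increase(new, base):
    """Relative increase of `new` over `base`, as a percentage."""
    return (new - base) / base * 100

talk_show_g, talk_show_kcal = 104.3, 214.6  # talk show group averages

# Action film with sound vs talk show
print(round(percent_increase(206.5, talk_show_g)))     # 98 (% more food by weight)
print(round(percent_increase(354.1, talk_show_kcal)))  # 65 (% more calories)

# Action film without sound vs talk show
print(round(percent_increase(142.1, talk_show_g)))     # 36 (% more food by weight)
# Calories come out at roughly 46-47% more from these rounded averages,
# consistent with the 46% figure reported
print(round(percent_increase(314.5, talk_show_kcal), 1))
```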

 

How did the researchers interpret the results?

The researchers concluded that “more distracting TV content appears to increase food consumption: action and sound variation are bad for one’s diet”. They suggest that people should either avoid snacking when watching distracting TV or use “proportioned quantities to avoid overeating”.

 

Conclusion

This study appears to indicate that the type of TV programme a person watches can influence how many calories are consumed as snacks. However, little information was provided about the methods and findings of this study, which makes it difficult to be certain how well it was performed and, therefore, how robust the results are.

The potential issues with the study that could affect interpretation of the results seen include:

  • The participants were not randomly assigned to the different groups individually – instead they “gathered” into groups, and then these groups were randomised. This might mean that friends with similar likes and preferences gathered together and ended up in the same group. These self-selected groupings may have differed in their characteristics (e.g. gender, body mass index (BMI), physical activity or socioeconomic status), and these differences could affect results.
  • It is not clear whether the same number of people were exposed to each scenario, as the number of people in the groups was not reported.
  • No information was provided on which snacks the participants chose to eat, only the overall quantity in grams and calories. While it is tempting to assume that the people eating more calories were eating the unhealthier food, we don’t know whether this was the case. Indeed, the difference between the lowest and highest average snack intakes was about 100g and 140kcal – this suggests that the extra food was not entirely unhealthy, as 100g of M&Ms contains more than 544kcal.
  • It is unclear what time of day the programmes were watched or whether they were all watched at the same time of day. Time of viewing could have a large effect on snacking, depending on the timing in relation to meals.
  • The students eating the most snacks may have had a higher physical requirement for food due to their level of sport or usual activities. The study also didn’t look at whether the people who ate more in snacks compensated for this in their later meals.
  • The study was conducted on students, and their behaviour may not be representative of the population at large.

In conclusion, this study in isolation doesn’t prove that watching certain TV programmes or films makes you fat. However, it does act as a reminder that we should pay attention to what we eat, including food we consume while distracted, as it all counts towards our daily calorie intake.

It is still recommended that you aim for at least 150 minutes (2.5 hours) of moderate physical activity each week, as well as eating a healthy, balanced diet.

If you are trying to lose weight, it might be a good idea to remove snacks from situations where you may get distracted – whether that is at home watching TV or at the cinema.

Only eating in a set location, such as your kitchen or dining room, can be a good way of staying mindful of how much you are actually eating; even a few extra snacks every night can quickly add up.

There are, however, a range of snacks of 100 calories or fewer you can try, which shouldn’t put you over your daily calorie intake.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Action films make you fat, study finds. The Independent, September 1 2014

Action films most likely to make you fat, says study. BBC News, September 2 2014

Links To Science

Tal A, Zuckerman S, Wansink B. Watch What You Eat: Action-Related Television Content Increases Food Intake. JAMA Internal Medicine. Published online September 1 2014

Categories: Medical News

Brain can be 'retrained' to prefer healthy foods

Medical News - Tue, 09/02/2014 - 03:00

“The brain can be trained to prefer healthy food over unhealthy high-calorie foods, using a diet which does not leave people hungry,” reports BBC News.

It reports on a small pilot study involving 13 overweight and obese people who, aside from their weight, were described as being in good health.

Researchers looked at whether a dietary weight loss programme, known as the iDiet, could change how the brain’s reward system responds to high- and low-calorie foods. The iDiet included carbohydrates that released glucose slowly into the bloodstream (a low glycaemic index), and higher fibre and protein. It also aimed to reduce calorie intake by 500 to 1,000kcal per day.

Adults on the iDiet lost more weight than those not on the diet. Interestingly, MRI scans suggested that their brains had increased the “reward” in response to anticipation of eating low-calorie foods and reduced the “reward” response to high-calorie foods compared to people not on the plan.

People can change their eating habits, which can lead to sustainable weight loss. This study supports this notion, and suggests that part of this may be related to changes in our brain’s “reward” response. The researchers hope to use this knowledge to improve weight loss interventions, but as yet it is not clear whether this will become a reality.

 

Where did the story come from?

The study was carried out by researchers from Harvard Medical School and other research centres in the US. It was funded by the US Department of Agriculture (USDA) and the Jean Mayer USDA Human Nutrition Research Center on Aging. One of the authors reported that she was the co-founder of a commercial weight loss programme (the iDiet) based on the approach described in the research paper.

The study was published in the peer-reviewed journal Nutrition & Diabetes, and has been made available on an open-access basis so it is free to read online.

The UK media has covered this research in a reasonable way. Both the Mail Online and BBC include comments from the lead researcher, noting that “there is much more research to be done here, involving many more participants, long-term follow-up and investigating more areas of the brain”.

 

What kind of research was this?

This was a randomised controlled trial, testing whether a new weight loss programme could change how the brain’s reward system responds to healthy and unhealthy food.

We need food to survive, but it takes effort to find and prepare food, so the brain “rewards” us for doing these tasks in anticipation of eating, by increasing levels of chemicals such as dopamine inside our brains.

This reward reinforces this behaviour. High-calorie foods provide more reward than lower-calorie foods, and this can cause people to choose these foods in preference to healthier options.

Reinforcement of this behaviour by the brain’s reward system may contribute to over-eating of these foods and, ultimately, obesity. The researchers say it is not known whether the brain can be trained to reverse this through a behavioural weight loss intervention, and therefore help to treat obesity. Two previous randomised controlled trials had found no impact of a weight loss programme on the brain’s reward system.

A randomised controlled trial is the best way to test the impact of an intervention on a given outcome. This was a pilot study, which means that it was a small-scale test to get some initial idea of whether the intervention works. If initial signs are positive, this would be followed up by a larger study to confirm these initial findings.

 

What did the research involve?

The researchers included 15 overweight or obese adults who were otherwise healthy and who were taking part in a larger randomised controlled trial of a weight loss programme called the “iDiet” in their workplaces. They had brain scans before and six months into the programme to see if the reward system in their brains had changed its response to the anticipation of high-calorie and low-calorie food.

Participants were randomly allocated to either the iDiet or no weight loss intervention for six months. The iDiet aimed to help people to lose 0.5 to 1kg per week in a sustainable way. Participants took part in group sessions that aimed to get them to reduce calorie intake by 500-1,000kcal per day (roughly the calorie content of a large takeaway cheeseburger).

They received weekly hour-long sessions for 15 weeks, followed by fortnightly sessions for eight weeks.

The iDiet included elements aimed at reducing hunger and reducing existing associations between unhealthy food and reward, while reinforcing associations between healthy food and reward. The researchers provided portion-controlled menus and recipes that combined low glycaemic index carbohydrates (providing about 50% of the diet’s energy) with higher fibre (40g/day or more) and protein (about 25% of energy from protein and fat). There were also specific low-calorie “free foods” that could be eaten as desired. This combination aimed to make participants feel fuller and reduce hunger.

The researchers had specific criteria for people to be eligible to take part in the brain scanning part of the study (for example, they could not have had any psychiatric problems in the last two years). It was not clear from the reporting exactly how many people in total were in the randomised controlled trial and how many in total were eligible for the brain scan part of the study.

Of the 15 people who enrolled in the brain scan study, two dropped out – one lost their job and one felt claustrophobic in the brain scanner. Eight of the remaining participants were in the iDiet group, and five were in the control group.

The study used a type of brain scan called a functional MRI (fMRI), which detects activity in different parts of the brain. The researchers were particularly interested in the part of the brain called the striatum, as this has been reported to be involved in giving “rewards”. The participants were shown 40 images of commonly eaten high-calorie and low-calorie foods while they were in the scanner, to see how their brains responded. The participants also rated each food from one (not desirable at all) to four (extremely desirable).

They were also shown non-food images so that the researchers could take into account how active the brain regions normally were when not exposed to food. The brain scans were taken four hours after a meal, so about when the participants would be ready for another meal.

 

What were the basic results?

Participants on the iDiet lost 6.3kg on average over six months, while the control group gained 2.1kg. It was not clear whether these results were for the entire randomised controlled trial, or just those participants taking part in the brain scan part of the study.

Compared to the control group, the iDiet participants showed greater increase in activation of one part of the striatum (a reward-related brain region) when shown low-calorie foods, and more reduction in activation of another part of the striatum when shown high-calorie food after six months. Other parts of the striatum that had previously been implicated in the food reward system did not show differences between the groups.

The iDiet participants reported a greater increase in desirability of the low-calorie foods, and a greater reduction in the desirability of the high-calorie foods than the control group. However, this difference was not large enough to reach statistical significance.

The changes over time in brain response did not appear to show a relationship to changes in eating behaviour in the eight iDiet participants.

 

How did the researchers interpret the results?

The researchers concluded that this was the first randomised controlled trial to show changes in the brain reward system response to high- and low-calorie foods in response to a weight loss programme. They suggest that interventions that take advantage of this should be explored for their ability to enhance how effective behavioural weight loss interventions are, and how sustainable the weight loss is.

 

Conclusion

This small study has shown that a successful dietary weight loss programme is associated with changes in the brain’s response to images of high- and low-calorie food. Participants in the programme showed greater brain activity in one reward-related part of the brain in response to low-calorie foods, and less activity in another reward-related part of the brain in response to high-calorie foods. This effect was not seen in people who had not taken part in the programme.

There are a few things to bear in mind when interpreting this study:

  • The researchers are not able to say whether the changes in brain response came before and contributed to the weight changes, or whether they came after and potentially resulted from the changes in weight.
  • The researchers were not able to show a relationship between eating behaviours and the level of activation in the reward centres – so they can’t say for certain that the brain changes seen were linked to changes in what people actually ate.
  • The brain activity seen was in response to pictures of food rather than actual food, and this may differ.
  • The groups did have different levels of dietary restraint at the start of the study, and this could influence results.
  • The study was small (13 people) and a relatively short-term part of a pilot randomised controlled trial, so findings would need to be assessed in a larger study to see if they could be confirmed in a wider sample of people over a longer period.
  • It is not possible to say whether the changes in brain activity seen are specifically related to the approach taken in the iDiet programme, or whether other dietary programmes would have a similar effect.

In conclusion, this study confirms that people can change their eating habits and weight. It also suggests that part of this may be related to changes in our brain’s “reward” response to high- and low-calorie foods. The researchers hope to use this knowledge to improve weight loss interventions, but as yet it is not clear whether this will become a reality.

For a free alternative to commercial diet plans, why not try the NHS weight loss plan.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Brain 'can be trained to prefer healthy food'. BBC News, September 2 2014

You CAN train your brain to like healthy foods: Researchers reveal diet that can kick junk food addiction. Mail Online, September 2 2014

Links To Science

Deckersbach T, Das SK, Urban LE, et al. Pilot randomized trial demonstrating reversal of obesity-related abnormalities in reward system responsivity to food cues with a behavioral intervention. Nutrition and Diabetes. Published online September 1 2014

Categories: Medical News

Heart failure drug could 'cut deaths by a fifth'

Medical News - Mon, 09/01/2014 - 14:00

“A new drug believed to cause a 20 per cent reduction in heart failure deaths could present a 'major advance' in treatment,” The Independent reports.

The drug, LCZ696, helps improve blood flow in heart failure patients. Heart failure is a syndrome caused by the heart not working properly, which can make people vulnerable to serious complications.

A new study compared LCZ696 with an existing heart failure drug called enalapril, which is also used to treat high blood pressure.

Researchers found that LCZ696 is better than enalapril for preventing death from cardiovascular causes and for preventing hospitalisation for heart failure. The results were so striking that they decided to halt the trial.

During the 27 months of the study, compared to enalapril, LCZ696:

  • reduced the risk of death from cardiovascular disease by 20%
  • reduced the risk of hospitalisation for heart failure by 21%
  • reduced the risk of death from any cause by 16%

The makers of LCZ696 must now apply for marketing authorisation before the drug can be sold. A press release from the developer of the drug, Novartis, states that it plans to file the application for marketing authorisation in the European Union in early 2015.

 

Where did the story come from?

The study was carried out by researchers from the University of Glasgow, the University of Texas Southwestern Medical Center and Novartis Pharmaceuticals, in collaboration with an international team of researchers from other universities and research institutes around the world. It was funded by Novartis, the pharmaceutical company that developed LCZ696.

The study was published in the peer-reviewed New England Journal of Medicine and has been made available on an open-access basis, so it is free to read online.

The results of the research were well covered by the UK media.

 

What kind of research was this?

This was a randomised controlled trial. It aimed to determine whether the new drug LCZ696 reduced the risk of death from cardiovascular causes or hospitalisation for heart failure in people who had heart failure with reduced ejection fraction, compared to enalapril.

Heart failure is a syndrome caused by the heart not working properly. In heart failure with reduced ejection fraction, less blood than normal is pumped out of the heart with each beat.

Enalapril is a drug already used to treat hypertension (high blood pressure) and heart failure. Enalapril is what is known as an angiotensin-converting enzyme (ACE) inhibitor, which improves heart failure by a number of different mechanisms. It inhibits an enzyme that is part of what is known as the renin-angiotensin-aldosterone system. One of the effects of this is to cause blood vessels to relax and widen.

LCZ696 also inhibits the renin-angiotensin-aldosterone system but also inhibits another enzyme called neprilysin. It was hoped that it would be more effective in treating heart failure.

A randomised controlled trial was deemed the best way of determining whether LCZ696 reduced the risk of death from cardiovascular causes or hospitalisation for heart failure compared to enalapril.

 

What did the research involve?

The researchers recruited 8,442 people with heart failure and an ejection fraction of 40% or less into the trial. Ejection fraction is a measure of how well your heart beats. A normal heart pumps a little more than half the heart’s blood volume with each beat. Normal ejection fractions range between 55% and 70%. To be included in the trial, patients had to be able to tolerate both enalapril and LCZ696; this was determined in a run-in phase before people were randomised. 

People were randomly assigned to receive LCZ696 (200mg twice daily) or enalapril (at a dose of 10mg twice daily), in addition to recommended therapy.

The researchers monitored how many people died from cardiovascular causes or were hospitalised for heart failure.

The researchers compared outcomes for people receiving LCZ696 with people receiving enalapril. 

Forty-three participants were later excluded because of invalid randomisation or because their hospital site had closed.

 

What were the basic results?

The trial was stopped early because outcomes with LCZ696 were much better than outcomes with enalapril.

After people had been followed for an average of 27 months:

  • 4.7% fewer people who received LCZ696 died from cardiovascular causes or were hospitalised for heart failure: 914 patients (21.8%) in the LCZ696 group compared with 1,117 patients (26.5%) in the enalapril group. This was equivalent to a 20% reduction in risk with LCZ696 compared with enalapril (hazard ratio [HR] 0.80; 95% confidence interval [CI] 0.73 to 0.87). If 21 people were treated with LCZ696, one fewer death from cardiovascular causes or hospitalisation for heart failure would be expected than if people received enalapril.
  • 3.2% fewer people who received LCZ696 died from cardiovascular causes: 558 patients (13.3%) in the LCZ696 group and 693 patients (16.5%) in the enalapril group. This was a 20% reduction in risk with LCZ696 compared with enalapril (HR 0.80; 95% CI 0.71 to 0.89). If 32 people were treated with LCZ696, one fewer death from cardiovascular causes would be expected than if people received enalapril.
  • 2.8% fewer people who received LCZ696 were hospitalised for worsening heart failure: 537 patients (12.8%) in the LCZ696 group compared with 658 (15.6%) in the enalapril group. This was a 21% reduction in risk with LCZ696 compared with enalapril (HR 0.79; 95% CI 0.71 to 0.89).
  • 2.8% fewer people who received LCZ696 died: 711 patients (17.0%) in the LCZ696 group compared with 835 patients (19.8%) in the enalapril group. This was equivalent to a 16% reduction in risk with LCZ696 compared with enalapril (HR 0.84; 95% CI 0.76 to 0.93).
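The "if 21 people were treated…" and "if 32 people…" statements are numbers needed to treat (NNT), the reciprocal of the absolute risk reduction. A small illustrative calculation (ours, not the trial's):

```python
def nnt(control_risk_pct, treated_risk_pct):
    """Number needed to treat = 1 / absolute risk reduction (as a fraction)."""
    arr = (control_risk_pct - treated_risk_pct) / 100
    return 1 / arr

# Death from cardiovascular causes or hospitalisation for heart failure
print(round(nnt(26.5, 21.8)))  # 21: treat 21 people with LCZ696 to avoid one event

# Death from cardiovascular causes: ~31 from these rounded percentages;
# the trial reports 32, presumably from the unrounded patient counts
print(round(nnt(16.5, 13.3)))
```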

LCZ696 also significantly reduced the symptoms and physical limitations of heart failure.

With regard to adverse effects, more people who received LCZ696 had low blood pressure (hypotension) and non-serious angioedema (swelling of the deeper layers of the skin due to a build-up of fluid), but fewer people had kidney (renal) impairment, hyperkalaemia (high levels of potassium in the blood) and cough than the people who received enalapril. Overall, fewer people in the LCZ696 group stopped their medication because of an adverse event than in the enalapril group.

 

How did the researchers interpret the results?

The researchers concluded that “LCZ696 was superior to enalapril in reducing the risks of death, and of hospitalisation for heart failure.”

 

Conclusion

This was a well conducted study that achieved impressive results.

In this 27-month randomised controlled trial of 8,442 people with heart failure and an ejection fraction of 40% or less, compared to enalapril, the new drug LCZ696:

  • reduced the risk of death from cardiovascular disease or the risk of hospitalisation for heart failure by 20%
  • reduced the risk of death from cardiovascular disease by 20%
  • reduced the risk of hospitalisation for heart failure by 21%
  • reduced the risk of death from any cause by 16%

Marketing authorisation is now required before it can be sold. The developer of the drug, Novartis, states that they plan to file the application for marketing authorisation in the European Union in early 2015.

It is currently unclear how much LCZ696 will cost. Until this information becomes available, it is difficult to predict whether LCZ696 will be offered by the NHS.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

New heart drug LCZ696 could reduce heart failure deaths by 20%, scientists say. The Independent, August 30 2014

'Remarkable' new heart drug will cut deaths by a fifth - and could be available as early as next year. Mail Online, September 1 2014

New heart drug will cut deaths by a fifth. The Daily Telegraph, August 30 2014

Links To Science

McMurray JJV, Packer M, Desai AS, et al. Angiotensin–Neprilysin Inhibition versus Enalapril in Heart Failure. The New England Journal of Medicine. Published online August 30 2014

Categories: Medical News

Students 'showing signs of phone addiction'

Medical News - Mon, 09/01/2014 - 03:00

“Students spend up to 10 hours a day on their mobile phones,” the Mail Online reports. The results of a US study suggest that some young people have developed an addiction to their phone.

Mobile or “cell” phone addiction is the habitual drive or compulsion to continue to use a mobile phone, despite its negative impact on one’s wellbeing.

The authors of a new study suggest that this can occur when a mobile phone user reaches a “tipping point”, where they can no longer control their phone use. Potential negative consequences include dangerous activities, such as texting while driving.

This latest study surveyed mobile phone use and addiction in a sample of 164 US students.

The students reported spending nearly nine hours a day on their mobile phones. There was a significant difference in the amount of time male and female students spent on their phones, with women spending around 150 minutes more a day using the device.

Common activities included texting, sending emails, surfing the internet, checking Facebook and using other social media apps, such as Instagram and Pinterest.

It was also found that women spent a lot more time texting than men, and were more likely to report feeling agitated when their phone was out of sight or their battery was nearly dead. Men spent more time than women playing games.

Using Instagram and Pinterest, listening to music on the phone, and the number of calls made and texts sent were all positively associated with an increased risk of phone addiction.

However, the study did not prove that any of these activities can cause mobile phone addiction.

 

Where did the story come from?

The study was carried out by researchers from Baylor University and Xavier University in the US, and the Universitat Internacional de Catalunya in Spain. No financial support was received.

The study was published in the peer-reviewed Journal of Behavioural Addictions and has been published on an open-access basis, meaning it is free to read online.

The results of the study were well-reported by the Mail.

 

What kind of research was this?

This was a cross-sectional study that aimed to investigate which mobile phone activities are most closely associated with phone addiction in young adults, and whether there are differences between males and females.

As it is a cross-sectional study, it cannot show causation – that is, that the activities undertaken cause a person to become addicted to their mobile phone.

 

What did the research involve?

A total of 164 college undergraduates in Texas, aged between 19 and 22 years, completed an online survey.

To measure mobile phone addiction, people were asked to score how much they agreed with the following statements (1=strongly disagree; 7=strongly agree):

  • I get agitated when my phone is not in sight.
  • I get nervous when my phone’s battery is almost exhausted.
  • I spend more time than I should on my phone.
  • I find that I am spending more and more time on my phone.

People were also asked how much time they spent on 24 different mobile phone activities a day, including:

  • calling, texting and emailing
  • using social media applications
  • playing games
  • taking photos
  • listening to music

Finally, they were asked how many calls they made, and how many texts and emails they sent a day.

 

What were the basic results?

On average, the undergraduates spent 527.6 minutes (almost nine hours) a day on their phones. Female students reported spending significantly more time on their phone than male students.

The students spent the most time texting (94.6 minutes per day), sending emails (48.5 minutes), checking Facebook (38.6 minutes), surfing the internet (34.4 minutes) and listening to their iPods (26.9 minutes). There were significant differences in the amount of time male and female students reported spending on different mobile phone activities: women spent more time than men texting, emailing, taking pictures, using the calendar and clock functions, and using Facebook, Pinterest and Instagram, while men spent more time than women playing games.

The study identified activities that were significantly associated with mobile phone addiction. Instagram, Pinterest and using an iPod application, as well as the number of calls made and the number of texts sent, were positively associated with (increased the risk of) mobile phone addiction when males and females were analysed together. Time spent on “other” applications was negatively associated with (reduced the risk of) phone addiction.

However, there were differences between males and females.

For males, time spent sending emails, reading books and the Bible, as well as visiting Facebook, Twitter and Instagram, in addition to the number of calls made and the number of texts sent, were positively associated with mobile phone addiction. In contrast, time spent placing calls, using the phone as a clock, visiting Amazon and “other” applications were negatively associated with phone addiction.

For females, time spent on Pinterest, Instagram, using an iPod application, Amazon and the number of calls made were all positively associated with mobile phone addiction. In contrast, time spent using the Bible application, Twitter, Pandora/Spotify and an iTunes application were negatively associated with phone addiction.

 

How did the researchers interpret the results?

The researchers concluded that mobile phone addiction amongst participants was largely driven by a desire to connect socially. However, the activities found to be associated with phone addiction differed between males and females.

 

Conclusion

This study found that a sample of college students in the US reported spending nearly nine hours a day on their mobile phones, although there was a significant difference between male and female students. There were also differences in the amount of time male and female students spent performing various activities.

The study has identified some activities associated with mobile phone addiction, with differences seen between male and female students.

However, due to the study design, it cannot prove that these activities caused the mobile phone addiction directly.

This study has several limitations:

  • it was performed on a sample of college students in the US, and the results of this study may not be generalisable to the population at large
  • the mobile phone addiction scale used in this study requires further evaluation
  • participants self-reported the time spent on certain activities

Mobile phones may help us connect with people all over the world, but possibly at the cost of reducing interaction with “real” people. Failure to connect with others can have an adverse effect on a person’s quality of life. A 2013 study found an association between Facebook use and dissatisfaction – the more time a person spent on Facebook, the less likely they were to report feeling satisfied with their life.

Read more about how connecting with others can improve your mental health.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Students 'addicted to mobile phones': Some spending up to ten hours a day texting, emailing and on social media. Mail Online, September 1 2014

Links To Science

Roberts JA, Yaya LHP, Manolis C. The invisible addiction: Cell-phone activities and addiction among male and female college students. Journal of Behavioural Addictions. Published online August 26 2014

Categories: Medical News