Medical News

Social care reforms announced

Medical News - Mon, 02/11/2013

Most of the UK media is covering the announcement made in Parliament by Jeremy Hunt, Secretary of State for Health, about proposed changes to social care.

The two confirmed points to have garnered the most media attention in the run-up to the announcement are:

  • a ‘cost cap’ of £75,000 on care costs – beyond this point the state would step in to meet them
  • raising the means-testing threshold for eligibility for state-funded social care from £23,250 to £123,000

The government expects these changes will lead to fewer people having to sell their homes in order to pay for their long-term care needs.

Speaking in Parliament, Mr Hunt said that the current system was ‘desperately unfair’ as many older people face ‘limitless, often ruinous’ costs. The minister stated that he wants the country to be ‘one of the best places in the world to grow old’.

 

What is social care?

The term social care covers a range of services provided to help vulnerable people improve their quality of life and assist them with their day-to-day living.

People often requiring social care include:

  • people with chronic (long-term) diseases
  • people with disability
  • the elderly – particularly those with age-related conditions, such as dementia

Social care services can include:

  • healthcare
  • equipment
  • help in your home or in a care home
  • community support and activities
  • day centres

 

How does the current adult social care system work?

Currently, state funding for social care is based on two criteria:

  • means – people with assets of more than £23,250 do not qualify for funding
  • needs – most local authorities will only fund care for people assessed to have substantial or critical needs

The majority of people currently requiring social care pay for it privately. These are known as ‘self-funders’.

 

What prompted these reforms to adult social care?

Put simply, on average, the UK population is getting older.

When the welfare state was created in the middle of the 20th century, it was not expected that people would someday routinely live into their 70s, 80s, and even 90s.

The increase in life expectancy is a good thing, but it brings a new set of challenges.

While people are living longer, they are also spending more of their lives in ill health. Older people are more likely to have potentially complex care needs that can be expensive to manage.

Many people are currently ineligible for state-funded social care under the existing laws. To meet the costs of these care needs, these ‘self-funders’ have, in many cases, had to sell or remortgage their home, or sell other assets to pay for the costs of their care.

Without reforms, experts agree that the cost of social care, both to the state (through taxes) and to ‘self-funders’, is likely to become increasingly problematic.

To try and find the best way to resolve some of the difficulties of fairly funding adult social care, the Department of Health set up a commission. This independent commission reported its findings to ministers in July 2011. The government considered these findings in its white paper on care and support published in July 2012, and in the drafting of the proposed new legislation.

 

What happens next?

The government has introduced a Social Care Bill which will need to be passed by the Houses of Parliament.

If the bill is successfully passed it is expected the amendments will come into force by 2017.

 

Edited by NHS Choices. Follow Behind the Headlines on Twitter.

Links To The Headlines

Social care: Jeremy Hunt hails 'fully-funded solution'. BBC News, February 11 2013

Social care reforms: Almost 2 million pensioners will be denied state help. The Daily Telegraph, February 11 2013

Social care reform: how your family may be affected. The Daily Telegraph, February 11 2013

Dilnot 'regrets' decision to set social care cap at £75,000. The Guardian, February 11 2013

Hunt statement on adult social care cap: Politics live blog. The Guardian, February 11 2013

Categories: Medical News

Heart failure drug could 'cut deaths by a fifth'

Medical News - 9 hours 55 min ago

“A new drug believed to cause a 20 per cent reduction in heart failure deaths could present a 'major advance' in treatment,” The Independent reports.

The drug, LCZ696, helps improve blood flow in heart failure patients. Heart failure is a syndrome caused by the heart not working properly, which can make people vulnerable to serious complications.

A new study compared LCZ696 with an existing heart failure drug called enalapril, which is also used to treat high blood pressure.

Researchers found that LCZ696 is better than enalapril for preventing death from cardiovascular causes and for preventing hospitalisation for heart failure. The results were so striking that they decided to halt the trial.

During the 27 months of the study, compared to enalapril, LCZ696:

  • reduced the risk of death from cardiovascular disease by 20%
  • reduced the risk of hospitalisation for heart failure by 21%
  • reduced the risk of death from any cause by 16%

The makers of LCZ696 must now apply for marketing authorisation before the drug can be sold. A press release from the developer of the drug, Novartis, states that it plans to file the application for marketing authorisation in the European Union in early 2015.

 

Where did the story come from?

The study was carried out by researchers from the University of Glasgow, the University of Texas Southwestern Medical Center and Novartis Pharmaceuticals, in collaboration with an international team of researchers from other universities and research institutes around the world. It was funded by Novartis, the pharmaceutical company that developed LCZ696.

The study was published in the peer-reviewed New England Journal of Medicine and has been made available on an open-access basis, so it is free to read online.

The results of the research were well covered by the UK media.

 

What kind of research was this?

This was a randomised controlled trial. It aimed to determine whether the new drug LCZ696 reduced the risk of death from cardiovascular causes or hospitalisation for heart failure in people who had heart failure with reduced ejection fraction, compared to enalapril.

Heart failure is a syndrome caused by the heart not working properly. In heart failure with reduced ejection fraction, less blood than normal is pumped out of the heart with each beat.

Enalapril is a drug already used to treat hypertension (high blood pressure) and heart failure. Enalapril is what is known as an angiotensin-converting enzyme (ACE) inhibitor, which improves heart failure by a number of different mechanisms. It inhibits an enzyme that is part of what is known as the renin-angiotensin-aldosterone system. One of the effects of this is to cause blood vessels to relax and widen.

LCZ696 also inhibits the renin-angiotensin-aldosterone system but also inhibits another enzyme called neprilysin. It was hoped that it would be more effective in treating heart failure.

A randomised controlled trial was deemed the best way of determining whether LCZ696 reduced the risk of death from cardiovascular causes or hospitalisation for heart failure compared to enalapril.

 

What did the research involve?

The researchers recruited 8,442 people with heart failure and an ejection fraction of 40% or less into the trial. Ejection fraction is a measure of how well the heart pumps: it is the proportion of the blood in the heart that is pumped out with each beat. A normal heart pumps out a little more than half of its blood volume with each beat; normal ejection fractions range between 55% and 70%. To be included in the trial, patients had to be able to tolerate both enalapril and LCZ696; this was determined in a run-in phase before people were randomised.
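
As a rough illustration of the arithmetic (the volumes below are hypothetical examples, not trial data), ejection fraction follows from the standard definition of stroke volume divided by end-diastolic volume:

```python
# Illustrative only: ejection fraction from the standard definition
# EF = stroke volume / end-diastolic volume.
def ejection_fraction(end_diastolic_ml, end_systolic_ml):
    stroke_volume_ml = end_diastolic_ml - end_systolic_ml  # blood ejected per beat
    return 100 * stroke_volume_ml / end_diastolic_ml       # as a percentage

# A healthy heart filling to 120ml and ejecting down to 50ml:
print(ejection_fraction(120, 50))  # ~58%, within the normal 55-70% range
# A failing heart filling to 140ml but only ejecting down to 90ml:
print(ejection_fraction(140, 90))  # ~36%, below the trial's 40% threshold
```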

People were randomly assigned to receive LCZ696 (200mg twice daily) or enalapril (10mg twice daily), in addition to recommended therapy.

The researchers monitored how many people died from cardiovascular causes or were hospitalised for heart failure.

The researchers compared outcomes for people receiving LCZ696 with people receiving enalapril. 

Of the people randomised, 43 were later excluded because of invalid randomisation or because their hospital site had closed.

 

What were the basic results?

The trial was stopped early because outcomes with LCZ696 were much better than outcomes with enalapril.

After people had been followed for an average of 27 months:

  • 4.7% fewer people who received LCZ696 died from cardiovascular causes or were hospitalised for heart failure: 914 patients (21.8%) in the LCZ696 group compared with 1,117 patients (26.5%) in the enalapril group. This was equivalent to a 20% reduction in risk with LCZ696 compared with enalapril (hazard ratio [HR] 0.80; 95% confidence interval [CI] 0.73 to 0.87). If 21 people were treated with LCZ696 rather than enalapril, one fewer death from cardiovascular causes or hospitalisation for heart failure would be expected.
  • 3.2% fewer people who received LCZ696 died from cardiovascular causes: 558 patients (13.3%) in the LCZ696 group and 693 patients (16.5%) in the enalapril group. This was a 20% reduction in risk with LCZ696 compared with enalapril (HR 0.80; 95% CI 0.71 to 0.89). If 32 people were treated with LCZ696 rather than enalapril, one fewer death from cardiovascular causes would be expected.
  • 2.8% fewer people who received LCZ696 were hospitalised for worsening heart failure: 537 patients (12.8%) in the LCZ696 group compared with 658 (15.6%) in the enalapril group. This was a 21% reduction in risk with LCZ696 compared with enalapril (HR 0.79; 95% CI 0.71 to 0.89).
  • 2.8% fewer people who received LCZ696 died from any cause: 711 patients (17.0%) in the LCZ696 group compared with 835 patients (19.8%) in the enalapril group. This was equivalent to a 16% reduction in risk with LCZ696 compared with enalapril (HR 0.84; 95% CI 0.76 to 0.93).
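
The absolute risk reduction and "number needed to treat" figures above can be reproduced with simple arithmetic. A minimal sketch, using the reported percentages for the primary outcome (so results are approximate):

```python
# Sketch: absolute risk reduction (ARR), crude relative risk reduction
# and number needed to treat (NNT) from reported event rates.
def arr_rrr_nnt(risk_control, risk_treatment):
    """Risks as fractions, e.g. 0.265 for 26.5%."""
    arr = risk_control - risk_treatment  # absolute risk reduction
    rrr = arr / risk_control             # crude relative risk reduction
    nnt = 1 / arr                        # number needed to treat
    return arr, rrr, nnt

# Primary outcome: CV death or heart failure hospitalisation
# (26.5% with enalapril vs 21.8% with LCZ696)
arr, rrr, nnt = arr_rrr_nnt(0.265, 0.218)
print(round(arr * 100, 1), round(nnt))  # 4.7 percentage points; NNT of 21
# Note: the trial's quoted 20% reduction is the hazard ratio (HR 0.80),
# a time-to-event measure, so it differs slightly from this crude ratio.
```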

LCZ696 also significantly reduced the symptoms and physical limitations of heart failure.

With regard to adverse effects, more people who received LCZ696 had low blood pressure (hypotension) and non-serious angioedema (swelling of the deeper layers of the skin due to a build-up of fluid). However, fewer people in the LCZ696 group had kidney (renal) impairment, hyperkalaemia (high levels of potassium in the blood) or a cough than in the enalapril group, and fewer stopped their medication because of an adverse event.

 

How did the researchers interpret the results?

The researchers concluded that “LCZ696 was superior to enalapril in reducing the risks of death, and of hospitalisation for heart failure.”

 

Conclusion

This was a well conducted study that achieved impressive results.

In this 27-month randomised controlled trial of 8,442 people with heart failure and an ejection fraction of 40% or less, compared to enalapril, the new drug LCZ696:

  • reduced the risk of death from cardiovascular disease or the risk of hospitalisation for heart failure by 20%
  • reduced the risk of death from cardiovascular disease by 20%
  • reduced the risk of hospitalisation for heart failure by 21%
  • reduced the risk of death from any cause by 16%

Marketing authorisation is now required before it can be sold. The developer of the drug, Novartis, states that it plans to file the application for marketing authorisation in the European Union in early 2015.

It is currently unclear how much LCZ696 will cost. Until this information becomes available, it is difficult to predict whether LCZ696 will be offered by the NHS.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

New heart drug LCZ696 could reduce heart failure deaths by 20%, scientists say. The Independent, August 30 2014

'Remarkable' new heart drug will cut deaths by a fifth - and could be available as early as next year. Mail Online, September 1 2014

New heart drug will cut deaths by a fifth. The Daily Telegraph, August 30 2014

Links To Science

McMurray JJV, Packer M, Desai AS, et al. Angiotensin–Neprilysin Inhibition versus Enalapril in Heart Failure. The New England Journal of Medicine. Published online August 30 2014

Categories: Medical News

Students 'showing signs of phone addiction'

Medical News - 20 hours 55 min ago

“Students spend up to 10 hours a day on their mobile phones,” the Mail Online reports. The results of a US study suggest that some young people have developed an addiction to their phone.

Mobile or “cell” phone addiction is the habitual drive or compulsion to continue to use a mobile phone, despite its negative impact on one’s wellbeing.

The authors of a new study suggest that this can occur when a mobile phone user reaches a “tipping point”, where they can no longer control their phone use. Potential negative consequences include dangerous activities, such as texting while driving.

This latest study surveyed mobile phone use and addiction in a sample of 164 US students.

The students reported spending nearly nine hours a day on their mobile phones. There was a significant difference in the amount of time male and female students spent on their phones, with women spending around 150 minutes more a day using the device.

Common activities included texting, sending emails, surfing the internet, checking Facebook and using other social media apps, such as Instagram and Pinterest.

It was also found that women spent a lot more time texting than men, and were more likely to report feeling agitated when their phone was out of sight or their battery was nearly dead. Men spent more time than women playing games.

Using Instagram and Pinterest, and using the phone to listen to music, as well as the number of calls made and the number of texts sent, were positively associated with (increased risk of) phone addiction.

However, the study did not prove that any of these activities can cause mobile phone addiction.

 

Where did the story come from?

The study was carried out by researchers from Baylor University and Xavier University in the US, and the Universitat Internacional de Catalunya in Spain. No financial support was received.

The study was published in the peer-reviewed Journal of Behavioral Addictions and is available on an open-access basis, meaning it is free to read online.

The results of the study were well-reported by the Mail.

 

What kind of research was this?

This was a cross-sectional study that aimed to investigate which mobile phone activities are most closely associated with phone addiction in young adults, and whether there are differences between males and females.

As it is a cross-sectional study, it cannot show causation – that is, that the activities undertaken cause a person to become addicted to their mobile phone.

 

What did the research involve?

A total of 164 college undergraduates in Texas, aged between 19 and 22, completed an online survey.

To measure mobile phone addiction, people were asked to score how much they agreed with the following statements (1=strongly disagree; 7=strongly agree):

  • I get agitated when my phone is not in sight.
  • I get nervous when my phone’s battery is almost exhausted.
  • I spend more time than I should on my phone.
  • I find that I am spending more and more time on my phone.
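
The paper's exact scoring rule isn't reported here, but scales like this are commonly scored by summing the item responses into a single total. A minimal sketch under that assumption:

```python
# Sketch: summing the four 1-7 Likert items above into a single score.
# The study's actual scoring may differ; this is one common approach.
def addiction_score(responses):
    if len(responses) != 4 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("expected four responses, each from 1 to 7")
    return sum(responses)  # possible range: 4 (low) to 28 (high)

print(addiction_score([5, 6, 4, 5]))  # 20
```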

People were also asked how much time they spent on 24 different mobile phone activities a day, including:

  • calling, texting and emailing
  • using social media applications
  • playing games
  • taking photos
  • listening to music

Finally, they were asked how many calls they made, and how many texts and emails they sent a day.

 

What were the basic results?

On average, the undergraduates spent 527.6 minutes (almost nine hours) a day on their phones. Female students reported spending significantly more time on their phone than male students.

The students spent the most time texting (94.6 minutes per day), sending emails (48.5 minutes), checking Facebook (38.6 minutes), surfing the internet (34.4 minutes) and listening to their iPods (26.9 minutes). There were significant differences in the amount of time male and female students reported spending on different mobile phone activities. Women spent more time than men texting, emailing, taking pictures, using a calendar and a clock, and using Facebook, Pinterest and Instagram, while men spent more time than women playing games.

The study identified activities that were significantly associated with mobile phone addiction. Instagram, Pinterest and using an iPod application, as well as the number of calls made and the number of texts sent, were positively associated with (increased the risk of) mobile phone addiction when males and females were analysed together. Time spent on “other” applications was negatively associated with (reduced the risk of) phone addiction.

However, there were differences between males and females.

For males, time spent sending emails, reading books and the Bible, as well as visiting Facebook, Twitter and Instagram, in addition to the number of calls made and the number of texts sent, were positively associated with mobile phone addiction. In contrast, time spent placing calls, using the phone as a clock, visiting Amazon and “other” applications were negatively associated with phone addiction.

For females, time spent on Pinterest, Instagram, using an iPod application, Amazon and the number of calls made were all positively associated with mobile phone addiction. In contrast, time spent using the Bible application, Twitter, Pandora/Spotify and an iTunes application were negatively associated with phone addiction.

 

How did the researchers interpret the results?

The researchers concluded that mobile phone addiction amongst participants was largely driven by a desire to connect socially. However, the activities found to be associated with phone addiction differed between males and females.

 

Conclusion

This study found that a sample of college students in the US reported spending nearly nine hours a day on their mobile phones, although there was a significant difference between male and female students. There were also differences in the amount of time male and female students spent performing various activities.

The study has identified some activities associated with mobile phone addiction, with differences seen between male and female students.

However, due to the study design, it cannot prove that these activities caused the mobile phone addiction directly.

This study has several limitations:

  • it was performed on a sample of college students in the US, and the results of this study may not be generalisable to the population at large
  • the mobile phone addiction scale used in this study requires further evaluation
  • participants self-reported the time spent on certain activities

Mobile phones may help us connect with people all over the world, but possibly at the cost of reducing interaction with “real” people. Failure to connect with others can have an adverse effect on a person’s quality of life. A 2013 study found an association between Facebook use and dissatisfaction – the more time a person spent on Facebook, the less likely they were to report feeling satisfied with their life.

Read more about how connecting with others can improve your mental health.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Students 'addicted to mobile phones': Some spending up to ten hours a day texting, emailing and on social media. Mail Online, September 1 2014

Links To Science

Roberts JA, Yaya LHP, Manolis C. The invisible addiction: Cell-phone activities and addiction among male and female college students. Journal of Behavioral Addictions. Published online August 26 2014

Categories: Medical News

Study finds plain cigarette pack fears 'unfounded'

Medical News - Fri, 08/29/2014 - 14:00

"Cigarette plain packaging fear campaign unfounded," reports The Guardian.

After Australia introduced plain packaging laws in 2012, opponents of the legislation argued it would lead to a number of unintended consequences, including:

  • the market would become flooded by cheap Asian brands
  • smokers would be more likely to buy illegal unbranded tobacco (including raw unbranded loose tobacco known locally in Australia as "chop-chop")
  • smokers would be less likely to buy their cigarettes from smaller mixed businesses such as convenience stores and petrol stations, meaning that small businesses would suffer

But a new study conducted in Victoria, Australia, suggests these fears are unfounded.

Researchers compared the responses smokers gave in a telephone survey one year before the introduction of standardised packaging, with responses given one year after its introduction.

The study found no evidence the introduction of standardised packaging had changed the proportion of people purchasing from small mixed-business retailers, purchasing cheap brands imported from Asia, or using illicit tobacco.

But this study did not investigate whether there had been an increase in the use of counterfeit branded tobacco products. The researchers noted that smokers may be unaware they are smoking counterfeit products.

In conclusion, the study suggests there is no evidence for many of the "fears" proposed by opponents of standardised packaging.

 

Where did the story come from?

The study was carried out by researchers from the Centre for Behavioural Research in Cancer in Melbourne, Australia.

It was supported by Quit Victoria, with funding from VicHealth and the Department of Health for the Victorian Smoking and Health annual survey.

The study was published in the peer-reviewed journal BMJ Open, which is open access, so the study can be read online or downloaded for free.

The results of the study were well reported by the UK media.

 

What kind of research was this?

This was a serial cross-sectional study (a cross-sectional study at different time points) that aimed to determine whether there was any evidence that the introduction of standardised packaging in Australia had changed:

  • the proportion of current smokers who usually purchased their tobacco products from larger discount outlets such as supermarkets, compared with small mixed-business retail outlets
  • the prevalence of the regular use of low-cost brands imported from Asia
  • the use of illicit unbranded tobacco

In Australia, since 2012, all tobacco products have had to be sold in standardised dark brown packaging with large graphic health warnings. Brand names are printed in a standardised position with standardised lettering.

The researchers state opponents of plain packaging have suggested its introduction could mean smokers would be less likely to purchase from small mixed-business retailers, more likely to purchase cheap brands imported from Asia, and more likely to use illicit tobacco.

 

What did the research involve?

Smokers aged 18 and over in Victoria, Australia were identified in an annual population telephone survey (the Victorian Smoking and Health Survey).

They were asked about:

  • the place they usually purchase tobacco products from (supermarkets, specialist tobacconists, small mixed businesses, petrol stations or other venues, including informal sellers)
  • their use of low-cost Asian brands (whether their main brand was a low-cost Asian brand)
  • their use of unbranded illicit tobacco (whether they had purchased or used any unbranded tobacco)

The researchers compared answers from three annual surveys: 

  • 2011 – a year prior to the implementation of standardised packaging
  • 2012 – during roll-out
  • 2013 – a year after implementation

 

What were the basic results?

A total of 754 smokers were surveyed in 2011, 590 in 2012 and 601 in 2013.

The researchers found:

  • the proportion of smokers purchasing from supermarkets did not increase and the percentage purchasing from small mixed-business outlets did not decline between 2011 and 2013
  • the prevalence of low-cost Asian brands was low and did not increase between 2011 and 2013
  • the proportion reporting current use of unbranded illicit tobacco did not change significantly between 2011 and 2013
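
As an illustration of the kind of year-on-year comparison involved, a two-proportion z-test can check whether a survey proportion changed significantly between waves. The counts below are hypothetical, and the paper's own analysis will have been more sophisticated (adjusting for respondent characteristics):

```python
import math

# Sketch: two-proportion z-test comparing a behaviour's prevalence
# across two survey years. Counts are hypothetical, not the study's.
def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g. 30 of 754 smokers reporting a behaviour in 2011 vs 25 of 601 in 2013
z = two_proportion_z(30, 754, 25, 601)
print(abs(z) > 1.96)  # False: no significant change at the 5% level
```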

 

How did the researchers interpret the results?

The researchers concluded that, "One year after implementation, this study found no evidence of the major unintended consequences concerning loss of smoker patrons from small retail outlets, flooding of the market by cheap Asian brands and use of illicit tobacco predicted by opponents of plain packaging in Australia."

 

Conclusion

The study found no evidence the introduction of standardised packaging had changed the proportion of people purchasing from small mixed-business retailers, purchasing cheap brands imported from Asia, or using illicit tobacco in Victoria, Australia.

However, this survey was only conducted in Victoria and only among English-speaking residents, so further studies are required to confirm the generalisability of the findings. As with all surveys, there is the possibility of respondent error and misreporting.

Further studies are required to investigate whether the introduction of standardised packaging has increased the use of counterfeit branded tobacco products, as this was not assessed.

Overall, the results of this study suggest there is no evidence behind many of the "fears" proposed by opponents of standardised packaging.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

'Plain' packaging not a boost to illegal tobacco use, study suggests. BBC News, August 29 2014

Australia shows that plain tobacco packaging significantly cuts smoking. The Independent, August 29 2014

Cigarette plain packaging fear campaign unfounded, Victoria study finds. The Guardian, August 29 2014

Plain Cigarette Packs Do Not Hurt Retailers. Sky News, August 29 2014

Links To Science

Scollo M, Zacher M, Durkin S, Wakefield M. Early evidence about the predicted unintended consequences of standardised packaging of tobacco products in Australia: a cross-sectional study of the place of purchase, regular brands and use of illicit tobacco. BMJ Open. Published online July 18 2014

Categories: Medical News

Claims magnetic brain stimulation helps memory

Medical News - Fri, 08/29/2014 - 03:00

“Magnetic brain stimulation treatment shown to boost memory,” The Guardian reports. A new study found that magnetic pulses improved recall skills in healthy individuals. It is hoped that the findings of this study could lead to therapies for people with memory deficits such as dementia.

Researchers investigated the effects of transcranial magnetic stimulation (TMS) every day for five days on connections within the brain and on associative memory (the ability to learn and remember relationships between items – such as “1066” and the “Battle of Hastings”).

TMS is a non-invasive technique that uses an electromagnet placed against the skull to produce magnetic pulses that stimulate the brain.

In this study, TMS of a specific area of the brain was compared to “sham” stimulation in 16 healthy adults.

TMS was found to improve performance on the associative memory test by over 20%, whereas sham stimulation had no significant effect.

While the results are interesting, there are important limitations to consider. The sample size was small, just 16 people, so the findings need to be replicated in a larger group of people. It is also unclear how long any effect would persist, and if there are any adverse effects of TMS. Long-term studies are also required to determine whether TMS is both safe and effective.

Of note, the current study involved healthy people, not people with memory deficits, so it is uncertain whether TMS would be of any benefit to people with conditions that cause memory deficits such as dementia.

 

Where did the story come from?

The study was carried out by researchers from Northwestern University and the Rehabilitation Institute of Chicago, and was funded by the US National Institute of Mental Health and National Institute of Neurological Disorders and Stroke.

The study was published in the peer-reviewed journal Science.

The results of this study were generally well reported by the media, although some headline writers overstated the implications of the results.

 

What kind of research was this?

This was a cross-over trial that aimed to determine whether electromagnetic stimulation of a particular region of the brain could improve memory in 16 healthy people.

The researchers were interested in a region of the brain called the hippocampus, which is necessary for associative memory – this includes the ability to remember the association between a word and a face. It has been hypothesised that this ability also depends on other brain regions, and that the hippocampus could act as a “hub”.

To see whether this was the case, the researchers used high-frequency TMS to stimulate part of the brain known as the lateral parietal cortex, which is thought to interact with the hippocampus in memory.

The lateral parietal cortex is part of the cerebral cortex, or grey matter; the hippocampus lies deeper within the brain, beneath the cortex.

 

What did the research involve?

The researchers compared the effects of high-frequency transcranial magnetic stimulation and “sham” stimulation for five days on the ability of 16 healthy people to remember the association between faces and words.  

Each person participated for two weeks – one week with TMS and one week with sham stimulation – separated by at least one week. The baseline assessment occurred one day prior to the first stimulation session, and there were five consecutive daily stimulation sessions. The post-treatment assessment occurred one day after the final stimulation session. Half the subjects received TMS first and half received sham stimulation first.

In the memory test, participants were shown 20 different human face photographs for three seconds each. A researcher read a unique common word aloud for each face. One minute after this had been completed the participants were shown the photos again and asked to recall the words associated with them.

In addition to looking at the effect on memory, the researchers also looked at the effect of TMS on connectivity within the brain, using a technique called functional magnetic resonance imaging. This technique detects changes in blood flow, and can be used to assess connectivity by looking for variations in blood flow that are correlated in time across different brain regions.

 

What were the basic results?

TMS improved people’s ability to remember the association between a word and a face by more than 20%, whereas sham treatment caused no significant performance change.

The researchers also gave people other cognitive tests, but found that TMS had no effect on people’s performance on these tests.

TMS also increased connectivity between specific cortical (grey-matter) regions of the brain and the hippocampus.

 

How did the researchers interpret the results?

The researchers concluded that cortical-hippocampal networks can be enhanced noninvasively and play a role in associative memory.

 

Conclusion

In this study, TMS was found to improve performance on the associative memory test by more than 20%, whereas sham stimulation had no significant effect.

TMS also increased connectivity between specific cortical (grey-matter) regions of the brain and the hippocampus.

This interesting research increases our knowledge of how memory works. However, it was a very small trial with only 16 participants. It is also unclear whether electromagnetic stimulation would be effective for people with memory disorders such as dementia. The media has reported that the researchers are now planning to study the effect of TMS on people with early loss of memory ability.

Long-term studies are also required to determine how long the improved memory performance lasts and to ensure that electromagnetic stimulation of the brain doesn’t have any adverse effects.

Dementia remains a poorly understood condition, and claims that brain training exercises have a definitive protective effect against the condition have not held up to scrutiny. That said, keeping the brain active through memory-intensive activities such as learning a new language, a musical instrument, or even just picking up a book cannot hurt. Keeping the mind active has been shown to improve quality of life.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Magnetic brain stimulation treatment shown to boost memory. The Guardian, August 28 2014

Electrical brain stimulation 'boosts memory'. BBC News, August 29 2014

Magnetic pulse to head could improve memory of dementia sufferers. The Daily Telegraph, August 28 2014

Links To Science

Wang JX, Rogers LM, Gross EZ, et al. Targeted enhancement of cortical-hippocampal brain networks and associative memory. Science. Published online August 29 2014

Categories: Medical News

Tomato-rich diet 'reduces prostate cancer risk'

Medical News - Thu, 08/28/2014 - 14:30

“Tomatoes ‘cut risk of prostate cancer by 20%’,” the Daily Mail reports, citing a study that found men who ate 10 or more portions a week had a reduced risk of the disease.

The study in question gathered a year’s dietary information from 1,806 men who were found to have prostate cancer and 12,005 who were clear after random prostate checks. The researchers compared the diets and adjusted the results to take into account factors such as age, family history of prostate cancer and ethnicity.

They found that men who ate more than 10 portions of tomatoes or tomato products per week had an 18% reduced risk of prostate cancer compared with men who ate fewer than 10 portions.

As this was a case-control study, and not a randomised controlled trial, it cannot prove that eating more tomatoes prevents prostate cancer. It can only show an association.

The association is biologically plausible, because tomatoes are a rich source of lycopene, a nutrient thought to protect against cell damage. However, the jury is still out on whether it really does protect cells.

So a healthy, balanced diet, regular exercise and stopping smoking are still the way to go. It’s unlikely that focusing on one particular food will improve your health.

 

Where did the story come from?

The study was carried out by researchers from the University of Bristol, the National Institute for Health Research (NIHR) Bristol Nutrition Biomedical Research Unit, Addenbrooke’s Hospital in Cambridge and the University of Oxford. It was funded by the NIHR and Cancer Research UK.

The study was published in the peer-reviewed medical journal Cancer Epidemiology, Biomarkers and Prevention. The study is open-access so it is free to read online or download.

In general, the media reported the story accurately but also reported different numbers of study participants, ranging from 1,800 to 20,000. This is because out of the 23,720 men who were initially included in the study, a proportion were excluded from the analyses due to missing questionnaires.

Several news sources have also reported that eating the recommended five portions of fruit or veg per day reduced the risk of prostate cancer by 24% compared to 2.5 servings or less per day. This seems to have come directly from the lead researcher, but these figures are not clearly presented in the research paper.

 

What kind of research was this?

This was a case-control study looking at the diet, lifestyle and weight of men who had had a prostate check and were subsequently diagnosed with prostate cancer (cases) or found to be clear of it (controls). The researchers wanted to see if there were any factors that reduced the risk of being diagnosed with prostate cancer.

A previous systematic review suggested that a diet high in calcium is associated with an increased risk of prostate cancer and that diets high in selenium and lycopene are associated with reduced risk. Selenium is a chemical element essential for life that is found in animals and plants, but high levels are toxic. Lycopene is a nutrient found in red foods such as tomatoes and pink grapefruit.

The researchers defined intake of selenium and lycopene as the “prostate cancer dietary index”. They looked at whether there was an association between men’s index scores and their risk of having prostate cancer.

In addition, in 2007, the World Cancer Research Fund (WCRF) and the American Institute for Cancer Research (AICR) made eight recommendations on diet, exercise and weight for cancer prevention. 

However, recent research has shown conflicting results as to whether these recommendations are applicable to prostate cancer. One large European study found that men who followed the recommendations did not have a lower overall prostate cancer risk, while another found that adherent men did have a reduced risk of aggressive prostate cancer.

The researchers wanted to see if these recommendations should be changed to include any of the prostate cancer dietary index components for men and/or men at higher risk of prostate cancer.

 

What did the research involve?

The researchers used data collected from a large UK study called the ProtecT trial. In this trial, 227,300 randomly selected men aged 50 to 69 were invited to have a prostate check between 2001 and 2009.

Nearly half of the men then had a prostate-specific antigen (PSA) test and 11% of them went on to have further investigations. Before the test they were asked to fill out questionnaires on:

  • lifestyle
  • diet
  • alcohol intake
  • medical history
  • family history

They were also asked to provide information on their:

  • physical activity level
  • body mass index (BMI)
  • waist circumference
  • body size aged 20, 40 and at the time they entered the study

Body size was self-estimated by looking at pictures on a scale of 1 to 9. All those selecting 1 to 3 were categorised as normal weight and those selecting 4 to 9 were considered overweight/obese.

From this study the researchers identified 2,939 men who had been diagnosed with prostate cancer and matched them with 20,781 randomly selected men by age and GP practice who did not have prostate cancer to act as controls. They then excluded anyone who did not return the questionnaires and those who did not provide all of the body metrics.

This gave a sample of 1,806 men with prostate cancer and 12,005 controls.

The dietary questionnaires assessed how frequently they had consumed 114 items of food over the previous 12 months. This included an estimate of portion sizes.

From this information, the men were assigned a score to reflect how well they had achieved the first six of the eight WCRF/AICR recommendations (they did not have enough information for “salt consumption” or “dietary supplements”).

Adherence to each recommendation was scored (1 – complete adherence, 0.5 – partial adherence or 0 – non-adherence), giving an overall score between 0 and 6.
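The scoring scheme above can be sketched in a few lines of code. This is an illustrative reconstruction, not the study's own analysis code; the function name and category labels are invented for the example.

```python
# Illustrative sketch of the WCRF/AICR adherence score described above.
# Each of the six scored recommendations contributes 1 (complete
# adherence), 0.5 (partial adherence) or 0 (non-adherence), giving an
# overall score between 0 and 6.

ADHERENCE_POINTS = {"complete": 1.0, "partial": 0.5, "none": 0.0}

def wcrf_score(adherence_levels):
    """adherence_levels: list of six labels, one per scored recommendation."""
    if len(adherence_levels) != 6:
        raise ValueError("exactly six recommendations are scored")
    return sum(ADHERENCE_POINTS[level] for level in adherence_levels)

# Example: full adherence to three recommendations, partial adherence to
# two, and non-adherence to one gives 3 + 1 + 0 = 4.0
score = wcrf_score(["complete", "complete", "complete",
                    "partial", "partial", "none"])
```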

The researchers also looked at the intake of components of the “prostate cancer dietary index”: calcium, selenium and tomato products, which they used as an indicator of lycopene intake (tomato juice, tomato sauce, pizza and baked beans). To be scored as adherent, men had to:

  • eat less than 1,500mg of calcium per day
  • eat more than 10 servings of tomato and tomato products per week
  • eat between 105 and 200µg of selenium per day

Statistical analyses were then performed to determine the risk of low or high grade prostate cancer according to adherence to the WCRF/AICR recommendations or intake of any of the three dietary components of the prostate cancer dietary index. The results were adjusted to take into account the following confounders:

  • age
  • family history of prostate cancer
  • self-reported diabetes
  • ethnic group
  • occupational class
  • smoking status
  • total energy intake
  • BMI

 

What were the basic results?

After adjusting for possible confounding factors:

  • being adherent to the tomato and tomato product recommendation by eating 10 or more servings of tomatoes per week was associated with an 18% reduced risk of prostate cancer compared to eating less than 10 servings (odds ratio (OR) 0.82, 95% confidence interval (CI) 0.70 to 0.97)
  • each component of the “prostate cancer dietary index” that the men adhered to was associated with a 9% reduction in risk of prostate cancer (OR 0.91, 95% CI 0.84 to 0.99)
  • the overall WCRF/AICR adherence score was not associated with a decreased risk of prostate cancer (OR 0.99, 95% CI 0.94 to 1.05)
  • every 0.25 increase in the score for adherence to the plant food recommendation was associated with a 6% reduced overall risk of prostate cancer (OR 0.94, 95% CI 0.89 to 0.99)

A 0.25 increase in adherence score could be achieved by increasing fruit and vegetable intake from less than 200g/day to between 200 and 400g/day, by increasing it from between 200 and 400g/day to 400g/day or more (400g is equivalent to five portions), or by changing intake of unprocessed cereals (grains) and/or pulses (legumes).
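The percentage reductions quoted in these results come straight from the reported odds ratios: an odds ratio (OR) below 1 corresponds to a reduction of roughly (1 − OR) × 100 percent (odds ratios approximate relative risks when the outcome is fairly uncommon, as here). A quick illustrative sketch, not taken from the paper:

```python
# Converting a reported odds ratio into the quoted percentage reduction.
# An OR below 1 implies roughly a (1 - OR) * 100 percent reduced risk.

def percent_reduction(odds_ratio):
    """Approximate percentage reduction implied by an odds ratio below 1."""
    return round((1 - odds_ratio) * 100)

print(percent_reduction(0.82))  # tomato recommendation: 18% reduced risk
print(percent_reduction(0.91))  # per dietary index component: 9%
print(percent_reduction(0.94))  # per 0.25 adherence increase: 6%
```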

 

How did the researchers interpret the results?

The researchers concluded that, “in addition to meeting the optimal intake for the three dietary factors associated with prostate cancer, men should maintain a healthy weight and an active lifestyle to reduce risk of developing prostate cancer, cardiovascular diseases and diabetes”. They also say that “high intake of plant foods and tomato products in particular may help protect against prostate cancer, which warrants further investigations”.

 

Conclusion

This large study has shown an association between the consumption of more than 10 portions of tomatoes per week and an 18% reduction in risk of prostate cancer. However, as this was a case-control study, and not a randomised controlled trial, it cannot prove that eating more tomatoes prevents prostate cancer.

Strengths of the study include its large size and attempts to account for potential confounding factors, although there are some limitations to the study, including:

  • reliance on the accuracy of the dietary questionnaires
  • broad categories for self-estimate of body size

This study does not provide enough evidence to change the recommendations for reducing the risk of prostate cancer. A healthy, balanced diet, regular exercise and stopping smoking are still the way to go, rather than relying on eating one exclusive food type such as tomatoes.

Following the eight WCRF/AICR recommendations as listed above should also help protect against other types of cancer, as well as chronic diseases such as obesity and type 2 diabetes.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Tomatoes 'cut risk of prostate cancer by 20%': It takes 10 portions a week - but even baked beans count. Daily Mail, August 28 2014

Tomatoes 'important in prostate cancer prevention'. BBC News, August 27 2014

Tomato-rich diet can lower prostate cancer risk by a fifth, scientists claim. The Independent, August 27 2014

New research suggests men who eat more than 10 portions of tomatoes a week are less likely to develop prostate cancer. ITV News, August 27 2014

Links To Science

Er V, Lane JA, Martin RM, et al. Adherence to dietary and lifestyle recommendations and prostate cancer risk in the Prostate Testing for Cancer and Treatment (ProtecT) trial. Cancer Epidemiology, Biomarkers and Prevention. Published online July 13 2014

Categories: Medical News

Depression therapy aids other cancer symptoms

Medical News - Thu, 08/28/2014 - 14:00

"Depression therapy could help cancer patients fight illness," reports The Daily Telegraph.

The headline follows a study of intensive treatment of clinical depression given to people who had both depression and cancer – delivered as part of their cancer care. It found that not only did people’s mood improve, but cancer-related symptoms such as pain and fatigue were also reduced compared to that seen with the usual care given.

The treatment programme, called Depression Care for People with Cancer (DCPC), involves a team of specially trained cancer nurses and psychiatrists who work closely with the patient’s cancer doctors and GP.

A related study, also published today, found that clinical depression is a common problem for people living with cancer. For example, it found that around one in eight people with lung cancer also had clinical depression.

It should be noted that the trial involved patients with a good outlook for their cancer, which may have been a factor in their response to treatment for depression.

However, a second trial of the depression treatment programme, this time involving lung cancer patients, also published today but not analysed here, showed a similar benefit, despite their poorer cancer prognosis.

This was a randomised controlled trial, which is the best type of study to examine the effectiveness of healthcare treatments, so the results are likely to be reliable. It is hoped that the positive results will be replicated in larger populations.

 

Where did the story come from?

The study was carried out by researchers from the Universities of Oxford and Edinburgh, and was funded by Cancer Research UK and the Scottish government.

The study was published in the peer-reviewed medical journal The Lancet.

The study is one of three depression-related cancer studies published by The Lancet.

The first looks at how common clinical depression is in cancer patients.

The third study assesses how effective the DCPC programme is in lung cancer patients with a poor prognosis.

The study was covered fairly by the UK media.

 

What kind of research was this?

This was a randomised controlled trial of an integrated treatment programme for clinical depression in patients with cancer, compared to the results seen with usual care.

The authors point out that clinical depression affects about 10% of people with cancer and is associated with: worse anxiety, pain, fatigue and functioning; suicidal thoughts; and poor adherence to anticancer treatments.

However, at present, there is no good evidence for how best to treat depression in cancer patients and how to integrate treatment into their cancer care.

Their integrated treatment programme involves a psychiatrist and the care manager working with the patient’s specialist doctor, GP and cancer nurses to provide an intensive systematic treatment for depression, including both drugs and psychological treatment.

It’s worth pointing out that what is new here is not the actual treatments for depression – rather the way they are delivered, as an integrated part of the patient’s cancer care.

 

What did the research involve?

Between 2008 and 2011, researchers enrolled 500 participants attending three cancer centres in Scotland. Participants were aged 18 or over, with a good cancer prognosis – with a predicted survival of at least a year. They had all been diagnosed with clinical depression of at least four weeks' duration.

253 participants were randomly assigned to the new DCPC programme, with 247 assigned to usual care.

In the DCPC group, depression care was delivered by specially trained cancer nurses, under the supervision of a psychiatrist. The programme was designed to be integrated with the patient’s cancer care, with psychiatrists working in collaboration with the patient’s oncology team and their GP.

The nurses established a therapeutic relationship with the patient, provided information about depression and its treatment, delivered psychological interventions and monitored progress, using a validated depression questionnaire. The psychiatrists supervised treatment, advised GPs about prescribing antidepressants and provided direct consultations with patients who were not improving.

The initial treatment phase comprised a maximum of 10 sessions with the nurse (at the clinic or, if necessary, by telephone) over a four-month period. After this, the patient’s progress was monitored monthly by telephone for a further eight months, and additional sessions with the nurse were provided for patients not meeting treatment targets. All cases were reviewed on a weekly basis, in supervision meetings attended by nurses and a psychiatrist.

In the usual care group, the patient's GP and cancer doctors were informed about the clinical depression diagnosis and asked to treat their patients as they normally would. This might involve the GP prescribing antidepressants, or a referral of the patient to mental health services for assessment or psychological treatment.

At 24 weeks, researchers assessed the primary outcome: treatment response, defined as at least a 50% reduction in depression severity, measured using a self-rated symptom checklist. A 50% reduction in score has been shown to be comparable to no longer meeting diagnostic criteria for major depression.

Researchers also looked at each patient’s levels of anxiety, pain, fatigue, physical and social functioning, as well as their overall health and quality of life, using validated questionnaires, and the patient’s opinion of the quality of depression care.

They analysed the results using standard statistical methods.

 

What were the basic results?

Researchers found that the severity of depression decreased by 50% or more in 62% of participants in the DCPC group, compared with 17% of participants in the usual care group (absolute difference 45%, 95% confidence interval (CI) 37 to 53; adjusted odds ratio (OR) 8.5, 95% CI 5.5 to 13.4).
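The arithmetic behind these headline figures can be sketched quickly. This is an illustration of where the numbers come from, not the study's own analysis; note that the paper's odds ratio of 8.5 is adjusted for covariates, so the crude figure computed here only comes out close to it.

```python
# Illustrative arithmetic behind the trial's headline result: 62% of the
# DCPC group responded (a drop of 50% or more in depression severity)
# versus 17% of the usual care group.

p_dcpc, p_usual = 0.62, 0.17

# Absolute difference in response rates, in percentage points.
abs_diff = round((p_dcpc - p_usual) * 100)

# Crude (unadjusted) odds ratio: the odds of response in each group,
# then their ratio. This lands near 8; the reported 8.5 is adjusted.
crude_or = (p_dcpc / (1 - p_dcpc)) / (p_usual / (1 - p_usual))

print(abs_diff)            # 45
print(round(crude_or, 1))  # 8.0
```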

Compared with patients in the usual care group, participants in the DCPC group also had less anxiety, pain and fatigue, as well as better functioning, health and quality of life. They also rated their depression care as being better.

During the study, 34 cancer-related deaths occurred (19 in the DCPC group, 15 in the usual care group); one patient in the DCPC group was admitted to a psychiatric ward and one patient in this group attempted suicide. None of these events were judged to be related to the trial's treatments or procedures.

 

How did the researchers interpret the results?

The researchers say their findings suggest that DCPC is an effective treatment for clinical depression in patients with cancer, and also offers a model for the treatment of depression occurring with other chronic medical conditions.

According to lead author Professor Michael Sharpe, from the University of Oxford in the UK: “The huge benefit that DCPC delivers for patients with cancer and depression shows what we can achieve for patients if we take as much care with the treatment of their depression as we do with the treatment of their cancer.”

 

Conclusion

Perhaps unsurprisingly, this well-conducted study suggests that offering cancer patients with clinical depression an intensive, systematic treatment for depression, involving everyone responsible for their care, works better than the current approach.

As the authors point out, the trial had some limitations. The sample was mainly women receiving follow-up or adjuvant treatment for breast and gynaecological cancers, so it is unclear whether the findings are generalisable to other cancer patients.

Also, patients and their GPs could not be “masked” as to whether they were in the DCPC group or the group receiving usual care, which might have influenced the findings.

The striking results for patients in the DCPC group are probably attributable to treatment for depression being intensive, systematically implemented and integrated with the patient’s cancer care.

It is noteworthy that in the group receiving usual care, prescribing antidepressants was not actively managed – by, for example, changing the drug or adjusting the dose, according to the patient’s response. Few patients in this group received psychological treatment, despite the option being available.

Due to the very positive results achieved using the DCPC approach, the programme is likely to be assessed using other groups of people with cancer. If it continues to prove successful, it may become part of standard cancer treatment protocols.

If you are concerned that you have mental health problems that are being left untreated, talk to your cancer nurse or GP. They should be able to provide extra support and treatment as required.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Cancer patients with depression 'are being overlooked'. BBC News, August 28 2014

Do more for depressed cancer patients – study. The Guardian, August 28 2014

Depression therapy could help cancer patient fight illness. The Daily Telegraph, August 28 2014

Study: Depression among cancer patients 'overlooked'. ITV News, August 28 2014

Links To Science

Sharpe M, Walker J, Hansen CH, et al. Integrated collaborative care for comorbid major depression in patients with cancer (SMaRT Oncology-2): a multicentre randomised controlled effectiveness trial. The Lancet. Published online August 28 2014

Categories: Medical News

Does weight loss surgery affect dementia risk?

Medical News - Wed, 08/27/2014 - 14:30

"Weight loss surgery 'reduces chance of Alzheimer's disease'," reports The Daily Telegraph. This misleading headline reports on a small Brazilian study of severely obese women before and after weight loss surgery. None of the women had any signs or symptoms of Alzheimer's.

Seventeen women with an average body mass index (BMI) of 50kg/m² had neuropsychological tests, blood tests and a brain scan before surgery and again six months later, when their average BMI had reduced to 37kg/m². Their results were compared with those of 16 women of a normal weight – the "controls".

All of the women had normal neuropsychological tests. The obese women performed one of the tests more quickly after weight loss surgery, but it cannot be assumed this is a direct result of their weight loss. It could be they were faster simply because this was the second time they had done the test. The control group of women did not repeat the test, so we do not know if they also would have performed better.

Small changes in the rate of metabolism were seen in brain scans after surgery in two areas of the obese women's brains. But because the women were not followed up over time, it is not possible to say whether this means the women were at less risk of dementia or Alzheimer's disease as a result.

Losing weight can improve cardiovascular function, which in turn can protect against some types of dementia. But, based on this very small study, weight loss surgery cannot be recommended as an effective preventative measure against dementia.

 

Where did the story come from?

The study was carried out by researchers from the University of São Paulo, Brazil and was funded by the Brazilian National Council for Scientific and Technological Development.

It was published in the peer-reviewed Journal of Clinical Endocrinology and Metabolism on an open access basis, so it is free to read the paper online (PDF, 443kb).

The media headlines overstated the results of this study – it was not able to show that weight loss "boosts brain power" or reduces the risk of Alzheimer's disease. A more accurate – if less exciting – headline would have been "Weight loss surgery may make you perform slightly better in one of several neuropsychological tests".

But credit should go to the Mail Online for including a quote from an independent expert, who warned against reading too much into the results of this small study.

 

What kind of research was this?

This was a before and after study looking at the effect of weight loss surgery on brain (cognitive) function and metabolism in severely obese people. Severe obesity is when a person has a BMI of 40 or above.

The researchers say there is a link between obesity and Alzheimer's disease. They also report that previous research has found one area of the brain, called the posterior cingulate gyrus (believed to be involved in many brain processes), which shows reduced metabolic activity in early Alzheimer's disease.

They suggest the increased activity in this region might be a compensatory mechanism that occurs before the reduction in activity later in the disease.

The researchers wanted to assess the level of activity in this part of the brain in obese women and whether weight loss could have any impact on the metabolism.

As this study did not have a randomised control group of severely obese people who did not receive surgery, it is not able to prove cause and effect, as other confounding factors may have influenced the results.

 

What did the research involve?

The researchers compared the results of six neuropsychological tests, blood tests and a PET brain scan (a type of scan that assesses brain metabolism) on severely obese women before gastric bypass surgery and six months afterwards. They also compared the obese women's results with those of a group of normal-weight women.

Seventeen severely obese women aged between 30 and 50 were selected who were due to have gastric bypass surgery. The blood tests they had measured:

  • indicators of metabolism – glucose (sugar) level, insulin and lipids
  • markers of inflammation – C-reactive protein (CRP), Interleukin-6 (IL-6) and tumour necrosis factor-alpha (TNF-α)

Sixteen normal-weight women were recruited from the gynaecology unit to have the same tests on a single occasion to act as controls. They were matched to the obese women in terms of age and educational level.

 

What were the basic results?

The obese women lost a significant amount of weight after the surgery, but were still classified as very obese. Their average BMI was 50.1kg/m² before surgery and 37.2kg/m² six months after. The BMI of the normal-weight women was 22.3kg/m².

There was no significant difference in the neuropsychological tests between the obese women (before or after surgery) and the normal-weight women. The obese women showed improvements in one part of one of the six neuropsychological tests after surgery, however. This was the Trail Making Test – B, which assesses speed of visual scanning, attention and mental flexibility.

After surgery, the obese women completed the test in about two-thirds of the time it had taken them beforehand (an average of 96.9 seconds, down from 147.8 seconds). Their performance was within normal limits both before and after surgery.

The brain PET scan showed an increase in metabolism in two areas of the brain before surgery compared with the normal-weight women. This difference was no longer present six months after surgery.

The two areas were the right posterior cingulate gyrus (the area that may be more active in early Alzheimer's disease) and the right posterior lobe of the cerebellum (involved in motor co-ordination).

Blood glucose, insulin levels and insulin resistance were higher in obese women than normal-weight women before surgery and improved to similar levels six months after surgery. Two of the inflammatory markers – CRP and IL-6 – were also significantly higher prior to surgery but then improved.

 

How did the researchers interpret the results?

The researchers concluded that, "metabolic and inflammatory properties associated with obesity in young adults are accompanied by changes in the cerebral metabolism capable of being reversed with weight loss."

They acknowledge that, "further studies are required to improve the understanding of the pathogenesis of the cognitive dysfunction related to obesity and the effects of weight loss on the occurrence of dementia."

 

Conclusion

This small short-term study has not shown that weight loss surgery reduces the risk of dementia. The women in this study were relatively young (about 41 years old on average) and all had normal neuropsychological test performance.

What this study did show is that, unsurprisingly, weight loss for severely obese women was associated with improved insulin resistance and blood glucose levels, and reduced levels of inflammation.

The main result reported by the researchers was a higher level of metabolism in two areas of the brain in severely obese women before gastric bypass surgery compared with normal-weight controls. This reduced to normal levels six months after surgery, when they had lost a substantial amount of weight but were still obese.

According to the researchers, one of these parts of the brain usually has reduced levels of metabolism in Alzheimer's disease, but has higher levels of metabolism in young people with a genetically increased risk of Alzheimer's disease before the levels then reduce. But they did not test any of the women for this genetic risk factor (apolipoprotein E type 4 allele).

The study also only followed the women for six months. This means it was not able to show what happened to activity in this area over a longer period of time, or whether any of the women would go on to develop Alzheimer's disease.

Overall, this study cannot show that the increased level of activity was associated with an increased risk of dementia, or that the reduction of activity after the women lost weight would change their risk.

There were improvements in the time it took the obese women to complete half of one of the six neuropsychological tests after the surgery and weight loss, but this cannot be attributed solely to weight loss. It could be that the women were quicker simply because they had done the test before and remembered how to do it.

The normal-weight women were only tested once, and there was no randomised control group of severely obese women who did not have surgery. Therefore, there was no group that allowed the researchers to compare whether completing the test for a second time would be faster, even without weight loss. There was also no difference in the women's ability to complete the other part of this test, or in the other five tests.

Further limitations of the study include:

  • the small number of participants
  • all the participants were women, so the results may not be applicable to men
  • this was a select group of severely obese women with an average BMI of 50kg/m², so the results may not apply to women with other levels of obesity – a normal weight is a BMI of 19 to 25kg/m², obesity is a BMI over 30kg/m², and severe obesity a BMI over 40kg/m²
  • it is not clear what gynaecological conditions the control women had and whether this could have affected the results
  • there is no information about any other potential confounding factors that could have influenced the results, including other medical conditions, lifestyle factors such as smoking or alcohol use, or a family history of dementia

In conclusion, this study does not show that weight loss surgery reduces the risk of dementia. Despite this, the study does provide further evidence of the benefits of this type of surgery, including weight loss and improvements in insulin resistance, which would reduce the risk of diabetes.

Weight loss surgery should only be considered as a last resort. Many people can achieve significant weight loss by reducing their calorie intake and taking regular exercise. This also has the added bonus of avoiding the risks of complications and after-effects of surgery, such as excess skin.

For more information on losing weight, download the NHS Choices weight loss plan.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Weight loss surgery 'reduces chance of Alzheimer's Disease'. The Daily Telegraph, August 26 2014

Weight-loss surgery can boost brain power and 'cut the risk of developing Alzheimer's'. Mail Online, August 26 2014

Links To Science

Marques EL, Halpern A, Mancini MC, et al. Changes in Neuropsychological Tests and Brain Metabolism After Bariatric Surgery (PDF, 443kb). The Journal of Clinical Endocrinology and Metabolism. Published online August 26 2014

Categories: Medical News

Antidepressant use in pregnancy linked to ADHD

Medical News - Wed, 08/27/2014 - 03:00

“Pregnant women who take anti-depressants 'could raise their child's risk of ADHD',” reports the Mail Online, saying that this could explain “the rise in children with short attention spans”.

The study in question compared children with attention deficit hyperactivity disorder (ADHD) or autistic spectrum disorders (ASD) with children without these conditions. It found that children with ADHD, but not those with ASD, were more likely to have had mothers who took antidepressants during pregnancy. 

The main limitation of this study is that we cannot be certain whether the antidepressants themselves were having an effect, or whether other factors were at play. The researchers did try to take factors such as the mother’s depression itself into account, but acknowledge that other factors may have affected the findings. The fact that the link was no longer significant once the severity of the women’s psychiatric illness was taken into account adds weight to the suggestion that other factors were involved.

While medications, including antidepressants, are generally avoided in pregnancy, the benefits of taking them may outweigh potential risks in some circumstances. Depression is a serious condition, which can have serious consequences if left untreated during pregnancy.

If you are taking antidepressants and are pregnant or planning to get pregnant, talk to your doctor. However, you should not stop taking your medicines unless advised to do so by your doctor.

 

Where did the story come from?

The study was carried out by researchers from Massachusetts General Hospital and other healthcare and research institutes in the US. It was funded by the US National Institute for Mental Health Research. Some of the authors declared receiving consulting fees or research support, having equity holdings or being on scientific advisory boards for various pharmaceutical companies. The study was published in the peer-reviewed medical journal Molecular Psychiatry.

The study was covered reasonably by the Mail, which highlighted early on in its story that any risk of taking antidepressants needed to be balanced against the risk of not treating a woman’s depression. It also very sensibly reported on current guidance from the National Institute for Health and Care Excellence (NICE) on when antidepressants should be used in pregnancy.

 

What kind of research was this?

This was a case-control study looking at whether exposure of a foetus to antidepressants in the womb might increase the risk of the child having ASD or ADHD in childhood. The researchers report that some previous studies have found a link, while others have not.

It would be unethical for researchers to randomly assign pregnant women with depression to receive or not receive antidepressants just to assess potential harms to the baby. Therefore, this type of study (called an observational study) is the most feasible way of investigating these links. The limitation of this type of study, however, is that factors other than antidepressants could be causing the link seen. For example, the depression itself might have an effect, or genetic factors contributing to the woman’s depression might also increase the child’s risk of ASD or ADHD. The researchers tried to take some of these factors into account, particularly the possibility that ADHD and ASD might be associated with maternal depression itself. However, their influence may not have been removed completely.

 

What did the research involve?

The researchers used data routinely collected from one healthcare group in the US. They identified children diagnosed with ADHD or ASD (cases), and compared them with similar children who did not have these conditions (controls). They looked at whether the mothers of children with these conditions were more likely to have taken antidepressants during their pregnancies. If this was the case, this would suggest that the antidepressant use might be linked to an increased risk of these conditions.

The researchers identified cases diagnosed between 1997 and 2010, among children aged from two to 19, who had been delivered at the three hospitals that were part of the healthcare group. For each case child, they identified three “control” children, who were:

  • not diagnosed with ADHD, ASD or an intellectual disability
  • born in the same year, ideally, or within three years if not enough controls could be found
  • born at the same hospital
  • born at the same term – either full-term or preterm (premature)
  • of the same sex
  • of the same race/ethnicity
  • of the same health insurance type (this acted as an indicator of socioeconomic status)

Children for whom no matching controls could be identified were excluded, but those with only one or two matched controls were included. The researchers ended up with 1,377 children with ASD, 2,243 children with ADHD and 9,653 healthy control children for analysis.

The children’s mothers were also identified from the healthcare database and birth certificate data. They identified whether the mothers had been prescribed antidepressants:

  • at any time before pregnancy
  • in the three months before conceiving the child
  • at any time during pregnancy (also broken down into first, second or third trimester prescriptions)

They also identified how long the prescription lasted (how many days’ worth of antidepressants the woman was prescribed).

The researchers then analysed whether prenatal antidepressant use was more or less common in mothers of cases or controls. These analyses took into account the factors that the children were matched for (such as gender and race) as well as maternal age and household income.

They also took into account whether the mother had been diagnosed with depression, the effects of different types of antidepressant, an indicator of how severe the woman’s illness was (assessed by how much treatment she received and the presence of other psychiatric diagnoses), and exposure to two types of non-antidepressant medication: a drug to prevent vomiting that affects serotonin levels (something some antidepressants also do), and antipsychotics.

 

What were the basic results?

Maternal depression was associated with increased risk of ASD and ADHD in adjusted analyses.

Between 3% and 6.6% (approximately) of children with ADHD or ASD had mothers who had taken antidepressants either before pregnancy or during pregnancy, compared to 1% to 3.5% (approximately) of control children.

Before taking into account other factors, taking antidepressants before pregnancy or during pregnancy was associated with an increased risk of ASD and ADHD. After taking into account factors including maternal depression, taking antidepressants before pregnancy was associated with an increase in the odds of ASD (odds ratio (OR) 1.62, 95% confidence interval (CI) 1.17 to 2.23), but not of ADHD (OR 1.18, 95% CI 0.86 to 1.61). Taking antidepressants during pregnancy was associated with an increase in the odds of ADHD (OR 1.81, 95% CI 1.22 to 2.70) but not of ASD (OR 1.10, 95% CI 0.70 to 1.70).
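The odds ratios quoted are the standard case-control measure: the odds of exposure (here, antidepressant use) among cases divided by the odds among controls, with a 95% confidence interval usually calculated on the log scale. A minimal sketch using made-up counts, not the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio for a 2x2 table:
         a = exposed cases,    b = unexposed cases,
         c = exposed controls, d = unexposed controls.
    Returns (OR, lower, upper) with a Woolf 95% CI on the log scale."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se)
    return odds_ratio, lower, upper

# Hypothetical counts, for illustration only:
or_, lo, hi = odds_ratio_ci(20, 80, 30, 270)
# or_ is 2.25; the whole CI sits above 1, so this illustrative
# association would be statistically significant at the 5% level.
```

When a confidence interval spans 1 (as for ADHD before pregnancy, OR 1.18, 95% CI 0.86 to 1.61), the association is not statistically significant.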

The researchers found that if they took into account measures of how severe the woman’s illness was (how much treatment she was receiving, and whether she had other psychiatric conditions), the link between antidepressant exposure during pregnancy and ADHD was no longer statistically significant.

The researchers found no link between the anti-vomiting drug and ASD or ADHD risk, while there was a suggestion of a link between maternal antipsychotic use during pregnancy and ASD, but not ADHD.

 

How did the researchers interpret the results?

The researchers concluded that the association between maternal prenatal antidepressant use and ASDs in the children was probably due to the depression itself, rather than antidepressant use.

Maternal prenatal antidepressant use did appear to be associated with a modest increase in ADHD in the child, although this may still be due to other factors rather than the antidepressants themselves, they said. The researchers note that this potential risk needs to be weighed up against the considerable consequences of not treating the mother’s depression.

 

Conclusion

This study suggests a potential link between women taking antidepressants during pregnancy and an increased risk of ADHD, but not ASDs, in their children. The limitation to this type of study is that factors other than the antidepressants, such as the depression itself, or genetic factors increasing both depression and ADHD risk, might be causing the effect seen.

The researchers used various methods to take this into account, but acknowledge that other factors could still be having an effect. While the link with ADHD remained significant after taking maternal depression into account, it did not remain significant after taking into account measures of how severe the woman’s illness was.

Other limitations to the study include the following:

  • It could only assess what prescriptions the mothers received, and not whether they took them.
  • It could not directly assess how severe a woman’s illness was; the researchers had to rely on routinely collected data about the types of treatment she was receiving and her previous diagnoses. This is unlikely to capture severity as well as a more direct assessment could.
  • If children or mothers were diagnosed or treated outside of the healthcare grouping being assessed, this information would not be available to the researchers, and this could affect results.

It is important to know that no one factor is likely to cause ADHD or ASD. These conditions are complex, and we are not yet entirely sure what causes the majority of cases. Both genetic and non-genetic (known as “environmental”) factors are thought to potentially play a part.

Medications are used sparingly in pregnancy to reduce any risk of harm to the developing foetus. However, if a woman’s condition could have serious consequences if untreated, then the woman and her doctor may decide that the benefits outweigh the harms.

NICE has guidance on how to treat depression if planning a pregnancy and during pregnancy and breastfeeding. In general, it recommends considering alternatives to antidepressant treatment, and considering doctor-supervised withdrawal of antidepressants for women already taking them. However, under certain circumstances it advises considering antidepressant treatment, such as if the woman has not responded to non-drug therapies.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Pregnant women who take anti-depressants 'could raise their child's risk of ADHD'. Mail Online, August 26 2014

Links To Science

Clements CC, Castro VM, Blumenthal SR, et al. Prenatal antidepressant exposure is associated with risk for attention-deficit hyperactivity disorder but not autism spectrum disorder in a large health system. Molecular Psychiatry. Published online August 26 2014

Categories: Medical News

Common bacteria could help prevent food allergies

Medical News - Tue, 08/26/2014 - 15:00

"Bacteria which naturally live inside our digestive system can help prevent allergies and may become a source of treatment," BBC News reports after new research found evidence that Clostridia bacteria help prevent peanut allergies in mice.

The study in question showed that mice lacking normal gut bacteria showed increased allergic responses when they were given peanut extracts.

The researchers then tested the effects of recolonising the mice's guts with specific groups of bacteria. They found that giving Clostridia bacteria (a group of bacteria that includes the "superbug" Clostridium difficile) reduced the allergic response.

The researchers hope the findings could one day support the development of new approaches to prevent or treat food allergies using probiotic treatments.

These are promising findings, but they are in the very early stages. Only mice have so far been studied, with a specific focus on peanut allergy and Clostridia bacteria. Further study developments from this animal research are awaited.

 

Where did the story come from?

This study was conducted by researchers from the University of Chicago, Northwestern University, the California Institute of Technology and Argonne National Laboratory in the US, and the University of Bern in Switzerland.

Funding was provided by Food Allergy Research and Education (FARE), US National Institutes of Health Grants, the University of Chicago Digestive Diseases Research Core Center, and a donation from the Bunning family.

It was published in the peer-reviewed journal PNAS.

BBC News gave a balanced account of this research.

 

What kind of research was this?

This was an animal study that aimed to see how alterations in gut bacteria are associated with food allergies.

As the researchers say, life-threatening anaphylactic reactions to food allergens (any substance that generates an allergic response) are an important concern, and the prevalence of food allergies appears to have been rising over a short space of time.

This has caused speculation about whether alterations in our environment could be driving allergic sensitivity to foods. One such theory is the "hygiene hypothesis".

This is the theory that reducing our exposure to infectious microbes during our early years – through overzealous sanitisation, for example – deprives people's immune systems of the "stimulation" of exposure, which could then lead to allergic disease. 

An extension of this theory is that environmental factors – including sanitation, but also increased use of antibiotics and vaccination – have altered the composition of natural gut bacteria, which play a role in regulating our sensitivity to allergens. It has been suggested that infants who have altered natural gut bacteria could be more sensitive to allergens.

This mouse study aimed to examine the role of gut bacteria in sensitivity to food allergens, with a focus on peanut allergy.

 

What did the research involve?

The researchers investigated the role gut bacteria play in sensitivity to food allergens in different groups of mice. The research team studied mice born and raised in a completely sterile, bacteria-free environment, so they were germ-free.

Another group of mice were treated with a mixture of strong antibiotics from two weeks of age to severely reduce the variety and number of bacteria in their gut.

These groups of mice were then given purified extracts of roasted unsalted peanuts to assess their allergic response.

After looking at the allergic reactions in the germ-free and antibiotic-treated mice, specific groups of bacteria were reintroduced into their gut to see what, if any, effect it had on their allergic response.

The researchers focused on reintroducing Bacteroides and Clostridia groups of bacteria, which are normally present in mice in the wild.

 

What were the basic results?

Faecal samples taken from the antibiotic mice were found to have a significantly reduced number and variety of gut bacteria. These mice also had increased sensitivity to peanut allergens, demonstrating an increased immune system response that produced antibodies specific to these allergens, as well as showing symptoms of allergy.  

When the germ-free mice were exposed to peanut allergens, they demonstrated a greater immune response than normal mice and also demonstrated features of an anaphylactic reaction.

The researchers found that adding Bacteroides to the gut of the germ-free mice had no effect on the allergic reaction. However, adding Clostridia bacteria reduced sensitivity to the peanut allergen, making their allergic response similar to normal mice.

This suggests that Clostridia plays a role in protecting against sensitisation to food allergens.

This was further confirmed when Clostridia was used to recolonise the guts of the antibiotic-treated mice, which was found to reduce their allergic response.

The researchers then carried out further laboratory experiments looking at the process by which Clostridia could be offering protection. They found the bacteria increase the immune defences of the cells lining the gut.

One specific effect seen was how Clostridia increased the activity of a particular antibody, which reduced the amount of peanut allergen entering the bloodstream by making the gut lining less permeable (so substances are less likely to pass through it).

 

How did the researchers interpret the results?

The researchers concluded that they have identified a "bacterial community" that protects against sensitisation to allergens and have demonstrated the mechanisms by which these bacteria regulate the permeability of the gut lining to food allergens.

They suggest their findings support the development of new approaches for the prevention and treatment of food allergy by using probiotic therapies to modulate the composition of the gut bacteria, and so help induce tolerance to dietary allergens.

 

Conclusion

This research examined how normal populations of gut bacteria influence mouse susceptibility to peanut allergens. The findings suggest the Clostridia group of bacteria may have a particular role in altering the immune defences of the gut lining and preventing some of the food allergen from entering the bloodstream.

The findings inform the theory that our increasingly sterile environments and increased use of antibiotics could lead to a reduction in our normal gut bacteria, which could possibly lead to people developing a sensitivity to allergens.

But these findings are in the very early stages. So far, only mice have been studied, and only their reactions to peanuts. We don't know whether similar results would be seen with other tree nuts or other foods that can cause an allergic response.

Also, although this research provides a theory, we do not know whether this theory is correct. We don't know, for example, whether people with a peanut allergy do (or did) have reduced levels of certain gut bacteria populations and whether this contributed to the development of their allergy. We also do not know whether treatments that reintroduce these bacteria could help reduce the allergy.

As the researchers say, the study does open an avenue of further study into the possible development of probiotic treatments, but there is a long way to go. 

Professor Colin Hill, a microbiologist at University College Cork, was quoted by the BBC as saying: "It is a very exciting paper and puts this theory on a much sounder scientific basis."

But he does offer due caution, saying: "We have to be careful not to extrapolate too far from a single study, and we also have to bear in mind that germ-free mice are a long way from humans."

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Gut bugs 'help prevent allergies'. BBC News, August 26 2014

Probiotics may help prevent peanut allergies, animal study shows. Fox News, August 26 2014

Categories: Medical News

Breakfast 'not the most important meal of the day'

Medical News - Tue, 08/26/2014 - 03:00

"Breakfast might not be the most important meal of the day after all,” the Mail Online reports.

The concept that breakfast is the most important meal of the day is up there in the pantheon of received wisdom with “never swim after eating” or “getting wet will give you a cold”. But is there any hard evidence to back the claim?

A new study in 38 lean people found that six weeks of regularly eating breakfast had no significant effect on metabolism or eating patterns for the rest of the day compared to total fasting before midday.

It also found no difference between the groups at the end of the study in body mass, fat mass, or indicators of cardiovascular health (such as cholesterol or inflammatory markers).

There are, though, various important limitations to this trial, such as the short follow-up time. For example, people who fasted had much more variable blood sugar levels in the afternoon and evening, and we do not know what the longer-term effects of this could be.

Overall, based on this study alone, we would not recommend completely starving your body of all nutrition before 12pm each day, not least because not eating anything in the morning may leave you feeling neither happy nor energetic.

 

Where did the story come from?

The study was carried out by researchers from the University of Bath and published in the peer-reviewed American Journal of Clinical Nutrition. The study has been published on an open-access basis, so is available for free online. The work was funded by a grant from the Biotechnology and Biological Sciences Research Council. The authors declare no conflicts of interest.

In concluding that breakfast is not the most important meal of the day, the Mail does not consider the various limitations of this very small study.

 

What kind of research was this?

This was a randomised controlled trial looking at how breakfast habits were associated with energy balance in the rest of the day in people living their normal daily life.

As the researchers say, it is the popular belief that “breakfast is the most important meal of the day”. But this assumption is only grounded in cross-sectional studies observing that eating breakfast is associated with reduced risk of weight gain and certain chronic diseases (such as diabetes and cardiovascular disease). However, this does not prove cause and effect. The researchers also note that such observational studies do not take into account the fact that people who eat breakfast also tend to be more physically active, eat less fat, be non-smokers and moderate drinkers, opening up the possibility of confounding factors.

So it could be the case that rather than regularly eating breakfast making you healthy, healthy people are more likely to eat breakfast.

The researchers say that though breakfast is said to influence metabolism, studies have lacked measurement tools capable of accurately measuring this during normal daily activities. This study aimed to get a better indication of this by measuring all aspects of energy balance, including the heat generated during physical activity, and in-depth laboratory tests (including blood tests and DEXA scan of bone mineral density).

Ultimately, they wanted to find out whether eating breakfast was a cause of good health or whether it was simply a sign of an already healthy lifestyle.

 

What did the research involve?

The research was given the title the “Bath Breakfast Project”. Adults between the ages of 21 and 60 were eligible for the trial if they were either normal weight (20 to 25kg/m²) or overweight (25 to 30kg/m²). People were randomised to eat a daily breakfast or to extended morning fasting for six weeks. Each of the two randomised groups was intended to include an even balance of normal and overweight participants, and of people who frequently and infrequently ate breakfast. This was done to allow a stratified (representative) analysis based on these two factors.

The total sample size was around 60-70. This publication reports the findings for the 38 "lean" people in the study – women with a DEXA fat mass index of 11kg/m² or less, and men with a fat mass index of 7.5kg/m² or less (DEXA fat mass index is assessed using X-rays to give a very precise measurement of body fat).

Before the trial, participants came to the laboratory to have baseline measurements taken. This included blood tests to look at hormones, metabolites and blood fats, assessments of metabolic rate, and body mass and fat mass measurements. A small tissue sample was also taken to look at key genes related to appetite and physical activity. 

The breakfast group were told to eat 3,000kJ (around 720 calories – or around two bacon sandwiches) of energy prior to 11am, with half of this provided within two hours of waking. The breakfasts were self-selected by the participants, though they were said to be provided with detailed examples of the foods that would give the appropriate energy intake. The extended morning fasting group could drink only water before 12pm each day.
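The article's energy conversion can be checked directly, since one kilocalorie is approximately 4.184 kilojoules. As a quick illustrative sketch:

```python
# Convert the prescribed breakfast energy from kilojoules to kilocalories.
KJ_PER_KCAL = 4.184

def kj_to_kcal(kj):
    """Kilojoules to kilocalories (1 kcal ≈ 4.184 kJ)."""
    return kj / KJ_PER_KCAL

# kj_to_kcal(3000) is roughly 717, consistent with the article's
# "around 720 calories".
```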

During the first and last weeks of the six-week trial, participants kept detailed records of their food and fluid intakes for later analysis of daily energy and macronutrient intake. During these two weeks, they were also fitted with a combined heart rate/accelerometer to accurately record energy expenditure/physical activity habits for the entire duration of each of these seven-day periods. A glucose monitor was also fitted under the skin.

They were told when these devices were fitted: “Your lifestyle choices during this free-living monitoring period are central to this study. We are interested in any natural changes in your diet and/or physical activity habits, which you may or may not make in response to the intervention. This monitoring period has been carefully scheduled to avoid any pre-planned changes in these habits, such as a holiday or diet/exercise plan. You should inform us immediately if unforeseen factors external to the study may influence your lifestyle.”

After the six weeks of the trial, the participants returned to the laboratory for repeat body measurements.

 

What were the basic results?

The study reports data for the 33 people who completed the trial, 16 in the breakfast group and 17 in the fasting group. These people were of average age 36, 64% were female and 79% of them regularly ate breakfast.

The researchers found that, compared to the fasting group, those in the breakfast group generated significantly more heat energy during physical activity before 12pm, and also engaged in more physical activity, in particular more “light” physical activity. Resting metabolic rate was stable between the groups, and there was no subsequent suppression of appetite in the breakfast group (energy intake remained 539 kcal/d greater than the fasting group throughout the day).

There was no difference in waking or sleeping times, and at the end of the study there were no differences between groups in body mass or fat mass, body hormones, cholesterol or inflammatory markers. There was no difference between groups in fasting blood sugar or insulin at six weeks, but during continuous sugar monitoring in the last week of the trial the fasting group demonstrated more variability in their afternoon and evening sugar measures.

 

How did the researchers interpret the results?

The researchers conclude that: “Daily breakfast is causally linked to higher physical activity thermogenesis [heat generation] in lean adults, with greater overall dietary energy intake, but no change in resting metabolism. Cardiovascular health indexes were unaffected by either of the treatments, but breakfast maintained more stable afternoon and evening glycemia [glucose control] than did fasting.”

 

Conclusion

This trial aimed to measure the direct effect that eating breakfast or fasting before 12pm has on energy balance and indicators of cardiovascular health in people living their normal daily lives. The trial was carefully designed and took extensive body measurements to try and measure the direct effects of breakfast or fasting upon the body. However, there are limitations to bear in mind:

  • This study reports the findings for the 33 lean people in the study. The researchers randomised between 60 and 70 people, including a balanced mix of normal weight and obese people. A later publication will report the findings in the remaining obese cohort.
  • The intervention was intended to apply “under free-living conditions” where all lifestyle choices were allowed to vary naturally. However, it is difficult to gauge how accurately people did comply with their allocated interventions. Compliance was said to be confirmed via self-report and verified via continuous glucose monitoring; however, this only apparently happened during the first and sixth weeks of the trial. It is unclear whether compliance would have been accurately measured during the intervening weeks.
  • The study only measures the effect of a very specific intervention of eating 3,000kJ for breakfast, or eating absolutely nothing at all, except for water before 12pm. This total fasting example is quite extreme, and its effects have only been measured over six weeks. We don’t know what the longer-term effects upon health would be. For example, the study did find that people who fasted had much more variable blood glucose control in the afternoon, and we don’t know what the longer-term effects of this pattern would be.
  • The study has also not measured the wider effects upon general feelings of wellbeing, emotions, concentration, lethargy, etc, that fasting may have. Participants in the fasting group were observed to do less physical activity in the morning, and this may have been an indicator of them feeling that they had less energy.
  • Study of different timings of breakfast, or different compositions (e.g. of carbohydrate, protein or fat) or different total calories, may be more beneficial for future study than the comparison of this 3,000kJ breakfast or total fast before 12pm studied here.

Overall, this study does not settle the debate on whether breakfast is the most important meal of the day, because it was quite narrow in its scope. Dr Betts, a senior lecturer in nutrition, metabolism and statistics, told the Mail Online that “It is certainly true that people who regularly eat breakfast tend to be slimmer and healthier, but these individuals also typically follow most other recommendations for a healthy lifestyle, so have more balanced diets and take more physical exercise." 

In normal life situations, breakfast does therefore seem to be linked to health in some way, though direct cause and effect is difficult to establish due to the influence of other health and lifestyle factors on the relationship. This study does not provide many more answers about whether we should eat breakfast, or what type of breakfast we should eat.

However, based on this study alone we would not recommend missing breakfast, not least because it may have a negative impact on your mood; you could spend all morning feeling “hangry”.

If you have slipped into the habit of skipping breakfast, then it is never too late to break the habit.

Read about five breakfast recipes specifically designed for people who hate eating breakfast.

Analysis by Bazian. Edited by NHS Choices.
Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Breakfast might NOT be the most important meal of the day after all: Scientists find it doesn't speed up the metabolism or aid weight loss. Mail Online, August 25 2014

Links To Science

Betts JA, Richardson JD, Chowdhury EA, et al. The causal role of breakfast in energy balance and health: a randomized controlled trial in lean adults. The American Journal of Clinical Nutrition. Published online June 4 2014

Categories: Medical News

Autistic brain 'overloaded with connections'

Medical News - Fri, 08/22/2014 - 14:30

"Scientists discover people with autism have too many brain 'connections'," the Mail Online reports. US research suggests that people with an autistic spectrum disorder have an excessive amount of neural connections inside their brain.

The headline is based on the results of a study that found that at post-mortem, brains of people with autism spectrum disorder (ASD) have more nerve cell structures called “dendritic spines” – which receive signals from other nerve cells – than the brains of people without ASD.

Brain development after birth involves both the formation of new connections and the elimination or "pruning" of other connections. The researchers concluded that people with ASD have a developmental defect in the pruning/elimination of dendritic spines.

Further examination of the brains of people with ASD found that more of the signalling protein mTOR was found to be in its activated state than in brains of people without ASD.

A process called autophagy, where older structures and proteins within cells are removed and broken down, was also impaired.

The researchers performed further experiments showing that mTOR signalling inhibits autophagy, and that without autophagy the pruning of dendritic spines does not occur.

Mice genetically engineered to have increased levels of activated mTOR signalling were found to display autistic-like symptoms. All of these could be reversed with treatment with an inhibitor of mTOR called rapamycin.

Rapamycin is a type of antibiotic, and is currently used in medicine as an immunosuppressant to prevent organ rejection after kidney transplant. However, it has been associated with a range of adverse effects so would be unsuitable for most people with ASD.

It is too soon to say whether this research could lead to any treatment for ASD, and even if it does it is likely to be a long way off.

 

Where did the story come from?

The study was carried out by researchers from Columbia Medical School, the Icahn School of Medicine at Mount Sinai and the University of Rochester. It was funded by the Simons Foundation.

The study was published in the peer-reviewed journal Neuron.

The results of the study were well reported by the Mail Online.

 

What kind of research was this?

This was a laboratory and animal study that aimed to determine whether a process called autophagy (the removal and degradation of cell structures and proteins) is involved in the remodelling of synapses (nerve connections), and whether this involves signalling through a protein called mTOR.

They also wanted to see whether this process was defective in autism spectrum disorder (ASD).

Laboratory and animal-based research is ideal for answering these sorts of questions. However, it means that any application to human health is probably a long way off.

 

What did the research involve?

The researchers initially examined at post-mortem the brains of people with ASD and people without ASD. They were particularly interested in nerve cell structures called “dendritic spines”, which receive signals from other nerve cells.

The researchers performed experiments with mice genetically engineered to have symptoms of ASD. In these mouse models the signalling protein mTOR is dysregulated.

The researchers also performed further experiments to study the effects of mTOR dysregulation and blockage of autophagy.

 

What were the basic results?

From examining the brains of people with ASD and comparing them with the brains of people without ASD the researchers found that the density of dendritic spines was significantly higher in ASD.

Brain development after birth involves both the formation of new nerve connections and the pruning/elimination of others. The formation of new connections exceeds pruning during childhood, but during adolescence connections are eliminated as the remaining synapses are selected and matured.

When the researchers compared the brains of children (aged between two and nine) and adolescents (aged between 13 and 20) they found that spine density was slightly higher in children with ASD compared to controls, but was markedly higher in adolescents with ASD compared to controls.

From childhood through adolescence, dendritic spines decreased by approximately 45% in control subjects, but by only approximately 16% in those with ASD. The researchers concluded that people with ASD have a developmental defect in spine pruning/elimination.

The researchers found there were higher levels of the activated version of the signalling protein mTOR in adolescent ASD brains than brains without ASD. They also found ASD brains were not performing as much autophagy as brains without ASD.

The researchers then performed experiments using mouse models of ASD that had dysregulated mTOR. They found the mice had spine pruning defects. These defects could be improved by treating the mice with a chemical called rapamycin, which inhibits mTOR. The nerve cells of the mouse models of ASD also performed less autophagy, and this too was corrected by rapamycin treatment. Rapamycin also improved the social behaviour of the mice on behavioural tests.

 

How did the researchers interpret the results?

The researchers conclude that their “findings suggest mTOR-regulated autophagy is required for developmental spine pruning, and activation of neuronal autophagy corrects synaptic pathology and social behaviour deficits in ASD models with hyperactivated mTOR".

 

Conclusion

This study has found that brains of people with ASD have more nerve cell structures called “dendritic spines”, which receive signals from other nerve cells, than the brains of people without ASD. More of the signalling protein mTOR was found to be in its activated state and a process called autophagy, which the cell uses to remove and degrade cell structures and proteins, was impaired in brains from people with ASD.

Genetically engineered mice with hyperactivated mTOR display autistic-like symptoms, have more dendritic spine pruning defects and impaired autophagy. All of these could be reversed with treatment with an inhibitor of mTOR called rapamycin.

Rapamycin is a type of antibiotic, and is currently used in medicine as an immunosuppressant to prevent organ rejection after kidney transplantation.

However, it has been associated with a range of adverse effects. As the Mail points out, this research is in its very early stages. It mainly helps our understanding of the brain changes that may be involved in this condition.

It is too soon to say whether it could lead to any treatment for autism spectrum disorders, and even if it does it is likely to be a long way off.

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Scientists discover people with autism have too many brain 'connections'. Mail Online, August 22 2014

Links To Science

Tang G, Gudsnuk K, Kuo S, et al. Loss of mTOR-Dependent Macroautophagy Causes Autistic-like Synaptic Pruning Deficits. Neuron. Published online August 21 2014

Categories: Medical News

Dual vaccine approach could help eradicate polio

Medical News - Fri, 08/22/2014 - 14:00

Double vaccines "could hasten the end of polio", BBC News reports. Researchers in India found that using a combination of the oral and injected vaccines provided enhanced protection against the disease.

Polio is a viral infection that can cause paralysis and death. Thanks to initiatives such as the NHS Childhood Vaccination Schedule, it is now largely a disease of the past, remaining endemic in only three countries: Afghanistan, Nigeria and Pakistan. It is hoped that the disease could be entirely eradicated in the same way as smallpox.

There are two types of polio vaccine: the oral polio vaccine, which contains weakened strains of polio, and a vaccine known as the Salk inactivated poliovirus vaccine (IPV), which contains chemically inactivated poliovirus and is given by injection.

A new study, performed in India, found that giving a booster injection with the Salk IPV to children who had already been given the oral vaccine boosted gut immunity. This was demonstrated by the fact that fewer children had virus in the faeces after they received a challenge dose (an additional dose) of oral vaccine.

On the basis of this study’s results, the World Health Organization (WHO) is recommending that at least one dose of Salk inactivated poliovirus vaccine is added to routine vaccination schedules, instead of the all-oral vaccination schedule that many countries have.

Hopefully, the ambition of eradicating polio will be achieved in the coming years.

 

Where did the story come from?

The study was carried out by researchers from the WHO, the US Centers for Disease Control and Prevention, Imperial College London, the Enterovirus Research Centre in India and Panacea Biotech Ltd. Funding was provided by the Rotary International Polio Plus Program.

The study was published in the peer-reviewed journal Science. This article is open-access, so is free to download and read.

The results of the study were well reported by BBC News. Additional insight into the challenges of vaccinating children in conflict-ridden areas, such as Taliban-dominated areas of Afghanistan, was also provided.

 

What kind of research was this?

This was a randomised controlled trial. The researchers wanted to see if giving children a booster injection with the Salk inactivated poliovirus vaccine (IPV) could boost “mucosal” immunity, which includes immunity in the gut. This is because poliovirus can replicate in the guts of people who have been vaccinated but who don’t have strong mucosal immunity, and can therefore continue to be spread in faeces.

 

What did the research involve?

To do this, they randomised 954 children in India (in three age groups: infants aged 6 to 11 months, children aged 5 and children aged 10) who had already been vaccinated with the oral polio vaccine to booster injections with:

  • the Salk IPV
  • another dose of the oral polio vaccine
  • no vaccine

Four weeks later, children received a challenge dose of the oral polio vaccine, and the researchers measured the amount of poliovirus that was in their faeces after 3, 7 and 14 days. The researchers were interested in two types of poliovirus: poliovirus type 1 and poliovirus type 3. They wanted to see if the booster injection with the Salk IPV reduced the number of children with either of these two polioviruses in the faeces.

 

What were the basic results?

Infants aged between 6 and 11 months
  • Booster injections with the Salk IPV significantly reduced the proportion of infants with type 3 poliovirus in their faeces compared to no vaccine, but did not significantly alter the proportion of infants with type 1 poliovirus in their faeces.
  • Another dose of the oral polio vaccine did not significantly alter the proportion of infants excreting poliovirus, compared to no vaccine.
Children aged 5
  • Booster injections with the Salk IPV significantly reduced the proportion of children aged 5 with type 1 or type 3 poliovirus in their faeces, compared to no vaccine.
  • Another dose of the oral polio vaccine did not significantly alter the proportion of children excreting poliovirus, compared to no vaccine.
Children aged 10
  • Booster injections with the Salk IPV significantly reduced the proportion of children aged 10 with type 1 or type 3 poliovirus in their faeces, compared to no vaccine.
  • Another dose of the oral polio vaccine also significantly reduced the number of children aged 10 with type 1 or type 3 poliovirus in their faeces, compared to no vaccine.
Overall

When all the age groups were considered together, booster injections with the Salk IPV significantly reduced the proportion of children with type 1 or type 3 poliovirus in their faeces compared to no vaccine, while another dose of the oral polio vaccine had no significant effect.
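The arm-versus-control comparisons above boil down to comparing the proportions of children excreting poliovirus after the challenge dose. As a minimal sketch of how such a comparison is made, here is a risk-difference calculation with an approximate Wald 95% confidence interval; the function name and all of the counts are hypothetical illustrations, not figures from the paper:

```python
import math

def shedding_risk_difference(shed_vax, n_vax, shed_ctl, n_ctl, z=1.96):
    """Difference in the proportion excreting poliovirus (vaccine arm minus
    control arm), with an approximate Wald 95% confidence interval."""
    p_vax = shed_vax / n_vax
    p_ctl = shed_ctl / n_ctl
    diff = p_vax - p_ctl
    # Standard error of the difference between two independent proportions
    se = math.sqrt(p_vax * (1 - p_vax) / n_vax + p_ctl * (1 - p_ctl) / n_ctl)
    return diff, diff - z * se, diff + z * se

# Hypothetical counts: 12 of 100 IPV-boosted children shed virus vs 30 of 100 controls
diff, lo, hi = shedding_risk_difference(12, 100, 30, 100)
# An interval lying entirely below zero corresponds to a statistically
# significant reduction at roughly the 5% level.
```

The phrase "significantly reduced the proportion" used throughout the results corresponds to such an interval excluding zero (or an equivalent statistical test).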

 

How did the researchers interpret the results?

The researchers conclude that their study "provides strong evidence that IPV boosts intestinal immunity among children with a history of multiple [oral poliovirus vaccine] doses more effectively than an additional [oral poliovirus vaccine] dose".

They go on to report that “as a result, the WHO is no longer recommending an all-[oral poliovirus vaccine] schedule; rather, it recommends that all-[oral poliovirus vaccine] using countries introduce [at least] one dose of the IPV into routine vaccination schedules".

 

Conclusion

This randomised controlled trial has found that a booster vaccination with the Salk inactivated poliovirus vaccine (IPV) can boost gut immunity against polioviruses in infants and children who have already received multiple doses of the oral vaccine.

It appears that receiving both vaccines is key, as the researchers report that the ability of the Salk IPV alone to induce gut immunity is limited. They say that studies in countries that do not use the oral vaccine show that more than 90% of children given the IPV excrete challenge poliovirus. However, the researchers also say the oral vaccine has been reported to give incomplete intestinal immunity, which deteriorates over time.

Polio is transmitted by the faecal-oral route, either by exposure to faecally contaminated food or water, or by person-to-person contact. These findings are important, as in many of the parts of the world where polio is a problem, the standards of sanitation are poor. This means the potential for children to contract the disease by coming into contact with infected faeces passed by someone with weakened intestinal immunity is high.

The researchers also note one limitation to their study: it was performed in one district of India, and therefore extrapolation or generalisation of these findings must be done with caution. Despite this, on the basis of the results of this study the WHO is recommending that at least one dose of Salk IPV is added to routine vaccination schedules instead of the all-oral vaccination schedule that many countries have.

The UK vaccination schedule will remain unchanged, as all children should be given the IPV vaccinations as part of the routine vaccination schedule. 

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Double vaccines 'could hasten the end of polio'. BBC News, August 22 2014

Polio double vaccine gives better protection, study finds. The Guardian, August 22 2014

Links To Science

Jafari H, Deshpande JM, Sutter RW, et al. Efficacy of inactivated poliovirus vaccine in India. Science. Published online August 22 2014

Categories: Medical News

Botox may be useful in treating stomach cancers

Medical News - Thu, 08/21/2014 - 15:30

"Botox may have cancer fighting role," BBC News reports after research involving mice found using Botox to block nerve signals to the stomach may help slow the growth of stomach cancers. Botox, short for botulinum toxin, is a powerful neurotoxin that can block nerve signals.

The researchers studied genetically modified mice designed to develop stomach cancer as they grew older.

They found that mice treated with Botox injections had improved survival rates, because the cancer spread at a reduced rate or was prevented from developing in the first place.

Cutting the nerve supply to the stomach during an operation called a vagotomy had a similar effect.

In mice that had already developed stomach cancer, Botox injections reduced cancer growth and improved survival rates when combined with chemotherapy.

Further studies of human stomach cancer samples confirmed the finding that nerves play a role in tumour growth.

An early-phase human trial is now underway in Norway to determine the safety of such a procedure and to work out how many people would need to be treated in trials, to see whether the treatment is effective.

 

Where did the story come from?

The study was carried out by researchers from the Norwegian University of Science and Technology in Trondheim, Columbia University College of Physicians and Surgeons in New York, and universities and institutes of technology in Boston, Germany and Japan.

It was funded by the Research Council of Norway, the Norwegian University of Science and Technology, St Olav's University Hospital, the Central Norway Regional Health Authority, the US National Institutes of Health, the Clyde Wu Family Foundation, the Mitsukoshi Health and Welfare Foundation, the Japan Society for the Promotion of Science Postdoctoral Fellowships for Research Abroad, the Uehara Memorial Foundation, the European Union Seventh Framework Programme, the Max Eder Program of the Deutsche Krebshilfe and the German Research Foundation.

The study was published in the peer-reviewed medical journal Science Translational Medicine.

The study was reported accurately by the UK media, which made it clear that this potential treatment is not yet available and that it will take years to assess its potential.

 

What kind of research was this?

This research was a collection of experiments on mice and studies of human tissue samples. Previous research had shown that cutting the main nerve to the stomach (vagus) in a procedure called a vagotomy reduces the thickness of the stomach wall and decreases cell division.

Another research study found people who had a vagotomy had a 50% reduced risk of developing stomach cancer 10 to 20 years later. The researchers wanted to see if targeting the nerve would reduce stomach cancer growth.

 

What did the research involve?

Genetically modified mice designed to develop stomach cancer by 12 months of age were studied to see if there was a link between the density of nerves and stomach cancer.

One of four different types of operation was then performed on the vagus nerve of 107 genetically modified mice at the age of six months to see if this made a difference to the development of stomach cancer. The operation was one of the following:

  • a sham operation
  • pyloroplasty (PP) – surgery to widen the valve at the bottom of the stomach so the stomach can empty food more easily
  • bilateral vagotomy with pyloroplasty (VTPP) – cutting both sections of the vagus nerve and widening the valve
  • anterior unilateral vagotomy (UVT) – cutting just the front section of the vagus nerve

The researchers then performed a Botox procedure on another set of mice by injecting the anterior vagus nerve (front section) when they were six months old to see if this reduced the development of stomach cancer.

To see if cutting or injecting the nerve had any effect after stomach cancer had developed, the researchers performed UVT on mice aged 8, 10 or 12 months and compared their survival rate with mice who had not had the intervention.

They then injected Botox into the stomach cancer of mice aged 12 months and looked at the subsequent cancer growth. They also compared survival rates for chemotherapy with saline injection, chemotherapy with Botox and chemotherapy with UVT.

The researchers then examined human stomach samples from 137 people who had undergone an operation for stomach cancer, to look at how active the nerves were in the sections of cancer compared with normal tissue.

They also compared tissue samples of 37 people who had already had an operation for stomach cancer, but then developed stomach cancer in the base portion of the stomach. The vagus nerve had been cut in 13 of these people.

 

What were the basic results?

The genetically modified mice mostly developed stomach cancer in the section of the stomach that had the highest density of nerves.

Cutting the vagus nerve supply reduced the incidence of tumours developing. The percentage of mice that had tumours six months after the operation was:

  • 78% after the sham surgery
  • 86% after PP
  • 17% after VTPP
  • 14% in the front section of the stomach (where the nerve had been cut) and 76% in the back section (where the vagus nerve was still intact) after UVT

Six months after the Botox injection into the anterior vagus nerve, the mice still developed stomach cancer. However, the size of the tumour and number of dividing cancer cells in the front section of the stomach was less than half that of the back section.

In mice that had already developed stomach cancer, the normal survival rate was 53% by 18 months, but this was increased by the UVT to:

  • 71% if the UVT was performed at 8 months
  • 64% if the UVT was performed at 10 months
  • 67% if the UVT was performed at 12 months

Botox injection into the stomach tumours of mice reduced the growth by roughly half. Botox and chemotherapy improved mouse survival compared with chemotherapy on its own, as did UVT and chemotherapy.

In the human samples, there was evidence of increased nerve activity in the cancer sections of tissue compared with the normal tissues. This was higher in more advanced tumours.

All 24 people who had not had the vagus nerve cut developed stomach cancer in the base, as well as the remaining front and back sections of the stomach. Only one of the 13 people who had had the vagus nerve cut developed cancer in the front or back section of the stomach, suggesting that the nerve needed to be intact for cancer to develop.

 

How did the researchers interpret the results?

The researchers say that their "finding that nerves play an important role in cancer initiation and progression highlights a component of the tumour microenvironment contributing to the cancer stem cell niche.

"The data strongly supports the notion that denervation and cholinergic antagonism, in combination with other therapies, could represent a viable approach for the treatment of gastric cancer and possibly other solid malignancies."

 

Conclusion

These laboratory experiments show that nerves have a role in the development and advancement of stomach cancer. The early experiments in mice found that stopping the nervous supply by either cutting the vagus nerve or injecting it with Botox improved survival rates and reduced cancer growth.

The Botox injections were not performed on any humans in this study. However, an early-phase clinical trial in humans with inoperable stomach cancer began in Norway in January 2013, with the results expected in 2016.

This will determine the safety of such a procedure and work out the number of people who would need to be treated in a larger controlled trial to see whether the treatment is effective.

You can reduce your risk of stomach cancer by quitting smoking if you smoke and moderating your consumption of salt and smoked meats, such as pastrami.

Stomach cancer has also been linked to a chronic infection by H. pylori bacteria, a common cause of stomach ulcers.

If you find yourself having persistent bouts of indigestion or stomach pain, you should contact your GP for advice. The symptoms could be caused by an H. pylori infection, which is relatively straightforward to treat.

Analysis by Bazian. Edited by NHS Choices.

Links To The Headlines

Botox may have cancer fighting role. BBC News, August 21 2014

Botox could be used as new treatment for stomach cancer as scientists discover anti-wrinkle treatment slows tumour growth. Daily Mail, August 21 2014

Botox 'could be used to treat stomach cancer'. Daily Mirror, August 21 2014

Botox could halt stomach cancer. The Daily Telegraph, August 21 2014

Links To Science

Zhao C, Hayakawa Y, Kodama Y, et al. Denervation suppresses gastric tumorigenesis. Science Translational Medicine. Published online August 20 2014

Categories: Medical News

'Fat and 30' link to dementia is inconclusive

Medical News - Thu, 08/21/2014 - 14:00

“People as young as 30 who are obese may be at greater risk [of dementia],” The Independent reports.

This UK study examined a set 14-year period (1998 to 2011) and looked at whether NHS hospital records documenting obesity in adults above the age of 30 were associated with subsequent hospital or mortality records documenting dementia in the remaining years of the study.

Overall there was actually no significant association between obesity and dementia in later life.

When the researchers broke down the data into 10-year age bands (30s, 40s, 50s and 60s) they found that people in these age groups had an increased risk of dementia. However, it must be remembered that the researchers were not looking at lifetime dementia diagnoses, only at diagnoses made in the remaining years of the study. Very few people in the younger age groups would have developed dementia over the following few years.

For example, the study found a more than trebled risk of dementia for people with obesity in their 30s, but this was based on only 19 people who developed dementia during the remaining years of the study. Calculations based on small numbers may be less reliable and should be given less "weight".

As expected the greatest number of subsequent dementia diagnoses occurred in people who were 70 or above when obesity was assessed, and obesity did not increase dementia risk in these people.

Aside from any possible dementia link, overweight and obesity are well established to be associated with a variety of chronic diseases, and a healthy weight should be the aim.

 

Where did the story come from?

The study was carried out by two researchers from the University of Oxford and was funded by the English National Institute for Health Research.

The study was published in the peer-reviewed Postgraduate Medical Journal.

The UK media failed to report the various limitations of this research. These include the lack of a significant association with dementia overall for the total cohort.

And while significant associations for people between the ages of 30 and 60 were found, these are based on only very small numbers who developed dementia during the study so may be less reliable.

The links between obesity and vascular dementia specifically do seem more apparent, but this is to be expected given their shared risk factors.

It is also not clear in the study where the 50% increased risk for people in middle age comes from.

 

What kind of research was this?

This was a retrospective cohort study that aimed to examine how obesity in middle age may be associated with the risk of subsequent dementia.

The researchers say the worldwide prevalence of dementia in 2010 was around 35.6 million cases, a figure estimated to almost double to 65.7 million by 2030.

Meanwhile we are in the midst of an obesity epidemic, with the World Health Organization reporting that in 2008 just over a third of all adults were overweight (BMI over 25kg/m²) while 10% of men and 14% of women were obese (BMI over 30kg/m²).

As the researchers say, with the rapidly increasing burden of dementia, it is important to identify which modifiable risk factors are associated. The researchers say there is growing evidence that mid-life obesity is associated with “dementia” overall.

Dementia is just the general term for problems with memory and thinking, which has different causes. Alzheimer’s disease is the most common cause of dementia, which is associated with characteristic symptoms and changes in the brain (the formation of protein plaques and tangles). The causes of Alzheimer’s are not fully understood, with increasing age and genetic factors being the most well established. Overweight and obesity are not currently established as risk factors for Alzheimer’s disease.

Meanwhile, vascular dementia – the second most common cause – has the same risk factors as cardiovascular disease, so there would be a plausible link between obesity and this type of dementia.

This study simply examined a set 14-year period (1998 to 2011) and looked at whether hospital records documenting obesity in adults of different ages were associated with subsequent documentation of dementia in the remaining years of the study.

 

What did the research involve?

This study used Hospital Episode Statistics (HES) data, which cover all hospital admissions, including day cases, in NHS hospitals in England between April 1998 and December 2011. The researchers also linked to Office for National Statistics (ONS) records to identify deaths up to December 2011.

The researchers identified a cohort of people with obesity by looking for the first admission or day care visit where obesity was recorded as a diagnosis (according to International Classification of Diseases [ICD] codes). They identified a comparison control cohort without obesity who had received day care or hospital admission for various medical or surgical conditions, or for injuries. They only included adults in the obesity and comparison groups who were aged 30 or older and did not have an admission for dementia at the same time as, or before, the date of admission when obesity was recorded.

For the obesity and comparison groups they searched the HES and ONS databases for all subsequent hospital care or deaths from dementia (according to ICD codes). The researchers say they subdivided admissions into those specifically documented to be due to Alzheimer’s disease or vascular dementia, and separately examined men and women.

They grouped obesity and comparison groups into 10-year age bands at the time obesity was first recorded, then compared their risk of dementia in the subsequent years. Adjustment was made for sex, time period of the study, region of residence and deprivation score.  

 

What were the basic results?

There were 451,232 adults in the obesity cohort, 43% of whom were male (number in the comparison cohort not specifically reported).

Overall, compared to controls, for the total cohort of all adults aged 30 or above, there was no statistically significant association between a hospital record of obesity and a subsequent record of dementia in the remaining years of the study (relative risk [RR] 0.98, 95% confidence interval [CI] 0.95 to 1.01).

However, when they were then split into 10-year age brackets, there was increased risk of subsequent dementia for people with obesity recorded in the age brackets:

  • 30 to 39 (RR 3.48, 95% CI 2.05 to 5.61)
  • 40 to 49 (RR 1.74, 95% CI 1.33 to 2.24)
  • 50 to 59 (RR 1.48, 95% CI 1.28 to 1.69)
  • 60 to 69 (RR 1.39, 95% CI 1.31 to 1.48)

There was no significant association between obesity and dementia for people with obesity between the ages 70 and 79, and an apparent decrease in risk of dementia for people above the age of 80 with obesity. 

When they looked by specific type of dementia, there was no clear link between obesity and Alzheimer’s disease. For the full cohort of adults aged 30 or over, obesity actually seemed to decrease the risk of subsequently developing Alzheimer’s disease (RR 0.63, 95% CI 0.59 to 0.67). Then by age group there was an apparent increased risk for those with obesity in the ages 30 to 39 (RR 5.37, 95% CI 1.65 to 13.7); no association for those between the ages 40 and 59; then decreased risk of Alzheimer’s for those with obesity above the age of 60.   

Obesity seemed to have a clearer link with risk of vascular dementia. The full cohort of adults aged 30 or over recorded to have obesity had a 14% increased risk of vascular dementia in the subsequent years of the study (RR 1.14, 95% CI 1.08 to 1.19). There were also significantly increased risks for all age groups up to the age of 69. For the 70 to 79 year age group there was no association, and for obese adults over the age of 80, obesity again seemed to decrease the risk.
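The relative risks and confidence intervals quoted above follow the standard log-RR (Wald) construction. Below is a minimal sketch of that calculation. The 19 dementia cases in the obese-30s group is a figure from the article, but every other count is invented for illustration, so the resulting numbers are illustrative only:

```python
import math

def relative_risk_ci(events_exp, n_exp, events_ctl, n_ctl, z=1.96):
    """Relative risk with an approximate 95% CI via the log-RR standard error."""
    rr = (events_exp / n_exp) / (events_ctl / n_ctl)
    # Standard error of log(RR) for independent binomial counts
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_ctl - 1 / n_ctl)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# 19 events is from the article; the other counts are invented for illustration.
rr, lo, hi = relative_risk_ci(19, 5_000, 55, 50_000)

# The same relative risk estimated from 100 times as many events:
# the interval narrows sharply, which is why an estimate based on
# over 1,000 cases is more reliable than one based on 19.
rr_big, lo_big, hi_big = relative_risk_ci(1_900, 500_000, 5_500, 5_000_000)
```

With only 19 events the interval spans close to a three-fold range, which is the quantitative sense in which an estimate based on so few cases is unreliable.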

 

How did the researchers interpret the results?

The researchers conclude that: “Obesity is associated with a risk of dementia in a way that appears to vary with age. Investigation of the mechanisms mediating this association might give insights into the biology of both conditions.”

 

Conclusion

As the researchers say: “The dataset spans 14 years and is therefore just a snapshot of people's lifetime experience of obesity.” The study is just looking at a set 14-year period (1998 to 2011) and looking at whether hospital records documenting obesity in adults of different ages, were associated with subsequent documentation of dementia in the remaining years of the study.

Therefore, not only is the study looking at a snapshot of obesity in a 14-year period, it is also looking at just a snapshot of time in which people could develop dementia in the remaining years of the study. For those in the cohort who were in their 70s or 80s when their obesity was recorded, you might expect the study to have a better chance of capturing whether those people were ever going to develop dementia in their lifetime. However, for most of the people in the cohort who were between the ages of 30 and 60, the likelihood of developing dementia in the remaining few years of the study was low.

Therefore, this study cannot reliably show whether or not obesity in mid-life is associated with developing dementia, as the follow-up timeframe will not have been long enough for most people. 

The main result of this study was that for all adults in the cohort there was no association between a hospital record of obesity and risk of any type of dementia in the subsequent years of the study.

Though the research did then find increased risks for 10-year age bands in the 30s, 40s, 50s and 60s, many of these analyses are based on only small numbers of people who developed dementia in the remaining years of the study.

For example, the largest estimate, a more than trebled risk of dementia for people with obesity in their 30s, was based on only 19 people who developed dementia during the remaining years of the study. An analysis based on such a small number of people has a much higher chance of error.
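The point about small numbers can be illustrated with a rough calculation: under a simple Poisson assumption, the relative standard error of a rate estimate scales as one over the square root of the number of events. This Python sketch (the event counts are the figures quoted above; the function is purely illustrative and not from the study) shows how much noisier the 19-case estimate is than the 1,037-case one:

```python
import math

def relative_se(cases: int) -> float:
    """Approximate relative standard error of a rate estimate,
    assuming the event count follows a Poisson distribution."""
    return 1 / math.sqrt(cases)

# 19 dementia cases among those with obesity in their 30s,
# versus 1,037 among those with obesity in their 60s
print(round(relative_se(19), 2))    # 0.23: roughly 23% relative uncertainty
print(round(relative_se(1037), 2))  # 0.03: roughly 3% relative uncertainty
```

This is why the more than trebled risk estimate comes with such a wide confidence interval.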

The 39% increased risk for people with obesity in their 60s was more reliable as this included 1,037 people from this age band who subsequently developed dementia.

But then the pattern is less clear, as for people with obesity in their 70s, of whom the largest number developed dementia (2,215), there was no association between obesity and dementia.

Meanwhile people who were obese in their 80s seemed to have decreased risk of then developing dementia.

Overall this makes a confusing picture from which to obtain any clear understanding of how obesity is associated with dementia. And it seems possible that various confounding hereditary, health and lifestyle factors may be having an influence.

Looking at Alzheimer’s specifically, there was no clear link with adult obesity, so the study doesn’t provide evidence of obesity as a modifiable risk factor for the most common type of dementia. The only increased risk was for people with obesity in their 30s, but as only five people in this group developed Alzheimer’s in the remaining study years, this association is far from reliable. In fact, for people over the age of 60, obesity apparently seemed to be protective against Alzheimer’s, though again this could well be due to confounding from other factors.

As said, vascular dementia – the second most common type – shares risk factors with cardiovascular disease, so a link between obesity and this type of dementia is plausible. This study supports that link, finding that for the overall cohort of adults above the age of 30, obesity was associated with a 14% increased risk of vascular dementia.

Another point to bear in mind is that, though the study benefits from a large, reliable dataset of HES and ONS data in which obesity and dementia were recorded using valid diagnostic codes, it is of course only looking at hospital presentations of both conditions.

It is therefore unable to capture the large number of people with both of these conditions who may not have accessed hospital care.

Overall, this study contributes to the literature examining how the obesity epidemic may be associated with the growing prevalence of dementia worldwide. However, it provides little in the way of conclusive answers.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Further evidence that obesity in middle age increases dementia risk. The Independent, August 21 2014

How being fat in your 30s could triple the risk of dementia: Person's age at which they are classified as obese found to be key in chance of developing condition. Daily Mail, August 21 2014

Slim to reduce the risk of dementia, middle-aged told. The Times, August 21 2014

Dementia risk TRIPLES if you get fat in your thirties. Daily Express, August 21 2014

Links To Science

Wotton CJ, Goldacre MJ. Age at obesity and association with subsequent dementia: record linkage study. Postgraduate Medical Journal. Published online August 20 2014

Categories: Medical News

Is breastfeeding inability causing depression?

Medical News - Wed, 08/20/2014 - 14:30

Mothers who plan, but are unable, to breastfeed their babies are more likely to suffer from postnatal depression, report BBC News and The Independent.

A study of 14,000 women in England found that those who planned to breastfeed but had not managed to were two-and-a-half times more likely to develop postnatal depression, compared to women who had no intention of breastfeeding.

Around 1 in 10 women develop postnatal depression, which is not the same as the “baby blues”, but a serious illness that can affect a mother’s ability to bond with her baby. It can also affect the baby’s longer-term development.

It can develop within the first six weeks of giving birth, but is often not apparent until around six months. It’s important to get professional help if you think you may be suffering from this illness.

The study had several limitations. For example, both antenatal and postnatal depression were self-reported rather than clinically diagnosed, which may make the results less reliable.

Due to the nature of the study’s design, it cannot prove that not breastfeeding raises the risk of postnatal depression. However, it highlights the need to support new mothers who want to breastfeed but are unable to do so.

 

Where did the story come from?

The study was carried out by researchers from the University of Seville, University of Cambridge, University of Essex and University of London. It was funded by the UK’s Economic and Social Research Council. The study was published in the peer-reviewed Maternal and Child Health Journal.

The Mail Online’s claim that “choosing not to” breastfeed doubles the risk of postnatal depression was misleading and oversimplified the study’s results.

The media did not point out that the majority of results were compared to women who did not want to breastfeed (and, subsequently, didn’t). For example, the doubled risk of postnatal depression for women who wanted to breastfeed but couldn’t was compared to women who did not want to breastfeed and didn’t. Most of the associations reported by the media were only significant at eight weeks after birth, and not significant beyond that.

As the authors point out, their results on the association between maternal depression and breastfeeding were very mixed. The link between not breastfeeding and postnatal depression seems to depend on whether or not a woman planned to breastfeed in the first place, as well as her mental health during pregnancy.

 

What kind of research was this?

Researchers used data from a longitudinal survey of about 14,000 children born in the early 1990s, conducted by the University of Bristol, which looked at child health and development.

The authors point out that about 3% of women experience postpartum depression (PPD) within 14 weeks of giving birth. Overall, as many as 19% of women have a depressive episode during pregnancy or the three months after birth. However, they say the effects of breastfeeding on the risk of PPD are not well understood.

The researchers aimed to examine how breastfeeding affects a mother’s mental health and, in particular, if the relationship between breastfeeding and maternal mental health is mediated by whether or not the mother intended to breastfeed.

The relationship between breastfeeding and the risk of PPD, they say, may be driven by biological factors, such as the difference in hormone levels between breast- and formula-feeding mothers. However, it may also be affected by feelings of success or failure over breastfeeding.

As this was a cohort study, it can only show an association; it cannot prove that not breastfeeding causes PPD.

 

What did the research involve?

The researchers used a sample of just over 14,000 women, who were recruited into the survey by doctors when they first reported their pregnancy. Data for the study was collected by questionnaires administered to both parents at four points during pregnancy, and at several stages following birth.

Researchers used a validated measure of depression called the Edinburgh Postnatal Depression Scale (EPDS), which is designed to screen for PPD. This was conducted when women were 18 and 32 weeks pregnant, and again at 8 weeks and at 8, 21 and 33 months after the birth.

The EPDS consists of 10 questions, each with four possible answers, to describe the severity of depressive symptoms. Total scores range from 0 to 30. Following guidelines, the researchers used a score of more than 14 to indicate depression during the antenatal period and more than 12 to indicate depression after birth.
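As a concrete illustration of the cut-offs described above, this hypothetical Python helper (not part of the study's analysis; the function name is our own) applies the two thresholds to a total EPDS score:

```python
def epds_flags_depression(score: int, postnatal: bool) -> bool:
    """Apply the study's cut-offs to a total EPDS score (0 to 30):
    above 14 indicates antenatal depression, above 12 postnatal."""
    if not 0 <= score <= 30:
        raise ValueError("EPDS totals range from 0 to 30")
    return score > 12 if postnatal else score > 14

print(epds_flags_depression(13, postnatal=True))   # True
print(epds_flags_depression(13, postnatal=False))  # False
```

Note that a total of 13 or 14 counts as depression after birth but not during pregnancy, reflecting the different thresholds used for the two periods.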

Mothers were asked during pregnancy how they intended to feed their babies for the first four weeks. Following their child’s birth, they were asked at several points how they were actually feeding, and the ages at which infant formula and solid foods were introduced.

Researchers included in their analysis how long mothers had breastfed for and how long they had breastfed exclusively.

They identified four groups of women: 

  • mothers who had not planned to breastfeed, and who did not breastfeed (reference group)
  • mothers who had not planned to breastfeed, but who did actually breastfeed
  • mothers who had planned to breastfeed, but who did not actually breastfeed
  • mothers who had planned to breastfeed, and who did actually breastfeed

Using statistical methods, they presented several models of the relationship between breastfeeding and depression, controlling for different factors such as the child’s sex, parents' education and information on the pregnancy and birth. The most reliable model takes account of as many factors as possible, including the mother’s physical and mental health, whether she was depressed in pregnancy, the quality of her personal relationships and the experience of stressful life events. 

After conducting this analysis for the whole sample, they split the sample into mothers who were and who were not depressed during pregnancy; for each group, they examined the differences in outcomes between women who had planned to breastfeed, and women who had not.

 

What were the basic results?

Researchers found that 7% of women suffered depression at 18 weeks of pregnancy and 8% at 32 weeks. 9-12% of new mothers suffered from PPD. 

Breastfeeding was initiated by 80% of mothers and 74% breastfed for one week or more. By four weeks, 56% of mothers were breastfeeding at all and 43% were breastfeeding exclusively.

Researchers found that for the sample as a whole, there was little evidence of a relationship between breastfeeding and the risk of PPD. After adjusting for all of the factors, it was found that women who exclusively breastfed for 4 weeks or more were 19% less likely to have PPD 8 weeks after giving birth (odds ratio [OR] 0.81, 95% confidence interval [CI] 0.68 to 0.97). This was not significant at 8, 21 or 33 months.

However, they then calculated the results according to whether mothers had been depressed during pregnancy, and whether they had planned to breastfeed their babies. 

In mothers without any depressive symptoms during pregnancy, they found that the lowest risk of PPD by 8 weeks was among women who had planned to breastfeed and did so. For example, compared to women who did not plan to breastfeed and didn’t, women who exclusively breastfed for 2 weeks or more were 42% less likely to develop PPD by 8 weeks (OR 0.58, 95% CI 0.35 to 0.96).

The highest risk was found among women who had planned to breastfeed, but had not initiated breastfeeding. They were two-and-a-half times more likely to develop PPD by 8 weeks compared to women who did not plan to breastfeed and didn’t (OR 2.55, 95% CI 1.34 to 4.84).

For women who had shown signs of depression during pregnancy, there was no difference in risk of PPD for women who had planned to breastfeed but couldn’t. The only statistically significant result was for those women who had not planned to breastfeed, but did exclusively for four weeks. Their risk of PPD was reduced by 58% compared to women who had not planned to breastfeed and didn’t (OR 0.42, 95% CI 0.20 to 0.90).

There was no significant difference in risk of PPD between any of the planned or not planned breastfeeding groups at 8, 21 or 33 months.

 

How did the researchers interpret the results?

The authors say the effect of breastfeeding on the risk of maternal depression depends on breastfeeding intentions during pregnancy and on mothers’ mental health.

“Our results underline the importance of providing expert breastfeeding support to women who want to breastfeed, but also of providing compassionate support for women who had intended to breastfeed, but who find themselves unable to,” they argue.

 

Conclusion

This is a useful study but, as the authors point out, it does have some limitations. Both antenatal and postnatal depression were self-reported rather than clinically diagnosed, which may make the results less reliable.

Also, the fact that parents voluntarily entered the study may lead to bias. It’s worth noting that 95% of the women were white, so the results may not be generalisable to mothers from ethnic minorities.

Finally, although the researchers controlled for many possible confounders, there is the possibility that some unmeasured factor may have influenced results, such as a mother’s personality or IQ.

Many mothers who wish to breastfeed may find it difficult to do so for a range of reasons, but professional support can help. Postnatal depression is serious, but treatment is available.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Failing to breastfeed may double risk of depression in mothers: study. The Daily Telegraph, August 19 2014

Breastfeeding 'cuts depression risk', according to study. BBC News, August 20 2014

Mothers who breastfeed are 50% less likely to suffer postnatal depression. The Independent, August 20 2014

Links To Science

Borra C, Iacovou M, Sevilla A. New Evidence on Breastfeeding and Postpartum Depression: The Importance of Understanding Women’s Intentions. Maternal and Child Health Journal. Published online August 2014

Categories: Medical News

Common antibiotic linked to 'tiny' rise in heart deaths

Medical News - Wed, 08/20/2014 - 13:15

An antibiotic given to millions of people in the UK to treat chest infections has been linked to an increased risk of heart death, report The Daily Telegraph and The Independent.

A Danish study of three antibiotics found the risk of death from any heart condition while taking the antibiotic clarithromycin is slightly higher than with penicillin V.

Clarithromycin is used for respiratory infections, and 2.2 million doses were prescribed in England in 2013. However, it is not recommended for people with abnormal heart rhythms.

Researchers compared the number of people who had a heart-related death after being put on a course of either clarithromycin, roxithromycin (not used in the UK) or penicillin.

The study, published online in the British Medical Journal, found there were an extra 37 cardiac deaths per 1 million courses of clarithromycin compared with penicillin.

But the risk was still very low. As this was a cohort study, it cannot prove that any of these deaths were as a result of taking clarithromycin, as it did not account for all of the other factors that could have influenced the results.

In particular, major risk factors for heart conditions such as smoking and obesity were not included in the analyses. When all factors the researchers did record were accounted for, there was no longer any statistically significant difference between clarithromycin and penicillin.

This study should not cause unnecessary concern – although there appears to be an increase in risk, this is tiny, at 0.01%.

 

Where did the story come from?

The study was carried out by researchers from the Statens Serum Institut in Copenhagen. They report there was no funding.

It was published in the peer-reviewed British Medical Journal (BMJ). It is available to read on the BMJ website.

The media reported the story reasonably accurately, but on the whole failed to point out quite how low the risk of cardiac death is on these antibiotics.

There were good quotes from UK experts about the fact that all drugs have some side effects and should therefore only be taken if they are really needed – this is particularly important for antibiotics given the increase in antibiotic resistance.

 

What kind of research was this?

This was a cohort study. It aimed to see if there was an increased risk of cardiac death while taking clarithromycin or roxithromycin compared with penicillin V.

Penicillin V is an antibiotic used for treating bacterial infections of the ear, throat, chest, skin and soft tissues.

Clarithromycin is an antibiotic used to treat bacterial chest infections, throat or sinus infections, skin and soft tissue infections, and Helicobacter pylori associated with peptic ulcers. It is not recommended for people with abnormal heart rhythms.

Roxithromycin is a similar type of antibiotic, but it is not used in the UK. All three are also used as prophylactic medication to prevent infections for people who are immunocompromised.

As this was a cohort study, it cannot prove that clarithromycin caused any cardiac deaths. This is because it does not take into account confounding factors that may have influenced the results. A randomised controlled trial would be required to prove causation.

 

What did the research involve?

The researchers compared the number of people who had a cardiac death during or in the 30 days after an outpatient course of either clarithromycin or roxithromycin, compared with penicillin V.

The nationwide Danish National Prescription Registry was used to identify all adults aged 40 to 74 who collected prescriptions for each antibiotic between 1997 and 2011.

Each time a person had a prescription of one of the drugs, this was included in the analysis as long as they were not in hospital and had not been prescribed an antibiotic in the previous 30 days. This means some people would have been included who had more than one antibiotic prescription.

The researchers collected data on cardiac deaths from the Danish Register of Causes of Death and looked at whether there was an association between taking either clarithromycin or roxithromycin compared with penicillin V, and having a cardiac death.

They looked at whether people had a cardiac death during the following two periods:

  • the seven days of likely antibiotic use from the start date of the prescription
  • eight to 37 days after the start date of the prescription

The researchers excluded people with serious disease (including cancer, neurological diseases or liver disease) and those deemed to be at high risk of death from non-cardiac causes.

They adjusted their analyses for a number of confounders, including sex, age, place of birth, time period, season, medical history, prescription drug use in the previous year, and use of healthcare in the previous six months.

 

What were the basic results?

There were 285 cardiac deaths during the first seven days after antibiotic prescription from a total of more than 5 million antibiotic prescriptions that met the study inclusion criteria. Of these, there were:

  • 18 deaths during 160,297 courses of clarithromycin (0.01%), incidence rate of cardiac death 5.3 per 1,000 person years
  • 235 deaths during 4,355,309 courses of penicillin V (0.005%), incidence rate of cardiac death 2.5 per 1,000 person years
  • 32 deaths during 588,988 courses of roxithromycin (0.005%), incidence rate of cardiac death 2.5 per 1,000 person years

After taking into account sex, age, cardiac risk score and the use of other drugs that are metabolised in a similar way, clarithromycin was associated with a 76% higher risk of cardiac death than penicillin V (adjusted rate ratio 1.76, 95% confidence interval [CI] 1.08 to 2.85).

The researchers say this would be equivalent to 37 extra cardiac deaths per 1 million treatment courses associated with clarithromycin compared with penicillin V (95% CI, 4 to 90). Roxithromycin was not associated with an increased risk.
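As a rough check on these figures, the crude (unadjusted) death rates can be recomputed from the raw counts quoted above. Note the crude excess comes out higher than the 37 per million quoted, because that published figure is derived from the adjusted rate ratio; this Python sketch is only an arithmetic illustration:

```python
# (cardiac deaths, prescription courses) in the first seven days, from the study
counts = {"clarithromycin": (18, 160_297), "penicillin V": (235, 4_355_309)}

per_million = {drug: deaths / courses * 1_000_000
               for drug, (deaths, courses) in counts.items()}
crude_excess = per_million["clarithromycin"] - per_million["penicillin V"]

print(round(per_million["clarithromycin"]))  # 112 deaths per million courses
print(round(per_million["penicillin V"]))    # 54 deaths per million courses
print(round(crude_excess))                   # 58 (crude, i.e. before adjustment)
```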

The risk was higher in women on clarithromycin (adjusted rate ratio 2.83, 95% CI 1.50 to 5.36) than in men (adjusted rate ratio 1.09, 95% CI 0.51 to 2.35), although the difference was not statistically significant.

The researchers also performed an additional analysis in which people who had taken clarithromycin were matched with people who had taken penicillin according to sex, age, place of birth, time period, season, medical history, prescription drug use in the previous year and use of healthcare in the previous six months. In this analysis, the increase in risk of cardiac death with clarithromycin compared with penicillin was no longer statistically significant (rate ratio 1.63, 95% CI 0.87 to 3.03).

Between 8 and 37 days after antibiotic prescription, when it was assumed that people had finished taking antibiotics, there were 364 cardiac deaths. Of these, there were:

  • 14 deaths after clarithromycin, incidence rate 1.3 per 1,000 patient years
  • 308 deaths after penicillin V, incidence rate 1.0 per 1,000 patient years
  • 42 deaths after roxithromycin, incidence rate 1.0 per 1,000 patient years

Neither clarithromycin nor roxithromycin was associated with an increased risk of cardiac death compared with penicillin after the presumed seven-day course.

 

How did the researchers interpret the results?

The researchers concluded this study "found a significantly increased risk of cardiac death associated with current use of clarithromycin, but not roxithromycin".

However, they also acknowledged that, "Before these results are used to guide clinical decision making, confirmation in independent populations is an urgent priority given the widespread use of macrolide antibiotics".

Clarithromycin and roxithromycin both belong to the macrolide class of antibiotics.

 

Conclusion

The conclusion that the risk of cardiac death during the use of clarithromycin is 76% higher than that for penicillin V was based on a small number of cardiac deaths. Cardiac death occurred during 0.01% of prescriptions of clarithromycin, compared with 0.005% of prescriptions for penicillin V.

A death rate just a bit higher than a very small death rate is still very small. This means that from an individual point of view, the risk of cardiac death from taking either antibiotic is minimal.

This study does not prove clarithromycin caused any cardiac deaths. It only showed a very small increased risk of cardiac death in the seven days after the prescription was collected in a select group of people. This did not include:

  • antibiotic use in hospitals
  • people with serious illnesses
  • long-term prophylactic use (to prevent infections), such as for those who are immunocompromised
  • people who did not improve and required an alternative antibiotic

The study also has several other limitations, including:

  • major risk factors for cardiac death, such as smoking and obesity, were not taken into account
  • the reason for taking each antibiotic was not known – clarithromycin is used for more types of infections than penicillin V, which may have influenced the results
  • clarithromycin is commonly used for people who are allergic to penicillin, but this factor was not assessed in the study
  • it was assumed that people who collected their prescriptions took the medication as prescribed for seven days

Also, when the researchers performed the additional analysis in which people who had taken clarithromycin were matched with people who had taken penicillin (according to sex, age, place of birth, time period, season, medical history, prescription drug use in the previous year and use of healthcare in the previous six months), the increase in risk of cardiac death with clarithromycin was no longer statistically significant.

Although it is already known that clarithromycin can affect the rhythm of the heart and is not recommended for people who have irregular heart rhythms, the study did not specifically look at cardiac death caused by an abnormal rhythm; instead, it grouped together all causes of death related to heart problems. This further limits the ability to establish how clarithromycin might be increasing this very small risk.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Antibiotic 'linked to heart deaths'. Daily Mail, August 20 2014

Common antibiotic linked to sudden heart deaths. The Daily Telegraph, August 20 2014

Common antibiotic linked to increased risk of heart disease. The Independent, August 20 2014

Links To Science

Svanström H, Pasternak B, Hviid A. Use of clarithromycin and roxithromycin and risk of cardiac death: cohort study. British Medical Journal. Published online August 2014

Categories: Medical News

Are good neighbours really life-savers?

Medical News - Tue, 08/19/2014 - 15:00

“Having good neighbours can help cut heart attack risk,” reports The Independent.

The paper reports on a nationally representative US study of over 5,000 adults over the age of 50.

People were asked about how they rated their neighbourhood social cohesion, then followed up for four years to see if they had a heart attack.

Social cohesion refers to how “neighbourly” people feel, and relates to feelings of security, connection to the area and trust of inhabitants. In this study, social cohesion was assessed by asking people how much they agreed with simple statements such as “people in this area are friendly” and “people in this area can be trusted”.

The study found that higher social cohesion was associated with a reduced risk of heart attack.

However, the association became non-significant (could have been the result of chance) once the researchers adjusted for all factors known to be associated with heart attack risk, such as smoking history, exercise and body mass index (BMI).

This makes it more difficult to draw any meaningful interpretation from these results. It's likely that any link between the risk of a heart attack and perceived social cohesion is being influenced by a varied mix of other factors.

While building social connections can bring mental health benefits, relying on your neighbours to cut your risk of a heart attack is probably unwise.

 

Where did the story come from?

The study was carried out by researchers from the University of Michigan. Sources of funding were not reported. 

The study was published in the peer-reviewed Journal of Epidemiology & Community Health.

This story was covered by The Independent, the Mail Online and The Daily Telegraph.

None of the coverage stated that the association between social cohesion and heart attack was no longer significant when all covariates were adjusted for.

However, the Telegraph did make the point that it is too early to make any definitive conclusions.

 

What kind of research was this?

This was a cohort study that investigated whether higher perceived neighbourhood social cohesion was associated with lower incidence of heart attack (myocardial infarction).

Cohort studies cannot show that higher social cohesion caused the reduction in heart attacks, as there could be many other factors responsible for any association seen.

 

What did the research involve?

The researchers analysed 5,276 people without a history of heart disease who were taking part in the Health and Retirement Study – a nationally representative study of American adults over the age of 50.

People were asked at the beginning of the study about how they rated their neighbourhood social cohesion. Social cohesion was measured by the participants’ agreement with the following statements:

  • “I really feel part of this area”
  • “If you were in trouble, there are lots of people in this area who would help you”
  • “Most people in this area can be trusted”
  • “Most people in this area are friendly”

There was then a follow-up period of four years to see if those studied had a heart attack, which was self-reported or reported by a proxy if the participant had died.

The researchers looked to see if people with higher perceived social neighbourhood cohesion had a reduced risk of heart attack.

 

What were the basic results?

During the four-year study, 148 people (2.81%) had a heart attack.

Each standard deviation (a measure of variation from the average) increase in perceived neighbourhood social cohesion was associated with a 22% reduced odds of heart attack after adjusting for age, gender, race, marital status, education and total wealth (odds ratio [OR] 0.78, 95% confidence interval [CI] 0.63 to 0.94).
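The conversion between an odds ratio and a percentage change in odds, used throughout this analysis, is simple arithmetic. This small illustrative Python function (ours, not the study's) makes it explicit:

```python
def pct_change_from_or(odds_ratio: float) -> float:
    """Convert an odds ratio into the percent change in the odds
    of the outcome relative to the reference group."""
    return (odds_ratio - 1) * 100

print(round(pct_change_from_or(0.78), 1))  # -22.0, i.e. 22% lower odds
```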

However, the association was no longer statistically significant if all potential confounders were adjusted for (age, gender, race/ethnicity, marital status, education level, total wealth, smoking, exercise, alcohol frequency, high blood pressure, diabetes, BMI, depression, anxiety, cynical hostility, optimism, positive affect, social participation and social integration) (OR 0.82, 95% CI 0.66 to 1.02).

The researchers also divided perceived neighbourhood social cohesion into four categories: low, low-moderate, moderate-high and high. When age, gender, race, marital status, education and total wealth were adjusted for, people with high perceived neighbourhood social cohesion were at reduced risk of heart attack compared to people with low social cohesion. Again, this association was no longer significant if all confounders were adjusted for.

 

How did the researchers interpret the results?

The researchers concluded that “higher perceived neighbourhood social cohesion may have a protective effect against myocardial infarction”.

 

Conclusion

This US cohort study found that higher social cohesion was associated with a reduced risk of heart attack. However, the association became non-significant once the researchers adjusted for all behavioural (such as smoking or exercise), biological (such as BMI) and psychosocial (such as depression) factors that could act as potential confounders.

It is difficult to draw any meaningful interpretation from these results. Perceived social cohesion in this study was only measured by asking people how much they agreed with four simple statements about whether they liked living in the area, whether people in the area were friendly and if they could be trusted. This tells us little about the sociodemographic structure of the area, or the individuals’ interpersonal relationships with others.

Also, despite the large initial sample size, there were relatively few heart attacks over the four years. Heart attack cases were also identified by self-report, or report from a proxy, rather than by a review of medical records, which may have introduced errors.

There are a variety of biological, hereditary and lifestyle factors that are well known to be associated with greater risk of cardiovascular disease, and various other psychological ones that have been speculated (such as stress).

As the results of this study suggest, it is likely that any link between risk of heart attack and perceived social cohesion is being influenced by a varied mix of other factors.

If you want to try and reduce your risk of a heart attack, maintaining a healthy weight through diet and exercise, avoiding smoking and limiting alcohol intake are a great start. 

Simply relying on your neighbours to cut your risk of heart attack is probably unwise.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Having good neighbours can help cut heart attack risk, study shows. The Independent, August 18 2014

Friendly neighbours could lower the risk of heart attack, study finds. The Daily Telegraph, August 18 2014

Good neighbours can keep your heart healthy: Chance of a heart attack found to be a fifth lower if you live in a friendly area. Daily Mail, August 19 2014

Links To Science

Kim ES, Hawes AM, Smith J. Perceived neighbourhood social cohesion and myocardial infarction. Journal of Epidemiology and Community Health. Published online August 18 2014

Categories: Medical News

Targeted brain stimulation 'could aid stroke recovery'

Medical News - Tue, 08/19/2014 - 03:00

"Stimulating the part of the brain which controls movement may improve recovery after a stroke," BBC News reports after researchers used lasers to stimulate a particular region of the brain with promising results in mice.

The researchers were looking at a sub-type of stroke known as ischaemic stroke, where a blood clot blocks the supply of blood to part of the brain.

With prompt treatment an ischaemic stroke is survivable, but even a temporary block to the blood supply can cause brain damage, which can impact on multiple functions such as movement, cognition and speech. Attempting to recover these functions is now an important aspect of post-stroke treatment.

The researchers used a technique called optogenetics in this study. Optogenetics combines genetics and light: genetic techniques are used to make certain brain cells sensitive to the effects of light, which is then produced by a laser and delivered through an optical fibre.

The researchers used light to stimulate an area of the brain (the primary motor cortex) in mice which had stroke-related brain damage. After stimulation, the mice's performance improved in behaviour tests assessing sensation and movement.

But to use this technique in humans, brain cells would have to be made sensitive to light, possibly by introducing a gene coding for a light-sensitive channel into nerve cells using gene therapy techniques. It is unclear whether this would be feasible based on current technology and techniques.

 

Where did the story come from?

The study was carried out by researchers from Stanford University School of Medicine in the US.

It was funded by the US National Institutes of Health (including a National Institute of Neurological Disorders and Stroke grant), Russell and Elizabeth Siegelman, and Bernard and Ronni Lacroute.

The study was published in the peer-reviewed journal PNAS.

The research was well reported by BBC News.

 

What kind of research was this?

This animal study aimed to determine whether stimulating nerve cells in certain undamaged parts of the brain could help recovery in a mouse model of stroke.

Animal research such as this is a useful first step in investigating whether treatments could potentially be developed for testing in humans.

 

What did the research involve?

The researchers used a mouse that had been genetically engineered so the nerve cells in the part of the brain responsible for movement (the primary motor cortex) produced an ion channel sensitive to light. When light is shone on the nerve cells expressing this ion channel, the ion channel opens and the nerve cell is activated.

The researchers used healthy mice, as well as mice with brain damage caused by stopping blood flow in one of the arteries that supplies blood to the brain. This mimics the damage that occurs during an ischaemic stroke. The damage occurred in a different part of the brain from the primary motor cortex (the area that was stimulated). 

The researchers looked at whether stimulating the nerve cells in the primary motor cortex using light from a laser could promote recovery in a mouse model of stroke. This combination of light and genetics is called optogenetics.

 

What were the basic results?

Light stimulation of the nerve cells in the undamaged primary motor cortex significantly improved brain blood flow, as well as blood flow in response to brain activity in "stroke mice". It also increased the expression of neurotrophins, a family of proteins that promotes the survival, development and function of nerve cells, and other growth factors.

Stimulation of the nerve cells in the primary motor cortex also promoted functional recovery in the "stroke mice". "Stroke mice" that received stimulation showed faster weight gain and performed significantly better in a sensory-motor behaviour test (the rotating beam test).

Interestingly, stimulations in normal "non-stroke mice" did not alter motor behaviour or expression of neurotrophins.

 

How did the researchers interpret the results?

The researchers concluded that, "These results demonstrate that selective stimulation of neurons can enhance multiple plasticity-associated [the brain's ability to change] mechanisms and promote recovery."

 

Conclusion

This study in a mouse model of stroke found that stimulating nerve cells in the part of the brain responsible for movement (the primary motor cortex) can improve blood flow and increase the expression of proteins that could promote recovery, as well as leading to functional recovery after stroke.

But it remains to be determined whether a similar technique could be used in people who have had a stroke.

The mice were genetically modified so nerve cells in the primary motor cortex produced an ion channel that could be activated by light. The nerve cells were then activated using a laser.

To use this technique in humans, a gene coding for a light-sensitive channel would have to be introduced into nerve cells, possibly using gene therapy techniques.

Gene therapy in people is very much in its infancy, so it is unclear whether this would be achievable, let alone safe. The last thing you would want to do with a brain recovering from stroke-related damage is to make that damage worse.

Overall, this interesting technique shows promise, but much more research needs to be done before there will be any practical applications in the treatment of stroke patients.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Brain stimulation 'helps in stroke'. BBC News, August 19 2014

Links To Science

Cheng MY, Wang EH, Woodson WJ, et al. Optogenetic neuronal stimulation promotes functional recovery after stroke. PNAS. Published online August 18 2014

Categories: Medical News

Bone marrow drug could treat alopecia

Medical News - Mon, 08/18/2014 - 03:00

“Alopecia sufferers given new treatment hope with repurposed drug,” The Guardian reports.

Alopecia is a type of autoimmune condition where the body’s own immune cells start to attack the hair follicles for an unknown reason, leading to hair loss.

This new research actually involved two phases, one involving mice and one involving humans.

The researchers identified the specific type of immune cell (CD8+NKG2D+ T cells) that is involved in this autoimmune process, and identified the signalling pathways that stimulate the activity of these cells.

The researchers then demonstrated that using molecular treatments to block these signalling pathways was effective in preventing and reversing the disease process in mice genetically engineered to develop alopecia.

These findings in mice were followed by promising results in three people with moderate to severe alopecia. These people were treated with ruxolitinib, which is currently licensed in the UK to treat certain bone marrow disorders. All three patients demonstrated “near-complete hair regrowth” after three to five months of treatment.

This promising research is in very early stages. Ruxolitinib has been tested in only three people with alopecia, which is far too small a number to make any solid conclusions about the effectiveness or the safety of this treatment in people with alopecia.

The safety and efficacy would need to be tested in many further studies involving larger numbers of people, and it would also need to be tested against other currently used treatments for alopecia, such as steroids.

 

Where did the story come from?

The study was carried out by researchers from Columbia University in New York. The study received various sources of financial support including US Public Health Service National Institutes of Health, the Columbia University Skin Disease Research Center, the Locks of Love Foundation and the Alopecia Areata Initiative.

The study was published in the peer-reviewed scientific journal Nature Medicine.

Media reporting of this study varied. The Mail’s coverage in particular is premature, as many further research steps lie between the current study and knowing whether there could be a new “standard treatment for the condition”.

Also, references to a “baldness pill” are potentially misleading as they could lead people to think that this treatment, or similar, would be effective against the most common type of baldness, male pattern baldness.

 

What kind of research was this?

This was a laboratory and mouse study that aimed to examine the cellular processes that cause alopecia and to try and investigate a treatment to reverse the process.

Alopecia is a condition where body hair falls out, ranging from loss of a patch of hair on the head to loss of all body hair. It is understood to be a type of autoimmune condition where the body’s own immune cells start to attack the hair follicles. Causes are not completely understood, with associations with stress and genetics speculated. Unfortunately, although various treatments may be tried (most commonly corticosteroids), there is currently no cure for alopecia.

The autoimmune process is thought to be driven by T lymphocyte cells (a type of white blood cell). Previous laboratory studies in mouse and human models have shown that transfer of T cells can cause the disease. However, effective treatments are said to be limited by a lack of understanding of the key T cell inflammatory pathways in alopecia.

The researchers had previously identified a particular subset of T cells (CD8+NKG2D+ T cells) surrounding hair follicles in alopecia, as well as identifying certain signalling molecules that seem to stimulate them. In this study, the researchers aimed to further investigate the role of these specific T cells using a group of mice genetically engineered to spontaneously develop alopecia, and also human skin samples.

 

What did the research involve?

First of all, the researchers examined skin biopsies from genetically engineered mice that had developed alopecia to confirm that these specific CD8+NKG2D+ T cells were infiltrating the hair follicles. They confirmed an increase in the number of these specific T cells and in the total number of cells, and also noticed increased growth of lymph nodes in the skin. The type of T cell infiltrating the skin was the same as that infiltrating the lymph nodes. They then examined the genetic profile of these T cells from the lymph nodes.

They then looked into the role of these specific T cells in disease development by transferring them, or overall cells from the lymph nodes, into genetically engineered mice that had not yet developed alopecia.

This was in order to confirm that the CD8+NKG2D+ T cells were the dominant cell type involved in the development of the disease and were sufficient to cause the disease.

The researchers then examined the gene activity in skin samples from the genetically engineered mice, and from humans with alopecia.

They identified several genes that were overexpressed around the areas of alopecia, as well as several signalling molecules that are drivers of this abnormal T cell activity, including interleukins 2 and 15, and interferon gamma. 

The researchers therefore then wanted to see whether using drug treatments that could block these signalling molecules would prevent disease development.

To do this they grafted skin from mice that had developed alopecia on to the backs of mice who had not yet developed the condition. They then tested the effectiveness of drug treatments that can block the signalling molecules to see if they could prevent or reverse the disease.

Finally, they followed their results in mice with tests in three people with alopecia.

 

What were the basic results?

When healthy mice were grafted with the skin of mice that had developed alopecia, 95-100% of them developed alopecia within 6 to 10 weeks. Giving antibodies to neutralise interferon gamma at the time of grafting prevented alopecia development. Giving antibodies to block interleukins 2 and 15 had a similar effect.

However, while these antibody treatments could prevent disease development when given at the time of grafting, none were able to reverse the process if given after alopecia had developed.

They then investigated whether they could block other signalling molecules that are involved in the downstream pathway from interferon gamma (called JAK proteins). Ruxolitinib (currently licensed in the UK to treat certain bone marrow disorders) is a molecule that blocks JAK1/2 proteins. Tofacitinib is another molecular treatment (not currently licensed for any condition in the UK) that blocks another (JAK3). When these two treatments were given at the same time the alopecia skin samples were grafted on to the healthy mice, the mice no longer developed alopecia.

The researchers then tested whether giving tofacitinib seven weeks after grafting could reverse alopecia. Treatment did result in “substantial hair regrowth” all over the body and reduced numbers of T cells, which persisted for a few months after stopping treatment. They also tested whether these two JAK inhibitor treatments were effective when topically applied (rubbed into the skin on the back) instead of given by mouth, and found that they were, with hair regrowth occurring within 12 weeks.

The human tests involved three people with moderate to severe alopecia who were given 20mg of ruxolitinib by mouth twice daily.

All three people demonstrated “near-complete hair regrowth” within three to five months of treatment.

No information on whether these people developed side effects was provided in the study.

 

How did the researchers interpret the results?

The researchers conclude that their results demonstrate that CD8+NKG2D+ T cells are the dominant cell type involved in the disease process of alopecia. They say that “the clinical response of a small number of patients with alopecia to treatment with the JAK1/2 inhibitor ruxolitinib suggests future clinical evaluation of this compound or other JAK protein inhibitors currently in clinical development is warranted”.

 

Conclusion

This is valuable laboratory research that identifies the specific type of immune cell (CD8+NKG2D+ T cells) that is involved in the disease process of alopecia. It further identifies several signalling molecules that are drivers of this T cell activity.

The researchers then demonstrate that giving two molecular treatments to block the signalling molecules – ruxolitinib (currently licensed in the UK to treat certain bone marrow disorders) and tofacitinib (not currently licensed for any condition in the UK) – were effective in preventing and reversing the disease process in mice with alopecia.

These findings in mice were followed by promising results in three people with moderate to severe alopecia who were treated with ruxolitinib. All three patients demonstrated “near-complete hair regrowth” after three to five months of ruxolitinib treatment.

These are promising results into the study of potential treatments for this devastating autoimmune condition, which currently has no cure.

However, it is important to realise that this research is in the very early stages. So far ruxolitinib treatment has been tested in only three people with alopecia, which is far too small a number to make any solid conclusions about the effectiveness or the safety of this treatment in people with alopecia. This drug is currently not licensed for use in this condition. It would need to go through many further clinical trial stages in larger numbers of people with alopecia. It would also need to be tested for safety and efficacy against other currently used treatments for alopecia, such as steroids.

Overall there is some way to go before we could know whether ruxolitinib holds real promise as a treatment for alopecia.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Alopecia sufferers given new treatment hope with repurposed drug. The Guardian, August 17 2014

Pill that can cure baldness in five months: Twice-a-day tablet that allows alopecia sufferers’ hair to grow back set to become standard treatment for condition. Daily Mail, August 18 2014

Baldness pill to cure alopecia. Metro, August 18 2014

Links To Science

Xing L, Dai Z, Jabbari A, et al. Alopecia areata is driven by cytotoxic T lymphocytes and is reversed by JAK inhibition. Nature Medicine. Published online August 17 2014

Categories: Medical News