Medical News

Offer weight loss surgery to diabetics, says NICE

Medical News - Fri, 07/11/2014 - 14:30

"An expansion of weight loss surgery in England is being proposed to tackle an epidemic of type 2 diabetes," BBC News reports. The National Institute for Health and Care Excellence (NICE) has recommended obese people with type 2 diabetes should be offered weight loss (bariatric) surgery.

These draft guidelines include new recommendations about the treatment of obesity. In particular, NICE advises that those with recent-onset type 2 diabetes who fulfil certain body mass index (BMI) criteria should have surgery. The recommendations also provide guidance on the use of very low-calorie diets.

As is often the case, the proposed NICE recommendations have made a huge media splash, leading to front-page headlines such as the Daily Mail's claim that, "Thousands more to get obesity ops on the NHS".

These are draft guidelines, so it is far from certain whether they will become official advice. A consultation will be taking place between July 11 and August 8 2014.

 

What are the main new draft guidelines?

Currently, bariatric surgery is offered to people with a BMI of 40 or more, or those with a BMI between 35 and 40 if they also have another significant and possibly life-threatening disease that could be improved if they lost weight, such as type 2 diabetes or high blood pressure.

Patients must have tried and failed to achieve clinically beneficial weight loss by all other appropriate non-surgical methods and be fit for surgery. This recommendation has not changed.

The updated draft guidelines include additional recommendations on bariatric surgery for people with recent-onset type 2 diabetes. These recommendations include:

  • Offering an assessment for bariatric surgery to people who have recent-onset type 2 diabetes and are also obese (BMI of 35 and over).
  • Considering an assessment for bariatric surgery for people who have recent-onset type 2 diabetes and have a BMI between 30 and 34.9. People of Asian origin will be considered for surgery if they have a lower BMI than this, as the point at which the level of body fat becomes a health risk varies between ethnic groups. Asian people are known to be particularly vulnerable to the complications of diabetes.
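As a quick illustration of the thresholds above: BMI is weight in kilograms divided by the square of height in metres. The sketch below maps a BMI value onto the draft categories quoted above. The function names are illustrative, and the lower threshold for people of Asian origin is deliberately left out, as the draft does not fix a single number for it.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2


def draft_surgery_category(bmi_value: float, recent_onset_t2dm: bool) -> str:
    """Illustrative mapping of the draft NICE thresholds quoted above.

    Only models the recent-onset type 2 diabetes recommendations; the
    lower threshold for people of Asian origin is not a fixed number in
    the draft, so it is not modelled here.
    """
    if not recent_onset_t2dm:
        return "standard criteria apply (BMI 40+, or 35-40 with comorbidity)"
    if bmi_value >= 35:
        return "offer assessment for bariatric surgery"
    if bmi_value >= 30:
        return "consider assessment for bariatric surgery"
    return "below draft thresholds"


# Example: 110 kg at 1.75 m gives a BMI of about 35.9
print(round(bmi(110, 1.75), 1))  # 35.9
print(draft_surgery_category(bmi(110, 1.75), recent_onset_t2dm=True))
```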

 

What is bariatric surgery?

Bariatric surgery includes gastric banding, gastric bypass, sleeve gastrectomy and duodenal switch.

A range of techniques is used, but all are based on the principle of surgically altering the digestive system so it holds less food and the patient feels full more quickly after eating.

The two most common types of weight loss surgery are:

  • gastric band – where a band is used to reduce the size of the stomach so a smaller amount of food is required to make someone feel full
  • gastric bypass – where the digestive system is rerouted past most of the stomach so less food is digested, which makes the person feel full

These procedures are usually performed using keyhole surgery.

 

What are the risks?

As with all types of surgery, weight loss surgery carries a risk of complications. These include:

  • internal bleeding
  • a blood clot inside the leg (deep vein thrombosis)
  • a blood clot or other blockage inside the lungs (pulmonary embolism)

It is estimated the risk of dying shortly after gastric band surgery is around 1 in 2,000. A gastric bypass carries a higher risk of around 1 in 100.

The surgery also carries the risk of other side effects, including:

  • excess skin – removal of excess skin is usually considered a form of cosmetic surgery, so it is not usually available on the NHS
  • gallstones – small stones, usually made of cholesterol, that form in the gallbladder
  • stomal stenosis – where the hole that connects the stomach to the small intestine in people with a gastric bypass becomes blocked
  • gastric band slippage – where the gastric band slips out of position
  • food intolerance
  • psychosocial effects – for example, some people have reported relationship problems with their partner because their partner begins to feel nervous, anxious or possibly jealous of their weight loss

 

What other treatments have new draft recommendations?

The draft guideline also makes recommendations regarding very low-calorie diets (800kcal per day or less). These include:

  • Not routinely using very low-calorie diets to manage obesity.
  • Only considering very low-calorie diets for a maximum of 12 weeks (continuously or intermittently) as part of a multicomponent weight management strategy with ongoing support. This would be for people who are obese and have a clinically assessed need to rapidly lose weight – for example, people who require joint replacement surgery or who are seeking fertility services.
  • Giving counselling and assessing people for eating disorders or other mental health conditions before starting them on a very low-calorie diet. This is to ensure the diet is appropriate for them.

The risks and benefits of very low-calorie diets should also be discussed. Patients should be made aware that very low-calorie diets are not a long-term weight management strategy and that regaining weight is likely, but not because of a failure on their or their clinician's part.

 

How were the draft recommendations received?

There is concern about how many people will be eligible for treatment under the new guidelines and how much it will cost, with Diabetes UK estimating that 850,000 people could be eligible for surgery.

Simon O'Neill, from the charity Diabetes UK, has been quoted as saying that, "Bariatric surgery should only be considered as a last resort if serious attempts to lose weight have been unsuccessful and if the person is obese.

"It can lead to dramatic weight loss, which in turn may result in a reduction in people taking their type 2 diabetes medication, and even in some people needing no medication at all.

"This does not mean, however, that type 2 diabetes has been cured. These people will still need to eat a healthy balanced diet and be physically active."

 

What is the rationale behind the new recommendations regarding bariatric surgery?

Professor Mark Baker, director of the Centre for Clinical Practice, said that, "Updated evidence suggests people who are obese and have been recently diagnosed with type 2 diabetes may benefit from weight loss surgery.

"More than half of people who undergo surgery have more control over their diabetes following surgery and are less likely to have diabetes-related illness; in some cases, surgery can even reverse the diagnosis. The existing recommendations around weight loss surgery have not changed."

It could actually be the case that increasing access to bariatric surgery will save the NHS money in the long term if this helps combat the obesity epidemic.

If obesity levels continue to rise at their current rates, it is estimated that by 2050 the annual cost of treating obesity-related complications will be £50 billion, more than half the entire current NHS budget for England.

One million operations at £5,000 each – £5 billion in total – could well seem a bargain in comparison.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

More weight loss operations for diabetes. BBC News, July 11 2014

Thousands more to get obesity ops on the NHS: NICE calls for huge increase in surgery – but even obesity charities condemn it. Daily Mail, July 11 2014

Give weight-loss surgery to obese patients on NHS to curb diabetes, says Nice. The Independent, July 11 2014

NHS anti-obesity plans could lead to sharp rise in gastric band surgery. The Guardian, July 11 2014

Thousands more diabetics to be offered weight loss surgery on the NHS. Daily Mirror, July 11 2014

Weight loss surgery can 'reverse diabetes diagnosis'. ITV News, July 11 2014

Obesity surgery could be offered to a million more people on NHS. The Daily Telegraph, July 11 2014

NHS plans to tackle diabetes crisis with FREE stomach stapling and gastric bands. Daily Express, July 11 2014

Categories: Medical News

Vasectomy-associated prostate cancer risk 'small'

Medical News - Fri, 07/11/2014 - 14:26

“Men who have the snip increase their risk of suffering fatal prostate cancer, according to research,” the Daily Mail reports. However, while the increase in risk was found to be statistically significant, it was small in absolute terms.

The newspaper reports on a US study that followed 49,405 men over 24 years, a quarter of whom had had a vasectomy.

It compared the risk of prostate cancer in men who had a vasectomy with the risk in men who hadn’t.

During the 24 years of this study, 12.4% of those who had had a vasectomy developed prostate cancer, compared with 12.1% of those who hadn’t.

They also found vasectomy to be associated with a 19% increased risk of prostate cancer that had spread to other organs (metastatic) or that caused death.

However, it’s important to note that these increases in relative risk relate to a small increase in terms of absolute risk (a 0.3% absolute difference in incidence rate).

This type of study also cannot show that vasectomies cause prostate cancer, as there could have been differences in the men that opted for vasectomy that the researchers did not adjust for.

Overall, though the study findings are worthy of further research, men should not be overly concerned by these reports.

 

Where did the story come from?

The study was carried out by researchers from Brigham and Women’s Hospital, Harvard School of Public Health, the Dana Farber Cancer Institute and the University of Massachusetts Medical School. It was funded by the US National Cancer Institute/National Institutes of Health.

The study was published in the peer-reviewed Journal of Clinical Oncology.

The results of the research were mainly well reported. To give credit to the UK media, some of the news sources which covered the study made it clear that the increase in absolute risk is small (something that is often not made clear in health reporting).

One point to mention is that The Guardian and The Daily Telegraph both said that men who had vasectomies at a younger age were at the greatest risk, though this is not supported by the results of the study.

It was suggested in the research paper that the increased risk was more pronounced among men who were younger at the time of vasectomy. However, this association was not statistically significant, so it could have been due to chance.

 

What kind of research was this?

This was a cohort study that aimed to investigate the association between vasectomy and prostate cancer risk.

A cohort study is the ideal study design to address this question. However, cohort studies cannot show causation, as there is the potential for confounders (other variables that explain the association).

 

What did the research involve?

The researchers studied 49,405 men who were part of the Health Professionals Follow-Up Study, which is an ongoing cohort study conducted by Harvard University.

The men were aged between 40 and 75 at the start of the study in 1986. They were followed up for 24 years, until 2010. Around a quarter of the men (12,321) had vasectomies.

During the follow-up period, 6,023 men were diagnosed with prostate cancer, and 811 men died from prostate cancer.

The researchers compared the risk of developing prostate cancer in men with a vasectomy to the risk of prostate cancer in men without a vasectomy.

This was to see if having a vasectomy was associated with an increased risk of prostate cancer.

The researchers adjusted their analyses for a number of confounders, including:

  • age
  • height
  • body mass index (BMI)
  • amount of vigorous physical activity
  • smoking status
  • diabetes
  • whether the men had a family history of prostate cancer
  • multivitamin use
  • vitamin E supplement use
  • alcohol intake
  • history of prostate-specific antigen (PSA) testing

PSA is a protein produced by normal cells in the prostate and also by prostate cancer cells, and elevated levels can indicate a variety of prostate problems (for example, levels are raised with cancer, but also benign enlargement, inflammation and infection).

 

What were the basic results?

During the study 12.4% of those who’d had a vasectomy developed prostate cancer (1,524 cases out of 12,321 who’d had a vasectomy) compared with 12.1% of those who hadn’t (4,499 cases out of 37,804 who hadn’t had a vasectomy).

The researchers found that vasectomy was associated with:

  • A 10% increase in risk of prostate cancer (relative risk [RR] 1.10, 95% confidence interval [CI] 1.04 to 1.17).
  • A 22% increase in risk of “high-grade” cancer (more aggressive cancer with poorer prognosis) (RR 1.22, 95% CI 1.03 to 1.45). High-grade cancer was defined as having a Gleason score of 8 to 10 at diagnosis.
  • A 20% increase in risk of “advanced prostate cancer” (lethal or stage T3b [cancer had spread to the seminal vesicles], T4 [cancer has spread to nearby organs], N1 [lymph nodes contain cancer cells] or M1 [cancer has spread to other parts of the body]) (RR 1.20, 95% CI 1.03 to 1.40). 
  • A 19% increase in risk of prostate cancer with distant metastasis (where the cancer has spread to another part of the body) or that causes death (RR 1.19, 95% CI 1.00 to 1.43).
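The contrast between the relative risks above and the small absolute difference is easy to check with simple arithmetic. This is a back-of-envelope sketch using the crude percentages quoted earlier (12.4% vs 12.1%); the 10% figure (RR 1.10) comes from a model adjusted for confounders, which is why it is larger than the crude ratio below.

```python
baseline = 0.121  # 12.1% incidence in men without a vasectomy (quoted above)
exposed = 0.124   # 12.4% incidence in men with a vasectomy

abs_diff = exposed - baseline     # absolute risk difference
crude_ratio = exposed / baseline  # unadjusted (crude) risk ratio

print(f"absolute difference: {abs_diff:.1%}")  # 0.3%
print(f"crude risk ratio: {crude_ratio:.3f}")  # 1.025
# On these crude figures, roughly one extra case per ~333 men over 24 years
print(f"one extra case per {1 / abs_diff:.0f} men")
```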

The researchers noted that men who had a vasectomy reported more PSA testing than men without vasectomy.

Although the researchers adjusted for frequency of testing in their analyses, they were concerned the results could be due to men with vasectomy being diagnosed with prostate cancer because they had PSA testing more frequently, rather than because they were more likely to have prostate cancer.

They then performed an analysis of “highly screened” men (who reported PSA screening in 1994 and 1996; note this is a US study and there is no national PSA screening campaign in the UK).

In this subcohort, having a vasectomy was not associated with increased risk of prostate cancer overall, but the association with cancer with distant metastasis or that causes death remained. 

 

How did the researchers interpret the results?

The researchers concluded that their data “support the hypothesis that vasectomy is associated with a modest increased incidence of lethal prostate cancer.”

 

Conclusion

This 24-year cohort study found that men with a vasectomy had a 10% increased risk of prostate cancer and a 19% increased risk of prostate cancer that had spread to other organs, or that caused death.

However, it’s important to note that there are only tiny increases in absolute risk; during the 24 years of this study, 12.4% of those who’d had a vasectomy developed prostate cancer, compared with 12.1% of those who hadn’t.

The strengths of this study are its large size, its long follow-up period, and the collection of data on and adjustment for a large number of factors that could affect the association (confounders). However, as this is a cohort study, it cannot show causation, as the potential for other confounders remains.

Given that the 0.3% absolute difference in cancer incidence is small, there may be other factors differing between those who had a vasectomy and those that didn’t that could account for the differences.

Overall, though the study finding is worthy of further research, men should not be overly concerned by these findings.

As the researchers say, “the decision to opt for a vasectomy remains a highly personal one in which the potential risks and benefits must be considered.”

There are also less drastic steps you can take if you don’t want to have any children.

If used correctly, condoms are 98% effective. They also have the advantage of protecting you against sexually transmitted infections (STIs).

And there is always the possibility that you may change your mind about having children. Vasectomy reversal is expensive (it is rarely available on the NHS) and has a patchy success rate, ranging from 25% to 55%.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Men opting for the snip could be at greater risk of prostate cancer: Vasectomy linked to most lethal form of the disease, study finds. Daily Mail, July 11 2014

Vasectomy can increase risk of developing lethal prostate cancer. The Daily Telegraph, July 10 2014

Having a vasectomy raises risk of prostate cancer by 10 per cent, claims new study. The Independent, July 10 2014

Prostate cancer danger raised if you have 'snip'. Daily Mirror, July 11 2014

Vasectomy raises risk of lethal prostate cancer, study shows. The Guardian, July 11 2014

Links To Science

Siddiqui MM, Wilson KM, Epstein MM, et al. Vasectomy and Risk of Aggressive Prostate Cancer: A 24-Year Follow-Up Study. Journal of Clinical Oncology. Published online July 7 2014

Categories: Medical News

Obesity link for siblings

Medical News - Thu, 07/10/2014 - 18:00

“Children are five times more likely to become obese if their older brother or sister is overweight,” reports the Daily Mail.

There is a widespread assumption that a significant risk factor for child obesity is if they have one or both parents who are obese.

A new US study suggests that a more influential risk factor may be if a child has a brother or sister (or both) who are obese. 

A study of US families has found that among those with two children, if one child was obese then there was a relatively large chance that the other child would also be obese.

This “obese sibling” effect was particularly pronounced if the children were of the same gender. This study has several limitations, including its reliance on one parent self-reporting the height and weight of both themselves and their children.

The study also relied on data taken at one point in time, so it cannot prove a direct cause and effect.

It does arguably highlight the fact that the family environment can play an important part in influencing individual family members’ health outcomes. 

Often, the family that exercises and eats healthily together achieves a healthy weight together.

Read more about exercising as a family.

 

Where did the story come from?

The study was carried out by researchers from Massachusetts General Hospital, MassGeneral Hospital for Children, Harvard Medical School, the National Bureau of Economic Research, Cornell University and Duke University, all in the US. It was funded by the Robert Wood Johnson Foundation.

The study was published in the peer-reviewed American Journal of Preventive Medicine.

The study is open-access, so is available online free of charge.

The Mail Online’s reporting that children are five times more likely to become obese if their older brother or sister is overweight was a little skewed. It implied that older siblings influence younger children’s eating and exercise behaviour.

But this study had a cross-sectional design, which means all the data was gathered at the same time, so we cannot be sure whether one factor (such as an older sibling’s obesity) follows another (a younger sibling's obesity).

In some cases, the younger sibling may have been the first to develop obesity, followed by the older sibling.

As the study makes clear, older children were five times as likely to be obese if their younger sibling was also obese.

 

What kind of research was this?

This was a cross-sectional study looking at how the obesity status of different children within the same family is related to parental or other sibling obesity.

The researchers point out that while the parent-child obesity link is well established, little is known about any association in obesity status between siblings. 

They also say that unhealthy behaviours of children are shaped by the family and peer environment, school and neighbourhood – factors which together may influence sibling health differently than parental health.

Cross-sectional studies look at all data at the same time, so they cannot be used to see if one thing follows another, but are useful for showing up patterns or links in the data.

 

What did the research involve?

In 2011 the researchers contacted adults in 14,400 US households in a web-based survey about family health habits, using the resources of a national market research company. Of the 14,400, 71% (10,244 households) responded.

To take part in this aspect of the larger study, adults had to have one or two children under 18 years of age living at home. Participants completed the survey via the internet, providing data on socioeconomic status, physical activity, overall health and “food environment”.

The questions were adapted from a variety of validated sources.

The responding adult also had to report height and weight for themselves and their children. Of the households asked, 1,948 adults had the required one or two children, provided the information needed, and reported adult and child height and weight.

From the information on height, weight and gender, researchers classified adults and their children as obese or not obese.

For children they also used information on age and growth chart data, classifying them according to internationally accepted criteria to measure obesity in children.

For adults they calculated the body mass index (BMI) from self-reported height and weight.

They analysed the association between obesity in a child and obesity in his or her siblings and parents. They adjusted the results for a range of possible factors that might influence results (called confounders).

 

What were the basic results?

The researchers found that:

  • In one-child households, a child was 2.2 times more likely to be obese if a parent was obese (odds ratio [OR] 2.2, standard error [SE] 0.5).
  • In households with two children, having an obese younger sibling was more strongly associated with the elder child’s obesity (OR 5.4, SE 0.9) than a parent’s obesity status (OR 2.3, SE 0.8).
  • Having an obese elder sibling was associated with a younger child’s obesity (OR 5.6, SE 1.9), and the parent’s obesity status was no longer significant.
  • The link between siblings and obesity was stronger when they were the same gender.
  • Child physical activity was significantly associated with obesity status.
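The odds ratios above are reported with standard errors rather than confidence intervals. Assuming those standard errors are on the odds-ratio scale, a rough 95% Wald interval (OR ± 1.96 × SE) gives a feel for whether an association excludes 1. This is only a back-of-envelope check, not the method used in the paper.

```python
def rough_wald_ci(odds_ratio: float, se: float) -> tuple[float, float]:
    """Back-of-envelope 95% interval, assuming the SE is on the OR scale."""
    half_width = 1.96 * se
    return odds_ratio - half_width, odds_ratio + half_width


# Younger child's obesity vs obese elder sibling (OR 5.6, SE 1.9, as above)
lo, hi = rough_wald_ci(5.6, 1.9)
print(f"roughly {lo:.1f} to {hi:.1f}")  # roughly 1.9 to 9.3
# The interval excludes 1, consistent with a statistically significant result
print(lo > 1.0)
```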

Surprisingly, they also found that having an extremely active older sibling was associated with a higher risk of younger sibling obesity.

 

How did the researchers interpret the results?

The researchers say that siblings may have greater influence on informal behaviour than parents and that older siblings may influence their younger siblings’ attitude and behaviours around food and physical activity. Taking sibling information into account may be of benefit in efforts to prevent childhood obesity, they argue.

 

Conclusion

This study found that while parent obesity made it more likely that a child would be obese, in two-child families, sibling obesity had an even stronger association.

However, as the authors point out, the study has several limitations.

  • It was based on self-reported data on height and weight and proxy reports for children, which limits its reliability.
  • Its cross-sectional design means all the data was gathered at the same time, so we cannot be sure whether one factor (such as an older sibling’s obesity) follows another (a younger sibling's obesity). 
  • It was not a representative sample: childhood obesity was less prevalent among the families studied than in the US population as a whole.
  • It was restricted to families with only one or two children. Different results may be found in larger families.
  • Importantly, it included only the obesity status of one adult in each household – the one who responded to the survey.

Obesity is a major global health problem in which many factors play a part, including social and food environment, family lifestyle and shared genetic background.

It is likely that siblings have a common exposure to many such factors.

It is also possible that siblings influence each other’s behaviour around food and physical activity, but this study is not strong enough to prove it. 

Read more advice about what to do if your child (or children) is overweight, or very overweight.

Your GP should also be able to provide advice.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Children with obese siblings are FIVE times more likely to become overweight... but healthy friends could help beat the bulge. Mail Online, July 9 2014

Links To Science

Pachucki MC, Lovenheim MF, Harding M. Within-Family Obesity Associations - Evaluation of Parent, Child, and Sibling Relationships. American Journal of Preventive Medicine

Categories: Medical News

Malaria parasites can 'hide' inside bone marrow

Medical News - Thu, 07/10/2014 - 14:30

“Malaria parasites can hide inside the bone marrow and evade the body's defences, research confirms,” BBC News report.

It is hoped that this insight into the activities of the parasites could lead to new treatments.

While most people associate malaria with mosquitoes, the disease is actually caused by tiny parasites called Plasmodium, which infect mosquitoes and spread the infection to humans by injecting them with spores.

These spores grow and multiply in the liver and then infect blood cells, causing the symptoms of malaria.

To continue their lifecycle, some of the parasites sexually mature and are then transferred back into mosquitoes during another bite, where they can breed. 

The researchers looked at tissue samples from autopsies of children who had died from malaria.

The study found evidence that sexual maturation of the parasite is likely to take place in the bone marrow, but outside of the blood vessels. This might be why the immune system rarely destroys them, as infection-fighting antibodies are unable to target bone marrow tissue.

It is hoped that these results can pave the way for the development of new drugs to target this key stage. This has the potential to reduce the number of infected mosquitoes, thus decreasing the number of malaria cases.

The ultimate hope is that malaria could be eradicated in the same way as smallpox.

 

Where did the story come from?

The study was carried out by researchers from around the globe, including the Harvard School of Public Health, the Liverpool School of Tropical Medicine, the University of Malawi College of Medicine, and Brigham and Women’s Hospital, Boston. It was funded by the US National Institutes of Health.

The study was published in the peer-reviewed medical journal Science Translational Medicine.

The study was briefly reported by BBC News, which provided an accurate summary of the research.

 

What kind of research was this?

This was an autopsy study designed to investigate where a key stage in the lifecycle of the parasite that causes malaria takes place.

The tropical disease is caused by Plasmodium parasites. The most severe form of malaria is caused by Plasmodium falciparum. The lifecycle of the parasite relies on blood-feeding mosquitoes and humans. When an infected mosquito bites a human, sporozoites are injected into the human and travel to the liver. They mature into schizonts in the liver and then rupture to release merozoites into the blood. These merozoites divide and multiply asexually, sticking to the sides of small blood vessels. This process causes the symptoms of malaria, which include shivering and fever.

However, for the parasites to continue their lifecycle, some of the merozoites mature into the sexual stage; these are called gametocytes. These male and female gametocytes are then ingested by mosquitoes the next time they have a blood meal; they can then fertilise and replicate within the mosquito.

The gametocytes are only present in the bloodstream when they are mature enough to be taken up by mosquitoes. They take six to eight days to mature, and it is believed this takes place in human tissue. This stage has not been studied in depth, as Plasmodium falciparum will only live in humans, so rodent studies are not possible. This study looked for these immature gametocytes in multiple tissue sites in autopsies of children who had died from malaria, to find out where this stage takes place.

 

What did the research involve?

The researchers initially used antibodies to identify the parasite in general, as well as specific antibodies to the sexual gametocytes, to detect them in various tissues from six autopsies. They looked at tissue samples from eight organs and the subcutaneous fat.

They measured the total proportion of parasites in each organ compared to the level of gametocytes.

They then measured the level of gene activity of three stages in the gametocyte maturation process in the different organs, to see if the first of these stages takes place in one particular site.

The researchers then looked in detail at the bone marrow from 30 autopsies to gather more information about where the gametocytes mature.

Finally, they performed experiments with growing Plasmodium falciparum in the laboratory.

 

What were the basic results?

Results from the first six autopsies revealed that:

  • The spleen, brain, heart and gut had the highest numbers of total parasites.
  • Levels of gametocytes were high in the spleen, brain, gut and bone marrow.
  • There was a significantly higher proportion of gametocytes compared to total parasites in the bone marrow (44.9%), in comparison to the gut (12.4%), the brain (4.8%) and all other organs.
  • The first stage of gametocyte gene activity was highest in the bone marrow. 

Results from the 30 autopsies of bone marrow found that:

  • The youngest gametocytes did not stick to blood vessels as happens in the asexual reproduction of merozoites; instead, they were outside of the blood vessels in the bone marrow. 
  • Immature gametocytes appeared to grow inside young red blood cells. 

The laboratory experiments confirmed that Plasmodium falciparum gametocytes can mature inside young red blood cells.

 

How did the researchers interpret the results?

The researchers said there is evidence that gametocytes develop within the bone marrow, probably in early red blood cells, and that this process uses a different mechanism to the asexual cell replication.

This means there is potential for drugs to be developed that could target this process.

 

Conclusion

This interesting study has found evidence of the likelihood that the sexual reproductive stage in the lifecycle of Plasmodium falciparum takes place outside of the blood vessels, in the bone marrow.

It has also shown that these immature gametocytes are rarely destroyed by the immune system.

It is hoped that these results can pave the way for the development of new drugs to target this key stage in the Plasmodium falciparum lifecycle.

While this would not treat the symptoms of malaria – which come from the asexual reproduction of merozoites – it could potentially stop the transmission of the sexual gametocytes back into mosquitoes.

This could reduce the number of infected mosquitoes, thus decreasing the number of malaria cases.

Eradicating malaria is a challenge, but many public health experts think it is plausible.

For example, Microsoft founder turned philanthropist Bill Gates has pledged billions of dollars towards this goal. What impact this would have on the planet’s ecosystem is debatable.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Malaria parasite 'gets down to the bone'. BBC News, July 9 2014

 

Links To Science

Joice R, Nilsson SK, Montgomery J, et al. Plasmodium falciparum transmission stages accumulate in the human bone marrow. Science Translational Medicine. Published online July 9 2014

 

Categories: Medical News

Call to tackle maternal blood infection risk

Medical News - Wed, 07/09/2014 - 14:20

“Pregnant women and new mothers need closer attention for signs of potentially fatal sepsis, a study says,” reports BBC News.

While still rare, sepsis – a blood infection – is now the leading cause of maternal death in the UK.

Sepsis can potentially be very serious, as it can cause a rapid fall in blood pressure (septic shock), which can lead to multiple organ failure. If untreated, sepsis can be fatal.

The study collected information on all cases of severe sepsis that were treated in hospital maternity units from June 2011 to May 2012.

It found there were 365 confirmed cases of severe sepsis out of over 780,000 maternities (meaning around 0.05% of maternities were affected). Out of these, five women died. 

The most common sources of the blood infection were the urinary and genital tracts. Severe sepsis occurred rapidly, often within 24 hours of the first symptoms. Over 40% of women with severe sepsis had had an illness with a high temperature, or had been taking antibiotics, in the previous two weeks.

This study highlights the importance of identifying infections in pregnant women and women who have recently given birth, especially in the first few days after delivery. During these periods, if you have a high temperature over 38°C or are on antibiotics but not getting better, you should seek medical attention.

 

Where did the story come from?

The study was carried out by researchers from the University of Oxford, Northwick Park Hospital, Bradford Royal Infirmary and St Michael’s Hospital in Bristol. It was funded by the National Institute for Health Research.

The study was published in the peer-reviewed, open-access medical journal PLOS Medicine, so the study can be read online for free.

BBC News reported the study accurately and provided sage advice from one of the authors, Professor Knight, who said that, “women who are pregnant or have recently given birth need to be aware that if they are not getting better after being prescribed antibiotics – for example, if they continue to have high fevers, extreme shivering or pain – they should get further advice from their doctor or midwife urgently”.

 

What kind of research was this?

This was a case-control study. The researchers studied all women in the UK diagnosed with severe sepsis (blood poisoning) during pregnancy or during the six weeks after delivery in all maternity units in the UK, from June 1 2011 to May 31 2012 (“cases”), as well as two unaffected (“control”) women per case.

Sepsis is the leading cause of maternal death in the UK, with a rate of 1.13 per 100,000 maternities between 2006 and 2008. The aim of this study was to identify risk factors, the sources of infection and type of organisms responsible, in order to improve prevention and management strategies.

A case-control study selects people with a condition, and matches each of them to at least one other person without the condition; this can be done by factors such as age and sex. In this study, controls were women who did not have severe sepsis, and delivered immediately before each case in the same hospital. Medical histories and exposures can then be compared between the cases and controls to look for associations, and thus risk factors, for the condition. This type of study is useful in investigating rare and emergency conditions, but cannot prove causation.

 

What did the research involve?

The researchers collected information from all 214 hospitals in the UK that have maternity units led by obstetricians. This included all cases of sepsis around pregnancy and two controls for each case. They compared the sociodemographic, medical history and delivery characteristics between the cases and controls. They also compared the cases that developed into septic shock with those that didn’t, to identify factors that were associated with increased severity.

 

What were the basic results?

In terms of severe sepsis cases:

  • There were 365 confirmed cases out of 780,537 maternities.
  • For most women, it was less than 24 hours between the first sign of systemic inflammatory response syndrome (SIRS) and the diagnosis of severe sepsis (SIRS is a term used to describe cases where two or more symptoms associated with sepsis are present).
  • 134 occurred during pregnancy and 231 were after delivery.
  • Those cases that occurred after delivery happened, on average, after three days.
  • 114 women were admitted to the intensive care unit (ICU).
  • 29 (8%) women had a miscarriage or a termination of pregnancy.
  • Five infants were stillborn and seven died in the neonatal period.

In terms of septic shock cases:

  • 71 (20%) of the women developed septic shock.
  • Five women died.

In terms of sources of infection:

  • A source was identified in 270 cases (70%). 
  • Genital tract infection was responsible for 20.2% of cases during pregnancy and 37.2% of cases after delivery.
  • Urinary tract infection caused 33.6% of cases during pregnancy and 11.7% of cases after delivery.
  • Wound infection caused 14.3% of cases after delivery.
  • Respiratory tract infection caused 9% of cases during pregnancy and 3.5% of cases after delivery. 

In terms of organisms responsible:

  • E. coli was the most common organism, occurring in 21.1% of infections.
  • Group A streptococcus was the next most common organism, occurring in 8.8% of infections; for most women with group A streptococcal infection, there was less than nine hours between the first sign of SIRS and severe sepsis, with half having less than two hours between the first signs and diagnosis.
  • 50% of women with group A streptococcal infection progressed to septic shock.

Risk factors for severe sepsis included women who:

  • were of black or other minority ethnic origin
  • were primiparous (giving birth for the first time)
  • had a pre-existing medical problem
  • had a febrile (high temperature) illness or were taking antibiotics in the two weeks before developing severe sepsis

All types of deliveries requiring operations were risk factors for severe sepsis. These were:

  • operative vaginal delivery
  • pre-labour caesarean section
  • caesarean section after the onset of labour

Risk factors for developing septic shock were:

  • multiple pregnancy
  • group A streptococcus

 

How did the researchers interpret the results?

The researchers concluded that “over 40% of women with severe sepsis had a febrile illness or were taking antibiotics prior to presentation, which suggests that at least a proportion were not adequately diagnosed, treated or followed up … it cannot be assumed that antibiotics will prevent progression to severe sepsis … there is a need to ensure that follow-up happens to ensure that treatment is effective”. They also recommend that “signs of severe sepsis in peripartum women, particularly with confirmed or suspected group A streptococcal infection, should be regarded as an obstetric emergency”.

 

Conclusion

This comprehensive study highlights several areas where awareness of the risks of sepsis in pregnancy should be increased in both primary and secondary care. These include:

  • If there is clinical suspicion of infection with group A streptococcus, then urgent action should be taken.
  • There should be increased care given to pregnant women and women who have just given birth who have a suspected infection.
  • High-dose intravenous antibiotics should be given within one hour of admission for suspected sepsis.
  • Vigilant infection control measures should be employed during vaginal delivery.
  • Despite antibiotics being routinely prescribed before planned caesarean sections, women are still at risk of severe sepsis and need to be closely monitored.
  • There should be consideration for clinicians to give prophylactic antibiotics before operative vaginal deliveries.
  • There should be consideration for clinicians to give prophylactic antibiotics at the time the decision is made to perform an emergency caesarean section. 

Strengths of the study include its size and the 100% participation rate of maternity units in the UK, which should account for any regional or socioeconomic differences.

If you are pregnant or have just given birth, and have signs or symptoms of infection, such as a high temperature of over 38°C, it is important to seek medical advice immediately.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Blood infection warning for new mums. BBC News, July 9 2014

Links To Science

Acosta CD, Kurinczuk JJ, Lucas DN, et al. Severe Maternal Sepsis in the UK, 2011–2012: A National Case-Control Study. PLOS Medicine. Published online July 8 2014

Categories: Medical News

Cycling linked to prostate cancer, but not infertility

Medical News - Wed, 07/09/2014 - 14:17

"Men who cycle more than nine hours a week are … more likely to develop prostate cancer," the Mail Online inaccurately reports. The story comes from the publication of an online survey into cycling in the UK and its effects on health outcomes.

Researchers were particularly interested in whether frequent cycling was linked to an increased risk of prostate cancer, infertility and erectile dysfunction (impotence).

Fears have been raised regarding the effect cycling has on these conditions. These concerns have been attributed to a wide range of factors, such as repetitive trauma.

This study found no association between the amount of time spent cycling and erectile dysfunction or infertility.

But it did find a dose-response relationship between cycling time and the risk of prostate cancer in men aged over 50, with risk increasing as the hours a week spent cycling increased.

Despite these seemingly alarming results, regular cyclists do not need to panic – this type of study cannot prove increased cycling time leads to prostate cancer; it can only prove an association.

Also, the prostate cancer analyses were carried out on fewer than 42 men, a relatively small sample. With such a small sample, there is an increased possibility that any association is the result of chance.

Most experts would agree that the health benefits of frequent cycling outweigh the risks.

For more on the pros and cons of the sport, read our special report on cycling.

 

Where did the story come from?

The study was carried out by researchers from University College London. Funding is not explicitly stated in the study, but the authors acknowledge the support of the national cycling charity CTC, British Cycling, Sky Ride and Cycling Weekly magazine. They state the funders had no role in the study other than providing funding.

The study was published in the peer-reviewed Journal of Men's Health on an open access basis, so it is free to read online.

The story was picked up by both the Mail Online and The Daily Telegraph, with each choosing to report the findings differently.

The Mail was keen to emphasise the link to prostate cancer, as seen in the inaccurate headline stating that, "Men who cycle more than nine hours a week are six times more likely to develop prostate cancer". The rest of the coverage of the study is appropriate and goes on to report that no causal relationship has been proven. 

The Telegraph took a more cheerful approach, emphasising that the study found no link between frequent cycling and infertility or erectile dysfunction. Its reporting was also balanced and accurate.

 

What kind of research was this?

This was a cross-sectional study looking at associations between erectile dysfunction, infertility and prostate cancer in a group of regular male cyclists from the UK. 

A cross-sectional study looks at the characteristics of a population at a given point in time. This type of study is useful for finding out how common a particular condition is in a population. Often, the relationships between the characteristics or factors assessed are then analysed.

Importantly, because this study only looks at one point in time, it cannot establish cause and effect between factors, as it does not show which factor came first.

 

What did the research involve?

The research included 5,282 male cyclists from the Cycling for Health UK Study. This study aimed to examine cycling habits and the health of men from a variety of cycling backgrounds, ranging from commuters to amateur racing cyclists.

In the current study, an online survey was advertised from October 2012 to July 2013 through cycling magazines and UK cycling bodies. The survey included questions on:

  • demographics – age, sex and education
  • cycling history – number of years cycling, average weekly total and commuting cycling distances, as well as time, speed and best times for various standard distance competitive cycling races
  • body measurements – weight, height and resting heart rate, which was self-recorded from the wrist

The survey also captured additional information on:

  • alcohol intake
  • current and past cigarette use
  • history of cardiovascular events, such as stroke or heart attack
  • any cancers
  • diagnosis or treatment of hypertension (high blood pressure), high cholesterol or diabetes
  • participation in non-cycling physical activity

The men were then asked whether they had experienced erectile dysfunction in the past five years and if they had been diagnosed with infertility or prostate cancer by a doctor.

The researchers used statistical techniques to determine the association between the number of hours a week spent cycling and erectile dysfunction, infertility or prostate cancer.

They did this by categorising the amount of time a week spent cycling into four groups:

  • fewer than 3.75 hours per week
  • between 3.76 hours and 5.75 hours per week
  • between 5.76 and 8.5 hours per week
  • more than 8.5 hours per week

The researchers only analysed the data of men aged over 50 (2,027 men) because prostate cancer diagnosed under the age of 50 accounts for less than 1% of diagnoses.

They then adjusted the results for potential confounders, including:

  • age
  • smoking status
  • weekly alcohol intake
  • body mass index (BMI)
  • diagnosed hypertension (high blood pressure)
  • other physical activity

 

What were the basic results?

The average age of the men was 48.2 years (range 16 to 88), the average BMI was 25.3kg/m², and 3.8% of the cyclists were current smokers. On average, they cycled 4.2 days a week for a total of 6.5 hours a week.

Of the 5,282 men, 8.4% (444 men) self-reported erectile dysfunction in the past five years, 1.2% (63 men) reported a diagnosis of infertility, and 0.8% (42 men) reported a diagnosis of prostate cancer. Prostate cancer was reported in 1.8% of men aged over 50 (figure not reported).

The main results of this study were:

  • there was no association found between cycling time and erectile dysfunction
  • there was no linear association found between cycling time and infertility – in fact, cycling for between 3.76 and 5.75 hours per week was associated with a decreased risk of infertility (odds ratio [OR] 0.44, 95% confidence interval [CI] 0.21 to 0.94)
  • after adjusting for confounders, a graded increase in the association between cycling time and the risk of prostate cancer in men aged over 50 was found – the risk increased as the hours a week spent cycling increased

Compared with cycling less than 3.75 hours a week:

  • cycling between 3.76 and 5.75 hours had an OR for prostate cancer of 2.94
  • cycling between 5.76 and 8.5 hours had an OR of 2.89 
  • cycling for more than 8.5 hours had an OR of 6.14
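To illustrate what an odds ratio of this size means, here is a minimal Python sketch of how an OR is computed from a two-by-two table of exposure and outcome. All the counts below are invented for illustration only; they are not the study's raw data.

```python
# Odds ratio from a 2x2 table of exposure (e.g. heavy cycling) vs outcome
# (e.g. prostate cancer). All counts used below are hypothetical.
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    odds_exposed = exposed_cases / exposed_noncases
    odds_unexposed = unexposed_cases / unexposed_noncases
    return odds_exposed / odds_unexposed

# Hypothetical example: 12 cases among 488 heavily exposed men,
# 4 cases among 996 lightly exposed men.
print(round(odds_ratio(12, 488, 4, 996), 2))
```

Because prostate cancer was rare in this sample, an odds ratio of around 6 roughly corresponds to a six-fold higher risk in the most frequent cyclists; with so few cases, however, the confidence intervals around such estimates are wide.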

To determine if the results for prostate cancer were related to health-seeking behaviour (the idea that more active people are more likely to be aware of their health and seek treatment if necessary), associations between the time spent cycling and yearly consultations with doctors were also analysed. No association was found.

 

How did the researchers interpret the results?

The researchers concluded weekly cycling duration was not associated with erectile dysfunction or infertility. They also conclude there is a dose-response association between cycling time and prostate cancer in men aged over 50, particularly for men who cycle more than 8.5 hours a week.

One of the researchers, Dr Mark Hamer from University College London, is reported in the media as saying: "These results are not straightforward. It may be that these men are more health aware and therefore more likely to get a diagnosis."

Those who are cycling the most did not make up a huge sample, so more research is needed. Hamer went on to say that, "We are talking about very keen cyclists who are on their bikes for nine hours a week – not people who are just commuting to work."

He also made the point that cycling leads to health benefits in other areas, such as reducing the risk of type 2 diabetes, heart disease and stroke.

 

Conclusion

This study has looked at the associations between the number of hours spent cycling a week and erectile dysfunction, infertility and prostate cancer in men over the age of 50 who cycle regularly.

It found no association between the time spent cycling and erectile dysfunction or infertility, but did find a dose-response association with prostate cancer for men over the age of 50, with risk increasing as the time a week spent cycling increased.

As the researchers point out, this type of study cannot prove causality (that increased cycling time leads to prostate cancer), only an association. Different study designs, such as randomised controlled trials, are required before we can draw these kinds of conclusions.

There are several other limitations of this study worth noting:

  • The survey was only sent to current cyclists. There was therefore no control group to compare the results with, and the results would have missed men who no longer cycle because of ill health.
  • The study was carried out at only one point in time, so self-reported cycling may have differed if the men were surveyed at a different point in time. Factors such as the time of year when they answered the survey (whether it was winter or summer) may have affected their responses.
  • Only self-reported data was required for classifying men as having erectile dysfunction in the past five years. It is possible that either men who reported having erectile dysfunction in fact didn't have this, or that some men with the condition were potentially missed. The researchers note the prevalence of erectile dysfunction was lower in this study than in previous studies. But they also acknowledge that because the survey was anonymous and online, this may have reduced any embarrassment of reporting erectile dysfunction and infertility.
  • Only self-reported data was used to categorise the number of hours a week spent cycling. It is possible men either over or underestimated the amount of cycling they do a week.
  • The study was internet-based, so there was no opportunity for interviews or physical examination, which limited the ability to explore the reported diagnoses further.
  • Overall, only 42 of the 5,282 men in the study had prostate cancer, and there were fewer than 42 cases among the men aged over 50 (exact figure not reported). This means analyses were only performed on a relatively small sample of men.

Cyclists need not panic, as this research has not proved a cause and effect relationship between prostate cancer and the amount of time a week a man spends cycling.

In fact, cycling regularly has many health benefits, including a reduced risk of diabetes, high blood pressure, heart attack, stroke and cancer.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Men who cycle more than nine hours a week are six times more likely to develop prostate cancer, study finds. Mail Online, July 8 2014

Cycling does not cause infertility, British scientists find. The Daily Telegraph, July 7 2014

Links To Science

Hollingworth M, Harper A, Hamer M. An Observational Study of Erectile Dysfunction, Infertility, and Prostate Cancer in Regular Cyclists: Cycling for Health UK Study. Journal of Men's Health. Published online June 11 2014

Categories: Medical News

Gene mutation linked to distinct type of autism

Medical News - Tue, 07/08/2014 - 14:26

“Have scientists found the autism gene?" asks the Mail Online.

The news is based on a genetic study that found children with autism spectrum disorder (ASD) were more likely to have a mutation in a gene called CHD8 than children without the disorder.

ASD is an umbrella term for conditions affecting social interaction, communication, interests and behaviour.

However, talk of a single autism gene is premature. This is relatively early stage research. The genetic test will need to be further tested and validated in large and diverse groups to ensure it accurately identifies people with ASD.

The results linking the CHD8 mutation to a specific sub-type of ASD with a similar set of symptoms also made the news but was based on just 15 people. This means this sub-analysis cannot be viewed as reliable.

The hope is that if a test does reliably predict what set of symptoms a child with ASD is likely to develop, an individual treatment and care plan can be devised to meet their future needs.

 

Where did the story come from?

The study was carried out by a large international collaboration of researchers from medical and academic institutions, and was funded by a variety of non-commercial institutions, including the Simons Foundation Autism Research Initiative, the National Institutes of Health (US) and the European Commission.

The study was published in the peer-reviewed science journal Cell.

As with many Mail Online health stories, the headline was misleading. There is no such thing as a single autism gene, in the same way that there is no such thing as a single type of autistic disorder.

The actual reporting of the study in the main body of the story was accurate and informative. (Mail Online journalists may want to have a quiet word with their headline writers).

 

What kind of research was this?

This was a genetic study looking for gene variants associated with autism spectrum disorder.

Autism spectrum disorder (ASD) is an umbrella term for a number of conditions that affect social interaction, communication, interests and behaviour. Symptoms can range from mild to severe.

In some cases ASD can lead to below average intelligence and learning difficulties. In other cases intelligence is unaffected or above average.

The researchers describe how different subtypes of ASD have been defined based on behaviour, but with limited success, because behaviour varies so much within and between subtypes. The researchers thought that genes, rather than behaviour, may hold the key to the different subtypes of ASD, and sought to investigate this hypothesis.

 

What did the research involve?

Researchers sequenced the DNA of 3,730 children with developmental delay or ASD, looking for variations in a gene called CHD8 – a gene previously associated with ASD. They looked to see if any genetic variations were associated with being diagnosed with ASD overall, but also for any links to specific characteristics seen in subsets of people with ASD, such as distinctive facial features, a larger than average head (macrocephaly), and stomach or digestive complaints.

While the genetic study recruited a lot of children, the assessment of specific characteristics was based on just 15 people.

This sort of genetic study is often used to further scientific understanding of any genetic origins of a disease. Scientists know what many genes do, so if there is a genetic link, it helps them understand the biology of the disease process. This can ultimately lead to ideas for new drugs and treatments.

 

What were the basic results?

The genetic analysis revealed 15 different and independent genetic variations (mutations) in the CHD8 gene in the children with developmental delay or ASD.

Thirteen of the 15 CHD8 mutations associated with ASD caused the resulting CHD8 protein to shorten (truncate). These truncating mutations were not found in 8,792 healthy controls – including 2,289 unaffected siblings – suggesting the mutations were relatively specific to the disease.

The presence of one of the genetic mutations identified was linked to an increased likelihood of an ASD diagnosis overall. 

Specific CHD8 mutations were also associated with distinct characteristics, but as this analysis used only 15 people with ASD, the findings should be taken with a pinch of salt. These were:

  • increased head size accompanied by rapid early postnatal growth
  • a facial phenotype marked by a prominent forehead, wide-set eyes and a pointed chin
  • increased rates of digestive complaints
  • sleep dysfunction

As a follow-up study, the researchers disrupted the CHD8 gene in zebrafish – a popular research animal for genetic manipulation, as they breed very quickly and adapt well to being housed in aquariums. They found the effects of this were broadly similar to ASD-type symptoms in humans, including larger head size as a result of expansion of the forebrain and midbrain, and impairment of gastrointestinal motility.

 

How did the researchers interpret the results?

The researchers concluded their findings “suggest that CHD8 disruptions represent a specific pathway in the development of ASD and define a distinct ASD subtype”.

 

Conclusion

This study used many thousands of children to identify mutations in the CHD8 gene that were associated with the development of ASD. Much less convincing was the finding that CHD8 mutations might be associated with distinct characteristics within the spectrum of disease, potentially representing a genetically defined subtype, as this was based on just 15 people.

Identifying subtypes of ASD is important to guide diagnosis, prognosis and disease management to maximise quality of life. This study is interesting because most categorisation of ASD has focused on behavioural dimensions. By contrast, this study flipped this on its head and looked at the genetic dimension of the disease, which revealed some statistically significant links.

Knowing this gene-disease link will help researchers understand the biological origins of the disease, which improves the chance of treatments being discovered or better ways to manage symptoms of the conditions to improve quality of life.

This is relatively early stage research; the genetic test will need to be further tested and validated in large and diverse groups to ensure it accurately identifies people with ASD and/or any subtypes.

Today this test does not help people with ASD, but it does help pave the way for potential improvements in the future.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Have scientists found the autism GENE? Breakthrough as specific link between DNA and the condition is discovered. Mail Online, July 7 2014

Links To Science

Bernier R, Golzio C, Xiong B, et al. Disruptive CHD8 Mutations Define a Subtype of Autism Early in Development. Cell. Published online July 6 2014

Categories: Medical News

New Alzheimer’s test may help future clinical trials

Medical News - Tue, 07/08/2014 - 14:25

“Research in more than 1,000 people has identified a set of proteins in the blood which can predict the start of the dementia with 87% accuracy,” BBC News reports.

The primary goal of the test was to predict whether people with mild cognitive impairments (usually age-related memory problems) would go on to develop “full-blown” Alzheimer’s disease over approximately a year.

There is currently no cure for Alzheimer’s, so people may question whether an early warning system for the disease is of any practical use.

However, having a relatively reliable method of identifying high-risk people who will develop Alzheimer’s could be useful in recruiting suitable candidates for clinical trials investigating future treatments.

An important point is that, while the test accuracy rate of 87% sounds impressive, this may not be a good indicator of how useful the test would be if it was used in the wider population.

Given real world assumptions on the proportion of people who have mild cognitive impairment that progress to Alzheimer’s disease (10-15%), the predictive ability of a positive test falls to around 50%. This means that those who have a positive test have a 50:50 chance of going on to have Alzheimer's.

Consequently, on its own, this test is unlikely to be much good for use in clinical practice for the general population. However, refining this test and combining it with other methods (such as a lipid test we discussed in March) might improve accuracy rates, making it a viable predictive tool in the future.

 

Where did the story come from?

The study was led by researchers from King's College London and was funded by the Medical Research Council, Alzheimer's Research, The National Institute for Health Research (NIHR) Biomedical Research Centre and various European Union (EU) grants.

Some of the researchers reported potential conflicts of interest, as they had patents filed with, or work for, Proteome Sciences plc, a life sciences company with a commercial interest in biomarker testing. Another researcher works for the pharmaceutical company GlaxoSmithKline (GSK). No other conflicts of interest were reported.

The study was published in the peer-reviewed medical journal Alzheimer’s & Dementia. The study is open-access, so is free to read online.

The media coverage was broadly accurate, but none of it reported the positive predictive value of the test. Taking this into account reduces the impressive-sounding 87% accuracy figure to a positive predictive value of around 50%, depending on the rate of conversion from mild cognitive impairment to Alzheimer's disease.

This important information should have been highlighted to avoid overstating the utility of the test on its own.

 

What kind of research was this?

This study used information from three existing cohorts of people, to study the prognostic value of a new blood test in predicting people’s progress from mild cognitive impairment to Alzheimer’s disease. 

There are currently no drug treatments that cure Alzheimer's, although there are some that can improve symptoms or temporarily slow down progression of the disease in some people.

Some believe that many new clinical trials fail because drugs are given too late in the disease process.

A blood test could be used to identify patients in the early stages of memory loss, who could then be used in clinical trials to find drugs to halt the disease's progression.

 

What did the research involve?

The researchers studied the blood plasma of 1,148 elderly people – 476 with a clinical diagnosis of Alzheimer's disease, 220 with mild cognitive impairment (memory and thinking problems not severe enough to be classed as dementia) and 452 with no signs of dementia. They then studied how differences in proteins correlated with disease progression and severity over a period of between one and three years.

Diagnosis of Alzheimer's disease was made using established criteria, but as three separate cohorts were combined, the diagnostic tool used in each was actually different.

Other standardised clinical assessments included the Mini-Mental State Examination (MMSE) for measuring general cognition and cognitive decline, as well as the Clinical Dementia Rating (ANM and KHP-DCR only) for measuring dementia severity.

Participants’ brains were also scanned using an MRI scanner, to measure the volume and thickness of the brain to look for further signs of Alzheimer’s or brain deterioration.

The researchers started with 26 candidate proteins they thought might be useful to predict progression and severity. These were tested in different combinations and reduced to the best 10, based on specificity and sensitivity of the test.

 

What were the basic results?

The team identified 16 proteins in participants’ blood that correlated with disease severity and cognitive decline.

The strongest associations predicting progression from mild cognitive impairment to Alzheimer’s disease came from a panel of 10 proteins. Depending on the thresholds used, this test had an accuracy of between 72.7% and 87.2%, and a positive predictive value of between 47.8% and 57.1%.

The positive predictive value of a test is the proportion of positive results that are true positives; the negative predictive value is the proportion of negative results that are true negatives. Together, these indicate a test’s ability to correctly identify people with a condition without misdiagnosing people who don’t have it.

The accuracy of the protein test was improved when it was combined with a test for a gene variant associated with increased amyloid protein in the brain (the APOE ε4 allele).

This combined test predicted progression from mild cognitive impairment to Alzheimer’s disease over a year, with an accuracy of 87% (sensitivity 85%, specificity 88%, positive predictive value (PPV) 68.8%). The PPV was based on the 24% of people with mild cognitive impairment who went on to develop Alzheimer’s disease in the study. However, there is a wide range of estimates for this conversion rate, many of which are much lower.

For example, figures from the Alzheimer’s Society estimate that between 10% and 15% of people with mild cognitive impairment progress to Alzheimer’s disease each year. Based on this assumption, the test has a positive predictive value of between 44% and 56%. This means that a positive result on the combined test would correctly identify people in only around half of cases, and potentially fewer.
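The way the predictive value falls as the conversion rate falls follows directly from Bayes’ theorem. A minimal sketch (the sensitivity, specificity and conversion-rate figures are those reported above; the function name is illustrative):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem:
    P(disease | positive test) = true positives / all positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Combined protein + APOE test: sensitivity 85%, specificity 88%.
# At the study's 24% conversion rate the PPV is roughly 69%...
print(round(ppv(0.85, 0.88, 0.24), 2))
# ...but at the Alzheimer's Society's 10-15% estimates it falls
# to roughly 44-56%, i.e. only around half of positives are correct.
print(round(ppv(0.85, 0.88, 0.10), 2))
print(round(ppv(0.85, 0.88, 0.15), 2))
```

The same arithmetic explains why any screening test looks weaker when the condition it screens for is rarer.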

The average time for mild cognitive impairment to develop into Alzheimer’s in the study was around one year.

 

How did the researchers interpret the results?

The study authors concluded they had, “identified 10 plasma proteins strongly associated with disease severity and disease progression” and that, “such markers may be useful for patient selection for clinical trials and assessment of patients with predisease subjective memory complaints”.

 

Conclusion

This research developed and tested a new blood test that predicted progression from mild cognitive impairment to Alzheimer’s disease with an accuracy of 87%, approximately a year before the disease developed.

However, in a non-experimental setting, the test may be much less effective than the 87% figure suggests. Based on figures from the Alzheimer’s Society indicating that 10-15% of people or fewer progress each year, a positive result on the test would only be expected to be correct around 50% of the time.

The test is unlikely to be used by itself, so its predictive ability may be improved if used in combination with other tests in development. The predictive ability of the test would improve if the 10-15% assumptions turned out to be an underestimate, and reduce if the conversion assumption was an overestimate.

A further limitation of the test, if it were to be used for general screening, is that it only made predictions a year in advance of an Alzheimer’s diagnosis. This is certainly better than nothing, but Alzheimer’s disease is often diagnosed at a later stage, with the disease having already caused damage for many years (the exact time is variable). A test that predicted Alzheimer’s disease 5 or 10 years in advance would be a much bigger advance.

As there is currently no cure for Alzheimer’s, there is likely to be a debate about whether patients would want to know this information if the test was successfully developed further and made available in mainstream medicine.

Some people may prefer to know their prognosis, as it may influence what they do or the way they live.

Others may prefer not to know, given that current drug treatments can only slow the progression of the disease in some people, and do not improve the symptoms in all of those affected.

However, as the researchers point out, the test has an important potential use. If confirmed to be effective in further studies, the test could be used to recruit people into clinical trials, testing new drugs or treatments to help future generations.

Promising Alzheimer’s drugs are reported to have a high failure rate in human clinical trials.

Many researchers believe this to be because by the time a person is diagnosed with Alzheimer’s, it is too late to do anything about it, with medicine unable to reverse the brain damage that has already been caused.

Therefore, scientists are looking for ways to intervene earlier.

Knowing who is likely to develop Alzheimer’s within a year is a step forward in this effort, as researchers can test different drugs and treatments and see whether they prevent the progression from mild cognitive impairment to Alzheimer’s disease. This isn’t currently possible with existing diagnostic tools and approaches.

One of the limitations of this research is that it did not use post-mortem assessments to diagnose Alzheimer’s and assess its severity. Instead, it relied on clinical diagnosis, severity scores and MRI scans. While these are practical and valid measures, the gold standard for Alzheimer’s diagnosis is a post-mortem examination of the brain. This could be corroborated with the test results in future studies.

This is the first research group to test the predictive ability of this specific panel of proteins.

Interestingly, a previous small study found that a different panel of 10 blood lipid biomarkers identified, with 90% accuracy, the 28 cognitively normal participants who progressed to either mild cognitive impairment or mild Alzheimer’s disease within two to three years, compared with those who did not.

It will be important for future research groups to confirm and replicate the findings, to see if the results are the same, or if a combination of these approaches improves the predictive values in larger trials.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Alzheimer's research in 'major step' towards blood test. BBC News, July 8 2014

Alzheimer's disease could be prevented after new blood test breakthrough. The Daily Telegraph, July 8 2014

Blood test breakthrough in search for Alzheimer's cure. The Guardian, July 8 2014

New blood test 'paves way to halt dementia': Patients could get drugs earlier. Daily Mail, July 8 2014

Alzheimer's blood test 'could be available in two years'. ITV News, July 8 2014

Alzheimer’s blood test could predict whether disease will develop within a year. Daily Mirror, July 8 2014

Blood test to give early warning of Alzheimer’s. The Times, July 8 2014

Alzheimer's breakthrough: Simple new test will help millions beat cruel disease. Daily Express, July 8 2014

Links To Science

Hye A, Riddoch-Contreras J, Baird AL, et al. Plasma proteins predict conversion to dementia from prodromal disease. Alzheimer’s & Dementia. Published online July 8 2014

Categories: Medical News

Two-question test for alcohol misuse 'effective'

Medical News - Mon, 07/07/2014 - 15:00

“Do you regularly have more than six drinks in one sitting? Or do you regret a drunken escapade that took place in the past year? Answering yes to both questions may be a sign that you have a drink problem," the Mail Online reports.

This comes following a systematic review, which is, essentially, a study of studies.

The review aimed to examine whether short and quick screening approaches (comprising just one or two questions) can successfully and accurately identify people with alcohol problems during a GP visit.

It has been suggested that as many as 30% of people attending a GP appointment have some form of alcohol problem.

From the seven papers identified, using a single screening question such as “How often do you have six or more drinks on one occasion?” or “As a result of your drinking or drug use, did anything happen in the last year that you wish didn’t happen?” was not very accurate.

However, asking two screening questions increased accuracy (sensitivity) to 87.2%, meaning that only around one person in eight would be missed.

Asking either one or two questions would not be recommended as a single approach, however, because they’re not accurate enough. Instead, they appear to serve well as an initial screening technique, if they are then followed by a standard screening questionnaire.

 

Where did the story come from?

The study was carried out by researchers from Leicester General Hospital, and received no financial support.

The study was published in the peer-reviewed medical journal British Journal of General Practice.

The Mail Online’s reporting of the study is accurate, but it did not make clear that the initial two-question screen was not proposed to be used by itself.

This initial screening would be followed up by one or more longer, validated alcohol screening questionnaires if an alcohol problem was initially suspected.

Describing this more measured approach may have been less newsworthy than warning people that their GP was going to make the decision based on just two questions.

 

What kind of research was this?

This was a systematic review that aimed to see whether asking one or two simple questions could be an accurate and acceptable method for general practice screening to find out if people have an alcohol problem.

Previous research, the study authors said, suggested that up to a third of people attending general practice may be drinking at a level harmful to their health (called at-risk drinking) or have an alcohol use disorder. Many researchers consider that GPs might be well placed to identify any drink-related problems, as they can offer help and support at an early stage.

This study aimed to review the global literature to see if there was any evidence to support the use of very short alcohol screening questions in general practice.

 

What did the research involve?

The authors searched three literature databases – MEDLINE, PubMed and Embase – up to January 2014, using various search terms, including different terms for alcohol use disorders and terms to identify screening questions. They looked at the identified studies and only included those that assessed one or two questions to identify alcohol problems. They appraised the quality of included studies, and extracted information including study setting, patient characteristics, sample size, questions used to identify alcohol problems and accuracy of the screening questions.

 

What were the basic results?

The researchers identified six publications investigating one-question screening, and two studies investigating two-question screening. All were diagnostic studies designed to investigate the accuracy of diagnosing alcohol problems. The sample size of individual studies ranged from 227 to 1333 participants.

Most studies used valid diagnostic criteria to identify alcohol use disorders, and the overall prevalence of disorders across studies was 21%.

A single-question approach helped identify 453 out of 800 individuals with an alcohol use problem, with a pooled sensitivity across studies of 56.6%. This means that 56.6% of people with an alcohol problem were correctly identified by the screening question as having a problem.

On the flip side, 43% of people with an alcohol problem would have incorrectly been given the all-clear (false negatives).

By comparison, the pooled specificity of the single question was 81.3%, meaning that 81.3% of people without an alcohol problem were correctly identified by the screening question as not having an alcohol problem suggested (18.7% false positive rate). However, the individual studies had quite variable sensitivity and specificity results.

The most accurate single questions appeared to be “How often do you have six or more drinks on one occasion?” and “As a result of your drinking or drug use, did anything happen in the last year that you wish didn’t happen?” Both were reported to have excellent performance for ruling out people with alcohol problems, with low false negative results.

For the two-question approach, the sensitivity was 87.2% (proportion accurately identified to have an alcohol problem), and the specificity was 79.8% (proportion accurately identified as not having an alcohol problem). The optimal combination of questions was “recurrent drinking in situations in which it is physically hazardous” combined with “drinking in larger amounts or over a longer period than intended”.
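A quick sketch of the arithmetic behind these pooled figures (the counts and percentages are those reported in the review; variable names are illustrative):

```python
# Pooled single-question result: 453 of the 800 people with an alcohol
# problem were correctly identified.
single_sensitivity = 453 / 800          # about 0.566, i.e. 56.6%
single_missed = 1 - single_sensitivity  # about 0.434 -> roughly 43% false negatives

# The two-question screen raised pooled sensitivity to 87.2%,
# so the miss rate drops to 12.8% - roughly one person in eight.
two_q_missed = 1 - 0.872

print(f"{single_missed:.1%} missed with one question")
print(f"{two_q_missed:.1%} missed with two questions")
```

Specificity works the same way in reverse: a specificity of 79.8% for the two-question screen means about a fifth of people without an alcohol problem would screen positive, which is why a confirmatory questionnaire is still needed.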

The currently used 10-item AUDIT alcohol use questionnaire was found to be the most accurate single method for identifying alcohol use disorders, followed by the 4-item CAGE questionnaire.

However, the difficulty with these questionnaires is that they would take GPs much longer to administer as screening tools. Next in accuracy came the two-question screen, followed by the single question.

The most accurate method was considered to be a stepwise approach of asking two initial screening questions, followed by the AUDIT or CAGE questionnaires to confirm.

 

How did the researchers interpret the results?

The researchers conclude that “two brief questions can be used as an initial screen for alcohol problems, but only when combined with a second-step screen. A brief alcohol intervention should be considered in those individuals who answer positively on both steps”.

 

Conclusion

This systematic review has examined global literature to identify studies that have assessed using one or two screening questions in general practice to identify people with alcohol use problems. The pooled results across the seven publications found the prevalence of alcohol use problems to be 21%.

Using a single question such as “How often do you have six or more drinks on one occasion?” or “As a result of your drinking or drug use, did anything happen in the last year that you wish didn’t happen?” was not very accurate, having a sensitivity of just over half. This means that almost half of people with alcohol problems would be missed. However, asking two screening questions increased sensitivity to 87.2%, meaning that less than 13% would be missed.

The optimal two question categories were “recurrent drinking in situations in which it is physically hazardous”, combined with “drinking in larger amounts or over a longer period than intended”.

However, as the researchers highlight, asking either one or two questions would not be recommended as a single approach, because they’re not accurate enough. They would need to be followed by the longer 10-item AUDIT alcohol use questionnaire or the 4-item CAGE questionnaire in a stepwise approach.

Both of these questionnaires used alone are more accurate than either one or two questions used alone, but would take GPs much longer to administer as screening tools. However, asking an initial two questions followed by AUDIT or CAGE would be the best way to identify people with alcohol problems, hopefully targeting them towards interventions.

However, as the study says, there are limitations. Despite the systematic review design, only seven diagnostic studies were identified, and it was not possible to conduct subgroup analyses. For example, it can’t tell us whether accuracy differs by male or female sex, or how good the screening methods would be at identifying people with different types of alcohol problem (e.g. people with actual alcohol dependence, or just harmful or hazardous drinking habits).

The researchers say that in the UK, “experts have recommended routine alcohol screening focusing on new patient registrations, general health checks and special types of consultation”. However, as they say, there are many things to consider, including the acceptability of asking even a single question related to alcohol use, “as some questions may not be welcome in unselected primary care attendees”.

The researchers aptly put forward “a cautious recommendation for one or two verbal questions as a screening test for alcohol-use disorder in primary care, but only when paired with a longer screening tool to decide who warrants a brief alcohol intervention”.

Further research is needed to clarify the added value of this approach compared with clinical assessment without the use of screening questions.

Despite alcohol misuse being associated with psychological denial, most people with an alcohol problem know that they have a problem.

A good source of advice is your GP. Be honest with them about how much you drink.

If your body has become dependent on alcohol, stopping drinking overnight can cause severe withdrawal symptoms, and in some cases can even be life-threatening, so get advice about cutting down gradually.

Your GP may refer you to a local community alcohol service. Ask about free local support groups, day-centre counselling and one-to-one counselling.

Read more about how to seek help for alcohol misuse.  

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Do you consume more than six drinks in one sitting at least once a month? And do you regret a drunken incident in the past year? Two questions that tell your GP you have a drink problem. Mail Online, July 7 2014

Links To Science

Mitchell AJ, Bird V, Rizzo M, et al. Accuracy of one or two simple questions to identify alcohol-use disorder in primary care: a meta-analysis. British Journal of General Practice. Published online July 1 2014

Categories: Medical News

Aggressive breast cancer protein discovered

Medical News - Mon, 07/07/2014 - 14:30

"A breakthrough by scientists could lead to a new treatment for one of the most aggressive forms of breast cancer," the Mail Online reports. Researchers have identified a protein called integrin αvβ6, which may help trigger the spread of some types of breast cancer.

Up to a third of breast cancers are HER2-positive cancers. These are cases of breast cancer where growth is driven by a protein called human epidermal growth factor receptor 2 (HER2). These types of cancer can be particularly aggressive.

Seventy percent of people with HER2-positive breast cancers develop resistance to Herceptin, the main drug treatment for these cancers, effectively leaving them with no treatment options.

This laboratory study examined samples of breast cancer tissue from two cohorts of women with breast cancer. Researchers looked at the expression of a protein called integrin αvβ6, which has been shown to interact with HER2 to stimulate cancer growth.

The researchers found women who had higher expression of integrin αvβ6 in their breast cancer tissue had poorer five-year survival rates, particularly when they were also HER2 positive.

The researchers then studied mice that had been grafted with breast cancer tissue. A potential new treatment called 264RAD was found to block integrin αvβ6 in these mice.

Giving this treatment in combination with Herceptin stopped cancer growth, even in breast cancer tissues resistant to Herceptin.

Clinical trials of 264RAD in women with this particularly high-risk type of breast cancer are now required.

 

Where did the story come from?

The study was carried out by researchers from Barts Cancer Institute, Queen Mary, University of London, and was funded by the Breast Cancer Campaign and the Medical Research Council.

It was published in the peer-reviewed Journal of the National Cancer Institute, and is open access and available to read for free online.

Reporting of the study by both the Mail Online and The Daily Telegraph is accurate and informative.

 

What kind of research was this?

This was a laboratory study examining samples of breast cancer tissue from two cohorts of women with breast cancer. Researchers looked for the expression of a particular protein called integrin alpha-v beta-6 (integrin αvβ6). They then looked at how the expression of this protein was associated with cancer survival.

The researchers explain that in up to a third of all breast cancers, a particular protein called human epidermal growth factor receptor 2 (HER2) is overexpressed. This is associated with a very aggressive breast cancer.

These cancers are aggressive because HER2 triggers signalling pathways that stimulate breast cancer cells to grow out of control. This means the cancer cells are more likely to spread into the lymph nodes or to other major organs of the body (known as metastases or metastatic cancer).

Currently, some HER2-positive breast cancers can be treated using the biological antibody treatment Herceptin (trastuzumab). Herceptin works by binding to the protein receptors and blocking them so HER2 cannot stimulate the growth of the breast cancer cells.

However, as the researchers point out, more than 70% of people develop resistance to Herceptin, leaving them without other treatment options. New treatments are therefore needed for people with HER2-positive breast cancers.

A molecule called TGFβ has been shown to promote HER2-driven cancer by increasing the migration, invasion and spread of breast cancer cells. Integrin αvβ6 has been identified as an activator of TGFβ and implicated in promoting the growth of various types of cancer.

With this knowledge, this research aimed to see whether integrin αvβ6 could influence HER2-positive breast cancer and if inhibiting its action helped reduce the cancer size and spread.

 

What did the research involve?

The research included two cohorts of people with breast cancer:

  • a Nottingham cohort that included 1,795 consecutive women treated from 1986 to 1998
  • a London cohort that included 1,197 women mostly treated from 1975 to 1998

The researchers were able to gather complete information on the women's tumour types, including whether they were HER2 positive.

They examined tissue samples for the expression of integrin αvβ6 and looked at how this was associated with survival by seeing whether the women were still alive at five-year follow-up checks.

The researchers also looked at survival rates when there was co-expression of both HER2 and integrin αvβ6.

Further laboratory experiments using mice grafted with breast cancer tissue examined treatment options using Herceptin (trastuzumab), the antibody known to block HER2, and an antibody called 264RAD, which was found to block integrin αvβ6.

 

What were the basic results?

High expression of integrin αvβ6 was found in 15-16% of breast cancer tissue samples from the two cohorts of women.

The researchers found a significant association between the expression of integrin αvβ6 and survival.

In women with a high expression of integrin αvβ6, five-year survival dropped from 75.6% to 58.8% in the London cohort, and from 84.1% to 75.0% in the Nottingham cohort.

In statistical terms, this meant that high expression was associated with an almost doubled risk of mortality.

Even with adjustment for tumour stage, size and grade, integrin αvβ6 was an independent predictor of overall survival.

For the Nottingham cohort, the researchers also had data available on distant spread (metastases) – 39.5% of those who were integrin αvβ6-positive had metastases, compared with 30.9% who were integrin αvβ6 negative.

When looking at co-expression of both integrin αvβ6 and HER2, they found this combination was associated with a particularly poor prognosis. In women who were HER2 positive, five-year survival was 65.1%, but it dropped to 52.8% if integrin αvβ6 was also strongly expressed.

The studies in mice grafted with human tumour tissue found both Herceptin and the study molecule 264RAD individually slowed tumour growth, but the combination of the two effectively stopped tumour growth, even in breast cancer tissues resistant to Herceptin.

 

How did the researchers interpret the results?

The researchers conclude that the overexpression of integrin αvβ6 in breast cancer is a poor prognostic factor associated with lower survival and the development of distant metastases.

They say the overexpression of both integrin αvβ6 and HER2 is associated with an even poorer prognosis.

The researchers feel the likely biological explanation is integrin αvβ6 and HER2 co-operate within the same molecular complex and integrin αvβ6 mediates the invasive behaviour of HER2-positive cancer.

They suggest targeting αvβ6 using the antibody 264RAD, either alone or in combination with Herceptin, may provide a novel therapy for people with this very high-risk breast cancer sub-type (HER2/αvβ6 positive), particularly if it is resistant to Herceptin.

 

Conclusion

This is a valuable laboratory study that furthers our understanding of the way HER2 may promote the growth, proliferation and spread of breast cancer cells, possibly through the influence of the protein integrin αvβ6.

The researchers suggest up to 40% of people with HER2-positive breast cancers may also have a high expression of integrin αvβ6.

The discovery that integrin αvβ6 may play a role in mediating the growth and progression of these cancers hopefully opens up new treatment possibilities for people with HER2-positive cancers.

As the researchers suggest, their study supports the proposal that testing breast tissue biopsies for the expression of αvβ6 should become a routine procedure to stratify women with breast cancer into this new very high-risk αvβ6-positive/HER2-positive group.

The laboratory experiments in mice with human breast cancer tissue grafts suggested the combination of Herceptin with the antibody 264RAD could be effective in stopping tumour growth in this group.

However, the development of the 264RAD treatment is still in the very early stages. Clinical trials in people with αvβ6-positive and HER2-positive breast cancer are now awaited. These will tell us whether 264RAD – either alone or in combination with Herceptin – could be an effective treatment option for this very high-risk sub-group.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

New treatment to stop spread of breast cancer: Scientists pinpoint molecule that causes cells to spread quickly. Mail Online, July 4 2014

Breast cancer breakthrough brings hope. The Daily Telegraph, July 4 2014

Links To Science

Moore KM, Thomas GJ, Duffy SW, et al. Therapeutic Targeting of Integrin αvβ6 in Breast Cancer. Journal of the National Cancer Institute. Published online June 30 2014

Categories: Medical News

Children’s TV contains unhealthy 'food cues'

Medical News - Fri, 07/04/2014 - 15:00

“Children are being bombarded with scenes of unhealthy eating on TV,” The Independent reports. Researchers looking at public broadcasting in the UK and Ireland have found that children’s TV contains a high number of visual and verbal references to unhealthy foods.

In the UK, direct TV advertising of unhealthy food to children has been banned since 2008.

However, the researchers were still interested in whether children’s TV broadcast by state-funded organisations still promotes unhealthy food choices to young children.

Researchers assessed five weekdays of children programmes from the BBC and its Irish equivalent, RTE. They were interested in what they describe as “cues” – visual, verbal and plot-driven references to specific foods and drinks.

Unhealthy foods accounted for just under a half of specified food cues, and sugar-sweetened beverages for a quarter. The context of the food and drink cue was mostly positive or neutral, with celebratory/social motivations the most common.

As the programmes were on non-commercial TV, it could be the case that the inclusion of said cues was due to cultural, and not commercial, reasons.

The plot device of a “slap-up meal” as a reward for a job well done, or as a treat, is a constant in children’s fiction, ranging from Rastamouse to the Famous Five.

Importantly, however, the study can’t tell us whether the food and drink cues directly influence the children’s food and drink requests or their eating patterns.

 

Where did the story come from?

The study was carried out by researchers from the University of Limerick in Ireland and Dalhousie University in Halifax, Canada. No sources of financial support are reported.

The study was published in the peer-reviewed medical journal Archives of Disease in Childhood, and has been released on an open-access basis, so is free to read online.

Overall reporting of the study by BBC News and The Independent is of a good quality.

The BBC includes useful wider debate on the issue, with Malcolm Clark, coordinator of the Children's Food Campaign, saying: “It is disappointing that children’s TV seems to be so tamely reflecting the obesogenic environment we all live in, rather than presenting a more positive vision of healthy, sustainable food.”

An obesogenic environment is an environment that promotes unhealthy food choices, such as a workplace located next to lots of fast food outlets. We discussed obesogenic environments back in March of this year.

A BBC spokesperson defended its content, saying: “We broadcast lots of programmes to promote healthy eating to children and to help them understand where food comes from, with series like I Can Cook, Incredible Edibles and Blue Peter.”

 

What kind of research was this?

This was an observational study that examined the frequency and type of food and drink references contained on children’s TV programmes during five weekday mornings, comparing UK and Irish state-funded television channels.

Previous research has demonstrated an association between children’s weight and the amount of TV they watch.

The researchers suggest that this could be due to a combination of greater periods of inactivity, and exposure to food advertising while watching TV.

Advertisements targeted at children are said to be dominated by high-calorie foods of low nutritional quality. Previous research has also linked children’s TV viewing with the consumption of low nutrient-density foods and with persuading parents to purchase such food, which can lead to the development of poor eating habits.

Direct advertising to children of “junk food” has been banned during children’s programming in the UK since 2008, although many children watch adult programming, such as talent shows and soap operas.

There is also the possibility that non-commercial programming may promote unhealthy food choices.

This study aimed to investigate this by looking at food and drink references in broadcasts aimed at children.

Understanding the influences and patterns on children’s eating habits may help in the development of further measures to improve healthy eating, in addition to targeting the overweight and obesity epidemic. However, this study only provides a small snapshot of food and drink references on child’s TV during a one-week period. It cannot tell us how the many other types of media advertising influence eating patterns, or capture the wider picture of all the lifestyle and environmental factors that are associated with overweight and obesity.

 

What did the research involve?

This research only reviewed the public broadcast channels of the BBC in the UK, and Raidió Teilifís Éireann (RTE) in Ireland. These channels were said to be studied as they are “‘public-good’ channels, which aim to inform, educate and empower audiences”.

In July and October 2010, the researchers examined a total 82.5 hours of broadcast on these channels over five weekdays, looking at programmes broadcast between 06.00 and 11.30 on the BBC and between 06.00 and 17.00 on RTE. 

The researchers looked at food or drink references (or cues), defined as “a product being displayed within a food-specific context with potential to be consumed”. Cues were coded by type of product and as healthy or non-healthy (based on the food pyramid).

Healthy foods included breads/grains, cereals, meats, dairy, fruit, vegetables, fish and sandwiches.

Unhealthy foods included fast food/convenience meals, pastries, savoury snacks, sweet snacks/bars, ice cream and candy.

Beverages were coded and grouped as water, juices, tea/coffee, sugar-sweetened or unspecified.

They recorded the context of the cue (e.g. whether it was part of a meal, in the school or home setting, etc), and what motivations and consequences were associated with the food (e.g. as a reward, to relieve thirst or hunger).

 

What were the basic results?

The researchers recorded one food or drink cue every 4.2 minutes, equivalent to 450 on the BBC and 705 on RTE. The total recorded time involving food or drink cues was 4.8% of the total 82.5 hours, covering 3.94 hours and averaging 13.2 seconds per cue.

The foods most commonly seen could not be grouped into a distinct food group (unspecified, 16.6%), followed by sweet snacks (13.3%), sweets/candy (11.4%) and fruit (11.2%). The most common beverages were also unspecified (35.0%), followed by teas/coffee (13.5%) and sugar-sweetened (13.0%). Unhealthy foods accounted for 47.5% of specified food cues, and sugar-sweetened beverages for 25%.

Just over a third of the cues were visual, a quarter verbal, and the remainder were visual and verbal combined.

A third of the cues were in the home setting, and in a third of cases, the food or drink was consumed.

Half of the programmes involving food and drink cues involved humans, and half were in animations (human or other). In a quarter of cases, the motivation for the cue was celebratory/social; in a quarter, it was to relieve hunger/thirst.

In a third of the cases, the motivation and outcomes associated with the food cue were positive, in half they were neutral, and the remainder were negative.

When comparing the two broadcast channels (using only the morning broadcasts, for which data were available for both), there were significantly more cues on the BBC than on the Irish channel; correspondingly, this involved both significantly more healthy cues and significantly more unhealthy cues. On RTE, the most common food cues were unspecified (20.5% of cues), while on the BBC sweet snacks topped the chart, at 19%.

RTE contained significantly more cues for breads/grains, condiments and breakfast pastries, while the BBC had significantly more for fruit, sweet snacks and ice cream. On both channels, beverage cues were most commonly unspecified.

The BBC included more visual cues, while RTE had more verbal ones. The BBC also featured more animated characters, while RTE featured more humans. On both channels, the motivation was most often celebratory/social, followed by hunger/thirst. Health was not recorded as a cue motivation on the BBC, while it accounted for 6.2% of RTE cues.

 

How did the researchers interpret the results?

The researchers concluded that, “This study provides further evidence of the prominence of unhealthy foods in children’s programming. These data may provide guidance for healthcare professionals, regulators and programme makers in planning for a healthier portrayal of food and beverages in children’s television”.

 

Conclusion

This study provides a snapshot of the food and drink cues/references contained in children’s TV programmes on BBC and RTE over five weekdays, totalling 82.5 hours of broadcasting.

The research demonstrates the frequency of cues, the types of food and drink associated, and the motivations for the food cue.

This includes the observation that unhealthy foods accounted for just under half of specified food cues, and sugar-sweetened beverages accounted for a quarter.

The context of the food and drink cue was mostly positive, with celebratory/social motivations the most common.

Importantly, this study can’t tell us whether these food and drink cues actually have any direct influence on a child’s food and drink requests or their eating patterns. While an association between a child’s duration of TV viewing and overweight/obesity has previously been established, this is unlikely to be a result of a single factor, such as exposure to food and drink cues in TV programmes. Other factors – most notably, lack of physical activity while watching TV, and possibly the mindless eating of snacks while watching – are likely to have a big influence.

As both the BBC and RTE are publicly funded broadcasters, it is unlikely that any unhealthy food cues were included for commercial reasons (notorious examples include the McDonald's "Hamburglar" and "Tony the Tiger", who was used to sell sugared flakes).

The idea that food is a treat or celebration has long been part of children's fiction, such as the Famous Five's "lashings of ginger beer" and ice creams.

It would be interesting to take a broader look at the content across TV channels and over a longer period of time, and also to compare the content in programmes targeted at children with that in programmes aimed at teenagers and adults.

The food and drinks in this study were categorised into broad "healthy" or "unhealthy" groups, but this distinction is not clear-cut. For example, healthy foods included breads/grains, cereals, meats, dairy and sandwiches. However, each of these food groups contains many different "healthy" and "unhealthy" versions.

Ultimately, while television may be useful as an occasional babysitter, it is no substitute for parenting.

Teaching your child healthy habits at an early age increases the chances that such habits will persist into adulthood.

Read more about encouraging healthy eating in children.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Children bombarded with unhealthy eating messages on TV, experts warn. The Independent, July 4 2014

Children's TV 'packed with junk-food references'. BBC News, July 4 2014

Links To Science

Scully P, Reid O, Macken A, et al. Food and beverage cues in UK and Irish children—television programming. Archives of Disease in Childhood. Published online June 30 2014

Categories: Medical News

Headbanging could damage your (Motör)head

Medical News - Fri, 07/04/2014 - 14:10

“German doctors are highlighting the dangers of headbanging after a 50-year-old man developed bleeding in the brain following a Motörhead concert,” BBC News reports.

The news is based on a case report in The Lancet about a man who developed a subdural haematoma.

A subdural haematoma occurs when a blood vessel in the space between the skull and the brain ruptures. This is a serious condition that can be fatal, so early recognition and diagnosis is essential.

This is only one in a very small number of documented cases of brain blood clots that have been associated with headbanging. However, the incident serves as an important caution that the vigorous activity of headbanging may not always be as harmless as supposed. 

 

What is the story?

This was a case report. Case reports usually describe a particularly unusual set of circumstances.

The case was reported by doctors from Hannover Medical School in Germany and published in the peer-reviewed medical journal The Lancet.

They report on a 50-year-old man who presented to their neurosurgery department in January 2013, complaining of a constant and worsening headache for two weeks. He had no history of head injury, but did report headbanging at a Motörhead concert four weeks earlier. He had no other past health problems of note, and clinical examination and blood tests were normal. However, a CT scan of his brain showed a chronic subdural haematoma on the right side of his brain.

 

What is a subdural haematoma?

Both the media, and to a large extent The Lancet, adopt a somewhat whimsical reporting of the case, which is understandable given the unusual circumstances. However, a subdural haematoma is no laughing matter.

The brain and spinal cord are covered by protective membranes called meninges, which are made up of three layers: an inner layer (pia mater – closest to the brain), middle (arachnoid mater) and outer layer (dura mater – closest to the skull). A subdural haematoma therefore means that there is a blood clot underneath (sub) the dura mater. This means that the bleeding has happened between the middle and outer layers of the meninges.

Usually, a subdural haematoma occurs as a result of head injury or trauma. For example, there have been cases reported in the media where people have developed a subdural bleed after falling and hitting their head while skiing. The Formula One racing driver Michael Schumacher was reported to have developed a subdural haematoma as a result of a skiing accident in December 2013, which kept him in a coma for six months.

The actress Natasha Richardson died of a subdural haematoma, the symptoms of which only became apparent several hours after she sustained a skiing injury.

When the bleed develops, the collection of clotting blood takes up space and puts pressure on the underlying brain, causing symptoms such as headache, nausea and vomiting, and possibly drowsiness, confusion, or loss of consciousness.

Links To The Headlines

Headbanging can cause brain injury, say German doctors. BBC News, July 4 2014

Motörhead fan's vigorous headbanging leaves him with blood clot on the brain. The Guardian, July 4 2014

Motorhead fan's brain bleeds from headbanging. The Daily Telegraph, July 4 2014

Heavy metal lovers beware, headbanging can be bad for you: Motorhead fan suffers blood clot after performing 'violent and rhythmic' movement at concert. Mail Online, July 4 2014

Heavy metal fan suffers headbang brain injury. ITV News, July 4 2014

Headbanging to Motörhead caused fan’s brain to bleed. The Times, July 4 2014

Links To Science

Islamian AP, Polemikos M, Krauss JK. Chronic subdural haematoma secondary to headbanging. The Lancet. Published online July 4 2014

Categories: Medical News

Lab-grown corneas could prevent blindness

Medical News - Thu, 07/03/2014 - 14:50

 

“Scientists regrow corneas in breakthrough that could pave the way for a cure for blindness,” reports the Mail Online.

Researchers in the US have found a way to identify the stem cells that renew the cornea (the clear layer that covers the front of the eye), and have used them to grow normal corneas in mice.

These stem cells – called limbal stem cells (LSCs) – are known to be the basis of cornea renewal, but there has not been a way to harvest them before now.

Through a number of laboratory experiments, the researchers found that a protein called Abcb5 is located on the surface of the LSCs.

The protein can now be used as a marker to identify and separate them from other cells.

They also showed that transplanting the isolated human LSCs into mice lacking these cells caused them to develop normal corneas after five weeks, and then maintain them for over a year.

The hope now is that human corneal transplants could be enriched with lots of these LSCs to improve the chances of success. However, this would depend on the condition being treated, with the long-term success rate of corneal transplants currently ranging between 60% and 90%.

Additional research is likely to be needed to refine and further test the technique before it can be tested on humans.

 

Where did the story come from?

The study was carried out by researchers from Harvard Medical School, Boston Children’s Hospital, Brigham and Women’s Hospital, and several other US universities. It was funded by the National Institutes of Health, a Harvard Stem Cell Institute grant, the Department of Defense, the Corley Research Foundation and the Western Pennsylvania Medical Eye Bank Core Grant for Vision Research.

The study was published in the peer-reviewed medical journal Nature.

The UK media have accurately reported this story.

 

What kind of research was this?

The research comprised a series of laboratory and animal experiments, which aimed to analyse stem cells in the cornea to help improve the success rates of corneal transplants.

The cornea is the clear, outer layer that covers the front of the eye and, like a lens, helps focus light onto the retina. It is constantly renewed by LSCs, which are located in one layer of the cornea.

A number of conditions can result in a reduced number of LSCs, which prevents the cornea from repairing itself adequately.

This means that it can become opaque (stop being clear), causing reduced vision and blindness.

LSC deficiency can be due to congenital conditions, or result from injury caused by radiotherapy, chemical burns, contact lens wear or inflammatory conditions.

Management of LSC deficiency includes maintaining a healthy surface of the eye with artificial tears and, if necessary, topical steroids. If surgery is needed, a transplant of healthy cornea from a donor (usually deceased) can be used. Studies have shown that the number of LSCs within grafts is critical for long-term transplant success. However, there is currently no easy way to separate these cells from other corneal cells. This research investigated whether a technique could be developed to identify and separate out LSCs, with a view to being able to increase their number and thus improve success rates.

 

What did the research involve?

The researchers conducted several experiments using human corneal samples, staining and imaging techniques, and mice to find a way of identifying functioning LSCs.

They first investigated whether a protein that is present on the surface of other types of skin stem cells, called Abcb5, is also present on LSCs. They then looked at whether the presence of this protein identified LSCs specifically by examining whether the presence of the protein predicts a cell's active properties in cell renewal.

To test whether the presence of the Abcb5 protein on the LSC is necessary for corneal repair, the researchers compared mice genetically modified to lack a key part of this protein (knockout mice) and normal mice.

The knockout mice were able to see, but had thinner corneas, and the corneal cells had a disorganised pattern.

The researchers compared their wound healing abilities by making an injury to the mice's cornea, and measured how quickly and effectively the wounds healed.

This was done to see if the lack of protein affected how well the LSCs could generate new cells to repair the cornea.

They also transplanted mouse and human LSCs with and without the Abcb5 protein into mice and monitored the regrowth of the corneas. They looked at long-term (more than one year) results from this corneal restoration.

This was done by putting the LSCs into a fibrin based gel, removing the corneal and limbal epithelium of anaesthetised LSC-deficient mice and transplanting the LSC-containing fibrin gel, and suturing it in place.

 

What were the basic results?

The Abcb5 protein was present on the surface of the LSCs and seemed to specifically identify these cells rather than other cells in the cornea. Using antibodies to the Abcb5 protein allowed researchers to separate out the LSCs from other cells without damaging them.

The corneas of normal mice and knockout mice without the Abcb5 protein healed at the same rate. However, the repaired corneas in the Abcb5 knockout mice contained fewer, irregularly arranged corneal cells compared with the normal mice.

The mice with an LSC deficiency were given either mouse or human corneal grafts. There were three basic results. The mice that had:

  • LSCs without Abcb5 developed abnormal corneas.
  • A mixture of LSCs with and without Abcb5 had partial corneal restoration.
  • LSCs with Abcb5 developed normal, clear corneas. 

 

How did the researchers interpret the results?

The researchers concluded that “the identification and prospective isolation of molecularly defined LSCs with essential functions in corneal development and repair has important implications for the treatment of corneal disease, particularly corneal blindness due to LSC deficiency”.

 

Conclusion

This study has identified that the cell surface protein Abcb5 is necessary for normal function of LSCs in renewing the cornea. It has also shown that LSCs can be separated out from other cells through the use of antibodies to the Abcb5 protein without causing damage to the LSCs. This means that it should be possible to gather these cells (in preference to other cells) and use them to provide the best chance for a successful corneal transplant.

It is important to note that the mice were given genetically identical grafts or completely immunosuppressed so that they did not reject the grafts. At present, human recipients of donor corneal transplants also have to have immunosuppression to try to prevent the body from rejecting the transplant, unless the corneal transplant was from their good eye (but this can lead to a risk of LSC deficiency in this donor eye). Rejection is a common problem that currently affects around one in five transplant cases.

Immunosuppression and possible rejection would still be a consideration in using this new technique.

However, there is a possibility that researchers may be able to find a way to harvest normal LSCs from the person requiring the transplant and multiply them in the laboratory, before transplanting them back.

Though this research provides a new approach to capturing important cells for corneal regeneration, more research to develop the technique and make sure it is safe will be required before human trials can take place.

As is the case with all donated organs, the current demand for transplanted corneas outstrips the supply, so if you haven’t already signed up to the organ donation register, please do so.

Adding your name to the Organ Donor Register will only take a few minutes.

That way, you can be sure that your corneas and other valuable organs don’t go to waste after you die.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Scientists use stem cells to regenerate human corneas. BBC News, July 2 2014

Scientists regrow corneas in breakthrough which could pave the way for a cure for blindness. Mail Online, July 2 2014

Links To Science

Ksander BR, Kolovou PE, Wilson BJ, et al. ABCB5 is a limbal stem cell gene required for corneal development and repair. Nature. Published online July 2 2014

Categories: Medical News

Tests can predict teens most likely to binge drink

Medical News - Thu, 07/03/2014 - 14:30

“A single glass of wine or beer at the age of 14 can help a young teenager along the path to binge drinking,” the Daily Mail warns.

But having a single drink does not mean a child is bound to become a “binge boozer”. That is just one of around 40 factors researchers have identified which they claim can be used to predict whether a teenager will grow up to become a binge drinker.

These factors include life events, personality traits and differences in brain structure – such as increased activity in areas of the brain associated with reward-seeking.

It is difficult to see what practical implications this research will have in preventing teenage drinking, due to the cost of brain scans. A single functional MRI brain scan, as used in the study, can cost around £300 to £400 to carry out and then interpret.

As BBC News reports, "A simplified version of the test … is more likely to be used".

Children and their parents and carers are advised that an alcohol-free childhood is the healthiest and best option. Aside from the short and long-term risks of alcohol abuse, alcohol may disrupt the normal development of the teenage brain.

 

Where did the story come from?

The study was carried out by researchers from a number of academic institutions in Europe and North America, including the University of Nottingham and the Medical Research Council in the UK. It was funded by a number of different sources, including the European Union and the National Institute for Health Research.

The study was published in the peer-reviewed journal Nature.

Although drinking from an early age is not advised, the Daily Mail’s headline was rather alarmist. It is certainly not the case that drinking a single glass of wine at the age of 14 will condemn a teenager to a life of binge drinking. The research identified a range of factors which might put young people at risk.

BBC News coverage of the study was rather more measured and usefully quotes from both the researchers involved in the study as well as independent experts.

 

What kind of research was this?

This was a longitudinal study of 692 adolescents aged 14 from across Europe which aimed to create a model which could identify individuals at risk of future alcohol misuse.

It looked at a range of data including brain images, personality, life experiences and genetic information, to construct models of adolescent binge drinking.

The study follows on from a previous study by the same group, in which they investigated the associations between brain networks and high risk behaviours such as drug and alcohol misuse. The latest study aimed to predict who went on to drink heavily at age 16.

The researchers say that alcohol misuse is common among adolescents and is a strong risk factor for adult alcohol dependence. Identifying risk factors is important but previous studies have typically focused on just one type of risk factor. Personality, life events such as parental divorce, and certain genes and brain structure may all play a role.

 

What did the research involve?

The researchers used data from the IMAGEN project, a longitudinal study of adolescent development which followed 692 adolescents aged 14 from across Europe.

To construct and test their model, they looked at a whole range of information which had been collected on the children, including:

  • brain imaging and activity – including looking at the volume of the brains and also how the brain responds to reward
  • personality, using validated measures – including traits such as neuroticism, extravagance, and conscientiousness
  • cognitive ability, using validated intelligence scales
  • life experience including family history and stressful life events, gathered using a validated questionnaire
  • factors such as age, sex, age at puberty and socioeconomic status
  • the presence of 15 “candidate genes” which are thought to predispose to alcohol misuse, identified through blood tests

They used the data to construct a model of current and future adolescent alcohol misuse and to predict which individuals would become binge drinkers by the age of 16. They then tested their model on a new, separate group of teenagers, to test its reliability.

 

What were the basic results?

The researchers found that life experiences, neurobiological differences and personality are all important risk factors for binge drinking.

Personality measures associated with binge drinking included a “novelty seeking” trait – in other words, behaviour of searching for and feeling rewarded by novel experiences.

They found that their method predicted with about 70% accuracy (confidence interval [CI] 66-83%) which 14-year-olds are likely to binge drink at age 16.

 

How did the researchers interpret the results?

They conclude that vulnerability to binge drinking can be accurately predicted by a test looking at teenagers’ life experience, personality traits and brain structure and function. They point out that there are multiple factors involved in adolescent alcohol misuse – and that the influence of any one factor is “modest”.

The risk profile could help develop targeted interventions to those at risk.

 

Conclusion

This research on a model for identifying those at risk of turning into binge drinkers included a wide range of potential risk factors. How such a detailed test might be used in practice is uncertain.

Limitations of this study include the facts that:

  • it was reliant on the 14-year-olds accurately reporting how much alcohol they drank
  • some of the analysis was restricted to subsets of the 692 participants (115 “binge drinkers” and 150 “controls”)

There is also the practical limitation that access to brain scanning devices, such as MRI scanners, is limited. Interpreting the scans correctly also requires trained specialists, making them an expensive diagnostic technique. It could be the case that a “streamlined” version of the testing protocol, focusing just on personality traits and life experiences, may be used in the future.

There is already evidence that drinking frequently at a young age is linked to an increased risk of developing alcohol dependence in young adulthood. So an alcohol-free childhood is the healthiest and best option.

However, if children drink alcohol, it should not be until at least the age of 15 years.

Drinking before that age could disrupt the normal development of the brain and nervous system, leading to problems in later life.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Scientists 'develop test for teen binge-drinking risk'. BBC News, July 3 2014

One glass of wine or a beer at the age of 14 could set teenagers on the path to binge drinking, new study finds. Daily Mail, July 3 2014

Links To Science

Whelan R, Watts R, Orr CA, et al. Neuropsychosocial profiles of current and future adolescent alcohol misusers. Nature. Published online July 2014

Categories: Medical News

Frozen testicle tissue produces mice offspring

Medical News - Wed, 07/02/2014 - 18:19

“A sample of frozen testicle has been used to produce live offspring in experiments on mice,” BBC News reports. 

While this may seem like a strange study to conduct, the aim is to preserve the fertility of boys affected by childhood cancers such as acute lymphoblastic leukaemia.

Side effects of treatments for these types of cancer, such as chemotherapy, can result in infertility.

Currently, it is not possible to preserve the fertility of pre-pubescent boys undergoing some cancer treatments, because sperm is not produced until puberty (which usually happens around the ages of 11 or 12). The aim in this particular study was to see if sperm could be grown from frozen testicular tissue samples.

The researchers froze testicular tissue samples from mice that were five days old, and then grew sperm in the laboratory. They then used this sperm to fertilise over 200 eggs. More than half of them were inserted into female mice, and eight mice were born. These mice appeared to be healthy and were able to reproduce.

This is exciting research, but there are many challenges to be faced. These include making sure that the technique works on human testicular tissue, and is able to produce normal sperm and healthy offspring.

Despite the small numbers of mice involved, this animal study does provide some hope that the technique could be refined for future use in humans.

 

Where did the story come from?

The study was carried out by researchers from Yokohama City University, the National Research Institute for Child Health and Development in Tokyo and the RIKEN Bioresource Center in Ibaraki, Japan. It was funded by Grants-in-Aid for Scientific Research on Innovative Areas from the Japan Society for the Promotion of Science, and University grants.

The study was published in the peer-reviewed medical journal Nature.

BBC News reported the study accurately, and pointed out some of the challenges that will need to be overcome when conducting human trials.

 

What kind of research was this?

This was a laboratory study conducted on mice to see if frozen testicular tissue could be used to generate healthy sperm, which could then fertilise eggs. The researchers wanted to investigate whether they could grow testicular tissue in a laboratory to produce sperm as a method of preserving fertility for boys undergoing chemotherapy or radiotherapy. It isn’t possible to freeze a sperm sample for boys undergoing cancer treatments that might cause infertility, as sperm are not produced until a boy reaches puberty.

Other techniques that have previously been investigated in animals include freezing testicular tissue and then transplanting it back. However, these techniques could reintroduce cancer cells. 

 

What did the research involve?

The researchers froze samples of testicular tissue from neonatal (baby) mice. They then grew the samples in the laboratory, and sperm were produced. These were used to fertilise eggs, which were implanted in female mice.

The testicular tissue of mice around 4.5 days after birth was frozen using either “slow freezing” or “vitrification” (high-speed freezing using an anti-freeze substance). After preservation in liquid nitrogen for between 7 and 223 days, they were thawed and cultured in an agarose (seaweed) gel for up to 46 days, to see if sperm would be produced.

In the second stage of tests, the sperm produced from slow freezing or vitrification was used to inseminate mice eggs, which were transferred into female mice.

 

What were the basic results?

In the sperm culture experiments, 17 out of 30 testicular tissue samples produced sperm. Of these, 7 samples had more than 100 sperm, and 6 samples had more than 10 sperm.

They used the sperm to fertilise 236 eggs, and then transferred 156 of them into female mice. About a third of these (n=49) implanted (attached to the womb), and 8 mice were born.

The mice appeared to grow healthily and were able to mate naturally. It is unclear how long the mice were followed.

The mice studied were born from both slow-freezing and vitrification techniques.

 

How did the researchers interpret the results?

The researchers conclude that, “although they may not be easy and require further investigation, organ culture methods for the spermatogenesis of other animals, including humans, are expected to be successful in the future. When this goal is realised, testis tissue cryopreservation will become a practical means to preserving the reproductive capacity of pre-pubertal male cancer patients”. 

 

Conclusion

This laboratory study has shown that it is possible to freeze pre-pubertal testicular tissue from mice, and that it is also possible to grow viable sperm from it. However, as can be seen from the figures, the actual number of mice born was extremely small compared to the number of fertilised eggs transferred into female mice. Although the mice were able to reproduce and appeared to be healthy, this was not actually studied in depth.

In addition to this, there are challenges that need to be faced when considering using this technique in humans, including whether the technique could produce genetically normal sperm and healthy offspring.

The researchers point out other limitations in the potential for growing human testicular tissue, which include the fact that:

  • the mixture used to grow the mouse samples did not work for rat samples; the reasons why are unclear, but this means it isn’t certain that the technique will work in other species, including humans
  • the mixture used products derived from bovine (cow) serum, which could pose an infection risk for humans

While the numbers were small, this experimental study does provide some hope that the technique could be refined for future use in humans.

For people who wish to bring up a child but are unable to do so for medical reasons, adoption or fostering are viable options. There are around 4,000 children in England who need to be adopted every year, and many more who need fostering.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Frozen testicle 'live birth first'. BBC News, July 1 2014

Links To Science

Yokonishi T, Sato T, Komeya M, et al. Offspring production with sperm grown in vitro from cryopreserved testis tissues. Nature Communications. Published online July 1 2014

Categories: Medical News

Parents of autistic kids 'have autistic traits too'

Medical News - Wed, 07/02/2014 - 14:26

"Parents of children with autism are more likely to have autistic traits," the Mail Online reports. The news comes from research comparing the families of children with autism spectrum disorder (ASD) with those that are unaffected.

Parents and children with ASD completed Social Responsiveness Scale (SRS) questionnaires designed to detect traits known to be associated with the condition.

The study found the risk of ASD increased by 85% when both parents had elevated SRS scores. Fathers' elevated SRS scores significantly increased the risk of ASD in the child, but no association was found with mothers' elevated scores.

The study also found elevated SRS scores for both parents significantly increased child SRS scores in children not reported to have ASD.

But this study has several limitations worth noting, particularly that it relied on what the mothers said to determine whether a child had ASD. This means that some children reported to have ASD may not actually have the condition.

It could simply be the case that naturally shy parents brought up a naturally shy child. Such reporting could be considered to be medicalising normal human behaviour.

 

Where did the story come from?

The study was carried out by researchers from the Harvard School of Public Health, the University of California, Washington University and other US institutions.

It was funded by grants from the US National Institutes of Health, Autism Speaks and the US Army Medical Research Material Command.

The study was published in the peer-reviewed medical journal JAMA Psychiatry.

There is a potential conflict of interest associated with the study, as the Social Responsiveness Scale used in the research was devised by one of the lead researchers involved in the study, Professor John Constantino, who also holds the copyright. Every time a copy of the scale is downloaded or posted, the professor receives a royalty. This conflict of interest is made clear in the study, however.

The Mail Online picked up the story and overall reported on the study appropriately. However, the website failed to mention that the ASD diagnosis was mainly determined by reports from the mothers involved. The news story implies that a diagnosis of ASD had been confirmed by a qualified medical professional.

 

What kind of research was this?

This was a nested case-control study carried out within a wider cohort study called the Nurses' Health Study II. 

A nested case-control study is a comparison of people who have a condition of interest (cases) with those who don't (controls). The past histories and characteristics of the two groups are examined to see how they differ.

This type of study is often used to identify risk factors for uncommon or rare medical conditions. A nested case-control study is a special type of case-control study where cases and controls are selected from the same cohort of people (and are therefore "nested").

In contrast to non-nested case-control studies, data is usually collected in advance (prospectively), which means researchers can be sure of when certain exposures or outcomes happened. This avoids the difficulties or biases of participants remembering (or misremembering) past events.

Also, as cases and controls are selected from the same cohort, this means they should be better matched than if researchers identified cases and controls separately.

 

What did the research involve?

Participants in this study were part of a wider cohort study called the Nurses' Health Study II, which included 116,430 female nurses aged 25 to 42 years when they were recruited in 1989.

As part of the wider study, these women completed questionnaires posted to them every two years since recruitment. In 2005 they were asked whether any of their children had autism, Asperger's syndrome or another condition on the autism spectrum.

Current thinking is that autism spectrum disorder (ASD) encompasses a range of conditions and associated symptoms. This can range from children with behavioural and learning difficulties (often referred to as autism) to children whose intelligence is unaffected but have problems with social interaction (known as Asperger's syndrome).

The current study began in 2007. "Cases" were determined by mothers reporting ASD among their children. "Controls" were the children of women who did not have the condition. They were matched to the cases by birth year.

Of the original 3,756 women included in the study, the final analysis was performed on 1,649 participants. This was because some mothers failed to respond to follow-up questionnaires and some chose to no longer participate.

The researchers also excluded some participants, including those with missing information, mothers who failed to indicate they had a child with ASD on follow-up questionnaires, and any "controls" with ASD.

The main outcome of interest in the study was ASD assessed using the Social Responsiveness Scale (SRS). The SRS is a validated questionnaire used to assess behavioural and social communication traits.

It provides a single score that distinguishes individuals with ASD from individuals who do not have the condition and those with other psychiatric and developmental conditions.

A small proportion of cases (50) had maternal reports of ASD diagnosis validated using a diagnostic interview called the Autism Diagnostic Interview – Revised. SRS scores for children and fathers were completed by the nurses, whereas the mothers' forms were completed by their spouse or a close relative.

The SRS scores were then examined by the researchers, who used statistical techniques to look for associations with the risk of ASD among the children. The children's SRS scores were also examined in association with the SRS scores of their parents.

In their analysis, the researchers made adjustments for several confounders, including:

  • child sex
  • child year of birth
  • maternal and paternal age at birth
  • household income level
  • race
  • maternal pre-pregnancy obesity
  • maternal history of depression
  • divorce status

 

What were the basic results?

A total of 1,649 children were included in the final analyses: 256 children with ASD (cases) and 1,393 children who did not have the condition (controls).

The main findings from this study were:

  • risk of ASD was increased by 85% among children when both parents had elevated SRS scores (odds ratio [OR] 1.85, 95% confidence interval [CI] 1.08 to 3.16)
  • fathers' elevated SRS scores significantly increased the risk of ASD in the child (OR 1.94, 95% CI 1.38 to 2.71), but no association was found with mothers' elevated SRS scores
  • elevated SRS scores for both parents significantly increased child SRS scores in the control children (an increase of 23 points on the SRS)
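The odds ratios quoted above are the odds of the exposure (elevated parental SRS scores) among cases divided by the odds among controls. As a rough sketch of the arithmetic, using invented counts purely for illustration (the study's underlying 2x2 tables are not given in this article):

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Invented counts for illustration only: suppose 40 of 256 cases had both
# parents with elevated SRS scores, versus 125 of 1,393 controls.
print(round(odds_ratio(40, 216, 125, 1268), 2))  # → 1.88
```

Note that a news-friendly phrase like "risk increased by 85%" treats an odds ratio of 1.85 as if it were a risk ratio; that is only a reasonable approximation when the outcome is relatively uncommon.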

 

How did the researchers interpret the results?

The researchers concluded they found evidence that parents of children with ASD had a greater social impairment than control parents, as measured by the Social Responsiveness Scale (SRS).

They also found that when both parents had elevated SRS scores, this increased the risk of ASD in the child.

They say that heritability of autism traits was supported through significant increases in child SRS scores according to elevated parent SRS scores among children without the condition.

 

Conclusion

Overall, this study provides limited evidence of an association between elevated Social Responsiveness Scores (SRS) among parents and the risk of autism spectrum disorder (ASD) in their children.

As the authors note, the study has several strengths, including that it adjusted for several potential confounders, such as maternal history of depression and maternal and paternal age at birth, and used cases and controls drawn from a larger study (the Nurses' Health Study II).

However, the researchers do note this wider study is not ethnically or racially diverse, so its findings may not be generalisable to groups outside of those studied.

The wider study was also only carried out in nurses and this may also limit the generalisability of the study.

In addition to this limited generalisability, there are several other limitations worth noting.

Self-reporting

ASD was predominantly determined via maternal report, so it is likely that some of the "cases" actually did not have the condition and instead had a milder condition, no condition or another condition altogether.

The authors did attempt to account for this by validating a sub-group of cases using a diagnostic interview carried out by a trained health professional. However, this validation was only done for 50 "case" children.

Incomplete paternal information

The researchers say they also did not have complete information on the fathers of the children (for example, paternal history of depression was not accounted for as a confounder). This may have affected the results.

Reporting bias

There is also a possibility of reporting bias in that mothers completed forms for children and fathers, and fathers and close relatives completed forms for mothers.

As ASD is thought to be associated with genetics (although environmental factors are also thought to be involved), the hypothesis that parental traits may contribute towards a child's condition is plausible.

But it is also possible that some children grow up to have a similar personality to their parents. While ASD is a recognised neurological condition, being introverted and shy is just part of the wider range of human personalities. We should always be vigilant that we don't start trying to fix problems that do not actually exist.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Parents of children with autism are likely to have similar traits, new study finds. Mail Online, July 1 2014

Links To Science

Lyall K, Constantino JN, Weisskopf MG, et al. Parental Social Responsiveness and Risk of Autism Spectrum Disorder in Offspring. JAMA Psychiatry. Published online June 18 2014

Categories: Medical News

Emotions manipulated in Facebook study

Medical News - Tue, 07/01/2014 - 18:19

"Facebook made users depressed in secret research," the Mail Online reports. The news comes from a controversial experiment where researchers used the social networking site Facebook to explore the effects of "emotional contagion".

Emotional contagion is when emotional states are transferred between people. For example, if everyone in your office is in a good mood, chances are your own mood will be lifted.

To study its effects, researchers reduced the amount of negative or positive content that appeared in users' newsfeeds to see if this changed their emotional posting behaviour.

The study found when positive emotional content was reduced, people subsequently produced fewer posts containing positive words and more posts containing negative words. The opposite pattern occurred when negative emotional content was reduced.

But the effect sizes in the study were very small – just a few percentage points in terms of changes in the positive or negative terms used by individual users.

 

Where did the story come from?

The study was carried out by researchers from the University of California and Cornell University in the US. Funding sources were not reported, but it would be fair to assume it was funded by Facebook.

It was published in the peer-reviewed open access journal PNAS, so it is available to read online.

The story was picked up widely in the UK media, with most focusing on the ethical aspects of the study.

Some of the reporting was a little over the top, such as the Mail Online's claim that, "Facebook made users depressed". Adding a few extra negative words to your status update is not the same as being clinically depressed.

In reaction to the widespread criticism of the study, Facebook issued a statement saying that the company "never meant to upset anyone".

 

What kind of research was this?

This was an experimental study among a group of people who use the social networking site Facebook. The researchers were interested in seeing whether "emotional contagion" can occur outside of direct personal interactions.

They did this by reducing the amount of emotional content in Facebook's newsfeed function. This contains posts from people someone has agreed to become friends with on the site.

According to the researchers, what content is shown or omitted in the newsfeed is determined by a ranking algorithm Facebook uses to show, as the researchers put it, "the content they will find most relevant and engaging".

 

What did the research involve?

This experiment manipulated the extent to which 689,003 people were exposed to emotional content in their newsfeed on Facebook during one week in January 2012. This was designed to test whether exposure to other people's emotions through the newsfeed subsequently caused people to change their own posting behaviour.

The researchers were particularly interested in seeing whether exposure to certain tones of emotional content caused people to post similar emotional content – for example, whether people were more likely to post negative content if they had been exposed to negative emotional content.

According to the researchers, people who viewed Facebook in English were eligible for selection in the experiment, and participants were selected at random.

Two experiments were carried out:

  • exposure to positive emotional content in the newsfeed was reduced
  • exposure to negative emotional content in the newsfeed was reduced

The researchers report that each of these experiments had a control condition in which a similar number of posts in a person's newsfeed were omitted at random, without respect to their emotional content.

When a user loaded their newsfeed on Facebook, posts that contained positive or negative emotional content had a 10-90% chance of being omitted for that specific viewing, but remained visible on a person's profile.

Posts were determined to be either positive or negative if they contained at least one positive or negative word, as defined by word-counting software called Linguistic Inquiry and Word Count.

The researchers say use of this software was consistent with Facebook's data use policy, which all users agree to prior to creating an account on the site. Strictly speaking, this constitutes informed consent for the purposes of this research.

They then looked at the percentage of positive or negative words in people's own status updates, and compared each emotional condition to its control group.

The researchers hypothesised that if emotional contagion has an effect through social networks, people in the positively reduced condition should be less positive compared with their control, and vice versa.

They also tested whether the opposite emotion was affected to see if people in the positively reduced condition expressed increased negativity, and vice versa. 

 

What were the basic results?

Of the posts manipulated, 22.4% contained negative words and 46.8% contained positive words. More than 3 million posts were analysed, containing more than 122 million words, of which 4 million were positive (3.6%) and 1.8 million were negative (1.6%).

The researchers say the emotional expression of the participants did not differ in the week prior to the experiment taking place.

The main findings from this study were that:

  • when positive emotional content was reduced in a person's newsfeed, people subsequently produced fewer posts containing positive words and more posts containing negative words
  • when negative emotional content was reduced in a person's newsfeed, the opposite pattern occurred

Omitting positive and negative emotional content from a person's newsfeed was found to significantly reduce the number of words the person subsequently produced. This effect was greater when positive words were omitted.

The researchers concluded that this finding was a withdrawal effect, meaning that people who were exposed to fewer emotional posts (positive or negative) in their newsfeed were less expressive overall on the following days.

They say these results demonstrate emotional contagion, and that the emotions expressed by friends through online social networks therefore influence our moods.

 

How did the researchers interpret the results?

The researchers concluded that their results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion through social media.

They also say their work suggests that, in contrast to prevailing assumptions, in-person interaction and non-verbal cues are not strictly necessary for emotional contagion, and that the observation of other people's positive experiences constitutes a positive experience.

 

Conclusion

Overall, despite its interesting nature, this study provides limited evidence of associations between emotions expressed through the social networking site Facebook and the emotional tone of a person's subsequent posts on the same site.

But there are some important limitations to consider when interpreting these findings, namely that the effect sizes in the study were very small (as the authors note). Also, the words people choose to use when they post a status update may not accurately reflect their general emotional state. 

It is also possible that factors other than what people saw in their newsfeed contributed to their subsequent posts, rather than being directly linked to the posts they had just seen.

Probably of greater interest is the subsequent controversy the study has generated. Many people have been shocked that Facebook can filter a person's newsfeed, although this has been common practice for years. As Facebook states, this is often done to show users "the content they will find most relevant and engaging".

It is important to remember that Facebook is not a charity or a public service – it is a commercial enterprise with the primary aim of making a profit.

While social networking can be a positive and engaging experience for some, connecting with other people in the real world has been shown to improve our wellbeing.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Facebook made users depressed in secret research: Site deleted positive comments from friends. Mail Online, June 30 2014

Facebook deliberately made people sad. This ought to be the final straw. The Guardian, June 30 2014

Facebook emotion study breached ethical guidelines, researchers say. The Guardian, June 30 2014

Facebook emotion experiment sparks criticism. BBC News, June 30 2014

Facebook defends secretly manipulating users: Experiments 'improved service'. The Independent, June 30 2014

Facebook Totally Screwed With a Bunch of People in the Name of Science. Time, June 28 2014

Did Facebook and PNAS violate human research protections in an unethical experiment? Science-Based Medicine, June 30 2014

Links To Science

Kramer ADI, Guillory JE, Hancock JT. Experimental evidence of massive-scale emotional contagion through social networks. PNAS. Published online June 17 2014

Categories: Medical News

Kids who know their fast food logos 'grow up fat'

Medical News - Tue, 07/01/2014 - 14:45

“Children who recognise fast food brands are more likely to be obese,” the Mail Online reports. The headlines are based on a US study that included two separate samples of children aged three to five years; the first contained 69 children and the second contained 75.

In both studies, the parents were questioned about their child’s TV viewing and physical activity levels.

The children themselves were asked to complete a picture collage designed to assess “brand recognition” of four major brands: McDonald’s, Burger King, Coca-Cola and Pepsi.

In the first study, they also had to assess two crisp brands (Fritos and Doritos) and two breakfast cereals (Lucky Charms and Trix). In the second study, they had to assess two sweet brands (M&Ms and Jelly Belly) and two different breakfast cereals (Froot Loops and Fruity Pebbles).

The researchers then looked at how these responses were associated with child body mass index (BMI).

In both groups, increased brand knowledge was significantly associated with increased BMI.

However, this study has many limitations, such as its small sample size and reliance on self-reporting.

Despite this, the study still makes for interesting reading. Better understanding about the influences on child consumption patterns may help towards developing effective measures to target the growing obesity epidemic.

 

Where did the story come from?

The study was carried out by researchers from the University of Oregon, Michigan State University and Ann Arbor Public Schools Preschool and Family Center, in the US. No sources of financial support are reported. The study was published in the peer-reviewed medical journal Appetite.

The Mail Online’s reporting of the study is accurate, but it does not consider the wider limitations of this very small study and its limited analysis.

The news website also mentions Kentucky Fried Chicken (KFC), despite this brand not being assessed during the two studies.

 

What kind of research was this?

This was a cross-sectional analysis, using data from two small studies of young children, assessing their knowledge of brands high in fat, sugar and salt. Researchers also asked parents about their children’s TV viewing habits and physical activity levels. They then looked at how these were associated with the children’s BMI.

The researchers say previous research has shown that older children and teenagers who are obese were usually overweight or obese in nursery school.

They discuss how understanding the way in which a child’s palate develops with exposure to calorie-dense, nutrient-poor foods may contribute to an understanding of how food consumption patterns in early childhood influence weight. The researchers also discuss the role of influences such as brand logo recognition, activity patterns and television viewing patterns (e.g. “mindless” eating in front of the TV). 

The present research aimed to address three research questions:

  • Does exposure to commercial television significantly influence pre-school children’s BMI scores?
  • Does knowledge of packaged food and beverage brands significantly influence pre-school children’s BMI scores?
  • Does the amount of daily physical activity counteract the effect of brand knowledge or commercial television exposure on pre-school children’s BMI scores?

Understanding these influences and patterns may help develop measures to tackle obesity.

 

What did the research involve?

The researchers’ questions were addressed in two separate studies.

Study one

The first study involved 69 children (34 boys and 35 girls) aged three to five years, as well as one parent of each child. The sample contained people of a diverse ethnic mix. Parents were asked how many hours a week their child spends watching commercial TV and non-commercial TV (e.g. DVDs), and how many days a week their child engages in 30 minutes or more of physical activity.

The brand knowledge task then involved asking children to sort picture cards to create collages that showed their knowledge of different food and drink brands, and which brands were competitors of each other. The task included four groups: fast food (McDonald’s “versus” Burger King), soft drinks (Coca-Cola “versus” Pepsi), crisps (Fritos “versus” Doritos) and breakfast cereal (Lucky Charms “versus” Trix).

Their results for each of the four food groups were scored on a scale from 0 to 18, with higher scores indicating more brand knowledge.

They used a statistical model to see how age and gender-specific BMI correlated with their responses.
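BMI itself is simply weight in kilograms divided by height in metres squared; for children, the raw figure is then converted into an age- and sex-specific score against reference growth charts (that conversion needs reference data and is omitted here). A minimal sketch of the raw calculation, with an invented example:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Invented example: a child weighing 18 kg who is 1.05 m tall.
print(round(bmi(18, 1.05), 1))  # → 16.3
```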

Study two

This study included 75 children (40 boys and 35 girls), also aged three to five, as well as one parent of each child. Again, the sample contained people of a diverse ethnic mix. Parents were asked the same questions on TV viewing and physical activity. Children were asked the same brand knowledge questions for fast food and soft drinks, but two different trials were added – two types of sweets (M&Ms and Jelly Belly) and two cereals (Froot Loops and Fruity Pebbles).

They again looked at the associations with age and gender-specific BMI.

 

What were the basic results?

Study one

In study one, most participants (60%) were of a normal weight. The average child brand knowledge score across the four food groups was 13.

Brand knowledge was significantly associated with BMI. As brand knowledge increased, so did BMI. Brand knowledge was said to account for 8.4% of the variance in BMI scores. There was no link between TV viewing and BMI; however, there was a significant link between physical activity and BMI. As physical activity increased, BMI decreased. Physical activity was said to account for an even greater proportion of variation in BMI scores than brand knowledge – 63.2%.

In fact, when the model took brand knowledge, TV viewing and physical activity into account, the association between brand knowledge and BMI was no longer statistically significant.

When looking at associations with being overweight or obese, increased physical activity significantly reduced the risk (by 58%) that the child would be overweight or obese. TV viewing and brand knowledge showed no significant association with being overweight or obese.

Study two

The average brand knowledge score for this sample was also 13, and most of the sample (68%) were of a normal weight. Replicating the findings of study one, TV viewing was not significantly associated with BMI, but brand knowledge was – this time accounting for 16.5% of the variance in BMI scores.

However, this time there was no significant association with physical activity: the second study replicated the findings for brand knowledge, but not for physical activity. In this study, increased brand knowledge significantly increased the risk of a child being overweight or obese (by about a third).

 

How did the researchers interpret the results?

The researchers conclude that across the two studies, a child’s brand knowledge significantly predicted their BMI, even when adjusting for age, gender and extent of TV viewing.

They comment on “the success of physical activity to counter the influence of brand knowledge on BMI in the first study". However, they then say that "the failure to replicate this finding in the second study suggested that exercise was not a robust predictor of child BMI”.

 

Conclusion

This research includes two small studies of young children, with the aim of assessing their knowledge of brands high in fat, salt and sugar, as well as their TV viewing and physical activity levels. They then looked at how these factors were linked to their BMI.

In both samples, increased brand knowledge was significantly associated with increased BMI, though the second study found a stronger association with brand knowledge.

Interestingly though, the first study found that physical activity had a much greater effect on BMI, and mitigated all of the effects of brand knowledge.

In short, brand knowledge was predictive of BMI, but this effect disappeared if the child engaged in frequent physical activity.

The second study did not find a link with physical activity; the researchers have said this supports previous study findings that physical activity may not be sufficient to reduce BMI in children.

However, concluding that physical activity plays only a limited role in reducing BMI seems a strong claim to make based on this very small study, which has numerous limitations: 

  • It is plausible that a child’s increased knowledge of foods and drinks that contain high levels of fat, salt and sugar may be associated with their higher consumption, as well as increased BMI. However, this study is only cross-sectional, so can only demonstrate associations. It cannot prove that the child’s brand knowledge is directly linked to their current BMI.
  • The study includes only two separate groups of children, and the majority of children in each group were of a normal weight. Examining associations in the small proportion of children with overweight or obese BMIs reduces the reliability of any associations found.
  • All measures on child physical activity and TV viewing were through parental self-reporting, which opens up the possibility of inaccurate estimations.
  • The children were only asked to perform a task assessing their knowledge of different and competing fast food, soft drink, cereal, sweet and crisp brands. This gives no indication of how often, or in what quantity, they actually consume these particular foods and drinks.
  • As noted, these are only two very small groups of US children aged three to five years. The samples did benefit from a broad ethnic mix; nevertheless, larger samples of children of different ages and from different geographical regions could produce different results.

Understanding the influences on child consumption patterns may help develop measures to target the growing obesity epidemic and its associated health problems. However, this single small study answers few questions on its own. It will contribute to the wider literature on childhood overweight and obesity and its influences, which, considered as a whole, may help to find new angles for intervention.

It is most likely that a child’s BMI is going to be influenced by a combination of both their diet and their physical activity levels.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Can your child spot the golden arches of McDonald's or the face of KFC? Children who recognise fast-food brands are more likely to be obese, study finds. Mail Online, June 30 2014

Links To Science

Cornwell TB, McAlister AR, Polmear-Swendris N. Children's knowledge of packaged and fast food brands and their BMI. Why the relationship matters for policy makers. Appetite. Published online June 24 2014

Categories: Medical News

'Supercooling' may extend life of transplant organs

Medical News - Mon, 06/30/2014 - 16:30

BBC News reports on a new method to keep donated organs fresher for longer: “supercooling”.

US researchers are developing a new technique for the longer term preservation of human organs before transplantation.

Current methods of organ preservation can keep an organ viable for transplant for up to around 12 hours once it has been removed from the body. This new technique has the potential to extend this to three days.

The researchers tested the technique using rat livers. They cooled the livers to temperatures of between 0C and -6C while passing nutritional preserving fluids through them to help keep the organs viable.

When rats were transplanted with a liver that had been preserved in this way for 72 hours, they all survived to three months, showing no signs of liver failure.

The number of people needing an organ transplant always exceeds the number of suitable donors available. A technique that could preserve organs for longer could therefore allow them to be transported across greater distances to suitable recipients.

It is hoped the technique could work in humans, though due to the size and complexity of human organs, this may turn out not to be the case.

 

Where did the story come from?

The study was carried out by researchers from Harvard Medical School, Boston; Rutgers University, Piscataway, New Jersey; and University Medical Center, Utrecht, the Netherlands. Funding was provided by the US National Institutes of Health, and the Shriners Hospitals for Children.

The study was published in the peer-reviewed medical journal Nature Medicine.

The BBC's reporting on the study is of a good quality and includes useful discussion from the researchers as well as independent experts about the new development.

Dr Rosemarie Hunziker, from the US National Institute of Biomedical Imaging and Bioengineering, is quoted as saying “It is exciting to see such an achievement in small animals by recombining and optimising existing technology. The longer we are able to store donated organs, the better the chance the patient will find the best match possible, and doctors and patients can be fully prepared for surgery. This is a critically important step in advancing the practice of organ storage for transplantation.”

 

What kind of research was this?

This was laboratory research which tested a new “supercooling” technique to preserve the life of donated organs. The current study tested the technique using rat livers.

The researchers explain that an increasing number of people are waiting for organ transplants, but there is a serious shortage of donor organs. When organs are removed from the body their cells immediately begin to die, meaning they need to be transplanted into the recipient as soon as possible to give the best chance of a successful transplant.

The researchers report how current preservation solutions and cooling methods for humans allow organs to remain viable for up to 12 hours.

Methods that could increase the preservation time to days could potentially allow for sharing of donor organs across much greater geographical distances to reach suitably matched recipients.

This could greatly help the problem of the shortage of donor organs. For example, it could be possible to transport an organ with a rare tissue type from Australia to the UK.

The researchers say cryopreservation has so far been successful for various cell types and some sample tissues. However, long-term storage of vascularised solid organs (organs, like the liver, with a complex network of blood vessels) has proved difficult, because freezing and subsequent rewarming damage their intricate anatomy.

The “supercooling” technique tested here involves cooling to subzero temperatures of 0C to -6C. Previous studies have demonstrated cooling organs to subzero temperatures, but have yet to demonstrate that this can result in the long-term survival of the organ following transplantation. The current research expanded on this by supercooling to subzero temperatures while additionally using a machine to perfuse the organ with a nutritional preserving solution to support it during storage.

 

What did the research involve?

The researchers used livers from male rats. The organs were surgically removed, and perfusion and supercooling were carried out using a technique called subnormothermic machine perfusion (SNMP).

This makes use of a machine that carefully cools the tissue to below body temperature, and at the same time circulates a preserving solution through the tissue.

The machine first perfused the organ at room temperature (21C) with a nutritional preserving solution containing various substances (such as antibiotics, steroids, proteins and anti-clotting chemicals). There were various stages of recirculation and oxygenation. After one hour of perfusion, the temperature of the perfusing solution was gradually lowered by 1C every minute until the temperature of 4C was reached. At this point the liver was again briefly flushed through with preserving solution and then transferred to a sterile bag filled with the same solution and moved to a freezer, which gradually cooled at a controlled rate until the temperature of −6C was reached.

The liver was kept at this temperature for up to 96 hours (four days). The organ was then gradually rewarmed: the temperature was raised to 4C, and the organ was perfused using the SNMP machine for a further three hours. During this time the researchers took various measurements, including the organ’s weight, liver enzymes, dissolved oxygen and carbon dioxide levels, and bile flow.
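The staged protocol described above can be summarised as a simple timeline. The sketch below (Python, illustrative only) uses the durations and temperatures given in the article; the durations of the freezer-cooling and rewarming steps are not stated in the article and are assumptions:

```python
# Illustrative timeline of the supercooling protocol described above.
# Durations marked "assumed" are not given in the article.

def supercooling_timeline():
    """Return (stage, duration in minutes) pairs for a 72-hour preservation run."""
    return [
        ("Machine perfusion at 21C (room temperature)", 60),   # 1 hour
        ("Cool perfusate from 21C to 4C at 1C per minute", 17),  # 21C - 4C = 17 minutes
        ("Flush, bag, and cool in freezer to -6C", 30),        # assumed duration
        ("Hold at -6C (supercooled storage)", 72 * 60),        # 72 hours
        ("Rewarm to 4C", 30),                                  # assumed duration
        ("Recovery perfusion (SNMP) before transplant", 3 * 60),  # 3 hours
    ]

total_minutes = sum(minutes for _, minutes in supercooling_timeline())
print(f"Total protocol time: {total_minutes / 60:.1f} hours")
# prints "Total protocol time: 77.3 hours"
```

Almost all of the protocol time is the storage hold itself; the active perfusion, cooling and rewarming stages add only a few hours around it.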

The liver was then transplanted into a recipient rat, and the rat’s blood samples were analysed for one month. They then continued to observe the clinical condition of the rat for up to three months, particularly looking at clinical signs of liver cirrhosis and overall survival.

They compared the results with those when rats were transplanted with livers that were kept for the same duration using current preservation techniques.

 

What were the basic results?

All the rats transplanted with supercooled livers that had been preserved for 72 hours survived to three months, and showed no signs of liver failure. In comparison, when rats were transplanted with livers that had been kept for three days using standard preservation techniques, all of them died from liver failure within the first two days.

Using standard preservation techniques, the same survival results were only seen if the rat livers were preserved for no more than 24 hours – the supercooling technique therefore tripled the storage time.

Increasing the supercooling duration to 96 hours, however, resulted in only 58% rat survival, which the researchers say is comparable to the 50% survival seen after 48 hours of standard preservation.

Control rats transplanted with livers that had been cooled to the same subzero temperatures, but not subjected to the full sequence and duration of perfusion with the nutritional solution, also did not survive.

 

How did the researchers interpret the results?

The researchers say that as far as they are aware “supercooling is the first preservation technique capable of rendering livers transplantable after four days of storage”.

 

Conclusion

When organs are removed from the body their cells immediately begin to die, meaning they need to be transplanted into the recipient as soon as possible to give the best chance of a successful transplant. The number of people needing an organ transplant always outnumbers the number of suitably matched donors available. A technique that could preserve organs for longer, potentially allowing them to be transported across greater distances to suitable recipients, could therefore be, as the researchers say, a great breakthrough.

This is especially important as it can often be difficult to find a suitably matched donor (to prevent the body from rejecting the donation, the tissue type has to be as similar as possible), but if the geographical availability of donors were increased, then this could increase the likelihood of finding a matched donor. 

This research demonstrated the technique of preserving with a nutritional solution and then supercooling to subzero temperatures of 0C to -6C. When rats were transplanted with a liver that had been preserved in this way for 72 hours, all of them survived to three months, showing no signs of liver failure. This triples the preservation time from 24 hours, which is the maximum that can be successfully achieved using standard techniques in rats.

The 100% rat survival was limited to 72 hours of storage: when the storage time was extended by one day, survival fell to 58%. However, as the researchers say, continued study of different additives for the preserving solution, or variations in the protocol, could yield further improvements in future experiments.

The researchers also importantly highlight that this is only a proof-of-concept study in small animals. As they say, the robustness and preservation properties of human liver cells differ from those of rodents.

Though their research with the rat livers was successful, with no signs of liver failure when stored for three days, they need to see whether the same results can be achieved with larger animals, before they can test with human livers.

They also need to perform longer follow-up to see if survival and liver function are maintained beyond three months.

The current study also used healthy livers surgically removed from living, healthy rats. The researchers will also need to consider organs retrieved from deceased donors, which will already have been subjected to a period of oxygen starvation.

They also need to see whether the technique can be extended to organs other than the liver.

Overall, this is promising early research, which paves the way for much further study.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Organ transplants: 'Supercooling' keeps organs fresh. BBC News, June 29 2014

Links To Science

Berendsen TA, Bruinsma BG, Puts CF, et al. Supercooling enables long-term transplantation survival following 4 days of liver preservation. Nature Medicine. Published online June 29 2014

Categories: Medical News

Part of the brain for 'hangover guilt' identified

Medical News - Mon, 06/30/2014 - 14:26

"Scientists pinpoint the part of the brain that tells us 'never again'," the Mail Online reports. New research in rats suggests part of the brain called the lateral habenula (LHb) helps us learn lessons from bad experiences after consuming too much alcohol.

The LHb is believed to play some role in preventing us repeating something that previously resulted in a negative outcome, such as getting extremely drunk and waking up with a dreadful hangover. But some people may lack activity in this part of the brain.

This study found that surgically damaging the LHb removed its inhibitory effect on alcohol consumption. When given free access to alcohol, rats without damage to this part of the brain had a high alcohol intake initially, but this then tailed off. Rats with LHb damage showed a continuously increasing rate of ethanol consumption.

A similar mechanism may play a role in people with alcohol misuse problems. As a result of decreased LHb activity, they may fail to "learn" from alcohol-associated adverse events and continue to misuse the drug. This may explain why many people who experience the negative effects of alcohol continue to drink.

But intriguing as this hypothesis is, it remains unproven. The research also has no direct implications for humans at this stage, such as new ways to prevent and treat alcohol dependence.

 

Where did the story come from?

The study was carried out by researchers from the University of Utah School of Medicine in the US, and was funded by the US National Institutes of Health, the March of Dimes Foundation and the University of Utah.

It was published in the peer-reviewed scientific journal PLOS One. PLOS One is an open access journal, so the article is free to read online.

The Mail Online's reporting of the study is accurate.

 

What kind of research was this?

This was animal research that aimed to investigate the role of a particular region of the brain – the lateral habenula (LHb) – in conditioning our response to alcohol.

The LHb has been implicated as a brain region key in learning from adverse outcomes. It is believed to play a role in stopping us doing things if we had a negative experience when we did this previously.

As the researchers say, the positive effects of drugs are known to motivate further drug-seeking behaviours. But it is also known that the adverse effects of drugs can limit further intake.

Previous studies pointed towards the LHb being involved in reducing motivation to consume nicotine and cocaine.

Ethanol (alcohol) is well known to have downsides, including impaired coordination and hangovers.

Studies have shown that rats with sensitivity to these adverse effects decrease their voluntary intake of alcohol.

To further examine the role of the LHb in learning driven by adverse outcomes, the researchers studied voluntary ethanol consumption in rats with and without lesions (damage) in the LHb. 

 

What did the research involve?

The research involved 136 male rats. The rats were anaesthetised and half were given surgical damage to the LHb by passing an electrical current through it. The remainder of the rats received a similar surgery, but no electrical current was passed (a "sham" procedure).

The rats were given one week to recover before being included in various experiments. The researchers conducted various experiments looking at the role of the LHb in alcohol consumption.

In one experiment, sham and lesion rats (17 in each group) were given intermittent 24-hour access to two bottles over eight weeks. One bottle contained water and one contained a solution of water with ethanol (alcohol) at a concentration of 20%. On some days, they were given only water and no ethanol.

The researchers weighed the water and ethanol bottles to measure intake and preference. After eight weeks, they looked at various effects in subsets of rats, including looking at the effect of subjecting the rats to a long period of alcohol abstinence before restoring their alcohol intake.
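The article does not describe exactly how bottle weights were converted into the intake figures reported later (grams of ethanol per kilogram of body weight per 24 hours). A hypothetical sketch of such a conversion, assuming a 20% v/v solution, ethanol's density of 0.789 g/ml, and a water-like density for the dilute solution:

```python
ETHANOL_DENSITY_G_PER_ML = 0.789  # grams per millilitre of pure ethanol

def ethanol_intake_g_per_kg(bottle_loss_g, ethanol_fraction, rat_weight_kg):
    """Estimate ethanol consumed (g per kg body weight) from bottle weight loss.

    Assumes the dilute solution's density is close to water's (1 g/ml),
    so the weight lost approximates the volume drunk in millilitres.
    """
    volume_ml = bottle_loss_g  # ~1 g per ml for a dilute solution
    ethanol_g = volume_ml * ethanol_fraction * ETHANOL_DENSITY_G_PER_ML
    return ethanol_g / rat_weight_kg

# A 0.4 kg rat drinking 15 g of a 20% solution over 24 hours:
print(ethanol_intake_g_per_kg(15, 0.20, 0.4))  # ~5.92 g/kg/24h
```

On these assumptions, a modest daily intake from the bottle translates into the single-digit g/kg figures the results report.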

Another group of sham and lesion rats (10 in each group) were given intermittent 24-hour ethanol access for eight weeks. The researchers then examined the effects of allowing the rats access to self-administer ethanol by pressing a lever. After a period of free self-delivery, the researchers tested what happened when pressing the lever no longer gave the rats alcohol.   

As a final test, in a large group of 37 sham and 42 lesion rats, the researchers examined conditioned taste aversion – where the adverse effects of one fluid condition an animal to avoid fluids with a similar taste, even if those fluids do not have the same effect.

These rats were housed with free access to food and water and a sugar solution. They were then given ethanol, and the subsequent effect on their consumption of sugar solution was measured.

 

What were the basic results?

The researchers found that intermittent 24-hour ethanol access resulted in a steady increase in ethanol consumption in both sham and LHb lesion rats.

However, after one week of ethanol access, consumption in the lesion rats increased more than in the sham rats and reached higher levels – 6g per kg per 24 hours, compared with 4g per kg per 24 hours in the sham rats.

The rats with the LHb lesions continued to show higher intake than the sham rats when they were not given alcohol for a period before access was then reinstated.

After the eight weeks of intermittent ethanol access, the researchers found the LHb lesion rats pressed the lever to get alcohol significantly more than the sham rats.

When the lever presses no longer rewarded them with ethanol, the lesion rats still pressed the lever more than the sham rats on the first day, but not after that.

In the final test of conditioned taste aversion, after giving rats ethanol, those with no LHb damage also showed an aversion to drinking the sugary solution, while those with LHb damage did not show aversion.

 

How did the researchers interpret the results?

The researchers concluded that their results show that the lateral habenula (LHb) plays an important role in controlling ethanol-directed behaviours.

 

Conclusion

This was animal research that aimed to investigate the role of the lateral habenula (LHb) in conditioning responses to alcohol.

The LHb is a brain region key in learning driven by adverse outcomes. It is believed to play a role in stopping us repeating actions that have previously resulted in negative outcomes.

In this study in rats, surgical damage to the LHb stopped the rats learning to moderate their alcohol consumption.

When given free and open access to ethanol, rats with LHb damage showed continuously increasing rates of ethanol consumption and reached higher blood alcohol levels.

In comparison, rats without damage to this brain region had a high intake initially, but this then tailed off.

The researchers also found damage to the LHb reduced conditioned taste aversion – after being given ethanol, rats without damage to this region had an aversion to drinking a sugar solution, but the rats with LHb damage did not.

Overall, this rat study supports the belief that the LHb may be involved in learning driven by adverse outcomes. But it isn't clear what negative effects the rats could have been having – for example, whether this was linked to them having anything like a hangover after drinking alcohol.

The direct implications for humans are currently very limited. It is plausible that some people have an underperforming LHb. This could lead to self-destructive patterns of behaviour, despite a previous history of adverse events such as hangovers.

Even if this highly speculative hypothesis turns out to be true, it is currently unclear what treatments this could lead to.

Current treatments for alcohol misuse include medications that can help relieve cravings, as well as counselling – both one-to-one and in groups.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Scientists pinpoint the part of the brain that tells us 'never again' while suffering a hangover. Mail Online, June 27 2014

Links To Science

Haack AK, Sheth C, Schwager AL, et al. Lesions of the Lateral Habenula Increase Voluntary Ethanol Consumption and Operant Self-Administration, Block Yohimbine-Induced Reinstatement of Ethanol Seeking, and Attenuate Ethanol-Induced Conditioned Taste Aversion. PLOS One. Published online April 2 2014

Categories: Medical News