Understanding Anorexia – Promoting Life through Prevention

An essay by Caitlin Lloyd.

Emma was an anxious child, always worrying. At thirteen, her anxiety became centered on interactions at school – she was terrified of being judged negatively by classmates. Around this time Emma began dieting, intending to lose just a small amount of weight. It turned out she could do so relatively easily, and enjoyed the sense of achievement resulting from the numbers on the scale going down. Her diet continued, becoming more and more extreme. Emma’s weight plummeted.

Eight years later, having had two inpatient hospital admissions, Emma maintains a dangerously low body weight, achieved by setting strict rules around eating. A daily calorie limit is followed, and foods containing fat and sugar avoided. Eating takes place only at certain times, and each mouthful must be chewed ten times before swallowing. Any deviation from these rules, and the day is ruined.

Emma retook two years at school, falling behind her peers, but secured a place at Durham University to study mathematics. It is difficult to concentrate on her work, though, because all Emma can think about is food: what she has eaten, and what she will eat. Her focus on food makes it hard to maintain friendships, and Emma has few. Emma spends university holidays with her family, the time dominated by arguments over food.

Sometimes Emma wishes things were different. But that means eating more, which feels impossible. Deviating from the rules makes Emma unbearably anxious. No amount of support can dispel the intense fear of becoming fat, or feelings of self-disgust that accompany weight-gain.

Emma is fictional but typical of someone with anorexia nervosa, an eating disorder characterised by persistent starvation in the context of a low weight and fear of weight-gain. In the UK it is estimated that as many as one in 25 women will experience anorexia in the course of their lifetime. Men develop anorexia too; roughly one in ten people with anorexia is male.

Anorexia usually develops during adolescence, and has many adverse and long-lasting physical and mental health consequences. Starvation compromises the function of almost all major organ systems, and feelings of despair increase the risk of suicide; anorexia has the highest death rate of any mental health disorder.

Full recovery from anorexia is a lengthy process, and unfortunately not common. Treatments exist, but none is consistently effective. Fewer than half of those diagnosed with anorexia make a full recovery, and relapse rates are high – around 30-40% of people fall back into the disorder’s grip following initial recovery. For some, weight-gain is sustained, but a strict diet and an overconcern with eating and weight remain, severely impacting quality of life.

The difficulty treating anorexia makes effective prevention vital. For this we need to target the factors that cause anorexia, requiring knowledge of what those factors are. My research investigates whether anxiety disorders play a causal role in anorexia development, to help us understand whether it would be beneficial to address anxiety in young people to prevent eating disorders.

It has long been suggested that the starvation of anorexia reduces anxiety. This would make dieting helpful (in this narrow sense) to those experiencing anxiety symptoms, encouraging the dieting to continue. Anxiety disorders and anorexia often co-occur. But correlation is not causation, and determining cause-and-effect is notoriously challenging.

As an example, for anxiety to cause anorexia development, anxiety must precede anorexia. Existing findings support this; however, studies have tended to ask people with anorexia to recall the time before their illness developed. Experiencing anorexia may affect memory recall: to try to explain how their anorexia developed, someone with anorexia might believe themselves to have been more anxious in childhood than they actually were. In this case the conclusion that anxiety causes anorexia may be invalid. Many sources of potential error exist in research, meaning that many findings could be inaccurate, at least to some degree.

Different research methods have different strengths and limitations, and are thus prone to different biases. This can be used to our advantage: if findings across studies using different research methods point to the same conclusion, we can be more confident the conclusion is correct. I am using a variety of research methods, each designed to minimise the potential for erroneous conclusions, to determine the role of anxiety in anorexia. If a causal role is supported across the different studies, trialling interventions designed to reduce anxiety for eating disorder prevention is encouraged. If not, the search for other factors to target for improved eating disorder prevention continues.

We are at an early stage in understanding anorexia, but we do know that many people with the illness become ill at a young age, with their whole lives ahead – like Emma. My research matters because it aims to stop people losing their lives, and quality of life, to anorexia.


Does schizophrenia influence cannabis use? How to report the influence of disease liability on outcomes in Mendelian randomization studies

The recent Nature Neuroscience paper by Pasman et al entitled “GWAS of lifetime cannabis use reveals new risk loci, genetic overlap with psychiatric traits, and a causal influence of schizophrenia” (see below) provides important and novel insights into the aetiology of cannabis use and its relationship with mental health. However – in its title and elsewhere – it subtly misrepresents what the Mendelian randomization (MR) [1] analyses it presents actually show. MR analyses are increasingly being reported as demonstrating the effect of a disease (in this case schizophrenia) on an outcome, using genome-wide significant variants associated with risk of the disease as instruments. The outcome can be a behavior (such as cannabis smoking in the present paper), a measured trait, or a second disease.

MR analyses are often carried out using summary data, where the exposure and outcome GWAS come from separate samples. In such analyses the correct interpretation is not in terms of apparent effects of the disease itself, but of the phenotypic effects of genetic liability to that disease. Typically, only a tiny proportion of participants in the outcome GWAS datasets will actually have experienced the disease – in this case particularly so, given the low participation rate of people with schizophrenia in most studies in the general population. Indeed, MR studies can be carried out in datasets where there are no individuals with the outcome (e.g. datasets collected amongst an age group in whom the outcome will have occurred very rarely, if ever). Such analyses may reveal apparent, but impossible, effects of the disease on outcome phenotypes. To use MR analyses to investigate the causal effect of a disease on outcomes would require individual-level data with recorded disease events and subsequent follow-up. Analytical approaches to such data have, as yet, not been published.

The widespread misrepresentations of such MR studies have important implications, not just in terms of how the results are interpreted, but also how they are applied. One valuable contribution of MR studies is that they can identify modifiable exposures that can be the target of interventions. If it is recognized that what is being shown is an effect of liability to disease on an outcome, then interventions targeting the mechanisms of this liability would have benefits even in individuals who are unlikely to go on to develop the disease, including those at low risk of the disease for other reasons. For example, targeting breast cancer liability may have benefits in men if this liability influenced diseases that are common in men. If, however, it is the disease itself which has the effect, then the interventions would be targeted at those likely to develop disease: only women, in the case of breast cancer liability. It may be that schizophrenia does indeed lead to cannabis use, but the analyses reported by Pasman et al show only that liability to schizophrenia leads to cannabis use.

The point is a subtle one – we have both used similar language in the past in articles reporting MR analyses on which we are authors. Indeed, one of us (MM) was an author on the Pasman et al paper (and contributed principally to the MR analyses and their interpretation) but failed to suggest the correct phrasing. Fortunately, the title and discussion will be changed to address this problem so that the enduring version of the paper captures this important nuance (unfortunately, the original headline has already been repeated elsewhere [2]). However, it is a widespread and underappreciated point of interpretation in MR studies, and we feel that this presents a useful opportunity to highlight it. It also illustrates that methodologies, and the interpretation of the results they generate, continue to evolve, illustrating the need to interpret past work (including our own!) through the lens of current approaches.

[1] Davey Smith G, Ebrahim S. ‘Mendelian randomization’: can genetic epidemiology contribute to understanding environmental determinants of disease? Int J Epidemiol 2003;32:1-22.

[2] Andrae LC. Cannabis use and schizophrenia: Chicken or egg? Sci Transl Med 2018;10:eaav0342.

Can cognitive interventions change our perception from negative to positive, and might that be useful in treating depression?

By Sarah Peters

Have you ever walked away from a social interaction feeling uncomfortable or anxious? Maybe you felt the person you were talking to disliked you, or perhaps they said something negative and it was all you could remember about the interaction. We all occasionally focus on the negative rather than the positive, and sometimes ruminate over a negative event, but a consistent tendency to perceive even ambiguous or neutral words, faces, and interactions as negative (a negative bias), may play a causal role in the onset and rate of relapse in depression.

A growing field of psychological interventions known as cognitive bias modification (CBM) proposes that by modifying these negative biases it may be possible to intervene prior to the onset of depression, or reduce the risk of subsequent depressive episodes for individuals in remission. Given that worldwide access to proven psychological and pharmacological treatments for mood disorders is limited, and that in countries like the UK public treatment for depression is plagued by long waiting lists, high costs, side effects, and low overall response rates, there is a need for effective treatments which are inexpensive, and both quick and easy to deliver. We thought that CBM might hold promise here, so we ran a proof of principle trial for a newly developed CBM intervention that shifts the interpretation of faces from negative to positive (a demonstration version of the training procedure can be seen here). Proof of principle trials test an intervention in a non-patient sample, which is important to help us understand a technique’s potential prior to testing it in a clinical population – we need to have a good idea that an intervention is going to work before we give it to people seeking treatment!

In this study, we had two specific aims. Firstly, we aimed to replicate previous findings to confirm that this task could indeed shift the emotional interpretation of faces. Secondly, we were interested in whether this shift in interpretation would impact on clinically-relevant outcomes: a) self-reported mood symptoms, and b) a battery of mood-relevant cognitive tasks. Among these were self-report questionnaires of depressive and anxious symptoms, the interpretation of ambiguous scenarios, and an inventory of daily stressful events (e.g., did you “wait too long in a queue,” and “how much stress did this cause you on a scale of 0 to 7”). The cognitive tasks included a dot probe task to measure selective attention towards negative (versus neutral) emotional words, a motivation for rewards task which has been shown to measure anhedonia (the loss of pleasure in previously enjoyed activities), and a measure of stress-reactivity (whereby individuals complete a simple task under two conditions: safe and under stress). This final task was included because it is thought that the negative biases we were interested in modifying are more pronounced when an individual is under stress.

We collected all of our self-report and cognitive measures at baseline (prior to CBM), after which participants underwent eight sessions (in one week) of either CBM or a control version of the task (which does not shift emotional interpretation). We then collected all of our measures again (after CBM). In order to be as sure of our results as possible, we used a number of critical study design features. Our design, hypotheses, and statistical analyses were pre-registered online prior to collecting data (this meant that we couldn’t fish around in our data until we found something promising, then re-write our hypotheses to make that result seem stronger). We also powered our study to be able to detect an effect of our CBM procedure. This meant running a statistical calculation to ensure we had enough participants for any significant findings to be convincing, and potentially clinically useful. This told us we needed 104 individuals split evenly between groups. Finally, our study was randomised (participants were randomly allocated to the intervention group or the control group), controlled (one group underwent an identical “placebo” procedure), and double-blind (only an individual who played no role in recruitment or participant contact knew which group any one participant was in).
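For readers curious what such a power calculation looks like in practice, here is a minimal sketch using a standard normal approximation for a two-group comparison. The effect size used (Cohen's d ≈ 0.55) is an assumption chosen purely for illustration – the study's actual assumed effect size is not stated here – but with conventional settings (two-sided α = 0.05, 80% power) it lands close to the 52-per-group (104 total) figure reported.

```python
from statistics import NormalDist
import math

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided,
    independent-samples comparison of means, using the
    normal approximation: n = 2 * ((z_a + z_b) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # value for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical effect size of d = 0.55 (an assumption for illustration):
print(n_per_group(0.55))  # -> 52 per group, i.e. 104 in total
```

Dedicated statistics software uses the exact t-distribution rather than this normal approximation, so its answers can differ by a participant or two, but the logic is the same: smaller assumed effects demand larger samples.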

So, what did we actually find? While the intervention successfully shifted the interpretation of facial expressions (from negative to positive), there was only inconclusive evidence of improved mood and the CBM procedure failed to impact most measures. There was some evidence in our predicted direction that daily stressful events were perceived as less stressful by those in the intervention group post-CBM, and weaker evidence for decreased anhedonia in the intervention group. In an exploratory analysis, we also found some evidence that results in the stress-reactivity task were moderated by baseline anxiety scores – for this task, the effects of CBM were only seen in individuals who had higher baseline anxiety scores. However, exploratory findings like this need to be treated with caution.

Therefore, as is often the case in scientific research, our results were not entirely clear. However, there are a few limitations and directions for future research that might explain and help us to interpret our findings. Our proof of principle study only considered effects in healthy individuals. Although these individuals were clearly amenable to training, and may indeed have symptoms of depression or anxiety without a clinical diagnosis, our observation that more anxious individuals appeared to be more affected by the intervention warrants research in clinical populations. In fact, a reasonable parallel to the effects observed in this study may be working memory training, which does not transfer well to other cognitive operations in healthy samples, but shows promise as a tool for general cognitive improvement in impaired populations.

Future research is also needed to disambiguate the tentative self-report stress and cognitive anhedonia effects observed here. One possibility, for example, is that the 104 participants we recruited were not enough to detect an effect of transfer from CBM training to other measures (the size of which is unknown). Given the complexity of any mechanism through which a computerised task could shift the perception of faces and then influence behaviour, it is likely that a larger sample is necessary. While it could be argued that if such a large group of individuals is needed to detect an effect, that effect is likely too small to be clinically useful, we would argue that even tiny effects can indeed be meaningful (e.g., cancer intervention studies often identify very small effects which can have a meaningful impact at a population level).

Another explanation for our small effects is that while one week was long enough to induce a change in bias, it may not have been long enough to observe corresponding changes in mood. For instance, positive interpretation alone may not be enough – it may be that individuals need to go out into the world and use this new framework to have personal, positive experiences that gradually improve mood, and this process may take longer than one week.

Overall, this CBM procedure may have limited impact on clinically-relevant symptoms. However, the small effects observed still warrant future study in larger and clinical samples. Given the large impact and cost of mood disorders on the one hand, and the relatively low cost of providing CBM training on the other, clarifying whether even small effects exist is likely worthwhile. Even if this procedure fails to result in clinical improvement, documenting and understanding the different steps in going from basic scientific experimentation to intervening in clinical samples is crucial for both the scientific field and the general public. The current study is part of a body of research which should encourage all individuals who are directly or indirectly impacted by depression or other mood disorders. Novel approaches towards understanding, preventing, and treating these disorders are constantly being investigated, meaning that we can be hopeful for a reduction in the devastating impact they currently have in the not-so-distant future.

Read the published study here

Sarah Peters can be contacted via email at: s.peters@bristol.ac.uk