Psychiatric disorders: what’s the significance of non-random mating?

Hardly a week passes without the publication of a study reporting the identification of genetic variants associated with an increasing number of behavioural and psychiatric outcomes. This is partly driven by the growth of large international consortia of studies, as well as the release of data from very large individual studies such as UK Biobank. These consortia and large individual studies are now achieving the sample sizes necessary to detect the very small effects associated with common genetic variants.

We’ve known for some time that psychiatric disorders are under a degree of genetic influence, but one puzzle is why estimates of the heritability of these disorders (i.e., the proportion of variability in risk of a disorder that is due to genetic variation) differ across disorders. Another intriguing question is why there appears to be a high degree of genetic comorbidity across different disorders; that is, common genetic influences that relate to more than one disorder. One possible answer to both questions may lie in the degree of non-random mating by disorder.

Non-random mating refers to the tendency for partners to be more similar than we would expect by chance on any given trait of interest. This is straightforward to see for traits such as height and weight, but less obvious for traits such as personality. A recent study by Nordsletten and colleagues investigated the degree of non-random mating for psychiatric disorders, as well as a selection of non-psychiatric disorders for comparison purposes.

 

Methods

The researchers used data from three Swedish national registers, using unique personal identification numbers assigned at birth. The data were linked to the Swedish National Patient Register (NPR), which includes diagnostic information on all individuals admitted to a Swedish hospital and, since 2001, on outpatient consultations. Individuals with multiple diagnoses could appear as a “case” in each separate analysis of these different diagnoses.

Cases of schizophrenia, bipolar disorder, autism spectrum disorder, anorexia nervosa, substance abuse, attention deficit hyperactivity disorder (ADHD), obsessive compulsive disorder (OCD), major depressive disorder, social phobia, agoraphobia, and generalised anxiety disorder were identified using standard protocols. For comparison purposes, cases of Crohn’s disease, type 1 and type 2 diabetes, multiple sclerosis and rheumatoid arthritis were also identified.

For each case (i.e., individuals with a diagnosis), five population controls were identified, matched on age, sex and area of residence. Mating relationships were identified through records of individual marriages, and through records of individuals being the biological parent of a child. The use of birth of a child was intended to capture couples who remained unmarried. For each member of a mated case pair a comparison sample was again generated, with the constraint that these controls not have the diagnosis of interest.

First, the proportion of mated pairs in the full case and control samples was summarised. Correlations were calculated to evaluate the relationship between the diagnostic status of each individual in a couple, first within and then across disorders. Logistic regression was used to estimate the odds of any diagnosis in mates of cases relative to mates of controls. Finally, the odds of any diagnosis in mates was estimated, and the relationship between the number of different disorders in a case and the presence of any psychiatric diagnosis in their mate was explored.
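
To make this concrete, here is a minimal, purely illustrative sketch (in Python, with simulated data and hypothetical variable names – not the authors’ code or the register data) of the final step: a logistic regression estimating the odds of a diagnosis in the mates of cases relative to the mates of controls.

```python
# Toy sketch of the mate-pair analysis described above: one row per mated pair,
# regressing the mate's diagnostic status on the index person's case status.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 20_000
case = rng.integers(0, 2, size=n)            # hypothetical index-person case status
p_mate = np.where(case == 1, 0.06, 0.02)     # mates of cases given a higher diagnosis rate
mate_diagnosed = rng.binomial(1, p_mate)
pairs = pd.DataFrame({"case": case, "mate_diagnosed": mate_diagnosed})

model = smf.logit("mate_diagnosed ~ case", data=pairs).fit(disp=False)
print(f"OR for diagnosis in mates of cases vs controls: {np.exp(model.params['case']):.2f}")
```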

Non-random mating is not a lack of promiscuity! It’s the tendency for partners to be more similar than we would expect by chance on any given trait of interest.

Results

Cases showed reduced odds of mating relative to controls, and this differed by diagnosis, with the greatest attenuation among individuals with schizophrenia. In the case of some diagnoses (e.g., ADHD) this low rate of mating may simply reflect, at least in part, the young age of these populations.

Within each diagnostic category, there was evidence of a correlation in diagnostic status for mates of both sexes (ranging from 0.11 to 0.48), and there was also evidence of cross-disorder correlations, although these were typically smaller than within-disorder correlations (ranging from 0.01 to 0.42).

In general, if an individual had a diagnosis this was typically associated with a 2- to 3-fold increase in the odds of his or her mate having the same or a different disorder. This was particularly pronounced for certain conditions, such as ADHD, autism spectrum disorder and schizophrenia.

In contrast to the psychiatric samples, mating rates were consistently high among both men and women with non-psychiatric diagnoses, and correlations both within and across these conditions were minimal (ranging from -0.03 to 0.17), with the presence of a non-psychiatric diagnosis in one partner associated with little increase in risk in the other.

This general population study found an amazing amount of assortative (non-random) mating within psychiatric disorders.

Conclusions

These results indicate a striking degree of non-random mating for psychiatric disorders, compared with minimal levels for non-psychiatric disorders.

Correlations between partners were:

  • Greater than 0.40 for ADHD, autism spectrum disorder and schizophrenia,
  • Followed by substance abuse (range 0.36 to 0.39),
  • And detectable but more modest for other disorders, such as affective disorders (range 0.14 to 0.19).

The authors conclude the following:

  • Non-random mating is common in people with a psychiatric diagnosis.
  • Non-random mating occurs both within and across psychiatric diagnoses.
  • There is substantial variation in patterns of non-random mating across diagnoses.
  • Non-random mating is not present to the same degree for non-psychiatric diagnoses.

Implications

So, what are the implications of these findings?

First, non-random mating could account for the relatively high heritability of psychiatric disorders, and also explain why some psychiatric disorders are more heritable than others (if the degree of assortment varies by disorder).

This is because non-random mating will serve to increase additive genetic variation across generations until equilibrium is reached, leading to increased (narrow sense) heritability for any trait on which it is acting.
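
To see how this plays out, here is a minimal sketch of the textbook infinitesimal-model recursion for additive variance under phenotypic assortment. The starting heritability and spousal correlation below are illustrative values, not estimates from this study.

```python
# Iterate the standard recursion: offspring additive variance = variance of the mid-parent
# breeding value (inflated by the mate correlation) + segregation variance (V_A0 / 2).
def equilibrium_heritability(h2_0=0.5, r_spouse=0.4, generations=50):
    va0 = h2_0            # baseline additive variance (phenotypic variance scaled to 1 at t=0)
    ve = 1.0 - h2_0       # environmental variance, assumed constant
    va = va0
    for _ in range(generations):
        h2 = va / (va + ve)          # current narrow-sense heritability
        m = r_spouse * h2            # correlation between mates' breeding values
        va = 0.5 * va * (1.0 + m) + 0.5 * va0
    return va / (va + ve)

print(equilibrium_heritability(r_spouse=0.0))   # random mating: stays at ~0.50
print(equilibrium_heritability(r_spouse=0.4))   # assortment: rises to ~0.56 at equilibrium
```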

Second, non-random mating across psychiatric disorders (reflected, for example, in a correlation of 0.31 between schizophrenia and autism spectrum disorder) could help to explain, in part, the observed genetic comorbidity across these disorders.

Non-random mating could explain why some psychiatric disorders are more heritable than others.

Strengths and limitations

This is an extremely well-conducted, authoritative study using a very large and representative data set. The use of a comparison group of non-psychiatric diagnoses is also an important strength, which gives us insight into just how strong non-random mating is with respect to psychiatric diagnoses.

The major limitations include:

  • Not being able to capture other pairings (e.g., unmarried childless couples)
  • A reliance on register diagnoses, which largely excludes outpatients (particularly before 2001) and milder, untreated cases
  • A lack of insight into possible mechanisms

This last point is interesting: non-random mating such as that observed in this study could arise because couples become more similar over time after forming a relationship (e.g., through their interactions with each other), or because they are more similar from the outset (i.e., similar individuals are more likely to form couples in the first place, known as assortative mating).

The authors conclude that the non-random mating they observed may be due to assortative mating, for two reasons. First, shared environment (which would capture the effects of partner interactions) appears to play very little role in many psychiatric conditions. Second, neurodevelopmental conditions are present over the lifespan (i.e., before couples typically meet), which suggests an assortative mating explanation for the observed similarity for at least these conditions.

Some disorders (e.g., schizophrenia) are associated with reduced reproductive success, and therefore should be under strong negative selection in the general population. However, these results suggest they may be positively selected for within certain psychiatric populations. In other words, these mating patterns could, in part, compensate for the reduced reproductive success associated with certain diagnoses, and explain why they persist across generations.

Implications for future research

Non-random mating also has implications for research, and in particular for the use of genetic models. These models typically assume that mating takes place at random, but the presence of non-random mating (as indicated by this study) suggests that it should be taken into account, for example by allowing for a correlation between partners. Neglecting this correlation may lead to an underestimate of heritability.
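
As a simple illustration of the direction of this bias (a sketch assuming the classical twin design with phenotypic assortment; the parameter values are purely illustrative), consider what happens to the familiar Falconer estimate, h² = 2(r_MZ − r_DZ), when mates are correlated.

```python
# Under assortment the DZ twin correlation is inflated, so an estimator that assumes
# random mating attributes too little of the MZ-DZ difference to additive genetics.
def falconer_estimate(h2=0.6, c2=0.1, r_spouse=0.4):
    m = r_spouse * h2                   # correlation between parents' breeding values
    r_mz = h2 + c2                      # expected MZ twin correlation
    r_dz = 0.5 * (1.0 + m) * h2 + c2    # DZ correlation, inflated by assortment
    return 2.0 * (r_mz - r_dz)          # Falconer's formula assumes random mating

print(falconer_estimate(r_spouse=0.0))  # ~0.60: recovers the true heritability
print(falconer_estimate(r_spouse=0.4))  # ~0.46: underestimates the true value of 0.60
```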

Summary

This study suggests that non-random mating is widespread for psychiatric conditions, which may help to provide insights into why these conditions are transmitted across generations, and why there is such a strong degree of comorbidity across psychiatric diagnoses. The results also challenge a fundamental assumption of many genetic approaches.

Assortative mating means that, in general population terms, people in romantic relationships with those who have psychiatric disorders are also likely to have psychiatric problems themselves.

Links

Primary paper

Nordsletten AE, Larsson H, Crowley JJ, Almqvist C, Lichtenstein P, Mataix-Cols D. (2016) Patterns of nonrandom mating within and across 11 major psychiatric disorders. JAMA Psychiatry. doi: 10.1001/jamapsychiatry.2015.3192


The missing heritability problem

By Marcus Munafo

Missing heritability has been described as genetic “dark matter”.

In my last post I described the transition from candidate gene studies to genome-wide association studies, and argued that the corresponding change in the methods used, focusing on the whole genome rather than on a handful of genes of presumed biological relevance, has transformed our understanding of the genetic basis of complex traits. In this post I discuss the reasons why, despite this success, we still have not accounted for all the genetic influences we expect to find.

As I discussed previously, genome-wide association studies (GWAS) have been extremely successful in identifying genetic variants associated with a range of disease outcomes – countless replicable associations have emerged over the last few years. Nevertheless, despite this success, the proportion of variability in specific traits accounted for so far is much less than what twin, family and adoption studies would lead us to expect. The individual variants identified are associated with a very small proportion of variance in the trait of interest (typically 0.1% or less), so that together they still only account for a modest proportion. Twin, family and adoption studies would lead us to expect that 50% or more of the variance in many complex traits is attributable to genetic influences, but so far we have found only a small fraction of that total. This has become known as the “missing heritability” problem. Where are the other genes? Should we be seeking common genetic variants of smaller and smaller effect, in larger and larger studies? Or is there a role for rare variants (i.e., those which occur with a low frequency in a particular population, typically a minor allele frequency less than 5%), which may have a larger effect?

It is clear that some missing heritability will be accounted for by variants that have not yet been identified via GWAS. Most GWAS genotyping chips don’t capture rare variants very well, but evolutionary theory predicts that those mutations that strongly influence complex phenotypes will tend to occur at low frequencies. Under the evolutionary neutral model, variants with these large effects are predicted to be rare. However, under the same model, while rare variants of large effect constitute the majority of causal variants, they still only contribute a small proportion of phenotypic variance in a population, because they are rare. On the other hand, common variants of small effect contribute a greater overall proportion of variance. There are new methods which use a less stringent threshold for including variants identified via GWAS – instead of only including those that reach “genomewide significance” (i.e., a P-value < 10⁻⁸ – see my earlier post), those which reach a much more modest level of statistical evidence (e.g., P < 0.5) are included. This much more inclusive approach has shown that, when considered together, common genetic variants do in fact seem to account for a substantial proportion of the expected heritability.
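
One common way to operationalise this thresholding idea is a polygenic score: sum each person’s allele counts, weighted by the GWAS effect sizes, over every variant passing the lenient P-value cut-off. The sketch below uses hypothetical summary statistics and genotypes purely for illustration.

```python
# Build a polygenic score from all variants with P < 0.5, weighting allele counts by beta.
import numpy as np
import pandas as pd

gwas = pd.DataFrame({                      # hypothetical GWAS summary statistics
    "snp":  ["rs1", "rs2", "rs3", "rs4"],
    "beta": [0.02, -0.01, 0.03, 0.00],
    "pval": [1e-9, 0.03, 0.40, 0.90],
})
selected = gwas[gwas["pval"] < 0.5]        # lenient inclusion threshold

genotypes = pd.DataFrame(                  # hypothetical 0/1/2 allele counts, people x SNPs
    np.random.default_rng(0).integers(0, 3, size=(5, 4)),
    columns=gwas["snp"],
)
scores = genotypes[selected["snp"]].to_numpy() @ selected["beta"].to_numpy()
print(scores)                              # one polygenic score per individual
```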

In other words, complex traits, such as most disease outcomes but also those behavioural traits of interest to psychologists, are highly polygenic – that is, they are influenced by a very large number of common genetic variants of very small effect. This, in turn, explains why we have yet to reliably identify specific genetic variants associated with many psychological and behavioural traits – while the latest GWAS of traits such as height and weight (the GIANT Consortium) includes data on over 250,000 individuals, there exists no such collection of data on most psychological and behavioural traits. This situation is changing though – a recent GWAS of educational attainment combined data on over 125,000 individuals, and three genetic loci were identified with genomewide significance, although these were associated with very small effects (as we would expect). Excitingly, these findings have recently been replicated. Another large GWAS, this time of schizophrenia, identified 108 loci associated with the disease, putting this psychiatric condition on a par with traits such as height and weight in terms of our understanding of the underlying genetics.

The success of the GWAS method is remarkable – the recent schizophrenia GWAS, for example, has provided a number of intriguing new biological targets for further study. It should only be a matter of time (and sample size) before we begin to identify variants associated with personality, cognitive ability and so on. Once we do, we will understand more about the biological basis for these traits, and finally begin to account for the missing heritability.

References:

Munafò, M.R., & Flint J. (2014). Schizophrenia: genesis of a complex disease. Nature, 511, 412-3.

Rietveld, C.A., et al. (2013). GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science, 340, 1467-71.

@MarcusMunafo

@BristolTARG

This blog first appeared on The Inquisitive Mind site on 18th October 2014.

Having confidence…

I’ve written previously about the problems associated with an unhealthy fixation on P-values in psychology. Although null hypothesis significance testing (NHST) remains the dominant approach, there are a number of important problems with it. Tressoldi and colleagues summarise some of these in a recent article.

First, NHST focuses on rejection of the null hypothesis at a pre-specified level of probability (typically 5%, or 0.05). The implicit assumption, therefore, is that we are only interested in answering “Yes!” to questions of the form “Is there a difference from zero?”. What if we are interested in cases where the answer is “No!”? Since the null hypothesis is hypothetical and unobserved, NHST doesn’t allow us to conclude that the null hypothesis is true.

Second, P-values can vary widely when the same experiment is repeated (for example, because the participants you sample will be different each time) – in other words, they give very unreliable information about whether a finding is likely to be reproducible. This is important in the context of recent concerns about the poor reproducibility of many scientific findings.

Third, with a large enough sample size we will always be able to reject the null hypothesis. No observed distribution is ever exactly consistent with the null hypothesis, and as sample size increases the likelihood of being able to reject the null increases. This means that trivial differences (for example, a difference in age of a few days) can lead to a P-value less than 0.05 in a large enough sample, despite the difference having no theoretical or practical importance.
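
This third point is easy to demonstrate with a small simulation (illustrative numbers only, assuming NumPy and SciPy are available): a true difference far too small to matter in practice yields an ever smaller P-value as the sample grows.

```python
# A trivially small mean difference becomes "significant" once n is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tiny_difference = 0.02                     # of no theoretical or practical importance
for n in [100, 10_000, 1_000_000]:
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(tiny_difference, 1.0, size=n)
    t_stat, p_value = stats.ttest_ind(a, b)
    print(f"n = {n:>9,}: P = {p_value:.3g}")
```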

The last point is particularly important, and relates to two other limitations. Namely, the P-value doesn’t tell us anything about how large an effect is (i.e., the effect size), or about how precise our estimate of the effect size is. Any measurement will include a degree of error, and it’s important to know how large this is likely to be.

There are a number of things that can be done to address these limitations. One is the routine reporting of effect size and confidence intervals. The confidence interval is essentially a measure of the reliability of our estimate of the effect size, and can be calculated at different levels of confidence. A 95% confidence interval, for example, represents the range of values that we can be 95% confident that the true effect size in the underlying population lies within. Reporting the effect size and associated confidence interval therefore tells us both the likely magnitude of the observed effect, and the degree of precision associated with that estimate. The reporting of effect sizes and confidence intervals is recommended by a number of scientific organisations, including the American Psychological Association and the International Committee of Medical Journal Editors.
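
For a simple two-group comparison, this is straightforward to do. The sketch below uses illustrative data and a standard large-sample approximation for the standard error of Cohen’s d to report an effect size with its 95% confidence interval.

```python
# Report Cohen's d with a 95% confidence interval rather than a bare P-value.
import numpy as np

rng = np.random.default_rng(7)
group_a = rng.normal(0.0, 1.0, size=50)
group_b = rng.normal(0.5, 1.0, size=50)

n1, n2 = len(group_a), len(group_b)
pooled_var = ((n1 - 1) * group_a.var(ddof=1) + (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2)
d = (group_b.mean() - group_a.mean()) / np.sqrt(pooled_var)

se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))   # approximate SE of d
ci_low, ci_high = d - 1.96 * se_d, d + 1.96 * se_d
print(f"d = {d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```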

How often does this happen in the best journals? Tressoldi and colleagues go on to assess the frequency with which effect sizes and confidence intervals are reported in some of the most prestigious journals, including Science, Nature, Lancet and New England Journal of Medicine. The results showed a clear split. Prestigious medical journals did reasonably well, with most selected articles reporting prospective power (Lancet 66%, New England Journal of Medicine 61%) and an effect size and associated confidence interval (Lancet 86%, New England Journal of Medicine 83%). However, non-medical journals did very poorly, with hardly any selected articles reporting prospective power (Science 0%, Nature 3%) or an effect size and associated confidence interval (Science 0%, Nature 3%). Conversely, these journals frequently (Science 42%, Nature 89%) reported P-values in the absence of any other information (such as prospective power, effect size or confidence intervals).

There are a number of reasons why we should be cautious when ranking journals according to metrics intended to reflect quality and convey a sense of prestige. One of these appears to be that many of the articles in the “best” journals neglect some simple reporting procedures for statistics. This may be for a number of reasons – editorial policy, common practices within a particular field, or article formats which encourage extreme brevity. Fortunately the situation appears to be improving – Nature recently introduced a methods reporting checklist for new submissions, which includes statistical power and sample size calculation. It’s not perfect (there’s no mention of effect size or confidence intervals, for example), but it’s a start…

Reference:

Tressoldi, P.E., Giofré, D., Sella, F. & Cumming, G. (2013). High impact = high statistical standards? Not necessarily so. PLoS ONE, 8, e56180.

Posted by Marcus Munafo

Shifting the Evidence

An excellent paper published a few years ago, Sifting the Evidence, highlighted many of the problems inherent in significance testing, and the use of P-values. One particular problem highlighted was the use of arbitrary thresholds (typically P < 0.05) to divide results into “significant” and “non-significant”. More recently, there has been a lot of coverage of the problems of reproducibility in science, and in particular distinguishing true effects from false positives. Confusion about what P-values actually tell us may contribute to this.

It is often not made clear whether research is exploratory or confirmatory. This distinction is now commonly made in genetic epidemiology, where individual studies routinely report “discovery” and “replication” samples. That in itself is helpful – it’s all too common for post-hoc analyses (e.g., of sub-groups within a sample) to be described as having been based on a priori hypotheses. This is sometimes called HARKing (Hypothesising After the Results are Known), which can make it seem like results were expected (and therefore more likely to be true), when in fact they were unexpected (and therefore less likely to be true). In other words, a P-value alone is often not very informative in telling us whether an observed effect is likely to be true – we also need to take into account whether it conforms with our prior expectations.


One way we can do this is by taking into account the pre-study probability that the effect or association being investigated is real. This is difficult of course, because we can’t know this with certainty. However, what we perhaps can estimate is the extent to which a study is exploratory (the first to address a particular question, or use a newly-developed methodology) or confirmatory (the latest in a long series of studies addressing the same basic question). Broer et al (2013) describe a simple way to take this into account and increase the likelihood that a reported finding is actually true. Their basic point is that the likelihood that a claimed finding is actually true (which they call the positive predictive value, or PPV) is related to three things: the prior probability (i.e., whether the study is exploratory or confirmatory), the statistical power (i.e., the probability of finding an effect if it really exists), and the Type I error rate (i.e., the P-value or significance threshold used). We have recently described the problems associated with low statistical power in neuroscience (Button et al., 2013).
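
The relationship between these three quantities can be written down directly. The sketch below (illustrative numbers, holding power fixed for simplicity – in practice power falls as the threshold is tightened unless the sample grows) shows how the PPV collapses when the prior probability is low, and how a stricter threshold restores it.

```python
# Post-study probability that a claimed finding is true, given prior, power and alpha.
def ppv(prior, power, alpha):
    true_positives = prior * power            # truly real effects that are detected
    false_positives = (1.0 - prior) * alpha   # null effects that pass the threshold
    return true_positives / (true_positives + false_positives)

print(ppv(prior=0.001, power=0.8, alpha=0.05))   # exploratory, alpha 0.05: PPV ~ 0.02
print(ppv(prior=0.5,   power=0.8, alpha=0.05))   # confirmatory, alpha 0.05: PPV ~ 0.94
print(ppv(prior=0.001, power=0.8, alpha=1e-7))   # exploratory, stricter alpha: PPV ~ 1.00
```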

What Broer and colleagues show is that if we adjust the P-value threshold we use, depending on whether a study is exploratory or confirmatory, we can dramatically increase the likelihood that a claimed finding is true. For highly exploratory research, with a very low prior probability, they suggest a P-value of 1 × 10⁻⁷. Where the prior probability is uncertain or difficult to estimate, they suggest a value of 1 × 10⁻⁵. Only for highly confirmatory research, where the prior probability is high, do they suggest that a “conventional” value of 0.05 is appropriate.

Psychologists are notorious for having an unhealthy fixation on P-values, and particularly the 0.05 threshold. This is unhelpful for lots of reasons, and many journals now discourage or even ban the use of the word “significant”. The genetics literature that Broer and colleagues draw on has learned these lessons from bitter experience. However, if we are going to use thresholds, it makes sense that these reflect the exploratory or confirmatory nature of our research question. Fewer findings might pass these new thresholds, but those that do will be much more likely to be true.

References:

Broer L, Lill CM, Schuur M, Amin N, Roehr JT, Bertram L, Ioannidis JP, van Duijn CM. (2013). Distinguishing true from false positives in genomic studies: p values. Eur J Epidemiol; 28(2): 131-8.

Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, Munafò MR. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci; 14(5): 365-76.

Posted by Marcus Munafo and thanks to Mark Stokes at Oxford University for the ‘Statistical power is truth power’ image.