Evaluating potential bias of REML estimators of heritability
Abstract
Genomic models that incorporate dense SNP genotypes are increasingly being used and studied for inference of variance parameters and narrow-sense heritability. The variance parameters of a linear mixed model linking a phenotype to SNP genotypes can be inferred using restricted maximum likelihood (REML), which produces consistent, asymptotically normal estimates of variance components when the SNP genotypes are those of the causal loci. Such properties are not guaranteed to hold when the covariance structure of the data specified by the genomic model differs substantially from that specified by the true model. Since in practice the true genetic relationship matrix among individuals is unknown, SNP genotypes are used instead to compute a genomic relationship matrix. The patterns of realized relationships at different sets of loci (e.g., markers and causal loci) vary across the genome, and therefore a genomic relationship matrix may provide a poor description of true genetic relationships at causal loci, potentially leading to incorrect inferences. This work offers a theoretical analysis based on splitting the likelihood equations into components, isolating those that contribute to incorrect inferences, and providing an informative measure to compare the covariance structure of the data specified by the genomic and the true models. The theory presented is also used to evaluate and explain the success of a number of recently reported approaches in removing sources of bias of heritability estimates.
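For orientation, a standard formulation of the model discussed in the abstract is sketched below; the notation follows common convention in the genomic-heritability literature and is not taken from the paper itself.

\[
\mathbf{y} = X\boldsymbol{\beta} + \mathbf{g} + \mathbf{e}, \qquad
\mathbf{g} \sim N(\mathbf{0}, \sigma^2_g G), \qquad
\mathbf{e} \sim N(\mathbf{0}, \sigma^2_e I),
\]

where G is a genomic relationship matrix computed from the markers, for example G = WW'/m with W the n × m matrix of centered and scaled SNP genotypes. REML applied to this model yields variance-component estimates, from which narrow-sense heritability is estimated as

\[
\hat{h}^2 = \frac{\hat{\sigma}^2_g}{\hat{\sigma}^2_g + \hat{\sigma}^2_e}.
\]

The question the abstract raises is whether this estimator remains well behaved when G, built from marker loci, differs from the relationship matrix realized at the causal loci.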
Main file: evaluating-potential-bias-reml-estimators-heritability.pdf (2.63 MB)
Origin: Files produced by the author(s)