Permutation procedures

Permutation procedures provide a computationally intensive approach to generating significance levels empirically. Such values have desirable properties: for example, they relax assumptions about the normality of continuous phenotypes and about Hardy-Weinberg equilibrium, cope better with rare alleles and small sample sizes, provide a framework for correction for multiple testing, and control for identified substructure or familial relationships by permuting only within clusters.

Conceptual overview of permutation procedures
Permutation procedures are available for a variety of tests, as described below. For some tests, however, these procedures are not available (e.g. SNP x SNP epistasis tests). For other tests, permutation is necessary to obtain any significance values at all (e.g. set-based tests).

The permutation tests described below can be categorized in two ways:
  • Label-swapping versus gene-dropping
  • Adaptive versus max(T)
Label-swapping and gene-dropping
In samples of unrelated individuals, one simply swaps labels (assuming that individuals are interchangeable under the null) to provide a new dataset sampled under the null hypothesis. Note that only the phenotype-genotype relationship is destroyed by permutation: the patterns of LD between SNPs will remain the same in the observed and permuted samples. For family data, it might be better (or, in the case of affected-only designs such as the TDT, necessary) to perform gene-dropping permutation instead. In its simplest form, this just involves flipping which allele is transmitted from parent to offspring with 50:50 probability. This approach also extends to general pedigrees, dropping genes from founders down the generations.

For quantitative traits, or samples in which both affected and unaffected non-founders are present, one can perform a basic test of association (with disease, or with a quantitative trait), treating the pedigree data as if all individuals were unrelated (i.e. just using the --assoc option); creating the permuted datasets by gene-dropping will then control for both stratification and the non-independence of related individuals (as these will also be properties of every permuted dataset). It is possible to maintain LD between SNPs by applying the same 50:50 flip/no-flip decision to all SNPs in the same permuted replicate for a given transmission. In addition, it is possible to control for linkage by applying the same series of flip/no-flip decisions to all siblings in the same nuclear family. Both these features are automatically applied in PLINK.
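
As a concrete illustration, the following is a minimal sketch of gene-dropping in a nuclear family (not PLINK's internal code; the input arrays, which assume the transmitted/untransmitted alleles have already been resolved, are assumptions made for the example). A single 50:50 flip/no-flip decision per parent is applied to all SNPs (preserving LD) and to all siblings (controlling for linkage):

     import numpy as np

     rng = np.random.default_rng()

     def gene_drop_nuclear_family(transmitted, untransmitted):
         """One gene-dropping replicate for one nuclear family.

         transmitted, untransmitted: arrays of shape (n_sibs, 2, n_snps) giving,
         for each sibling and each parent (0 = father, 1 = mother), the allele
         actually transmitted and the one that was not (coded 0/1).  Resolving
         these from raw genotypes is what PLINK does internally (not shown here).
         """
         flip = rng.integers(2, size=2).astype(bool)   # one decision per parent
         new = transmitted.copy()
         for p in (0, 1):                              # apply to all SNPs and all sibs
             if flip[p]:
                 new[:, p, :] = untransmitted[:, p, :]
         return new.sum(axis=1)                        # permuted genotypes, coded 0/1/2
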
Adaptive and max(T) permutation
Using either label-swapping or gene-dropping, there are two basic approaches to performing the permutations. The default mode is to use an adaptive permutation approach, in which permutation stops earlier for SNPs that are clearly going to be non-significant than for SNPs that look interesting. In other words, if after only 10 permutations we see that for 9 of these the permuted test statistic for a given SNP is larger than the observed test statistic, there is little point in carrying on, as this SNP is incredibly unlikely ever to achieve a highly significant result. This greatly speeds up the permutation procedure, as most SNPs (those that are not highly significant) drop out quite quickly, making it possible to properly evaluate significance for the handful of SNPs that require millions of permutations. Naturally, the precision with which the empirical p-value is estimated (i.e. arising from the number of permutations performed) will be correlated with the significance value itself -- but for most purposes this is precisely what one wants, as it is of little interest whether a clearly un-associated SNP really has a p-value of 0.78 or 0.87.

In contrast, max(T) permutation does not drop SNPs along the way. If 1000 permutations are specified, then all 1000 will be performed, for all SNPs. The benefit of doing this is that two sets of empirical significance values can then be calculated -- a pointwise estimate of each individual SNP's significance, but also a value that controls for the fact that thousands of other SNPs were tested. This is achieved by comparing each observed test statistic against the maximum of all permuted statistics (i.e. over all SNPs) for each single replicate. In other words, the p-value now controls the familywise error rate, as it reflects the chance of seeing a test statistic this large, given that so many tests were performed. Because the permutation scheme preserves the correlational structure between SNPs, this provides a less stringent correction for multiple testing than the Bonferroni correction, which assumes all tests are independent. Because it is now the corrected p-value that is of interest, it is typically sufficient to perform a much smaller number of permutations -- i.e. it is probably not necessary to demonstrate that something is genome-wide significant beyond the 0.05 or 0.01 level.
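
To make the relationship between the two sets of empirical values concrete, here is a minimal sketch (not PLINK's code) that computes pointwise and max(T)-corrected empirical p-values from a matrix of permuted test statistics, using the (R+1)/(N+1) formula given in the Hint in the max(T) section below:

     import numpy as np

     def empirical_pvalues(observed, permuted):
         """observed: (n_snps,) test statistics from the real data.
         permuted: (n_reps, n_snps) statistics from the permuted replicates.
         Returns pointwise and familywise-corrected empirical p-values."""
         n_reps = permuted.shape[0]
         # pointwise: each SNP is compared only with its own permuted statistics
         emp1 = (np.sum(permuted >= observed, axis=0) + 1) / (n_reps + 1)
         # corrected: each SNP is compared with the per-replicate maximum over all SNPs
         max_per_rep = permuted.max(axis=1)
         emp2 = (np.sum(max_per_rep[:, None] >= observed, axis=0) + 1) / (n_reps + 1)
         return emp1, emp2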

Computational issues
PLINK performs the basic tests of association reasonably quickly -- for small datasets both permutation procedures will be feasible. For example, for a dataset comprising 100,000 SNPs measured on 350 individuals, each permutation (for all 100K SNPs) takes approximately 2 seconds on a modern Linux workstation. At this speed, it will take just over 1 day to perform 50,000 permutations using the max(T) mode and label-swapping. With the same dataset, using adaptive mode, the entire analysis finishes much more quickly (although the empirical p-values are, of course, not corrected for multiple testing). For larger datasets (e.g. thousands of individuals measured on >500K SNPs) things will slow down, although the cost scales linearly with the number of genotypes -- if one has access to a cluster, however, the max(T) approach lends itself to easy parallelization (i.e. one can set many jobs running on the same data and combine the empirical p-values afterwards).

By default, PLINK will select a random seed for the permutations, based on the system clock. To specify a fixed seed instead, add the option
   --seed 6377474
where the parameter is a (large) integer seed.
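
As noted above, max(T) permutation parallelizes easily. One possible pattern (a sketch; the number of jobs, permutation counts, seeds and output roots are arbitrary choices, not prescribed values) is to launch several runs on the same data, each with its own seed and output root, and then combine the empirical p-values using the formula given in the max(T) section below:

     import subprocess

     # launch 10 independent max(T) runs of 5,000 permutations each;
     # each run gets its own seed and its own output file root
     for i in range(10):
         subprocess.run(
             ["plink", "--bfile", "mydata",
              "--assoc", "--mperm", "5000",
              "--seed", str(1000 + i),
              "--out", f"perm_run{i}"],
             check=True,
         )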

Basic (adaptive) permutation procedure

The default method for permutation is the adaptive method. To obtain a max(T) permutation p-value, see the section below. For either case/control or quantitative trait association analysis, use the option:
plink --file mydata --assoc --perm

to initiate adaptive permutation testing. As well as the plink.assoc or plink.qassoc output file, adding the --perm option will generate a file named:
plink.assoc.perm

which contains the fields:
     CHR     Chromosome
     SNP     SNP ID
     STAT    Test statistic
     EMP1    Empirical p-value (adaptive)
     NP      Number of permutations performed for this SNP
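
For downstream work, this file can be loaded like any whitespace-delimited PLINK output; a minimal sketch (assuming pandas is available):

     import pandas as pd

     perm = pd.read_csv("plink.assoc.perm", sep=r"\s+")
     # most promising SNPs first; NP shows how many permutations each SNP received
     print(perm.sort_values("EMP1")[["CHR", "SNP", "STAT", "EMP1", "NP"]].head(10))
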
An alternate permutation scheme is also available that may be useful under some circumstances. Specifically, this approach fixes the observed marginal counts of the 2-by-3 table of case/control status by the two allele counts plus the missing-allele count. After permuting the case/control labels, only two cells of the table (e.g. missing and A2 allele counts for controls) are counted; the rest of the table is filled in from the fixed marginal values. This speeds up the permutation procedure a little, and also implicitly down-weights association results where there is a lot of missing genotype data that is non-random with respect to genotype and case/control status. Naturally, this approach cannot provide total protection against the problem of non-random missing genotype data. Also, for SNPs with a lot of missing data, this test will be conservative, whether the missingness is non-random or not. For these reasons this is not the default option, although it might be worth exploring further. To use this alternate permutation scheme, add the --p2 flag:
plink --file mydata --assoc --perm --p2

or
plink --file mydata --assoc --mperm 1000 --p2

Adaptive permutation parameters

Although the --perm option invokes adaptive permutation by default, various parameters that alter the behavior of the adaptive process can be tweaked using the --aperm option, which takes six parameters: for example,
plink --file mydata --assoc --aperm 10 1000000 0.0001 0.01 5 0.001

The six arguments (along with the default values) are:
     Minimum number of permutations per SNP           5
     Maximum number of permutations per SNP           1000000
     Alpha level threshold (alpha)                    0
     Confidence interval on empirical p-value (beta)  0.0001
     Interval to prune test list (intercept)          1
     Interval to prune test list (slope)              0.001
These are interpreted as follows (using the example values above): for every SNP, at least 10 permutations will be performed, but no more than 1,000,000. The two pruning parameters control how often the test list is pruned: pruning is attempted every 5 + 0.001R replicates, where R is the current number of replicates, so the intercept (5) sets the initial interval and the slope (0.001) means that pruning becomes less frequent as the number of replicates grows. At each pruning stage, a 100 x (1 - beta/(2T))% confidence interval is calculated around each empirical p-value, where beta is, in this case, 0.01 and T is the number of SNPs tested. Using the normal approximation to the binomial, any SNP whose lower confidence bound is greater than alpha, or whose upper confidence bound is less than alpha (here alpha is 0.0001), is dropped from further permutation.
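
The pruning rule itself can be sketched as follows (an illustration of the rule just described, not PLINK's internal code), using the normal approximation to the binomial:

     import math
     from scipy.stats import norm   # normal quantile for the confidence interval

     def can_prune(r, n, alpha, beta, n_tests):
         """r: permuted statistics >= observed so far; n: permutations so far.
         alpha, beta: the --aperm alpha and beta parameters; n_tests: T, the
         number of SNPs.  Returns True if this SNP can stop being permuted."""
         p_hat = (r + 1) / (n + 1)
         conf = 1 - beta / (2 * n_tests)        # confidence level 1 - beta/(2T)
         z = norm.ppf(1 - (1 - conf) / 2)       # two-sided normal quantile
         half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
         return (p_hat - half_width) > alpha or (p_hat + half_width) < alpha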

max(T) permutation

To perform the max(T) permutation procedure, use the --mperm option, which takes a single parameter, the number of permutations to be performed: e.g. to use it with the TDT test:
plink --file mydata --tdt --mperm 5000

which will generate (along with the plink.tdt file) a file named
     plink.tdt.mperm
which contains the fields:
     CHR     Chromosome
     SNP     SNP ID
     STAT    Test statistic
     EMP1    Empirical p-value (pointwise)
     EMP2    Corrected empirical p-value (max(T) / familywise)

Hint If multiple runs of PLINK are performed on the same dataset in parallel, using a computer cluster to speed up the max(T) permutations, then the resulting estimates of empirical significance can be combined across runs as follows. Empirical p-values are calculated as (R+1)/(N+1), where R is the number of times the permuted test statistic is greater than the observed statistic and N is the number of permutations. Therefore, given p_i, the empirical p-value for the ith run, p_i*(N_i+1)-1 replicates passed the observed value in that run. The overall empirical p-value should then be:
     ( SUM_i { p_i * ( N_i + 1 ) - 1 } + 1 ) / ( SUM_i { N_i } + 1 ) 
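
As a sketch of this calculation (assuming the p_i and N_i for each run have already been collected from the separate output files):

     def combine_empirical_pvalues(runs):
         """runs: list of (p_i, n_i) pairs, one per parallel PLINK run.
         Each p_i was computed as (R_i + 1) / (N_i + 1), so R_i = p_i*(N_i + 1) - 1."""
         r_total = sum(p * (n + 1) - 1 for p, n in runs)
         n_total = sum(n for _, n in runs)
         return (r_total + 1) / (n_total + 1)

     # e.g. three runs of 10,000 permutations each (illustrative values only)
     print(combine_empirical_pvalues([(0.0021, 10000), (0.0017, 10000), (0.0025, 10000)]))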

To produce output files that contain either the best statistic per replicate, or all statistics per replicate, use either option
     --mperm-save
or
     --mperm-save-all
along with the usual --mperm command. The first command generates a file
     plink.mperm.dump.best
which contains two columns. The first is the replicate number (0 represents the original data; the remaining rows are numbered 1 to R, where R is the number of permutations specified). The second column is the maximum test statistic over all SNPs for that replicate. The second command, --mperm-save-all, produces a file
     plink.mperm.dump.all
which can be a very large file: it contains the test statistic for every SNP for every replicate. As before, the first row is the original data; the first column is the replicate number; all other columns give the test statistic values for each SNP (NA if the statistic cannot be calculated). These two files may be of use if, for example, you wish to create your own wrapper around PLINK to perform higher-order corrections for multiple testing, e.g. if more than one phenotype is tested per SNP. In most cases, for this purpose, the first form should suffice.
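
For example, if the same SNPs were analyzed against two phenotypes, each with --mperm and --mperm-save, one could compute a single p-value corrected over all SNPs and both phenotypes roughly as follows. This is only a sketch of the kind of wrapper mentioned above: the file roots are hypothetical, and it assumes the permutations in the two runs correspond (e.g. by using the same --seed), which should be verified for your own setup:

     import numpy as np

     # one .mperm.dump.best file per phenotype run (hypothetical output roots);
     # column 0 is the replicate number, column 1 the max statistic over all SNPs
     files = ["pheno1.mperm.dump.best", "pheno2.mperm.dump.best"]
     best = np.array([np.loadtxt(f)[:, 1] for f in files])   # (n_phenos, n_reps + 1)

     observed = best[:, 0].max()            # row 0 holds the original (unpermuted) data
     max_per_rep = best[:, 1:].max(axis=0)  # per replicate, max over both phenotypes

     n_reps = max_per_rep.size
     print((np.sum(max_per_rep >= observed) + 1) / (n_reps + 1))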

Gene-dropping permutation

To perform gene-dropping permutation, use the --genedrop option, combined with the standard --assoc option. Either adaptive: e.g.
plink --file mydata --assoc --genedrop

or max(T) permutation: e.g.
plink --file mydata --assoc --genedrop --mperm 10000

can be specified.

This analysis option is equally applicable to disease and quantitative traits, although at least some non-founder individuals should be unaffected. Currently, an individual must have both parents genotyped to be included in gene-dropping. For founders and for individuals without two genotyped parents, genotypes are unchanged throughout all gene-dropping permutations.

It is possible to combine label-swapping with gene-dropping, however, to handle different family/sample configurations. That is, the basic gene-dropping procedure leaves untouched all individuals without two genotyped parents, making them uninformative for the test of association. One can think of at least three classes of individuals without two parents in the dataset: founders/parents, siblings and unrelated singletons. Label-swapping within these groups can provide additional sources of information for association that control different levels of the between/within-family components of association.

There are three options, which can be used together:
     --swap-sibs                within family
     --swap-parents             partial within-family
     --swap-unrel               between family
which label-swap between sibs without genotyped parents (swapping only within families), between parents only (swapping only within families), or between all singletons (unrelated individuals) (swapping between families).

Basic within family QTDT
This test only considers information from individuals with two genotyped parents:
plink --file mydata --assoc --genedrop

Discordant sibling test
Although gene-dropping only considers individuals with two parents to be informative, valid family-based tests can include information from full-siblings -- by label-swapping only within each full sibship that does not otherwise have parents, it is possible to augment the power of the gene-dropping approach:
plink --file mydata --assoc --genedrop --swap-sibs

parenTDT/parenQTDT
This test additionally incorporates information from phenotypically discordant parents (for either quantitative or disease traits). This provides more information for association, but a weaker level of protection against stratification (i.e. it assumes that mother and father pairs are well matched in terms of subpopulation stratum).
plink --file mydata --assoc --genedrop --swap-parents

Standard association for singleton, unrelated individuals
If a sample is a mixture of families and unrelated individuals (e.g. case/control and offspring/parent trios combined) then adding this option as well as the --genedrop option will perform label-swapping permutation for all unrelated individuals.
plink --file mydata --assoc --genedrop --swap-unrel

One or more of these options can be included with the --genedrop option. These features allow between- and within-family components of association to be included in the analysis. Below are the results of some simple, proof-of-principle simulations to illustrate the parental discordance test:

Here is a subset of the results: in all cases, we have an unselected quantitative trait measured in parent/offspring nuclear families. The four models (rows A-D in the tables below):
  A. no stratification, no QTL
  B. no stratification, QTL
  C. stratification between families (i.e. mother and father from the same subpopulation), no QTL
  D. stratification within families (i.e. mother and father might not be from the same subpopulation), no QTL
The three analytic procedures (columns I-III):
  I.   standard QTL test (i.e. ignoring family structure, which we know is incorrect)
  II.  gene-dropping permutation (i.e. within-family QTDT)
  III. gene-dropping + parental label-swapping (i.e. parenQTDT)
From simulation, the empirically estimated power/type I error rates (for a nominal value of 0.05) are:
     500 trios (QT)
       I     II    III
     A 0.121 0.045 0.053
     B 0.841 0.239 0.563
     C 0.461 0.056 0.056
     D 0.505 0.055 0.501

     500 tetrads (QT)
       I     II    III
     A 0.173 0.043 0.050
     B 0.900 0.363 0.653
     C 0.439 0.042 0.045
     D 0.390 0.044 0.421
That is,
  1. method I is, as expected, liberal (e.g. for tetrads, we see a type I error rate of 17.3% instead of 5%); subsequent values for this test should therefore be ignored in the table.
  2. the parenQTDT (III), implemented as gene-dropping plus parental label-swapping, is considerably more powerful than the standard within-family test that ignores parental phenotypes (II) -- i.e. 65% versus 36% power for tetrads, in this particular instance.
  3. the parenQTDT is robust to stratification so long as it is between-family (condition C) -- i.e. it only assumes that mum and dad are matched on strata, not the whole sample. When this does not hold (condition D), we get spurious association, as expected.

HINT For disease traits, the parenTDT test is automatically performed by the --tdt option (as long as there are at least 10 phenotypically discordant parental pairs in the sample). See the section on standard association testing for more details.

Within-cluster permutation

To perform label-swapping permutation only within clusters, you must supply either a cluster file with the --within option, or indicate that family ID is to be used as the cluster variable with the --family option. Any label-swapping permutation procedure will then only swap phenotype labels between individuals within the same cluster. For example,
plink --file mydata --assoc --within mydata.clst --perm

if the file mydata.clst were (for a PED file containing only 6 individuals, the file format is family ID, individual ID, cluster):
     F1  1  1
     F2  1  1
     F3  1  1
     F4  1  2
     F5  1  2
     F6  1  3
this would imply that only the sets {F1,F2,F3} and {F4,F5} could be permuted (F6 forms a cluster of its own). That is, F1 and F3 could swap phenotypes, but not F1 and F5, for example. In this way, any between-cluster effects are preserved in each permuted dataset, which thereby controls for them.
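
The constraint can be illustrated with a small sketch (not PLINK's code) that shuffles phenotype labels only within each cluster; the phenotype and cluster vectors below mirror the six-individual example above:

     import numpy as np

     rng = np.random.default_rng()

     def permute_within_clusters(phenotypes, clusters):
         """Return a copy of phenotypes with labels shuffled only within clusters."""
         phenotypes = np.asarray(phenotypes)
         clusters = np.asarray(clusters)
         permuted = phenotypes.copy()
         for c in np.unique(clusters):
             idx = np.flatnonzero(clusters == c)
             permuted[idx] = phenotypes[rng.permutation(idx)]
         return permuted

     # clusters {F1,F2,F3}, {F4,F5} and {F6}; phenotypes coded 1/2 (illustrative)
     print(permute_within_clusters([2, 1, 2, 1, 2, 1], [1, 1, 1, 2, 2, 3]))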

To permute with family ID as the cluster variable for label-swapping, use the --family option:
plink --file mydata --assoc --family --perm

Note that label-swapping within families is different from gene-dropping. This approach would be appropriate for sibship data, for example, when no parents are available. The assumption is that individuals within a family unit are interchangeable under the null -- as such, you should not include mixtures of full siblings and half siblings, or siblings and parents, for example, in the same cluster using this approach.

Note Other options for stratified analyses are described on the previous page.

Generating permuted phenotype filesets

To generate a phenotype file with N permuted phenotypes in it, use the option
plink --bfile mydata --make-perm-pheno 10

which will make a file
     plink.pphe
with 10 permuted phenotypes (listed after the FID and IID, which form the first two columns). This can then be used in any further analysis with PLINK or any other program. This command can be combined with --within, to generate permuted phenotype files in which individuals' phenotypes are only swapped within each level of the stratifying cluster variable, e.g.
plink --bfile mydata --make-perm-pheno 10 --within strata1.dat
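
To illustrate the layout of such a file, here is a rough sketch (not PLINK's code) that writes FID, IID and N within-cluster-permuted copies of a quantitative phenotype; the function and variable names are assumptions made for the example:

     import numpy as np

     rng = np.random.default_rng()

     def write_perm_pheno(fids, iids, pheno, clusters, n_perm, path="perm.pphe"):
         """Write FID, IID and n_perm permuted phenotype columns, swapping labels
         only within each level of the cluster variable."""
         pheno = np.asarray(pheno, dtype=float)
         clusters = np.asarray(clusters)
         cols = []
         for _ in range(n_perm):
             perm = pheno.copy()
             for c in np.unique(clusters):
                 idx = np.flatnonzero(clusters == c)
                 perm[idx] = pheno[rng.permutation(idx)]
             cols.append(perm)
         with open(path, "w") as out:
             for i, (fid, iid) in enumerate(zip(fids, iids)):
                 values = " ".join(str(col[i]) for col in cols)
                 out.write(f"{fid} {iid} {values}\n")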

 
