

Permutation procedures
Permutation procedures provide a computationally intensive approach to
generating empirical significance levels. Such values have desirable
properties: for example, they relax assumptions about the normality of
continuous phenotypes and about Hardy-Weinberg equilibrium, deal with rare
alleles and small sample sizes, provide a framework for
correction for multiple testing, and control for
identified substructure or familial relationships by permuting only
within clusters.
Conceptual overview of permutation procedures
Permutation procedures are available for a variety of tests, as
described below. For some tests, however, these procedures are not
available (e.g. SNP x SNP epistasis tests). For other tests,
permutation is necessary to obtain any significance values at all
(e.g. set-based tests).
The permutation tests described below can be categorized in two ways:
* Label-swapping versus gene-dropping
* Adaptive versus max(T)
Label-swapping and gene-dropping
In samples of unrelated individuals, one simply swaps phenotype labels
(assuming that individuals are interchangeable under the null) to provide
a new dataset sampled under the null hypothesis.
Note that only the phenotype-genotype relationship is destroyed by
permutation: the patterns of LD between SNPs will remain the same in the
observed and permuted samples.
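As a sketch of this idea (not PLINK's internal code), the label-swapping scheme can be written in a few lines; the function name, the mean-difference statistic, and the data layout here are illustrative assumptions:

```python
import random

def empirical_p(pheno, geno_scores, n_perm=1000, seed=42):
    """Label-swapping permutation: shuffle phenotype labels, recompute the
    test statistic, and return an empirical p-value (R+1)/(N+1)."""
    rng = random.Random(seed)

    def stat(labels):
        # Illustrative statistic: |mean genotype in cases - mean in controls|
        cases = [g for ph, g in zip(labels, geno_scores) if ph == 1]
        ctrls = [g for ph, g in zip(labels, geno_scores) if ph == 0]
        return abs(sum(cases) / len(cases) - sum(ctrls) / len(ctrls))

    observed = stat(pheno)
    labels = list(pheno)
    r = 0
    for _ in range(n_perm):
        rng.shuffle(labels)            # destroys only the phenotype-genotype link;
        if stat(labels) >= observed:   # LD between SNPs would be untouched
            r += 1
    return (r + 1) / (n_perm + 1)
```

Note the (R+1)/(N+1) estimator, which matches the convention used for the empirical p-values discussed later on this page.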
For family data, it might be better (or, in the case of affected-only
designs such as the TDT, necessary) to perform gene-dropping permutation
instead. In its simplest form, this just involves flipping which
allele is transmitted from parent to offspring with 50:50 probability.
This approach also extends to general pedigrees, dropping genes from
founders down the generations.
For quantitative traits, or samples in which both affected and
unaffected non-founders are present, one can perform a basic test of
association (with disease, or with a quantitative trait) treating the
pedigree data as if all individuals were unrelated (i.e. just using the
--assoc option); creating the permuted datasets by gene-dropping
will then control for both stratification and the non-independence of
related individuals (as these will also be properties of every permuted
dataset). It is possible to maintain LD between SNPs by applying the
same flip/no-flip decision to all SNPs for a given transmission in the
same permuted replicate. In addition, it is possible to control for
linkage by applying the same series of flip/no-flip decisions to all
siblings in the same nuclear family. Both these features are automatically
applied in PLINK.
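The flip/no-flip logic for one nuclear family can be sketched as follows; this is a simplified illustration of the scheme just described (one shared decision per parent, per replicate), not PLINK's actual implementation, and all names are hypothetical:

```python
import random

def gene_drop_family(father_haps, mother_haps, n_sibs, rng):
    """Draw one transmitted haplotype per parent and give it to every sib.

    father_haps / mother_haps: a pair of haplotypes, each a list with one
    allele per SNP. Sharing a single flip/no-flip decision across all SNPs
    preserves LD; sharing it across all sibs in the nuclear family controls
    for linkage, as in the permutation scheme described above.
    """
    f_choice = rng.randrange(2)  # which paternal haplotype is transmitted
    m_choice = rng.randrange(2)  # which maternal haplotype is transmitted
    return [list(zip(father_haps[f_choice], mother_haps[m_choice]))
            for _ in range(n_sibs)]
```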
Adaptive and max(T) permutation
Using either label-swapping or gene-dropping, there are two basic
approaches to performing the permutations. The default mode is an
adaptive permutation approach, in which we give up permuting SNPs that
are clearly going to be non-significant more quickly than SNPs that look
interesting. In other words, if after only 10 permutations we see that
for 9 of these the permuted test statistic for a given SNP is larger than
the observed statistic, there is little point in carrying on, as this SNP
is very unlikely ever to achieve a highly significant result. This greatly
speeds up the permutation procedure, as most SNPs (those that are not
highly significant) will drop out quite quickly, making it possible
to properly evaluate significance for the handful of SNPs that require
millions of permutations. Naturally, the precision with which the
empirical p-value is estimated (i.e. deriving from the number of
permutations performed) will be correlated with the significance value
itself; but for most purposes this is precisely what one wants, as it is
of little interest whether a clearly unassociated SNP really has a
p-value of 0.78 or 0.87.
In contrast, max(T) permutation does not drop SNPs along the way. If 1000 permutations are
specified, then all 1000 will be performed, for all SNPs. The benefit of doing this is that two sets
of empirical significance values can then be calculated: pointwise estimates of an individual SNP's
significance, and also a value that controls for the fact that thousands of other SNPs were tested. This
is achieved by comparing each observed test statistic against the maximum of all permuted statistics (i.e.
over all SNPs) for each single replicate. In other words, the p-value now controls the familywise error rate,
as it reflects the chance of seeing a test statistic this large, given the number of tests performed.
Because the permutation scheme preserves the correlational structure between SNPs, this
provides a less stringent correction for multiple testing than the Bonferroni correction, which assumes
all tests are independent. Because it is now the corrected p-value that is of interest, it is typically
sufficient to perform a much smaller number of permutations; i.e. it is probably not necessary to demonstrate that
something is genome-wide significant beyond 0.05 or 0.01.
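The max(T) computation follows directly from this description. Given a matrix of permuted statistics (one row per replicate, one column per SNP), the pointwise and familywise empirical p-values can be sketched as below; function and variable names are illustrative:

```python
def maxT_pvalues(observed, perm_matrix):
    """observed: list of test statistics, one per SNP.
    perm_matrix: list of replicates, each a list of statistics per SNP.
    Returns (pointwise EMP1, familywise EMP2) lists."""
    n_perm = len(perm_matrix)
    maxima = [max(row) for row in perm_matrix]  # best statistic per replicate
    emp1, emp2 = [], []
    for j, obs in enumerate(observed):
        r_point = sum(row[j] >= obs for row in perm_matrix)
        r_fw = sum(m >= obs for m in maxima)
        emp1.append((r_point + 1) / (n_perm + 1))  # pointwise
        emp2.append((r_fw + 1) / (n_perm + 1))     # corrected, max(T)
    return emp1, emp2
```

Because every SNP is compared against the same per-replicate maxima, correlated SNPs share their "best statistic" replicates, which is why this correction is less stringent than Bonferroni.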
Computational issues
PLINK performs the basic tests of association reasonably quickly  for small datasets both permutation
procedures will be feasible. For example, for a dataset comprising 100,000 SNPs measured on 350 individuals,
each permutation (for all 100K SNPs) takes approximately 2 seconds on a modern Linux workstation. At this speed, it
will take just over 1 day to perform 50,000 permutations using the max(T) mode and labelswapping. With the same dataset,
using adaptive mode, the entire analysis is finished much quicker (although the empirical pvalues are, of course, not
corrected for multiple testing). For larger datasets (e.g. 1000s of individuals measured on >500K SNPs) things will
slow down, although this will be linear with the number of genotypes  if one has access to a cluster, however, the
max(T) approach lends itself to easy parrallelization (i.e. if one can set many jobs running analysing the same data, it
is easy to combine the empirical pvalues afterwards).
By default, PLINK will select a random seed for the
permutations, based on the system clock. To specify a fixed seed
instead add the command
     --seed 6377474
where the parameter is a (large) integer seed.
Basic (adaptive) permutation procedure
The default method for permutation is the adaptive method. To obtain a max(T) permutation p-value,
see the section below. For either case/control or quantitative trait
association analysis, use the option:
plink --file mydata --assoc --perm
to initiate adaptive permutation testing. As well as the plink.assoc or plink.qassoc output file,
adding the --perm option will generate a file named:
     plink.assoc.perm
which contains the fields:
     CHR     Chromosome
     SNP     SNP ID
     STAT    Test statistic
     EMP1    Empirical p-value (adaptive)
     NP      Number of permutations performed for this SNP
An alternative scheme is also available that may be useful under some circumstances.
Specifically, this approach fixes the observed marginal counts of the 2-by-3 table,
that is, case/control status by the two alleles plus the missing-allele count.
After permuting case/control labels, only two cells in the table (e.g. missing and
A2 allele counts for controls) are counted; the rest of the table is filled in on
the basis of the fixed marginal values. This speeds up the permutation procedure
a little, and also implicitly down-weights association results where there is
a lot of missing genotype data that is non-random with respect to genotype
and case/control status. Naturally, this approach cannot provide total
protection against the problem of non-random missing genotype data. Also,
for SNPs with a lot of missing data, this test will be conservative, whether
the missingness is non-random or not. For these reasons, this is not the
default option, although the approach might be worth exploring further. To
use this alternative permutation scheme, add the --p2 flag:
plink --file mydata --assoc --perm --p2
or
plink --file mydata --assoc --mperm 1000 --p2
Adaptive permutation parameters
Although the --perm option invokes adaptive permutation by
default, there are various parameters that alter the behavior of the
adaptive process, which can be tweaked using the --aperm option
followed by six parameters: for example,
plink --file mydata --assoc --aperm 10 1000000 0.0001 0.01 5 0.001
The six arguments (along with their default values) are:
     Minimum number of permutations per SNP             5
     Maximum number of permutations per SNP             1000000
     Alpha level threshold (alpha)                      0
     Confidence interval on empirical p-value (beta)    0.0001
     Interval to prune test list (intercept)            1
     Interval to prune test list (slope)                0.001
These are interpreted as follows: for every SNP, at least 10 permutations
(the first parameter in the example above) will be performed, but no more
than 1,000,000. After these, the p-values will be evaluated to see which
SNPs can be pruned. The first interval value (5) means that this pruning
is performed every 5 replicates; the second pruning parameter (0.001)
means that the rate of pruning slows down with an increasing number of
replicates (i.e. pruning is, in this case, performed every 5 + 0.001R
replicates, where R is the current number of replicates). At each pruning
stage, a 100*(1 - beta/2T)% confidence interval is calculated for each
empirical p-value, where beta is, in this case, 0.01, and T is the number
of SNPs. Using the normal approximation to the binomial, we prune any SNP
for which the lower confidence bound is greater than alpha or the upper
confidence bound is less than alpha.
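The pruning rule can be sketched as follows. The exact form of PLINK's internal confidence interval is not spelled out above, so this is an assumed reading of the description (a two-sided normal-approximation interval at the stated 1 - beta/2T level), with illustrative names throughout:

```python
from statistics import NormalDist

def should_prune(r, n, alpha, beta, n_tests):
    """r: permuted statistics >= observed so far; n: permutations so far.
    Prune when the CI around the running empirical p-value excludes alpha."""
    p_hat = r / n
    conf = 1 - beta / (2 * n_tests)            # stated confidence level
    z = NormalDist().inv_cdf(0.5 + conf / 2)   # two-sided normal quantile
    half = z * (p_hat * (1 - p_hat) / n) ** 0.5
    return (p_hat - half) > alpha or (p_hat + half) < alpha
```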
max(T) permutation
To perform the max(T) permutation procedure, use the --mperm
option, which takes a single parameter, the
number of permutations to be performed: e.g. to use with the TDT test,
plink --file mydata --tdt --mperm 5000
which will generate (along with the plink.tdt file) a file named
     plink.tdt.mperm
which contains the fields:
     CHR     Chromosome
     SNP     SNP ID
     STAT    Test statistic
     EMP1    Empirical p-value (pointwise)
     EMP2    Corrected empirical p-value (max(T) / familywise)
Hint: If multiple runs of PLINK are performed on the same
dataset in parallel, using a computer cluster to speed up the max(T) permutations,
then the resulting estimates of empirical significance can be combined across runs
as follows. Empirical p-values are calculated as (R+1)/(N+1), where R is
the number of times the permuted test statistic is greater than the observed
statistic and N is the number of permutations. Therefore, given p_i, the
empirical p-value for the ith run, p_i * (N_i + 1) - 1 replicates exceeded
the observed value. The overall empirical p-value should then be:
     ( SUM_i { p_i * ( N_i + 1 ) - 1 } + 1 ) / ( SUM_i { N_i } + 1 )
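The combination formula above can be applied directly; a small helper (hypothetical name) under the stated (R+1)/(N+1) convention:

```python
def combine_empirical(runs):
    """runs: list of (p_i, N_i) pairs from parallel max(T) runs,
    where each p_i was computed as (R_i + 1) / (N_i + 1)."""
    r_total = sum(p * (n + 1) - 1 for p, n in runs)  # recovers SUM_i R_i
    n_total = sum(n for _, n in runs)
    return (r_total + 1) / (n_total + 1)
```

For example, two runs with (R=4, N=99) and (R=9, N=199) both report p_i = 0.05, and combine to (13 + 1)/(298 + 1) = 14/299.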
To produce output files that contain either the best statistic per replicate, or all statistics per replicate, use either the option
     --mperm-save
or
     --mperm-save-all
along with the usual --mperm command. The first command generates a file
     plink.mperm.dump.best
which contains two columns. The first is the replicate number (0 represents the original data; the remaining rows are 1 to R, where R is the
number of permutations specified). The second column is the maximum test statistic over all SNPs for that replicate.
The second command, --mperm-save-all, produces a file
     plink.mperm.dump.all
which can be very large: it holds the test statistic for all SNPs for all replicates. As before, the first row is the original data;
the first column represents the replicate number; all other columns represent the test statistic values for each SNP (NA
if a statistic cannot be calculated). These two files might be of use if, for example, you wish to create your own wrapper around PLINK to perform higher-order
corrections for multiple testing, e.g. if more than one phenotype is tested per SNP. In most cases, for this purpose,
the first form should suffice.
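For instance, a wrapper recomputing a familywise p-value for some observed statistic from a plink.mperm.dump.best-style table might look like this; the parsing assumes exactly the two-column layout described above, and the function name is illustrative:

```python
def corrected_p_from_best(lines, observed):
    """lines: text rows of 'replicate max_statistic'; row 0 holds the
    original data and is excluded from the permutation count."""
    best = []
    for line in lines:
        rep, stat = line.split()
        if int(rep) > 0:              # skip the original-data row
            best.append(float(stat))
    r = sum(s >= observed for s in best)
    return (r + 1) / (len(best) + 1)
```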
Genedropping permutation
To perform gene-dropping permutation, use the --genedrop option,
combined with the standard --assoc option. Either adaptive permutation, e.g.
plink --file mydata --assoc --genedrop
or max(T) permutation, e.g.
plink --file mydata --assoc --genedrop --mperm 10000
can be specified.
This analysis option is equally applicable to disease and quantitative traits,
although at least some non-founder individuals should be unaffected.
Currently, an individual must have both parents genotyped for gene-dropping; for
founders and for individuals without two genotyped parents, genotypes remain
unchanged throughout all gene-dropping permutations.
It is possible, however, to combine label-swapping with gene-dropping to
handle different family/sample configurations. That is, the basic
gene-dropping procedure leaves untouched all individuals without two
parents, making them uninformative for the test of association. One can
think of at least three classes of people without two parents
in the dataset: founders/parents, siblings and unrelated singletons.
Label-swapping within these groups can provide additional sources of
information for association that control different levels of the
between/within family components of association.
The three options, which can be used together, are:
     --swap-sibs        within family
     --swap-parents     partial within-family
     --swap-unrel       between family
These label-swap between siblings without genotyped parents (swapping only
within families), between parents only (swapping only within families),
or between all singletons (unrelated individuals; swapping between
families).
Basic within-family QTDT
This test only considers information from individuals with two genotyped
parents:
plink --file mydata --assoc --genedrop
Discordant sibling test
Although gene-dropping only considers individuals with two parents to be
informative, valid family-based tests can include information from
full siblings: by label-swapping only within each full sibship that
does not otherwise have parents, it is possible to augment
the power of the gene-dropping approach:
plink --file mydata --assoc --genedrop --swap-sibs
parenTDT/parenQTDT
This test additionally incorporates information from phenotypically
discordant parents (for either quantitative or disease traits). It
provides more information for association, but a weaker level
of protection against stratification (i.e. it assumes that mother and
father pairs are well matched in terms of subpopulation stratum):
plink --file mydata --assoc --genedrop --swap-parents
Standard association for singleton, unrelated individuals
If a sample is a mixture of families and unrelated individuals (e.g.
case/control and offspring/parent trios combined), then adding this
option as well as the --genedrop option will perform
label-swapping permutation for all unrelated individuals:
plink --file mydata --assoc --genedrop --swap-unrel
One or more of these options can be included with the --genedrop
option. These features allow between- and within-family components of
association to be included in the analysis. Below are the results of some
simple, proof-of-principle simulations to illustrate the parental
discordance test.
Here is a subset of the results: in all cases, we have an unselected
quantitative trait measured in parent/offspring nuclear families.
The four models:
 A. no stratification, no QTL
 B. no stratification, QTL
 C. stratification between families (i.e. mother and father from the same subpopulation), no QTL
 D. stratification within families (i.e. mother and father might not be from the same
subpopulation), no QTL
The three analytic procedures:
 I. standard QTL test (i.e. ignoring family structure, which we know is incorrect)
 II. gene-dropping permutation (i.e. within-family QTDT)
 III. gene-dropping + parental label-swapping (i.e. parenQTDT)
From simulation, the empirically estimated power/type I error rates (for
a nominal value of 0.05) are:
500 trios (QT)
          I       II      III
   A      0.121   0.045   0.053
   B      0.841   0.239   0.563
   C      0.461   0.056   0.056
   D      0.505   0.055   0.501

500 tetrads (QT)
          I       II      III
   A      0.173   0.043   0.050
   B      0.900   0.363   0.653
   C      0.439   0.042   0.045
   D      0.390   0.044   0.421
That is:
* Method I is, as expected, liberal (e.g. for tetrads, we see a type I
error rate of 17.3% instead of 5%); subsequent values for this test in
the table should therefore be ignored.
* The parenQTDT (III, as implemented by gene-dropping) is considerably
more powerful than the standard within-family test that ignores parental
phenotypes (II): 65% versus 36% power for tetrads, in this particular
instance.
* The parenQTDT is robust to stratification so long as it is
between-family (condition C); i.e. it only assumes that mother and father
are matched on strata, not the whole sample. When this does not hold
(condition D), we get spurious association, as expected.
Hint: For disease traits, the parenTDT test is
automatically performed by the --tdt option (as long as there
are at least 10 phenotypically discordant parental pairs in the sample).
See the section on standard association testing
for more details.
Within-cluster permutation
To perform label-swapping permutation only within clusters, you
must either supply a cluster file with the --within option,
or indicate that family ID is to be used as the cluster variable with the
--family option. Any label-swapping permutation procedure
will then only swap phenotype labels between individuals within the same
cluster. For example,
plink --file mydata --assoc --within mydata.clst --perm
if the file mydata.clst were (for a PED file containing only six
individuals; the file format is family ID, individual ID, cluster):
F1 1 1
F2 1 1
F3 1 1
F4 1 2
F5 1 2
F6 1 3
this would imply that only the sets {1,2,3} and {4,5} can be permuted. That is, individuals 1 and 3 could
swap phenotypes, but not 1 and 5, for example. In this way, any between-cluster effects
are preserved in each permuted dataset, which thereby controls for them.
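A sketch of this within-cluster swapping (illustrative names; PLINK performs this internally):

```python
import random

def permute_within_clusters(pheno, clusters, rng):
    """Shuffle phenotype labels only among individuals sharing a cluster,
    so any between-cluster effects survive in every permuted dataset."""
    permuted = list(pheno)
    for c in set(clusters):
        idx = [i for i, cl in enumerate(clusters) if cl == c]
        vals = [pheno[i] for i in idx]
        rng.shuffle(vals)                 # swap only within this cluster
        for i, v in zip(idx, vals):
            permuted[i] = v
    return permuted
```

With the six-individual example above, phenotypes can move among individuals 1-3 and between 4 and 5, while individual 6 (a singleton cluster) always keeps its own phenotype.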
To use family ID as the cluster variable for label-swapping, use:
plink --file mydata --assoc --family --perm
Note that label-swapping within families is different from gene-dropping.
This approach would be appropriate for sibship data, for example, when no
parents are available. The assumption is that individuals within a family
unit are interchangeable under the null; as such, you should not
include mixtures of full siblings and half siblings, or siblings
and parents, for example, in the same cluster using this approach.
Note: Other options for stratified analyses are
described on the previous page.
Generating permuted phenotype filesets
To generate a phenotype file containing N permuted phenotypes, use the command
plink --bfile mydata --make-perm-pheno 10
which will make a file
     plink.pphe
with 10 permuted phenotypes (listed after the FID and IID, which form the
first two columns). This can then be used in any further analysis with PLINK or
any other program. This command can be combined
with --within, to generate permuted phenotype files in which
individuals' phenotypes are only swapped within each level of the stratifying
cluster variable, e.g.
plink --bfile mydata --make-perm-pheno 10 --within strata1.dat

