Can we believe in the imputations?

A popular approach to dealing with missing values is to impute the data in order to obtain a complete dataset on which any statistical method can be applied. Many imputation methods are available, and they return a completed dataset in all cases, regardless of the number of individuals and/or variables, the percentage of missing values, the pattern of missing values, the relationships between variables, etc.

However, can we believe in these imputations and in the analyses performed on these imputed datasets?

Multiple imputation generates several imputed datasets, and the between-imputation variance reflects the uncertainty of the predictions of the missing entries (made with an imputation model). In the missMDA package we propose a way to visualize the uncertainty associated with these predictions. The rough idea is to project all the imputed datasets onto the PCA graphical representations obtained from the “mean” imputed dataset.

For instance, for the incomplete orange data, the two following graphs read as follows: observation 6 has no uncertainty (there is no missing value for this observation), whereas there is more variability in the position of observation 10. For the variables, the clouds of points represent the uncertainty of the predictions. The ellipses as well as the clouds are quite small, which encourages carrying on the analysis of the imputed dataset.

[Figures: MIPCA plots of the individuals and of the variables for the orange data]

The graphs above were obtained after performing multiple imputation with PCA; they can simply be produced with the function plot.MIPCA as follows:

library(missMDA)
data(orange)
nbdim <- estim_ncpPCA(orange) # estimate the number of dimensions to impute 
res.comp <- MIPCA(orange, ncp = nbdim$ncp, nboot = 1000)
plot(res.comp) # project the imputed datasets onto the PCA representations

Now we have hints to answer the recurring questions: “I have a dataset with xx% of missing values, can I impute it with your method?”, “Is 30% of missing values too much?”, or “What is the maximum percentage of missing values?” The percentage of missing values does affect the quality of the imputation, but it is not the only factor: the structure of the data (i.e. the relationships between variables) matters just as much. It is quite possible to get small ellipses with a high percentage of missing values, and the other way around. That is why these graphs are useful. The following ones suggest that we must be very careful with any subsequent analysis of this imputed dataset, and may even suggest stopping the analysis altogether. When there is nothing good to do, it is better to do nothing!

[Figures: plots of the individuals and of the variables for a second dataset, with large ellipses and clouds]
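
If you want to get a feel for how the ellipses react to the missing-data rate on your own data, one simple experiment is to delete cells at random from a (nearly) complete dataset and redraw the plots. A minimal sketch in base R, where the 30% rate, the use of the imputed orange data as a complete reference, and ncp = 2 are all arbitrary illustrative choices:

library(missMDA)

## randomly delete a chosen proportion of cells (illustrative helper)
add_missing <- function(X, prop = 0.3) {
  Xna <- as.matrix(X)
  cells <- sample(length(Xna), size = round(prop * length(Xna)))
  Xna[cells] <- NA
  as.data.frame(Xna)
}

data(orange)
ref <- imputePCA(orange, ncp = 2)$completeObs  # a complete reference version
X30 <- add_missing(ref, prop = 0.30)           # 30% of the cells deleted at random

nbdim <- estim_ncpPCA(X30)
res <- MIPCA(X30, ncp = nbdim$ncp, nboot = 100)
plot(res) # small ellipses: the structure compensates; large ellipses: be careful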

This methodology is also available for categorical data with the functions MIMCA and plot.MIMCA to visualize the uncertainty around the prediction of categories.
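
As a rough sketch on the vnf survey data shipped with missMDA (the number of dimensions and the number of imputed datasets below are illustrative choices):

library(missMDA)
data(vnf)
nb <- estim_ncpMCA(vnf, ncp.max = 5)          # time-consuming
res.mimca <- MIMCA(vnf, ncp = nb$ncp, nboot = 100)
plot(res.mimca)                               # uncertainty around the individuals and the categories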

You can contact us for more information:
julie.josse@polytechnique.edu       @JulieJosseStat
husson@agrocampus-ouest.fr

Multiple imputation for continuous and categorical data

“The idea of imputation is both seductive and dangerous” (R.J.A. Little & D.B. Rubin).

Indeed, a predicted value is then treated as an observed one and the uncertainty of the prediction is ignored, which leads to invalid inferences in the presence of missing values. That is why multiple imputation is recommended.

The missMDA package quickly generates several imputed datasets for quantitative and/or categorical variables. It is based on dimensionality-reduction methods such as PCA for continuous variables and multiple correspondence analysis (MCA) for categorical variables. Compared to the packages Amelia and mice, it better handles cases where the number of variables is larger than the number of units, and cases where regularization is needed (i.e. when the imputation model is prone to overfitting). For categorical variables, it is particularly useful with many variables and many levels, and also with rare levels.

With 3 lines of code, we generate 1000 imputed datasets for the quantitative orange data available in missMDA:

library(missMDA)
data(orange)
nbdim <- estim_ncpPCA(orange) # estimate the number of dimensions to impute
res.comp <- MIPCA(orange, ncp = nbdim$ncp, nboot = 1000)
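
Each imputed dataset can then be analysed separately and the results combined with Rubin's rules. A minimal sketch, assuming the list of imputed data frames is stored in res.comp$res.MI (as in the MIPCA output) and regressing, purely for illustration, the first variable of orange on the second:

imps <- res.comp$res.MI   # list of imputed datasets returned by MIPCA
fits <- lapply(imps, function(d) { d <- as.data.frame(d); lm(d[[1]] ~ d[[2]]) })

est <- sapply(fits, function(m) coef(m)[2])     # slope estimated in each dataset
se2 <- sapply(fits, function(m) vcov(m)[2, 2])  # its sampling variance

m <- length(fits)
qbar <- mean(est)                 # pooled estimate
ubar <- mean(se2)                 # within-imputation variance
b <- var(est)                     # between-imputation variance
total <- ubar + (1 + 1/m) * b     # total variance (Rubin's rules)
c(estimate = qbar, se = sqrt(total))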

In the same way, MIMCA can be used for categorical data:

library(missMDA)
data(vnf)
nb <- estim_ncpMCA(vnf, ncp.max = 5) ## time-consuming; nb$ncp = 4
res <- MIMCA(vnf, ncp = 4, nboot = 10)

You can find more information in this JSS paper, on this website, and in this tutorial given at useR! 2016 at Stanford.

You can also watch this playlist on YouTube to practice with R.

You can also contact us:
julie.josse@polytechnique.edu       @JulieJosseStat
husson@agrocampus-ouest.fr

Missing values imputation with missMDA

“The best thing to do with missing values is not to have any” (Gertrude Mary Cox).

Unfortunately, missing values are ubiquitous and occur for plenty of reasons. One solution is single imputation, which consists in replacing each missing entry with a plausible value. It leads to a complete dataset that can be analyzed with any statistical method.
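
The crudest form of single imputation replaces each missing entry with the column mean; a short base R sketch, shown only as a baseline since it ignores the relationships between variables:

## mean imputation: "plausible" only in a very weak sense
impute_mean <- function(X) {
  as.data.frame(lapply(X, function(v) { v[is.na(v)] <- mean(v, na.rm = TRUE); v }))
}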

Based on dimensionality-reduction methods, the missMDA package successfully imputes large and complex datasets with quantitative variables, categorical variables, and mixed variables. It imputes data with principal component methods that take into account the similarities between observations and the relationships between variables. It has proven to be very competitive in terms of prediction quality compared to state-of-the-art methods.
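
To give an idea of how principal component imputation works, here is a schematic, unregularized sketch of iterative PCA imputation in base R; imputePCA in missMDA implements a regularized version of this idea, so treat this purely as an illustration:

## schematic iterative PCA imputation (no regularization)
iterative_pca_impute <- function(X, ncp = 2, maxiter = 1000, tol = 1e-6) {
  X <- as.matrix(X)
  miss <- is.na(X)
  ## step 0: start from the column means
  Xhat <- X
  for (j in seq_len(ncol(X))) {
    Xhat[miss[, j], j] <- mean(X[, j], na.rm = TRUE)
  }
  for (iter in seq_len(maxiter)) {
    old <- Xhat
    ## fit a rank-ncp PCA (via SVD) to the current completed matrix
    mu <- colMeans(Xhat)
    Xc <- sweep(Xhat, 2, mu)
    s <- svd(Xc, nu = ncp, nv = ncp)
    fitted <- s$u %*% diag(s$d[seq_len(ncp)], ncp) %*% t(s$v)
    fitted <- sweep(fitted, 2, mu, "+")
    ## update only the missing cells with their fitted values
    Xhat[miss] <- fitted[miss]
    if (sum((Xhat - old)^2) < tol) break
  }
  Xhat
}

data(orange)
imp <- iterative_pca_impute(orange, ncp = 2)  # completed matrix (illustration only)

Starting from the column means, each iteration fits a rank-ncp PCA to the current completed matrix and updates only the missing cells with the fitted values, until the imputed values stabilize.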

With 3 lines of code, we impute the dataset orange available in missMDA:

library(missMDA)
data(orange)
nbdim <- estim_ncpPCA(orange) # estimate the number of dimensions to impute
res.comp <- imputePCA(orange, ncp = nbdim$ncp) # the completed dataset is in res.comp$completeObs

In the same way, imputeMCA imputes datasets with categorical variables and imputeFAMD imputes mixed datasets.
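
For instance, a minimal sketch on the categorical vnf data shipped with missMDA (the completed table is returned, as for imputePCA, in the completeObs component):

library(missMDA)
data(vnf)
nb <- estim_ncpMCA(vnf, ncp.max = 5)   # time-consuming
res.mca <- imputeMCA(vnf, ncp = nb$ncp)
head(res.mca$completeObs)              # completed categorical dataset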

With a completed dataset, we can pursue our analyses… however, we need to be careful and not forget that the data were incomplete! In a future post, we will see how to visualize the uncertainty of these predicted values.

You can find more information in this JSS paper, on this website, and in this tutorial given at useR! 2016 at Stanford.

You can also watch this playlist on YouTube to practice with R.