Introduction

The traj package implements a clustering algorithm for functional data (henceforth referred to as trajectories) that extends the work of Leffondré et al. [1]. The algorithm consists of three steps. The first step summarizes the main features of the trajectories by computing the 19 measures listed below and detailed in Appendix A.

  1. Maximum
  2. Range
  3. Mean value
  4. Standard deviation
  5. Intercept of the linear model
  6. Slope of the linear model
  7. \(R^2\): Proportion of variance explained by the linear model
  8. Curve length (total variation)
  9. Rate of intersection with the mean
  10. Proportion of time spent above the mean
  11. Minimum of the first derivative
  12. Maximum of the first derivative
  13. Mean of the first derivative
  14. Standard deviation of the first derivative
  15. Minimum of the second derivative
  16. Maximum of the second derivative
  17. Mean of the second derivative
  18. Standard deviation of the second derivative
  19. Later change/Early change

The second step performs a dimensionality reduction on the 19 measures to extract the main features of the trajectories. Specifically,

  1. Measures that are constant across trajectories are discarded because they do not provide any discriminating information.
  2. A principal component analysis (PCA) is conducted on the measures. The main principal components (defined as those principal components contributing more to the total variance than any of the individual standardized measures) are selected.
  3. A varimax rotation is applied to the main principal components.
  4. The measure which is most strongly correlated with each rotated component is selected, starting from the component that explains the most variance.

In the third step, a clustering algorithm is used to form clusters of trajectories based on the measures selected in step 2.
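
For intuition, the selection logic of the second step can be sketched in base R. This is only an illustrative sketch, not the package's actual implementation; here X stands for the matrix of remaining (non-constant) measures:

X <- scale(X)                          # standardize the measures
pca <- prcomp(X)
main <- which(pca$sdev^2 > 1)          # components with variance above any single standardized measure
rot <- varimax(pca$rotation[, main])   # varimax rotation of the main components
scores <- X %*% rot$loadings           # scores on the rotated components
# for each rotated component, keep the measure most strongly correlated with it
selected <- apply(abs(cor(X, scores)), 2, which.max)
colnames(X)[selected]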

An example

Let us illustrate the use of the traj package on an artificially created data set (trajdata) consisting of 130 trajectories that follow four distinct patterns (A, B, C, D).

library(traj)
data(trajdata) 
head(trajdata)
dat <- trajdata[, -c(1,2)]

Each trajectory is made up of six observations and there are no missing values. The function Step1Measures computes the measures. By default, measure 19 (Later change/Early change) is not included in the analysis because, depending on the situation (uneven observation times, attrition, missing values, etc.), there might not be a natural midpoint for every trajectory. In the present data set, we include measure 19. If the midpoint argument is left at its default of NULL, the third observation is taken as the midpoint.
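
To make this default explicit, the call below sets the midpoint to the third observation. This is a hypothetical illustration assuming midpoint accepts the index of the observation to use; see ?Step1Measures for the exact semantics:

Step1Measures(Data = dat, measures = 1:19, midpoint = 3)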

step1 <- Step1Measures(Data = dat, measures = 1:19) 

summary(step1)
## Description of the measures:
## m1: Maximum
## m2: Range
## m3: Mean value
## m4: Standard deviation
## m5: Intercept of the linear model
## m6: Slope of the linear model
## m7: Proportion of variance explained by the linear model (R squared)
## m8: Curve length (total variation)
## m9: Number of times crossing the mean per unit time
## m10: Proportion of time spent under the mean
## m11: Minimum of the first derivative
## m12: Maximum of the first derivative
## m13: Mean of the first derivative
## m14: Standard deviation of the first derivative
## m15: Minimum of the second derivative
## m16: Maximum of the second derivative
## m17: Mean of the second derivative
## m18: Standard deviation of the second derivative
## m19: Later change/Early change
## 
## Summary of measures:
##                m1         m2        m3        m4         m5          m6
## Min.     11.16944   4.974832  6.734208  1.655156  -2.132012 -24.7364594
## 1st Qu.  17.44338  13.596531 10.683520  4.608162  10.718085 -18.0088233
## Median   86.89627  80.567344 49.521374 24.392691 100.578979 -14.3756056
## Mean     63.71865  61.281725 37.032177 18.761021  67.365833  -8.8673769
## 3rd Qu.  95.64244  94.429695 55.289514 29.552200 116.965230   0.1682977
## Max.    113.38473 128.477551 61.923002 37.161789 140.967999  16.2687616
##                   m7        m8        m9       m10        m11        m12
## Min.    6.548737e-05  11.01804 0.2000000 0.2000000 -63.352715 -14.935763
## 1st Qu. 2.119554e-01  33.25262 0.2000000 0.5000000 -33.047714  -5.681113
## Median  8.825381e-01  81.92250 0.2000000 0.5000000 -21.252427   2.160407
## Mean    6.408343e-01  70.27963 0.3507692 0.5261538 -20.647836   2.202651
## 3rd Qu. 9.319712e-01 102.00426 0.6000000 0.6000000  -5.955160   6.702146
## Max.    9.903453e-01 137.85654 1.0000000 0.9000000   9.971951  44.466830
##                  m13       m14        m15       m16         m17        m18
## Min.    -25.69551019  1.243051 -27.917171 -7.326749 -12.5077887  0.6418593
## 1st Qu. -18.46646486  4.101053 -12.992123  1.919339  -4.0529737  3.1900188
## Median  -14.91259635  6.502801  -9.027665  4.740119  -1.5795261  4.8789085
## Mean     -8.98945884  7.207440  -9.280153  6.277813  -1.6084247  5.2729286
## 3rd Qu.   0.04639682  9.685304  -4.659910 10.095925   0.9862174  6.8043731
## Max.     17.57402397 18.868623   2.347271 27.813294  11.9407699 15.3664412
##                   m19
## Min.    -177.05043991
## 1st Qu.   -0.01697665
## Median     1.12915597
## Mean       1.11919749
## 3rd Qu.    2.81539207
## Max.      95.15142933

Once the measures are computed, we use the Step2Selection function to extract the measures that best characterize the trajectories.

step2 <- Step2Selection(trajMeasures = step1) 
summary(step2)
## The measures m4, m13 were discarded because they were perfectly or almost perfectly correlated with another measure. Upon forming the principal components from the remaining measures, 4 of them had a variance greater than any one of the normalized measures. Together, they explained 83.5% of the total variance. A varimax rotation was performed to maximize the correlation with the original measures without affecting the proportion of explained variance. For each rotated factor, the measure that had the highest correlation (loading) with it was selected. As a result of this procedure, the selected measures are, in decreasing order of variance explained, m7, m15, m12, m16. Use print() to see more detailed information.

Two measures are defined as “perfectly or almost perfectly correlated” if the absolute value of their Pearson correlation coefficient is greater than 0.98. This check can be reproduced by hand; the sketch below assumes the computed measures are stored in step1$measures (a hypothetical slot name; inspect str(step1) if in doubt):
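
m <- step1$measures                 # assumption: the computed measures live in this slot
m <- m[, grep("^m", names(m))]      # keep only the measure columns (drop any ID column)
cor.mat <- cor(m)
diag(cor.mat) <- 0                  # ignore self-correlations
which(abs(cor.mat) > 0.98, arr.ind = TRUE)

The print function provides more detailed information: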

print(step2)
## m4 has been discarded due to being perfectly or almost perfectly correlated with m2.
## m13 has been discarded due to being perfectly or almost perfectly correlated with m6.
## 
## In decreasing order of variance explained, the selected measures are m7, m15, m12, m16.
## 
## Loadings:
##     RC1    RC4    RC3    RC2   
## m1   0.836  0.371 -0.332       
## m2   0.775  0.472 -0.379       
## m3   0.871  0.355 -0.232       
## m5   0.596  0.371 -0.698       
## m6  -0.354 -0.335  0.842       
## m7   0.891  0.198 -0.282       
## m8   0.685  0.586 -0.366       
## m9  -0.853         0.173       
## m10  0.425         0.208  0.168
## m11 -0.330 -0.553  0.713       
## m12                0.949       
## m14  0.442  0.797 -0.111 -0.106
## m15 -0.253 -0.835         0.365
## m16         0.228 -0.107  0.947
## m17 -0.164 -0.415         0.869
## m18  0.239  0.824         0.440
## m19         0.181 -0.139       
## 
##                  RC1   RC4   RC3   RC2
## SS loadings    5.180 3.712 3.257 2.053
## Proportion Var 0.305 0.218 0.192 0.121
## Cumulative Var 0.305 0.523 0.715 0.835

Measure 4 (Standard deviation) was dropped because it is perfectly or almost perfectly correlated with measure 2 (Range), and measure 13 (Mean of the first derivative) was likewise dropped because of its correlation with measure 6 (Slope of the linear model). The Step3Clusters function uses the k-medoids algorithm (function cluster::pam) on the measures selected in step 2 to cluster the trajectories.

library(cluster)
set.seed(1337)
step3 <- Step3Clusters(trajSelection = step2, nclusters = 4) 
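
Under the hood, this amounts to a call to pam from the already-loaded cluster package. The following minimal sketch shows such a call, where sel stands for a hypothetical matrix whose columns are the selected measures (m7, m15, m12, m16); how the package actually prepares and preprocesses this matrix internally may differ:

# sel: hypothetical matrix of the selected measures, one row per trajectory
pam.fit <- pam(x = sel, k = 4)    # k-medoids: partition rows into 4 clusters around medoids
head(pam.fit$clustering)          # cluster assignment of each trajectory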

If the nclusters argument is set to NULL (the default), the number \(k\) of clusters is determined using the Calinski-Harabasz criterion. To visually inspect the classification, we run plot(step3, ask = TRUE); specific plots can also be requested with the which.plots argument.
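
plot(step3, ask = TRUE)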

## [1] "See also 'critplot' for a plot of the statistic used to determined the number of clusters and see 'scatterplots' for scatter plots of the measures involved in the clustering."

## [1] "See also 'critplot' for a plot of the statistic used to determined the number of clusters and see 'scatterplots' for scatter plots of the measures involved in the clustering."

## [1] "See also 'critplot' for a plot of the statistic used to determined the number of clusters and see 'scatterplots' for scatter plots of the measures involved in the clustering."

## [1] "See also 'critplot' for a plot of the statistic used to determined the number of clusters and see 'scatterplots' for scatter plots of the measures involved in the clustering."

## [1] "See also 'critplot' for a plot of the statistic used to determined the number of clusters and see 'scatterplots' for scatter plots of the measures involved in the clustering."

The “Sample trajectories” plot tends to get cluttered when there are too many clusters. In any case, it is always a good idea to plot the clusters in their entirety:

color.pal <- palette.colors(palette = "Polychrome 36", alpha = 1)[-2]
par(mfrow = c(1, 1))
for (k in 1:4) {
  w <- which(step3$partition$Cluster == k)
  dat.w <- dat[w, ]
  # empty plot spanning the full range of the data
  plot(y = 0, x = 0, ylim = c(floor(min(dat)), ceiling(max(dat))), xlim = c(1, 6),
       xlab = "", ylab = "", type = "n",
       main = paste0("Cluster ", k, " (n = ", step3$partition.summary[k], ")"))
  for (i in seq_along(w)) {
    # coerce the data frame row to a numeric vector before plotting
    lines(y = as.numeric(dat.w[i, ]), x = 1:6, col = color.pal[k])
  }
}


Appendix A: The measures

In this section, we expand on how the nineteen measures are computed. Let \(y=y(t)\) denote a continuous function \([a,b]\rightarrow \mathbb{R}\) and let \(y(t_i)\) denote the trajectory obtained by measuring \(y(t)\) at times \(a\leq t_1<\ldots< t_N\leq b\), where \(N\geq 3\). We do not assume that the times \(t_i\) are equidistant from one another.
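
As a concrete illustration, a few of the measures can be computed for a toy trajectory as follows. This is a sketch of the definitions (with finite differences standing in for the first derivative), not the package's code:

times <- c(1, 2, 3, 4, 5, 6)        # observation times (need not be equidistant)
y <- c(5, 7, 6, 9, 12, 11)          # toy trajectory
m1 <- max(y)                        # m1: maximum
m2 <- diff(range(y))                # m2: range
m3 <- mean(y)                       # m3: mean value
fit <- lm(y ~ times)                # linear model
m5 <- coef(fit)[1]                  # m5: intercept of the linear model
m6 <- coef(fit)[2]                  # m6: slope of the linear model
m7 <- summary(fit)$r.squared        # m7: R^2 of the linear model
m8 <- sum(abs(diff(y)))             # m8: curve length (total variation)
dy <- diff(y) / diff(times)         # first derivative via finite differences
m13 <- mean(dy)                     # m13: mean of the first derivative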

Appendix B: The capping procedure

If the cap.outliers argument of the Step1Measures function is set to TRUE, or if a measure contains infinite values (caused by a division by zero), the outliers are capped as follows. In a recent arXiv preprint [3], the author proves that any continuous random variable \(X\) with finite first two moments satisfies \[ P[|X-\mu|>k\sigma]<\sigma M(k)\alpha(k), \]where \(M(k)\) is the least upper bound of the probability density function on \(\{x \ | \ |x-\mu|>k\sigma\}\) and where \(\alpha(k)\) is the unique real root of the cubic polynomial\[t^3 + 2\pi kt^2 + 2\pi e k^2t - \frac{2\pi e}{\sigma M(k)}.\]Suppose that our data set contains \(n\) trajectories; this gives us a sample \(X_1,\ldots,X_n\) from the distribution of \(X\). Our capping procedure consists of the following steps (a code sketch follows the list).

  1. After relabeling the observed values of \(X\) so that \(|X_1|\leq |X_2|\leq\ldots \leq |X_n|\), we remove the last \(r=\max(1,\lfloor n/100\rfloor)\) observations.

  2. From the remaining values \(X_1,\ldots, X_{n-r}\), we compute the usual estimates \(\hat{\mu}\) and \(\hat{\sigma}\) of the mean and standard deviation.

  3. Still using only \(X_1,\ldots, X_{n-r}\), we approximate the probability density function of \(X\) with a kernel density estimator (function stats::density), find the value of \(M(k)\) for this PDF and compute \(\alpha(k)\).

  4. Using these, we identify the smallest value of \(k\) for which \(\hat{\sigma}M(k)\alpha(k)<0.003\). Denoting this smallest value by \(k^*\), we replace any \(X_i<\hat{\mu}-k^*\hat{\sigma}\) by \(\hat{\mu}-k^*\hat{\sigma}\) and any \(X_i>\hat{\mu}+k^*\hat{\sigma}\) by \(\hat{\mu}+k^*\hat{\sigma}\).
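
Putting the four steps together, a minimal sketch of the procedure in base R could look like the function below. The grid over \(k\) and the trimming formula are assumptions made for illustration; the package's internals may differ:

cap.outliers <- function(X, p = 0.003) {
  n <- length(X)
  r <- max(1, floor(n / 100))                  # step 1: trim the most extreme ~1% (assumed rounding)
  X.trim <- X[order(abs(X))][1:(n - r)]
  mu <- mean(X.trim)                           # step 2: estimate the mean and
  sigma <- sd(X.trim)                          #         standard deviation as usual
  dens <- stats::density(X.trim)               # step 3: kernel density estimate of the PDF
  for (k in seq(1, 10, by = 0.1)) {            # step 4: grid search for the smallest k (assumed grid)
    tail.y <- dens$y[abs(dens$x - mu) > k * sigma]
    M <- if (length(tail.y) > 0) max(tail.y) else 0
    if (M == 0) break                          # no density mass beyond k*sigma: stop here
    # alpha(k): unique real root of t^3 + 2*pi*k*t^2 + 2*pi*e*k^2*t - 2*pi*e/(sigma*M)
    roots <- polyroot(c(-2 * pi * exp(1) / (sigma * M),
                        2 * pi * exp(1) * k^2, 2 * pi * k, 1))
    alpha <- Re(roots[which.min(abs(Im(roots)))])
    if (sigma * M * alpha < p) break
  }
  pmin(pmax(X, mu - k * sigma), mu + k * sigma)  # cap at mu +/- k*sigma
}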

Bibliography

[1] Leffondré K, Abrahamowicz M, Regeasse A, Hawker GA, Badley EM, McCusker J, Belzile E. Statistical measures were proposed for identifying longitudinal patterns of change in quantitative health indicators. J Clin Epidemiol. 2004;57(10):1049-62. doi:10.1016/j.jclinepi.2004.02.012. PMID: 15528056.

[2] Tibshirani R, Walther G, Hastie T. Estimating the number of clusters in a data set via the gap statistic. J R Stat Soc Series B Stat Methodol. 2001;63(2):411-423.

[3] Nishiyama T. Improved Chebyshev inequality: new probability bounds with known supremum of PDF. arXiv:1808.10770v2.