We give a survey of procedures for testing functions which are based on quadratic deviation measures. The following problems are considered: testing whether a density function lies in a parametric class of functions and whether continuous random variables are independent; testing cell probabilities and independence in sparse data sets; testing the parametric fit of a regression function and homoscedasticity in a regression model; and testing the hazard rate in survival models with censoring, with and without covariates.
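As a rough illustration of the density-testing case, the sketch below computes a quadratic deviation statistic: the integrated squared difference between a kernel density estimate and a fitted normal density. The data, bandwidth, and grid are invented for illustration; the surveyed procedures compare the estimator against the smoothed hypothesis and derive critical values from a limit theorem, both of which are omitted here.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def kde(x_grid, data, h):
    # kernel density estimate evaluated on a grid
    return gaussian_kernel((x_grid[:, None] - data[None, :]) / h).mean(axis=1) / h

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=500)   # toy sample
grid = np.linspace(-8.0, 10.0, 400)
h = 0.5                                           # illustrative bandwidth

f_hat = kde(grid, data, h)

# fitted parametric (normal) density under the hypothesis
mu, sigma = data.mean(), data.std()
f_par = gaussian_kernel((grid - mu) / sigma) / sigma

# quadratic deviation T = integral of (f_hat - f_par)^2, Riemann approximation
dx = grid[1] - grid[0]
T = np.sum((f_hat - f_par) ** 2) * dx
```

Under the hypothesis, T should be small; a real test would standardise T and compare it with an asymptotic critical value.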
Estimability in Cox models
(2016)
Our estimation procedure is the maximum partial likelihood estimate (MPLE), which is the appropriate estimate in the Cox model with a general censoring distribution, covariates, and an unknown baseline hazard rate. We find conditions for estimability and asymptotic estimability. The asymptotic variance matrix of the MPLE is represented and its properties are discussed.
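To make the partial likelihood concrete, here is a minimal sketch of the Cox partial log-likelihood for a single covariate, maximised by a crude grid search. The data are an invented toy example, and the grid search stands in for the Newton-Raphson iteration a real implementation would use.

```python
import numpy as np

# toy right-censored data: observed time, event indicator, one covariate
time  = np.array([2.0, 3.0, 5.0, 7.0, 8.0, 11.0])
event = np.array([1,   1,   0,   1,   1,   0])
z     = np.array([0.5, -1.0, 0.2, 1.5, -0.3, 0.8])

def neg_log_partial_likelihood(beta):
    # Cox partial likelihood: product over event times of
    # exp(beta * z_i) / sum over the risk set of exp(beta * z_j)
    ll = 0.0
    for i in np.where(event == 1)[0]:
        risk = time >= time[i]                      # risk set at time t_i
        ll += beta * z[i] - np.log(np.sum(np.exp(beta * z[risk])))
    return -ll

# crude maximisation over a grid (stand-in for Newton-Raphson)
grid = np.linspace(-3.0, 3.0, 601)
beta_hat = grid[np.argmin([neg_log_partial_likelihood(b) for b in grid])]
```

Note that the baseline hazard cancels out of the ratio, which is what makes the MPLE appropriate when the baseline hazard is unknown.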
We consider a nonparametric survival model with random censoring. To test whether the hazard rate has a parametric form, the unknown hazard rate is estimated by a kernel estimator. Based on a limit theorem stating the asymptotic normality of the quadratic distance of this estimator from the smoothed hypothesis, an asymptotic α-test is proposed. Since the test statistic depends on the maximum likelihood estimator for the unknown parameter in the hypothetical model, properties of this parameter estimator are investigated. Power considerations complete the approach.
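The following sketch, on simulated data, shows the two ingredients the abstract combines: a kernel hazard estimator (Gaussian-smoothed Nelson-Aalen increments) and the censored-data MLE of the parametric rate, joined by a quadratic distance. The sample sizes, bandwidth, and evaluation grid are invented; the smoothed-hypothesis comparison and the asymptotic critical value are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
t_true = rng.exponential(scale=2.0, size=n)   # true hazard is constant 0.5
c = rng.exponential(scale=4.0, size=n)        # random censoring times
t = np.minimum(t_true, c)                     # observed times
d = (t_true <= c).astype(float)               # event indicators

order = np.argsort(t)
t, d = t[order], d[order]
at_risk = n - np.arange(n)                    # number at risk just before each t_(i)

def kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def hazard_kde(x, h):
    # kernel-smoothed Nelson-Aalen increments d_i / Y(t_i)
    w = d / at_risk
    return (kernel((x[:, None] - t[None, :]) / h) * w[None, :]).sum(axis=1) / h

grid = np.linspace(0.5, 3.0, 200)
h = 0.6                                       # illustrative bandwidth
lam_hat = hazard_kde(grid, h)

# MLE of the exponential rate under censoring: events / total time at risk
lam_mle = d.sum() / t.sum()

# quadratic distance between kernel estimate and fitted constant hazard
dx = grid[1] - grid[0]
T = np.sum((lam_hat - lam_mle) ** 2) * dx
```

A real α-test would standardise T using the limit theorem and reject for large values.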
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models over the past years. The concept is relatively straightforward, but usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated event and non-event NRIs, which may provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
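As a minimal sketch of the quantities involved, the code below computes the event and non-event NRI components from reclassification counts and derives the semi-axes of a 95% confidence ellipse from their multinomial variances. All counts are hypothetical (not from the EPIC-Potsdam data), and the diagonal-covariance ellipse is only one plausible construction, since events and non-events are independent samples.

```python
import numpy as np

# hypothetical reclassification counts:
# among 200 events, 60 move up and 25 move down under the new model;
# among 800 non-events, 70 move down and 110 move up
n_e, up_e, down_e = 200, 60, 25
n_ne, down_ne, up_ne = 800, 70, 110

# event and non-event NRI components
nri_e = (up_e - down_e) / n_e
nri_ne = (down_ne - up_ne) / n_ne

# multinomial-based variances of each component
p_up_e, p_down_e = up_e / n_e, down_e / n_e
var_e = (p_up_e + p_down_e - (p_up_e - p_down_e) ** 2) / n_e
p_up_ne, p_down_ne = up_ne / n_ne, down_ne / n_ne
var_ne = (p_down_ne + p_up_ne - (p_down_ne - p_up_ne) ** 2) / n_ne

# events and non-events are independent samples, so the covariance matrix is
# diagonal and the 95% ellipse has semi-axes sqrt(chi2_{2,0.95} * var)
chi2_95_df2 = 5.991          # 95% quantile of chi-square with 2 df
a = np.sqrt(chi2_95_df2 * var_e)   # semi-axis along the event-NRI coordinate
b = np.sqrt(chi2_95_df2 * var_ne)  # semi-axis along the non-event-NRI coordinate
```

Plotting (nri_e, nri_ne) with this ellipse shows the variability of both components jointly instead of collapsing them into a single significance test.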