In integrated medical considerations of the human biological system, intellectual and motor performance are regarded alike as results of nervous system function. Consequently, minimal dysfunctions of the central nervous system may lead to both intellectual and physical anomalies. This study therefore tests the hypothesis that there is a connection between balance ability as a motor parameter and school success as an intellectual parameter. A postural measuring system based on force-moment sensor technique was used to record the postural balance regulation of 773 children (mean age 11 +/- 1 years). The school achievement of each child was determined from school grades. Data analysis was performed by linear as well as nonlinear time series analyses. Several linear and nonlinear parameters reveal highly significant differences in balance regulation between good and poor pupils. Good pupils could be discriminated from poor learners with 80 % accuracy. The results support the hypothesis stated above. One possible explanation for the poor balance regulation in poor learners could be a deficit in neural maturity. Future developments will target higher discrimination levels, possibly in order to predict school success. In addition, the effects of special movement exercises on neural development in childhood will be a focus of our further work.
In the last decade, there has been an increasing interest in compensating thermally induced errors to improve the manufacturing accuracy of modular tool systems. These modular tool systems are interfaces between spindle and workpiece and consist of several parts of complicated shape. Their thermal behavior is dominated by nonlinearities, delay and hysteresis effects even in tools with simpler geometry, which makes it difficult to describe theoretically. Due to the dominantly nonlinear nature of this behavior, the linear regression between temperatures and displacements used so far is insufficient. Therefore, in this study we test the hypothesis that such thermal displacements can be reliably predicted via nonlinear temperature-displacement regression functions. These functions are first estimated from learning measurements using the alternating conditional expectation (ACE) algorithm and then tested on independent data sets. First, we analyze data that were generated by a finite element spindle model. We find that our approach is a powerful tool to describe the relation between temperatures and displacements for simulated data. Next, we analyze the temperature-displacement relationship in a real experimental setup at standstill, in which the tool system is thermally forced. Again, the ACE algorithm estimates the deformation with high precision. The corresponding errors obtained with the nonlinear regression approach are 10-fold lower than those from multiple linear regression analysis. Finally, we investigate the thermal behavior of a modular tool system in a working milling machine and again obtain promising results. The thermally induced errors can be estimated with 1-2 µm accuracy using this nonlinear regression analysis. Therefore, this approach seems to be very useful for the development of new modular tool systems.
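As an illustration of the regression idea, a minimal single-predictor ACE iteration can be sketched as follows. This is a toy sketch, not the study's implementation: the tanh temperature-displacement relation, the bin-averaging smoother, and all variable names are assumptions chosen purely for illustration.

```python
import numpy as np

def smooth(x, y, bins=20):
    """Crude conditional-expectation estimate E[y | x] via bin averaging."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    return means[idx]

def ace(x, y, n_iter=30):
    """Minimal single-predictor ACE: alternately update the transforms
    theta(y) and phi(x) by conditional expectations."""
    theta = (y - y.mean()) / y.std()
    for _ in range(n_iter):
        phi = smooth(x, theta)            # phi(x)   <- E[theta(y) | x]
        theta = smooth(y, phi)            # theta(y) <- E[phi(x) | y]
        theta = (theta - theta.mean()) / theta.std()
    return phi, theta

rng = np.random.default_rng(1)
t = rng.uniform(-2, 2, 3000)                     # synthetic "temperatures"
d = np.tanh(t) + 0.05 * rng.normal(size=3000)    # nonlinear "displacements"
phi, theta = ace(t, d)
r = np.corrcoef(phi, theta)[0, 1]                # maximal correlation estimate
```

After convergence, the correlation between the transformed variables approaches the maximal correlation of the pair, which is the quantity ACE optimizes; the estimated transform phi can then serve as the nonlinear regression function.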
The main intention of this contribution is to discuss different nonlinear approaches to heart rate and blood pressure variability analysis for a better understanding of cardiovascular regulation. We investigate measures of complexity based on symbolic dynamics, renormalised entropy and finite-time growth rates. For bivariate data, we employ the dual sequence method to estimate the baroreflex sensitivity and the maximal correlation method to estimate the nonlinear coupling between time series. The latter appears to be a suitable method to estimate both the strength and the direction of the nonlinear coupling. Heart rate and blood pressure data from clinical pilot studies and from very large clinical studies are analysed. We demonstrate that parameters from nonlinear dynamics are useful for risk stratification after myocardial infarction, for the prediction of life-threatening cardiac events even in short time series, and for modelling the relationship between heart rate and blood pressure regulation. These findings could be of importance for clinical diagnostics, for algorithms of risk stratification, and for the therapeutic and preventive tools of next-generation implantable cardioverter defibrillators.
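A complexity measure of the symbolic-dynamics type mentioned above can be sketched in a few lines: the series is coarse-grained into symbols and the Shannon entropy of the resulting word distribution is computed. The binary symbolization, the word length of 3, and the synthetic series below are illustrative assumptions, not the exact parameters used in these studies.

```python
import numpy as np
from collections import Counter

def symbolic_entropy(rr, word_len=3):
    """Shannon entropy (bits) of the word distribution of a symbolized series.
    Symbols: 1 if the value exceeds the series mean, else 0 (a simple choice)."""
    sym = (rr > rr.mean()).astype(int)
    words = [tuple(sym[i:i + word_len]) for i in range(len(sym) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
irregular = rng.normal(800, 50, 1000)                       # variable "RR intervals" (ms)
regular = 800 + 5 * np.sin(2 * np.pi * np.arange(1000) / 40)  # nearly periodic series
h_irregular = symbolic_entropy(irregular)
h_regular = symbolic_entropy(regular)
```

A strongly regular series uses only a few of the possible words and yields low entropy, while an irregular series spreads its probability over all words; it is this contrast that makes such measures candidates for risk stratification.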
Despite many previous studies on the association between hyperthyroidism and the hyperadrenergic state, controversies still exist. Detrended fluctuation analysis (DFA) is a well-recognized method in the nonlinear analysis of heart rate variability (HRV), and it has physiological significance related to the autonomic nervous system. In particular, an increased short-term scaling exponent alpha 1 calculated from DFA is associated with both increased sympathetic activity and decreased vagal activity. No study has investigated the DFA of HRV in hyperthyroidism. This study was designed to assess the sympathovagal balance in hyperthyroidism. We performed DFA along with the linear analysis of HRV in 36 hyperthyroid Graves' disease patients (32 females and 4 males; age 30 +/- 1 years, means +/- SE) and 36 normal controls matched by sex, age and body mass index. Compared with the normal controls, the hyperthyroid patients revealed a significant increase (P < 0.001) in alpha 1 (hyperthyroid 1.28 +/- 0.04 versus control 0.91 +/- 0.02), long-term scaling exponent alpha 2 (1.05 +/- 0.02 versus 0.90 +/- 0.01), overall scaling exponent alpha (1.11 +/- 0.02 versus 0.89 +/- 0.01), low frequency power in normalized units (LF%) and the ratio of low frequency power to high frequency power (LF/HF); and a significant decrease (P < 0.001) in the standard deviation of the R-R intervals (SDNN) and high frequency power (HF). In conclusion, hyperthyroidism is characterized by concurrent sympathetic activation and vagal withdrawal. This sympathovagal imbalance in hyperthyroidism helps to explain the higher prevalence of atrial fibrillation and exercise intolerance among hyperthyroid patients.
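The short-term scaling exponent alpha 1 can be illustrated with a minimal DFA sketch on synthetic data. The scale range 4-16 follows the common convention for alpha 1; the data and parameters below are assumptions for illustration, not this study's actual recordings (for uncorrelated white noise, alpha is expected near 0.5).

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: fluctuation F(n) for each window size n."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[: n_seg * n].reshape(n_seg, n)   # non-overlapping windows
        t = np.arange(n)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # linear detrending per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=4000)                         # synthetic uncorrelated series
scales = np.array([4, 5, 6, 8, 11, 16])           # short-term range for alpha 1
F = dfa(x, scales)
alpha1 = np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope of log F vs log n
```

An alpha 1 well above 0.5 indicates persistent short-range correlations; the elevated values reported for the hyperthyroid group would appear here as a steeper log-log slope.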
In the modern industrialized countries, several hundred thousand people die every year of sudden cardiac death. The individual risk of sudden cardiac death cannot be defined precisely by commonly available, non-invasive diagnostic tools such as Holter monitoring, highly amplified ECG and traditional linear analysis of heart rate variability (HRV). Therefore, we apply some rather unconventional methods of nonlinear dynamics to analyse HRV. In particular, complexity measures based on symbolic dynamics, as well as a new measure, the renormalized entropy, detect abnormalities in the HRV of several patients who had been classified as low-risk by traditional methods. Combining these complexity measures with parameters in the frequency domain seems to be a promising way to arrive at a more precise definition of individual risk. These findings still have to be validated in a representative number of patients.
In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data from healthy persons and from patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of the systematic errors produced by boundary effects at the rather large scales of a system. In this way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, the method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), and 18 young healthy persons (YH). With our method we are able to separate the AF group (rho_mu^ls = 0.97 +/- 0.02) completely from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65 +/- 0.13; EH, 0.54 +/- 0.05; YH, 0.57 +/- 0.05; p < 0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in rho_mu^ls (day, 0.65 +/- 0.13; night, 0.66 +/- 0.12; n.s.), in contrast to healthy controls (day, 0.54 +/- 0.05; night, 0.61 +/- 0.05; p = 0.002). Correlation analysis showed no statistically significant relation between standard HRV parameters and the circadian LASDID, indicating that our method could possibly be applied independently for clinical risk stratification.
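The Grassberger-Procaccia correlation sum underlying the LASDID estimate (shown here without the large-scale normalization of the paper) can be sketched as follows. The delay-embedding parameters and the periodic test signal are illustrative assumptions; a smooth closed curve should yield a local dimension estimate near 1.

```python
import numpy as np

def correlation_sum(x, m=2, tau=3, r=0.1):
    """Grassberger-Procaccia correlation sum C(r) of a delay embedding:
    the fraction of point pairs closer than r (Chebyshev norm)."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
    iu = np.triu_indices(n, k=1)              # each pair counted once
    return float(np.mean(dist[iu] < r))

x = np.sin(0.5 * np.arange(600))              # periodic signal: a closed curve
c1 = correlation_sum(x, r=0.1)
c2 = correlation_sum(x, r=0.2)
slope = (np.log(c2) - np.log(c1)) / np.log(2.0)  # local slope of log C vs log r
```

The correlation dimension is the scaling exponent of C(r) for small r; the LASDID approach instead evaluates and normalizes this quantity at the large scales, where boundary effects would otherwise bias the estimate.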
Objectives: Scoring sleep visually based on polysomnography is an important but time-consuming element of sleep medicine. Whereas computer software assists human experts in the assignment of sleep stages to polysomnogram epochs, its performance is usually insufficient. This study evaluates the possibility of fully automating sleep staging, taking into account the reliability of the sleep stages available from human expert sleep scorers. Methods: We obtain features from the EEG, ECG and respiratory signals of polysomnograms from ten healthy subjects. Using the sleep stages provided by three human experts, we evaluate the performance of linear discriminant analysis on the entire polysomnogram and only on epochs where the three experts agree in their sleep stage scoring. Results: We show that in polysomnogram intervals to which all three scorers assign the same sleep stage, our algorithm achieves 90% accuracy. This high rate of agreement with the human experts is accomplished with only a small set of three frequency features from the EEG. We increase the performance to 93% by including ECG and respiration features. In contrast, on intervals of ambiguous sleep stage, the sleep stage classification obtained from our algorithm agrees with the human consensus scorer in approximately 61% of epochs. Conclusions: These findings suggest that machine classification is highly consistent with human sleep staging, and that errors in the algorithm's assignments reflect a lack of well-defined criteria for human experts to judge certain polysomnogram epochs rather than an insufficiency of the computational procedures.
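Fisher's linear discriminant, the classifier family used here, can be sketched for a two-stage toy problem. The synthetic "band-power" features below are purely hypothetical stand-ins for the study's EEG/ECG/respiration features, and the class means and spreads are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 3-dimensional "frequency band power" features for two stages
wake = rng.normal([3.0, 1.0, 0.5], 0.4, size=(200, 3))
deep = rng.normal([1.0, 2.5, 2.0], 0.4, size=(200, 3))
X = np.vstack([wake, deep])
y = np.array([0] * 200 + [1] * 200)

# Fisher's linear discriminant: w = Sw^{-1} (mu1 - mu0),
# where Sw is the pooled within-class scatter
mu0, mu1 = wake.mean(axis=0), deep.mean(axis=0)
Sw = np.cov(wake, rowvar=False) + np.cov(deep, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)

# Classify by projecting onto w and thresholding at the midpoint of the means
threshold = w @ (mu0 + mu1) / 2
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
```

The discriminant projects each epoch's feature vector onto a single axis chosen to maximize between-class separation relative to within-class spread; multi-stage scoring extends this to several discriminant axes.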