Results: 6

(searched for: doi:10.13176/11.117)
Journal of the Operational Research Society, Volume 69, pp 1994-2005; https://doi.org/10.1080/01605682.2017.1417684

Abstract:
The objective of quantitative credit scoring is to develop accurate classification models. Most attention has been devoted to delivering new classifiers based on variables commonly used in the economic literature. Several interdisciplinary studies have found that personality traits are related to financial behaviour; psychological traits could therefore be used to lower credit risk in scoring models. In our paper, we considered the financial histories and psychological traits of customers of an Italian bank. We compared the performance of kernel-based classifiers with that of standard ones. We found very promising results in terms of misclassification error reduction when personality attitudes are included in the models, with both linear and non-linear discriminants. We also measured the contribution of each variable to risk prediction in order to assess the importance of each predictor.
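The comparison described in this abstract can be imitated with standard tools. Below is a minimal, hypothetical sketch in Python, assuming scikit-learn and synthetic data, with logistic regression standing in for a "standard" linear classifier and an RBF support vector machine standing in for a kernel-based one; it is not the authors' model, dataset, or feature set.

```python
# Hypothetical sketch: compare a linear and a kernel-based classifier with and
# without extra "personality" features. Synthetic data stands in for the bank data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                           random_state=0)
financial_only = X[:, :8]   # stand-in for financial-history variables
with_traits = X             # stand-in for financial + personality variables

models = {
    "linear (logistic regression)": LogisticRegression(max_iter=1000),
    "kernel (RBF SVM)": SVC(kernel="rbf"),
}

for name, clf in models.items():
    for label, features in [("financial only", financial_only),
                            ("with personality traits", with_traits)]:
        pipe = make_pipeline(StandardScaler(), clf)
        acc = cross_val_score(pipe, features, y, cv=5).mean()
        print(f"{name:30s} {label:25s} misclassification error = {1 - acc:.3f}")
```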
, Hamparsum Bozdogan, Sinan Çalık
Computational and Mathematical Methods in Medicine, Volume 2015, pp 1-14; https://doi.org/10.1155/2015/370640

Abstract:
Gene expression data are typically large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings for data sets with undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a novel approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further employ the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and Bozdogan's information-theoretic measure of complexity (ICOMP) criterion. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach: the hybridized smoothed covariance matrix estimators do not degenerate, allowing PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions.
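One concrete way to see the model-selection step is to fit probabilistic PCA for a range of component counts and score each fit with an information criterion. The sketch below assumes scikit-learn, whose PCA.score() evaluates the average log-likelihood under the Tipping-Bishop probabilistic PCA model, and uses a plain AIC on synthetic data; it does not implement the authors' hybridized smoothed covariance estimators or the ICOMP criterion.

```python
# Hypothetical sketch: choose the number of probabilistic PCs with an AIC-style
# criterion. Plain PPCA + AIC only, not the authors' hybridized estimators or ICOMP.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, p = 40, 200                       # undersized: far fewer samples than features
X = rng.standard_normal((n, p))

def aic_for_q(X, q):
    """AIC = -2 * log-likelihood + 2 * (number of free parameters)."""
    pca = PCA(n_components=q).fit(X)
    loglik = pca.score(X) * X.shape[0]        # total PPCA log-likelihood
    d = X.shape[1]
    n_params = d * q + 1 - q * (q - 1) / 2    # PPCA covariance parameters (mean ignored)
    return -2 * loglik + 2 * n_params

aics = {q: aic_for_q(X, q) for q in range(1, 11)}
best_q = min(aics, key=aics.get)
print("AIC-selected number of probabilistic PCs:", best_q)
```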
Varun Gupta, Gavendra Singh, Manish Mittal, Sharvan Kumar Pahuja
2010 Second International Conference on Advances in Computing, Control, and Telecommunication Technologies pp 6-9; https://doi.org/10.1109/act.2010.11

Abstract:
In this paper we highlight signals that are not Fourier transformable, such as the step and signum signals, and obtain their frequency-domain description using PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The Fourier transform maps a time-domain signal into the frequency domain and describes which frequencies the original signal contains. Principal Component Analysis is a way of identifying patterns in data and highlighting the differences within it; with the help of PCA and LDA we reduce the dimension of the signal. LDA is used in statistics and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification. LDA is closely related to ANOVA (analysis of variance). The main advantage of PCA is that, once patterns are found, the data can be compressed by reducing the number of dimensions without much loss of information. Dimension reduction is the process of reducing the number of random variables under consideration and can be divided into feature selection and feature extraction: feature selection approaches try to find a subset of the original variables, while feature extraction transforms the data from the high-dimensional space to a space of fewer dimensions.
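As a rough illustration of the PCA-then-LDA pipeline this abstract describes, the hypothetical sketch below assumes scikit-learn and its bundled digits dataset; it shows the general technique (unsupervised projection followed by supervised reduction and classification), not the paper's signal-processing experiment.

```python
# Hypothetical sketch: PCA for unsupervised feature extraction, then LDA for
# supervised reduction/classification on the reduced features.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Project onto the top principal components (feature extraction).
pca = PCA(n_components=10).fit(X_train)
X_train_pca, X_test_pca = pca.transform(X_train), pca.transform(X_test)
print("variance retained by 10 PCs:", pca.explained_variance_ratio_.sum())

# LDA finds class-separating directions and can itself act as a linear classifier.
lda = LinearDiscriminantAnalysis().fit(X_train_pca, y_train)
print("LDA accuracy on held-out data:", lda.score(X_test_pca, y_test))
```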
Advances in Intelligent and Soft Computing pp 345-352; https://doi.org/10.1007/978-3-642-14746-3_43

The publisher has not yet granted permission to display this abstract.