"Getting Started with a SIMPLIS Approach" is particularly appropriate for those users who are not experts in statistics, but have a basic understanding of multivariate analysis that would allow them to use this handbook as a good first foray into LISREL. Part I introduces the topic, presents the study that serves as the background for the explanation of matters, and provides the basis for Parts II and III, which, in turn, explain the process of estimation of the measurement model and the structural model, respectively. In each section, we also suggest essential literature to support the...
"Getting Started with a SIMPLIS Approach" is particularly appropriate for those users who are not experts in statistics, but have a basic understan...
The purpose of this book is to illustrate a new statistical approach for testing allelic association and genotype-specific effects in the genetic study of diseases. Several parametric and nonparametric methods are available for this purpose. We deal with population-based association studies, but comparisons with other methods will also be drawn, analysing the advantages and disadvantages of each, particularly with regard to power properties with small sample sizes. In this framework we will work out some nonparametric statistical permutation tests and likelihood-based tests to perform...
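As an illustration of the general idea of a permutation test in this setting (a generic sketch, not the specific tests developed in the book; the variable names and allele frequencies are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical allele indicators (1 = risk allele, 0 = other allele)
# on case and control chromosomes; real data would replace these.
case_alleles = rng.binomial(1, 0.30, size=200)
control_alleles = rng.binomial(1, 0.22, size=200)

# Observed statistic: difference in risk-allele frequency.
observed = case_alleles.mean() - control_alleles.mean()

# Permutation null: repeatedly shuffle the case/control labels
# and recompute the statistic.
pooled = np.concatenate([case_alleles, control_alleles])
n_case = len(case_alleles)
perm_stats = []
for _ in range(10_000):
    rng.shuffle(pooled)
    perm_stats.append(pooled[:n_case].mean() - pooled[n_case:].mean())

# Two-sided permutation p-value.
p_value = np.mean(np.abs(perm_stats) >= abs(observed))
print(f"observed difference = {observed:.3f}, permutation p = {p_value:.4f}")
```

Shuffling the labels mimics the null hypothesis of no association between disease status and allele; the tests developed in the book are more elaborate, but this is the basic logic of a permutation test.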
This monograph presents a radical rethinking of how elementary inferences should be made in statistics, implementing a comprehensive alternative to hypothesis testing in which controlling error probabilities is replaced by selecting the course of action (one of the available options) with the smallest expected loss.
Its strength is that the inferences are responsive to the elicited or declared consequences of erroneous decisions, and so they can be closely tailored to the client's perspective, priorities, value judgments and other prior information,...
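A minimal sketch of the decision rule described above, choosing the action with the smallest expected loss under given state probabilities (the loss values and probabilities are invented for illustration and are not taken from the monograph):

```python
import numpy as np

# Probabilities of the possible states of nature (e.g., posterior probabilities).
state_probs = np.array([0.7, 0.3])

# Loss matrix: rows = available actions, columns = states of nature.
# loss[a, s] is the declared loss of taking action a when state s is true.
loss = np.array([
    [0.0, 10.0],   # action 0: costly only if state 1 is true
    [2.0,  1.0],   # action 1: moderate loss either way
])

expected_loss = loss @ state_probs           # expected loss of each action
best_action = int(np.argmin(expected_loss))  # action with smallest expected loss

print("expected losses:", expected_loss)
print("chosen action:", best_action)
```

Changing the loss matrix to reflect a different client's priorities can change which action is selected, which is exactly the sensitivity to declared consequences that the monograph emphasizes.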
Software reliability is one of the most important characteristics of software product quality. Technologies for measuring and managing reliability throughout the software product life cycle are essential for producing and maintaining high-quality, reliable software systems.
Part 1 of this book introduces several aspects of software reliability modeling and its applications. Hazard rate and nonhomogeneous Poisson process (NHPP) models are investigated, particularly for quantitative software reliability assessment. Further, imperfect debugging and software availability models are discussed with reference to...
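For orientation, an NHPP model treats the cumulative number of software faults detected up to time t, N(t), as a nonhomogeneous Poisson process (a standard formulation quoted for context; the Goel-Okumoto mean value function below is only one common example among the models such books cover):

\[
\Pr\{N(t) = n\} = \frac{[H(t)]^{n}}{n!}\, e^{-H(t)}, \qquad n = 0, 1, 2, \ldots,
\]

where H(t) = E[N(t)] is the mean value function; for example, the Goel-Okumoto model takes

\[
H(t) = a\left(1 - e^{-bt}\right), \qquad a > 0, \; b > 0,
\]

with a the expected total number of faults and b the fault detection rate.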
This book reviews methods for handling missing data, manipulated data, multiple confounders, predictions beyond observation, uncertainty of diagnostic tests, and the problems of outliers.
This brief monograph is an in-depth study of the infinite divisibility and self-decomposability properties of central and noncentral Student's distributions, represented as variance and mean-variance mixtures of multivariate Gaussian distributions with the reciprocal gamma mixing distribution. These results allow us to define and analyse Student-Lévy processes as Thorin-subordinated Gaussian Lévy processes. A broad class of one-dimensional, strictly stationary diffusions with a Student's t marginal distribution is defined as the unique weak solution of the stochastic differential...
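The variance-mixture representation referred to above can be written, in its standard one-dimensional form, as (a textbook fact quoted for orientation, not a result specific to the monograph):

\[
X = \mu + \sigma \sqrt{W}\, Z, \qquad Z \sim N(0, 1), \qquad W \sim \operatorname{Inv\text{-}Gamma}\!\left(\tfrac{\nu}{2}, \tfrac{\nu}{2}\right), \qquad W \perp Z,
\]

so that X has a Student's t distribution with \nu degrees of freedom, location \mu and scale \sigma; the reciprocal (inverse) gamma law is the mixing distribution, and adding a drift term proportional to W gives the mean-variance mixtures used for the noncentral case.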
This work is an overview of statistical inference in stationary, discrete-time stochastic processes. Results from the last fifteen years, particularly on non-Gaussian sequences and on semi-parametric and non-parametric analysis, are reviewed. The first chapter provides background results on martingales and strong mixing sequences, which enable us to generate various classes of CAN estimators in the case of dependent observations. Topics discussed include inference in Markov chains and extensions of Markov chains such as Raftery's Mixture Transition Distribution (MTD) model and Hidden Markov chains and...
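For context, the MTD model mentioned above approximates a high-order Markov chain by mixing first-order contributions from each lag (standard formulation, with generic notation):

\[
\Pr\{X_t = j \mid X_{t-1} = i_1, \ldots, X_{t-\ell} = i_\ell\} \;=\; \sum_{g=1}^{\ell} \lambda_g \, q_{i_g j},
\]

where Q = (q_{ij}) is a single transition matrix and the weights satisfy \lambda_g \ge 0 and \sum_g \lambda_g = 1, so the number of parameters grows linearly rather than exponentially in the order \ell.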
The first part of this title covered the statistical tests relevant to starters on SPSS, including standard parametric and non-parametric tests for continuous and binary variables, regression methods, trend tests, and reliability and validity assessments of diagnostic tests. This second part reviews multistep methods, multivariate models, assessments of missing data, performance of diagnostic tests, meta-regression, Poisson regression, confounding and interaction, and survival analyses using log-rank tests and segmented time-dependent Cox regression. Methods for...
This book presents recent advances (from 2008 to 2012) concerning the use of the Naive Bayes model in unsupervised word sense disambiguation (WSD).
While WSD in general has a number of important applications in various fields of artificial intelligence (information retrieval, text processing, machine translation, message understanding, man-machine communication, etc.), unsupervised WSD is considered important because it is language-independent and does not require previously annotated corpora. The Naive Bayes model has been widely used in supervised WSD, but its use in unsupervised WSD...
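In the Naive Bayes approach to WSD, an ambiguous word is assigned the sense that maximizes the posterior probability given its context words, under the usual conditional-independence assumption (this is the standard model statement, not the book's specific unsupervised estimation procedure):

\[
\hat{s} \;=\; \arg\max_{s} \; P(s) \prod_{i=1}^{n} P(w_i \mid s),
\]

where w_1, \ldots, w_n are the words in the context window. In the unsupervised setting the parameters P(s) and P(w_i \mid s) must be estimated without sense-annotated corpora, which is where the methods surveyed in the book come in.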
In statistics, the Behrens-Fisher problem is the problem of interval estimation and hypothesis testing, based on two independent samples, concerning the difference between the means of two normally distributed populations whose variances are not assumed to be equal. In his 1935 paper, Fisher outlined an approach to the Behrens-Fisher problem. Since high-speed computers were not available in Fisher's time, this approach could not be implemented and was soon forgotten. Fortunately, now that high-speed computers are available, this approach can easily...
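To fix notation (the standard setup, stated here only for orientation): given independent samples of sizes m and n from N(\mu_1, \sigma_1^2) and N(\mu_2, \sigma_2^2), with \sigma_1^2 not assumed equal to \sigma_2^2, inference about \mu_1 - \mu_2 is typically based on

\[
T \;=\; \frac{(\bar{X} - \bar{Y}) - (\mu_1 - \mu_2)}{\sqrt{s_1^2/m + s_2^2/n}},
\]

whose sampling distribution depends on the unknown variance ratio \sigma_1^2/\sigma_2^2. This dependence is what makes the problem nonstandard, and it is the problem that Fisher's 1935 fiducial approach addresses.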