Unistat

The Unistat computer program is a statistical data analysis tool featuring two modes of operation: the stand-alone user interface is a complete workbench for data input, analysis and visualization, while the Microsoft Excel add-in mode extends the features of the mainstream spreadsheet application with powerful analytical capabilities.

With its first release in 1984, Unistat soon differentiated itself by targeting the new generation of microcomputers that were becoming commonplace in offices and homes, at a time when data analysis was largely the domain of big-iron mainframes and minicomputers. Since then, the product has gone through several major revisions targeting various desktop computing platforms, but its development has always been focused on user interaction and dynamic visualization.

As desktop computing has proliferated from the 1990s onwards, Unistat's end-user-oriented interface has attracted a following among biomedical researchers, social scientists, market researchers, government departments and students, enabling them to perform complex data analysis without the need for large manuals or scripting languages.

Statistics procedures supported by Unistat include:
  • Parametric statistics
  • Non-parametric statistics: binomial test, chi-squared test, Cohen's kappa, Fisher's exact test, Friedman two-way analysis of variance, Kendall's tau, Kendall's W, Kolmogorov–Smirnov test, Kruskal–Wallis one-way analysis of variance, Mann–Whitney U, McNemar's test, median test, Spearman's rank correlation coefficient, Duncan's new multiple range test, Wald–Wolfowitz runs test, Wilcoxon signed-rank test
  • Regression: linear regression, stepwise regression, nonlinear regression, logit/probit/Weibull, logistic regression, multinomial logit, Poisson regression and Cox regression
  • Analysis of variance
  • General linear model
  • Multivariate analysis: principal components analysis, linear discriminant analysis, canonical analysis, multidimensional scaling, canonical correlation analysis
  • Time series
  • Reliability
  • Survival analysis
  • Quality control: control charts, process capability, histograms, Pareto charts, ANOVA Gage R&R
  • Bioassay analysis: this optional module features potency estimation with parallel line, slope ratio and quantal response methods, with Fieller confidence intervals, validity tests, ED50 estimation and graphical representations.
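As an illustration of the quantal-response approach mentioned above (a general sketch, not Unistat's own implementation), the ED50 can be estimated by fitting a two-parameter logistic curve to dose–response data; the doses and response counts below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_dose, a, b):
    """Two-parameter logistic model for the probability of response."""
    return 1.0 / (1.0 + np.exp(-(a + b * log_dose)))

# Hypothetical quantal-response data: dose levels and the number of
# subjects responding out of n = 20 at each level.
doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
responded = np.array([2, 5, 10, 16, 19])
n = 20

log_doses = np.log(doses)
proportions = responded / n

# Fit by nonlinear least squares; a dedicated bioassay package would use
# maximum likelihood, but least squares keeps this sketch short and gives
# similar estimates for well-behaved data.
(a, b), _ = curve_fit(logistic, log_doses, proportions, p0=(0.0, 1.0))

# The ED50 is the dose at which the predicted response probability is 0.5,
# i.e. the dose solving a + b * log(dose) = 0.
ed50 = np.exp(-a / b)
print(f"ED50 estimate: {ed50:.2f}")
```

With this symmetric sample data the estimate comes out near a dose of 2, the level at which half the subjects responded.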
The source of this article is Wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.