Mathematics

Two Quantitative Variables

Two quantitative variables refer to a pair of numerical data sets that can be analyzed together to identify any relationships or patterns between them. This analysis often involves techniques such as scatter plots, correlation, and regression to understand how changes in one variable may be associated with changes in the other. Understanding the relationship between two quantitative variables is essential in statistical analysis.

Written by Perlego with AI-assistance

5 Key excerpts on "Two Quantitative Variables"

  • Exploratory and Multivariate Data Analysis
    Chapter 4

    2-D Statistical Data Analysis

    1 Introduction

    In practice, many users stop their statistical investigations after having studied the variables independently from each other. However, they have used only 1-D analysis, and usually cannot put forward any explanation of causality for their data. For example, a questionnaire with two questions can be analyzed using two frequency distributions. However, studying each frequency distribution individually cannot reveal any relation between the two questions. Another example is given by the study of two quantitative variables, for which as many statistical characteristics or graphics as required can be built (cf. Chapter 3). They cannot, however, help to explain the relation between the two variables. The only way to approach the explanation of how one variable is related to another is to build a relation between the two variables. That is the objective of 2-D statistical data analysis, where two variables are analyzed according to the following points of view:
    1.  To express and highlight the relationship between two variables, in order to show the statistical dependence between them.
    2.  When possible, to sum up the relations by a law of variation or a statistical dependence, and to characterize them by a numerical coefficient independent of the units of measure of the variables.
    These studies vary according to the type of variables involved (quantitative, categorical, chronological, logical, etc.), and are presented in what follows.

    2 2-D Analysis of Two Categorical Variables

    2.1 Contingency Data Sets

    The way to express a relation between two categorical variables is to compute a contingency data set as follows. Let two categorical variables be denoted by V1 and V2:
    V1 has h forms denoted by A1, A2, …, Ah;
    V2 has k forms denoted by B1, …, Bk.
    For each couple of forms (Ai, Bj), we compute the number of observations, denoted by nij, that possesses the forms Ai and Bj.
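The counting procedure above (one count nij per pair of forms) can be sketched in Python; the survey answers below are invented purely for illustration:

```python
from collections import Counter

def contingency_table(v1, v2):
    """Count observations n_ij for each pair of forms (A_i, B_j).

    v1 and v2 are equal-length sequences holding the observed forms
    of the two categorical variables V1 and V2.
    """
    counts = Counter(zip(v1, v2))
    rows = sorted(set(v1))
    cols = sorted(set(v2))
    # Table as a dict of dicts: table[A_i][B_j] = n_ij
    return {a: {b: counts.get((a, b), 0) for b in cols} for a in rows}

# Hypothetical answers to two questions of a questionnaire
v1 = ["yes", "yes", "no", "yes", "no"]
v2 = ["low", "high", "low", "low", "low"]
table = contingency_table(v1, v2)
print(table["yes"]["low"])  # n_ij for the couple (yes, low) -> 2
```

Pairs that never occur receive a count of zero, so the table is complete over all h × k couples of forms.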
  • Quantitative Methods for Historians

    A Guide to Research, Data, and Statistics

    Chapter 8: Statistics for Questions about Two Variables

    The most general statistical question that can be asked about two variables is whether or not they are related in some way. In other words, can the values of one be used to predict the values of the other for a certain set of cases? If this is possible, the two variables are said to be related or correlated, even though the predictions may not be infallible. Perfect correlation implies that every value of one variable may be predicted exactly from the values of the other. Imperfect correlation between variables suggests that knowing a particular value of one variable provides information about the most likely value of the other variable. For example, because it is possible to predict the average weight of a group of persons for each of a range of values of height, body height and weight are said to be highly but not perfectly correlated. In such relationships a few predictions may be exact, but most will be off the mark to some small extent. The strength of a relationship between two variables is based upon the amount of error in the predictions. The less such forecasts turn out to be wrong, the stronger the correlation.

    One important limitation on statistical analysis, often misunderstood by laymen, is the difference between correlation and causation. Simply put, showing that two variables covary is not sufficient to prove that one variable actually causes another to behave as it does. It is both tempting and wrong to infer cause from covariation. Correlation may be accidental, such as the coincidence between fluctuations of the New York stock market and the monsoon in India, or dependent upon a third variable, such as the drop in the Swedish birthrate and the disappearance of storks, both of which may be attributed to industrialization.
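The height–weight example can be made concrete with a small sketch of the Pearson correlation coefficient; the heights and weights below are hypothetical values chosen to covary strongly but not perfectly:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two quantitative variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-deviation
    sxx = sum((a - mx) ** 2 for a in x)                    # deviation in x
    syy = sum((b - my) ** 2 for b in y)                    # deviation in y
    return sxy / math.sqrt(sxx * syy)

# Hypothetical heights (cm) and weights (kg)
heights = [160, 165, 170, 175, 180, 185]
weights = [55, 62, 66, 72, 77, 85]
print(round(pearson_r(heights, weights), 3))
```

A coefficient near +1 or -1 means predictions of one variable from the other will rarely be far off the mark; a coefficient near 0 means knowing one value tells us little about the other.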
  • Making Sense of Statistical Methods in Social Research
    SIX Studying the Relationship between Two Variables

    Chapter Contents
    PART 1 ASSOCIATIONAL ANALYSIS: Two Continuous Variables; Two Nominal Variables; Two Ordinal Variables
    PART 2 REGRESSION ANALYSIS: Simple Linear Regression; ANOVA; Simple Logistic Regression

    The relationship of two variables, say X and Y, involves several aspects. First, direction, i.e. whether one variable comes, at least analytically a priori, as a predictor or a cause of the other. If we do not have any knowledge or theory about such direction, then we must treat the relationship as symmetrical; accordingly, our analysis focuses on the association (or correlation) only.1 If we do have an interest in the direction, then we can use regression models. This explains why there are two parts in this chapter.

    The correlation of two variables can be studied with regard to its presence (whether there is a relationship), strength (how strong the relationship is) and sign (whether the relationship is positive or negative). Not all statistics can inform us about all three aspects. Sometimes, a statistic can only be used as a piece of supporting evidence for the presence of the association. The results can only tell us the probability that the observed association is purely a consequence of chance, and nothing more. Regression can never suggest or confirm the causal relationship of the two variables; it is only a mathematical representation of our theory about such causal connection.

    Different methods should be applied with respect to the level of measurement of the variables. When both variables are continuous, it is appropriate to employ simple linear regression. When the response variable (the target of explanation) is continuous while the explanatory variable (the predictor or the believed cause) is categorical, we use analysis of variance (ANOVA).
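When a direction is assumed, simple linear regression fits a line to the predictor X and response Y by least squares. A minimal sketch, with invented data lying exactly on the line y = 1 + 2x so the fitted coefficients are easy to check:

```python
def linear_fit(x, y):
    """Least-squares intercept a and slope b for the model y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope: co-deviation of x and y divided by the deviation of x
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx  # the fitted line passes through the point of means
    return a, b

# Hypothetical predictor X and response Y with an exact linear relation
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]
a, b = linear_fit(x, y)
print(a, b)  # 1.0 2.0
```

With real data the points scatter around the line, and the fitted a and b are the values minimizing the sum of squared vertical distances from the points to the line.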
  • Quantitative Data Analysis with IBM SPSS 17, 18 & 19
    • Alan Bryman, Duncan Cramer(Authors)
    • 2012(Publication Date)
    • Routledge
      (Publisher)

    Chapter 8

    Bivariate analysis: exploring relationships between two variables

    ■  Crosstabulation
    ■  Crosstabulation with statistical significance: the chi-square (χ²) test
    ■  Correlation
    ■  Other approaches to bivariate relationships
    ■  Regression
    ■  Overview of types of variable and methods of examining relationships
    ■  Exercises
    This chapter focuses on relationships between pairs of variables. Having examined the distribution of values for particular variables through the use of frequency tables, histograms, and associated statistics as discussed in Chapter 5, a major strand in the analysis of a set of data is likely to be bivariate analysis – how two variables are related to each other. The analyst is unlikely to be satisfied with the examination of single variables alone, but will probably be concerned to demonstrate whether variables are related. The investigation of relationships is an important step in explanation and consequently contributes to the building of theories about the nature of the phenomena in which we are interested. The emphasis on relationships can be contrasted with the material covered in the previous chapter, in which the ways in which cases or participants may differ in respect to a variable were described. The topics covered in the present chapter bear some resemblance to those examined in Chapter 7.
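The chi-square (χ²) test mentioned above compares the observed counts in a crosstabulation with the counts expected if the two variables were independent. A minimal sketch of the statistic itself (not the full significance test), using a hypothetical 2×2 table:

```python
def chi_square(table):
    """Pearson chi-square statistic for a crosstabulation (list of rows of counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence: row total * column total / n
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical 2x2 crosstab of two dichotomous variables
table = [[30, 10],
         [20, 40]]
print(round(chi_square(table), 2))  # 16.67
```

The larger the statistic, the further the observed table is from what independence would predict; the probability of a value this large by chance is then read from a χ² distribution with (rows − 1) × (columns − 1) degrees of freedom.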
  • Essentials of Psychological Testing
    • Susana Urbina, Alan S. Kaufman, Nadeen L. Kaufman(Authors)
    • 2014(Publication Date)
    • Wiley
      (Publisher)
    If there is a correlation between two variables and the relationship between them is linear, there are only two possible outcomes: (1) a positive correlation or (2) a negative correlation. If there is no correlation, the data points do not align themselves into any definite pattern or trend, and we may assume that the two sets of data do not share a common source of variance. If there is either a positive or a negative correlation of any magnitude, we can evaluate the possibility that the correlation could have resulted from chance, using the size of the sample on which the correlation was computed and statistical tables that show the probability that a coefficient of a given magnitude could have occurred by chance. Naturally, the larger the coefficient, the less likely it is that it could be the result of chance. If the probability that the obtained coefficient resulted from chance is very small, we can be confident that the correlation between X and Y is greater than zero. In such cases, we assume that the two variables share a certain amount of common variance. The larger and the more statistically significant a correlation coefficient is, the larger the amount of variance we can assume is shared by X and Y. The proportion of variance shared by two variables is often estimated by squaring the correlation coefficient (rxy) and obtaining the coefficient of determination, or r²xy. Although coefficients of determination tell us how much of the variance in Y can be explained by the variance in X, or vice versa, they do not necessarily indicate that there is a causal relationship between X and Y.

    Scatterplots

    The graphic depiction of bivariate data in the form of scatter diagrams or scatterplots is essential in order for us to visualize the kind of relationship at hand. The scatterplots in Figure 2.5 present the patterns of points that result from plotting the bivariate distributions from Table 2.6.
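The coefficient of determination described above can be computed directly as the squared correlation; a small sketch with invented scores chosen so the shared variance comes out to exactly 36%:

```python
def r_squared(x, y):
    """Coefficient of determination r^2: proportion of variance shared by X and Y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    # Squaring the correlation coefficient avoids a square root entirely
    return sxy * sxy / (sxx * syy)

# Hypothetical pairs of scores
x = [1, 2, 3, 4]
y = [2, 1, 4, 3]
print(r_squared(x, y))  # 0.36, i.e. 36% of the variance is shared
```

As the excerpt notes, even an r² this size says nothing by itself about whether X causes Y or vice versa.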