Mathematics

Estimation in Real Life

Estimation in real life involves making educated guesses or approximations about quantities, measurements, or values without precise data. It is a practical skill used in everyday situations, such as budgeting, cooking, and planning. By using estimation, individuals can make informed decisions and solve problems without needing exact figures.

Written by Perlego with AI-assistance

6 Key excerpts on "Estimation in Real Life"

  • Thinking About Equations

    A Practical Guide for Developing Mathematical Intuition in the Physical Sciences and Engineering

    • Matt A. Bernstein, William A. Friedman (Authors)
    • 2011 (Publication Date)
    • Wiley (Publisher)
    5 ESTIMATION AND APPROXIMATION
    Most problems in science and engineering can be modeled with equations, but only a few yield equations for which we can find an exact, analytic solution. In this chapter, we discuss estimation and approximation techniques that can be applied to many of the remaining problems. Even if the exact solution is attainable, an approximate solution might suffice and is usually much easier to find. The array of estimation and approximation approaches is very broad, and here we limit ourselves to a few selected methods to give a flavor for this topic.
    One use of the word “estimation” is to describe the process with which we make quick, mental calculations. Section 5.1 contains an example illustrating how using powers of two provides simple and accurate numerical estimates for geometric sequences (also known as geometric progressions). Another use of the word “estimation” is to describe the technique of making order-of-magnitude guesses for numerical values based on simple principles and commonsense reasoning. These techniques are illustrated with example problems in Sections 5.2 and 5.3. The study of such problems has been advocated to help remedy “innumeracy” (i.e., numerical illiteracy) or a lack of “number sense” manifested by difficulty in determining whether the order of magnitude of a numerical quantity is plausible.
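    The powers-of-two trick mentioned above rests on the near-coincidence 2^10 = 1024 ≈ 10^3, which turns doublings into powers of ten. A minimal sketch of that shortcut follows; the 30-doublings scenario is our own illustration, not an example from the book:

```python
# Mental shortcut: 2**10 = 1024 is close to 10**3, so n doublings
# give roughly 10**(3 * n / 10). Here we check it for n = 30.
n = 30
exact = 2 ** n                 # 1,073,741,824
estimate = 10 ** (3 * n / 10)  # 10**9

print(f"exact    = {exact:.3e}")
print(f"estimate = {estimate:.3e}")
print(f"relative error = {abs(exact - estimate) / exact:.1%}")  # about 7%
```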
    The remaining three sections of this chapter deal with approximation methods. Several analytic methods that can be used to approximate the value of definite integrals are discussed in Section 5.4. Basic aspects of perturbation analysis are introduced in Section 5.5. The chapter concludes with a qualitative discussion of the importance of identifying and isolating the most important variables in a problem. These ideas are illustrated by discussions of Brownian motion and the nuclear optical model.
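    One classic analytic route to an approximate definite integral, of the kind such chapters survey, is to integrate a truncated Taylor series term by term. The integrand below is our own choice for illustration, not one taken from the book:

```python
import math

# Approximate I(t) = integral from 0 to t of exp(-x**2) dx by integrating
# the Taylor series exp(-x**2) = sum_n (-1)**n * x**(2n) / n! term by term:
#   I(t) ~ sum_n (-1)**n * t**(2n + 1) / (n! * (2n + 1))
def integral_series(t: float, terms: int = 5) -> float:
    return sum((-1) ** n * t ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
               for n in range(terms))

t = 0.5
approx = integral_series(t)
exact = math.sqrt(math.pi) / 2 * math.erf(t)  # closed form via the error function
print(f"series ~ {approx:.8f}, exact = {exact:.8f}")  # agree to ~6 decimals
```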
  • Software Metrics

    A Guide to Planning, Analysis, and Application

    Chapter 9 Estimation Models

    Estimation Process

    Estimation is a process that uses prediction systems and intuition for cost and resource planning. Estimation is governed by “cost realism,” which does not always insist on exactness but places as much emphasis on sound logic as on the mathematical form of the prediction system. It is concerned with assumptions about the future, with the relevance of history, and with bias in prediction.
    On the one hand, estimation models use rigorous statistics to generate the prediction equation; on the other, common sense governs many of the choices and assumptions made en route.
    Estimation is as much art as science.
    There are useful techniques available for time and effort estimation. Process and project metrics can provide historical perspective and powerful input for generating quantitative estimates, and the past experience of everyone involved can contribute immeasurably as estimates are developed and reviewed. Estimation lays the foundation for all other project planning activities, and project planning provides the road map for successful execution of the project.
    Size, effort, schedule, and cost are estimated in parallel with the requirements definition phase; as requirements become clearer and more refined, the estimates are refined in parallel. Size estimation involves predicting “how big” the software will be, by counting the number of features, functions, lines of code, or objects and applying appropriate weights to arrive at one or two numbers that represent the size of the software. Based on that size, productivity-related data, and experience from past projects, the size is converted into effort. Effort is usually estimated in terms of the person-hours, person-days, or person-months needed to create the software. Schedule is derived from the effort estimate, the number of team members, and the extent to which project life cycle activities are independent of each other. Cost is calculated from the effort plus other elements of cost such as travel, hardware, software, infrastructure, project-specific training, and expected usage of communication facilities. Though estimation is most intense during and at the end of the requirements stage, tracking of estimates and reestimation continue throughout the project at a reduced intensity.
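    As a hedged illustration of this size → effort → schedule chain (the excerpt prescribes no particular model; the coefficients below are the well-known basic COCOMO “organic mode” constants, used purely as a sketch):

```python
# Sketch of the size -> effort -> schedule chain. The coefficients are the
# classic basic-COCOMO organic-mode constants (effort = 2.4 * KLOC**1.05,
# schedule = 2.5 * effort**0.38); the excerpt itself names no specific model.
def estimate_project(kloc: float) -> dict:
    effort_pm = 2.4 * kloc ** 1.05        # effort in person-months
    schedule_m = 2.5 * effort_pm ** 0.38  # nominal schedule in months
    return {
        "effort_person_months": round(effort_pm, 1),
        "schedule_months": round(schedule_m, 1),
        "avg_team_size": round(effort_pm / schedule_m, 1),
    }

# Hypothetical 32 KLOC project: roughly 91 person-months over ~14 months.
print(estimate_project(kloc=32))
```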
  • Handbook of Complementary Methods in Education Research
    • Judith L. Green, Gregory Camilli, Patricia B. Elmore (Authors)
    • 2012 (Publication Date)
    • Routledge (Publisher)
    15 Estimation
    Juliet Popper Shaffer, University of California, Berkeley
    DOI: 10.4324/9780203874769-17
    An estimate can be described as “an educated guess about a quantity that is unknown but concerning which some information is available” (Lehmann, 1982). Estimation abounds in our everyday lives. Polls and surveys are conducted constantly and are used to obtain estimates of many quantities: the proportion of high school graduates in a state, the average income and the distribution of incomes in a particular professional field, the average proficiency of students and how it has changed over time. Intervention research results in estimates of other types of quantities: the difference in average length of life of cancer patients getting different medical treatments, the differences in reading proficiency of students given different types of reading instruction. Every aspect of our lives depends on obtaining appropriate estimates of many quantities.
    Generally, estimates are based on data, consisting of some set of units (e.g., individuals, farms, schools, cameras) with values on quantitative variables of interest (e.g., test scores, crop productivity, neighborhood economic level, image quality). Sometimes we are interested only in describing the data, such as when a teacher obtains test scores on the students in her/his class; in that case, estimation does not come into the picture. The interest in these cases is in the kinds of summaries that give good pictures of the data set; this area is called descriptive statistics or data analysis. Descriptions of numerical quantities usually include some measures of central value and spread. There are many possible measures of each: of the former, the mean and median of the data are often given; of the latter, the variance or standard deviation and the interquartile range are commonly supplied. For some types of data, such as majors of students in a class, proportions are of interest. Other descriptions, involving two or more variables (e.g., student scores on a set of tests), are correlations and other measures of relationship, and mean differences and other differences among subgroups. Graphical representations of many kinds are useful and can give good visual descriptions of the complete data distribution. Some aspects of data analysis are discussed in Chapter 30.
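    The summaries named here (mean, median, standard deviation, interquartile range) are all available in Python's standard library; a minimal sketch with made-up test scores:

```python
import statistics

# Hypothetical class test scores, purely to illustrate the summaries
# the chapter names; no real data are implied.
scores = [62, 71, 74, 78, 80, 83, 85, 88, 91, 97]

print("mean   =", statistics.mean(scores))     # central value
print("median =", statistics.median(scores))   # central value, robust to outliers
print("stdev  =", statistics.stdev(scores))    # sample standard deviation (spread)
q1, _, q3 = statistics.quantiles(scores, n=4)  # quartile cut points
print("IQR    =", q3 - q1)                     # interquartile range (spread)
```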
  • The Pragmatics of Mathematics Education

    Vagueness and Mathematical Discourse

    • Tim Rowland (Author)
    • 2003 (Publication Date)
    • Routledge (Publisher)
    …estimation of the number of objects in a set. This choice of focus is partly for the sake of addressing what is perhaps the most obvious aspect of mathematical activity in which one would expect vague language to play a part. Moreover, it is possible in a short (5–10 minute) interview to present appropriate estimation tasks to children in a meaningful way, to obtain responses, and to follow these up from a restricted menu of probes. It is therefore convenient, in designing an age-related study, to use estimation rather than generalization tasks to elicit vague language when dealing with a pupil sample numbered in hundreds rather than tens.

    Estimation

    Clayton (1992, p. 11) classifies the diffuse notion of estimation into three broad categories.
    Computational estimation involves the determination of approximate (typically, mental) answers to arithmetic calculations, e.g. 97π is roughly 100 × 3.1, or 310. Such competence is commended by the National Curriculum for Mathematics in England and Wales (DFE, 1995, p. 25) for the purpose of checking answers to precise calculations for their ‘reasonableness’; pupils, however, seem to regard such checks as trivial or pointless (Clayton, 1992, p. 163).
    Quantitative estimation indicates the magnitude of some continuous physical measure such as the weight of a book, the length of a stick.
    Numerical estimation entails a judgement of ‘numerosity’—the number of objects in a collection. In principle, such a set could be precisely quantified by counting. In practice such precise enumeration may be impracticable or simply judged to be unnecessary, excess to pragmatic requirements.
    Ellis (1968, p. 159) observes that counting may be considered to be a measuring procedure, unique in the non-arbitrariness of the unit of measure. In fact, Clayton merges numerical estimation and quantitative estimation into one analytical category. Sowder (1992) notes that ‘there simply is not a rich research base in estimation’ and that most such research has been on computational estimation. Moreover, ‘Numerosity estimation has received the least research attention, and […] the only two studies located combine it with measurement estimation’ (p. 372). One of those two studies was reported in a short article—by Clayton himself—in Mathematics Teaching…
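    Clayton's computational-estimation example above (97π as roughly 100 × 3.1) amounts to rounding each operand to a convenient number of significant figures before operating. A small sketch of that reasonableness check (our own illustration, not Clayton's):

```python
import math

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

# Clayton's example: 97 * pi is "roughly 100 * 3.1, or 310".
estimate = round_sig(97, 1) * round_sig(math.pi, 2)   # 100 * 3.1 = 310.0
exact = 97 * math.pi                                  # 304.73...
print(f"estimate = {estimate}, exact = {exact:.1f}")
print(f"off by {abs(estimate - exact) / exact:.1%}")  # under 2%
```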
  • Introductory Statistics
    Estimating the values of parameters in mathematical models from measured and observed empirical data is a subfield of statistics and signal processing. To gauge the real value of a function or of a characteristic of a population, an estimation process must be carried out, based on observations made on samples that provide a composite representation of the target population or function. A variety of statistics are used in performing the estimation [46].
    In statistics, one of the most common applications is the estimation of population parameters using sample statistics. For example, a survey may be conducted to determine the percentage of adult citizens of a city who favor a proposal to construct a new sports stadium. A random sample of 200 persons was asked whether or not they endorsed the proposal, and 0.53 (106/200) of the persons in the sample agreed. This sample proportion, 0.53 (or 53 percent), is the point estimate of the population proportion. It is called a point estimate because it consists of a single number, or point [47].
    It is very unusual for the actual population parameter to equal the sample value exactly. Even in the hypothetical situation in which we surveyed the whole city's adult population, it is very implausible that precisely 53 percent of the population would support the proposal. As an alternative, we may present a range of plausible values for the parameter by using a confidence interval [47].
    As a result, point estimates are often supplemented with interval estimates, or confidence intervals, to provide a more complete picture. A confidence interval is constructed by a procedure that captures the population parameter a specified fraction of the time it is used. If the pollster used a procedure that captures the parameter 95 percent of the time, for example, he or she would arrive at a 95 percent confidence interval and conclude that somewhere between 46 percent and 60 percent of the public favors the initiative. In most cases, the media report this result by stating that 53 percent of the population supports the idea, with a margin of error of 7 percent [47].
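    The poll numbers in this excerpt can be reproduced with the standard normal-approximation (Wald) interval for a proportion; a minimal sketch:

```python
import math

# Stadium poll from the excerpt: 106 of 200 respondents in favor.
n, in_favor = 200, 106
p_hat = in_favor / n  # point estimate: 0.53

# 95% normal-approximation (Wald) interval: p_hat +/- 1.96 * SE.
se = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se    # about 0.069, i.e. the "7 percent" margin of error
low, high = p_hat - margin, p_hat + margin

print(f"point estimate = {p_hat:.2f}")
print(f"95% CI = ({low:.2f}, {high:.2f})")  # (0.46, 0.60), as in the text
```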
  • Cognition and Chance

    The Psychology of Probabilistic Reasoning

    If people were more capable of estimation and simple calculation, many obvious inferences would be drawn (or not), and fewer ridiculous notions would be entertained. (Paulos, 1990, p. 17)
    The ability to estimate has not received the attention it deserves in education. It is an extraordinarily useful ability. Evidence that this is beginning to be realized is perhaps seen in one of six major recommendations in the 1980 agenda-setting report of the National Council of Teachers of Mathematics: “Teachers should incorporate estimation activities into all areas of the program on a regular and sustaining basis, in particular encouraging the use of estimating skills to pose and select alternatives and to assess what a reasonable answer might be” (p. 7). The importance of estimation skills has also been stressed by the Curriculum Framework Task Force of the Mathematical Sciences Education Board (1988). Although estimates can be made of many types of variables—for example, quantities, magnitudes, durations—here attention is limited to probabilistic or statistical variables.

    Estimates of Central Tendency and Variability

    When asked to observe a set of numbers and to estimate some measure of central tendency, such as its mean, people are able under some conditions to produce reasonably accurate estimates (Beach & Swensson, 1966; Edwards, 1967; G. R. Peterson & Beach, 1967), although systematic deviations from actual values have also been reported (N. H. Anderson, 1964; I. P. Levin, 1974, 1975). When the numbers whose means are to be estimated have been presented sequentially, effects both of primacy (influence of the first few numbers in the sequence) (Hendrick & Costantini, 1970) and of recency (influence of the last few numbers) (N. H. Anderson, 1964) have been obtained. For skewed distributions, estimates of means are likely to be biased in the direction of medians (C. R. Peterson & Beach, 1967).
    People's ability to estimate variability has been studied but less extensively than their ability to estimate means. One focus of interest has been how perceived variability depends on the mean around which the variability occurs. Some investigators have reported results suggesting that perception of a distribution's variability is not influenced by the distribution's mean (I.P.Levin, 1975; Pitz, Leung, Hamilos, & Terpening, 1976). Others have found the two estimates to be related systematically. Estimates of the variability of a set of numbers have been noted to decrease as the mean of the set increased and the (absolute) variability remained the same; in other words, variances of the same magnitude around a small mean and around a large mean appeared larger in the former case (Beach & Scopp, 1968; Hofstatter, 1939; Lathrop, 1967). There is some question as to the extent to which this relationship reflects a true misperception as opposed to a confusion of variability in absolute terms with variability relative to a mean. A standard deviation of 20 pounds in the distribution of weights of 100 freight cars seems a good bit smaller than a standard deviation of 20 pounds in the distribution of weights of 100 people, even though in absolute terms it is not. It could also stem, at least in part, from the linguistic convention of making the interpretation of such words as small and large