Physics

Error Calculation

Error calculation in physics involves determining the uncertainty or margin of error in a measurement or calculation. It is important for understanding the reliability and accuracy of experimental results. Error calculation typically involves techniques such as propagation of errors, standard deviation, and uncertainty analysis to quantify and account for the potential errors in measurements and calculations.

Written by Perlego with AI-assistance

8 Key excerpts on "Error Calculation"

  • Symbolic Mathematics for Chemists

    A Guide for Maxima Users

    • Fred Senese(Author)
    • 2018(Publication Date)
    • Wiley
      (Publisher)
    8 Error Analysis
    Every careful measurement in science is always given with the probable error… every observer admits that he is likely wrong, and knows about how much wrong he is likely to be.
    – Bertrand Russell [40]
    Error analysis is an essential part of the process of science. To draw valid conclusions from experimental data, we must have some estimate of the precision (reproducibility) and accuracy (correctness) of the measurements. To have confidence in computed results, we must propagate errors in the data through our calculations. To compare our results with accepted values, we have to objectively compare the size of the experimental error with the size of any observed discrepancies. To design efficient experiments, we have to know the relative sizes of contributing errors so that we can identify and minimize the errors that limit the accuracy and precision of the final results.
    In this chapter, we’ll use symbolic mathematics to estimate errors in datasets and propagate them through calculations. We’ll also see how statistics and assumptions about the distribution of errors can be used to objectively test hypotheses about the data.
    Throughout this chapter we will use functions from the stats package, which must be loaded before some of the functions will work.
    (%i1)
    load(stats)$

    8.1 Classifying Experimental Errors

    Some of the uncertainty in a measurement can be estimated by repeating the measurement under identical conditions. Consider a dataset consisting of 20 replicate readings x_i from a digital thermometer.
    (%i1) data: [23.9, 24.0, 24.0, 24.0, 24.0, 24.0, 24.0, 24.0, 24.0, 24.0,
                 24.1, 24.1, 24.1, 24.1, 24.1, 24.1, 24.1, 24.1, 24.2, 24.2]$
    How can we estimate the “true” temperature from this data? How uncertain may that estimate be, based on the scatter in the data?
    The answers to both questions lie in the distribution of the measurements. Note that some values occur more frequently than others; these measurements are more reproducible and perhaps more reliable than the others. The frequencies for each unique value can be found with the discrete_freq function, found in the stats package.
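The frequency count and scatter estimate described above can also be sketched outside Maxima. The following Python lines (not part of the excerpt) mirror what discrete_freq and a standard-deviation calculation would report for the thermometer data:

```python
from collections import Counter
from statistics import mean, stdev

# The 20 replicate thermometer readings from the excerpt
data = [23.9, 24.0, 24.0, 24.0, 24.0, 24.0, 24.0, 24.0, 24.0, 24.0,
        24.1, 24.1, 24.1, 24.1, 24.1, 24.1, 24.1, 24.1, 24.2, 24.2]

freq = Counter(data)          # analogous to Maxima's discrete_freq
m = mean(data)                # best estimate of the "true" temperature
s = stdev(data)               # sample standard deviation (the scatter)
u = s / len(data) ** 0.5      # standard uncertainty of the mean

print(freq)                   # Counter({24.0: 9, 24.1: 8, 24.2: 2, 23.9: 1})
print(m, s, u)
```

The most frequent reading (24.0) dominates the estimate, and the uncertainty of the mean is smaller than the scatter of individual readings by a factor of the square root of 20.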
  • Measurement Data Modeling and Parameter Estimation
    • Zhengming Wang, Dongyun Yi, Xiaojun Duan, Jing Yao, Defeng Gu(Authors)
    • 2016(Publication Date)
    • CRC Press
      (Publisher)
    Records from measurement equipment used to measure an object are measurement data. To obtain the data, we have to go through the following three steps: selection and installation of devices, observation, and reading. In any case, it is impossible to obtain measurement data that are the same as the true values. The accuracy limitations of measurement devices, the subjective factors of measurers, and the environmental conditions will inevitably introduce various errors into measurement data.
    Measurement practice shows that multiple measurements of one physical quantity inevitably have numerical fluctuations. Such a fluctuation is usually caused by the uncertainty of measurement data. The difference between the measurement datum and the true value due to the accuracy limitations of measurement devices, the uncertainty of device performance, the subjective factors of measurers, and the environmental conditions is called measurement error.
    Measurement error always exists. Therefore, measurement data cannot be applied directly in scientific research and engineering practice that require high accuracy. A series of analysis and processing of data is necessary. This includes engineering analysis (analysis of measurement devices, measurement principles, measurement process, and data rationales) and mathematical analysis (data modeling, error analysis, parameter estimation, hypothesis test and precision evaluation, etc.).
    In dealing with measurement data mathematically, we should
    1. Take full advantage of the characteristics of the measured physical quantity to establish an accurate mathematical model for its true value.
    2. Take full advantage of the characteristics of the measuring device to build mathematical models for various measurement errors.
    3. Use a solid mathematical theory as a basis.
    4. Make full use of the accuracy, speed, and storage capacity of computers.

    1.1.2 Classification of Measurement

    1.1.2.1 Concept of Measurement
    Definition 1.1
    The quantity whose physical characteristics (states, movements, etc.) can be evaluated or expressed by numerical values is called a physical quantity. The process of comparing a physical quantity with a value of unit through experimenting is called a measurement. The process can be expressed as
  • Engineering Surveying
    • W Schofield, Mark Breach(Authors)
    • 2007(Publication Date)
    • CRC Press
      (Publisher)
    Survey results can never be exactly true for a number of reasons. Surveying equipment, like any other piece of equipment in the real world, can only be manufactured to a certain level of precision. This means that there is a limit upon the quality of a measurement that can be made by any instrument. Although survey measuring procedures are designed to remove as many errors as possible, there will always be some sources of error that cannot be compensated for. Whatever the scale on the instrument, be it digital or analogue, there is a limit to the number of significant digits that it can be read to. Surveyors are trained to get the most out of their instrumentation, but no observer can make perfect measurements. There is a limit to the steadiness of the hand and the acuity of the eye. All survey measurements are subject to external factors: for example, all observed angles are subject to the effects of refraction, and observed distances, whether EDM or tape, will vary with temperature. The process of getting from observations to coordinates involves reductions of, and corrections to, observed data. Some mathematical formulae are rigorous, others are approximate. These approximations and any rounding errors in the computations will add further error to the computed survey results.
    The surveyor's task is to understand the source and nature of the errors in the survey work and appreciate how the observing methods and the computing process may be designed to minimize and quantify them. It is important to understand the nature of the measurement process. Firstly, the units in which the measurement is to take place must be defined; for example, distances may be measured in metres or feet, and angles may be in degrees, gons or mils. Next, the operation of comparing the measuring device with the quantity to be measured must be carried out, for example laying a tape on the ground between two survey stations. A numerical value in terms of the adopted units of measure is then allocated to the measured quantity. In one of the examples already quoted, the readings of the tape at each station are taken, and the difference between them is the allocated numerical value of the distance between the stations. The important point is that the true value of the interstation distance is never known; it can only be estimated by an observational and mathematical process.
    Since the true value of a measurement or coordinate can never be known it is legitimate to ask what is the accuracy or the precision of, or the error in, the estimate of that measurement or coordinate. Accuracy, precision and error have specific meanings in the context of surveying. Accuracy is a measure of reliability. In other words
    Accuracy = True value − Most probable value
    where the ‘most probable value’ is derived from a set of measurements. In the example above, the most probable value might be the arithmetic mean of a number of independent measurements. Since the true value is never known, it is also impossible for the accuracy to be known; it can only be estimated. Accuracy can be estimated from ‘residuals’: for example, in the two sets of measurements below, which mean is the more accurate, that of the measurements of line AB or that of line XY?
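The residual comparison the excerpt alludes to can be illustrated with hypothetical figures (the actual AB and XY observations are not reproduced in the excerpt). A minimal Python sketch, taking the arithmetic mean as the most probable value:

```python
from statistics import mean

# Hypothetical repeated distance observations, in metres
line_AB = [25.341, 25.339, 25.342, 25.340]
line_XY = [25.350, 25.331, 25.346, 25.336]

for name, obs in (("AB", line_AB), ("XY", line_XY)):
    mpv = mean(obs)                       # most probable value
    residuals = [x - mpv for x in obs]    # residual = observation - MPV
    spread = max(abs(r) for r in residuals)
    print(name, round(mpv, 5), [round(r, 5) for r in residuals], spread)
```

The set with the smaller residuals (AB here) is the more internally consistent, although, as the excerpt stresses, neither mean can be compared against a knowable true value.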
  • Essential Mathematics and Statistics for Forensic Science
    • Craig Adam(Author)
    • 2011(Publication Date)
    • Wiley
      (Publisher)
    10 Statistics in the evaluation of experimental data: computation and calibration
    Introduction: What more can we do with statistics and uncertainty?
    Having a sound understanding of the fundamentals of statistics associated with random variations in measurement allows us to develop further a range of calculations that may be applied in the analysis of experimental data. This chapter will deal with two important areas. First, when the outcomes of experimental work are used in further calculations, how can we determine the uncertainty in our results at the end of such a process? This requires computational skills in the propagation of errors through a variety of functions and formulae. Second, how may experimental uncertainties be accommodated in graphical analysis, particularly graphs related to the calibration or interpretation of analytical measurements? To deal with this, a more detailed understanding of topics such as the identification of outliers and the use of linear regression techniques is required. However, let us begin by reviewing our current understanding of the statistical interpretation of uncertainty.
    10.1 The propagation of uncertainty in calculations
    10.1.1 Review of uncertainty in experimental measurement
    Throughout much of this book attention has been given to understanding and quantifying the uncertainty associated with experimental measurements. It is useful now to review how this topic has developed up to this point. In Section 1.3, discussion focused on a qualitative appreciation of uncertainty, its origins and how its magnitude might be estimated in particular instances, for example using the range method. Later, in Section 9.1.2, it was found that, since a series of repeated measurements effectively samples a normal distribution, the concept of confidence limits could be invoked to interpret meaningfully numerical estimates of uncertainty. The crucial parameter in such calculations is the standard deviation for a set of measurements. However, in dealing with a sample taken from a much larger population, the calculation of the standard error, followed by the use of the t
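As a concrete illustration of propagating uncertainty through a formula (the excerpt's own worked functions are not shown here), the standard quadrature rule for a quotient such as a density rho = m/V can be sketched in Python; the input values are purely illustrative:

```python
from math import sqrt

def ratio_uncertainty(m, u_m, V, u_V):
    """Propagate independent standard uncertainties through rho = m / V
    using the quadrature rule for a quotient:
    u_rho / rho = sqrt((u_m / m)^2 + (u_V / V)^2)."""
    rho = m / V
    u_rho = rho * sqrt((u_m / m) ** 2 + (u_V / V) ** 2)
    return rho, u_rho

# Illustrative values: mass 12.345 +/- 0.002 g, volume 4.56 +/- 0.01 mL
rho, u = ratio_uncertainty(12.345, 0.002, 4.56, 0.01)
print(f"{rho:.4f} +/- {u:.4f} g/mL")
```

Note that the relative uncertainty of the volume dominates here, which is exactly the kind of information needed to decide which measurement limits the final precision.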
  • Applied Metrology for Manufacturing Engineering
    • Ammar Grous(Author)
    • 2013(Publication Date)
    • Wiley-ISTE
      (Publisher)
    We note that the error is totally insignificant: it appears only in the seventh decimal place, which does not count because the results are rounded to three decimal places. We note that the arithmetic mean remained equal to 0.6547740. The quantified uncertainty is then:

    1.9. Principle of uncertainty calculation: types A and B

    When performing measuring experiments on mechanical parts, they should be repeated several times under the same conditions in the hope of obtaining a good trend. Unfortunately, there are always dispersions, and for this reason, we fall back on statistical modeling. Thus, we consider a mathematical expectation of the first order, which is the mean µ, and then we calculate the variance σ² and a standard deviation σ on a sample of size n. We then carry out mathematical statistics approaches [NIS 94, PRI 96, TAY 94].
    From a practical measure for true value x_i, we have to calculate the average of n samples, the values being, obviously, assumed identical. The ensuing systematic errors may, however, be reduced by applying corrections. This appeals to a sense of analysis that the operator is, unfortunately, not always expected to master. The metrologist should be rigorous to obtain a good correlation between physical measurements and the figures that are expected to reflect their numerical representation. Knowledge of the measuring process and of the fundamental principles of physics is one of the best guarantors of the conduct of the metrology project. In practice, errors are not dealt with quickly. Various errors occur during the measurement. We recall a few of them:
    – temperature and pressure;
    – precision of instruments and position of the feature being measured;
    – deformation of the mechanical part (we discuss this in Chapter 3);
    – disruption of the quantity measured by the presence of the measuring instrument;
    – error due to the measurement method itself;
    – error due to the operator, and so on.
    The errors and their causes being enumerated, we now attempt to deal with the propagation law and the required corrections. Rigorous work of good metrologists means thinking about the errors not yet identified. Those already identified will be subject to potential adjustments to compensate. While these adjustments would be judiciously realized, there is still doubt on the value of the correction, and this is where a rigorous mind is needed. Among the various corrections, three categories should be defined: calibration, environmental, and standardization corrections:
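The Type A procedure described above (mean, variance, and standard deviation of n repeated readings) can be sketched as follows; the readings are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical repeated readings of a part dimension, in mm
readings = [10.02, 10.01, 10.03, 10.02, 10.02, 10.01, 10.03, 10.02]

n = len(readings)
mu = mean(readings)        # first-order expectation (the sample mean)
s = stdev(readings)        # sample standard deviation sigma
u_A = s / n ** 0.5         # Type A standard uncertainty of the mean

print(f"mean = {mu:.4f} mm, s = {s:.4f} mm, u_A = {u_A:.4f} mm")
```

Type B contributions (instrument specifications, calibration certificates, and the environmental effects listed above) would then be combined with this Type A term to give the total uncertainty.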
  • Planning and Executing Credible Experiments

    A Guidebook for Engineering, Science, Industrial Processes, Agriculture, and Business

    • Robert J. Moffat, Roy W. Henk(Authors)
    • 2021(Publication Date)
    • Wiley
      (Publisher)
    Uncertainty Analysis is often done “after the fact,” mainly to satisfy contractual or editorial requirements. What a waste! It is far more useful when it is done first and used in all phases of the experiment: planning, executing, and reporting. In the planning stage, an a priori Uncertainty Analysis can assess whether or not the instrumentation is accurate enough to use for the intended purpose and whether the method is appropriate or not. During execution, online Uncertainty Analysis can be used to monitor the scatter on repeated trials, providing a warning that the results are drifting. Accurately reporting the Uncertainty in the result allows meaningful comparison with other data in the literature.

    10.1.1 Distinguish Error and Uncertainty

    The error in a measurement is defined as the difference between the True Value and its measured value. This definition is clear and absolute. If we already knew the True Value, however, we need not perform an experiment. Since we don’t know the exact True Value, we cannot know the error for certain. This leaves us experimentalists with the task of estimating the error; we plan to estimate an Uncertainty band that encompasses the error 95% of the time.
    As an aside: when do we ever know an absolute True Value? We do know that the speed of light in a vacuum is 299,792,458 m/s. We know absolute zero temperature and that heat cannot be drawn from something at absolute zero. However, there is no pure vacuum in this universe; neither have we achieved absolute zero temperature. A third instance truly occurs when we know the exact values for each member of the total population. A fourth instance may occur in everyday life or engineering; we assume that the True Value of a property is the value:
    1. tabulated by the National Technical Information Service (NTIS), formerly the National Bureau of Standards,
    2. provided as baseline data, or
    3. referenced to one of the basic conservation laws of physics or engineering.
    Other examples of the fourth instance: we assume the gasoline pump provides exactly one gallon of fuel at the advertised price; we assume the liter water bottle contains 1000 ml of fluid; we assume that when electrical probes contact, the resistance is zero. When we calibrate an instrument or qualify an experiment, keep alert to note your assumptions.
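The 95% uncertainty band described above is conventionally built from the standard uncertainty of the mean and a Student's t factor. A sketch with hypothetical trial data, taking the two-sided t value for 9 degrees of freedom as approximately 2.262:

```python
from statistics import mean, stdev

# Hypothetical repeated trials of some measured quantity
trials = [4.98, 5.02, 5.01, 4.99, 5.00, 5.03, 4.97, 5.01, 5.00, 4.99]

n = len(trials)
xbar = mean(trials)
u = stdev(trials) / n ** 0.5    # standard uncertainty of the mean
t95 = 2.262                     # two-sided Student's t, 9 d.o.f., 95% level
half_width = t95 * u            # half-width of the 95% uncertainty band

print(f"{xbar:.3f} +/- {half_width:.3f} (95% confidence)")
```

Monitoring this band during execution, as the excerpt suggests, makes drift visible: if new trials fall repeatedly outside it, something in the experiment has changed.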
  • Principles of Nuclear Radiation Detection
    • Geoffrey G. Eichholz(Author)
    • 2018(Publication Date)
    • CRC Press
      (Publisher)
    CHAPTER 3 COUNTING STATISTICS AND ERROR DETERMINATIONS
    INTRODUCTION
    In reporting the outcome of any laboratory measurements, it is incumbent upon the experimenter to report not only the results of his measurements, but also the accuracy with which they were made or, more appropriately, the confidence with which they are held to be valid. Consequently, it is important that the experimentalist be aware of the various sources of error in his work and able to assess their significance. The mathematical framework for this assessment is the science of statistics.
    It is particularly important that statistics be applied to any laboratory work which includes radiation detection as part of its data collection, since the very process of radioactive decay is probabilistic in nature and the result of any counting experiment is perforce a random quantity. Since the result of any physical measurement can be considered as a random variable, the basic ideas developed in this chapter are not limited to measurements of radiation. They are applicable to all situations in which measurements are subject to some source of random error.
    UNCERTAINTY IN THE MEASUREMENT PROCESS
    The errors that accompany most laboratory measurements can be divided into two classes: systematic errors and random errors. Systematic errors result from faults in the measurement process or its interpretation that lead to a bias in the results. For example, if one measured the length of a table with a tape measure that had been stretched through misuse, the result would consistently be less than the actual length of the table. The measured length would be biased toward low values by the incorrect tape; the results would have a systematic error.
    On the other hand, the proper use of accurately calibrated instruments is no guarantee that the result of a measurement will be errorless. For example, even if perfectly accurate tape measures were used, it is unlikely that forty people would agree to the nth significant figure on the length of a table. Individual variation in holding the tape, reading the scale, etc. would produce a range of results for the measured length, regardless of how carefully the measurements were made. However, it is unlikely that there would be any bias in this set of measurements. A result chosen from this group is likely to contain an error, but the error will be random, not systematic.
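Although the excerpt stops short of formulas, a standard result for the Poisson statistics of radioactive counting (not stated in the excerpt itself) is that a count of N events has a standard deviation of approximately sqrt(N), so the relative uncertainty falls as 1/sqrt(N):

```python
from math import sqrt

def count_uncertainty(n_counts):
    """Standard deviation and relative uncertainty of a single
    Poisson count of n_counts events."""
    sigma = sqrt(n_counts)
    rel = sigma / n_counts
    return sigma, rel

for N in (100, 10_000, 1_000_000):
    sigma, rel = count_uncertainty(N)
    print(f"N = {N}: sigma = {sigma:.0f}, relative = {rel:.2%}")
```

This is why counting longer (accumulating more events) is the basic remedy for random error in a counting experiment, while it does nothing for systematic errors such as a miscalibrated detector.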
  • Theory of Mental Tests
    • Harold Gulliksen(Author)
    • 2013(Publication Date)
    • Routledge
      (Publisher)
    Basic Equations Derived from a Definition of Random Error

    1. Introduction

    We shall begin by assuming the conventional objective testing procedure in which the person is presented with a number of items to be answered. Each answer is scored as correct or incorrect, and a simple or a weighted sum of the correct answers is taken as the test score. The various procedures for determining which items to use and the best weighting methods will be considered later. For the present we assume that the numerical score is based on a count, one or more points for each correct answer and zero for each incorrect answer, and we turn our attention to the determination of the accuracy of this score.
    When psychological measurement is compared with the type of measurement found in physics, many points of similarity and difference are found. One of the very important differences is that the error of measurement in most psychological work is very much greater than it is in physics. For example, Jackson and Ferguson (1941) resorted to specially constructed “rubber rulers” in order to reduce the reliability of length measurements to values appreciably below .99. The estimation of the error in a set of test scores and the differentiation between “error” and “true” score on a test are central problems in mental measurement.

    2. The basic assumption of test theory

    It is necessary to make some assumption regarding the relationship between true scores and error scores. Let us define three basic symbols.
    • X_i = the score of the ith person on the test under consideration.
    • T_i = the true score of the ith person on this test.
    • E_i = the error component for the same person.
    In defining these symbols it is assumed that the gross score has two components. One of these components (T) represents the actual ability of the person, a quantity that will be relatively stable from test to test as long as the tests are measuring the same thing. The other component (E) is an error. It is due to the various factors that may cause a person sometimes to answer correctly an item that he does not know, and sometimes to answer incorrectly an item that he does know. So far, it will be observed, there is no proposition subject to any experimental check. We have simply said that there is some number T that would be the person’s correct score, and that the obtained score (X) does not necessarily equal T
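The X = T + E decomposition can be illustrated by a small simulation (not from the excerpt): with a fixed true score and zero-mean random error, the mean observed score over many testings converges to the true score. A hypothetical sketch:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# One examinee with true score T, retested many times; each observed
# score X = T + E, where E is zero-mean random error.
T = 72.0
observed = [T + random.gauss(0, 3) for _ in range(10_000)]

X_bar = sum(observed) / len(observed)
print(f"true score T = {T}, mean observed score = {X_bar:.2f}")
```

The individual scores scatter by about 3 points, yet their mean sits very close to T, which is the intuition behind using repeated or parallel measurements to separate "true" score from error.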