Technology & Engineering

Nonlinear Regression

Nonlinear regression is a statistical method used to model complex relationships between variables that cannot be adequately represented by a linear model. It allows curves and other nonlinear patterns to be fitted to the data, making it a valuable tool in engineering and technology for analyzing and predicting nonlinear phenomena.

Written by Perlego with AI-assistance

5 Key excerpts on "Nonlinear Regression"

  • Introduction to Linear Regression Analysis
    • Douglas C. Montgomery, Elizabeth A. Peck, G. Geoffrey Vining (Authors)
    • 2021 (Publication Date)
    • Wiley (Publisher)
    CHAPTER 12: INTRODUCTION TO NONLINEAR REGRESSION
    Linear regression models provide a rich and flexible framework that suits the needs of many analysts. However, linear regression models are not appropriate for all situations. There are many problems in engineering and the sciences where the response variable and the predictor variables are related through a known nonlinear function. This leads to a nonlinear regression model. When the method of least squares is applied to such models, the resulting normal equations are nonlinear and, in general, difficult to solve. The usual approach is to minimize the residual sum of squares directly by an iterative procedure. In this chapter we describe how to estimate the parameters of a nonlinear regression model and how to make appropriate inferences about the model parameters. We also illustrate computer software for nonlinear regression.
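As a concrete sketch of this iterative approach (the exponential model, data, and starting values below are illustrative, not from the chapter), scipy's `curve_fit` minimizes the residual sum of squares numerically:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical nonlinear model: y = theta1 * exp(theta2 * x) + error
def model(x, theta1, theta2):
    return theta1 * np.exp(theta2 * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 50)
y = model(x, 2.0, 1.5) + rng.normal(scale=0.2, size=x.size)

# curve_fit minimizes the residual sum of squares iteratively
# (Levenberg-Marquardt by default); starting values matter.
theta_hat, cov = curve_fit(model, x, y, p0=[1.0, 1.0])
print(theta_hat)              # estimates of theta1, theta2
print(np.sqrt(np.diag(cov)))  # approximate standard errors
```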

    12.1 LINEAR AND NONLINEAR REGRESSION MODELS

    12.1.1 Linear Regression Models

    In previous chapters we have concentrated on the linear regression model
    (12.1)   $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k + \varepsilon$
    These models include not only the first-order relationships, such as Eq. (12.1), but also polynomial models and other more complex relationships. In fact, we could write the linear regression model as
    (12.2)   $y = \beta_0 + \beta_1 z_1 + \beta_2 z_2 + \cdots + \beta_k z_k + \varepsilon$
    where $z_i$ represents any function of the original regressors $x_1, x_2, \ldots, x_k$, including transformations such as $\exp(x_i)$, $\sqrt{x_i}$, and $\sin(x_i)$. These models are called linear regression models because they are linear in the unknown parameters, the $\beta_j$, $j = 1, 2, \ldots, k$.
    We may write the linear regression model (12.1) in a general form as
    (12.3)   $y = \mathbf{x}'\boldsymbol{\beta} + \varepsilon = f(\mathbf{x}, \boldsymbol{\beta}) + \varepsilon$
    where $\mathbf{x}' = [1, x_1, x_2, \ldots, x_k]$. Since the expected value of the model errors is zero, the expected value of the response variable is
    $E(y) = f(\mathbf{x}, \boldsymbol{\beta})$
    We usually refer to $f(\mathbf{x}, \boldsymbol{\beta})$ as the expectation function
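Since models like (12.2) are linear in the unknown β's, ordinary least squares still gives a closed-form fit once the regressors are transformed. A minimal sketch in numpy, with invented data and coefficients (the transformations exp, square root, and sine come from the passage above):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 3.0, size=100)

# z-regressors: fixed functions of x keep the model linear in beta
Z = np.column_stack([np.ones_like(x), np.exp(x), np.sqrt(x), np.sin(x)])
beta_true = np.array([1.0, 0.5, -2.0, 3.0])
y = Z @ beta_true + rng.normal(scale=0.3, size=x.size)

# Ordinary least squares: a closed-form (non-iterative) solution
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(beta_hat)
```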
  • Statistical Applications for Environmental Analysis and Risk Assessment
    • Joseph Ofungwu(Author)
    • 2014(Publication Date)
    • Wiley
      (Publisher)
    However, nonlinear regression also has limitations:
    1. The form of the nonlinear model has to be specified by the analyst, and although several "standard" nonlinear statistical models are available (e.g., exponential growth or decay models and power models), it is not always easy to find a suitable model for the data in question. Computer programs exist that can automatically fit several equations or models to the data and then choose the best-fitting one, but such a model may not be meaningful because the computer has no knowledge of the process being modeled.
    2. Unlike linear regression, where "closed-form" solutions can always be obtained analytically for the regression coefficients, closed-form solutions are usually not possible with nonlinear regression, necessitating approximate solutions obtained by numerical, iterative methods instead.
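To make the second limitation concrete, here is a brief sketch (both models and all data are invented for illustration): the linear coefficients come from solving the normal equations in closed form, while the exponential-decay parameters require an iterative solver.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0, 5, 60)

# Linear model: closed-form solution of the normal equations X'X b = X'y
X = np.column_stack([np.ones_like(x), x])
y_lin = 3.0 - 0.8 * x + rng.normal(scale=0.2, size=x.size)
b = np.linalg.solve(X.T @ X, X.T @ y_lin)

# Exponential decay y = a * exp(-k x): no closed form, so iterate
y_exp = 4.0 * np.exp(-1.2 * x) + rng.normal(scale=0.1, size=x.size)
fit = least_squares(lambda p: p[0] * np.exp(-p[1] * x) - y_exp, x0=[1.0, 1.0])
print(b, fit.x)
```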
  • A Computational Approach to Statistical Learning
    • Taylor Arnold, Michael Kane, Bryan W. Lewis (Authors)
    • 2019 (Publication Date)
    4 Linear Smoothers

    4.1 Non-Linearity

    Linear regression has excellent theoretical properties and, as we have seen, can be readily computed from observed data. Using ridge regression and principal component analysis, we can tune these models to optimize for predictive error loss. Indeed, linear models are used throughout numerous fields for prediction and inference. One situation in which linear models begin to perform non-optimally is when the relationship between the response y and the data is neither linear nor closely approximated by a linear relationship.
    As an example of a non-linear model, consider observing a variable $y_i$ governed by
    (4.1)   $y_i = \cos(\beta_1 \cdot x_i) + e^{-x_i \cdot \beta_2} + \varepsilon_i$
    for some scalar value $x_i$, unknown constants $\beta_1$ and $\beta_2$, and the random noise variable $\varepsilon_i$. A common approach for estimating the unknown parameters given a set of observations is to again minimize the sum of squared residuals. This sum is a well-defined function over the set of allowed $\beta_j$'s and often, as in this case, twice differentiable. While there is no closed-form solution analogous to the linear case, the minimizing estimates can usually be found using a general-purpose first- or second-order optimization technique. This approach is known as non-linear least squares and has significant theoretical guarantees over a wide class of problem formulations.
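A minimal sketch of this approach for the model in Equation 4.1, using simulated data and scipy's general-purpose quasi-Newton optimizer (the true parameter values and starting point are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
beta_true = np.array([2.0, 0.7])
x = np.linspace(0, 4, 200)
y = np.cos(beta_true[0] * x) + np.exp(-x * beta_true[1]) \
    + rng.normal(scale=0.1, size=x.size)

# Sum of squared residuals as a function of beta (Equation 4.1)
def rss(beta):
    fitted = np.cos(beta[0] * x) + np.exp(-x * beta[1])
    return np.sum((y - fitted) ** 2)

# Quasi-Newton minimization; nonlinear objectives can have local
# minima, so the starting point matters.
fit = minimize(rss, x0=np.array([1.5, 1.0]), method="BFGS")
print(fit.x)  # estimates of beta_1, beta_2
```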
    What happens when we do not know a specific formula for $y_i$ that can be written down in terms of a small set of unknown constants $\beta_j$? Models of the form seen in Equation 4.1 often arise in engineering and science applications where the specific causal mechanism for the response $y_i$ is well understood. In statistical learning this is rarely the case. More often we just know that
    (4.2)   $\mathbb{E}[y_i] = f(x_i)$
    holds for some unknown function $f$. We may suspect that $f$ has some general properties; depending on the application it may be reasonable to assume that $f$ is continuous, has a bounded derivative, or is monotonically increasing in $x_i$. As we do not know a specific formula for $f$ in terms of parameters $\beta_j$, the model given in Equation 4.2 is known as non-parametric regression. Common estimators for estimating the non-parametric regression function $f$
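One such estimator is the Nadaraya–Watson kernel smoother, a linear smoother in the sense of this chapter's title. A minimal sketch, with the kernel, bandwidth, and simulated data as illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 4, 150))
y = np.cos(2.0 * x) + np.exp(-0.7 * x) + rng.normal(scale=0.1, size=x.size)

def nadaraya_watson(x_new, x, y, bandwidth=0.25):
    """Kernel-weighted local average: linear in the observed y values."""
    # Gaussian kernel weights between each query point and each observation
    w = np.exp(-0.5 * ((x_new[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

grid = np.linspace(0, 4, 50)
f_hat = nadaraya_watson(grid, x, y)  # estimate of f without any beta_j
```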
  • Statistical Techniques in Geographical Analysis
    • Dennis Wheeler, Gareth Shaw, Stewart Barr (Authors)
    • 2013 (Publication Date)
    • Routledge (Publisher)
    9 Simple Linear and Non-Linear Regression

    9.1 Introduction

    In Chapter 8 we examined methods of assessing the statistical relationship between two variables. The correlation coefficient is one such measure of association but, useful as this measure might be, it does not allow us to predict the numerical value of one variable based on the other. Neither does correlation make any assumption about causation, for example that it is one of the variables that controls the behaviour of the other. The importance of regression analysis is that it goes much further than correlation: it enables us to make a numerical prediction of one variable by reference to another. In order, however, to embark on this procedure we must decide on the direction of causation, i.e. which is the dependent variable and which is the independent, which variable controls the other. Statistical convention dictates that the dependent variable is termed Y, and the independent variable X. For example, mean annual UK rainfall is known to increase with altitude. In this situation rainfall (Y) could be argued to depend on altitude (X). The reverse would not make scientific sense, although in some situations the dependency relationships are far less easy to distinguish, especially in areas of human geography.
    We will deal firstly with linear regression, in which incremental changes in X produce a consistent response in Y across the range of observed Xs, i.e. the two variables are linearly related. We will also only consider the case of a single predictor (Chapter 10 deals with multiple predictors). Simple linear regression is a valuable predictive and modelling tool, allowing geographers to recreate, in numerical terms, the way in which one variable controls another. It is, however, a parametric test and requires data at the interval or ratio scale. It also requires that these data are not significantly skewed in their distributions. Although pairs of variables may be found to be linearly related, there is no implication of perfection in the relationship and, as exemplified in Figure 8.5
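As a sketch of the fitting step itself, the rainfall-altitude example can be expressed in a few lines (the paired values below are invented for illustration):

```python
import numpy as np

# Hypothetical paired observations: altitude (m) and mean annual rainfall (mm)
altitude = np.array([10, 150, 300, 450, 600, 800])      # X, independent
rainfall = np.array([620, 710, 840, 980, 1100, 1260])   # Y, dependent

# Least-squares slope and intercept for Y = a + b * X
b, a = np.polyfit(altitude, rainfall, deg=1)
print(f"rainfall = {a:.1f} + {b:.3f} * altitude")

# Numerical prediction of Y at a new value of X
print(a + b * 500)
```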
  • Handbook of Regression Analysis With Applications in R
    • Samprit Chatterjee, Jeffrey S. Simonoff (Authors)
    • 2020 (Publication Date)
    • Wiley (Publisher)
    for both groups). It is apparent that the estimated velocities are not very different from those based on all of the data.

    12.5 Summary

    We have only briefly touched on the basics of nonlinear regression fitting in this chapter. Bates and Watts (1988) and Seber and Wild (1989) provide much more thorough discussion of such models, including more details on estimation. They also discuss how differential geometry can be used to construct curvature measures (intrinsic curvature and parameter-effects curvature, respectively) that quantify the extent to which the linear Taylor series approximation fails for a given data set and given model parameterization.

    KEY TERMS

    Nonlinear least squares
    A method for estimating the parameters of a nonlinear regression model. It is appropriate when the additive error term is (roughly) normally distributed with constant variance, and it requires an iterative procedure to find the solution.
    Nonlinear regression model
    A model for the relationship between a response and predictor(s) in which at least one parameter does not enter linearly into the model.
    Taylor series approximation
    A method of approximating a nonlinear function with a polynomial. The first-order (linear) Taylor series approximation is the basis of standard inferential tools when fitting nonlinear regression models.
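To tie these key terms together, here is a sketch of the Gauss–Newton iteration, which repeatedly replaces the nonlinear model by its first-order Taylor series approximation and solves the resulting linear least-squares problem (model and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 5, 80)
y = 3.0 * np.exp(-0.9 * x) + rng.normal(scale=0.05, size=x.size)

def f(theta):
    return theta[0] * np.exp(-theta[1] * x)

def jacobian(theta):
    # Partial derivatives of f with respect to theta1 and theta2
    e = np.exp(-theta[1] * x)
    return np.column_stack([e, -theta[0] * x * e])

# Gauss-Newton: linearize f around the current estimate (first-order
# Taylor series) and solve the resulting linear least-squares problem.
theta = np.array([1.0, 1.0])
for _ in range(20):
    J, r = jacobian(theta), y - f(theta)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    theta = theta + step
print(theta)  # should approach the true values (3.0, 0.9)
```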