
2.3 Maximum likelihood and Bayesian estimation methods

2 min read · July 25, 2024

Maximum likelihood estimation (MLE) is a powerful statistical method for estimating model parameters. It maximizes the likelihood function, assuming the data are drawn from a known probability distribution, to find the parameters that make the observed data most probable.

Bayesian estimation takes MLE a step further by incorporating prior knowledge about parameters. It treats parameters as random variables with distributions, updating beliefs as new data is observed. This approach handles small sample sizes better and provides uncertainty quantification.

Maximum Likelihood Estimation

Principles of maximum likelihood estimation

  • Statistical method that estimates model parameters by maximizing the likelihood function
  • Assumes the data are drawn from a known probability distribution and seeks the parameters that make the observed data most probable
  • Estimates the parameters of dynamic systems in system identification
  • Consistency: estimates converge to the true values as the sample size increases (a quick numerical demonstration follows this list)
  • Efficiency: asymptotically achieves the minimum variance among unbiased estimators
  • Requires large sample sizes for accuracy and can be sensitive to initial parameter guesses in numerical optimization
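
As a minimal numerical sketch of the consistency property (the "true" Gaussian parameters below are made up for the demo), the ML estimate of a Gaussian mean, which is simply the sample mean, tightens around the true value as the sample grows:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu, true_sigma = 2.0, 1.5  # assumed "true" parameters for this demo

# For i.i.d. Gaussian data the MLE of the mean has the closed form
# mu_hat = sample mean, so consistency can be observed directly.
for n in [10, 100, 1_000, 10_000]:
    sample = rng.normal(true_mu, true_sigma, size=n)
    mu_hat = sample.mean()  # closed-form MLE of the mean
    print(f"n={n:>6}  mu_hat={mu_hat:.4f}  |error|={abs(mu_hat - true_mu):.4f}")
```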

Formulation of likelihood functions

  • Joint probability of the observed data treated as a function of the unknown parameters
  • Define the probability distribution of the data given the parameters
  • Express the joint probability as a product of individual probabilities (for independent observations)
  • Take the logarithm to simplify calculations (log-likelihood)
  • Set the partial derivatives of the log-likelihood to zero
  • Solve the resulting equations for the parameter estimates (see the worked Gaussian example after this list)
  • Applies to parameter estimation in control systems
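
To make these steps concrete, here is the standard worked case of $n$ independent Gaussian observations $y_1, \dots, y_n$ with known variance $\sigma^2$ (a textbook example, not taken from this guide):

$$\log L(\mu) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mu)^2$$

Setting $\frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(y_i - \mu) = 0$ and solving yields $\hat{\mu}_{\text{MLE}} = \frac{1}{n}\sum_{i=1}^{n} y_i$, the sample mean.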

Bayesian Estimation

Concept of Bayesian estimation

  • Incorporates prior knowledge about parameters by treating them as random variables with distributions
  • Prior distribution represents the initial belief about the parameters
  • Likelihood function is the same as in MLE
  • Posterior distribution is the updated belief after observing the data
  • Handles small sample sizes better and provides uncertainty quantification
  • Allows incorporation of expert knowledge
  • Updates the prior distribution with observed data, using Bayes' theorem to compute the posterior (a conjugate-pair sketch follows this list)
  • Choosing appropriate prior distributions presents a challenge
  • Computational complexity increases in high-dimensional problems
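
A minimal sketch of a single prior-to-posterior update, using the conjugate Beta-Bernoulli pair (the prior values and observations below are hypothetical):

```python
import numpy as np

# Beta(a, b) prior over an unknown success probability theta
a_prior, b_prior = 2.0, 2.0  # assumed weak prior centered at 0.5

data = np.array([1, 0, 1, 1, 1, 0, 1, 0, 1, 1])  # hypothetical Bernoulli data

# Conjugacy: Beta prior x Bernoulli likelihood -> Beta posterior,
# so Bayes' theorem reduces to updating two counts.
a_post = a_prior + data.sum()
b_post = b_prior + len(data) - data.sum()

prior_mean = a_prior / (a_prior + b_prior)
post_mean = a_post / (a_post + b_post)
print(f"prior mean={prior_mean:.3f}  posterior mean={post_mean:.3f}  MLE={data.mean():.3f}")
```

Note how the posterior mean sits between the prior mean and the MLE, which is exactly the small-sample behavior described above.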

Application of Bayesian techniques

  • Maximum a posteriori (MAP) estimation maximizes the posterior distribution, balancing prior knowledge with observed data (sketched in code after this list)
  • Posterior: $p(\theta|y) \propto p(y|\theta)\,p(\theta)$
  • Maximize $\log p(y|\theta) + \log p(\theta)$
  • MAP reduces to MLE under a uniform prior and provides regularization through the prior
  • Markov chain Monte Carlo (MCMC) methods offer alternative techniques for approximating the posterior
  • Recursive Bayesian estimation enables adaptive control
  • Quantifying posterior uncertainty accounts for uncertainty in the resulting estimates and decisions
  • Choosing conjugate priors improves computational efficiency
  • Prior selection and computational cost require careful consideration in practical applications
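
As a hedged sketch of MAP estimation (SciPy-based, with an assumed Gaussian prior on an unknown mean and the noise standard deviation fixed at 1), the code below maximizes $\log p(y|\theta) + \log p(\theta)$ numerically; replacing the prior term with a constant recovers plain MLE:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.normal(1.0, 1.0, size=20)  # hypothetical data; noise sd assumed known (= 1)

prior_mu, prior_sd = 0.0, 0.5  # assumed Gaussian prior on the mean

def neg_log_posterior(theta):
    # -(log-likelihood + log-prior); minimizing this maximizes the posterior
    log_lik = norm.logpdf(y, loc=theta, scale=1.0).sum()
    log_prior = norm.logpdf(theta, loc=prior_mu, scale=prior_sd)
    return -(log_lik + log_prior)

map_est = minimize_scalar(neg_log_posterior).x
print(f"MLE={y.mean():.3f}  MAP={map_est:.3f}  (prior pulls the estimate toward {prior_mu})")
```

With a Gaussian prior this MAP estimate is a precision-weighted average of the prior mean and the sample mean, which is the regularization effect noted in the list above.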