Discrete probability distributions are the building blocks of random variable analysis in combinatorics. They help us model and understand events with distinct outcomes, like coin flips or dice rolls, by assigning probabilities to each possible result.
This section covers key distributions like Bernoulli, Binomial, Poisson, and Geometric. We'll explore their properties, applications, and how to calculate important values like expected outcomes and variability. Understanding these concepts is crucial for tackling real-world probability problems.
Probability Distributions
Bernoulli and Binomial Distributions
Bernoulli distribution models a single experiment with a binary outcome (success or failure)
Probability mass function for Bernoulli distribution given by $P(X=k) = p^k(1-p)^{1-k}$ where k is 0 or 1
Binomial distribution extends Bernoulli to n independent trials
Probability mass function for Binomial distribution expressed as $P(X=k) = \binom{n}{k}p^k(1-p)^{n-k}$
Applications include coin flips (heads or tails) and quality control (defective or non-defective items)
Parameters for Binomial distribution include n (number of trials) and p (probability of success)
Mean of Binomial distribution calculated as $E(X) = np$
Variance of Binomial distribution determined by $Var(X) = np(1-p)$
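To make these formulas concrete, here is a minimal Python sketch (standard library only; the function name is my own) that evaluates the Binomial PMF and checks the mean and variance formulas by direct summation:

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for a Binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
pmf = [binomial_pmf(k, n, p) for k in range(n + 1)]

# PMF values must sum to 1
assert abs(sum(pmf) - 1) < 1e-12

# Mean and variance by direct summation match np and np(1-p)
mean = sum(k * pmf[k] for k in range(n + 1))
var = sum((k - mean) ** 2 * pmf[k] for k in range(n + 1))
print(mean, n * p)           # ~3.0  3.0
print(var, n * p * (1 - p))  # ~2.1  2.1
```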
Poisson and Geometric Distributions
Poisson distribution models the number of rare events occurring in a fixed interval
Probability mass function for Poisson distribution defined as $P(X=k) = \frac{\lambda^k e^{-\lambda}}{k!}$
λ (lambda) represents average number of events in the interval
Applications encompass radioactive decay and customer arrivals at a store
Geometric distribution models number of trials until first success
Probability mass function for Geometric distribution given by $P(X=k) = (1-p)^{k-1}p$
p denotes probability of success on each trial
Mean of Geometric distribution calculated as $E(X) = \frac{1}{p}$
Variance of Geometric distribution determined by $Var(X) = \frac{1-p}{p^2}$
Applications include number of coin flips until first heads or attempts to win a game
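A short Python sketch of both PMFs (standard library only; the helper names are illustrative), with the Geometric mean formula confirmed numerically by truncating the infinite sum:

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a Poisson(lambda) random variable."""
    return lam**k * exp(-lam) / factorial(k)

def geometric_pmf(k: int, p: float) -> float:
    """P(X = k): first success occurs on trial k (k = 1, 2, ...)."""
    return (1 - p) ** (k - 1) * p

# Poisson: probability of exactly 2 customer arrivals when the average is 3
print(poisson_pmf(2, 3.0))  # ~0.224

# Geometric: E(X) = 1/p, approximated by truncating the infinite sum
p = 0.25
mean_approx = sum(k * geometric_pmf(k, p) for k in range(1, 500))
print(mean_approx, 1 / p)   # ~4.0  4.0
```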
Hypergeometric Distribution
Hypergeometric distribution models sampling without replacement from finite population
Probability mass function expressed as $P(X=k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$
N represents total population size, K denotes number of success states in population
n signifies sample size, k indicates number of successes in sample
Differs from Binomial distribution because the probability of success changes with each draw
Applications include quality control sampling and lottery draws
Mean of Hypergeometric distribution calculated as $E(X) = n\frac{K}{N}$
Variance determined by $Var(X) = n\frac{K}{N}\cdot\frac{N-K}{N}\cdot\frac{N-n}{N-1}$
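A minimal Python sketch of the Hypergeometric PMF (standard library; names are my own), checked against the mean formula for a small quality-control example:

```python
from math import comb

def hypergeom_pmf(k: int, N: int, K: int, n: int) -> float:
    """P(X = k): k successes in a sample of n drawn without
    replacement from a population of N containing K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Population of 50 items, 10 defective, sample 5 without replacement
N, K, n = 50, 10, 5
pmf = [hypergeom_pmf(k, N, K, n) for k in range(n + 1)]
assert abs(sum(pmf) - 1) < 1e-12

mean = sum(k * pmf[k] for k in range(n + 1))
print(mean, n * K / N)  # ~1.0  1.0
```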
Distribution Functions
Probability Mass Function (PMF)
Probability Mass Function defines probability of discrete random variable taking specific value
For discrete random variable X, PMF given by $P(X=x)$ for each possible value x
PMF must satisfy two conditions: $P(X=x) \geq 0$ for all x, and $\sum_x P(X=x) = 1$
Represents probability distribution for discrete random variables
Used to calculate probabilities of specific outcomes or ranges of outcomes
Visualized as bar graph with heights representing probabilities
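A tiny Python sketch (the dictionary representation is my own choice) showing a PMF for a fair die and verifying the two defining conditions:

```python
# A PMF for a fair six-sided die, stored as a value -> probability map
die_pmf = {face: 1 / 6 for face in range(1, 7)}

# The two defining conditions of a PMF
assert all(prob >= 0 for prob in die_pmf.values())
assert abs(sum(die_pmf.values()) - 1) < 1e-12

# Probability of a specific outcome, and of a range of outcomes
print(die_pmf[3])                                    # ~0.1667
print(sum(p for x, p in die_pmf.items() if x >= 5))  # ~0.3333
```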
Cumulative Distribution Function (CDF)
Cumulative Distribution Function gives probability of random variable being less than or equal to specific value
For discrete random variable X, CDF defined as $F_X(x) = P(X \leq x)$
CDF calculated by summing PMF values: $F_X(x) = \sum_{k \leq x} P(X=k)$
Properties of CDF include being monotonically non-decreasing and right-continuous
Limits of CDF approach 0 as x approaches negative infinity and 1 as x approaches positive infinity
Used to find probabilities of intervals and percentiles of distribution
Relationship between PMF and CDF: $P(X=x) = F_X(x) - F_X(x^-)$ where $F_X(x^-)$ is the limit of the CDF from just below x
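A brief Python sketch (names are illustrative) that builds the CDF of a die roll by summing the PMF, illustrating the summation formula, interval probabilities, and the PMF/CDF relationship:

```python
die_pmf = {face: 1 / 6 for face in range(1, 7)}

def cdf(x: float) -> float:
    """F_X(x) = P(X <= x), obtained by summing PMF values."""
    return sum(p for value, p in die_pmf.items() if value <= x)

print(cdf(3))           # 0.5: P(X <= 3)
print(cdf(3.5))         # 0.5: CDF is a step function, flat between values
print(cdf(3) - cdf(2))  # ~0.1667: recovers P(X = 3)

# Interval probability: P(2 < X <= 5) = F(5) - F(2)
print(cdf(5) - cdf(2))  # 0.5
```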
Distribution Properties
Expected Value and Its Applications
Expected value represents average outcome of random variable over many trials
For discrete random variable X, expected value calculated as $E(X) = \sum_x x P(X=x)$
Provides measure of central tendency for probability distribution
Linearity of expectation allows $E(aX + b) = aE(X) + b$ for constants a and b
Used in decision making, risk assessment, and financial modeling
Applications include calculating average winnings in games of chance
Expected value of function g(X) given by $E[g(X)] = \sum_x g(x) P(X=x)$
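A compact Python sketch (the helper name is my own) computing E(X) and E(g(X)) for a simple game-of-chance payoff, and checking linearity of expectation:

```python
# Winnings on a fair die with payoff g(x) = 2x - 3 (a hypothetical bet)
pmf = {x: 1 / 6 for x in range(1, 7)}

def expectation(g, pmf):
    """E[g(X)] = sum over x of g(x) * P(X = x)."""
    return sum(g(x) * p for x, p in pmf.items())

ex = expectation(lambda x: x, pmf)           # E(X) = 3.5
e_g = expectation(lambda x: 2 * x - 3, pmf)  # E(2X - 3)
print(ex, e_g, 2 * ex - 3)  # 3.5  4.0  4.0  (linearity holds)
```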
Variance and Standard Deviation
Variance measures spread or dispersion of random variable around its expected value
For discrete random variable X, variance calculated as $Var(X) = E[(X - E(X))^2] = \sum_x (x - E(X))^2 P(X=x)$
Alternative formula for variance: $Var(X) = E(X^2) - [E(X)]^2$
Standard deviation defined as square root of variance: $\sigma = \sqrt{Var(X)}$
Properties of variance include $Var(aX + b) = a^2 Var(X)$ for constants a and b
Used to assess risk and uncertainty in various fields (finance, engineering, sciences)
Chebyshev's inequality relates variance to probability of deviations from mean
Coefficient of variation given by ratio of standard deviation to mean: $CV = \frac{\sigma}{\mu}$
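Finally, a short Python sketch (function and variable names are illustrative) computing the variance of a fair die both ways, taking the standard deviation, and verifying the scaling property $Var(aX + b) = a^2 Var(X)$:

```python
from math import sqrt

pmf = {x: 1 / 6 for x in range(1, 7)}  # fair die

def expectation(g):
    return sum(g(x) * p for x, p in pmf.items())

mean = expectation(lambda x: x)

# Definitional formula and the E(X^2) - [E(X)]^2 shortcut agree
var_def = expectation(lambda x: (x - mean) ** 2)
var_alt = expectation(lambda x: x**2) - mean**2
sigma = sqrt(var_def)
print(var_def, var_alt, sigma)  # ~2.9167  ~2.9167  ~1.7078

# Var(aX + b) = a^2 Var(X): shifting by b adds no spread
a, b = 2, 5
shifted_pmf = {a * x + b: p for x, p in pmf.items()}
mean2 = sum(x * p for x, p in shifted_pmf.items())
var2 = sum((x - mean2) ** 2 * p for x, p in shifted_pmf.items())
print(var2, a**2 * var_def)  # ~11.667  ~11.667
```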