
Absorption and ergodicity are key concepts in Markov chains that shape their long-term behavior. Absorption occurs when a chain reaches a state it can't escape, while ergodicity ensures convergence to a unique stationary distribution regardless of the starting point.

These properties have wide-ranging applications. Absorption models processes that eventually end, like equipment failure or disease progression. Ergodicity is crucial for systems that reach equilibrium, such as chemical reactions or stable populations, and underpins algorithms like PageRank.

Definition of absorption

  • Absorption is a key concept in Markov chains where the process eventually reaches a state or set of states from which it cannot escape
  • Once a Markov chain enters an absorbing state, it remains there forever and the process effectively ends
  • The transition probabilities and structure of the Markov chain determine whether absorption occurs and how long it takes on average to reach an absorbing state

Absorbing states

  • An absorbing state is a state in a Markov chain that, once entered, cannot be left
  • Mathematically, an absorbing state $i$ has transition probabilities $p_{ii} = 1$ and $p_{ij} = 0$ for all $j \neq i$ (see the code sketch after this list)
  • Examples of absorbing states include:
    • In a game of Chutes and Ladders, the final square on the board is an absorbing state
    • In a model of equipment failure, a completely broken state from which no repair is possible is absorbing
  • A Markov chain is called absorbing if it has at least one absorbing state and it is possible to reach an absorbing state from any non-absorbing state
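
To make the definition concrete, here is a minimal Python sketch (an illustration, not from the source) that scans a hypothetical transition matrix for absorbing states, i.e. rows with $p_{ii} = 1$:

```python
# Minimal sketch: identify absorbing states of a transition matrix.
# The matrix below is made up for illustration; state 3 (e.g., a
# fully broken machine) is absorbing because P[3, 3] == 1.
import numpy as np

P = np.array([[0.6, 0.3, 0.1, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.0, 1.0]])

absorbing = [i for i in range(P.shape[0]) if P[i, i] == 1.0]
print(absorbing)  # -> [3]
```
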

Transition probabilities

  • The transition probabilities of a Markov chain specify the likelihood of moving from one state to another in a single step
  • In an absorbing Markov chain, the transition probabilities can be arranged into a canonical form that separates absorbing and transient states
  • The canonical form of the transition matrix is:

$$P = \begin{bmatrix} I & 0 \\ R & Q \end{bmatrix}$$

    where:
    • $I$ is an identity matrix for the absorbing states (each absorbing state remains in itself with probability 1)
    • $0$ is a zero matrix for transitions from absorbing to transient states (which are impossible)
    • $R$ contains the probabilities of moving from each transient state to each absorbing state
    • $Q$ is the matrix of transition probabilities between transient states

Canonical form

  • The canonical form of the transition matrix clearly distinguishes the absorbing and transient states
  • Arranging the transition probabilities in canonical form allows for easier analysis of absorption probabilities and expected time until absorption
  • The structure of the canonical form also provides insights into the communication classes and irreducibility of the Markov chain
  • For example, if there are multiple absorbing states and the $R$ matrix has more than one non-zero column, the chain has multiple absorbing classes

Absorption probabilities

  • Absorption probabilities quantify the likelihood of a Markov chain eventually being absorbed into each absorbing state, given a particular starting state
  • These probabilities can be computed using the fundamental matrix and the structure of the canonical form transition matrix
  • Understanding absorption probabilities helps analyze the long-term behavior of absorbing Markov chains and make predictions about the process

Fundamental matrix

  • The fundamental matrix $N$ of an absorbing Markov chain is defined as:

$$N = (I - Q)^{-1}$$

    where $I$ is an identity matrix and $Q$ is the matrix of transition probabilities between transient states from the canonical form
  • Each entry $n_{ij}$ of the fundamental matrix represents the expected number of times the chain visits transient state $j$, given that it starts in transient state $i$, before being absorbed
  • The fundamental matrix captures important information about the behavior of the chain prior to absorption and is used in computing absorption probabilities and expected time until absorption

Computations using fundamental matrix

  • Let $b_{ij}$ be the probability of being absorbed into absorbing state $j$, given that the chain starts in transient state $i$
  • The matrix of absorption probabilities can be computed as:

$$B = NR$$

    where $N$ is the fundamental matrix and $R$ is the transient-to-absorbing block of the canonical form
  • Row $i$ of $B$ gives the absorption probabilities for each absorbing state when starting from transient state $i$; since absorption is certain, each row sums to 1
  • The fundamental matrix can also be used to calculate the expected number of steps until absorption for each starting state (both computations appear in the numerical sketch after this section)

Expected number of steps until absorption

  • Let $t_i$ be the expected number of steps until absorption, given that the chain starts in transient state $i$
  • The vector of expected absorption times $\vec{t}$ can be computed as:

$$\vec{t} = N\vec{1}$$

    where $\vec{1}$ is a column vector of all ones
  • Each entry of $\vec{t}$ represents the expected number of steps before the chain is absorbed, starting from the corresponding transient state
  • These expected absorption times provide valuable information about the duration of the process and how long it takes to reach an absorbing state on average
  • For example, in a model of a game or a machine repair process, the expected absorption times can help predict how many rounds are played or how long a machine operates before reaching a terminal state
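
The formulas above are easy to evaluate numerically. Below is a minimal NumPy sketch using made-up $Q$ and $R$ blocks for a chain with two transient and two absorbing states; it computes $N = (I - Q)^{-1}$, $B = NR$, and $\vec{t} = N\vec{1}$:

```python
# Minimal sketch with assumed (made-up) canonical-form blocks.
import numpy as np

Q = np.array([[0.5, 0.2],    # transient -> transient
              [0.3, 0.4]])
R = np.array([[0.2, 0.1],    # transient -> absorbing
              [0.1, 0.2]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix N = (I - Q)^{-1}
B = N @ R                          # absorption probabilities, row i = start state i
t = N @ np.ones(2)                 # expected steps until absorption

print(B.sum(axis=1))  # each row sums to 1: absorption is certain
print(t)              # expected absorption time per starting transient state
```
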
Ergodicity

  • Ergodicity is another important property of Markov chains that describes their long-term behavior and convergence to a stable distribution
  • A Markov chain is ergodic if it is both irreducible (it is possible to reach any state from any other state) and aperiodic (the chain does not get trapped in cycles)
  • Ergodic Markov chains have a unique stationary distribution that they converge to over time, regardless of the starting state

Definition of ergodicity

  • A Markov chain is ergodic if it satisfies two conditions:
    1. Irreducibility: For any two states $i$ and $j$, there is a positive probability of eventually reaching state $j$ from state $i$ (and vice versa)
    2. Aperiodicity: The chain does not have a deterministic cycle that it gets trapped in
  • Ergodicity ensures that the long-term behavior of the chain is independent of the starting state and converges to a unique stationary distribution

Ergodic Markov chains

  • In an ergodic Markov chain, the transition probabilities and structure of the chain allow for reaching any state from any other state over time
  • The chain mixes well and does not get stuck in subsets of states or deterministic cycles
  • Examples of ergodic Markov chains include:
    • A well-shuffled deck of cards, where any ordering can be reached from any other after sufficient shuffling
    • A random walk on a connected, non-bipartite graph, where the walker can reach any node from any other node
  • Ergodic chains have important applications in modeling systems that exhibit long-term stability and predictability

Stationary distributions

  • A stationary distribution $\vec{\pi}$ of a Markov chain is a probability distribution over the states that remains unchanged under the transition matrix $P$
  • Mathematically, a distribution $\vec{\pi}$ is stationary if it satisfies:

$$\vec{\pi} = \vec{\pi}P$$

  • In other words, if the chain starts in the stationary distribution, it will remain in that distribution after each transition
  • For an ergodic Markov chain, the stationary distribution represents the long-term proportion of time the chain spends in each state, regardless of the starting state

Uniqueness of stationary distribution

  • One of the key properties of ergodic Markov chains is the uniqueness of the stationary distribution
  • In an ergodic chain, there is only one distribution $\vec{\pi}$ that satisfies the stationary condition $\vec{\pi} = \vec{\pi}P$ (a numerical sketch of solving for $\vec{\pi}$ follows below)
  • This unique stationary distribution is also the limiting distribution of the chain: as the number of steps tends to infinity, the distribution of the chain converges to $\vec{\pi}$, regardless of the initial state
  • The uniqueness of the stationary distribution in ergodic chains is crucial for making long-term predictions and understanding the stability of the modeled system
  • For example, in a model of a well-mixed chemical reaction, the unique stationary distribution would represent the equilibrium concentrations of the reactants and products
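
As a concrete illustration, the stationary distribution of a small ergodic chain can be found by solving $\vec{\pi} = \vec{\pi}P$ together with $\sum_i \pi_i = 1$. The sketch below uses a made-up 3-state transition matrix and NumPy's least-squares solver:

```python
# Minimal sketch: solve pi = pi P with sum(pi) = 1 for a made-up
# irreducible, aperiodic (hence ergodic) 3-state chain.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

n = P.shape[0]
# pi (P - I) = 0 transposes to (P^T - I) pi = 0; append the
# normalization row of ones so the system pins down a unique pi.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)       # the unique stationary distribution
print(pi @ P)   # matches pi, confirming stationarity
```
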
Ergodicity vs absorption

  • Ergodicity and absorption are two distinct long-term behaviors that a Markov chain can exhibit
  • While ergodic chains converge to a unique stationary distribution, absorbing chains eventually get trapped in absorbing states
  • Understanding the differences between ergodicity and absorption is crucial for modeling and analyzing various systems and processes

Differences in long-term behavior

  • Ergodic Markov chains:
    • Converge to a unique stationary distribution $\vec{\pi}$ over time
    • The long-term behavior is independent of the starting state
    • The chain continues to evolve and transition between states according to the stationary distribution
  • Absorbing Markov chains:
    • Eventually reach an absorbing state and remain there forever
    • The long-term behavior depends on the starting state and the absorption probabilities
    • Once an absorbing state is reached, the chain effectively stops evolving

Irreducibility and aperiodicity

  • Irreducibility and aperiodicity are the two key properties that distinguish ergodic chains from absorbing chains
  • An ergodic chain must be both irreducible and aperiodic, ensuring that it can reach any state from any other state and does not get trapped in cycles
  • An absorbing chain, by definition, is not irreducible, as it has absorbing states that cannot be escaped once entered
  • The presence of absorbing states violates the irreducibility condition and prevents the chain from being ergodic

Examples of ergodic and absorbing chains

  • Examples of ergodic Markov chains:
    • A random walk on a connected, non-bipartite graph
    • The long-term behavior of a well-shuffled deck of cards
    • The equilibrium distribution of molecules in a well-mixed gas
  • Examples of absorbing Markov chains:
    • A gambler's ruin problem, where the gambler either reaches a target wealth or goes broke (absorbing states; simulated in the sketch at the end of this section)
    • A model of equipment failure, where the equipment either works or reaches a failed state that cannot be repaired
    • The stages of a disease progression, where the patient either recovers or reaches a terminal state

Applications of absorption and ergodicity

  • The concepts of absorption and ergodicity in Markov chains have numerous applications across various fields, including queueing theory, population dynamics, and computer science
  • Understanding the long-term behavior of systems modeled as Markov chains allows for making predictions, optimizing processes, and designing effective algorithms

Queueing theory

  • Queueing theory is the study of waiting lines and the analysis of congestion and delays in systems
  • Markov chains are often used to model queueing systems, where the states represent the number of customers or jobs in the system
  • Absorbing Markov chains can model situations where a queue eventually empties or reaches a maximum capacity (absorbing states)
  • Ergodic Markov chains can model stable queueing systems where the arrival and service rates balance out, leading to a steady-state distribution of queue lengths
  • Examples of queueing systems modeled using Markov chains include:
    • Call centers with a limited number of operators
    • Manufacturing systems with finite buffer capacities
    • Computer networks with packet queues at routers

Population dynamics

  • Markov chains can be used to model the dynamics of populations over time, such as the growth or decline of species in an ecosystem
  • Absorbing Markov chains can represent situations where a population eventually becomes extinct (an absorbing state) or reaches a carrying capacity
  • Ergodic Markov chains can model stable populations with balanced birth and death rates, leading to an equilibrium distribution
  • Examples of population dynamics modeled using Markov chains include:
    • The spread of an infectious disease in a population
    • The genetic drift of alleles in a finite population
    • The dynamics of competing species in an ecosystem
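
As a concrete check on the gambler's ruin example listed above, here is a simulation sketch (the starting wealth, target, and trial count are illustrative choices, not from the source). For a fair coin, the ruin probability starting from wealth $i$ with target $n$ is $1 - i/n$, so the empirical frequency should land near 0.5 here:

```python
# Minimal sketch: simulate gambler's ruin with absorbing states 0 and 4.
import numpy as np

rng = np.random.default_rng(0)

def ruin_frequency(start=2, target=4, trials=10_000):
    ruined = 0
    for _ in range(trials):
        wealth = start
        while 0 < wealth < target:          # keep playing until absorbed
            wealth += rng.choice([-1, 1])   # fair coin: win or lose one unit
        ruined += (wealth == 0)
    return ruined / trials

print(ruin_frequency())  # empirical ruin probability, about 0.5
print(1 - 2 / 4)         # analytic answer for a fair game: 0.5
```
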
PageRank algorithm

  • The PageRank algorithm, used by search engines like Google, relies on the concepts of ergodicity and stationary distributions in Markov chains
  • Web pages are modeled as states in a Markov chain, with hyperlinks representing transitions between states
  • The ergodicity of the chain ensures the existence of a unique stationary distribution, which represents the long-term proportion of time a random web surfer spends on each page
  • The PageRank of a web page is determined by its stationary probability, reflecting its importance and centrality in the network of web pages (see the sketch below)
  • The PageRank algorithm has been successful in ranking web pages and providing relevant search results by leveraging the properties of ergodic Markov chains
  • Variations of the PageRank algorithm have also been applied to other domains, such as ranking scientific papers, social network users, and product recommendations
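
To close, here is a minimal power-iteration sketch of PageRank on a tiny made-up link graph; the damping factor 0.85 is the value commonly cited for the algorithm, while the graph and iteration count are illustrative assumptions:

```python
# Minimal sketch: PageRank by power iteration on a made-up 4-page web.
import numpy as np

# A[i, j] = 1 if page i links to page j (hypothetical graph,
# chosen with no dangling pages so each row can be normalized)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)  # random-surfer transition matrix

d = 0.85                        # damping: follow a link with prob d,
n = P.shape[0]                  # otherwise jump to a random page
G = d * P + (1 - d) / n         # this mixing makes the chain ergodic

rank = np.full(n, 1 / n)        # start from the uniform distribution
for _ in range(100):            # power iteration converges to the
    rank = rank @ G             # unique stationary distribution

print(rank)  # PageRank scores; pages with more incoming links rank higher
```
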
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.

