Skinner's theory revolutionized our understanding of how behavior is shaped by consequences. It explains why we repeat actions that lead to rewards and avoid those that result in punishment, forming the basis for many modern behavior-modification techniques.
Unlike classical conditioning, which deals with involuntary responses, operant conditioning focuses on voluntary behaviors. This theory has wide-ranging applications, from animal training to human learning, and continues to influence fields like psychology, education, and even business management.
Operant Conditioning Basics
Fundamental Concepts
Operant conditioning focuses on how consequences influence voluntary behavior and the likelihood of that behavior occurring again in the future
Operant behavior refers to actions that operate on the environment to produce rewarding or punishing consequences (pressing a lever)
Voluntary behavior in operant conditioning is not reflexive and instead is controlled by its consequences (choosing to study for a test)
Behavioral consequences, whether reinforcing or punishing, determine the probability of a behavior being repeated or suppressed
Comparison to Classical Conditioning
Unlike classical conditioning, which deals with involuntary reflexive behaviors, operant conditioning examines the relationship between voluntary behaviors and their consequences
Classical conditioning involves an association between stimuli, while operant conditioning involves an association between a behavior and its consequences
In operant conditioning, the organism plays an active role in producing the consequences that shape its behavior (exploring the environment)
Experimental Apparatus
Skinner Box
The Skinner Box is an experimental chamber used to study operant conditioning principles in animals (rats, pigeons)
It typically contains a lever or key that the animal can manipulate to obtain a reinforcer such as food or avoid a punisher such as an electric shock
The Skinner Box allows researchers to precisely control the environment and measure the animal's rate of responding
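As an illustrative sketch only (the learning rule, probabilities, and `learning_rate` below are arbitrary assumptions for the demo, not Skinner's formalism), a simulated Skinner Box session can show how each reinforced lever press raises the likelihood of pressing again:

```python
import random

def run_session(trials=1000, learning_rate=0.05, seed=42):
    """Toy Skinner-box session: each reinforced press strengthens the behavior."""
    random.seed(seed)
    p_press = 0.1          # initial probability of emitting the lever press
    presses = 0
    for _ in range(trials):
        if random.random() < p_press:   # the animal emits the operant response
            presses += 1
            # food pellet delivered: reinforcement nudges p_press toward 1
            p_press = min(1.0, p_press + learning_rate * (1.0 - p_press))
    return presses, p_press

presses, final_p = run_session()
```

Running the session, the press probability climbs from its low starting value toward a high, steady rate of responding, mirroring what researchers measure in the apparatus.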
Stimulus Control and Response Rate
Stimulus control occurs when a particular stimulus or set of stimuli comes to reliably evoke a specific operant response due to a history of reinforcement (a green light signaling the availability of food)
The response rate, or the frequency of an operant behavior, can be influenced by the schedule of reinforcement in effect (a variable ratio schedule produces a high, steady response rate)
Researchers use the Skinner Box to investigate how different schedules of reinforcement and discriminative stimuli affect the acquisition, maintenance, and extinction of operant behaviors
Reinforcement Principles
Positive and Negative Reinforcement
Positive reinforcement involves presenting a rewarding stimulus after a behavior, increasing the likelihood of that behavior occurring again (receiving praise for a job well done)
Negative reinforcement involves removing an aversive stimulus after a behavior, also increasing the likelihood of that behavior occurring again (taking an aspirin to relieve a headache)
Both positive and negative reinforcement strengthen the preceding behavior, but differ in whether a stimulus is added or removed
Schedules of Reinforcement
Continuous reinforcement provides reinforcement after every instance of the target behavior, leading to quick acquisition but also rapid extinction when reinforcement is discontinued
Partial (intermittent) reinforcement provides reinforcement only after some instances of the target behavior, leading to slower acquisition but greater resistance to extinction
Fixed ratio schedules provide reinforcement after a fixed number of responses (every 10th response), while variable ratio schedules provide reinforcement after an unpredictable number of responses (on average, every 10th response)
Fixed interval schedules provide reinforcement for the first response after a fixed time interval (every 60 seconds), while variable interval schedules provide reinforcement for the first response after an unpredictable time interval (on average, every 60 seconds)
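The four schedules above can be sketched as small Python classes that decide whether a given response earns reinforcement. The class names, the ratio of 10, and the 60-second interval are illustrative choices echoing the examples in the text, and the random draws are just one simple way to produce unpredictable counts and intervals:

```python
import random

class FixedRatio:
    """Reinforce every nth response (e.g., every 10th)."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True          # reinforcer delivered
        return False

class VariableRatio:
    """Reinforce after an unpredictable number of responses (mean n)."""
    def __init__(self, n, seed=0):
        self.rng = random.Random(seed)
        self.n, self.count = n, 0
        self.target = self.rng.randint(1, 2 * n - 1)  # averages out to n
    def respond(self):
        self.count += 1
        if self.count >= self.target:
            self.count = 0
            self.target = self.rng.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """Reinforce the first response after a fixed time interval (seconds)."""
    def __init__(self, interval):
        self.interval, self.last = interval, 0.0
    def respond(self, t):
        if t - self.last >= self.interval:
            self.last = t
            return True
        return False

class VariableInterval:
    """Reinforce the first response after an unpredictable interval (mean seconds)."""
    def __init__(self, interval, seed=0):
        self.rng = random.Random(seed)
        self.mean, self.last = interval, 0.0
        self.wait = self.rng.uniform(0, 2 * interval)
    def respond(self, t):
        if t - self.last >= self.wait:
            self.last = t
            self.wait = self.rng.uniform(0, 2 * self.mean)
            return True
        return False

fr = FixedRatio(10)
rewards = sum(fr.respond() for _ in range(100))   # → 10 reinforcers
```

On a fixed-ratio 10 schedule, 100 responses earn exactly 10 reinforcers at fully predictable points; the variable schedules deliver a similar long-run rate but at unpredictable moments, which is what makes behavior on those schedules so resistant to extinction.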