
10.3 Filter implementation structures

3 min read · July 18, 2024

FIR and IIR filters can be implemented using various structures, each with unique pros and cons. The direct form is simple but can be unstable for high-order filters. Cascade and parallel structures offer better stability by breaking filters into smaller sections.

Lattice structures excel in stability and are great for adaptive filtering. When choosing a structure, consider computational complexity, memory needs, and numerical stability. Optimizing implementations involves hardware-specific tweaks and software techniques like vectorization and multi-threading.

FIR Filter Implementation Structures

Implement FIR filters using direct form, cascade, and lattice structures

  • Direct form structure
    • Implements the difference equation directly
    • Requires N multiplications and N-1 additions per output sample (N = filter length)
    • Delay elements connected in series
    • Suitable for low-order filters (10th order or less)
  • Cascade structure
    • Decomposes the transfer function into a product of second-order sections (SOS)
    • Each SOS implemented using a direct form structure
    • Requires 2M multiplications and 2M additions per output sample (M = number of SOS)
    • Improved numerical stability compared to direct form
    • Suitable for high-order filters (greater than 10th order)
  • Lattice structure
    • Based on lattice filter theory
    • Requires N multiplications and 2N-1 additions per output sample
    • Highly modular and parallel structure enables efficient hardware implementation
    • Excellent numerical stability due to inherent properties of lattice filters
    • Suitable for adaptive filtering applications
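A minimal sketch of the direct-form FIR computation described above, in plain Python (the function and variable names are illustrative, not from the source):

```python
def fir_direct_form(b, x):
    """Direct-form FIR: y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + b[N-1]*x[n-N+1].

    Each output sample costs N multiplications and N-1 additions,
    where N = len(b) is the filter length.
    """
    N = len(b)
    delay = [0.0] * N                    # delay line: x[n], x[n-1], ..., x[n-N+1]
    y = []
    for sample in x:
        delay = [sample] + delay[:-1]    # shift the new sample into the delay line
        y.append(sum(bk * dk for bk, dk in zip(b, delay)))
    return y
```

Feeding in a unit impulse recovers the coefficients: `fir_direct_form([1.0, 2.0, 3.0], [1.0, 0.0, 0.0, 0.0])` returns `[1.0, 2.0, 3.0, 0.0]`.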

IIR filter implementation structures

  • Direct form I structure
    • Implements the difference equation directly
    • Requires N+M multiplications and N+M-1 additions per output sample (N = number of feedback coefficients, M = number of feedforward coefficients)
    • Delay elements connected in series
    • Prone to numerical instability for high-order filters (greater than 10th order)
  • Direct form II structure
    • Canonical form of direct form I
    • Requires N+M multiplications and N+M-1 additions per output sample
    • Minimizes the number of delay elements to max(N, M)
    • Improved numerical stability compared to direct form I
  • Cascade structure
    • Decomposes the transfer function into a product of second-order sections (SOS)
    • Each SOS implemented using a direct form II structure
    • Requires 2M multiplications and 2M additions per output sample (M = number of SOS)
    • Improved numerical stability compared to direct form structures
  • Parallel structure
    • Decomposes the transfer function into a sum of first-order and second-order sections
    • Each section implemented using a direct form II structure
    • Requires 2M+N multiplications and 2M+N-1 additions per output sample (M = number of second-order sections, N = number of first-order sections)
    • Improved numerical stability compared to direct form structures
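The direct form II and cascade ideas above can be sketched in plain Python (illustrative names; a production implementation would use a DSP library):

```python
def biquad_df2(sos, x):
    """One second-order section (biquad) in direct form II.

    sos = (b0, b1, b2, a1, a2), with a0 normalized to 1.
    Only max(N, M) = 2 delay elements (w1, w2) are needed, shared
    by the feedback and feedforward paths.
    """
    b0, b1, b2, a1, a2 = sos
    w1 = w2 = 0.0
    y = []
    for xn in x:
        w0 = xn - a1 * w1 - a2 * w2            # feedback (recursive) part
        y.append(b0 * w0 + b1 * w1 + b2 * w2)  # feedforward part
        w2, w1 = w1, w0                        # advance the delay elements
    return y

def cascade(sections, x):
    """Cascade structure: pass the signal through each SOS in turn."""
    for sos in sections:
        x = biquad_df2(sos, x)
    return x
```

For example, the one-pole section `(1.0, 0.0, 0.0, -0.5, 0.0)` realizes y[n] = x[n] + 0.5·y[n-1], so its impulse response starts `[1.0, 0.5, 0.25, ...]`.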

Comparison of filter structures

  • Computational complexity
    1. Direct form structures: N+M multiplications and N+M-1 additions per output sample
    2. Cascade and parallel structures: 2M multiplications and 2M additions per output sample for second-order sections
    3. Lattice structure: N multiplications and 2N-1 additions per output sample
  • Memory requirements
    1. Direct form I: N+M delay elements
    2. Direct form II: max(N, M) delay elements
    3. Cascade and parallel structures: 2M delay elements for second-order sections
    4. Lattice structure: N delay elements
  • Numerical properties
    • Direct form structures prone to numerical instability for high-order filters
    • Cascade and parallel structures have improved numerical stability due to second-order sections
    • Lattice structure has excellent numerical stability due to inherent properties of lattice filters
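The lattice recursion can be sketched in plain Python (illustrative names). Note this is the common two-multiplier form, which costs two multiplications per stage; one-multiplier lattice variants reduce that count:

```python
def lattice_fir(k, x):
    """FIR lattice filter from reflection coefficients k[0..N-1].

    Each stage m updates forward/backward prediction errors:
      f_m[n] = f_{m-1}[n] + k[m] * g_{m-1}[n-1]
      g_m[n] = k[m] * f_{m-1}[n] + g_{m-1}[n-1]
    The output is the final forward error f_N[n].
    """
    delay = [0.0] * len(k)        # delay[m] holds g_m's input from the previous sample
    y = []
    for xn in x:
        f = g = xn                # stage 0: f_0[n] = g_0[n] = x[n]
        for m, km in enumerate(k):
            g_del = delay[m]      # g_{m-1}[n-1]
            f_new = f + km * g_del
            g_new = km * f + g_del
            delay[m] = g          # store g_{m-1}[n] for the next sample
            f, g = f_new, g_new
        y.append(f)
    return y
```

A single stage with k = [0.5] is equivalent to the direct-form FIR with coefficients [1, 0.5], so its impulse response is `[1.0, 0.5, 0.0]`.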

Optimization of filter implementations

  • Hardware optimization
    • Minimize the number of multiplications and additions to reduce hardware complexity
    • Use fixed-point arithmetic instead of floating-point for resource-constrained systems (embedded systems)
    • Exploit parallelism in the filter structure (lattice, parallel) for efficient hardware implementation
    • Utilize hardware-specific features (DSP slices, dedicated multipliers) for improved performance
  • Software optimization
    • Leverage vectorization and SIMD instructions for parallel processing (AVX, SSE)
    • Optimize memory access patterns to minimize cache misses and improve data locality
    • Use lookup tables for computationally expensive operations (trigonometric functions, exponentials)
    • Employ multi-threading for concurrent execution of filter sections on multi-core processors
    • Consider the target platform's architecture and compiler optimizations for best performance
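To illustrate the fixed-point suggestion, here is a Q15 FIR sketch in plain Python (illustrative names; a real embedded implementation would also saturate the accumulator to guard against overflow):

```python
Q = 15  # Q15 format: a value v is stored as the integer round(v * 2**15)

def to_q15(v):
    """Convert a float in [-1, 1) to a Q15 integer."""
    return int(round(v * (1 << Q)))

def fir_q15(b_q15, x_q15):
    """Fixed-point direct-form FIR with Q15 coefficients and samples.

    Each Q15 * Q15 product is a Q30 integer; the products are summed
    in a wide accumulator and one final right shift returns the
    result to Q15, mimicking a DSP's multiply-accumulate (MAC) unit.
    """
    N = len(b_q15)
    delay = [0] * N
    y = []
    for xn in x_q15:
        delay = [xn] + delay[:-1]
        acc = 0
        for bk, dk in zip(b_q15, delay):
            acc += bk * dk            # Q15 * Q15 -> Q30
        y.append(acc >> Q)            # scale back to Q15
    return y
```

For example, averaging two 0.5 samples with coefficients [0.5, 0.5] yields Q15 outputs 8192 (0.25) and then 16384 (0.5), matching the floating-point result.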
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.

