
Photogrammetry transforms 2D images into precise 3D measurements, combining optics, math, and computer vision. This technique extracts spatial data from overlapping photos, serving as a cornerstone for remote sensing and geospatial imaging in various fields.

The process involves image acquisition, processing, and 3D reconstruction. It is used in mapping, urban planning, archaeology, and more. Advances in digital technology and machine learning are expanding photogrammetry's capabilities and applications across industries.

Fundamentals of photogrammetry

  • Photogrammetry extracts precise 3D measurements from 2D images enabling accurate spatial data collection for Images as Data analysis
  • Combines principles of optics, mathematics, and computer vision to reconstruct 3D scenes from multiple overlapping photographs
  • Serves as a foundational technique in remote sensing and geospatial imaging providing valuable data for various applications

Definition and basic principles

  • Science and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring, and interpreting photographic images
  • Relies on triangulation principles, using multiple images taken from different angles to determine 3D coordinates
  • Employs collinearity equations to establish mathematical relationships between image coordinates and object space coordinates
  • Requires camera calibration to account for lens distortions and internal camera geometry
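For reference, one common textbook form of the collinearity equations mentioned above is given below, where f is the focal length, (x_0, y_0) the principal point, (X_L, Y_L, Z_L) the camera position, and m_ij the elements of the rotation matrix from object space to image space (conventions vary between texts):

```latex
x = x_0 - f \frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}
                 {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}

y = y_0 - f \frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}
                 {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}
```

Each observed image point contributes two such equations, and bundle adjustment solves many of them simultaneously to recover camera parameters and 3D coordinates.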

Historical development

  • Originated in the mid-19th century with the advent of photography and stereoscopy
  • Analog photogrammetry used specialized plotting instruments (stereoplotters) for manual measurements
  • Transition to analytical photogrammetry in the 1950s introduced computer-based calculations
  • Digital photogrammetry emerged in the 1980s with advancements in digital imaging and computer processing
  • Modern photogrammetry integrates computer vision algorithms and machine learning techniques for automated processing

Applications in various fields

  • Geospatial mapping and cartography for creating topographic maps and updating geographic information systems (GIS)
  • Urban planning for 3D city modeling and building information modeling (BIM)
  • Archaeology and cultural heritage preservation for documenting historical sites and artifacts
  • Environmental monitoring for tracking changes in landscapes, vegetation, and ecosystems
  • Forensics and accident reconstruction for crime scene analysis and traffic collision investigations
  • Film and video game industries for creating realistic 3D environments and special effects

Image acquisition for photogrammetry

  • Crucial step in the photogrammetric workflow, determining the quality and accuracy of final 3D reconstructions
  • Involves careful planning of camera settings, flight paths, and ground control to ensure optimal image coverage
  • Impacts the resolution, precision, and completeness of the resulting 3D models and orthophotos

Camera types and specifications

  • Metric cameras designed specifically for photogrammetry with known and stable internal geometry
  • Non-metric cameras (consumer-grade digital cameras) increasingly used due to advancements in camera calibration techniques
  • Key specifications include sensor size, resolution, lens quality, and focal length
  • Large format cameras offer higher resolution and accuracy for demanding aerial mapping applications
  • Multispectral and hyperspectral cameras capture data across multiple wavelengths for specialized applications

Flight planning and image overlap

  • Determines the path and altitude of the camera platform (aircraft, drone, or satellite) to achieve desired ground coverage
  • Forward overlap (typically 60-80%) ensures stereo coverage between consecutive images along the flight line
  • Side overlap (typically 20-40%) provides connections between adjacent flight lines
  • Higher overlap percentages improve tie point matching and reduce the risk of data gaps
  • Consideration of terrain variations, object height, and desired ground sampling distance (GSD) in planning, as worked through in the sketch below
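As a rough illustration of how these planning quantities interact, the sketch below computes ground sampling distance and exposure spacing from hypothetical camera and flight parameters; all numbers are placeholder examples, not recommendations.

```python
# Minimal flight-planning arithmetic (hypothetical numbers, not recommendations).

def ground_sampling_distance(pixel_size_m, height_m, focal_length_m):
    """Ground footprint of one pixel, in metres."""
    return pixel_size_m * height_m / focal_length_m

def exposure_spacing(footprint_m, overlap):
    """Distance between exposures (or flight lines) for a given overlap fraction."""
    return footprint_m * (1.0 - overlap)

# Example: 4.4 micron pixels, 24 mm lens, flying 120 m above ground,
# 4000 pixels in the flight direction.
gsd = ground_sampling_distance(4.4e-6, 120.0, 24e-3)    # ~0.022 m per pixel
footprint = gsd * 4000                                   # ~88 m on the ground
spacing = exposure_spacing(footprint, overlap=0.75)      # 75 % forward overlap -> ~22 m
print(f"GSD ~ {gsd * 100:.1f} cm, exposure spacing ~ {spacing:.1f} m")
```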

Ground control points

  • Well-defined points on the ground with known coordinates used to georeference and scale the photogrammetric model (see the sketch after this list)
  • Typically marked with targets or natural features that are easily identifiable in the images
  • Measured using high-accuracy surveying techniques (GPS, total station)
  • Distribution and number of GCPs affect the overall accuracy of the photogrammetric project
  • Can be supplemented or replaced by onboard RTK/PPK GPS systems in modern aerial platforms
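To give a flavour of how GCPs georeference and scale a model, the following sketch estimates a 3D similarity (Helmert-style) transform between arbitrary model coordinates and surveyed ground coordinates using the standard SVD-based (Umeyama) solution. The function name and the idea of applying it directly to GCP pairs are illustrative; photogrammetry packages typically fold georeferencing into the bundle adjustment.

```python
import numpy as np

def gcp_similarity_transform(model_pts, ground_pts):
    """Estimate scale s, rotation R, translation t so that ground ~ s * R @ model + t.

    model_pts, ground_pts: (N, 3) arrays of corresponding GCP coordinates (N >= 3).
    """
    mu_m, mu_g = model_pts.mean(axis=0), ground_pts.mean(axis=0)
    A, B = model_pts - mu_m, ground_pts - mu_g

    # Cross-covariance and its SVD (Umeyama's closed-form solution).
    U, S, Vt = np.linalg.svd(B.T @ A / len(model_pts))
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_g - s * R @ mu_m
    return s, R, t

# Residuals at the GCPs give a first accuracy check:
# res = ground_pts - (s * (R @ model_pts.T).T + t)
```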

Photogrammetric processing workflow

  • Transforms raw images into accurate 3D models and orthophotos through a series of computational steps
  • Involves complex algorithms for feature matching, bundle adjustment, and dense reconstruction
  • Requires significant computational resources especially for large datasets with high-resolution images

Image orientation and alignment

  • Determines the exterior orientation parameters (position and rotation) of each camera at the time of exposure
  • Utilizes automated tie point extraction and matching across multiple images
  • Employs bundle adjustment to simultaneously refine camera parameters and 3D point coordinates
  • Produces a sparse point cloud representing key features in the scene
  • Accuracy of orientation affects all subsequent processing steps and final product quality
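A heavily simplified two-image version of tie point extraction and relative orientation can be sketched with OpenCV; the file names and camera matrix below are placeholders, and real pipelines repeat this across many images and refine everything in a bundle adjustment.

```python
import cv2
import numpy as np

# Hypothetical inputs: two overlapping photos and an approximate camera matrix.
img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[3000.0, 0, 2000.0], [0, 3000.0, 1500.0], [0, 0, 1.0]])

# Tie point extraction and matching (SIFT features + ratio test).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Relative orientation: essential matrix with RANSAC, then decompose into R, t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Rotation:\n", R, "\nTranslation direction:", t.ravel())
```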

Dense point cloud generation

  • Creates a detailed 3D point cloud by computing depth information for each pixel in the aligned images
  • Utilizes multi-view stereo algorithms to match pixels across multiple overlapping images
  • Density of the point cloud depends on image resolution, texture, and processing parameters
  • Filtering techniques applied to remove noise and outliers from the dense cloud
  • Serves as the basis for generating other 3D products (mesh, DEM) and orthophotos
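Full multi-view stereo is beyond a short snippet, but the per-pixel matching idea can be illustrated with a two-view disparity map using OpenCV's semi-global matcher; a rectified stereo pair is assumed, and the file names and calibration values are placeholders.

```python
import cv2
import numpy as np

# Rectified stereo pair (hypothetical file names).
left = cv2.imread("rect_left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("rect_right.jpg", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching: one pixel-wise matcher used in dense reconstruction.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,      # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,            # smoothness penalties
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

# With baseline B (m) and focal length f (pixels), depth Z = f * B / disparity.
f_px, baseline_m = 3000.0, 0.6          # placeholder calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]
```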

Mesh creation and texturing

  • Converts the dense point cloud into a continuous 3D surface model (mesh) using triangulation algorithms
  • Mesh simplification and smoothing techniques applied to optimize the model for visualization or analysis
  • Texturing process projects original image data onto the mesh to create a photorealistic 3D model
  • Texture blending algorithms used to seamlessly combine images from multiple viewpoints
  • Resulting textured mesh used for visualization, virtual reality applications, and further analysis
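As one concrete example of the point-cloud-to-mesh step, the open-source Open3D library provides Poisson surface reconstruction and quadric simplification; the file names are placeholders, and texturing is handled separately by most photogrammetry packages.

```python
import open3d as o3d

# Load a dense point cloud produced by the previous step (placeholder file name).
pcd = o3d.io.read_point_cloud("dense_cloud.ply")
pcd.estimate_normals()      # Poisson reconstruction needs oriented normals

# Triangulate a continuous surface, then simplify it for visualization.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=200_000)
o3d.io.write_triangle_mesh("surface_mesh.ply", mesh)
```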

Accuracy and precision in photogrammetry

  • Critical aspects in evaluating the quality and reliability of photogrammetric products
  • Influenced by various factors throughout the image acquisition and processing workflow
  • Essential for ensuring the usability of photogrammetric data in scientific and engineering applications

Sources of error

  • Camera calibration errors including lens distortion and principal point offset
  • GPS/INS errors in direct georeferencing systems affecting camera position and orientation
  • Ground control point measurement errors impacting model georeferencing and scale
  • Image matching errors due to poor texture, repetitive patterns, or occlusions
  • Systematic errors from incorrect processing parameters or software limitations
  • Environmental factors such as atmospheric refraction and ground movement

Quality control measures

  • Implementation of rigorous camera calibration procedures before and during projects
  • Use of redundant observations and robust estimation techniques in bundle adjustment
  • Cross-validation of results using independent check points or overlapping models
  • Analysis of residuals and statistical measures to identify and eliminate gross errors
  • Visual inspection of intermediate and final products for artifacts or inconsistencies
  • Adherence to standardized workflows and best practices in data acquisition and processing

Accuracy assessment methods

  • Comparison of photogrammetrically derived coordinates with independently surveyed check points
  • Calculation of root mean square error (RMSE) for planimetric and vertical accuracy, illustrated in the example after this list
  • Analysis of point cloud-to-point cloud or mesh-to-mesh differences between overlapping models
  • Use of cross-sections and profiles to assess the accuracy of 3D reconstructions
  • Evaluation of orthophoto accuracy through visual inspection and feature measurement
  • Application of statistical tests to assess the significance of observed errors and deviations
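The check-point comparison and RMSE calculation referenced above reduce to a few lines of NumPy; the coordinates here are made-up placeholders.

```python
import numpy as np

# Photogrammetric coordinates vs. independently surveyed check points (placeholders).
measured = np.array([[1000.02, 2000.05, 50.11],
                     [1010.98, 2020.01, 49.87],
                     [1025.03, 1995.04, 51.02]])
surveyed = np.array([[1000.00, 2000.00, 50.00],
                     [1011.00, 2020.00, 50.00],
                     [1025.00, 1995.00, 51.00]])

diff = measured - surveyed
rmse_xy = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))   # planimetric (horizontal) RMSE
rmse_z = np.sqrt(np.mean(diff[:, 2] ** 2))                     # vertical RMSE
print(f"Horizontal RMSE: {rmse_xy:.3f} m, vertical RMSE: {rmse_z:.3f} m")
```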

Digital elevation models (DEMs)

  • Represent the Earth's surface topography in a digital format essential for various geospatial analyses
  • Serve as fundamental datasets in GIS enabling terrain visualization, hydrological modeling, and landscape analysis
  • Play a crucial role in orthorectification of aerial and satellite imagery

Types of DEMs

  • Digital terrain models (DTMs) represent the bare earth surface without vegetation or buildings
  • Digital surface models (DSMs) include the heights of objects on the surface (trees, buildings)
  • Normalized DSMs (nDSMs) show the height difference between DSM and DTM, as computed in the sketch after this list
  • TINs (Triangulated Irregular Networks) represent terrain using connected triangular facets
  • Raster DEMs store elevation values in a regular grid, the most common format for analysis and visualization
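Because a normalized DSM is simply the per-cell difference between the DSM and the DTM, the computation is a one-liner on co-registered rasters; this sketch assumes the rasterio library and placeholder file names.

```python
import numpy as np
import rasterio

# Co-registered rasters on the same grid and extent (placeholder file names).
with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype(np.float32)
    dtm = dtm_src.read(1).astype(np.float32)
    profile = dsm_src.profile

# Normalized DSM: above-ground height of buildings, trees, and other objects.
ndsm = np.clip(dsm - dtm, 0.0, None)

profile.update(dtype="float32")
with rasterio.open("ndsm.tif", "w", **profile) as dst:
    dst.write(ndsm, 1)
```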

DEM generation process

  • Extraction of elevation points from stereo image matching or dense point clouds
  • Filtering and classification of points to separate ground from non-ground features (for DTMs)
  • Interpolation of elevation values to create a continuous surface model, as sketched after this list
  • Resampling to desired resolution and coordinate system
  • Quality control and editing to remove artifacts and ensure hydrological consistency
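The interpolation step can be sketched with SciPy's griddata, turning scattered ground points into a regular grid; the synthetic points below stand in for a classified point cloud, and production tools typically use more sophisticated interpolators (TIN-based, kriging).

```python
import numpy as np
from scipy.interpolate import griddata

# Scattered ground points standing in for a classified point cloud (synthetic data).
points = np.random.rand(5000, 2) * 100.0                  # x, y in metres
elevations = 50.0 + 5.0 * np.sin(points[:, 0] / 10.0)     # fake terrain heights

# Regular 1 m grid covering the same area.
xi, yi = np.meshgrid(np.arange(0.0, 100.0, 1.0), np.arange(0.0, 100.0, 1.0))

# Linear interpolation onto the grid; cells outside the data hull become NaN.
dem = griddata(points, elevations, (xi, yi), method="linear")
```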

Applications of DEMs

  • Terrain analysis for slope, aspect, and curvature calculations
  • Watershed delineation and hydrological modeling for flood risk assessment
  • Viewshed analysis for telecommunications and wind farm planning
  • Cut and fill volume calculations for earthwork and mining operations
  • Contour generation for topographic mapping and navigation
  • Input for orthorectification of aerial and satellite imagery
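As a small worked example of terrain analysis, slope and aspect follow directly from the DEM's elevation gradients; this NumPy sketch assumes a square cell size, and the aspect convention shown is only one of several in use.

```python
import numpy as np

def slope_aspect(dem, cell_size):
    """Slope (degrees) and aspect (degrees, one common convention) from a raster DEM."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)     # axis 0 is rows (y), axis 1 is columns (x)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# Example with a synthetic tilted plane: slope is constant everywhere.
y, x = np.mgrid[0:100, 0:100]
dem = 0.1 * x + 0.05 * y
slope, aspect = slope_aspect(dem, cell_size=1.0)
```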

Orthophotography

  • Combines the image characteristics of a photograph with the geometric qualities of a map
  • Provides a planimetrically correct representation of the Earth's surface
  • Serves as a valuable data source for mapping, GIS, and remote sensing applications

Principles of orthorectification

  • Process of removing geometric distortions in aerial or satellite imagery caused by camera tilt and terrain relief
  • Utilizes collinearity equations to establish relationships between image and ground coordinates
  • Requires accurate camera orientation parameters and a detailed digital elevation model
  • Resamples original image pixels to a specified map projection and coordinate system
  • Results in an orthophoto where all points are in their true orthographic position
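A toy version of the backward-projection loop (for each output map cell, look up the terrain height from the DEM, project the ground point into the source image with the camera model, and sample a pixel) might look like the following; the pinhole camera model here ignores lens distortion, and real orthorectification engines vectorize the resampling and use bilinear or cubic interpolation.

```python
import numpy as np

def orthorectify(image, dem, K, R, C, cell_size, origin):
    """Toy backward projection: one output pixel per DEM cell (nearest-neighbour sampling).

    image: RGB source photo, K: camera matrix, R: rotation, C: camera centre,
    origin: map coordinates (x_min, y_max) of the DEM's upper-left corner.
    """
    rows, cols = dem.shape
    ortho = np.zeros((rows, cols, image.shape[2]), dtype=image.dtype)
    for i in range(rows):
        for j in range(cols):
            # Ground coordinates of this map cell, with terrain height from the DEM.
            X = np.array([origin[0] + j * cell_size,
                          origin[1] - i * cell_size,
                          dem[i, j]])
            # Collinearity in matrix form: project the ground point into the image.
            uvw = K @ R @ (X - C)
            u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
            if 0 <= int(v) < image.shape[0] and 0 <= int(u) < image.shape[1]:
                ortho[i, j] = image[int(v), int(u)]
    return ortho
```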

Orthophoto production workflow

  • Image preprocessing including radiometric corrections and color balancing
  • Aerial triangulation or image orientation to determine exterior orientation parameters
  • DEM generation or acquisition for terrain modeling
  • Orthorectification process applying geometric corrections to each image
  • Mosaicking of individual orthophotos into a seamless coverage
  • Color balancing and tonal adjustments for visual consistency
  • Quality control and accuracy assessment of the final orthophoto mosaic

Uses of orthophotos

  • Base maps for GIS providing up-to-date visual context for spatial data
  • Urban planning and land use monitoring for tracking development and change
  • Agricultural management for crop monitoring and precision farming
  • Forestry applications including tree cover mapping and forest health assessment
  • Emergency response and disaster management for rapid damage assessment
  • Property boundary mapping and cadastral updates
  • Transportation planning and infrastructure mapping

3D reconstruction techniques

  • Enable the creation of detailed 3D models from 2D images crucial for various applications in Images as Data analysis
  • Utilize computer vision algorithms to extract 3D information from multiple viewpoints
  • Produce dense point clouds, textured meshes, and other 3D representations of real-world objects and scenes

Structure from motion (SfM)

  • Photogrammetric technique that simultaneously estimates 3D structure and camera motion from image sequences
  • Utilizes feature detection and matching algorithms (SIFT, SURF) to identify corresponding points across images
  • Performs bundle adjustment to optimize camera parameters and 3D point positions
  • Generates sparse point clouds and camera poses as initial outputs
  • Suitable for unordered image collections and varying camera geometries
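SfM libraries implement bundle adjustment internally, but the quantity they minimize, the reprojection error, can be written compactly; the sketch below is a hypothetical residual function (pinhole camera, no distortion) of the kind that could be handed to scipy.optimize.least_squares.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed_xy, K):
    """Residuals between observed tie points and reprojected 3D points.

    params packs, per camera, a rotation vector and translation (6 values),
    followed by the 3D coordinates of every tie point.
    """
    cam_params = params[: n_cams * 6].reshape(n_cams, 6)
    points3d = params[n_cams * 6:].reshape(n_pts, 3)

    residuals = []
    for c, p, obs in zip(cam_idx, pt_idx, observed_xy):
        R = Rotation.from_rotvec(cam_params[c, :3]).as_matrix()
        t = cam_params[c, 3:]
        proj = K @ (R @ points3d[p] + t)          # pinhole projection
        residuals.append(proj[:2] / proj[2] - obs)
    return np.concatenate(residuals)

# With real data: least_squares(reprojection_residuals, x0, args=(...), method="trf")
```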

Multi-view stereo (MVS)

  • Dense reconstruction technique that follows SfM to create detailed 3D models
  • Computes depth maps for each image using pixel-wise matching across multiple views
  • Fuses depth maps to create dense point clouds or volumetric representations
  • Employs various algorithms (patch-based, volumetric, depth map fusion) for reconstruction
  • Produces highly detailed 3D models capturing fine surface geometry

Comparison of SfM vs MVS

  • SfM focuses on camera pose estimation and sparse reconstruction, while MVS focuses on dense geometry reconstruction
  • SfM operates on feature points, whereas MVS utilizes full image information
  • SfM is more robust to varying image collections, while MVS requires more controlled image acquisition
  • SfM produces sparse point clouds and camera parameters, whereas MVS generates dense point clouds or meshes
  • SfM is computationally lighter, while MVS requires significant processing power and memory
  • SfM serves as a prerequisite for MVS in most photogrammetric workflows

Software tools for photogrammetry

  • Provide specialized functionality for processing photogrammetric data and generating 3D models
  • Range from user-friendly solutions for non-experts to advanced packages for professional applications
  • Play a crucial role in automating complex photogrammetric workflows and improving productivity

Commercial software options

  • Agisoft Metashape offers a comprehensive photogrammetric workflow with advanced features
  • Pix4Dmapper specializes in drone-based mapping and modeling with industry-specific solutions
  • Bentley ContextCapture provides large-scale 3D reconstruction capabilities for infrastructure projects
  • Trimble Inpho offers high-precision photogrammetric tools for aerial and satellite imagery processing
  • RealityCapture known for fast processing and high-quality mesh generation

Open-source alternatives

  • OpenDroneMap provides a complete photogrammetric pipeline for drone imagery processing
  • MicMac offers a flexible photogrammetric toolset developed by IGN France
  • COLMAP implements state-of-the-art Structure-from-Motion and Multi-View Stereo algorithms
  • VisualSFM combines SfM techniques with GPU acceleration for efficient 3D reconstruction
  • OpenSfM provides a Python implementation of Structure from Motion techniques

Cloud-based processing platforms

  • DroneDeploy offers cloud-based mapping and analytics solutions for drone imagery
  • Mapbox provides cloud infrastructure for processing and hosting large-scale photogrammetric datasets
  • Skycatch delivers cloud-based photogrammetry services for construction and mining industries
  • Propeller Aero specializes in cloud processing of drone data for surveying and earthwork applications
  • Pix4Dcloud enables online processing and collaboration for various photogrammetry projects

Challenges in photogrammetry

  • Present ongoing obstacles in achieving accurate and efficient 3D reconstructions from images
  • Drive research and development efforts to improve photogrammetric techniques and technologies
  • Require innovative solutions to expand the applicability of photogrammetry in diverse fields

Dealing with complex geometries

  • Reconstruction of thin structures and sharp edges often results in smoothing or loss of detail
  • Handling of transparent, reflective, or homogeneous surfaces poses challenges for image matching
  • Occlusions and self-occlusions in complex scenes lead to incomplete or inaccurate reconstructions
  • Multi-scale approaches and adaptive meshing techniques address varying levels of detail
  • Integration of prior knowledge or constraints improves reconstruction of known object types

Handling large datasets

  • Processing of high-resolution images and large image collections requires significant computational resources
  • Data management and storage become critical for projects with terabytes of imagery and point clouds
  • Scalable algorithms and distributed computing solutions enable processing of massive datasets
  • Efficient data structures and out-of-core processing techniques manage memory limitations
  • Balancing processing time and reconstruction quality presents trade-offs in large-scale projects

Automation vs manual intervention

  • Fully automated workflows may produce suboptimal results in challenging scenarios
  • Manual intervention often required for quality control and refinement of automated results
  • Striking a balance between automation and user control to ensure accuracy and efficiency
  • Development of semi-automated tools with intuitive user interfaces for guided editing
  • Integration of machine learning techniques to reduce the need for manual intervention while maintaining quality

Future trends in photogrammetry

  • Shape the evolving landscape of 3D reconstruction and spatial data acquisition technologies
  • Drive innovations in hardware, software, and methodologies for photogrammetric applications
  • Expand the capabilities and accessibility of photogrammetry across various industries and research fields

Integration with other technologies

  • Fusion of photogrammetry with LiDAR data for improved accuracy and completeness of 3D models
  • Integration of thermal and multispectral imaging for enhanced analysis capabilities
  • Combination of terrestrial, aerial, and satellite photogrammetry for multi-scale mapping
  • Incorporation of GNSS/INS technologies for direct georeferencing and improved efficiency
  • Synergy with virtual and augmented reality for immersive visualization and interaction with 3D models

Advancements in machine learning

  • Deep learning techniques for improved feature matching and image classification
  • Convolutional neural networks for semantic segmentation of photogrammetric point clouds
  • Generative adversarial networks (GANs) for enhancing image resolution and filling data gaps
  • Reinforcement learning for optimizing flight planning and image acquisition strategies
  • Transfer learning approaches to adapt photogrammetric models to new domains with limited training data

Emerging applications

  • Real-time 3D reconstruction for autonomous navigation and robotics
  • Photogrammetric monitoring of structural health in civil engineering
  • Personalized medicine through 3D body scanning and modeling
  • Digital twin creation for smart cities and infrastructure management
  • Planetary photogrammetry for exploration and mapping of other celestial bodies