
Reality Capture

True North | Spring 2021

Technological innovation has played a crucial role in transforming the geospatial industry and the interdisciplinary fields around it, and today it is reshaping our future. Reality Capture is widely considered one of the most revolutionary concepts transforming the industry, shifting us from an analog to a digital representation of the world.

Given its increasing prevalence, we felt it was time to start a conversation in the form of a series of Reality Capture articles that aim to:

  • Educate our customers on emerging technologies
  • Flatten the learning curve and ease the transition to our products and services
  • Enlighten our customers on horizontal and vertical opportunities to leverage 3D Reality Capture in their current operations
  • Build B2B partnerships anchored by trust

Let’s start with a brief explanation of what Reality Capture is. Reality Capture refers to the detailed digital representation of the existing three-dimensional physical world, captured primarily by two technologies: LiDAR and photogrammetry. With these, we can analyze, collaborate, and make decisions that improve and maintain overall project and product quality.

1. LiDAR

LiDAR (light detection and ranging) determines ranges by transmitting a laser pulse toward a target and measuring the time it takes for the reflected light to return to the receiver.
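
As a simple, sensor-agnostic illustration of this time-of-flight principle, the one-way range is half the round-trip distance travelled by the pulse. The snippet below is a minimal sketch, not code from any particular LiDAR system:

    # Time-of-flight ranging: a minimal, generic sketch.
    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def range_from_time_of_flight(round_trip_time_s: float) -> float:
        """One-way range in meters from the measured round-trip time of a pulse."""
        return SPEED_OF_LIGHT * round_trip_time_s / 2.0

    # A pulse that returns after 200 nanoseconds corresponds to a target
    # roughly 30 m away.
    print(range_from_time_of_flight(200e-9))  # ~29.98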

Today's LiDAR sensors fire anywhere from thousands to millions of pulses per second. Processing software converts these measurements into a 3D scene or model composed of millions of points in space, commonly known as a ‘point cloud’. This 3D model might be the interior of a house, the exterior of a building, an industrial plant, a mechanical room, a cell tower, a construction or mine site, or, on a larger scale, a forested area or an entire city.
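
Conceptually, each returned pulse becomes one point of the cloud by combining the measured range with the direction in which the pulse was fired. The sketch below assumes an idealized scanner, with no calibration or georeferencing corrections, and its angles are illustrative only:

    import math

    # Idealized conversion of one LiDAR measurement (range plus the horizontal
    # and vertical firing angles) into an x, y, z point in the scanner's frame.
    def point_from_measurement(range_m: float, azimuth_rad: float, elevation_rad: float):
        x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
        y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
        z = range_m * math.sin(elevation_rad)
        return (x, y, z)

    # Millions of such points per second accumulate into the point cloud.
    cloud = [point_from_measurement(29.98, math.radians(45.0), math.radians(10.0))]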

There are essentially three main LiDAR data collection platforms, each capable of reaching near millimeter-level positioning accuracy: terrestrial, mobile, and airborne.

Terrestrial: LiDAR data is collected by stationary sensors mounted on ground-based tripods, commonly known as 3D laser scanners (such as the FARO Focus series), or by hand-held laser scanners used indoors and outdoors.

Courtesy of FARO

Mobile: LiDAR data is collected by laser sensors mounted on a moving vehicle. The Trimble MX9, for example, produces a high-density point cloud accompanied by immersive imagery, is offered in single- or dual-laser configurations, and carries state-of-the-art onboard Trimble® GNSS and inertial technology for on-the-fly georeferencing, allowing it to be matched to a wide range of customer needs.

Airborne or Aerial: LiDAR data is collected by sensors mounted on airplanes, helicopters, or Unmanned Aerial Vehicles (UAVs).

A UAV, now formally referred to as an RPAS (Remotely Piloted Aircraft System), is operated by a remote pilot and typically flies autonomously along pre-planned flight plans. LiDAR-based UAVs, such as those made by Microdrones, rely on direct georeferencing and therefore require GNSS/IMU hardware onboard. These systems are capable of penetrating vegetation while delivering precise geometry.

Courtesy of Microdrones
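
Direct georeferencing means that each LiDAR point, measured in the sensor's own frame, is transformed into world coordinates using the GNSS-derived position and the IMU-derived attitude of the platform at the instant of measurement. The sketch below is deliberately simplified, applying a yaw rotation only and ignoring boresight and lever-arm calibration, and its coordinates are purely illustrative:

    import math

    # Simplified direct georeferencing: rotate the sensor-frame offset by the
    # platform's yaw (from the IMU), then translate by the platform position
    # (from GNSS). Real systems apply the full roll/pitch/yaw attitude plus
    # boresight and lever-arm corrections.
    def georeference(sensor_point, platform_position, yaw_rad):
        sx, sy, sz = sensor_point
        px, py, pz = platform_position
        wx = sx * math.cos(yaw_rad) - sy * math.sin(yaw_rad)
        wy = sx * math.sin(yaw_rad) + sy * math.cos(yaw_rad)
        return (px + wx, py + wy, pz + sz)

    # A point 30 m ahead of and 40 m below the platform, with zero yaw:
    print(georeference((30.0, 0.0, -40.0), (500000.0, 5400000.0, 1200.0), 0.0))
    # -> (500030.0, 5400000.0, 1160.0)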

2. Photogrammetry

Photogrammetry is the science of combining photography and geometry. By taking photographs of the same object or surface from at least two different locations, ‘lines of sight’ are developed, and ‘tie points’ are created on overlapping images from common features. The fundamental principle is triangulation, similar to the way our eyes work together to provide depth perception. Dedicated photogrammetric software stitches together and georeferences all the photographs to produce 3D coordinates of the objects in space, forming point clouds.
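
In the simplest two-image case, an idealized, rectified stereo pair with no lens distortion, triangulation reduces to relating the apparent shift of a tie point between the two images to its distance from the cameras. The numbers below are purely illustrative:

    # Idealized two-view triangulation: depth = focal length x baseline / disparity,
    # where disparity is the pixel shift of the same feature between the two images.
    # Photogrammetric software generalizes this to many images and arbitrary
    # camera orientations.
    def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
        """Distance from the camera pair to the tie point, in meters."""
        return focal_length_px * baseline_m / disparity_px

    # A 4000-pixel focal length, a 2 m baseline, and a 100-pixel disparity
    # place the tie point about 80 m away.
    print(depth_from_disparity(4000.0, 2.0, 100.0))  # 80.0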

The two main types of photogrammetry are terrestrial and aerial.

  • Terrestrial: the camera is used in a stationary position, mounted at an elevated point (usually on a tripod) or held by hand.
  • Aerial: the camera sensor is mounted on airplanes, helicopters, or UAVs.

Photogrammetric UAVs fall into two broad categories based on the airframe: rotary-wing aircraft, such as the Microdrones Mapper quadcopters, and fixed-wing aircraft, such as the Delair UX11. Image-based UAVs can be equipped with different camera sensors (RGB, multispectral, or hyperspectral) and payloads (i.e., any sensor mounted on the UAV) according to the intended operations and required deliverables.

To Summarize

Reality Capture is a testament to how far the industry has come in the past decade, taking 3D mapping to a whole new level. As technological advancements continue to improve the speed, accuracy, and density of data, the associated costs keep falling and are no longer considered prohibitive.

At Cansel, we believe that change is always around the corner. We want to make sure we are here to support our customers by providing them with the expertise and agility to make confident decisions for their future development objectives using Reality Capture.

In the upcoming articles, and with the help of our technical experts, we will discuss in further detail each technology involved in Reality Capture data collection. We will also showcase how Cansel’s product portfolio and team have been helping multidisciplinary corporations to integrate those technologies within their organization’s structure and workflow, thus maximizing their investments.

Iliana Tsali, PhD
Survey Account Manager, Alberta
Cansel