LiDAR 101
Light Detection And Ranging, better known as LiDAR, is becoming ever more integrated into our daily lives. The latest iPhones even let you try it at home. Where did this technology originate? How does it work? And most importantly, how can it be used to create smart & sustainable futures?
LiDAR group picture of the Sobolt team, made with an iPhone
LiDAR what?
Dolphins & bats use echolocation to create an image of the world around them. By sending out sounds, these animals measure how long it takes for the sound to reflect off surrounding obstacles and return; the longer an echo takes to come back, the further away the object is. In 1906, humans took this technique as inspiration for the development of sonar, which uses sound waves to measure distances as well.
Sonar then evolved into radar, which applies the same idea but uses radio waves travelling through the air instead of sound. Radar dishes spin around, emitting a beam of radio waves. When those waves hit an object, they bounce back and the dish picks them up. Since radio waves travel at the speed of light, which is precisely known, the round-trip time tells us how far away the object is.
LiDAR uses the reflection of laser beams to create an image of the scanned objects. Since light has a very short wavelength, LiDAR can detect & present the scanned area in minute detail. But that’s not all. The material composition & speed of the reflecting object also alter the characteristics of the returning light, which allows the LiDAR detector to differentiate between scanned objects.
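For those curious about the underlying arithmetic, below is a minimal sketch of the time-of-flight calculation that both radar & LiDAR rely on. The function name and the 200-nanosecond example pulse are ours, chosen purely for illustration.

```python
# Minimal sketch of the time-of-flight principle behind radar & LiDAR:
# emit a pulse, time the echo, and convert the round trip into a distance.

SPEED_OF_LIGHT = 299_792_458  # metres per second (in vacuum; air is close enough here)

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in metres.

    The pulse travels to the object and back, so we halve the round trip.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds bounced off something ~30 m away.
print(distance_from_echo(200e-9))  # ≈ 29.98
```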
From image to information
A LiDAR image is a collection of many single points reflected off the various objects around the scanner. These images are called point clouds. A point cloud looks cool, but on its own it is of little use; the real added value lies in its interpretation. This is where machine learning & AI come into play: these techniques interpret the point cloud and turn it into tangible, highly accurate & reliable information about the scanned objects.
Interpreting point cloud data is done in 2 stages:
- First, AI is applied to identify the different types of objects within the point cloud. There are various techniques for doing so; most machine learning models look for a correlation between the shape, density & intensity of the points and the type of object. For example, asphalt roads are generally flat & black. These predefined object characteristics are then used to identify all the points belonging to a road.
- Once the different objects within a point cloud have been identified, the AI algorithm analyzes these objects by measuring their characteristics. What gets measured depends on the information needed; commonly used measurements are the width, height & volume of the object. A simplified sketch of both stages follows below.
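To make those two stages concrete, here is a deliberately simplified sketch in Python. It replaces the trained machine learning model with hard-coded thresholds (flat & dark points are treated as asphalt), and the points, thresholds & units are made up for demonstration only.

```python
import numpy as np

# Toy stand-in for the two stages described above, with hard-coded rules
# instead of a trained model. Each point is (x, y, z, intensity); all values
# are invented for this example.
points = np.array([
    [0.0, 0.0, 0.02, 0.10],   # dark & flat   -> likely asphalt
    [1.5, 0.2, 0.03, 0.12],
    [2.0, 3.0, 4.50, 0.80],   # high & bright -> not road
    [3.2, 0.1, 0.01, 0.09],
])

# Stage 1: identify the object. Asphalt is roughly flat (low z) and dark (low intensity).
is_road = (points[:, 2] < 0.1) & (points[:, 3] < 0.3)
road_points = points[is_road]

# Stage 2: measure the identified object, here simply its footprint on the ground.
length = road_points[:, 0].max() - road_points[:, 0].min()
width = road_points[:, 1].max() - road_points[:, 1].min()
print(f"road points: {len(road_points)}, footprint: {length:.1f} m x {width:.1f} m")
```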
Applications of LiDAR
LiDAR was first used in meteorology to measure clouds & pollution. The technology got its real claim to fame in 1971, when Apollo 15 used it to accurately map the surface of the moon. Since then, the technology has improved dramatically & become cheaper, making it accessible for more mainstream applications.
LiDAR uses ultraviolet, visible or near-infrared light to image objects. It can target a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules. This makes the technology applicable in any field where visual recognition is required, for example:
- Agriculture: measuring the growth of crops or mapping the soil
- Biodiversity policy: allowing municipalities to map the trees and identify their key characteristics
- Car industry: autonomous vehicles
- Infrastructure: road inspections
Let’s take the example of road inspections. LiDAR images are collected from a car equipped with a scanner. Take a guardrail: by analyzing its positioning, bends, dents & skewness, we can accurately assess its condition.
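As an illustration of how such an analysis might look, here is a small sketch that fits a straight reference line through scanned guardrail points and flags points that deviate from it. The coordinates and the 5 cm tolerance are invented for the example and do not reflect any real inspection standard.

```python
import numpy as np

# Illustrative check of a guardrail's alignment from its scanned points (x, y in metres).
guardrail_xy = np.array([
    [0.0, 0.00],
    [1.0, 0.01],
    [2.0, 0.12],   # a dent: this point sits well off the line
    [3.0, 0.02],
    [4.0, 0.01],
])

# Fit a straight reference line through the guardrail's course...
slope, intercept = np.polyfit(guardrail_xy[:, 0], guardrail_xy[:, 1], deg=1)
expected = slope * guardrail_xy[:, 0] + intercept

# ...and flag every point that deviates more than 5 cm from it.
deviation = np.abs(guardrail_xy[:, 1] - expected)
dents = guardrail_xy[deviation > 0.05]
print(f"possible dents at x = {dents[:, 0]}")
```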
Combining LiDAR with AI in road inspections has many benefits:
- Accuracy. The analysis achieves an accuracy of over 99%.
- Safety. The images are taken from a moving vehicle, so no one has to inspect objects physically while traffic rushes past. Unsafe assets are also spotted directly, making follow-up quicker & easier.
- Time. Physical inspections require roads or lanes to be closed off, causing congestion & longer travel times for road users.
- Sustainability. Congestion causes higher CO2 emissions; by preventing it, we save time, money & pollution. Furthermore, more accurate inspections allow better preventative measures to be taken, ensuring a longer lifetime of the asset.
From nature for nature
Where animals have been using echolocation for millennia, we have only just started to unlock its vast potential. Now that we have learnt from nature, it is time to give back. With the help of LiDAR + AI, we are ready to take better action for sustainable futures.