Processing LiDAR data with the laser cli

This package includes laser, a suite of command line tools for processing lidar data. These tools can be chained together into data processing pipelines to go from raw point cloud data to rasters of canopy structure.

We use the name laser for all command line tools, and lazer for the python package.

Once you've installed the lazer package, you can run laser {process} to use the command line utilities. Running laser -h will describe them all:

usage: laser [-h]  ...

optional arguments:
  -h, --help       show this help message and exit

laser:

    canopycover     Rasterizes canopy cover from height-normalized data.
    canopyheight    Rasterizes tree heights from height-normalized data.
    classify        Classifies buildings and vegetation from height-normalized data.
    ground          Identifies and classifies ground points.
    index           Creates a spatial index to improve multi-file processing speed.
    info            Retrieves basic las/laz file info, like the coordinate system.
    ladderfuels     Rasterizes ladder fuel density from height-normalized data.
    mosaic          Merges a list of rasters into a single mosaic.
    reproject       Converts xyz location horizontal datum.
    slicer          Rasterizes canopy vertical profiles.
    tile            Splits a series of las/laz files into square tiles with optional overlap buffers between them.
    verticalmetrics Converts vertical profile data to structural metric rasters.
    znormalize      Computes height above ground for each point.

Run laser {process} -h to show the help for each process.
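
For example, to see the options for the ground classification step:

laser ground -h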

👈🏾 You can find docs for each in the table of contents.

Dependencies

These utilities have two external dependencies: wine and titan. Neither is installed in the lazer conda environment.

Wine is a compatibility layer for running Windows programs on unix systems, which we use to call LAStools. It can be a pain to install, and it has to be installed at the system level.

Titan is a salosciences package that provides utilities for working with geospatial data. It's mostly used to pass consistent command line arguments and provide progress bars for some raster functions. You can install it with pip.
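
As a rough sketch, installation might look like the following. The wine package name depends on your operating system (Debian/Ubuntu shown), and the titan package name and index here are assumptions, so check the salosciences install docs for the canonical commands.

# system-level install; Debian/Ubuntu shown, other systems differ
sudo apt-get install wine
# assumes the package is published as titan on an index pip can reach
pip install titan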


laser workflows

[Figure: laser processing workflow block diagram]

laser tools are designed to be run in series to process large volumes of data on distributed compute systems.

The basic unit for LiDAR data processing is the tile, which is a small square-shaped area that contains thousands to hundreds of thousands of xyz point location records.

Most sites we process will contain hundreds to thousands of contiguous tiles, occasionally with buffers between them that contain duplicate points. laser processes can work with these data on large multi-core instances or on many small instances.

Many of the steps can be run either on a series of input las/laz files, which distributes processing across all the cores on a machine (often controllable with the --cores argument), or on a tile-by-tile basis.
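
For example, to run the ground classification step across a directory of tiles while limiting the core count (for steps that accept --cores; the directory names here are placeholders):

laser ground tiled/*.laz -odir ground/ --cores 8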

Once each tile goes through the full laz processing pipeline (index -> reproject -> tile -> identify ground -> height normalize -> classify), we convert the processed las/laz files into raster metrics of vegetation structure (canopy height, canopy cover, ladder fuel density, vertical metrics).
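
As a rough sketch, that laz pipeline might look like the following for a directory of raw tiles. The directory names are placeholders, and some steps (reproject in particular) will need additional arguments not shown here, so check laser {process} -h for each one:

# illustrative only: check laser {process} -h for each step's exact arguments
laser index rawdata/*.laz
laser reproject rawdata/*.laz -odir reprojected/
laser tile reprojected/*.laz -odir tiled/
laser ground tiled/*.laz -odir ground/
laser znormalize ground/*.laz -odir znormalized/
laser classify znormalized/*.laz -odir classified/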

These vegetation structure datasets are produced on a tile-by-tile basis, then merged into large rasters with laser mosaic. The mosaic rasters are the final data outputs from the pipeline and are uploaded to our cloud storage system.
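
For example, merging per-tile canopy height rasters might look like this (the raster file extension and the -o output flag for mosaic are assumptions; check laser mosaic -h):

laser mosaic canopyheight/*.tif -o canopyheight_mosaic.tif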

Common usage patterns

Specifying input and output files

Many of the laser tools map one input file to one output file. This is the case for most of the laz processing steps.

laser classify input_file.laz -o output_file.laz

You can also specify just an output directory, and an output filename will be generated automatically:

laser reproject {basename}.laz -odir reprojected/

And the output file will be reprojected/{basename}.laz.

You can specify multiple input files as multiple arguments or as wildcards, or even both at the same time. If you're running a process that maps inputs to outputs, it's recommended that you set an output directory instead of an output file.

laser tile input_1.laz input_2.laz rawdata/*.laz -odir tiled/

You can also pass an ascii file where each line has the path to a file to process. This can be useful if you have a large number of files to glob. Just set the --input-is-list flag and the first input will be read as an ascii list.

laser canopyheight classified-laz-list.txt --input-is-list -odir canopyheight/
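
One way to build such a list without hitting shell argument-length limits (assuming your classified tiles live under classified/):

find classified/ -name '*.laz' > classified-laz-list.txt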

Handling tile bounding boxes

If the lidar data is processed with laser tile, we add a small buffer around each tile so that the ground point search/classification routine doesn't create interpolation artifacts at the boundaries between tiles.

When we run steps like laser canopycover, however, we only want to export the tiles to the extent of the original tile bounding box so we have perfectly non-overlapping tiles to merge with laser mosaic.

To do this, run the raster tools with the --use-tile-bb flag. This ensures the buffers are stripped before the raster tiles are exported.
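
For example (directory names are placeholders):

laser canopycover classified/*.laz -odir canopycover/ --use-tile-bb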

This flag is unnecessary if you're just processing tiled data with no buffer specified. This would be the case for datasets that already have ground points classified and don't need to be tiled for processing.
