filters.programmable
The programmable filter takes a stream of points and applies a Python function to each point in the stream.
The function must accept two NumPy arrays as arguments, ins and outs. The ins array represents the input points and the outs array represents the output points. Each array contains all of the dimensions of the point schema for some number of points (how many depends on the size of the point buffer the pipeline is processing at the time, a run-time consideration). Individual arrays for each dimension can be read from the input points and written to the output points.
import numpy as np

def multiply_z(ins, outs):
    # Read the Z dimension from the input points
    Z = ins['Z']
    # Scale the Z values by a factor of 10
    Z = Z * 10.0
    # Write the scaled values back to the output points
    outs['Z'] = Z
    return True
Note that the function always returns True. If the function returned False, an error would be thrown and the translation shut down.
If you want to write a dimension that might not be available, you can use one or more add_dimension options.
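As a rough sketch (the dimension name ScaledZ and the function name scaled_z are illustrative names, not part of any example in this document), a stage that includes "add_dimension":"ScaledZ" among its options lets the function write to that dimension:

import numpy as np

def scaled_z(ins, outs):
    # ScaledZ is the dimension declared via the add_dimension option;
    # it does not exist in the input file, so it is only written to the outputs.
    outs['ScaledZ'] = ins['Z'] * 10.0
    return True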
To filter points based on a Python function, use the filters.predicate filter.
Example
{
  "pipeline":[
    "file-input.las",
    {
      "type":"filters.ground"
    },
    {
      "type":"filters.programmable",
      "script":"multiply_z.py",
      "function":"multiply_z",
      "module":"anything"
    },
    {
      "type":"writers.las",
      "filename":"file-filtered.las"
    }
  ]
}
This JSON pipeline references the external multiply_z.py Python script, which scales the Z coordinate up by a factor of 10:
import numpy as np

def multiply_z(ins, outs):
    Z = ins['Z']
    Z = Z * 10.0
    outs['Z'] = Z
    return True
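To run the example, one option is to save the pipeline to a file and pass it to PDAL's pipeline command (the filename pipeline.json is only an illustrative choice):

pdal pipeline pipeline.json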
Options
- script
  When reading a function from a separate Python file, the file name to read from. [Example: functions.py]
- module
  The Python module that is holding the function to run. [Required]
- function
  The function to call.
- source
  The literal Python code to execute, when the script option is not being used (see the sketch after this list).
- add_dimension
  The name of a dimension to add to the pipeline that does not already exist.
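As a rough sketch of the source option (the function name offset_z and the embedded code are made up for illustration), the Python code is passed as an escaped string directly in the pipeline stage instead of referencing a separate script file:

{
  "type":"filters.programmable",
  "module":"anything",
  "function":"offset_z",
  "source":"import numpy as np\ndef offset_z(ins, outs):\n    outs['Z'] = ins['Z'] + 1.0\n    return True"
}

Because JSON strings cannot span multiple lines, the newlines and indentation of the embedded function are written with \n escapes.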