Workbench for 3D target detection and recognition from airborne motion stereo and ladar imagery

3D imagery has a well-known potential for improving situational awareness and battlespace visualization by providing enhanced knowledge of uncooperative targets. This potential stems from the advantages that 3D imagery offers over traditional 2D imagery, which can increase the accuracy of automatic target detection (ATD) and recognition (ATR). Despite advancements in both 3D sensing and 3D data exploitation, 3D imagery has yet to demonstrate a true operational gain, partly due to the processing burden of the massive data volumes generated by modern sensors. In this context, this paper describes the current status of a workbench designed for the study of 3D ATD/ATR. Among the project goals is the comparative assessment of algorithms and 3D sensing technologies under various scenarios. The workbench comprises three components: a database, a toolbox, and a simulation environment. The database stores, manages, and edits input data of various types, such as point clouds, video, still imagery frames, CAD models, and metadata. The toolbox features data processing modules, including range data manipulation, surface mesh generation, texture mapping, and a shape-from-motion module to extract a 3D target representation from video frames or from a sequence of still imagery. The simulation environment includes synthetic point cloud generation, a 3D ATD/ATR algorithm prototyping environment, and performance metrics for comparative assessment. In this paper, the workbench components are described and preliminary results are presented. Ladar, video, and still imagery datasets collected during airborne trials are also detailed.
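The abstract names surface mesh generation from range data as one of the toolbox modules but gives no implementation details. Purely as an illustration of what such a step can look like, the following is a minimal sketch using the open-source Open3D library; the file names, voxel size, and reconstruction depth are assumptions for the example, not the workbench's actual code or parameters.

```python
# Illustrative sketch only (not the workbench implementation): converting a
# ladar point cloud into a surface mesh with Open3D. File names and parameter
# values below are placeholder assumptions.
import open3d as o3d

# Load a target point cloud (placeholder file name).
pcd = o3d.io.read_point_cloud("target_scan.ply")

# Down-sample to reduce the processing burden of a dense airborne scan.
pcd = pcd.voxel_down_sample(voxel_size=0.05)

# Estimate normals, which surface reconstruction requires.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30)
)

# Poisson reconstruction returns a triangle mesh and per-vertex densities.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8
)

# Save the mesh for later texture mapping or ATD/ATR experiments.
o3d.io.write_triangle_mesh("target_mesh.ply", mesh)
```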