Person detection is often critical for personal safety, property protection, and national security. Most person detection technologies implement unimodal classification, making predictions from a single sensor data modality, most often vision. There are many ways to defeat unimodal person detectors, and many more reasons to ensure that technologies responsible for detecting the presence of a person are accurate and precise. In this paper, we design and implement a multimodal person detection system that can acquire data from multiple sensors and detect persons based on a variety of unimodal classifications and multimodal fusions. We present two methods of generating system-level predictions: (1) device perspectives, in which a final decision is made from multiple device-level predictions, and (2) system perspectives, in which data samples from multiple devices are combined into a single data sample before a decision is made. Our experimental results show that system-level predictions from system perspectives are generally more accurate than system-level predictions from device perspectives. We achieve an accuracy of 100%, a zero false-positive rate, and a zero false-negative rate with fusion of system perspectives motion and distance data. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature.
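The distinction between the two fusion strategies can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function names, the majority-vote decision rule, and the threshold classifier are all assumptions introduced here for clarity.

```python
import numpy as np

def device_perspective(device_preds):
    # Decision-level fusion (assumed rule): each device classifies its own
    # data, and the system takes a majority vote over the binary predictions.
    return int(np.mean(device_preds) >= 0.5)

def system_perspective(device_samples, classify):
    # Data-level fusion: raw samples from all devices are concatenated into
    # a single feature vector, which is classified once at the system level.
    fused = np.concatenate(device_samples)
    return classify(fused)

# Toy usage with hypothetical motion and distance readings and a
# hypothetical mean-threshold classifier.
motion = np.array([0.9, 0.8])
distance = np.array([0.2, 0.1])
vote = device_perspective([1, 1, 0])
fused_pred = system_perspective([motion, distance],
                                lambda x: int(x.mean() > 0.4))
```

The intuition behind the paper's result is visible even in this sketch: the system-perspective classifier sees all sensor evidence jointly, while the device-perspective vote discards per-device confidence before fusing.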
