Transfer Learning using Computer Vision Models for Fall Detection from UWB Radars
- Authority: The 1st International Conference on Smart Mobility and Logistics Ecosystems (SMiLE)
- Category: Conference Proceeding
Detecting when a person falls is a substantial research challenge because falls can cause serious injuries, such as femoral neck fractures, brain hemorrhages, or burns, which may worsen over time and lead to complications or even death. Effective fall detection depends on promptly alerting caregivers, such as nurses, once a fall occurs. In this study, we present a technique for detecting falls within a 40-square-meter apartment using data collected from three ultra-wideband radars. Our approach applies transfer learning with pre-trained computer vision models (ResNet, VGG, and AlexNet) to fall detection, a binary classification task that distinguishes fall from non-fall events. To fine-tune the models, we use data covering various fall scenarios simulated by 10 participants at three locations within the apartment. We evaluate the technique using a leave-one-subject-out strategy. The results consistently show that the ResNet model outperforms the VGG and AlexNet models. Notably, our findings indicate an F1 score of approximately 95% in fall detection, suggesting promising prospects for real-world deployment.
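The leave-one-subject-out evaluation mentioned in the abstract can be sketched in plain Python. This is an illustrative sketch only: the subject identifiers and the split-generation helper below are assumptions for demonstration, not code from the paper.

```python
# Illustrative sketch of leave-one-subject-out (LOSO) evaluation:
# for each fold, one participant's data is held out for testing and
# the model is trained on the remaining participants.

def loso_splits(subject_ids):
    """Yield (train_subjects, test_subject) pairs, one fold per subject."""
    unique = sorted(set(subject_ids))
    for held_out in unique:
        train = [s for s in unique if s != held_out]
        yield train, held_out

# Hypothetical IDs for the 10 participants described in the study.
subjects = [f"P{i:02d}" for i in range(1, 11)]

for train, test in loso_splits(subjects):
    # In the actual pipeline, a pre-trained model (e.g., ResNet) would be
    # fine-tuned on the radar data of `train` and evaluated on `test`.
    print(f"train on {len(train)} subjects, test on {test}")
```

With 10 participants this produces 10 folds, each training on 9 subjects and testing on the held-out one, so every reported metric reflects generalization to an unseen person.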