Safety-aware Expansion for Neural Network Repair
K. Majd, G. Fainekos, and H. Ben Amor
Abstract:
Deep Neural Networks (DNNs) have revolutionized robotics by enhancing autonomous perception and interaction. However, their application in safety-critical scenarios is constrained by potentially unsafe behavior on unseen data and by the need to adapt to new safety requirements. Neural network repair methods aim to address these issues, with existing approaches focusing on weight modification or architecture extension. These methods, however, either lack safety guarantees, are computationally demanding, or degrade the network's original performance in the repaired regions. This paper proposes a novel repair method, inspired by the cascade-correlation algorithm, that expands the neural network to ensure safety while preserving its original performance. The method expands the network by introducing new hidden units into its last hidden layer. The added units are trained with gradient descent to maximize the covariance between their activations and the constraint-violation errors, so that the new neurons activate only in response to faulty samples and the network's original performance on correct samples is maintained. After this isolated training of the newly added neurons, the last layer of the network is fine-tuned with quadratic programming (QP), which ensures that the constraints are satisfied on the faulty samples. We showcase the effectiveness of our method in two applications: repairing a faulty classifier in an aircraft collision avoidance system and repairing a controller designed for a lower-leg prosthesis device.
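
As a rough illustration of the covariance-maximization step described above, the following sketch trains a single candidate unit on top of a frozen last hidden layer, in the spirit of cascade-correlation. All names and quantities (the activation matrix H, the per-sample violation_error, the learning rate, and the iteration count) are illustrative assumptions, not the paper's implementation, and the subsequent QP fine-tuning of the output layer is not shown.

    import numpy as np

    # Hypothetical setup: activations of the existing last hidden layer on the
    # repair dataset (H, shape n_samples x n_hidden) and a per-sample
    # constraint-violation magnitude (zero on correct samples, positive on
    # faulty ones). Both are synthetic placeholders here.
    rng = np.random.default_rng(0)
    n_samples, n_hidden = 200, 16
    H = rng.normal(size=(n_samples, n_hidden))
    violation_error = np.maximum(rng.normal(size=n_samples), 0.0)

    w = rng.normal(scale=0.1, size=n_hidden)   # incoming weights of the new unit
    b = 0.0                                    # bias of the new unit
    lr = 0.05

    for _ in range(500):
        z = H @ w + b
        v = np.tanh(z)                         # candidate unit activation

        # Center both signals and measure their covariance: the objective
        # encourages the unit to fire mainly on the faulty samples.
        v_c = v - v.mean()
        e_c = violation_error - violation_error.mean()
        cov = np.sum(v_c * e_c) / n_samples

        # Gradient ascent on |cov| w.r.t. w and b (tanh'(z) = 1 - tanh(z)^2);
        # the centering term drops out because e_c sums to zero.
        sign = np.sign(cov)
        dact = 1.0 - v ** 2
        grad_w = sign * (H * (e_c * dact)[:, None]).sum(axis=0) / n_samples
        grad_b = sign * np.sum(e_c * dact) / n_samples
        w += lr * grad_w
        b += lr * grad_b

    print(f"final |covariance|: {abs(cov):.4f}")

After this isolation step, the trained unit's incoming weights would be frozen, and only the output-layer weights would be adjusted by the QP-based fine-tuning to enforce the safety constraints on the faulty samples.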