My academic journey began at the University of Illinois Urbana-Champaign, where I received a B.S. in Applied Mathematics with a minor in Computer Engineering.
My research interests lie at the intersection of robotics, machine learning, and control theory, with an emphasis on safe autonomy.
My work focuses on developing safety and stability verification methodologies for robotic control systems amid model and environmental uncertainties.
My goal is to devise scalable and robust approaches for operating autonomous systems within highly dynamic and uncertain settings, while providing safety and stability guarantees.
Neural Configuration Distance Function for Continuum Robot Control
Kehan Long, Hardik Parwana, Georgios Fainekos, Bardh Hoxha, Hideki Okamoto, Nikolay Atanasov
submitted to IEEE International Conference on Robotics and Automation (ICRA), 2024
arxiv / code
We present a novel method for modeling the shape of a continuum robot as a Neural Configuration Euclidean Distance Function (N-CEDF). By learning separate distance fields for each link and combining them through the kinematic chain, the learned N-CEDF provides an accurate and computationally efficient representation of the robot’s shape. The key advantage of a distance function representation of a continuum robot is that it enables efficient collision checking for motion planning in dynamic and cluttered environments, even with point-cloud observations. We integrate the N-CEDF into a Model Predictive Path Integral (MPPI) controller to generate safe trajectories. The proposed approach is validated for continuum robots with various numbers of links in several simulated environments with static and dynamic obstacles.
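To make the idea concrete, here is a minimal sketch (not the paper's code) of how per-link distance fields can be composed through a kinematic chain and used as a collision cost inside an MPPI sampler. The `link_distance` and `forward_kinematics` functions and all parameter values are placeholders assumed for illustration.

```python
# Minimal sketch (not the paper's code): composing per-link distance fields
# through the kinematic chain and using them as an MPPI collision cost.
import numpy as np

def link_distance(points_link_frame):
    """Placeholder for one learned per-link distance field: distance from each
    point (expressed in the link frame) to the link surface."""
    # here: a cylinder of radius 0.05 around the local z-axis
    return np.linalg.norm(points_link_frame[:, :2], axis=1) - 0.05

def robot_distance(q, cloud, forward_kinematics, n_links):
    """Robot-to-point-cloud distance: minimum over links of the per-link field,
    with the cloud transformed into each link frame via the kinematic chain."""
    d = np.inf
    for i in range(n_links):
        R, p = forward_kinematics(q, i)          # pose of link i in the world
        pts_local = (cloud - p) @ R              # world frame -> link-i frame
        d = min(d, link_distance(pts_local).min())
    return d

def mppi_step(q0, cloud, forward_kinematics, n_links, horizon=20, samples=256,
              lam=1.0, sigma=0.1, d_safe=0.05, rng=np.random.default_rng(0)):
    """One MPPI update: sample control perturbations, penalize proximity to the
    point cloud along each rollout, return the importance-weighted first control.
    (Task/tracking costs and the real continuum dynamics are omitted.)"""
    nominal = np.zeros((horizon, q0.size))
    noise = rng.normal(scale=sigma, size=(samples, horizon, q0.size))
    costs = np.zeros(samples)
    for k in range(samples):
        q = q0.copy()
        for t in range(horizon):
            q = q + nominal[t] + noise[k, t]     # simple integrator rollout
            dist = robot_distance(q, cloud, forward_kinematics, n_links)
            costs[k] += 100.0 * max(0.0, d_safe - dist)
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return nominal[0] + np.einsum('k,kd->d', w, noise[:, 0])
```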
Sensor-based Distributionally Robust Control for Safe Robot Navigation in Dynamic Environments
Kehan Long, Yinzhuang Yi, Zhirui Dai, Sylvia Herbert, Jorge Cortés, Nikolay Atanasov
submitted to The International Journal of Robotics Research (IJRR), 2024
arxiv / code / website
We introduce a novel method for safe mobile robot navigation in dynamic, unknown environments, utilizing onboard sensing to impose safety constraints without the need for accurate map reconstruction. Traditional methods typically rely on detailed map information to synthesize safe stabilizing controls for mobile robots, which can be computationally demanding and less effective, particularly in dynamic operational conditions. By leveraging recent advances in distributionally robust optimization, we develop a distributionally robust control barrier function (DR-CBF) constraint that directly processes range sensor data to impose safety constraints. Coupling this with a control Lyapunov function (CLF) for path tracking, we demonstrate that our CLF-DR-CBF control synthesis method achieves safe, efficient, and robust navigation in uncertain dynamic environments. We validate the effectiveness of our approach in simulated and real autonomous robot navigation experiments, marking a substantial advancement in real-time safety guarantees for mobile robots.
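The synthesis step can be pictured as a small convex program solved at every control cycle. The sketch below assumes a planar single-integrator robot and replaces the distributionally robust tightening with a fixed margin `eps`, so it only illustrates the CLF-plus-robustified-CBF structure, not the paper's exact DR-CBF derivation.

```python
# Illustrative sketch: CLF tracking + robustified CBF safety as a small convex
# program (assumed single-integrator model; the fixed margin `eps` stands in
# for the distributionally robust tightening computed from sensor data).
import cvxpy as cp
import numpy as np

def clf_drcbf_control(x, x_goal, obstacle_points, u_max=1.0,
                      alpha=1.0, gamma=2.0, eps=0.1, radius=0.3):
    u = cp.Variable(2)
    slack = cp.Variable(nonneg=True)              # CLF slack: safety has priority

    V = float(np.dot(x - x_goal, x - x_goal))     # CLF for goal tracking
    gradV = 2.0 * (x - x_goal)
    constraints = [gradV @ u + gamma * V <= slack,
                   cp.norm(u, 2) <= u_max]

    for p in obstacle_points:                     # sensed points (world frame)
        h = float(np.dot(x - p, x - p)) - radius**2
        gradh = 2.0 * (x - p)
        # tightened CBF condition: grad(h) . u >= -alpha * h + eps
        constraints.append(gradh @ u >= -alpha * h + eps)

    cp.Problem(cp.Minimize(cp.sum_squares(u) + 10.0 * slack), constraints).solve()
    return u.value

# example: head toward the goal while staying clear of two sensed points
u = clf_drcbf_control(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                      [np.array([1.0, 0.2]), np.array([1.0, -0.2])])
```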
Distributionally Robust Policy and Lyapunov-Certificate Learning
Kehan Long, Jorge Cortés, Nikolay Atanasov
IEEE Open Journal of Control Systems (OJ-CSYS), 2024
arxiv / code
This article presents novel methods for synthesizing distributionally robust stabilizing neural controllers and certificates for control systems under model uncertainty. A key challenge in designing controllers with stability guarantees for uncertain systems is the accurate determination of and adaptation to shifts in model parametric uncertainty during online deployment. We tackle this with a novel distributionally robust formulation of the Lyapunov derivative chance constraint ensuring a monotonic decrease of the Lyapunov certificate. To avoid the computational complexity involved in dealing with the space of probability measures, we identify a sufficient condition in the form of deterministic convex constraints that ensures the Lyapunov derivative constraint is satisfied. We integrate this condition into a loss function for training a neural network-based controller and show that, for the resulting closed-loop system, the global asymptotic stability of its equilibrium can be certified with high confidence, even with Out-of-Distribution (OoD) model uncertainties. To demonstrate the efficacy and efficiency of the proposed methodology, we compare it with an uncertainty-agnostic baseline approach and several reinforcement learning approaches in two control problems in simulation.
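In spirit, the training objective penalizes violations of the Lyapunov decrease condition across sampled model parameters. The sketch below uses an assumed toy pendulum-like system and a fixed margin in place of the paper's deterministic convex reformulation of the distributionally robust constraint; network sizes and parameter values are illustrative.

```python
# Rough illustration of certificate/controller co-training (assumed toy
# dynamics; the fixed margin stands in for the distributionally robust
# tightening, which the paper expresses as a deterministic convex constraint).
import torch
import torch.nn as nn

V_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))   # certificate
pi_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))  # controller

def dynamics(x, u, theta):
    """Toy pendulum-like system; theta is an uncertain mass parameter."""
    q, dq = x[:, :1], x[:, 1:]
    return torch.cat([dq, (u - 9.81 * torch.sin(q)) / theta], dim=1)

def dr_lyapunov_loss(x, thetas, margin=0.1):
    """Penalize violations of V > 0 and of dV/dt <= -margin for every sampled
    model parameter (a full certificate also pins V(0) = 0)."""
    x = x.requires_grad_(True)
    V = V_net(x)
    gradV = torch.autograd.grad(V.sum(), x, create_graph=True)[0]
    u = pi_net(x)
    loss = torch.relu(1e-3 - V).mean()                  # encourage positivity
    for theta in thetas:                                # sampled uncertainties
        Vdot = (gradV * dynamics(x, u, theta)).sum(dim=1, keepdim=True)
        loss = loss + torch.relu(Vdot + margin).mean()  # decrease condition
    return loss

opt = torch.optim.Adam(list(V_net.parameters()) + list(pi_net.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    dr_lyapunov_loss(torch.randn(256, 2), thetas=[0.8, 1.0, 1.2]).backward()
    opt.step()
```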
Distributionally Robust Lyapunov Function Search Under Uncertainty
Kehan Long, Yinzhuang Yi, Jorge Cortés, Nikolay Atanasov
5th Learning for Dynamics & Control Conference (L4DC), 2023
arxiv / code
This paper devises methods for proving Lyapunov stability of dynamical systems subject to disturbances with an unknown distribution. We assume only a finite set of disturbance samples is available and that the true online disturbance realization may be drawn from a different distribution than the given samples. We formulate an optimization problem to search for a sum-of-squares (SOS) Lyapunov function and introduce a distributionally robust version of the Lyapunov function derivative constraint. We show that this constraint may be reformulated as several SOS constraints for polynomial systems. For general and higher-dimensional systems, we provide a distributionally robust chance-constrained formulation for neural network Lyapunov function search.
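As a rough illustration of the type of constraint involved (generic notation assumed here, not quoted from the paper): given disturbance samples \( \hat{w}_1, \dots, \hat{w}_N \), the distributionally robust decrease condition asks the Lyapunov derivative to be non-positive with high probability under every distribution close to the empirical one,

\[
\inf_{P \in \mathcal{M}_N} \; \mathbb{P}_{w \sim P}\!\left[ \nabla V(x)^\top f(x, w) \le 0 \right] \ge 1 - \beta,
\qquad
\mathcal{M}_N = \left\{ P : W_1\!\left(P, \hat{P}_N\right) \le r \right\},
\]

where \( \hat{P}_N \) is the empirical distribution of the samples, \( W_1 \) the Wasserstein distance, \( r \) the ambiguity radius, and \( \beta \) a risk tolerance. The paper shows how conditions of this kind can be enforced through SOS constraints for polynomial systems and through chance-constrained training for neural Lyapunov functions.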
Safe Control Synthesis With Uncertain Dynamics and Constraints
Kehan Long, Vikas Dhiman, Melvin Leok, Jorge Cortés, Nikolay Atanasov
IEEE Robotics and Automation Letters (RA-L), 2022
arxiv
This paper explores the synthesis of safe controls for dynamical systems with either probabilistic or worst-case uncertainty in both the dynamics model and the safety constraints (environments). We formulate novel probabilistic and robust (worst-case) control Lyapunov function (CLF) and control barrier function (CBF) constraints that take into account the effect of uncertainty in either case. We show that either the probabilistic or the robust (worst-case) formulation leads to a second-order cone program (SOCP), which enables efficient safe stabilizing control synthesis in real-time.
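For intuition on why the result is an SOCP (a generic argument with assumed notation, not the paper's exact derivation): when the CBF condition is affine in the control but its coefficients are only known through a mean \( \mu(x) \) and covariance \( \Sigma(x) \), requiring it to hold with probability at least \( 1 - \epsilon \) is implied by the mean term dominating a scaled norm of the deviation term,

\[
\mu(x)^\top \begin{bmatrix} 1 \\ u \end{bmatrix} + \alpha\big(h(x)\big)
\;\ge\;
c_\epsilon \left\| \Sigma(x)^{1/2} \begin{bmatrix} 1 \\ u \end{bmatrix} \right\|_2
\;\;\Longrightarrow\;\;
\Pr\!\left[ L_f h(x) + L_g h(x)\, u + \alpha\big(h(x)\big) \ge 0 \right] \ge 1 - \epsilon ,
\]

where \( c_\epsilon \) grows with the required confidence. The inequality on the left is a second-order cone constraint in \( u \), and treating the CLF condition analogously yields the SOCP.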
Learning Barrier Functions with Memory for Robust Safe Navigation
Kehan Long*, Cheng Qian*, Jorge Cortés, Nikolay Atanasov
IEEE Robotics and Automation Letters (RA-L), 2021
arxiv
This paper investigates safe navigation in unknown environments, using on-board range sensing to construct control barrier functions online. To represent different objects in the environment, we use the distance measurements to train neural network approximations of the signed distance functions incrementally with replay memory. This allows us to formulate a novel robust control barrier safety constraint that takes into account the errors in the estimated distance fields and their gradients. Our formulation leads to a second-order cone program, enabling safe and stable control synthesis in a priori unknown environments.
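A simplified sketch of the online mapping component is shown below (assumed network architecture and sampling scheme, not the paper's implementation): new range returns are labeled with distances and mixed with replayed past samples, so the signed distance estimate of previously seen objects is retained.

```python
# Simplified sketch of the online mapping step (assumed network and sampling):
# fit a signed distance network to range measurements incrementally, replaying
# past samples so earlier parts of the environment are not forgotten.
import random
import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64),
                    nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(sdf.parameters(), lr=1e-3)
replay = []                                       # (point, signed distance) pairs

def update_sdf(surface_points, labeled_free_points, replay_size=2048, iters=50):
    """One online update: label new lidar returns (0 on the surface, positive
    distances along the ray), mix them with replayed samples, take SGD steps."""
    replay.extend([(p, 0.0) for p in surface_points])
    replay.extend(labeled_free_points)            # pairs (point, distance)
    del replay[:-replay_size]                     # bounded replay memory
    for _ in range(iters):
        batch = random.sample(replay, min(256, len(replay)))
        x = torch.tensor([list(p) for p, _ in batch], dtype=torch.float32)
        y = torch.tensor([[d] for _, d in batch], dtype=torch.float32)
        opt.zero_grad()
        nn.functional.mse_loss(sdf(x), y).backward()
        opt.step()

def robust_margin(dist_error=0.05, grad_error=0.1, u_norm=1.0):
    """Schematic only: bounds on the learned distance and gradient errors enter
    the CBF condition as an extra margin (not the paper's exact SOCP bound)."""
    return dist_error + grad_error * u_norm
```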
Contact
You are very welcome to contact me regarding my research and potential collaborations. I can be reached directly at kehan.lkh@gmail.com.