Ruixuan Liu

I am a PhD student in the Robotics Institute, part of the School of Computer Science at Carnegie Mellon University, advised by Prof. Changliu Liu (Intelligent Control Lab). My research focuses on robot learning and control, generative manufacturing, and human-robot collaboration.

I received my Bachelor’s degree in Electrical and Computer Engineering with a minor in Robotics from Carnegie Mellon University. During my undergrad, I worked in the AirLab led by Prof. Sebastian Scherer. My work focused on sensor fusion and 3D reconstruction for building structure inspection.

Email  /  GitHub  /  Google Scholar  /  CV  /  LinkedIn


Highlights

  • 2024/11

    Our work on generative manufacturing and robotic assembly is featured on Modern Machine Shop News.
  • 2024/09

    Finished my internship as an Applied Scientist at Amazon Robotics.
  • 2024/05

    Two papers accepted to RA-L.
  • 2024/04

    One paper accepted to 2024 International Symposium on Flexible Automation (ISFA).
  • 2024/04

    One paper accepted to Transactions on Machine Learning Research (TMLR).
  • 2024/01

    One paper accepted to ACC 2024.
  • 2023/10

    Our work on robotic Lego construction is featured on CMU News.
  • 2023/09

    One paper accepted to IROS'23 Workshop on Formal Methods Techniques in Robotics Systems: Design and Control.
  • 2023/05

    One paper accepted to ACC'23 Workshop on Recent Advancement of Human Autonomy Interaction and Integration.
  • 2023/05

    One paper accepted to CCTA 2023.
  • 2023/03

    One paper accepted to CDC 2023.
  • 2022/12

    Demonstrated our work on proactive human-robot co-assembly to Ford. [CMU News]
  • 2022/12

    Our paper at CPHS 2022 won the Best Student Paper award.
  • 2022/08

    Finished my internship as a Research Scientist at Siemens Berkeley.
  • 2022/08

    One paper accepted to IFAC CPHS 2022.
  • 2022/07

    One paper accepted to RSS'22 Workshop on Close-Proximity Human-Robot Collaboration.
  • 2022/05

    One paper accepted to AIM 2022.
  • 2022/04

    One paper accepted to 2022 International Symposium on Flexible Automation (ISFA).
  • 2022/01

    Demonstrated our work on safe HRC with interactive industrial robots to President Biden at Mill 19, Pittsburgh. [President Biden's Twitter] [CMU News] [Lab Website]
  • 2021/07

    One paper accepted to ICML'21 Workshop on Human in the Loop Learning.
  • 2020/12

    One paper jointly accepted to ACC 2021 and IEEE Control Systems Letters.



Research

I'm broadly interested in robotics, manipulation, learning and control, generative manufacturing, and human-robot collaboration. My research projects are listed below.


Physics-Aware Combinatorial Assembly Sequence Planning using Data-free Action Masking


Ruixuan Liu, Alan Chen, Weiye Zhao, Changliu Liu
In part supported by CMU MFI
paper / code /

This paper formulates combinatorial assembly sequence planning as a reinforcement learning problem. In particular, it proposes an optimization-based, physics-aware action mask to address the challenges arising from the sim-to-real gap and the combinatorial nature of the problem. The proposed method effectively guides assembly policy learning and ensures violation-free deployment by planning physically executable assembly sequences to construct goal objects.


StableLego: Stability Analysis of Block Stacking Assembly


Ruixuan Liu, Kangle Deng, Ziwei Wang, Changliu Liu
IEEE Robotics and Automation Letters (RA-L), 2024
In part supported by CMU MFI
paper / code /

This paper proposes a new optimization formulation, which optimizes over force-balancing equations, to infer the structural stability of block-stacking assemblies. In addition, we provide StableLego, a comprehensive Lego assembly dataset covering a wide variety of Lego designs of real-world objects. The dataset contains more than 50k Lego structures built from standardized Lego bricks of different dimensions, along with their stability inferences generated by the proposed algorithm.


Robotic Planning under Hierarchical Temporal Logic Specifications


Xusheng Luo, Shaojun Xu, Ruixuan Liu, Changliu Liu
IEEE Robotics and Automation Letters (RA-L), 2024
In part supported by NSF
paper / youtube /

This paper proposes a decomposition-based method to address tasks under a hierarchical temporal logic structure. Each specification is first broken down into a range of temporally interrelated sub-tasks. We further mine the temporal relations among the sub-tasks of different specifications within the hierarchy. Subsequently, a Mixed Integer Linear Program is used to generate a spatio-temporal plan for each robot.


A Lightweight and Transferable Design for Robust LEGO Manipulation


Ruixuan Liu, Yifan Sun, Changliu Liu
International Symposium on Flexible Automation (ISFA), 2024
In part supported by Siemens, CMU MFI
paper / youtube /

This paper investigates safe and efficient robotic LEGO manipulation. In particular, it reduces the complexity of the manipulation through hardware-software co-design. An end-of-arm tool (EOAT) is designed, which reduces the problem dimension and allows large industrial robots to easily manipulate LEGO bricks. In addition, the paper uses an evolution strategy to safely optimize the robot motion for LEGO manipulation.


GUARD: A Safe Reinforcement Learning Benchmark


Weiye Zhao, Rui Chen, Yifan Sun, Ruixuan Liu, Tianhao Wei, Changliu Liu
Transactions on Machine Learning Research (TMLR), 2024
In part supported by NSF
paper / code /

This paper introduces GUARD, a Generalized Unified SAfe Reinforcement Learning Development Benchmark. GUARD has several advantages compared to existing benchmarks. First, GUARD is a generalized benchmark with a wide variety of RL agents, tasks, and safety constraint specifications. Second, GUARD comprehensively covers state-of-the-art safe RL algorithms with self-contained implementations. Third, GUARD is highly customizable in tasks and algorithms.


Simulation-aided Learning from Demonstration for Robotic LEGO Construction


Ruixuan Liu, Alan Chen, Xusheng Luo, Changliu Liu
In part supported by Siemens, CMU MFI
paper / youtube /

This paper presents a simulation-aided learning from demonstration (SaLfD) framework for easily deploying LEGO prototyping capability to robots. In particular, the user demonstrates the construction of a customized novel LEGO object. The robot extracts the task information by observing the human operation and generates the construction plan. A simulation is developed to verify the correctness of the learned construction plan and the resulting LEGO prototype.


Robotic LEGO Assembly and Disassembly from Human Demonstration


Ruixuan Liu, Yifan Sun, Changliu Liu
ACC Workshop on Recent Advancement of Human Autonomy Interaction and Integration, 2023
In part supported by Siemens, CMU MFI
paper / youtube /

This paper studies automatic prototyping using LEGO. To satisfy individual needs and self-sustainability, it presents a framework that learns assembly and disassembly sequences from human demonstrations.


Proactive Human-Robot Co-Assembly: Leveraging Human Intention Prediction and Robust Safe Control


Ruixuan Liu, Rui Chen, Abulikemu Abuduweili, Changliu Liu
IEEE Conference on Control Technology and Applications (CCTA), 2023
In part supported by Ford
paper / website / news /

This paper presents an integrated framework for proactive HRC. A robust intention prediction module, which leverages prior task information and human-in-the-loop training, is learned to guide the robot for efficient collaboration. The proposed framework also uses robust safe control to ensure interactive safety under uncertainty. The developed framework is applied to a co-assembly task using a Kinova Gen3 robot.


Zero-shot Transferable and Persistently Feasible Safe Control for High Dimensional Systems by Consistent Abstraction


Tianhao Wei, Shucheng Kang, Ruixuan Liu, Changliu Liu
IEEE Conference on Decision and Control (CDC), 2023
In part supported by NSF
paper /

This paper proposes a system abstraction method that enables the design of energy functions on a low-dimensional model. Then we can synthesize the energy function with respect to the low-dimensional model to ensure persistent feasibility. The resulting safe controller can be directly transferred to other systems with the same abstraction, e.g., when a robot arm holds different tools.


Task-Agnostic Adaptation for Safe Human-Robot Handover


Ruixuan Liu, Rui Chen, Changliu Liu
IFAC Workshop on Cyber-Physical and Human Systems (CPHS), 2022
Best Student Paper
In part supported by Siemens
paper / video / website /

This paper proposes a task-agnostic adaptable controller that can (1) adapt to different lighting conditions, (2) adapt to individual behaviors and ensure safety when interacting with different humans, and (3) enable easy transfer across robot platforms with different control interfaces.


Safe Interactive Industrial Robots using Jerk-based Safe Set Algorithm


Ruixuan Liu, Rui Chen, Changliu Liu
International Symposium on Flexible Automation (ISFA), 2022
In part supported by Siemens, Ford
paper / website / youtube /

This paper introduces a jerk-based safe set algorithm (JSSA) to ensure collision avoidance while considering the robot's dynamic constraints. JSSA greatly extends the scope of the original safe set algorithm, which had only been applied to second-order systems with unbounded accelerations. JSSA is implemented on the FANUC LR Mate 200id/7L robot and validated on HRI tasks.


Jerk-bounded Position Controller with Real-Time Task Modification for Interactive Industrial Robots


Ruixuan Liu, Rui Chen, Yifan Sun, Yu Zhao, Changliu Liu
IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 2022
In part supported by Siemens
paper / website / youtube /

This paper presents a jerk-bounded position control driver (JPC) for industrial robots. JPC provides a unified interface for tracking complex trajectories and is able to enforce dynamic constraints using motion-level control, without accessing servo-level control. Most importantly, JPC enables real-time trajectory modification. Users can overwrite the ongoing task with a new one without violating dynamic constraints.


Iterative Adversarial Data Augmentation


Ruixuan Liu, Changliu Liu
ICML Workshop on Human in the Loop Learning, 2021
In part supported by Ford
paper /

This paper proposes an iterative adversarial data augmentation (IADA) framework to learn neural network models from an insufficient amount of training data. The method uses formal verification to identify the most “confusing” input samples, and leverages human guidance to safely and iteratively augment the training data with these samples.


Human Motion Prediction using Adaptable Recurrent Neural Networks and Inverse Kinematics


Ruixuan Liu, Changliu Liu
IEEE Control Systems Letters, 2021
In part supported by Ford
paper / website / youtube /

This project focuses on predicting human arm motion for assembly tasks. In particular, online adaptation techniques are used to reduce prediction error due to model mismatch and data scarcity. In addition, a physical arm model is adopted to generate physically feasible motion prediction.


Automatic Onsite Polishing


Ruixuan Liu, Weiye Zhao, Suqin He, Changliu Liu
In collaboration with Siemens, Yaskawa Motoman
In part supported by ARM Institute
website / youtube /

Conventional weld bead removal is manually performed by human workers, which is time-consuming, expensive, and more importantly, dangerous, especially in a confined environment, such as inside a metal tube. This project develops a robotic solution for polishing and grinding complex surfaces in a confined workspace to alleviate the cost and manual effort.




Award

  • 2022/12

    Best Student Paper Award at CPHS 2022.
  • 2019/05

    University Honors.
  • Fall 2017-Spring 2018

    Dean's List of Engineering.
  • Fall 2015-Spring 2016

    Dean's List of Engineering.

Service

Reviewer for:
  • Journals:

    IEEE L-CSS, IEEE RA-L, ACM TOMM.
  • Conferences:

    ICRA, IROS, ACC, ISFA.


Teaching

Teaching assistant for 16-720: Computer Vision. Instructor: Prof. Srinivasa G. Narasimhan.

Teaching assistant for 16-811: Math Fundamentals for Robotics. Instructor: Prof. Michael Erdmann.

Teaching assistant for 18-202: Mathematical Foundations of Electrical Engineering. Instructor: Prof. José Moura.


Other Projects

These include coursework and side projects.


Lidar Obstacle Detection


Ruixuan Liu
Internship at Zenuity
youtube /

This was my intern project at Zenuity in the summer of 2019. The team was trying to build a mobile robot that could maneuver around the office autonomously. There were many modules to implement, including path planning, motion control, and safety. I worked on the Lidar perception module; the goal was to use the Lidar to detect obstacles and identify the free area traversable by the robot.


Basketball Retriever


Ruixuan Liu, Alvin Shi, Roy Li
ECE Capstone: 18-500
youtube /

This was my capstone project during undergrad. The idea was to build a collaborative mobile robot that helps a basketball player during practice. When a player practices shooting, a missed shot bounces to random places, and getting the ball back is frustrating and time-consuming. In this project, we built a mobile robot that retrieves the ball whenever it bounces away. We would like to thank Prof. Tamal Mukherjee for his guidance and support.


Real-time Object Tracking


Ruixuan Liu, Alvin Shi
Computer Vision: 16-720
youtube /

This was a course project for 16-720: Computer Vision. The goal was to build a pipeline that could robustly track a desired object in real time.


3D Reconstruction with Thermal Texture


Ruixuan Liu, Henry Zhang, Yaoyu Hu
Internship at the AirLab
website /

This was a research project I did during my internship in the AirLab led by Prof. Sebastian Scherer. The goal was to enable building inspection from 3D reconstruction. We added thermal texture to the 3D reconstruction to make the inspection more reliable.


Methods of Thermal Camera Calibration


Ruixuan Liu, Henry Zhang, Sebastian Scherer
Internship at the AirLab
paper / website /

This was a research project I did during my internship in the AirLab led by Prof. Sebastian Scherer.


Line Following Tank


Ruixuan Liu, Henry Zhang, Alvin Shi
Third Place
CMU Mobot Competition
website / youtube /

This was a project for the CMU Mobot competition in 2018, an annual event held by the School of Computer Science. The goal is to build a mobile robot that follows a track in front of Wean Hall. Small gates representing checkpoints are placed along the track, and the robot must pass through them, so it has to strictly follow the line instead of heading directly to the goal point. We placed third.


Gesture Drone


Ruixuan Liu, Henry Zhang, Alvin Shi
CMU Build18 Hackathon
website /

This was a project for the CMU Build18 hackathon in 2018. The idea was to use simple, intuitive hand gestures to control a drone. We built a drone from scratch and modified the onboard controller to take customized hand gesture input.


Smartphone Controlled Drone


Chris Yin, Ruixuan Liu
Internship at HIT
youtube /

This was a project I did during my internship at HIT, in the lab led by Prof. Xiaorui Zhu. The goal was to achieve stable UAV control based on smartphone motion sensing, making the control intuitive. The project was a collaboration with Chris Yin.


Design and source code from Jon Barron's website