2023
@article{interaction2023zhang,
title = {Interaction Control for Tool Manipulation on Deformable Objects Using Tactile Feedback},
author = {Hanwen Zhang and Zeyu Lu and Wenyu Liang and Haoyong Yu and Yao Mao and Yan Wu},
doi = {10.1109/LRA.2023.3257680},
issn = {2377-3766},
year = {2023},
date = {2023-03-15},
journal = {IEEE Robotics and Automation Letters},
volume = {8},
issue = {5},
pages = {2700-2707},
abstract = {The human sense of touch enables us to perform delicate tasks on deformable objects and/or in a vision-denied environment. To achieve similar desirable interactions for robots, such as administering a swab test, tactile information sensed beyond the tool-in-hand is crucial for contact state estimation and contact force control. In this letter, a tactile-guided planning and control framework using GTac, a heteroGeneous Tactile sensor tailored for interaction with deformable objects beyond the immediate contact area, is proposed. The biomimetic GTac in use is an improved version optimized for readout linearity, which provides reliability in contact state estimation and force tracking. A tactile-based classification and manipulation process is designed to estimate and align the contact angle between the tool and the environment. Moreover, a Koopman operator-based optimal control scheme is proposed to address the challenges in nonlinear control arising from the interaction with the deformable object. Finally, several experiments are conducted to verify the effectiveness of the proposed framework. The experimental results demonstrate that the proposed framework can accurately estimate the contact angle as well as achieve excellent tracking performance and strong robustness in force control.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{tian2023reinforcement,
title = {Reinforcement learning under temporal logic constraints as a sequence modeling problem},
author = {Daiying Tian and Hao Fang and Qingkai Yang and Haoyong Yu and Wenyu Liang and Yan Wu},
doi = {10.1016/j.robot.2022.104351},
issn = {0921-8890},
year = {2023},
date = {2023-03-01},
urldate = {2023-01-10},
journal = {Robotics and Autonomous Systems},
volume = {161},
pages = {104351},
abstract = {Reinforcement learning (RL) under temporal logic typically suffers from slow propagation for credit assignment. Inspired by recent advancements in machine learning known as the trajectory transformer, reinforcement learning under Temporal Logic (TL) is modeled as a sequence modeling problem in this paper, where an agent utilizes the transformer to fit the optimal policy satisfying Finite Linear Temporal Logic (LTLf) tasks. To combat the sparse reward issue, dense reward functions for LTLf are designed. To reduce the computational complexity, a sparse transformer with local and global attention is constructed to automatically conduct credit assignment, which removes the time-consuming value iteration process. The optimal action is found by beam search performed in the transformer. The proposed method generates a series of policies fitted by sparse transformers, which have consistently high accuracy in fitting the demonstrations. Finally, the effectiveness of the proposed method is demonstrated by simulations in Mini-Grid environments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{robust2023wang,
title = {Robust Position Control of a Continuum Manipulator Based on Selective Approach and Koopman Operator},
author = {Haodong Wang and Wenyu Liang and Boyuan Liang and Hongliang Ren and Zhijiang Du and Yan Wu},
doi = {10.1109/TIE.2023.3236082},
issn = {0278-0046},
year = {2023},
date = {2023-01-17},
journal = {IEEE Transactions on Industrial Electronics},
abstract = {Continuum manipulators have infinite degrees of freedom and high flexibility, making accurate modeling and control challenging. Common modeling methods include the mechanical modeling strategy, the neural network strategy, the constant curvature assumption, etc. However, the inverse kinematics of the mechanical modeling strategy is difficult to obtain, while a strategy using neural networks may not converge in some applications. For algorithm implementation, the constant curvature assumption is used as the basis to design the controller. When the driving wire is tight, the linear controller under the constant curvature assumption works well in manipulator position control. However, this assumption of linearity between the deformation angle and the driving input value breaks down upon repeated use of the driving wires, which inevitably lengthen. This degrades the accuracy of the controller. In this work, Koopman theory is employed to identify the nonlinear model of the continuum manipulator. Under the linearized model, the control input is obtained through model predictive control (MPC). As the lifted function can affect the effectiveness of the Koopman operator-based MPC (K-MPC), a novel design method for the lifted function based on the Legendre polynomial is proposed. To attain higher control efficiency and computational accuracy, a selective control scheme according to the state of the driving wires is proposed. When the driving wire is tight, the linear controller is employed; otherwise, the K-MPC is adopted. Finally, a set of static and dynamic experiments has been conducted using an experimental prototype. The results demonstrate the high effectiveness and good performance of the selective control scheme.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{visuo2023liang,
title = {Visuo-Tactile Feedback-Based Robot Manipulation for Object Packing},
author = {Wenyu Liang and Fen Fang and Cihan Acar and Wei Qi Toh and Ying Sun and Qianli Xu and Yan Wu},
doi = {10.1109/LRA.2023.3236884},
issn = {2377-3766},
year = {2023},
date = {2023-01-13},
journal = {IEEE Robotics and Automation Letters},
volume = {8},
issue = {2},
pages = {1151-1158},
abstract = {Robots are increasingly expected to manipulate objects whose properties have high perceptual uncertainty from any single sensory modality. This directly impacts successful object manipulation. Object packing is one of the challenging tasks in robot manipulation. In this work, a new visuo-tactile feedback-based manipulation planning framework for object packing is proposed, which makes use of on-the-fly multisensory feedback and an attention-guided deep affordance model as perceptual states, as well as a deep reinforcement learning (DRL) pipeline. Significantly, multiple sensory modalities, vision and touch [tactile and force/torque (F/T)], are employed in predicting and indicating the manipulable regions of multiple affordances (i.e., graspability and pushability) for objects with similar appearances but different intrinsic properties (e.g., mass distribution). To improve manipulation efficiency, the DRL algorithm is trained to select the optimal actions for successful object manipulation. The proposed method is evaluated on both an open dataset and our collected dataset and demonstrated in the use case of the object packing task. The results show that the proposed method outperforms existing methods and achieves better accuracy with much higher efficiency.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2022
@inproceedings{liang2022tactile,
title = {Tactile-Guided Dynamic Object Planar Manipulation},
author = {Boyuan Liang and Wenyu Liang and Yan Wu},
doi = {10.1109/IROS47612.2022.9981270},
isbn = {978-1-6654-7927-1},
year = {2022},
date = {2022-10-23},
urldate = {2022-10-31},
booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages = {3203-3209},
publisher = {IEEE},
abstract = {Planar pushing is a fundamental robot manipulation task, with most algorithms built upon the quasi-static assumption. Under this assumption, the end-effector should apply force on the pushed object along the full moving trajectory. This means that the target position must lie in the robot's workspace. To enable a robot to deliver objects outside of its workspace and facilitate faster delivery, the quasi-static assumption should be lifted in favour of dynamic manipulation. In this work, we propose a two-stage data-driven manipulation method to hit an unknown object so that it reaches a target position. This expands the reachability of the manipulated object beyond the robot's workspace. The robot, equipped with a tactile sensor, first explores for the stable pushing region (SPR) on the given object by using gain-scheduling PD control with the contact centre estimated, to maintain full contact between the object and the end-effector. In the second stage, a learning-based approach is used to generate the impulse the object should receive at the SPR to reach a target sliding distance. The performance of the proposed method is evaluated on a KUKA LBR iiwa 14 R820 robot manipulator and a XELA tactile sensor.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@article{tian2022two,
title = {Two-Phase Motion Planning under Signal Temporal Logic Specifications in Partially Unknown Environments},
author = {Daiying Tian and Hao Fang and Qingkai Yang and Zixuan Guo and Jinqiang Cui and Wenyu Liang and Yan Wu},
doi = {10.1109/TIE.2022.3203752},
issn = {0278-0046},
year = {2022},
date = {2022-09-09},
urldate = {2022-09-09},
journal = {IEEE Transactions on Industrial Electronics},
abstract = {This paper studies the planning problem for a robot residing in partially unknown environments under signal temporal logic (STL) specifications, whereas most existing planning methods using STL rely on a fully known environment. In many practical scenarios, however, robots do not have prior information about all obstacles. In this paper, a novel two-phase planning method, i.e., offline exploration followed by online planning, is proposed to efficiently synthesize paths that satisfy STL tasks. In the offline exploration phase, a Rapidly Exploring Random Tree* (RRT*) is grown from task regions under the guidance of timed transducers, which guarantees that the resultant paths satisfy the task specifications. In the online phase, the path with minimum cost in the RRT* is determined once an initial configuration is assigned. This path is then set as the reference for the time elastic band algorithm, which modifies the path until it has no collisions with obstacles. It is shown that the online computational burden is reduced and collisions with unknown obstacles are avoided by using the proposed planning framework. The effectiveness and superiority of the proposed method are demonstrated in simulations and real-world experiments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{fang2022self,
title = {Self-Supervised Reinforcement Learning for Active Object Detection},
author = {Fen Fang and Wenyu Liang and Yan Wu and Qianli Xu and Joo Hwee Lim},
doi = {10.1109/LRA.2022.3193019},
issn = {2377-3766},
year = {2022},
date = {2022-07-21},
urldate = {2022-10-31},
booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
volume = {7},
number = {4},
pages = {10224-10231},
publisher = {IEEE},
abstract = {Active object detection (AOD) offers a significant advantage in expanding the perceptual capacity of a robotic system. AOD is formulated as a sequential action decision process to determine optimal viewpoints to identify objects of interest in a visual scene. While reinforcement learning (RL) has been successfully used to solve many AOD problems, conventional RL methods suffer from (i) sample inefficiency, and (ii) unstable outcomes due to the interdependencies of action type (direction of view change) and action range (step size of view change). To address these issues, we propose a novel self-supervised RL method, which employs self-supervised representations of viewpoints to initialize the policy network, and a self-supervised loss on action range to enhance the optimization of the network parameters. The output and target pairs for the self-supervised loss are automatically generated from the policy network's online predictions and a range shrinkage algorithm (RSA), respectively. The proposed method is evaluated and benchmarked on two public datasets (T-LESS and AVD) using on-policy and off-policy RL algorithms. The results show that our method enhances detection accuracy and achieves faster convergence on both datasets. By evaluating on a more complex environment with a larger state space (where viewpoints are more densely sampled), our method achieves more robust and stable performance. Our experiment on a real robot application scenario to disambiguate similar objects in a cluttered scene has also demonstrated the effectiveness of the proposed method.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2021
@article{liang2021parameterized,
title = {Parameterized Particle Filtering for Tactile-based Simultaneous Pose and Shape Estimation},
author = {Boyuan Liang and Wenyu Liang and Yan Wu},
url = {https://ieeexplore.ieee.org/document/9667178},
doi = {10.1109/LRA.2021.3139381},
issn = {2377-3766},
year = {2021},
date = {2021-12-31},
urldate = {2021-12-31},
journal = {IEEE Robotics and Automation Letters (RA-L)},
volume = {7},
number = {2},
pages = {1270-1277},
abstract = {Object state and shape estimation is essential in many robotic manipulation tasks (e.g., in-hand manipulation, insertion). While such estimation typically relies on visual perception, for tasks to be carried out in a vision-degraded or vision-denied environment, haptics becomes the reliable source of perception. In this work, we propose the use of parameterized particle filtering to estimate object pose and shape in 3D space using tactile feedback. This approach is able to estimate with high accuracy, starting from a rough initial estimation, using contact information between the object and a collision surface. In comparison to conventional particle filtering, this approach significantly reduces the number of particles required for a satisfactory estimation, making it applicable to pose and shape estimation where the number of degrees of freedom is high or even uncertain. Moreover, the proposed method can automatically choose the fastest-converging contact action during the pose estimation stage to shorten the time required. A set of experiments both in simulation and on a real-world robot has been conducted to validate the proposed method and compare against the state-of-the-art approach in the literature. Results from both sets of experiments show that the proposed method can determine the pose and shape of the objects with very high accuracy within a small number of iterations.},
note = {also accepted by ICRA 2022},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@conference{towards2021gauthier,
title = {Towards a Programming-Free Robotic System for Assembly Tasks Using Intuitive Interactions},
author = {Nicolas Gauthier and Wenyu Liang and Qianli Xu and Fen Fang and Liyuan Li and Ruihan Gao and Yan Wu and Joo Hwee Lim},
editor = {Haizhou Li and Shuzhi Sam Ge and Yan Wu and Agnieszka Wykowska and Hongsheng He and Xiaorui Liu and Dongyu Li and Jairo Perez-Osorio},
url = {https://link.springer.com/chapter/10.1007/978-3-030-90525-5_18
http://yan-wu.com/wp-content/uploads/2021/11/gauthier2021towards.pdf},
doi = {10.1007/978-3-030-90525-5_18},
isbn = {978-3-030-90525-5},
year = {2021},
date = {2021-11-02},
booktitle = {The 13th International Conference on Social Robotics (ICSR 2021)},
volume = {13086},
pages = {203-215},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
abstract = {Although industrial robots are successfully deployed in many assembly processes, high-mix, low-volume applications are still difficult to automate, as they involve small batches of frequently changing parts. Setting up a robotic system for these tasks requires repeated re-programming by expert users, incurring extra time and costs. In this paper, we present a solution which enables a robot to learn new objects and new tasks from non-expert users without the need for programming. The use case presented here is the assembly of a gearbox mechanism. In the proposed solution, first, the robot can autonomously register new objects using a visual exploration routine, and train a deep learning model for object detection accordingly. Secondly, the user can teach new tasks to the system via visual demonstration in a natural manner. Finally, using multimodal perception from RGB-D (color and depth) cameras and a tactile sensor, the robot can execute the taught tasks with adaptation to changing configurations. Depending on the task requirements, it can also activate human-robot collaboration capabilities. In summary, these three main modules enable any non-expert user to configure a robot for new applications in a fast and intuitive way.},
note = {Best Presentation Award},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@inproceedings{xu2021efficient,
title = {Towards Efficient Multiview Object Detection with Adaptive Action Prediction},
author = {Qianli Xu and Fen Fang and Nicolas Gauthier and Wenyu Liang and Yan Wu and Liyuan Li and Joo Hwee Lim },
url = {https://ieeexplore.ieee.org/document/9561388},
doi = {10.1109/ICRA48506.2021.9561388},
isbn = {978-1-7281-9077-8},
year = {2021},
date = {2021-05-31},
booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
publisher = {IEEE},
abstract = {Active vision is a desirable perceptual feature for robots. Existing approaches usually make strong assumptions about the task and environment, and are thus less robust and efficient. This study proposes an adaptive view planning approach to boost the efficiency and robustness of active object detection. We formulate the multi-object detection task as an active multiview object detection problem given the initial locations of the objects. Next, we propose a novel adaptive action prediction (A2P) method built on a deep Q-learning network with a dueling architecture. The A2P method is able to perform view planning based on visual information of multiple objects and adjust action ranges according to the task status. Evaluated on the AVD dataset, A2P leads to a 21.9% increase in detection accuracy in unfamiliar environments, while improving efficiency by 22.7%. On the T-LESS dataset, multi-object detection boosts efficiency by more than 30% while achieving equivalent detection accuracy.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{liang2021dexterous,
title = {Dexterous Manoeuvre through Touch in a Cluttered Scene},
author = {Wenyu Liang and Qinyuan Ren and Xiaoqiao Chen and Junli Gao and Yan Wu},
url = {http://yan-wu.com/wp-content/uploads/2021/03/liang2021dexterous.pdf
https://ieeexplore.ieee.org/document/9562061},
doi = {10.1109/ICRA48506.2021.9562061},
isbn = {978-1-7281-9077-8},
year = {2021},
date = {2021-05-31},
booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
publisher = {IEEE},
abstract = {Manipulation in a densely cluttered environment creates complex challenges in perception to close the control loop, many of which are due to the sophisticated physical interaction between the environment and the manipulator. Drawing from biological sensory-motor control, tactile sensing can be used to handle tasks in such a scenario by providing an additional dimension of rich contact information from the interaction for decision making and action selection to manoeuvre towards a target. In this paper, a new bioinspired tactile-based motion planning and control framework is proposed and developed for a robot manipulator to manoeuvre in a cluttered environment. An iterative two-stage machine learning approach is used in this framework: an autoencoder is used to extract important cues from tactile sensory readings, while a reinforcement learning technique is used to generate an optimal motion sequence to efficiently reach the given target. The framework is implemented on a KUKA LBR iiwa robot mounted with a SynTouch BioTac tactile sensor and tested in real-life experiments. The results show that the system is able to move the end-effector through the cluttered environment to reach the target effectively.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{xu2021tailor,
title = {TAILOR: Teaching with Active and Incremental Learning for Object Registration},
author = {Qianli Xu and Nicolas Gauthier and Wenyu Liang and Fen Fang and Hui Li Tan and Ying Sun and Yan Wu and Liyuan Li and Joo Hwee Lim},
url = {http://yan-wu.com/wp-content/uploads/2021/03/xu2021tailor.pdf
http://yan-wu.com/wp-content/uploads/2021/03/xu2021tailor_poster.pdf
https://twitter.com/RealAAAI/status/1364017094086389760
https://ojs.aaai.org/index.php/AAAI/article/view/18031},
year = {2021},
date = {2021-05-01},
booktitle = {Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI)},
volume = {35},
number = {18},
pages = {16120-16123},
publisher = {AAAI},
abstract = {When a robot is deployed to a new task, it often needs to be trained to detect novel objects. Using a deep learning based detector, one has to collect and annotate a large number of images of the novel objects for training, which is labor intensive, time consuming and lacks scalability. We present TAILOR - a method and system for object registration with active and incremental learning. When instructed by a human teacher to register an object, TAILOR is able to automatically select viewpoints to capture informative images through active exploration, and employ a fast incremental learning algorithm to learn new objects without forgetting previously learned objects. We demonstrate the effectiveness of our method with a KUKA robot learning novel objects used in a real-world gearbox assembly task through natural interactions.},
note = {AAAI'21 Best Demo Award},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2020
@inproceedings{liang2020robust,
title = {Robust Force Tracking Impedance Control of an Ultrasonic Motor-actuated End-effector in a Soft Environment},
author = {Wenyu Liang and Zhao Feng and Yan Wu and Junli Gao and Qinyuan Ren and Tong Heng Lee},
url = {http://yan-wu.com/wp-content/uploads/2020/08/liang2020robust.pdf
https://ieeexplore.ieee.org/document/9340717},
doi = {10.1109/IROS45743.2020.9340717},
year = {2020},
date = {2020-10-31},
booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
publisher = {IEEE},
address = {Las Vegas, USA},
abstract = {Robotic systems are increasingly required not only to generate precise motions to complete their tasks but also to handle interactions with the environment or humans. Significantly, soft interaction brings great challenges to force control due to the nonlinear, viscoelastic and inhomogeneous properties of the soft environment. In this paper, a robust impedance control scheme utilizing the integral backstepping technique and integral terminal sliding mode control is proposed to achieve force tracking for an ultrasonic motor-actuated end-effector in a soft environment. In particular, the steady-state performance of the target impedance while in contact with the soft environment is derived and analyzed with the nonlinear Hunt-Crossley model. Finally, the dynamic force tracking performance of the proposed control scheme is verified via several experiments.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2019
@inproceedings{chi2019motion,
title = {Motion Control of a Soft Circular Crawling Robot via Iterative Learning Control},
author = {Haozhen Chi and Xuefang Li and Wenyu Liang and Yan Wu and Qinyuan Ren},
url = {https://ieeexplore.ieee.org/document/9029234
https://yan-wu.com/wp-content/uploads/2020/05/chi2019motion.pdf},
doi = {10.1109/CDC40024.2019.9029234},
isbn = {978-1-7281-1398-2},
year = {2019},
date = {2019-12-13},
booktitle = {2019 IEEE 58th Conference on Decision and Control (CDC)},
pages = {6524-6529},
publisher = {IEEE},
address = {Nice, France},
abstract = {Soft robots have recently attracted widespread attention due to their ability to work effectively in unstructured environments. As an actuation technology for soft robots, dielectric elastomer actuators (DEAs) exhibit many attractive attributes such as large strain and high energy density. However, due to nonlinear electromechanical coupling, it is challenging to model a DEA accurately, and consequently difficult to control a DEA-based soft robot. This work studies a novel DEA-based soft circular crawling robot. The kinematics of the soft robot is explored and a knowledge-based model is established to expedite the controller design. An iterative learning control (ILC) method is then applied to control the soft robot. By employing ILC, the robot's motion trajectory tracking performance can be improved significantly without a perfect model. Finally, several numerical studies are conducted to illustrate the effectiveness of the ILC.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}