Project Work: Application of Machine Learning and Inverse Kinematic Modelling for Avoiding Singularities in Serial Robots (Ph2)
Project Description:
The project aims to study and address the issue of singularities in Universal Robots by developing an avoidance strategy based on machine learning. Students will work on both simulations and experiments. In serial robots, difficulties in reaching certain poses usually stem from three main factors: the robot's inability to physically reach the location, joint limits, or constraints on straight-line motion within the workspace. This project seeks to explore the workspace, identify singularities, and resolve them using intelligent algorithms. Since singularities are often mathematical rather than physical constraints, the objective is to build a Reinforcement Learning (RL) model, train it to within an acceptable error range, and use its predictions of the joint values that realize the required TCP (tool center point) position and orientation, replacing real-time inverse kinematics. The final goal is to integrate the developed model into remote robot control via an HTC tracker in online mode, building on pre-existing ROS code.
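To make the idea concrete, the minimal sketch below shows the two ingredients such a learning approach typically rests on: a kinematic model whose Jacobian-based manipulability measure flags nearby singularities, and a reward that trades off TCP error against that measure. It uses a planar two-link arm as a stand-in for the UR manipulator; the link lengths, the weight w_sing, and the reward shape are illustrative assumptions, not project specifications.

```python
# Minimal sketch (not project code): singularity detection and an RL-style
# reward on a planar 2-link arm used as a stand-in for the UR manipulator.
# Link lengths, weights and the reward shape are illustrative assumptions.
import numpy as np

L1, L2 = 0.4, 0.3  # assumed link lengths [m]

def forward_kinematics(q):
    """TCP position of the planar 2R arm for joint angles q = [q1, q2]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Geometric Jacobian d(x, y)/d(q1, q2)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def manipulability(q):
    """Yoshikawa measure sqrt(det(J J^T)); approaches zero near a singularity."""
    J = jacobian(q)
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def reward(q, target_xy, w_sing=0.1):
    """Reward a policy for reaching the target TCP position while staying
    away from singular configurations (weight w_sing is an assumption)."""
    pos_error = np.linalg.norm(forward_kinematics(q) - target_xy)
    return -pos_error + w_sing * manipulability(q)

if __name__ == "__main__":
    q = np.array([0.5, -0.8])
    print("TCP:", forward_kinematics(q))
    print("manipulability:", manipulability(q))
    print("reward:", reward(q, target_xy=np.array([0.5, 0.2])))
```

For the real robot the same structure applies, with the six-joint UR kinematics and a manipulability measure (or the smallest singular value of the Jacobian) substituted for the planar model.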
Tasks and Duties:
Understanding the kinematic model and the URDF
Understanding the relationship between the joint angles and the position/orientation of the TCP
Developing a training environment for RL
Understanding and improving the simulation model for visualization
Defining a singularity avoidance/elimination strategy using reinforcement learning methods
Integrating the kinematic model, the machine learning model, and the trajectory planning algorithms into ROS nodes, utilizing the ready-to-use ROS architecture for communication and movement control in remote-control mode via the HTC tracker (a minimal node sketch follows this list)
Conducting experimental tests to validate the proposed approach and writing a scientific report
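As a starting point for the ROS integration task above, the following minimal ROS 1 node sketches the intended online data flow: an HTC-tracker pose comes in, the trained model predicts the joint values, and a joint command goes out. The topic names, controller interface, joint ordering, and the predict_joints() stub are placeholder assumptions; the actual interfaces are given by the institute's pre-existing ROS code.

```python
#!/usr/bin/env python3
# Minimal ROS 1 node sketch (not the pre-existing project code): maps an
# HTC-tracker pose to joint commands via a learned model. Topic names,
# joint names and predict_joints() are placeholder assumptions.
import rospy
from geometry_msgs.msg import PoseStamped
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

JOINT_NAMES = [  # standard UR joint names; order may differ in the real setup
    "shoulder_pan_joint", "shoulder_lift_joint", "elbow_joint",
    "wrist_1_joint", "wrist_2_joint", "wrist_3_joint",
]

def predict_joints(pose):
    """Placeholder for the trained RL/ANN model: pose -> six joint angles."""
    # Dummy home-like configuration; replace with the trained model's output.
    return [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]

def tracker_callback(msg, pub):
    q = predict_joints(msg.pose)                 # model replaces real-time IK
    traj = JointTrajectory()
    traj.joint_names = JOINT_NAMES
    point = JointTrajectoryPoint()
    point.positions = list(q)
    point.time_from_start = rospy.Duration(0.1)  # assumed command horizon
    traj.points = [point]
    pub.publish(traj)

def main():
    rospy.init_node("rl_ik_remote_control")
    pub = rospy.Publisher("/scaled_pos_joint_traj_controller/command",  # assumed controller topic
                          JointTrajectory, queue_size=1)
    rospy.Subscriber("/htc_tracker/pose", PoseStamped,                  # assumed tracker topic
                     tracker_callback, callback_args=pub)
    rospy.spin()

if __name__ == "__main__":
    main()
```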
Requirements:
Understanding of Kinematics/Robotics
Basic familiarity with ROS
Python programming skills
Solid knowledge of machine learning, especially Artificial Neural Networks and Reinforcement Learning methods
Contact Person: Mohammad Sadeghi (mohammad.sadeghi@tuhh.de)
Desired starting date: as soon as possible
Institut für Mechatronik im Maschinenbau (iMEK), Eißendorfer Straße 38, 21073 Hamburg