{"id":53940,"date":"2021-11-08T16:33:31","date_gmt":"2021-11-08T08:33:31","guid":{"rendered":"https:\/\/www.tm-robot.com\/?p=53940"},"modified":"2023-04-26T16:19:40","modified_gmt":"2023-04-26T08:19:40","slug":"a-look-at-the-current-challenges-of-robot-vision","status":"publish","type":"post","link":"https:\/\/www.tm-robot.com\/en\/a-look-at-the-current-challenges-of-robot-vision\/","title":{"rendered":"A Look at the Current Challenges of Robot Vision"},"content":{"rendered":"
[vc_row][vc_column][vc_column_text css=”.vc_custom_1589427958454{margin-bottom: 0px !important;border-bottom-width: 20px !important;}”]<\/p>\n
\u2018Robotics\u2019 and \u2018Machine Vision\u2019 are both time-honored research fields.<\/strong> Robotics courses are generally offered by departments such as mechanical engineering, automation, and systems control engineering, while machine vision courses are offered by information engineering and electrical engineering departments. Through the cooperation of experts from these two fields, robots are given the ability to \u2018see\u2019, that is, visual perception. This is why a robot vision system is a technology that relies heavily on engineering integration. Robot vision is designed to detect people and objects in the environment by calculating their positions in the camera coordinate system, converting those positions into the coordinate system of the robotic arm, and then driving its motors and joints to perform a task. This seemingly simple process rests on complex computer calculations. In this article, we address the difficulties of integrating robotic arms with machine vision.[\/vc_column_text][mk_fancy_title margin_bottom=”” font_family=”none”]<\/p>\n [\/mk_fancy_title][vc_column_text css=”.vc_custom_1589427924974{margin-bottom: 0px !important;}”]Traditional robotic arm programming makes an arm repeat the same action by moving through a series of taught points. Because these teaching points are fixed, a large number of jigs is required to hold the work pieces or peripheral processing machinery in place, which leaves little flexibility. If the spatial relationship between the arm and the work area changes due to an external force, all points must be re-taught. When machine vision is integrated with a robotic arm, the target positions of the arm can be corrected flexibly through visual recognition and compensation. 
This effectively reduces the number of jigs required and increases flexibility in handling diverse, multi-posture work pieces.<\/p>\n The spatial relationship between a robotic arm and its camera is known as the hand-eye relationship, and falls into three categories: Eye-in-Hand, Eye-to-Hand, and Upward-looking. In the Eye-in-Hand configuration, the camera is mounted on the end axis of the arm; after the camera captures an image and visual recognition is performed, the arm is driven to grip the work piece. In the Eye-to-Hand configuration, the camera and the arm are mounted separately. The advantage of this approach is that the arm can move while the camera is capturing images, resulting in a better cycle time. The disadvantage is that the fixed spatial relationship between the arm base and the camera must be maintained; if that relationship changes, re-calibration is required. The Upward-looking configuration, also known as secondary positioning, has the arm grip a work piece and bring it into the view of an upward-facing camera; the difference between the current posture and a reference posture is then computed for further adjustment. Upward-looking positioning is more accurate than either Eye-in-Hand or Eye-to-Hand.<\/p>\n <\/p>\n [\/mk_fancy_title][vc_column_text css=”.vc_custom_1589427138112{margin-bottom: 0px !important;}”]<\/p>\n3 Types of Hand-Eye Relationship<\/h2>\n
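The coordinate conversion described above can be sketched in a few lines. This is a minimal illustration, not TM Robot code: it assumes an Eye-to-Hand setup in which hand-eye calibration has already produced a fixed 4x4 homogeneous transform from the camera frame to the robot base frame, and the matrix and point values below are invented for the example.

```python
import math

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (row-major nested lists) to a 3D point p."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Illustrative Eye-to-Hand calibration result: the fixed transform from the
# camera frame to the robot base frame (a 90-degree rotation about Z plus a
# translation). In practice this matrix comes from a calibration routine,
# not from hand-written constants.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
T_base_camera = [
    [c,  -s,  0.0, 0.30],
    [s,   c,  0.0, 0.10],
    [0.0, 0.0, 1.0, 0.05],
    [0.0, 0.0, 0.0, 1.0],
]

# A work piece detected by the vision system, in camera coordinates (meters).
p_camera = (0.12, 0.04, 0.50)

# Convert into robot base coordinates: this is the position the controller
# would drive the arm toward to grip the work piece.
p_base = transform_point(T_base_camera, p_camera)
```

If the camera or the arm base is moved, `T_base_camera` is no longer valid and must be re-estimated, which is exactly the re-calibration cost of the Eye-to-Hand configuration noted above.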
[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][mk_fancy_title size=”36″ force_font_size=”true” size_smallscreen=”34″ size_tablet=”34″ size_phone=”30″ margin_bottom=”0″ font_family=”none”]<\/p>\n
Hand-Eye Relationship Comparisons<\/h2>\n