3DVision compilation follows a fixed pipeline, and the modules must be edited in order:
- Region of Interest,
- Object Search, and
- Picking Constraints.
No step in the editing process can be skipped.
NOTE: A module's editing status is indicated in one of six ways:
a. All white: the preceding modules have not been completed, and this module is not yet editable.
b. Upper right corner in gray: the module is editable but has not been edited yet.
c. Upper right corner in green: the module has been edited successfully.
d. Upper right corner in orange: the module has been edited, but the edit failed.
e. Box framed in green: the module is currently running and succeeding.
f. Box framed in orange: the module is currently running and failing.
Step #1: Initiate: In the process compilation, click Initiate, then adjust the camera parameters so that the objects in the scene are fully imaged. After confirming the adjustments, click Save.
- Hand Eye Type: The spatial relationship between the 3D camera and the robot. Currently only Eye-to-Hand is supported. The field below it describes the relationship between the camera and the object.
- Camera Parameters: Click to open the 3D camera parameter setting tab. Details can be found here.
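Conceptually, an Eye-to-Hand calibration yields one fixed transform from camera coordinates to robot-base coordinates, because the camera does not move with the arm. A minimal numpy sketch of applying such a transform (the transform values and function name here are made up for illustration, not taken from the product):

```python
import numpy as np

# Hypothetical fixed camera-to-robot-base transform for an Eye-to-Hand setup,
# where the camera is mounted in the workspace rather than on the arm.
# Rotation: identity; translation: camera origin sits 500 mm above the base.
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.0, 0.0, 500.0]  # mm

def camera_to_base(p_cam):
    """Map a 3D point from camera coordinates into robot-base coordinates."""
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous form
    return (T_base_cam @ p_h)[:3]

print(camera_to_base([10.0, -20.0, 300.0]))  # -> [ 10. -20. 800.]
```

Every object pose the vision pipeline reports in camera space is mapped through this kind of transform before the robot can pick it.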
Step #2: Region of Interest: After completing Initiate, click Region of Interest in the process compilation, select the algorithm to apply, and filter out the background and noise outside the region. To modify the settings after editing, right-click Region of Interest in the process compilation.
2D Trimming: Set the region of interest directly on the depth map in pixels, and keep only the set depth range within the selected region. This removes the background, reduces the amount of data, and speeds up the algorithm. Because the crop is defined in pixels, its field of view varies with the distance to the camera. This method suits applications where the height range of the region of interest is small or the background only needs to be removed roughly. The settings and parameters are described here.
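The 2D trimming idea can be sketched in numpy: crop the depth map to a pixel rectangle, then zero out depths outside the kept range. The function name and parameters below are illustrative, not the product's API:

```python
import numpy as np

def trim_2d(depth, u_min, u_max, v_min, v_max, z_min, z_max):
    """Crop a depth map to a pixel ROI and keep only depths in [z_min, z_max].
    Pixels outside the depth range are zeroed (treated as invalid)."""
    roi = depth[v_min:v_max, u_min:u_max].copy()
    roi[(roi < z_min) | (roi > z_max)] = 0
    return roi

depth = np.full((480, 640), 900.0)   # synthetic flat background at 900 mm
depth[200:280, 300:380] = 600.0      # a raised object at 600 mm
roi = trim_2d(depth, 250, 450, 150, 350, 400.0, 800.0)
print(roi.shape, np.count_nonzero(roi))  # (200, 200) 6400
```

Only the 80x80 patch of object pixels survives: the background at 900 mm falls outside the kept depth range and is discarded, which is exactly the data reduction the manual describes.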
3D Trimming: Create a 3D cuboid in camera space for cropping. All settings are real distances from the camera (unit: mm), which reduces the amount of data and speeds up the algorithm. This method suits applications that need to frame the region of interest accurately, such as scenes with large height variation or bin picking. The settings and parameters are described here.
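Unlike the pixel-based crop, 3D trimming amounts to keeping only the points that fall inside an axis-aligned box in camera space. A hedged numpy sketch (names are illustrative):

```python
import numpy as np

def trim_3d(points, lo, hi):
    """Keep only points inside an axis-aligned box in camera space (mm).
    points: (N, 3) array; lo, hi: the box's minimum and maximum corners."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

cloud = np.array([[0.0, 0.0, 500.0],    # inside the box
                  [300.0, 0.0, 500.0],  # outside in x
                  [0.0, 0.0, 1200.0]])  # outside in z (too far away)
print(trim_3d(cloud, [-200, -200, 300], [200, 200, 900]))  # keeps one point
```

Because the box is defined in millimetres rather than pixels, the kept region is the same physical volume regardless of how far objects sit from the camera.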
Step #3a: Clustering – Distance: Adjust the down-sampling interval and the grouping distance so that the point cloud in the scene is divided into multiple small groups; the algorithm then only needs to search for targets within each group. This reduces the number of computations and narrows the search range. The settings and parameters are described here.
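The two knobs this step exposes, down-sampling interval and grouping distance, can be illustrated with a naive numpy sketch. This is a conceptual O(n²) version, not the product's actual algorithm:

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Down sampling: keep one representative point per voxel of size `voxel` (mm)."""
    keys = np.floor(points / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def cluster_by_distance(points, max_dist):
    """Greedy single-linkage grouping: points closer than `max_dist` (mm)
    end up in the same group; returns one integer label per point."""
    labels = -np.ones(len(points), dtype=int)
    group = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = group
        stack = [i]
        while stack:
            j = stack.pop()
            near = np.where((labels == -1) &
                            (np.linalg.norm(points - points[j], axis=1) < max_dist))[0]
            labels[near] = group
            stack.extend(near.tolist())
        group += 1
    return labels

# Two well-separated pairs of points -> two groups.
pts = np.array([[0, 0, 0], [1, 0, 0], [50, 0, 0], [51, 0, 0]], dtype=float)
print(cluster_by_distance(pts, 5.0))  # [0 0 1 1]
```

A larger down-sampling interval means fewer points to cluster; a larger grouping distance merges nearby objects into one group, so both need tuning to the scene.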
Step #3b: Clustering – AI+ Detection: AI+ Detection requires a dongle key to enable. It applies the camera's 2D images to bounding-box detection and clusters the point cloud accordingly. The resulting classification can also be used in the subsequent Find process. The settings and parameters are described here.
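Clustering a point cloud from a 2D bounding box amounts to projecting each 3D point into the image and keeping the points that land inside the detection box. A sketch assuming a pinhole camera model with made-up intrinsics (a real setup would use the calibrated values):

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) -- in practice these come
# from the camera calibration, not from this manual.
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

def points_in_bbox(points, bbox):
    """Select 3D points whose pinhole projection lands inside a 2D
    detection box (u_min, v_min, u_max, v_max) from an image detector."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = FX * x / z + CX
    v = FY * y / z + CY
    u0, v0, u1, v1 = bbox
    mask = (u >= u0) & (u <= u1) & (v >= v0) & (v <= v1)
    return points[mask]

cloud = np.array([[0.0, 0.0, 500.0],     # projects to the image centre
                  [200.0, 0.0, 500.0]])  # projects far right (u = 560)
print(points_in_bbox(cloud, (300, 220, 340, 260)))  # keeps one point
```

Each detection box thus yields one point-cloud cluster, already tagged with the detector's class label for use in the Find step.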
Step #4: Find: Click Find in the process compilation and select one of the two algorithms to apply: CAD or Geometry. The settings and parameters are described as follows:
- Geometry: Select an appropriate simple geometric form from the Geometry Type list, and adjust its parameters to appropriate values so that objects can be found stably: plane (length, width), box (length, width, height), sphere (radius), cylinder (radius, depth), disc (radius).
- CAD: A model created with the 3D modeling function uses the surface characteristics of the object as a template, and the algorithm finds the matching point cloud in the scene. Users can convert common CAD files into the desired point cloud model through the Modeling function. Because point cloud clustering may crop away object features, combining clustering with CAD models is not recommended.
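For the Geometry algorithm, fitting a simple form to a point cloud is a classic least-squares problem. The sketch below fits a plane via SVD; it is a textbook method for illustration, not necessarily the product's implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (unit normal, centroid).
    The plane passes through the centroid; the normal is the right singular
    vector of the centred points with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

# Noise-free points on the plane z = 2 (normal should be +/-[0, 0, 1]).
pts = np.array([[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2]], dtype=float)
normal, centroid = fit_plane(pts)
print(np.round(np.abs(normal), 6))  # [0. 0. 1.]
```

The other geometry types (box, sphere, cylinder, disc) follow the same pattern: choose a parametric form, then solve for the parameters that best explain the observed points.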
Step #5: Picking Constraints: Click Picking Constraints in the process compilation. Based on site conditions such as workpieces, tools, and work platforms, set appropriate criteria for picking and placing objects, including how candidates are chosen and judged after objects are found. Filtering out unfavorable picking candidates based on the usage conditions improves picking efficiency and reduces risk. The settings and parameters are described here.
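The idea of filtering pick candidates against constraints can be sketched as follows; the candidate fields and thresholds are illustrative only, not the product's actual data format:

```python
import numpy as np

def filter_picks(candidates, max_tilt_deg=30.0, z_min=0.0, z_max=400.0):
    """Drop pick candidates whose approach axis tilts too far from vertical
    or whose height lies outside the reachable range. Each candidate is a
    dict with a unit 'axis' vector and a 'z' height in mm (made-up fields)."""
    up = np.array([0.0, 0.0, 1.0])
    keep = []
    for c in candidates:
        tilt = np.degrees(np.arccos(np.clip(np.dot(c["axis"], up), -1.0, 1.0)))
        if tilt <= max_tilt_deg and z_min <= c["z"] <= z_max:
            keep.append(c)
    return keep

picks = [{"axis": np.array([0.0, 0.0, 1.0]), "z": 120.0},   # upright, in range
         {"axis": np.array([1.0, 0.0, 0.0]), "z": 120.0},   # tilted 90 degrees
         {"axis": np.array([0.0, 0.0, 1.0]), "z": 600.0}]   # too high to reach
print(len(filter_picks(picks)))  # 1
```

Rejecting the tilted and out-of-range candidates before motion planning is what lets this step improve efficiency and reduce the chance of collisions.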