*(Images: braccio_plus, oakd, rpi5, Toys)*
*(Image: rpi_imager)*

Set the locale to UTF-8: go to **Localisation Options** > **Locale** and choose `en_US.UTF-8`.

*(Images: locale_1, locale_2)*
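The same change can also be made non-interactively. The snippet below uses `raspi-config`'s `nonint` interface, which is not officially documented, so treat it as an assumption rather than the article's method:

```shell
# Non-interactive equivalent of Localisation Options > Locale
sudo raspi-config nonint do_change_locale en_US.UTF-8
```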
*(Images: braccio_carrier, upload_data, datasets, labelling, create_impulse, raw_features, generate_features, Toys, model_testing)*
To download the `.eim` model and start the inferencing, run the following command and follow the instructions.

*(Image: inferencing)*
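The exact command is not preserved in this excerpt; with the Edge Impulse Linux CLI installed, downloading and running a model is typically done with:

```shell
# Prompts for Edge Impulse login, downloads the project's .eim model,
# and starts inferencing with the connected camera
edge-impulse-linux-runner
```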
*(Images: model_compile, download_block_output)*

The model compilation produces two files:

- `.xml` — describes the model topology.
- `.bin` — contains the weights and binary data.

These are converted into a single `.blob` file, which can be deployed to the OAK-D device.
We copy the generated `.blob` file to the `~/EI_Pick_n_Place/pnp_ws/src/ei_yolov5_detections/resources` folder on the Raspberry Pi 5. We can test the generated model using the depthai-python library:
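The test script itself is not preserved in this excerpt. A minimal depthai-python sketch is shown below; the blob file name, input size, class count, and thresholds are assumptions, and a YOLOv5 model may additionally require `setAnchors`/`setAnchorMasks` values matching the trained network. It needs an OAK-D attached to run.

```python
# Sketch only: run the Edge Impulse YOLOv5 blob on an attached OAK-D.
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(320, 320)          # must match the model's input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath("resources/ei_yolov5.blob")  # hypothetical file name
nn.setNumClasses(3)                         # assumed number of classes
nn.setCoordinateSize(4)
nn.setConfidenceThreshold(0.5)
nn.setIouThreshold(0.5)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            print(det.label, round(det.confidence, 2),
                  det.xmin, det.ymin, det.xmax, det.ymax)
```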
*(Image: depth_inferencing)*
*(Image: stl)*

We created a package, `moveit_resources_braccio_description`, to keep all STL files and the URDF for reusability. The robot model URDF can be found in the GitHub repository for this project:

https://github.com/metanav/EI_Pick_n_Place/tree/main/pnp_ws/src/braccio_description/urdf
To visualize the robot model, we use the `urdf_launch` and `joint_state_publisher` packages and launch the visualization.
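Assuming the standard `urdf_launch` arguments (argument names may differ by release, and the package/path here are taken from this project's layout), the model can be displayed in RViz with something like:

```shell
ros2 launch urdf_launch display.launch.py \
  urdf_package:=moveit_resources_braccio_description \
  urdf_package_path:=urdf/braccio.urdf
```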
*(Image: robot_urdf_rviz)*
The MoveIt Setup Assistant loads the `braccio.urdf` file from the `moveit_resources_braccio_description` package.
*(Images: moveit2_assistant_1, moveit2_assistant_2)*
We define a `fixed` virtual joint that attaches the `base_link` of the arm to the `world` frame. This virtual joint signifies that the base of the arm remains stationary in the world frame.
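In the SRDF generated by the Setup Assistant, this appears as a `virtual_joint` entry; a sketch (the joint name is an assumption) looks like:

```xml
<!-- Fixed virtual joint pinning the arm's base_link to the world frame -->
<virtual_joint name="virtual_joint" type="fixed" parent_frame="world" child_link="base_link"/>
```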
*(Images: moveit2_assistant_3, moveit2_assistant_4, moveit2_assistant_5, moveit2_assistant_7)*
We designate the `braccio_gripper` group as an end effector. End effectors can be used for attaching objects to the arm while carrying out pick-and-place tasks.
*(Image: moveit2_assistant_6)*

*(Image: board_manager)*
The firmware publishes the joint positions on the `/joint_states` topic and subscribes to the `/gripper/gripper_cmd` and `/arm/follow_joint_trajectory` topics.
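Once the firmware is connected, these interfaces can be inspected from the Raspberry Pi with the standard ROS 2 CLI, for example:

```shell
# List the available topics and watch the joint positions reported by the arm
ros2 topic list
ros2 topic echo /joint_states
```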
The `ei_yolov5_detections` node detects the objects and publishes the detection results using the Edge Impulse trained model on the OAK-D depth camera.
The `pick_n_place` node plans a pick-and-place operation using the MoveIt Task Constructor, which provides a way to plan tasks that consist of multiple different subtasks, known as stages (shown in the image below).
*(Image: moveit2_task_stages)*
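The MoveIt Task Constructor C++ API is beyond the scope of this excerpt, but the core idea of decomposing one task into sequential stages can be sketched in plain Python. The stage names below are illustrative, not MTC's actual stage types, and real MTC nests stages inside container stages:

```python
# Illustrative sketch (not the MoveIt Task Constructor API): a pick-and-place
# task decomposed into stages, each of which must be planned successfully
# before the next one is attempted.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    plan: Callable[[], bool]  # returns True if a valid motion plan was found

def run_task(stages: List[Stage]) -> List[str]:
    """Plan each stage in order; stop at the first failure."""
    completed = []
    for stage in stages:
        if not stage.plan():
            break
        completed.append(stage.name)
    return completed

# A pick-and-place task as a flat list of stages (hypothetical names).
task = [
    Stage("open gripper", lambda: True),
    Stage("move to pre-grasp pose", lambda: True),
    Stage("approach object", lambda: True),
    Stage("close gripper", lambda: True),
    Stage("lift object", lambda: True),
    Stage("move to place pose", lambda: True),
    Stage("release object", lambda: True),
    Stage("retreat", lambda: True),
]

print(run_task(task))  # all eight stage names, in order
```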
This node subscribes to the `/ei_yolov5/spatial_detections` topic and plans the pick and place operation. While bringing up this node, we need to provide command-line parameters for the exact (X, Y, Z) position of the camera, in meters, from the base of the robot.
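The camera offset is needed because the OAK-D reports object positions in its own frame, and the planner works in the robot base frame. A hypothetical helper (not from the project code) shows the translation step, under the simplifying assumption that the camera axes are aligned with the base frame; a real setup would also apply the camera's rotation:

```python
# Hypothetical helper: shift a detection from the camera frame into the robot
# base frame using the camera's (X, Y, Z) mounting offset, all in meters.
def camera_to_base(point_cam, cam_offset):
    """Translate a (x, y, z) point; rotation is ignored for simplicity."""
    return tuple(round(p + o, 3) for p, o in zip(point_cam, cam_offset))

# Camera mounted 0.3 m in front of and 0.4 m above the base (made-up values)
print(camera_to_base((0.10, -0.05, 0.40), (0.30, 0.00, 0.40)))  # (0.4, -0.05, 0.8)
```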
We also launch the `robot_state_publisher` and `move_group` nodes to publish the robot model and to provide the MoveIt 2 actions and services, respectively.
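A minimal launch-file sketch for these two nodes is shown below. The package and executable names are the standard ROS 2/MoveIt ones, but the parameter wiring is elided and the project's actual launch files may differ:

```python
# Sketch of a ROS 2 launch file bringing up robot_state_publisher and move_group.
# A real setup loads the SRDF, kinematics, and planning parameters from the
# generated MoveIt config package.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    robot_description = {"robot_description": open("braccio.urdf").read()}
    return LaunchDescription([
        Node(package="robot_state_publisher", executable="robot_state_publisher",
             parameters=[robot_description]),
        Node(package="moveit_ros_move_group", executable="move_group",
             parameters=[robot_description]),  # plus SRDF and planner params
    ])
```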