
Team B Unitree Go1 Pull Request #154

Merged: 41 commits, Apr 28, 2024

Commits
f1de1e6 Add images (TingWeiWong, May 4, 2023)
540de0f Create unitree-go1.md (TingWeiWong, May 4, 2023)
e694647 Finish unitree go1 markdown (TingWeiWong, May 4, 2023)
9b7421c Add unitree go1 link (TingWeiWong, May 4, 2023)
499953f Create docker-security (TingWeiWong, May 7, 2023)
ef2ed35 Rename docker-security to docker-security.md (TingWeiWong, May 7, 2023)
783b5a1 Add images for docker security (TingWeiWong, May 7, 2023)
451711b Fix image path (TingWeiWong, May 7, 2023)
a1e96c2 Update docker-security.md (TingWeiWong, May 7, 2023)
20e1212 Update navigation.yml (TingWeiWong, May 7, 2023)
76dbb04 Merge branch 'RoboticsKnowledgebase:master' into master (TingWeiWong, Dec 13, 2023)
235a530 Update unitree-go1.md (TingWeiWong, Dec 13, 2023)
54a26a1 Rename assets/images/form_factor.png to wiki/common-platforms/assets/… (TingWeiWong, Dec 13, 2023)
54947a6 Rename assets/images/docker_socket.png to wiki/common-platforms/asset… (TingWeiWong, Dec 13, 2023)
048e53f Rename assets/images/unitree_side.png to wiki/common-platforms/as… (TingWeiWong, Dec 13, 2023)
68b501e Rename unitree_side.png to unitree_side.png (TingWeiWong, Dec 13, 2023)
b0296c7 Rename assets/images/unitree_top.png to wiki/common-platforms/ass… (TingWeiWong, Dec 13, 2023)
4382080 Rename assets/images/wired.png to wiki/common-platforms/assets/wi… (TingWeiWong, Dec 13, 2023)
7f26bbd Rename assets/images/wireless.png to wiki/common-platforms/assets… (TingWeiWong, Dec 13, 2023)
df5c9cb Update unitree-go1.md (TingWeiWong, Dec 13, 2023)
75c5fa4 Update unitree-go1.md (TingWeiWong, Dec 13, 2023)
5b9c766 Create azure-block-detection (TingWeiWong, Dec 13, 2023)
cb3bcc1 Rename azure-block-detection to azure-block-detection.md (TingWeiWong, Dec 13, 2023)
4bcbf3f Update azure-block-detection.md (TingWeiWong, Dec 13, 2023)
1e38cb6 Add files via upload (TingWeiWong, Dec 13, 2023)
d2fdae9 Add files via upload (TingWeiWong, Dec 13, 2023)
7816929 Update azure-block-detection.md (TingWeiWong, Dec 13, 2023)
9989f05 Update azure-block-detection.md (TingWeiWong, Dec 13, 2023)
01fc4e4 Update azure-block-detection.md (TingWeiWong, Dec 13, 2023)
a714374 Add files via upload (TingWeiWong, Dec 13, 2023)
e31063b Update azure-block-detection.md (TingWeiWong, Dec 13, 2023)
e7782e5 Add files via upload (TingWeiWong, Dec 13, 2023)
06b30d2 Update azure-block-detection.md (TingWeiWong, Dec 13, 2023)
e13d7d7 Rename wiki/sensing/hsv_img.png to wiki/sensing/assets/hsv_img.png (TingWeiWong, Dec 13, 2023)
b08d07f Rename wiki/sensing/norm_img.png to wiki/sensing/assets/norm_img.png (TingWeiWong, Dec 13, 2023)
14d1a6a Update unitree-go1.md (TingWeiWong, Dec 13, 2023)
2b1bd41 Update unitree-go1.md (TingWeiWong, Dec 13, 2023)
9a1cb70 Update azure-block-detection.md (TingWeiWong, Dec 13, 2023)
37f2980 Update navigation.yml (TingWeiWong, Dec 13, 2023)
4abd765 Update unitree-go1.md (remove corrupted picture) (TingWeiWong, Apr 28, 2024)
11b5384 Merge branch 'master' into master (nevalsar, Apr 28, 2024)
6 changes: 6 additions & 0 deletions _data/navigation.yml
@@ -45,6 +45,8 @@ wiki:
url: /wiki/common-platforms/dji-drone-breakdown-for-technical-projects/
- title: DJI SDK
url: /wiki/common-platforms/dji-sdk/
- title: Unitree Go1
url: /wiki/common-platforms/unitree-go1/
- title: Pixhawk
url: /wiki/common-platforms/pixhawk/
- title: Asctec Pelican UAV Setup Guide
@@ -114,6 +116,8 @@ wiki:
url: /wiki/sensing/robotic-total-stations.md
- title: Thermal Cameras
url: /wiki/sensing/thermal-cameras/
- title: Azure Block Detection
url: /wiki/sensing/azure-block-detection/
- title: DWM1001 UltraWideband Positioning System
url: /wiki/sensing/ultrawideband-beacon-positioning.md
- title: Actuation
@@ -283,6 +287,8 @@ wiki:
children:
- title: Docker
url: /wiki/tools/docker/
- title: Docker Security
url: /wiki/tools/docker-security
- title: Docker for PyTorch
url: /wiki/tools/docker-for-pytorch/
- title: Vim
Binary file added assets/images/privesc.png
Binary file added wiki/common-platforms/assets/docker_socket.png
Binary file added wiki/common-platforms/assets/form_factor.png
1 change: 1 addition & 0 deletions wiki/common-platforms/assets/unitree_side.png
Binary file added wiki/common-platforms/assets/unitree_top.png
Binary file added wiki/common-platforms/assets/wired.png
Binary file added wiki/common-platforms/assets/wireless.png
102 changes: 102 additions & 0 deletions wiki/common-platforms/unitree-go1.md
@@ -0,0 +1,102 @@
---
# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be
# overwritten except in special circumstances.
# You should set the date the article was last updated like this:
date: 2023-05-03 # YYYY-MM-DD
# This will be displayed at the bottom of the article
# You should set the article's title:
title: Unitree Go1 Edu
# The 'title' is automatically displayed at the top of the page
# and used in other parts of the site.
---
This article provides an overview of the Unitree Go1 Edu robot, including its features and capabilities. Unitree Robotics is a leading Chinese manufacturer that specializes in developing, producing, and selling high-performance quadruped robots. The company's primary advantage is that it offers quadruped platforms at a significantly lower cost than competitors such as Boston Dynamics. It has also announced plans to release experimental humanoid platforms in the near future.

There are three versions of the Unitree Go1: Air, Pro, and Edu. The Edu model is designed for educational purposes and provides developers with access to the platform. In this article, we will focus on the capabilities of the Go1 Edu, which is a popular choice for students and researchers due to its affordability and ease of use.

## Form Factor
![Form_Factor](assets/form_factor.png)
The Unitree Go1 Edu has compact dimensions of 645 x 280 x 400 mm and weighs 12 kg.
It boasts a top speed of 3.7-5 m/s and a maximum load capacity of 10 kg, although it's recommended to keep the payload under 5 kg.
By default, the robot can traverse steps up to 10 cm high, but with programming, it's possible to overcome larger obstacles.
The Go1 Edu features 12 degrees of freedom: HAA (hip abduction/adduction), HFE (hip flexion/extension), and KFE (knee flexion/extension) joints on each leg.
The body/thigh joint motors adapt well to a variety of mechanical equipment, with an instantaneous torque of 23.7 N·m; the knee joint delivers 35.55 N·m.

## Power and Interface
![Unitree_TOP](assets/unitree_top.png)

The Unitree Go1 Edu robot is equipped with a reliable lithium-ion power cell with a 6000mAh capacity that provides an endurance time of 1-2.5 hours. The robot's battery management system (BMS) closely monitors the battery status, ensuring safe and stable operation during use. The batteries themselves feature overcharge protection, providing an additional layer of safety.

The top plate of the robot features several ports, including USB and HDMI ports that connect to the corresponding onboard computers. The USB and HDMI pair located closest to the Ethernet port, along with the Ethernet port itself, connects to the Raspberry Pi. Users can also draw 24 V, 12 A power from the top plate through an XT30 connector.

## Sensors and Processors

The Unitree Go1 Edu robot is equipped with a range of sensors, including five pairs of stereo-fisheye cameras located at the face, chin, lower belly, right torso, and left torso, providing a 360-degree field of view. Additionally, it has three sets of ultrasonic sensors positioned in different directions to detect obstacles in its path. The robot also features an IMU, four foot force sensors, and face LEDs, which can be programmed to display different expressions.

Moreover, Unitree provides customization options for processors and additional sensors. In the 2023 MRSD Unitree Go1, for instance, there is one Raspberry Pi CM4 (Compute Module 4), two Nvidia Jetson Nanos, and one Nvidia NX. The Raspberry Pi comes with a 32 GB SD card where Unitree's off-the-shelf software is pre-installed.

## Network Configuration for Unitree Go1 Camera Streaming
* There are four computers inside the Unitree Go1: three Jetson Nanos and one Raspberry Pi, all connected through an internal switch.
* The Raspberry Pi's built-in network interface (`eth0`) is connected to the switch.
* The Raspberry Pi also has an extra Wi-Fi card, which acts as a hotspot at 192.168.12.1.
* The user laptop connects to the robot's hotspot with the static IP 192.168.12.18.
* Alternatively, users can connect to all four devices via an Ethernet cable, using the static IP 192.168.123.123.
![Wired](assets/wired.png)

* Each Nano controls and processes a pair of fisheye cameras. The Unitree camera SDK provides an API that captures the skewed fisheye camera stream, rectifies it, and sends it out as UDP packets.
* `./bins/example_putImagetrans` sends the camera stream as UDP packets.
* `./bins/example_getimagetrans` receives the UDP packets and displays the stream with GStreamer.
* You can modify the receiver program to process the stream however you like (a Python receiver sketch appears at the end of the next section).
* The de-fisheye API requires a direct connection to the camera and must run on the Jetson Nano; users cannot receive the raw camera stream and run this built-in program on their own laptop. The API is also designed for Ethernet connections: it requires the third octet of the image receiver's IP address to be 123, so the user's laptop must have an IP in the 123 subnet.
* In addition, users need to modify the config file on the Jetson Nano, `/UnitreecameraSDK/trans_rect_config.yaml`.

## Wirelessly Stream Camera Feed from Unitree Go1's Head Cameras to Desktop
To receive a camera stream wirelessly, you will need to modify the routing tables on the head Nano, the Raspberry Pi, and your laptop, as shown below.

```console
# ---------------------------- head Nano ----------------------------
sudo route del -net 192.168.123.0 netmask 255.255.255.0

# the following four commands kill the stock camera processes
ps -aux | grep point_cloud_node | awk '{print $2}' | xargs kill -9
ps -aux | grep mqttControlNode | awk '{print $2}' | xargs kill -9
ps -aux | grep live_human_pose | awk '{print $2}' | xargs kill -9
ps -aux | grep rosnode | awk '{print $2}' | xargs kill -9

cd UnitreecameraSDK
./bins/example_putImagetrans

# --------------------------- Raspberry Pi ---------------------------
sudo route add -host 192.168.123.123 dev wlan1

# ---------------------------- user laptop ---------------------------
# run ifconfig and find the Wi-Fi interface connected to the Go1
# (mine is wlp0s20f3)
sudo ifconfig wlp0s20f3:123 192.168.123.123 netmask 255.255.255.0
sudo route del -net 192.168.123.0 netmask 255.255.255.0

cd UnitreecameraSDK
./bins/example_getimagetrans
```
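
If you would rather consume the stream in your own code than in the stock viewer, a minimal receiver can be built with OpenCV's GStreamer backend. The sketch below is illustrative, not Unitree's tooling: it assumes an OpenCV build with GStreamer support, and that the sender uses H.264 over RTP/UDP on port 9201; verify the actual port and encoding against `trans_rect_config.yaml` and `example_getimagetrans` on your robot.

```python
# Minimal sketch of a custom stream receiver (assumed port and caps).
import cv2

pipeline = (
    "udpsrc port=9201 "
    "! application/x-rtp,media=video,encoding-name=H264 "
    "! rtph264depay ! h264parse ! avdec_h264 "
    "! videoconvert ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Go1 head camera", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```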

## Controlling Unitree in Simulation and Real-World Scenarios

### Introduction
Unitree Robotics provides a high-level control interface for directly controlling the real robot. However, controlling the movement of a robot in simulation using simple commands is a challenge. This documentation provides an overview of the issues we faced and the solutions we found while controlling the Unitree Go1 robot in simulation and real-world scenarios.

### Controlling the Robot in Simulation
The Gazebo simulation environment currently accepts only `unitree_legged_msgs::LowCmd` as the subscribed message type, which requires setting motor torques and joint angles manually. The functions that convert `unitree_legged_msgs::HighCmd` to `unitree_legged_msgs::LowCmd` are hidden in the high-level robot interface on the robot itself (`/raspi/Unitree/auto start/programming/programming.py`), so they are not available in simulation. This limitation can be overcome by exploring the MIT Champ code and by using Nvidia's Isaac Sim platform. A sketch of low-level joint control in simulation follows.
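
As a starting point, a single simulated joint can be driven by publishing `unitree_legged_msgs/MotorCmd` directly. This is a sketch under assumptions: it presumes the `unitree_ros` Gazebo setup, where each joint exposes a `*_controller/command` topic, and the topic name used here is hypothetical; check `rostopic list` for the names in your launch configuration.

```python
# Sketch: drive one simulated Go1 joint in Gazebo (assumed topic name).
import rospy
from unitree_legged_msgs.msg import MotorCmd

rospy.init_node("lowcmd_sketch")
# Topic name is an assumption; verify with `rostopic list`.
pub = rospy.Publisher("/go1_gazebo/FL_calf_controller/command",
                      MotorCmd, queue_size=1)

cmd = MotorCmd()
cmd.mode = 10     # servo (position) mode in Unitree's convention
cmd.q = -1.3      # target joint angle [rad]
cmd.Kp = 50.0     # position gain
cmd.Kd = 1.0      # damping gain
cmd.tau = 0.0     # feed-forward torque [N*m]

rate = rospy.Rate(100)
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```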

### Controlling the Robot in Real-World Scenarios
To ensure safety, it is crucial to carefully review the user manual and record the full action sequence of the Unitree Go1 robot before running custom code. The provided software packages, `unitree_legged_sdk` and `unitree_ros_to_real`, include example code that can be studied and adapted into custom packages for specific use cases. For instance, `example_walk.cpp` sends `HighCmd` messages to the robot, allowing users to set start and end points so the robot plans its route between them. A hedged sketch of sending a `HighCmd` over ROS follows.
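
The sketch below is an illustration, not the SDK's own example: it assumes the `unitree_ros_to_real` bridge node is running in HIGHLEVEL mode and subscribes to a `high_cmd` topic, and the field conventions (mode 2 for target-velocity walking, gaitType 1 for trot) follow SDK 3.8-era examples. Verify the topic name and field meanings against `example_walk.cpp` for your firmware before running it, with the robot in a safe area.

```python
# Sketch: command a slow forward walk via unitree_ros_to_real (assumed
# topic name and field conventions; verify against example_walk.cpp).
import rospy
from unitree_legged_msgs.msg import HighCmd

rospy.init_node("highcmd_sketch")
pub = rospy.Publisher("high_cmd", HighCmd, queue_size=1)

cmd = HighCmd()
cmd.mode = 2                # assumed: 2 = target-velocity walking
cmd.gaitType = 1            # assumed: 1 = trot
cmd.velocity = [0.2, 0.0]   # forward 0.2 m/s, no lateral motion
cmd.yawSpeed = 0.0

rate = rospy.Rate(50)
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```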

## Summary
If you are considering the Unitree Go1 for your project, be aware that you will either need to be content with the default controller or implement your own state estimation and legged controller. One of the main drawbacks of commercial products like this is that the code is closed-source. When deploying your own code on the Unitree Raspberry Pi, keep an eye on memory usage and find a balance between performance and the available computing capability. (Note: this section is current as of May 2023.)



## References
- [Unitree Go1 Education Plus](https://www.wevolver.com/specs/unitree-robotics-go1-edu-plus)
- [Unitree vs. Boston Dynamics](https://www.generationrobots.com/blog/en/unitree-robotics-vs-boston-dynamics-the-right-robot-dog-for-me/)
- [Unitree 3D Lidar](https://www.active-robots.com/unitree-go1-air-3.html)
Binary file modified wiki/sensing/assets/cropped.png
Binary file added wiki/sensing/assets/hsv_img.png
Binary file added wiki/sensing/assets/norm_img.png
Binary file added wiki/sensing/assets/norm_mask.png
Binary file added wiki/sensing/assets/norm_result.png
Binary file added wiki/sensing/assets/normalized.png
Binary file modified wiki/sensing/assets/original.png
Binary file added wiki/sensing/assets/pipeline.png
Binary file added wiki/sensing/assets/rgb_vector.png
Binary file added wiki/sensing/assets/zoom1.png
Binary file added wiki/sensing/assets/zoom2.png
71 changes: 71 additions & 0 deletions wiki/sensing/azure-block-detection.md
@@ -0,0 +1,71 @@
---
# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be
# overwritten except in special circumstances.
# You should set the date the article was last updated like this:
date: 2023-12-13 # YYYY-MM-DD
# This will be displayed at the bottom of the article
# You should set the article's title:
title: Azure Block Detection
# The 'title' is automatically displayed at the top of the page
# and used in other parts of the site.
---

This article presents an overview of object detection using the Azure Kinect camera without relying on learning-based methods. We used it in our Robot Autonomy project to detect Jenga blocks and attempt to assemble them.

### Detection Pipeline
To identify individual blocks and their respective grasping points, the perception subsystem performs a series of five steps. First, it crops the Azure Kinect camera image to center on the workspace. Next, it applies color thresholding to filter out irrelevant objects and discern the blocks. It then identifies the contours of these blocks and filters them by area and shape. Once the blocks are recognized, the subsystem computes the grasping points for each block. Together, these steps enable accurate detection of block locations and their corresponding grasping points on the workstation.


![Pipeline of Block Detection](assets/pipeline.png)

### Image Cropping
The initial stage of the perception subsystem involves cropping the raw image. Raw images often contain extraneous details, such as the workspace's supporting platform or the presence of individuals' feet near the robot. By cropping the image to focus solely on the workspace, we eliminate a significant amount of unnecessary information, thereby enhancing the system's efficiency and robustness.

Currently, this approach employs hard-coded cropping parameters, requiring manual specification of the rows and columns to retain within the image.
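
A minimal sketch of such a crop is below; the row and column bounds are placeholders, not the values used in the project.

```python
# Sketch of the hard-coded crop (bounds are hypothetical placeholders).
import cv2

ROW_MIN, ROW_MAX = 120, 620   # assumed workspace rows
COL_MIN, COL_MAX = 300, 980   # assumed workspace columns

img = cv2.imread("azure_rgb.png")            # raw Azure Kinect color frame
cropped = img[ROW_MIN:ROW_MAX, COL_MIN:COL_MAX]
cv2.imwrite("cropped.png", cropped)
```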

![Cropped Image](assets/cropped.png)

### Color Segmentation
Color segmentation can pose challenges in images with prominent shadows. Shadows decrease RGB pixel values while direct light increases them, making it difficult to distinguish colors reliably. To address this, we employ HSV (Hue, Saturation, Value) thresholding on the image.

For reliable detection of the brown Jenga blocks under varying lighting conditions, we use the HSV color space, which consists of three channels: hue, saturation, and value. Thresholding these channels filters out the desired colors; a fixed RGB threshold, by contrast, struggles with brown because its RGB values vary with lighting.

To establish the brown color range for the Jenga blocks, we used color-meter software. This range, with lower and upper brown bounds, was applied in our HSV thresholding function; the resulting HSV-thresholded image appears in the comparison at the end of this article.

To further refine Jenga block detection and eliminate background noise, we apply a mask to the HSV-thresholded image. We first create a mask by thresholding on contour area and then fill any holes within the contour to obtain a solid mask. This removes remaining noise and unwanted objects, ensuring reliable detection of the Jenga blocks.
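
The following sketch reproduces this step with OpenCV; the brown HSV bounds and the contour-area threshold are illustrative assumptions, not the project's calibrated values.

```python
# Sketch of HSV thresholding plus a hole-filled mask (assumed bounds).
import cv2
import numpy as np

img = cv2.imread("cropped.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

LOWER_BROWN = np.array([5, 60, 60])      # hypothetical (H, S, V) lower bound
UPPER_BROWN = np.array([25, 255, 255])   # hypothetical (H, S, V) upper bound
mask = cv2.inRange(hsv, LOWER_BROWN, UPPER_BROWN)

# Drop small specks by contour area, then fill holes to get a solid mask.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
solid = np.zeros_like(mask)
for c in contours:
    if cv2.contourArea(c) > 500:         # area threshold is an assumption
        cv2.drawContours(solid, [c], -1, 255, thickness=cv2.FILLED)
cv2.imwrite("mask.png", solid)
```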

![RGB Vector](assets/rgb_vector.png)

### Block Contours

Contours play a pivotal role in object detection. In our perception system, we use the mask derived from the HSV-thresholded image to generate precise, consistent contours, improving accuracy.

We use OpenCV's `findContours` function to generate contours from the masked image. However, these contours capture not only the Jenga blocks but also the robot manipulator. Since our focus is solely on the rectangular shapes of the Jenga blocks, we threshold on approximate block size and rectangular characteristics.

To simplify contours and reduce the number of points, we apply OpenCV's `minAreaRect` function, which yields a four-point contour representing the block's corners. Comparing the area of the original contour with that of the `minAreaRect` box against a threshold ratio confirms whether the detected object is indeed a rectangle.

Subsequently, we identify the two grasp points of the block by detecting its longer sides. To determine these grasp points in the image frame, we align the depth image with the RGB image to acquire the depth value. Using the x, y, and depth values, we transform the 2D pixel points back to a 3D pose in the camera frame via the intrinsic matrix. The grasp point in the base frame is then computed with a transform-tree lookup, completing the perception cycle. A sketch of the rectangle test and deprojection follows below.
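
In the sketch below, the area-ratio threshold, camera intrinsics, and file names are placeholders; a real pipeline would read the intrinsics from the Azure Kinect calibration and use the depth image registered to the RGB frame (in whatever units your pipeline produces).

```python
# Sketch of the rectangle test and pinhole back-projection (assumed values).
import cv2
import numpy as np

RECT_RATIO = 0.85                              # assumed min contour/box area ratio
fx, fy, cx, cy = 600.0, 600.0, 640.0, 360.0    # hypothetical intrinsics

solid = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)            # solid block mask
depth = cv2.imread("depth_aligned.png", cv2.IMREAD_UNCHANGED)   # aligned depth

contours, _ = cv2.findContours(solid, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    rect = cv2.minAreaRect(c)                  # ((cx, cy), (w, h), angle)
    box = cv2.boxPoints(rect)                  # four corner points
    box_area = cv2.contourArea(box.astype(np.float32))
    if box_area == 0 or cv2.contourArea(c) / box_area < RECT_RATIO:
        continue                               # not rectangular enough to be a block

    (u, v), _, _ = rect                        # block center in pixels
    z = float(depth[int(v), int(u)])           # depth value at the center
    # Back-project the pixel into the camera frame with the pinhole model.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    print("grasp center in camera frame:", (x, y, z))
```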

![Contours](assets/zoom1.png)


### Image HSV Thresholding vs. Normalization

To robustly filter out irrelevant data, we explored two approaches: HSV thresholding and image normalization. Besides the conventional representation of each pixel as an RGB value, a pixel can also be viewed as a 3D vector in RGB space. Lighting changes the vector's magnitude but not its direction, so normalizing each vector to unit length cancels the lighting effect, preserving only direction and effectively converting RGB vectors into unit vectors.

To identify Jenga block pixels, we calculated the cosine similarity between each pixel's RGB vector and the background color, masking out pixels too similar to the background.

Although image normalization showed promise, it proved less effective in cluttered scenes than the HSV method. The HSV method, thresholding in the HSV color space, detected Jenga blocks more reliably across varying lighting conditions. A sketch of the normalization test follows below.
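
In this sketch, the background color and the similarity threshold are illustrative assumptions, not measured values.

```python
# Sketch of RGB normalization and cosine-similarity background masking.
import cv2
import numpy as np

img = cv2.imread("cropped.png").astype(np.float32)
# Normalize each pixel's (B, G, R) vector to unit length to cancel lighting.
unit = img / (np.linalg.norm(img, axis=2, keepdims=True) + 1e-6)

BG_COLOR = np.array([200.0, 190.0, 180.0], dtype=np.float32)  # hypothetical (B, G, R)
bg_unit = BG_COLOR / np.linalg.norm(BG_COLOR)

cos_sim = unit @ bg_unit                      # cosine similarity per pixel
block_mask = (cos_sim < 0.98).astype(np.uint8) * 255   # keep pixels unlike background
cv2.imwrite("block_mask.png", block_mask)
```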

Normalized Image | HSV Image
:-------------------------:|:-------------------------:
![Norm](assets/norm_img.png) | ![HSV](assets/hsv_img.png)


## References
- [MIT Jenga Robot](https://news.mit.edu/2019/robot-jenga-0130)




