The original post left out a few critical steps, so not everything was documented properly. More importantly, some additional setup utilities either weren’t run or were poorly documented.
These are directions for building and running the Jetson Inference examples on the Jetson Xavier NX running JetPack 4.4 Developer Preview. Jetson Inference was originally released for the Jetson Nano. It’s been a while since I worked with these examples on that platform, and I did not document anything, as it all seemed to work back then.
What follows are tl;dr instructions that should get the applications built reliably.
First, install the following packages:
sudo apt install libglew-dev qt5-default
Then clone a copy of Jetson Inference. You can do this anywhere, but do it in a part of the file system where you have normal (non-root) access.
git clone https://github.com/dusty-nv/jetson-inference.git
This will produce a new folder jetson-inference. Change directory into jetson-inference and execute the following git command.
git submodule update --init
Create a build directory and change directory into it.
mkdir build
cd build
Execute CMake to create the necessary build infrastructure.
cmake ../
During the cmake step, two scripts run that display ASCII menus in the terminal. The first is the Model Downloader. This is how it first appears.
Only two of the models are selected by default.
I selected all of them and downloaded everything, because I’m working off the Western Digital Black SSD I installed and documented two posts back (see “adding a western digital wd_black 250gb nvme drive to jetson xavier nx“).
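If you skip some models now, you can come back for them later. In my checkout of the repo, the downloader script lives under tools/; verify the path in your own copy before relying on it:

```shell
# Re-run the Model Downloader menu later, from the repo root.
# tools/download-models.sh is the path in my checkout of jetson-inference;
# confirm it exists in yours.
cd jetson-inference/tools
./download-models.sh
```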
After selecting all the models, a second ASCII menu is presented to install PyTorch.
I personally skipped this one because I want to install PyTorch using my Python 3.8.3 virtual environment, which I also built for this NX a few posts back. Once I figure out how to install PyTorch for Python 3.8.3, I’ll revisit and document that specific step.
Once CMake is finished, execute make within the build folder.
make -j $(nproc)
Assuming you have all six NX cores enabled and are building on an SSD (not the boot SDXC card), it should build fairly quickly.
Now for the running part. As a very basic smoke test, attach a Raspberry Pi Camera Module V2 to the CAM0 port. That port is on the edge of the bottom circuit board right above the power barrel connector. You might want to power down the NX when you do this.
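Before launching the demo, it can help to confirm the camera is detected at all. This check is my own addition, not part of the original instructions, and assumes the v4l-utils package is installed:

```shell
# List video capture devices; an attached CSI camera should appear
# as a /dev/video* entry.
# v4l2-ctl comes from the v4l-utils package: sudo apt install v4l-utils
v4l2-ctl --list-devices
```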
When the system comes back up, change back to the jetson-inference/build folder and run:
./aarch64/bin/camera-capture
You should see something on your desktop similar to the following screen capture.
While running, the application takes over the entire desktop. A small control window lets you set up the application for other tasks; if you close that window, the entire application exits.
I’m in the process of testing other features found in the Jetson Inference examples on the NX and will report anything untoward or unusual when time permits. My honeydew list is calling…