Server Environment Configuration

The project, including model training, inference, and animal behavior analysis, can run smoothly only after the server environment has been configured. Paper Figure Reproduction does not require the server environment; only the local environment configuration is needed.

Server Environment Configuration Notice

The deep learning environment is the core foundation of this project. We understand that environment setup can be a complex and challenging process, which is a common problem throughout the AI field. Although we are continuously optimizing the deployment process, the current project still has some version compatibility issues. We recommend being patient when configuring the environment and being prepared to troubleshoot the issues that may arise.

Quick Experience in Playground

You can use the Playground for demonstrations of video analysis and processing. The Playground is based on Docker and packages most of the deep learning environment so that users can try the project quickly.
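
As a rough sketch of how a Docker-based Playground image is usually launched (the image name below is a placeholder, not the project's actual image; GPU access inside the container requires the NVIDIA Container Toolkit):

# Placeholder image name; replace <playground-image> with the image provided by the project
$ docker pull <playground-image>
$ docker run --gpus all -it --rm -v /path/to/data:/workspace/data <playground-image> /bin/bash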


Hardware Configuration

Server hardware configuration includes the following items (commands to verify them are shown after the list):

  • CPU: Intel Xeon 4314
  • Memory: 256 GB
  • GPU: 4 NVIDIA GeForce RTX 3090 cards
  • Storage: 1 TB SSD + 8 TB HDD
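
To check whether your server offers a comparable configuration, the following standard Linux commands can be used:

$ lscpu | grep "Model name"   # CPU model
$ free -h                     # total memory
$ nvidia-smi -L               # list installed NVIDIA GPUs
$ df -h                       # storage capacity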

Software Environment

Software environment includes:

  • Operating System: Ubuntu 18.04
  • Deep Learning Framework: CUDA 11.4, cuDNN 8.8, TensorRT 8.6
  • Development Tools: Python 3.7.16, OpenCV-Python 4.8.0, TensorFlow 2.7.0, PyTorch 1.11.0

This environment is used for deep learning model training and data analysis.
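
To quickly confirm that the installed Python packages match the versions listed above, you can query each of them from the command line (a minimal check, assuming the packages are installed in the currently active environment):

$ python --version
$ python -c "import cv2; print('OpenCV', cv2.__version__)"
$ python -c "import tensorflow as tf; print('TensorFlow', tf.__version__, 'GPU:', tf.config.list_physical_devices('GPU'))"
$ python -c "import torch; print('PyTorch', torch.__version__, 'CUDA available:', torch.cuda.is_available())"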

Important Warning

  • The project currently supports only Ubuntu and NVIDIA graphics cards. For other systems, please consult the relevant materials, or use the Playground for a quick trial.

  • Since the software environment was configured in 2019, some software versions may be outdated. It is recommended to choose newer versions according to your actual needs. Most Python libraries are available through pip and conda, but a few may be difficult to install in a compatible way. This document cannot cover the latest version of every package; please refer to the relevant materials.

Dependent Software Versions

It is recommended to use the software versions referenced by the Playground (the 👍 column in the table below).

| Software Name | Host Test Version (2022) | Playground in Docker (👍) | 2025 Version (Untested) | Notes |
| --- | --- | --- | --- | --- |
| Ubuntu | 18.04 | 22.04 | 24.04 | Operating system, recommended |
| Python | 3.7.16 | 3.8.10 | 3.12 | Development language, required |
| CUDA | 11.4 | 12.9 | 12.9 | Deep learning framework, required |
| cuDNN | 8.8 | 9.12 | 9.12 | Deep learning framework, required |
| TensorRT | 8.6 | 8.6 | 10.13 | Inference acceleration, required |
| OpenCV-Python | 4.8.0 | 4.12.0 | 4.12.0 | Image processing, required |
| TensorFlow | 2.7.0 | 2.11.0 | 2.16.0 | DANNCE model, required |
| PyTorch | 1.11.0 | 1.12.1 | 2.7.1 | Deep learning framework, required |
| Docker | 24.0.6 | - | 28.3.0 | Containerization, for Mediamtx |
| FFmpeg | 4.x | 4.x | 6.x | Video reading and writing |
| Mediamtx | - | - | - | Video stream server, for closed-loop behavioral intervention |

Test Installation Environment

After all the software is installed, test it to confirm that the installation succeeded. The testing methods are as follows:

Test NVIDIA CUDA Driver

$ nvidia-smi

Test NVIDIA CUDA Compiler

$ nvcc --version  # Not available in the Playground

Test TensorRT Command Line Tools

$ trtexec --onnx=/path/to/model.onnx --saveEngine=/path/to/model.engine  # Not available in the Playground

Test polygraphy Command Line Tools (NVIDIA Official Tool)

$ polygraphy inspect model --onnx=/path/to/model.onnx

Test FFmpeg Command Line Tools

# Generate a 640x480 test video, 30 fps, 1 minute
$ ffmpeg -f lavfi -i testsrc=duration=60:size=640x480:rate=30 -c:v libx264 -pix_fmt yuv420p -f mp4 /path/to/test.mp4

# Convert it to HEVC
$ ffmpeg -i /path/to/test.mp4 -c:v hevc -pix_fmt yuv420p -f mp4 /path/to/test_hevc.mp4

Test Docker Command Line Tools

$ docker run hello-world
$ docker run --rm -it -e MTX_PROTOCOLS=tcp -p 8554:8554 -p 1935:1935 bluenviron/mediamtx
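
With the Mediamtx container running, you can verify the stream server end to end by publishing the FFmpeg test video over RTSP and then playing it back (a sketch; the stream path "teststream" is arbitrary):

# Publish the test video to Mediamtx over RTSP (loop it so the stream keeps running)
$ ffmpeg -re -stream_loop -1 -i /path/to/test.mp4 -c copy -f rtsp rtsp://localhost:8554/teststream

# In another terminal, play the stream back
$ ffplay rtsp://localhost:8554/teststream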

Third-party Deep Learning Toolkits

Installed deep learning models:

| Model (conda) | Purpose | Project Path (Playground) |
| --- | --- | --- |
| Mask-RCNN | Rat segmentation | ~/mmdet-pkg |
| mmpose | 2D keypoint detection | ~/mmpose-pkg |
| YOLO-v8 | Real-time rat segmentation | ~/YOLOv8-pkg |
| DANNCE | 3D keypoint detection for rats | ~/dannce-pkg |
| SmoothNet | Keypoint temporal smoothing | - |

1. OPEN-MMLAB/MMDETECTION Model

This is an open-source project that provides Mask R-CNN for object detection and segmentation. We forked it (in 2022) and added custom models and configuration files to achieve multi-rat segmentation.

Please refer to MMDetection's official documentation for installation instructions.

  • Code address: https://github.com/chenxinfeng4/mmdetection.git

Important Warning

We forked the MMDetection 1.x version, which supports Python 3.7; the latest official MMDetection is the 3.x version, which may have serious compatibility issues. Please choose the appropriate version according to your actual needs.
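
For reference, installing a forked OpenMMLab repository typically follows the pattern below. This is only a sketch: the exact PyTorch and mmcv versions depend on the MMDetection branch you choose, so check the fork's requirements first.

$ conda create -n mmdet python=3.7 -y
$ conda activate mmdet
$ pip install torch torchvision     # pick versions matching your CUDA installation
$ pip install mmcv                  # some branches require mmcv-full instead
$ git clone https://github.com/chenxinfeng4/mmdetection.git
$ cd mmdetection
$ pip install -e .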

2. OPEN-MMLAB/MMPOSE Model

This is an open-source project for human pose keypoint detection. We forked it (in 2022) and added custom models and configuration files to achieve 2D keypoint detection, which in our project is used for ball detection.

Please refer to MMPose's official documentation for installation instructions.

  • Code address: https://github.com/chenxinfeng4/mmpose.git

Important Warning

We forked the MMPose 0.x version; newer versions and environments may have compatibility issues. Please choose the appropriate version according to your actual needs.

3. YOLO-v8 Model

This is a lightweight open-source project for object detection and instance segmentation. We forked it (in 2024) and added custom models and configuration files to achieve real-time multi-rat segmentation.

  • Code address: https://github.com/chenxinfeng4/ultralytics.git

Important Warning

Our forked version may have compatibility issues with the latest version. Please choose the appropriate version according to your actual needs.
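
As an illustration of how the Ultralytics tooling is typically invoked for instance segmentation (a sketch using a stock pretrained model rather than the project's custom rat model; the yolo command is the standard Ultralytics CLI):

# Run instance segmentation on a sample image with a stock pretrained model
$ yolo segment predict model=yolov8n-seg.pt source=/path/to/image.jpg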

4. DANNCE Model

This is a library for multi-view animal pose estimation. We forked it (in 2022) and added custom models and configuration files to achieve real-time multi-rat 3D keypoint recognition.

  • Code address: https://github.com/chenxinfeng4/dannce.git

Important Warning

Our forked version may have compatibility issues with the latest version. In addition, the original library has high prediction accuracy but is slow; we optimized it for speed, which introduced some differences from the original code and additional compatibility issues. Installation is relatively involved, and we are improving the installation documentation.

5. SmoothNet Model

This is a library for temporal smoothing of pose sequences. We forked it (in 2022) and added custom models and configuration files to achieve multi-rat 3D keypoint smoothing.

  • Code address: https://github.com/chenxinfeng4/SmoothNet.git

Important Warning

Our forked version may have compatibility issues with the latest version. Please choose the appropriate version according to your actual needs.