
[NVIDIA] Getting Started with Jetson Orin Containers

쉬고싶은 거북이 2025. 3. 19. 18:13

 

1. Getting Started with Jetson Containers

Method 1) Starting from a bare base container (setup)

 

nvidia/l4t-base:r32.4.3  

sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r32.4.3
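
Here, --runtime nvidia exposes the Jetson's GPU inside the container, and the DISPLAY/X11 mounts let GUI apps draw on the host's display. Note that the r32.4.3 tag corresponds to JetPack 4.x; on an Orin (JetPack 5/6) you would normally pull an r35.x/r36.x l4t-base tag that matches your installed L4T version. Once inside, a quick sanity check is to build and run one of the CUDA samples, as the tutorial linked below does. A minimal sketch, assuming an r32-era image where the host's CUDA samples appear under /usr/local/cuda/samples (paths can differ on newer releases):

# Install a compiler, copy the CUDA samples to a writable location, and run deviceQuery
apt-get update && apt-get install -y --no-install-recommends make g++
cp -r /usr/local/cuda/samples /tmp/samples
cd /tmp/samples/1_Utilities/deviceQuery
make
./deviceQuery   # should list the integrated Tegra GPU if the NVIDIA runtime is wired up correctly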

 

https://developer.nvidia.com/embedded/learn/tutorials/jetson-container

 


 

 

Method 2) Starting from the inference container (learning)

 

dustynv/jetson-inference:r35.4.1

git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference
cd jetson-inference
docker/run.sh
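
docker/run.sh detects your JetPack-L4T version, pulls the matching dustynv/jetson-inference image, and mounts the project directories into the container. From there you can try one of the Hello AI World examples; a minimal sketch using file names from that tutorial (verify they exist in your checkout):

# Inside the container: classify a sample image with a pretrained ImageNet model
cd build/aarch64/bin
./imagenet.py images/orange_0.jpg images/test/output_0.jpg   # output lands in images/test/, which is mounted from the host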

 

https://github.com/dusty-nv/jetson-inference

 


 

 

Method 3) Starting with modular containers (usage)

git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh               # installs the jetson-containers CLI tools onto PATH
jetson-containers run dustynv/comfyui:r36.3.0   # example
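
Beyond running a specific prebuilt image, the CLI can also resolve image tags and build custom images. A sketch based on the commands documented in the repo's README (the package names here are just examples):

jetson-containers run $(autotag l4t-pytorch)                        # autotag picks a tag matching your L4T/JetPack version
jetson-containers build --name=my_container pytorch transformers   # build a custom image that combines packages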

 

https://github.com/dusty-nv/jetson-containers

 


 

The jetson-containers project provides prebuilt, composable container packages in the following families:

ML (Machine Learning): PyTorch, TensorFlow, JAX, ONNX Runtime, DeepStream, HoloScan, CTranslate2, JupyterLab
LLM (Large Language Models): SGLang, vLLM, MLC, AWQ, Transformers, text-generation-webui, Ollama, llama.cpp, llama-factory, exllama, AutoGPTQ, FlashAttention, DeepSpeed, bitsandbytes, xformers
VLM (Vision-Language Models): llava, llama-vision, VILA, LITA, NanoLLM, ShapeLLM, Prismatic, xtuner
ViT (Vision Transformers): NanoOWL, NanoSAM, Segment Anything (SAM), Track Anything (TAM), clip_trt
RAG (Retrieval-Augmented Generation): llama-index, langchain, jetson-copilot, NanoDB, FAISS, RAFT
L4T (Linux for Tegra): l4t-pytorch, l4t-tensorflow, l4t-ml, l4t-diffusion, l4t-text-generation
CUDA (NVIDIA GPU Computing): CuPy, cuda-python, PyCUDA, Numba, OpenCV:CUDA, cuDF, cuML
Robotics: Cosmos, Genesis, ROS, LeRobot, OpenVLA, 3D Diffusion Policy, Crossformer, MimicGen, OpenDroneMap, ZED
Graphics: stable-diffusion-webui, ComfyUI, Nerfstudio, MeshLab, PixSFM, Gsplat
Mamba (State Space Models): Mamba, MambaVision, Cobra, Dimba, VideoMambaSuite
Speech Processing: Whisper, whisper_trt, Piper, Riva, Audiocraft, Voicecraft, XTTS
Home/IoT: homeassistant-core, wyoming-whisper, wyoming-openwakeword, wyoming-piper