After successfully downloading the Helm chart, the next step is to extract the archive by running the following command.
tar -xvzf nim-llm-1.3.0.tgz
Enter the extracted folder by running the command below.
cd nim-llm
Change the security context in the values.yaml file
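Open values.yaml with a text editor, for example nano (the same editor the CTRL + O / CTRL + X shortcuts later in this guide assume):

nano values.yaml

For reference, the chart's default pod security context looks roughly like the block below; the exact key names and IDs can vary between chart versions, so verify against your own values.yaml and adjust the IDs to match your cluster policy:

podSecurityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000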
Determine the base model to use; this guide uses Meta Llama 3 8B.
[Image: adding the model link in values.yaml]
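As a rough sketch, the model link goes into the chart's image block in values.yaml; the repository below is the catalog path for Meta Llama 3 8B, and the tag is only an illustration, so use whatever the API catalog shows:

image:
  repository: nvcr.io/nim/meta/llama3-8b-instruct
  tag: "1.0.3"   # example tag; copy the tag shown in the API catalog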
If you want to use another model, refer to the NVIDIA web page. The following are the steps for getting the URL of the model you want to use.
1. On the NVIDIA website, enter the name of the model you want to use in the Search API Catalog field.
2. The search results show several models; select the one you want to use.
3. The selected model's page includes an Experience tab.
4. On the right side of the Experience tab there are several options: Python, LangChain, Node, Shell, and Docker. Select Docker.
5. In the Docker section, under "Pull and run the NVIDIA NIM with the command below. This will download the optimized model for your infrastructure.", the bottom line of the command contains the model URL. Copy that URL (an example follows these steps).
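For example, for Meta Llama 3 8B the command on the Docker tab ends with an image reference similar to the one below; that reference is the model URL to copy (the tag in the catalog may differ):

nvcr.io/nim/meta/llama3-8b-instruct:latest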
Edit the GPU count based on your needs.
Put your API key in the ngcAPIKey line.
Edit the storage size based on your needs.
[Image: editing the storage settings in values.yaml]
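Taken together, the three edits above look roughly like this in values.yaml; the key names follow the nim-llm chart, but double-check them against your chart version:

resources:
  limits:
    nvidia.com/gpu: 1                 # number of GPUs to allocate to the pod
model:
  ngcAPIKey: "<your NGC API key>"     # paste your NGC API key here
persistence:
  enabled: true
  size: 50Gi                          # enough space to cache the model weights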
After updating, save the file: press CTRL + O to save and CTRL + X to exit.
Add NVIDIA Runtime
Open the deployment.yaml file in the files folder by running the following command.
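For example, using nano:

nano files/deployment.yaml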
After the deployment.yaml file opens, look for the spec section and add runtimeClassName: nvidia, as shown in the image below.
[Image: adding runtimeClassName to deployment.yaml]
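In text form, the change looks roughly like this inside the pod spec; everything else stays as generated by the chart:

spec:
  runtimeClassName: nvidia   # added line
  # ...the rest of the spec is unchanged...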
After updating, save the file: press CTRL + O to save and CTRL + X to exit.
Install Helm Chart
Install the Helm chart by running the following command.
[Image: installing the Helm chart]
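A typical invocation from inside the chart directory looks like the command below; the release name nim-llm and the namespace nim are assumptions here, so substitute your own:

helm install nim-llm . -n nim --create-namespace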
Wait until the installation process is complete, then run the following command to see the list of available pods.
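Assuming the nim namespace from the install step:

kubectl get pods -n nim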
Testing
When finished, run the following command to display the list of NIM services in the namespace.
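Again assuming the nim namespace:

kubectl get svc -n nim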
In the output, you can see that nim-llm uses the cluster IP 10.250.225.0 and its status is active. Run the following command to open a shell in a container running inside a pod in Kubernetes.
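A typical invocation looks like the line below; <nim-llm-pod-name> is a placeholder for the actual pod name reported by kubectl get pods, and the nim namespace is the same assumption as above:

kubectl exec -it <nim-llm-pod-name> -n nim -- bash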