Deploying an LLM with Deka GPU and NVIDIA NIM
Last updated
Prerequisites
Contact dekagpu.support@lintasarta.co.id to obtain a license for NVIDIA AI Enterprise.
You need an NGC API key; click this link to generate one.
Contact dekagpu.support@lintasarta.co.id to obtain root (privileged) access.
After you have fulfilled the prerequisites above, the next step is to download the Helm chart by running the following command.
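The original command block is not shown here; a typical fetch of the NIM LLM chart from the NGC Helm registry looks like the sketch below. The chart version (1.1.2) is an example and may differ for your deployment.

```shell
# Fetch the NIM LLM Helm chart from NGC.
# NGC_API_KEY must hold your NGC API key; the literal string '$oauthtoken'
# is the username NGC expects. The chart version is an example.
helm fetch https://helm.ngc.nvidia.com/nim/charts/nim-llm-1.1.2.tgz \
  --username='$oauthtoken' --password=$NGC_API_KEY
```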
After the Helm chart has downloaded, extract the archive by running the following command.
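A sketch of the extraction step, assuming the chart version used in the download example above:

```shell
# Extract the downloaded chart archive (filename assumes chart version 1.1.2).
tar -xvf nim-llm-1.1.2.tgz
```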
Enter the extracted folder by running the command below.
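The directory name below is an assumption based on the chart name:

```shell
# Change into the extracted chart directory.
cd nim-llm
```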
Change the security context in the values.yaml file
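The original snippet is not shown; the file can be opened with any editor, and the exact security-context values depend on your cluster. A sketch, assuming nano and a chart that exposes a `podSecurityContext` block (the user/group IDs are placeholders):

```shell
# Open the chart's values file for editing.
nano values.yaml
```

```yaml
# Example values.yaml fragment -- adjust the IDs to match your environment.
podSecurityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000
```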
Determine the base model to use; this guide uses Meta Llama 3 8B.
If you want to use a different model, refer to the NVIDIA web page. The following steps show how to get the URL of the model to be used.
On the NVIDIA website, enter the name of the desired model in the Search API Catalog field.
The Search API Catalog field displays several matching models; select the model you want to use.
The selected model's page has an Experience tab.
On the right side of the Experience tab there are several options: Python, LangChain, Node, Shell, and Docker. Select Docker.
In the Docker section, under "Pull and run the NVIDIA NIM with the command below. This will download the optimized model for your infrastructure.", the model URL appears in the bottom row of the command. Copy that URL.
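The copied URL goes into the chart's values.yaml. A sketch, assuming the standard nim-llm chart layout and the Llama 3 8B Instruct image; the repository and tag below are examples and will differ for other models:

```yaml
image:
  # Model URL copied from the API Catalog's Docker tab (example value).
  repository: nvcr.io/nim/meta/llama3-8b-instruct
  tag: "1.0.0"
```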
Edit the GPU allocation based on your needs.
Put your API key in the "ngcAPIKey" line.
Edit the storage size based on your needs.
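A sketch of the three values.yaml fragments involved; the field names assume the nim-llm chart, and the sizes are placeholders to adjust:

```yaml
# Example values.yaml fragments -- adjust to your needs.
resources:
  limits:
    nvidia.com/gpu: 1        # number of GPUs allocated to the pod

ngcAPIKey: "<your NGC API key>"

persistence:
  size: 50Gi                 # storage for the downloaded model weights
```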
After updating, save the file with CTRL + O and exit with CTRL + X.
Open the deployment.yaml file in the files folder by running the following command.
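A sketch of the command, assuming nano and the folder name mentioned above:

```shell
# Open the deployment manifest for editing (path is an assumption).
nano files/deployment.yaml
```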
After the deployment.yaml file opens, find the spec section and add runtimeClassName: nvidia under it.
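A sketch of where the line goes, assuming a standard Kubernetes Deployment pod spec:

```yaml
# Excerpt from deployment.yaml: add runtimeClassName under the pod spec.
spec:
  template:
    spec:
      runtimeClassName: nvidia   # run the pod with the NVIDIA container runtime
```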
Save the file after updating, press CTRL + O to save, and CTRL + X to exit.
Install the Helm chart by running the following command.
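The release and namespace names below are assumptions; "nim" matches the namespace referenced in the later steps:

```shell
# Install the chart from the current directory into a dedicated namespace.
helm install nim-llm . -n nim --create-namespace
```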
Wait until the installation process is complete, then run the following command to see the list of available pods.
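A sketch, assuming the chart was installed into the "nim" namespace:

```shell
# List the pods created by the release.
kubectl get pods -n nim
```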
When finished, run the following command to display the list of NIM services in the namespace.
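Again assuming the "nim" namespace:

```shell
# List the services exposed by the release.
kubectl get svc -n nim
```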
In the service list, you can see that nim-llm uses the cluster IP 10.250.225.0 and its status is active. Run the following command to open a shell inside a container running in a pod in Kubernetes.
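The pod name below is a placeholder; substitute the actual name from the pod list shown by `kubectl get pods`:

```shell
# Open an interactive shell inside the nim-llm container.
kubectl exec -it <nim-llm-pod-name> -n nim -- /bin/bash
```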