API Integration Preview
API Integration Preview is a feature of the Service Portal AI that displays an auto-generated code snippet for each prompt you submit to the model through the Interactive Chat panel. Every time you test a prompt in the Playground Tab, Deka LLM automatically generates a ready-to-use API call example.

cURL
cURL (Client URL) is a command-line tool used to send requests to servers using various protocols such as HTTP, HTTPS, and FTP. Below is an example of a cURL command generated when you input a prompt.
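The explanation that follows refers to a command of this shape (a sketch reconstructed from the lines discussed below; YOUR_API_KEY is a placeholder for the key from the API Key field, and the empty "content" string is where your prompt appears):

```shell
# Request a chat completion from the Deka LLM endpoint.
curl https://dekallm.cloudeka.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta/llama-4-maverick-instruct",
    "messages": [{"role": "user", "content": ""}],
    "temperature": 0.6,
    "top_p": 0.7
  }'
```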

Explanation of the cURL Command:
This line curl https://dekallm.cloudeka.ai/v1/chat/completions \ defines the Deka LLM API endpoint used to request a chat completion.
This line -H "Authorization: Bearer YOUR_API_KEY" \ authenticates the API request by providing the API key in Bearer token format; the YOUR_API_KEY value corresponds to the API Key you entered in the API Key field.
This line -H "Content-Type: application/json" \ sets the request payload format to JSON.
This JSON object is sent via an HTTP POST request and contains:
"model": "meta/llama-4-maverick-instruct" specifies the LLM model to use in Deka LLM.
"messages": [{"role": "user", "content": ""}] is an array of messages, where "role": "user" represents you and "content": "" is your prompt.
"temperature": 0.6 controls the creativity or randomness of the response (a higher value gives a more creative response).
"top_p": 0.7 configures nucleus sampling to control the cumulative probability for token selection, affecting the randomness.
Python
Python is a programming language known for its simple syntax, readability, and support for imperative, functional, and object-oriented paradigms. Below is the example code auto-generated when you input a prompt.
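The explanation that follows refers to code of this shape (a sketch based on the OpenAI Python SDK; the model name and parameters are taken from the explanation below, the API key is a placeholder, and the base_url value is assumed from the cURL endpoint above):

```python
from openai import OpenAI

# Create a client pointed at the Deka LLM endpoint.
# base_url is assumed from the endpoint shown in the cURL section;
# replace the api_key placeholder with the key from the API Key field.
client = OpenAI(
    base_url="https://dekallm.cloudeka.ai/v1",
    api_key="YOUR_API_KEY",
)

# Send a chat completion request with your prompt as the user message.
completion = client.chat.completions.create(
    model="meta/llama-4-maverick-instruct",
    messages=[{"role": "user", "content": ""}],
    temperature=0.6,
    top_p=0.7,
)

# Print the model's response.
print(completion.choices[0].message.content)
```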

Explanation of the Python Code:
Imports the OpenAI class from the official OpenAI Python SDK, which provides an interface to interact with the LLM API.
This line creates a client instance to communicate with the Deka LLM API endpoint. There are two important parameters used:
base_url holds the URL of the Deka LLM endpoint, and api_key is used to authenticate requests and is taken from the API Key field.
This line client.chat.completions.create() is used to send a chat completion request. There are four important parameters used, namely:
model="meta/llama-4-maverick-instruct" specifies the LLM model to use in Deka LLM.
messages=[{"role": "user", "content": ""}] is an array of messages, where "role": "user" represents you and "content": "" is your prompt.
temperature=0.6 controls the creativity or randomness of the response (a higher value gives a more creative response).
top_p=0.7 configures nucleus sampling to control the cumulative probability for token selection, affecting the randomness.
This line print(completion.choices[0].message.content) prints the model's response to the message you sent.
Node.js
Node.js is a runtime environment for executing JavaScript code outside the browser. Below is an example Node.js code generated when you input a prompt.
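The explanation that follows refers to code of this shape (a sketch based on the official OpenAI Node.js SDK; the model name and parameters are taken from the explanation below, the API key is a placeholder, and the baseURL value is assumed from the cURL endpoint above):

```javascript
import OpenAI from "openai";

// Create a client pointed at the Deka LLM endpoint.
// baseURL is assumed from the endpoint shown in the cURL section;
// replace the apiKey placeholder with the key from the API Key field.
const openai = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://dekallm.cloudeka.ai/v1",
});

// Send a chat completion request with your prompt as the user message.
const chatCompletion = await openai.chat.completions.create({
  model: "meta/llama-4-maverick-instruct",
  messages: [{ role: "user", content: "" }],
  temperature: 0.6,
  top_p: 0.7,
});

// Print the model's response.
console.log(chatCompletion.choices[0].message.content);
```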

Explanation of the Node.js Code:
This line import OpenAI from "openai"; imports the OpenAI class from the official Node.js SDK, which allows you to interact with the Deka LLM API.
This line const openai = new OpenAI({...}); creates a client instance with the apiKey and baseURL configurations.
This line const chatCompletion = await openai.chat.completions.create({...}); sends a chat completion request to the model you are using in Deka LLM. There are four important parameters used, namely:
model: "meta/llama-4-maverick-instruct" specifies the LLM model to use in Deka LLM.
messages: [{ role: "user", content: "" }] is an array of messages, where role: "user" represents you and content: "" is your prompt.
temperature: 0.6 controls the creativity or randomness of the response (a higher value gives a more creative response).
top_p: 0.7 configures nucleus sampling to control the cumulative probability for token selection, affecting the randomness.
