Embedding
Embeddings are vector representations of text or other data used in machine learning and language processing applications. They convert words, sentences, or documents into numerical vectors in a high-dimensional space, making the meaning of the data easier for machines to compare and process. Here are some common uses of embeddings:
Search works by converting each document and query into vectors. Search results are ranked based on the similarity of embedding vectors between the query and documents. The closer the vectors, the more relevant the result.
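A minimal sketch of this ranking step, assuming cosine similarity as the distance measure (the small 3-dimensional vectors are invented for illustration; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.0]
documents = {
    "doc_a": [0.8, 0.2, 0.1],  # close to the query vector
    "doc_b": [0.0, 0.1, 0.9],  # far from the query vector
}

# Rank documents by similarity to the query, most relevant first.
ranked = sorted(documents,
                key=lambda d: cosine_similarity(query, documents[d]),
                reverse=True)
print(ranked)  # doc_a ranks first
```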
Clustering works by grouping sentences converted into embedding vectors using clustering algorithms such as K-Means. Texts with similar embedding vectors are grouped together.
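The grouping step can be sketched with a minimal K-Means loop written from scratch; in practice a library implementation such as scikit-learn's KMeans would be used. The 2-D points stand in for high-dimensional embedding vectors:

```python
import math

def kmeans(points, centroids, iterations=10):
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            distances = [math.dist(p, c) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [
            [sum(dim) / len(cluster) for dim in zip(*cluster)] if cluster else c
            for cluster, c in zip(clusters, centroids)
        ]
    return centroids, clusters

points = [[0.1, 0.0], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
centroids, clusters = kmeans(points, centroids=[[0.0, 0.0], [1.0, 1.0]])
print(clusters)  # the two nearby pairs land in separate clusters
```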
Recommendation works by converting item descriptions into embedding vectors. When a user shows interest in an item, the system searches for other items with similar embedding vectors to recommend.
Anomaly Detection works by converting sentences into vectors; vectors that differ significantly from the majority of other vectors are flagged as anomalies.
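One simple way to operationalize this, sketched below under the assumption that "significantly different" means far from the mean vector of the collection (the vectors and threshold are invented for illustration):

```python
import math

vectors = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15], [0.9, 0.95]]

# Mean vector of the whole collection.
mean = [sum(dim) / len(vectors) for dim in zip(*vectors)]

def is_anomaly(v, threshold=0.5):
    # Flag vectors whose distance from the mean exceeds the threshold.
    return math.dist(v, mean) > threshold

anomalies = [v for v in vectors if is_anomaly(v)]
print(anomalies)  # only the outlying vector is flagged
```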
Diversity Measurement works by obtaining vectors from sentences to analyze how diverse the sentences are in vector space, which can be measured by looking at the distribution of distances between vectors.
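The distance-distribution idea can be sketched by comparing the mean pairwise distance of a tight set of vectors against a spread-out one; a tighter cluster means less diversity (the vectors are invented for illustration):

```python
import math
from itertools import combinations

def mean_pairwise_distance(vectors):
    # Average Euclidean distance over every unordered pair of vectors.
    pairs = list(combinations(vectors, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

similar_set = [[0.1, 0.1], [0.12, 0.1], [0.1, 0.12]]  # near-duplicates
diverse_set = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]    # spread out

print(mean_pairwise_distance(similar_set) < mean_pairwise_distance(diverse_set))  # True
```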
Classification works by comparing sentence vectors with vectors of existing labels and classifying the sentence into the category with the most similar vector.
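A minimal sketch of that comparison, assuming one representative vector per label and picking the label with the highest cosine similarity (labels and vectors are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

label_vectors = {
    "sports":  [0.9, 0.1],
    "finance": [0.1, 0.9],
}

def classify(sentence_vector):
    # Pick the label whose vector is most similar to the sentence vector.
    return max(label_vectors,
               key=lambda l: cosine_similarity(sentence_vector, label_vectors[l]))

print(classify([0.8, 0.2]))  # sports
```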
This endpoint uses the POST method: request data is sent to the server for processing. At this time, only one model, baai/bge-multilingual-gemma2, can be used for Embeddings. Here is the endpoint you can send requests to:
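A hedged sketch of such a POST call using only the standard library. The URL, header names, and request fields below are assumptions for illustration — only the model name baai/bge-multilingual-gemma2 comes from this documentation; consult the provider's API reference for the actual endpoint and schema:

```python
import json
import urllib.request

MODEL = "baai/bge-multilingual-gemma2"  # the only model currently supported

def build_payload(text):
    # Request body shape is an assumption, not the documented schema.
    return {"model": MODEL, "input": text}

def get_embedding(text, api_key, url="https://api.example.com/v1/embeddings"):
    # "api.example.com" is a placeholder, not the real endpoint.
    data = json.dumps(build_payload(text)).encode("utf-8")
    request = urllib.request.Request(
        url,
        data=data,  # supplying a body makes this a POST request
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```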
The following shows the response returned for such a request.