To generate vector embeddings, you can use a built-in model and select a vector embedding
technique, or you can connect to your own model.
On the Vector Embedding tab, you can use one of the following options:
Use the built-in model
If you use a built-in model, you can select one of the available vector
embedding techniques. For information about each technique, see Built-in vector embedding techniques.
Connect to your own model
To connect to your own model on a platform such as Azure OpenAI, select or
create a Large Language Model connection. Then, specify the number of dimensions
in the output vector. You can choose a value from the drop-down list, or you can type
a number.
For more information about Large Language Model connections, see the
Administrator help.
Make sure that the model is deployed in the same region as the advanced cluster
to reduce cross-region data transfer costs.
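For example, the following is a minimal sketch of the kind of call that an external embedding model serves, assuming the openai Python package and an Azure OpenAI deployment of a text-embedding-3 model. The endpoint, API version, deployment name, and dimension count are placeholder values, not settings read from the connection:

```python
import os

from openai import AzureOpenAI

# Placeholder connection details. In a mapping, these come from the
# Large Language Model connection rather than environment variables.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # example API version
)

# "text-embedding-3-small" and 1536 are example values. The dimension count
# you select in the transformation must be one that the model supports.
response = client.embeddings.create(
    model="text-embedding-3-small",  # Azure deployment name
    input=["Example chunk of text to embed."],
    dimensions=1536,
)

vector = response.data[0].embedding
print(len(vector))  # 1536, matching the selected number of dimensions
```

In general, the number of dimensions that you select must be one that the deployed model supports and must match the dimensionality of the target vector index.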
Consider the following rules and guidelines for vector embeddings:
Vector embeddings created by
different embedding models can't be compared, even if they have the same number of dimensions.
If you switch between embedding models, rerun the mapping, including all Source,
Chunking, Vector Embedding, and Target transformations, to create embeddings for all
documents using the new model.
The Vector Embedding transformation is a passive transformation that produces one
output row for each input row. As a result, input columns that contain null or empty
strings produce an empty output vector rather than being filtered out. If the vector
is empty, vector databases like Pinecone might drop the row.
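As a minimal sketch of guarding against this outside of the mapping, assuming the pinecone Python client, you might filter out rows with empty vectors before writing them. The index name, API key, IDs, and values below are placeholders, not output produced by the transformation:

```python
from pinecone import Pinecone

# Placeholder credentials and index name.
pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("document-chunks")

# One row per input row, so a null or empty chunk still appears,
# but with an empty vector.
rows = [
    {"id": "chunk-1", "values": [0.12, 0.08, 0.33], "metadata": {"source": "doc-1"}},
    {"id": "chunk-2", "values": [], "metadata": {"source": "doc-1"}},  # empty input text
]

# Drop rows with empty vectors so the write doesn't fail or silently lose data.
non_empty = [row for row in rows if row["values"]]
index.upsert(vectors=non_empty)
```

Alternatively, you can filter out null or empty text upstream of the Vector Embedding transformation so that every row that reaches the target has a non-empty vector.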