Use a vector embedding technique to create a vector embedding for input text. Choose the
technique based on the pre-trained model that you want to use to convert the text to a
vector.
A vector embedding represents the text as an array of numbers. Each element in the array
captures a different dimension, or feature, of the text's meaning. To create vector embeddings, select an
input column for embedding and then select one of the following vector embedding
techniques:
Word embedding
Convert each word to a vector using the Word2Vec Gigaword 5th Edition model with
300 dimensions (word2vec_giga_300). Useful for text classification and sentiment
analysis.
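The idea behind word embedding can be sketched in a few lines: each word is looked up in a pre-trained table of vectors, and the per-word vectors are combined (here by averaging) into one vector for the whole input. The vocabulary below uses random stand-in vectors, not the real word2vec_giga_300 weights; the 300-dimension size matches the model described above.

```python
import numpy as np

# Hypothetical toy vocabulary. A real model such as word2vec_giga_300 maps
# each word to a 300-dimensional vector learned from the Gigaword corpus.
EMBED_DIM = 300
rng = np.random.default_rng(seed=0)
vocab = {w: rng.normal(size=EMBED_DIM) for w in ["the", "movie", "was", "great"]}

def embed_words(text: str) -> np.ndarray:
    """Look up a vector for each known word, then average them into one vector."""
    vectors = [vocab[w] for w in text.lower().split() if w in vocab]
    if not vectors:
        # No known words: return an empty vector.
        return np.empty(0)
    return np.mean(vectors, axis=0)

print(embed_words("The movie was great").shape)  # (300,)
```

Averaging is only one possible pooling strategy; it keeps the output dimension fixed regardless of how many words the input contains.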
BERT embedding
Convert each sentence to a vector using the Smaller BERT Embedding
(L-2_H-768_A-12) model with 768 dimensions (small_bert_L2_768). Useful for text
classification and semantic search.
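A BERT-style model works at the sentence level: it emits one vector per token, and a pooling step collapses those token vectors into a single fixed-size sentence vector. The sketch below uses random stand-in token vectors rather than real small_bert_L2_768 outputs; only the 768-dimension hidden size is taken from the model described above.

```python
import numpy as np

# small_bert_L2_768 produces a 768-dimensional vector per token;
# mean pooling collapses them into one sentence embedding.
HIDDEN_DIM = 768

def mean_pool(token_vectors: np.ndarray) -> np.ndarray:
    """Average per-token vectors into one fixed-size sentence embedding."""
    return token_vectors.mean(axis=0)

rng = np.random.default_rng(seed=1)
tokens = rng.normal(size=(6, HIDDEN_DIM))  # stand-in for 6 token vectors
sentence_vector = mean_pool(tokens)
print(sentence_vector.shape)  # (768,)
```

Because the pooled vector has a fixed size, sentences of any length can be compared or indexed uniformly, which is what makes this useful for semantic search.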
Consider the following rules and guidelines for vector embeddings:
Vector embeddings created by different embedding models can't be compared, even if they have the same number of dimensions.
If you switch between embedding models, rerun the mapping, including all Source,
Chunking, Vector Embedding, and Target transformations, to create embeddings for all
documents using the new model.
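The incomparability of embeddings from different models can be illustrated with two hypothetical "models" built as random projections: both produce vectors of the same size, but the axes of the two output spaces mean different things, so a similarity score computed across them is arbitrary.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two stand-in "models": different random projections from 10 input
# features down to 5 dimensions.
rng = np.random.default_rng(seed=2)
model_a = rng.normal(size=(5, 10))
model_b = rng.normal(size=(5, 10))

text_features = rng.normal(size=10)
emb_a = model_a @ text_features
emb_b = model_b @ text_features

print(emb_a.shape == emb_b.shape)  # True: same dimensions
# Within one model's space, similarity is meaningful (a vector matches itself):
print(round(cosine(emb_a, emb_a), 1))  # 1.0
# Across the two spaces, the score is arbitrary and carries no meaning.
```

This is why switching models requires re-embedding every document: old and new vectors cannot coexist in one index.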
The Vector Embedding transformation is a passive transformation that produces one output row for each input row. Input columns that contain null or empty strings therefore produce an empty output vector rather than being dropped. Vector databases like Pinecone might drop rows whose vectors are empty.
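The null-handling behavior described above can be sketched as follows. The embedding function is a hypothetical stand-in; the point is that a passive transformation emits one row per input row, with null or empty input yielding an empty vector, so empty vectors should be filtered out before loading into a vector database.

```python
import numpy as np

def embed_stub(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model."""
    return np.ones(4)

def embed_row(text):
    # Passive transformation: every input row yields exactly one output row.
    # Null or empty input produces an empty vector instead of being dropped.
    if text is None or not text.strip():
        return np.empty(0)
    return embed_stub(text)

rows = ["hello world", None, "   "]
vectors = [embed_row(r) for r in rows]
print([v.size for v in vectors])  # [4, 0, 0]

# Filter out empty vectors before upserting, since a vector database
# such as Pinecone might drop or reject rows with empty vectors.
upsertable = [v for v in vectors if v.size > 0]
```

Filtering on the producer side makes the behavior explicit rather than relying on the target database to silently discard rows.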
The Vector Embedding transformation
can process only English text.