diff --git a/autovec-structured/autovec_langchain.ipynb b/autovec-structured/autovec_langchain.ipynb new file mode 100644 index 00000000..89af2c2c --- /dev/null +++ b/autovec-structured/autovec_langchain.ipynb @@ -0,0 +1,434 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "502eb13e", + "metadata": { + "jp-MarkdownHeadingCollapsed": true + }, + "source": [ + "# Create and Deploy Operational Cluster on Capella\n", + "To get started with Couchbase Capella, create an account and use it to deploy a cluster.\n", + "\n", + "Make sure that you deploy a `Multi-node` cluster with the `data`, `index`, `query` and `eventing` services enabled. To know more, please follow the [instructions](https://docs.couchbase.com/cloud/get-started/create-account.html).\n", + " ## Couchbase Capella Configuration\n", + " When running Couchbase using [Capella](https://cloud.couchbase.com/sign-in), the following prerequisites need to be met:\n", + " * Create the [database credentials](https://docs.couchbase.com/cloud/clusters/manage-database-users.html) to access the travel-sample bucket (Read and Write) used in the application.\n", + " * [Allow access](https://docs.couchbase.com/cloud/clusters/allow-ip-address.html) to the Cluster from the IP on which the application is running." + ] + }, + { + "cell_type": "markdown", + "id": "4369c925-adbc-4c7d-9ea6-04ff020cb1a6", + "metadata": {}, + "source": [ + "# Data Upload and Preparation\n", + "\n", + "There are several ways to insert data into the cluster. For this tutorial we will import the sample data set named `travel-sample`. Please follow the steps below to import the data:\n", + "\n", + "* Go to Data Tools > Import.\n", + "\n", + "* Choose Load sample data.\n", + "\n", + "* Choose the sample named `travel-sample`.\n", + "\n", + "* Click Import.\n", + "\n", + "To learn more about the different ways you can import data into Capella, please visit the following [link](https://docs.couchbase.com/cloud/guides/load.html).\n", + "\n", + "After the data import is complete, follow the next steps to vectorize your required fields." + ] + }, + { + "cell_type": "markdown", + "id": "7e3afd3f-9949-4f5e-b96a-1aac1a3aea29", + "metadata": {}, + "source": [ + "# Deploying the Model\n", + "Before we create embeddings for the documents, we need to deploy a model that will generate the embeddings for us.\n", + "\n", + "> **⚠️ IMPORTANT:** The model **must** be deployed in the **same region** as your database cluster for workflows to function properly. Failing to match regions will prevent the workflow from working and may require cluster redeployment.\n", + "\n", + "## Selecting the Model \n", + "1. To select the model, first navigate to the \"AI Services\" tab, then select \"Models\" and click on \"Deploy New Model\".\n", + " \n", + " \n", + "\n", + "2. Enter the model name and choose the model that you want to deploy. After selecting your model, choose the model infrastructure and the region where the model will be deployed. **Ensure this matches your database cluster region.**\n", + " \n", + " \n", + "\n", + "## Access Control to the Model\n", + "\n", + "1. After deploying the model, go to the \"Models\" tab under AI Services and click on \"Setup Access\".\n", + "\n", + " \n", + "\n", + "2. Enter your API key name, expiration time and the IP address from which you will be accessing the model.\n", + "\n", + " \n", + "\n", + "3. 
Download your API key\n", + "\n", + " \n" + ] + }, + { + "cell_type": "markdown", + "id": "daaf6525-d4e6-45fb-8839-fc7c20081675", + "metadata": {}, + "source": [ + "## Deploying AutoVectorization Workflow\n", + "\n", + "Now, we are at the step that will create the embeddings/vectors. To proceed with the vectorization process, please follow the steps below. For more details, refer to the [data processing documentation](https://docs.couchbase.com/ai/build/vectorization-service/data-processing.html).\n", + "\n", + "1. To deploy the auto-vectorization workflow, go to the `AI Services` tab, click on `Workflows`, and then click on `Create New Workflow`.\n", + "\n", + " \n", + " \n", + "2. Start your workflow deployment by giving it a name and selecting where your data will be provided to the auto-vectorization service. There are currently 3 options: `pre-processed data (JSON format) from Capella`, `pre-processed data (JSON format) from external sources (S3 buckets)` and `unstructured data from external sources (S3 buckets)`. For this tutorial, we will choose the first option, which is pre-processed data from Capella.\n", + "\n", + " \n", + "\n", + "3. Now, select the `cluster`, `bucket`, `scope` and `collection` from which the documents will be read and vectorized.\n", + "\n", + " \n", + "\n", + "4. Field Mapping tells the AutoVectorize service which data will be converted to embeddings.\n", + "\n", + " For this tutorial, we use the Custom source fields approach to vectorize specific, semantically meaningful fields. This is more realistic than vectorizing all fields, as it focuses on content-rich fields that are relevant for semantic search.\n", + " \n", + " We select the following fields to be converted into a single vector with the name `vec_descr_review_state`:\n", + " - `description` - The hotel description\n", + " - `reviews` - Customer reviews\n", + " - `state` - The hotel's state\n", + "\n", + " \n", + " \n", + "5. After choosing the type of mapping, you can optionally create a vector index on the new vector embedding field. While vector search will work without an index using brute force, creating an index is **highly recommended** for better performance, especially with larger datasets.\n", + "\n", + " \n", + "\n", + "6. The screenshot below summarizes the whole mapping process described above. We will go ahead with the custom source field mappings for this tutorial; click `Next` afterwards.\n", + "\n", + " \n", + "\n", + "\n", + "7. Select the model that will be used to create the embeddings. There are two options for creating the embeddings: `capella based` and `external model`.\n", + " \n", + " \n", + "\n", + " - For this tutorial, the Capella-based embedding model is used, as shown in the image above. API credentials can be uploaded using the file downloaded during the model setup section, or they can be entered manually.\n", + " - You can choose between private and insecure networking.\n", + " - Clicking `Next` takes you to the final page of the workflow.\n", + "\n", + "\n", + "\n", + "8. `Workflow Summary` displays all the necessary details of the workflow, including `Data Source`, `Model Service` and `Billing Overview`, as shown in the image below.\n", + "\n", + " \n", + "\n", + "\n", + "\n", + "9. `Hurray! 
Workflow Deployed` Now, in the `Workflows` tab, we can see the deployed workflow and check the status of our workflow run.\n", + "\n", + " \n", + "\n", + "After this step, your vector embeddings for the selected fields should be ready. If you check your document schema, the new vector field should be present, as highlighted in the image below.\n", + " \n", + "\n", + "\n", + "\n", + "In the next step, we will demonstrate how we can use the generated vectors to perform vector search.\n" + ] + }, + { + "cell_type": "markdown", + "id": "e50204a4", + "metadata": {}, + "source": [ + "# Vector Search\n", + "\n", + "The following code cells implement semantic vector search against the embeddings generated by the AutoVectorization workflow. These searches are powered by **Couchbase's Query service using Hyperscale Vector Indexes**.\n", + "\n", + "Before you proceed, make sure the following packages are installed by running:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9d38e3de", + "metadata": { + "vscode": { + "languageId": "powershell" + } + }, + "outputs": [], + "source": [ + "!pip install langchain-couchbase==1.0.1 langchain-openai" + ] + }, + { + "cell_type": "markdown", + "id": "a1854af3", + "metadata": {}, + "source": [ + "**Required versions:**\n", + "- `langchain-couchbase = 1.0.1` (supports `CouchbaseQueryVectorStore`)\n", + "- `langchain-openai` (latest version)\n", + "\n", + "Now, execute the cells in order to run the vector similarity search.\n", + "\n", + "# Importing Required Packages\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "30955126-0053-4cec-9dec-e4c05a8de7c3", + "metadata": {}, + "outputs": [], + "source": [ + "from couchbase.cluster import Cluster\n", + "from couchbase.auth import PasswordAuthenticator\n", + "from couchbase.options import ClusterOptions\n", + "\n", + "from langchain_openai import OpenAIEmbeddings\n", + "from langchain_couchbase.vectorstores import CouchbaseQueryVectorStore\n", + "from langchain_couchbase.vectorstores import DistanceStrategy\n", + "\n", + "from datetime import timedelta" + ] + }, + { + "cell_type": "markdown", + "id": "e5be1f01", + "metadata": {}, + "source": [ + "# Cluster Connection Setup\n", + " - Defines the secure connection string, user credentials, and creates a `Cluster` object."
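, + "\n", + " - A hypothetical example of what the connection string copied from the Capella \"Connect\" tab typically looks like (it uses the TLS `couchbases://` scheme); copy the exact value from your own cluster:\n", + "\n", + " ```python\n", + " # Example format only (hypothetical cluster ID) -- use the connection string shown in your Capella UI\n", + " endpoint = \"couchbases://cb.xxxxxxxx.cloud.couchbase.com\"\n", + " ```"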
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7e4c9e8d", + "metadata": {}, + "outputs": [], + "source": [ + "# Replace with your Capella connection details\n", + "\n", + "endpoint = \"COUCHBASE_CAPELLA_ENDPOINT\" # Connection String\n", + "username = \"COUCHBASE_CAPELLA_USERNAME\" # Capella Username\n", + "password = \"COUCHBASE_CAPELLA_PASSWORD\" # Capella Password\n", + "\n", + "auth = PasswordAuthenticator(username, password)\n", + "options = ClusterOptions(auth)\n", + "\n", + "cluster = Cluster(endpoint, options)\n", + "cluster.wait_until_ready(timedelta(seconds=5))" + ] + }, + { + "cell_type": "markdown", + "id": "bbeb8a4f", + "metadata": {}, + "source": [ + "# Selection of Buckets / Scope / Collection / Index / Embedder\n", + " - Sets the bucket, scope, and collection where the documents (with vector fields) live.\n", + " - `embedder` instantiates the NVIDIA embedding model that will transform the user's natural language query into a vector at search time.\n", + " - `openai_api_key` is the API key created during model deployment.\n", + " - `openai_api_base` is the Capella model services endpoint found in the Models section.\n", + " - For more details, visit [OpenAIEmbeddings](https://docs.langchain.com/oss/python/integrations/text_embedding/openai).\n", + "\n", + "`Note that the Capella AI endpoint also requires a trailing /v1 appended to it if this is not already shown on the UI.`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "799b2efc", + "metadata": {}, + "outputs": [], + "source": [ + "bucket_name = \"travel-sample\"\n", + "scope_name = \"inventory\"\n", + "collection_name = \"hotel\"\n", + "\n", + "# Using the OpenAI SDK with Capella model services (compatible with OpenAIEmbeddings)\n", + "embedder = OpenAIEmbeddings(\n", + "    model=\"nvidia/llama-3.2-nv-embedqa-1b-v2\", # Query embedding model\n", + "    openai_api_key=\"COUCHBASE_CAPELLA_MODEL_API_KEY\", # Replace with your API key\n", + "    openai_api_base=\"COUCHBASE_CAPELLA_ENDPOINT/v1\", # Add /v1 to endpoint\n", + "    check_embedding_ctx_length=False,\n", + "    tiktoken_enabled=False,\n", + ")\n" + ] + },
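 + { + "cell_type": "markdown", + "id": "0a1b2c3d", + "metadata": {}, + "source": [ + "# Optional: Sanity-Check the Embedder and Stored Vectors\n", + "Before building the vector store, you can optionally confirm that the query embedder and the vectors written by the workflow line up. The sketch below is not part of the original workflow; it assumes the vector field name generated by the workflow (`hyperscale_autovec_workflow_vec_descr_review_state`) and simply compares dimensions:\n", + "\n", + "```python\n", + "# Hedged sanity check: embed a test query and compare its length with the\n", + "# length of one stored vector produced by the workflow (field name assumed).\n", + "test_vec = embedder.embed_query(\"test query\")\n", + "print(\"Query embedding dimension:\", len(test_vec))\n", + "\n", + "row = next(iter(cluster.query(\n", + "    \"SELECT ARRAY_LENGTH(h.hyperscale_autovec_workflow_vec_descr_review_state) AS dims \"\n", + "    \"FROM `travel-sample`.inventory.hotel h \"\n", + "    \"WHERE h.hyperscale_autovec_workflow_vec_descr_review_state IS NOT MISSING LIMIT 1\"\n", + ")))\n", + "print(\"Stored vector dimension:\", row[\"dims\"])\n", + "```\n", + "If the two numbers match, the embedder configured above is compatible with the vectors the workflow stored." + ] + },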
 + { + "cell_type": "markdown", + "id": "fda36710", + "metadata": {}, + "source": [ + "# VectorStore Construction\n", + " - Creates a [CouchbaseQueryVectorStore](https://couchbase-ecosystem.github.io/langchain-couchbase/langchain_couchbase.html#couchbase-query-vector-store) instance that interfaces with **Couchbase's Query service** to perform vector similarity searches using [Hyperscale/Composite](https://docs.couchbase.com/cloud/vector-index/use-vector-indexes.html) indexes.\n", + " - The vector store:\n", + "   * Knows where to read documents (`bucket/scope/collection`).\n", + "   * Knows the embedding field (the vector produced by the Auto-Vectorization workflow).\n", + "   * Uses the provided embedder to embed queries on-demand for similarity search.\n", + " - `text_key` specifies the primary field to display in results (we use `name` for hotel names).\n", + " - `embedding_key` specifies the vector field name that contains the embeddings (must match the field name from the workflow).\n" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "50b85f78", + "metadata": {}, + "outputs": [], + "source": [ + "vector_store = CouchbaseQueryVectorStore(\n", + "    cluster=cluster,\n", + "    bucket_name=bucket_name,\n", + "    scope_name=scope_name,\n", + "    collection_name=collection_name,\n", + "    embedding=embedder,\n", + "    text_key=\"name\", # Primary field to display (hotel name)\n", + "    embedding_key=\"hyperscale_autovec_workflow_vec_descr_review_state\", # Vector field from workflow\n", + "    distance_metric=DistanceStrategy.DOT\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "be207963", + "metadata": {}, + "source": [ + "# Performing a Similarity Search\n", + " - Defines a natural language query that searches for hotels with good food.\n", + " - Calls `similarity_search(k=3)` to retrieve the top 3 most semantically similar documents.\n", + " - Each result contains:\n", + "   * `page_content`: The value of `text_key` (hotel name)\n", + "   * `metadata`: The additional fields requested via the `fields` argument (here, `reviews`)\n", + " - Change `query` to any descriptive phrase (e.g., \"budget friendly hotel near the beach\").\n", + " - Adjust `k` for more or fewer results.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "id": "177fd6d5", + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "--- Result 1 ---\n", + "Hotel Name: Medway Youth Hostel\n", + "Reviews:\n", + "Author: Ozella Sipes\n", + "Content: This was our 2nd trip here and we enjoyed it as much or more than last year. Excellent location across from the French Market and just across the street from the streetcar stop. Very convenient to several small but good restaurants. Very clean and well maintained. Housekeeping and other staff are all friendly and helpful. We really enjoyed sitting on the 2nd floor terrace over the entrance and \"people-watching\" on Esplanade Ave., also talking with our fellow guests. Some furniture could use a little updating or replacement, but nothing major.\n", + "\n", + "Author: Barton Marks\n", + "Content: We found the hotel de la Monnaie through Interval and we thought we'd give it a try while we attended a conference in New Orleans. This place was a perfect location and it definitely beat staying downtown at the Hilton with the rest of the attendees. We were right on the edge of the French Quarter withing walking distance of the whole area. The location on Esplanade is more of a residential area so you are near the fun but far enough away to enjoy some quiet downtime. We loved the trolly car right across the street and we took that down to the conference center for the conference days we attended. We also took it up Canal Street and nearly delivered to the WWII museum. From there we were able to catch a ride to the Garden District - a must see if you love old architecture - beautiful old homes(mansions). 
We at lunch ate Joey K's there and it was excellent. We ate so many places in the French Quarter I can't remember all the names. My husband loved all the NOL foods - gumbo, jambalya and more. I'm glad we found the Louisiana Pizza Kitchen right on the other side of the U.S. Mint (across the street from Monnaie). Small little spot but excellent pizza! The day we arrived was a huge jazz festival going on across the street. However, once in our rooms, you couldn't hear any outside noise. Just the train at night blowin it's whistle! We enjoyed being so close to the French Market and within walking distance of all the sites to see. And you can't pass up the Cafe du Monde down the street - a busy happenning place with the best French dougnuts!!!Delicious! We will defintely come back and would stay here again. We were not hounded to purchase anything. My husband only received one phone call regarding timeshare and the woman was very pleasant. The staff was laid back and friendly. My only complaint was the very firm bed. Other than that, we really enjoyed our stay. Thanks Hotel de la Monnaie!\n", + "\n", + "\n", + "--- Result 2 ---\n", + "Hotel Name: The Balmoral Guesthouse\n", + "Reviews:\n", + "\n", + "--- Result 3 ---\n", + "Hotel Name: The Robins\n", + "Reviews:\n", + "Author: Blaise O'Connell IV\n", + "Content: Staff need a bit of a refresher in customer service...we couldn't get a safe and there were 6 of us - \"sorry, none left\" and not too helpful, the location was terrific but the staff let us down...no water to drink in the rooms - no bar fridge or coffee making facilities...there is a bar next door which is okay - serves breakfast & lunch & dinner...stay here only for the location\n", + "\n", + "Author: Nedra Cronin\n", + "Content: We ended up choosing the Holiday Inn because it had a combination of a low price and a really convenient location. It's close to some great restaurants and just far enough from Bourbon street so that it isn't noisy. The indoor pool on the 10th floor was nice, too. No complaints here.\n", + "\n", + "Author: Marianna Schmeler\n", + "Content: I must explain this history in order that you know what kind of hotel you are going to stay. 2011- Mardi Grass. During the week of the 9th March, we stayed at this hotel for a few days. When we checked out, we drove direction Baton Rouge. As we arrived, we found out that my husband had lost his back bag with all documents, passport, computer, etc. somewhere... The recepcionist in Super 8 La Place helped us a lot. We were very nervous, and she even let us phone to New Orleans just to check if our bag was there. No news, but finally, we decided to drove back to New Orleans. As we arrived the recepcionist, Nimahs, helped us a lot! Security guy, Dal, also. But as they didn´t know about the lost bag, we asked to check the security cameras. The responsible guy came, and checked the cameras with my husband. Thanks to that, the bag was found! Happy end of the history thanks to that people! No one accepted any tips from us. We surely will recomend Super 8 hotels from now on. On the other hand, I must say that this is not a well located hotel if you want to party every night in the French Quarter. But we thought it was a good choice, also thinking about prices, and because we had our own car.\n", + "\n", + "Author: Davon Price\n", + "Content: Nice staff, road trip for our family and wanted cheap but clean. 
And close to French quarter.\n", + "\n", + "Author: Joannie Barrows DDS\n", + "Content: We read reviews of motels in New Orleans before choosing this one based on good feedback from others. It turned out to be a good choice, for it was a comfortable stay and the staff obviously put a lot of effort into making it a quality establishment. When we left they even had staff doing work in the garden. The room itself was good, with two beds, a fridge and coffee maker. Amazingly, the wifi also worked in the room, which I found to be something of a rarity for places in this price range. The floor sloped slightly downwards, which was a bit odd, but it didn't matter at all. There is plenty of parking, even if the spaces are quite tight. The motel is equipped with a gym, which I didn't use, plus an outdoor pool, which looked quite good but was far too cold when I was there. The business center was very useful, for I was able to print out flight tickets. Breakfast was good (though waffles would enhance it!), but was packed away promptly at 9:30. It was good to fill up in the morning while watching the news before going into New Orleans for the day. That brings me to the location - it's not in an area that you'd want to walk around, but isn't too far from fast food places and is very close to the interstate, so it's easy to get into New Orleans itself. A lot of the motels in New Orleans are in this area. This is an impressive motel and I would happily stay here again if I was every lucky enough to be able to return to New Orleans.\n", + "\n" + ] + } + ], + "source": [ + "query = \"Which hotels have good food?\"\n", + "results = vector_store.similarity_search(query, k=3, fields=[\"reviews\"])\n", + "\n", + "for i, doc in enumerate(results, 1):\n", + "    print(f\"\\n--- Result {i} ---\")\n", + "    print(f\"Hotel Name: {doc.page_content}\")\n", + "    print(\"Reviews:\")\n", + "    received_reviews = doc.metadata[\"reviews\"]\n", + "    for j, review in enumerate(received_reviews, 1):\n", + "        print(f\"Author: {review['author']}\")\n", + "        print(f\"Content: {review['content']}\\n\")" + ] + },
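 + { + "cell_type": "markdown", + "id": "8d7e6f50", + "metadata": {}, + "source": [ + "# Inspecting Similarity Scores (Optional)\n", + "Most LangChain vector stores also expose `similarity_search_with_score`, which returns the raw distance alongside each document. Assuming the installed `langchain-couchbase` version provides it on `CouchbaseQueryVectorStore`, a minimal sketch for inspecting the scores of the query above looks like this:\n", + "\n", + "```python\n", + "# Hedged sketch: inspect raw distances for the configured distance metric\n", + "scored = vector_store.similarity_search_with_score(query, k=3)\n", + "for doc, score in scored:\n", + "    print(f\"{doc.page_content}: {score:.4f}\")\n", + "```\n", + "Whether a lower or higher score is \"better\" depends on the distance metric; here it is the value Couchbase computes for `DistanceStrategy.DOT`." + ] + },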
 + { + "cell_type": "markdown", + "id": "f9e0d863", + "metadata": {}, + "source": [ + "## Results and Interpretation\n", + "\n", + "The search results display the top 3 (or `k`) most semantically similar hotels based on your query.\n", + "\n", + "**What you see in each result:**\n", + "- **Hotel Name** (`page_content`): The value from the `text_key` field (hotel name).\n", + "- **Metadata fields**:\n", + "  - `reviews`: Customer reviews\n", + "\n", + "### How the Ranking Works\n", + "1. Your natural language query (e.g., `\"Which hotels have good food?\"`) is embedded using the NVIDIA model (`nvidia/llama-3.2-nv-embedqa-1b-v2`).\n", + "2. The query embedding is compared against the stored vector field (`hyperscale_autovec_workflow_vec_descr_review_state`, the `embedding_key` above) in each document using dot product similarity.\n", + "3. Results are sorted by vector similarity. Higher similarity = closer semantic meaning.\n", + "\n", + "### Key Observations\n", + "- The results are ranked by **semantic similarity**, not keyword matching. Hotels whose descriptions and reviews conceptually match a query about \"good food\" will rank higher, even if those exact words don't appear in the document.\n", + "- By including extra `fields`, you get rich, contextual information beyond just the hotel name, making the results more actionable.\n", + "- The embedding combines the `description`, `reviews` and `state` fields, so the search understands the hotel's overall quality based on multiple data points.\n", + "\n", + "> Your vector search pipeline is working if the returned documents feel meaningfully related to your natural language query, even when exact keywords do not match. Feel free to experiment with increasingly descriptive queries to observe the semantic power of the embeddings.\n" + ] + }, + { + "cell_type": "markdown", + "id": "be65528b", + "metadata": {}, + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.14.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/autovec-structured/frontmatter.md b/autovec-structured/frontmatter.md new file mode 100644 index 00000000..6160292b --- /dev/null +++ b/autovec-structured/frontmatter.md @@ -0,0 +1,21 @@ +--- +# frontmatter +path: "/tutorial-couchbase-capella-autovectorization-workflows-with-structured-data-and-langchain" +title: Auto-Vectorization of Structured Data with Couchbase Capella AI Services +short_title: Auto-Vectorization with Couchbase and Semantic Search using LangChain +description: + - Learn how to use Couchbase Capella's AI Services auto-vectorization feature to automatically convert your structured data into vector embeddings. + - To learn about the auto-vectorization of unstructured data, read the following [tutorial](tutorial-couchbase-autovectorization-workdlows-with-unstructured-data-and-langchain). + - This tutorial demonstrates how to set up automated embedding generation workflows and perform semantic search using LangChain. 
+content_type: tutorial +filter: sdk +technology: + - vector search +tags: + - Hyperscale Vector Index + - Artificial Intelligence + - LangChain +sdk_language: + - python +length: 20 Mins +--- diff --git a/autovec-structured/img/Access_control.png b/autovec-structured/img/Access_control.png new file mode 100644 index 00000000..149dcd0b Binary files /dev/null and b/autovec-structured/img/Access_control.png differ diff --git a/autovec-structured/img/Create_auto_vec.png b/autovec-structured/img/Create_auto_vec.png new file mode 100644 index 00000000..61baeae7 Binary files /dev/null and b/autovec-structured/img/Create_auto_vec.png differ diff --git a/autovec-structured/img/Select_embedding_model.png b/autovec-structured/img/Select_embedding_model.png new file mode 100644 index 00000000..22abe87a Binary files /dev/null and b/autovec-structured/img/Select_embedding_model.png differ diff --git a/autovec-structured/img/cluster_cloud_config.png b/autovec-structured/img/cluster_cloud_config.png new file mode 100644 index 00000000..c478a833 Binary files /dev/null and b/autovec-structured/img/cluster_cloud_config.png differ diff --git a/autovec-structured/img/cluster_no_nodes.png b/autovec-structured/img/cluster_no_nodes.png new file mode 100644 index 00000000..8a09de47 Binary files /dev/null and b/autovec-structured/img/cluster_no_nodes.png differ diff --git a/autovec-structured/img/create_cluster.png b/autovec-structured/img/create_cluster.png new file mode 100644 index 00000000..8af4219b Binary files /dev/null and b/autovec-structured/img/create_cluster.png differ diff --git a/autovec-structured/img/deploying_model.png b/autovec-structured/img/deploying_model.png new file mode 100644 index 00000000..5b830341 Binary files /dev/null and b/autovec-structured/img/deploying_model.png differ diff --git a/autovec-structured/img/download_api_key_details.png b/autovec-structured/img/download_api_key_details.png new file mode 100644 index 00000000..8ee7dc82 Binary files /dev/null and b/autovec-structured/img/download_api_key_details.png differ diff --git a/autovec-structured/img/import_sd.png b/autovec-structured/img/import_sd.png new file mode 100644 index 00000000..e6d1a664 Binary files /dev/null and b/autovec-structured/img/import_sd.png differ diff --git a/autovec-structured/img/imported_data_hotel.png b/autovec-structured/img/imported_data_hotel.png new file mode 100644 index 00000000..1aeb7f80 Binary files /dev/null and b/autovec-structured/img/imported_data_hotel.png differ diff --git a/autovec-structured/img/importing_model.png b/autovec-structured/img/importing_model.png new file mode 100644 index 00000000..41e80e92 Binary files /dev/null and b/autovec-structured/img/importing_model.png differ diff --git a/autovec-structured/img/login.png b/autovec-structured/img/login.png new file mode 100644 index 00000000..30e8b1e2 Binary files /dev/null and b/autovec-structured/img/login.png differ diff --git a/autovec-structured/img/login_.png b/autovec-structured/img/login_.png new file mode 100644 index 00000000..e1711271 Binary files /dev/null and b/autovec-structured/img/login_.png differ diff --git a/autovec-structured/img/model_api_key_form.png b/autovec-structured/img/model_api_key_form.png new file mode 100644 index 00000000..0713a53c Binary files /dev/null and b/autovec-structured/img/model_api_key_form.png differ diff --git a/autovec-structured/img/model_setup_access.png b/autovec-structured/img/model_setup_access.png new file mode 100644 index 00000000..91dfae79 Binary files /dev/null and 
b/autovec-structured/img/model_setup_access.png differ diff --git a/autovec-structured/img/node_select_cluster_opt.png b/autovec-structured/img/node_select_cluster_opt.png new file mode 100644 index 00000000..a15a0f77 Binary files /dev/null and b/autovec-structured/img/node_select_cluster_opt.png differ diff --git a/autovec-structured/img/password_cluster.png b/autovec-structured/img/password_cluster.png new file mode 100644 index 00000000..85ad736d Binary files /dev/null and b/autovec-structured/img/password_cluster.png differ diff --git a/autovec-structured/img/select_cluster.png b/autovec-structured/img/select_cluster.png new file mode 100644 index 00000000..381439fe Binary files /dev/null and b/autovec-structured/img/select_cluster.png differ diff --git a/autovec-structured/img/setup_access.png b/autovec-structured/img/setup_access.png new file mode 100644 index 00000000..08bf9643 Binary files /dev/null and b/autovec-structured/img/setup_access.png differ diff --git a/autovec-structured/img/start_workflow.png b/autovec-structured/img/start_workflow.png new file mode 100644 index 00000000..23ce813a Binary files /dev/null and b/autovec-structured/img/start_workflow.png differ diff --git a/autovec-structured/img/vector_all_field_mapping.png b/autovec-structured/img/vector_all_field_mapping.png new file mode 100644 index 00000000..8800ac88 Binary files /dev/null and b/autovec-structured/img/vector_all_field_mapping.png differ diff --git a/autovec-structured/img/vector_custom_field_mapping.png b/autovec-structured/img/vector_custom_field_mapping.png new file mode 100644 index 00000000..eb01f36e Binary files /dev/null and b/autovec-structured/img/vector_custom_field_mapping.png differ diff --git a/autovec-structured/img/vector_data_source.png b/autovec-structured/img/vector_data_source.png new file mode 100644 index 00000000..f9db7e46 Binary files /dev/null and b/autovec-structured/img/vector_data_source.png differ diff --git a/autovec-structured/img/vector_field.png b/autovec-structured/img/vector_field.png new file mode 100644 index 00000000..25778fb8 Binary files /dev/null and b/autovec-structured/img/vector_field.png differ diff --git a/autovec-structured/img/vector_field_mapping.png b/autovec-structured/img/vector_field_mapping.png new file mode 100644 index 00000000..dfdeacf3 Binary files /dev/null and b/autovec-structured/img/vector_field_mapping.png differ diff --git a/autovec-structured/img/vector_index.png b/autovec-structured/img/vector_index.png new file mode 100644 index 00000000..7c0c6736 Binary files /dev/null and b/autovec-structured/img/vector_index.png differ diff --git a/autovec-structured/img/vector_index_page.png b/autovec-structured/img/vector_index_page.png new file mode 100644 index 00000000..25179a09 Binary files /dev/null and b/autovec-structured/img/vector_index_page.png differ diff --git a/autovec-structured/img/workflow.png b/autovec-structured/img/workflow.png new file mode 100644 index 00000000..fcf8a0c6 Binary files /dev/null and b/autovec-structured/img/workflow.png differ diff --git a/autovec-structured/img/workflow_deployed.png b/autovec-structured/img/workflow_deployed.png new file mode 100644 index 00000000..224dcfa1 Binary files /dev/null and b/autovec-structured/img/workflow_deployed.png differ diff --git a/autovec-structured/img/workflow_summary.png b/autovec-structured/img/workflow_summary.png new file mode 100644 index 00000000..f7f06e3a Binary files /dev/null and b/autovec-structured/img/workflow_summary.png differ