Instill AI changelog

Release 47: Improvement & fixes

changelog cover

Pipeline Builder

  • [Improvement] Add auto-save for pipelines when the "Run" button is clicked

Python SDK

  • [Improvement] Remove attribute naming _service

  • [Improvement] Support specifying target namespace in endpoint functions

  • [Improvement] Align initialization functions between each service client

  • [Improvement] Remove configuration system

Other

  • [Bug] Fix missing connection for iterator component dependencies in the preview canvas

  • [Bug] Fix error encountered when a model remains in the "Scaling" stage after 15 minutes

  • [Improvement] Add */* format type to generateInputsPayload

  • [Improvement] Add Instill Model scaling hint for the VDP Component

  • [Improvement] Update the landing page after onboarding to the "Explore" page.

  • [Improvement] Update inconsistent logo sizes on the "About" page

To file an issue, please use this template or raise it in our Discord community.


One-click integrations with GitHub, Slack & Google Drive

changelog cover

OAuth is an open-standard authorization protocol or framework that provides applications with the ability for "secure designated access."

Today, we upgraded the GitHub, Slack and Google Drive Components to use OAuth for a quicker and simpler authentication process to connect to your external apps.

To get started:

  1. Sign in to the Console

  2. Navigate to your top-right Profile > Settings > Integrations

  3. Search for GitHub, Slack, or Google Drive and click Connect

Key Benefits:

  • Secure Connections: Access your resources safely without sharing passwords.

  • Easy Access Management: Quickly control who can access connected services.

  • Automatic Setup: No manual IDs needed—connections are set up automatically.

  • Seamless Integration: Connect to top platforms in just a click, simplifying your workflow.

We will be adding more integrations very soon so stay tuned!

Learn more about this feature by joining our Discord where we showcase these new features and you can ask questions in the Weekly Office Hours.


Visit Your Dashboard

changelog cover

The Dashboard is a centralized view for observing all your pipeline and model run activity and Instill Credit consumption costs. This view includes a chart for data visualization and a table for your run history logs.

This means you can easily review and calculate any future costs associated with your pipeline and model usage. Simply log into your Instill Cloud account and go to the Dashboard tab to check it out!

Learn more about this feature by joining our Discord where we showcase these new features and you can ask questions in the Weekly Office Hours.


Instill App Component

changelog cover

We are introducing the Instill App Component on 💧 Instill VDP so you can easily access the conversation history of the AI Assistants you created with Instill App.

Features

  • Support tasks:

    • Read Chat History

    • Write Message

To request a new component, please go here and click "Request A Component".


Release 46: Improvement & fixes

changelog cover

Pipeline Builder

  • [Improvement] Implement Auto-Save for Pipeline When "Run" Button Clicked

Python SDK

  • [Improvement] Remove Attribute Naming _service

  • [Improvement] Support specifying target namespace in endpoint functions

  • [Improvement] Align initialization functions between each service client

  • [Improvement] Remove configuration system

Other

  • [Bug] Fix missing connection for iterator component dependencies in the preview canvas

  • [Bug] Fix error encountered when a model remains in the "Scaling" stage after 15 minutes

  • [Improvement] Add */* format type to generateInputsPayload

  • [Improvement] Add Instill Model scaling hint for the VDP Component

  • [Improvement] Update the landing page after onboarding to the "Explore" page.

  • [Improvement] Fix inconsistent logo sizes on the "About" page

To file an issue, please use this template or raise it in our Discord community.


Ready-to-use AI Assistants on Instill Cloud

changelog cover

Today, we are releasing Instill App, where you can create ready-to-use AI Assistants that seamlessly connect with your knowledge bases stored in Instill Artifact. This chatbot interface enables you to easily interact with any documents you have uploaded, fast-tracking the infrastructure for achieving RAG applications.

To get started:

  1. Create a Catalog in Artifact

  2. Upload your files

  3. Create App using either:

    • the Console: Instill Cloud > Applications > Create App (see video above)

    • or the API (a Python equivalent is sketched after these steps)

    export INSTILL_API_TOKEN=********
    
    curl -X POST 'https://api.instill.tech/v1alpha/namespaces/NAMESPACE_ID/apps' \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer $INSTILL_API_TOKEN" \
    --data '{
      "id": "customer-chatbot",
      "description": "Answers customer enquiries about Instill AI.",
      "tags": ["AI", "Assistant"]
    }'
  4. Go to Applications and chat with your AI Assistant
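
If you prefer Python, the same App-creation request can be sent with the requests library. This is a minimal sketch mirroring the curl example above; replace NAMESPACE_ID and the token with your own values.

import os
import requests

# Same request as the curl example above: create an App in your namespace.
token = os.environ["INSTILL_API_TOKEN"]
namespace_id = "NAMESPACE_ID"  # replace with your user or organization ID

resp = requests.post(
    f"https://api.instill.tech/v1alpha/namespaces/{namespace_id}/apps",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "id": "customer-chatbot",
        "description": "Answers customer enquiries about Instill AI.",
        "tags": ["AI", "Assistant"],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())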

This feature is exclusively available on ☁️ Instill Cloud and Instill Enterprise.

Learn more about this feature by joining our Discord where we showcase these new features and you can ask questions in the Weekly Office Hours.


Release 45: Improvement & fixes

changelog cover

Unified Data Processing

  • [Bug] Fix JSON operator jq filtering

  • [Bug] Fix ask-the-catalog pipeline output displaying chunks as 'object'

Pipeline Builder

  • [Feat.] Add per component output in editor for debugging

  • [Bug] Fix JSON output missing in Playground and pipeline builder for Crawler

  • [Bug] Fix missing hint when renaming does not work

  • [Bug] Fix credit-related error messages being only partially displayed across the entire pipeline

  • [Bug] Fix logic to display correct UI for JSON output

  • [Improvement] Improve UX so the Ask task remains fully visible when scrolling

  • [Improvement] Add smart hints before the user types any character (Ctrl+Space)

Other

  • [Bug] Fix button not functioning in pipeline preview tab

  • [Bug] Fix Cmd+O shortcut breaking after adding a random component

  • [Bug] Fix organization name and logo disappearing after creating an organization

  • [Bug] Use the "Response title" instead of the "Response key" in the Pipeline Playground Output view

  • [Bug] Fix dialog window jitter after clicking the Clone button

  • [Improvement] Replace the not-found UI for public pipelines & models with the Playground tab

To file an issue, please use this template or raise it in our Discord community.


Instill Integrations

changelog cover

We are improving vendor connection resources by introducing Instill Integrations, which let you easily set up and store the connection details of any 3rd-party vendor (e.g., OpenAI API key, Pinecone URL and API key) on Instill VDP. This improves observability by letting you see which vendor has been authorized for each integration, and it improves the overall user experience in the pipeline builder.

In contrast to Secret Management, this new integration feature separates the component setup from the variables it uses, which means:

  • Secrets will be used for avoiding repetition and will not be tied to any component type

  • Integrations will be used to store connection setup details all in one place

To set up an integration, you can use either the Integrations page or the pipeline builder.

Configuring integrations

The Integrations page lets you configure and protect your external connections all in one place, so you can quickly search and filter available integrations and configure new ones.

To set up a new personal integration, simply follow these steps:

  1. Go to your Profile > Settings > Integrations

  2. Click Connect on any available 3rd party vendor

  3. Fill in the connection details

To set up a new integration for your organization, simply follow these steps:

  1. Go to your Profile > Your Organization > Organization > Integrations

  2. Click Connect on any available 3rd party vendor

  3. Fill in the connection details

Using integrations

When writing a pipeline recipe in the pipeline builder, you can reference the connection in the setup section within your component declaration:

component: 
  library-db:
    type: sql
    input:
      table-name: books
      update-data: ${variable.books}
    setup: ${connection.my-library-1}  # <-- reference connection 
    task: TASK_UPDATE

Next, we will also enable connecting with 3rd party services via OAuth.

Learn more about this feature by joining our Discord where we showcase these new features and you can ask questions in the Weekly Office Hours.


Instill SDK for Python Devs

changelog cover

The Instill SDK provides a Python-based interface for interacting with Instill Core features, including data, pipeline and model operations.

Please note: Our Python SDK supports up to and including Python 3.11.

Data - Instill Artifact

Perform key data operations such as:

  • List, create, update, and delete catalogs

  • Upload, process, and manage files within a catalog

  • View and search file chunks

  • Ask catalog to retrieve relevant information based on files

Pipeline - Instill VDP

Handle pipeline tasks:

  • Create and run pipelines

Model - Instill Model

Work with AI models:

  • Create and serve models

  • Run model for inference

Check out the Instill AI Cookbooks to learn how to use the Python SDK.
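
As a rough illustration of the kind of workflow the SDK enables, here is a minimal sketch. The InstillClient class name appears in the release notes above, but the import path and method names in this sketch are assumptions rather than the documented API; see the Cookbooks and SDK reference for the actual calls.

# Rough sketch only: the import path and every method name below are assumptions,
# not the SDK's documented API. Consult the Cookbooks for the real calls.
from instill.clients import InstillClient  # assumed import path

client = InstillClient(api_token="<INSTILL CLOUD API TOKEN>")

# Data (Instill Artifact): create a catalog and upload a file (hypothetical methods).
client.create_catalog(namespace_id="my-namespace", catalog_id="reports")
client.upload_catalog_file(
    namespace_id="my-namespace", catalog_id="reports", file_path="q3-report.pdf"
)

# Pipeline (Instill VDP): trigger a pipeline with an input (hypothetical method).
result = client.trigger_pipeline(
    namespace_id="my-namespace",
    pipeline_id="ask-my-docs",
    inputs=[{"question": "Summarize the Q3 report."}],
)
print(result)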

The Python SDK is undergoing continuous improvements. We'd love to hear your feedback! Please share your thoughts in the #feedback channel on our Discord community.


Release 44: Improvement & fixes

changelog cover

Instill Artifact

  • [Improvement] Rename all APIs to follow a new structure

Pipeline Builder

  • [Bug] Enable Asana to be added as a component correctly

  • [Bug] Fix component spinning after pipeline successfully runs

  • [Bug] Correct component indentation display when adding a component

  • [Improvement] Center code in the left editor panel when clicking a preview component

  • [Improvement] Make the left sidebar in the Low-Code Editor adjustable

  • [Improvement] Enhance the "Add Component" experience by removing null entries

  • [Improvement] Round component borders and increase text size

  • [Improvement] Enable description hints for each key

Python SDK

  • [Bug] Fix issue with triggering the organization pipeline using InstillClient and trigger_org_pipeline_release()

Other

  • [Bug] Add missing Public/Private labels in some scenarios

  • [Bug] Fix bugs when converting JPG to text

  • [Bug] Eliminate white padding issue in the image preview widget

  • [Improvement] Remove the version "test" button

To file an issue, please use this template or raise it in our Discord community.


Revamping the Pipeline Builder

changelog cover

We are introducing a completely new way of building your pipelines on Instill VDP, shifting our tool's focus to better support developers in the AI and data domain. The goal of VDP is to help you dynamically build versatile pipelines for complex and unique use cases that require minimal maintenance while providing high data observability.

Recipe as Code

The new pipeline builder UI allows you to edit YAML recipes next to the pipeline preview for improved debugging and data flow visualization. This means you can edit recipe.yml files in your own editors like VSCode and simply import them into VDP, streamlining the recipe sharing process. Additionally, the editor offers smart hints and syntax highlighting, making it easier to build and modify recipes efficiently.

Watch this tutorial on how to build recipes in the new pipeline editor:

Quick tips:

  • Use Control+O or Cmd+O shortcut to add components easily

  • Use Control+K or Cmd+K shortcut to search for things

  • Use Control+S or Cmd+S shortcut to save your recipe manually

  • Use Control+Enter or Cmd+Enter shortcut to run the pipeline

  • Click the Getting Started button at the bottom-left corner to view the guide in the pipeline preview panel.

High Observability

The pipeline preview offers a visual representation of data flow between components, striking a balance between code-level control and visual management. This feature is particularly beneficial for data engineers working on complex pipelines, as it provides real-time streaming feedback to simplify debugging. The preview also enhances component presentation, making it easier to build, test, and maintain sophisticated data pipelines efficiently.

You can try exploring the recipe of the structured-text-summarizer pipeline to transform a paragraph of unstructured text into structured JSON output in the new pipeline editor.

Learn more about this feature by joining our Discord where we showcase these new features and you can ask questions in the Weekly Office Hours.


Release 43: Improvement & fixes

changelog cover

VDP components

  • [Improvement] Improve the I/O design of Instill Artifact Component tasks

  • [Improvement] Add Create Issue task to the Jira Component

Pipeline builder

  • [Bug] Auto-save Synchronization Between Frontend and Backend May Cause Users to Revert to a Previous State

  • [Bug] Pipeline Overview Version Dropdown Fails to Display All Versions

  • [Bug] Streaming is not Working on Safari Browser

  • [Bug] Broken Documentation Link for Component Instill Model

  • [Bug] Output image cannot be displayed on the Console

  • [Improvement] Reduce the font size of code blocks in the Pipeline Editor

  • [Improvement] Users can use Control+S or Cmd+S to manually save a recipe

  • [Improvement] Add the Preview from Pipeline Editor to the Overview Page

  • [Improvement] Adjust the Pipeline Editor's default output UI

  • [Improvement] Make it clearer how to create a multiline variable

To file an issue, please use this template or raise it in our Discord community.


Pipeline Streaming Mode

changelog cover

The new pipeline editor contains a pipeline preview with a streaming feature, which lets you observe how data flows through the components when running your pipeline. It provides visibility into the status of each component in your pipeline run.

To try out streaming mode:

  1. Create or use an existing pipeline such as structured-web-insights

  2. Clone this pipeline by clicking Clone in the top right

  3. Go into editor mode by clicking Edit in the top right

  4. Set up your Google Search Engine (see setup docs or README)

  5. Fill in the input form and hit Run

Learn more about this feature by joining our Discord where we showcase these new features and you can ask questions in the Weekly Office Hours.


Freshdesk Component

changelog cover

Freshdesk is a cloud-based customer support platform founded with the mission of enabling companies of all sizes to provide great customer service. Its goal is simple: make it easy for brands to talk to their customers and for users to get in touch with businesses.

We added a Freshdesk Component on 💧 Instill VDP to enable more Customer Relationship Management (CRM) capabilities when building your pipelines. This application component focuses on using Freshdesk's ticketing system.

Features

  • Support tasks:

    • Get Ticket

    • Create Ticket

    • Reply To Ticket

    • Create Ticket Note

    • Get All Conversations

    • Get Contact

    • Create Contact

    • Get Company

    • Create Company

    • Get All

    • Get Product

    • Get Group

    • Get Agent

    • Get Skill

    • Get Role

To request a new component, please go here and click "Request A Component".


Chroma Component

changelog cover

Chroma is an AI-native open-source embedding database.

We added a Chroma Component on 💧 Instill VDP to expand your vector database options and enable you to build and search Chroma's vector databases.

Features

  • Support tasks:

    • Batch Upsert

    • Upsert

    • Query

    • Delete

    • Create Collection

    • Delete Collection

To request a new component, please go here and click "Request A Component".


Asana Component

changelog cover

Asana is a web and mobile "work management" platform designed to help teams organize, track, and manage their work.

We added an Asana Component on 💧 Instill VDP to empower project management workflow automation. Build unique pipeline automations that you can mix and match across various vendors using Instill Components for data, applications, and AI.

Features

  • Support tasks:

    • Goal

    • Task

    • Project

    • Portfolio

To request a new component, please go here and click "Request A Component".


Runs for Logging Pipeline and Model Triggers

changelog cover

We are introducing Runs, a logging system for pipeline and model executions. This feature provides execution logs for the currently selected pipeline or model to help you track credit usage, monitor performance, and get snapshots of each execution's metadata, which is useful for debugging.

To get started, simply follow these steps:

Pipeline Logging Runs

  1. Go to any pipeline such as the contract-reviewer

  2. In the Playground, run the pipeline

  3. Go to "Runs" tab to view your history of pipeline executions

  4. Click into any Run ID to view specific metadata related to this pipeline execution

Clicking into a specific Run ID shows useful information, including run duration, credit consumption, trigger source, recipe, input/output results, and each component's metadata.

Model Logging Runs

  1. Go to any Instill Model such as the instill-ai/llava-1-6-13b.

  2. In the Playground, run the model.

  3. Go to "Runs" tab to view your history of model executions

  4. Click into any Run ID to view specific metadata related to this model execution

More at: Instill AI


Milvus Component

changelog cover

Milvus is an open-source vector database built for GenAI applications. Install with pip, perform high-speed searches, and scale to tens of billions of vectors with minimal performance loss.

We added a Milvus Component on 💧 Instill VDP to expand your vector database options and enable you to build and search Milvus vector databases.

Features

  • Support tasks:

    • Vector Search

    • Upsert

    • Delete

    • Drop Collection

    • Create Partition

    • Drop Partition

To request a new component, please go here and click "Request A Component".


Release 42: Improvement & fixes

changelog cover

Improvements

  • Added create issue task to Jira Component

  • Added new fields in Website Crawl

  • Incorporated I/O tasks in Instill Artifact Component

  • Added Excel pivot table to Markdown conversion in Document Component

  • Implemented code snippet extraction in Web operator

  • Enhanced SQL component with SSL/TLS input (base64 encoded) and engine relocation

  • Expanded HubSpot Component with deal/ticket updates and task retrieval

  • Added XLSX to Text/Markdown conversion support for Catalog

  • Introduced PDF to Images conversion in Document component

  • Improved Artifact component with base64 upload, multiple file support, and filename output

  • Added file upload interruption notification

Bug Fixes

  • Fixed mobile device login/signup buttons

  • Corrected Public Model Playground Page button wording

  • Resolved text highlight omission issue

  • Fixed blank Floating Window UI bug

  • Addressed Catalog file deletion issue

To report a bug or improvement, please create an issue on GitHub here.


Zilliz Component

changelog cover

Powered by open-source Milvus, Zilliz delivers the most performant and cost-effective vector database for AI at any scale.

We added a Zilliz Component on 💧 Instill VDP to expand your vector database options and enable you to build and search Zilliz's vector database.

Features

  • Support tasks:

    • Vector Search

    • Upsert

    • Batch Upsert

    • Delete

    • Create Collection

    • Drop Collection

    • Create Partition

    • Drop Partition

To request a new component, please go here and click "Request A Component".


Top up your Instill Credits

changelog cover

All Instill Cloud users automatically receive 10,000 Instill Credits for free each month. Instill Credits reduce the need to pre-configure AI components by letting you skip creating accounts or API keys on 3rd-party services.

If you run out, simply follow these steps to top up:

Personal Credits:

  1. Click the top-right avatar

  2. Go to Settings > Billing > Credits

Organization Credits:

  1. Click the top-right avatar

  2. Go to Settings > Organization

  3. Select a specific organization

  4. Go to Billing > Credits

Learn more about this feature by joining our Discord where we showcase these new features and you can ask questions in the Weekly Office Hours.


WhatsApp: Automate Sending Messages

changelog cover

We added a WhatsApp Component on 💧 Instill VDP to help you automate workflows and send template messages on WhatsApp. Construct useful workflows that automate your work, process unstructured data with any AI Component, and then send the result as a message to anyone on WhatsApp.

Features

  • Support tasks:

    • Send Text Based Template Message

    • Send Media Based Template Message

    • Send Location Based Template Message

    • Send Authentication Template Message

    • Send Text Message

    • Send Media Message

    • Send Location Message

    • Send Contact Message

    • Send Interactive Call To Action Url Button Message

To request a new component, please go here and click "Request A Component".


Release 41: Improvement & fixes

changelog cover

Improvements

  • Removed the bottom credits balance

  • Prevented deletion of catalogs under organizations

  • Enhanced web operator scrape quality for markdown

Bug Fixes

  • Resolved issue preventing users from scrolling in nodes within the pipeline-builder

  • Corrected dual indicator display on Console

  • Addressed organization credit consumption problem

  • Fixed frontend not rendering backend error messages correctly

  • Repaired broken Explore functionality in the Navigation Bar

  • Resolved issue with iterator over pinecone-kb.output.matches Array

More at: Instill AI


Fireworks AI: Fastest Inference for Generative AI

changelog cover

Fireworks AI is a generative AI inference platform to run and customize models with industry-leading speed and production readiness.

We added Fireworks AI Component on 💧 Instill VDP to unlock a wide range of natural language processing capabilities for your applications.

Features

  • Support the following tasks:

    • Text Generation Chat

    • Text Embeddings

  • Models:

    • llama3.1 (405B, 70B, 8B), llama3 (70B, 8B), gemma2-9b, phi-3-vision-128K, deepseek-coder, qwen2 (72B), nomic-ai/nomic-embed, thenlper/gte, WhereIsAI/UAE

☁️ Instill Cloud demo:

More at: Instill AI


Weaviate: AI-Native Vector Database

changelog cover

Weaviate is a vector database combining versatile vector and hybrid search capabilities with scalable, real-time operations on multi-modal data.

We added Weaviate Component on 💧 Instill VDP to expand your vector database options and enhance capabilities for managing and querying vector data efficiently.

Features

  • Support tasks:

    • Vector search

    • Batch insert

    • Insert

    • Update

    • Delete

    • Delete Collection

To request a new component, please go here and click "Request A Component".


MongoDB: Versatile Data Movement for Document Databases

changelog cover

MongoDB is a NoSQL database that stores data in flexible, JSON-like documents, allowing for easy scalability and dynamic schema design, making it well-suited for handling large volumes of unstructured or semi-structured data in modern applications.

We added a MongoDB Component on 💧 Instill VDP to enhance your data pipelines by connecting to your MongoDB databases.

Features

  • Support tasks:

    • Insert

    • Insert Many

    • Find

    • Update

    • Delete

    • Drop Collection

    • Drop Database

    • Create Search Index

    • Drop Search Index

    • Vector Search
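
For context, these tasks map onto standard MongoDB operations. Below is a minimal sketch using the pymongo client directly, independent of the component's own setup fields; the connection URI, database, and collection names are placeholders.

from pymongo import MongoClient

# Placeholder connection URI and names; the component's setup fields differ.
client = MongoClient("mongodb://localhost:27017")
books = client["library"]["books"]

books.insert_one({"title": "Designing Data Pipelines", "year": 2023})        # Insert
books.insert_many([{"title": "Vectors 101"}, {"title": "RAG in Practice"}])  # Insert Many
print(books.find_one({"year": 2023}))                                        # Find
books.update_one({"title": "Vectors 101"}, {"$set": {"year": 2024}})         # Update
books.delete_one({"title": "RAG in Practice"})                               # Delete
books.drop()                                                                 # Drop Collection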

More at: Instill AI


Elasticsearch: Scalable Data Store and Vector Database

changelog cover

Elasticsearch is a distributed, RESTful engine with powerful search, analytics, and a scalable vector database, ideal for full-text and semantic search, as well as personalized recommendations.

We added an Elasticsearch Component on 💧 Instill VDP to provide a powerful and flexible solution for indexing, searching, updating, and deleting data within your Elasticsearch databases.

Features

  • Support tasks:

    • Search

    • Vector Search

    • Index

    • Multi Index

    • Update

    • Delete

    • Create Index

    • Delete Index

To request a new component, please go here and click "Request A Component".


Groq: Fast, Affordable, and Energy Efficient AI

changelog cover

Groq solutions are based on the Language Processing Unit (LPU), a new category of processor. LPUs run Large Language Models (LLMs) at substantially faster speeds and, at an architectural level, with up to 10x better energy efficiency compared to GPUs.

We added Groq Component on 💧 Instill VDP to help you achieve fast AI inference at scale for your AI applications.

Features

  • Support text generation chat task with models: llama3.1, llama3, gemma2, gemma, and more

☁️ Instill Cloud demo

To request a new component, please go here and click "Request A Component".


Qdrant: High-Performance Vector Search at Scale

changelog cover

Qdrant is a vector database and similarity search engine designed to handle high-dimensional vectors for performance and massive-scale AI applications.

We added a Qdrant Component on 💧 Instill VDP to integrate with the Qdrant vector search and storage solution, giving you more flexibility and choice when it comes to vector databases.

Features

  • Support tasks:

    • Vector Search

    • Batch Upsert

    • Upsert

    • Delete

    • Create Collection

    • Delete Collection

More at: Instill AI


OpenAI: Structured Outputs

changelog cover

We added OpenAI's latest gpt-4o-2024-08-06 model to OpenAI Component on 💧 Instill VDP so you can build more reliable AI pipelines with Structured Outputs. This update enables you to define JSON schemas for model outputs, ensuring consistent, easily processable, and accurate responses that streamline workflows and enhance error handling. It is ideal for data analysts, developers, business intelligence teams, and customer service automation, particularly when working with large volumes of text data or building robust AI-powered applications that require reliable, structured information.

Features

  • OpenAI's latest model: gpt-4o-2024-08-06

  • New response format option: json_schema
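
For reference, this is what a Structured Outputs request looks like when calling the OpenAI API directly with the official Python package; the component exposes the same gpt-4o-2024-08-06 model and json_schema response format. The event schema below is purely illustrative.

from openai import OpenAI  # official openai package, v1+; expects OPENAI_API_KEY

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Extract the event: 'Team sync on Friday at 10am.'"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "event",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "day": {"type": "string"},
                    "time": {"type": "string"},
                },
                "required": ["title", "day", "time"],
                "additionalProperties": False,
            },
        },
    },
)
print(response.choices[0].message.content)  # JSON that conforms to the schema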

☁️ Instill Cloud demo

More at: Instill AI


Artifact Component

changelog cover

We recently launched 💾 Instill Artifact, the final piece that completes Instill Core as a full-stack AI infrastructure tool. Instill Artifact allows you to unify unstructured data from various sources on one platform and prepare it for AI and RAG tasks. Check out our blog for more details.

We've introduced the Artifact Component to 💧 Instill VDP, giving you the power to effortlessly build data pipelines that turn your files and documents into RAG-ready data. With this, you can easily retrieve relevant data from your catalog for AI chatbots, search, and recommendation systems. Plus, you can directly query your catalog and get instant, LLM-powered answers from your data.

Features

  • Support tasks:

    • Upload Files

    • Get Files Metadata

    • Get Chunks Metadata

    • Get File In Markdown

    • Match File Status

    • Search Chunks

    • Query

☁️ Instill Cloud demo

More at: Instill AI


Introducing Instill Artifact: Getting your data AI-ready

changelog cover

We're excited to introduce 💾 Instill Artifact, a new feature for orchestrating unstructured data like documents, images, audio, and video into Instill Catalog - a unified AI-ready Augmented Data Catalog format for unstructured data and AI applications.

💾 Instill Artifact can help with:

  • Data Preparation: Create high-quality AI-ready data catalogs from uploaded files.

  • Simplicity: Use low-latency API calls or Instill Console's one-click interface.

  • Transparency: Access and control data at catalog, file, or chunk levels.

  • Data Integrity: Ensure AI application reliability with markdown-based source-of-truth.

  • Scalability: Automate unstructured data transformation and handle growing data volumes.

  • Versatility: Process various unstructured, semi-structured, and structured data types and sources.

  • Full-Stack AI Integration: Seamlessly connect with 💧 Instill VDP and ⚗️ Instill Model.

This release allows you to:

  • Create your own Knowledge Base

  • Upload files in formats like PDF, Markdown, and TXT

  • View metadata for each Knowledge Base

  • Examine "Catalog" details for uploaded files

This feature completes 🔮 Instill Core and lays the groundwork for developing full-stack AI solutions by ensuring your data is unified, clean, and ready for insight extraction.

More at: Instill AI


Local Full-Stack AI with Ollama Component

changelog cover

The Ollama Component is a key addition to 🔮 Instill Core, extending our full-stack AI platform to your local laptops! Self-hosting your own 🔮 Instill Core is ideal for secure, efficient, and flexible AI development and deployment.

With this integration, you can access and utilize AI models served by Ollama for tasks such as:

  • Text generation chat

  • Text embedding

For setup, refer to our deployment documentation and the Ollama server tutorial.

Note: You will need to adjust your Docker network settings to ensure connectivity between 🔮 Instill Core and Ollama containers.

More at: Instill AI


Powerful CRM with HubSpot + AI Components

changelog cover

Adding the new HubSpot Component enables you to empower your CRM and marketing workflow use cases together with other AI Components using 💧 Instill VDP. This component will support the following tasks (see docs):

  • Get, Create Contact

  • Get, Create Deal

  • Get, Create Company

  • Get, Create Ticket

  • Get Thread

  • Insert Message

  • Retrieve Association

With these tasks, you will be able to create workflow automations with AI such as:

  • Get message thread from HubSpot, connect it to other AI Components and summarise into action items

  • Apply sentiment analysis to retrieved details of contacts

  • and more!

Try demo on ☁️ Instill Cloud:

More at: Instill AI


Multimedia Processing with Audio and Video Components

changelog cover

Data comes in many formats beyond text, so we've expanded our Operator Components to support a wider range of unstructured data processing methods. This update introduces two new operators: the Audio Component and the Video Component.

These improvements significantly expand our platform's capabilities, letting you split audio files into multiple chunks and sub-sample videos, and giving you more robust tools for unstructured data manipulation on 🔮 Instill Core.

Try demo on ☁️ Instill Cloud:

More at: Instill AI


SQL Component

changelog cover

We have included the SQL Component in this release to help you connect to your SQL databases: MySQL, PostgreSQL, MariaDB, Firebird, and Oracle. This integration supports SQL in your data pipelines on 💧 Instill VDP to allow for more flexible data movement and support the following tasks:

  • Insert

  • Update

  • Select

  • Delete

  • Create Table

  • Drop Table

More at: Instill AI


PM Workflow Automations with Jira + AI Components

changelog cover

For a Project Manager (PM), organizing tickets in a kanban board can get overwhelming really quickly, so we added the new Jira Component to support tasks like:

  • Get boards, issues, epics, and sprints

Using 💧 Instill VDP, the Jira Component can connect with existing AI Components to solve the issue of scattered information. Easily add AI to automatically align tickets in your existing project management tool.

Try demo on ☁️ Instill Cloud:

More at: Instill AI


GPT-4o mini: Cost-efficient small LLM

changelog cover

OpenAI recently released the most cost-efficient small LLM on the market, GPT-4o mini, so we updated our OpenAI Component to support the new model, unlocking the latest AI powers including:

  • Low cost & latency for diverse tasks

  • Text & vision API support

  • Enhanced non-English text processing

  • 99.5% cost reduction per 1,000 Input Tokens vs GPT-4 (see docs)

Try the demo on ☁️ Instill Cloud:

More at: Instill AI


Cohere: Advanced NLP Capabilities

changelog cover

We've expanded our list of AI Components with the new Cohere Component to enable powerful Natural Language Processing (NLP) capabilities on 🔮 Instill Core.

Key Additions:

  • Text Generation Chat

  • Text Embeddings

  • Text Reranking

Expanded NLP Capabilities:

Leverage Cohere's Large Language Models (LLMs) on 🔮 Instill Core for various natural language use cases:

  • Classification

  • Semantic search

  • Paraphrasing

  • Summarization

  • Content generation

Try it on ☁️ Instill Cloud:

This integration empowers you to harness Cohere's advanced Language Models directly within 🔮 Instill Core, opening up new possibilities for sophisticated NLP applications.

We're eager to see how you'll use these new capabilities in your projects!

More at: Instill AI


GitHub: Boost developer workflow

changelog cover

The new GitHub Component on 🔮 Instill Core allows developers to integrate and automate various GitHub operations, boosting development workflow efficiency. Key tasks include:

  • Listing and retrieving pull requests and commits

  • Managing review comments

  • Handling issues—listing, retrieving, and creating

  • Setting up automated notifications and actions via webhooks

Try it on ☁️ Instill Cloud:

We will soon introduce a new event-triggering mechanism for 💧Instill VDP which can be coupled with this to build even more powerful workflow automations.

More at: Instill AI


Mistral: Accelerate building AI

changelog cover

We have added Mistral Component, a powerful new addition to our AI Components family on 🔮 Instill Core.

Key Features:

  1. Text Generation Chat

  2. Text Embeddings

This component enables seamless integration with AI models served on the Mistral Platform, significantly accelerating AI development processes.

Benefits:

  • Streamlined access to Mistral's advanced AI models

  • Enhanced text generation and embedding capabilities

  • Simplified AI integration for developers

Leverage the Mistral component to boost your AI-driven applications and services. Explore the new possibilities in text generation and embeddings today!

Try this short-film script writer pipeline on ☁️ Instill Cloud.


Serverless Model Serving: Optimize AI deployment

changelog cover

The Serverless Model Serving feature on ⚗️ Instill Model offers a more cost-efficient solution for AI deployment by eliminating the need to manage the underlying infrastructure. ⚗️ Instill Model handles the compute resource provisioning, auto-scaling and availability, allowing you, the developers to focus on model development and deployment 🎉.

Key Benefits:

  • Scalability: Automated resource allocation scales to handle varying workloads efficiently.

  • Cost-Efficiency: Compute resources are provisioned on-demand and scaled down to zero when idling.

  • Simplified Management: No need for server management, freeing up your teams to focus on improving models and applications.

  • Quicker Deployment: Seamless integration with 🔮 Instill Core for faster full-stack AI development with data pipeline and knowledge bases.

We've also made significant performance enhancements:

  • Faster image push times

  • Reduced model cold-start times

We’re continuously enhancing ⚗️ Instill Model and will soon introduce Dedicated Model Serving, a model serving service built for production use.

☁️ Instill Cloud users can host public models for free, and we offer free monthly Instill Credits for running any public models hosted on ☁️ Instill Cloud.

More at: Instill AI


Anthropic's Claude 3.5 Sonnet is here

changelog cover

We've added Anthropic as a new AI component on 🔮 Instill Core, allowing users to connect to the AI models served on the Anthropic Platform. Get access to models including the Claude-3-family and most recently, the Claude-3.5-Sonnet. Claude 3.5 Sonnet sets new standards in reasoning, knowledge, and coding, excelling in nuance, humour, and high-quality content. We currently support these tasks.

To quickly get started,

  1. Go to this pipeline on ☁️ Instill Cloud

  2. Type in your input like "What is Instill Core?"

  3. Clone this pipeline & start building your AI projects using the Anthropic AI component 🙌


Build Smarter Emails

changelog cover

You've got a new update! The Email Component on 🔮 Instill Core introduces ways to send and read your emails over standard mail protocols. You can connect to different email servers through the Email Component, including Gmail, Outlook, Yahoo, and iCloud.

💌 Try sending an email using this pipeline on ☁️ Instill Cloud.


Share Resources Efficiently via Namespace Switch

changelog cover

Our new Namespace Switch makes it easier for you to toggle between your individual or organization's namespace. It makes browsing all available resources much quicker and helps you use your Instill Credit more efficiently.

To get started,

  1. Join or create a new organization.

  2. Click the namespace toggle next to the logo in the top left corner to switch between your different namespaces.

  3. Done! You can now use the resources of the currently selected namespace 🎉.

👉 Try it on Instill Cloud now!


Save Your Time with Instill Credit

changelog cover

Our new feature, Instill Credit, makes it easy to adopt Instill Cloud by reducing the time needed to build and set up pipelines.

You can create AI components with out-of-the-box configurations, avoiding the need for external accounts or API keys.

Start Right Away

You get 10,000 free credits every month 🤩, so you can dive right in without worrying about costs.

We've integrated Instill Credit seamlessly into AI components, so accessing models from providers like OpenAI and Stability AI is now a breeze. Wondering if you're using Instill Credit? Just look for Instill credentials preset using ${secrets.INSTILL_SECRET} as input in the supported components.

👉 Learn more about how Instill Credit works. We're continuously expanding support for more AI components with Instill Credit.

If you prefer to use your own keys, simply create a secret on your profile secrets page and reference it in the component configuration.

More Credits on the Way

But wait, there's more! We're gearing up to introduce Credit top-up and auto-billing options soon. Stay tuned for seamless operations without worrying about running out of credits.


Boost Workforce with Slack

changelog cover

We've added a new Instill Component, Slack, to Instill Core, and our team can't wait to use it! This addition enables you to build pipelines to read and write messages from your Slack workspace, making your workflow smoother and communication even better.

With this new feature, you can do cool stuff like:

  • Send Slack messages to LLM and get back helpful insights and actions powered by AI

  • Automate routine tasks, freeing up your team to focus on higher-value work

Try It Now!

  1. Create and install a Slack app

    Follow the Slack API tutorial to create and install a Slack app in your workspace. You'll get a Slack API bot token like "xoxb-******", which we'll need later.

  2. Set up your Slack token as a secret

    Go to Instill Console, click on your Avatar in the top right corner, then go to Settings > Secrets > Create Secret. Use the token you got in the first step to create a secret.

  3. Build a pipeline with the Slack component

    Go to the pipeline builder page, click Component+ at the top left, and select Slack. Use the secret you created in step 2 as the token input in the Slack connection section. Now you're all set to start sending and receiving messages. Just make sure your Slack app is in the right channel.

So go ahead, have some fun with it! Whether you're using it to summarize messages, crack a few jokes, or just boost your team's productivity, we hope it helps you create a more connected and efficient work environment.


Introducing Secret Management for Connector Configuration

changelog cover

We've made some big improvements to how you set up your pipeline connectors, especially when it comes to managing sensitive information like API keys.

Now, you can create secrets, which are encrypted bits of sensitive data. These secrets are reusable and secure. You can use these secrets in your pipelines, but only if you specifically reference them in the connector setup.

When you head to the Settings > Secrets tab, you'll see a complete list of all your secrets. Once you've set them up, you can easily use them in your pipeline connectors.

Let's walk through setting up the OpenAI connector as an example:

To start, go to Settings > Secrets, and click on Create Secret. Enter your OpenAI API key with the ID openai-api-key.

Now, drag and drop the OpenAI connector onto the canvas. Then, access the OpenAI Connection section by clicking on More. Type ${ and select your OpenAI secret, ${secrets.openai-api-key}, from the dropdown.

With the OpenAI connector configured and your own API key in place, you're all set to build your pipeline. Any costs incurred will go directly to the service provider.

What's Next 🔜

In addition to the above Bring Your Own API Key (BYOK) feature, we're thrilled to introduce the upcoming Instill Credit for Instill Cloud! As a bonus, FREE Instill Credits will be available to all Instill Cloud users. Once it's ready, you won't need to register your own 3rd-party vendor account; these credits will grant access to AI models like OpenAI's GPT-4 without any additional setup required.

Stay tuned for more updates!


Instill Cloud Expands: Now Available in North America 🌎

changelog cover

With our recent deployment in the United States, Instill Cloud marks a significant milestone in our global expansion journey. We're extending our commitment to providing seamless cloud solutions across continents.

From Europe to Asia and now North America, our goal remains unchanged: to empower businesses worldwide with reliable, high-performance cloud services. As we continue to grow and evolve, stay tuned for more updates on how Instill Cloud is transforming the way organizations build AI applications.

👉 Try out Instill Cloud today


Llama 3 8B Instruct Model Now Available on Instill Cloud

changelog cover

Just two days ago, Meta unveiled its most powerful openly available LLM to date, Llama 3. For more information, visit here.

Today, you can access the Llama 3 8B Instruct model on Instill Cloud!

How to Use

Here are the steps to use it:

  1. Log in to Instill Cloud.

  2. Use our provided pipeline to compare the capabilities of Llama 3 and Llama 2 side by side.

Alternatively, you can directly call the model. See the API request below:
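
As a rough Python equivalent, the request looks something like the sketch below; the endpoint path and payload field names are assumptions, so copy the exact request from the model's API tab on Instill Cloud.

# Sketch only: the endpoint path and payload field names below are assumptions.
# Copy the exact request from the model's API tab on Instill Cloud.
import requests

API_TOKEN = "<INSTILL CLOUD API TOKEN>"  # replace with your own token
url = "https://api.instill.tech/v1alpha/namespaces/instill-ai/models/llama-3-8b-instruct/trigger"  # assumed path

payload = {
    # Illustrative task input for a text-generation model; check the model page
    # for the exact schema.
    "taskInputs": [{"prompt": "What is Instill Core?"}]
}

resp = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())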

Remember to replace <INSTILL CLOUD API TOKEN> in the API request with your own Instill Cloud API Token.


Instill Cloud Goes Global: From Europe to Asia 🌏

changelog cover

Instill Cloud platform is now deployed across multiple regions!

From the Netherlands in Europe, we've extended our footprint to Singapore in Asia, enhancing accessibility and performance for our users across diverse geographical locations. The US region will join our global network next, further enhancing accessibility for users across North America.

👉 Try out Instill Cloud today and stay tuned for updates as we continue to extend the reach of the Instill Cloud platform to serve you better.


Introducing Iterators for Batch Processing in Your Pipeline

changelog cover

We're excited to introduce a new component type: Iterators. Iterators are designed to process each data element individually within an array or list. This functionality is crucial for efficiently handling data in batches within your pipeline.

Iterator ( [ 🌽 , 🐮 , 🐓 ] , cook) =>  [ 🍿 , 🥩 , 🍗 ]

Enhance Your Workflows with Iterators

Imagine you're summarizing a webpage with extensive content. With an iterator, you can process the substantial content of a webpage, breaking it down into manageable chunks segmented by tokens. This enables you to efficiently generate concise summaries for each section of the page.
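
Conceptually, an iterator applies the same sub-pipeline to every element of an array and collects the results. Here is a plain-Python sketch of that idea; the chunks and the summarize() stand-in are purely illustrative.

# The chunks and summarize() below are stand-ins for the actual pipeline components.
def summarize(chunk: str) -> str:
    return chunk[:60] + "..."  # placeholder for an LLM summarization step

webpage_chunks = ["First section of the page ...", "Second section ...", "Third section ..."]

# The iterator runs the sub-pipeline once per element and collects the outputs.
section_summaries = [summarize(chunk) for chunk in webpage_chunks]
print(section_summaries)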

👉 Try out the pipeline Website Summarizer with Iterator.


LLaVA Level-Up: From 7B to 13B 🔼

changelog cover

We've leveled up the LLaVA model on Instill Cloud from LLaVA 7B to LLaVA 13B!

👉 Dive into our updated tutorial to discover how to unlock the enhanced capabilities of LLaVA.

👉 We've built a visual assistant pipeline using LLaVA 13B. Feel free to try it out by uploading an image and asking a question about it.


Introducing LLaVA 🌋: Your Multimodal Assistant

changelog cover

📣 The latest LLaVA model, LLaVA-v1.6-7B, is now accessible on Instill Cloud!

What's LLaVA?

LLaVA stands for Large Language and Vision Assistant, an open-source multimodal model fine-tuned on multimodal instruction-following data. Despite its training on a relatively small dataset, LLaVA demonstrates remarkable proficiency in comprehending images and answering questions about them. Its capabilities resemble those of multimodal models like GPT-4 with Vision (GPT-4V) from OpenAI.

What's New in LLaVA 1.6?

According to the original blog post, LLaVA-v1.6 boasts several enhancements compared to LLaVA-v1.5:

  • Enhanced Visual Perception: LLaVA now supports images with up to 4x more pixels, allowing it to capture finer visual details. It accommodates three aspect ratios, with resolutions of up to 672x672, 336x1344, and 1344x336.

  • Improved Visual Reasoning and OCR: LLaVA's visual reasoning and Optical Character Recognition (OCR) capabilities have been significantly enhanced, thanks to an improved mixture of visual instruction tuning data.

  • Better Visual Conversations: LLaVA now excels in various scenarios, offering better support for different applications. It also demonstrates improved world knowledge and logical reasoning.

  • Efficient Deployment: LLaVA ensures efficient deployment and inference, leveraging SGLang for streamlined processes.

👉 Dive into our tutorial to learn how to leverage LLaVA's capabilities effectively.


New and Improved Smart Hint

changelog cover

We've improved the Smart Hint feature to offer contextual advice and suggestions throughout your pipeline.

Here's how it functions:

  • Firstly, Smart Hint informs users about the type of content an input field can accept, indicated through placeholder text, making it clear whether references are allowed.

  • Upon interacting with the input field, Smart Hint reveals the specific type of data it supports, such as String, Number, Object, etc.

  • Moreover, when a user enters ${ into an input field, Smart Hint intelligently filters and presents only the relevant hints that are compatible with the field.

This structured guidance method simplifies the process of building your pipeline, enhancing productivity and ease of use.

jq JSON Processor Integration

Take control of your JSON data with seamless jq filter integration within the JSON Operator. Apply custom jq filters directly to your data streams, enabling advanced manipulation and extraction with precision.

To get started, head to the JSON Operator and select TASK_JQ.
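
For example, a jq filter such as .posts[].author streams every author field out of a nested JSON document. For comparison, here is the same extraction sketched in plain Python; the data and filter are illustrative.

# Illustrative data: what a jq filter like `.posts[].author` extracts.
data = {
    "posts": [
        {"title": "Hello VDP", "author": "alice"},
        {"title": "Pipelines as code", "author": "bob"},
    ]
}

authors = [post["author"] for post in data["posts"]]
print(authors)  # ['alice', 'bob'], the same values the jq filter streams out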

Video Input Support

Unlock limitless possibilities with the addition of video input support. Seamlessly ingest and process video content within your workflows, enabling dynamic data analysis, transformation, and the use of models that support video content.

Improvements

  • Improved performance and stability ensure a seamless user experience.

  • Enhanced user interface with refined visual cues and streamlined navigation for increased productivity.

Thank You for Your Contributions 🙌

We highly value your feedback, so feel free to initiate a conversation on GitHub Discussions or join us in the #general channel on Discord.

Huge thanks to rsmelo92 for their contribution!


Instill Python SDK: Pipeline Creation made Easier

changelog cover

We are thrilled to announce a significant upgrade to our Python SDK, aimed at streamlining and enhancing the pipeline creation and resource configuration process for connectors and operators.

One of the key challenges users faced in previous versions was the difficulty in understanding the required configurations for connectors and operators. With the implementation of JSON schema validation, type hints, and helper classes that define all the necessary fields, users no longer need to refer to extensive documentation to configure resources. This ensures a more straightforward and error-free setup.

This update marks a significant step forward in improving user experience and reducing friction in the configuration process. We believe that these enhancements will empower users to make end-to-end use of the Python SDK: resource configuration, pipeline creation, debugging/editing/updating, and deployment, all from within their tech stack.

You can find a complete pipeline setup example with Python SDK on our GitHub repo.

Your feedback is crucial to us, so please don't hesitate to share your thoughts and experiences with the improved configuration workflow. We're committed to continually enhancing our platform to meet your needs and provide you with the best possible user experience.

To set up and get started with the SDKs, head over to their respective GitHub Repos:


New Pipeline Component Reference Schema: ${}

changelog cover

We're excited to introduce an enhanced Component Reference Schema that will make pipeline building even more straightforward. With this update, referencing a component field is as simple as using ${}. When establishing connections between components, there's no need to distinguish between {} or {{}} anymore. Just type ${ in any form field within a component, and our Smart Hint feature will promptly suggest available connections.

Rest assured, you won't need to manually update your existing pipelines due to this change from our previous reference schemas {} and {{}}. We've seamlessly migrated all pipelines to this new format.

Explore Our OpenAPI Documentation

We have just launched our OpenAPI Documentation, which contains all the essential information required for smooth integration with our APIs. Please visit our OpenAPI Documentation to access comprehensive details. You can even insert your Instill Cloud API token in the Authorization Header to experiment with the APIs on our platform.

Bearer <INSTILL CLOUD API TOKEN>
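
For example, you can send that header from Python with the requests library. This is a minimal sketch; the endpoint path is illustrative and the base URL follows the API examples elsewhere in this changelog, so check the OpenAPI Documentation for the actual routes.

import requests

API_TOKEN = "<INSTILL CLOUD API TOKEN>"  # your Instill Cloud API token

resp = requests.get(
    "https://api.instill.tech/v1alpha/pipelines",  # illustrative endpoint; see the OpenAPI docs
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
print(resp.status_code, resp.json())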

User-friendly Error Messages

We understand the importance of clear communication, especially when errors occur. That's why we've revamped our error messages to be more user-friendly and informative. You'll now see which component in the pipeline the error is related to, allowing you to quickly identify and address issues, and speeding up the debugging process.

Instill Cloud Console URL Migration

We have successfully migrated the Instill Cloud console URL from https://console.instill.tech to https://instill.tech. Please make sure to update your bookmarks accordingly.

Thank You for Your Contribution 🙌

We highly value your feedback, so feel free to initiate a conversation on GitHub Discussions or join us in the #general channel on Discord.

Huge thanks to @chenhunghan and @HariBhandari07 for their contributions!


Introducing Zephyr-7b, Stable Diffusion XL, ControlNet Canny on Instill Model 🔮

changelog cover

Open-source models empower you and your businesses to leverage them within your own infrastructure, granting you complete control over your data and ensuring the privacy of sensitive information within your network. This approach mitigates the risk of data breaches and unauthorized access, affording you the flexibility of not being bound to a single hosting vendor.

With Instill Model, you gain access to a wide range of cutting-edge open-source models. Now, you can seamlessly use Zephyr-7b, Stable Diffusion XL, and ControlNet Canny in an integrated environment for FREE.

Zephyr-7b is a fine-tuned version of Mistral-7b created by Hugging Face. It is trained on public datasets but also optimized with knowledge distillation techniques. It's lightweight and fast, although 'intent alignment' may suffer. Read more about it and distillation techniques here.

Stable Diffusion XL empowers you to craft detailed images with concise prompts and even generate text within those images!

ControlNet Canny allows you to control the outputs of Stable Diffusion models and hence manipulate images in innovative ways.

We are committed to expanding our offering of open-source models, so stay tuned for more!


Instill VDP, Now on Beta! 🦾

changelog cover

In this Beta release, we are delighted to announce that the Instill VDP API has now achieved a state of stability and reliability. This marks a pivotal moment for Instill VDP, and we are committed to ensuring the continued robustness of our API.

We recognize the importance of a seamless transition for our users and pledge to avoid any disruptive or breaking changes. This commitment means that you can confidently build upon the current version of the Instill VDP API, knowing that your existing integrations and workflows will remain intact.

This achievement represents a major milestone in our journey to provide the best experience for our users. We understand the critical role that stability and consistency play in your development efforts, and we are excited to take this step forward together with you.

As we move forward, we will keep striving to enhance the features of Instill VDP, listening to your feedback, and working closely with our community to deliver even more value and innovation. Thank you for being part of this exciting journey with us.

Support JSON input

Our Start operator now supports JSON inputs. Leveraging JSON as an input format enables the handling of semi-structured data, providing a well-organized representation of information. This is especially valuable when passing a payload to a REST API connector or when utilizing the Pinecone connector to insert records that include metadata in the form of key-value pairs.

Use Redis as a chat history store

We've introduced a Redis connector to facilitate chat history functionality. Redis, a widely used in-memory data store, enables the efficient implementation of a chat history system. Messages from a given chat session, identified by a unique Session ID, can be stored consistently, complete with their associated roles like "system," "user," and "assistant." Moreover, you have the flexibility to configure the retrieval of the latest K messages and integrate them seamlessly with a Large Language Model (LLM), such as OpenAI's GPT models, allowing you to develop a chatbot with full control over your data.
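
Conceptually, the chat-history pattern looks like the sketch below, shown with the plain redis Python client rather than the connector's own configuration; the key name and messages are illustrative.

import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
session_key = "chat:session:42"  # one list of messages per Session ID

def add_message(role: str, content: str) -> None:
    """Append a message ("system", "user", or "assistant") to the session history."""
    r.rpush(session_key, json.dumps({"role": role, "content": content}))

def latest_messages(k: int = 5) -> list:
    """Retrieve the latest K messages, oldest first."""
    return [json.loads(m) for m in r.lrange(session_key, -k, -1)]

add_message("user", "What is Instill VDP?")
add_message("assistant", "A pipeline builder for unstructured data and AI.")
print(latest_messages(k=2))  # feed these into an LLM prompt as chat history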

Improve RAG capabilities

We now support metadata in the Pinecone connector to enhance the efficiency and precision of your vector searches. With the ability to associate metadata key-value pairs with your vectors, you can now access not only the most similar records after your data has been indexed but also retrieve the associated metadata. Additionally, you can define filter expressions during queries, enabling you to refine your vector searches based on metadata. This functionality proves invaluable when dealing with extensive datasets where precision and efficiency in retrieval are paramount.
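
To illustrate what metadata and filter expressions enable, here is a sketch using the Pinecone Python client directly (the older pinecone.init-style API; newer client versions expose a Pinecone class). The index name, vectors, and filter are illustrative.

import pinecone  # pinecone-client v2-style API

pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("articles")  # illustrative index name

# Upsert vectors together with metadata key-value pairs.
index.upsert(vectors=[
    ("doc-1", [0.1, 0.2, 0.3, 0.4], {"source": "blog", "year": 2023}),
    ("doc-2", [0.2, 0.1, 0.4, 0.3], {"source": "docs", "year": 2022}),
])

# Query with a filter expression so only records with matching metadata are searched.
matches = index.query(
    vector=[0.1, 0.2, 0.3, 0.4],
    top_k=3,
    filter={"source": {"$eq": "blog"}},
    include_metadata=True,
)
print(matches)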

Unlock Instill VDP's full potential with Instill Hub!

Welcome to Instill Hub, your go-to platform for all things Instill VDP! Here, you have the opportunity to explore and use pipelines that have been contributed by fellow members. These pipelines are designed to provide you with the most up-to-date and efficient AI capabilities. You can seamlessly trigger these pipelines for your projects, or if you're feeling creative, you can even clone them and customize them to suit your unique needs.

But that's not all – we strongly encourage you to share your pipelines with the Instill Hub community. By doing so, you'll be contributing to the growth and success of our platform, helping others in the community benefit from your expertise.

So, whether you're eager to explore, eager to contribute, or simply excited to learn from others, Instill Hub is here to foster a supportive and collaborative community around Instill VDP. Join Instill Cloud today and let's work together to make AI innovation accessible to everyone!