Instill AI changelog

Save Your Time with Instill Credit

changelog cover

Our new feature, Instill Credit, makes it easy to adopt Instill Cloud by reducing the time needed to build and set up pipelines.

You can create AI components with out-of-the-box configurations, avoiding the need for external accounts or API keys.

Start Right Away

Get started with 10,000 free credits every month 🤩, so you can dive right in without worrying about costs.

We've integrated Instill Credit seamlessly into AI components, so accessing models from providers like OpenAI and Stability AI is now a breeze. Wondering if you're using Instill Credit? Just look for the Instill credentials preset, ${secrets.INSTILL_SECRET}, used as input in the supported components.

👉 Learn more about how Instill Credit works. We're continuously expanding support for more AI components with Instill Credit.

If you prefer to use your own keys, simply create a secret on your profile secrets page and reference it in the component configuration.

More Credits on the Way

But wait, there's more! We're gearing up to introduce Credit top-up and auto-billing options soon. Stay tuned for seamless operations without worrying about running out of credits.


Boost Workforce with Slack

changelog cover

We've added a new Instill component, Slack, to Instill Core, and our team can't wait to use it! This addition enables you to build pipelines to read and write messages from your Slack workspace, making your workflow smoother and communication even better.

With this new feature, you can do cool stuff like:

  • Send Slack messages to an LLM and get back helpful insights and actions powered by AI 💡.

  • Automate routine tasks, freeing up your team to focus on higher-value work 🦾.

Try It Now!

  1. Create and install a Slack app

    Follow the Slack API tutorial to create and install a Slack app in your workspace. You'll get a Slack API bot token like "xoxb-******", which we'll need later.

  2. Set up your Slack token as a secret

    Go to Instill Console, click on your Avatar in the top right corner, then go to Settings > Secrets > Create Secret. Use the token you got in the first step to create a secret.

  3. Build a pipeline with the Slack component

    Go to the pipeline builder page, click Component+ at the top left, and select Slack. Use the secret you created in step 2 as the token input in the Slack connection section. Now you're all set to start sending and receiving messages. Just make sure your Slack app is in the right channel.
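Before saving the token from step 1 as a secret, a quick shape check can catch copy-paste mistakes. The "xoxb-" prefix for bot tokens is documented by Slack; the helper itself is a hypothetical illustration, not part of Instill Core:

```python
# Minimal sanity check for a Slack bot token before storing it as a secret.
# Slack bot tokens start with "xoxb-"; user tokens start with "xoxp-".
# This helper is illustrative only.

def looks_like_bot_token(token: str) -> bool:
    """Return True if the string has the shape of a Slack bot token."""
    return token.startswith("xoxb-") and len(token) > len("xoxb-")

print(looks_like_bot_token("xoxb-1234-abcd"))  # a plausible bot token
print(looks_like_bot_token("xoxp-1234-abcd"))  # a user token, not a bot token
```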

So go ahead, have some fun with it! Whether you're using it to summarize messages, crack a few jokes, or just boost your team's productivity, we hope it helps you create a more connected and efficient work environment.


Introducing Secret Management for Connector Configuration

changelog cover

We've made some big improvements to how you set up your pipeline connectors, especially when it comes to managing sensitive information like API keys.

Now, you can create secrets, which are encrypted bits of sensitive data. These secrets are reusable and secure. You can use these secrets in your pipelines, but only if you specifically reference them in the connector setup.

When you head to the Settings > Secrets tab, you'll see a complete list of all your secrets. Once you've set them up, you can easily use them in your pipeline connectors.

Let's walk through setting up the OpenAI connector as an example:

To start, go to Settings > Secrets, and click on Create Secret. Enter your OpenAI API key with the ID openai-api-key.

Now, drag and drop the OpenAI connector onto the canvas. Then, access the OpenAI Connection section by clicking on More. Type ${ and select your OpenAI secret, ${secrets.openai-api-key}, from the dropdown.

With the OpenAI connector configured and your own API key in place, you're all set to build your pipeline. Any costs incurred will go directly to the service provider.
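Conceptually, a ${secrets.<id>} reference is resolved against your secret store at run time. Instill does this server side; the sketch below only illustrates the substitution idea with a made-up secret value:

```python
import re

# Illustration only: resolving "${secrets.<id>}" references in a connector
# configuration against a secret store. Instill performs this resolution
# server side; the secret value here is a made-up example.

SECRET_REF = re.compile(r"\$\{secrets\.([A-Za-z0-9_-]+)\}")

def resolve_secrets(config_value: str, secrets: dict) -> str:
    """Replace each ${secrets.<id>} reference with the stored value."""
    return SECRET_REF.sub(lambda m: secrets[m.group(1)], config_value)

secrets = {"openai-api-key": "sk-demo-123"}  # hypothetical stored secret
print(resolve_secrets("${secrets.openai-api-key}", secrets))  # sk-demo-123
```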

What's Next 🔜

In addition to the above Bring Your Own API Key (BYOK) feature, we're thrilled to introduce the upcoming Instill Credit for Instill Cloud! As a bonus, FREE Instill Credits will be available to all Instill Cloud users. Once it's ready, you won't need to register your own 3rd-party vendor account; these credits will grant access to AI models like OpenAI's GPT-4 without any additional setup required.

Stay tuned for more updates!


Instill Cloud Expands: Now Available in North America 🌎

changelog cover

With our recent deployment in the United States, Instill Cloud marks a significant milestone in our global expansion journey. We're extending our commitment to providing seamless cloud solutions across continents.

From Europe to Asia and now North America, our goal remains unchanged: to empower businesses worldwide with reliable, high-performance cloud services. As we continue to grow and evolve, stay tuned for more updates on how Instill Cloud is transforming the way organizations build AI applications.

👉 Try out Instill Cloud today


Llama 3 8B Instruct Model Now Available on Instill Cloud

changelog cover

Just two days ago, Meta unveiled its most powerful openly available LLM to date, Llama 3. For more information, see here.

Today, you can access the Llama 3 8B Instruct model on Instill Cloud!

How to Use

Here are the steps to use it:

  1. Log in to Instill Cloud.

  2. Use our provided pipeline to compare the capabilities of Llama 3 and Llama 2 side by side.

Alternatively, you can directly call the model. See the API request below:

Remember to replace <INSTILL CLOUD API TOKEN> in the API request with your own Instill Cloud API Token.
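The original request snippet isn't reproduced here, but its general shape can be sketched as follows. The endpoint path is deliberately elided and the payload shape is an assumption for illustration; copy the exact request from the Instill Cloud API docs. The sketch only builds the request so you can inspect the headers:

```python
import json
import urllib.request

# Sketch of an Instill Cloud model trigger request, built but not sent.
# The endpoint path is elided and the payload shape is an assumption;
# consult the API docs for the exact request. Replace the token
# placeholder with your own Instill Cloud API Token.

API_TOKEN = "<INSTILL CLOUD API TOKEN>"
url = "https://api.instill.tech/..."  # actual model trigger path elided

payload = {"inputs": [{"prompt": "Hello, Llama 3!"}]}  # assumed shape
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_header("Authorization"))  # Bearer <INSTILL CLOUD API TOKEN>
```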


Instill Cloud Goes Global: From Europe to Asia 🌏

changelog cover

The Instill Cloud platform is now deployed across multiple regions!

From the Netherlands in Europe, we've extended our footprint to Singapore in Asia, enhancing accessibility and performance for our users across diverse geographical locations. The US region will join our global network next, further enhancing accessibility for users across North America.

👉 Try out Instill Cloud today and stay tuned for updates as we continue to extend the reach of the Instill Cloud platform to serve you better.


Introducing Iterators for Batch Processing in Your Pipeline

changelog cover

We're excited to introduce a new component type: Iterators. Iterators are designed to process each data element individually within an array or list. This functionality is crucial for efficiently handling data in batches within your pipeline.

Iterator ( [ ๐ŸŒฝ , ๐Ÿฎ , ๐Ÿ“ ] , cook) => ย [ ๐Ÿฟ , ๐Ÿฅฉ , ๐Ÿ— ]

Enhance Your Workflows with Iterators

Imagine you're summarizing a webpage with extensive content. With an iterator, you can process the substantial content of a webpage, breaking it down into manageable chunks segmented by tokens. This enables you to efficiently generate concise summaries for each section of the page.
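The iterator semantics boil down to applying one sub-pipeline to every element of an input array and collecting the results in order. A minimal sketch, with a stand-in Python function playing the role of the nested components:

```python
# The iterator pattern: apply one operation ("cook") to every element of
# an input array. In Instill VDP the operation would be the components
# nested inside the iterator; here it's a stand-in Python function.

def cook(ingredient: str) -> str:
    recipes = {"corn": "popcorn", "cow": "steak", "chicken": "drumstick"}
    return recipes[ingredient]

def iterate(elements, fn):
    """Run fn on each element and collect the outputs in order."""
    return [fn(element) for element in elements]

print(iterate(["corn", "cow", "chicken"], cook))  # ['popcorn', 'steak', 'drumstick']
```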

👉 Try out the pipeline Website Summarizer with Iterator.


LLaVA Level-Up: From 7B to 13B 🔼

changelog cover

We've leveled up the LLaVA model on Instill Cloud from LLaVA 7B to LLaVA 13B!

👉 Dive into our updated tutorial to discover how to unlock the enhanced capabilities of LLaVA.

👉 We've built a visual assistant pipeline using LLaVA 13B. Feel free to try it out by uploading an image and asking a question about it.


Introducing LLaVA 🌋: Your Multimodal Assistant

changelog cover

📣 The latest LLaVA model, LLaVA-v1.6-7B, is now accessible on Instill Cloud!

What's LLaVA?

LLaVA stands for Large Language and Vision Assistant, an open-source multimodal model fine-tuned on multimodal instruction-following data. Despite its training on a relatively small dataset, LLaVA demonstrates remarkable proficiency in comprehending images and answering questions about them. Its capabilities resemble those of multimodal models like GPT-4 with Vision (GPT-4V) from OpenAI.

What's New in LLaVA 1.6?

According to the original blog post, LLaVA-v1.6 boasts several enhancements compared to LLaVA-v1.5:

  • Enhanced Visual Perception: LLaVA now supports images with up to 4x more pixels, allowing it to capture finer visual details. It accommodates three aspect ratios, with resolutions of up to 672x672, 336x1344, and 1344x336.

  • Improved Visual Reasoning and OCR: LLaVA's visual reasoning and Optical Character Recognition (OCR) capabilities have been significantly enhanced, thanks to an improved mixture of visual instruction tuning data.

  • Better Visual Conversations: LLaVA now excels in various scenarios, offering better support for different applications. It also demonstrates improved world knowledge and logical reasoning.

  • Efficient Deployment: LLaVA ensures efficient deployment and inference, leveraging SGLang for streamlined processes.

👉 Dive into our tutorial to learn how to leverage LLaVA's capabilities effectively.


New and Improved Smart Hint

changelog cover

We've improved the Smart Hint feature to offer contextual advice and suggestions throughout your pipeline.

Here's how it functions:

  • Firstly, Smart Hint informs users about the type of content an input field can accept, indicated through placeholder text, making it clear whether references are allowed.

  • Upon interacting with the input field, Smart Hint reveals the specific type of data it supports, such as String, Number, Object, etc.

  • Moreover, when a user enters ${ into an input field, Smart Hint intelligently filters and presents only the relevant hints that are compatible with the field.

This structured guidance method simplifies the process of building your pipeline, enhancing productivity and ease of use.
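The third behavior above, filtering hints to those compatible with a field, can be sketched as a simple lookup. The reference names and types below are invented for illustration and are not Instill's actual hint catalog:

```python
# Sketch of Smart Hint's filtering step: once "${" is typed, suggest only
# references whose declared type matches the field. Reference names and
# types here are made up for illustration.

AVAILABLE_REFS = {
    "start.prompt": "String",
    "start.top_k": "Number",
    "openai.texts": "String",
}

def compatible_hints(field_type: str) -> list:
    """Return reference hints whose declared type matches the field."""
    return sorted(ref for ref, t in AVAILABLE_REFS.items() if t == field_type)

print(compatible_hints("String"))  # ['openai.texts', 'start.prompt']
```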

jq JSON Processor Integration

Take control of your JSON data with seamless jq filter integration within the JSON Operator. Apply custom jq filters directly to your data streams, enabling advanced manipulation and extraction with precision.

To get started, head to the JSON Operator and select TASK_JQ.
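As a point of reference, here is the kind of transformation a jq filter such as `.posts[].title` performs, reproduced in plain Python. The filter string and data are made-up examples, not Instill defaults:

```python
# What a TASK_JQ filter like ".posts[].title" does to a JSON document,
# shown in plain Python. The data is an invented example.

data = {
    "posts": [
        {"title": "Hello VDP", "likes": 3},
        {"title": "Iterators 101", "likes": 7},
    ]
}

titles = [post["title"] for post in data["posts"]]  # ≈ jq '.posts[].title'
print(titles)  # ['Hello VDP', 'Iterators 101']
```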

Video Input Support

Unlock limitless possibilities with the addition of video input support. Seamlessly ingest and process video content within your workflows, enabling dynamic data analysis, transformation, and the use of models that support video content.

Improvements

  • Improved performance and stability ensure a seamless user experience.

  • Enhanced user interface with refined visual cues and streamlined navigation for increased productivity.

Thank You for Your Contributions 🙌

We highly value your feedback, so feel free to initiate a conversation on GitHub Discussions or join us in the #general channel on Discord.

Huge thanks to rsmelo92 for their contribution!


Instill Python SDK: Pipeline Creation Made Easier

changelog cover

We are thrilled to announce a significant upgrade to our Python SDK, aimed at streamlining and enhancing the pipeline creation and resource configuration process for connectors and operators.

One of the key challenges users faced in previous versions was the difficulty in understanding the required configurations for connectors and operators. With the implementation of JSON schema validation, type hints, and helper classes that define all the necessary fields, users no longer need to refer to extensive documentation to configure resources. This ensures a more straightforward and error-free setup.
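The idea behind typed helper classes can be sketched with a small dataclass: every required field is visible in the class definition, and a bad value fails fast instead of surfacing later. All class and field names below are hypothetical, not the actual Instill SDK API:

```python
from dataclasses import dataclass, fields

# Illustration of the helper-class idea: a typed configuration object
# surfaces every required field and rejects wrongly typed values up
# front. Names are hypothetical, not the real Instill SDK API.

@dataclass
class ConnectorConfig:
    api_key: str
    model: str
    temperature: float = 1.0

    def __post_init__(self):
        # Enforce the annotated types at construction time.
        for f in fields(self):
            if not isinstance(getattr(self, f.name), f.type):
                raise TypeError(f"{f.name} must be {f.type.__name__}")

cfg = ConnectorConfig(api_key="sk-demo", model="gpt-4")
print(cfg.temperature)  # 1.0
```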

This update marks a significant step forward in improving user experience and reducing friction in the configuration process. We believe that these enhancements will empower users to make end-to-end use of the Python SDK: resource configuration, pipeline creation, debugging, editing, updating, and deployment from within their tech stack.

You can find a complete pipeline setup example with Python SDK on our GitHub repo.

Your feedback is crucial to us, so please don't hesitate to share your thoughts and experiences with the improved configuration workflow. We're committed to continually enhancing our platform to meet your needs and provide you with the best possible user experience.

To set up and get started with the SDKs, head over to their respective GitHub Repos:


New Pipeline Component Reference Schema: ${}

changelog cover

We're excited to introduce an enhanced Component Reference Schema that will make pipeline building even more straightforward. With this update, referencing a component field is as simple as using ${}. When establishing connections between components, there's no need to distinguish between {} or {{}} anymore. Just type ${ in any form field within a component, and our Smart Hint feature will promptly suggest available connections.

Rest assured, you won't need to manually update your existing pipelines due to this change from our previous reference schemas {} and {{}}. We've seamlessly migrated all pipelines to this new format.
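The mapping from the old reference styles to the new one is mechanical. The real migration was done server side; this sketch just illustrates how {field} and {{field}} rewrite to ${field} while existing ${field} references are left untouched:

```python
import re

# Illustration of the reference-schema migration: rewrite the old {field}
# and {{field}} styles to the new ${field} schema, leaving strings that
# already use ${} untouched. The real migration was done server side.

def migrate_references(text: str) -> str:
    text = re.sub(r"\{\{\s*([\w.-]+)\s*\}\}", r"${\1}", text)     # {{field}} -> ${field}
    return re.sub(r"(?<!\$)\{\s*([\w.-]+)\s*\}", r"${\1}", text)  # {field}  -> ${field}

print(migrate_references("{start.prompt} {{openai.texts}} ${kept.as_is}"))
# ${start.prompt} ${openai.texts} ${kept.as_is}
```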

Explore Our OpenAPI Documentation

We have just launched our OpenAPI Documentation, which contains all the essential information required for smooth integration with our APIs. Please visit our OpenAPI Documentation to access comprehensive details. You can even insert your Instill Cloud API token in the Authorization Header to experiment with the APIs on our platform.

Bearer <INSTILL CLOUD API TOKEN>

User-friendly Error Messages

We understand the importance of clear communication, especially when errors occur. That's why we've revamped our error messages to be more user-friendly and informative. You'll now see which component in the pipeline the error is related to, allowing you to quickly identify and address issues, and speeding up the debugging process.

Instill Cloud Console URL Migration

We have successfully migrated the Instill Cloud console URL from https://console.instill.tech to https://instill.tech. Please make sure to update your bookmarks accordingly.

Thank You for Your Contribution 🙌

We highly value your feedback, so feel free to initiate a conversation on GitHub Discussions or join us in the #general channel on Discord.

Huge thanks to @chenhunghan and @HariBhandari07 for their contributions!


Introducing Zephyr-7b, Stable Diffusion XL, ControlNet Canny on Instill Model 🔮

changelog cover

Open-source models empower you and your businesses to leverage them within your own infrastructure, granting you complete control over your data and ensuring the privacy of sensitive information within your network. This approach mitigates the risk of data breaches and unauthorized access, affording you the flexibility of not being bound to a single hosting vendor.

With Instill Model, you gain access to a wide range of cutting-edge open-source models. Now, you can seamlessly use Zephyr-7b, Stable Diffusion XL, and ControlNet Canny in an integrated environment for FREE.

Zephyr-7b is a fine-tuned version of Mistral-7b created by Hugging Face. It is trained on public datasets but also optimized with knowledge distillation techniques. It's lightweight and fast, although 'intent alignment' may suffer. Read more about it and distillation techniques here.

Stable Diffusion XL empowers you to craft detailed images with concise prompts and even generate text within those images!

ControlNet Canny allows you to control the outputs of Stable Diffusion models and hence manipulate images in innovative ways.

We are committed to expanding our offering of open-source models, so stay tuned for more!


Instill VDP, Now on Beta! 🦾

changelog cover

In this Beta release, we are delighted to announce that the Instill VDP API has now achieved a state of stability and reliability. This marks a pivotal moment for Instill VDP, and we are committed to ensuring the continued robustness of our API.

We recognize the importance of a seamless transition for our users and pledge to avoid any disruptive or breaking changes. This commitment means that you can confidently build upon the current version of the Instill VDP API, knowing that your existing integrations and workflows will remain intact.

This achievement represents a major milestone in our journey to provide the best experience for our users. We understand the critical role that stability and consistency play in your development efforts, and we are excited to take this step forward together with you.

As we move forward, we will keep striving to enhance the features of Instill VDP, listening to your feedback, and working closely with our community to deliver even more value and innovation. Thank you for being part of this exciting journey with us.

Support JSON input

Our Start operator now supports JSON inputs. Leveraging JSON as an input format enables the handling of semi-structured data, providing a well-organized representation of information. This is especially valuable when constructing a payload for the REST API connector or when utilizing the Pinecone connector to insert records whose metadata takes the form of key-value pairs.

Use Redis as a chat history store

We've introduced a Redis connector to facilitate chat history functionality. Redis, a widely used in-memory data store, enables the efficient implementation of a chat history system. Messages from a given chat session, identified by a unique Session ID, can be stored consistently, complete with their associated roles like "system," "user," and "assistant." Moreover, you have the flexibility to configure the retrieval of the latest K messages and integrate them seamlessly with a Large Language Model (LLM), such as OpenAI's GPT models, allowing you to develop a chatbot with full control over your data.
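The chat-history pattern the Redis connector provides, sketched with an in-memory dict so it runs anywhere. In a real deployment the per-session message list would live in Redis; the class and method names here are illustrative, not the connector's API:

```python
from collections import defaultdict

# In-memory sketch of the chat-history pattern: messages are stored per
# Session ID with their roles, and the latest K can be retrieved to
# prompt an LLM. In production the Redis connector backs this store.

class ChatHistory:
    def __init__(self):
        self._sessions = defaultdict(list)

    def append(self, session_id: str, role: str, content: str):
        self._sessions[session_id].append({"role": role, "content": content})

    def latest(self, session_id: str, k: int):
        """Return the latest k messages, oldest first."""
        return self._sessions[session_id][-k:]

history = ChatHistory()
history.append("sess-1", "system", "You are a helpful assistant.")
history.append("sess-1", "user", "Hi!")
history.append("sess-1", "assistant", "Hello! How can I help?")
print(history.latest("sess-1", 2))
```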

Improve RAG capabilities

We now support metadata in the Pinecone connector to enhance the efficiency and precision of your vector searches. With the ability to associate metadata key-value pairs with your vectors, you can now access not only the most similar records after your data has been indexed but also retrieve the associated metadata. Additionally, you can define filter expressions during queries, enabling you to refine your vector searches based on metadata. This functionality proves invaluable when dealing with extensive datasets where precision and efficiency in retrieval are paramount.
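The metadata-filter idea can be shown in miniature: score vectors by similarity, but only among records whose metadata matches the filter expression. The records, field names, and scoring below are invented for illustration and much simpler than Pinecone's actual query engine:

```python
# Miniature version of a metadata-filtered vector query: restrict the
# candidate set with an equality filter, then rank by dot product.
# Records and field names are invented for illustration.

records = [
    {"id": "a", "vector": [1.0, 0.0], "metadata": {"source": "docs"}},
    {"id": "b", "vector": [0.9, 0.1], "metadata": {"source": "blog"}},
    {"id": "c", "vector": [0.0, 1.0], "metadata": {"source": "docs"}},
]

def query(vector, filter_eq, top_k=1):
    """Dot-product search restricted to records matching the metadata filter."""
    candidates = [
        r for r in records
        if all(r["metadata"].get(k) == v for k, v in filter_eq.items())
    ]
    scored = sorted(
        candidates,
        key=lambda r: sum(a * b for a, b in zip(vector, r["vector"])),
        reverse=True,
    )
    return [r["id"] for r in scored[:top_k]]

print(query([1.0, 0.0], {"source": "docs"}))  # ['a']  ("b" is excluded by the filter)
```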

Unlock Instill VDP's full potential with Instill Hub!

Welcome to Instill Hub, your go-to platform for all things Instill VDP! Here, you have the opportunity to explore and use pipelines that have been contributed by fellow members. These pipelines are designed to provide you with the most up-to-date and efficient AI capabilities. You can seamlessly trigger these pipelines for your projects, or if you're feeling creative, you can even clone them and customize them to suit your unique needs.

But that's not all โ€“ we strongly encourage you to share your pipelines with the Instill Hub community. By doing so, you'll be contributing to the growth and success of our platform, helping others in the community benefit from your expertise.

So, whether you're eager to explore, eager to contribute, or simply excited to learn from others, Instill Hub is here to foster a supportive and collaborative community around Instill VDP. Join Instill Cloud today and let's work together to make AI innovation accessible to everyone!