Introducing the Haystack Integration

Dec 11, 2023

Gradient Team

Today, Gradient is excited to announce the integration of its cutting-edge LLM development platform with the Haystack framework.

Today, we’re excited to announce our partnership with Haystack by deepset, as one of the first integration partners working directly with Haystack 2.0. While Haystack 2.0 is still in beta, it introduces major improvements to how you can build and customize production-ready LLM applications - including a complete overhaul of the framework - while maintaining the fundamental abstractions that contributed to its prior success (e.g. pipelines, document stores, and nodes).

This partnership brings together two leading tools in the AI sector, giving enterprise businesses and developers the ability to build AI-focused applications with greater efficiency, precision, and room for innovation.

About the Gradient Developer Platform

Gradient’s platform allows users to customize and enhance LLMs with domain-specific data, enabling high precision outputs tailored for individual use cases. When building on Gradient, users have full control of the data they use and ownership of the model they create.

About the Haystack Framework

Haystack is an open-source LLM framework in Python by deepset that’s used for building production-ready natural language processing (NLP) applications. Developers and enterprise businesses can easily try out the latest models in NLP, while maintaining flexibility and ease-of-use.

Haystack provides two main structures: components and pipelines. A pipeline can comprise any number of components (such as the new Gradient components), where each component is responsible for a task (such as retrieval, ranking, generation, etc.). Pipelines can branch out for concurrent tasks, support conditional routing, and contain loops.

What this Integration Means

The Gradient and Haystack integration enables users to use their Gradient models for desired tasks in a Haystack pipeline. Within the Haystack framework, users can call the Gradient API to run inference on their own LLMs that are built on the Gradient platform, or call the Gradient Embeddings API to generate vector embeddings from text.

Let’s break down a few examples of how you could leverage the Gradient integration in a Haystack pipeline, as part of your AI development process.

  1. Vector Embeddings: Whether you’re using an existing model or a custom model that’s built on the Gradient platform, you can leverage the Gradient Embedding API to create embeddings from documents (e.g. indexing via GradientDocumentEmbedder) and store the documents and the generated embeddings into a vector store of your choice. You can also create embeddings from text (e.g. queries via GradientTextEmbedder) to conduct vector search and retrieval within a query pipeline.

  2. Retrieval Augmented Generation (RAG): The Gradient and Haystack integration enables users to effectively implement RAG on top of their private, custom models in the Gradient platform (e.g. via GradientGenerator). With the Haystack framework, users can generate enhanced completions based on additional information retrieved from an indexed knowledge database.

  3. Agent Pipelines (Coming Soon): While agent capabilities are not yet available in the beta release of Haystack 2.0, the Gradient and Haystack integration will let you use your Gradient models in agent pipelines as well. That means you can fine-tune multiple LLMs on Gradient with data sets tailored to different domains and/or tasks, then select the best model for each component of an application - enhancing accuracy and adaptability.

Get started with our developer documentation today.

Try Gradient or Join Us

Gradient is now available to enterprise businesses and developers globally. Sign up and get started with $5 in workspace credits.

Enterprise businesses and developers are only beginning to see the potential of LLMs in various applications, and we’re excited to help simplify this process. For anyone looking to join us in this mission, we’re hiring!