Using Haystack

Haystack provides a robust framework for Retrieval-Augmented Generation (RAG), combining data retrieval with response generation in a single pipeline. This section shows how to pair Haystack's pipeline components with MonsterAPI's OpenAI-compatible LLM endpoints.

Follow these steps to use our LLM endpoints with Haystack:
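
Before starting, install Haystack and make your MonsterAPI key available in the environment. The package name below is the Haystack 2.x distribution, and the MONSTER_API_KEY variable matches the code in Step 1:

```shell
# Install Haystack 2.x (distributed as haystack-ai)
pip install haystack-ai

# Export your MonsterAPI key so Secret.from_env_var can read it
export MONSTER_API_KEY="your-api-key"
```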

Step 1: Setting Up the Components

  1. LinkContentFetcher: Fetches HTML content from URLs.

  2. HTMLToDocument: Converts HTML content into document objects.

  3. PromptBuilder: Prepares a prompt using the content and user’s query.

  4. OpenAIGenerator: Generates responses using MonsterAPI.

    from haystack import Pipeline
    from haystack.utils import Secret
    from haystack.components.fetchers import LinkContentFetcher
    from haystack.components.converters import HTMLToDocument
    from haystack.components.builders import PromptBuilder
    from haystack.components.generators import OpenAIGenerator
    
    fetcher = LinkContentFetcher()
    converter = HTMLToDocument()
    
    prompt_template = """
    According to the contents of this website:
    {% for document in documents %}
     {{document.content}}
    {% endfor %}
    Answer the given question: {{query}}
    Answer:
    """
    
    prompt_builder = PromptBuilder(template=prompt_template)
    
    llm = OpenAIGenerator(
        api_key=Secret.from_env_var("MONSTER_API_KEY"),
        api_base_url="https://llm.monsterapi.ai/v1/",
        model="microsoft/Phi-3-mini-4k-instruct",
        generation_kwargs={"max_tokens": 256}
    )
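
PromptBuilder renders its template with Jinja. As a quick illustration of what the finished prompt looks like, the same template can be rendered directly with jinja2, using a hypothetical stand-in class for the converted documents (only the content attribute matters here):

```python
from jinja2 import Template

# Same template string as passed to PromptBuilder above
prompt_template = """
According to the contents of this website:
{% for document in documents %}
 {{document.content}}
{% endfor %}
Answer the given question: {{query}}
Answer:
"""

# Hypothetical stand-in for a Haystack Document: only .content is used
class Doc:
    def __init__(self, content):
        self.content = content

rendered = Template(prompt_template).render(
    documents=[Doc("MonsterAPI hosts scalable LLM endpoints.")],
    query="What are the features of MonsterAPI?",
)
print(rendered)
```

The rendered string is exactly what the generator receives as its prompt.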
    

Step 2: Building and Connecting the Pipeline

  1. Create the Pipeline: Register each component under a name with add_component.

  2. Connect Components: Wire each component's output socket to the next component's input with pipeline.connect so data flows from fetcher to converter to prompt builder to generator.

    pipeline = Pipeline()
    pipeline.add_component("fetcher", fetcher)
    pipeline.add_component("converter", converter)
    pipeline.add_component("prompt", prompt_builder)
    pipeline.add_component("llm", llm)

    # Wire each output socket to the next component's input
    pipeline.connect("fetcher.streams", "converter.sources")
    pipeline.connect("converter.documents", "prompt.documents")
    pipeline.connect("prompt.prompt", "llm.prompt")
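
The pipeline moves data from fetcher to converter to prompt builder to generator. Purely as an illustration of that flow (plain functions standing in for the real Haystack components):

```python
# Illustration only: plain functions standing in for the real Haystack
# components, showing the order data moves through the pipeline.
def fetch(urls):
    # LinkContentFetcher: URLs -> raw HTML
    return ["<html>content of %s</html>" % u for u in urls]

def convert(pages):
    # HTMLToDocument: HTML -> plain-text documents
    return [p.replace("<html>", "").replace("</html>", "") for p in pages]

def build_prompt(documents, query):
    # PromptBuilder: documents + query -> final prompt string
    body = "\n".join(documents)
    return (
        "According to the contents of this website:\n"
        + body
        + "\nAnswer the given question: " + query + "\nAnswer:"
    )

prompt = build_prompt(
    convert(fetch(["https://developer.monsterapi.ai/docs/"])),
    "What are the features of MonsterAPI?",
)
print(prompt)
```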
    

Step 3: Running the Pipeline

  1. Execute the Pipeline: Run the pipeline with its required inputs (the URLs for the fetcher and the query for the prompt builder), then read the generated reply.

    result = pipeline.run({
        "fetcher": {"urls": ["https://developer.monsterapi.ai/docs/"]},
        "prompt": {"query": "What are the features of MonsterAPI?"}
    })
    
    print(result["llm"]["replies"][0])
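
The run method returns a nested dict keyed by component name, so the generated text sits under the llm component's replies list. A defensive way to read it (the sample result value below is hypothetical, shaped like the snippet above):

```python
# Hypothetical result, shaped like the dict Pipeline.run() returns:
# {component_name: {output_name: value, ...}, ...}
result = {"llm": {"replies": ["MonsterAPI provides hosted LLM endpoints."]}}

# Guard against a missing component key or an empty replies list
replies = result.get("llm", {}).get("replies", [])
answer = replies[0] if replies else "(no reply generated)"
print(answer)
```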
    

By following these steps, you can effectively utilize Haystack with our platform to create a RAG system that dynamically retrieves relevant information and generates accurate, context-aware responses.