MonsterAPI Python Client

A Python client for interacting with Monster API v2 AI Models and Services.

Installation

pip install monsterapi

The client supports the following MonsterAPI services:

Text-Gen/LLMs:

  1. falcon-7b-instruct
  2. falcon-40b-instruct
  3. mpt-7b-instruct
  4. mpt-30b-instruct
  5. llama2-7b-chat
  6. zephyr-7b-beta
  7. codellama-13b-instruct
  8. codellama-34b-instruct

Image Gen:

  1. txt2img - stable-diffusion v1.5
  2. sdxl - stable-diffusion XL V1.0
  3. pix2pix - Instruct-pix2pix
  4. img2img - Image to Image using Stable Diffusion
  5. photo-maker - PhotoMaker

Speech Gen:

  1. sunoai-bark - Bark (Sunoai Bark)
  2. whisper - (Whisper Large V2)
  3. speech2text-v2 - (Whisper Large V3)

MonsterDeploy - Deploy Large Language Models

  1. Monster Deploy LLMs (deploy-llm)

Basic Usage to Access Hosted AI Models

Import the module

from monsterapi import client

Set the MONSTER_API_KEY environment variable to your API key:

import os

os.environ["MONSTER_API_KEY"] = "<your_api_key>"
client = client()  # Initialize client; reads MONSTER_API_KEY from the environment

or

pass the api_key parameter to the client constructor.

client = client("<your_api_key>")  # pass the API key as a parameter

Use the generate method

result = client.generate(model='falcon-7b-instruct', data={
    "prompt": "Your prompt here",
    # ... other parameters
})
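
The image and speech models listed above are called through the same method; only the model name and payload change. A minimal sketch for txt2img (the samples field is an assumption about the payload, not confirmed by this README):

result = client.generate(model='txt2img', data={
    "prompt": "a photo of a red vintage car at sunset",
    "samples": 1,  # assumed parameter: number of images to return
})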

or

Send a request to a model with a suitable payload and retrieve the process ID.

# Fetching a response
response = client.get_response(model='falcon-7b-instruct', data={
    "prompt": "Your prompt here",
    # ... other parameters
})
print(response["process_id"])

Get the status of the process

status = client.get_status("your_process_id")
print(status)

Wait and Get the Result

# Waiting for result
result = client.wait_and_get_result("your_process_id")
print(result)
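
Putting the three calls together, the full asynchronous flow looks like this (model and prompt are illustrative):

# Submit the request, check its status once, then block until the result is ready
response = client.get_response(model='falcon-7b-instruct', data={
    "prompt": "Write a haiku about the sea."
})
process_id = response["process_id"]

print(client.get_status(process_id))  # e.g. IN_PROGRESS or COMPLETED
result = client.wait_and_get_result(process_id)
print(result)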

Quick Serve LLM

Launch a llama2-7b model using the QuickServe API.

Prepare and send a payload to launch an LLM deployment.

Choose per_gpu_vram and gpu_count based on your model size and batch size.

Please see the MonsterAPI documentation for a detailed list of supported models and the infrastructure matrix.

launch_payload = {
    "basemodel_path": "meta-llama/Llama-2-7b-chat",  # base model to serve
    "loramodel_path": "",                            # optional LoRA adapter path; empty for none
    "prompt_template": "{prompt}{completion}",       # how prompt and completion are joined
    "api_auth_token": "b6a97d3b-35d0-4720-a44c-59ee33dbc25b",  # auth token for the deployed endpoint
    "per_gpu_vram": 24,                              # VRAM per GPU in GB
    "gpu_count": 1                                   # number of GPUs
}

# Launch a deployment
ret = client.deploy("llm", launch_payload) 
deployment_id = ret.get("deployment_id")
print(ret)

# Get deployment status
status_ret = client.get_deployment_status(deployment_id)
print(status_ret)

# Get deployment logs
logs_ret = client.get_deployment_logs(deployment_id)
print(logs_ret)

# Terminate Deployment
terminate_return = client.terminate_deployment(deployment_id)
print(terminate_return)
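
A deployment can take several minutes to become ready, so in practice the status call is polled before the endpoint is used. A minimal sketch, assuming the status response carries a "status" field that reads "live" once the endpoint is up (field name and value are assumptions):

import time

# Poll the deployment until it reports itself ready, then fetch its logs
while True:
    status_ret = client.get_deployment_status(deployment_id)
    if status_ret.get("status") == "live":  # assumed field name and value
        break
    time.sleep(30)  # re-check every 30 seconds

print(client.get_deployment_logs(deployment_id))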

About us

Check us out at monsterapi.ai

Check out our new MonsterAPI Deploy service here

Check out our new no-code fine-tuning service here