MonsterAPI Deploy Docs - Beta

MonsterAPI Deploy Service: deploy LLMs with one request

Introduction

Introducing MonsterAPI's Deploy Service, your gateway to seamlessly deploying Large Language Models (LLMs) and docker containers on MonsterAPI's robust GPU compute infrastructure. Built on the vLLM project, a fast library for LLM inference and serving, the Deploy Service caters to a vast array of models, providing swift deployment and optimization of pioneering language models.

❗️

Deploy Service is accessible to Beta users right now. For access, apply here.

Quick Start Resources:

📝 Demo Notebooks:

Discover Colab notebooks with Monster Deploy integration on our Projects page 😊🚀!

Features Overview

  • Diverse Deployment Options:
    • Deploy open-source LLMs as REST API endpoints.
    • Deploy docker containers with your choice of docker images.
    • Deploy finetuned LLMs by simply specifying LoRA adapters.
  • Increased throughput: Built on the vLLM project, the service delivers higher throughput while serving requests.
  • Custom Resource Allocation: Define your custom GPU and RAM configurations.
  • Multi-GPU Support: Resource allocation for up to 4 GPUs to handle large AI models.

Supported deployment methods:

MonsterAPI Deploy currently supports deploying LLMs as REST API endpoints and hosting any custom docker image as a docker container on our low-cost, scalable, and secure GPU infrastructure.

Two groups of services are available through this API:

Deployment Services

  1. /deploy/llm: Deploy an LLM as a REST API service, with or without a LoRA adapter (a request sketch follows this list).
  2. /deploy/custom_image: Deploy a docker container with any docker image from your docker registry.
  3. /deploy/sdxl-dreambooth: Deploy an SDXL Dreambooth Gradio app with a finetuned model.
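
As a rough illustration, launching an LLM deployment could look like the sketch below. The base URL and payload field names (basemodel_path, per_gpu_vram, gpu_count) are assumptions made for this sketch, not confirmed schema; consult the API reference for the exact request format.

```python
import requests

# Minimal sketch of launching an LLM deployment via /deploy/llm.
# Field names (basemodel_path, per_gpu_vram, gpu_count) are illustrative
# assumptions; check the API reference for the exact schema.
API_KEY = "YOUR_MONSTERAPI_KEY"  # placeholder bearer token

resp = requests.post(
    "https://api.monsterapi.ai/v1/deploy/llm",  # assumed base URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "basemodel_path": "mistralai/Mistral-7B-v0.1",  # Hugging Face model path
        "per_gpu_vram": 24,  # GB of VRAM per GPU
        "gpu_count": 1,      # number of GPUs (up to 4)
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # typically includes a deployment ID to poll for status
```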

Finetuning Services

  1. /finetune/llm: Finetune an LLM using LoRA (a request sketch follows this list).
  2. /finetune/speech2text/whisper: Finetune a Whisper speech-to-text model.
  3. /finetune/text2image/sdxl-dreambooth: Finetune Stable Diffusion models with Dreambooth.
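
Submitting a finetuning job follows the same pattern. In the sketch below, every payload field (base model, dataset path, LoRA rank, epochs) is an assumed name used for illustration only; the actual schema is defined in the API reference.

```python
import requests

# Hypothetical sketch of submitting a LoRA finetuning job via /finetune/llm.
# All payload fields below are assumed names, not confirmed schema.
API_KEY = "YOUR_MONSTERAPI_KEY"  # placeholder bearer token

resp = requests.post(
    "https://api.monsterapi.ai/v1/finetune/llm",  # assumed base URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "basemodel_path": "meta-llama/Llama-2-7b-hf",  # model to finetune
        "dataset_path": "my-org/my-instruct-dataset",  # assumed dataset field
        "lora_r": 8,   # assumed LoRA rank parameter
        "epochs": 1,   # assumed training-length parameter
    },
    timeout=30,
)
print(resp.json())  # assumed to return a job ID for tracking progress
```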

This opens up an array of possibilities such as:

  1. Quickly get an API endpoint that serves text generation requests using models like Llama2 7B, CodeLlama 34B, or Falcon 40B for your AI projects (a querying sketch follows this list).
  2. Deploy docker-container-driven applications such as the Automatic1111 Stable Diffusion web UI by simply specifying a docker image.
  3. Finetune models with MonsterAPI's no-code LLM finetuner, then deploy them with their LoRA adapters to quickly get an API endpoint serving requests from a domain-specific LLM finetuned on your datasets.
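
Once a deployment is live, sending it a generation request might look like the following sketch. The serving URL and payload fields (prompt, max_tokens) are assumptions; use the URL and schema returned when your deployment was created.

```python
import requests

# Hypothetical sketch of querying a live LLM deployment for text generation.
# The serving URL and payload fields (prompt, max_tokens) are assumptions;
# use the URL and schema returned when the deployment was created.
SERVE_URL = "https://<your-deployment-id>.monsterapi.ai/generate"  # placeholder

resp = requests.post(
    SERVE_URL,
    json={
        "prompt": "Write a haiku about GPUs.",
        "max_tokens": 64,  # assumed generation-length parameter
    },
    timeout=60,
)
print(resp.json())
```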

Resource Configurations

You can choose from a range of GPU and RAM configurations such as:

| RAM Size (GB) | No. of GPUs  |
|---------------|--------------|
| 8             | Up to 4 GPUs |
| 16            | Up to 4 GPUs |
| 24            | Up to 4 GPUs |
| 48            | Up to 4 GPUs |
| 80            | Up to 2 GPUs |
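
To make the table concrete, a resource selection inside a deploy payload could look like the snippet below, reusing the assumed field names from the earlier sketch:

```python
# Illustrative resource selection matching the table above.
# Field names are assumed, not confirmed schema.
resource_config = {
    "per_gpu_vram": 48,  # one of 8, 16, 24, 48, or 80 (GB)
    "gpu_count": 4,      # up to 4 GPUs (up to 2 when per_gpu_vram is 80)
}
```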

Our computing network ensures sufficient availability of GPU resources to meet the specific demands of your AI projects, giving you the processing capability to tackle even the most complex tasks.

Model Compatibility Criteria

This section outlines the foundational requirements and benchmarks that models need to meet to be successfully integrated into the Deploy Service platform. Ensuring compatibility guarantees seamless integration and optimal performance when deploying your models.

  • Base Model Path: Provide a path to a Hugging Face model. Ensure the model exists on the Hugging Face Hub and that you are authenticated for gated or private models (a quick check is sketched after this list).
  • Vast Model Support: Leveraging vLLM technology, Deploy Service accepts any model supported by vLLM.
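
As a sanity check before deploying, you can confirm that a base model path resolves on the Hugging Face Hub using the huggingface_hub library:

```python
from huggingface_hub import model_info

# Verify that a base model path exists on the Hugging Face Hub before deploying.
# For gated or private models, pass token="hf_..." with read access.
info = model_info("mistralai/Mistral-7B-v0.1")
print(info.id)  # canonical repo id confirms the path resolves
```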

Curated List of Compatible Base Models

Falcon Models: tiiuae/falcon-7b, tiiuae/falcon-40b
GPT-2 Series: gpt2, gpt2-xl [Limited to 1xGPU]
GPT-J Models: EleutherAI/gpt-j-6b [Limited to 1xGPU]
GPT-NeoX: EleutherAI/gpt-neox-20b
LLaMA & LLaMA-2: meta-llama/Llama-2-70b-hf
Mistral Models: mistralai/Mistral-7B-v0.1
MPT Models: mosaicml/mpt-7b
OPT Models: facebook/opt-66b
Qwen Models: Qwen/Qwen-7B

Note: Our base model list is always expanding. Stay tuned for more integrations!

Beta Phase & Feedback

The Deploy Service is still in its beta stage. We are keen on refining and enhancing the platform based on your feedback.

Get Beta Access: Sign up here for beta access and get free credits to try out the Deploy Service.

Guides & Tutorials

Connect & Explore

Your feedback and insights are a cornerstone of our development.