Provides information about using Generative AI models on Monster API

Monster API provides several pre-hosted Generative AI models.

API requests operate asynchronously. This means that after a successful API request, you'll receive a process_id.

This process_id can then be used to retrieve results via the Fetch Results API.

Let's explore some of the API concepts below:


Authentication:

Monster API uses Bearer Token authentication.

Just include your API key in the Authorization header of each API request.
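As a sketch, the headers above can be assembled in Python (the helper name is ours, and the endpoint URL is a placeholder you'd take from the specific model's API reference):

```python
def auth_headers(api_key: str) -> dict:
    """Build the standard headers for a Monster API request:
    Bearer Token authentication plus a JSON accept header."""
    return {
        "accept": "application/json",
        "authorization": f"Bearer {api_key}",
    }

# Usage with the `requests` library (network call, shown for illustration):
# import requests
# resp = requests.post("<Model API Endpoint>",
#                      headers=auth_headers("<API Key>"),
#                      json={"prompt": "What is two + two?"})
```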

API Request Body:

Monster API supports multiple Content-Types in the API request body, ensuring seamless integration with your workflows.

Supported Content-Types:

  1. application/json
  2. multipart/form-data

Requests made with any of the above Content-Types are treated the same by our platform.

Use cases:

  1. Send a JSON payload in an API request:

```shell
curl --request POST \
     --url <Model API Endpoint> \
     --header 'accept: application/json' \
     --header 'authorization: Bearer <API Key>' \
     --header 'content-type: application/json' \
     --data '{ "prompt": "What is two + two?" }'
```
  2. Send a file in an API request:

```shell
curl --request POST \
     --url <Model API Endpoint> \
     --header 'accept: application/json' \
     --header 'authorization: Bearer <API Key>' \
     --header 'content-type: multipart/form-data' \
     --form 'file=@<local file path>' \
     --form language=en
```


File size is limited to 8 MB.

If you want to use a larger file in your requests, refer to the implementation below.

  3. Send large files (> 8 MB) in an API request:

By default, our APIs handle files up to 8 MB. For larger files, you can either send a publicly accessible file URL or use our Upload API. This API uploads your file to our S3 buckets and returns a download URL. You can then use this URL in an API request to the Generative AI APIs with an application/json payload.

The download URLs returned by the Upload API are secure and valid for 30 minutes only.

Do check out our Recipe on working with large files, or refer to the File Upload API.

To use the uploaded files in your Gen AI API requests, refer to the specific AI Model API guides.
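As a rough sketch of the decision above (the 8 MB limit comes from this guide; the helper itself is illustrative, not part of the API):

```python
MAX_INLINE_BYTES = 8 * 1024 * 1024  # documented 8 MB request-body limit

def needs_upload_api(file_size_bytes: int) -> bool:
    """Return True when a file is too large to send inline and should
    instead go through the Upload API (or a publicly accessible URL)."""
    return file_size_bytes > MAX_INLINE_BYTES
```

Remember that the download URL returned by the Upload API expires after 30 minutes, so upload shortly before making the Generative AI request.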

API Responses:

Our APIs return a standard JSON response. Each API call returns a process_id:

```json
{
  "callback_url": "",
  "message": "Request Accepted Successfully",
  "process_id": "aaaaaa-bbbbbb-cc-ggh",
  "status_url": ""
}
```

You need to use the Fetch Results API to retrieve the status of your request or fetch the final results.

This response also provides a status_url, which can be used to fetch results. Simply send a GET request to this status_url with your API key set in the Authorization header.
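Polling the status_url can be sketched like this; the terminal status values and the response field names are assumptions, so verify them against the Fetch Results API reference:

```python
import time

def poll_until_done(fetch_status, interval_s=2.0, timeout_s=120.0):
    """Repeatedly call `fetch_status` (a stand-in for a GET on the
    status_url with your Authorization header) until the job reaches
    a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = fetch_status()
        # Assumed terminal status names; check the API reference.
        if result.get("status") in ("COMPLETED", "FAILED"):
            return result
        time.sleep(interval_s)
    raise TimeoutError("process did not finish within the timeout")
```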

Want to know more about the schema for all the Generative AI Model APIs? Do check out our Docs.


Webhooks:

Monster API also provides an option to easily use a registered webhook as a callback.

Webhooks offer a powerful solution for developers:

Instead of manually polling the Fetch Results API, they can automatically receive status updates on their API requests. This approach not only simplifies automation but also improves scalability.

Pre-requisites for using a webhook:

  • Register a webhook URL on Monster API platform with a webhook name of your choice.
  • Pass the webhook in your Generative AI Model API requests using your webhook's name.

That's it. Now your webhook will start receiving your API request status updates.
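A minimal receiver can be sketched with the Python standard library. The callback payload is assumed to mirror the Request Accepted response shown earlier; verify the exact body against the Webhooks API docs:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_callback(raw_body: bytes) -> str:
    """Extract the process_id from a callback payload (field name assumed
    to match the Request Accepted response shown earlier)."""
    return json.loads(raw_body)["process_id"]

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the callback body and acknowledge with 200 OK.
        length = int(self.headers.get("Content-Length", 0))
        process_id = parse_callback(self.rfile.read(length))
        print(f"status update for process {process_id}")
        self.send_response(200)
        self.end_headers()

# To run locally (blocking):
# HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```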

Check out our Webhooks API to get started.

Also, you may explore our tutorial for a quick start.

Error Codes:

Monster API follows standard HTTP Error codes and verbs.

  "message": "Error Message"
HTTP Status Code Summary
HTTP Error CodeDefinition
200 - OKEverything worked as expected
400 - Bad RequestThe request was unacceptable, often due to a missing a required parameter
401 - UnauthorisedNo valid API key provided.
403 - ForbiddenThe API key doesn't have permissions to perform the request.
404 - Not FoundThe requested resource doesn't exist
408 - Request TimeoutProvided webhook is not responding and getting timeout out
415 - Unsupported ContentUnsupported file extension for a model
429 - Too Many RequestToo many requests hit the API too quickly. Please slow down or upgrade your plan.
500,501,502 - Internal Server ErrorSomething went wrong on our side
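When handling these codes, a common pattern is to retry only the transient errors with backoff; this helper is our own sketch, not part of the API:

```python
# Rate limits and server-side errors are usually transient and worth
# retrying; other 4xx codes indicate a problem with the request itself.
RETRYABLE_STATUS = {429, 500, 501, 502}

def should_retry(status_code: int) -> bool:
    """Return True for status codes that warrant a retry with backoff."""
    return status_code in RETRYABLE_STATUS
```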

With all this information, let's get started with our first Generative Model API request!