LM Eval

Use the EleutherAI Evaluation Harness to run evaluations on the lm_eval engine. The supported evals for each engine are as follows:

  1. lm_eval: mmlu, gsm8k, hellaswag, arc, truthfulqa, winogrande

Models larger than 8B parameters and context lengths greater than 8k tokens are not currently supported. Support will be added shortly.

Body Params
string
Defaults to Null

Unique deployment identifier for the instance; auto-generated if not provided.

string
Defaults to mistralai/Mistral-7B-v0.1

string

LoRA adapter HF path, or an HTTPS link to download the model.

const
required

string
Defaults to gsm8k,hellaswag

Comma-separated list of tasks to run. For supported tasks, see the route description above.
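As a sketch, a request body for this route might be assembled as below. The endpoint URL, the authorization header, and the field names (`name`, `model`, `lora_adapter`, `engine`, `tasks`) are illustrative assumptions, not the route's confirmed parameter names; check the parameter reference above for the exact schema.

```python
import json
import urllib.request

# Hypothetical field names for illustration; the defaults mirror the
# parameter descriptions above (base model, comma-separated tasks, etc.).
payload = {
    "name": None,                          # auto-generated if omitted
    "model": "mistralai/Mistral-7B-v0.1",  # default base model
    "lora_adapter": None,                  # HF path or HTTPS link, optional
    "engine": "lm_eval",                   # const, required
    "tasks": "gsm8k,hellaswag",            # comma-separated task list
}

body = json.dumps(payload).encode("utf-8")

# Hypothetical endpoint URL and auth token; substitute your own.
req = urllib.request.Request(
    "https://api.example.com/v1/evals",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to send the request
```

The request itself is left commented out so the snippet can be adapted offline; only the payload construction reflects the documented defaults.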

Responses

application/json