tomasmcm/gorilla-openfunctions-v1 📝🔢 → 📝

▶️ 418 runs 📅 Nov 2023 ⚙️ Cog 0.8.6 📄 Paper ⚖️ License
api-call-generation function-execution json-formatting language-model openai-functions text-to-api

About

Source: gorilla-llm/gorilla-openfunctions-v1 ✦ Quant: TheBloke/gorilla-openfunctions-v1-AWQ ✦ Extends the Large Language Model (LLM) Chat Completion feature to formulate executable API calls given natural language instructions and API context
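
For reference, a minimal sketch of calling this deployment from Python with the official replicate client (pip install replicate, with REPLICATE_API_TOKEN set in the environment); the version ID is the one listed under Version Details below, and the prompt placeholder follows the template shown in Example Output:

import replicate

# [...] stands for the JSON function list; see the full prompt in Example Output below.
prompt = 'USER: <<question>> Call me an Uber ride type "Plus" in Berkeley at zipcode 94704 in 10 minutes <<function>> [...]\nASSISTANT: '

output = replicate.run(
    "tomasmcm/gorilla-openfunctions-v1:574ca2dfccd6ea5006ae008b773dd9d5a1da77f978fe183f4f6452bbc9f62aba",
    input={"prompt": prompt, "max_tokens": 128, "temperature": 0.8},
)
print(output)  # e.g. uber.ride(loc="94704", type="plus", time=10)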

Example Output

Prompt:

"USER: <> Call me an Uber ride type "Plus" in Berkeley at zipcode 94704 in 10 minutes <> [{"name": "Uber Carpool", "api_name": "uber.ride", "description": "Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait as parameters", "parameters": [{"name": "loc", "description": "Location of the starting place of the Uber ride"}, {"name": "type", "enum": ["plus", "comfort", "black"], "description": "Types of Uber ride user is ordering"}, {"name": "time", "description": "The amount of time in minutes the customer is willing to wait"}]}]
ASSISTANT: "

Output

uber.ride(loc="94704", type="plus", time=10)
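As the example shows, the prompt interleaves the user question and a JSON list of function schemas between <<question>> and <<function>> markers. A small helper can assemble it; this is a sketch inferred from the example above, not an official template API:

import json

def build_prompt(question, functions):
    # Combine the question and the JSON-encoded function list,
    # matching the template shown in the example prompt above.
    return (
        "USER: <<question>> " + question
        + " <<function>> " + json.dumps(functions)
        + "\nASSISTANT: "
    )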

Performance Metrics

0.30s Prediction Time
76.17s Total Time
All Input Parameters
{
  "top_k": -1,
  "top_p": 0.95,
  "prompt": "USER: <<question>> Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes <<function>> [{\"name\": \"Uber Carpool\", \"api_name\": \"uber.ride\", \"description\": \"Find suitable ride for customers given the location, type of ride, and the amount of time the customer is willing to wait as parameters\", \"parameters\": [{\"name\": \"loc\", \"description\": \"Location of the starting place of the Uber ride\"}, {\"name\": \"type\", \"enum\": [\"plus\", \"comfort\", \"black\"], \"description\": \"Types of Uber ride user is ordering\"}, {\"name\": \"time\", \"description\": \"The amount of time in minutes the customer is willing to wait\"}]}]\nASSISTANT: ",
  "max_tokens": 128,
  "temperature": 0.8,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
Input Parameters
stop Type: string
List of strings that stop the generation when they are generated. The returned output will not contain the stop strings.
top_k Type: integer, Default: -1
Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
top_p Type: number, Default: 0.95, Range: 0.01 - 1
Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
prompt (required) Type: string
Text prompt to send to the model.
max_tokens Type: integer, Default: 128
Maximum number of tokens to generate per output sequence.
temperature Type: number, Default: 0.8, Range: 0.01 - 5
Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
presence_penalty Type: number, Default: 0, Range: -5 - 5
Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
frequency_penalty Type: number, Default: 0, Range: -5 - 5
Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
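
These parameters and their descriptions match vLLM's SamplingParams, and the execution logs below look like vLLM output, so the same settings can plausibly be reproduced locally against the quantized weights. A sketch, assuming vllm is installed and a CUDA GPU is available:

from vllm import LLM, SamplingParams

# Load the AWQ-quantized weights this deployment is based on.
llm = LLM(model="TheBloke/gorilla-openfunctions-v1-AWQ", quantization="awq")

# Mirror the defaults documented above.
params = SamplingParams(temperature=0.8, top_p=0.95, top_k=-1, max_tokens=128,
                        presence_penalty=0, frequency_penalty=0)

prompt = 'USER: <<question>> ... <<function>> [...]\nASSISTANT: '  # fill in as in Example Output
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
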
Output Schema

Output

Type: string
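
Since the output is a single string containing a Python-style call, one way (a sketch, not part of this model's API) to turn it into a dispatchable name plus keyword arguments is Python's ast module (ast.unparse needs Python 3.9+):

import ast

def parse_function_call(text):
    # Parse e.g. 'uber.ride(loc="94704", type="plus", time=10)'.
    call = ast.parse(text.strip(), mode="eval").body      # an ast.Call node
    name = ast.unparse(call.func)                         # dotted name: "uber.ride"
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
    return name, kwargs

name, kwargs = parse_function_call('uber.ride(loc="94704", type="plus", time=10)')
# -> ("uber.ride", {"loc": "94704", "type": "plus", "time": 10})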

Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:00<00:00,  3.47it/s]
Generated 22 tokens in 0.2924685478210449 seconds.
Version Details
Version ID
574ca2dfccd6ea5006ae008b773dd9d5a1da77f978fe183f4f6452bbc9f62aba
Version Created
November 25, 2023