tomasmcm/starling-lm-7b-alpha 📝🔢 → 📝

▶️ 71.9K runs 📅 Nov 2023 ⚙️ Cog 0.8.6 📄 Paper ⚖️ License
ai-feedback code-generation language-model natural-language-processing question-answering reinforcement-learning text-generation

About

Source: berkeley-nest/Starling-LM-7B-alpha ✦ Quant: TheBloke/Starling-LM-7B-alpha-AWQ ✦ An open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF)

Example Output

Prompt:

"GPT4 Correct User: Tell me about AI<|end_of_turn|>GPT4 Correct Assistant:"

Output

Artificial Intelligence (AI) refers to the development of computer systems or software that are capable of performing tasks that would typically require human intelligence. These tasks can include learning, problem-solving, reasoning, understanding natural language, perception, and decision-making. AI is a rapidly growing field that encompasses various sub-disciplines, including machine learning, deep learning, and neural networks.

Machine learning is a subset of AI that focuses on the development of algorithms and models that enable computers to learn from data without being explicitly programmed. It involves the use of statistical techniques to identify patterns in data and make predictions or decisions
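The prompt above follows the OpenChat-style chat template this model expects: each turn is tagged with "GPT4 Correct User" or "GPT4 Correct Assistant" and closed with <|end_of_turn|>, and the prompt ends with the assistant tag so the model knows to respond. A minimal Python sketch of assembling such a prompt (build_prompt is a hypothetical helper, not part of this model's API):

def build_prompt(turns):
    # Format (role, message) pairs in the chat template shown above,
    # then end with the assistant tag to cue the model's reply.
    tags = {"user": "GPT4 Correct User", "assistant": "GPT4 Correct Assistant"}
    body = "".join(f"{tags[role]}: {text}<|end_of_turn|>" for role, text in turns)
    return body + "GPT4 Correct Assistant:"

print(build_prompt([("user", "Tell me about AI")]))
# GPT4 Correct User: Tell me about AI<|end_of_turn|>GPT4 Correct Assistant: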

Performance Metrics

2.83s Prediction Time
2.86s Total Time
All Input Parameters
{
  "top_k": -1,
  "top_p": 0.95,
  "prompt": "GPT4 Correct User: Tell me about AI<|end_of_turn|>GPT4 Correct Assistant:",
  "max_tokens": 128,
  "temperature": 0.8,
  "presence_penalty": 0,
  "frequency_penalty": 0
}
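These are the exact inputs used for the example above. A minimal sketch of sending them from Python with the replicate client (assumes the replicate package is installed and REPLICATE_API_TOKEN is set in the environment; the version ID is listed under Version Details below):

import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the environment

output = replicate.run(
    "tomasmcm/starling-lm-7b-alpha:1cee13652378fac04fe10dedd4c15d3024a0958c3e52f97a1aa7c4d05b99ef99",
    input={
        "prompt": "GPT4 Correct User: Tell me about AI<|end_of_turn|>"
                  "GPT4 Correct Assistant:",
        "top_k": -1,
        "top_p": 0.95,
        "max_tokens": 128,
        "temperature": 0.8,
        "presence_penalty": 0,
        "frequency_penalty": 0,
    },
)
print(output)  # the output schema below is a single string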
Input Parameters
stop Type: string
List of strings that stop generation when they are produced. The returned output will not contain the stop strings.
top_k Type: integer Default: -1
Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.
top_p Type: number Default: 0.95 Range: 0.01 - 1
Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.
prompt (required) Type: string
Text prompt to send to the model.
max_tokens Type: integer Default: 128
Maximum number of tokens to generate per output sequence.
temperature Type: number Default: 0.8 Range: 0.01 - 5
Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.
presence_penalty Type: number Default: 0 Range: -5 - 5
Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
frequency_penalty Type: number Default: 0 Range: -5 - 5
Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.
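To make the sampling parameters concrete, the sketch below shows how temperature, top_k, and top_p interact, using NumPy. It mirrors the semantics described above but is illustrative only, not the model's actual sampler:

import numpy as np

def sample_next_token(logits, top_k=-1, top_p=0.95, temperature=0.8):
    # Temperature rescales the logits: lower values sharpen the
    # distribution (more deterministic), higher values flatten it.
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]      # token ids, most to least probable
    if top_k > 0:                        # top_k = -1 means consider all tokens
        order = order[:top_k]
    cumulative = np.cumsum(probs[order])
    # Keep the smallest set of top tokens whose cumulative probability covers top_p.
    keep = order[: np.searchsorted(cumulative, top_p) + 1]

    kept = probs[keep] / probs[keep].sum()   # renormalise over the kept tokens
    return int(np.random.choice(keep, p=kept))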
Output Schema

Output

Type: string

Example Execution Logs
Processed prompts:   0%|          | 0/1 [00:00<?, ?it/s]
Processed prompts: 100%|██████████| 1/1 [00:02<00:00,  2.76s/it]
Generated 128 tokens in 2.7641451358795166 seconds.
Version Details
Version ID
1cee13652378fac04fe10dedd4c15d3024a0958c3e52f97a1aa7c4d05b99ef99
Version Created
November 30, 2023
Run on Replicate →