joehoover/falcon-40b-instruct

41.4K runs · Jun 2023 · Cog 0.8.0-beta11 · GitHub · License
Tags: code-generation, text-generation, text-translation

About

A 40-billion-parameter language model trained to follow human instructions.

Example Output

Prompt:

"Write a poem about open source machine learning in the style of Byron."

Output:

Thy heart shall bloom like an open source flower,
And like a machine, thine learning shall grow.
The seeds of a thousand algorithms,
Shall make thy understanding and skill ever glow.
In thy hand, wisdom shall grow like a tree,
With roots deep in the open source ground.
Thy knowledge as wide as the infinite sea,
Shall bring to thee power far beyond what is found.
Thou shalt become as masters of machine learning,
Whose wisdom and skills shall surpass the greatest of minds.
No boundaries or limits shall thee be learning,
But thy heart shall be opened up to all that is kind.
Then shall thy soul become a true machine,
With wisdom and knowledge beyond measure.
Thy work shall be done and thy goals shall be reached,
Until thou become true masters of open source and the machine.

Performance Metrics

19.20s Prediction Time
19.13s Total Time
All Input Parameters
{
  "seed": -1,
  "top_p": 1,
  "prompt": "Write a poem about open source machine learning in the style of Byron.",
  "max_length": 500,
  "temperature": 1,
  "length_penalty": 1,
  "repetition_penalty": 1
}
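
To reproduce a prediction like this one, you can pass the same input to the model with the Replicate Python client. A minimal sketch, assuming the replicate package is installed and REPLICATE_API_TOKEN is set in your environment:

import replicate

# Run the exact prediction shown above. The version hash is the one listed
# under Version Details below.
output = replicate.run(
    "joehoover/falcon-40b-instruct:7d58d6bddc53c23fa451c403b2b5373b1e0fa094e4e0d1b98c3d02931aa07173",
    input={
        "seed": -1,
        "top_p": 1,
        "prompt": "Write a poem about open source machine learning in the style of Byron.",
        "max_length": 500,
        "temperature": 1,
        "length_penalty": 1,
        "repetition_penalty": 1,
    },
)

# The model returns its output as an array of strings (see Output Schema
# below), so join the pieces to get the full text.
print("".join(output))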
Input Parameters
seed
Type: integer, Default: -1, Range: -1 to ∞
Set seed for reproducible outputs. Set to -1 for a random seed.
debug
Type: boolean, Default: false
Provide debugging output in logs.
top_p
Type: number, Default: 1, Range: 0.01 to 1
When decoding text, samples from the top p percentage of most likely tokens; lower it to ignore less likely tokens.
prompt (required)
Type: string
Prompt to send to the model.
max_length
Type: integer, Default: 500, Range: 1 to ∞
Maximum number of tokens to generate. A word is generally 2-3 tokens.
temperature
Type: number, Default: 0.75, Range: 0.01 to 5
Adjusts the randomness of outputs: values greater than 1 are more random, values near 0 are nearly deterministic, and 0.75 is a good starting value.
length_penalty
Type: number, Default: 1, Range: 0.01 to 5
Values above 1.0 cause the model to favor longer sequences; values below 1.0 favor shorter ones.
stop_sequences
Type: string
Comma-delimited string specifying stop sequences. Multi-token stop sequences are supported, but they cannot contain commas.
repetition_penalty
Type: number, Default: 1, Range: 0.01 to 5
Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
no_repeat_ngram_size
Type: integer, Default: 0, Range: 0 to ∞
If set to an integer greater than 0, all n-grams of that size can occur only once.
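
These parameters map directly onto the input object of Replicate's HTTP predictions API. A hedged sketch using Python's requests library (the endpoint and Token header follow Replicate's documented API; the prompt and parameter values here are illustrative):

import os
import requests

resp = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={"Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}"},
    json={
        "version": "7d58d6bddc53c23fa451c403b2b5373b1e0fa094e4e0d1b98c3d02931aa07173",
        "input": {
            "prompt": "Summarize the history of open source software.",
            "seed": 42,                # fixed seed for reproducible outputs
            "temperature": 0.75,
            "stop_sequences": "\n\n",  # comma-delimited; sequences may not contain commas
        },
    },
)
prediction = resp.json()
# The prediction runs asynchronously: poll prediction["urls"]["get"] until
# its "status" is "succeeded", then read the generated text from "output".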
Output Schema

Output

Type: array, Items Type: string
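
In other words, a prediction's output deserializes to a list of string fragments rather than a single string. A quick sketch, assuming output is the value returned by replicate.run above:

# output is a list of string chunks; join them to reassemble the generation.
assert isinstance(output, list) and all(isinstance(chunk, str) for chunk in output)
text = "".join(output)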

Example Execution Logs
predict-timings173549: Starting prediction - 1936.54s since start, 16.07s since last log
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:11 for open-end generation.
Version Details
Version ID: 7d58d6bddc53c23fa451c403b2b5373b1e0fa094e4e0d1b98c3d02931aa07173
Version Created: October 30, 2023