joehoover/falcon-40b-instruct
About
A 40-billion-parameter language model trained to follow human instructions.

Example Output
Prompt:
"Write a poem about open source machine learning in the style of Byron."
Output
Thy heart shall bloom like an open source flower,
And like a machine, thine learning shall grow.
The seeds of a thousand algorithms,
Shall make thy understanding and skill ever glow.
In thy hand, wisdom shall grow like a tree,
With roots deep in the open source ground.
Thy knowledge as wide as the infinite sea,
Shall bring to thee power far beyond what is found.
Thou shalt become as masters of machine learning,
Whose wisdom and skills shall surpass the greatest of minds.
No boundaries or limits shall thee be learning,
But thy heart shall be opened up to all that is kind.
Then shall thy soul become a true machine,
With wisdom and knowledge beyond measure.
Thy work shall be done and thy goals shall be reached,
Until thou become true masters of open source and the machine.
And like a machine, thine learning shall grow.
The seeds of a thousand algorithms,
Shall make thy understanding and skill ever glow.
In thy hand, wisdom shall grow like a tree,
With roots deep in the open source ground.
Thy knowledge as wide as the infinite sea,
Shall bring to thee power far beyond what is found.
Thou shalt become as masters of machine learning,
Whose wisdom and skills shall surpass the greatest of minds.
No boundaries or limits shall thee be learning,
But thy heart shall be opened up to all that is kind.
Then shall thy soul become a true machine,
With wisdom and knowledge beyond measure.
Thy work shall be done and thy goals shall be reached,
Until thou become true masters of open source and the machine.
Performance Metrics
- Prediction Time: 19.20s
- Total Time: 19.13s
All Input Parameters
{
  "seed": -1,
  "top_p": 1,
  "prompt": "Write a poem about open source machine learning in the style of Byron.",
  "max_length": 500,
  "temperature": 1,
  "length_penalty": 1,
  "repetition_penalty": 1
}
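Below is a minimal sketch of reproducing this prediction with the Replicate Python client. The model reference reuses the version ID listed under Version Details; the client itself, a configured REPLICATE_API_TOKEN, and the streaming-iterator output shape are assumptions to verify against the current model page.

```python
# Minimal sketch: calling joehoover/falcon-40b-instruct with the inputs shown above.
# Assumes the `replicate` Python client is installed and REPLICATE_API_TOKEN is set.
import replicate

output = replicate.run(
    # Version ID taken from the Version Details section below.
    "joehoover/falcon-40b-instruct:7d58d6bddc53c23fa451c403b2b5373b1e0fa094e4e0d1b98c3d02931aa07173",
    input={
        "seed": -1,
        "top_p": 1,
        "prompt": "Write a poem about open source machine learning in the style of Byron.",
        "max_length": 500,
        "temperature": 1,
        "length_penalty": 1,
        "repetition_penalty": 1,
    },
)

# Language models on Replicate typically stream tokens, so the result is usually
# an iterator of strings; joining the pieces recovers the full generated text.
print("".join(output))
```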
Input Parameters
- seed
- Set seed for reproducible outputs. Set to -1 for random seed.
- debug
- Provide debugging output in logs.
- top_p
- When decoding text, samples only from the smallest set of most likely tokens whose cumulative probability reaches top_p; lower values ignore less likely tokens (see the sampling sketch after this list).
- prompt (required)
- Prompt to send to the model.
- max_length
- Maximum number of tokens to generate. A word is generally 2-3 tokens.
- temperature
- Adjusts the randomness of outputs: values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- length_penalty
- Values above 1.0 cause the model to favor longer sequences, while values below 1.0 cause it to favor shorter ones.
- stop_sequences
- Comma-delimited string specifying stop sequences. Multi-token stop sequences are supported, but they cannot contain commas.
- repetition_penalty
- Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, less than 1 encourage it.
- no_repeat_ngram_size
- If set to an integer greater than 0, all n-grams of that size can occur only once in the generated text.
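To make the sampling parameters above more concrete, here is an illustrative sketch of temperature and top-p (nucleus) sampling over a single step's logits. It is generic decoding logic, not this model's actual implementation, and the sample_token helper and its defaults are hypothetical.

```python
# Illustrative sketch of how temperature and top_p shape next-token sampling.
# Generic nucleus sampling, not the model's actual decoding code.
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 0.75, top_p: float = 1.0) -> int:
    """Pick one token id from raw logits using temperature and top-p filtering."""
    if temperature == 0:
        # Temperature 0 degenerates to greedy decoding: always the single most likely token.
        return int(np.argmax(logits))

    # Temperature rescales the logits before softmax: <1 sharpens, >1 flattens the distribution.
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()

    # Keep the smallest set of tokens whose cumulative probability reaches top_p.
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
    kept = order[:cutoff]

    # Renormalize over the kept tokens and sample one of them.
    kept_probs = probs[kept] / probs[kept].sum()
    return int(np.random.choice(kept, p=kept_probs))
```

With top_p at 1 and temperature at 1, as in the example inputs above, every token stays in play and the model's raw distribution is sampled unchanged.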
Output Schema
Output
Example Execution Logs
predict-timings173549: Starting prediction - 1936.54s since start, 16.07s since last log
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:11 for open-end generation.
Version Details
- Version ID
7d58d6bddc53c23fa451c403b2b5373b1e0fa094e4e0d1b98c3d02931aa07173
- Version Created
October 30, 2023