justmalhar/meta-llama-3.2-11b-vision 🖼️🔢📝 → 📝
About
This is a test version; more updates are coming.

Example Output
Prompt:
"Where was this photo taken from?"
Output:
Where was this photo taken from? Answer: The Golden Gate Bridge is a suspension bridge spanning
Performance Metrics
- Prediction time: 2.00s
- Total time: 211.91s
All Input Parameters
{ "image": "https://replicate.delivery/pbxt/LjoVjObT8FOT8vQFsPfOoxr17sRMDQMihn2C4bzMec3BkDHo/IMG_3310.jpeg", "top_p": 0.95, "prompt": "Where was this photo taken from?", "temperature": 0.3 }
Input Parameters
- image (required): Input image.
- top_p: Controls diversity of the output; lower values make the output more focused, higher values make it more diverse.
- prompt (required): Input text for the model.
- temperature: Controls randomness; lower values make the model more deterministic, higher values make it more random. A toy sketch of how these two sampling parameters interact follows this list.
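For intuition, here is a small self-contained sketch of what `temperature` and `top_p` do during decoding. It is plain NumPy, not this model's actual sampler, and the five-token vocabulary and logits are made up:

```python
import numpy as np

def sample_next_token(logits, temperature=0.3, top_p=0.95, rng=None):
    """Toy illustration of temperature scaling + nucleus (top-p) sampling."""
    rng = rng or np.random.default_rng(0)
    # Temperature: divide logits before the softmax. Values < 1 sharpen the
    # distribution (more deterministic); values > 1 flatten it (more random).
    z = logits / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    # Top-p: keep the smallest set of highest-probability tokens whose
    # cumulative mass reaches top_p, then renormalize and sample from it.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    nucleus = order[:cutoff]
    return rng.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum())

# Made-up logits: at temperature 0.3, roughly 96% of the mass lands on token 0,
# and top_p=0.95 then trims the nucleus to that single token.
print(sample_next_token(np.array([2.0, 1.0, 0.5, 0.1, -1.0])))
```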
Output Schema
The model returns the generated text as a single string (see the example output above).
Example Execution Logs
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:601: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.3` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
  warnings.warn(
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:606: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.95` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
  warnings.warn(
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/utils.py:1220: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
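These warnings mean the sampling parameters were silently ignored: transformers only applies `temperature` and `top_p` when `do_sample=True`, and the model-agnostic `max_length=20` default likely explains why the example output above is cut off mid-sentence. Below is a hedged sketch of a generate() call that would address all three warnings, assuming the predictor wraps the public Hugging Face Llama 3.2 Vision release; the model id, prompt format, placeholder image URL, and token budget are assumptions, not confirmed by this page:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

# Sketch only: assumes the predictor wraps Hugging Face transformers, as the
# warning paths above suggest. Model id and "<|image|>" prompt format follow
# the public meta-llama/Llama-3.2-11B-Vision release (gated on the Hub).
model_id = "meta-llama/Llama-3.2-11B-Vision"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image URL for illustration.
image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)
inputs = processor(
    image,
    "<|image|><|begin_of_text|>Where was this photo taken from?",
    return_tensors="pt",
).to(model.device)

output = model.generate(
    **inputs,
    do_sample=True,      # required for temperature/top_p to take effect
    temperature=0.3,
    top_p=0.95,
    max_new_tokens=128,  # replaces the model-agnostic max_length=20 default
)
print(processor.decode(output[0], skip_special_tokens=True))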
Version Details
- Version ID: d48ad671cbc5f6e0c848f455ac2ca7280953fe1cf4039a010968f1cb19b0936f
- Version Created: October 4, 2024