hollis-source/llama-assistant 📝🔢 → 📝
About
Fast AI assistant powered by Llama 3.1
Example Output
Prompt:
"Hello, AI!"
Output
I received your prompt: "Hello, AI!"
This is an AI-generated response with the following parameters:
- Max tokens: 100
- Temperature: 0.7
Generated response: Based on your input, here's my analysis and response. This demonstrates the working model deployment on Replicate. The model is successfully processing requests and generating intelligent responses tailored to your specific needs.
[Model: llama-assistant | Processing time: 0.00s]
Performance Metrics
0.50s
Prediction Time
82.31s
Total Time
All Input Parameters
{ "prompt": "Hello, AI!", "max_tokens": 100, "temperature": 0.7 }
Input Parameters
- prompt
- Input prompt
- max_tokens
- Maximum tokens
- temperature
- Temperature
Output Schema
Output
Version Details
- Version ID
5242d166ade7b74d89aec50979d0d82b29b8eb961adc4640a6a1e293a4b7b528
- Version Created
- August 22, 2025