nateraw/nous-hermes-llama2-awq 🔢📝 → 📝
About
TheBloke/Nous-Hermes-Llama2-AWQ served with vLLM
Example Output
Prompt:
"Write a short script that uses the PRAW library to download the top 3 posts of all time from /r/pics and save the image attachments."
Output
import praw
def get_top_posts(reddit, subreddit):
posts = reddit.subreddit(subreddit).top(limit=100)
top_posts = []
for post in posts:
if post.score >= 1000:
top_posts.append(post)
return top_posts
def save_image_attachments(posts, image_dir):
for post in posts:
for comment in post.comments:
if 'image' in comment.body.lower() or 'jpg' in comment.body.lower() or 'png' in comment.body.lower():
try:
comment.crop((0, 0, 1080, 1920))
comment.download(os.path.join(image_dir, f"{post.id}-{comment.name}"))
except Exception as e:
print(f"Failed to download image for post {post.id}: {e}")
if __name__ == "__main__":
# Create PRAW client
client_id, client_secret, user_agent = "your_client_id", "your_client_secret", "pic-downloader"
reddit = praw.Reddit(client_id=client_id, client_secret=client_secret, user_agent=user_agent)
# Get top 3 posts of all time from /r/pics and save the image attachments
image_dir = "images"
top_posts = get_top_posts(reddit, "pics")
save_image_attachments(top_posts, image_dir)
# Close PRAW client
reddit.close()
Replace "your_client_id", "your_client_secret", and "pic-downloader" with your own PRAW client ID, client secret, and user agent string. This script will download the top 3 posts of all time from /r/pics and save their image attachments to a folder named "images" in the same directory as the script.
Performance Metrics
23.29s
Prediction Time
23.30s
Total Time
All Input Parameters
{ "top_k": 50, "top_p": 0.95, "prompt": "Write a short script that uses the PRAW library to download the top 3 posts of all time from /r/pics and save the image attachments.", "temperature": 0.8, "max_new_tokens": 512, "prompt_template": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:\n", "presence_penalty": 0, "frequency_penalty": 0 }
Input Parameters
- top_k
- The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).
- top_p
- A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- prompt (required)
- temperature
- The value used to modulate the next token probabilities.
- max_new_tokens
- The maximum number of tokens the model should generate as output.
- prompt_template
- The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.
- presence_penalty
- Presence penalty
- frequency_penalty
- Frequency penalty
Output Schema
Output
Example Execution Logs
Generated 498 tokens in 9.170540809631348 seconds.
Version Details
- Version ID
b3f3f0120a3c4fd37a5e75164cc3ed883c248b9e6d004a70f0d31c3b0debb604
- Version Created
- October 16, 2023