nateraw/codellama-13b-instruct 🔢📝 → 📝
Example Output
Here is an example of a Python function that reads an HTML file from the internet and extracts the text content of all the h1 elements:
```python
import requests
from bs4 import BeautifulSoup

def get_h1_text(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    h1_elements = soup.find_all('h1')
    h1_text = []
    for h1 in h1_elements:
        h1_text.append(h1.text.strip())
    return h1_text
```
This function uses the `requests` library to make an HTTP GET request to the specified URL, and the `BeautifulSoup` library to parse the HTML response. It then uses the `find_all()` method of the `BeautifulSoup` object to find all the `h1` elements in the HTML document and appends the text content of each `h1` element to a list called `h1_text`. Finally, the function returns the `h1_text` list.
Here is an example of how you could use this function:
```python
url = 'https://www.example.com'
h1_text = get_h1_text(url)
print(h1_text)
```
This would print a list of the text content of all the `h1` elements in the HTML document at the specified URL.
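Where installing `bs4` is not an option, the same extraction can be sketched with only the standard library's `html.parser` module. The `extract_h1_text` helper below is illustrative and not part of the model's output; it parses an HTML string directly, which also separates fetching from parsing:

```python
from html.parser import HTMLParser

class H1TextExtractor(HTMLParser):
    """Collects the text content of every <h1> element."""
    def __init__(self):
        super().__init__()
        self._depth = 0     # > 0 while inside an <h1>
        self._chunks = []   # text pieces of the current <h1>
        self.h1_text = []   # finished h1 strings

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self._depth += 1

    def handle_endtag(self, tag):
        if tag == "h1" and self._depth > 0:
            self._depth -= 1
            if self._depth == 0:
                self.h1_text.append("".join(self._chunks).strip())
                self._chunks = []

    def handle_data(self, data):
        if self._depth > 0:
            self._chunks.append(data)

def extract_h1_text(html):
    """Return the stripped text of every <h1> in an HTML string."""
    parser = H1TextExtractor()
    parser.feed(html)
    return parser.h1_text
```

Because this variant takes a string, it can be unit-tested without any network access; fetching the page (for example with `requests.get(url).text`) stays a separate concern.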
Performance Metrics
- Prediction Time: 43.38s
- Total Time: 151.29s
All Input Parameters
```json
{
  "top_k": 50,
  "top_p": 0.95,
  "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
  "temperature": 0.8,
  "system_prompt": "Provide answers in Python",
  "max_new_tokens": 1024
}
```
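These parameters can be passed to the model with the Replicate Python client. The sketch below assumes `pip install replicate` and a valid `REPLICATE_API_TOKEN` environment variable; it mirrors the input shown above:

```python
# Model reference: owner/name followed by the version ID from this page.
MODEL = (
    "nateraw/codellama-13b-instruct:"
    "4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309"
)

params = {
    "message": "Write a python function that reads an html file from the "
               "internet and extracts the text content of all the h1 elements",
    "system_prompt": "Provide answers in Python",
    "temperature": 0.8,
    "top_k": 50,
    "top_p": 0.95,
    "max_new_tokens": 1024,
}

def run_model():
    """Call the model over the Replicate API (requires network access)."""
    import replicate  # deferred so the snippet loads without the client installed
    return replicate.run(MODEL, input=params)
```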
Input Parameters
- top_k: The number of highest-probability tokens to consider when generating the output. If > 0, only the top k tokens with the highest probability are kept (top-k filtering).
- top_p: A probability threshold for generating the output. If < 1.0, only the top tokens with cumulative probability >= top_p are kept (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- message (required): The user message (prompt) sent to the model.
- temperature: The value used to modulate the next-token probabilities.
- system_prompt: The system prompt to use (for chat/instruct models only).
- max_new_tokens: The maximum number of tokens the model should generate as output.
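The interplay between top_k and top_p can be sketched in plain Python. The filter below operates on a token-to-probability dict and is purely illustrative; it is not the model's actual sampling code:

```python
def filter_logits(probs, top_k=0, top_p=1.0):
    """Return the subset of (token, prob) pairs that survive top-k and
    nucleus (top-p) filtering, renormalized to sum to 1."""
    # Sort tokens by probability, most likely first.
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        items = items[:top_k]        # keep only the k most likely tokens
    if top_p < 1.0:
        kept, cum = [], 0.0
        for tok, p in items:
            kept.append((tok, p))
            cum += p
            if cum >= top_p:         # stop once cumulative mass reaches top_p
                break
        items = kept
    total = sum(p for _, p in items)
    return {tok: p / total for tok, p in items}
```

With the defaults from this page (top_k=50, top_p=0.95), a token whose probability falls outside the 0.95 cumulative mass is never sampled, regardless of temperature.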
Output Schema
Example Execution Logs
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Version Details
- Version ID: 4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309
- Version Created: September 28, 2023