nateraw/codellama-7b-instruct 🔢📝 → 📝

▶️ 28 runs 📅 Sep 2023 ⚙️ Cog 0.8.6
code-generation question-answering text-generation

Example Output

Output

Here is an example of how you might do this in Python:

import requests
from bs4 import BeautifulSoup

def get_h1_text(url):
    # Fetch the page and parse it with the built-in HTML parser.
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    # Collect the stripped text of every <h1> element on the page.
    h1_texts = []
    for h1 in soup.find_all('h1'):
        h1_texts.append(h1.text.strip())
    return h1_texts

This function takes in a URL as a string argument and returns a list of strings, where each string is the text content of an h1 element in the HTML file at that URL.

You can use this function like this:

h1_texts = get_h1_text('https://www.example.com')
print(h1_texts)

This will print a list containing the text content of every h1 element in the HTML file at the given URL.

Note: This example uses the requests library to retrieve the HTML content from the URL and Beautiful Soup (the bs4 package) to parse it. You will need to install both libraries in your Python environment before running this code.

Performance Metrics

42.63s Prediction Time
84.99s Total Time
All Input Parameters
{
  "top_k": 50,
  "top_p": 0.95,
  "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
  "temperature": 0.8,
  "system_prompt": "Provide answers in Python",
  "max_new_tokens": 1024
}
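
For reference, the following is a minimal sketch of how the same prediction could be reproduced with the Replicate Python client, using the inputs above and the version ID from the Version Details section below. It assumes the replicate package is installed and that REPLICATE_API_TOKEN is set in the environment.

import replicate

output = replicate.run(
    "nateraw/codellama-7b-instruct:6a9502976b5d020c6dd421d5790d988cd77013b642ea1276669550603a81989f",
    input={
        "top_k": 50,
        "top_p": 0.95,
        "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
        "temperature": 0.8,
        "system_prompt": "Provide answers in Python",
        "max_new_tokens": 1024,
    },
)
# `output` follows the Output Schema below: an array of strings.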
Input Parameters
top_k Type: integer | Default: 50
The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).
top_p Type: number | Default: 0.9
A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751). A short sketch of how top-k and top-p filtering work together is shown after this parameter list.
message (required) Type: string
temperature Type: number | Default: 0.2
The value used to modulate the next token probabilities.
system_prompt Type: string | Default: Provide answers in Python
The system prompt to use (for chat/instruct models only)
max_new_tokens Type: integer | Default: 256
The maximum number of tokens the model should generate as output.
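
As referenced in the top_p description above, the following is a minimal illustrative sketch (plain Python with NumPy, not this model's actual implementation) of how top-k and top-p (nucleus) filtering restrict the candidate tokens before one is sampled. The function name and the toy probabilities are hypothetical.

import numpy as np

def filter_probs(probs, top_k=50, top_p=0.95):
    """Return a renormalized distribution keeping only top-k / nucleus tokens."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]  # token indices, most probable first
    keep = np.zeros(probs.shape, dtype=bool)

    # Top-k: keep only the k highest-probability tokens (if top_k > 0).
    if top_k > 0:
        keep[order[:top_k]] = True
    else:
        keep[:] = True

    # Top-p (nucleus): keep the smallest prefix of tokens whose
    # cumulative probability reaches top_p (if top_p < 1.0).
    if top_p < 1.0:
        cumulative = np.cumsum(probs[order])
        cutoff = np.searchsorted(cumulative, top_p) + 1
        nucleus = np.zeros(probs.shape, dtype=bool)
        nucleus[order[:cutoff]] = True
        keep &= nucleus

    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()

# Toy example with a 5-token vocabulary.
print(filter_probs([0.4, 0.3, 0.15, 0.1, 0.05], top_k=3, top_p=0.9))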
Output Schema

Output

Type: array | Items type: string
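
Because the output is an array of string chunks rather than a single string, client code typically concatenates the items. A minimal sketch, where output is the value returned by the prediction sketch above:

# `output` is the array of string chunks described by the schema above.
generated_text = "".join(output)
print(generated_text)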

Example Execution Logs
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Version Details
Version ID
6a9502976b5d020c6dd421d5790d988cd77013b642ea1276669550603a81989f
Version Created
September 28, 2023
Run on Replicate →