victor-upmeet/whisperx-a40-large
About
Accelerated transcription, word-level timestamps, and diarization of large audio files with WhisperX large-v3

Example Output
Output
{
  "segments": [
    {
      "start": 2.585,
      "end": 30.811,
      "text": " The little tales they tell are false. The door was barred, locked and bolted as well. Ripe pears are fit for a queen's table. A big wet stain was on the round carpet. The kite dipped and swayed but stayed aloft. The pleasant hours fly by much too soon. The room was crowded with a mild wob."
    },
    {
      "start": 33.029,
      "end": 48.592,
      "text": " The room was crowded with a wild mob. This strong arm shall shield your honor. She blushed when he gave her a white orchid. The beetle droned in the hot June sun."
    }
  ],
  "detected_language": "en"
}
Performance Metrics
8.00s
Prediction Time
139.60s
Total Time
All Input Parameters
{
  "debug": false,
  "vad_onset": 0.5,
  "audio_file": "https://replicate.delivery/pbxt/JrckTmbaACSq83MQ5IW8E85b2NPUWZYpCyvxD7A836I5j21G/OSR_uk_000_0050_8k.wav",
  "batch_size": 64,
  "vad_offset": 0.363,
  "diarization": false,
  "temperature": 0,
  "align_output": false
}
Input Parameters
- debug
- Print out compute/inference times and memory usage information
- language
- ISO code of the language spoken in the audio, specify None to perform language detection
- vad_onset
- VAD onset
- audio_file (required)
- Audio file
- batch_size
- Parallelization of input audio transcription
- vad_offset
- VAD offset
- diarization
- Assign speaker ID labels
- temperature
- Temperature to use for sampling
- align_output
- Aligns whisper output to get accurate word-level timestamps
- max_speakers
- Maximum number of speakers if diarization is activated (leave blank if unknown)
- min_speakers
- Minimum number of speakers if diarization is activated (leave blank if unknown)
- initial_prompt
- Optional text to provide as a prompt for the first window
- huggingface_access_token
- To enable diarization, provide your Hugging Face access token (read scope). You also need to accept the user agreements for the models specified in the README.
- language_detection_min_prob
- If no language is specified, language detection is run repeatedly on different parts of the file until the detection probability reaches this threshold
- language_detection_max_tries
- If no language is specified, language detection follows the language_detection_min_prob logic but stops after this maximum number of tries. If the limit is reached, the most probable language is kept.
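The parameters above map directly to the model's input payload. As a minimal sketch, the helper below assembles such a payload (the function name and defaults are illustrative, not part of the model; only `audio_file` is required, and defaults mirror the "All Input Parameters" example). The commented-out `replicate.run` call shows how the payload would be passed to the official Replicate Python client, which requires a `REPLICATE_API_TOKEN`:

```python
def build_whisperx_input(audio_file, language=None, batch_size=64,
                         temperature=0, diarization=False, align_output=False,
                         huggingface_access_token=None):
    """Assemble an input dict for whisperx-a40-large.

    Hypothetical helper: parameter names come from this page's
    "Input Parameters" list; audio_file is the only required field.
    """
    if diarization and not huggingface_access_token:
        # Per the parameter docs, diarization needs a Hugging Face token.
        raise ValueError("diarization requires a Hugging Face access token")
    payload = {
        "audio_file": audio_file,
        "batch_size": batch_size,
        "temperature": temperature,
        "diarization": diarization,
        "align_output": align_output,
    }
    if language is not None:
        payload["language"] = language  # ISO code; omit to auto-detect
    if huggingface_access_token:
        payload["huggingface_access_token"] = huggingface_access_token
    return payload

# Illustrative call (not executed here; needs REPLICATE_API_TOKEN):
# import replicate
# output = replicate.run(
#     "victor-upmeet/whisperx-a40-large:"
#     "1395a1d7aa48a01094887250475f384d4bae08fd0616f9c405bb81d4174597ea",
#     input=build_whisperx_input("https://example.com/audio.wav"),
# )
```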
Output Schema
- segments
- Segments
- detected_language
- Detected Language
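The output shape shown in "Example Output" above can be consumed directly: each segment carries `start`, `end`, and `text`, alongside a top-level `detected_language` ISO code. A small sketch of joining segments into a transcript and summing speech time (the `output` dict below is an abridged stand-in for a real response):

```python
# Abridged stand-in for a real model response, matching the Output Schema.
output = {
    "segments": [
        {"start": 2.585, "end": 30.811,
         "text": " The little tales they tell are false."},
        {"start": 33.029, "end": 48.592,
         "text": " The room was crowded with a wild mob."},
    ],
    "detected_language": "en",
}

# Join segment texts into one transcript string.
transcript = " ".join(seg["text"].strip() for seg in output["segments"])

# Total detected speech duration in seconds (gaps between segments excluded).
total_speech = sum(seg["end"] - seg["start"] for seg in output["segments"])

print(output["detected_language"], round(total_speech, 3))
```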
Example Execution Logs
No language specified, language will be first be detected for each audio file (increases inference time).
Lightning automatically upgraded your loaded checkpoint from v1.5.4 to v2.1.1. To apply the upgrade to your files permanently, run `python -m pytorch_lightning.utilities.upgrade_checkpoint ../root/.cache/torch/whisperx-vad-segmentation.bin`
Model was trained with pyannote.audio 0.0.1, yours is 3.0.1. Bad things might happen unless you revert pyannote.audio to 0.x.
Model was trained with torch 1.10.0+cu102, yours is 2.1.0+cu121. Bad things might happen unless you revert torch to 1.x.
Detected language: en (1.00) in first 30s of audio...
Version Details
- Version ID
1395a1d7aa48a01094887250475f384d4bae08fd0616f9c405bb81d4174597ea
- Version Created
- August 30, 2024