ttsds/fishspeech_1_0

▶️ 184 runs 📅 Jan 2025 ⚙️ Cog 0.13.6 🔗 GitHub 📄 Paper ⚖️ License
text-to-speech voice-cloning

About

Fish Speech V1.0, a text-to-speech model that clones a target voice from a reference audio clip (speaker_reference) and the matching transcript of that clip (text_reference).

Example Output

[example audio output not reproduced here]

Performance Metrics

3.16s Prediction Time
83.16s Total Time (total time includes queueing and model setup in addition to the prediction itself)
All Input Parameters
{
  "text": "With tenure, Suzie'd have all the more leisure for yachting, but her publications are no good.",
  "text_reference": "and keeping eternity before the eyes, though much",
  "speaker_reference": "https://replicate.delivery/pbxt/MNFXdPaUPOwYCZjZM4azsymbzE2TCV2WJXfGpeV2DrFWaSq8/example_en.wav"
}
Input Parameters

text (required) Type: string. The text to synthesize.
text_reference (required) Type: string. Transcript of the reference audio clip.
speaker_reference (required) Type: string. URL of the reference audio clip whose voice should be cloned.
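
For reference, a minimal sketch of calling this version from Python. It assumes the replicate client library is installed and REPLICATE_API_TOKEN is set; the model/version string is taken from the Version Details section below, and the input values mirror the example above.

import replicate

# Sketch only: assumes the `replicate` Python client and an API token
# in the environment. The version ID is from Version Details below.
output = replicate.run(
    "ttsds/fishspeech_1_0:54ec17a9858e5a0936b5fd67eb22dbdc4e7fb4262cfe59b1c117f89ca4b5a12b",
    input={
        # Text to synthesize in the cloned voice.
        "text": "With tenure, Suzie'd have all the more leisure for yachting, "
                "but her publications are no good.",
        # Transcript of the reference clip below.
        "text_reference": "and keeping eternity before the eyes, though much",
        # Reference audio whose voice is cloned.
        "speaker_reference": "https://replicate.delivery/pbxt/MNFXdPaUPOwYCZjZM4azsymbzE2TCV2WJXfGpeV2DrFWaSq8/example_en.wav",
    },
)
print(output)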
Output Schema

Output

Type: string
Format: uri
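
Because the output is a URI rather than raw audio bytes, the file has to be fetched separately. A small sketch using only the Python standard library; the local filename is arbitrary, and str() is a defensive cast in case the client wraps the URI in a helper object.

import urllib.request

# `output` is the value returned by replicate.run in the sketch above.
urllib.request.urlretrieve(str(output), "fishspeech_output.wav")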

Example Execution Logs
2025-01-28 09:46:42.653 | INFO     | tools.llama.generate:generate_long:491 - Encoded text: With tenure, Suzie'd have all
2025-01-28 09:46:42.654 | INFO     | tools.llama.generate:generate_long:491 - Encoded text: the more leisure for yachting,
2025-01-28 09:46:42.654 | INFO     | tools.llama.generate:generate_long:491 - Encoded text: but her publications are no
2025-01-28 09:46:42.654 | INFO     | tools.llama.generate:generate_long:491 - Encoded text: good.
2025-01-28 09:46:42.654 | INFO     | tools.llama.generate:generate_long:509 - Generating sentence 1/4 of sample 1/1
  0%|          | 0/1858 [00:00<?, ?it/s]/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/torch/backends/cuda/__init__.py:342: FutureWarning: torch.backends.cuda.sdp_kernel() is deprecated. In the future, this context manager will be removed. Please see, torch.nn.attention.sdpa_kernel() for the new context manager, with updated signature.
warnings.warn(
  0%|          | 6/1858 [00:00<00:31, 58.98it/s]
  1%|          | 13/1858 [00:00<00:29, 62.02it/s]
  1%|          | 20/1858 [00:00<00:29, 62.85it/s]
  1%|▏         | 27/1858 [00:00<00:29, 62.97it/s]
  2%|▏         | 34/1858 [00:00<00:28, 63.17it/s]
  2%|▏         | 41/1858 [00:00<00:28, 63.23it/s]
  3%|▎         | 48/1858 [00:00<00:28, 63.45it/s]
  3%|▎         | 55/1858 [00:00<00:28, 63.64it/s]
  3%|▎         | 57/1858 [00:00<00:29, 62.02it/s]
2025-01-28 09:46:43.653 | INFO     | tools.llama.generate:generate_long:565 - Generated 59 tokens in 1.00 seconds, 59.06 tokens/sec
2025-01-28 09:46:43.653 | INFO     | tools.llama.generate:generate_long:568 - Bandwidth achieved: 23.04 GB/s
2025-01-28 09:46:43.654 | INFO     | tools.llama.generate:generate_long:573 - GPU Memory used: 2.45 GB
2025-01-28 09:46:43.654 | INFO     | tools.llama.generate:generate_long:509 - Generating sentence 2/4 of sample 1/1
  0%|          | 0/1751 [00:00<?, ?it/s]
  0%|          | 7/1751 [00:00<00:27, 63.96it/s]
  1%|          | 14/1751 [00:00<00:27, 64.07it/s]
  1%|          | 21/1751 [00:00<00:26, 64.20it/s]
  2%|▏         | 28/1751 [00:00<00:26, 63.93it/s]
  2%|▏         | 35/1751 [00:00<00:26, 63.92it/s]
  2%|▏         | 39/1751 [00:00<00:27, 62.36it/s]
2025-01-28 09:46:44.300 | INFO     | tools.llama.generate:generate_long:565 - Generated 41 tokens in 0.65 seconds, 63.47 tokens/sec
2025-01-28 09:46:44.300 | INFO     | tools.llama.generate:generate_long:568 - Bandwidth achieved: 24.77 GB/s
2025-01-28 09:46:44.300 | INFO     | tools.llama.generate:generate_long:573 - GPU Memory used: 2.45 GB
2025-01-28 09:46:44.301 | INFO     | tools.llama.generate:generate_long:509 - Generating sentence 3/4 of sample 1/1
  0%|          | 0/1665 [00:00<?, ?it/s]
  0%|          | 7/1665 [00:00<00:25, 64.20it/s]
  1%|          | 14/1665 [00:00<00:26, 63.41it/s]
  1%|▏         | 21/1665 [00:00<00:25, 63.69it/s]
  2%|▏         | 28/1665 [00:00<00:25, 63.84it/s]
  2%|▏         | 35/1665 [00:00<00:25, 63.98it/s]
  2%|▏         | 37/1665 [00:00<00:26, 62.18it/s]
2025-01-28 09:46:44.911 | INFO     | tools.llama.generate:generate_long:565 - Generated 39 tokens in 0.61 seconds, 63.85 tokens/sec
2025-01-28 09:46:44.912 | INFO     | tools.llama.generate:generate_long:568 - Bandwidth achieved: 24.91 GB/s
2025-01-28 09:46:44.912 | INFO     | tools.llama.generate:generate_long:573 - GPU Memory used: 2.45 GB
2025-01-28 09:46:44.912 | INFO     | tools.llama.generate:generate_long:509 - Generating sentence 4/4 of sample 1/1
  0%|          | 0/1603 [00:00<?, ?it/s]
  0%|          | 7/1603 [00:00<00:24, 64.35it/s]
  1%|          | 14/1603 [00:00<00:24, 64.30it/s]
  1%|          | 17/1603 [00:00<00:26, 60.48it/s]
2025-01-28 09:46:45.209 | INFO     | tools.llama.generate:generate_long:565 - Generated 19 tokens in 0.30 seconds, 63.95 tokens/sec
2025-01-28 09:46:45.209 | INFO     | tools.llama.generate:generate_long:568 - Bandwidth achieved: 24.95 GB/s
2025-01-28 09:46:45.209 | INFO     | tools.llama.generate:generate_long:573 - GPU Memory used: 2.45 GB
/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/torch/nn/modules/conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
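
The logs above show the LLaMA-style decoder (tools.llama.generate) splitting the input into four chunks and generating each autoregressively, logging token throughput, memory bandwidth, and GPU memory per chunk. As a rough sanity check, bandwidth divided by token rate gives the weight traffic per generated token; a back-of-the-envelope sketch using the figures logged for sentence 1/4:

# Back-of-the-envelope check on the logged throughput (sentence 1/4).
tokens = 59
seconds = 1.00
bandwidth_gb_s = 23.04

tok_per_s = tokens / seconds               # ~59 tokens/sec, as logged
gb_per_token = bandwidth_gb_s / tok_per_s  # ~0.39 GB read per token
print(f"{tok_per_s:.2f} tok/s, {gb_per_token:.2f} GB/token")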
Version Details
Version ID
54ec17a9858e5a0936b5fd67eb22dbdc4e7fb4262cfe59b1c117f89ca4b5a12b
Version Created
January 28, 2025