kcaverly/deepseek-coder-6.7b-instruct
About
A ~7B-parameter language model from Deepseek for SOTA repository-level code completion
Example Output
Output
Sure, here is how you can define an enum named `PredictionStatus` with three variants: `Starting`, `InProgress`, and `Complete` in Rust:
```rust
#[derive(Debug)]
enum PredictionStatus {
    Starting,
    InProgress,
    Complete,
}
```
In this code:
- `#[derive(Debug)]` is an attribute that tells Rust to derive certain traits for this enum. In this case, `Debug` is used to enable printing of the enum variants for debugging purposes.
- `enum PredictionStatus` defines a new enum type named `PredictionStatus`.
- `Starting`, `InProgress`, and `Complete` are the variants of the enum.
You can use this enum in your code like this:
```rust
fn main() {
    let status = PredictionStatus::Starting;
    println!("{:?}", status); // prints: Starting
}
```
In this code, `PredictionStatus::Starting` creates a value of the `PredictionStatus` enum with the variant `Starting`. The `println!` macro with the `{:?}` format string is used to print the enum value, which will print the variant name.
Performance Metrics
14.98s
Prediction Time
230.52s
Total Time
All Input Parameters
{
"messages": "[{\"role\": \"user\", \"content\": \"please create a rust enum named prediction status, with three variants starting, in progress and complete\"}]",
"do_sample": false,
"max_new_tokens": 512,
"num_return_sequences": 1
}
Input Parameters
- top_k
- The number of highest probability vocabulary tokens to keep for top-k filtering.
- top_p
- If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
- messages (required)
- Chat messages, passed as a json string
- do_sample
- Whether or not to use sampling; use greedy decoding otherwise.
- max_new_tokens
- Maximum new tokens to generate.
- num_return_sequences
- The number of independently computed returned sequences for each element in the batch.
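To illustrate how these parameters fit together, here is a minimal Python sketch that builds an input payload matching the "All Input Parameters" example above. Note that `messages` must be a JSON-encoded *string*, not a raw list. The `replicate.run` call and the version identifier shown in the comment are assumptions based on the standard Replicate Python client, and are left unexecuted here.

```python
import json

# "messages" is passed as a JSON string, not a list of dicts.
messages = json.dumps([
    {
        "role": "user",
        "content": "please create a rust enum named prediction status, "
                   "with three variants starting, in progress and complete",
    }
])

model_input = {
    "messages": messages,
    "do_sample": False,        # greedy decoding; top_k/top_p are ignored when False
    "max_new_tokens": 512,
    "num_return_sequences": 1,
}

# With the official Replicate client this could then be run as (network call,
# version id elided, not executed in this sketch):
# import replicate
# output = replicate.run(
#     "kcaverly/deepseek-coder-6.7b-instruct:<version-id>", input=model_input
# )

print(model_input["messages"])
```

Setting `do_sample` to `False` while also passing `top_k` or `top_p` triggers the `UserWarning`s shown in the execution logs below, since those flags only apply to sample-based decoding.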
Output Schema
Output
Example Execution Logs
/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
warnings.warn(
/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/transformers/generation/configuration_utils.py:396: UserWarning: `do_sample` is set to `False`. However, `top_k` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_k`.
warnings.warn(
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:32021 for open-end generation.
Sure, here is how you can define an enum named `PredictionStatus` with three variants: `Starting`, `InProgress`, and `Complete` in Rust:
```rust
#[derive(Debug)]
enum PredictionStatus {
    Starting,
    InProgress,
    Complete,
}
```
In this code:
- `#[derive(Debug)]` is an attribute that tells Rust to derive certain traits for this enum. In this case, `Debug` is used to enable printing of the enum variants for debugging purposes.
- `enum PredictionStatus` defines a new enum type named `PredictionStatus`.
- `Starting`, `InProgress`, and `Complete` are the variants of the enum.
You can use this enum in your code like this:
```rust
fn main() {
    let status = PredictionStatus::Starting;
    println!("{:?}", status); // prints: Starting
}
```
In this code, `PredictionStatus::Starting` creates a value of the `PredictionStatus` enum with the variant `Starting`. The `println!` macro with the `{:?}` format string is used to print the enum value, which will print the variant name.
<|EOT|>
Version Details
- Version ID
- 4d38f86e1a3091b3a54b76333040315134dd57070d28397f378857b254ae1590
- Version Created
- December 7, 2023