Model
TTSModelSettings
dataclass
Settings for a TTS model.
Source code in src/cai/sdk/agents/voice/model.py
voice
class-attribute
instance-attribute
```python
voice: (
    Literal[
        "alloy",
        "ash",
        "coral",
        "echo",
        "fable",
        "onyx",
        "nova",
        "sage",
        "shimmer",
    ]
    | None
) = None
```
The voice to use for the TTS model. If not provided, the default voice for the respective model will be used.
buffer_size
class-attribute
instance-attribute
```python
buffer_size: int = 120
```
The minimum size of the audio chunks that are streamed out.
dtype
class-attribute
instance-attribute
```python
dtype: DTypeLike = int16
```
The data type in which the audio data is returned.
transform_data
class-attribute
instance-attribute
```python
transform_data: (
    Callable[
        [NDArray[int16 | float32]], NDArray[int16 | float32]
    ]
    | None
) = None
```
A function to transform the data from the TTS model. This is useful if you want the resulting audio stream to have the data in a specific shape already.
instructions
class-attribute
instance-attribute
```python
instructions: str = "You will receive partial sentences. Do not complete the sentence just read out the text."
```
The instructions to use for the TTS model. This is useful if you want to control the tone of the audio output.
text_splitter
class-attribute
instance-attribute
```python
text_splitter: Callable[[str], tuple[str, str]] = (
    get_sentence_based_splitter()
)
```
A function to split the text into chunks. This is useful if you want to stream chunks to the TTS model as they become available, rather than waiting for the whole text to be processed.
speed
class-attribute
instance-attribute
```python
speed: float | None = None
```
The speed at which the TTS model reads the text. Must be between 0.25 and 4.0.
TTSModel
Bases: ABC
A text-to-speech model that can convert text into audio output.
Source code in src/cai/sdk/agents/voice/model.py
model_name
abstractmethod
property
```python
model_name: str
```
The name of the TTS model.
run
abstractmethod
```python
run(
    text: str, settings: TTSModelSettings
) -> AsyncIterator[bytes]
```
Given a text string, produces a stream of audio bytes, in PCM format.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The text to convert to audio. | *required* |
| `settings` | `TTSModelSettings` | The settings to use for the TTS model. | *required* |

Returns:

| Type | Description |
|---|---|
| `AsyncIterator[bytes]` | An async iterator of audio bytes, in PCM format. |
Source code in src/cai/sdk/agents/voice/model.py
StreamedTranscriptionSession
Bases: ABC
A streamed transcription of audio input.
Source code in src/cai/sdk/agents/voice/model.py
transcribe_turns
abstractmethod
```python
transcribe_turns() -> AsyncIterator[str]
```
Yields a stream of text transcriptions. Each transcription is a turn in the conversation.
This method is expected to return only after close() is called.
Source code in src/cai/sdk/agents/voice/model.py
close
abstractmethod
async
```python
close() -> None
```
Closes the session.
Source code in src/cai/sdk/agents/voice/model.py
STTModelSettings
dataclass
Settings for a speech-to-text model.
Source code in src/cai/sdk/agents/voice/model.py
prompt
class-attribute
instance-attribute
```python
prompt: str | None = None
```
Instructions for the model to follow.
language
class-attribute
instance-attribute
```python
language: str | None = None
```
The language of the audio input.
temperature
class-attribute
instance-attribute
```python
temperature: float | None = None
```
The temperature of the model.
turn_detection
class-attribute
instance-attribute
```python
turn_detection: dict[str, Any] | None = None
```
The turn detection settings for the model when using streamed audio input.
STTModel
Bases: ABC
A speech-to-text model that can convert audio input into text.
Source code in src/cai/sdk/agents/voice/model.py
model_name
abstractmethod
property
```python
model_name: str
```
The name of the STT model.
transcribe
abstractmethod
async
```python
transcribe(
    input: AudioInput,
    settings: STTModelSettings,
    trace_include_sensitive_data: bool,
    trace_include_sensitive_audio_data: bool,
) -> str
```
Given an audio input, produces a text transcription.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input` | `AudioInput` | The audio input to transcribe. | *required* |
| `settings` | `STTModelSettings` | The settings to use for the transcription. | *required* |
| `trace_include_sensitive_data` | `bool` | Whether to include sensitive data in traces. | *required* |
| `trace_include_sensitive_audio_data` | `bool` | Whether to include sensitive audio data in traces. | *required* |

Returns:

| Type | Description |
|---|---|
| `str` | The text transcription of the audio input. |
Source code in src/cai/sdk/agents/voice/model.py
create_session
abstractmethod
async
```python
create_session(
    input: StreamedAudioInput,
    settings: STTModelSettings,
    trace_include_sensitive_data: bool,
    trace_include_sensitive_audio_data: bool,
) -> StreamedTranscriptionSession
```
Creates a new transcription session to which you can push audio and from which you can receive a stream of text transcriptions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input` | `StreamedAudioInput` | The audio input to transcribe. | *required* |
| `settings` | `STTModelSettings` | The settings to use for the transcription. | *required* |
| `trace_include_sensitive_data` | `bool` | Whether to include sensitive data in traces. | *required* |
| `trace_include_sensitive_audio_data` | `bool` | Whether to include sensitive audio data in traces. | *required* |

Returns:

| Type | Description |
|---|---|
| `StreamedTranscriptionSession` | A new transcription session. |
Source code in src/cai/sdk/agents/voice/model.py
VoiceModelProvider
Bases: ABC
The base interface for a voice model provider.
A model provider is responsible for creating speech-to-text and text-to-speech models, given a name.
Source code in src/cai/sdk/agents/voice/model.py
get_stt_model
abstractmethod
```python
get_stt_model(model_name: str | None) -> STTModel
```
Get a speech-to-text model by name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name` | `str \| None` | The name of the model to get. | *required* |

Returns:

| Type | Description |
|---|---|
| `STTModel` | The speech-to-text model. |
Source code in src/cai/sdk/agents/voice/model.py
get_tts_model
abstractmethod
```python
get_tts_model(model_name: str | None) -> TTSModel
```
Get a text-to-speech model by name.
Source code in src/cai/sdk/agents/voice/model.py