Advanced Features
Dasha BlackBox provides advanced capabilities to handle complex conversation scenarios and enhance call quality. This guide covers the key features available in the Features tab when creating or editing your agent.

Overview
The Features tab allows you to enable and configure:

- Ambient Noise Handling: Add realistic background noise to agent voice output
- Backchannel: Agent acknowledges user while they speak (e.g., “uh-huh”, “I see”)
- Call Transfer: Route calls to human agents or other endpoints
- Fillers: Add natural speech hesitations to make agent sound more human
- IVR Detection: Detect answering machines and IVR systems
- Language Switching: Let users switch languages mid-conversation
- Max Duration: Set maximum call duration limits
- Post-Call Analysis: Extract structured insights from conversations
- Silence Management: Handle prolonged user silence during calls
- Talk First: Agent speaks first when call connects
Language Switching
Users can request language changes during a call.

What is Language Switching?
Language switching allows your agent to:

- Respond to user language requests
- Adapt voice synthesis to match the requested language
- Continue the conversation in the target language
- Switch back to the original language when requested
When to Use Language Switching
Ideal Use Cases:

- Customer support for global audiences
- Healthcare services with diverse patient populations
- Government services in multilingual regions
- Travel and hospitality applications
Configuration
To enable language switching:

- Navigate to the Features tab in the agent editor
- Toggle Enable Language Switching to ON
- Save your agent configuration
The language switching feature is a simple on/off toggle. When enabled, the agent responds to user language requests. Ensure your LLM and TTS provider support the languages users may request.
Enable the Feature

In the Features tab:
- Toggle Language Switching to ON
- The agent responds to user language requests
Multilingual TTS options include:

- ElevenLabs: Multilingual V2 model supports 29 languages
- Cartesia: Sonic model with multilingual support
- Dasha: Multi-language voice synthesis
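Via the API, enabling the feature might look like the following sketch. The languageSwitching key name is an assumption and is not confirmed by this guide; the feature is documented only as an on/off toggle.

```json
{
  "languageSwitching": {
    "isEnabled": true
  }
}
```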
Requirements
For language switching to work effectively, ensure:

- LLM Support: Use models that handle multiple languages well
  - ✅ OpenAI GPT-4/GPT-4.1 (excellent multilingual support)
  - ✅ DeepSeek (strong multilingual capabilities)
  - ⚠️ Some specialized models may have limited language support
- TTS Support: Choose voices that match your target languages
  - ElevenLabs Multilingual V2: Supports 29 languages
  - Cartesia Sonic: Multilingual capabilities
  - Dasha: Supports only two languages, but fast
- System Prompt: Instruct your agent to handle language requests
Ambient Noise Handling
Add realistic background noise to your agent’s voice output.

What is Ambient Noise?
Ambient noise adds natural background sounds to your agent’s audio output:

- Makes the agent sound more realistic and human-like
- Simulates natural environments (office, call center, etc.)
- Adds authenticity to AI-generated voice
- Creates a more natural conversation experience
When to Use Ambient Noise
Ideal Use Cases:

- Agents posing as human customer service representatives
- Scenarios where perfect audio quality seems artificial
- Call center simulations requiring realistic background
- Applications where environmental context matters
Examples:

- Customer service agent appearing to work from an office
- Sales representative calling from a realistic workplace
- Appointment reminders that sound naturally human
- Any scenario where pure AI voice seems too perfect
Configuration
To enable ambient noise:

- Navigate to the Features tab
- Toggle Ambient Noise to ON
- Adjust the noise level from 0.0 (none) to 1.0 (maximum)
- Test to find the right balance for your use case
Configure background noise level for realistic audio output
Enable Ambient Noise

In the Features tab:
- Toggle Ambient Noise to ON
- Adjust the slider to set noise intensity
- Start with 0.3-0.5 for subtle realism
- Preview with test calls to find optimal level
Finding the Right Level
Noise Level Guidelines:

- 0.0: No background noise - pure AI voice (most artificial)
- 0.2-0.4: Subtle office ambiance - natural without distraction
- 0.5-0.7: Moderate background - noticeable call center environment
- 0.8-1.0: Heavy ambient noise - may impact clarity
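Via the API, the level might be set as follows. Both the ambientNoise wrapper key and its field names are assumptions based on the dashboard controls described above (toggle plus a 0.0-1.0 slider):

```json
{
  "ambientNoise": {
    "isEnabled": true,
    "level": 0.4
  }
}
```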
Best Practices
1. Balance Realism and Clarity
- Use enough noise to sound natural
- Avoid levels that impair understanding
- Test with your target audience

2. Mind Disclosure Requirements
- Some jurisdictions require AI disclosure
- Ambient noise doesn’t replace legal requirements
- Be transparent about AI agent identity when required
Ambient noise adds background sound to the agent’s output - it does not filter or suppress noise from the user’s environment. This feature enhances realism, not speech recognition.
Backchannel
Enable your agent to acknowledge users while they speak, creating more natural conversations.

What is Backchannel?
Backchannel refers to the short verbal cues that listeners use to indicate they’re actively listening without interrupting the speaker. These include phrases like “uh-huh”, “I see”, “right”, and “mmm-hmm”. When enabled, your agent will:

- Acknowledge the user during longer speech segments
- Make the conversation feel more natural and engaging
- Show active listening without interrupting
- Reduce the perception of talking to a machine
When to Use Backchannel
Ideal Use Cases:

- Customer support conversations with detailed explanations
- Sales calls where customers describe their needs
- Healthcare intake where patients share symptoms
- Any scenario requiring empathetic listening
When to Avoid:

- High-urgency calls (emergency services)
- Quick transactional interactions
- IVR-style routing scenarios
- When users expect rapid responses
Configuration
Backchannel is disabled by default. To enable it:

- Navigate to the Features tab in the agent editor
- Toggle Backchannel to ON
- Select your preferred behavior type
- Configure behavior-specific settings
- Save your agent configuration
Behavior Types
Dasha BlackBox supports three backchannel behavior types, each suited to different use cases:

- Static
- Static Advanced
- Smart V1
Static Behavior
The simplest backchannel implementation using predefined phrases with random timing.

Purpose: Quick setup with basic acknowledgment functionality.

Properties:

| Property | Type | Default | Description |
|---|---|---|---|
| type | string | "static" | Behavior type identifier |
| phrases | string[] | null | Phrases to use (e.g., ["uh-huh", "I see"]) |
| frequency | number | 0.5 | How often to trigger (0.0 = very rare, 1.0 = very frequent) |
Best For:

- Simple agents needing basic acknowledgment
- Testing backchannel functionality
- Scenarios where timing precision isn’t critical
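A static behavior sketch using the properties from the table above. The backchannel wrapper key and isEnabled flag are assumptions; type, phrases, and frequency come from the table:

```json
{
  "backchannel": {
    "isEnabled": true,
    "behavior": {
      "type": "static",
      "phrases": ["uh-huh", "I see", "right"],
      "frequency": 0.4
    }
  }
}
```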
Choosing the Right Behavior
| Scenario | Recommended Behavior | Reason |
|---|---|---|
| Simple support calls | Static | Low overhead, predictable behavior |
| High-volume outbound | Static Advanced | Precise timing, cost-efficient |
| Empathetic support | Smart V1 | Context-aware, natural responses |
| Healthcare intake | Smart V1 | Adapts to emotional content |
| Sales qualification | Static Advanced | Consistent, professional acknowledgments |
| Quick surveys | Static | Minimal overhead for short calls |
Best Practices
1. Start Conservative
- Begin with frequency: 0.3-0.5
- Increase based on user feedback
- Too frequent acknowledgments can be distracting

2. Test and Iterate
- Record test calls with backchannel enabled
- Review timing and appropriateness
- Adjust settings based on findings
Important: Backchannel only works during phone calls. It does not apply to chat or web-based interactions.
Call Transfer
Route calls to human agents, other AI agents, or external phone numbers.

What is Call Transfer?
Call transfer enables your agent to:

- Escalate complex issues to human agents
- Route calls to departments or specialists
- Forward calls to external phone numbers
- Maintain conversation context during transfer
Transfer Types
Dasha BlackBox supports three transfer methods:

Cold Transfer (Blind Transfer)

The AI agent disconnects immediately after initiating the transfer.

When to Use:

- Simple routing scenarios
- No context needed at destination
- Fast handoff required
- IVR-style call routing
Warm Transfer (Attended Transfer)
The AI agent consults with the destination before completing the transfer.

When to Use:

- Complex issues requiring context
- VIP or sensitive situations
- Quality assurance needed
- Human agent availability check
HTTP Transfer (Programmatic Routing)
Uses a webhook to determine transfer destination dynamically.

When to Use:

- Business rule-based routing
- CRM integration for routing
- Load balancing across agents
- Custom transfer logic
Configuration
To enable call transfer:

- Navigate to the Features tab
- Toggle Call Transfer to ON
- Select transfer type (cold, warm, or HTTP)
- Configure destination endpoint(s)
- Add transfer instructions to system prompt
Configure call transfer type and destinations
Dashboard Configuration (Cold Transfer):
- Enable Call Transfer in Features tab
- Select Cold Transfer type
- Enter endpoint destination:
  - Phone number: +1-555-0100
  - SIP URI: sip:support@example.com
- Update the system prompt with transfer instructions
Transfer Routes
Configure multiple transfer destinations for different scenarios using the transferRoutes array:
Route Configuration:
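The original snippet is not shown in this guide, so the following is only a plausible shape for the transferRoutes array; every field name inside the route objects is a hypothetical illustration, not a confirmed schema:

```json
{
  "transferRoutes": [
    {
      "name": "billing",
      "type": "cold",
      "destination": "+1-555-0101"
    },
    {
      "name": "vip_support",
      "type": "warm",
      "destination": "sip:vip@example.com"
    }
  ]
}
```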
Transfer Best Practices
1. Set Clear Expectations
Tell users when and why they are being transferred before initiating the handoff.

2. Monitor Transfer Metrics
- Transfer success rate
- Time to connect
- Post-transfer customer satisfaction
- Reasons for transfer
Testing Transfers
Step 1: Create a Test Agent
- Enable transfers
- Configure test phone numbers
- Set short timeout values

Step 2: Cover Key Scenarios
- Successful cold transfer
- Successful warm transfer
- Transfer failure (busy/no answer)
- Multiple transfer attempts
- Fallback routing

Step 3: Verify Results
- Call connects to destination
- Context is preserved (warm transfer)
- Fallbacks work correctly
- User experience is smooth
Fillers
Add natural speech hesitations to make your agent sound more human and conversational.

What are Fillers?
Fillers are the natural speech hesitations that humans use when speaking, such as “um”, “uh”, “well”, and similar sounds. When enabled, your agent will occasionally insert these sounds into its speech, creating a more natural and human-like conversation experience. Specifically, fillers:

- Add occasional hesitation sounds to agent speech
- Make the AI voice sound less robotic and more natural
- Create a more comfortable conversation atmosphere
- Reduce the “uncanny valley” effect of perfectly smooth AI speech
When to Use Fillers
Ideal Use Cases:

- Customer support where a human feel is important
- Sales calls requiring rapport building
- Companion or conversational agents
- Any scenario where natural speech patterns matter
When to Avoid:

- Professional or formal announcements
- Emergency or urgent communications
- IVR-style automated systems
- Scenarios requiring maximum clarity and precision
- Time-sensitive transactional calls
Configuration
Fillers are enabled by default with the “um” filler sound. To configure:

- Navigate to the Features tab in the agent editor
- Locate the Fillers setting
- Toggle ON/OFF as needed
- Configure custom filler texts if desired
- Save your agent configuration
Properties
| Property | Type | Default | Description |
|---|---|---|---|
| isEnabled | boolean | true | Whether fillers are enabled |
| strategy.type | string | "static" | Strategy type (currently only static supported) |
| strategy.texts | string[] | ["um"] | Array of filler texts to randomly use |
Strategy Types
Currently, Dasha BlackBox supports the static strategy for fillers:

Static Strategy
- Uses a predefined list of filler texts
- Randomly selects from the list during speech
- Simple, predictable behavior
API Example
- Enable with Defaults
- Custom Fillers
- Disable Fillers
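A custom-fillers sketch using the properties documented in the table above; the top-level fillers key is an assumption:

```json
{
  "fillers": {
    "isEnabled": true,
    "strategy": {
      "type": "static",
      "texts": ["um", "well"]
    }
  }
}
```

To use the defaults, set only isEnabled: true; to disable fillers, set isEnabled: false.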
Best Practices
1. Keep Filler Lists Short
- Use 2-4 different fillers for natural variation
- Too many options can sound inconsistent
- Common choices: ["um", "uh"] or ["um", "well"]

2. Test with Your TTS Voice
- Preview your agent with fillers enabled
- Ensure fillers sound natural with your chosen TTS voice
- Some voices handle fillers better than others

3. Match to the Scenario
- Enable for relationship-building conversations
- Disable for transactional or urgent scenarios
- A/B test to measure impact on user satisfaction
Default Behavior: Fillers are enabled by default with ["um"] as the filler text. This provides a natural starting point that works well with most TTS voices.

IVR Detection
IVR (Interactive Voice Response) detection enables your AI agent to detect when it encounters an automated phone system or voicemail during outbound calls. This is particularly important for transfer scenarios where the destination might be an IVR menu before reaching a human.

Overview
When making outbound calls or transferring to external numbers, your AI agent may encounter IVR systems that require navigation (e.g., “Press 1 for sales, press 2 for support”). IVR detection allows your agent to:

- Detect IVR Systems: Recognize when it’s talking to an automated system vs. a human
- Navigate Menus: Optionally use DTMF tones to navigate through IVR menus
- Handle Voicemail: Detect answering machines and voicemail systems
- Respond Appropriately: Speak an appropriate phrase or take action when IVR is detected
Per-Channel Enablement
IVR detection can be enabled or disabled for specific call channels. The defaults reflect typical use cases for each channel type.

| Parameter | Type | Default | Description |
|---|---|---|---|
| enabledForOutbound | boolean | true | Enable IVR detection for outbound phone calls |
| enabledForInbound | boolean | false | Enable IVR detection for inbound phone calls |
| enabledForWebCall | boolean | false | Enable IVR detection for web-based voice calls |
| enabledForChat | boolean | false | Enable IVR detection for text chat sessions |
IVR Navigation
The ivrNavigation parameter enables DTMF (Dual-Tone Multi-Frequency) navigation through IVR menus. When enabled, the AI agent will attempt to navigate IVR systems using touch-tone inputs rather than immediately giving up.

| Parameter | Type | Default | Description |
|---|---|---|---|
| ivrNavigation | boolean | false | Enable DTMF navigation through IVR menus |
Enable when:

- Warm Transfers to Businesses: When transferring to external companies with IVR systems
- Outbound Campaigns: When calling businesses that have automated attendants
- Multi-Level Phone Trees: When you need to navigate to specific departments
Disable when:

- Human-Answered Lines: When destination is likely answered by humans
- Simple Detection Only: When you only need to detect IVR without navigation
- Time-Sensitive Calls: When IVR navigation delay is unacceptable
IVR Detection Behavior Types
IVR detection supports two behavior types that determine how the agent responds when an IVR system is detected.

SmartV1IvrDetectionBehavior

AI-driven IVR handling that uses GPT to generate contextual responses or navigation decisions.

| Field | Type | Required | Description |
|---|---|---|---|
| type | "smartV1" | Yes | Type discriminator |
| additionalInstructions | string | No | Optional instructions for GPT to guide IVR handling |
Best for:

- Variable IVR Systems: When encountering different types of IVR menus
- Context-Aware Navigation: When navigation decisions depend on call context
- Dynamic Responses: When the appropriate IVR response varies by situation
Example instructions:

- “When encountering an IVR, navigate to the sales department”
- “If voicemail is detected, leave a brief message about the callback”
- “Press 0 to reach operator, or say ‘agent’ if voice-enabled IVR”
- “For , press 1 for support, then 2 for billing” (supports variables)
StaticIvrDetectionBehavior
Rule-based IVR detection that uses a predefined static phrase when IVR is detected.

| Field | Type | Required | Description |
|---|---|---|---|
| type | "static" | Yes | Type discriminator |
| staticPhrase | string | No | Fixed phrase to say when IVR is detected |
Best for:

- Consistent Response: When you want the same response to all IVR systems
- Simple Scenarios: When complex navigation isn’t needed
- Voicemail Messages: When leaving a standard message on answering machines
Example phrases:

- “This is an automated message. A representative will call you back shortly.”
- “Hello , please call us back at your convenience.” (supports variables)
- “I detected an automated system. I’ll try again later.”
Complete IVR Detection Configuration
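A full-configuration sketch, assuming ivrDetection is the wrapper key (an assumption); the parameter names come from the parameter reference table in this section:

```json
{
  "ivrDetection": {
    "isEnabled": true,
    "enabledForOutbound": true,
    "enabledForInbound": false,
    "enabledForWebCall": false,
    "enabledForChat": false,
    "ivrNavigation": true,
    "behavior": {
      "type": "smartV1",
      "additionalInstructions": "When encountering an IVR, navigate to the sales department"
    }
  }
}
```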
IVR Detection Parameter Reference
| Parameter | Type | Default | Description |
|---|---|---|---|
| isEnabled | boolean | true | Master toggle for IVR detection |
| enabledForOutbound | boolean | true | Enable for outbound phone calls |
| enabledForInbound | boolean | false | Enable for inbound phone calls |
| enabledForWebCall | boolean | false | Enable for web-based voice calls |
| enabledForChat | boolean | false | Enable for text chat sessions |
| ivrNavigation | boolean | false | Enable DTMF menu navigation |
| behavior | object | null | Detection behavior configuration |
| behavior.type | "smartV1" or "static" | - | Behavior type discriminator |
| behavior.additionalInstructions | string | null | GPT instructions (smartV1 only) |
| behavior.staticPhrase | string | null | Fixed phrase (static only) |
Deprecated Parameter: The staticPhraseForIVR parameter at the top level of ivrDetection is deprecated. Use behavior.staticPhrase with behavior.type: "static" instead.

IVR Detection and Transfers
IVR detection is particularly relevant when combined with transfers.

Scenario: Cold Transfer to Business

When cold-transferring to an external business, the destination might have an IVR system. Configure IVR detection to handle this case. For complete details on transfer configuration, see the transfer documentation earlier in this guide. Transfer webhook payloads documented in Webhook Events align with the transfer parameters documented here.
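The cold-transfer scenario described above might combine both features roughly as follows. The wrapper keys and the callTransfer fields are assumptions for illustration; the ivrDetection fields follow the parameter reference in this section:

```json
{
  "callTransfer": {
    "isEnabled": true,
    "type": "cold",
    "destination": "+1-555-0100"
  },
  "ivrDetection": {
    "isEnabled": true,
    "ivrNavigation": true,
    "behavior": {
      "type": "smartV1",
      "additionalInstructions": "If an IVR answers, navigate to the support department before handing off"
    }
  }
}
```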
Post-Call Analysis
Extract structured insights from conversations automatically.

What is Post-Call Analysis?
Post-call analysis uses AI to:

- Extract key information from conversations
- Categorize call outcomes
- Score conversation quality
- Generate structured data for your systems
Use Cases
Customer Support:

- Sentiment analysis
- Issue categorization
- Resolution status
- Follow-up requirements
Sales:

- Lead qualification scores
- Interest level assessment
- Next steps identification
- Objection tracking
Healthcare:

- Symptom documentation
- Appointment scheduling confirmation
- Insurance information capture
- Follow-up instructions
Configuration
Define custom analysis forms to extract exactly what you need.
Step 1: Enable Post-Call Analysis

In the Features tab:
- Toggle Post-Call Analysis to ON
- Click Add Analysis Form
- Name your form (e.g., “Lead Qualification”)
Step 2: Define labels, for example:

- Name: lead_score; Type: number; Description: “Score from 1-10 based on interest level”
- Name: contact_info_collected; Type: boolean; Description: “Did we collect email and phone?”
- Name: next_step; Type: enum; Values: ["schedule_demo", "send_info", "call_back", "not_interested"]; Description: “What should happen next?”
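Via the API, a form like the one above might be expressed as follows. The postCallAnalysis, forms, and labels wrapper names are assumptions; the label names, types, values, and descriptions come from the dashboard steps:

```json
{
  "postCallAnalysis": {
    "isEnabled": true,
    "forms": [
      {
        "name": "Lead Qualification",
        "labels": [
          { "name": "lead_score", "type": "number", "description": "Score from 1-10 based on interest level" },
          { "name": "contact_info_collected", "type": "boolean", "description": "Did we collect email and phone?" },
          { "name": "next_step", "type": "enum", "values": ["schedule_demo", "send_info", "call_back", "not_interested"], "description": "What should happen next?" }
        ]
      }
    ]
  }
}
```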
Results appear in:

- Call detail page
- Result webhooks
- API call details
Label Types

- String: Free-form text
- Number: Numeric values
- Boolean: True/false values
- Enum: One value from a predefined list

Accessing Analysis Results
Via Dashboard:

- Navigate to Calls page
- Click on completed call
- View “Post-Call Analysis” section
- See extracted labels and values
Silence Management
Handle prolonged user silence during calls with automated reminders and graceful conversation endings.

What is Silence Management?
Silence Management controls how your agent responds when users become silent during a conversation. Rather than awkwardly waiting indefinitely or abruptly ending calls, this feature enables intelligent handling of silence through configurable reminders and termination rules. When enabled, Silence Management:

- Detects prolonged periods of user silence
- Sends gentle reminder prompts to re-engage users
- Limits reminder attempts to avoid annoyance
- Automatically ends conversations after configurable thresholds
- Provides graceful call termination when users become unresponsive
When to Use Silence Management
Ideal Use Cases:

- Customer support calls where users may step away
- Sales calls where prospects may need time to think
- Healthcare calls where patients may need to gather information
- Any scenario where unresponsive users should be handled gracefully
Threshold Tuning:

- High-value calls (increase thresholds to allow more time)
- Quick surveys (decrease thresholds for efficiency)
- Elderly or accessibility-focused agents (increase thresholds)
- High-volume operations (decrease to optimize resources)
Configuration
Silence Management is enabled by default. To configure:

- Navigate to the Features tab in the agent editor
- Locate the Silence Management settings
- Adjust parameters as needed
- Save your agent configuration
Properties
| Property | Type | Default | Description |
|---|---|---|---|
| isEnabled | boolean | true | Enable/disable silence management |
| maxReminderAttempts | number | 2 | Maximum reminder prompts before action |
| reminderSilenceThresholdSeconds | number | 5 | Seconds of silence before sending a reminder |
| endWhenReminderLimitExceeded | boolean | true | End call when max reminders exceeded |
| endAfterSilenceThresholdSeconds | number | null | Absolute silence duration to force end call (optional) |
If endAfterSilenceThresholdSeconds is set, the call will end after that duration of continuous silence, regardless of reminder attempts.
API Examples
- Default Configuration
- Patient (More Time)
- With Absolute Timeout
- Disable
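The “Patient (More Time)” variant above might look like this; the silenceManagement wrapper key is an assumption, while the field names come from the properties table:

```json
{
  "silenceManagement": {
    "isEnabled": true,
    "maxReminderAttempts": 3,
    "reminderSilenceThresholdSeconds": 10,
    "endWhenReminderLimitExceeded": true,
    "endAfterSilenceThresholdSeconds": 90
  }
}
```

The default configuration keeps the 5-second threshold and 2 attempts; to disable the feature entirely, set isEnabled to false.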
Best Practices
1. Tune for Your Use Case

| Use Case | Recommended Settings |
|---|---|
| Quick surveys | threshold: 3s, attempts: 1 |
| Customer support | threshold: 5s, attempts: 2 (defaults) |
| Healthcare intake | threshold: 10s, attempts: 3 |
| Elderly users | threshold: 15s, attempts: 4 |
| High-volume outbound | threshold: 4s, attempts: 1 |
2. Use Absolute Timeouts for High Volume
- Set endAfterSilenceThresholdSeconds for high-volume operations
- Prevents resource waste on abandoned calls
- Typical values: 45-90 seconds

3. Monitor and Adjust
- Track calls ended due to silence
- Review whether users were actually disengaged
- Adjust thresholds based on completion rates
Default Behavior: Silence Management is enabled by default with sensible settings (5-second threshold, 2 reminders, end on limit exceeded). These defaults work well for most customer service scenarios.
Combining Features
Advanced features work together to create powerful experiences:

Example: Multilingual Support with Transfer
Example: Post-Call Analysis for Compliance
Example: Natural Conversation with Backchannel, Fillers, and Silence Management
- Uses occasional “um” and “well” fillers to sound more human
- Acknowledges users with “uh-huh” and “I see” during longer explanations
- Gently prompts silent users and gracefully ends unresponsive calls
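The third example above might be sketched as one combined configuration. The wrapper keys are assumptions; the per-feature fields follow the tables earlier in this guide:

```json
{
  "fillers": {
    "isEnabled": true,
    "strategy": { "type": "static", "texts": ["um", "well"] }
  },
  "backchannel": {
    "isEnabled": true,
    "behavior": { "type": "static", "phrases": ["uh-huh", "I see"], "frequency": 0.4 }
  },
  "silenceManagement": {
    "isEnabled": true,
    "maxReminderAttempts": 2,
    "reminderSilenceThresholdSeconds": 5,
    "endWhenReminderLimitExceeded": true
  }
}
```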
Best Practices Summary
Ambient Noise
- Start with level 0.3-0.5 for subtle realism
- Balance natural sound with call clarity
- Test with target audience before deployment
- Match noise level to your use case scenario
Backchannel
- Start with frequency 0.3-0.5 and adjust based on feedback
- Use static behavior for predictable, cost-efficient deployments
- Use smartV1 for empathetic, context-aware interactions
- Test timing with representative conversations
- Keep phrase lists short (3-5 phrases) for natural variation
Call Transfer
- Always configure fallback destinations for HTTP transfers
- Brief recipients during warm transfers for context
- Test all transfer flows thoroughly before production
- Monitor transfer success rates and failure reasons
Fillers
- Keep filler lists short (2-4 options) for consistency
- Match fillers to your agent’s persona (formal vs. casual)
- Disable for urgent or time-sensitive scenarios
- Test with your chosen TTS voice for natural sound
Language Switching
- Ensure your LLM supports multilingual conversations
- Choose TTS providers with strong multilingual capabilities
- Test language detection and switching in realistic scenarios
- Monitor quality and accuracy across languages
Post-Call Analysis
- Define clear, specific labels with detailed descriptions
- Use enums for categorical data to ensure consistency
- Keep label count manageable (5-10 per form)
- Validate analysis accuracy against actual conversations
Silence Management
- Use defaults (5s threshold, 2 attempts) for most scenarios
- Increase thresholds for accessibility or high-value calls
- Decrease thresholds for quick surveys or high-volume operations
- Set absolute timeout for cost-sensitive deployments
Troubleshooting
Ambient Noise Too High or Too Low
- Reduce level if users complain about audio quality
- Increase level if agent sounds too artificial
- Test different levels with various audiences
- Consider use case requirements (realism vs clarity)
Backchannel Not Triggering
- Verify isEnabled is set to true
- Check that behavior is configured (not null)
- Ensure user is speaking long enough (check minLengthVoice for staticAdvanced)
- Verify frequency is above 0 (try 0.5 to start)
- Note: Backchannel only works during phone calls, not chat
Backchannel Too Frequent or Disruptive
- Reduce frequency value (try 0.3)
- Increase minLengthVoice (try 12-15 seconds)
- Increase minCooldown (try 10-12 seconds)
- Use static behavior instead of smartV1 for predictable timing
Fillers Sounding Unnatural
- Test with different TTS voices (some handle fillers better)
- Reduce filler variety to 2-3 options
- Try common fillers: ["um"] or ["um", "uh"]
- Disable fillers if they don’t sound right with your voice
Inaccurate Post-Call Analysis
- Review and refine label descriptions
- Provide examples in descriptions
- Use enums instead of free-form strings
- Test with various conversation types
- Iterate based on results
Language Switching Not Working
- Verify TTS provider supports target languages
- Check LLM has multilingual capabilities
- Ensure voices are available for all languages
- Review system prompt language instructions
Silence Management Ending Calls Too Quickly
- Increase reminderSilenceThresholdSeconds (try 8-10)
- Increase maxReminderAttempts (try 3-4)
- Remove or increase endAfterSilenceThresholdSeconds
- Review whether users need more thinking time
Silence Management Not Ending Unresponsive Calls
- Verify isEnabled is true
- Check endWhenReminderLimitExceeded is true
- Consider adding endAfterSilenceThresholdSeconds for absolute timeout
- Reduce thresholds if calls stay open too long
Transfer Failures
- Verify destination numbers are correct
- Test reachability of transfer endpoints
- Check SIP configuration if using SIP URIs
- Monitor network connectivity
- Configure fallback routes
Next Steps
Now that you understand advanced features:

- Enable Features: Add features one at a time to your agent
- Test Thoroughly: Verify each feature works as expected
- Monitor Performance: Track metrics for enabled features
- Optimize: Refine configuration based on real-world usage