
Advanced Features

Dasha BlackBox provides advanced capabilities to handle complex conversation scenarios and enhance call quality. This guide covers the key features available in the Features tab when creating or editing your agent.

Overview

The Features tab allows you to enable and configure:
  • Ambient Noise Handling: Add realistic background noise to agent voice output
  • Backchannel: Agent acknowledges user while they speak (e.g., “uh-huh”, “I see”)
  • Call Transfer: Route calls to human agents or other endpoints
  • Fillers: Add natural speech hesitations to make agent sound more human
  • IVR Detection: Detect answering machines and IVR systems
  • Language Switching: Let users switch languages mid-conversation
  • Max Duration: Set maximum call duration limits
  • Post-Call Analysis: Extract structured insights from conversations
  • Silence Management: Handle prolonged user silence during calls
  • Talk First: Agent speaks first when call connects
These features are optional and can be enabled based on your use case. Start with the basics and add advanced features as your needs grow.

Language Switching

Users can request language changes during a call.

What is Language Switching?

Language switching allows your agent to:
  • Respond to user language requests
  • Adapt voice synthesis to match the requested language
  • Continue the conversation in the target language
  • Switch back to the original language when requested

When to Use Language Switching

Ideal Use Cases:
  • Customer support for global audiences
  • Healthcare services with diverse patient populations
  • Government services in multilingual regions
  • Travel and hospitality applications
Example Scenario:
User (in English): "Hello, I need help with my account"
Agent: "Hello! I'd be happy to help. How can I assist you today?"
User: "Sorry, no English. Spanish?"
Agent (in Spanish): "¡Por supuesto! ¿En qué puedo ayudarte hoy?"

Configuration

To enable language switching:
  1. Navigate to the Features tab in the agent editor
  2. Toggle Enable Language Switching to ON
  3. Save your agent configuration
The language switching feature is a simple on/off toggle. When enabled, the agent responds to user language requests. Ensure your LLM and TTS provider support the languages users may request.
Enable the Feature
In the Features tab:
  • Toggle Language Switching to ON
  • The agent responds to user language requests
Configure TTS for Multilingual Support
Ensure your TTS provider supports multilingual voices:
  • ElevenLabs: Multilingual V2 model supports 29 languages
  • Cartesia: Sonic model with multilingual support
  • Dasha: Multi-language voice synthesis
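For API-based configuration, the toggle might look like the sketch below. The `languageSwitching` feature key is an assumption inferred from the naming pattern of the other feature blocks in this guide; verify the exact field name against the API reference.

```javascript
{
  config: {
    version: "v1",
    features: {
      version: "v1",
      // Assumed feature key, following the pattern of other features in this guide
      languageSwitching: {
        version: "v1",
        isEnabled: true
      }
    }
  }
}
```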

Requirements

For language switching to work effectively, ensure:
  1. LLM Support: Use models that handle multiple languages well
    • ✅ OpenAI GPT-4/GPT-4.1 (excellent multilingual support)
    • ✅ DeepSeek (strong multilingual capabilities)
    • ⚠️ Some specialized models may have limited language support
  2. TTS Support: Choose voices that match your target languages
    • ElevenLabs Multilingual V2: Supports 29 languages
    • Cartesia Sonic: Multilingual capabilities
    • Dasha: Supports just 2 languages, but fast
  3. System Prompt: Instruct your agent to handle language requests
    You are a multilingual customer support agent. When users request
    a language change, continue the conversation in their chosen language.
    Maintain context and conversation flow.
    
Important: Voice synthesis quality varies by language and provider. Test all target languages thoroughly before production deployment.

Ambient Noise Handling

Add realistic background noise to your agent’s voice output.

What is Ambient Noise?

Ambient noise adds natural background sounds to your agent’s audio output:
  • Makes the agent sound more realistic and human-like
  • Simulates natural environments (office, call center, etc.)
  • Adds authenticity to AI-generated voice
  • Creates a more natural conversation experience

When to Use Ambient Noise

Ideal Use Cases:
  • Agents posing as human customer service representatives
  • Scenarios where perfect audio quality seems artificial
  • Call center simulations requiring realistic background
  • Applications where environmental context matters
Example Scenarios:
  • Customer service agent appearing to work from an office
  • Sales representative calling from a realistic workplace
  • Appointment reminders that sound naturally human
  • Any scenario where pure AI voice seems too perfect

Configuration

To enable ambient noise:
  1. Navigate to the Features tab
  2. Toggle Ambient Noise to ON
  3. Adjust the noise level from 0.0 (none) to 1.0 (maximum)
  4. Test to find the right balance for your use case
[Screenshot: Ambient Noise configuration in the Features tab — configure background noise level for realistic audio output]
Enable Ambient Noise
In the Features tab:
  • Toggle Ambient Noise to ON
  • Adjust the slider to set noise intensity
  • Start with 0.3-0.5 for subtle realism
  • Preview with test calls to find optimal level

Finding the Right Level

Noise Level Guidelines:
  • 0.0: No background noise - pure AI voice (most artificial)
  • 0.2-0.4: Subtle office ambiance - natural without distraction
  • 0.5-0.7: Moderate background - noticeable call center environment
  • 0.8-1.0: Heavy ambient noise - may impact clarity
Start with 0.3-0.5 and adjust based on feedback. Too much noise can reduce call quality.
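Following the guidelines above, an API configuration for subtle office ambiance might look like this sketch. The `ambientNoise` and `level` field names are assumptions based on the naming pattern of the other feature blocks in this guide; check the API reference before relying on them.

```javascript
{
  config: {
    version: "v1",
    features: {
      version: "v1",
      // Assumed field names ("ambientNoise", "level"), mirroring the other feature blocks
      ambientNoise: {
        version: "v1",
        isEnabled: true,
        level: 0.4  // subtle office ambiance; raise or lower per the guidelines above
      }
    }
  }
}
```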

Best Practices

1. Balance Realism and Clarity
  • Use enough noise to sound natural
  • Avoid levels that impair understanding
  • Test with your target audience
2. Match Your Use Case
Office environment: 0.2-0.4
Call center: 0.4-0.6
Natural outdoor setting: 0.5-0.7
3. Consider Your Disclosure Requirements
  • Some jurisdictions require AI disclosure
  • Ambient noise doesn’t replace legal requirements
  • Be transparent about AI agent identity when required
Ambient noise adds background sound to the agent’s output - it does not filter or suppress noise from the user’s environment. This feature enhances realism, not speech recognition.

Backchannel

Enable your agent to acknowledge users while they speak, creating more natural conversations.

What is Backchannel?

Backchannel refers to the short verbal cues that listeners use to indicate they’re actively listening without interrupting the speaker. These include phrases like “uh-huh”, “I see”, “right”, and “mmm-hmm”. When enabled, your agent will:
  • Acknowledge the user during longer speech segments
  • Make the conversation feel more natural and engaging
  • Show active listening without interrupting
  • Reduce the perception of talking to a machine

When to Use Backchannel

Ideal Use Cases:
  • Customer support conversations with detailed explanations
  • Sales calls where customers describe their needs
  • Healthcare intake where patients share symptoms
  • Any scenario requiring empathetic listening
When to Avoid:
  • High-urgency calls (emergency services)
  • Quick transactional interactions
  • IVR-style routing scenarios
  • When users expect rapid responses

Configuration

Backchannel is disabled by default. To enable it:
  1. Navigate to the Features tab in the agent editor
  2. Toggle Backchannel to ON
  3. Select your preferred behavior type
  4. Configure behavior-specific settings
  5. Save your agent configuration

Behavior Types

Dasha BlackBox supports three backchannel behavior types, each suited to different use cases:

Static Behavior

The simplest backchannel implementation, using predefined phrases with random timing.
Purpose: Quick setup with basic acknowledgment functionality.
Properties:

| Property | Type | Default | Description |
|---|---|---|---|
| type | string | "static" | Behavior type identifier |
| phrases | string[] | null | Phrases to use (e.g., ["uh-huh", "I see"]) |
| frequency | number | 0.5 | How often to trigger (0.0 = very rare, 1.0 = very frequent) |
When to Use:
  • Simple agents needing basic acknowledgment
  • Testing backchannel functionality
  • Scenarios where timing precision isn’t critical
API Example:
{
  config: {
    version: "v1",
    features: {
      version: "v1",
      backchannel: {
        version: "v1",
        isEnabled: true,
        behavior: {
          type: "static",
          phrases: ["uh-huh", "I see", "right", "mmm-hmm"],
          frequency: 0.5
        }
      }
    }
  }
}
Phrase Tips: Include 3-5 varied phrases for natural-sounding acknowledgments. Phrases can include variables using {{var_name}} syntax.

Choosing the Right Behavior

| Scenario | Recommended Behavior | Reason |
|---|---|---|
| Simple support calls | Static | Low overhead, predictable behavior |
| High-volume outbound | Static Advanced | Precise timing, cost-efficient |
| Empathetic support | Smart V1 | Context-aware, natural responses |
| Healthcare intake | Smart V1 | Adapts to emotional content |
| Sales qualification | Static Advanced | Consistent, professional acknowledgments |
| Quick surveys | Static | Minimal overhead for short calls |

Best Practices

1. Start Conservative
  • Begin with frequency: 0.3-0.5
  • Increase based on user feedback
  • Too frequent acknowledgments can be distracting
2. Choose Appropriate Phrases
// Professional/Formal
phrases: ["I understand", "I see", "Certainly", "Of course"]

// Casual/Friendly
phrases: ["Uh-huh", "Right", "Okay", "Got it", "Mmm-hmm"]

// Empathetic
phrases: ["I understand", "I hear you", "That makes sense", "I see"]
3. Test with Real Conversations
  • Record test calls with backchannel enabled
  • Review timing and appropriateness
  • Adjust settings based on findings
Important: Backchannel only works during phone calls. It does not apply to chat or web-based interactions.

Call Transfer

Route calls to human agents, other AI agents, or external phone numbers.

What is Call Transfer?

Call transfer enables your agent to:
  • Escalate complex issues to human agents
  • Route calls to departments or specialists
  • Forward calls to external phone numbers
  • Maintain conversation context during transfer

Transfer Types

Dasha BlackBox supports three transfer methods:

Cold Transfer (Blind Transfer)

The AI agent disconnects immediately after initiating the transfer.
When to Use:
  • Simple routing scenarios
  • No context needed at destination
  • Fast handoff required
  • IVR-style call routing
Example:
Agent: "Let me transfer you to our billing department."
[Agent initiates transfer and disconnects]
[User connects directly to billing]

Warm Transfer (Attended Transfer)

The AI agent consults with the destination before completing the transfer.
When to Use:
  • Complex issues requiring context
  • VIP or sensitive situations
  • Quality assurance needed
  • Human agent availability check
Example:
Agent: "I'm going to connect you with a specialist. Please hold."
[Agent calls specialist privately]
Agent (to specialist): "I have a customer with billing issue X..."
Specialist: "I can help. Transfer them over."
[Agent completes transfer]
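A warm-transfer configuration follows the same shape as the cold-transfer examples in this guide, with an `interactionWithOperator` block (it also appears in the complete IVR configuration later in this guide) controlling how the agent interacts with the receiving party. A sketch:

```javascript
{
  config: {
    version: "v1",
    features: {
      version: "v1",
      transfer: {
        version: "v1",
        type: "warm",
        isEnabled: true,
        endpointDestination: "+1-555-0100",
        description: "Warm transfer to a billing specialist",
        // "smartV1" lets the agent brief the operator using call context
        // before completing the transfer
        interactionWithOperator: {
          type: "smartV1"
        }
      }
    }
  }
}
```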

HTTP Transfer (Programmatic Routing)

Uses a webhook to determine the transfer destination dynamically.
When to Use:
  • Business rule-based routing
  • CRM integration for routing
  • Load balancing across agents
  • Custom transfer logic
Example:
// Your webhook receives transfer request
{
  "callId": "call_123",
  "agentId": "agent_456",
  "reason": "billing_issue",
  "context": { "accountId": "acc_789" }
}

// Return transfer destination
{
  "isSuccess": true,
  "transfer": {
    "version": "v1",
    "type": "warm",
    "isEnabled": true,
    "endpointDestination": "+1-555-0199", // or SIP URI
    "description": "Transfer to specialist for account issues"
  },
  "message": "Transfer initiated successfully"
}
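On your side, the webhook is just an HTTP endpoint that receives the request body above and returns the transfer object. Below is a minimal sketch of the routing logic as a plain function; the reason-to-destination mapping is hypothetical, and we assume returning `isSuccess: false` causes the platform to fall back to any configured default destination.

```javascript
// Hypothetical reason-to-destination routing table.
const ROUTES = {
  billing_issue: "+1-555-0199",
  technical_issue: "sip:tech@example.com"
};

// Builds the webhook response for a transfer request payload
// shaped like the example above.
function handleTransferRequest(payload) {
  const destination = ROUTES[payload.reason];
  if (!destination) {
    // No matching route: signal failure so the platform can use its fallback.
    return {
      isSuccess: false,
      message: `No route for reason: ${payload.reason}`
    };
  }
  return {
    isSuccess: true,
    transfer: {
      version: "v1",
      type: "warm",
      isEnabled: true,
      endpointDestination: destination,
      description: `Transfer for ${payload.reason}`
    },
    message: "Transfer initiated successfully"
  };
}
```

Wire this function into whatever HTTP server you already run; the platform only sees the JSON it returns.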

Configuration

To enable call transfer:
  1. Navigate to the Features tab
  2. Toggle Call Transfer to ON
  3. Select transfer type (cold, warm, or HTTP)
  4. Configure destination endpoint(s)
  5. Add transfer instructions to system prompt
[Screenshot: Call Transfer configuration in the Features tab — configure call transfer type and destinations]
Dashboard Configuration:
  1. Enable Call Transfer in Features tab
  2. Select Cold Transfer type
  3. Enter endpoint destination:
    • Phone number: +1-555-0100
    • SIP URI: sip:support@example.com
  4. Update system prompt:
    You are a routing agent. When users request billing help,
    say "Let me transfer you to billing" and initiate transfer.
    When they need technical support, transfer to tech support.
    
API Configuration:
{
  config: {
    version: "v1",
    features: {
      version: "v1",
      transfer: {
        version: "v1",
        type: "cold",
        isEnabled: true,
        endpointDestination: "+1-555-0100",
        description: "Transfer to billing department for payment questions"
      }
    }
  }
}

Transfer Routes

Configure multiple transfer destinations for different scenarios using the transferRoutes array.
Route Configuration:
{
  config: {
    version: "v1",
    features: {
      version: "v1",
      transfer: {
        version: "v1",
        type: "cold",
        isEnabled: true,
        endpointDestination: "+1-555-0100",
        description: "Main routing agent with multiple destinations",
        transferRoutes: [
          {
            name: "billing",
            endpointDestinations: [
              { endpoint: "+1-555-0100" }
            ],
            description: "Billing department for payment questions"
          },
          {
            name: "technical",
            endpointDestinations: [
              { endpoint: "sip:tech@example.com" }
            ],
            description: "Technical support for product issues"
          },
          {
            name: "emergency",
            endpointDestinations: [
              { endpoint: "+1-555-0911" }
            ],
            description: "Urgent issues requiring immediate attention"
          }
        ]
      }
    }
  }
}
System Prompt Example:
You are a customer service routing agent. Route calls as follows:

- Billing questions → transfer to billing route
- Technical problems → transfer to technical route
- Emergencies → transfer to emergency route immediately
- General questions → handle directly

Always explain to the user why you're transferring them and
what to expect.

Transfer Best Practices

1. Set Clear Expectations
Agent: "I'm going to connect you with our billing specialist
who can help with your payment plan. This should take just
a moment."
2. Provide Context For warm transfers, brief the receiving party:
Agent (to specialist): "I have Jane Doe on the line with a
question about invoice #12345. She's asking about payment
terms."
3. Handle Transfer Failures Configure fallback destinations for HTTP transfers:
{
  config: {
    version: "v1",
    features: {
      version: "v1",
      transfer: {
        version: "v1",
        type: "http",
        isEnabled: true,
        description: "HTTP transfer with fallback",
        webhook: {
          url: "https://your-api.com/routing"
        },
        fallback: {
          version: "v1",
          type: "cold",
          isEnabled: true,
          endpointDestination: "+1-555-0101",
          description: "Backup destination if webhook fails"
        }
      }
    }
  }
}
4. Track Transfer Metrics Monitor transfer performance:
  • Transfer success rate
  • Time to connect
  • Post-transfer customer satisfaction
  • Reasons for transfer

Testing Transfers

Step 1: Create Test Agent
  • Enable transfers
  • Configure test phone numbers
  • Set short timeout values
Step 2: Test Scenarios
  1. Successful cold transfer
  2. Successful warm transfer
  3. Transfer failure (busy/no answer)
  4. Multiple transfer attempts
  5. Fallback routing
Step 3: Verify Behavior
  • Call connects to destination
  • Context is preserved (warm transfer)
  • Fallbacks work correctly
  • User experience is smooth
Production Considerations:
  • Test transfers thoroughly with real phone numbers
  • Verify all destinations are reachable 24/7
  • Configure fallbacks for every transfer route
  • Monitor transfer success rates in production

Fillers

Add natural speech hesitations to make your agent sound more human and conversational.

What are Fillers?

Fillers are the natural speech hesitations that humans use when speaking, such as “um”, “uh”, and “well”. When enabled, your agent occasionally inserts these sounds into its speech, creating a more natural, human-like conversation experience. Specifically, fillers:
  • Add occasional hesitation sounds to agent speech
  • Make the AI voice sound less robotic and more natural
  • Create a more comfortable conversation atmosphere
  • Reduce the “uncanny valley” effect of perfectly smooth AI speech

When to Use Fillers

Ideal Use Cases:
  • Customer support where a human feel is important
  • Sales calls requiring rapport building
  • Companion or conversational agents
  • Any scenario where natural speech patterns matter
When to Avoid:
  • Professional or formal announcements
  • Emergency or urgent communications
  • IVR-style automated systems
  • Scenarios requiring maximum clarity and precision
  • Time-sensitive transactional calls

Configuration

Fillers are enabled by default with the “um” filler sound. To configure:
  1. Navigate to the Features tab in the agent editor
  2. Locate the Fillers setting
  3. Toggle ON/OFF as needed
  4. Configure custom filler texts if desired
  5. Save your agent configuration

Properties

| Property | Type | Default | Description |
|---|---|---|---|
| isEnabled | boolean | true | Whether fillers are enabled |
| strategy.type | string | "static" | Strategy type (currently only static supported) |
| strategy.texts | string[] | ["um"] | Array of filler texts to randomly use |

Strategy Types

Currently, Dasha BlackBox supports the static strategy for fillers:
Static Strategy
  • Uses a predefined list of filler texts
  • Randomly selects from the list during speech
  • Simple, predictable behavior
{
  strategy: {
    type: "static",
    texts: ["um", "uh", "well"]
  }
}

API Example

{
  config: {
    version: "v1",
    features: {
      version: "v1",
      fillers: {
        version: "v1",
        isEnabled: true,
        strategy: {
          type: "static",
          texts: ["um"]
        }
      }
    }
  }
}

Best Practices

1. Keep Filler Lists Short
  • Use 2-4 different fillers for natural variation
  • Too many options can sound inconsistent
  • Common choices: ["um", "uh"] or ["um", "well"]
2. Match Your Agent’s Persona
// Casual/Friendly Agent
texts: ["um", "uh", "well", "let me think"]

// Professional Agent
texts: ["well", "let me see"]

// Minimal Fillers
texts: ["um"]
3. Test Audio Quality
  • Preview your agent with fillers enabled
  • Ensure fillers sound natural with your chosen TTS voice
  • Some voices handle fillers better than others
4. Consider Your Use Case
  • Enable for relationship-building conversations
  • Disable for transactional or urgent scenarios
  • A/B test to measure impact on user satisfaction
Default Behavior: Fillers are enabled by default with ["um"] as the filler text. This provides a natural starting point that works well with most TTS voices.

IVR Detection

IVR (Interactive Voice Response) detection enables your AI agent to detect when it encounters an automated phone system or voicemail during outbound calls. This is particularly important for transfer scenarios where the destination might be an IVR menu before reaching a human.

Overview

When making outbound calls or transferring to external numbers, your AI agent may encounter IVR systems that require navigation (e.g., “Press 1 for sales, press 2 for support”). IVR detection allows your agent to:
  • Detect IVR Systems: Recognize when it’s talking to an automated system vs. a human
  • Navigate Menus: Optionally use DTMF tones to navigate through IVR menus
  • Handle Voicemail: Detect answering machines and voicemail systems
  • Respond Appropriately: Speak an appropriate phrase or take action when IVR is detected

Per-Channel Enablement

IVR detection can be enabled or disabled for specific call channels. The defaults reflect typical use cases for each channel type.
| Parameter | Type | Default | Description |
|---|---|---|---|
| enabledForOutbound | boolean | true | Enable IVR detection for outbound phone calls |
| enabledForInbound | boolean | false | Enable IVR detection for inbound phone calls |
| enabledForWebCall | boolean | false | Enable IVR detection for web-based voice calls |
| enabledForChat | boolean | false | Enable IVR detection for text chat sessions |

IVR Navigation

The ivrNavigation parameter enables DTMF (Dual-Tone Multi-Frequency) navigation through IVR menus. When enabled, the AI agent will attempt to navigate IVR systems using touch-tone inputs rather than immediately giving up.
| Parameter | Type | Default | Description |
|---|---|---|---|
| ivrNavigation | boolean | false | Enable DTMF navigation through IVR menus |
When to Enable:
  • Warm Transfers to Businesses: When transferring to external companies with IVR systems
  • Outbound Campaigns: When calling businesses that have automated attendants
  • Multi-Level Phone Trees: When you need to navigate to specific departments
When to Disable:
  • Human-Answered Lines: When destination is likely answered by humans
  • Simple Detection Only: When you only need to detect IVR without navigation
  • Time-Sensitive Calls: When IVR navigation delay is unacceptable

IVR Detection Behavior Types

IVR detection supports two behavior types that determine how the agent responds when an IVR system is detected.

SmartV1IvrDetectionBehavior

AI-driven IVR handling that uses GPT to generate contextual responses or navigation decisions.
| Field | Type | Required | Description |
|---|---|---|---|
| type | "smartV1" | Yes | Type discriminator |
| additionalInstructions | string | No | Optional instructions for GPT to guide IVR handling |
When to Use:
  • Variable IVR Systems: When encountering different types of IVR menus
  • Context-Aware Navigation: When navigation decisions depend on call context
  • Dynamic Responses: When the appropriate IVR response varies by situation
API Example:
{
  config: {
    features: {
      ivrDetection: {
        isEnabled: true,
        enabledForOutbound: true,
        enabledForInbound: false,
        enabledForWebCall: false,
        enabledForChat: false,
        ivrNavigation: true,
        behavior: {
          type: "smartV1",
          additionalInstructions: "When calling customer service lines, try to reach a human representative. Press 0 or say 'representative' if prompted."
        }
      }
    }
  }
}
Example Additional Instructions:
  • “When encountering an IVR, navigate to the sales department”
  • “If voicemail is detected, leave a brief message about the callback”
  • “Press 0 to reach operator, or say ‘agent’ if voice-enabled IVR”
  • “For , press 1 for support, then 2 for billing” (supports variables)

StaticIvrDetectionBehavior

Rule-based IVR detection that uses a predefined static phrase when IVR is detected.
| Field | Type | Required | Description |
|---|---|---|---|
| type | "static" | Yes | Type discriminator |
| staticPhrase | string | No | Fixed phrase to say when IVR is detected |
When to Use:
  • Consistent Response: When you want the same response to all IVR systems
  • Simple Scenarios: When complex navigation isn’t needed
  • Voicemail Messages: When leaving a standard message on answering machines
API Example:
{
  config: {
    features: {
      ivrDetection: {
        isEnabled: true,
        enabledForOutbound: true,
        ivrNavigation: false,
        behavior: {
          type: "static",
          staticPhrase: "Hello, this is an automated call from Acme Corp. Please call us back at 1-800-ACME-123 at your earliest convenience. Thank you."
        }
      }
    }
  }
}
Example Static Phrases:
  • “This is an automated message. A representative will call you back shortly.”
  • “Hello , please call us back at your convenience.” (supports variables)
  • “I detected an automated system. I’ll try again later.”

Complete IVR Detection Configuration

Full Configuration Example:
const response = await fetch('https://your-api-url.com/api/v1/agents', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    name: "Outbound Sales Agent",
    isEnabled: true,
    config: {
      primaryLanguage: "en-US",
      llmConfig: {
        vendor: "openai",
        model: "gpt-4.1-mini",
        prompt: "You are a sales outreach agent..."
      },
      ttsConfig: {
        vendor: "ElevenLabs",
        voiceId: "21m00Tcm4TlvDq8ikWAM"
      },
      features: {
        ivrDetection: {
          isEnabled: true,
          // Per-channel enablement
          enabledForOutbound: true,
          enabledForInbound: false,
          enabledForWebCall: false,
          enabledForChat: false,
          // DTMF navigation
          ivrNavigation: true,
          // AI-driven behavior
          behavior: {
            type: "smartV1",
            additionalInstructions: "Navigate to reach a human. If voicemail, leave a brief callback request."
          }
        },
        // Transfer configuration for escalations
        transfer: {
          type: "warm",
          endpointDestination: "+1-555-0100",
          interactionWithOperator: {
            type: "smartV1"
          }
        }
      }
    }
  })
});

IVR Detection Parameter Reference

| Parameter | Type | Default | Description |
|---|---|---|---|
| isEnabled | boolean | true | Master toggle for IVR detection |
| enabledForOutbound | boolean | true | Enable for outbound phone calls |
| enabledForInbound | boolean | false | Enable for inbound phone calls |
| enabledForWebCall | boolean | false | Enable for web-based voice calls |
| enabledForChat | boolean | false | Enable for text chat sessions |
| ivrNavigation | boolean | false | Enable DTMF menu navigation |
| behavior | object | null | Detection behavior configuration |
| behavior.type | "smartV1" or "static" | - | Behavior type discriminator |
| behavior.additionalInstructions | string | null | GPT instructions (smartV1 only) |
| behavior.staticPhrase | string | null | Fixed phrase (static only) |
Deprecated Parameter: The staticPhraseForIVR parameter at the top level of ivrDetection is deprecated. Use behavior.staticPhrase with behavior.type: "static" instead.

IVR Detection and Transfers

IVR detection is particularly relevant when combined with transfers.
Scenario: Cold Transfer to Business
When cold-transferring to an external business, the destination might have an IVR system. Configure IVR detection to handle this:
{
  config: {
    features: {
      ivrDetection: {
        isEnabled: true,
        enabledForOutbound: true,
        ivrNavigation: true,
        behavior: {
          type: "smartV1",
          additionalInstructions: "When transferred call reaches IVR, navigate to human operator."
        }
      },
      transfer: {
        type: "cold",
        endpointDestination: "+1-555-COMPANY",
        description: "Transfer to external company support"
      }
    }
  }
}
For complete details on transfer configuration, see the transfer documentation earlier in this guide. Transfer webhook payloads documented in Webhook Events align with the transfer parameters documented here.

Post-Call Analysis

Extract structured insights from conversations automatically.

What is Post-Call Analysis?

Post-call analysis uses AI to:
  • Extract key information from conversations
  • Categorize call outcomes
  • Score conversation quality
  • Generate structured data for your systems

Use Cases

Customer Support:
  • Sentiment analysis
  • Issue categorization
  • Resolution status
  • Follow-up requirements
Sales:
  • Lead qualification scores
  • Interest level assessment
  • Next steps identification
  • Objection tracking
Healthcare:
  • Symptom documentation
  • Appointment scheduling confirmation
  • Insurance information capture
  • Follow-up instructions

Configuration

Define custom analysis forms to extract exactly what you need:
Step 1: Enable Post-Call Analysis
In the Features tab:
  1. Toggle Post-Call Analysis to ON
  2. Click Add Analysis Form
  3. Name your form (e.g., “Lead Qualification”)
Step 2: Define Labels
Add labels to extract:
  • Name: lead_score
  • Type: number
  • Description: “Score from 1-10 based on interest level”
  • Name: contact_info_collected
  • Type: boolean
  • Description: “Did we collect email and phone?”
  • Name: next_step
  • Type: enum
  • Values: ["schedule_demo", "send_info", "call_back", "not_interested"]
  • Description: “What should happen next?”
Step 3: Save and Test
After calls, analysis results appear in:
  • Call detail page
  • Result webhooks
  • API call details

Label Types

String: Free-form text
{
  name: "customer_concern",
  type: "string",
  description: "Main issue raised by customer"
}
Boolean: Yes/no questions
{
  name: "issue_resolved",
  type: "boolean",
  description: "Was the customer's issue resolved?"
}
Number: Numeric values
{
  name: "satisfaction_score",
  type: "number",
  description: "Customer satisfaction from 1-10"
}
Enum: Predefined choices
{
  name: "call_outcome",
  type: "enum",
  values: ["resolved", "escalated", "callback_needed", "no_action"],
  description: "Final call outcome"
}
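Putting the label types together, a complete analysis form might be configured as sketched below. The label definitions match the examples above, but the wrapper shape (`postCallAnalysis`, `forms`) is an assumption inferred from the result payloads shown later in this section; confirm the exact schema against the API reference.

```javascript
{
  config: {
    version: "v1",
    features: {
      version: "v1",
      // Assumed wrapper shape; label definitions match the examples above
      postCallAnalysis: {
        version: "v1",
        isEnabled: true,
        forms: [
          {
            name: "support_summary",
            labels: [
              { name: "customer_concern", type: "string",
                description: "Main issue raised by customer" },
              { name: "issue_resolved", type: "boolean",
                description: "Was the customer's issue resolved?" },
              { name: "satisfaction_score", type: "number",
                description: "Customer satisfaction from 1-10" },
              { name: "call_outcome", type: "enum",
                values: ["resolved", "escalated", "callback_needed", "no_action"],
                description: "Final call outcome" }
            ]
          }
        ]
      }
    }
  }
}
```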

Accessing Analysis Results

Via Dashboard:
  1. Navigate to Calls page
  2. Click on completed call
  3. View “Post-Call Analysis” section
  4. See extracted labels and values
Via API (Call Results Search):
// Post-call analysis is available in the call result object
const response = await fetch(
  'https://blackbox.dasha.ai/api/v1/callresults/search',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      callIds: ['CALL_ID']
    })
  }
).then(r => r.json());

const callResult = response.results[0];
const analysis = callResult.result?.postCallAnalysis;
// {
//   lead_qualification: {
//     lead_score: 8,
//     budget_confirmed: true,
//     timeline: "1-3_months",
//     main_pain_point: "Manual data entry taking too much time"
//   }
// }
Via Webhooks (Completed Webhook):
// Completed webhook payload includes result.postCallAnalysis
{
  "type": "CompletedWebHookPayload",
  "status": "Completed",
  "callId": "call_123",
  "agentId": "agent_456",
  "orgId": "org_789",
  "callAdditionalData": {},
  "agentAdditionalData": {},
  "callType": "OutboundAudio",
  "createdTime": "2024-01-15T10:30:00Z",
  "completedTime": "2024-01-15T10:35:30Z",
  "durationSeconds": 330,
  "inspectorUrl": "https://blackbox.dasha.ai/calls/call_123",
  "transcription": [...],
  "result": {
    "postCallAnalysis": {
      "lead_qualification": {
        "lead_score": 8,
        "budget_confirmed": true,
        "timeline": "1-3_months",
        "main_pain_point": "Manual data entry"
      }
    },
    "transferInformation": []
  }
}

Silence Management

Handle prolonged user silence during calls with automated reminders and graceful conversation endings.

What is Silence Management?

Silence Management controls how your agent responds when users become silent during a conversation. Rather than awkwardly waiting indefinitely or abruptly ending calls, this feature enables intelligent handling of silence through configurable reminders and termination rules. When enabled, Silence Management:
  • Detects prolonged periods of user silence
  • Sends gentle reminder prompts to re-engage users
  • Limits reminder attempts to avoid annoyance
  • Automatically ends conversations after configurable thresholds
  • Provides graceful call termination when users become unresponsive

When to Use Silence Management

Ideal Use Cases:
  • Customer support calls where users may step away
  • Sales calls where prospects may need time to think
  • Healthcare calls where patients may need to gather information
  • Any scenario where unresponsive users should be handled gracefully
When to Adjust Defaults:
  • High-value calls (increase thresholds to allow more time)
  • Quick surveys (decrease thresholds for efficiency)
  • Elderly or accessibility-focused agents (increase thresholds)
  • High-volume operations (decrease to optimize resources)

Configuration

Silence Management is enabled by default. To configure:
  1. Navigate to the Features tab in the agent editor
  2. Locate the Silence Management settings
  3. Adjust parameters as needed
  4. Save your agent configuration

Properties

Property                          Type     Default  Description
isEnabled                         boolean  true     Enable/disable silence management
maxReminderAttempts               number   2        Maximum reminder prompts before action
reminderSilenceThresholdSeconds   number   5        Seconds of silence before sending a reminder
endWhenReminderLimitExceeded      boolean  true     End call when max reminders exceeded
endAfterSilenceThresholdSeconds   number   null     Absolute silence duration to force end call (optional)
Optional Absolute Timeout: If endAfterSilenceThresholdSeconds is set, the call will end after that duration of continuous silence, regardless of reminder attempts:
┌─────────────────────────────────────────────────────────────────────┐
│        Continuous Silence >= endAfterSilenceThresholdSeconds        │
│                           (if configured)                           │
└─────────────────────────────────────────────────────────────────────┘
                                   │
                                   ▼
┌─────────────────────────────────────────────────────────────────────┐
│                         FORCE END CALL                              │
│                  (overrides reminder-based flow)                    │
└─────────────────────────────────────────────────────────────────────┘
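The reminder-based flow and the optional absolute timeout can be summarized as a small decision function. This is an illustrative sketch, not Dasha BlackBox's actual runtime: the property names mirror the Silence Management configuration, while `nextAction`, its inputs, and the `"wait"`/`"remind"`/`"end"` actions are hypothetical names introduced here.

```typescript
// Illustrative sketch of the silence-handling flow described above.
// Config property names match Silence Management; the logic itself is
// an assumption, not the product's actual implementation.
type SilenceConfig = {
  maxReminderAttempts: number;
  reminderSilenceThresholdSeconds: number;
  endWhenReminderLimitExceeded: boolean;
  endAfterSilenceThresholdSeconds?: number | null;
};

type SilenceAction = "wait" | "remind" | "end";

function nextAction(
  cfg: SilenceConfig,
  silenceSeconds: number,
  remindersSent: number
): SilenceAction {
  // Absolute timeout overrides the reminder-based flow entirely.
  if (
    cfg.endAfterSilenceThresholdSeconds != null &&
    silenceSeconds >= cfg.endAfterSilenceThresholdSeconds
  ) {
    return "end";
  }
  // Below the reminder threshold: keep waiting.
  if (silenceSeconds < cfg.reminderSilenceThresholdSeconds) {
    return "wait";
  }
  // Threshold reached: remind if attempts remain, otherwise end (or keep waiting).
  if (remindersSent < cfg.maxReminderAttempts) {
    return "remind";
  }
  return cfg.endWhenReminderLimitExceeded ? "end" : "wait";
}

const defaults: SilenceConfig = {
  maxReminderAttempts: 2,
  reminderSilenceThresholdSeconds: 5,
  endWhenReminderLimitExceeded: true,
};

console.log(nextAction(defaults, 3, 0)); // "wait"   — below threshold
console.log(nextAction(defaults, 6, 0)); // "remind" — first reminder
console.log(nextAction(defaults, 6, 2)); // "end"    — reminder limit exceeded
```

With the default settings, a silent user receives up to two reminders five seconds apart before the call ends gracefully.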

API Examples

{
  config: {
    version: "v1",
    features: {
      version: "v1",
      silenceManagement: {
        version: "v1",
        isEnabled: true,
        maxReminderAttempts: 2,
        reminderSilenceThresholdSeconds: 5,
        endWhenReminderLimitExceeded: true
      }
    }
  }
}

Best Practices

1. Tune for Your Use Case
Use Case              Recommended Settings
Quick surveys         threshold: 3s, attempts: 1
Customer support      threshold: 5s, attempts: 2 (defaults)
Healthcare intake     threshold: 10s, attempts: 3
Elderly users         threshold: 15s, attempts: 4
High-volume outbound  threshold: 4s, attempts: 1
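The tuning values above can be kept as a small lookup so agent configs stay consistent across deployments. The threshold and attempt values come straight from the table; the preset names and the `silenceConfigFor` helper are illustrative conveniences, not part of the Dasha BlackBox API.

```typescript
// Recommended Silence Management values per use case (from the table
// above); the helper itself is an illustrative convenience.
const silencePresets = {
  quickSurvey:        { reminderSilenceThresholdSeconds: 3,  maxReminderAttempts: 1 },
  customerSupport:    { reminderSilenceThresholdSeconds: 5,  maxReminderAttempts: 2 }, // defaults
  healthcareIntake:   { reminderSilenceThresholdSeconds: 10, maxReminderAttempts: 3 },
  elderlyUsers:       { reminderSilenceThresholdSeconds: 15, maxReminderAttempts: 4 },
  highVolumeOutbound: { reminderSilenceThresholdSeconds: 4,  maxReminderAttempts: 1 },
} as const;

function silenceConfigFor(useCase: keyof typeof silencePresets) {
  return { version: "v1", isEnabled: true, ...silencePresets[useCase] };
}

console.log(silenceConfigFor("healthcareIntake")); // threshold 10s, 3 attempts
```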
2. Use Absolute Timeout Wisely
  • Set endAfterSilenceThresholdSeconds for high-volume operations
  • Prevents resource waste on abandoned calls
  • Typical values: 45-90 seconds
3. Monitor and Adjust
  • Track calls ended due to silence
  • Review whether users were actually disengaged
  • Adjust thresholds based on completion rates
Default Behavior: Silence Management is enabled by default with sensible settings (5-second threshold, 2 reminders, end on limit exceeded). These defaults work well for most customer service scenarios.

Combining Features

Advanced features work together to create powerful experiences:

Example: Multilingual Support with Transfer

{
  config: {
    version: "v1",
    primaryLanguage: "en-US",
    features: {
      version: "v1",
      languageSwitching: {
        version: "v1",
        isEnabled: true
      },
      transfer: {
        version: "v1",
        type: "cold",
        isEnabled: true,
        endpointDestination: "+1-555-0100",
        description: "Transfer to language-specific support teams",
        transferRoutes: [
          {
            name: "english_support",
            endpointDestinations: [
              { endpoint: "+1-555-0100" }
            ],
            description: "English support team"
          },
          {
            name: "spanish_support",
            endpointDestinations: [
              { endpoint: "+1-555-0101" }
            ],
            description: "Spanish support team"
          },
          {
            name: "french_support",
            endpointDestinations: [
              { endpoint: "+1-555-0102" }
            ],
            description: "French support team"
          }
        ]
      }
    }
  }
}
System prompt:
You are a multilingual support router. Detect and respond in the user's
language. For complex issues requiring human assistance, transfer to the
appropriate language support team based on the conversation language.
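On the application side, picking the right route for the detected language can be as simple as a lookup. This is a hedged sketch: the route names match the `transferRoutes` defined in the example above, but `pickTransferRoute`, the language-code keys, and the fallback choice are assumptions introduced here (language detection itself is handled by the agent).

```typescript
// Illustrative routing helper for the multilingual transfer example.
// Route names match the transferRoutes above; everything else here is
// an assumption for demonstration purposes.
const routeByLanguage: Record<string, string> = {
  en: "english_support",
  es: "spanish_support",
  fr: "french_support",
};

function pickTransferRoute(languageCode: string): string {
  // Fall back to the English team for unsupported languages.
  return routeByLanguage[languageCode] ?? "english_support";
}

console.log(pickTransferRoute("es")); // "spanish_support"
console.log(pickTransferRoute("de")); // "english_support" (fallback)
```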

Example: Post-Call Analysis for Compliance

{
  config: {
    version: "v1",
    features: {
      version: "v1",
      postCallAnalysis: [
        {
          version: "v1",
          name: "compliance_check",
          isEnabled: true,
          labels: [
            {
              name: "sensitive_data_shared",
              type: "boolean",
              description: "PII or sensitive data discussed"
            },
            {
              name: "escalation_needed",
              type: "boolean",
              description: "Issue requires human follow-up"
            }
          ]
        }
      ]
    }
  }
}
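Once the call completes, the labels appear under `result.postCallAnalysis`, keyed by form name (as in the response example earlier in this guide). The sketch below shows one way a client might consume the `compliance_check` form; the `needsReview` helper and its fail-safe behavior are illustrative assumptions, not part of the API.

```typescript
// Illustrative consumer of the compliance_check analysis form; the
// result shape follows the earlier response example, but needsReview
// and its fail-safe default are assumptions.
type ComplianceCheck = {
  sensitive_data_shared: boolean;
  escalation_needed: boolean;
};

function needsReview(analysis: { compliance_check?: ComplianceCheck }): boolean {
  const check = analysis.compliance_check;
  // No analysis available: flag for human review to be safe.
  if (!check) return true;
  return check.sensitive_data_shared || check.escalation_needed;
}

console.log(needsReview({ compliance_check: { sensitive_data_shared: true, escalation_needed: false } })); // true
console.log(needsReview({ compliance_check: { sensitive_data_shared: false, escalation_needed: false } })); // false
```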

Example: Natural Conversation with Backchannel, Fillers, and Silence Management

{
  config: {
    version: "v1",
    primaryLanguage: "en-US",
    features: {
      version: "v1",
      // Make agent sound more human with fillers
      fillers: {
        version: "v1",
        isEnabled: true,
        strategy: {
          type: "static",
          texts: ["um", "well", "let me see"]
        }
      },
      // Acknowledge user while they speak
      backchannel: {
        version: "v1",
        isEnabled: true,
        behavior: {
          type: "staticAdvanced",
          phrases: ["uh-huh", "I see", "right", "okay"],
          frequency: 0.5,
          minLengthVoice: 8,
          minLengthVoiceDeviation: 0.25,
          minCooldown: 6,
          minCooldownDeviation: 0.25
        }
      },
      // Handle silence gracefully
      silenceManagement: {
        version: "v1",
        isEnabled: true,
        maxReminderAttempts: 2,
        reminderSilenceThresholdSeconds: 5,
        endWhenReminderLimitExceeded: true
      }
    },
    llmConfig: {
      version: "v1",
      vendor: "openai",
      model: "gpt-4.1-mini",
      prompt: "You are a friendly, empathetic customer support agent. Listen carefully to customers, acknowledge their concerns, and provide helpful solutions."
    }
  }
}
This configuration creates a natural-sounding agent that:
  • Uses occasional “um” and “well” fillers to sound more human
  • Acknowledges users with “uh-huh” and “I see” during longer explanations
  • Gently prompts silent users and gracefully ends unresponsive calls

Best Practices Summary

Ambient Noise

  • Start with level 0.3-0.5 for subtle realism
  • Balance natural sound with call clarity
  • Test with target audience before deployment
  • Match noise level to your use case scenario

Backchannel

  • Start with frequency 0.3-0.5 and adjust based on feedback
  • Use static behavior for predictable, cost-efficient deployments
  • Use smartV1 for empathetic, context-aware interactions
  • Test timing with representative conversations
  • Keep phrase lists short (3-5 phrases) for natural variation

Call Transfer

  • Always configure fallback destinations for HTTP transfers
  • Brief recipients during warm transfers for context
  • Test all transfer flows thoroughly before production
  • Monitor transfer success rates and failure reasons

Fillers

  • Keep filler lists short (2-4 options) for consistency
  • Match fillers to your agent’s persona (formal vs. casual)
  • Disable for urgent or time-sensitive scenarios
  • Test with your chosen TTS voice for natural sound

Language Switching

  • Ensure your LLM supports multilingual conversations
  • Choose TTS providers with strong multilingual capabilities
  • Test language detection and switching in realistic scenarios
  • Monitor quality and accuracy across languages

Post-Call Analysis

  • Define clear, specific labels with detailed descriptions
  • Use enums for categorical data to ensure consistency
  • Keep label count manageable (5-10 per form)
  • Validate analysis accuracy against actual conversations

Silence Management

  • Use defaults (5s threshold, 2 attempts) for most scenarios
  • Increase thresholds for accessibility or high-value calls
  • Decrease thresholds for quick surveys or high-volume operations
  • Set absolute timeout for cost-sensitive deployments

Troubleshooting

Ambient Noise Too High or Too Low

  • Reduce level if users complain about audio quality
  • Increase level if agent sounds too artificial
  • Test different levels with various audiences
  • Consider use case requirements (realism vs clarity)

Backchannel Not Triggering

  • Verify isEnabled is set to true
  • Check that behavior is configured (not null)
  • Ensure user is speaking long enough (check minLengthVoice for staticAdvanced)
  • Verify frequency is above 0 (try 0.5 to start)
  • Note: Backchannel only works during phone calls, not chat

Backchannel Too Frequent or Disruptive

  • Reduce frequency value (try 0.3)
  • Increase minLengthVoice (try 12-15 seconds)
  • Increase minCooldown (try 10-12 seconds)
  • Use static behavior instead of smartV1 for predictable timing
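Applied together, those starting values might look like the sketch below. The parameter names follow the backchannel example earlier in this guide; the specific values are the suggested starting points above, to be tuned against real conversations.

```typescript
// Less intrusive backchannel settings, using the starting values
// suggested above (illustrative; tune against real conversations).
const calmerBackchannel = {
  version: "v1",
  isEnabled: true,
  behavior: {
    type: "staticAdvanced",
    phrases: ["uh-huh", "right"],
    frequency: 0.3,     // fewer acknowledgements overall
    minLengthVoice: 12, // only during longer user turns
    minCooldown: 10,    // longer gap between acknowledgements
  },
};
```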

Fillers Sounding Unnatural

  • Test with different TTS voices (some handle fillers better)
  • Reduce filler variety to 2-3 options
  • Try common fillers: ["um"] or ["um", "uh"]
  • Disable fillers if they don’t sound right with your voice

Inaccurate Post-Call Analysis

  • Review and refine label descriptions
  • Provide examples in descriptions
  • Use enums instead of free-form strings
  • Test with various conversation types
  • Iterate based on results

Language Switching Not Working

  • Verify TTS provider supports target languages
  • Check LLM has multilingual capabilities
  • Ensure voices are available for all languages
  • Review system prompt language instructions

Silence Management Ending Calls Too Quickly

  • Increase reminderSilenceThresholdSeconds (try 8-10)
  • Increase maxReminderAttempts (try 3-4)
  • Remove or increase endAfterSilenceThresholdSeconds
  • Review whether users need more thinking time

Silence Management Not Ending Unresponsive Calls

  • Verify isEnabled is true
  • Check endWhenReminderLimitExceeded is true
  • Consider adding endAfterSilenceThresholdSeconds for absolute timeout
  • Reduce thresholds if calls stay open too long

Transfer Failures

  • Verify destination numbers are correct
  • Test reachability of transfer endpoints
  • Check SIP configuration if using SIP URIs
  • Monitor network connectivity
  • Configure fallback routes

Next Steps

Now that you understand advanced features:
  1. Enable Features: Add features one at a time to your agent
  2. Test Thoroughly: Verify each feature works as expected
  3. Monitor Performance: Track metrics for enabled features
  4. Optimize: Refine configuration based on real-world usage

API Cross-Refs