Avaya IX WEM V15.2
Technical Overview
Version 15.2
Intended audience
This guide is designed to be used by:
Company and Business Partner professional services staff or any party responsible for
planning and setting up systems
All customer staff involved in system deployment
Customer system administration and IT staff responsible for site preparation and
installing workstations
Systems Field Services and partners responsible for installing workstations as part of the
suite installation and site acceptance testing
1.28 In the End-to-End Encryption topic, clarified that files are encrypted with a
certified, industry-standard strong algorithm (AES-256 using CTR mode).
1.27 In the End-to-End Encryption topic, deleted RSA KMS content and updated
Thales KMS content.
1.25 In the System Overview, removed the bulleted list of products, and
replaced it with the main products in the WFO suite.
In Products, rearranged the list of products and removed Branch
Forecasting from the list.
In Data Center, updated the Data Center graphic to include the
Interaction Analytics Application and Recorder Central Web Services
application under Web Applications, and removed the Web Services
group.
In the topic, Token authentication methods, removed the description for
Additional security.
In the Data Flows chapter, removed the Web Services component from
many different data flows, and updated the corresponding text.
Cloud Verint Da Vinci Speech Transcription Service: New data flow
1.22 Updated the ETL job names in the Database ETL dataflow.
1.20 Added a new flow for HTML5 streaming replay with encryption,
Playback Interaction with Encryption using HTML5 Streaming Data Flow.
Renamed Playback Interaction with Encryption Data Flow to Playback
Interaction with Encryption using ActiveX Data Flow.
1.18 Mobile solution section: Under Data-at-rest and mobile device security, added
user's first name and last name to the list of data that is saved on mobile
devices.
1.13 Removed references to the legacy Mobile App, which has been
deprecated
Clarified the possible implementations of the SQL Server AlwaysOn
feature
1.12 Added new Automated Quality Management (AQM) data flows for fully
automated evaluations
Updated "Additional Resources" in the Recording chapter to reflect
currently supported integrations
1.11 The "Configuration Data Flow" topic in the "System Management" chapter
is modified to show that the EMA now pulls configuration changes from
Enterprise Manager. In previous releases, Enterprise Manager pushed
configuration changes to EMA.
1.09 Added the SQL Server AlwaysOn solution to the Database High Availability
Solutions section
1.07 Added description of new Verint TeamView mobile app to Mobile Solution
section
Section: Text Analytics data flows: added "Text Analytics alarms and
monitoring flow"
Added topic for "Interaction Analytics application data flow"
Section: System Management Services: Removed Information Collector
service from list of services.
Modified "Real-Time Monitoring–Retrieving Employee Information" and
"End-to-End Encryption" for Cloud Screen Capture.
Section Text Analytics Service (TAS): added description of Alarms and
Monitoring Service (AMS) to list of TAS services
Updated with new document template.
1.05 Updates
1.03 Updates for V15.2 HFR2 for Real-Time Analytics (RTA) Framework:
Renamed server role "Biometrics Engine" to "Voice Biometrics Engine".
Renamed server role "Enrollment Engine" to "Voice Enrollment Engine".
Workforce Optimization
Overview
Workforce Optimization products and services are designed to help organizations of all
sizes. The WFO suite reduces operating costs, identifies revenue opportunities and
competitive advantages, and improves performance, profitability, and the customer
experience.
Topics
System Overview 17
Products 19
Management Services 28
Framework Layer 31
System Overview
Workforce Optimization products and services are designed to help organizations of all sizes. The
WFO suite reduces operating costs, identifies revenue opportunities and competitive advantages. In
addition, it improves performance, profitability, and the customer experience.
The solution provides functionality for Recording, Workforce Management (WFM), Desktop and Process
Analytics (DPA), Speech Analytics, Text Analytics, and others.
The system offers the Verint Mobile Work View and Verint Mobile Team View mobile apps. The mobile
apps provide employees and managers with the core benefits of the Workforce Optimization Suite
from their mobile devices.
The WFO solution is a feature rich, end-to-end enterprise solution that provides a modular
architecture and deployment model, and a modern and customizable user interface.
The core elements of the system include:
Rich client application with modern UI capabilities (such as drag & drop and dynamic resizing)
Single workspace with inter-application functionality and data sharing
Efficient user application workflows with enhanced reporting and dashboards
Market-leading speech performance with unique Semantic Intelligence capability
Enhanced serviceability, simplified deployment, secure operations, and low TCO
Management Services, page 28: Provides the management applications and utilities in the
suite, including Organization Management and User Management.
Products, page 19: Provides all the products in the system suite, which together provide
functionality and services to users.
Framework Layer, page 31: Provides the software infrastructure layer in the system,
enabling system configurations, web services, authentication, and mobile apps.
Products
The system products include:
Recording Interactions, page 19: Provides a recording and archiving infrastructure that records and
stores audio, video, and screen data for compliance, customer analytics, and Workforce
Optimization.
Interactions, page 20: Provides you with the ability to search and play back employee-customer
interactions, and perform quality monitoring activities to improve the customer experience.
Workforce Management, page 24: Helps measure and take advantage of the individual talents and
preferences of each employee. WFM uniquely ensures that employee skills and proficiencies are
aligned with business objectives and customer needs, and helps produce optimum schedules.
Desktop and Process Analytics (DPA), page 26: Captures events and data from employee desktops
and makes them easy to act on.
Scorecards, page 25: Helps agents, supervisors, and all contact center employees focus on critical
aspects of their performance and identify opportunities for improvement.
Coaching, page 25: Addresses the needs of managing all aspects of inter-personal performance
optimization efforts.
eLearning, page 26: Provides hard skills and soft skills training applicable for the entire agent life
cycle (before, during, and after the hiring process). eLearning provides training assessment and
design tools.
Interaction Analytics, page 23: Provides unified data from both Speech and Text Analytics for
category and term trends, and themes.
Speech Analytics, page 21: Analyzes ongoing changes in customer behavior and drives effective
organizational changes needed to address challenging market conditions.
Text Analytics, page 22: Analyzes text-based interactions to identify what customers are engaging
with in organizations, and how they are engaging with the products and services for insights into
customer experience.
Customer Feedback, page 26: Provides a highly reliable, scalable, and flexible voice and Web/email
system for conducting intelligent and dynamic post-call and post-contact surveys.
Recording Interactions
Contact recordings provide the raw intelligence for subsequent customer analytics and workforce
optimization.
The Recording and Archiving system provides the following features:
Records VoIP and TDM audio through a variety of passive interception and delivery/termination
interfaces
Records IP-based video conferencing
Interactions
The Interactions application provides you with the ability to search and play back employee-customer
interactions. It also performs quality monitoring to improve employee performance and the
customer experience.
The Interactions application supports the following:
Evaluations
You can manage the entire employee evaluation, feedback and development process, quickly
highlighting gaps in employee skill sets, and enabling prompt corrective action to improve
performance.
Using the Inbox, you can let the system automatically select which interactions are pushed for
evaluation. You can evaluate employees' recorded interactions, and employees can use the
evaluation process to perform their own self-evaluations. You can also assess the entire customer
experience across multiple interactions.
Workflow
You can flag interactions and evaluations and place them in folders for subsequent review and action.
Alerts inform employees and managers that a new interaction or evaluation has been placed in a
folder for their review and action.
Reports
You can generate canned and ad hoc reports on evaluation scores, evaluation activities and recorded
interactions. You can also run analysis reports to uncover trends and relationships within
interactions.
Speech Analytics
Speech Analytics provides you with valuable insight into the key business issues in your enterprise. By
analyzing critical business data from millions of customer-employee interactions, you can understand
the performance issues and act quickly.
The Speech Analytics application delivers fast results through main workflows that are designed
around user tasks.
The main features include:
Discover: Displays what the system has automatically surfaced for you.
Analyze: Provides tools to analyze what you have discovered, and to perform ad-hoc analysis. You
can drill down and interact with the data in a meaningful way to find specific information.
Report & Design: Generates and stores reports created during your analysis. Design allows you to
create and change categories.
Discover
Discover tells you what is happening or changing, and what to look into that you did not know about:
Discover Trends: Trends surface changes in categories and terms stated in employee-customer
interactions. By analyzing these changes, you can understand emerging business phenomena, and
pinpoint significant events that require close attention. Trends also reveal critical information you
were not aware of, and identify process or service issues before they escalate.
Discover Themes: Themes are groups of expressions that have similar meaning in your data. Themes
help you understand what is happening in your calls, without the need to know what to look for in
advance. By reviewing the volume of interactions represented by a theme, you can understand the
magnitude of the business issue.
Analyze
Analyze allows you to perform an ad-hoc analysis. It is helpful when you know what you need to
investigate and want to find the drivers and impact of the business issue:
Search Capabilities and Suggest: You can focus on a specific business issue by searching for
interactions that include specific terms or phrases stated in the conversation. The enhanced
Suggest feature displays a list of words used within the same context as your search entry. You can
also view more terms that are closely related to your term, and longer phrases that include your
term.
Analyze Categories: Speech Analytics categories group interactions that deal with specific business
issues. View statistical information about the categories defined in the system. Investigate how
interactions are distributed among the categories to understand the nature of employee-customer
interactions.
Analyze Charts: You can use charts to view and analyze statistical information about all interactions
or a subset of interactions retrieved by your search. Charts help you identify trends in the
interactions.
Analyze Context: Focus your search on interactions that are most related to the business issue you
are investigating by analyzing the context in which specific terms are used. Analyze Context displays
terms used within the same context in a visual term tree view.
Analyze Root Cause: You can understand the potential drivers of a defined data set by analyzing
possible root causes surfaced automatically by the system.
Tune
Tune allows language model managers to review the suggestions submitted by employees for
incorrectly transcribed terms and phrases, collated per language and vocabulary. You can replace a
suggestion with a different one, and approve or reject each suggestion. Approved suggestions are
exported for integration into the language model through Phonetics Boosting.
Text Analytics
Text Analytics provides data on text-based interactions in your enterprise. View and analyze the data in
the Text Analytics application to gain valuable insights into key business issues in the enterprise.
You can analyze the data by themes, key terms, and categories, which represent classifications of text-
based data. Themes and key terms are automated classifications, while categories are user-defined
classifications. In addition, you can analyze the sentiment associated with themes, key terms, and
categories.
Discover trends
Discover Trends to view what the Analytics Engine automatically surfaces for you, and track what is
trending for different content types over different periods of time — current or historical:
Check Trending tables to identify text elements with the maximum change, based on the relative
change within a selected period of time.
Leverage Speaker Separation to isolate topics and relations according to usage by the employee or
the customer.
Analyze Trend View charts to view day-to-day trend behavior over time for individual text elements
from Trending tables.
Interaction Analytics
Interaction Analytics provides unified data on interactions from both Speech and Text-based sources
in your enterprise, in the same location. Viewing related data from different media, such as calls,
emails, and chats, provides valuable insights for better understanding performance issues and
finding solutions.
Create an Interaction Analytics project by mapping the Speech and Text projects that best represent
the business issue you want to analyze, and see the trends and themes the system surfaces from the
millions of interactions.
Discover highlights
View short-term or long-term snapshots of key interaction metrics and trending categories for the
Speech and Text projects mapped to an Interaction Analytics project.
Workforce Management
The Workforce Management (WFM) solution helps measure and capitalize on the individual talents
and preferences of each employee. WFM uniquely ensures that employee skills and proficiencies are
aligned with business objectives and customer needs, and helps produce optimum schedules.
WFM is part of a unified analytics-driven WFO solution. It interoperates with Quality Monitoring to
incorporate agent quality scores easily for better schedules that have the right blend of agent quality.
It interacts with eLearning and Coaching to receive learning and coaching requests that can be
scheduled at the appropriate time without impacting service levels. Integrated Scorecards come with
pre-defined quality and productivity KPIs that are displayed in role appropriate Scorecards for
consistent communication across all levels of the center.
Branch Forecasting
Branch Forecasting provides an intuitive solution for resource scheduling at your branches. Branch
Forecasting pushes forecasting and planning data to Workforce Management to use when scheduling
resources.
Branch Forecasting is an application that predicts the number of resources required for bank branches
to complete their forecasted daily banking transactions. The forecasted data is then pushed to
scheduling, where it is used to create work schedules for the bank branch resources.
A resource is an employee of the bank, such as a bank teller, customer service agent, or
manager. Branch Forecasting generates volume (transaction) forecasts, resource forecasts, and staff
mix forecasts. A variety of standard reports are also provided that allow you to view the forecasted
data in different scenarios. The forecasts can then be integrated with a Branch Workforce
Management site to create schedules based on your forecasted needs.
Branch Forecasting uses past transactional history to help you predict future resource requirements.
The application's algorithms use data from time studies and your organization's Electronic Journal (EJ)
system to develop an accurate forecast for your resource requirements.
The application uses a highly customized model of your organization to provide forecasting and
reporting services related to your resources and their workload. The most important part of this is a
forecast of resource requirements, measured in full time equivalents (FTE). A standard resource
forecast is stored by 30-minute increments.
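The arithmetic behind an FTE forecast per 30-minute increment can be sketched as follows. This is an illustration only: the transaction types, volumes, and average handling times below are hypothetical, whereas the product derives its figures from time studies and Electronic Journal (EJ) data.

```python
# Illustrative sketch: convert forecasted transaction volumes into FTE
# requirements per 30-minute interval. All figures here are hypothetical.

INTERVAL_MINUTES = 30

def fte_required(forecast_volume, avg_handle_minutes):
    """FTE needed in one interval = total work minutes / interval length."""
    work_minutes = forecast_volume * avg_handle_minutes
    return work_minutes / INTERVAL_MINUTES

# Hypothetical half-hour forecast: (transaction type, volume, minutes each)
intervals = {
    "09:00": [("deposit", 12, 3.0), ("new_account", 2, 20.0)],
    "09:30": [("deposit", 18, 3.0), ("wire", 4, 8.0)],
}

for start, txns in intervals.items():
    fte = sum(fte_required(vol, aht) for _, vol, aht in txns)
    print(f"{start}: {fte:.2f} FTE")
```

A scheduler would then round these per-interval FTE values up to whole staff counts when building branch schedules.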
Branch Forecasting is part of the Workforce Optimization suite. Many of the Branch Forecasting
features are optimized by the applications in the WFO suite.
Related topics
Workforce Management, page 24
Related information
Branch Forecasting User Guide
Scorecards
The Scorecards module helps agents, supervisors, and all contact center employees to focus on
critical aspects of their performance and identify opportunities for improvement.
With Scorecards, users can address complex questions such as:
Which agents and teams are performing to expectations?
Where do I spend my coaching time? Which agents? Which teams?
How is one agent performing relative to their peers?
How much did Team B improve since completing the new training?
Coaching
Coaching directs, instructs, and trains a person or group of people, with the aim of achieving a set
goal or developing specific skills. The Coaching solution effectively addresses the needs of managing
all aspects of inter-personal performance optimization efforts.
Coaching:
Provides employees with personalized guidance on how to improve their performance and extend
their skills.
Helps ensure visibility, accountability, and fairness in staff development practices.
eLearning
The eLearning offering is divided into the following components:
Lesson Management: Provides both hard skills and soft skills training. Lesson Management
enables employees to do their jobs successfully, no matter what stage of the life cycle—before,
during, and after the hiring process.
Competency Based Learning: Provides everything provided by Lesson Management, in addition to
automatically assigning, delivering, and assessing training. Competency Based Learning simplifies
the tasks associated with assigning and managing training. It also enables your contact center to
deliver personalized learning efficiently through the entire agent life cycle — before, during, and
after the hiring process.
Customer Feedback
The Customer Feedback solution provides a highly reliable, scalable, and flexible Voice and
Web/email system for conducting intelligent and dynamic post-call and post-contact surveys.
Customer Feedback is installed on site (behind company firewalls using internal security policies). It
directly interacts with existing telephony and company networks to provide efficient capture and
analysis of customer feedback.
Capturing customer feedback as part of every interaction allows an organization to gain a
comprehensive view of customer perception of their whole business. Customer Feedback differs
from traditional survey projects, which typically capture biased feedback about a narrow aspect of
customer perception. The Customer Feedback solution enables organizations to capture census-level
feedback, from every interaction, that is easy to act on.
Desktop and Process Analytics (DPA)
DPA captures events and data from employee desktops and makes them easy to act on. DPA allows
you to:
Identify, count, and visualize processes and workflow based on user interaction with software
applications
Provide real-time processing of data to make Next Best Action recommendations to users
Replicate data elements from one application to other applications without expensive data
integration
Incorporate biometric status messages into Desktop rule processing for employee alerts
Management Services
Management Services consist of applications and utilities that allow users to view, manage, and
configure system entities and product functionality:
Dashboard, page 28: Allows users to have a single view of valuable information across multiple
applications and evaluate it.
System Management, page 29: Enables users to perform system management activities from a
single, Web-based application (Enterprise Management). Changes are saved and processed centrally
in one single, secure, highly available database. These activities are fully integrated and unified for
all products.
Organization Management, page 29: Allows administrators to set up different hierarchies that allow
them to manage users.
User Management, page 30: Allows administrators to set up and create user profiles for every
employee in their organization using the unified, single user management solution.
Dashboard
The WFO suite-wide unified dashboard allows users to generate multiple dashboards. Users can
select reports from multiple applications out of the collections of available reports.
The Dashboard is based on a powerful user interface that supports drag & drop, dynamic resizing,
and dynamic repositioning of widgets and portlets on the screen. The Dashboard also offers a highly
flexible sharing scheme and management console.
The Dashboard supports dashboard viewing and creation from the same single screen. Users with
the right level of privileges can also share a dashboard with other users. Dashboard sharing is also
done from the same single screen.
A management console is available for high-level administration needs. From the management
console, the administrator can see dashboards per organization or users. Administrators can delete
dashboards, change dashboard owners, or create new dashboards.
The Dashboard allows users to have a single view of valuable information across multiple
applications and evaluate it.
For example, one can present information that relates to analyzed or evaluated interactions. In
addition, dashboards can present information available from other applications, such as Scorecards,
Analytics, and WFM. This data can be displayed in a single location using one or more dashboards.
Dashboards facilitate the access to frequently used reports and data, while providing a unified view
of team performance across multiple applications.
For example, from a single dashboard, a manager in a contact center can now view the following:
Quality scores of each of their teams (from QM)
Overall contact center Performance KPIs (from Scorecards)
Trend of a specific KPI (from Scorecards)
Major reasons why customers are calling (from Speech Analytics)
System Management
System management activities are performed from a single, Web-based application (Enterprise
Management), and saved and processed centrally in one single, secure, highly available database.
These activities are fully integrated and unified for all products.
They include:
License management
Configuration
Status and alarm monitoring
Version information
User management
Generating a topology report
For example, using the Enterprise Management application, you can define configurations for all
applications and view system status and all active alarms in the enterprise. The application sends all
management changes and requests to the system database, where the changes are centralized,
processed and distributed to the relevant servers accordingly.
In addition, because all management information is centralized, management activities are fully
integrated.
Because the application is highly secure, both customers and Field Engineers (FEs) can use it. The
activities that users can perform are based on the roles and privileges (permissions) assigned
to them in their user profile. For example, users who do not have permission to add new servers in
the enterprise cannot perform this change.
Organization Management
Administrators can set up different hierarchies that allow them to manage users:
Organizational hierarchies are structured according to the managerial and employee hierarchy in
the company.
Group hierarchies are structured according to a specific logical structure defined by the
administrator.
These hierarchies allow administrators to set rules for users, based on their position in the
organization or their association with a specific defined group.
Related information
User Management Guide
User Management
Administrators set up and create user profiles for every employee in their organization using the
unified, single user management solution for the suite. The User Management application then sends
the changes to one, single central database, where all system management data is saved for the
whole enterprise.
Administrators assign specific user privileges and permissions to each profile (called roles and
privileges). When a user logs in to the Portal, the system authenticates and authorizes them. The user
is only authorized to view and access the applications and functionality defined within their scope
and visibility.
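The role-and-privilege model described above can be sketched conceptually. The role names and privilege strings below are hypothetical illustrations, not the product's actual configuration schema:

```python
# Conceptual sketch of role-based authorization: a user may perform an
# activity only if one of their assigned roles grants the privilege.
# Role and privilege names are hypothetical examples.

ROLE_PRIVILEGES = {
    "agent":      {"view_own_schedule", "view_own_scores"},
    "supervisor": {"view_own_schedule", "view_team_scores", "run_reports"},
    "admin":      {"manage_users", "manage_servers", "run_reports"},
}

def is_authorized(user_roles, required_privilege):
    """A user holds a privilege if any assigned role grants it."""
    return any(required_privilege in ROLE_PRIVILEGES.get(role, set())
               for role in user_roles)

# A supervisor can run reports but cannot add servers to the enterprise.
print(is_authorized(["supervisor"], "run_reports"))     # True
print(is_authorized(["supervisor"], "manage_servers"))  # False
```

In the actual system, scope and visibility additionally constrain which organizations and data a privileged user can see, not only which functions they can invoke.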
Related information
User Management Guide
Framework Layer
The Framework controls the overall software infrastructure and mechanisms in the solution,
including:
Web Services, page 31: Allows communication and a service layer between the application and the
different services provided by the suite.
Authentication, page 31: Supports two main authentication models—Windows Integrated
Authentication and DB Realm.
Mobile Apps, page 32: Provides quick and easy access to view calendar and work schedule
information from any iPhone or Android phone.
Web Services
Web services allow communication, and a service layer between the application and the different
services provided by the suite.
These Web access services are used to allow a simple and consistent interface to the application,
databases, and alarms. They provide the base for allowing different services to interact with each
other.
Authentication
The system supports several methods of user authentication. These include DB Realm, Windows Active
Directory with LDAP or SSO, Security Assertion Markup Language (SAML), and OpenID Connect (OIDC).
Each method uses a specific authentication principle (federated or form-based), and can be used for
specific applications (desktop/web, mobile, reports) within the system. The authentication process is
implemented in the WebLogic component.
DB Authentication (DBRealm)
The DB Realm (system or internal) method is a form-based authentication method. DB Realm
authenticates the user with a user name and password that are maintained solely within the system's
database. The password hashes are managed securely in the database. When the DB Realm
authentication method is used, password and account-locking policies are also managed within the
system.
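As a rough illustration of how a database-backed realm can store password hashes rather than passwords, the sketch below uses salted PBKDF2 with a constant-time comparison. This is an assumption for illustration only; it is not the product's actual hashing scheme or parameters:

```python
# Illustrative sketch of salted password hashing for a DB-backed realm.
# NOT the product's actual scheme; algorithm and parameters are assumptions.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # example iteration count only

def make_record(password: str):
    """Return (salt, hash) to store in the database; never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = make_record("s3cret!")
print(verify("s3cret!", salt, stored))  # True
print(verify("wrong", salt, stored))    # False
```

Because only the salt and hash are stored, a database compromise does not directly reveal user passwords, which is the general property the text attributes to DB Realm.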
Mobile Apps
Verint Mobile Team View and Verint Mobile Work View provide managers and employees quick and
easy access to view their calendar and work schedule information from any iPhone or Android phone.
Work View also provides performance data for objectives and Key Performance Indicators (KPIs) in an
easy-to-read format.
The mobile apps can be downloaded directly from the AppStore™ or Google Play™ at no charge.
However, because the apps are not standalone, users must be logged in to a system server for the
apps to work.
Logical Architecture
The system logical architecture is based on three logical deployment zones: Data Center,
Sites, and Desktop. The Data Center and Site zones each contain server roles that provide
specific functionality for the system.
Every system deployment includes one Data Center, and one or more Sites and Desktops (depending
on system size and scaling issues).
Dividing the system functions into logical zones supports flexibility for different system scaling levels.
It also streamlines the flow of data, enables easier and more efficient upgrade paths, and provides
system security.
Maintaining the data in one single location (Data Center Zone) both protects sensitive system data
and provides centralized access to data by authorized users. The Site zone can be configured in
multiple instances with multiple servers, providing system flexibility and scalability.
Main system data is sent from the Site zone to the Data Center Zone. The Site zone sends recorded
content and other stored data to the Data Center Zone. The Data Center Zone provides a centralized,
single location where users can access this data to view and modify it. In turn, the Data Center Zone
sends user information and system configuration information to the Site zone, where it is then
integrated into the customer environment. For more information on logical system building blocks
used in the Data Center and Site zones, see Logical Building Blocks—Server Roles, page 35.
The software in the Data Center and Site zones can be upgraded separately, which enables easier
upgrade processes. For example, the customer can upgrade the Data Center Zone for new
applications or new features, without the need to invest in upgrading the entire enterprise.
Related topics
Data Center, page 36
Sites, page 40
Data Center
The Data Center provides a single, central point of access for application and content metadata. Every
system deployment includes one Data Center zone.
Users access the Data Center to view and modify system stored and real-time data. Users who do not
have access to the Data Center zone cannot log in to an application or access any of the data.
The following graphic shows the server roles defined in the Data Center Zone.
Data Processing, page 38: Includes offline services used for data processing, hosted on one or more
servers. These include the Interaction Flow Manager, Framework Integration Service, Speech
Application Service, Forecasting and Scheduling Service, and Interaction Analytics Services.
Databases
The Data Center Zone hosts one or more database servers, depending on the size of the system
deployment.
Data Center Zone databases contain the following information:
System Management Data: Includes IT-oriented information on licenses, configuration, and data
sources.
Application Management Data: Includes business-oriented information on:
User management: Includes users, hierarchy, roles, and user preferences
Application management: Includes forms, flags, reports, and Custom Data (for Workforce
Optimization Interactions & Analytics), and Key Performance Indicators (for Scorecards).
Application Data: Includes raw contact information, evaluations, agent adherence to workflow
procedures, scorecard source measures, DPA data, Speech content, Biometrics information, and
excludes audio and screen information.
Operational Data: Includes archived segment data indicating which segments are archived,
including the information required to restore and play back a segment. The system generates
operational data and maintains it in the database. The system manages acquired structured and
unstructured data.
Web Applications
The Web Application layer includes Web UI Application and Web Services.
The application cluster consists of one or more application servers, depending on the deployment
size. In large deployments with more than one server, the application servers are deployed behind a
Load Balancer (LB). Each server exposes the same set of applications and services.
All users log on to the system through Web applications. A single point of authentication—Single Sign-
On (SSO)—provides application access in the system, as defined by user privileges.
Web applications run application pages with system management and application management data,
which is structured information. Unstructured information is accessed directly from the Site zone in
which it was recorded or archived.
All application servers run the same version of software and have an identical configuration. Users
access the system from the URL of the LB. The LB routes the user to one of the application servers.
Web services provide a secure interface for the upload, retrieval, and updating of Data Center Zone
database information. They also provide a secure interface for real-time information received from
the CTI switch. This information includes the status of an employee, the number of active employees,
and other user data.
Web services include Marking Web Services and Data Access Services, which enable third-party
integration and professional services that enhance the product. Additional services include the
transcription web service, DPA services, and Desktop messaging.
Data Processing
The Data Center zone includes services used for data processing hosted on one or more servers.
These offline services are common to all deployments and operate on data that is synchronized
between two databases, or uploaded to a database.
For Speech Analytics, the Data Center hosts one or more Speech Application servers according to the
number of speech instances required by the customer. Each Speech Application server hosts a single
speech instance.
For Text Analytics, the Text Analytics Service (TAS) receives data from the Interaction Capture
Service, parses it, tags it with semantically meaningful information, creates a searchable index,
and generates analytics. According to functionality, the TAS can be divided into three types of logical
servers: TAS Application, TAS Datastore, and TAS Installation. Based on the type of deployment, they
can be consolidated on the same or on different physical servers. Each server is associated with a set
of services.
The following are examples of offline services executed in the Data Center as offline processes:
Business workflows, such as:
Inbox selection
Call distribution—Managed by CTI Contact End rules defined in the Rule Editor.
Cradle-to-Grave—Partial contacts originating from different CTI servers are gathered into a single
contact with corrected contact-level information in the Contact Database.
Forecasting and Scheduling Services, to manage and plan your contact center activities.
Operational workflows, such as:
Offline maintenance jobs.
ETL (Extract, Transform, and Load) jobs that move data from one database to another.
Integration with external data sources.
ETL (Extract, Transform, and Load) jobs that move data from external sources into a database.
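The ETL jobs above follow a common extract-transform-load pattern, sketched below. All function and field names here are illustrative, not product APIs:

```python
# Minimal ETL (Extract, Transform, Load) sketch: read rows from a source
# store, reshape each record, and append the results to a target store.
# The stores are plain lists standing in for databases.

def extract(source):
    """Read raw rows from the source store."""
    return list(source)

def transform(rows):
    """Normalize each row, e.g. uppercase the agent name and keep the score."""
    return [{"agent": r["agent"].upper(), "score": r["score"]} for r in rows]

def load(target, rows):
    """Append the transformed rows to the target store; return the row count."""
    target.extend(rows)
    return len(rows)

source_db = [{"agent": "smith", "score": 87}, {"agent": "jones", "score": 92}]
target_db = []
loaded = load(target_db, transform(extract(source_db)))
print(loaded)  # 2
```

In a real deployment the extract and load steps would run against SQL databases on a schedule; the three-stage shape stays the same.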
Reporting Services
The Data Center Zone provides Reporting Services for Workforce Management (WFM), Scorecards,
eLearning, Coaching, Customer Feedback, and Interactions and Analytics.
In addition, there are other reporting mechanisms that are implemented as part of the system
proprietary applications. These reports include DPA reports and Speech Analytics reports (which are
implemented over the Speech index).
Sites
The Site zone is responsible for recording (known as content acquisition), content storage, and
integration with the customer environment. The Site zone hosts components used for integration
with the customer environment for Full-Time Recording and Workforce Management.
The number of Site zones varies according to the geographical deployment of the call center,
switches, and network infrastructure. You can deploy Site zones at any location in the organization.
You can store audio and video content at any Site zone, regardless of where you recorded the call.
Usually, there is a correlation between the number of Site zones and the size of the deployment, and
the number can reach tens of sites.
When a Site zone is disconnected from the Data Center Zone, users at the Site cannot connect to
applications, search for calls, or play back calls.
Content Access
Content access enables users and background offline processes to retrieve real-time and recorded
audio and screen content. Playback can be done over the telephone, through computer speakers, or
by downloading the file.
Integration Services
The system integration services provide integration with the customer environment, and include the
following:
Employee Information: Provides information about the following:
Which employees are currently logged on
Which employees sit at which desktop
Content Storage
(ACRA only) The system supports both short-term and long-term content storage:
Short-Term Storage: Every Recorder has its own storage (local buffer), where it records and saves
files locally. Data is stored in the local recorder buffer for as long as it needs to be retained.
For example, if the data must be saved locally for six months, the size of the Recorder storage
(for example, 15,000 MB) is configured accordingly.
Long-Term Storage: The Archive Manager provides long-term storage, and can archive content
recorded in another Site zone. In this deployment, the Archive Manager is located next to the
network storage infrastructure.
Content Processing
Audio transcription servers can process audio content recorded in another Site zone. Typically, the
system is designed to minimize the transition of unstructured data because the volume of
unstructured data is significantly larger than the structured data.
The system also performs offline processing of speech transcription files for analytics using the
Speech Transcription engine.
The Recorder Analytics Framework is the infrastructure that supports a common enterprise solution
for analyzing all metadata and audio captured by recorders. The framework includes the Analytics
Service; the analytics engines are separate from the framework.
Analytics Service
(ACRA only) The Analytics Service is a core component of the Real-Time Analytics (RTA) Framework that
runs on the recorder platform. The Analytics Service functions as the interface between calls and the
analytics engines (which process the calls and metadata).
The Analytics Service is a separate process that can run on any server. The Analytics Service usually
runs on the recorder, especially for real-time processing. When contact information is processed after
a call is completed (such as in campaign-based processing), the service can easily be run on other
servers.
The Analytics Service is responsible for:
Obtaining audio data in a suitable format (decompressing it if necessary), and passing it to the
analytics engines. If the audio stream is stereo, it can be supplied to any analytics engine as one of
the following:
Single interleaved stereo stream
Single mixed mono stream
Two separate mono streams
Obtaining results from the analytics engines (including both Recorder Analytics Rule matches and
raw metadata), and processing the results.
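The three stereo delivery formats listed above can be sketched in a few lines. The sample layout (a flat list of alternating left/right samples) and the function names are assumptions for illustration:

```python
# Sketch: derive the mono formats named above from one interleaved stereo
# stream. Real audio would be PCM bytes; integers keep the idea visible.

def split_stereo(interleaved):
    """Two separate mono streams: even-index samples = left, odd = right."""
    return interleaved[0::2], interleaved[1::2]

def mix_mono(interleaved):
    """Single mixed mono stream: average each left/right sample pair."""
    left, right = split_stereo(interleaved)
    return [(l + r) / 2 for l, r in zip(left, right)]

stereo = [10, 20, 30, 40, 50, 60]  # L, R, L, R, L, R
left, right = split_stereo(stereo)
mono = mix_mono(stereo)
print(left, right, mono)  # [10, 30, 50] [20, 40, 60] [15.0, 35.0, 55.0]
```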
Analytics Engines
(ACRA only) Engines analyze the metadata and audio provided by the Analytics Service. The Recorder
Analytics Framework supports multiple analytics engines, each of which functions as plug-ins to the
Analytics Service. Each analytics engine runs on a recorder platform.
An analytics engine will:
1. Process audio and associated metadata
2. Return the processing results to the Analytics Service for further action
Depending on how it is configured, an analytics engine can also provide custom results, such as
actions taken as a result of its analysis.
Examples of specific analytics engines include:
Metadata Detection
Real-Time Acoustics
Real-Time Speech Analytics (RTSA)
Voice Biometrics
Voice Enrollment
Desktop
The Desktop is the main component in the customer environment that hosts software and certified
third-party software.
Depending on the package, the Desktop optionally contains the following types of clients required by
agents to work with system servers:
Integration Services Agent: Acquires agent information, and extends contact metadata with data
available only on the agent desktop.
Content Recorder Agent: Records agent screen activity.
Content Access Client: Provides Playback Control (Interactions and Analytics)
Thick Client Applications: Includes Form Designer (Interactions and Analytics)
Real-time Agent Notification: Enables the system to send notifications to the employee desktops.
Customer Environment
The hardware infrastructure that supports the customer software environment includes the following
types of components:
Telephony: Provides a CTI link, which enables integration with PBXs, Automatic Call Distributors,
and Interactive Voice Response mechanisms
Storage: Serves as the archive drives for recorded content. Long-term and short-term content
storage, data files, backup location, and advanced storage solutions are all critical components for
disaster recovery.
Examples of storage solutions include Storage Area Networks, Network-attached storage, and
Content Addressable Storage.
The system is deployed with one Data Center Zone, one or more Site Zones, and multiple
Desktops in the customer environment. The system supports various deployment levels (or
scales). The levels range from a small deployment of 250 agent seats (level 1) to an
enterprise scale of 50,000 agent seats (level 6).
Topics
Deployment Overview 52
Platforms and Server Roles 54
Deployment Principles 55
Deployment Levels 56
Databases by Platform 58
Physical Deployment Use Case 59
Deployment Overview
The system is deployed with one Data Center Zone, one or more Site Zones, and multiple Desktops in
the Customer Environment.
The Data Center and Site Zones each contain server roles that provide specific functionality for the
system. A predefined logical group of server roles that are installed together on a physical server is
defined as a platform. Only one platform can be installed on a server.
The system contains approximately 16 different platforms. Two or more platforms can include the
same server role. Platforms are hardware and operating system-independent—you can install the
same platform on servers with different hardware specifications and operating systems.
Hardware specifications of servers include specific parameters (such as CPU, memory, hard drive,
RAID Controller, and disk partitions). Two or more physical servers can have the same installed
platform.
A server includes third-party software (such as Windows and SQL Server), which is a part of the platform and
the responsibility of the customer. It is installed in advance on the server, and the customer verifies it
using the Server Readiness Tool.
[Diagram: A platform (software only) consists of server roles; each server role consists of
components (MSIs) that are installed on servers.]
In addition, a server can be hosted on several different hardware types, where the hardware type
represents the minimum specifications of the underlying machine.
Hardware types are specified by the number of vCPUs or hardware threads required, and by the
amount of memory required in GB.
Additional HW requirements per server are specified in the Server Details section of the Customer
Furnished Equipment (CFE) Guide.
Related information
Server Details section (Customer Furnished Equipment (CFE) Guide)
Deployment Principles
The following include the core deployment principles that have been integrated into the system
architecture:
Same Deployment Concepts for All Applications: The same deployment concepts, scales, and
installation procedures are used, regardless of the application (QM, REC, WFM, or WFO) the
customer has purchased. The fact that different applications are installed does not change the
common deployment practices implemented for all applications.
OS/HW Independent Platforms: All defined platforms are hardware and operating system-
independent. The same platform can be installed on multiple servers with different hardware
specifications and operating systems.
High Availability and SQL Remote Access: The system supports application and database high
availability, as well as remote SQL capabilities.
Flexible Deployments, Sizing and Scalability:
Deployment scaling is driven by whether a deployment has reached a given physical or
logical limit.
In Data Center zones, database servers scale up and then out, separating each database to its
own server. Application servers scale out through load balancing.
In deployments including Recorders and Speech Analytics, servers scale out through multiple
units.
Deployment Levels
The system supports various deployment levels (or scales), ranging from a small deployment (level 1)
to an enterprise scale (level 6). To support this range, the system can be deployed in one server or
multiple servers, depending on the size of the deployment.
The smallest deployment is a single box solution where the two logical zones, Data Center and Site,
reside on the same physical server. A single box deployment is a consolidated platform that consists
of almost all the server roles that are part of the WFO analytics offering.
In a Multiple Box solution, the deployment is distributed over multiple servers with multiple
platforms. The single box solution becomes a Multiple Box solution under the following
circumstances:
Number of supported agent seats increases
Customer environment is distributed and requires deployment of remote sites
Customer requires databases and application high availability
Security considerations require physical separation of database and application servers.
The following diagram illustrates the different deployment levels and the platforms that can be
installed on the servers in the different levels. Level 1 is the Single Box or consolidated deployment,
and levels 2–6 represent various levels of Multiple Box deployment solutions.
Level Description
1 Smallest deployment size with recording. A consolidated server has both Data
Center and Site Zone server roles.
The Data Center deployment level depends on the number of employee seats and the
applications being deployed. The deployment can also include more parameters (such as
database size).
The Workforce Optimization Suite supports SQL cluster and SQL farm deployments, where
SQL Server and databases are hosted externally to the database platform. In this case,
Database Management services are required. Database Management services are used to
configure and manage the databases in the cluster. The services also host reports and
post-processing functions (see Data Processing, page 38 and Reporting Services, page 39).
Databases by Platform
Platforms host specific databases. The databases hosted by the platform depend on the deployment
level and the platform itself.
Site Description
Recording Archive Site: The system enables an archive server at one site to
archive calls from multiple other sites.
Another type of recording site can be a site with a single
recorder that is configured to act as an archive server.
This recording site can be located physically in the Data
Center Zone.
In this example, the site is a pure archive site that archives
calls recorded by recorders located in other sites (new and
existing). The site could also include recorders that act as
audio recorders.
Data Center The Data Center Zone, in this example, is deployed with
the following:
Databases: There are six database servers. Each server
is installed using a different platform, and has a local
instance of SQL.
Applications: The application cluster resides behind
load-balancers and up to 11 servers installed with an
application platform.
Speech: For every speech instance, there is a server
with a Speech Analytics platform installed. The server is
configured with a Speech Application Service server
role.
Data Flows
There are many different data flows, or processes, that are implemented in the system. User
requests, product activities, and system events trigger these flows.
Topics
For example, administrators have an Interactions and Analytics license. After administrators set up
users with the User Management module, they use a different application called the Assignment
Manager (also accessed through the Portal). They use this application to define user access
permissions and scope of the Interactions and Analytics applications.
Administrators define the scope of what users can do in the Interactions and Analytics applications
based on user groups and role affiliations defined in the system.
Administrators then assign entities to groups and roles, such as forms, flags, reports, and folders. For
example, by default, the Ad Hoc Query Analyst role can only access the Ad Hoc Reports section of the
Reports application.
Related information
Interactions and Analytics Administration Guide
In addition to setting up user profiles with defined roles and privileges, administrators can set up
different hierarchies that allow them to manage users:
Organizational hierarchies are structured according to the managerial and employee hierarchy in
the company.
Group hierarchies are structured according to a specific logical structure defined by the
administrator.
These hierarchies allow administrators to set rules for users, based on their position in the
organization or their association with a specific defined group. (For more information on user roles
and privileges and organization and group hierarchies, see the User Management Guide).
Related topics
Generic User Setup, page 63
Related information
Recorder Configuration and Administration Guide
Related information
Interaction Data Import Manager Configuration Guide
2. Archive Storage: If the Locator does not find the file in the non-archive storage, it then searches
the archive storage of all sites until it finds the interaction. Files can be archived on online media
(such as fileshares or SANs) or offline media (such as tapes or DVDs).
To search for interactions, the Locator uses an HTTP/HTTPS Web-based file retrieval component
called a Content Server. The Locator sends a request to the relevant Content Server—first in non-
archive storage, and then in archive storage, until it finds the requested interaction.
If a special registry key is set, the system first searches in a specific site to find the interaction,
and only then searches in other locations (called Site-Dependent Playback).
Refer to the following examples:
Search for Interaction in Recorder Site, page 79: The Locator searches in a Recorder Site for the
interaction, both in the non-archive and archive storage locations.
Search for Interaction Using Site-Dependent Playback, page 84: When site-dependent playback is
enabled (through a registry key), the Locator first searches in the site that is specified in the registry
(both in non-archive and archive locations). Only if it cannot find the file in this specified site, it
continues looking for it in other locations.
The non-archive and archive searches can occur in different sites (depending on where the
interaction is located).
For visual presentation, the diagram shows both types of searches (non-archive and archive)
occurring in the same site.
3 Locator → Archive Database: If the Content Server did not find the file in the non-archive storage
on the Recorder, the Locator queries the Archive Database. The query attempts to find servers that
have Archivers that can access the file in archive storage.
This query uses the Index Number (INUM) of the first recorded segment of the requested type
(audio, video, and screen) belonging to the interaction. The INUM is a unique 15-digit number,
where the first 6 digits are the recorder serial number.
The Archive Database sends back to the Locator one of three possible responses:
Finds Interaction in Online Media: A list of servers where there is an Archiver running with access
to an archived copy of the interaction. Proceed to step 4a.
Finds Interaction in Offline Media: A list of offline media (tapes, DVDs) that contain the
interaction. Proceed to step 4b.
Does Not Find Interaction: The database cannot find the file on any archived media. Proceed to
step 4b.
NOTE: The servers that have access to the file in archive storage can be located in different sites.
For visual presentation, the diagram shows both types of searches (non-archive and archive)
occurring in the same site.
4a Locator → Central Archive: The Locator sends out a request to each Content Server on the list
(one at a time) generated by the Archive Database. It continues to send the request until it receives
a positive response.
The first server that returns a positive response to the Locator is the Content Server that attempts
to search for the interaction. (This server is usually the first one on the list.)
The Content Server then finds the interaction on the online media, and begins preparing it for
retrieval.
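As a small illustration of the INUM described in step 3, the sketch below decomposes a 15-digit INUM into its recorder serial number and the remaining digits. The name for the trailing 9-digit field is an assumption; the source only defines the first 6 digits:

```python
# Sketch: parse an INUM per the description above. The INUM is a unique
# 15-digit number whose first 6 digits are the recorder serial number.
# "remainder" is a hypothetical label for the other 9 digits.

def parse_inum(inum):
    s = str(inum)
    if len(s) != 15 or not s.isdigit():
        raise ValueError("INUM must be a 15-digit number")
    return {"recorder_serial": s[:6], "remainder": s[6:]}

info = parse_inum("123456000000789")
print(info["recorder_serial"])  # 123456
```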
1 Locator → Archive Database: If the Content Server did not find the file in the non-archive storage,
the Locator queries the Archive Database (using the M, C, ST key). The query attempts to find
servers that have Archivers that can access the file in archive storage.
The Archive Database processes the M, C, ST key and sends back to the Locator one of three
possible responses:
Finds Interaction in Online Media: A list of servers where there is an Archiver running with access
to an archived copy of the interaction. Proceed to step 2a.
Finds Interaction in Offline Media: A list of offline media (tapes, DVDs) that contain the
interaction. Proceed to step 2b.
Does Not Find Interaction: The database cannot find the file on any archived media. Proceed to
step 2b.
Related topics
Playback Interaction Data Flow: Retrieve Interaction using ActiveX, page 87
In Site-Dependent Playback, the Locator searches for the recording by searching possible locations in
the following order until the recording is found:
1 Locator → Recorder (in specified site): The Recorder that originally created the recording is in the
Site or Site Group specified by the Site-Dependent Playback tip. The Locator searches for the
recording on that Recorder buffer.
2 Locator → Archiver (on a server in a specified site): The Locator searches for an archived copy of
the recording on any archive server in the Site or Site Group specified by the Site-Dependent
Playback tip.
3 Locator → Recorder (in other site): The Recorder that originally created the recording is not in the
Site or Site Group specified by the Site-Dependent Playback tip. The Locator then searches for the
recording on that Recorder buffer.
4 Locator → Archiver (on a server in other sites): The Locator searches for an archived copy of the
recording on any archive server in other sites.
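The four-step search order above amounts to a fallback loop that stops at the first hit. The location names and the find() callback below are illustrative stand-ins, not product interfaces:

```python
# Sketch of Site-Dependent Playback: the Locator tries each location in
# order and returns as soon as one of them has the recording.

SEARCH_ORDER = [
    "recorder_in_specified_site",
    "archiver_in_specified_site",
    "recorder_in_other_site",
    "archiver_in_other_sites",
]

def locate_recording(recording_id, find):
    """find(location, recording_id) returns the file data or None."""
    for location in SEARCH_ORDER:
        result = find(location, recording_id)
        if result is not None:
            return location, result
    return None, None

# Example: the recording exists only on an archive server in another site.
store = {"archiver_in_other_sites": {"call-42": b"audio-bytes"}}
where, data = locate_recording("call-42", lambda loc, rid: store.get(loc, {}).get(rid))
print(where)  # archiver_in_other_sites
```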
1 Desktop (Playback Application in Browser) → Content Server: When the user selects the URL, the
Playback Application running in the browser sends a request to the Content Server to retrieve the
file from its location.
The Content Server retrieves the file from the storage location:
Non-Archive File System: The Content Server retrieves the file directly from the non-archive file
system (local buffer or ATSM storage).
Archive Medium (by Archiver): The Content Server retrieves the file from the archive medium by
the Archiver.
If encryption is enabled in the system, the Player application on the Desktop checks the file. The
Player determines whether it is encrypted before playing it back to the user.
Related topics
Playback Interaction with Encryption using ActiveX Data Flow, page 145
1 Desktop (Browser) → Content Server: When the user selects the URL, the Browser sends a request
to the Content Server to retrieve the file from its location.
The Content Server retrieves the audio, screen, and video/share data from the storage location:
Non-Archive File System: The Content Server retrieves the file directly from the non-archive file
system (local buffer or ATSM storage).
Archive Medium (by Archiver): The Content Server retrieves the file from the archive medium by
the Archiver.
For HTML5 streaming playback, the Content Server returns the audio, screen, and video/share data
in chunks, as needed by the Desktop (Browser). This step occurs iteratively for streaming replay.
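The iterative chunked retrieval described in this step can be sketched as a generator that serves a media file piece by piece. The chunk size and function names are illustrative:

```python
# Sketch of HTML5 streaming replay: the Content Server returns the media
# in fixed-size chunks; the browser consumes them until the file is done.

def stream_chunks(media, chunk_size=4):
    """Yield the media in chunk_size pieces, as a Content Server might."""
    for offset in range(0, len(media), chunk_size):
        yield media[offset:offset + chunk_size]

media_file = b"0123456789"
received = b"".join(stream_chunks(media_file))  # the browser's reassembled copy
print(len(list(stream_chunks(media_file))))  # 3
```

A real server would size chunks in kilobytes and serve them over HTTP range or streaming requests; the loop structure is the same.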
Related topics
Playback Interaction with Encryption using HTML5 Streaming Data Flow, page 146
Related topics
Real-Time Monitoring—Retrieving Employee Information, page 90
Real-Time Monitoring—Streaming Audio, page 93
1 Monitoring Desktop → Interaction Applications: Through the Portal, the user selects the employee
they want to monitor in one of the following ways:
Select Employee, View Detailed Status: View the detailed status of all employees in a specific
group. Then select the specific employee you want to monitor from the list (Interactions >
Real Time > Monitor Employees).
Type Extension Number: Enter the employee extension number (Interactions > Real Time >
Monitor Extensions).
Related topics
Real-Time Monitoring—Streaming Audio, page 93
The DWH database fetches the evaluation data and calculates the KPI scores, which are displayed in
Scorecards.
Based on the default configuration, the entire Speech Analytics cycle, from when a call is recorded to
when it is built into the index, takes approximately 2 hours. For a detailed breakdown of this process
timeline, see Speech Analytics Pipeline Flow, page 130.
1 Desktop → Project Rules Manager (PRM): Using the Project Rules Manager (PRM), the user
configures transcription rules on the enterprise level.
The rule includes language and specific vocabulary, which forms the basis of the cluster definition
used by the Speech Transcription Service. Each rule is applied to a specific project, which is
associated with a specific Speech Application Server.
2 Project Rules Manager (PRM) → QM Database: The Project Rules Manager saves the rule
definitions to the QM Database.
Related topics
Retrieve Tasks for Transcription Data Flow, page 107
Related topics
On-premises transcription data flow, page 109
4 Speech Transcription System → DAS Web API: The Speech Transcription System requests the
language code of the transcribed call in ISO format from the DAS Web API.
5 DAS Web API → Speech Transcription System: The DAS Web API retrieves the language code in
ISO format from the QM Database, and sends it to the Speech Transcription System.
To use the Verint Da Vinci Speech Transcription Service in the cloud, you must configure
the relevant server role and Common Cloud Services settings in System Management.
11. API Gateway → Speech Transcription System: The API Gateway forwards the same to the Speech
Transcription System.
12. Speech Transcription Service → Speech Transcription Service: Performs the second pass on the
transcribed audio segments:
Assigns the Transcription Quality Score
Labels speakers
Related information
Workflow: Configure Common Cloud Services (System Administration Guide)
Verint Da Vinci Speech Transcription Service Configuration Workflow (Enterprise Manager Config
& Admin Guide)
the flexibility to analyze the transcription in different external applications for enhanced insights and
more accurate predictive models.
6 IAES → DUS Web API: The IAES requests the DUS Web API to send the metadata for every
interaction in the bulk-task set.
There must be a minimum of 40,000 interactions or 2,000 hours of audio whose interaction Start
Time is within the last 14 days. Otherwise, the Analytics Training process cannot retrieve the
interactions.
For example, there are a total of 50,000 interactions in the system over a one-month period
(since the system has been up and running). However, in the past 14 days, only 10,000
interactions have been generated. In this case, the Analytics Training process does not
retrieve the interactions and does not run Training.
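The eligibility rule above can be sketched as a simple check. The record field names (start_time, duration_hours) are assumptions for illustration:

```python
# Sketch: Training runs only if, within the last 14 days, there are at
# least 40,000 interactions or at least 2,000 hours of audio.
from datetime import datetime, timedelta

def training_eligible(interactions, now):
    cutoff = now - timedelta(days=14)
    recent = [i for i in interactions if i["start_time"] >= cutoff]
    total_hours = sum(i["duration_hours"] for i in recent)
    return len(recent) >= 40_000 or total_hours >= 2_000

now = datetime(2024, 1, 15)
# 10,000 recent interactions of 6 minutes each is 1,000 hours: not eligible,
# matching the worked example in the text above.
recent = [{"start_time": now - timedelta(days=1), "duration_hours": 0.1}] * 10_000
print(training_eligible(recent, now))  # False
```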
When the Training process runs, it extracts ontology-related items from the interactions. The Training
process creates an updated ontology and saves it in the Speech Analytics Database. It creates the
ontology by comparing the previous published ontology and the new ontology-related items found in
the interactions.
The items in the ontology (including themes, relations, and terms) can help the user make non-trivial
observations about their business.
Related topics
Index Data Flow, page 118
Indexed Data Integration Flow, page 120
Themes Data Flow, page 124
3 TRS → Speech Products Database: The TRS copies the SPS data to the Speech Products Database.
Related topics
Applications User Setup, page 64
Call is stored in database: The call is stored in the Central Contact Database. Total: 15–30 minutes
(15-min ETL schedule + 15-min delay).
Database Delay from Real-Time: After a configurable lag time delay, the Project Rules Manager
retrieves records from the Sessions View. 5–30 minutes (default: 5 min).
3 Marking Data Layer (MDL) → Contact Database: The MDL sends the interaction and contact
information to the Contact Database.
5 Tagger Service → Search Service: The Tagger Service forwards the tagged data to the Search
Service.
6 Search Service → Text Indexing Service: The Search Service stores the data in the Text Indexing
Service.
The flow shows only those TAS services that are applicable to the current flow.
1 Client → Text Application: The client logs on to the Text Application and performs a search for
interactions to analyze.
2 Text Application → Search Service: The Text Application forwards the search query to the Search
Service.
3 Search Service → Text Indexing Service: The Search Service queries the Text Indexing Service and
returns the search results.
The flow shows only those TAS services that are applicable to the current flow.
1 TAS Services → Alarms and Monitoring Agent: The Alarms and Monitoring Agent collects health
metrics from each of the TAS services that it monitors.
2 Alarms and Monitoring Agent → Alarms and Monitoring Manager: The Alarms and Monitoring
Manager does the following:
Polls the Alarms and Monitoring Agent at predefined intervals, and retrieves the metrics for each
TAS service that is monitored.
Groups the incoming alerts by the TAS server from which the alert originated, and by the TAS
service for which the alert is sent.
Related topics
Recording With Encryption Data Flow, page 141
Recording Using Import Manager With Encryption Data Flow, page 142
Playback Interaction with Encryption using ActiveX Data Flow, page 145
Playback Interaction with Encryption using HTML5 Streaming Data Flow, page 146
Speech Analytics Encryption Flows, page 148
2 Recorder → Local Key Cache: The Recorder checks the Local Key Cache for a valid key. One of the
following occurs:
If the key is valid, the Recorder uses the key and the process continues with step 6.
If the key is not valid, the process continues with step 3.
3 Recorder → KMS API: If the Recorder does not have a valid key in the Local Key Cache, it sends a
key ID request to the KMS API.
Note: Every 5 minutes, the Recorder checks with the KMS API to verify that the key is still valid.
4 KMS API → Recorder: The KMS API retrieves the key from the KMS and then returns the key to the
Recorder.
5 Recorder → Local Key Cache: The Recorder uses the Local Key Cache to cache the latest (active)
key in a secure manner.
6 Recorder → Call Buffer: The Recorder encrypts the data using the current key and places the key
ID in the file header. The encrypted media (and metadata) is saved to the Recorder call buffer.
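Steps 2 through 6 above amount to a cache-with-fallback pattern, sketched below. The class, the 300-second re-check interval, and the kms_fetch callback are illustrative stand-ins for the Local Key Cache and KMS API:

```python
# Sketch: check the local cache for a valid key; on a miss or expiry,
# fetch from the KMS and cache the result for the next caller.
import time

class LocalKeyCache:
    def __init__(self, kms_fetch, ttl_seconds=300):
        self._kms_fetch = kms_fetch  # callable standing in for the KMS API
        self._ttl = ttl_seconds      # mirrors the 5-minute re-check above
        self._key = None
        self._fetched_at = 0.0

    def get_key(self, now=None):
        now = time.time() if now is None else now
        if self._key is None or now - self._fetched_at >= self._ttl:
            self._key = self._kms_fetch()  # steps 3-4: ask the KMS API
            self._fetched_at = now         # step 5: cache the active key
        return self._key                   # step 6: use it for encryption

calls = []
cache = LocalKeyCache(lambda: calls.append(1) or ("key-id-1", b"secret"))
k1 = cache.get_key(now=0)
k2 = cache.get_key(now=10)   # still valid: served from cache
k3 = cache.get_key(now=400)  # expired: fetched again from the KMS
print(len(calls))  # 2
```

A production cache would also protect the key material in memory; here the point is only the validity check and fallback order.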
4 Recorder → Local Key Cache: The Recorder checks the Local Key Cache for a valid key. One of the
following occurs:
If the key is valid, the Recorder uses the key and the process continues with step 8.
If the key is not valid, the process continues with step 5.
5 Recorder → KMS API: If the Recorder does not have a valid key in the Local Key Cache, it sends a
key ID request to the KMS API.
Note: Every 5 minutes, the Recorder checks with the KMS API to verify that the key is still valid.
6 KMS API → KMS: The KMS API retrieves the key from the KMS and then returns the key to the
Recorder.
7 Recorder → Local Key Cache: The Recorder uses the Local Key Cache to cache the latest (active)
key in a secure manner.
8 Recorder → Call Buffer: The Recorder encrypts the data using the current key and places the key
ID in the file header. The encrypted media (and metadata) is saved to the Recorder call buffer.
3. Key Proxy Web Service → KMS: The Key Proxy Web Service sends a key ID request to the KMS API by HTTPS. The KMS API forwards the request to the KMS. The KMS responds by sending the key to the Key Proxy Web Service (through the KMS API).
4. Key Proxy Web Service → Desktop (Player): The Key Proxy Web Service sends the key to the Player by HTTPS.
5. Desktop (Player) → Desktop (Player): The Player uses the key to decrypt the file in memory, and plays it back to the user. By default, the browser saves downloaded files in its temporary internet files folder in an encrypted format.
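The playback side of the flow (steps 3 through 5) can be sketched in the same spirit. The header layout and function names are assumptions carried over from the encryption sketch, and XOR again stands in for the real cipher; the point is that the key ID is read from the file header, resolved through the Key Proxy, and the file is decrypted only in memory.

```python
def read_key_id(encrypted_file: bytes):
    # The key ID is stored in the file header (newline-delimited in
    # this illustration; the real format is product-defined).
    header, _, body = encrypted_file.partition(b"\n")
    return header.decode(), body

def play_back(encrypted_file: bytes, kms_api) -> bytes:
    """Steps 3-5: look up the key by its ID through the Key Proxy
    (kms_api is a stub for that HTTPS round trip), then decrypt the
    media in memory only -- nothing decrypted is written to disk."""
    key_id, body = read_key_id(encrypted_file)
    key = kms_api(key_id)
    # XOR stands in for the real decryption algorithm.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(body))
```

A round trip through the two sketches (encrypt, then decrypt with the key fetched by ID) recovers the original media, which is the invariant the data flow above describes.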
Related topics
Playback Interaction with Encryption using HTML5 Streaming Data Flow, page 146
Related topics
Playback Interaction with Encryption using ActiveX Data Flow, page 145
To ensure that sensitive data is encrypted during these transfers, you must use HTTPS
protocols and enable HTTPS on your site.
For details on HTTPS enablement, see the Security Configuration Guide.
The encryption and decryption processes include:
The Playback component in the Speech Transcription Service decrypts the file so that it can
decompress it. Once the file is decompressed, the Speech Transcription Service can transcribe the
file into text.
After the Speech Transcription Service transcribes the file, it sends the transcript to the TRS over
HTTPS. In turn, the TRS encrypts the transcript. It then stores it in an encrypted format in the Speech
Products Database.
Requests to retrieve transcripts are sent to the TRS over HTTPS. The TRS reads and decrypts the
transcript from the Speech Products Database, and sends the decrypted file to the requesting
service over HTTPS.
The following encryption diagrams assume that the system is configured to use the HTTPS
protocol for network communications.
Related topics
Speech Analytics Audio and Video File Decryption Data Flow, page 148
Speech Analytics Transcript Storage Data Flow, page 150
Speech Analytics Transcript Retrieval Data Flow, page 152
[Figure: Data Center encryption diagram — Data Processing, the Framework, Contact, OLTP, Data Warehouse, QM, Speech Analytics, Speech Products, DPA, and Biometrics databases, Interaction Analytics, Interaction Flow Manager, Forecasting and Scheduling, Speech Application Service, Web Applications (Interaction, Framework, and DPA Applications), Reporting Services, and Encryption Services (KMS and KMS Database). Numbered arrows (1–6) show the site-side TDM/IP Recorder and Analyzer retrieving keys over HTTPS and storing the encrypted audio file through Integration Services, Content Access, and Content Storage (Telephone Playback Server, Recorder Integration Service, Central Archive).]
4. Playback SDK → Local Key Cache/KMS: The Playback SDK retrieves the key either from the Local Key Cache or from the KMS (through the KMS API) by HTTPS.
5. Playback SDK → Playback SDK: The Playback SDK uses the key to decrypt the file in memory, and decompresses the file to PCM format. The Speech Analytics Transcription Engine can now transcribe the decrypted file.
Related topics
Speech Analytics Transcription Data Flow, page 104
2. TRS → Local Key Cache or KMS: The TRS retrieves the key either from the Local Key Cache or from the KMS (through the KMS API) by HTTPS.
3. TRS → Local Key Cache or KMS: The TRS retrieves the key either from the Local Key Cache or from the KMS (through the KMS API) by HTTPS.
Related topics
DPA client/server data flow, page 157
DPA Reporting data flow, page 158
DPA Integration with WFM Data Flows, page 160
View DPA Applications in Player Data Flow, page 162
View Interactions Data in Timeline Report Data Flow, page 163
3. DPA Desktop Transfer Service → DPA Applications Web Service: When the client version differs from the server version, the DPA Desktop Transfer Service requests the updated configuration information for each component with a mismatched version.
2. Desktop (DPA Reports) → DPA Applications: The DPA Reports user defines report parameters and clicks Display. A request is sent to the DPA Applications to generate the report.
Related topics
Applications User Setup, page 64
Archive Topologies
(ACRA only) After a recorder has completed recording a contact, it stores the contact on its local disk. The recorder call storage drive, no matter how large, has some limit to its capacity. Therefore, at some point, older contacts need to be moved to long-term storage.
Archive refers to the infrastructure dedicated to preserving call information in long-term storage
(usually for one year and longer, depending on customer requirements). The archive service transfers
recorded content from recorders to specific storage media for preservation.
The following are the two different archive topologies that can be configured:
Local Archive Topology, page 165: Recorders push contacts directly from their local call buffer to the
target media.
Central Archive Topology, page 167: Central archive server is configured to pull contact data from
Recorders and write this data to the target media.
Each type of archive (local or central) can be deployed in different configurations. Which type of archive to use depends on the topology best suited to a customer's specific requirements (see Local vs. Central Archive, page 168).
For detailed information on archive functionality, general data flow, setup, and configuration, see the
Archive Administration Guide.
Local Archive with Fixed Media Onsite (Site 2): Fixed media (SAN or Centera) is located in a Site, and can be shared among multiple Recorders. The only data transferred between the Site and the Data Center is lightweight archive activity. The data includes status updates, updates to the database, and progress tracking.
Local Archive with Fixed Media Offsite (Sites 3 & 4): Fixed media (SAN or Centera) is located outside a particular Site (either in the Data Center or in another site in the enterprise). The Recorders in a specific site configured with Local Archive push contact data across WAN bandwidth to the fixed media located in a different location.
Central Archive Onsite (Site 1): One or multiple Central Archive servers are located in a specific site. In this configuration, the Central Archives pull contact data from the Recorders configured in the same site. They write the data to specific removable or fixed media. All archive data stays within the same site, and is not pushed across a WAN. Therefore, the only data transferred between the Site and the Data Center in this configuration is lightweight archive activity. This includes status updates, updates to the database, and progress tracking.
Central Archive Offsite (Sites 2 & 3): One or multiple Central Archive servers are located in the Data Center. In this configuration, the Central Archives pull contact data across a WAN from the Recorders located in a specific Site. They then write the data to specific removable or fixed media located in the Data Center. This configuration allows pulling contact data from multiple sites to a single archive server. Alternatively, you can subdivide Central Archive servers by site, based on data load considerations.
Data Streaming — Local Archive: If the target archive media is co-located on site with the recorders, or is physically attached to each recorder, you can avoid streaming data across a WAN.
Data Streaming — Central Archive: Data must be streamed from the Recorder to the Central Archive servers. If the Central Archive server is located in the Data Center, this involves streaming data across a WAN.
Database Processes
The following describes the main database processes in the system:
Database ETL Flows, page 171: Describes the main database ETL (Extract, Transform, Load) flows in
the system, including marking, transferring and synchronizing data between databases.
Database Retention and Purging, page 173: Describes the retention and purging setup and process
logic of various databases in the system.
[Figure: Database ETL diagram — Data Warehouse ETL and Contact ETL jobs move data from the QM Application Database, Framework Data Warehouse, Contact Database, and Contact OLTP Database into the Interaction Data Warehouse, Speech Analytics Database, Speech Products Database, DPA Database, Biometrics Database, Archive Database, and Framework Database, with a Database ETL marking step. Sites send recording data from IP/TDM Recorders, the Screen Recorder, Analyzer, Import Manager, Video Recorder, Speech Transcription Service, and Customer Feedback Surveys through Integration Services, Content Processing, Content Access, and Content Storage (Telephone Playback Server, Central Archive, Recorder Integration Service) to the Data Center Web Applications and services.]
Interaction Retention Quantity (Millions): Defines the database retention threshold according to the total number of interactions the database retains (watermark). Once this total number of interactions in the database has been exceeded, the purging mechanism is activated.
Defining different values for the EM parameters according to the database allows setting different
purging thresholds for different databases in the system.
This enables, for example, retaining a longer history of interactions in the Interaction Data Warehouse,
but having a limited number of interactions in the Contact Database, where interactions are retained
primarily for searching purposes.
The Database Purger logic differs between subsystems—for example:
Contact OLTP Database: Interactions are purged based on the minimum value defined between
the retention period and retention quantity parameters. An interaction will only be purged if it exists
in the Contact Database. For this reason, the retention period for the Contact OLTP Database must
be shorter than the retention period for the Contact Database.
QM Database: Evaluations are purged based on the minimum value defined between the retention
period and retention quantity parameters. Unevaluated and/or non-flagged interactions are purged
based on the value set for Unevaluated Contacts Retention Period (days).
Interactions that are evaluated and/or flagged are not purged until the associated evaluation's
retention period has passed and the assigned flag's retention period has passed.
Contact Database: Interactions are purged based on the minimum value defined between the
retention period and retention quantity parameters.
An interaction is not purged if one of the following conditions exists:
The call is archived and the archive expiration date has not passed
The interaction exists in the QM DB (the interaction is flagged and/or evaluated)
Interaction Data Warehouse: Data is purged according to the following logic:
Interactions that originated from the QM Database are deleted according to the Unevaluated
Contacts Retention Period (days).
Evaluations are deleted according to the evaluations retention period and retention quantity
parameters.
Data that originated from the Contact Database is purged based on the defined retention.
Speech Analytics Index: The Speech Analytics Application server purges interactions out of the
index on a daily basis, when either the number of interactions or total audio hours reaches the
maximum limit. The purger removes old interactions first in whole day increments, so that the
oldest day(s) in the index are purged first.
For all databases, after an interaction is deleted, all of the information related to this interaction is
purged as well (for example: call custom data, call remarks).
For more information on the EM parameters related to database retention, see the Enterprise
Manager Configuration Guide.
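The Contact Database purge rules described above (purge by whichever of the retention period or retention quantity thresholds is hit first, but never purge archived-and-unexpired or flagged/evaluated interactions) can be sketched as a small predicate. The field names (`created`, `archived`, `archive_expires`, `in_qm_db`) are illustrative assumptions, not the product's schema.

```python
from datetime import datetime, timedelta

def purge_contact_interactions(interactions, retention_days,
                               retention_quantity, now=None):
    """Purge candidates are the oldest interactions first. An item is
    purged when it is past the retention period OR the database is
    over its retention-quantity watermark, unless it is protected:
    archived with an unexpired archive, or present in the QM DB
    (flagged and/or evaluated)."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    kept, purged = [], []
    for item in sorted(interactions, key=lambda i: i["created"]):
        over_quota = len(interactions) - len(purged) > retention_quantity
        expired = item["created"] < cutoff
        protected = (
            (item.get("archived") and item.get("archive_expires", now) > now)
            or item.get("in_qm_db")  # interaction is flagged and/or evaluated
        )
        if (expired or over_quota) and not protected:
            purged.append(item)
        else:
            kept.append(item)
    return kept, purged
```

Note how a protected interaction (for example, one that exists in the QM DB) survives even when the watermark is exceeded, which is why the database can temporarily hold more interactions than the configured quantity.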
1. Desktop → Framework Applications: From the Desktop, the Web browser sends the request to run a report to Framework Applications.
4. Reporting Services (SSRS services) → Framework Applications: The SSRS services take the properties of the report output and send them to the Framework Applications.
Related information
WFO Report Development Kit (RDK)
WFO Reports Guide
WFO Ad Hoc Reports Guide
System Redundancy
The system supports redundancy mechanisms that ensure high-quality system service is
met under normal conditions, and provide solutions that continue to provide service in the
event of failure.
Topics
For details on redundancy configurations for recorders, see High Availability, page 256 in
Recording, page 236.
Both Data Centers must be identical in their hardware and software configurations. If a disaster
occurs that renders the Active Production Data Center inoperable, administrators need to perform a
Data Center switch-over procedure to switch Data Center operations from the inoperable Data
Center to the Secondary (or Standby) Data Center. The Standby Data Center then becomes the new
Active Production Data Center.
For more information about this solution, see the Data Center Redundancy Guide.
Related topics
Windows and SQL Cluster solution, page 180
SQL Server AlwaysOn solution, page 181
The system supports the deployment of its SQL databases on external servers. In this deployment,
the SQL Server resides outside of the servers, and the database platform is configured to work
remotely with SQL Server instances.
Customers can optionally use the Windows Clustering solution to support SQL database high
availability. The clustering solution includes at least two servers (clustering nodes)—one active server
with a running SQL Server instance, and one passive server (known as a standby server) with no
running SQL Server instance.
Both servers have the same virtual address. If the active server is not available, the system
automatically activates the standby server. This process is known as failover.
Since databases are stored on a shared disk visible to the two clustering nodes, the standby server
only needs to run the SQL Services. Applications can continue using the same network address after
failover to connect to the databases.
The Database High Availability failover scenario is implemented in the following way:
1. The system creates a session to the active SQL Server through a Cluster Name device.
2. The active SQL Server fails.
3. The system prompts the Application Server to initiate a new connection with the Cluster Name
device.
4. When the Application Server reconnects, the Cluster Name device directs the session to the
formerly passive SQL Server, which has now become the active SQL Server.
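The four-step failover sequence amounts to a reconnect loop against the cluster's virtual name: the Cluster Name device always resolves to the currently active node, so the application simply reconnects to the same address. A minimal sketch, with stand-in functions for the connection plumbing:

```python
import time

def run_session(resolve_active_node, work, retries=3, backoff=0.0):
    """Steps 1-4: open a session through the Cluster Name device
    (modeled by resolve_active_node); if the active SQL Server fails,
    reconnect -- the cluster name now resolves to the former standby,
    which has become the active node."""
    for _ in range(retries):
        conn = resolve_active_node()  # Cluster Name -> active node
        try:
            return work(conn)
        except ConnectionError:
            time.sleep(backoff)  # active SQL Server failed; retry
    raise ConnectionError("no active SQL Server node")
```

The key design point the steps describe is that the application never learns the physical node names; it only ever talks to the cluster's virtual address, so failover needs no client-side reconfiguration.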
Introduced in SQL Server 2012, the AlwaysOn Availability Groups feature maximizes the availability of a set of user databases for an enterprise. An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together (see https://msdn.microsoft.com/en-us/library/hh510230(v=sql.110).aspx).
WFO requires that all its databases within an instance are in the same availability group.
The AlwaysOn feature can be used in the following implementations:
Databases high availability (also known as standby databases)
Disaster Recovery (DR) solution for WFO databases
Off-load reporting
The monitor is installed on the server hosting the Framework Database server role.
The monitor consists of a time stamp written to each database on the primary SQL instances. The time stamp is replicated, as part of the database replication, to the secondary SQL instance.
To validate that the databases are synchronized, the AlwaysOn monitor runs at five-minute intervals and compares the time stamps between the primary and secondary instances. If there is an issue related to database failover readiness, the monitor provides a relevant alarm message on the WFO system monitor.
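The heartbeat comparison the monitor performs can be sketched as follows. The function name and the five-minute tolerance threshold are illustrative assumptions; the real monitor writes the stamp into each database and surfaces its result as a system monitor alarm.

```python
from datetime import datetime, timedelta

TOLERANCE = timedelta(minutes=5)  # one monitor interval, assumed

def check_failover_readiness(primary_stamp: datetime,
                             secondary_stamp: datetime) -> str:
    """The monitor writes a time stamp on the primary instance;
    replication carries it to the secondary. Comparing the two copies
    reveals whether the secondary is current enough to fail over to."""
    lag = primary_stamp - secondary_stamp
    if lag > TOLERANCE:
        return f"ALARM: database not failover ready (replication lag {lag})"
    return "OK"
```

The design choice here is that the heartbeat rides the same replication channel as the real data, so a healthy comparison is direct evidence that replication itself is working, not just that both servers are up.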
Communication with the application servers is implemented only through the LB virtual address. The
system is available as long as one application server is active.
Application server redundancy is also used for system scalability in cases where a single application
server is not sufficient to handle the application workload. High Availability solution design must
ensure that enough application servers are available at any given time.
For example, in a typical N+1 deployment, the system includes an extra application server in addition to the number of application servers required to process the workload. As long as no more than one server fails, the system continues to be available and meets the performance requirements. In contrast to the database clustering solution, all application servers are active at all times and are available to serve user requests.
The Application High Availability failover scenario is implemented in the following way:
1. Application Server A fails during an active session.
2. The system prompts the user to log on again.
3. When the user logs on again, the system creates a new session and the Load Balancer directs the
session to Application Server B.
End-to-End Encryption
The end-to-end encryption solution achieves high availability by deploying two Key Management
Servers (KMS) in a redundant configuration.
On the Thales KMS, high availability is configured in an active-active mode within the Thales KMS.
For more information on end-to-end encryption, see the Thales Key Manager Server Installation and
Configuration Guide.
Virtual Machines: Virtual machine products (such as VMware) can provide high availability solutions that are transparent to the system, providing high availability for all system components.
The Virtual Machines solution is similar to the Windows and SQL clustering solution (see Database
High Availability solutions, page 179), where the system has redundant servers in an active/passive
configuration. The virtual machine files are stored on a shared storage server, which is available to
both the active and passive servers.
When the active server fails, the system automatically performs the failover process, activating the
passive (or standby) server, which means the virtual machines are being executed on the standby
server instead of the primary server. This solution can be combined with other solutions as well.
For example, it can include multiple application servers accessed through a load balancer, and if
one of them fails, an additional server is activated automatically by the virtual environment.
Boot from SAN: Booting servers from a Storage Area Network (SAN) eliminates the need for each server to have its own internal disk. Server storage, including operating system files, can be relocated to a shared network disk location, and the risk of local disk failure is removed. In this scenario, the standby server is shut down and only booted up when the primary server fails.
The Virtual Machines and Boot from SAN solutions can be applied to all Data Center
zone platforms. However, each specific deployment and implementation requires
approval and certification.
Network Solutions: Customers can choose any network high availability solution, as long as
networking requirements, such as bandwidth and latency, are met. Network high availability is the
responsibility of the customer.
System Management
The system supports a unified, centralized system management architecture for all
products.
Topics
System Management Service — Description
User Management: Administrators set up and create user profiles for every employee in their organization using the unified, single user management solution for the Enterprise Suite. The User Management application then sends the changes to one single central database, where all system management data is saved for the whole enterprise. Administrators assign specific user privileges and permissions to each profile (called roles and privileges). When a user logs in to the Portal, they are authenticated and authorized by the system. The user is only authorized to view and access the applications and functionality defined within their scope and visibility.
Status and Alarms: Alarms and status displays are used to monitor the overall health of the system. Users can view overall system status according to the installation hierarchy, where for each of the hierarchy nodes an Active Alert Count is displayed. To view detailed status and alarm information for specific servers, the user selects the relevant server and the system retrieves the alarms and status messages locally on the server. For a detailed data flow that describes how alarms are generated and processed in the system, and then how the alarm information is retrieved and displayed to users, see Alarms Data Flows, page 202.
Topology Report The Topology Report provides system information that is useful
when planning system upgrades and in troubleshooting scenarios.
The Topology Report consists of five individual reports:
Summary: Contains information about the creation of the
Topology Report, the customer, and the items licensed in the
enterprise.
Servers: Contains detailed information about the hardware
components and operating system software installed on each
server in the enterprise.
Storage: Contains statistics about the disk space capacity and the
free disk space on each server in the enterprise.
Recorders: Contains information for the Recorder and
Consolidated servers in the enterprise
Versions: Contains information about the WFO software server
version, service pack version, and specific hot fixes installed on
each server in the enterprise
For more information, see the Enterprise Manager Configuration and
Administration Guide.
1a. Component → Alarm Service: The component sends the alarm directly to the Alarm Service.
2. Alarm Service → Alarm Service: The Alarm Service processes the alarm. The Alarm Service uses the alarm configuration to determine whether to send an SNMP trap to an SNMP node or send an email to a particular person to provide notification that the alarm is triggered. The alarm configuration also determines whether a delay period occurs before further alarm processing, the priority level assigned to the alarm, and other aspects of the alarm processing. The alarm configuration is done from the System Monitoring > System Monitor > Alarm Settings screen.
NOTE: The Alarm Service can also trigger alarms for Performance Monitor-based alarms, and for File Tampered alarms.
3. Alarm Service → Active Alarms Directory: The Alarm Service creates an XML file for each alarm it processes and places this file in the <install directory>software\contactstore\alarms\active directory on the managed server.
4. Alarm Service → System Monitor: The System Monitor (in either the Enterprise Manager Agent (EMA) or Recorder Manager (RM) application) on the server accesses the alarm XML files in the directory above to display alarm information about active alarms. This information displays in the Active Alarms tab in the EMA or RM application. From the System Monitor, alarms can be filtered, sorted, and acknowledged to help support personnel analyze the problem.
2. EMA → Active Alarms Directory: The AlarmJob process detects any new active alarm XML files that were added to the <install directory>software\contactstore\alarms\active directory on the managed server since the previous running of the AlarmJob process.
3. EMA → Enterprise Manager: The Enterprise Manager Agent collects the new active alarm XML files from the directory and sends them to the Enterprise Manager over the HTTP(S) connection.
4. Enterprise Manager → Framework Database: Enterprise Manager stores the active alarm XML files received from the server in the Framework Database.
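The two sides of the alarm file exchange (the Alarm Service writing one XML file per alarm, and the AlarmJob process later collecting files added since its previous run) can be sketched as below. The XML element names and file naming scheme are assumptions for illustration; the real file format is product-defined.

```python
import os
import xml.etree.ElementTree as ET

def write_alarm_file(active_dir, alarm_id, message):
    # Alarm Service step 3: one XML file per processed alarm,
    # placed in the active alarms directory on the managed server.
    root = ET.Element("alarm", id=str(alarm_id))
    ET.SubElement(root, "message").text = message
    path = os.path.join(active_dir, f"alarm-{alarm_id}.xml")
    ET.ElementTree(root).write(path)
    return path

def collect_new_alarms(active_dir, last_run_mtime):
    # AlarmJob step 2: detect alarm XML files added since the
    # previous run, ready to send to the Enterprise Manager.
    new = []
    for name in sorted(os.listdir(active_dir)):
        path = os.path.join(active_dir, name)
        if name.endswith(".xml") and os.path.getmtime(path) > last_run_mtime:
            new.append(path)
    return new
```

Comparing file modification times against the previous run's timestamp is what lets the AlarmJob process forward only new alarms, instead of re-sending the whole directory each cycle.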
Topics
Security Overview
The system supports the following main security requirements:
Secure Sockets Layer (SSL) Protocol, page 208: Provides secure HTTP-based communications
End-to-End Encryption (ACRA only), page 210: Supports encrypting of media files such as audio and
screen during recording, and can then store them in an encrypted format throughout their entire
lifecycle
Pausing and Resuming Recording, page 211: Enables audio recording to be muted and screen
recordings to be blanked out to protect sensitive data from being exposed
Networking, page 212: Supports Data center SSL offload where all HTTPS traffic is terminated at the
load balancer (LB) or web application firewall (WAF) and all communication behind it (inside the data
center) is over non-HTTPS communication. Also supports Mobile Networking, which is required to
support the mobile apps, and system communication through firewalls
Domain Trust, page 214: Domain trust is needed to allow a single MSA and DMSA account in the
Data Center.
Remote Access, page 215: Supports system personnel’s remote access to the system for providing
management and maintenance services in an efficient and timely manner
System Rights, Settings, and Services, page 216: Supports security templates specified in the User Rights, Windows Services, and Settings Guide.
Anti-Virus Support, page 217: Supports anti-virus applications that scan for viruses on a periodic,
scheduled basis
Application Security, page 223: Supports Network Address Translation (NAT) for all servers and
desktops, application user authentication methods for Web-based communication between
desktops and servers, supports built-in, secure system authentication processes that occur
automatically for service and server communication, and specific, configurable application security
methods, including defining session timeouts.
User Management Permissions, page 226: Supports a secure user management methodology for all
users, and additional configurable filters for Interactions and Analytics users
Audit Trail, page 227: Provides a record of the actions performed in the Recording Framework
Applications. For Interactions and Analytics applications, the Audit Trail Integrator solution enables
integration of the Audit Trail feature with any database through an Open Database Connectivity
(ODBC) connection.
This certificate file is unique for each server and is protected by an export password. The Common
Name (CN) on each certificate must match exactly the server name used by applications to access
that server.
Customers are responsible for obtaining and providing the TLS certificates. Customers can use their
own Certificate Authority (such as Microsoft Certificate Authority on the Domain Controller), a public
Certificate Authority (such as VeriSign) or a private/virtual Certificate Authority (such as OpenSSL).
Customers are required to provide these certificates for each server during site readiness, as part of the readiness checklist.
Related information
Security Configuration Guide
Microsoft CryptoAPI
DPA data-at-rest uses the Microsoft CryptoAPI encryption method to encrypt the local DPA data
before it is stored in standard MSMQ queues. DPA encryption is not KMS based.
Related information
Security Configuration Guide
Desktop Applications Deployment Reference and Installation Guide
Recorder Configuration and Administration Guide
Related information
Security Configuration Guide
Networking
Refer to the following networking security requirements:
Data Center SSL offload, page 212
Mobile Networking, page 212
Firewalls, page 213
Separating the addresses is optional and the same address may be used for both internal and external
communication. However, since communication to the external address is done over HTTPS, and
communication to the internal address is done over HTTP, it is recommended to use separate
addresses in different network segments, or to carefully restrict the access to the non-HTTPS port to
data center servers only.
Mobile Networking
The Mobile Gateway provides mobile-specific back-end services such as native mobile push
notifications and content optimization for mobile devices.
The Mobile Gateway enables native push notifications by communicating with Google Firebase Cloud
Messaging (FCM) and Apple Push Notification (APN) services.
Deploying two or more Mobile Gateways provides support for high availability.
Segmented Topology
Related information
Firewall Ports Configuration Guide
Firewalls
To enable system communication through firewalls, adhere to the guidelines specified in Firewall
Ports Configuration Guide.
Ports related to specific recorder integrations are found in the relevant Recorder integration guide.
Domain Trust
The system supports Domain and Workgroup Integration. In the domain environment the system
servers are deployed in a Data center. In a non-domain environment, the system servers at each site
are deployed in a Windows workgroup.
Domain Integration
Domain trust is required in the Data Center to allow a single MSA and DMSA account. It is also used
to allow system services to authenticate against the SQL Server using a Windows integrated
authentication.
Domain trust is not required between the Data Center and the sites or between workstations and
system servers.
Workgroup Integration
In a workgroup environment, the service user accounts, policies and settings are maintained
individually. There is no formal membership, policy enforcement or authentication process formed
by the workgroup.
Related information
Technologies, Security, & Network Deployment Reference Guide
Remote Access
Effective support in accordance with the company’s maintenance contracts requires that company
personnel have access to the system for remote support functions.
Remote Administration mode of Terminal Services is used for management and maintenance
purposes.
Remote access to the system is essential for providing service to the system. Using the remote
connection, Customer Services can manage service requests in an efficient and timely manner.
Furthermore, during the deployment phase, remote access can be used to verify site readiness and
to efficiently configure distributed systems that have components distributed between several
geographical locations.
Related information
Technologies, Security, Encryption and Network Integration Deployment Reference Guide
Related information
User Rights, Windows Services, and Settings Guide
Anti-Virus Support
The system supports antivirus applications that scan for viruses on a periodic, scheduled basis.
The system provides a list of file extensions, files, and folders that should not be scanned by antivirus applications. To prevent scanning these files, the customer must set up the specific exclusions in the antivirus application being used. The exclusion of system processes from scans is also supported.
Related information
Anti-virus Exclusion List (Technologies, Security, Encryption and Network Integration Deployment
Reference Guide)
Foundation Token
Keys used for token creation: The server creates the token and passes it back to the client. The token is a random string with a length of 10 characters.
What is encrypted in the token? The token in cache contains the expiration time, user name, and user locale.

Authentication type
Session: No. The token is generated by the client and validated by the server for each request.
Secret phrase key management: Multiple keys are used in the Enterprise, typically one per customer or tenant. The keys are randomly generated by the system and distributed in the Security Settings XML file in an encrypted format. Keys are generated on demand or as needed, so there is no default key for the system. The first key is generated during the first configuration distribution.
Algorithm used for token creation: The HMAC-SHA256 algorithm securely signs a randomly generated salt value plus data from the request. This is a Hash-based Message Authentication Code (HMAC) signature that uses a Secure Hash Algorithm 2 with a 256-bit key (SHA-256) as its hashing function.
What is encrypted in the token? The token contains a signature of data rather than encrypted data. The token is put into the Authorization header and includes:
The name of the algorithm used to create the signature
The salt value used in the signature
The time that the token was issued
The key ID of the secret key used to sign the data
The signature of the data as described below
The following data is signed by the HMAC algorithm to create the signature:
A salt value, which is 32 bytes of randomly generated data
The URI of the request
The HTTP method used in the request
Any Verint-specific request headers (names starting with 'Verint-')

Authentication type
Session: No. The token is validated in each request.
Algorithm used for token creation: The passphrase is hashed with a salt. The token is created with AES256 and a 32-bit IV. If no passphrase is set in EM (the system default), the key is fixed and DES is used.
Additional security: When a passphrase is used, AES is used and a salt is added to GET requests. For Content Server communication, an additional hash of session data is added to GET requests.
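As a concrete illustration, the HMAC-SHA256 signing scheme described above can be sketched in Python. This is a minimal sketch under stated assumptions: the header layout, field names, and header-ordering rule are illustrative, not the product's actual wire format.

```python
import hashlib
import hmac
import os
import time

def build_auth_header(secret_key: bytes, key_id: str, uri: str, method: str,
                      headers: dict) -> str:
    """Sketch of an HMAC-SHA256 request token (layout is illustrative)."""
    salt = os.urandom(32)  # 32 bytes of randomly generated data
    # Sign the salt plus data from the request: the URI, the HTTP method,
    # and any Verint-specific request headers (names starting with 'Verint-').
    payload = salt + uri.encode() + method.encode()
    for name in sorted(headers):
        if name.startswith("Verint-"):
            payload += name.encode() + headers[name].encode()
    signature = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    # The token carries a signature of the data rather than encrypted data:
    # the algorithm name, the salt, the issue time, the key id, and the signature.
    return ("alg=HMAC-SHA256, salt={}, issued={}, keyid={}, sig={}"
            .format(salt.hex(), int(time.time()), key_id, signature))

header = build_auth_header(b"per-tenant-secret", "key-1",
                           "/api/contacts", "GET", {"Verint-Tenant": "acme"})
```

Because the server holds the same per-tenant secret, it can recompute the signature from the salt and the request data and compare the result, so no secret material travels inside the token itself.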
Application Security
The system supports several methods to secure applications and grant visibility to authenticated
users.
Session Timeout
Administrators can configure a timeout period for sessions. If a user's browser is open but no
user activity is detected for the duration of the timeout period, the session ends.
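The timeout behavior can be sketched as a simple idle check; the 30-minute value below is illustrative, not the product default.

```python
import time

# Idle-session timeout sketch: the session ends once no user activity has
# been detected for the configured period (30 minutes here is illustrative).
TIMEOUT_SECONDS = 30 * 60

def session_expired(last_activity: float, now: float = None) -> bool:
    """True if the idle period since the last detected activity has elapsed."""
    if now is None:
        now = time.time()
    return now - last_activity >= TIMEOUT_SECONDS

# A session idle for exactly the timeout period has expired.
assert session_expired(0.0, now=float(TIMEOUT_SECONDS))
```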
Related information
Security Configuration Guide
The use of EMM tools enhances overall security and facilitates administration tasks. EMM tools do not
require any specific configuration in the Mobile Gateway, and can coexist with and complement the
Mobile Gateway.
Examples of how EMM tools can be useful:
Route all mobile network access through an EMM gateway that is placed in front of the Load
Balancer. This solution allows for monitoring and fine-tuning access decisions before accessing the
internal network. For example, access can be restricted according to device properties, specific
apps, network location, or specific users.
Multi-Factor Authentication (MFA) is achieved by enforcing strong authentication using MDM, MAM,
or MIM.
Enforce device password protection to encrypt all device content when locked.
If a private Certificate Authority is used, use MDM to distribute root CA certificates to mobile
devices.
Remote wiping of devices using MDM, or of application data using MAM (for data saved on mobile
devices).
The specified definitions and permissions of users and system entities provide a two-pronged
approach to secure user access, which allows administrators to manage users in a unified, secure
way.
For setting up users, administrators assign specific user roles and privileges to each profile, where:
Roles are assigned to users to define their access permissions to applications (Supervisor or Agent
role, for example)
Privileges are associated with roles to define the features of the application a user is able to view,
and the functionality within the application the user can access.
When a user logs in to the Portal, they are authenticated and authorized by the system. The user is
only authorized to view and access the applications and functionality defined within their scope and
visibility (according to their defined role and privileges).
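The role-and-privilege model above can be sketched as follows; the role names and privilege names are hypothetical examples, not values from the product.

```python
# Roles grant access to applications; privileges attached to a role define
# which features the user can view and use. Names here are hypothetical.
ROLE_PRIVILEGES = {
    "Supervisor": {"search_contacts", "play_back", "evaluate_contacts"},
    "Agent": {"view_own_contacts", "play_back"},
}

def authorized(user_roles, privilege):
    """A user is authorized if any of their assigned roles carries the privilege."""
    return any(privilege in ROLE_PRIVILEGES.get(role, set())
               for role in user_roles)

# A Supervisor can evaluate contacts; an Agent cannot.
assert authorized(["Supervisor"], "evaluate_contacts")
assert not authorized(["Agent"], "evaluate_contacts")
```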
In addition, administrators can set up different hierarchies that allow them to manage users:
Organizational hierarchies are structured according to the managerial and employee hierarchy in
the company.
Group hierarchies are structured according to a specific logical structure defined by the
administrator.
These hierarchies allow administrators to set rules for users, based on their position in the
organization or their association with a specific defined group.
Related information
Roles and privileges and organization and group hierarchies (User Management Guide)
Audit Trail
Audit Trail provides a record of the actions performed in the applications. It allows contact centers to
track who logged in to the system, performed a search, played back contacts, evaluated or flagged
contacts, assigned and completed training materials, and deleted items from the application.
For the Framework Application, audit actions are logged in the BPMAIN database. The audit trail
viewer allows an administrator to view these audits.
For Interactions and Analytics applications, the Audit Trail Integrator solution enables integration of
the Audit Trail feature with any database (Microsoft SQL Server, Sybase, Oracle and more) through
an Open Database Connectivity (ODBC) connection. The Audit Trail Integrator provides long-term
storage of Audit Trail history, support for data collection across sites, and the ability to generate
reports based on this data. Interaction and Analytics actions are logged in the log files in the
Application Server.
The Audit Trail Integrator solution enables each Audit Trail customer to use a predefined
destination for the Audit Trail database, which can be hosted on the server where the Reporting
Services server role is hosted, instead of on a dedicated server.
Related information
Audit Trail Integration solution (Workforce Optimization SDK Programmer’s Guide)
Configuring the audit viewer (Workforce Optimization System Administration Guide)
Time Management
The system supports the configuration and management of multiple time zone settings. This
allows viewing a specific time setting on reports and charts, and allows users to generate
queries according to a specific time.
Topics
Time Settings—Storage and Display

Time Settings—Storage and Display
The way the system stores the time setting and how it is viewed by users depends on the type of
system operation or activity that occurred.
[Table: time setting per type of system operation — User Interface and Storage columns.]
You can also view Search & Playback operations by Local Time in the system.
Time Configurations
System Time is only configured in the EM upon system installation.
Local Time is configured to one of the following:
Organization: Time zone is based on the organization of the user or agent who performed the
operation. This is useful in scenarios where agents are working in different regions, allowing you to
unify time zone tagging across multiple time zones.
DataSource: Time zone is based on the time zone specified for the phone data source.
Related information
Configuring the Local Time (Recorder Configuration and Administration Guide)
Example time zones: Los Angeles (UTC-8), New York (UTC-5), London (UTC), Paris (UTC+1)
Agent Lenny Lester starts his shift at 9:00 AM and finishes at 4:00 PM. Lenny’s first call of the day
begins at 9:05 AM and is recorded for 10 minutes.
The system stores the information in his contact details according to the system time, which is set
according to UTC: 09:05+8=17:05 (5:05 PM):
The next day, Nora Nelson in the New York headquarters and Pierre Praff in the Paris office both
perform a search for a random sample of calls conducted by the Los Angeles billing group the
previous day. One of the calls returned in the results is Lenny’s call.
When viewing the contact details, they will see the following:
When Lenny searches for his own calls, he will see his contact’s details as follows:
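The conversions in this example can be reproduced with Python's zoneinfo module; a January date is assumed so the standard-time offsets quoted above (UTC-8, UTC-5, UTC+1) apply.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Lenny's call starts at 9:05 AM Los Angeles time (UTC-8 in winter).
start = datetime(2024, 1, 15, 9, 5, tzinfo=ZoneInfo("America/Los_Angeles"))

# The system stores the contact in System Time (UTC): 09:05 + 8 = 17:05.
stored = start.astimezone(ZoneInfo("UTC"))

# Viewers in other offices see the same contact in their Local Time.
new_york = stored.astimezone(ZoneInfo("America/New_York"))  # 12:05 PM
paris = stored.astimezone(ZoneInfo("Europe/Paris"))         # 6:05 PM
```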
Related information
Maintenance Guide
Recording
The Recorder can record both voice and screen data in IP, TDM, and mixed telephony
environments. In IP environments, the Recorder can also record video from video-enabled
telephones.
Topics
Overview 237
Recording Types 243
Topologies 252
IP Recorder Filtering 260
Recording Overview
Overview
The Recorder can record both voice and screen data in your call center, in IP, TDM, and mixed
telephony environments. For IP environments, the Recorder can also record video data. The Recorder
also supports dialer integrations and recording in trading environments. The Recorder Integration
Service handles CTI events from third-party switches and other data sources, controls recording,
manages recording rules, and is integral to the real time monitor process in providing information to
the Data Center, as illustrated at a high level in the diagram below.
Recording Functions
The primary functions of the Recorder are to record, archive, and replay voice, video, screen, and
dialer-based interactions. Recorder features include:
Full-time and selective, rules-driven recording
Close integration with third-party CTI devices
Archiving support
Call replay audio delivered to the PC or through a telephone
High-availability (redundancy)
Web-based administration
The recording solution consists of a set of logical servers that can be deployed on a single machine or
on multiple machines in a large enterprise environment. These servers can also be deployed in the
form of clusters in order to scale with the size of the customer’s systems.
The Recorder supports both TDM and IP recording, including trunk-side recording (TDM) or gateway
recording (IP), and station-side recording (TDM) or extension-side recording (IP). You can
configure each of these types of recording by using the Enterprise Manager to set up extension
groups or pools (called member groups), each with a data source that defines where the recorded call
is coming from, and then setting the recording mode.
creating two recording segments which will be combined into one segment. (These are known as
stitched recording segments or recording INUM.)
There are two ways in which segments are captured:
The first creates agent recordings. The system will record an employee or agent when they are active,
and stop when they become inactive. (For trunks, because there can be more than one agent on a
call, the system tracks a "primary agent" per trunk and creates segments based on that.)
The second is back office recording, which creates segments based on CTI calls. If one agent is on two
calls at the same time (for example, a customer call and a consultation call), the system creates two
segments.
Back office recording segments depend on the calls created by the specific switch with which
the Recorder is integrated. This document describes the most common scenarios, but some
switches or call flows may segment differently.
Non-associated call data allows the Integration Service to place records into the database when they
cannot be directly associated with a voice call (either because the call has finished or the inum of the
call is unknown). At this point, a join is performed between some common element within the
associated data, such as a unique ID from the CTI system, to allow this non-associated data to be
added to the call details.
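The join can be sketched as follows; the field names (inum, cti_call_id, wrapup_code) are illustrative stand-ins for whatever common element the CTI system provides.

```python
# Late-arriving CTI data is attached to call details by joining on a common
# element -- here a hypothetical CTI call id. Field names are illustrative.
calls = [
    {"inum": 101, "cti_call_id": "C-9001", "agent": "lenny"},
    {"inum": 102, "cti_call_id": "C-9002", "agent": "nora"},
]

def join_non_associated(calls, record):
    """Merge a non-associated record into the call with the matching CTI id."""
    for call in calls:
        if call["cti_call_id"] == record["cti_call_id"]:
            call.update({k: v for k, v in record.items() if k != "cti_call_id"})
            return call
    return None  # call unknown: hold the record for a later join attempt

joined = join_non_associated(calls, {"cti_call_id": "C-9002",
                                     "wrapup_code": "billing"})
```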
Once you determine which fields you need to use in your system, you can add them as custom
attributes, then map these custom attributes to an adapter. You can then use these attributes for
tagging and to build recording rules, where the attributes become criteria upon which the decision to
record or not is based.
Attributes
Attributes are used to record and retrieve calls based on real criteria associated with employees (such
as an Employee ID), contacts (such as number of holds), devices (including extensions) and CTI events
(such as a call ID). You can use them to establish the conditions that trigger recordings, through
recording rules, and to tag calls, by mapping them to custom data.
There are both standard attributes, which are predefined and have specific values or behaviors, and
custom attributes, which are created to serve specific business needs using data present in a particular
environment.
Values for standard attributes are pulled from different places. For example, Employee attributes are
obtained from the Employee configuration, Contact attributes are collected from information in the
contacts, and CTI attributes are received or derived from CTI.
In certain cases attributes won’t have values. This can be because configuration is incomplete, there
are third-party limitations, or the attributes are simply not applicable to a given environment. If the
standard attributes don’t contain the data you need, you can create new ones.
Custom Data
The Recorder makes use of Custom Data (CD) and Conditional Custom Data (CCD) to tag data and
make it usable for things like reporting and analytics.
Recording Decisions
The Recorder Integration Service uses the following mechanisms to determine whether a session
should be recorded or not:
Extension Recording Mode
Recorder Fallback Type
Recording Rules
AIM and External API Commands (for example from eQuality Connect or Cisco Phone Services) can
also be factors in whether or not a given session is recorded.
Any record or block commands take precedence over all other decisions.
Record—Any extension configured with Record as the Recording mode will always have every
session it is in recorded, for the entirety of the session (assuming that the recording system is
configured and working correctly). The only thing that can change this behavior is a block business
rule, AIM command or external API command.
Do Not Record—Any extensions configured with this will not be recorded regardless of any AIM,
recording rule, or external commands.
Application Controlled—All sessions on extensions with this setting are recorded from beginning
to end (assuming there are enough resources in the recording system). At the end of the session,
the recording is discarded unless a business rule, AIM command, or external command is received
to record or keep the session. Note that this mode requires recording from the beginning of the
session, so the system records every session for this extension even when the recording is not kept.
Start on Trigger—Sessions on extensions with this setting are not recorded by default. Recording
begins only at the time of the first trigger (a business rule, AIM command, or external command)
and continues until the end of the session. No resources are used until the recording decision is
made; however, the beginning of the session is not recorded.
Recording Resource—Used only for soft phones, in conjunction with the Service Observe or Single
Step Conferencing Recorder Control Type.
There is a fifth extension recording mode ‘Recording Resource’ that is used only in Avaya
DMCC for the soft phones configured in the system. Since these are not actually recorded but
used for recording, they are ignored for this discussion.
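The four modes above can be summarized as a capture decision. This is a simplified sketch: record and block commands, which take precedence over everything else, are deliberately omitted.

```python
# Simplified sketch of whether audio is captured for a session, per the
# extension recording modes described above. Record/block commands, which
# override all other decisions, are not modeled here.
def captures_audio(mode: str, triggered: bool = False) -> bool:
    if mode == "Record":
        return True                 # every session recorded in full
    if mode == "Do Not Record":
        return False                # never recorded
    if mode == "Application Controlled":
        return True                 # captured from the start; discarded at
                                    # session end unless a rule/command keeps it
    if mode == "Start on Trigger":
        return triggered            # nothing captured until the first trigger
    raise ValueError("unknown recording mode: " + mode)
```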
Recording Rules
Recording rules are a core piece of the Recorder Integration Service, used for selective recording,
recording screens, and tagging sessions. Recording rules extend the functionality of your recording
system by allowing you to implement recording and tagging on the basis of a business logic that
reflects the goals of your enterprise. Each rule consists of a set of conditions (such as "extension
starts with") and actions (record, block, and so on). The rules trigger recording when contacts that
take place between customer interaction center employees and customers meet the specified
criteria.
You can also use Tag Only to trigger After Call Work, or add the recording rule name to the
standard attribute Fired Business Rules, without affecting the recording decision of a rule. All
of these actions have a percentage setting that applies to them. That is, the action specified
by the recording rule will only be taken for the specified percentage of calls that meet the
rule’s conditions and occur during the set schedule. This is primarily used for selective
recording.
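A recording rule's shape — conditions, an action, and a percentage sample — can be sketched like this; the condition and attribute names are hypothetical, not product values.

```python
import random

# Sketch of a recording rule: a condition, an action, and a percentage
# setting so the action fires for only that share of matching calls.
# The condition and attribute names are hypothetical.
rule = {
    "condition": lambda call: call["extension"].startswith("30"),
    "action": "record",
    "percentage": 25,
}

def apply_rule(rule, call, rng=random):
    """Return the rule's action if the call matches and wins the sample."""
    if rule["condition"](call) and rng.uniform(0, 100) < rule["percentage"]:
        return rule["action"]
    return None
```

Over many matching calls, roughly the configured percentage receive the action, which is how selective recording samples a subset of traffic.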
Related information
Recorder Configuration and Administration Guide
Recording Types
Recording falls under two broad categories, IP and TDM, each of which has a number of permutations.
The following sections describe some of the types of recording available for IP and TDM Recorders:
IP Recording, page 243
TDM Recording, page 247
You can use Service Observe (which allows agent extension monitoring) and Single Step Conferencing
(used to connect an in-progress call to a device) in both IP and TDM Recording with certain switches.
Refer to the Integration Guide for your environment for more information.
IP Recording
Support for IP Recording includes VoIP Gateway Recording (including SIPREC Recording, SIP Trunk
Interception, and SIP Session Replication), Extension-side recording, Duplicate Media Stream (DMS),
Real-time Transport Protocol (RTP) Detection, SIP Trunk Recording, and RTP Proxy Recording. The
style of recording dictates which calls are recorded, and which segment of any call is recorded.
In addition, IP Recording supports video recording for video-enabled telephones in Cisco SCCP and
SIP recording environments. IP Recording refers to either voice or video recorded using an
IP Recorder.
Gateway Recording
Gateway Recording is accomplished by mirroring (that is, duplicating the data streams of) the Gateway
and the call processing system server/cluster. This type of recording is also referred to as VoIP
Interception. If there is a requirement to record conference calls, then the conference bridge
resources—that is, all the telephones that will participate in the conference through the conference
bridge—must also be mirrored. Care should be taken to ensure that port mirroring for the
conference bridge resource does not take the IP Recorder (audio or video) over its configured
capacity for maximum packets per second.
The following diagram is an example of a Gateway recording solution, in which the voice Gateway and
the call processing system are mirrored.
[Diagram: Gateway recording — one mirror port on the switch delivers signaling to the IP Recorder, and another on the media gateway/edge device delivers audio; agent IP phones connect through the switch, which links to the PSTN and the Internet.]
The VoIP media gateway converts voice/video to a media streaming protocol, usually Real-time
Transport Protocol (RTP). When a conference is established, the RTP traffic flows between the
Gateway and the conference bridge. This means that the IP Recorder cannot associate it with any
device. Port mirroring the Gateway enables Recorder awareness of the RTP streams between the IP
device and the Gateway, allowing it to record this traffic.
Skinny Call Control Protocol (SCCP) traffic only flows between the IP device and the switch. The
Gateway does not use the SCCP protocol, so mirroring only the Gateway leaves the Recorder unable
to record, because it has no way of initiating the recording. This makes it necessary to also mirror
ports for the switch server/cluster, which enables the Recorder to see all the SCCP packets for the
entire system.
Give careful consideration to the use of Gateway recording solutions because mirroring a large server
cluster means that each IP Recorder is being forced to monitor and track every call in the cluster.
Failover configuration is an important factor since very often, after detecting the failure of a
server/cluster, the IP device will register with another switch in the network. If this other switch is not
mirrored, then recording will not be possible.
Another consideration for Gateway recording is the ability to mirror the Gateway channels. With
Gateway recording, a channel is more likely to be utilized, meaning that its use may push beyond the
number of concurrent recording channels supported on the IP Recorder. So depending on the
amount of traffic the Gateway supports on a single network port, you may not be able to mirror it
directly. (See the Performance and Sizing Guidelines for the latest recommendations.) In these
instances you will require a device to load-balance the traffic to multiple Recorders.
Related topics
ADC, page 252
Extension-side Recording
Extension-side recording is achieved by port mirroring the traffic to and from an IP phone (as in
station-side recording in TDM recording). You may do this using either port or VLAN mirroring. You
may also use a network tap device if no mirror ports are available.
Port mirroring the IP device itself means that all RTP traffic to and from that device and SCCP traffic
between the device and the call processing system server/cluster (such as Cisco UCM), will be
received by the Recorder. In this configuration, there is no need to explicitly port mirror the call
processing system node or any of the conference bridge resources.
The following diagram illustrates an example of extension-side recording in that the access switches
to which the IP phones connect are port-mirrored directly.
[Diagram: extension-side recording — the access switches are port-mirrored into the Recorder's capture interfaces (NIC 1, NIC 2, NIC 3).]
VoIP Delivery
VoIP Delivery recording (also referred to as DMS recording), refers to deployments wherein the
switch/phone duplicates the audio it is sending and receiving, then directs it to the Recorders.
RTP Detection
In IP Recording you can use RTP detection to record calls in Recorder Controlled or CTI Controlled
environments (either all of the time or in fallback mode).
RTP detection is always enabled in Performance mode (which prevents loss of audio due to CTI
disconnection) and Liability modes (in which audio is recorded either by CTI or VOX and as VOX in
between CTI calls).
In load-balanced Recorders, the RTP streams are only visible to one of the Recorders, and,
therefore, only recorded on that Recorder.
SIP Trunk Recording
When a call is placed from an internal phone to an external number, the PBX sends the necessary
information to the SIP trunk provider, who establishes the call to the dialed number and acts as an
intermediary for the call. All signaling and voice/video traffic between the PBX and the provider is
exchanged using SIP and RTP protocol packets over the IP network.
If the called number is a traditional PSTN telephone, the trunk provider routes the IP packets to the
PSTN gateway that is closest to the number being called, to minimize possible long distance charges.
The provider can also terminate PSTN numbers, and route incoming calls for those numbers back to
the IP PBX over the SIP Trunk. This allows businesses to offer local phone numbers in several
geographical areas, but service them all from a single location.
If the called number can be reached over a SIP Trunk, the call does not need to be routed over the
PSTN, but can instead be carried on the IP network end-to-end, creating a very cost-effective solution.
SIP trunking can also serve as the starting point for the entire breadth of real time communications
possible with the protocol, including instant messaging (IM), presence applications, whiteboarding
and application sharing.
The SIP trunk can be provided by a SIP trunking service provider or by an independent ITSP. In fact,
there may be several parties involved, each one providing a different part of the service required to
deliver end-to-end communication.
Because a SIP trunk is not a physical connection, there is no explicit limit on the number of calls that
can be carried over a single trunk. Each call consumes a certain amount of network bandwidth, so the
number of calls is limited by the amount of bandwidth and call processing resources that can flow
between the IP PBX and the provider’s equipment.
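As a rough illustration of the bandwidth limit, the common rule-of-thumb figure of about 87.2 kbps per G.711 call (64 kbps payload plus RTP/UDP/IP/Ethernet overhead at 20 ms packetization) can be used to estimate trunk capacity. That per-call figure is an assumption for illustration, not a provider specification.

```python
# Estimate how many concurrent calls a SIP trunk's bandwidth can carry.
# ~87.2 kbps per G.711 call is a common rule of thumb (payload + overhead),
# used here purely for illustration.
G711_KBPS = 87.2

def max_concurrent_calls(link_mbps: float, per_call_kbps: float = G711_KBPS) -> int:
    """Number of whole calls that fit into the available bandwidth."""
    return int(link_mbps * 1000 // per_call_kbps)

# A 10 Mbps link fits roughly 114 such calls.
calls_on_10mbps = max_concurrent_calls(10)
```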
Implementation
The Recorder records traffic at the SIP Trunk. This includes SIPREC environments and environments
in which SIP trunk sessions are replicated by an edge device such as an Acme PacketTM SBC to the
Recorder. The way in which traffic is provided to the Recorder depends on the port
mirroring/replication mode. In SIP Trunk Recording, the edge device provides the Recorder with both
signaling and audio/video; in this case, the signaling does not carry the agent’s extension. SIP Trunk
Recording is therefore established at the member group level (not at the extension level).
TDM Recording
The Recorder supports trunk-side and station-side TDM recording.
[Diagram: trunk-side TDM recording — the T1 line between the PSTN and the PBX is tapped through a junction box and punchdown block to the Recorder server; the Integration Service server, CTI server, and Enterprise Manager server connect with the agent workstations over the LAN.]
Trunk Delivery (line-side recording through E1 trunks, illustrated below) is a type of trunk termination
that can be implemented in Avaya switches and is supported on ISDN trunks (DT6409 and DT3209
cards only). E1 line-side (E1 LS) is a recording method in which the Recorder uses service observe to
control extensions (supported in Avaya switches and IPC Media Recorder environments). The
Recorder maps each of its recording channels to one of the E1 trunk time slots, and to a specific
extension. When the Recorder starts up, it establishes a service observe to each configured
extension, and from that moment on, the trunk delivers the extension's audio to the Recorder.
[Diagram: E1 line-side recording — the PBX delivers Tx/Rx audio over E1 lines through a CSU/DSU or NT/ISDN unit to the Recorder server; the PBX connects to the PSTN, and agent workstations connect over the LAN.]
[Diagram: station-side TDM recording — extension wiring is tapped at a punchdown block and delivered to the Recorder; the CTI server, Integration Service server, and Enterprise Manager server connect with agent workstations over the LAN, and the PBX connects to the PSTN over a T1/E1 line.]
Digits            N   Y   N   N
Caller No         Y   N   N   N
Called No         Y   N   N   N
Direction         Y   N   N   N
DTMF Digits       Y   Y   Y   Y
CLI               Y   Y   Y   N
First Message     N   Y   N   N
Last Message      N   Y   N   N

None/VOX          N   N   N   *   *
ISDN/VOX          N   N   N   *   *
ISDN/D-Channel    #   #   Y   *   *
NFAS/VOX          N   N   N   *   *
NFAS/D-Channel**  #   #   Y   *   *
CAS/VOX           N   N   N   *   *
CAS/CAS           N   N   N   *   *
RBS/VOX           N   N   N   *   *
RBS/CAS           N   N   N   *   *
DASS2/VOX         N   N   N   *   *
DASS2/D-Channel   #   #   Y   *   *
Topologies
This section describes several topologies you may use when deploying IP Recorder systems to record
both audio and video, many of which utilize an Application Delivery Controller (ADC) or other load
balancing device.
ADC, page 252
Single Recorder Cluster, page 255
High Availability, page 256
ADC
You may use an ADC or other load balancing device with multiple IP Recorders (audio or video),
allowing deployment against higher density gateways or mirror ports on core switches. Supported
devices are described in detail in associated versions of the VoIP Interception Deployment Reference
Guide.
The device will be situated between the device used for port mirroring and the Recorders, and will
distribute the RTP to the Recorders. The following diagram illustrates this configuration.
[Diagram: the IDS device distributes the mirrored RTP across the Recorders.]
RTP detection is enabled in Performance and Liability fallback modes to prevent audio/video loss. The
ADC ensures that there is only one recording for a given call, because the RTP is balanced to exactly
one active Recorder and is not delivered to any other Recorder.
Using an ADC allows you to configure IP recording environments that also derive several secondary
features, as described in the following sections.
In order for the IP Recorder server or IP Recorder Video server to successfully record a call, it
must see both sides of the call; that is, the RTP that flows in both directions. In some
topologies, it may become necessary to use the source-destination load-balancing algorithm
available within the ADC.
Resource Scalability
Utilizing an ADC enables the IP Recorder to expand as the utilization of the VoIP system expands.
Once the network traffic has been provided to the ADC, IP Recorders can be added to the
configuration without the need for significant network engineering or additional port mirroring
resources.
If the utilization of the Gateway increases before additional recording resources are configured,
the existing Recorders may overload and fail.
Added IP Recorders will require an IP address and network port for data network
connectivity.
Utilization will not be even if individual Recorders experience down time that results in them
being out of the weighted round robin for substantial periods of time.
Flooding must be avoided on anything other than the required call control protocol; instead, the
call control protocol packets are duplicated to all the IP Recorders requiring the call control protocol.
[Diagram: single Recorder cluster — the IDS device feeds traffic from the data network to the Recorder cluster and the Integration Service server for the Recorder cluster.]
High Availability
Recording provides high availability through redundancy of the Recorders, Integration Service, or
both.
The following sections describe the types of redundancy available, and subsequent sections provide
configuration instructions for the basic scenarios for each. You will find additional information in the
Recorder Configuration Guide, and direction for specific integrations in your Integration Guide (where
applicable).
Recorder Redundancy
There are three types of Recorder Redundancy:
N+N, in which all calls are recorded by pairs of Recorders. (N+N requires Integration Service
Redundancy as well.)
N-Dedicated M-Shared, in which calls are recorded by a main N Recorder, with a backup M Recorder
[Diagram: redundant Recorder cluster — redundant link protectors and load balancers/IDS devices feed traffic from the data network to the Recorder cluster and the Integration Service server for the Recorder cluster.]
The cluster is designed to be fault tolerant of key elements being offline for periods of time, and
includes the following components:
Data Center Zone application components: The servers for these components have no effect on
the ability of the system to record. If the database is unavailable, the Recorders queue up the
recorded calls. Once the database comes back online, the Recorders will upload the calls.
Archive: This component is designed to run behind real-time archiving of the calls. The system
would only be detrimentally affected if it was offline for a sustained period, such that when it came
back online, calls to be archived were no longer on the Recorders. The hard disks on the Recorders
should be sized such that they can be tolerant of the Archive system running behind.
Cluster Integration Service: In this configuration, the Integration Service is utilized for CTI
Integrations and tagging. If the Integration Service server fails, then this tagging will be lost. If an
extension must be recorded even during an Integration Service failure, it should not be configured
in this mode.
IP Recorder Nodes: As described above, the configuration in the diagram contains five Recorders,
but is specified as providing 4000 channels of concurrent recording, as the fifth Recorder represents
the spare capacity required for redundancy.
ADC or Load Balancing Device: If either load balancer fails, the links are presented to the other,
passive device through the Link Protectors. A network port failure results in that individual link
being activated to the redundant load balancer.
Link Protectors: If the Link Protector fails, the network connection is maintained to the primary
load balancing device through the protector's fail-through capability. The system is likely not
fault tolerant of a simultaneous Link Protector and load balancer failure.
Duplicate packets are supported, provided that the maximum packet count per second does
not exceed the maximum capacity of the Recorder. However, it is strongly recommended that
duplicate packets are removed, as they are a known contributor to network issues. (Duplicate
packets will also reduce the total number of "noise" calls detectable by the Recorder.)
See the Performance and Sizing Guidelines for more information.
1 + 1 Network Feeds
Both the IP Recorder and Analyzer support the use of redundant network feeds. In this configuration,
the IP Recorder receives duplicate packets for calls that are taking place. If either feed fails, the call is
still recorded since the duplicate feed will still provide the packets required. Using duplicate feeds on
the IP Recorder does, however, double the amount of traffic the server is required to handle.
Therefore, when using duplicate feeds, the overall recording capability of the IP Recorder is reduced
by 50%.
In these configurations you should disable the duplicate packet alarm in Enterprise Manager.
The IP Recorder supports a maximum of five network interfaces for recording when 2 GB of physical
memory is used in the server. If less than 2 GB of memory is available, then only four network
interface ports are supported.
NIC Failover
If the cable connected to a Delivery NIC is unplugged, or if the Delivery NIC is disabled through the
Windows Network Management system, the error priority of all extensions configured for Delivery
recording is raised, so that the Integration Service can fall back to other Recorders to record
calls. This applies only when the Integration Service is on a different server from IP Capture, and a
NIC other than the one used for Delivery recording serves as the management NIC (for the Recorder
Integration Service connection to the IP Capture Engine). If a separate NIC is not used for the
Integration Service, the Integration Service treats the link failure itself as a condition that triggers
redundancy.
The Recorder only supports one NIC (or a team/bonded NIC pair) for delivery of audio.
This feature is enabled by default, but you can disable it using the Delivery NIC Status Check
setting in IPCaptureConfig.xml (in the %IMPACT360SOFTWAREDIR%\ContactStore folder).
IP Recorder Filtering
IP recording supports two levels of filtering. This filtering takes place in the WinPcap network driver,
which is very efficient. However, wherever possible you should seek to reduce the number of packets
arriving at the NICs of an IP Recorder server or IP Recorder Video server by ensuring that only
required packets are forwarded from the network.
IP recording allows the WinPcap filter to be specified at two levels: the system level, where the same
filter is applied to all enabled NICs, and the NIC level. When NIC-level filters are used, they are
appended to the system-level filter, if one is configured.
An example system-level filter might be “tcp port 2000 or udp” for a Cisco-based solution, where the
SCCP is transmitted on the default port number of 2000.
An example of a NIC-level filter might be “tcp port 2000” for a Cisco-based solution where the UCM
cluster has been port mirrored into a specific NIC, and the SCCP is the only information required from
port mirroring.
When configuring system- and interface-level filters, ensure that they do not conflict with
each other.
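A minimal sketch of how a NIC-level filter might be appended to the system-level filter, using the Cisco examples above. Joining the two expressions with "and" is an assumption for illustration; the text states only that NIC-level filters are appended to the system-level filter:

```python
from typing import Optional

def combine_filters(system_filter: str, nic_filter: Optional[str]) -> str:
    """Append the NIC-level filter to the system-level filter, if configured.

    Assumption: filters are combined with "and"; the recorder's actual
    combination rule may differ.
    """
    if not nic_filter:
        return system_filter
    return f"({system_filter}) and ({nic_filter})"

# System-level filter from the Cisco example: SCCP signaling plus RTP media.
system_level = "tcp port 2000 or udp"
print(combine_filters(system_level, None))
print(combine_filters(system_level, "tcp port 2000"))
```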
The filters are configured using Recorder Manager and do not require a restart of the IPCapture
service. Packet loss may occur while a reconfigured filter is being applied. Using IP Recorder
filtering allows you to decrease network traffic.
Related topics
Less Network Traffic, page 254
Related information
Configuring IP recording filters (Recorder Configuration and Administration Guide)
Part of the Customer Engagement Optimization platform, Text Analytics adopts a tiered
approach to unstructured text data processing, analysis, and trending.
Topics
Related topics
Interaction Capture, page 263
Text Analytics Service (TAS), page 264
Text Application, page 264
Interaction Capture
(ACRA only) The Interaction Capture Service integrates with customer environments to receive the
source data from different data sources, and in different formats. It transforms this data into a
uniform format for ingestion by the Text Analytics Service (TAS).
Related topics
Text Analytics architecture overview, page 262
Text Application
The Text Application is a web-based application that displays analytics data based on user requests.
The application provides dedicated workspaces for trend discovery, content analysis, and interaction
analysis, with faceted and free-text search capabilities.
Related topics
Text Analytics architecture overview, page 262
TAS servers
All the TAS servers are deployed in the Data Center. In a consolidated deployment, the
TAS Application, Datastore, and Management servers are installed on the same physical server. In
distributed environments, each server role is installed on one or more physical servers.
TAS services
Each TAS server includes several services to enable analytics. Some services are present on more than
one server, while others are unique to the server type.
Coordinator Service
The Coordinator Service interfaces between the recorder and the TAS. The Coordinator Service
receives the normalized raw data from the recorder, and parses it into a format ready for ingestion by
the TAS. While parsing the data, the Coordinator Service also extracts metadata and calculated
metrics.
Tagger Service
The Tagger Service analyzes the unstructured data using NLP (Natural Language Processing)
algorithms. The service runs a pipeline of annotators that annotate the source documents into
themes, relations, topics, and key terms.
Search Service
The Search Service is the data access layer to the interactions data store (Text Indexing Service),
providing the functionality to manage interactions, and the business logic for the application. The
Search Service provides aggregated analytic insights on the set of unstructured interactions from
different perspectives, including trend, root-cause, and faceted search.
Model Management Service
The Model Management Service provides the user interface to view and manage the text language
models. The Tagger Service uses the text language model to extract and then annotate the text in the
interactions.
Configuration Service
The Configuration Service stores user-defined configuration settings per customer, such as text
models to support the Tagger Service, and user-defined categories. It also stores the retention period
for each project.
Purge Service
The Purge Service is installed on the Management Server. It provides a mechanism for permanently
deleting interactions in projects according to the retention period (in days) defined for each project.
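The purge decision reduces to a date comparison against the project's retention period. A minimal sketch, with illustrative names and dates (not the service's actual implementation):

```python
from datetime import date, timedelta

def eligible_for_purge(captured_on: date, retention_days: int, today: date) -> bool:
    """True when an interaction is older than the project's retention period."""
    return today - captured_on > timedelta(days=retention_days)

# With a 90-day retention period:
print(eligible_for_purge(date(2024, 1, 1), 90, date(2024, 6, 1)))  # True  (152 days old)
print(eligible_for_purge(date(2024, 5, 1), 90, date(2024, 6, 1)))  # False (31 days old)
```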
GlusterFS
GlusterFS is an open source scalable network file system used in high-availability environments. It
provides a shared folder across Docker containers and servers, and holds project-level data such as
the language model and category definitions.
Apache ZooKeeper
ZooKeeper is another third-party service from Apache. ZooKeeper is a centralized service
that maintains configuration and naming information. It also provides distributed
synchronization and group services. Within the TAS deployment, ZooKeeper's main responsibility
is to support high availability of the Text Indexing Service, serving as a repository for the cluster
configuration and coordination.
Apache Kafka
Apache Kafka is an open-source stream-processing platform that provides highly scalable
message-queuing functionality for real-time data feeds. Within the TAS deployment, Kafka queues the
export requests for interactions for retrieval by the Data Export Service.
Logger Services
There are two logger services:
Central Logger Service: provides log indexing and visualization services for all the TAS services
through ELK (Elasticsearch), an open-source third-party tool.
Logger Service: collects logs from all the TAS services and aggregates them into a file in the file
system, through FluentD, another open-source third-party tool.
Secure Gateway
The Secure Gateway service is installed on every TAS server. The Secure Gateway supports SSL
offload for intra-server communication by terminating the encrypted communication and forwarding
web service requests unencrypted to the back-end server components.
In addition, the Secure Gateway verifies the Service Web Token (SWT) requests.
Related topics
Text Analytics architecture overview, page 262
Text Analytics model management data flow, page 135: describes how users can manage the text
language model used by the Text Analytics Service (TAS).
Text Analytics alarms and monitoring flow, page 136: describes how alarms are generated for TAS
services and displayed in the System Monitor's Alarm Dashboard.
Single Box
The smallest deployment is a Single Box solution where almost all the TAS services reside on the
same physical server.
Multiple Box
In a Multiple Box solution, the deployment is distributed over multiple servers. A Single Box
solution becomes a Multiple Box solution when:
Deployment specifications exceed those of a Single Box deployment
The customer requires high availability of databases or the application
Related topics
Text Analytics Architecture, page 261
Mobile solution
The mobile solution includes the Verint Mobile Work View and Verint Mobile Team View
mobile applications and the Mobile Gateway. The mobile applications allow employees to
perform tasks directly from their mobile device, for example, access schedule information
and perform schedule changes. The Mobile Gateway provides a single external interface
between the system and the mobile application for mobile-specific back-end services.
Topics
Mobile applications
Work View and Team View mobile apps allow employees, supervisors, and managers to quickly and
easily access their information from an iOS or Android device.
Verint Mobile Work View (for employees)
Work View allows employees to view and manage their schedule, view their performance
scorecards, and stay up-to-date with notifications and updates.
Verint Mobile Team View (for supervisors and managers)
Team View allows supervisors and managers to view their employees' schedules, manage their
employees' requests, and stay up-to-date with notifications.
Work View and Team View apps require installing the Mobile Gateway server-side component
alongside the system. The Mobile Gateway enhances security features for the sign in process, and
enables employees, supervisors, and managers to receive push notifications to their mobile devices.
For example, supervisors receive push notifications when their employees' schedule changes or their
shift bidding status changes.
Mobile Gateway
The Mobile Gateway provides a single external interface between the system and the mobile
applications (iOS or Android based) for mobile-specific back-end services.
The main functions of the Mobile Gateway include:
Secure sign-in to the mobile applications
Mobile push notifications by communicating with Google Firebase Cloud Messaging (FCM) and
Apple Push Notification (APN) services
Enhancements to the system's native APIs so that content is displayed properly on mobile devices
(for example, format changes and pagination)
Typically, between the mobile device and the internal data center servers there will be a load balancer
or application gateway device deployed on the DMZ. This device can terminate HTTPS communication
for inspection, and then continue the communication to the DC servers using HTTPS or HTTP.
If the communication method to the backend servers is over HTTPS, this is referred to as SSL
bridging.
If the communication method is over HTTP (not HTTPS), this is referred to as SSL offload.
It is also possible to configure the device so that it does not terminate HTTPS traffic at all. This
configuration is referred to as SSL pass-through.
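The three modes above can be summarized as a simple classification over two properties of the DMZ device: whether it terminates TLS, and which scheme it uses toward the back-end servers. A minimal sketch, purely illustrative:

```python
def termination_mode(gateway_terminates_tls: bool, backend_scheme: str) -> str:
    """Classify the DMZ device configuration per the definitions above."""
    if not gateway_terminates_tls:
        return "SSL pass-through"   # HTTPS flows through untouched
    if backend_scheme == "https":
        return "SSL bridging"       # terminated, then re-encrypted to back end
    return "SSL offload"            # terminated, plain HTTP to back end

print(termination_mode(True, "https"))   # SSL bridging
print(termination_mode(True, "http"))    # SSL offload
print(termination_mode(False, "https"))  # SSL pass-through
```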
VPN tunneling
If mobile services should not be exposed to the public network at all, a VPN tunnel is required
between the mobile devices and the corporate network, allowing devices to connect as if they were
located on the internal LAN.
User authentication
When users access the mobile application, they are authenticated according to a pre-configured
authentication method defined on the server.
The following authentication methods are supported for the mobile applications:
OpenID Connect (OIDC): Federated authentication method that authenticates users against an
Identity Provider that supports the OpenID Connect (OIDC) protocol and is certified as supported by
Verint. OIDC is an authentication method in which the user's credentials are held by a third-party
identity provider (IdP) and not within the system. The system verifies the user's identity based on a
simple JSON-based identity token. When using an OIDC provider with multi-factor authentication
support, this capability can also be used when authenticating in the mobile applications. The
user name to which the solution role is granted must be included in the identity token.
LDAP: Authentication method that uses a simple bind authentication process. The user is identified
by the Active Directory and the proof of identity comes in the form of a password.
Database: Authentication method that authenticates the user with a user name and password that
is maintained solely and securely within the system’s own database.
The above authentication methods are the most common native mobile application authentication
methods. The SAML authentication method is not supported in the mobile solution.
Authorization
Once a user is authenticated, the application authorizes the user according to their specific rights and
permissions, as defined in the User Management application, and saved in the system database.
Within the system, each user is assigned one or more roles, where each role contains a set of
privileges. A role and its privileges allow the user to view certain pages and to perform certain
functions within the system.
Note that a role is granted to a user who is identified by their user name. When authenticating using
OIDC, ensure that the relevant user name is included in the identity token provided by the IdP, because
the IdP user name may differ from the user name used in the solution: for example,
‘john.doe@acme.com’ (UPN) vs. ‘jdoe’ (sAMAccountName).
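One way to check which user name an IdP actually sends is to decode the identity token's payload. An OIDC identity token is a JWT: three base64url segments (header, payload, signature). The sketch below builds a fabricated, unsigned sample token purely for illustration; the claim names are assumptions, and real deployments must verify the token signature before trusting any claim:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT. Illustration only:
    no signature verification is performed here."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated sample claims showing a UPN vs. sAMAccountName mismatch.
claims = {"upn": "john.doe@acme.com", "sAMAccountName": "jdoe"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{payload}.signature"

print(decode_jwt_payload(token)["upn"])  # john.doe@acme.com
```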
For additional security and granularity, user permissions for mobile access can be configured to differ
from the user’s permissions for desktop access. That way, for example, the user can create schedule
requests only when accessing WFO from a workstation, but not when accessing it from the mobile
device.
Authentication flows
The following diagrams illustrate user authentication and authorization using several authentication
methods.
For simplicity, network components deployed in the DMZ, such as the load balancer or application
gateway, are not included in the diagram, but it is assumed that every connection to the internal
network servers is done through these devices.
User termination
Users can be deleted or terminated in the User Management application by system administrators.
When a user is terminated, their credentials are immediately blocked on the WFO side. A new session
token cannot be generated and no new application data can be retrieved. Because application data
is not saved on the mobile device and is wiped once the app is closed, the user no longer has access
to application data.
Neither user credentials nor any WFO data are cached on the devices.
The tokens are saved encrypted in the application’s sandbox (Android) or in the key chain (iOS) and
can only be decrypted by the server that issued them. For enhanced protection, it is also
recommended to enforce device password protection using the customer’s MDM infrastructure.
The following device permissions are needed for the mobile app:
iOS: Push notifications
Android: No special permissions are needed
High availability
High availability of the Mobile Gateway is achieved by deploying multiple servers. Each server is
independent and stateless. A load balancer (provided by the customer) is responsible for distributing
incoming requests between the servers. The load balancer pings each one of the servers for a
heartbeat, so a request is always sent to a "live" Mobile Gateway.
Data is synchronized between the Mobile Gateway servers by continuously synching the cache layer
(Redis). The Redis Sentinel components (deployed on each Mobile Gateway server) keep track of the
availability of each one of the Redis services, forming a quorum that continuously decides which
Redis is considered to be the master.
The number of servers participating in a high availability scenario is 2 + M, where M is the number of
concurrent major failures to tolerate. For example, to support one major failure, three servers are
required.
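The 2 + M rule works out as follows (a trivial worked example of the sizing formula above):

```python
def mobile_gateway_servers(concurrent_failures_to_tolerate: int) -> int:
    """Apply the 2 + M sizing rule for Mobile Gateway high availability."""
    return 2 + concurrent_failures_to_tolerate

for m in (0, 1, 2):
    print(f"tolerate {m} failure(s) -> {mobile_gateway_servers(m)} servers")
```

With M = 1, the result is the three-server deployment described above.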
Disaster recovery
The system also supports Disaster Recovery architecture, where the Data Center can switch over to a
standby DC located in a different availability zone within an hour. In such a deployment, the customer
can deploy two load balancers configured as redundant.