Monday 14 April 2014

Administrative Workstation subprocesses in UCCE

The administrative workstation (AW) provides the systems administrator with an array of
configuration tools to manage and maintain the UCCE platform.
The most popular AW deployment is the AW-HDS-DDS.

The logger holds the database used by the central controller, but rather than this database
being modified directly by the client applications, the configuration tools actually modify a
copy of the configuration data stored in the local database on the AW. When changes are
made in this database, the AW processes communicate with the loggers to inform them
of the change.

As well as containing a copy of the configuration data, the AW database also contains
data used for real-time reporting. Although this data is described as real-time, in practice
it should actually be termed “near real-time” because the data is updated approximately every
10 to 12 seconds.
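
To picture that near real-time behaviour, here is a minimal sketch in Python. This is purely illustrative and not Cisco code; the helper names and the 11-second interval are assumptions standing in for the 10 to 12 second refresh described above.

import time

REFRESH_INTERVAL_SECONDS = 11  # stand-in for the 10-12 second update cycle

def fetch_from_rtdist():
    # Hypothetical stand-in for the real-time feed the AW receives
    return {"calls_in_queue": 4, "agents_ready": 12}

def update_local_awdb(snapshot):
    # Hypothetical stand-in for refreshing the real-time tables in the local AW database
    print("refreshed real-time tables with", snapshot)

for _ in range(3):  # the real distributor loops indefinitely
    update_local_awdb(fetch_from_rtdist())
    time.sleep(REFRESH_INTERVAL_SECONDS)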


Below are the databases present in this deployment:

<instance name>_awdb: Used for storing UCCE configuration and real-time data
<instance name>_hds: Used by the HDS processes for long-term historical data storage
<instance name>_wv: Used by WebView for storing WebView-specific configuration

where <instance name> is the customer instance name.
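
As a worked example, assuming a hypothetical customer instance named acme, the three databases would be named as shown below (the instance name is an assumption for illustration only):

instance_name = "acme"  # hypothetical customer instance name

databases = {
    "configuration and real-time data": f"{instance_name}_awdb",
    "long-term historical data (HDS)":  f"{instance_name}_hds",
    "WebView configuration":            f"{instance_name}_wv",
}

for role, db in databases.items():
    print(f"{db:<10} -> {role}")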

The AW consists of many software processes:

 configlogger: The Configuration Logger process stores configuration data in the AW
database.

 replication: The Replication process receives historical data from the logger and
inserts this data into the HDS database on the AW.

 rtdist: The Real-Time Distributor receives real-time data from the router and distributes
this data to all the real-time clients that are connected to it. These clients can be
other AWs, typically client AWs.

 rtclient: The Real-Time Client on the AW is responsible for updating the local AW
database. The rtclient gets its data from the rtdist process.

 updateaw: The UpdateAW process ensures that the local AW configuration database
remains current with configuration data from the central controller.
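
To make the updateaw idea concrete, below is a minimal sketch assuming a simple version-number comparison; the helper functions and version values are hypothetical and not the actual UCCE implementation.

def central_config_version():
    return 1042   # hypothetical configuration version held by the central controller

def local_config_version():
    return 1039   # hypothetical configuration version held in the local AW database

def pull_changes(from_version, to_version):
    print(f"applying configuration changes {from_version + 1}..{to_version} to the local AW database")

def sync_once():
    central, local = central_config_version(), local_config_version()
    if central > local:
        pull_changes(local, central)
    else:
        print("local AW configuration is already current")

sync_once()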

Friday 11 April 2014

Different subprocesses in the Peripheral Gateway

The PG provides an abstraction layer between the peripheral (usually an ACD or IVR)
and the UCCE central controller. With UICM and UCCH, the supported peripherals can
be from many vendors, but with UCCE, the peripheral that provides call control for the
agent devices is the Unified CM, with Cisco Unified IP IVR or Cisco Customer Voice
Portal (CVP) providing IVR and call queuing.
Similar to the router and logger nodes, the PGs are deployed in a duplex manner, but
their physical location varies depending on the deployment architecture used. Typically,
the duplex PG pair will be deployed at the same location as the ACD. For example, in a
distributed call processing model using two or more independent Unified CM clusters, a
duplex PG pair would be deployed at each of the sites coresident with the Unified CM
cluster. This rule is true for all deployment models with the exception of the Clustering
Over the WAN model. With this deployment model, the Unified CM cluster is typically
split over two sites. Each site contains one side of the UCCE central controller, including
a PG.

A single PG pair can service more than one peripheral. In typical UCCE deployments, the
PGs usually service both the Unified CM cluster and the IVRs, but for larger deployments,
dedicated PGs can be used. ACD PGs, including Unified CM PGs, also have the CTI Server
process and, if required, the CTI OS Server.
The type of ACD being deployed determines which PIM process is installed.
UCCE uses the Enterprise Agent PIM (eagtpim) process, which is named after a remote-agent
feature used in early versions of UCCE that allowed agents to work using analog
cards installed in their PCs rather than being connected to an ACD. This feature was
withdrawn from UCCE a long time ago.
The PG node processes are as follows:

mdsproc: The Message Delivery Service (MDS) process manages message delivery between the processes running on the PG.

opc-cce: The Open Peripheral Controller is the heart of the PG. The OPC is responsible
for synchronization with the other PG as part of the PG pair and prepares the call records for the UCCE database.

pgagent: The Peripheral Gateway Agent (PGAgent) manages the session layer communication between the PG and the ccagent process running on the router. When
deployed as a duplex pair, the pgagent process window displays which side of
the central controller router it maintains an active connection with. If the process window displays InSvc A:Active B:Idle, you can determine that pgagent has an active connection with ccagent on Router A; therefore, only heartbeat traffic is being sent to Router B.

Router-side preference is configured during PG setup. The PG can be configured to prefer Side A, Side B, or no preference. This preference is typically used to engineer traffic routing when the PGs are deployed remotely from the central controller, but it is also used during failure scenarios. Should the preferred side go offline, the nonpreferred side will take over. When the preferred side comes back online, the active side will switch back again. This switchback does not occur if the PGs are configured with no preferred side.

testsync: The Testsync process provides an application interface for the various test and debugging tools to connect to.

jtapigw: Many third-party applications communicate with a Unified CM cluster using
Cisco's proprietary JTAPI implementation. For the jtapigw process to function, the Cisco JTAPI
driver needs to be installed on the PG. Cisco JTAPI is available from the Plugins page
on the Unified CM servers. You should ensure that the version of JTAPI installed on
the PG is the same version of JTAPI available from the Unified CM server.

eagtpim: This is the Enterprise Agent PIM process that connects to the jtapigw
process, which is required for connection to a Unified CM cluster.

ctisvr: The Computer Telephony Integration (CTI) Server process is installed on PGs
where the peripheral communicates real-time agent data to the PG and the agents use
a CTI-enabled desktop application to inform the PG of agent state changes and information, including CTI data (wrap-up codes, reason codes, and call data updates). The ctisvr process communicates with the OPC process running on the PG.

In early versions of UICM, the CTI Server provided the native connection for all agent
desktop applications and for third-party applications that required real-time data, such as wallboards and call recording where data tagging is used. To support developers and make the solution more scalable, Cisco developed the CTI Object Server (CTI OS) and requested that all new CTI applications be developed against CTI OS rather than the CTI Server.

ctios server: The CTI OS Server process establishes a connection to the CTI Server
process and provides an interface for desktop and third-party applications to develop against using the CTI OS Toolkit. The ctios server also establishes a connection to its duplex pair to provide resiliency. The title bar of the CTI OS process window displays the IP address and port of the active CTI Server it is connected to, as well as the IP port that the process itself is listening on. An example of this display is [ACTIVE, CG 192.168.15.30, CGPort 42027, Listen Port: 42028].
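
Purely as an illustration of reading that title bar, the snippet below parses the example string quoted above; the regular expression is an assumption based only on that single example.

import re

title = "[ACTIVE, CG 192.168.15.30, CGPort 42027, Listen Port: 42028]"

match = re.search(
    r"\[(?P<state>\w+), CG (?P<cg>[\d.]+), CGPort (?P<cg_port>\d+), Listen Port: (?P<listen>\d+)\]",
    title,
)
if match:
    print(match.group("state"), match.group("cg"))          # ACTIVE 192.168.15.30
    print(match.group("cg_port"), match.group("listen"))    # 42027 42028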

vrupim: The vrupim process is the PIM for IVRs connected through the GED-125
specification. It is common for deployments with multiple IP IVRs to have several
vrupim processes running on the same PG.

Friday 4 April 2014

Different subprocesses in the Cisco ICM Router

The router is the brain of the UCCE system. The configuration data is stored in router memory, and the router also maintains a view of the real-time data it receives from the PGs. Because it is the brain of the UCCE system, the router makes routing decisions based on the data it receives from the PGs, its stored configuration, and its routing scripts.
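
A very rough sketch of that decision flow follows, assuming a toy selection rule (route to the skill group with the most ready agents); the data, names, and rule are assumptions for illustration, not the actual UCCE routing engine.

real_time_from_pg = {
    # hypothetical skill-group state reported by the PGs
    "sales":   {"agents_ready": 0, "calls_in_queue": 5},
    "support": {"agents_ready": 3, "calls_in_queue": 1},
}

configured_targets = {
    # hypothetical configuration data mapping skill groups to targets
    "sales":   "route_point_1001",
    "support": "route_point_1002",
}

def routing_script(real_time, config):
    # toy routing script: pick the skill group with the most ready agents
    best = max(real_time, key=lambda sg: real_time[sg]["agents_ready"])
    return config[best]

print(routing_script(real_time_from_pg, configured_targets))   # route_point_1002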

The router has the following processes:

router: This process handles route requests and provides route responses. It also collects all the real-time data and maintains a holistic view of the contact center.

ccagent: This process communicates with the PGs; the status bar of the service shows how many PGs it is connected to.

dbagent: This process checks communication between the router and the logger by validating access to the central controller database.

mdsproc: This process manages reliable message delivery between the different UCCE processes.

testsync: The testsync process provides an application interface for the various test and debugging tools to connect to.

Thursday 3 April 2014

Logger Processes in ICM

The logger component of ICM stores the configuration of the UCCE in its database.
Along with the configuration, it stores the historical data for 30 days. It also stores the real-time data, call variables, routing scripts, and so on.

The logger can work in a duplex environment.

There are various subprocesses under the logger. Let's discuss them one by one:

configlogger: The configlogger process stores the configuration data in the central controller database.

csfs: The Customer Support Forwarding Service (CSFS) process is used to monitor the connection between the logger and the router using regular heartbeats.

histlogger: This process stores historical data in the central controller database.

recovery: This process synchronizes the historical data from the most recent logger database when a fallback scenario occurs. The recovery happens through a mechanism called state transfer.

replication: This process synchronizes the historical data from the central controller database to the historical database on the HDS.
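
To round off, here is a minimal sketch of the state-transfer idea behind the recovery process: after an outage, the recovering logger copies across any historical rows it is missing. The recovery-key style numbering and the in-memory tables are assumptions for illustration only.

surviving_logger  = [(1, "call detail"), (2, "call detail"), (3, "call detail")]
recovering_logger = [(1, "call detail")]      # this side was down for a while

highest_local_key = max(key for key, _ in recovering_logger)

for key, row in surviving_logger:
    if key > highest_local_key:               # rows written while this side was offline
        recovering_logger.append((key, row))

print(recovering_logger)
# [(1, 'call detail'), (2, 'call detail'), (3, 'call detail')]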