i2 Analyze with high availability
i2 Analyze supports an active/active high availability and disaster recovery deployment pattern. The only component that is not deployed in an active/active state is the database management system, which uses an active/passive pattern where one instance of the Information Store database is the primary and the others are standbys.
The following information explains how each component is used in a deployment with high availability. After you read it, follow the instructions in Deploying i2 Analyze with high availability to deploy i2 Analyze in this pattern.
- Load balancer
- A load balancer is required to route requests from clients to the Liberty servers that host the i2 Analyze application. The load balancer is also used to monitor the status of the Liberty servers in a deployment, and it must route requests only to servers that report their status as "live".
After you deploy i2 Analyze with a load balancer, clients can make requests to i2 Analyze only through the load balancer.
You can also use the load balancer to distribute requests from clients across the servers in your deployment. The load balancer must provide server persistence for users. A sketch of the liveness monitoring follows this item.
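The liveness monitoring that the load balancer performs can be sketched as a simple poll of each Liberty server. In the following Python sketch, the hostnames, port, and endpoint path are illustrative assumptions rather than values defined in this section; substitute the liveness endpoint that your deployment exposes.

```python
# A minimal liveness poll, assuming each Liberty server exposes an HTTP
# endpoint that returns 200 only while the server reports itself "live".
# Hostnames and the endpoint path are hypothetical.
import urllib.request
import urllib.error

LIBERTY_SERVERS = ["https://liberty1:9443", "https://liberty2:9443"]
LIVE_PATH = "/opal/api/v1/health/live"  # assumed liveness endpoint path

def live_servers(servers, path=LIVE_PATH, timeout=5):
    """Return the subset of servers that currently report themselves live."""
    live = []
    for base in servers:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                if resp.status == 200:
                    live.append(base)
        except (urllib.error.URLError, OSError):
            pass  # unreachable or erroring servers are treated as not live
    return live

if __name__ == "__main__":
    print("Route traffic to:", live_servers(LIBERTY_SERVERS))
```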
- Liberty
- To provide high availability of Liberty and the i2 Analyze application, you can deploy i2 Analyze on multiple Liberty servers. Each Liberty server can process requests from clients to connect to i2 Analyze.
In a deployment with multiple Liberty servers, one server is elected the leader. The leader can process requests that require exclusive access to the database. The actions that must be completed on the leader Liberty server are described in Liberty leadership configuration. An illustrative leader-election sketch follows this item.
At a minimum, two Liberty servers are required.
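This section does not specify how the leader is elected. Because a highly available deployment already includes a ZooKeeper ensemble, the general pattern can be illustrated with the kazoo client's Election recipe; this is a generic sketch with a hypothetical connection string, znode path, and identifier, not a description of how i2 Analyze itself elects a leader.

```python
# Illustrative only: a generic leader election on a ZooKeeper ensemble
# using the kazoo client. Connection string, znode path, and identifier
# are hypothetical.
from kazoo.client import KazooClient

def act_as_leader():
    # Only the elected leader runs this callback, for example work
    # that requires exclusive access to the database.
    print("This server is now the leader")

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()
election = zk.Election("/example/liberty-leader", identifier="liberty1")
election.run(act_as_leader)  # blocks, and runs the callback if elected
```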
- Database management system
- To provide high availability of the Information Store database, use the functions provided by your database management system to replicate the primary database to at least one standby instance. i2 Analyze connects only to the primary instance; the contents of the Information Store database are replicated to the standby instances.
If the primary instance fails, one of the standby instances becomes the primary. When a standby becomes the primary, it must be configured to accept both reads and writes. This means that i2 Analyze can continue to function when the initial primary server fails.
- For more information about high availability in Db2, see High availability.
- For more information about high availability in SQL Server:
- For Enterprise edition, see What is an Always On availability group?
- For Standard edition, see Basic availability groups
- If you are deploying with SQL Server on Linux, see Always On availability groups on Linux
For Db2, at a minimum one primary server and one standby server are required.
For SQL Server, at a minimum one primary server and two standby servers are required. If you are using basic availability groups, this means one standby server and one failover quorum witness. A failover-aware connection sketch follows this item.
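Client connections must target whichever instance is currently the primary. For SQL Server, that is typically achieved by connecting through an availability group listener rather than to a specific server. The following Python sketch uses pyodbc; the listener address, database name, and driver name are illustrative assumptions.

```python
# A sketch of a failover-aware connection to a SQL Server availability
# group listener. Listener address, database name, and driver are
# hypothetical; adjust them for your environment.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=ag-listener.example.com,1433;"  # the AG listener, not one node
    "DATABASE=ISTORE;"                      # hypothetical database name
    "MultiSubnetFailover=Yes;"              # try all replica IPs in parallel
    "Trusted_Connection=Yes;"
)

def connect_with_retry(conn_str=CONN_STR, attempts=3):
    """Retry briefly so that a failover in progress is tolerated."""
    last_error = None
    for _ in range(attempts):
        try:
            return pyodbc.connect(conn_str, timeout=10)
        except pyodbc.Error as error:
            last_error = error  # a standby may still be becoming primary
    raise last_error
```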
- Solr
i2 Analyze uses SolrCloud to deploy Solr with fault-tolerance and high-availability capabilities. To provide fault tolerance and high availability of the Solr cluster, deploy the cluster across at least two Solr nodes, with each node on a separate server.
At a minimum, two Solr servers are required. A cluster-status sketch follows this item.
For more information about SolrCloud, see How SolrCloud Works.
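You can observe the state of the cluster through Solr's Collections API, whose CLUSTERSTATUS action reports the live nodes and the state of each collection. A minimal Python sketch, assuming a Solr node is reachable at an illustrative address:

```python
# Query SolrCloud cluster state through the Collections API. Any live
# node in the cluster can answer; the address here is hypothetical.
import json
import urllib.request

SOLR_NODE = "http://solr1:8983"

def cluster_status(node=SOLR_NODE):
    url = node + "/solr/admin/collections?action=CLUSTERSTATUS"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

status = cluster_status()
live_nodes = status["cluster"]["live_nodes"]
print(f"{len(live_nodes)} live Solr node(s): {live_nodes}")
```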
- ZooKeeper
- To deploy ZooKeeper for high availability, deploy multiple ZooKeeper servers in a cluster known as an ensemble. For a highly available solution, the Apache ZooKeeper documentation recommends that you use an odd number of ZooKeeper servers in your ensemble. The ZooKeeper ensemble continues to function while more than 50% of its members are online. This means that if you have three ZooKeeper servers, you can continue operations when a single ZooKeeper server fails. The quorum arithmetic is sketched after this item.
At a minimum, three ZooKeeper servers are required.
For more information about a multiple server ZooKeeper setup, see Clustered (Multi-Server) Setup.
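The majority rule is simple arithmetic: an ensemble of n servers needs a quorum of ⌊n/2⌋ + 1 members and therefore tolerates ⌊(n−1)/2⌋ failures, which is why an even-sized ensemble adds no extra tolerance over the next smaller odd size. A brief Python illustration:

```python
# Quorum arithmetic for a ZooKeeper ensemble: the ensemble functions
# only while a strict majority of its members are online.
def quorum_size(ensemble_size: int) -> int:
    """Smallest number of servers that forms a majority."""
    return ensemble_size // 2 + 1

def tolerated_failures(ensemble_size: int) -> int:
    """How many servers can fail while a majority remains online."""
    return (ensemble_size - 1) // 2

for n in (3, 4, 5):
    print(f"{n} servers: quorum={quorum_size(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

Note that four servers tolerate only one failure, the same as three, so the fourth server adds risk without adding tolerance.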