Deploying i2 Analyze on multiple servers

To deploy i2 Analyze in a multiple-server deployment topology, you must run the commands to install, deploy, and start the components of i2 Analyze on each server.

You must have an i2 Analyze configuration that is set up for a multiple-server physical deployment topology or a deployment with high availability.

To deploy i2 Analyze in a multiple-server deployment topology, you must provide the configuration to each deployment toolkit. Then, you can run the commands to deploy the components of i2 Analyze on each server. Pay attention to which server each command must be run on, and whether you need to specify the hostname of that server.

Run any toolkit commands from the toolkit\scripts directory in the deployment toolkit on the specified server in your environment.

Copying the i2 Analyze configuration

The i2 Analyze configuration is required on every server that hosts a component of i2 Analyze. You do not have to copy the configuration to the database server if that server contains only the Information Store database and no other components.

Copy the toolkit\configuration directory from the server where you created and populated your i2 Analyze configuration to the toolkit directory of the deployment toolkit on each server in your environment.
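As a sketch, the copy can be scripted. The directory layout below is a minimal stand-in for a real toolkit, and the copy is imitated locally so the commands can be run anywhere; in a real multiple-server environment you would push the configuration directory to each remote toolkit instead, for example with scp on Linux or robocopy on Windows.

```shell
# Stand-in for the toolkit on the server where the configuration was created.
# The file name is illustrative only, not a real configuration file.
mkdir -p toolkit/configuration
echo "example.setting=value" > toolkit/configuration/settings.properties

# Stand-in for the toolkit directory on another server in the environment.
mkdir -p remote-toolkit

# Copy the whole configuration directory into the target toolkit.
# Across servers you might instead run something like:
#   scp -r toolkit/configuration user@solr1.example.com:/opt/i2analyze/toolkit/
cp -r toolkit/configuration remote-toolkit/
```

After the copy, each deployment toolkit contains an identical toolkit\configuration directory.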

Installing components

Install the components of i2 Analyze on the servers that you have identified.

  1. On the Liberty server, run the following command:
    setup -t installLiberty
  2. On each ZooKeeper server, run the following command:
    setup -t installZookeeper
  3. On each Solr server, run the following command:
    setup -t installSolr

Deploying and starting components

Deploy and start the components of i2 Analyze on the specified servers.

  1. On each ZooKeeper server, encode the credentials and create and start any ZooKeeper hosts:
    setup -t ensureCredentialsEncoded
    setup -t createZkHosts --hostname zookeeper.hostname
    setup -t startZkHosts --hostname zookeeper.hostname
    Where zookeeper.hostname is the hostname of the ZooKeeper server where you are running the command, and matches the value for the host-name attribute of a <zkhost> element in the topology.xml file.
  2. On the Liberty server, run the command to upload the Solr configuration to ZooKeeper:
    setup -t createAndUploadSolrConfig --hostname liberty.hostname
    Where liberty.hostname is the hostname of the Liberty server where you are running the command, and matches the value for the host-name attribute of the <application> element in the topology.xml file.
  3. On each Solr server, encode the credentials and create and start any Solr nodes:
    setup -t ensureCredentialsEncoded
    setup -t createSolrNodes --hostname solr.hostname
    setup -t startSolrNodes --hostname solr.hostname
    Where solr.hostname is the hostname of the Solr server where you are running the command, and matches the value for the host-name attribute of a <solr-node> element in the topology.xml file.
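Because each --hostname value must exactly match a host-name attribute in topology.xml, it can help to list those attributes before you run the commands. The fragment below is an illustrative sample, not a complete topology file; in a real deployment, run the grep against your own configuration\environment\topology.xml.

```shell
# Write a sample topology fragment with hypothetical hostnames.
cat > topology-sample.xml <<'EOF'
<topology>
  <zkhosts>
    <zkhost host-name="zookeeper1.example.com" id="1"/>
  </zkhosts>
  <solr-nodes>
    <solr-node host-name="solr1.example.com" id="node1"/>
  </solr-nodes>
</topology>
EOF

# List every host-name attribute so you can compare it with the value
# that you plan to pass with --hostname.
grep -o 'host-name="[^"]*"' topology-sample.xml | sort -u
```

The output lists one line per declared hostname; a typo or mismatch here is a common cause of failures in the createZkHosts, createSolrNodes, and createAndUploadSolrConfig commands.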
On the Liberty server, run the following commands to deploy and start the remaining components.
  1. Create the Solr collections:
    setup -t createSolrCollections --hostname liberty.hostname

    To check that the Solr collection is created correctly, click Cloud in the Solr Web UI, or go to http://solr.hostname:port-number/solr/#/~cloud. Log in with the user name and password for Solr from the credentials.properties file.

    A horizontal tree with the collection as the root is displayed. Here you can see the breakdown of the shards, nodes, and replicas in any collections.

  2. Create the Information Store database:
    setup -t createDatabases

    To check that the database is created correctly, connect to the database by using a database management tool.

  3. Deploy the i2 Analyze application:
    setup -t deployLiberty
  4. If you are using IBM HTTP Server, configure the HTTP Server:
    setup -t configureHttpServer
  5. Start the Liberty server:
    setup -t startLiberty
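If you prefer the command line to the Solr Web UI, the collection status can also be queried through Solr's Collections API. The hostname, port, and credential variables below are placeholders; take the real values from your topology.xml and credentials.properties files.

```shell
# Placeholder connection details for a Solr server; substitute your own.
SOLR_HOST="solr1.example.com"
SOLR_PORT="8983"

# CLUSTERSTATUS returns JSON that lists each collection with its
# shards and replicas, mirroring what the Cloud page displays.
STATUS_URL="http://$SOLR_HOST:$SOLR_PORT/solr/admin/collections?action=CLUSTERSTATUS"
echo "$STATUS_URL"

# To run the query, supply the Solr user name and password, for example:
#   curl -u "$SOLR_USER:$SOLR_PASSWORD" "$STATUS_URL"
```

A healthy deployment reports every replica in the collection with a state of "active".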
After you deploy and start i2 Analyze, return to the instructions for creating a deployment in your current environment and complete the remaining steps.