In a deployment that provides high availability, i2 Analyze uses SolrCloud to deploy Solr with fault tolerance and high availability capabilities. Solr uses an active/active pattern that allows all Solr servers to respond to requests.
Detecting server failure

To detect if there has been a Solr failure, you can use the following mechanisms:
- The IBM_i2_Component_Availability.log contains messages that report the status of the Solr cluster. For more information about the messages that are specific to Solr, see Solr messages.
- The Solr Web UI displays the status of any collections in a Solr cluster. To access the UI, navigate to the following URL in a web browser: http://localhost:8983/solr/#/~cloud?view=graph, where localhost and 8983 are replaced with the host name and port number of one of your Solr nodes.
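In addition to the graph view, Solr's Collections API can report the same state information as JSON through the CLUSTERSTATUS action (http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS). As a minimal sketch, the following function walks a parsed CLUSTERSTATUS response and lists every replica that is not active. The collection name main_index and the sample response fragment are hypothetical; the nesting (cluster, collections, shards, replicas) follows the documented CLUSTERSTATUS format.

```python
def find_down_replicas(cluster_status: dict) -> list:
    """Return (collection, shard, replica, state) tuples for every
    replica whose reported state is not 'active'."""
    down = []
    collections = cluster_status.get("cluster", {}).get("collections", {})
    for coll_name, coll in collections.items():
        for shard_name, shard in coll.get("shards", {}).items():
            for replica_name, replica in shard.get("replicas", {}).items():
                if replica.get("state") != "active":
                    down.append((coll_name, shard_name, replica_name,
                                 replica.get("state")))
    return down

# Hypothetical CLUSTERSTATUS fragment with one failed replica:
sample = {
    "cluster": {
        "collections": {
            "main_index": {
                "shards": {
                    "shard1": {
                        "replicas": {
                            "core_node1": {"state": "active"},
                            "core_node2": {"state": "down"},
                        }
                    }
                }
            }
        }
    }
}

print(find_down_replicas(sample))
# → [('main_index', 'shard1', 'core_node2', 'down')]
```

An empty result means every replica in every shard is active, which matches the healthy state shown in the graph view.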
Automatic failover

Solr continues to update the index and provide search results while there is at least one replica available for each shard.
Recovering failed servers

A server might fail for a number of reasons. Use the logs from the server to diagnose and resolve the issue.
- The Solr logs are located in the i2analyze\deploy\solr\server\logs\8983 directory on each Solr server, where 8983 is the port number of the Solr node on the server.
- The application logs are in the deploy\wlp\usr\servers\opal-server\logs directory, specifically the IBM_i2_Solr.log file.
For more information about the different log files and their contents, see Deployment log files.
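When you diagnose a failure, it can help to pull only the warning and error entries out of a large log file. The following is a minimal sketch; the sample log lines and their layout are assumptions for illustration, not the exact format of the i2 Analyze or Solr log files.

```python
def extract_problems(log_lines, levels=("ERROR", "WARN", "FATAL")):
    """Keep only log lines whose severity appears as a whole word
    in the line. Returns the matching lines with trailing
    whitespace stripped."""
    problems = []
    for line in log_lines:
        if any(level in line.split() for level in levels):
            problems.append(line.rstrip())
    return problems

# Hypothetical lines in the general style of a Solr log:
sample_log = [
    "2023-06-01 10:15:02 INFO  Opening new SolrCore ...",
    "2023-06-01 10:15:07 ERROR RecoveryStrategy Could not reach leader",
    "2023-06-01 10:15:09 WARN  ZkController Leader election in progress",
]

for line in extract_problems(sample_log):
    print(line)
```

Matching on whole words rather than substrings avoids false positives from message text that merely mentions a severity name.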
To recover the failed server, you might need to restart the server, increase the hardware specification, or replace hardware components.
Reinstating high availability

On the recovered Solr server, run the following command to start the Solr nodes on the server:
setup -t startSolrNodes -hn solr.hostname
You can use the IBM_i2_Component_Availability.log and the Solr Web UI to ensure that the nodes start correctly and that the cluster returns to its previous state.
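Because recovering nodes can take some time to rejoin the cluster, a recheck is usually a polling loop rather than a single query. The sketch below is a generic poll-with-timeout pattern, assuming you supply a check function that returns True when every replica is active (for example, one that queries CLUSTERSTATUS); the fake_check used here only simulates a cluster that becomes healthy on the third poll.

```python
import time

def wait_for_active(check_fn, timeout_s=120, interval_s=5):
    """Poll check_fn until it returns True or the timeout elapses.
    Returns True if the check succeeded within the timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_fn():
            return True
        time.sleep(interval_s)
    return check_fn()

# Simulated check: the cluster reports healthy on the third poll.
state = {"polls": 0}
def fake_check():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for_active(fake_check, timeout_s=60, interval_s=0))
# → True
```

Choose a timeout that reflects how long your shards normally take to recover; replicas in the recovering state are expected immediately after a restart and are not by themselves a sign of a problem.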