Using a shared ODW database with multiple Opsview Master servers

It is possible to have multiple distinct Opsview master systems all feeding to a shared ODW instance.

There are some constraints:

  • Each Opsview Master is separately administered and has no knowledge of the others
  • All Opsview Masters must be on the same version (you can upgrade each master individually - see Upgrading Opsview below for details)
  • There are no Opsview host names duplicated across the Opsview Masters. (Note: this is the host name as understood by Nagios® Core - you can have different host names that point to the same DNS name if you wish.) Obviously, this cannot be enforced within Opsview because the masters do not know about each other. The constraint exists because the Runtime to ODW import script, import_runtime, performs key lookups based on host names
  • If you want to monitor the same host from different masters, you can embed the master's identity in the host name to satisfy the above constraint (e.g. host1-uk, host1-us, host1-japan)
  • Each Opsview Master must have a unique opsview_instance_id set. This is an integer between 1 and 20 and must not clash with any other master. The default value is 1
  • The timezone for all the Opsview master systems must be the same. (In Opsview 3, you can set each master to a different timezone and all data inserted will be in UTC in ODW.)
  • All Opsview Masters should be time synchronised. This is not a strict requirement, but unsynchronised clocks will make it difficult to compare information between masters
  • The runtime.nagios_objects table lists all the objects within Nagios Core that are added to the Runtime database. These objects are keyed by the auto-incremented object_id column, whose values could clash between Opsview Masters when calculating downtimes. To avoid clashes, (opsview_instance_id - 1) * 100000000 is added to the object_id value. This imposes the constraint that runtime.nagios_objects holds fewer than 100,000,000 objects, and it affects the downtime calculations
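
The object_id offset described in the last constraint can be sketched as a simple shell calculation (the instance ID and raw object_id below are illustrative values, not taken from a real install):

```shell
#!/bin/sh
# Illustrative sketch of the object_id offset described above.
# A raw Runtime object_id of 42 on a master with opsview_instance_id=5
# maps to 400000042 when used in ODW downtime calculations.
opsview_instance_id=5
raw_object_id=42
odw_object_id=$(( (opsview_instance_id - 1) * 100000000 + raw_object_id ))
echo "$odw_object_id"   # prints 400000042
```

Because the offset is (instance_id - 1) * 100,000,000, a master with the default instance ID of 1 keeps its raw object_ids unchanged.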


By following the steps below, you will start recording Opsview information into the central ODW instance. There are currently no instructions for merging existing ODW data into this central instance.

The prerequisite is that you already have an ODW database set up which you want to be the central ODW instance.

On each Opsview master server, stop any ODW imports from happening:

su - nagios
touch /usr/local/nagios/var/upgrade.lock

Ensure each Opsview master has appropriate MySQL permissions to access the shared ODW database (see the MySQL documentation to set access rights appropriately). You can check connectivity from each master with:

mysql -u {username} -p{password} -h {host} {odw_db_name}
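
As a sketch of what appropriate permissions might look like, you could grant the ODW user access from each master on the central ODW MySQL server. The host name, user and password below are placeholders - check the GRANT syntax for your MySQL version:

```shell
# Run on the central ODW MySQL server, once per Opsview master.
# 'master1.example.com' and the credentials are placeholders for
# your environment.
mysql -u root -p <<'EOF'
GRANT SELECT, INSERT, UPDATE, DELETE ON odw.* TO 'odw'@'master1.example.com' IDENTIFIED BY 'changeme';
FLUSH PRIVILEGES;
EOF
```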

Edit /usr/local/nagios/etc/opsview.conf and point the ODW database credentials to the correct instance, and set the opsview_instance_id:

$odw_db = "odw";
$odw_dbuser = "odw";
$odw_dbpasswd = "changeme";
$odw_dbhost = "";
$opsview_instance_id = 5;
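
A quick sanity check for the opsview_instance_id range can be sketched in shell (the value 5 is simply the example from the config above):

```shell
#!/bin/sh
# Sketch: verify that a chosen opsview_instance_id is in the valid
# range of 1 to 20 before committing it to opsview.conf.
opsview_instance_id=5
if [ "$opsview_instance_id" -ge 1 ] && [ "$opsview_instance_id" -le 20 ]; then
    echo "valid"
else
    echo "invalid"
fi
```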

The opsview_instance_id must be unique for each Opsview master.

Remove the lock and do a single update:

rm /usr/local/nagios/var/upgrade.lock
/usr/local/nagios/bin/import_runtime -i 1 -r '2009-01-13 00'

This will start importing data from this Opsview Master instance from midnight on 2009-01-13.

The cronjob will then start importing as usual.

Upgrading Opsview

When upgrading an Opsview master, the install scripts automatically update the ODW schema if applicable. Because all Opsview masters share a single ODW, you must disable imports on all masters until every one of them has been upgraded.

On each of your Opsview masters, set the upgrade flag:

su - nagios
touch /usr/local/nagios/var/upgrade.lock
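
If you manage several masters, the lock step above can be applied in one pass over SSH. This is a sketch only - the host names are placeholders, and the loop prints each command rather than executing it (drop the leading echo to run it for real):

```shell
#!/bin/sh
# Placeholder host names - replace with your real Opsview masters.
for master in master1.example.com master2.example.com; do
    echo ssh nagios@"$master" "touch /usr/local/nagios/var/upgrade.lock"
done
```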

Now upgrade one of your Opsview masters. The upgrade process will update the ODW schema (if applicable). When the upgrade has finished on this master, the upgrade.lock file is removed automatically and new ODW imports will continue as normal.

You can now upgrade all your other Opsview masters.