====== Monitoring Server ======
This page lists all of the monitoring servers in your environment.

**Note**: When you delete a monitoring server, any associated [[opsview4.6:netflow_collectors|NetFlow Collector]] will also be deleted. Opsview will also run ''/etc/init.d/opsview stop'' on each node in the cluster, as illustrated below.

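As a rough illustration of that last step, the shutdown Opsview performs is equivalent to running the following against each cluster node (the node name is a placeholder):

<code bash>
# Illustrative only -- Opsview does this itself when the monitoring
# server is deleted; "slave1" is a hypothetical node name.
ssh nagios@slave1 '/etc/init.d/opsview stop'
</code>
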
===== Activated =====
If a monitoring server is not activated, then at the next Opsview reload:
  * no hosts associated with this system will be monitored
  * no files will be transferred to the system

This is useful when a monitoring server is unavailable, since a reload will fail if there are problems with a slave server.

On save, if a monitoring server has been changed from an activated to a deactivated state, Opsview will run ''/etc/init.d/opsview stop'' on each node in the cluster.

**Note**: After a reload, as the hosts associated with this system will not be configured, all scheduled downtime and acknowledgements will be removed.

The Opsview master server is always activated.

===== Cluster Nodes =====
Choose from the list the host that will be your slave system. You can choose more than one host to create a clustered monitoring system.

**Note**: If you add a new node to an existing system and this system is used as a [[opsview4.6:netflow_collectors|NetFlow collector]], then you will need to [[opsview4.6:netflow_collectors#syncing_data_between_nodes|synchronise historical NetFlow data]] manually to ensure information is consistent across all nodes in this system. A rough sketch of such a synchronisation follows below.

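The linked guide describes the supported procedure; purely as a sketch (the data directory below is an assumed path, not a documented one), a manual synchronisation from an existing node to the new node might look like:

<code bash>
# Hypothetical sketch: copy historical NetFlow data from an existing
# node to this new node. The directory is an assumption -- consult the
# syncing guide linked above for the real location and procedure.
rsync -av nagios@existing-node:/usr/local/nagios/var/netflow/ \
      /usr/local/nagios/var/netflow/
</code>
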
===== Passive =====
If a monitoring server is marked as passive, the slave will not have any Opsview configuration sent to it. However, tunnels will still be set up between the master and slave so that NSCA results can be received securely on the master. All hosts and services assigned to this monitoring system are created in the master's configuration, and it is the responsibility of the slave server to send results back to the master.

This is useful for setting up a pre-existing Nagios Core server to forward all its results back into Opsview via NSCA without turning that server into a fully managed Opsview slave. There is more information on this [[opsview4.6:migrating:nagios#planning_your_partial_migration|here]].

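For example, a passive slave (or legacy Nagios Core server) could submit a single service check result to the master with ''send_nsca''; the hostnames and config path below are placeholders:

<code bash>
# Submit one passive service result over NSCA. The input fields are
# tab-separated: host name, service description, status code, output.
# "webserver01", the master hostname and the config path are placeholders.
printf 'webserver01\tDisk Usage\t0\tDISK OK - 42%% used\n' | \
    send_nsca -H opsview-master.example.com -c /etc/send_nsca.cfg
</code>
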
===== SSH Tunnel =====
This sets the direction of the SSH tunnels used for the slave node, or for all slave nodes within a cluster. If the forward option is selected, the master server will initiate and manage the SSH connection to the cluster nodes. Reverse SSH tunnels instead allow the slaves to initiate the connection to the master, in such a way that the master is still able to start new communications with a slave as required.

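Conceptually, the two directions correspond to the following plain SSH invocations (illustrative only: Opsview manages these tunnels itself, and the ports and hostnames shown are assumptions):

<code bash>
# Forward: run on the master, which opens the connection to the slave.
ssh -N -L 5669:localhost:5669 nagios@slave1

# Reverse: run on the slave, which opens the connection and publishes a
# port back on the master through which the master can reach the slave.
ssh -N -R 25801:localhost:22 nagios@opsview-master
</code>
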
If you are changing the direction from forward to reverse, you will need to (a command-line sketch of these steps follows the list):
  * Stop Opsview on the master: ''/etc/init.d/opsview stop''
  * Configure both the master and slave servers for reverse SSH - [[opsview4.6:slavesetup#using_slaves_with_reversed_ssh_tunnels|Reverse SSH Configuration Guide]]
  * Start Opsview on the master: ''/etc/init.d/opsview start''
  * Reload
  * Check that new results are received for that slave

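As a sketch, the sequence looks like this on the command line (the reload itself is triggered through your usual mechanism, e.g. the web UI):

<code bash>
# On the master: stop Opsview before changing the tunnel direction.
/etc/init.d/opsview stop

# Now configure both master and slaves for reverse SSH, following the
# Reverse SSH Configuration Guide linked above.

# On the master: start Opsview again.
/etc/init.d/opsview start

# Finally, reload Opsview (e.g. from the web UI) and confirm that new
# results arrive for that slave.
</code>
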
If you are changing the direction from reverse to forward (again, a command-line sketch follows the list):
  * Stop Opsview on the master: ''/etc/init.d/opsview stop''
  * Stop the SSH tunnel from the slave by running ''/etc/init.d/opsview-slave stop'' on that slave server
  * Remove the configuration file on the slave: ''rm /usr/local/nagios/etc/opsview-slave.conf''
  * Change the direction in the user interface and submit
  * Start Opsview on the master: ''/etc/init.d/opsview start''
  * Reload
  * Check that new results are received for that slave

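The equivalent sketch on the command line:

<code bash>
# On the master: stop Opsview.
/etc/init.d/opsview stop

# On the slave: stop the reverse tunnel and remove the slave config.
/etc/init.d/opsview-slave stop
rm /usr/local/nagios/etc/opsview-slave.conf

# Change the direction in the user interface and submit, then on the
# master start Opsview again.
/etc/init.d/opsview start

# Reload Opsview (e.g. from the web UI) and confirm that new results
# arrive for that slave.
</code>
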