Hi all,
I'm studying how to apply Splunk to my architecture, and I have a simple question about log monitoring.
I have a failover cluster running several Oracle instances.
Right now, for example, INSTANCE1 is running on nodeA and INSTANCE2 is running on nodeB.
So on nodeA I have the alert log at:
/opt/oracle/instance1/diag/rdbms/instance1/INSTANCE1/trace/alert_INSTANCE1.log
OK, with a forwarder I can set up monitoring of this log very easily. But how can I configure it so the log is monitored both on nodeA (while the instance is running on nodeA) and on nodeB (after the instance fails over to nodeB)?
The same question applies to process monitoring: the process ora_pmon_INSTANCE1 should be running on at least one node...
Any interesting idea?
Thanks
Ste
The log file is on a shared filesystem; on failover the filesystem is unmounted from nodeA and remounted on nodeB. So the instance keeps writing to the same file at the same path, just on a different node.
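Concretely, what I'd like is to deploy the same inputs.conf stanza to the forwarders on both nodes and have whichever node currently holds the filesystem pick the file up (the index and sourcetype names below are just placeholders I made up):

```ini
# Deployed identically to the forwarders on nodeA and nodeB.
# Only the node that currently has the shared filesystem mounted
# will actually see the file.
[monitor:///opt/oracle/instance1/diag/rdbms/instance1/INSTANCE1/trace/alert_INSTANCE1.log]
index = oracle
sourcetype = oracle:alert
disabled = false
```

One caveat, if I understand correctly: each forwarder keeps its own local record of how far it has read each file, so after a failover the forwarder on the other node may not know the file was already partially indexed and could re-read it from the beginning, producing duplicate events.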
Can you clarify what happens to the file on failover? If you fail instance 1 over from nodeA to nodeB, does the new instance 1 running on nodeB start writing to a new file? Or is the old file moved from nodeA to nodeB, with the new instance appending to the moved copy?
Could you not monitor both files all the time?
If the paths are the same, you could use the same config on both/all forwarders (I'm assuming you have more than one host).
[monitor:///opt/oracle/instance*/diag/rdbms/instance*/INSTANCE*/trace/alert_INSTANCE*.log]
Or am I missing something?
/Kristian
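For the process-monitoring part of the question, one option would be a scripted input that runs on every node and emits a status event; Splunk indexes whatever the script writes to stdout. A minimal sketch (the script, instance name, and output format are my own assumptions, not an official Splunk add-on):

```shell
#!/bin/sh
# Hypothetical scripted input: report whether the pmon background process
# for a given Oracle instance is alive on this node. Run it from the
# forwarder on both nodes; correlate the events in a search to verify the
# instance is up on exactly one node.

pmon_status() {
    instance="$1"
    # Match the full argument list, anchored at the start of the line,
    # so this script's own command line is not matched by accident.
    if ps -e -o args= | grep -q "^ora_pmon_${instance}"; then
        running=1
    else
        running=0
    fi
    # One key=value event per run; Splunk ingests the script's stdout.
    echo "instance=${instance} pmon_running=${running}"
}

pmon_status "INSTANCE1"
```

The script would go in a [script://...] stanza in inputs.conf with an interval, and a search could then alert when no node reported pmon_running=1 for an instance.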