Hi,
I have a Universal Forwarder installed on the License Master server to send license_usage.log to the central indexer cluster (5 indexers in total).
But when I run the search on the central cluster, I see gaps for some dates. My search is as follows:
index=XXX sourcetype=license_usage.log type=RolloverSummary earliest=-30d latest=@d | eval gb=b/1024/1024/1024 | timechart span=1d useother=f limit=20 sum(gb) AS volume_gb by pool | rename _time as Date | eval Date=strftime(Date,"%Y-%m-%d")
Please let me know how I can find the root cause of this issue.
Question: Do you really have more than 20 license pools?
I tested this on my (fairly large) Splunk deployment (with four license pools) and didn't see any gaps over 30 days.
earliest=-30d latest=@d index=_internal source="*license_usage.log" type="RolloverSummary"
| timechart span=1d sum(eval(b/1024/1024/1024/1024)) AS volume_tb BY pool
| rename _time AS Date | eval Date = strftime(Date, "%F")
I would remove the UF and let the LM forward its logs to your indexer(s), then see whether your results are more complete after a week or so. You might also remove the limit=20 and useother=f options from timechart, since they would hide any pools beyond the first 20.
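Before changing anything, it may help to pin down exactly which days are missing RolloverSummary events. A quick diagnostic search (a sketch; it assumes the logs live in the default _internal index, as in my search above):

```
earliest=-30d latest=@d index=_internal source="*license_usage.log" type="RolloverSummary"
| timechart span=1d count AS rollover_events
| where rollover_events == 0
```

Any rows returned are days with no rollover event at all, which points at a forwarding or ingestion gap rather than a problem with your reporting search.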
I realize this doesn't answer your question, but your current solution is not considered best practice for forwarding logs from Splunk.
You don't need a separate UF to forward the license manager's logs; it is perfectly capable of doing so on its own.
Create $SPLUNK_HOME/etc/apps/heavyforwarder_outputs/default/outputs.conf:
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = server_one:9997, server_two:9997
[indexAndForward]
index = false
Substitute your indexers for server_one, server_two, etc. and restart Splunk.
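After the restart, you can confirm the LM's logs are actually arriving at the indexers with a quick check (a sketch; <your_license_master> is a placeholder for the license manager's host name):

```
index=_internal source="*license_usage.log" host=<your_license_master> earliest=-60m
| stats count
```

A non-zero count within an hour or so of the restart means forwarding is working; the RolloverSummary events themselves are only written once per day at license rollover, so give it a full day before judging those.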