Hi,
I am blacklisting some excessive messages in transforms.conf. Here is an example of my config:
[md_client_blacklist]
REGEX = ((DEBUG,)|(Ignoring gap))
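For context, a blacklist transform like this normally works together with a props.conf stanza that routes matching events to the nullQueue. A minimal sketch (the sourcetype name [md_client] is an assumption; substitute your own):

```ini
# props.conf -- sourcetype name is an assumption, use yours
[md_client]
TRANSFORMS-null = md_client_blacklist

# transforms.conf
[md_client_blacklist]
REGEX = ((DEBUG,)|(Ignoring gap))
DEST_KEY = queue
FORMAT = nullQueue
```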
As a result I do not see any "Ignoring gap" message in my search (as expected).
Is there a setting that would let me see every 1000th "Ignoring gap" message in my search? (I would not want to completely blacklist this message.)
Thank you.
We are going to compress this at the application level before it gets to Splunk. Thank you all for your help.
Ok, if there is no pattern to identify the 1000th record, what if you took just one line once an hour? You could create a scripted input that fires once an hour to read the last "Ignoring gap" message in your file.
Are you on Unix? You could grep and awk every 1000th line as a scripted input or as a cron job to write the 1000th lines to their own separate log that you monitor. That way you could blacklist all of the "Ignoring gap" messages in the original log file and just get every 1000th line from the new log file.
(I'm sure PowerShell could also do either of these if you're on Windows.)
Unix sample command....
$ grep "End time:" backend-server.log.2016-04-23 | wc -l
4612
$ grep "End time:" backend-server.log.2016-04-23 | awk 'NR % 1000 ==0'
End time: Sat Apr 23 06:34:14 EDT 2016
End time: Sat Apr 23 06:37:04 EDT 2016
End time: Sat Apr 23 12:11:17 EDT 2016
End time: Sat Apr 23 12:13:09 EDT 2016
$
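Putting the two commands together, a cron-able version of that idea might look like the sketch below. It builds a synthetic log just for demonstration; the file names (demo.log, sampled.log) and the message text are assumptions, so adjust them to your environment:

```shell
#!/bin/sh
# Demo input: 4500 "Ignoring gap" lines plus one unrelated line.
# (In a real cron job you would point grep at your actual log instead.)
seq 1 4500 | awk '{print "Ignoring gap seq=" $1}' > demo.log
printf 'DEBUG, something else\n' >> demo.log

# Keep only every 1000th occurrence in a separate file for Splunk to monitor.
grep "Ignoring gap" demo.log | awk 'NR % 1000 == 0' > sampled.log

# Occurrences 1000, 2000, 3000, and 4000 survive.
wc -l < sampled.log
```

In a real deployment you would append (>>) to the sampled log from cron and have Splunk monitor only that file, while the blacklist keeps the originals out of the index.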
Well, by blacklisting, the message does not count against the license pool and you save the disk space.
However, there is a value in the message as well.
So I would like to skip 999 lines and keep every 1000th. That way a log of 100,000 lines becomes 100 lines.
No, unfortunately there is no pattern. The messages are very random.
Can we have some sample events (full events, just mask any sensitive data)?
Is there any pattern in the time at which these events occur? Like a fixed minute or second? Or do they come every minute, every 10 seconds, etc.?
I have never heard of such a thing... I don't think it's possible.
You might be able to trick it with a separate OS script that scrapes out every 1000th "gap"... but Splunk on its own cannot do what you are looking for. On that note, why are you trying to do this in the first place?