Splunk Search

My real-time events finally show up in the Splunk GUI sometime tomorrow

maverick
Splunk Employee

I have approximately sixty Splunk forwarders sending Windows events to my central Splunk indexer. Four of them are AD servers, which naturally send a large volume of events (one AD server alone sends 1.3 GB/day); the rest are desktops, which send only a few events in comparison. Overall, I'm indexing around 4 to 5 GB/day total.

However, events generated today do not show up until tomorrow. In other words, I'm receiving all of the events, but they don't appear in the Splunk GUI in real time (or sometimes even when I search over the past few hours or the past day); when I search today's time range tomorrow, they are all there.

I'm wondering whether indexing the incoming events is consuming most of my Splunk processing capacity (and taking priority, which is what I would expect), so that optimizing and search processing take a back seat until enough processing power frees up. That would explain why the data shows up late but is all there.

Can someone confirm this might be the case or provide a different theory that makes sense?
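
For reference, one quick way to separate "indexed late" from "arrived late" is to compare each event's timestamp (_time) with the time Splunk actually indexed it (_indextime). A rough sketch from the indexer's command line; the index filter and time range below are just placeholders:

# Report average and maximum indexing lag (in seconds) per sending host over the last day.
splunk search 'index=* earliest=-24h | eval lag_sec=_indextime-_time | stats avg(lag_sec) max(lag_sec) by host'

A consistently large lag for particular hosts points at those forwarders (or their clocks) rather than at the indexer itself.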

2 Solutions

gkanapathy
Splunk Employee

No. It is extremely unlikely that hours of latency would be caused by a lack of CPU processing capacity. The time it takes an event to pass through the indexing pipeline isn't that long, and if by some means (e.g., some horrible regex) you did make it that long, you would never be able to index that amount of data in a day, because your average throughput would be far too low. (I suppose I could come up with a config and dataset that behaved that way, with a bad regex that clogs the pipeline for several seconds or minutes and data that triggers it arriving at a rate high enough to cause massive lags but rare enough that the average keeps up, but that would be fairly contrived.)

It's much more likely that the data simply isn't arriving at the indexer as soon as you would like. This could be caused by network problems, forwarder problems, or problems with the original data generation. It could also be a data/timestamping misunderstanding. The first place I would look is whether the data is even arriving at the network port, perhaps using tcpdump or another sniffer. I would also look closely at specific events that do eventually arrive and check them for consistency with the original data, particularly timestamp and timezone consistency.
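
A minimal sketch of that first check, assuming the forwarders send to the default splunktcp receiving port 9997 on interface eth0 (adjust both for your deployment):

# On the indexer: confirm forwarder traffic is actually reaching the receiving port.
sudo tcpdump -nn -i eth0 port 9997

# Also spot-check the indexer's clock in UTC and compare it against a forwarder's;
# a large skew will push "today's" events outside a real-time or recent-hours search.
date -u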


gbolcer
Explorer

Same issue--turns out that the clocks on the machines were off significantly.

sudo su
yum install ntp
chkconfig ntpd on
yum install rdate
rdate -s cuckoo.nevada.edu   # first example server I found on the net
/etc/init.d/ntpd start

Everything works perfectly now.
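
In case it helps anyone hitting the same thing, a couple of quick checks to confirm the clocks really are in sync after enabling ntpd (assumes the same RHEL/CentOS-style hosts as the commands above):

# Show the peers ntpd has selected and the current offset from each.
ntpq -p
# Compare this host's idea of "now" in UTC against the indexer's.
date -u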



maverick
Splunk Employee

That makes more sense. Thanks for the confirmation!
