Splunk Search

Has anyone run into indexer CPU spiking when changes to lookup files occur?

briancronrath
Contributor

I've been troubleshooting an issue for some time now that is proving pretty difficult to resolve. My goal is to change the contents of a particular lookup file that a lot of different searches and field extractions rely on. The columns themselves are not changing at all; the number, names, and format stay exactly the same. The only difference in the "newer" lookup file is that it has more rows, and not that many more at that: roughly 4,200 vs. 4,800.

The problem I'm running into is that when I write the new lookup file (it's just a CSV referenced by a lookup definition), all of our indexers spike to 100% CPU utilization and stay there until I revert to the old version of the file, at which point they drop back down to hardly any utilization at all. I've tried both overwriting the current file and creating a new file and pointing the lookup definition at it; both produce the same result. I've compared the data in the two versions of the lookup file and really can't find anything meaningfully different between them.
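For context, the lookup is wired up the standard way: a lookup definition in transforms.conf backed by the CSV, applied automatically via props.conf. This is just a minimal sketch with hypothetical stanza, file, and field names, not our actual configuration:

    # transforms.conf -- lookup definition backed by the CSV (names are made up)
    [my_lookup_def]
    filename = my_lookup.csv

    # props.conf -- automatic lookup applied to a sourcetype (names are made up)
    [my_sourcetype]
    LOOKUP-my_lookup = my_lookup_def input_field OUTPUT output_field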

Has anyone ever run into something like this? I'm wondering what else I can look into to try to resolve it; right now I'm pretty stumped. We are running Splunk version 6.4.0.

1 Solution

briancronrath
Contributor

After hammering away I was able to resolve this. It turns out it wasn't such a straightforward problem. We have a bunch of scheduled monitoring searches that were horribly optimized, and the new data in the lookup table caused those poorly optimized search queries to start pulling back far more results. I had to go through each user's queries and optimize them in order to resolve the issue.
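For anyone hitting something similar, the general shape of the fix was moving the lookup-based filtering out of the main search pipeline so the indexers don't apply the lookup to every event. A rough sketch with made-up index, lookup, and field names (not our actual searches):

    Before (expensive: lookup runs against every event, then most are thrown away):
    index=app_logs sourcetype=app:events
    | lookup asset_lookup host OUTPUT owner_team
    | search owner_team="payments"
    | stats count by host

    After (cheaper: an inputlookup subsearch turns the lookup into an implicit filter on the base search):
    index=app_logs sourcetype=app:events
        [| inputlookup asset_lookup
         | search owner_team="payments"
         | fields host ]
    | stats count by host

In the second form the subsearch returns only the matching host values, which Splunk expands into a filter on the initial search, so far fewer events ever reach the lookup-heavy part of the pipeline.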
