Getting Data In

Results returning wrong eventtypes

mikelanghorst
Motivator

I have used batch to import a couple gigs of syslog data from an application. When I search for those application events, they are associated with a large number of completely unrelated eventtypes.

Example log message: Nov 19 00:15:39 servername (bpi-db-1001): Returning Connection to Jboss pool

Eventtypes associated with this: eventtype=auditd | eventtype=cpu | eventtype=df check df host success | eventtype=hardware | eventtype=interfaces | eventtype=iostat cpu iostat report resource success | eventtype=lastlog | eventtype=lsof file lsof report resource success | eventtype=netstat cpu netstat os report success | eventtype=openPorts | eventtype=package | eventtype=protocol | eventtype=ps os process ps report success | eventtype=top os process report success top | eventtype=unix-all-logs | eventtype=usersWithLoginPrivs | eventtype=vmstat memory report resource success vmstat | eventtype=who

The only changes I've made to the server out of the box were configuring distributed search and enabling the Unix and Windows apps.

For example, $SPLUNK_HOME/etc/apps/unix/default/eventtypes.conf defines the lsof eventtype simply as "sourcetype = lsof".
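
In case it helps to reproduce, a search along these lines (the index and host here are just placeholders for my environment) is how I'm seeing the full list of eventtype values attached to each imported event:

index=main host=servername "Returning Connection to Jboss pool"
| table _time host eventtype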

1 Solution

jbsplunk
Splunk Employee

This is a known issue that has been reported to support. Some of the Unix app's eventtypes were made visible in all apps, and because they are malformed, every event in every app ends up matching them. The eventtypes.conf file in apps/unix/default contains a number of stanzas that are missing the "search =" in front of the query. Adding it and restarting Splunk appears to fix the issue without causing other problems. Here is a sample of the bad eventtypes; they are all under this scripted-input section.

---------------------------------------------------------------------
# an eventtype for every scripted-input sourcetype
[vmstat]
sourcetype = vmstat
---------------------------------------------------------------------

I just copied the file into $SPLUNK_HOME/etc/apps/unix/local and removed everything above

# an eventtype for every scripted-input sourcetype

After this I prepended search = to each sourcetype, so the new file followed this format:

---------------------------------------------------------------------
# an eventtype for every scripted-input sourcetype
[vmstat]
search = sourcetype = vmstat

[iostat]
search = sourcetype = iostat

[ps]
search = sourcetype = ps

[top]
search = sourcetype = top
----------------------------------------------------------------------
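
As a quick sanity check (not required for the fix, just a suggestion), btool can confirm that the local stanzas now win and carry the search key; with --debug each line is prefixed with the file it came from:

$SPLUNK_HOME/bin/splunk btool eventtypes list vmstat --debug

After the restart, these eventtypes should only match their intended sourcetypes again.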
