Getting Data In

log monitoring problem

jaymehta18
New Member

Hi,

This is Jay Mehta from India. We currently have a requirement to do log monitoring, and we are looking at Splunk for our solution. We have downloaded the free version of Splunk and the Universal Forwarder (tarballs). We have two Linux systems: we have installed Splunk as an indexer on one and the Universal Forwarder on the other. Currently, we are not able to make it work.

We want to monitor a few log files placed under /var/logs on the system where the Universal Forwarder is installed. We have configured inputs.conf and outputs.conf. We have also updated inputs.conf on the Splunk indexer and made it listen on port 9997. But somehow we are not able to make it work.

We just want the files under /var/logs to show up on the Splunk indexer.
I have one question and one request:

  1. Where will the files end up on the Splunk indexer system once the Universal Forwarder sends them?
  2. Please help me configure the inputs.conf and outputs.conf that need to be set up on both systems. Maybe somebody can paste the exact settings to put in.

Also, I have one suggestion: improve the Splunk documentation, as it is very confusing about how to configure and monitor things.

Thanks in advance.

Regards,
Jay


lguinn2
Legend

On the forwarder

inputs.conf should contain
[monitor:///var/log]

outputs.conf should contain
[tcpout:indexer1]
server=yourindexer.domain.com:9997
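You can also check the forwarder side from the command line. Assuming the tarball was unpacked to /opt/splunkforwarder (adjust the path if yours differs), this should show yourindexer.domain.com:9997 as a configured forward-server:

cd /opt/splunkforwarder/bin
./splunk list forward-server

If the indexer appears under "Configured but inactive forwards" rather than "Active forwards", that usually means the forwarder cannot reach the indexer on that port yet.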

On the indexer

inputs.conf should contain
[splunktcp://:9997]
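As an alternative to editing inputs.conf by hand, the receiving port can also be opened from the indexer's CLI (assuming Splunk is installed in /opt/splunk):

cd /opt/splunk/bin
./splunk enable listen 9997

If you do edit inputs.conf manually instead, restart Splunk on the indexer so it picks up the change.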

Notes

In the example, I assumed that the name of your indexer is yourindexer.domain.com, but you could put the IP address there instead if you prefer.
Note that you can set up the indexer using the GUI. But the Universal Forwarder does not have a UI.
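On the Universal Forwarder, everything is done either by editing the .conf files or via the CLI. For example, instead of writing the [monitor:///var/log] stanza yourself, you should be able to add it like this (again assuming /opt/splunkforwarder as the install directory):

cd /opt/splunkforwarder/bin
./splunk add monitor /var/log

The end result is an equivalent monitor stanza in one of the forwarder's inputs.conf files.
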
Things to check:

  1. Is the name of the directory /var/log or /var/logs on the forwarder?
  2. Is the port 9997 open?
  3. Do you have network connectivity from the forwarder to the indexer? (A quick check for this is sketched just below.)
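A quick way to test both the port and the connectivity at once is to try connecting from the forwarder host, for example with telnet if it is installed there (that is an assumption about your system):

telnet yourindexer.domain.com 9997

If the connection is refused or times out, fix the network or firewall first; no Splunk configuration will work until the forwarder can reach the indexer on 9997.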

Run the following search on the indexer to see if the forwarder is connecting, and how much data it has sent (if any). I recommend that you cut-and-paste.

index=_internal source=*metrics.log group=tcpin_connections |
eval sourceHost=if(isnull(hostname), sourceHost,hostname) |
rename connectionType as connectType |
eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf", "lightwt fwder",fwdType=="full", "heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")|
eval version=if(isnull(version),"pre 4.2",version) | rename version as Ver |
fields connectType sourceIp sourceHost destPort kb tcp_eps tcp_Kprocessed tcp_KBps splunk_server Ver
| eval Indexer= splunk_server
| eval Hour=relative_time(_time,"@h")
| stats avg(tcp_KBps) sum(tcp_eps) sum(tcp_Kprocessed) sum(kb) by Hour connectType sourceIp sourceHost destPort Indexer Ver
| fieldformat Hour=strftime(Hour,"%x %H")
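If that search returns nothing at all, it is also worth looking at the forwarder's own log for connection errors, for example (the path assumes the default /opt/splunkforwarder install):

tail -50 /opt/splunkforwarder/var/log/splunk/splunkd.log

Lines from TcpOutputProc mentioning a failed or refused connection usually tell you why the forwarder cannot reach the indexer.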

lguinn2
Legend

You are seeing that error message because Splunk could not "phone home" to splunk.com to see if there are any updates available; you can ignore it.
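If you would rather not see that update-check error at all, I believe it can be turned off on the indexer by disabling the update checker in web.conf (for example in $SPLUNK_HOME/etc/system/local/web.conf) and restarting Splunk, but this is optional:

[settings]
updateCheckerBaseURL = 0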

You can change inputs.conf on the forwarder, but you will need to restart Splunk on the forwarder to have it re-scan the configuration files. For example:

cd /opt/splunkforwarder/bin
./splunk restart

After you restart Splunk on the forwarder, you should see your new inputs.
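To confirm what the forwarder is actually watching, you can also list its monitored inputs from the same bin directory (it will ask for the forwarder's admin credentials):

./splunk list monitor

Your /var/log (or /var/logs) directory should appear in that list if the input took effect.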


jaymehta18
New Member

Thank you very much, lguinn, for that answer, and yes, I can see the data now in search.

I have 1 more problem now:

I am using the free version of Splunk, and after implementing your suggestion I could see two log files on the indexer. But after that I could not find anything else. I made some changes on the forwarder to see if the indexer would pick up those changes, but it did not.

In the log file of the indexer, I found this ERROR (it was the last line of the log file): ERROR ApplicationUpdater - Error checking for update via https://splunkbase.splunk.com/api/apps:resolve/checkforupgrade: Connect timed out.
