All Apps and Add-ons

*Nix App - Collecting data remotely from other hosts without installing Splunk on them

dannux
Path Finder

Can I run the scripts from this app remotely to other servers and capture information to be displayed in the dashboards of the Splunk server? Please note that I cannot install Splunk in the servers that I want to monitor. Is that possible? I really like the way the application presents the data.

1 Solution

Lowell
Super Champion

(I thought this was answered elsewhere, but I couldn't find the link.)

Breakdown of your options:

  1. Copy the Unix app's input scripts to the remote machine and launch them remotely over ssh. This requires that the Splunk server be able to log in to your Unix box and run a script installed there. I found a doc online that describes the whole process in detail: Monitor Your NIX Systems with no Forwarder.

  2. Another approach is to install the scripts on the remote box and push the events back to Splunk using something like netcat to a listening TCP port on the Splunk server. This is helpful if remote polling via ssh isn't a good option for you. You would need to schedule the scripts on the remote machine with cron or something similar. The disadvantages are that the communication isn't encrypted, and you may need to customize the Splunk config to get event breaking to work properly.

  3. If installing scripts remotely isn't an option, then I think your only remaining option is to rewrite the script to run within an ssh session. In other words, you could take the entire "df.sh" script and reduce it down to an ssh command like: ssh user@host df -TPhl. If you want to be more precise (and get the exact formatting), you can run the scripts in debug mode and look at the debug output file for the exact command line that the script uses.
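The second approach above can be sketched as a small collector script run from cron on the monitored box. This is only a sketch under assumptions: the host splunk.example.com, port 1514, and the SPLUNK_SEND dry-run switch are all mine, and the port must match whatever [tcp://...] stanza you configure in inputs.conf on the Splunk server.

```shell
#!/bin/sh
# Hypothetical collector for option 2: push "df" output to a plain
# TCP input on the Splunk server. Host and port are assumptions.
SPLUNK_HOST="${SPLUNK_HOST:-splunk.example.com}"
SPLUNK_PORT="${SPLUNK_PORT:-1514}"

# Build the pipeline as a string so it can be inspected before sending
cmd="df -TPhl | nc -w 3 $SPLUNK_HOST $SPLUNK_PORT"

if [ "${SPLUNK_SEND:-0}" = "1" ]; then
    eval "$cmd"                  # actually send the events
else
    echo "would run: $cmd"       # dry run for testing
fi
```

A crontab entry like `*/5 * * * * SPLUNK_SEND=1 /usr/local/bin/push_df.sh` would then send a snapshot every five minutes.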

For example, I ran df.sh --debug on my system and found that I would need an ssh command like this (for the third approach) to fully match the Splunk-generated "df" sourcetype:

ssh user@remote.host df -TPhl | awk '{if (NR==1) {$0 = header}}    ($2 ~ /^(tmpfs)$/) {next}  {printf "%-50s  %-10s  %10s  %10s  %10s  %10s    %s\n",  $1, $2, $3, $4, $5, $6, $7}' header="Filesystem                                          Type              Size        Used       Avail      UsePct    MountedOn"

Notice that the "df" command runs remotely, but the awk command runs locally, which works fine in this case.
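If you want to sanity-check that reformatting locally before setting up ssh keys, you can feed some canned df output through the same awk step. A minimal sketch: the sample lines are fabricated, and I pass the header with awk's -v flag instead of a trailing var=value operand.

```shell
#!/bin/sh
# Fabricated "df -TPhl" output, for illustration only
sample='Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 50G 20G 28G 42% /
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm'

# Same reshaping as the ssh one-liner above: swap in the Splunk-style
# header on line 1, drop tmpfs rows, and pad the columns
formatted=$(printf '%s\n' "$sample" | awk -v header="Filesystem Type Size Used Avail UsePct MountedOn" '
    NR == 1        { $0 = header }
    $2 ~ /^tmpfs$/ { next }
    { printf "%-50s  %-10s  %10s  %10s  %10s  %10s    %s\n", $1, $2, $3, $4, $5, $6, $7 }')

printf '%s\n' "$formatted"
```

The output should be two lines: the rewritten header and the /dev/sda1 row, with the tmpfs row filtered out.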


So the bottom line is that you either need to be able to (1) install some scripts on the remote machine, or (2) have remote shell access. If you don't have either of these, then you're going to have to get very creative.
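For the first approach, once an ssh wrapper script works from the command line on the Splunk server, it can be scheduled as a scripted input. A hedged sketch: the script name remote_df.sh, the app path, and the 300-second interval are all my assumptions, not something the *nix app ships.

```ini
# Hypothetical stanza in $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf
[script://./bin/remote_df.sh]
interval = 300
sourcetype = df
disabled = 0
```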

