Splunk Add-On for PowerShell - why not just create .csvs and load those?

juriggs
Path Finder

Hi,

I use PowerShell a lot to generate all kinds of information for our admins and engineers, and I'm now evaluating the Add-On for Splunk. It seems a bit complicated, and if I'm understanding things correctly, it has some fairly serious limitations as well.

A large number of my scripts already output .csv or .html files. Is there any reason I shouldn't just let those continue to run and point Splunk at the folder that contains all those files? Just wondering if I'm missing something here...

Thanks.

1 Solution

javiergn
SplunkTrust
SplunkTrust

If you are on Splunk 6.3, PowerShell is natively supported, so you don't need to install anything else. See this:

http://docs.splunk.com/Documentation/Splunk/6.3.3/Data/MonitorWindowsdatawithPowerShellscripts

The way you are using PowerShell at the moment is perfectly fine. It all depends on what you are trying to do.

You could always keep your CSVs and HTML files but also output PowerShell objects for Splunk, so that those objects are indexed as events and become easily searchable.
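
For example, a minimal sketch of a script for the native powershell:// input - the cmdlet and property names here are just illustrative:

# Each object written to the pipeline becomes one Splunk event,
# with its properties indexed as fields.
Get-Service | ForEach-Object {
    [PSCustomObject]@{
        ComputerName = $env:COMPUTERNAME
        ServiceName  = $_.Name
        Status       = $_.Status.ToString()
    }
}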

Of course you can monitor a CSV file and generate events too, but there are some constraints, and you would normally use a CSV as a lookup instead. I'm not sure how you would parse an HTML file and make it searchable unless it contains very well-delimited tables, for instance.
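
If you do decide to monitor the files directly, a minimal sketch of the config could look like this (the path, sourcetype name, and index are assumptions):

# inputs.conf - watch a folder of generated CSVs
[monitor://D:\Reports\*.csv]
sourcetype = my:csv:report
index = main
disabled = 0

# props.conf - tell Splunk to use the CSV header row for field names
[my:csv:report]
INDEXED_EXTRACTIONS = csv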

I can go into more detail if you want, but I need to know what you are trying to achieve.

I have used PowerShell to monitor processes and services, local admins, permissions, etc., and it works fine.

Thanks,
J

juriggs
Path Finder

Most of my scripts look across our entire environment. An example would be finding all the scheduled tasks that are running on all of our servers. I then gather that information and output it to a .csv or .htm file, and before we got Splunk I created a web application that displayed the results in tabular format.

Let's use the scheduled tasks script as an example. In that script I create a custom PSObject with many NoteProperties, such as LastResult, NextRunTime, ComputerName, TaskName, Author, and RunAs.

If I'm reading your answer and the documentation right, would I just run my script from the server I have Splunk installed on and use Select-Object on the output, like so:

$output | Select-Object LastResult, NextRunTime, ComputerName, TaskName, Author, RunAs

Then set up my inputs file like so:

[powershell://WindowsScheduledTasks]
script = . "$SplunkHome\etc\apps\My-App\bin\getscheduledtasks.ps1"
schedule = whatever cron schedule I decide to use
sourcetype = Windows:ScheduledTask
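
For context, a simplified sketch of the sort of thing the script does - the schtasks-based collection and the server list file here are illustrative, not my exact code:

# Hypothetical server list; in reality this could come from AD instead.
$servers = Get-Content 'C:\Scripts\servers.txt'

$output = foreach ($computer in $servers) {
    # schtasks can query remote machines without PowerShell remoting.
    schtasks /Query /S $computer /FO CSV /V | ConvertFrom-Csv |
        Where-Object { $_.TaskName -and $_.TaskName -ne 'TaskName' } |  # drop repeated header rows
        ForEach-Object {
            [PSCustomObject]@{
                ComputerName = $computer
                TaskName     = $_.TaskName
                LastResult   = $_.'Last Result'
                NextRunTime  = $_.'Next Run Time'
                Author       = $_.Author
                RunAs        = $_.'Run As User'
            }
        }
}

# Emit the objects so Splunk indexes each one as an event.
$output | Select-Object LastResult, NextRunTime, ComputerName, TaskName, Author, RunAs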


The big thing I'm trying to accomplish is to run one script from one location and get information about all the machines in my domain at one time. I'd like to avoid having to deploy any forwarders at this particular moment if possible.

Will the setup I propose work the way I want it to?

Thanks very much for your help.


javiergn
SplunkTrust
SplunkTrust

The big thing I'm trying to accomplish is to run one script from one location and get information about all the machines in my domain at one time. I'd like to avoid having to deploy any forwarders at this particular moment if possible.

Based on this comment, I would keep your scripts collecting the info you need and then point Splunk at the CSVs. That way you have a hard copy of the file in case something goes wrong during ingestion, and you don't have to make that many changes.

Once you start deploying universal forwarders everywhere, it might be a better approach to run small scripts that just collect local data.

Just to give you an example, as I was in a very similar position some time ago:

  • Initial scenario:
      • PowerShell script listing local administrators remotely from lots of servers
      • Running every hour or so
      • Inserting everything with a timestamp into an MS SQL database
      • Splunk reading those events straight from the SQL DB
  • Final scenario:
      • Universal Forwarders deployed everywhere
      • Local UF running a small script to list local administrators (see the sketch after this list)
      • Splunk ingesting those locally and sending them to the indexer
      • Scripts running every few minutes instead
      • Scripts also handling errors and sending those to Splunk
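
A minimal sketch of the kind of local script the UF could run - the ADSI approach, field names, and file/stanza names below are assumptions, not a prescription:

# List members of the local Administrators group via the ADSI WinNT provider
# (works on older systems that lack the Get-LocalGroupMember cmdlet).
$group = [ADSI]"WinNT://$env:COMPUTERNAME/Administrators,group"
$group.psbase.Invoke('Members') | ForEach-Object {
    # Each object emitted becomes one Splunk event with these fields.
    [PSCustomObject]@{
        ComputerName = $env:COMPUTERNAME
        Member       = $_.GetType().InvokeMember('Name', 'GetProperty', $null, $_, $null)
    }
}

The matching inputs.conf stanza on the forwarder could then look something like:

[powershell://LocalAdmins]
script = . "$SplunkHome\etc\apps\my_uf_app\bin\getlocaladmins.ps1"
schedule = */10 * * * *
sourcetype = Windows:LocalAdmins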

Let me know if that helps.


juriggs
Path Finder

That's really helpful. I can see the benefits of putting forwarders all over the place; I'm just not sure I'm going to get the permission I need to do so, particularly on things like domain controllers. I do like the idea of stuffing it all into a database, though - maybe that's the way to go. I know this is a tangential question, but in my previous experience with Splunk the DBConnect app was really unreliable. Has that improved in the newest version of the product?

Thanks for your help... I'll mark this as the answer right now.


javiergn
SplunkTrust
SplunkTrust

I've been happily using DBConnect for a while now, and the only problems I had were during the initial configuration. It's not the most straightforward app, but it works fine for me, and in some cases I'm reading millions of events (from SharePoint, for instance).

I'm still using the old version, DBConnect 1, so I can't comment on the latest one, but in theory it is meant to be more reliable and easier to use.

Just another piece of advice: DBConnect can read new events from a DB based on a timestamp, an ID, or a combination of both. If you want my personal recommendation, set up an auto-incrementing ID (int or bigint, depending on your needs) and use that as the cursor. A timestamp might not be as precise as you want and can therefore lead to duplicate events.
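
For instance, a minimal T-SQL sketch of such a table (all names here are illustrative):

-- EventId is an auto-incrementing rising column that DBConnect can use
-- as its cursor instead of relying on the (possibly non-unique) timestamp.
CREATE TABLE dbo.AdminEvents (
    EventId      BIGINT IDENTITY(1,1) PRIMARY KEY,
    EventTime    DATETIME2 NOT NULL,
    ComputerName NVARCHAR(256) NOT NULL,
    Detail       NVARCHAR(MAX) NULL
);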
