Getting Data In

OPSEC feed in real-time

a212830
Champion

Hi,

We are migrating away from LogLogic to Splunk for log management. We have a requirement to get the Check Point OPSEC feed in real-time. Does anyone know of a tool (Splunk or otherwise) that can perform this function? Does Splunk have any plans to provide this functionality?


sspencer_splunk
Splunk Employee

If you want to bring Check Point logs into Splunk as close to real-time as possible, you'll want to run fw1-loggrabber outside of Splunk and have it write the Check Point log stream(s) to disk. I had great success using the method below to bring up to eight independent Check Point log streams into Splunk at one time.

My lag was < 1 sec.

I intend to write up a much more thorough document describing all of the challenges one faces when dealing with Check Point logs, but I'll have to save that for another time. As a result, this procedure will only handle a single Check Point log stream. If you'd like to hear how I scaled out to eight streams, put your request in as a comment below this post.


Pre-requisites:

  1. A functional Check Point LEA setup
  2. Knowledge of Linux libraries, package management, and how they work together
  3. Knowledge of Linux SysVinit and/or systemd
  4. Strong knowledge of Splunk "sources", "sourcetypes", "hosts", config files, etc.
  5. A Linux server with root access, with either a Splunk indexer or a Splunk Universal Forwarder installed on it

Here's a high-level view of the steps you'll need to take:

  1. Acquire fw1-loggrabber software.
  2. Verify that the fw1-loggrabber binaries work on your server.
  3. Build a directory structure to support a locally-installed fw1-loggrabber.
  4. Install fw1-loggrabber.
  5. Migrate existing fw1-loggrabber configurations.
  6. Build a fw1-loggrabber startup initscript.
  7. Configure cron to restart fw1-loggrabber on a schedule.
  8. Point Splunk at the fw1-loggrabber data.
  9. Plan for maintenance of fw1-loggrabber data.

Low-level details:

  1. Find the most recent version of fw1-loggrabber available. It's probably this one.
  2. Extract the tarball. Manually run the fw1-loggrabber binary to ensure that you have the appropriate 32-bit libraries installed on your server. I'll leave it to you to figure out how. Your server will complain when you try to run the fw1-loggrabber binary without all the necessary libraries. (If you cannot find these libraries for your OS, manually copy the two or three 32-bit library files that fw1-loggrabber needs - from an older 32-bit server you have access to - to /lib and make version symlinks if necessary.)
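    A quick sanity check (a sketch, assuming the standard file and ldd tools that ship with most distributions):
    $ file ./fw1-loggrabber    # should report a 32-bit ELF executable
    $ ldd ./fw1-loggrabber     # any "not found" entries are the libraries you're missing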
  3. Decide where to install fw1-loggrabber. Personally, I prefer /usr/local but you could also use /opt. (Please respect FHS and LSB standards in making your decision. You never know who will be administering that server five years from now.) By default, fw1-loggrabber will install into /usr/local/fw1-loggrabber if you run the INSTALL.sh script that is included in the tarball.
  4. Install fw1-loggrabber using your preferred method.
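    For example, using the INSTALL.sh route mentioned above (a sketch; the tarball name here is a stand-in for whatever version you actually downloaded):
    $ tar xzf fw1-loggrabber-1.11.1-linux.tar.gz
    $ cd fw1-loggrabber-1.11.1
    $ sh INSTALL.sh    # installs to /usr/local/fw1-loggrabber by default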
  5. Based on my pre-requisites, I assume that you have a correctly functioning lea.conf file. Copy that file to /usr/local/fw1-loggrabber/etc. Copy your OPSEC p12 certificate file here, too. You should also have a functioning fw1-loggrabber.conf file. That one needs to change slightly. Make these edits:
    OUTPUT_FILE_PREFIX="/var/log/fw1-loggrabber"
    OUTPUT_FILE_ROTATESIZE=536870912
    ONLINE_MODE="yes"
    RESOLVE_MODE="no"
    Technically, you should be able to set the rotate size just a hair under 2GB, but that never worked out well for me: fw1-loggrabber would barf at about 750MB and fail to rotate the fw.log file. Moving on, I used a directory called "/var/log/fw1-loggrabber" for my logs. Do what you wish with yours. Next, you'll want to keep DNS resolution turned off. You'll lose real-time access if you don't, since DNS can easily add 5-10 seconds of delay to log processing. (If you really need DNS names, consider using a time-based field lookup.)
    A side note: there is a limit to the number of fields that can be brought in with fw1-loggrabber using the FIELDS stanza. I don't remember what that limit is, but the consequence is that any fields listed beyond the limit simply won't appear in your log output. (I think the problem lies in the length of the variable that holds the FIELDS data in the fw1-loggrabber source code.)
    You should be able to run the command
    $ /usr/local/fw1-loggrabber/bin/fw1-loggrabber -c /usr/local/fw1-loggrabber/etc/fw1-loggrabber.conf -l /usr/local/fw1-loggrabber/etc/lea.conf
    now. If everything goes well, a file should appear in /var/log/fw1-loggrabber with lots of Check Point data.
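    To confirm the feed really is near-real-time, watch the output file grow while traffic passes through the firewall (the exact filename depends on your OUTPUT_FILE_PREFIX setting, so the glob here is just an example):
    $ tail -f /var/log/fw1-loggrabber/*.log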
  6. Build a startup script in /etc/init.d or drop the command above into /etc/rc.local. It's up to you, but keep in mind the admin who eventually takes over your work. You'll make them happy if you respect FHS and LSB. I'll provide the SysVinit and systemd scripts that I wrote when I have a chance.
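    Until then, here's a minimal systemd unit sketch, assuming the install paths used above (the unit name and option choices are mine, not from an official package):
    # /etc/systemd/system/fw1-loggrabber.service
    [Unit]
    Description=Check Point fw1-loggrabber LEA client
    After=network.target

    [Service]
    ExecStart=/usr/local/fw1-loggrabber/bin/fw1-loggrabber -c /usr/local/fw1-loggrabber/etc/fw1-loggrabber.conf -l /usr/local/fw1-loggrabber/etc/lea.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    Enable and start it with "systemctl enable --now fw1-loggrabber.service".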
  7. Configure cron to restart fw1-loggrabber once in a while. My experience was that the fw1-loggrabber binary will only run for a few days before it randomly quits. Assuming you created a functioning initscript in the previous step, all you need to do is call that script with
    $ service fw1-loggrabber restart
    or
    $ systemctl restart fw1-loggrabber.service
    using cron once a day. Otherwise, you could probably run "pkill fw1-loggrabber" in cron, then rerun the command from step 5.
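    For example, a drop-in cron file (a sketch; the 03:30 schedule is arbitrary, so pick a quiet time in your environment):
    # /etc/cron.d/fw1-loggrabber
    30 3 * * * root /sbin/service fw1-loggrabber restart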
  8. Configure Splunk to monitor the newly created file (or the new directory). You'll also need to ensure that the appropriate field extractions are in place and that you've configured the correct source and sourcetypes for the new log file. You can borrow field extractions from the OPSEC LEA for Check Point app, if you'd like.
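    A minimal inputs.conf stanza might look like the following (a sketch; the sourcetype name is an example, so use whatever your field extractions and the OPSEC LEA app expect):
    [monitor:///var/log/fw1-loggrabber]
    sourcetype = opsec
    disabled = false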
  9. Create a procedure to manage/compress/delete all of the firewall data that is generated by the steps above. Since your Check Point Security Management server(s) have an official copy of this data (and your Splunk index has another copy of this data), you can safely delete the rotated fw1-loggrabber files on a schedule that is appropriate to you. Don't skip this step or you'll eventually have a full disk partition!
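    A simple cron-driven cleanup might look like this (a sketch; the seven-day retention and the filename pattern are examples, so match them to your rotation settings):
    # /etc/cron.d/fw1-loggrabber-cleanup
    0 4 * * * root find /var/log/fw1-loggrabber -type f -name '*.log' -mtime +7 -delete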

a212830
Champion

Is this a different tool from lea-loggrabber.sh? If so, how?


davecroto
Splunk Employee

It is important that you test this in your non-prod environment first. LogLogic uses a similar LEA client approach to the one Splunk uses (see attached). The problem could very well be that the config file has online mode set to no-online, which is the default. As far as I understand it, no-online mode does not enable real-time collection.

If you have your admin run help on fw1-loggrabber, they will see that there is an option for --online|--no-online. Again, please fully consider how online mode will affect production by fully testing it in a development environment first. If you have the horsepower on the box, you should be able to change this to --online and Splunk will index in real-time.

$SPLUNK_HOME/etc/apps/fw1-loggrabber/bin/fw1-loggrabber --help
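
For example, to run the binary in online mode by hand (a sketch; the -c and -l arguments should point at wherever your fw1-loggrabber.conf and lea.conf actually live):

$ $SPLUNK_HOME/etc/apps/fw1-loggrabber/bin/fw1-loggrabber --online -c fw1-loggrabber.conf -l lea.conf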


DaveSavage
Builder

Nice one Dave, and thanks! Voting this one up.


DaveSavage
Builder

Splunk supports an OPSEC Log Export API - take a look at:
http://splunk-base.splunk.com/apps/22386/opsec-lea-for-check-point-linux
Br
Dave

DaveSavage
Builder

I see that others have commented on the product integration, and with some success. Whether their needs are quite so immediate is not clear, but in this environment they usually are (ours are, or to quote some squirrely terms from contracts, 'as near real-time as possible').
I don't know the answer regarding speeding up the script (and I'm assuming you have thought of, or already use, forwarders?), but you may be best served by raising a ticket with Splunk Support, unless some of the other guys on here can comment? Good luck. I'll follow this with interest.


a212830
Champion

The script wakes up every XX seconds - that's the lag. The data is used by our security team (different group, different tool), who need it in real-time.


DaveSavage
Builder

Interesting... because we are about to do the same. Where is the 'lag'? What is your business driver? I ask because if it's slanted toward remote monitoring, e.g. traps/alarms, you could use the 'tool of your choice' and use Splunk as the single pane of glass with some integration work.


a212830
Champion

That's what we are using - it's not real-time.
