My search displays timestamps, but when I change it to the following to get a clearer picture, I lose the timestamps. This is my search:
eventtype=failed_login | stats count by username, host | search count > 3
These are the results without the pipes:
10/26/15 PM 2015-10-26T12:24:22-05:00 eagnmnmedaec sshd[32213]: error: PAM: Authentication failure for xxxxx from 56.207.248.22
12:24:22.000 PM host = eagnmnmedaec source = /var/log/warn sourcetype = warn-2
The xxxxx is the user ID. How do I pull the date/time stamp into these results? Is there any way to make it real-time? I've only had one class on this, so I am a little naïve.
You're doing a count over a time frame, so if you add the timestamp to the by clause, the count will almost always be 1, because only one failure happened at each exact time. But you can do it:
eventtype=failed_login | stats count by username, host, _time | search count > 3
If you want to keep the count over the timeframe you are searching you can show the latest or earliest timestamp for the count of events like this:
eventtype=failed_login | stats latest(_time), count by username, host | search count > 3
EDIT:
Clean up the timestamp using convert:
eventtype=failed_login | stats latest(_time) AS time, count by username, host | search count > 3 | convert ctime(time)
I am not sure these reports are running every 5 minutes, 1 hour, or 24 hours as I think I set them up. I tried to trip an alert by logging in wrong but didn't get anything on my dashboard. If I set up "send email" I get one every 5 minutes, but I think it is not re-running the search; I'm just getting the same PDF every 5 minutes.
Do I need to set these up as alerts? I was hoping to use the dashboard. Here is one of the dashboard searches:
eventtype=failed_login earliest=-24h | stats latest(_time) AS time, count by username, host | search count > 3 | convert ctime(time)
I need this to run every 24 hours, and the same for the 1-hour and 5-minute dashboards.
I clicked the autorun option while editing the dashboards. I found some material in the XML reference manual on refresh, but no examples.
thanks
Is there a search parameter that only gets new errors, not historical or old values? I want to see what is hitting the servers with logins right now.
Thank you....
Here's what I like to do....
Save this search as a panel on a dashboard and have it auto-refresh every x minutes looking back at the last x minutes (say, 5 minutes).
Clone this panel and save a new panel with auto-refresh every 5 minutes looking back at the last hour.
You can add a third showing last 24 hours. Now you have a dashboard that shows you all the logins in the last 5 minutes, last hour and last day, and the panels will auto-refresh so you can just keep your window open.
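As a sketch of what that dashboard might look like in Simple XML: each panel's search carries its own time range via earliest/latest elements, and (if I recall the 6.x Simple XML reference correctly; treat the refresh attribute as an assumption to verify against the docs) a refresh attribute on the root element reloads the page every N seconds:

<dashboard refresh="300">
  <label>Failed Logins</label>
  <row>
    <panel>
      <title>Last 5 minutes</title>
      <table>
        <search>
          <query>eventtype=failed_login | stats latest(_time) AS time, count by username, host | search count > 3 | convert ctime(time)</query>
          <earliest>-5m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

Clone the panel twice and change the earliest value to -60m and -24h for the one-hour and one-day views.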
From the one little online course I took, I know there is a way to save the clones alongside the original so all three can exist together on the screen. What am I doing wrong when trying to save them to the original dashboard?
Also, the search is finding logins that are older than the time range implies. Here is my search; I know it's missing a field.
eventtype=failed_login | stats latest(_time) AS time, count by username, host | search count > 3 | convert ctime(time) | sendemail to=James.M.Lynn@usps.gov MAPIEAGN.usps.gov subject="Failed Logins" message="These are failed logins" sendresults=true inline=true format=table sendpdf=true
You need to save your search as a dashboard panel on a new dashboard, then edit the dashboard to clone the panel and update the timerange options.
See: http://docs.splunk.com/Documentation/Splunk/6.3.0/SearchTutorial/Createnewdashboard
For the second issue, assuming MAPIEAGN.usps.gov is the mail relay, you need to use server=MAPIEAGN.usps.gov.
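For example, your search from above with the relay passed to sendemail via the server option (a sketch; adjust the relay host to your environment):

eventtype=failed_login | stats latest(_time) AS time, count by username, host | search count > 3 | convert ctime(time) | sendemail to=James.M.Lynn@usps.gov server=MAPIEAGN.usps.gov subject="Failed Logins" message="These are failed logins" sendresults=true inline=true format=table sendpdf=true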