Monitoring Splunk

How to determine how long Splunk has been up?

wrangler2x
Motivator

Is there a command in Splunk, or some other way, to find out how long it has been up since the last restart?

1 Solution

wrangler2x
Motivator

You can use the rest API to get this information. Try this:

| rest /services/server/info | eval LastStartupTime=strftime(startup_time, "%Y/%m/%d  %H:%M:%S")
| eval timenow=now()
| eval daysup = round((timenow - startup_time) / 86400,0)
| eval Uptime = tostring(daysup) + " Days"
| table splunk_server LastStartupTime Uptime


woodcock
Esteemed Legend

This is a refinement of the answers by @lguinn2 and @tiny3001 (NOTE: you will have to edit the host= part):

index=_internal "splunkd started" AND NOT sourcetype=splunkd_remote_searches AND host=*-spl-*
| dedup host
| eval uptime = tostring(now() - _time,"duration")
| table host uptime

BDein
Explorer

Try this one:

index=_internal "Splunkd starting" sourcetype=splunkd component=loader AND host=* 
| append 
    [| search index=_internal "splunkd started" sourcetype=splunkd_stderr AND host=* 
        ] 
| eval st_{sourcetype}=1 
| stats count sum(st_*) AS * earliest(_time) AS firstTime latest(_time) AS lastTime BY host 
| eval uptime = tostring(now() - lastTime,"duration") 
| foreach *Time 
    [| eval <<FIELD>>=strftime(<<FIELD>>,"%Y-%m-%d %H:%M:%S")
        ] 
| table host count firstTime lastTime uptime *

isoutamo
SplunkTrust
As was said earlier, all queries against the _internal logs work only if those events are still on the indexers. Quite often the retention time for _internal is so short that you won't have them in any larger environment!
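
As a sketch of how to check that (assuming you are allowed to query the data/indexes endpoint), something like this shows the configured retention for _internal:

| rest splunk_server=local /services/data/indexes/_internal
| table title frozenTimePeriodInSecs maxTotalDataSizeMB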

lakromani
Builder

For this to work, you need to set the search time range long enough to catch the restart. In a big environment with many servers and lots of logs, this will be slow.
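
One way to bound that is to pin the time window explicitly and keep a single result per host. This is just a sketch, and the 90-day window is an assumption -- pick something longer than your longest-running instance:

index=_internal "splunkd started" NOT sourcetype=splunkd_remote_searches earliest=-90d@d
| stats latest(_time) AS last_start BY host
| eval uptime = tostring(now() - last_start, "duration")
| convert ctime(last_start)
| table host last_start uptime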


isoutamo
SplunkTrust
You probably need to extend the retention time for _internal to keep those events stored long enough to find them. In the general case, the REST approach is better for full Splunk Enterprise instances. Of course, this requires that you haven't disabled REST on the HF layer. For UFs, the only option is to keep those events in _internal for long enough.
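
Extending the retention is done in indexes.conf on the indexers. A minimal sketch, where the 90-day value is only an assumption to size for your own environment:

# indexes.conf on the indexers
[_internal]
# keep _internal events for 90 days before they roll to frozen
frozenTimePeriodInSecs = 7776000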

robfrey
Splunk Employee

Just happened to be looking for this very thing today and stumbled across this submission. I wouldn't have thought to query the REST API for this without checking here first, but it seemed a little obvious after reading the accepted solution -- that's what I love about strong user communities.

For what it's worth, here's my own slightly more direct SPL that produces roughly the same results as the accepted answer in case it helps anyone else.

| rest splunk_server=local /services/server/info 
| eval uptime=tostring(now() - startup_time, "duration")
| convert ctime(startup_time)
| table splunk_server, startup_time, uptime

 


robfrey
Splunk Employee

I should have mentioned that the SPL I provided only searches the REST API of the search head executing the search (splunk_server=local), which can easily be removed if desired.
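
For reference, this is roughly what it looks like with that option dropped -- a sketch, assuming the search head can reach its search peers over REST, in which case those peers appear in the results as well:

| rest /services/server/info
| eval uptime = tostring(now() - startup_time, "duration")
| convert ctime(startup_time)
| table splunk_server, startup_time, uptime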


richgalloway
SplunkTrust

This is the answer that should be accepted, IMO, @wrangler2x. The others, especially the one from @tiny3001, work, but only if Splunk was restarted recently. Once the logs have rolled enough times, the "splunkd started" message won't be found.

---
If this reply helps you, Karma would be appreciated.

dijikul
Communicator
| rest /services/server/info

This only shows indexers. What's the REST endpoint for startup time of all Universal Forwarders?


tiny3001
Path Finder

I know I'm resurrecting an old question, but the search is useful.

Except for one thing...

If you don't exclude a specific sourcetype, you also get results from your own searches that look for "splunkd started", which might confuse things.
So

index=_internal "splunkd started" NOT sourcetype=splunkd_remote_searches

Hope that helps someone.

kristian_kolb
Ultra Champion

Searching sourcetype=splunkd index=_internal, you will find a message like this:

10-08-2013 08:55:27.844 +0200 INFO  loader - Splunkd starting (build 143156).

NB: this is for version 5.x; I don't know if it differs in 6.x.

/K

sowings
Splunk Employee

6.0: 10-07-2013 08:33:05.380 -0700 INFO loader - Splunkd starting (build 182037).


lguinn2
Legend

Try this search:

index=_internal "splunkd started"

to find out when splunkd was last started. Note that you may also have to add host=zzzz if you want to restrict the search to a particular host.

If you really want only the uptime, try this:

index=_internal "splunkd started"
| head 1
| eval uptime = tostring(now() - _time,"duration")
| fields uptime

Sayanta_Basak_I
Explorer

I downvoted this post because it did not work.


dijikul
Communicator

This only works when your logs stretch far enough back to catch the startup.

If your forwarders stay online long enough, the logs roll and you lose the data, which is why the REST approach is supposedly better; however, I'm having trouble making that work in our hybrid environment, personally.


woodcock
Esteemed Legend

It also did not work because it had | field instead of | fields, but I just fixed that.
