Hi,
Can I please know how to calculate the log size per day for a specific source or sourcetype reporting to Splunk?
@kteng2024, several options have been given to you for calculating log size per day by sourcetype. Is your issue resolved?
Hi @kteng2024,
You can try the query below:
| dbinspect index=xyz
| fields - size
| eval date_s=strftime(startEpoch,"%d/%m/%y")
| eval date_e=strftime(endEpoch,"%d/%m/%y")
| stats count sum(sizeOnDiskMB) AS size sum(eventCount) AS eventcount by date_e, path
| eval sizeinGB=round(size/1024,2)
| fields - size
Check out the Monitoring Console within Splunk for License Usage.
Also check out the Meta Woot app from Splunkbase.
Hey @kteng2024,
If you want to calculate log size per day for a specific sourcetype, try the query below:
index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by st fixedrange=false
| join type=outer _time
    [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | stats latest(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
If you want to calculate log size per day for a specific source, try the query below:
index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by s fixedrange=false
| join type=outer _time
    [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | stats latest(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
Let me know if it helps you!
This should get you started.
index=foo source=bar | bin span=1d _time | stats sum(eval(len(_raw))) as TotalSize by _time
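If you need the daily total in MB rather than raw bytes, one possible extension (the field name TotalMB is illustrative, and index=foo / source=bar are placeholders as above):

index=foo source=bar
| bin span=1d _time
| stats sum(eval(len(_raw))) as TotalSize by _time
| eval TotalMB=round(TotalSize/1024/1024, 2)

Note this measures the raw event length at search time, which may differ slightly from licensed ingest volume.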
Hi,
This functionality is built into the License Usage report, accessible either from the License settings page (choose a field from the Split By drop-down) or from Monitoring Console > Indexing > License Usage.
You can easily open any of the prebuilt panels in search and modify the query to suit your needs.
index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by st fixedrange=false
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024, 2)]
Take note of the stats line. It has the optional split fields you want. Modify the timechart line below it with the specific split you're looking for: s = source, st = sourcetype, h = host, idx = index.
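For example, to split the daily volume by source instead of sourcetype, swap st for s in the timechart line of the query above (everything else stays the same):

| timechart span=1d sum(b) AS volumeB by s fixedrange=false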
Hello,
I know this is a very old post, but it is close to what I'm looking for.
I'm trying to extract the log volume per sourcetype. The query below works fine, but it groups all "small" sourcetypes into an "other" column. How can I show all sourcetypes in the result?
index=_internal (host=*.*splunk*.* NOT host=sh*.*splunk*.*) source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| timechart span=1d eval(round((sum(b)/pow(2,30)),3)) AS Volume by st
| append
[| search (index=summary source="splunk-entitlements")
| bin _time span=1d
| stats max(ingest_license) as license by _time]
| stats values(*) as * by _time
| rename license as "license limit"
| fields - volume
That is Splunk's default timechart behavior: low-volume series get rolled up into an OTHER column. To display every sourcetype, you need to suppress the NULL (if any) and OTHER series.
Use:
useother=f
usenull=f
in your SPL.
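Applied to the timechart line in your query above, that would look like the following (limit=0 additionally lifts timechart's default cap of 10 series, which is usually also needed to see every sourcetype):

| timechart span=1d limit=0 useother=f usenull=f eval(round((sum(b)/pow(2,30)),3)) AS Volume by st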