I am trying to create a baseline for average outbound connections per day/week/month. I started with this as my search:
source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16) | stats count by src, dst, srcprt | stats avg(count) by 1d@d*
However, it doesn't return any results. The fields are correct, and the search shows a table listing dst, src, and count when I remove everything after the last pipe.
My goal is to return three fields: average connections per day, per week, and per month, which I can then write to a flat file.
Any help would be greatly appreciated.
The problem is the last clause: the by clause of stats takes field names, and 1d@d* isn't a field, so the grouping matches nothing and you get no results. Bucket _time into daily bins first, then average the per-day counts. Try this:
source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1d
| stats count by _time
| stats avg(count) AS dailyAvg
| appendcols
[search source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1w
| stats count by _time
| stats avg(count) AS weeklyAvg]
| appendcols [search source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1mon
| stats count by _time
| stats avg(count) AS monthlyAvg]
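Each subsearch returns a single row, so appendcols simply glues the three averages into one row with fields dailyAvg, weeklyAvg, and monthlyAvg. Since the goal is a flat file, you can finish the search with outputcsv (the file name here is just an example; the CSV lands under $SPLUNK_HOME/var/run/splunk/csv on the search head):
| outputcsv outbound_baseline.csv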
Or if more breakout is desired, perhaps this:
source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1d
| stats count by src, dst, srcprt, _time
| stats avg(count) AS dailyAvg BY src, dst, srcprt
| appendcols
[search source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1w
| stats count by src, dst, srcprt, _time
| stats avg(count) AS weeklyAvg BY src, dst, srcprt]
| appendcols
[search source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1mon
| stats count by src, dst, srcprt, _time
| stats avg(count) AS monthlyAvg BY src, dst, srcprt]
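One caveat with the broken-out version: appendcols pastes columns together by row position, not by the src/dst/srcprt values, so if the three subsearches return the tuples in a different order, or a tuple shows up in one span but not another, averages can land on the wrong rows. A safer sketch merges on the tuple with append plus a final stats (same base search assumed):
source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1d
| stats count by src, dst, srcprt, _time
| stats avg(count) AS dailyAvg BY src, dst, srcprt
| append
[search source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1w
| stats count by src, dst, srcprt, _time
| stats avg(count) AS weeklyAvg BY src, dst, srcprt]
| append
[search source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket _time span=1mon
| stats count by src, dst, srcprt, _time
| stats avg(count) AS monthlyAvg BY src, dst, srcprt]
| stats values(dailyAvg) AS dailyAvg values(weeklyAvg) AS weeklyAvg values(monthlyAvg) AS monthlyAvg BY src, dst, srcprt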
Try something like this:
source=fwlogs earliest=-2mon@m latest=@m NOT (dstip=10.0.0.0/8 OR dstip=172.16.0.0/12 OR dstip=192.168.0.0/16)
| bucket span=1d _time
| stats count by src, dst, srcprt, _time
| appendpipe
[stats sum(count) as count by _time
| stats avg(count) as av
| eval ReportType="DailyAvg"
| eval Include="Y"]
| appendpipe
[bucket span=1w _time
| stats sum(count) as count by _time
| stats avg(count) as av
| eval ReportType="WeeklyAvg"
| eval Include="Y"]
| appendpipe
[bucket span=1mon _time
| stats sum(count) as count by _time
| stats avg(count) as av
| eval ReportType="MonthlyAvg"
| eval Include="Y"]
| where Include="Y"
| xyseries Include ReportType av
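Note that appendpipe runs its subpipeline over the rows already in the result set, so the raw events are scanned only once, whereas the appendcols version above runs the base search three times. The xyseries at the end pivots the three appended rows into a single row with DailyAvg, WeeklyAvg, and MonthlyAvg columns; to drop the helper column before writing the row out, you can finish with something like this (the file name is just an example):
| fields - Include
| outputcsv outbound_baseline.csv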
This works as intended, thank you! However, I've noticed it's too slow for my purposes, so I'm going to use data models 🙂
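Follow-up for anyone who lands here: once the firewall data is in an accelerated data model, the same daily baseline can come from tstats, which reads the pre-built summaries instead of the raw events. A sketch assuming a CIM-style Network_Traffic model (the model and field names, such as All_Traffic.dest_ip, are assumptions; substitute your own):
| tstats count from datamodel=Network_Traffic where NOT (All_Traffic.dest_ip=10.0.0.0/8 OR All_Traffic.dest_ip=172.16.0.0/12 OR All_Traffic.dest_ip=192.168.0.0/16) by _time span=1d
| stats avg(count) AS dailyAvg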