I had a crash yesterday, so I am setting up a watchdog. I have Splunk monitoring 2 directories. I confirmed the files are there (they get copied hourly), but when I search, say, the past 7 days, they're not there. I am trying to learn as I go.
Index overview looks fine:
main     129447925
splunk2  16308102022
Index health shows just one bucket, warm-5 (I am not sure about this, but there are no events near this week's dates).
Index Volume for the last 24 hours shows no results found; the past week shows 536M. From memory, the free version allows 500 MB per day, so I don't think that is the issue either.
There is some data from today since I restarted, but is there something that runs at night that will fill in the missing events, or another way to tell Splunk, "hey, there are files here you didn't index"? The fact that the past 24 hours shows no results is the disturbing part.
So any help on that is appreciated. I am getting back into the app, so look forward to more intelligent questions soon!
You really haven't given enough detail to provide much of an answer. It's possible your timestamps aren't being recognized properly, or some buckets in your index are corrupt and are therefore not being seen by Splunk (although I've never really seen that happen).
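If timestamp recognition turns out to be the problem, you can make it explicit in props.conf rather than relying on auto-detection. This is only a sketch; the sourcetype name and timestamp format below are assumptions you would replace with your own:

```
# props.conf -- sourcetype name and TIME_FORMAT here are hypothetical;
# match them to your actual data.
[my_hourly_logs]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Events whose timestamps Splunk misparses get assigned the wrong event time, so they land outside your search window and look "missing" even though they were indexed.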
If you exceeded the license limit too many times, interactive search will be shut down, but you'll see a message indicating that specifically. Even if that happens, your data still gets indexed; it simply isn't searchable.
Two general suggestions:
index=_internal source=*splunkd* ERROR OR WARN
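To force Splunk to pick up files it skipped, one option is a one-shot add from the CLI; you can also run the internal-log search there. This is a sketch only: the file path and index name are placeholders, and both commands require authenticating to your Splunk instance.

```shell
# Search splunkd's own logs for recent errors/warnings
# (CLI equivalent of the search above).
splunk search 'index=_internal source=*splunkd* ERROR OR WARN earliest=-24h'

# Tell Splunk to index a specific file it missed. Caution: if the file
# was already indexed once, this creates duplicate events.
splunk add oneshot /path/to/missed/file.log -index main
```

Before re-adding anything, it's worth checking the internal logs first; they usually say why a monitored file was skipped (e.g. it was seen as a duplicate of an already-indexed file).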