I think this is just because the global number accounts for all indexes, and the search in the screenshot is only of index=mail.
Even if you've only set up data inputs for the "mail" index, you may have other data indexed into the default index of "main", and Splunk always indexes a small but steady amount of metrics and other data into index="_internal".
If the _internal data is affecting that global number, that would be less than ideal, but hopefully it isn't. To check, log in as admin (otherwise you cannot see the internal index) and search for
index=_internal | head 100
and see if the latest event there matches the time shown on the dashboard. If it does, I think it's best considered a bug, and we can open a support case for it and try to get it fixed.
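To see at a glance which indexes are contributing events (and so whether _internal or main could explain the discrepancy with the global number), a quick sketch using the eventcount generating command:

    | eventcount summarize=false index=* index=_*
    | stats sum(count) as total by index
    | sort - total

This lists every index (including internal ones) with its event count, so you can compare the per-index totals against the number the dashboard reports. Note that eventcount counts all events regardless of the time range picker, so treat it as a rough cross-check rather than an exact match.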
The reason is that the Global Summary, by default, counts the events stored in the "main" index. In the example above, the more recent events were coming from another index ("mail" in this particular example) that was not set up correctly under Roles.
To change that: