We are using microservices in our system. Each service performs its task and calls other services. We log these services' requests/responses with the same unique identifier. What do you suggest for viewing this flow end to end in a single query, with inputs and outputs in the same row ordered by timestamp? We need to monitor our system from beginning to end.
Or perhaps you can suggest a different way of logging to achieve this goal; we are open to any suggestion.
Relying on timestamps in a distributed system may not be good enough, because you will probably never be able to guarantee exact timestamp synchronization. A better approach for reconstructing the flow later in a search is to log parent/child unique IDs:
Seq  Parent  Current
1    null    id1
2    id1     id2
3    id2     id3
4    id2     id4
etc.
This will allow you to show not only the call sequence but also parallel execution paths (see Seq 3 & 4 above), if your system allows those to happen.
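As a minimal sketch of the parent/child idea, here is a single-process simulation in Python. The `new_span` helper and the in-memory `trace` list are hypothetical; in a real deployment the parent ID would travel between services in a request header and each record would go to your log pipeline instead of a list:

```python
import json
import uuid

trace = []  # collected records; in practice each line goes to the log pipeline

def new_span(parent_id=None):
    # Hypothetical helper: in a real system the parent id arrives in a
    # request header rather than a function argument.
    current_id = uuid.uuid4().hex[:8]
    record = {"seq": len(trace) + 1, "parent": parent_id, "current": current_id}
    trace.append(record)
    print(json.dumps(record))
    return current_id

# Simulated call chain: A calls B; B fans out to C and D in parallel.
a = new_span()    # Seq 1: root, parent is null
b = new_span(a)   # Seq 2: child of A
c = new_span(b)   # Seq 3: child of B
d = new_span(b)   # Seq 4: child of B, parallel with C
```

Searching the emitted records by `current`/`parent` then reconstructs the tree regardless of clock drift, which is exactly what a sequence built on timestamps alone cannot guarantee.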
But having a uniqueId in all log messages will be enough to show everything that happened in context, just potentially not in the exact sequence if there is clock drift between systems.
Read this: http://dev.splunk.com/view/logging-best-practices/SP-CAAADP6
And yes, if you want to be able to easily identify transactions later, then include the transaction identifier in each log entry - that way you're not at the mercy of incorrectly configured NTP services (although definitely include the timestamp too, as precise as your system allows).
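One way to stamp every entry with both is a `logging.Filter` that injects the transaction ID, plus a formatter with a sub-second timestamp. This is a sketch, not your stack's API; the `txn_id` field name is an assumption, and the `StringIO` stream is only there so the example is self-contained:

```python
import io
import logging
import uuid

class TransactionFilter(logging.Filter):
    """Attach the current transaction id to every record.

    `txn_id` is a hypothetical field name; use whatever correlation
    field your log pipeline expects.
    """
    def __init__(self, txn_id):
        super().__init__()
        self.txn_id = txn_id

    def filter(self, record):
        record.txn_id = self.txn_id
        return True

stream = io.StringIO()  # captured here for the demo; normally stderr or a file
handler = logging.StreamHandler(stream)
# ISO-style timestamp with milliseconds -- as precise as stdlib logging formats.
handler.setFormatter(logging.Formatter(
    fmt="%(asctime)s.%(msecs)03d txn=%(txn_id)s %(levelname)s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S",
))

log = logging.getLogger("service")
log.setLevel(logging.INFO)
log.addHandler(handler)
log.addFilter(TransactionFilter(uuid.uuid4().hex))

log.info("order received")  # every entry now carries txn= and a timestamp
output = stream.getvalue()
print(output.strip())
```

With the ID baked into every line, a single search on `txn=<id>` pulls the whole transaction, and the timestamps only have to order events, not identify them.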