elasticsearch - Log shipping from Tomcat cluster


I'm investigating an issue with our logging system and I'm looking for input on possible solutions to the problem. This is what we have now:

  • A cluster of 6 Tomcats, with logging (log4j2) configured to use a SocketAppender
  • The listener for these is a Logstash agent that puts the logged events onto Redis
  • Another Logstash agent picks the entries off Redis and pushes them to Elasticsearch
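
For reference, a minimal sketch of what those two Logstash stages might look like, assuming the listener receives events over TCP as JSON lines; the port, host names, and Redis key below are hypothetical placeholders, and the option names assume Logstash 2.x:

```
# shipper stage: listens for events from the appenders and queues them on Redis
input {
  tcp {
    port  => 4560          # hypothetical listener port
    codec => json_lines    # assumes one JSON event per line from the appender
  }
}
output {
  redis {
    host      => "redis.example.org"
    data_type => "list"
    key       => "tomcat-logs"
  }
}
```

```
# indexer stage: pops entries off Redis and pushes them to Elasticsearch
input {
  redis {
    host      => "redis.example.org"
    data_type => "list"
    key       => "tomcat-logs"
  }
}
output {
  elasticsearch {
    hosts => ["es.example.org:9200"]
  }
}
```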

The problem we have is that at times the client sockets (the log4j loggers) wait indefinitely, causing the application to become unresponsive. One of the suggested solutions is to move away from socket appenders and use a local file instead (we don't need "instant" log info in Kibana). A Logstash agent would then be configured to read the 6 files (one per instance) and push them straight to Elasticsearch. Can you suggest disadvantages of this approach, other than having 6 files defined in the input configuration of Logstash? Are there other options you can suggest? Thanks in advance.
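
For illustration, a minimal sketch of what the file-based pipeline might look like; the paths, host, and index name are hypothetical, and a single glob pattern can cover all six instance files so you don't need six separate file inputs (option names assume Logstash 2.x):

```
input {
  file {
    # one glob covers all six Tomcat instance logs; the path is hypothetical
    path           => "/var/log/tomcat/instance-*/app.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["es.example.org:9200"]
    index => "tomcat-logs-%{+YYYY.MM.dd}"
  }
}
```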

I would not use a SocketAppender if I had the choice. One issue is the one you mentioned, and another one, if you're on Logstash 1.5.x or before, which I find even more troublesome, is that the exact timestamp of the event (as created by log4j2) is not conveyed. That means the timestamp of a log line is the arrival time of the line in Logstash instead of the time at which the line was created by the application. If you're aggregating logs from different apps/servers/subsystems in your stack, it's a hassle to make sense of the temporality of events. This was fixed in Logstash 2.0, but it's still worth mentioning.
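
As an example, a minimal filter sketch that recovers the application's own timestamp from the log line, assuming a pattern layout that starts with an ISO8601-style timestamp (the field names below are hypothetical):

```
filter {
  grok {
    # pull the timestamp written by log4j2 out of the raw line
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    # overwrite @timestamp with the time the app logged the event,
    # not the time the line reached Logstash
    match  => [ "logtime", "yyyy-MM-dd HH:mm:ss,SSS" ]
    target => "@timestamp"
  }
}
```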

Aside from that, there are at least 3 reasons for storing logs in a file on the filesystem instead of shipping them directly via TCP (a minimal log4j2 file-appender sketch follows the list):

  1. Your log files act as a de facto backup of your log events, i.e. you can replay those files anytime you wish
  2. Your application doesn't depend on a synchronous sub-system for sending its log lines; it simply sinks them to its own file system
  3. Your Logstash can be down (upgrade, network connectivity, etc.) and your app is still able to produce logs, no matter what's going on further down the chain.
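
Here is a minimal log4j2 configuration sketch using a RollingFile appender instead of the SocketAppender; the file path, appender name, and layout are assumptions, not your actual setup, and in practice each Tomcat instance would get its own file:

```
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <!-- hypothetical path; one file per Tomcat instance -->
    <RollingFile name="AppFile"
                 fileName="/var/log/tomcat/instance-1/app.log"
                 filePattern="/var/log/tomcat/instance-1/app-%d{yyyy-MM-dd}.log.gz">
      <!-- timestamp first, so Logstash can parse it back out (see filter above) -->
      <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%t] %c - %m%n"/>
      <Policies>
        <TimeBasedTriggeringPolicy/>
      </Policies>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="AppFile"/>
    </Root>
  </Loggers>
</Configuration>
```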
