Learning: Logs in the 12-Factor App

It describes three approaches:

  1. Capture logs for troubleshooting. The traditional way stores logs in a `log.txt` file (where exactly depends on the system); the problem is that if the Docker container goes away, the logs are gone with it.
  2. Push the logs to a centralized location. The problem here is that `log.txt` is tightly coupled to the logging system.
  3. The application writes all its logs to standard output (or to a local file) in a structured JSON format; an agent then picks them up and ships them to the central location.
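The last approach above (structured JSON to standard output) can be sketched with Python's standard `logging` and `json` modules. This is a minimal illustration, not a prescribed implementation; the field names are my own choice. Note the app only writes to stdout and knows nothing about where the logs end up:

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object on one line."""
    def format(self, record):
        return json.dumps({
            "time": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Write to standard output, not to a log.txt the app has to manage.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created")
# emits one JSON line, something like:
# {"time": "2024-…", "level": "INFO", "logger": "orders", "message": "order created"}
```

Because each line is self-describing JSON, any agent tailing the container's output can parse it without knowing anything about this application.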

I am not able to understand cases 2 and 3. Case 3 also seems tightly coupled: if the Docker image is gone, everything is gone, so how do we retrieve the logs?

  1. It is tightly coupled because the application is writing to a file. You need another process in the same container (or pod, in Kubernetes) to read those files when the application closes them and send them to the central log location (normally something like Elasticsearch). The log collector therefore has to know about the app and the log files it produces.
  2. Writing logs to standard output (effectively printing messages to the screen) in containers means that the container logging system collects them (`docker logs`, etc.). A separate container runs a process such as Filebeat, which knows how to collect container logs from all containers in the cluster in real time and send them to the central logging system. This is known as collecting streaming logs, because the logs form a continuous stream that can be consumed in real time as it is produced. It is the preferred way to do it in containerized systems such as Docker and Kubernetes. It also removes the coupling: the app does not need to know that a logging agent (like Filebeat) is collecting its logs, and Filebeat does not need to know anything about the apps it is collecting logs for.
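To make the decoupling concrete, here is a toy stand-in for a streaming collector (purely illustrative; real Filebeat works very differently). The only contract is "one JSON object per line on the stream", so the collector needs no knowledge of any particular app:

```python
import json

def collect(stream):
    """Consume a stream of JSON log lines from any container.

    The collector assumes only one JSON object per line; it knows
    nothing about the application that produced them.
    """
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Simulated container log stream; in Docker this would be the
# container's stdout as captured by the logging driver.
sample = [
    '{"level": "INFO", "message": "order created"}',
    '{"level": "ERROR", "message": "payment failed"}',
]

for record in collect(sample):
    # Stand-in for shipping the record to a central store such as Elasticsearch.
    print("shipping:", record["message"])
```

This also answers the retrieval question: because the agent forwards each line as it is produced, the logs are already in the central store by the time the container disappears.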