
[{"content":" My Story # Last year (2025), I had the opportunity to speak at the AWS Community Day NL event. The year before, in 2024, I joined the same event as a volunteer.\nIn 2025, I also applied to be a speaker at AWS Summit Amsterdam, but my application was not accepted. After that, I asked a friend for feedback on what makes a strong speaker application. He provided detailed and valuable insights, which helped me improve my approach. Then, I was selected as a speaker for AWS Community Day NL 2025.\nThis time, I presented on Modern Logging in AWS. I chose this topic because I believe it is highly relevant and interesting, especially in a community-driven event. The session was designed for a total of 30 minutes, consisting of a 20-minute presentation followed by a 10-minute quiz. The quiz included a €10 prize for the participant with the highest score.\nMonitoring vs Observability # Monitoring and observability are often used interchangeably, but they are not the same.\nMonitoring tells you when something is wrong in a system. It focuses on detecting issues, usually through predefined metrics, logs, or alerts. In other words, monitoring answers the question: “Is something broken?”\nObservability goes a step further. It helps you understand what is happening inside a system and why it is happening. With observability, you are not only aware that a problem exists, but you also gain the context needed to investigate and fix it. It answers questions like: “What is happening, and how can I resolve it?”\nAWS # In the AWS ecosystem, observability is built through a combination of native services, Application Performance Monitoring (APM) tools, and open-source solutions. These tools work together to collect, correlate, aggregate, and analyze data across different layers of the system.\nThis includes signals from networks, infrastructure, applications, and services running in cloud environments, hybrid setups, or even on-premises systems. 
By combining these data sources, AWS observability solutions provide a unified view of the entire system.\nThe main goal is to gain deeper insight into system behavior, performance, and overall health. With this insight, teams can detect issues earlier, investigate root causes more effectively, and remediate problems faster.\nLogs vs Metrics # Logs and metrics are two fundamental pillars of system observability, but they serve very different purposes. Logs are detailed records of events that occur within a system, application, or device. They capture what happened at a specific point in time, often including a timestamp, severity level (such as info, warning, or error), a descriptive message, and additional context like user ID, process ID, or request details. For example:\n2025-09-15 10:32:15 INFO User 123 logged in from 192.168.1.5\nBecause of this structure, logs are qualitative and descriptive. They help reconstruct the story of events, making them especially useful when investigating issues or understanding the sequence of actions that led to a problem.\nMetrics, on the other hand, are numerical measurements that describe the performance or state of a system. Examples include CPU usage at 65%, request latency of 200 ms, or 1,245 active users.\nUnlike logs, metrics are collected and aggregated over regular intervals. This makes them ideal for tracking trends, spotting patterns, and monitoring system health over time.\nIn short, metrics are quantitative and numerical — they measure how much or how often something happens — while logs provide the detailed context needed to understand what actually happened.\nZerolog # Zerolog is a lightweight logging library designed specifically for producing structured JSON logs. Unlike traditional logging libraries that output plain text, Zerolog focuses on structured logging, where every log entry is formatted as a JSON object. 
This makes logs easier to parse, search, and analyze using modern observability tools.\nIt provides a simple and fluent API that allows developers to log messages at different severity levels such as debug, info, warn, error, and fatal. The API is designed to be minimal and fast, reducing overhead while still offering rich contextual logging.\nOne of Zerolog’s strengths is its performance. It is optimized for high-throughput applications where logging overhead must be kept very low. This makes it a popular choice in systems where performance is critical, such as microservices and distributed systems.\nBecause it produces structured logs by default, Zerolog is a strong fit for modern observability practices. JSON-formatted logs can be easily ingested by log aggregation tools, search systems, and observability platforms, enabling better filtering, correlation, and analysis across services.\nIn short, Zerolog is a good choice when you need fast, structured, and machine-readable logging that integrates well with modern cloud-native environments.\nLeveled Logging # Leveled logging is one of those concepts that often confuses developers, especially when deciding which level to use in different environments.\nIn practice, log levels help you control the amount and type of information your application produces. I typically use debug logs only during local development. They are very detailed and useful for understanding internal behavior while building or testing features.\nIn production, I reduce log noise by switching to higher log levels such as info or error. 
This helps keep logs meaningful and easier to analyze, while also reducing unnecessary storage and processing costs.\nHowever, log levels are not strict rules; you can temporarily enable debug or trace logs in production when investigating a live issue, as long as it is controlled and time-bound.\nWhy Logging Matters # Logging is not just a technical detail; it plays a critical role in how modern systems are operated and maintained.\nFaster debugging: Logs are usually the first source of truth when something breaks. They provide immediate clues about what went wrong and where to start investigating.\nSystem visibility: In distributed systems and cloud environments, logs help you understand how services behave in real production conditions, not just in local tests.\nCost control: Poor logging practices can generate massive volumes of unnecessary data, which directly impacts storage and observability costs, especially in cloud platforms like AWS.\nCompliance and auditing: Logs also serve as a record of events. They can be used as evidence for auditing, security investigations, and compliance requirements.\nCommon Problems with Logs # Logging is essential, but without a clear strategy, it can quickly create more problems than it solves. Here are some of the most common challenges teams face.\nCost explosion: Logs can grow very quickly, especially in high-traffic systems. Without proper filtering, retention policies, or log level control, storage and ingestion costs can skyrocket, particularly in cloud environments where you pay for both data volume and processing.\nInconsistent formats: When different services use different log formats, it becomes difficult to correlate data across systems. One service might log plain text, another JSON, and another with completely different field names. This inconsistency makes searching, parsing, and analyzing logs much harder.\nMissing context: Logs without enough context are often useless. 
If there are no timestamps, request IDs, user identifiers, or service names, it becomes difficult to trace what actually happened. Context is what turns logs into actionable insights.\nScattered logs: When logs are spread across multiple systems, servers, or environments without centralization, troubleshooting becomes slow and inefficient. Engineers may need to jump between tools or machines just to piece together a single issue.\n👉 Full story: open my presentation\nNova\n","externalUrl":null,"permalink":"/docs/getting-started/","section":"Docs","summary":"My Story # Last year (2025), I had the opportunity to speak at the AWS Community Day NL event. The year before, in 2024, I joined the same event as a volunteer.\nIn 2025, I also applied to be a speaker at AWS Summit Amsterdam, but my application was not accepted. After that, I asked a friend for feedback on what makes a strong speaker application. He provided detailed and valuable insights, which helped me improve my approach. Then, I was selected as a speaker for AWS Community Day NL 2025.\n","title":"Modern Logging in AWS","type":"docs"},{"content":"","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","externalUrl":null,"permalink":"/docs/","section":"Docs","summary":"","title":"Docs","type":"docs"},{"content":"Welcome 👋\nThis is my documentation website.\nSections # 📘 Docs 🧪 Examples ","externalUrl":null,"permalink":"/","section":"My Docs Site","summary":"Welcome 👋\nThis is my documentation website.\nSections # 📘 Docs 🧪 Examples ","title":"My Docs Site","type":"page"},{"content":"Welcome 👋\nThis is my documentation website.\nSections # 📘 Docs 🧪 Examples ","externalUrl":null,"permalink":"/about/","section":"My Docs Site","summary":"Welcome 👋\nThis is my documentation website.\nSections # 📘 Docs 🧪 Examples ","title":"My Docs 
Site","type":"about"},{"content":"","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"}]