In the world of software development, logging often takes a backseat to unit testing and documentation. But logging is a powerful tool for debugging in production, and it provides vital data on the real-world use of your application. When things stop working, it’s the data in the logs that both the dev and operations teams use to troubleshoot the issue and quickly fix the problem.
Monitoring application and Linux system logs is a skill that every seasoned SysAdmin has down cold. Logs provide a window into understanding the health of your systems, and they’re the first place to look when things aren’t working. But no matter how familiar you are with Linux log monitoring, even gurus of the command line can learn new tricks. Whether you’re an old hand or a relative newcomer, here are seven tips on how to monitor log files in Linux that you may have overlooked.
Amazon Web Services (AWS) comprises more than 90 services and covers everything from computing and storage to analytics and Internet of Things tools. Using these services to build applications at scale requires constantly monitoring the entire software stack to make sure the wheels keep turning. But it’s when issues arise, and the wheels come off, that every developer puts their AWS logging setup to the test.
No matter how much logging you collect from other sources, some issues can only be diagnosed with application-level logging. As your application scales and you run into more server crashes, network failures, and intermittent bugs, logs become even more important. Beyond application details, logs can even include business intelligence data to help you make better business decisions.
Like most developers, you’ve probably seen the benefits of logging first-hand—the right log message can be the key to unlocking the trickiest of software issues. But not all logging is created equal, and actionable logs don’t magically appear. If you want the very best logs, you need to optimize your logging using tried-and-true best practices.
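As a small sketch of what "actionable" means in practice, compare a vague log message with one that carries severity and context. This uses Python's stdlib `logging`; the `payments` component and the `charge` function are hypothetical examples, not a prescribed pattern:

```python
import logging

logging.basicConfig(
    level=logging.WARNING,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

# "payments" is a hypothetical component name for this sketch.
logger = logging.getLogger("payments")

def charge(card_id, amount_cents, attempt):
    # Vague and hard to act on:
    #   logger.warning("charge failed")
    # Actionable: an appropriate severity level plus the identifiers
    # needed to find the affected record and reproduce the problem.
    logger.warning(
        "charge declined: card=%s amount_cents=%d attempt=%d",
        card_id, amount_cents, attempt,
    )

charge("card_123", 2500, 1)
```

The second message can be grepped, parsed, and correlated across requests; the first can only tell you that something, somewhere, went wrong.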
Looking for an easy-to-use web framework that provides high-level building blocks to go from concept to production quickly? One popular choice is Django, a Python web development framework originally created at World Online, a newspaper web operation, to rapidly build and launch websites supporting news stories.
If there’s one thing developers can’t get enough of, it’s tools. Azure, the cloud computing platform from Microsoft, offers over 100 services and tools for building infrastructure and running software apps. Each of these services produces log files in various formats, containing a slew of important details such as service health and status, performance metrics, and resource access requests—everything developers need to monitor Azure and keep their apps running smoothly.
Life would be much simpler if applications running inside containers always behaved correctly. Unfortunately, as every sysadmin and developer knows, that is never the case. When things inevitably start going wrong you need diagnostic information to figure out how and why. Being able to gather useful information from your Docker container logs can mean the difference between a minor issue and a critical outage.