Comments by @retagainez on the "Mastering Your Microservices With Observability" video (Continuous Delivery channel)
Well, as Dave mentioned, microservice developers might have different standards from team to team. If logging is done separately from the data model, the logged data is most useful when it's all connected in some form (e.g. correlation IDs, so you can query across it), and that can require disciplined teams. I agree it's lazy. In my experience, you could build a partial picture from the logs and query it with something like Elasticsearch, but it was hardly ever conclusive; mostly it scratched the surface of something that needed to be reproduced further. This is THE problem, and it's solved by a deliberate choice of how teams are organized, along with smart solutions that provide the necessary and exhaustive set of logs/metrics/traces for any particular event. I'm mostly drawing on my own anecdote of working with an Elasticsearch logging system that had the logs but still wasn't valuable on its own without even more context and data. One question would be: how easy is it to add testing for the logs associated with the transactional data? If you do the logging separately, you might not be able to test for the existence of those logs; it might just require discipline. Maybe it makes sense if your business is selling data with logs attached to your customers? Otherwise I'm not sure.
3
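The correlation-ID idea in the comment above can be sketched in Python. All names here (`orders`, `process_order`, the field names) are hypothetical; the point is that every log line for one transaction carries the same ID so the lines can be queried together later, and that an in-memory handler makes the existence of those logs testable, which addresses the question the comment raises:

```python
import logging
import uuid

class ListHandler(logging.Handler):
    """Collects log records in memory so their existence can be asserted in tests."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("orders")
logger.setLevel(logging.INFO)
handler = ListHandler()
logger.addHandler(handler)

def process_order(order_id, correlation_id=None):
    """Tag every log line for one transaction with the same correlation ID,
    so the lines can later be queried as a unit (e.g. in Elasticsearch)."""
    correlation_id = correlation_id or str(uuid.uuid4())
    logger.info("order received",
                extra={"correlation_id": correlation_id, "order_id": order_id})
    # ... business logic would run here ...
    logger.info("order persisted",
                extra={"correlation_id": correlation_id, "order_id": order_id})
    return correlation_id
```

A test can then assert that the expected log lines exist and share one correlation ID, instead of relying on discipline alone.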
Anecdotal, but I agree. I've worked in a system where observability of things like resources and infrastructure health existed, but we didn't necessarily have anything like a "trace" to debug issues or "metrics" to track customer usage. For such a complex system, where we needed to ship new features quickly, it was odd to see such a gap in business intelligence. The observability/monitoring was almost certainly added AFTER an outage; that's how I became familiar with the tools that observed the production system.
2
@TheEvertw I would further agree that designing observability in is more involved than simply adding it to production. The same benefits were often sorely needed in testing/staging environments, where devs had no clue what was broken or why. Adding monitors retrospectively is fine for keeping customers from getting angry at your issues, but it misses the point of adding monitors: thinking ahead about how the overall system should function and what one might expect from it.
2
@TheEvertw In my experience, the monitoring system was heavily under-engineered. There isn't a need for deep insight, but a service should be able to state its dependencies, record every interaction with them, and surface the chain of errors that occurs downstream in other services. Most of the time that sort of in-depth insight lives in the test suite rather than in the observability tools. The tools don't need to dive much deeper than tracking performance, stability, and user interaction. Rarely did I ever need to tie a specific log to a line of code. The real issue was tracking down the origin of an error, which service produced it, and the circumstances needed to reproduce it. I agree that observability should be designed and engineered into microservices. But first and foremost, make sure your microservices are as close as possible to being "micro" services. After all, observing is how you attribute your work and show the benefits it has reaped for your customers.
1
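The point above about surfacing the chain of downstream errors can be sketched with Python's built-in exception chaining. The service names and functions here are hypothetical; the sketch shows how `raise ... from` keeps the downstream cause attached, so the origin of a failure stays visible even after it crosses service boundaries:

```python
class ServiceError(Exception):
    """Error that records which service it came from, so the origin of a
    failure is traceable across service boundaries."""
    def __init__(self, service, message):
        super().__init__(f"[{service}] {message}")
        self.service = service

def fetch_inventory(sku):
    # Hypothetical downstream dependency failing.
    raise ServiceError("inventory", f"no stock record for {sku}")

def place_order(sku):
    try:
        return fetch_inventory(sku)
    except ServiceError as err:
        # Chain the error instead of swallowing it: `raise ... from` keeps
        # the downstream cause attached via __cause__.
        raise ServiceError("orders", f"could not place order for {sku}") from err

def error_chain(exc):
    """Walk the __cause__ chain to list the services an error passed through."""
    chain = []
    while exc is not None:
        chain.append(getattr(exc, "service", type(exc).__name__))
        exc = exc.__cause__
    return chain
```

With this, a log or monitor can report the full path of a failure ("orders" failed because "inventory" failed) rather than only the last service that raised.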