Post by account_disabled on Sept 12, 2023 9:55:50 GMT
As cloud-native computing tools and practices mature, so does the need for integration into existing management and monitoring environments. This means integrating native cloud tools, which are often built around OpenTelemetry standards, with the monitoring frameworks and dashboards that enterprises are already using.
The key to adoption is to build on the existing toolset, because DevOps and CloudOps teams have built workflows around interfaces they already know. Getting monitoring and management in place quickly is a priority for businesses: time spent learning new interfaces and building integrations is better spent maintaining applications and running services.
This brings with it additional challenges. Cloud-native computing requires a new layer in the enterprise stack consisting of a new platform that hosts cloud-native applications. This comes with its own management and monitoring challenges, as resource usage and scaling must be tracked to ensure new application nodes are functioning properly. Although these tools, especially Kubernetes, have their own monitoring services, it is essential to integrate them with existing infrastructure and application monitoring.
Fortunately, adopting OpenTelemetry, or time-series storage in monitoring tools like Prometheus, makes it relatively easy to merge application, infrastructure, and platform telemetry repositories, and tools like Azure Monitor can be used to aggregate and query the resulting data and logs.
Azure Monitor for Prometheus
Microsoft officially launched Azure Monitor's managed Prometheus time-series repository at the Microsoft Build event last month. First released in the fall of 2022, this managed service brings Prometheus to Azure (it works with both Azure Monitor and Azure Managed Grafana), giving you access to familiar open source visualization tools alongside Microsoft's own container monitoring tools.
Importing Prometheus metrics into Azure Monitor is simple. Azure Kubernetes Service is Microsoft's own Kubernetes distribution, but because it is a managed implementation of the open source platform, it provides the same APIs and supports all standard Kubernetes tools. It has always been possible to run your own Prometheus instance on Azure, and that remains an effective approach for relatively small systems that don't need to scale anything beyond the application itself.
Things get more difficult in large AKS deployments, where you need to think about expanding storage and adding high availability. Running Kubernetes in a regulated industry brings further challenges, because you need to consider the impact of data retention requirements on the Prometheus store. Switching to a managed Prometheus service simplifies all of this: Microsoft provides tools that automate much of the data scaling and protection work, and it keeps Prometheus up to date with the latest patches. All you need to do is write data and then read and analyze it, without taking on the workload of managing Prometheus yourself.
Managed Prometheus doesn't lock you into Azure Monitor. All existing PromQL (Prometheus query language) tools and scripts can be used in Azure, and rules built around Prometheus run as-is. As far as your code is concerned, Azure's managed Prometheus looks identical to any other Prometheus endpoint, with the same support for data collection and queries. That makes it possible to migrate other Kubernetes environments to Azure and still have access to the metrics you care about.
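As a quick illustration of that endpoint compatibility, the standard Prometheus HTTP query API works the same way against a self-hosted instance and the managed service; the managed endpoint additionally expects an Azure AD bearer token. The URL and workspace names below are hypothetical placeholders, not real endpoints:

```shell
# Instant PromQL query against a self-hosted Prometheus endpoint
# (replace with your own server's address):
PROM_URL="http://localhost:9090"

# Average non-idle CPU usage per node over the last 5 minutes
curl -sG "$PROM_URL/api/v1/query" \
  --data-urlencode 'query=avg by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))'

# Against Azure's managed Prometheus, the same /api/v1/query path is exposed
# on your Azure Monitor workspace's query endpoint, with an Azure AD token.
# Hypothetical workspace URL shown for illustration only:
# TOKEN=$(az account get-access-token \
#   --resource https://prometheus.monitor.azure.com \
#   --query accessToken -o tsv)
# curl -sG -H "Authorization: Bearer $TOKEN" \
#   "https://myworkspace-0001.eastus.prometheus.monitor.azure.com/api/v1/query" \
#   --data-urlencode 'query=up'
```

Because the API surface is the same, existing dashboards, scripts, and rule files written against a self-hosted Prometheus should need only a change of endpoint and credentials.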
Prometheus at Azure scale
Azure's managed Prometheus is built on top of Azure storage, so it can also be used as extended storage for on-premises applications. That lets you use Azure Monitor and Grafana as a single pane of glass to monitor both on-premises and cloud-hosted Kubernetes clusters, while continuing to support your existing PromQL code. Managed Prometheus is built to support multiple clusters, and Microsoft says it's common to use a separate instance for each Azure region. Queries work across regions, so you can build custom dashboards in Grafana or Azure Monitor that span your whole estate.
The managed Prometheus service is designed to be scalable and resilient, and it provides a high-availability mode that runs a collector on each node of the Kubernetes infrastructure. As with other Azure managed services, data is stored in the region you select and replicated to another region within the same Azure geography. For example, if the primary Prometheus repository is located in West US and the standby repository in East US, then even if the primary data center goes down, metrics remain available from the standby.
Getting started with Azure Monitor for Prometheus
Setting up the Prometheus service for use with AKS is easy. First, create an Azure Monitor workspace to store metrics. Then connect your Kubernetes instance to Prometheus, either directly or through Container Insights. Once the workspace is created, connect it to Azure Managed Grafana to set up dashboards and visualizations. Rules and alerts hosted in Azure Monitor are written in PromQL and can be used to trigger tasks or send notifications. Azure Managed Prometheus is also supported as an event source for Kubernetes Event-Driven Autoscaling (KEDA), allowing you to use rules to scale beyond the default Kubernetes resource-based model.
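The steps above can be sketched with the Azure CLI. All resource names here are illustrative placeholders, and the exact flags may differ across CLI versions, so treat this as a hedged outline rather than a verified script:

```shell
# 1. Create an Azure Monitor workspace to hold Prometheus metrics
#    (names "my-rg", "my-amw", etc. are hypothetical).
az monitor account create \
  --name my-amw \
  --resource-group my-rg

# 2. Create an Azure Managed Grafana instance for dashboards
#    (requires the "amg" CLI extension).
az grafana create \
  --name my-grafana \
  --resource-group my-rg

# 3. Enable managed Prometheus metrics collection on an existing AKS
#    cluster, linking it to the workspace and Grafana instance.
az aks update \
  --name my-aks-cluster \
  --resource-group my-rg \
  --enable-azure-monitor-metrics \
  --azure-monitor-workspace-resource-id \
    "$(az monitor account show -n my-amw -g my-rg --query id -o tsv)" \
  --grafana-resource-id \
    "$(az grafana show -n my-grafana -g my-rg --query id -o tsv)"
```

These commands require a live Azure subscription, so they are shown as a provisioning sketch rather than something runnable standalone.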
Configuring an AKS cluster to use the service is relatively simple. Whether you choose direct provisioning or the Container Insights option, it installs a containerized version of the Azure Monitor agent that collects metrics from your cluster and its running nodes. There is one limitation: the cluster must use managed identity authentication, which is in any case best practice when using AKS with other Azure services.
Microsoft has largely automated the process of setting up monitoring agents for Linux containers, so Azure Monitor configures and deploys agents as needed. If you use Windows containers with AKS, you will (for now) need to configure most of the monitoring services manually, including applying YAML manifests and ConfigMaps provided by Microsoft. After the agent is deployed, you can use kubectl to check whether it is running on your node pool.
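A kubectl spot-check along these lines confirms the agent is in place. The "ama-metrics" naming matches current AKS deployments of the Azure Monitor metrics agent, but may change between releases, so adjust the filter if nothing matches:

```shell
# Look for the Azure Monitor metrics agent pods in kube-system;
# pod names typically begin with "ama-metrics".
kubectl get pods -n kube-system | grep ama-metrics

# Verify the per-node DaemonSet has a running pod on every node
# in the pool (DESIRED should equal READY).
kubectl get daemonset -n kube-system | grep ama-metrics
```

If the DaemonSet shows fewer ready pods than nodes, describing the missing pod (`kubectl describe pod ...`) is the usual first step in diagnosing why a node isn't reporting metrics.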
For most applications, the default metrics collection settings will be sufficient. Microsoft also provides an automatically provisioned dashboard in Grafana (source code is available on GitHub) with an organized list of available metrics and targets. From there, you can manage cloud-native applications your own way by adding your own dashboards and rules.