diff --git a/LogMonitor/docs/README.md b/LogMonitor/docs/README.md
index 1c93799f..88fdb0c4 100644
--- a/LogMonitor/docs/README.md
+++ b/LogMonitor/docs/README.md
@@ -1,17 +1,19 @@
 # Log Monitor Documentation

 **Contents:**
+
 - [Sample Config File](#sample-config-file)
 - [ETW Monitoring](#etw-monitoring)
 - [Event Log Monitoring](#event-log-monitoring)
 - [Log File Monitoring](#log-file-monitoring)
 - [Process Monitoring](#process-monitoring)
+- [IIS Monitoring](#iis-monitoring-with-log-monitor)
 - [Log Format Customization](#log-format-customization)
 - [Security Advisory for Config File](#security-advisory-for-config-file)

 ## Sample Config File

-A sample Log Monitor Config file would be structured as follows:
+A sample Log Monitor Config file would be structured as follows:

 ```json
 {
@@ -60,6 +62,7 @@ A sample Log Monitor Config file would be structured as follows:
 }
 ```

+
 Please see below for how to customize the config file that Log Monitor reads.

 ## ETW Monitoring

@@ -82,13 +85,13 @@ logman query providers | findstr "<string>"

 ### Configuration

-- `type` (required): This indicates the type of log you want to monitor for. It should be `ETW`.
+- `type` (required): This indicates the type of log you want to monitor for. It should be `ETW`.
 - `eventFormatMultiLine` (optional): This is a Boolean that indicates whether the logs are displayed with or without new lines. It defaults to `true`; set it to `false` depending on how you want to view the logs on the console.
 - `providers` (required): Providers are components that generate events. This field is a list of the event providers you are monitoring for.
-  - `providerName` (optional): This represents the name of the provider. It is what shows up when you use logman.
-  - `providerGuid` (required): This is a globally unique identifier that uniquely identifies the provider you specified in the ProviderName field.
-  - `level` (optional): This string field specifies the verboseness of the events collected. These include `Critical`, `Error`, `Warning`, `Information` and `Verbose`. If the level is not specified, level will be set to `Error`.
-  - `keywords` (optional): This string field is a bitmask that specifies what events to collect. Only events with keywords matching the bitmask are collected This is an optional parameter. Default is 0 and all the events will be collected.
+  - `providerName` (optional): This represents the name of the provider. It is what shows up when you use logman.
+  - `providerGuid` (required): This is a globally unique identifier that uniquely identifies the provider you specified in the ProviderName field.
+  - `level` (optional): This string field specifies the verboseness of the events collected. These include `Critical`, `Error`, `Warning`, `Information` and `Verbose`. If the level is not specified, level will be set to `Error`.
+  - `keywords` (optional): This string field is a bitmask that specifies what events to collect. Only events with keywords matching the bitmask are collected. This is an optional parameter. Default is 0, in which case all the events will be collected.

### Examples

@@ -137,9 +140,9 @@ Using both the provider's name and provider GUID:

### References

-- https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw-
-- https://learn.microsoft.com/en-us/windows/win32/wes/writing-an-instrumentation-manifest
-- https://learn.microsoft.com/en-us/windows/win32/etw/about-event-tracing#providers
+- <https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw->
+- <https://learn.microsoft.com/en-us/windows/win32/wes/writing-an-instrumentation-manifest>
+- <https://learn.microsoft.com/en-us/windows/win32/etw/about-event-tracing#providers>

## Event Log Monitoring

@@ -152,8 +155,8 @@ Event log is a record of events related to the system, security, and application

- `startAtOldestRecord` (Required): This Boolean field indicates whether the Log Monitor tool should output event logs from the start of the container boot or from the start of the Log Monitor tool itself.
If set `true`, the tool outputs the event logs from the start of container boot; if set `false`, the tool only outputs event logs from the start of Log Monitor.
- `eventFormatMultiLine` (Optional): This is a Boolean field that indicates whether the Log Monitor should format the logs to `STDOUT` as multi-line or single line. If the field is not set in the config file, by default the value is `true`. If the field is set `true`, the tool does not format the event messages to a single line (and thus event messages can span multiple lines). If set to `false`, the tool formats the event log messages to a single line and removes new line characters.
- `channels` (Required): A channel is a named stream of events. It serves as a logical pathway for transporting events from the event publisher to a log file and possibly a subscriber. It is a sink that collects events. Each defined channel has the following properties:
-  - `name` (Required): The name of the event channel
-  - `level` (optional): This string field specifies the verboseness of the events collected. These include `Critical`, `Error`, `Warning`, `Information` and `Verbose`. If the level is not specified, level will be set to `Error`.
+  - `name` (Required): The name of the event channel
+  - `level` (optional): This string field specifies the verboseness of the events collected. These include `Critical`, `Error`, `Warning`, `Information` and `Verbose`. If the level is not specified, level will be set to `Error`.
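Taken together, the channel fields above form an Event Log source entry along these lines (a minimal sketch, not one of the shipped samples; the channel name and level here are illustrative):

```json
{
  "LogConfig": {
    "sources": [
      {
        "type": "EventLog",
        "startAtOldestRecord": true,
        "eventFormatMultiLine": false,
        "channels": [
          {
            "name": "System",
            "level": "Information"
          }
        ]
      }
    ]
  }
}
```

Here `startAtOldestRecord: true` replays events from container boot, and `"level": "Information"` selects events up to `Information` verbosity.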
### Examples

@@ -201,10 +204,10 @@ Example 1 (Application channel, verboseness: Error):
 }
 ```

- ### References
+### References

- - https://learn.microsoft.com/en-us/windows/win32/eventlog/event-logging
- - https://learn.microsoft.com/en-us/windows/win32/wes/defining-channels
+- <https://learn.microsoft.com/en-us/windows/win32/eventlog/event-logging>
+- <https://learn.microsoft.com/en-us/windows/win32/wes/defining-channels>

## Log File Monitoring

@@ -217,14 +220,14 @@ This will monitor any changes in log files matching a specified filter, given th

- `type` (required): `"File"`
- `directory` (required): set to the directory containing the files to be monitored.
  > :grey_exclamation:**NOTE:** Only works with absolute paths.
- >
- > To support *long file name* functionality, we prepend "\\?\" to the path. This approach extends the MAX_PATH limit from 260 characters to 32,767 wide characters. For more details, see [Maximum Path Length Limitation](https://learn.microsoft.com/en-us/windows/win32/fileio/naming-a-file#maximum-path-length-limitation).
- >
+ >
+ > To support _long file name_ functionality, we prepend "\\?\" to the path. This approach extends the MAX_PATH limit from 260 characters to 32,767 wide characters. For more details, see [Maximum Path Length Limitation](https://learn.microsoft.com/en-us/windows/win32/fileio/naming-a-file#maximum-path-length-limitation).
+ >
  > Due to this modification, the path **must** be an [absolute path](https://learn.microsoft.com/en-us/dotnet/standard/io/file-path-formats#traditional-dos-paths), beginning with a disk designator with a backslash, for example "C:\" or "d:\".
- >
+ >
  > [UNC paths](https://learn.microsoft.com/en-us/dotnet/standard/io/file-path-formats#unc-paths) and [DOS device paths](https://learn.microsoft.com/en-us/dotnet/standard/io/file-path-formats#dos-device-paths) are not supported.
- >
- > Ensure you [identify the type of path](https://learn.microsoft.com/en-us/dotnet/standard/io/file-path-formats#identify-the-path) and the path is correctly formatted to avoid issues.
+ >
+ > Ensure you [identify the type of path](https://learn.microsoft.com/en-us/dotnet/standard/io/file-path-formats#identify-the-path) and that the path is correctly formatted to avoid issues.
 >
 > | Example | Path Type | Allowed |
 > |------------------------------------------------------- |------------------|--------------------|
@@ -242,24 +245,24 @@ This will monitor any changes in log files matching a specified filter, given th
 > | "." | Relative | :x: |
 > | ".\" | Relative | :x: |
 > | "..\temp" | Relative | :x: |
-
+
- `filter` (optional): uses [MS-DOS wildcard match type](https://learn.microsoft.com/en-us/previous-versions/windows/desktop/indexsrv/ms-dos-and-windows-wildcard-characters), i.e. `*, ?`. Can be set to empty, which defaults to `"*"`.
- `includeSubdirectories` (optional): `"true|false"`, specifies whether sub-directories also need to be monitored. Defaults to `false`.
- `includeFileNames` (optional): `"true|false"`, specifies whether to include file names in the log line, e.g. `sample.log: xxxxx`. Defaults to `false`.
-- `waitInSeconds` (optional): specifies the duration to wait for a file or folder to be created if it does not exist. It takes integer values between 0-INFINITY. Defaults to `300` seconds, i.e, 5 minutes. It can be passed as a value or a string.
+- `waitInSeconds` (optional): specifies the duration to wait for a file or folder to be created if it does not exist. It takes integer values between 0 and INFINITY. Defaults to `300` seconds, i.e., 5 minutes. It can be passed as a value or a string.

- - `waitInSeconds = 0`
+ - `waitInSeconds = 0`

   When the value is zero (0), LogMonitor does not wait and terminates with an error.

- - `waitInSeconds = +integer`
-
+ - `waitInSeconds = +integer`
+
   When the value is a positive integer, LogMonitor will wait for the specified time. Once the predefined time elapses, LogMonitor will terminate with an error.
- - `waitInSeconds = "INFINITY"` + - `waitInSeconds = "INFINITY"` + + In this case, LogMonitor will wait forever for the folder to be created. - In this case, LogMonitor will wait forever for the folder to be created. - > :grey_exclamation:**NOTE** > - This field is case insensitive > - When "INFINITY" is passed, it must be passed as a string. @@ -269,21 +272,23 @@ This will monitor any changes in log files matching a specified filter, given th **Examples:** 1. Wait for 10 seconds - * As a value: `"waitInSeconds": 10` - * As a string: `"waitInSeconds": "10"` + - As a value: `"waitInSeconds": 10` + - As a string: `"waitInSeconds": "10"` 2. Wait forever/infinitely: - * `"waitInSeconds": "INFINITY"` or `"waitInSeconds": "inf"` or `"waitInSeconds": "∞"` - * This field is case-insensitive + - `"waitInSeconds": "INFINITY"` or `"waitInSeconds": "inf"` or `"waitInSeconds": "∞"` + - This field is case-insensitive
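Putting the file-source fields together, a source entry might look like this (a sketch with a hypothetical directory; `waitInSeconds` is shown in its string form):

```json
{
  "LogConfig": {
    "sources": [
      {
        "type": "File",
        "directory": "c:\\app\\logs",
        "filter": "*.log",
        "includeSubdirectories": false,
        "includeFileNames": true,
        "waitInSeconds": "INFINITY"
      }
    ]
  }
}
```

With `"INFINITY"`, LogMonitor waits indefinitely for `c:\app\logs` to be created instead of terminating with an error.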
If a user provides an invalid value, a value less than 0, an error occurs: + ``` ERROR: Error parsing configuration file. 'waitInSeconds' attribute must be greater or equal to zero WARNING: Failed to parse configuration file. Error retrieving source attributes. Invalid source ``` -### Sample FileMonitor *LogMonitorConfig.json* +### Sample FileMonitor _LogMonitorConfig.json_ + #### Example 1 LogMonitor will monitor log files in the directory "c:\inetpub\logs" along with its subfolders. If the directory does not exist, it will wait for up to 10 seconds for the directory to be created. @@ -309,9 +314,9 @@ LogMonitor will monitor log files in the directory "c:\inetpub\logs" along with LogMonitor will monitor log files in the root directory, "C:\". - > When the directory is the root directory (e.g. "C:\\" ) we can only monitor a file that is in the root directory, not a subfolder. This is due to access issues (even when running LogMonitor as an Admin) for some of the folders in the root directory. Therefore, `includeSubdirectories` must be `false` for the root directory. + > When the directory is the root directory (e.g. "C:\\" ) we can only monitor a file that is in the root directory, not a subfolder. This is due to access issues (even when running LogMonitor as an Admin) for some of the folders in the root directory. Therefore, `includeSubdirectories` must be `false` for the root directory. -See sample valid *LogMonitorConfig.json* below: +See sample valid _LogMonitorConfig.json_ below: ```json { @@ -369,15 +374,108 @@ CMD "c:\\windows\\system32\\ping.exe -n 20 localhost" The Process Monitor will stream the output for `c:\windows\system32\ping.exe -n 20 localhost` +## IIS Monitoring with Log Monitor + +Log Monitor can tail IIS log files and forward formatted output to STDOUT. This is useful when running IIS inside Windows containers and you want container logs to be available through the standard container logging pipeline. + +## Quickstart + +1. 
Copy `LogMonitor.exe` and `LogMonitorConfig.json` into a directory inside the container or host. + +LogMonitorConfig.json + +```json +{ + "LogConfig": { + "logFormat": "json", + "sources": [ + { + "type": "File", + "directory": "c:\\inetpub\\logs", + "filter": "*.log", + "includeSubdirectories": true + }, + { + "type": "ETW", + "eventFormatMultiLine": false, + "providers": [ + { + "providerName": "IIS: WWW Server", + "providerGuid": "3A2A4E84-4C21-4981-AE10-3FDA0D9B0F83", + "level": "Information" + }, + { + "providerName": "Microsoft-Windows-IIS-Logging", + "providerGuid": "7E8AD27F-B271-4EA2-A783-A47BDE29143B", + "level": "Information" + } + ] + } + ] + } +} +``` + +Dockerfile + +```dockerfile +# escape=` +FROM mcr.microsoft.com/windows/servercore:ltsc2022 +WORKDIR /LogMonitor +COPY LogMonitorConfig.json . +COPY LogMonitor.exe . + +RUN powershell -Command ` + Add-WindowsFeature Web-Server; ` + Invoke-WebRequest -UseBasicParsing -Uri "https://github.com/microsoft/IIS.ServiceMonitor/releases/download/v2.0.1.10/ServiceMonitor.exe" -OutFile "C:\ServiceMonitor.exe" + +EXPOSE 80 + +ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "C:\\ServiceMonitor.exe", "w3svc"] +``` + +## Build and run the Docker image + +Build the Docker image from the Dockerfile in this doc or the example folder: + +```powershell +docker build -t iis-logmonitor:latest -f Dockerfile . 
+```
+
+Run the container locally:
+
+```powershell
+docker run -it --rm -p 80:80 iis-logmonitor:latest
+```
+
+## View container logs
+
+If you start the container detached and with a name, you can follow its logs with:
+
+```powershell
+docker logs -f iis-logmonitor
+```
+
+If you started the container without a name, get its container ID and follow the logs:
+
+```powershell
+docker ps
+docker logs -f <container-id>
+```
+
+For a ready-to-run AKS example that shows how to build the image, deploy to AKS, and enable Azure Monitor (Container Insights), see the AKS example README: [examples/aks/iis-logmonitor/README.md](../../examples/aks/iis-logmonitor/README.md).
+
 ## Log Format Customization

 ### Description
+
 By default, logs will be displayed in JSON format. However, users can change the log format to either `XML` or their own `custom` defined format. To specify the log format, a user needs to configure the `logFormat` field in `LogMonitorConfig.json` to either `XML`, `JSON` or `Custom` (the field value is not case-sensitive).
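For instance, switching the entire output to XML takes only the top-level field; the sources themselves are unchanged (a minimal sketch with a single file source, as in the samples above):

```json
{
  "LogConfig": {
    "logFormat": "XML",
    "sources": [
      {
        "type": "File",
        "directory": "c:\\inetpub\\logs"
      }
    ]
  }
}
```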
For `JSON` and `XML` log formats, no additional configuration is required. The `Custom` log format, however, needs further configuration: a user must specify the `customLogFormat` at the source level.

### Custom Log Format Pattern Layout
+
To ensure field values are correctly displayed in the customized log output, wrap each field name in percent signs (%) and make sure the field names match those of the log source being monitored.

For example: `%Message%, %TimeStamp%`
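As an illustration, a Process source could prefix each line with a bracketed timestamp using a pattern built from the source-specific fields listed in the next section (a sketch; the pattern itself is arbitrary):

```json
{
  "LogConfig": {
    "logFormat": "Custom",
    "sources": [
      {
        "type": "Process",
        "customLogFormat": "[%TimeStamp%] %Source%: %Message%"
      }
    ]
  }
}
```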
@@ -385,36 +483,40 @@ For example: `%Message%, %TimeStamp%`
Each log source tracked by Log Monitor (ETW, Log File, Events, and Process Monitor logs) has log field names specific to it:

Event Logs:
-  - `Source`: The log source (Event Log)
-  - `TimeStamp`: Time at which the event was generated
-  - `EventSource`: The source of an event
-  - `EventID`: Unique identifier assigned to an individual event
-  - `Severity`: A label that indicates the importance or criticality of an event
-  - `Message`: The event message
+
+- `Source`: The log source (Event Log)
+- `TimeStamp`: Time at which the event was generated
+- `EventSource`: The source of an event
+- `EventID`: Unique identifier assigned to an individual event
+- `Severity`: A label that indicates the importance or criticality of an event
+- `Message`: The event message

ETW:
-  - `Source`: The log source (ETW)
-  - `TimeStamp`: Time at which the event was generated
-  - `Severity`: A label that indicates the importance or criticality of an event
-  - `ProviderId`: Unique identifier that is assigned to the event provider during its registration process.
-  - `ProviderName`: Unique identifier or name assigned to an event provider
-  - `DecodingSource`: Component or provider responsible for decoding and translating raw event data into a human-readable format
-  - `ExecutionProcessId`: Identifier associated with a process that is being executed at the time an event is generated
-  - `ExecutionThreadId`: Identifier associated with a thread at the time an event is generated
-  - `Keyword`: Flag or attribute assigned to an event or a group of related events
-  - `EventId`: Unique identifier assigned to an individual event
-  - `EventData`: Payload or data associated with an event.
+
+- `Source`: The log source (ETW)
+- `TimeStamp`: Time at which the event was generated
+- `Severity`: A label that indicates the importance or criticality of an event
+- `ProviderId`: Unique identifier that is assigned to the event provider during its registration process.
+- `ProviderName`: Unique identifier or name assigned to an event provider
+- `DecodingSource`: Component or provider responsible for decoding and translating raw event data into a human-readable format
+- `ExecutionProcessId`: Identifier associated with a process that is being executed at the time an event is generated
+- `ExecutionThreadId`: Identifier associated with a thread at the time an event is generated
+- `Keyword`: Flag or attribute assigned to an event or a group of related events
+- `EventId`: Unique identifier assigned to an individual event
+- `EventData`: Payload or data associated with an event.

Log Files:
-  - `Source`: The log source (File)
-  - `TimeStamp`: Time at which the change was introduced in the monitored file.
-  - `FileName`: Name of the file that the log entry is read from.
-  - `Message`: The line/change added in the monitored file.
+
+- `Source`: The log source (File)
+- `TimeStamp`: Time at which the change was introduced in the monitored file.
+- `FileName`: Name of the file that the log entry is read from.
+- `Message`: The line/change added in the monitored file.

Process Monitor:
-  - `Source`: The log source (Process Monitor)
-  - `TimeStamp`: Time at which the process was executed
-  - `Message` : The output of the process/command executed
+
+- `Source`: The log source (Process Monitor)
+- `TimeStamp`: Time at which the process was executed
+- `Message`: The output of the process/command executed

### Sample Custom Log Configuration

@@ -451,16 +553,17 @@ Each log source tracked by log monitor (ETW, Log File, Events, and Process M
 }
 ```

-For advanced usage of the custom log feature, a user can choose to define their own custom JSON log format. In such a case, The `logFormat` value should be `custom`.
+For advanced usage of the custom log feature, a user can choose to define their own custom JSON log format. In such a case, the `logFormat` value should be `custom`.
To enable sanitization of the JSON output and ensure the output displayed by the tool is valid, the user can add the suffix `'|json'` after the desired custom log format.

For example:
+
 ```json
 {
     "LogConfig": {
-        "logFormat": "custom",
+        "logFormat": "custom",
         "sources": [
-            {
+            {
                 "type": "ETW",
                 "eventFormatMultiLine": false,
                 "providers": [
                     {
@@ -471,7 +574,7 @@ For example:
                     }
                 ],
                 "customLogFormat": "{'TimeStamp':'%TimeStamp%', 'Source':'%Source%', 'Severity':'%Severity%', 'ProviderId':'%ProviderId%', 'ProviderName':'%ProviderName%', 'EventId':'%EventId%', 'EventData':'%EventData%'}|json"
-            },
+            },
             {
                 "type": "Process",
                 "customLogFormat": "{'TimeStamp':'%TimeStamp%', 'Source':'%Source%', 'Message':'%Message%'}|JSON"
diff --git a/examples/aks/iis-logmonitor/Dockerfile b/examples/aks/iis-logmonitor/Dockerfile
new file mode 100644
index 00000000..7612b39c
--- /dev/null
+++ b/examples/aks/iis-logmonitor/Dockerfile
@@ -0,0 +1,19 @@
+# escape=`
+FROM mcr.microsoft.com/windows/servercore:ltsc2022
+WORKDIR /LogMonitor
+COPY LogMonitorConfig.json .
+
+# NOTE: replace the version (v2.1.3) in the URL below with the latest
+# LogMonitor release or with the specific version/build you want to use.
+RUN powershell.exe -command ` + wget ` + -uri "https://github.com/microsoft/windows-container-tools/releases/download/v2.1.3/LogMonitor.exe" ` + -outfile "LogMonitor.exe" + +RUN powershell -Command ` + Add-WindowsFeature Web-Server; ` + Invoke-WebRequest -UseBasicParsing -Uri "https://github.com/microsoft/IIS.ServiceMonitor/releases/download/v2.0.1.10/ServiceMonitor.exe" -OutFile "C:\ServiceMonitor.exe" + +EXPOSE 80 + +ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "C:\\ServiceMonitor.exe", "w3svc"] diff --git a/examples/aks/iis-logmonitor/LogMonitorConfig.json b/examples/aks/iis-logmonitor/LogMonitorConfig.json new file mode 100644 index 00000000..3dc1f1a6 --- /dev/null +++ b/examples/aks/iis-logmonitor/LogMonitorConfig.json @@ -0,0 +1,29 @@ +{ + "LogConfig": { + "logFormat": "json", + "sources": [ + { + "type": "File", + "directory": "c:\\inetpub\\logs", + "filter": "*.log", + "includeSubdirectories": true + }, + { + "type": "ETW", + "eventFormatMultiLine": false, + "providers": [ + { + "providerName": "IIS: WWW Server", + "providerGuid": "3A2A4E84-4C21-4981-AE10-3FDA0D9B0F83", + "level": "Information" + }, + { + "providerName": "Microsoft-Windows-IIS-Logging", + "providerGuid": "7E8AD27F-B271-4EA2-A783-A47BDE29143B", + "level": "Information" + } + ] + } + ] + } +} diff --git a/examples/aks/iis-logmonitor/README.md b/examples/aks/iis-logmonitor/README.md new file mode 100644 index 00000000..8f752e5c --- /dev/null +++ b/examples/aks/iis-logmonitor/README.md @@ -0,0 +1,104 @@ +# Setup Guide + +## Prerequisites + +- The following tools: + - [_Docker desktop_](https://docs.docker.com/desktop/install/windows-install/) + - [_Azure CLI_](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=azure-cli) + - Azure subscription with a few credits + +### Build the images + +- Change into the example folder (one-line or step-by-step): + +```powershell +cd .\examples\aks\iis-logmonitor +``` + +- Build the Docker image from the Dockerfile in this folder, 
tag it for Docker Hub, and push:
+
+```powershell
+# build (run from examples/aks/iis-logmonitor)
+docker build -t <dockerhub-username>/iis-logmonitor:latest -f Dockerfile .
+
+# login to Docker Hub (interactive)
+docker login
+
+# push
+docker push <dockerhub-username>/iis-logmonitor:latest
+```
+
+### Create AKS Cluster
+
+> _Run all this from PowerShell_
+
+- `az login` (if you have multiple subscriptions, make sure the right subscription is set as default)
+- `cd` into `ps-scripts`
+- Update `vars.txt`
+- Run `./rg-create.ps1` to create the resource group.
+- Run `./aks-create.ps1`: the script creates an AKS cluster, adds a Windows node pool, and connects to the cluster.
+
+### Deploy the application
+
+```powershell
+./deploy.ps1
+```
+
+After a few minutes, check the status of the pods:
+
+```powershell
+kubectl get pods
+NAME                            READY   STATUS    RESTARTS   AGE
+iislogmonitor-95c488777-fkhgt   1/1     Running   0          2m5s
+```
+
+This indicates the pod started successfully: `READY 1/1` and `STATUS Running` show the container is healthy.
+
+Check the service status and external IP with `kubectl get svc`; you should see something similar to:
+
+```powershell
+kubectl get svc
+# NAME            TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
+# kubernetes      ClusterIP      10.0.0.1      <none>           443/TCP        36m
+# iislogmonitor   LoadBalancer   10.0.191.38   52.188.177.226   80:31349/TCP   2m19s
+```
+
+Access the app using the `http://EXTERNAL-IP` shown in the output, for example: `http://52.188.177.226`
+
+```powershell
+Start-Process "http://52.188.177.226"
+```
+
+To stream the container logs from that pod, run:
+
+```powershell
+kubectl logs -f iislogmonitor-95c488777-fkhgt
+```
+
+### Configure Azure Monitor (Container Insights)
+
+You can enable AKS monitoring (Container Insights) from the Azure portal: open the cluster's Insights onboarding view and follow the steps to enable monitoring (select or create a Log Analytics workspace, then enable).
+
+After onboarding completes, you can view container logs, metrics and insights in Azure Monitor > Container insights for the cluster.
+To query container logs (Log Analytics) for IIS entries, you can run a KQL query in the Log Analytics `Logs` view. Example:
+
+```kql
+// Find IIS/W3SVC entries in ContainerLogV2
+ContainerLogV2
+| where LogMessage contains "W3SVC"
+```
+
+This returns container log entries that include IIS/W3SVC messages collected by Container Insights.
+
+The screenshots below show the Azure Monitor onboarding and Log Analytics views for this example.
+
+![iis1.png](images/iis1.png)
+
+![iis2.png](images/iis2.png)
+
+![iis3.png](images/iis3.png)
+
+![iis4.png](images/iis4.png)
+
+![iis5.png](images/iis5.png)
+
+### Clean-up
+
+Clean up by deleting the resource group: in `ps-scripts`, run `./clean-up.ps1`.
diff --git a/examples/aks/iis-logmonitor/deployment/az-monitor-configmap.yaml b/examples/aks/iis-logmonitor/deployment/az-monitor-configmap.yaml
new file mode 100644
index 00000000..2da60971
--- /dev/null
+++ b/examples/aks/iis-logmonitor/deployment/az-monitor-configmap.yaml
@@ -0,0 +1,265 @@
+kind: ConfigMap
+apiVersion: v1
+data:
+  schema-version:
+    #string.used by agent to parse config. supported versions are {v1}. Configs with other schema versions will be rejected by the agent.
+    v1
+  config-version:
+    #string.used by customer to keep track of this config file's version in their source control/repository (max allowed 10 chars, other chars will be truncated)
+    ver1
+  log-data-collection-settings: |-
+    # Log data collection settings
+    # Any errors related to config map settings can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to.
+
+    [log_collection_settings]
+      [log_collection_settings.multi_tenancy]
+        enabled = false # High log scale MUST be enabled to use this feature. Refer to https://aka.ms/cihsmode for more details on high log scale mode
+        disable_fallback_ingestion = false # If enabled, logs of the k8s namespaces for which ContainerLogV2Extension DCR is not configured will not be ingested to the default DCR.
+ + [log_collection_settings.stdout] + # In the absense of this configmap, default value for enabled is true + enabled = true + # exclude_namespaces setting holds good only if enabled is set to true + # kube-system,gatekeeper-system log collection are disabled by default in the absence of 'log_collection_settings.stdout' setting. If you want to enable kube-system,gatekeeper-system, remove them from the following setting. + # If you want to continue to disable kube-system,gatekeeper-system log collection keep the namespaces in the following setting and add any other namespace you want to disable log collection to the array. + # In the absense of this configmap, default value for exclude_namespaces = ["kube-system","gatekeeper-system"] + exclude_namespaces = ["kube-system","gatekeeper-system"] + # If you want to collect logs from only selective pods inside system namespaces add them to the following setting. Provide namepace:controllerName of the system pod. NOTE: this setting is only for pods in system namespaces + # Valid values for system namespaces are: kube-system, azure-arc, gatekeeper-system, kube-public, kube-node-lease, calico-system. The system namespace used should not be present in exclude_namespaces + # collect_system_pod_logs = ["kube-system:coredns"] + + [log_collection_settings.stderr] + # Default value for enabled is true + enabled = true + # exclude_namespaces setting holds good only if enabled is set to true + # kube-system,gatekeeper-system log collection are disabled by default in the absence of 'log_collection_settings.stderr' setting. If you want to enable kube-system,gatekeeper-system, remove them from the following setting. + # If you want to continue to disable kube-system,gatekeeper-system log collection keep the namespaces in the following setting and add any other namespace you want to disable log collection to the array. 
+ # In the absense of this configmap, default value for exclude_namespaces = ["kube-system","gatekeeper-system"] + exclude_namespaces = ["kube-system","gatekeeper-system"] + # If you want to collect logs from only selective pods inside system namespaces add them to the following setting. Provide namepace:controllerName of the system pod. NOTE: this setting is only for pods in system namespaces + # Valid values for system namespaces are: kube-system, azure-arc, gatekeeper-system, kube-public, kube-node-lease, calico-system. The system namespace used should not be present in exclude_namespaces + # collect_system_pod_logs = ["kube-system:coredns"] + + [log_collection_settings.env_var] + # In the absense of this configmap, default value for enabled is true + enabled = true + [log_collection_settings.enrich_container_logs] + # In the absense of this configmap, default value for enrich_container_logs is false + enabled = true + # When this is enabled (enabled = true), every container log entry (both stdout & stderr) will be enriched with container Name & container Image + [log_collection_settings.collect_all_kube_events] + # In the absense of this configmap, default value for collect_all_kube_events is false + # When the setting is set to false, only the kube events with !normal event type will be collected + enabled = false + # When this is enabled (enabled = true), all kube events including normal events will be collected + [log_collection_settings.schema] + # In the absence of this configmap, default value for containerlog_schema_version is "v1" if "v2" is not enabled while onboarding + # Supported values for this setting are "v1","v2" + # See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema + containerlog_schema_version = "v2" + #[log_collection_settings.enable_multiline_logs] + # fluent-bit based multiline log collection for .NET, Go, Java, and Python stacktraces. 
Update stacktrace_languages to specify which languages to collect stacktraces for (valid inputs: "go", "java", "python", "dotnet").
+      # NOTE: for better performance consider enabling only for languages that are needed. Dotnet is experimental and may not work in all cases.
+      # If enabled will also stitch together container logs split by docker/cri due to size limits (16KB per log line) up to 64 KB.
+      # Requires ContainerLogV2 schema to be enabled. See https://aka.ms/ContainerLogv2 for more details.
+      # enabled = "false"
+      # stacktrace_languages = []
+      #[log_collection_settings.metadata_collection]
+      # kube_meta_cache_ttl_secs is a configurable option for K8s cached metadata. Default is 60s. You may adjust it in below section [agent_settings.k8s_metadata_config]. Reference link: https://docs.fluentbit.io/manual/pipeline/filters/kubernetes#configuration-parameters
+      # if enabled will collect kubernetes metadata for ContainerLogv2 schema. Default is false.
+      # enabled = false
+      # if include_fields commented out or empty, all fields will be included. If include_fields is set, only the fields listed will be included.
+      # include_fields = ["podLabels","podAnnotations","podUid","image","imageID","imageRepo","imageTag"]
+      #[log_collection_settings.filter_using_annotations]
+      # if enabled will exclude logs from pods with annotations fluentbit.io/exclude: "true".
+      # Read more: https://docs.fluentbit.io/manual/pipeline/filters/kubernetes#kubernetes-annotations
+      # enabled = false
+
+  prometheus-data-collection-settings: |-
+    # Custom Prometheus metrics data collection settings
+    [prometheus_data_collection_settings.cluster]
+        # Cluster level scrape endpoint(s). These metrics will be scraped from agent's Replicaset (singleton)
+        # Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to.
+
+        #Interval specifying how often to scrape for metrics.
This is duration of time and can be specified for supporting settings by combining an integer value and time unit as a string value. Valid time units are ns, us (or µs), ms, s, m, h. + interval = "1m" + + ## Uncomment the following settings with valid string arrays for prometheus scraping + #fieldpass = ["metric_to_pass1", "metric_to_pass12"] + + #fielddrop = ["metric_to_drop"] + + # An array of urls to scrape metrics from. + # urls = ["http://myurl:9101/metrics"] + + # An array of Kubernetes services to scrape metrics from. + # kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"] + + # When monitor_kubernetes_pods = true, prometheus sidecar container will scrape Kubernetes pods for the following prometheus annotations: + # - prometheus.io/scrape: Enable scraping for this pod + # - prometheus.io/scheme: Default is http + # - prometheus.io/path: If the metrics path is not /metrics, define it with this annotation. + # - prometheus.io/port: If port is not 9102 use this annotation + monitor_kubernetes_pods = false + + ## Restricts Kubernetes monitoring to namespaces for pods that have annotations set and are scraped using the monitor_kubernetes_pods setting. + ## This will take effect when monitor_kubernetes_pods is set to true + ## ex: monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"] + # monitor_kubernetes_pods_namespaces = ["default1"] + + ## Label selector to target pods which have the specified label + ## This will take effect when monitor_kubernetes_pods is set to true + ## Reference the docs at https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors + # kubernetes_label_selector = "env=dev,app=nginx" + + ## Field selector to target pods which have the specified field + ## This will take effect when monitor_kubernetes_pods is set to true + ## Reference the docs at https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/ + ## eg. 
To scrape pods on a specific node + # kubernetes_field_selector = "spec.nodeName=$HOSTNAME" + + [prometheus_data_collection_settings.node] + # Node level scrape endpoint(s). These metrics will be scraped from agent's DaemonSet running in every node in the cluster + # Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to. + + #Interval specifying how often to scrape for metrics. This is duration of time and can be specified for supporting settings by combining an integer value and time unit as a string value. Valid time units are ns, us (or µs), ms, s, m, h. + interval = "1m" + + ## Uncomment the following settings with valid string arrays for prometheus scraping + + # An array of urls to scrape metrics from. $NODE_IP (all upper case) will substitute of running Node's IP address + # urls = ["http://$NODE_IP:9103/metrics"] + + #fieldpass = ["metric_to_pass1", "metric_to_pass12"] + + #fielddrop = ["metric_to_drop"] + + metric_collection_settings: |- + # Metrics collection settings for metrics sent to Log Analytics and MDM + [metric_collection_settings.collect_kube_system_pv_metrics] + # In the absense of this configmap, default value for collect_kube_system_pv_metrics is false + # When the setting is set to false, only the persistent volume metrics outside the kube-system namespace will be collected + enabled = false + # When this is enabled (enabled = true), persistent volume metrics including those in the kube-system namespace will be collected + + alertable-metrics-configuration-settings: |- + # Alertable metrics configuration settings for container resource utilization + [alertable_metrics_configuration_settings.container_resource_utilization_thresholds] + # The threshold(Type Float) will be rounded off to 2 decimal points + # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage + 
container_cpu_threshold_percentage = 95.0 + # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage + container_memory_rss_threshold_percentage = 95.0 + # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage + container_memory_working_set_threshold_percentage = 95.0 + + # Alertable metrics configuration settings for persistent volume utilization + [alertable_metrics_configuration_settings.pv_utilization_thresholds] + # Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage + pv_usage_threshold_percentage = 60.0 + + # Alertable metrics configuration settings for completed jobs count + [alertable_metrics_configuration_settings.job_completion_threshold] + # Threshold for completed job count , metric will be sent only for those jobs which were completed earlier than the following threshold + job_completion_threshold_time_minutes = 360 + integrations: |- + [integrations.azure_network_policy_manager] + collect_basic_metrics = false + collect_advanced_metrics = false + [integrations.azure_subnet_ip_usage] + enabled = false + +# Doc - https://github.com/microsoft/Docker-Provider/blob/ci_prod/Documentation/AgentSettings/ReadMe.md + agent-settings: |- + # High log scale option for container logs high log volume scenarios. This mode is optimized for high log volume scenarios and have little higher resource utilization on low scale which will be addressed in subsequent agent releases. + # Refer to public documentation for more details - https://aka.ms/cihsmode before enabling this setting as this setting has additional dependencies such as Data Collection endpoint and Microsoft-ContainerLogV2-HighScale stream instead of Microsoft-ContainerLogV2. 
+    [agent_settings.high_log_scale]
+        enabled = false
+
+    # Retina Network Flow Logs throttle settings
+    # Controls the rate at which network flow log messages are processed.
+    # [agent_settings.networkflow_logs_config]
+        # throttle_enabled = true   # Default is true; adjust this to enable or disable throttling of network flow log messages.
+        # throttle_rate = 5000      # Default is 5000, range 1 to 25,000; adjust this to control the number of messages allowed over the throttle interval.
+        # throttle_window = 300     # Default is 300; adjust this to control the number of intervals the average is calculated over.
+        # throttle_interval = "1s"  # Default is "1s"; the time interval, expressed in "sleep" format, e.g. 3s, 1.5m, 0.5h.
+        # throttle_print = false    # Default is false; adjust this to control whether status messages with the current rate and limits are printed to the information logs.
+
+    # prometheus scrape fluent-bit settings for high scale
+    # buffer size should be greater than or equal to chunk size, else we set it to chunk size.
+    # settings scoped to the prometheus sidecar container. all values in MB
+    [agent_settings.prometheus_fbit_settings]
+        tcp_listener_chunk_size = 10
+        tcp_listener_buffer_size = 10
+        tcp_listener_mem_buf_limit = 200
+
+    # prometheus scrape fluent-bit settings for high scale
+    # buffer size should be greater than or equal to chunk size, else we set it to chunk size.
+    # settings scoped to the daemonset container. all values in MB
+    # [agent_settings.node_prometheus_fbit_settings]
+        # tcp_listener_chunk_size = 1
+        # tcp_listener_buffer_size = 1
+        # tcp_listener_mem_buf_limit = 10
+
+    # prometheus scrape fluent-bit settings for high scale
+    # buffer size should be greater than or equal to chunk size, else we set it to chunk size.
+    # settings scoped to the replicaset container. all values in MB
+    # [agent_settings.cluster_prometheus_fbit_settings]
+        # tcp_listener_chunk_size = 1
+        # tcp_listener_buffer_size = 1
+        # tcp_listener_mem_buf_limit = 10
+
+    # The following settings are "undocumented"; we don't recommend uncommenting them unless directed by Microsoft.
+    # They increase the maximum stdout/stderr log collection rate but will also cause higher cpu/memory usage.
+    ## Ref for more details about Ignore_Older - https://docs.fluentbit.io/manual/v/1.7/pipeline/inputs/tail
+    # [agent_settings.fbit_config]
+        # log_flush_interval_secs = "1"        # default value is 15
+        # tail_mem_buf_limit_megabytes = "10"  # default value is 10
+        # tail_buf_chunksize_megabytes = "1"   # default value is 32kb (comment out this line for default)
+        # tail_buf_maxsize_megabytes = "1"     # default value is 32kb (comment out this line for default)
+        # enable_internal_metrics = "true"     # default value is false
+        # tail_ignore_older = "5m"             # default value same as the fluent-bit default, i.e. 0m
+
+    # On both AKS & Arc K8s environments, if the cluster is configured with a forward proxy, the proxy settings are automatically applied and used for the agent
+    # In certain configurations the proxy config should be ignored, for example a cluster with AMPLS + Proxy
+    # in such scenarios, use the following config to ignore proxy settings
+    # [agent_settings.proxy_config]
+        # ignore_proxy_settings = "true"  # if this is not applied, the default value is false
+
+    # Disables fluent-bit for perf and container inventory for Windows
+    #[agent_settings.windows_fluent_bit]
+        # disabled = "true"
+
+    # The following settings are "undocumented"; we don't recommend uncommenting them unless directed by Microsoft.
+    # Configuration settings for the wait time for the network listeners to be available
+    # [agent_settings.network_listener_waittime]
+        # tcp_port_25226 = 45  # Port 25226 is used for telegraf to fluent-bit data in the ReplicaSet
+        # tcp_port_25228 = 60  # Port 25228 is used for telegraf to fluentd data
+        # tcp_port_25229 = 45  # Port 25229 is used for telegraf to fluent-bit data in the DaemonSet
+
+    # The following settings are "undocumented"; we don't recommend uncommenting them unless directed by Microsoft.
+    # [agent_settings.mdsd_config]
+        # monitoring_max_event_rate = "50000"           # default is 20K eps
+        # backpressure_memory_threshold_in_mb = "1500"  # default is 3500MB
+        # upload_max_size_in_mb = "20"                  # default is 2MB
+        # upload_frequency_seconds = "1"                # default is 60 seconds
+        # compression_level = "0"                       # supported levels are 0 to 9; 0 means no compression
+
+    # Disables fluentd and uses fluent-bit for all data collection
+    # [agent_settings.resource_optimization]
+        # enabled = false  # if this is not applied, the default value is false
+
+    # The following settings are "undocumented"; we don't recommend uncommenting them unless directed by Microsoft.
+    # [agent_settings.telemetry_config]
+        # disable_telemetry = false  # if this is not applied, the default value is false
+
+    # The following setting is for the KubernetesMetadata cache TTL.
+    # [agent_settings.k8s_metadata_config]
+        # kube_meta_cache_ttl_secs = 60  # if this is not applied, the default value is 60s
+
+    # [agent_settings.chunk_config]
+        # PODS_CHUNK_SIZE = 10  # default value is 1000; for large clusters with a high number of pods, this can be reduced to a smaller value if there are gaps in KubePodInventory/KubeNodeInventory data.
+
+metadata:
+  name: container-azm-ms-agentconfig
+  namespace: kube-system
\ No newline at end of file
diff --git a/examples/aks/iis-logmonitor/deployment/deployment.yml b/examples/aks/iis-logmonitor/deployment/deployment.yml
new file mode 100644
index 00000000..d5cd2682
--- /dev/null
+++ b/examples/aks/iis-logmonitor/deployment/deployment.yml
@@ -0,0 +1,56 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: iislogmonitor
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: iislogmonitor
+  template:
+    metadata:
+      labels:
+        app: iislogmonitor
+    spec:
+      nodeSelector:
+        kubernetes.io/os: windows
+      containers:
+      - name: iislogmonitor
+        # Replace the image below with your image built containing LogMonitor.
+        image: /iis-logmonitor:latest
+        imagePullPolicy: IfNotPresent
+        ports:
+        - containerPort: 80
+        env:
+        - name: ASPNETCORE_URLS
+          value: http://*:80
+        livenessProbe:
+          httpGet:
+            path: /
+            port: 80
+          initialDelaySeconds: 30
+          periodSeconds: 20
+        readinessProbe:
+          httpGet:
+            path: /
+            port: 80
+          initialDelaySeconds: 10
+          periodSeconds: 10
+        resources:
+          requests:
+            cpu: "250m"
+            memory: "256Mi"
+          limits:
+            cpu: "500m"
+            memory: "512Mi"
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: iislogmonitor
+spec:
+  type: LoadBalancer
+  ports:
+  - port: 80
+  selector:
+    app: iislogmonitor
\ No newline at end of file
diff --git a/examples/aks/iis-logmonitor/images/iis1.png b/examples/aks/iis-logmonitor/images/iis1.png
new file mode 100644
index 00000000..078b0fad
Binary files /dev/null and b/examples/aks/iis-logmonitor/images/iis1.png differ
diff --git a/examples/aks/iis-logmonitor/images/iis2.png b/examples/aks/iis-logmonitor/images/iis2.png
new file mode 100644
index 00000000..74d7341e
Binary files /dev/null and b/examples/aks/iis-logmonitor/images/iis2.png differ
diff --git a/examples/aks/iis-logmonitor/images/iis3.png b/examples/aks/iis-logmonitor/images/iis3.png
new file mode 100644
index 00000000..0fdd6bc0
Binary files /dev/null and b/examples/aks/iis-logmonitor/images/iis3.png differ
diff --git a/examples/aks/iis-logmonitor/images/iis4.png b/examples/aks/iis-logmonitor/images/iis4.png
new file mode 100644
index 00000000..3460343e
Binary files /dev/null and b/examples/aks/iis-logmonitor/images/iis4.png differ
diff --git a/examples/aks/iis-logmonitor/images/iis5.png b/examples/aks/iis-logmonitor/images/iis5.png
new file mode 100644
index 00000000..872c505b
Binary files /dev/null and b/examples/aks/iis-logmonitor/images/iis5.png differ
diff --git a/examples/aks/iis-logmonitor/ps-scripts/aks-create.ps1 b/examples/aks/iis-logmonitor/ps-scripts/aks-create.ps1
new file mode 100644
index 00000000..1017a8fd
--- /dev/null
+++ b/examples/aks/iis-logmonitor/ps-scripts/aks-create.ps1
@@ -0,0 +1,69 @@
+# ref - https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli
+# az provider register --namespace Microsoft.OperationsManagement
+# az provider register --namespace Microsoft.OperationalInsights
+# az provider show -n Microsoft.OperationsManagement -o table
+# az provider show -n Microsoft.OperationalInsights -o table
+
+# setting variables from variable file (robust parser)
+Write-Host "Loading variables from .\vars.txt" -ForegroundColor Yellow
+foreach ($line in Get-Content .\vars.txt) {
+    $line = $line.Trim()
+    if ([string]::IsNullOrWhiteSpace($line)) { continue }
+    if ($line.StartsWith('#')) { continue }
+    $parts = $line -split '=', 2
+    $name = $parts[0].Trim()
+    $value = if ($parts.Count -gt 1) { $parts[1].Trim() } else { '' }
+    Set-Variable -Name $name -Value $value -Scope Script
+}
+
+# ensure an Azure subscription is selected
+try {
+    $acct = az account show -o json 2>$null | ConvertFrom-Json
+} catch {
+    $acct = $null
+}
+if (-not $acct) {
+    Write-Error "No Azure subscription selected. Run `az login` then `az account set --subscription ` and retry."
+    exit 1
+}
+
+$subscriptionId = $acct.id
+$tenantId = $acct.tenantId
+Write-Host "Using subscription: $($acct.name) ($subscriptionId), Tenant: $tenantId" -ForegroundColor Green
+
+# az aks create `
+#     --resource-group $resourceGroup `
+#     --name $clusterName `
+#     --node-count 2 `
+#     --enable-addons monitoring `
+#     --generate-ssh-keys `
+#     --windows-admin-username $winUsername `
+#     --windows-admin-password $winPassword `
+#     --vm-set-type VirtualMachineScaleSets `
+#     --network-plugin azure
+
+# removing the "--enable-addons monitoring" // fails on our subscription
+az aks create `
+    --resource-group $resourceGroup `
+    --name $clusterName `
+    --node-count 2 `
+    --generate-ssh-keys `
+    --windows-admin-username $winUsername `
+    --windows-admin-password $winPassword `
+    --vm-set-type VirtualMachineScaleSets `
+    --network-plugin azure
+
+Write-Host "Getting cluster configuration" -ForegroundColor Yellow
+az aks get-credentials --resource-group $resourceGroup --name $clusterName
+
+# By default, an AKS cluster is created with a node pool that can run Linux containers
+# add an additional node pool that can run Windows Server containers alongside the Linux node pool
+Write-Host "Adding Windows Server containers node pool" -ForegroundColor Yellow
+
+az aks nodepool add `
+    --resource-group $resourceGroup `
+    --cluster-name $clusterName `
+    --os-type $osType `
+    --name $windowsNodepoolName `
+    --node-count 1 `
+    --os-sku $osSku
diff --git a/examples/aks/iis-logmonitor/ps-scripts/clean-up.ps1 b/examples/aks/iis-logmonitor/ps-scripts/clean-up.ps1
new file mode 100644
index 00000000..e090e380
--- /dev/null
+++ b/examples/aks/iis-logmonitor/ps-scripts/clean-up.ps1
@@ -0,0 +1,45 @@
+
+param(
+    [switch]$Force
+)
+
+# setting variables from variable file (robust parser)
+Write-Host "Loading variables from .\vars.txt" -ForegroundColor Yellow
+foreach ($line in Get-Content .\vars.txt) {
+    $line = $line.Trim()
+    if ([string]::IsNullOrWhiteSpace($line)) { continue }
+    if ($line.StartsWith('#')) { continue }
+    $parts = $line -split '=', 2
+    $name = $parts[0].Trim()
+    $value = if ($parts.Count -gt 1) { $parts[1].Trim() } else { '' }
+    Set-Variable -Name $name -Value $value -Scope Script
+}
+
+# ensure an Azure subscription is selected
+try {
+    $acct = az account show -o json 2>$null | ConvertFrom-Json
+} catch {
+    $acct = $null
+}
+if (-not $acct) {
+    Write-Error "No Azure subscription selected. Run `az login` then `az account set --subscription ` and retry."
+    exit 1
+}
+Write-Host "Using subscription: $($acct.name) ($($acct.id))" -ForegroundColor Green
+
+if (-not $Force) {
+    $confirm = Read-Host "This will DELETE resource group '$resourceGroup' and all resources. Type 'DELETE' to confirm"
+    if ($confirm -ne 'DELETE') {
+        Write-Host "Aborted by user." -ForegroundColor Yellow
+        exit 0
+    }
+}
+
+Write-Host "Deleting resource group: $resourceGroup" -ForegroundColor Red
+az group delete --name $resourceGroup --yes --no-wait
+if ($LASTEXITCODE -ne 0) {
+    Write-Error "Failed to initiate deletion of resource group '$resourceGroup' (exit code $LASTEXITCODE)"
+    exit $LASTEXITCODE
+}
+
+Write-Host "Deletion initiated. Use 'az group show --name $resourceGroup' to check status." -ForegroundColor Green
diff --git a/examples/aks/iis-logmonitor/ps-scripts/deploy.ps1 b/examples/aks/iis-logmonitor/ps-scripts/deploy.ps1
new file mode 100644
index 00000000..b8095d59
--- /dev/null
+++ b/examples/aks/iis-logmonitor/ps-scripts/deploy.ps1
@@ -0,0 +1,37 @@
+Write-Host "Loading variables from .\vars.txt" -ForegroundColor Yellow
+foreach ($line in Get-Content .\vars.txt) {
+    $line = $line.Trim()
+    if ([string]::IsNullOrWhiteSpace($line)) { continue }
+    if ($line.StartsWith('#')) { continue }
+    $parts = $line -split '=', 2
+    $name = $parts[0].Trim()
+    $value = if ($parts.Count -gt 1) { $parts[1].Trim() } else { '' }
+    Set-Variable -Name $name -Value $value -Scope Script
+}
+
+# ensure kubectl is available
+if (-not (Get-Command kubectl -ErrorAction SilentlyContinue)) {
+    Write-Error "kubectl not found in PATH. Install kubectl and ensure it's on PATH."
+    exit 1
+}
+
+$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Definition
+$deploymentDir = Resolve-Path -Path (Join-Path $scriptDir '..\deployment') -ErrorAction SilentlyContinue
+if (-not $deploymentDir) {
+    Write-Error "Deployment directory '..\deployment' not found relative to script."
+    exit 1
+}
+
+Write-Host "Deploying app onto K8s cluster $clusterName" -ForegroundColor Yellow
+
+# show current kubectl context (informational)
+$currentContext = kubectl config current-context 2>$null
+if ($currentContext) { Write-Host "kubectl current-context: $currentContext" -ForegroundColor Yellow } else { Write-Warning "kubectl current-context is not set. Ensure your kubeconfig is correct." }
+
+kubectl apply -f $deploymentDir.Path
+if ($LASTEXITCODE -ne 0) {
+    Write-Error "kubectl apply failed with exit code $LASTEXITCODE"
+    exit $LASTEXITCODE
+}
+
+Write-Host "Completed: run [kubectl get pods] or [kubectl describe pods] to see details" -ForegroundColor Green
diff --git a/examples/aks/iis-logmonitor/ps-scripts/rg-create.ps1 b/examples/aks/iis-logmonitor/ps-scripts/rg-create.ps1
new file mode 100644
index 00000000..19bb1386
--- /dev/null
+++ b/examples/aks/iis-logmonitor/ps-scripts/rg-create.ps1
@@ -0,0 +1,28 @@
+
+# setting variables from variable file
+Write-Host "Loading variables from .\vars.txt" -ForegroundColor Yellow
+foreach ($line in Get-Content .\vars.txt) {
+    $line = $line.Trim()
+    if ([string]::IsNullOrWhiteSpace($line)) { continue }
+    if ($line.StartsWith('#')) { continue }
+    $parts = $line -split '=', 2
+    $name = $parts[0].Trim()
+    $value = if ($parts.Count -gt 1) { $parts[1].Trim() } else { '' }
+    Set-Variable -Name $name -Value $value -Scope Script
+}
+
+# ensure an Azure subscription is selected
+try {
+    $acct = az account show -o json 2>$null | ConvertFrom-Json
+} catch {
+    $acct = $null
+}
+if (-not $acct) {
+    Write-Error "No Azure subscription selected. Run `az login` then `az account set --subscription ` and retry."
+    exit 1
+}
+
+Write-Host "Using subscription: $($acct.name) ($($acct.id))" -ForegroundColor Green
+
+Write-Host "Creating resource group: $resourceGroup" -ForegroundColor Yellow
+az group create --name $resourceGroup --location $resourceGroupAZ
diff --git a/examples/aks/iis-logmonitor/ps-scripts/vars.txt b/examples/aks/iis-logmonitor/ps-scripts/vars.txt
new file mode 100644
index 00000000..688040c9
--- /dev/null
+++ b/examples/aks/iis-logmonitor/ps-scripts/vars.txt
@@ -0,0 +1,9 @@
+resourceGroup=logmonv213-demo
+resourceGroupAZ=eastus
+clusterName=aks-logmonv213
+winUsername=
+# winPassword must be at least 14 chars
+winPassword=
+windowsNodepoolName=npwin
+osType=Windows
+osSku=Windows2022
\ No newline at end of file