From e61882b007cada20dec1e4843c8723885aca9e32 Mon Sep 17 00:00:00 2001
From: Brend Smits
Date: Fri, 13 Feb 2026 15:32:21 +0100
Subject: [PATCH 01/22] feat: add bidirectionalLabelMatch option and deprecate
exactMatch
Introduce a new bidirectionalLabelMatch option that performs strict
two-way label matching (runner labels must equal workflow labels as a
set). This preserves the existing exactMatch behavior (unidirectional
subset check) to avoid breaking changes.
The bidirectionalLabelMatch option requires runner and workflow labels
to be identical in both directions. Previously, a runner with labels
[A, B, C, D] would match a job requesting [A, B, C] when exactMatch was
true. With bidirectionalLabelMatch=true the label sets must be exactly
identical: the runner will only match if the job requests exactly
[A, B, C, D].
This change affects users who have runners with extra labels (e.g.,
on-demand) that were previously matching jobs not explicitly
requesting those labels. After this change, such runners will only
be used when jobs explicitly request all of the runner labels.
Before: Job [A,B,C] + Runner [A,B,C,D] + exactMatch=true -> Match
After: Job [A,B,C] + Runner [A,B,C,D] + bidirectionalLabelMatch=true -> No Match
exactMatch was supposed to have this behaviour, but to avoid breaking
changes the variable is deprecated instead of changed, giving users time
to migrate.
To migrate, use bidirectionalLabelMatch instead of exactMatch in your runner configs.
Then either:
1. Remove extra labels from runner configurations, or
2. Add the extra labels to your workflow job runs-on
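The difference between the two modes can be sketched as follows (a
simplified illustration with assumed helper names subsetMatch and
bidirectionalMatch; the real logic lives in canRunJob in
lambdas/functions/webhook/src/runners/dispatch.ts):

```typescript
// Sketch: exactMatch is a one-way subset check, bidirectionalLabelMatch
// is a strict two-way set comparison. Helper names are illustrative only.
function subsetMatch(workflowLabels: string[], runnerLabels: string[]): boolean {
  // exactMatch semantics: every workflow label must appear on the runner,
  // but the runner may carry extra labels.
  const rl = runnerLabels.map((l) => l.toLowerCase());
  return workflowLabels.every((wl) => rl.includes(wl.toLowerCase()));
}

function bidirectionalMatch(workflowLabels: string[], runnerLabels: string[]): boolean {
  // bidirectionalLabelMatch semantics: both sides must form the same set,
  // in any order, with no extra or missing labels on either side.
  const wl = workflowLabels.map((l) => l.toLowerCase());
  const rl = runnerLabels.map((l) => l.toLowerCase());
  return wl.every((l) => rl.includes(l)) && rl.every((l) => wl.includes(l));
}

console.log(subsetMatch(['A', 'B', 'C'], ['A', 'B', 'C', 'D'])); // true
console.log(bidirectionalMatch(['A', 'B', 'C'], ['A', 'B', 'C', 'D'])); // false
console.log(bidirectionalMatch(['C', 'B', 'A'], ['a', 'b', 'c'])); // true
```

A runner with an extra on-demand label still passes the subset check but
fails the two-way check, which is exactly the Before/After case above.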
Signed-off-by: Brend Smits
Co-authored-by: Stuart Pearson
---
README.md | 3 +-
.../webhook/src/ConfigLoader.test.ts | 8 +-
lambdas/functions/webhook/src/ConfigLoader.ts | 2 +-
.../webhook/src/runners/dispatch.test.ts | 80 +++++++++++++++++++
.../functions/webhook/src/runners/dispatch.ts | 33 ++++++--
lambdas/functions/webhook/src/sqs/index.ts | 1 +
main.tf | 1 +
modules/multi-runner/README.md | 2 +-
modules/multi-runner/variables.tf | 10 ++-
modules/webhook/README.md | 2 +-
modules/webhook/variables.tf | 7 +-
variables.tf | 8 +-
12 files changed, 134 insertions(+), 23 deletions(-)
diff --git a/README.md b/README.md
index db64112ce9..be25b537b8 100644
--- a/README.md
+++ b/README.md
@@ -126,10 +126,11 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh)
| [enable\_job\_queued\_check](#input\_enable\_job\_queued\_check) | Only scale if the job event received by the scale up lambda is in the queued state. By default enabled for non ephemeral runners and disabled for ephemeral. Set this variable to overwrite the default behavior. | `bool` | `null` | no |
| [enable\_managed\_runner\_security\_group](#input\_enable\_managed\_runner\_security\_group) | Enables creation of the default managed security group. Unmanaged security groups can be specified via `runner_additional_security_group_ids`. | `bool` | `true` | no |
| [enable\_organization\_runners](#input\_enable\_organization\_runners) | Register runners to organization, instead of repo level | `bool` | `false` | no |
+| [enable\_runner\_bidirectional\_label\_match](#input\_enable\_runner\_bidirectional\_label\_match) | If set to true, the runner labels and workflow job labels must be an exact two-way match (same set, any order, no extras or missing labels). This is stricter than `enable_runner_workflow_job_labels_check_all` which only checks that workflow labels are a subset of runner labels. When false, if __any__ label matches it will trigger the webhook. | `bool` | `false` | no |
| [enable\_runner\_binaries\_syncer](#input\_enable\_runner\_binaries\_syncer) | Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-build AMI. | `bool` | `true` | no |
| [enable\_runner\_detailed\_monitoring](#input\_enable\_runner\_detailed\_monitoring) | Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details. | `bool` | `false` | no |
| [enable\_runner\_on\_demand\_failover\_for\_errors](#input\_enable\_runner\_on\_demand\_failover\_for\_errors) | Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. When not defined the default behavior is to retry later. | `list(string)` | `[]` | no |
-| [enable\_runner\_workflow\_job\_labels\_check\_all](#input\_enable\_runner\_workflow\_job\_labels\_check\_all) | If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ label matches it will trigger the webhook. | `bool` | `true` | no |
+| [enable\_runner\_workflow\_job\_labels\_check\_all](#input\_enable\_runner\_workflow\_job\_labels\_check\_all) | DEPRECATED: Use `enable_runner_bidirectional_label_match` instead. If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ label matches it will trigger the webhook. Note: this only checks that workflow labels are a subset of runner labels, not the reverse. | `bool` | `true` | no |
| [enable\_ssm\_on\_runners](#input\_enable\_ssm\_on\_runners) | Enable to allow access to the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances. | `bool` | `false` | no |
| [enable\_user\_data\_debug\_logging\_runner](#input\_enable\_user\_data\_debug\_logging\_runner) | Option to enable debug logging for user-data, this logs all secrets as well. | `bool` | `false` | no |
| [enable\_userdata](#input\_enable\_userdata) | Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI. | `bool` | `true` | no |
diff --git a/lambdas/functions/webhook/src/ConfigLoader.test.ts b/lambdas/functions/webhook/src/ConfigLoader.test.ts
index 3e15e3308a..11383cc326 100644
--- a/lambdas/functions/webhook/src/ConfigLoader.test.ts
+++ b/lambdas/functions/webhook/src/ConfigLoader.test.ts
@@ -165,7 +165,7 @@ describe('ConfigLoader Tests', () => {
});
await expect(ConfigWebhook.load()).rejects.toThrow(
- 'Failed to load config: Failed to load parameter for matcherConfig from path /path/to/matcher/config: Failed to load matcher config', // eslint-disable-line max-len
+ 'Failed to load config: Failed to load parameter for matcherConfig from path /path/to/matcher/config: Failed to load matcher config',
);
});
@@ -213,7 +213,7 @@ describe('ConfigLoader Tests', () => {
});
await expect(ConfigWebhook.load()).rejects.toThrow(
- "Failed to load config: Failed to parse combined matcher config: Expected ',' or ']' after array element in JSON at position 196", // eslint-disable-line max-len
+ "Failed to load config: Failed to parse combined matcher config: Expected ',' or ']' after array element in JSON at position 196",
);
});
});
@@ -244,7 +244,7 @@ describe('ConfigLoader Tests', () => {
});
await expect(ConfigWebhookEventBridge.load()).rejects.toThrow(
- 'Failed to load config: Environment variable for eventBusName is not set and no default value provided., Failed to load parameter for webhookSecret from path undefined: Parameter undefined not found', // eslint-disable-line max-len
+ 'Failed to load config: Environment variable for eventBusName is not set and no default value provided., Failed to load parameter for webhookSecret from path undefined: Parameter undefined not found',
);
});
});
@@ -309,7 +309,7 @@ describe('ConfigLoader Tests', () => {
});
await expect(ConfigDispatcher.load()).rejects.toThrow(
- 'Failed to load config: Failed to load parameter for matcherConfig from path undefined: Parameter undefined not found', // eslint-disable-line max-len
+ 'Failed to load config: Failed to load parameter for matcherConfig from path undefined: Parameter undefined not found',
);
});
diff --git a/lambdas/functions/webhook/src/ConfigLoader.ts b/lambdas/functions/webhook/src/ConfigLoader.ts
index 4af58022a4..910fbfe7c0 100644
--- a/lambdas/functions/webhook/src/ConfigLoader.ts
+++ b/lambdas/functions/webhook/src/ConfigLoader.ts
@@ -61,7 +61,7 @@ abstract class BaseConfig {
this.loadProperty(propertyName, value);
})
.catch((error) => {
- const errorMessage = `Failed to load parameter for ${String(propertyName)} from path ${paramPath}: ${(error as Error).message}`; // eslint-disable-line max-len
+ const errorMessage = `Failed to load parameter for ${String(propertyName)} from path ${paramPath}: ${(error as Error).message}`;
this.configLoadingErrors.push(errorMessage);
});
}
diff --git a/lambdas/functions/webhook/src/runners/dispatch.test.ts b/lambdas/functions/webhook/src/runners/dispatch.test.ts
index e8eff9be4c..d3f1b29523 100644
--- a/lambdas/functions/webhook/src/runners/dispatch.test.ts
+++ b/lambdas/functions/webhook/src/runners/dispatch.test.ts
@@ -225,6 +225,86 @@ describe('Dispatcher', () => {
const runnerLabels = [['self-hosted', 'linux', 'x64']];
expect(canRunJob(workflowLabels, runnerLabels, false)).toBe(true);
});
+
+ it('should match when runner has more labels than workflow requests with exactMatch=true (unidirectional).', () => {
+ const workflowLabels = ['self-hosted', 'linux', 'x64', 'staging', 'ubuntu-2404'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64', 'staging', 'ubuntu-2404', 'on-demand']];
+ expect(canRunJob(workflowLabels, runnerLabels, true)).toBe(true);
+ });
+
+ it('should match when labels are exactly identical with exactMatch=true.', () => {
+ const workflowLabels = ['self-hosted', 'linux', 'on-demand'];
+ const runnerLabels = [['self-hosted', 'linux', 'on-demand']];
+ expect(canRunJob(workflowLabels, runnerLabels, true)).toBe(true);
+ });
+
+ it('should match with exactMatch=true when labels are in different order.', () => {
+ const workflowLabels = ['linux', 'self-hosted', 'x64'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64']];
+ expect(canRunJob(workflowLabels, runnerLabels, true)).toBe(true);
+ });
+
+ it('should match with exactMatch=true when labels are completely shuffled.', () => {
+ const workflowLabels = ['x64', 'ubuntu-latest', 'self-hosted', 'linux'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64', 'ubuntu-latest']];
+ expect(canRunJob(workflowLabels, runnerLabels, true)).toBe(true);
+ });
+
+ it('should match with exactMatch=false when labels are in different order.', () => {
+ const workflowLabels = ['gpu', 'self-hosted'];
+ const runnerLabels = [['self-hosted', 'gpu']];
+ expect(canRunJob(workflowLabels, runnerLabels, false)).toBe(true);
+ });
+
+ // bidirectionalLabelMatch tests
+ it('should NOT match when runner has more labels than workflow requests (bidirectionalLabelMatch=true).', () => {
+ const workflowLabels = ['self-hosted', 'linux', 'x64', 'staging', 'ubuntu-2404'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64', 'staging', 'ubuntu-2404', 'on-demand']];
+ expect(canRunJob(workflowLabels, runnerLabels, false, true)).toBe(false);
+ });
+
+ it('should NOT match when workflow has more labels than runner (bidirectionalLabelMatch=true).', () => {
+ const workflowLabels = ['self-hosted', 'linux', 'x64', 'ubuntu-latest', 'gpu'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64']];
+ expect(canRunJob(workflowLabels, runnerLabels, false, true)).toBe(false);
+ });
+
+ it('should match when labels are exactly identical with bidirectionalLabelMatch=true.', () => {
+ const workflowLabels = ['self-hosted', 'linux', 'on-demand'];
+ const runnerLabels = [['self-hosted', 'linux', 'on-demand']];
+ expect(canRunJob(workflowLabels, runnerLabels, false, true)).toBe(true);
+ });
+
+ it('should match with bidirectionalLabelMatch=true when labels are in different order.', () => {
+ const workflowLabels = ['linux', 'self-hosted', 'x64'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64']];
+ expect(canRunJob(workflowLabels, runnerLabels, false, true)).toBe(true);
+ });
+
+ it('should match with bidirectionalLabelMatch=true when labels are completely shuffled.', () => {
+ const workflowLabels = ['x64', 'ubuntu-latest', 'self-hosted', 'linux'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64', 'ubuntu-latest']];
+ expect(canRunJob(workflowLabels, runnerLabels, false, true)).toBe(true);
+ });
+
+ it('should match with bidirectionalLabelMatch=true ignoring case.', () => {
+ const workflowLabels = ['Self-Hosted', 'Linux', 'X64'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64']];
+ expect(canRunJob(workflowLabels, runnerLabels, false, true)).toBe(true);
+ });
+
+ it('should NOT match empty workflow labels with bidirectionalLabelMatch=true.', () => {
+ const workflowLabels: string[] = [];
+ const runnerLabels = [['self-hosted', 'linux', 'x64']];
+ expect(canRunJob(workflowLabels, runnerLabels, false, true)).toBe(false);
+ });
+
+ it('bidirectionalLabelMatch takes precedence over exactMatch when both are true.', () => {
+ const workflowLabels = ['self-hosted', 'linux', 'x64'];
+ const runnerLabels = [['self-hosted', 'linux', 'x64', 'ubuntu-latest']];
+ // exactMatch alone would accept this (runner has extra labels), but bidirectional should reject
+ expect(canRunJob(workflowLabels, runnerLabels, true, true)).toBe(false);
+ });
});
});
diff --git a/lambdas/functions/webhook/src/runners/dispatch.ts b/lambdas/functions/webhook/src/runners/dispatch.ts
index fe81e63a26..d0d9e992ff 100644
--- a/lambdas/functions/webhook/src/runners/dispatch.ts
+++ b/lambdas/functions/webhook/src/runners/dispatch.ts
@@ -42,12 +42,21 @@ async function handleWorkflowJob(
`Job ID: ${body.workflow_job.id}, Job Name: ${body.workflow_job.name}, ` +
`Run ID: ${body.workflow_job.run_id}, Labels: ${JSON.stringify(body.workflow_job.labels)}`,
);
- // sort the queuesConfig by order of matcher config exact match, with all true matches lined up ahead.
+ // sort the queuesConfig by order of matcher config exact/bidirectional match, with all true matches lined up ahead.
matcherConfig.sort((a, b) => {
- return a.matcherConfig.exactMatch === b.matcherConfig.exactMatch ? 0 : a.matcherConfig.exactMatch ? -1 : 1;
+ const aStrict = a.matcherConfig.bidirectionalLabelMatch || a.matcherConfig.exactMatch;
+ const bStrict = b.matcherConfig.bidirectionalLabelMatch || b.matcherConfig.exactMatch;
+ return aStrict === bStrict ? 0 : aStrict ? -1 : 1;
});
for (const queue of matcherConfig) {
- if (canRunJob(body.workflow_job.labels, queue.matcherConfig.labelMatchers, queue.matcherConfig.exactMatch)) {
+ if (
+ canRunJob(
+ body.workflow_job.labels,
+ queue.matcherConfig.labelMatchers,
+ queue.matcherConfig.exactMatch,
+ queue.matcherConfig.bidirectionalLabelMatch,
+ )
+ ) {
await sendActionRequest({
id: body.workflow_job.id,
repositoryName: body.repository.name,
@@ -80,14 +89,24 @@ export function canRunJob(
workflowJobLabels: string[],
runnerLabelsMatchers: string[][],
workflowLabelCheckAll: boolean,
+ bidirectionalLabelMatch = false,
): boolean {
runnerLabelsMatchers = runnerLabelsMatchers.map((runnerLabel) => {
return runnerLabel.map((label) => label.toLowerCase());
});
- const matchLabels = workflowLabelCheckAll
- ? runnerLabelsMatchers.some((rl) => workflowJobLabels.every((wl) => rl.includes(wl.toLowerCase())))
- : runnerLabelsMatchers.some((rl) => workflowJobLabels.some((wl) => rl.includes(wl.toLowerCase())));
- const match = workflowJobLabels.length === 0 ? !matchLabels : matchLabels;
+
+ let match: boolean;
+ if (bidirectionalLabelMatch) {
+ const workflowLabelsLower = workflowJobLabels.map((wl) => wl.toLowerCase());
+ match = runnerLabelsMatchers.some(
+ (rl) => workflowLabelsLower.every((wl) => rl.includes(wl)) && rl.every((r) => workflowLabelsLower.includes(r)),
+ );
+ } else {
+ const matchLabels = workflowLabelCheckAll
+ ? runnerLabelsMatchers.some((rl) => workflowJobLabels.every((wl) => rl.includes(wl.toLowerCase())))
+ : runnerLabelsMatchers.some((rl) => workflowJobLabels.some((wl) => rl.includes(wl.toLowerCase())));
+ match = workflowJobLabels.length === 0 ? !matchLabels : matchLabels;
+ }
logger.debug(
`Received workflow job event with labels: '${JSON.stringify(workflowJobLabels)}'. The event does ${
diff --git a/lambdas/functions/webhook/src/sqs/index.ts b/lambdas/functions/webhook/src/sqs/index.ts
index a028d7dcc4..b2c6982471 100644
--- a/lambdas/functions/webhook/src/sqs/index.ts
+++ b/lambdas/functions/webhook/src/sqs/index.ts
@@ -17,6 +17,7 @@ export interface ActionRequestMessage {
export interface MatcherConfig {
labelMatchers: string[][];
exactMatch: boolean;
+ bidirectionalLabelMatch?: boolean;
}
export type RunnerConfig = RunnerMatcherConfig[];
diff --git a/main.tf b/main.tf
index 017cbbbfe4..1c07389116 100644
--- a/main.tf
+++ b/main.tf
@@ -114,6 +114,7 @@ module "webhook" {
matcherConfig : {
labelMatchers : [local.runner_labels]
exactMatch : var.enable_runner_workflow_job_labels_check_all
+ bidirectionalLabelMatch : var.enable_runner_bidirectional_label_match
}
}
}
diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md
index 9b3dc5f7f7..9c6eb04c57 100644
--- a/modules/multi-runner/README.md
+++ b/modules/multi-runner/README.md
@@ -150,7 +150,7 @@ module "multi-runner" {
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
| [matcher\_config\_parameter\_store\_tier](#input\_matcher\_config\_parameter\_store\_tier) | The tier of the parameter store for the matcher configuration. Valid values are `Standard`, and `Advanced`. | `string` | `"Standard"` | no |
| [metrics](#input\_metrics) | Configuration for metrics created by the module, by default metrics are disabled to avoid additional costs. When metrics are enable all metrics are created unless explicit configured otherwise. | object({ enable = optional(bool, false) namespace = optional(string, "GitHub Runners") metric = optional(object({ enable_github_app_rate_limit = optional(bool, true) enable_job_retry = optional(bool, true) enable_spot_termination_warning = optional(bool, true) }), {}) }) | `{}` | no |
-| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = { runner\_config: { runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)." runner\_architecture: "The platform architecture of the runner instance\_type." runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." create\_service\_linked\_role\_spot: (Optional) create the serviced linked role for spot instances that is required by the scale-up lambda. credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`. delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)" ebs\_optimized: "The EC2 EBS optimized configuration." enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once." enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registraton. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners an can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners." enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. 
When not defined the default behavior is to retry later." scale\_errors: "List of aws error codes that should trigger retry during scale up. This list will replace the default errors defined in the variable `defaultScaleErrors` in https://github.com/github-aws-runners/terraform-aws-github-runner/blob/main/lambdas/functions/control-plane/src/aws/runners.ts" enable\_organization\_runners: "Register runners to organization, instead of repo level" enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-build AMI." enable\_ssm\_on\_runners: "Enable to allow access the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances." enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI." instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`." instance\_max\_spot\_price: "Max price price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet." instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`." instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)." job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged" minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before terminated if not busy." pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported." runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. 
If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner." runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored." runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner." runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case you on own start script is used, this configuration parameter needs to be parsed via SSM." runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Labels checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided." runner\_group\_name: "Name of the runner group." runner\_name\_prefix: "Prefix for the GitHub runner name." runner\_run\_as: "Run the GitHub actions agent as user." runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` desiables the maximum check." scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down." scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations." userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored." enable\_jit\_config "Overwrite the default behavior for JIT configuration. 
By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI." enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details." enable\_cloudwatch\_agent: "Enabling the cloudwatch agent on the ec2 runner instances, the runner contains default config. Configuration can be overridden via `cloudwatch_config`." cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." userdata\_pre\_install: "Script to be ran before the GitHub Actions runner is installed on the EC2 instances" userdata\_post\_install: "Script to be ran after the GitHub Actions runner is installed on the EC2 instances" runner\_hook\_job\_started: "Script to be ran in the runner environment at the beginning of every job" runner\_hook\_job\_completed: "Script to be ran in the runner environment at the end of every job" runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications." runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role" vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`." subnet\_ids: "List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. If not set, uses the value of `var.subnet_ids`." 
idle\_config: "List of time period that can be defined as cron expression to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle." runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`." job\_retry: "Experimental! Can be removed / changed without trigger a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda is checking after a delay if the job is queued. If not the message will be published again on the scale-up (build queue). Using this feature can impact the rate limit of the GitHub app." pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)." } matcherConfig: { labelMatchers: "The list of list of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`" exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook." 
priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999." } redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true`, `maxReceiveCount` to a number of max retries." } | map(object({ runner_config = object({ runner_os = string runner_architecture = string runner_metadata_options = optional(map(any), { instance_metadata_tags = "enabled" http_endpoint = "enabled" http_tokens = "required" http_put_response_hop_limit = 1 }) ami = optional(object({ filter = optional(map(list(string)), { state = ["available"] }) owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) disable_runner_autoupdate = optional(bool, false) ebs_optimized = optional(bool, false) enable_ephemeral_runners = optional(bool, false) enable_job_queued_check = optional(bool, null) enable_on_demand_failover_for_errors = optional(list(string), []) scale_errors = optional(list(string), [ "UnfulfillableCapacity", "MaxSpotInstanceCountExceeded", "TargetCapacityLimitExceededException", "RequestLimitExceeded", "ResourceLimitExceeded", "MaxSpotInstanceCountExceeded", "MaxSpotFleetRequestCountExceeded", "InsufficientInstanceCapacity", "InsufficientCapacityOnHost", ]) enable_organization_runners = optional(bool, false) enable_runner_binaries_syncer = optional(bool, true) enable_ssm_on_runners = optional(bool, false) enable_userdata = optional(bool, true) instance_allocation_strategy = optional(string, "lowest-price") instance_max_spot_price = optional(string, null) 
instance_target_capacity_type = optional(string, "spot") instance_types = list(string) job_queue_retention_in_seconds = optional(number, 86400) minimum_running_time_in_minutes = optional(number, null) pool_runner_owner = optional(string, null) runner_as_root = optional(bool, false) runner_boot_time_in_minutes = optional(number, 5) runner_disable_default_labels = optional(bool, false) runner_extra_labels = optional(list(string), []) runner_group_name = optional(string, "Default") runner_name_prefix = optional(string, "") runner_run_as = optional(string, "ec2-user") runners_maximum_count = number runner_additional_security_group_ids = optional(list(string), []) scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? *)") scale_up_reserved_concurrent_executions = optional(number, 1) userdata_template = optional(string, null) userdata_content = optional(string, null) enable_jit_config = optional(bool, null) enable_runner_detailed_monitoring = optional(bool, false) enable_cloudwatch_agent = optional(bool, true) cloudwatch_config = optional(string, null) userdata_pre_install = optional(string, "") userdata_post_install = optional(string, "") runner_hook_job_started = optional(string, "") runner_hook_job_completed = optional(string, "") runner_ec2_tags = optional(map(string), {}) runner_iam_role_managed_policy_arns = optional(list(string), []) vpc_id = optional(string, null) subnet_ids = optional(list(string), null) idle_config = optional(list(object({ cron = string timeZone = string idleCount = number evictionStrategy = optional(string, "oldest_first") })), []) cpu_options = optional(object({ core_count = number threads_per_core = number }), null) placement = optional(object({ affinity = optional(string) availability_zone = optional(string) group_id = optional(string) group_name = optional(string) host_id = optional(string) host_resource_group_arn = optional(string) spread_domain = optional(string) tenancy = optional(string) partition_number = 
optional(number) }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string })), null) block_device_mappings = optional(list(object({ delete_on_termination = optional(bool, true) device_name = optional(string, "/dev/xvda") encrypted = optional(bool, true) iops = optional(number) kms_key_id = optional(string) snapshot_id = optional(string) throughput = optional(number) volume_size = number volume_type = optional(string, "gp3") })), [{ volume_size = 30 }]) pool_config = optional(list(object({ schedule_expression = string schedule_expression_timezone = optional(string) size = number })), []) job_retry = optional(object({ enable = optional(bool, false) delay_in_seconds = optional(number, 300) delay_backoff = optional(number, 2) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 30) max_attempts = optional(number, 1) }), {}) }) matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = optional(bool, false) priority = optional(number, 999) }) redrive_build_queue = optional(object({ enabled = bool maxReceiveCount = number }), { enabled = false maxReceiveCount = null }) })) | n/a | yes |
+| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = { runner\_config: { runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)." runner\_architecture: "The platform architecture of the runner instance\_type." runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." create\_service\_linked\_role\_spot: (Optional) create the service linked role for spot instances that is required by the scale-up lambda. credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`." delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)" ebs\_optimized: "The EC2 EBS optimized configuration." enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once." enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registration. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners and can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners." enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. 
When not defined the default behavior is to retry later." scale\_errors: "List of aws error codes that should trigger retry during scale up. This list will replace the default errors defined in the variable `defaultScaleErrors` in https://github.com/github-aws-runners/terraform-aws-github-runner/blob/main/lambdas/functions/control-plane/src/aws/runners.ts" enable\_organization\_runners: "Register runners to organization, instead of repo level" enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-built AMI." enable\_ssm\_on\_runners: "Enable to allow access to the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances." enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI." instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends using `capacity-optimized`, however the AWS default is `lowest-price`." instance\_max\_spot\_price: "Max price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet." instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`." instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)." job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged" minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before being terminated if not busy." pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported." runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. 
If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner." runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored." runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner." runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case your own start script is used, this configuration parameter needs to be parsed via SSM." runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Label checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided." runner\_group\_name: "Name of the runner group." runner\_name\_prefix: "Prefix for the GitHub runner name." runner\_run\_as: "Run the GitHub actions agent as user." runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` disables the maximum check." scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down." scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations." userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored." enable\_jit\_config: "Overwrite the default behavior for JIT configuration. 
By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI." enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details." enable\_cloudwatch\_agent: "Enables the cloudwatch agent on the ec2 runner instances; the runner contains a default config. Configuration can be overridden via `cloudwatch_config`." cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." userdata\_pre\_install: "Script to be run before the GitHub Actions runner is installed on the EC2 instances" userdata\_post\_install: "Script to be run after the GitHub Actions runner is installed on the EC2 instances" runner\_hook\_job\_started: "Script to be run in the runner environment at the beginning of every job" runner\_hook\_job\_completed: "Script to be run in the runner environment at the end of every job" runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications." runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role" vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`." subnet\_ids: "List of subnets in which the action runners will be launched, the subnets need to be subnets in the `vpc_id`. If not set, uses the value of `var.subnet_ids`." 
idle\_config: "List of time periods that can be defined as cron expressions to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle." runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`." job\_retry: "Experimental! Can be removed / changed without triggering a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda checks after a delay if the job is queued. If not, the message will be published again on the scale-up (build queue). Using this feature can impact the rate limit of the GitHub app." pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)." } matcherConfig: { labelMatchers: "The list of lists of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`" exactMatch: "DEPRECATED: Use `bidirectionalLabelMatch` instead. If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook. 
Note: this only checks that workflow labels are a subset of runner labels, not the reverse." bidirectionalLabelMatch: "If set to true, the runner labels and workflow job labels must be an exact two-way match (same set, any order, no extras or missing labels). This is stricter than `exactMatch` which only checks that workflow labels are a subset of runner labels. When false, if __any__ workflow label matches it will trigger the webhook." priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999." } redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true`, `maxReceiveCount` to a number of max retries." } | map(object({ runner_config = object({ runner_os = string runner_architecture = string runner_metadata_options = optional(map(any), { instance_metadata_tags = "enabled" http_endpoint = "enabled" http_tokens = "required" http_put_response_hop_limit = 1 }) ami = optional(object({ filter = optional(map(list(string)), { state = ["available"] }) owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) disable_runner_autoupdate = optional(bool, false) ebs_optimized = optional(bool, false) enable_ephemeral_runners = optional(bool, false) enable_job_queued_check = optional(bool, null) enable_on_demand_failover_for_errors = optional(list(string), []) scale_errors = optional(list(string), [ "UnfulfillableCapacity", "MaxSpotInstanceCountExceeded", "TargetCapacityLimitExceededException", "RequestLimitExceeded", "ResourceLimitExceeded", 
"MaxSpotInstanceCountExceeded", "MaxSpotFleetRequestCountExceeded", "InsufficientInstanceCapacity", "InsufficientCapacityOnHost", ]) enable_organization_runners = optional(bool, false) enable_runner_binaries_syncer = optional(bool, true) enable_ssm_on_runners = optional(bool, false) enable_userdata = optional(bool, true) instance_allocation_strategy = optional(string, "lowest-price") instance_max_spot_price = optional(string, null) instance_target_capacity_type = optional(string, "spot") instance_types = list(string) job_queue_retention_in_seconds = optional(number, 86400) minimum_running_time_in_minutes = optional(number, null) pool_runner_owner = optional(string, null) runner_as_root = optional(bool, false) runner_boot_time_in_minutes = optional(number, 5) runner_disable_default_labels = optional(bool, false) runner_extra_labels = optional(list(string), []) runner_group_name = optional(string, "Default") runner_name_prefix = optional(string, "") runner_run_as = optional(string, "ec2-user") runners_maximum_count = number runner_additional_security_group_ids = optional(list(string), []) scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? 
*)") scale_up_reserved_concurrent_executions = optional(number, 1) userdata_template = optional(string, null) userdata_content = optional(string, null) enable_jit_config = optional(bool, null) enable_runner_detailed_monitoring = optional(bool, false) enable_cloudwatch_agent = optional(bool, true) cloudwatch_config = optional(string, null) userdata_pre_install = optional(string, "") userdata_post_install = optional(string, "") runner_hook_job_started = optional(string, "") runner_hook_job_completed = optional(string, "") runner_ec2_tags = optional(map(string), {}) runner_iam_role_managed_policy_arns = optional(list(string), []) vpc_id = optional(string, null) subnet_ids = optional(list(string), null) idle_config = optional(list(object({ cron = string timeZone = string idleCount = number evictionStrategy = optional(string, "oldest_first") })), []) cpu_options = optional(object({ core_count = number threads_per_core = number }), null) placement = optional(object({ affinity = optional(string) availability_zone = optional(string) group_id = optional(string) group_name = optional(string) host_id = optional(string) host_resource_group_arn = optional(string) spread_domain = optional(string) tenancy = optional(string) partition_number = optional(number) }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string })), null) block_device_mappings = optional(list(object({ delete_on_termination = optional(bool, true) device_name = optional(string, "/dev/xvda") encrypted = optional(bool, true) iops = optional(number) kms_key_id = optional(string) snapshot_id = optional(string) throughput = optional(number) volume_size = number volume_type = optional(string, "gp3") })), [{ volume_size = 30 }]) pool_config = optional(list(object({ schedule_expression = string schedule_expression_timezone = optional(string) size = number })), []) job_retry = optional(object({ enable = optional(bool, false) 
delay_in_seconds = optional(number, 300) delay_backoff = optional(number, 2) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 30) max_attempts = optional(number, 1) }), {}) }) matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = optional(bool, false) bidirectionalLabelMatch = optional(bool, false) priority = optional(number, 999) }) redrive_build_queue = optional(object({ enabled = bool maxReceiveCount = number }), { enabled = false maxReceiveCount = null }) })) | n/a | yes |
| [parameter\_store\_tags](#input\_parameter\_store\_tags) | Map of tags that will be added to all the SSM Parameter Store parameters created by the Lambda function. | `map(string)` | `{}` | no |
| [pool\_lambda\_reserved\_concurrent\_executions](#input\_pool\_lambda\_reserved\_concurrent\_executions) | Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no |
| [pool\_lambda\_timeout](#input\_pool\_lambda\_timeout) | Time out for the pool lambda in seconds. | `number` | `60` | no |
diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf
index faf9c946c4..ceb9f2c1e9 100644
--- a/modules/multi-runner/variables.tf
+++ b/modules/multi-runner/variables.tf
@@ -181,9 +181,10 @@ variable "multi_runner_config" {
}), {})
})
matcherConfig = object({
- labelMatchers = list(list(string))
- exactMatch = optional(bool, false)
- priority = optional(number, 999)
+ labelMatchers = list(list(string))
+ exactMatch = optional(bool, false)
+ bidirectionalLabelMatch = optional(bool, false)
+ priority = optional(number, 999)
})
redrive_build_queue = optional(object({
enabled = bool
@@ -252,7 +253,8 @@ variable "multi_runner_config" {
}
matcherConfig: {
labelMatchers: "The list of list of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`"
- exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook."
+ exactMatch: "DEPRECATED: Use `bidirectionalLabelMatch` instead. If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook. Note: this only checks that workflow labels are a subset of runner labels, not the reverse."
+ bidirectionalLabelMatch: "If set to true, the runner labels and workflow job labels must be an exact two-way match (same set, any order, no extras or missing labels). This is stricter than `exactMatch` which only checks that workflow labels are a subset of runner labels. When false, if __any__ workflow label matches it will trigger the webhook."
priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999."
}
redrive_build_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true`, `maxReceiveCount` to a number of max retries."
diff --git a/modules/webhook/README.md b/modules/webhook/README.md
index 10b0179672..c2ff43775e 100644
--- a/modules/webhook/README.md
+++ b/modules/webhook/README.md
@@ -87,7 +87,7 @@ yarn run dist
| [repository\_white\_list](#input\_repository\_white\_list) | List of github repository full names (owner/repo\_name) that will be allowed to use the github app. Leave empty for no filtering. | `list(string)` | `[]` | no |
| [role\_path](#input\_role\_path) | The path that will be added to the role; if not set, the environment name will be used. | `string` | `null` | no |
| [role\_permissions\_boundary](#input\_role\_permissions\_boundary) | Permissions boundary that will be added to the created role for the lambda. | `string` | `null` | no |
-| [runner\_matcher\_config](#input\_runner\_matcher\_config) | SQS queue to publish accepted build events based on the runner type. When exact match is disabled the webhook accepts the event if one of the workflow job labels is part of the matcher. The priority defines the order the matchers are applied. | map(object({ arn = string id = string matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = bool priority = optional(number, 999) }) })) | n/a | yes |
+| [runner\_matcher\_config](#input\_runner\_matcher\_config) | SQS queue to publish accepted build events based on the runner type. When exact match is disabled the webhook accepts the event if one of the workflow job labels is part of the matcher. The priority defines the order the matchers are applied. | map(object({ arn = string id = string matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = bool bidirectionalLabelMatch = optional(bool, false) priority = optional(number, 999) }) })) | n/a | yes |
| [ssm\_paths](#input\_ssm\_paths) | The root path used in SSM to store configuration and secrets. | object({ root = string webhook = string }) | n/a | yes |
| [tags](#input\_tags) | Map of tags that will be added to created resources. By default resources will be tagged with name and environment. | `map(string)` | `{}` | no |
| [tracing\_config](#input\_tracing\_config) | Configuration for lambda tracing. | object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }) | `{}` | no |
diff --git a/modules/webhook/variables.tf b/modules/webhook/variables.tf
index 5f0a39c0d2..6da7fc122d 100644
--- a/modules/webhook/variables.tf
+++ b/modules/webhook/variables.tf
@@ -28,9 +28,10 @@ variable "runner_matcher_config" {
arn = string
id = string
matcherConfig = object({
- labelMatchers = list(list(string))
- exactMatch = bool
- priority = optional(number, 999)
+ labelMatchers = list(list(string))
+ exactMatch = bool
+ bidirectionalLabelMatch = optional(bool, false)
+ priority = optional(number, 999)
})
}))
validation {
diff --git a/variables.tf b/variables.tf
index 90769578c0..fe97d6ce4b 100644
--- a/variables.tf
+++ b/variables.tf
@@ -624,11 +624,17 @@ variable "log_level" {
}
variable "enable_runner_workflow_job_labels_check_all" {
- description = "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ label matches it will trigger the webhook."
+ description = "DEPRECATED: Use `enable_runner_bidirectional_label_match` instead. If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ label matches it will trigger the webhook. Note: this only checks that workflow labels are a subset of runner labels, not the reverse."
type = bool
default = true
}
+variable "enable_runner_bidirectional_label_match" {
+ description = "If set to true, the runner labels and workflow job labels must be an exact two-way match (same set, any order, no extras or missing labels). This is stricter than `enable_runner_workflow_job_labels_check_all` which only checks that workflow labels are a subset of runner labels. When false, if __any__ label matches it will trigger the webhook."
+ type = bool
+ default = false
+}
+
variable "matcher_config_parameter_store_tier" {
description = "The tier of the parameter store for the matcher configuration. Valid values are `Standard`, and `Advanced`."
type = string
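The difference between the one-way `exactMatch` and the two-way `bidirectionalLabelMatch` can be sketched outside Terraform. The following TypeScript is illustrative only (the function names are assumptions, not part of `dispatch.ts`); it shows why a job requesting [A, B, C] matches a runner labelled [A, B, C, D] under the legacy behavior but not under the new one:

```typescript
// Illustrative sketch only: demonstrates the two matching semantics.
function subsetMatch(jobLabels: string[], runnerLabels: string[]): boolean {
  // exactMatch semantics: every workflow job label must be a runner label
  // (one-way subset check; extra runner labels are ignored).
  const runner = new Set(runnerLabels.map((l) => l.toLowerCase()));
  return jobLabels.every((l) => runner.has(l.toLowerCase()));
}

function bidirectionalMatch(jobLabels: string[], runnerLabels: string[]): boolean {
  // bidirectionalLabelMatch semantics: the two label sets must be equal
  // (same labels in any order, no extras or missing labels on either side).
  return subsetMatch(jobLabels, runnerLabels) && subsetMatch(runnerLabels, jobLabels);
}
```

For job [A, B, C] against runner [A, B, C, D], `subsetMatch` returns true (the old `exactMatch` behavior) while `bidirectionalMatch` returns false, which is exactly the migration impact described in the commit message.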
From 3a222fb5a872c00e1d0106b10ea1c1654b0a8f1d Mon Sep 17 00:00:00 2001
From: Gregory McCue
Date: Mon, 9 Mar 2026 04:43:23 -0400
Subject: [PATCH 02/22] fix(install-runner.sh): support Debian (#5027)
Currently, when using the Debian OS, the existing `install-runner.sh`
script will take an additional 25s during start up as the script
attempts to install `libicu` via `dnf`.
This PR adds support for the `debian` OS which is listed as a supported
OS in the GitHub docs:
https://docs.github.com/en/actions/reference/runners/self-hosted-runners#linux
---
modules/runners/templates/install-runner.sh | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/modules/runners/templates/install-runner.sh b/modules/runners/templates/install-runner.sh
index 05445eb98b..2d371ecb65 100644
--- a/modules/runners/templates/install-runner.sh
+++ b/modules/runners/templates/install-runner.sh
@@ -45,8 +45,8 @@ rm -rf $file_name
os_id=$(awk -F= '/^ID=/{print $2}' /etc/os-release)
echo OS: $os_id
-# Install libicu on non-ubuntu
-if [[ ! "$os_id" =~ ^ubuntu.* ]]; then
+# Install libicu on non-ubuntu, non-debian
+if [[ ! "$os_id" =~ ^(ubuntu|debian).* ]]; then
max_attempts=5
attempt_count=0
success=false
@@ -63,8 +63,8 @@ if [[ ! "$os_id" =~ ^ubuntu.* ]]; then
done
fi
-# Install dependencies for ubuntu
-if [[ "$os_id" =~ ^ubuntu.* ]]; then
+# Install dependencies for ubuntu and debian
+if [[ "$os_id" =~ ^(ubuntu|debian).* ]]; then
echo "Installing dependencies"
./bin/installdependencies.sh
fi
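The script's branching reduces to a single predicate on the `ID` field of `/etc/os-release`. A TypeScript sketch of that decision (the function name is an illustrative assumption, not part of the module):

```typescript
// Mirrors the updated install-runner.sh check: Ubuntu and Debian get their
// runtime dependencies from the runner's bundled installdependencies.sh,
// while all other distributions fall back to installing libicu via dnf.
function installsLibicuViaDnf(osId: string): boolean {
  return !/^(ubuntu|debian)/.test(osId);
}
```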
From 4507d17d0dc81c46f435accd20cbf9ce320b2cfd Mon Sep 17 00:00:00 2001
From: Brend Smits
Date: Mon, 9 Mar 2026 21:07:51 +0100
Subject: [PATCH 03/22] fix: gracefully handle JIT config failures and
terminate unconfigured instance (#4990)
This pull request enhances the robustness and reliability of the GitHub
Actions runner scaling logic by improving error handling and retry
mechanisms for GitHub API calls. It introduces the
`@octokit/plugin-retry` plugin to automatically retry failed API
requests, adds detailed logging for retry attempts, and ensures that
failures in creating JIT configs for individual runner instances do not
halt the entire scaling process. Additionally, new tests are added to
verify handling of various API failure scenarios.
**GitHub API client improvements:**
* Added `@octokit/plugin-retry` to dependencies (`package.json`) and
integrated it into the Octokit client initialization to enable automatic
retries for failed GitHub API requests.
* Configured the retry plugin to log detailed warnings on each retry
attempt, including the HTTP method, URL, error message, and status code.
**Error handling and resilience in JIT config creation:**
* Updated `createJitConfig` in `scale-up.ts` to catch and log errors for
individual runner instances when creating JIT configs, allowing the
process to continue for remaining instances and logging a summary of
failed attempts at the end.
* Instances that fail to generate a configuration will now be
terminated to avoid waste.
**Testing improvements:**
* Added comprehensive tests to `scale-up.test.ts` to verify correct
behavior when GitHub API calls fail for some instances, including
retryable errors (e.g., 5xx), non-retryable errors (e.g., 4xx), and
partial failures, ensuring only successful JIT configs are stored.
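The per-instance error isolation described above can be sketched as follows. This is a simplified assumption, not the actual `createJitConfig` in `scale-up.ts` (which additionally stores each config in SSM and terminates failed instances); the helper names are illustrative:

```typescript
// Per-instance try/catch: one failing GitHub API call must not abort
// JIT config creation for the remaining instances.
async function createConfigs(
  instanceIds: string[],
  createJitConfig: (id: string) => Promise<string>,
): Promise<{ succeeded: Map<string, string>; failed: string[] }> {
  const succeeded = new Map<string, string>();
  const failed: string[] = [];
  for (const id of instanceIds) {
    try {
      succeeded.set(id, await createJitConfig(id));
    } catch (err) {
      // Log and keep going; failed instances are collected so they can be
      // terminated afterwards rather than left running unconfigured.
      console.warn(`JIT config failed for ${id}: ${(err as Error).message}`);
      failed.push(id);
    }
  }
  return { succeeded, failed };
}
```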
---
lambdas/functions/control-plane/package.json | 1 +
.../control-plane/src/github/auth.ts | 14 +-
.../src/scale-runners/scale-up.test.ts | 166 ++++++++++++++++++
.../src/scale-runners/scale-up.ts | 152 +++++++++++-----
lambdas/yarn.lock | 14 ++
5 files changed, 302 insertions(+), 45 deletions(-)
diff --git a/lambdas/functions/control-plane/package.json b/lambdas/functions/control-plane/package.json
index faccdfed42..7a75caa3ef 100644
--- a/lambdas/functions/control-plane/package.json
+++ b/lambdas/functions/control-plane/package.json
@@ -38,6 +38,7 @@
"@middy/core": "^6.4.5",
"@octokit/auth-app": "8.2.0",
"@octokit/core": "7.0.6",
+ "@octokit/plugin-retry": "8.0.3",
"@octokit/plugin-throttling": "11.0.3",
"@octokit/rest": "22.0.1",
"cron-parser": "^5.4.0"
diff --git a/lambdas/functions/control-plane/src/github/auth.ts b/lambdas/functions/control-plane/src/github/auth.ts
index 2d99b5979a..927765523b 100644
--- a/lambdas/functions/control-plane/src/github/auth.ts
+++ b/lambdas/functions/control-plane/src/github/auth.ts
@@ -19,6 +19,7 @@ type StrategyOptions = {
import { createSign, randomUUID } from 'node:crypto';
import { request } from '@octokit/request';
import { Octokit } from '@octokit/rest';
+import { retry } from '@octokit/plugin-retry';
import { throttling } from '@octokit/plugin-throttling';
import { createChildLogger } from '@aws-github-runner/aws-powertools-util';
import { getParameter } from '@aws-github-runner/aws-ssm-util';
@@ -27,7 +28,7 @@ import { EndpointDefaults } from '@octokit/types';
const logger = createChildLogger('gh-auth');
export async function createOctokitClient(token: string, ghesApiUrl = ''): Promise {
- const CustomOctokit = Octokit.plugin(throttling);
+ const CustomOctokit = Octokit.plugin(retry, throttling);
const ocktokitOptions: OctokitOptions = {
auth: token,
};
@@ -39,6 +40,17 @@ export async function createOctokitClient(token: string, ghesApiUrl = ''): Promi
return new CustomOctokit({
...ocktokitOptions,
userAgent: process.env.USER_AGENT || 'github-aws-runners',
+ retry: {
+ onRetry: (retryCount: number, error: Error, request: { method: string; url: string }) => {
+ logger.warn('GitHub API request retry attempt', {
+ retryCount,
+ method: request.method,
+ url: request.url,
+ error: error.message,
+ status: (error as Error & { status?: number }).status,
+ });
+ },
+ },
throttle: {
onRateLimit: (retryAfter: number, options: Required) => {
logger.warn(
diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
index 8a10b82ca4..458d89763e 100644
--- a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
+++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
@@ -343,6 +343,172 @@ describe('scaleUp with GHES', () => {
],
});
});
+
+ it('should create JIT config for all remaining instances even when GitHub API fails for one instance', async () => {
+ process.env.RUNNERS_MAXIMUM_COUNT = '5';
+ mockCreateRunner.mockImplementation(async () => {
+ return ['i-instance-1', 'i-instance-2', 'i-instance-3'];
+ });
+ mockListRunners.mockImplementation(async () => {
+ return [];
+ });
+
+ mockOctokit.actions.generateRunnerJitconfigForOrg.mockImplementation(({ name }) => {
+ if (name === 'unit-test-i-instance-2') {
+ // Simulate a 503 Service Unavailable error from GitHub
+ const error = new Error('Service Unavailable') as Error & {
+ status: number;
+ response: { status: number; data: { message: string } };
+ };
+ error.status = 503;
+ error.response = {
+ status: 503,
+ data: { message: 'Service temporarily unavailable' },
+ };
+ throw error;
+ }
+ return {
+ data: {
+ runner: { id: 9876543210 },
+ encoded_jit_config: `TEST_JIT_CONFIG_${name}`,
+ },
+ headers: {},
+ };
+ });
+
+ await scaleUpModule.scaleUp(TEST_DATA);
+
+ expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toHaveBeenCalledWith({
+ org: TEST_DATA_SINGLE.repositoryOwner,
+ name: 'unit-test-i-instance-1',
+ runner_group_id: 1,
+ labels: ['label1', 'label2'],
+ });
+
+ expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toHaveBeenCalledWith({
+ org: TEST_DATA_SINGLE.repositoryOwner,
+ name: 'unit-test-i-instance-2',
+ runner_group_id: 1,
+ labels: ['label1', 'label2'],
+ });
+
+ expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toHaveBeenCalledWith({
+ org: TEST_DATA_SINGLE.repositoryOwner,
+ name: 'unit-test-i-instance-3',
+ runner_group_id: 1,
+ labels: ['label1', 'label2'],
+ });
+
+ expect(mockSSMClient).toHaveReceivedCommandWith(PutParameterCommand, {
+ Name: '/github-action-runners/default/runners/config/i-instance-1',
+ Value: 'TEST_JIT_CONFIG_unit-test-i-instance-1',
+ Type: 'SecureString',
+ Tags: [{ Key: 'InstanceId', Value: 'i-instance-1' }],
+ });
+
+ expect(mockSSMClient).toHaveReceivedCommandWith(PutParameterCommand, {
+ Name: '/github-action-runners/default/runners/config/i-instance-3',
+ Value: 'TEST_JIT_CONFIG_unit-test-i-instance-3',
+ Type: 'SecureString',
+ Tags: [{ Key: 'InstanceId', Value: 'i-instance-3' }],
+ });
+
+ expect(mockSSMClient).not.toHaveReceivedCommandWith(PutParameterCommand, {
+ Name: '/github-action-runners/default/runners/config/i-instance-2',
+ });
+ });
+
+ it('should handle retryable errors with error handling logic', async () => {
+ process.env.RUNNERS_MAXIMUM_COUNT = '5';
+ mockCreateRunner.mockImplementation(async () => {
+ return ['i-instance-1', 'i-instance-2'];
+ });
+ mockListRunners.mockImplementation(async () => {
+ return [];
+ });
+
+ mockOctokit.actions.generateRunnerJitconfigForOrg.mockImplementation(({ name }) => {
+ if (name === 'unit-test-i-instance-1') {
+ const error = new Error('Internal Server Error') as Error & {
+ status: number;
+ response: { status: number; data: { message: string } };
+ };
+ error.status = 500;
+ error.response = {
+ status: 500,
+ data: { message: 'Internal server error' },
+ };
+ throw error;
+ }
+ return {
+ data: {
+ runner: { id: 9876543210 },
+ encoded_jit_config: `TEST_JIT_CONFIG_${name}`,
+ },
+ headers: {},
+ };
+ });
+
+ await scaleUpModule.scaleUp(TEST_DATA);
+
+ expect(mockSSMClient).toHaveReceivedCommandWith(PutParameterCommand, {
+ Name: '/github-action-runners/default/runners/config/i-instance-2',
+ Value: 'TEST_JIT_CONFIG_unit-test-i-instance-2',
+ Type: 'SecureString',
+ Tags: [{ Key: 'InstanceId', Value: 'i-instance-2' }],
+ });
+
+ expect(mockSSMClient).not.toHaveReceivedCommandWith(PutParameterCommand, {
+ Name: '/github-action-runners/default/runners/config/i-instance-1',
+ });
+ });
+
+ it('should handle non-retryable 4xx errors gracefully', async () => {
+ process.env.RUNNERS_MAXIMUM_COUNT = '5';
+ mockCreateRunner.mockImplementation(async () => {
+ return ['i-instance-1', 'i-instance-2'];
+ });
+ mockListRunners.mockImplementation(async () => {
+ return [];
+ });
+
+ mockOctokit.actions.generateRunnerJitconfigForOrg.mockImplementation(({ name }) => {
+ if (name === 'unit-test-i-instance-1') {
+ // 404 is not retryable - will fail immediately
+ const error = new Error('Not Found') as Error & {
+ status: number;
+ response: { status: number; data: { message: string } };
+ };
+ error.status = 404;
+ error.response = {
+ status: 404,
+ data: { message: 'Resource not found' },
+ };
+ throw error;
+ }
+ return {
+ data: {
+ runner: { id: 9876543210 },
+ encoded_jit_config: `TEST_JIT_CONFIG_${name}`,
+ },
+ headers: {},
+ };
+ });
+
+ await scaleUpModule.scaleUp(TEST_DATA);
+
+ expect(mockSSMClient).toHaveReceivedCommandWith(PutParameterCommand, {
+ Name: '/github-action-runners/default/runners/config/i-instance-2',
+ Value: 'TEST_JIT_CONFIG_unit-test-i-instance-2',
+ Type: 'SecureString',
+ Tags: [{ Key: 'InstanceId', Value: 'i-instance-2' }],
+ });
+
+ expect(mockSSMClient).not.toHaveReceivedCommandWith(PutParameterCommand, {
+ Name: '/github-action-runners/default/runners/config/i-instance-1',
+ });
+ });
+
it.each(RUNNER_TYPES)(
'calls create start runner config of 40' + ' instances (ssm rate limit condition) to test time delay ',
async (type: RunnerType) => {
diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
index 2a4c2c1c58..7f797422cc 100644
--- a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
+++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
@@ -4,7 +4,7 @@ import { getParameter, putParameter } from '@aws-github-runner/aws-ssm-util';
import yn from 'yn';
import { createGithubAppAuth, createGithubInstallationAuth, createOctokitClient } from '../github/auth';
-import { createRunner, listEC2Runners, tag } from './../aws/runners';
+import { createRunner, listEC2Runners, tag, terminateRunner } from './../aws/runners';
import { RunnerInputParameters } from './../aws/runners.d';
import { metricGitHubAppRateLimit } from '../github/rate-limit';
import { publishRetryMessage } from './job-retry';
@@ -258,7 +258,29 @@ export async function createRunners(
...ec2RunnerConfig,
});
if (instances.length !== 0) {
- await createStartRunnerConfig(githubRunnerConfig, instances, ghClient);
+ const failedInstances = await createStartRunnerConfig(githubRunnerConfig, instances, ghClient);
+
+ // Terminate instances that failed to get configured to avoid waste
+ if (failedInstances.length > 0) {
+ logger.warn('Terminating instances that failed to get configured', {
+ failedInstances,
+ failedCount: failedInstances.length,
+ });
+
+ for (const instanceId of failedInstances) {
+ try {
+ await terminateRunner(instanceId);
+ } catch (error) {
+ logger.error('Failed to terminate instance', {
+ instanceId,
+ error: error instanceof Error ? error.message : String(error),
+ });
+ }
+ }
+
+ // Remove failed instances from the returned list
+ return instances.filter((id) => !failedInstances.includes(id));
+ }
}
return instances;
@@ -533,15 +555,21 @@ export function getGitHubEnterpriseApiUrl() {
return { ghesApiUrl, ghesBaseUrl };
}
+/**
+ * Creates the start configuration for runner instances by either generating JIT configs
+ * or registration tokens.
+ *
+ * @returns Array of instance IDs that failed to get configured
+ */
async function createStartRunnerConfig(
githubRunnerConfig: CreateGitHubRunnerConfig,
instances: string[],
ghClient: Octokit,
-) {
+): Promise<string[]> {
if (githubRunnerConfig.enableJitConfig && githubRunnerConfig.ephemeral) {
- await createJitConfig(githubRunnerConfig, instances, ghClient);
+ return await createJitConfig(githubRunnerConfig, instances, ghClient);
} else {
- await createRegistrationTokenConfig(githubRunnerConfig, instances, ghClient);
+ return await createRegistrationTokenConfig(githubRunnerConfig, instances, ghClient);
}
}
@@ -556,11 +584,16 @@ function addDelay(instances: string[]) {
return { isDelay, delay };
}
+/**
+ * Creates registration token configuration for non-ephemeral runners.
+ *
+ * @returns Empty array (this configuration method does not have failure cases)
+ */
async function createRegistrationTokenConfig(
githubRunnerConfig: CreateGitHubRunnerConfig,
instances: string[],
ghClient: Octokit,
-) {
+): Promise<string[]> {
const { isDelay, delay } = addDelay(instances);
const token = await getGithubRunnerRegistrationToken(githubRunnerConfig, ghClient);
const runnerServiceConfig = generateRunnerServiceConfig(githubRunnerConfig, token);
@@ -578,6 +611,8 @@ async function createRegistrationTokenConfig(
await delay(25);
}
}
+
+ return [];
}
async function tagRunnerId(instanceId: string, runnerId: string): Promise<void> {
@@ -588,52 +623,81 @@ async function tagRunnerId(instanceId: string, runnerId: string): Promise<void>
}
}
-async function createJitConfig(githubRunnerConfig: CreateGitHubRunnerConfig, instances: string[], ghClient: Octokit) {
+/**
+ * Creates JIT (Just-In-Time) configuration for ephemeral runners.
+ * Continues processing remaining instances even if some fail.
+ *
+ * @returns Array of instance IDs that failed to get JIT configuration
+ */
+async function createJitConfig(
+ githubRunnerConfig: CreateGitHubRunnerConfig,
+ instances: string[],
+ ghClient: Octokit,
+): Promise<string[]> {
const runnerGroupId = await getRunnerGroupId(githubRunnerConfig, ghClient);
const { isDelay, delay } = addDelay(instances);
const runnerLabels = githubRunnerConfig.runnerLabels.split(',');
+ const failedInstances: string[] = [];
logger.debug(`Runner group id: ${runnerGroupId}`);
logger.debug(`Runner labels: ${runnerLabels}`);
for (const instance of instances) {
- // generate jit config for runner registration
- const ephemeralRunnerConfig: EphemeralRunnerConfig = {
- runnerName: `${githubRunnerConfig.runnerNamePrefix}${instance}`,
- runnerGroupId: runnerGroupId,
- runnerLabels: runnerLabels,
- };
- logger.debug(`Runner name: ${ephemeralRunnerConfig.runnerName}`);
- const runnerConfig =
- githubRunnerConfig.runnerType === 'Org'
- ? await ghClient.actions.generateRunnerJitconfigForOrg({
- org: githubRunnerConfig.runnerOwner,
- name: ephemeralRunnerConfig.runnerName,
- runner_group_id: ephemeralRunnerConfig.runnerGroupId,
- labels: ephemeralRunnerConfig.runnerLabels,
- })
- : await ghClient.actions.generateRunnerJitconfigForRepo({
- owner: githubRunnerConfig.runnerOwner.split('/')[0],
- repo: githubRunnerConfig.runnerOwner.split('/')[1],
- name: ephemeralRunnerConfig.runnerName,
- runner_group_id: ephemeralRunnerConfig.runnerGroupId,
- labels: ephemeralRunnerConfig.runnerLabels,
- });
-
- metricGitHubAppRateLimit(runnerConfig.headers);
-
- // tag the EC2 instance with the Github runner id
- await tagRunnerId(instance, runnerConfig.data.runner.id.toString());
+ try {
+ // generate jit config for runner registration
+ const ephemeralRunnerConfig: EphemeralRunnerConfig = {
+ runnerName: `${githubRunnerConfig.runnerNamePrefix}${instance}`,
+ runnerGroupId: runnerGroupId,
+ runnerLabels: runnerLabels,
+ };
+ logger.debug(`Runner name: ${ephemeralRunnerConfig.runnerName}`);
+ const runnerConfig =
+ githubRunnerConfig.runnerType === 'Org'
+ ? await ghClient.actions.generateRunnerJitconfigForOrg({
+ org: githubRunnerConfig.runnerOwner,
+ name: ephemeralRunnerConfig.runnerName,
+ runner_group_id: ephemeralRunnerConfig.runnerGroupId,
+ labels: ephemeralRunnerConfig.runnerLabels,
+ })
+ : await ghClient.actions.generateRunnerJitconfigForRepo({
+ owner: githubRunnerConfig.runnerOwner.split('/')[0],
+ repo: githubRunnerConfig.runnerOwner.split('/')[1],
+ name: ephemeralRunnerConfig.runnerName,
+ runner_group_id: ephemeralRunnerConfig.runnerGroupId,
+ labels: ephemeralRunnerConfig.runnerLabels,
+ });
+
+ metricGitHubAppRateLimit(runnerConfig.headers);
+
+ // tag the EC2 instance with the Github runner id
+ await tagRunnerId(instance, runnerConfig.data.runner.id.toString());
+
+ // store jit config in ssm parameter store
+ logger.debug('Runner JIT config for ephemeral runner generated.', {
+ instance: instance,
+ });
+ await putParameter(`${githubRunnerConfig.ssmTokenPath}/${instance}`, runnerConfig.data.encoded_jit_config, true, {
+ tags: [{ Key: 'InstanceId', Value: instance }, ...githubRunnerConfig.ssmParameterStoreTags],
+ });
+ if (isDelay) {
+ // Delay to prevent AWS ssm rate limits by being within the max throughput limit
+ await delay(25);
+ }
+ } catch (error) {
+ failedInstances.push(instance);
+ logger.warn('Failed to create JIT config for instance, continuing with remaining instances', {
+ instance: instance,
+ error: error instanceof Error ? error.message : String(error),
+ });
+ }
+ }
- // store jit config in ssm parameter store
- logger.debug('Runner JIT config for ephemeral runner generated.', {
- instance: instance,
- });
- await putParameter(`${githubRunnerConfig.ssmTokenPath}/${instance}`, runnerConfig.data.encoded_jit_config, true, {
- tags: [{ Key: 'InstanceId', Value: instance }, ...githubRunnerConfig.ssmParameterStoreTags],
+ if (failedInstances.length > 0) {
+ logger.error('Failed to create JIT config for some instances', {
+ failedInstances: failedInstances,
+ totalInstances: instances.length,
+ successfulInstances: instances.length - failedInstances.length,
});
- if (isDelay) {
- // Delay to prevent AWS ssm rate limits by being within the max throughput limit
- await delay(25);
- }
}
+
+ return failedInstances;
}
diff --git a/lambdas/yarn.lock b/lambdas/yarn.lock
index 53a3b2f938..57ce3fe8ee 100644
--- a/lambdas/yarn.lock
+++ b/lambdas/yarn.lock
@@ -155,6 +155,7 @@ __metadata:
"@middy/core": "npm:^6.4.5"
"@octokit/auth-app": "npm:8.2.0"
"@octokit/core": "npm:7.0.6"
+ "@octokit/plugin-retry": "npm:8.0.3"
"@octokit/plugin-throttling": "npm:11.0.3"
"@octokit/rest": "npm:22.0.1"
"@octokit/types": "npm:^16.0.0"
@@ -3971,6 +3972,19 @@ __metadata:
languageName: node
linkType: hard
+"@octokit/plugin-retry@npm:8.0.3":
+ version: 8.0.3
+ resolution: "@octokit/plugin-retry@npm:8.0.3"
+ dependencies:
+ "@octokit/request-error": "npm:^7.0.2"
+ "@octokit/types": "npm:^16.0.0"
+ bottleneck: "npm:^2.15.3"
+ peerDependencies:
+ "@octokit/core": ">=7"
+ checksum: 10c0/24d35d85f750f9e3e52f63b8ddd8fc8aa7bdd946c77b9ea4d6894d026c5c2c69109e8de3880a9970c906f624eb777c7d0c0a2072e6d41dadc7b36cce104b978c
+ languageName: node
+ linkType: hard
+
"@octokit/plugin-throttling@npm:11.0.3":
version: 11.0.3
resolution: "@octokit/plugin-throttling@npm:11.0.3"
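The lockfile above pulls in `@octokit/plugin-retry`, whose policy in spirit is: retry transient 5xx responses, fail fast on 4xx client errors, matching the retryable (500) and non-retryable (404) test cases earlier in this patch. A minimal dependency-free sketch of that policy (the `withRetries` helper and its fixed attempt count are illustrative assumptions, not the plugin's implementation):

```typescript
// Sketch of the retry policy: transient (>= 500) errors are retried up to
// `attempts` times; 4xx client errors are permanent and rethrown immediately.
interface HttpError extends Error {
  status?: number;
}

async function withRetries<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const status = (err as HttpError).status ?? 0;
      // 4xx responses (e.g. the 404 case in the tests) are not retried
      if (status >= 400 && status < 500) throw err;
    }
  }
  throw lastError;
}
```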
From 1c69978e8002c72a8e148efe02749359d32ba6c1 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 21:14:14 +0100
Subject: [PATCH 04/22] chore(deps): bump zizmorcore/zizmor-action from 0.4.1
to 0.5.0 (#5034)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Bumps
[zizmorcore/zizmor-action](https://github.com/zizmorcore/zizmor-action)
from 0.4.1 to 0.5.0.
Release notes, sourced from zizmorcore/zizmor-action's releases:
v0.5.0
What's Changed
New Contributors
Full Changelog: https://github.com/zizmorcore/zizmor-action/compare/v0.4.1...v0.5.0
Commits:
- 0dce257 chore(deps): bump peter-evans/create-pull-request (#88)
- fb94974 Expose output-file as an output when advanced-security: true (#87)
- 867562a chore(deps): bump the github-actions group with 2 updates (#85)
- 7462f07 Bump pins in README (#84)
See full diff in compare view
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
.github/workflows/zizmor.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.github/workflows/zizmor.yml b/.github/workflows/zizmor.yml
index 7d337c61cc..35af91a536 100644
--- a/.github/workflows/zizmor.yml
+++ b/.github/workflows/zizmor.yml
@@ -31,6 +31,6 @@ jobs:
persist-credentials: false
- name: Run zizmor 🌈
- uses: zizmorcore/zizmor-action@135698455da5c3b3e55f73f4419e481ab68cdd95 # v0.4.1
+ uses: zizmorcore/zizmor-action@0dce2577a4760a2749d8cfb7a84b7d5585ebcb7d # v0.5.0
with:
persona: pedantic
From 087d71494e3f21f8cbaf27d423bb90f2f7d4e57d Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 21:34:13 +0100
Subject: [PATCH 05/22] chore(lambda): bump @types/express from 5.0.3 to 5.0.6
in /lambdas (#5002)
Bumps
[@types/express](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/express)
from 5.0.3 to 5.0.6.
You can trigger a rebase of this PR by commenting `@dependabot rebase`.
---
> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
lambdas/functions/webhook/package.json | 2 +-
lambdas/yarn.lock | 36 +++++++++++++-------------
2 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/lambdas/functions/webhook/package.json b/lambdas/functions/webhook/package.json
index 2b5a8e8174..cf9dd0c0be 100644
--- a/lambdas/functions/webhook/package.json
+++ b/lambdas/functions/webhook/package.json
@@ -20,7 +20,7 @@
"@aws-sdk/client-eventbridge": "^3.984.0",
"@octokit/webhooks-types": "^7.6.1",
"@types/aws-lambda": "^8.10.159",
- "@types/express": "^5.0.3",
+ "@types/express": "^5.0.6",
"@types/node": "^22.19.3",
"@vercel/ncc": "0.38.4",
"body-parser": "^2.2.1",
diff --git a/lambdas/yarn.lock b/lambdas/yarn.lock
index 57ce3fe8ee..781d157b05 100644
--- a/lambdas/yarn.lock
+++ b/lambdas/yarn.lock
@@ -223,7 +223,7 @@ __metadata:
"@octokit/webhooks": "npm:^14.2.0"
"@octokit/webhooks-types": "npm:^7.6.1"
"@types/aws-lambda": "npm:^8.10.159"
- "@types/express": "npm:^5.0.3"
+ "@types/express": "npm:^5.0.6"
"@types/node": "npm:^22.19.3"
"@vercel/ncc": "npm:0.38.4"
aws-lambda: "npm:^1.0.7"
@@ -5451,14 +5451,21 @@ __metadata:
languageName: node
linkType: hard
-"@types/express@npm:^5.0.3":
- version: 5.0.3
- resolution: "@types/express@npm:5.0.3"
+"@types/express@npm:^5.0.6":
+ version: 5.0.6
+ resolution: "@types/express@npm:5.0.6"
dependencies:
"@types/body-parser": "npm:*"
"@types/express-serve-static-core": "npm:^5.0.0"
- "@types/serve-static": "npm:*"
- checksum: 10c0/f0fbc8daa7f40070b103cf4d020ff1dd08503477d866d1134b87c0390bba71d5d7949cb8b4e719a81ccba89294d8e1573414e6dcbb5bb1d097a7b820928ebdef
+ "@types/serve-static": "npm:^2"
+ checksum: 10c0/f1071e3389a955d4f9a38aae38634121c7cd9b3171ba4201ec9b56bd534aba07866839d278adc0dda05b942b05a901a02fd174201c3b1f70ce22b10b6c68f24b
+ languageName: node
+ linkType: hard
+
+"@types/http-errors@npm:*":
+ version: 2.0.5
+ resolution: "@types/http-errors@npm:2.0.5"
+ checksum: 10c0/00f8140fbc504f47356512bd88e1910c2f07e04233d99c88c854b3600ce0523c8cd0ba7d1897667243282eb44c59abb9245959e2428b9de004f93937f52f7c15
languageName: node
linkType: hard
@@ -5494,13 +5501,6 @@ __metadata:
languageName: node
linkType: hard
-"@types/mime@npm:*":
- version: 3.0.1
- resolution: "@types/mime@npm:3.0.1"
- checksum: 10c0/c4c0fc89042822a3b5ffd6ef0da7006513454ee8376ffa492372d17d2925a4e4b1b194c977b718c711df38b33eb9d06deb5dbf9f851bcfb7e5e65f06b2a87f97
- languageName: node
- linkType: hard
-
"@types/mime@npm:^1":
version: 1.3.5
resolution: "@types/mime@npm:1.3.5"
@@ -5569,13 +5569,13 @@ __metadata:
languageName: node
linkType: hard
-"@types/serve-static@npm:*":
- version: 1.15.1
- resolution: "@types/serve-static@npm:1.15.1"
+"@types/serve-static@npm:^2":
+ version: 2.2.0
+ resolution: "@types/serve-static@npm:2.2.0"
dependencies:
- "@types/mime": "npm:*"
+ "@types/http-errors": "npm:*"
"@types/node": "npm:*"
- checksum: 10c0/dc934e2adce730480af5af6081b99f50be4dfb7f44537893444bcf1dc97f5d5ffb16b38350ecd89dd114184d751ba3271500631fa56cf1faa35be56f8e45971b
+ checksum: 10c0/a3c6126bdbf9685e6c7dc03ad34639666eff32754e912adeed9643bf3dd3aa0ff043002a7f69039306e310d233eb8e160c59308f95b0a619f32366bbc48ee094
languageName: node
linkType: hard
From 169fe48df4b808603cf8a21266def6a5538875a0 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 21:39:18 +0100
Subject: [PATCH 06/22] chore(deps): bump github/codeql-action from 4.31.9 to
4.32.4 (#5050)
Bumps [github/codeql-action](https://github.com/github/codeql-action)
from 4.31.9 to 4.32.4.
Release notes, sourced from github/codeql-action's releases:
v4.32.4
Update default CodeQL bundle version to 2.24.2 .
#3493
Added an experimental change which improves how certificates are
generated for the authentication proxy that is used by the CodeQL Action
in Default Setup when private
package registries are configured . This is expected to generate more
widely compatible certificates and should have no impact on analyses
which are working correctly already. We expect to roll this change out
to everyone in February. #3473
When the CodeQL Action is run with
debugging enabled in Default Setup and private
package registries are configured , the "Setup proxy for
registries" step will output additional diagnostic information that
can be used for troubleshooting. #3486
Added a setting which allows the CodeQL Action to enable network
debugging for Java programs. This will help GitHub staff support
customers with troubleshooting issues in GitHub-managed CodeQL
workflows, such as Default Setup. This setting can only be enabled by
GitHub staff. #3485
Added a setting which enables GitHub-managed workflows, such as
Default Setup, to use a nightly
CodeQL CLI release instead of the latest, stable release that is
used by default. This will help GitHub staff support customers whose
analyses for a given repository or organization require early access to
a change in an upcoming CodeQL CLI release. This setting can only be
enabled by GitHub staff. #3484
v4.32.3
Added experimental support for testing connections to private
package registries . This feature is not currently enabled for any
analysis. In the future, it may be enabled by default for Default Setup.
#3466
v4.32.2
v4.32.1
A warning is now shown in Default Setup workflow logs if a private
package registry is configured using a GitHub Personal Access Token
(PAT), but no username is configured. #3422
Fixed a bug which caused the CodeQL Action to fail when repository
properties cannot successfully be retrieved. #3421
v4.32.0
v4.31.11
When running a Default Setup workflow with Actions
debugging enabled , the CodeQL Action will now use more unique names
when uploading logs from the Dependabot authentication proxy as workflow
artifacts. This ensures that the artifact names do not clash between
multiple jobs in a build matrix. #3409
Improved error handling throughout the CodeQL Action. #3415
Added experimental support for automatically excluding generated
files from the analysis. This feature is not currently enabled for
any analysis. In the future, it may be enabled by default for some
GitHub-managed analyses. #3318
The changelog extracts that are included with releases of the CodeQL
Action are now shorter to avoid duplicated information from appearing in
Dependabot PRs. #3403
v4.31.10
CodeQL Action Changelog
See the releases
page for the relevant changes to the CodeQL CLI and language
packs.
4.31.10 - 12 Jan 2026
Update default CodeQL bundle version to 2.23.9. #3393
See the full CHANGELOG.md
for more information.
Changelog, sourced from github/codeql-action's changelog:
CodeQL Action Changelog
See the releases
page for the relevant changes to the CodeQL CLI and language
packs.
[UNRELEASED]
No user facing changes.
4.32.4 - 20 Feb 2026
Update default CodeQL bundle version to 2.24.2 .
#3493
Added an experimental change which improves how certificates are
generated for the authentication proxy that is used by the CodeQL Action
in Default Setup when private
package registries are configured . This is expected to generate more
widely compatible certificates and should have no impact on analyses
which are working correctly already. We expect to roll this change out
to everyone in February. #3473
When the CodeQL Action is run with
debugging enabled in Default Setup and private
package registries are configured , the "Setup proxy for
registries" step will output additional diagnostic information that
can be used for troubleshooting. #3486
Added a setting which allows the CodeQL Action to enable network
debugging for Java programs. This will help GitHub staff support
customers with troubleshooting issues in GitHub-managed CodeQL
workflows, such as Default Setup. This setting can only be enabled by
GitHub staff. #3485
Added a setting which enables GitHub-managed workflows, such as
Default Setup, to use a nightly
CodeQL CLI release instead of the latest, stable release that is
used by default. This will help GitHub staff support customers whose
analyses for a given repository or organization require early access to
a change in an upcoming CodeQL CLI release. This setting can only be
enabled by GitHub staff. #3484
4.32.3 - 13 Feb 2026
Added experimental support for testing connections to private
package registries . This feature is not currently enabled for any
analysis. In the future, it may be enabled by default for Default Setup.
#3466
4.32.2 - 05 Feb 2026
4.32.1 - 02 Feb 2026
A warning is now shown in Default Setup workflow logs if a private
package registry is configured using a GitHub Personal Access Token
(PAT), but no username is configured. #3422
Fixed a bug which caused the CodeQL Action to fail when repository
properties cannot successfully be retrieved. #3421
4.32.0 - 26 Jan 2026
4.31.11 - 23 Jan 2026
When running a Default Setup workflow with Actions
debugging enabled , the CodeQL Action will now use more unique names
when uploading logs from the Dependabot authentication proxy as workflow
artifacts. This ensures that the artifact names do not clash between
multiple jobs in a build matrix. #3409
Improved error handling throughout the CodeQL Action. #3415
Added experimental support for automatically excluding generated
files from the analysis. This feature is not currently enabled for
any analysis. In the future, it may be enabled by default for some
GitHub-managed analyses. #3318
The changelog extracts that are included with releases of the CodeQL
Action are now shorter to avoid duplicated information from appearing in
Dependabot PRs. #3403
4.31.10 - 12 Jan 2026
Update default CodeQL bundle version to 2.23.9. #3393
4.31.9 - 16 Dec 2025
No user facing changes.
4.31.8 - 11 Dec 2025
... (truncated)
Commits:
- 89a39a4 Merge pull request #3494 from github/update-v4.32.4-39ba80c47
- e5d84c8 Apply remaining review suggestions
- 0c20209 Apply suggestions from code review
- 314172e Fix typo
- cdda72d Add changelog entries
- cfda84c Update changelog for v4.32.4
- 39ba80c Merge pull request #3493 from github/update-bundle/codeql-bundle-v2.24.2
- 00150da Add changelog note
- d97dce6 Update default bundle to codeql-bundle-v2.24.2
- 50fdbb9 Merge pull request #3492 from github/henrymercer/new-repository-properties-ff
Additional commits viewable in compare view
---
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
.github/workflows/codeql.yml | 4 ++--
.github/workflows/ossf-scorecard.yml | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml
index 021b58fd0c..5ea721467f 100644
--- a/.github/workflows/codeql.yml
+++ b/.github/workflows/codeql.yml
@@ -42,12 +42,12 @@ jobs:
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
- uses: github/codeql-action/init@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
+ uses: github/codeql-action/init@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4.32.4
with:
languages: ${{ matrix.language }}
build-mode: none
- name: Perform CodeQL Analysis
- uses: github/codeql-action/analyze@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
+ uses: github/codeql-action/analyze@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4.32.4
with:
category: "/language:${{matrix.language}}"
diff --git a/.github/workflows/ossf-scorecard.yml b/.github/workflows/ossf-scorecard.yml
index d2769bd33d..6fa9769ccd 100644
--- a/.github/workflows/ossf-scorecard.yml
+++ b/.github/workflows/ossf-scorecard.yml
@@ -53,6 +53,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
- uses: github/codeql-action/upload-sarif@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
+ uses: github/codeql-action/upload-sarif@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4.32.4
with:
sarif_file: results.sarif
From dd4b3c28d5e5814d4593c6c79b0ca7827c8e25a5 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 21:39:58 +0100
Subject: [PATCH 07/22] chore(deps): bump
google/osv-scanner-action/.github/workflows/osv-scanner-reusable-pr.yml from
2.3.2 to 2.3.3 (#5043)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Bumps
[google/osv-scanner-action/.github/workflows/osv-scanner-reusable-pr.yml](https://github.com/google/osv-scanner-action)
from 2.3.2 to 2.3.3.
Release notes, sourced from google/osv-scanner-action/.github/workflows/osv-scanner-reusable-pr.yml's releases:
v2.3.3
This updates OSV-Scanner to v2.3.3.
What's Changed
New Contributors
Full Changelog: https://github.com/google/osv-scanner-action/compare/v2.3.2...v2.3.3
Commits:
- c5996e0 Merge pull request #118 from google/update-to-v2.3.3
- f4fac92 Update unified workflow example to point to v2.3.3 reusable workflows
- 8ae4be8 Update reusable workflows to point to v2.3.3 actions
- 8018483 "Update actions to use v2.3.3 osv-scanner image"
- 2c222db Merge pull request #115 from renovate-bot/renovate/workflows
- 115472d chore(deps): update github/codeql-action action to v4.31.10
See full diff in compare view
---
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
.github/workflows/ovs.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.github/workflows/ovs.yml b/.github/workflows/ovs.yml
index 778c4857f8..398d3b571e 100644
--- a/.github/workflows/ovs.yml
+++ b/.github/workflows/ovs.yml
@@ -17,4 +17,4 @@ jobs:
actions: read # Required to upload SARIF file to CodeQL
security-events: write # Require writing security events to upload
contents: read # for checkout
- uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable-pr.yml@2a387edfbe02a11d856b89172f6e978100177eb4" # v2.3.2
+ uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable-pr.yml@c5996e0193a3df57d695c1b8a1dec2a4c62e8730" # v2.3.3
From 869a450b0e1af240f28955b24579025d0730824d Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 21:40:39 +0100
Subject: [PATCH 08/22] chore(deps): bump step-security/harden-runner from
2.14.0 to 2.14.2 (#5041)
Bumps [step-security/harden-runner](https://github.com/step-security/harden-runner) from 2.14.0 to 2.14.2.
Release notes
Sourced from step-security/harden-runner's releases.
v2.14.2
What's Changed
Security fix: Fixed a medium severity vulnerability where outbound network connections using sendto, sendmsg, and sendmmsg socket system calls could bypass audit logging when using egress-policy: audit. This issue only affects the Community Tier in audit mode; block mode and Enterprise Tier were not affected. See GHSA-cpmj-h4f6-r6pq for details.
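The advisory above only applies to audit mode; block mode enforces an explicit egress allow-list, so bypassed logging does not become bypassed control. A minimal enforcing step, sketched with the pin used in this PR (the endpoint list is an illustrative assumption, not taken from this repository):

```yaml
# Sketch only: block mode rejects outbound calls that are not on the
# allow-list, so the audit-logging bypass above does not apply.
# The allowed-endpoints below are illustrative assumptions.
- name: Harden the runner (block non-allowed outbound calls)
  uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
  with:
    egress-policy: block
    allowed-endpoints: >
      github.com:443
      api.github.com:443
```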
Full Changelog : https://github.com/step-security/harden-runner/compare/v2.14.1...v2.14.2
v2.14.1
What's Changed
In some self-hosted environments, the agent could briefly fall back to public DNS resolvers during startup if the system DNS was not yet available. This behavior was unintended for GitHub-hosted runners and has now been fixed to prevent any use of public DNS resolvers.
Fixed npm audit vulnerabilities
Full Changelog : https://github.com/step-security/harden-runner/compare/v2.14.0...v2.14.1
Commits
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
.github/workflows/codeql.yml | 2 +-
.github/workflows/dependency-review.yml | 2 +-
.github/workflows/lambda.yml | 2 +-
.github/workflows/ossf-scorecard.yml | 2 +-
.github/workflows/packer-build.yml | 2 +-
.github/workflows/release.yml | 2 +-
.github/workflows/semantic-check.yml | 2 +-
.github/workflows/stale.yml | 2 +-
.github/workflows/terraform.yml | 6 +++---
.github/workflows/update-docs.yml | 4 ++--
10 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml
index 5ea721467f..e66edafa06 100644
--- a/.github/workflows/codeql.yml
+++ b/.github/workflows/codeql.yml
@@ -31,7 +31,7 @@ jobs:
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/dependency-review.yml b/.github/workflows/dependency-review.yml
index 70870495ec..882abe1e3d 100644
--- a/.github/workflows/dependency-review.yml
+++ b/.github/workflows/dependency-review.yml
@@ -24,7 +24,7 @@ jobs:
pull-requests: write # for actions/dependency-review-action to comment on PRs
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/lambda.yml b/.github/workflows/lambda.yml
index 8afc3697fb..cbf25b80f9 100644
--- a/.github/workflows/lambda.yml
+++ b/.github/workflows/lambda.yml
@@ -27,7 +27,7 @@ jobs:
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/ossf-scorecard.yml b/.github/workflows/ossf-scorecard.yml
index 6fa9769ccd..a01c46f01e 100644
--- a/.github/workflows/ossf-scorecard.yml
+++ b/.github/workflows/ossf-scorecard.yml
@@ -25,7 +25,7 @@ jobs:
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/packer-build.yml b/.github/workflows/packer-build.yml
index 2fe54a242a..ad77ab90d9 100644
--- a/.github/workflows/packer-build.yml
+++ b/.github/workflows/packer-build.yml
@@ -34,7 +34,7 @@ jobs:
working-directory: images/${{ matrix.image }}
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index 4cf56d2797..2731b204f6 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -24,7 +24,7 @@ jobs:
environment: release
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/semantic-check.yml b/.github/workflows/semantic-check.yml
index 23d7e09d7b..33397a828f 100644
--- a/.github/workflows/semantic-check.yml
+++ b/.github/workflows/semantic-check.yml
@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
index 5b7ca70cf3..ba50481664 100644
--- a/.github/workflows/stale.yml
+++ b/.github/workflows/stale.yml
@@ -18,7 +18,7 @@ jobs:
pull-requests: write # for actions/stale to close stale PRs
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/terraform.yml b/.github/workflows/terraform.yml
index 9fd4407b68..5529a8fa4f 100644
--- a/.github/workflows/terraform.yml
+++ b/.github/workflows/terraform.yml
@@ -26,7 +26,7 @@ jobs:
image: hashicorp/terraform:${{ matrix.terraform }}
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -100,7 +100,7 @@ jobs:
image: hashicorp/terraform:${{ matrix.terraform }}
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -165,7 +165,7 @@ jobs:
image: hashicorp/terraform:${{ matrix.terraform }}
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
diff --git a/.github/workflows/update-docs.yml b/.github/workflows/update-docs.yml
index 9891e13523..a4ce671f0e 100644
--- a/.github/workflows/update-docs.yml
+++ b/.github/workflows/update-docs.yml
@@ -22,7 +22,7 @@ jobs:
pull-requests: write # for peter-evans/create-pull-request to create PRs with doc updates
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -72,7 +72,7 @@ jobs:
contents: write # for actions/checkout and mkdocs gh-deploy to push to gh-pages branch
steps:
- name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
+ uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
From 5121a98cc7a1eaa7c31adfab04278f5fa9200698 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 22:07:28 +0100
Subject: [PATCH 09/22] chore(docs): bump mkdocs-material from 9.7.1 to 9.7.2
in /.github/workflows/mkdocs in the python-deps group (#5051)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Bumps the python-deps group in /.github/workflows/mkdocs with 1 update: [mkdocs-material](https://github.com/squidfunk/mkdocs-material).
Updates `mkdocs-material` from 9.7.1 to 9.7.2
Release notes
Sourced from mkdocs-material's releases.
mkdocs-material-9.7.2
[!WARNING]
Material for MkDocs is in maintenance mode
Going forward, the Material for MkDocs team focuses on Zensical, a next-gen static site generator built from first principles. We will provide critical bug fixes and security updates for Material for MkDocs until November 2026.
Read the full announcement on our blog
Changes
Opened up version ranges of optional dependencies for forward-compatibility
Added warning to mkdocs build about impending MkDocs 2.0 incompatibility (doesn't affect strict mode)
Changelog
Sourced from mkdocs-material's changelog.
mkdocs-material-9.7.3 (2026-02-24)
Fixed #8567: Print MkDocs 2.0 incompatibility warning to stderr
mkdocs-material-9.7.2 (2026-02-18)
Opened up version ranges of optional dependencies for forward-compatibility
Added warning to 'mkdocs build' about impending MkDocs 2.0 incompatibility
mkdocs-material-9.7.1 (2025-12-18)
Updated requests to 2.30+ to mitigate CVE in urllib
Fixed privacy plugin not picking up protocol-relative URLs
Fixed #8542: false positives and negatives captured in privacy plugin
mkdocs-material-9.7.0 (2025-11-11)
⚠️ Material for MkDocs is now in maintenance mode
This is the last release of Material for MkDocs that will receive new features.
Going forward, the Material for MkDocs team focuses on Zensical, a next-gen static site generator built from first principles. We will provide critical bug fixes and security updates for Material for MkDocs for 12 months at least.
Read the full announcement on our blog: https://squidfunk.github.io/mkdocs-material/blog/2025/11/05/zensical/
This release includes all features that were previously exclusive to the Insiders edition. These features are now freely available to everyone.
Note on deprecated plugins: The projects and typeset plugins are included in this release, but must be considered deprecated. Both plugins proved unsustainable to maintain and represent architectural dead ends. They are provided as-is without ongoing support.
Changes:
Added support for pinned blog posts and author profiles
Added support for customizing pagination for blog index pages
Added support for customizing blog category sort order
Added support for staying on page when switching languages
Added support for disabling tags in table of contents
Added support for nested tags and shadow tags
Added support for footnote tooltips
Added support for instant previews
Added support for instant prefetching
Added support for custom social card layouts
Added support for custom social card background images
Added support for selectable ranges in code blocks
Added support for custom selectors for code annotations
... (truncated)
Commits
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
.github/workflows/mkdocs/requirements.in | 2 +-
.github/workflows/mkdocs/requirements.txt | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/.github/workflows/mkdocs/requirements.in b/.github/workflows/mkdocs/requirements.in
index 186ef9b32d..0d58b8705f 100644
--- a/.github/workflows/mkdocs/requirements.in
+++ b/.github/workflows/mkdocs/requirements.in
@@ -1 +1 @@
-mkdocs-material==9.7.1
+mkdocs-material==9.7.3
diff --git a/.github/workflows/mkdocs/requirements.txt b/.github/workflows/mkdocs/requirements.txt
index 1f148adb68..7419868870 100644
--- a/.github/workflows/mkdocs/requirements.txt
+++ b/.github/workflows/mkdocs/requirements.txt
@@ -223,9 +223,9 @@ mkdocs-get-deps==0.2.0 \
--hash=sha256:162b3d129c7fad9b19abfdcb9c1458a651628e4b1dea628ac68790fb3061c60c \
--hash=sha256:2bf11d0b133e77a0dd036abeeb06dec8775e46efa526dc70667d8863eefc6134
# via mkdocs
-mkdocs-material==9.7.1 \
- --hash=sha256:3f6100937d7d731f87f1e3e3b021c97f7239666b9ba1151ab476cabb96c60d5c \
- --hash=sha256:89601b8f2c3e6c6ee0a918cc3566cb201d40bf37c3cd3c2067e26fadb8cce2b8
+mkdocs-material==9.7.3 \
+ --hash=sha256:37ebf7b4788c992203faf2e71900be3c197c70a4be9b0d72aed537b08a91dd9d \
+ --hash=sha256:e5f0a18319699da7e78c35e4a8df7e93537a888660f61a86bd773a7134798f22
# via -r requirements.in
mkdocs-material-extensions==1.3.1 \
--hash=sha256:10c9511cea88f568257f960358a467d12b970e1f7b2c0e5fb2bb48cab1928443 \
From d30ad3862b8120274a99f31d523a9a35ebe054d2 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 22:07:57 +0100
Subject: [PATCH 10/22] chore(deps): bump the github group across 1 directory
with 4 updates (#5058)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Bumps the github group with 4 updates in the / directory: [actions/upload-artifact](https://github.com/actions/upload-artifact), [actions/attest-build-provenance](https://github.com/actions/attest-build-provenance), [actions/stale](https://github.com/actions/stale) and [actions/cache](https://github.com/actions/cache).
Updates `actions/upload-artifact` from 6.0.0 to 7.0.0
Release notes
Sourced from actions/upload-artifact's releases.
v7.0.0
v7 What's new
Direct Uploads
Adds support for uploading single files directly (unzipped). Callers can set the new archive parameter to false to skip zipping the file during upload. Right now, we only support single files. The action will fail if the glob passed resolves to multiple files. The name parameter is also ignored with this setting. Instead, the name of the artifact will be the name of the uploaded file.
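A minimal sketch of the new archive parameter described above, using the v7 pin from this PR (the path is an illustrative assumption, not from this repository):

```yaml
# Sketch: with archive: false the single matched file is uploaded as-is
# (not zipped); the artifact is named after the file and the name input
# is ignored. The path below is illustrative.
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
  with:
    path: coverage/coverage-final.json  # must resolve to exactly one file
    archive: false
```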
ESM
To support new versions of the @actions/* packages, we've upgraded the package to ESM.
What's Changed
New Contributors
Full Changelog : https://github.com/actions/upload-artifact/compare/v6...v7.0.0
Commits
Updates `actions/attest-build-provenance` from 3.1.0 to 4.1.0
Release notes
Sourced from actions/attest-build-provenance's releases.
v4.1.0
[!NOTE]
As of version 4, actions/attest-build-provenance is simply
a wrapper on top of actions/attest .
Existing applications may continue to use the
attest-build-provenance action, but new implementations
should use actions/attest instead.
What's Changed
Full Changelog : https://github.com/actions/attest-build-provenance/compare/v4.0.0...v4.1.0
v4.0.0
[!NOTE]
As of version 4, actions/attest-build-provenance is simply a wrapper on top of actions/attest. Existing applications may continue to use the attest-build-provenance action, but new implementations should use actions/attest instead.
What's Changed
Full Changelog : https://github.com/actions/attest-build-provenance/compare/v3.2.0...v4.0.0
v3.2.0
What's Changed
Full Changelog : https://github.com/actions/attest-build-provenance/compare/v3.1.0...v3.2.0
Commits
a2bbfa2 bump actions/attest from 4.0.0 to 4.1.0 (#838)
0856891 update RELEASE.md docs (#836)
e4d4f7c prepare v4 release (#835)
02a49bd Bump github/codeql-action in the actions-minor group (#824)
7c757df Bump the npm-development group with 2 updates (#825)
c44148e Bump github/codeql-action in the actions-minor group (#818)
3234352 Bump @types/node from 25.0.10 to 25.2.0 in the npm-development group (#819)
18db129 Bump tar from 7.5.6 to 7.5.7 (#816)
90fadfa Bump @actions/core from 2.0.1 to 2.0.2 in the npm-production group (#799)
57db8ba Bump the npm-development group across 1 directory with 3 updates (#808)
Additional commits viewable in compare view
Updates `actions/stale` from 10.1.1 to 10.2.0
Release notes
Sourced from actions/stale's releases.
v10.2.0
What's Changed
Bug Fix
Dependency Updates
New Contributors
Full Changelog : https://github.com/actions/stale/compare/v10...v10.2.0
Commits
Updates `actions/cache` from 5.0.1 to 5.0.3
Release notes
Sourced from actions/cache's releases.
v5.0.3
What's Changed
Full Changelog : https://github.com/actions/cache/compare/v5...v5.0.3
v5.0.2
What's Changed
When creating cache entries, 429s returned from the cache service will not be retried.
Changelog
Sourced from actions/cache's changelog.
Releases
How to prepare a release
[!NOTE]
Relevant for maintainers with write access only.
1. Switch to a new branch from main.
2. Run npm test to ensure all tests are passing.
3. Update the version in https://github.com/actions/cache/blob/main/package.json.
4. Run npm run build to update the compiled files.
5. Update https://github.com/actions/cache/blob/main/RELEASES.md with the new version and changes in the ## Changelog section.
6. Run licensed cache to update the license report.
7. Run licensed status and resolve any warnings by updating the https://github.com/actions/cache/blob/main/.licensed.yml file with the exceptions.
8. Commit your changes and push your branch upstream.
9. Open a pull request against main and get it reviewed and merged.
10. Draft a new release at https://github.com/actions/cache/releases using the same version number used in package.json.
11. Create a new tag with the version number.
12. Auto generate release notes and update them to match the changes you made in RELEASES.md.
13. Toggle the set as the latest release option.
14. Publish the release.
15. Navigate to https://github.com/actions/cache/actions/workflows/release-new-action-version.yml; there should be a workflow run queued with the same version number.
16. Approve the run to publish the new version and update the major tags for this action.
Changelog
5.0.3
5.0.2
Bump @actions/cache to v5.0.3 #1692
5.0.1
Update @azure/storage-blob to ^12.29.1 via @actions/cache@5.0.1 #1685
5.0.0
[!IMPORTANT]
actions/cache@v5 runs on the Node.js 24 runtime and requires a minimum Actions Runner version of 2.327.1. If you are using self-hosted runners, ensure they are updated before upgrading.
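The runtime requirement above applies to any v5 usage. A minimal cache step sketched with the v5.0.3 pin from this PR; the path mirrors the TFLint plugin cache used in this repository's workflows, while the key expression is an illustrative assumption:

```yaml
# Sketch: actions/cache@v5 needs the Node.js 24 runtime (Actions Runner
# >= 2.327.1 on self-hosted runners). The key below is illustrative.
- uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
  name: Cache TFLint plugin dir
  with:
    path: ~/.tflint.d/plugins
    key: tflint-${{ hashFiles('.tflint.hcl') }}
```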
4.3.0
... (truncated)
Commits
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
.github/workflows/lambda.yml | 2 +-
.github/workflows/ossf-scorecard.yml | 2 +-
.github/workflows/release.yml | 2 +-
.github/workflows/stale.yml | 2 +-
.github/workflows/terraform.yml | 6 +++---
.github/workflows/update-docs.yml | 2 +-
6 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/.github/workflows/lambda.yml b/.github/workflows/lambda.yml
index cbf25b80f9..5a35a8558c 100644
--- a/.github/workflows/lambda.yml
+++ b/.github/workflows/lambda.yml
@@ -46,7 +46,7 @@ jobs:
- name: Build distribution
run: yarn build
- name: Upload coverage report
- uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
if: ${{ failure() }}
with:
name: coverage-reports
diff --git a/.github/workflows/ossf-scorecard.yml b/.github/workflows/ossf-scorecard.yml
index a01c46f01e..4eed21aec3 100644
--- a/.github/workflows/ossf-scorecard.yml
+++ b/.github/workflows/ossf-scorecard.yml
@@ -44,7 +44,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
- uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
+ uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: SARIF file
path: results.sarif
diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index 2731b204f6..9d9cd51aaf 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -58,7 +58,7 @@ jobs:
- name: Attest
if: ${{ steps.release.outputs.releases_created == 'true' }}
id: attest
- uses: actions/attest-build-provenance@00014ed6ed5efc5b1ab7f7f34a39eb55d41aa4f8 # v3.1.0
+ uses: actions/attest-build-provenance@a2bbfa25375fe432b6a289bc6b6cd05ecd0c4c32 # v4.1.0
with:
subject-path: '${{ github.workspace }}/lambdas/functions/**/*.zip'
- name: Update release notes with attestation
diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
index ba50481664..fdc220074e 100644
--- a/.github/workflows/stale.yml
+++ b/.github/workflows/stale.yml
@@ -22,7 +22,7 @@ jobs:
with:
egress-policy: audit
- - uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
+ - uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
stale-issue-message: >
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed if no further activity occurs. Thank you for your contributions.
diff --git a/.github/workflows/terraform.yml b/.github/workflows/terraform.yml
index 5529a8fa4f..dd94de78f0 100644
--- a/.github/workflows/terraform.yml
+++ b/.github/workflows/terraform.yml
@@ -57,7 +57,7 @@ jobs:
run: apk add --no-cache tar
continue-on-error: true
- if: contains(matrix.terraform, '1.5.')
- uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
+ uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
name: Cache TFLint plugin dir
with:
path: ~/.tflint.d/plugins
@@ -123,7 +123,7 @@ jobs:
run: apk add --no-cache tar
continue-on-error: true
- if: contains(matrix.terraform, '1.3.')
- uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
+ uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
name: Cache TFLint plugin dir
with:
path: ~/.tflint.d/plugins
@@ -188,7 +188,7 @@ jobs:
run: apk add --no-cache tar
continue-on-error: true
- if: contains(matrix.terraform, '1.5.')
- uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
+ uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
name: Cache TFLint plugin dir
with:
path: ~/.tflint.d/plugins
diff --git a/.github/workflows/update-docs.yml b/.github/workflows/update-docs.yml
index a4ce671f0e..3320ea9f48 100644
--- a/.github/workflows/update-docs.yml
+++ b/.github/workflows/update-docs.yml
@@ -87,7 +87,7 @@ jobs:
with:
python-version: 3.x
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- - uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
+ - uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
From 26506ee2e16bedffd210afa043cd582cddb09565 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 22:08:13 +0100
Subject: [PATCH 11/22] fix(lambda): bump rollup from 4.46.2 to 4.59.0 in
/lambdas (#5052)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Bumps [rollup](https://github.com/rollup/rollup) from 4.46.2 to 4.59.0.
Release notes
Sourced from rollup's releases.
v4.59.0
4.59.0
2026-02-22
Features
Throw when the generated bundle contains paths that would leave the output directory (#6276)
Pull Requests
v4.58.0
4.58.0
2026-02-20
Features
Also support __NO_SIDE_EFFECTS__ annotation before variable declarations declaring function expressions (#6272)
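The annotation placement described above can be sketched in a small module; the helper name is hypothetical, and the tree-shaking that consumes the annotation happens at bundle time, not shown here:

```javascript
// Hypothetical helper. As of rollup 4.58.0 the /*#__NO_SIDE_EFFECTS__*/
// annotation may also precede a variable declaration that declares a
// function expression, telling the bundler that calls to it are pure
// and may be dropped when their results are unused.
/*#__NO_SIDE_EFFECTS__*/
const formatLabel = (name) => `label:${name}`;

// An unused call such as formatLabel("dead") could now be tree-shaken;
// this used call is kept in the bundle.
console.log(formatLabel("build"));
```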
Pull Requests
v4.57.1
4.57.1
2026-01-30
Bug Fixes
Fix heap corruption issue in Windows (#6251)
Ensure exports of a dynamic import are fully included when called from a try...catch (#6254)
Pull Requests
... (truncated)
Changelog
Sourced from rollup's changelog.
4.59.0
2026-02-22
Features
Throw when the generated bundle contains paths that would leave the output directory (#6276)
Pull Requests
4.58.0
2026-02-20
Features
Also support __NO_SIDE_EFFECTS__ annotation before variable declarations declaring function expressions (#6272)
Pull Requests
4.57.1
2026-01-30
Bug Fixes
Fix heap corruption issue in Windows (#6251)
Ensure exports of a dynamic import are fully included when called from a try...catch (#6254)
Pull Requests
... (truncated)
Commits
Maintainer changes
This version was pushed to npm by [GitHub Actions](https://www.npmjs.com/~GitHub Actions), a new releaser for rollup since your current version.
Install script changes
This version modifies the prepare script that runs during installation. Review the package contents before updating.
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/github-aws-runners/terraform-aws-github-runner/network/alerts).
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
lambdas/yarn.lock | 220 ++++++++++++++++++++++++++++------------------
1 file changed, 135 insertions(+), 85 deletions(-)
diff --git a/lambdas/yarn.lock b/lambdas/yarn.lock
index 781d157b05..f73cddaca3 100644
--- a/lambdas/yarn.lock
+++ b/lambdas/yarn.lock
@@ -4259,142 +4259,177 @@ __metadata:
languageName: node
linkType: hard
-"@rollup/rollup-android-arm-eabi@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-android-arm-eabi@npm:4.46.2"
+"@rollup/rollup-android-arm-eabi@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-android-arm-eabi@npm:4.59.0"
conditions: os=android & cpu=arm
languageName: node
linkType: hard
-"@rollup/rollup-android-arm64@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-android-arm64@npm:4.46.2"
+"@rollup/rollup-android-arm64@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-android-arm64@npm:4.59.0"
conditions: os=android & cpu=arm64
languageName: node
linkType: hard
-"@rollup/rollup-darwin-arm64@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-darwin-arm64@npm:4.46.2"
+"@rollup/rollup-darwin-arm64@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-darwin-arm64@npm:4.59.0"
conditions: os=darwin & cpu=arm64
languageName: node
linkType: hard
-"@rollup/rollup-darwin-x64@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-darwin-x64@npm:4.46.2"
+"@rollup/rollup-darwin-x64@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-darwin-x64@npm:4.59.0"
conditions: os=darwin & cpu=x64
languageName: node
linkType: hard
-"@rollup/rollup-freebsd-arm64@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-freebsd-arm64@npm:4.46.2"
+"@rollup/rollup-freebsd-arm64@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-freebsd-arm64@npm:4.59.0"
conditions: os=freebsd & cpu=arm64
languageName: node
linkType: hard
-"@rollup/rollup-freebsd-x64@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-freebsd-x64@npm:4.46.2"
+"@rollup/rollup-freebsd-x64@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-freebsd-x64@npm:4.59.0"
conditions: os=freebsd & cpu=x64
languageName: node
linkType: hard
-"@rollup/rollup-linux-arm-gnueabihf@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-arm-gnueabihf@npm:4.46.2"
+"@rollup/rollup-linux-arm-gnueabihf@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-arm-gnueabihf@npm:4.59.0"
conditions: os=linux & cpu=arm & libc=glibc
languageName: node
linkType: hard
-"@rollup/rollup-linux-arm-musleabihf@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-arm-musleabihf@npm:4.46.2"
+"@rollup/rollup-linux-arm-musleabihf@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-arm-musleabihf@npm:4.59.0"
conditions: os=linux & cpu=arm & libc=musl
languageName: node
linkType: hard
-"@rollup/rollup-linux-arm64-gnu@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-arm64-gnu@npm:4.46.2"
+"@rollup/rollup-linux-arm64-gnu@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-arm64-gnu@npm:4.59.0"
conditions: os=linux & cpu=arm64 & libc=glibc
languageName: node
linkType: hard
-"@rollup/rollup-linux-arm64-musl@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-arm64-musl@npm:4.46.2"
+"@rollup/rollup-linux-arm64-musl@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-arm64-musl@npm:4.59.0"
conditions: os=linux & cpu=arm64 & libc=musl
languageName: node
linkType: hard
-"@rollup/rollup-linux-loongarch64-gnu@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-loongarch64-gnu@npm:4.46.2"
+"@rollup/rollup-linux-loong64-gnu@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-loong64-gnu@npm:4.59.0"
conditions: os=linux & cpu=loong64 & libc=glibc
languageName: node
linkType: hard
-"@rollup/rollup-linux-ppc64-gnu@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-ppc64-gnu@npm:4.46.2"
+"@rollup/rollup-linux-loong64-musl@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-loong64-musl@npm:4.59.0"
+ conditions: os=linux & cpu=loong64 & libc=musl
+ languageName: node
+ linkType: hard
+
+"@rollup/rollup-linux-ppc64-gnu@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-ppc64-gnu@npm:4.59.0"
conditions: os=linux & cpu=ppc64 & libc=glibc
languageName: node
linkType: hard
-"@rollup/rollup-linux-riscv64-gnu@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-riscv64-gnu@npm:4.46.2"
+"@rollup/rollup-linux-ppc64-musl@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-ppc64-musl@npm:4.59.0"
+ conditions: os=linux & cpu=ppc64 & libc=musl
+ languageName: node
+ linkType: hard
+
+"@rollup/rollup-linux-riscv64-gnu@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-riscv64-gnu@npm:4.59.0"
conditions: os=linux & cpu=riscv64 & libc=glibc
languageName: node
linkType: hard
-"@rollup/rollup-linux-riscv64-musl@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-riscv64-musl@npm:4.46.2"
+"@rollup/rollup-linux-riscv64-musl@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-riscv64-musl@npm:4.59.0"
conditions: os=linux & cpu=riscv64 & libc=musl
languageName: node
linkType: hard
-"@rollup/rollup-linux-s390x-gnu@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-s390x-gnu@npm:4.46.2"
+"@rollup/rollup-linux-s390x-gnu@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-s390x-gnu@npm:4.59.0"
conditions: os=linux & cpu=s390x & libc=glibc
languageName: node
linkType: hard
-"@rollup/rollup-linux-x64-gnu@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-x64-gnu@npm:4.46.2"
+"@rollup/rollup-linux-x64-gnu@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-x64-gnu@npm:4.59.0"
conditions: os=linux & cpu=x64 & libc=glibc
languageName: node
linkType: hard
-"@rollup/rollup-linux-x64-musl@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-linux-x64-musl@npm:4.46.2"
+"@rollup/rollup-linux-x64-musl@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-linux-x64-musl@npm:4.59.0"
conditions: os=linux & cpu=x64 & libc=musl
languageName: node
linkType: hard
-"@rollup/rollup-win32-arm64-msvc@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-win32-arm64-msvc@npm:4.46.2"
+"@rollup/rollup-openbsd-x64@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-openbsd-x64@npm:4.59.0"
+ conditions: os=openbsd & cpu=x64
+ languageName: node
+ linkType: hard
+
+"@rollup/rollup-openharmony-arm64@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-openharmony-arm64@npm:4.59.0"
+ conditions: os=openharmony & cpu=arm64
+ languageName: node
+ linkType: hard
+
+"@rollup/rollup-win32-arm64-msvc@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-win32-arm64-msvc@npm:4.59.0"
conditions: os=win32 & cpu=arm64
languageName: node
linkType: hard
-"@rollup/rollup-win32-ia32-msvc@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-win32-ia32-msvc@npm:4.46.2"
+"@rollup/rollup-win32-ia32-msvc@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-win32-ia32-msvc@npm:4.59.0"
conditions: os=win32 & cpu=ia32
languageName: node
linkType: hard
-"@rollup/rollup-win32-x64-msvc@npm:4.46.2":
- version: 4.46.2
- resolution: "@rollup/rollup-win32-x64-msvc@npm:4.46.2"
+"@rollup/rollup-win32-x64-gnu@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-win32-x64-gnu@npm:4.59.0"
+ conditions: os=win32 & cpu=x64
+ languageName: node
+ linkType: hard
+
+"@rollup/rollup-win32-x64-msvc@npm:4.59.0":
+ version: 4.59.0
+ resolution: "@rollup/rollup-win32-x64-msvc@npm:4.59.0"
conditions: os=win32 & cpu=x64
languageName: node
linkType: hard
@@ -10221,29 +10256,34 @@ __metadata:
linkType: hard
"rollup@npm:^4.43.0":
- version: 4.46.2
- resolution: "rollup@npm:4.46.2"
- dependencies:
- "@rollup/rollup-android-arm-eabi": "npm:4.46.2"
- "@rollup/rollup-android-arm64": "npm:4.46.2"
- "@rollup/rollup-darwin-arm64": "npm:4.46.2"
- "@rollup/rollup-darwin-x64": "npm:4.46.2"
- "@rollup/rollup-freebsd-arm64": "npm:4.46.2"
- "@rollup/rollup-freebsd-x64": "npm:4.46.2"
- "@rollup/rollup-linux-arm-gnueabihf": "npm:4.46.2"
- "@rollup/rollup-linux-arm-musleabihf": "npm:4.46.2"
- "@rollup/rollup-linux-arm64-gnu": "npm:4.46.2"
- "@rollup/rollup-linux-arm64-musl": "npm:4.46.2"
- "@rollup/rollup-linux-loongarch64-gnu": "npm:4.46.2"
- "@rollup/rollup-linux-ppc64-gnu": "npm:4.46.2"
- "@rollup/rollup-linux-riscv64-gnu": "npm:4.46.2"
- "@rollup/rollup-linux-riscv64-musl": "npm:4.46.2"
- "@rollup/rollup-linux-s390x-gnu": "npm:4.46.2"
- "@rollup/rollup-linux-x64-gnu": "npm:4.46.2"
- "@rollup/rollup-linux-x64-musl": "npm:4.46.2"
- "@rollup/rollup-win32-arm64-msvc": "npm:4.46.2"
- "@rollup/rollup-win32-ia32-msvc": "npm:4.46.2"
- "@rollup/rollup-win32-x64-msvc": "npm:4.46.2"
+ version: 4.59.0
+ resolution: "rollup@npm:4.59.0"
+ dependencies:
+ "@rollup/rollup-android-arm-eabi": "npm:4.59.0"
+ "@rollup/rollup-android-arm64": "npm:4.59.0"
+ "@rollup/rollup-darwin-arm64": "npm:4.59.0"
+ "@rollup/rollup-darwin-x64": "npm:4.59.0"
+ "@rollup/rollup-freebsd-arm64": "npm:4.59.0"
+ "@rollup/rollup-freebsd-x64": "npm:4.59.0"
+ "@rollup/rollup-linux-arm-gnueabihf": "npm:4.59.0"
+ "@rollup/rollup-linux-arm-musleabihf": "npm:4.59.0"
+ "@rollup/rollup-linux-arm64-gnu": "npm:4.59.0"
+ "@rollup/rollup-linux-arm64-musl": "npm:4.59.0"
+ "@rollup/rollup-linux-loong64-gnu": "npm:4.59.0"
+ "@rollup/rollup-linux-loong64-musl": "npm:4.59.0"
+ "@rollup/rollup-linux-ppc64-gnu": "npm:4.59.0"
+ "@rollup/rollup-linux-ppc64-musl": "npm:4.59.0"
+ "@rollup/rollup-linux-riscv64-gnu": "npm:4.59.0"
+ "@rollup/rollup-linux-riscv64-musl": "npm:4.59.0"
+ "@rollup/rollup-linux-s390x-gnu": "npm:4.59.0"
+ "@rollup/rollup-linux-x64-gnu": "npm:4.59.0"
+ "@rollup/rollup-linux-x64-musl": "npm:4.59.0"
+ "@rollup/rollup-openbsd-x64": "npm:4.59.0"
+ "@rollup/rollup-openharmony-arm64": "npm:4.59.0"
+ "@rollup/rollup-win32-arm64-msvc": "npm:4.59.0"
+ "@rollup/rollup-win32-ia32-msvc": "npm:4.59.0"
+ "@rollup/rollup-win32-x64-gnu": "npm:4.59.0"
+ "@rollup/rollup-win32-x64-msvc": "npm:4.59.0"
"@types/estree": "npm:1.0.8"
fsevents: "npm:~2.3.2"
dependenciesMeta:
@@ -10267,10 +10307,14 @@ __metadata:
optional: true
"@rollup/rollup-linux-arm64-musl":
optional: true
- "@rollup/rollup-linux-loongarch64-gnu":
+ "@rollup/rollup-linux-loong64-gnu":
+ optional: true
+ "@rollup/rollup-linux-loong64-musl":
optional: true
"@rollup/rollup-linux-ppc64-gnu":
optional: true
+ "@rollup/rollup-linux-ppc64-musl":
+ optional: true
"@rollup/rollup-linux-riscv64-gnu":
optional: true
"@rollup/rollup-linux-riscv64-musl":
@@ -10281,17 +10325,23 @@ __metadata:
optional: true
"@rollup/rollup-linux-x64-musl":
optional: true
+ "@rollup/rollup-openbsd-x64":
+ optional: true
+ "@rollup/rollup-openharmony-arm64":
+ optional: true
"@rollup/rollup-win32-arm64-msvc":
optional: true
"@rollup/rollup-win32-ia32-msvc":
optional: true
+ "@rollup/rollup-win32-x64-gnu":
+ optional: true
"@rollup/rollup-win32-x64-msvc":
optional: true
fsevents:
optional: true
bin:
rollup: dist/bin/rollup
- checksum: 10c0/f428497fe119fe7c4e34f1020d45ba13e99b94c9aa36958d88823d932b155c9df3d84f53166f3ee913ff68ea6c7599a9ab34861d88562ad9d8420f64ca5dad4c
+ checksum: 10c0/f38742da34cfee5e899302615fa157aa77cb6a2a1495e5e3ce4cc9c540d3262e235bbe60caa31562bbfe492b01fdb3e7a8c43c39d842d3293bcf843123b766fc
languageName: node
linkType: hard
From 53b513c476a035fd0c6168c0daadf0ae759097c7 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 22:09:10 +0100
Subject: [PATCH 12/22] fix(lambda): bump the aws-powertools group in /lambdas
with 4 updates (#5044)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Bumps the aws-powertools group in /lambdas with 4 updates:
[@aws-lambda-powertools/parameters](https://github.com/aws-powertools/powertools-lambda-typescript),
[@aws-lambda-powertools/logger](https://github.com/aws-powertools/powertools-lambda-typescript),
[@aws-lambda-powertools/metrics](https://github.com/aws-powertools/powertools-lambda-typescript)
and
[@aws-lambda-powertools/tracer](https://github.com/aws-powertools/powertools-lambda-typescript).
Updates `@aws-lambda-powertools/parameters` from 2.30.2 to 2.31.0
Release notes
Sourced from @aws-lambda-powertools/parameters's releases.
v2.31.0
Summary
In this release we are pleased to announce Tracer middleware for the
HTTP event handler, which allows users to enable distributed tracing for
their HTTP routes with minimal boilerplate code.
In addition, the metric utility now supports a fluent interface,
allowing you to chain multiple methods in a single statement.
We have also fixed a bug in the HTTP event handler that caused
parameterized headers to be handled incorrectly.
⭐ Special thanks to @nateiler and @dothomson for their first PR merged in the project, and to @arnabrahman for another great contribution 🎉
Tracer Middleware
You can now use the Tracer utility with the HTTP event handler to gain observability over your routes. The middleware:
- Creates a subsegment for each HTTP route with the format METHOD /path (e.g., GET /users)
- Adds ColdStart and Service annotations
- Optionally captures JSON response bodies as metadata
- Captures errors as metadata when exceptions occur
import { Router } from '@aws-lambda-powertools/event-handler/http';
import { tracer as tracerMiddleware } from '@aws-lambda-powertools/event-handler/http/middleware/tracer';
import { Tracer } from '@aws-lambda-powertools/tracer';
import type { Context } from 'aws-lambda';

const tracer = new Tracer({ serviceName: 'my-api' });
const app = new Router();

app.get(
  '/users/cards',
  [tracerMiddleware(tracer, { captureResponse: false })],
  ({ params }) => {
    return { id: params.id, secret: 'sensitive-data' };
  }
);

export const handler = async (event: unknown, context: Context) =>
  app.resolve(event, context);
Metrics Fluent Interface
All mutation methods (with the exception of clear*) now return the metric instance that was mutated, allowing you to chain multiple metrics operations in a single statement.
import { Metrics } from '@aws-lambda-powertools/metrics';

const metrics = new Metrics();
... (truncated)
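The quoted snippet is cut off before any chaining is shown. As a self-contained sketch of the fluent pattern the release notes describe (a hypothetical `FluentMetrics` class for illustration, not the real `@aws-lambda-powertools/metrics` API), each mutation method returns `this` so several operations can be chained in one statement:

```typescript
// Minimal sketch of a fluent (method-chaining) interface: every mutation
// method returns the instance it mutated. Illustration only -- not the
// actual @aws-lambda-powertools/metrics API surface.
class FluentMetrics {
  private readonly metrics: Record<string, number> = {};
  private readonly dimensions: Record<string, string> = {};

  addMetric(name: string, value: number): this {
    this.metrics[name] = (this.metrics[name] ?? 0) + value;
    return this; // returning `this` is what enables chaining
  }

  addDimension(name: string, value: string): this {
    this.dimensions[name] = value;
    return this;
  }

  snapshot(): { metrics: Record<string, number>; dimensions: Record<string, string> } {
    return { metrics: { ...this.metrics }, dimensions: { ...this.dimensions } };
  }
}

// Several mutations chained in a single statement:
const result = new FluentMetrics()
  .addMetric('successfulBooking', 1)
  .addMetric('successfulBooking', 1)
  .addDimension('environment', 'prod')
  .snapshot();

console.log(result.metrics.successfulBooking); // 2
console.log(result.dimensions.environment); // prod
```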
Changelog
Sourced from @aws-lambda-powertools/parameters's changelog.
2.31.0 (2026-02-10)
Features
- metrics: return metrics instance from metrics functions (#4930) (e7aa2e2)
- parameters: pass underlying SDK error as cause to GetParameterError (#4936) (b3499db)
- event-handler: add tracer middleware for HTTP routes (#4982) (8be6157)
Bug Fixes
- event-handler: handle set-cookie header values with multiple attributes (#4990) (42317fe)
- kafka: handle tombstone events (#4991) (04c3236)
Commits
- 54d1fa3 chore(ci): bump version to 2.31.0 (#5007)
- 42317fe fix(event-handler): handle set-cookie header values with multiple attributes ...
- 8e4da8a chore(deps): bump @types/node from 25.2.0 to 25.2.2 (#5004)
- ddf54e0 chore(deps): bump github/codeql-action from 4.32.1 to 4.32.2 (#4998)
- 7692071 chore(deps): bump @types/node from 25.2.0 to 25.2.1 (#4999)
- d8dfadc chore: manually upgrade dependency tree (#5002)
- 60b6ce1 ci: switch npm auth to OIDC (#4997)
- 04c3236 fix(kafka): handle tombstone events (#4991)
- 8e1359e chore(deps): bump the aws-cdk group across 1 directory with 3 updates (#4985)
- 4c6657a test: extract DF idempotency e2e tests (#4994)
Additional commits viewable in compare view
Updates `@aws-lambda-powertools/logger` from 2.30.2 to 2.31.0
Release notes
Sourced from @aws-lambda-powertools/logger's releases.
v2.31.0
Summary
In this release we are pleased to announce Tracer middleware for the
HTTP event handler, which allows users to enable distributed tracing for
their HTTP routes with minimal boilerplate code.
In addition, the metric utility now supports a fluent interface,
allowing you to chain multiple methods in a single statement.
We have also fixed a bug in the HTTP event handler that caused
parameterized headers to be handled incorrectly.
⭐ Special thanks to @nateiler and @dothomson for their first PR merged in the project, and to @arnabrahman for another great contribution 🎉
Tracer Middleware
You can now use the Tracer utility with the HTTP event handler to gain observability over your routes. The middleware:
- Creates a subsegment for each HTTP route with the format METHOD /path (e.g., GET /users)
- Adds ColdStart and Service annotations
- Optionally captures JSON response bodies as metadata
- Captures errors as metadata when exceptions occur
import { Router } from '@aws-lambda-powertools/event-handler/http';
import { tracer as tracerMiddleware } from '@aws-lambda-powertools/event-handler/http/middleware/tracer';
import { Tracer } from '@aws-lambda-powertools/tracer';
import type { Context } from 'aws-lambda';

const tracer = new Tracer({ serviceName: 'my-api' });
const app = new Router();

app.get(
  '/users/cards',
  [tracerMiddleware(tracer, { captureResponse: false })],
  ({ params }) => {
    return { id: params.id, secret: 'sensitive-data' };
  }
);

export const handler = async (event: unknown, context: Context) =>
  app.resolve(event, context);
Metrics Fluent Interface
All mutation methods (with the exception of clear*) now return the metric instance that was mutated, allowing you to chain multiple metrics operations in a single statement.
import { Metrics } from '@aws-lambda-powertools/metrics';

const metrics = new Metrics();
... (truncated)
Changelog
Sourced from @aws-lambda-powertools/logger's changelog.
2.31.0 (2026-02-10)
Features
- metrics: return metrics instance from metrics functions (#4930) (e7aa2e2)
- parameters: pass underlying SDK error as cause to GetParameterError (#4936) (b3499db)
- event-handler: add tracer middleware for HTTP routes (#4982) (8be6157)
Bug Fixes
- event-handler: handle set-cookie header values with multiple attributes (#4990) (42317fe)
- kafka: handle tombstone events (#4991) (04c3236)
Commits
- 54d1fa3 chore(ci): bump version to 2.31.0 (#5007)
- 42317fe fix(event-handler): handle set-cookie header values with multiple attributes ...
- 8e4da8a chore(deps): bump @types/node from 25.2.0 to 25.2.2 (#5004)
- ddf54e0 chore(deps): bump github/codeql-action from 4.32.1 to 4.32.2 (#4998)
- 7692071 chore(deps): bump @types/node from 25.2.0 to 25.2.1 (#4999)
- d8dfadc chore: manually upgrade dependency tree (#5002)
- 60b6ce1 ci: switch npm auth to OIDC (#4997)
- 04c3236 fix(kafka): handle tombstone events (#4991)
- 8e1359e chore(deps): bump the aws-cdk group across 1 directory with 3 updates (#4985)
- 4c6657a test: extract DF idempotency e2e tests (#4994)
Additional commits viewable in compare view
Updates `@aws-lambda-powertools/metrics` from 2.30.2 to 2.31.0
Release notes
Sourced from @aws-lambda-powertools/metrics's releases.
v2.31.0
Summary
In this release we are pleased to announce Tracer middleware for the
HTTP event handler, which allows users to enable distributed tracing for
their HTTP routes with minimal boilerplate code.
In addition, the metric utility now supports a fluent interface,
allowing you to chain multiple methods in a single statement.
We have also fixed a bug in the HTTP event handler that caused
parameterized headers to be handled incorrectly.
⭐ Special thanks to @nateiler and @dothomson for their first PR merged in the project, and to @arnabrahman for another great contribution 🎉
Tracer Middleware
You can now use the Tracer utility with the HTTP event handler to gain observability over your routes. The middleware:
- Creates a subsegment for each HTTP route with the format METHOD /path (e.g., GET /users)
- Adds ColdStart and Service annotations
- Optionally captures JSON response bodies as metadata
- Captures errors as metadata when exceptions occur
import { Router } from '@aws-lambda-powertools/event-handler/http';
import { tracer as tracerMiddleware } from '@aws-lambda-powertools/event-handler/http/middleware/tracer';
import { Tracer } from '@aws-lambda-powertools/tracer';
import type { Context } from 'aws-lambda';

const tracer = new Tracer({ serviceName: 'my-api' });
const app = new Router();

app.get(
  '/users/cards',
  [tracerMiddleware(tracer, { captureResponse: false })],
  ({ params }) => {
    return { id: params.id, secret: 'sensitive-data' };
  }
);

export const handler = async (event: unknown, context: Context) =>
  app.resolve(event, context);
Metrics Fluent Interface
All mutation methods (with the exception of clear*) now return the metric instance that was mutated, allowing you to chain multiple metrics operations in a single statement.
import { Metrics } from '@aws-lambda-powertools/metrics';

const metrics = new Metrics();
... (truncated)
Changelog
Sourced from @aws-lambda-powertools/metrics's changelog.
2.31.0 (2026-02-10)
Features
- metrics: return metrics instance from metrics functions (#4930) (e7aa2e2)
- parameters: pass underlying SDK error as cause to GetParameterError (#4936) (b3499db)
- event-handler: add tracer middleware for HTTP routes (#4982) (8be6157)
Bug Fixes
- event-handler: handle set-cookie header values with multiple attributes (#4990) (42317fe)
- kafka: handle tombstone events (#4991) (04c3236)
Commits
- 54d1fa3 chore(ci): bump version to 2.31.0 (#5007)
- 42317fe fix(event-handler): handle set-cookie header values with multiple attributes ...
- 8e4da8a chore(deps): bump @types/node from 25.2.0 to 25.2.2 (#5004)
- ddf54e0 chore(deps): bump github/codeql-action from 4.32.1 to 4.32.2 (#4998)
- 7692071 chore(deps): bump @types/node from 25.2.0 to 25.2.1 (#4999)
- d8dfadc chore: manually upgrade dependency tree (#5002)
- 60b6ce1 ci: switch npm auth to OIDC (#4997)
- 04c3236 fix(kafka): handle tombstone events (#4991)
- 8e1359e chore(deps): bump the aws-cdk group across 1 directory with 3 updates (#4985)
- 4c6657a test: extract DF idempotency e2e tests (#4994)
Additional commits viewable in compare view
Updates `@aws-lambda-powertools/tracer` from 2.30.2 to 2.31.0
Release notes
Sourced from @aws-lambda-powertools/tracer's releases.
v2.31.0
Summary
In this release we are pleased to announce Tracer middleware for the
HTTP event handler, which allows users to enable distributed tracing for
their HTTP routes with minimal boilerplate code.
In addition, the metric utility now supports a fluent interface,
allowing you to chain multiple methods in a single statement.
We have also fixed a bug in the HTTP event handler that caused
parameterized headers to be handled incorrectly.
⭐ Special thanks to @nateiler and @dothomson for their first PR merged in the project, and to @arnabrahman for another great contribution 🎉
Tracer Middleware
You can now use the Tracer utility with the HTTP event handler to gain observability over your routes. The middleware:
- Creates a subsegment for each HTTP route with the format METHOD /path (e.g., GET /users)
- Adds ColdStart and Service annotations
- Optionally captures JSON response bodies as metadata
- Captures errors as metadata when exceptions occur
import { Router } from '@aws-lambda-powertools/event-handler/http';
import { tracer as tracerMiddleware } from '@aws-lambda-powertools/event-handler/http/middleware/tracer';
import { Tracer } from '@aws-lambda-powertools/tracer';
import type { Context } from 'aws-lambda';

const tracer = new Tracer({ serviceName: 'my-api' });
const app = new Router();

app.get(
  '/users/cards',
  [tracerMiddleware(tracer, { captureResponse: false })],
  ({ params }) => {
    return { id: params.id, secret: 'sensitive-data' };
  }
);

export const handler = async (event: unknown, context: Context) =>
  app.resolve(event, context);
Metrics Fluent Interface
All mutation methods (with the exception of clear*) now return the metric instance that was mutated, allowing you to chain multiple metrics operations in a single statement.
import { Metrics } from '@aws-lambda-powertools/metrics';

const metrics = new Metrics();
... (truncated)
Changelog
Sourced from @aws-lambda-powertools/tracer's changelog.
2.31.0 (2026-02-10)
Features
- metrics: return metrics instance from metrics functions (#4930) (e7aa2e2)
- parameters: pass underlying SDK error as cause to GetParameterError (#4936) (b3499db)
- event-handler: add tracer middleware for HTTP routes (#4982) (8be6157)
Bug Fixes
- event-handler: handle set-cookie header values with multiple attributes (#4990) (42317fe)
- kafka: handle tombstone events (#4991) (04c3236)
Commits
- 54d1fa3 chore(ci): bump version to 2.31.0 (#5007)
- 42317fe fix(event-handler): handle set-cookie header values with multiple attributes ...
- 8e4da8a chore(deps): bump @types/node from 25.2.0 to 25.2.2 (#5004)
- ddf54e0 chore(deps): bump github/codeql-action from 4.32.1 to 4.32.2 (#4998)
- 7692071 chore(deps): bump @types/node from 25.2.0 to 25.2.1 (#4999)
- d8dfadc chore: manually upgrade dependency tree (#5002)
- 60b6ce1 ci: switch npm auth to OIDC (#4997)
- 04c3236 fix(kafka): handle tombstone events (#4991)
- 8e1359e chore(deps): bump the aws-cdk group across 1 directory with 3 updates (#4985)
- 4c6657a test: extract DF idempotency e2e tests (#4994)
Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore ` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore ` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore ` will
remove the ignore condition of the specified dependency and ignore
conditions
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---
lambdas/functions/control-plane/package.json | 2 +-
lambdas/libs/aws-powertools-util/package.json | 6 +-
lambdas/yarn.lock | 58 +++++++++----------
3 files changed, 33 insertions(+), 33 deletions(-)
diff --git a/lambdas/functions/control-plane/package.json b/lambdas/functions/control-plane/package.json
index 7a75caa3ef..7519b16a06 100644
--- a/lambdas/functions/control-plane/package.json
+++ b/lambdas/functions/control-plane/package.json
@@ -32,7 +32,7 @@
"dependencies": {
"@aws-github-runner/aws-powertools-util": "*",
"@aws-github-runner/aws-ssm-util": "*",
- "@aws-lambda-powertools/parameters": "^2.30.2",
+ "@aws-lambda-powertools/parameters": "^2.31.0",
"@aws-sdk/client-ec2": "^3.984.0",
"@aws-sdk/client-sqs": "^3.984.0",
"@middy/core": "^6.4.5",
diff --git a/lambdas/libs/aws-powertools-util/package.json b/lambdas/libs/aws-powertools-util/package.json
index d018d0b53a..6f94a36236 100644
--- a/lambdas/libs/aws-powertools-util/package.json
+++ b/lambdas/libs/aws-powertools-util/package.json
@@ -20,9 +20,9 @@
"body-parser": "^2.2.1"
},
"dependencies": {
- "@aws-lambda-powertools/logger": "^2.30.2",
- "@aws-lambda-powertools/metrics": "^2.30.2",
- "@aws-lambda-powertools/tracer": "^2.30.2",
+ "@aws-lambda-powertools/logger": "^2.31.0",
+ "@aws-lambda-powertools/metrics": "^2.31.0",
+ "@aws-lambda-powertools/tracer": "^2.31.0",
"aws-lambda": "^1.0.7"
},
"nx": {
diff --git a/lambdas/yarn.lock b/lambdas/yarn.lock
index f73cddaca3..e2d3da5142 100644
--- a/lambdas/yarn.lock
+++ b/lambdas/yarn.lock
@@ -118,9 +118,9 @@ __metadata:
version: 0.0.0-use.local
resolution: "@aws-github-runner/aws-powertools-util@workspace:libs/aws-powertools-util"
dependencies:
- "@aws-lambda-powertools/logger": "npm:^2.30.2"
- "@aws-lambda-powertools/metrics": "npm:^2.30.2"
- "@aws-lambda-powertools/tracer": "npm:^2.30.2"
+ "@aws-lambda-powertools/logger": "npm:^2.31.0"
+ "@aws-lambda-powertools/metrics": "npm:^2.31.0"
+ "@aws-lambda-powertools/tracer": "npm:^2.31.0"
"@types/aws-lambda": "npm:^8.10.159"
"@types/node": "npm:^22.19.3"
aws-lambda: "npm:^1.0.7"
@@ -148,7 +148,7 @@ __metadata:
dependencies:
"@aws-github-runner/aws-powertools-util": "npm:*"
"@aws-github-runner/aws-ssm-util": "npm:*"
- "@aws-lambda-powertools/parameters": "npm:^2.30.2"
+ "@aws-lambda-powertools/parameters": "npm:^2.31.0"
"@aws-sdk/client-ec2": "npm:^3.984.0"
"@aws-sdk/client-sqs": "npm:^3.984.0"
"@aws-sdk/types": "npm:^3.973.1"
@@ -233,53 +233,53 @@ __metadata:
languageName: unknown
linkType: soft
-"@aws-lambda-powertools/commons@npm:2.30.2":
- version: 2.30.2
- resolution: "@aws-lambda-powertools/commons@npm:2.30.2"
+"@aws-lambda-powertools/commons@npm:2.31.0":
+ version: 2.31.0
+ resolution: "@aws-lambda-powertools/commons@npm:2.31.0"
dependencies:
"@aws/lambda-invoke-store": "npm:0.2.3"
- checksum: 10c0/4147877b7f3621ff0c45d99a19a3a6bdf295ff4ff69b6c3bc49c9cb15b0f91d12650fc90377a968c659c9e1a5b3b059ef85d1e4c76b7ae36fb6335b66ed4b7d1
+ checksum: 10c0/0bd9790d674d72c4290424e0f8b05af22595d295a79822ef816e3d72d35c28ca453a0fb45549be7cd6f58fbf4e015ccfdc067fa758d7837753e3a852e5f8dfac
languageName: node
linkType: hard
-"@aws-lambda-powertools/logger@npm:^2.30.2":
- version: 2.30.2
- resolution: "@aws-lambda-powertools/logger@npm:2.30.2"
+"@aws-lambda-powertools/logger@npm:^2.31.0":
+ version: 2.31.0
+ resolution: "@aws-lambda-powertools/logger@npm:2.31.0"
dependencies:
- "@aws-lambda-powertools/commons": "npm:2.30.2"
+ "@aws-lambda-powertools/commons": "npm:2.31.0"
"@aws/lambda-invoke-store": "npm:0.2.3"
peerDependencies:
- "@aws-lambda-powertools/jmespath": 2.30.2
+ "@aws-lambda-powertools/jmespath": 2.31.0
"@middy/core": 4.x || 5.x || 6.x || 7.x
peerDependenciesMeta:
"@aws-lambda-powertools/jmespath":
optional: true
"@middy/core":
optional: true
- checksum: 10c0/dad8ec43aa3e6d28a4ffb59f9e90c88a72b576bbb04c518645287b76ce6d2a3acb916ae23418ba4da52baf67ce8a82436848a9f443a371a1ab0a3636e1fae02f
+ checksum: 10c0/944e5efc543ccc2855762305c779e45d94316b245f4e8eb29d66db7f3395f79dc9bf6fa6c2e64b48a6270d97cb4f153fe05a6df39680f6d6884a5eef1a0faade
languageName: node
linkType: hard
-"@aws-lambda-powertools/metrics@npm:^2.30.2":
- version: 2.30.2
- resolution: "@aws-lambda-powertools/metrics@npm:2.30.2"
+"@aws-lambda-powertools/metrics@npm:^2.31.0":
+ version: 2.31.0
+ resolution: "@aws-lambda-powertools/metrics@npm:2.31.0"
dependencies:
- "@aws-lambda-powertools/commons": "npm:2.30.2"
+ "@aws-lambda-powertools/commons": "npm:2.31.0"
"@aws/lambda-invoke-store": "npm:0.2.3"
peerDependencies:
"@middy/core": 4.x || 5.x || 6.x || 7.x
peerDependenciesMeta:
"@middy/core":
optional: true
- checksum: 10c0/7d1e16a081d95c451dbb85d04c95119293adead4bf96948b77fe99757bc70d7dd38e5fe1b34f8cad5ceb58b11cd4ffe7b14bb1dd84b7fcb5997f099c344694bd
+ checksum: 10c0/95a38f52518dba640875ff29697c930f5bcd8d997df95c424dc4af888459342a4acba766257f25eac7abc647292d900599bbacbadedb5a5aef99b6ced7326bdd
languageName: node
linkType: hard
-"@aws-lambda-powertools/parameters@npm:^2.30.2":
- version: 2.30.2
- resolution: "@aws-lambda-powertools/parameters@npm:2.30.2"
+"@aws-lambda-powertools/parameters@npm:^2.31.0":
+ version: 2.31.0
+ resolution: "@aws-lambda-powertools/parameters@npm:2.31.0"
dependencies:
- "@aws-lambda-powertools/commons": "npm:2.30.2"
+ "@aws-lambda-powertools/commons": "npm:2.31.0"
peerDependencies:
"@aws-sdk/client-appconfigdata": ">=3.x"
"@aws-sdk/client-dynamodb": ">=3.x"
@@ -300,22 +300,22 @@ __metadata:
optional: true
"@middy/core":
optional: true
- checksum: 10c0/6e7265c823ddec1af031cc79b2d1b7d2aaa47a0e91f8c94b50c8c11595703f6d2037acb6f8ceb55f2c74bb54f7acef98f5db12c6d3ee9a032619006d843c3057
+ checksum: 10c0/3486dabdc302c6361a02def1e0a0052a7dce829b0673a4731ea13ff73465df5d2f50cf2b0f6d3ffcbd0ca806cd46bf84281a78b623cc3420dc4ef6f504a281f8
languageName: node
linkType: hard
-"@aws-lambda-powertools/tracer@npm:^2.30.2":
- version: 2.30.2
- resolution: "@aws-lambda-powertools/tracer@npm:2.30.2"
+"@aws-lambda-powertools/tracer@npm:^2.31.0":
+ version: 2.31.0
+ resolution: "@aws-lambda-powertools/tracer@npm:2.31.0"
dependencies:
- "@aws-lambda-powertools/commons": "npm:2.30.2"
+ "@aws-lambda-powertools/commons": "npm:2.31.0"
aws-xray-sdk-core: "npm:^3.12.0"
peerDependencies:
"@middy/core": 4.x || 5.x || 6.x || 7.x
peerDependenciesMeta:
"@middy/core":
optional: true
- checksum: 10c0/6a0deca827d81de9fbd16621a9b3d04c75e16b03b97ff01f4c60c9d12839e0d40d5719e675bf993e94baea10453d603c8cfe66f05a98df9a78c96a652b4b5f07
+ checksum: 10c0/c17b43c193af263b49852e14eeb2d8aab55ca2351d52a6fa4844dbc74ec5b0a9299442a9b82cf682e82fd99ef49146321f18d7cb47797007d548b78776d9b630
languageName: node
linkType: hard
From 0882f611b7caa1ee0e27ecb1dd784eb465a5e361 Mon Sep 17 00:00:00 2001
From: Brend Smits
Date: Mon, 9 Mar 2026 22:10:39 +0100
Subject: [PATCH 13/22] chore: add pull request template for better
contribution guidelines (#5057)
This PR adds a small pull request template that should make it easier
for maintainers to test new changes as they come in.
---
.github/pull_request_template.md | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
create mode 100644 .github/pull_request_template.md
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 0000000000..c53ac1dd80
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,19 @@
+## Description
+
+
+
+## Test Plan
+
+
+
+## Related Issues
+
+
From e3288541b25f64486ca78e595beaede216947c06 Mon Sep 17 00:00:00 2001
From: "runners-releaser[bot]"
<194412594+runners-releaser[bot]@users.noreply.github.com>
Date: Mon, 9 Mar 2026 22:16:57 +0100
Subject: [PATCH 14/22] chore(main): release 7.4.1 (#5033)
:robot: I have created a release *beep* *boop*
---
##
[7.4.1](https://github.com/github-aws-runners/terraform-aws-github-runner/compare/v7.4.0...v7.4.1)
(2026-03-09)
### Bug Fixes
* gracefully handle JIT config failures and terminate unconfigured
instance
([#4990](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/4990))
([c171550](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/c17155028fb685fc3afdfe677366f20a64e7c55d))
* **install-runner.sh:** support Debian
([#5027](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5027))
([7755b7f](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/7755b7f05dff5c9136d4d33cd977ebe2f4e6191c))
* **lambda:** add jti claim to GitHub App JWTs to prevent concurrent
collisions
([#5056](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5056))
([07bd193](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/07bd193c08b40ff47f8bb047d3fe06d0225266f2)),
closes
[#5025](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5025)
* **lambda:** bump @octokit/auth-app from 8.1.2 to 8.2.0 in /lambdas in
the octokit group
([#5035](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5035))
([1c8083e](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/1c8083eee0844d53c17836558811262c956f921d))
* **lambda:** bump axios from 1.13.2 to 1.13.5 in /lambdas
([#5028](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5028))
([0335e3a](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/0335e3aa1c087b5a24d22cf0d6144688be85147f))
* **lambda:** bump qs from 6.14.1 to 6.14.2 in /lambdas
([#5032](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5032))
([6dc97d5](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/6dc97d55d7b01c7c197573843b298236f891cda8))
* **lambda:** bump rollup from 4.46.2 to 4.59.0 in /lambdas
([#5052](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5052))
([1e798b1](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/1e798b1076be65340ad1e6e711a1ee27d26fe660))
* **lambda:** bump the aws group in /lambdas with 7 updates
([#5021](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5021))
([c3c158d](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/c3c158de3955693c82a737f88e7066f6304a7298))
* **lambda:** bump the aws-powertools group in /lambdas with 4 updates
([#5022](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5022))
([e8369cf](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/e8369cf5b660c344c7bb1e23729236c2779725d2))
---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).
Co-authored-by: runners-releaser[bot] <194412594+runners-releaser[bot]@users.noreply.github.com>
---
CHANGELOG.md | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index fd1faf3481..d0257eedd2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,20 @@
# Changelog
+## [7.4.1](https://github.com/github-aws-runners/terraform-aws-github-runner/compare/v7.4.0...v7.4.1) (2026-03-09)
+
+
+### Bug Fixes
+
+* gracefully handle JIT config failures and terminate unconfigured instance ([#4990](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/4990)) ([c171550](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/c17155028fb685fc3afdfe677366f20a64e7c55d))
+* **install-runner.sh:** support Debian ([#5027](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5027)) ([7755b7f](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/7755b7f05dff5c9136d4d33cd977ebe2f4e6191c))
+* **lambda:** add jti claim to GitHub App JWTs to prevent concurrent collisions ([#5056](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5056)) ([07bd193](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/07bd193c08b40ff47f8bb047d3fe06d0225266f2)), closes [#5025](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5025)
+* **lambda:** bump @octokit/auth-app from 8.1.2 to 8.2.0 in /lambdas in the octokit group ([#5035](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5035)) ([1c8083e](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/1c8083eee0844d53c17836558811262c956f921d))
+* **lambda:** bump axios from 1.13.2 to 1.13.5 in /lambdas ([#5028](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5028)) ([0335e3a](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/0335e3aa1c087b5a24d22cf0d6144688be85147f))
+* **lambda:** bump qs from 6.14.1 to 6.14.2 in /lambdas ([#5032](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5032)) ([6dc97d5](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/6dc97d55d7b01c7c197573843b298236f891cda8))
+* **lambda:** bump rollup from 4.46.2 to 4.59.0 in /lambdas ([#5052](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5052)) ([1e798b1](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/1e798b1076be65340ad1e6e711a1ee27d26fe660))
+* **lambda:** bump the aws group in /lambdas with 7 updates ([#5021](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5021)) ([c3c158d](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/c3c158de3955693c82a737f88e7066f6304a7298))
+* **lambda:** bump the aws-powertools group in /lambdas with 4 updates ([#5022](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5022)) ([e8369cf](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/e8369cf5b660c344c7bb1e23729236c2779725d2))
+
## [7.4.0](https://github.com/github-aws-runners/terraform-aws-github-runner/compare/v7.3.0...v7.4.0) (2026-02-04)
From 866eaf6a5f9851930de2113ac74ab4c99b5528b5 Mon Sep 17 00:00:00 2001
From: Ederson Brilhante
Date: Tue, 10 Mar 2026 18:33:19 +0100
Subject: [PATCH 15/22] refactor(webhook): add persistent keys in logs (#5030)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
## Summary
Update `publishOnEventBridge` to use the existing `readEvent` helper
instead of directly reading the `x-github-event` header and calling
`checkEventIsSupported`.
Only `eventType` is destructured from `readEvent`, since the parsed
event object isn’t needed.
## Why
This makes the EventBridge path consistent with `publishForRunners`,
ensuring persistent logging fields (repository, action, workflow job
name, status, etc.) are added to the logger in both code paths.
## Impact
* No functional changes
* Consistent logging behavior
* Removes duplicate event parsing logic
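The consolidated flow can be sketched as follows. This is a simplified illustration, not the webhook source itself: the minimal `Headers` type and the body of `checkEventIsSupported` are stand-ins, but the call shape matches the diff below.

```typescript
// Illustrative sketch: both webhook entry points route through one
// readEvent helper, which validates the event type and appends the
// persistent logging fields in a single place.
type Headers = Record<string, string | undefined>;

function checkEventIsSupported(eventType: string, allowedEvents: string[]): void {
  // Simplified check; the real implementation lives in webhook/index.ts.
  if (!allowedEvents.includes(eventType)) {
    throw new Error(`Unsupported event type: ${eventType}`);
  }
}

function readEvent(
  headers: Headers,
  body: string,
  allowedEvents: string[],
): { event: { action: string }; eventType: string } {
  const eventType = headers['x-github-event'] as string;
  checkEventIsSupported(eventType, allowedEvents);
  // In the real helper, persistent logger keys are appended here,
  // so both code paths get identical logging context.
  return { event: JSON.parse(body), eventType };
}

// publishForRunners   -> readEvent(headers, body, ['workflow_job'])
// publishOnEventBridge -> readEvent(headers, body, config.allowedEvents)
const { eventType } = readEvent(
  { 'x-github-event': 'workflow_job' },
  '{"action":"queued"}',
  ['workflow_job'],
);
console.log(eventType);
```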
---
lambdas/functions/webhook/src/webhook/index.ts | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/lambdas/functions/webhook/src/webhook/index.ts b/lambdas/functions/webhook/src/webhook/index.ts
index c62a8d4190..343a4f5b41 100644
--- a/lambdas/functions/webhook/src/webhook/index.ts
+++ b/lambdas/functions/webhook/src/webhook/index.ts
@@ -21,7 +21,7 @@ export async function publishForRunners(
const checkBodySizeResult = checkBodySize(body, headers);
- const { event, eventType } = readEvent(headers, body);
+ const { event, eventType } = readEvent(headers, body, ['workflow_job']);
logger.info(`Github event ${event.action} accepted for ${event.repository.full_name}`);
if (checkBodySizeResult.sizeExceeded) {
// We only warn for large event, when moving the event bridge we can only can accept events up to 256KB
@@ -39,11 +39,10 @@ export async function publishOnEventBridge(
await verifySignature(headers, body, config.webhookSecret);
- const eventType = headers['x-github-event'] as string;
- checkEventIsSupported(eventType, config.allowedEvents);
-
const checkBodySizeResult = checkBodySize(body, headers);
+ const { eventType } = readEvent(headers, body, config.allowedEvents);
+
logger.info(
`Github event ${headers['x-github-event'] as string} accepted for ` +
`${headers['x-github-hook-installation-target-id'] as string}`,
@@ -127,9 +126,13 @@ function checkEventIsSupported(eventType: string, allowedEvents: string[]): void
}
}
-function readEvent(headers: IncomingHttpHeaders, body: string): { event: WorkflowJobEvent; eventType: string } {
+function readEvent(
+ headers: IncomingHttpHeaders,
+ body: string,
+ allowedEvents: string[],
+): { event: WorkflowJobEvent; eventType: string } {
const eventType = headers['x-github-event'] as string;
- checkEventIsSupported(eventType, ['workflow_job']);
+ checkEventIsSupported(eventType, allowedEvents);
const event = JSON.parse(body) as WorkflowJobEvent;
logger.appendPersistentKeys({
From 84381ae6a8b84358c49c39b0aabc164fb47fd1e8 Mon Sep 17 00:00:00 2001
From: Thomas Nemer <80506610+thomasnemer@users.noreply.github.com>
Date: Wed, 11 Mar 2026 15:53:44 +0100
Subject: [PATCH 16/22] feat(lambdas): add batch SSM parameter fetching to
reduce API calls (#5017)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
This PR reduces the number of AWS SSM API calls by doing the following:
Add `getParameters()` function to aws-ssm-util that fetches multiple SSM
parameters in a single API call with automatic chunking (max 10 per call
per AWS API limits).
Apply batch fetching to:
- auth.ts: fetch App ID and Private Key in one call (2 calls → 1)
- ConfigLoader.ts: fetch multiple matcher config paths in one call
- ami.ts: batch resolve SSM parameter values for AMI lookups
Also remove redundant appId SSM fetch in scale-up.ts that was only used
for logging.
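The chunking behaviour can be sketched as below. This is a standalone illustration of the batching strategy only; the `chunk` helper and constant name are hypothetical, not taken from the aws-ssm-util implementation, which additionally wires the batches into `GetParametersCommand` calls.

```typescript
// Sketch of the batching strategy: split requested parameter names into
// groups of at most 10, since the SSM GetParameters API accepts at most
// 10 names per call. Each group then maps to exactly one API call.
const SSM_GET_PARAMETERS_MAX = 10;

function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Example: 23 parameter names resolve in 3 API calls (10 + 10 + 3).
const names = Array.from({ length: 23 }, (_, i) => `/example/param-${i}`);
const batches = chunk(names, SSM_GET_PARAMETERS_MAX);
console.log(batches.length); // 3
```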
---------
Co-authored-by: Brend Smits
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---
.../functions/ami-housekeeper/src/ami.test.ts | 126 ++++--------------
lambdas/functions/ami-housekeeper/src/ami.ts | 76 +++++++----
.../control-plane/src/github/auth.test.ts | 93 +++++++++++--
.../control-plane/src/github/auth.ts | 29 +++-
.../src/scale-runners/scale-up.ts | 2 -
.../webhook/src/ConfigLoader.test.ts | 36 +++--
lambdas/functions/webhook/src/ConfigLoader.ts | 27 ++--
lambdas/libs/aws-ssm-util/src/index.test.ts | 90 ++++++++++++-
lambdas/libs/aws-ssm-util/src/index.ts | 57 +++++++-
.../runners/job-retry/policies/lambda.json | 3 +-
.../runners/policies/lambda-scale-down.json | 3 +-
modules/runners/policies/lambda-scale-up.json | 3 +-
.../runners/pool/policies/lambda-pool.json | 3 +-
modules/webhook/policies/lambda-ssm.json | 2 +-
14 files changed, 378 insertions(+), 172 deletions(-)
diff --git a/lambdas/functions/ami-housekeeper/src/ami.test.ts b/lambdas/functions/ami-housekeeper/src/ami.test.ts
index 281e97856b..ab3149e00c 100644
--- a/lambdas/functions/ami-housekeeper/src/ami.test.ts
+++ b/lambdas/functions/ami-housekeeper/src/ami.test.ts
@@ -7,12 +7,8 @@ import {
EC2Client,
Image,
} from '@aws-sdk/client-ec2';
-import {
- DescribeParametersCommand,
- DescribeParametersCommandOutput,
- GetParameterCommand,
- SSMClient,
-} from '@aws-sdk/client-ssm';
+import { DescribeParametersCommand, DescribeParametersCommandOutput, SSMClient } from '@aws-sdk/client-ssm';
+import { getParameters } from '@aws-github-runner/aws-ssm-util';
import { mockClient } from 'aws-sdk-client-mock';
import 'aws-sdk-client-mock-jest/vitest';
@@ -20,6 +16,8 @@ import { AmiCleanupOptions, amiCleanup, defaultAmiCleanupOptions } from './ami';
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { fail } from 'assert';
+vi.mock('@aws-github-runner/aws-ssm-util');
+
process.env.AWS_REGION = 'eu-east-1';
const deleteAmisOlderThenDays = 30;
const date31DaysAgo = new Date(new Date().setDate(new Date().getDate() - (deleteAmisOlderThenDays + 1)));
@@ -83,22 +81,12 @@ describe("delete AMI's", () => {
mockSSMClient.reset();
mockSSMClient.on(DescribeParametersCommand).resolves(ssmParameters);
- mockSSMClient.on(GetParameterCommand, { Name: 'ami-id/ami-ssm0001' }).resolves({
- Parameter: {
- Name: 'ami-id/ami-ssm0001',
- Type: 'String',
- Value: 'ami-ssm0001',
- Version: 1,
- },
- });
- mockSSMClient.on(GetParameterCommand, { Name: 'ami-id/ami-ssm0002' }).resolves({
- Parameter: {
- Name: 'ami-id/ami-ssm0002',
- Type: 'String',
- Value: 'ami-ssm0002',
- Version: 1,
- },
- });
+ vi.mocked(getParameters).mockResolvedValue(
+ new Map([
+ ['ami-id/ami-ssm0001', 'ami-ssm0001'],
+ ['ami-id/ami-ssm0002', 'ami-ssm0002'],
+ ]),
+ );
mockEC2Client.on(DescribeLaunchTemplatesCommand).resolves({
LaunchTemplates: [
@@ -143,13 +131,7 @@ describe("delete AMI's", () => {
expect(mockEC2Client).toHaveReceivedCommand(DescribeLaunchTemplatesCommand);
expect(mockEC2Client).toHaveReceivedCommand(DescribeLaunchTemplateVersionsCommand);
expect(mockSSMClient).toHaveReceivedCommand(DescribeParametersCommand);
- expect(mockSSMClient).toHaveReceivedCommandTimes(GetParameterCommand, 2);
- expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, {
- Name: 'ami-id/ami-ssm0001',
- });
- expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, {
- Name: 'ami-id/ami-ssm0002',
- });
+ expect(getParameters).toHaveBeenCalledWith(['ami-id/ami-ssm0001', 'ami-id/ami-ssm0002']);
});
it('should NOT delete instances in use.', async () => {
@@ -485,14 +467,7 @@ describe("delete AMI's", () => {
],
});
- mockSSMClient.on(GetParameterCommand, { Name: '/github-runner/config/ami_id' }).resolves({
- Parameter: {
- Name: '/github-runner/config/ami_id',
- Type: 'String',
- Value: 'ami-underscore0001',
- Version: 1,
- },
- });
+ vi.mocked(getParameters).mockResolvedValue(new Map([['/github-runner/config/ami_id', 'ami-underscore0001']]));
await amiCleanup({
minimumDaysOld: 0,
@@ -501,9 +476,7 @@ describe("delete AMI's", () => {
// AMI should not be deleted because it's referenced in SSM
expect(mockEC2Client).not.toHaveReceivedCommand(DeregisterImageCommand);
- expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, {
- Name: '/github-runner/config/ami_id',
- });
+ expect(getParameters).toHaveBeenCalledWith(['/github-runner/config/ami_id']);
expect(mockSSMClient).not.toHaveReceivedCommand(DescribeParametersCommand);
});
@@ -518,14 +491,7 @@ describe("delete AMI's", () => {
],
});
- mockSSMClient.on(GetParameterCommand, { Name: '/github-runner/config/ami-id' }).resolves({
- Parameter: {
- Name: '/github-runner/config/ami-id',
- Type: 'String',
- Value: 'ami-hyphen0001',
- Version: 1,
- },
- });
+ vi.mocked(getParameters).mockResolvedValue(new Map([['/github-runner/config/ami-id', 'ami-hyphen0001']]));
await amiCleanup({
minimumDaysOld: 0,
@@ -534,9 +500,7 @@ describe("delete AMI's", () => {
// AMI should not be deleted because it's referenced in SSM
expect(mockEC2Client).not.toHaveReceivedCommand(DeregisterImageCommand);
- expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, {
- Name: '/github-runner/config/ami-id',
- });
+ expect(getParameters).toHaveBeenCalledWith(['/github-runner/config/ami-id']);
expect(mockSSMClient).not.toHaveReceivedCommand(DescribeParametersCommand);
});
@@ -561,14 +525,7 @@ describe("delete AMI's", () => {
],
});
- mockSSMClient.on(GetParameterCommand, { Name: '/some/path/ami-id' }).resolves({
- Parameter: {
- Name: '/some/path/ami-id',
- Type: 'String',
- Value: 'ami-wildcard0001',
- Version: 1,
- },
- });
+ vi.mocked(getParameters).mockResolvedValue(new Map([['/some/path/ami-id', 'ami-wildcard0001']]));
await amiCleanup({
minimumDaysOld: 0,
@@ -580,9 +537,7 @@ describe("delete AMI's", () => {
expect(mockSSMClient).toHaveReceivedCommandWith(DescribeParametersCommand, {
ParameterFilters: [{ Key: 'Name', Option: 'Contains', Values: ['ami-id'] }],
});
- expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, {
- Name: '/some/path/ami-id',
- });
+ expect(getParameters).toHaveBeenCalledWith(['/some/path/ami-id']);
});
it('handles wildcard SSM parameter patterns (*ami_id)', async () => {
@@ -606,14 +561,9 @@ describe("delete AMI's", () => {
],
});
- mockSSMClient.on(GetParameterCommand, { Name: '/github-runner/config/ami_id' }).resolves({
- Parameter: {
- Name: '/github-runner/config/ami_id',
- Type: 'String',
- Value: 'ami-wildcard-underscore0001',
- Version: 1,
- },
- });
+ vi.mocked(getParameters).mockResolvedValue(
+ new Map([['/github-runner/config/ami_id', 'ami-wildcard-underscore0001']]),
+ );
await amiCleanup({
minimumDaysOld: 0,
@@ -625,9 +575,7 @@ describe("delete AMI's", () => {
expect(mockSSMClient).toHaveReceivedCommandWith(DescribeParametersCommand, {
ParameterFilters: [{ Key: 'Name', Option: 'Contains', Values: ['ami_id'] }],
});
- expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, {
- Name: '/github-runner/config/ami_id',
- });
+ expect(getParameters).toHaveBeenCalledWith(['/github-runner/config/ami_id']);
});
it('handles mixed explicit names and wildcard patterns', async () => {
@@ -649,14 +597,9 @@ describe("delete AMI's", () => {
],
});
- mockSSMClient.on(GetParameterCommand, { Name: '/explicit/ami_id' }).resolves({
- Parameter: {
- Name: '/explicit/ami_id',
- Type: 'String',
- Value: 'ami-explicit0001',
- Version: 1,
- },
- });
+ vi.mocked(getParameters)
+ .mockResolvedValueOnce(new Map([['/explicit/ami_id', 'ami-explicit0001']]))
+ .mockResolvedValueOnce(new Map([['/discovered/ami-id', 'ami-wildcard0001']]));
mockSSMClient.on(DescribeParametersCommand).resolves({
Parameters: [
@@ -668,15 +611,6 @@ describe("delete AMI's", () => {
],
});
- mockSSMClient.on(GetParameterCommand, { Name: '/discovered/ami-id' }).resolves({
- Parameter: {
- Name: '/discovered/ami-id',
- Type: 'String',
- Value: 'ami-wildcard0001',
- Version: 1,
- },
- });
-
await amiCleanup({
minimumDaysOld: 0,
ssmParameterNames: ['/explicit/ami_id', '*ami-id'],
@@ -688,15 +622,11 @@ describe("delete AMI's", () => {
ImageId: 'ami-unused0001',
});
- expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, {
- Name: '/explicit/ami_id',
- });
+ expect(getParameters).toHaveBeenCalledWith(['/explicit/ami_id']);
expect(mockSSMClient).toHaveReceivedCommandWith(DescribeParametersCommand, {
ParameterFilters: [{ Key: 'Name', Option: 'Contains', Values: ['ami-id'] }],
});
- expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, {
- Name: '/discovered/ami-id',
- });
+ expect(getParameters).toHaveBeenCalledWith(['/discovered/ami-id']);
});
it('handles SSM parameter fetch failures gracefully', async () => {
@@ -710,7 +640,7 @@ describe("delete AMI's", () => {
],
});
- mockSSMClient.on(GetParameterCommand, { Name: '/nonexistent/ami_id' }).rejects(new Error('ParameterNotFound'));
+ vi.mocked(getParameters).mockRejectedValue(new Error('ParameterNotFound'));
// Should not throw and should delete the AMI since SSM reference failed
await amiCleanup({
@@ -768,7 +698,7 @@ describe("delete AMI's", () => {
ImageId: 'ami-no-ssm0001',
});
expect(mockSSMClient).not.toHaveReceivedCommand(DescribeParametersCommand);
- expect(mockSSMClient).not.toHaveReceivedCommand(GetParameterCommand);
+ expect(getParameters).not.toHaveBeenCalled();
});
});
});
diff --git a/lambdas/functions/ami-housekeeper/src/ami.ts b/lambdas/functions/ami-housekeeper/src/ami.ts
index f61dea921c..4f0c63d045 100644
--- a/lambdas/functions/ami-housekeeper/src/ami.ts
+++ b/lambdas/functions/ami-housekeeper/src/ami.ts
@@ -8,9 +8,10 @@ import {
Filter,
Image,
} from '@aws-sdk/client-ec2';
-import { GetParameterCommand, SSMClient, DescribeParametersCommand } from '@aws-sdk/client-ssm';
+import { SSMClient, DescribeParametersCommand } from '@aws-sdk/client-ssm';
import { createChildLogger } from '@aws-github-runner/aws-powertools-util';
import { getTracedAWSV3Client } from '@aws-github-runner/aws-powertools-util';
+import { getParameters } from '@aws-github-runner/aws-ssm-util';
const logger = createChildLogger('ami');
@@ -184,22 +185,34 @@ async function deleteSnapshot(options: AmiCleanupOptions, amiDetails: Image, ec2
}
/**
- * Resolves the value of an SSM parameter by its name. Doesn't fail on errors,
- * but warns instead, as this process is best-effort.
+ * Resolves the values of multiple SSM parameters by their names.
+ * Delegates batching to the shared `getParameters` utility.
+ * Doesn't fail on errors, but warns instead, as this process is best-effort.
*
- * @param name - The SSM parameter name to resolve
- * @param ssmClient - Configured SSM client for making API calls
- * @returns The parameter value if successful, undefined if parameter doesn't exist or access fails
+ * @param names - Array of SSM parameter names to resolve
+ * @returns Array of parameter values in the same order as input (undefined for missing/failed parameters)
*/
-async function resolveSsmParameterValue(name: string, ssmClient: SSMClient): Promise {
+async function resolveSsmParameterValues(names: string[]): Promise<(string | undefined)[]> {
+ if (names.length === 0) {
+ return [];
+ }
+
try {
- const { Parameter: parameter } = await ssmClient.send(new GetParameterCommand({ Name: name }));
+ const parameterMap = await getParameters(names);
- return parameter?.Value;
- } catch (error: unknown) {
- logger.warn(`Failed to resolve image id from SSM parameter ${name}`, { error });
+ // Log warnings for parameters that couldn't be resolved
+ for (const name of names) {
+ if (!parameterMap.has(name)) {
+ logger.warn(`Failed to resolve image id from SSM parameter ${name}: Parameter not found or access denied`);
+ }
+ }
- return undefined;
+ // Return values in the same order as input names
+ return names.map((name) => parameterMap.get(name));
+ } catch (error: unknown) {
+ logger.warn(`Failed to resolve image ids from SSM parameters ${names.join(', ')}`, { error });
+ // Mark all parameters as undefined on failure
+ return names.map(() => undefined);
}
}
@@ -273,11 +286,12 @@ async function getAmisReferedInSSM(options: AmiCleanupOptions): Promise<(string
const explicitNames = options.ssmParameterNames.filter((n) => !n.startsWith('*'));
const wildcardPatterns = options.ssmParameterNames.filter((n) => n.startsWith('*'));
- const explicitValuesPromise = Promise.all(explicitNames.map((name) => resolveSsmParameterValue(name, ssmClient)));
+ // Batch fetch explicit parameter values in chunks of 10 (AWS API limit)
+ const explicitValuesPromise = resolveSsmParameterValues(explicitNames);
// Handle wildcard patterns by first discovering matching parameters, then
// fetching their values
- let wildcardValues: Promise<(string | undefined)[]> = Promise.resolve([]);
+ let wildcardValuesPromise: Promise<(string | undefined)[]> = Promise.resolve([]);
if (wildcardPatterns.length > 0) {
// Convert wildcard patterns to SSM ParameterFilters using Contains logic
// Example: "*ami-id" becomes a filter for parameters containing "ami-id"
@@ -287,24 +301,30 @@ async function getAmisReferedInSSM(options: AmiCleanupOptions): Promise<(string
Values: [p.replace(/^\*/g, '')],
}));
- try {
- // Discover parameters matching the wildcard patterns
- logger.debug('Describing SSM parameter', { filters });
- const ssmParameters = await ssmClient.send(new DescribeParametersCommand({ ParameterFilters: filters }));
-
- // Fetch the actual values of discovered parameters
- wildcardValues = Promise.all(
- (ssmParameters.Parameters ?? []).map((param) => resolveSsmParameterValue(param.Name!, ssmClient)),
- );
- } catch (e) {
- logger.warn('Failed to describe SSM parameters using wildcard patterns', { error: e });
- }
+ wildcardValuesPromise = (async () => {
+ try {
+ // Discover parameters matching the wildcard patterns
+ logger.debug('Describing SSM parameter', { filters });
+ const ssmParameters = await ssmClient.send(new DescribeParametersCommand({ ParameterFilters: filters }));
+
+ // Batch fetch the actual values of discovered parameters
+ const discoveredNames = (ssmParameters.Parameters ?? [])
+ .map((param) => param.Name)
+ .filter((name): name is string => name !== undefined);
+
+ return resolveSsmParameterValues(discoveredNames);
+ } catch (e) {
+ logger.warn('Failed to describe SSM parameters using wildcard patterns', { error: e });
+ return [];
+ }
+ })();
}
// Combine results from both explicit and wildcard parameter resolution
- const values = await Promise.all([explicitValuesPromise, wildcardValues]);
+ const [explicitValues, wildcardValues] = await Promise.all([explicitValuesPromise, wildcardValuesPromise]);
+ const values = [...explicitValues, ...wildcardValues];
logger.debug('Resolved SSM parameter values', { values });
- return values.flat();
+ return values;
}
export { amiCleanup, getAmisNotInUse };
diff --git a/lambdas/functions/control-plane/src/github/auth.test.ts b/lambdas/functions/control-plane/src/github/auth.test.ts
index 55026fa322..274496ea20 100644
--- a/lambdas/functions/control-plane/src/github/auth.test.ts
+++ b/lambdas/functions/control-plane/src/github/auth.test.ts
@@ -2,7 +2,7 @@ import { createAppAuth } from '@octokit/auth-app';
import { StrategyOptions } from '@octokit/auth-app/dist-types/types';
import { request } from '@octokit/request';
import { RequestInterface, RequestParameters } from '@octokit/types';
-import { getParameter } from '@aws-github-runner/aws-ssm-util';
+import { getParameters } from '@aws-github-runner/aws-ssm-util';
import { generateKeyPairSync } from 'node:crypto';
import * as nock from 'nock';
@@ -27,7 +27,7 @@ const GITHUB_APP_ID = '1';
const PARAMETER_GITHUB_APP_ID_NAME = `/actions-runner/${ENVIRONMENT}/github_app_id`;
const PARAMETER_GITHUB_APP_KEY_BASE64_NAME = `/actions-runner/${ENVIRONMENT}/github_app_key_base64`;
-const mockedGet = vi.mocked(getParameter);
+const mockedGetParameters = vi.mocked(getParameters);
beforeEach(() => {
vi.resetModules();
@@ -78,9 +78,32 @@ describe('Test createGithubAppAuth', () => {
process.env.ENVIRONMENT = ENVIRONMENT;
});
+ it('Throws early when PARAMETER_GITHUB_APP_ID_NAME is not set', async () => {
+ delete process.env.PARAMETER_GITHUB_APP_ID_NAME;
+
+ await expect(createGithubAppAuth(installationId)).rejects.toThrow(
+ 'Environment variable PARAMETER_GITHUB_APP_ID_NAME is not set',
+ );
+ expect(mockedGetParameters).not.toHaveBeenCalled();
+ });
+
+ it('Throws early when PARAMETER_GITHUB_APP_KEY_BASE64_NAME is not set', async () => {
+ delete process.env.PARAMETER_GITHUB_APP_KEY_BASE64_NAME;
+
+ await expect(createGithubAppAuth(installationId)).rejects.toThrow(
+ 'Environment variable PARAMETER_GITHUB_APP_KEY_BASE64_NAME is not set',
+ );
+ expect(mockedGetParameters).not.toHaveBeenCalled();
+ });
+
it('Creates auth object with createJwt callback including jti claim', async () => {
// Arrange
- mockedGet.mockResolvedValueOnce(GITHUB_APP_ID).mockResolvedValueOnce(b64);
+ mockedGetParameters.mockResolvedValueOnce(
+ new Map([
+ [PARAMETER_GITHUB_APP_ID_NAME, GITHUB_APP_ID],
+ [PARAMETER_GITHUB_APP_KEY_BASE64_NAME, b64],
+ ]),
+ );
const mockedAuth = vi.fn();
mockedAuth.mockResolvedValue({ token });
@@ -108,7 +131,12 @@ describe('Test createGithubAppAuth', () => {
});
const b64Key = Buffer.from(privateKey as string).toString('base64');
- mockedGet.mockResolvedValueOnce(GITHUB_APP_ID).mockResolvedValueOnce(b64Key);
+ mockedGetParameters.mockResolvedValueOnce(
+ new Map([
+ [PARAMETER_GITHUB_APP_ID_NAME, GITHUB_APP_ID],
+ [PARAMETER_GITHUB_APP_KEY_BASE64_NAME, b64Key],
+ ]),
+ );
let capturedCreateJwt: (appId: string | number, timeDifference?: number) => Promise<{ jwt: string }>;
mockedCreatAppAuth.mockImplementation((opts: StrategyOptions) => {
@@ -137,9 +165,41 @@ describe('Test createGithubAppAuth', () => {
expect(payload).toHaveProperty('iss');
});
+ it('Creates auth object with line breaks in SSH key.', async () => {
+ // Arrange
+ const b64PrivateKeyWithLineBreaks = Buffer.from(decryptedValue + '\n' + decryptedValue, 'binary').toString(
+ 'base64',
+ );
+ mockedGetParameters.mockResolvedValueOnce(
+ new Map([
+ [PARAMETER_GITHUB_APP_ID_NAME, GITHUB_APP_ID],
+ [PARAMETER_GITHUB_APP_KEY_BASE64_NAME, b64PrivateKeyWithLineBreaks],
+ ]),
+ );
+
+ const mockedAuth = vi.fn();
+ mockedAuth.mockResolvedValue({ token });
+ const mockWithHook = Object.assign(mockedAuth, { hook: vi.fn() });
+ mockedCreatAppAuth.mockReturnValue(mockWithHook);
+
+ // Act
+ const result = await createGithubAppAuth(installationId);
+
+ // Assert
+ expect(getParameters).toBeCalledWith([PARAMETER_GITHUB_APP_ID_NAME, PARAMETER_GITHUB_APP_KEY_BASE64_NAME]);
+ expect(mockedCreatAppAuth).toBeCalledTimes(1);
+ expect(mockedAuth).toBeCalledWith({ type: authType });
+ expect(result.token).toBe(token);
+ });
+
it('Creates auth object for public GitHub', async () => {
// Arrange
- mockedGet.mockResolvedValueOnce(GITHUB_APP_ID).mockResolvedValueOnce(b64);
+ mockedGetParameters.mockResolvedValueOnce(
+ new Map([
+ [PARAMETER_GITHUB_APP_ID_NAME, GITHUB_APP_ID],
+ [PARAMETER_GITHUB_APP_KEY_BASE64_NAME, b64],
+ ]),
+ );
const mockedAuth = vi.fn();
mockedAuth.mockResolvedValue({ token });
@@ -150,8 +210,7 @@ describe('Test createGithubAppAuth', () => {
const result = await createGithubAppAuth(installationId);
// Assert
- expect(getParameter).toBeCalledWith(PARAMETER_GITHUB_APP_ID_NAME);
- expect(getParameter).toBeCalledWith(PARAMETER_GITHUB_APP_KEY_BASE64_NAME);
+ expect(getParameters).toBeCalledWith([PARAMETER_GITHUB_APP_ID_NAME, PARAMETER_GITHUB_APP_KEY_BASE64_NAME]);
expect(mockedCreatAppAuth).toBeCalledTimes(1);
const callArgs = mockedCreatAppAuth.mock.calls[0][0] as Record;
@@ -171,7 +230,12 @@ describe('Test createGithubAppAuth', () => {
() => mockedRequestInterface as RequestInterface,
);
- mockedGet.mockResolvedValueOnce(GITHUB_APP_ID).mockResolvedValueOnce(b64);
+ mockedGetParameters.mockResolvedValueOnce(
+ new Map([
+ [PARAMETER_GITHUB_APP_ID_NAME, GITHUB_APP_ID],
+ [PARAMETER_GITHUB_APP_KEY_BASE64_NAME, b64],
+ ]),
+ );
const mockedAuth = vi.fn();
mockedAuth.mockResolvedValue({ token });
// eslint-disable-next-line @typescript-eslint/no-unused-vars
@@ -183,8 +247,7 @@ describe('Test createGithubAppAuth', () => {
const result = await createGithubAppAuth(installationId, githubServerUrl);
// Assert
- expect(getParameter).toBeCalledWith(PARAMETER_GITHUB_APP_ID_NAME);
- expect(getParameter).toBeCalledWith(PARAMETER_GITHUB_APP_KEY_BASE64_NAME);
+ expect(getParameters).toBeCalledWith([PARAMETER_GITHUB_APP_ID_NAME, PARAMETER_GITHUB_APP_KEY_BASE64_NAME]);
expect(mockedCreatAppAuth).toBeCalledTimes(1);
const callArgs = mockedCreatAppAuth.mock.calls[0][0] as Record;
@@ -207,7 +270,12 @@ describe('Test createGithubAppAuth', () => {
const installationId = undefined;
- mockedGet.mockResolvedValueOnce(GITHUB_APP_ID).mockResolvedValueOnce(b64);
+ mockedGetParameters.mockResolvedValueOnce(
+ new Map([
+ [PARAMETER_GITHUB_APP_ID_NAME, GITHUB_APP_ID],
+ [PARAMETER_GITHUB_APP_KEY_BASE64_NAME, b64],
+ ]),
+ );
const mockedAuth = vi.fn();
mockedAuth.mockResolvedValue({ token });
const mockWithHook = Object.assign(mockedAuth, { hook: vi.fn() });
@@ -217,8 +285,7 @@ describe('Test createGithubAppAuth', () => {
const result = await createGithubAppAuth(installationId, githubServerUrl);
// Assert
- expect(getParameter).toBeCalledWith(PARAMETER_GITHUB_APP_ID_NAME);
- expect(getParameter).toBeCalledWith(PARAMETER_GITHUB_APP_KEY_BASE64_NAME);
+ expect(getParameters).toBeCalledWith([PARAMETER_GITHUB_APP_ID_NAME, PARAMETER_GITHUB_APP_KEY_BASE64_NAME]);
expect(mockedCreatAppAuth).toBeCalledTimes(1);
    const callArgs = mockedCreatAppAuth.mock.calls[0][0] as Record<string, unknown>;
diff --git a/lambdas/functions/control-plane/src/github/auth.ts b/lambdas/functions/control-plane/src/github/auth.ts
index 927765523b..9a572c48a8 100644
--- a/lambdas/functions/control-plane/src/github/auth.ts
+++ b/lambdas/functions/control-plane/src/github/auth.ts
@@ -22,7 +22,7 @@ import { Octokit } from '@octokit/rest';
import { retry } from '@octokit/plugin-retry';
import { throttling } from '@octokit/plugin-throttling';
import { createChildLogger } from '@aws-github-runner/aws-powertools-util';
-import { getParameter } from '@aws-github-runner/aws-ssm-util';
+import { getParameters } from '@aws-github-runner/aws-ssm-util';
import { EndpointDefaults } from '@octokit/types';
const logger = createChildLogger('gh-auth');
@@ -91,13 +91,32 @@ function signJwt(payload: Record<string, unknown>, privateKey: string): string {
}
async function createAuth(installationId: number | undefined, ghesApiUrl: string): Promise {
- const appId = parseInt(await getParameter(process.env.PARAMETER_GITHUB_APP_ID_NAME));
+ const appIdParamName = process.env.PARAMETER_GITHUB_APP_ID_NAME;
+ const appKeyParamName = process.env.PARAMETER_GITHUB_APP_KEY_BASE64_NAME;
+ if (!appIdParamName) {
+ throw new Error('Environment variable PARAMETER_GITHUB_APP_ID_NAME is not set');
+ }
+ if (!appKeyParamName) {
+ throw new Error('Environment variable PARAMETER_GITHUB_APP_KEY_BASE64_NAME is not set');
+ }
+
+ // Batch fetch both App ID and Private Key in a single SSM API call
+ const paramNames = [appIdParamName, appKeyParamName];
+ const params = await getParameters(paramNames);
+ const appIdValue = params.get(appIdParamName);
+ const privateKeyBase64 = params.get(appKeyParamName);
+ if (!appIdValue) {
+ throw new Error(`Parameter ${appIdParamName} not found`);
+ }
+ if (!privateKeyBase64) {
+ throw new Error(`Parameter ${appKeyParamName} not found`);
+ }
+
+ const appId = parseInt(appIdValue);
// replace literal \n characters with new lines to allow the key to be stored as a
// single line variable. This logic should match how the GitHub Terraform provider
// processes private keys to retain compatibility between the projects
- const privateKey = Buffer.from(await getParameter(process.env.PARAMETER_GITHUB_APP_KEY_BASE64_NAME), 'base64')
- .toString()
- .replace('/[\\n]/g', String.fromCharCode(10));
+  const privateKey = Buffer.from(privateKeyBase64, 'base64').toString().replace(/\\n/g, '\n');
// Use a custom createJwt callback to include a jti (JWT ID) claim in every token.
// Without this, concurrent Lambda invocations generating JWTs within the same second
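
The private-key handling above can be exercised in isolation. A minimal sketch, assuming hypothetical key material (not the module's real secrets): a key stored as a single base64-encoded line, with each newline escaped as the two characters `\n`, is decoded and the escapes turned back into real newlines, mirroring the intent of the comment in `auth.ts`.

```typescript
// Hypothetical key material: two PEM-style lines collapsed into one string,
// with the newline escaped as the two characters "\" and "n".
const singleLine = 'line-one\\nline-two';

// Stored form: base64 of the single-line value (as SSM would hold it).
const stored = Buffer.from(singleLine).toString('base64');

// Decode and restore real newlines.
const privateKey = Buffer.from(stored, 'base64').toString().replace(/\\n/g, '\n');

console.log(privateKey === 'line-one\nline-two'); // true
```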
diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
index 7f797422cc..759be95089 100644
--- a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
+++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
@@ -132,8 +132,6 @@ async function getGithubRunnerRegistrationToken(githubRunnerConfig: CreateGitHub
repo: githubRunnerConfig.runnerOwner.split('/')[1],
});
- const appId = parseInt(await getParameter(process.env.PARAMETER_GITHUB_APP_ID_NAME));
- logger.info('App id from SSM', { appId: appId });
return registrationToken.data.token;
}
diff --git a/lambdas/functions/webhook/src/ConfigLoader.test.ts b/lambdas/functions/webhook/src/ConfigLoader.test.ts
index 11383cc326..fadc3954a5 100644
--- a/lambdas/functions/webhook/src/ConfigLoader.test.ts
+++ b/lambdas/functions/webhook/src/ConfigLoader.test.ts
@@ -1,4 +1,4 @@
-import { getParameter } from '@aws-github-runner/aws-ssm-util';
+import { getParameter, getParameters } from '@aws-github-runner/aws-ssm-util';
import { ConfigWebhook, ConfigWebhookEventBridge, ConfigDispatcher } from './ConfigLoader';
import { logger } from '@aws-github-runner/aws-powertools-util';
@@ -183,9 +183,15 @@ describe('ConfigLoader Tests', () => {
{ id: '2', arn: 'arn:aws:sqs:queue2', matcherConfig: { labelMatchers: [['b']], exactMatch: true } },
];
+ // Mock getParameters for batch fetching multiple paths
+ vi.mocked(getParameters).mockResolvedValue(
+ new Map([
+ ['/path/to/matcher/config-1', partialMatcher1],
+ ['/path/to/matcher/config-2', partialMatcher2],
+ ]),
+ );
+
vi.mocked(getParameter).mockImplementation(async (paramPath: string) => {
- if (paramPath === '/path/to/matcher/config-1') return partialMatcher1;
- if (paramPath === '/path/to/matcher/config-2') return partialMatcher2;
if (paramPath === '/path/to/webhook/secret') return 'secret';
return '';
});
@@ -205,15 +211,21 @@ describe('ConfigLoader Tests', () => {
const partialMatcher2 =
',{"id":"2","arn":"arn:aws:sqs:queue2","matcherConfig":{"labelMatchers":[["b"]],"exactMatch":true}}';
+ // Mock getParameters for batch fetching - returns incomplete JSON that will fail to parse
+ vi.mocked(getParameters).mockResolvedValue(
+ new Map([
+ ['/path/to/matcher/config-1', partialMatcher1],
+ ['/path/to/matcher/config-2', partialMatcher2],
+ ]),
+ );
+
vi.mocked(getParameter).mockImplementation(async (paramPath: string) => {
- if (paramPath === '/path/to/matcher/config-1') return partialMatcher1;
- if (paramPath === '/path/to/matcher/config-2') return partialMatcher2;
if (paramPath === '/path/to/webhook/secret') return 'secret';
return '';
});
await expect(ConfigWebhook.load()).rejects.toThrow(
- "Failed to load config: Failed to parse combined matcher config: Expected ',' or ']' after array element in JSON at position 196",
+ "Failed to load config: Failed to load/parse combined matcher config: Expected ',' or ']' after array element in JSON at position 196", // eslint-disable-line max-len
);
});
});
@@ -291,11 +303,13 @@ describe('ConfigLoader Tests', () => {
{ id: '2', arn: 'arn:aws:sqs:queue2', matcherConfig: { labelMatchers: [['y']], exactMatch: true } },
];
- vi.mocked(getParameter).mockImplementation(async (paramPath: string) => {
- if (paramPath === '/path/to/matcher/config-1') return partial1;
- if (paramPath === '/path/to/matcher/config-2') return partial2;
- return '';
- });
+ // Mock getParameters for batch fetching multiple paths
+ vi.mocked(getParameters).mockResolvedValue(
+ new Map([
+ ['/path/to/matcher/config-1', partial1],
+ ['/path/to/matcher/config-2', partial2],
+ ]),
+ );
const config: ConfigDispatcher = await ConfigDispatcher.load();
diff --git a/lambdas/functions/webhook/src/ConfigLoader.ts b/lambdas/functions/webhook/src/ConfigLoader.ts
index 910fbfe7c0..df7b933779 100644
--- a/lambdas/functions/webhook/src/ConfigLoader.ts
+++ b/lambdas/functions/webhook/src/ConfigLoader.ts
@@ -1,4 +1,4 @@
-import { getParameter } from '@aws-github-runner/aws-ssm-util';
+import { getParameter, getParameters } from '@aws-github-runner/aws-ssm-util';
import { RunnerMatcherConfig } from './sqs';
import { logger } from '@aws-github-runner/aws-powertools-util';
@@ -101,16 +101,27 @@ abstract class MatcherAwareConfig extends BaseConfig {
.split(':')
.map((p) => p.trim())
.filter(Boolean);
- let combinedString = '';
- for (const path of paths) {
- await this.loadParameter(path, 'matcherConfig');
- combinedString += this.matcherConfig;
- }
+ // Batch fetch all matcher config paths in a single SSM API call
try {
- this.matcherConfig = JSON.parse(combinedString);
+ const params = await getParameters(paths);
+ let combinedString = '';
+ for (const path of paths) {
+ const value = params.get(path);
+ if (value) {
+ combinedString += value;
+ } else {
+ this.configLoadingErrors.push(
+ `Failed to load parameter for matcherConfig from path ${path}: Parameter not found`,
+ );
+ }
+ }
+
+ if (combinedString) {
+ this.matcherConfig = JSON.parse(combinedString);
+ }
} catch (error) {
- this.configLoadingErrors.push(`Failed to parse combined matcher config: ${(error as Error).message}`);
+ this.configLoadingErrors.push(`Failed to load/parse combined matcher config: ${(error as Error).message}`);
}
}
}
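
The reassembly logic above can be sketched standalone. This is a minimal sketch with hypothetical paths and chunk values (in the module the values come from SSM via `getParameters`): a JSON array split across two parameters is concatenated in path order and parsed once.

```typescript
// Hypothetical chunked matcher config: one JSON array split across two
// SSM parameters (values here are invented for illustration).
const params = new Map<string, string>([
  ['/path/to/matcher/config-1', '[{"id":"1","arn":"arn:aws:sqs:queue1"}'],
  ['/path/to/matcher/config-2', ',{"id":"2","arn":"arn:aws:sqs:queue2"}]'],
]);

const paths = ['/path/to/matcher/config-1', '/path/to/matcher/config-2'];
const errors: string[] = [];
let combined = '';

// Concatenate chunks in path order; record any path missing from the batch result.
for (const path of paths) {
  const value = params.get(path);
  if (value) {
    combined += value;
  } else {
    errors.push(`Failed to load parameter for matcherConfig from path ${path}: Parameter not found`);
  }
}

// Parse only if at least one chunk was found.
const matcherConfig = combined ? JSON.parse(combined) : [];
console.log(matcherConfig.length); // 2
```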
diff --git a/lambdas/libs/aws-ssm-util/src/index.test.ts b/lambdas/libs/aws-ssm-util/src/index.test.ts
index 52e4242fdb..51a9d65027 100644
--- a/lambdas/libs/aws-ssm-util/src/index.test.ts
+++ b/lambdas/libs/aws-ssm-util/src/index.test.ts
@@ -1,6 +1,7 @@
import {
GetParameterCommand,
GetParameterCommandOutput,
+ GetParametersCommand,
PutParameterCommand,
PutParameterCommandOutput,
SSMClient,
@@ -9,7 +10,7 @@ import 'aws-sdk-client-mock-jest/vitest';
import { mockClient } from 'aws-sdk-client-mock';
import nock from 'nock';
-import { getParameter, putParameter, SSM_ADVANCED_TIER_THRESHOLD } from '.';
+import { getParameter, getParameters, putParameter, SSM_ADVANCED_TIER_THRESHOLD } from '.';
import { describe, it, expect, beforeEach, vi } from 'vitest';
const mockSSMClient = mockClient(SSMClient);
@@ -166,3 +167,90 @@ describe('Test getParameter and putParameter', () => {
});
});
});
+
+describe('Test getParameters (batch)', () => {
+ beforeEach(() => {
+ mockSSMClient.reset();
+ });
+
+ it('returns multiple parameters in a single call', async () => {
+ mockSSMClient.on(GetParametersCommand).resolves({
+ Parameters: [
+ { Name: '/app/param1', Value: 'value1' },
+ { Name: '/app/param2', Value: 'value2' },
+ ],
+ });
+
+ const result = await getParameters(['/app/param1', '/app/param2']);
+
+ expect(result).toEqual(
+ new Map([
+ ['/app/param1', 'value1'],
+ ['/app/param2', 'value2'],
+ ]),
+ );
+ expect(mockSSMClient).toHaveReceivedCommandWith(GetParametersCommand, {
+ Names: ['/app/param1', '/app/param2'],
+ WithDecryption: true,
+ });
+ });
+
+ it('returns empty map for empty input', async () => {
+ const result = await getParameters([]);
+
+ expect(result).toEqual(new Map());
+ expect(mockSSMClient).not.toHaveReceivedCommand(GetParametersCommand);
+ });
+
+ it('chunks requests when more than 10 parameters', async () => {
+ const names = Array.from({ length: 12 }, (_, i) => `/app/param${i}`);
+
+ mockSSMClient
+ .on(GetParametersCommand, { Names: names.slice(0, 10), WithDecryption: true })
+ .resolves({
+ Parameters: names.slice(0, 10).map((name) => ({ Name: name, Value: `val-${name}` })),
+ })
+ .on(GetParametersCommand, { Names: names.slice(10), WithDecryption: true })
+ .resolves({
+ Parameters: names.slice(10).map((name) => ({ Name: name, Value: `val-${name}` })),
+ });
+
+ const result = await getParameters(names);
+
+ expect(result.size).toBe(12);
+ expect(mockSSMClient).toHaveReceivedCommandTimes(GetParametersCommand, 2);
+ for (const name of names) {
+ expect(result.get(name)).toBe(`val-${name}`);
+ }
+ });
+
+ it('omits parameters with missing Name or Value', async () => {
+ mockSSMClient.on(GetParametersCommand).resolves({
+ Parameters: [
+ { Name: '/app/good', Value: 'value' },
+ { Name: '/app/no-value', Value: undefined },
+ { Name: undefined, Value: 'orphan' },
+ ],
+ });
+
+ const result = await getParameters(['/app/good', '/app/no-value']);
+
+ expect(result).toEqual(new Map([['/app/good', 'value']]));
+ });
+
+ it('propagates errors from SSM API', async () => {
+ mockSSMClient.on(GetParametersCommand).rejects(new Error('AccessDenied'));
+
+ await expect(getParameters(['/app/param1'])).rejects.toThrow('AccessDenied');
+ });
+
+ it('handles response with empty Parameters array', async () => {
+ mockSSMClient.on(GetParametersCommand).resolves({
+ Parameters: [],
+ });
+
+ const result = await getParameters(['/app/missing']);
+
+ expect(result).toEqual(new Map());
+ });
+});
diff --git a/lambdas/libs/aws-ssm-util/src/index.ts b/lambdas/libs/aws-ssm-util/src/index.ts
index 0b4925c17d..9173cbb210 100644
--- a/lambdas/libs/aws-ssm-util/src/index.ts
+++ b/lambdas/libs/aws-ssm-util/src/index.ts
@@ -1,4 +1,4 @@
-import { PutParameterCommand, SSMClient, Tag } from '@aws-sdk/client-ssm';
+import { GetParametersCommand, PutParameterCommand, SSMClient, Tag } from '@aws-sdk/client-ssm';
import { getTracedAWSV3Client } from '@aws-github-runner/aws-powertools-util';
import { SSMProvider } from '@aws-lambda-powertools/parameters/ssm';
@@ -17,6 +17,61 @@ export async function getParameter(parameter_name: string): Promise<string> {
return result;
}
+/**
+ * Retrieves multiple parameters from AWS Systems Manager Parameter Store.
+ *
+ * This function uses the AWS SSM {@link GetParametersCommand} API to fetch the values
+ * for the provided parameter names. Requests are automatically chunked into batches
+ * of up to 10 names per call to comply with the AWS GetParameters API limit.
+ *
+ * Each successfully retrieved parameter is added to the returned {@link Map}, where:
+ * - The map key is the full parameter name as stored in Parameter Store.
+ * - The map value is the decrypted string value of the parameter.
+ *
+ * Parameter names that are not found in Parameter Store (or that cannot be returned
+ * by the API) are silently omitted from the resulting map. They will not appear as
+ * keys in the returned {@link Map}.
+ *
+ * @param parameter_names - An array of parameter names to retrieve from SSM Parameter Store.
+ * If the array is empty, an empty {@link Map} is returned without calling the AWS API.
+ *
+ * @returns A {@link Map} where each key is a parameter name and each value is the
+ * corresponding decrypted string value for that parameter. Only parameters that
+ * are successfully retrieved and have both a `Name` and a `Value` are included.
+ *
+ * @throws Error Propagates any error thrown by the underlying AWS SDK client,
+ * such as network errors, AWS service errors (e.g., access denied, throttling),
+ * or configuration issues (e.g., missing region or credentials).
+ */
+export async function getParameters(parameter_names: string[]): Promise<Map<string, string>> {
+ if (parameter_names.length === 0) {
+ return new Map();
+ }
+
+ const ssmClient = getTracedAWSV3Client(new SSMClient({ region: process.env.AWS_REGION }));
+ const result = new Map();
+
+ // AWS SSM GetParameters API has a limit of 10 parameters per call
+ const chunkSize = 10;
+ for (let i = 0; i < parameter_names.length; i += chunkSize) {
+ const chunk = parameter_names.slice(i, i + chunkSize);
+ const response = await ssmClient.send(
+ new GetParametersCommand({
+ Names: chunk,
+ WithDecryption: true,
+ }),
+ );
+
+ for (const param of response.Parameters ?? []) {
+ if (param.Name && param.Value) {
+ result.set(param.Name, param.Value);
+ }
+ }
+ }
+
+ return result;
+}
+
export const SSM_ADVANCED_TIER_THRESHOLD = 4000;
export async function putParameter(
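
The batching rule above can be checked in isolation. A standalone sketch of the 10-per-call chunking as a pure function, with hypothetical parameter names (the real implementation sends each chunk via `GetParametersCommand`):

```typescript
// Split a list of parameter names into batches of at most `size` entries,
// matching the AWS GetParameters limit of 10 names per request.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Hypothetical parameter names, 12 in total -> two batches (10 + 2).
const names = Array.from({ length: 12 }, (_, i) => `/app/param${i}`);
const batches = chunk(names, 10);

console.log(batches.map((b) => b.length)); // [ 10, 2 ]
```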
diff --git a/modules/runners/job-retry/policies/lambda.json b/modules/runners/job-retry/policies/lambda.json
index 591ec04790..f1c9efd569 100644
--- a/modules/runners/job-retry/policies/lambda.json
+++ b/modules/runners/job-retry/policies/lambda.json
@@ -4,7 +4,8 @@
{
"Effect": "Allow",
"Action": [
- "ssm:GetParameter"
+ "ssm:GetParameter",
+ "ssm:GetParameters"
],
"Resource": [
"${github_app_key_base64_arn}",
diff --git a/modules/runners/policies/lambda-scale-down.json b/modules/runners/policies/lambda-scale-down.json
index d35be746b7..067a747c81 100644
--- a/modules/runners/policies/lambda-scale-down.json
+++ b/modules/runners/policies/lambda-scale-down.json
@@ -46,7 +46,8 @@
{
"Effect": "Allow",
"Action": [
- "ssm:GetParameter"
+ "ssm:GetParameter",
+ "ssm:GetParameters"
],
"Resource": [
"${github_app_key_base64_arn}",
diff --git a/modules/runners/policies/lambda-scale-up.json b/modules/runners/policies/lambda-scale-up.json
index 3b16e710d5..93faf506a3 100644
--- a/modules/runners/policies/lambda-scale-up.json
+++ b/modules/runners/policies/lambda-scale-up.json
@@ -30,7 +30,8 @@
{
"Effect": "Allow",
"Action": [
- "ssm:GetParameter"
+ "ssm:GetParameter",
+ "ssm:GetParameters"
],
"Resource": [
"${github_app_key_base64_arn}",
diff --git a/modules/runners/pool/policies/lambda-pool.json b/modules/runners/pool/policies/lambda-pool.json
index b0360a825c..91c9997ce4 100644
--- a/modules/runners/pool/policies/lambda-pool.json
+++ b/modules/runners/pool/policies/lambda-pool.json
@@ -51,7 +51,8 @@
{
"Effect": "Allow",
"Action": [
- "ssm:GetParameter"
+ "ssm:GetParameter",
+ "ssm:GetParameters"
],
"Resource": [
"${github_app_key_base64_arn}",
diff --git a/modules/webhook/policies/lambda-ssm.json b/modules/webhook/policies/lambda-ssm.json
index 9e33d1ca0a..b1ebca8c8b 100644
--- a/modules/webhook/policies/lambda-ssm.json
+++ b/modules/webhook/policies/lambda-ssm.json
@@ -3,7 +3,7 @@
"Statement": [
{
"Effect": "Allow",
- "Action": ["ssm:GetParameter"],
+ "Action": ["ssm:GetParameter", "ssm:GetParameters"],
"Resource": ${resource_arns}
}
]
From e78065d81ce3deeaab782d54daf766ed30214499 Mon Sep 17 00:00:00 2001
From: Brend Smits
Date: Wed, 11 Mar 2026 16:24:18 +0100
Subject: [PATCH 17/22] feat(logging): add log_class parameter to runner log
files configuration (#5036)
This pull request updates the logging configuration by introducing
support for the `log_class` property, allowing log groups to be created
with either the `STANDARD` or `INFREQUENT_ACCESS` class. The change is
applied throughout the configuration to ensure log groups and log files
can specify their class, defaulting to `STANDARD` if not set.
**Logging configuration enhancements:**
* Added a `log_class` property (defaulting to `"STANDARD"`) to the
`runner_log_files` and `multi_runner_config` variables in
`variables.tf`, `modules/runners/variables.tf`, and
`modules/multi-runner/variables.tf` to allow specifying the log group
class.
* Updated the local log file definitions in `modules/runners/logging.tf`
to include the `log_class` property for each log file, defaulting to
`"STANDARD"`.
* Modified the CloudWatch log group resource in
`modules/runners/logging.tf` to use the specified `log_class` when
creating log groups, and refactored the logic to group log files by both
name and class.
**Documentation improvements:**
* Enhanced the description of the `runner_log_files` variable to
document the new `log_class` property and its valid values.
---------
Signed-off-by: Brend Smits
Co-authored-by: github-aws-runners-pr[bot]
---
README.md | 3 ++-
examples/multi-runner/main.tf | 3 +++
main.tf | 5 +++++
modules/ami-housekeeper/README.md | 1 +
modules/ami-housekeeper/main.tf | 1 +
modules/ami-housekeeper/variables.tf | 11 ++++++++++
modules/lambda/README.md | 2 +-
modules/lambda/main.tf | 1 +
modules/lambda/variables.tf | 2 ++
modules/multi-runner/README.md | 9 ++++----
modules/multi-runner/ami-housekeeper.tf | 1 +
modules/multi-runner/runner-binaries.tf | 1 +
modules/multi-runner/runners.tf | 1 +
modules/multi-runner/termination-watcher.tf | 1 +
modules/multi-runner/variables.tf | 12 ++++++++++
modules/multi-runner/webhook.tf | 1 +
modules/runner-binaries-syncer/README.md | 1 +
.../runner-binaries-syncer.tf | 1 +
modules/runner-binaries-syncer/variables.tf | 11 ++++++++++
modules/runners/README.md | 3 ++-
modules/runners/job-retry.tf | 1 +
modules/runners/logging.tf | 22 +++++++++++++++----
modules/runners/pool.tf | 1 +
modules/runners/pool/README.md | 2 +-
modules/runners/pool/main.tf | 1 +
modules/runners/pool/variables.tf | 1 +
modules/runners/scale-down.tf | 1 +
modules/runners/scale-up.tf | 1 +
modules/runners/ssm-housekeeper.tf | 1 +
modules/runners/variables.tf | 14 +++++++++++-
modules/termination-watcher/README.md | 2 +-
modules/termination-watcher/variables.tf | 2 ++
modules/webhook/README.md | 1 +
modules/webhook/direct/README.md | 2 +-
modules/webhook/direct/variables.tf | 1 +
modules/webhook/direct/webhook.tf | 1 +
modules/webhook/eventbridge/README.md | 2 +-
modules/webhook/eventbridge/dispatcher.tf | 1 +
modules/webhook/eventbridge/variables.tf | 1 +
modules/webhook/eventbridge/webhook.tf | 1 +
modules/webhook/variables.tf | 11 ++++++++++
modules/webhook/webhook.tf | 2 ++
variables.tf | 14 +++++++++++-
43 files changed, 140 insertions(+), 17 deletions(-)
diff --git a/README.md b/README.md
index be25b537b8..9368db2dc1 100644
--- a/README.md
+++ b/README.md
@@ -158,6 +158,7 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh)
| [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no |
| [lambda\_subnet\_ids](#input\_lambda\_subnet\_ids) | List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. | `list(string)` | `[]` | no |
| [lambda\_tags](#input\_lambda\_tags) | Map of tags that will be added to all the lambda function resources. Note these are additional tags to the default tags. | `map(string)` | `{}` | no |
+| [log\_class](#input\_log\_class) | The log class of the CloudWatch log groups. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. | `string` | `"STANDARD"` | no |
| [log\_level](#input\_log\_level) | Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. | `string` | `"info"` | no |
| [logging\_kms\_key\_id](#input\_logging\_kms\_key\_id) | Specifies the kms key id to encrypt the logs with. | `string` | `null` | no |
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
@@ -198,7 +199,7 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh)
| [runner\_hook\_job\_completed](#input\_runner\_hook\_job\_completed) | Script to be ran in the runner environment at the end of every job | `string` | `""` | no |
| [runner\_hook\_job\_started](#input\_runner\_hook\_job\_started) | Script to be ran in the runner environment at the beginning of every job | `string` | `""` | no |
| [runner\_iam\_role\_managed\_policy\_arns](#input\_runner\_iam\_role\_managed\_policy\_arns) | Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role | `list(string)` | `[]` | no |
-| [runner\_log\_files](#input\_runner\_log\_files) | (optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details. | list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string })) | `null` | no |
+| [runner\_log\_files](#input\_runner\_log\_files) | (optional) List of logfiles to send to CloudWatch, will only be used if `enable_cloudwatch_agent` is set to true. Object description: `log_group_name`: Name of the log group, `prefix_log_group`: If true, the log group name will be prefixed with `/github-self-hosted-runners/`, `file_path`: path to the log file, `log_stream_name`: name of the log stream, `log_class`: The log class of the log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. Defaults to `STANDARD`. | list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string log_class = optional(string, "STANDARD") })) | `null` | no |
| [runner\_metadata\_options](#input\_runner\_metadata\_options) | Metadata options for the ec2 runner instances. By default, the module uses metadata tags for bootstrapping the runner, only disable `instance_metadata_tags` when using custom scripts for starting the runner. | `map(any)` | { "http_endpoint": "enabled", "http_put_response_hop_limit": 1, "http_tokens": "required", "instance_metadata_tags": "enabled" } | no |
| [runner\_name\_prefix](#input\_runner\_name\_prefix) | The prefix used for the GitHub runner name. The prefix will be used in the default start script to prefix the instance name when register the runner in GitHub. The value is available via an EC2 tag 'ghr:runner\_name\_prefix'. | `string` | `""` | no |
| [runner\_os](#input\_runner\_os) | The EC2 Operating System type to use for action runner instances (linux,windows). | `string` | `"linux"` | no |
diff --git a/examples/multi-runner/main.tf b/examples/multi-runner/main.tf
index 13df82a0bb..0524a48859 100644
--- a/examples/multi-runner/main.tf
+++ b/examples/multi-runner/main.tf
@@ -139,6 +139,9 @@ module "runners" {
# Enable debug logging for the lambda functions
# log_level = "debug"
+  # Set the log class to INFREQUENT_ACCESS for cost savings
+  # log_class = "INFREQUENT_ACCESS"
+
# Enable to track the spot instance termination warning
# instance_termination_watcher = {
# enable = true
diff --git a/main.tf b/main.tf
index 1c07389116..dc7d842cc0 100644
--- a/main.tf
+++ b/main.tf
@@ -137,6 +137,7 @@ module "webhook" {
tracing_config = var.tracing_config
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
role_path = var.role_path
role_permissions_boundary = var.role_permissions_boundary
@@ -228,6 +229,7 @@ module "runners" {
tracing_config = var.tracing_config
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
enable_cloudwatch_agent = var.enable_cloudwatch_agent
cloudwatch_config = var.cloudwatch_config
runner_log_files = var.runner_log_files
@@ -307,6 +309,7 @@ module "runner_binaries" {
tracing_config = var.tracing_config
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
state_event_rule_binaries_syncer = var.state_event_rule_binaries_syncer
server_side_encryption_configuration = var.runner_binaries_s3_sse_configuration
@@ -349,6 +352,7 @@ module "ami_housekeeper" {
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
log_level = var.log_level
role_path = var.role_path
@@ -370,6 +374,7 @@ locals {
subnet_ids = var.lambda_subnet_ids
lambda_tags = var.lambda_tags
log_level = var.log_level
+ log_class = var.log_class
logging_kms_key_id = var.logging_kms_key_id
logging_retention_in_days = var.logging_retention_in_days
role_path = var.role_path
diff --git a/modules/ami-housekeeper/README.md b/modules/ami-housekeeper/README.md
index 8898e0c85e..711a72b39d 100644
--- a/modules/ami-housekeeper/README.md
+++ b/modules/ami-housekeeper/README.md
@@ -115,6 +115,7 @@ No modules.
| [lambda\_tags](#input\_lambda\_tags) | Map of tags that will be added to all the lambda function resources. Note these are additional tags to the default tags. | `map(string)` | `{}` | no |
| [lambda\_timeout](#input\_lambda\_timeout) | Time out of the lambda in seconds. | `number` | `60` | no |
| [lambda\_zip](#input\_lambda\_zip) | File location of the lambda zip file. | `string` | `null` | no |
+| [log\_class](#input\_log\_class) | The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. | `string` | `"STANDARD"` | no |
| [log\_level](#input\_log\_level) | Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. | `string` | `"info"` | no |
| [logging\_kms\_key\_id](#input\_logging\_kms\_key\_id) | Specifies the kms key id to encrypt the logs with | `string` | `null` | no |
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
diff --git a/modules/ami-housekeeper/main.tf b/modules/ami-housekeeper/main.tf
index 97ce2cef8a..40881d41f7 100644
--- a/modules/ami-housekeeper/main.tf
+++ b/modules/ami-housekeeper/main.tf
@@ -51,6 +51,7 @@ resource "aws_cloudwatch_log_group" "ami_housekeeper" {
name = "/aws/lambda/${aws_lambda_function.ami_housekeeper.function_name}"
retention_in_days = var.logging_retention_in_days
kms_key_id = var.logging_kms_key_id
+ log_group_class = var.log_class
tags = var.tags
}
diff --git a/modules/ami-housekeeper/variables.tf b/modules/ami-housekeeper/variables.tf
index 54bec6dc32..ff3024efb3 100644
--- a/modules/ami-housekeeper/variables.tf
+++ b/modules/ami-housekeeper/variables.tf
@@ -54,6 +54,17 @@ variable "logging_kms_key_id" {
default = null
}
+variable "log_class" {
+ description = "The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`."
+ type = string
+ default = "STANDARD"
+
+ validation {
+ condition = contains(["STANDARD", "INFREQUENT_ACCESS"], var.log_class)
+ error_message = "`log_class` must be either `STANDARD` or `INFREQUENT_ACCESS`."
+ }
+}
+
variable "lambda_subnet_ids" {
description = "List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`."
type = list(string)
diff --git a/modules/lambda/README.md b/modules/lambda/README.md
index 26ff5e5c24..19e9c2a072 100644
--- a/modules/lambda/README.md
+++ b/modules/lambda/README.md
@@ -39,7 +39,7 @@ No modules.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [lambda](#input\_lambda) | Configuration for the lambda function. `aws_partition`: Partition for the base arn if not 'aws' `architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. `environment_variables`: Environment variables for the lambda. `handler`: The entrypoint for the lambda. `principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing. `lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. `logging_kms_key_id`: Specifies the kms key id to encrypt the logs with `logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. `memory_size`: Memory size limit in MB of the lambda. `metrics_namespace`: Namespace for the metrics emitted by the lambda. `name`: The name of the lambda function. `prefix`: The prefix used for naming resources. `role_path`: The path that will be added to the role, if not set the environment name will be used. `role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda. `runtime`: AWS Lambda runtime. `s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. `s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. `s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. `security_group_ids`: List of security group IDs associated with the Lambda function. `subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. `tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `timeout`: Time out of the lambda in seconds. `tracing_config`: Configuration for lambda tracing. `zip`: File location of the lambda zip file. | object({ aws_partition = optional(string, "aws") architecture = optional(string, "arm64") environment_variables = optional(map(string), {}) handler = string lambda_tags = optional(map(string), {}) log_level = optional(string, "info") logging_kms_key_id = optional(string, null) logging_retention_in_days = optional(number, 180) memory_size = optional(number, 256) metrics_namespace = optional(string, "GitHub Runners") name = string prefix = optional(string, null) principals = optional(list(object({ type = string identifiers = list(string) })), []) role_path = optional(string, null) role_permissions_boundary = optional(string, null) runtime = optional(string, "nodejs24.x") s3_bucket = optional(string, null) s3_key = optional(string, null) s3_object_version = optional(string, null) security_group_ids = optional(list(string), []) subnet_ids = optional(list(string), []) tags = optional(map(string), {}) timeout = optional(number, 60) tracing_config = optional(object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }), {}) zip = optional(string, null) }) | n/a | yes |
+| [lambda](#input\_lambda) | Configuration for the lambda function. `aws_partition`: Partition for the base arn if not 'aws' `architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. `environment_variables`: Environment variables for the lambda. `handler`: The entrypoint for the lambda. `principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing. `lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. `logging_kms_key_id`: Specifies the kms key id to encrypt the logs with `logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. `log_class`: The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. `memory_size`: Memory size limit in MB of the lambda. `metrics_namespace`: Namespace for the metrics emitted by the lambda. `name`: The name of the lambda function. `prefix`: The prefix used for naming resources. `role_path`: The path that will be added to the role, if not set the environment name will be used. `role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda. `runtime`: AWS Lambda runtime. `s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. `s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. `s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. 
`security_group_ids`: List of security group IDs associated with the Lambda function. `subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. `tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `timeout`: Time out of the lambda in seconds. `tracing_config`: Configuration for lambda tracing. `zip`: File location of the lambda zip file. | object({ aws_partition = optional(string, "aws") architecture = optional(string, "arm64") environment_variables = optional(map(string), {}) handler = string lambda_tags = optional(map(string), {}) log_level = optional(string, "info") log_class = optional(string, "STANDARD") logging_kms_key_id = optional(string, null) logging_retention_in_days = optional(number, 180) memory_size = optional(number, 256) metrics_namespace = optional(string, "GitHub Runners") name = string prefix = optional(string, null) principals = optional(list(object({ type = string identifiers = list(string) })), []) role_path = optional(string, null) role_permissions_boundary = optional(string, null) runtime = optional(string, "nodejs24.x") s3_bucket = optional(string, null) s3_key = optional(string, null) s3_object_version = optional(string, null) security_group_ids = optional(list(string), []) subnet_ids = optional(list(string), []) tags = optional(map(string), {}) timeout = optional(number, 60) tracing_config = optional(object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }), {}) zip = optional(string, null) }) | n/a | yes |
## Outputs
diff --git a/modules/lambda/main.tf b/modules/lambda/main.tf
index 25cbd3f9dd..7cc3094f28 100644
--- a/modules/lambda/main.tf
+++ b/modules/lambda/main.tf
@@ -56,6 +56,7 @@ resource "aws_cloudwatch_log_group" "main" {
name = "/aws/lambda/${aws_lambda_function.main.function_name}"
retention_in_days = var.lambda.logging_retention_in_days
kms_key_id = var.lambda.logging_kms_key_id
+ log_group_class = var.lambda.log_class
tags = var.lambda.tags
}
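The hunk above passes the new `log_class` field through to `log_group_class` on the lambda's CloudWatch log group. As a hedged usage sketch (the module source path, function name, and the choice of `INFREQUENT_ACCESS` are illustrative assumptions, not taken from this patch), a caller could opt into the cheaper Infrequent Access log class like this:

```hcl
# Illustrative only: module path and values are assumptions.
# `log_class` defaults to "STANDARD"; "INFREQUENT_ACCESS" trades query
# features for lower ingestion cost on the lambda's log group.
module "example_lambda" {
  source = "./modules/lambda"

  lambda = {
    name      = "example-webhook"  # hypothetical function name
    handler   = "index.handler"
    log_class = "INFREQUENT_ACCESS"
  }
}
```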
diff --git a/modules/lambda/variables.tf b/modules/lambda/variables.tf
index 7cbecba071..a6e27168fa 100644
--- a/modules/lambda/variables.tf
+++ b/modules/lambda/variables.tf
@@ -11,6 +11,7 @@ variable "lambda" {
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
+ `log_class`: The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`.
`memory_size`: Memory size limit in MB of the lambda.
`metrics_namespace`: Namespace for the metrics emitted by the lambda.
`name`: The name of the lambda function.
@@ -35,6 +36,7 @@ variable "lambda" {
handler = string
lambda_tags = optional(map(string), {})
log_level = optional(string, "info")
+ log_class = optional(string, "STANDARD")
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, 180)
memory_size = optional(number, 256)
diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md
index 9c6eb04c57..bd7c98d445 100644
--- a/modules/multi-runner/README.md
+++ b/modules/multi-runner/README.md
@@ -125,12 +125,12 @@ module "multi-runner" {
| [associate\_public\_ipv4\_address](#input\_associate\_public\_ipv4\_address) | Associate public IPv4 with the runner. Only tested with IPv4 | `bool` | `false` | no |
| [aws\_partition](#input\_aws\_partition) | (optiona) partition in the arn namespace to use if not 'aws' | `string` | `"aws"` | no |
| [aws\_region](#input\_aws\_region) | AWS region. | `string` | n/a | yes |
-| [cloudwatch\_config](#input\_cloudwatch\_config) | (optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details. | `string` | `null` | no |
+| [cloudwatch\_config](#input\_cloudwatch\_config) | (optional) Replaces the module default cloudwatch log config. See <https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html> for details. | `string` | `null` | no |
| [enable\_ami\_housekeeper](#input\_enable\_ami\_housekeeper) | Option to disable the lambda to clean up old AMIs. | `bool` | `false` | no |
| [enable\_managed\_runner\_security\_group](#input\_enable\_managed\_runner\_security\_group) | Enabling the default managed security group creation. Unmanaged security groups can be specified via `runner_additional_security_group_ids`. | `bool` | `true` | no |
| [eventbridge](#input\_eventbridge) | Enable the use of EventBridge by the module. By enabling this feature events will be put on the EventBridge by the webhook instead of directly dispatching to queues for scaling. | object({ enable = optional(bool, true) accept_events = optional(list(string), []) }) | `{}` | no |
| [ghes\_ssl\_verify](#input\_ghes\_ssl\_verify) | GitHub Enterprise SSL verification. Set to 'false' when custom certificate (chains) is used for GitHub Enterprise Server (insecure). | `bool` | `true` | no |
-| [ghes\_url](#input\_ghes\_url) | GitHub Enterprise Server URL. Example: https://github.internal.co - DO NOT SET IF USING PUBLIC GITHUB. .However if you are using GitHub Enterprise Cloud with data-residency (ghe.com), set the endpoint here. Example - https://companyname.ghe.com\| | `string` | `null` | no |
+| [ghes\_url](#input\_ghes\_url) | GitHub Enterprise Server URL. Example: <https://github.internal.co> - DO NOT SET IF USING PUBLIC GITHUB. However if you are using GitHub Enterprise Cloud with data-residency (ghe.com), set the endpoint here. Example - <https://companyname.ghe.com> | `string` | `null` | no |
| [github\_app](#input\_github\_app) | GitHub app parameters, see your github app. You can optionally create the SSM parameters yourself and provide the ARN and name here, through the `*_ssm` attributes. If you chose to provide the configuration values directly here, please ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). Note: the provided SSM parameters arn and name have a precedence over the actual value (i.e `key_base64_ssm` has a precedence over `key_base64` etc). | object({ key_base64 = optional(string) key_base64_ssm = optional(object({ arn = string name = string })) id = optional(string) id_ssm = optional(object({ arn = string name = string })) webhook_secret = optional(string) webhook_secret_ssm = optional(object({ arn = string name = string })) }) | n/a | yes |
| [instance\_profile\_path](#input\_instance\_profile\_path) | The path that will be added to the instance\_profile, if not set the environment name will be used. | `string` | `null` | no |
| [instance\_termination\_watcher](#input\_instance\_termination\_watcher) | Configuration for the spot termination watcher lambda function. This feature is Beta, changes will not trigger a major release as long in beta. `enable`: Enable or disable the spot termination watcher. `memory_size`: Memory size limit in MB of the lambda. `s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. `s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. `timeout`: Time out of the lambda in seconds. `zip`: File location of the lambda zip file. | object({ enable = optional(bool, false) features = optional(object({ enable_spot_termination_handler = optional(bool, true) enable_spot_termination_notification_watcher = optional(bool, true) }), {}) memory_size = optional(number, null) s3_key = optional(string, null) s3_object_version = optional(string, null) timeout = optional(number, null) zip = optional(string, null) }) | `{}` | no |
@@ -145,17 +145,18 @@ module "multi-runner" {
| [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no |
| [lambda\_subnet\_ids](#input\_lambda\_subnet\_ids) | List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. | `list(string)` | `[]` | no |
| [lambda\_tags](#input\_lambda\_tags) | Map of tags that will be added to all the lambda function resources. Note these are additional tags to the default tags. | `map(string)` | `{}` | no |
+| [log\_class](#input\_log\_class) | The log class of the CloudWatch log groups. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. | `string` | `"STANDARD"` | no |
| [log\_level](#input\_log\_level) | Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. | `string` | `"info"` | no |
| [logging\_kms\_key\_id](#input\_logging\_kms\_key\_id) | Specifies the kms key id to encrypt the logs with | `string` | `null` | no |
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
| [matcher\_config\_parameter\_store\_tier](#input\_matcher\_config\_parameter\_store\_tier) | The tier of the parameter store for the matcher configuration. Valid values are `Standard`, and `Advanced`. | `string` | `"Standard"` | no |
| [metrics](#input\_metrics) | Configuration for metrics created by the module, by default metrics are disabled to avoid additional costs. When metrics are enable all metrics are created unless explicit configured otherwise. | object({ enable = optional(bool, false) namespace = optional(string, "GitHub Runners") metric = optional(object({ enable_github_app_rate_limit = optional(bool, true) enable_job_retry = optional(bool, true) enable_spot_termination_warning = optional(bool, true) }), {}) }) | `{}` | no |
-| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = { runner\_config: { runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)." runner\_architecture: "The platform architecture of the runner instance\_type." runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." create\_service\_linked\_role\_spot: (Optional) create the serviced linked role for spot instances that is required by the scale-up lambda. credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`. delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)" ebs\_optimized: "The EC2 EBS optimized configuration." enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once." enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registraton. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners an can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners." enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. 
When not defined the default behavior is to retry later." scale\_errors: "List of aws error codes that should trigger retry during scale up. This list will replace the default errors defined in the variable `defaultScaleErrors` in https://github.com/github-aws-runners/terraform-aws-github-runner/blob/main/lambdas/functions/control-plane/src/aws/runners.ts" enable\_organization\_runners: "Register runners to organization, instead of repo level" enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-build AMI." enable\_ssm\_on\_runners: "Enable to allow access the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances." enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI." instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`." instance\_max\_spot\_price: "Max price price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet." instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`." instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)." job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged" minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before terminated if not busy." pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported." runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. 
If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner." runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored." runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner." runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case you on own start script is used, this configuration parameter needs to be parsed via SSM." runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Labels checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided." runner\_group\_name: "Name of the runner group." runner\_name\_prefix: "Prefix for the GitHub runner name." runner\_run\_as: "Run the GitHub actions agent as user." runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` desiables the maximum check." scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down." scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations." userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored." enable\_jit\_config "Overwrite the default behavior for JIT configuration. 
By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI." enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details." enable\_cloudwatch\_agent: "Enabling the cloudwatch agent on the ec2 runner instances, the runner contains default config. Configuration can be overridden via `cloudwatch_config`." cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." userdata\_pre\_install: "Script to be ran before the GitHub Actions runner is installed on the EC2 instances" userdata\_post\_install: "Script to be ran after the GitHub Actions runner is installed on the EC2 instances" runner\_hook\_job\_started: "Script to be ran in the runner environment at the beginning of every job" runner\_hook\_job\_completed: "Script to be ran in the runner environment at the end of every job" runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications." runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role" vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`." subnet\_ids: "List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. If not set, uses the value of `var.subnet_ids`." 
idle\_config: "List of time period that can be defined as cron expression to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle." runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`." job\_retry: "Experimental! Can be removed / changed without trigger a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda is checking after a delay if the job is queued. If not the message will be published again on the scale-up (build queue). Using this feature can impact the rate limit of the GitHub app." pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)." } matcherConfig: { labelMatchers: "The list of list of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`" exactMatch: "DEPRECATED: Use `bidirectionalLabelMatch` instead. If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook. 
Note: this only checks that workflow labels are a subset of runner labels, not the reverse." bidirectionalLabelMatch: "If set to true, the runner labels and workflow job labels must be an exact two-way match (same set, any order, no extras or missing labels). This is stricter than `exactMatch` which only checks that workflow labels are a subset of runner labels. When false, if __any__ workflow label matches it will trigger the webhook." priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999." } redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true`, `maxReceiveCount` to a number of max retries." } | map(object({ runner_config = object({ runner_os = string runner_architecture = string runner_metadata_options = optional(map(any), { instance_metadata_tags = "enabled" http_endpoint = "enabled" http_tokens = "required" http_put_response_hop_limit = 1 }) ami = optional(object({ filter = optional(map(list(string)), { state = ["available"] }) owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) disable_runner_autoupdate = optional(bool, false) ebs_optimized = optional(bool, false) enable_ephemeral_runners = optional(bool, false) enable_job_queued_check = optional(bool, null) enable_on_demand_failover_for_errors = optional(list(string), []) scale_errors = optional(list(string), [ "UnfulfillableCapacity", "MaxSpotInstanceCountExceeded", "TargetCapacityLimitExceededException", "RequestLimitExceeded", "ResourceLimitExceeded", 
"MaxSpotInstanceCountExceeded", "MaxSpotFleetRequestCountExceeded", "InsufficientInstanceCapacity", "InsufficientCapacityOnHost", ]) enable_organization_runners = optional(bool, false) enable_runner_binaries_syncer = optional(bool, true) enable_ssm_on_runners = optional(bool, false) enable_userdata = optional(bool, true) instance_allocation_strategy = optional(string, "lowest-price") instance_max_spot_price = optional(string, null) instance_target_capacity_type = optional(string, "spot") instance_types = list(string) job_queue_retention_in_seconds = optional(number, 86400) minimum_running_time_in_minutes = optional(number, null) pool_runner_owner = optional(string, null) runner_as_root = optional(bool, false) runner_boot_time_in_minutes = optional(number, 5) runner_disable_default_labels = optional(bool, false) runner_extra_labels = optional(list(string), []) runner_group_name = optional(string, "Default") runner_name_prefix = optional(string, "") runner_run_as = optional(string, "ec2-user") runners_maximum_count = number runner_additional_security_group_ids = optional(list(string), []) scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? 
*)") scale_up_reserved_concurrent_executions = optional(number, 1) userdata_template = optional(string, null) userdata_content = optional(string, null) enable_jit_config = optional(bool, null) enable_runner_detailed_monitoring = optional(bool, false) enable_cloudwatch_agent = optional(bool, true) cloudwatch_config = optional(string, null) userdata_pre_install = optional(string, "") userdata_post_install = optional(string, "") runner_hook_job_started = optional(string, "") runner_hook_job_completed = optional(string, "") runner_ec2_tags = optional(map(string), {}) runner_iam_role_managed_policy_arns = optional(list(string), []) vpc_id = optional(string, null) subnet_ids = optional(list(string), null) idle_config = optional(list(object({ cron = string timeZone = string idleCount = number evictionStrategy = optional(string, "oldest_first") })), []) cpu_options = optional(object({ core_count = number threads_per_core = number }), null) placement = optional(object({ affinity = optional(string) availability_zone = optional(string) group_id = optional(string) group_name = optional(string) host_id = optional(string) host_resource_group_arn = optional(string) spread_domain = optional(string) tenancy = optional(string) partition_number = optional(number) }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string })), null) block_device_mappings = optional(list(object({ delete_on_termination = optional(bool, true) device_name = optional(string, "/dev/xvda") encrypted = optional(bool, true) iops = optional(number) kms_key_id = optional(string) snapshot_id = optional(string) throughput = optional(number) volume_size = number volume_type = optional(string, "gp3") })), [{ volume_size = 30 }]) pool_config = optional(list(object({ schedule_expression = string schedule_expression_timezone = optional(string) size = number })), []) job_retry = optional(object({ enable = optional(bool, false) 
delay_in_seconds = optional(number, 300) delay_backoff = optional(number, 2) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 30) max_attempts = optional(number, 1) }), {}) }) matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = optional(bool, false) bidirectionalLabelMatch = optional(bool, false) priority = optional(number, 999) }) redrive_build_queue = optional(object({ enabled = bool maxReceiveCount = number }), { enabled = false maxReceiveCount = null }) })) | n/a | yes |
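The `exactMatch` vs. `bidirectionalLabelMatch` semantics described in the `matcherConfig` docs above can be sketched as follows. This is a minimal illustration, not the module's actual `dispatch.ts` implementation; the function names are hypothetical:

```typescript
// exactMatch (deprecated): workflow job labels must be a SUBSET of runner labels.
function subsetMatch(workflowLabels: string[], runnerLabels: string[]): boolean {
  const runner = new Set(runnerLabels.map((l) => l.toLowerCase()));
  return workflowLabels.every((l) => runner.has(l.toLowerCase()));
}

// bidirectionalLabelMatch: workflow and runner labels must be the SAME SET
// (any order, no extra or missing labels on either side).
function bidirectionalMatch(workflowLabels: string[], runnerLabels: string[]): boolean {
  const workflow = new Set(workflowLabels.map((l) => l.toLowerCase()));
  const runner = new Set(runnerLabels.map((l) => l.toLowerCase()));
  return workflow.size === runner.size && [...workflow].every((l) => runner.has(l));
}

// Job [A,B,C] against runner [A,B,C,D]:
console.log(subsetMatch(['a', 'b', 'c'], ['a', 'b', 'c', 'd']));        // true  (exactMatch behavior)
console.log(bidirectionalMatch(['a', 'b', 'c'], ['a', 'b', 'c', 'd'])); // false (strict two-way match)
```

This mirrors the Before/After example in the commit message: a runner carrying an extra label no longer matches a job that does not explicitly request it.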
+| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = { runner\_config: { runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)." runner\_architecture: "The platform architecture of the runner instance\_type." runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." create\_service\_linked\_role\_spot: (Optional) create the serviced linked role for spot instances that is required by the scale-up lambda. credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`. delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)" ebs\_optimized: "The EC2 EBS optimized configuration." enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once." enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registraton. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners an can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners." enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. 
When not defined the default behavior is to retry later." scale\_errors: "List of aws error codes that should trigger retry during scale up. This list will replace the default errors defined in the variable `defaultScaleErrors` in <https://github.com/github-aws-runners/terraform-aws-github-runner/blob/main/lambdas/functions/control-plane/src/aws/runners.ts>" enable\_organization\_runners: "Register runners to organization, instead of repo level" enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-build AMI." enable\_ssm\_on\_runners: "Enable to allow access the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances." enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI." instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`." instance\_max\_spot\_price: "Max price price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet." instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`." instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)." job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged" minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before terminated if not busy." pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported." runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. 
If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner." runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored." runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner." runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case you on own start script is used, this configuration parameter needs to be parsed via SSM." runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Labels checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided." runner\_group\_name: "Name of the runner group." runner\_name\_prefix: "Prefix for the GitHub runner name." runner\_run\_as: "Run the GitHub actions agent as user." runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` desiables the maximum check." scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down." scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations." userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored." enable\_jit\_config "Overwrite the default behavior for JIT configuration. By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. 
In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI." enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See for details." enable\_cloudwatch\_agent: "Enables the cloudwatch agent on the ec2 runner instances; the module contains a default config. Configuration can be overridden via `cloudwatch_config`." cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See for details." userdata\_pre\_install: "Script to be run before the GitHub Actions runner is installed on the EC2 instances" userdata\_post\_install: "Script to be run after the GitHub Actions runner is installed on the EC2 instances" runner\_hook\_job\_started: "Script to be run in the runner environment at the beginning of every job" runner\_hook\_job\_completed: "Script to be run in the runner environment at the end of every job" runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications." runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role" vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`." subnet\_ids: "List of subnets in which the action runners will be launched; the subnets need to be in the `vpc_id`. If not set, uses the value of `var.subnet_ids`." idle\_config: "List of time periods, defined as cron expressions, to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that during time periods matching the cron expression (within 5 seconds) a runner is kept idle." runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See for details." block\_device\_mappings: "The EC2 instance block device configuration. 
Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`." job\_retry: "Experimental! Can be removed / changed without triggering a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda checks after a delay if the job is still queued. If not, the message will be published again on the scale-up (build) queue. Using this feature can impact the rate limit of the GitHub app." pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)." } matcherConfig: { labelMatchers: "The list of lists of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`" exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false, if __any__ workflow label matches it will trigger the webhook." priority: "If set it defines the priority of the matcher; the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999." } redrive\_build\_queue: "Set options to attach an (optional) dead letter queue to the build queue, the queue between the webhook and the scale-up lambda. You have the following options: 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true` and `maxReceiveCount` to the max number of retries." 
} | map(object({ runner_config = object({ runner_os = string runner_architecture = string runner_metadata_options = optional(map(any), { instance_metadata_tags = "enabled" http_endpoint = "enabled" http_tokens = "required" http_put_response_hop_limit = 1 }) ami = optional(object({ filter = optional(map(list(string)), { state = ["available"] }) owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) disable_runner_autoupdate = optional(bool, false) ebs_optimized = optional(bool, false) enable_ephemeral_runners = optional(bool, false) enable_job_queued_check = optional(bool, null) enable_on_demand_failover_for_errors = optional(list(string), []) scale_errors = optional(list(string), [ "UnfulfillableCapacity", "MaxSpotInstanceCountExceeded", "TargetCapacityLimitExceededException", "RequestLimitExceeded", "ResourceLimitExceeded", "MaxSpotInstanceCountExceeded", "MaxSpotFleetRequestCountExceeded", "InsufficientInstanceCapacity", "InsufficientCapacityOnHost", ]) enable_organization_runners = optional(bool, false) enable_runner_binaries_syncer = optional(bool, true) enable_ssm_on_runners = optional(bool, false) enable_userdata = optional(bool, true) instance_allocation_strategy = optional(string, "lowest-price") instance_max_spot_price = optional(string, null) instance_target_capacity_type = optional(string, "spot") instance_types = list(string) job_queue_retention_in_seconds = optional(number, 86400) minimum_running_time_in_minutes = optional(number, null) pool_runner_owner = optional(string, null) runner_as_root = optional(bool, false) runner_boot_time_in_minutes = optional(number, 5) runner_disable_default_labels = optional(bool, false) runner_extra_labels = optional(list(string), []) runner_group_name = optional(string, "Default") 
runner_name_prefix = optional(string, "") runner_run_as = optional(string, "ec2-user") runners_maximum_count = number runner_additional_security_group_ids = optional(list(string), []) scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? *)") scale_up_reserved_concurrent_executions = optional(number, 1) userdata_template = optional(string, null) userdata_content = optional(string, null) enable_jit_config = optional(bool, null) enable_runner_detailed_monitoring = optional(bool, false) enable_cloudwatch_agent = optional(bool, true) cloudwatch_config = optional(string, null) userdata_pre_install = optional(string, "") userdata_post_install = optional(string, "") runner_hook_job_started = optional(string, "") runner_hook_job_completed = optional(string, "") runner_ec2_tags = optional(map(string), {}) runner_iam_role_managed_policy_arns = optional(list(string), []) vpc_id = optional(string, null) subnet_ids = optional(list(string), null) idle_config = optional(list(object({ cron = string timeZone = string idleCount = number evictionStrategy = optional(string, "oldest_first") })), []) cpu_options = optional(object({ core_count = number threads_per_core = number }), null) placement = optional(object({ affinity = optional(string) availability_zone = optional(string) group_id = optional(string) group_name = optional(string) host_id = optional(string) host_resource_group_arn = optional(string) spread_domain = optional(string) tenancy = optional(string) partition_number = optional(number) }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string log_class = optional(string, "STANDARD") })), null) block_device_mappings = optional(list(object({ delete_on_termination = optional(bool, true) device_name = optional(string, "/dev/xvda") encrypted = optional(bool, true) iops = optional(number) kms_key_id = optional(string) snapshot_id = optional(string) throughput = optional(number) 
volume_size = number volume_type = optional(string, "gp3") })), [{ volume_size = 30 }]) pool_config = optional(list(object({ schedule_expression = string schedule_expression_timezone = optional(string) size = number })), []) job_retry = optional(object({ enable = optional(bool, false) delay_in_seconds = optional(number, 300) delay_backoff = optional(number, 2) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 30) max_attempts = optional(number, 1) }), {}) }) matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = optional(bool, false) priority = optional(number, 999) }) redrive_build_queue = optional(object({ enabled = bool maxReceiveCount = number }), { enabled = false maxReceiveCount = null }) })) | n/a | yes |
| [parameter\_store\_tags](#input\_parameter\_store\_tags) | Map of tags that will be added to all the SSM Parameter Store parameters created by the Lambda function. | `map(string)` | `{}` | no |
| [pool\_lambda\_reserved\_concurrent\_executions](#input\_pool\_lambda\_reserved\_concurrent\_executions) | Amount of reserved concurrent executions for the pool lambda function. A value of 0 disables the lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no |
| [pool\_lambda\_timeout](#input\_pool\_lambda\_timeout) | Time out for the pool lambda in seconds. | `number` | `60` | no |
| [prefix](#input\_prefix) | The prefix used for naming resources | `string` | `"github-actions"` | no |
-| [queue\_encryption](#input\_queue\_encryption) | Configure how data on queues managed by the modules in ecrypted at REST. Options are encrypted via SSE, non encrypted and via KMSS. By default encryptes via SSE is enabled. See for more details the Terraform `aws_sqs_queue` resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sqs_queue. | object({ kms_data_key_reuse_period_seconds = number kms_master_key_id = string sqs_managed_sse_enabled = bool }) | { "kms_data_key_reuse_period_seconds": null, "kms_master_key_id": null, "sqs_managed_sse_enabled": true } | no |
+| [queue\_encryption](#input\_queue\_encryption) | Configure how data on queues managed by the modules is encrypted at rest. Options are encrypted via SSE, non-encrypted, and via KMS. By default encryption via SSE is enabled. See the Terraform `aws_sqs_queue` resource for more details. | object({ kms_data_key_reuse_period_seconds = number kms_master_key_id = string sqs_managed_sse_enabled = bool }) | { "kms_data_key_reuse_period_seconds": null, "kms_master_key_id": null, "sqs_managed_sse_enabled": true } | no |
| [repository\_white\_list](#input\_repository\_white\_list) | List of github repository full names (owner/repo\_name) that will be allowed to use the github app. Leave empty for no filtering. | `list(string)` | `[]` | no |
| [role\_path](#input\_role\_path) | The path that will be added to the role; if not set, the environment name will be used. | `string` | `null` | no |
| [role\_permissions\_boundary](#input\_role\_permissions\_boundary) | Permissions boundary that will be added to the created role for the lambda. | `string` | `null` | no |
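The `matcherConfig` options described in the `multi_runner_config` input above can be illustrated with a minimal sketch; the runner key, labels, instance type, and count below are hypothetical example values, not module defaults:

```hcl
# Hypothetical minimal entry for the multi_runner_config input. Only the
# matcher-related fields and the required runner_config fields are shown.
multi_runner_config = {
  "linux-x64" = {
    matcherConfig = {
      # Each inner list is one set of labels this runner config serves.
      labelMatchers = [["self-hosted", "linux", "x64", "example"]]
      # true: all labels on the workflow job must match the matcher labels.
      # false: any single matching workflow label triggers the webhook.
      exactMatch = true
      # Lower values are evaluated first; default is 999 (allowed 0-999).
      priority = 100
    }
    runner_config = {
      runner_os             = "linux"
      runner_architecture   = "x64"
      instance_types        = ["m5.large"]
      runners_maximum_count = 5
    }
  }
}
```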
diff --git a/modules/multi-runner/ami-housekeeper.tf b/modules/multi-runner/ami-housekeeper.tf
index 83ad4d49c2..385e6010c9 100644
--- a/modules/multi-runner/ami-housekeeper.tf
+++ b/modules/multi-runner/ami-housekeeper.tf
@@ -24,6 +24,7 @@ module "ami_housekeeper" {
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
log_level = var.log_level
role_path = var.role_path
diff --git a/modules/multi-runner/runner-binaries.tf b/modules/multi-runner/runner-binaries.tf
index e8779092f9..fb511bb3c5 100644
--- a/modules/multi-runner/runner-binaries.tf
+++ b/modules/multi-runner/runner-binaries.tf
@@ -22,6 +22,7 @@ module "runner_binaries" {
tracing_config = var.tracing_config
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
state_event_rule_binaries_syncer = var.state_event_rule_binaries_syncer
server_side_encryption_configuration = var.runner_binaries_s3_sse_configuration
diff --git a/modules/multi-runner/runners.tf b/modules/multi-runner/runners.tf
index 5cc51c5843..59b6307aa0 100644
--- a/modules/multi-runner/runners.tf
+++ b/modules/multi-runner/runners.tf
@@ -76,6 +76,7 @@ module "runners" {
tracing_config = var.tracing_config
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
enable_cloudwatch_agent = each.value.runner_config.enable_cloudwatch_agent
cloudwatch_config = try(coalesce(each.value.runner_config.cloudwatch_config, var.cloudwatch_config), null)
runner_log_files = each.value.runner_config.runner_log_files
diff --git a/modules/multi-runner/termination-watcher.tf b/modules/multi-runner/termination-watcher.tf
index f317b66adf..5ddd4495bb 100644
--- a/modules/multi-runner/termination-watcher.tf
+++ b/modules/multi-runner/termination-watcher.tf
@@ -9,6 +9,7 @@ locals {
security_group_ids = var.lambda_security_group_ids
subnet_ids = var.lambda_subnet_ids
log_level = var.log_level
+ log_class = var.log_class
logging_kms_key_id = var.logging_kms_key_id
logging_retention_in_days = var.logging_retention_in_days
role_path = var.role_path
diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf
index ceb9f2c1e9..e105c2beae 100644
--- a/modules/multi-runner/variables.tf
+++ b/modules/multi-runner/variables.tf
@@ -152,6 +152,7 @@ variable "multi_runner_config" {
prefix_log_group = bool
file_path = string
log_stream_name = string
+ log_class = optional(string, "STANDARD")
})), null)
block_device_mappings = optional(list(object({
delete_on_termination = optional(bool, true)
@@ -328,6 +329,17 @@ variable "logging_kms_key_id" {
default = null
}
+variable "log_class" {
+ description = "The log class of the CloudWatch log groups. Valid values are `STANDARD` or `INFREQUENT_ACCESS`."
+ type = string
+ default = "STANDARD"
+
+ validation {
+ condition = contains(["STANDARD", "INFREQUENT_ACCESS"], var.log_class)
+ error_message = "`log_class` must be either `STANDARD` or `INFREQUENT_ACCESS`."
+ }
+}
+
variable "lambda_s3_bucket" {
description = "S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly."
type = string
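With the `log_class` variable added above, callers can opt the lambda log group into the infrequent-access storage class. A sketch of a module call (the module source is a placeholder):

```hcl
# Hypothetical module call showing the new log_class input; any value other
# than "STANDARD" or "INFREQUENT_ACCESS" fails the variable validation.
module "runner_binaries" {
  source = "..." # placeholder

  log_class                 = "INFREQUENT_ACCESS"
  logging_retention_in_days = 30
}
```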
diff --git a/modules/multi-runner/webhook.tf b/modules/multi-runner/webhook.tf
index f42274c749..900040c609 100644
--- a/modules/multi-runner/webhook.tf
+++ b/modules/multi-runner/webhook.tf
@@ -29,6 +29,7 @@ module "webhook" {
tracing_config = var.tracing_config
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
role_path = var.role_path
role_permissions_boundary = var.role_permissions_boundary
diff --git a/modules/runner-binaries-syncer/README.md b/modules/runner-binaries-syncer/README.md
index 2999be138f..9923e72c08 100644
--- a/modules/runner-binaries-syncer/README.md
+++ b/modules/runner-binaries-syncer/README.md
@@ -97,6 +97,7 @@ No modules.
| [lambda\_tags](#input\_lambda\_tags) | Map of tags that will be added to all the lambda function resources. Note these are additional tags to the default tags. | `map(string)` | `{}` | no |
| [lambda\_timeout](#input\_lambda\_timeout) | Time out of the lambda in seconds. | `number` | `300` | no |
| [lambda\_zip](#input\_lambda\_zip) | File location of the lambda zip file. | `string` | `null` | no |
+| [log\_class](#input\_log\_class) | The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. | `string` | `"STANDARD"` | no |
| [log\_level](#input\_log\_level) | Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. | `string` | `"info"` | no |
| [logging\_kms\_key\_id](#input\_logging\_kms\_key\_id) | Specifies the kms key id to encrypt the logs with | `string` | `null` | no |
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
diff --git a/modules/runner-binaries-syncer/runner-binaries-syncer.tf b/modules/runner-binaries-syncer/runner-binaries-syncer.tf
index 7565871531..00b6e700f5 100644
--- a/modules/runner-binaries-syncer/runner-binaries-syncer.tf
+++ b/modules/runner-binaries-syncer/runner-binaries-syncer.tf
@@ -70,6 +70,7 @@ resource "aws_cloudwatch_log_group" "syncer" {
name = "/aws/lambda/${aws_lambda_function.syncer.function_name}"
retention_in_days = var.logging_retention_in_days
kms_key_id = var.logging_kms_key_id
+ log_group_class = var.log_class
tags = var.tags
}
diff --git a/modules/runner-binaries-syncer/variables.tf b/modules/runner-binaries-syncer/variables.tf
index dd16a7c3ee..e274f043a2 100644
--- a/modules/runner-binaries-syncer/variables.tf
+++ b/modules/runner-binaries-syncer/variables.tf
@@ -134,6 +134,17 @@ variable "logging_kms_key_id" {
default = null
}
+variable "log_class" {
+ description = "The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`."
+ type = string
+ default = "STANDARD"
+
+ validation {
+ condition = contains(["STANDARD", "INFREQUENT_ACCESS"], var.log_class)
+ error_message = "`log_class` must be either `STANDARD` or `INFREQUENT_ACCESS`."
+ }
+}
+
variable "lambda_s3_bucket" {
description = "S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly."
type = string
diff --git a/modules/runners/README.md b/modules/runners/README.md
index 231e542fa6..6a27276624 100644
--- a/modules/runners/README.md
+++ b/modules/runners/README.md
@@ -185,6 +185,7 @@ yarn run dist
| [lambda\_timeout\_scale\_down](#input\_lambda\_timeout\_scale\_down) | Time out for the scale down lambda in seconds. | `number` | `60` | no |
| [lambda\_timeout\_scale\_up](#input\_lambda\_timeout\_scale\_up) | Time out for the scale up lambda in seconds. | `number` | `60` | no |
| [lambda\_zip](#input\_lambda\_zip) | File location of the lambda zip file. | `string` | `null` | no |
+| [log\_class](#input\_log\_class) | The log class of the CloudWatch log groups for the lambda functions. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. | `string` | `"STANDARD"` | no |
| [log\_level](#input\_log\_level) | Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. | `string` | `"info"` | no |
| [logging\_kms\_key\_id](#input\_logging\_kms\_key\_id) | Specifies the kms key id to encrypt the logs with | `string` | `null` | no |
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
@@ -213,7 +214,7 @@ yarn run dist
| [runner\_hook\_job\_started](#input\_runner\_hook\_job\_started) | Script to be ran in the runner environment at the beginning of every job | `string` | `""` | no |
| [runner\_iam\_role\_managed\_policy\_arns](#input\_runner\_iam\_role\_managed\_policy\_arns) | Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role | `list(string)` | `[]` | no |
| [runner\_labels](#input\_runner\_labels) | All the labels for the runners (GitHub) including the default one's(e.g: self-hosted, linux, x64, label1, label2). Separate each label by a comma | `list(string)` | n/a | yes |
-| [runner\_log\_files](#input\_runner\_log\_files) | (optional) List of logfiles to send to CloudWatch, will only be used if `enable_cloudwatch_agent` is set to true. Object description: `log_group_name`: Name of the log group, `prefix_log_group`: If true, the log group name will be prefixed with `/github-self-hosted-runners/`, `file_path`: path to the log file, `log_stream_name`: name of the log stream. | list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string })) | `null` | no |
+| [runner\_log\_files](#input\_runner\_log\_files) | (optional) List of logfiles to send to CloudWatch, will only be used if `enable_cloudwatch_agent` is set to true. Object description: `log_group_name`: Name of the log group, `prefix_log_group`: If true, the log group name will be prefixed with `/github-self-hosted-runners/`, `file_path`: path to the log file, `log_stream_name`: name of the log stream, `log_class`: The log class of the log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. Defaults to `STANDARD`. | list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string log_class = optional(string, "STANDARD") })) | `null` | no |
| [runner\_name\_prefix](#input\_runner\_name\_prefix) | The prefix used for the GitHub runner name. The prefix will be used in the default start script to prefix the instance name when register the runner in GitHub. The value is available via an EC2 tag 'ghr:runner\_name\_prefix'. | `string` | `""` | no |
| [runner\_os](#input\_runner\_os) | The EC2 Operating System type to use for action runner instances (linux,windows). | `string` | `"linux"` | no |
| [runner\_run\_as](#input\_runner\_run\_as) | Run the GitHub actions agent as user. | `string` | `"ec2-user"` | no |
diff --git a/modules/runners/job-retry.tf b/modules/runners/job-retry.tf
index 130992667f..bcaec64625 100644
--- a/modules/runners/job-retry.tf
+++ b/modules/runners/job-retry.tf
@@ -13,6 +13,7 @@ locals {
kms_key_arn = var.kms_key_arn
lambda_tags = var.lambda_tags
log_level = var.log_level
+ log_class = var.log_class
logging_kms_key_id = var.logging_kms_key_id
logging_retention_in_days = var.logging_retention_in_days
metrics = var.metrics
diff --git a/modules/runners/logging.tf b/modules/runners/logging.tf
index 1b61f16f7b..ad875eecf3 100644
--- a/modules/runners/logging.tf
+++ b/modules/runners/logging.tf
@@ -7,25 +7,29 @@ locals {
"prefix_log_group" : true,
"file_path" : "/var/log/messages",
"log_group_name" : "messages",
- "log_stream_name" : "{instance_id}"
+ "log_stream_name" : "{instance_id}",
+ "log_class" : "STANDARD"
},
{
"log_group_name" : "user_data",
"prefix_log_group" : true,
"file_path" : var.runner_os == "windows" ? "C:/UserData.log" : "/var/log/user-data.log",
- "log_stream_name" : "{instance_id}"
+ "log_stream_name" : "{instance_id}",
+ "log_class" : "STANDARD"
},
{
"log_group_name" : "runner",
"prefix_log_group" : true,
"file_path" : var.runner_os == "windows" ? "C:/actions-runner/_diag/Runner_*.log" : "/opt/actions-runner/_diag/Runner_**.log",
- "log_stream_name" : "{instance_id}"
+ "log_stream_name" : "{instance_id}",
+ "log_class" : "STANDARD"
},
{
"log_group_name" : "runner-startup",
"prefix_log_group" : true,
"file_path" : var.runner_os == "windows" ? "C:/runner-startup.log" : "/var/log/runner-startup.log",
- "log_stream_name" : "{instance_id}"
+ "log_stream_name" : "{instance_id}",
+ "log_class" : "STANDARD"
}
]
)
@@ -33,9 +37,18 @@ locals {
"log_group_name" : l.prefix_log_group ? "/github-self-hosted-runners/${var.prefix}/${l.log_group_name}" : "/${l.log_group_name}"
"log_stream_name" : l.log_stream_name
"file_path" : l.file_path
+ "log_class" : l.log_class
}] : []
loggroups_names = distinct([for l in local.logfiles : l.log_group_name])
+ # Create a list of unique log classes corresponding to each log group name
+ # This maintains the same order as loggroups_names for use with count
+ loggroups_classes = [
+ for name in local.loggroups_names : [
+ for l in local.logfiles : l.log_class
+ if l.log_group_name == name
+ ][0]
+ ]
}
@@ -55,6 +68,7 @@ resource "aws_cloudwatch_log_group" "gh_runners" {
name = local.loggroups_names[count.index]
retention_in_days = var.logging_retention_in_days
kms_key_id = var.logging_kms_key_id
+ log_group_class = local.loggroups_classes[count.index]
tags = local.tags
}
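The `loggroups_classes` expression above picks, for each distinct group name, the `log_class` of the first logfile carrying that name. A worked sketch with hypothetical values:

```hcl
locals {
  # Hypothetical input illustrating the first-match semantics.
  logfiles = [
    { log_group_name = "runner", log_class = "INFREQUENT_ACCESS" },
    { log_group_name = "messages", log_class = "STANDARD" },
    { log_group_name = "runner", log_class = "STANDARD" },
  ]
  # distinct() preserves the first occurrence of each name.
  loggroups_names = distinct([for l in local.logfiles : l.log_group_name])
  # For each name, take the class of the FIRST matching logfile ([0]).
  loggroups_classes = [
    for name in local.loggroups_names : [
      for l in local.logfiles : l.log_class
      if l.log_group_name == name
    ][0]
  ]
  # Result: loggroups_names   = ["runner", "messages"]
  #         loggroups_classes = ["INFREQUENT_ACCESS", "STANDARD"]
}
```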
diff --git a/modules/runners/pool.tf b/modules/runners/pool.tf
index c11673860a..53c5d1c2cd 100644
--- a/modules/runners/pool.tf
+++ b/modules/runners/pool.tf
@@ -22,6 +22,7 @@ module "pool" {
log_level = var.log_level
logging_retention_in_days = var.logging_retention_in_days
logging_kms_key_id = var.logging_kms_key_id
+ log_class = var.log_class
reserved_concurrent_executions = var.pool_lambda_reserved_concurrent_executions
s3_bucket = var.lambda_s3_bucket
s3_key = var.runners_lambda_s3_key
diff --git a/modules/runners/pool/README.md b/modules/runners/pool/README.md
index a9194e0b93..a09538aced 100644
--- a/modules/runners/pool/README.md
+++ b/modules/runners/pool/README.md
@@ -49,7 +49,7 @@ No modules.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [aws\_partition](#input\_aws\_partition) | (optional) partition for the arn if not 'aws' | `string` | `"aws"` | no |
-| [config](#input\_config) | Lookup details in parent module. | object({ lambda = object({ log_level = string logging_retention_in_days = number logging_kms_key_id = string reserved_concurrent_executions = number s3_bucket = string s3_key = string s3_object_version = string security_group_ids = list(string) runtime = string architecture = string memory_size = number timeout = number zip = string subnet_ids = list(string) parameter_store_tags = string }) tags = map(string) ghes = object({ url = string ssl_verify = string }) github_app_parameters = object({ key_base64 = map(string) id = map(string) }) subnet_ids = list(string) runner = object({ disable_runner_autoupdate = bool ephemeral = bool enable_jit_config = bool enable_on_demand_failover_for_errors = list(string) scale_errors = list(string) boot_time_in_minutes = number labels = list(string) launch_template = object({ name = string }) group_name = string name_prefix = string pool_owner = string role = object({ arn = string }) }) instance_types = list(string) instance_target_capacity_type = string instance_allocation_strategy = string instance_max_spot_price = string prefix = string pool = list(object({ schedule_expression = string schedule_expression_timezone = string size = number })) role_permissions_boundary = string kms_key_arn = string ami_kms_key_arn = string ami_id_ssm_parameter_arn = string role_path = string ssm_token_path = string ssm_config_path = string ami_id_ssm_parameter_name = string ami_id_ssm_parameter_read_policy_arn = string arn_ssm_parameters_path_config = string lambda_tags = map(string) user_agent = string }) | n/a | yes |
+| [config](#input\_config) | Lookup details in parent module. | object({ lambda = object({ log_level = string logging_retention_in_days = number logging_kms_key_id = string log_class = string reserved_concurrent_executions = number s3_bucket = string s3_key = string s3_object_version = string security_group_ids = list(string) runtime = string architecture = string memory_size = number timeout = number zip = string subnet_ids = list(string) parameter_store_tags = string }) tags = map(string) ghes = object({ url = string ssl_verify = string }) github_app_parameters = object({ key_base64 = map(string) id = map(string) }) subnet_ids = list(string) runner = object({ disable_runner_autoupdate = bool ephemeral = bool enable_jit_config = bool enable_on_demand_failover_for_errors = list(string) scale_errors = list(string) boot_time_in_minutes = number labels = list(string) launch_template = object({ name = string }) group_name = string name_prefix = string pool_owner = string role = object({ arn = string }) }) instance_types = list(string) instance_target_capacity_type = string instance_allocation_strategy = string instance_max_spot_price = string prefix = string pool = list(object({ schedule_expression = string schedule_expression_timezone = string size = number })) role_permissions_boundary = string kms_key_arn = string ami_kms_key_arn = string ami_id_ssm_parameter_arn = string role_path = string ssm_token_path = string ssm_config_path = string ami_id_ssm_parameter_name = string ami_id_ssm_parameter_read_policy_arn = string arn_ssm_parameters_path_config = string lambda_tags = map(string) user_agent = string }) | n/a | yes |
| [tracing\_config](#input\_tracing\_config) | Configuration for lambda tracing. | object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }) | `{}` | no |
## Outputs
diff --git a/modules/runners/pool/main.tf b/modules/runners/pool/main.tf
index ced73825d4..5363f3c3fb 100644
--- a/modules/runners/pool/main.tf
+++ b/modules/runners/pool/main.tf
@@ -72,6 +72,7 @@ resource "aws_cloudwatch_log_group" "pool" {
name = "/aws/lambda/${aws_lambda_function.pool.function_name}"
retention_in_days = var.config.lambda.logging_retention_in_days
kms_key_id = var.config.lambda.logging_kms_key_id
+ log_group_class = var.config.lambda.log_class
tags = var.config.tags
}
diff --git a/modules/runners/pool/variables.tf b/modules/runners/pool/variables.tf
index d005f3479e..4bfdd68010 100644
--- a/modules/runners/pool/variables.tf
+++ b/modules/runners/pool/variables.tf
@@ -5,6 +5,7 @@ variable "config" {
log_level = string
logging_retention_in_days = number
logging_kms_key_id = string
+ log_class = string
reserved_concurrent_executions = number
s3_bucket = string
s3_key = string
diff --git a/modules/runners/scale-down.tf b/modules/runners/scale-down.tf
index a36f3b0532..b304e8066e 100644
--- a/modules/runners/scale-down.tf
+++ b/modules/runners/scale-down.tf
@@ -62,6 +62,7 @@ resource "aws_cloudwatch_log_group" "scale_down" {
name = "/aws/lambda/${aws_lambda_function.scale_down.function_name}"
retention_in_days = var.logging_retention_in_days
kms_key_id = var.logging_kms_key_id
+ log_group_class = var.log_class
tags = var.tags
}
diff --git a/modules/runners/scale-up.tf b/modules/runners/scale-up.tf
index 73bf4b6df6..c5503f6394 100644
--- a/modules/runners/scale-up.tf
+++ b/modules/runners/scale-up.tf
@@ -85,6 +85,7 @@ resource "aws_cloudwatch_log_group" "scale_up" {
name = "/aws/lambda/${aws_lambda_function.scale_up.function_name}"
retention_in_days = var.logging_retention_in_days
kms_key_id = var.logging_kms_key_id
+ log_group_class = var.log_class
tags = var.tags
}
diff --git a/modules/runners/ssm-housekeeper.tf b/modules/runners/ssm-housekeeper.tf
index b591938fae..ab226024e7 100644
--- a/modules/runners/ssm-housekeeper.tf
+++ b/modules/runners/ssm-housekeeper.tf
@@ -59,6 +59,7 @@ resource "aws_cloudwatch_log_group" "ssm_housekeeper" {
name = "/aws/lambda/${aws_lambda_function.ssm_housekeeper.function_name}"
retention_in_days = var.logging_retention_in_days
kms_key_id = var.logging_kms_key_id
+ log_group_class = var.log_class
tags = var.tags
}
diff --git a/modules/runners/variables.tf b/modules/runners/variables.tf
index db58a86b42..e2a33280b9 100644
--- a/modules/runners/variables.tf
+++ b/modules/runners/variables.tf
@@ -335,6 +335,17 @@ variable "logging_kms_key_id" {
default = null
}
+variable "log_class" {
+ description = "The log class of the CloudWatch log groups for the lambda functions. Valid values are `STANDARD` or `INFREQUENT_ACCESS`."
+ type = string
+ default = "STANDARD"
+
+ validation {
+ condition = contains(["STANDARD", "INFREQUENT_ACCESS"], var.log_class)
+ error_message = "`log_class` must be either `STANDARD` or `INFREQUENT_ACCESS`."
+ }
+}
+
variable "enable_ssm_on_runners" {
description = "Enable to allow access to the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances."
type = bool
@@ -395,12 +406,13 @@ variable "cloudwatch_config" {
}
variable "runner_log_files" {
- description = "(optional) List of logfiles to send to CloudWatch, will only be used if `enable_cloudwatch_agent` is set to true. Object description: `log_group_name`: Name of the log group, `prefix_log_group`: If true, the log group name will be prefixed with `/github-self-hosted-runners/`, `file_path`: path to the log file, `log_stream_name`: name of the log stream."
+ description = "(optional) List of logfiles to send to CloudWatch, will only be used if `enable_cloudwatch_agent` is set to true. Object description: `log_group_name`: Name of the log group, `prefix_log_group`: If true, the log group name will be prefixed with `/github-self-hosted-runners/`, `file_path`: path to the log file, `log_stream_name`: name of the log stream, `log_class`: The log class of the log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. Defaults to `STANDARD`."
type = list(object({
log_group_name = string
prefix_log_group = bool
file_path = string
log_stream_name = string
+ log_class = optional(string, "STANDARD")
}))
default = null
}
diff --git a/modules/termination-watcher/README.md b/modules/termination-watcher/README.md
index 788f4c5c13..dc6049ffec 100644
--- a/modules/termination-watcher/README.md
+++ b/modules/termination-watcher/README.md
@@ -82,7 +82,7 @@ No resources.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [config](#input\_config) | Configuration for the spot termination watcher. `aws_partition`: Partition for the base arn if not 'aws' `architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. `environment_variables`: Environment variables for the lambda. 'features': Features to enable the different lambda functions to handle spot termination events. `lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing. `lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. `logging_kms_key_id`: Specifies the kms key id to encrypt the logs with `logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. `memory_size`: Memory size limit in MB of the lambda. `prefix`: The prefix used for naming resources. `role_path`: The path that will be added to the role, if not set the environment name will be used. `role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda. `runtime`: AWS Lambda runtime. `s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. `s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. `s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. `security_group_ids`: List of security group IDs associated with the Lambda function. `subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. `tag_filters`: Map of tags that will be used to filter the resources to be tracked. Only for which all tags are present and starting with the same value as the value in the map will be tracked. `tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `timeout`: Time out of the lambda in seconds. `tracing_config`: Configuration for lambda tracing. `zip`: File location of the lambda zip file. | object({ aws_partition = optional(string, null) architecture = optional(string, null) environment_variables = optional(map(string), {}) features = optional(object({ enable_spot_termination_handler = optional(bool, true) enable_spot_termination_notification_watcher = optional(bool, true) }), {}) lambda_tags = optional(map(string), {}) log_level = optional(string, null) logging_kms_key_id = optional(string, null) logging_retention_in_days = optional(number, null) memory_size = optional(number, null) metrics = optional(object({ enable = optional(bool, false) namespace = optional(string, "GitHub Runners") metric = optional(object({ enable_spot_termination = optional(bool, true) enable_spot_termination_warning = optional(bool, true) }), {}) }), {}) prefix = optional(string, null) principals = optional(list(object({ type = string identifiers = list(string) })), []) role_path = optional(string, null) role_permissions_boundary = optional(string, null) runtime = optional(string, null) s3_bucket = optional(string, null) s3_key = optional(string, null) s3_object_version = optional(string, null) security_group_ids = optional(list(string), []) subnet_ids = optional(list(string), []) tag_filters = optional(map(string), null) tags = optional(map(string), {}) timeout = optional(number, null) tracing_config = optional(object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }), {}) zip = optional(string, null) }) | n/a | yes |
+| [config](#input\_config) | Configuration for the spot termination watcher. `aws_partition`: Partition for the base arn if not 'aws' `architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. `environment_variables`: Environment variables for the lambda. 'features': Features to enable the different lambda functions to handle spot termination events. `lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing. `lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. `log_class`: The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. `logging_kms_key_id`: Specifies the kms key id to encrypt the logs with `logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. `memory_size`: Memory size limit in MB of the lambda. `prefix`: The prefix used for naming resources. `role_path`: The path that will be added to the role, if not set the environment name will be used. `role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda. `runtime`: AWS Lambda runtime. `s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. `s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. `s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. `security_group_ids`: List of security group IDs associated with the Lambda function. `subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. `tag_filters`: Map of tags that will be used to filter the resources to be tracked. Only for which all tags are present and starting with the same value as the value in the map will be tracked. `tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `timeout`: Time out of the lambda in seconds. `tracing_config`: Configuration for lambda tracing. `zip`: File location of the lambda zip file. | object({ aws_partition = optional(string, null) architecture = optional(string, null) environment_variables = optional(map(string), {}) features = optional(object({ enable_spot_termination_handler = optional(bool, true) enable_spot_termination_notification_watcher = optional(bool, true) }), {}) lambda_tags = optional(map(string), {}) log_level = optional(string, null) log_class = optional(string, "STANDARD") logging_kms_key_id = optional(string, null) logging_retention_in_days = optional(number, null) memory_size = optional(number, null) metrics = optional(object({ enable = optional(bool, false) namespace = optional(string, "GitHub Runners") metric = optional(object({ enable_spot_termination = optional(bool, true) enable_spot_termination_warning = optional(bool, true) }), {}) }), {}) prefix = optional(string, null) principals = optional(list(object({ type = string identifiers = list(string) })), []) role_path = optional(string, null) role_permissions_boundary = optional(string, null) runtime = optional(string, null) s3_bucket = optional(string, null) s3_key = optional(string, null) s3_object_version = optional(string, null) security_group_ids = optional(list(string), []) subnet_ids = optional(list(string), []) tag_filters = optional(map(string), null) tags = optional(map(string), {}) timeout = optional(number, null) tracing_config = optional(object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }), {}) zip = optional(string, null) }) | n/a | yes |
## Outputs
diff --git a/modules/termination-watcher/variables.tf b/modules/termination-watcher/variables.tf
index a8d5fd4d7f..a7ad36da79 100644
--- a/modules/termination-watcher/variables.tf
+++ b/modules/termination-watcher/variables.tf
@@ -9,6 +9,7 @@ variable "config" {
`lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
+ `log_class`: The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
@@ -37,6 +38,7 @@ variable "config" {
}), {})
lambda_tags = optional(map(string), {})
log_level = optional(string, null)
+ log_class = optional(string, "STANDARD")
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, null)
memory_size = optional(number, null)
diff --git a/modules/webhook/README.md b/modules/webhook/README.md
index c2ff43775e..7a0c66c739 100644
--- a/modules/webhook/README.md
+++ b/modules/webhook/README.md
@@ -79,6 +79,7 @@ yarn run dist
| [lambda\_tags](#input\_lambda\_tags) | Map of tags that will be added to all the lambda function resources. Note these are additional tags to the default tags. | `map(string)` | `{}` | no |
| [lambda\_timeout](#input\_lambda\_timeout) | Time out of the lambda in seconds. | `number` | `10` | no |
| [lambda\_zip](#input\_lambda\_zip) | File location of the lambda zip file. | `string` | `null` | no |
+| [log\_class](#input\_log\_class) | The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. | `string` | `"STANDARD"` | no |
| [log\_level](#input\_log\_level) | Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. | `string` | `"info"` | no |
| [logging\_kms\_key\_id](#input\_logging\_kms\_key\_id) | Specifies the kms key id to encrypt the logs with | `string` | `null` | no |
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
diff --git a/modules/webhook/direct/README.md b/modules/webhook/direct/README.md
index aa69347ae4..55ca0473da 100644
--- a/modules/webhook/direct/README.md
+++ b/modules/webhook/direct/README.md
@@ -40,7 +40,7 @@ No modules.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [config](#input\_config) | Configuration object for all variables. | object({ prefix = string archive = optional(object({ enable = optional(bool, true) retention_days = optional(number, 7) }), {}) tags = optional(map(string), {}) lambda_subnet_ids = optional(list(string), []) lambda_security_group_ids = optional(list(string), []) sqs_job_queues_arns = list(string) lambda_zip = optional(string, null) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 10) role_permissions_boundary = optional(string, null) role_path = optional(string, null) logging_retention_in_days = optional(number, 180) logging_kms_key_id = optional(string, null) lambda_s3_bucket = optional(string, null) lambda_s3_key = optional(string, null) lambda_s3_object_version = optional(string, null) lambda_apigateway_access_log_settings = optional(object({ destination_arn = string format = string }), null) repository_white_list = optional(list(string), []) kms_key_arn = optional(string, null) log_level = optional(string, "info") lambda_runtime = optional(string, "nodejs24.x") aws_partition = optional(string, "aws") lambda_architecture = optional(string, "arm64") github_app_parameters = object({ webhook_secret = map(string) }) tracing_config = optional(object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }), {}) lambda_tags = optional(map(string), {}) api_gw_source_arn = string ssm_parameter_runner_matcher_config = list(object({ name = string arn = string version = string })) }) | n/a | yes |
+| [config](#input\_config) | Configuration object for all variables. | object({ prefix = string archive = optional(object({ enable = optional(bool, true) retention_days = optional(number, 7) }), {}) tags = optional(map(string), {}) lambda_subnet_ids = optional(list(string), []) lambda_security_group_ids = optional(list(string), []) sqs_job_queues_arns = list(string) lambda_zip = optional(string, null) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 10) role_permissions_boundary = optional(string, null) role_path = optional(string, null) logging_retention_in_days = optional(number, 180) logging_kms_key_id = optional(string, null) log_class = optional(string, "STANDARD") lambda_s3_bucket = optional(string, null) lambda_s3_key = optional(string, null) lambda_s3_object_version = optional(string, null) lambda_apigateway_access_log_settings = optional(object({ destination_arn = string format = string }), null) repository_white_list = optional(list(string), []) kms_key_arn = optional(string, null) log_level = optional(string, "info") lambda_runtime = optional(string, "nodejs24.x") aws_partition = optional(string, "aws") lambda_architecture = optional(string, "arm64") github_app_parameters = object({ webhook_secret = map(string) }) tracing_config = optional(object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }), {}) lambda_tags = optional(map(string), {}) api_gw_source_arn = string ssm_parameter_runner_matcher_config = list(object({ name = string arn = string version = string })) }) | n/a | yes |
## Outputs
diff --git a/modules/webhook/direct/variables.tf b/modules/webhook/direct/variables.tf
index 5da98e548a..4c4088eb1b 100644
--- a/modules/webhook/direct/variables.tf
+++ b/modules/webhook/direct/variables.tf
@@ -18,6 +18,7 @@ variable "config" {
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
+ log_class = optional(string, "STANDARD")
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
diff --git a/modules/webhook/direct/webhook.tf b/modules/webhook/direct/webhook.tf
index fda61dfa91..912829019a 100644
--- a/modules/webhook/direct/webhook.tf
+++ b/modules/webhook/direct/webhook.tf
@@ -58,6 +58,7 @@ resource "aws_cloudwatch_log_group" "webhook" {
name = "/aws/lambda/${aws_lambda_function.webhook.function_name}"
retention_in_days = var.config.logging_retention_in_days
kms_key_id = var.config.logging_kms_key_id
+ log_group_class = var.config.log_class
tags = var.config.tags
}
diff --git a/modules/webhook/eventbridge/README.md b/modules/webhook/eventbridge/README.md
index 5c22c69010..fa6fa9b7f3 100644
--- a/modules/webhook/eventbridge/README.md
+++ b/modules/webhook/eventbridge/README.md
@@ -54,7 +54,7 @@ No modules.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [config](#input\_config) | Configuration object for all variables. | object({ prefix = string archive = optional(object({ enable = optional(bool, true) retention_days = optional(number, 7) }), {}) tags = optional(map(string), {}) lambda_subnet_ids = optional(list(string), []) lambda_security_group_ids = optional(list(string), []) sqs_job_queues_arns = list(string) lambda_zip = optional(string, null) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 10) role_permissions_boundary = optional(string, null) role_path = optional(string, null) logging_retention_in_days = optional(number, 180) logging_kms_key_id = optional(string, null) lambda_s3_bucket = optional(string, null) lambda_s3_key = optional(string, null) lambda_s3_object_version = optional(string, null) lambda_apigateway_access_log_settings = optional(object({ destination_arn = string format = string }), null) repository_white_list = optional(list(string), []) kms_key_arn = optional(string, null) log_level = optional(string, "info") lambda_runtime = optional(string, "nodejs24.x") aws_partition = optional(string, "aws") lambda_architecture = optional(string, "arm64") github_app_parameters = object({ webhook_secret = map(string) }) tracing_config = optional(object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }), {}) lambda_tags = optional(map(string), {}) api_gw_source_arn = string ssm_parameter_runner_matcher_config = list(object({ name = string arn = string version = string })) accept_events = optional(list(string), null) }) | n/a | yes |
+| [config](#input\_config) | Configuration object for all variables. | object({ prefix = string archive = optional(object({ enable = optional(bool, true) retention_days = optional(number, 7) }), {}) tags = optional(map(string), {}) lambda_subnet_ids = optional(list(string), []) lambda_security_group_ids = optional(list(string), []) sqs_job_queues_arns = list(string) lambda_zip = optional(string, null) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 10) role_permissions_boundary = optional(string, null) role_path = optional(string, null) logging_retention_in_days = optional(number, 180) logging_kms_key_id = optional(string, null) log_class = optional(string, "STANDARD") lambda_s3_bucket = optional(string, null) lambda_s3_key = optional(string, null) lambda_s3_object_version = optional(string, null) lambda_apigateway_access_log_settings = optional(object({ destination_arn = string format = string }), null) repository_white_list = optional(list(string), []) kms_key_arn = optional(string, null) log_level = optional(string, "info") lambda_runtime = optional(string, "nodejs24.x") aws_partition = optional(string, "aws") lambda_architecture = optional(string, "arm64") github_app_parameters = object({ webhook_secret = map(string) }) tracing_config = optional(object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }), {}) lambda_tags = optional(map(string), {}) api_gw_source_arn = string ssm_parameter_runner_matcher_config = list(object({ name = string arn = string version = string })) accept_events = optional(list(string), null) }) | n/a | yes |
## Outputs
diff --git a/modules/webhook/eventbridge/dispatcher.tf b/modules/webhook/eventbridge/dispatcher.tf
index 2a0e733fbb..f199e129e9 100644
--- a/modules/webhook/eventbridge/dispatcher.tf
+++ b/modules/webhook/eventbridge/dispatcher.tf
@@ -73,6 +73,7 @@ resource "aws_cloudwatch_log_group" "dispatcher" {
name = "/aws/lambda/${aws_lambda_function.dispatcher.function_name}"
retention_in_days = var.config.logging_retention_in_days
kms_key_id = var.config.logging_kms_key_id
+ log_group_class = var.config.log_class
tags = var.config.tags
}
diff --git a/modules/webhook/eventbridge/variables.tf b/modules/webhook/eventbridge/variables.tf
index e39f24ab6d..907523d67d 100644
--- a/modules/webhook/eventbridge/variables.tf
+++ b/modules/webhook/eventbridge/variables.tf
@@ -18,6 +18,7 @@ variable "config" {
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
+ log_class = optional(string, "STANDARD")
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
diff --git a/modules/webhook/eventbridge/webhook.tf b/modules/webhook/eventbridge/webhook.tf
index 66d8baef18..60f4f1119f 100644
--- a/modules/webhook/eventbridge/webhook.tf
+++ b/modules/webhook/eventbridge/webhook.tf
@@ -62,6 +62,7 @@ resource "aws_cloudwatch_log_group" "webhook" {
name = "/aws/lambda/${aws_lambda_function.webhook.function_name}"
retention_in_days = var.config.logging_retention_in_days
kms_key_id = var.config.logging_kms_key_id
+ log_group_class = var.config.log_class
tags = var.config.tags
}
diff --git a/modules/webhook/variables.tf b/modules/webhook/variables.tf
index 6da7fc122d..e5eef96f39 100644
--- a/modules/webhook/variables.tf
+++ b/modules/webhook/variables.tf
@@ -82,6 +82,17 @@ variable "logging_kms_key_id" {
default = null
}
+variable "log_class" {
+ description = "The log class of the CloudWatch log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`."
+ type = string
+ default = "STANDARD"
+
+ validation {
+ condition = contains(["STANDARD", "INFREQUENT_ACCESS"], var.log_class)
+ error_message = "`log_class` must be either `STANDARD` or `INFREQUENT_ACCESS`."
+ }
+}
+
variable "lambda_s3_bucket" {
description = "S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly."
type = string
diff --git a/modules/webhook/webhook.tf b/modules/webhook/webhook.tf
index 0b9310afe3..0516a98c21 100644
--- a/modules/webhook/webhook.tf
+++ b/modules/webhook/webhook.tf
@@ -64,6 +64,7 @@ module "direct" {
role_path = local.role_path,
logging_retention_in_days = var.logging_retention_in_days,
logging_kms_key_id = var.logging_kms_key_id,
+ log_class = var.log_class,
lambda_s3_bucket = var.lambda_s3_bucket,
lambda_s3_key = var.webhook_lambda_s3_key,
lambda_s3_object_version = var.webhook_lambda_s3_object_version,
@@ -105,6 +106,7 @@ module "eventbridge" {
role_path = local.role_path,
logging_retention_in_days = var.logging_retention_in_days,
logging_kms_key_id = var.logging_kms_key_id,
+ log_class = var.log_class,
lambda_s3_bucket = var.lambda_s3_bucket,
lambda_s3_key = var.webhook_lambda_s3_key,
lambda_s3_object_version = var.webhook_lambda_s3_object_version,
diff --git a/variables.tf b/variables.tf
index fe97d6ce4b..fe0859be85 100644
--- a/variables.tf
+++ b/variables.tf
@@ -370,6 +370,17 @@ variable "logging_kms_key_id" {
default = null
}
+variable "log_class" {
+ description = "The log class of the CloudWatch log groups. Valid values are `STANDARD` or `INFREQUENT_ACCESS`."
+ type = string
+ default = "STANDARD"
+
+ validation {
+ condition = contains(["STANDARD", "INFREQUENT_ACCESS"], var.log_class)
+ error_message = "`log_class` must be either `STANDARD` or `INFREQUENT_ACCESS`."
+ }
+}
+
variable "block_device_mappings" {
description = "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`."
type = list(object({
@@ -485,12 +496,13 @@ variable "cloudwatch_config" {
}
variable "runner_log_files" {
- description = "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
+ description = "(optional) List of logfiles to send to CloudWatch, will only be used if `enable_cloudwatch_agent` is set to true. Object description: `log_group_name`: Name of the log group, `prefix_log_group`: If true, the log group name will be prefixed with `/github-self-hosted-runners/`, `file_path`: path to the log file, `log_stream_name`: name of the log stream, `log_class`: The log class of the log group. Valid values are `STANDARD` or `INFREQUENT_ACCESS`. Defaults to `STANDARD`."
type = list(object({
log_group_name = string
prefix_log_group = bool
file_path = string
log_stream_name = string
+ log_class = optional(string, "STANDARD")
}))
default = null
}
From ec2e7853612b774856f8f4163550d21f86da6d7a Mon Sep 17 00:00:00 2001
From: "runners-releaser[bot]"
<194412594+runners-releaser[bot]@users.noreply.github.com>
Date: Wed, 11 Mar 2026 16:33:02 +0100
Subject: [PATCH 18/22] chore(main): release 7.5.0 (#5063)
:robot: I have created a release *beep* *boop*
---
##
[7.5.0](https://github.com/github-aws-runners/terraform-aws-github-runner/compare/v7.4.1...v7.5.0)
(2026-03-11)
### Features
* **lambdas:** add batch SSM parameter fetching to reduce API calls
([#5017](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5017))
([24857c2](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/24857c2a0d7d02e38cbd9b4dda2e652973fcf975))
* **logging:** add log_class parameter to runner log files configuration
([#5036](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5036))
([3509d4c](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/3509d4c7afaff751715db940403287aa16be3c44))
---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).
Co-authored-by: runners-releaser[bot] <194412594+runners-releaser[bot]@users.noreply.github.com>
---
CHANGELOG.md | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index d0257eedd2..48b9d7c414 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,13 @@
# Changelog
+## [7.5.0](https://github.com/github-aws-runners/terraform-aws-github-runner/compare/v7.4.1...v7.5.0) (2026-03-11)
+
+
+### Features
+
+* **lambdas:** add batch SSM parameter fetching to reduce API calls ([#5017](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5017)) ([24857c2](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/24857c2a0d7d02e38cbd9b4dda2e652973fcf975))
+* **logging:** add log_class parameter to runner log files configuration ([#5036](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5036)) ([3509d4c](https://github.com/github-aws-runners/terraform-aws-github-runner/commit/3509d4c7afaff751715db940403287aa16be3c44))
+
## [7.4.1](https://github.com/github-aws-runners/terraform-aws-github-runner/compare/v7.4.0...v7.4.1) (2026-03-09)
From 7ee2c4cf461dce36c9513da061222b22914c8cfc Mon Sep 17 00:00:00 2001
From: Noah <105475352+Noah-mh@users.noreply.github.com>
Date: Wed, 18 Mar 2026 22:51:40 +0800
Subject: [PATCH 19/22] fix(logging): update log_class to log_group_class in
CloudWatch agent configuration (#5073)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
### Description
CloudWatch agent config stored in SSM used log_class inside each
collect_list entry. The agent’s JSON schema only allows log_group_class
there, so validation failed with “Additional property log_class is not
allowed” and runner user-data exited before the GitHub runner started
([issue
#5065](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5065)).
This PR maps the Terraform log_class value to log_group_class in the
serialized logfiles blob passed to cloudwatch_config.json, and updates
loggroups_classes to read log_group_class from local.logfiles so
aws_cloudwatch_log_group behavior stays aligned.
## Related Issues
Fixes
[#5065](https://github.com/github-aws-runners/terraform-aws-github-runner/issues/5065)
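The rename described above can be sketched as follows. The interfaces here are assumptions inferred from the fields visible in the diff below; the real mapping is done in Terraform (modules/runners/logging.tf), not in TypeScript:

```typescript
// Sketch of the log_class -> log_group_class rename. The Terraform variable
// keeps the name log_class, but the CloudWatch agent's collect_list schema
// only accepts log_group_class, so the field is renamed during serialization.
interface TerraformLogFile {
  log_group_name: string;
  log_stream_name: string;
  file_path: string;
  log_class: string; // "STANDARD" | "INFREQUENT_ACCESS"
}

interface CollectListEntry {
  log_group_name: string;
  log_stream_name: string;
  file_path: string;
  log_group_class: string; // the key the agent schema validates
}

// Rename the field, leaving the rest of each entry untouched.
function toCollectList(files: TerraformLogFile[]): CollectListEntry[] {
  return files.map(({ log_class, ...rest }) => ({ ...rest, log_group_class: log_class }));
}
```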
---
modules/runners/logging.tf | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/modules/runners/logging.tf b/modules/runners/logging.tf
index ad875eecf3..8a348b7751 100644
--- a/modules/runners/logging.tf
+++ b/modules/runners/logging.tf
@@ -33,11 +33,12 @@ locals {
}
]
)
+ # CloudWatch agent collect_list schema expects log_group_class, not log_class
logfiles = var.enable_cloudwatch_agent ? [for l in local.runner_log_files : {
"log_group_name" : l.prefix_log_group ? "/github-self-hosted-runners/${var.prefix}/${l.log_group_name}" : "/${l.log_group_name}"
"log_stream_name" : l.log_stream_name
"file_path" : l.file_path
- "log_class" : l.log_class
+ "log_group_class" : l.log_class
}] : []
loggroups_names = distinct([for l in local.logfiles : l.log_group_name])
@@ -45,7 +46,7 @@ locals {
# This maintains the same order as loggroups_names for use with count
loggroups_classes = [
for name in local.loggroups_names : [
- for l in local.logfiles : l.log_class
+ for l in local.logfiles : l.log_group_class
if l.log_group_name == name
][0]
]
From 6a63b3688d3876c3062e1e364878ee87cf5e2962 Mon Sep 17 00:00:00 2001
From: Stuart Pearson <1926002+stuartp44@users.noreply.github.com>
Date: Mon, 30 Mar 2026 11:14:28 +0100
Subject: [PATCH 20/22] feat(runner): add source parameter to distinguish
between scale-up and pool lambda (#5054)
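The `LambdaRunnerSource` type this patch imports is not shown in this excerpt. Based on the two values exercised by the tests ('scale-up-lambda' and 'pool-lambda'), it is presumably a string union along these lines — a sketch, not the actual definition from scale-up.ts:

```typescript
// Assumed shape of LambdaRunnerSource, inferred from the test values below.
export type LambdaRunnerSource = 'scale-up-lambda' | 'pool-lambda';

// Tagging each created runner with its source lets operators tell instances
// launched on demand by the scale-up lambda apart from pre-warmed pool instances.
function describeSource(source: LambdaRunnerSource): string {
  return source === 'pool-lambda' ? 'pre-warmed by pool' : 'created on demand';
}
```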
---
.../control-plane/src/aws/runners.d.ts | 2 +
.../control-plane/src/aws/runners.test.ts | 55 ++++++++++++++++++-
.../control-plane/src/aws/runners.ts | 2 +-
.../control-plane/src/pool/pool.test.ts | 32 +++++++++--
.../functions/control-plane/src/pool/pool.ts | 1 +
.../src/scale-runners/scale-up.test.ts | 1 +
.../src/scale-runners/scale-up.ts | 5 ++
7 files changed, 92 insertions(+), 6 deletions(-)
diff --git a/lambdas/functions/control-plane/src/aws/runners.d.ts b/lambdas/functions/control-plane/src/aws/runners.d.ts
index 7e9bf0fbba..c891500f27 100644
--- a/lambdas/functions/control-plane/src/aws/runners.d.ts
+++ b/lambdas/functions/control-plane/src/aws/runners.d.ts
@@ -1,4 +1,5 @@
import { DefaultTargetCapacityType, SpotAllocationStrategy } from '@aws-sdk/client-ec2';
+import { LambdaRunnerSource } from '../scale-runners/scale-up';
export type RunnerType = 'Org' | 'Repo';
@@ -42,6 +43,7 @@ export interface RunnerInputParameters {
instanceAllocationStrategy: SpotAllocationStrategy;
};
numberOfRunners: number;
+ source: LambdaRunnerSource;
amiIdSsmParameterName?: string;
tracingEnabled?: boolean;
onDemandFailoverOnError?: string[];
diff --git a/lambdas/functions/control-plane/src/aws/runners.test.ts b/lambdas/functions/control-plane/src/aws/runners.test.ts
index 63f1412dd0..4243e4b06b 100644
--- a/lambdas/functions/control-plane/src/aws/runners.test.ts
+++ b/lambdas/functions/control-plane/src/aws/runners.test.ts
@@ -21,6 +21,7 @@ import { beforeEach, describe, expect, it, vi } from 'vitest';
import ScaleError from './../scale-runners/ScaleError';
import { createRunner, listEC2Runners, tag, terminateRunner, untag } from './runners';
import type { RunnerInfo, RunnerInputParameters, RunnerType } from './runners.d';
+import { LambdaRunnerSource } from '../scale-runners/scale-up';
process.env.AWS_REGION = 'eu-east-1';
const mockEC2Client = mockClient(EC2Client);
@@ -318,6 +319,8 @@ describe('create runner', () => {
allocationStrategy: SpotAllocationStrategy.CAPACITY_OPTIMIZED,
capacityType: 'spot',
type: 'Org',
+ scaleErrors: [],
+ source: 'scale-up-lambda',
};
const defaultExpectedFleetRequestValues: ExpectedFleetRequestValues = {
@@ -325,6 +328,7 @@ describe('create runner', () => {
capacityType: 'spot',
allocationStrategy: SpotAllocationStrategy.CAPACITY_OPTIMIZED,
totalTargetCapacity: 1,
+ source: 'scale-up-lambda',
};
beforeEach(() => {
@@ -365,6 +369,25 @@ describe('create runner', () => {
});
});
+ it('calls create fleet of multiple instances with pool-lambda source when specified', async () => {
+ const instances = [{ InstanceIds: ['i-1234', 'i-5678', 'i-9012'] }];
+
+ mockEC2Client.on(CreateFleetCommand).resolves({ Instances: instances });
+
+ await createRunner({
+ ...createRunnerConfig({ ...defaultRunnerConfig, source: 'pool-lambda' }),
+ numberOfRunners: 3,
+ });
+
+ expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, {
+ ...expectedCreateFleetRequest({
+ ...defaultExpectedFleetRequestValues,
+ totalTargetCapacity: 3,
+ source: 'pool-lambda',
+ }),
+ });
+ });
+
it('calls create fleet of 1 instance with the on-demand capacity', async () => {
await createRunner(createRunnerConfig({ ...defaultRunnerConfig, capacityType: 'on-demand' }));
expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, {
@@ -425,6 +448,28 @@ describe('create runner', () => {
}),
});
});
+
+ it('calls create fleet with source set to scale-up-lambda when source is specified', async () => {
+ await createRunner(createRunnerConfig({ ...defaultRunnerConfig, source: 'scale-up-lambda' }));
+
+ expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, {
+ ...expectedCreateFleetRequest({
+ ...defaultExpectedFleetRequestValues,
+ source: 'scale-up-lambda',
+ }),
+ });
+ });
+
+ it('calls create fleet with source set to pool-lambda when source is specified', async () => {
+ await createRunner(createRunnerConfig({ ...defaultRunnerConfig, source: 'pool-lambda' }));
+
+ expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, {
+ ...expectedCreateFleetRequest({
+ ...defaultExpectedFleetRequestValues,
+ source: 'pool-lambda',
+ }),
+ });
+ });
});
describe('create runner with errors', () => {
@@ -433,12 +478,14 @@ describe('create runner with errors', () => {
capacityType: 'spot',
type: 'Repo',
scaleErrors: ['UnfulfillableCapacity', 'MaxSpotInstanceCountExceeded'],
+ source: 'scale-up-lambda',
};
const defaultExpectedFleetRequestValues: ExpectedFleetRequestValues = {
type: 'Repo',
capacityType: 'spot',
allocationStrategy: SpotAllocationStrategy.CAPACITY_OPTIMIZED,
totalTargetCapacity: 1,
+ source: 'scale-up-lambda',
};
beforeEach(() => {
vi.clearAllMocks();
@@ -546,12 +593,15 @@ describe('create runner with errors fail over to OnDemand', () => {
capacityType: 'spot',
type: 'Repo',
onDemandFailoverOnError: ['InsufficientInstanceCapacity'],
+ scaleErrors: [],
+ source: 'scale-up-lambda',
};
const defaultExpectedFleetRequestValues: ExpectedFleetRequestValues = {
type: 'Repo',
capacityType: 'spot',
allocationStrategy: SpotAllocationStrategy.CAPACITY_OPTIMIZED,
totalTargetCapacity: 1,
+ source: 'scale-up-lambda',
};
beforeEach(() => {
vi.clearAllMocks();
@@ -704,6 +754,7 @@ interface RunnerConfig {
tracingEnabled?: boolean;
onDemandFailoverOnError?: string[];
scaleErrors: string[];
+ source: LambdaRunnerSource;
}
function createRunnerConfig(runnerConfig: RunnerConfig): RunnerInputParameters {
@@ -724,6 +775,7 @@ function createRunnerConfig(runnerConfig: RunnerConfig): RunnerInputParameters {
tracingEnabled: runnerConfig.tracingEnabled,
onDemandFailoverOnError: runnerConfig.onDemandFailoverOnError,
scaleErrors: runnerConfig.scaleErrors,
+ source: runnerConfig.source,
};
}
@@ -735,6 +787,7 @@ interface ExpectedFleetRequestValues {
totalTargetCapacity: number;
imageId?: string;
tracingEnabled?: boolean;
+ source: LambdaRunnerSource;
}
function expectedCreateFleetRequest(expectedValues: ExpectedFleetRequestValues): CreateFleetCommandInput {
@@ -742,7 +795,7 @@ function expectedCreateFleetRequest(expectedValues: ExpectedFleetRequestValues):
{ Key: 'ghr:Application', Value: 'github-action-runner' },
{
Key: 'ghr:created_by',
- Value: expectedValues.totalTargetCapacity > 1 ? 'pool-lambda' : 'scale-up-lambda',
+ Value: expectedValues.source,
},
{ Key: 'ghr:Type', Value: expectedValues.type },
{ Key: 'ghr:Owner', Value: REPO_NAME },
diff --git a/lambdas/functions/control-plane/src/aws/runners.ts b/lambdas/functions/control-plane/src/aws/runners.ts
index 7f7f5750bf..193c82d2e7 100644
--- a/lambdas/functions/control-plane/src/aws/runners.ts
+++ b/lambdas/functions/control-plane/src/aws/runners.ts
@@ -241,7 +241,7 @@ async function createInstances(
) {
const tags = [
{ Key: 'ghr:Application', Value: 'github-action-runner' },
- { Key: 'ghr:created_by', Value: runnerParameters.numberOfRunners === 1 ? 'scale-up-lambda' : 'pool-lambda' },
+ { Key: 'ghr:created_by', Value: runnerParameters.source },
{ Key: 'ghr:Type', Value: runnerParameters.runnerType },
{ Key: 'ghr:Owner', Value: runnerParameters.runnerOwner },
];
diff --git a/lambdas/functions/control-plane/src/pool/pool.test.ts b/lambdas/functions/control-plane/src/pool/pool.test.ts
index aaa6aea715..ee4e36a463 100644
--- a/lambdas/functions/control-plane/src/pool/pool.test.ts
+++ b/lambdas/functions/control-plane/src/pool/pool.test.ts
@@ -192,7 +192,13 @@ describe('Test simple pool.', () => {
it('Top up pool with pool size 2 registered.', async () => {
await adjust({ poolSize: 3 });
expect(createRunners).toHaveBeenCalledTimes(1);
- expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 1, expect.anything());
+ expect(createRunners).toHaveBeenCalledWith(
+ expect.anything(),
+ expect.anything(),
+ 1,
+ expect.anything(),
+ 'pool-lambda',
+ );
});
it('Should not top up if pool size is reached.', async () => {
@@ -268,7 +274,13 @@ describe('Test simple pool.', () => {
it('Top up if the pool size is set to 5', async () => {
await adjust({ poolSize: 5 });
// 2 idle, top up with 3 to match a pool of 5
- expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 3, expect.anything());
+ expect(createRunners).toHaveBeenCalledWith(
+ expect.anything(),
+ expect.anything(),
+ 3,
+ expect.anything(),
+ 'pool-lambda',
+ );
});
});
@@ -283,7 +295,13 @@ describe('Test simple pool.', () => {
it('Top up if the pool size is set to 5', async () => {
await adjust({ poolSize: 5 });
// 2 idle, top up with 3 to match a pool of 5
- expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 3, expect.anything());
+ expect(createRunners).toHaveBeenCalledWith(
+ expect.anything(),
+ expect.anything(),
+ 3,
+ expect.anything(),
+ 'pool-lambda',
+ );
});
});
@@ -333,7 +351,13 @@ describe('Test simple pool.', () => {
await adjust({ poolSize: 5 });
// 2 idle, 2 prefixed idle top up with 1 to match a pool of 5
- expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 1, expect.anything());
+ expect(createRunners).toHaveBeenCalledWith(
+ expect.anything(),
+ expect.anything(),
+ 1,
+ expect.anything(),
+ 'pool-lambda',
+ );
});
});
});
diff --git a/lambdas/functions/control-plane/src/pool/pool.ts b/lambdas/functions/control-plane/src/pool/pool.ts
index 685dcd1284..cece8d9951 100644
--- a/lambdas/functions/control-plane/src/pool/pool.ts
+++ b/lambdas/functions/control-plane/src/pool/pool.ts
@@ -106,6 +106,7 @@ export async function adjust(event: PoolEvent): Promise<void> {
},
topUp,
githubInstallationClient,
+ 'pool-lambda',
);
} else {
logger.info(`Pool will not be topped up. Found ${numberOfRunnersInPool} managed idle runners.`);
diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
index 458d89763e..8ac2c14489 100644
--- a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
+++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts
@@ -113,6 +113,7 @@ const EXPECTED_RUNNER_PARAMS: RunnerInputParameters = {
tracingEnabled: false,
onDemandFailoverOnError: [],
scaleErrors: ['UnfulfillableCapacity', 'MaxSpotInstanceCountExceeded', 'TargetCapacityLimitExceededException'],
+ source: 'scale-up-lambda',
};
let expectedRunnerParams: RunnerInputParameters;
diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
index 759be95089..395c87e8f8 100644
--- a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
+++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts
@@ -11,6 +11,8 @@ import { publishRetryMessage } from './job-retry';
const logger = createChildLogger('scale-up');
+export type LambdaRunnerSource = 'scale-up-lambda' | 'pool-lambda';
+
export interface RunnerGroup {
name: string;
id: number;
@@ -248,11 +250,13 @@ export async function createRunners(
ec2RunnerConfig: CreateEC2RunnerConfig,
numberOfRunners: number,
ghClient: Octokit,
+ source: LambdaRunnerSource = 'scale-up-lambda',
): Promise<void> {
const instances = await createRunner({
runnerType: githubRunnerConfig.runnerType,
runnerOwner: githubRunnerConfig.runnerOwner,
numberOfRunners,
+ source,
...ec2RunnerConfig,
});
if (instances.length !== 0) {
@@ -507,6 +511,7 @@ export async function scaleUp(payloads: ActionRequestMessageSQS[]): Promise<void> {
Date: Wed, 1 Apr 2026 13:22:36 +0000
Subject: [PATCH 21/22] docs: auto update terraform docs
---
modules/multi-runner/README.md | 8 ++++----
modules/webhook/README.md | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md
index bd7c98d445..3e8bcbdd85 100644
--- a/modules/multi-runner/README.md
+++ b/modules/multi-runner/README.md
@@ -125,12 +125,12 @@ module "multi-runner" {
| [associate\_public\_ipv4\_address](#input\_associate\_public\_ipv4\_address) | Associate public IPv4 with the runner. Only tested with IPv4 | `bool` | `false` | no |
| [aws\_partition](#input\_aws\_partition) | (optiona) partition in the arn namespace to use if not 'aws' | `string` | `"aws"` | no |
| [aws\_region](#input\_aws\_region) | AWS region. | `string` | n/a | yes |
-| [cloudwatch\_config](#input\_cloudwatch\_config) | (optional) Replaces the module default cloudwatch log config. See for details. | `string` | `null` | no |
+| [cloudwatch\_config](#input\_cloudwatch\_config) | (optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details. | `string` | `null` | no |
| [enable\_ami\_housekeeper](#input\_enable\_ami\_housekeeper) | Option to disable the lambda to clean up old AMIs. | `bool` | `false` | no |
| [enable\_managed\_runner\_security\_group](#input\_enable\_managed\_runner\_security\_group) | Enabling the default managed security group creation. Unmanaged security groups can be specified via `runner_additional_security_group_ids`. | `bool` | `true` | no |
| [eventbridge](#input\_eventbridge) | Enable the use of EventBridge by the module. By enabling this feature events will be put on the EventBridge by the webhook instead of directly dispatching to queues for scaling. | object({ enable = optional(bool, true) accept_events = optional(list(string), []) }) | `{}` | no |
| [ghes\_ssl\_verify](#input\_ghes\_ssl\_verify) | GitHub Enterprise SSL verification. Set to 'false' when custom certificate (chains) is used for GitHub Enterprise Server (insecure). | `bool` | `true` | no |
-| [ghes\_url](#input\_ghes\_url) | GitHub Enterprise Server URL. Example: - DO NOT SET IF USING PUBLIC GITHUB. .However if you are using GitHub Enterprise Cloud with data-residency (ghe.com), set the endpoint here. Example - | `string` | `null` | no |
+| [ghes\_url](#input\_ghes\_url) | GitHub Enterprise Server URL. Example: https://github.internal.co - DO NOT SET IF USING PUBLIC GITHUB. However, if you are using GitHub Enterprise Cloud with data-residency (ghe.com), set the endpoint here. Example - https://companyname.ghe.com | `string` | `null` | no |
| [github\_app](#input\_github\_app) | GitHub app parameters, see your github app. You can optionally create the SSM parameters yourself and provide the ARN and name here, through the `*_ssm` attributes. If you chose to provide the configuration values directly here, please ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). Note: the provided SSM parameters arn and name have a precedence over the actual value (i.e `key_base64_ssm` has a precedence over `key_base64` etc). | object({ key_base64 = optional(string) key_base64_ssm = optional(object({ arn = string name = string })) id = optional(string) id_ssm = optional(object({ arn = string name = string })) webhook_secret = optional(string) webhook_secret_ssm = optional(object({ arn = string name = string })) }) | n/a | yes |
| [instance\_profile\_path](#input\_instance\_profile\_path) | The path that will be added to the instance\_profile, if not set the environment name will be used. | `string` | `null` | no |
| [instance\_termination\_watcher](#input\_instance\_termination\_watcher) | Configuration for the spot termination watcher lambda function. This feature is Beta, changes will not trigger a major release as long in beta. `enable`: Enable or disable the spot termination watcher. `memory_size`: Memory size limit in MB of the lambda. `s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. `s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. `timeout`: Time out of the lambda in seconds. `zip`: File location of the lambda zip file. | object({ enable = optional(bool, false) features = optional(object({ enable_spot_termination_handler = optional(bool, true) enable_spot_termination_notification_watcher = optional(bool, true) }), {}) memory_size = optional(number, null) s3_key = optional(string, null) s3_object_version = optional(string, null) timeout = optional(number, null) zip = optional(string, null) }) | `{}` | no |
@@ -151,12 +151,12 @@ module "multi-runner" {
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
| [matcher\_config\_parameter\_store\_tier](#input\_matcher\_config\_parameter\_store\_tier) | The tier of the parameter store for the matcher configuration. Valid values are `Standard`, and `Advanced`. | `string` | `"Standard"` | no |
| [metrics](#input\_metrics) | Configuration for metrics created by the module, by default metrics are disabled to avoid additional costs. When metrics are enable all metrics are created unless explicit configured otherwise. | object({ enable = optional(bool, false) namespace = optional(string, "GitHub Runners") metric = optional(object({ enable_github_app_rate_limit = optional(bool, true) enable_job_retry = optional(bool, true) enable_spot_termination_warning = optional(bool, true) }), {}) }) | `{}` | no |
-| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = { runner\_config: { runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)." runner\_architecture: "The platform architecture of the runner instance\_type." runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." create\_service\_linked\_role\_spot: (Optional) create the serviced linked role for spot instances that is required by the scale-up lambda. credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`. delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)" ebs\_optimized: "The EC2 EBS optimized configuration." enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once." enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registraton. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners an can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners." enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. 
When not defined the default behavior is to retry later." scale\_errors: "List of aws error codes that should trigger retry during scale up. This list will replace the default errors defined in the variable `defaultScaleErrors` in " enable\_organization\_runners: "Register runners to organization, instead of repo level" enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-build AMI." enable\_ssm\_on\_runners: "Enable to allow access the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances." enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI." instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`." instance\_max\_spot\_price: "Max price price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet." instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`." instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)." job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged" minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before terminated if not busy." pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported." runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. 
If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner." runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored." runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner." runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case you on own start script is used, this configuration parameter needs to be parsed via SSM." runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Labels checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided." runner\_group\_name: "Name of the runner group." runner\_name\_prefix: "Prefix for the GitHub runner name." runner\_run\_as: "Run the GitHub actions agent as user." runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` desiables the maximum check." scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down." scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations." userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored." enable\_jit\_config "Overwrite the default behavior for JIT configuration. By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. 
In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI." enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See for details." enable\_cloudwatch\_agent: "Enabling the cloudwatch agent on the ec2 runner instances, the runner contains default config. Configuration can be overridden via `cloudwatch_config`." cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See for details." userdata\_pre\_install: "Script to be ran before the GitHub Actions runner is installed on the EC2 instances" userdata\_post\_install: "Script to be ran after the GitHub Actions runner is installed on the EC2 instances" runner\_hook\_job\_started: "Script to be ran in the runner environment at the beginning of every job" runner\_hook\_job\_completed: "Script to be ran in the runner environment at the end of every job" runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications." runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role" vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`." subnet\_ids: "List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. If not set, uses the value of `var.subnet_ids`." idle\_config: "List of time period that can be defined as cron expression to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle." runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See for details." block\_device\_mappings: "The EC2 instance block device configuration. 
Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`." job\_retry: "Experimental! Can be removed / changed without trigger a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda is checking after a delay if the job is queued. If not the message will be published again on the scale-up (build queue). Using this feature can impact the rate limit of the GitHub app." pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)." } matcherConfig: { labelMatchers: "The list of list of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`" exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook." priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999." } redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true`, `maxReceiveCount` to a number of max retries." 
} | map(object({ runner_config = object({ runner_os = string runner_architecture = string runner_metadata_options = optional(map(any), { instance_metadata_tags = "enabled" http_endpoint = "enabled" http_tokens = "required" http_put_response_hop_limit = 1 }) ami = optional(object({ filter = optional(map(list(string)), { state = ["available"] }) owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) disable_runner_autoupdate = optional(bool, false) ebs_optimized = optional(bool, false) enable_ephemeral_runners = optional(bool, false) enable_job_queued_check = optional(bool, null) enable_on_demand_failover_for_errors = optional(list(string), []) scale_errors = optional(list(string), [ "UnfulfillableCapacity", "MaxSpotInstanceCountExceeded", "TargetCapacityLimitExceededException", "RequestLimitExceeded", "ResourceLimitExceeded", "MaxSpotInstanceCountExceeded", "MaxSpotFleetRequestCountExceeded", "InsufficientInstanceCapacity", "InsufficientCapacityOnHost", ]) enable_organization_runners = optional(bool, false) enable_runner_binaries_syncer = optional(bool, true) enable_ssm_on_runners = optional(bool, false) enable_userdata = optional(bool, true) instance_allocation_strategy = optional(string, "lowest-price") instance_max_spot_price = optional(string, null) instance_target_capacity_type = optional(string, "spot") instance_types = list(string) job_queue_retention_in_seconds = optional(number, 86400) minimum_running_time_in_minutes = optional(number, null) pool_runner_owner = optional(string, null) runner_as_root = optional(bool, false) runner_boot_time_in_minutes = optional(number, 5) runner_disable_default_labels = optional(bool, false) runner_extra_labels = optional(list(string), []) runner_group_name = optional(string, "Default") 
runner_name_prefix = optional(string, "") runner_run_as = optional(string, "ec2-user") runners_maximum_count = number runner_additional_security_group_ids = optional(list(string), []) scale_down_schedule_expression = optional(string, "cron(*/5* ** ? *)") scale_up_reserved_concurrent_executions = optional(number, 1) userdata_template = optional(string, null) userdata_content = optional(string, null) enable_jit_config = optional(bool, null) enable_runner_detailed_monitoring = optional(bool, false) enable_cloudwatch_agent = optional(bool, true) cloudwatch_config = optional(string, null) userdata_pre_install = optional(string, "") userdata_post_install = optional(string, "") runner_hook_job_started = optional(string, "") runner_hook_job_completed = optional(string, "") runner_ec2_tags = optional(map(string), {}) runner_iam_role_managed_policy_arns = optional(list(string), []) vpc_id = optional(string, null) subnet_ids = optional(list(string), null) idle_config = optional(list(object({ cron = string timeZone = string idleCount = number evictionStrategy = optional(string, "oldest_first") })), []) cpu_options = optional(object({ core_count = number threads_per_core = number }), null) placement = optional(object({ affinity = optional(string) availability_zone = optional(string) group_id = optional(string) group_name = optional(string) host_id = optional(string) host_resource_group_arn = optional(string) spread_domain = optional(string) tenancy = optional(string) partition_number = optional(number) }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string log_class = optional(string, "STANDARD") })), null) block_device_mappings = optional(list(object({ delete_on_termination = optional(bool, true) device_name = optional(string, "/dev/xvda") encrypted = optional(bool, true) iops = optional(number) kms_key_id = optional(string) snapshot_id = optional(string) throughput = optional(number) 
volume_size = number volume_type = optional(string, "gp3") })), [{ volume_size = 30 }]) pool_config = optional(list(object({ schedule_expression = string schedule_expression_timezone = optional(string) size = number })), []) job_retry = optional(object({ enable = optional(bool, false) delay_in_seconds = optional(number, 300) delay_backoff = optional(number, 2) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 30) max_attempts = optional(number, 1) }), {}) }) matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = optional(bool, false) priority = optional(number, 999) }) redrive_build_queue = optional(object({ enabled = bool maxReceiveCount = number }), { enabled = false maxReceiveCount = null }) })) | n/a | yes |
+| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = { runner\_config: { runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)." runner\_architecture: "The platform architecture of the runner instance\_type." runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." create\_service\_linked\_role\_spot: (Optional) create the serviced linked role for spot instances that is required by the scale-up lambda. credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`. delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)" ebs\_optimized: "The EC2 EBS optimized configuration." enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once." enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registraton. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners an can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners." enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. 
When not defined the default behavior is to retry later." scale\_errors: "List of aws error codes that should trigger retry during scale up. This list will replace the default errors defined in the variable `defaultScaleErrors` in https://github.com/github-aws-runners/terraform-aws-github-runner/blob/main/lambdas/functions/control-plane/src/aws/runners.ts" enable\_organization\_runners: "Register runners to organization, instead of repo level" enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-build AMI." enable\_ssm\_on\_runners: "Enable to allow access the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances." enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI." instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`." instance\_max\_spot\_price: "Max price price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet." instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`." instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)." job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged" minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before terminated if not busy." pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported." runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. 
If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner." runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored." runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner." runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case your own start script is used, this configuration parameter needs to be parsed via SSM." runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Label checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided." runner\_group\_name: "Name of the runner group." runner\_name\_prefix: "Prefix for the GitHub runner name." runner\_run\_as: "Run the GitHub actions agent as user." runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` disables the maximum check." scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down." scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations." userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored." enable\_jit\_config: "Overwrite the default behavior for JIT configuration. 
By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI." enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details." enable\_cloudwatch\_agent: "Enable the cloudwatch agent on the ec2 runner instances, the runner contains a default config. Configuration can be overridden via `cloudwatch_config`." cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." userdata\_pre\_install: "Script to be run before the GitHub Actions runner is installed on the EC2 instances" userdata\_post\_install: "Script to be run after the GitHub Actions runner is installed on the EC2 instances" runner\_hook\_job\_started: "Script to be run in the runner environment at the beginning of every job" runner\_hook\_job\_completed: "Script to be run in the runner environment at the end of every job" runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications." runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role" vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`." subnet\_ids: "List of subnets in which the action runners will be launched, the subnets need to be subnets in the `vpc_id`. If not set, uses the value of `var.subnet_ids`." 
idle\_config: "List of time periods that can be defined as cron expressions to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle." runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`." job\_retry: "Experimental! Can be removed / changed without triggering a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda checks after a delay if the job is still queued. If not, the message will be published again on the scale-up (build) queue. Using this feature can impact the rate limit of the GitHub app." pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)." } matcherConfig: { labelMatchers: "The list of lists of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`" exactMatch: "DEPRECATED: Use `bidirectionalLabelMatch` instead. If set to true, all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false, if __any__ workflow label matches, it will trigger the webhook. 
Note: this only checks that workflow labels are a subset of runner labels, not the reverse." bidirectionalLabelMatch: "If set to true, the runner labels and workflow job labels must be an exact two-way match (same set, any order, no extra or missing labels). This is stricter than `exactMatch`, which only checks that workflow labels are a subset of runner labels. When false, if __any__ workflow label matches, it will trigger the webhook." priority: "If set, it defines the priority of the matcher; the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999." } redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true` and `maxReceiveCount` to the number of max retries." } | map(object({ runner_config = object({ runner_os = string runner_architecture = string runner_metadata_options = optional(map(any), { instance_metadata_tags = "enabled" http_endpoint = "enabled" http_tokens = "required" http_put_response_hop_limit = 1 }) ami = optional(object({ filter = optional(map(list(string)), { state = ["available"] }) owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) disable_runner_autoupdate = optional(bool, false) ebs_optimized = optional(bool, false) enable_ephemeral_runners = optional(bool, false) enable_job_queued_check = optional(bool, null) enable_on_demand_failover_for_errors = optional(list(string), []) scale_errors = optional(list(string), [ "UnfulfillableCapacity", "MaxSpotInstanceCountExceeded", "TargetCapacityLimitExceededException", "RequestLimitExceeded", "ResourceLimitExceeded", 
"MaxSpotInstanceCountExceeded", "MaxSpotFleetRequestCountExceeded", "InsufficientInstanceCapacity", "InsufficientCapacityOnHost", ]) enable_organization_runners = optional(bool, false) enable_runner_binaries_syncer = optional(bool, true) enable_ssm_on_runners = optional(bool, false) enable_userdata = optional(bool, true) instance_allocation_strategy = optional(string, "lowest-price") instance_max_spot_price = optional(string, null) instance_target_capacity_type = optional(string, "spot") instance_types = list(string) job_queue_retention_in_seconds = optional(number, 86400) minimum_running_time_in_minutes = optional(number, null) pool_runner_owner = optional(string, null) runner_as_root = optional(bool, false) runner_boot_time_in_minutes = optional(number, 5) runner_disable_default_labels = optional(bool, false) runner_extra_labels = optional(list(string), []) runner_group_name = optional(string, "Default") runner_name_prefix = optional(string, "") runner_run_as = optional(string, "ec2-user") runners_maximum_count = number runner_additional_security_group_ids = optional(list(string), []) scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? 
*)") scale_up_reserved_concurrent_executions = optional(number, 1) userdata_template = optional(string, null) userdata_content = optional(string, null) enable_jit_config = optional(bool, null) enable_runner_detailed_monitoring = optional(bool, false) enable_cloudwatch_agent = optional(bool, true) cloudwatch_config = optional(string, null) userdata_pre_install = optional(string, "") userdata_post_install = optional(string, "") runner_hook_job_started = optional(string, "") runner_hook_job_completed = optional(string, "") runner_ec2_tags = optional(map(string), {}) runner_iam_role_managed_policy_arns = optional(list(string), []) vpc_id = optional(string, null) subnet_ids = optional(list(string), null) idle_config = optional(list(object({ cron = string timeZone = string idleCount = number evictionStrategy = optional(string, "oldest_first") })), []) cpu_options = optional(object({ core_count = number threads_per_core = number }), null) placement = optional(object({ affinity = optional(string) availability_zone = optional(string) group_id = optional(string) group_name = optional(string) host_id = optional(string) host_resource_group_arn = optional(string) spread_domain = optional(string) tenancy = optional(string) partition_number = optional(number) }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string log_class = optional(string, "STANDARD") })), null) block_device_mappings = optional(list(object({ delete_on_termination = optional(bool, true) device_name = optional(string, "/dev/xvda") encrypted = optional(bool, true) iops = optional(number) kms_key_id = optional(string) snapshot_id = optional(string) throughput = optional(number) volume_size = number volume_type = optional(string, "gp3") })), [{ volume_size = 30 }]) pool_config = optional(list(object({ schedule_expression = string schedule_expression_timezone = optional(string) size = number })), []) job_retry = optional(object({ 
enable = optional(bool, false) delay_in_seconds = optional(number, 300) delay_backoff = optional(number, 2) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 30) max_attempts = optional(number, 1) }), {}) }) matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = optional(bool, false) bidirectionalLabelMatch = optional(bool, false) priority = optional(number, 999) }) redrive_build_queue = optional(object({ enabled = bool maxReceiveCount = number }), { enabled = false maxReceiveCount = null }) })) | n/a | yes |
| [parameter\_store\_tags](#input\_parameter\_store\_tags) | Map of tags that will be added to all the SSM Parameter Store parameters created by the Lambda function. | `map(string)` | `{}` | no |
| [pool\_lambda\_reserved\_concurrent\_executions](#input\_pool\_lambda\_reserved\_concurrent\_executions) | Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no |
| [pool\_lambda\_timeout](#input\_pool\_lambda\_timeout) | Time out for the pool lambda in seconds. | `number` | `60` | no |
| [prefix](#input\_prefix) | The prefix used for naming resources | `string` | `"github-actions"` | no |
-| [queue\_encryption](#input\_queue\_encryption) | Configure how data on queues managed by the modules in ecrypted at REST. Options are encrypted via SSE, non encrypted and via KMSS. By default encryptes via SSE is enabled. See for more details the Terraform `aws_sqs_queue` resource . | object({ kms_data_key_reuse_period_seconds = number kms_master_key_id = string sqs_managed_sse_enabled = bool }) | { "kms_data_key_reuse_period_seconds": null, "kms_master_key_id": null, "sqs_managed_sse_enabled": true } | no |
+| [queue\_encryption](#input\_queue\_encryption) | Configure how data on queues managed by the modules is encrypted at REST. Options are encryption via SSE, no encryption, or via KMS. By default encryption via SSE is enabled. See for more details the Terraform `aws_sqs_queue` resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/sqs_queue. | object({ kms_data_key_reuse_period_seconds = number kms_master_key_id = string sqs_managed_sse_enabled = bool }) | { "kms_data_key_reuse_period_seconds": null, "kms_master_key_id": null, "sqs_managed_sse_enabled": true } | no |
| [repository\_white\_list](#input\_repository\_white\_list) | List of github repository full names (owner/repo\_name) that will be allowed to use the github app. Leave empty for no filtering. | `list(string)` | `[]` | no |
| [role\_path](#input\_role\_path) | The path that will be added to the role; if not set, the environment name will be used. | `string` | `null` | no |
| [role\_permissions\_boundary](#input\_role\_permissions\_boundary) | Permissions boundary that will be added to the created role for the lambda. | `string` | `null` | no |
diff --git a/modules/webhook/README.md b/modules/webhook/README.md
index 7a0c66c739..0c5f2b7bf2 100644
--- a/modules/webhook/README.md
+++ b/modules/webhook/README.md
@@ -88,7 +88,7 @@ yarn run dist
| [repository\_white\_list](#input\_repository\_white\_list) | List of github repository full names (owner/repo\_name) that will be allowed to use the github app. Leave empty for no filtering. | `list(string)` | `[]` | no |
| [role\_path](#input\_role\_path) | The path that will be added to the role; if not set, the environment name will be used. | `string` | `null` | no |
| [role\_permissions\_boundary](#input\_role\_permissions\_boundary) | Permissions boundary that will be added to the created role for the lambda. | `string` | `null` | no |
-| [runner\_matcher\_config](#input\_runner\_matcher\_config) | SQS queue to publish accepted build events based on the runner type. When exact match is disabled the webhook accepts the event if one of the workflow job labels is part of the matcher. The priority defines the order the matchers are applied. | map(object({ arn = string id = string matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = bool bidirectionalLabelMatch = optional(bool, false) priority = optional(number, 999) }) })) | n/a | yes |
+| [runner\_matcher\_config](#input\_runner\_matcher\_config) | SQS queue to publish accepted build events based on the runner type. When exact match is disabled the webhook accepts the event if one of the workflow job labels is part of the matcher. The priority defines the order the matchers are applied. | map(object({ arn = string id = string matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = bool bidirectionalLabelMatch = optional(bool, false) priority = optional(number, 999) }) })) | n/a | yes |
| [ssm\_paths](#input\_ssm\_paths) | The root path used in SSM to store configuration and secrets. | object({ root = string webhook = string }) | n/a | yes |
| [tags](#input\_tags) | Map of tags that will be added to created resources. By default resources will be tagged with name and environment. | `map(string)` | `{}` | no |
| [tracing\_config](#input\_tracing\_config) | Configuration for lambda tracing. | object({ mode = optional(string, null) capture_http_requests = optional(bool, false) capture_error = optional(bool, false) }) | `{}` | no |
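The patch above distinguishes the deprecated `exactMatch` (a one-way subset check) from the new `bidirectionalLabelMatch` (two-way set equality). The difference can be sketched in TypeScript as follows; the function names are illustrative only and do not mirror the actual `dispatch.ts` implementation:

```typescript
// Deprecated exactMatch semantics: every workflow label must appear in the
// runner's labels (one-way subset check; extra runner labels are allowed).
function subsetMatch(workflowLabels: string[], runnerLabels: string[]): boolean {
  const runner = new Set(runnerLabels.map((l) => l.toLowerCase()));
  return workflowLabels.every((l) => runner.has(l.toLowerCase()));
}

// bidirectionalLabelMatch semantics: the two label sets must be identical
// (same set, any order, no extra or missing labels on either side).
function bidirectionalMatch(workflowLabels: string[], runnerLabels: string[]): boolean {
  const workflow = new Set(workflowLabels.map((l) => l.toLowerCase()));
  const runner = new Set(runnerLabels.map((l) => l.toLowerCase()));
  return workflow.size === runner.size && [...workflow].every((l) => runner.has(l));
}

// The scenario from the commit message: job [A, B, C] vs runner [A, B, C, D].
console.log(subsetMatch(['A', 'B', 'C'], ['A', 'B', 'C', 'D'])); // true  -> match
console.log(bidirectionalMatch(['A', 'B', 'C'], ['A', 'B', 'C', 'D'])); // false -> no match
```

Under the stricter mode, a runner with an extra label such as `on-demand` only matches once the workflow's `runs-on` lists that label too, which is why the migration notes suggest either removing extra runner labels or adding them to the workflow.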
From 3195ee1e4110dbe72e07729941eadfc130b589cd Mon Sep 17 00:00:00 2001
From: github-aws-runners-pr|bot
Date: Wed, 1 Apr 2026 13:23:56 +0000
Subject: [PATCH 22/22] docs: auto update terraform docs
---
modules/multi-runner/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md
index 7a050cdeee..3e8bcbdd85 100644
--- a/modules/multi-runner/README.md
+++ b/modules/multi-runner/README.md
@@ -151,7 +151,7 @@ module "multi-runner" {
| [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no |
| [matcher\_config\_parameter\_store\_tier](#input\_matcher\_config\_parameter\_store\_tier) | The tier of the parameter store for the matcher configuration. Valid values are `Standard`, and `Advanced`. | `string` | `"Standard"` | no |
| [metrics](#input\_metrics) | Configuration for metrics created by the module, by default metrics are disabled to avoid additional costs. When metrics are enable all metrics are created unless explicit configured otherwise. | object({ enable = optional(bool, false) namespace = optional(string, "GitHub Runners") metric = optional(object({ enable_github_app_rate_limit = optional(bool, true) enable_job_retry = optional(bool, true) enable_spot_termination_warning = optional(bool, true) }), {}) }) | `{}` | no |
-| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = { runner\_config: { runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)." runner\_architecture: "The platform architecture of the runner instance\_type." runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." create\_service\_linked\_role\_spot: (Optional) create the serviced linked role for spot instances that is required by the scale-up lambda. credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`. delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)" ebs\_optimized: "The EC2 EBS optimized configuration." enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once." enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registraton. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners an can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners." enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. 
When not defined the default behavior is to retry later." scale\_errors: "List of aws error codes that should trigger retry during scale up. This list will replace the default errors defined in the variable `defaultScaleErrors` in https://github.com/github-aws-runners/terraform-aws-github-runner/blob/main/lambdas/functions/control-plane/src/aws/runners.ts" enable\_organization\_runners: "Register runners to organization, instead of repo level" enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-build AMI." enable\_ssm\_on\_runners: "Enable to allow access the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances." enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI." instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`." instance\_max\_spot\_price: "Max price price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet." instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`." instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)." job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged" minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before terminated if not busy." pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported." runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. 
If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner." runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored." runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner." runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case you on own start script is used, this configuration parameter needs to be parsed via SSM." runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Labels checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided." runner\_group\_name: "Name of the runner group." runner\_name\_prefix: "Prefix for the GitHub runner name." runner\_run\_as: "Run the GitHub actions agent as user." runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` desiables the maximum check." scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down." scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations." userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored." enable\_jit\_config "Overwrite the default behavior for JIT configuration. 
By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI." enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details." enable\_cloudwatch\_agent: "Enabling the cloudwatch agent on the ec2 runner instances, the runner contains default config. Configuration can be overridden via `cloudwatch_config`." cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." userdata\_pre\_install: "Script to be ran before the GitHub Actions runner is installed on the EC2 instances" userdata\_post\_install: "Script to be ran after the GitHub Actions runner is installed on the EC2 instances" runner\_hook\_job\_started: "Script to be ran in the runner environment at the beginning of every job" runner\_hook\_job\_completed: "Script to be ran in the runner environment at the end of every job" runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications." runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role" vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`." subnet\_ids: "List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. If not set, uses the value of `var.subnet_ids`." 
idle\_config: "List of time period that can be defined as cron expression to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle." runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`." job\_retry: "Experimental! Can be removed / changed without trigger a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda is checking after a delay if the job is queued. If not the message will be published again on the scale-up (build queue). Using this feature can impact the rate limit of the GitHub app." pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)." } matcherConfig: { labelMatchers: "The list of list of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`" exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook." 
priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999." } redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true`, `maxReceiveCount` to a number of max retries." } | map(object({ runner_config = object({ runner_os = string runner_architecture = string runner_metadata_options = optional(map(any), { instance_metadata_tags = "enabled" http_endpoint = "enabled" http_tokens = "required" http_put_response_hop_limit = 1 }) ami = optional(object({ filter = optional(map(list(string)), { state = ["available"] }) owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) disable_runner_autoupdate = optional(bool, false) ebs_optimized = optional(bool, false) enable_ephemeral_runners = optional(bool, false) enable_job_queued_check = optional(bool, null) enable_on_demand_failover_for_errors = optional(list(string), []) scale_errors = optional(list(string), [ "UnfulfillableCapacity", "MaxSpotInstanceCountExceeded", "TargetCapacityLimitExceededException", "RequestLimitExceeded", "ResourceLimitExceeded", "MaxSpotInstanceCountExceeded", "MaxSpotFleetRequestCountExceeded", "InsufficientInstanceCapacity", "InsufficientCapacityOnHost", ]) enable_organization_runners = optional(bool, false) enable_runner_binaries_syncer = optional(bool, true) enable_ssm_on_runners = optional(bool, false) enable_userdata = optional(bool, true) instance_allocation_strategy = optional(string, "lowest-price") instance_max_spot_price = optional(string, null) 
instance_target_capacity_type = optional(string, "spot") instance_types = list(string) job_queue_retention_in_seconds = optional(number, 86400) minimum_running_time_in_minutes = optional(number, null) pool_runner_owner = optional(string, null) runner_as_root = optional(bool, false) runner_boot_time_in_minutes = optional(number, 5) runner_disable_default_labels = optional(bool, false) runner_extra_labels = optional(list(string), []) runner_group_name = optional(string, "Default") runner_name_prefix = optional(string, "") runner_run_as = optional(string, "ec2-user") runners_maximum_count = number runner_additional_security_group_ids = optional(list(string), []) scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? *)") scale_up_reserved_concurrent_executions = optional(number, 1) userdata_template = optional(string, null) userdata_content = optional(string, null) enable_jit_config = optional(bool, null) enable_runner_detailed_monitoring = optional(bool, false) enable_cloudwatch_agent = optional(bool, true) cloudwatch_config = optional(string, null) userdata_pre_install = optional(string, "") userdata_post_install = optional(string, "") runner_hook_job_started = optional(string, "") runner_hook_job_completed = optional(string, "") runner_ec2_tags = optional(map(string), {}) runner_iam_role_managed_policy_arns = optional(list(string), []) vpc_id = optional(string, null) subnet_ids = optional(list(string), null) idle_config = optional(list(object({ cron = string timeZone = string idleCount = number evictionStrategy = optional(string, "oldest_first") })), []) cpu_options = optional(object({ core_count = number threads_per_core = number }), null) placement = optional(object({ affinity = optional(string) availability_zone = optional(string) group_id = optional(string) group_name = optional(string) host_id = optional(string) host_resource_group_arn = optional(string) spread_domain = optional(string) tenancy = optional(string) partition_number = 
optional(number) }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string log_class = optional(string, "STANDARD") })), null) block_device_mappings = optional(list(object({ delete_on_termination = optional(bool, true) device_name = optional(string, "/dev/xvda") encrypted = optional(bool, true) iops = optional(number) kms_key_id = optional(string) snapshot_id = optional(string) throughput = optional(number) volume_size = number volume_type = optional(string, "gp3") })), [{ volume_size = 30 }]) pool_config = optional(list(object({ schedule_expression = string schedule_expression_timezone = optional(string) size = number })), []) job_retry = optional(object({ enable = optional(bool, false) delay_in_seconds = optional(number, 300) delay_backoff = optional(number, 2) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 30) max_attempts = optional(number, 1) }), {}) }) matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = optional(bool, false) priority = optional(number, 999) }) redrive_build_queue = optional(object({ enabled = bool maxReceiveCount = number }), { enabled = false maxReceiveCount = null }) })) | n/a | yes |
+| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = { runner\_config: { runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)." runner\_architecture: "The platform architecture of the runner instance\_type." runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." create\_service\_linked\_role\_spot: "(Optional) Create the service linked role for spot instances that is required by the scale-up lambda." credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`." delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)" ebs\_optimized: "The EC2 EBS optimized configuration." enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once." enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registration. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners and can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners." enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. 
When not defined the default behavior is to retry later." scale\_errors: "List of aws error codes that should trigger a retry during scale up. This list will replace the default errors defined in the variable `defaultScaleErrors` in https://github.com/github-aws-runners/terraform-aws-github-runner/blob/main/lambdas/functions/control-plane/src/aws/runners.ts" enable\_organization\_runners: "Register runners to the organization, instead of at repo level" enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync the GitHub runner distribution, useful when using a pre-built AMI." enable\_ssm\_on\_runners: "Enable to allow access to the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances." enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI." instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends using `capacity-optimized`, however the AWS default is `lowest-price`." instance\_max\_spot\_price: "Max price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet." instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`." instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)." job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged" minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before being terminated if not busy." pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported." runner\_additional\_security\_group\_ids: "List of additional security group IDs to apply to the runner. 
If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner." runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored." runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner." runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case your own start script is used, this configuration parameter needs to be parsed via SSM." runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Label checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided." runner\_group\_name: "Name of the runner group." runner\_name\_prefix: "Prefix for the GitHub runner name." runner\_run\_as: "Run the GitHub actions agent as user." runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` disables the maximum check." scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down." scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables the lambda from being triggered and -1 removes any concurrency limitations." userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored." enable\_jit\_config: "Overwrite the default behavior for JIT configuration. 
By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI." enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details." enable\_cloudwatch\_agent: "Enables the cloudwatch agent on the ec2 runner instances; the runner contains a default config. Configuration can be overridden via `cloudwatch_config`." cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." userdata\_pre\_install: "Script to be run before the GitHub Actions runner is installed on the EC2 instances" userdata\_post\_install: "Script to be run after the GitHub Actions runner is installed on the EC2 instances" runner\_hook\_job\_started: "Script to be run in the runner environment at the beginning of every job" runner\_hook\_job\_completed: "Script to be run in the runner environment at the end of every job" runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications." runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role" vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`." subnet\_ids: "List of subnets in which the action runners will be launched; the subnets need to be in the `vpc_id`. If not set, uses the value of `var.subnet_ids`." 
idle\_config: "List of time periods, defined as cron expressions, to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle." runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details." block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`." job\_retry: "Experimental! Can be removed / changed without triggering a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda checks after a delay if the job is queued. If not, the message will be published again on the scale-up (build) queue. Using this feature can impact the rate limit of the GitHub app." pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)." } matcherConfig: { labelMatchers: "The list of lists of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`" exactMatch: "DEPRECATED: Use `bidirectionalLabelMatch` instead. If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false, if __any__ workflow label matches it will trigger the webhook. 
Note: this only checks that workflow labels are a subset of runner labels, not the reverse." bidirectionalLabelMatch: "If set to true, the runner labels and workflow job labels must be an exact two-way match (same set, any order, no extra or missing labels). This is stricter than `exactMatch`, which only checks that workflow labels are a subset of runner labels. When false, if __any__ workflow label matches it will trigger the webhook." priority: "If set, it defines the priority of the matcher; the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999." } redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true` and `maxReceiveCount` to the maximum number of retries." } | map(object({ runner_config = object({ runner_os = string runner_architecture = string runner_metadata_options = optional(map(any), { instance_metadata_tags = "enabled" http_endpoint = "enabled" http_tokens = "required" http_put_response_hop_limit = 1 }) ami = optional(object({ filter = optional(map(list(string)), { state = ["available"] }) owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) disable_runner_autoupdate = optional(bool, false) ebs_optimized = optional(bool, false) enable_ephemeral_runners = optional(bool, false) enable_job_queued_check = optional(bool, null) enable_on_demand_failover_for_errors = optional(list(string), []) scale_errors = optional(list(string), [ "UnfulfillableCapacity", "MaxSpotInstanceCountExceeded", "TargetCapacityLimitExceededException", "RequestLimitExceeded", "ResourceLimitExceeded", 
"MaxSpotInstanceCountExceeded", "MaxSpotFleetRequestCountExceeded", "InsufficientInstanceCapacity", "InsufficientCapacityOnHost", ]) enable_organization_runners = optional(bool, false) enable_runner_binaries_syncer = optional(bool, true) enable_ssm_on_runners = optional(bool, false) enable_userdata = optional(bool, true) instance_allocation_strategy = optional(string, "lowest-price") instance_max_spot_price = optional(string, null) instance_target_capacity_type = optional(string, "spot") instance_types = list(string) job_queue_retention_in_seconds = optional(number, 86400) minimum_running_time_in_minutes = optional(number, null) pool_runner_owner = optional(string, null) runner_as_root = optional(bool, false) runner_boot_time_in_minutes = optional(number, 5) runner_disable_default_labels = optional(bool, false) runner_extra_labels = optional(list(string), []) runner_group_name = optional(string, "Default") runner_name_prefix = optional(string, "") runner_run_as = optional(string, "ec2-user") runners_maximum_count = number runner_additional_security_group_ids = optional(list(string), []) scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? 
*)") scale_up_reserved_concurrent_executions = optional(number, 1) userdata_template = optional(string, null) userdata_content = optional(string, null) enable_jit_config = optional(bool, null) enable_runner_detailed_monitoring = optional(bool, false) enable_cloudwatch_agent = optional(bool, true) cloudwatch_config = optional(string, null) userdata_pre_install = optional(string, "") userdata_post_install = optional(string, "") runner_hook_job_started = optional(string, "") runner_hook_job_completed = optional(string, "") runner_ec2_tags = optional(map(string), {}) runner_iam_role_managed_policy_arns = optional(list(string), []) vpc_id = optional(string, null) subnet_ids = optional(list(string), null) idle_config = optional(list(object({ cron = string timeZone = string idleCount = number evictionStrategy = optional(string, "oldest_first") })), []) cpu_options = optional(object({ core_count = number threads_per_core = number }), null) placement = optional(object({ affinity = optional(string) availability_zone = optional(string) group_id = optional(string) group_name = optional(string) host_id = optional(string) host_resource_group_arn = optional(string) spread_domain = optional(string) tenancy = optional(string) partition_number = optional(number) }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool file_path = string log_stream_name = string log_class = optional(string, "STANDARD") })), null) block_device_mappings = optional(list(object({ delete_on_termination = optional(bool, true) device_name = optional(string, "/dev/xvda") encrypted = optional(bool, true) iops = optional(number) kms_key_id = optional(string) snapshot_id = optional(string) throughput = optional(number) volume_size = number volume_type = optional(string, "gp3") })), [{ volume_size = 30 }]) pool_config = optional(list(object({ schedule_expression = string schedule_expression_timezone = optional(string) size = number })), []) job_retry = optional(object({ 
enable = optional(bool, false) delay_in_seconds = optional(number, 300) delay_backoff = optional(number, 2) lambda_memory_size = optional(number, 256) lambda_timeout = optional(number, 30) max_attempts = optional(number, 1) }), {}) }) matcherConfig = object({ labelMatchers = list(list(string)) exactMatch = optional(bool, false) bidirectionalLabelMatch = optional(bool, false) priority = optional(number, 999) }) redrive_build_queue = optional(object({ enabled = bool maxReceiveCount = number }), { enabled = false maxReceiveCount = null }) })) | n/a | yes |
| [parameter\_store\_tags](#input\_parameter\_store\_tags) | Map of tags that will be added to all the SSM Parameter Store parameters created by the Lambda function. | `map(string)` | `{}` | no |
| [pool\_lambda\_reserved\_concurrent\_executions](#input\_pool\_lambda\_reserved\_concurrent\_executions) | Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no |
| [pool\_lambda\_timeout](#input\_pool\_lambda\_timeout) | Time out for the pool lambda in seconds. | `number` | `60` | no |
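The difference between the deprecated `exactMatch` (one-way subset check) and the new `bidirectionalLabelMatch` (two-way set equality) can be sketched as follows. This is an illustrative sketch only, not the module's actual dispatch implementation; the function names and the case-insensitive comparison are assumptions:

```typescript
// exactMatch semantics (one-way): every workflow label must appear in the
// runner's labels. Extra runner labels do not prevent a match.
function subsetMatch(workflowLabels: string[], runnerLabels: string[]): boolean {
  const runner = new Set(runnerLabels.map((l) => l.toLowerCase()));
  return workflowLabels.every((l) => runner.has(l.toLowerCase()));
}

// bidirectionalLabelMatch semantics (two-way): the workflow and runner label
// sets must be identical - same labels, any order, no extras on either side.
function bidirectionalMatch(workflowLabels: string[], runnerLabels: string[]): boolean {
  const workflow = new Set(workflowLabels.map((l) => l.toLowerCase()));
  const runner = new Set(runnerLabels.map((l) => l.toLowerCase()));
  return workflow.size === runner.size && [...workflow].every((l) => runner.has(l));
}

// Job requests [A, B, C]; runner advertises [A, B, C, D].
subsetMatch(['a', 'b', 'c'], ['a', 'b', 'c', 'd']);        // true  (subset is enough)
bidirectionalMatch(['a', 'b', 'c'], ['a', 'b', 'c', 'd']); // false (runner has extra label D)
```

This illustrates the migration note above: a runner with an extra label such as `on-demand` matches under `exactMatch` but not under `bidirectionalLabelMatch`, so either the extra label must be removed from the runner config or added to the job's `runs-on`.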