From 19a94ebc2abb34a97741b00e616243f357a1ea86 Mon Sep 17 00:00:00 2001
From: Iain Lane
Date: Sat, 6 Dec 2025 18:37:14 +0000
Subject: [PATCH 1/2] feat(control-plane)!: add support for handling multiple events in a single invocation (#4603)

Currently we restrict the `scale-up` Lambda to handling a single event at a time. In very busy environments this can become a bottleneck: each invocation makes calls to the GitHub and AWS APIs, and these can take long enough that we cannot process job queued events as fast as they arrive.

In our environment we are also using a pool, and we have typically responded to the alerts this generates (SQS queue length growing) by expanding the size of the pool. This helps because we more frequently find that we don't need to scale up, which lets the Lambdas exit earlier, so we get through the queue faster. But it makes the environment much less responsive to changes in usage patterns.

At its core, this Lambda's task is to work out how many instances are needed and construct an EC2 `CreateFleet` call to create them. This is a job that can be batched: we can take any number of events, calculate the difference between our current state and the number of queued jobs, cap it at the configured maximum, and then issue a single call.

The thing to be careful about is handling partial failures, where EC2 creates some of the instances we wanted but not all of them. Lambda has a configurable function response type which can be set to `ReportBatchItemFailures`. In this mode, we return a list of failed messages from our handler and only those are retried. We make use of this to hand back as many events as we failed to process.

Now that we are potentially processing multiple events in a single invocation, one thing we should optimise for is not recreating GitHub API clients. We need one client for the app itself, which we use to look up installation IDs, and one client for each installation that is relevant to the batch of events we are processing. This is done by creating a client the first time we see an event for a given installation and reusing it for the rest of the batch.

We also remove the same `batch_size = 1` constraint from the `job-retry` Lambda. This Lambda is used to retry events that previously failed. However, instead of reporting failures to be retried, here we keep the pre-existing fault-tolerant behaviour: errors are logged but explicitly do not cause message retries, avoiding infinite loops from persistent GitHub API issues or malformed events.

Tests are added for all of this. Test runs in a private repo (sorry) look good. This was running ephemeral runners with no pool, SSM high throughput enabled, the job queued check _disabled_, a batch size of 200, and a wait time of 10 seconds. The workflow runs are each a matrix with 250 jobs.
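To make the mechanism above concrete, here is a minimal, simplified sketch of an SQS handler that uses `ReportBatchItemFailures`; it is not the exact handler in this patch, and the `QueuedJobMessage` shape and the `scaleUp` helper (assumed to issue one batched `CreateFleet` call and return the message IDs it could not satisfy) are placeholders for illustration.

```typescript
import type { Context, SQSBatchResponse, SQSEvent } from 'aws-lambda';

// Placeholder message shape, for illustration only.
interface QueuedJobMessage {
  messageId: string;
  retryCounter?: number;
}

// Hypothetical batched scale-up: assumed to issue a single CreateFleet call
// and return the messageIds of the events it could not satisfy.
declare function scaleUp(messages: QueuedJobMessage[]): Promise<string[]>;

export async function handler(event: SQSEvent, _context: Context): Promise<SQSBatchResponse> {
  // Take every SQS record in the batch instead of requiring batch size 1.
  const messages: QueuedJobMessage[] = event.Records.filter((r) => r.eventSource === 'aws:sqs').map((r) => ({
    ...JSON.parse(r.body),
    messageId: r.messageId,
  }));

  // Sort by retry count so the same messages are retried first under a persistent failure.
  messages.sort((a, b) => (a.retryCounter ?? 0) - (b.retryCounter ?? 0));

  // Whatever could not be placed is handed back to SQS for redelivery.
  const rejected = await scaleUp(messages);
  return { batchItemFailures: rejected.map((itemIdentifier) => ({ itemIdentifier })) };
}
```

For the returned failures to be honoured, the Lambda's SQS event source mapping must declare the `ReportBatchItemFailures` function response type; otherwise a successful return consumes the whole batch.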
![image](https://github.com/user-attachments/assets/0a656e99-8f1e-45e2-924b-0d5c1b6d6afb) --------- Signed-off-by: dependabot[bot] Co-authored-by: Niek Palm Co-authored-by: Niek Palm Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: github-actions[bot] --- README.md | 2 + .../control-plane/src/aws/runners.test.ts | 130 ++- .../control-plane/src/aws/runners.ts | 101 +- .../control-plane/src/lambda.test.ts | 230 +++-- lambdas/functions/control-plane/src/lambda.ts | 54 +- lambdas/functions/control-plane/src/local.ts | 42 +- .../control-plane/src/pool/pool.test.ts | 24 +- .../functions/control-plane/src/pool/pool.ts | 2 +- .../src/scale-runners/ScaleError.test.ts | 76 ++ .../src/scale-runners/ScaleError.ts | 26 +- .../src/scale-runners/job-retry.test.ts | 92 ++ .../src/scale-runners/scale-up.test.ts | 944 +++++++++++++++--- .../src/scale-runners/scale-up.ts | 275 +++-- .../aws-powertools-util/src/logger/index.ts | 10 +- main.tf | 46 +- modules/multi-runner/README.md | 2 + modules/multi-runner/runners.tf | 46 +- modules/multi-runner/variables.tf | 12 + modules/runners/README.md | 2 + modules/runners/job-retry.tf | 50 +- modules/runners/job-retry/README.md | 2 +- modules/runners/job-retry/main.tf | 7 +- modules/runners/job-retry/variables.tf | 16 +- modules/runners/scale-up.tf | 10 +- modules/runners/variables.tf | 20 + modules/webhook-github-app/README.md | 2 +- variables.tf | 16 + 27 files changed, 1727 insertions(+), 512 deletions(-) create mode 100644 lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts diff --git a/README.md b/README.md index b202597cc2..f242fe734b 100644 --- a/README.md +++ b/README.md @@ -155,6 +155,8 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. This key must be in the current account. | `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | diff --git a/lambdas/functions/control-plane/src/aws/runners.test.ts b/lambdas/functions/control-plane/src/aws/runners.test.ts index a02f62cd36..c4fd922fd0 100644 --- a/lambdas/functions/control-plane/src/aws/runners.test.ts +++ b/lambdas/functions/control-plane/src/aws/runners.test.ts @@ -1,26 +1,26 @@ +import { tracer } from '@aws-github-runner/aws-powertools-util'; import { CreateFleetCommand, - CreateFleetCommandInput, - CreateFleetInstance, - CreateFleetResult, + type CreateFleetCommandInput, + type CreateFleetInstance, + type CreateFleetResult, CreateTagsCommand, + type DefaultTargetCapacityType, DeleteTagsCommand, - DefaultTargetCapacityType, DescribeInstancesCommand, - DescribeInstancesResult, + type DescribeInstancesResult, EC2Client, SpotAllocationStrategy, TerminateInstancesCommand, } from '@aws-sdk/client-ec2'; -import { GetParameterCommand, GetParameterResult, PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm'; -import { tracer } from '@aws-github-runner/aws-powertools-util'; +import { GetParameterCommand, type GetParameterResult, PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm'; import { mockClient } from 'aws-sdk-client-mock'; import 'aws-sdk-client-mock-jest/vitest'; +import { beforeEach, describe, expect, it, vi } from 'vitest'; import ScaleError from './../scale-runners/ScaleError'; -import { createRunner, listEC2Runners, tag, untag, terminateRunner } from './runners'; -import { RunnerInfo, RunnerInputParameters, RunnerType } from './runners.d'; -import { describe, it, expect, beforeEach, vi } from 'vitest'; +import { createRunner, listEC2Runners, tag, terminateRunner, untag } from './runners'; +import type { RunnerInfo, RunnerInputParameters, RunnerType } from './runners.d'; process.env.AWS_REGION = 'eu-east-1'; const mockEC2Client = mockClient(EC2Client); @@ -110,7 +110,10 @@ describe('list instances', () => { it('check orphan tag.', async () => { const instances: DescribeInstancesResult = mockRunningInstances; - instances.Reservations![0].Instances![0].Tags!.push({ Key: 'ghr:orphan', Value: 'true' }); + instances.Reservations![0].Instances![0].Tags!.push({ + Key: 'ghr:orphan', + Value: 'true', + }); mockEC2Client.on(DescribeInstancesCommand).resolves(instances); const resp = await listEC2Runners(); @@ -132,7 +135,11 @@ describe('list instances', () => { it('filters instances on repo name', async () => { mockEC2Client.on(DescribeInstancesCommand).resolves(mockRunningInstances); - await listEC2Runners({ runnerType: 'Repo', runnerOwner: REPO_NAME, environment: undefined }); + await listEC2Runners({ + runnerType: 'Repo', + runnerOwner: REPO_NAME, + environment: undefined, + }); expect(mockEC2Client).toHaveReceivedCommandWith(DescribeInstancesCommand, { Filters: [ { Name: 'instance-state-name', Values: ['running', 'pending'] }, @@ -145,7 +152,11 @@ describe('list instances', () => { it('filters instances on org name', async () => { mockEC2Client.on(DescribeInstancesCommand).resolves(mockRunningInstances); - await listEC2Runners({ runnerType: 'Org', runnerOwner: ORG_NAME, environment: undefined }); + await listEC2Runners({ + runnerType: 'Org', + runnerOwner: ORG_NAME, + environment: undefined, + }); 
expect(mockEC2Client).toHaveReceivedCommandWith(DescribeInstancesCommand, { Filters: [ { Name: 'instance-state-name', Values: ['running', 'pending'] }, @@ -249,7 +260,9 @@ describe('terminate runner', () => { }; await terminateRunner(runner.instanceId); - expect(mockEC2Client).toHaveReceivedCommandWith(TerminateInstancesCommand, { InstanceIds: [runner.instanceId] }); + expect(mockEC2Client).toHaveReceivedCommandWith(TerminateInstancesCommand, { + InstanceIds: [runner.instanceId], + }); }); }); @@ -324,7 +337,10 @@ describe('create runner', () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, type: type })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, type: type }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + type: type, + }), }); }); @@ -333,24 +349,36 @@ describe('create runner', () => { mockEC2Client.on(CreateFleetCommand).resolves({ Instances: instances }); - await createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }); + await createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, totalTargetCapacity: 2 }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + totalTargetCapacity: 2, + }), }); }); it('calls create fleet of 1 instance with the on-demand capacity', async () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, capacityType: 'on-demand' })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, capacityType: 'on-demand' }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + capacityType: 'on-demand', + }), }); }); it('calls run instances with the on-demand capacity', async () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, maxSpotPrice: '0.1' })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, maxSpotPrice: '0.1' }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + maxSpotPrice: '0.1', + }), }); }); @@ -367,8 +395,16 @@ describe('create runner', () => { }, }; mockSSMClient.on(GetParameterCommand).resolves(paramValue); - await createRunner(createRunnerConfig({ ...defaultRunnerConfig, amiIdSsmParameterName: 'my-ami-id-param' })); - const expectedRequest = expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, imageId: 'ami-123' }); + await createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + amiIdSsmParameterName: 'my-ami-id-param', + }), + ); + const expectedRequest = expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + imageId: 'ami-123', + }); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, expectedRequest); expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, { Name: 'my-ami-id-param', @@ -380,7 +416,10 @@ describe('create runner', () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, tracingEnabled: true })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, tracingEnabled: true }), + ...expectedCreateFleetRequest({ + 
...defaultExpectedFleetRequestValues, + tracingEnabled: true, + }), }); }); }); @@ -419,9 +458,12 @@ describe('create runner with errors', () => { }); it('test ScaleError with multiple error.', async () => { - createFleetMockWithErrors(['UnfulfillableCapacity', 'SomeError']); + createFleetMockWithErrors(['UnfulfillableCapacity', 'MaxSpotInstanceCountExceeded', 'NotMappedError']); - await expect(createRunner(createRunnerConfig(defaultRunnerConfig))).rejects.toBeInstanceOf(ScaleError); + await expect(createRunner(createRunnerConfig(defaultRunnerConfig))).rejects.toMatchObject({ + name: 'ScaleError', + failedInstanceCount: 2, + }); expect(mockEC2Client).toHaveReceivedCommandWith( CreateFleetCommand, expectedCreateFleetRequest(defaultExpectedFleetRequestValues), @@ -465,7 +507,12 @@ describe('create runner with errors', () => { mockSSMClient.on(GetParameterCommand).rejects(new Error('Some error')); await expect( - createRunner(createRunnerConfig({ ...defaultRunnerConfig, amiIdSsmParameterName: 'my-ami-id-param' })), + createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + amiIdSsmParameterName: 'my-ami-id-param', + }), + ), ).rejects.toBeInstanceOf(Error); expect(mockEC2Client).not.toHaveReceivedCommand(CreateFleetCommand); expect(mockSSMClient).not.toHaveReceivedCommand(PutParameterCommand); @@ -530,7 +577,7 @@ describe('create runner with errors fail over to OnDemand', () => { }), }); - // second call with with OnDemand failback + // second call with with OnDemand fallback expect(mockEC2Client).toHaveReceivedNthCommandWith(2, CreateFleetCommand, { ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, @@ -540,17 +587,25 @@ describe('create runner with errors fail over to OnDemand', () => { }); }); - it('test InsufficientInstanceCapacity no failback.', async () => { + it('test InsufficientInstanceCapacity no fallback.', async () => { await expect( - createRunner(createRunnerConfig({ ...defaultRunnerConfig, onDemandFailoverOnError: [] })), + createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + onDemandFailoverOnError: [], + }), + ), ).rejects.toBeInstanceOf(Error); }); - it('test InsufficientInstanceCapacity with mutlipte instances and fallback to on demand .', async () => { + it('test InsufficientInstanceCapacity with multiple instances and fallback to on demand .', async () => { const instancesIds = ['i-123', 'i-456']; createFleetMockWithWithOnDemandFallback(['InsufficientInstanceCapacity'], instancesIds); - const instancesResult = await createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }); + const instancesResult = await createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }); expect(instancesResult).toEqual(instancesIds); expect(mockEC2Client).toHaveReceivedCommandTimes(CreateFleetCommand, 2); @@ -580,7 +635,10 @@ describe('create runner with errors fail over to OnDemand', () => { createFleetMockWithWithOnDemandFallback(['UnfulfillableCapacity'], instancesIds); await expect( - createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }), + createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }), ).rejects.toBeInstanceOf(Error); expect(mockEC2Client).toHaveReceivedCommandTimes(CreateFleetCommand, 1); @@ -626,7 +684,10 @@ function createFleetMockWithWithOnDemandFallback(errors: string[], instances?: s mockEC2Client .on(CreateFleetCommand) - .resolvesOnce({ Instances: [instanceesFirstCall], Errors: errors.map((e) => ({ ErrorCode: e })) }) + 
.resolvesOnce({ + Instances: [instanceesFirstCall], + Errors: errors.map((e) => ({ ErrorCode: e })), + }) .resolvesOnce({ Instances: [instancesSecondCall] }); } @@ -673,7 +734,10 @@ interface ExpectedFleetRequestValues { function expectedCreateFleetRequest(expectedValues: ExpectedFleetRequestValues): CreateFleetCommandInput { const tags = [ { Key: 'ghr:Application', Value: 'github-action-runner' }, - { Key: 'ghr:created_by', Value: expectedValues.totalTargetCapacity > 1 ? 'pool-lambda' : 'scale-up-lambda' }, + { + Key: 'ghr:created_by', + Value: expectedValues.totalTargetCapacity > 1 ? 'pool-lambda' : 'scale-up-lambda', + }, { Key: 'ghr:Type', Value: expectedValues.type }, { Key: 'ghr:Owner', Value: REPO_NAME }, ]; diff --git a/lambdas/functions/control-plane/src/aws/runners.ts b/lambdas/functions/control-plane/src/aws/runners.ts index 6779dd39d2..d95dc99fa4 100644 --- a/lambdas/functions/control-plane/src/aws/runners.ts +++ b/lambdas/functions/control-plane/src/aws/runners.ts @@ -166,53 +166,62 @@ async function processFleetResult( ): Promise { const instances: string[] = fleet.Instances?.flatMap((i) => i.InstanceIds?.flatMap((j) => j) || []) || []; - if (instances.length !== runnerParameters.numberOfRunners) { - logger.warn( - `${ - instances.length === 0 ? 'No' : instances.length + ' off ' + runnerParameters.numberOfRunners - } instances created.`, - { data: fleet }, - ); - const errors = fleet.Errors?.flatMap((e) => e.ErrorCode || '') || []; - - // Educated guess of errors that would make sense to retry based on the list - // https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html - const scaleErrors = [ - 'UnfulfillableCapacity', - 'MaxSpotInstanceCountExceeded', - 'TargetCapacityLimitExceededException', - 'RequestLimitExceeded', - 'ResourceLimitExceeded', - 'MaxSpotInstanceCountExceeded', - 'MaxSpotFleetRequestCountExceeded', - 'InsufficientInstanceCapacity', - ]; - - if ( - errors.some((e) => runnerParameters.onDemandFailoverOnError?.includes(e)) && - runnerParameters.ec2instanceCriteria.targetCapacityType === 'spot' - ) { - logger.warn(`Create fleet failed, initatiing fall back to on demand instances.`); - logger.debug('Create fleet failed.', { data: fleet.Errors }); - const numberOfInstances = runnerParameters.numberOfRunners - instances.length; - const instancesOnDemand = await createRunner({ - ...runnerParameters, - numberOfRunners: numberOfInstances, - onDemandFailoverOnError: ['InsufficientInstanceCapacity'], - ec2instanceCriteria: { ...runnerParameters.ec2instanceCriteria, targetCapacityType: 'on-demand' }, - }); - instances.push(...instancesOnDemand); - return instances; - } else if (errors.some((e) => scaleErrors.includes(e))) { - logger.warn('Create fleet failed, ScaleError will be thrown to trigger retry for ephemeral runners.'); - logger.debug('Create fleet failed.', { data: fleet.Errors }); - throw new ScaleError('Failed to create instance, create fleet failed.'); - } else { - logger.warn('Create fleet failed, error not recognized as scaling error.', { data: fleet.Errors }); - throw Error('Create fleet failed, no instance created.'); - } + if (instances.length === runnerParameters.numberOfRunners) { + return instances; } - return instances; + + logger.warn( + `${ + instances.length === 0 ? 
'No' : instances.length + ' off ' + runnerParameters.numberOfRunners + } instances created.`, + { data: fleet }, + ); + + const errors = fleet.Errors?.flatMap((e) => e.ErrorCode || '') || []; + + if ( + errors.some((e) => runnerParameters.onDemandFailoverOnError?.includes(e)) && + runnerParameters.ec2instanceCriteria.targetCapacityType === 'spot' + ) { + logger.warn(`Create fleet failed, initatiing fall back to on demand instances.`); + logger.debug('Create fleet failed.', { data: fleet.Errors }); + const numberOfInstances = runnerParameters.numberOfRunners - instances.length; + const instancesOnDemand = await createRunner({ + ...runnerParameters, + numberOfRunners: numberOfInstances, + onDemandFailoverOnError: ['InsufficientInstanceCapacity'], + ec2instanceCriteria: { ...runnerParameters.ec2instanceCriteria, targetCapacityType: 'on-demand' }, + }); + instances.push(...instancesOnDemand); + return instances; + } + + // Educated guess of errors that would make sense to retry based on the list + // https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html + const scaleErrors = [ + 'UnfulfillableCapacity', + 'MaxSpotInstanceCountExceeded', + 'TargetCapacityLimitExceededException', + 'RequestLimitExceeded', + 'ResourceLimitExceeded', + 'MaxSpotInstanceCountExceeded', + 'MaxSpotFleetRequestCountExceeded', + 'InsufficientInstanceCapacity', + ]; + + const failedCount = countScaleErrors(errors, scaleErrors); + if (failedCount > 0) { + logger.warn('Create fleet failed, ScaleError will be thrown to trigger retry for ephemeral runners.'); + logger.debug('Create fleet failed.', { data: fleet.Errors }); + throw new ScaleError(failedCount); + } + + logger.warn('Create fleet failed, error not recognized as scaling error.', { data: fleet.Errors }); + throw Error('Create fleet failed, no instance created.'); +} + +function countScaleErrors(errors: string[], scaleErrors: string[]): number { + return errors.reduce((acc, e) => (scaleErrors.includes(e) ? 
acc + 1 : acc), 0); } async function getAmiIdOverride(runnerParameters: Runners.RunnerInputParameters): Promise { diff --git a/lambdas/functions/control-plane/src/lambda.test.ts b/lambdas/functions/control-plane/src/lambda.test.ts index 2c54a4d541..2c9a98e420 100644 --- a/lambdas/functions/control-plane/src/lambda.test.ts +++ b/lambdas/functions/control-plane/src/lambda.test.ts @@ -8,7 +8,7 @@ import { scaleDown } from './scale-runners/scale-down'; import { ActionRequestMessage, scaleUp } from './scale-runners/scale-up'; import { cleanSSMTokens } from './scale-runners/ssm-housekeeper'; import { checkAndRetryJob } from './scale-runners/job-retry'; -import { describe, it, expect, vi, MockedFunction } from 'vitest'; +import { describe, it, expect, vi, MockedFunction, beforeEach } from 'vitest'; const body: ActionRequestMessage = { eventType: 'workflow_job', @@ -28,11 +28,11 @@ const sqsRecord: SQSRecord = { }, awsRegion: '', body: JSON.stringify(body), - eventSource: 'aws:SQS', + eventSource: 'aws:sqs', eventSourceARN: '', md5OfBody: '', messageAttributes: {}, - messageId: '', + messageId: 'abcd1234', receiptHandle: '', }; @@ -70,100 +70,190 @@ vi.mock('@aws-github-runner/aws-powertools-util'); vi.mock('@aws-github-runner/aws-ssm-util'); describe('Test scale up lambda wrapper.', () => { - it('Do not handle multiple record sets.', async () => { - await testInvalidRecords([sqsRecord, sqsRecord]); + it('Do not handle empty record sets.', async () => { + const sqsEventMultipleRecords: SQSEvent = { + Records: [], + }; + + await expect(scaleUpHandler(sqsEventMultipleRecords, context)).resolves.not.toThrow(); }); - it('Do not handle empty record sets.', async () => { - await testInvalidRecords([]); + it('Ignores non-sqs event sources.', async () => { + const record = { + ...sqsRecord, + eventSource: 'aws:non-sqs', + }; + + const sqsEventMultipleRecordsNonSQS: SQSEvent = { + Records: [record], + }; + + await expect(scaleUpHandler(sqsEventMultipleRecordsNonSQS, context)).resolves.not.toThrow(); + expect(scaleUp).toHaveBeenCalledWith([]); }); it('Scale without error should resolve.', async () => { - const mock = vi.fn(scaleUp); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); - }); - }); + vi.mocked(scaleUp).mockResolvedValue([]); await expect(scaleUpHandler(sqsEvent, context)).resolves.not.toThrow(); }); it('Non scale should resolve.', async () => { const error = new Error('Non scale should resolve.'); - const mock = vi.fn(scaleUp); - mock.mockRejectedValue(error); + vi.mocked(scaleUp).mockRejectedValue(error); await expect(scaleUpHandler(sqsEvent, context)).resolves.not.toThrow(); }); - it('Scale should be rejected', async () => { - const error = new ScaleError('Scale should be rejected'); - const mock = vi.fn() as MockedFunction; - mock.mockImplementation(() => { - return Promise.reject(error); + it('Scale should create a batch failure message', async () => { + const error = new ScaleError(); + vi.mocked(scaleUp).mockRejectedValue(error); + await expect(scaleUpHandler(sqsEvent, context)).resolves.toEqual({ + batchItemFailures: [{ itemIdentifier: sqsRecord.messageId }], }); - vi.mocked(scaleUp).mockImplementation(mock); - await expect(scaleUpHandler(sqsEvent, context)).rejects.toThrow(error); }); -}); -async function testInvalidRecords(sqsRecords: SQSRecord[]) { - const mock = vi.fn(scaleUp); - const logWarnSpy = vi.spyOn(logger, 'warn'); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); + describe('Batch processing', () => { + 
beforeEach(() => { + vi.clearAllMocks(); }); - }); - const sqsEventMultipleRecords: SQSEvent = { - Records: sqsRecords, - }; - await expect(scaleUpHandler(sqsEventMultipleRecords, context)).resolves.not.toThrow(); + const createMultipleRecords = (count: number, eventSource = 'aws:sqs'): SQSRecord[] => { + return Array.from({ length: count }, (_, i) => ({ + ...sqsRecord, + eventSource, + messageId: `message-${i}`, + body: JSON.stringify({ + ...body, + id: i + 1, + }), + })); + }; - expect(logWarnSpy).toHaveBeenCalledWith( - expect.stringContaining( - 'Event ignored, only one record at the time can be handled, ensure the lambda batch size is set to 1.', - ), - ); -} + it('Should handle multiple SQS records in a single invocation', async () => { + const records = createMultipleRecords(3); + const multiRecordEvent: SQSEvent = { Records: records }; -describe('Test scale down lambda wrapper.', () => { - it('Scaling down no error.', async () => { - const mock = vi.fn(scaleDown); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); + vi.mocked(scaleUp).mockResolvedValue([]); + + await expect(scaleUpHandler(multiRecordEvent, context)).resolves.not.toThrow(); + expect(scaleUp).toHaveBeenCalledWith( + expect.arrayContaining([ + expect.objectContaining({ messageId: 'message-0' }), + expect.objectContaining({ messageId: 'message-1' }), + expect.objectContaining({ messageId: 'message-2' }), + ]), + ); + }); + + it('Should return batch item failures for rejected messages', async () => { + const records = createMultipleRecords(3); + const multiRecordEvent: SQSEvent = { Records: records }; + + vi.mocked(scaleUp).mockResolvedValue(['message-1', 'message-2']); + + const result = await scaleUpHandler(multiRecordEvent, context); + expect(result).toEqual({ + batchItemFailures: [{ itemIdentifier: 'message-1' }, { itemIdentifier: 'message-2' }], + }); + }); + + it('Should filter out non-SQS event sources', async () => { + const sqsRecords = createMultipleRecords(2, 'aws:sqs'); + const nonSqsRecords = createMultipleRecords(1, 'aws:sns'); + const mixedEvent: SQSEvent = { + Records: [...sqsRecords, ...nonSqsRecords], + }; + + vi.mocked(scaleUp).mockResolvedValue([]); + + await scaleUpHandler(mixedEvent, context); + expect(scaleUp).toHaveBeenCalledWith( + expect.arrayContaining([ + expect.objectContaining({ messageId: 'message-0' }), + expect.objectContaining({ messageId: 'message-1' }), + ]), + ); + expect(scaleUp).not.toHaveBeenCalledWith( + expect.arrayContaining([expect.objectContaining({ messageId: 'message-2' })]), + ); + }); + + it('Should sort messages by retry count', async () => { + const records = [ + { + ...sqsRecord, + messageId: 'high-retry', + body: JSON.stringify({ ...body, retryCounter: 5 }), + }, + { + ...sqsRecord, + messageId: 'low-retry', + body: JSON.stringify({ ...body, retryCounter: 1 }), + }, + { + ...sqsRecord, + messageId: 'no-retry', + body: JSON.stringify({ ...body }), + }, + ]; + const multiRecordEvent: SQSEvent = { Records: records }; + + vi.mocked(scaleUp).mockImplementation((messages) => { + // Verify messages are sorted by retry count (ascending) + expect(messages[0].messageId).toBe('no-retry'); + expect(messages[1].messageId).toBe('low-retry'); + expect(messages[2].messageId).toBe('high-retry'); + return Promise.resolve([]); + }); + + await scaleUpHandler(multiRecordEvent, context); + }); + + it('Should return all failed messages when scaleUp throws non-ScaleError', async () => { + const records = createMultipleRecords(2); + const multiRecordEvent: 
SQSEvent = { Records: records }; + + vi.mocked(scaleUp).mockRejectedValue(new Error('Generic error')); + + const result = await scaleUpHandler(multiRecordEvent, context); + expect(result).toEqual({ batchItemFailures: [] }); + }); + + it('Should throw when scaleUp throws ScaleError', async () => { + const records = createMultipleRecords(2); + const multiRecordEvent: SQSEvent = { Records: records }; + + const error = new ScaleError(2); + vi.mocked(scaleUp).mockRejectedValue(error); + + await expect(scaleUpHandler(multiRecordEvent, context)).resolves.toEqual({ + batchItemFailures: [{ itemIdentifier: 'message-0' }, { itemIdentifier: 'message-1' }], }); }); + }); +}); + +describe('Test scale down lambda wrapper.', () => { + it('Scaling down no error.', async () => { + vi.mocked(scaleDown).mockResolvedValue(); await expect(scaleDownHandler({}, context)).resolves.not.toThrow(); }); it('Scaling down with error.', async () => { const error = new Error('Scaling down with error.'); - const mock = vi.fn(scaleDown); - mock.mockRejectedValue(error); + vi.mocked(scaleDown).mockRejectedValue(error); await expect(scaleDownHandler({}, context)).resolves.not.toThrow(); }); }); describe('Adjust pool.', () => { it('Receive message to adjust pool.', async () => { - const mock = vi.fn(adjust); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); - }); - }); + vi.mocked(adjust).mockResolvedValue(); await expect(adjustPool({ poolSize: 2 }, context)).resolves.not.toThrow(); }); it('Handle error for adjusting pool.', async () => { const error = new Error('Handle error for adjusting pool.'); - const mock = vi.fn() as MockedFunction; - mock.mockImplementation(() => { - return Promise.reject(error); - }); - vi.mocked(adjust).mockImplementation(mock); + vi.mocked(adjust).mockRejectedValue(error); const logSpy = vi.spyOn(logger, 'error'); await adjustPool({ poolSize: 0 }, context); expect(logSpy).toHaveBeenCalledWith(`Handle error for adjusting pool. 
${error.message}`, { error }); @@ -180,12 +270,7 @@ describe('Test middleware', () => { describe('Test ssm housekeeper lambda wrapper.', () => { it('Invoke without errors.', async () => { - const mock = vi.fn(cleanSSMTokens); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); - }); - }); + vi.mocked(cleanSSMTokens).mockResolvedValue(); process.env.SSM_CLEANUP_CONFIG = JSON.stringify({ dryRun: false, @@ -197,29 +282,20 @@ describe('Test ssm housekeeper lambda wrapper.', () => { }); it('Errors not throws.', async () => { - const mock = vi.fn(cleanSSMTokens); - mock.mockRejectedValue(new Error()); + vi.mocked(cleanSSMTokens).mockRejectedValue(new Error()); await expect(ssmHousekeeper({}, context)).resolves.not.toThrow(); }); }); describe('Test job retry check wrapper', () => { it('Handle without error should resolve.', async () => { - const mock = vi.fn() as MockedFunction; - mock.mockImplementation(() => { - return Promise.resolve(); - }); - vi.mocked(checkAndRetryJob).mockImplementation(mock); + vi.mocked(checkAndRetryJob).mockResolvedValue(); await expect(jobRetryCheck(sqsEvent, context)).resolves.not.toThrow(); }); it('Handle with error should resolve and log only a warning.', async () => { const error = new Error('Error handling retry check.'); - const mock = vi.fn() as MockedFunction; - mock.mockImplementation(() => { - return Promise.reject(error); - }); - vi.mocked(checkAndRetryJob).mockImplementation(mock); + vi.mocked(checkAndRetryJob).mockRejectedValue(error); const logSpyWarn = vi.spyOn(logger, 'warn'); await expect(jobRetryCheck(sqsEvent, context)).resolves.not.toThrow(); diff --git a/lambdas/functions/control-plane/src/lambda.ts b/lambdas/functions/control-plane/src/lambda.ts index 3e3ab90557..e2a0451c95 100644 --- a/lambdas/functions/control-plane/src/lambda.ts +++ b/lambdas/functions/control-plane/src/lambda.ts @@ -1,34 +1,66 @@ import middy from '@middy/core'; import { logger, setContext } from '@aws-github-runner/aws-powertools-util'; import { captureLambdaHandler, tracer } from '@aws-github-runner/aws-powertools-util'; -import { Context, SQSEvent } from 'aws-lambda'; +import { Context, type SQSBatchItemFailure, type SQSBatchResponse, SQSEvent } from 'aws-lambda'; import { PoolEvent, adjust } from './pool/pool'; import ScaleError from './scale-runners/ScaleError'; import { scaleDown } from './scale-runners/scale-down'; -import { scaleUp } from './scale-runners/scale-up'; +import { type ActionRequestMessage, type ActionRequestMessageSQS, scaleUp } from './scale-runners/scale-up'; import { SSMCleanupOptions, cleanSSMTokens } from './scale-runners/ssm-housekeeper'; import { checkAndRetryJob } from './scale-runners/job-retry'; -export async function scaleUpHandler(event: SQSEvent, context: Context): Promise { +export async function scaleUpHandler(event: SQSEvent, context: Context): Promise { setContext(context, 'lambda.ts'); logger.logEventIfEnabled(event); - if (event.Records.length !== 1) { - logger.warn('Event ignored, only one record at the time can be handled, ensure the lambda batch size is set to 1.'); - return Promise.resolve(); + const sqsMessages: ActionRequestMessageSQS[] = []; + const warnedEventSources = new Set(); + + for (const { body, eventSource, messageId } of event.Records) { + if (eventSource !== 'aws:sqs') { + if (!warnedEventSources.has(eventSource)) { + logger.warn('Ignoring non-sqs event source', { eventSource }); + warnedEventSources.add(eventSource); + } + + continue; + } + + const payload = JSON.parse(body) as 
ActionRequestMessage; + sqsMessages.push({ ...payload, messageId }); } + // Sort messages by their retry count, so that we retry the same messages if + // there's a persistent failure. This should cause messages to be dropped + // quicker than if we retried in an arbitrary order. + sqsMessages.sort((l, r) => { + return (l.retryCounter ?? 0) - (r.retryCounter ?? 0); + }); + + const batchItemFailures: SQSBatchItemFailure[] = []; + try { - await scaleUp(event.Records[0].eventSource, JSON.parse(event.Records[0].body)); - return Promise.resolve(); + const rejectedMessageIds = await scaleUp(sqsMessages); + + for (const messageId of rejectedMessageIds) { + batchItemFailures.push({ + itemIdentifier: messageId, + }); + } + + return { batchItemFailures }; } catch (e) { if (e instanceof ScaleError) { - return Promise.reject(e); + batchItemFailures.push(...e.toBatchItemFailures(sqsMessages)); + logger.warn(`${e.detailedMessage} A retry will be attempted via SQS.`, { error: e }); } else { - logger.warn(`Ignoring error: ${e}`); - return Promise.resolve(); + logger.error(`Error processing batch (size: ${sqsMessages.length}): ${(e as Error).message}, ignoring batch`, { + error: e, + }); } + + return { batchItemFailures }; } } diff --git a/lambdas/functions/control-plane/src/local.ts b/lambdas/functions/control-plane/src/local.ts index 2166da58fd..0b06335c8a 100644 --- a/lambdas/functions/control-plane/src/local.ts +++ b/lambdas/functions/control-plane/src/local.ts @@ -1,21 +1,21 @@ import { logger } from '@aws-github-runner/aws-powertools-util'; -import { ActionRequestMessage, scaleUp } from './scale-runners/scale-up'; +import { scaleUpHandler } from './lambda'; +import { Context, SQSEvent } from 'aws-lambda'; -const sqsEvent = { +const sqsEvent: SQSEvent = { Records: [ { messageId: 'e8d74d08-644e-42ca-bf82-a67daa6c4dad', receiptHandle: - // eslint-disable-next-line max-len 'AQEBCpLYzDEKq4aKSJyFQCkJduSKZef8SJVOperbYyNhXqqnpFG5k74WygVAJ4O0+9nybRyeOFThvITOaS21/jeHiI5fgaM9YKuI0oGYeWCIzPQsluW5CMDmtvqv1aA8sXQ5n2x0L9MJkzgdIHTC3YWBFLQ2AxSveOyIHwW+cHLIFCAcZlOaaf0YtaLfGHGkAC4IfycmaijV8NSlzYgDuxrC9sIsWJ0bSvk5iT4ru/R4+0cjm7qZtGlc04k9xk5Fu6A+wRxMaIyiFRY+Ya19ykcevQldidmEjEWvN6CRToLgclk=', - body: { + body: JSON.stringify({ repositoryName: 'self-hosted', repositoryOwner: 'test-runners', eventType: 'workflow_job', id: 987654, installationId: 123456789, - }, + }), attributes: { ApproximateReceiveCount: '1', SentTimestamp: '1626450047230', @@ -34,12 +34,34 @@ const sqsEvent = { ], }; +const context: Context = { + awsRequestId: '1', + callbackWaitsForEmptyEventLoop: false, + functionName: '', + functionVersion: '', + getRemainingTimeInMillis: () => 0, + invokedFunctionArn: '', + logGroupName: '', + logStreamName: '', + memoryLimitInMB: '', + done: () => { + return; + }, + fail: () => { + return; + }, + succeed: () => { + return; + }, +}; + export function run(): void { - scaleUp(sqsEvent.Records[0].eventSource, sqsEvent.Records[0].body as ActionRequestMessage) - .then() - .catch((e) => { - logger.error(e); - }); + try { + scaleUpHandler(sqsEvent, context); + } catch (e: unknown) { + const message = e instanceof Error ? e.message : `${e}`; + logger.error(message, e instanceof Error ? 
{ error: e } : {}); + } } run(); diff --git a/lambdas/functions/control-plane/src/pool/pool.test.ts b/lambdas/functions/control-plane/src/pool/pool.test.ts index 6dd389873b..c05a8b8cb7 100644 --- a/lambdas/functions/control-plane/src/pool/pool.test.ts +++ b/lambdas/functions/control-plane/src/pool/pool.test.ts @@ -190,11 +190,7 @@ describe('Test simple pool.', () => { it('Top up pool with pool size 2 registered.', async () => { await adjust({ poolSize: 3 }); expect(createRunners).toHaveBeenCalledTimes(1); - expect(createRunners).toHaveBeenCalledWith( - expect.anything(), - expect.objectContaining({ numberOfRunners: 1 }), - expect.anything(), - ); + expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 1, expect.anything()); }); it('Should not top up if pool size is reached.', async () => { @@ -270,11 +266,7 @@ describe('Test simple pool.', () => { it('Top up if the pool size is set to 5', async () => { await adjust({ poolSize: 5 }); // 2 idle, top up with 3 to match a pool of 5 - expect(createRunners).toHaveBeenCalledWith( - expect.anything(), - expect.objectContaining({ numberOfRunners: 3 }), - expect.anything(), - ); + expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 3, expect.anything()); }); }); @@ -289,11 +281,7 @@ describe('Test simple pool.', () => { it('Top up if the pool size is set to 5', async () => { await adjust({ poolSize: 5 }); // 2 idle, top up with 3 to match a pool of 5 - expect(createRunners).toHaveBeenCalledWith( - expect.anything(), - expect.objectContaining({ numberOfRunners: 3 }), - expect.anything(), - ); + expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 3, expect.anything()); }); }); @@ -343,11 +331,7 @@ describe('Test simple pool.', () => { await adjust({ poolSize: 5 }); // 2 idle, 2 prefixed idle top up with 1 to match a pool of 5 - expect(createRunners).toHaveBeenCalledWith( - expect.anything(), - expect.objectContaining({ numberOfRunners: 1 }), - expect.anything(), - ); + expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 1, expect.anything()); }); }); }); diff --git a/lambdas/functions/control-plane/src/pool/pool.ts b/lambdas/functions/control-plane/src/pool/pool.ts index 07477572ce..aa690e97f6 100644 --- a/lambdas/functions/control-plane/src/pool/pool.ts +++ b/lambdas/functions/control-plane/src/pool/pool.ts @@ -92,11 +92,11 @@ export async function adjust(event: PoolEvent): Promise { environment, launchTemplateName, subnets, - numberOfRunners: topUp, amiIdSsmParameterName, tracingEnabled, onDemandFailoverOnError, }, + topUp, githubInstallationClient, ); } else { diff --git a/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts b/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts new file mode 100644 index 0000000000..0a7478c12f --- /dev/null +++ b/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts @@ -0,0 +1,76 @@ +import { describe, expect, it } from 'vitest'; +import type { ActionRequestMessageSQS } from './scale-up'; +import ScaleError from './ScaleError'; + +describe('ScaleError', () => { + describe('detailedMessage', () => { + it('should format message for single instance failure', () => { + const error = new ScaleError(1); + + expect(error.detailedMessage).toBe( + 'Failed to create instance, create fleet failed. 
(Failed to create 1 instance)', + ); + }); + + it('should format message for multiple instance failures', () => { + const error = new ScaleError(3); + + expect(error.detailedMessage).toBe( + 'Failed to create instance, create fleet failed. (Failed to create 3 instances)', + ); + }); + }); + + describe('toBatchItemFailures', () => { + const mockMessages: ActionRequestMessageSQS[] = [ + { messageId: 'msg-1', id: 1, eventType: 'workflow_job' }, + { messageId: 'msg-2', id: 2, eventType: 'workflow_job' }, + { messageId: 'msg-3', id: 3, eventType: 'workflow_job' }, + { messageId: 'msg-4', id: 4, eventType: 'workflow_job' }, + ]; + + it.each([ + { failedCount: 1, expected: [{ itemIdentifier: 'msg-1' }], description: 'default instance count' }, + { + failedCount: 2, + expected: [{ itemIdentifier: 'msg-1' }, { itemIdentifier: 'msg-2' }], + description: 'less than message count', + }, + { + failedCount: 4, + expected: [ + { itemIdentifier: 'msg-1' }, + { itemIdentifier: 'msg-2' }, + { itemIdentifier: 'msg-3' }, + { itemIdentifier: 'msg-4' }, + ], + description: 'equal to message count', + }, + { + failedCount: 10, + expected: [ + { itemIdentifier: 'msg-1' }, + { itemIdentifier: 'msg-2' }, + { itemIdentifier: 'msg-3' }, + { itemIdentifier: 'msg-4' }, + ], + description: 'more than message count', + }, + { failedCount: 0, expected: [], description: 'zero failed instances' }, + { failedCount: -1, expected: [], description: 'negative failed instances' }, + { failedCount: -10, expected: [], description: 'large negative failed instances' }, + ])('should handle $description (failedCount=$failedCount)', ({ failedCount, expected }) => { + const error = new ScaleError(failedCount); + const failures = error.toBatchItemFailures(mockMessages); + + expect(failures).toEqual(expected); + }); + + it('should handle empty message array', () => { + const error = new ScaleError(3); + const failures = error.toBatchItemFailures([]); + + expect(failures).toEqual([]); + }); + }); +}); diff --git a/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts b/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts index d7e71f8c33..9c1f474d17 100644 --- a/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts +++ b/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts @@ -1,8 +1,28 @@ +import type { SQSBatchItemFailure } from 'aws-lambda'; +import type { ActionRequestMessageSQS } from './scale-up'; + class ScaleError extends Error { - constructor(public message: string) { - super(message); + constructor(public readonly failedInstanceCount: number = 1) { + super('Failed to create instance, create fleet failed.'); this.name = 'ScaleError'; - this.stack = new Error().stack; + } + + /** + * Gets a formatted error message including the failed instance count + */ + public get detailedMessage(): string { + return `${this.message} (Failed to create ${this.failedInstanceCount} instance${this.failedInstanceCount !== 1 ? 
's' : ''})`; + } + + /** + * Generate SQS batch item failures for the failed instances + */ + public toBatchItemFailures(messages: ActionRequestMessageSQS[]): SQSBatchItemFailure[] { + // Ensure we don't retry negative counts or more messages than available + const messagesToRetry = Math.max(0, Math.min(this.failedInstanceCount, messages.length)); + return messages.slice(0, messagesToRetry).map(({ messageId }) => ({ + itemIdentifier: messageId, + })); } } diff --git a/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts b/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts index c401ab4c2d..f807d06d8a 100644 --- a/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts +++ b/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts @@ -2,9 +2,11 @@ import { publishMessage } from '../aws/sqs'; import { publishRetryMessage, checkAndRetryJob } from './job-retry'; import { ActionRequestMessage, ActionRequestMessageRetry } from './scale-up'; import { getOctokit } from '../github/octokit'; +import { jobRetryCheck } from '../lambda'; import { Octokit } from '@octokit/rest'; import { createSingleMetric } from '@aws-github-runner/aws-powertools-util'; import { describe, it, expect, beforeEach, vi } from 'vitest'; +import type { SQSRecord } from 'aws-lambda'; vi.mock('../aws/sqs', async () => ({ publishMessage: vi.fn(), @@ -269,3 +271,93 @@ describe(`Test job retry check`, () => { expect(publishMessage).not.toHaveBeenCalled(); }); }); + +describe('Test job retry handler (batch processing)', () => { + const context = { + requestId: 'request-id', + functionName: 'function-name', + functionVersion: 'function-version', + invokedFunctionArn: 'invoked-function-arn', + memoryLimitInMB: '128', + awsRequestId: 'aws-request-id', + logGroupName: 'log-group-name', + logStreamName: 'log-stream-name', + remainingTimeInMillis: () => 30000, + done: () => {}, + fail: () => {}, + succeed: () => {}, + getRemainingTimeInMillis: () => 30000, + callbackWaitsForEmptyEventLoop: false, + }; + + function createSQSRecord(messageId: string): SQSRecord { + return { + messageId, + receiptHandle: 'receipt-handle', + body: JSON.stringify({ + eventType: 'workflow_job', + id: 123, + installationId: 456, + repositoryName: 'test-repo', + repositoryOwner: 'test-owner', + repoOwnerType: 'Organization', + retryCounter: 0, + }), + attributes: { + ApproximateReceiveCount: '1', + SentTimestamp: '1234567890', + SenderId: 'sender-id', + ApproximateFirstReceiveTimestamp: '1234567891', + }, + messageAttributes: {}, + md5OfBody: 'md5', + eventSource: 'aws:sqs', + eventSourceARN: 'arn:aws:sqs:region:account:queue', + awsRegion: 'us-east-1', + }; + } + + beforeEach(() => { + vi.clearAllMocks(); + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.JOB_QUEUE_SCALE_UP_URL = 'https://sqs.example.com/queue'; + }); + + it('should handle multiple records in a single batch', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ + data: { + status: 'queued', + }, + headers: {}, + })); + + const event = { + Records: [createSQSRecord('msg-1'), createSQSRecord('msg-2'), createSQSRecord('msg-3')], + }; + + await expect(jobRetryCheck(event, context)).resolves.not.toThrow(); + expect(publishMessage).toHaveBeenCalledTimes(3); + }); + + it('should continue processing other records when one fails', async () => { + mockCreateOctokitClient + .mockResolvedValueOnce(new Octokit()) // First record succeeds + .mockRejectedValueOnce(new Error('API error')) // Second record 
fails + .mockResolvedValueOnce(new Octokit()); // Third record succeeds + + mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ + data: { + status: 'queued', + }, + headers: {}, + })); + + const event = { + Records: [createSQSRecord('msg-1'), createSQSRecord('msg-2'), createSQSRecord('msg-3')], + }; + + await expect(jobRetryCheck(event, context)).resolves.not.toThrow(); + // There were two successful calls to publishMessage + expect(publishMessage).toHaveBeenCalledTimes(2); + }); +}); diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts index 477ef147fb..b876d31d50 100644 --- a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts +++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts @@ -1,5 +1,4 @@ import { PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm'; -import { Octokit } from '@octokit/rest'; import { mockClient } from 'aws-sdk-client-mock'; import 'aws-sdk-client-mock-jest/vitest'; // Using vi.mocked instead of jest-mock @@ -9,10 +8,10 @@ import { performance } from 'perf_hooks'; import * as ghAuth from '../github/auth'; import { createRunner, listEC2Runners } from './../aws/runners'; import { RunnerInputParameters } from './../aws/runners.d'; -import ScaleError from './ScaleError'; import * as scaleUpModule from './scale-up'; import { getParameter } from '@aws-github-runner/aws-ssm-util'; import { describe, it, expect, beforeEach, vi } from 'vitest'; +import type { Octokit } from '@octokit/rest'; const mockOctokit = { paginate: vi.fn(), @@ -29,6 +28,7 @@ const mockOctokit = { getRepoInstallation: vi.fn(), }, }; + const mockCreateRunner = vi.mocked(createRunner); const mockListRunners = vi.mocked(listEC2Runners); const mockSSMClient = mockClient(SSMClient); @@ -68,26 +68,33 @@ export type RunnerType = 'ephemeral' | 'non-ephemeral'; // for ephemeral and non-ephemeral runners const RUNNER_TYPES: RunnerType[] = ['ephemeral', 'non-ephemeral']; -const mocktokit = Octokit as vi.MockedClass; const mockedAppAuth = vi.mocked(ghAuth.createGithubAppAuth); const mockedInstallationAuth = vi.mocked(ghAuth.createGithubInstallationAuth); const mockCreateClient = vi.mocked(ghAuth.createOctokitClient); -const TEST_DATA: scaleUpModule.ActionRequestMessage = { +const TEST_DATA_SINGLE: scaleUpModule.ActionRequestMessageSQS = { id: 1, eventType: 'workflow_job', repositoryName: 'hello-world', repositoryOwner: 'Codertocat', installationId: 2, repoOwnerType: 'Organization', + messageId: 'foobar', }; +const TEST_DATA: scaleUpModule.ActionRequestMessageSQS[] = [ + { + ...TEST_DATA_SINGLE, + messageId: 'foobar', + }, +]; + const cleanEnv = process.env; const EXPECTED_RUNNER_PARAMS: RunnerInputParameters = { environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, numberOfRunners: 1, launchTemplateName: 'lt-1', ec2instanceCriteria: { @@ -134,14 +141,14 @@ beforeEach(() => { instanceId: 'i-1234', launchTime: new Date(), type: 'Org', - owner: TEST_DATA.repositoryOwner, + owner: TEST_DATA_SINGLE.repositoryOwner, }, ]); mockedAppAuth.mockResolvedValue({ type: 'app', token: 'token', - appId: TEST_DATA.installationId, + appId: TEST_DATA_SINGLE.installationId, expiresAt: 'some-date', }); mockedInstallationAuth.mockResolvedValue({ @@ -155,7 +162,7 @@ beforeEach(() => { installationId: 0, }); - mockCreateClient.mockResolvedValue(new mocktokit()); + 
mockCreateClient.mockResolvedValue(mockOctokit as unknown as Octokit); }); describe('scaleUp with GHES', () => { @@ -163,17 +170,12 @@ describe('scaleUp with GHES', () => { process.env.GHES_URL = 'https://github.enterprise.something'; }); - it('ignores non-sqs events', async () => { - expect.assertions(1); - await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!')); - }); - it('checks queued workflows', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({ - job_id: TEST_DATA.id, - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + job_id: TEST_DATA_SINGLE.id, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); @@ -181,7 +183,7 @@ describe('scaleUp with GHES', () => { mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ data: { total_count: 0 }, })); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toBeCalled(); }); @@ -200,18 +202,18 @@ describe('scaleUp with GHES', () => { }); it('gets the current org level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); @@ -219,35 +221,35 @@ describe('scaleUp with GHES', () => { it('does create a runner if maximum is set to -1', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '-1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toHaveBeenCalled(); expect(createRunner).toHaveBeenCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, }); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a runner with correct config', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with labels in a specific group', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with ami id override from ssm parameter', async () => { process.env.AMI_ID_SSM_PARAMETER_NAME = 'my-ami-id-param'; - await scaleUpModule.scaleUp('aws:sqs', 
TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, amiIdSsmParameterName: 'my-ami-id-param' }); }); @@ -256,15 +258,15 @@ describe('scaleUp with GHES', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(Error); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toBeInstanceOf(Error); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); }); it('Discards event if it is a User repo and org level runners is enabled', async () => { process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; - const USER_REPO_TEST_DATA = { ...TEST_DATA }; - USER_REPO_TEST_DATA.repoOwnerType = 'User'; - await scaleUpModule.scaleUp('aws:sqs', USER_REPO_TEST_DATA); + const USER_REPO_TEST_DATA = structuredClone(TEST_DATA); + USER_REPO_TEST_DATA[0].repoOwnerType = 'User'; + await scaleUpModule.scaleUp(USER_REPO_TEST_DATA); expect(createRunner).not.toHaveBeenCalled(); }); @@ -272,7 +274,7 @@ describe('scaleUp with GHES', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 2); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -283,7 +285,7 @@ describe('scaleUp with GHES', () => { }); it('Does not create SSM parameter for runner group id if it exists', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(0); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 1); }); @@ -291,9 +293,9 @@ describe('scaleUp with GHES', () => { it('create start runner config for ephemeral runners ', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, name: 'unit-test-i-12345', runner_group_id: 1, labels: ['label1', 'label2'], @@ -314,7 +316,7 @@ describe('scaleUp with GHES', () => { it('create start runner config for non-ephemeral runners ', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalled(); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -385,7 +387,7 @@ describe('scaleUp with GHES', () => { 'i-150', 'i-151', ]; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); const endTime = performance.now(); expect(endTime - startTime).toBeGreaterThan(1000); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 40); @@ -399,87 +401,307 @@ describe('scaleUp with GHES', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = 
`${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; - // `--url https://github.enterprise.something/${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + expectedRunnerParams.runnerOwner = `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; + // `--url https://github.enterprise.something/${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, // `--token 1234abcd`, // ]; }); it('gets the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('uses the default runner max count', async () => { process.env.RUNNERS_MAXIMUM_COUNT = undefined; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('Check error is thrown', async () => { const mockCreateRunners = vi.mocked(createRunner); mockCreateRunners.mockRejectedValue(new Error('no retry')); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toThrow('no retry'); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toThrow('no retry'); mockCreateRunners.mockReset(); }); }); -}); -describe('scaleUp with public GH', () => { - it('ignores non-sqs events', async () => { - expect.assertions(1); - await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!')); + describe('Batch processing', () => { + beforeEach(() => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 
'true'; + process.env.RUNNERS_MAXIMUM_COUNT = '10'; + }); + + const createTestMessages = ( + count: number, + overrides: Partial[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, (_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + { repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '1'; // Set to 1 so with 1 existing, no new ones can be created + mockListRunners.mockImplementation(async () => [ + { + instanceId: 'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); // No runners should be created + expect(rejectedMessages).toHaveLength(3); // All 3 messages should be rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only 
workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + // Override the default mock to return different installation IDs + mockOctokit.apps.getOrgInstallation.mockReset(); + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, 'https://github.enterprise.something/api/v3'); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, 'https://github.enterprise.something/api/v3'); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); + }); }); +}); +describe('scaleUp with public GH', () => { it('checks queued workflows', async () => { - await 
scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({ - job_id: TEST_DATA.id, - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + job_id: TEST_DATA_SINGLE.id, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('not checking queued workflows', async () => { process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); }); @@ -487,7 +709,7 @@ describe('scaleUp with public GH', () => { mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ data: { status: 'completed' }, })); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toBeCalled(); }); @@ -499,38 +721,38 @@ describe('scaleUp with public GH', () => { }); it('gets the current org level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, }); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a runner with correct config', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with labels in s specific group', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); }); @@ -543,44 +765,44 @@ describe('scaleUp with public GH', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; + expectedRunnerParams.runnerOwner = `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; }); it('gets the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when 
maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with correct config and labels and on demand failover enabled.', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.ENABLE_ON_DEMAND_FAILOVER_FOR_ERRORS = JSON.stringify(['InsufficientInstanceCapacity']); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, onDemandFailoverOnError: ['InsufficientInstanceCapacity'], @@ -590,26 +812,25 @@ describe('scaleUp with public GH', () => { it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('ephemeral runners only run with workflow_job event, others should fail.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; - await expect( - scaleUpModule.scaleUp('aws:sqs', { - ...TEST_DATA, - eventType: 'check_run', - }), - ).rejects.toBeInstanceOf(Error); + + const USER_REPO_TEST_DATA = structuredClone(TEST_DATA); + USER_REPO_TEST_DATA[0].eventType = 'check_run'; + + await expect(scaleUpModule.scaleUp(USER_REPO_TEST_DATA)).resolves.toEqual(['foobar']); }); it('creates a ephemeral runner with JIT config.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -631,7 +852,7 @@ describe('scaleUp with public GH', () => { process.env.ENABLE_JIT_CONFIG = 'false'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -654,7 +875,7 @@ describe('scaleUp with public GH', () => { process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; process.env.RUNNER_LABELS = 'jit'; 
process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -674,21 +895,247 @@ describe('scaleUp with public GH', () => { it('creates a ephemeral runner after checking job is queued.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'true'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('disable auto update on the runner.', async () => { process.env.DISABLE_RUNNER_AUTOUPDATE = 'true'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); - it('Scaling error should cause reject so retry can be triggered.', async () => { + it('Scaling error should return failed message IDs so retry can be triggered.', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(ScaleError); + await expect(scaleUpModule.scaleUp(TEST_DATA)).resolves.toEqual(['foobar']); + }); + }); + + describe('Batch processing', () => { + const createTestMessages = ( + count: number, + overrides: Partial[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, (_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + beforeEach(() => { + setDefaults(); + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + process.env.RUNNERS_MAXIMUM_COUNT = '10'; + }); + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + { repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }), + 
); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '1'; // Set to 1 so with 1 existing, no new ones can be created + mockListRunners.mockImplementation(async () => [ + { + instanceId: 'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); // No runners should be created + expect(rejectedMessages).toHaveLength(3); // All 3 messages should be rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + // Override the default mock to return different installation IDs + mockOctokit.apps.getOrgInstallation.mockReset(); + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 
100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, ''); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, ''); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); }); }); }); @@ -698,17 +1145,12 @@ describe('scaleUp with Github Data Residency', () => { process.env.GHES_URL = 'https://companyname.ghe.com'; }); - it('ignores non-sqs events', async () => { - expect.assertions(1); - await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!')); - }); - it('checks queued workflows', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({ - job_id: TEST_DATA.id, - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + job_id: TEST_DATA_SINGLE.id, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); @@ -716,7 +1158,7 @@ describe('scaleUp with Github Data Residency', () => { mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ data: { total_count: 0 }, })); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toBeCalled(); }); @@ -735,18 +1177,18 @@ describe('scaleUp with Github Data Residency', () => { }); it('gets the current org level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); 
expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); @@ -754,35 +1196,35 @@ describe('scaleUp with Github Data Residency', () => { it('does create a runner if maximum is set to -1', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '-1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toHaveBeenCalled(); expect(createRunner).toHaveBeenCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, }); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a runner with correct config', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with labels in a specific group', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with ami id override from ssm parameter', async () => { process.env.AMI_ID_SSM_PARAMETER_NAME = 'my-ami-id-param'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, amiIdSsmParameterName: 'my-ami-id-param' }); }); @@ -791,15 +1233,15 @@ describe('scaleUp with Github Data Residency', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(Error); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toBeInstanceOf(Error); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); }); it('Discards event if it is a User repo and org level runners is enabled', async () => { process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; - const USER_REPO_TEST_DATA = { ...TEST_DATA }; - USER_REPO_TEST_DATA.repoOwnerType = 'User'; - await scaleUpModule.scaleUp('aws:sqs', USER_REPO_TEST_DATA); + const USER_REPO_TEST_DATA = structuredClone(TEST_DATA); + USER_REPO_TEST_DATA[0].repoOwnerType = 'User'; + await scaleUpModule.scaleUp(USER_REPO_TEST_DATA); expect(createRunner).not.toHaveBeenCalled(); }); @@ -807,7 +1249,7 @@ describe('scaleUp with Github Data Residency', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 2); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -818,7 +1260,7 @@ describe('scaleUp with Github Data Residency', () => { }); it('Does not create SSM parameter for runner group id if it exists', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); 
expect(mockOctokit.paginate).toHaveBeenCalledTimes(0); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 1); }); @@ -826,9 +1268,9 @@ describe('scaleUp with Github Data Residency', () => { it('create start runner config for ephemeral runners ', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, name: 'unit-test-i-12345', runner_group_id: 1, labels: ['label1', 'label2'], @@ -849,7 +1291,7 @@ describe('scaleUp with Github Data Residency', () => { it('create start runner config for non-ephemeral runners ', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalled(); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -920,7 +1362,7 @@ describe('scaleUp with Github Data Residency', () => { 'i-150', 'i-151', ]; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); const endTime = performance.now(); expect(endTime - startTime).toBeGreaterThan(1000); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 40); @@ -934,67 +1376,295 @@ describe('scaleUp with Github Data Residency', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; - // `--url https://companyname.ghe.com${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + expectedRunnerParams.runnerOwner = `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; + // `--url https://companyname.ghe.com${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, // `--token 1234abcd`, // ]; }); it('gets the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: 
TEST_DATA_SINGLE.repositoryName, }); }); it('uses the default runner max count', async () => { process.env.RUNNERS_MAXIMUM_COUNT = undefined; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('Check error is thrown', async () => { const mockCreateRunners = vi.mocked(createRunner); mockCreateRunners.mockRejectedValue(new Error('no retry')); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toThrow('no retry'); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toThrow('no retry'); mockCreateRunners.mockReset(); }); }); + + describe('Batch processing', () => { + const createTestMessages = ( + count: number, + overrides: Partial[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, (_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + beforeEach(() => { + setDefaults(); + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + process.env.RUNNERS_MAXIMUM_COUNT = '10'; + }); + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + { repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }), + ); + 
expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '2'; + mockListRunners.mockImplementation(async () => [ + { + instanceId: 'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(5); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, // 2 max - 1 existing = 1 new + }), + ); + expect(rejectedMessages).toHaveLength(4); // 5 requested - 1 created = 4 rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 
100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, ''); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, ''); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); + }); + }); }); function defaultOctokitMockImpl() { @@ -1034,12 +1704,12 @@ function defaultOctokitMockImpl() { }; const mockInstallationIdReturnValueOrgs = { data: { - id: TEST_DATA.installationId, + id: TEST_DATA_SINGLE.installationId, }, }; const mockInstallationIdReturnValueRepos = { data: { - id: TEST_DATA.installationId, + id: TEST_DATA_SINGLE.installationId, }, }; diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts index 638edd3232..35df7ea5d7 100644 --- a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts +++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts @@ -6,8 +6,6 @@ import yn from 'yn'; import { createGithubAppAuth, createGithubInstallationAuth, createOctokitClient } from '../github/auth'; import { createRunner, listEC2Runners, tag } from './../aws/runners'; import { RunnerInputParameters } from './../aws/runners.d'; -import ScaleError from './ScaleError'; -import { publishRetryMessage } from './job-retry'; import { metricGitHubAppRateLimit } from '../github/rate-limit'; const logger = createChildLogger('scale-up'); @@ -33,6 +31,10 @@ export interface ActionRequestMessage { retryCounter?: number; } +export interface ActionRequestMessageSQS extends ActionRequestMessage { + messageId: string; +} + export interface ActionRequestMessageRetry extends ActionRequestMessage { retryCounter: number; } @@ -114,7 +116,7 @@ function removeTokenFromLogging(config: string[]): string[] { } export async function getInstallationId( - ghesApiUrl: string, + githubAppClient: Octokit, enableOrgLevel: boolean, payload: ActionRequestMessage, ): Promise<number> { @@ -122,16 +124,14 @@ export async function getInstallationId( return
payload.installationId; } - const ghAuth = await createGithubAppAuth(undefined, ghesApiUrl); - const githubClient = await createOctokitClient(ghAuth.token, ghesApiUrl); return enableOrgLevel ? ( - await githubClient.apps.getOrgInstallation({ + await githubAppClient.apps.getOrgInstallation({ org: payload.repositoryOwner, }) ).data.id : ( - await githubClient.apps.getRepoInstallation({ + await githubAppClient.apps.getRepoInstallation({ owner: payload.repositoryOwner, repo: payload.repositoryName, }) @@ -211,23 +211,27 @@ async function getRunnerGroupByName(ghClient: Octokit, githubRunnerConfig: Creat export async function createRunners( githubRunnerConfig: CreateGitHubRunnerConfig, ec2RunnerConfig: CreateEC2RunnerConfig, + numberOfRunners: number, ghClient: Octokit, -): Promise<void> { +): Promise<string[]> { const instances = await createRunner({ runnerType: githubRunnerConfig.runnerType, runnerOwner: githubRunnerConfig.runnerOwner, - numberOfRunners: 1, + numberOfRunners, ...ec2RunnerConfig, }); if (instances.length !== 0) { await createStartRunnerConfig(githubRunnerConfig, instances, ghClient); } + + return instances; } -export async function scaleUp(eventSource: string, payload: ActionRequestMessage): Promise<void> { - logger.info(`Received ${payload.eventType} from ${payload.repositoryOwner}/${payload.repositoryName}`); +export async function scaleUp(payloads: ActionRequestMessageSQS[]): Promise<string[]> { + logger.info('Received scale up requests', { + n_requests: payloads.length, + }); - if (eventSource !== 'aws:sqs') throw Error('Cannot handle non-SQS events!'); const enableOrgLevel = yn(process.env.ENABLE_ORGANIZATION_RUNNERS, { default: true }); const maximumRunners = parseInt(process.env.RUNNERS_MAXIMUM_COUNT || '3'); const runnerLabels = process.env.RUNNER_LABELS || ''; @@ -252,103 +256,202 @@ export async function scaleUp(eventSource: string, payload: ActionRequestMessage ? (JSON.parse(process.env.ENABLE_ON_DEMAND_FAILOVER_FOR_ERRORS) as [string]) : []; - if (ephemeralEnabled && payload.eventType !== 'workflow_job') { - logger.warn(`${payload.eventType} event is not supported in combination with ephemeral runners.`); - throw Error( - `The event type ${payload.eventType} is not supported in combination with ephemeral runners.` + - `Please ensure you have enabled workflow_job events.`, - ); - } + const { ghesApiUrl, ghesBaseUrl } = getGitHubEnterpriseApiUrl(); - if (!isValidRepoOwnerTypeIfOrgLevelEnabled(payload, enableOrgLevel)) { - logger.warn( - `Repository ${payload.repositoryOwner}/${payload.repositoryName} does not belong to a GitHub` + - `organization and organization runners are enabled. This is not supported. Not scaling up for this event.` + - `Not throwing error to prevent re-queueing and just ignoring the event.`, - ); - return; + const ghAuth = await createGithubAppAuth(undefined, ghesApiUrl); + const githubAppClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + + // A map of either owner or owner/repo name to Octokit client, so we use a + // single client per installation (set of messages), depending on how the app + // is installed. This is for a couple of reasons: + // - Sharing clients opens up the possibility of caching API calls. + // - Fetching a client for an installation actually requires a couple of API + // calls itself, which would get expensive if done for every message in a + // batch.
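The comment above captures the client-reuse design: resolve the installation once per owner (or owner/repo) and keep the resulting Octokit client for every other message in the batch, since looking up the installation and minting its token are themselves API calls. A minimal sketch of that idea as a standalone helper (the helper name and shape are illustrative only; the patch builds the map inline inside `scaleUp`, using the imports already present in `scale-up.ts`):

```typescript
// Sketch only: getOrCreateInstallationClient is an illustrative name, not part of the patch.
const installationClients = new Map<string, Octokit>();

async function getOrCreateInstallationClient(
  githubAppClient: Octokit,
  enableOrgLevel: boolean,
  payload: ActionRequestMessage,
  ghesApiUrl: string,
): Promise<Octokit> {
  // Key by owner for org-level runners, owner/repo otherwise, mirroring how runners are grouped.
  const key = enableOrgLevel ? payload.repositoryOwner : `${payload.repositoryOwner}/${payload.repositoryName}`;

  const cached = installationClients.get(key);
  if (cached) {
    return cached;
  }

  // These are the expensive calls we only want to pay once per installation.
  const installationId = await getInstallationId(githubAppClient, enableOrgLevel, payload);
  const installationAuth = await createGithubInstallationAuth(installationId, ghesApiUrl);
  const client = await createOctokitClient(installationAuth.token, ghesApiUrl);

  installationClients.set(key, client);
  return client;
}
```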
+ type MessagesWithClient = { + messages: ActionRequestMessageSQS[]; + githubInstallationClient: Octokit; + }; + + const validMessages = new Map(); + const invalidMessages: string[] = []; + for (const payload of payloads) { + const { eventType, messageId, repositoryName, repositoryOwner } = payload; + if (ephemeralEnabled && eventType !== 'workflow_job') { + logger.warn( + 'Event is not supported in combination with ephemeral runners. Please ensure you have enabled workflow_job events.', + { eventType, messageId }, + ); + + invalidMessages.push(messageId); + + continue; + } + + if (!isValidRepoOwnerTypeIfOrgLevelEnabled(payload, enableOrgLevel)) { + logger.warn( + `Repository does not belong to a GitHub organization and organization runners are enabled. This is not supported. Not scaling up for this event. Not throwing error to prevent re-queueing and just ignoring the event.`, + { + repository: `${repositoryOwner}/${repositoryName}`, + messageId, + }, + ); + + continue; + } + + const key = enableOrgLevel ? payload.repositoryOwner : `${payload.repositoryOwner}/${payload.repositoryName}`; + + let entry = validMessages.get(key); + + // If we've not seen this owner/repo before, we'll need to create a GitHub + // client for it. + if (entry === undefined) { + const installationId = await getInstallationId(githubAppClient, enableOrgLevel, payload); + const ghAuth = await createGithubInstallationAuth(installationId, ghesApiUrl); + const githubInstallationClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + + entry = { + messages: [], + githubInstallationClient, + }; + + validMessages.set(key, entry); + } + + entry.messages.push(payload); } - const ephemeral = ephemeralEnabled && payload.eventType === 'workflow_job'; const runnerType = enableOrgLevel ? 'Org' : 'Repo'; - const runnerOwner = enableOrgLevel ? payload.repositoryOwner : `${payload.repositoryOwner}/${payload.repositoryName}`; addPersistentContextToChildLogger({ runner: { + ephemeral: ephemeralEnabled, type: runnerType, - owner: runnerOwner, namePrefix: runnerNamePrefix, - }, - github: { - event: payload.eventType, - workflow_job_id: payload.id.toString(), + n_events: Array.from(validMessages.values()).reduce((acc, group) => acc + group.messages.length, 0), }, }); - logger.info(`Received event`); + logger.info(`Received events`); - const { ghesApiUrl, ghesBaseUrl } = getGitHubEnterpriseApiUrl(); + for (const [group, { githubInstallationClient, messages }] of validMessages.entries()) { + // Work out how much we want to scale up by. 
+ let scaleUp = 0; - const installationId = await getInstallationId(ghesApiUrl, enableOrgLevel, payload); - const ghAuth = await createGithubInstallationAuth(installationId, ghesApiUrl); - const githubInstallationClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + for (const message of messages) { + const messageLogger = logger.createChild({ + persistentKeys: { + eventType: message.eventType, + group, + messageId: message.messageId, + repository: `${message.repositoryOwner}/${message.repositoryName}`, + }, + }); - if (!enableJobQueuedCheck || (await isJobQueued(githubInstallationClient, payload))) { - let scaleUp = true; - if (maximumRunners !== -1) { - const currentRunners = await listEC2Runners({ - environment, - runnerType, - runnerOwner, + if (enableJobQueuedCheck && !(await isJobQueued(githubInstallationClient, message))) { + messageLogger.info('No runner will be created, job is not queued.'); + + continue; + } + + scaleUp++; + } + + if (scaleUp === 0) { + logger.info('No runners will be created for this group, no valid messages found.'); + + continue; + } + + // Don't call the EC2 API if we can create an unlimited number of runners. + const currentRunners = + maximumRunners === -1 ? 0 : (await listEC2Runners({ environment, runnerType, runnerOwner: group })).length; + + logger.info('Current runners', { + currentRunners, + maximumRunners, + }); + + // Calculate how many runners we want to create. + const newRunners = + maximumRunners === -1 + ? // If we don't have an upper limit, scale up by the number of new jobs. + scaleUp + : // Otherwise, we do have a limit, so work out if `scaleUp` would exceed it. + Math.min(scaleUp, maximumRunners - currentRunners); + + const missingInstanceCount = Math.max(0, scaleUp - newRunners); + + if (missingInstanceCount > 0) { + logger.info('Not all runners will be created for this group, maximum number of runners reached.', { + desiredNewRunners: scaleUp, }); - logger.info(`Current runners: ${currentRunners.length} of ${maximumRunners}`); - scaleUp = currentRunners.length < maximumRunners; + + if (ephemeralEnabled) { + // This removes `missingInstanceCount` items from the start of the array + // so that, if we retry more messages later, we pick fresh ones. + invalidMessages.push(...messages.splice(0, missingInstanceCount).map(({ messageId }) => messageId)); + } + + // No runners will be created, so skip calling the EC2 API. 
+ if (missingInstanceCount === scaleUp) { + continue; + } } - if (scaleUp) { - logger.info(`Attempting to launch a new runner`); + logger.info(`Attempting to launch new runners`, { + newRunners, + }); - await createRunners( - { - ephemeral, - enableJitConfig, - ghesBaseUrl, - runnerLabels, - runnerGroup, - runnerNamePrefix, - runnerOwner, - runnerType, - disableAutoUpdate, - ssmTokenPath, - ssmConfigPath, - }, - { - ec2instanceCriteria: { - instanceTypes, - targetCapacityType: instanceTargetCapacityType, - maxSpotPrice: instanceMaxSpotPrice, - instanceAllocationStrategy: instanceAllocationStrategy, - }, - environment, - launchTemplateName, - subnets, - amiIdSsmParameterName, - tracingEnabled, - onDemandFailoverOnError, + const instances = await createRunners( + { + ephemeral: ephemeralEnabled, + enableJitConfig, + ghesBaseUrl, + runnerLabels, + runnerGroup, + runnerNamePrefix, + runnerOwner: group, + runnerType, + disableAutoUpdate, + ssmTokenPath, + ssmConfigPath, + }, + { + ec2instanceCriteria: { + instanceTypes, + targetCapacityType: instanceTargetCapacityType, + maxSpotPrice: instanceMaxSpotPrice, + instanceAllocationStrategy: instanceAllocationStrategy, }, - githubInstallationClient, - ); + environment, + launchTemplateName, + subnets, + amiIdSsmParameterName, + tracingEnabled, + onDemandFailoverOnError, + }, + newRunners, + githubInstallationClient, + ); - await publishRetryMessage(payload); - } else { - logger.info('No runner will be created, maximum number of runners reached.'); - if (ephemeral) { - throw new ScaleError('No runners create: maximum of runners reached.'); - } + // Not all runners we wanted were created, let's reject enough items so that + // number of entries will be retried. + if (instances.length !== newRunners) { + const failedInstanceCount = newRunners - instances.length; + + logger.warn('Some runners failed to be created, rejecting some messages so the requests are retried', { + wanted: newRunners, + got: instances.length, + failedInstanceCount, + }); + + invalidMessages.push(...messages.slice(0, failedInstanceCount).map(({ messageId }) => messageId)); } - } else { - logger.info('No runner will be created, job is not queued.'); } + + return invalidMessages; } export function getGitHubEnterpriseApiUrl() { diff --git a/lambdas/libs/aws-powertools-util/src/logger/index.ts b/lambdas/libs/aws-powertools-util/src/logger/index.ts index 195b552a74..2bad191a83 100644 --- a/lambdas/libs/aws-powertools-util/src/logger/index.ts +++ b/lambdas/libs/aws-powertools-util/src/logger/index.ts @@ -9,7 +9,7 @@ const defaultValues = { }; function setContext(context: Context, module?: string) { - logger.addPersistentLogAttributes({ + logger.appendPersistentKeys({ 'aws-request-id': context.awsRequestId, 'function-name': context.functionName, module: module, @@ -17,7 +17,7 @@ function setContext(context: Context, module?: string) { // Add the context to all child loggers childLoggers.forEach((childLogger) => { - childLogger.addPersistentLogAttributes({ + childLogger.appendPersistentKeys({ 'aws-request-id': context.awsRequestId, 'function-name': context.functionName, }); @@ -25,14 +25,14 @@ function setContext(context: Context, module?: string) { } const logger = new Logger({ - persistentLogAttributes: { + persistentKeys: { ...defaultValues, }, }); function createChildLogger(module: string): Logger { const childLogger = logger.createChild({ - persistentLogAttributes: { + persistentKeys: { module: module, }, }); @@ -47,7 +47,7 @@ type LogAttributes = { function 
addPersistentContextToChildLogger(attributes: LogAttributes) { childLoggers.forEach((childLogger) => { - childLogger.addPersistentLogAttributes(attributes); + childLogger.appendPersistentKeys(attributes); }); } diff --git a/main.tf b/main.tf index 74c4c54ec6..f0dadd6b66 100644 --- a/main.tf +++ b/main.tf @@ -210,28 +210,30 @@ module "runners" { credit_specification = var.runner_credit_specification cpu_options = var.runner_cpu_options - enable_runner_binaries_syncer = var.enable_runner_binaries_syncer - lambda_s3_bucket = var.lambda_s3_bucket - runners_lambda_s3_key = var.runners_lambda_s3_key - runners_lambda_s3_object_version = var.runners_lambda_s3_object_version - lambda_runtime = var.lambda_runtime - lambda_architecture = var.lambda_architecture - lambda_zip = var.runners_lambda_zip - lambda_scale_up_memory_size = var.runners_scale_up_lambda_memory_size - lambda_scale_down_memory_size = var.runners_scale_down_lambda_memory_size - lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout - lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout - lambda_subnet_ids = var.lambda_subnet_ids - lambda_security_group_ids = var.lambda_security_group_ids - lambda_tags = var.lambda_tags - tracing_config = var.tracing_config - logging_retention_in_days = var.logging_retention_in_days - logging_kms_key_id = var.logging_kms_key_id - enable_cloudwatch_agent = var.enable_cloudwatch_agent - cloudwatch_config = var.cloudwatch_config - runner_log_files = var.runner_log_files - runner_group_name = var.runner_group_name - runner_name_prefix = var.runner_name_prefix + enable_runner_binaries_syncer = var.enable_runner_binaries_syncer + lambda_s3_bucket = var.lambda_s3_bucket + runners_lambda_s3_key = var.runners_lambda_s3_key + runners_lambda_s3_object_version = var.runners_lambda_s3_object_version + lambda_runtime = var.lambda_runtime + lambda_architecture = var.lambda_architecture + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds + lambda_zip = var.runners_lambda_zip + lambda_scale_up_memory_size = var.runners_scale_up_lambda_memory_size + lambda_scale_down_memory_size = var.runners_scale_down_lambda_memory_size + lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout + lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout + lambda_subnet_ids = var.lambda_subnet_ids + lambda_security_group_ids = var.lambda_security_group_ids + lambda_tags = var.lambda_tags + tracing_config = var.tracing_config + logging_retention_in_days = var.logging_retention_in_days + logging_kms_key_id = var.logging_kms_key_id + enable_cloudwatch_agent = var.enable_cloudwatch_agent + cloudwatch_config = var.cloudwatch_config + runner_log_files = var.runner_log_files + runner_group_name = var.runner_group_name + runner_name_prefix = var.runner_name_prefix scale_up_reserved_concurrent_executions = var.scale_up_reserved_concurrent_executions diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md index 198078feee..0515763f4c 100644 --- a/modules/multi-runner/README.md +++ b/modules/multi-runner/README.md @@ -137,6 +137,8 @@ module "multi-runner" { | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. 
| `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | diff --git a/modules/multi-runner/runners.tf b/modules/multi-runner/runners.tf index 811ab36260..d58e61f6ac 100644 --- a/modules/multi-runner/runners.tf +++ b/modules/multi-runner/runners.tf @@ -58,28 +58,30 @@ module "runners" { credit_specification = each.value.runner_config.credit_specification cpu_options = each.value.runner_config.cpu_options - enable_runner_binaries_syncer = each.value.runner_config.enable_runner_binaries_syncer - lambda_s3_bucket = var.lambda_s3_bucket - runners_lambda_s3_key = var.runners_lambda_s3_key - runners_lambda_s3_object_version = var.runners_lambda_s3_object_version - lambda_runtime = var.lambda_runtime - lambda_architecture = var.lambda_architecture - lambda_zip = var.runners_lambda_zip - lambda_scale_up_memory_size = var.scale_up_lambda_memory_size - lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout - lambda_scale_down_memory_size = var.scale_down_lambda_memory_size - lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout - lambda_subnet_ids = var.lambda_subnet_ids - lambda_security_group_ids = var.lambda_security_group_ids - lambda_tags = var.lambda_tags - tracing_config = var.tracing_config - logging_retention_in_days = var.logging_retention_in_days - logging_kms_key_id = var.logging_kms_key_id - enable_cloudwatch_agent = each.value.runner_config.enable_cloudwatch_agent - cloudwatch_config = try(coalesce(each.value.runner_config.cloudwatch_config, var.cloudwatch_config), null) - runner_log_files = each.value.runner_config.runner_log_files - runner_group_name = each.value.runner_config.runner_group_name - runner_name_prefix = each.value.runner_config.runner_name_prefix + enable_runner_binaries_syncer = each.value.runner_config.enable_runner_binaries_syncer + lambda_s3_bucket = var.lambda_s3_bucket + runners_lambda_s3_key = var.runners_lambda_s3_key + runners_lambda_s3_object_version = var.runners_lambda_s3_object_version + lambda_runtime = var.lambda_runtime + lambda_architecture = var.lambda_architecture + lambda_zip = var.runners_lambda_zip + lambda_scale_up_memory_size = var.scale_up_lambda_memory_size + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds + lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout + lambda_scale_down_memory_size = var.scale_down_lambda_memory_size + lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout + lambda_subnet_ids = var.lambda_subnet_ids + lambda_security_group_ids = var.lambda_security_group_ids + lambda_tags = var.lambda_tags + tracing_config = var.tracing_config + logging_retention_in_days = var.logging_retention_in_days + logging_kms_key_id = var.logging_kms_key_id + enable_cloudwatch_agent = each.value.runner_config.enable_cloudwatch_agent + cloudwatch_config = try(coalesce(each.value.runner_config.cloudwatch_config, var.cloudwatch_config), null) + runner_log_files = each.value.runner_config.runner_log_files + runner_group_name = each.value.runner_config.runner_group_name + runner_name_prefix = each.value.runner_config.runner_name_prefix scale_up_reserved_concurrent_executions = 
each.value.runner_config.scale_up_reserved_concurrent_executions diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf index 119af5c36c..5c839e1104 100644 --- a/modules/multi-runner/variables.tf +++ b/modules/multi-runner/variables.tf @@ -724,3 +724,15 @@ variable "user_agent" { type = string default = "github-aws-runners" } + +variable "lambda_event_source_mapping_batch_size" { + description = "Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used." + type = number + default = 10 +} + +variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" { + description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0." + type = number + default = 0 +} diff --git a/modules/runners/README.md b/modules/runners/README.md index eddf7edb40..169cee7eac 100644 --- a/modules/runners/README.md +++ b/modules/runners/README.md @@ -177,6 +177,8 @@ yarn run dist | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. | `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_scale\_down\_memory\_size](#input\_lambda\_scale\_down\_memory\_size) | Memory size limit in MB for scale down lambda. | `number` | `512` | no | diff --git a/modules/runners/job-retry.tf b/modules/runners/job-retry.tf index e51c3903d4..130992667f 100644 --- a/modules/runners/job-retry.tf +++ b/modules/runners/job-retry.tf @@ -3,30 +3,32 @@ locals { job_retry_enabled = var.job_retry != null && var.job_retry.enable ? 
true : false job_retry = { - prefix = var.prefix - tags = local.tags - aws_partition = var.aws_partition - architecture = var.lambda_architecture - runtime = var.lambda_runtime - security_group_ids = var.lambda_security_group_ids - subnet_ids = var.lambda_subnet_ids - kms_key_arn = var.kms_key_arn - lambda_tags = var.lambda_tags - log_level = var.log_level - logging_kms_key_id = var.logging_kms_key_id - logging_retention_in_days = var.logging_retention_in_days - metrics = var.metrics - role_path = var.role_path - role_permissions_boundary = var.role_permissions_boundary - s3_bucket = var.lambda_s3_bucket - s3_key = var.runners_lambda_s3_key - s3_object_version = var.runners_lambda_s3_object_version - zip = var.lambda_zip - tracing_config = var.tracing_config - github_app_parameters = var.github_app_parameters - enable_organization_runners = var.enable_organization_runners - sqs_build_queue = var.sqs_build_queue - ghes_url = var.ghes_url + prefix = var.prefix + tags = local.tags + aws_partition = var.aws_partition + architecture = var.lambda_architecture + runtime = var.lambda_runtime + security_group_ids = var.lambda_security_group_ids + subnet_ids = var.lambda_subnet_ids + kms_key_arn = var.kms_key_arn + lambda_tags = var.lambda_tags + log_level = var.log_level + logging_kms_key_id = var.logging_kms_key_id + logging_retention_in_days = var.logging_retention_in_days + metrics = var.metrics + role_path = var.role_path + role_permissions_boundary = var.role_permissions_boundary + s3_bucket = var.lambda_s3_bucket + s3_key = var.runners_lambda_s3_key + s3_object_version = var.runners_lambda_s3_object_version + zip = var.lambda_zip + tracing_config = var.tracing_config + github_app_parameters = var.github_app_parameters + enable_organization_runners = var.enable_organization_runners + sqs_build_queue = var.sqs_build_queue + ghes_url = var.ghes_url + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds } } diff --git a/modules/runners/job-retry/README.md b/modules/runners/job-retry/README.md index 168f2d324e..f54b943855 100644 --- a/modules/runners/job-retry/README.md +++ b/modules/runners/job-retry/README.md @@ -42,7 +42,7 @@ The module is an inner module and used by the runner module when the opt-in feat | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [config](#input\_config) | Configuration for the spot termination watcher lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`enable_organization_runners`: Enable organization runners.
`enable_metric`: Enable metric for the lambda. If `spot_warning` is set to true, the lambda will emit a metric when it detects a spot termination warning.
'ghes\_url': Optional GitHub Enterprise Server URL.
'user\_agent': Optional User-Agent header for GitHub API requests.
'github\_app\_parameters': Parameter Store for GitHub App Parameters.
'kms\_key\_arn': Optional CMK Key ARN instead of using the default AWS managed key.
`lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics`: Configuration to enable metrics creation by the lambda.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
'sqs\_build\_queue': SQS queue for build events to re-publish job request.
`subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`.
`tag_filters`: Map of tags that will be used to filter the resources to be tracked. Only for which all tags are present and starting with the same value as the value in the map will be tracked.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Time out of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, null)
architecture = optional(string, null)
enable_organization_runners = bool
environment_variables = optional(map(string), {})
ghes_url = optional(string, null)
user_agent = optional(string, null)
github_app_parameters = object({
key_base64 = map(string)
id = map(string)
})
kms_key_arn = optional(string, null)
lambda_tags = optional(map(string), {})
log_level = optional(string, null)
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, null)
memory_size = optional(number, null)
metrics = optional(object({
enable = optional(bool, false)
namespace = optional(string, null)
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
}), {})
}), {})
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
queue_encryption = optional(object({
kms_data_key_reuse_period_seconds = optional(number, null)
kms_master_key_id = optional(string, null)
sqs_managed_sse_enabled = optional(bool, true)
}), {})
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
sqs_build_queue = object({
url = string
arn = string
})
tags = optional(map(string), {})
timeout = optional(number, 30)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | +| [config](#input\_config) | Configuration for the spot termination watcher lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`enable_organization_runners`: Enable organization runners.
`enable_metric`: Enable metric for the lambda. If `spot_warning` is set to true, the lambda will emit a metric when it detects a spot termination warning.
'ghes\_url': Optional GitHub Enterprise Server URL.
'user\_agent': Optional User-Agent header for GitHub API requests.
'github\_app\_parameters': Parameter Store for GitHub App Parameters.
'kms\_key\_arn': Optional CMK Key ARN instead of using the default AWS managed key.
`lambda_event_source_mapping_batch_size`: Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default will be used.
`lambda_event_source_mapping_maximum_batching_window_in_seconds`: Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10.
`lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the KMS key ID to encrypt the logs with.
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics`: Configuration to enable metrics creation by the lambda.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
'sqs\_build\_queue': SQS queue for build events to re-publish job request.
`subnet_ids`: List of subnets in which the action runners will be launched; the subnets need to be in the `vpc_id`.
`tag_filters`: Map of tags used to filter the resources to be tracked. Only resources for which all of these tags are present, with values starting with the values given in the map, will be tracked.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Time out of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, null)
architecture = optional(string, null)
enable_organization_runners = bool
environment_variables = optional(map(string), {})
ghes_url = optional(string, null)
user_agent = optional(string, null)
github_app_parameters = object({
key_base64 = map(string)
id = map(string)
})
kms_key_arn = optional(string, null)
lambda_event_source_mapping_batch_size = optional(number, 10)
lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, 0)
lambda_tags = optional(map(string), {})
log_level = optional(string, null)
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, null)
memory_size = optional(number, null)
metrics = optional(object({
enable = optional(bool, false)
namespace = optional(string, null)
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
}), {})
}), {})
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
queue_encryption = optional(object({
kms_data_key_reuse_period_seconds = optional(number, null)
kms_master_key_id = optional(string, null)
sqs_managed_sse_enabled = optional(bool, true)
}), {})
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
sqs_build_queue = object({
url = string
arn = string
})
tags = optional(map(string), {})
timeout = optional(number, 30)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | ## Outputs diff --git a/modules/runners/job-retry/main.tf b/modules/runners/job-retry/main.tf index 807f52a49a..eba478b214 100644 --- a/modules/runners/job-retry/main.tf +++ b/modules/runners/job-retry/main.tf @@ -44,9 +44,10 @@ module "job_retry" { } resource "aws_lambda_event_source_mapping" "job_retry" { - event_source_arn = aws_sqs_queue.job_retry_check_queue.arn - function_name = module.job_retry.lambda.function.arn - batch_size = 1 + event_source_arn = aws_sqs_queue.job_retry_check_queue.arn + function_name = module.job_retry.lambda.function.arn + batch_size = var.config.lambda_event_source_mapping_batch_size + maximum_batching_window_in_seconds = var.config.lambda_event_source_mapping_maximum_batching_window_in_seconds } resource "aws_lambda_permission" "job_retry" { diff --git a/modules/runners/job-retry/variables.tf b/modules/runners/job-retry/variables.tf index 4741dd1b45..7ccfdf63b3 100644 --- a/modules/runners/job-retry/variables.tf +++ b/modules/runners/job-retry/variables.tf @@ -11,6 +11,8 @@ variable "config" { 'user_agent': Optional User-Agent header for GitHub API requests. 'github_app_parameters': Parameter Store for GitHub App Parameters. 'kms_key_arn': Optional CMK Key ARN instead of using the default AWS managed key. + `lambda_event_source_mapping_batch_size`: Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default will be used. + `lambda_event_source_mapping_maximum_batching_window_in_seconds`: Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. `lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing. `lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. 
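(A minimal sketch of how these two inputs combine for a consumer of the module; the values below are illustrative and not part of this patch. AWS requires the batching window to be greater than 0 whenever the batch size is greater than 10.)

    # Illustrative values only: gather up to 100 queued-job events per
    # invocation, waiting at most 10 seconds to fill a batch.
    lambda_event_source_mapping_batch_size                          = 100
    lambda_event_source_mapping_maximum_batching_window_in_seconds = 10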
@@ -45,12 +47,14 @@ variable "config" { key_base64 = map(string) id = map(string) }) - kms_key_arn = optional(string, null) - lambda_tags = optional(map(string), {}) - log_level = optional(string, null) - logging_kms_key_id = optional(string, null) - logging_retention_in_days = optional(number, null) - memory_size = optional(number, null) + kms_key_arn = optional(string, null) + lambda_event_source_mapping_batch_size = optional(number, 10) + lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, 0) + lambda_tags = optional(map(string), {}) + log_level = optional(string, null) + logging_kms_key_id = optional(string, null) + logging_retention_in_days = optional(number, null) + memory_size = optional(number, null) metrics = optional(object({ enable = optional(bool, false) namespace = optional(string, null) diff --git a/modules/runners/scale-up.tf b/modules/runners/scale-up.tf index 89d95a50d0..b1ea88652d 100644 --- a/modules/runners/scale-up.tf +++ b/modules/runners/scale-up.tf @@ -87,10 +87,12 @@ resource "aws_cloudwatch_log_group" "scale_up" { } resource "aws_lambda_event_source_mapping" "scale_up" { - event_source_arn = var.sqs_build_queue.arn - function_name = aws_lambda_function.scale_up.arn - batch_size = 1 - tags = var.tags + event_source_arn = var.sqs_build_queue.arn + function_name = aws_lambda_function.scale_up.arn + function_response_types = ["ReportBatchItemFailures"] + batch_size = var.lambda_event_source_mapping_batch_size + maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds + tags = var.tags } resource "aws_lambda_permission" "scale_runners_lambda" { diff --git a/modules/runners/variables.tf b/modules/runners/variables.tf index 352285e786..a45075bb52 100644 --- a/modules/runners/variables.tf +++ b/modules/runners/variables.tf @@ -770,3 +770,23 @@ variable "user_agent" { type = string default = null } + +variable "lambda_event_source_mapping_batch_size" { + description = "Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used." + type = number + default = 10 + validation { + condition = var.lambda_event_source_mapping_batch_size >= 1 && var.lambda_event_source_mapping_batch_size <= 1000 + error_message = "The batch size for the lambda event source mapping must be between 1 and 1000." + } +} + +variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" { + description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0." + type = number + default = 0 + validation { + condition = var.lambda_event_source_mapping_maximum_batching_window_in_seconds >= 0 && var.lambda_event_source_mapping_maximum_batching_window_in_seconds <= 300 + error_message = "Maximum batching window must be between 0 and 300 seconds." + } +} diff --git a/modules/webhook-github-app/README.md b/modules/webhook-github-app/README.md index 0c09a761c5..6de85ee30d 100644 --- a/modules/webhook-github-app/README.md +++ b/modules/webhook-github-app/README.md @@ -34,7 +34,7 @@ No modules. | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [github\_app](#input\_github\_app) | GitHub app parameters, see your github app. 
Ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). |
object({
key_base64 = string
id = string
webhook_secret = string
})
| n/a | yes | +| [github\_app](#input\_github\_app) | GitHub app parameters, see your GitHub app. Ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). |
object({
key_base64 = string
id = string
webhook_secret = string
})
| n/a | yes | | [webhook\_endpoint](#input\_webhook\_endpoint) | The endpoint to use for the webhook, defaults to the endpoint of the runners module. | `string` | n/a | yes | ## Outputs diff --git a/variables.tf b/variables.tf index 0bf3563145..7ff6ecece4 100644 --- a/variables.tf +++ b/variables.tf @@ -1021,3 +1021,19 @@ variable "user_agent" { type = string default = "github-aws-runners" } + +variable "lambda_event_source_mapping_batch_size" { + description = "Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used." + type = number + default = 10 +} + +variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" { + description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0." + type = number + default = 0 + validation { + condition = var.lambda_event_source_mapping_maximum_batching_window_in_seconds >= 0 && var.lambda_event_source_mapping_maximum_batching_window_in_seconds <= 300 + error_message = "Maximum batching window must be between 0 and 300 seconds." + } +} From d2d09ffb41a9f88553f5e532239294d11db87096 Mon Sep 17 00:00:00 2001 From: Niek Palm Date: Mon, 8 Dec 2025 09:57:45 +0100 Subject: [PATCH 2/2] feat!: Upgrade lambda runtime to Node24.x (#4911) Upgrade to Lambda runtime Node24.x - Upgrade minimal AWS Terraform provider - Upgrade all lambda runtimes by default to 24x - Breaking change! ## Dependency and environment upgrades: * Updated all references to Node.js from version 22 to version 24 in GitHub Actions workflows (`.github/workflows/lambda.yml`, `.github/workflows/release.yml`) and Dockerfiles (`.ci/Dockerfile`, `.devcontainer/Dockerfile`). [[1]](diffhunk://#diff-b0732b88b9e5a3df48561602571a10179d2b28cbb21ba8032025932bc9606426L23-R23) [[2]](diffhunk://#diff-87db21a973eed4fef5f32b267aa60fcee5cbdf03c67fafdc2a9b553bb0b15f34L33-R33) [[3]](diffhunk://#diff-fd0c8401badda82156f9e7bd621fa3a0e586d8128e4a80af17c7cbff70bee11eL2-R2) [[4]](diffhunk://#diff-13bd9d7a30bf46656bc81f1ad5b908a627f9247be3f7d76df862b0578b534fc6L1-R1) * Upgraded the base Docker images for both the CI and devcontainer environments to use newer Node.js image digests. [[1]](diffhunk://#diff-fd0c8401badda82156f9e7bd621fa3a0e586d8128e4a80af17c7cbff70bee11eL2-R2) [[2]](diffhunk://#diff-13bd9d7a30bf46656bc81f1ad5b908a627f9247be3f7d76df862b0578b534fc6L1-R1) Terraform provider updates: * Increased the minimum required version for the AWS Terraform provider to `>= 6.21` in all example `versions.tf` files. [[1]](diffhunk://#diff-61160e0ae9e70de675b98889710708e1a9edcccd5194a4a580aa234caa5103aeL5-R5) [[2]](diffhunk://#diff-debb96ea7aef889f9deb4de51c61ca44a7e23832098e1c9d8b09fa54b1a96582L5-R5) * Updated the `.terraform.lock.hcl` files in all examples to lock the AWS provider at version `6.22.1`, the local provider at `2.6.1`, and the null provider at `3.2.4` where applicable, along with updated hash values and constraints. 
[[1]](diffhunk://#diff-101dfb4a445c2ab4a46c53cbc92db3a43f3423ba1e8ee7b8a11b393ebe835539L5-R43) [[2]](diffhunk://#diff-2a8b3082767f86cfdb88b00e963894a8cdd2ebcf705c8d757d46b55a98452a6cL5-R43) --------- Co-authored-by: github-aws-runners-pr|bot Co-authored-by: Guilherme Caulada --- .ci/Dockerfile | 2 +- .devcontainer/Dockerfile | 2 +- .github/workflows/lambda.yml | 2 +- .github/workflows/release.yml | 2 +- .tflint.hcl | 2 +- README.md | 6 +- examples/base/README.md | 2 +- examples/base/versions.tf | 2 +- examples/default/.terraform.lock.hcl | 60 +++++----- examples/default/README.md | 2 +- examples/default/versions.tf | 2 +- examples/ephemeral/.terraform.lock.hcl | 60 +++++----- examples/ephemeral/README.md | 2 +- examples/ephemeral/versions.tf | 2 +- .../.terraform.lock.hcl | 60 +++++----- .../external-managed-ssm-secrets/README.md | 2 +- .../external-managed-ssm-secrets/versions.tf | 2 +- examples/lambdas-download/.terraform.lock.hcl | 60 +++++----- examples/multi-runner/.terraform.lock.hcl | 112 +++++++++--------- examples/multi-runner/README.md | 6 +- examples/multi-runner/versions.tf | 2 +- .../permissions-boundary/.terraform.lock.hcl | 112 +++++++++--------- examples/permissions-boundary/README.md | 6 +- examples/permissions-boundary/setup/README.md | 4 +- .../permissions-boundary/setup/versions.tf | 10 +- examples/permissions-boundary/versions.tf | 2 +- examples/prebuilt/.terraform.lock.hcl | 112 +++++++++--------- examples/prebuilt/README.md | 6 +- examples/prebuilt/versions.tf | 2 +- .../termination-watcher/.terraform.lock.hcl | 34 +++--- lambdas/.nvmrc | 2 +- modules/ami-housekeeper/README.md | 6 +- modules/ami-housekeeper/variables.tf | 2 +- modules/ami-housekeeper/versions.tf | 2 +- modules/download-lambda/README.md | 2 +- modules/download-lambda/versions.tf | 2 +- modules/lambda/README.md | 6 +- modules/lambda/variables.tf | 2 +- modules/lambda/versions.tf | 2 +- modules/multi-runner/README.md | 6 +- modules/multi-runner/variables.tf | 2 +- modules/multi-runner/versions.tf | 2 +- modules/runner-binaries-syncer/README.md | 6 +- modules/runner-binaries-syncer/variables.tf | 2 +- modules/runner-binaries-syncer/versions.tf | 2 +- modules/runners/README.md | 6 +- modules/runners/job-retry/README.md | 4 +- modules/runners/job-retry/versions.tf | 2 +- modules/runners/pool/README.md | 4 +- modules/runners/pool/versions.tf | 2 +- modules/runners/variables.tf | 2 +- modules/runners/versions.tf | 2 +- modules/setup-iam-permissions/README.md | 4 +- modules/setup-iam-permissions/versions.tf | 2 +- modules/ssm/README.md | 4 +- modules/ssm/versions.tf | 2 +- modules/termination-watcher/README.md | 2 +- .../notification/README.md | 4 +- .../notification/versions.tf | 2 +- .../termination-watcher/termination/README.md | 4 +- .../termination/versions.tf | 2 +- modules/termination-watcher/versions.tf | 2 +- modules/webhook/README.md | 6 +- modules/webhook/direct/README.md | 6 +- modules/webhook/direct/variables.tf | 2 +- modules/webhook/direct/versions.tf | 2 +- modules/webhook/eventbridge/README.md | 6 +- modules/webhook/eventbridge/variables.tf | 2 +- modules/webhook/eventbridge/versions.tf | 2 +- modules/webhook/variables.tf | 2 +- modules/webhook/versions.tf | 2 +- variables.tf | 2 +- versions.tf | 2 +- 73 files changed, 410 insertions(+), 400 deletions(-) diff --git a/.ci/Dockerfile b/.ci/Dockerfile index 2aa2dd93d2..3566cc1251 100644 --- a/.ci/Dockerfile +++ b/.ci/Dockerfile @@ -1,5 +1,5 @@ #syntax=docker/dockerfile:1.2 -FROM 
node@sha256:0c0734eb7051babbb3e95cd74e684f940552b31472152edf0bb23e54ab44a0d7 as build +FROM node@sha256:1501d5fd51032aa10701a7dcc9e6c72ab1e611a033ffcf08b6d5882e9165f63e as build WORKDIR /lambdas RUN apt-get update \ && apt-get install -y zip \ diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile index 2e7b5badb0..d20b2ced50 100644 --- a/.devcontainer/Dockerfile +++ b/.devcontainer/Dockerfile @@ -1 +1 @@ -FROM mcr.microsoft.com/vscode/devcontainers/typescript-node@sha256:acdce1045a2ddce4c66846d5cd09adf746d157fce9233124e4925b647f192b2e +FROM mcr.microsoft.com/vscode/devcontainers/typescript-node@sha256:d09eac5cd85fb4bd70770fa3f88ee9dfdd0b09f8b85455a0e039048677276749 diff --git a/.github/workflows/lambda.yml b/.github/workflows/lambda.yml index c0ff7774e8..50cbe8a6e8 100644 --- a/.github/workflows/lambda.yml +++ b/.github/workflows/lambda.yml @@ -20,7 +20,7 @@ jobs: name: Build and test lambda functions runs-on: ubuntu-latest container: - image: node:22@sha256:2bb201f33898d2c0ce638505b426f4dd038cc00e5b2b4cbba17b069f0fff1496 + image: node:24@sha256:aa648b387728c25f81ff811799bbf8de39df66d7e2d9b3ab55cc6300cb9175d9 defaults: run: working-directory: ./lambdas diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 5c87727470..a78de21732 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -30,7 +30,7 @@ jobs: - uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0 with: - node-version: 22 + node-version: 24 package-manager-cache: false - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0 with: diff --git a/.tflint.hcl b/.tflint.hcl index 227338085f..1fa77a630a 100644 --- a/.tflint.hcl +++ b/.tflint.hcl @@ -5,7 +5,7 @@ config { plugin "aws" { enabled = true - version = "0.36.0" + version = "0.44.0" source = "github.com/terraform-linters/tflint-ruleset-aws" } diff --git a/README.md b/README.md index f242fe734b..75e4727fc1 100644 --- a/README.md +++ b/README.md @@ -70,14 +70,14 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.77 | +| [aws](#requirement\_aws) | >= 6.21 | | [random](#requirement\_random) | ~> 3.0 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.77 | +| [aws](#provider\_aws) | >= 6.21 | | [random](#provider\_random) | ~> 3.0 | ## Modules @@ -158,7 +158,7 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | | [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no | | [lambda\_subnet\_ids](#input\_lambda\_subnet\_ids) | List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. | `list(string)` | `[]` | no | diff --git a/examples/base/README.md b/examples/base/README.md index 761f9ed457..95b6fcee52 100644 --- a/examples/base/README.md +++ b/examples/base/README.md @@ -4,7 +4,7 @@ | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers diff --git a/examples/base/versions.tf b/examples/base/versions.tf index d6eaf0ca72..b8ede4f3b6 100644 --- a/examples/base/versions.tf +++ b/examples/base/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" # ensure backwards compatibility with v5.x + version = ">= 6.21" # ensure backwards compatibility with v6.x } } required_version = ">= 1" diff --git a/examples/default/.terraform.lock.hcl b/examples/default/.terraform.lock.hcl index a3e7346b15..0f6cc37765 100644 --- a/examples/default/.terraform.lock.hcl +++ b/examples/default/.terraform.lock.hcl @@ -2,45 +2,45 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "6.0.0" - constraints = ">= 5.0.0, >= 5.27.0, >= 5.77.0, >= 6.0.0" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.0.0, >= 6.21.0" hashes = [ - "h1:dbRRZ1NzH1QV/+83xT/X3MLYaZobMXt8DNwbqnJojpo=", - "zh:16b1bb786719b7ebcddba3ab751b976ebf4006f7144afeebcb83f0c5f41f8eb9", - "zh:1fbc08b817b9eaf45a2b72ccba59f4ea19e7fcf017be29f5a9552b623eccc5bc", - "zh:304f58f3333dbe846cfbdfc2227e6ed77041ceea33b6183972f3f8ab51bd065f", - "zh:4cd447b5c24f14553bd6e1a0e4fea3c7d7b218cbb2316a3d93f1c5cb562c181b", - "zh:589472b56be8277558616075fc5480fcd812ba6dc70e8979375fc6d8750f83ef", - "zh:5d78484ba43c26f1ef6067c4150550b06fd39c5d4bfb790f92c4a6f7d9d0201b", - "zh:5f470ce664bffb22ace736643d2abe7ad45858022b652143bcd02d71d38d4e42", - "zh:7a9cbb947aaab8c885096bce5da22838ca482196cf7d04ffb8bdf7fd28003e47", - "zh:854df3e4c50675e727705a0eaa4f8d42ccd7df6a5efa2456f0205a9901ace019", - "zh:87162c0f47b1260f5969679dccb246cb528f27f01229d02fd30a8e2f9869ba2c", - "zh:9a145404d506b52078cd7060e6cbb83f8fc7953f3f63a5e7137d41f69d6317a3", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a4eab2649f5afe06cc406ce2aaf9fd44dcf311123f48d344c255e93454c08921", - "zh:bea09141c6186a3e133413ae3a2e3d1aaf4f43466a6a468827287527edf21710", - "zh:d7ea2a35ff55ddfe639ab3b04331556b772a8698eca01f5d74151615d9f336db", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.3" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:MCzg+hs1/ZQ32u56VzJMWP9ONRQPAAqAjuHuzbyshvI=", - "zh:284d4b5b572eacd456e605e94372f740f6de27b71b4e1fd49b63745d8ecd4927", - "zh:40d9dfc9c549e406b5aab73c023aa485633c1b6b730c933d7bcc2fa67fd1ae6e", - "zh:6243509bb208656eb9dc17d3c525c89acdd27f08def427a0dce22d5db90a4c8b", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:885d85869f927853b6fe330e235cd03c337ac3b933b0d9ae827ec32fa1fdcdbf", - "zh:bab66af51039bdfcccf85b25fe562cbba2f54f6b3812202f4873ade834ec201d", - "zh:c505ff1bf9442a889ac7dca3ac05a8ee6f852e0118dd9a61796a2f6ff4837f09", - 
"zh:d36c0b5770841ddb6eaf0499ba3de48e5d4fc99f4829b6ab66b0fab59b1aaf4f", - "zh:ddb6a407c7f3ec63efb4dad5f948b54f7f4434ee1a2607a49680d494b1776fe1", - "zh:e0dafdd4500bec23d3ff221e3a9b60621c5273e5df867bc59ef6b7e41f5c91f6", - "zh:ece8742fd2882a8fc9d6efd20e2590010d43db386b920b2a9c220cfecc18de47", - "zh:f4c6b3eb8f39105004cf720e202f04f57e3578441cfb76ca27611139bc116a82", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } diff --git a/examples/default/README.md b/examples/default/README.md index 618db0b633..28d1baa141 100644 --- a/examples/default/README.md +++ b/examples/default/README.md @@ -34,7 +34,7 @@ terraform output -raw webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 6.0 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | diff --git a/examples/default/versions.tf b/examples/default/versions.tf index 734eaa0b3e..af642af83b 100644 --- a/examples/default/versions.tf +++ b/examples/default/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 6.0" + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/ephemeral/.terraform.lock.hcl b/examples/ephemeral/.terraform.lock.hcl index a3e7346b15..0f6cc37765 100644 --- a/examples/ephemeral/.terraform.lock.hcl +++ b/examples/ephemeral/.terraform.lock.hcl @@ -2,45 +2,45 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "6.0.0" - constraints = ">= 5.0.0, >= 5.27.0, >= 5.77.0, >= 6.0.0" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.0.0, >= 6.21.0" hashes = [ - "h1:dbRRZ1NzH1QV/+83xT/X3MLYaZobMXt8DNwbqnJojpo=", - "zh:16b1bb786719b7ebcddba3ab751b976ebf4006f7144afeebcb83f0c5f41f8eb9", - "zh:1fbc08b817b9eaf45a2b72ccba59f4ea19e7fcf017be29f5a9552b623eccc5bc", - "zh:304f58f3333dbe846cfbdfc2227e6ed77041ceea33b6183972f3f8ab51bd065f", - "zh:4cd447b5c24f14553bd6e1a0e4fea3c7d7b218cbb2316a3d93f1c5cb562c181b", - "zh:589472b56be8277558616075fc5480fcd812ba6dc70e8979375fc6d8750f83ef", - "zh:5d78484ba43c26f1ef6067c4150550b06fd39c5d4bfb790f92c4a6f7d9d0201b", - "zh:5f470ce664bffb22ace736643d2abe7ad45858022b652143bcd02d71d38d4e42", - "zh:7a9cbb947aaab8c885096bce5da22838ca482196cf7d04ffb8bdf7fd28003e47", - "zh:854df3e4c50675e727705a0eaa4f8d42ccd7df6a5efa2456f0205a9901ace019", - "zh:87162c0f47b1260f5969679dccb246cb528f27f01229d02fd30a8e2f9869ba2c", - "zh:9a145404d506b52078cd7060e6cbb83f8fc7953f3f63a5e7137d41f69d6317a3", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a4eab2649f5afe06cc406ce2aaf9fd44dcf311123f48d344c255e93454c08921", - "zh:bea09141c6186a3e133413ae3a2e3d1aaf4f43466a6a468827287527edf21710", - "zh:d7ea2a35ff55ddfe639ab3b04331556b772a8698eca01f5d74151615d9f336db", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.3" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:MCzg+hs1/ZQ32u56VzJMWP9ONRQPAAqAjuHuzbyshvI=", - "zh:284d4b5b572eacd456e605e94372f740f6de27b71b4e1fd49b63745d8ecd4927", - "zh:40d9dfc9c549e406b5aab73c023aa485633c1b6b730c933d7bcc2fa67fd1ae6e", - "zh:6243509bb208656eb9dc17d3c525c89acdd27f08def427a0dce22d5db90a4c8b", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:885d85869f927853b6fe330e235cd03c337ac3b933b0d9ae827ec32fa1fdcdbf", - "zh:bab66af51039bdfcccf85b25fe562cbba2f54f6b3812202f4873ade834ec201d", - "zh:c505ff1bf9442a889ac7dca3ac05a8ee6f852e0118dd9a61796a2f6ff4837f09", - 
"zh:d36c0b5770841ddb6eaf0499ba3de48e5d4fc99f4829b6ab66b0fab59b1aaf4f", - "zh:ddb6a407c7f3ec63efb4dad5f948b54f7f4434ee1a2607a49680d494b1776fe1", - "zh:e0dafdd4500bec23d3ff221e3a9b60621c5273e5df867bc59ef6b7e41f5c91f6", - "zh:ece8742fd2882a8fc9d6efd20e2590010d43db386b920b2a9c220cfecc18de47", - "zh:f4c6b3eb8f39105004cf720e202f04f57e3578441cfb76ca27611139bc116a82", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } diff --git a/examples/ephemeral/README.md b/examples/ephemeral/README.md index 705f2d13b4..04f2177d7e 100644 --- a/examples/ephemeral/README.md +++ b/examples/ephemeral/README.md @@ -33,7 +33,7 @@ terraform output webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 6.0 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | diff --git a/examples/ephemeral/versions.tf b/examples/ephemeral/versions.tf index 734eaa0b3e..af642af83b 100644 --- a/examples/ephemeral/versions.tf +++ b/examples/ephemeral/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 6.0" + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/external-managed-ssm-secrets/.terraform.lock.hcl b/examples/external-managed-ssm-secrets/.terraform.lock.hcl index a3e7346b15..0f6cc37765 100644 --- a/examples/external-managed-ssm-secrets/.terraform.lock.hcl +++ b/examples/external-managed-ssm-secrets/.terraform.lock.hcl @@ -2,45 +2,45 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "6.0.0" - constraints = ">= 5.0.0, >= 5.27.0, >= 5.77.0, >= 6.0.0" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.0.0, >= 6.21.0" hashes = [ - "h1:dbRRZ1NzH1QV/+83xT/X3MLYaZobMXt8DNwbqnJojpo=", - "zh:16b1bb786719b7ebcddba3ab751b976ebf4006f7144afeebcb83f0c5f41f8eb9", - "zh:1fbc08b817b9eaf45a2b72ccba59f4ea19e7fcf017be29f5a9552b623eccc5bc", - "zh:304f58f3333dbe846cfbdfc2227e6ed77041ceea33b6183972f3f8ab51bd065f", - "zh:4cd447b5c24f14553bd6e1a0e4fea3c7d7b218cbb2316a3d93f1c5cb562c181b", - "zh:589472b56be8277558616075fc5480fcd812ba6dc70e8979375fc6d8750f83ef", - "zh:5d78484ba43c26f1ef6067c4150550b06fd39c5d4bfb790f92c4a6f7d9d0201b", - "zh:5f470ce664bffb22ace736643d2abe7ad45858022b652143bcd02d71d38d4e42", - "zh:7a9cbb947aaab8c885096bce5da22838ca482196cf7d04ffb8bdf7fd28003e47", - "zh:854df3e4c50675e727705a0eaa4f8d42ccd7df6a5efa2456f0205a9901ace019", - "zh:87162c0f47b1260f5969679dccb246cb528f27f01229d02fd30a8e2f9869ba2c", - "zh:9a145404d506b52078cd7060e6cbb83f8fc7953f3f63a5e7137d41f69d6317a3", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a4eab2649f5afe06cc406ce2aaf9fd44dcf311123f48d344c255e93454c08921", - "zh:bea09141c6186a3e133413ae3a2e3d1aaf4f43466a6a468827287527edf21710", - "zh:d7ea2a35ff55ddfe639ab3b04331556b772a8698eca01f5d74151615d9f336db", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.3" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:MCzg+hs1/ZQ32u56VzJMWP9ONRQPAAqAjuHuzbyshvI=", - "zh:284d4b5b572eacd456e605e94372f740f6de27b71b4e1fd49b63745d8ecd4927", - "zh:40d9dfc9c549e406b5aab73c023aa485633c1b6b730c933d7bcc2fa67fd1ae6e", - "zh:6243509bb208656eb9dc17d3c525c89acdd27f08def427a0dce22d5db90a4c8b", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:885d85869f927853b6fe330e235cd03c337ac3b933b0d9ae827ec32fa1fdcdbf", - "zh:bab66af51039bdfcccf85b25fe562cbba2f54f6b3812202f4873ade834ec201d", - "zh:c505ff1bf9442a889ac7dca3ac05a8ee6f852e0118dd9a61796a2f6ff4837f09", - 
"zh:d36c0b5770841ddb6eaf0499ba3de48e5d4fc99f4829b6ab66b0fab59b1aaf4f", - "zh:ddb6a407c7f3ec63efb4dad5f948b54f7f4434ee1a2607a49680d494b1776fe1", - "zh:e0dafdd4500bec23d3ff221e3a9b60621c5273e5df867bc59ef6b7e41f5c91f6", - "zh:ece8742fd2882a8fc9d6efd20e2590010d43db386b920b2a9c220cfecc18de47", - "zh:f4c6b3eb8f39105004cf720e202f04f57e3578441cfb76ca27611139bc116a82", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } diff --git a/examples/external-managed-ssm-secrets/README.md b/examples/external-managed-ssm-secrets/README.md index fc208e02b0..5a9a725dd3 100644 --- a/examples/external-managed-ssm-secrets/README.md +++ b/examples/external-managed-ssm-secrets/README.md @@ -80,7 +80,7 @@ terraform output -raw webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 6.0 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | diff --git a/examples/external-managed-ssm-secrets/versions.tf b/examples/external-managed-ssm-secrets/versions.tf index 734eaa0b3e..af642af83b 100644 --- a/examples/external-managed-ssm-secrets/versions.tf +++ b/examples/external-managed-ssm-secrets/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 6.0" + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/lambdas-download/.terraform.lock.hcl b/examples/lambdas-download/.terraform.lock.hcl index f09822f0e2..bbebce4350 100644 --- a/examples/lambdas-download/.terraform.lock.hcl +++ b/examples/lambdas-download/.terraform.lock.hcl @@ -2,44 +2,44 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = "~> 5.27" + version = "6.22.1" + constraints = ">= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/null" { - version = "3.2.3" + version = "3.2.4" constraints = "~> 3.0" hashes = [ - "h1:I0Um8UkrMUb81Fxq/dxbr3HLP2cecTH2WMJiwKSrwQY=", - "zh:22d062e5278d872fe7aed834f5577ba0a5afe34a3bdac2b81f828d8d3e6706d2", - "zh:23dead00493ad863729495dc212fd6c29b8293e707b055ce5ba21ee453ce552d", - "zh:28299accf21763ca1ca144d8f660688d7c2ad0b105b7202554ca60b02a3856d3", - "zh:55c9e8a9ac25a7652df8c51a8a9a422bd67d784061b1de2dc9fe6c3cb4e77f2f", - "zh:756586535d11698a216291c06b9ed8a5cc6a4ec43eee1ee09ecd5c6a9e297ac1", + "h1:L5V05xwp/Gto1leRryuesxjMfgZwjb7oool4WS1UEFQ=", + "zh:59f6b52ab4ff35739647f9509ee6d93d7c032985d9f8c6237d1f8a59471bbbe2", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:9d5eea62fdb587eeb96a8c4d782459f4e6b73baeece4d04b4a40e44faaee9301", - "zh:a6355f596a3fb8fc85c2fb054ab14e722991533f87f928e7169a486462c74670", - "zh:b5a65a789cff4ada58a5baffc76cb9767dc26ec6b45c00d2ec8b1b027f6db4ed", - "zh:db5ab669cf11d0e9f81dc380a6fdfcac437aea3d69109c7aef1a5426639d2d65", - "zh:de655d251c470197bcbb5ac45d289595295acb8f829f6c781d4a75c8c8b7c7dd", - 
"zh:f5c68199f2e6076bce92a12230434782bf768103a427e9bb9abee99b116af7b5", + "zh:795c897119ff082133150121d39ff26cb5f89a730a2c8c26f3a9c1abf81a9c43", + "zh:7b9c7b16f118fbc2b05a983817b8ce2f86df125857966ad356353baf4bff5c0a", + "zh:85e33ab43e0e1726e5f97a874b8e24820b6565ff8076523cc2922ba671492991", + "zh:9d32ac3619cfc93eb3c4f423492a8e0f79db05fec58e449dee9b2d5873d5f69f", + "zh:9e15c3c9dd8e0d1e3731841d44c34571b6c97f5b95e8296a45318b94e5287a6e", + "zh:b4c2ab35d1b7696c30b64bf2c0f3a62329107bd1a9121ce70683dec58af19615", + "zh:c43723e8cc65bcdf5e0c92581dcbbdcbdcf18b8d2037406a5f2033b1e22de442", + "zh:ceb5495d9c31bfb299d246ab333f08c7fb0d67a4f82681fbf47f2a21c3e11ab5", + "zh:e171026b3659305c558d9804062762d168f50ba02b88b231d20ec99578a6233f", + "zh:ed0fe2acdb61330b01841fa790be00ec6beaac91d41f311fb8254f74eb6a711f", ] } diff --git a/examples/multi-runner/.terraform.lock.hcl b/examples/multi-runner/.terraform.lock.hcl index 045fb7350a..0f6cc37765 100644 --- a/examples/multi-runner/.terraform.lock.hcl +++ b/examples/multi-runner/.terraform.lock.hcl @@ -2,84 +2,84 @@ # Manual edits may be lost in future updates. provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = ">= 5.0.0, ~> 5.0, ~> 5.27" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.0.0, >= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider 
"registry.terraform.io/hashicorp/local" { - version = "2.5.2" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:IyFbOIO6mhikFNL/2h1iZJ6kyN3U00jgkpCLUCThAfE=", - "zh:136299545178ce281c56f36965bf91c35407c11897f7082b3b983d86cb79b511", - "zh:3b4486858aa9cb8163378722b642c57c529b6c64bfbfc9461d940a84cd66ebea", - "zh:4855ee628ead847741aa4f4fc9bed50cfdbf197f2912775dd9fe7bc43fa077c0", - "zh:4b8cd2583d1edcac4011caafe8afb7a95e8110a607a1d5fb87d921178074a69b", - "zh:52084ddaff8c8cd3f9e7bcb7ce4dc1eab00602912c96da43c29b4762dc376038", - "zh:71562d330d3f92d79b2952ffdda0dad167e952e46200c767dd30c6af8d7c0ed3", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:805f81ade06ff68fa8b908d31892eaed5c180ae031c77ad35f82cb7a74b97cf4", - "zh:8b6b3ebeaaa8e38dd04e56996abe80db9be6f4c1df75ac3cccc77642899bd464", - "zh:ad07750576b99248037b897de71113cc19b1a8d0bc235eb99173cc83d0de3b1b", - "zh:b9f1c3bfadb74068f5c205292badb0661e17ac05eb23bfe8bd809691e4583d0e", - "zh:cc4cbcd67414fefb111c1bf7ab0bc4beb8c0b553d01719ad17de9a047adff4d1", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } provider "registry.terraform.io/hashicorp/null" { - version = "3.2.3" + version = "3.2.4" constraints = "~> 3.0, ~> 3.2" hashes = [ - "h1:I0Um8UkrMUb81Fxq/dxbr3HLP2cecTH2WMJiwKSrwQY=", - "zh:22d062e5278d872fe7aed834f5577ba0a5afe34a3bdac2b81f828d8d3e6706d2", - "zh:23dead00493ad863729495dc212fd6c29b8293e707b055ce5ba21ee453ce552d", - "zh:28299accf21763ca1ca144d8f660688d7c2ad0b105b7202554ca60b02a3856d3", - "zh:55c9e8a9ac25a7652df8c51a8a9a422bd67d784061b1de2dc9fe6c3cb4e77f2f", - "zh:756586535d11698a216291c06b9ed8a5cc6a4ec43eee1ee09ecd5c6a9e297ac1", + "h1:L5V05xwp/Gto1leRryuesxjMfgZwjb7oool4WS1UEFQ=", + "zh:59f6b52ab4ff35739647f9509ee6d93d7c032985d9f8c6237d1f8a59471bbbe2", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:9d5eea62fdb587eeb96a8c4d782459f4e6b73baeece4d04b4a40e44faaee9301", - "zh:a6355f596a3fb8fc85c2fb054ab14e722991533f87f928e7169a486462c74670", - "zh:b5a65a789cff4ada58a5baffc76cb9767dc26ec6b45c00d2ec8b1b027f6db4ed", - "zh:db5ab669cf11d0e9f81dc380a6fdfcac437aea3d69109c7aef1a5426639d2d65", - "zh:de655d251c470197bcbb5ac45d289595295acb8f829f6c781d4a75c8c8b7c7dd", - "zh:f5c68199f2e6076bce92a12230434782bf768103a427e9bb9abee99b116af7b5", + "zh:795c897119ff082133150121d39ff26cb5f89a730a2c8c26f3a9c1abf81a9c43", + "zh:7b9c7b16f118fbc2b05a983817b8ce2f86df125857966ad356353baf4bff5c0a", + "zh:85e33ab43e0e1726e5f97a874b8e24820b6565ff8076523cc2922ba671492991", + "zh:9d32ac3619cfc93eb3c4f423492a8e0f79db05fec58e449dee9b2d5873d5f69f", + "zh:9e15c3c9dd8e0d1e3731841d44c34571b6c97f5b95e8296a45318b94e5287a6e", + "zh:b4c2ab35d1b7696c30b64bf2c0f3a62329107bd1a9121ce70683dec58af19615", + 
"zh:c43723e8cc65bcdf5e0c92581dcbbdcbdcf18b8d2037406a5f2033b1e22de442", + "zh:ceb5495d9c31bfb299d246ab333f08c7fb0d67a4f82681fbf47f2a21c3e11ab5", + "zh:e171026b3659305c558d9804062762d168f50ba02b88b231d20ec99578a6233f", + "zh:ed0fe2acdb61330b01841fa790be00ec6beaac91d41f311fb8254f74eb6a711f", ] } provider "registry.terraform.io/hashicorp/random" { - version = "3.6.3" + version = "3.7.2" constraints = "~> 3.0" hashes = [ - "h1:zG9uFP8l9u+yGZZvi5Te7PV62j50azpgwPunq2vTm1E=", - "zh:04ceb65210251339f07cd4611885d242cd4d0c7306e86dda9785396807c00451", - "zh:448f56199f3e99ff75d5c0afacae867ee795e4dfda6cb5f8e3b2a72ec3583dd8", - "zh:4b4c11ccfba7319e901df2dac836b1ae8f12185e37249e8d870ee10bb87a13fe", - "zh:4fa45c44c0de582c2edb8a2e054f55124520c16a39b2dfc0355929063b6395b1", - "zh:588508280501a06259e023b0695f6a18149a3816d259655c424d068982cbdd36", - "zh:737c4d99a87d2a4d1ac0a54a73d2cb62974ccb2edbd234f333abd079a32ebc9e", + "h1:KG4NuIBl1mRWU0KD/BGfCi1YN/j3F7H4YgeeM7iSdNs=", + "zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f", + "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc", + "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab", + "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3", + "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212", + "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:a357ab512e5ebc6d1fda1382503109766e21bbfdfaa9ccda43d313c122069b30", - "zh:c51bfb15e7d52cc1a2eaec2a903ac2aff15d162c172b1b4c17675190e8147615", - "zh:e0951ee6fa9df90433728b96381fb867e3db98f66f735e0c3e24f8f16903f0ad", - "zh:e3cdcb4e73740621dabd82ee6a37d6cfce7fee2a03d8074df65086760f5cf556", - "zh:eff58323099f1bd9a0bec7cb04f717e7f1b2774c7d612bf7581797e1622613a0", + "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34", + "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967", + "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d", + "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62", + "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0", ] } diff --git a/examples/multi-runner/README.md b/examples/multi-runner/README.md index c277f51416..7b0798fa21 100644 --- a/examples/multi-runner/README.md +++ b/examples/multi-runner/README.md @@ -53,7 +53,7 @@ terraform output -raw webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | ~> 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | @@ -61,8 +61,8 @@ terraform output -raw webhook_secret | Name | Version | |------|---------| -| [aws](#provider\_aws) | 5.82.1 | -| [random](#provider\_random) | 3.6.3 | +| [aws](#provider\_aws) | 6.22.1 | +| [random](#provider\_random) | 3.7.2 | ## Modules diff --git a/examples/multi-runner/versions.tf b/examples/multi-runner/versions.tf index 6bd9371ab0..af642af83b 100644 --- a/examples/multi-runner/versions.tf +++ b/examples/multi-runner/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = "~> 5.27" # ensure backwards compatibility with v5.x + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/permissions-boundary/.terraform.lock.hcl b/examples/permissions-boundary/.terraform.lock.hcl index 6a9f669990..be40c689d7 100644 --- 
a/examples/permissions-boundary/.terraform.lock.hcl +++ b/examples/permissions-boundary/.terraform.lock.hcl @@ -2,84 +2,84 @@ # Manual edits may be lost in future updates. provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = ">= 5.0.0, ~> 5.0, ~> 5.27, ~> 5.77" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.2" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:IyFbOIO6mhikFNL/2h1iZJ6kyN3U00jgkpCLUCThAfE=", - "zh:136299545178ce281c56f36965bf91c35407c11897f7082b3b983d86cb79b511", - "zh:3b4486858aa9cb8163378722b642c57c529b6c64bfbfc9461d940a84cd66ebea", - "zh:4855ee628ead847741aa4f4fc9bed50cfdbf197f2912775dd9fe7bc43fa077c0", - "zh:4b8cd2583d1edcac4011caafe8afb7a95e8110a607a1d5fb87d921178074a69b", - "zh:52084ddaff8c8cd3f9e7bcb7ce4dc1eab00602912c96da43c29b4762dc376038", - "zh:71562d330d3f92d79b2952ffdda0dad167e952e46200c767dd30c6af8d7c0ed3", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + 
"zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:805f81ade06ff68fa8b908d31892eaed5c180ae031c77ad35f82cb7a74b97cf4", - "zh:8b6b3ebeaaa8e38dd04e56996abe80db9be6f4c1df75ac3cccc77642899bd464", - "zh:ad07750576b99248037b897de71113cc19b1a8d0bc235eb99173cc83d0de3b1b", - "zh:b9f1c3bfadb74068f5c205292badb0661e17ac05eb23bfe8bd809691e4583d0e", - "zh:cc4cbcd67414fefb111c1bf7ab0bc4beb8c0b553d01719ad17de9a047adff4d1", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } provider "registry.terraform.io/hashicorp/null" { - version = "3.2.3" + version = "3.2.4" constraints = "~> 3.0, ~> 3.2" hashes = [ - "h1:I0Um8UkrMUb81Fxq/dxbr3HLP2cecTH2WMJiwKSrwQY=", - "zh:22d062e5278d872fe7aed834f5577ba0a5afe34a3bdac2b81f828d8d3e6706d2", - "zh:23dead00493ad863729495dc212fd6c29b8293e707b055ce5ba21ee453ce552d", - "zh:28299accf21763ca1ca144d8f660688d7c2ad0b105b7202554ca60b02a3856d3", - "zh:55c9e8a9ac25a7652df8c51a8a9a422bd67d784061b1de2dc9fe6c3cb4e77f2f", - "zh:756586535d11698a216291c06b9ed8a5cc6a4ec43eee1ee09ecd5c6a9e297ac1", + "h1:L5V05xwp/Gto1leRryuesxjMfgZwjb7oool4WS1UEFQ=", + "zh:59f6b52ab4ff35739647f9509ee6d93d7c032985d9f8c6237d1f8a59471bbbe2", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:9d5eea62fdb587eeb96a8c4d782459f4e6b73baeece4d04b4a40e44faaee9301", - "zh:a6355f596a3fb8fc85c2fb054ab14e722991533f87f928e7169a486462c74670", - "zh:b5a65a789cff4ada58a5baffc76cb9767dc26ec6b45c00d2ec8b1b027f6db4ed", - "zh:db5ab669cf11d0e9f81dc380a6fdfcac437aea3d69109c7aef1a5426639d2d65", - "zh:de655d251c470197bcbb5ac45d289595295acb8f829f6c781d4a75c8c8b7c7dd", - "zh:f5c68199f2e6076bce92a12230434782bf768103a427e9bb9abee99b116af7b5", + "zh:795c897119ff082133150121d39ff26cb5f89a730a2c8c26f3a9c1abf81a9c43", + "zh:7b9c7b16f118fbc2b05a983817b8ce2f86df125857966ad356353baf4bff5c0a", + "zh:85e33ab43e0e1726e5f97a874b8e24820b6565ff8076523cc2922ba671492991", + "zh:9d32ac3619cfc93eb3c4f423492a8e0f79db05fec58e449dee9b2d5873d5f69f", + "zh:9e15c3c9dd8e0d1e3731841d44c34571b6c97f5b95e8296a45318b94e5287a6e", + "zh:b4c2ab35d1b7696c30b64bf2c0f3a62329107bd1a9121ce70683dec58af19615", + "zh:c43723e8cc65bcdf5e0c92581dcbbdcbdcf18b8d2037406a5f2033b1e22de442", + "zh:ceb5495d9c31bfb299d246ab333f08c7fb0d67a4f82681fbf47f2a21c3e11ab5", + "zh:e171026b3659305c558d9804062762d168f50ba02b88b231d20ec99578a6233f", + "zh:ed0fe2acdb61330b01841fa790be00ec6beaac91d41f311fb8254f74eb6a711f", ] } provider "registry.terraform.io/hashicorp/random" { - version = "3.6.3" + version = "3.7.2" constraints = "~> 3.0" hashes = [ - "h1:zG9uFP8l9u+yGZZvi5Te7PV62j50azpgwPunq2vTm1E=", - "zh:04ceb65210251339f07cd4611885d242cd4d0c7306e86dda9785396807c00451", - "zh:448f56199f3e99ff75d5c0afacae867ee795e4dfda6cb5f8e3b2a72ec3583dd8", - "zh:4b4c11ccfba7319e901df2dac836b1ae8f12185e37249e8d870ee10bb87a13fe", - "zh:4fa45c44c0de582c2edb8a2e054f55124520c16a39b2dfc0355929063b6395b1", - "zh:588508280501a06259e023b0695f6a18149a3816d259655c424d068982cbdd36", - "zh:737c4d99a87d2a4d1ac0a54a73d2cb62974ccb2edbd234f333abd079a32ebc9e", + "h1:KG4NuIBl1mRWU0KD/BGfCi1YN/j3F7H4YgeeM7iSdNs=", + 
"zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f", + "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc", + "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab", + "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3", + "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212", + "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:a357ab512e5ebc6d1fda1382503109766e21bbfdfaa9ccda43d313c122069b30", - "zh:c51bfb15e7d52cc1a2eaec2a903ac2aff15d162c172b1b4c17675190e8147615", - "zh:e0951ee6fa9df90433728b96381fb867e3db98f66f735e0c3e24f8f16903f0ad", - "zh:e3cdcb4e73740621dabd82ee6a37d6cfce7fee2a03d8074df65086760f5cf556", - "zh:eff58323099f1bd9a0bec7cb04f717e7f1b2774c7d612bf7581797e1622613a0", + "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34", + "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967", + "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d", + "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62", + "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0", ] } diff --git a/examples/permissions-boundary/README.md b/examples/permissions-boundary/README.md index 117a9e5877..a5b1857d62 100644 --- a/examples/permissions-boundary/README.md +++ b/examples/permissions-boundary/README.md @@ -35,7 +35,7 @@ terraform apply | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | ~> 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | @@ -43,8 +43,8 @@ terraform apply | Name | Version | |------|---------| -| [aws](#provider\_aws) | 5.82.1 | -| [random](#provider\_random) | 3.6.3 | +| [aws](#provider\_aws) | 6.22.1 | +| [random](#provider\_random) | 3.7.2 | | [terraform](#provider\_terraform) | n/a | ## Modules diff --git a/examples/permissions-boundary/setup/README.md b/examples/permissions-boundary/setup/README.md index 0b54340386..defdfa8873 100644 --- a/examples/permissions-boundary/setup/README.md +++ b/examples/permissions-boundary/setup/README.md @@ -4,7 +4,9 @@ | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | ~> 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | +| [local](#requirement\_local) | ~> 2.0 | +| [random](#requirement\_random) | ~> 3.0 | ## Providers diff --git a/examples/permissions-boundary/setup/versions.tf b/examples/permissions-boundary/setup/versions.tf index ac6bb23d38..af642af83b 100644 --- a/examples/permissions-boundary/setup/versions.tf +++ b/examples/permissions-boundary/setup/versions.tf @@ -2,7 +2,15 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = "~> 5.27" + version = ">= 6.21" + } + local = { + source = "hashicorp/local" + version = "~> 2.0" + } + random = { + source = "hashicorp/random" + version = "~> 3.0" } } required_version = ">= 1.3.0" diff --git a/examples/permissions-boundary/versions.tf b/examples/permissions-boundary/versions.tf index 6bd9371ab0..af642af83b 100644 --- a/examples/permissions-boundary/versions.tf +++ b/examples/permissions-boundary/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = "~> 5.27" # ensure backwards compatibility with v5.x + version = ">= 6.21" } local = { source = 
"hashicorp/local" diff --git a/examples/prebuilt/.terraform.lock.hcl b/examples/prebuilt/.terraform.lock.hcl index 6a9f669990..be40c689d7 100644 --- a/examples/prebuilt/.terraform.lock.hcl +++ b/examples/prebuilt/.terraform.lock.hcl @@ -2,84 +2,84 @@ # Manual edits may be lost in future updates. provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = ">= 5.0.0, ~> 5.0, ~> 5.27, ~> 5.77" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.2" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:IyFbOIO6mhikFNL/2h1iZJ6kyN3U00jgkpCLUCThAfE=", - "zh:136299545178ce281c56f36965bf91c35407c11897f7082b3b983d86cb79b511", - "zh:3b4486858aa9cb8163378722b642c57c529b6c64bfbfc9461d940a84cd66ebea", - "zh:4855ee628ead847741aa4f4fc9bed50cfdbf197f2912775dd9fe7bc43fa077c0", - "zh:4b8cd2583d1edcac4011caafe8afb7a95e8110a607a1d5fb87d921178074a69b", - "zh:52084ddaff8c8cd3f9e7bcb7ce4dc1eab00602912c96da43c29b4762dc376038", - "zh:71562d330d3f92d79b2952ffdda0dad167e952e46200c767dd30c6af8d7c0ed3", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + 
"zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:805f81ade06ff68fa8b908d31892eaed5c180ae031c77ad35f82cb7a74b97cf4", - "zh:8b6b3ebeaaa8e38dd04e56996abe80db9be6f4c1df75ac3cccc77642899bd464", - "zh:ad07750576b99248037b897de71113cc19b1a8d0bc235eb99173cc83d0de3b1b", - "zh:b9f1c3bfadb74068f5c205292badb0661e17ac05eb23bfe8bd809691e4583d0e", - "zh:cc4cbcd67414fefb111c1bf7ab0bc4beb8c0b553d01719ad17de9a047adff4d1", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } provider "registry.terraform.io/hashicorp/null" { - version = "3.2.3" + version = "3.2.4" constraints = "~> 3.0, ~> 3.2" hashes = [ - "h1:I0Um8UkrMUb81Fxq/dxbr3HLP2cecTH2WMJiwKSrwQY=", - "zh:22d062e5278d872fe7aed834f5577ba0a5afe34a3bdac2b81f828d8d3e6706d2", - "zh:23dead00493ad863729495dc212fd6c29b8293e707b055ce5ba21ee453ce552d", - "zh:28299accf21763ca1ca144d8f660688d7c2ad0b105b7202554ca60b02a3856d3", - "zh:55c9e8a9ac25a7652df8c51a8a9a422bd67d784061b1de2dc9fe6c3cb4e77f2f", - "zh:756586535d11698a216291c06b9ed8a5cc6a4ec43eee1ee09ecd5c6a9e297ac1", + "h1:L5V05xwp/Gto1leRryuesxjMfgZwjb7oool4WS1UEFQ=", + "zh:59f6b52ab4ff35739647f9509ee6d93d7c032985d9f8c6237d1f8a59471bbbe2", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:9d5eea62fdb587eeb96a8c4d782459f4e6b73baeece4d04b4a40e44faaee9301", - "zh:a6355f596a3fb8fc85c2fb054ab14e722991533f87f928e7169a486462c74670", - "zh:b5a65a789cff4ada58a5baffc76cb9767dc26ec6b45c00d2ec8b1b027f6db4ed", - "zh:db5ab669cf11d0e9f81dc380a6fdfcac437aea3d69109c7aef1a5426639d2d65", - "zh:de655d251c470197bcbb5ac45d289595295acb8f829f6c781d4a75c8c8b7c7dd", - "zh:f5c68199f2e6076bce92a12230434782bf768103a427e9bb9abee99b116af7b5", + "zh:795c897119ff082133150121d39ff26cb5f89a730a2c8c26f3a9c1abf81a9c43", + "zh:7b9c7b16f118fbc2b05a983817b8ce2f86df125857966ad356353baf4bff5c0a", + "zh:85e33ab43e0e1726e5f97a874b8e24820b6565ff8076523cc2922ba671492991", + "zh:9d32ac3619cfc93eb3c4f423492a8e0f79db05fec58e449dee9b2d5873d5f69f", + "zh:9e15c3c9dd8e0d1e3731841d44c34571b6c97f5b95e8296a45318b94e5287a6e", + "zh:b4c2ab35d1b7696c30b64bf2c0f3a62329107bd1a9121ce70683dec58af19615", + "zh:c43723e8cc65bcdf5e0c92581dcbbdcbdcf18b8d2037406a5f2033b1e22de442", + "zh:ceb5495d9c31bfb299d246ab333f08c7fb0d67a4f82681fbf47f2a21c3e11ab5", + "zh:e171026b3659305c558d9804062762d168f50ba02b88b231d20ec99578a6233f", + "zh:ed0fe2acdb61330b01841fa790be00ec6beaac91d41f311fb8254f74eb6a711f", ] } provider "registry.terraform.io/hashicorp/random" { - version = "3.6.3" + version = "3.7.2" constraints = "~> 3.0" hashes = [ - "h1:zG9uFP8l9u+yGZZvi5Te7PV62j50azpgwPunq2vTm1E=", - "zh:04ceb65210251339f07cd4611885d242cd4d0c7306e86dda9785396807c00451", - "zh:448f56199f3e99ff75d5c0afacae867ee795e4dfda6cb5f8e3b2a72ec3583dd8", - "zh:4b4c11ccfba7319e901df2dac836b1ae8f12185e37249e8d870ee10bb87a13fe", - "zh:4fa45c44c0de582c2edb8a2e054f55124520c16a39b2dfc0355929063b6395b1", - 
"zh:588508280501a06259e023b0695f6a18149a3816d259655c424d068982cbdd36", - "zh:737c4d99a87d2a4d1ac0a54a73d2cb62974ccb2edbd234f333abd079a32ebc9e", + "h1:KG4NuIBl1mRWU0KD/BGfCi1YN/j3F7H4YgeeM7iSdNs=", + "zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f", + "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc", + "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab", + "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3", + "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212", + "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:a357ab512e5ebc6d1fda1382503109766e21bbfdfaa9ccda43d313c122069b30", - "zh:c51bfb15e7d52cc1a2eaec2a903ac2aff15d162c172b1b4c17675190e8147615", - "zh:e0951ee6fa9df90433728b96381fb867e3db98f66f735e0c3e24f8f16903f0ad", - "zh:e3cdcb4e73740621dabd82ee6a37d6cfce7fee2a03d8074df65086760f5cf556", - "zh:eff58323099f1bd9a0bec7cb04f717e7f1b2774c7d612bf7581797e1622613a0", + "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34", + "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967", + "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d", + "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62", + "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0", ] } diff --git a/examples/prebuilt/README.md b/examples/prebuilt/README.md index 2969ef8698..882388783b 100644 --- a/examples/prebuilt/README.md +++ b/examples/prebuilt/README.md @@ -73,7 +73,7 @@ terraform output webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | @@ -81,8 +81,8 @@ terraform output webhook_secret | Name | Version | |------|---------| -| [aws](#provider\_aws) | 5.82.1 | -| [random](#provider\_random) | 3.6.3 | +| [aws](#provider\_aws) | 6.22.1 | +| [random](#provider\_random) | 3.7.2 | ## Modules diff --git a/examples/prebuilt/versions.tf b/examples/prebuilt/versions.tf index 650e894012..af642af83b 100644 --- a/examples/prebuilt/versions.tf +++ b/examples/prebuilt/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/termination-watcher/.terraform.lock.hcl b/examples/termination-watcher/.terraform.lock.hcl index 9526664229..4f33187500 100644 --- a/examples/termination-watcher/.terraform.lock.hcl +++ b/examples/termination-watcher/.terraform.lock.hcl @@ -2,24 +2,24 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = "~> 5.27" + version = "6.22.1" + constraints = ">= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } diff --git a/lambdas/.nvmrc b/lambdas/.nvmrc index 53d1c14db3..54c65116f1 100644 --- a/lambdas/.nvmrc +++ b/lambdas/.nvmrc @@ -1 +1 @@ -v22 +v24 diff --git a/modules/ami-housekeeper/README.md b/modules/ami-housekeeper/README.md index 127d1f96b1..8898e0c85e 100644 --- a/modules/ami-housekeeper/README.md +++ b/modules/ami-housekeeper/README.md @@ -67,13 +67,13 @@ yarn run dist | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules @@ -105,7 +105,7 @@ No modules. | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | | [lambda\_memory\_size](#input\_lambda\_memory\_size) | Memory size limit in MB of the lambda. 
| `number` | `256` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_s3\_key](#input\_lambda\_s3\_key) | S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. | `string` | `null` | no | | [lambda\_s3\_object\_version](#input\_lambda\_s3\_object\_version) | S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. | `string` | `null` | no | diff --git a/modules/ami-housekeeper/variables.tf b/modules/ami-housekeeper/variables.tf index 6d9def766b..54bec6dc32 100644 --- a/modules/ami-housekeeper/variables.tf +++ b/modules/ami-housekeeper/variables.tf @@ -117,7 +117,7 @@ variable "lambda_s3_object_version" { variable "lambda_runtime" { description = "AWS Lambda runtime." type = string - default = "nodejs22.x" + default = "nodejs24.x" } variable "lambda_architecture" { diff --git a/modules/ami-housekeeper/versions.tf b/modules/ami-housekeeper/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/ami-housekeeper/versions.tf +++ b/modules/ami-housekeeper/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/download-lambda/README.md b/modules/download-lambda/README.md index 1d51508ab3..9971618089 100644 --- a/modules/download-lambda/README.md +++ b/modules/download-lambda/README.md @@ -30,7 +30,7 @@ module "lambdas" { | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [null](#requirement\_null) | ~> 3 | ## Providers diff --git a/modules/download-lambda/versions.tf b/modules/download-lambda/versions.tf index bb56e1e9ba..6bc038a353 100644 --- a/modules/download-lambda/versions.tf +++ b/modules/download-lambda/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } null = { source = "hashicorp/null" diff --git a/modules/lambda/README.md b/modules/lambda/README.md index 0420776ad4..26ff5e5c24 100644 --- a/modules/lambda/README.md +++ b/modules/lambda/README.md @@ -10,13 +10,13 @@ Generic module to create lambda functions | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules @@ -39,7 +39,7 @@ No modules. | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [lambda](#input\_lambda) | Configuration for the lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`handler`: The entrypoint for the lambda.
`principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics_namespace`: Namespace for the metrics emitted by the lambda.
`name`: The name of the lambda function.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
`subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Time out of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, "aws")
architecture = optional(string, "arm64")
environment_variables = optional(map(string), {})
handler = string
lambda_tags = optional(map(string), {})
log_level = optional(string, "info")
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, 180)
memory_size = optional(number, 256)
metrics_namespace = optional(string, "GitHub Runners")
name = string
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, "nodejs22.x")
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
tags = optional(map(string), {})
timeout = optional(number, 60)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | +| [lambda](#input\_lambda) | Configuration for the lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`handler`: The entrypoint for the lambda.
`principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics_namespace`: Namespace for the metrics emitted by the lambda.
`name`: The name of the lambda function.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
`subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Time out of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, "aws")
architecture = optional(string, "arm64")
environment_variables = optional(map(string), {})
handler = string
lambda_tags = optional(map(string), {})
log_level = optional(string, "info")
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, 180)
memory_size = optional(number, 256)
metrics_namespace = optional(string, "GitHub Runners")
name = string
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, "nodejs24.x")
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
tags = optional(map(string), {})
timeout = optional(number, 60)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | ## Outputs diff --git a/modules/lambda/variables.tf b/modules/lambda/variables.tf index bafd2372e8..7cbecba071 100644 --- a/modules/lambda/variables.tf +++ b/modules/lambda/variables.tf @@ -47,7 +47,7 @@ variable "lambda" { })), []) role_path = optional(string, null) role_permissions_boundary = optional(string, null) - runtime = optional(string, "nodejs22.x") + runtime = optional(string, "nodejs24.x") s3_bucket = optional(string, null) s3_key = optional(string, null) s3_object_version = optional(string, null) diff --git a/modules/lambda/versions.tf b/modules/lambda/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/lambda/versions.tf +++ b/modules/lambda/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md index 0515763f4c..32dab7e7c6 100644 --- a/modules/multi-runner/README.md +++ b/modules/multi-runner/README.md @@ -79,14 +79,14 @@ module "multi-runner" { | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3 | -| [aws](#requirement\_aws) | >= 5.77 | +| [aws](#requirement\_aws) | >= 6.21 | | [random](#requirement\_random) | ~> 3.0 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.77 | +| [aws](#provider\_aws) | >= 6.21 | | [random](#provider\_random) | ~> 3.0 | ## Modules @@ -140,7 +140,7 @@ module "multi-runner" { | [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | | [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no | | [lambda\_subnet\_ids](#input\_lambda\_subnet\_ids) | List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. | `list(string)` | `[]` | no | diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf index 5c839e1104..6ceab81ed6 100644 --- a/modules/multi-runner/variables.tf +++ b/modules/multi-runner/variables.tf @@ -364,7 +364,7 @@ variable "log_level" { variable "lambda_runtime" { description = "AWS Lambda runtime." type = string - default = "nodejs22.x" + default = "nodejs24.x" } variable "lambda_architecture" { diff --git a/modules/multi-runner/versions.tf b/modules/multi-runner/versions.tf index fa763f4e76..cf8961bb61 100644 --- a/modules/multi-runner/versions.tf +++ b/modules/multi-runner/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.77" + version = ">= 6.21" } random = { source = "hashicorp/random" diff --git a/modules/runner-binaries-syncer/README.md b/modules/runner-binaries-syncer/README.md index 740be89925..2999be138f 100644 --- a/modules/runner-binaries-syncer/README.md +++ b/modules/runner-binaries-syncer/README.md @@ -37,13 +37,13 @@ yarn run dist | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules @@ -89,7 +89,7 @@ No modules. | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | | [lambda\_memory\_size](#input\_lambda\_memory\_size) | Memory size of the lambda. | `number` | `256` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no |
-| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no |
+| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no |
| [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no |
| [lambda\_schedule\_expression](#input\_lambda\_schedule\_expression) | Scheduler expression for action runner binary syncer. | `string` | `"cron(27 * * * ? *)"` | no |
| [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no |
diff --git a/modules/runner-binaries-syncer/variables.tf b/modules/runner-binaries-syncer/variables.tf
index 4a38fb24b0..dd16a7c3ee 100644
--- a/modules/runner-binaries-syncer/variables.tf
+++ b/modules/runner-binaries-syncer/variables.tf
@@ -220,7 +220,7 @@ variable "lambda_principals" {
variable "lambda_runtime" {
description = "AWS Lambda runtime."
type = string
- default = "nodejs22.x"
+ default = "nodejs24.x"
}
variable "lambda_architecture" {
diff --git a/modules/runner-binaries-syncer/versions.tf b/modules/runner-binaries-syncer/versions.tf
index 33f3480ddf..42a40b33fd 100644
--- a/modules/runner-binaries-syncer/versions.tf
+++ b/modules/runner-binaries-syncer/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/runners/README.md b/modules/runners/README.md
index 169cee7eac..4ad4825113 100644
--- a/modules/runners/README.md
+++ b/modules/runners/README.md
@@ -53,13 +53,13 @@ yarn run dist
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
## Modules
@@ -179,7 +179,7 @@ yarn run dist
| [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no |
| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no |
| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no |
-| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no |
+| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no |
| [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no |
| [lambda\_scale\_down\_memory\_size](#input\_lambda\_scale\_down\_memory\_size) | Memory size limit in MB for scale down lambda. | `number` | `512` | no |
| [lambda\_scale\_up\_memory\_size](#input\_lambda\_scale\_up\_memory\_size) | Memory size limit in MB for scale-up lambda. | `number` | `512` | no |
diff --git a/modules/runners/job-retry/README.md b/modules/runners/job-retry/README.md
index f54b943855..7ecd69deeb 100644
--- a/modules/runners/job-retry/README.md
+++ b/modules/runners/job-retry/README.md
@@ -13,13 +13,13 @@ The module is an inner module and used by the runner module when the opt-in feat
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
## Modules
diff --git a/modules/runners/job-retry/versions.tf b/modules/runners/job-retry/versions.tf
index 33f3480ddf..42a40b33fd 100644
--- a/modules/runners/job-retry/versions.tf
+++ b/modules/runners/job-retry/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/runners/pool/README.md b/modules/runners/pool/README.md
index 052a8be60c..4bd268c37c 100644
--- a/modules/runners/pool/README.md
+++ b/modules/runners/pool/README.md
@@ -11,13 +11,13 @@ The pool is an opt-in feature. To be able to use the count on a module level to
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 0.14.1 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
## Modules
diff --git a/modules/runners/pool/versions.tf b/modules/runners/pool/versions.tf
index 5e8b391414..bceee0424e 100644
--- a/modules/runners/pool/versions.tf
+++ b/modules/runners/pool/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/runners/variables.tf b/modules/runners/variables.tf
index a45075bb52..846ddeafc6 100644
--- a/modules/runners/variables.tf
+++ b/modules/runners/variables.tf
@@ -597,7 +597,7 @@ variable "disable_runner_autoupdate" {
variable "lambda_runtime" {
description = "AWS Lambda runtime."
type = string
- default = "nodejs22.x"
+ default = "nodejs24.x"
}
variable "lambda_architecture" {
diff --git a/modules/runners/versions.tf b/modules/runners/versions.tf
index 33f3480ddf..42a40b33fd 100644
--- a/modules/runners/versions.tf
+++ b/modules/runners/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/setup-iam-permissions/README.md b/modules/setup-iam-permissions/README.md
index 29096519ae..f2401278c0 100644
--- a/modules/setup-iam-permissions/README.md
+++ b/modules/setup-iam-permissions/README.md
@@ -42,13 +42,13 @@ Next execute the created Terraform code via `terraform init && terraform apply`.
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
## Modules
diff --git a/modules/setup-iam-permissions/versions.tf b/modules/setup-iam-permissions/versions.tf
index 33f3480ddf..42a40b33fd 100644
--- a/modules/setup-iam-permissions/versions.tf
+++ b/modules/setup-iam-permissions/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/ssm/README.md b/modules/ssm/README.md
index cb23d3aa87..19b872e919 100644
--- a/modules/ssm/README.md
+++ b/modules/ssm/README.md
@@ -10,13 +10,13 @@ This module is used for storing configuration of runners, registration tokens an
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
## Modules
diff --git a/modules/ssm/versions.tf b/modules/ssm/versions.tf
index 33f3480ddf..42a40b33fd 100644
--- a/modules/ssm/versions.tf
+++ b/modules/ssm/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/termination-watcher/README.md b/modules/termination-watcher/README.md
index c79939065f..788f4c5c13 100644
--- a/modules/termination-watcher/README.md
+++ b/modules/termination-watcher/README.md
@@ -61,7 +61,7 @@ yarn run dist
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
## Providers
diff --git a/modules/termination-watcher/notification/README.md b/modules/termination-watcher/notification/README.md
index a6de04ac27..1a26de3bce 100644
--- a/modules/termination-watcher/notification/README.md
+++ b/modules/termination-watcher/notification/README.md
@@ -4,13 +4,13 @@
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
## Modules
diff --git a/modules/termination-watcher/notification/versions.tf b/modules/termination-watcher/notification/versions.tf
index 33f3480ddf..42a40b33fd 100644
--- a/modules/termination-watcher/notification/versions.tf
+++ b/modules/termination-watcher/notification/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/termination-watcher/termination/README.md b/modules/termination-watcher/termination/README.md
index 28b321aaa3..22912b9646 100644
--- a/modules/termination-watcher/termination/README.md
+++ b/modules/termination-watcher/termination/README.md
@@ -4,13 +4,13 @@
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
## Modules
diff --git a/modules/termination-watcher/termination/versions.tf b/modules/termination-watcher/termination/versions.tf
index 33f3480ddf..42a40b33fd 100644
--- a/modules/termination-watcher/termination/versions.tf
+++ b/modules/termination-watcher/termination/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/termination-watcher/versions.tf b/modules/termination-watcher/versions.tf
index 33f3480ddf..42a40b33fd 100644
--- a/modules/termination-watcher/versions.tf
+++ b/modules/termination-watcher/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
}
}
diff --git a/modules/webhook/README.md b/modules/webhook/README.md
index 0dbd1429cf..10b0179672 100644
--- a/modules/webhook/README.md
+++ b/modules/webhook/README.md
@@ -36,14 +36,14 @@ yarn run dist
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
| [null](#requirement\_null) | ~> 3 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
## Modules
@@ -72,7 +72,7 @@ yarn run dist
| [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. | `string` | `null` | no |
| [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no |
| [lambda\_memory\_size](#input\_lambda\_memory\_size) | Memory size limit in MB for lambda. | `number` | `256` | no |
-| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no |
+| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no |
| [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no |
| [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no |
| [lambda\_subnet\_ids](#input\_lambda\_subnet\_ids) | List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. | `list(string)` | `[]` | no |
diff --git a/modules/webhook/direct/README.md b/modules/webhook/direct/README.md
index ee3db410b9..aa69347ae4 100644
--- a/modules/webhook/direct/README.md
+++ b/modules/webhook/direct/README.md
@@ -4,14 +4,14 @@
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.27 |
+| [aws](#requirement\_aws) | >= 6.21 |
| [null](#requirement\_null) | ~> 3.2 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.27 |
+| [aws](#provider\_aws) | >= 6.21 |
| [null](#provider\_null) | ~> 3.2 |
## Modules
@@ -40,7 +40,7 @@ No modules.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [config](#input\_config) | Configuration object for all variables. |
object({
prefix = string
archive = optional(object({
enable = optional(bool, true)
retention_days = optional(number, 7)
}), {})
tags = optional(map(string), {})

lambda_subnet_ids = optional(list(string), [])
lambda_security_group_ids = optional(list(string), [])
sqs_job_queues_arns = list(string)
lambda_zip = optional(string, null)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 10)
role_permissions_boundary = optional(string, null)
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
lambda_apigateway_access_log_settings = optional(object({
destination_arn = string
format = string
}), null)
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
lambda_runtime = optional(string, "nodejs22.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
webhook_secret = map(string)
})
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
lambda_tags = optional(map(string), {})
api_gw_source_arn = string
ssm_parameter_runner_matcher_config = list(object({
name = string
arn = string
version = string
}))
})
| n/a | yes |
+| [config](#input\_config) | Configuration object for all variables. |
object({
prefix = string
archive = optional(object({
enable = optional(bool, true)
retention_days = optional(number, 7)
}), {})
tags = optional(map(string), {})

lambda_subnet_ids = optional(list(string), [])
lambda_security_group_ids = optional(list(string), [])
sqs_job_queues_arns = list(string)
lambda_zip = optional(string, null)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 10)
role_permissions_boundary = optional(string, null)
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
lambda_apigateway_access_log_settings = optional(object({
destination_arn = string
format = string
}), null)
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
lambda_runtime = optional(string, "nodejs24.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
webhook_secret = map(string)
})
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
lambda_tags = optional(map(string), {})
api_gw_source_arn = string
ssm_parameter_runner_matcher_config = list(object({
name = string
arn = string
version = string
}))
})
| n/a | yes |
## Outputs
diff --git a/modules/webhook/direct/variables.tf b/modules/webhook/direct/variables.tf
index 2a1b559c92..5da98e548a 100644
--- a/modules/webhook/direct/variables.tf
+++ b/modules/webhook/direct/variables.tf
@@ -28,7 +28,7 @@ variable "config" {
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
- lambda_runtime = optional(string, "nodejs22.x")
+ lambda_runtime = optional(string, "nodejs24.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
diff --git a/modules/webhook/direct/versions.tf b/modules/webhook/direct/versions.tf
index 3f6adcb64a..82776fc618 100644
--- a/modules/webhook/direct/versions.tf
+++ b/modules/webhook/direct/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
null = {
diff --git a/modules/webhook/eventbridge/README.md b/modules/webhook/eventbridge/README.md
index 329ac3c232..5c22c69010 100644
--- a/modules/webhook/eventbridge/README.md
+++ b/modules/webhook/eventbridge/README.md
@@ -4,14 +4,14 @@
| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
-| [aws](#requirement\_aws) | >= 5.0 |
+| [aws](#requirement\_aws) | >= 6.21 |
| [null](#requirement\_null) | ~> 3.2 |
## Providers
| Name | Version |
|------|---------|
-| [aws](#provider\_aws) | >= 5.0 |
+| [aws](#provider\_aws) | >= 6.21 |
| [null](#provider\_null) | ~> 3.2 |
## Modules
@@ -54,7 +54,7 @@ No modules.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
-| [config](#input\_config) | Configuration object for all variables. |
object({
prefix = string
archive = optional(object({
enable = optional(bool, true)
retention_days = optional(number, 7)
}), {})
tags = optional(map(string), {})

lambda_subnet_ids = optional(list(string), [])
lambda_security_group_ids = optional(list(string), [])
sqs_job_queues_arns = list(string)
lambda_zip = optional(string, null)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 10)
role_permissions_boundary = optional(string, null)
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
lambda_apigateway_access_log_settings = optional(object({
destination_arn = string
format = string
}), null)
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
lambda_runtime = optional(string, "nodejs22.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
webhook_secret = map(string)
})
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
lambda_tags = optional(map(string), {})
api_gw_source_arn = string
ssm_parameter_runner_matcher_config = list(object({
name = string
arn = string
version = string
}))
accept_events = optional(list(string), null)
})
| n/a | yes |
+| [config](#input\_config) | Configuration object for all variables. |
object({
prefix = string
archive = optional(object({
enable = optional(bool, true)
retention_days = optional(number, 7)
}), {})
tags = optional(map(string), {})

lambda_subnet_ids = optional(list(string), [])
lambda_security_group_ids = optional(list(string), [])
sqs_job_queues_arns = list(string)
lambda_zip = optional(string, null)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 10)
role_permissions_boundary = optional(string, null)
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
lambda_apigateway_access_log_settings = optional(object({
destination_arn = string
format = string
}), null)
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
lambda_runtime = optional(string, "nodejs24.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
webhook_secret = map(string)
})
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
lambda_tags = optional(map(string), {})
api_gw_source_arn = string
ssm_parameter_runner_matcher_config = list(object({
name = string
arn = string
version = string
}))
accept_events = optional(list(string), null)
})
| n/a | yes |
## Outputs
diff --git a/modules/webhook/eventbridge/variables.tf b/modules/webhook/eventbridge/variables.tf
index 8a884a6ba3..e39f24ab6d 100644
--- a/modules/webhook/eventbridge/variables.tf
+++ b/modules/webhook/eventbridge/variables.tf
@@ -28,7 +28,7 @@ variable "config" {
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
- lambda_runtime = optional(string, "nodejs22.x")
+ lambda_runtime = optional(string, "nodejs24.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
diff --git a/modules/webhook/eventbridge/versions.tf b/modules/webhook/eventbridge/versions.tf
index a3c66b7b40..82776fc618 100644
--- a/modules/webhook/eventbridge/versions.tf
+++ b/modules/webhook/eventbridge/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.0"
+ version = ">= 6.21"
}
null = {
diff --git a/modules/webhook/variables.tf b/modules/webhook/variables.tf
index c1683f2d3c..5f0a39c0d2 100644
--- a/modules/webhook/variables.tf
+++ b/modules/webhook/variables.tf
@@ -142,7 +142,7 @@ variable "log_level" {
variable "lambda_runtime" {
description = "AWS Lambda runtime."
type = string
- default = "nodejs22.x"
+ default = "nodejs24.x"
}
variable "aws_partition" {
diff --git a/modules/webhook/versions.tf b/modules/webhook/versions.tf
index 14f948428d..e864c4f9ed 100644
--- a/modules/webhook/versions.tf
+++ b/modules/webhook/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.27"
+ version = ">= 6.21"
}
null = {
diff --git a/variables.tf b/variables.tf
index 7ff6ecece4..17ea50bfcf 100644
--- a/variables.tf
+++ b/variables.tf
@@ -781,7 +781,7 @@ variable "disable_runner_autoupdate" {
variable "lambda_runtime" {
description = "AWS Lambda runtime."
type = string
- default = "nodejs22.x"
+ default = "nodejs24.x"
}
variable "lambda_architecture" {
diff --git a/versions.tf b/versions.tf
index f1bce2af72..77f4d7b326 100644
--- a/versions.tf
+++ b/versions.tf
@@ -4,7 +4,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = ">= 5.77"
+ version = ">= 6.21"
}
random = {
source = "hashicorp/random"
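For illustration only, not part of the patch: a minimal sketch of how a consumer of the root module might adopt these changes. The provider constraint, the `lambda_runtime` default, and the two event source mapping inputs come from this change; the module source, module name, and the remaining inputs are placeholders.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 6.21" # new minimum required by every module touched in this patch
    }
  }
}

module "github_runners" {
  # Placeholder source; substitute the registry or git source you actually use.
  source = "github-aws-runners/github-runner/aws"

  # Matches the new default runtime for all lambdas.
  lambda_runtime = "nodejs24.x"

  # Let scale-up process multiple queued-job events per invocation. AWS requires
  # a batching window greater than 0 when the batch size exceeds 10.
  lambda_event_source_mapping_batch_size                          = 20
  lambda_event_source_mapping_maximum_batching_window_in_seconds  = 5

  # ... remaining required inputs (GitHub app, VPC, subnets, ...) omitted ...
}
```

Leaving `lambda_runtime` unset now yields `nodejs24.x`; setting it back to `nodejs22.x` keeps the previous behaviour.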
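Similarly, a hedged sketch of the `config` object the inner `modules/webhook/direct` and `modules/webhook/eventbridge` modules accept after this change. Only the required attributes are set, every value is hypothetical, and optional attributes such as `lambda_runtime` now fall back to `nodejs24.x`. These inner modules are normally wired up by the parent webhook module rather than called directly.

```hcl
module "webhook_eventbridge" {
  source = "./modules/webhook/eventbridge" # inner module path from this repository

  config = {
    prefix            = "gh-ci"                                                       # hypothetical
    api_gw_source_arn = "arn:aws:execute-api:eu-west-1:123456789012:abcdef123/*/*/*"  # hypothetical

    sqs_job_queues_arns = [
      "arn:aws:sqs:eu-west-1:123456789012:gh-ci-queued-builds", # hypothetical
    ]

    github_app_parameters = {
      # map(string): hypothetical SSM parameter name holding the webhook secret
      webhook_secret = { name = "/gh-ci/github/app/webhook_secret" }
    }

    ssm_parameter_runner_matcher_config = [{
      name    = "/gh-ci/runner-matcher-config" # hypothetical
      arn     = "arn:aws:ssm:eu-west-1:123456789012:parameter/gh-ci/runner-matcher-config"
      version = "1"
    }]

    # Optional attributes keep their defaults, e.g. lambda_runtime = "nodejs24.x",
    # lambda_architecture = "arm64", accept_events = null.
  }
}
```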