Commits

26 commits
cde9597
Merge pull request #59488 from github/repo-sync
docs-bot Feb 4, 2026
a524ca5
Fix Claude article versioning (#59492)
steveward Feb 4, 2026
17a3608
Fix broken custom-properties link versioning for GHES < 3.21 (#59490)
heiskr Feb 4, 2026
18f31fa
[2026-02-05] Frictionless Model Access in Copilot For Individuals (#5…
sunbrye Feb 4, 2026
6ceb0c0
Fix accessibility mismatch in version picker (WCAG 2.5.3) (#59485)
heiskr Feb 4, 2026
6968ef4
Add data-search attribute to landing pages for search indexing (#59491)
heiskr Feb 4, 2026
6efa8b8
Sync secret scanning data (#59483)
docs-bot Feb 5, 2026
ff1cd52
Update OpenAPI Description (#59484)
docs-bot Feb 5, 2026
52c3338
Update user-provisioning-with-scim-on-ghes.md to state that the same …
bss-mc Feb 5, 2026
83232d8
docs(frontend): clarify SAML revocation does not delete tokens (#59503)
jusuchin85 Feb 5, 2026
ee1529e
Clarify Gradle Wrapper update behavior in Dependabot documentation (#…
Copilot Feb 5, 2026
b268aa7
Add note about Copilot metrics endpoints closing down (#59460)
sophietheking Feb 5, 2026
d201684
Update timeline shortcut descriptions for clarity
hubwriter Feb 5, 2026
e460d69
Add further reading section to use-copilot-cli.md
hubwriter Feb 5, 2026
a9b2e7f
Copilot CLI: add more details for custom instructions (#59478)
hubwriter Feb 5, 2026
8d126d8
Note that Migrations REST API is currently unavailable on GHE.com (#5…
jfine Feb 5, 2026
daf3689
Update product metadata for CLI (#59486)
hubwriter Feb 5, 2026
164bff7
GitHub Actions: February 2026 Updates (#59441)
Steve-Glass Feb 5, 2026
a4e9132
Update OpenAPI Description (#59515)
docs-bot Feb 5, 2026
8291b77
Copilot CLI: Improvements for ACP documentation (#59507)
hubwriter Feb 5, 2026
39a367d
Removed claimed support for NVMe disk controller for older versions (…
bonsohi Feb 5, 2026
57a21fb
Document required OAuth callback URL for Azure subscription connectio…
Copilot Feb 5, 2026
34fd385
[EDI] Viewing metrics for Dependabot alerts (#59421)
mchammer01 Feb 5, 2026
df7af05
GraphQL schema update (#59520)
docs-bot Feb 5, 2026
189cfab
Bump @actions/core from 2.0.0 to 3.0.0 (#59475)
dependabot[bot] Feb 5, 2026
c490765
Consolidate search index failure notifications into single message (#…
heiskr Feb 5, 2026
2 changes: 1 addition & 1 deletion .github/actions/labeler/labeler.ts
@@ -1,6 +1,6 @@
/* See function main in this file for documentation */

import coreLib from '@actions/core'
import * as coreLib from '@actions/core'
import { type Octokit } from '@octokit/rest'
import { CoreInject } from '@/links/scripts/action-injections'

61 changes: 52 additions & 9 deletions .github/workflows/index-general-search.yml
@@ -230,21 +230,64 @@ jobs:
FASTLY_SURROGATE_KEY: api-search:${{ matrix.language }}
run: npm run purge-fastly-edge-cache

- name: Alert on scraping failures
if: ${{ steps.check-failures.outputs.has_failures == 'true' && github.event_name != 'workflow_dispatch' }}
uses: ./.github/actions/slack-alert
- name: Upload failures artifact
if: ${{ steps.check-failures.outputs.has_failures == 'true' }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: search-failures-${{ matrix.language }}
path: /tmp/records/failures-summary.json
retention-days: 1

- uses: ./.github/actions/slack-alert
if: ${{ failure() && github.event_name != 'workflow_dispatch' }}
with:
slack_channel_id: ${{ secrets.DOCS_ALERTS_SLACK_CHANNEL_ID }}
slack_token: ${{ secrets.SLACK_DOCS_BOT_TOKEN }}
message: |
:warning: ${{ steps.check-failures.outputs.failed_pages }} page(s) failed to scrape for general search indexing (language: ${{ matrix.language }})

The indexing completed but some pages could not be scraped. This may affect search results for those pages.
notifyScrapingFailures:
name: Notify scraping failures
needs: updateElasticsearchIndexes
if: ${{ always() && github.repository == 'github/docs-internal' && github.event_name != 'workflow_dispatch' && needs.updateElasticsearchIndexes.result != 'cancelled' }}
runs-on: ubuntu-latest
steps:
- name: Check out repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1

- name: Download all failure artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: search-failures-*
path: /tmp/failures
continue-on-error: true

Workflow: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
- name: Check if any failures were downloaded
id: check-artifacts
run: |
if [ -d /tmp/failures ] && [ "$(ls -A /tmp/failures 2>/dev/null)" ]; then
echo "has_artifacts=true" >> $GITHUB_OUTPUT
else
echo "has_artifacts=false" >> $GITHUB_OUTPUT
fi

- uses: ./.github/actions/slack-alert
if: ${{ failure() && github.event_name != 'workflow_dispatch' }}
- uses: ./.github/actions/node-npm-setup
if: ${{ steps.check-artifacts.outputs.has_artifacts == 'true' }}

- name: Aggregate failures and format message
if: ${{ steps.check-artifacts.outputs.has_artifacts == 'true' }}
id: aggregate
run: |
RESULT=$(npx tsx src/search/scripts/aggregate-search-index-failures.ts /tmp/failures \
--workflow-url "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}")
{
echo 'result<<EOF'
echo "$RESULT"
echo 'EOF'
} >> "$GITHUB_OUTPUT"

- name: Send consolidated Slack notification
if: ${{ steps.check-artifacts.outputs.has_artifacts == 'true' }}
uses: ./.github/actions/slack-alert
with:
slack_channel_id: ${{ secrets.DOCS_ALERTS_SLACK_CHANNEL_ID }}
slack_token: ${{ secrets.SLACK_DOCS_BOT_TOKEN }}
message: ${{ fromJSON(steps.aggregate.outputs.result).message }}
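
The `Aggregate failures and format message` step above shells out to a repo script that merges the per-language failure artifacts into a single Slack message. As a rough sketch of that aggregation pattern (the real `aggregate-search-index-failures.ts` may differ; the `FailureSummary` shape, `aggregateFailures` name, and message format here are illustrative assumptions):

```typescript
// Illustrative sketch only: the failure-summary shape and the message
// format are assumptions, not the repo script's actual contract.
interface FailureSummary {
  language: string
  failedPages: string[]
}

// Merge per-language failure summaries into one Slack-ready message.
export function aggregateFailures(
  summaries: FailureSummary[],
  workflowUrl: string,
): { message: string } {
  const total = summaries.reduce((n, s) => n + s.failedPages.length, 0)
  const perLanguage = summaries
    .filter((s) => s.failedPages.length > 0)
    .map((s) => `${s.language}: ${s.failedPages.length} page(s)`)
    .join(', ')
  return {
    message:
      `:warning: ${total} page(s) failed to scrape for general search indexing ` +
      `(${perLanguage})\n\nWorkflow: ${workflowUrl}`,
  }
}
```

Emitting the result as JSON is what lets the final workflow step consume it with `fromJSON(steps.aggregate.outputs.result).message`.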
24 changes: 20 additions & 4 deletions content/actions/reference/runners/self-hosted-runners.md
@@ -70,20 +70,34 @@ When routing a job to a self-hosted runner, {% data variables.product.prodname_d

## Autoscaling

You can automatically increase or decrease the number of self-hosted runners in your environment in response to the webhook events you receive with a particular label.
Autoscaling allows you to dynamically adjust the number of self-hosted runners based on demand. This helps optimize resource utilization and ensures sufficient runner capacity during peak times while reducing costs during periods of low activity. There are multiple approaches to implementing autoscaling for self-hosted runners, each with different trade-offs in terms of complexity, reliability, and responsiveness.

### Supported autoscaling solutions
### {% data variables.product.prodname_actions_runner_controller %}

{% ifversion fpt or ghec %}

{% data variables.product.prodname_dotcom %}-hosted runners inherently autoscale based on your needs. {% data variables.product.prodname_dotcom %}-hosted runners can be a low-maintenance and cost-effective alternative to developing or implementing autoscaling solutions. For more information, see [AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners).
{% data variables.product.github %}-hosted runners inherently autoscale based on your needs. {% data variables.product.github %}-hosted runners can be a low-maintenance and cost-effective alternative to developing or implementing autoscaling solutions. For more information, see [AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners).

{% endif %}

The [actions/actions-runner-controller](https://github.com/actions/actions-runner-controller) (ARC) project is a Kubernetes-based runner autoscaler. {% data variables.product.prodname_dotcom %} recommends ARC if the team deploying it has expert Kubernetes knowledge and experience.
{% data variables.product.prodname_actions_runner_controller %} (ARC) is the reference implementation of {% data variables.product.github %}'s scale set APIs and the recommended Kubernetes-based solution for autoscaling self-hosted runners. ARC provides a complete, production-ready autoscaling solution for teams running {% data variables.product.prodname_actions %} in Kubernetes environments.

{% data variables.product.github %} recommends ARC for organizations with Kubernetes infrastructure and teams that have Kubernetes expertise. ARC handles the full lifecycle of runners within your cluster, from provisioning to job execution to cleanup.

For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller) and [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-support-for-actions-runner-controller).

### {% data variables.product.prodname_actions %} Runner Scale Set Client

The {% data variables.product.prodname_actions %} Runner Scale Set Client is a standalone Go-based module that platform teams, integrators, and infrastructure providers can use to build custom autoscaling solutions for {% data variables.product.prodname_actions %} runners across VMs, containers, on-premises infrastructure, and cloud services, on Windows, Linux, and macOS.

The client orchestrates {% data variables.product.github %} API interactions for scale sets while leaving infrastructure provisioning to you. You define how runners are created, scaled, and destroyed, and you can configure runners with multiple labels for flexible job routing and targeting. This gives organizations granular control over runner lifecycle management, along with real-time telemetry for job execution.

The client is designed to work out of the box with basic configurations, allowing teams to implement autoscaling quickly. It is also built to be extended and customized to meet each organization's specific infrastructure requirements, compliance constraints, and operational workflows, from simple scaling logic to complex, multi-environment provisioning strategies.

The {% data variables.product.prodname_actions %} Runner Scale Set Client is an open source project. The [actions/scaleset repository](https://github.com/actions/scaleset) contains the source code, documentation, and examples, including implementation guides, sample configurations for various infrastructure scenarios, and reference architectures that demonstrate how to integrate the client with different provisioning systems. The repository also includes contributing guidelines for teams that want to extend the client or share their autoscaling patterns with the community.

> **Note:** The Runner Scale Set Client is not a replacement for {% data variables.product.prodname_actions_runner_controller %} (ARC), which remains the reference implementation of the scale set APIs and the recommended Kubernetes solution for autoscaling runners. Instead, the client is a complementary tool for interfacing with the same scale set APIs to build custom autoscaling solutions outside of Kubernetes.

### Ephemeral runners for autoscaling

{% data variables.product.prodname_dotcom %} recommends implementing autoscaling with ephemeral self-hosted runners; autoscaling with persistent self-hosted runners is not recommended. In certain cases, {% data variables.product.prodname_dotcom %} cannot guarantee that jobs are not assigned to persistent runners while they are shut down. With ephemeral runners, this can be guaranteed because {% data variables.product.prodname_dotcom %} only assigns one job to a runner.
@@ -130,6 +144,8 @@ You can create your own autoscaling environment by using payloads received from
* For more information about the `workflow_job` webhook, see [AUTOTITLE](/webhooks-and-events/webhooks/webhook-events-and-payloads#workflow_job).
* To learn how to work with webhooks, see [AUTOTITLE](/webhooks).

> **Note:** This approach relies on timely webhook delivery to make scaling decisions, which can introduce delays and reliability concerns. Consider using {% data variables.product.prodname_actions_runner_controller %} or the Runner Scale Set Client for higher-volume autoscaling scenarios.
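
As a minimal sketch of webhook-driven scaling (the event shape is trimmed to the fields used here, and `desiredRunners` with its cap parameter is illustrative, not part of any GitHub API):

```typescript
// Illustrative only: a trimmed-down workflow_job payload and a naive
// scaling policy. Real implementations must handle delivery ordering,
// retries, and missed or duplicate webhook deliveries.
type WorkflowJobAction = 'queued' | 'in_progress' | 'completed'

interface WorkflowJobEvent {
  action: WorkflowJobAction
  labels: string[]
}

// Count jobs still queued for `label` (queued minus picked-up) and cap
// the desired runner count at `maxRunners`.
export function desiredRunners(
  events: WorkflowJobEvent[],
  label: string,
  maxRunners: number,
): number {
  let queued = 0
  for (const e of events) {
    if (!e.labels.includes(label)) continue
    if (e.action === 'queued') queued++
    else if (e.action === 'in_progress' && queued > 0) queued--
  }
  return Math.min(queued, maxRunners)
}
```

A receiver would feed each `workflow_job` delivery into logic like this and provision or tear down ephemeral runners to match the result.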

### Authentication requirements

You can register and delete repository and organization self-hosted runners using [the API](/rest/actions/self-hosted-runners). To authenticate to the API, your autoscaling implementation can use an access token or a {% data variables.product.prodname_dotcom %} app.
@@ -27,6 +27,7 @@ The following features are currently unavailable on {% data variables.enterprise.
| {% data variables.product.prodname_marketplace %} | {% data variables.product.prodname_marketplace %}, as a means of searching for, purchasing, and directly installing apps and actions, is unavailable. Ecosystem apps and actions can still be discovered and installed from their source, but they may require modification to work on {% data variables.enterprise.data_residency_site %}. | [{% data variables.product.prodname_actions %} workflows from {% data variables.product.prodname_marketplace %}](#github-actions-workflows-from-github-marketplace) |
| Certain features of {% data variables.product.prodname_github_connect %} | Although you can connect an enterprise on {% data variables.enterprise.data_residency_site %} to a {% data variables.product.prodname_ghe_server %} instance, certain features of {% data variables.product.prodname_github_connect %} are not available, including resolution of actions from {% data variables.product.prodname_dotcom_the_website %}. | [{% data variables.product.prodname_github_connect %}](#github-connect) |
| Some features currently in {% data variables.release-phases.public_preview %} or {% data variables.release-phases.private_preview %} | Certain features that are in a preview phase on {% data variables.product.prodname_dotcom_the_website %} may not be available on {% data variables.enterprise.data_residency_site %} until GA. | |
| Migrations REST API | Currently unavailable. | [AUTOTITLE](/rest/migrations) |

## Permanently unavailable features

22 changes: 22 additions & 0 deletions content/admin/data-residency/network-details-for-ghecom.md
@@ -215,6 +215,28 @@ Japan region:
* `prodjpw01resultssa2.blob.core.windows.net`
* `prodjpw01resultssa3.blob.core.windows.net`

### OAuth callback URL for connecting an Azure subscription for billing

When you connect or update an Azure subscription for billing, you must allow access to the following URL:

* `https://github.com/enterprises/oauth_callback`

This URL is required during the OAuth authentication flow that occurs when:

* Connecting an Azure subscription to your enterprise for the first time
* Changing or updating an existing Azure subscription connection

> [!IMPORTANT]
> * The URL must be allowed with all query parameters, for example `https://github.com/enterprises/oauth_callback?code=...`
> * After the Azure subscription is successfully connected and the subscription ID is stored, you can remove this URL from your allowlist
> * To change or update your Azure subscription, you must add the URL back to your allowlist

The OAuth flow works as follows:

1. The user starts the connection process on `SUBDOMAIN.ghe.com`
1. Azure redirects to `https://github.com/enterprises/oauth_callback` to complete the OAuth flow
1. The system redirects back to `SUBDOMAIN.ghe.com` to finalize the connection

## IP ranges for {% data variables.product.prodname_importer_proper_name %}

If you're running a migration to your enterprise with {% data variables.product.prodname_importer_proper_name %}, you may need to add certain ranges to an IP allow list. See [AUTOTITLE](/migrations/using-github-enterprise-importer/migrating-between-github-products/managing-access-for-a-migration-between-github-products#configuring-ip-allow-lists-for-migrations).
@@ -43,7 +43,7 @@ The following IdPs are partner IdPs. They offer an application that you can use
* Okta
* PingFederate ({% data variables.release-phases.public_preview %})

When you use a single partner IdP for both authentication and provisioning, {% data variables.product.company_short %} provides support for the application on the partner IdP and the IdP's integration with {% data variables.product.prodname_dotcom %}. Support for PingFederate is in {% data variables.release-phases.public_preview %}.
When you use a single partner IdP for both authentication and provisioning, {% data variables.product.company_short %} provides support for the application on the partner IdP and the IdP's integration with {% data variables.product.prodname_dotcom %}. The same application must be used for both SAML authentication and SCIM provisioning. Support for PingFederate is in {% data variables.release-phases.public_preview %}.

We do not have a supported partner application when using Entra ID for Azure Government.

@@ -64,15 +64,15 @@ Other policies are available as blanket restrictions. These give you more contro

## Targeting policies with metadata

You can enable better governance through automated policy enforcement. This is possible with custom properties, allowing you to add structured metadata to your resources. See [AUTOTITLE](/admin/managing-accounts-and-repositories/managing-organizations-in-your-enterprise/custom-properties).
You can enable better governance through automated policy enforcement. This is possible with custom properties, allowing you to add structured metadata to your resources.{% ifversion ghec or ghes > 3.20 %} See [AUTOTITLE](/admin/managing-accounts-and-repositories/managing-organizations-in-your-enterprise/custom-properties).{% endif %}

With **repository custom properties**, you can classify repositories by attributes like risk level, team ownership, or compliance requirements. This metadata enables you to automatically apply different governance rules based on repository characteristics.

With **organization custom properties**, you can categorize organizations within your enterprise by data sensitivity, regulatory frameworks, or business units. You can then use these properties to selectively target organizations with enterprise rulesets.

Both types of custom properties integrate with rulesets, allowing you to create powerful governance frameworks that automatically enforce the right policies based on metadata rather than manual repository selection.

See [AUTOTITLE](/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization) and [AUTOTITLE](/admin/managing-accounts-and-repositories/managing-organizations-in-your-enterprise/managing-custom-properties-for-organizations).
See [AUTOTITLE](/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization){% ifversion ghec or ghes > 3.20 %} and [AUTOTITLE](/admin/managing-accounts-and-repositories/managing-organizations-in-your-enterprise/managing-custom-properties-for-organizations){% endif %}.

## Monitoring activity
