
Commit 9af93d6

Merge pull request #39580 from github/repo-sync ("Repo sync")

2 parents: 718b5e1 + eff11ee

14 files changed: +263 −8 lines


content/copilot/concepts/about-mcp.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -25,7 +25,7 @@ For more information on MCP, see [the official MCP documentation](https://modelc

 To learn how to configure and use MCP servers with {% data variables.copilot.copilot_chat_short %}, see [AUTOTITLE](/copilot/how-tos/context/model-context-protocol/extending-copilot-chat-with-mcp).

-Enterprises and organizations can choose to enable or disable use of MCP for members of their organization or enterprise. See [AUTOTITLE](/copilot/how-tos/administer/enterprises/managing-policies-and-features-for-copilot-in-your-enterprise#mcp-servers-on-githubcom). The MCP policy only applies to members with {% data variables.copilot.copilot_business_short %}, {% data variables.copilot.copilot_enterprise_short %}, or {% data variables.product.prodname_copilot_short %} licenses assigned by the organization or enterprise that configures the policy. {% data variables.copilot.copilot_free_short %}, {% data variables.copilot.copilot_pro_short %}, or {% data variables.copilot.copilot_pro_plus_short %} do not have their MCP access governed by this policy.
+{% data reusables.copilot.mcp.mcp-policy %}

 ## About the {% data variables.product.github %} MCP server
```

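For context, the page touched above documents configuring MCP servers for {% data variables.copilot.copilot_chat_short %}. As a purely illustrative sketch (the server name and URL below are assumptions, not taken from this commit), an MCP server entry in a VS Code `.vscode/mcp.json` file might look like:

```json
{
  "servers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    }
  }
}
```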
content/copilot/how-tos/provide-context/use-mcp/extend-copilot-chat-with-mcp.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -5,6 +5,7 @@ shortTitle: Extend Copilot Chat with MCP
 intro: 'Learn how to use the Model Context Protocol (MCP) to extend {% data variables.copilot.copilot_chat_short %}.'
 versions:
   feature: copilot
+defaultTool: vscode
 topics:
   - Copilot
 redirect_from:
@@ -29,7 +30,7 @@ For information on currently available MCP servers, see [the MCP servers reposit

 {% vscode %}

-Enterprises and organizations can choose to enable or disable use of MCP for members of their organization or enterprise. See [AUTOTITLE](/copilot/how-tos/administer/enterprises/managing-policies-and-features-for-copilot-in-your-enterprise#mcp-servers-on-githubcom). The MCP policy only applies to members with {% data variables.copilot.copilot_business_short %}, {% data variables.copilot.copilot_enterprise_short %}, or {% data variables.product.prodname_copilot_short %} licenses assigned by the organization or enterprise that configures the policy. {% data variables.copilot.copilot_free_short %}, {% data variables.copilot.copilot_pro_short %}, or {% data variables.copilot.copilot_pro_plus_short %} do not have their MCP access governed by this policy.
+{% data reusables.copilot.mcp.mcp-policy %}

 ## Prerequisites
```

content/copilot/how-tos/provide-context/use-mcp/use-the-github-mcp-server.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -4,6 +4,7 @@ intro: 'Learn how to use the GitHub Model Context Protocol (MCP) server to exten
 shortTitle: Use the GitHub MCP Server
 versions:
   feature: copilot
+defaultTool: vscode
 topics:
   - Copilot
 redirect_from:
```

content/graphql/guides/managing-enterprise-accounts.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -80,6 +80,7 @@ The next steps will use Insomnia.
 1. Add the base url and `POST` method to your GraphQL client. When using GraphQL to request information (queries), change information (mutations), or transfer data using the GitHub API, the default HTTP method is `POST` and the base url follows this syntax:
    * For your enterprise instance: `https://<HOST>/api/graphql`
    * For GitHub Enterprise Cloud: `https://api.github.com/graphql`
+   * For GitHub Enterprise Cloud with Data Residency: `https://api.SUBDOMAIN.ghe.com/graphql`

 1. Select the "Auth" menu and click **Bearer Token**. If you've previously selected a different authentication method, the menu will be labeled with that method, such as "Basic Auth", instead.
   ![Screenshot of the expanded "Auth" menu in Insomnia. The menu label, "Auth", and the "Bearer Token" option are outlined in dark orange.](/assets/images/developer/graphql/insomnia-bearer-token-option.png)
```
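The hunk above adds a third GraphQL base URL for data residency. A small sketch of how a client might select the right endpoint, based only on the three URL patterns the docs list (the function and parameter names are mine, not from the docs):

```python
def graphql_base_url(deployment: str, host: str = "", subdomain: str = "") -> str:
    """Return the GraphQL endpoint for a given GitHub deployment type.

    deployment: "server" (your enterprise instance), "cloud"
    (GitHub Enterprise Cloud), or "data-residency" (GHE.com).
    """
    if deployment == "server":
        # Your enterprise instance: https://<HOST>/api/graphql
        return f"https://{host}/api/graphql"
    if deployment == "data-residency":
        # GitHub Enterprise Cloud with data residency
        return f"https://api.{subdomain}.ghe.com/graphql"
    # GitHub Enterprise Cloud default
    return "https://api.github.com/graphql"
```

All requests then go to this base URL with the `POST` method and a bearer token, as the Insomnia steps describe.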
Lines changed: 45 additions & 0 deletions (new file)

```yaml
date: '2025-07-29'
sections:
  security_fixes:
    - |
      Packages have been updated to the latest security versions.
  bugs:
    - |
      Administrators would occasionally encounter timeouts when downloading diagnostics via the Management Console.
    - |
      In full cluster topologies, some expensive stats queries are skipped during `ghe-cluster-support-bundle` to prevent overloading the nodes with identical requests.
    - |
      Unsuccessful attempts to sign in to the Management Console were reported in the audit log and were indistinguishable from successful attempts.
  known_issues:
    - |
      During the validation phase of a configuration run, a `No such object` error may occur for the Notebook and Viewscreen services. This error can be ignored, as the services should still start correctly.
    - |
      If the root site administrator is locked out of the Management Console after failed login attempts, the account does not unlock automatically after the defined lockout time. Someone with administrative SSH access to the instance must unlock the account using the administrative shell. For more information, see "[AUTOTITLE](/admin/configuration/administering-your-instance-from-the-management-console/troubleshooting-access-to-the-management-console#unlocking-the-root-site-administrator-account)."
    - |
      On an instance with the HTTP `X-Forwarded-For` header configured for use behind a load balancer, all client IP addresses in the instance's audit log erroneously appear as 127.0.0.1.
    - |
      {% data reusables.release-notes.large-adoc-files-issue %}
    - |
      Admin stats REST API endpoints may time out on appliances with many users or repositories. Retrying the request until data is returned is advised.
    - |
      When following the steps for [Replacing the primary MySQL node](/admin/monitoring-managing-and-updating-your-instance/configuring-clustering/replacing-a-cluster-node#replacing-the-primary-mysql-node), step 14 (running `ghe-cluster-config-apply`) might fail with errors. If this occurs, re-running `ghe-cluster-config-apply` is expected to succeed.
    - |
      Running a config apply as part of the steps for [Replacing a node in an emergency](/admin/monitoring-managing-and-updating-your-instance/configuring-clustering/replacing-a-cluster-node#replacing-a-node-in-an-emergency) may fail with errors if the node being replaced is still reachable. If this occurs, shut down the node and repeat the steps.
    - |
      {% data reusables.release-notes.2024-06-possible-frontend-5-minute-outage-during-hotpatch-upgrade %}
    - |
      When restoring data originally backed up from an appliance running version 3.13 or greater, the Elasticsearch indices need to be reindexed before some of the data appears. This happens via a nightly scheduled job. It can also be forced by running `/usr/local/share/enterprise/ghe-es-search-repair`.
    - |
      An organization-level code scanning configuration page is displayed on instances that do not use GitHub Advanced Security or code scanning.
    - |
      In the header bar displayed to site administrators, some icons are not available.
    - |
      When enabling automatic update checks for the first time in the Management Console, the status is not dynamically reflected until the "Updates" page is reloaded.
    - |
      When restoring from a backup snapshot, a large number of `mapper_parsing_exception` errors may be displayed.
    - |
      After a restore, existing outside collaborators cannot be added to repositories in a new organization. This issue can be resolved by running `/usr/local/share/enterprise/ghe-es-search-repair` on the appliance.
    - |
      After a geo-replica is promoted to be a primary by running `ghe-repl-promote`, the Actions workflow page of a repository does not show any suggested workflows.
    - |
      Unexpected elements may appear in the UI on the repository overview page for locked repositories.
```
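One of the known issues above advises retrying admin stats requests until data is returned. A generic retry loop in that spirit (a sketch; the helper name and backoff policy are mine, and in practice `fetch` would wrap a GET against the instance's admin stats REST endpoint):

```python
import time

def retry_until_data(fetch, attempts=5, delay=2.0):
    """Call `fetch` until it returns a non-empty result or attempts run out.

    `fetch` is any callable that returns data, or raises TimeoutError
    when the endpoint times out (the behavior described in the known issue).
    """
    last_err = None
    for i in range(attempts):
        try:
            data = fetch()
            if data:
                return data
        except TimeoutError as err:
            last_err = err
        # Simple linear backoff between retries
        time.sleep(delay * (i + 1))
    raise RuntimeError(f"no data after {attempts} attempts") from last_err
```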
Lines changed: 56 additions & 0 deletions (new file)

```yaml
date: '2025-07-29'
sections:
  security_fixes:
    - |
      Packages have been updated to the latest security versions.
  bugs:
    - |
      Administrators would occasionally encounter timeouts when downloading diagnostics via the Management Console.
    - |
      In full cluster topologies, some expensive stats queries are skipped during `ghe-cluster-support-bundle` to prevent overloading the nodes with identical requests.
    - |
      Unsuccessful attempts to sign in to the Management Console were reported in the audit log and were indistinguishable from successful attempts.
    - |
      Enterprise Managed Users (EMUs) who were restricted from creating user namespace repositories could still create repositories in organizations and transfer them to their user namespace.
    - |
      On instances with secret scanning enabled, repositories could display a persistent backfill banner due to pending scans associated with canceled job groups.
    - |
      Administrators and users could experience delays due to performance regressions affecting the background processing of notification jobs.
  changes:
    - |
      For administrators performing a live upgrade, a new entry point has been added to the upgrade container to clean up database tables. This utility can be run manually via `ghe-live-migrations -cleanup`, and is also executed automatically via `ghe-config-apply` after a complete upgrade.
    - |
      During pre-upgrade operations of a live upgrade, tables are now renamed instead of being dropped immediately. The tables are then dropped at a later stage via `ghe-config-apply`.
  known_issues:
    - |
      During the validation phase of a configuration run, a `No such object` error may occur for the Notebook and Viewscreen services. This error can be ignored, as the services should still start correctly.
    - |
      If the root site administrator is locked out of the Management Console after failed login attempts, the account does not unlock automatically after the defined lockout time. Someone with administrative SSH access to the instance must unlock the account using the administrative shell. For more information, see "[AUTOTITLE](/admin/configuration/administering-your-instance-from-the-management-console/troubleshooting-access-to-the-management-console#unlocking-the-root-site-administrator-account)."
    - |
      On an instance with the HTTP `X-Forwarded-For` header configured for use behind a load balancer, all client IP addresses in the instance's audit log erroneously appear as 127.0.0.1.
    - |
      {% data reusables.release-notes.large-adoc-files-issue %}
    - |
      Admin stats REST API endpoints may time out on appliances with many users or repositories. Retrying the request until data is returned is advised.
    - |
      When following the steps for [Replacing the primary MySQL node](/admin/monitoring-managing-and-updating-your-instance/configuring-clustering/replacing-a-cluster-node#replacing-the-primary-mysql-node), step 14 (running `ghe-cluster-config-apply`) might fail with errors. If this occurs, re-running `ghe-cluster-config-apply` is expected to succeed.
    - |
      Running a config apply as part of the steps for [Replacing a node in an emergency](/admin/monitoring-managing-and-updating-your-instance/configuring-clustering/replacing-a-cluster-node#replacing-a-node-in-an-emergency) may fail with errors if the node being replaced is still reachable. If this occurs, shut down the node and repeat the steps.
    - |
      {% data reusables.release-notes.2024-06-possible-frontend-5-minute-outage-during-hotpatch-upgrade %}
    - |
      When restoring data originally backed up from an appliance running version 3.13 or greater, the Elasticsearch indices need to be reindexed before some of the data appears. This happens via a nightly scheduled job. It can also be forced by running `/usr/local/share/enterprise/ghe-es-search-repair`.
    - |
      An organization-level code scanning configuration page is displayed on instances that do not use GitHub Advanced Security or code scanning.
    - |
      In the header bar displayed to site administrators, some icons are not available.
    - |
      When enabling automatic update checks for the first time in the Management Console, the status is not dynamically reflected until the "Updates" page is reloaded.
    - |
      When restoring from a backup snapshot, a large number of `mapper_parsing_exception` errors may be displayed.
    - |
      When initializing a new GHES cluster, nodes with the `consul-server` role should be added to the cluster before adding additional nodes. Adding all nodes simultaneously creates a race condition between Nomad server registration and Nomad client registration.
    - |
      Admins setting up cluster high availability (HA) may encounter a spokes error when running `ghe-cluster-repl-status` if a new organization and repositories are created before using the `ghe-cluster-repl-bootstrap` command. To avoid this issue, complete the cluster HA setup with `ghe-cluster-repl-bootstrap` before creating new organizations and repositories.
    - |
      After a restore, existing outside collaborators cannot be added to repositories in a new organization. This issue can be resolved by running `/usr/local/share/enterprise/ghe-es-search-repair` on the appliance.
```
Lines changed: 70 additions & 0 deletions (new file)

```yaml
date: '2025-07-29'
sections:
  security_fixes:
    - |
      The maintenance page in the Management Console did not include cross-site request forgery (CSRF) protection.
    - |
      Packages have been updated to the latest security versions.
  bugs:
    - |
      On instances in a cluster configuration, builds of GitHub Pages sites timed out in GitHub Actions workflows.
    - |
      Administrators would occasionally encounter timeouts when downloading diagnostics via the Management Console.
    - |
      In full cluster topologies, some expensive stats queries are skipped during `ghe-cluster-support-bundle` to prevent overloading the nodes with identical requests.
    - |
      Unsuccessful attempts to sign in to the Management Console were reported in the audit log and were indistinguishable from successful attempts.
    - |
      Enterprise Managed Users (EMUs) who were restricted from creating user namespace repositories could still create repositories in organizations and transfer them to their user namespace.
    - |
      In some scenarios, during an upgrade from GHES 3.14 to GHES 3.16, the `BackfillDefaultLegacyEnterpriseConfigurationsTransition` migration step could fail.
    - |
      Administrators and users could experience delays due to performance regressions affecting the background processing of notification jobs.
  changes:
    - |
      For administrators performing a live upgrade, a new entry point has been added to the upgrade container to clean up database tables. This utility can be run manually via `ghe-live-migrations -cleanup`, and is also executed automatically via `ghe-config-apply` after a complete upgrade.
    - |
      During pre-upgrade operations of a live upgrade, tables are now renamed instead of being dropped immediately. The tables are then dropped at a later stage via `ghe-config-apply`.
    - |
      Events for adding or removing issues and pull requests from a project, or changing their status within a project, are now included in the items timeline alongside existing events. This update helps administrators and users more comprehensively track project-related activity.
  known_issues:
    - |
      Custom firewall rules are removed during the upgrade process.
    - |
      During the validation phase of a configuration run, a `No such object` error may occur for the Notebook and Viewscreen services. This error can be ignored, as the services should still start correctly.
    - |
      If the root site administrator is locked out of the Management Console after failed login attempts, the account does not unlock automatically after the defined lockout time. Someone with administrative SSH access to the instance must unlock the account using the administrative shell. For more information, see "[AUTOTITLE](/admin/configuration/administering-your-instance-from-the-management-console/troubleshooting-access-to-the-management-console#unlocking-the-root-site-administrator-account)."
    - |
      On an instance with the HTTP `X-Forwarded-For` header configured for use behind a load balancer, all client IP addresses in the instance's audit log erroneously appear as 127.0.0.1.
    - |
      {% data reusables.release-notes.large-adoc-files-issue %}
    - |
      Admin stats REST API endpoints may time out on appliances with many users or repositories. Retrying the request until data is returned is advised.
    - |
      When following the steps for [Replacing the primary MySQL node](/admin/monitoring-managing-and-updating-your-instance/configuring-clustering/replacing-a-cluster-node#replacing-the-primary-mysql-node), step 14 (running `ghe-cluster-config-apply`) might fail with errors. If this occurs, re-running `ghe-cluster-config-apply` is expected to succeed.
    - |
      Running a config apply as part of the steps for [Replacing a node in an emergency](/admin/monitoring-managing-and-updating-your-instance/configuring-clustering/replacing-a-cluster-node#replacing-a-node-in-an-emergency) may fail with errors if the node being replaced is still reachable. If this occurs, shut down the node and repeat the steps.
    - |
      {% data reusables.release-notes.2024-06-possible-frontend-5-minute-outage-during-hotpatch-upgrade %}
    - |
      When restoring data originally backed up from an appliance running version 3.13 or greater, the Elasticsearch indices need to be reindexed before some of the data appears. This happens via a nightly scheduled job. It can also be forced by running `/usr/local/share/enterprise/ghe-es-search-repair`.
    - |
      An organization-level code scanning configuration page is displayed on instances that do not use GitHub Advanced Security or code scanning.
    - |
      When enabling automatic update checks for the first time in the Management Console, the status is not dynamically reflected until the "Updates" page is reloaded.
    - |
      When restoring from a backup snapshot, a large number of `mapper_parsing_exception` errors may be displayed.
    - |
      When initializing a new GHES cluster, nodes with the `consul-server` role should be added to the cluster before adding additional nodes. Adding all nodes simultaneously creates a race condition between Nomad server registration and Nomad client registration.
    - |
      Admins setting up cluster high availability (HA) may encounter a spokes error when running `ghe-cluster-repl-status` if a new organization and repositories are created before using the `ghe-cluster-repl-bootstrap` command. To avoid this issue, complete the cluster HA setup with `ghe-cluster-repl-bootstrap` before creating new organizations and repositories.
    - |
      In a cluster, the host running the restore requires access to the storage nodes via their private IPs.
    - |
      On an instance hosted on Azure, commenting on an issue via email did not add the comment to the issue.
    - |
      After a restore, existing outside collaborators cannot be added to repositories in a new organization. This issue can be resolved by running `/usr/local/share/enterprise/ghe-es-search-repair` on the appliance.
    - |
      After a geo-replica is promoted to be a primary by running `ghe-repl-promote`, the Actions workflow page of a repository does not show any suggested workflows.
    - |
      Customers operating at high scale or near capacity may experience unexpected performance degradation, such as slow response times, background job queue spikes, elevated CPU usage, and increased MySQL load. Consider upgrading to {% ifversion ghes = 3.16 %}3.16{% endif %}{% ifversion ghes = 3.17 %}3.17{% endif %} with caution.
```
