
Commit 9d9a694

chiedo, crowdin-bot, and JasonEtco authored
Crowdin translations (translation-batch-1606244448) (#16615)
* New Crowdin translations by Github Action
* Translation reverts
* Keep pt-BR as main
* Revert files to english

Co-authored-by: Crowdin Bot <support+bot@crowdin.com>
Co-authored-by: Chiedo <chiedo@users.noreply.github.com>
Co-authored-by: Jason Etcovitch <jasonetco@github.com>
1 parent 6c0942d commit 9d9a694

556 files changed

Lines changed: 7599 additions & 3941 deletions



translations/de-DE/content/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups.md

Lines changed: 25 additions & 5 deletions
@@ -33,17 +33,27 @@ All organizations have a single default self-hosted runner group. Organizations
 
 Self-hosted runners are automatically assigned to the default group when created, and can only be members of one group at a time. You can move a runner from the default group to any group you create.
 
-When creating a group, you must choose a policy that defines which repositories have access to the runner group. You can configure a runner group to be accessible to a specific list of repositories, all private repositories, or all repositories in the organization.
+When creating a group, you must choose a policy that defines which repositories have access to the runner group.
 
 {% data reusables.organizations.navigate-to-org %}
 {% data reusables.organizations.org_settings %}
 {% data reusables.organizations.settings-sidebar-actions %}
 1. In the **Self-hosted runners** section, click **Add new**, and then **New group**.
 
 ![Add runner group](/assets/images/help/settings/actions-org-add-runner-group.png)
-1. Enter a name for your runner group, and select an access policy from the **Repository access** dropdown list.
+1. Enter a name for your runner group, and assign a policy for repository access.
 
-![Add runner group options](/assets/images/help/settings/actions-org-add-runner-group-options.png)
+{% if currentVersion == "free-pro-team@latest" or currentVersion ver_gt "enterprise-server@2.22" %} You can configure a runner group to be accessible to a specific list of repositories, or to all repositories in the organization. By default, public repositories can't access runners in a runner group, but you can use the **Allow public repositories** option to override this.{% else if currentVersion == "enterprise-server@2.22"%}You can configure a runner group to be accessible to a specific list of repositories, all private repositories, or all repositories in the organization.{% endif %}
+
+{% warning %}
+
+**Warnung**
+{% indented_data_reference site.data.reusables.github-actions.self-hosted-runner-security spaces=3 %}
+Weitere Informationen findest Du unter „[Informationen zu selbst-gehosteten Runnern](/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories)“.
+
+{% endwarning %}
+
+![Add runner group options](/assets/images/help/settings/actions-org-add-runner-group-options.png)
 1. Click **Save group** to create the group and apply the policy.
 
 ### Creating a self-hosted runner group for an enterprise
@@ -52,7 +62,7 @@ Enterprises can add their self-hosted runners to groups for access management. E
 
 Self-hosted runners are automatically assigned to the default group when created, and can only be members of one group at a time. You can assign the runner to a specific group during the registration process, or you can later move the runner from the default group to a custom group.
 
-When creating a group, you must choose a policy that grants access to all organizations in the enterprise or choose specific organizations.
+When creating a group, you must choose a policy that defines which organizations have access to the runner group.
 
 {% data reusables.enterprise-accounts.access-enterprise %}
 {% data reusables.enterprise-accounts.policies-tab %}
@@ -61,7 +71,17 @@ When creating a group, you must choose a policy that grants access to all organi
 1. Click **Add new**, and then **New group**.
 
 ![Add runner group](/assets/images/help/settings/actions-enterprise-account-add-runner-group.png)
-1. Enter a name for your runner group, and select an access policy from the **Organization access** dropdown list.
+1. Enter a name for your runner group, and assign a policy for organization access.
+
+{% if currentVersion == "free-pro-team@latest" or currentVersion ver_gt "enterprise-server@2.22" %} You can configure a runner group to be accessible to a specific list of organizations, or all organizations in the enterprise. By default, public repositories can't access runners in a runner group, but you can use the **Allow public repositories** option to override this.{% else if currentVersion == "enterprise-server@2.22"%}You can configure a runner group to be accessible to all organizations in the enterprise or choose specific organizations.{% endif %}
+
+{% warning %}
+
+**Warnung**
+{% indented_data_reference site.data.reusables.github-actions.self-hosted-runner-security spaces=3 %}
+Weitere Informationen findest Du unter „[Informationen zu selbst-gehosteten Runnern](/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories)“.
+
+{% endwarning %}
 
 ![Add runner group options](/assets/images/help/settings/actions-enterprise-account-add-runner-group-options.png)
 1. Click **Save group** to create the group and apply the policy.
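
Both procedures above assign runners to a group through the web UI. A runner can also be placed into an existing group when it is registered; the following is a minimal sketch only, assuming the runner package supports the `--runnergroup` flag, and with `octo-org`, `ubuntu-runners`, `my-runner`, and `TOKEN` as hypothetical placeholder values:

```shell
# Register a self-hosted runner directly into an existing runner group.
# octo-org, ubuntu-runners, my-runner, and TOKEN are placeholders.
./config.sh --url https://github.com/octo-org \
            --token TOKEN \
            --name my-runner \
            --runnergroup "ubuntu-runners" \
            --unattended
./run.sh
```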

translations/de-DE/content/actions/reference/events-that-trigger-workflows.md

Lines changed: 8 additions & 0 deletions
@@ -572,6 +572,8 @@ on:
 
 {% data reusables.developer-site.pull_request_forked_repos_link %}
 
+{% if currentVersion == "free-pro-team@latest" or currentVersion ver_gt "enterprise-server@2.22" %}
+
 #### `pull_request_target`
 
 This event is similar to `pull_request`, except that it runs in the context of the base repository of the pull request, rather than in the merge commit. This means that you can more safely make your secrets available to the workflows triggered by the pull request, because only workflows defined in the commit on the base repository are run. For example, this event allows you to create workflows that label and comment on pull requests, based on the contents of the event payload.
@@ -589,6 +591,8 @@ on: pull_request_target
 types: [assigned, opened, synchronize, reopened]
 ```
 
+{% endif %}
+
 #### `Push`
 
 {% note %}
@@ -689,6 +693,8 @@ on:
 types: [started]
 ```
 
+{% if currentVersion == "free-pro-team@latest" or currentVersion ver_gt "enterprise-server@2.22" %}
+
 #### `workflow_run`
 
 {% data reusables.webhooks.workflow_run_desc %}
@@ -711,6 +717,8 @@ on:
 - requested
 ```
 
+{% endif %}
+
 ### Neue Workflows mit einem persönlichen Zugangs-Token auslösen
 
 {% data reusables.github-actions.actions-do-not-trigger-workflows %} weitere Informationen findest Du unter „[Authentifizierung mit dem GITHUB_TOKEN](/actions/configuring-and-managing-workflows/authenticating-with-the-github_token)“.
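
The hunks above wrap the `pull_request_target` and `workflow_run` sections in a version check. For reference, a minimal `workflow_run` trigger might look like the following sketch; the workflow name "CI" and the job contents are placeholder assumptions:

```yaml
# Runs after a workflow named "CI" finishes (the name is a placeholder).
name: Report CI result
on:
  workflow_run:
    workflows: ["CI"]
    types:
      - completed
jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - run: echo "CI concluded with ${{ github.event.workflow_run.conclusion }}"
```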

translations/de-DE/content/actions/reference/workflow-syntax-for-github-actions.md

Lines changed: 32 additions & 1 deletion
@@ -876,9 +876,40 @@ strategy:
 
 {% endnote %}
 
+##### Using environment variables in a matrix
+
+You can add custom environment variables for each test combination by using `include` with `env`. You can then refer to the custom environment variables in a later step.
+
+In this example, the matrix entries for `node-version` are each configured to use different values for the `site` and `datacenter` environment variables. The `Echo site details` step then uses {% raw %}`env: ${{ matrix.env }}`{% endraw %} to refer to the custom variables:
+
+{% raw %}
+```yaml
+name: Node.js CI
+on: [push]
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        include:
+          - node-version: 10.x
+            site: "prod"
+            datacenter: "site-a"
+          - node-version: 12.x
+            site: "dev"
+            datacenter: "site-b"
+    steps:
+      - name: Echo site details
+        env:
+          SITE: ${{ matrix.site }}
+          DATACENTER: ${{ matrix.datacenter }}
+        run: echo $SITE $DATACENTER
+```
+{% endraw %}
+
 ### **`jobs.<job_id>.strategy.fail-fast`**
 
-Wenn diese Option auf `true` gesetzt ist, bricht {% data variables.product.prodname_dotcom %} alle laufenden Aufträge ab, sobald ein `matrix`-Auftrag fehlschlägt. Standard: `true`
+Wenn diese Option auf `true` gesetzt ist, bricht {% data variables.product.prodname_dotcom %} alle laufenden Jobs ab, sobald ein Job der `matrix` fehlschlägt. Standard: `true`
 
 ### **`jobs.<job_id>.strategy.max-parallel`**
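
The `fail-fast` text above translates to: when set to `true`, {% data variables.product.prodname_dotcom %} cancels all in-progress jobs as soon as any `matrix` job fails, and the default is `true`. A minimal sketch of opting out of that behavior follows; the `node-version` values are illustrative only:

```yaml
strategy:
  # Let the remaining matrix combinations keep running even if one fails.
  fail-fast: false
  matrix:
    node-version: [10.x, 12.x]
```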

translations/de-DE/content/admin/configuration/command-line-utilities.md

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ Dadurch können Sie den UUID Ihres Knotens in `cluster.conf` ermitteln.
 Allows you to exempt a list of users from API rate limits. For more information, see "[Rate Limiting](/enterprise/{{ page.version }}/v3/#rate-limiting)."
 
 ``` shell
-$ ghe-config app.github.rate_limiting_exempt_users "<em>hubot</em> <em>github-actions</em>"
+$ ghe-config app.github.rate-limiting-exempt-users "<em>hubot</em> <em>github-actions</em>"
 # Exempts the users hubot and github-actions from rate limits
 ```
 {% endif %}
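
The change above switches the setting key from underscores (`rate_limiting_exempt_users`) to hyphens (`rate-limiting-exempt-users`). A usage sketch, assuming administrative shell access and assuming a configuration run is still needed for the change to take effect:

```shell
# Exempt the hubot and github-actions users from API rate limits,
# then apply the configuration (assumption: a config apply is required).
ghe-config app.github.rate-limiting-exempt-users "hubot github-actions"
ghe-config-apply
```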

translations/de-DE/content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md

Lines changed: 68 additions & 50 deletions
@@ -57,32 +57,36 @@ Before you define a secondary datacenter for your passive nodes, ensure that you
 mysql-master = <em>HOSTNAME</em>
 redis-master = <em>HOSTNAME</em>
 <strong>primary-datacenter = default</strong>
-```
+```
 
 - Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.
 
 4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.
 
-datacenter = default
+```
+datacenter = default
+```
 
 When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}
 
-```shell
-[cluster "<em>HOSTNAME</em>"]
-<strong>datacenter = default</strong>
-hostname = <em>HOSTNAME</em>
-ipv4 = <em>IP ADDRESS</em>
+```shell
+[cluster "<em>HOSTNAME</em>"]
+<strong>datacenter = default</strong>
+hostname = <em>HOSTNAME</em>
+ipv4 = <em>IP ADDRESS</em>
+...
 ...
-...
-```
+```
 
-{% note %}
+{% note %}
 
-**Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
+**Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
 
-consul-datacenter = primary
+```
+consul-datacenter = primary
+```
 
-{% endnote %}
+{% endnote %}
 
 {% data reusables.enterprise_clustering.apply-configuration %}

@@ -103,31 +107,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
 
 1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."
 
-{% note %}
+{% note %}
 
-**Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
+**Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
 
-{% endnote %}
+{% endnote %}
 
 {% data reusables.enterprise_clustering.ssh-to-a-node %}
 
 3. Back up your existing cluster configuration.
-
-cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+
+```
+cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+```
 
 4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).
-
-grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+
+```
+grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+```
 
 5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.
-
-git config -f ~/cluster-passive.conf --remove-section cluster
+
+```
+git config -f ~/cluster-passive.conf --remove-section cluster
+```
 
 6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.
 
 ```shell
-sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
-```
+sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
+```
 
 7. Decide on a pattern for the passive nodes' hostnames.

@@ -140,7 +150,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
 8. Open the temporary cluster configuration file from step 3 in a text editor. For example, you can use Vim.
 
 ```shell
-sudo vim ~/cluster-passive.conf
+sudo vim ~/cluster-passive.conf
 ```
 
 9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
@@ -150,37 +160,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
 - Add a new key-value pair, `replica = enabled`.
 
 ```shell
-[cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
+[cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
+...
+hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
+ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
+<strong>replica = enabled</strong>
+...
 ...
-hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
-ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
-<strong>replica = enabled</strong>
-...
-...
 ```
 
 10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.
 
 ```shell
-cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
-```
+cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
+```
 
 11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passives node that you provisioned to match your existing MySQL and Redis primaries.
 
 ```shell
-git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
-git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
-```
+git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
+git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
+```
 
 12. Enable MySQL to fail over automatically when you fail over to the passive replica nodes.
 
 ```shell
-git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
+git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
 ```
 
-{% warning %}
+{% warning %}
 
-**Warning**: Review your cluster configuration file before proceeding.
+**Warning**: Review your cluster configuration file before proceeding.
 
 - In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
 - In each section for an active node named `[cluster "<em>ACTIVE NODE HOSTNAME</em>"]`, double-check the following key-value pairs.
@@ -194,9 +204,9 @@ For an example configuration, see "[Example configuration](#example-configuratio
 - `replica` should be configured as `enabled`.
 - Take the opportunity to remove sections for offline nodes that are no longer in use.
 
-To review an example configuration, see "[Example configuration](#example-configuration)."
+To review an example configuration, see "[Example configuration](#example-configuration)."
 
-{% endwarning %}
+{% endwarning %}
 
 13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}

@@ -207,7 +217,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
 14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.
 
 ```shell
-Finished cluster initialization
+Finished cluster initialization
 ```
 
 {% data reusables.enterprise_clustering.apply-configuration %}
@@ -293,20 +303,28 @@ Initial replication between the active and passive nodes in your cluster takes t
 You can monitor the progress on any node in the cluster, using command-line tools available via the {% data variables.product.prodname_ghe_server %} administrative shell. For more information about the administrative shell, see "[Accessing the administrative shell (SSH)](/enterprise/admin/configuration/accessing-the-administrative-shell-ssh)."
 
 - Monitor replication of databases:
-
-/usr/local/share/enterprise/ghe-cluster-status-mysql
+
+```
+/usr/local/share/enterprise/ghe-cluster-status-mysql
+```
 
 - Monitor replication of repository and Gist data:
-
-ghe-spokes status
+
+```
+ghe-spokes status
+```
 
 - Monitor replication of attachment and LFS data:
-
-ghe-storage replication-status
+
+```
+ghe-storage replication-status
+```
 
 - Monitor replication of Pages data:
-
-ghe-dpages replication-status
+
+```
+ghe-dpages replication-status
+```
 
 You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
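
The four monitoring commands in the last hunk can also be run back to back; a small convenience sketch, assuming you are in the administrative shell and each utility is on the `PATH`:

```shell
# Print each replication status report in sequence.
for check in \
  "/usr/local/share/enterprise/ghe-cluster-status-mysql" \
  "ghe-spokes status" \
  "ghe-storage replication-status" \
  "ghe-dpages replication-status"; do
  echo "== ${check} =="
  ${check}   # word splitting is intentional so subcommands are passed through
done
```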

translations/de-DE/content/admin/enterprise-management/increasing-storage-capacity.md

Lines changed: 2 additions & 0 deletions
@@ -20,6 +20,8 @@ Wenn sich mehr Benutzer {% data variables.product.product_location %} anschließ
 
 {% endnote %}
 
+#### Minimum requirements
+
 {% data reusables.enterprise_installation.hardware-rec-table %}
 
 ### Größe der Datenpartition erhöhen
