File: content/ngf/overview/resource-validation.md (+2 -2)
@@ -62,8 +62,8 @@ More information on CEL in Kubernetes can be found [here](https://kubernetes.io/
This step catches the following cases of invalid values:
- Valid values from the Gateway API perspective but not supported by NGINX Gateway Fabric yet. For example, a feature in an HTTPRoute routing rule. For the list of supported features see [Gateway API Compatibility]({{< relref "./gateway-api-compatibility.md" >}}) doc.
-- Valid values from the Gateway API perspective, but invalid for NGINX, because NGINX has stricter validation requirements for certain fields. These values will cause NGINX to fail to reload or operate erroneously.
-- Invalid values (both from the Gateway API and NGINX perspectives) that were not rejected because Step 1 was bypassed. Similar to the previous case, these values will cause NGINX to fail to reload or operate erroneously.
+- Valid values from the Gateway API perspective, but invalid for NGINX. NGINX has stricter validation requirements for certain fields. These values will cause NGINX to fail to reload or operate erroneously.
+- Invalid values (both from the Gateway API and NGINX perspectives) that were not rejected because Step 1 was bypassed. These values will cause NGINX to fail to reload or operate incorrectly.
- Malicious values that inject unrestricted NGINX config into the NGINX configuration (similar to an SQL injection attack).
Below is an example of how NGINX Gateway Fabric rejects an invalid resource. The validation error is reported via the status:
File: content/nginx/admin-guide/installing-nginx/installing-nginx-docker.md (+13 -13)
@@ -24,7 +24,7 @@ type:
<span id="nginx_plus_official_images"></span>
-## Using official NGINX Plus Docker images
+## Use official NGINX Plus Docker images
Since NGINX Plus [Release 31]({{< ref "nginx/releases.md#r31" >}}), you can get an NGINX Plus image from the official NGINX Plus Docker registry and upload it to your private registry.
@@ -66,7 +66,7 @@ The NGINX Plus registry contains images for the two most recent versions of NGIN
The image may contain a particular version of NGINX Plus alone or bundled with NGINX Agent, and can target a specific architecture.
-### Listing all tags
+### List all tags
For a complete tag list for NGINX Plus bundled with NGINX Agent images, use the command:
@@ -90,7 +90,7 @@ where:
-### Downloading the JSON Web Token or NGINX Plus certificate and key {#myf5-download}
+### Download the JSON Web Token or NGINX Plus certificate and key {#myf5-download}
Before you get a container image, you need the JSON Web Token file or the SSL certificate and private key files provided with your NGINX Plus subscription. These files grant access to the package repository from which the script downloads the NGINX Plus package:
@@ -109,7 +109,7 @@ Before you get a container image, you should provide the JSON Web Token file or
{{% /tabs %}}
-### Set up Docker for NGINX Plus Container Registry
+### Set up Docker for NGINX Plus container registry
Set up Docker to communicate with the NGINX Container Registry located at `private-registry.nginx.com`.
{{< note >}} Starting from [NGINX Plus Release 33]({{< ref "nginx/releases.md#r33" >}}), the JWT file is required for each NGINX Plus instance. For more information, see [About Subscription Licenses]({{< ref "/solutions/about-subscription-licenses.md">}}). {{< /note >}}
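As an illustrative sketch only (verify the exact login flow against your subscription documents), the NGINX docs describe JWT-based registry login where the token contents serve as the username and the password is the literal string `none`:

```shell
# Log in to the NGINX Plus container registry with the JWT downloaded from MyF5.
# Assumed file name: nginx-repo.jwt in the current directory.
docker login private-registry.nginx.com --username=$(cat nginx-repo.jwt) --password=none
```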
@@ -390,7 +390,7 @@ Any change made to the files in the local directories `/var/www and /var/nginx/c
<span id="manage_copy"></span>
-### Copying Content and Configuration Files from the Docker Host
+### Copy content and configuration files from the Docker host
Docker can copy the content and configuration files from a local directory on the Docker host during container creation. Once the container is created, maintain the files either by creating a new container whenever the files change or by modifying the files inside the container.
@@ -421,7 +421,7 @@ To make changes to the files in the container, use a helper container as describ
<span id="manage_container"></span>
-### Maintaining Content and Configuration Files in the Container
+### Maintain content and configuration files in the container
As SSH cannot be used to access the NGINX container, to edit the content or configuration files directly you need to create a helper container that has shell access. For the helper container to have access to the files, create a new image that has the proper Docker data volumes defined for the image:
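A minimal sketch of such an image, assuming the stock `nginx` base image and the default content and configuration directories (adjust the paths to match your setup):

```dockerfile
# Hypothetical helper-image Dockerfile: expose the content and configuration
# directories as data volumes so a shell-equipped helper container can mount them.
FROM nginx
VOLUME /usr/share/nginx/html
VOLUME /etc/nginx
```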
@@ -476,12 +476,12 @@ To exit the shell and terminate the container, run the `exit` command.
<span id="log"></span>
-## Managing Logging
+## Manage logging
You can use default logging or customize logging.
<span id="log_default"></span>
-### Using Default Logging
+### Use default logging
By default, the NGINX image is configured to send NGINX [access log](https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log) and [error log](https://nginx.org/en/docs/ngx_core_module.html#error_log) to the Docker log collector. This is done by linking them to `stdout` and `stderr`: all messages from both logs are then written to the file `/var/lib/docker/containers/container-ID/container-ID-json.log` on the Docker host. The container‑ID is the long‑form ID returned when you [create a container](#docker_oss_image). To display the long form ID, run the command:
@@ -507,7 +507,7 @@ To include only access log messages in the output, include only `stdout=1`. To l
<span id="log_custom"></span>
-### Using Customized Logging
+### Use customized logging
If you want to configure logging differently for certain configuration blocks (such as `server {}` and `location {}`), define a Docker volume for the directory in which to store the log files in the container, create a helper container to access the log files, and use any logging tools. To implement this, create a new image that contains the volume or volumes for the logging files.
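For example, a hypothetical image that exposes the NGINX log directory as a data volume might look like this (the path assumes the default log location `/var/log/nginx`):

```dockerfile
# Sketch: expose the log directory so a helper container can read the log files.
FROM nginx
VOLUME /var/log/nginx
```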
@@ -524,7 +524,7 @@ Then you can [create an image](#docker_plus_image) and use it to create an NGINX
<span id="control"></span>
-## Controlling NGINX
+## Control NGINX
Because there is no direct access to the command line of the NGINX container, NGINX commands cannot be sent to a container directly. Instead, [signals](https://nginx.org/en/docs/control.html) can be sent to a container with the Docker `kill` command.
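For instance, assuming a container named `mynginx` (the name is an assumption for illustration), the standard NGINX signals can be delivered like this:

```shell
# Reload the NGINX configuration without stopping the container.
docker kill -s HUP mynginx

# Shut down NGINX gracefully.
docker kill -s QUIT mynginx
```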
File: content/nginx/admin-guide/installing-nginx/installing-nginx-plus-google-cloud-platform.md (+2 -2)
@@ -12,7 +12,7 @@ type:
[NGINX Plus](https://www.f5.com/products/nginx/nginx-plus), the high‑performance application delivery platform, load balancer, and web server, is available on the Google Cloud Platform as a virtual machine (VM) image. The VM image contains the latest version of NGINX Plus, optimized for use with the Google Cloud Platform Compute Engine.
-## Installing the NGINX Plus VM
+## Install the NGINX Plus VM
To quickly set up an NGINX Plus environment on the Google Cloud Platform, perform the following steps.
@@ -52,7 +52,7 @@ If you encounter any problems with NGINX Plus configuration, documentation is a
Customers who purchase an NGINX Plus VM image on the Google Cloud Platform are eligible for the Google Cloud Platform VM support provided by the NGINX, Inc. engineering team. To activate support, submit the [Google Cloud Platform Support Activation](https://www.nginx.com/gcp-support-activation/) form.
-### Accessing the Open Source Licenses for NGINX Plus
+### Access the Open Source Licenses for NGINX Plus
NGINX Plus includes open source software written by NGINX, Inc. and other contributors. The text of the open source licenses is provided in Appendix B of the _NGINX Plus Reference Guide_. To access the guide included with the NGINX Plus VM instance, run this command:
File: content/nginx/admin-guide/load-balancer/using-proxy-protocol.md (+3 -3)
@@ -40,7 +40,7 @@ Using this data, NGINX can get the originating IP address of the client in sever
<span id="listen"></span>
-## Configuring NGINX to Accept the PROXY Protocol
+## Configure NGINX to Accept the PROXY Protocol
To configure NGINX to accept PROXY protocol headers, add the `proxy_protocol` parameter to the `listen` directive in a `server` block in the [`http {}`](https://nginx.org/en/docs/http/ngx_http_core_module.html#listen) or [`stream {}`](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#listen) block.
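A minimal sketch of the directive placement described above (port numbers are placeholders):

```nginx
http {
    server {
        listen 80 proxy_protocol;      # accept PROXY protocol on HTTP connections
        # ...
    }
}

stream {
    server {
        listen 12345 proxy_protocol;   # accept PROXY protocol on TCP connections
        # ...
    }
}
```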
@@ -66,7 +66,7 @@ stream {
Now you can use the [`$proxy_protocol_addr`](https://nginx.org/en/docs/http/ngx_http_core_module.html#var_proxy_protocol_addr) and [`$proxy_protocol_port`](https://nginx.org/en/docs/http/ngx_http_core_module.html#var_proxy_protocol_port) variables to get the client IP address and port. You can also configure the [HTTP](https://nginx.org/en/docs/http/ngx_http_realip_module.html) and [`stream`](https://nginx.org/en/docs/stream/ngx_stream_realip_module.html) RealIP modules to replace the IP address of the load balancer in the [`$remote_addr`](https://nginx.org/en/docs/http/ngx_http_core_module.html#var_remote_addr) and [`$remote_port`](https://nginx.org/en/docs/http/ngx_http_core_module.html#var_remote_port) variables with the IP address and port of the client.
<span id="realip"></span>
-## Changing the Load Balancer's IP Address To the Client IP Address
+## Change the Load Balancer's IP Address To the Client IP Address
You can replace the address of the load balancer or TCP proxy with the client IP address received from the PROXY protocol. This can be done with the [HTTP](https://nginx.org/en/docs/http/ngx_http_realip_module.html) and [`stream`](https://nginx.org/en/docs/stream/ngx_stream_realip_module.html) RealIP modules. With these modules, the [`$remote_addr`](https://nginx.org/en/docs/http/ngx_http_core_module.html#var_remote_addr) and [`$remote_port`](https://nginx.org/en/docs/http/ngx_http_core_module.html#var_remote_port) variables retain the real IP address and port of the client, while the [`$realip_remote_addr`](https://nginx.org/en/docs/http/ngx_http_realip_module.html#var_realip_remote_addr) and [`$realip_remote_port`](https://nginx.org/en/docs/http/ngx_http_realip_module.html#var_realip_remote_port) variables retain the IP address and port of the load balancer.
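As a sketch of the RealIP setup described above, with the load balancer's address range assumed to be `192.168.1.0/24`:

```nginx
server {
    listen 80 proxy_protocol;

    # Trust PROXY protocol headers only from the load balancer's addresses.
    set_real_ip_from 192.168.1.0/24;

    # Take the client address from the PROXY protocol header, so
    # $remote_addr and $remote_port hold the real client address and port.
    real_ip_header proxy_protocol;
}
```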
@@ -105,7 +105,7 @@ To change the IP address from the load balancer's IP address to the client's IP
```
<span id="log"></span>
-## Logging the Original IP Address
+## Log the Original IP Address
When you know the original IP address of the client, you can configure the correct logging:
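For example, a custom log format that records the PROXY-protocol client address (the format name `proxied` is arbitrary):

```nginx
http {
    # $proxy_protocol_addr holds the client IP taken from the PROXY protocol header.
    log_format proxied '$proxy_protocol_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent';

    server {
        listen 80 proxy_protocol;
        access_log /var/log/nginx/access.log proxied;
    }
}
```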
File: content/nginx/admin-guide/web-server/serving-static-content.md (+9 -9)
@@ -2,7 +2,7 @@
description: Configure NGINX and F5 NGINX Plus to serve static content, with type-specific
root directories, checks for file existence, and performance optimizations.
docs: DOCS-442
-title: Serving Static Content
+title: Serve Static Content
toc: true
weight: 200
type:
@@ -108,11 +108,11 @@ location @backend {
For more information, watch the [Content Caching](https://www.nginx.com/resources/webinars/content-caching-nginx-plus/) webinar on‑demand to learn how to dramatically improve the performance of a website, and get a deep‑dive into NGINX’s caching capabilities.
<span id="optimize"></span>
-## Optimizing Performance for Serving Content
+## Optimize Performance for Serving Content
Loading speed is a crucial factor in serving any content. Minor optimizations to your NGINX configuration can boost throughput and help you reach optimal performance.
-### Enabling `sendfile`
+### Enable `sendfile`
By default, NGINX handles file transmission itself and copies the file into the buffer before sending it. Enabling the [sendfile](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile) directive eliminates the step of copying the data into the buffer and enables direct copying data from one file descriptor to another. Alternatively, to prevent one fast connection from entirely occupying the worker process, you can use the [sendfile_max_chunk](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile_max_chunk) directive to limit the amount of data transferred in a single `sendfile()` call (in this example, to `1` MB):
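A sketch of the configuration this paragraph describes (the `/mp3` location mirrors the example used elsewhere in this guide):

```nginx
location /mp3 {
    sendfile           on;   # copy data kernel-side, descriptor to descriptor
    sendfile_max_chunk 1m;   # cap a single sendfile() call at 1 MB
}
```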
@@ -124,7 +124,7 @@ location /mp3 {
}
```
-### Enabling `tcp_nopush`
+### Enable `tcp_nopush`
Use the [tcp_nopush](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nopush) directive together with the [sendfile](https://nginx.org/en/docs/http/ngx_http_core_module.html#sendfile) `on;` directive. This enables NGINX to send HTTP response headers in one packet right after the chunk of data has been obtained by `sendfile()`.
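Sketched as configuration:

```nginx
location /mp3 {
    sendfile   on;
    tcp_nopush on;   # send response headers in one packet with the first data chunk
}
```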
@@ -136,7 +136,7 @@ location /mp3 {
}
```
-### Enabling `tcp_nodelay`
+### Enable `tcp_nodelay`
The [tcp_nodelay](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay) directive allows overriding [Nagle’s algorithm](https://en.wikipedia.org/wiki/Nagle's_algorithm), originally designed to solve problems with small packets in slow networks. The algorithm consolidates a number of small packets into a larger one and sends the packet with a `200` ms delay. Nowadays, when serving large static files, the data can be sent immediately regardless of the packet size. The delay also affects online applications (ssh, online games, online trading, and so on). By default, the [tcp_nodelay](https://nginx.org/en/docs/http/ngx_http_core_module.html#tcp_nodelay) directive is set to `on`, which means that Nagle’s algorithm is disabled. Use this directive only for keepalive connections:
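A sketch, keeping the directive to keepalive connections as the paragraph advises (the timeout value is an assumption):

```nginx
location /mp3 {
    tcp_nodelay       on;   # disable Nagle's algorithm (this is the default)
    keepalive_timeout 65;   # use tcp_nodelay with keepalive connections
}
```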
@@ -150,11 +150,11 @@ location /mp3 {
```
-### Optimizing the Backlog Queue
+### Optimize the Backlog Queue
An important factor is how fast NGINX can handle incoming connections. When a connection is established, it is put into the “listen” queue of the listen socket. Under normal load, the queue is small or empty. But under high load, the queue can grow dramatically, resulting in uneven performance, dropped connections, and increased latency.
-#### Displaying the Listen Queue
+#### Display the Listen Queue
To display the current listen queue, run this command:
@@ -182,7 +182,7 @@ Listen Local Address
0/0/128 *.8080
```
-#### Tuning the Operating System
+#### Tune the Operating System
Increase the value of the `net.core.somaxconn` kernel parameter from its default value (`128`) to a value high enough for a large burst of traffic. In this example, it's increased to `4096`.
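On Linux this is typically done with `sysctl` (root privileges assumed):

```shell
# Apply immediately (does not survive a reboot):
sudo sysctl -w net.core.somaxconn=4096

# Persist across reboots:
echo "net.core.somaxconn = 4096" | sudo tee -a /etc/sysctl.conf
```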
@@ -205,7 +205,7 @@ Increase the value of the `net.core.somaxconn` kernel parameter from its default
net.core.somaxconn = 4096
```
-#### Tuning NGINX
+#### Tune NGINX
If you set the `somaxconn` kernel parameter to a value greater than `512`, change the `backlog` parameter of the NGINX [listen](https://nginx.org/en/docs/http/ngx_http_core_module.html#listen) directive to match:
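For example:

```nginx
server {
    # Match the kernel's somaxconn setting so NGINX can use the larger queue.
    listen 80 backlog=4096;
}
```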
File: content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md (+6 -6)
@@ -33,7 +33,7 @@ The `PREFIX` argument specifies the repo name in your private container registry
<span id="amazon-eks"></span>
-## Creating an Amazon EKS Cluster
+## Create an Amazon EKS Cluster
You can create an Amazon EKS cluster with:
- the AWS Management Console
- the AWS CLI
@@ -46,7 +46,7 @@ This guide covers the `eksctl` command as it is the simplest option.
2. Create an Amazon EKS cluster by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Select the <span style="white-space: nowrap; font-weight:bold;">Managed nodes – Linux</span> option for each step. Note that the <span style="white-space: nowrap;">`eksctl create cluster`</span> command in the first step can take ten minutes or more.
<span id="amazon-ecr"></span>
-## Pushing the NGINX Plus Ingress Controller Image to AWS ECR
+## Push the NGINX Plus Ingress Controller Image to AWS ECR
This step is only required if you do not plan to use the prebuilt NGINX Open Source image.
@@ -81,7 +81,7 @@ This step is only required if you do not plan to use the prebuilt NGINX Open Sou
```
<span id="ingress-controller"></span>
-## Installing the NGINX Plus Ingress Controller
+## Install the NGINX Plus Ingress Controller
Use [our documentation](https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/) to install the NGINX Plus Ingress Controller in your Amazon EKS cluster.
@@ -97,15 +97,15 @@ You need a Kubernetes `LoadBalancer` service to route traffic to the NGINX Ingre
We also recommend enabling the PROXY Protocol for both the NGINX Plus Ingress Controller and your NLB target groups. This is used to forward client connection information. If you choose not to enable the PROXY protocol, see the [Appendix](#appendix).
-### Configuring a `LoadBalancer` Service to Use NLB
+### Configure a `LoadBalancer` Service to Use NLB
Apply the manifest `deployments/service/loadbalancer-aws-elb.yaml` to create a `LoadBalancer` of type NLB:
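Applying that manifest is a single command, run from the repository root where the `deployments/` directory lives:

```shell
kubectl apply -f deployments/service/loadbalancer-aws-elb.yaml
```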