
Nginx Configuration Files Explained

I. Installing Nginx

Before installing Nginx, make sure that gcc, openssl-devel, pcre-devel, and zlib-devel software libraries are installed on your system.

In particular, the --with-http_stub_status_module option enables Nginx's NginxStatus feature, which monitors the operational status of Nginx.

II. Nginx’s Configuration File Structure

Nginx’s configuration file, nginx.conf, is located in the conf directory of its installation directory.

nginx.conf consists of several blocks. The outermost block is main; main contains events and http; http contains upstream and multiple server blocks; each server block in turn contains multiple location blocks.

The four main sections are: main (global settings), server (host settings), upstream (load-balancing server settings), and location (URL-matching, location-specific settings).

1. The directives set in the main block will affect all other settings.

2. The directives in the server block are mainly used to specify the host and port.

3. The upstream directive is mainly used for load balancing, setting up a series of back-end servers.

4. The location block is used to match web page locations.

The relationship between these four: server inherits main, location inherits server, and upstream neither inherits nor is inherited by the other settings.

Each of the four sections contains a number of directives, including Nginx’s main module directives, event module directives, HTTP core module directives, and other HTTP module directives that can be used in each section, such as the HttpSSL module, HttpGzipStatic module, and HttpAddition modules, etc.

III. Global Configuration of Nginx

The events event directive sets the operating mode of Nginx and the upper limit on the number of connections:

use is an event-module directive that specifies the operating mode of Nginx. The operating modes supported by Nginx are select, poll, kqueue, epoll, rtsig, and /dev/poll.

select and poll are the standard operating modes, while kqueue and epoll are the high-efficiency ones; the difference is that epoll is used on Linux while kqueue is used on BSD systems. For Linux, epoll is the preferred operating mode. worker_connections is also an event-module directive that defines the maximum number of connections per worker process; the default is 1024.
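A minimal events block reflecting the directives above might look like this (the epoll choice assumes a Linux host):

```nginx
events {
    use epoll;                 # preferred event model on Linux; kqueue on BSD
    worker_connections 1024;   # maximum connections per worker process (default)
}
```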

The maximum number of client connections is determined by worker_processes and worker_connections together, i.e., max_clients = worker_processes * worker_connections.

When acting as a reverse proxy, this becomes: max_clients = worker_processes * worker_connections / 4.

The maximum number of connections per process is limited by the maximum number of open files allowed for a Linux process, so a large worker_connections setting will not take effect until the operating system command "ulimit -n 65536" is executed.

IV. HttpGzip Module Configuration. This module supports online, real-time compression of the output stream.

The /opt/nginx/sbin/nginx -V command shows the compile-time options used when Nginx was installed; the output confirms that the HttpGzip module has been installed.
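A typical HttpGzip configuration, as a sketch (the thresholds and MIME types here are illustrative, not taken from the original article):

```nginx
http {
    gzip              on;     # enable on-the-fly compression
    gzip_min_length   1k;     # do not compress very small responses
    gzip_buffers      4 16k;  # compression buffers
    gzip_http_version 1.1;    # require HTTP/1.1 for compressed responses
    gzip_comp_level   2;      # compression level (1 = fastest, 9 = best)
    gzip_types        text/plain text/css application/javascript application/xml;
    gzip_vary         on;     # add a "Vary: Accept-Encoding" response header
}
```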

V. Load Balancing Configuration

The following is a list of servers to set up load balancing:

upstream is provided by Nginx's HttpUpstream module, which balances load from client requests across the back-end servers using a simple scheduling algorithm.

In the above setup, a load-balancer name is specified via the upstream directive. This name can be chosen arbitrarily and referenced later wherever needed. Nginx's load-balancing module currently supports four scheduling algorithms.
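An upstream block illustrating this (the name mysvr and the back-end addresses are hypothetical):

```nginx
upstream mysvr {
    server 192.168.8.1:3128 weight=5;  # back-end server with a higher weight
    server 192.168.8.2:80   weight=1;
    server 192.168.8.3:80   weight=6;
}
```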

VI. server virtual host configuration

The following describes the configuration of the virtual host.

It is recommended that you write the configuration of the virtual host into another file and then include it through the include directive, which makes it easier to maintain and manage.

The server flag defines the start of the virtual host, listen is used to specify the service port of the virtual host, server_name is used to specify the IP address or domain name, and multiple domain names are separated by spaces. index is used to set the default home page address for accessing, and the root directive is used to specify the root directory of the web page of the virtual host, which can be a relative path or absolute path.

charset sets the default encoding format for web pages. access_log specifies the path where this virtual host's access logs are stored, and the final main parameter specifies the log format used for the access logs.
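Putting the directives above together, a virtual host might be sketched as follows (domain names and paths are placeholders):

```nginx
server {
    listen      80;                            # service port of this virtual host
    server_name www.example.com example.com;   # domain names, separated by spaces
    index       index.html index.htm;          # default home page
    root        /data/www/example;             # web root (relative or absolute path)
    charset     utf-8;                         # default page encoding
    access_log  logs/example.access.log main;  # log path and the "main" log format
}
```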

VII. location URL Matching Configuration

URL matching is the most flexible part of Nginx configuration. location supports both regular-expression and conditional matching, so the location directive can be used to filter dynamic and static pages. The location URL-matching configuration can also hand dynamic PHP requests to a reverse proxy or implement load balancing.

The following settings use the location directive to analyze page URLs. All static files with extensions ending in .gif, .jpg, .jpeg, .png, .bmp, and .swf are served by Nginx directly, and expires specifies the expiration time of static files, 30 days in this case.
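The setting described above can be sketched as (the root path is an assumption):

```nginx
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
    root    /data/www/static;   # hypothetical directory holding the static files
    expires 30d;                # static files expire after 30 days
}
```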

VIII. StubStatus Module Configuration

The StubStatus module reports Nginx's working state since its last startup. It is not a core module, so it must be explicitly enabled when compiling and installing Nginx in order to use this feature.

Setting stub_status to "on" enables the StubStatus statistics. access_log specifies the access log file for the StubStatus module, and auth_basic enables Nginx's basic authentication mechanism.

auth_basic_user_file is used to specify the password file for authentication. Since Nginx’s auth_basic authentication uses an Apache-compatible password file, you need to generate the password file using Apache’s htpasswd command.
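A StubStatus location combining these directives (the log path and password-file location are assumptions):

```nginx
location /NginxStatus {
    stub_status          on;                    # enable status statistics
    access_log           logs/nginxstatus.log;  # dedicated access log
    auth_basic           "NginxStatus";         # authentication realm
    auth_basic_user_file conf/htpasswd;         # Apache-style password file
}
```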

Then enter the password twice and confirm that the user was added successfully.

To see the status of Nginx, visit http://ip/NginxStatus and enter the username and password you created.

Active connections indicates the number of currently active connections. The three numbers on the third line indicate that Nginx has so far accepted 34,561 connections in total, created the same number of successful handshakes, and processed a total of 354,399 requests.

On the last line, Reading indicates the number of client Header messages Nginx is currently reading, Writing indicates the number of Header messages Nginx is returning to clients, and Waiting indicates the number of keep-alive connections where Nginx has finished processing and is waiting for the next request.

This last setting sets the error page for the virtual host, which can be customized with the error_page directive. By default, Nginx looks for the specified return page in the html directory of the home directory.
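A sketch of such error-page settings (the file names are assumptions):

```nginx
error_page 404             /404.html;   # custom not-found page
error_page 500 502 503 504 /50x.html;   # shared page for server errors
location = /50x.html {
    root html;                          # looked up under the html directory
}
```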

Note specifically that the return page for these error messages must be larger than 512 bytes, otherwise it will be replaced by IE's own default error page.

Nginx Configuration Details Not to Be Missed: One Article to Understand Nginx

Nginx is a high-performance HTTP and reverse-proxy server, characterized by a small memory footprint and strong concurrency; in fact, Nginx's concurrency stands out among web servers of the same type. Nginx was developed specifically for performance: performance was the most important consideration, and the implementation is strongly focused on efficiency and the ability to withstand high loads, with reports suggesting it can support up to 50,000 concurrent connections.

With a forward proxy, clients need to configure the proxy server address in their browser.

For example, to access Google from mainland China, we need a proxy server; we reach Google through the proxy server, and this process is a forward proxy.

With a reverse proxy, the proxy is imperceptible to the client, because the client needs no configuration at all. We simply send the request to the reverse proxy server, which selects a target server, obtains the data, and returns it to the client. At this point, the reverse proxy server and the target server appear externally as a single server: the proxy server's address is exposed, while the real server's IP address is hidden.

When a single server can no longer cope, we increase the number of servers and distribute requests among them. Requests that used to be concentrated on one server are spread across multiple servers, distributing the load among them; this is what we call load balancing.

To speed up website parsing, dynamic and static pages can be served by different servers, which accelerates parsing and reduces the pressure on the original single server.

Go to the following directory and use the command

Configuration file location: /usr/local/nginx/conf/nginx.conf

It consists of a global block, an events block, and an http block.

The section from the beginning of the configuration file up to the events block is the global block. It sets configuration directives affecting the overall operation of the Nginx server, mainly including the user (and group) that runs the Nginx server, the number of worker processes allowed, the path where the process PID is stored, the path and type of log storage, the inclusion of other configuration files, and so on.

The directives in the events block mainly affect the network connections between the Nginx server and its users. Common settings include whether to enable serialization of network connections across multiple worker processes, whether to allow receiving multiple network connections simultaneously, which event-driven model to use for handling connection requests, and the maximum number of connections each worker process can support. The following example sets the maximum number of connections per worker process to 1024; this setting has a significant impact on Nginx's performance and should be configured flexibly in practice.

The http block is the most frequently configured part of the Nginx server and is where most features and third-party modules, such as proxies, caching, and log definitions, are configured. The http block consists of an http global block and server blocks.

Directives configured in the http global block include file inclusion, MIME-TYPE definitions, log customization, connection timeouts, and the maximum number of requests per connection.

This block is closely related to virtual hosting. From the user's point of view, a virtual host is identical to a standalone hardware host; the technology was created to save on Internet server hardware costs.

Each http block can include multiple server blocks, and each server block is equivalent to a virtual host.

Each server block can also be divided into global server blocks, as well as can contain multiple location blocks at the same time.

The most common configurations are the listen configuration for this virtual host and the name or IP configuration for this virtual host.

A single server block can be configured with multiple location blocks.

The main purpose of this block is to take the request string received by the Nginx server (e.g., server_name/uri-string) and match the part other than the virtual host name (e.g., the /uri-string part) in order to process a specific request. Features such as address redirection, data caching, and response control, as well as the configuration of many third-party modules, are also performed here.

Goal: visit http://ip and see Tomcat's main page, which is actually served from http://ip:8080.


Visiting the address shows the Tomcat home page.

Jumps to a different server based on the path visited.

Visit http://ip:9001/e and it jumps directly to http://

Visit http://ip:9001/vod and it jumps directly to http://
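Since the target addresses are elided above, here is a sketch of the path-based routing idea with placeholder back ends:

```nginx
server {
    listen 9001;
    server_name localhost;

    location ~ /e {                        # requests whose path contains /e...
        proxy_pass http://127.0.0.1:8080;  # hypothetical first Tomcat
    }
    location ~ /vod {                      # requests whose path contains /vod
        proxy_pass http://127.0.0.1:8081;  # hypothetical second Tomcat
    }
}
```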


Nginx + JDK8 + two Tomcats; the Tomcat configuration itself will not be covered here.



If the Nginx proxy server is configured as:, and it jumps to:, then the visitor's IP is:

Visiting http:// achieves the load-balancing effect, spreading requests evenly across ports 8080 and 8081.

Nginx+JDK8+2 Tomcat, one 8080, one 8081.

On access, requests alternate between 8080 and 8081.
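A sketch of this load-balancing setup (the server IP is a placeholder):

```nginx
upstream myserver {
    server 192.168.17.129:8080;   # first Tomcat
    server 192.168.17.129:8081;   # second Tomcat
}
server {
    listen 80;
    location / {
        proxy_pass http://myserver;   # requests alternate between the two ports
    }
}
```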

1. Round robin (default)

Each request is assigned to a different back-end server in turn, in chronological order; if a back-end server goes down, it is automatically removed.


2. weight

weight represents the server's weight; the default is 1, and the higher the weight, the more requests are assigned to that server.

It specifies the polling ratio: weight is proportional to the access ratio and is used when back-end server performance is uneven.
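For example (addresses hypothetical):

```nginx
upstream myserver {
    server 192.168.17.129:8080 weight=10;  # receives twice as many requests
    server 192.168.17.129:8081 weight=5;
}
```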


3. ip_hash

Each request is allocated according to the hash of the client's IP, so each visitor consistently reaches the same back-end server, which can solve the session problem. An example follows:
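A sketch of such an ip_hash setup (addresses hypothetical):

```nginx
upstream myserver {
    ip_hash;                      # hash on client IP: same visitor, same back end
    server 192.168.17.129:8080;
    server 192.168.17.129:8081;
}
```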

4. fair (third-party)

Allocate requests according to the response time of the back-end server, with priority given to those with shorter response times.

Access to images:

Access to pages:

Access to directories: http:// (because autoindex on; is set)
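The dynamic/static separation above can be sketched as follows (paths assumed):

```nginx
server {
    listen 80;
    location /www/ {
        root /data/;      # pages served from /data/www/
        index index.html;
    }
    location /image/ {
        root /data/;      # images served from /data/image/
        autoindex on;     # list directory contents in the browser
    }
}
```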

Two machines, each with keepalived+Nginx+Tomcat.

In a master/backup keepalived setup, only the master machine holds the VIP address; otherwise a split-brain problem will occur.

[Tip] Add +x execution permissions to the script:

Just configure the virtual IP in Nginx.

Nginx is made up of one master process and multiple worker processes.

The client sends a request to the master, which hands it to the workers; the workers then contend to handle the request.

1. You can use nginx -s reload for hot deployment;

2. Each worker is an independent process; if one of them has a problem, the other workers continue to contend for and handle requests independently, without interrupting the service;


Like Redis, Nginx uses I/O multiplexing. Each worker process can make maximal use of one CPU, so it is generally optimal for the number of workers to equal the number of CPUs on the server.

Sending a request: serving a static resource occupies two connections, while a reverse proxy occupies four connections.


Nginx Configuration File Explained

Before explaining this directive, we must first explain the "thundering herd" problem (see the Wikipedia entry for details). In the Nginx scenario, it means that when a new network connection arrives, multiple worker processes wake up at the same time, but only one of them can actually accept the connection and process it. If too many processes are woken up at once, performance suffers.

So here, if accept_mutex is on, the workers are handled serially: only one of them is woken up. Conversely, if accept_mutex is off, all of them are woken up, but only one can obtain the new connection, and the other workers go back to sleep.
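In an events block this reads:

```nginx
events {
    accept_mutex on;           # wake workers serially to avoid the thundering herd
    worker_connections 1024;   # maximum connections per worker process
}
```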

With rewrite forwarding, the URL is changed before proxy_pass is used, so I added the following configuration:

To locate the cause of the problem, I commented out the other configuration under the virtual host for debugging, and finally found that when the proxy_set_header Host $http_host; line was commented out, forwarding succeeded. Only then did I notice the problem in the reverse proxy configuration. Since the original configuration in the existing environment could not simply be deleted, I searched online for the reason and found the following solution:

That is, adding proxy_set_header Host $http_host; inside the location does not change the value of the request header, so when forwarding, the request header still carries the original Host information, which causes the problem. When Host is set to $proxy_host instead, the request header is reset to the Host information of the proxied server.

Additionally, the URL parameter for proxy_pass forwarding can be handled by using rewrite in the location, so the perfected configuration is as follows:

After the URI is changed by rewrite in the location, proxy_pass uses the changed URI. In the example above, (.*) captures all parameters, and $1 is appended after http://bbb.example.com.
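A sketch of this rewrite-then-proxy idea (bbb.example.com comes from the text; the /aaa/ prefix is an assumption):

```nginx
location /aaa/ {
    rewrite ^/aaa/(.*)$ /$1 break;      # strip the /aaa/ prefix, keep the rest in $1
    proxy_pass http://bbb.example.com;  # the rewritten URI is appended to this address
}
```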

First, let’s look at the syntax of proxy_set_header

It allows redefining or adding request headers sent to the back-end server. The value can contain text, a variable, or a combination of the two. The proxy_set_header directive inherits configuration from the previous level if and only if it is not defined at the current configuration level. By default, only two request headers are redefined:
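The two defaults referred to above are:

```nginx
proxy_set_header Host       $proxy_host;  # name (and port) taken from proxy_pass
proxy_set_header Connection close;        # no keep-alive toward the back end
```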

When a request matches /customer/straightcustomer/download, it is processed using crmtest; when it reaches the upstream, the matched server is converted directly to an IP for forwarding. If that server is configured in another nginx, then $proxy_host is the proxied host, which is equivalent to setting Host to $proxy_host. If you want to set Host to something else, set it as follows:

If you don’t want to change the value of the request header “Host”, you can set it like this:

However, if the client request does not carry this header, the request passed to the back-end server will not contain it either. In this case, it is better to use the $host variable: its value is the value of the "Host" request header when the request contains one, and the server's primary name when it does not:

Additionally, the server name can be transmitted along with the port of the backend server:

If the value of a request header is null, then that request header will not be transmitted to the backend server:
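For example, to stop passing the Accept-Encoding header, set it to an empty string:

```nginx
proxy_set_header Accept-Encoding "";   # empty value: header is not passed upstream
```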

These nginx configuration items cover proxying HTTPS and HTTP, serving static files, H5 distribution, and proxying TCP connections, meeting most of the nginx needs that come up when setting up a test environment; you can refer to them whenever you need nginx.

Nginx configuration file and basic commands

The configuration file is named nginx.conf. On Linux it lives in /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx; on Windows, in the installation directory under \conf, depending on the actual installation.

nginx consists of modules that are controlled by directives specified in configuration files. Directives are categorized as simple or block directives:

Simple directives consist of a space-separated name and arguments, and end with a semicolon ;

Block directives have the same structure as simple directives, but instead of a semicolon they end with a set of additional directives surrounded by curly braces { and }. If a block directive can have other directives inside its braces, it is called a context (e.g., events, http, server, and location);

Directives placed outside of any context in the configuration file are considered to be in the main context. The events and http directives reside in the main context, server in http, and location in server. A configuration file has one http block and one or more server blocks, each server block representing a virtual server;

The line where the # symbol resides is considered a comment;

Several top-level directives combine directives that apply to different traffic types:

For most directives, a child context inherits the value of directives contained in its parent; to override a value inherited from the parent, the directive needs to be included explicitly in the child context.

Opening a configuration file (e.g., /usr/local/nginx/conf/nginx.conf), the default configuration file already contains several examples of server blocks, most of which are commented out. Now comment out all such blocks and start a new server block:

Each server context specifies the port to listen on and the server_name. When nginx decides which server will handle a request, it tests the URI in the request header against the parameters of the location directives defined inside the server block. For the following configuration, the /data directory and its subdirectory /data/www are created on the system:

The first location block specifies a prefix to be compared with the URI in the request. For matching requests, the URI is appended to the path specified in the root directive (i.e., /data/www), forming the path to the requested file on the local file system. If several location blocks match, nginx chooses the one with the longest prefix. The first location block above provides the shortest possible prefix, of length 1, so it is used only when all other location blocks fail to match. The second location block matches requests starting with /images/; location / also matches such requests, but its prefix is shorter, i.e., /images/ is longer than /.
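The configuration under discussion looks like this:

```nginx
server {
    location / {
        root /data/www;   # shortest prefix: used when nothing longer matches
    }
    location /images/ {
        root /data;       # /images/logo.png -> /data/images/logo.png
    }
}
```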

This is already a working configuration of a server listening on the standard port 80 and accessible on the local machine at http://localhost/; port 80 and server_name localhost can be omitted, as they are default values. In response to requests with URIs starting with /images/, the server sends files from the /data/images directory. For example, in response to the request http://localhost/images/logo.png, nginx sends the file /data/images/logo.png. If the file does not exist, nginx sends a response indicating a 404 error. Requests with URIs not starting with /images/ are mapped to the /data/www directory. For example, in response to the request http://localhost/about/example.html, nginx sends the file /data/www/about/example.html.

Reverse proxying is probably one of the things Nginx does the most. A reverse proxy accepts connection requests from the Internet as a proxy server, forwards them to a server on the internal network, and returns the results from that server to the client that requested the connection. Externally, the whole setup appears as a single reverse proxy server. Simply put, the real server cannot be accessed directly from the external network, so a proxy server is needed; the proxy server can be reached from the external network and sits in the same network environment as the real server (it may even be the same machine on a different port).

Define the proxy server by adding a server block to the nginx configuration file containing the following:

This will be a simple server listening on port 8080 that maps all requests to the /data/up1 directory on the local filesystem. Note that the root directive is in the server block context and is used when the location block selected to serve a request does not contain its own root directive. After creating the /data/up1 directory, you can place a static page such as an index.html file into it and visit http://localhost:8080/ to access the file.
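The server just described can be written as:

```nginx
server {
    listen 8080;
    root   /data/up1;   # used because the location below has no root of its own
    location / {
    }
}
```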

So far, it is still configured for static resource access and is not a proxy server, then add or modify the existing location context to read as follows:

When a user accesses http://localhost:8080/, the resources of the http://localhost:8181 server are returned.
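A sketch of that proxy configuration:

```nginx
server {
    listen 8080;
    location / {
        proxy_pass http://localhost:8181;   # forward everything to the 8181 server
    }
}
```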

The parameter after the location context can be a regular expression, and if it is a regular expression, it should be preceded by ~, for example:

The above configuration indicates that nginx matches all URIs ending in .gif, .jpg, or .png, and the corresponding requests are mapped to the /data/images directory. When nginx selects a location block to serve a request, it first checks the location directives with a specified prefix, remembers the location with the longest prefix, and then checks the regular expressions. If a regular expression matches, nginx selects that location; otherwise it selects the one it remembered earlier.
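The regular-expression location described above:

```nginx
location ~ \.(gif|jpg|png)$ {
    root /data/images;   # /some/path/pic.png -> /data/images/some/path/pic.png
}
```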

To find the location that best matches a URI, NGINX first compares the URI with the prefix-string locations, then searches the regular-expression locations. Regular expressions take precedence, unless the ^~ modifier is used to give the matched prefix string higher priority. Among the prefix strings, NGINX selects the most specific one (that is, the longest). The exact logic for choosing the location to process a request is given below:

Test the URI against all prefix strings. The = (equals sign) modifier defines an exact match of the URI with a prefix string; if an exact match is found, the search stops. If the ^~ (caret-tilde) modifier precedes the longest matching prefix string, the regular expressions are not checked. Otherwise, store the longest matching prefix string and test the URI against the regular expressions; the search stops at the first matching regular expression, and its location is used. If no regular expression matches, the location of the stored prefix string is used.

The typical use case for the = modifier is / (forward slash) requests. If requests for / are frequent, specifying =/ as a parameter to the location directive speeds up processing because the search for matches stops after the first comparison.

To start nginx, run the executable. When nginx starts, you can control it by invoking the executable with the -s argument. Use the following syntax:

The value of signal may be one of the following: stop (fast shutdown), quit (graceful shutdown), reload (reload the configuration file), or reopen (reopen the log files).

When the master process receives a signal to reload the configuration, it checks the syntactic validity of the new configuration file and attempts to apply the configuration provided in it. If this succeeds, the master process starts new worker processes and sends the old ones a message requesting that they shut down. Otherwise, the master process rolls back the changes and continues to use the old configuration. An old worker process, upon receiving the shutdown command, stops accepting new connections and continues to serve its current requests until they are all complete. After that, the old worker process exits.

Both alias and root, inside a location, specify a path. alias is used as in the following configuration:

With this configuration, when accessing files under the /img/ directory, nginx automatically looks for them in the /var/www/image/ directory.

With this configuration, when accessing files under the /img/ directory, nginx looks for them in the /var/www/image/img/ directory.

alias defines a directory alias, while root defines the top-level directory to which the location path (/img/ here) is appended, yielding /var/www/image/img/. Another important difference is that the path in alias must end with /, or the file will not be found, while the trailing slash is optional for root.
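Side by side, the two variants discussed above (shown as alternatives; only one would appear in a given server block):

```nginx
# alias: the location prefix /img/ is replaced by the alias path
location /img/ {
    alias /var/www/image/;   # /img/a.png -> /var/www/image/a.png (trailing / required)
}

# root: the full URI is appended to the root path
location /img/ {
    root /var/www/image;     # /img/a.png -> /var/www/image/img/a.png
}
```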

The index directive works as follows:

This way, when a user requests the / address, Nginx automatically looks in the filesystem directory specified by the root directive for the files index.htm and index.html, in that order. If the index.htm file exists, Nginx directly initiates an "internal jump" to the new address /index.htm; if index.htm does not exist, it continues to check whether index.html exists, and if so initiates the same "internal jump" to /index.html. If index.html does not exist either, processing is handed over to the next module in the content phase.
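The index configuration described above (the root path is an assumption):

```nginx
location / {
    root  /var/www/html;
    index index.htm index.html;   # tried in order, via internal redirect
}
```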
