
Nginx:

Prerequisites:
To start: I’d like to run the nginx server on an Ubuntu-based container.
If needed: create an image:

Installation:
To run the container:
docker run --name=nginx -it -p 80:80 -v .:/sites/demo ubuntu
IMPORTANT! Run the container from the folder that contains the site files!

Installing nginx using a package manager (without modules):


1. apt-get update && apt-get install nginx -y
2. Then confirm it is running using: ps aux | grep nginx, or check the browser at:
localhost:80

Configuration files are at: /etc/nginx/

Nginx.org -> documentation, nginx.com -> product

Installing nginx using source code and adding modules:


We would want to do this, since adding modules lets us configure nginx much more
thoroughly than just installing it from the package manager.
1. apt-get update
2. wget installation_link_from_nginx.org_download
3. tar -zxvf nginx... (then we will have an nginx folder)
4. cd nginx (then we need to configure the source code, using a compiler)
5. apt-get install build-essential -y
6. ./configure (to configure the source code)
7. apt-get install libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev
8. ./configure

Now we can build nginx from the source code. To view the available configure flags, check
the nginx.org documentation.
To configure the build use the command: (this will also create the configs)

./configure --sbin-path=/usr/bin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-pcre --pid-path=/var/run/nginx.pid --with-http_ssl_module

(each flag is explained in the notes further below)

This is the main advantage of installing nginx from source: it allows us to add different
configurations.
Then, run:
1. make
2. make install
3. Check that the configuration files were created at /etc/nginx/
4. Then check the nginx version with nginx -V to see that it’s working.
5. Then run nginx using the command: nginx
6. Check that nginx is running using: ps aux | grep nginx
7. We can also see that nginx is running using the browser.

We can validate the configuration we wrote using nginx -t.

Adding NGINX as a service in systemd:


This will let us:
 Start, stop and restart nginx
 Reload configurations
 Start on boot

We can send signals to the master process using nginx -s:


nginx -s stop – to stop the nginx server

To run nginx as a service we can create a unit file based on the example nginx init file:

Then create the unit file and change some of its settings:

Change PIDFile to /var/run/nginx.pid, matching the --pid-path we set in the configure flags.
Change ExecStartPre, ExecStart and ExecReload to use /usr/bin/nginx.
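
A minimal sketch of what that unit file (e.g. /lib/systemd/system/nginx.service) might look like, based on the official example from nginx.com with the paths adjusted as above:

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
# test the configuration before starting:
ExecStartPre=/usr/bin/nginx -t
ExecStart=/usr/bin/nginx
ExecReload=/usr/bin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target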

From now on we can run nginx using systemctl, as we added nginx as a service.
Run nginx using:
systemctl start nginx.service
systemctl enable nginx.service – to start at boot.

To view errors of nginx server check at /var/log/nginx/error.log

Notes on the ./configure flags used above:
--sbin-path=/usr/bin/nginx – sets the location of the nginx executable. /usr/bin is where executables are located (all the commands we execute are there).
--conf-path=/etc/nginx/nginx.conf – sets the configuration path to /etc/nginx/nginx.conf, which is where the nginx configuration is located.
--error-log-path=/var/log/nginx/error.log – sets the location of the primary error, warnings and diagnostics file.
--http-log-path=/var/log/nginx/access.log – sets the file to which access logs for our site will go.
--with-pcre – forces the usage of the PCRE library.
--pid-path=/var/run/nginx.pid – sets the name of the nginx.pid file that stores the process ID of the main process, so we always know the master process ID.
--with-http_ssl_module – allows us to use HTTPS.
Nginx configuration:
There are a few terms used within nginx:
1. Directives – a specific configuration option that is set in the configuration file and
consists of a name and a value.
2. Context – a section within a configuration where directives are set (like a scope). We
can also have nested contexts.
3. Main context – where we configure global directives that apply to the master process.
To change the index.html page we need to have the site files and edit /etc/nginx/nginx.conf
Conf file content:
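
A minimal sketch of what that conf file content might look like (assuming the demo site lives in /sites/demo):

events {}

http {
    server {
        listen 80;
        server_name localhost;
        root /sites/demo;
    }
}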

root – the directory from which nginx serves requests. So if we access, say,
localhost/hello, nginx maps localhost to the root directory /sites/demo and then
looks for /hello inside it.

Then reload the conf file using systemctl reload nginx.service


We use reload because reload prevents downtime if there are any errors in the config file.
If we used restart and the conf file had errors, nginx would refuse to come back up.

To test the nginx.conf file run the command: nginx -t – to check the syntax.
When we do this, we can see that we only have HTML without CSS. We can confirm the
CSS does load using:
curl -i http://localhost/cover.css – it works, but the content-type is plain text,
so the browser doesn’t parse it as CSS.

To fix this we need to provide nginx with the content type for each file extension, so the
browser renders it correctly.

We can fix this using a types mapping, mapping each content type to a file extension.
An easier way is to use the mime.types file in /etc/nginx/.
This file contains the same kind of mapping as above, but with many more extensions.
We include this file in our nginx.conf file.
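
A sketch of both options, inside the http context:

# Option 1 – a manual types mapping:
http {
    types {
        text/html html;
        text/css  css;
    }
}

# Option 2 – the easier way, include the full mapping that ships with nginx:
http {
    include mime.types;
}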

Then reload the service and it should work!


Location Blocks in nginx.conf file:
We can configure in nginx.conf the option to route requests to different parts of the site using
location blocks, and do something when we receive those requests.
Let’s say we want to access the /greet URI and we currently get a 404.

The location context takes a parameter: the URI it handles. Inside it we can serve
another page, show a text, or anything else we would like to do.
The response that we return needs to contain a status code and the returned value.
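
For example, a sketch of a location that returns a text response:

location /greet {
    return 200 'Hello from the /greet location!';
}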

This is called a prefix location, which means that anything that starts with /greet will
match, like /greetings or /greet/hello.
To create an exact match for /greet (matching only /greet) we need to add the = modifier.
We can also evaluate the match based on a regex using the ~ sign.

This allows access to /greet followed by a single digit 0-9, and the match is case sensitive.

To make the match case insensitive, we use the ~* signs before the value.

If we had both a prefix match and a regex match, NGINX would give priority to the regex match.

We can also use a preferential prefix using ^~, which gives priority to the prefix match
over a regex match. The modifiers are summarized in the sketch below.
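
A sketch summarizing the modifiers:

location = /greet { return 200 'exact match'; }
location ^~ /greet { return 200 'preferential prefix match'; }
location ~ /greet[0-9] { return 200 'case-sensitive regex match'; }
location ~* /greet[0-9] { return 200 'case-insensitive regex match'; }
location /greet { return 200 'prefix match'; }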
Priority of matches:
1. Exact match ( = ) – the highest priority match, because it matches the URI
exactly.
2. Preferential prefix ( ^~ ) – the second priority, as it’s preferred over a
regex match.
3. Regex match ( ~ or ~* ) – whether case sensitive or insensitive, NGINX
takes whichever regex match comes first in the configuration.
4. Prefix match – the lowest priority; any other kind of match is
prioritized higher.
Variables:

We can create 2 types of vars:

1. Our own configuration vars: set $var 'something';
2. Nginx module variables: $host, $uri, $args (we can find more about these vars in the
nginx.org documentation.)

We can use vars inside a location’s return value.

If we were to access /inspect?name=Daniel

we would get 3 outputs:
localhost as our host, /inspect as the URI, name=Daniel as the args given.
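
A sketch of such a location:

location /inspect {
    return 200 "$host\n$uri\n$args";
}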

We can also read a specific arg using the $arg_* variables:

$arg_name – for an arg named name, this holds the value given (Daniel in the example above).

If statements in nginx.conf:
 Note: it is highly discouraged to use if statements inside the location context.
Here we check the API key: if we get an API key that is not 1234, we return a string.
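
A sketch of that check (placed in the server context, per the note above):

if ($arg_apikey != 1234) {
    return 401 "Incorrect API KEY";
}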

This would work: http://localhost/inspect?apikey=1234&age=123


This would return Incorrect API KEY http://localhost/inspect?apikey=123123&age=123

To set local vars we can use the set command:

set $var_name var_value;
Rewrites and Redirects:
Let’s assume we want to access the image on our page, but we want to do it through the
/logo location, so we need to redirect from /logo to the image.

To do that we can return 307, which is a temporary redirect, redirecting us to our image.

Since our root is /sites/demo we can redirect to /thumb.png, but we can also redirect to
different folders.
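
A sketch of the redirect:

location /logo {
    return 307 /thumb.png;
}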

Difference between rewrite and redirect

A redirect will change the URL: we accessed /logo and the URL changed to /thumb.png.

A rewrite will NOT change the URL but will create a new request behind the scenes. So we
accessed /user/Daniel but got /greet’s content (REWRITE IS INTERNAL!)

Example:
rewrite ^/user/\w+ /greet;
This means: rewrite a URI that starts with /user followed by one or more word characters, to /greet.
So if /user/Daniel is accessed, it will be rewritten to /greet.

 Note that rewrite will make NGINX create a new request.

We can also capture certain parts of the original request using regex groups ():
rewrite ^/user/(\w+) /greet/$1; – this captures the name.
rewrite ^/user/(\w+)/(somethingElse) /greet/$2; – $2 refers to the second captured group.
When we capture certain parts we can also make specific locations:

When we receive /user/Daniel, nginx captures Daniel and rewrites the request to /greet/Daniel,
then sees there is a matching location and serves it.
In this case, if the user Daniel logs in, the rewrite lands on his specific location; if any other
user logs in, the request lands on /greet.
Because /greet is a prefix match, anything after /greet still matches /greet, but a URI with a
more specific matching location is served there.
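
A sketch of this setup:

rewrite ^/user/(\w+) /greet/$1;

location /greet {
    return 200 "Hello User";
}

location = /greet/Daniel {
    return 200 "Hello Daniel";
}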

Using flags in rewrites:

We can use the last flag in a rewrite statement. The last flag states that after this rewrite is
applied, the URI will not be rewritten again, even if further rewrite rules would match.

If we have:
rewrite ^/user/(\w+) /greet/$1;
rewrite /greet/john /thumb.png;
In this case, if /user/john is accessed, it is rewritten to /greet/john, and then /greet/john is
rewritten to /thumb.png.

If we add the last flag:


rewrite ^/user/(\w+) /greet/$1 last;
rewrite /greet/john /thumb.png;
This rewrites /user/john to /greet/john and that’s it; it will not be rewritten to /thumb.png
Try_Files and named locations:
Syntax:
try_files arg1 arg2 final_arg
Each argument is checked as a file relative to the root directory; only the final argument can
rewrite the request to a location.

We can use try_files either in the server context, so any request the server receives is
handled by it, or inside a location.
If we used try_files in the server context, it would intercept every request and serve
what we put there.
So, if we have: try_files /thumb.png /greet;
nginx checks whether /sites/demo/thumb.png exists; if so, it serves it. Since we put this in the
server context, it would always return the thumb.png image. If /thumb.png did not exist,
the request would be rewritten to the /greet location, since it’s the last argument.

In this case, when we try to access /cat.png, nginx checks whether /sites/demo/cat.png exists.
Since it doesn’t, the request is rewritten to /greet. The /greet location exists, so the request
becomes /greet and returns 200 hello.

If we change try_files into:

try_files $uri /cat.png /greet;

nginx checks whether the URI exists as a file; if so, it serves it. If the URI doesn’t exist, it
serves /cat.png. If /cat.png doesn’t exist, it rewrites to the /greet location.

 NOTE THAT TRY_FILES CHECKS FOR FILES RELATIVE TO THE ROOT
DIRECTORY, AND ONLY THE LAST ARGUMENT REWRITES THE
REQUEST.
Since the last item of try_files should be something that never fails, we can rewrite to an
error page. So if the user tries to access something that doesn’t exist, we send him to an
error page.
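
A sketch of this, with a friendly 404 location:

try_files $uri /cat.png /greet /friendly404;

location /friendly404 {
    return 404 "Sorry, that file could not be found.";
}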

What happens here, if we try to access /nothing:

1. nginx checks for /sites/demo/nothing, which doesn’t exist, since each argument is
checked as a file relative to the root directory.
2. It checks for /sites/demo/cat.png, which also doesn’t exist.
3. It checks for /sites/demo/greet. Although we do have a /greet location, this argument
is still checked as a file relative to the root directory, and since we don’t have a /greet
file and /greet is not the last argument, it fails.
4. It REWRITES the request to /friendly404 and serves it.

 REMEMBER: ONLY THE LAST ARGUMENT WILL CAUSE A REWRITE!


To check that we do get a 404, we run:
curl -I localhost/error

To create a named location, use @ instead of / like so:

try_files $uri /cat.png /greet @friendly_404;
location @friendly_404 { ... }
Logging:
Nginx provides two types of logs:
Error log – anything that failed.
Access log – all requests to the server.

As we configured our server we set the two files at:


/var/log/nginx/error.log – for error logs.
/var/log/nginx/access.log – for access logs.

We can add our own access logs using access_log in a location:

Here, when a user accesses the /secure URI, we log the access into both access.log and
secure.access.log (secure.access.log being our custom log file).
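
A sketch of such a location (the file paths assume the ones configured above):

location /secure {
    access_log /var/log/nginx/secure.access.log;
    access_log /var/log/nginx/access.log;
    return 200 "Welcome to secure area.";
}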

To disable access logs we can use: access_log off;


Inheritance and Directive types:
In nginx, when we write a location under a server, that location context inherits the
configuration of the server context.
Inheritance flows from top to bottom:
starting from the main context (the file itself), down into the http context, then the server
context, and then the location context.

We have 3 types of directives in nginx:

1. Standard directive – can only be declared once. Child contexts can override its
configuration.
2. Array directive – this type of directive can be declared multiple times without
overriding the previous declarations (like the access_log directive).
If we set an access_log directive in the main context, all child contexts (like servers
and locations) will inherit it. But a child’s access_log can override the
main one.
3. Action directive – invokes some break in the configuration (like a return
directive / rewrite / try_files).
PHP Processing: (Reverse proxy)
We will configure a php-fpm server so that when we receive requests that need PHP, we
forward the request to the php-fpm server, get the (HTTP) response back, and return it to the user.

Since a container should hold 1 service at a time, we need to create 2 containers with docker
compose: one that holds the nginx service, and one that holds the php service.

Create docker compose file:
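
A minimal sketch of what that docker-compose.yml might look like (the nginx build context and the volume paths here are assumptions):

services:
  nginx:
    build: .                # hypothetical Dockerfile containing our source-built nginx
    ports:
      - "80:80"
    volumes:
      - .:/sites/demo
  php:
    image: php:fpm
    volumes:
      - .:/var/www/html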

Then: docker compose -f docker-compose.yml up -d --build


Changes to our nginx.conf file:
1. Create an info.php file with: <?php phpinfo(); ?>
2. Link volume to the php server

If a request points to a directory, we want to tell nginx which file to load, using the index
directive.
The default value is our index.html file at /sites/demo/index.html.

index index.php index.html;

So if a user requests a directory, nginx loads index.php, and if it doesn’t exist it looks
for index.html

location /
This location matches anything. We try to serve the URI; if it doesn’t exist, the URI as a
directory (loading the index, either .php or .html); if that also doesn’t exist, return 404.

location ~ \.php$
This location matches anything that ends with .php (using regex: match anything that
ends, $, with \.php).
In this location we include the fastcgi configuration and pass requests to the php socket we
created in the second docker container (where php is the container name and 9000 is the port
the php container listens on).
Then we set the script location relative to the php server’s root directory (/var/www/html is
the root directory of the php server, so we point the script path there).
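
A sketch of these locations, assuming the container name php and port 9000 as described:

index index.php index.html;

location / {
    try_files $uri $uri/ =404;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass php:9000;
    # the php container's document root, where it reads scripts from:
    fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;
}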

 To connect to the php server run: docker exec -it php /bin/bash
To preview index.php instead of index.html:

Create an index.php file; then, when trying to access the main page, we get index.php
instead of index.html, since the index directive lists index.php first and falls back to
index.html if it doesn’t exist.
Worker Processes:
When nginx starts, it creates a master process. This process spawns worker processes
under it that listen for clients. The default number of workers is 1.

We can set the number of worker processes in the nginx.conf file (in the main context) using:
worker_processes 2;
which sets the number of workers to 2.

Adding another worker doesn’t necessarily improve performance. Since nginx workers are
asynchronous, performance depends on the CPU cores: adding workers beyond the number of
cores changes nothing, because each core can run only 1 worker. This means that if we have
an 8-core CPU we can have 8 workers, one per core.

To know how many cores our system has, run:

nproc – prints the number of cores.
lscpu – more verbose.

We can automatically set one worker per core using:

worker_processes auto;

We can also set the number of connections each worker can handle using:
worker_connections

To know how many files each process can open at a time, run:
ulimit -n
This is the limit on files a worker can open; if we exceed this number of connections, we
will max out our server.
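
A sketch of how these directives are typically set (worker_connections lives in the events context; 1024 here is an assumed value matching a common ulimit -n result):

worker_processes auto;

events {
    worker_connections 1024;
}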

We can compute how many requests we can handle using this formula:

worker_processes * worker_connections = max number of connections we can handle
This way, if we have X workers, and each worker can open Y connections, we can handle
X * Y connections at most.

We can also set the location of the nginx process ID file using the pid directive.
The default value is /var/run/nginx.pid
Buffers and Timeouts:
These tweaks refer to the requests coming from clients, not to how the server processes the
requests.

First, buffering is the operation of writing data to memory, whether from a request that
we read or a file that we load. If the data doesn’t fit in the allocated buffer, nginx writes
the overflow to the hard disk, which is much slower than memory.

Secondly, a timeout is an option that says, for example: after 60 seconds of being connected
to a client, stop the connection.

We add these configurations to the http context, so they apply to all our requests.
We can set the units of each size directive as follows:
 100 – 100 bytes
 10K – 10 KB
 10M – 10 MB

1. client_body_buffer_size – sets the amount of memory to allocate for buffering the
POST data from a client.
(When a client sends us a POST request, the data is in the body of the request.)
2. client_max_body_size – sets the maximum accepted body size. If we receive a larger
body, nginx drops the request and returns 413 (request entity
too large).
3. client_header_buffer_size – sets the amount of memory to allocate for
buffering a client’s request headers.
4. client_body_timeout / client_header_timeout – these directives set the maximum time
between consecutive read operations. If that time is exceeded, nginx drops the
connection. (These directives do not refer to the total time it takes to transfer the data.)
The timeout directives accept units: ms for milliseconds, s for seconds (the
default), m for minutes, h for hours, d for days, and y for years.
5. keepalive_timeout – sets the maximum time to keep a connection open.
6. send_timeout – if the client does not receive any of the response data within this time,
abort sending the response.
7. sendfile on/off – skips buffering for static files, serving them straight from disk
instead of through the buffer.
8. tcp_nopush on/off – optimizes the size of the packets sent to the client.
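
A sketch with sample values (the numbers here are illustrative assumptions, not recommendations):

http {
    client_body_buffer_size 10K;
    client_max_body_size 8m;
    client_header_buffer_size 1k;
    client_body_timeout 12s;
    client_header_timeout 12s;
    keepalive_timeout 15s;
    send_timeout 10s;
    sendfile on;
    tcp_nopush on;
}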
Adding Dynamic Modules:
We can add modules to our nginx server after it has been built, by rebuilding the server with
the new modules. There are static and dynamic modules: static modules are compiled into
the binary and load every time the server starts, while dynamic modules are built separately
and loaded on demand from the configuration.

To rebuild the server, we need the installation (source) folder.

If nginx releases a new version, we can simply install the new version, add our needed
configure flags, and reuse our nginx.conf file.

1. Copy the old configure flags that we built the server with, using nginx -V
2. List all the modules using ./configure --help in the installation folder.
3. Use ./configure, paste the old flags, then also add the new modules.
4. Set --modules-path=/etc/nginx/modules
(this flag sets the path from which dynamic modules are loaded)
5. In case the configuration fails, install the needed packages.
6. Run make.
7. Run make install.

Dynamic modules are not loaded automatically; we need to load them in nginx.conf:
8. load_module modules/module_name.so;
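
For example, assuming we built the image filter module dynamically (./configure ... --with-http_image_filter_module=dynamic), the top of nginx.conf would contain:

load_module modules/ngx_http_image_filter_module.so;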

 It’s important to notice that we set the modules directory (/etc/nginx/modules) next to
where our nginx.conf file is located (/etc/nginx/nginx.conf); this way we can load the
modules using a short relative path.
Performance:
Headers and expires:
An Expires header is a header the server responds with, indicating how long the
client may cache the response.
This way the client doesn’t need to re-request something it already has in cache; if we
change something, the client can request it again and get a different response.
This improves performance, since the server doesn’t need to serve a request the
client has already cached.

In our nginx.conf file:

Then, when we request thumb.png, we can see our header using F12, in the network section.
Or, using curl: curl -I http://localhost/thumb.png

So now we can set and control caching headers.

We can set the Cache-Control header to public, telling the receiving client that this resource or
response may be cached in any way:
add_header Cache-Control public;

We also set the Pragma header to public; it is the older version of the same header:
add_header Pragma public;

Adding the Vary header tells the client that the content of this response can vary, with
the value being Accept-Encoding (meaning the response depends on the request header
named Accept-Encoding):
add_header Vary Accept-Encoding;

Setting the cache expiry date using the expires directive:
expires 1M; (1 month)
This tells the client to store the image for 1 month, and then request the image from the
server again.

We can also set a location for static resources like images/css/js and so on:

Here we set access_log off, so we don’t log each time a client requests an image. Then we
add the headers above and set them to expire after 1 month.
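
A sketch of such a static-resources location (the extension list is an assumption):

location ~* \.(css|js|jpg|png)$ {
    access_log off;
    add_header Cache-Control public;
    add_header Pragma public;
    add_header Vary Accept-Encoding;
    expires 1M;
}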
Compressed Responses with gzip:
When a client sends us a request, it can add a header named: Accept-Encoding.
This header tells us that we can return the response encoded, e.g. with gzip.
Compression reduces the size of the file and makes sending it much faster.
When the client receives the response, it decompresses it.

1. To enable gzip, add gzip on; in the http context, so every response can be
gzipped.
 Note that any child of the http context can override the gzip directive.
2. Set gzip_comp_level – this level tells nginx how hard to compress the files. A
lower number produces a bigger file but uses fewer resources (e.g. 3);
a bigger number produces a smaller file, which is better for the client, but uses
more server resources (e.g. 10).
 Note that above level 5 the file size doesn’t shrink much while resource usage
grows, so it’s better to use a level between 1 and 5.
3. Set gzip_types, e.g. gzip_types text/css text/javascript; to tell nginx what to compress.
4. Since we compress only when the client indicates that it accepts compressed files
(via the Accept-Encoding header), we also add: add_header Vary Accept-Encoding;
to tell the client that the response depends on this header.
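
A sketch of the resulting http-context configuration:

gzip on;
gzip_comp_level 3;
gzip_types text/css text/javascript;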

To see if we do get a gzip response:

curl -I -H "Accept-Encoding: gzip" localhost/cover.css
And we receive: Content-Encoding: gzip
FastCGI Cache:
An nginx micro cache is a simple server-side cache that lets us store dynamic-language
responses, in order to avoid or minimize server-side language processing for websites that
rely heavily on server-side languages and database access, such as PHP and MySQL.
This cache can provide performance benefits and reduce server load.

To enable microcaching:
1. In the http context (so it applies to all servers), set:
fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=CACHEZONE:100m inactive=60m;
The cache is written to /tmp/nginx_cache. With levels=1:2, the last character of a cache
entry’s hash becomes a directory and the two characters before it a subdirectory (levels
can also be omitted). keys_zone sets the name and size of the cache zone, and inactive
sets how long an entry is kept without being accessed before it is deleted (the default
is 10 minutes).

2. Set the cache key, which determines what counts as the same request for caching
purposes:
fastcgi_cache_key "$scheme$request_method$host$request_uri";
($scheme = http/https, $request_method = POST/GET, $host = localhost,
$request_uri = what was requested.)

If we removed $scheme we would serve both http and https from the same cache
entry; with $scheme included, http and https get separate entries.
 Note that this key string is hashed to identify the cache entry.
3. In the location ~ \.php$ block add: fastcgi_cache CACHEZONE; so nginx knows
which zone to store the cache in.
4. Set fastcgi_cache_valid 200 404 60m; this makes responses with status 200 and 404
valid for 60 minutes.
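
Pulling it together, a sketch:

http {
    fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=CACHEZONE:100m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        location ~ \.php$ {
            # ...fastcgi_pass configuration as before...
            fastcgi_cache CACHEZONE;
            fastcgi_cache_valid 200 404 60m;
        }
    }
}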

 To test performance, install apache2-utils.

 Then run: ab -n 100 -c 10 http://localhost/ – this sends 100 requests in total,
10 of them concurrently.
Without the cache, it takes about 6 ms per request to complete, and we get 1662
requests per second.
 We can simulate slower processing by adding:
<?php sleep(1); ?> to slow the requests.
 Now we see 4 requests per second, and each request takes about 2 seconds.
 After reloading the configuration and using the cache:
we see 2558 requests per second and about 3 ms per
request!!!

We can check whether a response was served by the server or from the cache using:
$upstream_cache_status
We can pass this variable in a header on all responses (in the http context):
add_header X-Cache $upstream_cache_status;
A HIT value means the response was served from the cache.
A MISS value means the response was not served from the cache.

We can add cache exceptions in the server context:

We set a variable named $no_cache to false.
If we receive a skipcache argument, we set $no_cache to true; if the request is a
POST, we also set it to true.
Then, in the php location: if $no_cache is false, nginx will not bypass the cache and will
write the response to disk as usual.
But if the value is true, nginx bypasses the cache (doesn’t serve from it) and also doesn’t
write the response to disk.
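
A sketch of those exceptions (the argument name skipcache is as described above):

# in the server context:
set $no_cache 0;

if ($arg_skipcache = 1) {
    set $no_cache 1;
}

if ($request_method = POST) {
    set $no_cache 1;
}

# in the php location:
fastcgi_cache_bypass $no_cache;
fastcgi_no_cache $no_cache;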
HTTP2:
Differences between http1.1 and http2:
 http1.1 is a text protocol, which means we can read what is written, where http2 is a
binary protocol, which we cannot read.
 Transferring data using a binary protocol is less error prone and faster.
 http2 compresses headers, which enables faster transmission. It also uses persistent
connections and multiplexed streaming, which takes several streams and
transmits the data over 1 connection, where http1 creates a
connection for each response, which takes time.

Opening connections:
Opening a connection takes time, since we need to perform a TCP handshake and also pass
headers in the requests/responses. That’s why it’s better to multiplex several streams over 1 connection.

When requesting an HTML page from a server over http1, we need more connections as
we request more parts of the page: scripts, images and CSS files. These
increase the number of connections and the time it takes to load the whole page and its content.
Since the browser can only open a limited number of connections, once they stack
up, other requests need to wait.

When requesting that same HTML page from a server using http2, the server returns
the HTML page, and when we request the other parts of the page, the browser reuses that
same connection, and the server streams the data over the one connection.
This means we open fewer connections and can send data faster.

To use http2 we need to enable HTTPS and SSL. We already have SSL installed, but we
need to rebuild our source with the http2 module:
1. Go to the nginx source folder from the installation.
2. Copy the configure flags from nginx -V.
3. Run ./configure with the old flags plus
--modules-path=/etc/nginx/modules --with-http_v2_module
4. Compile with make.
5. Install with make install.

Then, configure SSL:


1. We need an SSL certificate; we can create our own self-signed one.
Create a directory in /etc/nginx named ssl:
mkdir /etc/nginx/ssl
2. Run: openssl req -x509 -days 10 -nodes -newkey rsa:2048 -keyout
/etc/nginx/ssl/self.key -out /etc/nginx/ssl/self.crt to create a certificate.

Because our container needs to listen on port 443, we need to change our docker-compose
file to listen on both 443 and 80.
Then, to enable SSL in the nginx server:
1. In the server context: change the listen port to 443 and add ssl to enable the ssl
module. (listen 443 ssl)
2. Add ssl_certificate /etc/nginx/ssl/self.crt;
3. Tell nginx where to find the signing key with which it signs responses:
ssl_certificate_key /etc/nginx/ssl/self.key;
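
A sketch of the resulting server context:

server {
    listen 443 ssl;
    server_name localhost;
    root /sites/demo;

    ssl_certificate /etc/nginx/ssl/self.crt;
    ssl_certificate_key /etc/nginx/ssl/self.key;
}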

To enable http2:
1. Add the http2 directive in the server context:
http2 on;

We can see that our server returns http2 responses using:
curl -Ik https://localhost/index.html
Server push:
http2 has a server push feature. This feature lets the server, when we request an
HTML page, send the CSS and PNG files along with that HTML.

To test the pushing of files we can install nghttp2-client.

Test using: nghttp -nys https://localhost/index.html
This shows that only index.html was served, since that is all we requested; the CSS and PNG
files would require separate requests.

If we run nghttp -nysa:

we see that after receiving the HTML page, the client asks for both the PNG and CSS files
and gets them.
The -a flag means we also download the assets linked from the requested URI.

If we use the server push feature, we ask only for the HTML page (using -nys), but
we also receive the CSS and PNG files.
In the server context, add a location:
location /index.html {
    http2_push /style.css;
    http2_push /thumb.png;
}
Note that we are not specifying the resource itself (style.css), but rather the request for the
resource (/style.css).

 NOTE – SERVER PUSH IS NOT SUPPORTED ANYMORE


Security:
Https (SSL):
If a user accesses our server through plain HTTP, we want to be able to handle the request.
Currently, if we access through HTTP, we get an error.
There are 2 ways to fix this:
1. Also listen on port 80, which is inefficient, since this port is not secure and we
lose our http2 optimization (serving data through one stream).
2. Redirect requests from port 80 to the equivalent request on port 443:

a. In the http context, create a virtual host that will redirect all traffic to HTTPS.
b. Make that server listen on port 80.
c. The server needs to listen on the same IP as the original server.
d. Return from that server a redirect to https with the same host and the same request
URI.

This will redirect any request from HTTP to HTTPS and fix the issue from before.
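
A sketch of that redirect virtual host:

server {
    listen 80;
    server_name localhost;
    return 301 https://$host$request_uri;
}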

To make our server more secure:

In the https server context:

1. Disable SSL: since the SSL protocol is outdated, we disable it and enable TLSv1,
TLSv1.1, TLSv1.2 instead.
2. Optimize the cipher suites: tell NGINX which ciphers to use and which not.
 Ciphers prefixed with ! are ciphers that we don’t want to use.
3. Allow our server to perform key exchanges between the client and server using DH
params.
 Note that these params need to be created at /etc/nginx/ssl
4. Enable HSTS, which is a header that tells the browser not to load anything over
HTTP, so we can minimize redirects.
5. Enable the SSL session cache, which caches the SSL handshakes done between the
server and the client. The cache holds entries for a set amount of time. This improves
SSL connection times, since there is no need to perform the handshake again. We
want the session cache shared among all the workers. We also give the user a session
ticket; this SSL ticket is trusted by the server and allows it to bypass the need to reread
the session.
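
A sketch of these directives in the https server context (the cipher string and timings here are assumptions in the spirit of the steps above):

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
add_header Strict-Transport-Security "max-age=31536000";
ssl_session_cache shared:SSL:40m;
ssl_session_timeout 4h;
ssl_session_tickets on;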
To generate the params:

1. openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048


Rate Limiting:
Rate limiting is used to:
 Make our server more secure: to prevent brute force attacks, we can set a rate limit to
reduce the number of requests a client can make.
 Make our server more reliable: we can prevent traffic spikes; if we see a surge of
connections, the rate limit keeps it from spiking further.
 Apply priority of service: we can give some users priority, so one can access with
more connections while others get less.

 To load-test our server, we can install siege, which is a tool used to check the load of the
server.
 Run: siege -v -r 2 -c 5 (v – verbose; r – number of runs; c – concurrent connections; so here
we do 2 runs of 5 concurrent connections, 10 requests in total.)

To set a rate limit (in the http context, so it applies to every request):

1. Define a new memory zone in which to track request limits, and set a key by which
the rate limiting is applied:
limit_req_zone $server_name OR $binary_remote_addr OR $request_uri
Here we can limit based on the server name (all requests to our server), based on the
individual user’s IP (to prevent a brute force attack), or based on the
request URI (this one applies per request, not per IP).
2. Specify the zone name, size, and rate:
limit_req_zone $request_uri zone=MYZONE:10m rate=1r/s;
This sets the rate to 1 request per second.
3. Then, we apply the limit either in the server context, so for all requests to the server,
or in a location, limiting only that location:
limit_req zone=MYZONE;

The rate limiting works like so:

If we allow X requests per unit of time to a URI and we get more than X, the rate limit applies.

If 5 requests hit the server at once and the rate limit is 1 per second, the first one
is served and the server rejects the other 4.

Setting a burst limit:

A burst limit is a number of extra requests accepted beyond the rate limit.
If we have 1r/s and a burst of 5, we accept 1 + 5 requests.
So the rate is 1, and the other 5 requests wait in a queue and are fulfilled at the limit
rate; anything beyond that gets rejected.

We set the burst allowance where the zone is applied (burst is a parameter of limit_req,
not of limit_req_zone):

limit_req zone=MYZONE burst=5;
This sets a burst allowance of 5 for requests limited by this zone.
If we send 4 requests in one second, 1 is forwarded immediately and 3 are sent to the
queue, which is then served at the rate of 1r/s. (the burst acts like a buffer)

If we send 8 requests in one second, 1 is forwarded immediately, 5 are sent to the queue
and served at the rate of 1r/s, and the remaining 2 are rejected.
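
A sketch putting it together:

http {
    limit_req_zone $request_uri zone=MYZONE:10m rate=1r/s;

    server {
        listen 80;

        location / {
            limit_req zone=MYZONE burst=5;
        }
    }
}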

Nodelay keyword
We can also add the nodelay keyword, which is only applicable to a zone that
also defines a burst value.
The nodelay keyword serves the allowed burst requests as quickly as possible, not
adhering to the rate limit, but it still maintains the applicable rate for accepting any new
requests.

If we have 1r/s + 5 burst, then if we send 6 requests, all of them are accepted and served.

Then, if we send another 6 requests 2 seconds later, only 2 are allowed: in those 2
seconds the server has only freed 2 slots in the queue.
Explanation:
What nodelay does is serve the queued requests ASAP, but with 1r/s + 5 burst,
after serving a full burst the server effectively needs 6 seconds before it can accept a
full burst of new requests (since it needs to drain the queue at the limit rate).
The first batch filled the queue and was served at once; the second batch was sent
after 2 seconds, so the server only had room for 2 requests, as only 2 seconds had passed.

If we send 6 requests, all 6 are served immediately, but the queue is full (since
it’s 1 + 5), and each slot frees up at the rate, here every 1 second.

If we send 6 requests again after 2 seconds, the server has room for 2 requests,
so 2 fill the freed slots and the other 4 are rejected.
Basic Auth:
We can add authentication to our site, allowing only permitted users to enter certain parts
of the site.

To create a basic password:


apt-get install apache2-utils
htpasswd -c /etc/nginx/.htpasswd user_name

Then, to create the basic auth, in the location context:


auth_basic "Secure Area";
auth_basic_user_file /etc/nginx/.htpasswd;
Hardening NGINX:
1. Run apt-get update to update all the repositories and sources.
2. Run apt-get upgrade.
3. We might also update nginx itself: check the installed nginx version and then check on
the nginx site whether there is a critical update.
4. We want to hide our server version when we send responses to the user:
add in the http context:
server_tokens off;
5. To prevent malicious users from embedding our site into their own (click-jacking):
 Click-jacking can be done by embedding the server contents in an iframe in the HTML
body.
In the server context: (so it applies to all requests)
add_header X-Frame-Options "SAMEORIGIN";
This means: allow our pages in an iframe only if the embedding domain is the same as ours.
6. add_header X-XSS-Protection "1; mode=block";
in case someone tries an XSS attack on us, the browser will refuse to load the page.
7. To remove unused or dangerous engine modules:
when building the source code, we can pass --without-... flags so those
modules are not compiled in.
Reverse Proxy and Load Balancing:
Reverse Proxy:
A reverse proxy is an intermediary between a client (a browser) and a server.
A reverse proxy takes the requests from the client, forwards them to the backend (the
server), gets the response back from the server, and forwards it to the client.

Note that the difference between fastcgi_pass and proxy_pass is the protocol: fastcgi_pass
forwards requests using the FastCGI protocol (which php-fpm speaks), while proxy_pass
forwards plain HTTP to a separate server that nginx passes the requests on to.
If we run php and nginx in containers, php-fpm can only listen using FastCGI and not HTTP.

Adding headers to the client:

To add headers to the response sent to the client we can use the add_header directive.

Adding headers to send to the proxied server:

We can view the headers that the proxied server receives using:
<?php var_dump(getallheaders()); ?>
Then, in the nginx.conf file add:
proxy_set_header proxied nginx;
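
A sketch of both, assuming a hypothetical upstream at localhost:9999:

location /php {
    # header added to the response sent back to the client:
    add_header proxied nginx;
    # header added to the request forwarded to the proxied server:
    proxy_set_header proxied nginx;
    proxy_pass http://localhost:9999;
}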
Load balancer:
A load balancer should distribute requests to multiple servers to reduce the load on the
individual servers.
It should also provide redundancy: when one or more servers go down, our load balancer
can notice that and redirect traffic from the downed server to the ones that are up.

 I think I cannot do this in containers, since we need a lot of servers and I would need to

compose them in docker or kubernetes.
 I’m not going to do that in containers since it is much more complex, but we do need to
get the idea.
 When we do it in containers, we can either use the php-cli image, which can run an http
server, or we can use php-fpm, which uses FastCGI.

To create a load balancer, we need to create an upstream:

in the http context:

Then, in a location, proxy the requests to our upstream of servers:

The load balancing is round robin by default, which sends the requests to the servers in
order.
If one server is dead, nginx automatically keeps serving requests to the other servers that
are alive.
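
A sketch, assuming three hypothetical php servers on ports 10001-10003:

http {
    upstream php_servers {
        server localhost:10001;
        server localhost:10002;
        server localhost:10003;
    }

    server {
        listen 8888;

        location / {
            proxy_pass http://php_servers;
        }
    }
}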
Load Balancing Options:
Sticky Session:

When we use sticky sessions, a request is bound to the user’s IP and is always, when possible,
proxied to the same server.
To enable sticky sessions, in the upstream, add:
ip_hash;
This keeps a mapping in which each IP address has a corresponding proxy server.

Least Connection Load Balancing:

When using this option, nginx forwards each request to the server with the least active
connections, so the load is balanced instead of just forwarded to the next server in order.
To enable least-connection load balancing, add in the upstream:
least_conn;
