NGINX vs. Avi: Performance in the Cloud

Slow is the new down – according to Pingdom, the bounce rate for a website that takes 5 seconds to load is close to 40%. So what does that mean? If your site takes that long to load, roughly 40% of your visitors abandon it and go elsewhere. Users expect ever‑faster applications and better digital experiences, so improving your performance is crucial to growing your business. The last thing you want is to offer a product or service the market wants but lack the ability to deliver a fast, seamless experience to your customer base.

In this post we analyze the performance of two software application delivery controllers (ADCs), the Avi Vantage platform and the NGINX Application Platform. We measure the latency of client requests, an important metric for keeping clients engaged.

Testing Protocol and Metrics Collected

We used the load generation program wrk2 to emulate a client, making continuous requests over HTTPS for files during a defined period of time. The ADC data plane under test – the Avi Service Engine (SE) or NGINX Plus – acted as a reverse proxy, forwarding the requests to a backend web server and returning the response generated by the web server (a file) to the client. Across various test runs, we emulated real‑world traffic patterns by varying the number of requests per second (RPS) made by the client as well as the size of the requested file.

During the tests we collected two performance metrics:

  • Mean latency – Latency is defined as the amount of time between the client generating the request and receiving the response. The mean (average) is calculated by adding together the response times for all requests made during the testing period, then dividing by the number of requests.
  • 95th percentile latency – The latency measurements collected during the testing period are sorted from highest (most latency) to lowest. The highest 5% are discarded, and the highest remaining value is the 95th percentile latency.
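
As a concrete illustration of these two definitions (a minimal Python sketch of the arithmetic only, not how wrk2 computes its statistics), the metrics can be derived from a list of per-request latencies like this:

# Minimal sketch: compute mean and 95th percentile latency from a list of
# per-request latencies in milliseconds, following the definitions above.
def mean_latency(samples):
    return sum(samples) / len(samples)

def p95_latency(samples):
    ordered = sorted(samples, reverse=True)  # highest latency first
    discard = int(len(ordered) * 0.05)       # drop the highest 5% of samples
    return ordered[discard]                  # highest remaining value

latencies_ms = [1.9, 2.1, 2.0, 3.2, 2.2, 3.3]  # hypothetical samples
print(mean_latency(latencies_ms), p95_latency(latencies_ms))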

Testing Methodology

Client

We ran the following script on an Amazon Elastic Compute Cloud (EC2) instance:

wrk2 -t1 -c50 -d180s -Rx --latency https://server.example.com:443/

(For the specs of all EC2 instances used, including the client, see the Appendix.) To simulate multiple clients accessing a web‑based application at the same time, in each test the script spawned one wrk2 thread and established 50 connections with the ADC data plane, then continuously requested a static file for 3 minutes (the file size varied across test runs). These parameters correspond to the following wrk2 options:

  • -t option – Number of threads to create (1).
  • -c option – Number of TCP connections to create (50).
  • -d option – Number of seconds in the testing period (180, or 3 minutes).
  • -Rx option – Number of requests per second issued by the client (also referred to as client RPS). The x was replaced by the appropriate client RPS rate for the test run.
  • --latency option – Includes detailed latency percentile information in the output.

As detailed below, we varied the size of the requested file depending on the size of the EC2 instance: 1 KB with the smaller instance, and both 10 KB and 100 KB with the larger instance. We incremented the RPS rate by 1,000 with the 1 KB and 10 KB files, and by 100 with the 100 KB file. For each combination of RPS rate and file size, we conducted a total of 20 test runs. The graphs below report the average of the 20 runs.
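
For illustration only, the outer loop implied by this methodology might look like the following sketch. The run_wrk2 function is a hypothetical placeholder (its invocation of wrk2 with the flags described above and the parsing of its output are omitted); this is not the harness we actually used.

# Sketch of the test matrix: step the client RPS rate, repeat each combination
# 20 times, and report the average of the 20 runs. run_wrk2() is a hypothetical
# placeholder for running wrk2 once and returning a latency figure in ms.
def run_wrk2(rps, file_size_kb):
    raise NotImplementedError("invoke wrk2 and parse its latency report here")

def test_matrix(file_size_kb, max_rps, step):
    averaged = {}
    for rps in range(step, max_rps + 1, step):
        runs = [run_wrk2(rps, file_size_kb) for _ in range(20)]
        averaged[rps] = sum(runs) / len(runs)  # average of the 20 runs
    return averaged

# 1 KB and 10 KB files step by 1,000 RPS; the 100 KB file steps by 100 RPS:
# test_matrix(10, 10_000, 1_000); test_matrix(100, 1_000, 100)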

All requests were made over HTTPS. We used ECC with a 256‑bit key size and Perfect Forward Secrecy; the SSL cipher was ECDHE-ECDSA-AES256-GCM-SHA384.

Avi Reverse Proxy: Configuration and Versioning

We deployed Avi Vantage version 17.2.7 from the AWS Marketplace. Avi allows you to choose the AWS instance type for the SE, but not the underlying operating system. The Avi Controller preselects the OS on which it deploys SEs; at the time of testing, it was Ubuntu 14.04.5 LTS.

NGINX Reverse Proxy: Configuration and Versioning

Unlike the Avi Vantage platform, which bundles the control plane (its Controller component) and data plane (the SE), the NGINX control plane (NGINX Controller) is completely decoupled from the data plane (NGINX Plus). You can update NGINX Plus instances to any release, or change to any supported OS, without updating NGINX Controller. This gives you the flexibility to choose the release, OS, and OpenSSL version that provide optimum performance, making updates painless. Taking advantage of this flexibility, we deployed an NGINX Plus R17 instance running Ubuntu 16.04.

We configured NGINX Controller to deploy the NGINX Plus reverse proxy with a cache that accommodates 100 upstream connections. This improves performance by enabling keepalive connections between the reverse proxy and the upstream servers.

NGINX Plus Web Server: Configuration and Versioning

As shown in the preceding topology diagrams, we used NGINX Plus R17 as the web server in all tests.

Performance Results

Latency on a t2.medium Instance

For the first test, we deployed Avi SE via Avi Controller and NGINX Plus via NGINX Controller, each on a t2.medium instance. The requested file was 1 KB in size. The graphs show the client RPS rate on the X axis, and the latency in seconds on the Y axis. Lower latency at each RPS rate is better.

The pattern of results for the two latency measurements is basically the same, differing only in the time scale on the Y axis. Measured by both mean and 95th percentile latency, NGINX Plus served nearly 2.5 times as many RPS as Avi SE (6,000 vs. 2,500) before latency rose above a negligible level. Why did Avi SE incur such a dramatic increase in latency at such a low RPS rate? To answer this, let’s take a closer look at the 20 consecutive test runs on Avi SE at 2,500 client RPS, for both mean and 95th percentile latency.

As shown in the following table, in the 17th consecutive test run on Avi SE at 2,500 RPS, mean latency spikes dramatically to more than 14 seconds, while 95th percentile latency spikes to more than 35 seconds.

Test Run Mean Latency (ms) 95th Percentile Latency (ms)
1 1.926 3.245
2 6.603 10.287
3 2.278 3.371
4 1.943 3.227
5 2.015 3.353
6 6.633 10.167
7 1.932 3.277
8 1.983 3.301
9 1.955 3.333
10 7.223 10.399
11 2.048 3.353
12 2.021 3.375
13 1.930 3.175
14 1.960 3.175
15 6.980 10.495
16 1.934 3.289
17 14020 35350
18 27800 50500
19 28280 47500
20 26400 47800

To understand the reason for the sudden spike, it’s important first to understand that t2 instances are “burstable performance” instances. This means that they are allowed to consume a baseline amount of the available vCPU at all times (40% for our t2.medium instances). As they run, they also accrue CPU credits, each equivalent to 1 vCPU running at 100% utilization for 1 minute. To use more than its baseline vCPU allocation (to burst), the instance has to pay with credits, and when those are exhausted the instance is throttled back to its baseline CPU allocation.
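
To make that credit arithmetic concrete, here is a rough, illustrative model. The earn rate below is an assumption based on AWS’s published figure of roughly 24 credits per hour for a t2.medium; none of these numbers were measured as part of our tests.

# Rough model of EC2 CPU credit accounting (illustrative only).
# One CPU credit = one vCPU at 100% utilization for one minute.
BASELINE_VCPU = 0.8        # 40% of the t2.medium's 2 vCPUs
EARN_PER_MINUTE = 24 / 60  # assumption: a t2.medium earns ~24 credits per hour

def minutes_until_throttled(credit_balance, demanded_vcpu):
    spend_per_minute = max(demanded_vcpu - BASELINE_VCPU, 0)
    net_drain = spend_per_minute - EARN_PER_MINUTE
    if net_drain <= 0:
        return float("inf")  # baseline plus earned credits cover the demand
    return credit_balance / net_drain

# e.g. a 60-credit balance with both vCPUs pinned at 80% lasts 150 minutes
print(minutes_until_throttled(credit_balance=60, demanded_vcpu=1.6))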

This output from htop, running in detailed mode after the latency spike occurs during the 17th test run, shows the throttling graphically:

The lines labeled 1 and 2 correspond to the t2.medium instance’s 2 CPUs and depict the proportion of each CPU that’s being used for different purposes: green for user processes, red for kernel processes, and so on. The cyan is of particular interest to us, accounting as it does for most of the overall usage. It represents the CPU steal time, which in a generalized virtualization context is defined as the “percentage of time a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor”. For EC2 burstable performance instances, it’s the CPU capacity the instance is not allowed to use because it has exhausted its CPU credits.
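
If you want to put a number on the steal time that htop displays, one rough option is to read the aggregate steal counter from /proc/stat (Linux only; an illustrative sketch, not part of the test harness):

# Rough, Linux-only sketch: report "steal" as a share of all CPU time
# accumulated since boot, using the aggregate "cpu" line of /proc/stat.
def steal_share(path="/proc/stat"):
    with open(path) as f:
        fields = f.readline().split()
    # columns: user nice system idle iowait irq softirq steal guest guest_nice
    values = [int(v) for v in fields[1:]]
    steal = values[7] if len(values) > 7 else 0
    return steal / sum(values)

print(f"CPU steal since boot: {steal_share():.1%}")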

At RPS rates lower than 2,500, Avi SE can complete all 20 test runs without exceeding its baseline allocation and CPU credits. At 2,500 RPS, however, it runs out of credits during the 17th test run. Latency then spikes by orders of magnitude because Avi can’t use the baseline CPU allocation efficiently enough to process requests as fast as they’re coming in. NGINX Plus uses CPU much more efficiently than Avi SE, so it doesn’t exhaust its allocation and credits until 6,000 RPS.

Latency on a c4.large Instance

Burstable performance instances are most suitable for small, burstable workloads, but their performance degrades quickly when CPU credits become exhausted. In a real‑world deployment, it usually makes more sense to choose an instance type that consumes CPU in a consistent manner, not subject to CPU credit exhaustion. Amazon says its compute optimized instances are suitable for high‑performance web servers, so we repeated our tests with the Avi SE and NGINX Plus running on c4.large instances.

The testing setup and methodology are the same as for the t2.medium instance, except that the client requested larger files – 10 KB and 100 KB instead of 1 KB.

Latency for the 10 KB File

The following graphs show the mean and 95th percentile latency when the client requested a 10 KB file. As before, lower latency at each RPS rate is better.


As in the tests on t2.medium instances, the pattern of results for the two latency measurements is basically the same, differing only in the time scale on the Y axis. NGINX Plus again outperforms Avi SE, here by more than 70% – it doesn’t experience increased latency until about 7,200 RPS, whereas Avi SE can handle only 4,200 RPS before latency spikes.

As shown in the following table, our tests also revealed that Avi SE incurred more latency than NGINX Plus at every RPS rate, even before hitting the RPS rate (4,200) at which Avi SE’s latency spiked. At the RPS rate where the two products’ latency was closest (2,000 RPS), Avi SE’s mean latency was still 23x NGINX Plus’ (54 ms vs. 2.3 ms), and its 95th percentile latency was 79x (317 ms vs. 4.0 ms).

At 4,000 RPS, just below the latency spike for Avi SE, the multiplier grew to 69x for mean latency (160 ms vs. 2.3 ms) and 128x for 95th percentile latency (526 ms vs. 4.1 ms). At RPS rates higher than the latency spike for Avi SE, the multiplier exploded, with the largest difference at 6,000 RPS: 7,666x for mean latency (23 seconds vs. 3.0 ms) and 8,346x for 95th percentile latency (43.4 seconds vs. 5.2 ms).

Avi SE’s performance came closest to NGINX Plus’ after NGINX Plus experienced its own latency spike, at 7,200 RPS. Even so, at its best Avi SE’s latency was never less than 2x NGINX Plus’ (93.5 seconds vs. 45.0 seconds for 95th percentile latency at 10,000 RPS).

Client RPS    Avi SE Mean Latency    NGINX Plus Mean Latency    Avi SE 95th Percentile    NGINX Plus 95th Percentile
1000 84 ms 1.7 ms 540 ms 2.6 ms
2000 54 ms 2.3 ms 317 ms 4.0 ms
3000 134 ms 2.2 ms 447 ms 4.2 ms
4000 160 ms 2.3 ms 526 ms 4.1 ms
5000 8.8 s 2.7 ms 19.6 s 5.1 ms
6000 23.0 s 3.0 ms 43.4 s 5.2 ms
7000 33.0 s 4.3 ms 61 s 14.7 ms
8000 40.0 s 6.86 s 74.2 s 13.8 s
9000 46.8 s 16.6 s 85.0 s 31.1 s
10000 51.6 s 24.4 s 93.5 s 45.0 s
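
For convenience, the multipliers quoted above can be reproduced from the table with a short sketch like this (values transcribed from the table, with entries given in seconds converted to milliseconds):

# Derive the Avi SE vs. NGINX Plus latency multipliers quoted above from the
# table (all values in milliseconds; seconds were converted where needed).
table_ms = {
    # client RPS: (Avi mean, NGINX Plus mean, Avi 95th, NGINX Plus 95th)
    2000: (54, 2.3, 317, 4.0),
    4000: (160, 2.3, 526, 4.1),
    6000: (23000, 3.0, 43400, 5.2),
}
for rps, (avi_mean, ngx_mean, avi_p95, ngx_p95) in table_ms.items():
    print(f"{rps} RPS: mean {int(avi_mean / ngx_mean)}x, "
          f"95th percentile {int(avi_p95 / ngx_p95)}x")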

Latency for the 100 KB File

In the next set of test runs, we increased the size of the requested file to 100 KB.

For this file size, the maximum sustainable RPS rate drops dramatically for both products, but NGINX Plus still outperforms Avi SE, here by nearly 40% – NGINX Plus doesn’t experience significantly increased latency until about 720 RPS, whereas Avi SE can handle only 520 RPS before latency spikes.

As in the tests with 10 KB files, Avi SE also incurs more latency than NGINX Plus at every RPS rate. The following table shows that the multipliers are not as large as for the 10 KB file, but are still significant. The lowest multiplier before Avi SE’s spike is 17x, for mean latency at 400 RPS (185 ms vs. 10.7 ms).

Client RPS    Avi SE Mean Latency    NGINX Plus Mean Latency    Avi SE 95th Percentile    NGINX Plus 95th Percentile
100 100 ms 5.0 ms 325 ms 6.5 ms
200 275 ms 9.5 ms 955 ms 12.5 ms
300 190 ms 7.9 ms 700 ms 10.3 ms
400 185 ms 10.7 ms 665 ms 14.0 ms
500 500 ms 8.0 ms 1.9 s 10.4 ms
600 1.8 s 9.3 ms 6.3 s 12.4 ms
700 15.6 s 2.2 ms 32.3 s 3 ms
800 25.5 s 2.4 s 48.4 s 8.1 s
900 33.1 s 12.9 s 61.2 s 26.5 s
1000 39.2 s 20.8 s 71.9 s 40 s

CPU Usage on a c4.large Instance

Our final test runs focused on overall CPU usage by Avi SE and NGINX Plus on c4.large instances. We tested with both 10 KB and 100 KB files.

With the 10 KB file, Avi SE hits 100% CPU usage at roughly 5,000 RPS and NGINX Plus at roughly 8,000 RPS, representing 60% better performance by NGINX Plus. For both products, there is a clear correlation between hitting 100% CPU usage and the latency spike, which occurs at 4,200 RPS for Avi SE and 7,200 RPS for NGINX Plus.

The results are even more striking with 100 KB files. Avi SE handled a maximum of 520 RPS because at that point it hit 100% CPU usage. NGINX Plus’ performance was nearly 40% better, with a maximum rate of 720 RPS*. But notice that NGINX Plus was using less than 25% of the available CPU at that point – the rate was capped at 720 RPS not because of processing limits in NGINX Plus but because of network bandwidth limits in the test environment. In contrast to Avi SE, which maxed out the CPU on the EC2 instance, NGINX Plus delivered reliable performance with plenty of CPU cycles still available for other tasks running on the instance.

* The graph shows the maximum values as 600 and 800 RPS, but that is an artifact of the graphing software – we tested in increments of 100 RPS and the maxima occurred during the test runs at those rates.
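
As a rough sanity check on the bandwidth explanation, the payload throughput at the 720 RPS ceiling works out as follows (a back-of-the-envelope sketch that ignores TLS and HTTP framing overhead):

# Payload throughput at the observed 720 RPS ceiling with 100 KB responses.
# Ignores TLS and HTTP overhead; assumes 1 KB = 1,024 bytes.
rps = 720
response_bytes = 100 * 1024
gbits_per_second = rps * response_bytes * 8 / 1e9
print(f"~{gbits_per_second:.2f} Gbit/s of response payload")  # about 0.59 Gbit/s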

Summary

We extensively tested both NGINX and Avi Vantage in a realistic test case where the client continuously generates traffic for roughly 12 hours, steadily increasing the rate of requests.

The results can be summarized as follows:

  • NGINX uses CPU more efficiently, and as a result incurs less latency.

    • Running on burstable performance instances (t2.medium in our testing), Avi exhausted the CPU credit balance faster than NGINX.
    • Running on stable instances with no CPU credit limits (c4.large in our testing) and under heavy, constant client load, NGINX handled more RPS before experiencing a latency spike – roughly 70% more RPS with the 10 KB file and roughly 40% more RPS with the 100 KB file.
    • With the 100 KB file, 100% CPU usage directly correlated with the peak RPS that Avi could process. In contrast, NGINX Plus never hit 100% CPU – instead it was limited only by the available network bandwidth.
  • NGINX outperformed Avi in every latency test on the c4.large instances, even when the RPS rate was below the level at which latency spiked for Avi.

As we mentioned previously, Avi Vantage does not allow you to select the OS for Avi SE instances. As a consequence, if a security vulnerability is found in the OS, you can’t upgrade to a patched version provided by the OS vendor; you have to wait for Avi to release a new version of Avi Controller that fixes the security issue. This can leave all your applications exposed for potentially long periods.

Appendix: Amazon EC2 Specs

The following tables summarize the specs of the Amazon EC2 instances used in the test environment.

In each case, the instances are in the same AWS region.

Testing on t2.medium Instances

Role Instance Type vCPUs RAM (GB)
Client t2.micro 1 1
Reverse Proxy (Avi SE or NGINX Plus) t2.medium 2 4
NGINX Web Server t2.micro 1 1

Testing on c4.large Instances

Role Instance Type vCPUs RAM (GB)
Client c4.large 2 3.75
Reverse Proxy (Avi SE or NGINX Plus) c4.large 2 3.75
NGINX Web Server c4.large 2 3.75

Transitioning to DevOps: Advice from an NGINX Expert

Enterprises across the world today are becoming increasingly interested in leveraging DevOps to deliver applications and services at a fast pace. However, at most organizations, awareness of DevOps has remained confined to abstract principles rather than practical knowledge. There is a strong misunderstanding about the purpose of DevOps, and as a result many companies are far less confident in implementing agile operations on the ground.

In this post we share advice from Kevin Jones, a Global Solutions Architect at NGINX and an expert on managing DevOps environments, on successfully transitioning to DevOps.

Setting the Goals

“What are we trying to achieve? Is it to modernize our applications? To transform our infrastructure to achieve agile development?”

Those are the questions for organizations to ask about any transition towards DevOps, according to Kevin, who stresses that outlining a clear set of goals is the first step for uncertain organizations.

“I think it’s about identifying where the organization really wants to be, from a technical perspective,” Kevin explains, “but also making that match what the business is trying to do. If you’re Uber, for example, you probably have it somewhat figured out already, but there’s always a roadmap – there’s always somewhere you’re trying to get to.”

Choosing the Right DevOps Tools

One of the first decisions organizations need to make during a DevOps transformation is which tools to adopt for their new operational framework. At this stage, many organizations start by looking at which products or devices they are going to incorporate. There is a seemingly unlimited number of tools available to modern IT teams, so it’s important for companies to decide which tools best match their use cases.

Before purchasing a new tool or platform, it is vital to be proactive by meticulously testing each possible choice. If monitoring has been called out as an important feature, for example, you want to make sure that the tool you select reports on all the factors you care about – response time, memory usage, requests per second, and so on. For Kevin, “the key lies in making sure that they’re gonna solve the problems for those goals that you have”. Taking the time to ask those questions about the viability of new tools allows you to make a much more informed decision about the devices you adopt.

Fostering Cross-Functional Communication

Perhaps the most challenging obstacle most organizations face when transitioning to DevOps is fostering communication between the business and technical sides. All too often companies adopt DevOps processes but not the non‑siloed organizational structure required for effective collaboration between functional teams.

“In a lot of organizations,” Kevin explains, “DevOps teams don’t necessarily have much control over what’s going on. It’s kind of like a balancing act, with the business on one side and the infrastructure and technical people on the other.” Driving communication between the business and technical side of an organization, therefore, is fundamental to the DevOps philosophy.

As Kevin highlights, “it’s really helpful to inspire discussion with these teams to make sure that the right decisions are made”. If a department or stakeholder wants to adopt a particular tool, Kevin suggests that the DevOps team needs to be included in “helping to make a decision on whether it will improve the environment by adding value and not adding unnecessary complexity to the infrastructure. There are many tools out there to adopt, but as a group, teams can make educated decisions to accomplish the overall goal”.

For most DevOps teams, internal or cross‑team communication is a challenge due to their enormous workload of production escalations, ticket backlog, and change management planning, but Kevin suggests that this cannot be an excuse for a backseat approach. “Even though DevOps teams are typically busy,” he explains, “they should make themselves available for these kinds of discussions, being involved in the business as much as possible.” Opening a dialogue between disparate teams is as much a cultural change as it is an operational one.

Measuring Success in DevOps

Like any operational methodology, DevOps is about results. Being able to identify successes and failures is essential to gathering feedback when developing new processes. Many companies, however, find it difficult to measure new processes, such as DevOps, effectively.

From Kevin’s perspective, there are three main examples of measurement: cost, modernization, and performance. “I think there are different types of results,” he adds. “We can talk about economic results: what is it that you’re trying to achieve from a financial standpoint? Maybe it’s not related to money at all. Maybe it’s tracking a migration from using legacy applications or legacy infrastructure to more modern infrastructure.”

The third possible measure of success is the performance of the application over time. For this, he suggests asking some basic questions beforehand: “Is the application performing better than it was last month, or better than it was last week? Also, what is the end‑user experience?”

Successfully achieving a goal relies on open communication and unity between the business and technical sides of a company. Collaboration is necessary to outline what results you’re looking for and the tools you plan to use to deliver them. In regular discussions with the business teams, the DevOps team can provide expert guidance throughout the transition.

Making DevOps work for your organization is about starting with a clear end goal in mind. If you have a goal to work toward, you have a reference point to help you ask the right questions. As Kevin said, DevOps “is a lot about the culture and just being involved in the business as much as possible, every step of the way”.

To find out more about how NGINX can help transform your business through a DevOps approach, get in touch.

NGINX Updates Mitigate the August 2019 HTTP/2 Vulnerabilities

Today we are releasing updates to NGINX Open Source and NGINX Plus in response to the recent discovery of vulnerabilities in many implementations of HTTP/2. We strongly recommend upgrading all systems that have HTTP/2 enabled.

In May 2019, researchers at Netflix discovered a number of security vulnerabilities in several HTTP/2 server implementations. These were responsibly reported to each of the vendors and maintainers concerned. NGINX was vulnerable to three attack vectors, as detailed in the following CVEs:

We have addressed these vulnerabilities, and added other HTTP/2 security safeguards, in the following NGINX versions:

  • NGINX 1.16.1 (stable)
  • NGINX 1.17.3 (mainline)
  • NGINX Plus R18 P1

Using the NGINX Plus Ingress Controller for Kubernetes with OpenID Connect Authentication from Azure AD


NGINX Open Source already powers the default Ingress controller for Kubernetes, but NGINX Plus provides additional enterprise‑grade capabilities, including JWT validation, session persistence, and a large set of metrics. In this blog we show how to use NGINX Plus to perform OpenID Connect (OIDC) authentication for applications and resources behind the Ingress in a Kubernetes environment, in a setup that simplifies scaled rollouts.

The following graphic depicts the authentication process with this setup:

To create the setup, perform the steps in the sections that follow.

Notes:

  • This blog is for demonstration and testing purposes only, as an illustration of how to use NGINX Plus for authentication in Kubernetes using OIDC credentials. The setup is not necessarily covered by your NGINX Plus support contract, nor is it suitable for production workloads without modifications that address your organization’s security and governance requirements.
  • Several NGINX colleagues collaborated on this blog and I thank them for their contributions. I particularly want to thank the NGINX colleague (he modestly wishes to remain anonymous) who first came up with this use case!

Obtaining Credentials from the OpenID Connect Identity Provider (Azure Active Directory)

The purpose of OpenID Connect (OIDC) is to use established, well‑known user identities without increasing the attack surface of the identity provider (IdP, in OIDC terms). Our application trusts the IdP, so when it calls the IdP to authenticate a user, it is then willing to use the proof of authentication to control authorized access to resources.

In this example, we’re using Azure Active Directory (AD) as the IdP, but you can choose any of the many OIDC IdPs operating today. For example, our earlier blog post Authenticating Users to Existing Applications with OpenID Connect and NGINX Plus uses Google.

To use Azure AD as the IdP, perform the following steps, replacing the sample values with the ones appropriate for your application:

  1. If you don’t already use Azure, create an account.

  2. Navigate to the Azure portal and click Azure Active Directory in the left navigation column.

    In this blog we’re using features that are available in the Premium version of AD and not the standard free version. If you don’t already have the Premium version (as is the case for new accounts), you can start a free trial as prompted on the AD Overview page.

  3. Click App registrations in the left navigation column (we have minimized the global navigation column in the screenshot).

  4. On the App registrations page, click New registration.

  5. On the Register an application page that opens, enter values in the Name and Redirect URI fields, click the appropriate radio button in the Supported account types section, and then click the  Register  button. We’re using the following values:

    • Name – cafe
    • Supported account types – Account in this organizational directory only
    • Redirect URI (optional) – Web: https://cafe.nginx.net/_codexch

  6. Make note of the values in the Application (client) ID and Directory (tenant) ID fields on the cafe confirmation page that opens. We’ll add them to the cafe-ingress.yaml file we create in Setting Up the Sample Application to Use OpenID Connect.

  7. In the Manage section of the left navigation bar, click Certificates & secrets (see the preceding screenshot). On the page that opens, click the New client secret button.

  8. In the Add a client secret pop‑up window, enter the following values and click the  Add  button:

    • Description – client_secret
    • Expires – Never

  9. Copy the value for client_secret that appears, because it will not be recoverable after you close the window. In our example it is kn_3VLh]1I3ods*[DDmMxNmg8xxx.

  10. URL‑encode the client secret. There are a number of ways to do this but for a non‑production example we can use the urlencoder.org website. Paste the secret in the upper gray box, click the  > ENCODE <  button, and the encoded value appears in the lower gray box. Copy the encoded value for use in configuration files. In our example it is kn_3VLh%5D1I3ods%2A%5BDDmMxNmg8xxx.
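
If you prefer not to paste a secret into a third-party website, the same encoding can be done locally; for example, this short Python snippet (shown with the sample secret from the preceding steps) produces the same encoded value:

# URL-encode the client secret locally instead of pasting it into a website.
from urllib.parse import quote

client_secret = "kn_3VLh]1I3ods*[DDmMxNmg8xxx"  # sample value from step 9
print(quote(client_secret, safe=""))
# kn_3VLh%5D1I3ods%2A%5BDDmMxNmg8xxx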

Installing and Configuring Kubernetes

There are many ways to install and configure Kubernetes, but for this example we’ll use one of my favorite installers, Kubespray. You can install Kubespray from the GitHub repo.

You can create the Kubernetes cluster on any platform you wish. Here we’re using a MacBook. We’ve previously used VMware Fusion to create four virtual machines (VMs) on the MacBook. We’ve also created a custom network that supports connection to external networks using Network Address Translation (NAT). To enable NAT in Fusion, navigate to Preferences > Network, create a new custom network, and enable NAT by expanding the Advanced section and checking the option for NAT.

The VMs have the following properties:

Name   OS          IP Address       Alias IP Address   Memory   Disk Size
node1  CentOS 7.6  172.16.186.101   172.16.186.100     4 GB     20 GB
node2  CentOS 7.6  172.16.186.102   –                  2 GB     20 GB
node3  CentOS 7.6  172.16.186.103   –                  2 GB     20 GB
node4  CentOS 7.6  172.16.186.104   –                  2 GB     20 GB

Note that we set a static IP address for each node and created an alias IP address on node1. In addition we satisfied the following requirements for Kubernetes nodes:

  • Disabling swap
  • Allowing IP address forwarding
  • Copying the ssh key from the host running Kubespray (the MacBook) to each of the four VMs, to enable connecting over ssh without a password
  • Modifying the sudoers file on each of the four VMs to allow sudo without a password (use the visudo command and make the following changes):

    ## Allows people in group wheel to run all commands
    # %wheel           ALL=(ALL)    ALL

    ## Same thing without a password
    %wheel  ALL=(ALL)       NOPASSWD: ALL

We disabled firewalld on the VMs but for production you likely want to keep it enabled and define the ports through which the firewall accepts traffic. We have SELinux in enforcing mode.

On the MacBook we also satisfied all the Kubespray prerequisites, including installation of an Ansible version supported by Kubespray.

Kubespray comes with a number of configuration files. We’re replacing the values in several fields in two of them:

  • group_vars/all/all.yml

    # adding the ability to call upstream DNS
    upstream_dns_servers:
      - 8.8.8.8
      - 8.8.4.4
  • group_vars/k8s-cluster/k8s-cluster.yml

    kube_network_plugin: flannel
    # Make sure the following subnets aren't used by active networks
    kube_service_addresses: 10.233.0.0/18
    kube_pods_subnet: 10.233.64.0/18
    # change the cluster name to whatever you plan to use
    cluster_name: k8s.nginx.net
    # add so we get kubectl and the config files locally
    kubeconfig_localhost: true
    kubectl_localhost: true

We also create a new hosts.yml file with the following contents:

all:
  hosts:
    node1:
      ansible_host: 172.16.186.101
      ip: 172.16.186.101
      access_ip: 172.16.186.101
    node2:
      ansible_host: 172.16.186.102
      ip: 172.16.186.102
      access_ip: 172.16.186.102
    node3:
      ansible_host: 172.16.186.103
      ip: 172.16.186.103
      access_ip: 172.16.186.103
    node4:
      ansible_host: 172.16.186.104
      ip: 172.16.186.104
      access_ip: 172.16.186.104
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
        node4:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

Now we run the following command to create a four‑node Kubernetes cluster with node1 as the single master. (Kubernetes recommends three master nodes in a production environment, but one node is sufficient for our example and eliminates any possible issues with synchronization.)

$ ansible-playbook -i inventory/mycluster/hosts.yml -b cluster.yml

Creating a Docker Image for the NGINX Plus Ingress Controller

NGINX publishes a Docker image for the open source NGINX Ingress Controller, but we’re using NGINX Plus and so need to build a private Docker image with the certificate and key associated with our NGINX Plus subscription. We’re following the instructions at the GitHub repo for the NGINX Ingress Controller, but replacing the contents of the Dockerfile provided in that repo, as detailed below.

Note: Be sure to store the image in a private Docker Hub repository, not a standard public repo; otherwise your NGINX Plus credentials are exposed and subject to misuse. A free Docker Hub account entitles you to one private repo.

Replace the contents of the standard Dockerfile provided in the kubernetes-ingress repo with the following text. One important difference is that we include the NGINX JavaScript (njs) module in the Docker image by adding the nginx-plus-module-njs argument to the second apt-get install command.

FROM debian:stretch-slim

LABEL maintainer="NGINX Docker Maintainers "

ENV NGINX_PLUS_VERSION 18-1~stretch
ARG IC_VERSION

# Download certificate and key from the customer portal (https://cs.nginx.com)
# and copy to the build context
COPY nginx-repo.crt /etc/ssl/nginx/
COPY nginx-repo.key /etc/ssl/nginx/

# Make sure the certificate and key have correct permissions
RUN chmod 644 /etc/ssl/nginx/*

# Install NGINX Plus
RUN set -x \
  && apt-get update \
  && apt-get install --no-install-recommends --no-install-suggests -y apt-transport-https ca-certificates gnupg1 \
  && \
  NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
  found=''; \
  for server in \
    ha.pool.sks-keyservers.net \
    hkp://keyserver.ubuntu.com:80 \
    hkp://p80.pool.sks-keyservers.net:80 \
    pgp.mit.edu \
  ; do \
    echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
    apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
  done; \
  test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
  echo "Acquire::https::plus-pkgs.nginx.com::Verify-Peer \"true\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "Acquire::https::plus-pkgs.nginx.com::Verify-Host \"true\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "Acquire::https::plus-pkgs.nginx.com::SslCert     \"/etc/ssl/nginx/nginx-repo.crt\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "Acquire::https::plus-pkgs.nginx.com::SslKey      \"/etc/ssl/nginx/nginx-repo.key\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "Acquire::https::plus-pkgs.nginx.com::User-Agent  \"k8s-ic-$IC_VERSION-apt\";" >> /etc/apt/apt.conf.d/90nginx \
  && printf "deb https://plus-pkgs.nginx.com/debian stretch nginx-plus\n" > /etc/apt/sources.list.d/nginx-plus.list \
  && apt-get update && apt-get install -y nginx-plus=${NGINX_PLUS_VERSION} nginx-plus-module-njs \
  && apt-get remove --purge --auto-remove -y gnupg1 \
  && rm -rf /var/lib/apt/lists/* \
  && rm -rf /etc/ssl/nginx \
  && rm /etc/apt/apt.conf.d/90nginx /etc/apt/sources.list.d/nginx-plus.list


# Forward NGINX access and error logs to stdout and stderr of the Ingress
# controller process
RUN ln -sf /proc/1/fd/1 /var/log/nginx/access.log \
	&& ln -sf /proc/1/fd/1 /var/log/nginx/stream-access.log \
	&& ln -sf /proc/1/fd/1 /var/log/nginx/oidc_auth.log \
	&& ln -sf /proc/1/fd/2 /var/log/nginx/error.log \
	&& ln -sf /proc/1/fd/2 /var/log/nginx/oidc_error.log


EXPOSE 80 443

COPY nginx-ingress internal/configs/version1/nginx-plus.ingress.tmpl internal/configs/version1/nginx-plus.tmpl internal/configs/version2/nginx-plus.virtualserver.tmpl  /

RUN rm /etc/nginx/conf.d/* \
  && mkdir -p /etc/nginx/secrets

# Uncomment the line below to add the default.pem file to the image
# and use it as a certificate and key for the default server
# ADD default.pem /etc/nginx/secrets/default

ENTRYPOINT ["/nginx-ingress"]

We build the image from this Dockerfile, tag it 1.5.0-oidc, and push it to a private repo on Docker Hub under the name nginx-plus:1.5.0-oidc. Our private repo is called magicalyak, but we’ll remind you to substitute the name of your private repo as necessary below.

To prepare the Kubernetes nodes for the custom Docker image, we run the following commands on each of them. This enables Kubernetes to place the Ingress controller pod on the node of its choice. (You can also run the commands on just one node and then direct the Ingress controller to run exclusively on that node.) In the final command, substitute the name of your private repo for magicalyak:

$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ docker login # this prompts you to enter your Docker username and password
$ docker pull magicalyak/nginx-plus:1.5.0-oidc

At this point the Kubernetes nodes are running.

In order to use the Kubernetes dashboard, we run the following commands. The first enables kubectl on the local machine (the MacBook in this example). The second returns the URL for the dashboard, and the third returns the token we need to access the dashboard (we’ll paste it into the token field on the dashboard login page).

$ cp inventory/mycluster/artifacts/admin.conf ~/.kube/config
$ kubectl cluster-info # gives us the dashboard URL
$ kubectl -n kube-system describe secrets \
   `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` \
       | awk '/token:/ {print $2}'

Installing and Customizing the NGINX Plus Ingress Controller

We now install the NGINX Plus Ingress Controller in our Kubernetes cluster and customize the configuration for OIDC by incorporating the IDs and secret generated by Azure AD in Obtaining Credentials from an OpenID Connect Identity Provider.

Cloning the NGINX Plus Ingress Controller Repo

We first clone the kubernetes-ingress GitHub repo and change directory to the deployments subdirectory. Then we run kubectl commands to create the resources needed: the namespace and service account, the default server secret, the custom resource definition, and role‑based access control (RBAC).

$ git clone https://github.com/nginxinc/kubernetes-ingress
$ cd kubernetes-ingress/deployments
$ kubectl create -f common/ns-and-sa.yaml
$ kubectl create -f common/default-server-secret.yaml
$ kubectl create -f common/custom-resource-definitions.yaml
$ kubectl create -f rbac/rbac.yaml

Creating the NGINX ConfigMap

Now we replace the contents of the common/nginx-config.yaml file with the following, a ConfigMap that enables the njs module and includes configuration for OIDC.

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  #external-status-address: 172.16.186.101
  main-snippets: |
    load_module modules/ngx_http_js_module.so;
  ingress-template: |
    # configuration for {{.Ingress.Namespace}}/{{.Ingress.Name}}
    {{- if index $.Ingress.Annotations "custom.nginx.org/enable-oidc"}}
    {{$oidc := index $.Ingress.Annotations "custom.nginx.org/enable-oidc"}}
    {{- if eq $oidc "True"}}
    {{- $kv_zone_size := index $.Ingress.Annotations "custom.nginx.org/keyval-zone-size"}}
    {{- $refresh_time := index $.Ingress.Annotations "custom.nginx.org/refresh-token-timeout"}}
    {{- $session_time := index $.Ingress.Annotations "custom.nginx.org/session-token-timeout"}}
    {{- if not $kv_zone_size}}{{$kv_zone_size = "1M"}}{{end}}
    {{- if not $refresh_time}}{{$refresh_time = "8h"}}{{end}}
    {{- if not $session_time}}{{$session_time = "1h"}}{{end}}
    keyval_zone zone=opaque_sessions:{{$kv_zone_size}} state=/var/lib/nginx/state/opaque_sessions.json timeout={{$session_time}};
    keyval_zone zone=refresh_tokens:{{$kv_zone_size}} state=/var/lib/nginx/state/refresh_tokens.json timeout={{$refresh_time}};
    keyval $cookie_auth_token $session_jwt zone=opaque_sessions;
    keyval $cookie_auth_token $refresh_token zone=refresh_tokens;
    keyval $request_id $new_session zone=opaque_sessions;
    keyval $request_id $new_refresh zone=refresh_tokens;
    
    proxy_cache_path /var/cache/nginx/jwk levels=1 keys_zone=jwk:64k max_size=1m;
    
    map $refresh_token $no_refresh {
        ""      1;
        "-"     1;
        default 0;
    }
    
    log_format  main_jwt  '$remote_addr $jwt_claim_sub $remote_user [$time_local] "$request" $status '
                          '$body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
    
    js_include conf.d/openid_connect.js;
    js_set $requestid_hash hashRequestId;
    {{end}}{{end -}}
    {{range $upstream := .Upstreams}}
    upstream {{$upstream.Name}} {
        zone {{$upstream.Name}} 256k;
        {{if $upstream.LBMethod }}{{$upstream.LBMethod}};{{end}}
        {{range $server := $upstream.UpstreamServers}}
        server {{$server.Address}}:{{$server.Port}} max_fails={{$server.MaxFails}} fail_timeout={{$server.FailTimeout}}
            {{- if $server.SlowStart}} slow_start={{$server.SlowStart}}{{end}}{{if $server.Resolve}} resolve{{end}};{{end}}
        {{if $upstream.StickyCookie}}
        sticky cookie {{$upstream.StickyCookie}};
        {{end}}
        {{if $.Keepalive}}keepalive {{$.Keepalive}};{{end}}
        {{- if $upstream.UpstreamServers -}}
        {{- if $upstream.Queue}}
        queue {{$upstream.Queue}} timeout={{$upstream.QueueTimeout}}s;
        {{- end -}}
        {{- end}}
    }
    {{- end}}
    
    {{range $server := .Servers}}
    server {
        {{if not $server.GRPCOnly}}
        {{range $port := $server.Ports}}
        listen {{$port}}{{if $server.ProxyProtocol}} proxy_protocol{{end}};
        {{- end}}
        {{end}}
        {{if $server.SSL}}
        {{- range $port := $server.SSLPorts}}
        listen {{$port}} ssl{{if $server.HTTP2}} http2{{end}}{{if $server.ProxyProtocol}} proxy_protocol{{end}};
        {{- end}}
        ssl_certificate {{$server.SSLCertificate}};
        ssl_certificate_key {{$server.SSLCertificateKey}};
        {{if $server.SSLCiphers}}
        ssl_ciphers {{$server.SSLCiphers}};
        {{end}}
        {{end}}
        {{range $setRealIPFrom := $server.SetRealIPFrom}}
        set_real_ip_from {{$setRealIPFrom}};{{end}}
        {{if $server.RealIPHeader}}real_ip_header {{$server.RealIPHeader}};{{end}}
        {{if $server.RealIPRecursive}}real_ip_recursive on;{{end}}
        
        server_tokens "{{$server.ServerTokens}}";
        
        server_name {{$server.Name}};
        
        status_zone {{$server.StatusZone}};
        
        {{if not $server.GRPCOnly}}
        {{range $proxyHideHeader := $server.ProxyHideHeaders}}
        proxy_hide_header {{$proxyHideHeader}};{{end}}
        {{range $proxyPassHeader := $server.ProxyPassHeaders}}
        proxy_pass_header {{$proxyPassHeader}};{{end}}
        {{end}}
        
        {{if $server.SSL}}
        {{if not $server.GRPCOnly}}
        {{- if $server.HSTS}}
        set $hsts_header_val "";
        proxy_hide_header Strict-Transport-Security;
        {{- if $server.HSTSBehindProxy}}
        if ($http_x_forwarded_proto = 'https') {
        {{else}}
        if ($https = on) {
        {{- end}}
            set $hsts_header_val "max-age={{$server.HSTSMaxAge}}; {{if $server.HSTSIncludeSubdomains}}includeSubDomains; {{end}}preload";
        }
        
        add_header Strict-Transport-Security "$hsts_header_val" always;
        {{end}}
        
        {{- if $server.SSLRedirect}}
        if ($scheme = http) {
            return 301 https://$host:{{index $server.SSLPorts 0}}$request_uri;
        }
        {{- end}}
        {{end}}
        {{- end}}
        
        {{- if $server.RedirectToHTTPS}}
        if ($http_x_forwarded_proto = 'http') {
            return 301 https://$host$request_uri;
        }
        {{- end}}
        
        {{with $jwt := $server.JWTAuth}}
        auth_jwt_key_file {{$jwt.Key}};
        auth_jwt "{{.Realm}}"{{if $jwt.Token}} token={{$jwt.Token}}{{end}};
        
        {{- if $jwt.RedirectLocationName}}
        error_page 401 {{$jwt.RedirectLocationName}};
        {{end}}
        {{end}}
        
        {{- if $server.ServerSnippets}}
        {{range $value := $server.ServerSnippets}}
        {{$value}}{{end}}
        {{- end}}
        
        {{- range $healthCheck := $server.HealthChecks}}
        location @hc-{{$healthCheck.UpstreamName}} {
            {{- range $name, $header := $healthCheck.Headers}}
            proxy_set_header {{$name}} "{{$header}}";
            {{- end }}
            proxy_connect_timeout {{$healthCheck.TimeoutSeconds}}s;
            proxy_read_timeout {{$healthCheck.TimeoutSeconds}}s;
            proxy_send_timeout {{$healthCheck.TimeoutSeconds}}s;
            proxy_pass {{$healthCheck.Scheme}}://{{$healthCheck.UpstreamName}};
            health_check {{if $healthCheck.Mandatory}}mandatory {{end}}uri={{$healthCheck.URI}} interval=
                {{- $healthCheck.Interval}}s fails={{$healthCheck.Fails}} passes={{$healthCheck.Passes}};
        }
        {{end -}}
        
        {{- range $location := $server.JWTRedirectLocations}}
        location {{$location.Name}} {
            internal;
            return 302 {{$location.LoginURL}};
        }
        {{end -}}
        
        {{- if index $.Ingress.Annotations "custom.nginx.org/enable-oidc"}}
        {{- $oidc_resolver := index $.Ingress.Annotations "custom.nginx.org/oidc-resolver-address"}}
        {{- if not $oidc_resolver}}{{$oidc_resolver = "8.8.8.8"}}{{end}}
        resolver {{$oidc_resolver}};
        subrequest_output_buffer_size 32k;
        
        {{- $oidc_jwt_keyfile := index $.Ingress.Annotations "custom.nginx.org/oidc-jwt-keyfile"}}
        {{- $oidc_logout_redirect := index $.Ingress.Annotations "custom.nginx.org/oidc-logout-redirect"}}
        {{- $oidc_authz_endpoint := index $.Ingress.Annotations "custom.nginx.org/oidc-authz-endpoint"}}
        {{- $oidc_token_endpoint := index $.Ingress.Annotations "custom.nginx.org/oidc-token-endpoint"}}
        {{- $oidc_client := index $.Ingress.Annotations "custom.nginx.org/oidc-client"}}
        {{- $oidc_client_secret := index $.Ingress.Annotations "custom.nginx.org/oidc-client-secret"}}
        {{ $oidc_hmac_key := index $.Ingress.Annotations "custom.nginx.org/oidc-hmac-key"}}
        set $oidc_jwt_keyfile "{{$oidc_jwt_keyfile}}";
        set $oidc_logout_redirect "{{$oidc_logout_redirect}}";
        set $oidc_authz_endpoint "{{$oidc_authz_endpoint}}";
        set $oidc_token_endpoint "{{$oidc_token_endpoint}}";
        set $oidc_client "{{$oidc_client}}";
        set $oidc_client_secret "{{$oidc_client_secret}}";
        set $oidc_hmac_key "{{$oidc_hmac_key}}";
        {{end -}}
        
        {{range $location := $server.Locations}}
        location {{$location.Path}} {
            {{with $location.MinionIngress}}
            # location for minion {{$location.MinionIngress.Namespace}}/{{$location.MinionIngress.Name}}
            {{end}}
            {{if $location.GRPC}}
            {{if not $server.GRPCOnly}}
            error_page 400 @grpcerror400;
            error_page 401 @grpcerror401;
            error_page 403 @grpcerror403;
            error_page 404 @grpcerror404;
            error_page 405 @grpcerror405;
            error_page 408 @grpcerror408;
            error_page 414 @grpcerror414;
            error_page 426 @grpcerror426;
            error_page 500 @grpcerror500;
            error_page 501 @grpcerror501;
            error_page 502 @grpcerror502;
            error_page 503 @grpcerror503;
            error_page 504 @grpcerror504;
            {{end}}
            
            {{- if $location.LocationSnippets}}
            {{range $value := $location.LocationSnippets}}
            {{$value}}{{end}}
            {{- end}}
            
            {{with $jwt := $location.JWTAuth}}
            auth_jwt_key_file {{$jwt.Key}};
            auth_jwt "{{.Realm}}"{{if $jwt.Token}} token={{$jwt.Token}}{{end}};
            {{end}}
            
            grpc_connect_timeout {{$location.ProxyConnectTimeout}};
            grpc_read_timeout {{$location.ProxyReadTimeout}};
            grpc_set_header Host $host;
            grpc_set_header X-Real-IP $remote_addr;
            grpc_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            grpc_set_header X-Forwarded-Host $host;
            grpc_set_header X-Forwarded-Port $server_port;
            grpc_set_header X-Forwarded-Proto $scheme;
            
            {{- if $location.ProxyBufferSize}}
            grpc_buffer_size {{$location.ProxyBufferSize}};
            {{- end}}
            
            {{if $location.SSL}}
            grpc_pass grpcs://{{$location.Upstream.Name}};
            {{else}}
            grpc_pass grpc://{{$location.Upstream.Name}};
            {{end}}
            {{else}}
            proxy_http_version 1.1;
            {{if $location.Websocket}}
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            {{- else}}
            {{- if $.Keepalive}}proxy_set_header Connection "";{{end}}
            {{- end}}
            
            {{- if $location.LocationSnippets}}
            {{range $value := $location.LocationSnippets}}
            {{$value}}{{end}}
            {{- end}}
            
            {{ with $jwt := $location.JWTAuth }}
            auth_jwt_key_file {{$jwt.Key}};
            auth_jwt "{{.Realm}}"{{if $jwt.Token}} token={{$jwt.Token}}{{end}};
            {{if $jwt.RedirectLocationName}}
            error_page 401 {{$jwt.RedirectLocationName}};
            {{end}}
            {{end}}
            
            {{- if index $.Ingress.Annotations "custom.nginx.org/enable-oidc"}}
            auth_jwt "" token=$session_jwt;
            auth_jwt_key_request /_jwks_uri;
            error_page 401 @oidc_auth;
            {{end}}
            
            proxy_connect_timeout {{$location.ProxyConnectTimeout}};
            proxy_read_timeout {{$location.ProxyReadTimeout}};
            client_max_body_size {{$location.ClientMaxBodySize}};
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto {{if $server.RedirectToHTTPS}}https{{else}}$scheme{{end}};
            proxy_buffering {{if $location.ProxyBuffering}}on{{else}}off{{end}};
            {{- if $location.ProxyBuffers}}
            proxy_buffers {{$location.ProxyBuffers}};
            {{- end}}
            {{- if $location.ProxyBufferSize}}
            proxy_buffer_size {{$location.ProxyBufferSize}};
            {{- end}}
            {{- if $location.ProxyMaxTempFileSize}}
            proxy_max_temp_file_size {{$location.ProxyMaxTempFileSize}};
            {{- end}}
            {{if $location.SSL}}
            proxy_pass https://{{$location.Upstream.Name}}{{$location.Rewrite}};
            {{else}}
            proxy_pass http://{{$location.Upstream.Name}}{{$location.Rewrite}};
            {{end}}
            {{end}}
        }{{end}}
        {{if $server.GRPCOnly}}
        error_page 400 @grpcerror400;
        error_page 401 @grpcerror401;
        error_page 403 @grpcerror403;
        error_page 404 @grpcerror404;
        error_page 405 @grpcerror405;
        error_page 408 @grpcerror408;
        error_page 414 @grpcerror414;
        error_page 426 @grpcerror426;
        error_page 500 @grpcerror500;
        error_page 501 @grpcerror501;
        error_page 502 @grpcerror502;
        error_page 503 @grpcerror503;
        error_page 504 @grpcerror504;
        {{end}}
        {{if $server.HTTP2}}
        location @grpcerror400 { default_type application/grpc; return 400 "\n"; }
        location @grpcerror401 { default_type application/grpc; return 401 "\n"; }
        location @grpcerror403 { default_type application/grpc; return 403 "\n"; }
        location @grpcerror404 { default_type application/grpc; return 404 "\n"; }
        location @grpcerror405 { default_type application/grpc; return 405 "\n"; }
        location @grpcerror408 { default_type application/grpc; return 408 "\n"; }
        location @grpcerror414 { default_type application/grpc; return 414 "\n"; }
        location @grpcerror426 { default_type application/grpc; return 426 "\n"; }
        location @grpcerror500 { default_type application/grpc; return 500 "\n"; }
        location @grpcerror501 { default_type application/grpc; return 501 "\n"; }
        location @grpcerror502 { default_type application/grpc; return 502 "\n"; }
        location @grpcerror503 { default_type application/grpc; return 503 "\n"; }
        location @grpcerror504 { default_type application/grpc; return 504 "\n"; }
        {{end}}
        {{- if index $.Ingress.Annotations "custom.nginx.org/enable-oidc" -}}
        include conf.d/openid_connect.server_conf;
        {{- end}}
    }{{end}}

Now we deploy the ConfigMap in Kubernetes, and change directory back up to kubernetes-ingress.

$ kubectl create -f common/nginx-config.yaml
$ cd ..

Incorporating OpenID Connect into the NGINX Plus Ingress Controller

Since we are using OIDC resources, we’re taking advantage of the OIDC reference implementation provided by NGINX on GitHub. After cloning the nginx-openid-connect repo inside our existing kubernetes-ingress repo, we create ConfigMaps from the openid_connect.js and openid_connect.server_conf files.

$ git clone https://github.com/nginxinc/nginx-openid-connect
$ cd nginx-openid-connect
$ kubectl create configmap -n nginx-ingress openid-connect.js --from-file=openid_connect.js
$ kubectl create configmap -n nginx-ingress openid-connect.server-conf --from-file=openid_connect.server_conf

Now we incorporate the two files into our Ingress controller deployment as Kubernetes volumes of type ConfigMap, by adding the following directives to the existing nginx-plus-ingress.yaml file in the deployments/deployment subdirectory of our kubernetes-ingress repo:

    volumes:
    - name: openid-connect-js
      configMap:
        name: openid-connect.js
    - name: openid-connect-server-conf
      configMap:
        name: openid-connect.server-conf

We also add the following directives to nginx-plus-ingress.yaml to make the files accessible in the /etc/nginx/conf.d directory of our deployment:

         volumeMounts:
          - name: openid-connect-js
            mountPath: /etc/nginx/conf.d/openid_connect.js
            subPath: openid_connect.js
          - name: openid-connect-server-conf
            mountPath: /etc/nginx/conf.d/openid_connect.server_conf
            subPath: openid_connect.server_conf

Here’s the complete nginx-plus-ingress.yaml file for our deployment. If using it as the basis for your own deployment, replace magicalyak with the name of your private registry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #  prometheus.io/scrape: "true"
      #  prometheus.io/port: "9113"
    spec:
      containers:
      - image: magicalyak/nginx-plus:1.5.0-oidc
        imagePullPolicy: IfNotPresent
        name: nginx-plus-ingress
        args:
          - -nginx-plus
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
          - -report-ingress-status
          #- -v=3 # Enables extensive logging. Useful for troubleshooting.
          #- -external-service=nginx-ingress
          #- -enable-leader-election
          #- -enable-prometheus-metrics
          #- -enable-custom-resources
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        #- name: prometheus
        #  containerPort: 9113
        volumeMounts:
        - name: openid-connect-js
          mountPath: /etc/nginx/conf.d/openid_connect.js
          subPath: openid_connect.js
        - name: openid-connect-server-conf
          mountPath: /etc/nginx/conf.d/openid_connect.server_conf
          subPath: openid_connect.server_conf
      serviceAccountName: nginx-ingress
      volumes:
      - name: openid-connect-js
        configMap:
          name: openid-connect.js
      - name: openid-connect-server-conf
        configMap:
          name: openid-connect.server-conf

Creating the Kubernetes Service

We also need to define a Kubernetes service by creating a new file called nginx-plus-service.yaml in the deployments/service subdirectory of our kubernetes-ingress repo. We set the externalIPs field to the alias IP address (172.16.186.100) we assigned to node1 in Installing and Configuring Kubernetes, but you could use a NodePort or other options instead.

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    svc: nginx-ingress
spec:
  type: ClusterIP
  clusterIP:
  externalIPs:
  - 172.16.186.100
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  - name: https
    port: 443
    targetPort: https
    protocol: TCP
  selector:
    app: nginx-ingress

Deploying the Ingress Controller

With all of the YAML files in place, we run the following commands to deploy the Ingress controller and service resources in Kubernetes:

$ cd ../deployments
$ kubectl create -f deployment/nginx-plus-ingress.yaml
$ kubectl create -f service/nginx-plus-service.yaml
$ cd ..

At this point our Ingress controller is installed and we can focus on creating the sample resource for which we’re using OIDC authentication.
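
Before moving on, we can optionally confirm that the Ingress controller pod is running and that the service exposes the expected external IP address:

$ kubectl get pods -n nginx-ingress
$ kubectl get svc nginx-ingress -n nginx-ingress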

Setting Up the Sample Application to Use OpenID Connect

To test our OIDC authentication setup, we’re using a very simple application called cafe, which has tea and coffee service endpoints. It’s included in the examples/complete-example directory of the kubernetes-ingress repo on GitHub, and you can read more about it in NGINX and NGINX Plus Ingress Controllers for Kubernetes Load Balancing on our blog.

We need to make some modifications to the sample app, however – specifically, we need to insert the values we obtained from Azure AD into the YAML file for the application, cafe-ingress.yaml in the examples/complete-example directory.

We’re making two sets of changes, as shown in the full file below:

  1. We’re adding an annotations section. The file below uses the {client_key}, {tenant_key}, and {client_secret} variables to represent the values obtained from an IdP. To make it easier to track which values we’re referring to, in the list we’ve specified the literal values we obtained from Azure AD in the indicated step in Obtaining Credentials from the OpenID Connect Identity Provider. When creating your own deployment, substitute the values you obtain from Azure AD (or other IdP).

    • {client_key} – The value in the Application (client) ID field on the Azure AD confirmation page. For our deployment, it’s a2b20239-2dce-4306-a385-ac9xxx, as reported in Step 6.
    • {tenant_key} – The value in the Directory (tenant) ID field on the Azure AD confirmation page. For our deployment, it’s dd3dfd2f-6a3b-40d1-9be0-bf8xxx, as reported in Step 6.
    • {client_secret} – The URL-encoded version of the value in the Client secrets section in Azure AD. For our deployment, it’s kn_3VLh%5D1I3ods%2A%5BDDmMxNmg8xxx, as noted in Step 9.

    In addition, note that the value in the custom.nginx.org/oidc-hmac-key field is just an example. Substitute your own unique value that ensures nonce values are unpredictable.

  2. We’re changing the value in the hosts and host fields to cafe.nginx.net, and adding an entry for that domain to the /etc/hosts file on each of the four Kubernetes nodes, specifying the IP address from the externalIPs field in nginx-plus-service.yaml. In our deployment, that’s the alias address 172.16.186.100 we assigned in Installing and Configuring Kubernetes.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    custom.nginx.org/enable-oidc:  "True"
    custom.nginx.org/keyval-zone-size: "1m" #(default 1m)
    custom.nginx.org/refresh-token-timeout: "8h" #(default 8h)
    custom.nginx.org/session-token-timeout: "1h" #(default 1h)
    custom.nginx.org/oidc-resolver-address: "8.8.8.8" #(default 8.8.8.8)
    custom.nginx.org/oidc-jwt-keyfile: "https://login.microsoftonline.com/{tenant}/discovery/v2.0/keys"
    custom.nginx.org/oidc-logout-redirect: "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/logout"
    custom.nginx.org/oidc-authz-endpoint: "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize"
    custom.nginx.org/oidc-token-endpoint: "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    custom.nginx.org/oidc-client:  "{client_key}"
    custom.nginx.org/oidc-client-secret: "{client_secret}"
    custom.nginx.org/oidc-hmac-key:  "vC5FabzvYvFZFBzxtRCYDYX+"
spec:
  tls:
  - hosts:
    - cafe.nginx.net
    secretName: cafe-secret
  rules:
  - host: cafe.nginx.net
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
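
As a side note, the custom.nginx.org/oidc-client-secret annotation takes the URL‑encoded form of the client secret. One way to produce it from the raw secret is the following jq one‑liner (a hedged example; it assumes jq 1.5 or later is installed and uses a placeholder for the raw secret):

$ printf '%s' 'raw-client-secret-from-azure-ad' | jq -sRr @uri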

We now create the cafe resource in Kubernetes:

$ cd examples/complete-example
$ kubectl create -f cafe-secret.yaml
$ kubectl create -f cafe.yaml
$ kubectl create -f cafe-ingress.yaml
$ cd ../..

To verify that the OIDC authentication process is working, we navigate to https://cafe.nginx.net/tea in a browser. It prompts for our login credentials, authenticates us, and displays some basic information generated by the tea service. For an example, see NGINX and NGINX Plus Ingress Controllers for Kubernetes Load Balancing.

The post Using the NGINX Plus Ingress Controller for Kubernetes with OpenID Connect Authentication from Azure AD appeared first on NGINX.

]]>
#Culture@NGINX http://www.xungui.org.cn/blog/culture-at-nginx/ Mon, 08 Jul 2019 23:51:17 +0000 http://www.xungui.org.cn/?p=62355 In our Life@NGINX post, we answer the question “what is life like at NGINX?”. In this post, we want to expand on a related topic: company culture. Senior Vice President and General Manager of NGINX at F5, Gus Robertson, recently noted how compatible the NGINX and F5 cultures are. In this post, we want to expand [...]

Read More...

The post #Culture@NGINX appeared first on NGINX.

]]>
In our Life@NGINX post, we answer the question “what is life like at NGINX?”. In this post, we want to expand on a related topic: company culture. Senior Vice President and General Manager of NGINX at F5, Gus Robertson, recently noted how compatible the NGINX and F5 cultures are. In this post, we want to expand on why we think the NGINX culture within F5 is so special.

Culture Is Key to Sustaining Success

Company culture is a popular topic in today’s headlines, and an important one considering the number of hours each of us spends at work. But what is company culture? More importantly, what is NGINX’s culture?

When we talk about our culture, we consider how our values affect the past, present, and future of our employees, our products, and our customers. We remember that our experiences shape our opinions and outlook – from our CEO to our interns, from our Cork office to our Sydney office and beyond. Regardless of where our teams are in the world, or which function we work in, the five core values we have held since the beginning are Curiosity, Openness, Progress, Excellence, and Mutual Accountability.

Read on to discover more about what we mean by each of our core values at NGINX, and if what you read reminds you of yourself, then take a look at our current vacancies!

NGINX’s Core Values

NGINX Core Values Mountain and Flags

Curiosity

At NGINX, curiosity is more than an eagerness to learn. Curiosity is not being afraid to ask why. It is digging into our customers’ goals and how we can help to exceed expectations. It is looking to the future and challenging the status quo, with the goal of making our products and customer service even better. It is having the autonomy to pursue a new idea, and being encouraged to try. It is #funfactfriday, our tradition of profiling one of our teammates each week, highlighting interesting facts about his or her life history and interests outside work. Each of us looks for ways to improve, to come together, and to make NGINX better.

Openness

Openness has many facets at NGINX: it starts with our open source roots, grows with each person who joins us, and brings us together as friends and as a family. We bring our whole selves to work: our experiences, our knowledge, and our pun jokes. We partner with our community, our customers, and each other. We look at problems and solutions from all angles. We strive for openness about where we’ve been, where we are, and where to go next. We celebrate our achievements together. The most important part of openness, however, is our transparency. We believe in providing everyone with all the information they need to succeed. This permeates every facet of the business, as explained by F5’s CEO, François Locoh-Donou.

Progress

Progress is built into our DNA. NGINX co‑founder and CTO Igor Sysoev’s passion for solving the C10K problem led him to become the author of NGINX, and inspired by that passion, we continue to build products on the cutting edge of technology. Our desire to develop our professional skill set not only helps each of us on our career path within the NGINX business unit, but it also helps our teams to collectively push forward and reach further. Advancing our technology is important, and the NGINX family is the core to our success. We support each other individually through career growth, and as a family through initiatives like team‑building events, diversity discussions, and social gatherings.

Excellence

Excellence is what NGINX’s reputation is built on. It’s putting our best foot forward every day, building the best products for our open source users and enterprise customers alike, and supporting our colleagues, teams, and company. Excellence has been the goal of NGINX since its inception, and it’s the key to our identity within F5. Each of us strives for excellence individually; together, we strive for it as a team.

Mutual Accountability

Mutual accountability is a simple concept popularized by the former CEO of GE, Jeff Immelt. He sums up mutual accountability this way: Do your job, and take care of others along the way. To us, it means that we don’t operate in silos. We work as a team, supporting each other for the sake of our customers and our community. When we face challenges, we put our best foot forward because we know our teammates will do the same for us. Good teamwork has mutual accountability at its core, making it key to NGINX’s award‑winning and consistent success.

Join Us If You Agree

Curiosity, Openness, Progress, Excellence, and Mutual Accountability – separately, each of these values is important. Together, though, they unite and drive our underlying success. As we consider our values and as we work to maintain our culture as a positive environment, we know that curiosity fosters openness, that openness drives progress, that progress enables excellence, and that excellence depends upon mutual accountability. If our values speak to you, check out Life@NGINX on our blog, and apply to join the NGINX family.

The post #Culture@NGINX appeared first on NGINX.

]]>
#Life@NGINX http://www.xungui.org.cn/blog/life-at-nginx/ Mon, 08 Jul 2019 23:51:11 +0000 http://www.xungui.org.cn/?p=62350 One of the questions we hear most often from prospective employees is “what’s life like at NGINX?”. The answer is simple: we blend a tight-knit group of teams and colleagues who share a start‑up heritage and culture with the global support and benefits that come with being part of F5. In this blog post, we’ll [...]

Read More...

The post #Life@NGINX appeared first on NGINX.

]]>
One of the questions we hear most often from prospective employees is “what’s life like at NGINX?”.

The answer is simple: we blend a tight-knit group of teams and colleagues who share a start‑up heritage and culture with the global support and benefits that come with being part of F5. In this blog post, we’ll look at life at NGINX in our global offices.

NGINX Around the World

Life at NGINX is diverse. The NGINX business unit at F5 has office locations internationally – in San Francisco, Cork, Moscow, Singapore, Sydney, and Tokyo – not to mention our remote colleagues living and working in various places around the globe. While our team is scattered geographically, our culture is something that keeps us together as a community and makes Life at NGINX, and within the broader F5 family, what it is.

Our culture is driven by our core values: Curiosity, Openness, Progress, Excellence, and Mutual Accountability. As Gus Robertson, Senior Vice President and General Manager of NGINX at F5, explains, “Not everyone wants to climb mountains on a daily basis. Our team does.” And a team is just what we are, fueled by mutual accountability and close collaboration: a winning combination. Our cultural values align perfectly with F5, with the BeF5 mantra preserving all the values we came to know and love as NGINX employees.

Collaboration underpins us and, as a team, we love to come together. This happens in a lot of different ways – weekly office lunches and breakfasts, Happy Hours to celebrate the end of a successful week, team bowling outings, zip‑line adventures, summer barbecues, holiday parties, and sweet treats to celebrate birthdays, anniversaries, and soccer team victories.

Our Office Locations

San Francisco, CA

The original NGINX headquarters in San Francisco remains our largest office. We’re a mere 15-minute walk away from Market Street, a draw for tourists and shoppers alike. When you arrive at the office, you may well be greeted by some of our canine colleagues, such as Flower – a regular visitor! A short walk from the office brings you to the Yerba Buena Gardens, San Francisco Museum of Modern Art, and the bustling Union Square shopping district. And what about the gorgeous views in the Bay Area, including the famous Golden Gate Bridge and world‑renowned former prison, Alcatraz? With lots to do – cycling, sailing, hiking, eating, and drinking among them – San Francisco has it all.

Cork, Ireland

Cork is a bustling European city on the south coast of Ireland, and remains the NGINX business unit’s largest office in EMEA. It’s located in the heart of the city (or “town” as the locals call it) overlooking the River Lee on the South Mall. Our office is 15 minutes from Cork Airport and a stone’s throw from busy bars, cafés, and restaurants. Thanks to the huge choice of eateries, Lonely Planet regards Ireland’s second city as “arguably the best foodie scene in the country”. Did we mention all the activities that make Cork great? Music, dancing, rugby, soccer, Gaelic games, sailing, surfing, hiking, and more are all on our doorstep! Embrace the Wild Atlantic Way and come visit us in beautiful Cork, where céad míle fáilte (a hundred thousand welcomes) await you.

Moscow, Russia

Moscow is the capital of Russia and importantly for us, the home of Igor Sysoev, original author of NGINX Open Source and co‑founder of NGINX, Inc. Home to many members of our engineering team, the Moscow office is just 3 minutes’ walk from the Sportivnaya Metro station and 15 minutes’ walk from Luzhniki Stadium, the main venue for the 1980 Olympic Games. It’s also a short walk from the office to the banks of the Moskva River and 40 minutes to historic Gorky Park.

Singapore

Singapore’s strategic location at the southern tip of the Malaysian peninsula makes it an ideal location for NGINX in the Asia‑Pacific region. Our office is in Suntec City, a retail and office complex in the Central Business District that’s home to many global tech companies, and just a stone’s throw away from landmarks like the Marina Bay Financial Centre, the 250‑acre Gardens By The Bay park, and the iconic 5‑star Marina Bay Sands hotel.

Sydney, Australia

Our APJC regional office in Australia is in Pyrmont, Sydney, across the popular Darling Harbour from the city center. Situated in a once‑industrial part of the city, the NGINX office on Harris Street is surrounded by hip restaurants, bars, and cafes, all there to cater to both the professionals and young crowd that flock to this old‑school district.

Tokyo, Japan

Last but not least, our APJC regional office in Japan is in Tokyo Square Garden, at the heart of the capital’s central business district. It’s just a block away from the famous Ginza district, and not far from the Imperial Palace and Gardens which features the Edo Castle, dating back to 1457. Don’t forget to head to nearby Chidori-ga-fuchi Moat during the beautiful cherry blossom season!

Join Us and Help Shape Life at NGINX

Life at NGINX is only as good as the people who work with us and share our values. To find out more about what drives us as part of the F5 family, check out #LifeatNGINX on Twitter and read about #Culture@NGINX on our blog. If our values speak to you, apply to join the team at NGINX, or browse through the current open roles available throughout F5 and Aspen Mesh.

The post #Life@NGINX appeared first on NGINX.

]]>
Catching Up with the NGINX Application Platform: What’s New in 2019 http://www.xungui.org.cn/blog/nginx-application-platform-whats-new-2019/ Tue, 02 Jul 2019 23:34:09 +0000 http://www.xungui.org.cn/?p=62585 More than ever before, enterprises are recognizing that digital transformation is critical to their survival. In fact, the Wall Street Journal reports that executives currently see legacy operations and infrastructure as the #1 risk factor jeopardizing their ability to compete with companies that are “born digital”. Cloud, DevOps, and microservices are key technologies that accelerate [...]

Read More...

The post Catching Up with the NGINX Application Platform: What’s New in 2019 appeared first on NGINX.

]]>

More than ever before, enterprises are recognizing that digital transformation is critical to their survival. In fact, the Wall Street Journal reports that executives currently see legacy operations and infrastructure as the #1 risk factor jeopardizing their ability to compete with companies that are “born digital”.

Cloud, DevOps, and microservices are key technologies that accelerate digital transformation initiatives. And they’re paying off at companies that leverage them – according to a study from Freeform Dynamics, commissioned by CA Technologies, organizations that have adopted DevOps practices have achieved 60% higher growth in revenue and profits than their peers, and are 2x more likely to be growing at more than 20% annually. Enterprises are also modernizing their app architectures – 86% of respondents in a survey commissioned by LightStep expect microservices to be their default architecture in 5 years.

We unveiled the NGINX Application Platform in late 2017 to enable enterprises undergoing digital transformation to modernize legacy, monolithic applications as well as deliver new, microservices‑based applications and APIs at scale across a multi‑cloud environment. Enterprises deploy the NGINX Application Platform to improve agility, accelerate performance, and reduce capital and operational costs. Since the launch, we have been introducing enterprise‑grade capabilities at a regular pace to all of the component solutions, including NGINX Controller, NGINX Plus, and NGINX Unit. This blog outlines key updates to the NGINX Application Platform and the NGINX Ingress Controller for Kubernetes since the beginning of 2019.

The following list summarizes the new features and benefits introduced to each component since the beginning of 2019. For details, see the linked sections that follow.

  • NGINX Controller Load Balancing Module
    • Policy‑based approach to configuration management using configuration templates – Prevent misconfigurations and ensure consistency; save time; easily scale application of configurations across multiple NGINX Plus instances
    • ServiceNow integration – Streamline troubleshooting workflows
  • NGINX Controller API Management Module
    • Filtering and searching; environment‑specific API definition visualizations – Improved usability: more flexible API definition; easy to filter and search by hostname and APIs
  • NGINX Plus
    • Dynamic certificate loading; shared configuration across cluster members – Simplified configuration workflows
    • Support for port ranges in server listen configuration – NGINX Plus can be deployed as a proxy for an FTP server in passive mode
    • Certificates and keys can be stored in the in‑memory key‑value store; support for opaque session tokens – Enhanced security: secrets cannot be obtained from deployment images or filesystem backups, and no personally identifiable information is stored on the client
    • TCP connections can be closed immediately when a server goes offline – Improved reliability: clients reconnect to a healthy server right away, eliminating delays due to timeouts
  • NGINX Unit
    • Experimental (beta‑level) support for Java servlet containers – Support for the most popular enterprise programming language brings the number of supported languages to seven
    • Internal routing – Multiple applications can be hosted on the same IP address and port, with granular control of the target application
  • NGINX Ingress Controller for Kubernetes
    • NGINX custom resources – Native Kubernetes‑style API simplifies configuration
    • Additional Prometheus metrics – Quick detection of performance and availability issues with the Ingress Controller itself
    • Load balancing traffic to external resources – Easier migration to Kubernetes environments
    • Dedicated Helm chart repository – Easy and effortless deployment of NGINX in Kubernetes environments

Updates in NGINX Controller 2.0–2.4

We have adopted a SaaS‑like upgrade cadence for NGINX Controller – we release a new version consisting of new features (sometimes minor, sometimes major) and bug fixes on a monthly basis.

Load Balancing Module in NGINX Controller 2.0–2.4

The Load Balancing Module in NGINX Controller enables you to configure, validate, and monitor all your NGINX Plus load balancers at scale across a multi‑cloud environment.

There are two primary enhancements to the Load Balancing Module:

  • Policy‑based approach to configuration management – You can create configuration templates for your NGINX Plus load balancers, including environment‑specific templates – for example, one for production environments and another for test environments. These templates save time, help you achieve scale, and eliminate issues due to misconfiguration. They can be version‑controlled, and you can revert to a ‘golden image’ in case there are any problems.
  • Integration with ServiceNow – You can streamline troubleshooting workflows by forwarding alerts from NGINX Controller to ServiceNow.

For more details about the changes to the Load Balancing Module, see our blog.

API Management Module in NGINX Controller 2.0–2.4

The API Management Module empowers Infrastructure & Operations and DevOps teams to achieve full API lifecycle management including defining, publishing, securing, managing traffic, and monitoring APIs, without compromising performance. Built on an innovative architecture, and using NGINX as the data‑plane component, it is well‑suited to the needs of both traditional applications and modern distributed applications based on microservices.

The API Management Module became generally available in January of 2019. Since then, we’ve been hard at work on usability improvements to the API Definitions interface:

  • Entry point hostnames are color‑coded to indicate the state of the NGINX Plus API gateway configuration:
    • Grey – Config not pushed to the entry point
    • Green – Config pushed and all associated instances are online
    • Yellow – Config pushed but some instances remain offline
    • Red – Config pushed but all instances are offline
  • New card layout for API definitions to easily visualize and access different environments
  • Ability to filter by API name and hostname
  • Warnings when parts of the API definition are not routed to backend services
  • Error responses for unknown API endpoints (404 errors) can be customized

For details on defining APIs with the API Management Module, see our blog.

NGINX Plus R18

NGINX Plus’ flexibility, portability, and seamless integration with CI/CD automation tools help accelerate enterprise adoption of DevOps. NGINX Plus R18 advances this objective by simplifying configuration workflows and enhancing the security and reliability of your applications. Key enhancements in NGINX Plus R18 include:

  • Simplified configuration workflows

    • Dynamic certificate loading – TLS certificates are loaded into memory only when a request is made for a matching hostname. You can save time and effort by automating the upload of certificates and private keys into the key‑value store using the NGINX Plus API. This is especially ideal for deployments with large numbers of certificates or when configuration reloads are very frequent.
    • Support for port ranges for server configurations – You can specify port ranges for a virtual server to listen on, rather than just individual ports. This also allows NGINX Plus to act as a proxy for an FTP server in passive mode.
    • Simplified cluster management – NGINX Plus R15 introduced synchronization of runtime state across a cluster of NGINX Plus instances. This release enhances clustering by enabling the same clustering configuration to be used on all members of the cluster. This is particularly helpful in dynamic environments such as AWS Auto Scaling groups or containerized clusters.
  • Enhanced security

    • Minimizing exposure of certificates – With this release, NGINX Plus can load certificates and the associated private keys directly from the in‑memory key‑value store. Not storing secrets on disk means attackers can no longer obtain copies of them from deployment images or backups of the filesystem.
    • Support for opaque session tokens – NGINX Plus supports OpenID Connect authentication and single sign‑on for backend applications. NGINX Plus R18 adds support for opaque session tokens issued by OpenID Connect. Opaque tokens contain no personally identifiable information about the user so that no sensitive information is stored at the client.
  • Improved reliability

    • Enabling clients to reconnect upon failed health checks – NGINX Plus active health checks continually probe the health of upstream servers to ensure traffic does not get forwarded to servers that are offline. With this release, client connections can also be terminated immediately when a server goes offline for any of several reasons. As client applications then reconnect, they are proxied to a healthy backend server, thereby improving the reliability of your applications.
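
To make the dynamic certificate loading workflow above more concrete, here is a hedged sketch of pushing a certificate and key into the key‑value store with the NGINX Plus API and curl. It assumes the API is enabled with write access at http://localhost:8080/api, that key‑value zones named ssl_certs and ssl_keys already exist in the configuration (both assumptions for illustration), and that API version 5 matches your release; in practice the PEM contents must also be JSON‑escaped (newlines as \n).

$ curl -X POST -d '{"www.example.com":"<JSON-escaped PEM certificate>"}' http://localhost:8080/api/5/http/keyvals/ssl_certs
$ curl -X POST -d '{"www.example.com":"<JSON-escaped PEM private key>"}' http://localhost:8080/api/5/http/keyvals/ssl_keys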

For more details about NGINX Plus R18, see our blog.

NGINX Unit 1.8.0

NGINX Unit is an open source lightweight, flexible, dynamic, polyglot app server that currently supports seven different languages. So far this year we have improved NGINX Unit with:

  • Experimental support for Java servlet containers – According to a report from the Cloud Foundry Foundation, an open source Platform-as-a-service project, Java is the dominant language for enterprise development. Addressing a request from many of our users, we introduced beta‑level support for Java servlet containers in NGINX Unit 1.8.0. Java is a registered trademark of Oracle and/or its affiliates.
  • Internal routing – Internal routing enables granular control over the target application. With this support, you can run many applications on the same IP address and port. NGINX Unit can determine which application to forward requests to based on host, URI, and HTTP method. Sample use cases for internal routing include:
    • POST requests that are handled by a special app, maybe written in a different language.
    • Requests to administrative URLs that need a different security group and fewer application processes than the main application.

For more details about NGINX Unit 1.8.0, see our blog.

NGINX Ingress Controller for Kubernetes 1.5.0

NGINX is the most deployed Ingress controller in Kubernetes environments. The NGINX Ingress Controller for Kubernetes provides advanced load balancing capabilities including session persistence, WebSocket, HTTP/2, and gRPC for complex applications consisting of many microservices. Release 1.5.0 introduces the following capabilities:

  • Defining ingress policies using NGINX custom resources – This is a new approach to configuration that follows the Kubernetes API style so that developers get the same experience as when using the Ingress resource. With this approach, users don’t have to use annotations – all features must now be part of the spec. It also enables us to support RBAC and other capabilities in a scalable and predictable manner.
  • Additional metrics – Provided by a streamlined Prometheus exporter, new metrics have been introduced in this release to quickly detect performance degradations and “uptime” of NGINX Ingress Controller itself.
  • Support for load balancing traffic to external services – The NGINX Plus Ingress Controller can now load balance requests to destinations outside of the cluster, making it easier to migrate to Kubernetes environments.
  • Dedicated Helm chart repository – Helm is becoming the preferred way to package applications on Kubernetes. Release 1.5.0 of the NGINX Plus Ingress Controller is available via our Helm repo.
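
For reference, installing the chart from that repository looks roughly like the following (a hedged sketch: the repository URL and chart name reflect conventions at the time of writing, the release name my-nginx-ingress is arbitrary, and the --name flag is Helm 2 syntax – omit it on Helm 3; NGINX Plus‑specific values are set with --set options described in the chart’s README):

$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update
$ helm install --name my-nginx-ingress nginx-stable/nginx-ingress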

For more details about NGINX Ingress Controller for Kubernetes 1.5.0, see our blog.

Continued Investments in NGINX

Looking ahead, now that we are part of F5 Networks we are planning to bolster our investments in open source as well as the NGINX Application Platform. F5 is committed to the NGINX open source technology, developers, and community. We anticipate that the additional investments will inject new vigor into open source initiatives and will enable us to develop open source features, host more open source events, and produce more open source content. Read this blog from Gus Robertson, GM of the NGINX business unit, on F5’s commitment to open source.

We also expect more cross‑pollination across our solutions – we want to leverage the rich security capabilities that F5 offers and embed them into NGINX solutions. F5 solutions will become more agile, flexible, and portable without compromising on reliability, security, and governance. We are excited for what comes next. Follow us on Twitter and LinkedIn to learn about updates to the NGINX Application Platform.

Please attend NGINX Conf 2019 to learn more about our vision for the future with F5. You will hear about new product releases and our roadmap plans as well as have an opportunity to learn from industry luminaries.

The post Catching Up with the NGINX Application Platform: What’s New in 2019 appeared first on NGINX.

]]>
Ask NGINX | June 2019 http://www.xungui.org.cn/blog/ask-nginx-june-2019/ Thu, 27 Jun 2019 21:52:49 +0000 http://www.xungui.org.cn/?p=62549 Do you have an NGINX Plus offline installer for RHEL/CentOS/Oracle Linux 7.4+? Yes. It takes advantage of the yumdownloader utility. Here’s the procedure: Follow the installation instructions in the NGINX Plus Admin Guide, through Step 5. (In other words, don’t run the yum install command for NGINX Plus itself.) Install yumdownloader, if you haven’t already: Download the latest version of [...]

Read More...

The post Ask NGINX | June 2019 appeared first on NGINX.

]]>
Do you have an NGINX Plus offline installer for RHEL/CentOS/Oracle Linux 7.4+?

Yes. It takes advantage of the yumdownloader utility. Here’s the procedure:

  1. Follow the installation instructions in the NGINX Plus Admin Guide, through Step 5. (In other words, don’t run the yum install command for NGINX Plus itself.)

  2. Install the yumdownloader utility (provided by the yum-utils package), if you haven’t already:

    # yum install yum-utils
  3. Download the latest version of the NGINX Plus package:

    # yumdownloader nginx-plus
  4. Copy the NGINX Plus rpm package to each target machine and run this command there to install it:

    # rpm -ihv rpm-package-name

For further help, or for help with other operating systems, get in touch with the NGINX support team.

Can I install NGINX Plus on Ubuntu?

Yes, and it’s just one of the many operating systems supported by NGINX Plus. As of this writing, NGINX Plus supports the following versions of Ubuntu:

  • 14.04 LTS (Trusty)
  • 16.04 LTS (Xenial)
  • 18.04 (Bionic)
  • 18.10 (Cosmic)

For installation instructions, see the NGINX Plus Admin Guide. For the complete list of supported operating systems, see NGINX Plus Releases.
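
In outline, installation on Ubuntu comes down to placing your repository certificate and key, adding the NGINX Plus apt repository, and installing the package. The following is an abridged, hedged sketch that assumes you have already downloaded nginx-repo.crt and nginx-repo.key from the customer portal; it omits the repository signing key and apt client‑certificate configuration, so treat the Admin Guide as authoritative:

$ sudo mkdir -p /etc/ssl/nginx
$ sudo cp nginx-repo.crt nginx-repo.key /etc/ssl/nginx/
$ printf "deb https://plus-pkgs.nginx.com/ubuntu $(lsb_release -cs) nginx-plus\n" | sudo tee /etc/apt/sources.list.d/nginx-plus.list
$ sudo apt-get update
$ sudo apt-get install -y nginx-plus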

What are F5’s plans for investing in NGINX open source projects post‑acquisition?

F5 values the NGINX open source community. We’re committed not just to maintaining, but to increasing, investment in open source initiatives, as well as expanding community engagement and contributing to the open source community in an even more substantial way.

F5 is committed to providing the same level of access to the open source code as before the acquisition.

Will F5 employees be making contributions to NGINX OSS projects?

Yes. F5 employees in the NGINX business unit will continue to contribute to NGINX Open Source, NGINX Unit, and other projects hosted at nginx.org. Many F5 employees already contribute to other third‑party open source projects, such as the F5 repository on GitHub. Along with F5 customers, they also contribute code to the 300,000 user‑strong F5 DevCentral community.

Ask Us!

Got a question for our Ask NGINX series? Leave a comment below or get in touch with our team, and we’ll be happy to help!

The post Ask NGINX | June 2019 appeared first on NGINX.

]]>
OpenTracing for NGINX and NGINX Plus http://www.xungui.org.cn/blog/opentracing-nginx-plus/ Mon, 17 Jun 2019 17:02:00 +0000 http://www.xungui.org.cn/?p=62494 For all its benefits, a microservices architecture also introduces new complexities. One is the challenge of tracking requests as they are processed, with data flowing among all the microservices that make up the application. A new methodology called distributed (request) tracing has been invented for this purpose, and OpenTracing is a specification and standard set [...]

Read More...

The post OpenTracing for NGINX and NGINX Plus appeared first on NGINX.

]]>
For all its benefits, a microservices architecture also introduces new complexities. One is the challenge of tracking requests as they are processed, with data flowing among all the microservices that make up the application. A new methodology called distributed (request) tracing has been invented for this purpose, and OpenTracing is a specification and standard set of APIs intended to guide design and implementation of distributed tracing tools.

In NGINX Plus Release 18 (R18), we added the NGINX OpenTracing module to our dynamic modules repository (it has been available as a third‑party module on GitHub for a couple of years now). A big advantage of the NGINX OpenTracing module is that by instrumenting NGINX and NGINX Plus for distributed tracing you get tracing data for every proxied application, without having to instrument the applications individually.

In this blog we show how to enable distributed tracing of requests for NGINX or NGINX Plus (for brevity we’ll just refer to NGINX Plus from now on). We provide instructions for two distributed tracing services (tracers, in OpenTracing terminology), Jaeger and Zipkin. (For a list of other tracers, see the OpenTracing documentation.) To illustrate the kind of information provided by tracers, we compare request processing before and after NGINX Plus caching is enabled.

A tracer has two basic components:

  • An agent which collects tracing data from applications running on the host where it is running. In our case, the “application” is NGINX Plus and the agent is implemented as a plug‑in.
  • A server (also called the collector) which accepts tracing data from one or more agents and displays it in a central UI. You can run the server on the NGINX Plus host or another host, as you choose.

Installing a Tracer Server

The first step is to install and configure the server for the tracer of your choice. We’re providing instructions for Jaeger and Zipkin; adapt them as necessary for other tracers.

Installing the Jaeger Server

We recommend the following method for installing the Jaeger server. You can also download Docker images at the URL specified in Step 1.

  1. Navigate to the Jaeger download page and download the Linux binary archive (at the time of writing, jaeger-1.12.0-linux-amd64.tar.gz).

  2. Move the archive to /usr/bin/jaeger (creating the directory first if necessary), unpack it, and run the all‑in‑one binary.

    $ mkdir /usr/bin/jaeger
    $ mv jaeger-1.12.0-linux-amd64.tar.gz /usr/bin/jaeger
    $ cd /usr/bin/jaeger
    $ tar xvzf jaeger-1.12.0-linux-amd64.tar.gz
    $ sudo rm -rf jaeger-1.12.0-linux-amd64.tar.gz
    $ cd jaeger-1.12.0-linux-amd64
    $ ./jaeger-all-in-one
  3. Verify that you can access the Jaeger UI in your browser, at http://Jaeger-server-IP-address:16686/ (16686 is the default port for the Jaeger server).

Installing the Zipkin Server

  1. Download and run a Docker image of Zipkin (we’re using port 9411, the default).

    $ docker run -d -p 9411:9411 openzipkin/zipkin
  2. Verify that you can access the Zipkin UI in your browser, at http://Zipkin-server-IP-address:9411/.

Installing and Configuring a Tracer Plug‑In

Run these commands on the NGINX Plus host to install the plug‑in for either Jaeger or Zipkin.

Installing the Jaeger Plug‑In

  1. Install the Jaeger plug‑in. The following wget command is for x86‑64 Linux systems:

    $ cd /usr/local/lib
    $ wget https://github.com/jaegertracing/jaeger-client-cpp/releases/download/v0.4.2/libjaegertracing_plugin.linux_amd64.so -O /usr/local/lib/libjaegertracing_plugin.so

    Instructions for building the plug‑in from source are available on GitHub.

  2. Create a JSON‑formatted configuration file for the plug‑in, named /etc/jaeger/jaeger-config.json, with the following contents. We’re using the default port for the Jaeger server, 6831:

    {
      "service_name": "nginx",
      "sampler": {
        "type": "const",
        "param": 1
      },
      "reporter": {
        "localAgentHostPort": "Jaeger-server-IP-address:6831"
      }
    }

    For details about the sampler object, see the Jaeger documentation.

Installing the Zipkin Plug‑In

  1. Install the Zipkin plug‑in. The following wget command is for x86‑64 Linux systems:

    $ cd /usr/local/lib
    $ wget -O - https://github.com/rnburn/zipkin-cpp-opentracing/releases/download/v0.5.2/linux-amd64-libzipkin_opentracing_plugin.so.gz | gunzip -c > /usr/local/lib/libzipkin_opentracing_plugin.so
  2. Create a JSON‑formatted configuration file for the plug‑in, named /etc/zipkin/zipkin-config.json, with the following contents. We’re using the default port for the Zipkin server, 9411:

    {
      "service_name": "nginx",
      "collector_host": "Zipkin-server-IP-address",
      "collector_port": 9411
    }

    For details about the configuration objects, see the JSON schema on GitHub.

Configuring NGINX Plus

Perform these instructions on the NGINX Plus host.

  1. Install the NGINX OpenTracing module according to the instructions in the NGINX Plus Admin Guide.

  2. Add the following load_module directive in the main (top‑level) context of the main NGINX Plus configuration file (/etc/nginx/nginx.conf):

    load_module modules/ngx_http_opentracing_module.so;
  3. Add the following directives to the NGINX Plus configuration.

    If you use the conventional configuration scheme, put the directives in a new file called /etc/nginx/conf.d/opentracing.conf. Also verify that the following include directive appears in the http context in /etc/nginx/nginx.conf:

    http {
        include /etc/nginx/conf.d/*.conf;
    }
    • The opentracing_load_tracer directive enables the tracer plug‑in. Uncomment the directive for either Jaeger or Zipkin as appropriate.
    • The opentracing_tag directives make NGINX Plus variables available as OpenTracing tags that appear in the tracer UI.
    • To debug OpenTracing activity, uncomment the log_format and access_log directives. If you want to replace the default NGINX access log and log format with this one, uncomment the directives, then change the three instances of “opentracing” to “main“. Another option is to log OpenTracing activity just for the traffic on port 9001 – uncomment the log_format and access_log directives and move them into the server block.
    • The server block sets up OpenTracing for the sample Ruby application described in the next section.
    # Load a vendor tracer
    #opentracing_load_tracer /usr/local/lib/libjaegertracing_plugin.so
    #                        /etc/jaeger/jaeger-config.json
    #opentracing_load_tracer /usr/local/lib/libzipkin_opentracing_plugin.so
    #                        /etc/zipkin/zipkin-config.json
    
    # Enable tracing for all requests
    opentracing on;
    
    # Set additional tags that capture the value of NGINX Plus variables
    opentracing_tag bytes_sent $bytes_sent;
    opentracing_tag http_user_agent $http_user_agent;
    opentracing_tag request_time $request_time;
    opentracing_tag upstream_addr $upstream_addr;
    opentracing_tag upstream_bytes_received $upstream_bytes_received;
    opentracing_tag upstream_cache_status $upstream_cache_status;
    opentracing_tag upstream_connect_time $upstream_connect_time;
    opentracing_tag upstream_header_time $upstream_header_time;
    opentracing_tag upstream_queue_time $upstream_queue_time;
    opentracing_tag upstream_response_time $upstream_response_time;
    
    #uncomment for debugging
    # log_format opentracing '$remote_addr - $remote_user [$time_local] "$request" '
    #                        '$status $body_bytes_sent "$http_referer" '
    #                        '"$http_user_agent" "$http_x_forwarded_for" '
    #                        '"$host" sn="$server_name" '
    #                        'rt=$request_time '
    #                        'ua="$upstream_addr" us="$upstream_status" '
    #                        'ut="$upstream_response_time" ul="$upstream_response_length" '
    #                        'cs=$upstream_cache_status' ;
    #access_log /var/log/nginx/opentracing.log opentracing;
     
    server {
        listen 9001;
    
        location / {
            # The operation name used for OpenTracing Spans defaults to the name of the
            # 'location' block, but uncomment this directive to customize it.
            #opentracing_operation_name $uri;
    
            # Propagate the active Span context upstream, so that the trace can be 
            # continued by the backend.
            opentracing_propagate_context;
    
            # Make sure that your Ruby app is listening on port 4567
            proxy_pass http://127.0.0.1:4567;
        }
    }
  4. Validate and reload the NGINX Plus configuration:

    $ nginx -t
    $ nginx -s reload

Setting Up the Sample Ruby App

With the tracer and NGINX Plus configuration in place, we create a sample Ruby app that shows what OpenTracing data looks like. The app lets us measure how much NGINX Plus caching improves response time. When the app receives a request like the following HTTP GET request for /, it waits a random amount of time (between 2 and 5 seconds) before responding.

$ curl http://NGINX-Plus-IP-address:9001/
  1. Install and set up both Ruby and Sinatra (an open source software web application library and domain‑specific language written in Ruby as an alternative to other Ruby web application frameworks).

  2. Create a file called app.rb with the following contents:

    #!/usr/bin/ruby
    
    require 'sinatra'
    
    get '/*' do
        out = "<h1>Ruby simple app</h1>" + "\n"
    
        #Sleep a random time between 2s and 5s
        sleeping_time = rand(4)+2
        sleep(sleeping_time)
        puts "slept for: #{sleeping_time}s."
        out += '<p>some output text</p>' + "\n"
    
        return out
    end
  3. Make app.rb executable and run it:

    $ chmod +x app.rb
    $ ./app.rb

Tracing Response Times Without Caching

We use Jaeger and Zipkin to show how long it takes NGINX Plus to respond to a request when caching is not enabled. For each tracer, we send five requests.

Output from Jaeger Without Caching

Here are the five requests displayed in the Jaeger UI (most recent first):

Here’s the same information on the Ruby app console:

- -> /
slept for: 3s. 
127.0.0.1 - - [07/Jun/2019: 10:50:46 +0000] "GET / HTTP/1.1" 200 49 3.0028
127.0.0.1 - - [07/Jun/2019: 10:50:43 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 2s. 
127.0.0.1 - - [07/Jun/2019: 10:50:56 +0000] "GET / HTTP/1.1" 200 49 2.0018 
127.0.0.1 - - [07/Jun/2019: 10:50:54 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s. 
127.0.0.1 - - [07/Jun/2019: 10:53:16 +0000] "GET / HTTP/1.1" 200 49 3.0029 
127.0.0.1 - - [07/Jun/2019: 10:53:13 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 4s.
127.0.0.1 - - [07/Jun/2019: 10:54:03 +0000] "GET / HTTP/1.1" 200 49 4.0030 
127.0.0.1 - - [07/Jun/2019: 10:53:59 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s.
127.0.0.1 - - [07/Jun/2019: 10:54:11 +0000] "GET / HTTP/1.1" 200 49 3.0012
127.0.0.1 - - [07/Jun/2019: 10:54:08 UTC] "GET / HTTP/1.0" 200 49

In the Jaeger UI we click on the first (most recent) request to view details about it, including the values of the NGINX Plus variables we added as tags:

Output from Zipkin Without Caching

Here are another five requests in the Zipkin UI:

The same information on the Ruby app console:

- -> /
slept for: 2s.
127.0.0.1 - - [07/Jun/2019: 10:31:18 +0000] "GET / HTTP/1.1" 200 49 2.0021 
127.0.0.1 - - [07/Jun/2019: 10:31:16 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s.
127.0.0.1 - - [07/Jun/2019: 10:31:50 +0000] "GET / HTTP/1.1" 200 49 3.0029 
127.0.0.1 - - [07/Jun/2019: 10:31:47 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s.
127.0.0.1 - - [07/Jun/2019: 10:32:08 +0000] "GET / HTTP/1.1" 200 49 3.0026 
127.0.0.1 - - [07/Jun/2019: 10:32:05 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 3s.
127.0.0.1 - - [07/Jun/2019: 10:32:32 +0000] "GET / HTTP/1.1" 200 49 3.0015 
127.0.0.1 - - [07/Jun/2019: 10:32:29 UTC] "GET / HTTP/1.0" 200 49
- -> /
slept for: 5s.
127.0.0.1 - - [07/Jun/2019: 10:32:52 +0000] "GET / HTTP/1.1" 200 49 5.0030 
127.0.0.1 - - [07/Jun/2019: 10:32:47 UTC] "GET / HTTP/1.0" 200 49

In the Zipkin UI we click on the first request to view details about it, including the values of the NGINX Plus variables we added as tags:

Tracing Response Times with Caching

Configuring NGINX Plus Caching

We enable caching by adding directives in the opentracing.conf file we created in Configuring NGINX Plus.

  1. In the http context, add this proxy_cache_path directive:

    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
  2. In the server block, add the following proxy_cache and proxy_cache_valid directives:

    proxy_cache one;
    proxy_cache_valid any 1m;
  3. Validate and reload the configuration:

    $ nginx -t
    $ nginx -s reload
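
Before turning to the tracer UIs, a quick way to confirm the cache is working is to time two consecutive requests with curl; the exact numbers depend on the random delay in the Ruby app, but the first request should take several seconds while an immediate repeat should come back from the cache in a few milliseconds:

$ curl -s -o /dev/null -w '%{time_total}s\n' http://NGINX-Plus-IP-address:9001/
$ curl -s -o /dev/null -w '%{time_total}s\n' http://NGINX-Plus-IP-address:9001/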

Output from Jaeger with Caching

Here’s the Jaeger UI after two requests.

The first response (labeled 13f69db) took 4 seconds. NGINX Plus cached the response, and when the request was repeated about 15 seconds later, the response took less than 2 milliseconds because it came from the NGINX Plus cache.

Looking at the two requests in detail explains the difference in response time. For the first request, upstream_cache_status is MISS, meaning the requested data was not in the cache. The Ruby app added a delay of 4 seconds.

For the second request, upstream_cache_status is HIT. Because the data is coming from the cache, the Ruby app cannot add a delay, and the response time is under 2 milliseconds. The empty upstream_* values also indicate that the upstream server was not involved in this response.

Output from Zipkin with Caching

The display in the Zipkin UI for two requests with caching enabled paints a similar picture:

And again looking at the two requests in detail explains the difference in response time. The response is not cached for the first request (upstream_cache_status is MISS) and the Ruby app (coincidentally) adds the same 4-second delay as in the Jaeger example.

The response has been cached before we make the second request, so upstream_cache_status is HIT.

Conclusion

The NGINX OpenTracing module enables tracing of NGINX Plus requests and responses, and provides access to NGINX Plus variables using OpenTracing tags. Different tracers can also be used with this module.

For more details about the NGINX OpenTracing module, visit the NGINX OpenTracing module repo on GitHub.

To try OpenTracing with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.

The post OpenTracing for NGINX and NGINX Plus appeared first on NGINX.

]]>
7 Reasons to Attend NGINX Conf 2019 http://www.xungui.org.cn/blog/7-reasons-to-attend-nginx-conf-2019/ Thu, 13 Jun 2019 19:15:05 +0000 http://www.xungui.org.cn/?p=62509 About six weeks ago we announced that NGINX Conf 2019 will be taking place in Seattle, WA, from September 10 for two full days of keynotes, breakout sessions, case studies, community networking, and so much more. We hope to see you there, but if you need to convince yourself (or your manager!) of the benefits of attending, [...]

Read More...

The post 7 Reasons to Attend NGINX Conf 2019 appeared first on NGINX.

]]>

About six weeks ago we announced that NGINX Conf 2019 will be taking place in Seattle, WA, from September 10 for two full days of keynotes, breakout sessions, case studies, community networking, and so much more. We hope to see you there, but if you need to convince yourself (or your manager!) of the benefits of attending, then read on.

NGINX Conf is the highlight of our calendar, when we get a chance to meet with businesses that are at every point on the journey to digital transformation – from taking the first steps towards modernizing hardware‑based delivery of legacy applications to implementing service mesh for advanced microservices architectures. NGINX Conf is the focal point for the NGINX community, our partners, and most importantly, you.

Why You (and Your Team) Should Attend NGINX Conf 2019

Looking to make the business case to attend? At NGINX Conf 2019, you can connect with other attendees and members of the NGINX and F5 family, upskill at NGINX training sessions, and gain insights into optimized web performance from speakers and industry experts. Here are seven reasons why you should attend this year’s event for all things NGINX:

  • 1. Learn more about our vision for the future with F5.

    Now that NGINX has become part of the F5 family, learn more about what we have in store for our ADC and WAF solutions, including our plans and joint roadmaps for these solutions. Discover what’s in store for NGINX Open Source, customers, and partners as we become part of F5 Networks.

  • 2. Stay on top of latest and greatest product updates.

    Hear about new products and major updates to existing products, including the NGINX Controller API Management Module and Load Balancing Module, from our product gurus. Learn about various use cases and best practices to help you get the most from NGINX.

  • 3. Build the foundation for microservices.

    Be among the first to hear about new capabilities from NGINX that simplify the supporting infrastructure for your microservices, allowing for easy configuration, deployment, and communication between your applications.

  • 4. Save more than 25% on hands‑on training.

    Learn from both experts and experienced peers with hands‑on training sessions, where various specialists will answer all your technical questions. From developing NGINX modules to advanced load balancing, our training sessions and workshops ensure that you are equipped to unlock the full potential of NGINX. Sign up and take advantage of our full‑day training for just $550, a $200 discount off the regular price!

  • 5. Learn about the latest open source innovations.

    NGINX is proudly committed to driving our open source innovation further every day. Be among the first to get NGINX news and technical details on some of the world’s most popular open source projects.

  • 6. Directly influence the NGINX product strategy.

    Meet with NGINX leadership team, along with our partners and esteemed experts, and provide feedback on the tools and capabilities designed to solve your toughest digital problems.

  • 7. Accelerate ROI on your NGINX investments.

    Learn from NGINX technical experts, fellow customers and community members, and users how to optimize NGINX deployments and simplify the technology stack for both traditional applications and distributed ones based on microservices.

With opportunities to upskill, learn, and network with experts and community leaders, NGINX Conf is an eagerly awaited highlight on the DevOps and open source event calendar. Register today!

The post 7 Reasons to Attend NGINX Conf 2019 appeared first on NGINX.

]]>