NGINX and the future of the web server


Robertson, CEO of NGINX: “Websites today aren’t really just websites, they’re apps.”

Image: Colin Barker

The NGINX web server company promotes itself as “the secret heart of the modern web” and claims to power 60% of the world’s busiest websites.

CEO Gus Robertson is originally from Australia and has big ambitions for the company: while NGINX already has a significant presence in the United States, he now plans to raise its profile around the world. ZDNet recently spoke to Robertson to find out more.

ZDNet: Tell me about NGINX.

Robertson: There are several different categories in the web server market. Apache is the original web server, built as an open source project 20 or 25 years ago.

It was designed for a different kind of Internet than the one we have today. Back then, websites were really brochureware. Today, websites aren’t really websites anymore, they’re apps. You connect to them, you share, you download videos, and you use a host of other features.

NGINX started in 2004 as an open source project written by one of our founders, Igor Sysoev. He wrote the software himself, 100 percent.

Where did he come from?

Moscow. When he started NGINX, he was really trying to scratch an itch he had had for a while. At the company where he worked, he had to handle large numbers of simultaneous inbound connections to the application he was working on, and Apache really couldn’t scale beyond 1,000 or maybe 2,000 concurrent connections.

He tried writing modules for Apache to scale it beyond those limits. There was actually quite a challenge on the Internet at the time, known as the C10K problem, to see who could break through the 10,000-concurrent-connection barrier.

Igor went home, wrote the code, tested it, crossed the 10,000 barrier, and open-sourced the code. That was in 2004. He ran the project on his own until 2011, by which point it had grown too big for one person: around 50 million websites were using the software.

He was getting just too many requests for features and improvements, so he got together with two of his friends, formed a company and called it NGINX Inc. The idea was that they would be able to invest in more engineering and support staff around the project, and then monetize it somehow.

I joined the company in 2012 when there were seven guys in Moscow and myself in the US. Since then, we have been able to develop the business and now have more than 120 employees worldwide.

With this next step in our expansion, we have opened an office for the EMEA region in Cork, Ireland, and plan to recruit over 100 people there over the next three years. The business has grown year on year, and over 317 million websites now use our software, including 58% of the busiest sites in the world.

We are now the default and most popular web server for any site with serious traffic. Think of sites like Uber, Netflix, BuzzFeed, the BBC, and SoundCloud.

Has it been a simple growth path?

Simple in terms of adoption and growth. It really took off around 2007, 2008. That’s when the way people interacted with websites changed.

That’s when websites really changed from brochure websites to sites with real content and real apps.

It was at this point that broadband became widely adopted and smartphones began to appear. There were so many connections, and so many people coming to websites, that sites had to be able to scale.

NGINX became the de facto standard because of our architecture, which was very different from Apache’s.

Apache has a process-driven architecture, rather than an event-driven one. This means it handles traffic in a very different way than we do.

What’s the difference between the way you and Apache handle traffic?

Rather than allocating a separate chunk of memory and a separate process for each connection and keeping it open, we only take memory and CPU when there is a request on a connection, and we pass that request to the upstream server.

We don’t hold onto resources when a connection is idle, so we don’t tie up CPU and memory, and we can handle traffic asynchronously.
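To make that concrete, here is a minimal sketch of how the event-driven model shows up in an NGINX configuration. The directives are standard NGINX, but the numbers and the upstream address are illustrative:

    worker_processes auto;       # one event-loop worker per CPU core

    events {
        worker_connections 4096; # connections each worker can juggle at once
    }

    http {
        upstream app_backend {           # hypothetical application server
            server 127.0.0.1:8080;
        }

        server {
            listen 80;

            location / {
                # Hand the request to the upstream; the worker's event loop
                # keeps servicing other connections while the app responds.
                proxy_pass http://app_backend;
            }
        }
    }

Because each connection is just a small piece of state inside a worker’s event loop, rather than a dedicated process, idle connections cost almost nothing.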

Would you describe your way of working as totally flexible in this sense?

Exactly. A good analogy is a bank teller. You don’t assign a dedicated teller to every person in the bank, standing idle the whole time you’re there in case you need to deposit or withdraw money. You go to the counter when you actually want to make a deposit or a withdrawal.

So where does the speed come from?

This is due to the lightweight nature of our software. Although we have an incredible amount of capability and functionality in the software, it is still less than 200,000 lines of code. If you install it, it is less than 3MB.

We’re very manic about not adding an extra line of code if it doesn’t have to be there. It’s very light and powerful software; we don’t want it to become bloatware.

What do you attribute the success of the business to? Is it just the quality of the software?

We are the world’s number one web server for successful websites. But what we’ve also done, in our commercial offering, is extend the open source product with more features, taking it from a web server to an application delivery platform (ADP).

Now an ADP does more than just deliver applications. It does load balancing, it does caching, it has security capabilities, and it acts as an application firewall. It performs health checks, monitoring and so on.

It is the natural bump in the wire at which to authenticate incoming traffic, or to terminate and encrypt connections. It is the natural place to cache commonly used content, like images, videos or HTML pages.

You can dramatically speed up the performance of an application by putting the heavy HTTP work at the front of the application, so that the application server on the back end only has to run the application logic.
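As a rough sketch of that front-of-application role, the configuration below terminates SSL, caches static content and load-balances across back-end application servers; the paths, addresses and sizes are hypothetical:

    http {
        # Cache commonly requested content such as images, video and HTML.
        proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m max_size=1g;

        # Spread requests across the application servers (hypothetical addresses).
        upstream app_servers {
            server 10.0.0.11:8080;
            server 10.0.0.12:8080;
        }

        server {
            # Terminate TLS here so the back-end servers speak plain HTTP.
            listen 443 ssl;
            ssl_certificate     /etc/nginx/certs/example.pem;
            ssl_certificate_key /etc/nginx/certs/example.key;

            location / {
                proxy_cache static_cache;       # serve repeat requests from cache
                proxy_pass  http://app_servers; # otherwise hand off to the apps
            }
        }
    }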

Think about how apps are delivered today, with Amazon.com as an example. Amazon.com is made up of roughly 178 individual services, which means each individual service is there to do a very specific thing.

If you type in Nike shoes, for example, you get a lot of stuff. You get reviews, you get recommendations, you get sizes; you get all of this information, and each piece comes from a separate service, or microservice, that is focused on delivering that one thing.

While you are doing this, all of those services need to communicate, and the way they communicate is through HTTP traffic. And how do they do that? They have NGINX.
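A hedged sketch of that pattern: one NGINX instance routing HTTP requests to separate microservices. The service names and addresses are invented for illustration:

    # Hypothetical microservice groups behind a single NGINX instance.
    upstream reviews_svc         { server 10.0.1.10:9000; }
    upstream recommendations_svc { server 10.0.1.20:9000; }

    server {
        listen 80;

        # Route each path to the microservice responsible for it; calls
        # between services can flow through the same proxy layer.
        location /reviews/         { proxy_pass http://reviews_svc; }
        location /recommendations/ { proxy_pass http://recommendations_svc; }
    }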

So how do you manage a smaller site or application?

The same issues are there for the little guys as they are for the Amazons. You look at how you handle incoming connections and how you handle encrypted connections: whether I’m a bank or a small site, I still need to encrypt that traffic.

And if I’m using an app, I still expect a response time of less than a second. The problems that affect a small website are exactly the same as those that affect a large site; it’s just a different order of magnitude.

How do you keep it all safe?

There are several ways. One would be SSL. Another is the web application firewall: the ability to inspect different kinds of traffic and monitor that traffic. We have a lot of discrete functions you can configure on the back end. For example, you can say, “I know all of my end users, so as users come in I can whitelist the ones I know or blacklist the ones I don’t.”

I can rate-limit users, so I can cap the requests a certain user can make. That is really important, not only for weathering incoming DDoS attacks; you can also be DDoSed internally by some other API.
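Those controls map onto standard NGINX directives. A minimal sketch, with hypothetical networks, rates and certificate paths:

    http {
        # Rate limiting: track clients by IP, allowing 10 requests per second.
        limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

        upstream app_servers { server 127.0.0.1:8080; }  # hypothetical back end

        server {
            listen 443 ssl;                              # encrypt the traffic
            ssl_certificate     /etc/nginx/certs/example.pem;
            ssl_certificate_key /etc/nginx/certs/example.key;

            location /api/ {
                # Whitelist known networks and reject everyone else.
                allow 192.168.0.0/16;
                deny  all;

                # Apply the per-client rate limit, tolerating short bursts.
                limit_req zone=per_ip burst=20;

                proxy_pass http://app_servers;
            }
        }
    }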

And is all this simple?

We have a configuration file in NGINX, and NGINX runs on Linux, so it’s driven from the command line. We don’t have a configuration dashboard per se.

But we do have a dashboard that shows you monitoring and analytics for all incoming traffic.
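In practice that command-line workflow is short. Two standard nginx commands cover the common case:

    nginx -t          # validate the configuration file before applying it
    nginx -s reload   # re-read the configuration without dropping connections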

What are the biggest issues your customers are currently facing?

DDoS is huge: it’s one thing that can bring down a site. But plain traffic load is the most common.

If you look at the industry in the United States, Thanksgiving is one of the biggest [days for website traffic], along with Black Friday and Cyber Monday. Every year, big sites go down on those days because they didn’t plan for or anticipate how much traffic they were going to get. And that’s good traffic, not bad traffic. It’s not a DDoS attack, but it can bring down a site just the same.

People describe NGINX as a kind of shock absorber sitting in front of your website.

But surely there must be occasions when traffic can overload a site?

There are limits but, because NGINX does not block on traffic, we can still handle very large volumes. We’re not saying we can handle everything. If you are overwhelmed by a massive DDoS attack, then that is what it is. But NGINX is very good at absorbing the shock of massive amounts of Internet traffic.

If there is a limit, it is bandwidth.

What else is new with NGINX?

We extended NGINX into NGINX Plus with load balancing, caching, SSL, monitoring and analytics. What all of this does is put us up against another category of technology, the application delivery controller (ADC), made by companies like F5 and Citrix. They took a hardware approach to solving application acceleration.

We are seeing a transition from hardware to software, and from a network-centric perspective to a software-centric one. Many of our customers are migrating from those expensive hardware appliances to our commercial product, NGINX Plus: because of the cost savings, because it’s software, because it’s application-centric, and because it goes to the cloud and is cloud native.
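For a sense of what the commercial product adds on top of the open source server, the sketch below uses active health checks, which are an NGINX Plus feature; the addresses and timings are illustrative:

    upstream app_servers {
        zone app_servers 64k;     # shared memory so workers share state
        least_conn;               # send each request to the least-busy server
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app_servers;
            # Actively probe the upstreams and take failing ones out
            # of rotation (NGINX Plus only).
            health_check interval=5 fails=3 passes=2;
        }
    }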

What we are seeing happening is that we are all moving from the monolithic approach, everything in a single package, to a microservices or distributed application approach.
