You can listen to the audio and read the show notes here
Michael Smith: Welcome back to the podcast. I'm here with Kevin Jones, from NGINX, and he's going to be talking about the past, present, and future of the modern web using NGINX, which, as I understand it, is kind of like a virtual middle man between your app and the users of your app, and it has lots of applications. So, we're going to be looking at what it is, when you should be using it, looking at the 10-year history of it, and the hockey stick growth in features that's happening right now with it, how it works with Docker containerization and microservices, application delivery controllers, virtualized load balancers, how it can implement smart layer seven security fast, and the value of having virtual proxy software with the power of NginScript to let you customize it.
And welcome, Kevin.
Kevin Jones: Thank you. Nice to talk to you.
Michael Smith: Yeah. So, what exactly is NGINX?
Kevin Jones: Yeah, so NGINX has been around for a while. It's been around since 2007. It was originally an open source project, and still is an open source project, and essentially it's a lot of things. It's a web server, it's a reverse proxy, it can do HTTP caching, it can do load balancing of traffic, and it can also proxy TCP and UDP traffic as well. So, it's commonly used to do security control of web applications, being able to control access to those web applications. It can also do live video streaming and media streaming, and then, as I mentioned before, it can be used to serve static files as well.
So, it's commonly used. Right now there's about 350 million known websites on the internet today that use NGINX, so it has a really, really large footprint. It's widely adopted in the community, and now NGINX is a company. So, we've built NGINX Plus, which is a commercial version on top of that open source version, so we kind of handle both sides of the matrix.
Michael Smith: So, it's a startup company. How many people work there?
Kevin Jones: Yeah, so we're rather small right now. A lot of people hear NGINX and they hear the 350 million websites and they think, “Oh, it must be this huge company.” But we actually only have about 150 employees right now, so we're just really hitting the ground running and growing. We've grown a lot. I think when I got hired I was number 50, so in the past two years we've hired about 100 employees. Yeah.
Michael Smith: Well, something must be going right.
Kevin Jones: Yeah, definitely, definitely.
Michael Smith: Yeah. So, if I understand it right, it's basically a middle man between your ColdFusion application, or whatever application software you're running, and your users. It's not like a hardware proxy or a hardware load balancer, it's all configurable in software. Is that a fair description?
Kevin Jones: Yeah, definitely, because NGINX is such a lightweight platform. The RPM, or the Debian install, or the package, the tarball, is only about three megabytes, so it's a very, very small footprint, and it runs on Linux. So, that being said, you can run NGINX or NGINX Plus inside of a Docker container, you can run it on a virtual machine or [inaudible 00:03:19], or we can run in the cloud on any kind of cloud instance, so on an Amazon EC2 instance, or an Azure instance, or a Google Compute Engine instance.
That makes it really versatile and kind of agnostic to where it's deployed. Because it's so lightweight and so easy to configure, it can easily be used as a proxy in front of all of your web applications, adding the security layer, adding the proxy functionality, being able to inspect requests and whatnot.
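As a rough sketch of the proxy-in-front pattern Kevin describes, a minimal NGINX reverse-proxy configuration might look like this (the server name, port, and backend address are placeholders, not details from the conversation):

```nginx
# Minimal reverse proxy: NGINX listens publicly and forwards to the app
server {
    listen 80;
    server_name example.com;                       # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:8080;          # hypothetical backend app
        proxy_set_header Host $host;               # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;   # pass the client IP along
    }
}
```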
Michael Smith: So, who should be using this?
Kevin Jones: So, I would say a lot of people use NGINX even just on their own personal websites, right. They might run NGINX in front as a web server serving dynamic content from their web application, but we're also used for enterprise level stuff as well. So, if you're running a large company, or you're running a small company, and you need to be able to do some scaling or load balancing, NGINX can be deployed and used, and that typically would be people in operations, so developer operations. You've heard the term DevOps, which is really about empowering developers to control their infrastructure a little bit. So, we're very popular in the operations space and the developer operations space.
Michael Smith: So, is this hard to install, or is there a Docker container that you can just download and it installs straight away?
Kevin Jones: Yeah, so it's actually very easy, because NGINX … Well, there are multiple ways you can install it. You can install it from source, so if you want to, for some reason, customize NGINX, you can actually compile it and create the binary yourself to run, but then we also distribute it as a Dockerfile. So, we have a Dockerfile that you can go on to NGINX's GitHub and get, and that will allow you to build NGINX. NGINX is the number one downloaded Docker image on Docker Hub, so we're very popular in the Docker space, because it's commonly used as a proxy into the container, so it's really easy to spin it up on Docker. All you've got to do is download the Dockerfile, two seconds and you're up and running. But if you want to control it and install it on a virtual machine or something like that, you can use a package manager, so it's very, very easy.
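A minimal setup along those lines might be a Dockerfile that layers your own configuration on top of the official image from Docker Hub (the config file name here is an assumption):

```dockerfile
# Start from the official NGINX image on Docker Hub
FROM nginx:stable

# Overlay your own configuration (file name is a placeholder)
COPY nginx.conf /etc/nginx/nginx.conf
```

From there, something like `docker build -t my-nginx .` followed by `docker run -d -p 8080:80 my-nginx` would get a container serving on port 8080.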
Michael Smith: And it's been around for 10 years, I think you said earlier?
Kevin Jones: Yeah, so 2007 was the first initial release, and when it first came out it was really just more of a proxy, so it was acting as a proxy handling a large amount of connections. The main reason NGINX was created is people were having issues with concurrency on their web applications, so they might be running Apache, and after about 10,000 users, web applications were falling over, Apache was falling over. So, NGINX was created as that first line of defense, and since then it's grown into more and more features. The company has been around since 2013, and we came out with the first version of NGINX Plus in 2013.
Michael Smith: And then the number of features has really been going through a hockey stick growth in the last year, or what? Tell me about that?
Kevin Jones: Yeah, so previous to the start of NGINX, the company, it was an open source project, committed to and managed by Igor Sysoev, who is the founder of NGINX, the company. At that point it was really just kind of in maintenance mode. We weren't really doing a lot of development. We'd add some small features here and there, but now that we're a company we have a larger team, so we've grown the engineering team. There are about 20 core developers now, I think, or somewhere in that range, and we've added … Just to give you some examples, we've added HTTP/2 support. We've added JSON logging, so you can write your log files in JSON format.
We've added TCP/UDP load balancing in the last couple of years, thread pools, all sorts of cool stuff. NginScript, which I think we can probably talk about a little later, and then we've also enhanced NGINX Plus, which is the commercial version. So, I like to think of them as two development pillars, and we're kind of raising both of those up and adding features, and, like I said, since the company launched we've really, really pushed a lot of features into both sides of the product.
Michael Smith: And Igor's still with the company?
Kevin Jones: Yeah, definitely. He is the co-founder of the company. He's still based in Russia, so we have offices in Ireland, Russia, the UK, and here in San Francisco, where our base is.
Michael Smith: Wow. Cool. Do you get to travel and meet these different people?
Kevin Jones: I mean, I've definitely met Igor, right. I pretty much know everyone in the company at this point, just because I've been here for about two years. But I don't get to travel too often. I would like to be able to travel overseas, but because we're a startup, we want to kind of work as much from here, from the San Francisco office as we can. But I do go to a lot of conferences, and I do a lot of talks, and kind of help evangelize NGINX. So, I do get to travel within the States, at least.
Michael Smith: Cool. So, tell us a bit more about using NGINX with something like a ColdFusion app we'd put in a Docker container, or when we're using microservices. How does NGINX help with that?
Kevin Jones: Yeah, so we play a really unique role in terms of microservices. Because of NGINX's design, it's lightweight, and as I mentioned before, it can be installed inside of a Docker container or inside of a virtual machine. A lot of people might install NGINX as a proxy in front of their application, even on localhost. So, you might be running an application either in a container or in a virtual machine, and using NGINX in front of that to do interesting things. Maybe to reroute requests, or block requests that might be malformed. Maybe the ability to inspect and do authentication, so being able to allow or disallow certain things to your API or your backend application.
Because it's lightweight, you can also do other things like SSL optimization. You can terminate SSL certificates in NGINX, and then proxy into the container either encrypted or unencrypted, and that will allow you to do SSL everywhere. So, from your load balancer into the container can be a secure layer, and we can do some things with keepalives as well, to kind of keep that connection alive, which makes for a more reliable application between your infrastructure and your actual application. Keeps things kind of connected.
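The SSL-offload-plus-keepalive pattern described above might be sketched like this, using open source directives; the certificate paths and backend address are placeholders:

```nginx
upstream app {
    server 127.0.0.1:8080;     # hypothetical backend container
    keepalive 16;              # keep idle connections open to the backend
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        proxy_pass http://app;
        proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1
        proxy_set_header Connection "";  # clear Connection so it stays open
    }
}
```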
Michael Smith: So, you mentioned you can create a virtualized load balancer, so you don't have to buy a load balancer as a piece of hardware, which costs several arms and legs, right. You can set up NGINX, load a script that configures it as a load balancer, and let it balance between servers.
Kevin Jones: Yeah, definitely. A lot of people don't know that we can be used as a load balancer, but we do load balancing really well. Yeah, because it's virtualized, NGINX will just listen on a certain port and IP, and then you can proxy all of your requests back to a different port and IP, or a range of those, right. What we call that is an upstream pool, so essentially a pool of servers that you have on the backend, and because, like you said, it's virtual, you don't have to buy a piece of hardware, you can run it in a Docker container, on a virtual machine, or in the cloud.
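An upstream pool like the one Kevin describes might be sketched like this (the addresses are placeholders):

```nginx
# A pool of backend servers; by default NGINX round-robins between them
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # load balance across the pool
    }
}
```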
Michael Smith: Does this take a lot of performance, or is this high performance?
Kevin Jones: So, NGINX is generally very efficient at handling CPU. We have an event-driven, asynchronous, non-blocking architecture, which I'm going to talk about at my presentation at Into the Box. But essentially, NGINX can be used very efficiently, because it can accept connections and each worker can bind to a specific CPU core inside the actual system, and that will allow you to kind of make the most of the resources that you have.
That's why if you have an 8-core system, NGINX will spin up workers and it will actually dedicate each worker to each CPU in the system, so that way you're not wasting any system resources from the NGINX perspective. And then we also have what's called shared memory zones. So, all of the workers are notified of state changes in those shared memory zones, so all the workers have access to that memory zone, and so you don't have this issue where one worker is particularly taking up a large amount of CPU, or a large amount of memory. They're all kind of evened out and equalized.
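That one-worker-per-core behavior is configurable; a common sketch, using directives from the open source documentation, is:

```nginx
# Main context of nginx.conf
worker_processes auto;        # spin up one worker per CPU core
worker_cpu_affinity auto;     # pin each worker to its own core (Linux)
```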
Michael Smith: Wow. Really efficient. So, one of the other things I know it can do is you can implement smart layer seven security very easily. Tell us a bit about that, and why you'd want to do it.
Kevin Jones: Yeah, so a lot of companies have now actually built API gateways or security platforms on NGINX, and NGINX is commonly deployed in a DMZ type environment, because it can do a couple of things. It can block access or allow access based on IPs or a CIDR range, so a range of IPs. Then it can also do things like offload authentication, so we support basic auth, and then we also support what's called the auth request module. So, we can link into an existing authentication tool and allow or disallow access based on the subrequest that takes place to your authentication service.
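Offloaded authentication with the auth request module might be sketched like this; the auth service URL and location names are hypothetical:

```nginx
location /api/ {
    auth_request /_auth;           # subrequest must return 2xx to allow access
    proxy_pass http://backend;     # hypothetical upstream
}

location = /_auth {
    internal;                                  # not reachable from outside
    proxy_pass http://auth-service/validate;   # hypothetical auth endpoint
    proxy_pass_request_body off;               # auth check needs headers only
    proxy_set_header Content-Length "";
}
```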
We also have some functionality in the product for ModSecurity, which is a web application firewall. It can be deployed on top of NGINX as well, and it can block malformed requests, so maybe cross-site scripting vulnerabilities, or some kind of a vulnerability in an application-specific URI or part of the application. NGINX can kind of read a heuristics file and then block access based on that heuristics file. So, it allows you to do more control of your web applications from a security perspective, and you don't have to worry about securing your web applications from a code perspective. NGINX will do that at the proxy layer by itself.
Michael Smith: And the difference from buying an off-the-shelf web application firewall is you control the code, you've got the scripts, and you can update it any time you want.
Kevin Jones: Yes. As I mentioned before, NGINX is an open source platform, and we have NGINX Plus. There is an open source version of the web application firewall from ModSecurity that you can download and use, and there's what's called a core rule set, which is basically a big list, a heuristics list of things that commonly should be blocked. So, you can load that file in and be off and running. But NGINX Plus has an official version of that web application firewall that we build in-house and distribute, and then same thing, we give you access to that core rule set to allow you to block. It can be configured both ways, but it is nicer to be able to build it on your own platform, because then you have the access control rules that you can actually manage.
Michael Smith: And then what about denial of service? Does it deal with that, or is that not its thing?
Kevin Jones: Yeah. That's a great question, actually. So, NGINX also has the functionality to do two things. It can do request limiting, so it can create a hash pool, so it can take a request in, hash a variable, and limit the amount of connections that that client can make, and it can also limit the amount of requests that that client can make. So, let's say that you had a way that you could identify a particular group of users, and you wanted to block access based on that particular thing, maybe the country they're coming from, or maybe … I don't know, a certain header, or user agent, you can block access for that actual client. Let's say you only wanted to allow 10 connections, or you only want to allow 100 requests per second. You can put those into kind of a special rule.
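A request-and-connection-limiting rule along those lines might be sketched like this, keyed on the client IP (the zone names and upstream are placeholders):

```nginx
# http context: shared memory zones keyed on the client IP
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=100r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    listen 80;
    location / {
        limit_req  zone=req_per_ip burst=20;   # 100 requests/second, small burst
        limit_conn conn_per_ip 10;             # 10 concurrent connections per IP
        proxy_pass http://backend;             # hypothetical upstream
    }
}
```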
Michael Smith: Wow. So, although NGINX is not a network firewall, it lets you build some really powerful firewall-like features?
Kevin Jones: Yeah, exactly. We do run at the application level, right, so it's software based. A certain level of stuff you would just want to do through your firewall, but NGINX can be that extra layer of security that goes past the firewall, so things as I mentioned before, specific IPs. We also can plug into a GeoIP database, so there's a module called the GeoIP module, so we can actually look up the user's IP address to find out where they're coming from. We can identify users from certain countries and block access based on that, yeah, and anything else. We can do all the authentication stuff, offloading the authentication, and limiting the requests and the connection counts for specific users as well. So, it is really … Yeah, it's more of an extra line of defense past that firewall that you can enable.
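Country blocking with the GeoIP module might be sketched like this, assuming the module is compiled in and a MaxMind database is installed (the paths and country code are placeholders):

```nginx
# http context
geoip_country /usr/share/GeoIP/GeoIP.dat;   # placeholder database path

map $geoip_country_code $blocked_country {
    default 0;
    XX      1;       # placeholder code for a country to block
}

server {
    listen 80;
    if ($blocked_country) {
        return 403;                  # refuse clients from blocked countries
    }
    location / {
        proxy_pass http://backend;   # hypothetical upstream
    }
}
```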
Michael Smith: So, tell us a bit about virtual proxy software, because it lets you create your own virtual proxy.
Kevin Jones: Yeah. So, the way NGINX works is you essentially have what's called a virtual server, or a server block. NGINX can bind on any IP and port very easily, and you can proxy back one-to-one, so if you have a web application and you just want to do a direct proxy, you can do that. But then it can also do load balancing, so if you want to have more than one backend server, you can proxy and load balance between the backends. The open source version does what's called round-robin load balancing, so it will just round-robin through the backends.
But NGINX Plus does an additional load balancing algorithm called least time, so it can actually route to the server with the fastest response time. It's constantly checking each backend and validating the response time, and it allows you to kind of evenly balance the load based on that response time. NGINX can also do least-conn load balancing as well, so if you want to load balance to the server with the least amount of concurrent connections, you have that ability as well.
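Switching the balancing algorithm is a one-line change in the upstream block; per the discussion above, least time is an NGINX Plus feature (addresses are placeholders):

```nginx
upstream backend {
    least_conn;               # open source: fewest active connections wins
    # least_time header;      # NGINX Plus only: fastest response time wins
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```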
Michael Smith: Cool, so that's really flexible if you've got applications where some pages run quick and some run slow. If they're big reports or something, you don't want to just round-robin people, you want to pick the server that has the least current load, and you're measuring it in different ways, by response time or connections.
Kevin Jones: Yep, and you can also do something interesting where, if for some reason that upstream didn't respond in a certain time, you can tell NGINX to go ahead and try the next one that's available. So, that way if, for some reason, one server is a little more overloaded and the request still gets load balanced over there, NGINX will requeue and resubmit that request to another backend if it didn't succeed, like if it timed out. So, you can do some cool stuff like that.
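That retry behavior might be sketched with the proxy_next_upstream directives (the timeout values are illustrative, and the upstream is hypothetical):

```nginx
location / {
    proxy_pass http://backend;        # hypothetical upstream pool
    proxy_connect_timeout 2s;
    proxy_read_timeout    5s;

    # On error or timeout, resubmit the request to the next server in the pool
    proxy_next_upstream error timeout;
    proxy_next_upstream_tries 2;      # give up after two attempts
}
```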
Kevin Jones: Yeah, yeah. If any of the users out there are familiar with NGINX, they'd know that there's a module you can use … Well, there are a couple of modules you can use to extend the functionality of NGINX. So, NGINX does a lot out of the box, but occasionally there might be something a little more advanced you want to do. An example would be, let's say you want to manipulate an HTTP header, or let's say you want to, I don't know, maybe manipulate the body of the request that's coming through before it gets sent back to the application.
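A header-manipulation sketch using NginScript (njs), mentioned earlier in the conversation, might look like this, assuming the njs module is loaded; the file, handler, and header names are placeholders:

```nginx
# nginx.conf (http context), with the njs module loaded
js_import headers.js;

server {
    listen 80;
    location / {
        js_content headers.hello;   # hand the request to the JS handler
    }
}
```

```javascript
// headers.js
function hello(r) {
    r.headersOut['X-Custom'] = 'added-by-njs';   // add a response header
    r.return(200, "handled by NginScript\n");    // short canned response
}

export default { hello };
```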
Michael Smith: Well, that sounds powerful. Anything else about NGINX you want to share with us?
Kevin Jones: Yeah, no, I mean, I would hope that anybody listening out there that's going to be coming to Into the Box, definitely come by and see my presentation. I'm going to go into some of the core features and functionality of the products so that you can kind of get an idea of what it does. NGINX is a really big product, so there are so many different things that it does. I would just hope that you'd come by and check out my talk, and feel free to ask me any questions that you guys have.
Michael Smith: So, what are you looking forward to at Into the Box, Kevin?
Kevin Jones: Basically just that. I'm really looking forward to talking to people. I always like to talk to people that, A, love NGINX and have been using it for a while and might have questions. I also love talking to people that don't even know what it is. As we move into this new era of IT, we see a lot of developers that are becoming DevOps, and with that comes an added need for learning new things, and one of the things I'm going to talk about in my talk is where the industry is going, and how NGINX is playing a part in that, because we are a huge part of that right now. My biggest thing is I'm really excited to tell everybody what they can do with NGINX, and kind of help them with any kind of projects that they might have in mind. So, I'm really eager to learn and help people. Definitely stop by and talk to me.
Michael Smith: Fabulous. So, how would people find out more if they wanted to learn more about you and NGINX?
Kevin Jones: Yeah, so definitely nginx.com is the number one source for anything related to NGINX and NGINX Plus from the company perspective. There's a blog link on there that you can click on, and we have a lot of really good blogs. I'm one of the writers on there. If you go up to the search bar when you go to nginx.com and just search Kevin Jones, you'll get a list of all my blogs. You can also just click blogs and you can get a list of all the blogs that we write. We do a lot of blogs, so I would say every other day we probably post something. So, it's a really good blog just to subscribe to and kind of to read. I would definitely recommend that. If you guys have any questions, you can definitely tweet me, so my Twitter is webopsx. So, it's W-E-B-O-P-S-X, and I'd be glad to answer any questions you guys have.
Michael Smith: Great. Well, thanks for being on the show, Kevin. I'm looking forward to seeing you in Houston at Into the Box.
Kevin Jones: Yeah, definitely. Thank you for your time. I appreciate it.