Interested in More About Nginx and WebSockets?

This blog doesn’t get a ton of traffic. At least not currently. But it’s pretty clear from the analytics that my post on proxying WebSockets with Nginx is by far the most popular thing I’ve written about to date. That article was fairly bare-bones, but people seem to find it useful, so I thought I’d just put the question out there: is there anything else related to the WebSockets functionality in Nginx you’d like to get some clarity on? Any examples you might find useful?
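For anyone landing here without having read that earlier post, the heart of the setup is small enough to restate; a minimal sketch, with the upstream name and port as placeholders for your own backend:

```nginx
# Minimal WebSocket reverse-proxy sketch; "ws_backend" and the
# addresses are placeholders -- adjust to your own setup.
upstream ws_backend {
    server 127.0.0.1:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://ws_backend;

        # WebSockets require HTTP/1.1 plus explicit pass-through of
        # the Upgrade/Connection headers to complete the handshake.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```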

If so, please just let me know in the comments, and I’ll do my best to make some new posts covering the areas of interest.

5 Comments

  1. Hi Chris,

    The blog post was very useful. We’re using Socket.io for our websockets (as I imagine a lot of people are). But we can’t seem to get past 1500 concurrent connections on an AWS m1.small instance.

    It would be great to know how we could debug this, and optimise nginx and node to allow more concurrent connections. We have a lot of CPU spare, as well as memory, and I’ve followed a lot of tutorials involving ulimits, sysctl and lots of other linux stuff.
    Thanks,

    Steve

    • Yes, that seems like an important issue. I don’t know if that’s the case, but if it is, I’d much appreciate a reply.

    • So in general, “how do I debug issue X?” isn’t going to get a response with much useful information unless the person you’re asking has very specific knowledge of your setup, which in this case I don’t. So this may not be all that helpful. In this case, I don’t know exactly what you mean when you say you “can’t get past 1500 connections”. Do additional connections at that point simply fail to make the TCP handshake, or does some other behavior occur? Do the existing connections start to get flaky, or are the established connections fine but you can’t make new ones?

        Regardless, if I were in your shoes I would likely try the following things.

        1. Try using the exact same setup with a large instance on EC2, just for testing. It won’t cost more than a dollar or two to try this out. This should tell you if the problem is just “some resource on the small instance simply isn’t big enough”.
        2. Try serving something trivial like a static text file with Nginx, and see if you can make more than 1500 concurrent connections.
        3. Run up to what seems to be the maximum number of connections from one IP address. Then try to make more from a second IP address, running tcpdump to sniff what’s happening with the traffic from that second address and see if it tells you anything useful.
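        On the Nginx side specifically, two settings commonly cap concurrency well below what the hardware could sustain; a sketch with illustrative numbers only, not tuned recommendations:

        ```nginx
        # nginx.conf -- illustrative values only, not tuned recommendations.
        worker_processes  auto;

        # Per-worker file descriptor limit; note that each proxied
        # WebSocket consumes two descriptors (client side + upstream side).
        worker_rlimit_nofile  65536;

        events {
            # The default is often 512 or 1024, which can be the
            # effective ceiling on concurrent connections.
            worker_connections  16384;
        }
        ```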

        Hope this helps at least somewhat.

  2. Hey Chris,

    Very kind of you to ask whether I have a question! 🙂
    As it turns out, I have one that’s about SPDY rather than WebSockets.

    With `npm install spdy` we get a solid SPDY server for Node.js with full support for server push. The problem is that node-spdy wants to do the TLS/SSL encryption itself. To have two different SPDY apps running on the same host system, I’d need nginx to proxy HTTPS/SPDY requests without actually decrypting the connection. In theory nginx should be capable of routing the requests based on their (cleartext) Server Name Indication fields.
    Do you possibly know more about that?

    I’d love to read your reply. Thank you very much. 🙂

    • The thing to remember is that Nginx is, fundamentally, a web server. So it speaks HTTP, and when it load balances, it also speaks HTTP. What you’re looking for is a generic TCP load balancer that would just forward the data on without any regard to what it says. For this purpose, what you want is something that is simply a load balancer instead of a web server. I’d recommend looking into HAProxy, which should be able to do what you want.
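      To make that concrete, HAProxy (1.5+) can route raw TLS connections by inspecting the SNI field of the ClientHello without terminating TLS; a hedged sketch, with hostnames and ports as placeholders:

      ```haproxy
      # Sketch of SNI-based TCP passthrough; hostnames/ports are placeholders.
      frontend tls_in
          bind *:443
          mode tcp
          # Wait for the ClientHello so the SNI field can be inspected.
          tcp-request inspect-delay 5s
          tcp-request content accept if { req_ssl_hello_type 1 }

          use_backend spdy_app_one if { req_ssl_sni -i one.example.com }
          use_backend spdy_app_two if { req_ssl_sni -i two.example.com }

      backend spdy_app_one
          mode tcp
          server app1 127.0.0.1:8443

      backend spdy_app_two
          mode tcp
          server app2 127.0.0.1:8444
      ```

      Because `mode tcp` forwards the bytes untouched, each node-spdy app still does its own TLS, which matches the constraint in the question.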
