Microservice Gateway Comparison

A new project recently led our back-end team to adopt a Spring Cloud microservice stack, which meant choosing a gateway. This post presents a performance comparison of several gateway options.

Testing Machine


The tests ran on a virtual machine with 4 CPU cores and 4 GB of memory, running Ubuntu 18.

Wrk test

The following are the results for each gateway with 10 threads and 200 concurrent connections over 30 seconds. (wrk uses HTTP/1.1.)
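For reference, a wrk invocation matching these parameters would look roughly like this (the target URL is a placeholder for the service or gateway under test):

```shell
# 10 threads, 200 concurrent connections, 30-second duration;
# --latency prints the latency distribution.
# http://10.0.0.1:8080/hello is a placeholder target.
wrk -t10 -c200 -d30s --latency http://10.0.0.1:8080/hello
```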

Direct connection

Nginx

zuul (after warming up; occasional timeouts)

gateway (after warming up)

linkerd (after warming up)
Test result analysis

  1. Without a gateway, connecting directly to the service handles 22,433 requests per second, with an average latency of 13.25 ms per thread.
  2. With the Nginx proxy, it handles 12,507 requests per second, about 56% of the direct connection, with an average latency of 75.43 ms.
  3. With zuul after warm-up, it handles 6,517 requests per second, about 29% of the direct connection, with an average latency of 105.06 ms; across multiple runs there were occasionally 3 or 4 timed-out requests.
  4. With gateway after warm-up, it handles 9,228 requests per second, about 41% of the direct connection, with an average latency of 21.83 ms.
  5. With linkerd after warm-up, it handles 9,999 requests per second, about 44% of the direct connection, with an average latency of 20.35 ms.
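The percentages above can be checked with a quick script (throughput numbers copied from the wrk runs above):

```python
# Requests/sec measured by wrk for each setup (from the runs above).
results = {
    "direct": 22433,
    "nginx": 12507,
    "zuul": 6517,
    "gateway": 9228,
    "linkerd": 9999,
}

baseline = results["direct"]
for name, rps in results.items():
    pct = rps / baseline * 100  # throughput relative to direct connection
    print(f"{name:8s} {rps:6d} req/s  {pct:5.1f}% of direct")
```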

ab test (HTTP/1.0)

The following uses ab with 200 concurrent connections and 50,000 requests:
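An ab invocation matching these parameters would look roughly like this (the target URL is a placeholder):

```shell
# 50,000 requests total, 200 concurrent connections.
# ab issues HTTP/1.0 requests by default, which is relevant to
# the reactor-netty issue discussed below.
ab -n 50000 -c 200 http://10.0.0.1:8080/hello
```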

Direct connection

Nginx

zuul (after warming up)

gateway (after warming up)

linkerd (after warming up)

Test result analysis

There is a post online reporting very poor gateway performance under ab; the maintainers investigated and found that ab was using HTTP/1.0, which the underlying reactor-netty did not support at the time. That issue has since been fixed.



  1. Direct connection: 50,000 requests in 2.748 seconds total, 18,195 requests per second, an average of 11 ms per request.
  2. Nginx: 5.021 seconds total, about 9,958 requests per second, roughly 55% of the direct connection, an average of 20 ms per request.
  3. zuul after warm-up: 7.862 seconds total, 6,360 requests per second, about 35% of the direct connection, an average of 31 ms per request.
  4. gateway after warm-up: 6.762 seconds total, 7,394 requests per second, about 41% of the direct connection, an average of 27 ms per request.
  5. linkerd after warm-up: 6.972 seconds total, 7,171 requests per second, about 39% of the direct connection, an average of 28 ms per request.
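The per-second figures follow directly from the request count divided by the total time; a quick check (elapsed times copied from the ab runs above):

```python
TOTAL_REQUESTS = 50_000

# Total elapsed time in seconds reported by ab for each setup.
elapsed = {
    "direct": 2.748,
    "nginx": 5.021,
    "zuul": 7.862,
    "gateway": 6.762,
    "linkerd": 6.972,
}

# Throughput is simply total requests / total time.
rps = {name: TOTAL_REQUESTS / secs for name, secs in elapsed.items()}
for name, value in rps.items():
    print(f"{name:8s} {value:8.0f} req/s ({elapsed[name]:.3f}s total)")
```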

Simple summary

  1. Based on these tests, Nginx has the best performance of the gateways by a clear margin; both Taobao's Tengine (feature enhancements) and OpenResty (module enhancements) build on and optimize Nginx. From a positioning standpoint, though, I see it as a traditional tool that only handles request forwarding, which is not especially friendly to our development stack. OpenResty also has a lua-resty-mysql module that could serve as a gateway to MySQL, but we have never used it and cannot speak to its performance.
  2. Both zuul and gateway integrate seamlessly with Spring Cloud. Combined with service discovery, they shield the service layer from machine resources (IPs, internal and external domain names, and so on), so services have no dependency on specific machines. They are positioned as gateways embedded in the microservice stack, carrying microservice-enhancing functionality. For secondary development, these two are the easiest for us to build on.
  3. Comparing zuul and gateway head to head, the tests show gateway's performance is considerably better than zuul's. Gateway also has many features zuul lacks, such as HTTP/2 support, WebSocket support, and forwarding by domain name. For plain service forwarding with high performance requirements (as opposed to database forwarding), its performance is not far from Nginx's.
  4. Linkerd belongs to the service-mesh category. It works as a system-level request proxy, so it is non-intrusive to services, and it comes with a mature monitoring and management interface. The gateway team's official tests show gateway performing much better than linkerd, but on my virtual machine the two were similar. Linkerd pairs with Docker and Kubernetes, taking the abstraction of machine resources a step further; it acts like a TCP/IP layer between applications or microservices, making network monitoring, rate limiting, and circuit breaking transparent to the service layer.
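As an illustration of the domain-based forwarding mentioned in point 3, a Spring Cloud Gateway route can match on the Host header via application.yml; a minimal sketch, where the route id, host pattern, and backend URI are placeholders:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: service-by-host          # placeholder route id
          uri: http://10.0.0.2:8080    # placeholder backend service
          predicates:
            - Host=**.api.example.com  # forward by domain name
```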

In short, these gateways do not all sit in the same dimension; each team can choose the one that fits its own needs and technology stack.
