Prerequisites:
- Kubuntu 12.10
- Python 2.7.3 (default, Sep 26 2012, 21:51:14) [GCC 4.7.2] on linux2
- CPU Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz
- RAM 8GiB, DIMM DDR3 Synchronous 1067 MHz (0.9 ns), Dual channel
- Apache HTTP server benchmarking tool
- Node.js
- Gevent
Everything can be installed with:

$ sudo apt-get install apache2-utils nodejs python-gevent
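A quick way to confirm what got installed (version strings will differ between systems; on Ubuntu the Node.js binary may be called nodejs rather than node):

$ nodejs --version
$ python -c "import gevent; print(gevent.__version__)"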
Hello world for Node.js
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8124, "127.0.0.1");
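Assuming the snippet above is saved as hello.js (the file name is arbitrary), the server can be started and sanity-checked with curl before benchmarking:

$ nodejs hello.js &          # or: node hello.js, depending on the package
$ curl http://127.0.0.1:8124/
Hello World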
Using the Apache HTTP server benchmarking tool (ab) to test it:
$ ab -n 100000 -c 5 http://localhost:8124/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:
Server Hostname:        localhost
Server Port:            8124

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      5
Time taken for tests:   9.374 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      7600000 bytes
HTML transferred:       1200000 bytes
Requests per second:    10668.24 [#/sec] (mean)
Time per request:       0.469 [ms] (mean)
Time per request:       0.094 [ms] (mean, across all concurrent requests)
Transfer rate:          791.78 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.4      0       8
Waiting:        0    0   0.4      0       8
Total:          0    0   0.4      0       8

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      1
  99%      1
 100%      8 (longest request)
Hello world with gevent
from gevent import wsgi

class WebServer(object):
    def application(self, environ, start_response):
        start_response("200 OK", [])
        return ["Hello world!"]

if __name__ == "__main__":
    app = WebServer()
    wsgi.WSGIServer(('', 8888), app.application, backlog=1024).serve_forever()
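As a side note, in later gevent releases the gevent.wsgi module was merged into gevent.pywsgi; a roughly equivalent hello world for those versions (a sketch for gevent 1.x+, not the code that was benchmarked here) looks like this:

from gevent import pywsgi

def application(environ, start_response):
    # pywsgi expects byte strings in the response body
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello world!"]

if __name__ == "__main__":
    pywsgi.WSGIServer(('', 8888), application, backlog=1024).serve_forever()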
Benchmarking it:
$ ab -n 100000 -c 5 http://localhost:8888/
...
Server Software:        gevent/0.13
Server Hostname:        localhost
Server Port:            8888

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      5
Time taken for tests:   11.659 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      14700000 bytes
HTML transferred:       1200000 bytes
Requests per second:    8577.05 [#/sec] (mean)
Time per request:       0.583 [ms] (mean)
Time per request:       0.117 [ms] (mean, across all concurrent requests)
Transfer rate:          1231.28 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    1   0.1      0       6
Waiting:        0    1   0.1      0       6
Total:          0    1   0.1      1       6
ERROR: The median and mean for the processing time are more than twice the standard deviation apart. These results are NOT reliable.
ERROR: The median and mean for the waiting time are more than twice the standard deviation apart. These results are NOT reliable.

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      1
  99%      1
 100%      6 (longest request)
Hello world with gevent, but log turned off
from gevent import wsgi

class WebServer(object):
    def application(self, environ, start_response):
        start_response("200 OK", [])
        return ["Hello world!"]

if __name__ == "__main__":
    app = WebServer()
    wsgi.WSGIServer(('', 8888), app.application, log=None).serve_forever()
Benchmarking:
$ ab -n 100000 -c 5 http://localhost:8888/
...
Server Software:        gevent/0.13
Server Hostname:        localhost
Server Port:            8888

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      5
Time taken for tests:   8.125 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      14700000 bytes
HTML transferred:       1200000 bytes
Requests per second:    12308.24 [#/sec] (mean)
Time per request:       0.406 [ms] (mean)
Time per request:       0.081 [ms] (mean, across all concurrent requests)
Transfer rate:          1766.91 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.1      0       2
Waiting:        0    0   0.1      0       1
Total:          0    0   0.1      0       2

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      1
  98%      1
  99%      1
 100%      2 (longest request)
Hello world with gevent, log turned off, with monkey patching of standard libs
from gevent import wsgi, monkey

class WebServer(object):
    def application(self, environ, start_response):
        start_response("200 OK", [])
        return ["Hello world!"]

if __name__ == "__main__":
    monkey.patch_all()
    app = WebServer()
    wsgi.WSGIServer(('', 8888), app.application, log=None).serve_forever()
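Monkey patching changes nothing measurable here, presumably because the handler never calls into blocking standard-library code. To see what patch_all() actually does, a standalone check (not part of the benchmark) can compare a patched module with gevent's own:

from gevent import monkey
monkey.patch_all()

import socket
import gevent.socket

# After patch_all(), the standard socket class is replaced by gevent's
# cooperative implementation, so the two names point at the same object.
print(socket.socket is gevent.socket.socket)  # True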
Benchmarking:
$ ab -n 100000 -c 5 http://localhost:8888/
...
Server Software:        gevent/0.13
Server Hostname:        localhost
Server Port:            8888

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      5
Time taken for tests:   8.154 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      14700000 bytes
HTML transferred:       1200000 bytes
Requests per second:    12264.46 [#/sec] (mean)
Time per request:       0.408 [ms] (mean)
Time per request:       0.082 [ms] (mean, across all concurrent requests)
Transfer rate:          1760.62 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.1      0       5
Waiting:        0    0   0.1      0       4
Total:          0    0   0.1      0       5

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      1
  95%      1
  98%      1
  99%      1
 100%      5 (longest request)
Aggregated results
| Server | Time taken for tests, seconds | Total transferred, bytes | Requests per second, mean | Time per request, ms, mean | Time per request, ms, mean across all concurrent requests | Transfer rate, Kbytes/sec |
| Node.js | 9.374 | 7,600,000 | 10,668.24 | 0.469 | 0.094 | 791.78 |
| Gevent | 11.659 | 14,700,000 | 8,577.05 | 0.583 | 0.117 | 1,231.28 |
| Gevent, log turned off | 8.125 | 14,700,000 | 12,308.24 | 0.406 | 0.081 | 1,766.91 |
| Gevent, log turned off, with standard libs monkey patched | 8.154 | 14,700,000 | 12,264.46 | 0.408 | 0.082 | 1,760.62 |
These indicators were the same in all four runs:
- Concurrency Level: 5
- Complete requests: 100000
- Failed requests: 0
- Write errors: 0
Conclusions
Gevent, which uses green threads (greenlets), is a good alternative to Node.js: in this hello-world benchmark it falls behind only when its request logging is left on, and outperforms Node.js once logging is turned off.
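For readers unfamiliar with the term, "green threads" here means greenlets: code that looks synchronous but yields to other greenlets whenever it would block on I/O or sleep. A minimal sketch (independent of the benchmark above, assuming only that gevent is installed):

import gevent
from gevent import monkey
monkey.patch_all()  # make blocking stdlib calls (sockets, time.sleep, ...) cooperative

import time

def worker(n):
    # time.sleep() is patched, so this yields to other greenlets instead of blocking
    time.sleep(1)
    return n * n

# Ten green threads "sleep" concurrently, so the whole batch
# takes about one second instead of ten.
jobs = [gevent.spawn(worker, i) for i in range(10)]
gevent.joinall(jobs)
print([job.value for job in jobs])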
Comments:

Great! Testing bottle and flask over gevent too would be more than appreciated.
The previous comment didn't have the source, so here you go:
https://gist.github.com/heath/6086184
Heath, it looks like your Node.js-based version runs as many processes as you have CPUs. So it's using multiple cores, while the gevent version is using one? Am I correct?