Siege, a Load Testing Tool: Installation and Usage
Siege is a load testing tool for web systems on Linux/Unix.
This article records how to install Siege and introduces its basic usage.
1. Download and Installation
Download address: http://download.joedog.org/siege/
$ wget http://download.joedog.org/siege/siege-latest.tar.gz
$ tar zxf siege-latest.tar.gz
$ cd siege-3.1.4/
$ ./configure
$ make
$ sudo make install
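If building from source is not required, Siege is also available from common package managers; the package name and version can differ per system, so treat the following as the usual commands rather than ones taken from this article:
$ sudo apt-get install siege   # Debian/Ubuntu
$ brew install siege           # macOS with Homebrew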
Verify that the installation succeeded.
Check where siege was installed:
$ which siege
/usr/local/bin/siege
Check the siege version:
$ siege -V
SIEGE 3.1.4
1.1 Option Reference
You can run "siege -h" to view the help information:
SIEGE 3.1.4
Usage: siege [options]
siege [options] URL
siege -g URL
Options:
-V, --version VERSION, prints the version number.
-h, --help HELP, prints this section.
-C, --config CONFIGURATION, show the current config.
-v, --verbose VERBOSE, prints notification to screen.
-q, --quiet QUIET turns verbose off and suppresses output.
-g, --get GET, pull down HTTP headers and display the
transaction. Great for application debugging.
-c, --concurrent=NUM CONCURRENT users, default is 10
-i, --internet INTERNET user simulation, hits URLs randomly.
-b, --benchmark BENCHMARK: no delays between requests.
-t, --time=NUMm TIMED testing where "m" is modifier S, M, or H
ex: --time=1H, one hour test.
-r, --reps=NUM REPS, number of times to run the test.
-f, --file=FILE FILE, select a specific URLS FILE.
-R, --rc=FILE RC, specify an siegerc file
-l, --log[=FILE] LOG to FILE. If FILE is not specified, the
default is used: PREFIX/var/siege.log
-m, --mark="text" MARK, mark the log file with a string.
-d, --delay=NUM Time DELAY, random delay before each requst
between .001 and NUM. (NOT COUNTED IN STATS)
-H, --header="text" Add a header to request (can be many)
-A, --user-agent="text" Sets User-Agent in request
-T, --content-type="text" Sets Content-Type in request
Copyright (C) 2015 by Jeffrey Fulmer, et al.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.
View the current configuration:
$ siege -C
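The values printed by siege -C come from a resource file. If one does not exist yet, Siege ships a helper script that generates a default copy; the file location varies by version (commonly ~/.siege/siege.conf in newer releases, ~/.siegerc in older ones), so the sketch below may need adjusting:
$ siege.config   # generate a default resource file for the current user
$ siege -C       # confirm which settings are now in effect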
1.2 Usage
1.2.1 Requesting a URL directly
$ siege -c 20 -r 10 http://www.cnwytnet.com
Option notes: -c sets the number of concurrent users (20 here); -r sets the number of repetitions, so each simulated user repeats the request 10 times.
1.2.2 Requesting the URLs listed in a file
Create a file named "urls-demo.txt" in the current directory and list the URL addresses in it, one per line; there can be as many entries as you like, for example:
# URLs:
http://www.sogou.com/web?query=php&from=wang_yong_tao
https://www.baidu.com/
Run the request command:
$ siege -c 5 -r 10 -f urls-demo.txt
$ siege -c 5 -r 10 -f /Users/WangYoungTom/temp/urls-demo.txt
Option notes: -c sets the number of concurrent users (5 here); -r sets the number of repetitions (10 here); -f selects a URLs file, here urls-demo.txt, a plain-text file with one URL per line from which Siege picks the addresses to hit.
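Note that the help text above ties random URL selection to the -i/--internet flag ("hits URLs randomly"), so when random ordering matters it may be safer to request it explicitly; a sketch reusing the same urls-demo.txt with a 30-second timed run:
$ siege -c 5 -t 30S -i -f urls-demo.txt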
Siege has supported both POST and GET requests since version 2.06. To simulate POST requests, list the URLs in urls-demo.txt in the following format:
# URL (POST):
http://wangtest.com/index.php POST UserId=XXX&StartIndex=0&OS=Android&Sign=cff6wyt505wyt4c
http://wangtest.com/articles.php POST UserId=XXX&StartIndex=0&OS=iOS&Sign=cff63w5905wyt4c
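These POST entries are run with the same -f invocation as above; if the server is strict about the body encoding, the -T/--content-type option from the help can declare it explicitly (the content type shown is an assumption about what this particular endpoint expects):
$ siege -c 5 -r 10 -T "application/x-www-form-urlencoded" -f urls-demo.txt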
Usage examples:
# Request http://www.cnwytnet.com with 10 concurrent users, 5 repetitions each, and a random delay of up to 3 seconds before each request
$ siege --concurrent=10 --reps=5 --delay=3 http://www.cnwytnet.com
$ siege -c 10 -r 5 -d 3 http://www.cnwytnet.com
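For a pure throughput measurement with no client think time, the -b/--benchmark flag from the help above can be combined with a timed run instead of a fixed repetition count (a sketch using the same placeholder URL):
$ siege -b -c 10 -t 1M http://www.cnwytnet.com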
1.2.3 POST requests with a JSON body
Create a file named "postfile.json" in the current directory and put the JSON-formatted data in it, for example:
{"username":"admin","password":"123456"}
Usage example:
$ siege -c 20 -r 5 '10.1.1.1:80/login/ POST <./postfile.json'
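Depending on the server, a JSON body may only be accepted when the Content-Type is declared, so it can help to add the -T (or -H) option from the help above; the address 10.1.1.1:80/login/ is just the placeholder from the example:
$ siege -c 20 -r 5 -T "application/json" '10.1.1.1:80/login/ POST <./postfile.json'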
2. Interpreting the Results
Transactions: 153 hits (number of transactions handled; 153 requests were processed in this run)
Availability: 100.00 % (availability, the percentage of requests that succeeded; 100% here)
Elapsed time: 17.22 secs (total running time of the test; 17.22 seconds here)
Data transferred: 7.70 MB (total amount of data transferred)
Response time: 0.17 secs (average response time)
Transaction rate: 8.89 trans/sec (transactions handled per second)
Throughput: 0.45 MB/sec (throughput, the data transfer rate)
Concurrency: 1.54 (average number of simultaneous connections actually open during the run)
Successful transactions: 153 (number of successful transactions)
Failed transactions: 0 (number of failed transactions)
Longest transaction: 0.70 (longest time taken by a single transaction, in seconds)
Shortest transaction: 0.02 (shortest time taken by a single transaction, in seconds)
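As a sanity check, the derived figures follow from the raw counts above; the concurrency line uses the usual rate-times-latency approximation, which is an interpretation rather than something the output states explicitly:
Transaction rate ≈ 153 hits / 17.22 secs ≈ 8.89 trans/sec
Throughput ≈ 7.70 MB / 17.22 secs ≈ 0.45 MB/sec
Concurrency ≈ 8.89 trans/sec × 0.17 secs ≈ 1.5 (reported as 1.54; the gap comes from rounding the average response time)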
3. Worked Example
$ siege -c 5 -r 10 http://www.baidu.com
Transactions: 386 hits
Availability: 100.00 %
Elapsed time: 37.40 secs
Data transferred: 19.47 MB
Response time: 0.43 secs
Transaction rate: 10.32 trans/sec
Throughput: 0.52 MB/sec
Concurrency: 4.45
Successful transactions: 386
Failed transactions: 0
Longest transaction: 2.38
Shortest transaction: 0.02
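When comparing runs while tuning, the -l/--log and -m/--mark options from the help above can append each summary to the log file; the mark text below is only an illustrative label:
$ siege -c 5 -r 10 -l -m "baseline before tuning" http://www.baidu.com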
3.1 More on the Concurrency metric
The following is excerpted from "Concurrency and the Single Siege" on the official Siege website.
We’re frequently asked about concurrency. When a siege is finished, one of its characteristics is “Concurrency” which is described with a decimal number. This stat is known to make eyebrows furl. People want to know, “What the hell does that mean?”
In computer science, concurrency is a trait of systems that handle two or more simultaneous processes. Those processes may be executed by multiple cores, processors or threads. From siege’s perspective, they may even be handled by separate nodes in a server cluster.
When the run is over, we try to infer how many processes, on average, were executed simultaneously by the web server. The calculation is simple: total transactions divided by elapsed time. If we did 100 transactions in 10 seconds, then our concurrency was 10.00.
Bigger is not always better
Generally, web servers are prized for their ability to handle simultaneous connections. Maybe your benchmark run was 100 transactions in 10 seconds. Then you tuned your server and your final run was 100 transactions in five seconds. That is good. Concurrency rose as the elapsed time fell.
But sometimes high concurrency is a trait of a poorly functioning website. The longer it takes to process a transaction, the more likely they are to queue. When the queue swells, concurrency rises. The reasons for this rise can vary. An obvious cause is load. If a server has more connections than thread handlers, requests are going to queue. Another is competence – poorly written apps can take longer to complete than well-written ones.
We can illustrate this point with an obvious example. I ran siege against a two-node clustered website. My concurrency was 6.97. Then I took a node away and ran the same run against the same page. My concurrency rose to 18.33. At the same time, my elapsed time was extended 65%.
Sweeping conclusions
Concurrency must be evaluated in context. If it rises while the elapsed time falls, then that’s a Good Thing™. But if rises while the elapsed time increases, then Not So Much™. When you reach the point where concurrency rises and elapsed time is extended, then it might be time to consider more capacity.
4. Documentation and Official Support
Official documentation and support information are available on the project site, https://www.joedog.org/.
Author: honour
Last updated: 2018-06-30