Channel: Loadbalancer.org Blog » haproxy

Load balancing Windows Terminal Server – HAProxy and RDP Cookies or Microsoft Connection Broker


When you have users depending on Windows Terminal Services for their main desktop, it’s a good idea to have more than one Terminal Server. RDP, however, is not an easy protocol to load balance; sessions are long-lived and need to be persistent to a particular server, and users may connect from different source addresses during one session.

The current development version of HAProxy has made an important step forward in making this possible. Thanks to work by Exceliance, it now supports RDP Cookies, offering a solution to the persistence problem.

We have been testing the latest development release of HAProxy, 1.4-dev4, on a loadbalancer.org Enterprise R16 device. The real servers were two Windows Server 2008 machines, with identical test users set up on both.

We settled upon the following HAProxy configuration (RDP Cookies):

   defaults
        clitimeout 1h
        srvtimeout 1h
   listen VIP1 192.168.0.10:3389
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if RDP_COOKIE
        persist rdp-cookie
        balance rdp-cookie
        option tcpka
        option tcplog
        server Win2k8-1 192.168.0.11:3389 weight 1 check   inter 2000 rise 2 fall 3
        server Win2k8-2 192.168.0.12:3389 weight 1 check   inter 2000 rise 2 fall 3
        option redispatch

Note that this is only a fragment of the haproxy.cfg file, showing the relevant options.

The load balancer’s Virtual IP is set to 192.168.0.10, listening on port 3389 for RDP. The two real servers are on 192.168.0.11 and 192.168.0.12, in the same subnet as the Virtual IP.

The two new configuration directives are persist rdp-cookie and balance rdp-cookie. These instruct HAProxy to inspect the incoming RDP connection for a cookie; if one is found, it is used to persistently direct the connection to the correct real server. The two tcp-request lines help to ensure that HAProxy sees the cookie on the initial request.

The only other tweak needed is to increase the clitimeout and srvtimeout values to one hour. In testing, this was found to be necessary to keep idle RDP sessions established.

Testing involved making multiple connections with different usernames, from varying IP addresses, using both Windows XP Professional and Linux clients. Sessions were disconnected and reconnected, and real servers removed from the cluster and re-inserted.

We found that, once a user had established a session with a particular real server, that user consistently reconnected to the correct server if it was available. When we removed and re-inserted servers, existing sessions were unaffected. After a simulated server failure, users could start a session on the remaining server.

When a failed server was brought back on-line, users that had been connected to that server would reconnect to it again – even if they had started a new session on the other server in the meantime. This may not be what you want, and requires further testing.

With client and server time-outs set to one hour, we were able to leave idle sessions running for 16 hours without problems.

For more information on the new configuration options, see the development version of HAProxy’s Configuration Manual.

NB. For some daft reason, Microsoft restricted the login cookie in RDP to 9 characters! As the domain is usually listed first (mydomain\myusername), the first 9 characters may always be the same and RDP cookie session persistence won't work. Two workarounds for this are to either reduce the length of your domain name (ouch!) OR use the myusername@mydomain format when you log in….
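To see why this bites, here is a toy shell sketch (the logins are hypothetical, not from our testing) showing that two different users in the same long domain produce identical 9-character cookie keys:

```shell
# Toy illustration (assumed logins): the mstshash cookie keeps only the
# first 9 characters, so "MYDOMAIN\alice" and "MYDOMAIN\bob" truncate to
# the same key "MYDOMAIN\" and get stuck to the same server.
for login in 'MYDOMAIN\alice' 'MYDOMAIN\bob'; do
  printf '%s -> %s\n' "$login" "$(printf '%s' "$login" | cut -c1-9)"
done
```

With the myusername@mydomain format the username comes first, so the 9-character prefix differs per user and persistence works again.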

So what about Microsoft Connection Broker (Session Directory, or whatever they call it now)?

A simple one line change in your HAProxy configuration (RDP Connection Broker):

# balance rdp-cookie  ->  balance leastconn
i.e.
   defaults
        clitimeout 1h
        srvtimeout 1h
   listen VIP1 192.168.0.10:3389
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if RDP_COOKIE
        persist rdp-cookie
        balance leastconn
        option tcpka
        option tcplog
        server Win2k8-1 192.168.0.11:3389 weight 1 check   inter 2000 rise 2 fall 3
        server Win2k8-2 192.168.0.12:3389 weight 1 check   inter 2000 rise 2 fall 3
        option redispatch

Note that this is only a fragment of the haproxy.cfg file, showing the relevant options.

It's about time we updated this post with the juicy new features in HAProxy 1.5-dev7 (development).

There were a couple of problems with the hash method used for RDP cookie load balancing (as described above):

  1. Lots of people would like to use least-connection load balancing with WTS/RDP clusters (this is not possible with a hash-based persistence method).
  2. When you add or remove servers, the hash table gets reconfigured, i.e. users hit the wrong server.
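The second problem can be sketched with a toy modulo "hash" (this is not HAProxy's actual hash function, just an illustration of the remapping effect when the server count drops from 3 to 2):

```shell
# Toy modulo hash: each user number maps to a server slot. Removing one
# server (3 slots -> 2 slots) remaps most users, breaking RDP persistence.
for u in 1 2 3 4 5 6; do
  echo "user$u: 3 servers -> slot $((u % 3)), 2 servers -> slot $((u % 2))"
done
```

Most users land on a different slot after the change, which is exactly why a stick table (an explicit user-to-server map) behaves better than a hash here.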

So Loadbalancer.org took the decision to sponsor the development of stick-table based RDP persistence (we sponsored the original source IP stick table work as well). When we looked at it in more detail, we decided that what we needed was:

  1. Flexible stick tables that could be used for multiple future requirements, such as SSL session ID persistence.
  2. RDP stick table support in order to enable least connection based scheduling.
  3. Some way of restoring stick tables on session restart (and also replication to other HAProxy instances).
  4. Ensuring that TCP connections are properly closed on server failure (especially important on long connections).
  5. Ensuring that the stick table is cleared out on server failure.
  6. And finally, making sure that the fallback server can be made non-sticky! (It's really irritating to get stuck on the sorry-site-is-down page.)

To cut a long story short, let's just dive in with a full configuration file and explain it as we go:

#HAProxy configuration file generated by LB Cloud appliance
global
	#uid 99
	#gid 99
	daemon
	stats socket /var/run/haproxy.stat mode 600 level admin
	log 127.0.0.1 local4
	maxconn 40000
	ulimit-n 81001
	pidfile /var/run/haproxy.pid
defaults
	log global
	mode	http
	timeout connect	4000
	timeout client	42000
	timeout server	43000
	balance	roundrobin
peers 	localpeer
	peer loadbalancer localhost:8888
listen	stats :7777
	stats	enable
	stats	uri /
	stats hide-version
	option	httpclose
frontend F1
	bind *:3389
	maxconn 40000
	default_backend B1
	mode tcp 
	option tcplog
backend B1
	mode tcp
	option tcpka
	balance leastconn
	tcp-request inspect-delay 5s
	tcp-request content accept if RDP_COOKIE
	persist rdp-cookie
	stick-table type string size 204800 expire 120m
	stick on rdp_cookie(mstshash)
	server R1 www.loadbalancer.org:3389 weight 1 check port 3389 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
	server R2 www.clusterscale.com:3389 weight 1 check port 3389 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
	server backup us.loadbalancer.org backup non-stick
	option redispatch
	option abortonclose

An important new section is the peers section:

peers 	localpeer
	peer loadbalancer localhost:8888

In this configuration we are synchronising all of the stick table information with localhost:8888 (it could be with another HAProxy instance for session table high availability).
When HAProxy restarts, it will run existing sessions on the old process until they expire; only new sessions will run on the new HAProxy instance. (This can get quite confusing, as the stats socket or page will only show the new sessions, not the old ones.)
You will need to change your HAProxy start up scripts:

start() {
  /usr/local/sbin/$BASENAME -L loadbalancer -c -q -f /etc/$BASENAME/$BASENAME.cfg
  if [ $? -ne 0 ]; then
    echo "Errors found in configuration file."
    return 1
  fi

  echo -n "Starting $BASENAME: "
  daemon /usr/local/sbin/$BASENAME -D -f /etc/$BASENAME/$BASENAME.cfg -p /var/run/$BASENAME.pid -L loadbalancer
  RETVAL=$?
  echo
  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$BASENAME
  return $RETVAL
}
reload() {
  /usr/local/sbin/$BASENAME -L loadbalancer -c -q -f /etc/$BASENAME/$BASENAME.cfg
  if [ $? -ne 0 ]; then
    echo "Errors found in configuration file."
    return 1
  fi
  /usr/local/sbin/$BASENAME -D -L loadbalancer -f /etc/$BASENAME/$BASENAME.cfg -p /var/run/$BASENAME.pid -sf $(cat /var/run/$BASENAME.pid)
}

The important thing is that the peers definition “loadbalancer” must be present in both the start-up scripts and the haproxy.cfg file.

Now we have the new section to make the stick table use RDP cookies and the least connection scheduler:

	balance leastconn
	tcp-request inspect-delay 5s
	tcp-request content accept if RDP_COOKIE
	persist rdp-cookie
	stick-table type string size 204800 expire 120m
	stick on rdp_cookie(mstshash)

And the new options for clean, quick session termination, plus making sure the backup server does not enter the stick table:

	server R2 www.clusterscale.com:3389 weight 1 check port 3389 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
	server backup us.loadbalancer.org backup non-stick

I probably haven’t explained all that very well… but these tweaks ensure that servers that fail health checks immediately break their long-held TCP connections.

Feel free to ask questions :-) .

Someone asked for a complete configuration file, so here goes:

 

# HAProxy configuration file generated by loadbalancer.org appliance
global
	daemon
	stats socket /var/run/haproxy.stat mode 600 level admin
	pidfile /var/run/haproxy.pid
	maxconn 40000
	ulimit-n 81000
	tune.maxrewrite 1024
defaults
	mode http
	balance roundrobin
	timeout connect 4000
	timeout client 42000
	timeout server 43000
peers loadbalancer_replication
	peer lbmaster localhost:7778
	peer lbslave localhost:7778
listen RDP_Test
	bind 192.168.67.30:3389
	mode tcp
	balance leastconn
	server backup 127.0.0.1:9081 backup  non-stick
	option tcpka
	tcp-request inspect-delay 5s
	tcp-request content accept if RDP_COOKIE
	stick-table type string size 10240k expire 12h peers loadbalancer_replication
	stick on rdp_cookie(mstshash) upper
	timeout client 12h
	timeout server 12h
	option redispatch
	option abortonclose
	maxconn 40000
	server 2008_R2 192.168.64.50:3389  weight 1  check   inter 2000  rise 2  fall 3 minconn 0  maxconn 0  on-marked-down shutdown-sessions
listen stats :7777
	stats enable
	stats uri /
	option httpclose
	stats auth loadbalancer:loadbalancer

Note some small changes in the timeouts & stick table section:

	stick-table type string size 10240k expire 12h peers loadbalancer_replication
	stick on rdp_cookie(mstshash) upper <-- Nice little tweak to ensure the cookie match isn't case sensitive!
	timeout client 12h
	timeout server 12h <-- Massive timeout, but works well in office environments where people go for long lunch breaks and forget to log off...






EC2 load balancer appliance rocks, and it's FREE… for now anyway.


Update: Sorry but as of Wednesday 6th Oct 2010, the free lifetime license is no longer available!

OK, so let me begin by saying that I am both excited and slightly scared by our latest product. I’m excited because after playing around with it in the Amazon cloud, I’ve become slightly addicted to launching multiple instances in different parts of the world and load balancing the traffic seamlessly. I’m slightly scared because this could change our whole business model from hardware load balancer vendor to online SAAS (Software As A Service) provider.

So why does the new Loadbalancer.org EC2 ENTERPRISE rock?

The Loadbalancer.org EC2 ENTERPRISE provides a simple and flexible cloud application management tool (aka a load balancer). You simply fire up an instance from our public ami, configure it for your application cluster, and then, for disaster recovery purposes, simply bundle up the whole ami (pre-configured).

“Hang on a minute, doesn’t the Amazon cloud already have a load balancing service?”, I hear you say.

Ah yes, Amazon's load balancing service is very good and very fast, but:

  1. It is layer 4 only (round robin).
  2. Doesn't support SSL termination.
  3. Doesn't support cookies.
  4. Doesn't support WAN or SNAT load balancing (i.e. non-local servers).
  5. Doesn't support URL matching rules or multiple backend clusters.
  6. Doesn't support application maintenance modes.
  7. Doesn't support customized health checks.

The Loadbalancer.org EC2 ENTERPRISE does all of the above (well, point 7 not quite yet, but it will; fixed in RC-1 :-) ).


Now before you get too excited, this product is currently a BETA, and by that I mean that once you have it configured and tested it is probably perfectly fine in production, BUT while configuring and testing it, don’t be surprised if you find some gotchas in the web interface! It is also almost feature complete; it does most things that you would require of it, and does them well….

It's a long story, but this product has been in development hell (alpha) for nearly two years now!

So I have personally taken a solid week to kick it into its current BETA6 shape, and intend to get it to RC1 pretty damn quickly….

I’m a strong believer in Trump’s “Ready, Fire, Aim”:

” So anyone who uses the Loadbalancer.org EC2 ENTERPRISE (BETA) gets a free perpetual license (on request) to use the finished product and all future versions!”

Another reason for this is that we really need feedback on how to develop this product further, with questions like:

  • Does it need the ability to remotely start instances when load increases?
  • Does it need a heartbeat failover mechanism or just scripted ami failover?
  • Does it need SNMP / graphical statistics?
  • Is it fine as it is?

Warning: The following screen shot is not pretty… but it is functional and server maintenance is seamless and AJAXified….


So you’ve either left by now or hopefully I’ve caught your interest!

So how do you get started with testing? Simple: just open your AWS console (or ElasticFox) and search for the public ami (ami-5eb9932a): loadbalancer.org/ENTERPRISE-EC2-v1-demo.manifest.xml
But make sure you are searching in EU-WEST… or US-EAST @ loadbalancer.org-us-east/ENTERPRISE-EC2-v1-demo.manifest.xml

Once you’ve found it, simply right-click it and select start instance!

Obviously you are going to need a security group with a few useful ports open:

  • 22 – SSH : Always useful
  • 9443 : This is your access to the web administration interface (it's HTTPS access only)
  • 7777 : This is for administrative access to the HAProxy status report
  • 80 & 443 : You will probably want these open in order to put some test clusters on them

Once the instance is up and running find the public DNS and access the web interface with something like:

https://ec2-79-125-XX-XX.eu-west-1.compute.amazonaws.com:9443/

username: loadbalancer
password: loadbalancer

To set up a cluster:

  • Click on the Server tab
  • Add a front end called F1 with port 80, default backend B1, mode = http
  • Then add a new back end called B1, persistence = cookies, fallback = 127.0.0.1:80
  • Then add a new server: label = myserver, DNS/IP = www.loadbalancer.org, port 80, weight 1


Then, if not already prompted, you will need to use Maintenance > Restart HAProxy.

Assuming you get no errors then simply go to:

http://ec2-79-125-XX-XX.eu-west-1.compute.amazonaws.com/

And your load balancer will re-direct you to www.loadbalancer.org!

Simple?

Anyway we’d love your feedback!

And yes, we know it needs a load of Javascript sanity checking added (it's very easy to break the URL rules section :-) ).

BETA7 – UPDATE

OK, so beta7 is getting pretty close to feature complete:

You can now wrap up an exact copy of your load balancer instance, upload and save the ami to an S3 bucket, register the image, and then launch it as an autoscaling instance with an assigned elastic IP…. aka an HA load balancing solution.

In order to achieve this, simply go to the accounts tab, fill in lots of fields and hit the save buttons… work from the top slowly and read the messages! Section 3 ‘image wrapping’ can take about 30 mins+ (it will tell you when it's finished).

Section 4 ‘auto scaling’ WILL COST YOU MONEY i.e. it will launch a new instance that is VERY HARD TO DESTROY:

EC2 autoscale - hard to kill

That's why it shows the destroy script clearly on screen when it is finished! (If you are interested, the creation/save scripts are /etc/loadbalancer.org/aws/bundle.sh & launch.sh.)

#!/bin/bash
# /etc/loadbalancer.org/aws/rmlaunch.sh
# This script needs to be used to terminate an autoscaling instance (make a copy of it locally, as it won't work if it terminates itself!)
export AWS_AUTO_SCALING_HOME="/etc/loadbalancer.org/aws/ec2-api-tools"
export EC2_HOME=/etc/loadbalancer.org/aws/ec2-api-tools
export EC2_PRIVATE_KEY=/etc/loadbalancer.org/aws/pk.pem
export EC2_CERT=/etc/loadbalancer.org/aws/cert.pem
export JAVA_HOME=/usr
/etc/loadbalancer.org/aws/ec2-api-tools/bin/as-update-auto-scaling-group EC2VAGroup --launch-configuration EC2VAConfig --availability-zones us-east-1a,us-east-1b --min-size 0 --max-size 0 --cooldown 100 --region us-east-1
sleep 120
/etc/loadbalancer.org/aws/ec2-api-tools/bin/as-delete-auto-scaling-group EC2VAGroup --region us-east-1 -f
/etc/loadbalancer.org/aws/ec2-api-tools/bin/as-delete-launch-config EC2VAConfig --region us-east-1 -f

You can launch the kill script from the original load balancer image (i.e. not the autoscale one), or you can probably get away with running it on the actual autoscale image, but obviously it will kill itself during the first sleep command….
So the auto-scaling group and launch configuration won't actually get killed… but at least the image will terminate :-) .

RC-1 – UPDATE

OK, So we finally have a release candidate! Yeah!

  • Loads of bug fixes
  • Loads of input verification stuff
  • New extended health checks – nicked from ‘nagios’ – so in theory any nagios check can be implemented.
  • If you specify a check file (e.g. index.html) and a Response Expected value (e.g. OK), the specified file will be read from each server and the output grepped for OK; if this fails, the real server is put in maintenance mode.
  • Password change functionality implemented for web interface.
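As a rough sketch of what that grep-style check boils down to (hypothetical shell; a local file stands in for the page fetched from the real server, since the appliance's actual check script isn't shown here):

```shell
# Stand-in for the body fetched from e.g. http://realserver/index.html
printf 'OK\n' > /tmp/check_response.txt

# If the expected response is found, the server stays up; otherwise it
# would be put into maintenance mode.
if grep -q 'OK' /tmp/check_response.txt; then
  echo "server healthy"
else
  echo "server -> maintenance mode"
fi
```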

ENTERPRISE EC2 v1 – UPDATE

Yeah – We are all systems go!

Loadbalancer.org/ENTERPRISE-EC2-v1-demo.manifest.xml
Loadbalancer.org/ENTERPRISE-EC2-v1-PAID.manifest.xml (20 cents an hour)
Loadbalancer.org-us-east/ENTERPRISE-EC2-v1-demo.manifest.xml
Loadbalancer.org-us-east/ENTERPRISE-EC2-v1-PAID.manifest.xml (20 cents an hour)

ENTERPRISE EC2 V1.5.2 – UPDATE

Oops, we haven't updated this blog entry in a while!

The new EC2 v1.5.2 has a load of updates:

  • Improvements to stability and resource utilization
  • Stick tables now persist across HAProxy restarts
  • RDP cookies now have stick table support
  • TCP connections now disconnect quickly on server failure
  • Fallback server is non-sticky by default
  • Default connection limits and timeouts increased
  • Feedback agent CPU Idle available as a Windows Service


G-Zip Compression and Loadbalancing


A couple of our customers have asked whether our appliances can do G-Zip compression; in the past we hadn't given it much thought. Then, out of the blue, a company (http://www.aha.com/) offered us a card to test with, and some of us in the office, welcoming the opportunity to meddle with anything new, jumped at the chance.

They provided us with an AHA360-PCIe 2.5 Gbits/sec GZIP Compression/Decompression Accelerator.

Features

  • Open standard compression algorithm (GZIP/Deflate)
  • Performs both compression and decompression
  • Compresses and decompresses at a throughput up to 2.5 Gbits/sec
  • Uses the AHA3610
  • x4 PCI-Express interface
  • ASCII text compresses greater than 3.0:1
  • 3.6:1 Canterbury corpus compression ratio
  • Scatter Gather DMA
  • 64-bit OS support
  • Linux and Windows reference drivers with source code
  • SMP and Hyperthreading support
  • 32-bit and 64-bit Linux kernel support

As we all like stats, here are a few numbers from the card's initial performance.

Firstly, I compared the G-Zipping (I'm sure that's not a real word) of a 533MB iso file in software with the same process running through the card. The numbers are quite good: as you can see, it almost entirely offloaded the process to the card and ran the compression through in well under half the time it took the software version of gzip.

time ./ahazip -f=/usr/src/file31.iso -o=/usr/src/compressed.gz

real    0m15.566s
user    0m0.015s
sys    0m1.674s

[root@lbmaster bin]# time gzip /usr/src/file31.iso

real    0m38.915s
user    0m33.966s
sys    0m1.084s
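To put the single-file numbers in perspective, here is a quick back-of-the-envelope throughput calculation (533MB divided by the real times reported above):

```shell
# Approximate single-stream throughput from the timings above (533MB file).
awk 'BEGIN {
  printf "ahazip (card): %.1f MB/s\n", 533 / 15.566
  printf "gzip (software): %.1f MB/s\n", 533 / 38.915
}'
# prints:
# ahazip (card): 34.2 MB/s
# gzip (software): 13.7 MB/s
```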

I then decided to get a bit more adventurous and compress two files at once; the performance of the card was quite heavily affected.

[root@lbmaster bin]# time ./ahazip -f=/usr/src/file32.iso -o=/usr/src/compressed1.gz

real    0m35.888s
user    0m0.011s
sys    0m1.717s

[root@lbmaster bin]# time ./ahazip -f=/usr/src/file31.iso -o=/usr/src/compressed.gz

real    0m28.262s
user    0m0.012s
sys    0m1.611s

And the results for G-Zipping the two files in software:

[root@lbmaster bin]# time gzip /usr/src/file31.iso

real    0m58.823s
user    0m43.235s
sys    0m1.480s

[root@lbmaster bin]# time gzip /usr/src/file32.iso

real    1m7.898s
user    0m45.205s
sys    0m1.325s

What impressed me the most was the incredibly low overhead when using the card compared with using the system to do the compression. It barely touched the CPU, leaving the system to get on with other important jobs.

Back to the main event………

Here is how I did it. You need your G-Zip card to use the standard Zlib compression library, and obviously you need a proxy that can support it. In this instance we are using Nginx, as in its current guise HAProxy (both HAProxy and Nginx are available on our appliances) does not support G-Zipping the data stream, so we had to proxy the proxy to get it to work. I'll assume you're happy installing it and getting everything running. Below is the procedure for getting the card we were supplied with to work with Nginx.

I had to use a slightly modified configure file as well as a modified makefile, which were kindly given to me by their excellent support team.

The two files are here: http://www.loadbalancer.org/download/gzipcompression/zlibfiles.tar.gz. You just need to replace the existing ones in the Zlib directory that came with the drivers you downloaded from their site.

To configure Nginx I used the following command:

./configure --prefix=/usr --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ --with-zlib=/usr/src/AHA36x_ver_2_4_0_08312010/zlib/

The final option (--with-zlib=…) is the important bit, as that is the location of the Zlib source files; these are compiled when you run the make for Nginx.

Then it's just a simple:

make

make install

and if you have got your ./configure bit right everything should be fine and dandy and exactly where you wanted it.

I finally managed to get Nginx to function as a proxy, after raiding various websites and hacking about with the options suggested here: http://www.cyberciti.biz/faq/rhel-linux-install-nginx-as-reverse-proxy-load-balancer/. I'll include the config files for download later.

To do this, take the files included in the archive nginxconf.tar and place them into /etc/nginx, or wherever you set the location of the configuration files to be. Take a look at all of them, but for the moment the ones we are interested in are haproxyredirect.conf and options.conf.

options.conf – this is where you can set the G-Zip options; a full definition of these options is available at http://wiki.nginx.org/NginxHttpGzipModule

gzip    on; <-- enables G-Zip
gzip_http_version       1.1; <-- compression version
gzip_comp_level         9; <-- sets the compression level: 0 is no compression, 9 is the highest allowed; the higher the value, the more work the device has to do
gzip_min_length 0; <-- only G-Zip files larger than this size
gzip_types text/plain text/css text/xml text/javascript application/x-javascript image/gif image/jpeg image/jpg image/png image/bmp image/x-icon; <-- types of file to compress
#gzip_disable "MSIE [1-6].(?!.*SV1)"; <-- as G-Zip is not compatible with IE 1-6, this checks for it and disables compression
gzip_vary on;

You can adjust the above settings to your requirements.

Next is haproxyredirect.conf. There are two lines that you are interested in:

upstream nixcraft  {
server 127.0.0.1:8081; <-- this should be the VIP of HAProxy; queries made on the address below will be passed to it

}

server {
access_log  /var/log/access.log main;
error_log   /var/log/error.log;
index       index.html;
root        /usr/local/nginx/html;
listen 192.168.17.45:8080; <-- this is the IP address and port that your users will connect to

And don't forget to restart Nginx after changing the configuration.

I started with a web page that contained an enormous jpeg file; after some discussion over the already-compressed file format and how any differences would be negligible, I settled on a 7.3MB bitmap file, which gzip compression reduced to a paltry 182.3KB. That impressed the heck out of me!
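For the curious, that bitmap result works out to roughly a 41:1 compression ratio; a quick check, converting MB to KB:

```shell
# 7.3MB original vs 182.3KB compressed
awk 'BEGIN { printf "%.0f:1\n", (7.3 * 1024) / 182.3 }'
# prints 41:1
```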

An additional note: the way I first compiled Nginx meant that it had reverted to software compression; handily, the card's tech support guys (as I have already mentioned) know their stuff and were able to help me out. It did mean that I was able to see the different load times under the different methods of compression: in software the load time was 1.65 seconds, and with the card enabled it was 1.02 seconds. It managed to knock half a second off… not bad.

Now the only question remaining is: haproxy then nginx, or nginx then haproxy?

Nginx then haproxy: it must be this way to allow persistence to work. Thankfully, Nginx does not strip the cookies or any of the header information on the return path; it's just happy to G-Zip everything up on the way through.

My setup was:

192.168.17.45:8080 –> Nginx –> 127.0.0.1:8081 –> HAProxy –> 192.168.17.2

Below is the HAProxy configuration, which can be done from the user interface to save you a bit of typing, but here it is if you need it.

global
#uid 99
#gid 99
daemon
stats socket /var/run/haproxy.stat mode 600 level admin
maxconn 40000
ulimit-n 81000
pidfile /var/run/haproxy.pid
defaults
mode    http
contimeout      4000
clitimeout      42000
srvtimeout      43000
balance roundrobin
listen  VIP_Name 127.0.0.1:8081
mode    http
option  httpclose
option  forwardfor
cookie  SERVERID insert nocache indirect
balance leastconn
server RIP_Name 192.168.17.2:80 weight 1 cookie RIP_Name check  inter 2000 rise 2 fall 3
server  backup 127.0.0.1:9081 backup
option redispatch
option abortonclose
maxconn 40000

The config files are rather lengthy, so I've tarred them up so you can download them at your leisure:

http://www.loadbalancer.org/download/gzipcompression/

If you have done all that correctly you should have a G-Zip compressing proxy!


IIS and X-Forwarded-For Header


So, you’re using IIS and you want to track your clients by IP address in your IIS logs. Unfortunately, out of the tin, this is not directly supported. The X-Forwarded-For (XFF) HTTP header is an industry standard method to find the IP address of a client machine that is connecting to your web server via an HTTP proxy, load balancer etc. Fortunately, depending on the version of IIS being used, there are a number of ways to enable this.

A – IIS 7 & later:

Microsoft do now have a solution – it’s called IIS Advanced Logging. This is an installable IIS feature and can be downloaded here. Once installed on the IIS server, you’ll see an extra option called ‘Advanced Logging’ for the sites in IIS.

Once installed, follow the steps below to add the X-Forwarded-For log field to IIS:

1. From your Windows Server 2008 or Windows Server 2008 R2 device, open IIS Manager
2. From the Connections navigation pane, click the appropriate server, web site, or directory on which you are configuring Advanced Logging. The Home page appears in the main panel
3. From the Home page, under IIS, double-click Advanced Logging
4. From the Actions pane on the right, click Edit Logging Fields
5. From the Edit Logging Fields dialog box, click the Add Field button, and then complete the following:
-in the Field ID box, type X-Forwarded-For
-from the Category list, select Default
-from the Source Type list, select Request Header
-in the Source Name box, type X-Forwarded-For
-click the OK button in the Add Logging Field box, and then click the OK button in the Edit Logging Fields box
6. Click a Log Definition to select it. By default, there is only one: %COMPUTERNAME%-Server. The log definition you select must have a status of Enabled
7. From the Actions pane on the right, click Edit Log Definition
8. Click the Select Fields button, and then check the box for the X-Forwarded-For logging field
9. Click the OK button
10. From the Actions pane, click Apply
11. Click Return To Advanced Logging
12. In the Actions pane, click Enable Advanced Logging

Now, when you look at the logs the client IP address is included.

B – IIS 6:

Unfortunately the Microsoft solution mentioned above is not available for IIS 6. Luckily, there are a number of solutions available to address this limitation – some that cost money and others that have been released as open source. One excellent example that we’ve tested with our products is F5’s X-Forwarded-For ISAPI filter. It’s available in both 32 & 64-bit versions.

1. Download the zipped archive from here and extract to an appropriate folder
2. Navigate to the relevant version (32 or 64 bit)
3. Copy F5XForwardedFor.dll to a suitable location on your server, e.g. C:\ISAPIfilters
4. Make sure you have ISAPI Filters enabled on your IIS server
5. Open IIS Manager, right-click the site and select Properties
6. Select the ISAPI Filters tab
7. Click ‘add’, then in the popup enter a suitable name and select the DLL file stored in step 3
8. Restart your website

That’s it – you should now start seeing the IP addresses of client PCs in your IIS logs rather than the IP of the load balancer.

Apache and X-Forwarded-For Headers


As a follow-on to my previous blog, it's easier to get Apache to log client IP addresses using X-Forwarded-For headers than it is with IIS. By default the logs do not record client source IP addresses, but this is very easy to change using the LogFormat directive in the httpd.conf file, as explained below.

The standard LogFormat directive:
LogFormat "%h %l %u %t \"%r\" %>s %b" common

To add the client's source IP address, just change this to:
LogFormat "%h %l %u %t \"%r\" %>s %b %{X-Forwarded-For}i" common

To add the client's source IP address and put quotes around each field (useful when importing the logs into a spreadsheet or database):
LogFormat "\"%h\" \"%l\" \"%u\" \"%t\" \"%r\" \"%>s\" \"%b\" \"%{X-Forwarded-For}i\"" common

Once you’ve made the change, restart Apache and you’re done. The examples below show the resulting log entries for each configuration.

Standard logs:
192.168.2.210 - - [09/Feb/2011:09:59:31 +0000] "GET / HTTP/1.1" 200 44

Client IPs added:
192.168.2.210 - - [09/Feb/2011:10:00:16 +0000] "GET / HTTP/1.1" 200 44 192.168.2.7

Client IPs added and all fields encapsulated in quotes:
"192.168.2.210" "-" "-" "[09/Feb/2011:10:01:10 +0000]" "GET / HTTP/1.1" "200" "44" "192.168.2.7"

N.B.
192.168.2.210 is the IP of the Ethernet interface (eth0) on the load balancer
192.168.2.7 is the IP of my test PC

One other point, if you also have Pound SSL in your configuration, once you’ve added the X-Forwarded-For bit to your LogFormat directive, the logs will also record an additional entry for the Pound virtual server as shown below:

192.168.2.210 - - [09/Feb/2011:10:02:16 +0000] "GET / HTTP/1.1" 200 44 192.168.2.7, 192.168.2.212

The additional IP address (192.168.2.212) in this example is the IP of the Pound virtual server.
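Since each proxy in the chain appends the address it received the request from, the left-most entry in the X-Forwarded-For value is always the original client. A minimal Python sketch of extracting it (the header value reuses the example addresses above):

```python
# Extract the original client IP from an X-Forwarded-For header value.
# Each proxy appends its upstream address, so the left-most entry is the
# original client and later entries are intermediate proxies (e.g. Pound).
def client_ip(xff_value):
    return xff_value.split(",")[0].strip()

print(client_ip("192.168.2.7, 192.168.2.212"))  # -> 192.168.2.7
```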

Load Balancing Exchange 2010 CAS Array with HAProxy (Quick Guide)


This blog post is for anyone wanting to load balance the Exchange 2010 CAS role using only open source software. In my example I will be starting with a simple Debian net-install and building the HAProxy package from source, because I wanted the latest feature set available. I would definitely recommend using a recent 1.5-dev build if following this guide, as otherwise parts of the HAProxy configuration may be incompatible.


Update the system and install dependencies :

1. Update

root@localhost:~# apt-get update

2. Install dependencies

root@localhost:~# apt-get install build-essential make libpcre3 libpcre3-dev


Downloading/Building the HAProxy package :

1. Download the HAProxy Package available from http://haproxy.1wt.eu

root@localhost:~# wget http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev11.tar.gz

2. Extract the package

root@localhost:~# tar -xvzf haproxy-1.5-dev11.tar.gz

3. Change directory and make the package

root@localhost:~# cd haproxy-1.5-dev11
root@localhost:~/haproxy-1.5-dev11# make TARGET=linux2628 ARCH=x86_64 USE_PCRE=1

4. Install the newly compiled package and confirm it is installed.

root@localhost:~/haproxy-1.5-dev11# make install
root@localhost:~/haproxy-1.5-dev11# /usr/local/sbin/haproxy -vv

Assuming you didn’t run into any errors with the previous commands you should now have HAProxy installed.


Configuring startup script :

1. Create the startup script

root@localhost:~/haproxy-1.5-dev11# nano -w /etc/init.d/haproxy

2. Paste the following into the new file and save it (with Ctrl+X)

#!/bin/sh
# /etc/init.d/haproxy

PATH=/bin:/usr/bin:/sbin:/usr/sbin

pidfile=/var/run/haproxy.pid
binpath=/usr/local/sbin/haproxy
options="-f /etc/haproxy/haproxy.cfg"

test -x $binpath || exit 0

case "$1" in
  start)
    echo -n "Starting HAproxy"
        $binpath $options
    #start-stop-daemon --start --quiet --exec $binpath -- $options
    echo "."
    ;;
  stop)
    echo -n "Stopping HAproxy"
    kill `cat $pidfile`
        #start-stop-daemon --stop --quiet --exec $binpath --pidfile $pidfile
    echo "."
    ;;
  restart)
    echo -n "Restarting HAproxy"
    #start-stop-daemon --stop --quiet --exec $binpath --pidfile $pidfile
    kill `cat $pidfile`
        sleep 1
    $binpath $options
    echo "."
    ;;
  *)
    echo "Usage: /etc/init.d/haproxy {start|stop|restart}"
    exit 1
esac

exit 0

3. Change permissions and register the startup script

root@localhost:~/haproxy-1.5-dev11# cd /etc/init.d
root@localhost:~/etc/init.d# chmod +x haproxy
root@localhost:~/etc/init.d# update-rc.d haproxy defaults

You should now be able to start and stop haproxy with “service haproxy <action>” where the action is start/stop/restart.


Creating the HAProxy configuration file :

1. Create folder structure and open the config file for editing

root@localhost:~/etc/init.d# mkdir /etc/haproxy
root@localhost:~/etc/init.d# nano -w /etc/haproxy/haproxy.cfg

2. Paste in the example configuration and adapt for your settings.

N.B. The bind lines below can be adapted to listen on a specific IP address; simply add your desired local IP: bind 192.168.72.120:135,192.168.72.120:60200,192.168.72.120:60201

global
 daemon
 stats socket /var/run/haproxy.stat mode 600 level admin
 pidfile /var/run/haproxy.pid
 maxconn 40000
 ulimit-n 81000

defaults
 mode http
 balance roundrobin
 timeout connect 4000
 timeout client 86400000
 timeout server 86400000
frontend CAS-RPC
 bind :135,:60200,:60201
 mode tcp
 maxconn 40000
 default_backend CAS-RPC-SERVERS
frontend CAS-WEB
 bind :80,:443
 mode tcp
 maxconn 40000
 default_backend CAS-WEB-SERVERS
frontend HT-SMTP
 bind :25
 mode tcp
 maxconn 40000
 default_backend HT-SERVERS
backend CAS-RPC-SERVERS
 stick-table type ip size 10240k expire 60m
 stick on src
 option redispatch
 option abortonclose
 balance leastconn
 server EXCH01 192.168.72.222 weight 1 check port 135 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
 server EXCH02 192.168.72.223 weight 1 check port 135 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
backend CAS-WEB-SERVERS
 stick-table type ip size 10240k expire 60m
 stick on src
 option redispatch
 option abortonclose
 balance leastconn
 server EXCH01 192.168.72.222 weight 1 check port 443 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
 server EXCH02 192.168.72.223 weight 1 check port 443 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
backend HT-SERVERS
 option redispatch
 option abortonclose
 balance leastconn
 server EXCH01 192.168.72.222 weight 1 check port 25 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
 server EXCH02 192.168.72.223 weight 1 check port 25 inter 2000 rise 2 fall 3 on-marked-down shutdown-sessions
listen stats :7777
 stats enable
 stats uri /
 option httpclose
 stats admin if TRUE
 stats auth admin:password

3. Start HAProxy with your new configuration

root@localhost:~/etc/init.d# service haproxy start

N.B. At this stage, if you receive errors like the one below, please check that nothing else is listening on any of the required ports.

[ALERT] 205/123152 (2839) : Starting proxy CAS: cannot bind socket [192.168.72.120:135]

You now also have a web UI in the form of the HAProxy stats page, which includes useful options such as taking a server offline.

http://<IP-ADDRESS>:7777/


Configuring the Exchange 2010 CAS role :

1. Either configure the ports manually or use the following registry file (use at your own risk)

Link to the Registry file = http://downloads.loadbalancer.org/RPC%20Ports.reg

Manual Static Port Configuration

To set a static port for the RPC Client Access Service, open the registry on each CAS and navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchangeRPC

Here, you need to create a new key named ParametersSystem, and under this key create a new DWORD (32-bit) Value named TCP/IP Port as shown below. The value of the DWORD should be the port number you want to use. Microsoft recommends you set this to a unique value between 59531 and 60554 and use the same value on all CAS. In this blog the port used is 60200.

N.B. Make sure you use a DWORD Value for this key




To set a static port for the Address Book Service, open the registry on each CAS and navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchangeAB

Here, you need to create a new key named Parameters, and under this key create a new String Value named RpcTcpPort as shown below. Microsoft recommends you set this to a unique value between 59531 and 60554 and use the same value on all CAS. In this blog the port used is 60201.

N.B. Make sure you use a STRING Value for this key




2. Creating the DNS entry

Create a DNS record for the CAS array; this should point to the load balancer’s IP address (the bind address, if used earlier), e.g. cas.domain.com

3. Configure the CAS array object

Use the following command from the Exchange 2010 management shell to create the object :

New-ClientAccessArray -Name "CAS-array" -FQDN "cas.domain.com" -Site "default-first-site-name"

N.B. Change “default-first-site-name” to the AD site appropriate for your Client Access Servers.
N.B. Change “cas.domain.com” to the FQDN of the CAS array (same as the DNS entry).

If the mail database already existed before creating the array, you’ll also need to run the following command to relate the new CAS array to the database:

Set-MailboxDatabase "NameofDatabase" -RpcClientAccessServer "cas.domain.com"

To verify the configuration of the CAS array, use the following commands from the Exchange Shell :

get-ClientAccessServer

lists the available Client Access Servers

get-ClientAccessArray

lists the Client Access Array and its members


Finished

Once you’ve completed all the previous steps you can access your CAS services via your load balancer IP, and it should be correctly load balancing connections for better performance and real server resilience. There are still many ways you could build further resilience or add more features to this solution – such as HA, DAGs and SSL termination – but this will still give you perfectly adequate load balancing of the CAS and HT roles.

Setting up HAProxy with Transparent Mode on Centos 6.x


Transparent mode with HAProxy allows you to see the IP address of the client’s computer while still having a highly available service using HAProxy.

This post shows how to set this up on a fresh CentOS 6.3 64-bit minimum installation.

This guide works on the assumption that you have a public-facing IP address of 192.168.10.50 (I know that’s not a real public address) and are using an internal network address space of 10.10.10.x/24, with our two web servers on 10.10.10.10 and 10.10.10.15. So we will have two network interfaces on our load balancer: eth0 will be set with our real-world IP of 192.168.10.50, and eth1 will be set up with 10.10.10.1.

After installing our basic CentOS 6.3 64-bit OS, it may be worth running a ‘yum update‘ command first to ensure that the system is fully updated.

As this is a minimum installation you will also need to install a few other packages. These can be installed with the following command:

yum install make wget gcc pcre-static pcre-devel

I’m using the HAProxy 1.5-dev7 build for this example, but at the time of writing dev12 is the latest available build, and I’ll assume that the following will also work with that development release. However, to get all the features that we require, we will need to build HAProxy from source and not from the package repository. The following steps enable us to do just that:

  • wget http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev7.tar.gz
  • tar -zxf haproxy-1.5-dev7.tar.gz
  • cd haproxy-1.5-dev7
  • make TARGET=linux26 USE_STATIC_PCRE=1 USE_LINUX_TPROXY=1
  • cp haproxy /usr/bin/haproxy
  • cp examples/haproxy.cfg /etc/haproxy.cfg

The installation is now complete. However, we only have an example configuration file installed at ‘/etc/haproxy.cfg’; this is the file that will store all of the settings we require to ensure our website is available for the maximum number of visitors. So we now need to edit this configuration file. I’m going to use ‘vim’, but if you are more familiar with ‘nano’, ‘ee’ or another editor, please use that.

vim /etc/haproxy.cfg

Have a quick look through the file if you wish to see the basic structure of the configuration file; we are going to create a VERY basic config to start with, just to make sure that our installation is working.

global
daemon
log /dev/log local4
maxconn 40000
ulimit-n 81000

defaults
log global
contimeout 4000
clitimeout 42000
srvtimeout 43000

listen http1
bind 192.168.10.50:80
mode http
balance roundrobin
server http1_1 10.10.10.10:80 cookie http1_1 check inter 2000 rise 2 fall 3
server http1_2 10.10.10.15:80 cookie http1_2 check inter 2000 rise 2 fall 3

Save the above configuration file and then to start the HAProxy service use the following command from the command line:

/usr/bin/haproxy -f /etc/haproxy.cfg

If everything starts correctly you should be able to browse to your real IP address from a different computer and see your default page; as mine are just two Debian web servers, I get the following:

If you see the above image, or the page for your servers – congratulations, your two web servers are now in high availability mode. If you do not see your default page, stop HAProxy with a killall haproxy command and run /usr/bin/haproxy -d -f /etc/haproxy.cfg. This will restart HAProxy with debugging displayed on the console; to stop the debug info being printed and the HAProxy service, simply press Ctrl+C.

Now that basic high availability is working, let’s move on to transparent mode.

So, with the HAProxy service stopped, open your /etc/haproxy.cfg file again with your editor of choice, and in the ‘listen http1’ section add the following:

option http-server-close
option forwardfor
source 0.0.0.0 usesrc clientip

You will now need to edit your iptables rules. I have this as my ‘iptables-rules.sh’ file:

iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 111
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 111 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

If you now run this file, start your modified HAProxy configuration and retest against your web server on the real IP address, you should see in the HTTP access logs that the visiting address is the client’s real IP rather than that of the load balancer.

HAProxy Email Alerts Guide


In this guide I show a very simple solution for getting HAProxy email alerts configured using Logwatch. While the first part is aimed at users of our V7 appliance, I think anyone wanting email alerts for HAProxy will find this a good example.

First from the WUI :

1. Set the external relay (Smart Host) under Edit Configuration > Physical – Advanced Configuration.


2. Enable HAProxy Logging under Edit Configuration > Layer 7 – Advanced Configuration.


Then from the CLI :

1. Install the logwatch package using yum like so :

[root@lbmaster ~]# yum --disableexcludes=all install logwatch

2. Create the following file and set your To/From email addresses : /etc/logwatch/conf/logwatch.conf

MailTo = myemailaddress@example.com
MailFrom = LBMaster@example.com

At this point you’ll have a standard Logwatch install which will send an email once per day (you may want to disable this).

*To disable the daily logwatch email execute : chmod -x /etc/cron.daily/0logwatch

3. Create a custom Layer 7 check

a. Create the following file adding the contents below : /etc/logwatch/conf/logfiles/layer7.conf

LogFile = /var/log/haproxy.log
*OnlyHost
*ApplyStdDate

b. Next create the script : /etc/logwatch/scripts/services/layer7

#!/usr/bin/perl
use strict;
my $find = "is UP|is DOWN";
my @lines = <STDIN>;
for (@lines) {
    if ($_ =~ /$find/) {
        print $_;    # lines already end with a newline
    }
}

c. Finally create the following file : /etc/logwatch/conf/services/layer7.conf

Title = "Layer 7 Errors"
LogFile = layer7

4. Enable the Logwatch job to run every minute with Cron

a. Edit crontab with the following command :

[root@lbmaster ~]# crontab -e

b. Add the following new line to root’s crontab :

*/01 * * * * /bin/nice -n 19 /usr/sbin/logwatch --service layer7 --range '-1 minutes for that minute'

Once this is complete, you’ll receive an email in the event of a real server failure. The way this works is that Logwatch runs every minute and searches the previous minute’s log entries for servers that were taken down or brought up during that time.


Microsoft drops support for mstshash cookies?


Recently we have seen quite a few customer issues where using RDP cookies (mstshash cookies – see http://www.snakelegs.org for more details) seems to result in multiple active sessions over several RDP servers, as shown below (notice the user Rob on both TS servers).

Duplicate RDP Users

So we decided to investigate this and find out why…

Where do we start? Well, our first thought was to check if there were any issues with the load balancing application or its configuration. We found that everything there looked fine, except that not every user seemed to be in the stick table, and that when testing the exact same environment with Windows XP or a Linux client we were unable to replicate the issue.

OK, so what do we do now? We know the load balancer and its configuration seem to be working as they should, as the XP client shows. So what are Windows 7 and above doing differently, and why are all the users not always in the stick table?

After some research we found the TechNet documentation on the X.224 Connection Request PDU and the following key text:

<13> Section 3.2.5.3.1: Microsoft RDP 5.1, 5.2, 6.0, 6.1, 7.0, 7.1, and 8.0 clients always include the cookie field in the X.224 Connection Request PDU if a nonempty username can be retrieved for the current user and the routingToken field is not present (the IDENTIFIER used in the cookie string is the login name of the user truncated to nine characters).

OK, so that makes sense: when making a connection request, the client should send a cookie made from the user’s username so the load balancer knows who the user is. We fired up Wireshark and, what do you know, we found that the new client does not always send this cookie. It seems it will send no cookie, or an incorrect one, in the following conditions:

  1. Starting the client and using the cached/saved credentials – no hash is sent.
  2. Mistyping the username and then correcting it – mstsc still uses the old username as the hash, not the updated one.
  3. Under certain conditions, if a user entered the wrong password, the cookie would default to domain/user rather than the expected user@domain format entered in the username field. This is an issue for users with domains over 9 characters long, as the cookie can then be duplicated across many users or fail to match the previously used cookie.
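The nine-character truncation described in the documentation makes it easy to see how duplicate cookies arise with long domain names – a quick sketch (the usernames here are hypothetical):

```python
# Build the X.224 Connection Request cookie string as documented:
# "Cookie: mstshash=IDENTIFIER", where IDENTIFIER is the login name
# truncated to nine characters.
def mstshash_cookie(login_name):
    return "Cookie: mstshash=" + login_name[:9] + "\r\n"

# Two different users in a 10+ character domain truncate to the same
# identifier, so their cookies collide:
print(mstshash_cookie("LONGDOMAIN\\rob") == mstshash_cookie("LONGDOMAIN\\sam"))  # -> True
```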

OK, so what happens with the XP client? Well, it seems to follow the documentation, which explains why that worked.

So how do we fix this? Well, the TechNet docs say this is how it works, yet the client does not do this…

Sounds like a bug, so let’s log it with Microsoft… First problem: how the hell do you log a bug? Nothing stands out on their site, so instead we paid and logged a support call with them (which is still ongoing 4 months later).

So where does this all lead? Well, so far Microsoft have stated, and I quote (including the spelling mistake):

“The behavior that you are experiencing is a documentation bug and we are currently in the process of publishing a Knowledge Base article for it”

Err, OK – so a Microsoft product does not follow their own documentation, and they say it is a documentation bug? Their answer, it seems, is that if a product does not fit the design spec, just change the spec. I have now seen the new version of the documentation, and it now says…

cookie (variable): An optional and variable-length ANSI character string terminated by a 0x0D0A two-byte sequence. This text string MUST be “Cookie: mstshash=IDENTIFIER”, where IDENTIFIER is an ANSI character string (an example cookie string is shown in section 4.1.1). The length of the entire cookie string and CR+LF sequence is included in the X.224 Connection Request Length Indicator field. This field MUST NOT be present if the routingToken field is present.”

So now it’s an optional item, and not always sent. Well, that’s no good to anyone using RDP cookies.

They also said there is a perfectly valid workaround… but failed to advise what it was. After chasing, we were told:

“This problem occurs only when the same MSTSC process is used to connect to a server with 2 different credentials. The scenario can easily workaround by starting a new MSTSC process before connecting to the server with different credentials.”

Really? To us this indicates an issue with the MSTSC client, where it is unable to update its own cookies without restarting the application. After many emails back and forth, we got the very latest response:

“Based on my internal discussions, we conclude that for load balancing terminal servers, using Session Directory is the recommended approach. This whitepaper talks more about the implementation http://download.microsoft.com/download/8/6/2/8624174c-8587-4a37-8722-00139613a5bc/TS_Session_Directory.doc. In the business impact statement that you had provided, I did see that you have mentioned that you cannot use Session Directory.”

Well, that’s lovely for anyone with Windows 2003 Enterprise and above – but what about users of 2003 Standard, which does not include Session Directory? Their own document even says this:

“Note:

Terminal servers must be running Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition to participate in a Session Directory-enabled farm.”

We are still waiting for an answer to this from Microsoft, but it looks like the RDP cookie may be being dropped by Microsoft with no fix in sight… I guess we will just have to wait and see what they say next. However, we would love to hear from anyone else who is experiencing the same issues, so we can advise Microsoft of other users who are being hampered by this problem.

Microsoft drops support for mstshash cookies confirmed


Well, it looks like Microsoft have indeed silently dropped support for mstshash cookies for load balancing, as suspected in my last post.

As detailed in the last post, we had a call open with Microsoft and have just received the following response, which confirms our suspicion that they have in fact silently dropped support for mstshash in favour of their Session Directory/Broker solutions.

Mathew

Hope this email finds you well. I regret to state that, I’ve a negative news for you.

Based on the triage of code with product group, they suggest you to devise some alternate way to achieve load balancing.

According to them, the dependency you took on the optional cookie field is not recommended or supported.

The protocol documentation does not specify what is passed in that field and the value can change based on the scenario.

Unfortunately, there are no plans on updating the documentation as it does not call out what is being passed in this field and is already marked as optional.

Please let me know if you wish to have a conference call with us on this matter.

I would attempt to bring the PM on call.

Thank you for your understanding and patience. I again, is truly regretful.

Subheet Rastogi | Support Lead

Enterprise Platforms | Microsoft Corporation | Office: <number removed>

Unfortunately, it seems they have decided to change the documentation to fit their product changes rather than fix the product to work as the original documentation detailed.

Oh well, I guess you win some and you lose some.

What do you mean my pipe is saturated?


Some of the most common questions we get at Loadbalancer.org are performance related. It is quite difficult to give a straight answer, as the real answer is the slightly unsatisfactory “Um… well, it depends on your application…”. The following graph, showing HAProxy performance for different object sizes, gives you a much better idea of the problem:

[Graph: HAProxy connections/s and throughput by object size]

As you can quickly see from this graph, the number of connections/s, bandwidth and object size are all closely correlated. Depending on your application and usage pattern you will get vastly different throughput results from your load balanced cluster.

Generally even our smallest appliance can fill a 1Gbps pipe (we have several customers easily doing 2Gbps+), but we do have some guidelines for our sales team:

  • For deployments using Layer 7 and expecting a very large number of connections/second, or deployments with a large number of SSL TPS – this is very CPU intensive, so we generally recommend our MAX or Dell hardware.
  • For Layer 7 deployments with very large numbers of long-lived connections, i.e. Exchange 2010 with 5000+ users – this is very memory intensive, so we generally recommend our MAX or Dell hardware or the ENTERPRISE VA.

So one of the problems load balancer vendors have is quoting good-looking numbers – i.e. big ones – for their load balancer performance. Loadbalancer.org is just as guilty as the other vendors of using best-case scenarios:

[Table: appliance performance specifications]

Does this specification mean that you can get 60,000 HTTP requests a second AND 1.5Gbps throughput at the same time? I don’t think so…

Does this specification mean that you can get 500 SSL TPS on our least powerful appliance with a 2048-bit key? I don’t think so…

Loadbalancer.org SSL stats are all based on 1024-bit keys.
One of our guys will shortly write a blog on the full test process we use, plus a comparison of the different ciphers and their effect on performance. He even has a $16K Thales crypto card he’s been putting through its paces, for an interesting comparison of SSL hardware/ASIC acceleration versus generic multi-core CPUs…


Open Source Windows service for reporting server load back to HAProxy (load balancer feedback agent).


In general, when you are load balancing a cluster you can evenly spread the connections through the cluster and get pretty consistent, even load balancing. However, with some applications such as RDS (Microsoft Terminal Servers), you can get very high load from just a few users doing heavy work. The solution is to use some kind of server load feedback agent. We’ve had one of these in our product for a while, but now, with a lot of help from Simon Horman, we’ve managed to integrate the functionality into the main branch of HAProxy (well, soon anyway). We thought it would be a good idea to open source the previous work on Ldirectord/LVS, make it compatible with HAProxy, and release our Windows service code as GPL.

Until the work is merged and tested with an official release of HAProxy, we’ve compiled a patched version of HAProxy (dev19-ish) here: http://downloads.loadbalancer.org/agent/haproxy-agent-check-20130813.tar.gz. Or you can get the patches from the mailing list archive.

Simply compile as usual and then modify your RDS cluster:

listen RDSTest
	bind 192.168.69.22:3389
	mode tcp
	balance leastconn
	persist rdp-cookie
	server backup 127.0.0.1:9081 backup  non-stick
	tcp-request inspect-delay 5s
	tcp-request content accept if RDP_COOKIE
	timeout client 12h
	timeout server 12h
	option tcpka
	option redispatch
	option abortonclose
	maxconn 40000
	server Win2008R2 192.168.64.50:3389  weight 100  check agent-port 3333 inter 2000  rise 2  fall 3 minconn 0  maxconn 0  on-marked-down shutdown-sessions

The important bit, agent-port 3333, tells HAProxy to constantly monitor each backend server in the cluster by connecting to port 3333 and grabbing the response, which will usually be a percentage idle value, i.e.:

80% – I am not very busy please increase my weight and send me more traffic
10% – I’m busy please decrease my weight and stop sending me so much traffic
drain – Set the weight to 0 and gradually drain the traffic from this server for maintenance
stop – Stop all traffic immediately, kill this backend server

If you have a Linux backend you could create a simple service calling the following script:

#!/bin/bash
# Outputs a 1-second average CPU idle percentage
LOAD=$(/usr/bin/vmstat 1 2 | /usr/bin/tail -1 | /usr/bin/awk '{print $15;}')
echo "$LOAD%"

Save the script as /usr/bin/lb-feedback.sh and make sure that it is executable:

chmod +x /usr/bin/lb-feedback.sh


Insert this line into /etc/services

lb-feedback 3333/tcp # loadbalancer.org feedback daemon

Now create the following file called /etc/xinetd.d/lb-feedback

# default: on
# description: lb-feedback socket server
service lb-feedback
{
 port = 3333
 socket_type = stream
 flags = REUSE
 wait = no
 user = nobody
 server = /usr/bin/lb-feedback.sh
 log_on_success += USERID
 log_on_failure += USERID
 disable = no
}

Then change permissions and restart xinetd:

chmod 644 /etc/xinetd.d/lb-feedback
/etc/init.d/xinetd restart

You can now test this service by using telnet:

telnet 127.0.0.1 3333
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
95%
Connection closed by foreign host.

Now, if you have a Windows server as your backend, you can use our open source monitor service. You can download the Loadbalancer.org Windows feedback agent here: http://downloads.loadbalancer.org/agent/CpuMonitor_4.2.4.zip

Once you have installed the Loadbalancer.org feedback service, you should find the monitor.exe file in Program Files/LoadBalancer.org

[Screenshot: Loadbalancer.org feedback agent monitor window]

Simply hit the ‘start’ button and the agent should start responding to telnet on port 3333 (you may need to make an exception for that port in your Windows firewall).

You can change the ‘mode’ setting to drain then ‘apply settings and restart’ and HAProxy will then set the weight to 0 and status to drain (blue) i.e.:

[Screenshot: HAProxy stats page with the server’s weight set to 0 and status DRAIN]

Or you can set the ‘mode’ to halt then ‘apply settings and restart’ and HAProxy will then immediately set the status to DOWN (yellow) i.e.:

[Screenshot: HAProxy stats page with the server’s status set to DOWN]

When the agent is running in normal mode it will report back the percentage idle of the system based on the settings in the feedback agent XML file:

<xml>
<Cpu>
<ImportanceFactor value="1" />
<ThresholdValue value="100" />
</Cpu>
<Ram>
<ImportanceFactor value="0" />
<ThresholdValue value="100" />
</Ram>
<TCPService>
<Name value="HTTP" />
<IPAddress value="*" />
<Port value="80" />
<MaxConnections value="0" />
<ImportanceFactor value="0" />
</TCPService>
</xml>

Notice that you can control both the importance of CPU & RAM utilization and also a threshold, so the following logic is used:

If CPU importance = 0 then ignore
If RAM importance = 0 then ignore
If Threshold level is reached on any monitor then immediately go into DRAIN mode.

Otherwise, to calculate the percentage idle reported by the agent, we divide the utilization by the number of factors involved, i.e.:

If you are using two services then:

utilization = utilization + cpuLoad * cpuImportance%;
utilization = utilization + ramOccupied * ramImportance%;
utilization = utilization / 2

So if importance was 1 for both cpu and ram you would only get 0% reported if both CPU and RAM were 100%.

And if the importance is zero then ignore completely i.e.

utilization = utilization + cpuLoad * cpuImportance%;
//utilization = utilization + ramOccupied * 0 (importance is zero so ignore)
utilization = utilization (one service only so don’t divide)
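Putting the description above into code – a hypothetical reconstruction of the weighting, not the agent’s actual source:

```python
def reported_idle(cpu_load, cpu_importance, ram_occupied, ram_importance):
    # Metrics with an importance factor of 0 are ignored; the rest are
    # weighted, summed and averaged over the number of active factors.
    utilization = 0.0
    factors = 0
    if cpu_importance != 0:
        utilization += cpu_load * cpu_importance
        factors += 1
    if ram_importance != 0:
        utilization += ram_occupied * ram_importance
        factors += 1
    if factors:
        utilization /= factors
    return 100 - utilization

print(reported_idle(100, 1, 100, 1))  # both saturated -> 0.0
print(reported_idle(60, 1, 90, 0))    # RAM ignored    -> 40.0
```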

Also, the final section, TCPService, effectively lets you load balance on the number of established connections to your server, so you could balance based on the number of RDP connections to port 3389.

For this setting it is important to specify MaxConnections, as otherwise the agent has no idea how to calculate the load, i.e.:
utilization = current connections / MaxConnections * 100 * importance%
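Read as a percentage of capacity, the connection-based metric can be sketched like this – a hypothetical interpretation, assuming utilization scales with the fraction of MaxConnections in use:

```python
def tcp_service_utilization(current_connections, max_connections, importance):
    # Utilization grows with the fraction of MaxConnections in use,
    # scaled by the service's importance factor.
    return current_connections / max_connections * 100 * importance

print(tcp_service_utilization(100, 200, 1))  # half full -> 50.0
```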

In the following screen shot from a Loadbalancer.org appliance you can see that the Win2008R2 server is healthy and 99% idle, whereas the Linux server was busy at 43% idle before the Linux agent was put into maintenance mode and the server taken out of the group.

[Screenshot: system overview showing the Win2008R2 server at 99% idle and the Linux server in maintenance mode]

Does that make sense? Have a play with the config file and let us know what you think….


3 Ways To Send HAProxy Health Check Email Alerts


To follow up Aaron’s blog on HAProxy email alerts using Logwatch, I was looking into different ways to achieve the same results.
Now, the ideal way to monitor the health of the real servers is to have a dedicated monitoring system in place, such as Nagios (it even has a HAProxy plugin). However, this is not always an option, so some people require the load balancer itself to send an alert. So I investigated some different options.

Logwatch
Logwatch does achieve the desired results but is limited in what it can do. One of the downfalls is that you do not get real-time alerts when a real server’s status changes. Also, because it is necessary to search through the log file, it causes unnecessary load, especially on a busy server. You can reduce the amount of work Logwatch needs to do by creating strict search criteria or truncating the log file with something like logrotate.
Another option is to use option log-health-checks and create a log file that only contains entries when a server changes status. This drastically reduces the amount of work Logwatch needs to do.
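A minimal sketch of that approach – the directive names are as in the HAProxy documentation, and the log target is an example:

```
global
    log /dev/log local4

defaults
    log global
    option log-health-checks
```

With option log-health-checks enabled, server state changes (UP/DOWN transitions) are logged, so a syslog rule can route facility local4 to a small dedicated file for Logwatch to scan instead of the full traffic log.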

Polling the stats socket
One of the features of HAProxy is that you can view the stats via a Unix socket, which will tell you whether a real server is UP or DOWN. So by polling the stats socket at regular intervals we can monitor any changes.
I have done this with a simple Python script that uses the socat command to retrieve the current stats and compares each server's status with its previous status.

#!/usr/local/bin/python3
import subprocess
import time
import smtplib
from email.mime.text import MIMEText

def main():
    firstrun = True
    oldstat = []
    currentstat = []
    while True:
        readstats = subprocess.check_output(["echo show stat | socat unix-connect:/var/run/haproxy.stat stdio"], shell=True)
        vips = readstats.splitlines()
        # check whether each server is up or down and compare with its previous status
        for i in range(0, len(vips)):
            # store current status
            if "UP" in str(vips[i]):
                currentstat.append("UP")
            elif "DOWN" in str(vips[i]):
                currentstat.append("DOWN")
            else:
                currentstat.append("none")
            # ignore the first run as we have no old data to compare against
            if firstrun == False:
                # compare new and old stats, skipping the FRONTEND/BACKEND summary rows
                if (currentstat[i] != oldstat[i] and currentstat[i] != "none") and ("FRONTEND" not in str(vips[i]) and "BACKEND" not in str(vips[i])):
                    servername = str(vips[i]).split(",")
                    realserver = servername[0]
                    alert = realserver[2:] + " has changed status and is now " + currentstat[i]
                    mail(str(alert))
        firstrun = False
        oldstat = currentstat
        currentstat = []
        time.sleep(30)

def mail(alert):
    msg = MIMEText(alert)
    me = "from@email.com"
    you = "to@email.com"
    msg["Subject"] = "Layer 7 alert"
    msg["From"] = me
    msg["To"] = you

    s = smtplib.SMTP("smtpserver.com")
    s.sendmail(me, [you], msg.as_string())
    s.quit()

main()

The script can also be downloaded here in case the formatting above is not correct.

So this works, but it is far from perfect: we still do not have real-time alerts, and we do not account for when a server's status is MAINT or NOLB. But we have removed the need to read a log file and reduced some I/O, which could be classed as an improvement over logwatch. You can change the polling interval to make it check as often or as rarely as needed. This is not the most elegant way and has its downfalls, so there must be a better way…

Patch HAProxy
So this brings us on to option 3: patch HAProxy to send the alerts itself; after all, how hard can it be?
As I don't really want to write my own SMTP client or pull in any other libraries, let's go with the easy option of using mailx from the mailutils package, as we know it works. The following was written for HAProxy dev18. Now, I'm no developer, so treat the code as more of a proof of concept than something to add to your production environment.

Most of the work is already done for us, as HAProxy has functions for setting a server up or down and also has an array containing the server name, server’s status etc. So all we need to do is add our own function to send the email and parse the email address from the configuration file.

This is done in the following patch files:
cfgparse.c
checks.c
log.c
global.h

So in the configuration file I have added the option "email_alert" to the global section, which takes a to address and a from address. So you add:

email_alert to@email.com from@email.com

The to address is required and the from address is optional.
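For context, in this proof-of-concept patch the directive lives in HAProxy's global section alongside the usual options; a sketch (the surrounding settings are illustrative):

```
global
    daemon
    maxconn 40000
    email_alert to@email.com from@email.com
```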

So now, when a server is marked as down or up, you will receive a real-time email alerting you to the change, which will look something like this:


Subject: Loadbalancer layer7 alert
From: from@mail.com
To: to@mail.com
X-Mailer: mail (GNU Mailutils 2.2)
Message-Id: <1@test>
Date: Fri, 25 Oct 2013 14:55:03 +0100 (BST)

Server servers/server2 is DOWN, reason: Layer4 timeout, check duration: 2001ms. 0 active and 0 back

And that's it: you now have email alerts straight from HAProxy, albeit untested and liable to break things.
When choosing an approach, the important questions are: do you need to be alerted the very second a server goes down? And does it matter if you install an extra service to monitor things? Each method works but has its own benefits and downfalls, so it is up to you to decide what suits your environment.
Feel free to use the code or even rewrite it; all feedback is welcome!

Source IP Addresses, STunnel, Haproxy and Server Logs


When using proxies such as STunnel and HAProxy it's easy to lose track of the client's source IP address. This occurs, for example, when HAProxy is used in its default configuration to load balance a number of back-end web servers. By default, the source IP address of the packets reaching the web servers is the IP address of the load balancer and not the IP address of the client. One way around this is to enable X-Forwarded-For headers in HAProxy (the default for Loadbalancer.org appliances) and configure the web servers to log the IP address in this header. For more details on enabling this for IIS and Apache web servers, please see IIS and X-Forwarded-For Headers and Apache and X-Forwarded-For Headers.
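For illustration, an application behind the proxy would recover the client address from that header roughly like this (a sketch, not production-grade parsing):

```python
# Sketch: the left-most X-Forwarded-For entry is the original client;
# each proxy in the chain appends the address of the peer it spoke to.
def client_ip(x_forwarded_for: str) -> str:
    return x_forwarded_for.split(",")[0].strip()

print(client_ip("203.0.113.7, 192.168.10.1"))  # → 203.0.113.7
```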

For more complicated scenarios where SSL termination is also required on the load balancer and the original source IP address is still required, additional steps are needed.

STunnel & HAProxy

Stunnel+haproxy

By default, in the above example, the IP address in the X-Forwarded-For header reaching the web servers is the load balancer's own IP address. This is because STunnel is not transparent by default. To force STunnel to pass the original client IP address, the protocol directive must be added to the STunnel configuration and set to proxy as shown below:

 

[STunnel]
cert = /etc/loadbalancer.org/certs/STunnel.pem
ciphers = ALL
accept = 192.168.10.10:443
connect = 192.168.10.10:80
options = NO_SSLv2
options = CIPHER_SERVER_PREFERENCE
protocol = proxy
TIMEOUTclose = 0

 

N.B. For more details on the protocol option please refer to this page.

 

This enables STunnel to pass the original client IP address to HAProxy.
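Concretely, with the proxy protocol enabled, STunnel prepends a single human-readable header line (version 1 of the PROXY protocol) to each connection; the addresses below are illustrative:

```python
# PROXY protocol v1 header line as STunnel would send it to HAProxy:
# signature, address family, source IP, dest IP, source port, dest port.
header = "PROXY TCP4 203.0.113.7 192.168.10.10 51234 443\r\n"
signature, family, src_ip, dst_ip, src_port, dst_port = header.split()
print(src_ip)  # the original client address that HAProxy will now see
```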

 

HAProxy must also be configured to accept and use this information by adding the accept-proxy and option forwardfor directives:

 

listen L7-VIP
bind 192.168.10.10:80 transparent accept-proxy
mode http
balance leastconn
cookie SERVERID insert nocache indirect
server backup 127.0.0.1:9081 backup non-stick
option http-keep-alive
option forwardfor
option redispatch
option abortonclose
maxconn 40000
server WEB1 192.168.10.11:80 weight 100 check inter 6000 rise 2 fall 3 minconn 0 maxconn 0 on-marked-down shutdown-sessions
server WEB2 192.168.10.12:80 weight 100 check inter 6000 rise 2 fall 3 minconn 0 maxconn 0 on-marked-down shutdown-sessions
server WEB3 192.168.10.13:80 weight 100 check inter 6000 rise 2 fall 3 minconn 0 maxconn 0 on-marked-down shutdown-sessions

 

N.B. For more details on the accept-proxy option please refer to this page , for more details on the proxy protocol please refer to this page.

 

Fortunately, STunnel & HAProxy can easily be configured in this way using the built-in Web User Interface:

 

a) Configuring STunnel

Using the WUI option: Cluster Configuration > SSL Termination, click [Modify] next to the relevant STunnel Virtual Service and enable the option Set as Transparent Proxy as shown below:

Stunnel+haproxy 2

 

b) Configuring HAProxy

Using the WUI option: Cluster Configuration > Layer 7 – Virtual Services, click [Modify] next to the relevant HAProxy Virtual Service and enable the options Set X-Forwarded-For Header (enabled by default) and Proxy Protocol as shown below:

Stunnel+haproxy 3

 

c) Restart Services

Finally, restart HAProxy and STunnel to apply the changes using the restart buttons that appear at the top of the screen as shown below:

Stunnel+haproxy 4

 

As mentioned at the start, you'll also need to ensure that your web servers are correctly configured to log the X-Forwarded-For header details, but as far as configuring the load balancer is concerned, that's it!
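On the Apache side, for example, this is typically a one-line LogFormat change; a sketch (the format string and nickname are illustrative, adjust to match your existing logs):

```
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" xff_common
CustomLog logs/access_log xff_common
```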

 

Simple Denial of Service DOS attack mitigation using HAProxy


Denial of Service (DoS) attacks can be especially effective against certain types of web application. If the application is highly dynamic or database intensive, it can be remarkably simple to degrade or cripple the functionality of a site. This blog article describes some simple methods to mitigate single-source-IP DoS attacks using HAProxy. I've described how you would implement the techniques using the Loadbalancer.org appliance, but they are easily transferable to any HAProxy-based cluster.

The most important thing when blocking brute force attacks is not to block any legitimate traffic. If you know that your site is low traffic, such as an internal application with authentication, you can set some fairly strict and specific rules about blocking brute force access, but in most scenarios for a public-facing web site you have to be fairly lenient.

We can usually make some generic assumptions for web browser traffic such as:

1) Web browsers won't generate more than 5-7 concurrent connections per domain.
2) Web browser users won't click more than 10 pages in 10 seconds!
3) Genuine web users are unlikely to generate 10 application errors in 20 seconds – unless your application is broken, and then you have bigger problems!

So let's construct some rules based on source IP address rate limiting. In order to do this with the Loadbalancer.org appliance you would create a Layer 7 Manual Configuration, ensuring that the label for the cluster "Secure_Web_Cluster" and the labels for SecureWeb1 & SecureWeb2 match your XML, so that the system overview can still be used to manage the manual cluster configuration. For more information see custom configuration on page 105 of the Loadbalancer.org administration manual.

 

# MANUALLY DEFINE YOUR VIPS BELOW THIS LINE:
listen Secure_Web_Cluster
bind 192.168.64.29:80 transparent
mode http
timeout http-request 5s  #Slowloris protection
# Allow clean known IPs to bypass the filter (files must exist)
#tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
#tcp-request content reject if { src -f /etc/haproxy/blacklist.lst }

I've commented out the blacklist and whitelist here as the files don't exist on my system; however, the whitelist would be very important if you have customers behind a large proxy such as a corporate firewall, i.e. they all appear to come from a single IP address.
Now let's move on to some actual blocking:

# Don't allow more than 10 concurrent tcp connections OR 10 connections in 3 seconds
tcp-request connection reject if { src_conn_rate(Abuse) ge 10 }
tcp-request connection reject if { src_conn_cur(Abuse) ge 10 }
tcp-request connection track-sc1 src table Abuse

These rules are pretty self-explanatory: simply reject the TCP connection if this source IP has more than 10 concurrent connections, or more than 10 connections in 3 seconds (as defined in the Abuse backend stick table).
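The conn_rate idea can be pictured as a sliding-window counter per source IP. This toy Python sketch (not how HAProxy actually implements its stick tables) illustrates the logic:

```python
# Toy model of conn_rate(3s) >= 10: count connections per source IP in a
# 3-second sliding window and reject once the limit is reached.
import time
from collections import defaultdict, deque

WINDOW, LIMIT = 3.0, 10
events = defaultdict(deque)  # source IP -> timestamps of recent connections

def allow(src, now=None):
    """Return False once a source exceeds LIMIT connections in WINDOW seconds."""
    now = time.monotonic() if now is None else now
    q = events[src]
    while q and now - q[0] > WINDOW:  # expire timestamps outside the window
        q.popleft()
    if len(q) >= LIMIT:
        return False
    q.append(now)
    return True
```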

You can test this with a simple apache bench command:

To test > 10 requests in 3 seconds:

ab -n 11 -c 1 http://192.168.64.29/

or to test concurrency blocking:

ab -n 10 -c 10 http://192.168.64.29/

So how about doing something more at the application level such as actual HTTP requests:

# ABUSE SECTION works with http mode dependent on src ip
tcp-request content reject if { src_get_gpc0(Abuse) gt 0 }
acl abuse src_http_req_rate(Abuse) ge 10
acl flag_abuser src_inc_gpc0(Abuse) ge 0
acl scanner src_http_err_rate(Abuse) ge 10
# Returns a 403 to the abuser and flags for tcp-reject next time
http-request deny if abuse flag_abuser
http-request deny if scanner flag_abuser

The above section is slightly more complex: the first time a user is flagged as an abuser we return a 403 error message, and on subsequent abuse we reject the TCP connection. This is a bit kinder to friendly robots that may be crawling the site (you may also want to whitelist the Google bot, etc.).

Because we are blocking HTTP requests here you can simply test it by opening a browser and hitting refresh 6 or 7 times to trigger the response, obviously you will need to generate an error on the web server if you wish to test the http_err_rate acl.

Now the following section simply configures the servers in the cluster, you can choose to use cookies or stick tables for persistence as required for your environment:

errorfile 403 /usr/share/nginx/html/index.html
balance leastconn
option redispatch
option abortonclose
maxconn 40000
timeout connect 5s
timeout server 10s
timeout client 30s
server SecureWeb1 192.168.64.13:80 weight 100 check port 80 inter 2000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions
server SecureWeb2 192.168.64.15:80 weight 100 check port 80 inter 2000 rise 2 fall 2 minconn 0 maxconn 0 on-marked-down shutdown-sessions

Now, because HAProxy only allows one stick table per frontend, we will store all our abuse information in a completely separate backend section:

backend Abuse
stick-table type ip size 1m expire 30m store conn_rate(3s),conn_cur,gpc0,http_req_rate(10s),http_err_rate(20s)

The important thing to be aware of is that this table can potentially grow to 1 million entries and consume approximately 100MB of RAM. You may also want to be fairly lenient with the amount of time a user is blocked for. I have set the block time at an aggressive 30 minutes, but you may well find 30 seconds is enough of a deterrent for most hacker bots to move on to easier targets :-).

If you want to inspect the table entries, simply use a socat command:

[root@lbmaster ~]# echo "show table Abuse" | socat unix-connect:/var/run/haproxy.stat stdio
# table: Abuse, type: ip, size:1048576, used:1
0x968c9c: key=192.168.64.12 use=0 exp=1795159 gpc0=4 conn_rate(3000)=0 conn_cur=0 http_req_rate(10000)=13 http_err_rate(20000)=4

 

As a side note, you should definitely try to get your DoS rules as close to the edge of your network as possible, so if you have a firewall in front of the load balancer you can use simple layer 4 rate-limiting rules on that instead of, or as well as, the rules in HAProxy, i.e.:

/sbin/iptables -A INPUT -p TCP --syn  --dport 80 -m recent --set 
/sbin/iptables -A INPUT -p TCP --syn --dport 80 -m recent --update --seconds 10 --hitcount 10 -j DROP

The above rules block more than 10 HTTP connections in 10 seconds by watching for the connection-setup SYN packet; try here for extra information on iptables rate limiting.
NB. One obvious gotcha is that this will only work if keepalive is turned off on your web servers :-).

At Loadbalancer.org we are very cautious about making any of these defensive strategies our default behaviour, as our clients have many different security requirements. You should carefully consider the potential unintended consequences of these mitigation techniques, as everyone knows the quickest way to DoS a web site is by installing aggressive security :-).


Transparent Load balancing with HAProxy on Amazon EC2


This is a quick guide on how to set up transparent mode with HAProxy in Amazon's EC2. One of our favoured methods of load balancing is Layer 4 DR because it is transparent and fast. Unfortunately, because of Amazon's infrastructure this is not possible in EC2, so we need to use another method, which leaves us with layer 4 NAT and transparent HAProxy using TProxy.

Scott has already covered configuring transparent HAProxy on CentOS here, so I won't cover that again but will use it as a base to work from. For transparency to work, the HAProxy load balancer must be in the path of the return traffic from the real servers. There are two methods of achieving this: with a dual subnet, and with a single subnet. Both methods also work for load balancing UDP and TCP at layer 4 NAT with LVS. So that this post does not become too long, I will presume you know the basics of launching instances in EC2, configuring security groups and creating a VPC.

Dual Subnet Method

To do this you will need 2 subnets inside your VPC in AWS, a public and private subnet. Here I am using a public subnet of 192.168.1.0/24 and a private subnet of 192.168.2.0/24.

• First, launch your CentOS instance into your public subnet and follow Scott's blog to get your HAProxy load balancer up and running.
As I am using a different network to Scott, my haproxy.cfg looks as below:

global
daemon
log /dev/log local4
maxconn 40000
ulimit-n 81000

defaults
log global
contimeout 4000
clitimeout 42000
srvtimeout 43000

listen http1
bind 192.168.1.50:80
mode http
balance roundrobin
server http1_1 192.168.2.55:80 cookie http1_1 check inter 2000 rise 2 fall 3
server http1_2 192.168.2.56:80 cookie http1_2 check inter 2000 rise 2 fall 3

• Disable the source/destination check on the instance in AWS. To do this, go to the EC2 console and select your load balancer instance, then select "Actions > Network > Change Source/Dest. Check" and disable the option. Doing so enables the instance to receive traffic with a destination IP it does not own.

• The next step is to deploy your real servers into your private subnet. For my test I just used a couple of Ubuntu servers with Apache on.

• We now need to configure the routing on the private subnet to route back through the load balancer in the public network. This way the load balancer replaces the NAT instance in a typical NAT setup:

    ◦ Under the VPC dashboard, select Route Tables
    ◦ Select the route table that relates to the private subnet
    ◦ Select the Routes tab, and click Edit
    ◦ In the blank row at the bottom, set the destination to 0.0.0.0/0 and set the target to be the ENI on the load balancer – in this example, selecting the instance "i-7e9d4083 | HAProxy load balancer" inserts the ENI as shown below.
    route_table

• To access the service from the internet I have mapped an Elastic IP to my VIP address, 192.168.1.50.

And that is all there is to it. Now, when accessing the VIP through a browser either from the internet or from inside the public network, the client's IP will be seen by the real servers and show up in the Apache logs.

Single Subnet Method

For the single subnet method to work, the clients need to be on a different subnet to the load balancer, so this works great over the internet when attaching an Elastic IP to your VIP. The steps are as follows:

• Follow the same steps as in Scott's blog to set up CentOS with HAProxy.

• Disable the source/destination check on the instance in AWS, as described above.

• Launch your real servers into the same subnet as the load balancer.

• Change the default gateway on the real servers to the IP address of your load balancer instance.

With the IP address of my load balancer instance being 192.168.1.182, my route table now looks like this:

[root@ip-192-168-1-73 ec2-user]# route -n

Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.182 0.0.0.0 UG 0 0 0 eth0
169.254.169.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

Now when accessing my VIP through my Elastic IP address the real servers see the client IP address.

Blocking invalid range headers using ModSecurity and/or HAProxy (MS15-034 – CVE-2015-1635)


Microsoft quietly patched a fairly nasty little bug (MS15-034) in IIS last month: a simple HTTP request with an invalid range header field value can kill IIS, reveal data or remotely execute code! We haven't seen one of these in a while, and obviously you are safe if you have automatic security patching turned on. However, with our renewed focus on web application security, I thought this would be a good example to show how easy virtual patching is with the industry-standard tools used in the Loadbalancer.org appliance.

I'm cheating by using our pre-release Loadbalancer.org appliance v8 software, as it has a built-in WAF, a.k.a. ModSecurity. By default the WAF handles the blocking for the OWASP Top 10 threats, and adding customised rules is simply a matter of editing the custom rules config file:

# Do not allow an invalid range from ping of death attack MS15034
SecRule REQUEST_HEADERS:Range "@rx (?i)^(bytes\s*=)(.*?)(([0-9]){10,})(.*)" \
"id:'100007',phase:1,t:none,block,setvar:tx.anomaly_score=+%{tx.critical_anomaly_score},msg:'Invalid header range MS15034 attack'"
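You can sanity-check the rule's pattern outside ModSecurity; this sketch applies the same regex to the Range header value used in the curl test further down:

```python
import re

# The rule's pattern: "bytes =" followed anywhere by a run of 10+ digits,
# i.e. a range offset far too large to be a sane request.
pattern = re.compile(r"(?i)^(bytes\s*=)(.*?)(([0-9]){10,})(.*)")

print(bool(pattern.match("bytes = 10-18446744073709551615")))  # → True
print(bool(pattern.match("bytes=0-1023")))                     # → False
```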

Then restart the WAF in the interface (service httpd reload, which is transaction safe).
You can test the new rule by triggering it with the following curl command:

curl -v http://192.168.64.28/ -H "Host: www.myhost.com" -H "Range: bytes = 10-18446744073709551615" -k

You should get a 403 Forbidden response, and ModSecurity error log entries as follows:

[Mon May 18 14:57:27 2015] [error] [client 82.70.17.214] ModSecurity: Warning. Pattern match "(?i)^(bytes\\\\s*=)(.*?)(([0-9]){10,})(.*)" at REQUEST_HEADERS:Range. [file "/etc/httpd/modsecurity.d/lb_rules_waf1.conf"] [line "41"] [id "100007"] [msg "Invalid range MS15034 attack"] [hostname "www.myhost.com"] [uri "/"] [unique_id "VVn9138AAAEAAEC3LVAAAAAC"]
[Mon May 18 14:57:27 2015] [error] [client 82.70.17.214] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(.*)" at TX:0. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_49_inbound_blocking.conf"] [line "26"] [id "981176"] [msg "Inbound Anomaly Score Exceeded (Total Score: 5, SQLi=, XSS=): Last Matched Message: "] [data "Last Matched Data: X-Forwarded-For"] [hostname "www.myhost.com"] [uri "/"] [unique_id "VVn9138AAAEAAEC3LVAAAAAC"]
[Mon May 18 14:57:27 2015] [error] [client 82.70.17.214] ModSecurity: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_60_correlation.conf"] [line "37"] [id "981204"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 5, SQLi=, XSS=): "] [hostname "www.myhost.com"] [uri "/"] [unique_id "VVn9138AAAEAAEC3LVAAAAAC"]

Notice that we are using anomaly-score-based blocking, which is much more flexible and effective than simply blocking on the first error. In the Loadbalancer.org appliance we have implemented ModSecurity with mod_proxy & mod_rpaf.

 

We use an HAProxy front end for incoming traffic and an HAProxy backend for the web application cluster. This allows the flexibility to implement traffic handling rules at any point in the chain.

waf

  • If you need to turn off the WAF, simply hit Halt on the real server WAF_1 and the fallback will automatically pass unsecured traffic through to the default backend Waf1_Backend.
  • If you want a hard fail, then remove the fallback server from WAF1_Front_End to kill all traffic, or point it at a holding page.
  • If you need to add more WAFs to the cluster for scalability, you simply add them to the WAF1 front end.

The flexibility of the Loadbalancer.org solution allows you to handle security in the ideal location for your network. In this instance, when doing a simple block based on the value of the Range header field, you are almost certainly better off doing it in the HAProxy front end. This is very simple to do in all the Loadbalancer.org products: just change the configuration to a manual layer 7 config and add the following two lines to your HAProxy config:

listen Waf1_Backend
bind 192.168.64.28:81 transparent
mode http
 
# The following two lines do the security check
tcp-request inspect-delay 5s
http-request deny if { hdr_reg(Range) -i ^(bytes\s*=)(.*?)(([0-9]){10,})(.*) }
 
balance leastconn
cookie SERVERID insert nocache indirect
server backup 127.0.0.1:9081 backup non-stick
option accept-invalid-http-request
option http-keep-alive
option forwardfor
option redispatch
option abortonclose
maxconn 40000
server lb.org 176.34.178.134:80 weight 100 cookie lb.org agent-check agent-port 3389 agent-inter 6000 check inter 14000 rise 2 fall 3 minconn 0 maxconn 0 on-marked-down shutdown-sessions

To test this we can turn the WAF off and run the earlier curl command to check that HAProxy is blocking the malicious request:

curl -v http://192.168.64.28/ -H "Host: www.myhost.com" -H "Range: bytes = 10-18446744073709551615" -k

WAF_OFF

And we still get the expected response:

403 Forbidden 
Request forbidden by administrative rules.
* Closing connection 0

For reference, if anyone wants the rest of the ModSecurity configuration defaults, they are as follows:

SecRuleEngine On
 
SecRequestBodyAccess On
SecResponseBodyAccess On
 
SecResponseBodyLimitAction ProcessPartial
SecRequestBodyLimitAction ProcessPartial
 
SecAuditLog /var/log/httpd/modsec_audit_waf1.log
ErrorLog /var/log/httpd/error_waf1.log
LogLevel error
 
SecAuditEngine Off
 
# Don’t log to user_access
CustomLog /dev/null common
 
SecDefaultAction "phase:1,pass,log,auditlog"
 
RPAFenable On
RPAFsethostname Off
RPAFproxy_ips 127.0.
RPAFheader X-Forwarded-For
 
SecAction  "id:'100003', phase:1,  t:none, setvar:tx.outbound_anomaly_score_level=4,  nolog,  pass"
SecAction  "id:'100004', phase:1, t:none, setvar:tx.anomaly_score_blocking=on,  nolog,  pass"
SecAction  "id:'100002', phase:1, t:none, setvar:tx.inbound_anomaly_score_level=5, nolog, pass"