Observation with regard to the "swappiness" problem

Many of the prominent folks on Planet MySQL have encountered issues with MySQL and Linux going into swap (examples 1, 2, 3). None of the easy fixes (sysctl -w vm.swappiness=0, innodb_flush_method=O_DIRECT) has solved the problem for our configuration, so I cope by using a smaller InnoDB buffer pool than I'd like, and then occasionally using swapoff/swapon to temporarily bring things back into balance.
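For reference, this is roughly what those easy fixes look like; the paths and values below are the usual defaults and are only a sketch, not a known cure:

    # Lower the kernel's tendency to swap (0 = avoid swapping where possible)
    sysctl -w vm.swappiness=0
    echo "vm.swappiness = 0" >> /etc/sysctl.conf   # persist across reboots

    # In my.cnf, have InnoDB bypass the OS page cache for its data files:
    #   [mysqld]
    #   innodb_flush_method = O_DIRECT

    # Temporarily push swapped pages back into RAM (needs enough free memory)
    swapoff -a && swapon -a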

I did make one observation though, which may be a red herring but is still interesting to me. In one particular case I am using nested replication in the following configuration:

master1->master2->slave1
                ->slave2
                ->slave3
                ->slave4
                ->slave5

The master1 server handles all writes and some reads, and some of the slaves handle reads in a typical MySQL master/slave scenario. All master2 does is batch out replication updates to its slaves and sit ready to become the active master if master1 dies.
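For illustration, a minimal my.cnf sketch of an intermediate master in master2's position might look like the following; the file path, server-id, and buffer pool size are assumptions, not our actual settings. The key detail is log-slave-updates, which makes master2 write the updates it replicates from master1 into its own binary log so the downstream slaves can pick them up:

    cat >> /etc/mysql/conf.d/relay-master.cnf <<'EOF'
    [mysqld]
    server-id               = 2
    log-bin                 = mysql-bin
    log-slave-updates              # re-log replicated updates for the downstream slaves
    innodb_buffer_pool_size = 4G   # kept smaller than master1 while chasing the swap issue
    EOF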

The interesting part is that even though master1 and master2 are configured identically, master2 is the server that goes into swap while master1 is fine. This is true even though master1 ends up being much busier with connections, since it deals with the applications directly.

I ended up reducing the InnoDB buffer pool on master2 so that it was smaller than master1's, and it still went back into swap, even though all it does is run a single replication slave thread from master1 and batch out updates to its 5 slaves.
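If you want to confirm that it really is mysqld being swapped out rather than something else on the box, a couple of quick checks (on a reasonably recent kernel) are:

    # How much of mysqld's address space is currently swapped out
    grep VmSwap /proc/$(pidof mysqld)/status

    # Overall memory/swap picture, and ongoing swap-in/out (si/so columns)
    free -m
    vmstat 5 3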

Very odd indeed. Is there something about having slaves that increases the likelihood of MySQL running into swap? If so, perhaps some settings could mitigate the issue, such as reducing the binlog size. I'm going to do some more investigation into this, but for now it's just an interesting puzzle piece to consider.
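If binlog size does turn out to matter, the knobs I would experiment with on master2 are sketched below; the values are arbitrary starting points, not recommendations:

    cat >> /etc/mysql/conf.d/binlog-experiment.cnf <<'EOF'
    [mysqld]
    max_binlog_size   = 128M   # rotate binary logs more often (default is 1G)
    expire_logs_days  = 3      # purge old binlogs sooner
    binlog_cache_size = 1M     # per-connection cache for transaction binlog events
    EOF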
