Monday, August 03, 2009

Hack attack

I have been fighting an attack on the integrity of my websites for a little over a year now. At first there was no information online, but as the attack has continued I have found a few other people that suffered from the same attack. I finally decided it was time to put in the effort to track the issue down and correct it.

The attack was a code insertion in the footer of my pages. The attack script apparently searches for </body> and inserts a small piece of javascript right before it. The javascript loads a file hosted on one of a series of servers in China.

At this point I have a shell script that I run sporadically - it searches all files on my server for the telltale indicator and compiles a list of the infected files, which I then send through a second shell script that removes the attack code.
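The two passes might look something like the sketch below. The marker string and paths are hypothetical - the real indicator would be the attacker's hostname or filename from the injected script tag. Note that the list file should live outside the web root so the first pass does not match its own output.

```shell
#!/bin/sh
# clean_tree DIR MARKER
#   Pass 1: list every file under DIR containing MARKER (the telltale
#           indicator) in infected.txt, written to the current directory.
#   Pass 2: strip the injected line from each listed file with sed.
# MARKER here is a placeholder pattern, not the actual attack string.
clean_tree() {
    dir=$1 marker=$2
    grep -rl "$marker" "$dir" > infected.txt
    while IFS= read -r f; do
        sed -i "/$marker/d" "$f"
    done < infected.txt
}
```

Run as, say, `clean_tree ~/public_html 'bad\.example'` - keeping the marker as a regular expression lets the same script track the attacker if the domain changes slightly.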

It appears the attack originated with an old instance of phpBB - which I have since removed. Once the attack was successful, a series of backdoor scripts were uploaded, all of which accept a POSTed file and then eval it (local PHP code execution). I have a second script that looks for all instances of eval and lists them in another file - this list has to be checked by hand at this point, as I have not yet found a unique identifier for the attacking code. Once this part is cleaned up I should be clear (finally), hopefully for a while.
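The eval sweep can be a one-liner wrapped in a function - a sketch, assuming the backdoors call PHP's eval() directly. Since legitimate applications also use eval, the output file is only a list of suspects for manual review, not something to delete automatically.

```shell
#!/bin/sh
# find_evals DIR
#   Write every line in a .php file under DIR that calls eval() to
#   suspects.txt (in the current directory), with file and line number,
#   for review by hand.
find_evals() {
    dir=$1
    grep -rn --include='*.php' 'eval *(' "$dir" > suspects.txt
}
```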

I am pondering things I can do in a shared hosting environment that would make this kind of attack more difficult. The best idea I have had to date is to set up a pair of scripts that toggle my websites between 'read only' and 'read write'. Each time I make an update I could unlock the files, and when I was done I could lock them back down. Without paying for a VPS, this may be the best solution. Another option would be a daily cron job that scans all folders for files changed in the last 24 hours, compiles a list, and emails it to me. This way I would be notified any time a file changes. I would have to tweak the settings, as a couple of the applications I am running make use of temporary files. Temp files would have to be scanned to make sure they were less than x days old (where x is about 2) and deleted once they are older than that - all to prevent the temp folders from being a useful attack vector.

I suspect there is a better approach to the problem than either of the ones I have described - even though I am on a shared hosting account. I will give it some more thought as I have time.