
How to survive heavy traffic? A practical approach

Have you ever wondered what would happen if your website or blog reached the front page of a big site like Digg, Yahoo or StumbleUpon? You would receive enormous traffic, and it will surely kill your server if you haven’t optimized it to survive heavy traffic. There are various ways you can speed up your website, but I am covering the practical optimizations that don’t need any additional hardware or commercial software.

If you are familiar with hosting, setup and system administration you can do this yourself; otherwise you will need the help of someone who knows how to handle a server. Beware: if you don’t know what you’re doing, you could seriously mess up your system.

Cache PHP Output

Every time a request hits your server, PHP has to do a lot of processing: all of your code has to be compiled & executed for every single visit, even though the outcome of all this processing is identical for both visitor 21600 and visitor 21601. So why not save the flat HTML generated for visitor 21600, and serve that to 21601 as well? This relieves your web server and database server, because less PHP usually means fewer database queries.

Now you could write such a system yourself, but there’s a neat package in PEAR called Cache_Lite that can do this for us. The benefits:

  • it saves us the time of reinventing the wheel
  • it’s been thoroughly tested
  • it’s easy to implement
  • it’s got some cool features like lifetime, read/write control, etc.

Installing is like taking candy from a baby. On Ubuntu I would:
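    sudo apt-get install php-pear
    sudo pear install Cache_Lite

With that in place, wrapping a page in Cache_Lite looks roughly like the sketch below. The cache directory, lifetime and render_page() function are assumptions for illustration, not part of Cache_Lite itself:

    <?php
    require_once 'Cache/Lite.php';

    // Cache rendered pages for 10 minutes; cacheDir must exist and be
    // writable by the web server (trailing slash required)
    $cache = new Cache_Lite(array(
        'cacheDir' => '/tmp/cache/',
        'lifeTime' => 600,
    ));

    $pageId = $_SERVER['REQUEST_URI'];

    if ($html = $cache->get($pageId)) {
        // Cache hit: serve the stored flat HTML, skipping all processing
        echo $html;
    } else {
        // Cache miss: build the page the expensive way, then store it
        $html = render_page(); // stand-in for your existing page code
        $cache->save($html, $pageId);
        echo $html;
    }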

Create Turbo Charged Storage

With the PHP caching mechanism in place, we take away a lot of stress from your CPU & RAM, but not from your disk. This can be solved by creating a storage device with your system’s RAM, like this:
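    mkdir -p /var/www/www.mysite.com/ramdrive
    mount -t tmpfs -o size=64m tmpfs /var/www/www.mysite.com/ramdrive

(The 64 MB size is just an example; size it to fit your cache and static files.)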

Now the directory /var/www/www.mysite.com/ramdrive is not located on your disk, but in your system’s memory. And that’s about 30 times faster 🙂 So why not store your PHP cache files in this directory? You could even copy all static files (images, css, js) to this device to minimize disk IO. Two things to remember:

  • All files in your ramdrive are lost on reboot, so create a script to restore files from disk to RAM
  • The ramdrive mount itself is also gone after a reboot, but you can add an entry to /etc/fstab so it is recreated automatically at boot (see the sketch below this list)
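A matching /etc/fstab entry could look like this (using the example mount point and size from above):

    tmpfs  /var/www/www.mysite.com/ramdrive  tmpfs  defaults,size=64m  0  0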

CronJobs for heavy processing

Sometimes you might be processing data that consumes lots of queries, database calls, processing time, or count maintenance. All such tasks should be left to cron jobs: the cron daemon will run them automatically and perform the required action at whatever interval you set.

For example, if you are counting hits per article, you are updating a counter on every page view, locking the record with an UPDATE ... WHERE statement each time. To avoid that, you can simply use relatively cheap SQL INSERTs into a separate table. A cron job, run automatically by the server every 5 minutes, then processes the gathered data: it counts the hits per article, deletes the gathered rows and updates the grand total in a separate field of the article table. So finally, accessing the hit count of an article takes no extra processing time or heavy queries.
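In SQL, that pattern could look roughly like this (table and column names are made up for illustration):

    -- Hypothetical gather table:
    --   CREATE TABLE article_hits (
    --     id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    --     article_id INT UNSIGNED NOT NULL
    --   );

    -- On every page view: one cheap INSERT, no locking of the article row
    INSERT INTO article_hits (article_id) VALUES (123);

    -- In the 5-minute cron job: fold the gathered hits into the grand total...
    SET @max := (SELECT MAX(id) FROM article_hits);

    UPDATE articles a
      JOIN (SELECT article_id, COUNT(*) AS cnt
              FROM article_hits
             WHERE id <= @max
             GROUP BY article_id) h ON h.article_id = a.id
       SET a.hits = a.hits + h.cnt;

    -- ...then clear only the rows just counted, so hits arriving while
    -- the job runs are kept for the next run
    DELETE FROM article_hits WHERE id <= @max;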

Optimize your Database

Use the InnoDB storage engine

If you use MySQL, the default storage engine for tables is MyISAM. That’s not ideal for a high-traffic website, because MyISAM uses table-level locking, which means that during an UPDATE nobody can access any other record of the same table. It puts everyone on hold!

InnoDB, however, uses row-level locking. Row-level locking ensures that during an UPDATE nobody can access that particular row, until the locking transaction issues a COMMIT.

phpMyAdmin allows you to easily change the table type in the Operations tab. Though it has never caused me any problems, it’s wise to first create a backup of the table you’re going to ALTER.
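Outside phpMyAdmin, the conversion is a one-liner (table name is just an example):

    ALTER TABLE articles ENGINE = InnoDB;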

Use optimal field types

Wherever you can, make integer fields as small as possible (not by changing the length, but by changing the actual integer type). Here’s an overview:

What the different integer field types can contain:

    fieldtype   signed min                  signed max                  unsigned min  unsigned max
    TINYINT     -128                        127                         0             255
    SMALLINT    -32,768                     32,767                      0             65,535
    MEDIUMINT   -8,388,608                  8,388,607                   0             16,777,215
    INT         -2,147,483,648              2,147,483,647               0             4,294,967,295
    BIGINT      -9,223,372,036,854,775,808  9,223,372,036,854,775,807   0             18,446,744,073,709,551,615
So if you don’t need negative numbers in a column, always make the field unsigned. That way you can store maximum values in minimum space (bytes). Also make sure foreign keys have matching field types, and place indexes on them. This will greatly speed up queries.
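For example, if a site will never have more than a few thousand authors, something like this would do (table and column names are hypothetical):

    ALTER TABLE articles
      MODIFY author_id SMALLINT UNSIGNED NOT NULL,
      ADD INDEX idx_author_id (author_id);

Just make sure the referenced authors.id column gets the same SMALLINT UNSIGNED type, so the join keys match.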

In phpMyAdmin there’s a link called Propose table structure. Take a look sometime: it will try to tell you which fields can be optimized for your specific db layout.


Never select more fields than strictly necessary. Sometimes, when you’re lazy, you might select every column even though fetching just one or two of them would suffice (see the example below). Normally that’s OK, but not when performance is your no. 1 priority.
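For instance (table and columns hypothetical):

    -- Lazy: fetches every column, including ones you never use
    SELECT * FROM articles WHERE id = 123;

    -- Better: fetch only what you actually need
    SELECT title FROM articles WHERE id = 123;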

Tweak the MySQL config

Furthermore, there are quite a few things you can do in the my.cnf file, but I’ll save that for another article, as it’s a bit outside this article’s scope.

Save some bandwidth

Save some sockets first

Small optimizations make for big bandwidth savings when volumes are high. If traffic is a big issue, or you really need that extra server capacity, you could throw all CSS code into one big .css file, and do the same with the JS code. This will save you some Apache sockets that other visitors can use for their requests. It will also give you better compression ratios, should you choose to use mod_deflate or compress your JavaScript with Dean Edwards’ Packer.
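The combining itself can be as simple as a concatenation step at deploy time (file names are examples):

    cat css/reset.css css/layout.css css/theme.css > css/all.css
    cat js/jquery.js js/site.js > js/all.js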

I know what you’re thinking. No, don’t throw all the CSS and JS into the main page. You still really want this separation to:

  1. make use of the visitor’s browser cache. Once they’ve got your CSS, it won’t be downloaded again
  2. not pollute your HTML with that stuff

And now for some actual bandwidth savings 😉

  • Limit the number of images on your site
  • Compress your images
  • Eliminate unnecessary whitespace, or even compress your JS with one of the many tools available
  • Apache can compress the output before it’s sent back to the client through mod_deflate. This results in a smaller page being sent over the Internet, at the expense of CPU cycles on the web server. For servers that can afford the CPU overhead, this is an excellent way of saving bandwidth; if CPU rather than bandwidth is your bottleneck, turn compression off instead to save those cycles (a minimal config sketch follows this list).
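If you do choose compression, enabling mod_deflate is a small config change. A minimal sketch for Apache 2.x:

    # Enable the module first: a2enmod deflate
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
    </IfModule>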

Store PHP sessions in your database

If you use PHP sessions to keep track of your logged-in users, then you may want to have a look at PHP’s function session_set_save_handler. With this function you can overrule PHP’s session handling system with your own class (or set of callback functions), and store sessions in a database table.

Now, a key attribute to success is to make this table’s storage engine MEMORY (also known as HEAP). This stores all session information (which should be tiny variables) in the database server’s RAM, taking disk IO stress away from your web server, and allowing you to share sessions between multiple web servers in the future: if you’re logged in on server A, you’re also logged in on server B, making it possible to load balance.
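A rough sketch of such a handler; the sessions table, its columns and the PDO credentials are assumptions for illustration (MEMORY tables don’t support TEXT, hence the VARCHAR data column):

    <?php
    // Assumed schema:
    //   CREATE TABLE sessions (
    //     id   VARCHAR(32)   NOT NULL PRIMARY KEY,
    //     data VARCHAR(2048) NOT NULL,
    //     ts   INT UNSIGNED  NOT NULL
    //   ) ENGINE = MEMORY;

    $db = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');

    function sess_open($path, $name) { return true; }
    function sess_close() { return true; }

    function sess_read($id) {
        global $db;
        $stmt = $db->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : ''; // must return a string
    }

    function sess_write($id, $data) {
        global $db;
        $stmt = $db->prepare('REPLACE INTO sessions (id, data, ts) VALUES (?, ?, ?)');
        return $stmt->execute(array($id, $data, time()));
    }

    function sess_destroy($id) {
        global $db;
        return $db->prepare('DELETE FROM sessions WHERE id = ?')
                  ->execute(array($id));
    }

    function sess_gc($maxlifetime) {
        global $db;
        return $db->prepare('DELETE FROM sessions WHERE ts < ?')
                  ->execute(array(time() - $maxlifetime));
    }

    session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                             'sess_write', 'sess_destroy', 'sess_gc');
    session_start();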

Sessions on tmpfs

If it’s too much of a hassle to store sessions in a MEMORY database, storing the session files on a ramdisk is also a good option to gain some performance. Just make /var/lib/php5 live in RAM.
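Using the same tmpfs trick as before (the 16 MB size is an assumption; check where your distro actually stores its session files first):

    mount -t tmpfs -o size=16m tmpfs /var/lib/php5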

More tips

Some other things to google on if you want even more:

  • eAccelerator
  • memcached
  • tweak the Apache config
  • squid
  • turn off Apache logging
  • Add ‘noatime’ in /etc/fstab on your web and data drives to prevent disk writes on every read (example below)
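A noatime entry in /etc/fstab could look like this (device, mount point and filesystem type are examples):

    /dev/sda2  /var/www  ext3  defaults,noatime  0  2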

If you’ve got any thoughts, comments or suggestions for things we could add, leave a comment! Also, please subscribe to our RSS feed for the latest tips, tricks and examples on cutting-edge stuff.
