Ten PHP Design Patterns

Design patterns can speed up the development process by providing tested, proven development paradigms. Effective software design requires considering issues that may not become visible until later in the implementation. Reusing design patterns helps to prevent subtle issues that can cause major problems, and it also improves code readability for coders and architects who are familiar with the patterns.

Design patterns were introduced to the software community in Design Patterns, by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (colloquially known as the “Gang of Four”). The core concept behind design patterns, presented in the introduction, was simple. Over their years of developing software, Gamma et al. found certain patterns of solid design emerging, just as architects designing houses and buildings can develop templates for where a bathroom should be located or how a kitchen should be configured. Having those templates, or design patterns, means they can design better buildings more quickly. The same applies to software.

Design patterns not only present useful ways for developing robust software faster but also provide a way of encapsulating large ideas in friendly terms. For example, you can say you’re writing a messaging system to provide for loose coupling, or you can say you’re writing an observer, which is the name of that pattern.

It’s difficult to demonstrate the value of patterns using small examples. They often look like overkill because they really come into play in large code bases. This article can’t show huge applications, so you need to think about ways to apply the principles of the example — and not necessarily this exact code — in your larger applications. That’s not to say that you shouldn’t use patterns in small applications. Most good applications start small and become big, so there is no reason not to start with solid coding practices like these.

Now that you have a sense of what design patterns are and why they’re useful, it’s time to jump into ten design patterns for PHP V5.

The factory pattern

Many of the design patterns in the original Design Patterns book encourage loose coupling. To understand this concept, it’s easiest to talk about a struggle that many developers go through in large systems. The problem occurs when you change one piece of code and watch as a cascade of breakage happens in other parts of the system — parts you thought were completely unrelated.

The problem is tight coupling. Functions and classes in one part of the system rely too heavily on behaviors and structures in other functions and classes in other parts of the system. You need a set of patterns that lets these classes talk with each other, but you don’t want to tie them together so heavily that they become interlocked.

In large systems, lots of code relies on a few key classes. Difficulties can arise when you need to change those classes. For example, suppose you have a User class that reads from a file. You want to change it to a different class that reads from the database, but all the code references the original class that reads from a file. This is where the factory pattern comes in handy.

The factory pattern is a class that has some methods that create objects for you. Instead of using new directly, you use the factory class to create objects. That way, if you want to change the types of objects created, you can change just the factory. All the code that uses the factory changes automatically.

Listing 1 shows an example of a factory class.

Listing 1. Factory1.php
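The original listing is not reproduced here, but a minimal sketch consistent with the description that follows might look like this (the constructor signature and the hard-coded name "Jack" are illustrative assumptions):

```php
<?php
// A minimal factory-pattern sketch: client code asks UserFactory
// for IUser objects instead of calling new User() directly.
interface IUser
{
    public function getName();
}

// The file-backed implementation; swapping in a database-backed
// class later only requires changing the factory.
class User implements IUser
{
    public function __construct($id) { /* load user $id from a file */ }
    public function getName() { return "Jack"; } // illustrative value
}

class UserFactory
{
    public static function Create($id)
    {
        return new User($id);
    }
}

$uo = UserFactory::Create(1);
echo $uo->getName() . "\n";
```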

An interface called IUser defines what a user object should do. The implementation of IUser is called User, and a factory class called UserFactory creates IUser objects. This relationship is shown as UML in Figure 1.

Figure 1. The factory class and its related IUser interface and user class
The factory class and its related IUser interface and user class

If you run this code on the command line using the php interpreter, you get this result:

The test code asks the factory for a User object and prints the result of the getName method.

A variation of the factory pattern uses factory methods. These public static methods in the class construct objects of that type. This approach is useful when creating an object of this type is nontrivial. For example, suppose you need to first create the object and then set many attributes. This version of the factory pattern encapsulates that process in a single location so that the complex initialization code isn’t copied and pasted all over the code base.

Listing 2 shows an example of using factory methods.

Listing 2. Factory2.php
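Again, the listing body is sketched rather than quoted. The text says only that the class has two static creation methods; the method names Load and Create, and the returned name, are illustrative assumptions:

```php
<?php
// Factory-method variation: the static creation methods live on
// the class itself instead of in a separate factory class.
interface IUser
{
    public function getName();
}

class User implements IUser
{
    // Loads an existing user by id.
    public static function Load($id)
    {
        return new User($id);
    }

    // Creates a brand-new user; any nontrivial initialization
    // would be encapsulated here instead of copied around.
    public static function Create()
    {
        return new User(null);
    }

    public function __construct($id) { /* setup could go here */ }
    public function getName() { return "Jack"; } // illustrative value
}

$uo = User::Load(1);
echo $uo->getName() . "\n";
```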

This code is much simpler. It has only one interface, IUser, and one class called User that implements the interface. The User class has two static methods that create the object. This relationship is shown in UML in Figure 2.

Figure 2. The IUser interface and the user class with factory methods
The IUser interface and the user class with factory methods

Running the script on the command line yields the same result as the code in Listing 1, as shown here:

As stated, sometimes such patterns can seem like overkill in small situations. Nevertheless, it’s still good to learn solid coding forms like these for use in any size of project.

The singleton pattern

Some application resources are exclusive in that there is one and only one of this type of resource. For example, the connection to a database through the database handle is exclusive. You want to share the database handle in an application because it’s an overhead to keep opening and closing connections, particularly during a single page fetch.

The singleton pattern covers this need. An object is a singleton if the application can include one and only one of that object at a time. The code in Listing 3 shows a database connection singleton in PHP V5.

Listing 3. Singleton.php
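A sketch of the singleton, matching the private constructor and static get method described below. A real version would open an actual database connection in the constructor; here a stand-in handle keeps the sketch self-contained and runnable anywhere:

```php
<?php
// Singleton sketch: the private constructor prevents
// 'new DatabaseConnection()', so the static get() method
// hands out the one and only instance.
class DatabaseConnection
{
    private static $instance = null;
    private $_handle = null;

    public static function get()
    {
        if (self::$instance === null) {
            self::$instance = new DatabaseConnection();
        }
        return self::$instance;
    }

    private function __construct()
    {
        // A real version would open the connection here (e.g., via PDO).
        // The unique id is a stand-in for a real database handle.
        $this->_handle = uniqid('dbhandle_');
    }

    public function handle()
    {
        return $this->_handle;
    }
}

print DatabaseConnection::get()->handle() . "\n";
print DatabaseConnection::get()->handle() . "\n"; // same handle both times
```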

This code shows a single class called DatabaseConnection. You can’t create your own DatabaseConnection because the constructor is private. But you can get the one and only one DatabaseConnection object using the static get method. The UML for this code is shown in Figure 3.

Figure 3. The database connection singleton
The database connection singleton

The proof of the pudding is that the database handle returned by the handle method is the same across two calls. You can see this by running the code on the command line.

The two handles returned are the same object. If you use the database connection singleton across the application, you reuse the same handle everywhere.

You could use a global variable to store the database handle, but that approach only works for small applications. In larger applications, avoid globals, and go with objects and methods to get access to resources.

The observer pattern

The observer pattern gives you another way to avoid tight coupling between components. This pattern is simple: One object makes itself observable by adding a method that allows another object, the observer, to register itself. When the observable object changes, it sends a message to the registered observers. What those observers do with that information isn’t relevant or important to the observable object. The result is a way for objects to talk with each other without necessarily understanding why.

A simple example is a list of users in a system. The code in Listing 4 shows a user list that sends out a message when users are added. This list is watched by a logging observer that puts out a message when a user is added.

Listing 4. Observer.php
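A sketch of the four elements described below: the two interfaces and the two classes. The method names onChanged, addObserver, and addCustomer are illustrative assumptions:

```php
<?php
// Observer sketch: UserList is observable; UserListLogger
// registers itself and is notified when a user is added.
interface IObserver
{
    public function onChanged($sender, $args);
}

interface IObservable
{
    public function addObserver($observer);
}

class UserList implements IObservable
{
    private $_observers = array();

    public function addObserver($observer)
    {
        $this->_observers[] = $observer;
    }

    public function addCustomer($name)
    {
        // The list doesn't know or care what its observers do
        // with this notification.
        foreach ($this->_observers as $obs) {
            $obs->onChanged($this, $name);
        }
    }
}

class UserListLogger implements IObserver
{
    public function onChanged($sender, $args)
    {
        echo "'$args' added to user list\n";
    }
}

$ul = new UserList();
$ul->addObserver(new UserListLogger());
$ul->addCustomer("Jack");
```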

This code defines four elements: two interfaces and two classes. The IObservable interface defines an object that can be observed, and the UserList implements that interface to register itself as observable. The IObserver interface defines what it takes to be an observer, and the UserListLogger implements that IObserver interface. This is shown in the UML in Figure 4.

Figure 4. The observable user list and the user list event logger
The observable user list and the user list event logger

If you run this on the command line, you see this output:

The test code creates a UserList and adds the UserListLogger observer to it. Then the code adds a customer, and the UserListLogger is notified of that change.

It’s critical to realize that the UserList doesn’t know what the logger is going to do. There could be one or more listeners that do other things. For example, you may have an observer that sends a message to the new user, welcoming him to the system. The value of this approach is that the UserList is ignorant of all the objects depending on it; it focuses on its job of maintaining the user list and sending out messages when the list changes.

This pattern isn’t limited to objects in memory. It’s the underpinning of the database-driven message queuing systems used in larger applications.

The chain-of-command pattern

Building on the loose-coupling theme, the chain-of-command pattern routes a message, command, request, or whatever you like through a set of handlers. Each handler decides for itself whether it can handle the request. If it can, the request is handled, and the process stops. You can add or remove handlers from the system without influencing other handlers. Listing 5 shows an example of this pattern.

Listing 5. Chain.php
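A sketch of the chain, consistent with the description that follows. The command names 'addUser' and 'mail' and the onCommand/runCommand method names are illustrative assumptions:

```php
<?php
// Chain-of-command sketch: each ICommand decides whether it can
// handle a named request; the chain stops at the first handler.
interface ICommand
{
    public function onCommand($name, $args);
}

class CommandChain
{
    private $_commands = array();

    public function addCommand($cmd)
    {
        $this->_commands[] = $cmd;
    }

    public function runCommand($name, $args)
    {
        foreach ($this->_commands as $cmd) {
            if ($cmd->onCommand($name, $args)) {
                return; // handled; stop the chain
            }
        }
        // No handler claimed the command: fall through silently.
    }
}

class UserCommand implements ICommand
{
    public function onCommand($name, $args)
    {
        if ($name != 'addUser') return false;
        echo "UserCommand handling 'addUser'\n";
        return true;
    }
}

class MailCommand implements ICommand
{
    public function onCommand($name, $args)
    {
        if ($name != 'mail') return false;
        echo "MailCommand handling 'mail'\n";
        return true;
    }
}

$cc = new CommandChain();
$cc->addCommand(new UserCommand());
$cc->addCommand(new MailCommand());
$cc->runCommand('addUser', null);
$cc->runCommand('mail', null);
```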

This code defines a CommandChain class that maintains a list of ICommand objects. Two classes implement the ICommand interface: one that responds to requests for mail and another that responds to adding users. The UML is shown in Figure 5.

Figure 5. The command chain and its related commands
The command chain and its related commands

If you run the script, which contains some test code, you see the following output:

The code first creates a CommandChain object and adds instances of the two command objects to it. It then runs two commands to see who responds to them. If a command's name matches neither the name UserCommand handles nor the name MailCommand handles, the code falls through and nothing happens.

The chain-of-command pattern can be valuable in creating an extensible architecture for processing requests, which can be applied to many problems.

The strategy pattern

The next pattern is the strategy pattern. In this pattern, algorithms are extracted from complex classes so they can be replaced easily. For example, the strategy pattern is an option if you want to change the way pages are ranked in a search engine. Think about a search engine in several parts — one that iterates through the pages, one that ranks each page, and another that orders the results based on the rank. In a naive implementation, all those parts would be in the same class. Using the strategy pattern, you take the ranking portion and put it into another class so you can change how pages are ranked without interfering with the rest of the search engine code.

As a simpler example, Listing 6 shows a user list class that provides a method for finding a set of users based on a plug-and-play set of strategies.

Listing 6. Strategy.php
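A sketch of the strategy example, matching the two implementations described below (one random, one that selects names sorting at or after a given string). The class names FindAfterStrategy and RandomStrategy are illustrative assumptions:

```php
<?php
// Strategy sketch: UserList::find() takes any IStrategy, so the
// selection algorithm can be swapped without touching UserList.
interface IStrategy
{
    public function filter($record);
}

// Chooses every name that sorts at or after a given string.
class FindAfterStrategy implements IStrategy
{
    private $_name;

    public function __construct($name)
    {
        $this->_name = $name;
    }

    public function filter($record)
    {
        return strcmp($this->_name, $record) <= 0;
    }
}

// Chooses names at random.
class RandomStrategy implements IStrategy
{
    public function filter($record)
    {
        return rand(0, 1) == 1;
    }
}

class UserList
{
    private $_list = array();

    public function __construct($names)
    {
        foreach ($names as $name) {
            $this->_list[] = $name;
        }
    }

    public function find($filter)
    {
        $recs = array();
        foreach ($this->_list as $user) {
            if ($filter->filter($user)) {
                $recs[] = $user;
            }
        }
        return $recs;
    }
}

$ul = new UserList(array("Andy", "Jack", "Lori", "Megan"));
print_r($ul->find(new FindAfterStrategy("J")));
print_r($ul->find(new RandomStrategy()));
```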

The UML for this code is shown in Figure 6.

Figure 6. The user list and the strategies for selecting users
The user list and the strategies for selecting users

The UserList class is a wrapper around an array of names. It implements a find method that takes one of several strategies for selecting a subset of those names. Those strategies are defined by the IStrategy interface, which has two implementations: One chooses users randomly and the other chooses all the names after a specified name. When you run the test code, you get the following output:

The test code runs the same user lists against two strategies and shows the results. In the first case, the strategy looks for any name that sorts after J, so you get Jack, Lori, and Megan. The second strategy picks names randomly and yields different results every time. In this case, the results are Andy and Megan.

The strategy pattern is great for complex data-management systems or data-processing systems that need a lot of flexibility in how data is filtered, searched, or processed.

The adapter pattern

Use the adapter pattern when you need to convert an object of one type to an object of another type. Typically, developers handle this process through a bunch of assignment code, as shown in the next listing. The adapter pattern is a nice way to clean up this type of code and reuse all your assignment code in other places. Also, it hides the assignment code, which can simplify things quite a bit if you’re also doing some formatting along the way.
Listing 6. Using code to assign values between objects
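A sketch of the pre-adapter assignment code. Only the EmailAddress and AddressDisplay class names and the two-part display (address type plus formatted string) come from the text; the property and method names are illustrative assumptions:

```php
<?php
// Without an adapter: ad hoc assignment code copies values from
// an EmailAddress into an AddressDisplay wherever one is shown.
class EmailAddress
{
    private $_address;

    public function __construct($address)
    {
        $this->_address = $address;
    }

    public function getEmailAddress()
    {
        return $this->_address;
    }
}

class AddressDisplay
{
    public $addressType;      // e.g., 'email' or 'physical'
    public $formattedAddress; // display-ready string
}

// This assignment code ends up copied anywhere an address is displayed.
$email = new EmailAddress('jack@example.com');
$display = new AddressDisplay();
$display->addressType = 'email';
$display->formattedAddress = $email->getEmailAddress();
echo $display->addressType . ': ' . $display->formattedAddress . "\n";
```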

This example uses an AddressDisplay object to display an address to a user. The AddressDisplay object has two parts: the type of address and a formatted address string.

After implementing the pattern (see Listing 6.1), the PHP script no longer needs to worry about exactly how the EmailAddress object is turned into the AddressDisplay object. That’s a good thing, especially if the AddressDisplay object changes or the rules that govern how an EmailAddress object is turned into an AddressDisplay object change. Remember, one of the main benefits of designing your code in a modular fashion is to take advantage of having to change as little code as possible if something in the business domain changes or you need to add a new feature to the software. Think about this even when you’re doing mundane tasks, such as assigning values from properties of one object to another.
Listing 6.1. Using the adapter pattern
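A sketch of the adapter version, in which the conversion is moved into a class that extends AddressDisplay. The adapter class name and member names are illustrative assumptions:

```php
<?php
// Adapter sketch (by extension): the adapter hides the assignment
// code inside its constructor, so the rules for turning an
// EmailAddress into an AddressDisplay live in exactly one place.
class EmailAddress
{
    private $_address;

    public function __construct($address)
    {
        $this->_address = $address;
    }

    public function getEmailAddress()
    {
        return $this->_address;
    }
}

class AddressDisplay
{
    public $addressType;
    public $formattedAddress;
}

class EmailAddressDisplayAdapter extends AddressDisplay
{
    public function __construct(EmailAddress $email)
    {
        // Any formatting rules would be applied here as well.
        $this->addressType = 'email';
        $this->formattedAddress = $email->getEmailAddress();
    }
}

$display = new EmailAddressDisplayAdapter(new EmailAddress('jack@example.com'));
echo $display->addressType . ': ' . $display->formattedAddress . "\n";
```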

Figure 6.2 shows a class diagram of the adapter pattern.

Figure 6.2. Class diagram of the adapter pattern
Class diagram of the adapter pattern

Alternate method

An alternate method of writing an adapter — and one that some prefer — is to implement an interface to adapt behavior, rather than extending an object. This is a very clean way of creating an adapter and doesn’t have the drawbacks of extending the object. One disadvantage of using the interface is that you need to add the implementation into the adapter class, as shown in Figure 6.3.
Figure 6.3. The adapter pattern (using an interface)
The adapter pattern (using an interface)

The iterator pattern

The iterator pattern provides a way to encapsulate looping through a collection or array of objects. It is particularly handy if you want to loop through different types of objects in the collection.

Look back at the e-mail and physical address example in Listing 6. Before adding an iterator pattern, if you’re looping through the person’s addresses, you might loop through the physical addresses and display them, then loop through the person’s e-mail addresses and display them, then loop through the person’s IM addresses and display those. That’s some messy looping!

Instead, by implementing an iterator, all you have to do is call while($itr->hasNext()) and deal with the next item $itr->next() returns. An example of one of the iterators is shown in Listing 7. An iterator is powerful because you can add new types of items through which to iterate, and you don’t have to change the code that loops through the items. In the Person example, for instance, you could add an array of IM addresses; simply by updating the iterator, you don’t have to change any code that loops through the addresses for display.
Listing 7. Using the iterator pattern to loop through objects
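A sketch of one such iterator. The AddressIterator interface name and the hasNext()/next() protocol come from the text; the concrete class and its storage are illustrative assumptions:

```php
<?php
// Iterator sketch: the AddressIterator interface hides how
// addresses are stored, so the display loop never changes even
// if new kinds of addresses are added behind the iterator.
interface AddressIterator
{
    public function hasNext();
    public function next();
}

class EmailAddressIterator implements AddressIterator
{
    private $_addresses;
    private $_position = 0;

    public function __construct($addresses)
    {
        $this->_addresses = $addresses;
    }

    public function hasNext()
    {
        return $this->_position < count($this->_addresses);
    }

    public function next()
    {
        return $this->_addresses[$this->_position++];
    }
}

// The loop deals only with the iterator protocol.
$itr = new EmailAddressIterator(array('jack@example.com', 'jill@example.com'));
while ($itr->hasNext()) {
    echo $itr->next() . "\n";
}
```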

If the Person object is modified to return an implementation of the AddressIterator interface, the application code that uses the iterator doesn’t need to be modified when the implementation is extended to loop through additional objects. You can use a compound iterator that wraps the iterators that loop through each type of address, like the one in Listing 7.

Figure 7 shows a class diagram of the iterator pattern.
Figure 7. Class diagram of the iterator pattern
Class diagram of the iterator pattern

The decorator pattern

Consider the code sample in Listing 8. The purpose of this code is to add a bunch of features onto a car for a Build Your Own Car site. Each car model has more features and an associated cost. With only two models, it would be fairly trivial to add these features with if-then statements. However, if a new model came along, you’d have to go back through the code and make sure the statements worked for the new model.
Listing 8. Using the decorator pattern to add features
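A sketch of the decorator approach. Only the AutomobileModel name comes from the text; the interface, the option classes, and the prices are illustrative assumptions:

```php
<?php
// Decorator sketch: each option wraps an automobile and adds its
// own cost and description on top of whatever it wraps.
interface IAutomobile
{
    public function getPrice();
    public function getDescription();
}

class AutomobileModel implements IAutomobile
{
    public function getPrice() { return 12000; }
    public function getDescription() { return 'base model'; }
}

abstract class AutomobileDecorator implements IAutomobile
{
    protected $_auto;

    public function __construct(IAutomobile $auto)
    {
        $this->_auto = $auto;
    }
}

class NavigationPackage extends AutomobileDecorator
{
    public function getPrice() { return $this->_auto->getPrice() + 1500; }
    public function getDescription() { return $this->_auto->getDescription() . ' + navigation'; }
}

class LeatherSeats extends AutomobileDecorator
{
    public function getPrice() { return $this->_auto->getPrice() + 800; }
    public function getDescription() { return $this->_auto->getDescription() . ' + leather seats'; }
}

// Decorators stack: each wraps the result of the previous one.
$car = new LeatherSeats(new NavigationPackage(new AutomobileModel()));
echo $car->getDescription() . ': $' . $car->getPrice() . "\n";
// base model + navigation + leather seats: $14300
```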

Enter the decorator pattern, which allows you to add this functionality onto the AutomobileModel in a nice, clean class. Each class remains concerned only about its price and options and how they’re added to the base model.

Figure 8 shows a class diagram for the decorator pattern.
Figure 8. Class diagram of the decorator pattern
Class diagram of the decorator pattern
An advantage of the decorator pattern is that you can easily tack on more than one decorator to the base at a time.

If you’ve done much work with stream objects, you have used a decorator. Most stream constructs are decorators that take a base stream, then decorate it by adding additional functionality — such as a stream that reads from files or one that buffers its input.

The delegate pattern

The delegate pattern provides a way of delegating behavior based on different criteria. Consider the code in Listing 9. This code contains several conditions. Based on the condition, the code selects the appropriate type of object to handle the request.
Listing 9. Using conditional statements to route shipping requests
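A sketch of the conditional version. The shipper classes and the weight-based rule are illustrative assumptions standing in for whatever criteria the original used:

```php
<?php
// Conditional routing sketch: the caller picks a shipper with if
// statements, so the routing rules are scattered through the code.
class TruckShipper
{
    public function ship($package)
    {
        return "Shipped '$package' by truck";
    }
}

class RailShipper
{
    public function ship($package)
    {
        return "Shipped '$package' by rail";
    }
}

function shipPackage($package, $weight)
{
    // Every new shipping method means another branch here,
    // in every place that ships a package.
    if ($weight > 100) {
        $shipper = new RailShipper();
    } else {
        $shipper = new TruckShipper();
    }
    return $shipper->ship($package);
}

echo shipPackage('books', 20) . "\n";
echo shipPackage('machine parts', 500) . "\n";
```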

With a delegate pattern, an object internalizes this routing process by setting an internal reference to the appropriate object when a method is called, like useRail() in Listing 10. This is especially handy if the criteria change for handling various packages or if a new type of shipping becomes available.
Listing 10. Using the delegate pattern to route shipping requests
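A sketch of the delegate version. The useRail() and useTruck() method names come from the text; the router and shipper class names are illustrative assumptions:

```php
<?php
// Delegate sketch: the router holds an internal reference to
// whichever shipper should do the work, and forwards calls to it.
class TruckShipper
{
    public function ship($package)
    {
        return "Shipped '$package' by truck";
    }
}

class RailShipper
{
    public function ship($package)
    {
        return "Shipped '$package' by rail";
    }
}

class ShippingRouter
{
    private $_delegate;

    public function __construct()
    {
        $this->useTruck(); // default delegate
    }

    public function useRail()
    {
        $this->_delegate = new RailShipper();
    }

    public function useTruck()
    {
        $this->_delegate = new TruckShipper();
    }

    public function ship($package)
    {
        // All calls are forwarded to the current delegate, so the
        // behavior can change dynamically at run time.
        return $this->_delegate->ship($package);
    }
}

$router = new ShippingRouter();
echo $router->ship('books') . "\n";
$router->useRail();
echo $router->ship('books') . "\n";
```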

The delegate provides the advantage that behavior can change dynamically by calling the useRail() or useTruck() method to switch which class handles the work.

Figure 10 shows a class diagram of the delegate pattern.
Figure 10. Class diagram of the delegate pattern
Class diagram of the delegate pattern

The state pattern

The state pattern is similar to the command pattern, but the intent is quite different. Consider the code below.
Listing 11. Using code to build a robot
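A sketch of the conditional version of the robot's operating system, covering the four operations named below. The internal flags and accessor methods are illustrative assumptions:

```php
<?php
// Conditional version: every rule about what is legal in each
// state lives in if statements inside the Robot class itself.
class Robot
{
    private $_poweredOn = false;
    private $_mode = 'robot'; // 'robot' or 'vehicle'

    public function powerUp()
    {
        if (!$this->_poweredOn) {
            $this->_poweredOn = true;
        }
    }

    public function powerDown()
    {
        if ($this->_poweredOn) {
            $this->_poweredOn = false;
        }
    }

    public function turnIntoVehicle()
    {
        // Only a powered-up robot can transform.
        if ($this->_poweredOn && $this->_mode == 'robot') {
            $this->_mode = 'vehicle';
        }
    }

    public function turnIntoRobot()
    {
        if ($this->_poweredOn && $this->_mode == 'vehicle') {
            $this->_mode = 'robot';
        }
    }

    public function isPoweredOn() { return $this->_poweredOn; }
    public function getMode() { return $this->_mode; }
}

$robot = new Robot();
$robot->powerUp();
$robot->turnIntoVehicle();
echo $robot->getMode() . "\n"; // vehicle
```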

In this listing, the PHP code represents the operating system for a powerful robot that turns into a car. The robot can power up, power down, turn into a robot when it’s a vehicle, and turn into a vehicle when it’s a robot. The code is OK now, but you can see that it could become complex if any of the rules change or if another state comes into the picture.

Now look at Listing 12, which has the same logic for handling the robot’s states, but this time puts the logic into the state pattern. The code in Listing 12 does the same thing as the original code, but the logic for handling states has been put into one object for each state. To illustrate the advantages of using the design pattern, imagine that after a while, these robots have discovered that they shouldn’t power down while being in robot mode. In fact, if they power down, they must change to vehicle mode first. If they’re already in vehicle mode, the robot just powers down. With the state pattern, the changes are pretty trivial.
Listing 12. Using the state pattern to handle the robot’s state
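A sketch of the state-pattern version, with one object per state as described. The state class names and the echoed messages are illustrative assumptions:

```php
<?php
// State-pattern sketch: each state object handles the events that
// are legal in that state and advances the robot to the next state.
interface IRobotState
{
    public function powerUp(Robot $robot);
    public function powerDown(Robot $robot);
    public function turnIntoVehicle(Robot $robot);
    public function turnIntoRobot(Robot $robot);
}

class PoweredDownState implements IRobotState
{
    public function powerUp(Robot $robot)
    {
        echo "powering up\n";
        $robot->setState(new RobotModeState());
    }
    public function powerDown(Robot $robot) { /* already down */ }
    public function turnIntoVehicle(Robot $robot) { /* can't while off */ }
    public function turnIntoRobot(Robot $robot) { /* can't while off */ }
}

class RobotModeState implements IRobotState
{
    public function powerUp(Robot $robot) { /* already up */ }
    public function powerDown(Robot $robot)
    {
        echo "powering down\n";
        $robot->setState(new PoweredDownState());
    }
    public function turnIntoVehicle(Robot $robot)
    {
        echo "turning into a vehicle\n";
        $robot->setState(new VehicleModeState());
    }
    public function turnIntoRobot(Robot $robot) { /* already a robot */ }
}

class VehicleModeState implements IRobotState
{
    public function powerUp(Robot $robot) { /* already up */ }
    public function powerDown(Robot $robot)
    {
        echo "powering down\n";
        $robot->setState(new PoweredDownState());
    }
    public function turnIntoVehicle(Robot $robot) { /* already a vehicle */ }
    public function turnIntoRobot(Robot $robot)
    {
        echo "turning into a robot\n";
        $robot->setState(new RobotModeState());
    }
}

class Robot
{
    private $_state;

    public function __construct()
    {
        $this->_state = new PoweredDownState();
    }

    // Each state object uses this to advance the robot's state.
    public function setState(IRobotState $state) { $this->_state = $state; }
    public function getState() { return $this->_state; }

    public function powerUp() { $this->_state->powerUp($this); }
    public function powerDown() { $this->_state->powerDown($this); }
    public function turnIntoVehicle() { $this->_state->turnIntoVehicle($this); }
    public function turnIntoRobot() { $this->_state->turnIntoRobot($this); }
}

$robot = new Robot();
$robot->powerUp();
$robot->turnIntoVehicle();
$robot->powerDown();
```

Note how the "power down while in robot mode" rule change described above would touch only RobotModeState::powerDown, which could first switch to VehicleModeState and then power down.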

Listing 13. Small changes to one of the state objects

Something that doesn’t appear obvious from the class diagram is that each object in the state pattern has a reference to the context object (the robot), so each object can advance the state to the appropriate one.
Figure 14. Class diagram of the state pattern
Class diagram of the state pattern


Using design patterns in your PHP code is one way to make your code more readable and maintainable. By using established patterns, you benefit from common design constructs that allow other developers on a team to understand your code’s purpose. You also benefit from the work done by other designers, so you don’t have to learn the hard lessons of design ideas that don’t work out. (Adapted from IBM developerWorks.)

Please post your experiences with these design patterns.

Windows vs. Linux: 10 technical differences

There are basic differences between Linux and Windows that will always set them apart. This is not, in the least, to say one is better than the other. It’s just to say that they are fundamentally different. Many people, looking from the view of one operating system or the other, don’t quite get the differences between these two powerhouses.

1: Full access vs. no access

Linux is licensed under the GNU General Public License (GPL), which ensures that users (of all sorts) can access (and alter) the code to the very kernel that serves as the foundation of the Linux operating system. You want to peer at the Windows code? Good luck. Unless you are a member of a very select (and elite, to many) group, you will never lay eyes on the code making up the Windows operating system.

You can look at this from both sides of the fence. Some say giving the public access to the code opens the operating system (and the software that runs on top of it) to malicious developers who will take advantage of any weakness they find. Others say that having full access to the code helps bring about faster improvements and bug fixes to keep those malicious developers from being able to bring the system down. I have, on occasion, dipped into the code of one Linux application or another, and when all was said and done, was happy with the results. Could I have done that with a closed-source Windows application? No.

2: Licensing freedom vs. licensing restrictions

With a Linux GPL-licensed operating system, you are free to modify that software and use and even republish or sell it (so long as you make the code available). Also, with the GPL, you can download a single copy of a Linux distribution (or application) and install it on as many machines as you like. With the Microsoft license, you can do none of the above. You are bound to the number of licenses you purchase, so if you purchase 10 licenses, you can legally install that operating system (or application) on only 10 machines.

3: Online peer support vs. paid help-desk support

This is one issue where most companies turn their backs on Linux. But it’s really not necessary. With Linux, you have the support of a huge community via forums, online search, and plenty of dedicated Web sites. And of course, if you feel the need, you can purchase support contracts from some of the bigger Linux companies (Red Hat and Novell for instance). Still, generally speaking, most problems with Linux have been encountered and documented. So chances are good you’ll find your solution fairly quickly.

Yes, you can go the same route with Microsoft and depend upon your peers for solutions. There are just as many help sites/lists/forums for Windows as there are for Linux. And you can purchase support from Microsoft itself. Most corporate higher-ups easily fall victim to the safety net that having a support contract brings. But most higher-ups haven’t had to depend upon said support contract. Of the various people I know who have used either a Linux paid support contract or a Microsoft paid support contract, I can’t say one was more pleased than the other. This of course raises the question: Why do so many say that Microsoft support is superior to Linux paid support?

4: Full vs. partial hardware support

Years ago, if you wanted to install Linux on a machine you had to make sure you hand-picked each piece of hardware or your installation would not work 100 percent. This is not so much the case now. You can grab a PC (or laptop) and most likely get one or more Linux distributions to install and work nearly 100 percent. But there are still some exceptions. For instance, hibernate/suspend remains a problem with many laptops, although it has come a long way.

With Windows, you know that most every piece of hardware will work with the operating system. Of course, there are times (and I have experienced this over and over) when you will wind up spending much of the day searching for the correct drivers for that piece of hardware you no longer have the install disk for. But you can go out and buy that 10-cent Ethernet card and know it’ll work on your machine (so long as you have, or can find, the drivers). You also can rest assured that when you purchase that insanely powerful graphics card, you will probably be able to take full advantage of its power.

5: Command line vs. no command line

No matter how far the Linux operating system has come and how amazing the desktop environment becomes, the command line will always be an invaluable tool for administration purposes. Nothing will ever replace my favorite text-based editor, ssh, and any given command-line tool. I can’t imagine administering a Linux machine without the command line. But for the end user — not so much. You could use a Linux machine for years and never touch the command line. Same with Windows. You can still use the command line with Windows, but not nearly to the extent as with Linux. And Microsoft tends to obfuscate the command prompt from users. Without going to Run and entering cmd (or command, or whichever it is these days), the user won’t even know the command-line tool exists. And if a user does get the Windows command line up and running, how useful is it really?

6: Centralized vs. noncentralized application installation

With Linux you have (with nearly every distribution) a centralized location where you can search for, add, or remove software. I’m talking about package management systems, such as Synaptic. With Synaptic, you can open up one tool, search for an application (or group of applications), and install that application without having to do any Web searching (or purchasing).

With Windows, you must know where to find the software you want to install, download the software (or put the CD into your machine), and run setup.exe or install.exe with a simple double-click. For many years, it was thought that installing applications on Windows was far easier than on Linux. And for many years, that thought was right on target. Not so much now. Installation under Linux is simple, painless, and centralized.

7: Flexibility vs. rigidity

I always compare Linux (especially the desktop) and Windows to a room where the floor and ceiling are either movable or not. With Linux, you have a room where the floor and ceiling can be raised or lowered, at will, as high or low as you want to make them. With Windows, that floor and ceiling are immovable. You can’t go further than Microsoft has deemed it necessary to go.

Take, for instance, the desktop. Unless you are willing to pay for and install a third-party application that can alter the desktop appearance, with Windows you are stuck with what Microsoft has declared is the ideal desktop for you. With Linux, you can pretty much make your desktop look and feel exactly how you want/need. You can have as much or as little on your desktop as you want. From simple flat Fluxbox to a full-blown 3D Compiz experience, the Linux desktop is as flexible an environment as there is on a computer.

8: Fanboys vs. corporate types

I wanted to add this because even though Linux has reached well beyond its school-project roots, Linux users tend to be soapbox-dwelling fanatics who are quick to spout off about why you should be choosing Linux over Windows. I am guilty of this on a daily basis (I try hard to recruit new fanboys/girls), and it’s a badge I wear proudly. Of course, this is seen as less than professional by some. After all, why would something worthy of a corporate environment have or need cheerleaders? Shouldn’t the software sell itself? Because of the open source nature of Linux, it has to make do without the help of the marketing budgets and deep pockets of Microsoft. With that comes the need for fans to help spread the word. And word of mouth is the best friend of Linux.

Some see the fanaticism as the same college-level hoorah that keeps Linux in the basements for LUG meetings and science projects. But I beg to differ. Another company, thanks to the phenomenon of a simple music player and phone, has fallen into the same fanboy fanaticism, and yet that company’s image has not been besmirched because of that fanaticism. Windows does not have these same fans. Instead, Windows has a league of paper-certified administrators who believe the hype when they hear the misrepresented market share numbers reassuring them they will be employable until the end of time.

9: Automated vs. nonautomated removable media

I remember the days of old when you had to mount your floppy to use it and unmount it to remove it. Well, those times are drawing to a close — but not completely. One issue that plagues new Linux users is how removable media is used. The idea of having to manually “mount” a CD drive to access the contents of a CD is completely foreign to new users. There is a reason this is the way it is. Because Linux has always been a multiuser platform, it was thought that forcing a user to mount media before using it would keep one user’s files from being overwritten by another. Think about it: On a multiuser system, if everyone had instant access to a disk that had been inserted, what would stop them from deleting or overwriting a file you had just added to the media? Things have now evolved to the point where Linux subsystems are set up so that you can use a removable device in the same way you use them in Windows. But it’s not the norm. And besides, who doesn’t want to manually edit the /etc/fstab file?

10: Multilayered run levels vs. a single-layered run level

I couldn’t figure out how best to title this point, so I went with a description. What I’m talking about is Linux’s inherent ability to stop at different run levels. With this, you can work from either the command line (run level 3) or the GUI (run level 5). This can really save your socks when X Windows is fubared and you need to figure out the problem. You can do this by booting into run level 3, logging in as root, and finding/fixing the problem.

With Windows, you’re lucky to get to a command line via safe mode — and then you may or may not have the tools you need to fix the problem. In Linux, even in run level 3, you can still get and install a tool to help you out (hello apt-get install APPLICATION via the command line). Having different run levels is helpful in another way. Say the machine in question is a Web or mail server. You want to give it all the memory you have, so you don’t want the machine to boot into run level 5. However, there are times when you do want the GUI for administrative purposes (even though you can fully administer a Linux server from the command line). Because you can run the startx command from the command line at run level 3, you can still start up X Windows and have your GUI as well. With Windows, you are stuck at the Graphical run level unless you hit a serious problem.

Your call…

Those are 10 fundamental differences between Linux and Windows. You can decide for yourself whether you think those differences give the advantage to one operating system or the other. Let’s debate and find out.

Creating an IT policy that works

When it comes to building and implementing an IT policy, no quick-fix or one-size-fits-all solution will adequately serve your needs. Every business is different, and the approach taken to meet objectives and/or ensure compliance will vary from one environment to another, even in the same industries. But you can take advantage of certain best practices to increase your odds of crafting and implementing a policy that employees will support and that will help protect your organisation.

Executive support

For starters, no policy will succeed without the basic buy-in from senior leadership. Senior executives, directors, and managers should be asked to provide input and some form of approval to the policy. Obtain a clear statement of support before you start creating the policy and continue to keep senior management educated and involved as it is written. When the policy is ready for implementation, request that management formally present it to your organisation, stressing its importance.

Consensus building

As you begin formulating a policy, you should involve all interested parties in the discussion of its establishment by creating a committee. Your committee should consist of the owner of the policy, subject matter experts, frequent users of the policy, and representatives from groups affected by the policy. You may also want to consult specific groups within your particular organisation, such as Human Resources, Financial, and Legal. These groups can make recommendations based on the impact of the policy on the organisation as well as on its viability and legitimacy. This will ensure the policy you develop is fully understood by everyone concerned and that it has their backing once it’s implemented. That broad base of support is one of the best assurances for policy success.

Policy contents

Although policies vary from organisation to organisation, a typical policy should include a statement of purpose, description of the users affected, history of revisions (if applicable), definitions of any special terms, and specific policy instructions from management.

Make sure everyone has a clear understanding of the purpose of the policy. Are you creating this policy because you have to be in compliance with some ruling? Are you trying to cut down on costs or create additional savings? Are you ensuring liability will not be placed on the company?

Creating a uniform policy format to ensure that information will be presented to the reader in a consistent manner is paramount for policy success. A uniform format will make the policy easier to read, understand, implement, and enforce. Keep the scope of your policies manageable as well. Consider making separate, smaller polices that address specific needs.

The language of your policies must convey both certainty and unquestionable management support. Remember, you’re setting policy, not describing standards. A standard would, for example, define the number of secret key bits that are required in an encryption algorithm. A policy, on the other hand, would dictate the need to use an approved encryption process when sensitive information is sent over the public Internet system.

Standards will need to be changed considerably more often than policies because the manual procedures, organisational structures, business processes, and information system technologies change much more rapidly than policies. You can reference standards within a policy and modify that standard as the technology or compliance requirements change.

After you roll out a policy, you may see many examples of inappropriate use or violations, but it’s difficult to anticipate them. So it’s important to have catch-all clauses within your policies, such as:

  • “Viewing or downloading offensive, obscene, or inappropriate material from any source is forbidden.”
  • “The storing and transfer of illegal images, data, material, and/or text using this equipment is forbidden.”

Research and preparation

In drafting your policy, you will want to research related issues both inside and outside the company. Some common areas to research include:

  • Company policy library (if you have one)
  • Forms and documents required to develop or complete the policy: request forms, legal documentation, etc.
  • State and or federal laws that are relevant to your policy
  • Similar policies at other businesses

One of the biggest mistakes companies make when they begin designing policies is to create guidelines and restrictions without any understanding of how the company’s business actually works. Although there’s always going to be a factor of inconvenience with any security policy, the goal is to create a more secure environment without making things overly difficult for the people who have to use the resources the policy is trying to protect.

Policies made outside the company’s business model will be circumvented over time, and the overall state of the environment can become worse than before the security measures were implemented. So make sure part of your research involves developing a solid understanding of business processes so that your policy can work with them, rather than against them.

Policy reviews

Even after you’ve finished drafting or updating a policy, the job is not complete. The policy should be reviewed by legal counsel to ensure that it complies with state and federal laws before it’s finalised and distributed to employees. Further, you should review the policies on a regular basis to make sure they continue to comply with applicable law and the needs of your organisation. New laws, regulations, and court cases can affect both the language of your policies and how you implement them.

Most experts suggest a thorough review of your policies at least once a year and the use of a dedicated notification system/service to keep employees informed of changes. And when revised policies are introduced, you should formally distribute and thoroughly explain them to all employees.

Policy pointers

  • Consider holding (depending on the size of your company) a series of meetings that involves all interested parties.
  • Do not fill policies with “techie” terms. Policies must be written in layman’s terms or the concepts may be lost on the end users.
  • Set out what behavior is reasonable and unreasonable and determine procedures for dealing with specific abuses.
  • Try to keep policies to the point. Long written policies are difficult to read and comprehend, and users may be confused or simply give up on trying to understand them.
  • Agree upon a framework for policy review. Usage and technology may change, so you need to be flexible and adapt the policy when it is required.
  • Decide, define and mandate “what” is to be protected.

Done right…

Well-crafted policies show that an organisation and its management are committed to security and expect employees to take it seriously. Such policies provide an overall security framework for the organisation, ensuring that security efforts are consistent and integrated rather than ad hoc or fragmented. A good, regularly reviewed policy can be both an effective employee relations tool and a helpful defense against lawsuits. In contrast, policies that are poorly drafted or misapplied can decrease efficiencies and create roadblocks for normal business activities. Invest the necessary amount of time and effort to make sure your policies are solidly built and properly implemented.

Prevent identity theft by avoiding these seven common mistakes

Identity theft may be on the rise, but you don’t have to make it easy for thieves — take steps to protect the personally identifiable information (PII) of your employees and clients.

Is your organization part of the solution or part of the problem? PII is pouring through the security floodgates and ending up in the wrong hands at an alarming rate.

To protect your organization’s employees and clients, you need to evaluate how well your company protects its PII. Here are seven common mistakes to avoid.

Keep users in the dark

Users will always be the weakest link in any enterprise network — and all of the gadgets and controls in the world won’t change that. If your users don’t know how to identify and handle PII, it’s only a matter of time before one of them discloses this data to the wrong source.

The solution is simple: Educate your users on your company’s policies and mechanisms to process PII. And don’t forget to include regularly scheduled refresher courses.

Partner with the wrong businesses

You’ve made sure your security is rock solid, and you’ve trained your users. But can your business partners say the same? Do you collect or share information with businesses that have little or no security?

If your company collects and shares PII with insecure partners, who do you think will end up in the paper and explaining to law enforcement about how a breach occurred? Your company will.

The solution is just as simple as the last dilemma: Educate and train your business partners on how to protect this sensitive information. Charge them for your expertise if you want, but get the job done.

Keep data around past its prime

What do you do with data once it’s served its purpose? If you aren’t destroying PII when it’s no longer required, then you’re not doing your job. That doesn’t mean throwing it away either — that means destroying it.

Dumpster divers make a living off of old bank statements and credit card receipts. That’s why you need to wipe out PII when it’s no longer necessary. If your organization doesn’t have a shredder, you need to get one today.

Don’t worry about physical security

It’s imperative that you implement physical access controls to prevent unauthorized people — including employees — from gaining access to PII. Get a door lock and a badge reader, and start controlling access.

Don’t lock up your records

If you don’t have specific storage areas on your network (as well as file cabinets) for PII, then how can you properly protect it? Take inventory of your network — and your paper copies — and develop a plan to protect that data. This would be a good time to research encrypting data-at-rest and locking some file cabinets.

Ignore activity on your network

I’ve said this before in columns, but it’s worth repeating: If you’re not going to actively monitor your network for suspicious activity or incidents, then stop collecting the data. Develop a method that’s within your capabilities and budget to monitor your network for suspicious activity or incidents. And while you’re at it, develop a response and mitigation strategy for security incidents.

Audits? Who needs audits?

A lot of businesses either don’t know what security events to audit or don’t read their security logs — or both. If you’re not sure which events to audit, find out. Set up security auditing, and start reviewing your logs today.

Final thoughts

Identity theft may be on the rise, but you don’t have to make it easy for thieves. You can help prevent identity theft both at home and at the office — you just need to take a few extra steps.

Pop-up windows: Know the difference

There’s been a lot of publicity about pop-up windows, and most of it hasn’t exactly been rave reviews. But it hasn’t always been this way.

In fact, pop-up windows were a positive component in the beginning. Created long before tabbed browsers, their purpose was to present information without interfering with the current browser window.

These days, due to security risks as well as the annoyance factor, a standard feature among browsers is to block or control pop-up behavior. But before you start telling your browser or other privacy programs to block all those pop-ups, you need to understand why they happen and what you should really be doing about them.

Most pop-ups are part of the content from the Web site the user is visiting, containing either requested information or info the site thinks one might like. But other pop-ups are just spam that’s both invasive and malicious in nature.

The latter type of pop-up is actually an alarm, telling you that something’s wrong with your computer and you need to fix it. Let’s divide pop-ups into two general categories: normal and alarms.

Normal pop-ups

Some pop-ups are information you’ve requested — music or video content from a link you just clicked or a download you requested (hopefully from a trusted site). Web-access e-mail programs use pop-ups to create or reply to e-mail, which mimics a traditional e-mail client.

In addition, some pop-ups are targeted advertising marketed specifically to consumers visiting a Web site. If you find yourself getting too many of these advertisements, it’s probably due to the sites you’re visiting.

In general, these types of pop-ups are the kind you want, and if not, you can easily dismiss them with a click on the X. These are also the pop-ups you should be controlling with your browser or privacy program. But the other types of pop-ups are the ones you want to see, because they’re alerting you that something’s wrong with your system.

Alarm pop-ups

You don’t want to block the pop-ups that indicate a problem with your system — these are the ones you want to see and take action on to resolve. For example, if pop-ups are launching through the Windows Messenger Service, you’ve got a potentially serious problem.

To get rid of these pop-ups, you need to turn off the Messenger Service. Follow these steps:

1. Go to Start | Run, type services.msc, and click OK to launch the Services applet.
2. Scroll down to find Messenger.
3. Right-click Messenger, and select Properties.
4. On the General tab, select Disabled from the Startup Type drop-down list, and click OK.

This is a serious security issue. While the Messenger Service pop-up starts with data on UDP 135, this pop-up indicates that the Windows networking ports (i.e., TCP/UDP 135, 137 through 139, and 445) are open to the public. This pop-up is an alarm that you need to block these ports with your firewall.

Another type of alarm pop-up is the browser flood. As soon as your browser opens, you start receiving a swarm of pop-ups. This browser “spam” is telling you that spyware/adware is running on your system. While this is usually why people enable pop-up blockers, that’s comparable to rolling down your window and sticking your head outside so you can see to drive.

What’s the real solution? Clean your Windows! Blocking the alarm doesn’t solve the problem. If your system has experienced this type of behavior, start shopping for a spyware/adware removal tool (maybe several), and clean your system.

Final thoughts

While pop-ups can be a pain, they sometimes indicate a more serious problem. Don’t ignore all pop-ups — investigate the problem and make your system safer.

The Complexity Complex

When you’re designing or writing software, one issue that can often be glossed over is efficiency. It’s easy at the beginning of a project to concentrate on just getting something working so you can demonstrate progress, and to worry about making it fast later on. The unfortunate fact, though, is that optimisation can only take you so far; the true efficiency issues lie in your algorithm design. Most IT professionals have learned the basics at some point in their career, but in case you’re a little rusty, read on and we’ll refresh your memory.

The first thing to consider is what kind of complexity you’re looking to reduce. The two major complexity areas are time — that is, how long an operation will take to complete — and space, or how much memory is needed. When talking complexity, we tend to rate speed in terms of how many steps (or blocks of memory, for space complexity) are taken per input element, rather than in absolutes, since absolute times are so dependent on the specifics of the hardware. Likewise, the length of time an individual step takes is largely disregarded, since for large inputs the total is dominated by the complexity class.

To make comparing two algorithms easier, we group them into classes using a special kind of notation. There are a number of different ways to do this, based upon the best-case, average-case, and worst-case input scenarios. I like to use the worst case most of the time, since that’s where it makes the most difference to how you perceive performance. To express this we use what’s called big O notation, which expresses the number of steps an algorithm will take for an input of size “n” in the worst case. So, take the following example, which simply sums the numbers in a list.
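The listing itself is not reproduced in the text, but a minimal Python sketch consistent with the step count in the analysis that follows might look like this (the names final_sum and n come from that analysis; everything else is an assumption):

```python
def sum(a):
    """Sum the numbers in a list (shadows the built-in sum, purely for illustration)."""
    final_sum = 0          # initialisation step 1
    n = len(a)             # initialisation step 2
    for i in range(n):     # one step to set up the loop
        final_sum += a[i]  # n steps, one per element
    return final_sum       # one step for the return
```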

Treating each line as a single step, we can see that calling sum on a list of size n takes n + 4 steps to complete: two for the initialisation of final_sum and n, one to set up the for loop, one for the return statement, and then n for the loop body.

Now suppose the problem changes: you need to multiply each number by how many times it occurs in the list before adding it to the running total. Take the following implementation:
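Again, the listing is missing from the text; here is a sketch assuming a straightforward nested loop. The name sum_multiples and the step counts come from the surrounding analysis, while the loop structure is guesswork:

```python
def sum_multiples(a):
    final_sum = 0                  # initialisation
    n = len(a)                     # initialisation
    for i in range(n):             # outer loop: n iterations
        count = 0
        for j in range(n):         # inner loop: count occurrences of a[i]
            if a[j] == a[i]:
                count += 1
        final_sum += a[i] * count  # weight each element by its occurrence count
    return final_sum
```

Under the article’s counting scheme, each outer iteration costs roughly 2n + 1 steps, since the inner loop does a comparison and (sometimes) an increment for every element.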

This works like the last function, except that before adding the current value to the running total, it goes through the list and counts the number of occurrences of that value. Calling this function on a list of size n means that 4 + n * (1 + 2n) steps are carried out, since the outer loop now contains 2n + 1 steps. In total, calling this function “costs” 2n² + n + 4 steps. For a list of 10 numbers it takes 214 steps, but for a list of 100 numbers it needs more than 20,000 steps to complete. That’s quite an increase. When we rewrite it another way, however, this changes:
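A sketch of the rewritten version, using a Python dict as the constant-time store described below; the name sum_multiples2 and the statement sum += a[j] * numbers[a[j]] appear in the text, while the dictionary-building details are assumptions:

```python
def sum_multiples2(a):
    numbers = {}                       # occurrence counts, keyed by value
    n = len(a)
    for i in range(n):                 # first pass: about two steps per element
        numbers[a[i]] = numbers.get(a[i], 0) + 1
    sum = 0                            # running total (shadows the built-in, as in the text)
    for j in range(n):                 # second pass: one step per element
        sum += a[j] * numbers[a[j]]
    return sum
```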

In this example we precompute the number of times each value occurs in the list. To do this we use a new data type that can store these values. It’s not particularly important how this is implemented, so long as we can be sure that we can insert and retrieve values in constant time. In languages that support them as standard this could be a hash or a dictionary, or if you’re not that lucky (say you’re using C), you can think of it as an integer array of size max(a). The method simply returns true if this type contains the given value.

Anyhow, you can see how, rather than working out how many times each number occurs as we reach it, we can do it all at the beginning and store it. Let’s look at how this helps: sum_multiples2 takes 3n + 6 steps, that is, the usual initialisation steps, plus two for each input to build the dictionary of number occurrences, and then one for each input to sum them. For 10 inputs this takes 36 steps; for one hundred, 306. That’s more than 65 times faster for the second version when dealing with 100 inputs. If, say, we had one million inputs, it becomes two trillion steps versus three million, and the second version is more than 650,000 times faster.

Now, we’ve been taking a fairly casual view of the number of steps in each algorithm, treating each line as one step, when a statement like “sum += a[j] * numbers[a[j]]” contains multiple lookups and could be compiled into as many as 10 individual instructions at the hardware level. This isn’t really that important, though: even if we assume that every step we’ve counted in the second example really takes 10, and the first program is unchanged, the rewrite still represents more than a 60,000-fold improvement.

Really, what we’re interested in is the order of the algorithm; for convenience, we reduce the step count to its largest term. For example, we say sum_multiples is O(n²) whereas sum_multiples2 is O(n). This is often all you really need to know: for large enough values of n, O(n) algorithms will always beat O(n²) algorithms, regardless of the details.
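The arithmetic above is easy to check. A few lines of Python, using the step formulas 2n² + n + 4 and 3n + 6 derived in the text, reproduce the figures:

```python
def quadratic_steps(n):
    # Worst-case step count for the naive nested-loop version
    return 2 * n * n + n + 4

def linear_steps(n):
    # Worst-case step count for the precomputed-dictionary version
    return 3 * n + 6

for n in (10, 100, 1_000_000):
    ratio = quadratic_steps(n) // linear_steps(n)
    print(f"n={n}: {quadratic_steps(n)} vs {linear_steps(n)} steps (ratio {ratio})")
```

Running this confirms 214 versus 36 steps at n = 10, and a speed-up factor of well over 650,000 at a million inputs.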

Understanding the pros and cons of the Waterfall Model of software development

Waterfall development is a software development model involving a phased progression of activities, marked by feedback loops, leading to the release of a software product. This article provides a quick and dirty introduction to the model, explaining what it is, how it’s supposed to work, describing the six phases, and why the model can fail.

Say the words “waterfall development” to most people and chances are they’re going to be thinking of a bunch of condos under Niagara Falls. Imagine their surprise, then, when you tell them that waterfall development is actually a software development model which involves a phased progression of activities leading to the release of a software product. This article provides a quick and dirty introduction to the model, explaining what it is, how it’s supposed to work, and why it can fail.


Waterfall development isn’t new — it’s been around since 1970 — but most developers still only have a vague idea of what it means. Essentially, it’s a framework for software development in which development proceeds sequentially through a series of phases, starting with system requirements analysis and leading up to product release and maintenance. Feedback loops exist between each phase, so that as new information is uncovered or problems are discovered, it is possible to “go back” a phase and make appropriate modification. Progress “flows” from one stage to the next, much like the waterfall that gives the model its name.

A number of variants of this model exist, each using slightly different labels for the various stages. In general, however, the model may be considered as having six distinct phases, described below:

1. Requirements analysis: This first step is also the most important, because it involves gathering information about what the customer needs and defining, in the clearest possible terms, the problem that the product is expected to solve. Analysis includes understanding the customer’s business context and constraints, the functions the product must perform, the performance levels it must adhere to, and the external systems it must be compatible with. Techniques used to obtain this understanding include customer interviews, use cases, and “shopping lists” of software features. The results of the analysis are typically captured in a formal requirements specification, which serves as input to the next step.
2. Design: This step consists of “defining the hardware and software architecture, components, modules, interfaces, and data…to satisfy specified requirements” (Wikipedia). It involves defining the hardware and software architecture, specifying performance and security parameters, designing data storage containers and constraints, choosing the IDE and programming language, and indicating strategies to deal with issues such as exception handling, resource management and interface connectivity. This is also the stage at which user interface design is addressed, including issues relating to navigation and accessibility. The output of this stage is one or more design specifications, which are used in the next stage of implementation.
3. Implementation: This step consists of actually constructing the product as per the design specification(s) developed in the previous step. Typically, this step is performed by a development team consisting of programmers, interface designers and other specialists, using tools such as compilers, debuggers, interpreters and media editors. The output of this step is one or more product components, built according to a pre-defined coding standard and debugged, tested and integrated to satisfy the system architecture requirements. For projects involving a large team, version control is recommended to track changes to the code tree and revert to previous snapshots in case of problems.
4. Testing: In this stage, both individual components and the integrated whole are methodically verified to ensure that they are error-free and fully meet the requirements outlined in the first step. An independent quality assurance team defines “test cases” to evaluate whether the product fully or partially satisfies the requirements outlined in the first step. Three types of testing typically take place: unit testing of individual code modules; system testing of the integrated product; and acceptance testing, formally conducted by or on behalf of the customer. Defects, if found, are logged and feedback provided to the implementation team to enable correction. This is also the stage at which product documentation, such as a user manual, is prepared, reviewed and published.
5. Installation: This step occurs once the product has been tested and certified as fit for use, and involves preparing the system or product for installation and use at the customer site. Delivery may take place via the Internet or physical media, and the deliverable is typically tagged with a formal revision number to facilitate updates at a later date.
6. Maintenance: This step occurs after installation, and involves making modifications to the system or an individual component to alter attributes or improve performance. These modifications arise either due to change requests initiated by the customer, or defects uncovered during live use of the system. Typically, every change made to the product during the maintenance cycle is recorded and a new product release (called a “maintenance release” and exhibiting an updated revision number) is performed to enable the customer to gain the benefit of the update.


The waterfall model, as described above, offers numerous advantages for software developers. First, the staged development cycle enforces discipline: every phase has a defined start and end point, and progress can be conclusively identified (through the use of milestones) by both vendor and client. The emphasis on requirements and design before writing a single line of code ensures minimal wastage of time and effort and reduces the risk of schedule slippage, or of customer expectations not being met.

Getting the requirements and design out of the way first also improves quality; it’s much easier to catch and correct possible flaws at the design stage than at the testing stage, after all the components have been integrated and tracking down specific errors is more complex. Finally, because the first two phases end in the production of a formal specification, the waterfall model can aid efficient knowledge transfer when team members are dispersed in different locations.


Despite the seemingly obvious advantages, the waterfall model has come in for a fair share of criticism in recent times. The most prominent criticism revolves around the fact that very often, customers don’t really know what they want up-front; rather, what they want emerges out of repeated two-way interactions over the course of the project. In this situation, the waterfall model, with its emphasis on up-front requirements capture and design, is seen as somewhat unrealistic and unsuitable for the vagaries of the real world. Further, given the uncertain nature of customer needs, estimating time and costs with any degree of accuracy (as the model suggests) is often extremely difficult. In general, therefore, the model is recommended for use only in projects which are relatively stable and where customer needs can be clearly identified at an early stage.

Another criticism revolves around the model’s implicit assumption that designs can be feasibly translated into real products; this sometimes runs into roadblocks when developers actually begin implementation. Often, designs that look feasible on paper turn out to be expensive or difficult in practice, requiring a re-design and hence destroying the clear distinctions between phases of the traditional waterfall model. Some criticisms also center on the fact that the waterfall model implies a clear division of labor between, say, “designers”, “programmers” and “testers”; in reality, such a division of labor in most software firms is neither realistic nor efficient.

Customer needs

While the model does have critics, it still remains useful for certain types of projects and can, when properly implemented, produce significant cost and time savings. Whether you should use it or not depends largely on how well you believe you understand your customer’s needs, and how much volatility you expect in those needs as the project progresses. It’s worth noting that for more volatile projects, other frameworks exist for thinking about project management, notably the so-called spiral model…but that’s a story for another day!

Interviewing techniques to gather project requirements

There are many ways to gather requirements. Interviewing — where you talk to the people who will define the new solution to determine what they need — is probably the default technique for most projects.

Interviewing can be an easy process if the person you’re talking to is organised and logical in their thought process. Since that’s not always the case, however, you have to employ good interviewing techniques in order to start and guide the interview.

There are a number of formal interviewing techniques that will make the requirements gathering process go more smoothly. Here they are.

Start off with general questions.

Typically, you don’t start the discussion by asking narrow, targeted questions. You’ll want to start with more high-level, general questions. These questions should be open-ended, in that they require the interviewee to explain or describe something and can’t be answered with a "yes" or a "no". Your goal is to gather as much information as possible with the fewest number of questions.

Ask probing questions.

After you ask general, open-ended questions, you should then start to probe for details. Probing questions are probably the most valuable type of questions, since they tend to result in the detailed requirement statements you’re looking for.

Be persistent.

If you experience any difficulty understanding the interviewee’s point, ask follow-up questions. You can also ask for examples that will illustrate the points the interviewee is making.

Ask direction questions when you need additional information.

Direction questions start to take the discussion in a certain direction. They’re used to provide context and background. For example, "Why is that important to you?" or "Why do you care about that?" are questions that can provide direction.

Ask indirect questions to gain better understanding.

Indirect questions are used to follow up on specific points that were raised previously. These questions are used to gain more clarity so that you can ask better questions next. For example, "Is that important because of …" or "You said everything needs to be secure because …"

Ask questions that validate your understanding.

A good interviewing technique is to restate what you just heard back to the interviewee to validate that you understood him/her correctly. For example, after hearing an answer, you could say "If I understand you correctly …"


Paraphrase what you hear.

This is similar to the prior technique, except that you take a large amount of information from the interviewee and simplify it in your own words. For instance, after hearing the interviewee give a five-minute answer on how a process works today, you could paraphrase the basic points of what he/she said in a quick bulleted list of process steps. For example, "So, in other words, the process you use today is basically 1, 2, 3, 4 …"

Ask for examples.

In many (or most) cases of gathering requirements, the interviewer is not totally familiar with all of the information provided by the interviewee. Therefore, one effective way to gain more understanding is to ask for examples. This can also be utilised when the interviewee provides feedback that does not sound totally valid or totally believable. Asking the interviewee for an example helps lend a concrete and specific instance that may help make the requirements clearer. For example, "Can you give me an example of how that affects you?" can help make a statement more clear.

Keep the discussion back on track.

Sometimes the interviewee starts to talk about things that are outside the scope of the specific information you’re trying to gather. This is sometimes caused by a misunderstanding of the question you asked. Other times it’s caused by a lack of focus or a desire to talk about things that are of more interest to the interviewee. When the discussion gets off track, the interviewer must steer it back. An example of this technique would be "That’s a good point. However, can you describe how that relates to (…restate your original question…)?"

Try to stimulate ideas.

Sometimes the interviewee gives the obvious answers but isn’t thinking about other areas that may not be as obvious. The interviewer should try to get the interviewee to stretch a little and think about things that are not quite as obvious. For instance, you can ask "Can you think of a couple options for this situation?" or "Are there other ways to solve this?"

The 10 most dangerous species of IT team leader

After yet more research into the various species that inhabit this working world of ours, I return with a new set of taxonomic classifications. This time, we concern ourselves with the team leader, in all his or her many guises.

As ever, there are many competent and sociable team leaders in IT departments; but they don’t make for great storytelling. Picking the worst and most dangerous types can help us recognise the signs and maybe even glean a little entertainment from them.

1: Dux Timeris

Fearful Leader

This team leader was persuaded to take the leadership job for two reasons: First, the powers that be decided that there should be a team leader so they could devolve some of their duties to someone who can’t answer back. They picked the least confident team member and tailored the promotion process to ensure that the correct candidate was appointed.

Second, this person was the last to get through the door when the call for volunteers went out. He may have been trampled underfoot in the rush and was slightly concussed when the job offer was made.

Now that this new leader is hooked, he’s too timid to ask to be re-graded and spends a lot of his free time worrying about the job. This is completely unfair, as this type is usually a good person trying to do a good job.

Favourite saying: "Can somebody help me please? Anybody?"

2: Dux Fulvus Nasus

I will leave you to translate the Latin for yourself

This leader does not think for himself but hangs on every word passed down to him from on high. If the boss told him that the sun was inhabited by pixies he would send Christmas cards to them.

He cannot believe what he is hearing when somebody on the team disagrees with a management decision; more worryingly, every critical word uttered within his earshot is reported back directly. Once aware of this, a team can make good use of it for propaganda purposes.

Consider the day of the Christmas party, for instance, when our help desk team was told that it was not permitted to attend. We weren’t expecting any calls, as virtually the whole company was at the bash. We decided, within the hearing of the Dux FN, that we would wait for the party to start and then go home. An hour later, our invite to the party had arrived, with an instruction to switch the phones over to voicemail.

Favourite saying: "I was talking to the boss this morning. Wonderful man!"

3: Dux Magnifica

The Paragon of all the Virtues

This person is under the misapprehension that he has arrived, as if entitled to proclaim, like Ozymandias, "Look on my works, ye Mighty, and despair!"

Yes, this jerk is full of himself. You would think he had just been appointed World President instead of a glorified tea boy. He took to wearing a pin-striped suit on appointment to the position.

He refers to the senior directors of the company as his colleagues or "fellow members of the management team." He is not a tattle-tale. The views of those under him are far too inconsequential to be listened to, much less acted upon.

Favourite saying: "Follow me lads; I know what I am doing!"

4: Dux Trogloditica

Cave Man

This leader is a technical expert who lives, eats, and breathes computers. He leaves the office at the end of the day and goes home to a techno cave, where he spends his off-duty hours making his stock of computers, one that NASA would be proud to own, do things that they were never designed to do.

When he is not thus engaged, he attends conventions dressed as his favourite character from a variety of science fiction films. D Trog is the first person to talk to about a technical problem and the last person to ask about any leadership or personal hygiene issue.

Favourite saying: "I think you’ll find that James T. Kirk never said, ‘Beam me up Scotty.’ The nearest he got was, ‘Scotty beam me up!’"

5: Dux Dictatorialis

The Dictator

You can say what you like about Dux Dictatorialis, but under him all the calls were logged on time. He (and it usually is a he) is an obnoxious person who can’t understand that people have a life outside of work and wants the world to know that HE is in charge. People who disagree with him usually disappear and are never seen again, although a trip to the media library or any other dark and dusty storage facility may give a clue as to their fate.

The worst thing about any dictatorship is that the weaker members of the team find themselves siding with the bully and become bullies themselves. Fortunately, this species is becoming rare in the wild, as there are many predators and few allies.

Favourite saying: "Come on, get with the program!"

6: Dux Nihilistica

Leader of Nothing and Nobody

D Nihilistica is an unhappy and lonely leader. He was made team leader, but the snag is that his team consists of just one person: himself. He has been doing the same job for a number of years and, generally speaking, he does it pretty well. A year ago, he was surfing a recruitment Web site and was spotted by his boss. They didn’t want to lose him, as they would have great trouble replacing him, especially at the paltry salary they currently offer.

Luckily, they persuaded him to stay by awarding him an upgrade in his status, a move that cost nothing.

Favourite saying: He doesn’t have one; there’s nobody to talk to.

7: Dux Amicus Bonissimus

The Best Mate

The Best Mate wants to please everybody all the time. Nobody ever explained the impossibility of this, so he keeps trying, even though experience should tell him that he’s on a hiding to nothing. These leaders go home at night wondering why everybody hates them. Nobody does; we worry about them. In the futile commotion of trying to be all things to all people, they are in dire peril of going quietly mad.

Promotion is a double-edged sword. When you please the bosses, you will upset the team. Stick up for the team, and the bosses will blame you for not communicating their message properly.

Favourite saying: "Why does everybody hate me?"

8: Dux Reluctantis

The Reluctant Leader

The team had functioned well for a number of years, but there was a review and the question was asked, "Who is the team leader?" The answer was not what the big boss wanted to hear.

"You must have a team leader on every team." End of discussion.

People were invited to apply for the post, but nobody was keen. It was clear that the team dynamic was at risk and nobody wanted to rock the boat. Eventually, a person was picked, interviewed, and appointed. Even when the inevitable interview question was asked, "Why do you want this job?", the answer, "I don’t really want it," was not enough to put the panel off.

Sometimes, a D. Reluctantis is appointed because HR feels he needs a challenge to reveal his full potential.

Favourite saying: "If that’s okay with you…"

9: Dux Minoris

The Lesser Leader

This team leader is perfectly illustrated by Simon Travaglia’s Pimply Faced Youth (PFY) in his celebrated BOFH series of comic IT spoofs.

He is keen but has been led astray by a scheming and manipulative section manager. He is drawn into the various scams and schemes to do down the bean counters and is not above using the argon-based fire-suppression systems to discreetly dispose of those who stand in the way.

In reality, this character is easily diverted from his true path and finds himself in a tight corner when the schemes inevitably go wrong. He is great fun to work for, but you should always make sure that you take the key to the server room door with you if you enter alone.

Favourite saying: "Illegitimi non carborundum…" and he has the T-shirt to prove it.

10: Dux Severus

The Serious Team Leader

When some members of the team get promoted it goes to their heads.

Gone is the sociable, easy-going friend you worked with and out comes the martinet. The person who used to take 30-minute bathroom breaks suddenly starts to time your breaks and make scathing comments when you take more than four minutes. Having an upset stomach is no excuse because he knows that the loo break is often used as an unofficial break and an opportunity to catch up on office gossip with help desk colleagues.

Favourite saying: "We run a tight ship here."

This typifies the arrogance of the breed. Adopting the "Royal We" is always a sign that things are going to the bad.

Four tips for using metrics on your project

Identifying, gathering, and leveraging the right mix of metrics is a way to gain better control of your large project. The value can be quantified in a number of areas, including:

1. Improved performance of the overall project fulfillment and delivery process
2. Improved estimating for future projects
3. Validation of duration, cost, effort and quality objectives for the project
4. Identification and communication of best practices
5. Improved client satisfaction

In general, metrics provide a more factual and quantitative basis for understanding how you are doing and the things that can be done better. Without at least some basic metric information, all discussions on performance and improvement are based on anecdotal evidence, perceptions, and guesses. Collect metrics, even if they are imperfect and imprecise. They still provide a better foundation than recollections, perceptions and guesses.

Collect the metrics you’ll use

You don’t want to collect metrics just for the sake of collecting them. Of course, you need to collect any metrics that are mandatory for your organisation, and you should also collect any others that your particular project needs. However, if you don’t have a purpose for a metric, or if your project isn’t long enough to really leverage the information, such customised, project-specific metrics are not worth collecting.

Make sure the value of collecting a metric is worth the cost

Just as there is some cost associated with most project management activities, there’s a cost to collecting and managing a metrics process. Some metrics are interesting, but don’t provide the type of information that can be leveraged for improvement. Some metrics are just prohibitively expensive to collect. The cost to collect each metric must be balanced against the potential benefit that will be gained. Start by gathering metrics that your organisation requires. Then add metrics that have the lowest cost and effort to collect and can provide the highest potential benefit.

Make sure your metrics tell a complete story

The problem with many metrics is that they’re reported in a way that doesn’t tell a complete story. The project manager and project team may know what a given metric is telling them, but other people accessing the information may not.

One way to help is to always report the metric along with the target. For instance, if you report your current expenditures to date, also include your expected expenditures at this point in the project. If you report that your project has spent $100,000 so far and your total budget is $150,000, the reader still doesn’t have the context to know whether this is a good or bad situation. Sure, you’re under budget, but the work is not complete either. The better way to report this information is to state that you have spent $100,000 to date and that, according to your estimate, you should have spent $110,000 at this point in the project. If the trend continues, you estimate the final cost of the project to be $135,000 versus your budget of $150,000. If you report the metrics with this context, your readers can understand what the numbers are saying.
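The projection in the example above can be sketched as a small calculation. This is only an illustration, not anything from the article itself: the function name is mine, and it assumes the simplest extrapolation, namely that the current ratio of actual to planned spending holds for the rest of the project. That lands close to the article's $135,000 figure; the exact number depends on how you extrapolate the trend.

```python
def project_final_cost(spent, planned_to_date, total_budget):
    """Project the final cost, assuming the current ratio of
    actual to planned spending holds for the remaining work."""
    performance_ratio = spent / planned_to_date  # < 1.0 means under plan
    return total_budget * performance_ratio

# Figures from the example: $100,000 spent against $110,000 planned
# to date, with a $150,000 total budget.
final = project_final_cost(100_000, 110_000, 150_000)
print(f"Projected final cost: ${final:,.0f}")  # about $136,000 with this method
```

Reporting this projected final cost next to the $150,000 budget gives readers the context that a raw "spent $100,000" figure lacks.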

Train your team in the purpose and value of metrics

The general definition of "metrics" is not always obvious. The project manager may be trying to create a metrics program for a large project, while the project team doesn’t make the connection between gathering metrics and the business value received. This disconnect may affect the client as well. The project manager should take the time to explain why metrics are needed and how the information collected will help drive improvements. Likewise, the team should understand how to look for metrics that will provide an indication as to the state of a process or a deliverable. Educating the team and the client will help the project manager obtain better metrics with less work effort and less pushback.