Achieving User Satisfaction in
Content Management Systems

Nick Day
MRes Human-Computer Interaction
Lancaster University

14 September 2007



A content management system in its simplest form allows a website owner to maintain their own website through a web browser interface. Much research has been conducted into producing usable content management systems; however, no research has investigated how to achieve user satisfaction in such systems. This is particularly important given the daily use such systems can receive by the same users, as well as the pressures imposed on such systems used in a commercial environment. This thesis seeks to redress the balance in content management system design. Research is conducted to determine which factors can affect user satisfaction in content management systems. These factors are then investigated and validated by means of redesigning and evaluating THE SYSTEM content management system. The results produce an importance ranking of these factors, reveal some particularly interesting correlations between the affective and functional notions of user satisfaction, and recommend areas for further study.


Company and system names have been anonymised to protect the interests of the parties involved in this project. The company offering a CMS as a service to their clients for website maintenance is referred to as THE PROVIDER. The company utilising this CMS within their organisation is referred to as THE CLIENT. The content management system itself is referred to as THE SYSTEM. Screenshots have been altered as necessary.


This dissertation is protected under the Creative Commons Attribution-No Derivative Works 2.0 UK: England & Wales license and remains the copyright of Nick Day. For full details visit


1 Introduction
 1.1 Motivation
 1.2 Aims, Collaboration & Intended Outcomes
 1.3 Structure
2 Background
 2.1 Usability & User Satisfaction
  2.1.1 Creating “Value”
  2.1.2 Summary
 2.2 Content Management Systems
  2.2.1 Business Motivation & Constraints
  2.2.2 Usability Principles
 2.3 Determinants of User Satisfaction in IT Systems
  2.3.1 Content
  2.3.2 Accuracy
  2.3.3 Format
  2.3.4 Ease of Use
  2.3.5 Timeliness
  2.3.6 Summary
 2.4 Possible Determinants of User Satisfaction in CMSs
  2.4.1 Funology
  2.4.2 Motivation & Job Satisfaction
  2.4.3 Aesthetics
  2.4.4 Learnability & Documentation
  2.4.5 Summary
 2.5 Contemporary Trends
  2.5.1 Priorities & Feature Selection
  2.5.2 Interface Design & Simplicity
  2.5.3 Web 2.0 & AJAX
 2.6 Summary
3 Case Study Development
 3.1 Current System Analysis
 3.2 Design
  3.2.1 Methodology
  3.2.2 Strategy
  3.2.3 Scope
  3.2.4 Summary
 3.3 Prototyping
 3.4 Implementation
  3.4.1 Constraints
  3.4.2 Screenshots
  3.4.3 Screencast
4 Evaluation
 4.1 Protocol
  4.1.1 Training
  4.1.2 First Impressions
  4.1.3 Verbal Protocol Analysis
  4.1.4 Satisfaction Questionnaire
 4.2 Results & Analysis
  4.2.1 Pre-Analysis Tasks
  4.2.2 Importance Analysis
  4.2.3 Outcome Analysis
 4.3 Issues & Evaluation Development
 4.4 Functional Evaluation
 4.5 Summary
5 Conclusion
 5.1 What factors determine user satisfaction in content management systems?
 5.2 How important is the overall notion of user satisfaction in content management systems?
 5.3 Are there any issues or constraints in achieving content management system user satisfaction in a commercial environment?
 5.4 Summary
A Content Management System Usability Principles
 A.1 Minimise the number of options
 A.2 Be robust and error-proof
 A.3 Provide task-based interfaces
 A.4 Hide implementation details
 A.5 Meet core usability guidelines
 A.6 Match authors’ mental models
 A.7 Support both frequent and infrequent users
 A.8 Provide efficient user interfaces
 A.9 Provide help and instructions
 A.10 Minimise training required
 A.11 Support self-sufficiency
B Functional Specification
 B.1 THE PROVIDER’s Requirements
 B.2 THE CLIENT’s Requirements
C Prototyping
 C.1 First Iteration: Paper-Based Prototypes
  C.1.1 Key Design Decisions
  C.1.2 Evaluation
 C.2 Second Iteration: Wireframes
  C.2.1 Key Design Decisions
  C.2.2 Evaluation
 C.3 Third Iteration: Revised Wireframes
  C.3.1 Key Design Decisions
  C.3.2 Evaluation
D Verbal Protocol Transcripts
 D.1 Participant One
 D.2 Participant Two
 D.3 Participant Three
 D.4 Participant Four
 D.5 Participant Five
 D.6 Participant Six
 D.7 Participant Seven
E Questionnaire Responses

Chapter 1

Introduction

A content management system (CMS) in its simplest form allows a website owner to maintain their own website through a web browser interface. Typical features include the ability to create new pages, delete or edit existing pages and publish changes to a web server.

Usability applies to all aspects of a system with which a user might interact (Nielsen, 1993) and is broadly defined under the ISO 9241-11 standard as consisting of three distinct aspects: effectiveness, efficiency and satisfaction (Userfocus, 2007). That is, the user can accomplish the tasks they require as quickly, accurately and completely as possible, and the interaction experience of doing so is as pleasurable as possible, with minimal user frustration.

There has been considerable research into the usability of CMSs (Byrne, 2005a,b; Kowalski, 2002a,b; Robertson, 2003, 2007), but this has focussed predominantly on the development of usability guidelines and heuristics. As a result, this research centres largely on the more functionality-related effectiveness and efficiency aspects of usability. No research has been carried out into the importance or methodology of achieving user satisfaction in CMSs. This study is the first to date to apply measures of user satisfaction specifically to CMSs.

Although research has been conducted into user satisfaction in traditional IT systems, many of the significant studies were carried out at least ten to fifteen years ago (Doll & Torkzadeh, 1988; Hiltz & Johnson, 1990; Etezadi-Amoli & Farhoomand, 1991), with little subsequent “realignment” with the systems of today. This was a time when computer usage was much less commonplace and computers had not attained the “everyday object” status they hold today. It is not clear a) how relevant these user satisfaction measures are to users of today; and b) how relevant these measures are in promoting user satisfaction in CMS design. A contemporary investigation of these measures is required, specifically targeted at influencing future CMS development.

1.1 Motivation

The management of web content using a CMS is often something on which a business depends (Bisson, 2003). Many large businesses have dedicated teams of content authors—usually within the marketing department—whose sole job it is to maintain the organisation’s website. However, the inability or unwillingness of content authors to use a CMS for user satisfaction reasons is likely to have a detrimental effect both inside and outside a business. Indeed, “the success of a site often depends on content authors being able (and willing) to use the CMS” (Robertson, 2003, 2007).

Internally, a system which is frustrating or tedious to operate is likely to affect many aspects of a content author’s daily working life—particularly job satisfaction, productivity levels, and general morale and attitude. The effects of these aspects are also likely to worsen over time, with content authors becoming increasingly frustrated by the same issues day in, day out.

As a consequence of these internal issues, the external-facing website is likely to suffer from a range of problems. In particular, content is likely to suffer from a lack of updates, become outdated quickly or not be updated at all—thereby defeating the purpose of actually having a CMS. Furthermore, the site (as seen by its audience) will become “stale” and other businesses will have the opportunity to gain a competitive advantage.

The sheer lack of literature and research into user satisfaction pertaining to CMSs is therefore particularly worrying and suggests a major imbalance in design priorities. This thesis is motivated by the desire to redress the balance in CMS design, to promote user satisfaction and therefore better support sustained usage.

1.2 Aims, Collaboration & Intended Outcomes

The aims of this thesis are to a) show that user satisfaction is an important yet often overlooked factor in CMS design; and b) investigate how user satisfaction can be achieved in CMSs. This thesis will therefore focus on addressing the following three objectives:

A case study will be carried out in collaboration with THE PROVIDER and THE CLIENT as a means of validating these objectives.

THE PROVIDER [...] have developed their own in-house CMS, THE SYSTEM, which is deployed on clients’ websites to allow them to maintain their sites themselves.

THE CLIENT delivers data, editorial, pictures and sounds in real-time to clients worldwide. They are one of THE PROVIDER’s largest clients and use THE SYSTEM CMS to maintain their website.

The secondary motivation for this thesis comes from THE PROVIDER who are looking to develop a “super-usable” interface for THE SYSTEM, with the aim of increasing their clients’ satisfaction and reducing technical support demands. Empirical evidence will be gathered from evaluation sessions with THE CLIENT to determine the factors of user satisfaction found to be particularly relevant in CMS design.

The intended outcome of this thesis is a clear definition of measures that have proven (by means of the case study) to elicit user satisfaction in CMSs. It is hoped that these measures will influence future CMS design.

1.3 Structure

Chapter 2 expands further on the notion of user satisfaction and investigates relevant literature. Contemporary trends and design methodologies are also discussed.

Chapter 3 documents the development of a case study which will be used to investigate the objectives of this thesis. The current version of THE SYSTEM is also analysed.

Chapter 4 describes the evaluation carried out, analyses the results and raises issues and points for future development.

Finally, chapter 5 revisits the objectives of this thesis and discusses them in light of the evaluation findings.

Chapter 2

Background

2.1 Usability & User Satisfaction

As outlined in the introduction, usability consists of three aspects: effectiveness, efficiency and satisfaction (Userfocus, 2007). However, all research carried out so far into CMS usability has accounted for only one or two of these aspects: effectiveness, efficiency, or both. These aspects focus on the functional criteria and quantitative metrics that a system must meet—that is, the functions provided by the system and a quantitative measure of how efficient these functions are to carry out. Studies such as these make “risky assumptions about correlations between usability aspects” (Frøkjær et al., 2000) and assume that by meeting the effectiveness and efficiency criteria, satisfaction will follow. However, this is not the case—Frøkjær et al. (2000) found that the three aspects of usability are in fact only weakly correlated.

Therefore, the three aspects of usability must be considered individually—one cannot draw conclusions about one aspect based on the measures of the other two (Frøkjær et al., 2000). For example, Frøkjær et al. (2000) found that some users preferred using a less efficient system—even though it had the same functionality as other systems—because it was subjectively more satisfying to use.

There is therefore much motivation for looking wider than pure usability, to the subjective sum of an interaction—including user experience, satisfaction and subconscious affective reactions relating to moods, feelings or attitudes (Zajonc, 1980)—to ensure that a system is not only considered usable but also satisfying to use. As a result, the user values their interaction experience.

2.1.1 Creating “Value”

The importance of looking wider than usability is backed by Cockton (2004b), who argues that modern HCI is currently facing its final challenge: the quest for value (Cockton, 2004a). HCI must become value-centred to support the design of useworthy systems (Cockton, 2005a). It is no longer sufficient just to focus on usability—true notions of satisfaction are derived from the value that users can derive from their interaction experience. Usability cannot reside in generic definitions; it is always relative to the specific context in which a system is operated (Cockton, 2004a).

Cockton therefore argues that the use of usability guidelines is very much based on “luck and magic” (Cockton, 2004b). It is essentially “luck” as to whether a usability guideline is still applicable once the context and purpose of a system are taken into account. Too many developers are ready to believe in “guideline magic”, based on statistical evidence from when usability guidelines do strike it lucky—“they work when the right users interact with a system in the right way” (Cockton, 2004b). However, usability guidelines do not always work, and a usability problem in one usage context may not be one in another (Cockton, 2006).

Cockton’s concern about usability guidelines is echoed by Bevan & Macleod (1994). In addition, there is often no guarantee that a set of guidelines is exhaustive for all aspects of a system—particularly if a system has been tailored to its users through highly user-centred design methods. There is also often conflict between guidelines—“changing an interface feature to be compatible with one guideline often makes it incompatible with another” (Bevan & Macleod, 1994)—and it is then difficult to reach a compromise between the benefits of each guideline. Furthermore, Bevan & Macleod (1994) argue that the effectiveness with which guidelines are applied depends very much on the skill of designers in interpreting and applying them.

However, one should be careful not to disregard usability guidelines completely—they do still offer a valuable source of information based on empirical psychological evidence and validated research. It is important that HCI practitioners do not base key design decisions solely on their presence—a mutual balance should be developed between guidelines and the psychology of creating value-based systems (Cockton, 2004b).

The two defining principles of value—quality in use and fit to context—are qualities of user satisfaction experienced during interaction. There are, however, outcomes and lasting impacts that endure beyond interaction (Cockton, 2006). This suggests that while the determinants of user satisfaction can occur during an interaction—through concepts such as ease of use and system responsiveness—there is much scope for influencing the long-lasting effect of user satisfaction beyond the “moment” of experience. In an environment where daily CMS usage is required, this is thought to be particularly important (Section 2.4).

2.1.2 Summary

The notion of user satisfaction is particularly evasive (Lindgaard & Dudek, 2003). There is no “magic pill” to achieve user satisfaction—it comes from the subjective sum of the entire interaction experience—and centres on the inherently subjective and emotive issue of delivering value in design (Cockton, 2004b). It is therefore especially important that designers attempting to maximise user satisfaction have a deep understanding of their users and the context in which the system will be used. To this end, this chapter focusses strongly on how CMSs are used in the workplace and the motivations for and importance of achieving user satisfaction in their usage.

2.2 Content Management Systems

Many businesses recognise the need to manage and publish growing amounts of information on their websites and generally invest in a CMS to assist them in doing so. The large majority of CMSs are highly complex (Byrne, 2005b) and difficult to operate (Byrne, 2005a) commercial products. They are often “obtuse and complex, packed with many gratuitous features at the expense of usability and user experience” and written “by geeks, for geeks” (Veen, 2004). Examples of CMSs with particularly unwieldy, complex and intimidating interfaces are TYPO3 (Figure 2.1(a)) and Vignette (Figure 2.1(b)).

(a) TYPO3
(b) Vignette
Figure 2.1: Complex CMS interfaces

Such commercial “off the shelf” products are often developed for mass-market appeal and designed for an ideal, general world—a “one size fits all” solution to suit all users—and are not tailored to the users who will actually be using them on a regular basis. As one CMS researcher puts it: “It’s like trying to sell everyone an average-sized shoe” (Byrne, 2005a).

In the same way that the majority of people will never find satisfaction in an “average-sized shoe”, CMS users will never reach real levels of user satisfaction if they are forced to use a “one size fits all” system. “Not every business runs [in a standardised way] like a factory floor” (Byrne, 2005a), and it is therefore common sense that any “one size fits all” CMS will experience friction when trying to “fit in” with the goals, environment and culture of the majority of businesses.

There is therefore considerable motivation behind employing tailored design methods when designing a CMS, to ensure that the resultant system fits in with the needs, requirements, business processes and workflow relevant to the users who will be using it on a daily basis. It is accepted that a good system design is one that “fits” its context of use, thereby helping to achieve value in the system (Cockton, 2005b).

2.2.1 Business Motivation & Constraints

There is considerable motivation for increasing user satisfaction in a CMS, primarily to foster a good working relationship between the provider and the client. CMS providers are often offering their clients a CMS as a service, and the relationship between the two parties is paramount. If a CMS is technically successful and is delivered to the client on time and on budget then it can be considered a success—however, it can still be a failure if users are unhappy with the result (Zviran et al., 2006).

A CMS very much affects the “bottom line” of a business, and in heavily web-oriented businesses can mean the difference between success or failure. This is illustrated by Angeles (2006) with the following five-step chain of effects:

In the end, it’s about money (Angeles, 2006)—user experience and satisfaction in a CMS can lead to distinguishable business results and increased sales.

CMS providers are often working on a tight budget assigned by their client—a set cost of implementing a CMS. Clients often do not appreciate the benefits that CMS user satisfaction can bring, and generally “make do with what they have” (Angeles, 2006), unaware that relatively simple changes to achieve user satisfaction could have positive effects on, for example, motivation and productivity.

As clients often do not appreciate these benefits they are unlikely to assign specific portions of their budget to the quest of achieving user satisfaction, when they feel it could be better spent on system functionality.

Providers can try to “evangelise simplicity in the workplace”—possibly assisted by success stories such as 37signals (Section 2.5)—however, it is thought that if user satisfaction is taken into account as part of the traditional development process, then providers have more chance of delivering user satisfaction as standard with the system. There is therefore much incentive for providers to be aware of factors which have been proven to elicit user satisfaction in CMSs, and to strive to meet these factors during functional development work, with user experience and satisfaction as a “keystone” in their development strategy (Angeles, 2006).

2.2.2 Usability Principles

The usability of a CMS is now considered equally as important as its functionality: if content authors cannot easily use a CMS, it will never be a success—regardless of how powerful it may be (Robertson, 2007). Based on research in the field, Robertson (2007) has developed eleven usability principles (Appendix A) which will be used to inform case study design decisions. However, these decisions will not be made solely on the suggestions of Robertson’s usability principles.

2.3 Determinants of User Satisfaction in IT Systems

The first major study of end-user computing satisfaction was conducted by Doll & Torkzadeh (1988). Based on usability research at the time, the authors generated a 40-item questionnaire to measure end-user perception on a variety of measures. It is important to note that this study was conducted before substantial usability research, such as that conducted by Nielsen (1993), was commonplace. The questionnaire was administered to 618 end users from a variety of industries and positions, and participants were asked to complete the questionnaire for an application of their choosing. The results identified five determinants of user satisfaction: content, accuracy, format, ease of use and timeliness. The authors developed a 12-item model (Figure 2.2) to provide a common framework for the analysis of user satisfaction.


Figure 2.2: A model for measuring end-user computing satisfaction (Doll & Torkzadeh, 1988)

For clarification, these criteria have been defined by Zviran et al. (2006) as follows:

Content: User trust in site-provided information
Accuracy: Precision of site-provided information
Format: Clarity of information presentation
Ease of Use: Subjective impression of user
Timeliness: Temporal relevance of information

The model proposed by Doll & Torkzadeh (1988) has since been widely accepted as a standardised instrument for measuring user satisfaction (Doll et al., 1994; Doll & Torkzadeh, 1991)—particularly as it was the first model to incorporate the ease of use factor (Xiao & Dasgupta, 2002). In a later study, Doll et al. (1994) carried out further confirmatory factor analysis to estimate the reliability of the individual factors and of the overall instrument. The results further supported their initial study, suggesting that the factors of content, accuracy, format, ease of use and timeliness can be used with confidence as a measure of user satisfaction. “The large number of organisations and variety of applications surveyed [also] support the generality of the findings to any computer-mediated system” (Doll et al., 1994).
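Scoring an instrument of this kind is mechanically simple: each factor's score is the mean of its Likert-scale items. The sketch below (in Python) illustrates this for the five factors named above; note that the item labels, the grouping of the 12 items into factors and the example responses are hypothetical, as the actual item wording belongs to Doll & Torkzadeh's published instrument.

```python
# Illustrative scoring of a Doll & Torkzadeh-style satisfaction instrument.
# The five factors come from the literature; the item-to-factor grouping
# and the 1-5 Likert responses below are hypothetical examples only.

FACTOR_ITEMS = {
    "content":     ["C1", "C2", "C3", "C4"],
    "accuracy":    ["A1", "A2"],
    "format":      ["F1", "F2"],
    "ease_of_use": ["E1", "E2"],
    "timeliness":  ["T1", "T2"],
}  # 12 items in total, mirroring the 12-item model

def factor_scores(responses):
    """Average the 1-5 Likert responses within each factor."""
    return {
        factor: sum(responses[item] for item in items) / len(items)
        for factor, items in FACTOR_ITEMS.items()
    }

# One participant's (hypothetical) responses:
responses = {"C1": 4, "C2": 5, "C3": 3, "C4": 4,
             "A1": 5, "A2": 4,
             "F1": 2, "F2": 3,
             "E1": 4, "E2": 4,
             "T1": 3, "T2": 2}

print(factor_scores(responses))
```

A low mean on one factor (here, format) then flags where satisfaction work should be targeted, while the overall instrument score is simply the aggregate across all items.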

Since the development and validation of the user satisfaction model by Doll & Torkzadeh (1988), there has been little research on the determinants of user satisfaction in web-based systems. There have also been significant changes in IT and exponential growth of the Internet since their original study—it is therefore unwise to apply their model to contemporary studies of web-based systems and assume it is still reliable and relevant. The only other study of user satisfaction in web-based systems was conducted by Xiao & Dasgupta (2002), who found that all aspects of the model other than question C4 (Figure 2.2) are still applicable. This study was by no means extensive and did not look into any other measures which may exist—its aim was simply to validate the pre-existing model.

There has been little further research on “realigning” these five factors of user satisfaction with modern day web-based systems. A preliminary study (Zviran et al., 2006) has attempted to expand the user satisfaction instrument (Doll & Torkzadeh, 1988) by incorporating web usability principles: personalisation, structure, navigation, layout, search and performance—however, there has been no work to validate the effectiveness of these factors in achieving user satisfaction, and it is not clear how—if at all—these will be relevant for CMS design.

Achieving user satisfaction in a CMS is therefore a rather unknown area of research. There is a lack of extensive studies refining Doll & Torkzadeh’s five-factor model in relation to web-based systems. Furthermore, given that CMSs are a somewhat specialised area of web-based systems, it is necessary to investigate each of these five factors and only apply them to the case study development if they are actually relevant to CMSs. Each of the five factors is discussed below with respect to the definitions derived by Zviran et al. (2006). It is important to clarify at this stage that the following discussion is based on Doll & Torkzadeh’s validated user satisfaction model and the definitions validated by Zviran et al. (2006).

2.3.1 Content

The information provided in a CMS will come directly from the site that the CMS is managing. There is no scope for this information changing between the end-user site and the CMS interface. The introduction of trust issues is therefore highly unlikely and this factor is thus thought to be irrelevant for CMS design.

2.3.2 Accuracy

The definition validated by Zviran et al. (2006) centres around the precision of information provided on the site. Similar to the content factor above, there is no scope for the precision and accuracy of information changing between the end-user site and the CMS interface. Therefore, this factor is also thought to be irrelevant for CMS design.

2.3.3 Format

Format is thought to be relevant for CMS design. Users should be able to use the system, browse the site, and find and edit content easily, assisted by the clear, logical formatting of information. This itself draws many parallels with ease of use. Furthermore, the definition derived by Zviran et al. (2006) centres on clarity—I believe this implies that language should also be considered. Users should be able to understand clearly and consistently the information and terminology used in a CMS. This is also one of the usability principles suggested by Robertson (2007) (Appendix A). I therefore propose the redefinition of format as format and language.

2.3.4 Ease of Use

Ease of use is also thought to be especially relevant in CMSs, as corroborated by the literature of the field: “If staff, particularly [content] authors, cannot easily make use of the CMS, then the system will never be a success, regardless of how powerful it may be” (Robertson, 2007). Furthermore, ease of use can help to encourage sustained usage in a workplace environment where a system’s usage is mandatory—“if users are confronted with an easy-to-use technology, they will be more likely to continue using it” (Angeles, 2006). Taking considered advice (Section 2.1.1) from CMS usability principles (Appendix A) and web usability guidelines (Nielsen & Loranger, 2006; Krug, 2005) will help to satisfy this factor.

2.3.5 Timeliness

A study by Abdinnour-Helm et al. (2005) showed that timeliness is not particularly relevant when positioned in a web-based context. Furthermore, the temporal relevance of a piece of information is thought to be beyond the scope of this thesis and therefore not applicable to a CMS. However, timeliness is still relevant in the context of the speed at which the user can operate the CMS. Working under the demands of updating content to strict deadlines, users will wish to accomplish tasks using the CMS as efficiently and as quickly as possible. Therefore, I propose the redefinition of timeliness into two components: efficiency* (the number of steps taken to complete a task) and speed (the length of time taken to complete a task).

* An important distinction needs to be made between efficiency as one of the three general aspects of usability and efficiency as defined here. Efficiency as defined here refers to a short sequence of steps to complete a task, with minimal mouse clicks and with the interface of the system not “getting in the way”. Speed (a component of the efficiency aspect of usability) is now considered separately as its own user satisfaction factor.
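The distinction between the two components can be made concrete with a small sketch. Assuming a hypothetical log of (action, timestamp) pairs recorded while a content author completes a single CMS task, efficiency and speed would be measured independently as follows:

```python
# Hypothetical task log: (action, timestamp-in-seconds) pairs recorded while
# a content author edits and publishes a page. The action names are invented
# for illustration only.
task_log = [
    ("open_page_list", 0.0),
    ("select_page",    4.2),
    ("edit_content",   9.8),
    ("save",          41.5),
    ("publish",       44.0),
]

def efficiency(log):
    """Efficiency, as redefined here: the number of steps taken."""
    return len(log)

def speed(log):
    """Speed, as redefined here: elapsed time from first to last step."""
    return log[-1][1] - log[0][1]

print(efficiency(task_log))  # number of steps
print(speed(task_log))       # elapsed seconds
```

The point of separating the two measures is that a redesign can improve one without the other: removing a confirmation dialog reduces the step count, while a faster-loading page list reduces elapsed time even if the steps stay the same.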

2.3.6 Summary

This section has considered the five identified and verified factors of user satisfaction generalised for IT and web-based systems. Format (redefined as format and language), ease of use and timeliness (redefined as efficiency and speed) were considered relevant enough to CMSs to warrant their further inclusion in this study. There are, however, likely to be further measures of user satisfaction particularly relevant to CMSs; these are investigated in the following section.

2.4 Possible Determinants of User Satisfaction in CMSs

User satisfaction is particularly relevant to CMSs because of the day-to-day use the system will receive by the same people. Allowing the user to “enjoy” using the system will alter their behaviour towards it which, in the case of content authors, may encourage them to produce better content. Indeed, “getting authors to produce the content [a business] needs means changing their motivations about the goals of their content, and the value they are trying to deliver [to the site audience]” (Boiko, 2005). Novel, interesting and enjoyable systems are proven to help improve users’ motivation (Hassenzahl, 2004) and can “donate unexpected value” by delighting users with their capabilities and user experience (Cockton, 2004b).

The following sections discuss research carried out into four concepts, which are felt to be particularly relevant to users of a CMS who have to use the system in the workplace as part of their daily job.

2.4.1 Funology

Funology reflects the move in HCI studies from focussing on pure usability to a wider set of concerns to do with fun, enjoyment, aesthetics and the overall experience of use (Blythe et al., 2004).

“The idea of designing for enjoyment rather than simply designing to reduce frustration is a relatively new one for HCI” (Nielsen, 2004) and extends beyond creating a system that simply allows users to get their tasks accomplished quickly and easily. In a recent study of 2000 people (Van Dusen, 2007), 59% said that their job was a leading source of stress, showing reasonable impetus for reducing frustration by creating more “harmonious” interactions with systems that users will enjoy using. As previously mentioned, this could encourage content authors to produce better content, but it is also likely to make users more tolerant of system errors and annoyances and to increase morale and productivity. Indeed, it is thought that enjoyment predicts efficiency, and people are generally at their happiest when they are at their most productive (Taylor, 1999).

The enjoyability factor of a system stems from how the system resonates with the user, and different users resonate with different things because they have “certain needs, desires and intentions, a social and cultural history and position” (Hummels et al., 2004). There are several “ingredients” that result in a resonant interaction, including usability, human skills, context of use, aesthetics of interaction and engagement. Due to the personal character of resonant interaction, “developers should involve people for whom they are developing right from the start” (Hummels et al., 2004), thereby suggesting that user-centred design approaches are best suited to ensuring resonant interaction and therefore enjoyment. Aesthetics is discussed in depth in Section 2.4.3.

Enjoyment also helps to encourage cognitive absorption—a state of deep involvement and engagement with software—which helps to shape beliefs and perceptions influencing usage behaviour (Agarwal & Karahanna, 2000). CMSs “represent a substantial investment” for most businesses, “however their value is only realised when utilised by their intended users in a manner that contributes to the strategic and operational goals of the business” (Agarwal & Karahanna, 2000). Allowing users to become involved and engaged with a system is therefore very important to ensure the system is used in a constructive manner—that is, that content authors produce good content. Understanding these user reactions and relationships with technology is currently of particular interest in HCI.

In a workplace environment where CMS use is mandatory, the notion of achieving enjoyment is especially important to counteract any effects of frustration with the CMS. Causes of user frustration include software crashes, unclear error messages and confusing, over-complicated interfaces (Preece et al., 2002). Research by Lazar et al. (2006) shows that users lose as much as half of their working time through frustration with such issues. Frustration at work can harm interpersonal relationships and strongly influence user moods—in relation to all aspects of work, not just interaction with the system causing the frustration (Lazar et al., 2006).

The main cause of frustration is the interruption of the user from performing their task. Users have to respond to something “unexpected and unclear that interferes with their task goals” (Lazar et al. 2006). Preventing the user from completing their task goals wastes time—which is often limited for workplace users (Lazar et al. 2006)—and can produce the aforementioned personal issues.

Furthermore, it is not the case that the user can simply “dismiss” the frustration they have encountered—Lazar et al. (2006) suggest that frustration has an ongoing effect in the workplace, particularly in encouraging lower levels of job satisfaction (Murrell & Sprinkle 1993). It is thought that prolonged job dissatisfaction can also lead to an unfavourable attitude towards the employer (Murrell & Sprinkle 1993), which raises obvious issues in terms of continuing employment.

2.4.2 Motivation & Job Satisfaction

There are many organisations that “employ people who view computers as a necessary evil: temporary fixes until something better comes along” (Brown 2005). Research has shown that these types of mandatory users gain satisfaction from the “subjective sum of the interaction experience”, not just the degree to which the system enhances productivity (the effectiveness and efficiency aspects of usability) (Lindgaard & Dudek 2003). It is therefore important that these users feel motivated to use a computer-based system, and that it appeals to them beyond simply being quick and easy to use. In a workplace environment this is likely to have far-reaching effects in the area of job satisfaction. Indeed, “usable software increases employee satisfaction; difficult to use software reduces motivation and may increase staff turnover” (Bevan & Macleod 1994).

The most conclusive study into the link between user satisfaction and job satisfaction was carried out by Ang & Soh (1997). Their study of people who use a computer system as an integral part of their jobs found that user satisfaction with such systems is a “very sound indication” of overall job satisfaction. One of the main determinants of job satisfaction found in the study was task variety. Users performing more administrative tasks—orderly tasks with little variety and much repetition—were found to have lower levels of job satisfaction than those performing more research-related tasks with a higher level of variety. The range of tasks carried out in a CMS is quite limited for content authors and would fall under the bracket of administrative tasks. Given this constraint on achieving job satisfaction—and thereby user satisfaction—it is important that CMSs make these repetitious tasks as pleasant and motivational as possible. Furthermore, Ang & Soh (1997) found a positive relationship between users’ productivity levels and job satisfaction.

As well as this link between user and job satisfaction, user satisfaction also increases a user’s motivation to use that system. Even if the use of a CMS is mandatory, users will be more tolerant and accepting of it if they feel motivated to use it.

A study by F. Davis et al. (1992) reports an impetus for designing systems that are both more useful and more enjoyable, in order to increase their acceptability among potential users and increase motivation to use the system. Furthermore, the perceived usefulness and ease of use of a system help to instill a positive attitude between the user and the system, thereby producing a behavioural intention to use the system with which they associate a positive affect (S. Davis & Wiedenbeck 2001). This forms the basis of the technology acceptance model (Figure 2.3) (F. Davis 1989).


Figure 2.3: Technology acceptance model (F. Davis 1989)

It is also thought that determinants such as the first impression of a system and its aesthetic qualities will assist in motivating user interaction and creating general system satisfaction. These are discussed in the following section.

2.4.3 Aesthetics

The aesthetic/usability effect describes a phenomenon in which people perceive more-aesthetic designs as easier to use than less-aesthetic designs—whether they in fact are or not (Butler et al. 2007).

A major component of the aesthetic/usability effect is the first impression that a system gives to its users, which builds on the psychological theory that users’ “emotional responses are immediate and precede intellectual ones” (Lindgaard & Dudek 2003). In the same way that first impressions count when meeting other people, a user’s first impression of an interface often determines their attitude and interaction behaviour with that system, and aesthetically pleasing interfaces can help to instill a positive relationship between the system and the user. Zajonc (1980) found that such emotional responses can be made in a mere five milliseconds and occur in the vast majority of users.

A further study by Hiltz & Johnson (1990) found that these first impressions of a system are in fact quite lasting—“if computers were perceived initially as difficult to use, users were more likely to express dissatisfaction with the interface of the system after four months of use”. An attractive user interface is therefore not just a visual element of design, but has psychological influences on a user’s entire interaction experience and “may influence long term attitudes towards [a] system” (Tractinsky 1997). This is an area touched on by Cockton (2006) to create value in using a system which lasts beyond the “moment” of interaction (Section 2.1.1).

The aesthetic/usability effect is particularly relevant for everyday users such as content authors, who use a CMS to perform rudimentary content management tasks as part of their everyday working life, using the same system day-in day-out. Cooper (2004) argues that everyday users such as these “should not have to acquire computer literacy to use computers [as part of their job]”. It is therefore considered important that a CMS “lowers the barrier to entry” (Brown 2005) for such users by looking easy to use, even if it is not inherently usable. Furthermore, the barrier to entry can be lowered by presenting a simple, non-intimidating “first look” interface to the user. Much research has been carried out into the apparent usability afforded by aesthetically-pleasing interfaces (Kurosu & Kashimura 1995; Tractinsky 1997; Angeli et al. 2006; Norman 2003). In particular, a study by Kurosu & Kashimura (1995) found that the apparent usability of a system was more strongly affected by an aesthetic user interface than by the actual inherent usability of the system. It is also thought that aesthetically pleasing interfaces help to promote creative thinking and problem solving (Butler et al. 2007), something that a frequent user, such as a content author, may be lacking through repetitious use of the same system on a daily basis.

Another study, carried out by Tractinsky (1997), validated the findings of Kurosu & Kashimura (1995). This study also compared how aesthetic perceptions differ between cultures (Japanese and Israeli); the preliminary findings showed that the Japanese—known for their modern aesthetic tradition, particularly in high-tech products—found aesthetically pleasing interfaces more apparently usable than Israelis—probably better known for their “action” orientation (Tractinsky 1997) than for high-tech products. A further study (Desmet 2004) found that Japanese people were more susceptible to admiration, satisfaction and fascination than other cultures. This raises an important point: the aesthetic/usability effect is open to interpretation between different users, and care should therefore be taken that the aesthetics of an interface do not favour one user group or alienate another.

However, designers sometimes have a “tendency to neglect usability in favo[u]r of aesthetics” (Nielsen 1993) or overemphasise the aesthetic elements of an interface to the point of degrading usability (Tractinsky 1997). It is therefore also important that a balance is struck between aesthetics and usability—“products should be apparently usable as well as inherently usable” (Kurosu & Kashimura 1995).

2.4.4 Learnability & Documentation

It is generally considered that documentation is advantageous in a CMS to assist new users, minimise training required and support self-sufficiency (Appendix A). Gemoets & Mahmood (1990) found that user satisfaction was “considerably enhanced” and “strongly influenced” by the documentation provided with a system, provided it is “well organised and provides formal instructions”.

Documentation can also help to reduce frustration. It empowers users to “help themselves” and become self-sufficient, which Bessière et al. (2006) demonstrated as helping to reduce frustration. Empowering the user to tackle obstacles they face with a system themselves—rather than relying on their peers in the workplace—is also likely to bring a sense of achievement and encourage the user to become more tolerant with issues they encounter with the system in the future.

Furthermore, good quality user documentation may have a dramatic and continuing impact on user satisfaction (Doll & Ahmed 1985)—especially important for a CMS, which must support sustained usage and maintain user satisfaction over this time.

2.4.5 Summary

After consideration of some more contemporary literature, often related to mandatory computer usage in the workplace, the following four additional factors likely to influence user satisfaction in CMSs have been selected: funology; motivation and job satisfaction; aesthetics; and learnability and documentation. There may be some overlap between these four factors and the four previously derived from user satisfaction studies of IT systems—however, they are considered distinct enough to be treated as eight separate factors. Whether they are in fact correlated will become clear during the evaluation and results analysis.

2.5 Contemporary Trends

This section discusses contemporary web design trends and methodologies which are particularly relevant for enhancing user experience and satisfaction.

Many of the trends presented in this section are from the book Getting Real—the business, design, programming and marketing philosophies of 37signals (37signals 2006). Getting Real details their sometimes unconventional approach to building web applications and has come to define a new era of web applications, “Web 2.0” (Section 2.5.3). The book gives a first-hand look at how working methods are changing in the web industry. 37signals’ philosophies are also borne out in practice—their range of web-based applications achieves notably high levels of user satisfaction through their “small, simple and efficient” systems (Angeles 2006) and is subject to much critical acclaim. Following aspects of the Getting Real philosophy—and being aware of other contemporary trends—during case study development is an excellent way of ensuring this thesis takes a fresh, contemporary look at CMS development.

2.5.1 Priorities & Feature Selection

Getting Real focusses on building less and leaving out anything that is not absolutely essential, delivering just what customers need in less time and with less complexity. This is a key advantage of web applications over desktop applications—desktop application manufacturers need to sell a new version of their software every year and must justify the expense by adding new features. This is inflation for the sake of inflation—driven by business needs, not user needs—leaving the application bloated and full of surplus, often unwanted, features. This spiral of complexity is known as “feature creep” and is a result of developers generally “not noticing when more options make a [system] less usable” (Surowiecki 2007). Indeed, “more power means more complexity, so each feature inherently reduces the usability of a [system]” (Byrne 2005b). A perfect example of this is Microsoft Word, which now has 31 toolbars and more than 1500 commands (Surowiecki 2007)—many of which go unused by the large majority of users.

However, it is not only developers who are to blame for “feature creep”—users often unwittingly induce and indulge in this spiral of complexity themselves. Although users find systems overloaded with features unmanageable, they also find them attractive (Surowiecki 2007). “Consumers give more weight to a product’s capability benefits and less weight to a product’s usability [...] despite the fact that a product’s usability strongly influences their satisfaction with the product” (Angeles 2006).

Marketeers are very much aware of this, and so CMSs are often sold under the promise of, for example, “New Improved Functionality!” and “101 NEW Features!” to lure customers with features that sound useful and that they think they will use, but most probably will never touch. Users are being “wowed by features over usability” (Angeles 2006) and a technical economy is very much driving the design of the system (Cooper 2004). “It is only once [the user uses] the system that [they] realise the virtues of simplicity” and “feature fatigue” sets in (Surowiecki 2007). The user becomes frustrated by the plethora of features available to them, and therefore finds the system difficult to use. “Feature fatigue” is difficult for users to avoid since people are not generally good at predicting what will make them happy in the future (Surowiecki 2007). It is therefore down to the developer to ensure that all features of a system are essential to the user. The issue of selling the system and luring in customers must be left as purely a marketing exercise, not an implementation issue for developers to “fix” by introducing unnecessary features. Unfortunately such practices are commonplace among CMS vendors, who compete primarily on features and functionality in quite an immature marketplace (Byrne 2005a).

37signals (2006) summarise the need for fewer features with the argument that it’s better to “build half a product, not a half-assed product”. It is often the case that if every feature idea is implemented, the resulting system has many average features. A better approach is to include half of the features, but spend your time implementing these features to an excellent quality. The notion of “It Just Doesn’t Matter” has been coined by 37signals (2006) to encourage developers to stick to only essential features. “This embodies what makes a product great—figuring out what matters and leaving out the rest”. This leaves the user free to concentrate on the task at hand and “they’ll achieve productivity they’ve never imagined” (37signals 2006).

2.5.2 Interface Design & Simplicity

The “less-is-more” philosophy naturally follows through from feature selection to interface design; as Cooper (2004) suggests, “no matter how cool your interface is, less of it would be better”. The Getting Real approach involves designing the interface first—the real screens that people are going to use—and working “backwards” from this actual user experience (37signals 2006). Many systems are developed with a program-first mentality, where the programming and logic are completed first and the user interface is then layered on top. This is a bad idea, since programming is the most heavyweight and expensive component of the entire development (37signals 2006). Designing the interface first brings three advantages.

Firstly, design is flexible—interface designs are much easier to change or scrap completely than programming logic—and programming first can severely restrict the functionality of the user interface.

Secondly, it allows for easy rapid prototyping to “flush out ideas that don’t work from the outset” (37signals 2006). Interface designs produce “real, tangible prototypes” that, when presented to the real users of a system, will lead to real reactions and the truth about how satisfied users are with the system. This “rinse and repeat” approach ensures that prototypes are developed based strictly on user needs and feedback (37signals 2006).

Thirdly, “the interface is your product” (37signals 2006)—it is a user’s only point of direct interaction with the system, and the whole experience of using a system is thought to be “the only thing users care about” (Merholz 2007). Designing the interface first ensures that the original goals of the system are always at hand—does it make sense? is it easy to use? does it solve the problem at hand? (37signals 2006)—ensuring you get the answers to those questions sooner rather than later, before the system becomes heavily entrenched in programming logic and constraints, to the detriment of user needs.

Radiant (Figure 2.4) and a custom-made CMS (Figure 2.5) are good examples of CMSs with a strong focus on simple, aesthetic and effective interface design.


Figure 2.4: Radiant CMS (Radiant CMS 2007)


Figure 2.5: Custom-made CMS (Hamid 2007)

Keeping these goals in the forefront of the design process also helps to ensure effective information architecture (the arrangement and organisation of information) and interface/navigation design (the presentation of, and navigation through, that information in a way that facilitates understanding) (Garrett 2002). For example, frequently used features will be easy to find (Marick 2007) and users won’t need to “shuffle off to various places in the interface to accomplish simple and common tasks” (Cooper et al. 2007).

2.5.3 Web 2.0 & AJAX

“Web 2.0” is a phrase coined by O’Reilly Media in 2003 to refer to the perceived “second generation” of web services, which embodies the transition of websites from information sources to sources of content and functionality, introducing the web as a platform to run web applications. The exact definition is open to much debate—and it is often considered a “buzzword”—but it centres around concepts of usability, user-centred design, joy of use and simplicity.

A key part of introducing the web as an application platform is to bring the user experience of a website closer to the immediacy of working within a desktop metaphor environment, where applications respond quickly and users can interact in a very fluid manner (Dürsteler 2005). Web applications no longer operate strictly on a click-and-refresh basis on a page-by-page model, but are more interactive applications similar to the way a user may interact with a desktop-based application such as Microsoft Word. The technology that provides this functionality is AJAX (Asynchronous JavaScript and XML).

An area where AJAX could be particularly useful in a CMS is in cutting the number of page refreshes required, making it more efficient for both frequent and infrequent users; in fact, this is one of the recommended CMS usability principles (Appendix A). CMS vendors are only recently beginning to turn to AJAX-style technologies to overcome the constraints and trade-offs of traditional web applications—such as confusion about the use of the “back” button, having to refresh every screen and the inability to drag-and-drop items. It is thought that “[traditional click-and-refresh web applications] just feel slower” (Byrne 2005b) and that AJAX can generally improve usability, leading to an increase in productivity (Stafford 2006).
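To illustrate the difference from a click-and-refresh form submission, the following is a minimal sketch of an asynchronous in-place save; the endpoint URL and field names are hypothetical and do not represent THE SYSTEM:

```javascript
// Pure helper: encode form fields as an application/x-www-form-urlencoded
// request body, as a traditional full-page form submission would send.
function encodeFormBody(fields) {
  return Object.entries(fields)
    .map(([key, value]) =>
      encodeURIComponent(key) + "=" + encodeURIComponent(value))
    .join("&");
}

// Hypothetical in-place save: POST the edited content in the background.
// The page itself never reloads; the editor can show inline feedback
// (e.g. a "Saved" notice) based on the resolved promise instead.
function savePage(pageId, title, body) {
  return fetch("/cms/pages/" + encodeURIComponent(pageId), { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: encodeFormBody({ title: title, body: body }),
  }).then((response) => response.ok);
}
```

The same pattern underlies the auto-completion and in-context validation elements shown in Figure 2.6: a small request runs in the background and only the affected fragment of the page is updated, rather than the whole screen being refreshed.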


Figure 2.6: AJAX elements (auto completion, in-context validation, toolbar and menu)

Widespread use of AJAX within a web-based system does, however, introduce a desktop-on-website metaphor, which can break users’ expectations of how they should interact with a website and how it should look and behave (Higgins 2007). This can lead to users treating a web application as if it were a desktop application—for example, closing a window and expecting the contents to be saved automatically, or expecting to be prompted to save—and this may have a negative impact on learnability, pleasantness of use and adoption (Higgins 2007). It is for this very reason that the usability and appropriateness of AJAX is currently a hot topic of debate (for example, 9rules (2005); Mickiewicz (2006); MacManus (2007)), but it is generally thought that a focus on AJAX to increase usability is “a very good thing” (Byrne 2007) when implemented carefully and not used for purely aesthetic reasons. It is also thought that AJAX can help in achieving the notions of funology and enjoyment discussed in Section 2.4.1.

2.6 Summary

This section has stressed the importance of looking more broadly than the traditional view of usability to notions of experience and value in interaction design. Four factors of user satisfaction relevant for CMS design have been suggested from derivation of traditional measures—format and language, ease of use, efficiency and speed. A further four factors have been proposed which are thought to be relevant to the context of CMS usage—funology, motivation and job satisfaction, aesthetics and documentation. Contemporary trends relevant to this thesis have also been discussed which are thought to be important in the implementation of a CMS aiming to achieve user satisfaction.

Based on these findings the next section documents the design, prototyping and development of a new interface for the THE SYSTEM CMS. This will be used as a case study to empirically validate the importance of the eight factors arrived at through this background research.

Chapter 3
Case Study Development

3.1 Current System Analysis

An informal interview was carried out with the Technical Director of THE PROVIDER, to analyse the existing THE SYSTEM CMS. During this interview I asked a range of questions concerning the design, development, user interface, user feedback and evaluation of the current system.

[The exact interview findings cannot be released publicly. However, the main issues during development were: restricted access to stakeholders and users during system development; no formal requirements capture process; and no prototypes were given to users until the late stages of development.]

To summarise, while THE SYSTEM was designed specifically for THE CLIENT its main web interface was not designed for a specific user type. As a result, content authors—the most common type of user—find the system difficult and complicated to use.

3.2 Design

3.2.1 Methodology

Based on the findings of background research, it has been decided to employ user-centred design methods for the case study development. This will ensure that real-world users are taken into account at every stage of system development (Garrett 2002), which will help to minimise error and promote productivity and performance gains (Noyes & Barber 1999). Furthermore, it will ensure that the system fits its real-world context of use, thereby allowing users to derive maximum value from using the system. Users who are consulted at early stages are also thought to have less antagonism towards the introduction of a new system, particularly in a workplace environment (Gould & Lewis 1985; Greenbaum & King 1991).

The appropriateness of user-centred design methods for achieving user satisfaction is also corroborated by several studies. Zviran et al. (2006) argue that “the better a web interface fits the user’s preferences, the higher would be the value and satisfaction attributed to the [system]”, and Ang & Soh (1997) found that for “good” systems to be developed for commercial usage, users must be “actively involved in the analysis, design and implementation of the system”. Baroudi et al. (1986) found a strong relation between user involvement in system design work and the user satisfaction derived from using the finished system, and Keil & Carmel (1995) found that employing user-centred methods generally increases the probability of a project’s success.

User-centred design methods can, however, be difficult to put into practice in a commercial environment (Trenner & Bawa 1998) for a range of practical and political reasons, mostly related to gaining access to real users. Specifically, “people and companies don’t always appreciate the benefit it will bring” (Simpson 1998) and therefore often take exception to providing ready access to users within their business. This issue is particularly prevalent amongst technically-minded developers, who often feel as though the designer is encroaching on their “turf”—as was experienced during the original THE SYSTEM development—and dismiss the importance of usability. Many people think “usability is just common sense” (Kaderbhai 1998).

A framework for user-centred design has been developed by Garrett (2002) consisting of five “planes”: strategy, scope, structure, skeleton and surface (Figure 3.1). Garrett’s user-centred design framework also shares many parallels with Cockton’s notion of a value-centred design framework (Cockton 2005a)—in particular, involving users at preliminary stages to identify usage contexts, and following an iterative design and prototyping process. Garrett’s framework will be followed throughout case study design and prototyping activities to ensure that real-world users are considered throughout.


Figure 3.1: The “planes” of user experience (Garrett 2002)

Strategy and scope are discussed in the following two sections; structure, skeleton and surface are taken into consideration throughout the prototyping section.

3.2.2 Strategy

The first stage of user-centred design is the most abstract and concerns defining objectives, goals and user needs for the system. A clear, explicit understanding of what THE PROVIDER and THE CLIENT require is essential—and the more clearly this can be articulated, the more precisely design choices can be adjusted to meet the goals of the system (Garrett 2002). Defining the strategy involves formulating objectives and success metrics and ascertaining user needs*. This section will also analyse and evaluate the existing system to help define the strategy for development.

* User-centred design also involves formalising this strategy into a strategy document to circulate to all project participants—however, this is unnecessary here as all user-centred work is being carried out by myself.

Objectives & Success Metrics

THE CLIENT’s primary objective is to have a system which is easier and quicker for content authors to use, thereby ensuring that THE CLIENT’s website is kept up-to-date. The CMS is also used in both the UK and US to send bulletins to high-profile paying subscribers worldwide, and it is therefore vital that the CMS allows these bulletins to be sent correctly and on time. Through discussion of strategic objectives with the Marketing Director and Marketing Manager at THE CLIENT, it was found that both were keen to “roll out the CMS to more junior members of staff without worrying”—the easier the system is to understand and learn, the better this objective can be met. This project will be considered a success if THE CLIENT can accomplish their tasks with the CMS in less time and with less frustration, and feel happy about giving access to the CMS to members of staff with little or no prior training.

THE PROVIDER’s primary objective is to develop a super-usable interface for their THE SYSTEM CMS which is tailored to the needs and requirements of THE CLIENT. This should dramatically reduce the amount of support THE PROVIDER need to provide to THE CLIENT in using the CMS, and will define the success of this project from THE PROVIDER’s point of view.

User Needs

Before gathering user needs it is important to define exactly who the user or users are, through user segmentation and personas. For this case study I am focussing on the primary users—content authors at THE CLIENT—and the secondary users, staff at THE PROVIDER. The needs of both parties are important; however, priority will be given to THE CLIENT’s needs where they are not in conflict with constraints or technical requirements set by THE PROVIDER.

Based on discussions with THE CLIENT, two personas (Figure 3.2) have been developed to represent the collective needs of content authors. Discussions were held with several content authors and the findings were distilled into two distinct types of user. These personas will help ensure that “users are kept in mind during the design process” (Garrett 2002), and it is generally more manageable to relate design thoughts to two personas than to a greater number of real people.


Figure 3.2: Personas: James and Roger

Current System Analysis

To help ensure that any issues with the current THE SYSTEM system do not progress into the new version, it is important that the current system is evaluated. Issues known to content authors may be assumed to be tacit knowledge during an interview and not verbalised, or a user may not be aware of a more usable way to accomplish a difficult task and so may not feel qualified to bring it forward for consideration. Content authors have also been using the current CMS for approximately 18 months, and during this time they will have learned how to work “in harmony” with the system’s idiosyncrasies. Users are therefore unlikely to volunteer all the issues they encounter with the system during an interview, as they will be used to working in this manner on a regular basis, even where a quicker, easier, more usable way to accomplish the same task could be introduced. It is also a good opportunity to look at the current system from an outsider’s view, based on contemporary trends and both CMS- and web-based usability principles.

Figures 3.3–3.7 contain annotated analyses of the user interface and workflow of the current THE SYSTEM CMS from the point of view of the two content author personas and myself as an outsider to the system. Positive points have a green arrow, negative points have a red arrow and references to CMS usability principles (Appendix A) are given in square brackets.


Figure 3.3: Main view and page manager


Figure 3.4: Creating a new folder, creating a new page


Figure 3.5: Inserting an image


Figure 3.6: Page information, deleting a page, editing a page


Figure 3.7: Reviewing and publishing

Analysis of the current THE SYSTEM system has identified a number of usability issues which, once fixed—in conjunction with user-centred design—it is hoped will help reduce frustration and increase user satisfaction. These issues are not surprising, given that the Technical Director at THE PROVIDER agreed that “five minutes thought” went into the user interface. The specific issues are annotated in Figures 3.3–3.7 above.

3.2.3 Scope

The second stage involves taking what THE PROVIDER and THE CLIENT want and working out how to satisfy these objectives in terms of functionality.

Defining the Scope

Before gathering requirements it is important to define the scope for a project—that is, what will and will not be built. Similar to the concept of “feature creep”, “scope creep” occurs when a developer takes on additional requirements which individually may not seem like much extra work, but which together produce “a project rolling away out of control, blowing past deadlines and budget estimates on its way toward an inevitable final crash” (Garrett 2002).

The first stage of scoping—“establishing the boundary for investigation” (Sutcliffe2002)—has been defined by THE PROVIDER: this case study just focusses on the web interface of THE SYSTEM as used for THE CLIENT’s content management. Other interfaces previously mentioned, and the Firefox toolbar, are outside of the scope of this case study development.

In practice, scoping a user-centred design project is “rarely an easy process since users often don’t know what they want” (Sutcliffe 2002). However, as this case study seeks to redesign an existing problematic system, “scoping is less critical as the observed problem defines the initial scope for investigation”—therefore requirements gathering should be largely “problem-initiated” (Sutcliffe 2002). Nevertheless, in view of the fact that requirements gathering for the current system was non-existent, it is important that a combination of both problem-initiated and traditional requirements capture techniques is used.

Requirements Gathering

Basic requirements were gathered through informal interviews with both THE PROVIDER and THE CLIENT at their respective offices. Although usability designers often face a "fight for survival" (Trenner & Bawa1998) when gaining access to real users, I had no such issues, as THE CLIENT recognise that the results of this development work will directly benefit them.

The interviewing method of requirements gathering was chosen since it is relatively quick and easy, and allows issues to be discussed and clarified at the time of gathering. Interviews were carried out "on location" as "people tend to be more confident and open when on their home ground" (Sutcliffe2002). As interviews can suffer from users not being able to volunteer their tacit knowledge (Sutcliffe2002), it was decided to use two further elicitation methods with THE CLIENT—observation and user participation—to gain a richer understanding of the current issues and their derived requirements. Observation was carried out for just over an hour with two users, during which they used the CMS in their normal manner while being observed and voice recorded by myself. Users were asked to verbalise any particularly prevalent issues which might go undetected by myself as a relative newcomer to the system. As the observation was relatively short, users were also asked to keep a notepad by their computer and, over the following week, write down further issues as and when they occurred while using the system as they normally would. It was hoped that this combination of interviews, observation and user participation would elicit as many traditional and problem-initiated requirements as possible in a relatively short time period.

Beyond eliciting system requirements, an issue specific to CMS design is gaining an insight into a business's "corporate processes" (Brown2005) to prevent a CMS from "working against" the established functions and workflow of the business. An understanding of how THE CLIENT works and functions is particularly important, as user-centred design is "more than simply presenting design solutions for ratification; it requires consultation with users in their own language and terms" (Eason1995). An ethnographic study would have been ideal for eliciting this kind of qualitative data, but is out of the scope of this thesis. Instead, THE CLIENT's working practices were discussed at length with both the Account Manager at THE PROVIDER, who has dealt with THE CLIENT for over a year, and directly with the Marketing Manager at THE CLIENT prior to the requirements elicitation session.

Functional Specification

Appendix B lists the functional requirements gathered and derived from the requirements gathering process.

3.2.4 Summary

This section has clarified the objectives, success metrics, scope and requirements of the new system. With this knowledge, prototypes can now be developed with reference to the structure, skeleton and surface “planes” of user-centred design.

3.3 Prototyping

There are two requirements for successful prototyping: understanding what is wrong and how to improve it (achieved through user feedback); and a good starting point (Dix et al.2004). If you start with a bad design concept and iterate that design through prototyping, the result may just be a neater and more attractive version of the bad design idea. This problem—known as "local maxima"—is overcome by initially developing several design ideas (Dix et al.2004) and by taking proven usability guidelines and advice into consideration when developing them. Creating several design ideas also ensures that the designer does not become emotionally attached to a single idea—which may not be favoured by the user and may have to be "thrown away"—and will be more open to merging the best features offered by each into the next design iteration. It is also better to have frequent small evaluations of prototypes rather than fewer larger ones (Nielsen2001), to quickly flush out design ideas that do not work.

Prototyping for the case study development consisted of three iterations. Each iteration was evaluated by both THE PROVIDER and THE CLIENT and the design decisions for the next iteration were based on this feedback. Appendix C graphically documents the prototyping process carried out.

The prototyping process raised two important points about employing user-centred design methods. Firstly, the majority of the issues with each iteration were put forward by THE PROVIDER and not THE CLIENT. This corroborates the suggestion by Garrett (2002) and Boiko (2005) that involving all stakeholders is vital for effective user-centred design: it is important to cover as wide a range of stakeholders as possible, as users may assume tacit knowledge. Secondly, some of the issues gathered during prototyping were in fact requirements which should have been elicited earlier in the design process. It is possible that THE CLIENT "did not have the current system in mind" (Sutcliffe2002) very clearly when requirements gathering was performed. Furthermore, the changes introduced with each prototype iteration appear to prompt trains of thought and spark further ideas in the user which would not have been apparent during requirements gathering. This supports the point made by Garrett (2002) that the "planes" of user experience should not be considered as discrete stages—they should partly overlap and be performed semi-concurrently to prevent decisions made on lower planes (such as strategy) from restricting decisions made on upper planes (Figure 3.8).


Figure 3.8: Building the “planes” of user experience concurrently (Garrett2002)

3.4 Implementation

The third iteration of prototypes was developed into a working system as a collaborative effort by myself and the Technical Director at THE PROVIDER. My work focussed primarily on developing the new user interface and the preliminary stages of integrating this with the existing system backend. The Technical Director carried out full integration of the new interface and added functionality where required.

3.4.1 Constraints

There were a few aspects present in the third iteration of prototypes which could not be realised due to technical and budget constraints.

There are also various small miscellaneous changes in the implemented system which were not present in the third prototype iteration.

3.4.2 Screenshots

Figures 3.9 and 3.10 show screenshots of the implemented system.


Figure 3.9: THE SYSTEM screenshots


Figure 3.10: THE SYSTEM screenshots

3.4.3 Screencast

As the interface contains many elements that are not apparent in static screenshots, a screencast was created showing common tasks being carried out. The screencast is no longer publicly available.

Chapter 4

An evaluation session was carried out at THE CLIENT's offices and served two purposes. First and foremost, to validate the three objectives of achieving user satisfaction in CMSs as defined in the Introduction chapter. Secondly, to identify usability and functional issues related specifically to this case study; these issues will be provided to THE PROVIDER to guide future development of THE SYSTEM. Furthermore, it is important that all three aspects of usability—efficiency, effectiveness and satisfaction—are evaluated, since the relationship between them is not well understood (Frøkjær et al.2000). Studies using a subset of the three aspects risk either assuming relations between these aspects, or ignoring rather important aspects (Frøkjær et al.2000) of a system that summate to produce a usable experience.

4.1 Protocol

The evaluation took place in a quiet room away from the participants' normal working area. Participants sat in front of a laptop for the duration of the evaluation, with no distractions present in the room. Participants were evaluated individually and each evaluation session took approximately 20 minutes.

The number of participants available for the evaluation was relatively small—only seven people. As the design and development work was highly user-centred towards THE CLIENT, only staff from THE CLIENT—who use the system under the context designed for—were suitable for evaluating the system, and participants also required a good level of experience with the existing system. These constraints limited the sample size; however, research by Nielsen (2000b) does suggest that seven users are able to identify around 90% of usability issues, and that in testing more than five users "you are wasting your time by observing the same findings repeatedly but not learning much new". THE SYSTEM is in fact used by only a handful more people than the seven evaluation participants, so whilst Nielsen's research is open to debate (Faulkner2003; Woolrych & Cockton2001), it is likely that, in this instance, seven participants will identify most relevant usability issues. Seven participants will, however, restrict the statistical significance of the results found.

The evaluation session consisted of four components: training, first impressions, verbal protocol analysis and a satisfaction questionnaire.

4.1.1 Training

Participants had not seen or used the new system prior to the evaluation session. It was therefore important that participants were given a brief training session to attempt to factor out the effect of their introduction to the new system on the evaluation results. The training session was kept quite brief and participants did not use the system themselves; rather, they were just "walked through" the main screens and primary functions of the system. The reasoning behind this methodology was to ensure that the first impressions captured really were users' first impressions, and were not affected by any particularly satisfying or frustrating experience with the system.

4.1.2 First Impressions

As discussed in Section 2.4.3 the first impressions of a system can be lasting and produce emotional responses which affect users’ interaction experience and attitudes. It was therefore decided to capture the participants’ emotional thoughts quickly—before gaining hands-on experience with the system—on the following criteria with a nine-point Likert scale: overall negative impression/overall positive impression, ugly/attractive, complicated/simple, dull/stimulating and intimidating/approachable. It will be interesting to observe how these initial first impressions change after using the system, and how they correlate with the eight factors of CMS user satisfaction being tested.

4.1.3 Verbal Protocol Analysis

Participants were asked to verbalise their mental state while using the system. An example verbalisation was given by myself—using Amazon to search for a book about web programming—to familiarise participants with the technique, and further clarification was given if a participant did not feel comfortable with what was required of them. No specific tasks or goals were set for participants to complete: I wanted to avoid participants having to stick to a strict schedule of tasks, and instead let them explore the system freely and interact with it as they felt necessary. Self-set goals and tasks are in fact thought to be more meaningful, and to produce more motivation and intent to perform a task, than goals set by an evaluator, which may be unclear or simply rejected (Locke et al.1981). Participants were recorded using a digital dictaphone for later transcription. Verbal protocol analysis was carried out primarily to identify functionality issues and suggestions to convey to THE PROVIDER, but the qualitative results will also be used to corroborate the quantitative results gathered from the satisfaction questionnaire.

4.1.4 Satisfaction Questionnaire

Once their interaction with the system was complete, participants were presented with a three-page A4 questionnaire to complete (responses in Appendix E). The questionnaire was adapted from the established QUIS (Chin et al.1988), PUEU (F. Davis1989) and CSUQ (Lewis1995) satisfaction questionnaires, while also taking input from the background research findings and the user satisfaction model developed by Doll & Torkzadeh (1988). The questionnaire was split into sections for each of the user satisfaction factors being measured. It was decided to present the Motivation & Job Satisfaction factor as simply Motivation, as it was felt that participants may be uncomfortable divulging job-related information to a relative stranger—a discomfort that would be more apparent were Job Satisfaction explicitly mentioned. The Funology factor was also renamed to the more user-friendly Enjoyment, and is referred to as such throughout the rest of this thesis. Spaces were available in each section for qualitative comments.

Similar to the QUIS, PUEU and CSUQ questionnaires, participants were presented with a range of statements to which they responded using a nine-point Likert scale, explained to each participant before beginning the questionnaire. Each statement corresponds to two scales, one measuring outcome and the other measuring importance. For outcome, participants were asked to indicate their strength of agreement with the statement presented, based on the observed outcome of their interaction. The importance scale measures how important the participant feels that statement is in contributing to their overall satisfaction with using the system. This two-scale approach was chosen to ensure that the findings from the evaluation were not specific just to THE SYSTEM and THE CLIENT, but also provide the preliminary stages towards a generalised method of achieving user satisfaction in all CMSs.

The majority of statements were positively worded—that is, a higher numerical result would indicate a more positive outcome. Two statements were negatively worded: N2: The system is frustrating because it interrupts my natural flow of work; and A2: The aesthetic qualities of the user interface hampered my productivity. Lower numerical results for these questions would indicate a more positive outcome. Negatively worded questions were included to attempt to reduce the effect of acquiescence bias (participants agreeing with statements as presented) and ensure participants were actually thinking about their responses.

At the end of the questionnaire participants were asked to give their closing impressions of the system. This was used to find out whether—following their interaction—their first impressions of the system had changed, and in what way this would affect their regular use of the system. Participants were also asked to rate the overall importance associated with each of the eight user satisfaction factors—without looking back at prior responses—as a means of helping to verify and broadly rank the eight factors.

4.2 Results & Analysis

4.2.1 Pre-Analysis Tasks

Once transcribed, the verbal protocols were encoded for easier understanding of the participants' behaviour, with the encoding scheme developed in light of the data collected. The type of data collected did not lend itself to encoding directly against the eight user satisfaction factors being investigated. It was therefore decided to develop a coding scheme based on three basic factors of user satisfaction—positive outcome, negative outcome and uncertainty—and also to identify any functional issues or suggestions verbalised by participants. The three basic factors help to explain the quantitative questionnaire results, and the negative and functional issues in particular will assist THE PROVIDER with further THE SYSTEM development. The encoded verbal protocol transcripts are included in Appendix D.

During the evaluation session most participants questioned how the two negatively worded questions should be answered to correctly represent their response—for example, "should I vote low or high to represent that I'm not frustrated with the system?". The scaling method was explained again—with every attempt not to influence responses—and participants were told to respond as they felt appropriate. The direction in which each participant voted was noted, and if necessary their response was inverted prior to analysis so that a higher numerical value represents the more positive outcome. This ensures that quantitative comparison is accurate, but does assume that participants considered the intervals of the Likert scale to be equidistant.
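As a concrete illustration of this pre-analysis step, the sketch below reverses responses to the two negatively worded statements on the nine-point scale so that a higher value always denotes the more positive outcome. The statement codes are taken from the questionnaire, but the data values are hypothetical, not the actual evaluation responses.

```python
# Sketch of the inversion step: negatively worded statements (N2 and A2)
# are reversed on the nine-point scale, so 1 -> 9, 2 -> 8, ..., 9 -> 1.
NEGATIVE_STATEMENTS = {"N2", "A2"}
SCALE_MAX = 9  # nine-point Likert scale

def normalise(statement: str, response: int) -> int:
    """Return the response with negatively worded statements inverted."""
    if statement in NEGATIVE_STATEMENTS:
        return (SCALE_MAX + 1) - response
    return response

# A participant who voted 2 on N2 ("the system is frustrating...") in fact
# reported low frustration, i.e. a positive outcome of 8.
print(normalise("N2", 2))  # -> 8
print(normalise("U1", 7))  # -> 7 (positively worded, unchanged)
```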

The first stage of analysis is to investigate the importance that participants associated with each of the 27 statements based on their interaction with the system. Once an approximate ranking of the factors in producing user satisfaction is developed, correlation analysis will be carried out to attempt to explain these results in conjunction with the qualitative data gathered.

4.2.2 Importance Analysis

The importance of each of the statements was averaged and a ranking produced to discover which factor participants found to be the most influential in affecting their satisfaction. This is referred to as the statement-level ranking. The results are shown in Table 4.1. P denotes Participant.

Table 4.1: Importance associated with each factor (statement-level)
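To make the statement-level ranking procedure concrete, the sketch below averages importance scores per statement and orders the results. The statement codes, factor labels and scores here are invented purely for illustration; they are not the data behind Table 4.1.

```python
# Illustrative statement-level ranking: average each statement's importance
# scores across participants, then sort from most to least important.
from statistics import mean

# importance[statement] = one score per participant (nine-point scale)
importance = {
    "U1": [9, 8, 9, 7, 9, 8, 9],  # an Ease of Use statement (made-up values)
    "S1": [8, 9, 8, 8, 7, 9, 8],  # a Speed statement
    "E1": [7, 8, 6, 8, 7, 7, 8],  # an Enjoyment statement
}
factor_of = {"U1": "Ease of Use", "S1": "Speed", "E1": "Enjoyment"}

statement_means = {s: mean(scores) for s, scores in importance.items()}

# Rank by mean importance, highest first.
ranking = sorted(statement_means.items(), key=lambda kv: kv[1], reverse=True)
for statement, score in ranking:
    print(f"{factor_of[statement]}: {score:.2f}")
```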

The factors can also be ranked based on the overall importance given at the end of the questionnaire. This is used to validate the results given and should corroborate that a particularly low or high score given to a factor was intentional. This is referred to as the factor-level ranking. The results are shown in Table 4.2.

Table 4.2: Importance associated with each factor (factor-level)

Therefore, the eight CMS user satisfaction factors can be ranked as shown in Table 4.3.

Table 4.3: Factor ranking

While the statement-level ranking will be considered the most accurate—given that responses were based on a range of detailed statements which "break down" a factor into particular characteristics—there are similarities with the factor-level ranking. Most importantly, the extremes of the two rankings are generally the same, which suggests the statement-level ranking is reliable and that participants were giving true responses despite having a large number of statements to mark. Further analysis in this thesis is based only on the statement-level ranking.

The ranking of the factors is highly interesting. As previously discussed a CMS very much affects the “bottom line” of a web-oriented business—in the end it’s all about money, leading to distinguishable business results and increased sales (Section 2.2.1). The ranking very much reflects this and shows that in a commercial environment, the most important factors of user satisfaction in a CMS from a user’s point of view are the factors you would traditionally associate with achieving a usable system rather than a satisfying system.

The top three factors—Ease of Use, Speed and Efficiency—are generally discrete, measurable and can be proven. These factors produce real tangible results, which in a commercial environment is especially important given the large investment many businesses make in purchasing a CMS.

The next two factors—Format & Language and Learnability & Documentation—can be considered semi-measurable factors which contribute towards achieving the top three most important factors. Nicely formatted, clear messages, terminology and help will rarely be features that a CMS buyer is specifically looking for—more important functional requirements will take precedence.

The bottom three factors—Motivation & Job Satisfaction, Aesthetics and Enjoyment—can be considered more affective notions which are highly subjective and not easily measurable. Indeed the concept of Enjoyment in particular is “as intangible as it is appealing” (Desmet2004).

The ranking illustrates the issues present when trying to sell a CMS to a business. There is a strong focus on core aspects such as functionality, speed and features—which can lead to feature creep and feature fatigue (Section 2.5.1)—with little consideration or awareness of how more affective notions can influence these core aspects, leading to user satisfaction and happier, more productive users. It is therefore hoped that the results from the rest of the evaluation session will show positive relationships between each of the bottom five factors and the top three factors through performing correlation analysis.

Bivariate correlation analysis was performed on the importance data using SPSS to produce a Pearson product-moment correlation coefficient to identify relationships between the factors. Preliminary analysis was carried out to ensure no violations of normality, linearity and homoscedasticity.

Correlation analysis was carried out using the mean values of each factor to provide a true reflection of the central tendencies of each factor and remove the effects of anomalous results. Correlations based on mean values were considered to be significant at the p<.05 level.

Further bivariate correlation analysis was performed with all 27 statements and 5 first impression statements (denoted by the FI prefix). A stringent p value of .01 was necessary for this analysis—due to the small sample size and large number of tests—to help eliminate correlations marked as significant based on chance alone. Correlations from this more detailed p<.01 analysis should therefore be interpreted with caution.
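The analysis itself was performed in SPSS. Purely to illustrate the statistic involved, the following sketch computes the Pearson product-moment coefficient by hand for two hypothetical sets of per-participant factor means (n = 7); the values are invented and not the study's data.

```python
# Hand-rolled Pearson product-moment correlation coefficient r.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant factor means (n = 7):
learnability = [7.5, 8.0, 6.5, 9.0, 7.0, 8.5, 6.0]
aesthetics   = [7.0, 8.5, 6.0, 9.0, 6.5, 8.0, 6.5]

r = pearson_r(learnability, aesthetics)
print(f"r = {r:.3f}")  # a positive r indicates the two factors rise together
```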

A significant importance correlation was found between Learnability & Documentation and Aesthetics [r=.794, n=7, p=.033] suggesting that the aesthetic qualities of the interface positively affected how easy the system was for users to learn.

Participant 1: “That’s very obvious that they’re pages within there”; “The icons seem relatively obvious in terms of what they do”

Participant 3: “It’s quite obvious to me where I can edit and where I can’t, which is good”

Participant 5: “When I move my cursor around it’s clear what I can edit”

Although inferences about causality cannot be made from correlations, it is thought highly unlikely that the learnability and documentation offered by a system would help increase its aesthetic appeal. More detailed analysis found a strong positive correlation between two of the statements in these factors, L2 and A2 [r=.907, n=6, p=.012], however no logical explanation can be found for this relationship.

A positive correlation was also found between A2 and U5 [r=.919, n=7, p=.003]. This suggests that users can become skilful at using the system if their productivity and performance is not hampered by aesthetic qualities.

Participant 7: “The white as well, white background, it’s clearer to see what you’re doing”

This supports the findings of the background research that a balance should be struck between aesthetics and usability—“products should be apparently usable as well as inherently usable” (Kurosu & Kashimura1995). Furthermore, this correlation suggests that an aesthetically pleasing interface—which balances aesthetics and usability—can encourage an attitude in the user that will assist them in progressing from a novice to an expert, skilled user.

4.2.3 Outcome Analysis

Speed was found to be the second most important factor in achieving user satisfaction; however, it is the only factor that is not entirely controlled by either the developed system interface or by the evaluation participant. Speed can be vastly affected by the speed of the THE SYSTEM web server or the network connection at THE CLIENT's office where the evaluation took place, and can therefore influence the outcome results. Partial correlation analysis, controlling for the Speed factor, was therefore carried out to investigate whether results remained significant. Unless otherwise indicated, all results were found to be significant once the Speed factor was controlled for.
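For illustration, a first-order partial correlation of this kind can be computed from the three zero-order coefficients. The sketch below uses invented r values, not the study's data (the actual analysis was performed in SPSS).

```python
# First-order partial correlation: the relationship between factors X and Y
# with a third factor Z (here, Speed) "partialled out":
#   r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
from math import sqrt

def partial_r(r_xy: float, r_xz: float, r_yz: float) -> float:
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical zero-order correlations:
r_xy = 0.85  # factor X vs factor Y
r_xz = 0.40  # factor X vs Speed
r_yz = 0.35  # factor Y vs Speed

# If the partial r stays close to the zero-order r_xy, the X-Y relationship
# is not merely an artefact of both factors varying with Speed.
print(f"partial r = {partial_r(r_xy, r_xz, r_yz):.3f}")
```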

Some interesting correlations were found based on the first impressions users had of the system. Causality can be inferred from first impressions as one of the factors in the correlation (the first impression) is decided upon—that is, written on the questionnaire—before the other factors are considered (through actually using the system).

A correlation was found between the first impression statements FI2 and FI4 [r=.880, n=7, p=.009], which shows a positive relationship between a user perceiving an interface as being attractive and also perceiving it as being stimulating. This first impression of users being stimulated by the interface is very positive—to achieve enjoyment the senses must be engaged (Blythe & Hassenzahl2004) and users are more likely to be engaged if they feel stimulated by the system they are using. It is this stage of cognitive absorption (Section 2.4.1) that needs to be reached in CMS design to influence usage behaviour and help to ensure the system is being used in a constructive manner. A correlation was also found between FI5 and U1 [r=.957, n=7, p=.001], suggesting that if a user’s first impression of a system is that it is approachable (opposed to being intimidated by it), then they are also likely to find it user friendly.

Participant 6: “OK [the] first thing I’m thinking is that it looks very clean and very user friendly, which is good”

This partly corroborates the background research that a system can be perceived as being user friendly if it has a “lower barrier to entry” (Section 2.4.3), particularly for users who are new to the system as was the case with the evaluation participants (previous system knowledge aside). It is therefore likely that further research would show a strong correlation between an approachable first impression and the Aesthetics and Learnability & Documentation factors. Users who found the system to be approachable based on their first impressions also found the help to be self-explanatory (L2) [r=.923, n=6, p=.009], however this is likely to be based on chance alone—no logical explanation can be found for this relationship.

Perhaps the most predictable finding was a significant correlation between Format & Language and Ease of Use [r=.845, n=7, p=.017], given that it was identified during background research that both factors—adapted from Doll & Torkzadeh’s user satisfaction model—share many parallels with usability principles and guidelines developed by the likes of Nielsen & Loranger (2006) and Krug (2005). This finding is corroborated by more detailed correlation analysis, which shows a positive relationship between the F1 and U2 statements specifically [r=.880, n=7, p=.009], suggesting that the clarity and logical presentation of a CMS interface is highly associated with its perceived ease of use. While not being part of the Aesthetics factor, this correlation does hint at the presence of the aesthetic/usability effect (Section 2.4.3). Two findings from the verbal protocol analysis focussed primarily on the navigation aspect of the Format & Language factor:

Participant 6: “So I’m going to review now. It’s better that this is now one section rather than two, so I’m not clicking back and forth which was sometimes annoying when I had to publish first on the test server, then the live server”

Participant 7: “In general the whole thing is a bit clearer with just having these two main tabs”

Furthermore, two correlations were found between Format & Language and both Enjoyment [r=.817, n=7, p=.025] and Motivation & Job Satisfaction [r=.824, n=7, p=.023]. While correlations do not imply causality, the specifics of this study suggest it is highly likely that Format & Language influences Enjoyment and Motivation & Job Satisfaction more than the reverse. The characteristics tested under the Format & Language factor are fixed in the system—clear and logical display of information [F1], clear and easy to understand instructions [F2], consistent terminology [F3]—and therefore the only source of variance is how clear, logical, easy to understand and consistent the user perceives the system to be. It is thought that if a user finds their interaction particularly frustrating or boring, they will be under the impression that the system is less clear, logical, easy to understand and consistent than it actually is. The suggestion that Format & Language influences Enjoyment and Motivation & Job Satisfaction is corroborated by the positive correlation between M2 and F1 [r=.954, n=7, p=.001], suggesting that the clarity and logical formatting of the CMS does help to provide motivation for regular repeated use.

Given the correlation between Enjoyment and Format & Language, and the correlation between Format & Language and Ease of Use, one would expect Enjoyment and Ease of Use to also be positively correlated. This was found to be the case [r=.831, n=7, p=.02]. These two factors do however have contrasting ratings—Ease of Use was found to be the most important factor in CMS user satisfaction; Enjoyment was found to be the least. There must therefore be a reason for a strong correlation in the outcome results, but a weak association in terms of user perceived importance.

Enjoyment is clearly not a factor that users thought relevant, or even possible, when using the CMS, given its low importance ranking. During the evaluation many participants were amused at the presence of "enjoyment" in the questionnaire, and some questioned how using a CMS as part of their daily job could ever "captivate their attention". Participant 6 went so far as to say: "Enjoyment?!... Are you crazy???". Enjoyment is by nature a very affective and subjective notion. When using a system it is highly unlikely a user will suddenly stop and appreciate the enjoyment they are receiving from it—but if presented with a frustrating experience, their lack of enjoyment will quickly become apparent. As previously discussed, users find it hard to simply "dismiss" frustrating experiences (Section 2.4.1), and any sense of enjoyment they might derive from the system is therefore likely to be "outweighed" by these frustrating experiences. It is therefore thought that ratings of enjoyment in this study are the inverse of the number of frustrating experiences that users encountered. Based on this assumption, users are unlikely to appreciate any actual enjoyment they are experiencing, which may explain why Enjoyment received the lowest importance ranking.

Given the findings based on users' first impressions of the system, which corroborated the background research carried out, it is worthwhile investigating how users' impressions changed through using the system. Participants 2 and 5 found the system easier to use than they originally thought—however, it is hard to determine whether this stems from their genuine first impression of the new system, or from thinking back to the frustration experienced with the old system. Participant 1 raises an interesting point: his first impressions did not change because the experience of using the system matched his expectations. Meeting users' expectations is thought essential in achieving user satisfaction, since users will be instantly disappointed if the system does not function as they expected. This result also suggests an easy transfer of users from the old to the new system, since the evaluation participants' expectations will be based on the good and bad points of the old system. Participant 4 reserved his closing impression until he had used the system on a regular basis, to see how it handles the transfer of users from the old to the new system, and how it operates under their work processes and high-intensity use. This is a very valid point—further study of the system being used in real-world situations at THE CLIENT is required to validate these first impressions and see if—as the background research suggests—they are in fact quite lasting.

4.3 Issues & Evaluation Development

Due to the small number of participants available for the evaluation session, all participants were grouped together in a generic "content author" group of users. If more users had been available it would have been beneficial to classify users into groups and perform ANOVA to investigate the effects between groups, particularly if one group verbalised concurrently (as was the case) and the other retrospectively. A larger sample size would also have produced more conclusive results, particularly for the Aesthetics and Efficiency factors, which unfortunately produced few noteworthy results.

An inescapable bias present during the evaluation was that the majority of participants knew the new system had been developed by me, as they had been involved in the user-centred design and prototyping activities. This is simply a consequence of carrying out a single-practitioner user-centred project which includes an evaluation component.

Performing a side-by-side comparison of users completing tasks with the old system and the new system was not possible. Technical changes implemented with the new system meant it ran significantly faster than the old system, to a degree that could not be reliably controlled for in the results analysis. There is also the issue that the evaluation participants had extensive prior experience with the old system and no prior use of the new one.

The verbal protocol analysis was purposely carried out in an informal manner to allow users to explore the system freely rather than complete a set of pre-defined tasks. While this brings advantages in the kind of data that can be collected, it also prevents direct quantification and comparison of the verbal protocol transcripts. Different users performed different tasks, which would have different outcomes in terms of user satisfaction. The evaluation participants formed a generic group of THE CLIENT content authors, with different levels of experience and expertise with the existing system and differing lengths of service at THE CLIENT, and were not classified into groups. A more formal approach would enable data to be quantified and encoded more strictly; for example, when transcribing the verbal protocols it was difficult to decide what constituted uncertainty and what constituted the user simply looking around out of curiosity, having little prior experience of the new system. It would be interesting to use innovative evaluation methods such as hesitation analysis (Reeder & Maxion, 2006) to see what inferences can be made towards achieving user satisfaction. It is also worth noting that Participants 2 and 7 in particular had some difficulty verbalising effectively throughout the evaluation session.

4.4 Functional Evaluation

This section documents a brief functional evaluation carried out on the new THE SYSTEM to ensure that the requirements gathered in the functional specification (Appendix B) were met.

The following requirements were not met due to time constraints:

  1. The CMS should conform to the AA standard of Web Content Accessibility Guidelines.
  2. The CMS may offer facilities to alter the navigation of the THE CLIENT site, time permitting.
  3. The CMS interface will facilitate easier insertion of images into pages.

All other requirements were met. Comments on the verification of some of the requirements follow:

  1. The verbal protocol analysis and questionnaire results show that THE CLIENT appreciated the clean and simple user interface.
  2. Basic help was implemented as a proof of concept, and can easily be expanded throughout the system.
  3. The new system has reduced the number of clicks required by simplifying the navigation and using AJAX throughout the interface. This was reflected in the average ranking of 7.48 for the Efficiency factor, although no correlation was found due to the small sample size used for the evaluation.
  4. Preliminary findings suggest that users require minimal training to adopt the new system, and many users did comment that the system generally looked and felt familiar.
  5. User-friendly language was used where possible, bearing in mind the constraints set by the underlying structure of the system (Appendix C.1.1).
  6. The evaluation found that many users thought the system was approachable, and a correlation was found between this and the system being considered as user friendly.

The success metric set by THE CLIENT at the beginning of the case study development has been met—THE CLIENT now feel happy about giving CMS access to users with little prior training. It will only become apparent whether their tasks can be accomplished in less time and with less frustration once the system gains some real-world usage.

THE PROVIDER’s initial success metric has also been met: they are very pleased with the usability and simplicity of the new interface and its tailoring to THE CLIENT. Whether it dramatically reduces the amount of support that must be provided to THE CLIENT remains to be seen.

4.5 Summary

This chapter has reported the results of the evaluation and analysed this data to determine what users found to be particularly important factors in achieving user satisfaction, as well as the correlation between these factors. The concluding chapter will revisit the original research objectives based on these findings and also suggest areas for future work.

Chapter 5

This thesis set out to address three research objectives. These objectives are now revisited in light of the research conducted.

5.1 What factors determine user satisfaction in content management systems?

To investigate the factors determining user satisfaction in CMSs, existing research on general IT and web-based systems was first examined to see whether any of its factors were applicable to CMSs and could be “carried over” into this study.

The proven and validated user satisfaction model developed by Doll & Torkzadeh (1988) contained five factors—content, accuracy, format, ease of use and timeliness—of which format, ease of use and timeliness were deemed suitable for inclusion in this study. However, some redefinition of these factors was necessary to make them more applicable to CMS design—format was redefined as format and language, and timeliness was broken down into efficiency and speed. This produced four factors to investigate.

User satisfaction is the subjective sum of an interaction experience, and for users to derive value from their interaction a CMS must fit its context of use. The commercial environment a CMS is used in was studied, along with the pressures, demands and regular usage the system must support. Through this investigation, four determinants of user satisfaction particularly relevant to CMS design were proposed: funology (enjoyment); motivation and job satisfaction; aesthetics; and learnability and documentation. This produced a further four factors to investigate.

The importance of these eight factors was then ranked through the evaluation of a user-centred case study redesigning the THE SYSTEM CMS. The results are shown in Figure 5.1.


Figure 5.1: CMS user satisfaction factors

It was interesting to learn that the top four factors of importance were those derived from the original model developed by Doll & Torkzadeh (1988), whereas the bottom four factors relate to more affective notions which are harder to measure and quantify.

Bivariate and partial correlation analyses were conducted, and it was found that the bottom four factors are in fact positively correlated with all of the top four factors except speed. The results from this preliminary study suggest that while users may consider the more affective notions of user satisfaction to be unimportant, these notions can have a subconscious effect on the more concrete and measurable factors of ease of use, efficiency, and format and language.

Particularly strong correlations were found between several of these factors.

Although efficiency was considered important by users, no significant correlation was found between it and any of the other seven factors. Only limited correlation was found between aesthetics and the other seven factors; however, all users noted that the aesthetic qualities of the user interface, particularly its clarity and simplicity, were especially pleasing.
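As a sketch of the bivariate and partial correlation computations used in this kind of analysis, the following pure-Python example computes a Pearson coefficient and a first-order partial correlation (one controlling variable). The factor names and all ratings are invented for illustration, not taken from the evaluation data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Bivariate (Pearson) correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def partial_r(xs, ys, zs):
    """Correlation of x and y with the effect of z held constant."""
    rxy, rxz, ryz = pearson_r(xs, ys), pearson_r(xs, zs), pearson_r(ys, zs)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Invented ratings for three factors across five hypothetical participants
ease_of_use = [8, 7, 9, 6, 8]
enjoyment   = [7, 6, 8, 5, 6]
speed       = [9, 8, 7, 6, 8]
print(f"r(ease of use, enjoyment) = {pearson_r(ease_of_use, enjoyment):.2f}")
print(f"partial r (speed held constant) = {partial_r(ease_of_use, enjoyment, speed):.2f}")
```

Partialling out a third factor in this way is what distinguishes a genuine association between two factors from one induced by a shared correlate, which is why both forms of analysis were reported.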

These results—and particularly the positive correlations—therefore provide a good starting point for further investigating the notion of achieving user satisfaction in CMSs.

5.2 How important is the overall notion of user satisfaction in content management systems?

The management of web content using a CMS is often something on which a business depends (Bisson, 2003) and “the success of a site often depends on content authors being able (and willing) to use the CMS” (Robertson, 2003, 2007). However, the results of this study have shown that it is no longer sufficient to focus solely on the effectiveness and efficiency aspects of usability: satisfaction must also be considered.

This is particularly relevant to CMS design as content authors may be required to use a CMS on a daily basis. Experiencing usability issues on a regular basis can have a vast impact on many aspects of a content author’s working life, particularly job satisfaction, productivity levels and general morale and attitude, and these effects are likely to worsen over time. The affective notions of user satisfaction, particularly aesthetics and enjoyment, are thought to be especially beneficial for these long-term users by keeping them motivated to use the system. In terms of commercial practicalities, it is believed that keeping content authors motivated can encourage them to produce better content, while also ensuring the site (as seen by its audience) is kept up to date and retains a competitive advantage.

A longer-term study is required to validate the importance of user satisfaction in CMSs, particularly on an ethnographic basis in a real-world commercial environment. This sort of study could also investigate whether the eight factors found in this study can be generalised to all CMSs.

“Leading HCI teams found that systems that did well in laboratory testing often did less well in the real world” (Cockton, 2004b)

5.3 Are there any issues or constraints in achieving content management system user satisfaction in a commercial environment?

The majority of large-scale CMSs are used in a commercial environment. This brings with it a demand for measurable business results, particularly as businesses often look on a CMS as an investment and a tool which should help increase revenue. The importance ranking (Figure 5.1) corroborates this theory, as the factors found most important are discrete and measurable. CMS owners will be looking to draw a direct chain of effect between, for example, the ease of use and operating speed of their CMS, leading to an increased rate of updates on their website, leading to measurable business results and profitability.

This mentality is driven very much by money, and as a result users are not convinced of the importance of the more affective notions of user satisfaction such as motivation, aesthetics and enjoyment. However, the findings from this preliminary study suggest that these more affective notions do positively affect ease of use, efficiency, and format and language. These three factors have been shown to contribute to user satisfaction through Doll & Torkzadeh’s original 1988 study, further validation and testing studies (Doll & Torkzadeh, 1991; Doll et al., 1994), and preliminary studies investigating the model in a web-based system context (Zviran et al., 2006; Xiao & Dasgupta, 2002). It can therefore be deduced from this, and from the positive correlations and qualitative data presented in this study, that ensuring these more affective notions are included in CMS design will lead to increased user satisfaction.

There are often issues and challenges in getting users to appreciate the benefits afforded by a usable system, even more so if usability is an additional cost in the system’s development. Given that the notion of user satisfaction is “particularly evasive” (Lindagaard & Dudek, 2003) and relates to affective notions which are difficult to measure and quantify, it will be even harder to convince paying clients of the benefits user satisfaction can bring. This is made more difficult still by the relatively immature CMS marketplace (Byrne, 2005a), where vendors compete with a strong focus on features and functionality. That focus is understandable for the very reasons shown in this study: it is what users want and consider important. However, too strong a focus on features can lead to feature fatigue (Section 2.5.1) to the detriment of user satisfaction.

To summarise, there is currently something of an “uphill struggle” facing CMS developers who wish to achieve user satisfaction in their systems, and to communicate and instil its benefits to their clients.

5.4 Summary

This study has addressed all three research objectives and has certainly achieved its functional goals: both THE PROVIDER and THE CLIENT are delighted with the new system and the benefits in usability and user satisfaction it brings.

The preliminary findings from this study suggest that user satisfaction is very much worth achieving in content management systems, and eight factors for doing so have been proposed and validated. A more exhaustive—possibly ethnographic—study is required of these eight factors with a larger sample size, and employing more advanced statistical analysis methods. Further research in this area is therefore recommended.


   37signals. (2006). Getting Real: The smarter, faster, easier way to build a successful web application.

   9rules. (2005). AJAX Is The New Flash. Retrieved on 28 June 2007 from

   Abdinnour-Helm, S., Chaparro, B., & Farmer, S. (2005). Using the End-User Computing Satisfaction (EUCS) Instrument to Measure Satisfaction with a Web Site. Decision Sciences, 36(2).

   Agarwal, R., & Karahanna, E. (2000). Time Flies When You’re Having Fun: Cognitive Absorption and beliefs about Information Technology usage. MIS Quarterly, 24(4).

   Van Dusen, A. (2007). Don’t Let Workplace Stress Wreck Your Life. Retrieved on 28 June 2007 from

   Ang, J., & Soh, P. (1997). User information satisfaction, job satisfaction and computer background: An exploratory study. Information & Management, 32.

   Angeles, M. (2006). CMS Simplicity. Retrieved on 25 August 2007 from

   Angeli, A. D., Sutcliffe, A., & Hartmann, J. (2006). Interaction, Usability and Aesthetics: What Influences Users’ Preferences? In DIS ’06: Proceedings of the 6th ACM Conference on Designing Interactive Systems. ACM Press.

   Baroudi, J., Olson, M., & Ives, B. (1986). An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction. Communications of the ACM, 29(3).

   Bessière, K., Newhagen, J., Robinson, J., & Shneiderman, B. (2006). A model for computer frustration: the role of instrumental and dispositional factors on incident, session, and post-session frustration and mood. Computers in Human Behavior, 22(6).

   Bevan, N., & Macleod, M. (1994). Usability measurement in context. Behaviour & Information Technology, 13(1).

   Bisson, S. (2003). No site too small. Retrieved on 1 May 2007 from

   Blythe, M., & Hassenzahl, M. (2004). The Semantics of Fun: Differentiating Enjoyable Experiences. In M. Blythe, K. Overbeeke, A. Monk, & P. Wright (Eds.), Funology: From Usability to Enjoyment. Kluwer Academic Publishers.

   Blythe, M., Overbeeke, K., Monk, A., & Wright, P. (Eds.). (2004). Funology: From Usability to Enjoyment. Kluwer Academic Publishers.

   Boiko, B. (2005). Content Management Bible (Second ed.). Wiley Publishing.

   Brown, D. (2005). Creating content management culture: A government case study. Retrieved on 1 May 2007 from

   Butler, J., Holden, K., & Lidwell, W. (2007). Universal Principles of Design: 100 ways to enhance usability, influence perception, increase appeal, make better design decisions and teach through design. Rockport Publishers.

   Byrne, T. (2005a). Oh What A Feature: Functional usability of web content management systems. EContent, 28(5).

   Byrne, T. (2005b). Oh What A Feeling: Applying usability principles to your CMS. EContent, 28(3).

   Byrne, T. (2007). CMS Watch: The Web CMS Report, Version 11 [Sample Edition]. Retrieved on 29 May 2007 from

   Chin, J., Diehl, V., & Norman, K. (1988). Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface. ACM CHI’88 Proceedings.

   Cockton, G. (2004a). From Quality in Use to Value in the World. CHI ’04: CHI ’04 extended abstracts on Human factors in computing systems.

   Cockton, G. (2004b). Value-Centred HCI. NordiCHI ’04: Proceedings of the third Nordic conference on Human-computer interaction.

   Cockton, G. (2005a). A Development Framework for Value-Centred Design. CHI ’05: CHI ’05 extended abstracts on Human factors in computing systems.

   Cockton, G. (2005b). L’Avenir de l’Interface - The Future of the Interface. Retrieved on 4 August 2007 from

   Cockton, G. (2006). Designing Worth is Worth Designing. NordiCHI ’06: Proceedings of the 4th Nordic conference on Human-computer interaction.

   Cooper, A. (2004). The Inmates Are Running The Asylum: Why high-tech products drive us crazy and how to restore the sanity. Sams Publishing.

   Cooper, A., Reimann, R., & Cronin, D. (2007). About Face 3: The Essentials Of Interaction Design. Wiley Publishing.

   Davis, F. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3).

   Davis, F., Bagozzi, R., & Warshaw, P. (1992). Extrinsic and Intrinsic Motivation to use Computers in the Workplace. Journal of Applied Social Psychology, 22(14).

   Davis, S., & Wiedenbeck, S. (2001). The mediating effects of intrinsic motivation, ease of use and usefulness perceptions on performance in first-time and subsequent computer users. Interacting with Computers, 13.

   Desmet, P. (2004). Measuring Emotion: Development and Application of an Instrument to Measure Emotional Responses to Products. In M. Blythe, K. Overbeeke, A. Monk, & P. Wright (Eds.), Funology: From Usability to Enjoyment. Kluwer Academic Publishers.

   Dix, A., Finlay, J., Abowd, G., & Beale, R. (2004). Human-Computer Interaction (Third ed.). Prentice Hall.

   Doll, W., & Ahmed, M. (1985). Documenting information systems for management: a key to maintaining user satisfaction. Information & Management, 8(4).

   Doll, W., & Torkzadeh, G. (1988). The Measurement of End-User Computing Satisfaction. MIS Quarterly, 12(2).

   Doll, W., & Torkzadeh, G. (1991). The Measurement of End-User Computing Satisfaction: Theoretical and Methodological Issues. MIS Quarterly, 15(1).

   Doll, W., Xia, W., & Torkzadeh, G.(1994). A Confirmatory Factor Analysis of the End-User Computing Satisfaction Instrument. MIS Quarterly, 18(4).

   Dürsteler, J. (2005). AJAX. Retrieved on 28 June 2007 from

   Eason, K. (1995). User-centred design: for users or by users? Ergonomics, 38, 1667-1673.

   Etezadi-Amoli, J., & Farhoomand, A. (1991). On End-User Computing Satisfaction. MIS Quarterly, 15(1).

   Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments & Computers, 35(3).

   Frøkjær, E., Hertzum, M., & Hornbæk, K. (2000). Measuring Usability: Are Effectiveness, Efficiency and Satisfaction Really Correlated? In CHI ’00.

   Garrett, J. J. (2002). The Elements of User Experience. New Riders.

   Gemoets, L., & Mahmood, M. A. (1990). Effect of the Quality of User Documentation on User Satisfaction with Information Systems. Information & Management, 18.

   Gould, J., & Lewis, C. (1985). Design for usability: Key principles and what designers think. Communications of the ACM, 28(3).

   Greenbaum, J., & King, M. (1991). Design at Work: Cooperative Design of Computer Systems. Lawrence Erlbaum Associates.

   Hamid, N. (2007). Under the Hood at Retrieved on 22 May 2007 from

   Hassenzahl, M. (2004). The Thing And I: Understanding the Relationship between User and Product. In M. Blythe, K. Overbeeke, A. Monk, & P. Wright (Eds.), Funology: From Usability to Enjoyment. Kluwer Academic Publishers.

   Higgins, B. (2007). The Uncanny Valley of User Interface Design. Retrieved on 25 May 2007 from

   Hiltz, S. R., & Johnson, K. (1990). User Satisfaction with Computer-Mediated Communication Systems. Management Science, 36(6).

   Hummels, C., Overbeeke, K., & Helm, A. van der. (2004). The Interactive Installation ISH: In Search of Resonant Human Product Interaction. In M. Blythe, K. Overbeeke, A. Monk, & P. Wright (Eds.), Funology: From Usability to Enjoyment. Kluwer Academic Publishers.

   Kaderbhai, T. (1998). Overcoming Inertia within a Large Organisation. In L. Trenner & J. Bawa (Eds.), The Politics of Usability: A Practical Guide to Designing Usable Systems in Industry.

   Keil, M., & Carmel, E. (1995). Customer developer links in software development. Communications of the ACM, 38(5).

   Kowalski, M. (2002a). Content Management Usability. Retrieved on 1 May 2007 from

   Kowalski, M. (2002b). Evaluating CMS Usability. Retrieved on 1 May 2007 from

   Krug, S. (2005). Don’t Make Me Think!: A common sense approach to web usability (Second ed.). New Riders.

   Kurosu, M., & Kashimura, K. (1995). Apparent Usability vs. Inherent Usability. In CHI ’95 Conference Companion.

   Lazar, J., Jones, A., Hackley, M., & Shneiderman, B. (2006). Severity and impact of computer user frustration: A comparison of student and workplace users. Interacting with Computers, 18.

   Lewis, J. (1995). IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use. International Journal of Human-Computer Interaction, 7(1).

   Liebel, G. (2007). Tab Navigation. Retrieved on 4 August 2007 from

   Lindagaard, G., & Dudek, C. (2003). What is this evasive beast we call user satisfaction? Interacting with Computers, 15.

   Locke, E., Shaw, K., Saari, L., & Latham, G. (1981). Goal setting and task performance. Psychological Bulletin.

   MacManus, R. (2007). Rich Internet Applications vs HTML. Retrieved on 28 June 2007 from

   Marick, B. (2007). Six years later: What the Agile manifesto left out. Retrieved on 25 May 2007 from

   Merholz, P. (2007). Experience is the Product. Retrieved on 6 July 2007 from

   Mickiewicz, M. (2006). Interview with Jakob Nielsen. Retrieved on 28 June 2007 from

   Modal window. (2007). In Wikipedia, The Free Encyclopedia. Retrieved 3 August 2007 from

   Murrell, A., & Sprinkle, J. (1993). The impact of negative attitudes towards computers on employee’s satisfaction and commitment within a small company. Computers in Human Behaviour, 9(1).

   Nielsen, J. (1993). Usability Engineering. Academic Press.

   Nielsen, J. (2000a). Drop-Down Menus: Use Sparingly. Retrieved on 4 August 2007 from

   Nielsen, J. (2000b). Why You Only Need To Test With 5 Users. Retrieved on 8 May 2007 from

   Nielsen, J. (2001). Success Rate: The Simplest Usability Metric. Retrieved on 29 August 2007 from

   Nielsen, J. (2004). Introduction to Section 2. In M. Blythe, K. Overbeeke, A. Monk, & P. Wright (Eds.), Funology: From Usability to Enjoyment. Kluwer Academic Publishers.

   Nielsen, J. (2007). Does User Annoyance Matter? Retrieved on 4 August 2007 from

   Nielsen, J., & Loranger, H. (2006). Prioritizing Web Usability. New Riders.

   Norman, D. (2003). Attractive Things Work Better. Retrieved on 25 May 2007 from

   Noyes, J., & Barber, C. (1999). User-Centred Design of Systems. Springer-Verlag.

   Olsen, H. (2007). The dark side of prototyping. Retrieved on 1 August 2007 from

   Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction Design: Beyond Human-Computer Interaction. Wiley Publishing.

   Radiant CMS. (2007). Radiant CMS. Retrieved on 4 July 2007 from

   Reeder, R., & Maxion, R. (2006). User Interface Defect Detection by Hesitation Analysis. International Conference on Dependable Systems and Networks (DSN’06).

   Robertson, J. (2003). The Importance Of Content Management System Usability. Retrieved on 5 April 2007 from

   Robertson, J. (2007). 11 Usability Principles For CMS Products. Retrieved on 11 May 2007 from

   Saw, J. (2000). Design Notes: Gestalt. Retrieved on 4 August 2007 from

   Simpson, N. (1998). Cultivating an Effective Client Relationship to Promote a User-Centred Culture. In L. Trenner & J. Bawa (Eds.), The Politics of Usability: A Practical Guide to Designing Usable Systems in Industry.

   Stafford, J. (2006). Upgrading to AJAX apps increases productivity. Retrieved on 12 July 2007 from

   Surowiecki, J. (2007). Feature Presentation. In The New Yorker. Retrieved on 25 May 2007 from

   Sutcliffe, A. (2002). User-Centred Requirements Engineering. Springer-Verlag.

   Taylor, W. (1999). Inspired by Work. Retrieved on 12 July 2007 from

   Tractinsky, N. (1997). Aesthetics and Apparent Usability: Empirically Assessing Cultural and Methodological Issues. In CHI ’97.

   Trenner, L., & Bawa, J. (1998). The Politics of Usability: A Practical Guide to Designing Usable Systems in Industry. Springer-Verlag.

   Userfocus. (2007). ISO 9241-11. Retrieved on 15 August 2007 from

   Veen, J. (2004). Making A Better CMS. Retrieved on 5 April 2007 from

   Wong, Y. Y. (1992). Rough and Ready Prototypes: Lessons from Graphic Design. In CHI ’92.

   Woolrych, A., & Cockton, G. (2001). Why and When Five Test Users Aren’t Enough. Retrieved on 30 August 2007 from

   Xiao, L., & Dasgupta, S. (2002). Measurement of User Satisfaction with Web-Based Information Systems: An Empirical Study. Eighth Americas Conference on Information Systems.

   Zajonc, R. (1980). Feeling and thinking: preferences need no references. American Psychologist, 35(2), 151-175.

   Zviran, M., Glezer, C., & Avni, I. (2006). User satisfaction from commercial web sites: The effect of design and use. Information & Management, 43.

Appendix A
Content Management System Usability Principles

The following eleven usability principles are defined in Robertson (2007) and summarised below.

A.1 Minimise the number of options

As the functionality of a CMS increases, so does the number of buttons, menu items and links. This can easily overwhelm users, increasing the learning time and introducing serious usability problems. The following guidelines should be followed:

A.2 Be robust and error-proof

While web-based applications are inherently less robust against user actions and system problems, every effort should be made to address these limitations. The following guidelines are suggested:

A.3 Provide task-based interfaces

CMSs are used to complete tasks and as such the CMS should provide a task-based interface that matches these common activities. It is suggested that the functionality be divided into broad categories that match the way the CMS will be used—for example, back-end administrative settings should be kept separate from authoring tasks used by general users.

A.4 Hide implementation details

Users should be able to manage their website without having to understand any of the behind-the-scenes implementation-level details. The primary reason for purchasing a CMS is often to reduce or eliminate the need for technical knowledge, so a CMS should therefore:

However, care has to be taken not to make the interface too abstract from the implementation details so users are not “divorced” from the realities of managing a website (see principle 6).

A.5 Meet core usability guidelines

A CMS should be cleanly laid out, with options clearly marked and information presented in an understandable way. Usability guidelines such as those by Nielsen & Loranger (2006) and Krug (2005) will help to achieve this.

A.6 Match authors’ mental models

The mental model that the majority of users have when using a CMS is that of editing a website consisting of pages of content linked together in various ways. A CMS must match this mental model to avoid user confusion and frustration, therefore:

This can be a particular issue for CMSs based on an asset-centric model (comprising “content objects”) rather than a page-centric model, as the unfamiliar “content object” metaphor can be difficult for users to grasp and can seem nonsensical in a website consisting (from the user’s perspective) of “pages”.

A.7 Support both frequent and infrequent users

There are generally frequent and infrequent users of a CMS, and they often have different needs. Frequent users will regularly create and maintain large numbers of pages and may require greater functionality. Infrequent users will create or maintain the occasional page and may prefer in-context editing. CMSs should cater for both of these distinct sets of user needs.

A.8 Provide efficient user interfaces

As well as being easy to use, a CMS should also be efficient, particularly for frequent users. Web-based systems are inherently at a disadvantage in efficiency compared to desktop-based systems, but this can be improved by:

Consideration should be given to striking a balance between ease of use and efficiency throughout a CMS.

A.9 Provide help and instructions

CMSs can be overwhelming especially for first-time users, particularly since most CMS vendors use their own terminology for different aspects of content creation and publishing. Clear guidance and instructions should be provided as well as context-sensitive help covering all major features of the CMS.

A.10 Minimise training required

The majority of CMS users will be general office staff, so a CMS should be designed in a way that minimises the amount of initial training required. Reduced training is a by-product of principles 1, 6 and 9, but should also be kept in mind as a distinct goal when designing a CMS.

A.11 Support self-sufficiency

This is an extension of the core concepts of usability to make a CMS truly usable—users should be able to complete common tasks without having to rely on third-party assistance or support.

Appendix B
Functional Specification

The following requirements were gathered and derived during the requirements gathering process. They define the constraints on the development and give a clear picture of what will be included in the new THE SYSTEM. Requirements are numbered for future reference.

B.1 THE PROVIDER’s Requirements

  1. The CMS will build upon the existing THE SYSTEM backend.
  2. The CMS will remain operational with the THE SYSTEM Firefox toolbar.
  3. The CMS will not restrict or interfere with any of the other interfaces to THE SYSTEM e.g. Subversion or WebDAV.
  4. The CMS will be compatible with recent versions of the Firefox browser.
  5. The CMS will be free of unnecessary visual elements and present a simple, streamlined interface to the user.
  6. The CMS will provide basic in-context help and links to an external support system.
  7. The CMS should operate at the fastest speed possible by using highly optimised code.
  8. The CMS interface should use the jQuery JavaScript library if AJAX effects are required, rather than one of the other available JavaScript libraries.
  9. The CMS should conform to the AA standard of Web Content Accessibility Guidelines.
  10. The CMS may offer facilities to alter the navigation of the THE CLIENT site, time permitting.

B.2 THE CLIENT’s Requirements

  1. The CMS will cut down on the number of clicks required to perform common tasks.
  2. The CMS will provide sensible transition from the current system and users should require minimal training to adapt to the new system.
  3. The CMS will provide an interface and workflow that the Marketing Manager feels confident to hand to junior members of staff to use.
  4. The CMS will use user-friendly language and human-readable error messages to help minimise the training required to use the system.
  5. The CMS will provide help to content authors.
  6. The CMS interface will be less intimidating for new users.
  7. The CMS interface will facilitate easier insertion of images into pages.
  8. The CMS interface will provide a clearer indication as to which areas of a page are editable, and will increase the font size in the text editor.
  9. The CMS will make it simpler to publish the site and keep users informed of its progress.
  10. The CMS will be used on at least a 1024x768 screen resolution and must fit into this area without horizontal scrollbars.

Appendix C

This appendix documents the prototyping process carried out during case study development.

C.1 First Iteration: Paper-Based Prototypes

The first round of prototypes was developed as hand-drawn annotated sketches of the major components of the THE SYSTEM interface (Figures C.1–C.4). Paper-based prototyping ensures that designs are “lightweight and fast to create” and serve to “identify issues and encourage discussion” (Wong, 1992). Furthermore, if detailed prototypes are presented to users, their attention is inevitably drawn to the visual details—colour, typography, layout—rather than the much more important functional issues (Olsen, 2007). Paper-based prototyping also prevents the designer from wasting time on designs that will be changed anyway, and ensures they do not become “attached to their creation”—people hate changing things that have taken them a long time to create (Olsen, 2007).


Figure C.1: File manager


Figure C.2: Add folder, add page, delete page/folder


Figure C.3: Edit page, insert image


Figure C.4: Reviewing and publishing

C.1.1 Key Design Decisions

The current interface focuses strongly on the underlying file and folder structure of the THE CLIENT website—something which is generally not recommended (Appendix A)—and users must use page filenames to manage the site; for example, “about/index.html” represents the “About Us” page. Ideally, users would be given a representation of pages as they are structured on the THE CLIENT site—a structure they are likely to be familiar with—which would assist them in finding the page they require. The Firefox toolbar does combat this issue through in-context editing; however, the toolbar cannot be relied on as the only method of interacting with the CMS. “Files” should therefore be represented as “pages” with a descriptive name (“About Us”) rather than an implementation-level filename (“about/index.html”). In practice, however, there are several reasons why this cannot be implemented. The primary reason is that the THE CLIENT site does not have page titles which uniquely identify a page.

The page titles have been written in this “user unfriendly” manner for search engine optimisation purposes. If the user wished to edit the “About Us” page, they would have to click on a link with the text “Real-time statistics and data from THE CLIENT”—clearly not representative of the “About Us” page, and also the same link text as if the user wished to edit the home page. One way to combat this issue would be to insert a hidden tag into each page containing the true page title—for example, “About Us”—however this would be time-consuming and difficult to manage, and is something that THE PROVIDER are not keen to implement.

Therefore, although this is against recommended CMS usability principles, the file and folder structure will remain in the interface. The interface will, however, attempt to afford a consistent interaction metaphor—“files” will be referred to as “pages” consistently throughout the system. It is also the case that existing users are likely to have adopted the conceptual model of file/folder management, and the operation of the CMS in this manner has “taken on the status of a convention” (Garrett, 2002). This conceptual model is also familiar to content authors at THE CLIENT through using the Windows operating system. Furthermore, keeping the file and folder conceptual model will help to minimise the training required to adopt the new system [Requirement P2], and embracing convention will help to support user self-sufficiency (Appendix A).

One of the major findings of the design work was that users were frustrated at the number of clicks required to complete common tasks [Requirement P1], an issue largely caused by the use of JavaScript alerts (Figure C.5(a)) for confirmation and notification. However, there is a tension between the needs of the two identified personas—James likes to be kept informed of what is happening; Roger does not want to have to dismiss endless “OK” dialog boxes. An ideal solution is to use an AJAX “flash” message—a message which fades after a specified number of seconds, without user intervention (Figure C.5(b)). This satisfies the needs of both personas and will be used throughout the new interface.

(a) JavaScript alert
(b) AJAX “flash” message
Figure C.5: Notification methods
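The flash-message behaviour can be sketched in a few lines of plain JavaScript. This is illustrative only: the requirements specify the jQuery library for any AJAX effects, and the function and class names here are assumptions rather than THE SYSTEM's actual code.

```javascript
// Sketch of an auto-dismissing "flash" message (names assumed).
// The message appears after an action completes and removes itself
// after `timeoutMs`, so no "OK" button needs dismissing.

function createFlashElement(doc, text) {
  const el = doc.createElement('div');
  el.className = 'flash'; // styled via CSS to sit at the top of the page
  el.textContent = text;
  return el;
}

function showFlash(doc, text, timeoutMs = 3000) {
  const el = createFlashElement(doc, text);
  doc.body.appendChild(el);
  // Fade without user intervention: James still sees the confirmation,
  // while Roger never has to dismiss a dialog box.
  setTimeout(() => el.remove(), timeoutMs);
  return el;
}
```

A longer timeout (or a fade rather than an abrupt removal) would address Participant One's comment that the message disappeared "a little bit quick".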

The following sections summarise the design decisions and rationale used on each of the paper-based prototypes.

C.1.1.1 Figure C.1

C.1.1.2 Figure C.2

C.1.1.3 Figure C.3

C.1.1.4 Figure C.4

C.1.2 Evaluation

These paper-based prototypes were discussed during a meeting with THE PROVIDER and THE CLIENT. The following issues were raised and will help to guide the next prototyping iteration.



C.2 Second Iteration: Wireframes

The second round of prototypes was developed as wireframes of the major components of the THE SYSTEM interface (Figures C.6–C.10). These wireframes include more styling and formatting than the sketches—but not to the extent that further changes to the wireframes would become difficult—and give the user a rough idea of the visual design of the finished interface.


Figure C.6: Site manager, modal windows, flash messages, advanced options


Figure C.7: Add page window


Figure C.8: Editing content


Figure C.9: Image manager


Figure C.10: Review and publish

C.2.1 Key Design Decisions

Based on the evaluation of the previous iteration of prototypes, and the introduction of the “publish on review” feature, the navigation has been streamlined into two tabs. The “Site Manager” encompasses the management of files, but presents them under the context of managing files as part of a website. The “Review & Publish” tab integrates the old “Review” and “Publish” tabs, groups related tasks and reduces the amount of clicking required to complete common tasks. The navigation remains in a tabbed format—an example of a “real world” metaphor that has proven to work well on the Internet (Liebel, 2007; Krug, 2005).

It has been decided that modal windows will “fade out” the rest of the interface to ensure that users’ attention is drawn to the modal window itself.
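The “fade out” behaviour amounts to placing a semi-transparent overlay between the interface and the modal window. A minimal sketch in plain JavaScript follows; the class name and inline styling are assumptions for illustration, not THE SYSTEM's actual implementation.

```javascript
// Sketch of a modal "fade out": an overlay dims the interface so the
// user's attention falls on the modal window sitting above it.

function openModal(doc, modal) {
  const overlay = doc.createElement('div');
  overlay.className = 'modal-overlay'; // assumed class name
  // Cover the whole viewport with a semi-transparent dark layer.
  overlay.style = 'position:fixed;top:0;left:0;width:100%;height:100%;' +
                  'background:rgba(0,0,0,0.5);';
  doc.body.appendChild(overlay);
  // The modal is appended after the overlay so it renders on top.
  doc.body.appendChild(modal);
  return overlay;
}
```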

The interface will also be tailored to the branding of THE CLIENT, including the logo and adopting the colour scheme where appropriate. It is hoped that the branding of the interface will help to build a more engaging relationship with the user, which in turn is likely to increase user satisfaction.

The following sections summarise the design decisions and rationale specific to each of the wireframes.

C.2.1.1 Figure C.6

C.2.1.2 Figure C.7

No design decisions specific to this wireframe have been made beyond the findings from the first iteration of prototypes.

C.2.1.3 Figure C.8

A yellow border will indicate the editable areas of a page. Yellow is a colour that has become synonymous with editing actions in many “Web 2.0” applications, including Flickr (Figure C.11(a)) and Basecamp (Figure C.11(b)).

(a) Flickr
(b) Basecamp
Figure C.11: Indicating an “edit” action

C.2.1.4 Figure C.9

THE CLIENT are by no means alone in finding long drop-down boxes difficult to use—a recent study by Nielsen (2007) found this was a particularly widespread cause of user annoyance. One of the main issues is that users are prevented from “seeing all their options in a single glance” (Nielsen, 2000a). Therefore, it has been decided to vertically expand the drop-down box so the user can see more options with less scrolling.
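In HTML, this expansion can be achieved by raising the `size` property of the `select` element, which turns the drop-down into a scrollable list box showing several options at once. The sketch below is illustrative only; the row count is an assumption, not a figure from the design.

```javascript
// Sketch: vertically expand a long drop-down so users can see more
// options "in a single glance". A <select> with size > 1 renders as a
// scrollable list box rather than a single collapsed row.

function expandDropDown(select, visibleRows = 10) {
  // Never show more rows than there are options to fill them.
  select.size = Math.min(visibleRows, select.options.length);
  return select.size;
}
```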

C.2.1.5 Figure C.10

Design decisions are as in Section C.2.1.1. It has also been decided to implement in-context help as yellow tooltips which appear when the user hovers the mouse over a help icon. Similar idioms for help systems are used in Microsoft Office (Figure C.12), an application familiar to content authors at THE CLIENT.


Figure C.12: In-context help in Microsoft Office
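The tooltip idiom reduces to showing a hidden element on mouseover and hiding it again on mouseout. The sketch below, in plain JavaScript, is a simplified illustration; the element structure and names are assumptions.

```javascript
// Sketch of the yellow in-context help tooltip: the tip element starts
// hidden and is toggled by hover events on its help icon.

function attachHelpTooltip(icon, tip) {
  tip.hidden = true; // help stays out of the way until requested
  icon.addEventListener('mouseover', () => { tip.hidden = false; });
  icon.addEventListener('mouseout',  () => { tip.hidden = true;  });
}
```

Because the help only appears on demand, it satisfies both new users who need guidance and experienced users who would find permanent help text cluttering.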

C.2.2 Evaluation

These wireframes were discussed during a meeting with THE PROVIDER and THE CLIENT. The following issues were raised and will help to guide the next prototyping iteration.



Figure C.13: Basecamp icons


C.3 Third Iteration: Revised Wireframes

The third round of prototypes builds on the wireframes from the previous iteration, incorporating the evaluation feedback. The wireframes on the following pages show only the interfaces which have changed significantly from the previous iteration.


Figure C.14: Site manager


Figure C.15: Edit mode


Figure C.16: Add page window, editing content


Figure C.17: Review and publish

C.3.1 Key Design Decisions

The following sections summarise the design decisions and rationale specific to each of the revised wireframes.

C.3.1.1 Figure C.14

C.3.1.2 Figure C.15

A bar has been added to the in-context editing screen to remind users that they are in editing mode, with a clear button provided to leave editing mode. The bar is located outside of the site design to ensure it draws the user’s attention and is not mistaken for part of the actual site design. The bar retains the yellow colour that has become synonymous with editing.

C.3.1.3 Figure C.16

The icons in this wireframe have not been updated to reflect the findings from the previous iteration—this will take place during implementation. The “OK” button has been renamed “Save” to better reflect the action taking place, and the save icon will be removed from the toolbar, leaving it dedicated to text-formatting actions.

C.3.1.4 Figure C.17

Creating the right-hand column reduced the space left for the “Review & Publish” area of this screen. It was therefore necessary to replace the “Revert” and “Approve” buttons with icons, which is also consistent with the visual style of the “Site Manager” screen. However, this produced two green icons of similar style, adjacent to each other, with very different actions. As “revert” is similar to a “cancel” operation, it was decided to colour its icon red to distinguish it from the “approve” icon.

C.3.2 Evaluation

The revised wireframes were discussed with both THE PROVIDER and THE CLIENT. All users were happy with the changes made and they felt they reflected their objectives for the system.

Appendix D
Verbal Protocol Transcripts

The transcripts from the verbal protocol analysis carried out during the evaluation session are shown in the following seven sections. Watching the screencast should clarify what actions the participants are carrying out; however, further detail has been included in square brackets where thought necessary.

The coding scheme is as follows:

A positive comment.
The user is uncertain about either an action, or what has just happened.
A negative comment, suggesting area for improvement.
A functional issue or an explicitly suggested feature.

D.1 Participant One

right, so, i’m just looking at the file listing which is all clear (.) open and close folders (.) and that’s very obvious that they’re pages within there, those particular ones (.) um (.) the icons seem relatively obvious in terms of what they do, certainly once i’ve rolled over them once i don’t have any, any issues at all (.) um i can see the published icons down the right (.) um (.) they all say published at the moment so i’m not sure what the difference would be if they didn’t say that but i’m sure that’ll become obvious (.) so i’m going to click onto one of the pages (.) and i can see an editable version of the page with the areas clearly marked, and i can edit (.) they highlight when i roll over them (.) that’s good, clear (.) i’ve clicked into a section (.) seems to have jumped off the page (.) if i scroll down i can see that area again (.) it jumped again when i clicked into the text, but (.) [user types] (.) and then i’m going to save that section (.) um i’ll make that bold first (.) so, save that page, i’m confirming that with the dialog on the screen (.) and it’s let me know it’s been saved (.) i can see my changes have been saved in the page, styled in the right manner (.) ok so i’m gonna try and close this page now (.) um (.) not sure if cancel is the right button now i’ve finished editing, so i’ll just close the page [browser window] at the top (.) um (.) um (.) i can’t remember which page it was [user wishes to check the page they have edited] (.) i can’t see from the right column which one it was (.) [user refreshes page] (.) ah, i can see clearly it’s that one saying pending, very nice it stands out very clearly in the list (.) so i’m clicking on the review and publish tab and i can see there’s the page i have updated (.) i will approve this change (.) i can see there’s a message very briefly at the top of the screen which i think was confirming my edit (.) the message was a little bit quick (.) a delay on that message would help me (.) 
if i go back to the site manager now (.) expand all folders [user is looking to confirm page status has been updated] (.) ok (.) i’m going to add the word “fact” to the search box at the top (.) and i can see it’s been published (.) [user checks information] and i can see it was last edited by me at 3:15 (.) ok from there [the search results] without moving i’m now gonna try and add a folder within the about us folder (.) so small dialog has come up on the screen, typed in the name of my folder and clicked ok (.) um (.) [site manager now shows all files, not the search results - user hesitates] so if i then look for the folder (.) i’m gonna refresh the page to see if my folder then shows up (.) (.) ah, i think i have to approve my folder first (.) when the message appears to tell me the folder’s been added, it would be good if it reminded me it had to then be approved before i can see it (.) so i’m gonna go into the review and publish tab and approve that (.) [in site manager] so i can now see my new folder, and i’m going to now delete it (.) confirmation message, all very clear (.) and going back into the “about us” folder (.) ah it’s still there (.) oh wait i have to approve my deletion, makes sense really as i guess its quite a risky operation if it were full of pages (.) [user approves deletion] great! (.) and finally i’ll try publishing the site (.) um are these links as well? (.) hmm there’s no quick links to either that test or live server (.) it would be nice to have some way of doing that (.) ok (.) advanced options, i like the way this [advanced options] slides up and down (.) so just looking at the little help icons on the right (.) which (.) yeah i understand, clear help (.) so i click on publish to publish site to the live server (.) got a box with a little spinning thing in it (.) estimated time remaining (.) much quicker than it claimed which was good

D.2 Participant Two

i’m going to add a folder to the um about us section of the site (.) so i’m clicking on the about us folder and... the add folder button (.) um (.) and i can see my folder added there, but it’s telling me it’s pending so i need to go and approve that (.) going to review and publish tab and that’s where i found my folder to approve, so i’m gonna click on the approve button (.) i’ve got a comments box and i’m guessing this posts a comment to the server log, and i can choose whether to publish my change now or not (.) in this instance i’m not going to publish (.) click ok there (.) so i’m done, so i’m gonna go and add some pages into that folder now (.) going back to site manager, back into about us, and going to add a page by finding and clicking the add page button (.) ok now i’m going to choose my template, so i’m gonna go a click along to see if i can find an about template, which i can (.) give it a name, ok (.) cool now i’m going to edit my page to put some content in it (.) um (.) i’m going to edit the page title (.) i can see my text box [user types content] (.) going to make this bold, and this a link (.) so i’ve got a popup asking me to save or cancel, i want to save my changes (.) saved those (.) changes have been made (.) um... i need to save the page now [user hesitates] no i don’t, i’m going to view the page [user changes task due to hesitation] (.) page looks good (.) close that window (.) so... um... oops... now i’m a little unsure as to whether i need to save the page or not, but i’m gonna go ahead and just close the tab and assuming that when i click back... [user views page]... yeah the change has been saved (.) so i’m gonna go and approve this in the review and publish section (.) so i’m gonna approve my page, put a note in the comments box, and publish this change the the test server (.) um... so i think i’m done!

D.3 Participant Three

ok so i’m going to add a folder (.) so looking around to see how i’m going to do that (.) i’m drawn to this, which seems to be right [user views tooltip on icon] (.) um and um i quite like how everything is blanked out on the screen except this box (.) [user adds folder] yeah that’s quite obvious um though i suppose i’m not sure if it’s worked (.) so i’ve found my folder now, looks like it’s been added correctly um and i can see quite clearly it says pending, which i presume means it needs approving (.) i was going to say maybe it would be nice if the pending file was highlighted a bit more so it stood out... but actually, your eye is drawn to that status icon because it stands out so much from the others all saying published (.) ok i’ll add a page into this folder i’ve created (.) first thing i tried to do then was open the folder, but of course it’s empty, so that would be a bit daft (.) and i guess that’s it [user finds add page icon] (.) i didn’t see a tooltip explaining what it was but perhaps i was too quick (.) um so i guess you’d have thumbnails here of what each page would look like (.) that, transition maybe goes a bit too fast, or maybe that’s just an illusion ’cos all the thumbnails are the same at the moment (.) um (.) [user adds page] so i can see a message here, i know it’s doing something (.) um maybe i’d expect to see the file now in the site manager, but i guess i have to approve it first (.) ah yeah i can see my page and folder here which need approving (.) [user clicks to approve files] and yeah this option to publish a change to either server now looks particularly useful (.) um yeah and that worked really fast (.) so lets see if my changes have been approved [user opens site manager tab] looking for my folder, and oh yes here they are (.) going to edit this now (.) [user looks for edit page option] so i can see the view page icon, so i’m not sure whether to click on the page name or whether to click on that [user clicks view page icon] (.) 
ah you see that’s not it, i’m trying to find edit (.) ah [user clicks on page name] that’s it (.) um ok well first of all i can see these editable areas really really clearly um and i love this yellow bar at the top (.) um so yeah it’s quite obvious to me where i can edit and where i can’t which is good (.) um i’m just having a nosey around (.) these tooltips on the icons are helpful, i always get annoyed when there’s icons without tooltips, just a personal preference (.) um but i won’t click on those for now (.) [user clicks on editable area] (.) hmm i’m waiting for something to happen (.) ah... it has happened [editable area had opened further down the page, not visible without scrolling] (.) ah ok i didn’t expect that to happen but um i’ll just put in my change here (.) and um look for the save button (.) again i like the popup dialogs they work really well and you’re under no doubt what’s going on (.) and now i’m assuming that my change is there... yes there it is (.) um (.) so now, i’m not sure what i should do (.) um i’m looking for a return to site manager button (.) can’t find that so i’m just closing this editing window (.) ok so going to approve this change now (.) yeah and now i’ll do a publish of the site (.) just reading these instructions (.) um i’ll choose the live site (.) i expected that to be a link, i was quite curious as to what it was um going to do (.) and these help tooltips look good and explain some of the things that aren’t very clear to me (.) [user publishes site] and yeah again i like this popup thing so i know that i haven’t moved away from the page but i’m still in the page, if you get what i mean (.) so yeah cool that all makes sense

D.4 Participant Four

ok so i’m going to create a folder first (.) so i’m clicking into newsletters (.) um (.) i’m just having a look now through the options (.) so add folder [user adds folder] (.) and it’s gone to the bottom of the list, which is fine, which is nice, looks nice (.) right i’ve clicked into it and i wanna make a page here, so add page (.) um all these templates look a bit similar (.) but i’ll select that one [user types page name] (.) all seems very quick, that’s fine, and that’s... where’s it gone? [page doesn’t show up until it’s been approved] (.) oh yeah it needs to be approved (.) what would be good there is if it came up as some sort of thing, just so you know it needs approving (.) we sometimes create around ten pages in one go, and if you get distracted, have to take a call or something, and you go back to it and [because you can’t see the pages that haven’t been approved yet you] think oh crap how many, how many pages have i actually created (.) so i’ll go in here [review and publish tab] (.) um so it’s showing the folder i added, but not the page (.) um if i approve that folder, will that also approve the page i created within it as well? [evaluator responds] (.) it should show, ideally it would show the pages contained within a folder which need to be approved too, and i could go and select the folder [containing the pages to be approved] and that would approve the folder and it’s contents in one go, if you see what i mean (.) that would definitely be a good idea (.) ok so i’ll approve my folder um and i’ll select not to publish this (.) if i go back into site manager, and scroll down to find my folder and add one more page (.) [user adds page] so that’s in there and tells me it’s pending, it’s shown up immediately (.) and [user clicks review and publish tab] i can see it here... why’s that happened then? (.) ah right it’s because the folder containing the page had been approved (.) that’s fine approval is quick (.) go back to site manager (.) 
so under my folder, um, just clicking on the name i presume [user wished to edit page] (.) so i can click into that [editable area] (.) [user hesitates] ah right it’s opened bit down the page (.) right that looks good, nice and clean (.) [user adds content and saves it] that’s nice and quick, all the changes saved (.) um and now i’ll need to approve it again, so i’ll do that (.) what does this little red arrow icon do? [evaluator responds] (.) right that’s handy (.) so i’ll publish that change (.) and go back to site manager (.) ok so i’ll finally do a publish of the site (.) yeah that’s fine, publishing, so that’s published

D.5 Participant Five

so i’m going to add a new page i think, so i will go to the one [icon] that looks like a little page, wait till it tells me what it is [tooltip] and then click on it (.) i’ve got a choice, which is good, and i’ll pick the template which i think i need and give it a name and ok that (.) and i’m hoping it appears in this list somewhere, so i’ll scroll down, there it is and it’s pending so i know it’s new (.) so i’ll click on it and see what that does (.) so we’re in and this looks quite familiar (.) when i move my cursor around it’s clear what i can edit (.) um and that bar at the top is good to remind me i’m editing, as sometimes i have loads of windows open and its easy to loose track of what each window is, now i can see that straight away (.) i’ve clicked into a story, and that [text editor] is bigger than it used to be (.) it used to be tiny then i had to drag it out (.) still a little bit fiddly though, especially on some of the pages with smaller bits of text (.) so i’ve changed that bit of text and i’m gonna hit save, the disk icon (.) telling me it’s saved (.) and then there’s my change in there (.) so i’m going to review now, it’s better than this is now one section rather than two so i’m not clicking back and forth, which was sometimes annoying having to publish first on the test server, then the live server (.) selecting my file and click approve (.) and it’s gone from this view (.) right will do a publish now (.) advanced options, the help is handy as a little refresher on what does what rather than having to pester simon [marketing manager] (.) [user publishes site] ah i like that it tells you what’s happening, cos usually i sit there twiddling my thumbs thinking it might have broken (.) yeah that’s great

D.6 Participant Six

ok first thing i’m thinking is that it looks very clean and very user friendly which is good (.) just having a look round (.) as i’ve used the old system quite a lot everything feels quite familiar in that regard which is good (.) i’m just having a bit of an explore (.) ok so i’ll edit one of these pages (.) i like the highlighting of these editable areas (.) [user edits contents] same as before, clicking on save (.) everything is where i expected it to be which is good (.) i’ve always liked this sort of edit box, still doing that funny thing where it drops down the end of the page, don’t know why that is (.) right so i’ve done that edit, close that window (.) let me add a folder (.) [user adds folder] so i’m going to have to approve that folder i guess (.) but i’ll try and add a page to that folder from here (.) so i’ve got my templates... ah that’s quite nice [template thumbnails], i like the idea behind that (.) [user adds two pages] is it the same thing like before where you can’t put spaces and all that kind of stuff in the page names? [evaluator responds] (.) i’m just getting a little confused here, i thought i’d done a page within that folder but i can’t see it (.) ah i have to approve the containing folder first, that is still pending (.) i don’t class this as a problem, it’s just one of those things you need to know (.) so i’ll approve this folder (.) [user approves folder] and publish that to test server from here (.) [user approves page] this first option that doesn’t publish it [“do not publish this change”], it does still approve the change so i think it needs to say that (.) approve but do not publish perhaps, just ’cos it’s a little confusing as to whether that option in fact does anything (.) [user publishes site] that’s good [the publishing progress window], publishing to live server, i like that (.) i like how these complicated things [publishing advanced options] are hidden (.) 
these bits of help are excellent, i don’t think you can ever have too much help really (.) so i’m going to go back to the site manager and check the statuses (.) yeah i can see this is all published now (.) but if i go and add a page [user adds page] then approve it but don’t publish it [user approves page], and then go back it should be like an amber approved status? [rhetorical question] (.) yeah it’s showing as published, but it is only approved, that could get confusing (.) um right yeah this search box, it’s clear that this searches from your current location (.) overall um i think its very good, i’m impressed and its all as i would have wanted it to be really, which is good

D.7 Participant Seven

um so i’ve created a folder (.) um in general the whole thing is a bit clearer with just having these two main tabs (.) the white as well, white background, it’s clearer to see what you’re doing (.) [user approves folder] that was clear with the um popup that came up (.) right so i’ve approved the folder now [user clicks site manager tab] and it tells me it’s empty at the moment so i can see there’s nothing in there (.) um i’ll add a page to this (.) these templates are different (.) so thats page has been added (.) and again i’ll go an approve it (.) and i can see it’s there (.) and that it’s been done (.) [user clicks site manager tab] um and yeah i can see the page is in there now (.) [user edits page] now it’s clear which bits of a page are editable (.) [user clicks editable area and hesitates] ah right and it’s opened further down the page (.) [user edits content] save that yeah (.) comes up with saving confirmation (.) that’s all fairly clear (.) now to go and review it (.) and do a publish

Appendix E
Questionnaire Responses

The questionnaire responses from the evaluation session are not publicly available.