Peachpit Press

Redesigning a Big Umbrella of Websites: The InformIT CSS Overhaul

Date: Nov 5, 2004


InformIT and its many sister sites, all divisions of the Pearson Technology Group, recently united under one code base. How does a huge conglomerate of independent web sites become one system of technologies that works for everyone? Meryl K. Evans tells the tale.

Introduction

You may have noticed a few months ago that InformIT underwent a redesign. Not only did InformIT get a facelift, but so did the sites of other Pearson Technology Group (PTG) divisions, along with a few custom sites. The PTG clients of the Common Architecture Portal (CAP) include the following book publishers:

Launching after that first bunch were the following:

Are you tired after reading this list?

All of this work—which included project management, requirements, human factors, development, and quality assurance (QA)—was handled by one big team. All of these sites work from a common code base, which is managed through Visual SourceSafe. A change to one business unit's code applies to all CAP sites according to a pre-established release schedule. Ditto for new functionality.

The ASP/XSL is hosted in a single directory, available for all sites to use. This structure has made things much more tolerable both for the developers, who end up doing less coding for the same return on investment, and for the business partners, who don't have to get in line for new functionality based on financial or other priorities.

Deciding To Move to a Common Architecture

Each site was originally built from scratch with its own features and options. This means that if site A wanted a feature that had been implemented on site B, the team had to write it all over again for site A. Or if site A had created a new feature, site B wouldn't get it until they asked for it (and the team wrote it again for them). Crazy, huh? To complicate matters, PTG changed hosting firms at about the same time as the redesign work was underway, so the IT department had to plan and implement a data center move concurrently.

Product development convinced management that it would be cheaper in the long run to implement a common architecture and code base for all sites under PTG. The redesign uses the magic of CSS to ensure that each site has a unique user interface, while maintaining the same underlying code.
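To picture the approach, here's a minimal sketch (the markup and file names are invented for illustration, not PTG's actual code). Every site serves essentially the same XHTML skeleton; the only per-site difference in the page itself is which stylesheet gets linked:

    <!-- Hypothetical shared page skeleton; each site links its own stylesheet. -->
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
      <title>Bookstore</title>
      <!-- The one line that varies per site: informit.css, examcram.css, and so on. -->
      <link rel="stylesheet" type="text/css" href="/styles/informit.css" />
    </head>
    <body>
      <div id="masthead">...</div>
      <div id="nav">...</div>
      <div id="content">...</div>
      <div id="footer">...</div>
    </body>
    </html>

Because the ids and structure are identical everywhere, a change to the shared markup never forces per-site edits, and a change to one site's stylesheet never touches the others.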

According to Michael Hughes of PTG's project management, the motivation to bring all of the partner sites into a common architecture, in a nutshell, was efficiency. This includes a reduction in the time required to produce common functionality one site at a time for multiple business units over 12–18 months, as well as planning and performing all aspects of new work (business development, prototyping, coding, and so on) over the life of the program. Now, when site A wants a feature on site B, all the team has to do is turn it on for site A. If site B wants a new feature, site A can have that new feature as well, as soon as it's available.

Challenges of the conversion? Many. First, everyone had to agree that this was the best course of action and would lead to improvements in service, bug fixes, and new feature implementation. Next came the development of a list of features to be supported, which of course had to be accepted by all parties. Documentation had to be created, describing all of the features for the development team.

Coordinating Between Teams

When code development got started, the human factors team began designing new UIs for all of the sites. Human factors also designed the site architecture, which resulted in an enormous flowchart showing every static page and every link on those pages. Marketing and editorial got busy writing copy for every single page of the new sites. Thankfully, the teams weren't starting from scratch; copy-and-paste became their best friend.

The development processes for the new sites generally flow as follows, dictating most of the interactions between teams:

  1. Product development and editorial come up with new functionality or functionality changes for the portal engine. They go through iterations and user testing and write a vision scope document (VSD). The VSD is reviewed and discussed by the other teams to make sure that everything is doable, and to determine what kind of timeline can be expected to produce the desired result. Any necessary modifications to the documentation occur at this time.

  2. Human factors takes the VSD and creates prototypes, XSL, and any accompanying CSS work, and then passes the prototypes to development.

  3. Development works interactively with human factors while developing from the VSD and the prototypes.

  4. When development believes that the code/functionality as written is good, it's built out to a QA server and turned over to the QA team to start beating on it.

  5. QA tests all functionality directly from the VSD. Bugs are entered in a bug-tracking application/database and assigned to developers or human factors.

  6. The QA cycle never takes longer than two weeks, but usually (95% of the time) is completed in one week. Nothing is released to production that isn't working 100% to the VSD specs, unless the specs were changed by product development during the development process, which isn't often.

The project management team is involved in all aspects of the cycle, helping to manage people, passing along information from stage to stage, and working with customers/clients (such as imprints or product development). The team often acts as the "glue" between teams and stages, keeping track of details that aren't passed from group to group.

Project Requirements

The requirements for this project were developed in-house by all project stakeholders. Project management conducted an audit of existing functionality, in coordination with the individual business units and the product development team. That stage led to the creation of a giant matrix of functionality in a spreadsheet. Internal customers and the project team reviewed the matrix and pared it down to what actually drove the business. The result was developed into a VSD consisting of detailed specs with pictures outlining the requirements of individual areas of functionality and, if appropriate, how they would be integrated into the back office.

When it came time to move to the new servers, the team says a bit of luck helped. A few non–publicly accessible sites needed to remain unchanged on the legacy architecture for the short term. This fact allowed the team to run the sites both in the existing J++ environment and on the new .NET servers in parallel, until they were able to run them through the QA process.

InformIT and Exam Cram were launched about a month earlier than the other partners due to a contractual situation, so the team ran both platforms in production for a period of time. The other sites eventually migrated to .NET.

So there were no changes to requirements during the process, right? Wrong. Hughes says there were plenty! "Considering the number of stakeholders we had, our change management process had to be flexible, to say the least. We ran each request for a change past our development and human factors people to make sure that it was feasible; then we ran them by representatives of each business unit to get a consensus sign-off. Sometimes changes survived that process; others didn't. When they did, our product development team would revise and redistribute the requirements doc as appropriate," he says.

Human Factors

Human factors handles prototyping of ideas and does some of the visual design work for the sites. The team is also the production team when sites are coded and takes care of ongoing maintenance of the sites.

The target audience for this project had already been established, so that was one big job that didn't need addressing. According to Rich Evers of human factors, the main driving force of the change was technical—switching from Java to .NET architecture because Microsoft was ending its support for Java. "The architecture of the new sites isn't too far from what we had prior to the redesign," says Evers. "It consists of a handful of 'verticals,' each representing a feature of the site. Pretty standard setup. The main focus for the new sites is the bookstore, followed closely by content (articles and such). Some other common areas are the About section, My Account, and Search. Some sites have additional vertical areas that help make their sites unique. For example, Exam Cram has a section called Practice Exams that's unique to that portal."

Though it's a challenge for many web designers, convincing management to switch to XHTML and CSS wasn't an issue for this team. Evers indicates that a few specific factors helped to foster buy-in from management. The move from Java to .NET architecture required a rewrite, for example, so all the teams (product development, human factors, and development) could look for ways to make things more efficient. "Since many of our sites were very similar structurally, we tossed out to the developers the idea of running them all from the same set of back-end code," he notes. "We would apply styling mostly through CSS, along with a few unique-per-site pieces. The developers liked the idea, since such a setup would reduce their code significantly—one file drives the contents of a page across all sites. Very efficient."

At this point, Evers' team presented the idea to product development. The team demonstrated how it could control the visuals by showing the group the CSS Zen Garden site, which is an excellent example of how CSS can change the presentation of a site without touching XHTML.
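In the same spirit as the Zen Garden demonstration, a toy example (the selectors and values here are invented, not taken from the actual sites) shows how two stylesheets can give identical markup two very different looks:

    /* informit.css (hypothetical): blue masthead, horizontal navigation */
    #masthead { background: #036; color: #fff; }
    #nav li   { display: inline; padding: 0 1em; }

    /* examcram.css (hypothetical): dark masthead, vertical navigation */
    #masthead { background: #310; color: #fc0; }
    #nav li   { display: block; border-bottom: 1px solid #ccc; }

Swap one link element and the whole site changes character, with no edits to the XHTML or to the shared back-end code.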

Product development had to convince all of the publishing groups that the underlying structure of their sites needed to be the same. To help support the idea, human factors produced content-only prototypes—very basic wireframes of each page of the site (see Figure 1). These prototypes were used as discussion points to reach agreement on the content shown on the page and the approximate placement of each object. Human factors also noted which variables would come from the databases and which regions would be editable.

Figure 1 A wireframe view of the main bookstore page.

As the publishers and product development fleshed out content for each page, human factors started developing the visuals. For most of the sites, the team launched the new architecture with visuals influenced by the then-current designs. Evers says that this technique saved a lot of time in the approval process, and that only a couple of sites had to go through a complete design makeover.

Each site has relatively the same graphic needs, so the team put together "package" files: image files containing all the necessary graphics for a site. This helped keep inventory very organized and easy to update.

Development

The technology and development weren't often an issue or challenge. It was almost always more about time and priorities: out of the 75 things product development wanted done, what could be done and/or bundled into the next production release? Production releases are now twice a month (they used to be weekly, which was tedious and unnecessary).

The application is a portal type of technology. PTG runs almost 30 web sites off of one piece of application code. Each site has its own images, CSS, and one XSLT file to define how it works, looks, and acts. Outside of those specific needs, all the sites share the same code base from front end to back end. No other site-specific files or logic are used, thus keeping the code size and complexity to a bare minimum. For Exam Cram and several other sites, only a couple of hours on the part of up to three developers were needed to get the site live.
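The article doesn't reproduce the actual files, but the standard XSLT mechanism for this kind of setup is xsl:import: a thin per-site stylesheet pulls in the shared templates and overrides only what differs. A sketch, with invented file and template names:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- examcram.xsl (hypothetical): the single site-specific XSLT file. -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Pull in the templates shared by every CAP site. -->
      <xsl:import href="common/portal.xsl"/>

      <!-- Override only what makes this site unique, such as its extra
           vertical; everything else falls through to the imported rules. -->
      <xsl:template match="vertical[@name = 'practice-exams']">
        <div id="practice-exams">
          <xsl:apply-templates/>
        </div>
      </xsl:template>

    </xsl:stylesheet>

Because imported templates have lower precedence than the importing stylesheet's own, the per-site file stays tiny, which fits the couple-of-hours launch figure above.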

The sites use a multi-tiered approach to the software infrastructure:

One of the biggest challenges, says the development team's Andy Hall, was "'Scope creep' during prototyping and sometimes during development. Sometimes new functionality or changed functionality that is voiced early on results in long prototyping cycles—have to watch it or it becomes analysis paralysis.

"Another area where problems sometimes occur after the VSD is written and passed along to each team is when product development requests changes during prototyping and communicates them to human factors and development, but not to QA — who writes up a lot of bugs that...aren't."

Managing the team and project is a big challenge with any large redesign effort. Hall comments, "Management for my team (development/operations) was mostly working out roles and responsibilities of certain major areas of functionality such as search, navigation, product pages, promotional pages, basic XML data transformation infrastructure, core/common object development, etc. We had regular meetings (once or twice a week) to work out some larger problems and/or make architecture-level decisions."

Testing Code

Several types of testing occurred simultaneously. String or unit testing occurred at the development level before passing the project to QA. System, integration, and acceptance testing occurred when the project came into QA. Regression testing happened when defect tickets were sent back to QA for retest.

To determine how much time to spend on testing, the rule was that QA should get about a third of the time it took to develop the site. QA's Sherry Valenti says, "The team usually got less than that. Overall, QA got about two weeks to test InformIT and Exam Cram and three weeks to test all other network portals." She also notes that some of the resource time overlapped, as there was a one-week period when all portals would be in QA to be tested at the same time.

Test cases were determined by the requirements and prototypes for the rewrite. QA created the test cases using a matrix form and attempted to test all pages, functionality, interfaces, and database components.

QA reported errors through an open ticket to development and human factors, using a defect-tracking application. When the issue was resolved, the ticket was reassigned to QA for retest. If the ticket was retested successfully, the ticket was closed. If not, the ticket was reopened and submitted again.

The Payoff

Project management indicates that it's wonderful to maintain the common architecture. Rather than two or three similar problems recurring across the various sites, it's often one problem that can be solved with a single change and build. Previously, the team would be forced to prioritize issues like this according to which internal partner was the "squeakiest wheel" or had the most to lose from a revenue perspective, so the change has eliminated some of the political issues faced in the past.

Michael Hughes says, "The imprint newly added to the architecture [Fair Shake Press] is likely the most wrinkle-free implementation we've had to date. It wasn't perfect, mind you, but it was smooth sailing most of the way."

From the human factors standpoint, maintenance has been a fairly smooth ride. Changes that affect all sites have been extremely easy to make—considerably fewer files need editing than on the old system. Being able to address things affecting all of the clients at once is very nice. However, changes to individual sites can be a challenge. Some of the clients, naturally, want to do something different than what has launched. To a great extent, the team can accomplish the changes just by editing the CSS files. But the more the sites diverge, the trickier they become to manage.

Human factors' Evers says, "Of course, there's still more to do. We're not yet fully XHTML compliant, like we need to be. The code driving the advertisements was one of the culprits. Some pieces coming out of the CMS and a few of our back-end calls need editing as well. We're not there yet, but we're very close."
