Identifying the Different Roles of Web and Blogsites

Although the web has advanced tremendously since the 1990s, it’s still worth referencing the basic web channel approaches from back then to identify the primary, driving purpose of your website features. In the world of Web 1.0, it was fairly clear what type of website you were visiting by its functionality. The content was often awful and confusing; navigation was all over the place; design (if there was any design at all) was “experimental”; and the brand name might not appear anywhere near the top, but the functionality would make itself known. It had to. Programmers then didn’t have today’s quick and easy resources for integrating every type of functionality under the sun. Page load times were a big issue: the Internet ran on slow, noisy modems; monitors were set to low resolutions such as 640×480 pixels; and browser frames were large and clunky, consuming more screen real estate than they do today. The costs of programming, inexperienced programmers, the incompatibilities of web languages and browsers, and the limitations of HTML frames all directed the primary focal functionality of sites. Sound like your own private circle of hell? Believe it or not, in some ways the limitations were advantageous, in that they elicited clear paths of web purpose and use.

So what were those identifiable site functions? Let’s see what we can come up with:

  • Directories: If you were visiting a directory, you knew it. It was a listing full of company names, URLs, and maybe email addresses and phone numbers, with brief company bios (if you were lucky). There also may have been banner ads.
  • Brochureware: A corporate website was typically what we called “brochureware”—marketing fluff copy about the company, bios, and contact info, often with a “mandatory” 1990s Flash intro. Essentially, it was brochure copy converted into a website, hence “brochureware.”
  • Ad campaign landing pages: These were basically single pages on the backend of a domain that achieved hits from banner ads, redirects, or even magazine ads.
  • Black-hat SEO pages: These pages could have a variety of forms, but would often be full of meaningless jibber-jabber content and advertising (and metatags, and more metatags!). They would perform their SEO tricks to capture search hits from the various search engines to prove advertising value. Because such sites ruled search results, two things happened:

    • Valid directories became more important and would receive more traffic for people to find what they were looking for. Some of these even survive today. Yahoo! still has many legacy directories (local, sports, and so on).
    • Google was birthed out of the chaos to reward valid content with better search results, and the age when metatags ruled search was over.
  • Online news and magazine sites: These sites were either large, pushing the envelope and attempting to solve the profit game (such as The Wall Street Journal, which continues the fight to this day) or were merely online articles displaying banner ads.
  • Content Aggregator Portals: For lack of a better term, even then there were content aggregator sites which, like RSS feeds today, would pull in a variety of content options (images and/or text displayed across different frames) for the viewer to choose among for a deep dive. Today we can see such choose-your-content options displayed in paper.li, visual sites such as Pinterest and Tumblr, and as mentioned, RSS readers.
  • eCommerce: This was constantly being re-approached (in fact, an entire bubble was blown and burst), but Amazon survived and continually contributed to the advent of social media (such as with its innovative consumer reviews, ratings, and recommendation engines). The majority of smaller startup eCommerce sites didn’t fare so well.
  • Login Portals: Bigger corporations could have login “portals” where members could log in to access stores of niche industry information, specific data, or applications.
  • Encyclopedias and dictionaries: These were, as expected, lists of searchable content listings. Even back then the legendary Encyclopedia Britannica began its attempts to achieve revenue online. But Wikipedia also was created from this web content model and, as a nonprofit, content-collaborative, Web 2.0 “wiki,” it killed other encyclopedias as an industry. One of the ancient Web 1.0 golden oldies, still full of frames and all, is NetLingo.com, which I reference to this day.
  • Forums: The predecessor to social media was the forum. Clunky, unattractive, text threads going in all directions—but they were great for social engagement, consumer content, prosumer (consumer turned producer) expert advice from the groundswell, and pure hobby fun discussions. They remain extremely relevant today. People are still very active in forums, with similar online behavior. Forums show up in search results because they tend to be focused on a specific niche topic or industry, are full of content from various contributors, and achieve a lot of return visits and activity. Don’t knock them as antiquated. There are forum plug-ins for WordPress sites, and they are worth considering if helpful to your audience, such as for a tech support forum.

Ironically, today’s website channels are not so clear-cut. Although we can physically walk into a book store, coffee shop, or grocery store and immediately know the difference among all three, today’s websites are full of myriad functionality and user content options. If asked, “What is social media?” we may immediately cite social networking software such as Facebook or Twitter. But the truth is that almost every website out there today is social media. With social sharing, social follow, blogs, RSS feeds, ratings, reviews, and commenting incorporated everywhere, the state of Web 2.0 is the era of web-wide social media. Is a blog a blog or a magazine? Is a website primarily a website or a blog? And guess what: WordPress is driving this crazy train. It easily integrates all of these and more into sites. These functionality options are all available even for the simplest blog sites via the free WordPress.com. So, go nuts! If you’re reading this book, you’re already halfway there.

Refer back to my breakdown on these in Chapter 1, “What Is SEO and Do I Really Need It?”; even today, such an examination is a good way to identify the major functional purpose of your site and how it serves your audience. Even with a free blogsite on WordPress.com you can strategically architect this way, with website, content aggregation, and social functionality (see Figure 3.9).

Figure 3.9 Free WordPress.com site with content tabs and social functionality.

After you’ve identified that the major purpose here is, for example, eCommerce sales, the navigation must be set up to guide web visitors toward your eCommerce goals for the site. If you want your site to maximize long-term customer retention sales, the site should accommodate this with customer program information or redemption vouchers. It doesn’t always work out that there is alignment between SEO, IA (information architecture), and the web marketing goals, but via linking, all three areas might find unity. How about that; what does that make this? Tri-partisan?

SEO Value and Authority-Based Architecture

When you have a sitemap near completion, you want to identify your linking strategy. Links have the most value when inbound from other websites. However, even within your own site, there is some value on which to capitalize. I recommend setting up a link wheel with calls-to-action going from (for example) page “home/bathroom-decor/ceramic-tile,” to page “home/kitchen-decor/tile-backboard,” and from there to page “home/outdoor-decor/barbeque-counter.” Part of the reason for this is to employ anchor text: the clickable text within the link itself. In contrast to a bare, spelled-out URL or a generic “click here,” you want to say something like “See our great bathroom decor ceramic tiles!” This text can be valuable in SEO for the recipient page.
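The link wheel described above can be sketched as a directed cycle of pages, each carrying a keyword-rich anchor to the next. This is a minimal illustration, assuming the example slugs from the text and hypothetical anchor copy; it simply renders the anchor tags a CTA would use.

```python
# Link wheel as a directed cycle: (source page, target page, anchor text).
# Slugs come from the chapter's example; the anchor copy is illustrative.
link_wheel = [
    ("home/bathroom-decor/ceramic-tile",
     "home/kitchen-decor/tile-backboard",
     "Browse our great kitchen tile backboards!"),
    ("home/kitchen-decor/tile-backboard",
     "home/outdoor-decor/barbeque-counter",
     "Check out our outdoor barbeque counters!"),
    ("home/outdoor-decor/barbeque-counter",
     "home/bathroom-decor/ceramic-tile",
     "See our great bathroom decor ceramic tiles!"),
]

def make_anchor(target_url: str, anchor_text: str) -> str:
    """Render a keyword-rich anchor link (not a bare URL or 'click here')."""
    return f'<a href="/{target_url}">{anchor_text}</a>'

for source, target, text in link_wheel:
    print(f"{source} -> {make_anchor(target, text)}")
```

Notice that the anchor text describes the *destination* page; that is what makes it valuable to the recipient page rather than to the page hosting the link.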

It is also valuable for the distribution of page-authority link juice. Just as SEO analysis tools (such as SEO Book’s tools) assign authority scores to websites and specific web pages, you want to consider these valuation approaches within your own website linking.

I look at SEO architecture as a house of cards: the more cards you have in the mix, the more pages your website has. You can set up the house of cards as a wide, endless structure that’s only two levels tall. But your table to hold this house can be only so wide (just as a website top-level menu can be only so wide). How equitable is weight distribution at only two levels? Conversely, the pyramid can’t physically be too vertical and narrow; three cards can’t be stacked on only three cards. So balance your weight and cards, a.k.a. web pages, optimally.
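The breadth-versus-depth tradeoff above can be made concrete with a little arithmetic. The sketch below (an illustration under simple assumptions, not a rule from the text) computes how many pages fit at each level if every page links down to at most `menu_width` child pages; the home page is level 0.

```python
def pyramid_capacity(menu_width: int, depth: int) -> list[int]:
    """Pages reachable at each level of a site pyramid, assuming each
    page links to at most `menu_width` children (home page = level 0)."""
    return [menu_width ** level for level in range(depth + 1)]

# A wide, flat site vs. a narrower but deeper pyramid:
print(pyramid_capacity(menu_width=12, depth=2))  # [1, 12, 144]
print(pyramid_capacity(menu_width=5, depth=3))   # [1, 5, 25, 125]
```

Both structures hold roughly 150 pages, but the flat one demands a 12-item menu on every page, while the deeper one keeps each menu small at the cost of an extra click: the house-of-cards balancing act in numbers.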

You should also be conscious of authority, yet another weight-balance issue. You shouldn’t have too many links on a page (regardless of website structure), because the spiders will be confused and devalue the page or its links. The strongest, most crawl-friendly web page (such as the home page) should be a good conduit for passing link juice (SEO page authority) to the next level of pages. But in most cases you don’t just want to direct the web visitor primarily back to the home page (with a call-to-action, and so on). Why not? Let me count the ways:

  • That’s typically the page that comes up before all others in the search engine results page (SERP).
  • Your web visitor probably visited there first.
  • The home page is typically light, marketing intro copy that won’t likely get your visitor closer to the goal.
  • If the web visitor wanted to start the journey over, finding the home page is usually obvious. In other words, visitors already know how to get home—they just may not know how to find exactly what they’re looking for.
  • So give credit, link juice, to the pages of your site that may not achieve all the general web traffic, yet still contain valuable content.
  • You want a call-to-action (CTA).

So for your on-page linking, how can you accommodate these, while addressing similar page interest and relevance for the reader? How to keep the crawlers in order and passing authority, without having too many links on the page? There’s no perfect answer here; you just have to look at your house of cards.
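The “too many links dilutes authority” point above can be sketched with a simplified PageRank-style calculation. This is a toy model, assuming (as the classic PageRank formulation does) that a page’s authority is split evenly among its outbound links and scaled by a damping factor; real search engines are far more complex.

```python
def juice_per_link(page_authority: float, outbound_links: int,
                   damping: float = 0.85) -> float:
    """Simplified PageRank-style pass: authority divided evenly among
    outbound links, scaled by a standard 0.85 damping factor."""
    return damping * page_authority / outbound_links

# The same home page passes far less per link as links multiply:
print(juice_per_link(1.0, 10))   # 0.085 per link
print(juice_per_link(1.0, 100))  # 0.0085 per link
```

This is why a focused set of calls-to-action to your valuable interior pages beats a page blanketed in links: each link in the smaller set carries an order of magnitude more weight.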

Fortunately, WordPress has great vehicles for testing all this. Don’t over-test and constantly change your primary navigation, because it will confuse both crawlers and web visitors. But keep in mind that with WordPress you have categories, tags, and archiving—all additional navigational options for your blog. And you can build your primary nav to integrate menu items easily for a specific category on the fly (for any blog posts you label with that category name; for example, under your top-level Success Stories label, you could feature your content category Hardwood Replacements). Or you can link from one content page to a specific archive of blog posts by time period, common tag, or whatever.
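The category, tag, and archive navigation options above can be pictured as a simple menu map. The sketch below is purely illustrative: the labels reuse the chapter’s Success Stories / Hardwood Replacements example, and the URL patterns follow WordPress’s default permalink conventions for category, tag, and date archives.

```python
# Hypothetical nav menu mixing static pages with dynamic archive links.
# Any post assigned the "hardwood-replacements" category automatically
# appears under Success Stories; no manual menu edits needed per post.
nav_menu = {
    "Home": "/",
    "Success Stories": "/category/hardwood-replacements/",  # category archive
    "Tile Projects": "/tag/ceramic-tile/",                  # tag archive
    "June Highlights": "/2012/06/",                         # date archive
}

for label, url in nav_menu.items():
    print(f"{label}: {url}")
```

The design point is that the archive URLs are generated by WordPress itself, so the menu stays stable while its underlying content grows with each labeled post.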

Although these are interesting possibilities for testing visitor click throughs, don’t forget the canonicalization and “noFollow” issues inherent here (as discussed in this chapter). The last thing I will say here is very important. If you suspect specific user-content interest, or if your testing reveals this, it is far better to structure such content yourself in the site with actual navigation and linking. If you architect the site purposefully, directing to specific, individual pages and blog posts based on related interest (for example, with calls-to-action at the bottom of a page or post’s content), it will make sense to the spiders. You will also thereby fully allow the spiders to crawl the links and content (instead of going through automated, noFollow URLs).

How to sum up simply? After you know what the people want, and the spiders want, give it to them directly, within a clear structure, like a sturdy pyramid of cards.
