Information Architecture: Navigation Best Practices for Big Site SEO (Webinar from SEOmoz with Rand Fishkin)
Tuesday, November 2nd, 2010
Goals of Successful Information Architecture
- Semantically logical structure (Zoo animals –> African Savannah –> Lions) – if your site architecture is logical, users will spend more time on the site, find what they want more easily, and convert more often.
- Minimize click depth (not JUST for search engines!) – so that users and search engines can reach any page on the site in a minimum number of clicks. Usability and SEO best practices here are nearly identical.
- Maximize usability of navigation
Tips for Semantically Useful Navigation
- Initially design without keyword research – so that the keywords you discover don’t bias the way you organize. Rand suggests first organizing your content the way that makes sense to you, then incorporating the keywords that make sense for search engines, where they fit.
- Add in keyword research based modifications to your draft IA
- Validate architecture/paths with non-SEOs – make sure your navigation still makes sense to people who aren’t SEOs or web professionals
Tips for Minimal Click-Depth
- Imitate the ideal navigation pyramid – in Rand’s first example, you can reach 1 million pages within three clicks (e.g., roughly 100 links per page gives 100³ = 1,000,000 pages at click depth three); in the second, you can only reach 150,000 pages in three clicks.
- Broad linking at top levels – at the top level, link to very broad categories; link to popular subcategories from the homepage. Rand uses Metacritic as an example.
- Editorial categorization > user-defined (hack: multi-level HTML sitemap – like this page at Rotten Tomatoes)
Tips for Usable Navigation
- Obvious navigation elements (like with MailChimp)
- Naming conventions that make intent clear (not like Media Temple) – don’t use language that no one outside your company will understand
- User & usability testing (using something like Silverback 2.0)
Avoiding Common “Big Site” Problems
- Duplicate content issues:
- Rel canonical tags (although sometimes imperfect) – you’ll lose a tiny bit of PageRank, but you’ll protect yourself before duplicate-content problems start. Rand always suggests pointing rel canonical at the absolute URL of pages in your article/blog/product section(s).
- Google Webmaster Tools – use it to tell Google which URL parameters it should ignore, so parameter variations don’t create duplicates
- SEOmoz web app
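The rel canonical tag mentioned above is a single line in the page’s head; a minimal sketch (the article URL here is illustrative, not from the webinar):

```html
<!-- In the <head> of every duplicate or variant URL, point to the one
     absolute, canonical version of the page -->
<link rel="canonical" href="http://www.seomoz.org/blog/example-post" />
```

Because the tag is a hint rather than a directive, engines may occasionally ignore it – which is why Rand calls it imperfect.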
Scraping and Re-Publishing
- Scrapers (well-intentioned or not) that take your content can end up ranking instead of the original.
- Employ absolute URLs (as in <a href="http://www.seomoz.org/blog"> anchor </a>) not relative ones (<a href="…blog"> anchor </a>)
- Don’t go overboard with bot blocking
- Don’t watch the site: command and compare it day to day. (Read this post by Rand.) Track referral traffic instead.
- Check page “types” that don’t receive traffic (see this post by Rand)
- XML sitemaps – help search engines crawl large websites
- Content syndication (use the allintitle: command to find syndicated copies of your titles)
- RSS feeds
- Twitter for indexation
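The XML sitemap in the list above can be as simple as a file of absolute URLs following the sitemaps.org protocol; a minimal sketch (the URLs are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/zoo-animals/african-savannah/lions</loc>
    <lastmod>2010-11-02</lastmod>
  </url>
  <!-- very large sites can split URLs across multiple sitemap files
       and reference them from a single sitemap index file -->
</urlset>
```

Note the loc values are absolute URLs, consistent with the absolute-URL advice above.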
“Search Results” in the SERPs
- Create category “landing” pages
- Remove obvious traces of “search” on landing pages
- Rel canonical can help
- Use AJAX to reload page content, so facet selections don’t generate new crawlable URLs
- Offer facets only to logged-in/cookied users
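Where rel canonical helps with these faceted “search results” URLs, the idea is to put a head tag on each filtered variant pointing back at the clean category landing page; a sketch with hypothetical URLs:

```html
<!-- On a faceted URL like /shoes?color=red&sort=price, tell engines
     the clean category landing page is the version to index -->
<link rel="canonical" href="http://www.example.com/shoes" />
```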
Sorry I couldn’t stick around for the whole webinar, but here are two juicy tips:
Google Image search – less a new algorithm than a new interface. Text around the image seems to be doing better than alt text.
Want a copy of Rand’s Firefox bookmarks? Here they are!