Where web design is concerned, there is one truth about internal linking that can’t be denied: it’s unavoidable. After all, hyperlinks are the primary means of browsing the Internet and hopping from one web page to another. Internal linking affects more than just SEO; pages that aren’t connected with links form a site that can be used by neither humans nor machines. Search engine crawlers can only index the pages they can find, and in order to find them, they need links – not that different from humans. (Why are we entrusting our future to robots again?)
This fact alone should motivate any webmaster to put effort into developing a proper internal link structure each time they create a site. Because so much depends on it, internal linking is one of the most potent weapons in SEO. Now, how do you hunt the fattest and tastiest game with this weapon?
- Keep Paths Between Pages Short
Make your web pages easy to find. Whether you are still Googling or have already arrived at your destination, looking for the page you need should never take too long. The rule of thumb: two pages on the same site should never be further than three clicks away from each other. No matter which page is opened in your browser at the moment, there has to be a short path to any other page. Keep this principle in mind when you are planning your site’s structure.
Of course, exceptions exist. For example, forums often have long-running threads with dozens or even hundreds of pages’ worth of comments; in cases like this, finding where you left off the other day can be a real hassle even with a search feature. If your site is such a monster, it’s understandable if a three-click path isn’t possible. Still, don’t give up on making paths as short as you can wherever possible.
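In practice, the easiest way to keep paths short is a navigation block repeated on every page that links to your main hub pages. Here is a minimal sketch for a hypothetical clothing shop (the page names and URLs are made up for illustration):

<!-- Global navigation included on every page -->
<nav>
  <a href="/">Home</a>
  <a href="/mens-clothing/">Men’s Clothing</a>
  <a href="/womens-clothing/">Women’s Clothing</a>
  <a href="/sale/">Sale</a>
</nav>

With category pages one click away from anywhere and product pages one click away from their category, any two pages stay within the three-click rule.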
- Create Lots of Linkable High-Quality Content
Internal links are needed to spread authority between pages. What’s the best way to have many internal links that can do just that? Having a lot of great content, of course!
Combining on-page and off-page SEO earns your site as many high-authority pages as possible. Once you have them, it’s only a matter of strategically placing your internal links and letting them do their job.
- Direct Link Juice Where It’s Needed
A link acts as a channel that allows link juice to flow from one page to another, increasing the latter’s authority in the eyes of search engines. This is true both for external links (from another domain to yours) and internal links (those within the same domain). The amount of ranking power held by your site’s pages should always be a factor in your internal linking strategy. Which pages could use a bit more link juice in order to be more visible in search results? And which pages would make good providers of that juice?
You’re going to need an SEO tool to look up a page’s authority value. In fact, WebCEO has a Page Authority Analysis tool that does just that.
- Don’t Make Excessive Links
It’s easy to get carried away and create too many internal links where they don’t really need to be. The idea behind an internal linking strategy is to build a structure that gives the more important pages more authority. See that you don’t shoot yourself in the foot by directing link juice away from the page that needs it.
Besides, too much blue in a text is distracting and confusing; you can’t get away with that unless you are Wikipedia.
- Make Sure Your Internal Links Are Relevant
You know how backlinks from unnatural sources can endanger your site’s position in search results? The same is true for internal links. Going from a place to buy summer shorts online to an article about quantum physics without any context will do nothing but confuse the user (although in this case, I doubt context will help much). And while Google is unlikely to send you an email about what you’ve been doing with your internal links, it will still pick up on the quantum physics article’s increased bounce rate and lower its rankings.
Internal links on your site should be useful to your visitors, first and foremost.
- Mind the Number of Links on a Page
Here’s a fun fact about search engine spiders. The number of links they can crawl on a single page isn’t infinite: it’s roughly 150, tops. Depending on the kind of site you have, this number can be either a real constraint or high enough that you’ll never have to worry about it. In any case, links that exceed this limit are effectively invisible to crawlers. If you intend to have that many on a single page, you might want to consider their order and decide which to sacrifice to obscurity. Some of them will end up passing no link juice even if they are dofollow links.
- Don’t Put Too Many Links in Site-Wide Footers
Footers are useful, no disagreements here. They offer quick access to every major page on a site, no matter where the user is at the moment. However, footers overstuffed with links look like a mess and make it difficult to find what the user needs. And there’s the crawl limit per page to consider, too. Link number 151 and the rest in line will be ignored by spiders.
- Use Anchor Texts Correctly
Anchor texts do more than just signal the user where to click in order to open a new page. They are also treated by search engines as keywords for pages they link to. Optimizing multiple pages of your site for the same keywords is called keyword cannibalization, and it’s as unpleasant as it sounds: it makes it hard for search engines to decide which page should be ranking for the keywords in question. As a result, the pages don’t rank as high as they could.
Avoid keyword cannibalization by not using the same anchor text when linking to different pages. You can use WebCEO’s Link Text Analysis tool to bring up all the anchor texts on your site and see if any of them are used in a way that causes cannibalization.
There’s also the matter of having several identical internal links on the same page. Let’s say you run an online clothing shop and one of its pages has links to where visitors can buy men’s pants. Those links have anchor texts saying “the best pants for men”, “black trousers, men’s clothing” and “men’s skinny joggers”. What happens when you use them all at once on the same page? Here’s what: search engines will count only the first anchor text as a keyword. Sure, the links after the first will still be clicked on by users and pass link juice, but their anchor texts will lose their power.
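To make this concrete, here is a minimal sketch of that situation (the URL is made up for illustration):

<!-- Three links on the same page, all pointing to the same URL -->
<a href="/mens-pants/">the best pants for men</a> <!-- only this anchor text counts as a keyword -->
<a href="/mens-pants/">black trousers, men’s clothing</a> <!-- still clickable, still passes link juice -->
<a href="/mens-pants/">men’s skinny joggers</a> <!-- but these anchor texts carry no keyword weight -->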
- Don’t Make Your Internal Links “Nofollow”
You can put a rel="nofollow" attribute inside a link. Doing so tells search engine crawlers not to follow the link, and no ranking power will pass through it. Some webmasters are tempted to mark their internal links as "nofollow" in order to exert more control over the flow of link juice on their sites.
They are making a mistake.
Matt Cutts has commented on the subject, and his words should be a good reminder to those overly enthusiastic webmasters about what rel="nofollow" does. It tells crawlers to ignore the link; if that link is how you expect them to find the page it points to, they may never reach it, and the page won’t appear in search results. That’s why you shouldn’t "nofollow" internal links unless they point to pages you don’t want indexed. Even then, you could use the robots.txt file for that.
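For reference, here is what a nofollow internal link looks like (the URL is just a placeholder):

<a href="/test-page/" rel="nofollow">anchor text</a> <!-- crawlers are told not to follow this link; it passes no link juice -->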
- Make Your Links in HTML Format
Hyperlinks are made like this:
<a href="https://url-address.com/">anchor text</a>
It’s one of the first things you learn when you want to create your own website. However, HTML isn’t the only language which can be used to write a link; for example, JavaScript can do it, too.
var str = "anchor text";
var result = str.link("https://url-address.com"); // String.prototype.link() builds the <a> tag as a string
document.body.insertAdjacentHTML("beforeend", result); // the link only appears once the script inserts it into the page
Problem is, links made in other languages can be uncrawlable by search engines or pass less authority, depending on how they are used. That’s why it’s better to stick to HTML links.
- Be Aware of Links That Can’t Be Crawled
It is possible to put a link where search engine spiders will never find it. Places where a link is doomed to never be crawled include:
- Flash, Java, or other plug-ins: links inside them cannot be accessed by spiders.
- Broken JavaScript code: when the code doesn’t work, neither does the link.
- Submission forms: crawlers won’t attempt to submit a form on your site.
- Internal search boxes: crawlers don’t perform searches on websites.
- Pages with hundreds of links: spiders have a crawl limit of 150.
Search engines also typically can’t crawl links inside HTML frames and inline frames, although workarounds are available to experienced webmasters.
Lastly, the robots.txt and the robots meta tag can be used to prevent spiders from accessing any page of your choice. Internal links to blocked pages will only be of use to visitors.
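For reference, here is roughly what blocking a page looks like in both cases (the /test-page/ path is just a placeholder):

# robots.txt – keeps crawlers away from the page site-wide
User-agent: *
Disallow: /test-page/

<!-- or inside the page’s <head>, via the robots meta tag -->
<meta name="robots" content="noindex, nofollow">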
- Avoid Orphaned and Dead-End Pages
A page with no internal links on it is called a dead-end page. It doesn’t point to any other pages on a site, meaning it’s a dead-end for search engine crawlers. It’s bad for users, too: they can only leave such a page by pressing the Back button or manually typing a new URL. Neither is a comfortable way to navigate a site.
The opposite of a dead-end page is an orphan page: one that no other page on your site links to. Orphan pages are invisible to crawlers and unlikely to be found by visitors. If you made such a page intentionally (maybe to test a feature), it’s okay. Otherwise, you risk losing a lot of traffic. Make sure that pages with good, rankable content always have inbound links pointing to them.