
The Most Important Factor for Successful SEO

I get asked this question every time someone is thinking about doing Search Engine Optimization on their own website or on a client's website. When you look around the internet, you will see various opinions on what the most important factor is for a successful SEO plan.

Here is a quick, common list:

  • Age of domain name
  • Organic link building
  • Social media signals
  • Quality content
  • User experience

The above list is in no specific order; these factors are all equally important for ranking in the search engines. But there is one factor that rarely gets mentioned, and that is the topic of today's tip.

Crawlability is the most important factor to get your site ranked

Crawlability describes how easily a search robot can crawl your web pages. The goal of each crawl is for the robots to see all of the intended text of the page. This includes the text on the public side of the page, titles, alt text of images, and schema markup. As most of you already know, the search engines do not know or care what the site looks like; the colors, fonts, and overall design are not what will get you better rankings. What the bots see in the background is the "code" for the specific page being crawled. If the robots can't crawl your site, you will never rank!
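As a rough illustration of what a robot actually works with, the sketch below uses Python's standard-library html.parser to pull out the title, image alt text, and visible text from a page's source while ignoring styling and scripts. The sample HTML is invented for illustration:

```python
from html.parser import HTMLParser

class CrawlView(HTMLParser):
    """Collects the text a search robot cares about: title, alt text, body text."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.alt_text = []
        self.visible_text = []
        self._skip = 0  # depth inside <script>/<style>, which are not page content

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag in ("script", "style"):
            self._skip += 1
        elif tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.alt_text.append(alt)

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
        elif tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if self.in_title:
            self.title += data.strip()
        elif not self._skip and data.strip():
            self.visible_text.append(data.strip())

# Invented sample page for illustration
sample = """
<html><head><title>Plumbing in Tempe</title><style>body{color:red}</style></head>
<body><h1>Emergency Plumbing</h1>
<img src="van.jpg" alt="Service van in Tempe">
<script>trackVisitor();</script>
<p>We fix leaks fast.</p></body></html>
"""

view = CrawlView()
view.feed(sample)
print(view.title)         # Plumbing in Tempe
print(view.alt_text)      # ['Service van in Tempe']
print(view.visible_text)  # ['Emergency Plumbing', 'We fix leaks fast.']
```

Notice that the CSS and the tracking script contribute nothing: only the code-level text survives, which is exactly why a crawl-blocking error in that code is so costly.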

A lot of Websites have Crawlability Challenges

Unfortunately, with today's 3rd-party website builders (Wix, Weebly, Squarespace, etc.) and the arrival of open-source frameworks (WordPress, Drupal, Joomla) a few years ago, you now have a lot of poorly coded themes, page builders, and site builders with errors in the code that can stop a crawl partway or prevent the crawl from ever taking place (no-indexing). A partial page crawl is actually worse than no page crawl at all.
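One common way such "no-indexing" shows up is a stray robots meta tag buried in a theme's header. As a rough sketch (the class name and sample markup are my own, using only Python's standard html.parser), you can scan a page's source for it:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Flags a <meta name="robots" content="...noindex..."> tag in page source."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = {k.lower(): (v or "").lower() for k, v in attrs}
            if a.get("name") == "robots" and "noindex" in a.get("content", ""):
                self.noindex = True

# Invented example: a theme header that quietly blocks indexing
page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
finder = RobotsMetaFinder()
finder.feed(page)
print(finder.noindex)  # True
```

If this flag is set on a page you want ranked, no amount of content or link building will help until the tag is removed.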

What can stop a Crawl from taking place?

Below is a short list of reasons a page crawl may never happen:

  • Bots are refused access to crawl
  • Bad or confusing web page coding
  • Bad script code entries
  • CSS errors (code errors only)
  • HTML code errors
  • Poorly written pixel codes
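The first item, bots being refused access, usually comes down to the site's robots.txt rules. The sketch below uses Python's built-in urllib.robotparser to check whether a given bot may crawl a page; the rules and URLs are invented for illustration (normally you would point the parser at a live robots.txt with set_url() and read()):

```python
import urllib.robotparser

# Invented robots.txt rules for illustration
rules = """
User-agent: *
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/services.html"))       # True
print(rp.can_fetch("Googlebot", "https://example.com/private/quote.html"))  # False
```

A single overbroad Disallow line (some builders ship with "Disallow: /" while a site is in draft mode) can refuse every bot access to every page.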

What Can Cause a Page Crawl to Stop?

There are many things in today's world that will stop a page crawl from completing:

  • Bad script code entries
  • Dynamic headers (fly-ins)
  • CSS errors (code errors only)
  • HTML code errors
  • Tables and iframe code
  • 3rd-party embeds (tracking codes, pop-ups, etc.)

How Do I See What My Page Crawl Looks Like?

  1. In Google Search Console (Webmaster Tools for us old-timers), go to Crawl >> Fetch as Google, then click 'Fetch and Render'.
  2. Compare the results: do the public page and the crawled page look the same?

You will never be successful if your pages are not being crawled completely.

Salterra Web Services can help you with all of your web development and internet marketing needs.

More About Salterra LLC

Salterra Web Services specializes in web design, internet marketing, search engine optimization, hosting, software development, and graphics. We also have a division that removes website malware, maintains virus protection, and removes malicious software. We founded the company in 2010. We are centrally located in Tempe, Arizona, but we have clients all over the world.