SEO, or Search Engine Optimization, is the process of optimizing a website to make it more understandable and relevant to search engines in order to increase its organic (unpaid) traffic.
Factors influencing Search Engine Optimization
According to experts, and as Google itself has acknowledged, more than 200 factors affect the organic positioning of a website. Moz conducts a study every two years that tries to identify these factors and measure how much each one influences Search Engine Optimization. To do this, it surveys more than 150 Search Engine Optimization experts and publishes the results in the Moz ranking of the factors that influence search engines, which helps to better understand how their algorithms work. The company Searchmetrics also publishes a report ranking the factors that influence Google US; the most recent edition is from 2016 and can be downloaded from its website.
An updated list of the 200 factors that determine the relevance of a website in Google results can be found on this Backlinko page.
On-page and Off-page Search Engine Optimization
The more than 200 factors that determine a website's position in search engine results pages are classified into two groups: internal factors (on-page) and external factors (off-page).
- Internal factors (on-page Search Engine Optimization): those related to the elements of a website that we can control directly. Their main objective is to make life easier for search engines when they access our pages. Content quality, web architecture, and HTML code are the most relevant of these factors.
- External factors (off-page Search Engine Optimization): those related to elements outside the website itself, such as the number and quality of inbound links, presence on social networks, mentions in local media, brand authority, and performance in search results, that is, the CTR that our results achieve in a search engine.
The Google robot
The Google robot (Googlebot) is the generic name for the Google web crawler. It comprises two types of crawler: a desktop crawler, which simulates a user browsing from a computer, and a mobile crawler, which simulates a user on a mobile device.
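The two crawlers identify themselves with different user-agent strings. Approximately (the exact strings, and in particular the Chrome version shown here as W.X.Y.Z, change over time):

```
Googlebot Desktop:
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1;
+http://www.google.com/bot.html) Chrome/W.X.Y.Z Safari/537.36

Googlebot Smartphone:
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36
(compatible; Googlebot/2.1; +http://www.google.com/bot.html)
```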
Google constantly updates its search robot to adapt to, and even anticipate, the evolution and needs of users. According to Moz, Google makes between 500 and 600 updates to its algorithm every year. To find out whether any of these updates may have affected us, Moz has tracked the changes in Google's algorithm since 2000, which can help explain sudden changes in organic traffic.
Sometimes the algorithm behind the Google robot is updated significantly, as in February 2011 with the Google Panda update and in April 2012 with the Google Penguin update, both of which affected search results noticeably. Another relevant update was later rolled out to penalize websites that are not responsive, that is, not adapted to mobile devices.
Google Panda was launched in February 2011 with the aim of removing, or giving little weight to, low-quality content. It mainly rewards websites that generate original content and treat their topics with depth and quality. It also values the usability of pages, that they contain little advertising, and that their content can be shared on social networks.
Google Penguin was launched in April 2012 to combat webspam. It targets text links to other pages whose anchor is exactly the keyword to be positioned (which is not considered natural), links to a website from low-quality sites, and excessive use of keywords in a page's content.
How does Google search?
Google uses the following processes to provide its results:
Crawling is the process by which the Google robot discovers new and updated pages and adds them to the Google index. To do this, it uses an algorithmic crawling process that determines which sites to crawl, how often, and how many pages to fetch from each of them.
The crawling process begins with a list of web page URLs from previous crawls and is expanded with data from the sitemaps provided by webmasters. As the Googlebot visits each of these websites, it detects the links on their pages and adds them to the list of pages to crawl. New sites, changes to existing ones, and dead links are detected and used to update the Google index.
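For illustration, a minimal sitemap following the sitemaps.org protocol might look like this (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- One <url> entry per page we want crawlers to know about -->
    <loc>https://www.example.com/</loc>
    <lastmod>2016-05-30</lastmod>
  </url>
</urlset>
```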
We can also tell the Google robot not to crawl certain pages or sections of our site; for this, the robots.txt file is used. This file gives crawlers information about the pages or files they may or may not request from a website. It is mainly used to prevent a site from being overloaded with requests; however, it is not a mechanism for keeping a web page out of Google's index. To achieve that, you have to protect the page with a password or use noindex directives or tags.
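As a minimal sketch (the paths are hypothetical), a robots.txt file placed at the site root and a noindex tag look like this:

```
# robots.txt: ask all crawlers not to request anything under /private/
User-agent: *
Disallow: /private/

# Rules for Google's crawler only
User-agent: Googlebot
Disallow: /drafts/
```

```html
<!-- In the <head> of a page that must stay out of Google's index -->
<meta name="robots" content="noindex">
```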
Google’s robot processes all the pages it crawls to compile a massive index of all the words it sees, along with their location on each page. It also processes the information included in key content tags and attributes, such as “title” tags and “alt” attributes. The Google robot cannot process the content of some interactive media files or dynamic pages, for example those built with Flash.
These simple tips improve the indexing of a page (an HTML sketch follows the list):
- Create meaningful, short page titles.
- Use page headers that reflect the main topic.
- Convey content with text instead of images. At a minimum, include alternative text or other descriptive attributes for videos and images.
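A minimal HTML sketch of these tips (the title, heading, and file name are hypothetical):

```html
<head>
  <!-- Short, meaningful page title -->
  <title>Motorcycle Repair in Barcelona | Example Workshop</title>
</head>
<body>
  <!-- Header that reflects the main topic of the page -->
  <h1>Motorcycle repair services</h1>
  <!-- Content delivered as text, with alternative text on images -->
  <p>We repair and service all motorcycle brands.</p>
  <img src="workshop.jpg" alt="Mechanic repairing a motorcycle engine">
</body>
```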
When a user enters a query, Google searches the index for the pages that match it and displays the results it deems most relevant. To determine which are the highest-quality answers, Google takes into account many aspects, such as the users' location, language, and device (computer or phone), in order to offer the best user experience and the most suitable response. For example, a user in Barcelona who searches for “motorcycle repair shops” will get different results from a user in Madrid who makes the same query.
These simple tips help improve the publication and positioning of a page (a snippet follows the list):
- Make the page load quickly and optimize it for mobile devices.
- Add useful content and keep it updated.
- Follow Google's webmaster guidelines to provide a good user experience.
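As one concrete step for the mobile part of the first tip, the standard viewport meta tag tells browsers to render the page at the device's width:

```html
<!-- In the <head>: scale the layout to the device on mobile browsers -->
<meta name="viewport" content="width=device-width, initial-scale=1">
```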
More information from Google
- How does Google search work?
- What are sitemaps?
- Search Engine Optimization (SEO) Guide for Beginners.
PageRank was developed by the founders of Google, Larry Page (whose surname gives the algorithm its name) and Sergey Brin, while they were computer science graduate students at Stanford University. PageRank is a registered trademark, patented by Google on January 9, 1999.
PageRank represents the importance that Google assigns to a page based on the links coming from other web pages. In other words, each link from a page on another site adds value to the PageRank of the site that receives it. Not all links are created equal: Google identifies fraudulent links and other practices that negatively influence search results. The best links are those earned through the quality of the content.
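As a sketch, the simplified formula from Page and Brin's original paper expresses the PageRank of a page $A$ in terms of the pages $T_1, \ldots, T_n$ that link to it, where $C(T_i)$ is the number of outbound links on $T_i$ and $d$ is a damping factor (typically set to 0.85):

$$PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)$$

The values are computed iteratively: starting from equal ranks for all pages, the formula is reapplied until the scores converge.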
Matt Cutts, former head of the Google webspam team, explains in this short video how Google search works.
The evolution of searches
Google explains in this video the history of the evolution of search, highlighting some of the most important milestones of the last decade and offering a glimpse of what is to come.
The Google search engine is not the only thing that has evolved since the company's launch in September 1998. In the following video we can see how the whole ecosystem of Google products has evolved over time.
Search Engine Optimization tools