This regularly updated guide to SEO details all the requirements for good search engine ranking.
The term “SEO” (Search Engine Optimization) covers all the techniques used to improve a website's visibility: submission, which consists of making the site known to search engines, and positioning, which consists of placing the site's pages high in the result pages for certain keywords.
The difficulty of the exercise lies not so much in promoting the site to search engines as in structuring the content and the internal and external linking so that the pages rank well for the pre-selected keywords.
Indeed, a majority of Internet users use search engines to find information, querying them with keywords. It is therefore essential, first and foremost, to focus on the content being offered in order to best meet users' expectations, and to identify the keywords they are likely to type.
- 1 SERP
- 2 Is my site referenced?
- 3 Optimize a website for SEO
- 4 Add your site to search engines
- 5 Free SEO
- 6 Paid referencing (AdWords)
- 7 Optimize SEO
- 8 Improve crawl budget
- 9 Social networks
- 10 SEO for a mobile website
- 11 Duplicate Content
- 12 Penalties
- 13 Google’s algorithm
- 14 Practical tools
- 15 Twitter accounts (USA)
SERP

Search Engine Result Pages (SERP) are the search results as displayed after a query. It is essential to understand that the results for a single search engine may vary from one user to another: depending on the parameters chosen by the user (language, number of results per page), on the location (country, region) from which the query is made, on the terminal (mobile, tablet, desktop), sometimes on queries made previously by the user, and finally because search engines regularly run A/B tests to try out different displays. It is therefore not uncommon for a site to disappear from the SERP for a given query for 24 to 48 hours and then reappear, which means it is wise to wait at least 72 hours before worrying.
This means that seeing yourself in first position does not necessarily mean everyone sees you there. To obtain a result as close as possible to what the majority of users see, it is advisable to disable your query history, or even to browse in your browser's private mode.
The pages referenced in first position obviously get more visits, followed by the pages in second position, and so on; the same goes for pages on the first results page compared with pages on the second. So if a page is in 11th position (i.e. on the second page), it is well worth optimizing it to get it onto the first page and gain a significant number of unique visitors.
Referencing only makes sense with keywords, i.e. the words visitors actually use when searching.
The first job is to determine the keywords on which you want to position the pages of the site. The keywords that you have in mind do not always correspond to the keywords used by visitors, because they tend to use the shortest possible terms or to make spelling mistakes.
There are tools to compare the volume of search for a keyword versus another and provide suggestions:
Finally, there are sites that reveal the keywords of competing sites:
SEO Black hat/White hat
In the area of natural referencing, there are generally two currents of thought:
- White hat SEO, which scrupulously respects the search engines' guidelines in the hope of obtaining sustainable referencing by playing by the rules of the game;
- Black hat SEO, which adopts techniques contrary to the search engines' guidelines in order to obtain quick gains on pages with high monetization potential, but with a high risk of being downgraded. Black hat SEOs play cat and mouse with the search engines, which regularly adapt their algorithms to identify and downgrade sites that do not respect the guidelines. Techniques such as cloaking or content spinning are thus considered dangerous and are not recommended.
Is my site referenced?
To find out if your site is referenced in a search engine, simply enter the following command in the search field:
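The command in question is the `site:` operator, supported notably by Google and Bing (the domain below is a placeholder to replace with your own):

```
site:www.yoursite.com
```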
If your site is in the index, the engine should display a selection of the pages it knows about the site, along with an approximate count of the pages it has indexed.
Optimize a website for SEO
Before talking about optimization, the first step is to make sure that the main search engines, and especially Google (since it is the most used), identify the site and crawl it regularly. Strictly speaking, “referencing your site” means nothing more than having the site's pages present in a search engine's index. To achieve this, it is enough:
- either to obtain links from sites that are themselves regularly indexed by search engines, so that the engines discover yours;
- or to declare your site directly via the interfaces of the main search engines.
Add your site to search engines
For this purpose, there are online forms for submitting your website. Do not hesitate to set up a web analytics solution such as Google Analytics or AT Internet (www.atinternet.com), which will tell you where your visitors come from and which pages they visit, along with a great deal of other useful information.
Google is the main search engine in France, with about 90% market share. The page for submitting a URL to Google is: https://www.google.com/webmasters/tools/submit-url. Submission is completely free but takes a certain amount of time, which varies from one period to another.
Bing handles referencing through its Webmaster Tools. Simply create an account and follow the procedure on the following page: http://www.bing.com/toolbox/webmaster
Yahoo now relies on Bing for its search engine. The following page explains how to submit new URLs: https://fr.aide.yahoo.com/kb/SLN2217.html
Voila.fr is the engine used by the services of Orange.fr. Even though it has a smaller market share than Google and Bing, it can be worth including. The address for submitting to it is: http://referencement.ke.voila.fr/index.php
Exalead is an alternative French search engine. To submit its site on Exalead, simply use the following page: http://www.exalead.com/search/web/submit/
Free SEO

Referencing is not necessarily paid, since search engines index the content of sites free of charge, and it is not possible to pay them in order to position a site better.
It is enough for other sites to link to yours for the engines to visit it, and the higher the quality of those links (i.e. from sites with a good reputation), the better your site will rank in the engines for terms matching its content. However, the methods involved are numerous and sometimes complex, and a simple mistake can have significant consequences, which is why many companies call on SEO professionals to advise and assist them.
Paid referencing (AdWords)
It is also possible to buy keywords on search engines; these are advertising placements (called sponsored links) located around the so-called natural search results. This is referred to as SEM (Search Engine Marketing), as opposed to SEO (Search Engine Optimization).
Moreover, since SEO is a broad field that requires a lot of experience and hides many pitfalls, companies are advised to turn to agencies specialized in referencing, which can advise and assist them.
Specialized agencies can help you position your site in search results. They may also offer to create or update the site's content. However, beware of offers such as “SEO in more than 200 search engines”, “Referencing in more than 1000 directories” or “Guaranteed SEO”/“First place in a few days”: natural referencing must remain natural, that is to say, progressive.
Beware of automatic referencing software. Some search engines simply reject such submissions (in most cases you have to fill in a form and leave your email address). In extreme cases, the use of such software, if it massively submits pages of your site to a large number of directories, can be counter-productive and lead some engines to ban your site.
Optimize SEO

The reference element for search engines is the web page, so when designing the website you need to think about structuring the pages, taking the advice below into account for each page.
Indeed, most webmasters think only of getting the home page of their site correctly indexed, whereas it is usually the other pages that contain the most interesting content. It is therefore imperative to choose a title, a URL and so on for each page of the site.
There are some site design techniques that make it possible to give more efficiency to referencing pages of a site:
- original and attractive content,
- a well-chosen title,
- a suitable URL,
- a body of text readable by search engines,
- META tags precisely describing the content of the page,
- well-thought-out links,
- ALT attributes describing the content of the images.
Contents of the web page
Search engines strive above all to provide a quality service to their users by giving them the most relevant results for their search, so even before thinking about improving referencing, it is essential to concentrate on creating original, quality content.
Original content does not mean content offered by no other site; that would be an impossible task. On the other hand, it is possible to treat a subject and add value to it by deepening certain points, organizing it in an original way or cross-referencing different pieces of information. Social networks are an excellent vehicle for promoting content and for gauging the interest readers take in it.
Furthermore, in order to offer the best content to visitors, search engines attach importance to how up to date information is. Updating the pages of the site can therefore increase the score the engine gives the site, or at least the frequency of the indexing robot's visits.
Title of the page

The title is the preferred element for describing the content of the page in a few words, and above all the first element the visitor reads in the search engine results page, so it is essential to give it special attention. The title of a web page is declared in the page header, between the <TITLE> and </TITLE> tags. It must describe the content of the page as precisely as possible in 6 or 7 words at most, and its total length should ideally not exceed sixty characters. Finally, it should ideally be unique within the site, so that the page is not considered duplicate content. The title is all the more important because it is the information displayed in the user's bookmarks, in the browser's title bar and tabs, and in the history. Since European users read from left to right, it is advisable to place the most important words on the left. In particular, make sure that every page of your site has a unique title, including pages with pagination; in the latter case, for example, pages beyond page 1 can include the page number in the title.
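As a sketch, a title respecting these recommendations (the wording is purely illustrative) might look like this:

```html
<head>
  <!-- About 6-7 words, under ~60 characters, unique within the site -->
  <title>SEO Guide: Optimize Your Website Referencing</title>
</head>
```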
URL of the page

Some search engines attach great importance to keywords in the URL, particularly keywords in the domain name. It is therefore advisable to give each file of the site an appropriate name containing one or two keywords, rather than names like page1.html, page2.html, etc. CCM uses a technique called URL rewriting to produce readable URLs containing the keywords of the page title; on CCM, the hyphen is used as the separator:
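For example, a URL rewritten with hyphen-separated keywords (hypothetical addresses) might look like this:

```
http://www.example.com/faq/?article=1234              (before rewriting)
http://www.example.com/faq/improve-seo-referencing    (after rewriting)
```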
To make the most of the content of each page, it must be transparent (as opposed to opaque content such as Flash), that is, it must contain as much text as possible that the engines can index. The content of the page must above all be quality content aimed at visitors, but it can be improved by making sure the various keywords are present.
Frames are strongly discouraged because they sometimes prevent the site from being indexed properly.
META tags

META tags are non-displayed tags inserted at the beginning of an HTML document to describe it precisely. Given the abuse observed on a large number of websites, engines make less and less use of this information when indexing pages. The “keywords” meta tag has been officially abandoned by Google.
The meta description tag lets you add a description of the page that is not displayed to visitors on the page itself (for example, plural forms or even deliberate spelling mistakes). It is generally this description (or part of it) that appears in the SERP. It is recommended to use HTML encoding for accented characters and to keep the description concise.
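A minimal sketch of such a tag (the description text is illustrative):

```html
<meta name="description" content="Guide to natural referencing: titles, URLs, META tags and links to improve a site's position in search engines." />
```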
The meta robots tag is of particular importance because it describes the behavior of the robot vis-à-vis the page, including whether the page should be indexed or not, and whether the robot is allowed to follow the links. By default the lack of a robot tag indicates that the robot can index the page and follow the links it contains.
The robots tag can take the following values:
- index, follow: equivalent to having no robots tag at all, since this is the default behavior;
- noindex, follow: the robot must not index the page (but may return regularly to check for new links);
- index, nofollow: the robot must not follow the links of the page (but may index the page);
- noindex, nofollow: the robot must no longer index the page or follow its links. This leads to a drastic drop in the frequency of robot visits to the page.
Here is an example of a robots tag:

<meta name="robots" content="noindex, nofollow" />
Also note the existence of the following values, which can be cumulated with previous values:
- noarchive: the robot must not offer users the cached version (notably for the Google cache);
- noodp: the robot must not use the DMOZ (Open Directory Project) description by default.
It is possible to specifically target Google's crawlers (Googlebot) by replacing the name robots with googlebot (although it is advisable to use the standard tag to remain generic):
<meta name="googlebot" content="noindex, nofollow" />
When a large number of pages should not be indexed by search engines, it is preferable to block them via robots.txt: that way the crawlers do not waste time on these pages and can concentrate all their energy on the useful ones.
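As a sketch, a robots.txt blocking low-value sections (the paths below are hypothetical) could look like this:

```
User-agent: *
# Block internal search results and print-friendly duplicates
Disallow: /search/
Disallow: /print/
```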
On CCM, forum questions that receive no answers are excluded from search engines, but robots may continue to crawl the pages in order to follow the links:
<meta name="robots" content="noindex, follow" />
After a month, if the questions still have no answer, the meta tag becomes the following, so that the engine forgets them:
<meta name="robots" content="noindex, nofollow" />
In order to give maximum visibility to each of your pages, it is advisable to establish internal links between them so that crawlers can browse your entire tree structure. Thus it may be worth creating a page presenting the architecture of your site and containing pointers to each of your pages.
By extension, this means that the site's navigation (main menu) must be designed to give effective access to pages with high SEO potential.
NetLinking

The term netlinking refers to obtaining external links pointing to your website. On the one hand it increases traffic and the site's reputation; on the other, search engines take the number and quality of inbound links into account when assessing a site's relevance (as Google does with its well-known PageRank).
Links are followed by search engines by default (in the absence of a nofollow META robots directive or a robots.txt file preventing indexing of the page). However, it is possible to tell search engines not to follow certain links by using the nofollow attribute.
This is recommended in particular if:
- the link is the subject of a commercial agreement (paid links);
- the link is added by untrusted users in dedicated areas of the site (comments, reviews, forums, etc.).
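For example, a link marked so that engines do not follow it (the URL is a placeholder):

```html
<a href="http://www.example.com/" rel="nofollow">Example site</a>
```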
On CCM, links posted by anonymous users, or by users not actively participating in the community (helping on the forums), are nofollow links. Links posted by active users and contributors are normal (“dofollow”) links.
ALT Attributes of images
The images of the site are opaque to search engines, i.e. the engines cannot index their content, so it is advisable to give each image an ALT attribute describing its content. The ALT attribute is also essential for blind users who browse with Braille terminals.
Here is an example of an ALT attribute:

<img src="images/ccm.gif" width="140" height="40" border="0" alt="CCM logo" />
It is also advisable to add a title attribute, which displays a tooltip to the user describing the image.
Improve crawl budget
SEO begins with the crawl (exploration) of your site by search engines. Crawlers are agents that browse sites looking for new pages to index or pages to update; an indexing robot acts like a virtual visitor of sorts, following the links on your site to explore as many pages as possible. These robots can be identified in the logs by the HTTP User-Agent header they send. Below are examples of User-Agent strings for the most popular search engines:
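As an indication, here are the User-Agent strings commonly sent by the Google and Bing crawlers:

```
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
```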
It is therefore necessary to make sure the pages are intelligently linked together, so that robots can reach as many pages as possible, as quickly as possible.
To improve the indexing of your site, there are several methods:
It is possible, and desirable, to block pages that are useless for referencing using a robots.txt file, so that indexing robots can devote all their energy to the useful pages. Duplicate pages (for example, pages with parameters that are useless to robots) or pages of little interest to visitors arriving from a search (the site's internal search results, etc.) should typically be blocked;
On CCM, the results of the internal search engine are explicitly excluded from referencing via robots.txt, so as not to present users arriving from a search engine with automatically generated results, in accordance with Google's guidelines.
Speed of loading of pages – Page Speed
It is important to improve page load time, for example by using caching mechanisms, because it improves the user experience and therefore visitor satisfaction, and search engines increasingly take this type of signal into account when positioning pages;
Creating a sitemap file gives robots access to all of your pages, or to the latest pages to be indexed.
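A minimal sitemap follows the sitemaps.org XML format (the URL and date below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2015-01-01</lastmod>
  </url>
</urlset>
```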
Social networks

More and more search engines take social sharing signals into account in their algorithms. Google Panda notably uses this criterion to determine whether or not a site is of good quality. In other words, encouraging social sharing limits the risk of being hit by algorithms such as Panda.
On CCM, pages contain asynchronous sharing buttons so as not to slow down page loading, as well as an og:image OpenGraph META tag telling social networks which image to display when a user shares a link.
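For example, such a tag looks like this (the image URL is hypothetical):

```html
<meta property="og:image" content="http://www.example.com/images/preview.png" />
```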
SEO for a mobile website
The ideal is to have a mobile site built with responsive design, because in that case the page indexed for desktop computers and for mobile terminals is the same; only its display changes according to the device.
If your mobile website is on a separate domain or subdomain, as is the case for CCM, simply redirect mobile users to the mobile site, making sure that each redirected page points to its equivalent on the mobile site. You must also make sure that the Googlebot-Mobile crawler is treated as a mobile terminal.
Google has indicated that “mobile-friendly” pages get an SEO boost over non-mobile-friendly pages in mobile search results. This boost is applied page by page and is reassessed continuously for each page, depending on whether or not it passes the test.
To go further: SEO of a mobile site.
Duplicate content

As far as possible, create unique titles across the whole site, because search engines such as Google tend to ignore duplicate content, i.e. either many pages of the site sharing the same title, or pages whose main content already exists elsewhere on the site or on third-party sites.
Some duplicate content is natural, if only because we quote, report public figures' words or refer to official texts. However, too much duplicated content on a site can lead to an algorithmic penalty, so it is advisable to block such content using a robots.txt file or a META robots tag with a “noindex” value.
When search engines detect duplicate content, they keep only one page, according to their own algorithms, which can sometimes lead to errors. It is therefore advisable to include, in pages containing duplicated content, a canonical tag pointing to the page to be kept. Here is the syntax:
<link rel="canonical" href="http://votresite/pagefinale" />
More generally, it is advisable to include in your pages a canonical tag containing the URL of the current page. This helps limit losses caused by useless URL parameters, such as http://www.mysite.net/forum/?page=1 or http://www.mysite.net/faq/?utm_source=mail.
This also helps for index pages, because Google occasionally indexes your home page in both its http://www.mysite.net/ and http://www.mysite.net/index.php forms.
Penalties

There are generally two types of penalties:
- Manual penalties, i.e. resulting from human action following non-compliance with the webmaster guidelines. These may involve unnatural (purchased) links, artificial content, misleading redirects, etc. Penalties for buying links are common and penalize both the site that sold the links and the sites that bought them. Such penalties can only be lifted after correcting the problem (which implies having identified it) and submitting a reconsideration request for the site via the dedicated form. Review of a website can take several weeks and does not necessarily lead to a recovery of positions, or sometimes only a partial one;
- Algorithmic penalties, i.e. resulting from no human action, usually linked to a set of factors that only the search engine knows. This is the case, for example, with Google Panda, the Google algorithm that downgrades so-called low-quality sites, or Google Penguin, an algorithm targeting bad SEO practices. These penalties can only be lifted once the “signals” that caused the downgrade have been eliminated, at the next iteration of the algorithm.
Google's algorithm

Google's algorithm is the set of instructions that allows Google to return a results page for a given query.
Originally, the algorithm was based solely on the study of links between web pages, using a score assigned to each page called PageRank (PR). The principle is simple: the more incoming links a page has, the higher its PageRank; and the higher a page's PageRank, the more it passes on to its outgoing links. By extension, we speak of the PageRank of a site to mean the PageRank of its home page, since it is usually the page with the highest PageRank of the whole site.

Optimizations in the algorithm

Since PageRank, the algorithm has taken a large number of additional signals into account, including (non-exhaustive list):
- the freshness of the information;
- the mention of the author;
- time spent on the page and the reader's degree of involvement;
- traffic sources other than SEO;
- etc.
Google announces around 500 changes to its algorithm per year, i.e. more than one change per day. As a result, the SERP can vary significantly as Google's teams roll out their changes.
Panda is the name of Google's filter for fighting low-quality sites. Its principle is to downgrade the positioning of sites whose content is judged to be of too low quality. See Google Panda.
Google Penguin is a Google update penalizing sites whose SEO optimization is judged excessive, for example sites with too many links coming from sites considered “spammy”. An excess of links between pages dealing with unrelated subjects also appears to be a factor that can trigger a penalty via the Google Penguin algorithm. Google has accordingly set up a form to disavow links potentially harmful to a site's referencing (see the history of Google Penguin deployments).
Getting out of an algorithmic penalty
First, you need to make sure that the drop in audience is indeed linked to an algorithm change: check whether the decline coincides with a known deployment of Panda or Penguin. If it does, there is a strong likelihood that it is linked. Note that a deployment can take several days, or even weeks, which means the decline is not necessarily abrupt.
To get out of an algorithmic penalty, it is advisable to review your main content pages manually and to check, point by point, whether the quality is up to par and the content unique. If the quality is insufficient, the content should be improved, deindexed or deleted. In the case of Google Penguin, examine the links pointing to the site in Google Webmaster Tools and make sure they are natural on the one hand, and of good quality on the other (coming from sites that do not look like spam).
Practical tools

- Google Webmaster Tools
- Bing Webmaster Tools
- Google Trends (search trends)
- Übersuggest – keyword suggestions
Twitter accounts (USA)
- @google: official account of Google
- @mattcutts: official account of Matt Cutts, head of Google's anti-spam team
- @sengineland: official account of the SearchEngineLand site, an American reference site specialized in SEO/SEM/PPC
- @dannysullivan: official account of Danny Sullivan, editor of the SearchEngineLand site
- @seobook: official account of the American site SEObook, specialized in referencing
- @SEOmoz: official account of SEOmoz, an American position-tracking tool
- @googlewmc: official account of Google Webmaster Tools, providing information about the tool's latest developments
See also:
- Read the Page Rank Definition to become a real SEO Hero