Why are pages excluded from the search?
Site pages can disappear from Yandex search results for a number of reasons. To find out exactly why a page was excluded, open Yandex Webmaster, go to Indexing → Searchable pages, and select Excluded pages. Learn more about the Excluded pages block.
The reasons for excluding a page, and the solution for each one, are listed below.
Reason: The page was considered low-value or low-demand.
Solution: The algorithm decided not to include the page in search results because demand for it is probably low. For example, this can happen if the page has no content, if it duplicates pages already known to the robot, or if its content doesn't fully match user interests. The algorithm rechecks pages automatically on a regular basis, so its decision may change later. To learn more, see Low-value or low-demand pages. Having such pages doesn't mean the site has violations or ranking restrictions. You can check whether your site has restrictions on the Website optimization → Security and violations page in Yandex Webmaster.
Reason: An error occurred while the robot was downloading or processing the page, and the server response contained a 3XX, 4XX, or 5XX HTTP code.
Solution: To find the error, use the Server response check tool. If the page is accessible to the robot, check the other possible causes listed below.
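If you want a quick first check outside Yandex Webmaster, a short script can show which status code the server actually returns to an automated client. This is a minimal sketch, assuming Python with the requests library; the URL is a placeholder, and it doesn't replace the Server response check tool.

```python
# Minimal sketch: print the HTTP status a URL returns to an automated client.
# The URL below is a placeholder; redirects are not followed, so a 3XX answer
# is visible as-is.
import requests

response = requests.get("https://example.com/excluded-page/",
                        allow_redirects=False, timeout=10)
print(response.status_code)                            # e.g. 200, 301, 404, 503
print(response.headers.get("Location", "no redirect"))
```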
Reason: Page indexing is prohibited in the robots.txt file or by a meta tag with the noindex directive.
Solution: Remove the prohibiting directives. If you didn't place the ban in the robots.txt file yourself, contact your hosting provider or domain name registrar. Also make sure that the domain name isn't blocked because its registration period expired.
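As an illustration (the path is a placeholder), directives like the following in robots.txt keep the robot away from a page, and a `<meta name="robots" content="noindex">` tag in the page's HTML keeps it out of the search even if crawling is allowed; remove whichever ban applies.

```
# robots.txt: example of prohibiting directives; "/private/" is a placeholder path
User-agent: *
Disallow: /private/
```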
Reason: The page redirects the robot to other pages.
Solution: Make sure that the excluded page really should redirect users. To check this, use the Server response check tool.
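To see where a page actually sends visitors, you can also trace the redirect chain yourself. This is a hedged sketch, again assuming Python with the requests library; the URL is a placeholder.

```python
# Minimal sketch: follow the redirect chain for a page and print every hop,
# so you can confirm the page really should redirect users. Placeholder URL.
import requests

response = requests.get("https://example.com/old-page/", timeout=10)
for hop in response.history:                    # each intermediate 3XX response
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print(response.status_code, response.url)       # final destination
```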
Reason: The page duplicates the content of another page.
Solution: If the page was identified as a duplicate by mistake, follow the instructions in the Duplicate pages section.
Reason: The page is not canonical.
Solution: Make sure that the page really should send the robot to the URL specified in the rel="canonical" attribute.
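For reference, the canonical URL is declared in the page's HTML with a link element like the one below (both URLs are placeholders); the robot is expected to index the href target rather than the page that carries the tag.

```html
<!-- Illustrative only: a paginated or filtered variant pointing to its
     canonical URL. Both URLs are placeholders. -->
<link rel="canonical" href="https://example.com/catalog/shoes/">
```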
Reason: The site address is recognized as a secondary mirror.
Solution: If the sites were grouped by mistake, follow the recommendations in the Separating mirrors section.
Reason: Violations were found on the site.
Solution: To check whether this is the case, go to Website optimization → Security and violations in Yandex Webmaster.
The robot keeps visiting pages excluded from the search, and before each update of the search database a special algorithm checks how likely they are to be displayed in the search results. So a page may reappear in the search within two weeks after the robot learns that it has changed.
If you have fixed the problem, submit the page for reindexing. This way you'll inform the robot about the changes.
Questions and answers about pages excluded from the search
The page's Description and Keywords meta tags and the title element are filled in correctly, and the page meets all the requirements. Why isn't it in the search results?
In addition to checking the tags on the page, the algorithm checks whether the page content is unique, informative, in demand, and up to date, along with many other factors. That said, meta tags still deserve attention: for example, the Description meta tag and the title element may be generated automatically and end up duplicating each other.
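As a purely illustrative sketch (the wording is invented), a hand-written title and Description that describe the page in different terms look like this:

```html
<!-- Illustrative only: a title and Description meta tag that complement
     each other instead of duplicating the same text. Wording is a placeholder. -->
<title>Winter running shoes: sizes, prices, delivery</title>
<meta name="description"
      content="Catalog of winter running shoes with waterproof membranes.
               Filter by size and brand. Free delivery available.">
```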
If the site contains many similar products that differ only in color, size, or configuration, they may be excluded from the search. The same applies to pagination pages, product selection and comparison pages, and image pages without text content.
Pages that are marked as excluded open normally in the browser. What does this mean?
This can happen for several reasons:
- The headers the robot sends when requesting the page from the server differ from the headers the browser sends, so a page excluded for the robot may still open correctly in the browser (a quick way to check this is sketched after this list).
- A page excluded from the search because of a download error disappears from the list of excluded pages only once it is available at the robot's request. Check the server response for the URL in question. If the response contains the HTTP 200 OK status, wait for the next robot crawl.
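One way to check the first case is to request the page with different User-Agent headers and compare the responses. This is a minimal sketch, assuming Python with the requests library; the URL is a placeholder, and the robot User-Agent string shown is only an example of a crawler-style value.

```python
# Minimal sketch: request the same page with a browser-like and a crawler-like
# User-Agent and compare the answers. URL and User-Agent strings are examples.
import requests

url = "https://example.com/excluded-page/"
user_agents = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "robot": "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)",
}

for name, ua in user_agents.items():
    r = requests.get(url, headers={"User-Agent": ua},
                     allow_redirects=False, timeout=10)
    print(f"{name}: HTTP {r.status_code}, {len(r.content)} bytes")
```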
The “Excluded pages” list shows pages that aren't on the site anymore. How do I remove them?
The Excluded pages list on the Pages in search page shows the pages the robot accessed but didn't index (these may be non-existent pages previously known to the robot).
A page is removed from the excluded list if:
- It is not available to the robot for a certain period of time.
- It isn't linked from other pages on the site or from external sources.
Excluded pages listed in the service don't affect the site's position in the search results.