Low-value or low-demand pages
The Yandex search database contains data about numerous pages. A user search query may return a large number of results that are relevant to a greater or lesser extent.
The algorithm may decide not to include a page in search results if demand for this page is likely to be low. For example, this can happen if the page is a duplicate of pages already known to the robot, if there's no content on the page, or if its content doesn't completely match user queries. The algorithm automatically checks the pages on a regular basis, so its decision to include or not include a certain page in search may change over time.
To see the pages excluded from search results, open Yandex.Webmaster, go to Excluded pages, and look for the pages with the "Low-value or low-demand page" status.
Why are pages considered low-value or low-demand?
When selecting pages, the algorithm takes into account many factors. Based on this, the pages excluded from search results can be divided into the following types:
A page may be considered low-value if it's a duplicate of another page, or if it doesn't contain any content visible to the robot.
How to fix
- Check the page contents and its availability to the robot:
- Make sure that the page headings (title, h1, h2, etc.) are correct and describe its contents well.
- Check whether any important content is provided only as an image.
- Ensure that JS scripts aren't used to display important content. Check the server response to see the HTML code of the page as the robot receives it (a minimal sketch of such a check follows this list).
- Make sure that an iframe isn't used to display important content.
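To get a rough idea of what the robot receives, you can fetch the page yourself and inspect the raw HTML. The sketch below uses only the Python standard library; the URL is a placeholder and the User-Agent string is an assumption about how the Yandex robot may identify itself, so for an authoritative check use the server response tool in Yandex.Webmaster.

```python
import urllib.request

# Placeholder URL; the User-Agent string is an assumption about how the
# Yandex indexing robot may identify itself, not an official value.
url = "https://example.com/some-page"
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"},
)

with urllib.request.urlopen(req) as resp:
    print(resp.status)  # an indexable page should respond with 200
    html = resp.read().decode("utf-8", errors="replace")

# Inspect the raw HTML: content rendered only by JS scripts or loaded
# inside an iframe will not appear here.
print(html[:2000])
```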
If a page has no value, it's better to hide it from indexing (sample directives follow this list):
- If the page duplicates the content of other site pages, use the rel="canonical" attribute to point to the original page, or list the insignificant GET parameters in the Clean-param directive in robots.txt. You can also disable page indexing with an HTTP 301 redirect, instructions in the robots.txt file, or the noindex meta tag or HTTP header.
- If the page is technical and has no useful content, prohibit its indexing using the Disallow directive in robots.txt, or the noindex meta tag or HTTP header.
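The snippets below illustrate these directives. They are only sketches: the host, paths, and parameter names are placeholders that you would replace with your own values.

```
User-agent: Yandex
# Keep a technical section with no useful content out of the index
Disallow: /search/
# Tell the robot to ignore insignificant GET parameters on catalog pages
Clean-param: utm_source&utm_medium /catalog/
```

For an individual page, the corresponding tags go in the page's <head>:

```html
<!-- On a duplicate page: point to the original (placeholder URL) -->
<link rel="canonical" href="https://example.com/original-page/">

<!-- Or keep the page out of the index entirely -->
<meta name="robots" content="noindex">
```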
The Yandex robot checks whether the content of a page is in demand. Our algorithm evaluates each page to figure out whether it will appear in search results in positions where users can find it. If a page has no errors in its HTML code and does contain content, but there are no user queries it could answer, it may be excluded from search results as a low-demand page.
How to fix
If certain pages are excluded from search results despite having content, pay attention to this content. It might not match user queries. In this case, try editing the content so that it better suits users' interests.
Try to put yourself in your potential site visitors' shoes. How would you try to find information on the given topic? What search query could you use? To find relevant topics, use the Keyword statistics service, as well as the tools on the following Yandex.Webmaster pages: Query statistics, Managing groups.
The algorithm doesn't restrict the site in search results. For example, if a previously excluded page is updated and becomes eligible to appear in search results, the algorithm checks it again.
FAQ
Our algorithm checks all pages regularly, almost daily. Search results may change, so the relevance of pages in search may change as well, even if their content stays the same. In such cases, the algorithm may decide to exclude pages from search or return them to it.
The algorithm doesn't re-index pages; it checks the contents of the pages that are currently in the database. If the pages were previously available, responded with the 200 OK status code, and were indexed, the algorithm can continue to index them until the robot revisits these pages and detects a change in the response code.
If you want such pages to be deleted faster, prohibit them from indexing in the site's robots.txt file. After that, the links will automatically disappear from the robot's database within two weeks.
If you can't do this, see the recommendations in Indexing.
When checking pages, the algorithm evaluates a very large number of ranking and indexing factors. Decisions for different pages, even those with very similar content, may vary. It's possible that similar pages respond to the same user query, in which case the algorithm includes in the search results only the page that it considers most relevant.
The number of pages excluded this way doesn't negatively affect the site's ranking. However, pages excluded as low-demand remain available to the robot and can still take part in search, which means the algorithm may include them in search results later. If you are sure that such pages aren't needed in search, it's better to prohibit their indexing.