Low-value or low-demand pages

The Yandex search database contains information about a very large number of pages. A single user query may match many results that are relevant to a greater or lesser extent.

The algorithm may decide not to include a page in search results if demand for this page is likely to be low. For example, this can happen if the page is a duplicate of pages already known to the robot, if there's no content on the page, or if its content doesn't completely match user queries. The algorithm automatically checks the pages on a regular basis, so its decision to include or not include a certain page in search may change later.

Having such pages on a site does not mean that the site has violations or ranking restrictions. You can check whether your site has restrictions on the Website optimization → Security and violations page in Yandex Webmaster.

To see the pages excluded from search results, open Yandex Webmaster, go to Indexing → Searchable pages (Excluded pages), and look for pages with the "low-value or low-demand" status.

Tip

If you own an online store, you can use Yandex Products and connect to Yandex Market using a YML feed. This will help Yandex get more detailed information about website pages and may improve their representation in search results.
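
For reference, a YML feed is an XML file that describes the shop and its offers. Below is a minimal, hypothetical sketch; the shop name, URLs, category, and price are placeholder values chosen for illustration, not requirements from this article.

    <?xml version="1.0" encoding="UTF-8"?>
    <yml_catalog date="2024-01-15 12:00">
      <shop>
        <name>Example Shop</name>
        <company>Example Shop LLC</company>
        <url>https://example.com</url>
        <currencies>
          <currency id="RUR" rate="1"/>
        </currencies>
        <categories>
          <category id="1">Sneakers</category>
        </categories>
        <offers>
          <!-- One offer per product; url points to the product page on the site -->
          <offer id="101" available="true">
            <url>https://example.com/catalog/sneakers-101/</url>
            <price>4990</price>
            <currencyId>RUR</currencyId>
            <categoryId>1</categoryId>
            <name>Sneakers, model 101</name>
          </offer>
        </offers>
      </shop>
    </yml_catalog>

The url elements are what let Yandex match feed entries to the corresponding pages of your site.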

Why are pages considered low-value or low-demand?

When selecting pages, the algorithm takes into account many factors. Based on this, the pages excluded from search can be divided into the following types:

A page may be considered low-value if it's a duplicate of another page, or if it doesn't contain any content visible to the robot.

How to fix

  • Check the page content and its availability to the robot:

    • Make sure that the page headings (title, h1, h2, etc.) are correct and describe its content well.
    • Check whether important content is provided only as an image; such content may not be visible to the robot.
    • Make sure that JS scripts aren't required to display important content. Check the server response to see the HTML code of the page as the robot gets it (a sample check is sketched after this list).
    • Make sure that an iframe isn't used to display content.
  • If a page has no value, it's better to hide it from indexing:

    • If the page duplicates the content of other site pages, use the rel="canonical" attribute to point to the original page, or list insignificant GET parameters in the Clean-param directive in robots.txt. You can also keep the page out of the index with an HTTP 301 redirect, directives in the robots.txt file, or the noindex meta tag or HTTP header.
    • If the page is technical and has no useful content, prohibit its indexing using the Disallow directive in robots.txt, or the noindex meta tag or HTTP header (these directives are sketched after this list).
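
As a rough sketch of these checks and directives (the domain example.com, paths, and GET parameters are placeholders chosen for illustration, not values from this article):

    # Fetch the HTML of a page exactly as it is sent to the robot (hypothetical URL)
    curl -A "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" "https://example.com/catalog/page/"

    # robots.txt: Clean-param strips insignificant GET parameters,
    # Disallow hides technical pages from indexing
    User-agent: *
    Clean-param: utm_source&utm_medium /catalog/
    Disallow: /search/

    <!-- In the <head> of a duplicate page: point to the original -->
    <link rel="canonical" href="https://example.com/catalog/page/">

    <!-- Or prohibit indexing of the page itself -->
    <meta name="robots" content="noindex">

    # The same prohibition sent as an HTTP response header
    X-Robots-Tag: noindex

Note that a page blocked with Disallow isn't crawled at all, while noindex requires the page to remain crawlable so that the robot can see the tag or header.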

The Yandex robot checks whether the content of a page is in demand. Our algorithm evaluates each page to estimate whether it would appear in search results at positions where users can find it. If a page has no errors in its HTML code and does contain content, but there are no user queries that it could answer, the page may be excluded from search results as a low-demand page.

How to fix

If certain pages are excluded from search results despite having content, pay attention to this content. It might not match user queries. In this case, try editing the content so that it better suits users' interests.

Try to put yourself in the place of your site's potential visitors. How would you try to find information on the given topic? What search query would you use? To find relevant topics, use the Keyword statistics service, as well as the Query statistics and Managing groups pages in Yandex Webmaster.

The algorithm doesn't restrict the site as a whole. For example, if a previously excluded page is updated and becomes eligible to appear in search results, the algorithm will check it again.

Note

Yandex doesn't have quotas for the number of pages included in the index. All pages that the algorithm considers useful to users are indexed regardless of their number.

Questions and answers

Pages sometimes appear in search results and sometimes disappear from them

Our algorithm checks all pages regularly, nearly every day. Search results may change, causing the relevance of pages to change as well, even if their content stays the same. Because of this, the algorithm may decide to exclude pages from search or return them.

Our pages are configured to return the 403 or 404 HTTP response code, or use the noindex element, but the links are excluded from search results as low-demand

The algorithm doesn't re-index pages; it checks the contents of the pages that are currently in the database. If the pages were previously available, responded with the 200 OK status code, and were indexed, the algorithm may continue to process them until the robot revisits these pages and registers the change in the response code.

If you want such pages to be removed faster, prohibit their indexing in the site's robots.txt file. After that, the links will automatically disappear from the robot's database within two weeks.

There are similar pages on the site, but one page is included in search results and the other one is not

When checking pages, the algorithm evaluates a very large number of ranking and indexing factors. Decisions for different pages, even those with very similar content, may vary. It's also possible that similar pages answer the same user query, in which case the algorithm includes in search only the page it considers more relevant.

Our site contains pages that shouldn't be indexed, but they are excluded as low-demand pages

The number of pages excluded this way doesn't negatively affect the site's ranking. However, pages excluded as low-demand remain available to the robot, which means they may still appear in search results later. If you are sure that such pages aren't needed in search, it's better to explicitly prohibit their indexing.

Why were duplicates excluded as low-demand pages?

Page content may differ slightly or change dynamically, so such links can't always be treated as duplicates. However, because their content is similar, such pages can compete with each other in search and effectively duplicate one another, leaving one of them in lower demand.

Pages with different content can be considered duplicates if they responded to the robot with an error message (for example, a stub page on the site). Check how the pages respond now. If they return different content, send them for re-indexing so that they can get back into search results faster.

To prevent pages from being excluded from search when the site is temporarily unavailable, configure the 503 HTTP response code.
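
For example, if the site runs behind nginx, a temporary maintenance setup might look like the sketch below (the server name and retry interval are assumptions made for illustration):

    # nginx: answer every request with 503 while the site is under maintenance,
    # and tell robots when to retry via the Retry-After header (in seconds)
    server {
        listen 80;
        server_name example.com;

        location / {
            add_header Retry-After 3600 always;
            return 503;
        }
    }

When the site is available again, remove this configuration so that pages return the 200 OK status code.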



