stasonua0, 2020-05-10 11:19:39

How to get rid of a lot of 404 errors in Wordpress + Woocommerce?

Hello, please help me find a solution for getting rid of 404 errors on WordPress with WooCommerce.
Site - pixel.com

Every day I see that Yandex is trying to index product URLs with /href at the end of the address, and I can't figure out where such addresses come from.
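Such URLs typically arise from broken anchor markup: if a template emits a bare or malformed link whose target is the literal string "href", crawlers resolve it relative to the current page URL. A minimal sketch of that resolution using Python's standard library (the pixel.com product path is just an illustrative example, not a real URL from the site):

```python
from urllib.parse import urljoin

# A malformed template can emit a relative link whose target is the
# literal string "href". Crawlers resolve it against the page URL:
page = "https://pixel.com/product/some-item/"
bad_link = "href"

resolved = urljoin(page, bad_link)
print(resolved)  # https://pixel.com/product/some-item/href
```

This would explain why every product page spawns its own /href variant: each page contributes one broken relative link.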

SEO plugin - Rank Math SEO

Tests in the WebSite Auditor program also show product addresses with /href at the end of the URL.

I added the following directive to robots.txt - Disallow: /*href (though I'm not sure whether it helps).
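For reference, a sketch of how that rule might sit in robots.txt. Note that `Disallow: /*href` blocks any URL whose path contains "href" anywhere, which matches the broken addresses but could also catch a legitimate URL containing that substring; anchoring with `$` is a tighter variant (both Yandex and Google support `*` and `$` in robots.txt rules):

```text
User-agent: *
# Blocks any URL whose path contains "href" anywhere:
Disallow: /*href
# Tighter variant that only blocks URLs ending in "href":
# Disallow: /*href$
```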

Yandex answer:
Hello, Stanislav!

Apparently, the robot found links to pages at such addresses somewhere on your site or elsewhere on the Internet, so it tried to index them. The robot automatically tries to visit every page it finds a link to. Unfortunately, it is not possible to establish exactly where the robot learned about particular pages, since such data is not stored in our database. The pages could appear in the robot's database for the following reasons:
- external links lead, or previously led, to the specified pages. You can analyze incoming links to pages using public tools: https://yandex.ru/search/?text=check%20internal...
- relative links are specified incorrectly, or the base tag is missing. Check that the tag is present on all pages of the site and make sure the specified links are correct;
- there are errors on the side of the CMS or the site's "engine". In this case, we recommend contacting the CMS developers or looking for information on specialized forums;
- pages may have appeared on the site as a result of hacking; it is possible that the source of infection is still in your site's files, so we recommend carefully checking all files for extraneous or suspicious code. You can find some tips on finding such code on our Help pages:

https://yandex.ru/support/webmaster/security/cure.html
https://yandex.ru/support/webmaster/security/send-...

If you do not want the robot to access these pages, you can prohibit them from indexing in the robots.txt file. In this case, the robot will know that they do not need to be indexed and will not access them. And if no external links lead to the pages, information about them will be removed from Yandex.Webmaster over time. You can read more about the robots.txt file on our Help page: https://yandex.ru/support/webmaster/controlling-ro... . I also recommend the robots.txt verification tool: https://webmaster.yandex.ru/tools/robotstxt/ . Directly in the tool, you can correct the instructions and check whether the pages you need are allowed or prohibited for indexing.
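To locate where the broken links come from, one practical step is to scan the rendered HTML of a few product pages for anchors whose target is the literal "href" (or ends in "/href"). A minimal stdlib-only sketch; fetching the pages is left out, this only extracts suspect links from an HTML string:

```python
from html.parser import HTMLParser

class SuspectLinkFinder(HTMLParser):
    """Collects <a> hrefs that would resolve to the broken /href URLs."""

    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        # A bare `href` attribute (value None), a literal "href" target,
        # or a path ending in "/href" all point at the broken addresses.
        if href is None or href == "href" or href.endswith("/href"):
            self.suspects.append(href)

def find_suspect_links(html):
    parser = SuspectLinkFinder()
    parser.feed(html)
    return parser.suspects

# Example: one well-formed link and one broken one.
sample = '<a href="/shop/item/">ok</a> <a href="href">broken</a>'
print(find_suspect_links(sample))  # ['href']
```

Running this over the theme's product template output (or a saved copy of a product page) should reveal which element emits the bad link, pointing to the responsible theme or plugin.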
