How to check a SPA site for compatibility with a search engine robot?
In Search Console, Google reports thousands of pages that it considers non-canonical, and for each one it links to the page it treats as the duplicate. But these are two different pages with different content: the content is loaded dynamically after the page loads, and the page title is also generated only after data arrives from the API server.

Google's own "see how the robot renders the page" tool shows two different pages with different content...

It has happened before that the bundle contained pieces of non-transpiled ES6 code. Browsers displayed the page fine, but the search engine's rendering engine apparently wasn't ready for it. Is there any way to check the site's compatibility with their robots? Maybe there are compatibility errors in my code that I don't know about?
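One quick local check is to compare the server's initial HTML payload (what a crawler that does not execute JS sees) against the fully rendered page. Below is a minimal sketch of that idea; the sample payloads and the `Product 42` marker are illustrative placeholders, since in practice you would fetch the URL once without JS execution and once via a headless browser:

```javascript
// Placeholder payloads: in a real audit, fetch these from the live site.
const initialHtml  = '<html><head><title></title></head><body><div id="app"></div></body></html>';
const renderedHtml = '<html><head><title>Product 42</title></head>' +
                     '<body><div id="app"><h1>Product 42</h1></div></body></html>';

// Pull the <title> out of an HTML string (crude regex, fine for a quick check).
function extractTitle(html) {
  const m = html.match(/<title>([\s\S]*?)<\/title>/i);
  return m ? m[1].trim() : '';
}

// A crude but useful signal: every marker string must already be present
// in the payload for a non-rendering crawler to see it.
function crawlerSeesContent(html, markers) {
  return markers.every((s) => html.includes(s));
}

const markers = [extractTitle(renderedHtml)];
console.log('initial payload OK:',  crawlerSeesContent(initialHtml, markers));  // false: title arrives only after the API call
console.log('rendered payload OK:', crawlerSeesContent(renderedHtml, markers)); // true
```

If the two checks disagree, the content the robot indexes depends entirely on whether its renderer runs your JS successfully, which matches the symptoms described above.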
It seems to me it doesn't matter whether it's an SPA or not. Look up what specific search robots actually care about — meta information, meta tags, titles, favicons — and optimize the front end accordingly.
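A small audit script can confirm those fields are present in the initial HTML rather than injected later by JS. A sketch, with a hypothetical `extractSeoFields` helper and illustrative sample markup:

```javascript
// Pull out the fields search robots commonly read from the *initial* HTML.
// Regex parsing is acceptable for a quick audit script like this.
function extractSeoFields(html) {
  const pick = (re) => {
    const m = html.match(re);
    return m ? m[1].trim() : null; // null means the crawler sees nothing here
  };
  return {
    title:       pick(/<title>([\s\S]*?)<\/title>/i),
    description: pick(/<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i),
    canonical:   pick(/<link\s+rel=["']canonical["']\s+href=["']([^"']*)["']/i),
  };
}

// Sample markup standing in for a real server response:
const html = `<head>
  <title>Example page</title>
  <meta name="description" content="Short summary">
  <link rel="canonical" href="https://example.com/page">
</head>`;
console.log(extractSeoFields(html));
```

If `canonical` comes back `null` on dynamically generated pages, Google is left to pick a canonical URL itself, which is one plausible source of the "non-canonical duplicate" reports described in the question.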
In fact, SSR was invented for exactly this purpose, because SPAs index poorly.
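The core of the SSR idea fits in a few lines: the server bakes the title and content into the HTML it sends, so the crawler needs no JS at all. A framework-free sketch; the `renderPage` function, its template, and the data object are illustrative assumptions, not any particular library's API:

```javascript
// Minimal server-side rendering sketch: the first HTTP response already
// contains everything the robot will index.
function renderPage(data) {
  return `<!doctype html>
<html>
<head><title>${data.title}</title></head>
<body>
  <div id="app">
    <h1>${data.title}</h1>
    <p>${data.body}</p>
  </div>
</body>
</html>`;
}

// On the server, fetch the data from the API first, then render:
const html = renderPage({ title: 'Product 42', body: 'Full description' });
console.log(html.includes('<title>Product 42</title>')); // true
```

(In production you would also HTML-escape `data` before interpolating it; frameworks like Next.js or Nuxt handle this, plus client-side hydration, for you.)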