Search Engine Optimization
Zimaell, 2020-02-20 21:53:32

Could there be a problem with robots.txt when a search engine crawls a site?

Two weeks ago I submitted a site to Google. While searching around I came across an article describing what to check and where, for example the "mobile-friendliness check". I ran it and it gave me this result:
[screenshot: 5e4ed408c067a876115380.jpeg]
The robots.txt itself looks like this:

User-agent: *
Disallow:
Host: https://.............
Sitemap: https://............./sitemap.xml

Is there a problem with it, or has the search engine simply not gotten going yet?
As far as I can tell, it allows all bots; there is no prohibition...
The coverage report looks like this:
[screenshot: 5e4ed5333f1ac856369902.jpeg]
and the performance report like this:
[screenshot: 5e4ed57ea06de998226848.jpeg]
Tell me, am I doing something wrong and could there be problems, or is everything fine and I just need to wait?


1 answer(s)
ConstKen, 2020-02-20

User-agent: *
Disallow:
Host: https://.............
Sitemap: https://............./sitemap.xml

Generally speaking, the Disallow directive is what blocks pages from a crawler.
Yours is empty, so nothing is blocked.
So just wait.
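You can verify this yourself with Python's standard-library robots.txt parser. This is just a sketch: the domain is a placeholder (`example.com`), since the actual URL in the question is redacted.

```python
# Sanity-check a robots.txt with an empty Disallow directive.
# example.com is a placeholder for the asker's redacted domain.
from urllib.robotparser import RobotFileParser

rules = """User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# An empty Disallow blocks nothing, so every path is allowed for every bot.
print(parser.can_fetch("*", "https://example.com/any/page"))  # True
print(parser.can_fetch("Googlebot", "https://example.com/"))  # True
```

If both calls print True, the file permits crawling, and any delay in indexing is on the search engine's side, not in robots.txt.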
