How to create /robots.txt for a Rails application?
Rails 5.0.0.
The robots.txt file is located in the application's public directory.
Next to it are other files: sitemap.xml.gz and the ownership-verification files for Google and Yandex. All of these are available at site.ru/file.format in both development and production.
For some reason the problem occurs only with robots.txt: it used to be available in development, but now it is not available at all. Re-uploading it to the server did not help.
The command curl http://site.ru/robots.txt
returns an empty response.
What's the matter?
UPD
Now I have the following.
In development mode:
curl http://site.ru/robots.txt
User-agent: *
Disallow:
Disallow: /user
Disallow: /admin
Sitemap:
Perhaps something is intercepting the request for robots.txt. Check routes.rb, run rake routes, and disable all extraneous gems; if you are using nginx, check that there are no handlers for it there either. Look carefully: the interception does not necessarily mention robots.txt explicitly, it may happen through some hard-to-spot wildcard route.
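As a hypothetical illustration of the kind of wildcard the answer warns about, a trailing catch-all entry in config/routes.rb matches every request the router sees, including /robots.txt. The controller and action names below are made up for the example:

```ruby
# config/routes.rb -- illustrative fragment only; "pages#home" and
# "pages#catch_all" are invented names, not from the question.
Rails.application.routes.draw do
  root "pages#home"

  # Rails matches routes top to bottom, so a wildcard like this at the
  # bottom swallows any request that reaches the router, /robots.txt
  # included, and the catch-all action decides what (if anything) to return.
  get "*path", to: "pages#catch_all"
end
```

Note that such a route only intercepts the request if it actually reaches the router; rake routes will show whether a wildcard like this exists in the app.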
If you are using nginx, robots.txt must be covered in the config file, since the web server controls how static files are served.
server {
    .....
    location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }
    .....
}
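As a sketch of what that location block captures, the same regex can be exercised in plain Ruby. The pattern is copied verbatim from the config above (unescaped dots included); the sample URIs are made up:

```ruby
# The regex from the nginx "location ~" block, reproduced in Ruby.
# nginx matches it against the request URI: any URI containing
# "robots.txt" or "favicon.ico", or starting with /assets/, /fonts/,
# or /system/, is handled by that location.
PATTERN = %r{^/(assets|fonts|system)/|favicon.ico|robots.txt}

["/robots.txt", "/assets/app.js", "/favicon.ico", "/users"].each do |uri|
  puts "#{uri} matches: #{!(uri =~ PATTERN).nil?}"
end
```

Because the dots in favicon.ico and robots.txt are unescaped, they match any character; that is harmless here but worth knowing when reading the config.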