JavaScript
weranda, 2018-05-19 10:56:36

How do I avoid trouble when using a JavaScript redirect to protect a site from copying?

I'm experimenting with copy-protection options for my site. I understand this is tilting at windmills and that anyone determined enough will copy everything anyway, but still: I'd like to understand the consequences of this approach when a page is copied by an amateur or by a crude automated parser, and how third-party services (search engines, their bots and tools) react to it.
Imagine this JS embedded in the page:

// set the domain name
var $url = "example.com";
// if the configured domain doesn't match the current one, redirect to our domain
if ($url !== location.host) { location.href = 'http://' + $url; }
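One practical wrinkle with a single-string comparison: a site often legitimately answers on more than one host (the bare domain and the `www.` variant, perhaps a staging host). A minimal sketch of the same check against an allow-list, with the decision pulled into a testable function (the function name and host list here are my own assumptions, not part of the original snippet):

```javascript
// Hypothetical helper: decide whether the current host warrants a redirect.
// allowedHosts should list every host the site legitimately serves.
function shouldRedirect(currentHost, allowedHosts) {
  // Redirect only when the page is being served from a host we don't own.
  return !allowedHosts.includes(currentHost);
}

// Browser wiring would look roughly like this (commented out so the
// sketch stays runnable outside a browser):
// if (shouldRedirect(location.host, ['example.com', 'www.example.com'])) {
//   location.href = 'https://example.com' + location.pathname;
// }
```

Appending `location.pathname` in the redirect keeps the visitor on the page they were reading instead of dumping everyone on the home page, which also matters for the Webvisor problem described below.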

So far everything works as expected: a visitor browsing our site notices nothing. If someone copies a page or the whole site and hosts it elsewhere, the embedded code redirects back to our domain. A copier could of course find and remove the code, but for the purposes of this question let's exclude that scenario.
With search engines and their services, though, things are not as simple as they seem. For example, with this code in place, opening any page in Yandex.Metrica's Webvisor redirects us to our own home page: apparently Webvisor replays the page under some internal Metrica address, so the host check fails, everything bounces to the main page, and we never see the page we wanted. This can be fixed with a slightly different JS snippet that pins the exact page URL instead of just the domain name:
// set the page URL
var $url = "http://example.com/page-x";
// if the configured URL doesn't match the current one, redirect to the needed URL
if ($url !== location.href) { location.href = $url; }

This version behaves correctly in Webvisor, but that's not really the point. I don't know how many similar quirks there are in how search engines operate, and such a trick could well harm crawling and indexing of the site, since parts of their algorithms, such as how they parse JS, are a closely guarded secret.
So here is the main question: setting aside "if they want to steal it, they will", is this technique reasonable for the stated purpose, or are there pitfalls in the algorithms of search engines and their services that can lead to unpleasant consequences?
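One way to reduce the risk with replay and preview tools, instead of hard-coding a per-page URL, is to skip the redirect when the page is rendered inside a frame, since such tools commonly embed pages that way. This is a sketch under that assumption; I can't confirm it covers Webvisor's actual mechanics, and the function name is hypothetical:

```javascript
// Hypothetical variant: suppress the redirect for framed (embedded) pages,
// so replay/preview tools that render the page in an iframe are left alone,
// while a straight copy hosted on a foreign domain still gets redirected.
function needsRedirect(currentHost, isFramed, allowedHosts) {
  if (isFramed) return false;               // embedded preview: do nothing
  return !allowedHosts.includes(currentHost); // foreign host: redirect
}

// Browser wiring (sketch):
// var framed = window.self !== window.top;
// if (needsRedirect(location.host, framed, ['example.com'])) {
//   location.href = 'https://example.com' + location.pathname;
// }
```

Note this also weakens the protection slightly: a copier who serves the stolen page inside an iframe would dodge the redirect, so it trades a little strictness for compatibility with embedding tools.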
