What other keyword databases are there?
Hello. Previously, when collecting keywords, I used the combination Key Collector + UpBase (keyword database) + KeyAssort (clustering).
BUT the database, UpBase, has died. The creators seem to have shut down the project and don't respond. It was a desktop solution that let you pull out great low-frequency long-tail queries, the kind you can't collect from any Wordstat.
Are there any other analogues of such a database? Preferably desktop solutions: something you buy once and it's yours, maybe with an update every six months to a year.
I looked at services like Rush Analytics; to be honest, it looks a bit expensive. On average, how much do you spend, in rubles and in time, to collect semantics + clustering in online services? (service name, number of queries/pages, time, cost).
If you collect through Key Collector, the only costs are proxies, from 1,000 rubles, and anti-captcha at about $3-5 per month.
Collection time depends on the number of proxies, parsing depth, and so on. For 99% of projects, Wordstat, search suggestions, and LSI phrases are enough for the semantics. Databases are only needed when the pages are already fully optimized for existing queries and you need to squeeze out a bit more traffic.
If you need low-frequency queries, try the Pastukhov database.
For my small projects, I collect from Wordstat by hand, without software, and add LSI phrases and suggestions. I spend 0 rubles. I also cluster by hand, because the result is better that way.
For large projects, I collect through Key Collector, cluster with Rush Analytics, and finish the clustering by hand. With large cores of 100-600 thousand queries, this can keep me busy for two weeks.
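To give a feel for what the "finish the clustering by hand" step automates, here is a toy word-overlap grouping sketch. This is NOT how Rush Analytics actually clusters (it presumably works from SERP similarity, not raw word overlap); the stopword list and sample queries are made up for illustration.

```python
# Toy keyword clustering: group queries that share at least `threshold`
# significant (non-stopword) tokens. Purely illustrative.

STOPWORDS = {"a", "the", "in", "to", "for", "of", "at", "how"}

def tokens(query):
    """Lowercased non-stopword tokens of a query, as a set."""
    return {w for w in query.lower().split() if w not in STOPWORDS}

def cluster(queries, threshold=2):
    """Greedy single-pass grouping by shared token count."""
    clusters = []  # each cluster: (accumulated token set, [queries])
    for q in queries:
        t = tokens(q)
        for ctokens, items in clusters:
            if len(t & ctokens) >= threshold:
                items.append(q)
                ctokens |= t  # widen the cluster's vocabulary
                break
        else:
            clusters.append((t, [q]))
    return [items for _, items in clusters]

queries = [
    "buy corner sofa",
    "corner sofa price",
    "washing machine repair",
    "repair washing machine at home",
]
print(cluster(queries))
# → [['buy corner sofa', 'corner sofa price'],
#    ['washing machine repair', 'repair washing machine at home']]
```

A real clusterer compares the top-10 search results of each query instead of the words themselves, which is why hand-finishing is still needed after any automatic pass.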
Serpstat does a good job. It has a handy filter in phrase selection where you can set the number of words in a query: from, to, more than, less than, within a range, and so on. That way you immediately find long-tails containing the right word.
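The same word-count filter is trivial to reproduce locally on an exported phrase list. A minimal sketch (sample queries invented for illustration):

```python
# Keep only queries whose word count falls within [min_words, max_words],
# mimicking the "number of words from/to" filter described above.

def filter_by_word_count(queries, min_words=4, max_words=None):
    """Return queries with at least min_words (and at most max_words) words."""
    result = []
    for q in queries:
        n = len(q.split())
        if n >= min_words and (max_words is None or n <= max_words):
            result.append(q)
    return result

queries = [
    "buy sofa",
    "buy corner sofa",
    "buy a cheap corner sofa in moscow",
    "how to choose a sofa for a small kitchen",
]

print(filter_by_word_count(queries, min_words=5))
# → ['buy a cheap corner sofa in moscow',
#    'how to choose a sofa for a small kitchen']
```

Queries of five words or more are almost always long-tails, so a filter like this is a quick first cut before any manual review.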
And don't listen to Fedor)) Or rather, compare for yourself: the speed of collecting a semantic core by hand through Wordstat, plus all the subsequent cleaning and clustering, on which he supposedly spends nothing in money, versus the same processes in Serpstat. This service has a remarkably well-implemented API (very cheap, by the way), and its blog has scripts for pulling the entire top 100 results into a table at once by your chosen parameters. So, unlike Fedor, once I subscribe to a plan I get the semantics of the entire top 100 in a couple of minutes, and I handle, I don't even know, ten times more projects than he does?)) How do such "do-it-by-hand" SEOs even get by? I don't envy the clients who fall for that.