ckald, 2011-06-29 01:16:49

Counting unique clicks on tweet share button?

To get all clicks, we look at:
urls.api.twitter.com/1/urls/count.json?url=http://...
But this is only a total count, which makes it trivial to inflate the number artificially.
You can also use search with the source parameter:
search.twitter.com/search.json?source=tweet_button...
search.twitter.com/search.json?q=http://www.artleb...
Obviously, the latter returns more results. But these are not all tweets: only those Twitter deems relevant, and only for a limited time.
Is it possible to get the total number of unique shares?
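For reference, the count endpoint returned a small JSON body with a total in a "count" field. A minimal sketch of reading it (assuming that response shape; `parse_count` is a hypothetical helper, and the actual HTTP fetch is omitted since the v1 endpoint is long retired):

```python
import json

def parse_count(body):
    """Extract the total tweet count from a count.json response body."""
    return json.loads(body)["count"]

# Example with a response body of the shape the v1 endpoint returned.
print(parse_count('{"count": 1234, "url": "http://stackoverflow.com/"}'))  # 1234
```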

1 answer(s)
Alex Kheben, 2011-08-09
@zBit

The figure returned by urls.api.twitter.com/1/urls/count.json?url=http://stackoverflow.com is not the number of unique clicks on the "tweet share" button, but the total number of tweets containing that link, across all users.
I take "unique shares" to mean counting a mention from each profile only once, i.e. a link is counted once per user.
I can't help you with the code, but in theory you can apply the following logic:
search.twitter.com/search.json?q=http://google.com/&page=1
Each result there carries three fields useful for counting unique links: from_user, from_user_id, and from_user_id_str. Write one of these values into an array, then remove the duplicate values, and the number of remaining elements is the number of unique links. This works nicely if the search returns fewer than 15 results.
"results_per_page": 15, so it is reasonable to assume the number of results per page is 15 =)
Then there is another useful variable: next_page
From the example at the link above: "next_page":"?page=2&max_id=100723523508637696&q=http%3A%2F%2Fgoogle.com%2F"
You can walk the pages like this until they run out, but this is cumbersome, slow, and not guaranteed to work flawlessly.
Simpler: fill a from_user_id array while moving through the pages, and when they run out, remove all the repetitions and take the array's length.
That is fine when there are few links, but if you try to parse something as popular as the URL in my example this way, it will be just awful.
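The paging-and-deduplicating logic above can be sketched like this (a hedged illustration only: the v1 search API is long retired, so the HTTP fetching is stubbed out, and `count_unique_sharers` is a hypothetical name; it works on any sequence of parsed pages shaped like the JSON in this answer):

```python
def count_unique_sharers(pages):
    """Count distinct profiles across paginated search.json-style results.

    pages: iterable of parsed response dicts, each with a "results" list
    whose items carry a "from_user_id" field.
    """
    seen = set()  # a set deduplicates automatically, no manual cleanup pass
    for page in pages:
        for result in page.get("results", []):
            seen.add(result["from_user_id"])
    return len(seen)

# Example with two fake pages; user 42 appears twice but is counted once.
pages = [
    {"results": [{"from_user_id": 42}, {"from_user_id": 7}],
     "next_page": "?page=2&max_id=100723523508637696&q=http%3A%2F%2Fgoogle.com%2F"},
    {"results": [{"from_user_id": 42}, {"from_user_id": 99}]},
]
print(count_unique_sharers(pages))  # 3
```

Using a set instead of an array-then-dedupe pass gives the same count with one less step; in a real crawler you would fetch each page, stop when "next_page" is absent, and feed the parsed pages into this function.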
