How to speed up HTTP (GET) data loading?
I have the following program:

import requests

# The urls list has 35,000+ entries
urls = get_urls()

for url in urls:
    response = requests.get(url)
    text = response.text
    do_something(text)
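Even before parallelizing, much of the per-request cost in a loop like this is TCP/TLS connection setup. A `requests.Session` reuses connections via keep-alive; a minimal sketch, assuming the `get_urls()` and `do_something()` helpers from the question:

```python
import requests

def fetch_all(urls, handle):
    # One Session reuses TCP connections (keep-alive) across requests
    # to the same host, which alone can cut per-request overhead.
    with requests.Session() as session:
        for url in urls:
            handle(session.get(url).text)

# Usage, with the question's (hypothetical) helpers:
# fetch_all(get_urls(), do_something)
```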
Haven't tested, but should work.
chriskiehl.com/article/parallelism-in-one-line
import requests
from multiprocessing.dummy import Pool as ThreadPool  # pool backed by threads, not processes

def my_mega_function(url):  # called for each request
    response = requests.get(url)
    do_something(response.text)

# The urls list has 35,000+ entries
urls = get_urls()

pool = ThreadPool(4)  # create 4 worker threads, matching the number of CPU cores
# Runs up to 4 threads in parallel; each thread handles one value from urls
# by calling my_mega_function on it. results collects the return values.
results = pool.map(my_mega_function, urls)
pool.close()
pool.join()
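The same pattern is also available in the standard library via `concurrent.futures`. In the sketch below the network call is replaced with a pure stand-in function so the example runs without network access; in the real program the body would do something like `requests.get(url).text`:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_length(url):
    # Stand-in for the real work; in the actual program this would
    # fetch the URL and process the response text.
    return len(url)

urls = ["http://a.example/1", "http://b.example/22"]

# executor.map runs the calls on a pool of threads and, like pool.map
# in the answer above, returns results in input order.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(fetch_length, urls))

print(results)  # [18, 19]
```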
Is it possible to somehow run the requests in parallel?