FORBIDDEN 403 error. How to fix?
Good afternoon! I decided to try my hand at web scraping. I found a site (don't judge, it's for a friend), followed a guide, and wrote the code below. It runs, but the downloaded images come out empty. When I tried opening the same links the program requests, I got a 403 error. How do I fix this? Here is the code itself:
import requests
from bs4 import BeautifulSoup

storage_number = 1
image_number = 0
category = "/pics/non-nude/?page="
link = "https://www.sex.com"

for storage in range(6):
    response = requests.get(f'{link}{category}{storage_number}').text
    soup = BeautifulSoup(response, 'lxml')
    block = soup.find('div', class_='home_container centered')
    all_image = block.find_all('div', class_='masonry_box small_pin_box')
    for image in all_image:
        image_link = image.find('a', class_='image_wrapper').get('href')
        download_storage = requests.get(f'{link}{image_link}').text
        download_soup = BeautifulSoup(download_storage, 'lxml')
        download_block = download_soup.find('div', class_='big_pin_box').find('div', class_='image_frame')
        result_link = download_block.find('img').get('src')
        image_bytes = requests.get(f'{result_link}').content
        with open(f"C:/Users/admin/Desktop/images/{image_number}.jpg", 'wb') as file:
            file.write(image_bytes)
        print(f"Image {image_number}.jpg downloaded successfully!")
        image_number += 1
    storage_number += 1
The server returns 403 because it rejects requests that don't look like they come from a browser. Set headers like:
import requests

url = 'SOME_URL'
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_4) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36',
    'Referer': 'https://www.sex.com/'
}
response = requests.get(url, headers=headers)
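To avoid repeating the headers on every call in your loop, one option is a requests.Session, which attaches them to every request automatically. A minimal sketch (the header values are illustrative, not something this particular site is known to require):

```python
import requests

# Illustrative browser-like headers; adjust to match a real browser if needed.
HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_4) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36',
    'Referer': 'https://www.sex.com/',
}

# A Session sends these headers with every request, so the page fetches
# and the image downloads all look like they come from the same browser.
session = requests.Session()
session.headers.update(HEADERS)

# In the original loop, replace requests.get(...) with session.get(...), e.g.:
# page_html = session.get(f'{link}{category}{storage_number}').text
# image_bytes = session.get(result_link).content
```

A Session also keeps any cookies the site sets and reuses the underlying connection, which some anti-bot checks expect from a real browser.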