How to parse subcategories?
There is a website: www.zagrya.ru
I need to parse the categories, and each category has its own subcategories.
If I simply call find_all() on the result of find(), I get an error.
How can I parse the subcategories?
I'd be grateful :)
import requests
from bs4 import BeautifulSoup

URL = 'http://www.zagrya.ru/'
HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36',
}

def get_html(url, params=''):
    r = requests.get(url, headers=HEADERS, params=params)
    return r

def get_content(html):
    soup = BeautifulSoup(html, 'html.parser')
    # top-level menu items that have a submenu
    items = soup.find_all('li', class_='hor-menu__item has-subm')
    tovari = []
    for item in items:
        tovari.append(
            {
                'kategoria': item.find('a', class_='hor-menu__lnk').find('span', class_='hor-menu__text').get_text(),
            }
        )
    return tovari

html = get_html(URL)
print(get_content(html.text))
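The error described here is almost certainly an AttributeError: find() returns None when no element matches, and None has no find_all() method. Below is a minimal sketch of the usual guard; the class names are taken from the code above, but the nested <ul> submenu structure is an assumption about the site's markup:

import requests
from bs4 import BeautifulSoup

# headers omitted for brevity; see the full snippet above
html = requests.get('http://www.zagrya.ru/').text
soup = BeautifulSoup(html, 'html.parser')

for item in soup.find_all('li', class_='hor-menu__item has-subm'):
    # find() returns a Tag or None; chaining .find_all() on None raises
    # AttributeError: 'NoneType' object has no attribute 'find_all'
    submenu = item.find('ul')  # assumption: each submenu is a nested <ul>
    if submenu is None:
        continue
    for link in submenu.find_all('a'):
        print(link.get_text(strip=True))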
Viktor Kokorich, hello!
Judging by the number of similar posts, I assume that you, like the others, are studying somewhere and working through a course assignment.
Here is a similar post.
There are several ways to parse the subcategories. For example, collect the list of category pages first, and then crawl those pages with the parser.
import requests
from bs4 import BeautifulSoup
from pprint import pprint

url = 'http://www.zagrya.ru/'
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36',
}

html = requests.get(url, headers=headers)
soup = BeautifulSoup(html.content, 'html.parser')

# dictionary mapping the NAME of every category on the site to its LINK
categories = {}
for cat in soup.find_all('a', {"class": "hor-menu__lnk"}):
    name = cat.find("span", {"class": "hor-menu__text"}).get_text()
    link = "http://www.zagrya.ru" + cat.attrs['href']
    categories[name] = link

print(categories)
# {
#   'НОВИНКИ': 'http://www.zagrya.ru/category/category_2578/',
#   'КНИГИ': 'http://www.zagrya.ru/category/knigi/',
#   'ИГРУШКИ': 'http://www.zagrya.ru/category/igrushki/',
#   'КАНЦТОВАРЫ': 'http://www.zagrya.ru/category/category_2639/',
#   'УЧЕБНАЯ ЛИТЕРАТУРА': 'http://www.zagrya.ru/category/uchyebnaya-lityeratura/',
#   'ЭНЦИКЛОПЕДИИ': 'http://www.zagrya.ru/category/entsiklopyedii/',
#   'РАСПРОДАЖА': 'http://www.zagrya.ru/category/rasprodazha_1/'
# }

# dictionary mapping each category NAME to the list of its SUBCATEGORIES
subcategories = {}
for k, v in categories.items():
    # follow every category link and parse the page behind it
    html = requests.get(v, headers=headers)
    soup = BeautifulSoup(html.content, 'html.parser')
    sub_list = []
    for subcat in soup.find_all("div", {"class": "subcat-wrapper__item sub-cat-nobd"}):
        sub_list.append(subcat.find("div", {"class": "sub-cat__title"}).get_text())
    subcategories[k] = sub_list

# print the result
for k, v in subcategories.items():
    print(k)
    pprint(v)

You won't be worth a penny as a specialist if you can't figure out the tags and how the Requests and BeautifulSoup libraries work on your own.
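As a side note, the same crawl can be expressed with CSS selectors via select() and select_one(), which some find easier to read than chains of find()/find_all(). This is only a sketch restating the class names from the code above, and it assumes the site's markup has not changed:

import requests
from bs4 import BeautifulSoup
from pprint import pprint

BASE = 'http://www.zagrya.ru'
headers = {'User-Agent': 'Mozilla/5.0'}  # trimmed; any common UA string works

soup = BeautifulSoup(requests.get(BASE + '/', headers=headers).content, 'html.parser')

subcategories = {}
for cat in soup.select('a.hor-menu__lnk'):
    name_tag = cat.select_one('span.hor-menu__text')
    if name_tag is None:  # select_one(), like find(), returns None on a miss
        continue
    page = BeautifulSoup(
        requests.get(BASE + cat['href'], headers=headers).content,
        'html.parser',
    )
    subcategories[name_tag.get_text()] = [
        title.get_text()
        for title in page.select('div.subcat-wrapper__item.sub-cat-nobd div.sub-cat__title')
    ]

pprint(subcategories)

Note that select_one() also returns None when nothing matches, so the same guard applies before chaining calls on its result.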