• Hands-On Python Web Scraping: Crawling Taobao Products and Analyzing the Data


    Preface

    Here's the story: a while back I took on a job from a client who wanted to open a Taobao shop selling small fish snacks, and he wanted an analysis of the products currently on the market. Doing the statistics by hand is perfectly possible, since all of this information is publicly displayed, but it is tedious, so he asked me to help out.


    I. Project Requirements

    The specific requirements were as follows:

    1. Search Taobao for "小鱼零食" (small fish snacks) and, for every product in the first 10 pages of results, record its sales volume and price, then count the products falling into each of the price ranges he defined. He gave me a price-range table along the lines of the one sketched in code right after this list:

    [Image: price-range table provided by the client]

    2. Across these 10 pages of results, where in the country are the sellers located?

    3. In the reviews under these products, what do users comment on the most?

    4. From these search results, find the names and links of the 10 stores with the highest sales.
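
    The original price-range table was an image that isn't reproduced here; judging from the bins used later in the tongji() function, it covered the ranges below. As a minimal sketch (using pandas, which the original code does not use, as an alternative to the long elif chain), the binning could look like this; the sample prices are hypothetical:

    import pandas as pd

    # Price bins as used later in tongji(); each bin is [left, right)
    bins = [0, 10, 30, 50, 70, 90, 110, 130, 150, 170, 200]
    labels = ['<10', '10~30', '30~50', '50~70', '70~90',
              '90~110', '110~130', '130~150', '150~170', '170~200']

    # Hypothetical example prices; in practice read them from 前十页销量和金额.csv
    prices = pd.Series([8.8, 15.9, 22.5, 35.0, 68.0, 128.0])
    counts = pd.cut(prices, bins=bins, labels=labels, right=False).value_counts().sort_index()
    print(counts)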

    Looking at these requirements, none of them are hard to implement. Let's first take a look at the results of the project.

    II. Preview of the Results

    After collecting the data I ran some analysis and turned it into a bar chart; hovering the mouse shows the exact number of products in each range.

    [Image: bar chart of product counts by price range]

    Products priced between 10 and 30 yuan are the most numerous, and the counts drop off as the price rises, so most products appear to be positioned at the low end of the market.

    Next, let's look at how the sellers are distributed across the country:

    [Image: map of seller locations across China]

    As you can see, most sellers are located along the coast and around the middle and lower reaches of the Yangtze River, with the coastal regions being the densest.

    Then let's see what users are actually saying in the reviews under these products:

    [Image: word cloud of review keywords]

    The bigger the word, the more often it appears. Taste, packaging quality, portion size and shelf life are the aspects users mention most, so when packaging and presenting a product you can speak to these points directly and address the questions most buyers care about.

    Finally, here are the top 10 stores by sales volume, together with their links.

    [Image: top 10 stores by sales volume and their links]

    After getting the data and doing the analysis, I also wondered: if I were the one running this business, what could I read from it? Maybe price is the angle of attack, maybe the sellers' geography offers a way to differentiate, or maybe you put the user at the center and market from the outside in.

    The deeper I thought about it, the more there seemed to be to it. But never mind; when it comes to fish snacks I'm an outsider, so I'll stop speculating.

    III. Crawler Source Code

    The source is split across several files and is fairly long, so I won't walk through it piece by piece. If you already know web scraping, a few read-throughs will make it clear; if you don't, no amount of explanation will help much, and it will make sense once you've learned the basics.

    import csv
    import os
    import random
    import time

    import wordcloud
    from selenium import webdriver
    from selenium.webdriver.common.by import By


    def tongji():
        prices = []
        with open('前十页销量和金额.csv', 'r', encoding='utf-8', newline='') as f:
            fieldnames = ['价格', '销量', '店铺位置']
            reader = csv.DictReader(f, fieldnames=fieldnames)
            for index, i in enumerate(reader):
                # Skip the header row
                if index != 0:
                    price = float(i['价格'].replace('¥', ''))
                    prices.append(price)
        DATAS = {'<10': 0, '10~30': 0, '30~50': 0,
                 '50~70': 0, '70~90': 0, '90~110': 0,
                 '110~130': 0, '130~150': 0, '150~170': 0, '170~200': 0}
        for price in prices:
            if price < 10:
                DATAS['<10'] += 1
            elif 10 <= price < 30:
                DATAS['10~30'] += 1
            elif 30 <= price < 50:
                DATAS['30~50'] += 1
            elif 50 <= price < 70:
                DATAS['50~70'] += 1
            elif 70 <= price < 90:
                DATAS['70~90'] += 1
            elif 90 <= price < 110:
                DATAS['90~110'] += 1
            elif 110 <= price < 130:
                DATAS['110~130'] += 1
            elif 130 <= price < 150:
                DATAS['130~150'] += 1
            elif 150 <= price < 170:
                DATAS['150~170'] += 1
            elif 170 <= price < 200:
                DATAS['170~200'] += 1
        for k, v in DATAS.items():
            print(k, ':', v)


    def get_the_top_10(url):
        top_ten = []
        # Get a proxy IP (zhima1 is the author's own proxy helper; it is not included here)
        ip = zhima1()[2][random.randint(0, 399)]
        # Run a Quicker action (you can ignore this)
        os.system(r'"C:\Program Files\Quicker\QuickerStarter.exe" runaction:5e3abcd2-9271-47b6-8eaf-3e7c8f4935d8')
        options = webdriver.ChromeOptions()
        # Attach to a Chrome instance that is already running with remote debugging enabled
        options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
        options.add_argument(f'--proxy-server={ip}')
        driver = webdriver.Chrome(options=options)
        # Implicit wait
        driver.implicitly_wait(3)
        # Open the page
        driver.get(url)
        # Click the element whose link text contains '销量' (sort by sales volume)
        driver.find_element(By.PARTIAL_LINK_TEXT, '销量').click()
        time.sleep(1)
        # Scroll to the bottom of the page
        driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
        time.sleep(1)
        # Locate the result list
        element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(By.XPATH, './/div[@class="items"]')
        items = element.find_elements(By.XPATH, './/div[@data-category="auctions"]')
        for index, item in enumerate(items):
            if index == 10:
                break
            # Extract the fields of a single item
            price = item.find_element(By.XPATH, './div[2]/div[1]/div[contains(@class,"price")]').text
            paid_num_data = item.find_element(By.XPATH, './div[2]/div[1]/div[@class="deal-cnt"]').text
            store_location = item.find_element(By.XPATH, './div[2]/div[3]/div[@class="location"]').text
            store_href = item.find_element(By.XPATH, './div[2]/div[@class="row row-2 title"]/a').get_attribute(
                'href').strip()
            # Add the data to the list as a dict
            top_ten.append(
                {'价格': price,
                 '销量': paid_num_data,
                 '店铺位置': store_location,
                 '店铺链接': store_href
                 })
        for i in top_ten:
            print(i)


    def get_top_10_comments(url):
        # Truncate the output file
        with open('排名前十评价.txt', 'w+', encoding='utf-8') as f:
            pass
        # ip = ipidea()[1]
        os.system(r'"C:\Program Files\Quicker\QuickerStarter.exe" runaction:5e3abcd2-9271-47b6-8eaf-3e7c8f4935d8')
        options = webdriver.ChromeOptions()
        options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
        # options.add_argument(f'--proxy-server={ip}')
        driver = webdriver.Chrome(options=options)
        driver.implicitly_wait(3)
        driver.get(url)
        driver.find_element(By.PARTIAL_LINK_TEXT, '销量').click()
        time.sleep(1)
        element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(By.XPATH, './/div[@class="items"]')
        items = element.find_elements(By.XPATH, './/div[@data-category="auctions"]')
        original_handle = driver.current_window_handle
        item_hrefs = []
        # First collect the links of the top 10 items
        for index, item in enumerate(items):
            if index == 10:
                break
            item_hrefs.append(
                item.find_element(By.XPATH, './/div[2]/div[@class="row row-2 title"]/a').get_attribute('href').strip())
        # Crawl the reviews of each of the top 10 items
        for item_href in item_hrefs:
            # Open a new tab
            # item_href = 'https://item.taobao.com/item.htm?id=523351391646&ns=1&abbucket=11#detail'
            driver.execute_script(f'window.open("{item_href}")')
            # Switch to the new tab
            handles = driver.window_handles
            driver.switch_to.window(handles[-1])
            # Scroll down a bit until the '评价' (reviews) tab becomes visible, then click it
            try:
                driver.find_element(By.PARTIAL_LINK_TEXT, '评价').click()
            except Exception as e1:
                try:
                    x = driver.find_element(By.PARTIAL_LINK_TEXT, '评价').location_once_scrolled_into_view
                    driver.find_element(By.PARTIAL_LINK_TEXT, '评价').click()
                except Exception as e2:
                    try:
                        # Scroll down 100px first, in case the '评价' tab is not yet on screen
                        driver.execute_script('var q=document.documentElement.scrollTop=100')
                        x = driver.find_element(By.PARTIAL_LINK_TEXT, '评价').location_once_scrolled_into_view
                    except Exception as e3:
                        driver.find_element(By.XPATH, '/html/body/div[6]/div/div[3]/div[2]/div/div[2]/ul/li[2]/a').click()
            time.sleep(1)
            try:
                # First try the layout that uses tm-rate-* review elements
                trs = driver.find_elements(By.XPATH, '//div[@class="rate-grid"]/table/tbody/tr')
                for index, tr in enumerate(trs):
                    if index == 0:
                        comments = tr.find_element(By.XPATH, './td[1]/div[1]/div/div').text.strip()
                    else:
                        try:
                            comments = tr.find_element(By.XPATH,
                                                       './td[1]/div[1]/div[@class="tm-rate-fulltxt"]').text.strip()
                        except Exception as e:
                            comments = tr.find_element(By.XPATH,
                                                       './td[1]/div[1]/div[@class="tm-rate-content"]/div[@class="tm-rate-fulltxt"]').text.strip()
                    with open('排名前十评价.txt', 'a+', encoding='utf-8') as f:
                        f.write(comments + '\n')
                        print(comments)
            except Exception as e:
                # Fall back to the J_KgRate_MainReviews layout
                lis = driver.find_elements(By.XPATH, '//div[@class="J_KgRate_MainReviews"]/div[@class="tb-revbd"]/ul/li')
                for li in lis:
                    comments = li.find_element(By.XPATH, './div[2]/div/div[1]').text.strip()
                    with open('排名前十评价.txt', 'a+', encoding='utf-8') as f:
                        f.write(comments + '\n')
                        print(comments)


    def get_top_10_comments_wordcloud():
        file = '排名前十评价.txt'
        f = open(file, encoding='utf-8')
        txt = f.read()
        f.close()
        # Create the word cloud object and set the properties of the generated image
        w = wordcloud.WordCloud(width=1000,
                                height=700,
                                background_color='white',
                                font_path='msyh.ttc')
        w.generate(txt)
        name = file.replace('.txt', '')
        w.to_file(name + '词云.png')
        os.startfile(name + '词云.png')


    def get_10_pages_datas():
        # Write the CSV header (with a BOM so Excel opens it correctly)
        with open('前十页销量和金额.csv', 'w+', encoding='utf-8', newline='') as f:
            f.write('\ufeff')
            fieldnames = ['价格', '销量', '店铺位置']
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
        infos = []
        options = webdriver.ChromeOptions()
        options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
        # options.add_argument(f'--proxy-server={ip}')
        driver = webdriver.Chrome(options=options)
        driver.implicitly_wait(3)
        # Uses the module-level url defined under __main__
        driver.get(url)
        # driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
        # Page 1
        element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(By.XPATH, './/div[@class="items"]')
        items = element.find_elements(By.XPATH, './/div[@data-category="auctions"]')
        for index, item in enumerate(items):
            price = item.find_element(By.XPATH, './div[2]/div[1]/div[contains(@class,"price")]').text
            paid_num_data = item.find_element(By.XPATH, './div[2]/div[1]/div[@class="deal-cnt"]').text
            store_location = item.find_element(By.XPATH, './div[2]/div[3]/div[@class="location"]').text
            infos.append(
                {'价格': price,
                 '销量': paid_num_data,
                 '店铺位置': store_location})
        # Go to the next page (scroll down first if the button is not clickable)
        try:
            driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
        except Exception as e:
            driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
        # Pages 2 to 10
        for i in range(9):
            time.sleep(1)
            driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
            element = driver.find_element(By.ID, 'mainsrp-itemlist').find_element(By.XPATH, './/div[@class="items"]')
            items = element.find_elements(By.XPATH, './/div[@data-category="auctions"]')
            for index, item in enumerate(items):
                try:
                    price = item.find_element(By.XPATH, './div[2]/div[1]/div[contains(@class,"price")]').text
                except Exception:
                    time.sleep(1)
                    driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
                    price = item.find_element(By.XPATH, './div[2]/div[1]/div[contains(@class,"price")]').text
                paid_num_data = item.find_element(By.XPATH, './div[2]/div[1]/div[@class="deal-cnt"]').text
                store_location = item.find_element(By.XPATH, './div[2]/div[3]/div[@class="location"]').text
                infos.append(
                    {'价格': price,
                     '销量': paid_num_data,
                     '店铺位置': store_location})
            try:
                driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
            except Exception as e:
                driver.execute_script('window.scrollTo(0,document.body.scrollHeight)')
                driver.find_element(By.PARTIAL_LINK_TEXT, '下一').click()
            # One page finished
        for info in infos:
            print(info)
        # Append all collected rows to the CSV
        with open('前十页销量和金额.csv', 'a+', encoding='utf-8', newline='') as f:
            fieldnames = ['价格', '销量', '店铺位置']
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            for info in infos:
                writer.writerow(info)


    if __name__ == '__main__':
        url = 'https://s.taobao.com/search?q=%E5%B0%8F%E9%B1%BC%E9%9B%B6%E9%A3%9F&imgfile=&commend=all&ssid=s5-e&search_type=item&sourceId=tb.index&spm=a21bo.21814703.201856-taobao-item.1&ie=utf8&initiative_id=tbindexz_20170306&bcoffset=4&ntoffset=4&p4ppushleft=2%2C48&s=0'
        # get_10_pages_datas()
        # tongji()
        # get_the_top_10(url)
        # get_top_10_comments(url)
        get_top_10_comments_wordcloud()
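
    One practical note: the 'debuggerAddress' option used above does not launch a new browser; it attaches Selenium to a Chrome instance that is already running with remote debugging enabled on port 9222, so you need to start such an instance before running the script. Assuming a default Windows install path (adjust the path and profile directory to your own machine), the command would look something like:

    "C:\Program Files\Google\Chrome\Application\chrome.exe" --remote-debugging-port=9222 --user-data-dir="C:\chrome-debug-profile"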

    With the code above we can collect the data we're after; the bar chart and the geographic distribution were then rendered with Bar and Geo, which you can explore on your own. A rough sketch of that step is shown below.
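
    Bar and Geo here read like the chart classes from the pyecharts library; the article doesn't show that part of the code, so the following is only a minimal sketch under that assumption. The two data dictionaries are hypothetical placeholders: in practice you would fill price_counts from the DATAS dict in tongji() and seller_counts by aggregating the 店铺位置 column of the CSV (Geo needs region names that exist in its built-in China map).

    from pyecharts import options as opts
    from pyecharts.charts import Bar, Geo

    # Placeholder data: product counts per price range and seller counts per province
    price_counts = {'<10': 12, '10~30': 180, '30~50': 95, '50~70': 41}
    seller_counts = {'浙江': 120, '广东': 95, '福建': 60, '山东': 45}

    # Bar chart of product counts per price range; hovering shows the exact count
    bar = (
        Bar()
        .add_xaxis(list(price_counts.keys()))
        .add_yaxis('商品数量', list(price_counts.values()))
        .set_global_opts(title_opts=opts.TitleOpts(title='价格区间分布'))
    )
    bar.render('价格区间分布.html')

    # Map of seller counts per province on the China map
    geo = (
        Geo()
        .add_schema(maptype='china')
        .add('商家数量', list(seller_counts.items()))
        .set_series_opts(label_opts=opts.LabelOpts(is_show=False))
        .set_global_opts(
            visualmap_opts=opts.VisualMapOpts(max_=150),
            title_opts=opts.TitleOpts(title='商家地理分布'),
        )
    )
    geo.render('商家地理分布.html')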

  • Original article: https://blog.csdn.net/WBKJ_Noah/article/details/133856976