JA3 is a method for fingerprinting SSL/TLS clients. Because a given client (a particular browser build, or an HTTP library) always offers the same parameters in its TLS ClientHello, the fingerprint is stable, which lets a server tell different clients apart even when they send identical HTTP headers.
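To make this concrete, here is a minimal sketch of the JA3 computation as described in the salesforce/ja3 README: the decimal values of five ClientHello fields (version, cipher suites, extensions, elliptic curves, EC point formats) are joined with ',' between fields and '-' within a field, and the resulting string is MD5-hashed. The field values below are illustrative, not captured from a real handshake.

import hashlib

# Illustrative ClientHello field values (decimal); a real JA3 is computed
# from a captured handshake, not from hand-picked numbers like these
tls_version = 771                 # 0x0303, TLS 1.2
ciphers = [4865, 4866, 4867]      # offered cipher suites
extensions = [0, 10, 11, 13]      # extension IDs, in the order sent
curves = [29, 23, 24]             # supported elliptic curves
curve_formats = [0]               # EC point formats

def ja3_fingerprint(version, ciphers, extensions, curves, formats):
    # per the salesforce/ja3 spec: fields joined by ',', values by '-'
    ja3_string = ",".join([
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, formats)),
    ])
    return ja3_string, hashlib.md5(ja3_string.encode()).hexdigest()

ja3_str, ja3_hash = ja3_fingerprint(tls_version, ciphers, extensions, curves, curve_formats)
print(ja3_str)   # 771,4865-4866-4867,0-10-11-13,29-23-24,0
print(ja3_hash)  # 32-character MD5, i.e. the JA3 fingerprint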
Here is what happens when the Python requests library hits a site that reports JA3 fingerprints:
import requests

# browser-style headers copied from Chrome 115; these change what the server
# sees at the HTTP layer, but not the TLS handshake itself
headers = {
    'authority': 'tls.browserleaks.com',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7,da;q=0.6',
    'cache-control': 'no-cache',
    'pragma': 'no-cache',
    'sec-ch-ua': '"Not/A)Brand";v="99", "Google Chrome";v="115", "Chromium";v="115"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'cross-site',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36',
}

# requests performs its TLS handshake through Python's ssl module, so the
# ClientHello it sends (and therefore its JA3 fingerprint) does not match Chrome's
response = requests.get('https://tls.browserleaks.com/json', headers=headers)
print(response.json())
As you can see, akamai_hash and akamai_text are both empty: the Akamai fingerprint is derived from HTTP/2 frames, and requests only speaks HTTP/1.1. A JA3 hash is still reported, but it identifies Python's TLS stack rather than Chrome.
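A quick way to check which fields come back empty (the ja3_hash key name is my assumption about browserleaks' JSON; verify against the response you actually get):

data = response.json()
# the akamai_* fields are derived from HTTP/2 frames, so they stay empty
# when the client only speaks HTTP/1.1
for key in ("ja3_hash", "akamai_hash", "akamai_text"):
    print(key, "->", data.get(key))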
To get a browser-grade fingerprint we can switch to curl_cffi, the Python binding for curl-impersonate. First install it:
pip install curl_cffi
Then swap the import for curl_cffi's requests-style wrapper and pass impersonate to the call:

from curl_cffi import requests

headers = {
    'authority': 'tls.browserleaks.com',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7,da;q=0.6',
    'cache-control': 'no-cache',
    'pragma': 'no-cache',
    'sec-ch-ua': '"Not/A)Brand";v="99", "Google Chrome";v="115", "Chromium";v="115"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'cross-site',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36',
}

# impersonate="chrome110" tells curl-impersonate to reproduce Chrome 110's
# TLS ClientHello and HTTP/2 settings
response = requests.get('https://tls.browserleaks.com/json', headers=headers, impersonate="chrome110")
print(response.text)
Note the two changes: requests is now imported from curl_cffi (from curl_cffi import requests), and the get() call carries impersonate="chrome110".
As you can see, akamai_hash and akamai_text now have values: curl_cffi negotiates HTTP/2 and reproduces Chrome's ClientHello, so both fingerprints look like a real browser.
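If you fire several requests, a session avoids repeating the handshake each time; a minimal sketch, assuming curl_cffi's requests.Session mirrors the classic requests session API and accepts the same per-call impersonate keyword:

from curl_cffi import requests

# assumption: Session behaves like a classic requests session
s = requests.Session()
for _ in range(3):
    # the connection is reused across iterations; the fingerprint stays
    # chrome110 on every request
    r = s.get("https://tls.browserleaks.com/json", impersonate="chrome110")
    print(r.json().get("akamai_hash"))

curl_cffi also ships an asyncio interface, AsyncSession, which lets you run several impersonated requests concurrently: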
import asyncio
import sys

from curl_cffi.requests import AsyncSession

headers = {
    'authority': 'tls.browserleaks.com',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'accept-language': 'zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7,da;q=0.6',
    'cache-control': 'no-cache',
    'pragma': 'no-cache',
    'sec-ch-ua': '"Not/A)Brand";v="99", "Google Chrome";v="115", "Chromium";v="115"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'cross-site',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36',
}

urls = [
    "https://tls.browserleaks.com/json",
    "https://tls.peet.ws/",
    # "https://kawayiyi.com/tls",
]

# local SOCKS5 proxy used here; drop the proxies argument if you don't need one
proxies = {"http": "socks5://127.0.0.1:10808", "https": "socks5://127.0.0.1:10808"}

# Windows' default Proactor event loop lacks the add_reader/add_writer hooks
# that curl's socket callbacks rely on; guard the call so the script also
# runs on platforms where this policy does not exist
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

async def main():
    async with AsyncSession() as s:
        tasks = []
        for url in urls:
            # each get() returns an awaitable; gather runs them concurrently
            task = s.get(url, headers=headers, impersonate="chrome110", proxies=proxies)
            tasks.append(task)
        results = await asyncio.gather(*tasks)
        print(results)

asyncio.run(main())
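asyncio.gather preserves input order, so each response lines up with its URL; inside main() you could swap the final print(results) for something like:

for url, r in zip(urls, results):
    print(url, r.status_code)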
References:
https://tls.browserleaks.com/json
https://github.com/salesforce/ja3
https://github.com/yifeikong/curl_cffi
https://github.com/lwthiker/curl-impersonate