Coomer.su doesn’t offer any official API. Pages are dynamically loaded, throttled per IP, and image links are buried in paginated JSON-like DOM structures. Traditional scrapers? Blocked. Headless browsers? Slow. I wanted speed, automation, and zero interaction.
So, I reverse engineered it.
🛠️ Tools I Used

- Python 3
- `requests`, `beautifulsoup4`
- `fake_useragent`
- `aria2c` (for parallel image downloading)
- `tor` (optional, for IP cycling)
- `jq` (if parsing embedded data manually)
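Assuming a Debian/Ubuntu-style box, a typical setup for these tools (package names as published on PyPI and in apt) looks like:

```shell
# Python dependencies from PyPI
pip install requests beautifulsoup4 fake-useragent

# aria2c and jq come from system packages; tor only if you want IP cycling
sudo apt install aria2 jq tor
```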
💻 The Script
```python
import os

import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent

ua = UserAgent()
headers = {'User-Agent': ua.random}  # fresh random UA per run

def fetch_image_links(profile_url):
    r = requests.get(profile_url, headers=headers)
    r.raise_for_status()
    soup = BeautifulSoup(r.text, 'html.parser')
    images = soup.find_all('img')
    # img.get() avoids a KeyError on <img> tags with no src attribute
    return [img['src'] for img in images if 'media' in img.get('src', '')]

def download_images(links, save_path='downloads'):
    os.makedirs(save_path, exist_ok=True)
    for link in links:
        img_name = link.split('/')[-1]
        with open(os.path.join(save_path, img_name), 'wb') as f:
            f.write(requests.get(link, headers=headers).content)
        print(f"Saved: {img_name}")

# Example use
profile = 'https://coomer.su/onlyfans/user/username'
images = fetch_image_links(profile)
download_images(images)
```
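Since the site throttles per IP, bare `requests.get` calls will eventually start failing. A small retry-with-exponential-backoff wrapper helps; this is a sketch I'm adding on top of the script above, and `with_retries` (with its `attempts`/`base_delay` parameters) is my own name, not part of the original:

```python
import time

def with_retries(fetch, attempts=3, base_delay=1.0):
    """Run a zero-argument callable, retrying on error with exponential backoff.

    Example: with_retries(lambda: requests.get(link, headers=headers))
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the last error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```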
⚙️ Optional Tor Integration (for IP Bans)
Install and start Tor:
```shell
sudo apt install tor
sudo service tor start
```
Route requests through the SOCKS proxy (`requests` needs the `requests[socks]` extra installed to speak `socks5h://`):

```python
proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050'
}
r = requests.get(url, headers=headers, proxies=proxies)
```
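Rather than passing `proxies=` on every call, the proxy config can live on a `requests.Session`. A minimal sketch; `tor_session` is a helper name I'm introducing, and the `socks5h` scheme makes Tor resolve DNS too, avoiding local DNS leaks:

```python
import requests

def tor_session(port=9050):
    # Return a requests.Session routed through the local Tor SOCKS proxy.
    s = requests.Session()
    s.proxies = {
        'http': f'socks5h://127.0.0.1:{port}',
        'https': f'socks5h://127.0.0.1:{port}',
    }
    return s
```

All `s.get(...)` calls on that session then go out through Tor without repeating the dict.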
🚀 Power Boost with aria2c
To mass-download from a generated `.txt` of image URLs:

```shell
aria2c -x 16 -s 16 -j 5 -i image_urls.txt
```

Here `-x`/`-s` allow 16 connections and segments per file, and `-j 5` downloads five files concurrently.
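To bridge the scraper and `aria2c`, the scraped links just need dumping to that text file (one URL per line is the format `-i` expects). A small helper; `write_url_list` is a name I'm introducing for illustration:

```python
def write_url_list(links, path='image_urls.txt'):
    # aria2c's -i option reads one URL per line.
    with open(path, 'w') as f:
        for link in links:
            f.write(link + '\n')
    return path
```

Usage: `write_url_list(fetch_image_links(profile))`, then run the `aria2c` command above.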
This isn’t just a script; it’s a method. Once you understand how these shady archivers serve media, you can scrape, parse, and pull from nearly any clone (kemono.su, etc.). Just keep your ethics straight: don’t profit off stolen work.