ofdl's Introduction

OFDL

OnlyFans media downloader with a graphical user interface built with PyQt5.

Downloads media files from OF (images, videos, highlights, and stories).

The only requirements should be requests and PyQt5.

Before logging in, press F12 (or right-click, choose Inspect, and go to the Network tab) to bring up the developer tools, then log in. You should see a request named "init" as shown below. If not, type "init" into the filter box and/or refresh the page. Once you've found it, click on it.

Scroll down to the "Request Headers" section; everything you need should be there (cookie, user-agent, x-bc):

Copy these three values:

and paste them into the text boxes that appear after you click the buttons pointed at in the image below:

After you've added all three values and closed the Options window with the "x" button, it should fetch the list of your subscriptions.
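For reference, here is a minimal sketch of how those three values are typically used: they are attached as headers to an authenticated requests session, roughly as below (the exact header set OFDL builds internally may differ):

import requests

session = requests.Session()
session.headers.update({
    # the three values copied from the "init" request in the browser
    "cookie": "<paste the full cookie string here>",
    "user-agent": "<paste your browser's user-agent string here>",
    "x-bc": "<paste the x-bc value here>",
})
# Requests made with this session now carry the same identity the
# browser used when you logged in.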

Requirements

Written using Python 3.9, so use 3.9 or newer.

The only two requirements/dependencies should be requests and PyQt5.

There is a "requirements.txt" file that can be used to install the dependencies at the command line:

pip3 install -r requirements.txt

or

pip3 install requests
pip3 install pyqt5

The main script is OFDL.py. On some systems (usually Windows) it can be run by double-clicking it, or by changing into the directory in a terminal or at the command line and executing:

python3 OFDL.py

ofdl's People

Contributors

hashirama

ofdl's Issues

Only partial list of files available

Everything works as it should, except that only a partial list of files is linked and downloadable. For example, one user lists 209 videos on their profile, yet only 63 are linked and downloadable.

Is this a known issue?

catch API errors

Somehow the API has invalid source URLs.

Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 140, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 83, in create_connection
raise err
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 598, in urlopen
httplib_response = self._make_request(conn, method, url,
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 346, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 852, in _validate_conn
conn.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 284, in connect
conn = self._new_conn()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 149, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7f3c3cb163d0>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 430, in send
resp = conn.urlopen(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
retries = retries.increment(method, url, error=e, _pool=self,
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='us.upload.onlyfans.com', port=443): Max retries exceeded with url: /files/XXXX.mp4 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f3c3cb163d0>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "OFDL.py", line 513, in Download_Files
self.onlyfans.download(self, user_folder, file)
File "/home/OFDL/OFDL/module/OF.py", line 388, in download
response = self.session.get(file_name, stream=True)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 520, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 630, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='us.upload.onlyfans.com', port=443): Max retries exceeded with url: /files/XXXX.mp4 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f3c3cb163d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
^CTraceback (most recent call last):
File "OFDL.py", line 548, in
root.mainloop()
File "/usr/lib/python3.8/tkinter/init.py", line 1420, in mainloop
self.tk.mainloop(n)
KeyboardInterrupt

To fix that I added a check:

if src is None or not src.startswith('https://cdn'):
if type_src is None or not type_src.startswith('https://cdn'):
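A complementary, more general approach (sketched below; the safe_download helper is illustrative, not the project's actual API) is to catch connection errors around the download call itself, so a single bad URL does not kill the whole worker thread:

import requests

def safe_download(session, url, dest_path):
    # Skip files whose host cannot be reached instead of letting the
    # exception propagate out of the download thread.
    try:
        response = session.get(url, stream=True, timeout=30)
        response.raise_for_status()
    except requests.exceptions.RequestException as exc:
        print(f"Skipping {url}: {exc}")
        return False
    with open(dest_path, "wb") as fh:
        for chunk in response.iter_content(chunk_size=1 << 20):
            fh.write(chunk)
    return True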

Login failed

Hi
Up to 2 days ago this was working fine. Now I'm getting:

Login failed
b'{"error":{"code":401,"message":"Request sign required"}}'

I've tried several attempts, including (in Chrome) logging out of OF, clearing the cache and logging in again,
then re-entering a fresh cookie / user agent.

BTW: During login I had to verify I'm not a robot.

Is this user error, e.g. me doing something wrong, or is there a problem?

Thanx

Request Sign Required

OF now requires a request signature ("sign") as part of the session init request headers.

Cookies: which value?

The guide doesn't say exactly which of the three values I need to copy. Can someone tell me?

OSX: Where are the links?

I wanted only the links. I clicked Retrieve Links and I see some summary, but where are the links?

Highlights Retrieve Links Error

This happens with one model while getting Highlights.

Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "OFDL.py", line 397, in Get_Links
self.onlyfans.get_links(dict_return, value, index)
File "/home/OFDL/OFDL/module/OF.py", line 287, in get_links
filename = link["source"].split('/')[-1]
KeyError: 'source'
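A defensive fix, sketched below under the assumption that get_links iterates over the highlight entries (the loop shown is illustrative, not the exact code in OF.py), is to skip entries that have no "source" key instead of indexing it directly:

for link in links:
    source = link.get("source")  # some highlight entries lack a "source" key
    if not source:
        continue  # nothing downloadable for this entry
    filename = source.split('/')[-1]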

skip downloaded files

Is it possible to make it skip the images and videos that have already been downloaded before?
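One way this could work, as a rough sketch (the helper name and the filename-from-URL rule are assumptions, not OFDL's actual behaviour):

import os

def should_download(user_folder, url):
    # Derive the target filename from the URL and skip the download
    # if a file with that name is already in the user's folder.
    filename = url.split('/')[-1].split('?')[0]
    return not os.path.exists(os.path.join(user_folder, filename))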

Stopped working for me

It stopped working for me. I made sure to set all the variables as expected.

Upon the API call to init it now errors like this:
(screenshot of the error)

Resulting in not being logged in

No sc_is_visitor_unique???

I've had this script working several times, but now that I'm trying to make it work again, there is no sc_is_visitor_unique value showing up in my cookie. How do I find it?

Subscriptions not showing up

I've been using this program for quite a while, but it seems that after entering my cookie and user agent, my subscriptions don't show up anymore. I've re-entered the information multiple times to no avail. If this project is still active, please look into this.

Faster downloads: increase the ThreadPool size

Make it an option to increase the ThreadPool size.

For example, 64 threads: a bit arbitrary, but for me significantly faster than with 2 threads.

def download_profiles(self, user_post_ids: dict, total: List[int]) -> None:
    # Pool size raised from the default 2 to 64 worker threads.
    pool = ThreadPool(64, self.stop_event)
    profiles = self.return_all_subs()
    for username_key in user_post_ids:
        profile = profiles[username_key]
        data = {}
        data["info"] = "Starting download..."
        self.data_display.emit(data)
        pool.add_task(profile.download, self.stop_event, self.data_display,
                      user_post_ids[username_key], total)
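For comparison, a self-contained sketch of the same idea using only the standard library; the OFDL_THREADS variable, download_one, and download_all are placeholders for illustration, and the project's own ThreadPool class would work just as well:

import os
from concurrent.futures import ThreadPoolExecutor

def download_one(url):
    # placeholder for the real per-file download routine
    print(f"downloading {url}")

def download_all(urls):
    # Worker count read from an environment variable so it can be tuned
    # without editing the code; 8 is an arbitrary default.
    workers = int(os.environ.get("OFDL_THREADS", "8"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(download_one, urls))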

Won't login

No matter how many times I set the cookie and the user-agent, I get the same error (see below). It's been working for a long time, so my guess is OF changed something.

Login failed
b'{"error":{"code":401,"message":"Please refresh the page"}}'

Exception in thread Thread-1

Here is the cmd printout:
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2032.0_x64__qbz5n2kfra8p0\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2032.0_x64__qbz5n2kfra8p0\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "ofdlv2.2.py", line 404, in Get_Links
self.onlyfans.get_links(dict_return, value, index)
File "C:\Users\shahi\Desktop\OFDLv2.2\module\OF.py", line 390, in get_links
js = json.loads(r.text)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2032.0_x64__qbz5n2kfra8p0\lib\json\__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2032.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2032.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
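Assuming the call site shown in the traceback, a guarded version of the JSON parse could report the raw response instead of crashing the thread (parse_response is an illustrative helper, not a function in OF.py):

import json

def parse_response(r):
    # r is a requests.Response; an empty or HTML body (for example a
    # rate-limit or error page) will not parse as JSON.
    try:
        return json.loads(r.text)
    except json.JSONDecodeError:
        print(f"Unexpected non-JSON response ({r.status_code}): {r.text[:200]!r}")
        return None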

Can you give me sign.json?

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /hashypooh/dynamic_stuff/main/sign.json (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x0000016534AC1D60>: Failed to resolve 'raw.githubusercontent.com' ([Errno 11004] getaddrinfo failed)"))
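The error above is a DNS/network failure while fetching sign.json from GitHub. One hedged workaround, assuming the program only needs the file's JSON contents, is to fall back to a previously saved local copy when the remote fetch fails (the load_sign helper and the local path below are illustrative):

import json
import requests

SIGN_URL = "https://raw.githubusercontent.com/hashypooh/dynamic_stuff/main/sign.json"

def load_sign(local_path="sign.json"):
    # Prefer the remote copy; fall back to a local file when the
    # network or the DNS lookup fails.
    try:
        r = requests.get(SIGN_URL, timeout=15)
        r.raise_for_status()
        return r.json()
    except requests.exceptions.RequestException:
        with open(local_path, "r", encoding="utf-8") as fh:
            return json.load(fh)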

Doesn't work.

It doesn't show any models, and when I try to switch between active, expired or all, it says that the object has no attribute with that name and asks if I meant 'display_subscription'.

ChatGPT connecting to the OnlyFans website

Hiii,

I have an OnlyFans agency and I would be happy to build a Python script to automate chatting with customers via ChatGPT.
Could you please help me create this code? As far as I know there are difficulties with the API on the OnlyFans website, so the data can only be transferred by scraping, am I right?

post date and text

More of a request than an issue:
Is it possible to save the post date and text somewhere and link it to the downloaded files?
Maybe in the database
or
save the JSON
Thanx
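A minimal sketch of the "save the JSON" option: write a small sidecar file per post next to the downloads (the field names and the save_post_metadata helper are assumptions for illustration, not what the OF API or OFDL actually uses):

import json
import os

def save_post_metadata(user_folder, post_id, post_date, post_text, files):
    # Link the post's date and text to the filenames downloaded for it.
    meta = {
        "post_id": post_id,
        "date": post_date,
        "text": post_text,
        "files": files,
    }
    with open(os.path.join(user_folder, f"{post_id}.json"), "w", encoding="utf-8") as fh:
        json.dump(meta, fh, ensure_ascii=False, indent=2)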

Does not grab all links / download all media

Does not grab all links / download all media, but less than half of them.

For instance, on one subscription the General tab correctly states that the model has 75 photos and 54 videos, 129 in total. But after grabbing the links, the Links tab shows a much shorter list than that, and the same goes for the Download tab after pressing the Download Files button on the previous tab. As a result, only 29 videos and 33 photos were downloaded, 62 in total.

Another example: the General tab correctly states 304 photos and 62 videos, 366 in total. Again the Links tab lists a lot less than that, and the same goes for the Download tab. As a result, only 120 photos and 39 videos were downloaded, 159 in total.

As a potentially related note, there are a number of errors "QObject::connect: Cannot queue arguments of type 'QVector'
(Make sure 'QVector' is registered using qRegisterMetaType().)"
when fetching information about the subscriptions, as well as (sometimes) when fetching the links from a subscription.

Incorrect Sizes

For the last few days I've been having errors when scraping new content for somebody. I have to scrape multiple times, and most of the time when I restart it I end up with this. It also takes forever to scrape, at an average speed of 2.1 to 3.6 kbps. Is there a way to get it to go faster?

Messages: 0
Highlights: 0
Images: 2
Videos: 0
Stories: 0
Archived: 0
Audio: 0
Total size: 220B

Files to be downloaded: 2
Download Size: 220B

change where files are saved?

Hi, I'm fairly new to using Python, so I'm wondering if there's a way to change where the files are saved. I have a pretty low-storage MacBook, but I'm saving everything to an external hard drive. Is there any way to cut out the middle man of putting them on my computer and then transferring them? It would also reduce the chances of a download stopping midway because my machine runs out of space.
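If the script builds its output paths from a single base folder, the simplest approach is to make that base point at the external drive. The sketch below assumes a hypothetical OFDL_OUTPUT_DIR environment variable; it is not an option OFDL currently exposes:

import os

def get_output_dir(username):
    # The base directory can point straight at an external drive,
    # e.g. /Volumes/MyDrive/OFDL on macOS; defaults to the current folder.
    base = os.environ.get("OFDL_OUTPUT_DIR", os.getcwd())
    path = os.path.join(base, username)
    os.makedirs(path, exist_ok=True)
    return path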

New timeout error?

Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python\lib\site-packages\urllib3\connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "C:\Python\lib\site-packages\urllib3\connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "", line 3, in raise_from
File "C:\Python\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "C:\Python\lib\http\client.py", line 1332, in getresponse
response.begin()
File "C:\Python\lib\http\client.py", line 303, in begin
version, status, reason = self._read_status()
File "C:\Python\lib\http\client.py", line 264, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Python\lib\socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "C:\Python\lib\ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "C:\Python\lib\ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Python\lib\site-packages\requests\adapters.py", line 439, in send
resp = conn.urlopen(
File "C:\Python\lib\site-packages\urllib3\connectionpool.py", line 726, in urlopen
retries = retries.increment(
File "C:\Python\lib\site-packages\urllib3\util\retry.py", line 403, in increment
raise six.reraise(type(error), error, _stacktrace)
File "C:\Python\lib\site-packages\urllib3\packages\six.py", line 734, in reraise
raise value.with_traceback(tb)
File "C:\Python\lib\site-packages\urllib3\connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "C:\Python\lib\site-packages\urllib3\connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "", line 3, in raise_from
File "C:\Python\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "C:\Python\lib\http\client.py", line 1332, in getresponse
response.begin()
File "C:\Python\lib\http\client.py", line 303, in begin
version, status, reason = self._read_status()
File "C:\Python\lib\http\client.py", line 264, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Python\lib\socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "C:\Python\lib\ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "C:\Python\lib\ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Python\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Python\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "D:\OnlyFans Scraper\OFDLv2.2.py", line 404, in Get_Links
self.onlyfans.get_links(dict_return, value, index)
File "D:\OnlyFans Scraper\module\OF.py", line 343, in get_links
r = self.session.head(type_src)
File "C:\Python\lib\site-packages\requests\sessions.py", line 565, in head
return self.request('HEAD', url, **kwargs)
File "C:\Python\lib\site-packages\requests\sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python\lib\site-packages\requests\sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "C:\Python\lib\site-packages\requests\adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))
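Connection resets like this are usually transient. A common mitigation, not something OFDL currently does, is to mount an HTTPAdapter with automatic retries and backoff on the requests session (standard requests/urllib3 usage; urllib3 >= 1.26 uses the allowed_methods name shown here):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session():
    # Retry HEAD/GET requests a few times with exponential backoff
    # instead of failing the whole thread on the first reset.
    retry = Retry(total=5, backoff_factor=1,
                  status_forcelist=[429, 500, 502, 503, 504],
                  allowed_methods=["HEAD", "GET"])
    session = requests.Session()
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session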

I have a problem

I don't know which cookie to copy; it doesn't specify clearly. Also, when I install the requirements it tells me that my version of urllib3 is incompatible and I don't know how to update it. Someone help me, please.
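For the urllib3 incompatibility, the usual fix is simply upgrading the package (a generic pip command, not project-specific guidance):

pip3 install --upgrade urllib3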

Not working...

The old version was working fine the last time I used it. Today I tried to use it and keep getting:
Login failed
b'{"error":{"code":0,"message":"Access denied."}}'

Just installed version 2.2 and get the same thing.

According to the README file, I copy the underlined info as shown below (I accidentally edited out the sess id, but I assure you it's included in the string):
(screenshot of the cookie value)

When that information is entered, you can see the login failed:
(screenshot of the OFDL login error)
