Comments (6)
Also, when accessing https://service.tesla.com/docs/Model3/ServiceManual/en-us/index.html after logging in (I have a Model 3), I get this:
And on the service page it redirects to, I only see these options:
from teslaservicemanualscraper.
I've been scraping it for a while and only saw the reCAPTCHA box once, so I assume you have to be pretty egregious to trigger it. I added a login delay variable so you can solve it manually if it appears.
Also, that same error appeared before I started using selenium-stealth (they won't let obvious bots log in).
For the second issue, you actually need to claim the free service manuals first.
https://service.tesla.com/service-subscription
Now I am getting this after updating to your latest code:
C:\Users\Bond\TeslaServiceManualScraper>py scrape.py
Traceback (most recent call last):
File "C:\Users\Bond\TeslaServiceManualScraper\scrape.py", line 10, in <module>
from secrets import tesla_login
File "C:\Users\Bond\TeslaServiceManualScraper\secrets.py", line 3, in <module>
from scrape import login_delay
File "C:\Users\Bond\TeslaServiceManualScraper\scrape.py", line 10, in <module>
from secrets import tesla_login
ImportError: cannot import name 'tesla_login' from partially initialized module 'secrets' (most likely due to a circular import) (C:\Users\Bond\TeslaServiceManualScraper\secrets.py)
And when running secrets.py directly:
C:\Users\Bond\TeslaServiceManualScraper>py secrets.py
Traceback (most recent call last):
File "C:\Users\Bond\TeslaServiceManualScraper\secrets.py", line 3, in <module>
from scrape import login_delay
File "C:\Users\Bond\TeslaServiceManualScraper\scrape.py", line 10, in <module>
from secrets import tesla_login
File "C:\Users\Bond\TeslaServiceManualScraper\secrets.py", line 3, in <module>
from scrape import login_delay
ImportError: cannot import name 'login_delay' from partially initialized module 'scrape' (most likely due to a circular import) (C:\Users\Bond\TeslaServiceManualScraper\scrape.py)
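For context, both tracebacks show the same cycle: scrape.py imports tesla_login from secrets.py, while secrets.py imports login_delay back from scrape.py, so whichever file runs first hits a partially initialized module. A minimal sketch of one way to break the cycle (the function body and the driver argument are placeholders, not the repo's actual code; only the names tesla_login and login_delay come from the traceback): drop the import in secrets.py and pass the delay in as a parameter, so imports flow in one direction only.

```python
import time

# secrets.py (sketch): remove "from scrape import login_delay" and accept
# the delay as a parameter instead, so only scrape.py imports secrets.py.
def tesla_login(driver, login_delay=30):
    """Log in, then pause login_delay seconds so a human can solve a captcha."""
    # driver.get("https://service.tesla.com/")  # site-specific steps omitted
    time.sleep(login_delay)
    return "logged in"

# scrape.py would then call it without secrets.py ever importing scrape.py:
# from secrets import tesla_login
# tesla_login(driver, login_delay=45)
```

An alternative with the same effect is moving login_delay into a third module (e.g. a config.py) that both files import, keeping the import graph acyclic either way.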
Fixed!
Confirmed, it is working fine now! Thanks!