
Comments (5)

eliasdabbas commented on May 24, 2024

Not at the moment. The whole thing is designed to write to disk incrementally, which is very helpful for performance, since huge amounts of data don't have to be held and processed in memory.

I'm curious, why do you think this would be a good thing? What are you trying to achieve?


dmthomson commented on May 24, 2024

I am writing a frontend and would like to provide the status of the crawl as well as the results. I am using FastAPI to trigger the crawl and wanted to avoid writing a bunch of code to read the file.

I was also under the impression that the contents were written to the file all at once, but now I see that's not the case.
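
A minimal sketch of what that trigger could look like, assuming FastAPI's BackgroundTasks; the endpoint path and output file name are made up for illustration:

```python
import advertools as adv
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

OUTPUT_FILE = "crawl_output.jl"  # hypothetical path; adv.crawl expects a .jl file


def run_crawl(url: str) -> None:
    # advertools appends one JSON line per crawled URL as the crawl progresses
    adv.crawl(url, OUTPUT_FILE, follow_links=True)


@app.post("/crawl")
def start_crawl(url: str, background_tasks: BackgroundTasks):
    # Respond immediately; the crawl keeps writing to disk in the background
    background_tasks.add_task(run_crawl, url)
    return {"status": "started", "url": url}
```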


eliasdabbas commented on May 24, 2024

Very interesting application. I'd love to know more.

would like to provide the status of the crawl

As the crawl is happening, a new line is added for each URL crawled. You can easily check the number of lines in the output file and display that to the user ("X URLs crawled").
If crawling in list mode you can also provide a percentage, and/or "X URLs remaining".
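
For example (a rough sketch; the helper name and file path are made up for illustration):

```python
def crawl_progress(output_file, total=None):
    # Each crawled URL becomes one line in the .jl output file
    with open(output_file) as f:
        done = sum(1 for _ in f)
    if total:  # list mode: the URL list size is known up front
        return f"{done}/{total} URLs crawled ({done / total:.0%})"
    return f"{done} URLs crawled"


print(crawl_progress("crawl_output.jl", total=500))
```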

wanting to avoid writing a bunch of code to read the file

This can be done with a single pandas command, read_json. Or am I missing something?
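
For example, since the output file is in JSON Lines format (the column selection assumes typical crawl output fields):

```python
import pandas as pd

# lines=True is needed because the crawl output is JSON Lines, not one JSON object
crawl_df = pd.read_json("crawl_output.jl", lines=True)
print(crawl_df[["url", "status", "title"]].head())
```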


dmthomson commented on May 24, 2024

Yeah, I am familiar with the pandas read_json command. I've used the tool before; I was just looking to see if there was a way to stream things into a different place, like say Redis or a Kafka topic. At the end of the day I need to persist the crawl data and return some of the information back to the client. I can definitely do this using the dataframe method and writing those to some persistent location.
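
One way to approximate that streaming (not a built-in advertools feature; a sketch assuming the redis-py package, with a made-up key name and polling interval) is to tail the growing output file and forward each new line:

```python
import json
import time

import redis  # assumes the redis-py package is installed

r = redis.Redis()


def tail_to_redis(output_file, list_key="crawl_results"):
    """Follow the growing .jl file and push each new crawl record to a Redis list."""
    with open(output_file) as f:
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)  # wait for the crawler to append more lines
                continue
            record = json.loads(line)  # one crawled URL per line
            r.rpush(list_key, json.dumps(record))
```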

This can be done with a single pandas command, read_json. Or am I missing something?

Yes, I will go that route and read the lines of the file to check crawl status. The only other thing is calculating the number of potential pages/lines.

I am taking a domain URL as an argument, so I have to create the list dynamically. Does follow mode offer a way to determine how many pages will get crawled, or does advertools provide a good way to determine the pages I could put in the list? Perhaps crawling the sitemap would do.

If crawling in list mode you can also provide a percentage, and/or "X URLs remaining".

Very interesting application. I'd love to know more.

I am building a suite of SEO tools focused on automation.


eliasdabbas commented on May 24, 2024

Cool. I think the XML sitemap can be downloaded quickly, and it can provide a generally good estimate of how big the site is and the expected number of URLs.
Of course there could be discrepancies, but most of the time I think it provides a good estimate.
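
For example, using advertools' sitemap_to_df (the sitemap URL here is a placeholder):

```python
import advertools as adv

# Download the sitemap (or sitemap index) into a DataFrame
sitemap_df = adv.sitemap_to_df("https://example.com/sitemap.xml")

estimated_total = len(sitemap_df)      # rough estimate of crawlable pages
url_list = sitemap_df["loc"].tolist()  # "loc" holds the page URLs, usable as a crawl list
```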

Looking forward to seeing what you build!

