
manolo_scraper's Introduction


Run scraper

python main.py

List of Entities

manolo_scraper's People

Contributors

aniversarioperu, carlosp420, matiskay


manolo_scraper's Issues

Add Dockerfile

This Dockerfile should do the following (see the sketch after this list):

  • Add the configuration file config.json.
  • Install a Postgres database.
  • Install scrapy and all its dependencies (pip install -r requirements.txt).
  • Mount the working folders.
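A rough sketch of what such a Dockerfile could look like; the base image, paths, and system packages are assumptions, not decisions made in this issue:

    FROM python:2.7

    # Add the configuration file
    ADD config.json /code/config.json

    # Client libraries for Postgres; the database server itself is
    # probably better run as a separate container
    RUN apt-get update && apt-get install -y libpq-dev

    # Install scrapy and all its dependencies
    ADD requirements.txt /code/requirements.txt
    RUN pip install -r /code/requirements.txt

    # Mount point for the working folders
    VOLUME ["/code"]
    WORKDIR /code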

References:

Add a scraper for the Ministerio de Cultura

I have created a scraper to fetch the visits of the Ministerio de Cultura: https://github.com/matiskay/mc-visits. That scraper uses BeautifulSoup to parse the data; the idea is to rewrite the script using scrapy so it can be integrated into this repository. A possible starting point is sketched after the links below.

Link: http://visitas.mcultura.gob.pe/?r=consultas/visitaConsulta/index
Visit data: https://github.com/matiskay/mc-visits/tree/master/mc-visits
Ideas for creating the scraper: https://github.com/matiskay/mc-visits/blob/master/cultura.py#L21-L42
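A minimal sketch of what the rewritten spider could look like; the table selectors and item fields are assumptions loosely based on cultura.py, not a tested implementation:

    import scrapy

    class CulturaSpider(scrapy.Spider):
        name = 'cultura'
        allowed_domains = ['visitas.mcultura.gob.pe']
        start_urls = ['http://visitas.mcultura.gob.pe/?r=consultas/visitaConsulta/index']

        def parse(self, response):
            # One table row per visit; the selectors are placeholders
            for row in response.css('table tr'):
                yield {
                    'full_name': row.css('td:nth-child(1)::text').extract_first(default=''),
                    'institution': row.css('td:nth-child(2)::text').extract_first(default=''),
                }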

Investigate how to use Constraints in Manolo Scraper

Item Constraints
----------------
This module provides several classes that can be used as conditions to check
certain item constraints. Conditions are just callables that receive a dict and
*may* raise an AssertionError if the condition is not met.
Item constraints can be checked automatically (at scraping time) to drop items
that fail to meet the constraints. In order to do that, add the constraints
pipeline to your ITEM_PIPELINES:
    ITEM_PIPELINES = ['scrapylib.constraints.pipeline.ConstraintsPipeline']
And define the constraints attribute in your item:
    from scrapy.item import Item, Field
    from scrapylib.constraints import RequiredFields, IsPrice, IsList, MinLen

    class Product(Item):
        name = Field()
        price = Field()
        colors = Field()
        constraints = [
            RequiredFields('name', 'price'),
            IsPrice('price'),
            IsList('colors'),
            MinLen(10, 'name'),
        ]

Use `extract_first` methods and set `''` as default value

The current pattern in the code for handling first-element extraction is the following:

        try:
            total_string = response.css('#LblTotal').xpath('./text()').extract()[0]
        except:
            total_string = ''

This pattern is pretty ugly; the better way is the extract_first(default='') method:

    sel.xpath('//div[@id="not-exists"]/text()').extract_first(default='not-found')
    'not-found'
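Applied to the snippet above, the whole try/except block collapses to a single line:

    total_string = response.css('#LblTotal').xpath('./text()').extract_first(default='')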

Add meta information to the spiders.

In order to track the items scraped by a spider, I suggest adding the following information to each spider (a sketch follows the list):

  • page_number
  • spider_name
  • crawled_at

For now those fields would be useful.
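A small sketch of how these fields could be attached to every item; the helper name is hypothetical:

    import datetime

    def add_tracking_fields(item, spider, page_number):
        # Fields proposed in this issue
        item['page_number'] = page_number
        item['spider_name'] = spider.name
        item['crawled_at'] = datetime.datetime.utcnow().isoformat()
        return item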

Add a `date_end` parameter to the spiders.

In order to start testing the spiders (#24) and to verify that the data corresponds to a specific date, I suggest adding an extra variable called date_end.

scrapy crawl minedu -a date_start="2015-08-14" -a date_end="2015-08-16"

This variable will make it possible to crawl specific pages and also to test the spiders, since the data for a given date is invariant.
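Scrapy passes -a arguments to the spider constructor, so supporting the new parameter could look roughly like this (a sketch, not the final implementation):

    class MineduSpider(ManoloBaseSpider):
        name = 'minedu'

        def __init__(self, date_start=None, date_end=None, *args, **kwargs):
            super(MineduSpider, self).__init__(*args, **kwargs)
            self.date_start = date_start
            # Default to date_start so a single-day crawl needs one argument
            self.date_end = date_end or date_start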

Investigate Spider Contracts to test the Spiders.

Now that the spiders are being refactored and Item Loaders are being added for data collection, we need a programmatic way to test the spiders.

Currently my way of testing a spider is:

  • Compare the total number of items for a given date.

  • Pick an item from the first page and look it up in the database.

  • Pick an item from an intermediate page and look it up in the database.

  • Pick an item from the last page and look it up in the database.

Interesting facts about the visit records:

  • The total number of visits for a given date is "invariant".

  • The items for a given date are "invariant".

If Spider Contracts do not work for our case, the idea would be to use pytest connected to the database and verify that the records are there.

I think we will need to add a command to crawl a specific date.

Link: Spider Contracts: http://doc.scrapy.org/en/latest/topics/contracts.html
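A minimal sketch of what a contract could look like; the URL and the scraped field names are placeholders:

    class MineduSpider(ManoloBaseSpider):
        name = 'minedu'

        def parse(self, response):
            """Parse one page of visit records.

            @url http://example.com/visits?date=2015-08-14
            @returns items 1
            @scrapes full_name institution date
            """

Running scrapy check minedu would then fetch the given URL and verify the contract against the returned items.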

Create a Captcha Solver for Ministerio de Relaciones Exteriores

In order to crawl the Ministerio de Relaciones Exteriores, we need to create a captcha solver using HOG features and a machine learning algorithm such as Support Vector Machines. We can borrow ideas from MNIST examples. The most difficult part here is finding a dataset of letters and numbers. A rough sketch of the approach follows the example.

Example:

  • Input: a captcha image
  • Output: 4ZPZ1
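A minimal sketch of the HOG + SVM pipeline, assuming the captcha has already been segmented into fixed-size grayscale character images; every function name here is an assumption, not existing code:

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    def extract_features(images):
        # One HOG descriptor per character image; all images are
        # assumed to share the same fixed size
        return np.array([hog(img, orientations=9, pixels_per_cell=(4, 4),
                             cells_per_block=(2, 2)) for img in images])

    def train_solver(char_images, labels):
        # Fit an SVM on HOG features of labelled character images
        clf = SVC(kernel='linear')
        clf.fit(extract_features(char_images), labels)
        return clf

    def solve_captcha(clf, segmented_chars):
        # Classify each character and join the predictions, e.g. '4ZPZ1'
        return ''.join(clf.predict(extract_features(segmented_chars)))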

References

Normalize all names to uppercase

Many names are stored in lowercase, as shown in this screenshot:
(screenshot: manolo buscador de visitas a instituciones del estado, 2015-08-19)

For better consistency in the data I suggest converting all names to uppercase. This can be done easily with a function in the Item Loader, as sketched below.
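A small sketch using an input processor; ManoloItemLoader and full_name are assumed names:

    from scrapy.loader import ItemLoader
    from scrapy.loader.processors import MapCompose, TakeFirst

    class ManoloItemLoader(ItemLoader):
        default_output_processor = TakeFirst()
        # Uppercase every name as it is loaded into the item
        full_name_in = MapCompose(lambda value: value.strip().upper())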

Improve ManoloBaseSpider

All the spiders that inherit from ManoloBaseSpider use the following piece of code:

    def start_requests(self):
        d1 = datetime.datetime.strptime(self.date_start, '%Y-%m-%d').date()
        d2 = datetime.datetime.strptime(self.date_end, '%Y-%m-%d').date()
        # range to fetch
        delta = d2 - d1

        for day in range(delta.days + 1):
            my_date = d1 + timedelta(days=day)
            date_str = my_date.strftime("%d/%m/%Y")

            print("SCRAPING: {}".format(date_str))

It's possible to promote this piece of code to ManoloBaseSpider and add a method called initial_request that will be used to build the first request for each date.

https://github.com/aniversarioperu/manolo_scraper/blob/master/manolo_scraper/manolo_scraper/spiders/spiders.py#L43-L55

This issue is related to #30
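A sketch of the promotion; initial_request is the hook each subclass would override:

    import datetime
    from datetime import timedelta

    import scrapy

    class ManoloBaseSpider(scrapy.Spider):

        def start_requests(self):
            d1 = datetime.datetime.strptime(self.date_start, '%Y-%m-%d').date()
            d2 = datetime.datetime.strptime(self.date_end, '%Y-%m-%d').date()
            delta = d2 - d1  # range to fetch

            for day in range(delta.days + 1):
                date_str = (d1 + timedelta(days=day)).strftime('%d/%m/%Y')
                yield self.initial_request(date_str)

        def initial_request(self, date_str):
            # Each spider builds the first request for one date
            raise NotImplementedError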

Update packages

This project needs a bit more love. The latest version is 1.1.0.

Promote get_date_format to ManoloBaseSpider

This is a common pattern in all the spiders.

        date_obj = datetime.datetime.strptime(response.meta['date'], '%d/%m/%Y')
        date_str = datetime.datetime.strftime(date_obj, '%Y-%m-%d')

The idea is to promote this code to a method called get_date_format in ManoloBaseSpider.
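A sketch of the promoted method; it would live on ManoloBaseSpider so every spider can call self.get_date_format(...):

    def get_date_format(self, date):
        # Convert the site's '14/08/2015' format to the item's '2015-08-14'
        date_obj = datetime.datetime.strptime(date, '%d/%m/%Y')
        return datetime.datetime.strftime(date_obj, '%Y-%m-%d')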

upgrade scrapylib item processors

so we get rid of this warning when running tests:

    /local/lib/python2.7/site-packages/scrapylib/processors/__init__.py:8: ScrapyDeprecationWarning: Module `scrapy.contrib.loader.processor` is deprecated, use `scrapy.loader.processors` instead
      from scrapy.contrib.loader.processor import Compose, MapCompose, TakeFirst

Use Item Loaders

If we use Item Loaders for the items we can reduce the amount of code in each Spider and take advantage of the data-cleaning functions. From the Scrapy documentation:

Item Loaders provide a convenient mechanism for populating scraped Items. 
Even though Items can be populated using their own dictionary-like API,
Item Loaders provide a much more convenient API for populating them 
from a scraping process, by automating some common tasks like parsing 
the raw extracted data before assigning it.
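A minimal sketch of a helper using an Item Loader; the item module path, field names, and selectors are assumptions:

    from scrapy.loader import ItemLoader

    from manolo_scraper.items import ManoloItem  # assumed module path

    def parse_record(row, date):
        # Populate one item from a table-row selector
        loader = ItemLoader(item=ManoloItem(), selector=row)
        loader.add_css('full_name', 'td:nth-child(2)::text')
        loader.add_css('institution', 'td:nth-child(4)::text')
        loader.add_value('date', date)
        return loader.load_item()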
