
e621_downloader's Introduction

E621 Downloader


The e621_downloader is a low-level program meant to download a large number of images at a fast pace. It can handle bulk posts, single posts, sets, and pools via a custom, easy-to-read language that I made.

Having tested this software extensively, I managed to download 10,000 posts (over 20 GB) in just two hours, averaging around 20 MB/s.

Goal

The goal of this application is to keep up to date with your favorite artists, download pools, and grab images from normal, everyday tags, all while staying reliable.

About E621/E926

E621 is a mature image board that replaced the image board Sidechan. It is complemented by e926 (formerly e961), a general-audience image board. E621 runs on the Ouroboros platform, danbooru-style software designed specifically for the site.

E621 has over 2,929,253 images and videos hosted on its platform.

Todo list for future updates

This list is constantly updated and checked for new features and addons that need to be implemented in the future. My hope with all this work is to have a downloader that will last for the foreseeable future, as many downloaders like this have either ceased development or proven too hard or confusing for the average person to operate.

  • Add a menu system with configuration editing, tag editing, and download configuration built in.
  • Transition the tag file language into JSON to integrate easily with the menu system.
  • Update the code to be more sound, structured, and faster.

Installation Guide (Windows)

  1. If you are on Windows, simply visit this link and install Rust and Cargo through the installer provided. You will need GCC or MSVC to compile the project, so follow either sub-step 1 or 2 below.

    1. To get GCC, I would recommend this helpful little site, which contains the most up-to-date versions of GCC. Note, however, that you will need to unzip it into a directory of your choosing and add that directory's bin folder to your PATH for it to work.
    2. For MSVC, just go to this link and download the Visual Studio Build Tools (at the very bottom of the page), which will install all the needed binaries without the full Visual Studio IDE.
  2. Now, once you've done that, you can either clone the GitHub project directly through Git, or download a zip of the latest version. You can download Git from here.

    • If you choose to use Git, find a proper directory you want the project in, then type git clone https://github.com/McSib/e621_downloader.git into a console and press Enter. This will clone the repository and prepare the project for you to modify or just compile.
  3. No matter which option you chose, open a terminal (CMD or Terminal) and go to the root directory of the project (where Cargo.toml and Cargo.lock are located). From there, run cargo build or cargo build --release. If the program compiles and runs, you're good to go.

Installation Guide (Arch Linux)

  1. For Arch Linux users, you will need a couple of things installed to get the project up and running. First, install the required packages (if you haven't already) by running:
sudo pacman -S rust base-devel openssl git gdb
  2. Next, clone the Git repository into a directory of your choosing:
git clone https://github.com/McSib/e621_downloader.git
  3. From there, go into the newly cloned directory and see if you can build by running cargo build --release or cargo build. If it compiles okay, you are good to go.

  4. Alternatively, you can download a prebuilt binary from the release page if you just want to use the program with little hassle.

Installation Guide (Debian)

  1. This is much like the Arch Linux setup, with minor tweaks to the package download command. Instead of the pacman command, run sudo apt install gcc g++ gdb cargo libssl-dev git, then follow the Arch Linux instructions from step 2 onward.

FAQ

Why does the program grab only 1,280 posts for certain tags?

When a tag passes the limit of 1,500 posts, its collection is considered too large for the software to download in full: the combined size of all the files would put strain on the server, the program, and the system it runs on. The program compensates for this hard limit by downloading only four pages' worth of posts, using the highest post limit the e621/e926 servers allow (320 posts per page). In total, it will grab 1,280 posts at most.

Note that, depending on the type of tag, the program will either apply or ignore this limit. This is handled internally by sorting each tag into one of two categories: General and Special.

The General category forces the program to use the 1,280-post limit. The tag types that fall under this flag are: General (basic tags such as fur, smiling, or open_mouth), Copyright (any form of copyrighted media is always considered too large to download in full), Species (species tags can hold nearly as many posts as general tags, so they are treated the same way), and, in special cases, Character (when a character has more than 1,500 posts tied to it, it is treated as a General tag to avoid longer download times).

The tag types that fall under the Special flag are: Artist (if you are grabbing an artist's work directly, you generally plan to archive all of it, so it is always considered Special) and Character (if the number of posts tied to the character is below 1,500, it is considered Special and the program downloads every post featuring that character).

This system is more complex than what I have explained so far, but in a basic sense, this is how the download function works with tags. These checks happen through a tight-knit relationship between the parser and the downloader: the parser grabs the post counts and sorts the tags into their correct categories, while the downloader uses those categories to grab and download posts correctly.
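The categorization described above can be sketched roughly as follows. This is an illustrative sketch only: the type and function names are hypothetical, not the downloader's actual API, and the four-page figure is inferred from the stated 1,280-post cap (4 × 320).

```rust
// Illustrative sketch of the General/Special categorization described above.
// All names here are hypothetical, not the project's real types.

const POST_HARD_LIMIT: u32 = 1500; // above this, a tag is "too large"
const PAGE_LIMIT: u32 = 320;       // highest posts-per-page the API allows
const MAX_PAGES: u32 = 4;          // 4 pages * 320 posts = 1,280 posts

#[derive(Debug, PartialEq)]
enum TagCategory {
    General, // capped at MAX_PAGES * PAGE_LIMIT posts
    Special, // downloaded in full
}

/// Raw tag kinds as reported by the site.
enum TagKind {
    General,
    Copyright,
    Species,
    Character,
    Artist,
}

fn categorize(kind: TagKind, post_count: u32) -> TagCategory {
    match kind {
        // Artists are always downloaded in full (archival use).
        TagKind::Artist => TagCategory::Special,
        // Characters are Special only while they stay under the hard limit.
        TagKind::Character if post_count < POST_HARD_LIMIT => TagCategory::Special,
        // Everything else (and large characters) gets the 1,280-post cap.
        _ => TagCategory::General,
    }
}

fn max_posts(category: &TagCategory, post_count: u32) -> u32 {
    match category {
        TagCategory::General => post_count.min(MAX_PAGES * PAGE_LIMIT),
        TagCategory::Special => post_count,
    }
}
```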

Hopefully, this explains how and why the limit is there.

Notice for users using the new version (1.6.0 and newer)

If you are not logged into e621, a filter (essentially a global blacklist) is applied. This blacklist nullifies any posts that fall under its settings. So, if images you're trying to download aren't showing up, log in and try again; otherwise, the filter will continue blocking them.

Notice for users using a VPN

I have had a recurring "bug" show up in my issues over the last couple of months, usually right after a new release, so I am posting this notice for VPN users to prevent further issue spam.

Some users experience consistent crashes while parsing, obtaining the blacklist, or downloading. Every affected user so far has been using a VPN, with no other noticeable cause linked. After a good deal of testing, I have concluded that VPN users will occasionally be prompted by either e621 directly or Cloudflare with a captcha, or a test for whether you are a robot. Since my program has no GUI and no tangible way of handling that, it crashes immediately. I have looked for fixes to this issue and have yet to find anything.

So, if you are using a VPN, be warned that this can happen. The current workaround is switching locations in the VPN (if you have that feature) or disabling the VPN altogether (if you have that option). I understand it is annoying and can be a pain, but this is all I can do until I come across a fix. Sorry for the inconvenience, and apologies if you are one of the users experiencing this issue.

e621_downloader's People

Contributors

dependabot[bot], head-gardener, mcsib, spbmnn


e621_downloader's Issues

Config isn't saved after downloading images.

This program is meant to download images from a certain date up to today, but since it doesn't update the config with today's date, it will constantly check from the previous date forward. That may be fine for a first run or for grabbing all of an artist's work, but people updating their archives will end up waiting for everything to re-download just to get 10 new pieces.

Download directory isn't created on first run.

When the program runs for the first time, it doesn't create a directory for the images to go into; it crashes immediately when trying to download images into the nonexistent directory.
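A typical fix for this class of crash (a sketch, not the project's actual code; the directory name is illustrative) is to create the directory tree before writing anything into it:

```rust
use std::fs;
use std::path::Path;

/// Ensure the download directory exists before any files are written.
/// `create_dir_all` creates every missing parent and is a no-op if the
/// directory is already there, so it is safe to call on every run.
fn ensure_download_dir(dir: &str) -> std::io::Result<()> {
    fs::create_dir_all(dir)?;
    debug_assert!(Path::new(dir).is_dir());
    Ok(())
}
```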

Blacklist has stopped working.

While preparing for the 1.5.3 release, I decided to test the blacklist, since some of the optimizations and changes involved it. There is a bug rendering the blacklist useless: it isn't filtering any posts, or at least isn't showing that it is. This needs to be fixed before anything is pushed as a release for the next update.

Downloader randomly crashes in the middle of grabbing the tags

Starting today, the program began randomly crashing in the middle of grabbing the tags. Sometimes it crashes at the very start, sometimes in the middle, sometimes at the very end, but there is always some sort of error like this:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Decode, source: Error("expected value", line: 1, column: 1) }

error decoding response body: expected value at line 1 column 1', src\e621\sender\mod.rs:290:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Let me see if I can get a proper log generated; the log seems to cut off at that error without even finishing printing it. I only managed to capture this by running the app from the command line.

Blacklist Update for 1.6.0 Release

This is a published development and design page on how the blacklist will be handled from now on. The blacklist in its current state is parsed 320 times when performing bulk searches, pool searches, and set searches. This issue will discuss and show the change planned for the new update.

In light of the memory leak described in #33, the blacklist is clearly showing its limits. Having originally been created for local client-side use, it didn't need any bells and whistles in how it was implemented. But with the recent API change, the blacklist now needs to perform an API call for users, and because of this, it has broken its own rule on how it should be treated. Performance reflects this clearly: long waits just to scan 320 posts, when it should take milliseconds. I have decided to take the design back to the drawing board and give the blacklist a better coat of paint.

With this redesign in mind, I have made a diagram of how it will now function.

New Blacklist Diagram

Shown here is the new setup for the blacklist. It will be stored in, and owned by, the WebConnector. The blacklist will be parsed and tokenized before being passed to the Grabber as a reference, which will be used to filter posts and bring far greater performance overall. Instead of parsing happening at runtime, it will now happen before grabbing even begins, giving the program more freedom and breathing room during the grabbing process. With any luck, we will not have to make another massive change to the blacklist again, unless the server updates to support user blacklists server-side.

Blacklist Parser is Causing Memory Leak

Whenever a user entry is added to the blacklist (user:fluff-kelvar), the BlacklistParser starts a memory leak through a never-ending loop. The loop keeps allocating new memory until the program is closed or the computer crashes.

[ERROR] Json was unable to deserialize to "serde_json::value::Value"!

As of today, I encounter this crash every time I try to start a download.

03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:23] Printing system information out into log for debug purposes...
03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:24] ARCH:           "x86_64"
03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:25] DLL_EXTENSION:  "dll"
03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:26] DLL_PREFIX:     ""
03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:27] DLL_SUFFIX:     ".dll"
03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:28] EXE_EXTENSION:  "exe"
03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:29] EXE_SUFFIX:     ".exe"
03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:30] FAMILY:         "windows"
03:50:57 [TRACE] (1) e621_downloader: [src\main.rs:31] OS:             "windows"
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:26] Starting e621 downloader...
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:27] Program Name: e621_downloader
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:28] Program Version: 1.7.0-hotfix.1
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:29] Program Authors: McSib <[email protected]>
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:30] Program Working Directory: D:\e621
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:39] Checking if config file exists...
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:47] Checking if tag file exists...
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:61] Login information loaded...
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:62] Login Username: system_searcher
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:63] Login API Key: ************************
03:50:57 [TRACE] (1) e621_downloader::program: [src\program.rs:64] Login Download Favorites: true
03:50:57 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:54] SenderClient initializing with USER_AGENT_VALUE "e621_downloader/1.7.0-hotfix.1 (by McSib <[email protected]> on e621)"
03:50:57 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:59] Prompt for safe mode...
03:50:58 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:72] Safe mode decision: false
03:50:58 [TRACE] (1) e621_downloader::program: [src\program.rs:71] Parsing tag file...
03:50:58 [ERROR] Json was unable to deserialize to "serde_json::value::Value"!
03:50:58 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:320] url_type_key: tag_bulk
03:50:58 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:321] tag: kadath

Does not work when logged in

As reported in the forum thread:

I recently removed appeal_count from the api response, which breaks serde deserialization. Just removing the field from the struct should work fine as a fix.

Program doesn't seem to respect the "config.json" file after the 1.7.0 update

My current config is this:
{ "downloadDirectory": "D:/e621/" }

Before the update, my folder structure looked like this:
[Program Root]/D_/e621/General Searches

After the update, it now looks like this:
[Program Root]/General Searches

Not sure if this is an intended change or something unforeseen, but it ought to be reported regardless, since there was nothing about it in the changelog.

Deleted posts are now showing up in bulk search API returns

While testing to get my program working again, I found that certain posts contain a file URL which is null when it should contain a string linking directly to the image. After looking at the flag struct (a structure returned inside a PostEntry) a few times, I came to the conclusion that some posts are actually deleted but are still showing up in the return. In the old API this was very seldom the case, but with the new API, deleted posts are showing up, so I'm going to have to filter them out, as well as the blacklisted posts described in my most recent issue [#31].

Pools with non-filename-allowed characters in their name fail to download

Pools with a colon in the name, like "ExamplePool: Chapter 1", are not downloaded correctly. The download folder for the pool is created correctly and the program seems to download files without giving any errors; however, the folder will only contain one 0-byte file named with whatever comes before the colon, which in this case would be "ExamplePool".

Pools with quotes in the name, like "The "example" pool", also fail. Again the folder seems to be created correctly, but the program panics while downloading the first file and exits with a syntax error. No files are created in this case.

While I did not test this, I assume pool names with other characters not allowed in filenames would cause similar problems.
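A common remedy for this class of bug (a sketch, not the project's actual code; the replacement character and the exact character set are assumptions) is to sanitize the pool name before using it as a file or directory name:

```rust
/// Replace characters that Windows (and some other platforms) forbid in
/// file names with an underscore. Sketch only; the real downloader may
/// choose a different replacement policy or character set.
fn sanitize_filename(name: &str) -> String {
    name.chars()
        .map(|c| match c {
            '<' | '>' | ':' | '"' | '/' | '\\' | '|' | '?' | '*' => '_',
            _ => c,
        })
        .collect()
}
```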

`reqwest::get` is making a client every time it is used and is slowing the program down.

A recent look at the reqwest documentation shows that get is merely a shortcut for one-off calls to a URL. Because it is being used as the main client, it causes far more memory and CPU usage than necessary. It should be replaced with an actual client instance, as described here:

let client = Client::new();
let resp = client.get("http://httpbin.org/").send()?;

This will allow for more efficient code, with a single client maintaining requests to the server. It will also handle the user-agent header, which needs to be set with the project name, version, and username of the e621 account, as described here:


This code looks bad and should be improved.

e621/mod.rs:196:9

if key_date != DEFAULT_DATE {
    *key_date = date.format("%Y-%m-%d").to_string();
} else {
    key_date.remove(key_date.len() - 1);
    key_date.push_str("2");
}

Am I the only one who thinks this looks nasty? There has to be a better way of handling code like this.
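One possible cleanup (a sketch only; `DEFAULT_DATE`'s real value isn't shown in the issue, so "2019-01-01" is an assumed placeholder, and today's date is passed in pre-formatted to keep the sketch dependency-free) keeps the behavior identical while making each branch's intent explicit:

```rust
// Sketch of an equivalent but tidier version of the snippet above.
// DEFAULT_DATE is an assumed placeholder value. The behavior is kept
// the same: a still-default entry gets its last character bumped to
// '2', anything else gets today's formatted date.
const DEFAULT_DATE: &str = "2019-01-01";

fn update_key_date(key_date: &mut String, formatted_today: &str) {
    if key_date.as_str() == DEFAULT_DATE {
        // Same effect as remove(len - 1) + push_str("2"), but clearer.
        key_date.pop();
        key_date.push('2');
    } else {
        *key_date = formatted_today.to_string();
    }
}
```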

There is a way to simplify the tag file even more.

I thought about how to handle the sorting of normal tags, artist tags, character tags, and pool IDs along with post IDs. The groups should be removed and simply hold all tags in one set. Then (when the tags are all parsed) run it through an identifier that figures out what the user is searching for. I could use an enum to hold what type of tag it is, then make the client search for that type with its own URL. This idea would greatly simplify the language and make the overall work for the parser finished.
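The enum-plus-identifier idea above could look roughly like this. Everything here is illustrative: the variant names and classification rules are hypothetical, and since bare numeric tokens are ambiguous between pool and post IDs, this sketch assumes a `pool:`/`post:` prefix to disambiguate.

```rust
// Illustrative sketch of the single-set + identifier idea described
// above; names and classification rules are hypothetical.
#[derive(Debug, PartialEq)]
enum ParsedTag {
    Pool(u32),
    Post(u32),
    // Artists, characters, and general tags all resolve to a search.
    Search(String),
}

/// Classify a raw token from the (flattened) tag file.
fn identify(token: &str) -> ParsedTag {
    if let Some(id) = token.strip_prefix("pool:").and_then(|s| s.parse().ok()) {
        ParsedTag::Pool(id)
    } else if let Some(id) = token.strip_prefix("post:").and_then(|s| s.parse().ok()) {
        ParsedTag::Post(id)
    } else {
        ParsedTag::Search(token.to_string())
    }
}
```

With a scheme like this, each variant can map onto its own request URL, which is what would let the parser stay simple while the client decides how to search.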

We are at a point where we need a checker to ensure the parsed syntax is correct.

Right now, we have a very basic parser that can parse the syntax and check for any broken or messed up code, but we do not have a checker ensuring that the parsed code is doing something it shouldn't. Something that is pretty blatant right now is the default groups, since the sub-group feature is added, there is a chance that a user could use it on the default groups (which are used by the software for knowing how to grab and download posts). If they do, it could cause broken behavior that isn't intended. Something needs to be added to ensure that this won't happen.
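Such a check could start as simply as rejecting sub-groups under the reserved default group names. This is an illustrative sketch; the group names are taken from the example tag files elsewhere on this page, and the function shape is hypothetical.

```rust
// Hypothetical validation pass: forbid user-defined sub-groups inside
// the reserved default groups. Group names taken from the example tag
// file shown elsewhere in this document.
const DEFAULT_GROUPS: [&str; 5] = ["artists", "pools", "sets", "single-post", "general"];

fn validate_subgroup(parent: &str, sub: &str) -> Result<(), String> {
    if DEFAULT_GROUPS.contains(&parent) {
        Err(format!(
            "sub-group `{}` is not allowed inside reserved group `{}`",
            sub, parent
        ))
    } else {
        Ok(())
    }
}
```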

Bring a better Date system.

The one that is currently being used is prone to cause issues. The config.json needs to update with new dates for each tag, so on the first check, it will download as much as it can (for artist) and then update the config with the tag used and a date of when it was grabbed.

{
  "createDirectories": true,
  "downloadDirectory": "downloads/",
  "lastRun": {
      "fluff-kevlar": "2019-03-28",
      "ichthy0stega": "2019-03-28"
  },
  "partUsedAsName": "md5"
}

The order of pools posts are incorrect on search

There is a current bug when downloading pools where the pool is ordered incorrectly. The cause is from how the posts are grabbed. Initially, to save performance, pools were made to be downloaded in a single bulk search with the tag pool:{pool_id_here}. Because of this, it orders from the highest id to the lowest, meaning it relies on when the post is uploaded rather than where the post resides in the pool. This will be worked on before the release of 1.7.0.
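One way to restore pool order (a sketch under assumptions; `Post` and the field names are stand-ins, and the real 1.7.0 fix may work differently) is to sort the bulk-searched posts by their index in the pool's own declared post-id sequence:

```rust
use std::collections::HashMap;

// Minimal stand-in for a fetched post; only the id matters here.
struct Post {
    id: u64,
}

/// Reorder bulk-searched posts to match the pool's declared order.
/// `pool_post_ids` is the id sequence the pool itself reports.
fn sort_by_pool_order(posts: &mut Vec<Post>, pool_post_ids: &[u64]) {
    let rank: HashMap<u64, usize> = pool_post_ids
        .iter()
        .enumerate()
        .map(|(i, &id)| (id, i))
        .collect();
    // Posts missing from the pool listing sort to the end.
    posts.sort_by_key(|p| rank.get(&p.id).copied().unwrap_or(usize::MAX));
}
```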

The group syntax may be entirely useless.

I took a long break from the project to work on other things in my life, and while doing so, I thought about the project's problems. One of the main ones right now is the parser: I feel it is becoming overly complicated and trying too hard to be inclusive while using a basic syntax. This is clearly becoming more of an issue than an enhancement. In my eyes, the group feature should be backtracked and removed, wiped clean, to let the original syntax shine. Maybe in the future I can add a basic program to update or add to the tag file without the user opening the file and editing it directly.

Currently, though, other features matter more than a parser update. What about: a tag check to validate tags; an MD5 system to ensure already-downloaded files won't be overwritten with the same data (causing needless read/write usage); or the much-needed safe mode and download option for pools and general tags? Those three are far more important than the parser right now.

General Tag with colon causes crash

As discussed in #51, trying to download posts using a tag with a colon in it causes a crash. Included below are my tag file and log file.

Tag file:

# This is the tag file that you will use so the program can know what tags to search.
# If you wish to comment in this file, simply put `#` at the beginning or end of line.

# Insert tags you wish to download in the appropriate group (remove all example tags and IDs with what you wish to download):

[artists]
teranen

#[pools]

#[sets]

#[single-post]

[general]
rating:questionable

Console log:

Should enter safe mode? no
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src\e621\io\tag.rs:162:43
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

e621_downloader.log:

20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:23] Printing system information out into log for debug purposes...
20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:24] ARCH:           "x86_64"
20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:25] DLL_EXTENSION:  "dll"
20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:26] DLL_PREFIX:     ""
20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:27] DLL_SUFFIX:     ".dll"
20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:28] EXE_EXTENSION:  "exe"
20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:29] EXE_SUFFIX:     ".exe"
20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:30] FAMILY:         "windows"
20:23:50 [TRACE] (1) e621_downloader: [src\main.rs:31] OS:             "windows"
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:26] Starting e621 downloader...
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:27] Program Name: e621_downloader
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:28] Program Version: 1.7.0-hotfix.1
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:29] Program Authors: McSib <[email protected]>
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:30] Program Working Directory: C:\Users\Andrew\Downloads\e_downloader_1.7.0-hotfix.1_release
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:39] Checking if config file exists...
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:47] Checking if tag file exists...
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:61] Login information loaded...
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:62] Login Username: 
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:63] Login API Key: 
20:23:50 [TRACE] (1) e621_downloader::program: [src\program.rs:64] Login Download Favorites: true
20:23:50 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:54] SenderClient initializing with USER_AGENT_VALUE "e621_downloader/1.7.0-hotfix.1 (by McSib <[email protected]> on e621)"
20:23:50 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:59] Prompt for safe mode...
20:23:51 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:72] Safe mode decision: false
20:23:51 [TRACE] (1) e621_downloader::program: [src\program.rs:71] Parsing tag file...

[Bug] Does not work on Windows 10 17763

When you download the release version. (https://github.com/McSib/e621_downloader/releases/download/1.4.3/e621_downloader.1.4.3.7z)

Error:
Error: Error(Json(Error("expected value", line: 1, column: 1)))

tags.txt:

[artists]
fluff-kevlar

By the way, why is it a ".txt" file if it uses an INI-like format?

The other files have not been changed.

  • Figure out where the JSON error is coming from, whether it be from the parser or from the web connector.
  • Figure out the steps to take to cause the error on my system.
  • Locate the problem in code.
  • Fix bug.

Naming posts by their id

Is it possible to configure the program to name all downloaded posts by their id instead of md5? It would be ideal to sort them by id so they can be listed chronologically once downloaded. That would be especially useful for huge folders with hundreds or even thousands of posts, to have them listed like on e621 and sorted from the artist's first work to the latest, or vice versa. I personally download posts by artist only.

JSON deserialization error when parsing tags that have no content

The relevant tag in my tags list is qualzaar, which is a typo and should be qualzar. That said, better behavior for blank tags would be nice; it was not clear that I had made a typo until I had double-checked a few other things first.

Trace

PS Z:\porn\furry\!! e621 downloader> cmd /C "set RUST_BACKTRACE=full && e621_downloader.exe"
Should enter safe mode? no
thread 'main' panicked at 'Json was unable to deserialize to Vec<AliasEntry>!: reqwest::Error { kind: Decode, source: Error("invalid type: map, expected a sequence", line: 1, column: 0) }', src\e621\sender.rs:762:9
stack backtrace:
   0: backtrace::backtrace::dbghelp::trace
             at C:\Users\VssAdministrator\.cargo\registry\src\github.com-1ecc6299db9ec823\backtrace-0.3.46\src\backtrace/dbghelp.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at C:\Users\VssAdministrator\.cargo\registry\src\github.com-1ecc6299db9ec823\backtrace-0.3.46\src\backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src\libstd\sys_common/backtrace.rs:78
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src\libstd\sys_common/backtrace.rs:59
   4: core::fmt::write
             at src\libcore\fmt/mod.rs:1069
   5: std::io::Write::write_fmt
             at src\libstd\io/mod.rs:1504
   6: std::sys_common::backtrace::_print
             at src\libstd\sys_common/backtrace.rs:62
   7: std::sys_common::backtrace::print
             at src\libstd\sys_common/backtrace.rs:49
   8: std::panicking::default_hook::{{closure}}
             at src\libstd/panicking.rs:198
   9: std::panicking::default_hook
             at src\libstd/panicking.rs:218
  10: std::panicking::rust_panic_with_hook
             at src\libstd/panicking.rs:511
  11: rust_begin_unwind
             at src\libstd/panicking.rs:419
  12: core::panicking::panic_fmt
             at src\libcore/panicking.rs:111
  13: core::option::expect_none_failed
             at src\libcore/option.rs:1268
  14: e621_downloader::e621::sender::RequestSender::query_aliases
  15: e621_downloader::e621::io::tag::parse_tag_file
  16: e621_downloader::main
  17: std::rt::lang_start::{{closure}}
  18: std::rt::lang_start_internal::{{closure}}
             at src\libstd/rt.rs:52
  19: std::panicking::try::do_call
             at src\libstd/panicking.rs:331
  20: std::panicking::try
             at src\libstd/panicking.rs:274
  21: std::panic::catch_unwind
             at src\libstd/panic.rs:394
  22: std::rt::lang_start_internal
             at src\libstd/rt.rs:51
  23: main
  24: _tmainCRTStartup
  25: mainCRTStartup
  26: _report_error
  27: _report_error
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Content of tags.txt

# This is the tag file that you will use so the program can know what tags to search.
# If you wish to comment in this file, simply put `#` at the beginning or end of line.

# Insert tags you wish to download in the appropriate group (remove all example tags and IDs with what you wish to download):

[artists]
braeburned
funkybun
trigaroo
shoutingisfun
teranen
qualzaar

[pools]
20047 # Funky: Skinny-dip
21826 # Funky: Toy Store
19459 # Funky: Butt Stuff
16713 # Funky: Cafe Expose
13691 # Funky: They won't mind (nude beach)
13818 # female rose alt of above

[sets]

[single-post]

[general]

There is a problem with some of the tags.

If there is a comment on the same line as a tag or id, there is an extra space left in the string. While this doesn't affect the program at all, it would look nicer gone, and it also means the parser is leaking a character when it shouldn't.

Example

"fluff-kevlar " (with a trailing space) should just be "fluff-kevlar".
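The fix is likely a simple trim after stripping the inline comment (a sketch; the parser's real structure may differ):

```rust
/// Strip an inline `#` comment and any surrounding whitespace from a
/// tag-file line. Sketch of the likely fix, not the project's code.
fn clean_tag_line(line: &str) -> &str {
    let without_comment = match line.find('#') {
        Some(idx) => &line[..idx],
        None => line,
    };
    without_comment.trim()
}
```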

Works flawlessly on Linux

This is not an issue, but I don't know a better place to put it. I compiled version 1.6.1 on my Arch Linux system and it works perfectly.

Jumps suddenly when the program is running

The downloads folder is not created when running the program for the first time, which causes it to crash when run later. What can I do? I've tried creating a "Downloads" folder, but it didn't work. (Sorry for my grammar.)

Searching for multiple tags.

Ok. I have searched and searched, but I can't seem to find the answer anywhere.

This downloader is apparently able to search for multiple tags at once, but I am not sure if that means what I want. For example, let's say I want all 3d artwork of dragons: not all images tagged 3d, not all images tagged dragon, only images with both tags. The benefit is that I can get images meeting my criteria from multiple artists. Where would I put the tags? I've tried both tags together on the same line, and one below the other. The only category that would make sense is [general], but that for sure doesn't work. I'm not even sure the program has this ability.

This is probably a very simple answer that I'm not seeing. It isn't really an issue, but I see nowhere else to post a general question. Thanks for taking time out of your day to read this.

No way to enter safe mode.

EWeb has the ability to enter a safe mode, using e926 instead of e621, but there is no input to trigger it.

parsing error from reading blacklist

A simple issue that will take either 5 minutes or 5 hours to fix: you can't have score:<0 in your blacklist because the parser can't recognize ":", and there are likely more characters it mishandles.

The program doesn't seem to be able to read all the E621 files

The search results for krystal on e621 show more than 10,000 image files, but the program only grabs 1,280 files.

e621_downloader.log
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:56] Printing system information out into log for debug purposes...
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:57] ARCH: "x86_64"
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:58] DLL_EXTENSION: "dll"
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:59] DLL_PREFIX: ""
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:60] DLL_SUFFIX: ".dll"
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:61] EXE_EXTENSION: "exe"
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:62] EXE_SUFFIX: ".exe"
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:63] FAMILY: "windows"
22:10:22 [TRACE] (1) e621_downloader: [src\main.rs:64] OS: "windows"
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:34] Starting e621 downloader...
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:35] Program Name: e621_downloader
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:36] Program Version: 1.7.1
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:37] Program Authors: McSib [email protected]
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:38] Program Working Directory: D:\e621
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:47] Checking if config file exists...
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:55] Checking if tag file exists...
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:69] Login information loaded...
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:70] Login Username: catuncle
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:71] Login API Key: ************************
22:10:22 [TRACE] (1) e621_downloader::program: [src\program.rs:72] Login Download Favorites: true
22:10:22 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:72] SenderClient initializing with USER_AGENT_VALUE "e621_downloader/1.7.1 (by McSib [email protected] on e621)"
22:10:22 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:64] Prompt for safe mode...
22:10:24 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:77] Safe mode decision: true
22:10:24 [TRACE] (1) e621_downloader::program: [src\program.rs:79] Parsing tag file...
22:10:25 [TRACE] (1) e621_downloader::program: [src\program.rs:84] Parsing user blacklist...
22:10:26 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:103] Grabbing posts...
22:10:26 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 1 of tag fav:catuncle
22:10:27 [INFO] "fav:catuncle" grabbed!
22:10:27 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 1 of tag krystal
22:10:30 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 2 of tag krystal
22:10:33 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 3 of tag krystal
22:10:35 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 4 of tag krystal
22:10:38 [INFO] "krystal" grabbed!
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:244] Total file size for all images grabbed is 2918854680KB
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:170] Printing Collection Info:
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:171] Collection Name: "Single Posts"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:172] Collection Category: ""
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:173] Collection Post Length: "0"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:177] Static file path for this collection: "downloads/Single Posts"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:217] Collection Single Posts is finished downloading...
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:170] Printing Collection Info:
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:171] Collection Name: "fav:catuncle"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:172] Collection Category: ""
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:173] Collection Post Length: "0"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:177] Static file path for this collection: "downloads/fav_catuncle"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:217] Collection fav:catuncle is finished downloading...
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:170] Printing Collection Info:
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:171] Collection Name: "krystal"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:172] Collection Category: "General Searches"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:173] Collection Post Length: "1280"
22:10:38 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:177] Static file path for this collection: "downloads/General Searches\krystal"

On e621, the search krystal score:>50 shows that there are more than 4,500 files, but the program only captured 377 files.

e621_downloader.log
[TRACE] (1) e621_downloader: [src\main.rs:56] Printing system information out into log for debug purposes...
22:18:51 [TRACE] (1) e621_downloader: [src\main.rs:57] ARCH: "x86_64"
22:18:51 [TRACE] (1) e621_downloader: [src\main.rs:58] DLL_EXTENSION: "dll"
22:18:51 [TRACE] (1) e621_downloader: [src\main.rs:59] DLL_PREFIX: ""
22:18:51 [TRACE] (1) e621_downloader: [src\main.rs:60] DLL_SUFFIX: ".dll"
22:18:51 [TRACE] (1) e621_downloader: [src\main.rs:61] EXE_EXTENSION: "exe"
22:18:51 [TRACE] (1) e621_downloader: [src\main.rs:62] EXE_SUFFIX: ".exe"
22:18:51 [TRACE] (1) e621_downloader: [src\main.rs:63] FAMILY: "windows"
22:18:51 [TRACE] (1) e621_downloader: [src\main.rs:64] OS: "windows"
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:34] Starting e621 downloader...
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:35] Program Name: e621_downloader
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:36] Program Version: 1.7.1
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:37] Program Authors: McSib [email protected]
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:38] Program Working Directory: D:\e621
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:47] Checking if config file exists...
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:55] Checking if tag file exists...
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:69] Login information loaded...
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:70] Login Username: catuncle
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:71] Login API Key: ************************
22:18:51 [TRACE] (1) e621_downloader::program: [src\program.rs:72] Login Download Favorites: true
22:18:51 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:72] SenderClient initializing with USER_AGENT_VALUE "e621_downloader/1.7.1 (by McSib [email protected] on e621)"
22:18:51 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:64] Prompt for safe mode...
22:18:54 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:77] Safe mode decision: true
22:18:54 [TRACE] (1) e621_downloader::program: [src\program.rs:79] Parsing tag file...
22:18:56 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:379] No alias was found for score:>50...
22:18:56 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:380] Printing trace message for why None was returned...
22:18:56 [TRACE] (1) e621_downloader::e621::sender: [src\e621\sender\mod.rs:381] error decoding response body: invalid type: map, expected a sequence at line 1 column 0
22:18:56 [TRACE] (1) e621_downloader::program: [src\program.rs:84] Parsing user blacklist...
22:18:57 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:103] Grabbing posts...
22:18:57 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 1 of tag fav:catuncle
22:18:58 [INFO] "fav:catuncle" grabbed!
22:18:58 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 1 of tag krystal score:>50
22:19:01 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 2 of tag krystal score:>50
22:19:02 [DEBUG] (1) e621_downloader::e621::sender: Downloading page 3 of tag krystal score:>50
22:19:02 [INFO] "krystal score:>50" grabbed!
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:244] Total file size for all images grabbed is 892974251KB
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:170] Printing Collection Info:
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:171] Collection Name: "Single Posts"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:172] Collection Category: ""
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:173] Collection Post Length: "0"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:177] Static file path for this collection: "downloads/Single Posts"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:217] Collection Single Posts is finished downloading...
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:170] Printing Collection Info:
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:171] Collection Name: "fav:catuncle"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:172] Collection Category: ""
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:173] Collection Post Length: "0"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:177] Static file path for this collection: "downloads/fav_catuncle"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:217] Collection fav:catuncle is finished downloading...
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:170] Printing Collection Info:
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:171] Collection Name: "krystal score:>50"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:172] Collection Category: "General Searches"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:173] Collection Post Length: "377"
22:19:02 [TRACE] (1) e621_downloader::e621: [src\e621\mod.rs:177] Static file path for this collection: "downloads/General Searches\krystal score__50"

Is it because the program can only capture up to 1800 files?
(Sorry for the question again lol)
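If the cap comes from page-number paging (the logs above stop at page 4, and 4 × 320 = 1280), one possible workaround is cursor-style paging, assuming the e621 API's `page=b<post_id>` syntax, which returns posts older than a given id. I haven't verified the downloader supports this; the sketch below only builds the URLs and sends no request.

```rust
/// Build the next search URL, paging by "posts before this id" instead of
/// by page number. URL shape is illustrative; nothing is fetched here.
fn next_page_url(tags: &str, last_post_id: Option<u64>) -> String {
    let base = format!("https://e621.net/posts.json?limit=320&tags={tags}");
    match last_post_id {
        // Continue from the oldest post id seen so far.
        Some(id) => format!("{base}&page=b{id}"),
        None => base,
    }
}

fn main() {
    assert!(next_page_url("krystal", None).ends_with("tags=krystal"));
    assert!(next_page_url("krystal", Some(12345)).ends_with("page=b12345"));
    println!("cursor paging URLs built");
}
```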

Bring general optimization and improvements to the code.

This release is going to take a lot of time, and this issue will be the first step to make this project quicker. The first thing that needs to be handled is the code itself. Currently, there are unneeded allocations happening all over the place, poor usage of Vecs, and some downright ugly workarounds. I want to spend this time devoting work towards these issues, and ensure that the code will run faster than before. The goal here isn't to have more features, it's to have a sound structure and confident code. This is what I'll be working on for the coming months.
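As an illustration of the kind of cheap win described above (the function and URL here are made up), reserving `Vec` capacity up front avoids the repeated grow-and-copy reallocations the issue calls out:

```rust
/// Collect post URLs with a single up-front allocation instead of letting
/// the Vec grow and reallocate repeatedly. Illustrative names only.
fn collect_urls(posts: &[u64]) -> Vec<String> {
    // One allocation, since the final length is known in advance.
    let mut urls = Vec::with_capacity(posts.len());
    for id in posts {
        urls.push(format!("https://e621.net/posts/{id}"));
    }
    urls
}

fn main() {
    let urls = collect_urls(&[1, 2, 3]);
    assert_eq!(urls.len(), 3);
    println!("collected without reallocating");
}
```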

The client downloads images slowly.

Over multiple tests, downloading around 2000+ images takes around 20+ minutes. These images range from 500KB-5MB. Downloading one of these images can take around 1-2 seconds if it's over 1MB; if the size is below 1MB, it will download 2 images a second. With my download speed being over 200MB a second, there is no reason it should take this long. One performance improvement would be to run the client on a separate thread using async, giving it more CPU time. Checking CPU usage, the program uses 10-11% of the CPU (which has over 8 cores), roughly 100% of the single thread it runs on. So, if the work is broken off into another thread, it may produce a performance increase.

There also needs to be code cleanup, because the download function currently uses too many loops; that cleanup is required alongside splitting the client requests onto another thread. If the download speed becomes too fast, the thread needs to sleep for 500ms to stay within the server's rate limit; passing that limit will force the program to stop making requests, because the server will start returning an error code. The end goal is to download 2 images a second no matter the size of the image. This could also be a server-side issue; if that's the case, maybe doing two downloads at the same time will fix it.
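The threading-plus-throttling idea can be sketched with std-only primitives; the HTTP call is stubbed out, all names are hypothetical, and the demo delay is shortened from the 500 ms the issue proposes.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Spawn a throttled worker that "downloads" each queued URL and returns
/// how many jobs it processed. Only the thread/rate-limit shape is shown.
fn run_worker(jobs: Vec<String>, delay: Duration) -> usize {
    let (tx, rx) = mpsc::channel::<String>();
    let worker = thread::spawn(move || {
        let mut done = 0;
        for _url in rx {
            // A real implementation would issue the request here, then
            // sleep ~500 ms to stay under the server's rate limit.
            thread::sleep(delay);
            done += 1;
        }
        done
    });
    for job in jobs {
        tx.send(job).unwrap();
    }
    drop(tx); // closing the channel lets the worker drain and exit
    worker.join().unwrap()
}

fn main() {
    let jobs = (0..4).map(|i| format!("post/{i}")).collect();
    assert_eq!(run_worker(jobs, Duration::from_millis(5)), 4);
    println!("all jobs processed off the main thread");
}
```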

Program crashes on first run.

When the program runs for the first time, it creates the config file and tag file but does not close itself so the user can prepare for images to be downloaded. This crashes the program immediately, because there are no tags to use; the download directory also doesn't exist and isn't created at runtime, which crashes the program as well.
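A minimal sketch of the missing runtime directory creation (paths and names are illustrative): `std::fs::create_dir_all` is a no-op when the directory already exists, so calling it on every run is safe.

```rust
use std::fs;
use std::path::Path;

/// Ensure the download directory exists before any post is written,
/// instead of crashing at runtime. Illustrative helper, not project code.
fn ensure_download_dir(dir: &Path) -> std::io::Result<()> {
    // Succeeds whether or not the directory already exists.
    fs::create_dir_all(dir)
}

fn main() {
    let dir = std::env::temp_dir().join("e621_downloader_demo/downloads");
    ensure_download_dir(&dir).expect("could not create download directory");
    assert!(dir.is_dir());
    println!("download directory ready: {}", dir.display());
}
```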

Long tag filters create file paths too long to rename in Windows

Hi! Great program, but I'm noticing that if a tag line is fairly long, the resulting folder name will also contain every tag from it. This is great for organizing, but if the name is too long you can't rename it in Windows unless you move it to the root of a drive.

Would it be possible to have an option to remove any "-tagname", or other qualifying tag names from the folder name after the download is complete? I don't think anyone would want to keep the tags they are excluding in the resulting folder name. Thanks
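The requested option could be as simple as filtering negated tokens out of the search string before using it as a folder name; a sketch, with invented names:

```rust
/// Drop negated filter tags (`-tagname`) from the folder name so long
/// exclusion lists don't blow past Windows path limits. Illustrative only.
fn folder_name(search: &str) -> String {
    search
        .split_whitespace()
        .filter(|tag| !tag.starts_with('-')) // keep only positive tags
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    assert_eq!(folder_name("dragon 3d -gore -scat -watersports"), "dragon 3d");
    println!("folder name trimmed");
}
```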

Should there be a way to import a group's tags into another group?

I was thinking it would be neat to implement a feature to import tags from one group into another, so that groups aren't just labels for the specific modes the application can run in. While the program ignores user-created groups, the user could import a user-made group into one of the default groups.

Example

# This is the tag that you will use so the program can know what tags to search.
# Currently, it will search all tags from `config.json` last_run date to today.
# If you wish to comment in this file, simply put `#` at the beginning of the line and the parser will ignore the current line.

# Insert tags here (remove the example artist with an artist you wish to keep track of):

[artists]
@import(artist-in-user-made-group)

[artist-in-user-made-group]
fluff-kevlar
braeburned
reccand
doxy
l-a-v # Testing comment on the same line!

[normal-tags]
score:>=70
feral

[pool] # comment test on group
12345
19234
89212

Pool filename numbering issue

After updating from 1.6.0 to 1.6.1 pool filename numbering seems to be messed up a bit.
Before it was file0001, file0002, file0003, ... file0009, file0010, file0011 and so on.
Now it is file1000, file2000, file3000, ... file9000, file1000, file1100 and so on.
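The flipped numbering can be reproduced with Rust's format specifiers; whether this is the actual cause of the regression is a guess, but it shows how both outputs arise: `{:04}` zero-pads on the left, while a left-aligned zero fill `{:0<4}` pads on the right.

```rust
fn main() {
    // Expected 1.6.0-style names: zeros padded on the left.
    let correct: Vec<String> = (1..=3).map(|n| format!("file{:04}", n)).collect();
    // 1.6.1-style names from the report: zeros padded on the right.
    let broken: Vec<String> = (1..=3).map(|n| format!("file{:0<4}", n)).collect();
    assert_eq!(correct, ["file0001", "file0002", "file0003"]);
    assert_eq!(broken, ["file1000", "file2000", "file3000"]);
    println!("correct: {:?}\nbroken:  {:?}", correct, broken);
}
```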

Server API may now be working with their blacklist system

After a few messages on my main thread for this program, someone mentioned that the e621 blacklist system may now be working with the API as well. But instead of removing the post, like one would assume, it just nullifies it. This means that when I grab a collection of posts through a bulk search, a few of those posts will now be null: any links to the file, preview, or sample are all null. So what I'm going to do is disable the blacklist system and verify whether this is actually the case. I'm also going to re-enable cookies and check whether my user account alters which posts become null. Doing this will let me know if the blacklist on their site is actually working with the API now, or if there is a bug on the server side that I need to figure out or know about.
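A minimal way to tolerate nullified posts, assuming the file/preview/sample links simply come back as `null`: model them as `Option` and skip posts without a file URL. The struct here is illustrative, not the downloader's real entry type.

```rust
/// Illustrative post entry: a server-side-blacklisted post has its links
/// nulled out, which maps naturally to `Option`.
struct Post {
    id: u64,
    file_url: Option<String>,
}

/// Keep only posts that still have a downloadable file link.
fn downloadable(posts: Vec<Post>) -> Vec<Post> {
    posts.into_iter().filter(|p| p.file_url.is_some()).collect()
}

fn main() {
    let posts = vec![
        Post { id: 1, file_url: Some("https://example.invalid/a.png".into()) },
        Post { id: 2, file_url: None }, // nullified by the server blacklist
    ];
    let keep = downloadable(posts);
    assert_eq!(keep.len(), 1);
    assert_eq!(keep[0].id, 1);
    println!("skipped the nullified post");
}
```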

The program is sometimes crashing when downloading!

This is a major problem. The program sometimes crashes while downloading images. I have a feeling this is because of too much traffic being pushed to the e621 server, causing far too much stress on it. I will look into this now.

Program Now Crashes Due to New E621 Update

There was a new update to the e621 platform recently, and with it came reformatted API responses. These responses are new JSON returns that differ from the old format. Because of this change, the entire program no longer works. The goal now is to release a new update for our software that works with these new responses. This shouldn't take too long.

Linux Compatibility

The program seems to work pretty well on Linux; I was able to just clone the repo and cargo run just fine. Although I do seem to be having an issue with pools, where it just kind of downloads at random once it grabs the things I asked for. For this test case I tried the test pool of "The Ghost In My Attic 2" and then the general tag smolder. It says it grabs the pool, but I don't ever see it appear in the downloads folder. It then grabbed all of smolder just fine; however, it then just kind of starts downloading the entire site for some reason. I compared with the site, and it seems to just be grabbing from the main posts page in order.

[kitsuna@kitsuna-tablet e621_downloader]$ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.57s
     Running `target/debug/e621_downloader`
Should enter safe mode (Y/N)?
n
Parsed tag file.
"The_Ghost_In_My_Attic_2" grabbed!
"smolder" grabbed!
"" grabbed!

Duplicate found: skipping... 178 / 178 [===================================================================================================] 100.00 % 1068.93/s
Downloading: 8 / 1280 [>--------------------------------------------------------------------------------------------------------------------] 0.62 % 0.56/s 38m

Tag file is empty when first created.

When you run the program for the first time, it creates an empty tag file for tags to be inserted. This is a problem: the tag file supports comments, and an empty file may confuse anyone using the application; they won't know what tag.txt does unless it contains comments explaining its purpose.

I feel that this file needs content when created, to allow for a much more user-friendly approach for those who may not be used to programs like this.
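A sketch of seeding the tag file with starter comments on first run; the template text mirrors the comment style of the example tag file quoted elsewhere in this thread, and the file name and helper are illustrative.

```rust
use std::fs;
use std::path::Path;

/// Starter contents explaining the file's purpose, written only when the
/// file doesn't exist yet.
const DEFAULT_TAGS: &str = "\
# This is the tag file the program reads to know what tags to search.
# Lines starting with `#` are comments and are ignored by the parser.

# Insert tags here:
";

fn create_tag_file(path: &Path) -> std::io::Result<()> {
    if !path.exists() {
        fs::write(path, DEFAULT_TAGS)?;
    }
    Ok(())
}

fn main() {
    let path = std::env::temp_dir().join("e621_demo_tags.txt");
    create_tag_file(&path).expect("could not write tag file");
    assert!(fs::read_to_string(&path).unwrap().starts_with('#'));
    println!("tag file created with starter comments");
}
```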

thread 'main' panicked at 'Json was unable to deserialize to entry!

Hi there, glad to see this update. I seem to be having issues with the latest version. During the build I get warnings.

warning: unused import: `failure::Error`
 --> src/e621/mod.rs:9:5
  |
9 | use failure::Error;
  |     ^^^^^^^^^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: unused import: `PostCollection`
  --> src/e621/mod.rs:15:37
   |
15 | use crate::e621::grabber::{Grabber, PostCollection};
   |                                     ^^^^^^^^^^^^^^

warning: unused import: `crate::e621::io::Login`
  --> src/e621/mod.rs:20:5
   |
20 | use crate::e621::io::Login;
   |     ^^^^^^^^^^^^^^^^^^^^^^

warning: unused import: `failure::Error`
 --> src/e621/blacklist.rs:3:5
  |
3 | use failure::Error;
  |     ^^^^^^^^^^^^^^

warning: unused import: `reqwest::get`
 --> src/e621/blacklist.rs:7:5
  |
7 | use reqwest::get;
  |     ^^^^^^^^^^^^

warning: unused import: `serde_json::Value`
 --> src/e621/grabber.rs:4:5
  |
4 | use serde_json::Value;
  |     ^^^^^^^^^^^^^^^^^

warning: unused import: `BulkPostEntry`
  --> src/e621/grabber.rs:10:5
   |
10 |     BulkPostEntry, PoolEntry, PostEntry, RequestSender, SetEntry, UserEntry,
   |     ^^^^^^^^^^^^^

warning: unused import: `WWW_AUTHENTICATE`
  --> src/e621/sender.rs:15:44
   |
15 | use self::reqwest::header::{AUTHORIZATION, WWW_AUTHENTICATE};
   |                                            ^^^^^^^^^^^^^^^^

warning: unused import: `to_string`
  --> src/e621/sender.rs:16:36
   |
16 | use self::serde_json::{from_value, to_string, Value};
   |                                    ^^^^^^^^^

warning: unused import: `std::fs::write`
  --> src/e621/sender.rs:19:5
   |
19 | use std::fs::write;
   |     ^^^^^^^^^^^^^^

warning: use of deprecated item 'e621::grabber::Grabber::grab_blacklist': This has been deprecated as the new blacklist system is being developed.
   --> src/e621/grabber.rs:117:17
    |
117 |         grabber.grab_blacklist();
    |                 ^^^^^^^^^^^^^^
    |
    = note: `#[warn(deprecated)]` on by default

warning: use of deprecated item 'e621::sender::RequestSender::get_pool_entry': This uses the old API to grab the pool and is no longer used for the new API
   --> src/e621/grabber.rs:248:68
    |
248 |             let mut searched_pool: PoolEntry = self.request_sender.get_pool_entry(id, page);
    |                                                                    ^^^^^^^^^^^^^^

warning: unused variable: `blacklist_entries`
   --> src/e621/grabber.rs:149:17
    |
149 |             let blacklist_entries: Vec<String> =
    |                 ^^^^^^^^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_blacklist_entries`
    |
    = note: `#[warn(unused_variables)]` on by default

warning: variable does not need to be mutable
   --> src/e621/grabber.rs:248:17
    |
248 |             let mut searched_pool: PoolEntry = self.request_sender.get_pool_entry(id, page);
    |                 ----^^^^^^^^^^^^^
    |                 |
    |                 help: remove this `mut`
    |
    = note: `#[warn(unused_mut)]` on by default

warning: method is never used: `is_tag_special`
   --> src/e621/blacklist.rs:144:5
    |
144 |     fn is_tag_special(&self, tag: &String) -> bool {
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: `#[warn(dead_code)]` on by default

warning: field is never read: `blacklist_entries`
   --> src/e621/blacklist.rs:450:5
    |
450 |     blacklist_entries: Vec<String>,
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

warning: method is never used: `from_tags`
   --> src/e621/grabber.rs:115:5
    |
115 |     pub fn from_tags(groups: &[Group], request_sender: RequestSender) -> Grabber {
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

warning: method is never used: `grab_blacklist`
   --> src/e621/grabber.rs:135:5
    |
135 |     pub fn grab_blacklist(&mut self) {
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

warning: method is never used: `get_posts_from_pool`
   --> src/e621/grabber.rs:243:5
    |
243 |     pub fn get_posts_from_pool(&self, id: &str) -> (String, Vec<PostEntry>) {
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

warning: method is never used: `get_entry_from_id`
   --> src/e621/sender.rs:510:5
    |
510 | /     pub fn get_entry_from_id<T>(&self, id: &str, url_type_key: &str) -> T
511 | |     where
512 | |         T: DeserializeOwned,
513 | |     {
...   |
521 | |         .expect("Json was unable to deserialize to entry!")
522 | |     }
    | |_____^

warning: method is never used: `get_pool_entry`
   --> src/e621/sender.rs:564:5
    |
564 |     pub fn get_pool_entry(&self, id: &str, page: u16) -> PoolEntry {
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

warning: method is never used: `get_tag_by_id`
   --> src/e621/sender.rs:630:5
    |
630 |     pub fn get_tag_by_id(&self, id: &str) -> TagEntry {
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

warning: method is never used: `get_blacklist`
   --> src/e621/sender.rs:655:5
    |
655 |     pub fn get_blacklist(&self, login: &Login) -> UserEntry {
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

warning: 23 warnings emitted

But it does seem to finish. However, when I then run the program:

thread 'main' panicked at 'Json was unable to deserialize to entry!: reqwest::Error { kind: Decode, source: Error("expected ident", line: 1, column: 2) }', src/e621/sender.rs:543:28
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Running the backtrace shows

thread 'main' panicked at 'Json was unable to deserialize to entry!: reqwest::Error { kind: Decode, source: Error("expected ident", line: 1, column: 2) }', src/e621/sender.rs:543:28
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /build/rust/src/rustc-1.44.0-src/vendor/backtrace/src/backtrace/libunwind.rs:86
   1: backtrace::backtrace::trace_unsynchronized
             at /build/rust/src/rustc-1.44.0-src/vendor/backtrace/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:78
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:59
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1069
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1504
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:62
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:49
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:198
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:218
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:511
  11: rust_begin_unwind
             at src/libstd/panicking.rs:419
  12: core::panicking::panic_fmt
             at src/libcore/panicking.rs:111
  13: core::option::expect_none_failed
             at src/libcore/option.rs:1268
  14: core::result::Result<T,E>::expect
             at /build/rust/src/rustc-1.44.0-src/src/libcore/result.rs:963
  15: e621_downloader::e621::sender::RequestSender::get_entry_from_appended_id
             at src/e621/sender.rs:543
  16: e621_downloader::e621::WebConnector::process_blacklist
             at src/e621/mod.rs:72
  17: e621_downloader::main
             at src/main.rs:40
  18: std::rt::lang_start::{{closure}}
             at /build/rust/src/rustc-1.44.0-src/src/libstd/rt.rs:67
  19: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:52
  20: std::panicking::try::do_call
             at src/libstd/panicking.rs:331
  21: std::panicking::try
             at src/libstd/panicking.rs:274
  22: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  23: std::rt::lang_start_internal
             at src/libstd/rt.rs:51
  24: std::rt::lang_start
             at /build/rust/src/rustc-1.44.0-src/src/libstd/rt.rs:67
  25: main
  26: __libc_start_main
  27: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Tag with a colon in it causes a crash

Trying to download the tag "mao_mao:_heroes_of_pure_heart" results in the following error:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src\e621\io\tag.rs:162:43
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I assume it's because it's got a colon in the name?
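An illustrative guess at the failure: if the parser treats every `:` as a metatag separator, `mao_mao:_heroes_of_pure_heart` looks like a metatag with no valid value, and an `unwrap()` on the parse dies. A safer sketch (metatag list and names are invented, only a subset for illustration) treats the prefix as a metatag only when it's a known one:

```rust
/// Hypothetical subset of known metatag prefixes.
const METATAGS: &[&str] = &["score", "rating", "fav", "pool", "set"];

/// A tag is a metatag only if the text before the first colon is a known
/// metatag name; otherwise the colon is part of the tag itself.
fn is_metatag(tag: &str) -> bool {
    match tag.split_once(':') {
        Some((name, _value)) => METATAGS.contains(&name),
        None => false,
    }
}

fn main() {
    assert!(is_metatag("score:>=70"));
    assert!(!is_metatag("mao_mao:_heroes_of_pure_heart")); // plain tag, keep whole
    println!("colon tags classified safely");
}
```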

Update the Tag parser.

The tag parser needs to be updated with new syntax. I want to implement groups for tag.txt: depending on the group they're in, tags will be treated differently. I would also like comments on the same line.

Example

# This is a test comment.
normal-tag:
score:>=70

artist:
fluff-kevlar

pool:
17010 # Id for `How to Draw Rabbits by ichthy0stega` (comment on the same line!)
