
tippecanoe's People

Contributors

andrewharvey, bdon, brsbl, carimatics, cduruk, dbitech, e-n-f, emonty, flippmoke, hfs, jamesscottbrown, jingsam, joykuotw, jtmiclat, mburke-weatheranalytics, mmc1718, philippelatulippe, plantain, pratikyadav, samhashemi, springmeyer, stevage, tfiner, tmcw, tordans, vincentsarago, wilsaj, wsw0108, zabop, zstadler


tippecanoe's Issues

Proposal for MBTiles SQLite efficiency improvements

I recently added some large (by area, not by vertex count) polygons to a large dataset and was surprised to see a significant increase in the output MBTiles size. I decided to dig deeper into the space usage, and I think I can suggest some substantial SQLite-level improvements, both in reducing storage overhead and in optimizing the tile-fetching hot path.

I've already implemented these optimizations as an experiment, and I see a 14% reduction in the output file size on my dataset.
I'd like to discuss these schema changes at a high level first, to understand whether a merge request based on them would be acceptable.

Proposed changes

Proposed on-disk schema in full
CREATE TABLE metadata (name text, value text);
CREATE UNIQUE INDEX name on metadata (name);
CREATE TABLE map (zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER,
                  tile_id INTEGER NOT NULL,
                  PRIMARY KEY (zoom_level, tile_column, tile_row)) WITHOUT ROWID;
CREATE TABLE images (tile_id INTEGER PRIMARY KEY, tile_hash BLOB NOT NULL UNIQUE, tile_data BLOB);
CREATE VIEW tiles AS SELECT map.zoom_level AS zoom_level, map.tile_column AS tile_column, 
                            map.tile_row AS tile_row, images.tile_data AS tile_data 
                            FROM map JOIN images ON images.tile_id = map.tile_id
    /* tiles(zoom_level,tile_column,tile_row,tile_data) */;
  1. Replace the map table + map_index index with a single "clustered index": a WITHOUT ROWID map table.
    This essentially removes the existing map table on disk and appends the tile_id column to the existing map_index index. This clustered index is efficient and is enough to satisfy all point lookups with a single logarithmic search.
CREATE TABLE map (zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER,
                  tile_id TEXT,
                  PRIMARY KEY (zoom_level, tile_column, tile_row)) WITHOUT ROWID;
  2. Revert the on-disk schema changes from #40: stop copying identical blobs for several zoom levels. Provide a different efficient implementation of mbtiles_erase_zoom based on a temporary index (described below).
    This will drop the zoom_level column from images.
  3. Make the pre-existing implicit rowid surrogate key in images explicit, and use it instead of the TEXT/BLOB hash for joining.
    Change the map.tile_id field type to INTEGER NOT NULL accordingly. Insertions into images will use NULL to preserve rowid auto-increment semantics. A unique index on tile_hash will still exist as before.
    CREATE TABLE images (tile_id INTEGER PRIMARY KEY, tile_hash TEXT NOT NULL UNIQUE, tile_data BLOB);
    CREATE VIEW tiles AS SELECT map.zoom_level AS zoom_level, map.tile_column AS tile_column, 
                                map.tile_row AS tile_row, images.tile_data AS tile_data 
                                FROM map JOIN images ON images.tile_id = map.tile_id;
    At this point map will be a single highly efficient B-tree containing only 4-tuples of (mostly small) integers, which SQLite's varint encoding compresses even further. And since tile_id is a rowid alias in images, no additional index will be needed or queried when joining those tables on tile_id.
  4. Change the format/type of tile_hash from a decimal string/TEXT to a binary 8-byte little-endian/BLOB. This leads to a ~58% length reduction on average. Note that due to the UNIQUE index, all hashes are stored twice on disk.
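The proposed final schema can be exercised end to end against an in-memory database. Here is a minimal sketch using Python's sqlite3 module, with toy tile blobs and a truncated SHA-256 standing in for whatever 64-bit hash function tippecanoe actually uses:

```python
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE map (zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER,
                  tile_id INTEGER NOT NULL,
                  PRIMARY KEY (zoom_level, tile_column, tile_row)) WITHOUT ROWID;
CREATE TABLE images (tile_id INTEGER PRIMARY KEY, tile_hash BLOB NOT NULL UNIQUE, tile_data BLOB);
CREATE VIEW tiles AS SELECT map.zoom_level AS zoom_level, map.tile_column AS tile_column,
                            map.tile_row AS tile_row, images.tile_data AS tile_data
                            FROM map JOIN images ON images.tile_id = map.tile_id;
""")

def put_tile(z, x, y, data):
    # 8-byte binary hash instead of a decimal TEXT string
    h = hashlib.sha256(data).digest()[:8]
    # INSERT OR IGNORE plus the UNIQUE hash index deduplicates identical blobs;
    # tile_id NULL lets SQLite assign the rowid surrogate key
    db.execute("INSERT OR IGNORE INTO images VALUES (NULL, ?, ?)", (h, data))
    (tile_id,) = db.execute("SELECT tile_id FROM images WHERE tile_hash = ?", (h,)).fetchone()
    db.execute("INSERT INTO map VALUES (?, ?, ?, ?)", (z, x, y, tile_id))

put_tile(0, 0, 0, b"ocean")
put_tile(1, 0, 0, b"ocean")  # identical blob: stored once, referenced twice
put_tile(1, 1, 0, b"land")

print(db.execute("SELECT COUNT(*) FROM images").fetchone()[0])  # 2 distinct blobs for 3 tiles
```

Because tile_id is a rowid alias, the view's join resolves each tile with a single B-tree probe into images after the map lookup.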

Implementing mbtiles_erase_zoom

As mbtiles_erase_zoom is needed only when rolling back the recently completed work of writing those tiles to the database, a SQLite temporary index (scoped to the database connection) should be enough to track which tile images have no references from other zoom levels and should therefore be removed. Unfortunately, temporary indexes are not directly supported in SQLite, but they can be emulated with a WITHOUT ROWID temporary table and a trigger on the target table that updates it.
I wondered for a while how to do this efficiently, and I think that creating the "index" lazily, on the first mbtiles_erase_zoom call for a given connection, is the best way.

Proposed (transient, per-connection) schema additions:

CREATE TEMP TABLE IF NOT EXISTS temp_images_zoom (zoom_level INTEGER, tile_id INTEGER, 
                                                  PRIMARY KEY (tile_id, zoom_level)) WITHOUT ROWID;
CREATE TEMP TRIGGER IF NOT EXISTS temp_insert_images_zoom AFTER INSERT ON main.map 
BEGIN 
    INSERT OR IGNORE INTO temp_images_zoom VALUES (NEW.zoom_level, NEW.tile_id); 
END;

This compact temporary table is, again, effectively a "clustered index": a single B-tree on a pair of integers. It accumulates information about newly inserted tiles from the current database connection. SQLite automatically handles spilling it from memory to a temporary file if it grows large enough.

Full proposed implementation of mbtiles_erase_zoom:

CREATE TEMP TABLE IF NOT EXISTS temp_images_zoom (zoom_level INTEGER, tile_id INTEGER, 
                                                  PRIMARY KEY (tile_id, zoom_level)) WITHOUT ROWID;
INSERT OR IGNORE INTO temp_images_zoom SELECT zoom_level, tile_id FROM map 
    WHERE NOT EXISTS (SELECT 1 FROM temp_images_zoom);
CREATE TEMP TRIGGER IF NOT EXISTS temp_insert_images_zoom AFTER INSERT ON main.map 
BEGIN 
    INSERT OR IGNORE INTO temp_images_zoom VALUES (NEW.zoom_level, NEW.tile_id); 
END;
DELETE FROM images WHERE tile_id IN (
    SELECT map.tile_id FROM map LEFT JOIN temp_images_zoom AS t 
    ON map.tile_id = t.tile_id AND map.zoom_level != t.zoom_level
    WHERE map.zoom_level = ? AND t.tile_id IS NULL);
DELETE FROM map WHERE zoom_level = ?;
DELETE FROM temp_images_zoom WHERE zoom_level = ?;

The first three statements create and fill the temp_images_zoom table and its trigger (the fill happens only on the first call to mbtiles_erase_zoom).
Thanks to the trigger, the table is always guaranteed to contain all tiles in map (since the database can be modified only through this connection).

The first DELETE query does essentially this:

  1. For each map.tile_id at the requested zoom level (map index scan):
    1.1. Search for a record with a different zoom level for this tile_id (temp_images_zoom index scan).
    This scan always reads only 1 or 2 rows from temp_images_zoom per tile_id.
    1.2. If no such record was found (t.tile_id IS NULL), add this tile id to the to-be-deleted set.
  2. Delete all the rows in the to-be-deleted set from images by rowid (a.k.a. images.tile_id).
    Also delete them from the tile_hash index (one index lookup by tile_hash per row).

This implementation effectively reference-counts images without duplicating rows, and it is very fast.
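The whole erase flow can be exercised end to end. Here is a small sketch using Python's sqlite3 module with toy single-byte tiles, where image 1 is shared by zooms 0 and 1, and image 2 exists only at zoom 1:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE map (zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER,
                  tile_id INTEGER NOT NULL,
                  PRIMARY KEY (zoom_level, tile_column, tile_row)) WITHOUT ROWID;
CREATE TABLE images (tile_id INTEGER PRIMARY KEY, tile_hash BLOB NOT NULL UNIQUE, tile_data BLOB);
""")
db.execute("INSERT INTO images VALUES (1, x'01', x'AA'), (2, x'02', x'BB')")
db.execute("INSERT INTO map VALUES (0,0,0,1), (1,0,0,1), (1,1,0,2)")

def erase_zoom(db, z):
    # Lazily build the per-connection "temporary index" and its maintenance trigger
    db.executescript("""
    CREATE TEMP TABLE IF NOT EXISTS temp_images_zoom (zoom_level INTEGER, tile_id INTEGER,
                                                      PRIMARY KEY (tile_id, zoom_level)) WITHOUT ROWID;
    INSERT OR IGNORE INTO temp_images_zoom SELECT zoom_level, tile_id FROM map
        WHERE NOT EXISTS (SELECT 1 FROM temp_images_zoom);
    CREATE TEMP TRIGGER IF NOT EXISTS temp_insert_images_zoom AFTER INSERT ON main.map
    BEGIN
        INSERT OR IGNORE INTO temp_images_zoom VALUES (NEW.zoom_level, NEW.tile_id);
    END;
    """)
    # Delete only the images that no other zoom level references
    db.execute("""DELETE FROM images WHERE tile_id IN (
        SELECT map.tile_id FROM map LEFT JOIN temp_images_zoom AS t
        ON map.tile_id = t.tile_id AND map.zoom_level != t.zoom_level
        WHERE map.zoom_level = ? AND t.tile_id IS NULL)""", (z,))
    db.execute("DELETE FROM map WHERE zoom_level = ?", (z,))
    db.execute("DELETE FROM temp_images_zoom WHERE zoom_level = ?", (z,))

erase_zoom(db, 1)
print([r[0] for r in db.execute("SELECT tile_id FROM images")])  # [1]: the shared image survives
```

Erasing zoom 1 removes image 2 (referenced only there) but keeps image 1, which zoom 0 still references.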

Alternatives considered

  1. Make the images table WITHOUT ROWID too, and preserve the current joining by tile hashes.
    Tables with large blobs make very bad clustered indexes due to low fanout, and the space savings are small.
  2. Skip the filling INSERT OR IGNORE INTO temp_images_zoom and create the temp table/trigger in mbtiles_open.
    This would only be correct if no tiles had been written before mbtiles_open. I don't think that holds in all cases now, so it is not possible. It would be a bit faster, though.

So, what do you think? If these changes are acceptable, I can prepare a merge request.

Issue while using overzoom.

We tried tile-join with version 2.3.1.

This works:

tile-join -z16 -Z0 -o /var/merged/'.$filename.'/'.$filename.'.mbtiles -pk -pg

This doesn't work:

tile-join -z16 -Z0 -o /var/merged/'.$filename.'/'.$filename.'.mbtiles --overzoom -pk -pg

We have two line layers, one with zoom level 16 and the other with zoom level 19. We are trying to use tile-join so that the zoom 16 layer gets overzoomed to zoom 19 in the merged tiles.

Suggestion: `--drop-by-attribute-as-needed=ATTRIB_NAME`

Suggestion: add a new flag that drops objects from an MVT tile (when needed to keep the tile within the size limit) based on an attribute.
E.g.: if the tile is too big, sort the objects by the attribute and keep the top N objects that fit within the size limit (where including the N+1th object, in that order, would cause the tile to exceed the limit).

The real-world example is that I have millions of little line segments representing lots of waterways (rivers, streams, etc.). Each segment has a float for the size of the total river system it's part of. To make MVT PMTiles, I need to drop objects, and the logical way is to keep the segments for large river basins.

The code is osm-river-basins on GitHub, and my tippecanoe command is in functions.sh there. I tried my best, and people are still noticing that it produces gaps in the lines. I believe a --drop-by-attribute-as-needed=ATTRIB flag would solve this issue.

Obviously, some reverse-sorting option would probably be useful for others too.
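The suggested behavior could look something like the following sketch. This is plain Python with hypothetical feature dicts and a stand-in encoded_size cost function, not tippecanoe internals:

```python
def drop_by_attribute(features, attrib, max_tile_bytes, encoded_size):
    """Keep the top-N features by `attrib` such that the tile stays under
    a byte budget; `encoded_size` stands in for the real MVT encoding cost."""
    kept, total = [], 0
    # Largest attribute values first, e.g. the biggest river systems
    for f in sorted(features, key=lambda f: f["properties"][attrib], reverse=True):
        cost = encoded_size(f)
        if total + cost > max_tile_bytes:
            break  # the N+1th feature would push the tile over the limit
        kept.append(f)
        total += cost
    return kept

features = [
    {"properties": {"basin_size": 900.0}},
    {"properties": {"basin_size": 5.0}},
    {"properties": {"basin_size": 120.0}},
]
kept = drop_by_attribute(features, "basin_size", max_tile_bytes=200,
                         encoded_size=lambda f: 100)  # pretend each feature costs 100 bytes
print([f["properties"]["basin_size"] for f in kept])  # [900.0, 120.0]
```

With a 200-byte budget and 100-byte features, the two largest basins survive and the smallest segment is dropped, which is exactly the "keep large river systems" behavior described above.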

About tippecanoe-decode

When I use the tippecanoe-decode command to decode an .mbtiles file, I can't find the output file.

Is there a hard limit on FlatGeoBuf size?

I seem to be finding that there is a hard limit on the file size of FlatGeobufs.

Anything up to around 20 GB is OK, but with bigger files I get a segmentation fault. I don't see the same problem when using GeoJSON.

build failures under clang 15.0.7

Currently seeing warnings, and a build failure on clang 15.0.7:

c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o geojson.o geojson.cpp
cc -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -c -o jsonpull/jsonpull.o jsonpull/jsonpull.c
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o tile.o tile.cpp
tile.cpp:1016:9: warning: variable 'count' set but not used [-Wunused-but-set-variable]
size_t count = 0;
^
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o pool.o pool.cpp
tile.cpp:2777:34: warning: implicit conversion from 'unsigned long' to 'double' changes value from 18446744073709551615 to 18446744073709551616 [-Wimplicit-const-int-float-conversion]
if (nmg <= mingap || nmg > ULONG_MAX) {
~ ^~~~~~~~~
/usr/include/sys/limits.h:63:19: note: expanded from macro 'ULONG_MAX'
#define ULONG_MAX __ULONG_MAX /* max for an unsigned long */
^~~~~~~~~~~
/usr/include/x86/_limits.h:63:21: note: expanded from macro '__ULONG_MAX'
#define __ULONG_MAX 0xffffffffffffffff /* max for an unsigned long */
^~~~~~~~~~~~~~~~~~
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o mbtiles.o mbtiles.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o geometry.o geometry.cpp
2 warnings generated.
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o projection.o projection.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o memfile.o memfile.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o mvt.o mvt.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o serial.o serial.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o main.o main.cpp
serial.cpp:550:16: warning: implicit conversion from 'long long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-const-int-float-conversion]
if (extent <= LLONG_MAX) {
~~ ^~~~~~~~~
/usr/include/sys/limits.h:69:19: note: expanded from macro 'LLONG_MAX'
#define LLONG_MAX __LLONG_MAX /* max for a long long */
^~~~~~~~~~~
/usr/include/x86/_limits.h:74:21: note: expanded from macro '__LLONG_MAX'
#define __LLONG_MAX 0x7fffffffffffffffLL /* max value for a long long */
^~~~~~~~~~~~~~~~~~~~
1 warning generated.
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o text.o text.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o dirtiles.o dirtiles.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o pmtiles_file.o pmtiles_file.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o plugin.o plugin.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o read_json.o read_json.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o write_json.o write_json.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o geobuf.o geobuf.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o flatgeobuf.o flatgeobuf.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o evaluator.o evaluator.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o geocsv.o geocsv.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o csv.o csv.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o geojson-loop.o geojson-loop.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o json_logger.o json_logger.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o visvalingam.o visvalingam.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o compression.o compression.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o enumerate.o enumerate.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o decode.o decode.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o tile-join.o tile-join.cpp
c++ -MMD -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -std=c++17 -c -o unit.o unit.cpp
In file included from unit.cpp:2:
In file included from ./catch/catch.hpp:74:
In file included from /usr/include/c++/v1/sstream:186:
In file included from /usr/include/c++/v1/istream:165:
In file included from /usr/include/c++/v1/ostream:168:
In file included from /usr/include/c++/v1/bitset:128:
In file included from /usr/include/c++/v1/string:551:
In file included from /usr/include/c++/v1/string_view:222:
In file included from /usr/include/c++/v1/algorithm:1851:
In file included from /usr/include/c++/v1/__algorithm/ranges_sample.h:13:
In file included from /usr/include/c++/v1/__algorithm/sample.h:18:
/usr/include/c++/v1/__random/uniform_int_distribution.h:235:5: error: static assertion failed due to requirement '__libcpp_random_is_valid_urng<Catch::RandomNumberGenerator, void>::value':
static_assert(__libcpp_random_is_valid_urng<_URNG>::value, "");
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/v1/__algorithm/shuffle.h:154:35: note: in instantiation of function template specialization 'std::uniform_int_distribution::operator()<Catch::RandomNumberGenerator>' requested here

Versions after 2.0.0 fail to compile in Alpine Linux

Something about 35b576d (v2.1.0) broke compilation on Alpine Linux. Its corresponding pull request is https://github.com/protomaps/tippecanoe/pull/10

This can be reproduced via these steps:

  • docker run -it --rm alpine:latest sh
  • apk add git build-base sqlite-dev zlib-dev
  • git clone https://github.com/felt/tippecanoe
  • cd tippecanoe
  • compiling 2.0.0 works
    • git checkout 7f69b803818e18b14d3e84245d653a66b06e01a3
    • make
    • git reset --hard
  • compiling 2.1.0 fails
    • git checkout bb2aa65bcc2e2a4779e28755e26b62fa3bcbbdb5
    • make

The error message it produces is:

In file included from ./flatbuffers/verifier.h:21,
                 from ./flatbuffers/buffer_ref.h:21,
                 from ./flatbuffers/flatbuffers.h:25,
                 from flatgeobuf/feature_generated.h:7,
                 from flatgeobuf.cpp:5:
./flatbuffers/util.h: In function 'void flatbuffers::strtoval_impl(int64_t*, const char*, char**, int)':
./flatbuffers/util.h:226:38: error: 'strtoll_l' was not declared in this scope; did you mean 'strtold_l'?
  226 |     #define __strtoll_impl(s, pe, b) strtoll_l(s, pe, b, ClassicLocale::Get())
      |                                      ^~~~~~~~~
./flatbuffers/util.h:244:12: note: in expansion of macro '__strtoll_impl'
  244 |     *val = __strtoll_impl(str, endptr, base);
      |            ^~~~~~~~~~~~~~
./flatbuffers/util.h: In function 'void flatbuffers::strtoval_impl(uint64_t*, const char*, char**, int)':
./flatbuffers/util.h:225:39: error: 'strtoull_l' was not declared in this scope; did you mean 'strtoull'?
  225 |     #define __strtoull_impl(s, pe, b) strtoull_l(s, pe, b, ClassicLocale::Get())
      |                                       ^~~~~~~~~~
./flatbuffers/util.h:249:10: note: in expansion of macro '__strtoull_impl'
  249 |   *val = __strtoull_impl(str, endptr, base);
      |          ^~~~~~~~~~~~~~~
make: *** [Makefile:74: flatgeobuf.o] Error 1

(I noticed mentions of sh vs bash in #18 and tested this via bash with the same result)

I believe this is related to google/flatbuffers#7265 which they resolved by moving #include "flatbuffers/util.h" from include/flatbuffers/verifier.h to src/binary_annotator.h (which doesn't exist in this repository)
https://github.com/google/flatbuffers/pull/7266/files

Commenting out #include "flatbuffers/util.h" from flatbuffers/verifier.h resulted in a successful compilation.

I'm quite out of my depth at this point, so I don't want to waste anyone's time by submitting a pull request which removes that line if that's not the appropriate path forward. Another approach might be to replace the files with updated versions from https://github.com/google/flatbuffers/tree/master/include/flatbuffers 🤷

I'm happy to continue helping resolve this issue, but I think it's best if I get direction on which of the paths is most appropriate ... if either of them are 😅

Can't suppress polygon simplification

Hello!

I have hexagons that look like this:

[image]

which I'm trying to turn into vector tiles. This is the command I'm using:

cat ./summary_hexagons.geojson | tippecanoe \
  --minimum-zoom=6 \
  --maximum-zoom=16 \
  --no-line-simplification \
  --no-tile-compression \
  --no-tiny-polygon-reduction \
  --no-feature-limit \
  --no-tile-size-limit \
  --force \
  --output-to-directory ./summary_hexagons

For some reason they come out looking a bit misshapen:

[image]

Can anyone think of what I might be doing wrong? I was hoping --no-line-simplification would do the trick, but it seems like some simplification is still happening. Thank you in advance for any thoughts!

5.8 megabyte "hole-y polygon" dataset defaults to inefficiently high maxzoom

Natural Earth: Bathymetry 3,000 meters
ne_10m_bathymetry_H_3000.zip

Screen Shot 2022-09-16 at 2 21 43 PM

This dataset is a 5.8-megabyte shapefile, but tiling it can take a long time (10+ minutes? I canceled it) because it defaults to a maxzoom of 14+.

This behavior might be correct in terms of determining the detail, but the runtime is disproportionate to the initial data size.

Maybe part of it is lots of redundancy in huge polygon areas that could be alleviated by the changes related to #3?

Arrays in properties

I'm having problems getting an ['at', n, array] expression to work in MapLibre with data produced by tippecanoe. I wonder if that is because array properties still get stringified in tippecanoe (as the changelog states for version [1.7.2](https://github.com/felt/tippecanoe/blob/main/CHANGELOG.md#172)). Is there a way to overcome this and make use of expressions that can handle arrays?

Build failing on Fedora 38

A snippet of make:

/usr/include/c++/13/bits/basic_string.h:176:7: note:   conversion of argument 1 would be ill-formed:
pmtiles_file.cpp:384:20: error: ‘struct pmtiles::headerv3’ has no member named ‘internal_compression’
  384 |         if (header.internal_compression == pmtiles::COMPRESSION_NONE) {
      |                    ^~~~~~~~~~~~~~~~~~~~
pmtiles_file.cpp:384:53: error: ‘COMPRESSION_NONE’ is not a member of ‘pmtiles’
  384 |         if (header.internal_compression == pmtiles::COMPRESSION_NONE) {
      |                                                     ^~~~~~~~~~~~~~~~
pmtiles_file.cpp:386:27: error: ‘struct pmtiles::headerv3’ has no member named ‘internal_compression’
  386 |         } else if (header.internal_compression == pmtiles::COMPRESSION_GZIP) {
      |                           ^~~~~~~~~~~~~~~~~~~~
pmtiles_file.cpp:386:60: error: ‘COMPRESSION_GZIP’ is not a member of ‘pmtiles’
  386 |         } else if (header.internal_compression == pmtiles::COMPRESSION_GZIP) {
      |                                                            ^~~~~~~~~~~~~~~~
pmtiles_file.cpp:389:92: error: ‘struct pmtiles::headerv3’ has no member named ‘internal_compression’
  389 |                 fprintf(stderr, "Unknown or unsupported pmtiles compression: %d\n", header.internal_compression);
      |                                                                                            ^~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/13/x86_64-redhat-linux/bits/c++allocator.h:33,
                 from /usr/include/c++/13/bits/allocator.h:46,
                 from /usr/include/c++/13/bits/alloc_traits.h:39,
                 from /usr/include/c++/13/ext/alloc_traits.h:34,
                 from /usr/include/c++/13/bits/hashtable_policy.h:39,
                 from /usr/include/c++/13/bits/hashtable.h:35,
                 from /usr/include/c++/13/bits/unordered_map.h:33,
                 from /usr/include/c++/13/unordered_map:41,
                 from pmtiles_file.cpp:1:
/usr/include/c++/13/bits/new_allocator.h: In instantiation of ‘void std::__new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = pmtiles::entryv3; _Args = {const long unsigned int&, long long unsigned int&, long unsigned int&, int}; _Tp = pmtiles::entryv3]’:
/usr/include/c++/13/bits/alloc_traits.h:537:17:   required from ‘static void std::allocator_traits<std::allocator<_Tp1> >::construct(allocator_type&, _Up*, _Args&& ...) [with _Up = pmtiles::entryv3; _Args = {const long unsigned int&, long long unsigned int&, long unsigned int&, int}; _Tp = pmtiles::entryv3; allocator_type = std::allocator<pmtiles::entryv3>]’
/usr/include/c++/13/bits/vector.tcc:117:30:   required from ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {const long unsigned int&, long long unsigned int&, long unsigned int&, int}; _Tp = pmtiles::entryv3; _Alloc = std::allocator<pmtiles::entryv3>; reference = pmtiles::entryv3&]’
pmtiles_file.cpp:227:26:   required from here
/usr/include/c++/13/bits/new_allocator.h:187:11: error: no matching function for call to ‘pmtiles::entryv3::entryv3(const long unsigned int&, long long unsigned int&, long unsigned int&, int)’
  187 |         { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
      |           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pmtiles/pmtiles.hpp:159:9: note: candidate: ‘pmtiles::entryv3::entryv3()’
  159 |         entryv3()
      |         ^~~~~~~
pmtiles/pmtiles.hpp:159:9: note:   candidate expects 0 arguments, 4 provided
pmtiles/pmtiles.hpp:153:8: note: candidate: ‘constexpr pmtiles::entryv3::entryv3(const pmtiles::entryv3&)’
  153 | struct entryv3 {
      |        ^~~~~~~
pmtiles/pmtiles.hpp:153:8: note:   candidate expects 1 argument, 4 provided
pmtiles/pmtiles.hpp:153:8: note: candidate: ‘constexpr pmtiles::entryv3::entryv3(pmtiles::entryv3&&)’
pmtiles/pmtiles.hpp:153:8: note:   candidate expects 1 argument, 4 provided
/usr/include/c++/13/bits/new_allocator.h: In instantiation of ‘void std::__new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = pmtiles::entryv3; _Args = {const long unsigned int&, long long unsigned int&, int&, int}; _Tp = pmtiles::entryv3]’:
/usr/include/c++/13/bits/alloc_traits.h:537:17:   required from ‘static void std::allocator_traits<std::allocator<_Tp1> >::construct(allocator_type&, _Up*, _Args&& ...) [with _Up = pmtiles::entryv3; _Args = {const long unsigned int&, long long unsigned int&, int&, int}; _Tp = pmtiles::entryv3; allocator_type = std::allocator<pmtiles::entryv3>]’
/usr/include/c++/13/bits/vector.tcc:117:30:   required from ‘std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {const long unsigned int&, long long unsigned int&, int&, int}; _Tp = pmtiles::entryv3; _Alloc = std::allocator<pmtiles::entryv3>; reference = pmtiles::entryv3&]’
pmtiles_file.cpp:241:25:   required from here
/usr/include/c++/13/bits/new_allocator.h:187:11: error: no matching function for call to ‘pmtiles::entryv3::entryv3(const long unsigned int&, long long unsigned int&, int&, int)’
  187 |         { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
      |           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pmtiles/pmtiles.hpp:159:9: note: candidate: ‘pmtiles::entryv3::entryv3()’
  159 |         entryv3()
      |         ^~~~~~~
pmtiles/pmtiles.hpp:159:9: note:   candidate expects 0 arguments, 4 provided
pmtiles/pmtiles.hpp:153:8: note: candidate: ‘constexpr pmtiles::entryv3::entryv3(const pmtiles::entryv3&)’
  153 | struct entryv3 {
      |        ^~~~~~~
pmtiles/pmtiles.hpp:153:8: note:   candidate expects 1 argument, 4 provided
pmtiles/pmtiles.hpp:153:8: note: candidate: ‘constexpr pmtiles::entryv3::entryv3(pmtiles::entryv3&&)’
pmtiles/pmtiles.hpp:153:8: note:   candidate expects 1 argument, 4 provided
pmtiles/pmtiles.hpp: At global scope:
pmtiles/pmtiles.hpp:345:9: warning: ‘pmtiles::entryv3 pmtiles::{anonymous}::find_tile(const std::vector<pmtiles::entryv3>&, int)’ defined but not used [-Wunused-function]
  345 | entryv3 find_tile(const std::vector<entryv3> &entries, uint64_t tile_id) {
      |         ^~~~~~~~~
pmtiles/pmtiles.hpp:342:3: warning: ‘pmtiles::{anonymous}::colmajor_cmp’ defined but not used [-Wunused-variable]
  342 | } colmajor_cmp;
      |   ^~~~~~~~~~~~
pmtiles/pmtiles.hpp:301:5: warning: ‘pmtiles::{anonymous}::t_on_level’ defined but not used [-Wunused-variable]
  301 | zxy t_on_level(uint8_t z, uint64_t pos) {
      |     ^~~~~~~~~~
pmtiles/pmtiles.hpp:289:6: warning: ‘void pmtiles::{anonymous}::rotate(int64_t, int64_t&, int64_t&, int64_t, int64_t)’ defined but not used [-Wunused-function]
  289 | void rotate(int64_t n, int64_t &x, int64_t &y, int64_t rx, int64_t ry) {
      |      ^~~~~~
make: *** [Makefile:84: pmtiles_file.o] Error 1

Proof that I do have the appropriate libraries:

$ sudo dnf list installed | grep -E "sqlite|zlib"
sqlite.x86_64                                        3.40.1-2.fc38                       @fedora                                                          
sqlite-devel.x86_64                                  3.40.1-2.fc38                       @fedora                                                          
sqlite-libs.x86_64                                   3.40.1-2.fc38                       @anaconda                                                        
zlib.x86_64                                          1.2.13-3.fc38                       @anaconda                                                        
zlib-devel.x86_64                                    1.2.13-3.fc38                       @fedora   

Any tips on getting this error resolved?

transform feature property

I use tippecanoe to generate contour lines.
I use gdal_contour -i 10 -a ele to generate the GeoJSON that I pass to tippecanoe.

I also need to use ogr2ogr to derive a div attribute from ele, as follows:

ogr2ogr -dialect sqlite -sql "
SELECT
  ele,
  CASE
    WHEN ele % 1000 = 0 THEN 1000
    WHEN ele % 500 = 0 THEN 500
    WHEN ele % 250 = 0 THEN 250
    WHEN ele % 200 = 0 THEN 200
    WHEN ele % 100 = 0 THEN 100
    WHEN ele % 50 = 0 THEN 50
    WHEN ele % 20 = 0 THEN 20
    ELSE 10
  END AS div,
  geom
FROM
  contour
" /vsigzip/$name-contours-10m.geojson $name-contours-4326-10m.gpkg

Now ogr2ogr is really slow and also prevents me from using tippecanoe in "parallel".

I was wondering if there was a way to get rid of the ogr2ogr step. Is there any way to transform feature properties in tippecanoe (before filtering) so that ele could be turned into the modulo-based div?

Thanks
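
For what it's worth, the CASE logic above is simple enough to apply in a streaming preprocessing step instead of ogr2ogr. A minimal sketch, assuming newline-delimited GeoJSON with an ele property (the function and property names here are only illustrations, not an existing tippecanoe feature):

```python
import json

def contour_div(ele):
    # Mirrors the SQL CASE: the first divisor that evenly divides ele wins.
    for step in (1000, 500, 250, 200, 100, 50, 20):
        if ele % step == 0:
            return step
    return 10

def add_div(feature):
    feature["properties"]["div"] = contour_div(feature["properties"]["ele"])
    return feature

def process(lines):
    # One GeoJSON feature per input line (newline-delimited GeoJSON in/out).
    for line in lines:
        yield json.dumps(add_div(json.loads(line)))
```

Wrapped in a small stdin/stdout driver, this could sit in a pipe such as gdal_contour ... | python add_div.py | tippecanoe -P ..., avoiding the intermediate GeoPackage while keeping parallel input reading usable.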

Docker build breaks with update for 2.26.0/2.25.0: "error: unknown option '-o'"

A Docker build process that was working on May 8, 2023 with v2.24.0 no longer functions with 2.25.0/2.26.0.

Environment:
Win10 Pro, build: 19044.2846
Docker Desktop, engine: 20.10.23

I am using a dummy container to create a volume to share the input and output files:

Observe the file in.geojson is available in the volume:


Execute docker build (as described in the readme, and was working with v.2.24.0), and observe no issues:


I had previously discovered that the example "run" command in the readme was incorrect.
The following version worked for me:

docker run -it --rm -v data:/data felt/tippecanoe:latest tippecanoe -o /data/out.mbtiles /data/in.geojson

When this command started failing with v2.25/2.26, I attempted the following versions inspired from the readme:

docker run -it --rm -v data:/data felt/tippecanoe:latest tippecanoe --output=/data/out.mbtiles /data/in.geojson
docker run -it --rm -v /data:/data felt/tippecanoe:latest tippecanoe --output=/data/out.mbtiles /data/in.geojson

Observe the following errors for the 3 versions of "run":

PS C:\REPOS\TileServerLocal> docker run -it --rm -v data:/data felt/tippecanoe:latest tippecanoe -o /data/out.mbtiles /data/in.geojson
error: unknown option '-o'
PS C:\REPOS\TileServerLocal> docker run -it --rm -v data:/data felt/tippecanoe:latest tippecanoe --output=/data/out.mbtiles /data/in.geojson
error: unknown option '--output=/data/out.mbtiles'
PS C:\REPOS\TileServerLocal> docker run -it --rm -v /data:/data felt/tippecanoe:latest tippecanoe --output=/data/out.mbtiles /data/in.geojson
error: unknown option '--output=/data/out.mbtiles'


I also receive the unknown option error when setting "Z" and "z" options.

Sample input file attached, but renamed to "txt" for upload:
in.txt

Input in EPSG:3857

Currently supported are EPSG:4326 (WGS84, the default) and EPSG:3857 (Web Mercator). In general you should use WGS84 for your input files if at all possible.

My input is in EPSG:3857. Are there any drawbacks (that's what the README sounds like to me)? Should I reproject the input to EPSG:4326 first?
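
For reference, the EPSG:3857 to EPSG:4326 conversion is closed-form, so reprojecting first is cheap if you prefer WGS84 input. A sketch of the inverse spherical Mercator math (the function name is mine):

```python
import math

R = 6378137.0  # radius of the Web Mercator sphere, in meters

def epsg3857_to_lonlat(x, y):
    # Inverse spherical Mercator: meters -> degrees.
    lon = math.degrees(x / R)
    lat = math.degrees(2.0 * math.atan(math.exp(y / R)) - math.pi / 2.0)
    return lon, lat
```

In practice ogr2ogr -t_srs EPSG:4326 does the same thing, and tippecanoe also accepts 3857 directly via --projection=EPSG:3857; it just performs this conversion internally.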

failure of allow-existing-test

A handful of coordinates differ when running this test. Example: https://gist.github.com/jpkotta/0218b91f268a5ced38043b4369e186e8

The numbers differ by either 0.000068 or 0.000086 in this case. Loss of precision, maybe? It's kind of weird that there are many differing coordinates but only 2 unique differences. Honestly, I don't really understand what the test is testing; I'm just trying to build it from the Arch AUR, which runs make test by default.

This is with 2.24.0
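
For context on the magnitude: tile coordinates are quantized to a grid of 2^(zoom + detail) steps across the world, so small, repeated coordinate deltas are expected whenever output is re-encoded. A sketch of the longitude step size (tippecanoe's default detail is 12); note that 360/2^22 ≈ 0.0000858 is close to one of the two deltas above, though whether that explains this particular test failure is a guess:

```python
def lon_resolution_deg(zoom, detail=12):
    # Smallest representable longitude difference in a tile:
    # the world's 360 degrees divided among 2**(zoom + detail) grid steps.
    return 360.0 / 2 ** (zoom + detail)
```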

"GeoJSON extension" for other input types

I use the GeoJSON extension quite often to feed single zoom levels to tippecanoe and it is very useful for my work.

But there are some drawbacks to adding the props at the feature level and not just in the properties:

  • it is limited to GeoJSON/NDJSON input
  • not all tools can write properties at the feature level (e.g. mapshaper)
  • the suggested ndjson-cli has problems with very large features (something like a planetary coastline). My solution is to use pure shell commands to modify each line in an NDJSON file. That is fast but still an extra step.

Wouldn't it be better to allow the evaluation of ordinary properties? Either specially named property keys like _tippecanoe_minzoom or additional arguments like --minzoom-field "myMinZkey".

This could work not only for GeoJSON but also for any other input type, like FlatGeobuf.

Increase tile-join size limit

Tippecanoe supports overriding the default 500K tilesize (-M bytes or --maximum-tile-bytes=bytes: Use the specified number of bytes as the maximum compressed tile size instead of 500K.) Ever since I read https://www.mapbox.com/blog/customizable-tile-size-limits I've been taking advantage of this and built locally so I could modify the code in tile-join.cpp:

if (!pk && compressed.size() > 2500000) {
	fprintf(stderr, "Tile %lld/%lld/%lld size is %lld, >2500000. Skipping this tile.\n", ai->first.z, ai->first.x, ai->first.y, (long long) compressed.size());
}

Web apps using Mapbox GL JS have worked just fine with mbtiles tilesets, including those where thousands of Census block polygons are used in choropleths with changing demographic data. I've also tested a 665 MB pmtiles file after modifying locally. It has boundaries for every US county, Congressional district and block and worked just fine in https://protomaps.github.io/PMTiles/.


-pf -pk with global data at zoom levels > 12

I am not sure this is a bug. Right now I am trying to establish why I am seeing unexpected behaviour from data produced by tippecanoe.

The backstory is that I am piping the entirety of the Who's On First dataset, plus a bit more, into tippecanoe and converting the resultant MBTiles database into a Protomaps PMTiles database.

I am doing this in order to use the PMTiles database as a fast and cheap point-in-polygon service (derive the tile for a point, fetch the features from that tile, do ray-tracing in memory).

All of which works fine when I produce tiles at zoom 12. For example:

$> features -as-spr -require-polygons -writer-uri constant://?val=jsonl://?writer=stdout:// -iterator-uri org:///usr/local/data -spr-append-property wof:hierarchy whosonfirst-data://?prefix=whosonfirst-data-admin- sfomuseum-data://?prefix=sfomuseum-data-architecture | tippecanoe -P -z 12 -pf -pk -o /usr/local/data/whosonfirst_sfom.mbtiles

The features tool is part of https://github.com/whosonfirst/go-whosonfirst-tippecanoe and iterates through one or more WOF data repositories in an organization outputting features with polygons to STDOUT.

I decided to try the same but at zoom 14 in order to determine whether features in any given tile would have a noticeable impact on speed and the time to do ray-tracing. It took a while (not surprising) but eventually completed without, seemingly, any errors.

However when I try to query the resultant PMTiles database I am getting the Protomaps equivalent of a 404 error (204 no data). For example, this tile in the Richmond / GG Park area of San Francisco:

$> ./bin/pmtile -database whosonfirst_sfom_14 -tiles 's3blob://{BUCKET}?region={REGION}&prefix={PREFIX}&credentials=session' -z 14 -x 2617 -y 6332
2022/12/09 11:18:59 fetching whosonfirst_sfom_14 0-16384
2022/12/09 11:18:59 fetched whosonfirst_sfom_14 0-0
2022/12/09 11:18:59 fetching whosonfirst_sfom_14 12513321-11279
2022/12/09 11:19:00 fetched whosonfirst_sfom_14 12513321-11279
2022/12/09 11:19:00 /whosonfirst_sfom_14/14/2617/6332.mvt returns status code 204

The pmtile tool is part of https://github.com/whosonfirst/go-whosonfirst-spatial-pmtiles/ and dumps the contents of a tile as a GeoJSON FeatureCollection to STDOUT

Unfortunately, as I write this, I don't have the intermediary MBTiles database because it gets removed when the container doing all the work completes. Looking at the logs Protomaps seemed perfectly happy with the MBTiles data.

Curiously, the resultant PMTiles database is 7.1 GB, so I am scratching my head to understand what is or isn't in there.

Does any of what I've just described trigger any "Oh yeah, that's expected..." thoughts or, absent any obvious errors being reported, any ideas how to think about debugging this?

If zoom level 12 is the functional limit then I can live with that.
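
For debugging, it may help to recompute the expected tile address for a known point independently of the PMTiles tooling. The standard slippy-map derivation is below (the San Francisco coordinates in the test are my approximation of a Richmond/GG Park point, not taken from the report):

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    # Standard Web Mercator (slippy map) tile addressing, origin at the
    # top-left of the world: x grows eastward, y grows southward.
    n = 2 ** zoom
    xt = int((lon + 180.0) / 360.0 * n)
    yt = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return xt, yt
```

If the recomputed Z/X/Y matches what the query tool asked for, the 204 points at the tileset itself (e.g. the tile really was dropped or skipped during the zoom-14 run) rather than at the lookup step.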

Does not build on FreeBSD

  • on FreeBSD, make is not GNU make; gmake must be used in its place
  • FreeBSD does not have bash by default, and when it does, it is installed in /usr/local/bin, not /bin
  • FreeBSD, like Apple/Darwin, does not have a statfs header, but instead expects the use of the underlying headers:
    sys/types.h
    sys/sysctl.h
    sys/param.h
    sys/mount.h

Limiting maxzoom to keep from generating many tiles

I have been using tippecanoe for a while, but it is the first time I encounter this and I did not find much discussion about it.

When creating vector tiles for a given world map of mine, everything works well if using PMTiles compression (and the job takes about 2 hours with -zg and -P). However, when I try to generate uncompressed PBF files, the process becomes extremely slow after repeatedly showing messages like Limiting maxzoom to -z10 to keep from generating 4577205 tiles.

Firstly, I am confused about what is going on. The -zg parameter is making tippecanoe find z8 and z12 as ideal zoom levels. So:

  1. why is it forcing z10 as the maximum only when generating uncompressed tiles?
  2. is it forcing z10 only for the offending features/tiles, or for the whole task? Reading the source code I got the impression that it is forcing z10 only for some specific features/tiles, but understanding this is of relevance.
  3. is there a way to bypass that limit in case memory and processing are not bottlenecks for me? I read the many ways to handle limits, but I did not quite get which parameter would affect that particular situation. Also, per the source code, the limit seems to be hard-coded, so not changeable, and not bypassable by any flag.

Thanks!
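
On the tile-count question: tile counts quadruple with each additional zoom level, so two extra levels mean roughly 16 times as many tiles. As I understand it, the guard in the message estimates the count from the data's bounding box when writing tiles individually; the sketch below is the simpler full-world upper bound, shown only to illustrate the growth rate:

```python
def full_pyramid_tiles(maxzoom):
    # Tiles in a complete world pyramid from z0 through maxzoom:
    # sum(4**z for z in range(maxzoom + 1)), in closed form.
    return (4 ** (maxzoom + 1) - 1) // 3
```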

show all points at certain zoom level and above

I am using the command below to show all data points at zoom level 10, and it is working fine. I have noticed that the points do not disappear even when I zoom in beyond 10, e.g. to 12 or 13; the points stay as they are. This is what I wanted, but I want to make sure it is expected behavior, because it would let me avoid generating points for zoom levels 12, 13, etc. and save some space in the mbtiles.

tippecanoe -z10 -Z10 -f -o grid_26lacs_layers_time_z10.mbtiles -b0 -r1 -pk -pf /home/geos/data/input/grid_26lacs_layers_time_z10.csv

I used the tippecanoe-decode command to verify the points at zoom levels 12 and 13; the count shows 0. I am wondering whether it is a map refresh issue or expected behavior.

support for windows of tippecanoe

I want to ship my product, which depends solely on tippecanoe, in a Windows environment, but I can't, as I am not able to run tippecanoe on Windows. Is there an alternative (other than WSL)?

QGIS can't read tiles output to directory

Hello, and thank you for this tool, which has been so helpful to me over the years!

I am trying to convert a GeoJSON file to a directory of tiles. This is the command I'm using:

tippecanoe -zg --drop-densest-as-needed -e ~/summary_hexagons ~/summary_hexagons.geojson 

The command seems to run fine:

For layer 0, using name "summary_hexagons"
120038 features, 11964453 bytes of geometry, 2041151 bytes of string pool
Choosing a maxzoom of -z6 for features typically 3791 feet (1156 meters) apart, and at least 1893 feet (577 meters) apart
tile 4/4/5 size is 501839 with detail 12, >500000    
Going to try keeping the sparsest 89.67% of the features to make it fit
tile 4/4/6 size is 668613 with detail 12, >500000    
[truncated for brevity]
  99.9%  6/18/24  

Next, I try viewing these in QGIS. No geometries appear on the map and if I try adding a random .pbf from the output directories, I get an error Invalid Data Source: /Users/me/summary_hexagons/1/0/0.pbf is not a valid or recognized data source.

If I inspect one of the .pbfs in the Protobuf Viewer macOS app, I'm not seeing a valid structure. I'm not a protobuf expert, but when I've used this app with valid tiles I usually see a tree-like structure. This just has a top-level Root with no apparent contents.

If I switch from tippecanoe -e ~/summary_hexagons to -o summary_hexagons.mbtiles it works fine, and I can view the tiles successfully in QGIS. I can also view the source GeoJSON in QGIS without issue.

Is there something specific to the -e flag I might be doing wrong here? I hope this is enough information to go on, and thanks so much in advance for any pointers!
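
One thing worth checking: tiles written by tippecanoe are gzip-compressed by default, and some consumers expect raw, uncompressed .pbf files in a directory. Whether this is what trips up QGIS here is only a hypothesis, but a quick way to tell is to read the first bytes of one tile file and look for the gzip magic:

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"

def looks_gzipped(data):
    # Every gzip stream starts with the magic bytes 0x1f 0x8b.
    return data[:2] == GZIP_MAGIC
```

If the tiles turn out to be gzipped, re-running with --no-tile-compression (used elsewhere on this page) or gunzipping one tile before inspection would also explain why Protobuf Viewer sees no valid structure.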

docker build fails

git clone https://github.com/felt/tippecanoe.git
cd tippecanoe
docker build -t tc .

getting:

#10 15.21 projection.cpp: In function 'void lonlat2tile(double, double, int, long long int*, long long int*)':
#10 15.21 projection.cpp:25:32: error: 'fpclassify' was not declared in this scope
#10 15.21   int lat_class = fpclassify(lat);
#10 15.21                                 ^
#10 15.21 projection.cpp:25:32: note: suggested alternative:
#10 15.21 In file included from projection.cpp:5:0:
#10 15.21 /usr/include/c++/5/cmath:581:5: note:   'std::fpclassify'
#10 15.21      fpclassify(_Tp __x)
#10 15.21      ^
#10 15.21 projection.cpp: In function 'void epsg3857totile(double, double, int, long long int*, long long int*)':
#10 15.21 projection.cpp:77:30: error: 'fpclassify' was not declared in this scope
#10 15.21   int iy_class = fpclassify(iy);
#10 15.21                               ^
#10 15.21 projection.cpp:77:30: note: suggested alternative:
#10 15.21 In file included from projection.cpp:5:0:
#10 15.21 /usr/include/c++/5/cmath:581:5: note:   'std::fpclassify'
#10 15.21      fpclassify(_Tp __x)
#10 15.21      ^
#10 15.21 Makefile:74: recipe for target 'projection.o' failed
#10 15.21 make: *** [projection.o] Error 1
------
executor failed running [/bin/sh -c make   && make install]: exit code: 2

Segmentation Fault Processing Large GeoJSON (>100 GB)

I polygonized the National Land Cover Database (https://www.mrlc.gov/data/nlcd-2019-land-cover-conus) in QGIS and am attempting to build vector tiles from the resulting GeoJSON which is 129 GB. Running the following is producing a segmentation fault. Am I pushing beyond the bounds of what tippecanoe can handle? Or are there other pathways I should explore? Any pointers would be greatly appreciated!

tippecanoe -z10 -o nlcd_2019_land_cover_l48_20210604.mbtiles nlcd_2019_land_cover_l48_20210604.geojson
For layer 0, using name "nlcd_2019_land_cover_l48_20210604"
zsh: segmentation fault  tippecanoe -z10 -o nlcd_2019_land_cover_l48_20210604.mbtiles

Request for including bbox of clustered points in cluster properties

When using Tippecanoe with the cluster option, the resulting clusters only contain the number of points in the cluster and the average coordinates of the clustered points. I would like to request the ability to also include the bounding box (bbox) of all the clustered points in the cluster properties.

This information would be useful in cases where the original features need to be displayed or processed after clustering. For example, a user may want to zoom in on a cluster to see the individual features.

I understand that this may have implications on the size and complexity of the resulting vector tiles, but I believe the benefits of including this information would outweigh the drawbacks.

Thank you for considering this request.
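
For what it's worth, the accumulation being requested is cheap to express. A sketch of the per-cluster bounding box in GeoJSON's [west, south, east, north] order (this is just the shape of the proposed property, not tippecanoe's implementation):

```python
def cluster_bbox(points):
    # points: iterable of (lon, lat) pairs assigned to one cluster.
    lons = [lon for lon, _ in points]
    lats = [lat for _, lat in points]
    return [min(lons), min(lats), max(lons), max(lats)]
```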

Idea for feature - 'g' (guess) for simplification.

I had an idea for a feature, 'g' (guess) for simplification. The same as there is for max zoom, where you ask tippecanoe to calculate the best max zoom, maybe it could be nice to have the ability to ask tippecanoe to choose the best simplification level.

How much can I gain by shortening attribute names?

Hello!

I have a dataset of around five million points and I'd like to keep the tiles as small as possible. Each point has around six attributes with readable, descriptive names like:

  • download_speed_maximum
  • download_speed_minimum

My question is — how much could I stand to gain in terms of keeping the tiles small if I renamed the attributes in the source GeoJSON to something much shorter, like:

  • download_speed_max => dmax
  • download_speed_min => dmin

Are there already optimizations for attribute names when the tiles get encoded? I don't know protobufs well, but wondering if there's already sort of normalization that happens (e.g. download_speed_max gets encoded to a1 internally — my imagination is running wild here 😆)

Thank you in advance for any tips on this!
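
For background: the vector tile encoding already deduplicates attribute names within each tile layer. Every distinct key is written once to a per-layer string table, and features refer to keys by index. So shortening names saves bytes once per tile per layer, not once per feature. A simplified model of that accounting (ignoring protobuf varint overhead):

```python
def layer_key_table_bytes(features):
    # features: list of property dicts in one tile layer.
    # An MVT layer stores each distinct key once; features index into it.
    distinct_keys = set()
    for props in features:
        distinct_keys.update(props)
    return sum(len(key) for key in distinct_keys)
```

Under this model, renaming download_speed_maximum to dmax saves about 18 bytes per tile layer before compression, times however many tiles carry the layer; gzip typically shrinks the difference further. Measuring a real tileset both ways is the only reliable answer.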

Tile Conversion Process Stuck

In attempting to diagnose why tippecanoe was segfaulting on a larger dataset derived from polygonizing a raster, I polygonized a subset of the image, and now tippecanoe is freezing partway through the conversion. I've tried ingesting both GeoJSON and FlatGeobuf representations of the data and got the same result. I'm wondering if anyone has thoughts on how to debug this behavior. At present, I'm using the options -z10 -pt -pn. Re-running tippecanoe with only -z10 yields the same outcome.

Layer name in tippecanoe GeoJSON object doesn't always take precedence

This took a bit to isolate -- a map layer would sometimes simply not show and this appeared to be source data dependent...

According to the documentation, a layer name can be specified in GeoJSON using "tippecanoe" : { "layer" : "LAYER-NAME" }, and this name has precedence over the filename and those given by --layer directives.

However:

$ tippecanoe  \
    --force  \
    --projection=EPSG:4326  \
    --buffer=100  \
    --minimum-zoom=5  \
    --maximum-zoom=5  \
    --no-tile-size-limit  \
    --output=index.mbtiles  \
    --no-tile-compression  \
    -L'{"file": "layer1-features.json", "layer": "features"}'  \
    -L'{"file": "layer2-features.json", "layer": "features"}'  \
    -L'{"file": "layer2-paths.json", "layer": "pathways"}'

using the following GeoJSON results in layer1-features having the wrong layer name -- it has the name from the --layer directive used for layer2-paths, instead of the one given in the GeoJSON.

$ tippecanoe-decode index.mbtiles 5 15 15
{ "type": "FeatureCollection", "properties": { "zoom": 5, "x": 15, "y": 15, "compressed": false }, "features": [
{ "type": "FeatureCollection", "properties": { "layer": "layer2-features", "version": 2, "extent": 4096 }, "features": [
{ "type": "Feature", "id": 2, "properties": {  }, "geometry": { "type": "Polygon", "coordinates": [ [ [ 1.700134, 6.001265 ], [ 1.700134, 5.000658 ], [ 0.700378, 5.000658 ], [ 0.700378, 6.001265 ], [ 1.700134, 6.001265 ] ] ] } }
] }
,
{ "type": "FeatureCollection", "properties": { "layer": "layer2-paths", "version": 2, "extent": 4096 }, "features": [
{ "type": "Feature", "id": 3, "properties": {  }, "geometry": { "type": "LineString", "coordinates": [ [ 4.394531, 5.099151 ], [ 2.298889, 5.099151 ], [ 2.298889, 6.001265 ], [ 0.799255, 6.001265 ] ] } }
] }
,
{ "type": "FeatureCollection", "properties": { "layer": "pathways", "version": 2, "extent": 4096 }, "features": [
{ "type": "Feature", "id": 1, "properties": {  }, "geometry": { "type": "Polygon", "coordinates": [ [ [ -6.001282, 7.800800 ], [ -6.001282, 1.999106 ], [ -9.898682, 1.999106 ], [ -9.898682, 7.800800 ], [ -6.001282, 7.800800 ] ] ] } }
] }
] }

layer1-features.json

{
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "id": 1,
            "tippecanoe": {
                "layer": "layer1-features"
            },
            "geometry": {
                "type": "Polygon",
                "coordinates": [[[-9.9, 7.8], [-6.0, 7.8], [-6.0, 2.0], [-9.9, 2.0], [-9.9, 7.8]]]
            },
            "properties": {
            }
        }
    ]
}

layer2-features.json

{
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "id": 2,
            "tippecanoe": {
                "layer": "layer2-features"
            },
            "geometry": {
                "type": "Polygon",
                "coordinates": [[[0.7, 6.0], [1.7, 6.0], [1.7, 5.0], [0.7, 5.0], [0.7, 6.0]]]
            },
            "properties": {
            }
        }
    ]
}

layer2-paths.json

{
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "id": 2,
            "tippecanoe": {
                "layer": "layer2-features"
            },
            "geometry": {
                "type": "Polygon",
                "coordinates": [[[0.7, 6.0], [1.7, 6.0], [1.7, 5.0], [0.7, 5.0], [0.7, 6.0]]]
            },
            "properties": {
            }
        }
    ]
}

A bit confused by tile resolution docs

Hello!

Related to #88, I'm trying to convert polygons to MVTs without simplifying geometries at any zoom level and am running into some issues getting that to work. After some more digging, I think it might be related to the tile resolution flags (e.g. --full-detail, --low-detail), so I'd like to try tinkering with those. However, I'm having a little trouble understanding what they do. For example:

-d detail or --full-detail=detail: Detail at max zoom level (default 12, for tile resolution of 2^12=4096)

Does this mean that geometries only appear unsimplified at level 12 and above? Also, is this somehow overriding the max zoom I'm setting with -z? It's a bit confusing to me that the term "max zoom" gets reused for what appear to be orthogonal parameters.

I'm also not sure if I understand the functionality of --low-detail:

-D detail or --low-detail=detail: Detail at lower zoom levels (default 12, for tile resolution of 2^12=4096)

Does this invert how details are rendered, so it's only at low zoom levels? That doesn't seem to check out since it's not a boolean argument, but I'm not quite sure.

I would be happy to help with this if I can. Once I understand how these flags work, I'd be glad to submit a PR. Thank you for your time! 😄
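
My reading of those flags, expressed as arithmetic: a tile at zoom z with detail d has 2^d coordinate units per side, so the effective ground resolution depends on the sum z + d; --full-detail applies at the tileset's max zoom and --low-detail at the lower zooms, and neither changes which zoom levels exist (that is what -z controls). A sketch of the resolution math (my interpretation, not authoritative):

```python
import math

def ground_resolution_m(zoom, detail, lat=0.0):
    # 2**(zoom + detail) addressable positions span the Web Mercator world,
    # so resolution is the latitude-scaled circumference divided by that.
    world_m = 2.0 * math.pi * 6378137.0  # equatorial circumference, ~40075017 m
    return world_m * math.cos(math.radians(lat)) / 2 ** (zoom + detail)
```

For example, z14 with detail 12 resolves the same ground distance as z12 with detail 14, which is why detail reads like "extra zoom inside a tile".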

Compilation error during `docker build`

docker build fails with an error during the compilation of text.cpp:

$ docker build --force-rm --no-cache -t tippecanoe:local .
[+] Building 56.2s (10/10) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                                       0.1s
 => => transferring dockerfile: 38B                                                                                                                                                                        0.1s
 => [internal] load .dockerignore                                                                                                                                                                          0.1s
 => => transferring context: 34B                                                                                                                                                                           0.1s
 => [internal] load metadata for docker.io/library/ubuntu:16.04                                                                                                                                            1.5s
 => [internal] load build context                                                                                                                                                                          2.3s
 => => transferring context: 27.93kB                                                                                                                                                                       2.3s
 => CACHED [1/6] FROM docker.io/library/ubuntu:16.04@sha256:1f1a2d56de1d604801a9671f301190704c25d604a416f59e03c04f5c6ffee0d6                                                                               0.0s
 => => resolve docker.io/library/ubuntu:16.04@sha256:1f1a2d56de1d604801a9671f301190704c25d604a416f59e03c04f5c6ffee0d6                                                                                      0.0s
 => [2/6] RUN apt-get update   && apt-get -y upgrade   && apt-get -y install build-essential libsqlite3-dev zlib1g-dev                                                                                    28.7s
 => [3/6] RUN mkdir -p /tmp/tippecanoe-src                                                                                                                                                                 0.5s
 => [4/6] WORKDIR /tmp/tippecanoe-src                                                                                                                                                                      0.0s
 => [5/6] COPY . /tmp/tippecanoe-src                                                                                                                                                                       0.2s
 => ERROR [6/6] RUN make   && make install                                                                                                                                                                25.0s
------
 > [6/6] RUN make   && make install:
#10 0.411 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o geojson.o geojson.cpp
#10 1.517 cc -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -c -o jsonpull/jsonpull.o jsonpull/jsonpull.c
#10 1.835 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o tile.o tile.cpp
#10 7.863 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o pool.o pool.cpp
#10 7.935 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o mbtiles.o mbtiles.cpp
#10 9.826 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o geometry.o geometry.cpp
#10 17.23 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o projection.o projection.cpp
#10 17.34 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o memfile.o memfile.cpp
#10 17.39 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o mvt.o mvt.cpp
#10 20.03 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o serial.o serial.cpp
#10 21.68 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o main.o main.cpp
#10 24.85 g++ -MMD  -I/usr/local/include -I. -g -Wall -Wshadow -Wsign-compare -Wextra -Wunreachable-code -Wuninitialized -Wshadow -O3 -DNDEBUG  -std=c++11 -c -o text.o text.cpp
#10 24.94 text.cpp: In function 'int integer_zoom(std::__cxx11::string, std::__cxx11::string)':
#10 24.94 text.cpp:134:17: error: 'isfinite' was not declared in this scope
#10 24.94   if (!isfinite(d) || d != floor(d) || d < 0 || d > 32) {
#10 24.94                  ^
#10 24.94 text.cpp:134:17: note: suggested alternative:
#10 24.94 In file included from milo/dtoa_milo.h:4:0,
#10 24.94                  from text.cpp:6:
#10 24.94 /usr/include/c++/5/cmath:601:5: note:   'std::isfinite'
#10 24.94      isfinite(_Tp __x)
#10 24.94      ^
#10 24.96 Makefile:74: recipe for target 'text.o' failed
#10 24.96 make: *** [text.o] Error 1
------
executor failed running [/bin/sh -c make   && make install]: exit code: 2

Analyzing geojson

Hi @e-n-f, thank you for all the work you've done to continue this great project and all the wonderful docs.

We have a use case very similar to Felt's, where we want to use tippecanoe to take GeoJSON files submitted by our users and convert them into tiles. The recipes you have provided are super helpful when you understand the nature of the data going in. I'm curious how you have tackled more free-form input. Do you have scripts that analyze the nature of the GeoJSON and then generate the right prompts? Curious if you have any strategies or scripts that are shareable. Thanks!
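
Not the author, but in case it's useful: a first-pass analysis can be as simple as tallying geometry types before choosing flags. A toy sketch of that kind of probe (everything here is illustrative, not Felt's actual pipeline):

```python
import collections

def summarize(feature_collection):
    # Count geometry types so a wrapper script can pick tippecanoe flags,
    # e.g. point-heavy input -> consider --drop-densest-as-needed.
    counts = collections.Counter()
    for feature in feature_collection.get("features", []):
        geometry = feature.get("geometry") or {}
        counts[geometry.get("type", "null")] += 1
    return counts
```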

Update example "Show countries at low zoom levels but states at higher zoom levels"

This is a copy of mapbox/tippecanoe#920.

I ran into this in a project recently. My understanding is that, since https://github.com/mapbox/tippecanoe/pull/656/files (triggered by mapbox/tippecanoe#653), the example at https://github.com/felt/tippecanoe#show-countries-at-low-zoom-levels-but-states-at-higher-zoom-levels does not work anymore. At least, a very similar workflow no longer worked for me.

However, the use case of running tippecanoe multiple times with different settings and then merging the results is very useful for gaining more control over which data is visible per zoom level.

I see two options…

  • update the docs to reflect that this does not work anymore
  • update the code to make it work again

Support build via conda?

Currently mapbox/tippecanoe is available on conda-forge to be built with conda. I've had a good experience installing and making use of tippecanoe via conda, but it appears active development of tippecanoe has moved here. I've opened an issue at conda-forge/tippecanoe-feedstock about repointing their source from mapbox/tippecanoe to felt/tippecanoe, but also wanted to reach out to gauge interest in making this fork available via conda-forge. I understand if this is a low or non-priority. Thanks for your thoughts and consideration!

FlatGeobuf input improvements

  • Streaming support. tippecanoe should accept a stream of FlatGeobuf on stdin, just like GeoJSONSeq.
  • tippecanoe extensions. The tippecanoe GeoJSON extension is used to specify special keys like maxzoom, but FlatGeobuf is not extensible in the same way. We should instead reproduce the same behavior for namespaced keys e.g. tippecanoe:maxzoom.
  • heterogeneous files. A FlatGeobuf can have a mix of lines and polygons in a single file.
  • GeometryCollection. Similar to above, but where a single FGB Feature contains multiple geometry types.
  • Per-feature schemas: There currently aren't any FGB writer tools that produce "schemaless" output. The GDAL driver always outputs tabular data. This is necessary for efficiency on OpenStreetMap data and to have parity with GeoJSON input.
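
The namespaced-key idea in the second bullet could look something like this on the reader side: split tippecanoe:-prefixed properties out of the regular attributes before tiling. A sketch of the proposed behavior (not existing tippecanoe code):

```python
PREFIX = "tippecanoe:"

def split_tippecanoe_keys(properties):
    # Separate tippecanoe:-namespaced directives (minzoom, maxzoom, layer...)
    # from the feature's ordinary attributes.
    special, regular = {}, {}
    for key, value in properties.items():
        if key.startswith(PREFIX):
            special[key[len(PREFIX):]] = value
        else:
            regular[key] = value
    return special, regular
```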

Installation instructions / Homebrew

I am not familiar with make and cannot get the installation instructions running https://github.com/felt/tippecanoe#installation.
Is there a way to get a packaged version on Homebrew like the mapbox instructions? Maybe under a new name like "felt-tippecanoe"?

Or, alternatively, some GitHub Action that will create a release here on GitHub to be downloaded?


Update: The docker image looked like the best bet to get this running more easily https://github.com/felt/tippecanoe#docker-image. However, I ran into issues with docker: Error response from daemon: error while creating mount source path '/host_mnt/tiledata': mkdir /host_mnt/tiledata: input/output error. — Did not investigate this further.
