maxmind-db-reader-java's Introduction

MaxMind DB Reader

Description

This is the Java API for reading MaxMind DB files. MaxMind DB is a binary file format that stores data indexed by IP address subnets (IPv4 or IPv6).

Installation

Maven

We recommend installing this package with Maven. To do this, add the dependency to your pom.xml:

    <dependency>
        <groupId>com.maxmind.db</groupId>
        <artifactId>maxmind-db</artifactId>
        <version>3.1.0</version>
    </dependency>

Gradle

Add the following to your build.gradle file:

repositories {
    mavenCentral()
}
dependencies {
    implementation 'com.maxmind.db:maxmind-db:3.1.0'
}

Usage

Note: For accessing MaxMind GeoIP2 databases, we generally recommend using the GeoIP2 Java API rather than using this package directly.

To use the API, you must first create a Reader object. The constructor takes a File representing your MaxMind DB. Optionally, you may pass a FileMode as a second parameter, with a value of MEMORY_MAP or MEMORY. The default mode is MEMORY_MAP, which maps the file into virtual memory; this often provides performance comparable to loading the file into real memory with MEMORY.
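As a sketch (the database path is a placeholder), opening a database fully in memory rather than memory-mapping it might look like this:

```java
import com.maxmind.db.Reader;

import java.io.File;
import java.io.IOException;

public class OpenInMemory {
    public static void main(String[] args) throws IOException {
        File database = new File("/path/to/database/GeoIP2-City.mmdb");
        // FileMode.MEMORY loads the whole file into memory instead of mapping it.
        try (Reader reader = new Reader(database, Reader.FileMode.MEMORY)) {
            // perform lookups here
        }
    }
}
```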

To look up an IP address, pass the address as an InetAddress to the get method on Reader, along with the class of the object you want to deserialize into. This method will create an instance of the class and populate it. See examples below.

We recommend reusing the Reader object rather than creating a new one for each lookup. The creation of this object is relatively expensive as it must read in metadata for the file.

Example

import com.maxmind.db.MaxMindDbConstructor;
import com.maxmind.db.MaxMindDbParameter;
import com.maxmind.db.Reader;
import com.maxmind.db.DatabaseRecord;

import java.io.File;
import java.io.IOException;
import java.net.InetAddress;

public class Lookup {
    public static void main(String[] args) throws IOException {
        File database = new File("/path/to/database/GeoIP2-City.mmdb");
        try (Reader reader = new Reader(database)) {
            InetAddress address = InetAddress.getByName("24.24.24.24");

            // get() returns just the data for the associated record
            LookupResult result = reader.get(address, LookupResult.class);

            System.out.println(result.getCountry().getIsoCode());

            // getRecord() returns a DatabaseRecord class that contains both
            // the data for the record and associated metadata.
            DatabaseRecord<LookupResult> record
                = reader.getRecord(address, LookupResult.class);

            System.out.println(record.getData().getCountry().getIsoCode());
            System.out.println(record.getNetwork());
        }
    }

    public static class LookupResult {
        private final Country country;

        @MaxMindDbConstructor
        public LookupResult(
            @MaxMindDbParameter(name="country") Country country
        ) {
            this.country = country;
        }

        public Country getCountry() {
            return this.country;
        }
    }

    public static class Country {
        private final String isoCode;

        @MaxMindDbConstructor
        public Country(
            @MaxMindDbParameter(name="iso_code") String isoCode
        ) {
            this.isoCode = isoCode;
        }

        public String getIsoCode() {
            return this.isoCode;
        }
    }
}

You can also use the reader object to iterate over the database. The reader.networks() and reader.networksWithin() methods can be used for this purpose.

Reader reader = new Reader(file);
Networks networks = reader.networks(Map.class);

while (networks.hasNext()) {
    DatabaseRecord<Map<String, String>> record = networks.next();

    // The data for the record.
    Map<String, String> data = record.getData();

    // The network the record applies to.
    System.out.println(record.getNetwork());

    // ...
}
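To restrict iteration to a subnet, reader.networksWithin() can be used. A hedged sketch, assuming a recent maxmind-db version where networksWithin(network, includeAliasedNetworks, type) takes a com.maxmind.db.Network; the database path and the 81.2.69.0/24 subnet are placeholders:

```java
import com.maxmind.db.DatabaseRecord;
import com.maxmind.db.Network;
import com.maxmind.db.Networks;
import com.maxmind.db.Reader;

import java.io.File;
import java.io.IOException;
import java.net.InetAddress;
import java.util.Map;

public class IterateSubnet {
    public static void main(String[] args) throws IOException {
        File database = new File("/path/to/database/GeoIP2-City.mmdb");
        try (Reader reader = new Reader(database)) {
            // Iterate only the records that fall inside 81.2.69.0/24.
            Network subnet = new Network(InetAddress.getByName("81.2.69.0"), 24);
            Networks networks = reader.networksWithin(subnet, false, Map.class);

            while (networks.hasNext()) {
                DatabaseRecord<Map<String, String>> record = networks.next();
                System.out.println(record.getNetwork() + " => " + record.getData());
            }
        }
    }
}
```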

Caching

The database API supports pluggable caching (by default, no caching is performed). A simple implementation is provided by com.maxmind.db.CHMCache. Using this cache, lookup performance is significantly improved at the cost of a small (~2MB) memory overhead.

Usage:

Reader reader = new Reader(database, new CHMCache());

Please note that the cache will hold references to the objects created during the lookup. If you mutate the objects, the mutated objects will be returned from the cache on subsequent lookups.

Multi-Threaded Use

This API fully supports use in multi-threaded applications. In such applications, we suggest creating one Reader object and sharing that among threads.
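For illustration, a minimal sketch of that pattern (the pool size, database path, and sample IPs are placeholders):

```java
import com.maxmind.db.Reader;

import java.io.File;
import java.io.IOException;
import java.net.InetAddress;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedReaderExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        // One Reader instance, shared by all worker threads.
        try (Reader reader = new Reader(new File("/path/to/database/GeoIP2-City.mmdb"))) {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (String ip : new String[] {"24.24.24.24", "81.2.69.142"}) {
                pool.submit(() -> {
                    try {
                        // Reader.get() is safe to call concurrently.
                        Map<?, ?> result = reader.get(InetAddress.getByName(ip), Map.class);
                        System.out.println(ip + " => " + result);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }
}
```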

Common Problems

File Lock on Windows

By default, this API uses the MEMORY_MAP mode, which memory-maps the file. On Windows, this may create an exclusive lock on the file that prevents it from being renamed or deleted. Due to the implementation of memory mapping in Java, this lock is not released when the Reader is closed; it is only released when the Reader and the MappedByteBuffer it uses are garbage collected. Older JVM versions may also fail to release the lock on exit.

To work around this problem, use the MEMORY mode or try upgrading your JVM. You may also call System.gc() after dereferencing the Reader object to encourage the JVM to garbage collect sooner.

Packaging Database in a JAR

If you are packaging the database file as a resource in a JAR file using Maven, you must disable binary file filtering. Failure to do so will result in InvalidDatabaseException exceptions being thrown when querying the database.
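One way to do this, sketched here with the standard maven-resources-plugin (the mmdb extension entry is the relevant part; adapt to your build as needed), is to exclude the extension from filtering:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <configuration>
        <nonFilteredFileExtensions>
            <nonFilteredFileExtension>mmdb</nonFilteredFileExtension>
        </nonFilteredFileExtensions>
    </configuration>
</plugin>
```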

Format

The MaxMind DB format is an open format for quickly mapping IP addresses to records. The specification is available, as is our Perl writer for the format.

Bug Tracker

Please report all issues with this code using the GitHub issue tracker.

If you are having an issue with a MaxMind database or service that is not specific to this reader, please contact MaxMind support.

Requirements

This API requires Java 11 or greater.

Contributing

Patches and pull requests are encouraged. Please include unit tests whenever possible.

Versioning

The MaxMind DB Reader API uses Semantic Versioning.

Copyright and License

This software is Copyright (c) 2014-2022 by MaxMind, Inc.

This is free software, licensed under the Apache License, Version 2.0.

maxmind-db-reader-java's People

Contributors

andyjack, asnare, autarch, aweigold, borisz, dependabot-preview[bot], dependabot[bot], faktas2, horgh, kevcenteno, nchelluri, oalders, oschwald, phraktle, rafl, shadromani, spkrka, ugexe, umanshahzad, wesrice

maxmind-db-reader-java's Issues

Enable basic sanity check for new database files

Hello,

A couple of days ago, I encountered an exception on every get(InetAddress) call against a GeoIP2 City database file dated June 11, 2017:

(screenshot of the exception attached)

For now, I don't see any (public) method that could allow some sanity check on a database file.

The MD5 hash computed on the downloaded file matched the one available on your website.

Apart from testing a new database file against some known values, how can we ensure that we can safely switch to a valid database? This is very important for production systems.

Thanks,

Dimitri

Database reader does not work for mmdb files larger than 2 GB

Trying to create a new reader (with or without a cache) for a 2.5 GB database:

new Reader(file, new CHMCache());

Results in the following exception:

java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
	at java.base/sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:1183)
	at com.maxmind.db.BufferHolder.<init>(BufferHolder.java:31)
	at com.maxmind.db.Reader.<init>(Reader.java:119)
	at com.maxmind.db.Reader.<init>(Reader.java:69)
	at app.service.geoip.LocalFilesystemGeoipDataProvider.loadDatabaseReader(LocalFilesystemGeoipDataProvider.java:72)

A quick search of the error message reveals that, because ByteBuffer uses int-based indexing, a mapped buffer is limited to 2 GB.

Is there any workaround or fix for this or is the reader simply unusable with a larger database?

Support for async caching

We have a Java class (called GeoIpService) that is a wrapper around https://github.com/maxmind/GeoIP2-java and exposes a simple API to lookup up geo information for a given IP (e.g. lookup city, lookup country etc.). This class in turn uses DatabaseReader to read from various maxmind databases, and sets up caching for each database used (we have an implementation of NodeCache that uses Caffeine under the hood).

The GeoIpService class is distributed internally via a library and is used by several high-traffic microservices that have a need to geolocate an IP address. The issue we observe under production load is high numbers of blocking calls for our implementation of NodeCache.get(key, loader), which in turn calls https://github.com/ben-manes/caffeine/blob/a03e95ddc69e03e2ca205b9d2ed08c89f3235a32/caffeine/src/main/java/com/github/benmanes/caffeine/cache/Cache.java#L55-L81.

Caffeine provides async caching; however, we can't really switch to it at the moment because we are limited by the current NodeCache interface.

Could caching support for maxmind databases be extended to include an async interface?

Huge contention on BufferHolder.get()

Hi everyone.

We are using DatabaseReader in a highly loaded environment and noticed significant blocking time on this line.
If I understand correctly, the byte buffer is duplicated and only used locally on the method's stack.
Why is it synchronized? Are there any workarounds?

Thanks!

Unsafe memory access operation

I read through the earlier report of the same error, #81.
We are using the memory-mapped file mode. The issue shows up on one of the containers, where some calls fail while a few pass.

org.springframework.web.util.NestedServletException: Async processing failed; nested exception is java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod$ConcurrentResultHandlerMethod.lambda$new$0(ServletInvocableHandlerMethod.java:223)
	at java.base/jdk.internal.reflect.GeneratedMethodAccessor312.invoke(Unknown Source)

It does not look like the underlying file failed to load, since otherwise no request after the failure would have succeeded.
Any pointers on why this occurs and persists on one of the containers? Was the file reference incorrect? If that were the case, none of the IP lookups would have succeeded.

For now we had to restart the container to fix the issue.

Synchronization on database query

Hi

When invoking Reader's get method, I noticed that it reaches a synchronized method, BufferHolder.get(), which has a performance impact when we use several threads.
I do not understand why this synchronization is needed; you are just duplicating a read-only value.

In general, since everything is in memory (I am working in MEMORY file mode) - it seems weird that you have to synchronize between threads.
Is there is a way to use the API without having this synchronization?

Thanks!
Doron.

Don't use ranges for the Jackson dependency

https://github.com/maxmind/MaxMind-DB-Reader-java/blob/v1.2.1/pom.xml#L40

Using a range for the Jackson dependency causes snapshots to be pulled in. Maven's range capabilities seem somewhat broken, since using a deployed, pinned version of the MaxMind reader pulls in jackson 2.9.0-SNAPSHOT as of today.

This really hurts the determinism of your artifacts, and depending on snapshots is in general bad juju for non-beta libraries.

From the following SO discussion, the consensus seems to be: don't use ranges.

Memory leak exception

I am running a web application in Tomcat, using the geoip2 0.7.1 Java driver. It throws:

SEVERE: The web application [/anant] created a ThreadLocal with key of type
[com.maxmind.db.ThreadBuffer](value [com.maxmind.db.ThreadBuffer@8a64bb2]) and a value
of type [java.nio.DirectByteBufferR](value [java.nio.DirectByteBufferR[pos=12545442
lim=28655468 cap=28655468]]) but failed to remove it when the web application was
stopped. Threads are going to be renewed over time to try and avoid a probable memory
leak.

Refer:

http://stackoverflow.com/questions/23801992/memory-leak-metrics-meter-tick-thread-and-new-i-o-client-worker

Is it possible to iterate over all records of a GeoIP database

Hi! We need to get all records contained in a database file, something like:
"Network/mask", "Attributes..."
We were able to do that with the CSV exports of the GeoLite2 database. Now that we are using GeoIP2 (which is a binary format), we can no longer do that.

Is this possible with the current API? Or would you be able to add such an API?

Improve error handling in Decoder.decodeMapIntoMap()

Hi team,

As part of an internal project for the company I work for, we are using the MaxMind DB reader with our own custom schema. Due to a design oversight, I hit a situation where our MMDB file contained a map of floats while we were expecting a map of integers. When the DB reader attempts to decode the map, a ClassCastException is thrown while trying to cast float to integer in Decoder.decodeMapIntoMap().

java.lang.ClassCastException: Cannot cast java.lang.Double to java.lang.Integer
	at java.base/java.lang.Class.cast(Class.java:3889)
	at com.maxmind.db.Decoder.decodeMapIntoMap(Decoder.java:375)
	at com.maxmind.db.Decoder.decodeMap(Decoder.java:338)
	at com.maxmind.db.Decoder.decodeByType(Decoder.java:162)
	at com.maxmind.db.Decoder.decode(Decoder.java:151)
	at com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:429)
	at com.maxmind.db.Decoder.decodeMap(Decoder.java:341)
	at com.maxmind.db.Decoder.decodeByType(Decoder.java:162)
	at com.maxmind.db.Decoder.decode(Decoder.java:151)
	at com.maxmind.db.Decoder.decode(Decoder.java:76)
	at com.maxmind.db.Reader.resolveDataPointer(Reader.java:274)
	at com.maxmind.db.Reader.getRecord(Reader.java:183)

In this case, the above ClassCastException is propagated all the way up to the point where Reader.getRecord() was called.

I notice, however, that if I create a similar data type discrepancy in a Plain Old Java Object (POJO), the decoder hits a DeserializationException in Decoder.decodeMapIntoObject().

Caused by: com.maxmind.db.DeserializationException: Error creating object of type: Location -  argument type mismatch in latitude MMDB Type: java.lang.Double Java Type: java.lang.Integer argument type mismatch in longitude MMDB Type: java.lang.Double Java Type: java.lang.Integer
	at com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:452)
	at com.maxmind.db.Decoder.decodeMap(Decoder.java:341)
	at com.maxmind.db.Decoder.decodeByType(Decoder.java:162)
	at com.maxmind.db.Decoder.decode(Decoder.java:151)
	at com.maxmind.db.Decoder.decode(Decoder.java:89)
	at com.maxmind.db.NoCache.get(NoCache.java:17)
	at com.maxmind.db.Decoder.decode(Decoder.java:116)
	at com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:429)
	at com.maxmind.db.Decoder.decodeMap(Decoder.java:341)
	at com.maxmind.db.Decoder.decodeByType(Decoder.java:162)
	at com.maxmind.db.Decoder.decode(Decoder.java:151)
	at com.maxmind.db.Decoder.decode(Decoder.java:76)
	at com.maxmind.db.Reader.resolveDataPointer(Reader.java:274)
	at com.maxmind.db.Reader.getRecord(Reader.java:183)

In this case, I see the existing error handling nicely converts the above exception into a more meaningful one. At the point where Reader.getRecord() was called, I saw:

com.maxmind.db.DeserializationException: Error getting record for IP /<test_ip_address> -  Error creating object of type: Location -  argument type mismatch in latitude MMDB Type: java.lang.Double Java Type: java.lang.Integer argument type mismatch in longitude MMDB Type: java.lang.Double Java Type: java.lang.Integer
	at com.maxmind.db.Reader.getRecord(Reader.java:186)

My suggestion would be to wrap the cast at Decoder.java#L375 in a try-catch and throw a DeserializationException with a relevant cause and message. This would still allow values to be cast where the cast is valid.

Please let me know your thoughts, or if I need to re-clarify anything that is ambiguous. I am additionally happy to contribute any code change as needed.

Thanks!

Thom

Deserialization error for IP 80.75.212.75

I am getting this exception on location mmdb January 7 2024:

com.maxmind.db.DeserializationException: Error getting record for IP /80.75.212.75

So far it only reproduces with this specific IP. Other IPs work fine. Could this record be corrupt in the DB?

Decoder NullPointerException masks underlying exception

If I have an mmdb file where one of the fields is the wrong type (in this case, lat and lon were stored as strings rather than doubles), then Decoder attempts to throw this exception:

java.lang.IllegalArgumentException: argument type mismatch
	at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:65)
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:486)
	at [email protected]/com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:441)
	at [email protected]/com.maxmind.db.Decoder.decodeMap(Decoder.java:341)
	at [email protected]/com.maxmind.db.Decoder.decodeByType(Decoder.java:162)
	at [email protected]/com.maxmind.db.Decoder.decode(Decoder.java:151)
	at [email protected]/com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:434)
	at [email protected]/com.maxmind.db.Decoder.decodeMap(Decoder.java:341)
	at [email protected]/com.maxmind.db.Decoder.decodeByType(Decoder.java:162)
	at [email protected]/com.maxmind.db.Decoder.decode(Decoder.java:151)
	at [email protected]/com.maxmind.db.Decoder.decode(Decoder.java:76)
	at [email protected]/com.maxmind.db.Reader.resolveDataPointer(Reader.java:411)
	at [email protected]/com.maxmind.db.Reader.getRecord(Reader.java:185)
	at [email protected]/com.maxmind.geoip2.DatabaseReader.get(DatabaseReader.java:280)
	at [email protected]/com.maxmind.geoip2.DatabaseReader.getCity(DatabaseReader.java:365)
	at [email protected]/com.maxmind.geoip2.DatabaseReader.tryCity(DatabaseReader.java:359)

Then Decoder attempts to format a nice message explaining the problem to the user, but in doing so throws a NullPointerException because some optional fields are null (in this case, average_income is null):

java.lang.NullPointerException: Cannot invoke "Object.getClass()" because "parameters[index]" is null
	at [email protected]/com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:450)
	at [email protected]/com.maxmind.db.Decoder.decodeMap(Decoder.java:341)
	at [email protected]/com.maxmind.db.Decoder.decodeByType(Decoder.java:162)
	at [email protected]/com.maxmind.db.Decoder.decode(Decoder.java:151)
	at [email protected]/com.maxmind.db.Decoder.decodeMapIntoObject(Decoder.java:434)
	at [email protected]/com.maxmind.db.Decoder.decodeMap(Decoder.java:341)
	at [email protected]/com.maxmind.db.Decoder.decodeByType(Decoder.java:162)
	at [email protected]/com.maxmind.db.Decoder.decode(Decoder.java:151)
	at [email protected]/com.maxmind.db.Decoder.decode(Decoder.java:76)
	at [email protected]/com.maxmind.db.Reader.resolveDataPointer(Reader.java:411)
	at [email protected]/com.maxmind.db.Reader.getRecord(Reader.java:185)
	at [email protected]/com.maxmind.geoip2.DatabaseReader.get(DatabaseReader.java:280)
	at [email protected]/com.maxmind.geoip2.DatabaseReader.getCity(DatabaseReader.java:365)
	at [email protected]/com.maxmind.geoip2.DatabaseReader.tryCity(DatabaseReader.java:359)

So the user never gets the message about a field being stored with the wrong type and is instead confused by a NullPointerException. I think line 450 probably ought to just skip any parameters where parameters[index] is null.

More information on how to produce a file that caused this is at https://discuss.elastic.co/t/geoip-challenges-custom-city-mmdb/355173/13.

Unsafe memory access operation

We're using GeoIP (licensed db).

The following code is run in multiple threads (the static cityDatabaseReader field is shared).
The reloadDatabase() method is called periodically: first from the constructor to initialize the db, and then from a scheduled call in the parent class to refresh the database.

When the code is initially run it works as expected, but on the second reloadDatabase() call (from the scheduler in the parent class) it crashes with the following error:

java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code

public class GeoAppender extends ContextRefresher {
    @SuppressWarnings("NullAway.Init")
    private static volatile DatabaseReader cityDatabaseReader;

    public GeoAppender() {
        super(Duration.ofHours(4));

        reloadDatabase();
    }

    @Override
    public void process(InetAddress ip) {
        try {
            CityResponse cityResponse = cityDatabaseReader.city(ip);
            ...
        } catch (GeoIp2Exception | IOException ignored) {
        }

        ...
    }

    private void reloadDatabase() {
        File file = new File("path/to/city.mmdb");

        try {
            cityDatabaseReader = new DatabaseReader
                .Builder(file)
                .withCache(new CHMCache())
                .build();
        } catch (IOException exception) {
            throw new RuntimeException(exception.getMessage());
        }
    }

}
Exception in thread "my.app.StreamThread-24" java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
	at java.base/java.nio.Buffer.checkIndex(Buffer.java:681)
	at java.base/java.nio.DirectByteBuffer.get(DirectByteBuffer.java:269)
	at com.maxmind.db.Reader.readNode(Reader.java:223)
	at com.maxmind.db.Reader.getRecord(Reader.java:170)
	at com.maxmind.geoip2.DatabaseReader.get(DatabaseReader.java:273)
	at com.maxmind.geoip2.DatabaseReader.getOrThrowException(DatabaseReader.java:240)
	at com.maxmind.geoip2.DatabaseReader.city(DatabaseReader.java:330)
	at my_ns..GeoAppender.process(GeoAppender.java:52)
	at my_ns..GeoAppender.process(GeoAppender.java:16)

Unable to read custom database

I've created a test database and I'm trying to use it with the Java API, but I always get the following error:

Exception in thread "main" com.maxmind.db.InvalidDatabaseException: The MaxMind DB file's search tree is corrupt: contains pointer larger than the database.
	at com.maxmind.db.Reader.resolveDataPointer(Reader.java:243)
	at com.maxmind.db.Reader.get(Reader.java:150)
	at my.Test.main(Test.java:12)

I use the following code for reading:

Reader reader = new Reader(new File("test.mmdb.gz"));
InetAddress inetAddr = InetAddress.getByName("1.1.1.1");
JsonNode res = reader.get(inetAddr);
System.out.println(res);

This is the code used to write the custom database:

#!/usr/bin/env perl
use strict;
use warnings;
use feature qw( say );
use local::lib 'local';
use MaxMind::DB::Writer::Tree;
use Net::Works::Network;
use String::Random;

sub _universal_map_key_type_callback {
  my $map = {
      word1                       => 'utf8_string',
      word2                       => 'utf8_string',
      word3                       => 'utf8_string',
  };
  my $callback = sub {
      my $key = shift;

      return $map->{$key} || die <<"ERROR";
Unknown tree key '$key'.
The universal_map_key_type_callback doesn't know what type to use for the passed
key.  If you are adding a new key that will be used in a frozen tree / mmdb then
you should update the mapping in both our internal code and here.
ERROR
  };
  return $callback;
}
my $filename = 'test.mmdb';
my $type = 'test';
my $tree = MaxMind::DB::Writer::Tree->new(
  record_size   => 28,
  ip_version    => 4,
  database_type => $type,
  languages     => [ 'en'],
  description   => {
      en => "TEST database"
  },
  map_key_type_callback => _universal_map_key_type_callback(),
);
my $rnd = new String::Random;
my $network = Net::Works::Network->new_from_string( string => '1.1.1.0/24' );
$tree->insert_network( $network, {
  word1 => 'Test 1',
  word2 => 'Test 2',
  word3 => $rnd->randpattern("CCcc!ccn")
});
$network = Net::Works::Network->new_from_string( string => '2.2.2.0/24' );
$tree->insert_network( $network, {
  word1 => 'Test 1',
  word2 => 'Test 2',
  word3 => $rnd->randpattern("CCcc!ccn")
});
open my $fh, '>:raw', $filename;
$tree->write_tree( $fh );
close $fh;
say "$filename has now been created";

The same database works perfectly in Perl and Python:

#!/usr/bin/env perl

use strict;
use warnings;
use feature qw( say );
use local::lib 'local';

use Data::Printer;
use MaxMind::DB::Reader;
use Net::Works::Address;

my $ip = shift @ARGV or die 'Usage: perl test-read.pl [ip_address]';

my $reader = MaxMind::DB::Reader->new( file => 'test.mmdb' );

say 'Description: ' . $reader->metadata->{description}->{en};

my $record = $reader->record_for_address( $ip );
say np $record;
#!/usr/bin/env python

import maxminddb
reader = maxminddb.open_database('test.mmdb')
print reader.get('1.1.1.1')
reader.close()

I don't understand what I did wrong.

examples.zip

Cache eviction for CHMCache, or another default cache

Right now CHMCache has no eviction policy: once it fills up, it never changes. But our GeoIP lookups are not just clustered (so benefiting from a cache) but also time-varying (benefiting from the cache changing over time). E.g., from around 0:00 to 6:00 UTC, we switch from mostly-European to mostly-North-American IPs.

I understand that CHMCache is supposed to be mostly an example of how to write a cache, but we're dealing with applications written in JRuby, and that makes writing our own cache a little more painful and a lot slower.

Would it be possible to offer a default cache with an eviction policy?

ArrayIndexOutOfBoundsException in tryCity method from maxmind

Dear maxmind developers,

We are using the latest lib:

    <dependency>
        <groupId>com.maxmind.geoip2</groupId>
        <artifactId>geoip2</artifactId>
        <version>2.13.0</version>
    </dependency>

From time to time we observe errors from the MaxMind lib; you can see the stack trace below:

"stack_trace":"java.lang.ArrayIndexOutOfBoundsException: Index 34 out of bounds for length 16\n\tat com.maxmind.db.Decoder$Type.get(Decoder.java:53)\n\tat com.maxmind.db.Decoder.decode(Decoder.java:129)\n\tat com.maxmind.db.Decoder.decode(Decoder.java:88)\n\tat com.maxmind.db.Reader.resolveDataPointer(Reader.java:256)\n\tat com.maxmind.db.Reader.getRecord(Reader.java:176)\n\tat com.maxmind.geoip2.DatabaseReader.get(DatabaseReader.java:271)\n\tat com.maxmind.geoip2.DatabaseReader.tryCity(DatabaseReader.java:334)

Analysing the code, the error originates in Decoder, specifically in this part:

static enum Type {
    EXTENDED,
    POINTER,
    UTF8_STRING,
    DOUBLE,
    BYTES,
    UINT16,
    UINT32,
    MAP,
    INT32,
    UINT64,
    UINT128,
    ARRAY,
    CONTAINER,
    END_MARKER,
    BOOLEAN,
    FLOAT;

    static final Decoder.Type[] values = values();

    private Type() {
    }

    static Decoder.Type get(int i) {
        return values[i];
    }
}

Sometimes the i parameter is 36, but there are only 16 fixed values in the enum.

Thanks

Missing SNAPSHOT-Dependency com.fasterxml:oss-parent:pom:28

I get the following error message when I try to build my project:

Failed to execute goal on project xxx::
Could not resolve dependencies for project xxx:
Failed to collect dependencies at com.maxmind.geoip2:geoip2:jar:2.8.0
-> com.maxmind.db:maxmind-db:jar:1.2.1
-> com.fasterxml.jackson.core:jackson-databind:jar:2.9.0-SNAPSHOT:
Failed to read artifact descriptor for com.fasterxml.jackson.core:jackson-databind:jar:2.9.0-SNAPSHOT:
Failure to find com.fasterxml:oss-parent:pom:28 in https://repo.maven.apache.org/maven2 was cached in the local repository,
resolution will not be reattempted until the update interval of central has elapsed or updates are forced

The reason is that the parent of

	<project>
		<groupId>com.maxmind.db</groupId>
		<artifactId>maxmind-db</artifactId>
		<version>1.2.1</version>
	</project>

is

	<parent>
		<groupId>org.sonatype.oss</groupId>
		<artifactId>oss-parent</artifactId>
		<version>7</version>
	</parent>

which defines the repo

	<repository>
		<id>sonatype-nexus-snapshots</id>
		<name>Sonatype Nexus Snapshots</name>
		<url>https://oss.sonatype.org/content/repositories/snapshots</url>
		<releases>
			<enabled>false</enabled>
		</releases>
		<snapshots>
			<enabled>true</enabled>
		</snapshots>
	</repository>

and so I end up needing a version of Jackson that is not yet officially available on Maven Central.

Feature Inquiry: Native extension similar to Python?

The Python version of this library has a native extension (compiled against libmaxminddb) that improves performance of the library. Is there a plan to implement such a native extension for this Java library?

Would such an extension even provide a performance boost in Java, or is the Java memory mapping already as fast as it can get?

If such an extension would provide a benefit, I'd like to propose it as a feature for an upcoming release.

Issue with subdivisions when upgrading from legacy to GeoIP2

Hi

There was a change in the region/subdivision values in GeoIP2: instead of FIPS 10-4 codes, it now returns ISO 3166-2 codes. This is problematic for us in terms of backward compatibility; we have the FIPS values saved in our DB and we are not sure how to upgrade the data to match the new API.

Is there any way, even if not totally accurate (I understand the geographical areas differ), to map between the old and new values? Any way to get the old values from the new API? I saw some issues about this here and there but could not find an answer.

Thanks!

High object churn in DB Reader

We are using the GeoIP2-java library to read from the GeoIP2 database file. Our throughput is rather high, with thousands of IPs resolved per second, and we noticed high object churn in the MaxMind APIs, with huge numbers of strings created only to be thrown away shortly after.

Are you aware of this issue? Are there any plans to improve the API in this respect, or any idea how we could improve this behaviour ourselves?
