
rakam-api's Introduction


Rakam

Rakam is an analytics platform that allows you to build your own analytics service.

Features / Goals

Rakam is a modular analytics platform that gives you a set of features to create your own analytics service.


We also provide a user interface for Rakam as a separate product called Rakam UI. With Rakam UI you can create custom reports with SQL, dashboards, and funnel and retention reports.

All these features ship in a single package: you just specify which modules you want to enable in a configuration file (config.properties) and Rakam does the rest. We also provide cloud deployment tools for scaling your Rakam cluster easily.

Deployment

If your event data-set fits on a single server, we recommend the PostgreSQL backend. Rakam collects all your events in row-oriented format in a PostgreSQL node, and all the features Rakam provides are supported in the PostgreSQL deployment type. Please note that we support PostgreSQL 11 because we use newer features such as partitioning and BRIN indexes for performance.

However, Rakam is designed to be highly scalable in order to handle heavy workloads. You can configure Rakam to send events to a distributed commit log such as Apache Kafka or Amazon Kinesis in serialized Apache Avro format, process the data on PrestoDB workers, and store it in a distributed filesystem in a columnar format.
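
As an illustration only, such a distributed deployment would be selected through the same configuration file rather than code changes. The property names below are hypothetical placeholders, not taken from this README; consult the relevant module's documentation for the actual keys:

# hypothetical sketch only -- the real property names may differ
event.store=kafka                      # route events to Kafka instead of storing in PostgreSQL
event.store.kafka.nodes=10.0.0.1:9092  # Kafka broker list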

Heroku

You can deploy Rakam to Heroku using the Heroku button; it provisions the Heroku PostgreSQL add-on for your app and uses the PostgreSQL deployment type.


Docker

Run the following commands to start a PostgreSQL server in a Docker container and the Rakam API in your local environment:

docker run -d --name rakam-db -e POSTGRES_PASSWORD=dummy -e POSTGRES_USER=rakam postgres:11.4
docker run --link rakam-db --name rakam -p 9999:9999 -e RAKAM_CONFIG_LOCK__KEY=mylockKey -e RAKAM_CONFIG_STORE_ADAPTER_POSTGRESQL_URL=postgres://rakam:dummy@rakam-db:5432/rakam buremba/rakam

After the container starts, visit http://127.0.0.1:9999 and follow the instructions. You can also register your local Rakam API with Rakam BI at http://app.rakam.io, or use the Rakam API directly. You may also consult the API documentation for details of the API.
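
For a quick smoke test you can post a single event over HTTP. This is a sketch, assuming the collect endpoint is /event/collect (check the API documentation); the payload shape follows the Python example in the issues further down this page, and WRITE_KEY stands in for your project's write key:

curl -X POST http://127.0.0.1:9999/event/collect \
  -H "Content-Type: application/json" \
  -d '{"api": {"api_key": "WRITE_KEY"}, "collection": "pageview", "properties": {"_user": "user-1", "_time": "2017-01-01T00:00:00"}}'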

We also provide a docker-compose definition for the PostgreSQL backend. Create a docker-compose.yml file with the definition below and run docker-compose -f docker-compose.yml up -d.

version: '2.1'
services:
  rakam-db:
    image: postgres:11.4
    environment:
      - POSTGRES_PASSWORD=dummy
      - POSTGRES_USER=rakam
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 5s
      timeout: 5s
      retries: 3
  rakam-api:
    image: buremba/rakam
    environment:
      - RAKAM_CONFIG_STORE_ADAPTER_POSTGRESQL_URL=postgres://rakam:dummy@rakam-db:5432/rakam
      - RAKAM_CONFIG_LOCK__KEY=mylockKey
    ports:
      - "9999:9999"
    depends_on:
      rakam-db:
        condition: service_healthy
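
Once the stack is up, a quick sanity check (the root endpoint serves the setup instructions mentioned above):

docker-compose -f docker-compose.yml ps   # rakam-db should report "healthy"
curl http://127.0.0.1:9999                # the Rakam API answers on port 9999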

You can set config variables for a Rakam instance using environment variables. Every property in the config.properties file can be set via an environment variable of the form RAKAM_CONFIG_PROPERTY_NAME, with the dots in the property name replaced by underscores. For example, to set store.adapter=postgresql you set the environment variable RAKAM_CONFIG_STORE_ADAPTER=postgresql. A dash (-) is replaced by a double underscore (__), so the environment variable RAKAM_CONFIG_LOCK__KEY corresponds to the lock-key config property.
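
A few concrete mappings, restating the rule above:

# store.adapter=postgresql          ->  RAKAM_CONFIG_STORE_ADAPTER=postgresql
# store.adapter.postgresql.url=...  ->  RAKAM_CONFIG_STORE_ADAPTER_POSTGRESQL_URL=...
# lock-key=mylockKey                ->  RAKAM_CONFIG_LOCK__KEY=mylockKey
export RAKAM_CONFIG_STORE_ADAPTER=postgresql
export RAKAM_CONFIG_LOCK__KEY=mylockKey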

The Dockerfile generates the config.properties file inside the container from the environment variables that start with the RAKAM_CONFIG prefix.

To set environment variables for the container you may use the -e flag of docker run, but we advise you to put all the environment variables in a file and use the --env-file flag when starting your container.

You can then share the same file among your Rakam containers. If the Dockerfile can't find any environment variable starting with RAKAM_CONFIG, it tries to connect to the PostgreSQL instance created with docker-compose.
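
For example, you could keep the variables from the docker run command above in a file (the name rakam.env here is arbitrary) and reuse it across containers:

# rakam.env -- one KEY=value pair per line, no quoting
RAKAM_CONFIG_LOCK__KEY=mylockKey
RAKAM_CONFIG_STORE_ADAPTER_POSTGRESQL_URL=postgres://rakam:dummy@rakam-db:5432/rakam

docker run --env-file rakam.env --link rakam-db --name rakam -p 9999:9999 buremba/rakam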

AWS (Terraform)

See https://github.com/rakam-io/rakam-api-terraform-aws.

The Terraform installer is the recommended way to deploy Rakam in production because it automatically handles most of the complexity, such as failover and load balancing.

Custom

  • Download Java 1.8 for your operating system.
  • Download the latest release from Bintray ([VERSION]/rakam-[VERSION]-bundle.tar.gz) and extract the package.
  • Modify the etc/config.properties file (sample for the PostgreSQL deployment type) and run bin/launcher start.
  • The launcher script accepts the following arguments: start|restart|stop|status|run. bin/launcher run starts Rakam in the foreground (see the examples below).
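
For example:

bin/launcher start    # start Rakam as a background daemon
bin/launcher status   # check whether Rakam is running
bin/launcher run      # start Rakam in the foreground, logging to the console
bin/launcher stop     # stop the daemon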

Building Rakam

You can try the master branch by pulling the source code from GitHub and building Rakam with Maven:

Requirements

  • Java 8
  • Maven 3.2.3+ (for building)

git clone https://github.com/rakam-io/rakam.git
cd rakam
mvn clean install package -DskipTests

Running the application locally

rakam/target/rakam-*-bundle/rakam-*/bin/launcher.py run --config rakam/target/rakam-*-bundle/rakam-*/etc/config.properties

Note that you need to modify the config.properties file in order to be able to start Rakam (sample for the PostgreSQL deployment type).
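
As an illustration, here is a minimal sketch of such a file for the PostgreSQL deployment type, built only from properties referenced elsewhere in this README; adjust the connection URL to your own database:

store.adapter=postgresql
store.adapter.postgresql.url=postgres://rakam:dummy@127.0.0.1:5432/rakam
lock-key=mylockKey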

Running Rakam in your IDE

Since we already use Maven, you can import Rakam into your IDE using the root pom.xml file. We recommend IntelliJ IDEA, as the core team uses it when developing Rakam. Here is a sample run configuration for executing Rakam in your IDE:

Main Class: org.rakam.ServiceStarter
VM Options: -ea -Xmx2G -Dconfig=YOUR_CONFIG_DIRECTORY/config.properties
Working directory: $MODULE_DIR$
Use classpath of module: rakam

Managed

We're also working on a managed Rakam cluster: we will deploy Rakam to our AWS accounts and manage it for you, so you don't need to worry about scaling, maintenance, or software updates. Please shoot us an email at [email protected] if you want to test our managed Rakam service.

Web application

This repository contains the Rakam API server, which lets you interact with Rakam through a REST interface. If you already have a frontend and have developed a custom analytics service based on Rakam, it's all you need.

However, we also developed the Rakam Web Application, which lets you analyze your user and event data-sets by performing SQL queries, visualizing your data in various charts, and creating (real-time) dashboards and custom reports. With the web application you can turn Rakam into an analytics web service similar to Mixpanel, Kissmetrics, or Localytics. Without it, the Rakam server is similar to Keen.io, with SQL as the query language and some extra features.

Another nice property of the Rakam web application is that it can serve as a BI (Business Intelligence) tool: you can disable the collect APIs, connect Rakam to your SQL database with the JDBC adapter, and use the Rakam application to query the data in your database. The Rakam Web Application offers various chart types, supports parameterized SQL queries, and provides custom pages that let you design pages from internal components.

Contribution

Currently I'm actively working on Rakam. If you want to contribute to the project or suggest an idea, feel free to fork it or create a ticket with your suggestions. I promise to respond ASAP. The purpose of Rakam is to be a generic data-analysis tool that can be a solution for many use cases. Rakam still needs a lot of work and will evolve based on people's needs, so your thoughts are important.

Acknowledgment

YourKit

We use the YourKit Java Profiler to monitor JVM instances and identify bugs and potential bottlenecks. Kudos to YourKit for supporting Rakam with its full-featured Java Profiler!

rakam-api's People

Contributors

buremba, gitter-badger, justayar, keremtiryaki, rameshbyndoor, sercanlir, waffle-iron


rakam-api's Issues

API documentation

We use apidoc to document the RESTful API, but the web interface it generates is not very helpful. Readme.io seems like a good choice and supports apidoc, but this feature is currently locked in my Readme.io account.

Cross Device Best Practice with RAKAM?

Hi,

we have a cross-device product. People create an account inside their app and continue onboarding within their web browser. Is there any documentation on how to handle cross-device tracking with Rakam?

cheers,

David

Failed to run Docker image

Hi,

I can't run Rakam with the Docker image; I get this error message:

root@vm~# docker run  -p 9998:9999 buremba/rakam
Starting PostgreSQL 9.4 database server: main.
Error: Could not find or load main class org.rakam.ServiceStarter

Live View page subsamples at low event rate

On the Live View page, with no filters, when I click "Subscribe new events", I'm able to see each event when I create events every three seconds:

import json
from datetime import datetime
from time import sleep

import numpy as np
import requests

# url, events, and users are defined elsewhere in the reporter's script
while True:
    response = requests.post(url, data=json.dumps(
        {"api": {"api_key": "WRITE_KEY"},
         "collection": np.random.choice(events),
         "properties": {"_time": str(datetime.today()),
                        "_user": np.random.choice(users)}}))
    print(response.text)
    sleep(3)

When I change this to a two-second delay between events, the Live View page starts choking, showing only about one in every three events.

Any faster than that, and no events show up at all.

Add multiple team members at a time

Currently, to add members, I have to add them one at a time. When I have requests from 20 users, adding members takes time. This is a one-time operation and not a critical or priority feature, but it definitely makes sense from a usability perspective.

Ability to show selected columns in filter dropdown of Funnel

  1. While building a funnel, people can currently see all columns of a collection in the filter dropdown. Some columns of a collection may contain values that are large, e.g. session_id or event_id, which we don't want to show in the filter dropdown. Instead we want a selected list of columns to appear in all collection filter dropdowns. This list could be configured as a comma-separated list in custom JSON that applies to all collections, or as a comma-separated list of columns to skip when rendering the filter dropdown. This kind of filtering is needed for the filters beside the funnel collection dropdown and, similarly, for the "custom connector field" text box under the advanced section, because there too we want to offer only specific columns as custom connectors to group by. Instead of calling this a custom connector, I feel calling it "Funnel by" makes more sense; people can easily understand and relate to that.

  2. We also want users to be able to see the possible values for a column once it is selected from the filter dropdown. For example, if I select "os" as a column in the filter dropdown and choose the "equals" operator, the value text field could show all distinct possible values for that column, so people don't have to explicitly write or remember them. For example, "os" could have values such as "android 1.0", "android 2.0", "iOS 6.0", "iOS 7.0", etc. This can probably be achieved by materializing a collection_filter table where we maintain distinct values for selected columns. The table needs to be repopulated (insert overwrite) every time new data is loaded. Materializing this collection_filter table should happen either on a schedule or at every data load; we can take either approach, but if we make it schedule-based, it becomes automatic, with no dependency on when data is pushed.

Show saved funnel and retention results when user comes back

  1. If a user has saved a funnel for a predefined date range and named it funnel_dt1, then comes back tomorrow and wants to see the saved funnel "funnel_dt1", it shows blank results, since the funnel is saved with only a name, not with the results, filter criteria, date range, etc. This is required because many analysts want to compare funnels they created earlier against new funnels to see how behavior changes. The same applies to retention. Basically, what users mean by "save" is that the output is saved: when they come back, they can check the old values, and the filters and date range in the calendar should be repopulated to the values selected in the old saved funnel query.

  2. In the saved funnel list section, the search box says "Search in reports", and when I type anything it should show me the list of saved funnels. Say users have created 100+ funnels; they want to find a funnel by typing its first two letters in the search box. Why should they search reports in the funnel section? If someone has created a custom funnel by writing their own SQL and saved it as a funnel in the Reports section, it should automatically show up in the Funnel section and appear when searching there.

Explorer Page Performance and Control

We need incremental materialization in the Explorer UI. As soon as a user enters the Explorer page, it shows a lot of stats about all collections. These data keep changing as data loads happen, so they should be incrementally pre-materialized rather than fetched from the raw Presto backend data every time.

The collections panel on the left side of the Explorer UI should show the top 10 collections by count. If I have 500 collections, we cannot show all of them.

On the right-hand side of the UI, we should limit the number of collections people can choose to see at a time, for example 10 or 20, and make this configurable. This restricts large, bad queries from hitting Presto, e.g. when people choose all collections, or when entering the Explorer UI fetches data for all collections (fetching data for all 500 collections on every page visit unnecessarily loads Presto).

Ability to disable a project when logged in as admin

When I log in to the Rakam UI as an admin, I would like an option to disable an already registered project so that users can no longer see or use it. This will help us promote rolling upgrades of projects, ensuring we can bring down one project and promote a new project for users to adopt.

Ability to show funnels by date range, grouped by day or hour

As an analyst, I created a funnel by selecting the collection, the related fields, and a date range from the date picker. Once the base funnel is created, I want to group by dimensions like OS, platform, etc., and then further see the distribution by day or hour.

Missing UI documentation

There is no documentation on how to start the UI service. The only hint comes from the RakamUIModule code, which mentions ui.enable, but nothing more.

Getting error

2016-09-21T13:51:57.037Z        INFO    main    org.rakam.bootstrap.Bootstrap
2016-09-21T13:51:57.037Z        WARN    main    org.rakam.bootstrap.Bootstrap   UNUSED PROPERTIES
2016-09-21T13:51:57.037Z        WARN    main    org.rakam.bootstrap.Bootstrap   plugin.user.storage.identifier_column=id
2016-09-21T13:51:57.037Z        WARN    main    org.rakam.bootstrap.Bootstrap
2016-09-21T13:51:57.878Z        ERROR   main    org.rakam.bootstrap.Bootstrap   Uncaught exception in thread main
com.google.inject.CreationException: Unable to create injector, see the following errors:

1) Configuration property 'plugin.user.storage.identifier_column=id' was not used
  at org.rakam.bootstrap.Bootstrap.lambda$initialize$6(Bootstrap.java:198)

1 error
        at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:466)
        at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:155)
        at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:107)
        at com.google.inject.Guice.createInjector(Guice.java:96)
        at org.rakam.bootstrap.Bootstrap.initialize(Bootstrap.java:212)
        at org.rakam.ServiceStarter.main(ServiceStarter.java:85)

Any reason for AGPL?

Hi Everyone,

Great work on the full-stack analytics engine; it's quite easy to use.

I would like to use it to do real-time visualization of server/app logs and need to embed it inside our system. (Pretty much convinced to use it with Kafka and Presto!)
Is there any particular reason for selecting the AGPL? Our company has a strict rule against it.

Thanks,
Mark

Refactor class packages

When I separated the modules, I didn't pay attention to the packages of the classes I moved between modules. rakam-spi is kind of messy right now, and classes should be moved to appropriate packages.

Postgresql leader election

Rakam needs a leader node for periodically updating continuous and materialized tables.

Rakam-presto-kafka uses Zookeeper for leader election, but we don't have that option for PostgreSQL.

Luckily, PostgreSQL has locks that can be used for leader election. All nodes try to acquire a lock when they start; the first node to acquire the lock becomes the leader. When it dies, the next one automatically acquires the lock.

Kudos: https://gist.github.com/andrewle/2395994
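
A minimal sketch of the idea using a PostgreSQL session-level advisory lock (pg_try_advisory_lock is a PostgreSQL built-in; the lock id 42 and database name rakam are arbitrary examples):

psql -d rakam -c "SELECT pg_try_advisory_lock(42);"
# returns 't' only on the node that becomes the leader; the session-level lock
# is released automatically when that node's connection dies, so a standby that
# retries the same call then acquires it and takes over.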

Multiple "funnel by" options via configuration instead of only user

Have a "funnel by" dropdown where we can configure predefined columns such as user, session, demandid, etc., on which the funnel query can group. These columns should then be excluded from the global group-by shown at the bottom, where the funnel graph is displayed, so that people don't group by the same column twice. Basically, exclude the "funnel by" fields from the group-by dropdown in the funnel graph UI.

Cannot update materialized view

Some of my reports fail because Rakam cannot update the materialized view.

Error while updating materialized table '(SELECT * from taskhero."$materialized_segment_leavers" UNION ALL SELECT "id" FROM (SELECT * FROM taskhero."_users" WHERE "_time" > to_timestamp(1495476964)) data WHERE (("last_active_at" - interval '7 days') < "signup_date") ) data': ERROR: cannot change materialized view "$materialized_segment_leavers"

Rakam Presto Raptor Drop Tables

When we ingest data into Presto as a new event collection and then try to drop the table, it does not delete the actual storage. We need to understand whether there are settings we should be using for this.

Custom Connector Field Does Not Retain Saved Values

  1. I noticed that if I save a funnel with custom connector fields and come back later, the custom connector checkbox shows unchecked, but the query is actually fired with the previously selected/saved custom connector field. It's confusing, and this is a bug: in the background it fires a group-by based on the custom connector field, whereas the user thinks it is a user-based group-by.

  2. Can we rename the custom connector label to "Funnel By"? The text box that shows the list of fields should be based on the previous JIRA, where we define which columns are available for "Funnel by" and which are not. This should be configurable in custom JSON to skip the fields to show.

Label for funnel when selecting a different custom connector

By default, Rakam assumes funnel queries are grouped by "_user" and accordingly labels the funnel results in the graph as "All Users". But when I select Session_Id as the custom connector field, it calculates the funnel by Session_Id yet still displays "All Users" as the label under the graph section. This is misleading.

Event explorer does not load

ERROR: column "_time" does not exist Position: 40

There is no error or stack trace in the server logs. Using the postgresql deployment type with the latest git version from 3/4.

Integration tests

  • Recipes
  • Real-time Reports (Should we test it for each deployment type, or just once with generic ANSI SQL?)
  • EventStore implementation (AWS Kinesis, Postgresql)

Exception when clicking "save segment" in the people section of the UI

Using postgresql deployment type, latest git version from 3/4

Stack trace:

2017-03-04T19:10:09.713725+00:00 app[web.1]: com.facebook.presto.sql.parser.ParsingException: line 1:66: no viable alternative at input '<EOF>'
2017-03-04T19:10:09.713726+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlParser$1.syntaxError(SqlParser.java:45)
2017-03-04T19:10:09.713727+00:00 app[web.1]: 	at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:65)
2017-03-04T19:10:09.713728+00:00 app[web.1]: 	at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:566)
2017-03-04T19:10:09.713729+00:00 app[web.1]: 	at org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:308)
2017-03-04T19:10:09.713729+00:00 app[web.1]: 	at org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:145)
2017-03-04T19:10:09.713730+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.booleanExpression(SqlBaseParser.java:5873)
2017-03-04T19:10:09.713731+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.booleanExpression(SqlBaseParser.java:5846)
2017-03-04T19:10:09.713731+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.querySpecification(SqlBaseParser.java:3737)
2017-03-04T19:10:09.713732+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.queryPrimary(SqlBaseParser.java:3458)
2017-03-04T19:10:09.713733+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.queryTerm(SqlBaseParser.java:3270)
2017-03-04T19:10:09.713734+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.queryNoWith(SqlBaseParser.java:3126)
2017-03-04T19:10:09.713734+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.query(SqlBaseParser.java:2599)
2017-03-04T19:10:09.713735+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.statement(SqlBaseParser.java:1273)
2017-03-04T19:10:09.713736+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.singleStatement(SqlBaseParser.java:231)
2017-03-04T19:10:09.713736+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlParser.invokeParser(SqlParser.java:92)
2017-03-04T19:10:09.713737+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlParser.createStatement(SqlParser.java:65)
2017-03-04T19:10:09.713738+00:00 app[web.1]: 	at org.rakam.plugin.MaterializedView.validateQuery(MaterializedView.java:50)
2017-03-04T19:10:09.713738+00:00 app[web.1]: 	at org.rakam.plugin.MaterializedView.<init>(MaterializedView.java:44)
2017-03-04T19:10:09.713739+00:00 app[web.1]: 	at org.rakam.postgresql.plugin.user.PostgresqlUserStorage.createSegment(PostgresqlUserStorage.java:125)
2017-03-04T19:10:09.713740+00:00 app[web.1]: 	at org.rakam.plugin.user.AbstractUserService.createSegment(AbstractUserService.java:58)
2017-03-04T19:10:09.713755+00:00 app[web.1]: 	at org.rakam.plugin.user.UserHttpService.createSegment(UserHttpService.java:227)
2017-03-04T19:10:09.713755+00:00 app[web.1]: 	at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:627)
2017-03-04T19:10:09.713756+00:00 app[web.1]: 	at org.rakam.server.http.JsonParametrizedRequestHandler.handleInternal(JsonParametrizedRequestHandler.java:126)
2017-03-04T19:10:09.713756+00:00 app[web.1]: 	at org.rakam.server.http.JsonParametrizedRequestHandler.lambda$handle$1(JsonParametrizedRequestHandler.java:90)
2017-03-04T19:10:09.713757+00:00 app[web.1]: 	at org.rakam.server.http.RakamHttpRequest.handleBody(RakamHttpRequest.java:254)
2017-03-04T19:10:09.713757+00:00 app[web.1]: 	at org.rakam.server.http.HttpServerHandler.handleBody(HttpServerHandler.java:160)
2017-03-04T19:10:09.713758+00:00 app[web.1]: 	at org.rakam.server.http.HttpServerHandler.channelRead(HttpServerHandler.java:112)
2017-03-04T19:10:09.713762+00:00 app[web.1]: 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
2017-03-04T19:10:09.713762+00:00 app[web.1]: 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
2017-03-04T19:10:09.713763+00:00 app[web.1]: 	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
2017-03-04T19:10:09.713763+00:00 app[web.1]: 	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
2017-03-04T19:10:09.713764+00:00 app[web.1]: 	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
2017-03-04T19:10:09.713764+00:00 app[web.1]: 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
2017-03-04T19:10:09.713765+00:00 app[web.1]: 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
2017-03-04T19:10:09.713765+00:00 app[web.1]: 	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
2017-03-04T19:10:09.713766+00:00 app[web.1]: 	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:823)
2017-03-04T19:10:09.713766+00:00 app[web.1]: 	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:339)
2017-03-04T19:10:09.713767+00:00 app[web.1]: 	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:255)
2017-03-04T19:10:09.713767+00:00 app[web.1]: 	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
2017-03-04T19:10:09.713768+00:00 app[web.1]: 	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
2017-03-04T19:10:09.713768+00:00 app[web.1]: 	at java.lang.Thread.run(Thread.java:745)
2017-03-04T19:10:09.713769+00:00 app[web.1]: Caused by: org.antlr.v4.runtime.NoViableAltException
2017-03-04T19:10:09.713769+00:00 app[web.1]: 	at com.facebook.presto.sql.parser.SqlBaseParser.booleanExpression(SqlBaseParser.java:5823)
2017-03-04T19:10:09.713770+00:00 app[web.1]: 	... 35 more

User Explorer does not work

Hi,

just a reminder about my issue with your "People" feature in Rakam. Have you already solved it?

cheers,

David

Tests

We need a simulation engine to send data and analyze it on the fly for testing.

Ability to coordinate cache and database for computation

Since we use sets for unique metrics, the data size can grow too large, so in-memory caching may not be a good solution in this case.
We can coordinate the cache and database layers for large sets (we will determine the limit by calculating free memory) in order to keep memory usage stable.

Add Druid backend

Druid is another option for an analytics-oriented backend. Any plans to support Druid?

Exception in /user/get

Related to rakam-io/rakam-ui#38

Running the latest Rakam from git as of 3/3.

Stack trace

2017-03-03T21:49:00.575333+00:00 app[web.1]: java.lang.NullPointerException
2017-03-03T21:49:00.575333+00:00 app[web.1]: 	at org.rakam.postgresql.plugin.user.AbstractPostgresqlUserStorage.setValues(AbstractPostgresqlUserStorage.java:649)
2017-03-03T21:49:00.575334+00:00 app[web.1]: 	at org.rakam.postgresql.plugin.user.AbstractPostgresqlUserStorage.lambda$getUser$10(AbstractPostgresqlUserStorage.java:613)
2017-03-03T21:49:00.575337+00:00 app[web.1]: 	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
2017-03-03T21:49:00.575338+00:00 app[web.1]: 	at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
2017-03-03T21:49:00.575338+00:00 app[web.1]: 	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
2017-03-03T21:49:00.575339+00:00 app[web.1]: 	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
2017-03-03T21:49:00.575340+00:00 app[web.1]: 	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
2017-03-03T21:49:00.575340+00:00 app[web.1]: 	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)


Custom Connector Field in Advanced Setting of Funnel Page

When I select the checkbox for the custom connector field, the text box is not editable immediately unless I uncheck it and check it back again. Then it shows the list of columns for a collection, but when I use the mouse to scroll down and select a value, it does not work; if I instead use the down-arrow key and press Enter on the selected value, it works. This happens in Safari Version 10.1 (12603.1.30.0.34).

Cannot use generate_series

The following query runs fine against my PostgreSQL database but crashes in Rakam:

SELECT * FROM generate_series(now() - interval '1 month', now(), interval '1 day')

line 1:30: extraneous input '(' expecting {<EOF>, '.', ',', 'ADD', 'AS', 'WHERE', 'GROUP', 'ORDER', 'HAVING', 'LIMIT', 'NO', 'SUBSTRING', 'POSITION', 'TINYINT', 'SMALLINT', 'INTEGER', 'DATE', 'TIME', 'TIMESTAMP', 'INTERVAL', 'YEAR', 'MONTH', 'DAY', 'HOUR', 'MINUTE', 'SECOND', 'ZONE', 'JOIN', 'CROSS', 'INNER', 'LEFT', 'RIGHT', 'FULL', 'NATURAL', 'FILTER', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'ROW', 'SCHEMA', 'VIEW', 'REPLACE', 'GRANT', 'REVOKE', 'PRIVILEGES', 'PUBLIC', 'OPTION', 'EXPLAIN', 'ANALYZE', 'FORMAT', 'TYPE', 'TEXT', 'GRAPHVIZ', 'LOGICAL', 'DISTRIBUTED', 'SHOW', 'TABLES', 'SCHEMAS', 'CATALOGS', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'UNION', 'EXCEPT', 'INTERSECT', 'TO', 'SYSTEM', 'BERNOULLI', 'POISSONIZED', 'TABLESAMPLE', 'ARRAY', 'MAP', 'SET', 'RESET', 'SESSION', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'WORK', 'ISOLATION', 'LEVEL', 'SERIALIZABLE', 'REPEATABLE', 'COMMITTED', 'UNCOMMITTED', 'READ', 'WRITE', 'ONLY', 'CALL', 'INPUT', 'CASCADE', 'RESTRICT', 'INCLUDING', 'EXCLUDING', 'PROPERTIES', 'NFD', 'NFC', 'NFKD', 'NFKC', 'IF', 'NULLIF', 'COALESCE', IDENTIFIER, DIGIT_IDENTIFIER, QUOTED_IDENTIFIER, BACKQUOTED_IDENTIFIER}
Executed Query: 
SELECT * FROM generate_series(now() - interval '1 month', now(), interval '1 day')
