
Kibana

NOTE: You probably don't want this repo! Kibana 2 is EOL. The latest version can be found at https://github.com/elastic/kibana

Copyright 2012 Rashid Khan <rashidkpc #kibana irc.freenode.net>

Kibana is a browser-based interface for Logstash and ElasticSearch that allows you to efficiently search, visualize, analyze, and otherwise make sense of your logs.

More information at http://www.kibana.org

Requirements

Base

  • ruby >= 1.8.7 (probably?)
  • bundler
  • logstash >= 1.1.0
  • elasticsearch >= 0.18.0

JRuby

  • java >= 1.6
  • warbler if you want to create an executable standalone war file

Installation

Install

  1. git clone --branch=kibana-ruby https://github.com/rashidkpc/Kibana.git
  2. cd Kibana
  3. gem install bundler
  4. bundle install

Configure
Set your elasticsearch server in KibanaConfig.rb:
Elasticsearch = "elasticsearch:9200"
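
For orientation, a minimal sketch of what KibanaConfig.rb contains. The Elasticsearch value matches the line above; the KibanaPort constant and its value of 5601 are an assumption inferred from the "Use" step below, so check the shipped KibanaConfig.rb for the authoritative list of settings:

```ruby
# Minimal KibanaConfig.rb sketch. Elasticsearch comes from the example
# above; KibanaPort = 5601 is an assumption based on the "Use" step.
module KibanaConfig
  # Host and port of the ElasticSearch server Kibana queries
  Elasticsearch = "elasticsearch:9200"

  # Port the Kibana web interface listens on (assumed, see lead-in)
  KibanaPort = 5601
end
```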

Run
ruby kibana.rb

Use
Point your browser at http://localhost:5601

JRuby

To run Kibana with JRuby, e.g. if you have to run it on a Windows machine, you can create an (executable) WAR archive.

git clone --branch=kibana-ruby https://github.com/rashidkpc/Kibana.git
cd Kibana
jruby -S gem install bundler
jruby -S bundle install

Configure your environment (see above), then build the archive with
jruby -S rake war, or with jruby -S warble executable war if you want the WAR to include a webserver (default: jetty).

Run: java [-Djetty.port=5601] -jar Kibana.war

Todo: Externalize the configuration. Any help would be appreciated.

FAQ

Q: Why is there no last button?
A: ElasticSearch isn't so hot at going to the last result of a many million result query.

Q: Why is this Ruby instead of PHP now?
A: Closer integration with Logstash, and Ruby is shiny. It's mostly JavaScript anyway. If you want it in something else, it shouldn't be too hard to port.

Q: Why do I have to set a limit on events to analyze?
A: Big result sets take a long time to retrieve from elasticsearch and parse out.

Q: Well then why don't you use the Elastic Search terms facet?
A: I've found the terms facet to cause out-of-memory crashes with large result sets. I don't know a way to limit the amount of memory a facet may use. Until there's a way to run a facet and know for sure it won't crash Elastic Search, I'm going to keep analysis features implemented in Ruby. I'm open to other suggestions though. I suggest you be careful with the Statistics mode: it's more stable than terms, and I try to detect when it might be dangerous, but it can still bite you.

Q: Why do some results not show up when I search for a string I know is in the elasticsearch indexes?
A: If you are searching analyzed fields, which is the default in ES for string fields, remember that they are broken down into terms. For instance, a search for "test" will match records containing test@bleh.com, since @ is a term boundary and the address is broken down into the terms "test" and "bleh.com". However, it will NOT match records where "test.com" is the full term (for example an address ending in @test.com), because you are searching for an exact match. You would need a wildcard query such as test* to match both of these records. Note you may also want to configure the ES analyze behavior for certain fields if this is not the desired behavior. Helpful references:

http://www.elasticsearch.org/guide/reference/mapping/core-types.html
http://www.elasticsearch.org/guide/reference/api/admin-indices-templates.html
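
The tokenization behavior described in that answer can be illustrated with a toy sketch. This is NOT the real ElasticSearch/Lucene analyzer, and the addresses are made-up example values; it only mimics the idea that "@" acts as a term boundary while "test.com" stays together as one term:

```ruby
# Toy sketch only: "@" splits a value into terms, dotted hostnames stay
# whole. The real analysis is done by ElasticSearch, not this code.
def toy_terms(value)
  value.split("@")
end

toy_terms("test@bleh.com")  # terms include "test", so a search for test matches
toy_terms("www.test.com")   # "test" is not a term on its own, so no match
```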

Q: How do I run Kibana under Apache?
A: There are a few samples in the sample/apache directory on how to do this.

Q: Kibana is great, but I want to make it so users have to authenticate in order to access it. How do I do that?
A: This can be handled a number of ways. The best way is to run Kibana with Passenger and Apache or Nginx; there are sample configurations in the sample directory. You can then handle your preferred authentication mechanism in Apache or Nginx.

Q: Where can I get some help with this?
A: Find me on Freenode - rashidkpc in #logstash


Issues

Add support for nested fields

First off, Kibana is spectacular! Right, onto the request.

I am currently messing around with creating log files with nested JSON structures. Logstash itself and ElasticSearch do not seem to have any trouble with them, but both Kibana and the Logstash WebUI do.

A sample of the JSON I am trying to display is here: https://gist.github.com/3694987

As you can see the field contains an array of hashes, each with various fields.

As a first step it would be lovely to see Kibana be able to display these hashes in a subtable of the field with names / values (at the moment it displays only "[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object]" ).

Ideally, in the future, it would be nice to be able to add/exclude records based on these fields like you can on the top-level fields (assuming that is even possible)

(I have not tried displaying an array of strings rather than hashes so I am not sure how Kibana handles them)

"console." debugging lines break JS execution

Using Firefox 3.6 without any extensions, the "console." lines throughout the JS code cause JS exceptions. Commenting them out, or installing the Firebug plugin solves the issue.

no search results, endless wait, no errors

I have hooked up Kibana according to the documentation successfully. I can run searches just fine on my logstash UI.
I fire up Kibana, try to execute the search and the "searching" animated graphic just spins forever.

I have PHP error output turned on, and nothing shows up.

Using CentOS 5 / Apache2 / PHP53

CSV File Time Range

The time range in the CSV file appears to be whatever is stored in the elasticsearch indices that intersect the from/to times. Also since the elasticsearch indices are based on UTC, a search for 15 minutes of logs gives you logs spanning multiple days (which does not feel right).

Timezone setting

Allow for user-configurable timezones. This will probably require some kind of cookie to coordinate the PHP and JavaScript. Open to other suggestions though.

Display JSON structured @fields

Logstash will happily pass along JSON structured @fields via many outputs (I've checked file, amqp, elasticsearch, but I imagine most outputs will work this way). I produce a few JSON @fields and would like to be able to see this in the web interface. Loggly does something similar and included this in their blog post a few months ago: http://loggly.com/blog/2011/06/on-the-way-to-impressive/ (specific screenshot URL: http://loggly.com/assets/4e24a17edabe9d56ea003ecb/status_code_json.png).

Here's a simple query on my elasticsearch showing a tiny bit of my JSON @field, "mdc":

--> curl -s http://localhost:9311/logstash-2012.07.06/_search?q=webapp | /tmp/pretty_print_json
{
    "_shards": {
        "failed": 0,
        "successful": 5,
        "total": 5
    },
    "hits": {
        "hits": [
            {
                "_id": "kYYc_eOrSHicqVogfP29Dg",
                "_index": "logstash-2012.07.06",
                "_score": 0.018579213000000001,
                "_source": {
                    "@fields": {
                        "instance": "us-qa-tomcat",
                        "level": "ERROR",
                        "logger": "servlet.jsp",
                        "mdc": {
                            "jsonRequestSnapshot": {
                                "cookies": {
                                    "JSESSIONID": "0106FA007566B27BD0395D57A1941533.us-qa_localhost",

Analyze should be able to analyze the current time period

Currently, 'score' will observe up to 20000 events, regardless of time period.

If this limitation is because the particular query is unsafe at high document counts, then I propose the following:

  • Use 'score' as-is to find a list of popular field values
  • Perform 'count' queries against each field value
  • Present those counts instead of the 'score' value

To better explain, maybe this will help -

If I do a 'score' analysis on @source_host, I see

  • "hits" (top right corner of the window) shows 450,000 hits for the 15 minute time period I observe
  • the top '@source_host' value is "examplefoo" which has a count listed in the score table of 11679
  • if I search for literally '@source_host:examplefoo' I see 310,285 hits

Does this make sense? I know a general 'count of hits' is sometimes estimated in large document search systems, but it seems better to use even an estimated value than to use "count of the last 20000 events".
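
The three steps proposed above could be sketched like this. Both helpers are stubs standing in for hypothetical ElasticSearch calls; "examplefoo" and its counts come from the report above, while "examplebar" and its counts are made-up illustration data:

```ruby
# Sketch of the proposed two-phase analysis (stubbed, no ES required).
def top_field_values(field, sample_size)
  # Step 1: run 'score' as-is over a bounded sample to find popular values.
  { "examplefoo" => 11_679, "examplebar" => 4_200 }.keys
end

def count_hits(field, value)
  # Step 2: run a cheap exact count query per field:value pair.
  { "examplefoo" => 310_285, "examplebar" => 95_000 }[value]
end

def analyze(field, sample_size = 20_000)
  # Step 3: present the real counts instead of the sampled 'score' counts.
  top_field_values(field, sample_size).map { |v| [v, count_hits(field, v)] }
end

analyze("@source_host")
# => [["examplefoo", 310285], ["examplebar", 95000]]
```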

Feature: Add+Rescore button in score/trend views

Perhaps add a button in the score views that allows you to create a new score with that item as the search query.

This would be equivalent to the search button (in that it adds to the query) but would take you back to the score view rather than the results view.

This would allow you to chase down the culprit tag (or any other array-type field) quickly.

There are hits, but Kibana doesn't display them. JS Errors

Can't call method 'concat' of undefined. Skitch link here with a screenshot.

https://skitch.com/arimus/ebrqk/kibana

Indeed, I can see that the "data" element is present in the JSON returned by one of the loader2.php calls but not the other; however, I'm not sure at which point Kibana expects the request to come back with the data element and at which point not.

https://skitch.com/arimus/ebrq7/kibana
https://skitch.com/arimus/ebrxp/kibana

Using Elasticsearch 0.19.3.

Feature Request: Truncate Fields at Configurable Length

I have some fields that are very long and cause one 'event' to take up 2 or 3 rows on my screen. It would be great if Kibana could truncate fields over n length and display "..." at the end, so I can 'expand' and see the entire field's contents if desired.

Example:

A good example is a 'username' field such as:
cn=Chris Decker, ou=blah, ou=blah2, ou=blah3, o=blah, c=blah, st=blah

Explanation:

When I'm skimming my logs I really only need to see the cn portion of the line, so I'd configure Kibana to show me the first 25 characters of this field.
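
A truncation helper along the requested lines could be this simple. It is only a sketch: the helper name and the length limit are hypothetical, not existing Kibana code, and a real version would read the limit from configuration:

```ruby
# Sketch: cut a field value to at most max_len characters and append
# "..." so the UI can offer an expand control.
def truncate_field(value, max_len)
  value.length > max_len ? "#{value[0, max_len]}..." : value
end

dn = 'cn=Chris Decker, ou=blah, ou=blah2, ou=blah3, o=blah, c=blah, st=blah'
truncate_field(dn, 25)  # => "cn=Chris Decker, ou=blah,..."
```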

No output in Stream at all

I'm experiencing stream problems while there are no errors in the logs. I keep receiving log messages from servers and all is OK, except that the stream page is completely empty.

Here's what I get when I switch the "stream" from 'Play' to 'Pause':

MY.IP - - [18/Sep/2012 11:06:33] "GET /api/stream/eyJzZWFyY2giOiIiLCJmaWVsZHMiOlsiQG1lc3NhZ2UiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOjkwMCwiZ3JhcGhtb2RlIjoiY291bnQifQ== HTTP/1.1" 200 - 0.0108
MY.IP - - [18/Sep/2012:11:06:33 CEST] "GET /api/stream/eyJzZWFyY2giOiIiLCJmaWVsZHMiOlsiQG1lc3NhZ2UiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOjkwMCwiZ3JhcGhtb2RlIjoiY291bnQifQ== HTTP/1.1" 200 0

When I restart Kibana I also get, right after startup (i.e. when I restart kibana + elasticsearch), the errors described in issue number 68, over here:
https://github.com/rashidkpc/Kibana/issues/68
but they disappear as soon as everything is online.

Update: I use the kibana-ruby latest-version branch from https://github.com/rashidkpc/Kibana.git

Startup script correction for use with chkconfig

On CentOS 6, for the provided startup script to work correctly with chkconfig, the following line must be added after line 1 and before line 11:

"# chkconfig: 2345 20 80"

Without it, running the command "chkconfig --add /etc/init.d/kibana" produces the following error message: "service kibana does not support chkconfig"

Score doesn't show all

I've noticed from time to time that the score feature doesn't show what I think it should.
I usually use @source_host -> score to sort out the host I want to focus on, but I've noticed that some hosts are missing from time to time.
If I do a search on the same host or file I will find it, but score won't.
If, after I have found the logs, I do a @source_host -> score, it will show.

Feature request: Multiple backends

At DreamHost we will have many separate elasticsearch clusters. I'd like to provide a single search interface.

I'll be working on implementing this, but just wanted to log this feature here.

Feature request: show any message in stream

We think it would make debugging easier if, on each cycle, the browser showed every message received and not just the latest 15.

Ideally there would also be a Clear button that acknowledges the shown messages and then shows new ones, if any.

Browser support

Define supported browsers. We know Firefox 3.6 doesn't work. Either support older browsers or display a message about browser support.

kibana-ruby extremely slow

First: thank you!
I really like Kibana.
The PHP version of Kibana loads in about 0.3 s.
The Ruby version takes 30 s and produces output like this:
http://pastebin.com/KrytfNBv
The browser says:
Oops! Something went terribly wrong.
I'm on Ubuntu 12.04 x86_64 with ruby 1.8.7.
I followed your install instructions, but it looks like something is missing.

ISO8601

How do I change the (faulty) date format to ISO 8601, or whatever format I like, in the Ruby version?
Shouldn't that be an option?

We Swedish nerds honour the use of ISO 8601 because it's the standard within the language :-)

Forced query/filter term in config

I send multiple customer environment logs to the same ES backend. I would like to have separate kibana installs which only see the specified customer data forced by tag, e.g. @tags:"Customer1"

I suppose this could be accomplished by a filter as well as appending a query, but that's what I would request: A way to force kibana to only show data containing a given tag, or other query term match.
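
A minimal sketch of forcing a configured term onto every query. FORCED_TERM and scoped_query are hypothetical names for illustration, not existing Kibana settings:

```ruby
# Sketch: scope every user query by a configured tag term.
# FORCED_TERM is a hypothetical config value (e.g. per-customer install).
FORCED_TERM = '@tags:"Customer1"'

def scoped_query(user_query)
  q = user_query.to_s.strip
  q.empty? ? FORCED_TERM : "(#{q}) AND #{FORCED_TERM}"
end

scoped_query('status:500')  # => '(status:500) AND @tags:"Customer1"'
scoped_query('')            # => '@tags:"Customer1"'
```

A filter clause on the ES request would achieve the same scoping more cheaply, as the report suggests; the string-rewrite above is just the simplest form of the idea.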

CSV max lines issue

Hello!

I get an error when I try to increase the 'export_show' field in the conf file.
'export_show' => 5000 works, but with anything higher I get an error, probably a timeout.

Any idea?

Issue when making a search

Hello!

I have some uri like :

/
/data/blabla/blabal
/data/blabla/balbalbalbal
etc...

When I try to make a search like Uri:"/"
I always get "no results found..."

Any ideas ?

When I change my php.ini file, Kibana can't work

Warning: date_default_timezone_get() [function.date-default-timezone-get]: It is not safe to rely on the system's timezone settings. You are required to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Asia/Chongqing' for 'CST/8.0/no DST' instead in /home/tao.song/kibana/config.php on line 92

Allow setting default_operator for searches

The default is 'OR', but in my case the common need is for the default operator to be 'AND'; logstash web forces 'AND' for all queries. Making this tunable is probably desirable.
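
For reference, default_operator is a real option of ElasticSearch's query_string query; exposing it as a Kibana config entry (that knob itself is hypothetical) would cover this request. A sketch of the query body Kibana would send:

```ruby
require 'json'

# Sketch: build an ES query_string body with a configurable operator.
# default_operator is a real ES option; the Kibana setting is hypothetical.
def build_query(q, operator = "AND")
  { query: { query_string: { query: q, default_operator: operator } } }
end

JSON.generate(build_query("foo bar"))
# => '{"query":{"query_string":{"query":"foo bar","default_operator":"AND"}}}'
```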

Show more/all entries in Microanalysis view

As of today there isn't really a good way to show a few more entries in the Microanalysis view.
Of course a "Show all" button would make the table huge, but maybe "Show [n] more" (show a few more) could be a bit more elegant.

Alternatively: a "Show all" link which points you to another place with some more page real estate could also be an idea, but that wouldn't really be a "Microanalysis" anymore, rather a "Megaanalysis" ;-)

Events displayed in reverse chronological order

Noticed events displayed in Kibana are in reverse chronological order, like below (see timestamp in the message field):

Time @message
09/20 11:30:20 11:30:20,243 DEBUG [Thread-2744] - record filterd in code by indicator :0
09/20 11:30:20 11:30:20,231 INFO [Thread-2744] - found 1 results
09/20 11:30:20 11:30:20,144 DEBUG [Thread-2750] - record filterd in code by indicator :0
09/20 11:30:20 11:30:20,131 INFO [Thread-2750] - found 1 results
09/20 11:30:20 11:30:20,130 INFO [Thread-2750] - query is :@Custom:ContentID:"3641009"
09/20 11:30:20 11:30:20,130 DEBUG [Thread-2750] - exluding:

However, the logstash UI preserves the event sequence (see timestamp in the event field):

timestamp event
2012-09-20T02:08:20.434000Z
12:08:20,020 DEBUG [Thread-2968] - maxSearchResults set to :1
2012-09-20T02:08:20.436000Z
12:08:20,020 DEBUG [Thread-2968] - exluding:
2012-09-20T02:08:20.438000Z
12:08:20,020 INFO [Thread-2968] - query is :@Custom:ContentID:"5715266"
2012-09-20T02:08:20.440000Z
12:08:20,021 DEBUG [Thread-2973] - maxSearchResults set to :1
2012-09-20T02:08:20.444000Z
12:08:20,021 DEBUG [Thread-2973] - exluding:
2012-09-20T02:08:20.447000Z
12:08:20,021 INFO [Thread-2973] - query is :@Custom:ContentID:"5716715"
2012-09-20T02:08:20.449000Z
12:08:20,021 DEBUG [Thread-2971] - maxSearchResults set to :1
2012-09-20T02:08:20.451000Z
12:08:20,021 DEBUG [Thread-2971] - exluding:
2012-09-20T02:08:20.453000Z
12:08:20,021 INFO [Thread-2971] - query is :@Custom:ContentID:"5712968"
2012-09-20T02:08:20.454000Z
12:08:20,021 DEBUG [Thread-2970] - maxSearchResults set to :1
2012-09-20T02:08:20.456000Z
12:08:20,021 DEBUG [Thread-2970] - exluding:
2012-09-20T02:08:20.458000Z
12:08:20,022 INFO [Thread-2970] - query is :@Custom:ContentID:"5712971"
2012-09-20T02:08:20.461000Z
12:08:20,022 DEBUG [Thread-2969] - maxSearchResults set to :1
2012-09-20T02:08:20.464000Z
12:08:20,022 DEBUG [Thread-2974] - maxSearchResults set to :1
2012-09-20T02:08:20.465000Z
12:08:20,022 DEBUG [Thread-2969] - exluding:
2012-09-20T02:08:20.469000Z
12:08:20,022 DEBUG [Thread-2974] - exluding:
2012-09-20T02:08:20.470000Z
12:08:20,022 INFO [Thread-2969] - query is :@Custom:ContentID:"5713112"
2012-09-20T02:08:20.474000Z
12:08:20,022 INFO [Thread-2974] - query is :@Custom:ContentID:"5713019"
2012-09-20T02:08:20.479000Z
12:08:20,022 INFO [Thread-2968] - found 1 results
2012-09-20T02:08:20.482000Z
12:08:20,023 INFO [Thread-2973] - found 1 results
2012-09-20T02:08:20.484000Z
12:08:20,023 INFO [Thread-2971] - found 1 results
2012-09-20T02:08:20.490000Z
12:08:20,024 INFO [Thread-2974] - found 1 results

Is there a config in Kibana to change the display to chronological order?

Changing Default Fields Shown

Hi,

I'm currently using Kibana to show my logs, but I'm having trouble increasing the number of fields shown by default. Currently it shows time and message.

Is there a way to have more fields displayed by default?

thanks

Stream doesn't work (always empty) and NoMethodError exception in the log

Hi. I'm just trying out Kibana and it looks good. Thank you!

Sadly the streaming doesn't work for me. Whenever I click on stream I get a page with a pause button on a black header bar and below that the current query string. Then nothing.

In the log I can see an exception being thrown every few seconds. I've appended the exception trace below.

Stats: Kibana 269f953 (latest), Ruby 1.8.7, ES 0.19.8. Our logstash indices are named logstash-%{@source}-%{+YYYY.MM}.

Tim.

NoMethodError - undefined method `[]' for nil:NilClass:
    kibana.rb:193:in `GET /api/stream/:hash/?:from?'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:1264:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:1264:in `compile!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:835:in `[]'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:835:in `route!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:851:in `route_eval'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:835:in `route!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:872:in `process_route'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:870:in `catch'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:870:in `process_route'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:834:in `route!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:833:in `each'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:833:in `route!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:936:in `dispatch!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:769:in `call!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:921:in `invoke'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:921:in `catch'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:921:in `invoke'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:769:in `call!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:755:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-protection-1.2.0/lib/rack/protection/xss_header.rb:22:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-protection-1.2.0/lib/rack/protection/base.rb:47:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-protection-1.2.0/lib/rack/protection/path_traversal.rb:16:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-protection-1.2.0/lib/rack/protection/json_csrf.rb:17:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-protection-1.2.0/lib/rack/protection/base.rb:47:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-protection-1.2.0/lib/rack/protection/xss_header.rb:22:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/session/abstract/id.rb:205:in `context'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/session/abstract/id.rb:200:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/logger.rb:15:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/commonlogger.rb:20:in `call_without_check'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:136:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:129:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/head.rb:9:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/methodoverride.rb:21:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/showexceptions.rb:21:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:99:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:1389:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:1471:in `synchronize'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:1389:in `call'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/handler/webrick.rb:59:in `service'
    /usr/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
    /usr/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
    /usr/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
    /usr/lib/ruby/1.8/webrick/server.rb:162:in `start'
    /usr/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
    /usr/lib/ruby/1.8/webrick/server.rb:95:in `start'
    /usr/lib/ruby/1.8/webrick/server.rb:92:in `each'
    /usr/lib/ruby/1.8/webrick/server.rb:92:in `start'
    /usr/lib/ruby/1.8/webrick/server.rb:23:in `start'
    /usr/lib/ruby/1.8/webrick/server.rb:82:in `start'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/handler/webrick.rb:13:in `run'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/base.rb:1350:in `run!'
    /home/tim/trunk/nub/comp/logview/ruby/lib/ruby/gems/1.8/gems/sinatra-1.3.3/lib/sinatra/main.rb:25
    kibana.rb:260
 86.47.232.121 - - [10/Sep/2012:07:03:16 CDT] "GET /api/stream/eyJzZWFyY2giOiIgQHNvdXJjZTpcInRsc3ZjdGxcIiIsImZpZWxkcyI6WyJAbWVzc2FnZSJdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6OTAwLCJncmFwaG1vZGUiOiJjb3VudCIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0= HTTP/1.1" 500 5483

The implicit "set :bind, '0.0.0.0'" is harmful

Because Kibana only allows setting the port, it binds the socket to the 0.0.0.0 address.

This is just a trivial security issue, so:

  • can I have a config option for setting the listening address?
  • can the default listening address be 127.0.0.1 or ::1?
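
A sketch of how the default could move to loopback while staying overridable. KibanaHost is a hypothetical config constant, not an existing setting; in kibana.rb the chosen value would then be applied with Sinatra's set :bind:

```ruby
# Sketch: default the listening address to loopback unless configured.
# KibanaHost is hypothetical; in kibana.rb the result would be passed to
# Sinatra via `set :bind, listen_address`.
module KibanaConfig
  # KibanaHost = "0.0.0.0"   # uncomment to listen on all interfaces
end

def listen_address(config = KibanaConfig)
  config.const_defined?(:KibanaHost) ? config.const_get(:KibanaHost) : "127.0.0.1"
end

listen_address  # => "127.0.0.1"
```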

Feature request: Charting numeric statistical facet

I'm capturing application logs in my logstash which contain execution time. The statistical facet provides min, max, average and more values when using a numeric field in elasticsearch. It would be nice to add a new option in addition to trend and score for numerical fields which would somehow chart this data.

In this case it may not be a time chart but just a simple table with the statistical values, or it could take an increment period so you could ask for a statistical graph of averages broken down in 10 minute increments.

Here is an example elasticsearch query, assuming a custom field named "time" (note: this field must be defined as a numeric type in the elasticsearch index).

POST http://server:9200/_all/_search
{"query" : { "text" : {"@message" : "the_search_filter"} },
 "facets" : {"time_facet" : { "statistical" : {"field" : "time"} }}}

Results:
{
  "took" : 2511,
  "facets" : {
    "time_facet" : {
      "_type" : "statistical",
      "count" : 2419946,
      "total" : 1531886.968960829,
      "min" : 0.0,
      "max" : 19348.49609375,
      "mean" : 0.6330252695559443,
      "sum_of_squares" : 8.470505106146345E8,
      "variance" : 349.62796172029414,
      "std_deviation" : 18.698341148890567
    }
  }
}

Ruby 1.8.7 - Curl::Err::HostResolutionError

Full console output here:

https://gist.github.com/3539790

Text displayed on website:

Oops! Something went terribly wrong.
I'm not totally sure what happened, but maybe refreshing, or hitting Reset will help. If that doesn't work, you can try restarting your browser. If all else fails, it is possible your configuation has something funky going on. 

If it helps, I received a 500 Internal Server Error from: api/search/eyJzZWFyY2giOiIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6OTAwLCJncmFwaG1vZGUiOiJjb3VudCJ9?_=1346357484407

Feature Request: Checkbox to (dis/en)able facet graph in UI

The facet graph is a wonderful tool, but if you sometimes don't need it, it can really tax the system, consuming cache memory, etc.

It would be beyond cool to have a button or checkbox to disable the facet graph when it is not required, and a configuration setting to make it either enabled or disabled by default.

Relative URLs are cool

Currently, Kibana uses two types of URLs:

  • absolute URLs like /images/ or /js/
  • relative URLs like images/

If you put a proxy in front of Kibana, the absolute URLs are just annoying.

Using relative URLs avoids the pain of the useless processing and unwanted latency.

No Gemfile.lock

When other people clone your repo and install the gems listed in the Gemfile, they might get different gem versions than the ones you are developing with. This can cause all sorts of problems.

As the main developer of this project, you should add your Gemfile.lock to the repository, guaranteeing that others use the exact same gem versions that you are developing with.

PHP Warning: Creating default object from empty value in htdocs/loader2.php on line…

Kibana fills my PHP logs with the following error:

PHP Warning:  Creating default object from empty value in htdocs/loader2.php on line 207
PHP Warning:  Creating default object from empty value in htdocs/loader2.php on line 193
PHP Warning:  Creating default object from empty value in htdocs/loader2.php on line 175
PHP Warning:  Creating default object from empty value in htdocs/loader2.php on line 171
PHP Warning:  Creating default object from empty value in htdocs/loader2.php on line 382

(I thought I'd give you a selection of line numbers.)

Yes, I do have E_STRICT on, because that's the default in 5.4. As it should be ;)

Feature request: Make the date picker behavior smarter

A few behavioral improvements around custom dates I'd like to request.

First, if you select a "Custom" time period, the date picker should appear if it isn't already on the screen. For example, you do a default search for the last 15 minutes for something, find no results so no date picker shows up. If you select Custom, it would be nice if it would then appear.

Second, if you change the date in the date picker by hand without using the popup helper, the Filter button does not appear. You have to press enter or use the popup helper. This results in my users hitting my third request below a lot...

Third, if you change the values in the date picker they are only acknowledged if you press the Filter button that appears. If you press the Search button, even if you have Custom selected as the time period, your time adjustments since your last search are not honored.

Export all results

It would be really sweet if the export functionality exported all of the matching results, instead of only exporting the 50 that are displayed.

Handle php warnings

Hi,
due to PHP warnings returned by loader2.php, Kibana did not show any results. Setting display_errors to Off in php.ini resolved this issue.

These are the warnings I'm getting:
Warning: Creating default object from empty value in xxx/kibana/htdocs/loader2.php on line 163

Warning: Creating default object from empty value in xxx/kibana/htdocs/loader2.php on line 175

Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are required to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected the timezone 'UTC' for now, but please set date.timezone to select your timezone. in xxx/kibana/htdocs/loader2.php on line 184

What would be the best way to handle this? Just disabling warnings doesn't seem right to me.

Thanks
Thomas
