
graylog2-server's Introduction

Graylog


Welcome! Graylog is a free and open log management platform.

You can read more about the project on our website and check out the documentation on the documentation site.

Issue Tracking

Found a bug? Have an idea for an improvement? Feel free to add an issue.

Contributing

Help us build the future of log management and be part of a project that is used by thousands of people out there every day.

Follow the contributors guide and read the contributing instructions to get started.

Do you want to get paid for developing our free and open product? Apply for one of our jobs!

Staying in Touch

Come chat with us in the #graylog channel on freenode IRC, the #graylog channel on Libera.Chat, or create a topic in our community discussion forums.

License

Graylog is released under version 1 of the Server Side Public License (SSPL).

graylog2-server's People

Contributors

aleksi, alex-konn, antonebel, bernd, dennisoelkers, dependabot-preview[bot], dependabot[bot], edmundoa, gally47, garybot2, gaya, github-actions[bot], janheise, joshuaspaulding, kingzacko1, kmerz, kroepke, kyleknighted, linuspahl, luk-kaminski, maxiadlovskii, moesterheld, mpfz0r, ousmaneo, patrickmann, roberto-graylog, ryan-carroll-graylog, thll, todvora, waab76


graylog2-server's Issues

Recent index name is hard-coded, so independent graylog2-server instances cannot use the same Elasticsearch cluster

The recent index name can be set in the web interface, but it is hard-coded in the server (in EmbeddedElasticSearchClient.java). This should be a config option; otherwise, why bother having a configurable prefix when the recent index is a singleton?

The use case is someone with multiple sources of logs that they want to manage independently. Perhaps one is very high volume and needs different server settings and deletion limits. At present, the single recent index means that the data must be mixed together.

Upgrade to RabbitMQ 3.1.0

One day after the upgrade in #131, RabbitMQ released version 3.1.0. We should upgrade to it; the develop branch is using it already.

Custom input message filters

As an extension of #7, it would be great to have filters run before a message is inserted into the DB. An example:

A syslog message like 'host ident: ' could be filtered down to just the JSON payload before insertion, which would then feed nicely into #7.

Users should be allowed to configure a regex-based filter that could match or modify the input, or simply delete it. The output could be configurable as string or JSON. That way, unneeded messages would never be inserted into the DB, leaving more space in the capped collection for legitimate messages (a sketch of such a filter stage follows below).

Taking the above example, the message 'host ident: ' would be passed on as just '', ready to make use of the functionality afforded by #7, should it be implemented.
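
A minimal sketch of what such a pre-insert filter stage could look like in Java, assuming a hypothetical RegexRewriteFilter hook; the rule, method names, and drop/rewrite behaviour are illustrative only, not an existing API:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical pre-insert filter: rewrite matching messages, optionally drop the rest.
    public class RegexRewriteFilter {
        // Capture everything after "<host> ident: " as the payload to keep.
        private static final Pattern RULE = Pattern.compile("^\\S+ ident: (.*)$");

        // Returns the rewritten message, or null to signal "delete before insert".
        public String apply(String rawMessage, boolean dropOnMiss) {
            Matcher m = RULE.matcher(rawMessage);
            if (m.matches()) {
                return m.group(1); // keep only the payload, e.g. a JSON document
            }
            return dropOnMiss ? null : rawMessage;
        }
    }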

GELFDispatcher - Could not handle GELF message :: Writing GZipped Gelf to TCP socket fails

I am working on a fork of Gelf4NLog (https://github.com/farzadpanahi/Gelf4NLog) to add TCP support. UDP is working fine; basically all I have done is make the code write to a TCP socket rather than UDP. But it keeps failing with this error on the server side:

2013-04-01 13:33:26,460 WARN : org.graylog2.inputs.gelf.GELFDispatcher - Could not handle GELF message.

java.lang.IllegalStateException: Failed to decompress the GELF message payload
at org.graylog2.gelf.GELFMessage.getJSON(GELFMessage.java:150)
at org.graylog2.gelf.GELFProcessor.messageReceived(GELFProcessor.java:62)
at org.graylog2.inputs.gelf.GELFDispatcher.messageReceived(GELFDispatcher.java:77)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:458)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:439)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.EOFException
at java.util.zip.GZIPInputStream.readUByte(GZIPInputStream.java:264)
at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:171)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:78)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:90)
at org.graylog2.plugin.Tools.decompressGzip(Tools.java:159)
at org.graylog2.gelf.GELFMessage.getJSON(GELFMessage.java:139)
... 15 more

It looks like the server fails to read the payload. Is there anything special that needs to be taken into consideration when writing to the graylog-server TCP socket?

When the exact same generated bytes are written to a UDP socket, the graylog server reads them and saves the log successfully.

I would appreciate any help/hint.

Code snippet responsible for writing to TCP socket:

public void Send(byte[] bytes, int length, IPEndPoint ipEndPoint)
{
    // Write the gzipped GELF payload straight onto the TCP stream.
    using (var tcpClient = new TcpClient(ipEndPoint.Address.ToString(), ipEndPoint.Port))
    {
        var stream = tcpClient.GetStream();
        stream.Write(bytes, 0, length);
        stream.Close();
    }
}

Code snippet responsible for writing to UDP socket:

public void Send(byte[] datagram, int bytes, IPEndPoint ipEndPoint)
{
    // Each UDP datagram carries one complete (possibly gzipped) GELF message.
    using (var udpClient = new UdpClient())
    {
        udpClient.Send(datagram, bytes, ipEndPoint);
    }
}

ps: I have also submitted this issue to graylog mailing list here: https://groups.google.com/d/msg/graylog2/2nV-_bLS2E0/WeJ-k9KygwQJ
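
One possible factor, offered as an assumption rather than a confirmed diagnosis: TCP is a stream with no datagram boundaries, so GELF-over-TCP inputs generally expect each message to be sent uncompressed and terminated with a null byte so the frame decoder can split the stream, whereas gzip/zlib payloads are only self-delimiting per UDP datagram. A minimal Java sketch of that framing (host and port are placeholders):

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.Charset;

    public class GelfTcpSender {
        // Send one GELF JSON document, uncompressed and delimited by a null byte.
        public static void send(String host, int port, String gelfJson) throws Exception {
            Socket socket = new Socket(host, port);
            try {
                OutputStream out = socket.getOutputStream();
                out.write(gelfJson.getBytes(Charset.forName("UTF-8")));
                out.write(0); // frame delimiter so the stream can be split into messages
                out.flush();
            } finally {
                socket.close();
            }
        }
    }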

How do I connect to mongodb replica sets?

I have a replica set of 3 mongodb servers - what is the syntax in the config file to allow me to have the graylog2-server (and web interface) connect to the correct one?
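
For what it's worth, newer graylog2.conf.example files include a mongodb_replica_set option that takes a comma-separated host:port seed list; whether your version has it is worth checking, so treat the option name as an assumption. Under the hood, connecting to a replica set with the 2.x MongoDB Java driver looks roughly like this sketch (hostnames are placeholders):

    import java.util.Arrays;
    import java.util.List;

    import com.mongodb.Mongo;
    import com.mongodb.ServerAddress;

    public class ReplicaSetConnection {
        public static Mongo connect() throws Exception {
            // Seed list: the driver discovers the current primary from any reachable member.
            List<ServerAddress> seeds = Arrays.asList(
                    new ServerAddress("mongo1.example.org", 27017),
                    new ServerAddress("mongo2.example.org", 27017),
                    new ServerAddress("mongo3.example.org", 27017));
            return new Mongo(seeds);
        }
    }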

Metadata missing with large messages

Graylog doesn't display metadata (like source, date, host, ...) for large messages. Please check the attached screenshots. These messages aren't normally as big as this example (caused by a wrong multiline filter in Logstash), but certain stack traces can be long as well.

(attached screenshots: tomcat1, tomcat2)

null pointer exception at config parsing

Exception in thread "main" java.lang.NullPointerException
    at org.graylog2.messagehandlers.amqp.AMQP.isEnabled(AMQP.java:41)
    at org.graylog2.Main.main(Main.java:228)

New release 0.9.5 issues -- message cut

Hello,

I upgraded to this new version of graylog2, and now all messages, for example:

Apr 11 14:03:37 SERVER1111 stunnel: LOG3[27267:3072547728]: SSL_read: Connection reset by peer (104)

get a hostname like "Connection" and have the message cut up all wrong.

After installing this new release, the hosts list started to fill with random words from my logs; some syslog messages were parsed correctly, some were not.

I also noticed some really slow performance, maybe because of the wrong syslog parsing.

For now I have rolled my setup back to 0.9.4p1. I can set up a new instance with this release for more debugging, if needed.

Cheers,
Luiz

Configurable blocking threads (Out of semaphores to get db connection)

I just started getting the following message:

com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection
    at com.mongodb.DBPortPool.get(DBPortPool.java:156)
    at com.mongodb.DBTCPConnector$MyPort.get(DBTCPConnector.java:274)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:195)
    at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
    at com.mongodb.DB.getCollectionNames(DB.java:200)
    at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
    at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
    at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
    at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection
    at com.mongodb.DBPortPool.get(DBPortPool.java:156)
    at com.mongodb.DBTCPConnector$MyPort.get(DBTCPConnector.java:274)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:195)
    at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
    at com.mongodb.DB.getCollectionNames(DB.java:200)
    at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
    at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
    at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
    at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)

flooding the log. Not sure if the number of blocking threads is configurable.

Missing index is needed

There's an issue with how graylog creates the messages capped collection: it isn't created with an index on _id, which is needed for the main page.

Thu Apr  7 14:27:11 [conn288] query graylog2.messages ntoreturn:100 scanAndOrder  reslen:41979 nscanned:2511032 { $query: {}, $orderby: { _id: -1 } }  nreturned:100 10479ms

There is a sort on _id but no index.

Once the index is created, it speeds right up.
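
Until the server creates that index itself, it can be added manually; a small sketch with the 2.x MongoDB Java driver, assuming the default graylog2 database and messages collection names (the shell one-liner db.messages.ensureIndex({_id: 1}) does the same thing):

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.Mongo;

    public class EnsureMessageIdIndex {
        public static void main(String[] args) throws Exception {
            Mongo mongo = new Mongo("127.0.0.1", 27017);
            DB db = mongo.getDB("graylog2");               // default database name
            DBCollection messages = db.getCollection("messages");
            // A single-field index serves the { _id: -1 } sort used by the main page.
            messages.ensureIndex(new BasicDBObject("_id", 1));
            mongo.close();
        }
    }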

negative regexes no longer work in 0.11.0

My negative regexes for excluding certain lines from a stream no longer work in 0.11; they worked fine in 0.9.6. The rule looks like:

(?!Exclude this line)

but the line still shows up in the stream. Any ideas or is this a new bug? Thanks.
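
A bare negative lookahead behaves very differently depending on whether the rule is applied as a substring search or anchored against the whole message, which may explain the change between versions; how the stream router actually applies the pattern is an assumption here, but the regex behaviour itself is easy to demonstrate:

    import java.util.regex.Pattern;

    public class NegativeRegexDemo {
        public static void main(String[] args) {
            String msg = "Exclude this line: something noisy";

            // Unanchored: the lookahead succeeds at almost any offset, so find() still "matches".
            Pattern bare = Pattern.compile("(?!Exclude this line)");
            System.out.println(bare.matcher(msg).find());                 // true

            // Anchored: fail the whole match if the excluded text appears anywhere in the message.
            Pattern anchored = Pattern.compile("^(?!.*Exclude this line).*$");
            System.out.println(anchored.matcher(msg).find());             // false
            System.out.println(anchored.matcher("a normal line").find()); // true
        }
    }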

JSON payload can be an embedded document

Currently, at least with syslog, all messages are stored as strings. However, if a message is pure JSON, it could be stored as an embedded document. This would allow an application to process these messages with Mongo's queries.

For instance, a JSON payload would currently be inserted as:
{ message : "{ foo : "bar" }" }

It would be beneficial to store it as follows instead:
{ message : { foo : "bar" } }
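
A sketch of what storing the payload as an embedded document could look like with the 2.x Java driver, using com.mongodb.util.JSON to parse the string when it is valid JSON and falling back to the plain string otherwise; this illustrates the proposal, not existing server behaviour:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBObject;
    import com.mongodb.util.JSON;

    public class EmbeddedJsonMessage {
        // Build the document to insert: nested object for JSON payloads, string otherwise.
        public static DBObject build(String message) {
            Object parsed;
            try {
                parsed = JSON.parse(message);   // e.g. "{ \"foo\": \"bar\" }" -> DBObject
            } catch (RuntimeException e) {
                parsed = null;                  // not valid JSON
            }
            BasicDBObject doc = new BasicDBObject();
            doc.put("message", parsed instanceof DBObject ? parsed : message);
            return doc;
        }
    }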

graphite output strips hyphens from hostnames

Hello,

We use the following scheme for our hostnames: foo-bar-0x. The hyphens are stripped from the hostnames when they are sent to Graphite. That's problematic for us, as we automatically generate graphs using GDash + Chef, and we have to add extra logic to strip the hyphens there as well.

Server (or client?) doesn't work with default max_chunk_size

Using graylog2_exceptions, it seemed to be sending messages OK (verified with strategic puts in the file), but the server never received them. I hard-coded the max_chunk_size for the Notifier it was using to 'LAN', and then the server started receiving its messages. Not sure where the fault lies.

I do have other loggers sending from the same process that were set to use a 'WAN' chunk size. Not sure if that difference is causing the problem, or the fact that a bunch of messages are sent at the same time, some with a 'LAN' chunk size and others with 'WAN'.

Blacklist improvement

As I already mentioned in the Google group, the blacklist in v0.10.0 doesn't seem to work for me.

I record Apache access and error logs in graylog2. We have Nagios and collectd running and doing some performance checks. In order to suppress these messages I have a blacklist like this:

.*collectd/5.1.0.git.*
.*GET /server-status?auto HTTP/1.1 200.*

It would be great to have some advanced blacklisting like

http_useragent ~ collectd
http_response_code != 200

and

http_request ~ "/server-status.*"
http_response_code != 200

So the blacklist should look like the message stream feature just for dropping messages.
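
A rough sketch of what such a field-based blacklist rule could look like; BlacklistRule, its operators, and the field names are hypothetical, purely to illustrate the idea:

    import java.util.Map;
    import java.util.regex.Pattern;

    // Hypothetical field-based blacklist rule: drop the message when the rule matches.
    public class BlacklistRule {
        enum Op { REGEX, NOT_EQUALS }

        private final String field;
        private final Op op;
        private final String value;

        BlacklistRule(String field, Op op, String value) {
            this.field = field;
            this.op = op;
            this.value = value;
        }

        boolean matches(Map<String, String> fields) {
            String actual = fields.get(field);
            if (actual == null) {
                return false;
            }
            switch (op) {
                case REGEX:      return Pattern.compile(value).matcher(actual).find();
                case NOT_EQUALS: return !actual.equals(value);
                default:         return false;
            }
        }
    }

    // Example: new BlacklistRule("http_useragent", BlacklistRule.Op.REGEX, "collectd")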

Pluggable bridges

Hello,

The graylog2 server seems really nice and I like how it's structured. It would be really nice if there were a config option that told it to load different bridges/connectors instead of the Mongo one, so that a new one could simply be implemented against an interface.

Obviously the bridge into Mongo is what you need for your main use case, but there is also a need for a capable, pluggable syslog server that can be extended into other systems. You have a great base server here that others could build on.
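
A sketch of the kind of interface such bridges could be implemented against; MessageBridge is a hypothetical name and shape, not something that exists in the codebase:

    import java.util.Map;

    // Hypothetical plug point: the server would load one or more implementations
    // named in a config option instead of hard-wiring the MongoDB bridge.
    public interface MessageBridge {
        void connect(Map<String, String> config) throws Exception;
        void write(String host, int facility, int level, String message) throws Exception;
        void close();
    }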

Can't build current master (tree e5ee9bb)

$ mvn assembly:assembly -U
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.graylog2:graylog2-server:jar:0.9.5
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-jar-plugin is missing. @ line 131, column 21
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building graylog2-server 0.9.5
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] >>> maven-assembly-plugin:2.2-beta-5:assembly (default-cli) @ graylog2-server >>>
Downloading: http://clojars.org/repo/org/syslog4j/syslog4j/0.9.46/syslog4j-0.9.46.pom
Downloading: http://repository.jboss.com/maven2/org/syslog4j/syslog4j/0.9.46/syslog4j-0.9.46.pom
Downloading: http://scala-tools.org/repo-releases/org/syslog4j/syslog4j/0.9.46/syslog4j-0.9.46.pom
Downloading: http://repo1.maven.org/maven2/org/syslog4j/syslog4j/0.9.46/syslog4j-0.9.46.pom
[WARNING] The POM for org.syslog4j:syslog4j:jar:0.9.46 is missing, no dependency information available
Downloading: http://clojars.org/repo/org/syslog4j/syslog4j/0.9.46/syslog4j-0.9.46.jar
Downloading: http://repository.jboss.com/maven2/org/syslog4j/syslog4j/0.9.46/syslog4j-0.9.46.jar
Downloading: http://scala-tools.org/repo-releases/org/syslog4j/syslog4j/0.9.46/syslog4j-0.9.46.jar
Downloading: http://repo1.maven.org/maven2/org/syslog4j/syslog4j/0.9.46/syslog4j-0.9.46.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.650s
[INFO] Finished at: Sun Apr 10 11:09:15 MSD 2011
[INFO] Final Memory: 3M/81M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project graylog2-server: Could not resolve dependencies for project org.graylog2:graylog2-server:jar:0.9.5: Could not find artifact org.syslog4j:syslog4j:jar:0.9.46 in clojars.org (http://clojars.org/repo) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

Zip error for GELF message

With new gelf gem and server 81fc234:
GELF::Notifier.new.notify(:short_message => 'A' * 10000000)

Tue Nov 09 21:00:24 MSK 2010 - INFO - Received message is chunked. Handling now.
Tue Nov 09 21:00:24 MSK 2010 - INFO - Received message is chunked. Handling now.
Tue Nov 09 21:00:24 MSK 2010 - INFO - Received message is chunked. Handling now.
Tue Nov 09 21:00:24 MSK 2010 - INFO - Received message is chunked. Handling now.
Tue Nov 09 21:00:24 MSK 2010 - INFO - Received message is chunked. Handling now.
Tue Nov 09 21:00:24 MSK 2010 - INFO - Received message is chunked. Handling now.
Tue Nov 09 21:00:24 MSK 2010 - INFO - Received message is chunked. Handling now.
Tue Nov 09 21:00:24 MSK 2010 - INFO - Chunked GELF message <5b71ed250bc1945c3248fae3fdd2c332d162b2c91a5eec151ba9d54b8f466a57> complete. Handling now.
Tue Nov 09 21:00:24 MSK 2010 - INFO - Chunked GELF message <5b71ed250bc1945c3248fae3fdd2c332d162b2c91a5eec151ba9d54b8f466a57> is ZLIB compressed.
Tue Nov 09 21:00:25 MSK 2010 - CRITICAL - IO Error while handling GELF message: java.util.zip.ZipException: invalid stored block lengths
Tue Nov 09 21:00:38 MSK 2010 - INFO - Dropping incomplete chunked GELF message <5b71ed250bc1945c3248fae3fdd2c332d162b2c91a5eec151ba9d54b8f466a57>

Note the time of the last packet.

mongo capped collection does not create an index on _id field

Getting this mongodb error occasionally:

Tue Dec 14 16:00:13 [conn1176] assertion 10129 too much data for sort() with no index ns:graylog2.messages query:{ $query: { deleted: { $in: [ false, null ] }, message: { $nin: [ /\#24861/, /Could\ not\ find\ subtype\ object\ for\ subtypeID\ 0\ in\ mm_WebsitePeer::createJimdo/, /Exception\ while\ trying\ to\ ping\ BlogPingR/, /Allowed\ memory\ size\ of\ 67108864\ bytes\ exhausted/, /contains\ an\ suspicious\ activity\ pattern/, /MatrixController\ \-\ create\ failed:\ Already\ created\ a\ new\ module/, /\[PAYMENT\]\[DEBUG\]\ GC\ api\ request\ xml/ ] } }, $orderby: { _id: -1 } }
Tue Dec 14 16:00:13 [conn1176]  ntoskip:0 ntoreturn:100
Tue Dec 14 16:00:13 [conn1176] query graylog2.messages ntoreturn:100 exception  514ms

http://www.mongodb.org/display/DOCS/Capped+Collections says:

"An index is not automatically created on _id for capped collections by default"

creating an index like this fixed the issue for us:

> db.messages.ensureIndex({_id: 1}, {unique: true});

GELF timestamp not stored in MongoDB

Hello.

Looks like the graylog2-server doesn't store the timestamp carried in the GELF message. In fact, there's no timestamp field in the GELFMessage object, even though it's part of the GELF specification.

Speaking of the specification, the timestamp is specified as a UNIX timestamp and a decimal. I'm assuming this means it can also contain a decimal fraction, since a timestamp with a resolution of one second would be a bit coarse...
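
For reference, a decimal UNIX timestamp from a GELF message maps onto a java.util.Date like this (a small sketch, not the server's actual code):

    import java.util.Date;

    public class GelfTimestamp {
        // GELF timestamps are seconds since the epoch, possibly with a fractional part.
        public static Date toDate(double gelfTimestamp) {
            return new Date((long) (gelfTimestamp * 1000.0)); // keep millisecond precision
        }

        public static void main(String[] args) {
            System.out.println(toDate(1299619200.25)); // fractional seconds survive as millis
        }
    }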

graylog2-server 0.11.0 consumes too much CPU

In top output:

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND                                                                                                       
 2312 root      20   0 12.5g 187m  12m S 124.7  0.6   2:31.94 java -jar graylog2-server.jar                                                                                 

I noticed that the values of the config directives processbuffer_processors and outputbuffer_processors impact CPU utilization.
Why is the CPU load so high? Is it safe to decrease the mentioned directives to 1?

Librato not working on 0.11

I have Graylog2 v0.11 working and I just edited graylog2.conf as in this tutorial: http://support.torch.sh/help/kb/graylog2-server/using-librato-metrics-with-graylog2

enable_libratometrics_output = true
enable_libratometrics_system_metrics = false
libratometrics_api_user = EMAIL
libratometrics_api_token = TOKEN
libratometrics_prefix = Graylog2-
libratometrics_interval = 60
libratometrics_stream_filter =
libratometrics_host_filter =

After this I restarted Graylog2 with 'service graylog2-server restart', but the data isn't being sent.

Any idea?

Flags/options should be able to be passed

Graylog2 currently ignores all arguments passed to it. This means users can't:

  • Specify its PID file location (for example, in Gentoo, /var/tmp/run is the standard, not /tmp.)
  • Specify which configuration file to use.
  • Specify where, or if, to save log files.

Unless this can be amended, it will probably also result in lower adoption of Graylog2, since distro package maintainers will have to hack at its code in order to make it meet their distribution's standards.

Wrong message parsing with rsyslog

Hello,

I have an existing rsyslog infrastructure. Many clients send their messages to a central rsyslog server using one of the rsyslog transport methods. Graylog2 is installed on this rsyslog server and listens on localhost for incoming messages. The rsyslog server forwards all messages in the RSYSLOG_TraditionalForwardFormat [1] to the local graylog2 server.

It looks like graylog2 is parsing the input incorrectly (or the rsyslog message format template is wrong?).

Example:

rsyslog sends the following message (received from the "jabber" host) to the local graylog2:

 <38>Nov 29 23:19:46 jabber sshd[27089]: last message repeated 2 times

But the message is shown in the graylog2 interface as:

Date: 29.11.2010 - 23:19:46
Host: localhost
Severity: Informational
Facility: Unknown
Message: Nov 29 23:19:46 jabber sshd[27089]: last message repeated 2 times

[1] http://www.rsyslog.com/doc/rsyslog_conf_templates.html

The syslog parser analyzes the logs badly

Hi,

I have a problem with the syslog parser. For example, with the following log:

<46>Mar 20 15:22:38 host_srv01 rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="8767" x-info="http://www.rsyslog.com";] (re)start

graylog2-web-interface displays "Mar" as the host source of the log (so it displays the name of the month instead of the host name).

But when I activate the DNS lookup (rDNS) in graylog2-server, graylog2-web-interface displays the correct host name "host_srv01".

I don't know how graylog2 determines the host source of the log (log analysis, or retrieval from the source IP of the TCP/UDP datagram).

Looking at the source code, in SyslogProcessor.java, I see:

    if (remoteAddress == null) {
        remoteAddress = InetAddress.getLocalHost();
    }

So theoretically, graylog2 should use the localhost IP in the worst case. Maybe the problem is in SyslogDispatcher.java (a wrong IP address is passed to SyslogProcessor.java?):

    InetSocketAddress remoteAddress = (InetSocketAddress) e.getRemoteAddress();
    ...
    this.processor.messageReceived(new String(readable), remoteAddress.getAddress());

Thanks.
Best Regards.

Graylog2 CPU Usage

Hi Lennart,

I'm using the latest version of graylog2. Initially the CPU usage was more than 100%; setting "processor_wait_strategy" to blocking brought the CPU usage down to below 60%, even with no inputs/messages at all.

Is this a known issue? Can you please help me to fix it?

Thanks,
Seby.

Invalid GELF type

I'm seeing the CRITICAL error below in my logs, but I'm not sure if I need to do anything about it, and I'm not sure why it has an invalid GELF type. This comes from a monit log notification sent to rsyslog that gets forwarded to the graylog UDP syslog receiver. It happens every time graylog receives that same message (with a different timestamp).

Thu Feb 10 18:20:37 EST 2011 - INFO - =======
Thu Feb 10 18:20:37 EST 2011 - CRITICAL - Invalid GELF type in message: org.graylog2.messagehandlers.gelf.InvalidGELFTypeException
Thu Feb 10 18:20:39 EST 2011 - INFO - Received message: monit[22019]: 'cassandra' process is not running
Thu Feb 10 18:20:39 EST 2011 - INFO - Host: matt
Thu Feb 10 18:20:39 EST 2011 - INFO - Facility: 3 (system daemon)
Thu Feb 10 18:20:39 EST 2011 - INFO - Level: 3 (Error)
Thu Feb 10 18:20:39 EST 2011 - INFO - Raw: <27>Feb 10 18:20:39 matt monit[22019]: 'cassandra' process is not running
Thu Feb 10 18:20:39 EST 2011 - INFO - =======
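
For context, GELF inputs typically tell the message type apart by its first bytes: 0x1e 0x0f for a chunked GELF message, 0x1f 0x8b for GZIP, and 0x78 as the usual first byte of a ZLIB stream; anything else (such as a plain syslog line arriving on the GELF port) ends up rejected as an invalid type. A minimal detection sketch under that assumption:

    public class GelfTypeSniffer {
        public enum Type { CHUNKED, GZIP, ZLIB, INVALID }

        // Inspect the first bytes of a datagram to guess how it should be handled.
        public static Type detect(byte[] payload) {
            if (payload == null || payload.length < 2) {
                return Type.INVALID;
            }
            if (payload[0] == (byte) 0x1e && payload[1] == (byte) 0x0f) {
                return Type.CHUNKED;             // chunked GELF magic bytes
            }
            if (payload[0] == (byte) 0x1f && payload[1] == (byte) 0x8b) {
                return Type.GZIP;                // gzip magic bytes
            }
            if (payload[0] == (byte) 0x78) {
                return Type.ZLIB;                // common zlib header byte
            }
            return Type.INVALID;                 // e.g. plain text hitting the GELF input
        }
    }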

GELF Specification clarification

Hello,

I'm trying to implement a pure Java client that supports chunked GELF messages, and I'm a bit confused by the spec.

On this page https://github.com/Graylog2/graylog2-docs/wiki/GELF it's clearly stated that the GELF header should be 12 bytes, but the server implementation (org/graylog2/messagehandlers/gelf/GELF.java) says:

/**
 * GELF header is 70 bytes long.
 */
public static final int GELF_HEADER_LENGTH = 38;

Please advise.

Kind regards,
Anton
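
For what it's worth, the 12 bytes described in the wiki break down as 2 magic bytes, an 8-byte message id, a 1-byte sequence number, and a 1-byte sequence count; the older 38-byte constant in the server presumably reflects an earlier header layout with a longer message id. A sketch of the 12-byte layout the documentation describes:

    import java.nio.ByteBuffer;

    public class GelfChunkHeader {
        // 2 magic bytes + 8-byte message id + 1-byte sequence number + 1-byte sequence count = 12 bytes.
        public static byte[] build(byte[] messageId, int seqNumber, int seqCount) {
            if (messageId.length != 8) {
                throw new IllegalArgumentException("message id must be 8 bytes");
            }
            ByteBuffer header = ByteBuffer.allocate(12);
            header.put((byte) 0x1e).put((byte) 0x0f);  // chunked GELF magic
            header.put(messageId);
            header.put((byte) seqNumber);
            header.put((byte) seqCount);
            return header.array();
        }
    }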

Connection reset exception breaks server

I started up the server last night and left it running overnight. This morning I was no longer able to connect to it with the Ruby GELF client, though the syslog messages still seem to be getting through. Fortunately, I had the server running with debug logging. The first trace below is the first one in the log; it happens a number of times, and then I start seeing the other traces below it. The last trace is the entirety of a single call from the Ruby GELF client.


Thu Feb 10 03:31:45 EST 2011 - INFO - Got GELF message: shortMessage: rails: Reading rubber configuration from /root/.ec2_backupifydev/rubber-secret.yml | fullMessage: null | level: 7 | host: matt | file: /mnt/backupify-matt/shared/bundle/ruby/1.9.1/gems/log4r-gelf-0.5.0/lib/log4r (...)
Thu Feb 10 03:31:45 EST 2011 - INFO - Received message is not chunked. Handling now.
Thu Feb 10 03:31:45 EST 2011 - INFO - Handling ZLIB compressed SimpleGELFClient
Thu Feb 10 03:31:45 EST 2011 - INFO - Got GELF message: shortMessage: rails: Reading rubber instances from /mnt/backupify-matt/releases/20110209165211/config/rubber/instance-matt.yml | fullMessage: null | level: 7 | host: matt | file: /mnt/backupify-matt/shared/bundle/ruby/1.9.1/g (...)
Feb 10, 2011 3:31:45 AM com.mongodb.DBTCPConnector$MyPort error
SEVERE: MyPort.error called
java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:168)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at org.bson.io.Bits.readFully(Bits.java:30)
        at com.mongodb.Response.<init>(Response.java:34)
        at com.mongodb.DBPort.go(DBPort.java:95)
        at com.mongodb.DBPort.call(DBPort.java:55)
        at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:201)
        at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
        at com.mongodb.DB.getCollectionNames(DB.java:200)
        at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
        at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
        at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
        at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)


Feb 10, 2011 3:31:45 AM com.mongodb.DBTCPConnector$MyPort error
SEVERE: MyPort.error called
java.io.EOFException
        at org.bson.io.Bits.readFully(Bits.java:32)
        at com.mongodb.Response.<init>(Response.java:34)
        at com.mongodb.DBPort.go(DBPort.java:95)
        at com.mongodb.DBPort.call(DBPort.java:55)
        at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:201)
        at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:208)
        at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
        at com.mongodb.DB.getCollectionNames(DB.java:200)
        at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
        at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
        at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
        at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

Thu Feb 10 03:31:45 EST 2011 - WARNING - Could not handle GELF client: com.mongodb.MongoException$Network: can't call something
com.mongodb.MongoException$Network: can't call something
        at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:210)
        at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:208)
        at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:208)
        at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
        at com.mongodb.DB.getCollectionNames(DB.java:200)
        at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
        at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
        at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
        at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:168)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at org.bson.io.Bits.readFully(Bits.java:30)
        at com.mongodb.Response.<init>(Response.java:34)
        at com.mongodb.DBPort.go(DBPort.java:95)
        at com.mongodb.DBPort.call(DBPort.java:55)
        at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:201)
        ... 11 more

Thu Feb 10 07:52:14 EST 2011 - INFO - Received message is not chunked. Handling now.
Thu Feb 10 07:52:14 EST 2011 - INFO - Handling ZLIB compressed SimpleGELFClient
Thu Feb 10 07:52:14 EST 2011 - INFO - Got GELF message: shortMessage: rails: hit | fullMessage: null | level: 6 | host: matt | file: /mnt/backupify-matt/shared/bundle/ruby/1.9.1/gems/log4r-gelf-0.5.0/lib/log4r-gelf/gelf_outputter.rb | line: 54 | facility: 17 | version: 1.0 | addit (...)
Feb 10, 2011 7:52:14 AM com.mongodb.DBTCPConnector$MyPort error
SEVERE: MyPort.error called
java.io.EOFException
    at org.bson.io.Bits.readFully(Bits.java:32)
    at com.mongodb.Response.<init>(Response.java:34)
    at com.mongodb.DBPort.go(DBPort.java:95)
    at com.mongodb.DBPort.call(DBPort.java:55)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:201)
    at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
    at com.mongodb.DB.getCollectionNames(DB.java:200)
    at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
    at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
    at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
    at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Feb 10, 2011 7:52:14 AM com.mongodb.DBTCPConnector$MyPort error
SEVERE: MyPort.error called
java.io.EOFException
    at org.bson.io.Bits.readFully(Bits.java:32)
    at com.mongodb.Response.<init>(Response.java:34)
    at com.mongodb.DBPort.go(DBPort.java:95)
    at com.mongodb.DBPort.call(DBPort.java:55)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:201)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:208)
    at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
    at com.mongodb.DB.getCollectionNames(DB.java:200)
    at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
    at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
    at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
    at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Feb 10, 2011 7:52:14 AM com.mongodb.DBTCPConnector$MyPort error
SEVERE: MyPort.error called
java.io.EOFException
    at org.bson.io.Bits.readFully(Bits.java:32)
    at com.mongodb.Response.<init>(Response.java:34)
    at com.mongodb.DBPort.go(DBPort.java:95)
    at com.mongodb.DBPort.call(DBPort.java:55)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:201)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:208)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:208)
    at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
    at com.mongodb.DB.getCollectionNames(DB.java:200)
    at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
    at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
    at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
    at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Thu Feb 10 07:52:14 EST 2011 - WARNING - Could not handle GELF client: com.mongodb.MongoException$Network: can't call something
com.mongodb.MongoException$Network: can't call something
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:210)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:208)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:208)
    at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:256)
    at com.mongodb.DB.getCollectionNames(DB.java:200)
    at org.graylog2.database.MongoBridge.getMessagesColl(MongoBridge.java:59)
    at org.graylog2.database.MongoBridge.insertGelfMessage(MongoBridge.java:109)
    at org.graylog2.messagehandlers.gelf.SimpleGELFClientHandler.handle(SimpleGELFClientHandler.java:99)
    at org.graylog2.messagehandlers.gelf.GELFClientHandlerThread.run(GELFClientHandlerThread.java:60)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.EOFException
    at org.bson.io.Bits.readFully(Bits.java:32)
    at com.mongodb.Response.<init>(Response.java:34)
    at com.mongodb.DBPort.go(DBPort.java:95)
    at com.mongodb.DBPort.call(DBPort.java:55)
    at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:201)
    ... 11 more

Thoughts about a Jabber forwarder

Hey, really psyched about Graylog2. Thinking about this stuff, it makes sense to me to send everything over to Graylog2, and forward certain (low traffic) streams to Jabber destinations. In this light, even an EmailForwarder makes sense, but I realize this might be conceptually different than what you were thinking. Is a JabberForwarder appealing at all? I'll give it a shot if so.

Also, it'd be great if you could open GitHub issues for other planned features; there might be something else I (or somebody else) can bite off.

Cannot connect to mongo db

I have a problem with graylog2-server: it cannot connect to mongo db

Could not create MongoDB connection: java.lang.Exception: Could not connect to Mongo DB. (com.mongodb.MongoException$Network: can't call something)

I'm using the default configuration shipped with graylog2-server. Both mongo db and graylog2 server are running on the same host.

I created the graylog2 db and its user using the mongo console, and according to netstat mongod is running and listening on the usual port.

graylog2-web-interface works fine.

Graylog2 won't start on Windows

I am cursed with a dearth of Linux boxen to run Graylog2 on in my organization. When trying to run the Graylog2 server on a Windows box, it tries to run bash, presumably to get its process ID. It's Java; why does it need to be tied to the platform like this?

FATAL: org.graylog2.Tools - Could not determine own PID! Cannot run program "bash": CreateProcess error=2, The system cannot find the file specified
java.io.IOException: Cannot run program "bash": CreateProcess error=2, The system cannot find the file specified
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
        at java.lang.Runtime.exec(Runtime.java:593)
        at java.lang.Runtime.exec(Runtime.java:466)
        at org.graylog2.Tools.getPID(Tools.java:57)
        at org.graylog2.Main.main(Main.java:147)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
        at java.lang.ProcessImpl.create(Native Method)
        at java.lang.ProcessImpl.<init>(ProcessImpl.java:81)
        at java.lang.ProcessImpl.start(ProcessImpl.java:30)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
        ... 4 more
FATAL: org.graylog2.Main - Could not write PID file: Could not determine PID.
java.lang.Exception: Could not determine PID.
        at org.graylog2.Main.main(Main.java:149)
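
A portable alternative that avoids shelling out to bash is reading the PID from the JMX runtime name, which on common JVMs has the form pid@hostname; the exact format is JVM-specific, so this sketch is best-effort rather than guaranteed:

    import java.lang.management.ManagementFactory;

    public class Pid {
        // On HotSpot-style JVMs the runtime name looks like "12345@hostname".
        public static String get() {
            String name = ManagementFactory.getRuntimeMXBean().getName();
            int at = name.indexOf('@');
            return at > 0 ? name.substring(0, at) : name;
        }

        public static void main(String[] args) {
            System.out.println(Pid.get());
        }
    }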

Time based retention times without delete_by_query

Have a way to define that each index should keep messages for X hours or days. This would still allow always deleting whole indices (as in current versions) instead of having to run delete_by_query (which would really mess up Lucene and cause high IO for merging segments), while also letting you define how long to keep messages in a human time format without having to calculate how many messages to keep.

Ideas welcome! @kroepke, @joschi

Create a flag for checking the configuration file for errors

It would be helpful if there were a flag that could check the configuration file for errors and immediately return a result (i.e., report whether the conf file is valid without actually starting up the server).

Currently, the only way to check for conf errors (at least that I know of) is to start the server, sleep for a few seconds, and then check whether or not the server has left a PID file: if it exists, the server should be running normally; if not, it errored out. It's quite a hacky solution.

Here's my init script that demonstrates having to do this.

Also, see issue #19.

Thanks.
