redpanda-data / console

Redpanda Console is a developer-friendly UI for managing your Kafka/Redpanda workloads. Console gives you a simple, interactive approach for gaining visibility into your topics, masking data, managing consumer groups, and exploring real-time data with time-travel debugging.

Home Page: https://redpanda.com

Go 52.35% JavaScript 0.24% HTML 0.05% TypeScript 44.28% SCSS 2.64% CSS 0.39% Shell 0.05%
apache-kafka dataops react typescript kafka-ui kafka-gui web-ui go kafka

console's Introduction

Redpanda Console – A UI for Data Streaming

Go Report Card GitHub release (latest SemVer) Docker Repository

Redpanda Console (previously known as Kowl) is a web application that helps you manage and debug your Kafka/Redpanda workloads effortlessly.

preview.mp4

Features

  • Message viewer: Explore your topics' messages in our message viewer through ad-hoc queries and dynamic filters. Find any message you want using JavaScript functions to filter messages. Supported encodings are JSON, Avro, Protobuf, XML, MessagePack, text, and binary (hex view). The encoding (except for Protobuf) is detected automatically.
  • Consumer groups: List all your active consumer groups along with their active group offsets, edit group offsets (by group, topic or partition) or delete a consumer group.
  • Topic overview: Browse through the list of your Kafka topics, check their configuration, space usage, list all consumers who consume a single topic or watch partition details (such as low and high water marks, message count, ...), embed topic documentation from a git repository and more.
  • Cluster overview: List ACLs, available brokers, their space usage, rack ID, and other information to get a high-level overview of the brokers in your cluster.
  • Schema Registry: List all Avro, Protobuf or JSON schemas within your schema registry.
  • Kafka connect: Manage connectors from multiple connect clusters, patch configs, view their current state or restart tasks.

Getting Started

Prerequisites

  • Redpanda or any Kafka-compatible deployment (v1.0.0+)
  • Docker runtime (single binary builds will be provided in the future)

Installing

We offer pre-built Docker images for Redpanda Console, a Helm chart, and a Terraform module to make the installation as easy as possible for you. Please take a look at our dedicated installation documentation.

Quick Start

Do you just want to test Redpanda Console against one of your Kafka clusters without spending too much time on the test setup? Here are some Docker commands that let you run it locally against an existing Redpanda or Kafka cluster:

Redpanda/Kafka is running locally

Since Console runs in its own container (which has its own network scope), we have to use host.docker.internal as the bootstrap server. That DNS name resolves to the host system's IP address. However, because the brokers return a list of all brokers' advertised listeners once a client has connected, you have to make sure the advertised listener is configured accordingly, e.g.: PLAINTEXT://host.docker.internal:9092

docker run -p 8080:8080 -e KAFKA_BROKERS=host.docker.internal:9092 docker.redpanda.com/redpandadata/console:latest

Docker supports the --network=host option only on Linux. Linux users can therefore use localhost:9092 as the advertised listener and run the container in the host network namespace; Console then behaves as if it were executed directly on the host machine.

docker run --network=host -p 8080:8080 -e KAFKA_BROKERS=localhost:9092 docker.redpanda.com/redpandadata/console:latest

Kafka is running remotely

Protected via SASL_SSL and trusted certificates (e.g. Confluent Cloud):

docker run -p 8080:8080 -e KAFKA_BROKERS=pkc-4r000.europe-west1.gcp.confluent.cloud:9092 -e KAFKA_TLS_ENABLED=true -e KAFKA_SASL_ENABLED=true -e KAFKA_SASL_USERNAME=xxx -e KAFKA_SASL_PASSWORD=xxx docker.redpanda.com/redpandadata/console:latest

I don't have a running Kafka cluster to test against

We maintain a docker-compose file that launches Redpanda and Console: /docs/local.

Community

Slack is the main way the community interacts with one another in real time :)

GitHub Issues can be used for issues, questions and feature requests.

Code of conduct for the community

Contributing docs

console's People

Contributors

andrewhsu, bakjos, birdayz, bochenekmartin, bojand, chris-palmer-deltatre, davidpratt512, frankieshakes, holgeradam, joeirimpan, julcollas, jvorcak, malinskibeniamin, nicolaferraro, nilsroehrig, rikimaru0345, rjmasikome, ronivay, rrva, sago2k8, snakeice, tmgstevens, tomasz-sadura, tony2001, victorgawk, vladoschreiner, weeco, wreet, yansb, yougotashovel


console's Issues

Feature request/suggestion show estimated messages per seconds per topic

Hey,

I have a feature request/suggestion. To get an overview of topics it would be nice to know how many messages to expect (e.g. per minute).
Of course the number can vary over the span of, say, a day, but even an estimate of the magnitude (thousands per minute vs. less than one per minute) would be helpful information.

Cheers :)

Add describe log dirs request

In order to get more details about Kafka partitions (such as their size), Kafka offers an admin request called "DescribeLogDirs". Sarama does not implement this request/response, but as it offers very valuable information we should add it to Sarama.

The original Kafka documentation does not document the protocol but refers to the implementation:

Response package: https://github.com/apache/kafka/blob/05ba5aa00847b18b74369a821e972bbba9f155eb/clients/src/main/java/org/apache/kafka/common/requests/DescribeLogDirsResponse.java

Request package:
https://github.com/apache/kafka/blob/05ba5aa00847b18b74369a821e972bbba9f155eb/clients/src/main/java/org/apache/kafka/common/requests/DescribeLogDirsRequest.java
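To illustrate what the backend could do with the response once the request is implemented, here is a minimal Go sketch that aggregates per-partition log dir sizes into a per-topic total. The struct names mirror the fields of Kafka's DescribeLogDirsResponse but are purely hypothetical stand-ins, not existing Sarama types.

package kafka

// Hypothetical stand-ins mirroring the fields of Kafka's DescribeLogDirsResponse;
// these are not existing Sarama types.
type LogDirPartition struct {
    Topic       string
    PartitionID int32
    Size        int64 // bytes of log segments on disk
}

type LogDir struct {
    Path       string
    Partitions []LogDirPartition
}

// TopicSizes aggregates per-partition log dir sizes into a size per topic,
// which is the number the UI would ultimately display.
func TopicSizes(dirs []LogDir) map[string]int64 {
    sizes := make(map[string]int64)
    for _, dir := range dirs {
        for _, p := range dir.Partitions {
            sizes[p.Topic] += p.Size
        }
    }
    return sizes
}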

Make consumer group detail and list view clearer

  • Should be able to sort consumer groups by the "Lag (Sum)" column
  • In the consumer group detail view, add an explanation of what the numbers behind the partitions mean
  • In the list, show "Total Groups" more separated from the "count by state" entries (maybe group them in a box or something)
  • Place the quick-search in the name/ID header of each table, as it currently consumes too much vertical space
  • Add a small popup to the refresh button to make its purpose immediately clear

Rework "Copy" Button

The copy button sucks, look at it:

copy button

Issues with it

  • You don't really know what you're copying...
    There should be multiple options:

    • Key
    • Value (as nicely formatted JSON)
    • Value (raw data)
    • Timestamp (in UTC seconds)
  • It takes up too much space!
    A whole column (even if it's small) makes it more confusing to look at.

    • Maybe a context (right-click) menu? Nahhh, that'd be the only use of a ctx menu, thus confusing UX
    • Maybe a button that only appears when the cursor is inside the row? Would make it visually less cluttered, but we'd still have to reserve the horizontal space for where the button will appear. (unless we want to re-layout, which is insanely bad lol)
    • Maybe make the copy button itself smaller (like the "expand message '+' icon"), and then put it inline as well (but where? into the 'value' column, together with the expand plus??)

Degrade `/groups` gracefully during rolling update of kafka brokers

Similar to issue #35, viewing consumer groups also does not work as intended when not all Kafka brokers are available (for example when they are being restarted one by one during a rolling update).

  • Backend should degrade gracefully (show as much info as possible, instead of returning a status:500)
  • Frontend should be able to handle .lag == null and show an icon in place of the missing information
  • HTTP error messages should be shown in the frontend. Currently the 'message' of statusCode: 500, message: "Could not describe consumer groups in the Kafka cluster" is not shown at all.

Support for Avro

As mentioned in #8 :

  • Avro data has to be transformed into Json in the backend.

  • When sending messages to the frontend, the original format of the message must be sent along as a simple description. Something like sourceFormat:"avro".

  • The frontend has to display the source format.

Add subscribed consumer group count per topic

Use case

Kafka clusters which have grown over time and are used by several teams might have topics which aren't used by anyone (anymore). In order to figure out which topics could potentially be deleted, it would be helpful to know how many consumer groups consume each topic. An active consumer group for a topic could be defined as a group which has active group offsets for at least one partition of that topic.

Topics which do not have any consumer groups consuming it are likely good candidates to look at.
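A minimal Go sketch of that definition, assuming the committed offsets have already been fetched into a nested map (the GroupOffsets type below is a hypothetical stand-in for whatever the offsets fetch returns):

package kafka

// GroupOffsets maps group -> topic -> partition -> committed offset.
type GroupOffsets map[string]map[string]map[int32]int64

// ConsumerGroupCountPerTopic counts, for every topic, how many consumer groups
// have committed an offset for at least one of its partitions.
func ConsumerGroupCountPerTopic(offsets GroupOffsets) map[string]int {
    counts := make(map[string]int)
    for _, topics := range offsets {
        for topic, partitions := range topics {
            if len(partitions) > 0 {
                counts[topic]++
            }
        }
    }
    return counts
}

Topics whose count stays at zero would then be the deletion candidates mentioned above.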

Common issues in all tables

  • All tables should remember their page size (and have an option to change it in the first place)
  • All columns should be sortable
  • (Maybe) allow filtering in each column, just like in the antd demo?

Add consumer group lags to consumer groups page

There are two ways we could implement the consumer group lags.

  1. Implement the ListOffsetRequest in sarama (see: https://github.com/apache/kafka/blob/05ba5aa00847b18b74369a821e972bbba9f155eb/clients/src/main/java/org/apache/kafka/common/requests/ListOffsetRequest.java )

  2. Setup a GRPC endpoint in Kafka Minion and fetch Lags etc. from Kafka Minion

Pro/Cons:

  • Kafka Minion is an external dependency, which requires additional effort to run
  • Kafka Minion can only report these stats after having completely consumed the __consumer_offsets topic
  • Kafka Minion can offer more metrics (such as last commit activity), which we'll likely never get using the adminclient

Decision: We decided to opt for the ListOffsetRequest to avoid external dependencies. We may add a Kafka Minion integration at a later point, which should definitely be optional.

TODO:

  • Submit PR to Sarama which implements the ListOffsetRequest / ListOffsetResponse
  • Once the PR has been merged, implement the current offset for each partition, as well as a "topic lag" in the /consumer-groups endpoint(s)
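Once the high watermarks (via the ListOffsetRequest) and the committed group offsets are available, the lag computation itself is simple. A hedged Go sketch, with both inputs assumed to be maps keyed by partition ID:

package kafka

// PartitionLag computes the lag for a single partition. A committed offset of -1
// means the group has no offset for that partition; in that case the full
// watermark delta is reported (other conventions are possible).
func PartitionLag(highWatermark, committedOffset int64) int64 {
    if committedOffset < 0 {
        return highWatermark
    }
    if lag := highWatermark - committedOffset; lag > 0 {
        return lag
    }
    return 0
}

// TopicLag sums the partition lags for one consumer group on one topic.
func TopicLag(highWatermarks, committedOffsets map[int32]int64) int64 {
    var total int64
    for partitionID, hw := range highWatermarks {
        committed, ok := committedOffsets[partitionID]
        if !ok {
            committed = -1 // group has no committed offset for this partition
        }
        total += PartitionLag(hw, committed)
    }
    return total
}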

Refresh button in message tab

A button that just refreshes the last/current message search.
Sometimes you just want to redo the search. Having to change some search parameter just to search again is a bad solution.

Add build pipeline

We should create a pipeline which builds and potentially tests our containers. Upon creating a release we should also publish that Docker image to the package registry. Since GitHub Actions and the package registry are enabled for us, I suggest using both.

Add log dir size on topics, partitions & broker page

The log dir size is an interesting metric to figure out how much space a partition/topic or broker requires. Therefore we should show this metric at all three aggregation levels (partition/topic/broker).

TODO:

  • Implement log size on /topics and /brokers endpoints
  • Add column on /topic and /brokers pages

This issue is dependent on: #15

Timed out requests sometimes do not return the results

For some requests Kafka Owl times out because the Kafka cluster does not deliver the responses in a timely manner. The UI shows that it has aborted the request after 20s:

Backend API Error
Please check and modify the request before resubmitting.
Request '/api/topics/__consumer_offsets/messages?_offsetMode=-1&startOffset=-1&partitionID=94&pageSize=50&sortOrder=0&sortType=0' timed out after 20.0 sec

The backend logs:

{"level":"error","ts":"2019-12-27T09:50:50.433Z","msg":"Request was cancelled while waiting for messages from workers (probably timeout)","completedWorkers":0,"startedWorkers":1,"fetchedMessages":24}

It was expected that the backend returns the 24 fetched messages (all messages received until the request context is cancelled) to the frontend, along with a flag indicating that the request did not complete because it exceeded the timeout threshold.

The frontend should show the timeout properly as well (if it doesn't do so already).
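A sketch of the intended behaviour (not the actual Kowl code): collect worker results until the request context is cancelled, then return the partial result together with a completeness flag instead of discarding it. The TopicMessage and ListMessagesResult types are illustrative.

package api

import (
    "context"
)

// ListMessagesResult is a hypothetical response shape: the messages fetched so far
// plus a flag telling the frontend that the search stopped early.
type ListMessagesResult struct {
    Messages   []TopicMessage
    IsComplete bool // false if the deadline expired before all workers finished
}

type TopicMessage struct {
    PartitionID int32
    Offset      int64
    Value       []byte
}

// collectMessages drains the workers' result channel until either all expected
// messages arrived or the context is cancelled. On cancellation it returns the
// partial result instead of dropping it.
func collectMessages(ctx context.Context, results <-chan TopicMessage, expected int) ListMessagesResult {
    res := ListMessagesResult{IsComplete: true}
    for len(res.Messages) < expected {
        select {
        case msg, ok := <-results:
            if !ok {
                return res // workers finished early (fewer messages than requested)
            }
            res.Messages = append(res.Messages, msg)
        case <-ctx.Done():
            res.IsComplete = false // timeout: return what we have plus the flag
            return res
        }
    }
    return res
}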

Search messages with specific key/value in topic

Usecase:

As someone who works with Kafka I'd like to be able to find messages which have a specific key (which might have a semantic meaning - like a unique identifier for a customer). Equally I might want to filter/search for messages which have a specific value (either complete content or a subset of a JSON message like just some properties).

Kowl can do so by streaming all messages in a topic and evaluating each message against a set of user-defined filters.

Examples:

  • Search all messages in topic orders which have key xzy-3711
  • Search all messages in topic orders where value customer.UUID = "abc123" && order.value > 6000
  • Search all messages in topic orders where both the key and value filters from the above bullets apply

Challenges:

  • Value & key can be plaintext, JSON, binary, Avro, XML, ...
  • In Kowl Business you want to apply additional limits on what a user can do (e.g. max 1 GB of streaming per day), otherwise excessive usage can cause further trouble (traffic costs, resource usage of Kowl/Kafka, ...)
  • Filters which do not just match a string but require the correct type, such as greater-than-or-equal or date comparisons

Implementation hints:

Finding Kafka messages by key: There are multiple partitioning strategies, but the default partitioner uses a 32-bit murmur2 hash to choose which partition the message goes to. This could potentially be used to improve performance when searching a whole topic for a message by key.
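For reference, a Go sketch of that default-partitioner logic (a port of the Java client's murmur2 hash plus the positive-hash-modulo partition selection). This only narrows the search when producers actually used the default partitioner; otherwise all partitions still have to be scanned.

package kafka

// murmur2 is the hash used by the Java client's default partitioner.
func murmur2(data []byte) int32 {
    const (
        seed uint32 = 0x9747b28c
        m    uint32 = 0x5bd1e995
        r           = 24
    )
    h := seed ^ uint32(len(data))
    i := 0
    // Process 4 bytes at a time.
    for ; i+4 <= len(data); i += 4 {
        k := uint32(data[i]) | uint32(data[i+1])<<8 | uint32(data[i+2])<<16 | uint32(data[i+3])<<24
        k *= m
        k ^= k >> r
        k *= m
        h *= m
        h ^= k
    }
    // Handle the remaining bytes.
    switch len(data) - i {
    case 3:
        h ^= uint32(data[i+2]) << 16
        fallthrough
    case 2:
        h ^= uint32(data[i+1]) << 8
        fallthrough
    case 1:
        h ^= uint32(data[i])
        h *= m
    }
    h ^= h >> 13
    h *= m
    h ^= h >> 15
    return int32(h)
}

// partitionForKey mirrors the default partitioner: positive murmur2 hash modulo
// the partition count. Only valid if the producer used the default partitioner.
func partitionForKey(key []byte, numPartitions int32) int32 {
    return (murmur2(key) & 0x7fffffff) % numPartitions
}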

Add getting started documentation

As of now the documentation is very rudimentary. The following things should be improved/added:

  • Check Flags again (e. g. Telemetry Host / Port doesn't make sense as they are the same as the REST API)
  • Add installation instructions
  • Add badges (Container, Release version, License, Go Report)
  • Feature list
  • Supported Kafka versions

Visually mark Tombstones in the Frontend

Messages with an empty (null) value on a topic whose cleanup policy is compact or compact,delete indicate a tombstone event. We should somehow mark them as tombstones.

Either the backend or frontend could mark a message as Tombstone (this has yet to be discussed)
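If the backend does the marking, the check itself is tiny; a sketch, assuming the topic's cleanup.policy string is already known:

package kafka

import "strings"

// isTombstone flags a record as a tombstone if it has a nil value and the topic's
// cleanup.policy includes compaction ("compact" or "compact,delete").
func isTombstone(value []byte, cleanupPolicy string) bool {
    return value == nil && strings.Contains(cleanupPolicy, "compact")
}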

Add message count on topics list and detail

The estimated number of messages (watermark delta) should be added on the topic list for each topic, as well as on the topic detail (at the top), which should give users a better overview.

Check list message performance (response time)

When listing messages Kafka Owl has to do a couple of requests sequentially. It fetches the partition IDs for the requested topics and the low & high watermarks, starts a partition consumer for each partition, etc. I have seen response times of up to 4.3s for a topic with 50 partitions (regardless of the message size).

Therefore we should make sure that as many requests as possible are run concurrently (or can be periodically prefetched so that we can potentially save a request upon invocation).

  • Fetch watermarks concurrently
  • Check what requests could be periodically prefetched so that they do not need to be fetched upon invocation (this is now tracked under #20)
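A sketch of the concurrent watermark fetch using an errgroup; the fetchWatermark function type is a hypothetical stand-in for whatever client call returns a single partition's offsets:

package kafka

import (
    "context"
    "sync"

    "golang.org/x/sync/errgroup"
)

// Watermarks holds the low and high offset for one partition.
type Watermarks struct {
    Low  int64
    High int64
}

// fetchWatermark is a hypothetical stand-in for the client call that returns the
// low/high offsets of a single partition.
type fetchWatermark func(ctx context.Context, topic string, partition int32) (Watermarks, error)

// WatermarksConcurrently fetches the watermarks for all partitions of a topic in
// parallel instead of sequentially, which is where most of the listing latency comes from.
func WatermarksConcurrently(ctx context.Context, fetch fetchWatermark, topic string, partitions []int32) (map[int32]Watermarks, error) {
    var mu sync.Mutex
    result := make(map[int32]Watermarks, len(partitions))

    g, ctx := errgroup.WithContext(ctx)
    for _, p := range partitions {
        p := p // capture loop variable
        g.Go(func() error {
            wm, err := fetch(ctx, topic, p)
            if err != nil {
                return err
            }
            mu.Lock()
            result[p] = wm
            mu.Unlock()
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        return nil, err
    }
    return result, nil
}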

Add some extra data to sarama, display in kafka-owl

We should make it possible to decode things like:
https://github.com/apache/kafka/blob/05ba5aa00847b18b74369a821e972bbba9f155eb/clients/src/main/java/org/apache/kafka/common/requests/DescribeLogDirsResponse.java

Either by starting a PR to integrate that functionality directly into sarama, or just as a small module in kafka-owl.

  • Small tool that monitors specific files in github.com/apache/kafka for changes and alerts us on build. (so we never start a build with outdated struct information)

  • Add a way for us to request 'DescribeLogDirs'; interpret the result; and show it in the frontend

  • Add deleteRecords function

Embed topic documentation

Usecase

Kowl is my first stop if I want to get insights (key/value structure, example messages, topic config, ...) about a specific Kafka topic. The producers of a topic may maintain additional documentation at different places (Git repositories, Confluence/SharePoint, ...). If I could embed this documentation into Kowl so that it appears in my topic detail view (for instance as separate tab) I'd get the following advantages:

  • It's trivial to find the documentation for each Kafka topic
  • Kowl can be considered as the one tool to get insights about your topic
  • No context switching between tools (e. g. Kowl + Confluence) to get as much information as possible about a topic

The documentation could be a markdown file which is committed via Git and can then be picked up and mapped to a Kafka topic.

Examples:

The topic documentation could contain the following information:

  • Team which produces the topic
  • Semantic meaning of properties / messages
  • Versioning/Historical changes
  • GDPR / Data protection hints
  • Producing service
  • Schema (all properties along with descriptions)

Challenges:

  • Mapping from Markdown file to Kafka topic (maybe via filename?)
  • Schema could potentially be inferred automatically by analysing several messages in a topic. Investigate whether this can be used to enhance this documentation feature or can be neglected
  • Hot reload new markdown files?
  • How to keep config in sync for multiple stages (staging, pre, prod). Separate repo for docs?
  • Do we want to offer the ability to enforce some schema (e. g. a topic documentation must specify whether it's GDPR sensitive or not)
  • What information do we want to infer from Git (changelog, users who edited/created the files, ...)

Migrate from flags to config file

Due to the number of config options, flags are becoming cumbersome to maintain and it is not possible to have a well-arranged set of options. The number of config options is expected to grow. Thus we'd like to migrate from flags to a config file, except for sensitive inputs (e.g. passwords), where flags would still be accepted.
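A minimal sketch of how that split could look in Go, using gopkg.in/yaml.v2 for the config file and keeping the SASL password on a flag. The flag names and config keys are illustrative, not the final schema:

package main

import (
    "flag"
    "os"

    "gopkg.in/yaml.v2"
)

// Config shows the general shape; the keys are illustrative.
type Config struct {
    Kafka struct {
        Brokers []string `yaml:"brokers"`
        SASL    struct {
            Enabled  bool   `yaml:"enabled"`
            Username string `yaml:"username"`
            // Password is deliberately not read from the file;
            // sensitive values keep coming from flags/env.
            Password string `yaml:"-"`
        } `yaml:"sasl"`
    } `yaml:"kafka"`
}

func loadConfig() (*Config, error) {
    var (
        configPath   = flag.String("config.filepath", "config.yaml", "path to the YAML config file")
        saslPassword = flag.String("kafka.sasl.password", "", "SASL password (sensitive, flag/env only)")
    )
    flag.Parse()

    raw, err := os.ReadFile(*configPath)
    if err != nil {
        return nil, err
    }
    cfg := &Config{}
    if err := yaml.Unmarshal(raw, cfg); err != nil {
        return nil, err
    }
    cfg.Kafka.SASL.Password = *saslPassword
    return cfg, nil
}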

Admin Panel

A page that shows all sorts of "admin information"

  • list of kowl users (+ roles and permissions)

  • find users by role, or by permission

  • list of all role bindings, the "Google Group" or "GitHub Team" they resolve to; and maybe a button to trigger a refresh of those immediately

  • new permission "actionManageKowl" element to allow/deny viewing this panel

  • license overview: how many seats in use? by which users?

Things to keep in mind:

  • What about users that only use direct entries (email addresses) instead of "Google Groups" or "GitHub Teams"?

Add partition overview for topics

Beside a "messages" and "configuration" tab in the topics' details page we could have another tab "overview" which for now shows a table row for each partitionID along with it's low & high watermark. Something like this (ignore the "Group Offset" column):

More settings saved per topic

  • Preview fields
  • QuickSearch (maybe?)
  • Message search parameters
  • Column sorting (a lot of work, pretty small feature, maybe later)

Add tooltips for Consumer Group states

The consumer group overview lists all consumer groups along with their group state. For users it might not be clear what each state means. This could be improved by adding a tooltip for each possible group state.

Existing group states are:

  • Stable - Consumer group has members which have been assigned partitions
  • Empty - Consumer group does not have any members
  • Unknown - Group state is not known
  • Dead - Consumer group does not have any members and its metadata has been removed
  • Preparing Rebalance - A reassignment of partitions is required. Members are asked to stop consuming so that rebalancing can take place
  • Completing Rebalance - Kafka assigns partitions to group members

Additional information for reference: https://chrzaszcz.dev/2019/06/kafka-rebalancing/ (nice blogpost describing consumer group states)

Icons are matched here:

https://github.com/cloudhut/kowl/blob/master/frontend/src/components/pages/GroupDetails.tsx#L300-L305 (this list is not complete and requires more states to be added)

Show restricted topics in overview

If a user is allowed to see a topic but is not allowed to inspect its contents (such as partitions, messages, etc.), we should still list it in the topics overview and mark it as "locked" so that it is clear the user cannot inspect it.

Right now the user would see it just as all the other topics and would run into an error message after trying to inspect the topic.

Configuration endpoint returns empty instead of 404

When the '/configuration' endpoint is called with a topic that doesn't exist it returns an empty result.

It should instead return a 404 http error code.

This concept applies to all routes though, so they should be checked out as well.
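A sketch of the desired handler behaviour with net/http; topicExists and getTopicConfig are hypothetical stand-ins for the real Kafka lookups:

package api

import (
    "context"
    "encoding/json"
    "net/http"
)

// Hypothetical stand-ins for the real Kafka lookups.
var topicExists = func(ctx context.Context, topic string) (bool, error) { return false, nil }
var getTopicConfig = func(ctx context.Context, topic string) (map[string]string, error) { return nil, nil }

// handleTopicConfig answers 404 when the requested topic does not exist,
// instead of returning an empty result.
func handleTopicConfig(w http.ResponseWriter, r *http.Request) {
    topicName := r.URL.Query().Get("topic")

    exists, err := topicExists(r.Context(), topicName)
    if err != nil {
        http.Error(w, "failed to check whether the topic exists", http.StatusInternalServerError)
        return
    }
    if !exists {
        http.Error(w, "topic '"+topicName+"' does not exist", http.StatusNotFound)
        return
    }

    cfg, err := getTopicConfig(r.Context(), topicName)
    if err != nil {
        http.Error(w, "failed to fetch the topic configuration", http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(cfg)
}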

Make admin page useful

  • When viewing a user, the roles/permissions are shown, but the page should also show how the user got that role! Specifically: what binding caused this role to be added to the user?

  • Which groups is a given user part of (directly, and indirectly)?

  • Who is in a given group?

  • Bug: Clicking on the + in the bindings table expands all lines

  • Bug: The expand icon is at the wrong position

  • Issue: Users Tab: space should not be distributed that way. In every column there's only very little text, and when each column is 1/3 of the width, the rows are hard to read because the values are so far apart. There should be an empty column on the right that takes up all the remaining space.

Prepare for Avro and XML content

Some people use Avro or XML as their message format. So of course kafka-owl has to support those too.

Since those formats are - just like Json - entirely self-describing and also don't contain any structures that can't be expressed in Json, I think the best approach would be to convert them to Json in the backend.

Having a common data format like that will make things a lot easier (as opposed to literally copying all code 2 times, or even more if there is a new format...).

In order to be able to develop and test this, we need some sample data to work with.

  • Our local dummy Kafka setup needs two new topics: avro-topic and XML-topic.

  • Those topics need some sample data in them, so the already existing go-tool should be adapted to generate some random messages to populate the topics with.

  • The messages should not all have the same schema. Some variance would be good to simulate real world scenarios. (objects that have new, renamed, and/or deleted fields)

Handle decoding of failed member assignments

In https://github.com/kafka-owl/kafka-owl/blob/master/backend/pkg/kafka/describe_consumer_groups.go#L152 we return a 500 error if a member assignment can not be decoded as desired. This should never happen as all consumers of protocolType == "consumer" should follow a given schema, but theoretically it's possible.

Instead of returning a server error we should log this at the error level and possibly dump the byte array / hex string so that we can decode the package and find the culprit.
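A sketch of that behaviour with zap-style logging; decodeMemberAssignment stands in for the real decoder:

package kafka

import (
    "encoding/hex"

    "go.uber.org/zap"
)

// decodeMemberAssignment is a hypothetical stand-in for the real decoder.
var decodeMemberAssignment = func(raw []byte) (interface{}, error) { return nil, nil }

// tryDecodeAssignment decodes a member assignment but never lets a malformed payload
// bubble up as a 500. Instead it logs the raw bytes as hex so the payload can be
// analysed later, and returns nil so the rest of the group description still renders.
func tryDecodeAssignment(logger *zap.Logger, memberID string, raw []byte) interface{} {
    assignment, err := decodeMemberAssignment(raw)
    if err != nil {
        logger.Error("failed to decode member assignment",
            zap.String("member_id", memberID),
            zap.String("assignment_hex", hex.EncodeToString(raw)),
            zap.Error(err))
        return nil
    }
    return assignment
}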

Collaborations features

These are some improvements that will probably be useful for collaboration.

  1. Search query for the topics (in topic list)
  2. Anchor for the offset number or offset numbers (in messages tab)
  3. Search query for message search (message search tab)

Message Table: tooltips and view options

Users should be able to:

  • select what columns are visible, and in what order

  • change the way the timestamp is displayed (default, only date, only time, unix seconds, relative (x days ago))

  • 'No key' should have a tooltip to explain what the icon means

Add leader and replica broker IDs for each partition

In the partitions overview we should add one or two columns which show the leader and replica broker IDs for each partition.

  • Add leader and replica broker IDs in the /partitions response in the Backend
  • Add the new information to the frontend

Update: Backend already returns the required information

ACL overview

Add a page which shows all ACL rules for each resource type and principal

Add prometheus metrics

There are no prometheus metrics yet. Interesting metrics can be:

  • Request latency bucket (where request-type is a label)
  • Number of incoming requests
  • Bytes in/out (helps to understand how busy the backend is; could be pretty high due to large messages or a high message throughput)
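A sketch with prometheus/client_golang showing a latency histogram labelled by request type and a request counter; the metric names are illustrative, not an existing metric set:

package api

import (
    "net/http"
    "strconv"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    requestDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
        Name:    "kowl_http_request_duration_seconds",
        Help:    "Latency of incoming HTTP requests.",
        Buckets: prometheus.DefBuckets,
    }, []string{"request_type"})

    requestsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
        Name: "kowl_http_requests_total",
        Help: "Number of incoming HTTP requests.",
    }, []string{"request_type", "status"})
)

// instrument wraps a handler and records the two metrics above.
func instrument(requestType string, next http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
        next(rec, r)
        requestDuration.WithLabelValues(requestType).Observe(time.Since(start).Seconds())
        requestsTotal.WithLabelValues(requestType, strconv.Itoa(rec.status)).Inc()
    }
}

type statusRecorder struct {
    http.ResponseWriter
    status int
}

func (r *statusRecorder) WriteHeader(code int) {
    r.status = code
    r.ResponseWriter.WriteHeader(code)
}

// metricsHandler exposes the /metrics endpoint for Prometheus to scrape.
func metricsHandler() http.Handler {
    return promhttp.Handler()
}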

Implement Cache layer

Use case

Most REST endpoints actually perform multiple (sequential) requests towards Kafka. There are various reasons why a cache is beneficial for Kafka Owl:

  • We can reduce those endpoints' response times by periodically prefetching some of these requests (e. g. topics list, partitionIDs and watermarks).
  • By prefetching requests periodically the load towards Kafka becomes more predictable as we can share the responses across multiple concurrent requests (and endpoints) and distribute the requests equally over time
  • A cache will potentially save requests towards Kafka

This feature request is prioritized low because we haven't had any performance issues with Kafka yet. The more users Kafka Owl serves, the more important this issue becomes. Even tens of developers who concurrently browse through Kafka Owl shouldn't be a problem for most Kafka clusters.

Implementation

We should create a storage/cache interface which allows a Redis / InMemory implementation. Additionally we'd need to add fetchedAt timestamps in the responses so that we can show the age of the provided data.

One of the initial ideas was to create one Cache instance for each type of data we want to cache (e. g. topics, partitions, consumer groups, ..) with the advantage that we can shift more responsibility (such as RWMutexes, MaxAge gets, generic set/get interface{}) towards the Cache struct.
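A sketch of such an interface with an in-memory implementation guarded by an RWMutex; the naming is illustrative, and a Redis implementation could satisfy the same interface:

package cache

import (
    "sync"
    "time"
)

// Entry wraps a cached value together with the time it was fetched from Kafka,
// so that responses can expose the age of the data.
type Entry struct {
    Value     interface{}
    FetchedAt time.Time
}

// Store is the interface an in-memory or Redis implementation would satisfy.
type Store interface {
    Set(key string, value interface{})
    Get(key string, maxAge time.Duration) (Entry, bool)
}

// memoryStore is a minimal in-memory implementation guarded by an RWMutex.
type memoryStore struct {
    mu      sync.RWMutex
    entries map[string]Entry
}

func NewMemoryStore() Store {
    return &memoryStore{entries: make(map[string]Entry)}
}

func (s *memoryStore) Set(key string, value interface{}) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.entries[key] = Entry{Value: value, FetchedAt: time.Now()}
}

// Get returns the entry only if it is younger than maxAge; a maxAge of 0 disables
// the age check.
func (s *memoryStore) Get(key string, maxAge time.Duration) (Entry, bool) {
    s.mu.RLock()
    defer s.mu.RUnlock()
    e, ok := s.entries[key]
    if !ok {
        return Entry{}, false
    }
    if maxAge > 0 && time.Since(e.FetchedAt) > maxAge {
        return Entry{}, false
    }
    return e, true
}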
