
chronix.server's Introduction


Chronix Server

The Chronix Server is an implementation of the Chronix API that stores time series in Apache Solr. Chronix uses several techniques to optimize query times and storage demand. On a benchmark querying several ranges (0.5 days up to 180 days), Chronix achieves an average runtime of 23 milliseconds per range query. The dataset contains about 3.7 billion pairs and takes 108 GB serialized as CSV; Chronix needs only 8.7 GB to store it. Everything runs on a standard laptop computer, with no need for clustering, parallel processing, or other complex setups. Check it out and give it a try.

The repository chronix.examples contains some examples.

How Chronix Server stores time series

Chronix Architecture

The key data type of Chronix is called a record. It stores a chunk of time series data in a compressed binary large object. The record also stores technical fields, time stamps for start and end that describe the time range of the chunk, and a set of arbitrary user-defined attributes. Storing records instead of individual pairs of time stamp and value has two major advantages:

  1. A reduced storage demand due to compression
  2. Almost constant query times for accessing a chunk due to indexable attributes and a constant overhead for decompression.

The architecture of Chronix has the four building blocks shown in the figure above. It is well-suited to the parallelism of multi-core systems: all blocks can work in parallel to each other to increase the throughput.

Semantic Compression

Semantic Compression is optional and reduces the amount of time series data with the goal of storing fewer records. It uses techniques that exploit knowledge about the shape and the significance of a time series to remove irrelevant details, even if some accuracy is lost, e.g. dimensionality reduction through aggregation.

Attributes and Chunks

Attributes and Chunks breaks down a time series into chunks of n data points that are serialized into c bytes. It also calculates the attributes and the pre-calculated values of the records. Part of this serialization is a Date-Delta Compaction that compares the deltas between time stamps: it serializes only the value if the deviation between two consecutive deltas is within a defined range, and otherwise writes both the time stamp and the value to the record's data field. A sketch of that idea follows below.
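
To illustrate the idea, here is a minimal, hypothetical sketch of such a compaction in Java. The class name, the flag-based layout, and the allowedDrift parameter are assumptions made for this example; the actual record serialization of Chronix differs.

import java.io.DataOutput;
import java.io.IOException;

/** Conceptual sketch of a Date-Delta Compaction; not the actual Chronix implementation. */
final class DateDeltaCompactionSketch {

    static void serialize(long[] timestamps, double[] values,
                          long allowedDrift, DataOutput out) throws IOException {
        long expectedDelta = 0;
        for (int i = 0; i < timestamps.length; i++) {
            long delta = (i == 0) ? 0 : timestamps[i] - timestamps[i - 1];
            boolean storeTimestamp = (i == 0) || Math.abs(delta - expectedDelta) > allowedDrift;
            out.writeBoolean(storeTimestamp); // one flag so a reader can tell both cases apart
            if (storeTimestamp) {
                out.writeLong(timestamps[i]); // deviation too large: store the time stamp as well
                expectedDelta = delta;
            }
            out.writeDouble(values[i]); // the value is always stored
        }
    }
}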

Basic Compression

Basic Compression then applies gzip, a lossless compression technique that operates on c consecutive bytes. Only the record's data field is compressed to reduce the storage demand, while the attributes remain uncompressed for access. Compression of operational time series data yields a high compression rate due to its value characteristics. In spite of the decompression costs when accessing data, compression actually improves query times, as the smaller data volume is processed faster.
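
As a rough illustration of this step, gzipping a record's data field with plain java.util.zip could look like the following sketch (Java 8 compatible; a simplification, not the actual Chronix code):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

/** Sketch: only the record's data field is gzipped; the attributes stay uncompressed. */
final class DataFieldCompressionSketch {

    static byte[] compress(byte[] serializedPoints) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
            gzip.write(serializedPoints);
        }
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] compressed) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = gzip.read(buffer)) != -1) {
                bos.write(buffer, 0, read);
            }
        }
        return bos.toByteArray();
    }
}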

Multi-Dimensional Storage

The Multi-Dimensional Storage holds the records in a compressed binary format. Only the fields that are necessary to locate the records are visible to the data storage system as so-called dimensions. Queries can then use any combination of those dimensions to locate records. Chronix uses Apache Solr, as it ideally matches these requirements. Furthermore, Chronix has built-in analysis functions, e.g. a trend and outlier detector, to optimize operational time series analyses.

Data model

Chronix allows one to store any kind of time series, so the data model is open to your needs. By default, the Chronix Server uses the Chronix Time Series package, whose data model is described below.

A time series has at least the following required fields:

Field Name   Value Type
----------   ----------
start        Long
end          Long
name         String
type         String
data         Byte[]

The data field contains the points, pairs of time stamp (long) and numeric value (double), serialized as JSON and compressed with gzip. Furthermore, a time series can have arbitrary user-defined attributes. The type of an attribute is restricted to the field types available in Apache Solr.
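
A minimal sketch of how such a data blob could be built. The [[timestamps],[values]] layout matches the dataAsJson examples further below; the buildDataField helper and the reuse of the compression sketch above are assumptions for illustration only.

import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch: serialize the points as [[timestamps],[values]] JSON and gzip the result.
static byte[] buildDataField(long[] timestamps, double[] values) throws IOException {
    StringBuilder json = new StringBuilder("[[");
    for (int i = 0; i < timestamps.length; i++) {
        if (i > 0) json.append(',');
        json.append(timestamps[i]);
    }
    json.append("],[");
    for (int i = 0; i < values.length; i++) {
        if (i > 0) json.append(',');
        json.append(values[i]);
    }
    json.append("]]");
    // gzip as in the Basic Compression sketch above
    return DataFieldCompressionSketch.compress(json.toString().getBytes(StandardCharsets.UTF_8));
}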

Chronix Server Client (Source)

A Java client that is used to store and stream time series from Chronix. The following code snippet shows how to set up a connection to Chronix and stream time series. The example uses the Chronix API, the Chronix Server Client, Chronix Time Series, and SolrJ.

// A connection to Solr
SolrClient solr = new HttpSolrClient("http://localhost:8983/solr/chronix/");

// Define a group-by function for the time series records
Function<MetricTimeSeries, String> groupBy = ts -> ts.getName() + "-" + ts.attribute("host");

// Define a reduce function for the grouped time series records.
// concat is a helper that concatenates the timestamps / values of both series.
BinaryOperator<MetricTimeSeries> reduce = (ts1, ts2) -> {
    MetricTimeSeries.Builder reduced = new MetricTimeSeries.Builder(ts1.getName(), ts1.getType())
            .points(concat(ts1.getTimestamps(), ts2.getTimestamps()),
                    concat(ts1.getValues(), ts2.getValues()))
            .attributes(ts1.attributes());
    return reduced.build();
};

// Create a Chronix client with a metric time series and the Chronix Solr storage
int nrOfDocsPerBatch = 200; // number of documents per batch (example value)
ChronixClient<MetricTimeSeries, SolrClient, SolrQuery> chronix =
        new ChronixClient<>(new MetricTimeSeriesConverter(),
                new ChronixSolrStorage<>(nrOfDocsPerBatch, groupBy, reduce));

// Let's stream time series from Chronix. We want the maximum of all time series whose name matches *load*.
SolrQuery query = new SolrQuery("name:*load*");
query.setParam("cf", "metric{max}");

// The result is a Java Stream. We simply collect it into a list.
List<MetricTimeSeries> maxTS = chronix.stream(solr, query).collect(Collectors.toList());

Chronix Server Parts

The Chronix server parts are Solr extensions (e.g. a custom query handler). Hence there is no need to build a custom modified Solr. We just plug the Chronix server parts into a standard Solr.

The following sub-projects are Solr extensions and ship with the binary release of Chronix. The latest release of the Chronix server is based on Apache Solr version 6.4.2.

Chronix Server Query Handler (Source)

The Chronix Server Query Handler is the entry point for requests asking for time series. Based on the filter queries, it splits a request into range or function queries:

  • cf={function;function};{function;function};... (for aggregations, analyses, or transformations)
  • cf='' (empty, for range queries)

Before the Chronix Query Handler delegates a request, it modifies the user's query string. This is necessary because Chronix stores records, so a query asking for a specific time range has to be widened to match all records that overlap that range. As a result, it converts the query:

host:prodI4 AND name:\\HeapMemory\\Usage\\Used AND start:NOW-1MONTH AND end:NOW-10DAYS

into the following query:

host:prodI4 AND name:\\HeapMemory\\Usage\\Used AND -start:[NOW-10DAYS-1ms TO *] AND -end:[* TO NOW-1MONTH-1ms]
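
A hypothetical sketch of this rewriting in Java (the real query handler is more involved; the class name, patterns, and helper are assumptions for illustration):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch of the start/end rewriting described above; not the actual handler code. */
final class QueryRewriterSketch {

    private static final Pattern START = Pattern.compile("start:(\\S+)");
    private static final Pattern END = Pattern.compile("end:(\\S+)");

    /** Rewrites start/end terms so that all records overlapping the range match. */
    static String rewrite(String query) {
        Matcher start = START.matcher(query);
        Matcher end = END.matcher(query);
        if (!start.find() || !end.find()) {
            return query; // no time range given: leave the query untouched
        }
        String queryStart = start.group(1);
        String queryEnd = end.group(1);
        // A record overlaps the range unless it starts after the query end
        // or ends before the query start, hence the two negated range clauses.
        return query
                .replace("start:" + queryStart, "-start:[" + queryEnd + "-1ms TO *]")
                .replace("end:" + queryEnd, "-end:[* TO " + queryStart + "-1ms]");
    }
}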

Range Query

A range query is answered using the default Solr query handler, which supports all the great features (fields, facets, ...) of Apache Solr.

Example Result:

{
  "responseHeader":{
    "query_start_long":0,
    "query_end_long":9223372036854775807,
    "status":0,
    "QTime":3},
  "response":{"numFound":21,"start":0,"docs":[
      {
        "start":1377468017361,
        "name":"\\Load\\max",
        "end":1377554376850,
        "data":"byte[]" // serialized and compressed points
       },...
   ]}
}

Function Query

A custom query handler answers function queries. Chronix determines if a query is a function query by using the filter query mechanism of Apache Solr. There are three types of functions: Aggregations, Transformations, and High-level Analyses.

Currently the following functions are available:

(See the GPL2 branch, which has more functions.)

  • Maximum (metric{max})
  • Minimum (metric{min})
  • Average (metric{avg})
  • Standard Deviation (metric{dev})
  • Percentiles (metric{p:[0.1,...,1.0]})
  • Count (metric{count}) (Release 0.2)
  • Sum (metric{sum}) (Release 0.2)
  • Range (metric{range}) (Release 0.2)
  • First/Last (metric{first/last}) (Release 0.2)
  • Bottom/Top (metric{bottom/top:10}) (Release 0.2)
  • Derivative (metric{derivative}) (Release 0.2)
  • Non Negative Derivative (metric{nnderivative}) (Release 0.2)
  • Difference (metric{diff}) (Release 0.2)
  • Signed Difference (metric{sdiff}) (Release 0.2)
  • Scale (metric{scale:0.5}) (Release 0.2)
  • Divide (metric{divide:4}) (Release 0.2)
  • Time window based Moving Average (metric{movavg:10,MINUTES}) (Release 0.2)
  • Samples based Moving Average (metric{smovavg:10}) (Release 0.4)
  • Add (metric{add:4}) (Release 0.2)
  • Subtract (metric{sub:4}) (Release 0.2)
  • A linear trend detection (metric{trend})
  • Outlier detection (metric{outlier})
  • Frequency detection (metric{frequency:10,6})
  • Time series similarity search (metric{fastdtw:compare(metric=Load),1,0.8})
  • Timeshift (metric{timeshift:[+/-]10,DAYS}) (Release 0.3)
  • Distinct (metric{distinct}) (Release 0.4)
  • Integral (metric{integral}) (Release 0.4)
  • SAX (metric{sax:*af*,10,60,0.01})

Multiple analyses, aggregations, and transformations are allowed per query. In that case Chronix first executes the transformations in the order they occur, and then executes the analyses and aggregations on the result of the chained transformations. For example, the query:

cf=metric{max;min;trend;movavg:10,minutes;scale:4}

is executed as follows:

  1. Calculate the moving average
  2. Scale the result of the moving average by 4
  3. Calculate the max, min, and the trend based on the prior result.

A function query does not return the raw time series data by default. It returns all requested time series attributes, the function, and its result. With the option fl=+data enabled, Chronix also returns the data for the analyses. The attributes are merged using a set to avoid duplicates. For example, a query for a metric that is collected on several hosts might return the following result:

{
  "responseHeader":{
    "query_start_long":0,
    "query_end_long":9223372036854775807,
    "status":0,
    "QTime":3},
  "response":{"numFound":21,"start":0,"docs":[
      {
        "start":1377468017361,
        "name":"\\Load\\max",
        "end":1377554376850,
        "host:"["host-1","host-2", ...]
       }...
   ]
}

A few example analyses:

q=name:*load* // Get all time series whose name matches *load*

+ cf=metric{max} // Get the maximum of the time series data
+ cf=metric{p:0.25} // Get the 25th percentile of the time series data
+ cf=metric{trend} // Return all time series that have a positive trend
+ cf=metric{frequency:10,6} // Check time frames of 10 minutes for more than 6 points; matching time series are returned
+ cf=metric{fastdtw(metric:*load*),1,0.8} // Use fast dynamic time warping to search for similar time series

Join Time Series Records

A query can include multiple records of time series, and therefore Chronix has to know how to group records that belong together. Chronix uses a so-called join function that can use any arbitrary set of time series attributes to group records. For example, to join all records that have the same attribute values for host, process, and name:

cj=host,process,name

If no join function is defined, Chronix applies a default join function that uses the name field.
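
Expressed in code, a join function along the lines of the group-by function in the Java client example might look like this sketch (representing a record as a plain map is an assumption for illustration):

import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: build the join key from the host, process, and name attributes.
Function<Map<String, Object>, String> joinFunction = record ->
        Stream.of("host", "process", "name")
              .map(field -> String.valueOf(record.get(field)))
              .collect(Collectors.joining("-"));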

Modify Chronix' response

By default, Chronix returns (as Solr does) all fields defined in the schema.xml. There are three ways to modify the response using the fl parameter:

One specific user-defined field

If only a specific user-defined field is needed, e.g. the host field, one can set:

fl=host

Then Chronix returns the host field and the required fields (start, end, data, id).

Exclude a specific field

If a specific field is not needed, such as the data field, one can pass -data in the fl parameter:

fl=-data

In that case all fields except the data field are returned, even when the excluded field is a required field.

Explicit return of a field

This is useful in combination with an analysis. Analyses do not return the raw data by default, for performance reasons. If the raw data is needed, one can pass:

fl=+data

Chronix Response Writer

The Chronix Response Writer allows one to query the raw (uncompressed) data from Chronix in JSON format. To execute the transformer, add it to the fl parameter:

q=name:*load*&fl=+dataAsJson //to get all fields and the dataAsJson field
q=name:*load*&fl=dataAsJson //to get only the required fields (except the data field) and dataAsJson

The records in the result contain a field called dataAsJson that holds the raw time series data as JSON. Note: the data field that normally ships the compressed data is not included in the result.

Example Result:

{
  "responseHeader":{
    "query_start_long":0,
    "query_end_long":9223372036854775807,
    "status":0,
    "QTime":3},
  "response":{"numFound":21,"start":0,"docs":[
      {
        "start":1377468017361,
        "name":"\\Load\\max",
        "end":1377554376850,
        "dataAsJson":"[[timestamps],[values]]" //as json string
       },...
   ]}
}
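
Fetching and reading the dataAsJson field with plain SolrJ could look like the following sketch (solr is the HttpSolrClient from the client example above; setFields and getFieldValue are standard SolrJ calls):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

// Sketch: ask for all fields plus the raw points as JSON.
SolrQuery query = new SolrQuery("name:*load*");
query.setFields("+dataAsJson");

QueryResponse response = solr.query(query);
for (SolrDocument doc : response.getResults()) {
    // "[[timestamps],[values]]" as a JSON string; parse it with any JSON library.
    String dataAsJson = (String) doc.getFieldValue("dataAsJson");
}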

Chronix Plug-ins

Chronix provides a plug-in mechanism to add user-defined types as well as functions for those types.

Types

See the Metric type for an example.

Functions

See the NoOp function for metric types for an example.

We will provide more information in the new documentation of Chronix.

Chronix Server Retention (Source)

The Chronix Server Retention plugin deletes time series data that is older than a given threshold. The plugin is configured in the config.xml of the Solr core. The following snippet shows the configuration:

<requestHandler name="/retention" class="de.qaware.chronix.solr.retention.ChronixRetentionHandler">
  <lst name="invariants">
    <!-- Use the end field of a record to determine its age. -->
    <str name="queryField">end</str>
    <!-- Delete time series that are older than 40 days. -->
    <str name="timeSeriesAge">40DAYS</str>
    <!-- Do it daily at 12 o'clock. -->
    <str name="removeDailyAt">12</str>
    <!-- Define the source. -->
    <str name="retentionUrl">http://localhost:8983/solr/chronix/retention</str>
    <!-- Define how the index is updated after deletion. -->
    <str name="optimizeAfterDeletion">false</str>
    <str name="softCommit">false</str>
  </lst>
</requestHandler>
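
Conceptually, the deletion this plugin performs is equivalent to the following SolrJ calls (a sketch only; the plugin schedules and issues the deletion itself, based on the queryField and timeSeriesAge configured above):

// Delete all records whose end time stamp is older than 40 days, then commit.
solr.deleteByQuery("end:[* TO NOW-40DAYS]");
solr.commit();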

Usage

All libraries are available in the Chronix Bintray Maven repository. A build script snippet for use in all Gradle versions:

repositories {
    mavenCentral()
    maven {
        url "http://dl.bintray.com/chronix/maven"
    }
}
dependencies {
   compile 'de.qaware.chronix:chronix-server-client:<currentVersion>'
   compile 'de.qaware.chronix:chronix-server-query-handler:<currentVersion>'
   compile 'de.qaware.chronix:chronix-server-retention:<currentVersion>'
}

Contributing

Is there anything missing? Do you have ideas for new features or improvements? You are highly welcome to contribute your improvements to the Chronix projects. All you have to do is fork this repository, improve the code, and issue a pull request.

Building Chronix from Scratch

Everything should run out of the box. Only two things must be available:

  • Git
  • JDK 1.8

Just do the following steps:

cd <checkout-dir>
git clone https://github.com/ChronixDB/chronix.server.git
cd chronix.server
./gradlew clean build

Maintainer

Florian Lautenschlager @flolaut

License

This software is provided under the Apache License, Version 2.0.

See the LICENSE file for details.

chronix.server's Issues

Explicit option to enable data return.

Currently aggregations do not return the data field, but high-level analyses do.
We should provide an option so that the user can decide whether the data is needed or not.

The default is that no data is returned for all types of analyses (aggregations / high-level analyses). With the option fl=data enabled, Chronix returns the raw data.

Server-side Analysis Scripts

Discuss whether we should provide a way for a user to send a Groovy script to Chronix that is evaluated on the server side. This could be an easy way to extend Chronix with missing analyses.

Small chunk compaction

Chronix' performance is best when the chunk size is ideal (1024 kbytes, uncompressed). But in live monitoring we need small chunks (short time range) << 1024 kbytes. Hence the query and storage performance drops.

Feature:

  • Periodically check if Chronix has records with small chunks. If so, group these records and build larger chunks.

Filter / Window Transformations

Chronix currently has aggregations and high-level analyses but no transformations like filter or window / sliding window.

Time Series Vectorization

Implement a transformation that does a server-side time series vectorization.
This is useful in many cases, e.g. data reduction on the client side.

It could be something like this:

transform=vector:points,threshold

MovingAverage implementation is wrong

The documentation says:

/**
 * Calculates the moving average of the time series using the following algorithm:
 * <p>
 * 1) Get all points within the defined time window
 *    -> Calculate the time and value average: sum(values)/#points
 * <p>
 * 2) If the distance of two timestamps (i, j) is larger than the time window
 *    -> Ignore the empty window
 *    -> Use j as the start of the window and continue with step 1)
 *
 * @param timeSeries the time series that is transformed
 * @return the transformed time series
 */

That is not a moving average (and it's jumpy).
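
For comparison, a minimal sketch of a sliding time-window moving average that yields one averaged point per input point and therefore cannot jump (the method name and the trailing-window semantics are choices made for this example, not the Chronix implementation):

/** Sketch: trailing moving average over a sliding time window of windowMillis. */
static double[] movingAverage(long[] timestamps, double[] values, long windowMillis) {
    double[] averaged = new double[values.length];
    int windowStart = 0;
    double windowSum = 0;
    for (int i = 0; i < values.length; i++) {
        windowSum += values[i];
        // Drop the points that fell out of the window ending at timestamps[i].
        while (timestamps[i] - timestamps[windowStart] > windowMillis) {
            windowSum -= values[windowStart];
            windowStart++;
        }
        averaged[i] = windowSum / (i - windowStart + 1);
    }
    return averaged;
}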

Data as JSON even for transformations

Currently the dataAsJson functionality only works for range queries without any functions (aggregations, transformations, analyses). We should provide this feature also for queries that include functions.

Bug when joining fields

If a field used for joining records is not defined in the requested fields, the join key contains "null" values leading to wrong joins.

FastDTW Analysis

In some cases a time series has two or more points with exactly the same timestamp. FastDTW cannot deal with that. Hence we have to filter / aggregate the points with the same timestamp.

Pass multiple analyses to Chronix

q=host:xyz&fq=aggregation=min,max, ...

The results of the aggregations and analyses are added to the resulting document:

start: A
end: B
data:[...]
min:X
max:Y
...

Hence one can ask for several values at once.

Chronix Simple Ingestion Interface

We should provide a simple ingestion interface for time series data, e.g. pairs of timestamp, value. We should adapt the protocols of InfluxDB, Graphite, ...

Merging attributes on aggregations and analyses

The attributes of the time series included in an analysis or aggregation are currently not merged; only the attributes of the first time series are set in the result. We should merge the attributes, using a set per key that holds the values of all time series, as in the sketch below.
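
A sketch of the proposed merge (representing the attributes as plain maps is an assumption for illustration):

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: collect the attribute values of all time series per key into a set,
// so duplicate values disappear.
static Map<String, Set<Object>> mergeAttributes(List<Map<String, Object>> allAttributes) {
    Map<String, Set<Object>> merged = new HashMap<>();
    for (Map<String, Object> attributes : allAttributes) {
        for (Map.Entry<String, Object> entry : attributes.entrySet()) {
            merged.computeIfAbsent(entry.getKey(), key -> new HashSet<>()).add(entry.getValue());
        }
    }
    return merged;
}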

Data-mining or real-time?

Asked in Gitter...

@FlorianLautenschlager I have a question about Chronix. Maybe about chronix-storage in particular...

It seems like Chronix is designed more for data mining than real-time use, is that correct?

I ask because it seems that a time series is only (or should only be) added when a sufficient number of data points has been collected.

For example, in order to benefit from the compression it seems that "chunks" of data points need to be accumulated before adding the total series to Solr. If this is true, the "recent" values would not be available for query. Correct?

Or can I collect a set of metrics every 5 seconds, and add them through the storage service, whereby they can be queried? Does something underlying in Chronix "merge" them in some way into a document of "significant size" over time to achieve better compression and query performance?

My concern is that we are building a monitoring system with thousands (or tens of thousands) of disparate metrics collected every 5 seconds, but for any given host/metric pair there would only be 12 per minute -- but they need to be available "immediately" for query to display on real-time dashboards.

Frequency detection documentation is confusing.

I just had a look at the frequency detection code to find out what its purpose is.
Reading the documentation is not very enlightening: "Detects if a points occurs multiple times within a defined time range"

Reading the code doesn't really help either:

  • It takes multiple time series as arguments, but only looks at the first one.
  • It uses the List<Long> currentWindow as a counter (just the size, contents irrelevant).
  • It seems to subdivide a timeseries into chunks just smaller than windowSize minutes in duration and returns true if a chunk has at least windowThreshold more observations than its predecessor.

Allow server-side response compression

Add this to the jetty-gzip.xml in chronix-X.X/chronix-solr-X.X.X/server/etc

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_3.dtd">

<!-- =============================================================== -->
<!-- Mixin the GZIP Handler                                          -->
<!-- This applies the GZIP Handler to the entire server              -->
<!-- If a GZIP handler is required for an individual context, then   -->
<!-- use a context XML (see test.xml example in distribution)        -->
<!-- =============================================================== -->

<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="insertHandler">
    <Arg>
      <New id="GzipHandler" class="org.eclipse.jetty.server.handler.gzip.GzipHandler">
        <Set name="minGzipSize"><Property name="jetty.gzip.minGzipSize" deprecated="gzip.minGzipSize" default="0"/></Set>
        <Set name="checkGzExists"><Property name="jetty.gzip.checkGzExists" deprecated="gzip.checkGzExists" default="false"/></Set>
        <Set name="compressionLevel"><Property name="jetty.gzip.compressionLevel" deprecated="gzip.compressionLevel" default="1"/></Set>
        <Set name="excludedAgentPatterns">
          <Array type="String">
            <Item><Property name="jetty.gzip.excludedUserAgent" deprecated="gzip.excludedUserAgent" default=".*MSIE.6\.0.*"/></Item>
          </Array>
        </Set>
        <Set name="includedMethods">
          <Array type="String">
            <Item>GET</Item>
          </Array>
        </Set>
      </New>
    </Arg>
  </Call>
</Configure>

And the following snippet to chronix-X.X/chronix-solr-X.X.X/server/contexts/solr-jetty-context.xml

<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_0.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath"><Property name="hostContext" default="/solr"/></Set>
  <Set name="war"><Property name="jetty.base"/>/solr-webapp/webapp</Set>
  <Set name="defaultsDescriptor"><Property name="jetty.base"/>/etc/webdefault.xml</Set>
  <Set name="extractWAR">false</Set>

  <!-- Enable gzip compression -->
  <Set name="gzipHandler">
    <New class="org.eclipse.jetty.server.handler.gzip.GzipHandler">
      <Set name="minGzipSize">2048</Set>      
    </New>
  </Set>
</Configure>

Add the gzip.mod to chronix-X.X/chronix-solr-X.X.X/server/modules

New Aggregation: Range

Absolute difference between the minimum and the maximum:

min = -100
max = 200
range => 300

Key-value attributes

Hi,

I just learned about Chronix, so bear with me if I have overlooked this, but is it possible to add key-value metadata to the measurements, like host:myhost, application:myapp? Like the InfluxDB format or the format described here: https://www.elastic.co/blog/elasticsearch-as-a-time-series-data-store.

Also, it would be nice to have documentation about the HTTP ingestion protocol and format, if available, as well as the query API and aggregation functions.

New Aggregation: Signed Difference

Implement signed difference aggregation.

For negative values

first = -1
last = -10
=> diff = -9

For positive values

first = 1
last = 10
=> diff = 9

Positive first, negative last

first = 1
last = -10
=> diff = -11

Negative first, positive last

first = -1
last = 10
=> diff = 11

Result never returns

The Chronix client asks for the number of time series in a first call. If the subsequent result (e.g. of an analysis) reduces the number of time series, the result is never returned.

Enable CORS

Without CORS enabled, the Grafana plugin won't work.
Solution: Enable it per default.

Add this to the web.xml

<!-- Activates CORS for queries data e.g. grafana -->
<filter>
    <filter-name>cross-origin</filter-name>
    <filter-class>org.eclipse.jetty.servlets.CrossOriginFilter</filter-class>
    <init-param>
         <param-name>allowedOrigins</param-name>
         <param-value>http://localhost*</param-value>
    </init-param>
     <init-param>
         <param-name>allowedMethods</param-name>
         <param-value>GET,POST,DELETE,PUT,HEAD,OPTIONS</param-value>
     </init-param>
     <init-param>
         <param-name>allowedHeaders</param-name>
         <param-value>origin, content-type, cache-control, accept, options, authorization, x-requested-with</param-value>
     </init-param>
    <init-param>
        <param-name>supportsCredentials</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
      <param-name>chainPreflight</param-name>
      <param-value>false</param-value>
    </init-param>
</filter>

<filter-mapping>
  <filter-name>cross-origin</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Prometheus Integration

Build an integration between Prometheus and Chronix that reads data out of Prometheus into Chronix, so Chronix can be used as long-term storage.

When querying with fq, query handler raises an exception

Steps to reproduce:

  1. Install chronix-solr-6.0.1
  2. Run solr start
  3. Go to http://localhost:8983/solr/#/chronix/query
  4. Set q = metric:Load
  5. Set fq = anything you want

solr ui shows:
{
  "responseHeader":{
    "status":400,
    "QTime":4},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"no field name specified in query and no default specified via 'df' param",
    "code":400}}

solr.log shows:
2016-06-07 21:04:59.630 ERROR (qtp110456297-17) [ x:chronix] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: no field name specified in query and no default specified via 'df' param
at org.apache.solr.parser.SolrQueryParserBase.checkNullField(SolrQueryParserBase.java:700)

Moving Average based on a fixed size of samples

We currently only provide a moving average over a time window; hence the number of points within a window varies. Therefore we should also provide a moving average transformation over a fixed number of points, as in the sketch below.
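
A sketch of the proposed samples-based variant (names and semantics are illustrative, not an existing implementation):

// Sketch: trailing moving average over the last n points, independent of the time they span.
static double[] sampleMovingAverage(double[] values, int n) {
    double[] averaged = new double[values.length];
    double windowSum = 0;
    for (int i = 0; i < values.length; i++) {
        windowSum += values[i];
        if (i >= n) {
            windowSum -= values[i - n];
        }
        averaged[i] = windowSum / Math.min(i + 1, n);
    }
    return averaged;
}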
