powertools-lambda-python's Issues

Metrics - Separate ColdStart metric from business metrics and their dimensions

Is your feature request related to a problem? Please describe.

When capturing the cold start metric using the Metrics utility, an additional dimension named function_name is added.

Based on how EMF and CloudWatch Metric visualization work, this means we accidentally have two operational modes:

  • Cold start behaviour: all custom metrics and the ColdStart metric are plotted with function_name, service, <any custom dimension> dimensions
  • Warm start behaviour: all custom metrics are plotted with service, <any custom dimension>

This causes any custom metric to have data points in two different metrics with the same name when visualized, because from CloudWatch's point of view they're unique (different dimensions).


Describe the solution you'd like

After discussing this extensively with @cakepietoast and @nmoutschen, we could emit the ColdStart metric as a separate EMF blob to ensure backwards compatibility and prevent new data points from being split.
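
A rough sketch of what flushing ColdStart as its own EMF blob could look like; the helper name and namespace below are illustrative, not the actual implementation:

import json
import time

def flush_cold_start_metric(function_name: str, service: str) -> None:
    # Illustrative: emit ColdStart in a separate EMF blob so function_name
    # never attaches to the custom metrics flushed by log_metrics
    blob = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [
                {
                    "Namespace": "ServerlessAirline",  # assumed namespace
                    "Dimensions": [["function_name", "service"]],
                    "Metrics": [{"Name": "ColdStart", "Unit": "Count"}],
                }
            ],
        },
        "function_name": function_name,
        "service": service,
        "ColdStart": 1,
    }
    print(json.dumps(blob))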

Describe alternatives you've considered

  • Adding function_name as a default dimension across all metrics - This would increase CloudWatch billing
  • Removing the function_name dimension and logging function details as metadata - This would break existing customers' dashboards and make KPI impact analysis more difficult

Additional context

Error using tracer with aioboto3

Hello Heitor,

I have a lambda that uses the tracer from aws_lambda_powertools and the asyncio module. Everything runs fine with boto3, but if I use aioboto3 I get the following error:

Traceback (most recent call last):
  File "/opt/python/aws_lambda_powertools/tracing/tracer.py", line 152, in decorate
    raise err
  File "/opt/python/aws_lambda_powertools/tracing/tracer.py", line 144, in decorate
    response = lambda_handler(event, context)
  File "/opt/python/aws_lambda_powertools/logging/logger.py", line 157, in decorate
    return lambda_handler(event, context)
  File "/var/task/onExecuteCampaign/lambda_function.py", line 228, in lambda_handler
    raise e
  File "/var/task/onExecuteCampaign/lambda_function.py", line 225, in lambda_handler
    loop.run_until_complete(main(event, context))
  File "/var/lang/lib/python3.6/asyncio/base_events.py", line 488, in run_until_complete
    return future.result()
  File "/var/task/onExecuteCampaign/lambda_function.py", line 200, in main
    raise e
  File "/var/task/onExecuteCampaign/lambda_function.py", line 196, in main
    count_list = await asyncio.gather(*(process_segment(campaignId, segment) for segment in segments))
  File "/var/task/onExecuteCampaign/lambda_function.py", line 161, in process_segment
    raise e
  File "/var/task/onExecuteCampaign/lambda_function.py", line 158, in process_segment
    count_list  = await asyncio.gather(*(process_segment_partition(campaignId, segment,partition) for partition in range(num_partitions)))
  File "/var/task/onExecuteCampaign/lambda_function.py", line 140, in process_segment_partition
    raise e
  File "/var/task/onExecuteCampaign/lambda_function.py", line 119, in process_segment_partition
    Limit = pageSize
  File "/opt/python/aioboto3/resources.py", line 299, in do_action
    response = await action.async_call(self, *args, **kwargs)
  File "/opt/python/aioboto3/resources.py", line 67, in async_call
    response = await getattr(parent.meta.client, operation_name)(**params)
  File "/opt/python/aws_xray_sdk/ext/aiobotocore/patch.py", line 36, in _xray_traced_aiobotocore
    meta_processor=aws_meta_processor,
  File "/opt/python/aws_xray_sdk/core/async_recorder.py", line 101, in record_subsegment_async
    stack=stack,
  File "/opt/python/aws_xray_sdk/ext/boto_utils.py", line 57, in aws_meta_processor
    resp_meta.get('HTTPStatusCode'))
  File "/opt/python/aws_xray_sdk/core/models/entity.py", line 102, in put_http_meta
    self._check_ended()
  File "/opt/python/aws_xray_sdk/core/models/entity.py", line 283, in _check_ended
    raise AlreadyEndedException("Already ended segment and subsegment cannot be modified.")
aws_xray_sdk.core.exceptions.exceptions.AlreadyEndedException: Already ended segment and subsegment cannot be modified.

Here is the relevant snippet of code with aioboto3 (Python 3.6):

@tracer.capture_method
async def process_segment_partition(campaignId, segment, partition):
    ...
    async with aioboto3.resource('dynamodb') as dynamo_resource:
        # async table resource
        async_table = dynamo_resource.Table(environ['TABLE_NAME'])
        # query first page
        response = await async_table.query(
            KeyConditionExpression=Key('pk').eq(segmentPartitionId),
            Limit=pageSize
        )
    ...

async def process_segment(campaignId, segment):
    ...   
    await asyncio.gather(*(process_segment_partition(campaignId, segment,partition) for partition in range(num_partitions)))
    ...

# main loop implementation
async def main(event, context):
  ...
  await asyncio.gather(*(process_segment(campaignId, segment) for segment in segments))
  ...

@tracer.capture_lambda_handler
@logger_inject_lambda_context
def lambda_handler(event, context):
    
    try:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main(event, context))
    except Exception as e:
        logger.exception(e)
        raise e
    finally:    
        loop.close()  

The code above raises the error mentioned.

If I just remove the aioboto3 part in the method process_segment_partition, it runs fine:

Using boto3:

@tracer.capture_method
async def process_segment_partition(campaignId, segment, partition):
    ...
    response = table.query(
        KeyConditionExpression=Key('pk').eq(segmentPartitionId),
        Limit=pageSize
    )
    ...

Any ideas?

Logger: overwrite keys while logging new stuff

Hi!

I am here to ask you about the Keys of the logger.

I see that you can update the keys with the structure_logs() method, but once you put in a new key and assign a value, you won't be able to change it without calling structure_logs() again.

It would be interesting to be able to overwrite some key values at the same time you are logging; think about some KPI you would like to log as a key, for instance. It would be nice to easily overwrite its value.

Although I know it is possible to write keys under the "message" key, it doesn't feel like the best option to me; the top-level keys seem to be the place for this kind of information in order to catch it easily with tools like Datadog.

I don't know if I am explaining it correctly, but I hope you consider this as a new feature. I would also love to know your opinion about it, as well as possible alternative approaches; maybe I am just misunderstanding the way you want the logger to be used, and writing more keys under message is the correct way to do it!

Thank you very much in advance!
And congrats for such an amazing tool!
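
For illustration only, the requested ergonomics might look something like the sketch below; the keyword-argument support shown is hypothetical, not an existing Logger API:

from aws_lambda_powertools import Logger

logger = Logger(service="payment")
logger.structure_logs(append=True, kpi=0)

def lambda_handler(event, context):
    # hypothetical: overwrite the structured "kpi" key inline while logging,
    # without another structure_logs() call
    logger.info("order processed", kpi=42)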

Metrics: add_dimension function allows non-string values which causes failure to publish metrics

There is currently no check when a dimension is added that the value is of type string. Dimension values must be strings per the [EMF schema](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html). The consequence of writing non-string dimension values to the logs is a silent failure scenario where no metrics get published:

DimensionSet Array
A DimensionSet is an array of strings containing the dimension keys that will be applied to all metrics in the document. The values within this array MUST also be members on the root node, referred to as the Target Members.
A DimensionSet MUST NOT contain more than 9 dimension keys.
The target member MUST have a string value.

Reproduce:

from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics()
metrics.add_namespace("test-namespace")

@metrics.log_metrics
def lambda_handler(event, context):
    metrics.add_metric(name="CartUpdated", unit=MetricUnit.Count, value=1)
    metrics.add_dimension(name="user_type", value=3)  # non-string value: metrics are silently dropped
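
One possible shape of a fix is a fail-fast guard in add_dimension; this is a sketch, not the shipped implementation:

def add_dimension(self, name: str, value: str) -> None:
    # sketch: raise early instead of silently failing to publish metrics
    if not isinstance(value, str):
        raise ValueError(
            f"Dimension '{name}' value must be a string per the EMF spec, "
            f"got {type(value).__name__}"
        )
    self.dimension_set[name] = value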

Checklist before GA

UPDATE: Metrics.add_namespace and POWERTOOLS_METRICS_NAMESPACE will be removed during GA.

Creating a checklist of activities to remember after finishing the docs website.

Note - Checklist updated. We might need to split into multiple repos, as most tools are language-specific, docs layouts differ vastly, and Dependabot can't seem to handle multiple CHANGELOGs.

RFC: parameter retrieval utility

Key information

  • RFC PR: #96
  • Related issue(s), if known:
  • Area: Utilities
  • Meet tenets: Yes

Summary

Add a utility to facilitate retrieval of parameters from the SSM Parameter Store or Secrets Manager. This would support caching parameter values, avoiding a retrieval at every execution.

Motivation

Many Lambda users use either the Parameter Store or Secrets Manager to store things such as feature flags, third-party API keys, etc. In a serverless context, a simple approach is to either fetch at every invocation (which might be costly/run into limits) or fetch at initialisation (meaning no control over expiration and refresh). This would simplify that experience for customers.

Proposal

This utility should provide a very simple-to-use interface for customers who don't want to deep-dive into how it works. To prevent hitting the throughput limit of the Parameter Store, it should have a default cache duration in the single-digit seconds (e.g. 5).

Basic usage

# For SSM Parameter
from aws_lambda_powertools.utilities import get_parameter
# For Secrets Manager
from aws_lambda_powertools.utilities import get_secret

def handler(event, context):
    param = get_parameter("my-parameter")
    secret = get_secret("my-secret")

Changing the default cache duration

from aws_lambda_powertools.utilities import get_parameter

def handler(event, context):
    # Only refresh after 300 seconds
    param = get_parameter("my-parameter", max_age=300)

Convert from specific format

from aws_lambda_powertools.utilities import get_parameter

def handler(event, context):
    # Transform into a dict from a json string
    param = get_parameter("my-parameter", format="json")

    # Transform into bytes from base64
    param = get_parameter("my-parameter", format="binary")

Retrieve multiple parameters from a path

from aws_lambda_powertools.utilities import get_parameters

def handler(event, context):
    params = get_parameters("/param/path")
    # Access the item using a param.name notation
    print(params.Subparam)

    # Other modifications are supported
    params = get_parameters("/param/path", format="json")
    print(params.Subparam["key"])

    # Supports recursive fetching
    params = get_parameters("/param/path", recursive=True)
    print(params.Subparam.Subsubparam)

Drawbacks

  • This would add a dependency on boto3. Many functions probably use it in some form, but Powertools doesn't require it directly at the moment. However, it is already pulled in through the X-Ray SDK.
  • Many problems around parameters can be solved using environment variables, so the usefulness is limited to cases where the value could change at short notice.

Rationale and alternatives

  • What other designs have been considered? Why not them? Replicating the ssm-cache-python feature set; however, that might be too feature-rich for this use case.
  • What is the impact of not doing this? Users who want to retrieve dynamic parameters will have to think about the expiration logic themselves if they don't want to risk getting throttled at scale.

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

Feature: Docs website

Expand the API docs into detailed documentation, including a contributors guide, related projects, etc.

Portray could be an enabler here, since we use pdoc3 for API docs - the less config and bootstrap, the better.

Combine logging and tracing ?

Hi

What should you put before the lambda handler to use both logging and tracing?

I have this:

@logger_inject_lambda_context
@tracer.capture_lambda_handler
def handler(event, context):

RFC: SQS partial batch failure middleware

First things first: Congratulations for the amazing repo.

Key information

  • RFC PR: (leave this empty)
  • Related issue(s), if known:
  • Area: Utilities
  • Meet tenets: Yes

Summary

A Lambda processing a batch of messages from SQS is a very common approach, and it works smoothly for most use cases. Now, for the sake of example, suppose we're processing a batch and one of the messages fails: Lambda is going to redrive the whole batch to the queue, including the successful messages! Re-running successful messages is not acceptable for all use cases.

Motivation

A very common execution pattern is a Lambda function consuming from SQS, in most cases with a batch size greater than one. In such cases, an error in one of the processed messages will cause the whole batch to return to the queue. Some use cases cannot rely on such behavior - non-idempotent actions, for instance. A solution for this problem would improve the experience of using Lambda with SQS.

Proposal

I'm going to propose a very simple sketch that's not complete.

import boto3

from aws_lambda_powertools.middleware_factory import lambda_handler_decorator

@lambda_handler_decorator
def sqs_partial_batch_failure(handler, event, context):
    sent_records = event['Records']
    sqs_client = boto3.client('sqs')

    response = handler(event, context)

    # get_successful is a placeholder for logic that determines which records succeeded
    successful_messages = get_successful(response)
    sqs_client.delete_message_batch(successful_messages)  # deletes 3rd and 7th messages


# batch size of 10, fails for the 3rd and 7th message
@sqs_partial_batch_failure
def handler(event, context):
    for record in event['Records']:
        do_sth(record)

Drawbacks

  • Add boto3 as dependency;
  • It may be unrelated with the proposal of this package;
  • Adds a little performance overhead to track failed records and delete the successful ones.

Rationale and alternatives

  • What other designs have been considered? Why not them?
    The lambda may be invoked with a single message (batch size of one), but that feels like ill use of the full power of this integration. Processing as many messages as possible per Lambda invocation is the best scenario.
    Inspired by [middy](https://github.com/middyjs/middy/tree/master/packages).

  • What is the impact of not doing this?
    Implementing it may attract more users to such a powerful option: batch processing with SQS.

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

Get the logger to respect module hierarchy as a normal logger

Feature request: have this logger support the same multi-module inheritance present in the standard logging module.

When we create a normal logger in main.py, subsequent invocations of logging.getLogger(__name__) in submodules inherit the handlers and levels set in main.py, due to the singleton nature of the logging.Logger class; hence logging.getLogger(name) is used to get the correct instance of the logger.

This allows any log message in sub-modules to have the same characteristics as the parent logger. In this specific case, since this formatter transforms everything into nice JSON with the possibility of appending structured keys, it would be nice for those structured keys to persist and be inherited by child logging instances.

If I need to create another logger = Logger(), this will indeed create a whole new instance which has no relationship with the logger in main.py; all the structured keys set up in main are now lost and need to be recreated, which, besides being repetitive, might not be possible.

For example, take these two logs: one from the lambda handler, another from a submodule. The runID was lost, since it was added via logger.structure_logs(append=True, runID=run_id):

{"timestamp": "2020-07-31 06:29:32,699", "level": "INFO", "location": "lambda_handler:46", "service": "myService", "sampling_rate": 0.0, "cold_start": true, "function_name": "name, "function_memory_size": "512", "runID": "1b250a6732", "message": "my message on main.py"}
{"timestamp": "2020-07-31 06:29:33,161", "level": "INFO", "location": "start_execution:49", "service": "core.handlers.handler", "sampling_rate": 0.0, "message": "my message on a sub module"}

It would be ideal if some basic keys were inherited, so that querying the logs afterwards is easy, e.g. getting all logs where runID = 'abc'.

Re-setting structure_logs in every module is not possible, since modules might be agnostic to the concept of a runID.
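
As an illustration only, an inheritance-aware API could look like the sketch below; the child parameter is hypothetical here:

# main.py
from aws_lambda_powertools import Logger

logger = Logger(service="myService")
logger.structure_logs(append=True, runID="1b250a6732")

# core/handlers/handler.py
from aws_lambda_powertools import Logger

# hypothetical: reuse the parent's handler, level, and appended keys
logger = Logger(service="myService", child=True)
logger.info("my message on a sub module")  # would also carry runID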

Ability to access the added keys that are captured from the lambda context. (eg. coldstart)

Is your feature request related to a problem? Please describe.
I'd like to have a metric that visualises the number of cold starts.

Describe the solution you'd like
I want to read the cold start true/false value from the context that is added to the logger when using @logger.inject_lambda_context. That would let me build a metric based on this value.

Describe alternatives you've considered
I could implement cold start detection myself by setting a variable in the initialisation code.
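
The alternative described, detecting a cold start with a module-level flag, could be sketched like this (assumes a metrics namespace is configured, e.g. via POWERTOOLS_METRICS_NAMESPACE, and an illustrative service dimension):

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics()
_cold_start = True  # module scope runs once per execution environment

@metrics.log_metrics
def lambda_handler(event, context):
    global _cold_start
    if _cold_start:
        metrics.add_metric(name="ColdStart", unit=MetricUnit.Count, value=1)
        metrics.add_dimension(name="service", value="my-service")
        _cold_start = False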

Enhancement Suggestion - Log Sampling

Hi there,

Before anything, great work with the library so far. I am wondering if log sampling capability is something you are thinking about for this repo, and if so, what would it take to get it implemented?

I am willing to help with the coding of this feature if I could get some pointers on where to start.
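
As a starting-point sketch only (not the eventual design), sampling could be as simple as flipping the level on a percentage of invocations:

import logging
import random

logger = logging.getLogger(__name__)
SAMPLE_RATE = 0.1  # assumed: DEBUG logs for ~10% of invocations

def lambda_handler(event, context):
    # decide per invocation whether this request emits debug logs
    sampled = random.random() <= SAMPLE_RATE
    logger.setLevel(logging.DEBUG if sampled else logging.INFO)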

Problem with long running functions

Hello Heitor,

I have the tracer decorator applied to a lambda that may take close to 15 minutes to execute.

If the execution happens in a short time, everything is fine and trace renders correctly in XRay.
However, if the execution takes close to 15 minutes, the trace shows as "Pending" and the Service Map doesn't render correctly in X-Ray.

This lambda function reads a stream from S3 and executes a lot of Dynamo BatchWriteItem requests.

The function uses threading, but I don't think this is the cause of the problem. It seems the problem is related to a long-running task. I've found the following reference in the documentation:

https://docs.aws.amazon.com/xray/latest/devguide/xray-api-segmentdocuments.html

For long-running requests, you can send an in-progress segment to notify X-Ray that the request was received, and then send subsegments separately to trace them before completing the original request.

When the request is complete, close the segment by resending it with an end_time. The complete segment overwrites the in-progress segment.

The snippet above suggests I should call create_segment at the end, passing end_time. I've taken a look at the tracer code and couldn't find any create_segment there.

Any thoughts?
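
For reference, the in-progress pattern from the X-Ray docs could be sketched with the raw API like this; a hand-rolled illustration, not something Tracer exposes:

import json
import time

import boto3

xray = boto3.client("xray")

def send_in_progress_segment(trace_id: str, segment_id: str, name: str) -> None:
    # notify X-Ray the request was received; resending the same segment
    # later with an end_time overwrites this in-progress one
    doc = {
        "trace_id": trace_id,
        "id": segment_id,
        "name": name,
        "start_time": time.time(),
        "in_progress": True,
    }
    xray.put_trace_segments(TraceSegmentDocuments=[json.dumps(doc)])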

Docs website inconsistent UX

Issue to track following fixes as observed when used across browsers and builds:

  • Major: Chrome service workers are not being refreshed with latest build
    • Action: remove offline plugin
  • Minor: Logo link not returning to homepage
  • Minor: Logo not appearing in Mobile version
  • Minor: API docs side bar doesn't have a link to return to main docs
    • Action: Create a custom [mako template for pdoc](https://github.com/pdoc3/pdoc/blob/master/pdoc/templates/html.mako)
    • Using custom templates yields all sorts of issues despite only needing to add an A tag. As we're no longer showing API ref results in the search, ignoring this for now.

@tracer.capture_lambda_handler does not support functions that return generator expressions

I have an async lambda listening to S3 bucket notifications, and I wanted to track the performance of downloading a large JSON file. Originally this was a regular function that worked, but when I later refactored it to yield the file pointer, it failed when deployed. Nothing was caught during unit testing.

What were you trying to accomplish?
Add a @tracer.capture_method to a function that yields a result.

Expected Behavior

Either fail the same way during testing when POWERTOOLS_TRACE_DISABLED='true', or support generator expressions.

Current Behavior

Either results in a RuntimeError("generator didn't yield") or a Task timed out

Possible Solution

Detect and support generator expressions, or provide better emulation during testing which raises an error when a traced function returns a generator expression.

Steps to Reproduce (for bugs)

import contextlib
import json
import tempfile
from typing import IO

import boto3
from aws_lambda_powertools import Tracer

bucket_name = "some_bucket"
key = "file.json"
tracer = Tracer()
s3 = boto3.client("s3")


@contextlib.contextmanager
def yield_without_capture() -> str:
    yield "[]"


@tracer.capture_method
@contextlib.contextmanager
def yield_with_capture() -> str:
    yield "[]"


@tracer.capture_method
@contextlib.contextmanager
def boto_yield_with_capture() -> IO:
    with tempfile.TemporaryFile() as fp:
        s3.download_fileobj(bucket_name, key, fp)
        print("download complete")  # Never prints
        fp.seek(0)
        yield fp


@contextlib.contextmanager
def boto_yield_without_capture() -> IO:
    with tempfile.TemporaryFile() as fp:
        s3.download_fileobj(bucket_name, key, fp)
        fp.seek(0)
        print("download complete")
        yield fp


@tracer.capture_lambda_handler
def lambda_handler(event: dict, _):
    if event["case"] == "yield_without_capture":
        with yield_without_capture() as yielded_value:
            result = yielded_value
    if event["case"] == "yield_with_capture":
        with yield_with_capture() as yielded_value:
            result = yielded_value
    if event["case"] == "boto_yield_without_capture":
        with boto_yield_without_capture() as fp:
            result = fp.read().decode("UTF-8")
    if event["case"] == "boto_yield_with_capture":
        with boto_yield_with_capture() as fp:
            result = fp.read().decode("UTF-8")  # "Task timed out after 10.01 seconds"

    return {"statusCode": 200, "result": json.loads(result)}
  1. case yield_without_capture returns the expected value
  2. case yield_with_capture returns a RuntimeError("generator didn't yield")
  3. case boto_yield_without_capture returns the expected value
  4. case boto_yield_with_capture returns a "Task timed out after 900.01 seconds"

Environment

  • Powertools version used: 1.1.2
  • Packaging format (Layers, PyPi): PyPi
  • AWS Lambda function runtime: Python 3.8

Regression of support for log level constants

Before #99 it was possible to pass a logging constant like logging.INFO to the Logger constructor. After #99, I get this exception: AttributeError: 'int' object has no attribute 'upper' from logging/logger.py:122.

The docstrings are also confusing, since the environment variable docs say it accepts an integer or a string, but the constructor parameter is documented as a string.

Expected Behavior

  • The Logger constructor should accept logging constants (integers) as well as strings.
  • The environment variable and constructor arg for log level should be consistent
  • The docstrings should be consistent with the code

Current Behavior

When an integer is passed, there is an AttributeError.

Steps to Reproduce (for bugs)

>>> import logging
>>> from aws_lambda_powertools import Logger
>>> Logger(service="demo", level=logging.INFO)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/site-packages/aws_lambda_powertools/logging/logger.py", line 122, in __init__
    self.log_level = self._get_log_level(level)
  File "/usr/local/lib/python3.8/site-packages/aws_lambda_powertools/logging/logger.py", line 138, in _get_log_level
    log_level = log_level.upper() if log_level is not None else logging.INFO
AttributeError: 'int' object has no attribute 'upper'
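
One possible shape of the fix, accepting both forms (a sketch, not necessarily the final patch):

import logging

def _get_log_level(level):
    # sketch: accept logging constants (ints) as well as strings
    if level is None:
        return logging.INFO
    if isinstance(level, int):
        return level
    return logging.getLevelName(level.upper())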

Environment

  • Powertools version used: 1.1.0
  • Packaging format (Layers, PyPi):
  • AWS Lambda function runtime: python3.8
  • Debugging logs

Improv: Separate module logger from code logger

From a discussion with @jfuss: it would be best to create a separate environment variable to control the Powertools logging level, as opposed to LOG_LEVEL.

While LOG_LEVEL simplifies DX, it does raise concerns if you want to isolate the logging level to your Lambda function only.

Convo: aws-powertools/powertools-lambda#15 (comment)


Changes

  • Create a POWERTOOLS_LOG_LEVEL environment variable
  • Update all modules to use POWERTOOLS_LOG_LEVEL
  • Use a Null Handler to cut noise in customers' logging
  • Create top-level logger aws-lambda-powertools
  • Update docs to reflect the change
  • Accept an existing logger in logger setup
  • Refactor Lambda Logging module to prevent root logging changes
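
A minimal sketch of the Null Handler and dedicated env var items above; the module location is an assumption:

import logging
import os

# inside aws_lambda_powertools/__init__.py (assumed location): stay silent
# in customers' logs unless they opt in
logging.getLogger("aws_lambda_powertools").addHandler(logging.NullHandler())

# customer opt-in, isolated from their own LOG_LEVEL
level = os.getenv("POWERTOOLS_LOG_LEVEL", "WARNING")
logging.getLogger("aws_lambda_powertools").setLevel(level)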

Feature: Lambda Layers

Leaving this for consideration, to see if we have customer traction as we go GA - not necessarily a blocker for GA, but it could be helpful to customers.

If you'd like this to be available for all supported Python runtimes, in addition to PyPi, please leave a 👍

Metrics: Problem with default dimension creation

When metrics are being emitted during function execution using the log_metrics decorator e.g.:

from aws_lambda_powertools.metrics import Metrics

metrics = Metrics(service="cart-service")

@metrics.log_metrics
def lambda_handler(event, context):
    metrics.add_metric(name="CartUpdated", unit="Count", value=1)

The function runs successfully on cold starts, but throws an error due to missing dimensions on subsequent executions.

What were you trying to accomplish?

Expected Behavior

The default "service" dimension should be added to every metric created, not just the first one.

Current Behavior

The default "service" dimension is only being added to the first metric serialized, causing schema validation errors

Environment

  • Powertools version used: 0.10.0
  • Packaging format (Layers, PyPi): PyPi
  • AWS Lambda function runtime: 3.8

Compatibility with xray 2.6.0

Is your feature request related to a problem? Please describe.
We are currently using aws-xray-sdk-python 2.6.0, and I wanted to test Powertools, but it seems this project is not compatible with the latest release of the X-Ray SDK.

Describe the solution you'd like
This project should stay up to date with new SDK releases; 2.6.0 was released more than a month ago.

Describe alternatives you've considered
Downgrading the SDK, but we want to keep up with the latest versions of packages to reduce bugs/vulnerabilities and benefit from new features.

Additional context
Not sure if the pin has a compatibility reason or was just done out of foresight: ~= 2.5.0 - see https://github.com/awslabs/aws-lambda-powertools-python/blob/0670e5efa09b60f0dd6d040a1a3b37225c94b516/pyproject.toml#L22

Investigate dependabot not returning CHANGELOG

Dependabot isn't picking up changes defined in the CHANGELOG

#46 might solve it, but it needs further investigation in the Dependabot docs, and possibly source code, as to what logic it follows - e.g. does it need a GitHub release, or should the CHANGELOG be sufficient?

Metrics not flushing

Using Python 3.8 and aws-lambda-powertools==0.9.1

It could be that I'm not using metrics as their design intended. However, I would like to use them in the following pattern, so metrics can be consumed by classes shared across my project which are imported into my lambdas as needed (so my lambda handlers have few lines of code), e.g.:

from aws_lambda_powertools.metrics import Metrics
from mynamespace.mypackage import MyClass

metrics = Metrics()

@metrics.log_metrics 
def handler(request, context):
 
    # passing metrics to my class that does all the work for the lambda: 
    return MyClass(metrics).do_stuff()

Adding metrics to MyClass might look something like this (using randint for demo purposes obviously):

import random

from aws_lambda_powertools.metrics import MetricUnit

class MyClass:
    def __init__(self, metrics):
        self.metrics = metrics

    def do_stuff(self):
        success = random.randint(0, 1)

        if success:
            self.metrics.add_metric(name="Success", unit=MetricUnit.Count, value=1)
            self.metrics.add_dimension(name='service', value='MyClass')
            return True
        else:
            self.metrics.add_metric(name="Failure", unit=MetricUnit.Count, value=1)
            self.metrics.add_dimension(name='service', value='MyClass')
            return False

Following this pattern, the desired metrics are created correctly, but metrics are not flushed between executions, so a subsequent execution will output both a 'Success' AND a 'Failure' metric from a single execution. The print output in the logs looks like this:

{
    "Success": 1,
    "Failure": 1,
    "_aws": {
        "Timestamp": 1590714776621,
        "CloudWatchMetrics": [
            {
                "Namespace": "richard",
                "Dimensions": [
                    [
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "Success",
                        "Unit": "Count"
                    },
                    {
                        "Name": "Failure",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "service": "MyClass"
}

Aren't metrics supposed to be flushed by the log_metrics decorator on each handler execution? I'd love to get your thoughts on whether this is a bug or whether I'm not using them as intended.

RFC: Collection of tiny utilities

Background

At GA, Powertools will offer Tracer, Metrics, Logger, and Middleware factory as the core utilities.

Optionally, I'm pondering on the idea of providing a set of tiny handy utilities that can be used either as standalone functions, or as part of a custom middleware - For example, JSON serialization, detect retries, fetch secrets, etc.

The benefit of providing utilities as functions is two-fold: a) ease of maintenance, and b) the ability to pick and choose utilities and create a single custom middleware instead of nesting a myriad of decorators.

Proposed solution

Use case: Custom middleware using a handful of utilities

from aws_lambda_powertools.middleware_factory import lambda_handler_decorator
from aws_lambda_powertools.utilities import validate_event, cors, decode_json, detect_retry

@lambda_handler_decorator
def custom_middleware_name(handler, event, context):
    # Before
    detect_retry(event)
    event = decode_json(event)
    validate_event(event)

    response = handler(event, context)

    # After
    response = cors(response)
    return response

@custom_middleware_name
def lambda_handler(event, context):
    ...

Use case: Using utilities standalone

from aws_lambda_powertools.utilities import validate_event, cors, decode_json, detect_retry

def lambda_handler(event, context):
    detect_retry(event)
    event = decode_json(event)
    validate_event(event)

    result = ...  # business logic
    return cors(result)

Request for comments

As part of this RFC, I'd like to know which utilities would be the most useful to have upfront - leave a comment, and vote using 👍 on each comment instead of adding a new comment.

For ideas, here are some utilities as decorators created by other awesome authors: Lambda Decorators by Grid smarter cities, and Lambda Decorators by Daniel Schep.

Tenets

  • AWS Lambda only - We optimise for AWS Lambda function environments only. Utilities might work with web frameworks and non-Lambda environments, though they are not officially supported.
  • Eases the adoption of best practices - The main priority of the utilities is to facilitate best practices adoption, as defined in the AWS Well-Architected Serverless Lens; all other functionality is optional.
  • Keep it lean - Additional dependencies are carefully considered for security and ease of maintenance, and must not negatively impact startup time.
  • We strive for backwards compatibility - New features and changes should keep backwards compatibility. If a breaking change cannot be avoided, the deprecation and migration process should be clearly defined.
  • We work backwards from the community - We aim to strike a balance of what would work best for 80% of customers. Emerging practices are considered and discussed via Requests for Comment (RFCs).
  • Idiomatic - Utilities follow programming language idioms and language-specific best practices.

* Core utilities are Tracer, Logger and Metrics. Optional utilities may vary across languages.

Feature: Powertools for other languages

Is your feature request related to a problem? Please describe.

Love the idea of this project. It would be great to have this for other languages.

Describe the solution you'd like

The same ease of use as this project, but tailored to the best practices of other languages.

Describe alternatives you've considered

There isn't an alternative - just lots of extra undifferentiated heavy lifting implementing metrics, tracing, and logging over and over for other projects.

Additional context

My personal next preference would be Go, but I'm eager to hear the team's general thoughts.

Add add_metadata in the metrics feature

Description of the Problem/Issue
When publishing metrics, it would be useful to be able to reference the specific values that generated a given metric value.

For example, in one project I'm crawling some URLs, and I log the size of the fetched object as a metric. Being able to annotate that metric with the URL (add_property in the aws-embedded-metrics-python library) is very useful when running insights afterwards. If I rely on tracing info, I depend on X-Ray and will also be affected by sampling.

Logging this as a structured log entry would separate the information from the metric, making log parsing much harder, so being able to add this info along with the metric publishing would be useful.

Describe the solution you'd like
Add an add_metadata or add_property method to metrics, similar to the one found in aws-embedded-metrics-python.

Describe alternatives you've considered

  • Using Tracing - not useful because:
    • It depends on X-Ray for reading the info
    • It will be affected by sampling
  • Using Logging - not useful due to the complexity of querying the logs. Also, if this info is recorded at the Debug level, a lot of other info will be published too, which is probably not the intention. The same applies at any other level.
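
A sketch of the requested API, mirroring add_property from aws-embedded-metrics-python; the add_metadata signature shown is the proposal, not an existing method, and fetch() is a placeholder:

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics()

@metrics.log_metrics
def lambda_handler(event, context):
    url = event["url"]
    size = len(fetch(url))  # fetch() stands in for the crawl

    metrics.add_metric(name="ObjectSize", unit=MetricUnit.Bytes, value=size)
    # proposed: attach high-cardinality context to the EMF blob
    # without creating a billable dimension
    metrics.add_metadata(key="url", value=url)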

Error calling decorated function from another decorated function

When decorating multiple functions with Tracer.capture_method(), a RuntimeError is thrown if one of those methods contains a call to another, as it tries to run an event loop which is already running.

The offending code is here: https://github.com/awslabs/aws-lambda-powertools/blob/develop/python/aws_lambda_powertools/tracing/tracer.py#L454

To reproduce:

import json
from aws_lambda_powertools.tracing import Tracer

tracer = Tracer()


@tracer.capture_method
def func_1():
    return 1

@tracer.capture_method
def func_2():
    return 2

@tracer.capture_method
def sums_values():
    return func_1() + func_2()  # Calling a decorated function from another decorated function causes an error.

def lambda_handler(event, context):
    val = sums_values()

    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": val,
        }),
    }

Invalid format using metrics

Hello Heitor,

I am switching all my EMF logs from ordinary prints to the Metrics module.

All went fine except for this one. Below are the snippet that threw the error, the error raised, and the equivalent print:

Error
Invalid format. Error: data._aws.CloudWatchMetrics[0].Metrics[0].Unit must match pattern ^(Seconds|Microseconds|Milliseconds|Bytes|Kilobytes|Megabytes|Gigabytes|Terabytes|Bits|Kilobits|Megabits|Gigabits|Terabits|Percent|Count|Bytes\/Second|Kilobytes\/Second|Megabytes\/Second|Gigabytes\/Second|Terabytes\/Second|Bits\/Second|Kilobits\/Second|Megabits\/Second|Gigabits\/Second|Terabits\/Second|Count\/Second|None)$, Invalid item: data._aws.CloudWatchMetrics[0].Metrics[0].Unit"

Using single_metric:

with single_metric(name="invoke-rate", unit=MetricUnit.CountPerSecond, value=int(rate)) as metric:
    metric.add_namespace(name='Campaign/Orchestrator/WorkerInvokeRate')
    metric.add_dimension(name='media', value='SMS')
Ordinary print:

print(json_dumps({
    "_aws": {
        "CloudWatchMetrics": [
            {
                "Namespace": "Campaign/Orchestrator/WorkerInvokeRate",
                "Dimensions": [["media"]],
                "Metrics": [
                    {
                        "Name": "rate",
                        "Unit": "Count/Second"
                    }
                ],
            }
        ],
        "Timestamp": int(datetime.now().timestamp() * 1000)
    },
    "media": "SMS",
    "rate": int(rate)
}))

API Reference docs not loaded correctly for the first time

We generate API reference with pdoc, and store the generated website under /api - https://awslabs.github.io/aws-lambda-powertools-python/api/

Docs website has a link to API reference, and when you try accessing it Gatsby complains that you're not viewing the latest information.

Issue reported in the marvellous theme we're using for the docs website: apollographql/gatsby-theme-apollo#118

Refreshing the page after hitting the issue solves it, though it needs some further investigation into Gatsby's behaviour.

Logger: inject_lambda_context overrides keys appended before in custom middleware

What were you trying to accomplish?

Have a custom middleware inject additional keys into the Logger, while also using inject_lambda_context.

@lambda_handler_decorator(trace_execution=True)
def process_booking_handler(
    handler: Callable, event: Dict, context: Any, logger: Logger = None
) -> Callable:
    if logger is None:
        logger = Logger()

    handler = logger.inject_lambda_context(handler)

    logger.info("Injecting and annotating process booking state machine")
    process_booking_context = _build_process_booking_model(event)
    _logger_inject_process_booking_sfn(logger=logger, event=event)
    _tracer_annotate_process_booking_sfn(process_booking_context=process_booking_context)

    return handler(event, context)

Expected Behavior

Have additional keys added to the log when structure_logs is used outside the handler, e.g. in a shared file, before the handler.

Current Behavior

Any additional key is removed once the handler is executed with inject_lambda_context.

Possible Solution

inject_lambda_context overwrites existing structured keys instead of appending to them.

Steps to Reproduce (for bugs)

  1. Use structure_logs before inject_lambda_context is used

Code

    logger = Logger()
    logger.structure_logs(append=True, additional_key="test")

    @logger.inject_lambda_context
    def handler(event, context):
        logger.info("Hello") # will not contain additional_key

    handler({}, lambda_context)

Environment

  • Powertools version used: 1.0.0
  • Packaging format (Layers, PyPi): PyPi
  • AWS Lambda function runtime: Python 3.7
  • Debugging logs


boto level logging

Hello Heitor,

I am using a sampling rate for my logger. An annoying side effect is that when it's enabled, the log gets flooded with boto debug logs.

I took a look at the code and found a boto_level parameter in the setup function in aws_lambda_logging.py. I assume that if I set this parameter to a different level, it would solve the problem.

Can I set this up via the logger_setup function in logger.py?
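
Until/unless logger_setup exposes boto_level, a standard-library workaround is to raise the level on the boto loggers directly; a sketch:

import logging

# keep boto chatter out of sampled DEBUG logs
for name in ("boto3", "botocore"):
    logging.getLogger(name).setLevel(logging.WARNING)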

Duplicate logs: structured and unstructured

My logs are appearing twice in CloudWatch: once in a structured format and once unstructured.
I'd like to have only the structured logs.
I didn't see this behavior before, so it might be something on my side.

Expected Behavior

Having the logs once in structured format.

Current Behavior

Having the logs both structured and unstructured.

Environment

  • Powertools version used: 1.1.2
  • AWS Lambda function runtime: Python
  • Debugging logs

Powertools is set up like this:

from aws_lambda_powertools import Logger, Tracer, Metrics
from aws_lambda_powertools.metrics import MetricUnit

logger = Logger()
tracer = Tracer()
metrics = Metrics()

@metrics.log_metrics
@logger.inject_lambda_context
@tracer.capture_lambda_handler
def handle(event, context):
   ...
   logger.info('validated')

I am seeing the logs twice in the CloudWatch console.

It seems like the root logger is still capturing and logging.

Why do I have duplicate logs? What am I missing here?
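
If the Lambda runtime's pre-registered root handler is the culprit (an assumption worth verifying in your environment), one common workaround is to drop the root handlers before Powertools sets up its own:

import logging

# the Lambda runtime pre-configures a handler on the root logger;
# records that propagate up are then emitted a second time, unstructured
root = logging.getLogger()
for handler in list(root.handlers):
    root.removeHandler(handler)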

RFC: Metrics: Create a default dimension using the "service" name

Key information

  • RFC PR:
  • Related issue(s), if known:
  • Area: Metrics
  • Meet tenets: Yes

Summary

Add a default dimension named "service" with a value of the provided service name if it exists (via Metrics constructor or env var).

Motivation

This would simplify what I think is the most common use case where users don't need multiple dimensions. It would enable creation of simple custom metrics without needing to think about the underlying CloudWatch dimension construct.

Proposal

With PR #60, the metrics interface was changed to behave more like the other core utils, accepting a service parameter or reading the POWERTOOLS_SERVICE_NAME env var. In the metrics util, this service is translated into the namespace used for metrics. We could also use this value to set a default dimension: {"name": "service", "value": service_name}. This could be handled in the MetricManager constructor by adding to the dimension_set dict when the service parameter is set.
This would remove the need to call add_dimension, so the simplest implementation could look like:

metrics = Metrics(service="ServerlessAirline")

@metrics.log_metrics
def lambda_handler():
    # without this change we have to make a call to add dimensions here
    metrics.add_metric(name="Something", unit="Count", value=1)

Drawbacks

  1. We would potentially create additional billable custom metrics for users who want to specify their own dimensions.
  2. The migration path is a bit awkward for anyone currently using the POWERTOOLS_SERVICE_NAME env var. Right now they are forced to use add_dimension, so they will start to see another dimension on all their metrics when they upgrade.

Rationale and alternatives

  • What other designs have been considered? Why not them?
  • What is the impact of not doing this?

Users of the library need to understand how CloudWatch metric dimensions work, and in my opinion this is quite hard to grasp from documentation alone. If we don't abstract this away by implementing this change, users who are not already familiar with CloudWatch face a time-consuming trial-and-error cycle to discover how to use dimensions properly in their custom metrics. This is compounded by the fact that custom metrics can't be deleted - mistakes made here will pollute the dashboard for two weeks until they expire.

Unresolved questions

Optional, stash area for topics that need further development e.g. TBD

RFC: Validate incoming and outgoing events utility

Key information

  • RFC PR: (leave this empty)
  • Related issue(s), if known: #30
  • Area: Utilities
  • Meet tenets: Yes

Summary

This utility helps you validate incoming events from Lambda event sources as well as the response from your Lambda handler - All based on JSON Schemas. You can either validate an entire incoming event including the chosen event source wrapper, or only the event payload/body if you will.

Motivation

Well-Architected Serverless Lens recommends validating events under the Security pillar. As of now, customers have to implement their own validation mechanisms, bring additional dependencies, and end up crafting multiple JSON Schemas for popular event sources.

We could ease that by including in Powertools and lowering the bar for entry.

Proposal

This utility would use the already-present fastjsonschema library to validate both incoming and outgoing events against a preferred JSON Schema.

Another plus is that we can use our Middleware Factory to make this easier to implement, and automatically trace it with X-Ray to profile performance impact when necessary.

Validating both inbound/outbound

from aws_lambda_powertools.utilities import validator

@validator(inbound=inbound_schema_dict, outbound=outbound_schema_dict)
def lambda_handler(evt, ctx):
    ...

For customers wanting to validate only the payload of popular event sources, say API Gateway, this utility will work in tandem with an extractor utility, which will provide the following benefits:

  • Validate only the actual event payload of a popular event source
    • e.g. validate API GW POST payload event (event['body']) itself not the whole API GW event
  • Craft your own JSON Schema to help you validate payload as well as things like headers/message attributes, etc.

By default, envelopes will pluck only the payload of a message within the event. Allowing multiple paths can easily add complexity, so we will defer to customers creating their own envelopes if they want to.

Validating inbound with built-in popular event source schemas

from aws_lambda_powertools.utilities import validator
from aws_lambda_powertools.utilities.extractor import envelopes

@validator(inbound=inbound_schema, envelope=envelopes.api_gateway_rest)
def lambda_handler(evt, ctx):
    ...

Drawbacks

  • JSON Schemas are not offered today for Lambda Event Sources; this means maintaining it
  • Validating JSON Schemas can add up to ~25ms during cold start executions
    • Invalid schemas fail fast, and already-compiled schemas validate in microseconds

Rationale and alternatives

  • What other designs have been considered? Why not them?: Lambda Decorators Validate - it'd mean bringing two additional dependencies for a single utility, and schema is slower than the fastjsonschema we already include in Powertools
  • What is the impact of not doing this?: Customers less experienced with Lambda might end up reinventing the wheel, and accidentally bringing less performant or incorrect validations.

Unresolved questions

  • Should we validate the event source per se, the event within the event source envelope, or both
    • e.g. message body in API GW, message within SQS Array object, etc.
    • UPDATE: We'll create an extractor utility to not violate SRP
  • Should we also optionally serialize the event before validating?

Optional, stash area for topics that need further development e.g. TBD

Problem with async tracer

Hello Heitor,

I had the chance to try version 0.9.3 and tested the async tracer functionality.

Unfortunately it threw the exception below. Any ideas?

Python version: 3.8
Powertools is deployed as a Lambda layer.

{
  "error": "AlreadyEndedException",
  "cause": {
    "errorMessage": "Already ended segment and subsegment cannot be modified.",
    "errorType": "AlreadyEndedException",
    "stackTrace": [
      "  File \"/opt/python/aws_lambda_powertools/tracing/tracer.py\", line 266, in decorate\n    response = lambda_handler(event, context)\n",
      "  File \"/opt/python/aws_lambda_powertools/logging/logger.py\", line 442, in decorate\n    return lambda_handler(event, context)\n",
      "  File \"/var/task/loadSegmentChunk/lambda_function.py\", line 231, in lambda_handler\n    run(main(event, context))\n",
      "  File \"/var/lang/lib/python3.8/asyncio/runners.py\", line 43, in run\n    return loop.run_until_complete(main)\n",
      "  File \"/var/lang/lib/python3.8/asyncio/base_events.py\", line 616, in run_until_complete\n    return future.result()\n",
      "  File \"/var/task/loadSegmentChunk/lambda_function.py\", line 201, in main\n    await s3_to_dynamo(event, context, shard_index, s3obj, table)\n",
      "  File \"/var/task/loadSegmentChunk/lambda_function.py\", line 150, in s3_to_dynamo\n    await gather(*(dynamoBatchWrite(batch, table, segmentId) for batch in limitBatchArr))\n",
      "  File \"/var/task/loadSegmentChunk/lambda_function.py\", line 50, in dynamoBatchWrite\n    await batch_writer.put_item(\n",
      "  File \"/opt/python/aioboto3/dynamodb/table.py\", line 67, in put_item\n    await self._add_request_and_process({'PutRequest': {'Item': Item}})\n",
      "  File \"/opt/python/aioboto3/dynamodb/table.py\", line 76, in _add_request_and_process\n    await self._flush_if_needed()\n",
      "  File \"/opt/python/aioboto3/dynamodb/table.py\", line 94, in _flush_if_needed\n    await self._flush()\n",
      "  File \"/opt/python/aioboto3/dynamodb/table.py\", line 99, in _flush\n    response = await self._client.batch_write_item(RequestItems={self._table_name: items_to_send})\n",
      "  File \"/opt/python/aws_xray_sdk/ext/aiobotocore/patch.py\", line 32, in _xray_traced_aiobotocore\n    result = await xray_recorder.record_subsegment_async(\n",
      "  File \"/opt/python/aws_xray_sdk/core/async_recorder.py\", line 93, in record_subsegment_async\n    meta_processor(\n",
      "  File \"/opt/python/aws_xray_sdk/ext/boto_utils.py\", line 56, in aws_meta_processor\n    subsegment.put_http_meta(http.STATUS,\n",
      "  File \"/opt/python/aws_xray_sdk/core/models/entity.py\", line 102, in put_http_meta\n    self._check_ended()\n",
      "  File \"/opt/python/aws_xray_sdk/core/models/entity.py\", line 283, in _check_ended\n    raise AlreadyEndedException(\"Already ended segment and subsegment cannot be modified.\")\n"
    ]
  }
}

Logger examples for logging exception

What were you initially searching for in the docs?

I am looking for code examples for logging an error with the exception

Is this related to an existing part of the documentation? Please share a link
https://awslabs.github.io/aws-lambda-powertools-python/core/logger/

Describe how we could make it clearer
Some examples of logging an exception, and maybe more examples using other levels. Currently none of the code completion works in PyCharm.

If you have a proposed update, please share it here

Some sample examples, or Python stubs to be used with PyCharm or VS Code.

    try:
        logger.debug("Something which might fail")
    except SomeErrorWeSuppress as e:
        logger.exception(e)

Feature: Create your own middleware

As we prepare for GA, we need to reduce the effort of creating new middlewares and ensure any middleware can be traced on an opt-in basis. This would help us move faster in creating more utilities, allow customers to create their own utilities (outside of this package), and debug middleware performance when using multiple nested middlewares.

This feature should allow:

  • Decorator to create middlewares
  • Control before and after logic, handle exceptions, and decide whether the handler should be executed
  • Opt in option to trace middleware as an annotation with no modification to their code
  • Pass data between middlewares (e.g. correlation id, cold_start, etc.)

We have to decide between two options to pass data between middlewares:

  • Lambda Context
    • Pros: Easy to extend and available between middlewares; no additional code to maintain
    • Cons: There could be naming collisions if Lambda decides to use context more extensively, and each middleware needs to handle AttributeError if a key is not there
  • Global Stash
    • Pros: Available anywhere in the code and handles AttributeError if data is non-existent
    • Cons: Additional code to maintain

Unit tests of example do not pass

Current behaviour

I followed the instructions in the README and installed the dependencies in a virtual environment:

pip install -r hello_world/requirements.txt && pip install -r requirements-dev.txt

Then I ran the tests with: POWERTOOLS_TRACE_DISABLED=1 python -m pytest.

The test fails with output:
FAILED tests/unit/test_handler.py::test_lambda_handler - aws_lambda_powertools.metrics.exceptions.SchemaValidationError: Invalid format. Error: data._aws.CloudWatchMetrics[0].Namespace must be string, Invalid item: data._aws.CloudWatchMetrics[0...
(I can provide a full stack trace if necessary.)

Potential solution
I looked into base.py of the aws_lambda_powertools metrics package and found that it looks for an env var POWERTOOLS_METRICS_NAMESPACE.
This env var is not set when running the tests.

First setting the env var and then running it solves the issue:
POWERTOOLS_TRACE_DISABLED=1 POWERTOOLS_METRICS_NAMESPACE=test python -m pytest tests -v

If my assessment is correct, I propose updating the README. If it's incorrect, can you help me determine what is causing this problem?
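
If the README route is taken, a conftest.py could also set the variables before the handler module is imported; a sketch under that assumption:

# conftest.py
import os

# conftest.py runs before test modules, so this is set before
# the handler (and Metrics) is imported
os.environ.setdefault("POWERTOOLS_TRACE_DISABLED", "1")
os.environ.setdefault("POWERTOOLS_METRICS_NAMESPACE", "test")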

aiohttp is now a required dependency

Even if you don't use or require aiohttp, Powertools now has it as a dependency, so projects without it already declared will fail:

/opt/python/3.7.6/lib/python3.7/site-packages/aws_lambda_powertools/tracing/__init__.py:3: in <module>
    from aws_xray_sdk.ext.aiohttp.client import aws_xray_trace_config as aiohttp_trace_config
/opt/python/3.7.6/lib/python3.7/site-packages/aws_xray_sdk/ext/aiohttp/client.py:4: in <module>
    import aiohttp
E   ModuleNotFoundError: No module named 'aiohttp'
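
One way to avoid the hard dependency would be deferring the aiohttp-dependent import until it is actually used; a sketch, not necessarily how the project resolved it:

def aiohttp_trace_config():
    # imported lazily so aiohttp is only required by users who need
    # aiohttp tracing support
    from aws_xray_sdk.ext.aiohttp.client import aws_xray_trace_config

    return aws_xray_trace_config()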

Reusing Tracer across your code disables patching boto3

What were you trying to accomplish?
I have my shared code use @tracer, but I don't want to double-patch boto3, etc.

Expected Behavior

tracer = Tracer(auto_patch=False) should not disable patching in the main lambda handler

Current Behavior

When I go into the console now, none of the boto3 activity appears.

# main.py
from aws_lambda_powertools import Tracer
from app import shared

tracer = Tracer(service="payment")

@tracer.capture_lambda_handler
def handler(event, context):
     pass
# shared.py
from aws_lambda_powertools import Tracer

tracer = Tracer(auto_patch=False)

Possible Solution

I think it's because shared.py's tracer = Tracer(auto_patch=False) is executed before main.py's, so patching has already been disabled.

Environment

  • Powertools version used: 1.1.1

Question: No module named 'aws_lambda_logging'

I tried the logging code from the README and got this error:

{ "errorMessage": "Unable to import module 'service': No module named 'aws_lambda_logging'", "errorType": "Runtime.ImportModuleError" }

The lambda runtime is Python 3.7.

CHANGELOG.* not picked up by @dependabot

Although there is a CHANGELOG.* in the project it is not being picked up by https://github.com/dependabot

It might be that it is not in the base of the project (currently it is in a subfolder, "python"), or that it does not include an anchor tag at the release tag.

Otherwise, doing a git tag on release and creating GitHub releases might work without a CHANGELOG.*, but it is nice to have the CHANGELOG.* in the project itself.
