
newrelic-python-agent's Introduction


New Relic Python Agent

The newrelic package instruments your application for performance monitoring and advanced performance analytics with New Relic.

Pinpoint and solve Python application performance issues down to the line of code. New Relic APM is the only tool you'll need to see everything in your Python application, from the end-user experience to server monitoring. Trace problems down to slow database queries, slow third-party APIs and web services, caching layers, and more. Monitor your app in a production environment and make sure it can withstand a big spike in traffic by running scalability reports.

Visit Python Application Performance Monitoring with New Relic to learn more.

Usage

This package can be installed via pip:

$ pip install newrelic

(These instructions can also be found online: Python Agent Quick Start.)

  1. Generate the agent configuration file with your license key.

    $ newrelic-admin generate-config $YOUR_LICENSE_KEY newrelic.ini
  2. Validate the agent configuration and test the connection to our data collector service.

    $ newrelic-admin validate-config newrelic.ini
  3. Integrate the agent with your web application.

    If you control how your web application or WSGI server is started, the recommended way to integrate the agent is to use the newrelic-admin wrapper script. Modify the existing startup script, prefixing the existing startup command and options with newrelic-admin run-program.

    Also, set the NEW_RELIC_CONFIG_FILE environment variable to the name of the configuration file you created above:

    $ NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program $YOUR_COMMAND_OPTIONS

    Examples:

    $ NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program gunicorn -c config.py test_site.wsgi
    
    $ NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program uwsgi uwsgi_config.ini

    Alternatively, you can manually integrate the agent by adding the following lines at the very top of your Python WSGI script file. (This is useful if you're using mod_wsgi; a fuller sketch follows these steps.)

    import newrelic.agent
    newrelic.agent.initialize('/path/to/newrelic.ini')
  4. Start or restart your Python web application or WSGI server.

  5. Done! Check your application in the New Relic UI to see the real time statistics generated from your application.
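For reference, here is a minimal sketch of the manual integration mentioned in step 3, assuming a plain WSGI callable; the config path and response body are placeholders, and wsgi_application() is the agent's decorator for wrapping WSGI apps:

    import newrelic.agent
    newrelic.agent.initialize('/path/to/newrelic.ini')

    # Wrapping the WSGI callable lets the agent time each request as a web transaction.
    @newrelic.agent.wsgi_application()
    def application(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'Hello, world!\n']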

Additional resources may be found here:

Support

Should you need assistance with New Relic products, you are in good hands with several support diagnostic tools and support channels.

This troubleshooting framework steps you through common troubleshooting questions.

New Relic offers NRDiag, a client-side diagnostic utility that automatically detects common problems with New Relic agents. If NRDiag detects a problem, it suggests troubleshooting steps. NRDiag can also automatically attach troubleshooting data to a New Relic Support ticket.

If the issue has been confirmed as a bug or is a feature request, please file a GitHub issue.

Support Channels

Privacy

At New Relic we take your privacy and the security of your information seriously, and are committed to protecting your information. We must emphasize the importance of not sharing personal data in public forums, and ask all users to scrub logs and diagnostic information for sensitive information, whether personal, proprietary, or otherwise.

We define "Personal Data" as any information relating to an identified or identifiable individual, including, for example, your name, phone number, post code or zip code, Device ID, IP address and email address.

Please review New Relic's General Data Privacy Notice for more information.

Product Roadmap

See our roadmap to learn more about our product vision, understand our plans, and provide us with valuable feedback.

Contributing

We encourage your contributions to improve the New Relic Python Agent! Keep in mind when you submit your pull request, you'll need to sign the CLA via the click-through using CLA-Assistant. You only have to sign the CLA one time per project. If you have any questions, or to execute our corporate CLA, required if your contribution is on behalf of a company, please drop us an email at [email protected].

A note about vulnerabilities

As noted in our security policy, New Relic is committed to the privacy and security of our customers and their data. We believe that providing coordinated disclosure by security researchers and engaging with the security community are important means to achieve our security goals.

If you believe you have found a security vulnerability in this project or any of New Relic's products or websites, we welcome and greatly appreciate you reporting it to New Relic through our bug bounty program.

License

The New Relic Python Agent is licensed under the Apache 2.0 License. The New Relic Python Agent also uses source code from third-party libraries. You can find full details on which libraries are used and the terms under which they are licensed in the third-party notices document.


newrelic-python-agent's Issues

Error trace timestamps should be milliseconds

Description
When posting error_data, the Python agent uses seconds since the epoch for its event timestamp. The spec, and other agents, use milliseconds since the epoch.

Expected Behavior
The posted timestamp should be milliseconds, not seconds.
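For illustration, a minimal sketch of the expected conversion (the variable name is illustrative, not the agent's internals):

import time

# Report milliseconds since the epoch, as the spec and other agents do,
# instead of seconds.
timestamp_ms = int(time.time() * 1000)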

Installation fails if `setuptools_scm<3.2.0` is installed

Description

Installing newrelic==5.16.0.145 fails if a version of setuptools_scm is installed that does not support the git_describe_command option introduced in v3.2.0

In our use case, it's not clear what's causing this old version to be installed, so it's not obvious where to rectify the problem.

Changing setup_requires in setup.py to ['setuptools_scm>=3.2.0'] should hopefully work.
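A minimal sketch of that suggested pin, assuming setup.py uses use_scm_version and passes setup_requires straight through to setup() (other arguments omitted):

from setuptools import setup

setup(
    name="newrelic",
    use_scm_version=True,
    # Pin to a version that understands git_describe_command.
    setup_requires=["setuptools_scm>=3.2.0"],
)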

> pip install setuptools_scm==3.1.0
Collecting setuptools_scm==3.1.0
  Downloading setuptools_scm-3.1.0-py2.py3-none-any.whl (23 kB)
Installing collected packages: setuptools-scm
Successfully installed setuptools-scm-3.1.0

> pip --no-cache-dir install newrelic==5.16.0.145

Collecting newrelic==5.16.0.145
  Downloading newrelic-5.16.0.145.tar.gz (548 kB)
     |████████████████████████████████| 548 kB 8.2 MB/s 
    ERROR: Command errored out with exit status 1:
     command: /{...}/bin/python3.6 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/c5/03fzs2ld3z18vnhg06wj8kzc0000gn/T/pip-install-iz2r6kus/newrelic/setup.py'"'"'; __file__='"'"'/private/var/folders/c5/03fzs2ld3z18vnhg06wj8kzc0000gn/T/pip-install-iz2r6kus/newrelic/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/c5/03fzs2ld3z18vnhg06wj8kzc0000gn/T/pip-pip-egg-info-50pfyx8a
         cwd: /private/var/folders/c5/03fzs2ld3z18vnhg06wj8kzc0000gn/T/pip-install-iz2r6kus/newrelic/
    Complete output (23 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/c5/03fzs2ld3z18vnhg06wj8kzc0000gn/T/pip-install-iz2r6kus/newrelic/setup.py", line 239, in <module>
        run_setup(with_extensions=True)
      File "/private/var/folders/c5/03fzs2ld3z18vnhg06wj8kzc0000gn/T/pip-install-iz2r6kus/newrelic/setup.py", line 208, in run_setup
        _run_setup()
      File "/private/var/folders/c5/03fzs2ld3z18vnhg06wj8kzc0000gn/T/pip-install-iz2r6kus/newrelic/setup.py", line 194, in _run_setup
        setup(**kwargs_tmp)
      File "/{...}/lib/python3.6/site-packages/setuptools/__init__.py", line 163, in setup
        return distutils.core.setup(**attrs)
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/distutils/core.py", line 108, in setup
        _setup_distribution = dist = klass(attrs)
      File "/{...}/lib/python3.6/site-packages/setuptools/dist.py", line 430, in __init__
        k: v for k, v in attrs.items()
      File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/distutils/dist.py", line 281, in __init__
        self.finalize_options()
      File "/{...}/lib/python3.6/site-packages/setuptools/dist.py", line 721, in finalize_options
        ep(self)
      File "/{...}/lib/python3.6/site-packages/setuptools/dist.py", line 728, in _finalize_setup_keywords
        ep.load()(self, ep.name, value)
      File "/{...}/lib/python3.6/site-packages/setuptools_scm/integration.py", line 23, in version_keyword
        dist.metadata.version = get_version(**value)
    TypeError: get_version() got an unexpected keyword argument 'git_describe_command'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Expected Behavior
Install should complete successfully, or give a useful error message.

Steps to Reproduce
In a new virtualenv,
pip install setuptools_scm==3.1.0
pip install newrelic==5.16.0.145

Your Environment
Observed on macOS 10.15.5 and Ubuntu 16.04 using Python 3.6
Install succeeds with setuptools_scm not installed, or setuptools_scm>=3.2.0 installed, or when installing newrelic<5.16.0.145.
Install fails for newrelic==5.16.0.145 with setuptools_scm<3.2.0 installed.

ASGI - Starlette/Fast API Framework

Overview:

Starlette is a lightweight ASGI framework/toolkit, which is ideal for building high performance asyncio services. It supports both HTTP and WebSockets. With over 500K downloads a month, it is growing rapidly in popularity: https://pypistats.org/packages/starlette

Currently, customers will not see data in NR when monitoring their Starlette ASGI application. This MMF addresses this, giving those customers visibility into their transactions and downstream services.

The key is to show transaction data and ensure those transactions are also viewable in DT.

Acceptance Criteria:

  • Extract and send web-transaction events (start/end time) for HTTP requests (not WebSockets, which do not create transactions because doing so could cause memory explosion for long-running transactions)

  • Attach appropriate attributes to those events i.e.

    • number of bytes on output automatically captured
    • Request headers automatically captured
    • Request method
    • Request URI
    • Response headers
    • Response status
  • Send span event data / trace id? (The goal here is to ensure those transactions translate to a span and there is an id passed from the transaction to the external service so we can track them in the DT view.)

  • Add to New Relic docs and website that we offer Starlette support

  • Unit Tests

  • Demo

Nerdlet: Run Time Metrics

Background

Customers would like more visibility into the performance of their Python process and interpreter (VM). Python customers would like to address issues such as inefficient garbage collection, memory, and CPU usage.

Many of our other agents already report these metrics (Ruby, Node, and Java) so we’re providing Python customers the same level of visibility our other agents offer.

Acceptance Criteria

Let’s create a Profiling Nerdlet to give customers the following information:

Garbage collection profiling:

  • Amount of time spent in garbage collection (by generation)
  • Garbage collection over time and its associated # of objects collected

Memory usage of Python processes. Python customers would like to see which objects are using the highest memory or have the highest instance counts.

  • Average size per instance

Note: As Python code works within containers via a distributed processing framework, each container contains a fixed amount of memory. If the code execution exceeds the memory limit, the container will terminate. This is when developers experience memory errors.

CPU Usage:

  • User CPU usage (time spent executing user code divided by wall clock time)
  • Machine CPU usage (time spent in system divided by wall clock time)

Profiling should be configurable. Customers should be able to turn the profiler on and off

Internal Blog Post

Thread Profiling?

Demo

Issue running starlette: TypeError: _run_asgi2() takes 2 positional arguments but 4 were given

I am getting the following traceback when trying to launch an asgi server with starlette:
I am using this run command: newrelic-admin run-program gunicorn -w 4 -k uvicorn.workers.UvicornWorker my_package:app

Traceback (most recent call last):
  File "/app/.heroku/python/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 391, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/app/.heroku/python/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/app/.heroku/python/lib/python3.8/site-packages/newrelic/api/asgi_application.py", line 387, in nr_async_asgi
    return await coro
  File "/app/.heroku/python/lib/python3.8/site-packages/newrelic/common/async_proxy.py", line 148, in __next__
    return self.send(None)
  File "/app/.heroku/python/lib/python3.8/site-packages/newrelic/common/async_proxy.py", line 120, in send
    return self.__wrapped__.send(value)
  File "/app/.heroku/python/lib/python3.8/site-packages/newrelic/api/asgi_application.py", line 387, in nr_async_asgi
    return await coro
  File "/app/.heroku/python/lib/python3.8/site-packages/newrelic/common/async_proxy.py", line 148, in __next__
    return self.send(None)
  File "/app/.heroku/python/lib/python3.8/site-packages/newrelic/common/async_proxy.py", line 120, in send
    return self.__wrapped__.send(value)
  File "/app/.heroku/python/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "<string>", line 5, in wrapper
  File "/app/.heroku/python/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/app/.heroku/python/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/app/.heroku/python/lib/python3.8/site-packages/newrelic/hooks/framework_starlette.py", line 108, in middleware_wrapper
    return await FunctionTraceWrapper(wrapped, name=name)(*args, **kwargs)
  File "/app/.heroku/python/lib/python3.8/site-packages/newrelic/api/function_trace.py", line 150, in literal_wrapper
    return wrapped(*args, **kwargs)
TypeError: _run_asgi2() takes 2 positional arguments but 4 were given

Using:

newrelic==5.24.0.153
starlette==0.13.8

Hook for asyncpg Postgres Connector

Is your feature request related to a problem? Please describe.

By default, hooks are available for different Postgres connectors, such as psycopg2. But for async frameworks, it's common to use an async Postgres connector, for instance asyncpg.

Feature Description

It would be nice to have a hook for the asyncpg connector implemented alongside the existing hooks.

Describe Alternatives

An alternative is to use third-party instrumentation, such as this one, which is lacking some of the useful features (e.g. recording which queries have been executed).
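As a stopgap until an official hook exists, here is a rough sketch of timing asyncpg calls by hand with a FunctionTrace, assuming a recent agent where FunctionTrace takes keyword arguments rather than an explicit transaction; the DSN and query are placeholders:

import asyncpg
import newrelic.agent

async def fetch_users(dsn="postgresql://user:secret@localhost/mydb"):
    conn = await asyncpg.connect(dsn)
    try:
        # Times the query as a segment of the current transaction; it will
        # not appear as a proper datastore trace the way psycopg2 calls do.
        with newrelic.agent.FunctionTrace(name="asyncpg:fetch", group="Database"):
            return await conn.fetch("SELECT id, name FROM users")
    finally:
        await conn.close()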

Priority

This is a Must-Have for us at the moment.

Expand Custom Attribute Limit

Details:

  • A customer is asking for the custom attribute limit to be expanded to 90+ for their use case.
  • The current limit is 64 custom attributes.
  • NRDB supports a maximum of 256 attributes, counting intrinsic and custom attributes together.
  • The limit could therefore reasonably be expanded to 128 with plenty of space left for intrinsics.

Add Distributed Tracing Headers

Related Epic

Support for httpx client

Acceptance Criteria

  • Request input is captured

  • Headers attribute is modified to add DT headers (via the external_trace API)

  • Tests are written to verify DT headers are added

    • Ensures nesting is correct

    • Validates the outbound CAT, w3c, and “better” CAT headers

    • Mutation testing for ordering

  • Generate DT headers before starting the external trace and observe a test failure

Note: External trace must be started before generating the DT header
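A rough sketch of that ordering, assuming ExternalTrace can be constructed as ExternalTrace(library, url) and reusing the generate_request_headers(transaction) call that appears elsewhere in this document; the helper name is hypothetical:

import newrelic.agent

def add_outbound_headers(headers, url):  # hypothetical helper
    transaction = newrelic.agent.current_transaction()
    # Enter the external trace first, then generate DT/CAT headers inside it.
    with newrelic.agent.ExternalTrace("httpx", url) as trace:
        for name, value in trace.generate_request_headers(transaction):
            headers[name] = value
        # ... the actual request would be sent here, while the trace is active ...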

Crash on ASGI request with Referer header w/ non-ASCII characters

Description

We have a Starlette/uvicorn-based application that crashes w/ a traceback inside newrelic when it encounters a Referer header w/ non-ASCII characters.

As best as I can tell from the ASGI spec and from uvicorn source, the headers in the ASGI scope are expected to be pairs of bytestrings, and ASGIWebTransaction captures them raw off the scope:

headers = scope["headers"] = tuple(scope["headers"])

But later, in WebTransaction._update_agent_attributes(), the referer header is pulled directly from the raw headers:

if 'referer' in self._request_headers:
    self._add_agent_attribute('request.headers.referer',
            _remove_query_string(self._request_headers['referer']))

And then the raw bytestring is passed to urllib.parse.urlsplit() which tries to coerce it to ascii, when the expected encoding for headers is latin-1.

def _remove_query_string(url):
    out = urlparse.urlsplit(url)
    return urlparse.urlunsplit((out.scheme, out.netloc, out.path, '', ''))

Here's the traceback.

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 30: ordinal not in range(128)
  File "<string>", line 5, in wrapper
  File "starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "starlette/routing.py", line 582, in __call__
    await route.handle(scope, receive, send)
  File "starlette/routing.py", line 243, in handle
    await self.app(scope, receive, send)
  File "starlette/routing.py", line 57, in app
    await response(scope, receive, send)
  File "starlette/responses.py", line 143, in __call__
    await send({"type": "http.response.body", "body": self.body})
  File "starlette/exceptions.py", line 68, in sender
    await send(message)
  File "starlette/middleware/errors.py", line 156, in _send
    await send(message)
  File "newrelic/api/asgi_application.py", line 140, in send_inject_browser_agent
    return await self.send(message)
  File "newrelic/api/asgi_application.py", line 264, in send
    self.__exit__(*sys.exc_info())
  File "newrelic/api/asgi_application.py", line 254, in __exit__
    return super(ASGIWebTransaction, self).__exit__(exc, value, tb)
  File "newrelic/api/transaction.py", line 469, in __exit__
    self._update_agent_attributes()
  File "newrelic/api/web_transaction.py", line 328, in _update_agent_attributes
    _remove_query_string(self._request_headers['referer']))
  File "newrelic/api/web_transaction.py", line 129, in _remove_query_string
    out = urlparse.urlsplit(url)
  File "urllib/parse.py", line 423, in urlsplit
    url, scheme, _coerce_result = _coerce_args(url, scheme)
  File "urllib/parse.py", line 124, in _coerce_args
    return _decode_args(args) + (_encode_result,)
  File "urllib/parse.py", line 108, in _decode_args
    return tuple(x.decode(encoding, errors) if x else '' for x in args)
  File "urllib/parse.py", line 108, in <genexpr>
    return tuple(x.decode(encoding, errors) if x else '' for x in args)

Expected Behavior

I'd expect newrelic to parse Referer URLs w/ non-ascii characters, and fail gracefully if encountering unicode errors during the parsing.
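For illustration, a hypothetical patch sketch of _remove_query_string that decodes bytes headers as latin-1 and falls back to the raw value if parsing still fails (this is not the agent's actual fix):

import urllib.parse as urlparse

def _remove_query_string(url):
    # ASGI header values are bytestrings; decode with latin-1 (the HTTP
    # header encoding) before handing the value to urlsplit().
    if isinstance(url, bytes):
        url = url.decode('latin-1')
    try:
        out = urlparse.urlsplit(url)
    except (UnicodeDecodeError, ValueError):
        return url
    return urlparse.urlunsplit((out.scheme, out.netloc, out.path, '', ''))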

Steps to Reproduce

  1. Instrument a starlette/uvicorn application with the NR python agent.
  2. Send an HTTP request w/ a Referer header containing a non-ascii character.

Your Environment

  • Python 3.8.2
  • starlette==0.14.1
  • uvicorn[standard]==0.12.3
  • newrelic==5.22.1.152

Add httpx Timing Functionality

Related Epic

Support for httpx client

Acceptance Criteria

  • Appropriate timing is added for external traces

  • Tests are added to verify timing mechanism

    • Verify the metrics generated from the external trace

Python 2.7 wheel support broken

Manylinux has dropped support for py27 in the most recent versions of their containers. The existing action we use does not support a method for choosing a version of the container that does. We are opting to fork the repo and create a version of the action that still supports py27.

commit history and permalinks broken?

I have some permalinks to this repo that are now broken due to recent changes. For example this link is now a 404: https://github.com/newrelic/newrelic-python-agent/blob/v5.10.0.138/newrelic/newrelic/core/data_collector.py

The history of commits seems to also have been force-pushed over. This breaks git mirroring into our downstream custom version of this package. Additionally, all the useful git history is now gone. The oldest commit is now only 27 days ago? So when we experience issues between older versions, we cannot see the commits that happened between versions...

AWS IMDS V2 Support

Is your feature request related to a problem? Please describe.

AWS has created a newer protocol to communicate with the metadata server (context) that enables clients to use session-based requests, which is more secure. With the idea of deprecating usage of the V1 protocol, we rely on the agent to no longer use that authentication method.

Feature Description

Adopt the session based authentication with the AWS metadata server.

Describe Alternatives

None that I can think of.

Additional context

https://krebsonsecurity.com/2019/08/what-we-can-learn-from-the-capital-one-hack/

Priority

Please help us better understand this feature request by choosing a priority from the following options:
Must Have

Sanic response header (ex. Content-Length) not logged on latest versions.

Response headers do not get parsed on latest versions of Sanic.

Description

  • Bug occurs due to the deprecation and subsequent removal of BaseHTTPResponse._parse_headers.
  • BaseHTTPResponse.get_headers was briefly used instead, sometimes in conjunction with _parse_headers.
  • Both were removed in this PR which was released in v20.12.2
  • Tests are broken on versions after this PR (v19.3.1) as the middleware logic changed to disallow duplicate middleware.
  • Tests are broken due to the removal of default app in this PR (v19.6.0)

Open Questions

  • Can a user register middleware through a blueprint in a way that is uninstrumented?
    • Tests were written to verify; the answer is no.
  • How can we instrument response headers without _parse_headers or get_headers?

HTTPX Instrumentation fails to process response headers with event hooks.

Description

The new HTTPX instrumentation encounters an issue when an event hook raises an exception: the processing of CAT headers is bypassed.

Expected Behavior

CAT headers should be processed regardless of exceptions raised when receiving the response.

Steps to Reproduce

Register an event hook such as the following, and cause an appropriate exception. Observe that the CAT headers in the response are never processed.

def raise_on_4xx_5xx(response):
    response.raise_for_status()

client = httpx.Client(event_hooks={'response': [raise_on_4xx_5xx]})

Your Environment

HTTPX < 0.17

Additional context

N/A

Starlette/FastAPI transactions are inaccessible when using synchronous routes.

I was using version 5.18.0.148 and this was working fine:

@app.get('/')
def index():
    newrelic.agent.ignore_transaction(flag=True)
    return 'Hello World!'

After updating to 5.20.0.149, the transaction started showing up in the New Relic dashboard.
Tested in 5.20.1.150 and the bug persists.

Using Uvicorn / FastAPI, Python 3.8.

newrelic-admin run-program uvicorn server:app

Agent reports "extra" COMMITs when using SQLAlchemy with isolation_level=AUTOCOMMIT

Description

When using SQLAlchemy with isolation_level=AUTOCOMMIT the agent reports one COMMIT after each DML statement, plus a final ROLLBACK, that did not actually happen.

Expected Behavior

No "extra" COMMITs or ROLLBACKs in the trace.

Steps to Reproduce

  • Execute DML statements with an SQLAlchemy connection that uses isolation_level=AUTOCOMMIT against a PostgreSQL instance running with -c log_statement=all

  • Note how many COMMITs and ROLLBACKs appear in PostgreSQL's logs

  • Check the traces in New Relic and see that there are more entries for "Postgres commit" and "Postgres rollback" than logged by Postgres.

Example app available here: https://github.com/marzocchi/newrelic-agent-issue .

Your Environment

Python agent 6.0.1.155 (verified with older versions too).

Additional context

The following code (from the example app)...

    engine = create_engine("postgresql://foo:foo@db:5432/foo")
    conn = engine.connect().execution_options(isolation_level="AUTOCOMMIT")

    conn.execute('BEGIN;')
    for i in range(0, 3):
        conn.execute(text('INSERT INTO foo (name, created_at) VALUES (:name || pg_sleep(1), now())'), name=f"foo {i}")

    conn.execute('COMMIT;')

... results in this trace:

(screenshot: transaction trace in New Relic One, 2021-02-24)

However, the COMMITs marked 1, 2 and 3 did not really happen (stack trace of #3, they all look the same), and the ROLLBACK at 5 also does not appear in Postgres' log (stack trace).

The COMMIT at 4 corresponds to the conn.execute('COMMIT;'), and its stack trace is a bit different than the first three.

These are the statements seen by Postgres (running with -c log_statement=all),
after a few boilerplate statements from SQLAlchemy or psycopg.

LOG:  statement: BEGIN;
LOG:  statement: INSERT INTO foo (name, created_at) VALUES ('foo 0' || pg_sleep(1), now())
LOG:  statement: INSERT INTO foo (name, created_at) VALUES ('foo 1' || pg_sleep(1), now())
LOG:  statement: INSERT INTO foo (name, created_at) VALUES ('foo 2' || pg_sleep(1), now())
LOG:  statement: COMMIT;

As you can see, only one COMMIT, and no ROLLBACK, hit the server. This can also be verified by inspecting the table in between INSERTs from another connection.

I think the issue is somewhere around here, where a DatabaseTrace is created even though with AUTOCOMMIT, self.__wrapped__.commit and self.__wrapped__.rollback will do nothing.

def commit(self):
    transaction = current_transaction()
    with DatabaseTrace('COMMIT', self._nr_dbapi2_module,
            self._nr_connect_params):
        return self.__wrapped__.commit()

def rollback(self):
    transaction = current_transaction()
    with DatabaseTrace('ROLLBACK', self._nr_dbapi2_module,
            self._nr_connect_params):
        return self.__wrapped__.rollback()
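For illustration, a hypothetical variant of the commit wrapper above that skips the trace when the underlying DBAPI connection is in autocommit mode (psycopg2 connections expose an autocommit attribute); this is a sketch of the idea, not the agent's code:

def commit(self):
    # COMMIT is a no-op when the driver-level connection is in autocommit
    # mode, so don't record a DatabaseTrace for it.
    if getattr(self.__wrapped__, "autocommit", False):
        return self.__wrapped__.commit()
    with DatabaseTrace('COMMIT', self._nr_dbapi2_module,
            self._nr_connect_params):
        return self.__wrapped__.commit()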

Support passing license_key into newrelic.agent.initialize

Is your feature request related to a problem? Please describe.

I'd like to have the option to pass the license key directly into the agent initialization function. I don't want to include it in the newrelic.ini file, since the key is a secret and is only available to the application during deployment.

Feature Description

if my_app.config["environment"] in ("stage", "live"):
    newrelic.agent.initialize(
        config_file="newrelic.ini",
        environment=my_app.config["environment"],
        log_level=my_app.config["loglevel"],
        license_key=my_app.config["new_relic_license_key"],  # <-- this is what it would look like
    )
    newrelic.agent.register_application()

Describe Alternatives

I could also have newrelic read the license key from the env, but our app doesn't use the env for configuration as of yet. And splitting configuration options over multiple avenues would be a bad design, so we'd rather not do it.

As a workaround, we currently manipulate os.environ before importing any newrelic modules.
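For completeness, a sketch of that workaround; read_secret() is a hypothetical stand-in for however the key is fetched at deploy time:

import os

# Must run before any newrelic module is imported, so the agent picks the
# key up from the environment.
os.environ["NEW_RELIC_LICENSE_KEY"] = read_secret("new_relic_license_key")  # hypothetical lookup

import newrelic.agent
newrelic.agent.initialize(config_file="newrelic.ini")
newrelic.agent.register_application()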

Additional Context

I tried to write a PR myself, but the config parsing for newrelic is wild, and I couldn't get it to work by myself.

Priority

[Nice to Have]

Nested `extra` dictionaries are converted to strings

Description
We are using the guide for adding context to Python logs and are having an issue: all extra params are turned into strings, i.e. it prevents logging integers or dictionaries.

Also, the formatter moves all extra kwargs into an "extra." namespace.

Expected Behavior
logging.info("Some text", extra={"part1": {"a": "b", "c": 1}}) should log {..., "part1": {"a": "b", "c": 1}}

Steps to Reproduce

Configure python logging using new relic formatter as described in the guide

import logging
logging.info("Some text", extra={"part1": {"a": "b", "c": 1}})

Logs {"timestamp": 1601210154563, "message": "Some text", "log.level": "INFO", "logger.name": "root", "thread.id": 140243587753792, "thread.name": "MainThread", "process.id": 7, "process.name": "MainProcess", "file.name": "/app/app/tasks.py", "line.number": 22, "entity.type": "SERVICE", "extra.part1": "{'a': 'b', 'c': 1}"}

Instead it should be

{"timestamp": 1601210154563, "message": "Some text", "log.level": "INFO", "logger.name": "root", "thread.id": 140243587753792, "thread.name": "MainThread", "process.id": 7, "process.name": "MainProcess", "file.name": "/app/app/tasks.py", "line.number": 22, "entity.type": "SERVICE", "part1": {"a": "b", "c": 1}}

Your Environment

Docker - python:3.8-slim, Django 3.

django==3.0.8
newrelic==5.20.0.149

Additional context

Tried searching for similar tickets, but didn't find anything related.

DataDog parses nested dictionaries correctly btw.

Is there a way to get the trace and span id without using the newrelic formatter?

As a workaround it's possible to use python json logger and add span and trace id there, I just want to find out how these can be retrieved.
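One way to do this (a sketch; availability depends on agent version) is the agent's linking-metadata API, which returns the ids when a transaction or span is active:

import newrelic.agent

metadata = newrelic.agent.get_linking_metadata()
trace_id = metadata.get("trace.id")   # None outside an active transaction
span_id = metadata.get("span.id")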

JSON serialization failure when agent initialized with a PosixPath object

When initializing the New Relic agent with the configuration file provided as a PosixPath object, then the agent will fail with a JSON serialization failure.

Issue occurs on 5.18.0.148
Issue does not occur on 5.14.0.142

Description
When initializing the agent like this:

from pathlib import Path
import newrelic.agent

BASE_DIR = Path(__file__).parents[1]
newrelic.agent.initialize(BASE_DIR / 'newrelic.ini')  # initialize with a `PosixPath` type

We get this failure:

{
"asctime": "2020-09-09 06:32:07,356",
"levelname": "ERROR",
"name": "newrelic.core.application",
"message": "Unexpected exception when registering agent with the data collector. If this problem persists, please report this problem to New Relic support for further investigation.",
"pathname": "/venv/lib/python3.8/site-packages/newrelic/core/application.py",
"lineno": 367,
"funcName": "connect_to_data_collector",
"exc_info": "Traceback (most recent call last):\n File \"/venv/lib/python3.8/site-packages/newrelic/core/application.py\", line 346, in connect_to_data_collector\n active_session = create_session(None, self._app_name,\n File \"/venv/lib/python3.8/site-packages/newrelic/core/data_collector.py\", line 232, in create_session\n return DeveloperModeSession(\n File \"/venv/lib/python3.8/site-packages/newrelic/core/data_collector.py\", line 40, in __init__\n self._protocol = self.PROTOCOL.connect(\n File \"/venv/lib/python3.8/site-packages/newrelic/core/agent_protocol.py\", line 458, in connect\n protocol.send(\"agent_settings\", (global_settings_dump(settings),))\n File \"/venv/lib/python3.8/site-packages/newrelic/core/agent_protocol.py\", line 205, in send\n params, headers, payload = self._to_http(method, payload)\n File \"/venv/lib/python3.8/site-packages/newrelic/core/agent_protocol.py\", line 251, in _to_http\n return params, self._headers, json_encode(payload).encode(\"utf-8\")\n File \"/venv/lib/python3.8/site-packages/newrelic/common/encoding_utils.py\", line 104, in json_encode\n return json.dumps(obj, **_kwargs)\n File \"/usr/local/lib/python3.8/json/__init__.py\", line 234, in dumps\n return cls(\n File \"/usr/local/lib/python3.8/json/encoder.py\", line 199, in encode\n chunks = self.iterencode(o, _one_shot=True)\n File \"/usr/local/lib/python3.8/json/encoder.py\", line 257, in iterencode\n return _iterencode(o, 0)\n File \"/venv/lib/python3.8/site-packages/newrelic/common/encoding_utils.py\", line 91, in _encode\n raise TypeError(repr(o) + ' is not JSON serializable')\nTypeError: PosixPath('/app/newrelic.ini') is not JSON serializable",
"correlation_id": "30d062bf-5d60-4fe5-9409-fdd7f52f532e",
"forwarded_user": null
}

Expected Behavior
Previously (version 5.14.0.142 or older) this form of initialization succeeded.

Troubleshooting
To investigate, I have altered the initialization to pass in a string, rather than a PosixPath object:

from pathlib import Path
import newrelic.agent

BASE_DIR = Path(__file__).parents[1]
newrelic.agent.initialize(f"{BASE_DIR / 'newrelic.ini'}")  # initialize with a `str` type

When doing this, the exception does not occur, and the agent initializes and connects successfully.

Steps to Reproduce
Initialize the 5.18.0.148 agent with a PosixPath object.

Your Environment
Python 3.8 + Django 2.2.16 running in a Docker container with the New Relic Python agent 5.18.0.148.

Additional context
New Relic support case 425788

Wrap httpx send methods

Related Epic

Support for httpx client

Acceptance Criteria

  • send method on httpx._client, Client.send, and AsyncClient.send are wrapped (with hasattr() safeguards)

  • Tests are written to verify this functionality

Transactions are incorrect in FastAPI/Starlette when middleware and background tasks in application

In a FastAPI or Starlette application that contains middleware and has background tasks, the endpoints for the methods that call the background task(s) are incorrectly displayed in the UI.

Description
Endpoints that call background task(s) are incorrectly shown in the UI. The transaction name appears to be replaced by the name of the background task method (rather than the name of the endpoint), and the transaction shows as a non-web transaction, rather than a web transaction.

Expected Behavior
The transaction should be classified as a web transaction, and the transaction name should be the name of the endpoint, instead of the name of the background task method.

Steps to Reproduce
To reproduce the issue:

app.py:

import time
from fastapi import FastAPI, Request
from routers import routes

app = FastAPI()

app.include_router(routes.router)

@app.middleware("http")
async def test_async_middleware(request, call_next):
     response = await call_next(request)
     response.headers["test"] = "testing middleware"
     return response

routers/routes.py:

from fastapi import Request, BackgroundTasks, APIRouter
import asyncio
from time import sleep
router = APIRouter()

async def some_task():
    asyncio.sleep(2)
    return

async def some_async_task():
    await asyncio.sleep(2)
    return

@router.post("/test-async-endpoint")
async def test_async_endpoint(
    request: Request,
    background_tasks: BackgroundTasks,
    a: str = "some param"
    ):
        background_tasks.add_task(some_async_task)
        return {"return": a}

@router.post("/test-endpoint")
def test_endpoint(
    request: Request,
    background_tasks: BackgroundTasks,
    a: str = "some param"
    ):
        background_tasks.add_task(some_task)
        return {"return": a}

ZeroDivisionError: float division by zero

Description
In newrelic/core/thread_utilization.py, in call at line 78, elapsed_time is 0.0, so utilization = utilization / elapsed_time raises an exception. The error happens quite frequently - over a thousand times so far.

Expected Behavior
If elapsed time is 0 then thread utilization should be 0.
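A minimal sketch of the expected guard (hypothetical, not the agent's code):

def thread_utilization_ratio(utilization, elapsed_time):
    # Report zero utilization when no time has elapsed instead of dividing
    # by zero.
    return utilization / elapsed_time if elapsed_time else 0.0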

Troubleshooting or NR Diag results

newrelic.ini:

[newrelic]

license_key = a-very-super-secret-key

app_name = Python Application

monitor_mode = true

log_file = stderr

log_level = info

high_security = false

transaction_tracer.enabled = true
transaction_tracer.transaction_threshold = apdex_f
transaction_tracer.record_sql = obfuscated
transaction_tracer.stack_trace_threshold = 0.5
transaction_tracer.explain_enabled = true
transaction_tracer.explain_threshold = 0.5
transaction_tracer.function_trace =
error_collector.enabled = true
error_collector.ignore_errors =
browser_monitoring.auto_instrument = true
thread_profiler.enabled = true
distributed_tracing.enabled = false

[newrelic:development]
monitor_mode = false

[newrelic:test]
monitor_mode = false

[newrelic:staging]
app_name = Python Application (Staging)
monitor_mode = true

[newrelic:production]
app_name = 15Five (Production)
monitor_mode = true

[import-hook:django]
execute = true
instrumentation.scripts.django_admin = reminder_report_due_today
                                       reminder_report_due_tomorrow
                                       reminder_report_due_yesterday
                                       summary_report_review
                                       comment_rollup_hourly
                                       comment_rollup_daily
                                       create_report_placeholders
                                       recurly_sync_subscriptions
                                       update_expired_invites
                                       calculate_admin_dashboard_stats
                                       companystatsweekly
                                       load_demo_company
                                       cleanup_db
                                       customerio_sync_deactivated
                                       expansion_update
                                       celery_monitor
                                       warbyexportemail
                                       ealoansexportemail
                                       test_newrelic

Error message:

The merging of custom metric samples from data source "Thread Utilization" has failed. Validate that the data source is producing samples correctly. If this issue persists then please report this problem to the data source provider or New Relic support for further investigation.


Steps to Reproduce
The error happened while we were running a long-running custom django management command, reset_demo_companies. Unfortunately the code is proprietary.

Your Environment
python 3.7.7
newrelic==5.20.1.150
Django v2
Ubuntu 18


Exception `AttributeError: 'NoneType' object has no attribute 'distributed_tracing'`

Description
This issue happened with Tornado. Within an async POST handler, if we finish the request with self.finish() before using the AsyncHTTPClient to make an HTTP request, the transaction is 'exited' with settings set to None. But we end up in generate_request_headers with 'transaction' not None but transaction.settings None, leading to this exception:

Traceback (most recent call last):
  File "/home/ketan/repo/iot/newrelic/iot-data-manager/services/common/asynchttp.py", line 24, in fetch
    async_http_response = await http_client.fetch(url)
  File "/home/ketan/.local/lib/python3.6/site-packages/newrelic/hooks/framework_tornado.py", line 284, in wrap_httpclient_fetch
    outgoing_headers = trace.generate_request_headers(current_transaction())
  File "/home/ketan/.local/lib/python3.6/site-packages/newrelic/api/cat_header_mixin.py", line 92, in generate_request_headers
    if settings.distributed_tracing.enabled:
AttributeError: 'NoneType' object has no attribute 'distributed_tracing'

Expected Behavior
There should not be any exception

Troubleshooting or NR Diag results
The issue is happening because we are doing CachedPath.exit() in the context of tornado.web.RequestHandler.finish(). If there is any further network call in the same POST handler, the transaction object is not None, but transaction.settings is None.
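For illustration, a hypothetical defensive check in generate_request_headers that would avoid the exception (a sketch, not the agent's fix):

def generate_request_headers(transaction):
    # If the transaction has already exited, its settings are gone, so skip
    # header generation entirely instead of raising.
    if transaction is None or transaction.settings is None:
        return []
    headers = []
    # ... build CAT / DT headers from transaction.settings here ...
    return headers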

Steps to Reproduce
I have created a sample app to reproduce this issue in the exact same manner what we faced.

The server (myapp.py):

import tornado.ioloop
import tornado.web
from tornado.httpclient import AsyncHTTPClient
import traceback

class App1(tornado.web.RequestHandler):
    async def post(self):
        try:
            print("Hello from App1")
            self.finish()
            http_client = AsyncHTTPClient()
            response = await http_client.fetch('http://127.0.0.1:9002')
            print(response.body.decode())
        except:
            print(traceback.format_exc())

class App2(tornado.web.RequestHandler):
    async def get(self):
        self.write("Hello from App2")
        self.finish()

def make_app1():
    return tornado.web.Application([
        (r"/", App1),
    ])

def make_app2():
    return tornado.web.Application([
        (r"/", App2),
    ])

if __name__ == "__main__":
    app1 = make_app1()
    app1.listen(9001)
    app2 = make_app2()
    app2.listen(9002)
    tornado.ioloop.IOLoop.current().start()

Execution:

NEW_RELIC_LICENSE_KEY=<my_newrelic_lic_key> NEW_RELIC_APP_NAME=my-app newrelic-admin run-python myapp.py

The client (curl command):

curl -X POST http://127.0.0.1:9001
curl -X POST http://127.0.0.1:9001

Please note that the issue does not surface during the first call. It happens from the second call onwards.

Server output:

Hello from App1
Hello from App2
Hello from App1
Traceback (most recent call last):
  File "myapp.py", line 12, in post
    response = await http_client.fetch('http://127.0.0.1:9002')
  File "/home/ketan/.local/lib/python3.6/site-packages/newrelic/hooks/framework_tornado.py", line 284, in wrap_httpclient_fetch
    outgoing_headers = trace.generate_request_headers(current_transaction())
  File "/home/ketan/.local/lib/python3.6/site-packages/newrelic/api/cat_header_mixin.py", line 92, in generate_request_headers
    if settings.distributed_tracing.enabled:
AttributeError: 'NoneType' object has no attribute 'distributed_tracing'

If we move the call self.finish() to line after http_client.fetch call, the server works fine:

Hello from App1
Hello from App2
Hello from App1
Hello from App2

It also works fine if I start the server as a regular Python app without newrelic (python3 myapp.py).
I hope this helps.
Your Environment
Ubuntu 18.04
Python 3.7

Implement enable/disable setting for gc profiler.

Acceptance criteria:

  • GC profiler is disabled by default, and can be enabled by a setting.
  • Testing is done to ensure no metrics are collected when disabled.
  • Existing tests pass after adding an override settings decorator to enable the profiler.

Process Responses to Add Attributes on External Spans

Related Epic

Support for httpx client

Acceptance Criteria/ Notes

  • Reference process_response method to add required attributes

  • Check out other external libraries for examples

  • Verify CAT response header functionality and status code attribute

  • Tests to verify required attributes are added

Agent crashes in FIPS mode

When using the agent on a FIPS-enabled Linux (such as RHEL), the agent raises errors.

Description
Running RHEL 7.9 with FIPS
Our logs show the newrelic-python-agent erroring. The reported problems were traced down to:

newrelic/common/encoding_utils.py, line 267, in generate_path_hash
    path_hash = (rotated ^ int(md5(name).hexdigest()[-8:], base=16))

ValueError: [digital envelope routines: EVP_DigestInit_ex] disabled for fips

Expected Behavior
MD5 should not be used
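For reference, on Python 3.9+ hashlib lets a caller mark a digest as non-security use, which OpenSSL permits in FIPS mode; whether this (or switching hashes) is the right fix for the agent is an open question, and the environment above is Python 3.6, so this is only a sketch:

import hashlib

# usedforsecurity=False (Python 3.9+) tells OpenSSL the digest is not used
# for security purposes, so it is allowed under FIPS.
digest = hashlib.md5(b"transaction-path-name", usedforsecurity=False).hexdigest()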

Troubleshooting or NR Diag results

Traceback (most recent call last):
  File \"/home/myapp/p36venv/lib64/python3.6/site-packages/aiohttp/web_protocol.py\", line 418, in start
    resp = await task
  File \"/home/myapp/p36venv/lib64/python3.6/site-packages/newrelic/common/async_proxy.py\", line 120, in send
    return self.__wrapped__.send(value)
  File \"/home/myapp/p36venv/lib64/python3.6/site-packages/newrelic/common/async_proxy.py\", line 98, in __exit__
    self.transaction.__exit__(None, None, None)
  File \"/home/myapp/p36venv/lib64/python3.6/site-packages/newrelic/api/transaction.py\", line 553, in __exit__
    path_hash=self.path_hash,
  File \"/home/myapp/p36venv/lib64/python3.6/site-packages/newrelic/api/transaction.py\", line 722, in path_hash
    path_hash = generate_path_hash(identifier, seed)
  File \"/home/myapp/p36venv/lib64/python3.6/site-packages/newrelic/common/encoding_utils.py\", line 267, in generate_path_hash
    path_hash = (rotated ^ int(md5(name).hexdigest()[-8:], base=16))
ValueError: [digital envelope routines: EVP_DigestInit_ex] disabled for fips

Steps to Reproduce
Run newrelic agent inside a FIPS-enabled RHEL OS

Your Environment
RHEL 7.9 FIPS
Python 3.6.8
newrelic-5.16.2.147-cp36-cp36m-manylinux2010_x86_64.whl

Additional context
Has nobody else ever encountered this?

Virtuoso Research Spike: deployment in Single Host

Background

Only 28% of new customers are able to see data on the first day; we want to increase that.

The Virtuoso team has created a guided install which detects and suggests agents to install. Once that agent is detected, we need to deliver a better installation experience for our customers.

Value/Use Cases of Virtuoso

  1. Coverage: Ensure all users with Node applications have them instrumented:
    1. A new or existing user is using our Deploy New Relic guided install. It discovers they are running a Node application and suggests installing the Node agent.
  2. A user runs the "stitched path" command from the "Language Agents" screen within Virtuoso or through the "Instrumentation Recommendations" page in the entity overview.

Requirements for Recipe

  1. Wherever possible, hot attach is a requirement via “Genesis” (the Infra agent virtuoso installation path).
  2. Where hot attach is not possible, we will automate 100% of the configuration, code changes (in repo only, not live), and process restarts.
  3. We will provide an interactive mode for terminal CLI usage
  4. We will provide non-interactive/scriptable mode and terraform plugins for automated fleet deployment

Appetite

2 engineers x 2.5 weeks = 5 engineering weeks

Acceptance Criteria

  1. Determine what path will suit 80-90% of our customers
    1. supported environments, supported frameworks, etc.
  2. Decide on hot attach or automation of config and what that works like
  3. Outline work for interactive mode
  4. Outline work for non-interactive/scriptable mode and terraform plugins for automated fleet deployment
  5. Write acceptance criteria for the next MMF

Additional Information/Supporting Resources

Demo of Virtuoso installing infra agents and some integrations.

Demo showing latest mockups which include application instrumentation

Node Recipes

Related Koan Key Result: https://newrelic.koan.co/team/f5bc77b8-f32c-40c3-a912-32f1a4ae473d/goals/f/goals/8812d243-4a28-44a0-902a-6036e6a2f996

Related Row in Commits Spreadsheet: https://docs.google.com/spreadsheets/d/1xBv7gCmI-o0Ecrj9w6xUDlawi6Sdtc7_EMrGDvfqlXc/edit#gid=1546171950&range=E7


Newrelic Agent 6.0.0.154 Not honoring HTTP_PROXY env vars

I have two identical services deployed. One is using 6.0.0.154 the other is using 5.24.0.153.

The one on version 6 is not registering with New Relic when in a network with no default route out, only a proxy. It does, however, register when given a default route without the proxy...

The one on version 5 works as expected registering with newrelic using the proxy configured in HTTPS_PROXY env var.

Documentation example does not match actual API

Description
The example provided in the docs here does not match the actual required parameter type. The Parameters section says one must pass a list of tuples, yet the example given passes a dict, which raises ValueError: too many values to unpack. Unfortunately, we have fallen victim to writing our code based off the example more times than I would like to admit.

Expected Behavior

  1. A valid example such as:
@newrelic.agent.background_task()
def send_request():
    response = requests.post('http://URL_path', headers=headers, data=data)
    newrelic.agent.add_custom_parameters([
        ('url_path_status_code', response.status_code),
    ])
  2. Potentially, a misuse of the SDK should not totally fail code execution but could notify the user of the error loudly, similar to how the logging module is implemented:
In [8]: def dont_kill_my_code(): 
   ...:     logger.info("bad data %d", "not a digit") 
   ...:     print("I made it here still") 
   ...:                                                                                                                                                                                                     

In [9]: dont_kill_my_code()                                                                                                                                                                                 
--- Logging error ---
Traceback (most recent call last):
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/logging/__init__.py", line 1081, in emit
    msg = self.format(record)
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/logging/__init__.py", line 925, in format
    return fmt.format(record)
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/logging/__init__.py", line 664, in format
    record.message = record.getMessage()
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/logging/__init__.py", line 369, in getMessage
    msg = msg % self.args
TypeError: %d format: a number is required, not str
Call stack:
  File "/Users/markward/anaconda2/envs/pollen/bin/ipython", line 11, in <module>
    sys.exit(start_ipython())
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/__init__.py", line 126, in start_ipython
    return launch_new_instance(argv=argv, **kwargs)
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/traitlets/config/application.py", line 664, in launch_instance
    app.start()
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/terminal/ipapp.py", line 356, in start
    self.shell.mainloop()
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 563, in mainloop
    self.interact()
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 554, in interact
    self.run_cell(code, store_history=True)
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2866, in run_cell
    result = self._run_cell(
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2895, in _run_cell
    return runner(coro)
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner
    coro.send(None)
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3071, in run_cell_async
    has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3263, in run_ast_nodes
    if (await self.run_code(code, result,  async_=asy)):
  File "/Users/markward/anaconda2/envs/pollen/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-9-405f0f64a064>", line 1, in <module>
    dont_kill_my_code()
  File "<ipython-input-8-45cf361c5300>", line 2, in dont_kill_my_code
    logger.info("bad data %d", "not a digit")
Message: 'bad data %d'
Arguments: ('not a digit',)
I made it here still

Introduce a default config file location

Is your feature request related to a problem? Please describe.

We are automating this agent deployment using Puppet.

When we do that we can do all the operations needed for it to start working for a given app outside of the app config itself - install the package in the app's virtualenv, generate the config, restart the app. All we need to pass is the virtualenv directory and the app's systemd service name.

But then we need to add the env variable NEW_RELIC_CONFIG_FILE to the systemd service file of that app and our code gets unnecessarily complicated.

It would be much easier if the agent just had a default config file location as we run a single app per server.

Feature Description

Make the agent try to read the config file from a default location, e.g. /etc/newrelic/python.ini, if the configuration is not read by any of the current methods. This way the change will be backward compatible with existing user setups.
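A sketch of the requested behaviour from the application's point of view; /etc/newrelic/python.ini is the example path from this request, not an existing agent default:

import os
import newrelic.agent

DEFAULT_CONFIG = "/etc/newrelic/python.ini"

# Fall back to the default location only when no config is supplied by the
# current methods (env var, explicit argument).
config_file = os.environ.get("NEW_RELIC_CONFIG_FILE")
if not config_file and os.path.exists(DEFAULT_CONFIG):
    config_file = DEFAULT_CONFIG

newrelic.agent.initialize(config_file)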

Describe Alternatives

We could add NEW_RELIC_CONFIG_FILE=/etc/newrelic/python.ini as a standard env variable in all of our Python apps systemd unit files, but I find this a rather ugly solution.

Priority

Nice to Have

daphne support

Is your feature request related to a problem? Please describe.

We are considering switching to daphne from uwsgi

Feature Description

daphne support

Describe Alternatives

We could switch to uvicorn, which is supported by newrelic, or use a different APM

Additional context

Add any other context here.

Priority

Nice to Have

@aldenjenkins FYI

ASGI APIs

Overview
ASGI stands for Asynchronous Server Gateway Interface. It is a community-driven Python specification (not officially accepted into the Python language standards) for standardizing interfaces for asyncio-based network applications.
Motivation:
There is an emerging movement of customers transitioning onto asyncio-based web application frameworks, many of which implement the ASGI standard. Right now those customers don't have visibility into their transaction data.
Because frameworks and servers implement the same ASGI standard, customers can choose a network server independently of their framework (servers and frameworks are now pluggable).
Some examples of frameworks that utilize ASGI:

  • FastAPI
  • Django Channels
  • Starlette
  • Quart
  • Responder
  • Sanic

Servers implementing the ASGI specification:

  • uvicorn
  • hypercorn
  • daphne
Acceptance Criteria:

  • Deliver asgi_application APIs (context manager, decorator, and wrapper interfaces) that can wrap a standard ASGI interface to extract web transaction data and start/end the transaction (sketched below)
  • ASGI version 1 and version 2 should be automatically supported via the same API
  • WebSockets do not create transactions, since this can cause memory explosion for long-running transactions
  • Long-lived HTTP/2 connections do not create transactions

Risks:
The ASGI specification has not yet been accepted (or even submitted as an official proposal) into the Python language standards. As such, the specification may change rapidly, or in a way that breaks our APIs. This can introduce a maintenance burden on our team if we provide public APIs that break due to specification changes.
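
As a rough sketch of the decorator form named in the acceptance criteria: the asgi_application name is taken from the criteria above, the toy app is illustrative, and this is not a definitive description of the shipped API.

import newrelic.agent

newrelic.agent.initialize("newrelic.ini")

@newrelic.agent.asgi_application()
async def app(scope, receive, send):
    # Per the criteria above, only plain HTTP requests should produce
    # transactions; websocket and long-lived HTTP/2 connections do not.
    assert scope["type"] == "http"
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"Hello"})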

Wheel support for Linux aarch64 (arm64)

Summary
Installing newrelic on aarch64 via pip (using the command "pip3 install newrelic") builds the wheel from source code.

Problem description
newrelic does not publish a wheel for aarch64 on the PyPI repository, so when installing newrelic via pip on aarch64, pip builds the wheel locally and installation takes considerably longer. Making a wheel available for aarch64 would benefit aarch64 users by minimizing newrelic installation time.

Expected Output
Pip should be able to download a newrelic wheel from the PyPI repository rather than building it from source code.

@newrelic-team, please let me know if I can help with building the wheel or uploading it to the PyPI repository. I am keen to make a newrelic wheel available for aarch64, and it would be a great opportunity for me to work with you.

DeprecationWarning: `formatargspec` is deprecated since Python 3.5

Description

.venv/lib/python3.8/site-packages/newrelic/console.py:84: 18 warnings
  /opt/.venv/lib/python3.8/site-packages/newrelic/console.py:84: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
    prototype = wrapper.__name__[3:] + ' ' + inspect.formatargspec(
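
A minimal sketch of the replacement the warning suggests; the describe() helper is hypothetical and only mirrors the shape of the flagged line, not the agent's actual console.py code:

import inspect

def describe(wrapper):
    # str(inspect.signature(...)) yields the same "(arg1, arg2=default)"
    # style string that inspect.formatargspec produced, without emitting
    # the DeprecationWarning.
    return wrapper.__name__[3:] + " " + str(inspect.signature(wrapper))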

Expected Behavior

No deprecation warning.

Your Environment

Python 3.8.5, New Relic agent 5.20.0.149.

ASGI - Uvicorn

Background

Uvicorn is a fast ASGI server based on uvloop and httptools. It supports HTTP/1 and WebSockets (the scope here is limited to HTTP/1). It is growing the fastest in popularity among ASGI servers: https://pypistats.org/packages/uvicorn.

Today, customers running Uvicorn will not see any data. Let's change that!

Acceptance Criteria:

  • Send web transaction start/end times for HTTP/1 requests

  • Attach appropriate attributes to those events (see the sketch after this list), e.g.

    • number of bytes on output automatically captured
    • Request headers automatically captured
    • Request method
    • Request URI
    • Response headers
    • Response status
  • Ensure DT payloads are accepted automatically, so that uvicorn accesses appear connected in the DT UI

  • Errors (exceptions) are captured before they are converted to an HTTP response

  • Document this support in NR Docs and Website

  • Demo
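
A minimal sketch (not New Relic instrumentation itself) of the bare ASGI callable that uvicorn serves, showing where the attributes listed above appear; the app name and the local run call are illustrative:

import uvicorn

async def app(scope, receive, send):
    # scope["method"], scope["path"] and scope["headers"] carry the request
    # attributes listed above; the send() events below carry the response
    # status, response headers and the number of body bytes.
    assert scope["type"] == "http"
    body = b"hello from uvicorn"
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": body})

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)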

Supporting documentation:
http://www.uvicorn.org/
https://buildmedia.readthedocs.org/media/pdf/asgi/latest/asgi.pdf

[Repolinter] Open Source Policy Issues

Repolinter Report

🤖This issue was automatically generated by repolinter-action, developed by the Open Source and Developer Advocacy team at New Relic. This issue will be automatically updated or closed when changes are pushed. If you have any problems with this tool, please feel free to open a GitHub issue or give us a ping in #help-opensource.

This Repolinter run generated the following results:

❗ Error: 0  ❌ Fail: 1  ⚠️ Warn: 0  ✅ Pass: 6  Ignored: 0  Total: 7

Fail

readme-starts-with-community-plus-header

The README of a community plus project should have a community plus header at the start of the README. If you already have a community plus header and this rule is failing, your header may be out of date. For more information please visit https://opensource.newrelic.com/oss-category/. Below is a list of files or patterns that failed:

  • README.rst: The first 1 lines do not contain the pattern(s): Open source Community Plus header (see https://opensource.newrelic.com/oss-category).
    • 🔨 Suggested Fix: prepend [![Community Plus header](https://github.com/newrelic/opensource-website/raw/master/src/images/categories/Community_Plus.png)](https://opensource.newrelic.com/oss-category/#community-plus) to file

Passed

license-file-exists

Found file (LICENSE). New Relic requires that all open source projects have an associated license contained within the project. This license must be permissive (e.g. non-viral or copyleft), and we recommend Apache 2.0 for most use cases. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit.

readme-file-exists

Found file (README.rst). New Relic requires a README file in all projects. This README should give a general overview of the project, and should point to additional resources (security, contributing, etc.) where developers and users can learn further. For more information please visit https://github.com/newrelic/open-by-default.

readme-contains-link-to-security-policy

Contains a link to the security policy for this repository (README.rst). New Relic recommends putting a link to the open source security policy for your project (https://github.com/newrelic/<repo-name>/security/policy or ../../security/policy) in the README. For an example of this, please see the "a note about vulnerabilities" section of the Open By Default repository. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

readme-contains-discuss-topic

Contains a link to the appropriate discuss.newrelic.com topic (README.rst). New Relic recommends directly linking your appropriate discuss.newrelic.com topic in the README, allowing developers an alternate method of getting support. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

code-of-conduct-file-does-not-exist

New Relic has moved the CODE_OF_CONDUCT file to a centralized location where it is referenced automatically by every repository in the New Relic organization. Because of this change, any other CODE_OF_CONDUCT file in a repository is now redundant and should be removed. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit. Did not find a file matching the specified patterns. All files passed this test.

third-party-notices-file-exists

Found file (THIRD_PARTY_NOTICES.md). A THIRD_PARTY_NOTICES.md file can be present in your repository to grant attribution to all dependencies being used by this project. This document is necessary if you are using third-party source code in your project, with the exception of code referenced outside the project's compiled/bundled binary (ex. some Java projects require modules to be pre-installed in the classpath, outside the project binary and therefore outside the scope of the THIRD_PARTY_NOTICES). Please review your project's dependencies and create a THIRD_PARTY_NOTICES.md file if necessary. For JavaScript projects, you can generate this file using the oss-cli. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view.
