
Pinject

Pinject is a dependency injection library for Python.

The primary goal of Pinject is to help you assemble objects into graphs in an easy, maintainable way.

If you are already familiar with other dependency injection libraries, you may want to read the condensed summary section at the end, so that you get an idea of what Pinject is like and how it might differ from libraries you're used to.

There is a changelog of differences between released versions near the end of this README.

Why Pinject?

If you're wondering why you should use a dependency injection library at all: if you're writing a lot of object-oriented Python code, then it will make your life easier.

If you're wondering why to use Pinject instead of another Python dependency injection library, a few reasons are:

  • Pinject is much easier to get started with. Forget having to decorate your code with @inject_this and @annotate_that just to get started. With Pinject, you call new_object_graph(), one line, and you're good to go.
  • Pinject is a pythonic dependency injection library. Python ports of other libraries, like Spring or Guice, retain the feel (and verbosity) of being designed for a statically typed language. Pinject is designed from the ground up for Python.
  • The design choices in Pinject are informed by several dependency injection experts working at Google, based on many years of experience. Several common confusing or misguided features are omitted altogether from Pinject.
  • Pinject has great error messages. They tell you exactly what you did wrong, and exactly where. This should be a welcome change from other dependency frameworks, with their voluminous and yet inscrutable stack traces.

Look at the simplest getting-started examples for Pinject and for other similar libraries. Pinject should be uniformly easier to use, clearer to read, and lighter on boilerplate. If you don't find this to be the case, email!

Installation

The easiest way to install Pinject is to get the latest released version from PyPI:

sudo pip install pinject

If you are interested in the development version, you can install the upcoming release from Test PyPI:

sudo pip install \
    --no-deps \
    --no-cache \
    --upgrade \
    --index-url https://test.pypi.org/simple/ \
    pinject

You can also check out all the source code, including tests, designs, and TODOs:

git clone https://github.com/google/pinject

Basic dependency injection

The most important function in the pinject module is new_object_graph(). This creates an ObjectGraph, which you can use to instantiate objects using dependency injection. If you pass no args to new_object_graph(), it will return a reasonably configured default ObjectGraph.

>>> import pinject
>>> class OuterClass(object):
...     def __init__(self, inner_class):
...         self.inner_class = inner_class
...
>>> class InnerClass(object):
...     def __init__(self):
...         self.forty_two = 42
...
>>> obj_graph = pinject.new_object_graph()
>>> outer_class = obj_graph.provide(OuterClass)
>>> print outer_class.inner_class.forty_two
42
>>>

As you can see, you don't need to tell Pinject how to construct its ObjectGraph, and you don't need to put decorators in your code. Pinject has reasonable defaults that allow it to work out of the box.

A Pinject binding is an association between an arg name and a provider. In the example above, Pinject created a binding between the arg name inner_class and an implicitly created provider for the class InnerClass. That binding is how Pinject knew to pass an instance of InnerClass as the value of the inner_class arg when instantiating OuterClass.

Implicit class bindings

Pinject creates implicit bindings for classes. The implicit bindings assume your code follows PEP8 conventions: your classes are named in CamelCase, and your args are named in lower_with_underscores. Pinject transforms class names to injectable arg names by lowercasing words and connecting them with underscores. It will also ignore any leading underscore on the class name.

Class name    Arg name
Foo           foo
FooBar        foo_bar
_Foo          foo
_FooBar       foo_bar

If two classes map to the same arg name, whether those classes are in the same module or different modules, Pinject will not create an implicit binding for that arg name (though it will not raise an error).
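
For example, here is a sketch of that rule (the class names are illustrative; the exact exception raised when the ambiguous arg name is used is not shown):

>>> class Foo(object):
...     pass
...
>>> class _Foo(object):
...     pass  # also maps to the arg name foo
...
>>> class NeedsFoo(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> obj_graph = pinject.new_object_graph(
...     modules=None, classes=[Foo, _Foo, NeedsFoo])
>>> # obj_graph.provide(NeedsFoo)  # would fail: no usable implicit binding for "foo"
>>>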

Finding classes and providers for implicit bindings

So far, the examples have not told new_object_graph() the classes for which it should create implicit bindings. new_object_graph() by default looks in all imported modules, but you may occasionally want to restrict the classes for which new_object_graph() creates implicit bindings. If so, new_object_graph() has two args for this purpose.

  • The modules arg specifies in which (Python) modules to look for classes; this defaults to ALL_IMPORTED_MODULES.
  • The classes arg specifies an explicit list of classes; this defaults to None.
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class Foo(object):
...     pass
...
>>> obj_graph = pinject.new_object_graph(modules=None, classes=[SomeClass])
>>> # obj_graph.provide(SomeClass)  # would raise a NothingInjectableForArgError
>>> obj_graph = pinject.new_object_graph(modules=None, classes=[SomeClass, Foo])
>>> some_class = obj_graph.provide(SomeClass)
>>>

Auto-copying args to fields

One thing that can get tedious about dependency injection via initializers is that you need to write __init__() methods that copy args to fields. These __init__() methods can get repetitive, especially when you have several initializer args.

>>> class ClassWithTediousInitializer(object):
...     def __init__(self, foo, bar, baz, quux):
...         self._foo = foo
...         self._bar = bar
...         self._baz = baz
...         self._quux = quux
...
>>>

Pinject provides decorators that you can use to avoid repetitive initializer bodies.

  • @copy_args_to_internal_fields prepends an underscore, i.e., it copies an arg named foo to a field named _foo. It's useful for normal classes.
  • @copy_args_to_public_fields copies the arg name as-is, i.e., it copies an arg named foo to a field named foo. It's useful for data objects.
>>> class ClassWithTediousInitializer(object):
...     @pinject.copy_args_to_internal_fields
...     def __init__(self, foo, bar, baz, quux):
...         pass
...
>>> cwti = ClassWithTediousInitializer('a-foo', 'a-bar', 'a-baz', 'a-quux')
>>> print cwti._foo
'a-foo'
>>>

When using these decorators, you'll normally just use pass as the body of the initializer, but you can put other statements there if you need to. The args will be copied to fields before the initializer body is executed.

These decorators can be applied to initializers that take **kwargs but not initializers that take *pargs (since it would be unclear what field name to use).
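
For comparison, here is a minimal sketch of @copy_args_to_public_fields (the class name is just illustrative):

>>> class DataHolder(object):
...     @pinject.copy_args_to_public_fields
...     def __init__(self, foo, bar):
...         pass
...
>>> dh = DataHolder('a-foo', 'a-bar')
>>> dh.foo
'a-foo'
>>> dh.bar
'a-bar'
>>>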

Binding specs

To create any bindings more complex than the implicit class bindings described above, you use a binding spec. A binding spec is any python class that inherits from BindingSpec. A binding spec can do three things:

  • Its configure() method can create explicit bindings to classes or instances, as well as requiring bindings without creating them.
  • Its dependencies() method can return depended-on binding specs.
  • It can have provider methods, for which explicit bindings are created.

The new_object_graph() function takes a sequence of binding spec instances as its binding_specs arg. new_object_graph() takes binding spec instances, rather than binding spec classes, so that you can manually inject any initial dependencies into the binding specs as needed.
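
For example, here is a sketch of manually supplying an initial dependency to a binding spec via its own initializer (the spec name and the db_url value are hypothetical):

>>> class DatabaseBindingSpec(pinject.BindingSpec):
...     def __init__(self, db_url):
...         self._db_url = db_url  # an initial dependency you supply manually
...     def configure(self, bind):
...         bind('db_url', to_instance=self._db_url)
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[DatabaseBindingSpec('postgres://localhost/mydb')])
>>>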

Binding specs should generally live in files named binding_specs.py, where each file is named in the plural even if there is exactly one binding spec in it. Ideally, a directory's worth of functionality should be coverable with a single binding spec. If not, you can create multiple binding specs in the same binding_specs.py file. If you have so many binding specs that you need to split them into multiple files, you should name them each with a _binding_specs.py suffix.

Binding spec configure() methods

Pinject creates implicit bindings for classes, but sometimes the implicit bindings aren't what you want. For instance, if you have SomeReallyLongClassName, you may not want to name your initializer args some_really_long_class_name but instead use something shorter like long_name, just for this class.

For such situations, you can create explicit bindings using the configure() method of a binding spec. The configure() method takes a function bind() as an arg and calls that function to create explicit bindings.

>>> class SomeClass(object):
...     def __init__(self, long_name):
...         self.long_name = long_name
...
>>> class SomeReallyLongClassName(object):
...     def __init__(self):
...         self.foo = 'foo'
...
>>> class MyBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('long_name', to_class=SomeReallyLongClassName)
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[MyBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.long_name.foo
'foo'
>>>

The bind() function passed to a binding spec's configure() method binds its first arg, which must be an arg name (as a str), to exactly one of two kinds of things.

  • Using to_class binds to a class. When the binding is used, Pinject injects an instance of the class.
  • Using to_instance binds to an instance of some object. Every time the binding is used, Pinject uses that instance.
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class MyBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='a-foo')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[MyBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.foo
'a-foo'
>>>

The configure() method of a binding spec may also take a function require() as an arg and use that function to require that a binding be present without actually defining that binding. require() takes as its arg the name of the arg for which to require a binding.

>>> class MainBindingSpec(pinject.BindingSpec):
...     def configure(self, require):
...         require('foo')
...
>>> class RealFooBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='a-real-foo')
...
>>> class StubFooBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='a-stub-foo')
...
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[MainBindingSpec(), RealFooBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.foo
'a-real-foo'
>>> # pinject.new_object_graph(
... #    binding_specs=[MainBindingSpec()])  # would raise a MissingRequiredBindingError
...
>>>

Being able to require a binding without defining the binding is useful when you know the code will need some dependency satisfied, but there is more than one implementation that satisfies that dependency, e.g., there may be a real RPC client and a fake RPC client. Declaring the dependency means that any expected but missing bindings will be detected early, when new_object_graph() is called, rather than in the middle of running your program.

You'll notice that the configure() methods above have different signatures, sometimes taking the arg bind and sometimes taking the arg require. configure() methods must take at least one arg that is either bind or require, and they may have both args. Pinject will pass whichever arg or args your configure() method needs.
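
For instance, a single configure() method can use both, as in this sketch (the arg names are illustrative):

>>> class CombinedBindingSpec(pinject.BindingSpec):
...     def configure(self, bind, require):
...         bind('foo', to_instance='a-foo')
...         require('bar')  # some other binding spec must bind 'bar'
...
>>>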

Binding spec dependencies

Binding specs can declare dependencies. A binding spec declares its dependencies by returning a sequence of instances of the dependent binding specs from its dependencies() method.

>>> class ClassOne(object):
...    def __init__(self, foo):
...        self.foo = foo
...
>>> class BindingSpecOne(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='foo-')
...
>>> class ClassTwo(object):
...     def __init__(self, class_one, bar):
...         self.foobar = class_one.foo + bar
...
>>> class BindingSpecTwo(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('bar', to_instance='-bar')
...     def dependencies(self):
...         return [BindingSpecOne()]
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[BindingSpecTwo()])
>>> class_two = obj_graph.provide(ClassTwo)
>>> print class_two.foobar
'foo--bar'
>>>

If classes from module A are injected as collaborators into classes from module B, then you should consider having the binding spec for module B depend on the binding spec for module A. In the example above, ClassOne is injected as a collaborator into ClassTwo, and so BindingSpecTwo (the binding spec for ClassTwo) depends on BindingSpecOne (the binding spec for ClassOne).

In this way, you can build a graph of binding spec dependencies that mirrors the graph of collaborator dependencies.

Since explicit bindings cannot conflict (see the section below on binding precedence), a binding spec should only have dependencies that there will never be a choice about using. If there may be a choice, then it is better to list the binding specs separately and explicitly when calling new_object_graph().

The binding spec dependencies can be a directed acyclic graph (DAG); that is, binding spec A can be a dependency of B and of C, and binding spec D can have dependencies on B and C. Even though there are multiple dependency paths from D to A, the bindings in binding spec A will only be evaluated once.
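
For instance, a diamond-shaped dependency graph might look like this sketch (the spec names and bound values are hypothetical):

>>> class BindingSpecA(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('shared', to_instance='shared-value')
...
>>> class BindingSpecB(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('b_value', to_instance='b')
...     def dependencies(self):
...         return [BindingSpecA()]
...
>>> class BindingSpecC(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('c_value', to_instance='c')
...     def dependencies(self):
...         return [BindingSpecA()]
...
>>> class BindingSpecD(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('d_value', to_instance='d')
...     def dependencies(self):
...         return [BindingSpecB(), BindingSpecC()]
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[BindingSpecD()])
>>>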

The binding spec instance of A that is a dependency of B is considered the same as the instance that is a dependency of C if the two instances are equal (via __eq__()). The default implementation of __eq__() in BindingSpec says that two binding specs are equal if they are of exactly the same python type. You will need to override __eq__() (as well as __hash__()) if your binding spec is parameterized, i.e., if it takes one or more initializer args so that two instances of the binding spec may behave differently.

>>> class SomeBindingSpec(pinject.BindingSpec):
...     def __init__(self, the_instance):
...         self._the_instance = the_instance
...     def configure(self, bind):
...         bind('foo', to_instance=self._the_instance)
...     def __eq__(self, other):
...         return (type(self) == type(other) and
...                 self._the_instance == other._the_instance)
...     def __hash__(self):
...         return hash(type(self)) ^ hash(self._the_instance)
...
>>>

Provider methods

If it takes more to instantiate a class than calling its initializer and injecting its initializer args, then you can write a provider method for it. Pinject calls provider methods to create the objects that it injects as the values of other args.

Pinject looks on binding specs for methods named like provider methods and then creates explicit bindings for them.

>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_foo(self):
...         return 'some-complex-foo'
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.foo
'some-complex-foo'
>>>

Pinject looks on binding specs for methods whose names start with provide_, and it assumes that the methods are providers for whatever the rest of their method names are. For instance, Pinject assumes that the method provide_foo_bar() is a provider method for the arg name foo_bar.

Pinject injects all args of provider methods that have no default when it calls the provider method.

>>> class SomeClass(object):
...     def __init__(self, foobar):
...         self.foobar = foobar
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_foobar(self, bar, hyphen='-'):
...         return 'foo' + hyphen + bar
...     def provide_bar(self):
...         return 'bar'
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.foobar
'foo-bar'
>>>

Binding precedence

Bindings have precedence: explicit bindings take precedence over implicit bindings.

  • Explicit bindings are the bindings that come from binding specs.
  • Implicit bindings are the bindings created for classes in the modules and classes args passed to new_object_graph().

Pinject will prefer an explicit to an implicit binding.

>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class Foo(object):
...     pass
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='foo-instance')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.foo
'foo-instance'
>>>

If you have two classes that map to the same arg name, their implicit bindings conflict, but Pinject will not complain unless you try to use that arg name. Pinject will, however, complain as soon as you try to create different (and thus conflicting) explicit bindings.
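
For example, this sketch of conflicting explicit bindings would be rejected when the object graph is created (the exact exception class is not shown here):

>>> class ConflictingBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='first-foo')
...         bind('foo', to_instance='second-foo')
...
>>> # pinject.new_object_graph(
... #     binding_specs=[ConflictingBindingSpec()])  # would raise an error for the conflicting 'foo' bindings
>>>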

Safety

Pinject tries to strike a balance between being helpful and being safe. Sometimes, you may want or need to change this balance.

new_object_graph() uses implicit bindings by default. If you worry that you may accidentally inject a class or use a provider function unintentionally, then you can make new_object_graph() ignore implicit bindings, by setting only_use_explicit_bindings=True. If you do so, then Pinject will only use explicit bindings.

If you want to promote an implicit binding to be an explicit binding, you can decorate the corresponding class's initializer with @inject(). The @inject() decorator lets you create explicit bindings without needing to create binding specs, as long as you can live with the binding defaults (e.g., no annotations on args; see below).

>>> class ExplicitlyBoundClass(object):
...     @pinject.inject()
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class ImplicitlyBoundClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='explicit-foo')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()],
...     only_use_explicit_bindings=True)
>>> # obj_graph.provide(ImplicitlyBoundClass)  # would raise a NonExplicitlyBoundClassError
>>> some_class = obj_graph.provide(ExplicitlyBoundClass)
>>> print some_class.foo
'explicit-foo'
>>>

You can also promote an implicit binding to explicit by using @annotate_arg() (see below), with or without @inject() as well.

(Previous versions of Pinject included an @injectable decorator. That is deprecated in favor of @inject(). Note that @inject() needs parens, whereas @injectable didn't.)

On the opposite side of permissiveness, Pinject by default will complain if a provider method returns None. If you really want to turn off this default behavior, you can pass allow_injecting_none=True to new_object_graph().
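
For instance, this sketch only works because the flag is set (the arg and provider names are illustrative):

>>> class UsesMaybeMissing(object):
...     def __init__(self, maybe_missing):
...         self.maybe_missing = maybe_missing
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_maybe_missing(self):
...         return None  # rejected unless allow_injecting_none=True
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()], allow_injecting_none=True)
>>> obj_graph.provide(UsesMaybeMissing).maybe_missing is None
True
>>>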

Annotations

Pinject annotations let you have different objects injected for the same arg name. For instance, two different parts of your codebase may use the same arg name but need different objects injected for it.

On the arg side, an annotation tells Pinject only to inject using a binding whose binding key includes the annotation object. You can use @annotate_arg() on an initializer, or on a provider method, to specify the annotation object.

On the binding side, an annotation changes the binding so that the key of the binding includes the annotation object. When using bind() in a binding spec's configure() method, you can pass an annotated_with arg to specify the annotation object.

>>> class SomeClass(object):
...     @pinject.annotate_arg('foo', 'annot')
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', annotated_with='annot', to_instance='foo-with-annot')
...         bind('foo', annotated_with=12345, to_instance='12345-foo')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.foo
'foo-with-annot'
>>>

Also on the binding side, when defining a provider method, you can use the @provides() decorator. The decorator lets you pass an annotated_with arg to specify the annotation object. The decorator's first param, arg_name, also lets you override the arg name for which the method is a provider. This is optional but useful if you want the same binding spec to have two provider methods for the same arg name but annotated differently. (Otherwise, the methods would need to be named the same, since they're for the same arg name.)

>>> class SomeClass(object):
...     @pinject.annotate_arg('foo', 'annot')
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.provides('foo', annotated_with='annot')
...     def provide_annot_foo(self):
...         return 'foo-with-annot'
...     @pinject.provides('foo', annotated_with=12345)
...     def provide_12345_foo(self):
...         return '12345-foo'
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.foo
'foo-with-annot'
>>>

When requiring a binding, via the require arg passed into the configure() method of a binding spec, you can pass the arg annotated_with to require an annotated binding.

>>> class MainBindingSpec(pinject.BindingSpec):
...     def configure(self, require):
...         require('foo', annotated_with='annot')
...
>>> class NonSatisfyingBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='an-unannotated-foo')
...
>>> class SatisfyingBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', annotated_with='annot', to_instance='an-annotated-foo')
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[MainBindingSpec(), SatisfyingBindingSpec()])  # works
>>> # obj_graph = pinject.new_object_graph(
... #     binding_specs=[MainBindingSpec(),
... #                    NonSatisfyingBindingSpec()])  # would raise a MissingRequiredBindingError
>>>

You can use any kind of object as an annotation object as long as it implements __eq__() and __hash__().
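
For example, here is a sketch that uses a small custom class as the annotation object instead of a string (the class name is illustrative; what matters is that the instances used on the arg side and the binding side compare and hash equal):

>>> class ForTesting(object):
...     def __eq__(self, other):
...         return type(self) == type(other)
...     def __hash__(self):
...         return hash(type(self))
...
>>> class SomeClass(object):
...     @pinject.annotate_arg('foo', ForTesting())
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', annotated_with=ForTesting(), to_instance='foo-for-testing')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> obj_graph.provide(SomeClass).foo
'foo-for-testing'
>>>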

Scopes

By default, Pinject remembers the object it injected into a (possibly annotated) arg, so that it can inject the same object into other args with the same name. This means that, for each arg name, a single instance of the bound-to class, or a single instance returned by a provider method, is created by default.

>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_foo(self):
...         return object()
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class_1 = obj_graph.provide(SomeClass)
>>> some_class_2 = obj_graph.provide(SomeClass)
>>> print some_class_1.foo is some_class_2.foo
True
>>>

In some cases, you may want to create new instances, always or sometimes, instead of reusing them each time they're injected. If so, you want to use scopes.

A scope controls memoization (i.e., caching). A scope can choose to cache never, sometimes, or always.

Pinject has two built-in scopes. Singleton scope (SINGLETON) is the default and always caches. Prototype scope (PROTOTYPE) is the other built-in option and does no caching whatsoever.

Every binding is associated with a scope. You can specify a scope for a binding by decorating a provider method with @in_scope(), or by passing an in_scope arg to bind() in a binding spec's configure() method.

>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.provides(in_scope=pinject.PROTOTYPE)
...     def provide_foo(self):
...         return object()
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class_1 = obj_graph.provide(SomeClass)
>>> some_class_2 = obj_graph.provide(SomeClass)
>>> print some_class_1.foo is some_class_2.foo
False
>>>

If a binding specifies no scope explicitly, then it is in singleton scope. Implicit class bindings are always in singleton scope.

Memoization of class bindings works at the class level, not at the binding key level. This means that, if you bind two arg names (or the same arg name with two different annotations) to the same class, and the class is in a memoizing scope, then the same class instance will be provided when you inject the different arg names.

>>> class InjectedClass(object):
...     pass
...
>>> class SomeObject(object):
...     def __init__(self, foo, bar):
...         self.foo = foo
...         self.bar = bar
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_class=InjectedClass)
...         bind('bar', to_class=InjectedClass)
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()])
>>> some_object = obj_graph.provide(SomeObject)
>>> print some_object.foo is some_object.bar
True
>>>

Pinject memoizes class bindings this way because this is more likely to be what you mean if you bind two different arg names to the same class in singleton scope: you want only one instance of the class, even though it may be injected in multiple places.

Provider bindings

Sometimes, you need to inject not just a single instance of some class, but rather the ability to create instances on demand. (Clearly, this is most useful when the binding you're using is not in singleton scope; otherwise you'll always get the same instance, and you may as well just inject that directly.)

You could inject the Pinject object graph, but you'd have to do that dependency injection manually (Pinject doesn't inject itself!), and you'd be injecting a huge set of capabilities when your class really only needs to instantiate objects of one type.

To solve this, Pinject creates provider bindings for each bound arg name. It will look at the arg name for the prefix provide_, and if it finds that prefix, it assumes you want to inject a provider function for whatever the rest of the arg name is. For instance, if you have an arg named provide_foo_bar, then Pinject will inject a zero-arg function that, when called, provides whatever the arg name foo_bar is bound to.

>>> class Foo(object):
...   def __init__(self):
...     self.forty_two = 42
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_class=Foo, in_scope=pinject.PROTOTYPE)
...
>>> class NeedsProvider(object):
...     def __init__(self, provide_foo):
...         self.provide_foo = provide_foo
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> needs_provider = obj_graph.provide(NeedsProvider)
>>> print needs_provider.provide_foo() is needs_provider.provide_foo()
False
>>> print needs_provider.provide_foo().forty_two
42
>>>

Pinject will always look for the provide_ prefix as a signal to inject a provider function, anywhere it injects dependencies (initializer args, binding spec provider methods, etc.). This does mean that it's quite difficult, say, to inject an instance of a class named ProvideFooBar into an arg named provide_foo_bar, but assuming you're naming your classes as noun phrases instead of verb phrases, this shouldn't be a problem.

Watch out: don't confuse

  • provider bindings, which let you inject args named provide_something with provider functions; and
  • provider methods, which are methods of binding specs that provide instances of some arg name.

Partial injection

Provider bindings are useful when you want to create instances of a class on demand. But a zero-arg provider function will always return an instance configured the same way (within a given scope). Sometimes, you want the ability to parameterize the provided instances, e.g., based on run-time user configuration: part of the initialization data is provided per-instance at run-time, and part of it is injected as dependencies.

To do this, other dependency injection libraries have you define factory classes. You inject dependencies into the factory class's initializer function, and then you call the factory class's creation method with the per-instance data.

>>> class WidgetFactory(object):
...     def __init__(self, widget_polisher):
...         self._widget_polisher = widget_polisher
...     def new(self, color):  # normally would contain some non-trivial code...
...         return some_function_of(self._widget_polisher, color)
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_something_with_colored_widgets(self, colors, widget_factory):
...         return SomethingWithColoredWidgets(
...             [widget_factory.new(color) for color in colors])
...
>>>

You can follow this pattern in Pinject, but it involves boring boilerplate for the factory class, saving away the initializer-injected dependencies to be used in the creation method. Plus, you have to create yet another ...Factory class, which makes you feel like you're programming in Java, not Python.

As a less repetitive alternative, Pinject lets you use partial injection on the provider functions returned by provider bindings. You use the @inject() decorator to tell Pinject ahead of time which args you expect to pass directly (vs. automatic injection), and then you pass those args directly when calling the provider function.

>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.inject(['widget_polisher'])
...     def provide_widget(self, color, widget_polisher):
...         return some_function_of(widget_polisher, color)
...     def provide_something_needing_widgets(self, colors, provide_widget):
...         return SomethingNeedingWidgets(
...             [provide_widget(color) for color in colors])
...
>>>

The first arg to @inject(), arg_names, specifies which args of the decorated method should be injected as dependencies. If specified, it must be a non-empty sequence of names of the decorated method's args. The remaining decorated method args will be passed directly.

In the example above, note that, although there is a method called provide_widget() and an arg of provide_something_needing_widgets() called provide_widget, these are not exactly the same! The latter is a dependency-injected wrapper around the former. The wrapper ensures that the color arg is passed directly and then injects the widget_polisher dependency.

The @inject() decorator works to specify args passed directly both for provider bindings to provider methods (as in the example above) and for provider bindings to classes (where you can pass args directly to the initializer, as in the example below).

>>> class Widget(object):
...     @pinject.inject(['widget_polisher'])
...     def __init__(self, color, widget_polisher):
...         pass  # normally something involving color and widget_polisher
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_something_needing_widgets(self, colors, provide_widget):
...         return SomethingNeedingWidgets(
...             [provide_widget(color) for color in colors])
...
>>>

The @inject() decorator also takes an all_except arg. You can use this, instead of the (first positional) arg_names arg, if it's clearer and more concise to say which args are not injected (i.e., which args are passed directly).

>>> class Widget(object):
...     # equivalent to @pinject.inject(['widget_polisher']):
...     @pinject.inject(all_except=['color'])
...     def __init__(self, color, widget_polisher):
...         pass  # normally something involving color and widget_polisher
...
>>>

If both arg_names and all_except are omitted, then all args are injected by Pinject, and none are passed directly. (Both arg_names and all_except may not be specified at the same time.) Wildcard positional and keyword args (i.e., *pargs and **kwargs) are always passed directly, not injected.

If you use @inject() to mark at least one arg of a provider method (or initializer) as passed directly, then you may no longer directly inject that provider method's corresponding arg name. You must instead use a provider binding to inject a provider function, and then pass the required direct arg(s), as in the examples above.

Custom scopes

If you want to, you can create your own custom scope. A custom scope is useful when you have some objects that need to be reused (i.e., cached) but whose lifetime is shorter than the entire lifetime of your program.

A custom scope is any class that implements the Scope interface.

class Scope(object):
    def provide(self, binding_key, default_provider_fn):
        raise NotImplementedError()

The binding_key passed to provide() will be an object implementing __eq__() and __hash__() but otherwise opaque (you shouldn't need to introspect it). You can think of the binding key roughly as encapsulating the arg name and annotation (if any). The default_provider_fn passed to provide() is a zero-arg function that, when called, provides an instance of whatever should be provided.

The job of a scope's provide() function is to return a cached object if available and appropriate, otherwise to return (and possibly cache) the result of calling the default provider function.

Scopes almost always have other methods that control clearing the scope's cache. For instance, a scope may have "enter scope" and "exit scope" methods, or a single direct "clear cache" method. When passing a custom scope to Pinject, your code should keep a handle to the custom scope and use that handle to clear the scope's cache at the appropriate time.

You can use one or more custom scopes by passing a map from scope identifier to scope as the id_to_scope arg of new_object_graph().

>>> class MyScope(pinject.Scope):
...     def __init__(self):
...         self._cache = {}
...     def provide(self, binding_key, default_provider_fn):
...         if binding_key not in self._cache:
...             self._cache[binding_key] = default_provider_fn()
...         return self._cache[binding_key]
...     def clear(self):
...         self._cache = {}
...
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.provides(in_scope='my custom scope')
...     def provide_foo(self):
...         return object()
...
>>> my_scope = MyScope()
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()],
...     id_to_scope={'my custom scope': my_scope})
>>> some_class_1 = obj_graph.provide(SomeClass)
>>> some_class_2 = obj_graph.provide(SomeClass)
>>> my_scope.clear()
>>> some_class_3 = obj_graph.provide(SomeClass)
>>> print some_class_1.foo is some_class_2.foo
True
>>> print some_class_2.foo is some_class_3.foo
False
>>>

A scope identifier can be any object implementing __eq__() and __hash__().

If you plan to use Pinject in a multi-threaded environment (and even if you don't plan to now but may some day), you should make your custom scope thread-safe. The example custom scope above could be trivially (but more verbosely) rewritten to be thread-safe, as in the example below. The lock is reentrant so that something in MyScope can be injected into something else in MyScope.

>>> import threading
>>> class MyScope(pinject.Scope):
...     def __init__(self):
...         self._cache = {}
...         self._rlock = threading.RLock()
...     def provide(self, binding_key, default_provider_fn):
...         with self._rlock:
...             if binding_key not in self._cache:
...                 self._cache[binding_key] = default_provider_fn()
...             return self._cache[binding_key]
...     def clear(self):
...         with self._rlock:
...             self._cache = {}
...
>>>

Scope accessibility

To prevent yourself from injecting objects where they don't belong, you may want to validate one object being injected into another w.r.t. scope.

For instance, you may have created a custom scope for HTTP requests handled by your program. Objects in request scope would be cached for the duration of a single HTTP request. You may want to verify that objects in request scope never get injected into objects in singleton scope. Such an injection is likely not to make semantic sense, since it would make something tied to one HTTP request be used for the duration of your program.

Pinject lets you pass a validation function as the is_scope_usable_from_scope arg to new_object_graph(). This function takes two scope identifiers and returns True iff an object in the first scope can be injected into an object of the second scope.

>>> class RequestScope(pinject.Scope):
...     def start_request(self):
...         self._cache = {}
...     def provide(self, binding_key, default_provider_fn):
...         if binding_key not in self._cache:
...             self._cache[binding_key] = default_provider_fn()
...         return self._cache[binding_key]
...
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.provides(in_scope=pinject.SINGLETON)
...     def provide_foo(self, bar):
...         return 'foo-' + bar
...     @pinject.provides(in_scope='request scope')
...     def provide_bar(self):
...         return '-bar'
...
>>> def is_usable(scope_id_inner, scope_id_outer):
...     return not (scope_id_inner == 'request scope' and
...                 scope_id_outer == pinject.SINGLETON)
...
>>> my_request_scope = RequestScope()
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()],
...     id_to_scope={'request scope': my_request_scope},
...     is_scope_usable_from_scope=is_usable)
>>> my_request_scope.start_request()
>>> # obj_graph.provide(SomeClass)  # would raise a BadDependencyScopeError
>>>

The default scope accessibility validator allows objects from any scope to be injected into objects from any other scope.

Changing naming conventions

If your code follows PEP8 naming conventions, then you're likely happy with the default implicit bindings (where the class FooBar gets bound to the arg name foo_bar) and with the default provider method naming (where provide_foo_bar() is a binding spec's provider method for the arg name foo_bar).

But if not, read on!

Customizing implicit bindings

new_object_graph() takes a get_arg_names_from_class_name arg. This is the function that is used to determine implicit class bindings. This function takes in a class name (e.g., FooBar) and returns the arg names to which that class should be implicitly bound (e.g., ['foo_bar']). Its default behavior is described in the "implicit class bindings" section above, but that default behavior can be overridden.

For instance, suppose that your code uses a library that names many classes with the leading letter X (e.g., XFooBar), and you'd like to be able to bind that to a corresponding arg name without the leading X (e.g., foo_bar).

>>> import re
>>> def custom_get_arg_names(class_name):
...     stripped_class_name = re.sub('^_?X?', '', class_name)
...     return [re.sub('(?!^)([A-Z]+)', r'_\1', stripped_class_name).lower()]
...
>>> print custom_get_arg_names('XFooBar')
['foo_bar']
>>> print custom_get_arg_names('XLibraryClass')
['library_class']
>>> class OuterClass(object):
...     def __init__(self, library_class):
...         self.library_class = library_class
...
>>> class XLibraryClass(object):
...     def __init__(self):
...         self.forty_two = 42
...
>>> obj_graph = pinject.new_object_graph(
...     get_arg_names_from_class_name=custom_get_arg_names)
>>> outer_class = obj_graph.provide(OuterClass)
>>> print outer_class.library_class.forty_two
42
>>>

The function passed as the get_arg_names_from_class_name arg to new_object_graph() can return as many or as few arg names as it wants. If it always returns the empty list (i.e., if it is lambda _: []), then that disables implicit class bindings.

Customizing binding spec method names

The standard binding spec methods to configure bindings and declare dependencies are named configure and dependencies, by default. If you need to, you can change their names by passing configure_method_name and/or dependencies_method_name as args to new_object_graph().

>>> class NonStandardBindingSpec(pinject.BindingSpec):
...     def Configure(self, bind):
...         bind('forty_two', to_instance=42)
...
>>> class SomeClass(object):
...     def __init__(self, forty_two):
...         self.forty_two = forty_two
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[NonStandardBindingSpec()],
...     configure_method_name='Configure')
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.forty_two
42
>>>

Customizing provider method names

new_object_graph() takes a get_arg_names_from_provider_fn_name arg. This is the function that is used to identify provider methods on binding specs. This function takes in the name of a potential provider method (e.g., provide_foo_bar) and returns the arg names for which the provider method is a provider, if any (e.g., ['foo_bar']). Its default behavior is described in the "provider methods" section above, but that default behavior can be overridden.

For instance, suppose that you work for a certain large corporation whose Python style guide makes you name functions in CamelCase, and so you need to name the provider method for the arg name foo_bar ProvideFooBar rather than provide_foo_bar.

>>> import re
>>> def CustomGetArgNames(provider_fn_name):
...     if provider_fn_name.startswith('Provide'):
...         provided_camelcase = provider_fn_name[len('Provide'):]
...         return [re.sub('(?!^)([A-Z]+)', r'_\1', provided_camelcase).lower()]
...     else:
...         return []
...
>>> print CustomGetArgNames('ProvideFooBar')
['foo_bar']
>>> print CustomGetArgNames('ProvideFoo')
['foo']
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def ProvideFoo(self):
...         return 'some-foo'
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()],
...     get_arg_names_from_provider_fn_name=CustomGetArgNames)
>>> some_class = obj_graph.provide(SomeClass)
>>> print some_class.foo
'some-foo'
>>>

The function passed as the get_arg_names_from_provider_fn_name arg to new_object_graph() can return as many or as few arg names as it wants. If it returns an empty list, then that potential provider method is assumed not actually to be a provider method.

Miscellaneous

Pinject raises helpful exceptions whose messages include the file and line number of errors. By default, Pinject also shortens the stack traces of exceptions that it raises, so that you don't see the many levels of function calls within the Pinject library.

In some situations, though, the complete stack trace is helpful, e.g., when debugging Pinject, or when your code calls Pinject, which calls back into your code, which calls back into Pinject. In such cases, to disable exception stack shortening, you can pass use_short_stack_traces=False to new_object_graph().
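
For example, to get full stack traces:

>>> obj_graph = pinject.new_object_graph(use_short_stack_traces=False)
>>>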

Gotchas

Pinject has a few things to watch out for.

Thread safety

Pinject's default scope is SINGLETON. If you have a multi-threaded program, it's likely that some or all of the things that Pinject provides from singleton scope will be used in multiple threads. So, it's important that you ensure that such classes are thread-safe.

Similarly, it's important that your custom scope classes are thread-safe. Even if the objects they provide are only used in a single thread, it may be that the object graph (and therefore the scope itself) will be used simultaneously in multiple threads.

Remember to make locks re-entrant on your custom scope classes, or otherwise deal with one object in your custom scope trying to inject another object in your custom scope.

That's it for gotchas, for now.

Condensed summary

If you are already familiar with dependency injection libraries such as Guice, this section gives you a condensed high-level summary of Pinject and how it might be similar to or different from other dependency injection libraries. (If you don't understand it, no problem. The rest of the documentation covers everything listed here.)

  • Pinject uses code and decorators to configure injection, not a separate config file.
  • Bindings are keyed by arg name (not class type, since Python is dynamically typed).
  • Pinject automatically creates bindings to some_class arg names for SomeClass classes.
  • You can ask Pinject only to create bindings from binding specs and classes whose __init__() is marked with @inject().
  • A binding spec is a class that creates explicit bindings.
  • A binding spec can bind arg names to classes or to instances.
  • A binding spec can bind arg names foo to provider methods provide_foo().
  • Binding specs can depend on (i.e., include) other binding specs.
  • You can annotate args and bindings to distinguish among args/bindings for the same arg name.
  • Pinject has two built-in scopes: "singleton" (always memoized; the default) and "prototype" (never memoized).
  • You can define custom scopes, and you can configure which scopes are accessible from which other scopes.
  • Pinject doesn't allow injecting None by default, but you can turn off that check.

Changelog

v0.15: master

  • Enable GitHub Actions
  • CI/CD DevOps for publishing to PyPI automatically
  • A version whose minor number is odd is published as a prerelease, with dev added to the patch version (e.g., 0.15.0 is published as 0.15.dev0 because the minor number 15 is odd).
  • Remove Python version 3.3 & 3.4 from CI/CD #50

v0.12: 28 Nov, 2018

  • Support Python 3
  • Add two maintainers: @trein and @huan

v0.10.2:

  • Fixed bug: allows binding specs containing only provider methods.

v0.10.1:

  • Fixed bug: allows omitting custom named configure() binding spec method.

v0.10:

  • Added default __eq__() to BindingSpec, so that DAG binding spec dependencies can have equal but not identical dependencies.
  • Allowed customizing configure() and dependencies() binding spec method names.
  • Deprecated @injectable in favor of @inject.
  • Added partial injection.
  • Added require arg to allow binding spec configure methods to declare but not define bindings.
  • Sped up tests (and probably general functionality) by 10x.
  • Documented more design decisions.
  • Added @copy_args_to_internal_fields and @copy_args_to_public_fields.
  • Renamed InjectableDecoratorAppliedToNonInitError to DecoratorAppliedToNonInitError.

v0.9:

  • Added validation of python types of public args.
  • Improved error messages for all Pinject-raised exceptions.
  • Added use_short_stack_traces arg to new_object_graph().
  • Allowed multiple @provides on single provider method.

v0.8:

  • First released version.

Author

Kurt Steinkraus @kurt

Maintainers

License

Apache-2.0

Pinject and Google

Though Google owns this project's copyright, this project is not an official Google product.

Contributors

cclauss, davidcim, dthkao, huan, kurtsteinkraus, rockobonaparte, saraedum, trein


pinject's Issues

Remove Python v3.3 & v3.4 from CI/CD

Because it seems that both versions are deprecated.

GitHub Actions link

##[error]Version 3.3 with arch x64 not found
##[error]Version 3.4 with arch x64 not found
Available versions:

2.7.17 (x64)
3.5.9 (x64)
3.6.10 (x64)
3.7.6 (x64)
3.8.2 (x64)

Travis CI link

Downloading archive: https://storage.googleapis.com/travis-ci-language-archives/python/binaries/ubuntu/16.04/x86_64/python-3.3.tar.bz2
0.11s$ curl -sSf --retry 5 -o python-3.3.tar.bz2 ${archive_url}
curl: (22) The requested URL returned error: 404 Not Found
Unable to download 3.3 archive. The archive may not exist. Please consider a different version.

Support for >= 3.7

We're in python 3.8.1 today.
Could you please add automated builds for 3.7, 3.8 python versions as well?
I bet they will pass ;)
Thanks.

Pinject has difficulty to instantiate an instance for a subclass based on an abstract class

For example:

import pinject
import abc

class AbstrctClass(object):
    __metaclass__ = abc.ABCMeta

class SomeClass(AbstrctClass):
    pass

obj_graph = pinject.new_object_graph(modules=None, classes=[AbstrctClass])

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/google/src/cloud/daming/fix/google3/third_party/py/pinject/object_graph.py", line 159, in new_object_graph
    raise e
pinject.errors.WrongArgElementTypeError: wrong type for element 0 of arg classes: expected type but got ABCMeta

Nested dependency resolving issue

Hi there
I'm having difficulty using pinject when I'm trying to resolve a dependency which itself has nested dependencies.

import pinject

class User:

    def __init__(self, user_id=None, username=None):
        self.user_id = user_id
        self.username = username

    def get_user_id(self):
        return self.user_id

    def get_username(self):
        return self.username

    def __str__(self):
        return str(self.__dict__)

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

class QueueUserRepository(BaseUserRepository):

    def __init__(self, data_provider):
        self.data_provider = data_provider

    def get_user(self):
        user_data = self.data_provider.get_data('user')
        return User(user_data['user_id'], user_data['username'])

    def get_users_batch(self):
        return UserBatch(self.data_provider.get_data('users'))



class QueueDataProvider:
    class DataDoesNotExistsError(Exception):
        pass

    def __init__(self, provider_data):
        print(provider_data)        
        self.data = self.normalize_data(provider_data)

    @staticmethod
    def normalize_data(data):
        return data

    def get_data(self, key):
        try:
            return self.data[key]
        except KeyError:
            raise self.DataDoesNotExistsError('{} key does not exists'.format(key))

class MyBindingSpec(pinject.BindingSpec):
    def configure(self, bind):
        bind('provider_data', to_instance={'user': {'username': 'super', 'user_id': 'y'}})
        bind('data_provider', to_instance=QueueDataProvider)

class TestQueueUserRepository(unittest.TestCase):
    def setUp(self):
        self.container = pinject.new_object_graph(binding_specs=[MyBindingSpec()])
        
    def test_if_can_get_user(self):
        z = self.container.provide(QueueUserRepository)
        u = User('y', 'super')
        self.assertEqual(z.get_user(), u)

It seems pinject does not care about 'dependency of dependencies' as QueueDataProvider constructor does not seem to be called at all

The above code does not work; however, when I manually build the objects, it works perfectly fine.

As I searched a lot in the documentation, I could not find any proper explanation with this.

Could you please help me with this.

EDIT:
I really tried so hard but I could not get code highlight support in this ...

UPDATE:
Unit tests are added.
To be more clear: QueueDataProvider is not receiving 'provider_data' on creation. As far as I checked, QueueDataProvider's constructor is not called at all.

Unable to import pinject

Hello,
I decided to give pinject a try, but unfortunately it does not import. Here's what I do: install pinject with pip install pinject and then interactively import pinject.

It seems to fail on line 156 of third_party/decorator.py. I'm on Python 3.4, so maybe that's the reason, although looking at the code I can see blocks dealing with Python >3.2.

Is that a bug?

Support for arbitrary callables

What is the reasoning for only allowing classes for injection and not allowing all callables?

Most of my providers/factories are simple callables / functions. Adding an unnecessary self parameter does not feel right to me.

An example of what I am trying to do:

def factory_1(some_value): return some_value + 2
def some_value(a: int): return a * 2

data = {"a": 1}

class MySpec(pinject.BindingSpec):
    def configure(self, bind):
        for k, v in data.items(): bind(k, to_instance=v)

graph = pinject.new_object_graph(binding_specs=[MySpec()])

print(graph.provide(factory_1))

I would expect the output to be 4

Some things I tried:

setattr(Spec, "provide_some_value", pinject.provide("some_value")(some_value))

For this to work I need to change some_value to

def some_value(self, a: int): ...

But when I want users to provide their custom providers, I don't want to force them to add this unneeded self.

I hope I've made it clear what I want to achieve. An inspiration is pytest fixtures.

Inject dependency into decorators

Hello,

How can I use Pinject to inject dependencies into decorators?

For example, say I have a decorator like:

# foo.py
def cache_decorator(cache_client):
    def decorator(func):
        def wrapper(*args, **kwargs):
            cache_client.read()
            result = func(*args, **kwargs)
            cache_client.write()
            return result
        return wrapper
    return decorator

class A:
  @cache_decorator(dependency_I_want_to_inject)
  def foo(bar):
    bar.baz()

The goal is to swap the cache_client with a mock for unit testing.

Is something like this possible with Pinject?

Basic example not working

Hi,

I'm really struggling with pinject, even the basic example isn't working for me.
I'm on Windows 10 with python 3.7.3, pinject 0.12.6

My first try resulted in an exception:
ModuleNotFoundError: No module named '_gdbm'

So I applied a workaround I found in issue #22

Now this simple code:

class OuterClass(object):
    def __init__(self, inner_class):
        self.inner_class = inner_class

class InnerClass(object):
    def __init__(self):
        self.forty_two = 42

# somewhere else in my code:
obj_graph = pinject.new_object_graph(
    modules=[core, api]  # workaround
)
outer_class = obj_graph.provide(OuterClass)

Raises this exception:
{NothingInjectableForArgError} when injecting OuterClass.__init__ at C:\Users\me\PycharmProjects\MyProject\api\views.py:32, nothing injectable for the binding name "inner_class" (unannotated)

I even tried to add this line before requesting the outer_class:

inner_class = obj_graph.provide(InnerClass)

It knows how to create it! pinject should now inject that instance to create the outer_class, but it doesn't.

Any idea?

Can't inject decorated classes, getting pinject.errors.WrongArgTypeError: wrong type for arg cls: expected class but got <decorator>

I have a simple class with a no-op decorator defined and I'm trying to use pinject to instantiate it, but haven't got this to work yet.

Python 3.8
pinject==0.14.1

import pinject

class Controller:
    def __init__(self, original_instance):
        self.original_instance = original_instance

@Controller
class UsersController:
    pass


obj_graph = pinject.new_object_graph()
users = obj_graph.provide(UsersController)

Trying to run this, I get the following error:

example_python-3XnK4UKl\Scripts\python.exe "C:\Program Files\JetBrains\PyCharm Community Edition 2019.2.3\helpers\pydev\pydevd.py" --multiproc --qt-support=auto --client 127.0.0.1 --port 56103 --file Main.py
pydev debugger: process 33564 is connecting

Connected to pydev debugger (build 192.6817.19)
Traceback (most recent call last):
  File "example_python/Main.py", line 29, in <module>
    users = obj_graph.provide(UsersController)
  File "example_python-3XnK4UKl\lib\site-packages\pinject\object_graph.py", line 193, in provide
    support.verify_class_type(cls, 'cls')
  File "example_python-3XnK4UKl\lib\site-packages\pinject\support.py", line 85, in verify_class_type
    _verify_type(inspect.isclass, elt, arg_name, 'class')
  File "example_python-3XnK4UKl\lib\site-packages\pinject\support.py", line 104, in _verify_type
    raise errors.WrongArgTypeError(
pinject.errors.WrongArgTypeError: wrong type for arg cls: expected class but got Controller

Process finished with exit code 1
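
The error is raised because @Controller replaces UsersController with a Controller instance, and provide() insists on a real class (inspect.isclass). A hedged workaround sketch: have the decorator return a class instead of an instance; controller() below is a hypothetical no-op variant, not pinject API.

import pinject

def controller(cls):
    # Return a trivial subclass so the decorated name is still a class.
    class Wrapped(cls):
        pass
    Wrapped.__name__ = cls.__name__
    Wrapped.__qualname__ = getattr(cls, '__qualname__', cls.__name__)
    return Wrapped

@controller
class UsersController(object):
    pass

obj_graph = pinject.new_object_graph()
users = obj_graph.provide(UsersController)  # UsersController is still a class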

ImportError: No module named decorator

I noticed this in the most-recent PyPI releases (0.12.2, 0.12.6):

Win10 with 64-bit Python 2.7.15, pip 10.0.1, pinject 0.12.6:

C:\temp\20181129>pip download pinject
Collecting pinject
  Downloading https://files.pythonhosted.org/packages/3c/85/1a422b22b0e7d6f4b017597d78fa6ebdfd401e668d8a92f06165f5e43f48/pinject-0.12.6.tar.gz (59kB)
    100% |################################| 61kB 124kB/s
  Saved c:\temp\20181129\pinject-0.12.6.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "c:\users\SCRUBBED\appdata\local\temp\pip-download-xvd2h3\pinject\setup.py", line 19, in <module>
        from pinject import (
      File "pinject\__init__.py", line 28, in <module>
        from .bindings import BindingSpec
      File "pinject\bindings.py", line 22, in <module>
        from . import decorators
      File "pinject\decorators.py", line 17, in <module>
        import decorator
    ImportError: No module named decorator

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in c:\users\SCRUBBED\appdata\local\temp\pip-download-xvd2h3\pinject\

The same line fails with the same error on Ubuntu 16 using 64-bit Python 2.7.15rc1, pip version 9.0.1, trying to download 0.12.2.

I download the module to convert it to a distribution-specific package for internal distribution. The pip download command itself is failing.

Version 0.10.2 downloads fine and I've updated my software's setup.py to be bound to it instead of trying to acquire the newer versions.

different calls to provide() in a singleton scope do not return a singleton

This test should not fail, i.e., a and b.a should be the same instance in a singleton scope:

    def test_graph_creation_with_binding_to_instance(self):
        class A:
            def do(self):
                return 1

        class B:
            def __init__(self, a):
                self.a = a

        class Binding(BindingSpec):
            def configure(self, bind):
                bind('a', to_instance=A())

        graph = new_object_graph(classes=[A, B], binding_specs=[Binding()])
        a = graph.provide(A)
        self.assertEqual(1, a.do())

        b = graph.provide(B)
        self.assertEqual(1, b.a.do())

        self.assertIs(a, b.a)

The last assertion fails:

AssertionError: <test_injection.Test.test_graph_creation_with_binding.<locals>.A object at 0x10422f310> is not <test_injection.Test.test_graph_creation_with_binding.<locals>.A object at 0x1043fa2d0>
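
A hedged observation, not a confirmation of intended behavior: graph.provide(A) appears to construct A via its class binding, while bind('a', to_instance=A()) creates a separate binding for the name 'a'. One workaround sketch is to request 'a' through injection everywhere, so only the explicit binding is consulted; Holder is a hypothetical helper reusing A, B, and Binding from the test above.

class Holder:
    def __init__(self, a, b):
        self.a = a
        self.b = b

graph = new_object_graph(classes=[A, B], binding_specs=[Binding()])
holder = graph.provide(Holder)
assert holder.a is holder.b.a  # both should come from the same explicit 'a' binding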

Efficient Testing

I used pinject to create object graphs in my automated testing. This works really well, except that the time required for new_object_graph() is rapidly adding up. The problem is that pinject is spending a lot of time gathering information about the available classes and what-not, but this information isn't changing between invocations of the function. It would be nice if I could construct a fresh object graph, but keep the static information.

For the moment, I've accomplished this with a rather nasty bit of monkey patching in pinject.scoping.get_id_to_scope_with_defaults, replacing the SINGLETON scope with my own that I reset before each test. It works (I think), but it'd be nice if there were a better approach.
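
A hedged sketch of one alternative, assuming an explicit (empty) modules list skips the expensive sys.modules scan: gather the injectable classes once and hand them to each fresh graph. Repo and Service are hypothetical application classes.

import pinject

class Repo(object):
    pass

class Service(object):
    def __init__(self, repo):
        self.repo = repo

STATIC_CLASSES = [Repo, Service]  # gathered once, e.g. at import time

def fresh_graph():
    # Assumption: modules=[] avoids scanning sys.modules entirely.
    return pinject.new_object_graph(modules=[], classes=STATIC_CLASSES)

def test_service():
    service = fresh_graph().provide(Service)
    assert isinstance(service.repo, Repo)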

Unable to run the basic dependency injection example mentioned in documentation.

Issue Description:

Unable to run the example mentioned in the documentation (https://github.com/google/pinject) of pinject. Getting ModuleNotFoundError: No module named '_gdbm'.

I've installed python-gdbm through conda, but I still see the ModuleNotFoundError. Could you please help me understand what the issue could be?

conda install -c anaconda python-gdbm

Code:

import pinject

class OuterClass(object):
    def __init__(self, inner_class):
        self.inner_class = inner_class

class InnerClass(object):
    def __init__(self):
        self.forty_two = 42

obj_graph = pinject.new_object_graph()
outer_class = obj_graph.provide(OuterClass)
print(outer_class.inner_class.forty_two)

Error Log Trace:

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-4-6f1689830d13> in <module>()
      9          self.forty_two = 42
     10 
---> 11 obj_graph = pinject.new_object_graph()
     12 outer_class = obj_graph.provide(OuterClass)
     13 # print(outer_class.inner_class.forty_two)

~/anaconda3/lib/python3.6/site-packages/pinject/object_graph.py in new_object_graph(modules, classes, binding_specs, only_use_explicit_bindings, allow_injecting_none, configure_method_name, dependencies_method_name, get_arg_names_from_class_name, get_arg_names_from_provider_fn_name, id_to_scope, is_scope_usable_from_scope, use_short_stack_traces)
     98         known_scope_ids = id_to_scope.keys()
     99 
--> 100         found_classes = finding.find_classes(modules, classes)
    101         if only_use_explicit_bindings:
    102             implicit_class_bindings = []

~/anaconda3/lib/python3.6/site-packages/pinject/finding.py in find_classes(modules, classes)
     30         # TODO(kurts): how is a module getting to be None??
     31         if module is not None:
---> 32             all_classes |= _find_classes_in_module(module)
     33     return all_classes
     34 

~/anaconda3/lib/python3.6/site-packages/pinject/finding.py in _find_classes_in_module(module)
     44 def _find_classes_in_module(module):
     45     classes = set()
---> 46     for member_name, member in inspect.getmembers(module):
     47         if inspect.isclass(member) and not member_name == '__class__':
     48             classes.add(member)

~/anaconda3/lib/python3.6/inspect.py in getmembers(object, predicate)
    340         # looking in the __dict__.
    341         try:
--> 342             value = getattr(object, key)
    343             # handle the duplicate key
    344             if key in processed:

~/anaconda3/lib/python3.6/site-packages/six.py in __get__(self, obj, tp)
     90 
     91     def __get__(self, obj, tp):
---> 92         result = self._resolve()
     93         setattr(obj, self.name, result)  # Invokes __set__.
     94         try:

~/anaconda3/lib/python3.6/site-packages/six.py in _resolve(self)
    113 
    114     def _resolve(self):
--> 115         return _import_module(self.mod)
    116 
    117     def __getattr__(self, attr):

~/anaconda3/lib/python3.6/site-packages/six.py in _import_module(name)
     80 def _import_module(name):
     81     """Import module, returning the module after the last dot."""
---> 82     __import__(name)
     83     return sys.modules[name]
     84 

~/anaconda3/lib/python3.6/dbm/gnu.py in <module>()
      1 """Provide the _gdbm module as a dbm submodule."""
      2 
----> 3 from _gdbm import *

ModuleNotFoundError: No module named '_gdbm'
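
A hedged workaround sketch for this kind of scan failure: pass the classes explicitly and an empty modules list, so pinject never introspects six's lazy _gdbm import (reusing OuterClass and InnerClass from the snippet above).

import pinject

obj_graph = pinject.new_object_graph(modules=[], classes=[OuterClass, InnerClass])
outer_class = obj_graph.provide(OuterClass)
print(outer_class.inner_class.forty_two)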

pinject failed to assign value to functions with default value

I can't figure out a way to assign values to a class whose __init__ has default values.
Example below.
I expect the output to be 'bbb' instead of 'B'.
Am I missing something? Please help.

import pinject

class A(object):
    def __init__(self, a, b='B'):
        self.a=a
        self.b=b

class B(object):
    def __init__(self, obja):
        self.obja = obja

    def printit(self):
        print(self.obja.a)
        print(self.obja.b)

class MyBindingSpec(pinject.BindingSpec):
    def configure(self, bind):
        bind('obja',to_class=A)
        bind('a',to_instance='aaa')
        bind('b',to_instance='bbb')

obj_graph = pinject.new_object_graph(binding_specs=[MyBindingSpec()])
objb = obj_graph.provide(B)
objb.printit()

ObjectGraph Builder

Hi, I've developed a builder for multi-step configuration: a configuration manager that can be imported from different places, configured, and in the end used to build the object graph.

I've made a separate repo, since I wanted to use it immediately for my own project. However, I'm opening this ticket as a suggestion: if you are interested in integrating it with the original package, I'd be happy to port it, open a PR, and shut down the additional package.

This is the package, and please check the documentation for use cases.
https://github.com/eshta/object-graph-builder

Please let me know your thoughts, and thanks :)

Undefined name: errors

Where is errors defined?
% flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics

./pinject/pinject/__init__.py:32:23: F821 undefined name 'errors'
for thing_name in dir(errors):
                      ^
./pinject/pinject/__init__.py:33:21: F821 undefined name 'errors'
    thing = getattr(errors, thing_name)
                    ^
2     F821 undefined name 'errors'
2

Feature request: "Local" bindings without annotating non-binding-spec code

Problem

The only way to bind the same name to two different objects depending on where they'll be used seems to be to annotate the "non-binding-spec" code to make the two parameters distinguishable.

But annotations of the non-binding-spec code go against pinject's unique selling point, which is that you can leave your regular code completely untouched. In fact, the very first point of the "Why pinject?" README section says:

[...] Forget having to decorate your code with @inject_this and @annotate_that just to get started. With Pinject, you call new_object_graph(), one line, and you're good to go.

Annotations to avoid name collisions bring back exactly the kind of @annotate_that mess that people who like pinject want to avoid.

So this is a feature request to come up with and implement an alternative to annotations to avoid name collisions and have certain bindings only apply "locally", e.g. only to one specific class.

Some ideas

Extra parameter to bind

The feature could take the form of e.g. an extra parameter to bind that allows you to choose a specific requesting class for which it is applied:

class SomeBindingSpec(pinject.BindingSpec):
    def configure(self, bind):
        bind("common_name", to_class=SomeBoundClass, local_to=SomeRequestingClass)

class SomeOtherBindingSpec(pinject.BindingSpec):
    def configure(self, bind):
        bind("common_name", to_class=SomeOtherBoundClass, local_to=SomeOtherRequestingClass)

Decorated binding specs

Another, perhaps more flexible way to do this would be to instead implement locality on the level of binding specs, e.g.:

@pinject.local_binding_spec(requesters=[SomeRequestingClass])
class SomeBindingSpec(pinject.BindingSpec):
    def configure(self, bind):
        bind("common_name", to_class=SomeBoundClass)

@pinject.local_binding_spec(requesters=[SomeOtherRequestingClass])
class SomeOtherBindingSpec(pinject.BindingSpec):
    def configure(self, bind):
        bind("common_name", to_class=SomeOtherBoundClass)

Composable binding specs and object graphs

Another idea, which would at least make the issue easier to work around, is to make binding specs and object graphs more "composable" than they are now. You could then, for example, create a common object graph without collisions plus two separate object graphs, each consisting of the common one and the bindings that would otherwise collide, and use those to provide the conflicting requesting classes separately. pinject would, however, have to ensure that both object graphs return the exact same instances for the common parts, just like it does normally within a single object graph.

Thoughts?
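
For comparison, a hedged sketch of the status quo this request wants to replace: per-arg annotations on the requesting classes. SomeBoundClass and SomeOtherBoundClass are placeholder classes, and the annotation strings are arbitrary.

import pinject

class SomeBoundClass(object):
    pass

class SomeOtherBoundClass(object):
    pass

class SomeRequestingClass(object):
    @pinject.annotate_arg('common_name', 'for-some')
    def __init__(self, common_name):
        self.common_name = common_name

class SomeOtherRequestingClass(object):
    @pinject.annotate_arg('common_name', 'for-other')
    def __init__(self, common_name):
        self.common_name = common_name

class CommonNameBindingSpec(pinject.BindingSpec):
    def configure(self, bind):
        bind('common_name', annotated_with='for-some', to_class=SomeBoundClass)
        bind('common_name', annotated_with='for-other', to_class=SomeOtherBoundClass)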

Class names cannot contain numbers or single-letter words

The default translation from class names to argument names cannot handle class names containing numbers.

Example. If you run this:

from pinject import new_object_graph

class Model1:
    pass

class Obj:
    def __init__(self, model1):
        pass


graph = new_object_graph()

obj = graph.provide(Obj)

You get this error:

nothing injectable for the binding name "model1" (unannotated)

The same happens if the class name contains a single-letter CamelCase word, such as ModelA:

from pinject import new_object_graph

class ModelA:
    pass

class Obj:
    def __init__(self, model_a):
        pass


graph = new_object_graph()

obj = graph.provide(Obj)

It fails too:

nothing injectable for the binding name "model_a" (unannotated)

You can solve the issue by passing a customized get_arg_names_from_class_name like this:

import re

def get_arg_names_from_class_name_with_nums(class_name):
    parts = []
    rest = class_name
    if rest.startswith('_'):
        rest = rest[1:]
    while True:
        m = re.match(r'([A-Z][a-z]*|[0-9][a-z0-9]*)(.*)', rest)
        if m is None:
            break
        parts.append(m.group(1))
        rest = m.group(2)
    if not parts:
        return []
    return ['_'.join(part.lower() for part in parts)]

graph = new_object_graph(get_arg_names_from_class_name=get_arg_names_from_class_name_with_nums)

obj = graph.provide(Obj)

I have prepared a pull request to fix the default get_arg_names_from_class_name like in the example above.

TypeError in __init__ throws OnlyInstantiableViaProviderFunctionError

Hello,

every time I cause a TypeError in the __init__ method of an injected class, pinject raises a pretty cryptic exception that makes debugging much harder:

import pinject

class Foo(object):
    def __init__(self, bar):
        self.bar = bar

class Bar(object):
    def __init__(self, foo_bar):
        self.foo_bar = foo_bar

class FooBar(object):
    def __init__(self):
        raise TypeError("test")

o = pinject.new_object_graph()
o.provide(Foo)

Traceback (most recent call last):
  File "/tmp/a.py", line 16, in <module>
    o.provide(Foo)
  File "/spare/local/secmaster-overwatch-infra/lib/python2.7/site-packages/pinject/object_graph.py", line 244, in provide
    raise e
pinject.errors.OnlyInstantiableViaProviderFunctionError: when injecting Bar.__init__ at /tmp/a.py:8, the arg named "foo_bar" unannotated cannot be injected, because its provider, the class __main__.FooBar at /tmp/a.py:11, needs at least one directly passed arg

Thanks

Mirko

raises ConflictingExplicitBindingsError when it shouldn't

I bumped into a very very weird issue.

#! /usr/bin/env python2.7
import pinject


class FoundationA(object):
    @pinject.inject()
    def __init__(self, arg):
        self.arg = arg


class FoundationB(object):
    @pinject.inject()
    def __init__(self, arg):
        self._arg = arg


class BindingSpec(pinject.BindingSpec):
    def configure(self, bind):
        bind('arg', to_instance=1)

object_graph = pinject.new_object_graph(
    binding_specs=[
        BindingSpec(),
    ],
    only_use_explicit_bindings=True
)

foundation_a = object_graph.provide(FoundationA)
print(foundation_a.arg)

foundation_b = object_graph.provide(FoundationB)
print(foundation_b._arg)

This should print

1
1

Instead it raises ConflictingExplicitBindingsError.

$ python ./t.py 
Traceback (most recent call last):
  File "./t.py", line 25, in <module>
    only_use_explicit_bindings=True
  File "/tmp/venv/lib/python2.7/site-packages/pinject/object_graph.py", line 159, in new_object_graph
    raise e
pinject.errors.ConflictingExplicitBindingsError: multiple explicit bindings for same binding name:
  the binding at ./t.py:5, from the binding name "foundation" (unannotated) to the class __main__.FoundationA at ./t.py:5, in "singleton scope" scope
  the binding at ./t.py:11, from the binding name "foundation" (unannotated) to the class __main__.FoundationB at ./t.py:11, in "singleton scope" scope

Some very weird magic is happening behind the scenes, even though I've set only_use_explicit_bindings=True!

Any idea what might be wrong?
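
A hedged reading of the error: both class names map to the binding name "foundation" (as the error text shows, the default converter drops the trailing single capital letter), so the two @pinject.inject classes register conflicting explicit bindings. One workaround sketch, reusing BindingSpec from above, is a custom converter that keeps the trailing letter (illustrative only, not pinject's default):

import re
import pinject

def arg_names_keeping_single_letters(class_name):
    # 'FoundationA' -> ['foundation_a'], 'FoundationB' -> ['foundation_b']
    parts = re.findall(r'[A-Z][a-z0-9]*', class_name.lstrip('_'))
    return ['_'.join(p.lower() for p in parts)] if parts else []

object_graph = pinject.new_object_graph(
    binding_specs=[BindingSpec()],
    only_use_explicit_bindings=True,
    get_arg_names_from_class_name=arg_names_keeping_single_letters)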

What's the release strategy?

I see there are prerelease versions out there since March of 2020. The last stable version release was May 2019. Are there any intentions to release the PRs from the past 18 months as a stable release?

pinject.errors.OnlyInstantiableViaProviderFunctionError: simple sample script, but I don't know how to fix the error

[root@localhost ~]# python3
Python 3.6.8 (default, Apr 16 2020, 01:36:27)
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import pinject
>>> from enum import Enum

>>> class SendKind(Enum):
...     text = 't'
...     link = 'l'
...
>>> class test(object):
...     @pinject.copy_args_to_internal_fields
...     def __init__(self, send_kind):
...         pass
...     def demo(self):
...         print(self._send_kind.text)
...
>>> obj_graph = pinject.new_object_graph()
>>> test_class = obj_graph.provide(test)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pinject/scoping.py", line 62, in provide
    return self._binding_key_to_instance[binding_key]
KeyError: <the binding name "send_kind" (unannotated)>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 51, in provide_from_arg_binding_key
    provided = provider_indirection.StripIndirectionIfNeeded(Provide)
  File "/usr/local/lib/python3.6/site-packages/pinject/provider_indirections.py", line 26, in StripIndirectionIfNeeded
    return provide_fn()
  File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 43, in Provide
    lambda: binding.proviser_fn(child_injection_context, self,
  File "/usr/local/lib/python3.6/site-packages/pinject/scoping.py", line 64, in provide
    instance = default_provider_fn()
  File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 44, in <lambda>
    pargs, kwargs))
  File "/usr/local/lib/python3.6/site-packages/pinject/bindings.py", line 264, in Proviser
    to_class, injection_context, pargs, kwargs)
  File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 70, in provide_class
    return cls(*init_pargs, **init_kwargs)
TypeError: __call__() missing 1 required positional argument: 'value'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/site-packages/pinject/object_graph.py", line 203, in provide
    raise e
  File "/usr/local/lib/python3.6/site-packages/pinject/object_graph.py", line 200, in provide
    direct_init_pargs=[], direct_init_kwargs={})
  File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 66, in provide_class
    direct_init_pargs, direct_init_kwargs)
  File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 83, in get_injection_pargs_kwargs
    lambda abk: self.provide_from_arg_binding_key(
  File "/usr/local/lib/python3.6/site-packages/pinject/arg_binding_keys.py", line 108, in create_kwargs
    for arg_binding_key in arg_binding_keys}
  File "/usr/local/lib/python3.6/site-packages/pinject/arg_binding_keys.py", line 108, in <dictcomp>
    for arg_binding_key in arg_binding_keys}
  File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 84, in <lambda>
    fn, abk, injection_context))
  File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 58, in provide_from_arg_binding_key
    binding.get_binding_target_desc_fn())
pinject.errors.OnlyInstantiableViaProviderFunctionError: when injecting test.__init__, the arg named "send_kind" unannotated cannot be injected, because its provider, the class __main__.SendKind, needs at least one directly passed arg
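
A hedged workaround sketch: the error suggests pinject is trying to build a SendKind by calling the Enum class itself, so binding send_kind to a concrete member avoids that (reusing SendKind and test from the session above).

import pinject

class MySpec(pinject.BindingSpec):
    def configure(self, bind):
        # Bind the arg name to an enum member instead of the Enum class.
        bind('send_kind', to_instance=SendKind.text)

obj_graph = pinject.new_object_graph(binding_specs=[MySpec()])
test_class = obj_graph.provide(test)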

Getting FileNotFoundError: [Errno 2] No such file or directory: '<my-path>/site-packages/pinject/../VERSION'

I am getting this error when installing the latest version (0.12.2) with pip install pinject:

File "/home/ronen/reach/reach_venv3.6/lib/python3.6/site-packages/pinject/version.py", line 10, in
VERSION = open(VERSION_FILE).read().strip()
FileNotFoundError: [Errno 2] No such file or directory: '/home/ronen/reach/reach_venv3.6/lib/python3.6/site-packages/pinject/../VERSION'

So I have looked and there isn't really any VERSION file in my site-packages folder (should there be?).
I am running python3.6.7 on Ubuntu 18.04. We have been running with django and pinject for a long time now and everything was great.

Please help, thanks in advance!

Requests/Urllib3/six injection issue.

Issue

When importing the requests library and calling new_object_graph, it fails complaining about Tkinter not being configured for the system.

The problem is that when calling new_object_graph, Pinject traverses the imported modules by default, but for some reason it errors on importing Tkinter on Python installations where Tkinter isn't supported (intentionally, e.g. python:3.7-alpine). I believe the import tree is application -> requests -> urllib3 -> six -> Tkinter. I'm not sure what the proper fix is here, but I wanted to report it in case it is an actual behavioral issue with Pinject.

Environment

Docker container: python3.7-alpine
Requests: 2.2.1
Pinject: v0.12

Reproduction

This should be enough to repro it:

# repro.py
import pinject

import requests

def main():
    graph = pinject.new_object_graph() # errors

if __name__ == "__main__":
    main()
(pinject-requests-tkinter) ~/w/pinject-requests-tkinter ❯❯❯ docker run --rm -it pinject-requests-tkinter                    master ✱ ◼
Python 3.7.3 (default, May 11 2019, 02:00:41)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import repro
>>> repro.main()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/app/repro.py", line 7, in main
    pinject.new_object_graph()
  File "/root/.local/share/virtualenvs/app-ueEJiAOq/lib/python3.7/site-packages/pinject/object_graph.py", line 100, in new_object_graph
    found_classes = finding.find_classes(modules, classes)
  File "/root/.local/share/virtualenvs/app-ueEJiAOq/lib/python3.7/site-packages/pinject/finding.py", line 32, in find_classes
    all_classes |= _find_classes_in_module(module)
  File "/root/.local/share/virtualenvs/app-ueEJiAOq/lib/python3.7/site-packages/pinject/finding.py", line 46, in _find_classes_in_module
    for member_name, member in inspect.getmembers(module):
  File "/usr/local/lib/python3.7/inspect.py", line 341, in getmembers
    value = getattr(object, key)
  File "/root/.local/share/virtualenvs/app-ueEJiAOq/lib/python3.7/site-packages/urllib3/packages/six.py", line 92, in __get__
    result = self._resolve()
  File "/root/.local/share/virtualenvs/app-ueEJiAOq/lib/python3.7/site-packages/urllib3/packages/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/root/.local/share/virtualenvs/app-ueEJiAOq/lib/python3.7/site-packages/urllib3/packages/six.py", line 82, in _import_module
    __import__(name)
  File "/usr/local/lib/python3.7/tkinter/__init__.py", line 36, in <module>
    import _tkinter # If this fails your Python may not be configured for Tk
ImportError: Error loading shared library libtk8.6.so: No such file or directory (needed by /root/.local/share/virtualenvs/app-ueEJiAOq/lib/python3.7/lib-dynload/_tkinter.cpython-37m-x86_64-linux-gnu.so)
>>>

Also, apologies if this is the wrong place to put this! I figured it wasn't an issue with requests/urllib3/six (at least, I'd hope not) since they're so prevalent.

Can not provide objects for class with custom metaclass

pinject.object_graph.ObjectGraph.provide() expects "cls" to be of type types.TypeType. This effectively removes support for any class with a custom metaclass:

from pinject import new_object_graph

class Meta(type):
    pass

class A(object):
    __metaclass__ = Meta

new_object_graph().provide(A)

The code above ends up with this error:

Traceback (most recent call last):
  File "t.py", line 9, in <module>
    new_object_graph().provide(A)
  File "/local/lib/python2.7/site-packages/pinject/object_graph.py", line 234, in provide
    _verify_type(cls, types.TypeType, 'cls')
  File "/local/lib/python2.7/site-packages/pinject/object_graph.py", line 175, in _verify_type
    arg_name, required_type.__name__, type(elt).__name__)
pinject.errors.WrongArgTypeError: wrong type for arg cls: expected type but got Meta

What's the reason for requiring cls to be types.TypeType?
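
A hedged note rather than an answer: the traceback in the decorated-classes report above shows newer pinject versions checking inspect.isclass instead of types.TypeType, and inspect.isclass is true for instances of any metaclass. A Python 3 sketch of the same class:

from pinject import new_object_graph

class Meta(type):
    pass

class A(object, metaclass=Meta):
    pass

# inspect.isclass(A) is True even though type(A) is Meta, so provide() accepts it.
new_object_graph().provide(A)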

Why is a class with @pinject.inject always a singleton? Is it a bug?

OS: macOS 10.14.5
Python: 3.7
Dependencies:

[tool.poetry.dependencies]
python = "^3.6"
numpy = "^1.16"
PyContracts = "^1.8"
quandl = "^3.4"
pandas = "^0.24"
pinject = "^0.14.1"
typing_extensions = "^3.7"

[tool.poetry.dev-dependencies]
freezegun = "^0.3.11"
pytest = "^4.5"
PyHamcrest = "^1.9"
flake8 = "^3.7"
codecov = "^2.0"
pytest-cov = "^2.7"
jupyterlab = "^0.35.6"
nbval = "^0.9.1"
mypy = "^0.701.0"
pytype = "^2019.5"

Thank you for the lib. I enjoy your design decisions, which make it the closest thing to dependency injection in Python I've seen so far. I'd like to use it, but I have a problem with the following code:

class MyRegistry:
    def reg(self):
        return 42

class FooClass:
    @pinject.inject(['my_registry']) # https://github.com/google/pinject/blob/1e785b550cad4d4f9fd7f60a7d047dab9f7410e0/pinject/bindings.py#L168 make the FooClass singleton 
    def __init__(self, my_registry, param):
        self.my_registry = my_registry
        self.param = param

class MainClass:
    def __init__(self, provide_foo_class):
        a = provide_foo_class(param=1)
        b = provide_foo_class(param=2) # this line gets FooClass instance from the cache since it's singleton 
        print(a.my_registry.reg()) # works fine
        print(b.my_registry.reg()) # obviously, works fine too
        print(a.param)
        print(b.param) # this is actually the `a` instance, so it prints 1 instead of 2

if __name__ == '__main__':
    obj_graph = pinject.new_object_graph()
    obj_graph.provide(MainClass)

I tried to override the FooClass scope in a custom spec, but it throws an ambiguity exception since FooClass is already registered as a singleton. I can work around it with a FooClassFactory that creates instances of FooClass, but that seems unnecessary when there is the provide_ facility.

How to make it work properly?

Default strategy for converting class names to argument names should allow multiple capitalized letters or numbers

According to PEP 8 -- Style Guide for Python Code, when using acronyms in CapWords, capitalize all the letters of the acronym. Thus HTTPServerError is better than HttpServerError.

Nevertheless, the default implementation of get_arg_names_from_class_name considers only one uppercase letter when splitting the class name, as can be seen below:

def default_get_arg_names_from_class_name(class_name):
    # content removed for brevity

    while True:
        m = re.match(r'([A-Z][a-z]*|[0-9][a-z0-9]*)(.*)', rest)

    # content removed for brevity

The following examples show the problem:

main.py

import sys

import pinject

if __name__ == '__main__':
    print(pinject.bindings.default_get_arg_names_from_class_name(sys.argv[1]))

Executing the sample program for 'HttpServer' works great:

$ python main.py HttpServer
['http_server']

But, executing the sample program for 'HTTPServer' results in an invalid value:

$ python main.py HTTPServer
['h_t_t_p_server']

The same occurs when the class name has a digit before a capital letter:

$ python main.py Ec2Server
['ec_2_server']
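
A hedged sketch of a converter that keeps acronyms and digits attached, passed through get_arg_names_from_class_name; the function name and regexes below are only illustrative.

import re
import pinject

def arg_names_with_acronyms(class_name):
    name = class_name.lstrip('_')
    # Split between an acronym and the following word, then between a
    # lower-case letter or digit and the next capital, and lower-case it all.
    name = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\1_\2', name)
    name = re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', name)
    return [name.lower()]

graph = pinject.new_object_graph(
    get_arg_names_from_class_name=arg_names_with_acronyms)

# arg_names_with_acronyms('HTTPServer') == ['http_server']
# arg_names_with_acronyms('Ec2Server') == ['ec2_server']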

If six.moves has been imported, finding.find_classes may cause an ImportErrors

The 'six' library includes a six.moves package that sets up a meta-importer import hook to lazily load a bunch of modules only when they are first accessed. The finding.find_classes code, when looping over sys.modules (via modules=ALL_IMPORTED_MODULES), is tripped up by this: as soon as it starts trying to introspect the six.moves module, it triggers lazy loading of a bunch of modules which may or may not be present in the system's Python installation (common examples of modules that show up in the resulting ImportError messages: gdbm and Tkinter).

A workaround for this is to filter the list and avoid module names that start with 'six.'. Very hacky, agreed. It's possibly worth filing a bug against https://pypi.python.org/pypi/six, but given what it is trying to do, I think the code just needs to learn to play together. (I've encountered one other piece of code with this same problem due to six.moves having been imported.)

I'm mailing you a CL internally, I'll let you push it upstream.
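
A rough sketch of the filtering workaround described above (hacky, as acknowledged): drop six's lazy-loading submodules from the module list before handing it to new_object_graph.

import sys
import pinject

modules_to_scan = [m for name, m in list(sys.modules.items())
                   if m is not None and not name.startswith('six.')]

obj_graph = pinject.new_object_graph(modules=modules_to_scan)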

Publishing pinject v0.12 with Python 3 support

Hi there (everyone who has been waiting for Python 3 support in pinject for a long time)!

We have just merged #19 (thanks @trein for the great work!) and are staging a test version, v0.11, on test.pypi.org.

Please install the dev version of pinject by running the following pip command:

pip install \
  --no-deps \
  --no-cache \
  --upgrade \
  --index-url https://test.pypi.org/simple/ \
  pinject

Then let us know whether it works under your environment.

We will publish v0.12 to pypi.org as an official release after we have confirmed that v0.11 works as expected, by collecting enough confirmation replies under this issue.

Thanks for your help, and let's look forward to the v0.12 release!

If rdflib has been imported, new_object_graph() raises ImportError

Sample code:

import pinject
import rdflib

foo = pinject.new_object_graph()

Running this results in the following traceback:

Traceback (most recent call last):
  File "bug.py", line 4, in <module>
    foo = pinject.new_object_graph()
  File "/Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/site-packages/pinject/object_graph.py", line 105, in new_object_graph
    found_classes = finding.find_classes(modules, classes)
  File "/Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/site-packages/pinject/finding.py", line 32, in find_classes
    all_classes |= _find_classes_in_module(module)
  File "/Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/site-packages/pinject/finding.py", line 47, in _find_classes_in_module
    for member_name, member in inspect.getmembers(module):
  File "/Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 252, in getmembers
    value = getattr(object, key)
  File "/Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/_vendor/six.py", line 92, in __get__
    result = self._resolve()
  File "/Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/_vendor/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources/_vendor/six.py", line 82, in _import_module
    __import__(name)
ImportError: dlopen(/Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/gdbm.so, 2): Symbol not found: _gdbm_errno
  Referenced from: /Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/gdbm.so
  Expected in: /usr/local/opt/gdbm/lib/libgdbm.4.dylib
 in /Users/bob/.pyenv/versions/2.7.13/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/gdbm.so

Environment:

  • rdflib 4.2.2
  • pinject 0.10.2
  • python 2.7.13 (MacOS) and 2.7.14 (Amazon Linux)

As a workaround, I'm passing a list of modules to new_object_graph to exclude rdflib, and that seems to be working so far.

Thank you for this awesome library. Other than this one small hiccup, it has been working very well, doing exactly what I would expect it to do.

Compatibility with GTK

Is Pinject compatible with GTK? For me, this simple example doesn't work.

import gi

gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

import pinject

class MyWindow(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self, title="Hello World")

        self.button = Gtk.Button(label="Click Here")
        self.button.connect("clicked", self.on_button_clicked)
        self.add(self.button)

    def on_button_clicked(self, widget):
        print("Hello World")

obj_graph = pinject.new_object_graph()

Error:

Traceback (most recent call last):
  File "main.py", line 54, in <module>
    obj_graph = pinject.new_object_graph()
  File "/home/lukas/.cache/pypoetry/virtualenvs/test-di-1-QD_7cPfB-py3.7/lib/python3.7/site-packages/pinject/object_graph.py", line 100, in new_object_graph
    found_classes = finding.find_classes(modules, classes)
  File "/home/lukas/.cache/pypoetry/virtualenvs/test-di-1-QD_7cPfB-py3.7/lib/python3.7/site-packages/pinject/finding.py", line 32, in find_classes
    all_classes |= _find_classes_in_module(module)
  File "/home/lukas/.cache/pypoetry/virtualenvs/test-di-1-QD_7cPfB-py3.7/lib/python3.7/site-packages/pinject/finding.py", line 46, in _find_classes_in_module
    for member_name, member in inspect.getmembers(module):
  File "/usr/lib/python3.7/inspect.py", line 341, in getmembers
    value = getattr(object, key)
  File "/home/lukas/.cache/pypoetry/virtualenvs/test-di-1-QD_7cPfB-py3.7/lib/python3.7/site-packages/gi/module.py", line 163, in __getattr__
    setattr(wrapper, value_name, wrapper(value_info.get_value()))
ValueError: invalid enum value: 6

[Enhancement] Allow calling methods and injecting their parameters

Problem

A lot of times I just want to call a function whose arguments need to be injected. In order to call the function like this, I basically have to create a wrapper class for the function which specifies the arguments required, and then call some method on the wrapper to pass in the injected arguments. This works but is a lot of boilerplate, e.g.:

def foobar(foo: Foo) -> int:
    ...

class FoobarWrapper:
    def __init__(self, foo: Foo) -> None:
        self.foo = foo

    def call(self) -> int:
        return foobar(self.foo)

obj_graph = new_object_graph()
print("foobar returns:", obj_graph.provide(FoobarWrapper).call())

Solution

I want to be able to call functions with arbitrary parameters and have pinject construct all of the necessary inputs for me.
It seems like the library is set up nicely to support this; however, it requires me to access private members of the ObjectGraph instance.

Proof of Concept

After browsing through the implementation, I've been able to achieve my goal with the following 3 line hack. I provide a helper method around it to keep things simple and type-safe.

from typing import TypeVar, Callable
from pinject.object_graph import ObjectGraph

T = TypeVar("T")


def inject_func(obj_graph: ObjectGraph, func: Callable[..., T]) -> T:
    context = obj_graph._injection_context_factory.new(func)
    args, kwargs = obj_graph._obj_provider.get_injection_pargs_kwargs(func, context, [], {})
    return func(*args, **kwargs)

And here's how you'd call it:

class Bar:
    def __init__(self) -> None:
        self.a = 1


class Foo:
    def __init__(self, bar: Bar) -> None:
        self.bar = bar


def foobar(foo: Foo) -> int:
    return foo.bar.a


obj_graph = new_object_graph()
print("foobar returns:", inject_func(obj_graph, foobar))
foobar returns: 1

Desired Implementation

What I'd like to see is something like this at the same level as ObjectGraph.provide, maybe ObjectGraph.invoke:

obj_graph = new_object_graph()
print("foobar returns:", obj_graph.invoke(foobar))
