As a Flying Circus developer, I can use a base class to easily create a raw Python specification for an AWS Resource.
AWS resources have a fairly standard layout, with just a few (usually common) top-level CloudFormation attributes, and a long list of nested "attributes" inside a Properties object.
For convenience, we also want to be able to flatten the properties into the main class for access from the primary object.
Attributes with a YAML collection type (i.e. a list or a map/class) should be initialised to an empty object on first access, if they are not otherwise set.
The best implementation seems to be for the AWSObject to have a map of {attributeName: class} for attributes that need this behaviour.
Extension behaviour (unclear if it belongs in this story, or a separate story):
When setting the attribute with supplied data, create an object of the relevant type if one was not supplied (to ensure you are using the right type, and to auto-convert a plain dictionary).
Enforce that the correct type is being used inside a list.
This is probably implemented incidentally as part of #56 (default values)
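A minimal sketch of how the lazy defaults and auto-conversion could combine, assuming a hypothetical AWSObject base class; the COLLECTION_ATTRIBUTES name and the coercion rule are illustrative, not the real flying-circus API:

```python
class AWSObject:
    # Map of {attributeName: class} for attributes that should be lazily
    # initialised to an empty collection on first access.
    COLLECTION_ATTRIBUTES = {}

    def __getattr__(self, name):
        # Only called when normal lookup fails, so an explicitly set
        # attribute always wins over the lazy default.
        if name in self.COLLECTION_ATTRIBUTES:
            value = self.COLLECTION_ATTRIBUTES[name]()
            setattr(self, name, value)
            return value
        raise AttributeError(name)

    def __setattr__(self, name, value):
        # Extension behaviour: auto-convert a plain dictionary into the
        # declared type when setting the attribute with supplied data.
        declared = self.COLLECTION_ATTRIBUTES.get(name)
        if declared is not None and isinstance(value, dict) and not isinstance(value, declared):
            value = declared(**value)
        object.__setattr__(self, name, value)


class Properties:
    # Stand-in for a real nested Properties type.
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


class Instance(AWSObject):
    COLLECTION_ATTRIBUTES = {"SecurityGroupIds": list, "Properties": Properties}
```

With this, instance.SecurityGroupIds.append(...) works without explicit initialisation, and assigning a plain dict to Properties converts it to the declared type.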
As a flying-circus developer, I can write a test that produces output identical to an AWS template (after some post-processing of the source template).
As a Flying Circus developer, I have specified relevant CloudFormation metadata on the template and/or individual resources so that it is clear these objects were created by Flying Circus (and which version of flying circus).
Currently we have functionality that consists of classes with constrained attributes, where those attributes are defined at the class level and get extended by subclasses. This sounds a lot like a problem that slots can solve, in a more performant and elegant way than our bespoke implementation with a custom class field and an overridden __setattr__.
OTOH, the functionality doesn't map precisely to slots. We distinguish between three types of attribute (internal, normal, and unknown), and the getter/setter behaviour for each type differs.
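To make the mismatch concrete, here is a toy comparison with hypothetical attribute names; __slots__ can reject unknown attributes for free, but cannot vary setter behaviour per attribute category:

```python
class SlottedResource:
    # Slots reject unknown attributes automatically, and subclasses can
    # extend the set with their own __slots__, but every slot behaves
    # identically.
    __slots__ = ("_stack", "Type", "Metadata")


class BespokeResource:
    # The bespoke approach: three attribute categories, each with its own
    # setter behaviour (internal stored directly, normal possibly
    # validated, unknown rejected).
    INTERNAL_ATTRIBUTES = {"_stack"}
    NORMAL_ATTRIBUTES = {"Type", "Metadata"}

    def __setattr__(self, name, value):
        if name in self.INTERNAL_ATTRIBUTES:
            object.__setattr__(self, name, value)
        elif name in self.NORMAL_ATTRIBUTES:
            # Hook point for per-attribute validation or conversion.
            object.__setattr__(self, name, value)
        else:
            raise AttributeError("Unknown attribute: " + name)
```

Both reject unknown attributes; only the bespoke version can treat the three categories differently.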
Be able to recursively apply tags to an object and its nested objects, allowing for objects that don't support tagging and objects that have extra tag fields and/or unusual names for the tag attribute.
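A rough sketch of the recursion, under assumed conventions (a TAG_ATTRIBUTE override for unusual names, a children list for nesting); none of these names come from the real codebase:

```python
def apply_tags(obj, tags):
    """Recursively apply tags to obj and any nested objects."""
    # Objects may use an unusual name for their tag attribute.
    tag_attr = getattr(obj, "TAG_ATTRIBUTE", "Tags")
    if hasattr(obj, tag_attr):  # silently skip objects that don't support tagging
        current = getattr(obj, tag_attr)
        current.extend(t for t in tags if t not in current)
    for child in getattr(obj, "children", []):
        apply_tags(child, tags)


class Instance:
    def __init__(self):
        self.Tags = []
        self.children = []


class AutoScalingGroup:
    TAG_ATTRIBUTE = "ASGTags"  # example of an unusual tag attribute name

    def __init__(self):
        self.ASGTags = []
        self.children = []


class WaitCondition:
    def __init__(self):  # no tag attribute at all
        self.children = []
```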
Currently lists of lists follow a common but somewhat unintuitive layout:
key:
- - 2
  - 3
  - 4
- - 5
It would be nice if we could improve this. However, it is not clear what a better alternative is, and I am not sure how often we encounter lists of lists in CloudFormation.
Provide tooling and utility functions to enable users to deploy a flying circus stack directly to AWS, without having to go through the export-upload cycle.
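One possible shape for such a utility, sketched with an injected CloudFormation client (e.g. boto3.client("cloudformation")); the create-or-update logic shown is a simplification:

```python
def deploy_stack(client, stack_name, template_body):
    """Create or update a stack directly, hiding the export-upload cycle."""
    # describe_stacks with no arguments lists all existing stacks.
    existing = client.describe_stacks()["Stacks"]
    if any(s["StackName"] == stack_name for s in existing):
        return client.update_stack(StackName=stack_name, TemplateBody=template_body)
    return client.create_stack(StackName=stack_name, TemplateBody=template_body)
```

Taking the client as a parameter keeps the function testable without touching AWS.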
As a devops user, I am able to intuitively insert references to CloudFormation intrinsic functions into my Flying Circus infrastructure definitions.
This ticket only covers functions that are relevant to standard Flying Circus use cases. Some intrinsic functions provide basic programming functionality within a template, and hence are not often used in Flying Circus. These functions will be implemented in #197.
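One possible shape for these helpers: small objects that serialise to the long-form CloudFormation syntax. The class and method names here (Ref, GetAtt, as_yaml_node) are illustrative assumptions, not a settled API:

```python
class Ref:
    """Reference to another object in the stack, by logical name."""

    def __init__(self, target):
        self.target = target

    def as_yaml_node(self):
        # Long-form syntax: {"Ref": "LogicalName"}
        return {"Ref": self.target}


class GetAtt:
    """Fetch an attribute from another resource in the stack."""

    def __init__(self, target, attribute):
        self.target = target
        self.attribute = attribute

    def as_yaml_node(self):
        # Long-form syntax: {"Fn::GetAtt": ["LogicalName", "Attribute"]}
        return {"Fn::GetAtt": [self.target, self.attribute]}
```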
Consider the changes library or bumpversion or bump2version or even bump. Also look at the Azure/GitHub integration for a release (GithubRelease task)
Conclusion: dephell actually looks good and well maintained, and solves a number of problems at once. It remains to be confirmed whether it has a way to deal with multiple files.
When PyYAML emits a block-style list that is associated with a mapping, then the list is lined up vertically with the key. This is valid YAML, but is unintuitive and potentially misleading:
one:
- 2
- 3
- 4
It would be better if we could get PyYAML to indent the list values, which would also be valid YAML. However, it is not easy to see how that could be done. Emitter.expect_block_sequence() appears to be the place where the business rule is encoded, so it may be possible to solve the problem by overriding this method to ignore self.mapping_context. However, we would need to be careful about knock-on effects for different sorts of nested blocks.
def expect_block_sequence(self):
    # indentless = self.mapping_context and not self.indention  # Original code
    indentless = not self.indention  # Potential code modification
    self.increase_indent(flow=False, indentless=indentless)
    self.state = self.expect_first_block_sequence_item
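An alternative that avoids touching expect_block_sequence() at all is the widely used recipe of overriding increase_indent() on a Dumper subclass, forcing indentless off for every block sequence. This is a sketch and would need the same caution about nested blocks:

```python
import yaml


class IndentedDumper(yaml.Dumper):
    def increase_indent(self, flow=False, indentless=False):
        # Ignore the mapping-context shortcut so block sequences under a
        # mapping key are indented relative to the key.
        return super().increase_indent(flow, False)


print(yaml.dump({"one": [2, 3, 4]}, Dumper=IndentedDumper, default_flow_style=False))
```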
Sceptre is a tool for managing CloudFormation templates. On quick investigation it appears to support multiple template styles, and provide command line scripts to directly interact with those templates and AWS.
It's still templating, so it's not directly comparable to Flying Circus, but it should be easy to integrate a flying circus stack object into the Sceptre system to manage uploading.
Have tested example Flying Circus code that will create an auto-scaling group on AWS.
Includes:
Integration test. Perhaps upload stack to AWS, then introspect the created elements to determine that they exist as expected, and key attributes match the desired configuration. -> Do stack validation only. See #59
YAML output test
unit tests
classes for alarms and scaling policies
purpose-specific classes for a CPU Alarm, a ScaleIn policy, and a ScaleOut policy (scaling policies need support for stack merging; see #61)
As a developer, I can access the properties of a resource as Python attributes on the Resource class, so that I can write resource.SomeProperty instead of resource.properties.SomeProperty.
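A minimal sketch of the flattening, assuming a nested Properties container; the real internals will differ:

```python
class Properties:
    pass


class Resource:
    def __init__(self):
        # Bypass our own __setattr__ hook while bootstrapping.
        object.__setattr__(self, "properties", Properties())

    def __getattr__(self, name):
        # Fall back to the nested Properties object for unknown attributes.
        return getattr(self.properties, name)

    def __setattr__(self, name, value):
        # Route attribute writes to the nested Properties object.
        setattr(self.properties, name, value)
```

With this, resource.SomeProperty and resource.properties.SomeProperty refer to the same value.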
As a flying-circus user, I can add a Resource (or other Stack object) to a Stack and have the stack determine a sensible default name for the Resource based on data already defined within the Resource, rather than me supplying a key.
This should avoid the repetition associated with the highly common use case (i.e. a Resource object is only used once, and has an internal name attribute as well as a logical stack name).
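A sketch of how the default name might be derived; the Name attribute, the fallback to the type name, and the sanitisation rule are all assumptions:

```python
import re


def default_logical_name(resource):
    # Prefer the resource's internal name attribute, falling back to its type.
    name = getattr(resource, "Name", None) or type(resource).__name__
    # CloudFormation logical IDs must be alphanumeric.
    return re.sub(r"[^A-Za-z0-9]", "", name)


class Stack:
    def __init__(self):
        self.Resources = {}

    def add(self, resource, name=None):
        # The caller may still supply an explicit key; otherwise derive one.
        self.Resources[name or default_logical_name(resource)] = resource
```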
As a Flying Circus developer, I can easily generate (and update) base classes in the _raw package to match the most up-to-date properties supported by AWS CloudFormation.