awslabs / aws-transit-vpc
License: Apache License 2.0
Starting 6/30/2018, all VGWs have a default ASN of 64512. This project also sets the customer gateway default to 64512, so anyone who launches the solution without changing the defaults will hit errors in their Lambda execution:
An error occurred (AsnConflict) when calling the CreateVpnConnection operation: The ASN of the specified customer gateway and virtual private gateway are the same.: ClientError
Traceback (most recent call last):
File "/var/task/transit-vpc-poller.py", line 167, in lambda_handler
vpn1=ec2.create_vpn_connection(Type='ipsec.1',CustomerGatewayId=cg1['CustomerGateway']['CustomerGatewayId'],VpnGatewayId=vgw['VpnGatewayId'],Options={'StaticRoutesOnly':False})
File "/var/runtime/botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 612, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (AsnConflict) when calling the CreateVpnConnection operation: The ASN of the specified customer gateway and virtual private gateway are the same.
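One workaround, until the template defaults change, is to create the customer gateways with a BGP ASN that differs from the VGW's 64512. A minimal sketch using boto3; the preferred ASN of 65000 and the helper names are illustrative assumptions, not values from the solution:

```python
def pick_cgw_asn(vgw_asn, preferred=65000):
    """Return a private ASN that does not collide with the VGW's ASN.

    Private 16-bit ASNs span 64512-65534; the VGW default is 64512.
    """
    return preferred if preferred != vgw_asn else preferred + 1

def create_cgw(ec2, public_ip, vgw_asn=64512):
    # 'ec2' is a boto3 EC2 client, e.g. boto3.client('ec2').
    # Creating the customer gateway with a non-conflicting ASN avoids the
    # AsnConflict error later raised by CreateVpnConnection.
    return ec2.create_customer_gateway(
        Type='ipsec.1',
        PublicIp=public_ip,
        BgpAsn=pick_cgw_asn(vgw_asn),
    )
```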
Kudos to @grolston for helping diagnose the issue.
It would be nice if there were an example of how to set up a second transit VPC in a second region and connect the two Cisco routers.
Is it possible to include a sample of how to do NAT?
Would it work to make the KMS key auto-rotate?
Currently, the CloudFormation template for the Transit VPC account includes an AccountId parameter, which allows you to specify one additional account that should have access to the VPN Config S3 bucket.
In our case, we have many accounts we want to grant access to, so the option to provide a comma separated list of account IDs to this field would be useful.
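As a sketch of what a comma-separated AccountId parameter could expand to, a helper would need to turn the list into one principal per account in the bucket policy. This is a hypothetical illustration, not part of the solution (the Sid, actions, and key prefix are assumptions):

```python
def bucket_policy_for_accounts(bucket_name, account_ids_csv):
    """Build a bucket policy granting config access to each listed account."""
    accounts = [a.strip() for a in account_ids_csv.split(',') if a.strip()]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "SpokeAccountAccess",
            "Effect": "Allow",
            # One root-principal ARN per comma-separated account ID.
            "Principal": {"AWS": ["arn:aws:iam::%s:root" % a for a in accounts]},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::%s/vpnconfigs/*" % bucket_name,
        }],
    }
```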
When the transitvpc:spoke=true tag is removed from a VPC, no cleanup occurs on the Cisco CSRs.
Is this by design for some reason?
Trying to update our AWS runtime to Python 3.7, but the 'push-cisco-config' Lambda fails on import.
[ERROR] Runtime.ImportModuleError: Unable to import module 'transit_vpc_push_cisco_config/lambda_function': No module named '_cffi_backend'
Never Mind
Running into a strange issue with a new Transit VPC deployment from a few days ago; I haven't been able to get spokes online. The poller function successfully finds a new spoke VPC tag, creates EC2 VPN connections, and drops configuration files into the S3 bucket. However, the "configurator" function, on the "Put" trigger, is unable to retrieve the transit_vpc_config.txt settings file, so the CSR devices are never configured. Logs show that it times out trying to retrieve this file, which is odd.
Any ideas? The only thing I've done differently with this deployment is add 5 other AWS account IDs to the bucket policy per AWS docs Appendix C. I've never had this issue before with Transit VPC stack deployments.
CloudWatch debug output for the configurator function execution:
[DEBUG] 2018-06-05T17:30:36.754Z URI updated to: https://transitvpc-vpnconfigs3bucket-xxxxxxxxxx.s3-us-west-2.amazonaws.com/vpnconfigs/transit_vpc_config.txt
[DEBUG] 2018-06-05T17:30:36.755Z CanonicalRequest:
GET
/vpnconfigs/transit_vpc_config.txt
[DEBUG] 2018-06-05T17:31:36.726Z ConnectionError received when sending HTTP request.
Traceback (most recent call last):
File "/var/runtime/botocore/endpoint.py", line 222, in _get_response
proxies=self.proxies, timeout=self.timeout)
File "/var/runtime/botocore/vendored/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/var/runtime/botocore/vendored/requests/adapters.py", line 419, in send
raise ConnectTimeout(e, request=request)
ConnectTimeout: HTTPSConnectionPool(host='transitvpc-vpnconfigs3bucket-xxxxxxxxxx.s3-us-west-2.amazonaws.com', port=443): Max retries exceeded with url: /vpnconfigs/transit_vpc_config.txt (Caused by ConnectTimeoutError(<botocore.awsrequest.AWSHTTPSConnection object at 0x7f328043cc10>, 'Connection to transitvpc-vpnconfigs3bucket-xxxxxxxxxx.s3-us-west-2.amazonaws.com timed out. (connect timeout=60)'))
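For what it's worth, a ConnectTimeout (rather than an AccessDenied error) usually indicates a network-path problem, not a bucket-policy problem: a Lambda function attached to a VPC needs a NAT gateway or an S3 gateway endpoint to reach the bucket. A hedged sketch of adding a gateway endpoint with boto3; this is a common fix for this symptom, not a confirmed diagnosis of this deployment, and the VPC/route-table IDs would be your own:

```python
def s3_service_name(region):
    """Gateway-endpoint service name for S3 in the given region."""
    return 'com.amazonaws.%s.s3' % region

def add_s3_endpoint(ec2, vpc_id, route_table_ids, region='us-west-2'):
    # 'ec2' is a boto3 EC2 client. An S3 gateway endpoint lets VPC-attached
    # Lambda functions (and instances) reach S3 without a NAT or internet path.
    return ec2.create_vpc_endpoint(
        VpcEndpointType='Gateway',
        VpcId=vpc_id,
        ServiceName=s3_service_name(region),
        RouteTableIds=route_table_ids,
    )
```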
Hello,
In my case the deployment is stuck because the Lambda function has errors:
[ERROR] Runtime.ImportModuleError: Unable to import module 'transit_vpc_solution_helper/transit-vpc-solution-helper': No module named 'transit_vpc_solution_helper/transit-vpc-solution-helper'
Adding the transitvpc:preferred-path key with a value of CSR1 or CSR2 doesn't trigger any route-map config changes. Reading the transit-vpc-poller code, it seems the updateConfigXML() method is only called during VPN creation and deletion. It would be good to have another condition that compares the Cisco config with the transitvpc:preferred-path tag value and updates the XML config in the related S3 bucket, triggering the necessary Cisco route-map config changes.
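A sketch of the kind of extra condition being proposed: on each poll, compare the VGW's current transitvpc:preferred-path tag against the value recorded in the stored config, and rewrite the config when they differ. The function and field names here are hypothetical, not the poller's actual helpers:

```python
def preferred_path_changed(vgw_tags, configured_path):
    """Return the new preferred path if the tag differs from stored config,
    else None (meaning no update is needed)."""
    tags = {t['Key']: t['Value'] for t in vgw_tags}
    tag_value = tags.get('transitvpc:preferred-path', 'none')
    return tag_value if tag_value != configured_path else None

# Example: a VGW retagged from CSR1 to CSR2 would return 'CSR2', signalling
# the poller to call updateConfigXML() and re-push the route-map config.
```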
I was wondering if anyone has changed the username that the Cisco configurator uses to push the config to the CSRs?
I found the credentials defined in "transit_vpc_config.txt" and would like to change them, as we want to integrate Cisco ISE onto the routers so the network team can manage access in the normal way.
To do this I would need to add the username used by the configurator to our AD, and the standard user "automate" doesn't fit well with our AD.
Is updating the txt file all I need, and will it break any existing config?
Did the SSH user change for the CSR AMI currently provided in the CloudFormation template? All documentation I can find leads me to ec2-user
as the user, but logging in as that user asks for a password (when providing the keypair used for the instance).
I've also tried spinning up the AMI by itself without any of the UserData from the Cloudformation script and have the same problem.
I can't deploy the Transit VPC for Seoul using the latest transit-vpc-primary-account.template file in CloudFormation.
I suspect the License Included AMI ID "ami-29b26647" for the Seoul region is not valid or no longer available.
Please check the AMI ID and update it.
Thanks.
We tried to update the CloudFormation stack for the Transit CSR routers with a baked AMI with encrypted volumes. The tunnels went down. I think after the new instance comes up, the private IPs are not updated in the S3 configuration. Anyone have any thoughts?
I had the following error in the CSR system log after initial boot:
Oct 26 00:04:12.580: %CVAC-3-XML_ERROR: Error while parsing XML from file varied:/ovf-env.xml: error 'XML_ERROR_FRAMING', element '', attribute '', explanation 'Invalid Escape Sequence '5 pass AHk%xp&E**rH''
It appears to be unhappy with special characters in the randomly generated password. I've worked around this by hard-coding a password in the template, but you might consider eliminating some special characters from the password-generation script.
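A sketch of a generator restricted to characters that survive the ovf-env.xml escape handling. The alphabet choice is my assumption (it drops %, &, *, quotes, and angle brackets seen in the failing example), not the solution's actual policy, and `secrets` requires Python 3.6+:

```python
import secrets
import string

# Alphanumerics plus a few symbols that are safe inside XML attribute
# values; notably excludes %, &, *, ', " and <>.
SAFE_CHARS = string.ascii_letters + string.digits + '-_.'

def generate_password(length=16):
    """Generate a random password from the XML-safe alphabet."""
    return ''.join(secrets.choice(SAFE_CHARS) for _ in range(length))
```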
When trying to deploy the transit-vpc-primary-account.template
stack after subscribing in the AWS marketplace the stack is failing with the following error
2021-01-21 18:31:49 UTC-0500 | VpcCsr2 | CREATE_FAILED | API: ec2:RunInstances Not authorized for images: [ami-0d8a2f539abbd5763]
I have validated that my license exists and is visible in AWS License Manager. I can see these AMIs are not present in the public AMI list. Are there new AMIs that are needed?
I set up the entire infrastructure in one single dedicated account. The CloudFormation scripts provided are fantastic. I attached a Direct Connect link to the Transit VPC. The Transit VPC has a CIDR range that is an official subnet of our corporate network, so traffic will be routed there. But how does my corporate router learn the routes the Cisco CSR routers know for the spoke VPCs? Do I have to use the BGP ASN to get them, and how are the routes eventually propagated? I don't get the concept. Can somebody help?
Regards
Peter
This file actually does not exist.
What happens when two VGWs are tagged with transitvpc:spoke=true at the same time? The poller creates the necessary config files and fires four S3 puts (two for each VGW, because we have two Cisco CSRs), and the Cisco configurator Lambda function gets executed. Is there a chance of a race condition where the configurator SSHes into the same CSR twice, attempting to manipulate the config concurrently?
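One way to reason about serializing this is per router: group the pending config object keys by target CSR and apply each group in a single SSH session. This is a hypothetical sketch (the real configurator processes one S3 put per invocation, and the key layout `<prefix>/CSR<n>/<file>` is an assumption):

```python
from collections import defaultdict

def group_configs_by_csr(s3_keys):
    """Group pending config object keys by target CSR so each router is
    configured by one session rather than two concurrent ones."""
    by_csr = defaultdict(list)
    for key in s3_keys:
        # Assumed key layout: <prefix>/CSR<n>/<vpn-config-file>
        csr = key.split('/')[-2]
        by_csr[csr].append(key)
    return dict(by_csr)
```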
I noticed the following output after executing ./build-s3-dist.sh jbl-bucket-east
.
[snip]
./build-s3-dist.sh: line 56: zip: command not found
./build-s3-dist.sh: line 58: zip: command not found
./build-s3-dist.sh: line 59: zip: command not found
Clean up build material in /home/jeffl/aws-transit-vpc/deployment/dist/env
Completed building distribution
jeffl@aws-tools:~/aws-transit-vpc/deployment$
If desired, I can submit a PR to add error checking to the shell script so that it gracefully bails out.
It would be neat to see a secondary interface attached for management purposes so you can make configuration changes without the fear of locking yourself out and having to restart the CSR device.
Getting this error when trying to run the build script after making some changes to the Lambda:
./build-s3-dist.sh wf-core-dev-templates
Staring to build distribution
export deployment_dir=/Users/robweaver/Projects/aws-transit-vpc/deployment
mkdir -p dist
cp -f *.template dist
Updating code source bucket in templates with wf-core-dev-templates
sed -i -e s/%%BUCKET_NAME%%/wf-core-dev-templates/g dist/transit-vpc-primary-account-existing-vpc.template
sed -i -e s/%%BUCKET_NAME%%/wf-core-dev-templates/g dist/transit-vpc-primary-account-marketplace.template
sed: dist/transit-vpc-primary-account-marketplace.template: No such file or directory
sed -i -e s/%%BUCKET_NAME%%/wf-core-dev-templates/g dist/transit-vpc-primary-account.template
sed -i -e s/%%BUCKET_NAME%%/wf-core-dev-templates/g dist/transit-vpc-second-account.template
Creating transit-vpc-poller ZIP file
Building transit-vpc-push-cisco-config ZIP file
/Users/robweaver/Projects/aws-transit-vpc/deployment/dist
virtualenv env
New python executable in /Users/robweaver/Projects/aws-transit-vpc/deployment/dist/env/bin/python
Installing setuptools, pip, wheel...done.
source env/bin/activate
pip install /Users/robweaver/Projects/aws-transit-vpc/deployment/../source/transit-vpc-push-cisco-config/. --target=/Users/robweaver/Projects/aws-transit-vpc/deployment/dist/env/lib/python2.7/site-packages/
Processing /Users/robweaver/Projects/aws-transit-vpc/source/transit-vpc-push-cisco-config
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/pr/59hffzcn3xn3rl2tv8tr27t00000gp/T/pip-req-build-If0il5/setup.py", line 4, in <module>
from pip.req import parse_requirements
ImportError: No module named req
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/pr/59hffzcn3xn3rl2tv8tr27t00000gp/T/pip-req-build-If0il5/
adding: pip-10.0.1.dist-info/ (stored 0%)
Saw some references to this being a problem with pip version 10, so I tried downgrading, with no luck.
Hello,
I have one small question about the default VPC. Why does the default VPC not require an Elastic IP to connect to the internet? As per VPC concepts, even though we run an instance in a VPC public subnet, without attaching an Elastic IP or ELB we can't reach that instance from the internet, so how does it work in the default VPC without an Elastic IP?
Regards,
Raja
More of a question than an issue.
Why was this added to the CloudFormation template?
Hi,
I'm trying to reach the internet from an EC2 instance that resides in a VPC connected through a VPN gateway to the Transit VPC with Cisco CSRs. The tunnels are up, and I've pointed 0.0.0.0/0 to the vgw endpoint in the route table, but I'm still unable to reach outside.
Is there any additional configuration required for this to work?
I have run this solution, and when tagging VPCs with the tag transitvpc:spoke set to true, the VPN gateway setup process is never triggered.
The actual CSRs are created, and the CloudFormation template deployment states it completed successfully.
I had a problem with the stack where I had set the S3BucketPrefix to vpnconfigs/prefix_
This broke the Cisco configurator Lambda, because it does a string split on "/" and joins all but the last two components ([:-2]), which drops the "prefix_" part.
transit-vpc-push-cisco-config.py@L128
def getBucketPrefix(bucket_name, bucket_key):
    # Figure out prefix from known bucket_name and bucket_key
    bucket_prefix = '/'.join(bucket_key.split('/')[:-2])
    if len(bucket_prefix) > 0:
        bucket_prefix += '/'
    return bucket_prefix
All this is fine, but it would be nice if the docs specified that the bucket prefix value must have a trailing slash.
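To make the failure mode concrete, here is the same split/join logic applied to both forms of the prefix. The key layout `<prefix>CSR<n>/<file>` is inferred from the [:-2] slice, so treat it as an assumption:

```python
def get_bucket_prefix(bucket_key):
    """Same logic as the configurator's getBucketPrefix, minus the unused
    bucket_name argument."""
    prefix = '/'.join(bucket_key.split('/')[:-2])
    if prefix:
        prefix += '/'
    return prefix

# With a trailing slash the prefix round-trips:
#   'vpnconfigs/prefix_/CSR1/vpn-abc.conf' -> 'vpnconfigs/prefix_/'
# Without one, 'prefix_' fuses into the next path segment and is dropped:
#   'vpnconfigs/prefix_CSR1/vpn-abc.conf'  -> 'vpnconfigs/'
```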
The CloudFormation templates are outdated; they no longer include the eu-west-3 region or the correct AMIs.
The updated template seems to be this one: https://s3.amazonaws.com/awsmp-fulfillment-cf-templates-prod/9f5a4516-a4c3-4cf1-89d4-105d2200230e.c3bb3cc5-974b-480c-53aa-f671857c1c89.template (found on the Marketplace)
Should we deprecate the templates on GitHub, or update them with a warning not to use them anymore?
Would it be possible to add IPv6 support?
We've recently configured our CSRs to allow both SSH-key and TACACS login for management purposes. This no longer automatically drops the SSH session into "enable" mode by default. Does the transit-vpc-cisco-configurator assume enable mode by default or does it execute "enable" as the first command before trying to configure the CSR?
I was hoping to add a syslog endpoint to my CSR configuration to capture its logs and forward them to Sumo Logic.
I've added the configuration line to the user data, but the configuration updates aren't being applied:
ios-config-28="logging host 100.64.127.232 transport udp port 514"
If I log into the CSR and apply the configuration manually it works, but I want it applied when the instance starts up.
Also, ip ssh maxstartups of 1 is invalid; it has to be a number from 2 to 128. I've tried updating this and noticed that it doesn't get applied either.
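For reference, a valid form of the maxstartups line in the user data would look like the following; the value 10 is just an example within the documented 2-128 range, not a recommended setting:

```
ios-config-29="ip ssh maxstartups 10"
```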
Running the ./build-s3-dist.sh build script on macOS directly results in errors of the form:
Unable to import module 'transit_vpc_push_cisco_config/lambda_function': /var/task/bcrypt/_bcrypt.so: invalid ELF header
This seems to be because paramiko uses C wrappers for low-level crypto functions, and the versions of these built on macOS do not run on Linux (so the Lambda function fails to load).
My solution to this was to build the function inside a docker container:
$ cat Dockerfile
# Docker image to build the lambda function
FROM python:2.7
RUN apt-get update && \
    apt-get install zip -y && \
    pip install --upgrade pip && \
    pip install --upgrade setuptools && \
    pip install --upgrade virtualenv
ENTRYPOINT ["/bin/bash", "-c"]
# Run with:
# docker build -t build-lambda . && docker run -it --rm -v $PWD:/tmp/workspace -w /tmp/workspace/deployment build-lambda ./build-s3-dist.sh bucket-name-here
I have a Makefile in my fork of the repo which runs this, if it's worth a pull request.
Either way, something in the README for how to build on OS X would be useful.
We received notice that AWS will be retiring support for Python 2.7 soon. Will there be any support for Python 3?