
amazon-aurora-user-guide's Issues

Missing column entry

In the table at the bottom of the page, under the column Max. bandwidth (Mbps) of local storage, there is no value for this row (just a "--"):

db.r3.8xlarge 32 104 244 -- 10 Gbps

Why is this?

Restoring from a DB Cluster Snapshot does not mention DB instances

The current version of "Restoring from a DB Cluster Snapshot" explains a lot about restoring from an Aurora DB cluster snapshot, but omits one important detail (at least for the CLI): aws rds restore-db-cluster-from-snapshot restores only the cluster, not its instances.

This is obscurely documented in a few places, but I can't find it in the official documentation.

I experienced this confusion myself, and indeed after running aws rds restore-db-cluster-from-snapshot, one needs to run aws rds create-db-instance.
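The two-step flow can be sketched as a small helper that builds the required CLI invocations (all identifiers below are hypothetical, and the instance class is just an example):

```python
def restore_commands(cluster_id, snapshot_id, instance_class="db.r5.large"):
    """Return the two aws CLI invocations needed for a usable restore."""
    # Step 1: restores ONLY the cluster -- no DB instances are created.
    restore_cluster = [
        "aws", "rds", "restore-db-cluster-from-snapshot",
        "--db-cluster-identifier", cluster_id,
        "--snapshot-identifier", snapshot_id,
        "--engine", "aurora-mysql",
    ]
    # Step 2: without this call the restored cluster has no instances
    # and cannot accept connections.
    create_instance = [
        "aws", "rds", "create-db-instance",
        "--db-instance-identifier", f"{cluster_id}-instance-1",
        "--db-instance-class", instance_class,
        "--engine", "aurora-mysql",
        "--db-cluster-identifier", cluster_id,
    ]
    return [restore_cluster, create_instance]

# To execute, run each command list, e.g. with subprocess.run(cmd, check=True).
```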

Do Fault Injection Queries Work in a Cross-Region Replication Setup?

Do the queries/commands listed in "Testing Amazon Aurora Using Fault Injection Queries" work in a cross-region replication setup?

In addition, it would be nice to add an example explaining how and when to use the ALTER SYSTEM CRASH [ INSTANCE | DISPATCHER | NODE ]; command or the failover-db-cluster CLI command.

In general, it would be nice to have more scenario-based examples in "Testing Amazon Aurora Using Fault Injection Queries". This is a very important topic for disaster recovery testing and planning.

Missing release notes for new listed feature

I noticed today a new Aurora MySQL release 3.03.1.

However, it's not in your release notes. [1]
There is another page I just saw today, which also must be brand new, as it requires 3.03.1. [2]

I wasn't able to find the docs for the current version in order to point out a possible inconsistency. Is this repo still being maintained? I also could not find the older 3.0X documents.

Kudos for actually keeping your docs in version control. It was a starting point, but it didn't quite help me track this new release and feature.

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.storage-type.html

\gset not supported in Aurora engineVersion 10.11

Documentation:
"To import from Amazon S3 into Aurora PostgreSQL, your database must be running PostgreSQL version 10.7 or later."
and
"psql=> SELECT aws_commons.create_s3_uri(
'sample-bucket',
'sample-filepath',
'us-west-2'
) AS s3_uri_1 \gset"

Problem:
Invalid command \gset. Try \? for help.

do FAQs and documentation disagree on single-AZ recovery time?

The FAQs are crystal clear: it takes up to 15 minutes to recover if the restored instance must be created in another AZ. The documentation, on the other hand, doesn't specify which case the 10 minutes applies to, leaving the reader to assume it must be within the same AZ. See the excerpts below.

Aurora FAQs: "If you do not have an Amazon Aurora Replica (i.e. single instance), Aurora will first attempt to create a new DB Instance in the same Availability Zone as the original instance. If unable to do so, Aurora will attempt to create a new DB Instance in a different Availability Zone. From start to finish, failover typically completes in under 15 minutes".

Documentation: "If the DB cluster doesn't contain any Aurora Replicas, then the primary instance is recreated during a failure event. A failure event results in an interruption during which read and write operations fail with an exception. Service is restored when the new primary instance is created, which typically takes less than 10 minutes."

Using Amazon Aurora Auto Scaling with Aurora Replicas - Inaccurate Statements

In /doc_source/Aurora.Integrating.AutoScaling.md it states that:

Before you can use Aurora Auto Scaling with an Aurora DB cluster, you must first create an Aurora DB cluster with a primary instance and at least one Aurora Replica. Although Aurora Auto Scaling manages Aurora Replicas, the Aurora DB cluster must start with at least one Aurora Replica

This does not seem to be the case. I can do the following and watch an Aurora Replica get spun up automatically:

  1. Create an Aurora cluster
  2. Add a primary instance to the cluster
  3. Add an Auto Scaling policy with minimum capacity = 1

Whereas if I do the following, an Aurora Replica never gets created:

  1. Create Aurora cluster
  2. Add an Auto Scaling policy with minimum capacity = 1
  3. Add a primary instance to the cluster

It seems more accurate to say that only a primary instance is needed. It also seems like the second scenario where the policy is created before the cluster has any instances should not be possible. The API should prevent the policy from being created.

Binlog Retention Hours documentation incorrect

The Replication documentation states the following about using the mysql.rds_set_configuration stored procedure to set the number of hours to retain binary logs:

CALL mysql.rds_set_configuration('binlog retention hours', 144);

and

If this setting isn't specified, the default for Aurora MySQL is 24 (1 day). If you specify a value for 'binlog retention hours' that is higher than 2160, then Aurora MySQL uses a value of 2160.

If a value like 2160 is used, the following error is thrown:

For binlog retention hours the value must be between 1 and 168 inclusive or be NULL

The rds_set_configuration stored procedure that sets this value has logic preventing values greater than 168 hours.

DELIMITER ;;
CREATE DEFINER=`rdsadmin`@`localhost` PROCEDURE `rds_set_configuration`(IN name VARCHAR(30), IN value INT)
    READS SQL DATA
    DETERMINISTIC
BEGIN
   DECLARE sql_logging BOOLEAN;

   IF name = 'binlog retention hours' AND value NOT BETWEEN 1 AND 168 THEN
       SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'For binlog retention hours the value must be between 1 and 168 inclusive or be NULL';
   END IF;

   SELECT @@sql_log_bin INTO sql_logging;
   SET @@sql_log_bin = OFF;
   UPDATE mysql.rds_configuration
   SET mysql.rds_configuration.value = value
   WHERE BINARY mysql.rds_configuration.name = BINARY name;
   SET @@sql_log_bin = sql_logging;
END;;
DELIMITER ;

This was observed on an instance running version 5.7.mysql_aurora.2.09.2.

Is the documentation here just incorrect or is the RDS provided stored procedure not implemented as intended?
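For reference, the check the procedure above actually enforces can be mirrored in a few lines (a Python sketch of the observed behavior, not AWS code): values outside 1-168 are rejected outright, rather than clamped to 2160 as the documentation claims.

```python
def validate_binlog_retention_hours(value):
    """Mirror the range check enforced by mysql.rds_set_configuration."""
    # NULL (None) disables the setting and is always accepted.
    if value is not None and not (1 <= value <= 168):
        raise ValueError(
            "For binlog retention hours the value must be between "
            "1 and 168 inclusive or be NULL")
    return value
```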

Math on Aurora PostgreSQL max connections needs more explaining

On the page https://github.com/awsdocs/amazon-aurora-user-guide/blob/master/doc_source/AuroraPostgreSQL.Managing.md

It says:

Setting the max_connections parameter to this equation makes sure that the number of allowed connections scales well with the size of the instance. For example, suppose your DB instance class is db.r4.large, which has 15.25 gibibytes (GiB) of memory. Then the maximum connections allowed is 1660, as shown in the following equation:

LEAST((15.25 * 1000000000) / 9531392, 5000) = 1600

But a gibibyte contains 1073741824 (2^30) bytes, so mathematically the answer isn't 1660 as in the text, or 1600 as in the callout, but should be 1717.
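The arithmetic is easy to check directly. A minimal sketch of the LEAST({DBInstanceClassMemory/9531392}, 5000) formula, assuming DBInstanceClassMemory is expressed in bytes and 1 GiB = 2^30 bytes:

```python
GIB = 2**30                          # 1073741824 bytes per gibibyte
memory_bytes = 15.25 * GIB           # db.r4.large memory in bytes
# LEAST(memory/9531392, 5000), truncated to a whole connection count
max_connections = min(int(memory_bytes / 9531392), 5000)
print(max_connections)  # 1717, not 1660 or 1600
```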

Fix/Remove incomplete and useless recommendation

Most of the information in the section Using the Java client library for Data API of the document at https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html doesn't work or is not implemented. It is misleading, and users spend a lot of time figuring out what went wrong. It feels like the feature is not completely implemented and the documentation has not been updated properly. For example, withParameter works, but withParam, as it appears in the documentation, does not.

Aurora serverless v2 private network requirements

Hi!

Is it possible to use Serverless v2 in a spoke VPC with only transit gateway routing to the internet (via a hub VPC)? This needs more explanation.
What subnet settings or VPC endpoints are needed to use Serverless v2 in this private network? Is Auto-assign public IPv4 address required on the subnet?

Thanks.

unable to add IAM role to aurora

When calling the AddRoleToDBCluster operation, I'm getting this error: "You currently can't add a role to Aurora Serverless DB Cluster".

Lambda API returned error: Invalid Request Content.

The stored procedure example on https://github.com/awsdocs/amazon-aurora-user-guide/blob/74b9910506e442963462d371dcb105f492a860fa/doc_source/AuroraMySQL.Integrating.Lambda.md has:


   CONCAT('{"email_to" : "', email_to, 
	           '", "email_from" : "', email_from, 
	           '", "email_subject" : "', subject, 
	           '", "email_body" : "', body, '"}')

This will fail with an Invalid Request Content error if there are any quotes in body.

I think the solution is to use JSON_QUOTE(): https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-quote
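The failure mode is easy to reproduce outside MySQL. A minimal Python sketch, where json.dumps plays the role MySQL's JSON_QUOTE() would play in the stored procedure:

```python
import json

body = 'Line with a "quoted" word'

# Naive concatenation, mirroring the CONCAT() in the stored procedure:
naive = '{"email_body" : "' + body + '"}'
try:
    json.loads(naive)
    naive_is_valid = True
except json.JSONDecodeError:
    naive_is_valid = False   # the embedded quotes break the JSON

# Escaping the value first, as JSON_QUOTE(body) would in MySQL:
safe = '{"email_body" : ' + json.dumps(body) + '}'
parsed = json.loads(safe)    # round-trips cleanly
```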

Thought I should mention this since we use your example and it fails from time to time. =)
unee-t/bz-database#118
