
aws-cloud-practitioner's Introduction

Notes for Exam

Index
  1. Cloud Computing
  2. IAM - Identity and Access management
  3. EC2 - Elastic Compute Cloud
  4. EC2 Instance Storage
  5. Elastic load Balancing & Auto Scaling groups
  6. S3 - Simple Storage Service
  7. Database & Analytics
  8. Compute Services - ECS, Lambda, Batch, LightSail
  9. Deployments & Managing Infrastructure at Scale
  10. Leveraging the AWS global Infrastructure
  11. Cloud Integrations
  12. Cloud Monitoring
  13. VPC & Networking
  14. Machine Learning
  15. Other Services
  16. AWS Architecting & Ecosystem (T.B.C)
  17. Security & Compliance (T.B.C)
  18. Account Management, Billing & Support (T.B.C)

Ways to access the AWS Cloud
  1. Management Console (UI).
  2. AWS SDK - allows your code to access AWS resources.
  3. AWS CLI - command line interface tool.
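
As a quick illustration of the SDK option, here is a minimal boto3 (AWS SDK for Python) sketch that lists your S3 buckets. It assumes credentials are already configured (e.g. via aws configure or an attached IAM role).

  import boto3  # AWS SDK for Python

  s3 = boto3.client("s3")
  for bucket in s3.list_buckets()["Buckets"]:
      print(bucket["Name"])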

Cloud Computing

What is Cloud Computing?

It is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing. Instead of maintaining physical servers or data centers, you can consume services such as storage, computing power, networking, security, and databases from a cloud provider as needed.

Deployment Models of Cloud
  • Private Cloud e.g. (Rackspace, Digital Ocean, Go Daddy)
  • Public Cloud e.g. (AWS, Azure, GCP, Oracle)
  • Hybrid Cloud e.g. (AWS + Private Infra)

Characteristics of Cloud Computing info

  • On-demand self-service: Users can provision resources and use them without human interaction from the service provider.
  • Broad network access: Can be accessed by diverse client platforms.
  • Resource pooling: The cloud provider shares physical resources (servers, storage, network, etc.) among multiple clients in a multi-tenant architecture.
  • Rapid elasticity and scalability: Scale based on demand; dispose of resources when not needed.
  • Measured Service: Pay for what you use.

Six Advantages of Cloud Computing info

  • Trade Capital Expense (CAPEX) for Variable Expense: Pay on demand; you don't own any hardware, which reduces the total cost of ownership. No need to maintain a separate team to handle the infrastructure.
  • Benefit from massive economies of scale: The more customers use the AWS cloud, the lower the price of its services becomes.
  • Stop guessing capacity: Scale up/down based on the demand.
  • Increase speed and agility: Add or remove services anytime.
  • Stop spending money running and maintaining data centers: Leverage the power of the cloud.
  • Go global in minutes: Easily deploy applications in multiple regions around the world.

Types of Cloud Computing info

  • Infrastructure as a Service (IaaS): As an end user you maintain and configure the servers, data storage capacity, networking, database storage and connectivity, and the access and security of the resources.
  • Platform as a Service (PaaS): As an end user you only manage the application and its deployment. The underlying hardware, OS, and OS patches are taken care of by the provider.
  • Software as a Service (SaaS): As an end user your focus is only on using the application. How it is built and whether it scales is not your concern.

Types of cloud computing

Cloud computing types responsibilities

Pricing of the Cloud info

  • Compute : Pay for the compute time.
  • Storage : Pay for the data stored in the cloud.
  • Data transfer OUT of the cloud: Pay for data transferred out of the cloud; data transfer in is free.

IAM - Identity and Access management

What is IAM?

AWS Identity and Access Management (IAM) provides fine-grained access control across all AWS resources. With IAM, you can create users and groups and assign permissions to them.

IAM Users and Groups

IAM Identities info

An IAM identity provides access to an AWS account. Each IAM identity can be associated with one or more policies. Different types of identities under IAM:

  • Users: Members/employees of the organization with pre-defined privileges and an account in the AWS cloud. The root user is the one who registered the account; the rest are IAM users invited or added by the root user.
  • Groups: Consist only of users added to them. A user can be part of one or more groups. Groups cannot be added to other groups.
  • Permissions: Define what privileges a user has; in short, which AWS resources an individual or a service can access or work with, e.g. S3, EC2, Lambda, EBS, etc.
  • Role: A role is a logical entity inside AWS which can be assumed by any user/service. A role has policies/permissions attached to it and should be assumed by any service/user that needs them.
    Scenarios when you need to create roles
    • Lambda Role: Lambda needs to access an S3 bucket to store or retrieve files.
    • EC2 Role: An EC2 instance needs to access an S3 bucket to store or retrieve files.
    • SQS Role: The SQS service needs to send messages to Lambda for further processing.

Access Management - IAM Policy structure info

Policies are JSON documents that are attached to a group, role, or user. Policies define the permissions a user has. You should assign only the permissions that users actually require (least privilege).

A policy includes a Version, an optional Id, and a Statement. Statement is a list and must contain at least one statement for the policy to be valid. It manages the permissions required by the user or service for various AWS resources.

Example policy allowing any principal to perform s3:GetObject on the objects in a bucket:

  {
    "Id": "CustomS3ObjectAccessPolicy2072022",
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "StmtForCustomS3ObjectAccessPolicy2072022",
        "Effect": "Allow",
        "Principal": "*",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::demo-learning-web-bucket/*"
      }
    ]
  }
  • Id: The identifier of the policy. AWS recommends using a UUID for uniqueness.
  • Version: Specifies which syntax rules to follow for the policy structure. The latest version is "2012-10-17"; the older one was "2008-10-17". Policy variables were introduced in the latest version.
  • Statement: Contains a single statement or an array of individual statements.
  • Sid: Unique identifier for the statement.
  • Effect: Possible values are Allow/Deny.
  • Principal: Who needs access; the ARN of the user or service. Can be a single value or a list.
  • Action: The actions allowed on the resource. In the above example you are only allowing the GetObject action from S3.
  • Resource: Limits the statement to individual resources created under a service. In the above example you are allowing access to the objects in a single bucket, "demo-learning-web-bucket", created under the AWS S3 service.
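
A minimal boto3 sketch of creating a policy like the one above. Since the example is a resource-based policy (it has a Principal), the identity-based variant below drops the Principal element; the policy name reuses the example's Id. This is an illustration, not the only way to create policies.

  import json
  import boto3

  iam = boto3.client("iam")

  # Identity-based variant: when a policy is attached to a user, group,
  # or role, the Principal element is omitted -- the principal is whoever
  # the policy is attached to.
  policy_document = {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "StmtForCustomS3ObjectAccessPolicy2072022",
              "Effect": "Allow",
              "Action": ["s3:GetObject"],
              "Resource": "arn:aws:s3:::demo-learning-web-bucket/*",
          }
      ],
  }

  resp = iam.create_policy(
      PolicyName="CustomS3ObjectAccessPolicy2072022",
      PolicyDocument=json.dumps(policy_document),
  )
  print(resp["Policy"]["Arn"])  # ARN used when attaching the policy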

Advanced

  1. AWS CloudShell: This service is available only in a few regions. It provides an in-browser terminal to interact with the AWS account and its services, an alternative to installing the AWS CLI locally.
  2. IAM Security Tools
    1. IAM Credential Report: Lists all users in the account and the status of their credentials, such as access keys, MFA status, password, last login, etc.
    2. IAM Access Advisor: Shows the permissions granted to a user and when those services were last accessed.
  3. STS - Security Token Service: AWS provides AWS Security Token Service (AWS STS) as a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users you authenticate (federated users). See the sketch after this list.
  4. Cognito: Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party such as Facebook, Amazon, Google or Apple.
  5. Directory Service: AWS Directory Service provides multiple ways to use Microsoft Active Directory (AD) with other AWS services. Directories store information about users, groups, and devices, and administrators use them to manage access to information and resources. AWS Directory Service provides multiple directory choices for customers who want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)–aware applications in the cloud. It also offers those same choices to developers who need a directory to manage users, groups, devices, and access.
  6. AWS Identity Center: AWS IAM Identity Center (successor to AWS Single Sign-On) helps you securely create or connect your workforce identities and manage their access centrally across AWS accounts and applications. IAM Identity Center is the recommended approach for workforce authentication and authorization on AWS for organizations of any size and type.
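
A minimal STS sketch, assuming a role ARN (hypothetical here) that your current identity is allowed to assume; the temporary credentials back a scoped-down session.

  import boto3

  sts = boto3.client("sts")

  # Hypothetical role ARN -- replace with a role you may assume.
  creds = sts.assume_role(
      RoleArn="arn:aws:iam::123456789012:role/demo-read-only",
      RoleSessionName="practitioner-demo",
      DurationSeconds=900,  # temporary credentials, valid 15 minutes
  )["Credentials"]

  # Build a session from the temporary credentials.
  session = boto3.Session(
      aws_access_key_id=creds["AccessKeyId"],
      aws_secret_access_key=creds["SecretAccessKey"],
      aws_session_token=creds["SessionToken"],
  )
  print(session.client("sts").get_caller_identity()["Arn"])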

EC2 - Elastic Compute Cloud (IaaS)

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. It eliminates the need to invest in hardware up front, so you can develop and deploy applications faster. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing the need to forecast traffic.

Instance Start-up

The EC2 service lets you select the following configuration when starting up a new EC2 instance:

  • Operating system: Windows, Linux, macOS
  • Compute power & CPU cores
  • System memory or RAM
  • Storage space in GB
  • Static IP addresses assigned to the machine
  • Security groups, i.e. the ports on which to allow or disallow traffic
  • EC2 user data - shell script commands that install or update packages when the virtual machine is first created; the script is executed only once. Can be used to set up a LAMP stack, git tools, OS updates, etc. (see the sketch below)
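
A sketch of launching an instance with user data via boto3; the AMI id is hypothetical, and the user-data script assumes an Amazon Linux image where it installs Apache on first boot.

  import boto3

  ec2 = boto3.client("ec2")

  # Runs once, on first boot (hypothetical Amazon Linux setup).
  user_data = """#!/bin/bash
  yum update -y
  yum install -y httpd
  systemctl enable --now httpd
  """

  resp = ec2.run_instances(
      ImageId="ami-0123456789abcdef0",  # hypothetical AMI id
      InstanceType="t2.micro",
      MinCount=1,
      MaxCount=1,
      UserData=user_data,
  )
  print(resp["Instances"][0]["InstanceId"])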

EC2 instance type info

  1. General Purpose: Provide a balance of compute, memory, and networking resources; good for application web servers.
  2. Compute Optimised: For applications that require high processing power, e.g. batch processing, machine learning, etc.
  3. Memory Optimised: For applications that process large data sets in memory, e.g. Redis cache, non-relational databases, Solr search cache, etc. Best suited for temporary data that can be recreated at any time if lost.
  4. Accelerated Computing: Use hardware accelerators, or co-processors, to perform functions such as floating point number calculations, graphics processing, or data pattern matching more efficiently than is possible in software running on CPUs.
  5. Storage Optimised: Perform sequential read/write operations on large datasets. These instances are fine-tuned to deliver many low-latency, random I/O operations per second for any application.

EC2 Security groups info

Security groups act as a firewall for your EC2 instances, controlling incoming and outgoing traffic. You configure the ports and the type of traffic that is allowed to reach an EC2 instance. Incoming traffic is configured via inbound rules and outgoing traffic via outbound rules. By default only outgoing traffic is allowed on a new security group. You can attach more than one security group to any EC2 instance. A sketch of creating one via boto3 follows the figure below.

Security Groups overview
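
A minimal sketch, assuming the default VPC, that creates a security group and adds an inbound rule for HTTP; outbound traffic is already allowed by default.

  import boto3

  ec2 = boto3.client("ec2")

  sg = ec2.create_security_group(
      GroupName="web-sg",
      Description="Allow inbound HTTP",
  )

  # Inbound rule: TCP port 80 from anywhere.
  ec2.authorize_security_group_ingress(
      GroupId=sg["GroupId"],
      IpPermissions=[{
          "IpProtocol": "tcp",
          "FromPort": 80,
          "ToPort": 80,
          "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from anywhere"}],
      }],
  )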

EC2 - Instance connect

Allows you to SSH into an EC2 instance through a terminal session started in the browser. Currently only works with Linux AMIs.

How do roles work for EC2 instances?

An application running on an EC2 instance may need access to the S3 service. Instead of putting an access key id/secret on the EC2 instance, which would be a bad idea (anyone with access to the instance can see it), you create an IAM role and attach it to the instance. The role has all the necessary policies attached so that it can access the S3 service; the application then uses the role's temporary credentials to access S3, as the sketch below shows.

EC2 role representation
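
With an instance role attached, no keys appear in code or config; boto3 fetches the role's temporary credentials from the instance metadata service automatically. A minimal sketch, reusing the bucket name from the policy example:

  import boto3

  # No access keys needed on an EC2 instance with an attached role.
  s3 = boto3.client("s3")
  resp = s3.list_objects_v2(Bucket="demo-learning-web-bucket")
  for obj in resp.get("Contents", []):
      print(obj["Key"])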

Instance purchasing options info

Amazon EC2 provides the following purchasing options, which enable cost optimization as needed:

  • On-Demand Instances (Regional/Zonal): Pay, by the second, for the instances that you launch.
  • Savings Plans (Regional/Zonal): Reduce your Amazon EC2 costs by committing to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. Further usage is priced at On-Demand rates.
  • Reserved Instances (Regional/Zonal): Reduce your Amazon EC2 costs by committing to a consistent instance configuration, including instance type, region, and OS, for a term of 1 or 3 years.
  • Convertible Reserved Instances (Regional/Zonal): Allow changing the EC2 instance type, instance family, OS, scope, and tenancy.
  • Scheduled Reserved Instances (Regional/Zonal): Reserve capacity that is scheduled to recur daily, weekly, or monthly, with a specified start time and duration, for a one-year term. After you complete your purchase, the instances are available to launch during the time windows that you specified.
  • Spot Instances (Regional/Zonal): Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly.
  • Dedicated Hosts (Specific Region): Pay for a physical host that is fully dedicated to running your instances, and bring your existing per-socket, per-core, or per-VM software licenses to reduce costs. The most expensive option.
  • Dedicated Instances (Regional/Zonal): Pay, by the hour, for instances that run on single-tenant hardware.
  • Capacity Reservations (Zonal): Reserve capacity for your EC2 instances in a specific Availability Zone for any duration.

EC2 Instance Storage

Amazon EC2 provides flexible, cost-effective, and easy-to-use data storage options for your EC2 instances. Each option has a unique combination of performance and durability.

  1. EBS - Elastic Block Store

    Amazon EBS is a network storage drive that can be attached to an EC2 instance for storing data which requires frequent updates. An EBS volume can be attached to only one EC2 instance at a time, but you can attach multiple EBS volumes to one EC2 instance. These drives are confined to a given availability zone, i.e. you cannot attach a drive in us-east-1a to an EC2 instance running in us-east-1b. To create backups of attached EBS volumes you create snapshots, which can be restored onto an EC2 instance in another region or AZ. info

    Elastic block storage representation

  2. EBS - Snapshot

    A backup of an EBS volume is called a snapshot. A snapshot can be taken while the EBS volume is attached to the EC2 instance, but it is good practice to detach the volume first. Snapshots can be copied across regions and AZs to attach the restored volume to other EC2 instances (see the sketch after this list). info

    EBS Snapshot Features
    • Move a snapshot to the archive tier, which can reduce cost by up to 75%. Restoring from the archive tier can take 24-48 hours.
    • Deleted snapshots can be recovered if a retention period is set up.
  3. AMI - Amazon Machine Images

    AMIs are images, created and maintained by AWS, that are used to launch EC2 instances, similar to an operating system image. You can launch multiple instances from the same AMI or from different AMIs.

    You can create your own AMI by launching an EC2 instance, customizing it to your requirements, and then creating an image from it. The created image is specific to a region but can be copied across multiple regions. info

  4. EC2 - Image Builder

    Automates the creation, update, test, and distribution cycle of AMIs or container images. This service can run on a schedule: daily, weekly, bi-weekly, or monthly. You pay only for the resources used to create the image and the storage space the created image requires. The resources required include the EC2 instance that runs the user-supplied (bootstrap) commands to build the final image. Imagine this as creating a Docker image locally using a Dockerfile: you need an environment to build it. As it is a regional service, you can distribute the image to any region. info

    Image builder process

  5. EC2 - Instance Store

    An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

    An instance store consists of one or more instance store volumes exposed as block devices. The size of an instance store as well as the number of devices available varies by instance type.info

  6. EFS - Elastic File System

    Amazon EFS provides scalable file storage for use with Amazon EC2. You can use an EFS file system as a common data source for workloads and applications running on multiple instances. EFS can only be attached to Linux instances. info

  7. EFS-IA - Elastic File System Infrequent Access

    Storage class optimized to reduce the cost of storage for files that are not accessed frequently. Costs are 92% lower than the EFS Standard class. Set a lifecycle policy to move files to EFS-IA if they have not been accessed in x days.
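
A minimal snapshot sketch for item 2 above, assuming a hypothetical volume id; it also shows copying the snapshot to another region for cross-region restore.

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  # Hypothetical id of an attached EBS volume.
  snap = ec2.create_snapshot(
      VolumeId="vol-0123456789abcdef0",
      Description="Nightly backup",
  )
  print(snap["SnapshotId"], snap["State"])

  # Copy the snapshot into another region.
  ec2_dr = boto3.client("ec2", region_name="eu-west-1")
  copy = ec2_dr.copy_snapshot(
      SourceRegion="us-east-1",
      SourceSnapshotId=snap["SnapshotId"],
  )
  print(copy["SnapshotId"])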


Elastic load Balancing & Auto Scaling groups

  1. Scalability & High Availability

    Scalability is a measurement of a system's ability to grow to accommodate an increase in demand. High availability means running your system/application in multiple regions or availability zones to survive failures or hardware loss. A system or infrastructure can be scalable in two ways:

    • Vertical Scalability: Increasing the size of the instance or resource attached to it viz. RAM, CPU, Storage etc
    • Horizontal Scalability: Increase the number of instances running info
  2. Elasticity

    Any system which can scale up or down depending on the load. Elasticity is the ability to acquire resources as you need them and release resources when you no longer need them. In the cloud, you want to do this automatically.

  3. Agility

    Ability to add new resources and hardware at ease.

  4. Load Balancers

    A load balancer distributes workloads across multiple compute resources, such as virtual servers. Using a load balancer increases the availability and fault tolerance of your applications.

    Compute resources can be added to or removed from the load balancer as needs change, without disrupting the overall flow of requests to the applications.

    You can configure health checks, which monitor the health of the compute resources, so that the load balancer sends requests only to the healthy ones.info

    Load Balancers

    Type of Load Balancers
    • Application Load Balancer: Application Load Balancer operates at the request level (layer 7), routing traffic to targets (EC2 instances, containers, IP addresses, and Lambda functions) based on the content of the request. Ideal for advanced load balancing of HTTP and HTTPS traffic, Application Load Balancer provides advanced request routing targeted at delivery of modern application architectures, including microservices and container-based applications. It simplifies and improves the security of our application, by ensuring that the latest SSL/TLS ciphers and protocols are used at all times.
    • Network load Balancer: Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC, based on IP protocol data. Ideal for load balancing of both TCP and UDP traffic, Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone. It is integrated with other popular AWS services such as Auto Scaling, Amazon EC2 Container Service (ECS), Amazon CloudFormation, and AWS Certificate Manager (ACM).
    • Gateway load Balancer: Gateway Load Balancer helps us easily deploy, scale, and manage your third-party virtual appliances. It gives you one gateway for distributing traffic across multiple virtual appliances while scaling them up or down, based on demand. This decreases potential points of failure in your network and increases availability.
  5. Attaching Load Balancer for EC2 instances

    Steps :

    • Launch 2 EC2 instances with a single web page (index.html) which identifies the instance which is serving the current request.
    • Create an Elastic Load Balancer of type Application; attach a security group to it which allows only HTTP traffic (port 80).
    • Make sure a similar security group is also attached to the EC2 instances to allow HTTP traffic.
    • Create a Target Group which registers the two EC2 servers as targets.
    • Assign this Target Group to the ALB.
    • Copy the DNS name attached to the ALB, open it in the browser and verify if the correct web pages are served.
  6. Auto Scaling Group

    Traffic received by an application can increase at any time. An Auto Scaling group contains a collection of EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. The main goal of an Auto Scaling group is to scale out (add more instances) when the load on the application increases and scale in (remove instances) when it decreases. It also ensures that a minimum number of EC2 instances is always running and replaces faulty instances with healthy ones. info

    Auto Scaling Group

  7. Scaling Strategies
    1. Manual Scaling: Change the ASG settings manually.
    2. Dynamic Scaling: Respond to changing demand.
      • Simple/Step Scaling: With step scaling and simple scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that invoke the scaling process.
      • Target Tracking Scaling: Specify a target average value for a metric of the application, e.g. scale to keep CPU utilization at 60% (see the sketch after this list).
      • Predictive Scaling: Use predictive scaling to increase the number of EC2 instances in your Auto Scaling group in advance of daily and weekly patterns in traffic flows.
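
A sketch of a target tracking policy via boto3, assuming a hypothetical Auto Scaling group name; it holds average CPU at the 60% target from the example above.

  import boto3

  autoscaling = boto3.client("autoscaling")

  autoscaling.put_scaling_policy(
      AutoScalingGroupName="demo-web-asg",  # hypothetical ASG name
      PolicyName="keep-cpu-at-60",
      PolicyType="TargetTrackingScaling",
      TargetTrackingConfiguration={
          "PredefinedMetricSpecification": {
              "PredefinedMetricType": "ASGAverageCPUUtilization",
          },
          "TargetValue": 60.0,  # scale out/in to hold this average
      },
  )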

S3 - Simple Storage Service

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Often called the backbone of AWS, it is promoted as an "infinitely scaling" storage service. Many websites hosted on AWS use S3 for storage, as do many AWS services. info

Use Cases
  • Backup and Storage
  • Disaster Recovery
  • Application hosting
  • Files

Overview of S3 buckets

  • Files stored in S3 are called objects. Objects have a key associated with them, which is the full path needed to retrieve the file from a given bucket.
    Example: s3://my_bucket/my_file.txt => key is my_file.txt
  • Buckets created in S3 must have unique name globally (across all regions).
  • Buckets are created at region level.
  • Maximum upload size is 5TB
  1. S3 Security: Access Control on S3 buckets and the objects contained in it.
    • User Based: IAM Policies attached to a user confined to only specific S3 features.
    • Resource (Bucket Policies): Permissions attached to the bucket apply to all the objects in the bucket owned by the bucket owner. If the ACL option is disabled, all objects contained inside the bucket are owned by the account/bucket owner, including those uploaded by other AWS accounts. info
    • Resource (Object ACL): Finer grain control on the objects uploaded in S3 bucket.
    • Resource (Bucket ACL): Finer grain control on the Bucket.
  2. S3 Public Bucket Policy: To allow public access to files inside a bucket (so they can be fetched in a browser or by any service), do the following: when creating a new bucket or editing an existing one, uncheck the option which reads "Block all public access", then add a bucket policy (the Policy Generator helps here) which allows all principals (services/users) to read the S3 objects.
          {
            "Id": "Policy1661705525639",
            "Version": "2012-10-17",
            "Statement": [
              {
                "Sid": "Stmt1661705522813",
                "Action": [
                  "s3:GetObject"
                ],
                "Effect": "Allow",
                "Resource": "arn:aws:s3:::demo-learning-web-bucket-replica/*",
                "Principal": "*"
              }
            ]
          }
  3. S3 Versioning: Versions S3 objects; it is enabled at the bucket level. Overwriting the same object creates a new version. Versioning helps you recover from accidental deletion or roll back to a previous version. Important notes: versioning can be enabled/suspended at any time, either when creating a bucket or afterwards. If enabled after bucket creation, all pre-existing objects will have the version "null". Suspending versioning does not delete the previous versions. (See the sketch after this list.)
  4. S3 Access Logs: Log all requests made to a S3 bucket from any service or account. The data is stored in another S3 bucket, used by data analysis tools to find access/usage patterns.
  5. S3 Replication: Copy the contents of an S3 bucket to another S3 bucket. Versioning must be enabled to achieve replication. Copying happens asynchronously; the buckets can be in different accounts.
    • Cross Region Replication: Copy data to another bucket in different region.
    • Same Region Replication: Copy data to another bucket in same region.
  6. S3 Storage Classes: Amazon S3 offers a range of storage classes that you can choose from, based on the data access, resiliency, and cost requirements of your workloads. S3 storage classes are purpose-built to provide the lowest cost storage for different access patterns. S3 storage classes are ideal for virtually any use case, including those with demanding performance needs, data residency requirements, unknown or changing access patterns, or archival storage. info

    Note: Check the official AWS docs (https://aws.amazon.com/s3/storage-classes/) for an in-depth understanding.

    • S3 Standard - General Purpose (99.99% availability): Used for frequently accessed data. Low latency and high throughput. Use cases: web applications, dynamic applications, big data analytics.
    • S3 Standard - Infrequent Access (99.9% availability): Used for less frequently accessed data that requires rapid retrieval when needed. Minimum storage duration is 30 days. Use cases: long term storage, backups, data store for disaster recovery.
    • S3 One Zone - Infrequent Access (99.5% availability): Used for less frequently accessed data that requires rapid retrieval when needed. Stores data in a single AZ, unlike the other classes which store data in a minimum of 3 AZs. Costs 20% less than S3 Standard-IA. Minimum storage duration is 30 days. Use case: storing secondary backups.
    • S3 Glacier - Instant Retrieval (99.9% availability): Archive storage class that delivers the lowest cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. Can save up to 68% compared to S3 Standard-IA. Minimum storage duration is 90 days. Use cases: news media, user generated content, etc.
    • S3 Glacier - Flexible Retrieval (99.99% availability): Archive storage class. Costs 10% less than S3 Glacier Instant Retrieval. For data accessed once or twice a year. Differs from S3 Standard-IA and S3 One Zone-IA in that archived data is not retrieved rapidly: configurable retrieval times from minutes to hours, with free bulk retrievals. Minimum storage duration is 90 days. Use cases: disaster recovery, offsite data storage, etc.
    • S3 Glacier - Deep Archive (99.99% availability): Archive storage class. Data retrieval can take around 12 hours. Minimum storage duration is 180 days. Use cases: disaster recovery, offsite data storage, etc.
    • S3 Intelligent-Tiering (99.9% availability): Automatically moves objects/files to other storage classes based on the usage pattern. Can save cost by moving files to the correct tier. Any use case can be considered.
  7. S3 Object Lock & Glacier Vault Lock: Adopt a WORM (Write Once Read Many) model. Block an object version from deletion for a specified amount of time.
  8. S3 Encryption: Three options to consider for uploaded files. No encryption: nothing is encrypted. Server-Side Encryption: the file is encrypted by AWS after it is uploaded to S3. Client-Side Encryption: the user encrypts the file with a private key before uploading.
  9. AWS Snow Family: Offline devices to perform data migrations into and out of AWS. If it would take weeks to transfer data to AWS over the network, you should use Snowball devices.

    Devices:

    • AWS Snowcone: 8 TB usable HDD storage, 14 TB usable SSD storage, 4 vCPUs, 4 GB memory; no storage clustering; DataSync pre-installed; 256-bit encryption; not HIPAA eligible.
    • AWS Snowball Edge Storage Optimized: 80 TB usable HDD storage, 1 TB usable SSD storage, 40 vCPUs, 80 GB memory; storage clustering with 5-10 nodes; 256-bit encryption; HIPAA eligible.
    • AWS Snowball Edge Compute Optimized: 42 TB usable HDD storage, 7.68 TB usable SSD storage, 52 vCPUs, 208 GB memory; storage clustering with 5-10 nodes; 256-bit encryption; HIPAA eligible.
    • AWS Snowmobile: 100 PB usable storage; 256-bit encryption; HIPAA eligible.
  10. AWS Edge Locations: Process or generate data at edge locations. These locations may have limited or no internet access and limited computing power. Examples: transport services, ships, underground mining, etc.
  11. AWS Storage Gateway: AWS Storage Gateway is a set of hybrid cloud storage services that provide on-premises access to virtually unlimited cloud storage.
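
A minimal versioning sketch for item 3 above, assuming the bucket from the earlier examples already exists.

  import boto3

  s3 = boto3.client("s3")
  bucket = "demo-learning-web-bucket"

  # Versioning is a bucket-level setting and can be enabled at any time.
  s3.put_bucket_versioning(
      Bucket=bucket,
      VersioningConfiguration={"Status": "Enabled"},
  )

  # Overwriting the same key now creates a new version.
  s3.put_object(Bucket=bucket, Key="notes.txt", Body=b"v1")
  s3.put_object(Bucket=bucket, Key="notes.txt", Body=b"v2")
  for v in s3.list_object_versions(Bucket=bucket, Prefix="notes.txt")["Versions"]:
      print(v["VersionId"], v["IsLatest"])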

Databases & Analytics

  1. AWS RDS & Aurora: RDS stands for Relational Database Service and is a managed database service provided by AWS. It allows provisioning database engines such as MySQL, Postgres, MariaDB, Oracle, Aurora, etc. Aurora is a proprietary database engine built by AWS; AWS claims it performs 5x better than MySQL on RDS and 3x better than Postgres on RDS.


    Why use RDS? You can install any database service on EC2 instances yourself, right?

    • RDS is a managed service.
    • AWS maintains & updates OS running the RDS instance.
    • Features such as auto backups and restore.
    • Read replicas for improved read performance.
    • Multi AZ setup for Disaster recovery.
    • Scaling capability.
    • Storage backed by EBS.
    • Dashboards for monitoring health.

    RDS architecture

  2. RDS Deployments options: Different ways you can configure RDS instances to serve any request.
    • Read Replicas: Amazon RDS Read Replicas provide enhanced performance and durability for Amazon RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.

      RDS deployment architecture

    • Multi AZ: In an Amazon RDS Multi-AZ deployment, Amazon RDS automatically creates a primary database (DB) instance and synchronously replicates the data to an instance in a different AZ. When it detects a failure, Amazon RDS automatically fails over to a standby instance without manual intervention. info
    • Multi-Region: Similar to a Multi-AZ deployment, but provides better region-wise application performance, as there can be multiple read replicas set up for a given primary database.
  3. ElastiCache: An in-memory database; the supported engines are Memcached and Redis. Data stored here is not permanent, hence it should only be used as a cache. Database-intensive read tasks can be shifted to an in-memory cache for better performance, as it has low latency.

    Strategy: for any item that is written to the DB, write/update it in the cache as well; always read from the cache first; if an item is deleted from the DB, delete it from the cache too.

  4. DynamoDB: A fully managed NoSQL database. Data stored in DynamoDB is encrypted at rest by default. Low in cost and has auto scaling capabilities. Data is stored on SSDs and replicated across multiple Availability Zones in an AWS Region, providing built-in high availability and data durability. (See the sketch after this list.)

    DynamoDB Accelerator (DAX): Cache specific to DynamoDB. Caches items which are frequently accessed.

    DynamoDB Global Tables: Make a DynamoDB table accessible with low latency across multiple regions.

  5. Redshift: Based on PostgreSQL, but not used for OLTP; used for OLAP - online analytical processing. Columnar storage of data instead of rows; 10x better performance than other data warehousing tools. Provides a SQL interface to execute queries.
  6. Amazon EMR: Stands for "Elastic MapReduce". Creates a Hadoop cluster to analyze and process vast amounts of data. The cluster can be made of multiple EC2 instances; EMR takes care of provisioning and configuring them. Provides auto-scaling and is integrated with Spot instances.
  7. Athena: Serverless query service to perform analytics against S3 objects. Use standard SQL to query the files. Supports CSV, JSON, and ORC files. Use case: looking for a pattern in log files.
  8. QuickSight: Serverless machine learning-powered business intelligence service to create dashboards. Can source data from RDS/Redshift/DynamoDb etc
  9. DocumentDB: AWS's MongoDB-compatible managed database, with many performance improvements added by the AWS team.
  10. Neptune: Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Neptune is a purpose-built, high-performance graph database engine that is optimized for storing billions of relationships and querying the graph with milliseconds latency. Neptune supports the popular graph query languages Apache TinkerPop Gremlin and W3C’s SPARQL, allowing you to build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security.
  11. Amazon QLDB: Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. It is used to track all application data changes and maintain a complete and verifiable history of changes over time.
  12. Managed Blockchain: Blockchain makes it possible to build applications where multiple parties can execute transactions without the need for a trusted, central authority.
  13. AWS Glue: Managed ETL (Extract Transform Load) service. Good to prepare and transform data (script) for analytics.
  14. DMS: Database Migration Service. Extracts data from a source database and restores it to an AWS managed database.
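
A minimal DynamoDB sketch for item 4 above, assuming a hypothetical table named "Users" with partition key "user_id" already exists.

  import boto3

  dynamodb = boto3.resource("dynamodb")
  table = dynamodb.Table("Users")  # hypothetical table

  # Write an item, then read it back by its key.
  table.put_item(Item={"user_id": "u-42", "name": "Ada", "plan": "free"})
  resp = table.get_item(Key={"user_id": "u-42"})
  print(resp["Item"])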

Compute Services - ECS, Lambda, Batch, LightSail

  1. ECS: Amazon Elastic Container Service (Amazon ECS) is a highly scalable and fast container management service. Use it to run, stop, and manage containers on a cluster. With Amazon ECS, containers are defined in a task definition that you use to run an individual task or tasks within a service. In this context, a service is a configuration that you use to run and maintain a specified number of tasks simultaneously in a cluster.

    Features of ECS

    • Integration with IAM.
    • Integration with other AWS services.
    • Integration with CI/CD tools and processes which monitors source code and build new images, then pushes to the registry.
    • Support for sending container instance logs to CloudWatch.
  2. Fargate: Similar to ECS; the only difference is that the infrastructure is managed by AWS, hence you do not have to plan for capacity, servers, disk space, etc. AWS will run the containers with the supplied RAM/CPU configuration.
  3. ECR: Elastic Container Registry, a private registry to store Docker images.
  4. Serverless: Do not manage any infrastructure; just deploy the code and use the service. Billed on a pay-as-you-go pricing model.
  5. Lambda: Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, and logging. With Lambda, you can run code for virtually any type of application or backend service (see the handler sketch after this list). info

    Examples/Use-cases

    • Lambda connected to an API gateway which performs authentication tasks.
    • Connected with cloud watch event rule "cron job".
    • Push/Pull data from Snowflake.
  6. API Gateway: Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud.

    API Gateway allows you to create the following API types: HTTP, WebSocket, REST, and private REST APIs (accessible only from a VPC).

  7. AWS Batch: Managed service providing batch processing at scale, with the ability to access large amounts of computing power. Can run thousands of batch jobs efficiently. The Batch service provisions EC2/Spot instances dynamically. Batch jobs are packaged as Docker images which run on ECS inside the provisioned EC2 servers.
  8. LightSail: Provides virtual servers, databases, and networking for users who do not wish to get into the details of EC2 instance handling or who have less cloud experience. Provides high availability but no auto scaling, and has limited integrations with other AWS services. E.g. hosting a LAMP stack; good for dev/test sites.
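
A minimal Lambda handler sketch in Python, assuming the function's handler is configured as "handler.lambda_handler" and the trigger passes a JSON event.

  # handler.py
  import json

  def lambda_handler(event, context):
      """Invoked by the Lambda runtime: `event` carries the trigger
      payload (e.g. an API Gateway request), `context` carries runtime
      metadata such as the remaining execution time."""
      name = event.get("name", "world")
      return {
          "statusCode": 200,
          "body": json.dumps({"message": f"hello {name}"}),
      }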

Deployments & Managing Infrastructure at Scale

  1. CloudFormation: Have all the infrastructure as YAML code (IaC templates). Create a template that describes all the AWS resources that are needed (like Amazon EC2 or Amazon RDS DB instances), and CloudFormation takes care of provisioning and configuring those resources for you. No manual intervention is needed to create and configure AWS resources or to figure out what's dependent on what; CloudFormation handles all the configuration.
  2. Cloud Development Kit (CDK): Instead of writing cloud (IaC) templates in YAML format (CloudFormation templates), CDK allows you to write them in a language of your choice such as Python, TypeScript, Java, .NET, etc., and this service compiles the code into CloudFormation templates (see the sketch after this list).

    CDK process

  3. Elastic Beanstalk: Managed service which can be used to deploy/host your application in the AWS cloud. Instance and OS configuration is handled by the Beanstalk service. Provides services such as capacity provisioning, load balancing, and auto scaling.

    AWS Beanstalk vs LightSail.

  4. CodeDeploy: Managed Service which can be used to deploy code automatically to other services such as EC2, Lambda, Fargate and on-premises servers. CodeDeploy can deploy application code that runs on a server which is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. It scales with the infrastructure, as in it can deploy to single or multiple instances without much delay.
    • Rapidly release new features.
    • Update AWS Lambda function versions.
    • Avoid downtime during application deployment.
  5. CodeCommit: Source control service that hosts GIT based repositories. Makes it easy to collaborate with other users.
  6. CodeBuild: Managed Code building service in the cloud. Can pull data from CodeCommit, compile it, run unit tests and create deployable artifacts.
    • Fully managed, serverless.
    • Continuously scalable and highly available.
    • Only pay for the build time.
  7. CodePipeline: CodePipeline is a continuous delivery service that automates the building, testing, and deployment of your software into prod/dev/test environments. Earlier you saw CodeDeploy, CodeCommit, and CodeBuild; wondering how all of them can be connected? CodePipeline allows you to compose all of the above and other services into the classic CI/CD flow.

    AWS CodePipeline

  8. CodeArtifact: An artifact management system, usually used by a pipeline stage to store and retrieve artifacts.

    Example: when running test cases in a code repository, the test stage creates a test report file and stores it in the artifact repository.

    Example 2: integrate the build process with a linting tool such as SonarLint and store the reports of all code violations in the artifact repository.

  9. CodeStar: An easier way to quickly set up CodeCommit, CodePipeline, CodeBuild, CodeDeploy, EC2, and other services. This service provides a UI which allows you to use the above-mentioned services together.
  10. Cloud9: A cloud IDE for writing, running, and debugging code. It opens in a browser; a user can start working on any project without any prerequisite code or development environment setup. Allows code collaboration in real time.
  11. Systems Manager: AWS service to control/monitor/debug/update/patch the overall provisioned application infrastructure or the different AWS services. It helps administrators to investigate issues with any of the service or a group of services and remediate them by rolling out patches/updates.

    Example: to monitor a fleet of EC2 instances, you first need the SSM Agent installed on all of the instances so that they can be controlled at once using the Systems Manager service.

    Systems Manager

  12. SSM Session Manager: Start a secure shell session to any of the EC2 instances managed by the Session Manager service.
  13. AWS OpsWorks: AWS OpsWorks is a configuration management service that helps to configure and operate applications in a cloud enterprise by using Puppet or Chef. AWS OpsWorks Stacks and AWS OpsWorks for Chef Automate allow you to use Chef cookbooks and solutions for configuration management, while OpsWorks for Puppet Enterprise lets you configure a Puppet Enterprise master server in AWS. Puppet offers a set of tools for enforcing the desired state of your infrastructure and automating on-demand tasks.
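
A minimal CDK sketch in Python for item 2 above, assuming aws-cdk-lib is installed (pip install aws-cdk-lib constructs) and the CDK CLI drives deployment; it synthesizes a CloudFormation template that creates one versioned S3 bucket.

  # app.py -- run `cdk synth` / `cdk deploy` against this app.
  import aws_cdk as cdk
  from aws_cdk import aws_s3 as s3

  class WebBucketStack(cdk.Stack):
      def __init__(self, scope, construct_id, **kwargs):
          super().__init__(scope, construct_id, **kwargs)
          # One construct in code becomes one resource in the template.
          s3.Bucket(self, "WebBucket", versioned=True)

  app = cdk.App()
  WebBucketStack(app, "WebBucketStack")
  app.synth()  # emits the CloudFormation template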

Leveraging the AWS global Infrastructure

  1. Global Applications: Applications deployed in multiple AZs and regions, not restricted to a given geographic area. This setup allows you to operate applications which are highly available, fault tolerant, and scalable. info

    Benefits of using the Global Infrastructure:

    • Security
    • Scalability
    • Availability
    • Flexibility
    • Performance - (Decreased Latency)
    • Global Footprint
    • Disaster Recovery: Fail over to another region if there is a disaster in some geographic location.
  2. Route 53: A highly scalable and available DNS management service. DNS = the Domain Name System, the hierarchical and decentralized naming system used to identify computers reachable through the Internet or other Internet Protocol networks (name => IP address mapping, e.g. www.google.com => 142.250.182.228). A record-creation sketch follows this list.

    DNS Record types info

    • A record : Domain to IPv4
    • AAAA record : Domain to IPv6.
    • Alias record : Route traffic from a Domain to some AWS service.
    • CNAME record : Route traffic from a Domain to another domain.

    Routing Policy info

    • Simple routing: Domain pointing to a single webserver.
    • Failover routing: The DNS system health-checks the webservers and sends traffic to the healthy one.
    • Geolocation routing: Redirect client requests to a server chosen by the user's location.
    • Geoproximity routing: Redirect client requests to a server chosen by the location of the users as well as of the resources.
    • Latency routing: Redirect traffic to the server which provides the least latency.
    • IP-based routing: Route traffic based on the location of the users, using the IP addresses that the traffic originates from.
    • Multivalue answer routing: T.B.D
    • Weighted routing: Route traffic to multiple resources in proportions that you specify. Weighted records can be created in a private hosted zone.
  3. CloudFront: A Content Delivery Network (CDN) which speeds up the delivery of a website's static assets (.css, .js, .html, img/*). CloudFront serves requested resources through a network of data locations called edge locations. There are in total 216 AWS edge locations globally.

    How does CloudFront serve requests?

    • When a resource served via the CloudFront service is requested, the request is routed to the nearest edge location, providing the least latency. CloudFront caches the resource to serve it faster for further requests.
    • If CloudFront finds a valid cached copy of the requested resource, it serves that copy.

    CloudFront distributions: A distribution must be created in order to use the CloudFront service; it is a set of configuration which tells the service how to serve the requested resource. Types of config:

    • Content origin: the Amazon S3 bucket, AWS Elemental MediaPackage channel, AWS Elemental MediaStore container, Elastic Load Balancing load balancer, or HTTP server from which CloudFront gets the files to distribute.
    • Access: whether the files to be available to everyone or restrict access to some users.
    • Security: should CloudFront ask users to use HTTPS to access the content.
    • Cache key: what must be the value of the cache-key. The cache key uniquely identifies each file in the cache for a given distribution.
    • Origin request settings: should CloudFront relay the request headers, query strings, and cookies to the origin service.
    • Geographic restrictions: should CloudFront prevent users in selected countries from accessing the content.
    • Logs: should CloudFront create standard logs or real-time logs that show viewer activity.
  4. S3 Transfer Acceleration: Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between the requestor and an S3 bucket. Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets. Transfer Acceleration takes advantage of the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location, the data is routed to Amazon S3 over an optimized network path.

    Ways to upload files to an S3 bucket:

    • Directly upload to an S3 bucket.
    • Use S3 Transfer Acceleration to upload files.

    Refer to the following tool to see the difference between using Transfer Acceleration and direct upload: Link

    S3 Transfer Acceleration

  5. Global Accelerator: Create accelerators to improve the performance of your applications. When a consumer queries a resource/server hosted on AWS, Global Accelerator reduces the total response time by leveraging the AWS internal network, which optimizes the request route needed to reach the destination. This is done by providing 2 static anycast IP addresses that only need to be configured by users once. Behind these IP addresses you can add or remove AWS origins, opening up uses such as endpoint failover, scaling, or testing without any user-side changes.

    S3 Global Accelerator

  6. AWS Outpost: AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs.
  7. AWS Wavelength: AWS Wavelength enables developers to create applications with ultra-low latencies for mobile devices and end users. Wavelength brings standard AWS compute and storage services to the edge of telecom carriers' 5G networks. You can extend an Amazon Virtual Private Cloud (VPC) to one or more Wavelength Zones and then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run applications that require ultra-low latency and a connection to AWS services in the Region.
  8. AWS Local zones: AWS Local Zones are a type of AWS infrastructure deployment that place compute, storage, database, and other select services closer to large population, industry, and IT centers, enabling you to deliver applications that require single-digit millisecond latency to end-users.
  9. Global Application architecture: Ideal architecture styles to achieve a global application.
    • Single region, Single AZ
    • Single region, Multi AZ
    • Multi region, Active/Passive
    • Multi region, Active/Active
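
A minimal Route 53 sketch for item 2 above; the hosted zone id and domain are hypothetical, and UPSERT creates the A record or updates it if it already exists.

  import boto3

  route53 = boto3.client("route53")

  route53.change_resource_record_sets(
      HostedZoneId="Z0123456789ABCDEFGHIJ",  # hypothetical zone id
      ChangeBatch={
          "Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "www.example.com",   # hypothetical domain
                  "Type": "A",                 # domain -> IPv4
                  "TTL": 300,
                  "ResourceRecords": [{"Value": "203.0.113.10"}],
              },
          }],
      },
  )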

Cloud Integrations

  • SQS: Amazon Simple Queue Service (SQS) is a managed message queuing service used to send, store, and retrieve multiple messages of various sizes asynchronously. Terminology: producers create the messages; consumers process the messages. Messages persist for a maximum duration of 14 days. (See the sketch after this list.)
  • SNS: Amazon Simple Notification Service (AWS SNS) is a managed service that automates the process of sending notifications to the subscribers attached to it.
  • Kinesis: Amazon Kinesis is a managed, scalable service that allows real-time processing of streaming data. It can collect data from multiple sources and pass it on to other applications/services to work on.
  • Amazon MQ: Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.
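
A minimal SQS sketch, assuming a hypothetical queue name; the consumer must delete each message after processing, otherwise it becomes visible again after the visibility timeout.

  import boto3

  sqs = boto3.client("sqs")

  queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]

  # Producer side.
  sqs.send_message(QueueUrl=queue_url, MessageBody="order-1234 created")

  # Consumer side: receive, process, then delete.
  received = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5)
  for msg in received.get("Messages", []):
      print(msg["Body"])
      sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])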

Cloud Monitoring

  1. CloudWatch Metrics: Provide information on the health and performance of all AWS services. Default metrics are provided by a variety of services, including EC2 instances, EBS volumes, Lambda, and RDS DB instances. CloudWatch collects metrics from all AWS services and displays them in an easy-to-use dashboard formatted as graphs. By default, all metrics are refreshed every five minutes.
  2. CloudWatch Alarms: Alarms are triggered after some CloudWatch metric crosses its defined threshold value. The value can be a default usage metric of the service or deduced via some mathematical calculation. Alarms have actions associated with them, which are performed when the alarm is triggered, e.g. send an email to the user when the CPU utilization of an EC2 instance crosses 80%, or scale instances out when there is more load and in when there is less. (See the sketch after this list.)
  3. Cloudwatch logs: With the help of CloudWatch Logs, you can consolidate all of your system, application, and AWS service logs into a single, scalable service. They can then be quickly viewed, searched for certain error codes or patterns, filtered according to particular fields or safely archived for later research.

    Collecting logs via the agent

  4. EventBridge: Service which provides real-time delivery of events generated by AWS services. Example: an EC2 instance's state changing from "Pending" to "Started" or from "Running" to "Stopped". With EventBridge you can capture these events and have a target, or a group of targets, process them and perform the actions described for them.

    Terminology:

    • Events: An event indicates a change in your AWS environment. AWS resources can generate events when their state changes.
    • Rules: A rule matches incoming events and routes them to targets for processing.
    • Targets: A target processes the events it receives; events are delivered in JSON format.
  5. AWS Cloudtrail: AWS CloudTrail enables operational and risk auditing, governance, and compliance for your AWS account. Events in CloudTrail are actions taken by a user, role, or AWS service. Events include AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs actions.
  6. X-Ray: AWS X-Ray is a service that gathers information about the requests that your application fulfils and offers tools for you to view, filter, and gain insights into that information in order to spot problems and areas for improvement. You may view comprehensive details for any tracked request made to your application, including the request, the answer, and any calls that your application makes to databases, web APIs, microservices, and downstream AWS resources.
  7. Codeguru: Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code. Static code analysis, similar to what Sonar Qube/Profiler tool does. Has integrations with Github, BitBucket, CodeCommit etc
  8. Service Health: Shows health of all AWS service for all regions. AWS Health Dashboard
  9. Personal Health Dashboard: Personalized view of the services you are using. Example: if you have EC2 instances deployed which also send/fetch data from the SQS queue service, then the Personal Health Dashboard will only give insights on those two services.
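
A minimal CloudWatch alarm sketch for item 2 above; the instance id and SNS topic ARN are hypothetical, and the alarm fires when average CPU stays above 80% for two 5-minute periods.

  import boto3

  cloudwatch = boto3.client("cloudwatch")

  cloudwatch.put_metric_alarm(
      AlarmName="ec2-cpu-high",
      Namespace="AWS/EC2",
      MetricName="CPUUtilization",
      Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
      Statistic="Average",
      Period=300,               # evaluate in 5-minute windows
      EvaluationPeriods=2,      # two consecutive breaches
      Threshold=80.0,
      ComparisonOperator="GreaterThanThreshold",
      AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
  )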

VPC & Networking

  1. VPC - Virtual Private Cloud: A VPC is a virtual network that closely resembles a traditional network that you'd operate in your own data center. As the name suggests, it is a private section of the cloud where you deploy the AWS resources related to your application stack. info

    VPC Diagram

  2. Subnet: A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability Zone. After you add subnets, you can deploy AWS resources in your VPC (see the sketch after this list). info
  3. Public Subnet: Subnet which is accessible from the Internet.
  4. Private Subnet: Subnet which is not accessible from the Internet.
  5. IGW - Internet Gateway: Helps the VPC resources connect to the internet.
  6. NAT Gateway: Allows your private subnet to connect to the internet while remaining private.
  7. Security Group and NACL (Network Access Control List):
    • Security groups are attached to EC2 instances; NACLs are attached at the subnet level.
    • Security groups provide Allow rules only; NACLs provide both Allow and Deny rules.
    • Security groups are stateful, i.e. return traffic for an allowed incoming rule is automatically allowed out. E.g. if we allow incoming traffic on port 80, the outgoing response traffic is also allowed. NACLs are stateless, i.e. return traffic is not automatically allowed. E.g. if we allow incoming traffic on port 80, outgoing traffic must be explicitly allowed.
    • Multiple security groups can be attached to a single EC2 instance; one NACL is attached to a subnet.
  8. VPC Flow Logs: You can record details about the IP traffic to and from network interfaces in your VPC using a feature called VPC Flow Logs. Data from flow logs can be published to Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose, among other places. Following the creation of a flow log, the entries can be retrieved and viewed from the log group, bucket, or delivery stream that you configured.
  9. VPC Peering: Connecting two VPCs is called VPC peering. Peering is not transitive, i.e. if VPC(A) <-> VPC(B) & VPC(A) <-> VPC(C), then VPC(B) is not connected to VPC(C). VPC peering allows you to create a bigger network of resources across multiple regions.

    VPC Peering

  10. VPC Endpoint: VPC endpoints allow you to connect to AWS services over the AWS private network instead of the public internet, which provides lower latency and better security when accessing the AWS cloud.
    • VPC Endpoint Gateway: When connecting to S3 and DynamoDB.
    • VPC Endpoint Interface: When connecting to other services.
  11. AWS PrivateLink: Without exposing your traffic to the open internet, AWS PrivateLink offers private connectivity between your on-premises networks, AWS services, and VPCs. Your network design can be greatly simplified by using AWS PrivateLink to connect services across several accounts and VPCs.

    VPC Private link

  12. Site-to-Site VPN: Connect an on-premises VPN to AWS. The connection is encrypted and the transfer happens over the public internet.

    Site to Site VPN

  13. Direct Connect (DX): Establish a physical connection between on-premises infrastructure and AWS.
  14. Client VPN: AWS Client VPN is a managed client-based VPN service that gives you safe access to your on-premises networks and AWS resources. With Client VPN, you can use an OpenVPN-based VPN client to access your resources from any place.
  15. Transit Gateway: Your on-premises networks and Amazon Virtual Private Clouds (VPCs) are linked together through a central hub using AWS Transit Gateway. By doing this, you can eliminate complicated peering arrangements and simplify your network. Every new connection is made only once; it functions as a cloud router.

    Transit Gateway
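
A minimal boto3 sketch of the Security Group behaviour described in the table above, assuming boto3 is installed and AWS credentials are configured (the VPC ID below is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder VPC ID -- replace with one from your account.
vpc_id = "vpc-0123456789abcdef0"

# Create a Security Group inside the VPC.
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTP",
    VpcId=vpc_id,
)

# Security Groups only accept Allow rules (there is no Deny action).
# Because they are stateful, the response traffic for this inbound rule
# is allowed automatically -- no matching outbound rule is required.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```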


Machine Learning

  1. Rekognition: Makes it easy to perform image and video analysis with the Amazon Rekognition API. The service can identify objects, people, text, scenes, and activities, and can also perform facial analysis, face comparison, and face search. info
  2. Transcribe: Convert speech to text. Supports automatic language identification for multi-lingual content and can remove PII data using redaction.
  3. Polly: Turn Text to Speech using deep learning.
  4. Translate: Natural and accurate language translation.
  5. Lex & Connect: Amazon Lex builds conversational interfaces (chatbots) using automatic speech recognition and natural-language understanding, the same technology that powers Alexa. Amazon Connect is a cloud-based contact center service; the two are commonly combined to build call-center voice bots.
  6. Comprehend: Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text. Use cases: finding the language of the text, extracting key phrases, people, brands, places, etc. (see the boto3 sketch after this list).
  7. Sagemaker: Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.
  8. Forecast: Forecast is a fully managed service that uses statistical and machine learning algorithms to deliver highly accurate time-series forecasts.
  9. Kendra: Amazon Kendra is a highly accurate and intelligent search service that enables your users to search unstructured and structured data using natural language processing and advanced search algorithms. This ML-powered document search service can extract text from documents in multiple formats (text, PDF, HTML, PowerPoint, MS Word, FAQs).
  10. Personalize: Amazon Personalize is a fully managed machine learning service that uses your data to generate item recommendations for your users. It can also generate user segments based on the users' affinity for certain items or item metadata.
  11. Textract: Amazon Textract makes it easy to add document text detection and analysis to your applications.
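
A minimal boto3 sketch of the Comprehend use cases mentioned above (language detection and entity extraction), assuming boto3 is installed and AWS credentials are configured:

```python
import boto3

comprehend = boto3.client("comprehend")

text = "Amazon Comprehend was announced at re:Invent in Las Vegas."

# Detect the dominant language of the text.
languages = comprehend.detect_dominant_language(Text=text)
lang_code = languages["Languages"][0]["LanguageCode"]
print("Language:", lang_code)

# Extract entities such as people, brands, places and dates.
entities = comprehend.detect_entities(Text=text, LanguageCode=lang_code)
for entity in entities["Entities"]:
    print(entity["Type"], "->", entity["Text"])
```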

Other Services

  1. Workspaces: Amazon WorkSpaces is a fully managed desktop virtualization service for Windows and Linux that allows you to access resources from any supported device. info
  2. AppStream 2.0: Amazon AppStream 2.0 is a fully managed, secure application streaming service that lets users stream the desktop applications they need to the device of their choice. info
  3. Sumerian: Used to create 3D models, VR (virtual reality) or AR (augmented reality) applications. info
  4. IoT Core: Connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure. info
  5. Elastic Transcoder: Media transcoding service that converts a source video file into multiple formats. Source video in an S3 bucket -> Transcoder -> output to an S3 bucket. info
  6. AppSync: AppSync creates serverless GraphQL and Pub/Sub APIs to make application development easier by providing a single endpoint for securely querying, updating, and publishing data. info
  7. Amplify: Amplify is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve. No cloud expertise needed. info
  8. Device Farm: Testing service that lets you improve the quality of your web and mobile apps by testing them across an extensive range of desktop browsers and real mobile devices. info
  9. AWS Backup: Manage and automate backups across AWS services. Supports on-demand backups, point-in-time recovery, cross-region backups, cross-account backups, etc. info
  10. Disaster Recovery: Types of strategies
    • Backup and Restore: Back up data (e.g. from S3/EBS) and restore it in case of failure.
    • Pilot Light: Keep a few core services running in the cloud, ready to fail over to in case of disaster.
    • Warm Standby: Keep a minimal but full version of the application running in the cloud.
    • Multi-Site/Hot Site: Keep a full-scale version of the application running in the cloud to switch to in case of failover.
  11. Elastic Disaster Recovery: AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. info
  12. DataSync: AWS DataSync moves large amounts of data online between on-premises storage and Amazon S3, Amazon Elastic File System (Amazon EFS), or Amazon FSx. Manual data-transfer tasks can otherwise slow down migrations and burden IT operations. info
  13. Fault Injection Simulator: AWS Fault Injection Simulator (FIS) is a fully managed service for running fault injection experiments to improve an application's performance, observability, and resiliency. FIS simplifies the process of setting up and running controlled fault injection experiments across a range of AWS services, so teams can build confidence in their application behavior. info
  14. Step Functions: AWS Step Functions is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines (see the boto3 sketch after this list).
  15. Ground Station: Control satellite communications, process satellite data, and scale your satellite operations.
  16. Pinpoint: Amazon Pinpoint offers marketers and developers one customizable tool to deliver customer communications across channels, segments, and campaigns at scale. info
  17. Application Migration Service: T.B.C
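
A minimal boto3 sketch of Step Functions, as referenced above: it registers a one-state workflow written in the Amazon States Language and starts an execution. The IAM role ARN is a placeholder, and the single Pass state is purely for illustration:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Amazon States Language definition: one Pass state that echoes a result.
definition = {
    "Comment": "Hello-world workflow",
    "StartAt": "SayHello",
    "States": {
        "SayHello": {
            "Type": "Pass",
            "Result": "Hello from Step Functions",
            "End": True,
        },
    },
}

# Placeholder role ARN -- the role must be assumable by Step Functions.
machine = sfn.create_state_machine(
    name="hello-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)

# Start one execution; Step Functions manages the state transitions.
sfn.start_execution(
    stateMachineArn=machine["stateMachineArn"],
    input=json.dumps({"who": "world"}),
)
```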
