A cluster consists of a single primary instance and zero or more replicas.
Replicas can be used for read operations.
Aurora does not use local storage; it uses a shared cluster volume, shared storage available to all compute instances within a cluster.
This storage is SSD based with a maximum size of 128 TiB. It is replicated synchronously to 6 storage nodes spread across AZs.
When a part of the disk volume fails, Aurora repairs that part automatically.
One can have up to 15 replicas, any of which can serve as a failover target.
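As a minimal sketch, a manual failover to a chosen replica can be triggered through the RDS API; the cluster and instance identifiers below are hypothetical:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Promote a chosen replica to primary; identifiers are hypothetical.
rds.failover_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",
    TargetDBInstanceIdentifier="my-aurora-replica-2",
)
```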
Storage is billed based on what is used. Consumption is measured against a high watermark, the maximum storage the cluster has ever used, so you are still billed for freed storage.
To reduce the watermark one needs to create a new cluster and migrate the data into it.
High-watermark billing is being phased out progressively; newer Aurora versions shrink the billed volume dynamically as data is removed.
Aurora clusters have multiple endpoints. Cluster & reader endpoints are provided by default.
The reader endpoint load balances connections across all the read replicas.
Each instance also has its own endpoint.
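A sketch of looking up these endpoints with boto3 (the cluster identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

cluster = rds.describe_db_clusters(
    DBClusterIdentifier="my-aurora-cluster"
)["DBClusters"][0]

print("Cluster (writer) endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])

# Each member instance also exposes its own endpoint.
for member in cluster["DBClusterMembers"]:
    inst = rds.describe_db_instances(
        DBInstanceIdentifier=member["DBInstanceIdentifier"]
    )["DBInstances"][0]
    print(member["DBInstanceIdentifier"], "->", inst["Endpoint"]["Address"])
```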
Cost
No Free Tier
Compute - hourly rate, billed per second with a 10-minute minimum
Storage - GB-month consumed + I/O cost
Backup storage up to 100% of the DB size is included in the cost of the cluster
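To illustrate the 10-minute minimum on per-second compute billing, a small sketch (the hourly rate is a made-up number; real prices vary by instance class and region):

```python
HOURLY_RATE = 0.29  # USD/hour, hypothetical rate for illustration only

def compute_charge(runtime_seconds: float) -> float:
    """Per-second billing with a 10-minute (600 s) minimum."""
    billed_seconds = max(runtime_seconds, 600)
    return billed_seconds / 3600 * HOURLY_RATE

print(compute_charge(120))   # 2 minutes of runtime, billed as 10 minutes
print(compute_charge(5400))  # 1.5 hours, billed per second
```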
Backups
In addition to the features supported by RDS, Aurora supports backtrack, which rewinds the cluster in place to a previous point in time.
Backtrack needs to be explicitly enabled, on a per-cluster basis.
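A sketch using boto3: the backtrack window is set when the cluster is created, and backtrack_db_cluster rewinds the cluster in place (identifiers are placeholders; backtrack is available on Aurora MySQL):

```python
import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds", region_name="us-east-1")

# Enable backtrack at creation time by setting a window (in seconds), e.g.
# create_db_cluster(..., BacktrackWindow=86400) for 24 hours of history.

# Rewind the cluster in place to 15 minutes ago.
rds.backtrack_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",
    BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=15),
)
```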
One can create fast clones: a new cluster created from an existing database. A cloned database stores only the delta (copy-on-write), not a full copy of the data.
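Cloning is exposed through the point-in-time-restore API with a copy-on-write restore type; a sketch with placeholder identifiers:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a copy-on-write clone; only pages that later diverge from the
# source consume new storage. Identifiers are hypothetical.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="my-aurora-clone",
    SourceDBClusterIdentifier="my-aurora-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
```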
Global Database
Allows replication across 1 primary region and up to 5 secondary regions.
The primary region has 1 read/write node & up to 15 read replicas. Secondary regions can have up to 16 read replicas.
Replication between regions happens at the storage layer & typically completes within 1 second.
Useful for cross-region DR & BC.
Offers low-latency read performance globally.
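A sketch of the setup with boto3: create the global cluster around an existing primary, then attach a secondary cluster in another region (the ARN, identifiers, and account ID are placeholders):

```python
import boto3

# Wrap an existing primary cluster in a global cluster (us-east-1).
primary = boto3.client("rds", region_name="us-east-1")
primary.create_global_cluster(
    GlobalClusterIdentifier="my-global-db",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster"
    ),
)

# Attach a secondary cluster in another region (eu-west-1).
secondary = boto3.client("rds", region_name="eu-west-1")
secondary.create_db_cluster(
    DBClusterIdentifier="my-aurora-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="my-global-db",
)
```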
Multi Master Writes
The default mode is single-master, i.e. only one writer instance.
In multi-master mode, all nodes are read/write instances.
There is no cluster endpoint; the application connects to instances directly.
There is no load balancing or automatic failover; the application has to handle both.
When a node receives a write request, it immediately proposes the change to the storage nodes. If the writing node receives a quorum of acceptances, it commits the write, which is then replicated across all instances.
It also updates the in-memory caches of the other instances.
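Since there is no cluster endpoint, the application connects to instance endpoints directly and handles failover itself. A minimal sketch, assuming a MySQL-compatible cluster and using pymysql as the driver (hosts and credentials are hypothetical):

```python
import pymysql

# Instance endpoints of the writer nodes; hostnames are hypothetical.
WRITERS = [
    "node-1.abcdefghij.us-east-1.rds.amazonaws.com",
    "node-2.abcdefghij.us-east-1.rds.amazonaws.com",
]

def connect_with_fallback():
    """Try each writer in turn: simple application-side failover."""
    last_error = None
    for host in WRITERS:
        try:
            return pymysql.connect(host=host, user="admin",
                                   password="secret", database="app")
        except pymysql.MySQLError as err:
            last_error = err
    raise last_error

conn = connect_with_fallback()
```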