Labour Day Special - 65% Discount Offer - Coupon code: c4sdisc65

DBS-C01 PDF

$38.50

$109.99

3 Months Free Update

  • Printable Format
  • Value for Money
  • 100% Pass Assurance
  • Verified Answers
  • Researched by Industry Experts
  • Based on Real Exam Scenarios
  • 100% Real Questions

DBS-C01 PDF + Testing Engine

$61.60

$175.99

3 Months Free Update

  • Exam Name: AWS Certified Database - Specialty
  • Last Update: May 18, 2024
  • Questions and Answers: 324
  • Free Real Questions Demo
  • Recommended by Industry Experts
  • Best Economical Package
  • Immediate Access

DBS-C01 Engine

$46.20

$131.99

3 Months Free Update

  • Best Testing Engine
  • One-Click Installation
  • Recommended by Teachers
  • Easy to Use
  • 3 Modes of Learning
  • State-of-the-Art Technology
  • 100% Real Questions Included

DBS-C01 Practice Exam Questions with Answers: AWS Certified Database - Specialty Certification

Question # 6

A financial organization must ensure that the most recent 90 days of MySQL database backups are accessible. All MySQL databases are hosted on Amazon RDS for MySQL DB instances. A database specialist must design a solution that meets the backup retention requirement with the least development effort.

Which strategy should the database specialist take?

A.

Use AWS Backup to build a backup plan for the required retention period. Assign the DB instances to the backup plan.

B.

Modify the DB instances to enable the automated backup option. Select the required backup retention period.

C.

Automate a daily cron job on an Amazon EC2 instance to create MySQL dumps, transfer to Amazon S3, and implement an S3 Lifecycle policy to meet the retention requirement.

D.

Use AWS Lambda to schedule a daily manual snapshot of the DB instances. Delete snapshots that exceed the retention requirement.
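Worth noting when weighing these options: RDS automated backups support a maximum retention period of 35 days, so a 90-day requirement points toward AWS Backup. Below is a minimal boto3 sketch of the approach in option A; the plan name, vault, IAM role ARN, and DB instance ARN are hypothetical placeholders.

```python
import boto3

backup = boto3.client("backup")

# Create a backup plan that keeps daily recovery points for 90 days.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "mysql-90-day-retention",
        "Rules": [
            {
                "RuleName": "daily-90-days",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 90},
            }
        ],
    }
)

# Assign the RDS DB instances to the plan by ARN.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "mysql-db-instances",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:123456789012:db:prod-mysql"],
    },
)
```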

Question # 7

A financial institution uses AWS to host its online application. Amazon RDS for MySQL is used to host the application's database, which includes automatic backups.

The application has logically corrupted the database, causing it to become unresponsive. The exact time of the corruption has been identified, and it is within the backup retention period.

How should a database professional restore the database to its state before the corruption occurred?

A.

Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.

B.

Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.

C.

Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.

D.

Restore using the appropriate automated backup. No changes to the application connection string are required.
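A detail that separates these options: a point-in-time restore never overwrites the source DB instance; it always creates a new instance with a new endpoint, so the application must be repointed. A minimal boto3 sketch with hypothetical identifiers and restore time:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Restore to a new DB instance at the moment just before the corruption.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-mysql",
    TargetDBInstanceIdentifier="prod-mysql-restored",
    RestoreTime=datetime(2024, 5, 1, 12, 30, tzinfo=timezone.utc),
)

# Wait for the new instance, then read its endpoint for the connection string.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="prod-mysql-restored")
instance = rds.describe_db_instances(DBInstanceIdentifier="prod-mysql-restored")
print(instance["DBInstances"][0]["Endpoint"]["Address"])
```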

Question # 8

A software company uses an Amazon RDS for MySQL Multi-AZ DB instance as a data store for its critical applications. During an application upgrade process, a database specialist runs a custom SQL script that accidentally removes some of the default permissions of the master user.

What is the MOST operationally efficient way to restore the default permissions of the master user?

A.

Modify the DB instance and set a new master user password.

B.

Use AWS Secrets Manager to modify the master user password and restart the DB instance.

C.

Create a new master user for the DB instance.

D.

Review the IAM user that owns the DB instance, and add missing permissions.
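For reference, Amazon RDS restores the master user's default privileges when the master user password is reset, which is what option A relies on. A minimal boto3 sketch with a hypothetical identifier and password:

```python
import boto3

rds = boto3.client("rds")

# Setting a new master password prompts RDS to restore the master user's
# default permissions on the instance.
rds.modify_db_instance(
    DBInstanceIdentifier="critical-mysql",
    MasterUserPassword="NewStr0ngPassw0rd!",  # placeholder; use a managed secret in practice
    ApplyImmediately=True,
)
```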

Question # 9

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures.

The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?

A.

Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.

B.

Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.

C.

Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.

D.

Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.

Question # 10

A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.

Which step will provide additional security?

A.

Set up NACLs that allow the entire EC2 subnet to access the DB instance

B.

Disable the master user account

C.

Set up a security group that blocks SSH to the DB instance

D.

Set up RDS to use SSL for data in transit

Question # 11

A gaming company is evaluating Amazon ElastiCache as a solution to manage player leaderboards. Millions of players around the world will compete in annual tournaments. The company wants to implement an architecture that is highly available. The company also wants to ensure that maintenance activities have minimal impact on the availability of the gaming platform.

Which combination of steps should the company take to meet these requirements? (Choose two.)

A.

Deploy an ElastiCache for Redis cluster with read replicas and Multi-AZ enabled.

B.

Deploy an ElastiCache for Memcached global datastore.

C.

Deploy a single-node ElastiCache for Redis cluster with automatic backups enabled. In the event of a failure, create a new cluster and restore data from the most recent backup.

D.

Use the default maintenance window to apply any required system changes and mandatory updates as soon as they are available.

E.

Choose a preferred maintenance window at the time of lowest usage to apply any required changes and mandatory updates.

Question # 12

A manufacturing company’s website uses an Amazon Aurora PostgreSQL DB cluster.

Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

A.

Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.

B.

Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.

C.

Edit and enable Aurora DB cluster cache management in parameter groups.

D.

Set TCP keepalive parameters to a high value.

E.

Set JDBC connection string timeout variables to a low value.

F.

Set Java DNS caching timeouts to a high value.

Question # 13

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

A.

Ensure the table is always provisioned to meet peak needs

B.

Allow burst capacity to handle the additional load

C.

Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic

D.

Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Question # 14

A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.

The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.

What should a database specialist do to meet these requirements?

A.

Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.

B.

Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.

C.

Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.

D.

Use on-demand capacity.
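To make the auto scaling option concrete, the sketch below registers a table's write capacity with Application Auto Scaling and attaches a target tracking policy. The table name and capacity bounds are hypothetical.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/ecommerce-orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Track 70% consumed-to-provisioned write utilization.
aas.put_scaling_policy(
    PolicyName="orders-write-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/ecommerce-orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```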

Question # 15

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime, while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.

The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.

How can a Database Specialist address these requirements with minimal user involvement?

A.

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.

B.

Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster nodes is at an acceptable level. Adjust the number of instances, if necessary.

C.

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.

D.

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.
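As an illustration of automatic scaling on an Aurora DB cluster, the sketch below registers the cluster's reader count with Application Auto Scaling. The cluster identifier and thresholds are hypothetical.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the Aurora cluster's reader count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Scale readers on average CPU utilization across the replicas.
aas.put_scaling_policy(
    PolicyName="reporting-reader-scaling",
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```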

Question # 16

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.

Which approach should the Database Specialist take?

A.

Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.

B.

Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.

C.

Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.

D.

Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

Question # 17

A small startup company is looking to migrate a 4 TB on-premises MySQL database to AWS using an Amazon RDS for MySQL DB instance.

Which strategy would allow for a successful migration with the LEAST amount of downtime?

A.

Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance utilizing the MySQL utilities running on an Amazon EC2 instance. Immediately point the application to the DB instance.

B.

Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into the EC2 instance and restore it into the EC2 MySQL instance. Use AWS DMS to migrate data into a new RDS for MySQL DB instance. Point the application to the DB instance.

C.

Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into an Amazon S3 bucket and import the snapshot into a new RDS for MySQL DB instance using the MySQL utilities running on an EC2 instance. Point the application to the DB instance.

D.

Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance using the MySQL utilities running on an Amazon EC2 instance. Establish replication into the new DB instance using MySQL replication. Stop application access to the on-premises MySQL server, allow the remaining transactions to replicate over, and then point the application to the DB instance.

Question # 18

A gaming company is developing a new mobile game and decides to store the data for each user in Amazon DynamoDB. To make the registration process as easy as possible, users can log in with their existing Facebook or Amazon accounts. The company expects more than 10,000 users.

How should a database specialist implement access control with the LEAST operational effort?

A.

Use web identity federation on the mobile app and AWS STS with an attached IAM role to get temporary credentials to access DynamoDB.

B.

Use web identity federation on the mobile app and create individual IAM users with credentials to access DynamoDB.

C.

Use a self-developed user management system on the mobile app that lets users access the data from DynamoDB through an API.

D.

Use a single IAM user on the mobile app to access DynamoDB.
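For context on the web identity federation option, the sketch below exchanges an identity provider's token for temporary credentials through AWS STS and uses them with DynamoDB. The role ARN and token are hypothetical placeholders.

```python
import boto3

sts = boto3.client("sts")

# Trade the token issued by Facebook or Login with Amazon for temporary
# AWS credentials scoped by an IAM role.
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111111111111:role/GameUserRole",
    RoleSessionName="player-session",
    WebIdentityToken="<token-from-identity-provider>",
)["Credentials"]

# Use the temporary credentials for DynamoDB access.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```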

Question # 19

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier.

The lead developer created a single DynamoDB table for the events with the following schema:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design change should a database specialist recommend to the development team?

A.

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
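Context for the error: a table with a local secondary index limits each item collection (all items sharing one partition key value) to 10 GB, which a popular game's events quickly exceed. Below is a minimal boto3 sketch of the table shape described in option A, with hypothetical table and attribute names.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# High-cardinality partition key (player) plus a GSI for per-game queries.
dynamodb.create_table(
    TableName="game-events",
    AttributeDefinitions=[
        {"AttributeName": "player_id", "AttributeType": "S"},
        {"AttributeName": "event_time", "AttributeType": "N"},
        {"AttributeName": "game_name", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "player_id", "KeyType": "HASH"},
        {"AttributeName": "event_time", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "by-game",
            "KeySchema": [
                {"AttributeName": "game_name", "KeyType": "HASH"},
                {"AttributeName": "event_time", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```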

Question # 20

A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.

Which step should be taken to troubleshoot this issue?

A.

Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address

B.

Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect

C.

Ensure that the RDS DB instance has not reached its maximum connections limit

D.

Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections

Question # 21

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances. The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture.

Which solution will meet these requirements?

A.

Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases.

B.

Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

C.

Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

D.

Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases.

Question # 22

A database specialist is building an AWS CloudFormation stack. The specialist wants to prevent the stack's Amazon RDS ProductionDatabase resource from being accidentally deleted.

Which solution will meet this requirement?

A.

Create a stack policy to prevent updates. Include "Effect" : "ProductionDatabase" and "Resource" : "Deny" in the policy.

B.

Create an AWS CloudFormation stack in XML format. Set xAttribute as false.

C.

Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.

D.

Create a stack policy to prevent updates. Include "Effect" : "Deny" and "Resource" : "ProductionDatabase" in the policy.
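For reference, a stack policy is a JSON document of Effect/Action/Principal/Resource statements. A minimal boto3 sketch that denies destructive update actions on the ProductionDatabase logical resource while allowing other updates; the stack name is hypothetical.

```python
import json

import boto3

cfn = boto3.client("cloudformation")

# Deny destructive updates to one logical resource, allow everything else.
policy = {
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["Update:Replace", "Update:Delete"],
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
        {
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
    ]
}

cfn.set_stack_policy(StackName="prod-db-stack", StackPolicyBody=json.dumps(policy))
```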

Question # 23

A business's mission-critical production workload runs on a 500 GB Amazon Aurora MySQL DB cluster. A database engineer must migrate the workload to a new Amazon Aurora Serverless MySQL DB cluster without causing data loss.

Which approach will result in the LEAST amount of downtime and the LEAST amount of application impact?

A.

Modify the existing DB cluster and update the Aurora configuration to Serverless.

B.

Create a snapshot of the existing DB cluster and restore it to a new Aurora Serverless DB cluster.

C.

Create an Aurora Serverless replica from the existing DB cluster and promote it to primary when the replica lag is minimal.

D.

Replicate the data between the existing DB cluster and a new Aurora Serverless DB cluster by using AWS Database Migration Service (AWS DMS) with change data capture (CDC) enabled.

Question # 24

A company is running critical applications on AWS. Most of the application deployments use Amazon Aurora MySQL for the database stack. The company uses AWS CloudFormation to deploy the DB instances.

The company's application team recently implemented a CI/CD pipeline. A database engineer needs to integrate the database deployment CloudFormation stack with the newly built CI/CD platform. Updates to the CloudFormation stack must not update existing production database resources.

Which CloudFormation stack policy action should the database engineer implement to meet these requirements?

A.

Use a Deny statement for the Update:Modify action on the production database resources.

B.

Use a Deny statement for the action on the production database resources.

C.

Use a Deny statement for the Update:Delete action on the production database resources.

D.

Use a Deny statement for the Update:Replace action on the production database resources.

Question # 25

A marketing company is developing an application to track responses to email message campaigns. The company needs a database storage solution that is optimized to work with highly connected data. The database needs to limit connections and programmatic access to the data by using IAM policies.

Which solution will meet these requirements?

A.

Amazon ElastiCache for Redis cluster

B.

Amazon Aurora MySQL DB cluster

C.

Amazon DynamoDB table

D.

Amazon Neptune DB cluster

Question # 26

A company uses a large, growing, high-performance on-premises Microsoft SQL Server instance with an Always On availability group cluster that is 120 TB in size. The company uses a third-party backup product that requires system-level access to the databases. The company will continue to use this third-party backup product in the future.

The company wants to move the DB cluster to AWS with the least possible downtime and data loss. The company needs a 2 Gbps connection to sustain Always On asynchronous data replication between the company's data center and AWS.

Which combination of actions should a database specialist take to meet these requirements? (Select THREE.)

A.

Establish an AWS Direct Connect hosted connection between the company's data center and AWS.

B.

Create an AWS Site-to-Site VPN connection between the company's data center and AWS over the internet.

C.

Use AWS Database Migration Service (AWS DMS) to migrate the on-premises SQL Server databases to Amazon RDS for SQL Server. Configure Always On availability groups for SQL Server.

D.

Deploy a new SQL Server Always On availability group DB cluster on Amazon EC2. Configure Always On distributed availability groups between the on-premises DB cluster and the AWS DB cluster. Fail over to the AWS DB cluster when it is time to migrate.

E.

Grant system-level access to the third-party backup product to perform backups of the Amazon RDS for SQL Server DB instance.

F.

Configure the third-party backup product to perform backups of the DB cluster on Amazon EC2.

Question # 27

A stock market analysis firm maintains two locations: one in the us-east-1 Region and another in the eu-west-2 Region. The firm wants to build an AWS database solution that can provide rapid and accurate updates.

Dashboards with advanced analytical queries are used to present data in the eu-west-2 office. Because the firm will use these dashboards to make purchasing decisions, the dashboards must be able to obtain application data in less than a second.

Which solution meets these requirements and provides the MOST current dashboard data?

A.

Deploy an Amazon RDS DB instance in us-east-1 with a read replica instance in eu-west-2. Create an Amazon ElastiCache cluster in eu-west-2 to cache data from the read replica to generate the dashboards.

B.

Use an Amazon DynamoDB global table in us-east-1 with replication into eu-west-2. Use multi-active replication to ensure that updates are quickly propagated to eu-west-2.

C.

Use an Amazon Aurora global database. Deploy the primary DB cluster in us-east-1. Deploy the secondary DB cluster in eu-west-2. Configure the dashboard application to read from the secondary cluster.

D.

Deploy an Amazon RDS for MySQL DB instance in us-east-1 with a read replica instance in eu-west-2. Configure the dashboard application to read from the read replica.

Question # 28

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted, resulting in the loss of recent data. A Database Specialist must add RDS settings to the CloudFormation template to minimize the possibility of future accidental instance data loss.

Which settings will meet this requirement? (Select three.)

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain
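To show where these settings live, here is a minimal, trimmed template fragment with DeletionPolicy: Retain on the resource plus the DeletionProtection and DeleteAutomatedBackups properties. All names and property values are hypothetical placeholders, not a complete production template.

```python
import json

import boto3

# DeletionPolicy is a resource attribute (outside Properties); the other two
# protective settings are AWS::RDS::DBInstance properties.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ProductionDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.m5.large",
                "AllocatedStorage": "100",
                "MasterUsername": "admin",
                "MasterUserPassword": "ChangeMe12345",  # placeholder only
                "DeletionProtection": True,
                "DeleteAutomatedBackups": False,
            },
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="rds-protected-stack", TemplateBody=json.dumps(template)
)
```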

Question # 29

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.

This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.

The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.

Which solution will meet these requirements?

A.

Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.

B.

Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service domain for queries.

C.

Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.

D.

Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.

Question # 30

A manufacturing company stores its inventory details in an Amazon DynamoDB table in the us-east-2 Region. According to new compliance and regulatory policies, the company is required to back up all of its tables nightly and store these backups in the us-west-2 Region for disaster recovery for 1 year.

Which solution MOST cost-effectively meets these requirements?

A.

Convert the existing DynamoDB table into a global table and create a global table replica in the us-west-2 Region.

B.

Use AWS Backup to create a backup plan. Configure cross-Region replication in the plan and assign the DynamoDB table to this plan

C.

Create an on-demand backup of the DynamoDB table and restore this backup in the us-west-2 Region.

D.

Enable Amazon S3 Cross-Region Replication (CRR) on the S3 bucket where DynamoDB on-demand backups are stored.

Question # 31

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Select TWO.)

A.

Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.

B.

Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.

C.

Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.

D.

Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.

E.

Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
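Two facts help here: storage encryption for Amazon DocumentDB can only be chosen when a cluster is created, which is why a snapshot-and-restore path appears among the options, and encryption in transit means connecting with TLS using the Amazon-published CA bundle. A minimal pymongo sketch of the TLS client side, with hypothetical endpoint and credentials; verify the current bundle URL in the AWS documentation.

```python
# Requires: pip install pymongo
# CA bundle (verify against the AWS docs):
#   https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
from pymongo import MongoClient

# Endpoint and credentials are hypothetical placeholders; the bundle file
# is assumed to be in the working directory.
client = MongoClient(
    "mongodb://appuser:secret@app-docdb.cluster-abc123.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem"
    "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)
print(client.admin.command("ping"))  # verifies the TLS connection
```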

Question # 32

A company needs to deploy an Amazon Aurora PostgreSQL DB instance into multiple accounts. The company will initiate each DB instance from an existing Aurora PostgreSQL DB instance that runs in a shared account. The company wants the process to be repeatable in case the company adds additional accounts in the future. The company also wants to be able to verify if manual changes have been made to the DB instance configurations after the company deploys the DB instances.

A database specialist has determined that the company needs to create an AWS CloudFormation template with the necessary configuration to create a DB instance in an account by using a snapshot of the existing DB instance to initialize the DB instance. The company will also use the CloudFormation template's parameters to provide key values for the DB instance creation (account ID, etc.).

Which final step will meet these requirements in the MOST operationally efficient way?

A.

Create a bash script to compare the configuration to the current DB instance configuration and to report any changes.

B.

Use the CloudFormation drift detection feature to check if the DB instance configurations have changed.

C.

Set up CloudFormation to use drift detection to send notifications if the DB instance configurations have been changed.

D.

Create an AWS Lambda function to compare the configuration to the current DB instance configuration and to report any changes.
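For reference, drift detection is exposed directly through the CloudFormation API. A minimal boto3 sketch that starts a drift detection run and polls for the result, with a hypothetical stack name:

```python
import time

import boto3

cfn = boto3.client("cloudformation")

# Kick off drift detection for a deployed stack.
detection_id = cfn.detect_stack_drift(StackName="tenant-db-stack")["StackDriftDetectionId"]

# Poll until the detection run finishes, then report the result.
while True:
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

print(status["StackDriftStatus"])  # e.g. IN_SYNC or DRIFTED
```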

Question # 33

A financial services organization uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). The organization is obligated to encrypt its data at rest at all times. The decryption key must be widely distributed, and access to the key must be restricted. The organization must be able to rotate the encryption key on demand to comply with regulatory requirements. If any possible security vulnerabilities are discovered, the organization must be able to disable the key. Additionally, the organization's overhead must be kept to a minimum.

What method should the database administrator use to configure the encryption to fulfill these requirements?

A.

AWS CloudHSM

B.

AWS Key Management Service (AWS KMS) with an AWS managed key

C.

AWS Key Management Service (AWS KMS) with server-side encryption

D.

AWS Key Management Service (AWS KMS) CMK with customer-provided material

Question # 34

A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary instance master and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and application performance is poor for several minutes. During this time, application servers in all Availability Zones are healthy and responding normally.

What should the company do to eliminate this application performance issue?

A.

Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.

B.

Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.

C.

Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.

D.

Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.

Question # 35

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.

Which combination of actions must the application development team take to meet these requirements? (Choose two.)

A.

Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.

B.

Make a copy of the DB snapshot, and set the encryption option to disable.

C.

Share the DB snapshot by setting the DB snapshot visibility option to public.

D.

Make a copy of the DB snapshot, and set the encryption option to enable.

E.

Share the DB snapshot by using the default AWS KMS encryption key.

Question # 36

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis.

Which solution meets these requirements?

A.

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

B.

Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

C.

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs.

D.

Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster.

Question # 37

A finance company migrated its 3 TB on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime.

Which solution will meet these requirements?

A.

Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately.

B.

Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.

C.

Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master.

D.

Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.

Question # 38

Recently, a financial institution created a portfolio management service. The application's backend is powered by Amazon Aurora MySQL.

The firm requires a recovery time objective (RTO) of five minutes and a recovery point objective (RPO) of five minutes. A database professional must create a disaster recovery solution that is both efficient and has low replication latency.

How should the database professional tackle these requirements?

A.

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

B.

Configure an Amazon Aurora global database and add a different AWS Region.

C.

Configure a binlog and create a replica in a different AWS Region.

D.

Configure a cross-Region read replica.

Question # 39

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days.

The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database.

Which solution will meet these requirements?

A.

Modify the rds.log_retention_period parameter to 0. Reboot the DB instance to save the changes.

B.

Modify the rds.log_retention_period parameter to 1440. Wait up to 24 hours for database logs to be deleted.

C.

Modify the temp_file_limit parameter to a smaller value to reclaim space on the DB instance.

D.

Modify the rds.log_retention_period parameter to 1440. Reboot the DB instance to save the changes.

Question # 40

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.

Which solution meets these requirements?

A.

Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.

B.

Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.

C.

Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.

D.

Change the DB clusters to the burstable instance family.

Question # 41

A database expert is responsible for building a highly available online transaction processing (OLTP) solution that makes use of Amazon RDS for MySQL production databases. Disaster recovery criteria include a cross-regional deployment and an RPO and RTO of 5 and 30 minutes, respectively.

What should the database professional do to ensure that the database meets the criteria for high availability and disaster recovery?

A.

Use a Multi-AZ deployment in each Region.

B.

Use read replica deployments in all Availability Zones of the secondary Region.

C.

Use Multi-AZ and read replica deployments within a Region.

D.

Use Multi-AZ and deploy a read replica in a secondary Region.

Question # 42

A company is going through a security audit. The audit team has identified cleartext master user passwords in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.

What should a database specialist do to mitigate this risk?

A.

Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.

B.

Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.

C.

Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.

D.

Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using a sed command.
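To make the Secrets Manager option concrete, the sketch below generates a random password in a secret and references it from the RDS resource with a dynamic reference, so no cleartext password appears in the template. Resource names are hypothetical and the template is trimmed to the essentials.

```python
import json

import boto3

template = {
    "Resources": {
        # Secret with a generated random password.
        "DBSecret": {
            "Type": "AWS::SecretsManager::Secret",
            "Properties": {
                "GenerateSecretString": {
                    "SecretStringTemplate": '{"username": "admin"}',
                    "GenerateStringKey": "password",
                    "PasswordLength": 30,
                    "ExcludeCharacters": '"@/\\',
                }
            },
        },
        # DB instance that resolves the password at deploy time.
        "Database": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "50",
                "MasterUsername": "admin",
                "MasterUserPassword": {
                    "Fn::Sub": "{{resolve:secretsmanager:${DBSecret}:SecretString:password}}"
                },
            },
        },
    }
}

boto3.client("cloudformation").create_stack(
    StackName="rds-secret-stack", TemplateBody=json.dumps(template)
)
```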

Question # 43

A company's applications store data in Amazon Aurora MySQL DB clusters. The company has separate AWS accounts for its production, test, and development environments. To test new functionality in the test environment, the company's development team requires a copy of the production database four times a day.

Which solution meets this requirement with the MOST operational efficiency?

A.

Take a manual snapshot in the production account. Share the snapshot with the test account. Restore the database from the snapshot.

B.

Take a manual snapshot in the production account. Export the snapshot to Amazon S3. Copy the snapshot to an S3 bucket in the test account. Restore the database from the snapshot.

C.

Share the Aurora DB cluster with the test account. Create a snapshot of the production database in the test account. Restore the database from the snapshot.

D.

Share the Aurora DB cluster with the test account. Create a clone of the production database in the test account.
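For context, an Aurora clone is created through the point-in-time restore API with a copy-on-write restore type, which makes frequent refreshes fast and storage-efficient. A minimal boto3 sketch with hypothetical identifiers; cloning across accounts additionally requires sharing the cluster through AWS RAM.

```python
import boto3

rds = boto3.client("rds")

# Copy-on-write clone of the shared production cluster; only changed pages
# consume new storage, so four refreshes a day stay cheap and quick.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111111111111:cluster:prod-aurora",
    DBClusterIdentifier="test-clone",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
```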

Question # 44

A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:

ERROR: could not write block 7507718 of temporary file: No space left on device

What is the cause of this error and what should the Database Specialist do to resolve this issue?

A.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.

B.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.

C.

The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.

D.

The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

Question # 45

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover.

How can the database specialist minimize the performance degradation after failover?

A.

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0

B.

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1

C.

Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture

D.

Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan

Question # 46

An application reads and writes data to an Amazon RDS for MySQL DB instance. A new reporting dashboard needs read-only access to the database. When the application and reports are both under heavy load, the database experiences performance degradation. A database specialist needs to improve the database performance.

What should the database specialist do to meet these requirements?

A.

Create a read replica of the DB instance. Configure the reports to connect to the replication instance endpoint.

B.

Create a read replica of the DB instance. Configure the application and reports to connect to the cluster endpoint.

C.

Enable Multi-AZ deployment. Configure the reports to connect to the standby replica.

D.

Enable Multi-AZ deployment. Configure the application and reports to connect to the cluster endpoint.

Question # 47

A company hosts an on-premises Microsoft SQL Server Enterprise edition database with Transparent Data Encryption (TDE) enabled. The database is 20 TB in size and includes sparse tables. The company needs to migrate the database to Amazon RDS for SQL Server during a maintenance window that is scheduled for an upcoming weekend. Data-at-rest encryption must be enabled for the target DB instance.

Which combination of steps should the company take to migrate the database to AWS in the MOST operationally efficient manner? (Choose two.)

A.

Use AWS Database Migration Service (AWS DMS) to migrate from the on-premises source database to the RDS for SQL Server target database.

B.

Disable TDE. Create a database backup without encryption. Copy the backup to Amazon S3.

C.

Restore the backup to the RDS for SQL Server DB instance. Enable TDE for the RDS for SQL Server DB instance.

D.

Set up an AWS Snowball Edge device. Copy the database backup to the device. Send the device to AWS. Restore the database from Amazon S3.

E.

Encrypt the data with client-side encryption before transferring the data to Amazon RDS.

Question # 48

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.

How can this solution be implemented?

A.

Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

B.

Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

C.

Use the AWS CLI to update the DynamoDB table and modify the partition key.

D.

Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

Question # 49

A company uses an on-premises Microsoft SQL Server database to host relational and JSON data and to run daily ETL and advanced analytics. The company wants to migrate the database to the AWS Cloud. A database specialist must choose one or more AWS services to run the company's workloads.

Which solution will meet these requirements in the MOST operationally efficient manner?

A.

Use Amazon Redshift for relational data. Use Amazon DynamoDB for JSON data

B.

Use Amazon Redshift for relational data and JSON data.

C.

Use Amazon RDS for relational data. Use Amazon Neptune for JSON data

D.

Use Amazon Redshift for relational data. Use Amazon S3 for JSON data.

Question # 50

A coffee machine manufacturer is equipping all of its coffee machines with IoT sensors. The IoT core application is writing measurements for each record to Amazon Timestream. The records have multiple dimensions and measures. The measures include multiple measure names and values.

An analysis application is running queries against the Timestream database and is focusing on data from the current week. A database specialist needs to optimize the query costs of the analysis application.

Which solution will meet these requirements?

A.

Ensure that queries contain whole records over the relevant time range.

B.

Use time range, measure name, and dimensions in the WHERE clause of the query.

C.

Avoid canceling any query after the query starts running.

D.

Implement exponential backoff in the application.

Question # 51

A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.

Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

A.

Enable in-transit and at-rest encryption on the ElastiCache cluster.

B.

Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.

C.

Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.

D.

Create an IAM policy to allow the application service roles to access all ElastiCache API actions.

E.

Ensure the security group for the ElastiCache clients authorizes inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster's security group.

F.

Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.
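As a reference for the encryption and authentication options, the sketch below creates a cluster-mode-enabled replication group with in-transit encryption, at-rest encryption, and a Redis AUTH token. All names, sizes, and the token value are hypothetical; the parameter group shown assumes a Redis 7 cluster-mode-enabled default.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="shared-data-redis",
    ReplicationGroupDescription="Shared data service cache",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    CacheParameterGroupName="default.redis7.cluster.on",  # cluster mode enabled
    NumNodeGroups=3,                      # shards
    ReplicasPerNodeGroup=1,
    TransitEncryptionEnabled=True,        # encrypt data in transit
    AtRestEncryptionEnabled=True,         # encrypt data at rest
    AuthToken="a-long-random-auth-token-123456",  # Redis AUTH; requires TLS
)
```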

Question # 52

A news portal is looking for a data store to store 120 GB of metadata about its posts and comments. The posts and comments are not frequently looked up or updated. However, occasional lookups are expected to be served with single-digit millisecond latency on average.

What is the MOST cost-effective solution?

A.

Use Amazon DynamoDB with on-demand capacity mode. Purchase reserved capacity.

B.

Use Amazon ElastiCache for Redis for data storage. Turn off cluster mode.

C.

Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for data storage and use Amazon Athena to query the data.

D.

Use Amazon DynamoDB with on-demand capacity mode. Switch the table class to DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA).

Question # 53

A financial company is hosting its web application on AWS. The application's database is hosted on Amazon RDS for MySQL with automated backups enabled.

The application has caused a logical corruption of the database, which is causing the application to become unresponsive. The specific time of the corruption has been identified, and it was within the backup retention period.

How should a database specialist recover the database to the most recent point before corruption?

A.

Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.

B.

Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.

C.

Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.

D.

Restore using the appropriate automated backup. No changes to the application connection string are required.

Question # 54

A business moved a single MySQL database to Amazon Aurora. The production data is stored in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST in the same AWS account. Testing makes only minimal changes to the test data. The development team requires that each environment be refreshed nightly so that each test database contains daily production data.

Which migration strategy will be the quickest and least expensive to implement?

A.

Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B.

Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C.

Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D.

Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

Question # 55

A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database.

Which approach will MOST effectively meet these requirements?

A.

Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.

B.

Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.

C.

Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.

D.

Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
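To show where DMS data validation is switched on, the sketch below enables ValidationSettings in the replication task settings so the task compares source and target rows and reports mismatches. The ARNs and the catch-all table mapping are hypothetical placeholders.

```python
import json

import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:INST",
    MigrationType="full-load-and-cdc",
    # Include every schema and table (placeholder selection rule).
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
    # Row-level validation runs alongside the migration.
    ReplicationTaskSettings=json.dumps({
        "ValidationSettings": {"EnableValidation": True, "ThreadCount": 5}
    }),
)
```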

Question # 56

A company has a reporting application that runs on an Amazon EC2 instance in an isolated developer account on AWS. The application needs to retrieve data during non-peak company hours from an Amazon Aurora PostgreSQL database that runs in the company's production account. The company's security team requires that access to production resources complies with AWS best security practices.

A database administrator needs to provide the reporting application with access to the production database. The company has already configured VPC peering between the production account and the developer account. The company has also updated the route tables in both accounts with the necessary entries to correctly set up VPC peering.

What must the database administrator do to finish providing connectivity to the reporting application?

A.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

B.

Add an outbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

C.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on all TCP ports. Add an inbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

D.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on all TCP ports.

Question # 57

A gaming company uses Amazon Aurora Serverless for one of its internal applications. The company's developers use Amazon RDS Data API to work with the Aurora Serverless DB cluster. After a recent security review, the company is mandating security enhancements. A database specialist must ensure that access to RDS Data API is private and never passes through the public internet.

What should the database specialist do to meet this requirement?

A.

Modify the Aurora Serverless cluster by selecting a VPC with private subnets.

B.

Modify the Aurora Serverless cluster by unchecking the publicly accessible option.

C.

Create an interface VPC endpoint that uses AWS PrivateLink for RDS Data API.

D.

Create a gateway VPC endpoint for RDS Data API.
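For reference, keeping RDS Data API traffic on AWS PrivateLink means creating an interface VPC endpoint for the rds-data service. A minimal boto3 sketch with hypothetical VPC, subnet, and security group IDs; the service name shown assumes us-east-1.

```python
import boto3

ec2 = boto3.client("ec2")

# Interface endpoint that keeps RDS Data API calls inside the VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.rds-data",  # RDS Data API service
    SubnetIds=["subnet-0abc1234", "subnet-0def5678"],
    SecurityGroupIds=["sg-0abc1234"],
    PrivateDnsEnabled=True,  # resolve the public API hostname to the endpoint
)
```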

Question # 58

An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application. The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem functions that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development effort.

How should a Database Specialist address these requirements?

A.

Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB

B.

Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into Amazon Redshift

C.

Use an Amazon ElastiCache for Redis in front of DynamoDB to boost read performance

D.

Use DynamoDB Accelerator to offload the reads

Question # 59

A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions.

This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location.

Which set of actions will meet these requirements?

A.

Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

B.

Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

C.

Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

D.

Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

Full Access
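For context, creating a cross-Region read replica with boto3 might look like the sketch below. The client is created in the destination Region, and the source instance is referenced by its ARN; all identifiers are hypothetical.

    import boto3

    rds = boto3.client("rds", region_name="ap-southeast-1")

    # Cross-Region replica of the primary in us-west-2.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="sales-db-replica-sin",
        SourceDBInstanceIdentifier="arn:aws:rds:us-west-2:123456789012:db:sales-db",
        DBInstanceClass="db.r5.large",
    )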
Question # 60

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.

While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.

What should the database specialist do to address the issue?

A.

Change the data model to avoid hot partitions in the global secondary index.

B.

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C.

Modify the table to use on-demand capacity instead of provisioned capacity.

D.

Increase the number of retries on the bulk loading application.

Full Access
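For context, one common way to avoid a hot partition on a low-cardinality global secondary index key is write sharding: appending a random suffix to the partition key value so writes spread across partitions. A minimal sketch with hypothetical table and attribute names:

    import random
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("Employees")  # hypothetical table name

    NUM_SHARDS = 10

    def put_employee(employee):
        # Spread writes for the same Organization ID across NUM_SHARDS GSI partitions.
        shard = random.randint(0, NUM_SHARDS - 1)
        employee["OrganizationId"] = f"{employee['OrganizationId']}#{shard}"
        table.put_item(Item=employee)

The trade-off is on the read side: a query by Organization ID must fan out across all shard suffixes and merge the results.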
Question # 61

A company requires near-real-time notifications when changes are made to Amazon RDS DB security groups.

Which solution will meet this requirement with the LEAST operational overhead?

A.

Configure an RDS event notification subscription for DB security group events.

B.

Create an AWS Lambda function that monitors DB security group changes. Create an Amazon Simple Notification Service (Amazon SNS) topic for notification.

C.

Turn on AWS CloudTrail. Configure notifications for the detection of changes to DB security groups.

D.

Configure an Amazon CloudWatch alarm for RDS metrics about changes to DB security groups.

Full Access
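For context, an RDS event notification subscription that publishes DB security group events to an SNS topic might look like the boto3 sketch below; the topic ARN and subscription name are hypothetical.

    import boto3

    rds = boto3.client("rds")

    rds.create_event_subscription(
        SubscriptionName="db-security-group-changes",
        SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-sg-alerts",
        SourceType="db-security-group",
        Enabled=True,
    )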
Question # 62

A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.

Which process should the Database Specialist recommend to meet these requirements?

A.

Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.

B.

Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.

C.

Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.

D.

Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.

Full Access
Question # 63

A database specialist manages a critical Amazon RDS for MySQL DB instance for a company. The amount of data stored daily can vary from 0.01% to 10% of the current database size. The database specialist needs to ensure that the DB instance storage grows as needed.

What is the MOST operationally efficient and cost-effective solution?

A.

Configure RDS Storage Auto Scaling.

B.

Configure RDS instance Auto Scaling.

C.

Modify the DB instance allocated storage to meet the forecasted requirements.

D.

Monitor the Amazon CloudWatch FreeStorageSpace metric daily and add storage as required.

Full Access
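For context, RDS Storage Auto Scaling is enabled by setting a maximum allocated storage ceiling on the instance; RDS then grows the storage automatically as needed. A minimal boto3 sketch with a hypothetical instance identifier and limit:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="critical-mysql-db",
        MaxAllocatedStorage=1000,  # GiB ceiling for automatic storage growth
        ApplyImmediately=True,
    )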
Question # 64

A web-based application uses Amazon DocumentDB (with MongoDB compatibility) as its underlying data store. Sufficient access control is in place, but a database specialist wants to be able to review logs if the primary DocumentDB database is deleted.

Which combination of steps should the database specialist take to meet this requirement? (Select TWO.)

A.

Set the audit_logs cluster parameter to enabled.

B.

Enable DocumentDB log export to Amazon CloudWatch Logs.

C.

Enable Enhanced Monitoring for DocumentDB.

D.

Enable AWS CloudTrail for DocumentDB.

E.

Use AWS Config to monitor the state of DocumentDB.

Full Access
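For context, capturing DocumentDB audit activity outside the cluster involves two settings: the audit_logs cluster parameter and log export to CloudWatch Logs, where the events remain available even if the cluster is deleted. A boto3 sketch with hypothetical resource names:

    import boto3

    docdb = boto3.client("docdb")

    # Step 1: turn on audit logging in the cluster parameter group.
    docdb.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="appdb-params",
        Parameters=[
            {
                "ParameterName": "audit_logs",
                "ParameterValue": "enabled",
                "ApplyMethod": "pending-reboot",
            }
        ],
    )

    # Step 2: export the audit log type to CloudWatch Logs.
    docdb.modify_db_cluster(
        DBClusterIdentifier="appdb-cluster",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
    )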
Question # 65

A company's database specialist is building an Amazon RDS for Microsoft SQL Server DB instance to store hundreds of records in CSV format. A customer service tool uploads the records to an Amazon S3 bucket.

An employee who previously worked at the company already created a custom stored procedure to map the necessary CSV fields to the database tables. The database specialist needs to implement a solution that reuses this previous work and minimizes operational overhead.

Which solution will meet these requirements?

A.

Create an Amazon S3 event to invoke an AWS Lambda function. Configure the Lambda function to parse the .csv file and use a SQL client library to run INSERT statements to load the data into the tables.

B.

Write a custom .NET app that is hosted on Amazon EC2. Configure the .NET app to load the .csv file and call the custom stored procedure to insert the data into the tables.

C.

Download the .csv file from Amazon S3 to the RDS D drive by using an AWS msdb stored procedure. Call the custom stored procedure to insert the data from the RDS D drive into the tables.

D.

Create an Amazon S3 event to invoke AWS Step Functions to parse the .csv file and call the custom stored procedure to insert the data into the tables.

Full Access
Question # 66

An ecommerce business is migrating its main application database to Amazon Aurora MySQL. The firm is now doing OLTP stress testing using concurrent database connections. During the first round of testing, a database professional detected sluggish performance for several specific write operations.

Examining the Amazon CloudWatch stats for the Aurora DB cluster revealed a CPU usage of 90%.

Which actions should the database professional take to determine the main cause of excessive CPU use and sluggish performance most effectively? (Select two.)

A.

Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.

B.

Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.

C.

Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.

D.

Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.

E.

Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.

Full Access
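For context, Performance Insights data can be queried programmatically to surface the top SQL statements by database load. A sketch using the boto3 pi client; the resource identifier below is a hypothetical placeholder (the real DbiResourceId can be found via describe_db_instances):

    from datetime import datetime, timedelta
    import boto3

    pi = boto3.client("pi")

    # Average DB load over the last hour, grouped by SQL statement.
    response = pi.get_resource_metrics(
        ServiceType="RDS",
        Identifier="db-ABCDEFGHIJKLMNOP",
        MetricQueries=[{"Metric": "db.load.avg", "GroupBy": {"Group": "db.sql"}}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        PeriodInSeconds=60,
    )

Grouping by db.wait_event instead of db.sql breaks the same load metric down by wait events.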
Question # 67

A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.

Which solution meets these requirements in the MOST efficient way?

A.

Use Amazon RDS for MySQL as the database and use Amazon ElastiCache

B.

Use Amazon DynamoDB as the database and use DynamoDB Accelerator

C.

Use Amazon Aurora MySQL as the database and use Aurora’s buffer cache

D.

Use Amazon DynamoDB as the database and use Amazon API Gateway

Full Access
Question # 68

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.

Which solution will MOST improve the performance of the data migration?

A.

Increase the number of tables that are loaded in parallel.

B.

Drop all indexes on the source tables.

C.

Change the processing mode from the batch optimized apply option to transactional mode.

D.

Enable Multi-AZ on the target database while the full load task is in progress.

Full Access
Question # 69

A financial company has allocated an Amazon RDS for MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize storage to save money. The solution must have the least impact on production and near-zero downtime.

Which solution would meet these requirements?

A.

Create a snapshot of the old databases and restore the snapshot with the required storage

B.

Create a new RDS DB instance with the required storage and move the databases from the old instances to the new instance using AWS DMS

C.

Create a new database using native backup and restore

D.

Create a new read replica and make it the primary by terminating the existing primary

Full Access
Question # 70

A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.

What should a Database Specialist recommend for this user?

A.

Create an Amazon DynamoDB table with provisioned capacity mode

B.

Create an Amazon DocumentDB cluster

C.

Create an Amazon DynamoDB table with on-demand capacity mode

D.

Create an Amazon Aurora Serverless DB cluster

Full Access
Question # 71

A huge gaming firm is developing a centralized method for storing the status of various online games' user sessions. The workload requires low-latency key-value storage and will consist of an equal number of reads and writes. Across the games' geographically dispersed user base, data should be written to the AWS Region nearest to the user. The design should reduce the burden associated with managing data replication across Regions.

Which solution satisfies these criteria?

A.

Amazon RDS for MySQL with multi-Region read replicas

B.

Amazon Aurora global database

C.

Amazon RDS for Oracle with GoldenGate

D.

Amazon DynamoDB global tables

Full Access
Question # 72

A company has deployed an application that uses an Amazon RDS for MySQL DB cluster. The DB cluster uses three read replicas. The primary DB instance is an 8XL-sized instance, and the read replicas are each XL-sized instances.

Users report that database queries are returning stale data. The replication lag indicates that the replicas are 5 minutes behind the primary DB instance. Status queries on the replicas show that the SQL_THREAD is 10 binlogs behind the IO_THREAD and that the IO_THREAD is 1 binlog behind the primary.

Which changes will reduce the lag? (Choose two.)

A.

Deploy two additional read replicas matching the existing replica DB instance size.

B.

Migrate the primary DB instance to an Amazon Aurora MySQL DB cluster and add three Aurora Replicas.

C.

Move the read replicas to the same Availability Zone as the primary DB instance.

D.

Increase the instance size of the primary DB instance within the same instance class.

E.

Increase the instance size of the read replicas to the same size and class as the primary DB instance.

Full Access
Question # 73

A corporation wishes to move a 1 TB Oracle database from its current location to an Amazon Aurora PostgreSQL DB cluster. The company's database specialist has discovered that the Oracle database stores 100 GB of large binary objects (LOBs) across many tables. The LOBs are up to 500 MB in size, with an average size of 350 MB. The database specialist has chosen AWS DMS to transfer the data using the largest replication instance available.

How should the database specialist optimize the database migration with AWS DMS?

A.

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

B.

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

C.

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task2 without LOBs

D.

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together

Full Access
Question # 74

A database specialist needs to move a table from a database that is running on an Amazon Aurora PostgreSQL DB cluster into a new and distinct database cluster. The new table in the new database must be updated with any changes to the original table that happen while the migration is in progress.

The original table contains a column to store data as large as 2 GB in the form of large binary objects (LOBs). A few records are large in size, but most of the LOB data is smaller than 32 KB.

What is the FASTEST way to replicate all the data from the original table?

A.

Use AWS Database Migration Service (AWS DMS) with ongoing replication in full LOB mode.

B.

Take a snapshot of the database. Create a new DB instance by using the snapshot.

C.

Use AWS Database Migration Service (AWS DMS) with ongoing replication in limited LOB mode.

D.

Use AWS Database Migration Service (AWS DMS) with ongoing replication in inline LOB mode.

Full Access
Question # 75

A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table.

To prepare the new table with identical settings, which steps should be performed? (Choose two.)

A.

Re-create global secondary indexes in the new table

B.

Define IAM policies for access to the new table

C.

Define the TTL settings

D.

Encrypt the table from the AWS Management Console or use the update-table command

E.

Set the provisioned read and write capacity

Full Access
Question # 76

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.

How can the Database Specialist meet these requirements?

A.

Use AWS IAM database authentication and restrict access to the tables using an IAM policy.

B.

Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.

C.

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.

D.

Define access privileges to the tables containing sensitive data in the pg_hba.conf file.

Full Access
Question # 77

A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature.

Which AWS solution meets these requirements?

A.

Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.

B.

Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.

C.

Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.

D.

Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.

Full Access
Question # 78

A database specialist is designing an enterprise application for a large company. The application uses Amazon DynamoDB with DynamoDB Accelerator (DAX).

The database specialist observes that most of the queries are not found in the DAX cache and that they still require DynamoDB table reads.

What should the database specialist review first to improve the utility of DAX?

A.

The DynamoDB ConsumedReadCapacityUnits metric

B.

The trust relationship to perform the DynamoDB API calls

C.

The DAX cluster's TTL setting

D.

The validity of customer-specified AWS Key Management Service (AWS KMS) keys for DAX encryption at rest

Full Access
Question # 79

A company has an existing system that uses a single-instance Amazon DocumentDB (with MongoDB compatibility) cluster. Read requests account for 75% of the system queries. Write requests are expected to increase by 50% after an upcoming global release. A database specialist needs to design a solution that improves the overall database performance without creating additional application overhead.

Which solution will meet these requirements?

A.

Recreate the cluster with a shared cluster volume. Add two instances to serve both read requests and write requests.

B.

Add one read replica instance. Activate a shared cluster volume. Route all read queries to the read replica instance.

C.

Add one read replica instance. Set the read preference to secondary preferred.

D.

Add one read replica instance. Update the application to route all read queries to the read replica instance.

Full Access
Question # 80

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app.

The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent.

Which solution will provide the MOST cost optimization of the DynamoDB database layer?

A.

Change the DynamoDB tables to use on-demand capacity.

B.

Use AWS Auto Scaling and configure time-based scaling.

C.

Enable DynamoDB capacity-based auto scaling.

D.

Enable DynamoDB Accelerator (DAX).

Full Access
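For context, predictable bell-curve traffic can be matched with scheduled scaling through Application Auto Scaling. The sketch below registers the table's write capacity as a scalable target and schedules a morning ramp-up; the table name, cron expression, and capacity values are hypothetical.

    import boto3

    aas = boto3.client("application-autoscaling")

    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/BikeTelemetry",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=10000,
        MaxCapacity=900000,
    )

    # Raise the capacity floor ahead of the daily peak.
    aas.put_scheduled_action(
        ServiceNamespace="dynamodb",
        ResourceId="table/BikeTelemetry",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        ScheduledActionName="morning-ramp-up",
        Schedule="cron(0 6 * * ? *)",
        ScalableTargetAction={"MinCapacity": 400000, "MaxCapacity": 900000},
    )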
Question # 81

A company is using an Amazon Aurora MySQL database with Performance Insights enabled. A database specialist is checking Performance Insights and observes an alert message that starts with the following phrase: "Performance Insights is unable to collect SQL Digest statistics on new queries..."

Which action will resolve this alert message?

A.

Truncate the events_statements_summary_by_digest table.

B.

Change the AWS Key Management Service (AWS KMS) key that is used to enable Performance Insights.

C.

Set the value for the performance_schema parameter in the parameter group to 1.

D.

Disable and reenable Performance Insights to be effective in the next maintenance window.

Full Access
Question # 82

A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

A.

Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

B.

Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.

C.

Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

D.

Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.

Full Access
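For context, publishing RDS logs to CloudWatch Logs and capping retention at 90 days might look like the boto3 sketch below. The instance identifier and log group name are hypothetical, and the exportable log types differ between the MySQL and PostgreSQL engines.

    import boto3

    rds = boto3.client("rds")
    logs = boto3.client("logs")

    # Publish MySQL logs to CloudWatch Logs.
    rds.modify_db_instance(
        DBInstanceIdentifier="orders-mysql",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["error", "general", "slowquery"]},
        ApplyImmediately=True,
    )

    # Expire events in the RDS-created log group after 90 days.
    logs.put_retention_policy(
        logGroupName="/aws/rds/instance/orders-mysql/error",
        retentionInDays=90,
    )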
Question # 83

A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.

Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

A.

Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.

B.

Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.

C.

Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.

D.

Create an AWS Backup plan and assign the DynamoDB table as a resource.

Full Access
Question # 84

A database specialist needs to enable IAM authentication on an existing Amazon Aurora PostgreSQL DB cluster. The database specialist has already modified the DB cluster settings, has created IAM and database credentials, and has distributed the credentials to the appropriate users.

What should the database specialist do next to establish the credentials for the users to use to log in to the DB cluster?

A.

Add the users' IAM credentials to the Aurora cluster parameter group.

B.

Run the generate-db-auth-token command with the user names to generate a temporary password for the users.

C.

Add the users' IAM credentials to the default credential profile. Use the AWS Management Console to access the DB cluster.

D.

Use an AWS Security Token Service (AWS STS) token by sending the IAM access key and secret key as headers to the DB cluster API endpoint.

Full Access
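For context, with IAM database authentication the users do not receive a long-lived password; instead, a short-lived token is generated and presented as the password when connecting. A boto3 sketch with a hypothetical endpoint and user name:

    import boto3

    rds = boto3.client("rds")

    # Mint a temporary authentication token (valid for 15 minutes).
    token = rds.generate_db_auth_token(
        DBHostname="appdb.cluster-abc123.us-east-1.rds.amazonaws.com",
        Port=5432,
        DBUsername="app_user",
    )
    # Use `token` as the password in a standard PostgreSQL connection (SSL required).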
Question # 85

A company is running its customer feedback application on Amazon Aurora MySQL. The company runs a report every day to extract customer feedback, and a team reads the feedback to determine if the customer comments are positive or negative. It sometimes takes days before the company can contact unhappy customers and take corrective measures. The company wants to use machine learning to automate this workflow.

Which solution meets this requirement with the LEAST amount of effort?

A.

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon Comprehend to run sentiment analysis on the exported files.

B.

Export the Aurora MySQL database to Amazon S3 by using AWS Database Migration Service (AWS DMS). Use Amazon SageMaker to run sentiment analysis on the exported files.

C.

Set up Aurora native integration with Amazon Comprehend. Use SQL functions to extract sentiment analysis.

D.

Set up Aurora native integration with Amazon SageMaker. Use SQL functions to extract sentiment analysis.

Full Access
Question # 86

A significant automotive manufacturer is switching a mission-critical finance application's database to Amazon DynamoDB. According to the company's risk and compliance policy, any update to the database must be documented as a log entry for auditing purposes. Each minute, the system anticipates about 500,000 log entries. Log entries should be kept in Apache Parquet files in batches of at least 100,000 records per file.

How could a database professional approach these needs while using DynamoDB?

A.

Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.

B.

Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.

C.

Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.

D.

Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.

Full Access
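For context, a stream-triggered AWS Lambda function that forwards DynamoDB change records to a Kinesis Data Firehose delivery stream might look like the sketch below; the delivery stream name is hypothetical. Firehose buffers the records, can convert them to Apache Parquet, and writes batched objects to Amazon S3.

    import json
    import boto3

    firehose = boto3.client("firehose")

    def handler(event, context):
        # Each stream record describes one table update; forward it to Firehose.
        records = [
            {"Data": (json.dumps(record["dynamodb"]) + "\n").encode("utf-8")}
            for record in event["Records"]
        ]
        if records:
            firehose.put_record_batch(
                DeliveryStreamName="audit-log-stream",
                Records=records,
            )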
Question # 87

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready, and the user credentials have been given to the developers. The developers indicate that their COPY jobs fail with the following error message:

“Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.”

The developers need to load this data soon, so a database specialist must act quickly to solve this issue.

What is the MOST secure solution?

A.

Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.

B.

Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.

C.

Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.

D.

Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.

Full Access
Question # 88

A company's application team needs to select an AWS managed database service to store application and user data. The application team is familiar with MySQL but is open to new solutions. The application and user data are stored in 10 tables and are de-normalized. The application will access this data through an API layer using a unique ID in each table. The company expects the traffic to be light at first, but the traffic will increase to thousands of transactions each second within the first year. The database service must support active reads and writes in multiple AWS Regions at the same time. Query response times need to be less than 100 ms.

Which AWS database solution will meet these requirements?

A.

Deploy an Amazon RDS for MySQL environment in each Region and leverage AWS Database Migration Service (AWS DMS) to set up a multi-Region bidirectional replication

B.

Deploy an Amazon Aurora MySQL global database with write forwarding turned on.

C.

Deploy an Amazon DynamoDB database with global tables

D.

Deploy an Amazon DocumentDB global cluster across multiple Regions.

Full Access
Question # 89

A company’s ecommerce website uses Amazon DynamoDB for purchase orders. Each order is made up of a Customer ID and an Order ID. The DynamoDB table uses the Customer ID as the partition key and the Order ID as the sort key.

To meet a new requirement, the company also wants the ability to query the table by using a third attribute named Invoice ID. Queries using the Invoice ID must be strongly consistent. A database specialist must provide this capability with optimal performance and minimal overhead.

What should the database administrator do to meet these requirements?

A.

Add a global secondary index on Invoice ID to the existing table.

B.

Add a local secondary index on Invoice ID to the existing table.

C.

Recreate the table by using the latest snapshot while adding a local secondary index on Invoice ID.

D.

Use the partition key and a FilterExpression parameter with a filter on Invoice ID for all queries.

Full Access
Question # 90

For the first time, a database professional is establishing a test graph database on Amazon Neptune. The database professional must load millions of rows of test observations from a .csv file stored in Amazon S3 into the Neptune DB instance through a series of API calls.

Which sequence of actions enables the database professional to upload the data most quickly? (Select three.)

A.

Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.

B.

Ensure the vertices and edges are specified in different .csv files with proper header column formatting.

C.

Use AWS DMS to move data from Amazon S3 to the Neptune Loader.

D.

Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.

E.

Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.

F.

Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.

Full Access
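For context, the Neptune bulk loader is invoked with an HTTP POST to the cluster's loader endpoint from inside the VPC; it pulls the files from Amazon S3 through an S3 VPC endpoint using the IAM role attached to the cluster. A sketch using the requests library, with a hypothetical endpoint, bucket, and role ARN:

    import requests  # assumption: run from a host inside the Neptune VPC

    response = requests.post(
        "https://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader",
        json={
            "source": "s3://my-bucket/observations/",
            "format": "csv",
            "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
            "region": "us-east-1",
            "failOnError": "FALSE",
        },
    )
    print(response.json())  # returns a load job ID that can be polled for status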
Question # 91

A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2 that belongs to a different development team in the same department. The networking team confirmed that the routing between VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1.

What is likely causing the timeouts?

A.

The database is deployed in a VPC that is in a different Region.

B.

The database is deployed in a VPC that is in a different Availability Zone.

C.

The database is deployed with misconfigured security groups.

D.

The database is deployed with the wrong client connect timeout configuration.

Full Access
Question # 92

A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server. The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only.

How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?

A.

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance.

B.

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance.

C.

Provision and configure AWS DMS. Set up replication from the on-premises SQL Server environment to the new production RDS instance.

D.

Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance.

Full Access
Question # 93

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company's operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.

Which solution will solve the throttling issue without requiring changes to the app?

A.

Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.

B.

Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.

C.

Use on-demand capacity mode for the DynamoDB table.

D.

Use DynamoDB Accelerator (DAX).

Full Access
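For context, switching an existing table to on-demand capacity is a single table-level change that requires no application modifications. A boto3 sketch with a hypothetical table name:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Pay-per-request mode absorbs spiky traffic without capacity planning.
    dynamodb.update_table(
        TableName="MobileAppBackend",
        BillingMode="PAY_PER_REQUEST",
    )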
Question # 94

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

A.

Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

B.

Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

C.

Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

D.

Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

E.

Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

Full Access
Question # 95

A startup company in the travel industry wants to create an application that includes a personal travel assistant to display information for nearby airports based on user location. The application will use Amazon DynamoDB and must be able to access and display attributes such as airline names, arrival times, and flight numbers. However, the application must not be able to access or display pilot names or passenger counts.

Which solution will meet these requirements MOST cost-effectively?

A.

Use a proxy tier between the application and DynamoDB to regulate access to specific tables, items, and attributes.

B.

Use IAM policies with a combination of IAM conditions and actions to implement fine-grained access control.

C.

Use DynamoDB resource policies to regulate access to specific tables, items, and attributes.

D.

Configure an AWS Lambda function to extract only allowed attributes from tables based on user profiles.

Full Access
Question # 96

A large company has a variety of Amazon DB clusters. Each of these clusters has various configurations that adhere to various requirements. Depending on the team and use case, these configurations can be organized into broader categories.

A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.

Which AWS service or feature will help automate and achieve this objective?

A.

AWS Systems Manager Parameter Store

B.

DB parameter group

C.

AWS Config

D.

AWS Secrets Manager

Full Access
Question # 97

A database administrator needs to save a particular automated database snapshot from an Amazon RDS for Microsoft SQL Server DB instance for longer than the maximum number of days.

Which solution will meet these requirements in the MOST operationally efficient way?

A.

Create a manual copy of the snapshot.

B.

Export the contents of the snapshot to an Amazon S3 bucket.

C.

Change the retention period of the snapshot to 45 days.

D.

Create a native SQL Server backup. Save the backup to an Amazon S3 bucket.

Full Access
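For context, automated snapshots expire with the retention window, but a manual copy is retained until explicitly deleted. A boto3 sketch with hypothetical snapshot identifiers (automated snapshot names carry the rds: prefix):

    import boto3

    rds = boto3.client("rds")

    # Copy the automated snapshot to a manual snapshot that never expires.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="rds:sqlserver-db-2024-05-18-06-15",
        TargetDBSnapshotIdentifier="sqlserver-db-keep-2024-05-18",
    )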