Amazon AWS Certified Database Specialty – Exam preparation Part 1
August 7, 2023

1. Exam Guide & Sample Questions

In this section, let's discuss the Database Specialty exam format and some of the sample questions. All right, here I am on the Certified Database Specialty home page on the AWS website. So here you can see the exam guide as well as the sample questions. Let's look at the exam guide quickly. This is the official AWS exam guide that details the requirements, exam format, and the syllabus. For this particular certification exam, the recommended AWS knowledge is a minimum of five years of experience with common database technologies, two years of hands-on experience on AWS, and experience and expertise working with on-premises and AWS cloud-based relational and NoSQL databases.

This is the recommendation from AWS. The exam has multiple-choice as well as multiple-response questions. There is no penalty for guessing, so if you do not know the answer to a question, you can definitely go ahead and make an educated guess. And remember that the minimum passing score is 750. Then you can go through the different domains of the exam. We have covered each of these domains in this course, and I'm sure you'll be able to correlate what we have learned with these five domains.

All right, so that's about the exam guide. Let's go back to the home page, where you can download the sample questions. This document provides ten sample questions that you can try to solve on your own, and we are going to solve these questions together in this section. There is also an answer key at the end of the document that you can refer to after you attempt the questions. All right, so let's go ahead and discuss these questions one by one.

2. Sample question 1

The first question: a media company is running a critical production application that uses Amazon RDS for PostgreSQL with a Multi-AZ deployment. The database size is 25 TB. The IT director wants to migrate the database to Amazon Aurora PostgreSQL with minimal effort and minimal disruption to the business. What is the best migration strategy to meet these requirements? Now, the best strategy for answering these questions is to look for the keywords and the AWS services involved. So here we can see that we have RDS for PostgreSQL with Multi-AZ, and we want to migrate it to Aurora PostgreSQL with minimal effort and minimal disruption, or minimal downtime. So without even looking at the options or the answers first, you should try to come up with an answer yourself.

We can see that this is a homogeneous migration. We are migrating from PostgreSQL to PostgreSQL, and we are migrating from RDS to Aurora. So the easiest strategy will be to create an Aurora replica, and we have seen this in the RDS and Aurora section, right? So let's look at the options. The first option is to use the AWS Schema Conversion Tool (SCT) to copy the database schema from RDS for PostgreSQL to an Aurora PostgreSQL DB cluster, and create an AWS DMS task to copy the data. Now, since this is a homogeneous migration, we really don't need SCT, so this option doesn't make sense. The second option says create a script to continuously back up the RDS for PostgreSQL instance using pg_dump and restore the backup to an Aurora PostgreSQL DB cluster using pg_restore.

It's talking about using the native PostgreSQL backup and restore tools to migrate the database to Aurora. Again, this should work, but it's not a minimal-effort, minimal-disruption strategy, right? It's going to take time; a backup-and-restore process generally takes a long time. All right, the third option says create a read replica from the existing RDS for PostgreSQL instance, check that the replication lag is zero, and then promote the read replica as a standalone Aurora PostgreSQL DB cluster. Now, this looks like a correct option, but remember, you cannot promote an RDS read replica to be a standalone Aurora DB cluster. You can only promote an Aurora replica to be an Aurora DB cluster, not the RDS replica.

The fourth option seems to be the right one, which says create an Aurora replica from the existing production RDS for PostgreSQL instance, stop the writes on the master, check that the replication lag is zero, and then promote the Aurora replica as a standalone Aurora PostgreSQL DB cluster. So option D is the right answer. And we have talked about this when we discussed the cluster replication options for Aurora. When replicating between an RDS database instance and an Aurora DB cluster, we can create an Aurora read replica and then promote the Aurora replica as a standalone Aurora DB cluster, right? So D is the correct answer. Let's continue.
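As a rough sketch of what option D looks like in practice, here is a minimal boto3 example. The region, account ARN, and identifiers are hypothetical placeholders, and a DB instance would still need to be added to the new cluster before promotion.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is a placeholder

# Step 1: create an Aurora PostgreSQL read replica cluster of the RDS instance
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-replica",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:prod-postgres"  # placeholder ARN
    ),
)
# (a DB instance must then be added to the cluster with create_db_instance)

# Step 2: once writes are stopped on the master and the replica lag is zero,
# promote the replica to a standalone Aurora DB cluster
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-replica")
```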

3. Sample question 2

Question two. A medical company is planning to migrate its on-premises PostgreSQL database, along with its application and web servers, to AWS. RDS for PostgreSQL is being considered as the target database engine. Access to the database should be limited to the application servers and a bastion host in an Amazon VPC. Which solution meets the security requirements? Note the keywords here. We want to migrate from on-premises PostgreSQL to RDS PostgreSQL; it's a homogeneous migration. We want to migrate the database as well as the application and web servers. And we want to secure access to the database, limiting it to only the application servers and the bastion host, right? And you know what a bastion host is; we have discussed it in the VPC section.

So what are the options? Let's look at them one by one. The first option says launch the RDS for PostgreSQL database in a DB subnet group containing private subnets and modify the pg_hba.conf file on the DB instance to allow connections from only the application servers and the bastion host. Now, we have already discussed that we do not modify the conf files in RDS; we use parameter groups. So this option is incorrect. The second option is to launch the RDS for PostgreSQL database in a DB subnet group containing public subnets. Now remember, we generally deploy databases within private subnets, so this option as well is incorrect.

The third option says launch the RDS for PostgreSQL database in a DB subnet group containing private subnets. Create a new security group with inbound rules to allow connections from only the security groups of the application servers and the bastion host. Attach the new security group to the DB instance. Now, this option is obviously correct, because we use security groups to control access to the database. In this case, we are also using private subnets, and the security group is configured to allow connections only from the application servers and the bastion host. So this appears to be the correct answer, but let's first check option D as well. Option D says launch the RDS for PostgreSQL database in a DB subnet group containing private subnets.

Create a network ACL (NACL) attached to the VPC and private subnets. Modify the inbound and outbound rules to allow connections to and from the application servers and bastion host. Now remember, NACLs are used to control access at the subnet level; security groups control access at the instance level. Option C, which uses security groups, is of course the right one. You create custom rules in the security group for your database instances to allow connections from the security groups of the application servers and the bastion host, and this enables you to securely connect to your database instances running in private subnets. Right? So option C is the right answer. Let's continue.
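Here is a minimal boto3 sketch of the security-group rule from option C; all of the group IDs are placeholders, and 5432 is simply the default PostgreSQL port.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow PostgreSQL (port 5432) inbound only from the application servers'
# and bastion host's security groups; all IDs below are placeholders
ec2.authorize_security_group_ingress(
    GroupId="sg-0db1111111111111",  # new security group attached to the DB
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0app222222222222", "Description": "app servers"},
                {"GroupId": "sg-0bas333333333333", "Description": "bastion host"},
            ],
        }
    ],
)
```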

4. Sample question 3

Question three. A database specialist is troubleshooting complaints from an application's users who are experiencing performance issues when saving data in an Amazon ElastiCache for Redis cluster with cluster mode disabled. The database specialist finds that the performance issues are occurring during the cluster's backup window. The cluster runs in a replication group containing three nodes. Memory on the nodes is fully utilized. Organizational policies prohibit the database specialist from changing the backup window time. How could the database specialist address the performance concern? (Select two options.)

So whenever a question has multiple responses, it will be mentioned in the question like this one. All right, so what are the keywords here? We are using ElastiCache for Redis with cluster mode disabled, we have performance issues during the backup window, and memory on the nodes is fully utilized. And we have discussed this earlier: we know that it's recommended to back up from a replica, because backing up from a replica preserves the primary node's performance, right? So let's look at what options we have. The first one says add an additional node to the cluster in the same AZ as the primary.

Now, adding a node to the cluster is not going to reduce your performance issues, because the performance issues are occurring due to the backup process, right? So A is not the correct answer. Option B says configure the backup job to take a snapshot of a read replica. This is a right answer, and we have discussed this earlier as well: it's recommended to take a backup from a replica. Option C says increase the local instance storage size for the cluster nodes. Now, increasing the storage size of the cluster nodes is not going to help, because the question says that memory on the nodes is fully utilized, and this memory refers to the instance memory, or the RAM.

So increasing the storage size is not going to help. Option D says increase the reserved-memory-percent parameter value, and this is the other right answer. We discussed this in the Redis best practices: we should set the reserved-memory-percent parameter to about 25%, which helps the background processes, or the non-data processes, run efficiently. Right? So option D is a right answer. And option E says configure the backup process to flush the cache before taking the backup. This obviously is a distractor; we never flush the ElastiCache cache, right? Options B and D are the correct answers. All right, let's continue.
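To make options B and D concrete, here is a minimal boto3 sketch; the parameter group name, node ID, and snapshot name are hypothetical placeholders.

```python
import boto3

ec = boto3.client("elasticache")

# Option D: raise reserved-memory-percent so background processes such as
# backups have memory headroom (parameter group name is a placeholder)
ec.modify_cache_parameter_group(
    CacheParameterGroupName="redis-custom-params",
    ParameterNameValues=[
        {"ParameterName": "reserved-memory-percent", "ParameterValue": "25"}
    ],
)

# Option B: take the snapshot from a read replica node instead of the primary
ec.create_snapshot(
    CacheClusterId="my-redis-002",  # a replica node ID, not the primary
    SnapshotName="nightly-backup",
)
```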

5. Sample question 4

Question four. A company's security department has mandated that their existing RDS for MySQL DB instance be encrypted at rest. What should a database specialist do to meet this requirement? Now, we covered this when we discussed how to encrypt an unencrypted RDS database; it's a straightforward answer. We already know that we take a manual snapshot of the unencrypted database, we copy that snapshot with encryption enabled, and then we restore that encrypted snapshot into the target database. That's how we encrypt an existing unencrypted RDS database, right? So let's look at the answers.

Option one, or option A, says modify the database to enable encryption, applying this setting immediately without waiting for the next scheduled maintenance window. We already know that we cannot encrypt an existing unencrypted database by modifying it like this, right? So option A is incorrect. Option B says export the database to an S3 bucket with encryption enabled, create a new database, and import the export file. Now, we don't export databases to S3 like this; we export snapshots. Option B is, again, incorrect.

Option C says create a snapshot of the database, create an encrypted copy of the snapshot, and create a new database from the encrypted snapshot. This is what we learned, so we know that this is the right answer. And option D says create a snapshot of the database and restore the snapshot into a new database with encryption enabled. Again, we know that this is not the right answer, because we have to copy the snapshot with encryption enabled and then restore the encrypted snapshot into the target database. Option C, which says create a new database from the encrypted snapshot, is the right answer. All right, let's continue.
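As a quick illustration of the snapshot, copy-with-encryption, restore sequence in option C, here is a boto3 sketch; the instance and snapshot identifiers are placeholders, and the default aws/rds KMS key is used for simplicity.

```python
import boto3

rds = boto3.client("rds")

# 1. Take a manual snapshot of the unencrypted instance
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-mysql",
    DBSnapshotIdentifier="prod-mysql-snap",
)

# 2. Copy the snapshot with encryption enabled
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="prod-mysql-snap",
    TargetDBSnapshotIdentifier="prod-mysql-snap-encrypted",
    KmsKeyId="alias/aws/rds",
)

# 3. Restore a new, encrypted instance from the encrypted snapshot
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="prod-mysql-encrypted",
    DBSnapshotIdentifier="prod-mysql-snap-encrypted",
)
```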

6. Sample question 5

Question five. A company has a highly available production 10 TB SQL Server relational database running on EC2. Users have recently been reporting performance and connectivity issues. A database specialist has been asked to configure a monitoring and alerting strategy that will provide metrics visibility and notification to troubleshoot these issues. Which solution will meet these requirements? Now, let's look at the keywords here: SQL Server on EC2. So we have a Microsoft SQL Server database running on EC2, not on RDS, and it's facing performance and connectivity issues.

And we are required to configure a monitoring and alerting strategy. So let's look at the options. The first one says configure AWS CloudTrail logs to monitor and detect signs of potential problems. We know that CloudTrail is an auditing service, not a performance monitoring tool, so this is not the right answer. The second option says install an Amazon Inspector agent on the DB instance. Again, Amazon Inspector is a security assessment tool, so it doesn't make sense here. The third option says migrate the database to RDS for SQL Server and use Performance Insights to monitor and detect signs of potential problems.

Create a scheduled AWS Lambda function that retrieves metrics from the Performance Insights API and sends notifications to an Amazon SNS topic. Now, this looks like a good answer, but migrating from EC2 to RDS doesn't look like an efficient or optimal solution. Let's look at what we have in option D. Option D says configure CloudWatch Application Insights for .NET and SQL Server to monitor and detect signs of potential problems, and configure CloudWatch Events to send notifications to an Amazon SNS topic. And this is the right answer. If you remember, we discussed CloudWatch Application Insights, and we know that we can use it with .NET and SQL Server.

We can also use it with DynamoDB tables, and it's used for problem detection, notification, and troubleshooting. So option D is the right answer. CloudWatch Application Insights uses machine learning classification algorithms to analyze metrics and identify signs of problems with your application. This also includes Windows Event Viewer and SQL Server error logs. And to receive notifications, you can create an EventBridge (CloudWatch Events) rule. All right, CloudWatch Application Insights is the right answer, which means option D is the correct option. Let's continue.
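For a feel of how option D might be set up, here is a minimal boto3 sketch; the resource group name and SNS topic ARN are hypothetical, and the resource group is assumed to already contain the EC2 instance running SQL Server.

```python
import boto3

ai = boto3.client("application-insights")

# Enable CloudWatch Application Insights monitoring for the resource group
# containing the EC2-hosted SQL Server; names and ARNs are placeholders
ai.create_application(
    ResourceGroupName="sqlserver-app-rg",
    OpsCenterEnabled=True,
    OpsItemSNSTopicArn="arn:aws:sns:us-east-1:123456789012:db-alerts",
)
```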

7. Sample question 6

Question six. A company's e-commerce application stores order transactions in an Amazon RDS for MySQL database. The database has run out of available storage and the application is currently unable to take orders. Which action should a database specialist take to resolve the issue in the shortest amount of time? So, we have MySQL on RDS and it has run out of available storage. And we discussed in the RDS section that if an instance runs out of storage, it may no longer be available until you allocate more storage. So you can allocate more storage to fix the issue, and to prevent this issue in the future, you can use storage autoscaling.

We have discussed this already in the RDS section. So let's look at what options we have. Option A says add more storage space to the database instance using the Modify DB instance action. This is the right answer; we know that if an instance runs out of storage, you have to allocate more storage. Let's look at the other options. Option B says create a DB instance with more storage space from the latest backup. Now, this should work as well, but it will not solve the problem in the shortest amount of time, which is what the question asks for. So option B, again, is incorrect.

Option C says change the DB instance status from storage-full to available. This doesn't make any sense at all, so it's not the correct answer. And option D says configure a read replica with more storage space. Now, a read replica is not going to help, because our application here is unable to take orders, which means it's struggling with the write requests, or the write load, not the read load. So option D is incorrect. The correct option is option A: add more storage space to the DB instance using the Modify DB instance action. All right, let's continue.
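Here is what option A could look like with boto3; the identifier and storage sizes are placeholders, and MaxAllocatedStorage is included to show how storage autoscaling can prevent a repeat of the issue.

```python
import boto3

rds = boto3.client("rds")

# Grow the allocated storage right away rather than waiting for the next
# maintenance window; identifier and sizes are placeholders
rds.modify_db_instance(
    DBInstanceIdentifier="orders-mysql",
    AllocatedStorage=500,        # new size in GiB
    MaxAllocatedStorage=1000,    # enables storage autoscaling up to this cap
    ApplyImmediately=True,
)
```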

8. Sample question 7

Question seven. A company undergoing a security audit has determined that its database administrators are presently sharing an administrative database user account for the company's Aurora deployment. To support proper traceability, governance, and compliance, each database administration team member must start using individual named accounts. Furthermore, long-term database user credentials should not be used. Which solution should a database specialist implement to meet these requirements? Now let's look at the keywords: we have an Aurora deployment, and the users, or the administrators, are sharing the master database user account.

And as a policy, we do not want to allow this kind of behavior. We want each user to have his or her own individual account to access the database. The most important keyword here is that we do not want long-term database user credentials. From this single keyword we come to know that we should be using temporary credentials, and the only option that allows us to use temporary user credentials is IAM database authentication. So without even looking at the options, we know that we must use IAM database authentication to solve this particular issue. So let's look at what options we have.

The first one says use the AWS CLI to fetch the IAM users and passwords from all team members, and for each IAM user, create an Aurora user with the same password as the IAM user. Now, just using the same password for the IAM user and for Aurora is not going to help, so this is of course an incorrect answer. Option B says enable IAM database authentication on the Aurora cluster, create a database user for each team member without a password, and attach an IAM policy to each administrator's IAM user account that grants the connect privilege using their database user account. And this is the right answer. We know this.

We have discussed how IAM database authentication works: it provides you with temporary user credentials that have a lifetime of about 15 minutes. And we also know that the IAM policy grants the connect privilege on the database user account. We have also discussed the IAM database authentication process for MySQL as well as for PostgreSQL, right? This, of course, is the correct answer. Let's still go through the other options. Option C says create a database user for each team member and share the new database user credentials with the team members.

Have users change the password on first login to the same password as their IAM user. Again, just using the same password is not going to work, so this again is not the right answer. And option D says create an IAM role and associate an IAM policy that grants the connect privilege using the shared account, then configure a trust policy that allows the administrator's IAM user account to assume the role. Now, the question clearly states that we want individual named accounts, right? So we cannot use a shared account here. Option D, again, is incorrect. The correct option is option B, which uses IAM database authentication. All right, let's continue.
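To show what option B looks like day to day, here is a minimal boto3 sketch of fetching a short-lived authentication token; the cluster endpoint and user name are hypothetical, and the matching database user and rds-db:connect IAM policy are assumed to already exist.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is a placeholder

# The database user would first be created without a password, e.g. in
# Aurora MySQL:
#   CREATE USER 'alice' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
# and the IAM policy grants rds-db:connect on that user's resource ARN.

# Each administrator then fetches a short-lived (15-minute) token instead of
# using a long-term password; endpoint and user name are placeholders
token = rds.generate_db_auth_token(
    DBHostname="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="alice",
)
# `token` is then supplied as the password when connecting over SSL
```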
