Amazon AWS Certified Database Specialty – Monitoring, Logging and Encryption
August 10, 2023

1. Encryption and Snapshots

Okay. Now we’d like to discuss KMS key encryption as well as snapshots and encrypted snapshots in the database world. So follow along with me. Anytime you hear encryption, it’s going to be around KMS. It’s an easy way to control access to your data and to manage the encryption keys. It is fully integrated with IAM for authorization, and it has many seamless integrations, for example into Amazon EBS, Amazon S3, Redshift, RDS, SSM, et cetera. To use KMS, we can use these integrations, or, if we prefer, the CLI or the SDK. Okay? Now I’m talking about KMS because you need to understand how KMS works with snapshots and so on.

So let’s talk about EBS snapshots first. Say there is an EBS volume that is encrypted with KMS already, and you want to copy it across to a different region. To do so, you first create a snapshot from it. So you take the volume, you have a snapshot, and the snapshot is going to be encrypted. Then you copy the snapshot over to the new region. And because KMS keys cannot be ported from one region to another, you have to re-encrypt the snapshot with a new KMS key. Now, this is an operation that is done automatically for you through the console when you run the copy operation, but behind the scenes the snapshot is decrypted with the first key and re-encrypted with the second key.
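As a minimal sketch, the cross-region copy with re-encryption can be done in one CLI call. The snapshot ID, account ID, regions, and KMS key ARN below are placeholders for illustration:

```shell
# Copy an encrypted snapshot from us-east-1 to eu-west-1, re-encrypting
# it with a KMS key that lives in the destination region (all IDs are
# placeholders). The decrypt/re-encrypt happens behind the scenes.
aws ec2 copy-snapshot \
    --region eu-west-1 \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --encrypted \
    --kms-key-id arn:aws:kms:eu-west-1:111122223333:key/example-key-id \
    --description "Cross-region copy re-encrypted with Key B"
```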

And so, as you can see, in the new region there’s a different key, Key B, that encrypts the EBS snapshot, and then you can restore the snapshot into a volume with that different KMS encryption key. Now, that is for EBS volumes. What about Redshift snapshots? There is something called a Redshift Snapshot Copy grant. Say you have a Redshift snapshot and you want to copy it over into a new region, and it has to use a new, different KMS key. You can do a cross-region copy of a snapshot, but there is a process, and you need to know about it from an exam perspective. In the destination AWS region, you create a Snapshot Copy grant by doing the following. So you have to do it in the destination region: first you create a KMS key in the destination region.

Then you specify a name for the Snapshot Copy grant, and that name must be unique in that region for your account. Then you specify the KMS key ID for which you are creating the grant, so the KMS key you just created. Then, in the source region, you enable copying of snapshots and specify the name of the Copy grant that you created in the destination region, which will allow you to copy the snapshots. This is a bit boring, but you have to remember these steps: that’s potentially one point for you at the exam if you do remember them. So just understand that the Snapshot Copy grant has to be created in the destination region, and then in the source region you enable the copy of snapshots by specifying the copy grant you created in the destination region.
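The steps above can be sketched with two CLI calls, one per region. The grant name, cluster identifier, regions, and key ARN are all placeholders:

```shell
# Step 1 (destination region): create the Snapshot Copy grant for a KMS
# key that already exists in that region (placeholder IDs throughout).
aws redshift create-snapshot-copy-grant \
    --region eu-west-1 \
    --snapshot-copy-grant-name my-copy-grant \
    --kms-key-id arn:aws:kms:eu-west-1:111122223333:key/example-key-id

# Step 2 (source region): enable cross-region snapshot copy, referencing
# the grant that was created in the destination region.
aws redshift enable-snapshot-copy \
    --region us-east-1 \
    --cluster-identifier my-cluster \
    --destination-region eu-west-1 \
    --retention-period 7 \
    --snapshot-copy-grant-name my-copy-grant
```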

What about creating an encrypted copy of an unencrypted RDS database? Well, you take the unencrypted database, then you take a snapshot of it, which will also be unencrypted. Then you create an encrypted copy of that snapshot using the KMS key of your choosing, which gives you an encrypted snapshot. Finally, you restore a database from the encrypted snapshot, which gives you an RDS database that is encrypted. So that’s it for all the encryption stuff you should know for the exam. I hope that was helpful and I will see you in the next lecture.
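This snapshot-copy-restore flow can be sketched in three CLI calls. Instance, snapshot, and key names below are placeholders:

```shell
# Take a snapshot of the unencrypted instance (placeholder names).
aws rds create-db-snapshot \
    --db-instance-identifier my-unencrypted-db \
    --db-snapshot-identifier my-unencrypted-snap

# Copy the snapshot with a KMS key: the copy comes out encrypted.
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier my-unencrypted-snap \
    --target-db-snapshot-identifier my-encrypted-snap \
    --kms-key-id alias/my-rds-key

# Restore a new instance from the encrypted snapshot: it is encrypted.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier my-encrypted-db \
    --db-snapshot-identifier my-encrypted-snap
```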

2. Database Logging

Okay, so let’s summarize all the types of logging you can have on all the different databases. And again, it’s a bit boring, but you have to remember it because it could be helpful at the exam. On RDS you get engine log files, so for Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. You can list the log files with an API call named DescribeDBLogFiles, and you can download these log files with the RDS DownloadDBLogFilePortion API call. The normal log file retention in RDS is up to seven days, and that’s configurable per DB engine using parameter groups. And you have the option to publish all these logs into CloudWatch Logs. This allows you to do multiple things: number one, real-time analysis of the log data, and number two, storing the logs in highly durable storage.
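The two API calls just mentioned map directly onto CLI commands. The instance identifier and log file name are placeholders (log file names vary by engine):

```shell
# List the log files available on an RDS instance (placeholder name).
aws rds describe-db-log-files \
    --db-instance-identifier my-db-instance

# Download a portion of one of those log files.
aws rds download-db-log-file-portion \
    --db-instance-identifier my-db-instance \
    --log-file-name error/mysql-error.log \
    --output text
```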

In CloudWatch Logs, you can define how long you want to retain the logs, even indefinitely, and from CloudWatch Logs you can export the logs into Amazon S3. If you want to publish the logs into CloudWatch Logs, you create a custom parameter group. So the idea with RDS is that because it runs standard database engines, you get all the logging facilities of those engines, and you can extract any of these logs. And again, remember, the logs can be kept in RDS for seven days or published into CloudWatch Logs, where you can retain them for a long time or export them to Amazon S3, for example. Okay, so that’s number one. Then for Aurora: this is a proprietary engine, but you will still get engine logs for PostgreSQL and MySQL.

So you can still use the two API calls from before to describe the database log files or to download a DB log file portion. Again, you get seven days of retention in RDS, and that’s configurable. And the logs can again be published into CloudWatch Logs, for the very same reasons as before. But the one thing you cannot publish into CloudWatch Logs is the transaction logs of Aurora. Okay, so Aurora’s transaction logs are not available to publish into CloudWatch Logs. Next we have Redshift. Redshift is again proprietary to AWS. The logs we get from it are the connection and user activities in your database, which will allow you to troubleshoot and audit.

That means that from the connection log you’re going to get the logs of every authentication attempt, so whenever a user tries to connect, including connections and disconnections. This could be helpful from a security perspective, or to troubleshoot, say, a user or an application that can’t connect to your database anymore. Then you have user logs, which is information about changes to database user definitions. You also have the user activity log, which logs each query before it is run on the database, which can be quite helpful for troubleshooting. All these logs for Redshift are stored in Amazon S3 buckets, and you must enable this.
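Enabling Redshift audit logging to S3 is a single CLI call. The cluster identifier, bucket name, and prefix are placeholders; the bucket policy must already allow Redshift to write to it:

```shell
# Turn on Redshift audit logging (connection, user, and user activity
# logs) to an S3 bucket (placeholder names; the bucket policy must
# allow Redshift to write, or delivery will fail).
aws redshift enable-logging \
    --cluster-identifier my-cluster \
    --bucket-name my-redshift-log-bucket \
    --s3-key-prefix redshift-logs/
```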

In terms of retention, you set the lifecycle policies on Amazon S3 accordingly. And you need to make sure that the Amazon S3 bucket has a bucket policy that allows Redshift to write to the bucket so that the logs can be successfully delivered. So, as you can see, for Redshift there is no option to send the logs to CloudWatch Logs. Next, DynamoDB. DynamoDB is again a database proprietary to AWS, and so all API calls to DynamoDB will be logged into CloudTrail. CloudTrail will capture basically anything, such as writes and reads: any call made into DynamoDB is an API call to AWS, so that’s why you will see everything in CloudTrail.

And from CloudTrail you can send all these logs into CloudWatch Logs or Amazon S3 buckets. There is no such thing as a quote-unquote log file in DynamoDB, because it’s proprietary. The only thing you know is that whenever someone makes an API call into DynamoDB, you will see it in CloudTrail. And that makes sense, because DynamoDB is not a database you manage yourself, it is a managed database. Okay, so what about DocumentDB? With DocumentDB, you can audit all the events using something called DocumentDB events. So remember this, DocumentDB events, and you must opt in. Some examples of logged events for DocumentDB are successful and failed authentication attempts.

Other logged events are dropping a collection in a database, creating an index, and DDL (data definition language) changes. These four things are logged events, and again, you cannot get them unless you successfully enable DocumentDB events. These logs will then be sent into CloudWatch Logs. And to opt in to DocumentDB events, you need to set a flag in the parameter group called audit_logs, and you need to set that parameter to the value enabled. Finally, other information: ElastiCache offers no access to logs yet. Neptune will publish audit log data into a log group in Amazon CloudWatch Logs.

And again, there is a parameter to enable, called neptune_enable_audit_log, set to 1 or 0 to enable or disable it. For QLDB, there are no logs available. And for DMS, you can set the task logging level to the most verbose level, called LOGGER_SEVERITY_DETAILED_DEBUG, which is going to give you access to the most logs possible regarding your DMS task. Okay, so that’s it. It’s good to read about this in the slides before you get into the exam if you can’t remember it, but it should be pretty easy after a few reads. Okay, so I hope that was helpful and I will see you in the next lecture.
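The two audit parameters mentioned for DocumentDB and Neptune are both set through cluster parameter groups. Parameter group names below are placeholders:

```shell
# Enable DocumentDB audit events via the audit_logs parameter
# (placeholder parameter group name).
aws docdb modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-docdb-params \
    --parameters "ParameterName=audit_logs,ParameterValue=enabled,ApplyMethod=pending-reboot"

# Enable Neptune audit logging (1 = enabled, 0 = disabled).
aws neptune modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-neptune-params \
    --parameters "ParameterName=neptune_enable_audit_log,ParameterValue=1,ApplyMethod=pending-reboot"
```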

3. Secrets Manager

Okay, so now let’s talk about Secrets Manager. It’s a newer service meant for storing secrets. And the good thing about Secrets Manager is that it has the capability to rotate secrets every X number of days, which is very, very nice. This is something we’ll be seeing in the CloudFormation section a lot. To do the rotation of the secrets, it will use a Lambda function. And that Lambda function can be generated by AWS, because a lot of database types are integrated already with Secrets Manager. For example, RDS, Redshift and DocumentDB are integrated with Secrets Manager, and therefore the Lambda functions to perform the rotation of the password for these databases already exist.

So we can leverage a Lambda function that is created by AWS. We’ll see this in the CloudFormation section. Or, for other secrets, you would need to code a Lambda function yourself to generate the next secret value to allow it to rotate. Now, all the secrets are going to be encrypted using KMS. In the exam, Secrets Manager will mostly come up when you need to pass secrets to an RDS database to make it more secure, or maybe Redshift or DocumentDB. The idea is that no one will get access to the password of RDS unless they have access to that password in Secrets Manager. Now, this is something we’ll see in the hands-on during CloudFormation. So, just an overview. I hope you liked it and I will see you in the next lecture.
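As a minimal sketch of this workflow, the secret name, password, Lambda ARN, and account ID below are placeholders; the rotation Lambda itself would be one of the AWS-provided templates or your own function:

```shell
# Store a database credential as a secret (placeholder values; secrets
# are encrypted at rest with KMS).
aws secretsmanager create-secret \
    --name prod/my-rds-db \
    --secret-string '{"username":"admin","password":"example-password"}'

# Turn on automatic rotation every 30 days, driven by a rotation
# Lambda function (placeholder ARN).
aws secretsmanager rotate-secret \
    --secret-id prod/my-rds-db \
    --rotation-lambda-arn arn:aws:lambda:us-east-1:111122223333:function:my-rotation-fn \
    --rotation-rules AutomaticallyAfterDays=30
```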

4. Active Directory with RDS Microsoft SQL Server

Okay, so now let’s talk about how to integrate Active Directory with RDS SQL Server, for security managed through your Active Directory. So you have a SQL Server database managed by RDS, and to integrate it with Active Directory, you have to use AWS Managed Microsoft AD. You have to use that AWS Managed Microsoft AD. Now, if you need to use your own internal Active Directory in your corporate data center and integrate it with RDS, what you need to do is join that on-premises AD with a trust relationship to your AWS Managed Microsoft AD. And as soon as these two directories trust each other, you will be able to use your corporate data center AD with RDS Microsoft SQL Server. Then, to allow the RDS Microsoft SQL Server instance to access your AWS Managed Microsoft AD, you need to create an IAM role.

The IAM role allows the database to access your AD. Then you need to configure your users and groups in the Managed Microsoft AD. Then you modify that Microsoft SQL Server RDS database to reference the IAM role you have created and the AD it should access. And if you do make that modification, you need to know that you don’t need to stop the database; you can just leave it running and do the modification on the fly. Okay? Finally, to make sure that all these things can talk to one another, you need to make sure that the security groups are correct. As you can see, it is the RDS Microsoft SQL Server database accessing the AWS Managed Microsoft AD.

So you need to make sure that the security group on the AD in AWS allows traffic in from the security group attached to the SQL Server. And then finally, when you’re done, you can log into the database using the master user credentials and create logins for your Active Directory users. So again, some steps to remember. It’s a bit boring, and we can’t really do a hands-on on this, but you have to remember the steps in this diagram, so hopefully that makes sense. It should all make sense in the end. Remember the fact that you need the networking, the IAM role, the trust relationship, the elements you need to create, and the fact that you don’t need to stop the database when modifying it to create this whole link. Okay, so that’s it for me. I hope you liked it and I will see you in the next lecture.
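The key modification step can be sketched as one CLI call. The instance identifier, directory ID, and role name are placeholders; note the instance stays running during the change:

```shell
# Join an existing RDS SQL Server instance to an AWS Managed Microsoft
# AD directory (placeholder IDs). The IAM role grants RDS access to the
# directory; no stop/start is needed, the instance is modified live.
aws rds modify-db-instance \
    --db-instance-identifier my-sqlserver-db \
    --domain d-1234567890 \
    --domain-iam-role-name my-rds-ad-role \
    --apply-immediately
```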
