Amazon AWS Certified Database Specialty – Database Migration, DMS and SCT Part 3
August 12, 2023

16. DMS security – IAM, encryption and networking

Now let's talk about DMS security. So, just like other AWS services, DMS uses IAM for managing access and resource permissions. You can encrypt DMS endpoint connections using SSL certificates: you assign a certificate to an endpoint via the DMS console or the API, and each endpoint may need a different SSL configuration depending on the database engine. And when I'm talking about the endpoints, I'm talking about both the source endpoint and the target endpoint. And remember, if you're using Redshift, then Redshift already uses an SSL connection, so you don't need the DMS SSL for that.
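To make this concrete, here is a minimal sketch (using boto3) of importing a certificate and assigning it to a source endpoint. The identifiers, hostname, and credentials are placeholders, and the right SSL mode depends on your engine:

```python
import boto3

dms = boto3.client("dms")

# Import the CA certificate (a .pem file) into DMS.
# For Oracle SSL you would pass CertificateWallet=... instead.
with open("rds-ca-cert.pem", "rb") as f:
    cert = dms.import_certificate(
        CertificateIdentifier="my-source-ca-cert",  # placeholder name
        CertificatePem=f.read().decode("utf-8"),
    )

# Create the source endpoint and require SSL verification.
dms.create_endpoint(
    EndpointIdentifier="my-source-endpoint",        # placeholder name
    EndpointType="source",
    EngineName="mysql",
    ServerName="source-db.example.com",             # placeholder host
    Port=3306,
    Username="admin",
    Password="********",
    SslMode="verify-full",                          # varies by engine
    CertificateArn=cert["Certificate"]["CertificateArn"],
)
```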

Similarly, Oracle SSL requires you to upload an Oracle wallet instead of the certificate .pem files, so that's something good to remember. And encryption at rest, that is, for storage, is provided through KMS and uses KMS keys. So that was IAM and encryption. Now let's look at network security. A DMS replication instance is always created within a VPC, and the database endpoints must have network ACL (NACL) and security group configurations that allow incoming access from the replication instance.
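As a hedged sketch of what that looks like in practice, here is how a replication instance might be created inside a VPC with boto3, with encryption at rest via a KMS key. The subnet IDs, security group, and KMS key ARN are all placeholders:

```python
import boto3

dms = boto3.client("dms")

# The subnet group determines which VPC (and subnets) the instance lives in.
dms.create_replication_subnet_group(
    ReplicationSubnetGroupIdentifier="my-dms-subnets",       # placeholder
    ReplicationSubnetGroupDescription="Subnets for DMS",
    SubnetIds=["subnet-0abc1234", "subnet-0def5678"],        # placeholders
)

dms.create_replication_instance(
    ReplicationInstanceIdentifier="my-replication-instance",
    ReplicationInstanceClass="dms.t3.medium",
    ReplicationSubnetGroupIdentifier="my-dms-subnets",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # must allow DB access
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/placeholder",
)
```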

So we have several network configurations that we can use with DMS: a single-VPC setup; a two-VPC setup; an on-premises network to a VPC using Direct Connect, VPN, or the Internet; or an RDS instance outside a VPC (or a database on an EC2-Classic instance) to a database inside a VPC via ClassicLink. We can configure DMS in all these scenarios, so let's take a look at each of them one by one. First, networking when we have a single VPC. This is the simplest network configuration: all the components are within the same VPC. That means the source database and the target database are in the same VPC, and the DMS replication instance also runs within that same VPC. This was the configuration we used for our hands-on demo. All right, the next configuration is having two VPCs.

So you have the source and target endpoints in different VPCs. What you can do is create the replication instance in one of these VPCs and then use VPC peering. You can have your DMS instance either in the source VPC or in the target VPC, but it's good to remember that you generally get better performance if you place your DMS replication instance in the same AZ as the target database. All right, so that's the two-VPC configuration. Now let's look at the third configuration, which is on-premises to VPC.

So you have your source database in a corporate data center, and you have your DMS replication instance and the target database in a VPC in the AWS cloud. In this case you can use either Direct Connect or a VPN connection. And if you cannot use Direct Connect or even a VPN, you can consider using an Internet gateway, but remember that an Internet gateway exposes your replication instance: you will have a public replication instance in the VPC. Now let's look at the fourth configuration, where we have RDS running outside a VPC migrating to a target database running within a VPC.

So in this case, we use ClassicLink with a proxy server. Remember that a replication instance sitting in a VPC cannot use ClassicLink directly; hence we need a proxy server. The proxy server helps with port forwarding, and port forwarding on the proxy server allows communication between the source and target databases. Remember, this is only applicable if your RDS instance sits outside a VPC, and since you would generally have your RDS instances deployed inside a VPC, you don't have to worry about this much. All right, so these were the VPC configurations you would find in typical DMS migration projects. With that, let's continue to the next lecture.

17. DMS pricing

Now, let's quickly talk about DMS pricing. With DMS, you only pay for the replication instances and any additional log storage. And as usual with any other AWS service, you also pay the standard data transfer charges. So that's all you pay for when you use DMS. All right, so that was DMS pricing. Let's continue to the next lecture.

18. DMS general best practices

Now let's discuss general best practices when using DMS. You should disable backups and transaction logs (on the target database) during your migration process, and you should carry out validations during CDC. We talked about this before as well: instead of running your validations during a full load task, it's a good idea to run them during the CDC process, and this improves the performance of your full load tasks. Then, you can use Multi-AZ deployments for replication instances, and this is especially true for CDC or ongoing replications.
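As a sketch of what a full-load-plus-CDC task with validation enabled might look like (all ARNs and names below are placeholders, and only the overridden task settings need to be supplied):

```python
import json
import boto3

dms = boto3.client("dms")

task_settings = {
    # Run data validation (row-by-row source/target comparison) for the task.
    "ValidationSettings": {"EnableValidation": True},
}

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="my-migration-task",      # placeholder
    SourceEndpointArn="arn:aws:dms:placeholder:src",    # placeholder
    TargetEndpointArn="arn:aws:dms:placeholder:tgt",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:placeholder:ri",  # placeholder
    MigrationType="full-load-and-cdc",                  # full load + CDC
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(task_settings),
)
```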

And remember, you should always provision appropriate instance resources: if you use a smaller DMS instance than necessary, you might experience performance issues with your migration. To review all the best practices, I would suggest going over to this link here and reviewing the different best practices for using DMS. All right, so that was a quick lecture on the general best practices with DMS. Let's continue to the next lecture.

19. DMS migration architectures to minimize downtime

In this lecture, let's look at some of the solution architectures for migration with DMS that help you minimize the downtime due to migration. There are a few architectures you can use: the fallback approach, the roll-forward (fall-forward) approach, dynamic connections, and the dual-write approach. All these architectures allow for minimal, near-zero, or zero downtime. And remember, whenever you're looking for zero downtime, it invariably refers to full load plus CDC. When you use a combination of full load and CDC, the migration will typically result in zero or near-zero downtime.

All right, so it's super important to remember that zero downtime means full load plus CDC. Now let's look at these four approaches one by one. In the basic fallback approach, you have your client application connected to your source database, which, for example, is Oracle on-premises, and you use DMS with full load and CDC to migrate this database to, let's say, Aurora. Once the validation is successful, you stop the writes on the source database and cut over to the new database. And if you see a validation failure, you can always fall back to the source database. This is the basic fallback approach, where you cut over after your validation.

But remember, the most important thing here is full load plus CDC; that is what ensures zero or near-zero downtime. The next approach is the roll-forward or fall-forward approach. In this case, you have your source database with the application connected to it, and you use DMS with full load and CDC to migrate the data to the target. In addition to this, you also load the data into another database; let's call it a rollback database. So let's say you have Oracle on-premises and you migrate to Aurora in the AWS cloud. To use a roll-forward approach, you also move that data into Oracle on RDS, for example.

So you have the source Oracle on-premises and a rollback database, Oracle on RDS. If your validation is successful, you stop writes on the source and cut over to the Aurora database. And if your validation fails, you roll forward (fall forward) to the Oracle database on RDS instead of falling back to the on-premises database. So this is the roll-forward or fall-forward approach. The next approach is roll forward with a dynamic connection, or a dual connection. Here your client applications talk to an application that manages your connections.

So for example, you have an application that manages these connections and maintains, let's say, a DynamoDB table with a list of the different applications and the database servers they are using. In this case, we are migrating from SQL Server on-premises to Aurora in the AWS cloud, and the DynamoDB table tracks which applications have been migrated and which have not. So you can see that applications one and four are using SQL Server, while applications two and three have been migrated to Aurora. Typically in this approach, the applications write to both the old and new databases, and each application can cut over separately.
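To make the routing concrete, here is a minimal sketch of that lookup, assuming a hypothetical DynamoDB table named app-db-routing with hypothetical attributes (app, migrated, and the two endpoint fields):

```python
import boto3

ddb = boto3.resource("dynamodb")
routing = ddb.Table("app-db-routing")   # hypothetical routing table

def endpoint_for(app_name: str) -> str:
    """Return the database endpoint a given application should use."""
    item = routing.get_item(Key={"app": app_name})["Item"]
    # 'migrated', 'aurora_endpoint', 'sqlserver_endpoint' are assumed attributes.
    if item["migrated"]:
        return item["aurora_endpoint"]
    return item["sqlserver_endpoint"]

# Example: app2 has been cut over, so it connects to Aurora; app1 has not,
# so it keeps writing to the on-premises SQL Server.
print(endpoint_for("app2"))
```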

So applications one and four continue to write to the source database, and CDC continuously replicates their data to Aurora. The other two applications, two and three, have been migrated to Aurora, so they write directly to Aurora, and Aurora in turn replicates to the rollback database, which allows you to fall forward (roll forward). Okay, so this is another approach. The last approach that we're going to talk about is the dual-write approach. You have your client application connected to the source database, let's say Oracle, and then you run a DMS task to migrate this data to, let's say, DynamoDB. And now your application writes to both databases simultaneously.

Okay. And post validation of your migration, if the validation is successful, you cut over to the new database, DynamoDB, and if the validation fails, you fall back to the source database, which is Oracle. Until you cut over to the new database, you continue to write to both the source and the target database. So this is the dual-write approach. These were a few approaches that you can use in your migration projects to minimize the downtime due to migration, and for more migration architectures, I would suggest going over the link that you see on the screen. All right, so that's about it, and let's continue to the next lecture.

20. Migrating large databases

Now let's talk about different approaches that you can take to migrate large databases. When you have large databases, you typically use a multiphase migration: you split your migration workload into different phases and migrate them one by one, or you migrate in parallel using different DMS instances. So for example, what you can do is copy the static tables first, before migrating the active tables. That is one approach. Another option is to clean up unwanted data from your source database to reduce the database size, and this way your migration is going to complete much faster.
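As a sketch of what phase one of such a multiphase migration might look like, here is a DMS table-mapping rule that includes only a set of (hypothetical) static lookup tables, leaving the active tables for a later task on a separate replication instance:

```python
import json

phase1_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "static-tables-only",
            "object-locator": {
                "schema-name": "sales",   # hypothetical schema
                "table-name": "ref_%",    # hypothetical static-table pattern
            },
            "rule-action": "include",
        }
    ]
}

# Passed as TableMappings=json.dumps(phase1_mappings) to create_replication_task.
print(json.dumps(phase1_mappings, indent=2))
```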

Yet another approach is to use devices like Snowball Edge to move data to S3, and then use DMS to migrate the data from S3 to your target database. So for example, you have your source database in your on-premises data center, and you use SCT and the DMS agent to copy this data to a Snowball Edge device. You then ship this device to AWS, AWS copies the data from the Snowball Edge to S3, and then you use DMS to migrate the data from S3 to your target database. So these were a few approaches that you can use to migrate large databases using DMS. All right, let's continue.

21. Migrating to RDS databases

Now let's talk about migrating your databases to MySQL or MariaDB on RDS. Suppose you're migrating from MySQL or MariaDB instances running on-premises or on EC2, or from S3, where you would typically have backups such as mysqldump files. In this case, if you have a small to medium database, you can simply use the mysqldump and mysqlimport utilities, and this definitely involves some downtime. And for a one-time migration, like a full load migration, you can also restore from a backup or data dump that you put on S3.

So you can use mysqldump to create a data dump and store it in S3, and then create your target database from the backup stored in S3; you have this option in your RDS console to create a database from an S3 backup. Right, so that was about one-time, or full load, migration. If you need ongoing replication, one option is to configure binary log (binlog) replication from your source instance, and ongoing replication like this typically results in minimal downtime. All right, so these were the options for migrating from MySQL or MariaDB instances to RDS.
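As a hedged sketch of the create-from-S3-backup path mentioned above (note that this RDS feature expects Percona XtraBackup files in S3 rather than a plain mysqldump; all names and ARNs below are placeholders):

```python
import boto3

rds = boto3.client("rds")

rds.restore_db_instance_from_s3(
    DBInstanceIdentifier="my-restored-mysql",   # placeholder
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="********",
    AllocatedStorage=100,
    SourceEngine="mysql",
    SourceEngineVersion="8.0.32",               # version the backup came from
    S3BucketName="my-backup-bucket",            # placeholder bucket
    S3Prefix="backups/",
    S3IngestionRoleArn="arn:aws:iam::123456789012:role/rds-s3-import",
)
```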

Okay, so this is a homogeneous migration. If you're migrating from MySQL or MariaDB instances on RDS to RDS, then it's very easy: you simply promote a read replica to be a standalone instance and you are done. And if you're migrating from any other database engine, which means it's a heterogeneous migration, then for one-time or ongoing replication you can definitely use DMS, and this results in minimal downtime. Even for a homogeneous migration, that is, if you're moving from MySQL or MariaDB to MySQL or MariaDB on RDS, you can definitely use DMS as well, and that's going to give you minimal downtime.
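The read-replica promotion mentioned above is a single API call; a minimal sketch with a placeholder identifier:

```python
import boto3

rds = boto3.client("rds")
# Promote the replica to a standalone instance (placeholder identifier).
rds.promote_read_replica(DBInstanceIdentifier="my-mysql-replica")
```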

All right, now let's look at migrating to PostgreSQL on RDS. If you're migrating from on-premises or EC2-hosted PostgreSQL databases, then for a one-time or full load migration you can use utilities like pg_dump and pg_restore, which definitely involves some downtime. Alternatively, you can migrate using CSV data stored on S3. What you do is use the aws_s3 extension, a PostgreSQL extension available on RDS, and import the data using its aws_s3.table_import_from_s3 function. This approach will also require some downtime.
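Here is a sketch of that import path using psycopg2; the bucket, file, table, and connection details are placeholders, and the instance is assumed to have an IAM role that can read the bucket:

```python
import psycopg2

conn = psycopg2.connect(
    host="mydb.example.us-east-1.rds.amazonaws.com",  # placeholder host
    dbname="mydb", user="postgres", password="********")

with conn, conn.cursor() as cur:
    # Install the extension (pulls in aws_commons via CASCADE).
    cur.execute("CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;")
    # Import a CSV file from S3 into an existing table.
    cur.execute("""
        SELECT aws_s3.table_import_from_s3(
            'my_table', '', '(format csv)',
            aws_commons.create_s3_uri('my-bucket', 'data.csv', 'us-east-1')
        );
    """)
```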

If you have larger PostgreSQL databases running on RDS, you can use the pg_transport extension, which streams data from source to target; it's extremely fast and results in minimal downtime. So, all in all, you have two extensions: if you're importing from S3, you use the aws_s3 extension, and if you're migrating from PostgreSQL on RDS, you use the pg_transport extension, which gives an extremely fast migration with minimal downtime since it streams your data. And for heterogeneous migrations, and even for homogeneous ones, you can definitely use DMS for one-time migration as well as for ongoing replication, and DMS with CDC always results in minimal downtime.

All right, let's continue. Now let's talk about migrating to Oracle on RDS. If you have a smaller database, you can simply use the Oracle SQL Developer tool, a freeware provided by Oracle: you perform a database copy using Oracle SQL Developer, and it supports Oracle as well as MySQL as the source database. For larger databases, you can consider using the Oracle Data Pump utility, which can export and import data between Oracle databases whether they are on-premises, on EC2, or on RDS. For large databases, you would typically use Oracle Data Pump, and Oracle Data Pump can use S3 to transfer the dump files.

So when you want to use S3 to transfer the dump files with Oracle Data Pump, you set the S3_INTEGRATION option in your option group. Another approach to move data from source to target with Oracle Data Pump is a database link: you create a database link between the source and target and then move the dump file across it. And again, for heterogeneous or homogeneous migrations, you can definitely use DMS, both for one-time migration as well as for ongoing replication, and DMS will give you minimal downtime as well.
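A hedged sketch of enabling that S3 integration: add the S3_INTEGRATION option to the instance's option group and attach an IAM role. Names and ARNs are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Enable S3 integration for the Oracle instance's option group.
rds.modify_option_group(
    OptionGroupName="my-oracle-options",   # placeholder option group
    OptionsToInclude=[{"OptionName": "S3_INTEGRATION", "OptionVersion": "1.0"}],
    ApplyImmediately=True,
)

# Attach an IAM role that grants the instance access to the S3 bucket.
rds.add_role_to_db_instance(
    DBInstanceIdentifier="my-oracle-db",   # placeholder instance
    RoleArn="arn:aws:iam::123456789012:role/rds-s3-integration",
    FeatureName="S3_INTEGRATION",
)
# The dump files themselves are then moved with the rdsadmin_s3_tasks
# PL/SQL procedures (e.g., download_from_s3) on the RDS instance.
```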

All right, let's continue. Now let's talk about migrating to SQL Server on RDS. If you're migrating from SQL Server on-premises or on EC2, you can use SQL Server's native backup and restore: you simply use the backup (.bak) files stored on S3 for your migration, and this native backup and restore feature also supports encryption and compression. Alternatively, you can use SQL Server Management Studio, a freeware that gives you three options for migration: the Generate and Publish Scripts Wizard, which creates a script containing your schema (and optionally your data as well); the Import and Export Wizard; and a bulk copy.
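To illustrate the native restore path mentioned above, here is a minimal sketch using pyodbc to call the rds_restore_database procedure; all names are placeholders, and the instance is assumed to have the SQLSERVER_BACKUP_RESTORE option enabled in its option group:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.example.us-east-1.rds.amazonaws.com,1433;"  # placeholder
    "UID=admin;PWD=********",
    autocommit=True,
)

# Kick off a native restore of a .bak file stored in S3.
conn.execute("""
    EXEC msdb.dbo.rds_restore_database
         @restore_db_name = 'mydb',
         @s3_arn_to_restore_from = 'arn:aws:s3:::my-backup-bucket/mydb.bak';
""")
```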

So, to recap, those are the three options provided by SQL Server Management Studio that you can use for your migration to SQL Server on RDS. And if you're migrating from SQL Server on RDS to SQL Server on RDS, you can simply restore from a snapshot; the other options, like the native backup and restore feature or SQL Server Management Studio, can also be used in this case. And just like the other databases, whether you're doing a heterogeneous or homogeneous migration, you can use DMS for that, and that's going to give you minimal downtime; you can use DMS for one-time migration as well as for ongoing CDC replication. All right, so this was about migrating to the RDS databases. Let's continue to the next lecture.
