If you are like me and love to keep up with the latest news in the Database, SQL and Cloud world, this is your place. Here is a short list of the blog posts that got my attention during the past week. I hope you like them!
Using Amazon RDS for SQL Server in a hybrid cloud environment: “A common use case in an enterprise cloud database adoption strategy is to move your database workloads to the cloud first, while slowly moving the rest of your applications in batches. This post looks into the various possible scenarios and configurations you can use when accessing an Amazon RDS for SQL Server database instance from your on-premises or hybrid environments.”
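If you want to try the hybrid setup described there, a minimal sketch of the client side might look like the following; the endpoint, database name, and credentials are placeholders, and it assumes network connectivity (VPN or AWS Direct Connect) and the ODBC driver are already in place.

```python
import pyodbc

# Hypothetical RDS for SQL Server endpoint; replace with your instance's
# endpoint and credentials. Assumes the on-premises network can reach the
# VPC (for example via VPN or AWS Direct Connect) and port 1433 is open.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=SalesDB;"
    "UID=admin;PWD=your-password;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

# Quick sanity check that the hybrid connection works.
cur = conn.cursor()
cur.execute("SELECT @@VERSION;")
print(cur.fetchone()[0])
cur.close()
conn.close()
```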
Minimal Downtime Storage Migration of Large Databases: “The performance and stability of the storage layer are extremely critical for large databases, especially when the data size has grown to terabytes. When there is shared storage among multiple servers, the storage layer can be upgraded without touching the servers. In such a situation, one of the challenging tasks is moving the databases to the new storage. Due to the enormous size of the database, it usually takes several hours to copy the database files, which requires several hours of downtime. In this article I will be discussing how to minimize the downtime for larger databases during a storage migration.”
Stopping an Automatically Started Database Instance with Amazon RDS: “Customers needing to keep an Amazon Relational Database Service (Amazon RDS) instance stopped for more than 7 days look for ways to efficiently re-stop the database after being automatically started by Amazon RDS. If the database is started and there is no mechanism to stop it, customers start to pay for the instance’s hourly cost. Moreover, customers with database licensing agreements could incur penalties for running beyond their licensed cores/users. […] This blog provides a step-by-step approach to automatically stop an RDS instance once the auto-restart activity is complete.”
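As a rough illustration of the re-stop step (not the post's exact solution), here is a minimal boto3 sketch of a Lambda-style handler that stops an instance once it reports as available again; the instance identifier and the idea of triggering it from an RDS event notification rule are assumptions for the example.

```python
import boto3

rds = boto3.client("rds")

def handler(event, context):
    """Re-stop an RDS instance after Amazon RDS has auto-started it.

    Assumes this function is triggered (for example by an EventBridge rule
    on the RDS instance-started event) for the hypothetical instance below.
    """
    db_id = "my-dev-instance"  # placeholder identifier

    status = rds.describe_db_instances(DBInstanceIdentifier=db_id)[
        "DBInstances"
    ][0]["DBInstanceStatus"]

    # Only issue the stop once the auto-restart has completed; the API
    # rejects a stop request while the instance is still starting.
    if status == "available":
        rds.stop_db_instance(DBInstanceIdentifier=db_id)
        print(f"Stop requested for {db_id}")
    else:
        print(f"{db_id} is {status}; not stopping yet")
```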
Faster data migrations in Postgres: “In this post, let’s walk through the tradeoffs to consider while using pg_dump and pg_restore for your Postgres database migrations, and how you can optimize your migrations for speed, too. Let’s also explore scenarios in which you need to migrate very large Postgres tables. With large tables, using pg_dump and pg_restore to migrate your database might not be the most optimal approach. The good news is we’ll walk through a nifty Python tool for migrating large database tables in Postgres. With this tool we observed the migration of a large Postgres table (~1.4TB) complete in 7 hrs. 45 minutes vs. more than 1 day with pg_dump/pg_restore.”
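On the pg_dump/pg_restore side, one well-known speed-up (not necessarily the exact steps from the post) is to dump in directory format and run both tools with parallel jobs. Here is a minimal sketch driving them from Python; the connection strings, dump directory, and job count are placeholders.

```python
import subprocess

# Placeholders: adjust connection strings, dump directory, and job count.
SOURCE = "postgres://user:password@source-host:5432/appdb"
TARGET = "postgres://user:password@target-host:5432/appdb"
DUMP_DIR = "/tmp/appdb_dump"
JOBS = "8"  # parallel workers; tune to available CPU and I/O

# Directory format (-Fd) is required for parallel dump and restore (-j).
subprocess.run(
    ["pg_dump", "--format=directory", f"--jobs={JOBS}",
     f"--file={DUMP_DIR}", f"--dbname={SOURCE}"],
    check=True,
)

subprocess.run(
    ["pg_restore", f"--jobs={JOBS}", "--no-owner",
     f"--dbname={TARGET}", DUMP_DIR],
    check=True,
)
```

For the very large single tables the post focuses on, this is exactly where the authors suggest reaching for their dedicated Python tool instead.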
And this is it. I hope you enjoyed reading them as much as I did. Have a nice weekend and keep yourself healthy!