As detailed in my last couple of posts, we have rebranded our company as PointAcross, which means we rebranded all of the products except the desktop Composer, which is still known as GoldMail. I created all new services and storage accounts with pointacross in the name in a new region, and everything is published in production and working fine.
Only one thing is left – the Windows Azure SQL Databases are still in US North Central. This is a disaster waiting to happen. It's well known that WASDs have issues with connections, so you always have to put in connection management code. Running your services in one data center with your database in another is just asking for trouble. So now I have to move the primary database, as well as the tiny session state database used in a couple of our web applications.
Although we have somewhere around 15 services running in Azure, only four of them access the primary database. These are the services for our primary application (PointAcross, used to make messages), and the Players used to play those messages.
Some investigation revealed several ways to migrate the data.
1. Copy the database from one place to another.
2. Use the SQL Azure Migration Wizard from CodePlex.
3. Use the bacpac feature to back up a WASD to blob storage, then restore it to the new database server.
4. Use the wrapper that Redgate has written around the bacpac capability that makes it very easy to backup/restore a WASD from/to blob storage.
1. Copy the database
This is pretty simple. You can make a copy of your database using T-SQL. It's as simple as logging into SQL Server Management Studio, connecting to the destination server, and running this command against the master database.
CREATE DATABASE MyDatabaseBkp AS COPY OF myserver.MyDatabase
After you do that, you will probably get back a message telling you it's finished. What this means is that the query is finished, not the copy! It's doing the copy in the background. There are two queries you can run against the master database on the destination server to check the status of the database copy.
select * from sys.databases
This one shows all of the databases and their status. Look at the [state_desc] column. It will say "ONLINE" if it's finished, or "COPYING" if it's not.
The second command only shows running jobs, and will show a % completion in the [percent_complete] column. I read somewhere that this will update every 5 minutes, but don’t bet the farm on it. Mine went to 24%, stayed a long time, then jumped to 65%, and then finished only a few minutes later. So take it as an indication that something is happening, but don’t assume it’s definitive. Here’s the second command:
select * from sys.dm_database_copies
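If you want to watch one specific copy rather than scan both result sets, you can join the two views on the database ID. This is just a sketch, assuming the copy was started with the name MyDatabaseBkp from the command above:

```sql
-- Run against the master database on the destination server.
-- percent_complete will be NULL once the copy finishes and
-- drops out of sys.dm_database_copies.
SELECT d.name, d.state_desc, c.percent_complete
FROM sys.databases d
LEFT JOIN sys.dm_database_copies c
    ON d.database_id = c.database_id
WHERE d.name = 'MyDatabaseBkp';
```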
Copying a database is fairly easy, and works great. Our database is about 8 GB, and it took 10-15 minutes to run. Don’t forget that you have to pay for the second database!
There is a major gotcha with this option. It only works if the source and destination are within the same data center. Since we're moving our database from US North Central to US West, this won't work for us.
For more information about this feature, check out this article.
2. Use the SQL Azure Migration Wizard
This application scripts all of the objects in the database and extracts the data to your local hard drive using BCP. Then it runs the script on the destination server and uploads the data to the new database. This means you end up downloading all of the data to your local machine and then uploading it to the new server. I didn't really want to download 8 GB of data and then wait for it to upload again. If you have all the time in the world, this will probably work for you. We used it when we initially migrated our infrastructure to Windows Azure, and we still use it occasionally to migrate new tables and other objects that are developed locally, but I didn't want to have our systems down for as long as it would take to run this in production. For more information, check it out on CodePlex.
3. Use the bacpac feature
This is a good choice. Back up the database in US North Central to blob storage, then restore it to the new server in the US West region. For more information, check out this article. I would have stopped and figured out how to do this, but I found something easier.
4. Use the Redgate Cloud Services tool
It turns out that Redgate has written an application that lets you back up your Windows Azure SQL Database to blob storage (or Amazon storage), and restore it from storage back to Windows Azure. This looks a lot like a wrapper around the bacpac function to me. To check it out, click here. This is what I used to move our primary database.
After creating an account, you can set up a job. The screen looks like this:
Choose to back up a SQL Azure database to Azure and you will see this screen:
On the left, fill in the Azure server, User name, and Password, and click the Test button attached to the Password field. If you got the information right, this will populate the list of databases, and you can select the one you want to back up.
Note the checkbox about creating a temporary copy. When I used this application, I created my own copy of the database using the Copy Database command explained in option 1, and then used this to back up the copy. This ensured that I didn’t have any conflicts or problems from the database being accessed while it was being backed up. We actually shut down all of the client applications while I did this, so that shouldn’t have been a problem, but I figured better safe than sorry. If you don’t want to create your own copy, you can check the box and this application will do it for you. Note that you still have to pay for that extra database just as if you had done it yourself.
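If you go the manual route like I did, the whole sequence is just the copy command from option 1, followed by a cleanup once the backup finishes. The database names here are the placeholders from option 1, so substitute your own:

```sql
-- Run against master on the source server (copy and source
-- must be in the same data center):
CREATE DATABASE MyDatabaseBkp AS COPY OF myserver.MyDatabase;

-- ...wait for the copy to reach ONLINE, then run the Redgate
-- backup job against MyDatabaseBkp instead of the live database...

-- When the backup is done, drop the copy so you stop paying for it:
DROP DATABASE MyDatabaseBkp;
```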
On the right side, fill in the storage account name and access key and click Test. If you got the information right, it will populate the list of containers. Select the one you want to put your database backup in, and specify the file name of the backup. This can be anything you want to call it; it is not the name that will be used for the database when you restore it – it’s just a file name.
After getting everything the way you want it, click Continue and you will see this screen:
Now you can schedule the job to be run later, or you can just run it now. I chose to run it now, but it’s cool that Redgate offers this job scheduling feature – this means you can use this tool to make regular backups of your database to your blob storage account. It would be nice if they would let you put a date marker in the file name so you can keep multiple copies, but I don’t see any way to do that right now.
First it shows the job getting ready to start.
It will show progress as it’s running.
When it’s finished, it sends out an e-mail (unless you have turned the notifications off), and the FINISH TIME will show the date/time that it finished.
For my sample run, the text is showing up in red. That’s because this one failed. It wouldn’t back up our ASPState database. The error said, “Microsoft Backup Service returned an error: Error encountered during the service operation. Validation of the schema model for data package failed.” (It said a lot more, too, but I’ll spare you.)
I didn’t know how to fix it or what the problem was, so for this tiny database, I used the SQL Azure Migration Wizard to move it into production.
The Redgate Cloud Services worked great for our production database, which is around 8 GB. The dry run of the backup, around 9 p.m. PDT on a Monday night, took about an hour. The production run, on a Friday night around 10:30 p.m. PDT, took 20 minutes.
So now my database is backed up.
After you back it up, you can restore it to any of your WASD servers in any data center. Here’s what the screen looks like in the Redgate Cloud Services.
This works just like the backup – fill in the information (don’t forget to click the Test buttons to make sure the other boxes are populated) and schedule a job or run it now.
Note that you specify the database name on the SQL Azure side. The database does not have to exist; it will be created for you. If it does exist, and you want to replace it, there’s a checkbox for that.
We did a dry run and a production run on the restore. The dry run on a Monday night at 11:35 p.m. PDT took 50 minutes. The production run on a Friday night at 11 p.m. PDT took 44 minutes. Much, much faster than using the SQL Azure Migration Wizard would have been.
After I moved the database, I had to update the Azure Service configurations for the four services that access the database, and change the connection strings to point to the new database. Awesome Jack (our Sr. Systems Engineer who also does our builds) published them and we tried them out. After ensuring everything worked, Jack re-enabled access to all of the client applications, and we declared the Rebranding release completed.
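Updating the service configurations just means pointing the connection string setting in each service's .cscfg at the new server. A hypothetical fragment (the setting name, server, and credentials are placeholders, not our actual configuration) looks something like this:

```xml
<ConfigurationSettings>
  <!-- Point at the new US West server; everything else stays the same. -->
  <Setting name="DatabaseConnectionString"
           value="Server=tcp:newserver.database.windows.net,1433;Database=MyDatabase;User ID=svcaccount@newserver;Password=...;Encrypt=True;" />
</ConfigurationSettings>
```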
We did have one issue. The services use an account in our Windows Azure SQL Database that only has EXEC rights. This ensures that nobody puts dynamic SQL in a service call, and that if someone gets a copy of our Azure configuration, they can only execute stored procedures – they can't log in and query our database. We had to set this up again because the login is defined in the master database. So we had to set it up on the target server, then run the setup queries against the restored database in order for the account to work. Not a big deal, but it's a good thing we remembered to test this.
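Recreating an account like this is a two-step job, because the login lives in master on the new server while the user and its EXEC grant live in the restored database. A sketch of the idea (the names and password are placeholders, not our actual setup queries):

```sql
-- Step 1: run against the master database on the new server.
CREATE LOGIN ServicesLogin WITH PASSWORD = 'placeholder-Strong-Pa55word';

-- Step 2: run against the restored database.
CREATE USER ServicesUser FOR LOGIN ServicesLogin;

-- Grant only the right to execute stored procedures;
-- no SELECT/INSERT/UPDATE/DELETE on the tables directly.
GRANT EXECUTE TO ServicesUser;
```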
There are several ways to copy a Windows Azure SQL Database from one server to another. I used the Redgate Cloud Services tools, which I think are a wrapper around the bacpac feature. They were very easy to use, and worked perfectly for our production database. For our tiny session state database, we used the SQL Azure Migration Wizard. Our database is now moved, and the rebranding/upgrade project has been finished successfully.