When I migrated GoldMail’s infrastructure to Windows Azure, one of the things I puzzled over quite a bit was how to set up a staging environment for our Windows Azure services and applications. We tried using the staging instances of the Azure services, but the URL changes every time you publish one, and because of the interdependencies between the services, we had to use DNS entries to avoid changing the configurations in all of the related services. DNS entries can take a while to propagate, so this did impact our release process.
I ended up defining staging services and deploying to the production instance of them. When we’re ready to put something in production, we publish to what we call “the staging instance of our production services” and do a VIP swap. Then we test what we’ve put in production, and when we’re satisfied, we delete the old version of the services. I don’t think we’ve ever had to revert to the old version, but if we do, it’s worth having it there instead of waiting 15-20 minutes for a new deployment to spin up.
Drawing on my experience, I’ve written an article on how to handle staging deployments in Windows Azure. It covers handling configurations, both the way we chose to do it and Microsoft’s new feature in the Azure Tools 1.4. I also show you how to handle different web.config files – we have one service that we publish separately for HTTP and HTTPS, so we ended up with four web.config files, and I show you how to manage that with a pre-build command. I also show you how to set up your services for both staging and production. You can check out the article on the Dev Pro Connections website here. It’s supposed to appear in the October issue as well. I hope it’s helpful to you.
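To give you a feel for the pre-build idea, here’s a minimal sketch of that kind of step. The file names and the BUILD_CONFIG variable are illustrative, not what the article uses; in a Visual Studio pre-build event you’d work with the $(ConfigurationName) macro and the cmd.exe equivalents instead.

```shell
#!/bin/sh
# Sketch: keep one web.config per build target checked in, and copy the
# right one into place before compiling. Runs in a temp directory with
# stand-in files so it's self-contained.
set -e
cd "$(mktemp -d)"

# Stand-ins for the four checked-in config variants (names are hypothetical).
for target in ProductionHttp ProductionHttps StagingHttp StagingHttps; do
  echo "<!-- settings for $target -->" > "web.$target.config"
done

# In a real pre-build event this would come from the build configuration.
BUILD_CONFIG="${BUILD_CONFIG:-ProductionHttps}"

# The actual pre-build step: overwrite web.config with the chosen variant.
cp "web.$BUILD_CONFIG.config" web.config
echo "selected: web.$BUILD_CONFIG.config"
```

The nice part of this approach is that the selection happens at build time, so the deployed package only ever contains the one web.config that matches the target.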
Tags: Azure Staging