Lessons learned when trying to set up a hosted VM

My experience has been through CenturyLink’s cloud, and overall, we’ve been quite happy with them.  But there are lots of pitfalls.  None of it has been difficult, but as they say, the devil’s in the details.

There is one thing to remember at all times: SNAPSHOT IS YOUR FRIEND!  Repeat this after me: SNAPSHOT IS YOUR FRIEND!  Your VM/cloud vendor may vary, but any time you’re going to do something major to your OS or SQL or file system, MAKE A SNAPSHOT BACKUP!  Yes, it takes time, but restoring from a snapshot is a lot faster than opening a ticket and having their techs restore a backup.  When you’re confident that what you just did is doing what you planned, delete the snapshot to make room for the next, unless you have the ability to keep more than one snapshot.  If you are in such an enviable position and can name them, give each one a name or description like ‘Before reformatting for 64k clusters’.

First pitfall: cluster size.  After creating your server and adding additional drives for logs, backups, TempDB, etc., STOP.  Go ahead and copy your SQL Server install disk to C: along with SPs and hot fixes, but STOP.  Don’t do any further installation work until you reformat those drives!  When the VM management system created them, they didn’t know we’d be talking SQL Server, so they have the default of 4k clusters.  SQL Server works better if you provide 64k clusters.  You’ll save yourself a lot of work by reformatting your drives above C: that will use SQL Server NOW, before ANY software is installed or files copied to them.
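A quick sketch of the reformat step, from an elevated command prompt.  The drive letter and volume label here are hypothetical — substitute your own, and note that this DESTROYS everything on the drive, which is exactly why you do it before anything is installed:

```shell
:: Reformat a data drive with 64 KB clusters.
:: E: and the label "SQLData" are examples only -- use your own.
:: WARNING: this erases the drive. Do this BEFORE installing anything on it.
format E: /FS:NTFS /A:64K /V:SQLData /Q
```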

If you failed to reformat and have already installed SQL: forget it.  You need to reformat those drives, and you’re going to have to uninstall/reinstall SQL to do it.  Back up EVERYTHING, including the system databases — you do know that you should be backing up Master and MSDB, right?  Script out all logins, users, roles, and jobs.
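Backing up the system databases is just an ordinary BACKUP DATABASE statement.  A minimal sketch, assuming a hypothetical backup path:

```sql
-- Back up the system databases before the uninstall.
-- The D:\Backups path is a hypothetical example -- use your own location.
BACKUP DATABASE master TO DISK = N'D:\Backups\master.bak' WITH INIT;
BACKUP DATABASE msdb   TO DISK = N'D:\Backups\msdb.bak'   WITH INIT;
```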

If you install SQL and then decide to reformat the drives, you might think it could be very simple: stop all SQL services, copy the files and directory structures to a holding area, copy them back, restart services.  Well, it didn’t work for me.  Ultimately I did an uninstall, reinstall, copied the production databases and logs back where they belonged, and did an attach.  Everything was fine.
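The attach step after the reinstall looks something like the following.  The database name and file paths are hypothetical — point FILENAME at wherever you copied the .mdf and .ldf files back to:

```sql
-- Reattach a user database after reinstalling SQL Server.
-- MyAppDB and the E:\ / F:\ paths are examples only.
CREATE DATABASE MyAppDB
ON (FILENAME = N'E:\Data\MyAppDB.mdf'),
   (FILENAME = N'F:\Logs\MyAppDB_log.ldf')
FOR ATTACH;
```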

You can check the cluster size by running the following from a command prompt, supplying a drive letter at the end (look for the ‘Bytes Per Cluster’ value in the output):

fsutil fsinfo ntfsinfo

Second Pitfall: not making your server part of a domain.  This is actually the first thing that you should do.  Any server-level accounts that you create (we’re not talking SQL Server-level) will be configured as [machine name]\[user name], and you’re going to have problems in SQL using Windows authentication.  It doesn’t matter if you will have just the one server and it won’t be linked to your real domain; some programs, like Terminal Services Gateway, need to run on a domain.  So save yourself some grief: after the VM comes up for the first time, join it to a domain.
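The domain join itself is one line of PowerShell.  The domain name below is a hypothetical placeholder, and you’ll need credentials with permission to join machines to that domain:

```shell
# Join the server to a domain and reboot (run from an elevated PowerShell).
# "corp.example.com" is a placeholder -- substitute your actual domain.
Add-Computer -DomainName "corp.example.com" -Credential (Get-Credential) -Restart
```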

What happens if you don’t make your server part of a domain?  Confusion.  User IDs reflect the machine name and not a domain name.  You’ll probably have some more troublesome consequences down the road, too — I would think especially if you’re setting up replication or advanced services like that.

Third Pitfall: NOT DOING BACKUPS!  “Well, the hosting provider is doing backups, why should I?”  Because (1) you’re trusting your fate (i.e., your job) to someone utterly outside of your control, and (2) you always need to back up, including the system databases as mentioned above.  Your provider is backing up your server, but do you know when?  You probably only know ‘when’ within a specific window that you don’t control.  You should do regular backups just like you would for a server within your data center.  Don’t rely on taking an occasional snapshot: my provider only allows one, so that’s not much of a point-in-time recovery option.  And it’s only kept for up to 10 days (on my provider), so it’s no long-term solution.  My suggestion is to do log shipping, or even compress/encrypt the backups and mail them to yourself, size permitting.  Azure becomes a possibility if your budget allows: back up to an Azure server and you have a level of DR.
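A regular backup routine can be as simple as the statements below, wrapped in an Agent job.  Database name and paths are hypothetical examples:

```sql
-- Nightly full backup plus periodic log backups, with compression.
-- MyAppDB and the F:\Backups path are examples only.
BACKUP DATABASE MyAppDB
  TO DISK = N'F:\Backups\MyAppDB_full.bak'
  WITH COMPRESSION, INIT, CHECKSUM;

BACKUP LOG MyAppDB
  TO DISK = N'F:\Backups\MyAppDB_log.trn'
  WITH COMPRESSION, CHECKSUM;
```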

Personally, I’m not fond of VSS and other services that back up databases in situ.  There are excellent products, such as Redgate’s SQL Backup Pro, that do the job well.  But I’m a cheapskate, and the environments that I’ve worked in are uniformly small, so I haven’t used them.  And now SQL Server does backup compression natively (HIGHLY RECOMMENDED!), but I’ve seen SQL backup agents do horrible things.  I’m sure they’ve improved over the years; I just don’t think they’re necessary when it’s easy to script backups and control everything.


Some useful SQL Server installation/configuration advice

I tended to do most of this automatically, but a reminder is good.


One thing that I love in recent versions of SQL Server — Database Mail, which I think arrived around the 2005/2008 timeframe — is the ability to use non-Exchange (SMTP) email accounts.  My current project is a hosted server that is not part of our domain, and we didn’t want to install Exchange on it just so it could send me emails: too many resources consumed in terms of disk and CPU.  Conveniently, I could set it up with its own Gmail account and then configure the operators to send alerts to both my work email and my personal email, along with the DBCC summary.
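The Database Mail side of that setup boils down to three msdb stored procedures.  A minimal sketch — the account name, profile name, address, and password below are all hypothetical (for Gmail you’d use an app password, not your real one):

```sql
-- Create a Database Mail account pointed at Gmail's SMTP server.
-- All names and credentials here are placeholders.
EXEC msdb.dbo.sysmail_add_account_sp
     @account_name    = N'GmailAccount',
     @email_address   = N'myserver@gmail.com',
     @mailserver_name = N'smtp.gmail.com',
     @port            = 587,
     @enable_ssl      = 1,
     @username        = N'myserver@gmail.com',
     @password        = N'app-password-here';

-- Create a profile and attach the account to it.
EXEC msdb.dbo.sysmail_add_profile_sp
     @profile_name = N'AlertProfile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
     @profile_name    = N'AlertProfile',
     @account_name    = N'GmailAccount',
     @sequence_number = 1;
```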

Here’s the article that I used that explains the process of setting up non-Exchange email for SQL Server: