SQL Projects & Change Data Capture

I ran across this article from MSSQLTips.com discussing the use of CDC with SQL Projects. We don’t use CDC in our current systems, so I’d never had a reason to investigate this, but I would have expected better support out of the box. In the meantime, Ray Barley wrote up a good article on enabling CDC for the database and then using post-deploy scripts to enable CDC for tables.

Find the article here: www.mssqltips.com/sqlservertip/3316/how-to-enable-change-tracking-in-a-sql-server-database-project/
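
For reference, the general shape of the post-deploy approach looks something like the following. This is only a sketch based on my reading, not something I’ve tested against our projects, and the schema/table names are placeholders:

-- Enable CDC on the database if it isn't already enabled
IF NOT EXISTS (SELECT 1 FROM sys.databases WHERE name = DB_NAME() AND is_cdc_enabled = 1)
BEGIN
    EXEC sys.sp_cdc_enable_db;
END

-- Enable CDC on a table if it isn't already tracked (placeholder table name)
IF NOT EXISTS (SELECT 1 FROM sys.tables
               WHERE SCHEMA_NAME(schema_id) = N'dbo' AND name = N'MyTable'
                 AND is_tracked_by_cdc = 1)
BEGIN
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'MyTable',
        @role_name     = NULL;
END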


SSDT: Setting Different Permissions per Environment

One of the areas that SSDT doesn’t address adequately is permissions. Many shops have different users, logins, and permissions set up across their environments, but SSDT takes a “one size fits all” approach and assumes that you will have the exact same configuration in each environment. While that should be the case for your schema, handling permissions well in SSDT requires some interesting and somewhat involved techniques.
Attribution – I’m borrowing heavily from Jamie Thomson’s ( blog | twitter ) post
A strategy for managing security for different environments using the Database Development Tools in Visual Studio 2010
Jamie addressed this problem for the older DB Projects (Datadude, VSTSDB, DBPro, etc.). He came up with an excellent, if somewhat complex, solution to effectively handle pushing different permissions to different environments. This is my attempt to update this and distill it into a “how to” for SSDT and SQL Projects, but I freely admit that this is based on Jamie’s excellent work.
In general, the steps we’ll want to take are:

  1. Generate a set of SQL Logins that will be used on the server.
    1. These should be run on the servers or your local box ahead of time – we will not manage them within the SQL Project itself, though you can if you wish.
  2. Generate the files for users/roles/permissions for each database project and environment.
  3. Add the files to your SSDT SQL Project
  4. Create and set a variable for your environments in your SSDT SQL Project.
  5. Adjust your Security Wrapper file to handle your environments.
  6. Adjust your post-deploy script to call the Security Wrapper.

First, let’s look at what happens when you import Permissions by default into an SSDT SQL Project.
[Screenshot: security objects imported into the SSDT SQL Project]
As you can see from the screenshot, we have a “Permissions” file containing granted permissions, a “RoleMemberships” file that adds users to various roles, a “Sales” file for the Sales schema, and three files for each user – one for its schema, one for its login, and one for the user itself. Confusing, but not necessarily a problem until you need a release where User1 does not and will not exist on the server, or where User2 belongs to a different domain than your production servers and cannot (and should not) be created there because there is no trust relationship.
Let’s remove all of the files for objects other than Schemas from the project before we start. If you’re not sure which file is which, open each one to verify before removing it from the project. If you accidentally remove the files for the schemas, you’ll get errors and will need to re-add a “Schema” to this security folder for each schema that is used.
 
Generate SQL Logins
This Microsoft KB article details the sp_help_revlogin stored procedure. Create it on your source server, run it, and use the output to re-create those logins elsewhere. This copies the logins themselves (with their original SIDs and password hashes), but not permissions or database users.
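
Once the procedures from the KB article exist on the source server, generating the script is a one-liner (the @login_name parameter comes from the KB version of the procedure; the login name below is a placeholder):

-- Script out all logins on the source server
EXEC sp_help_revlogin;

-- Or script out a single login
EXEC sp_help_revlogin @login_name = N'MyAppLogin';

Copy the generated CREATE LOGIN statements and run them on your target servers.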
 
Generate the Security Files through PowerShell
First, download the Powershell_GeneratePermissions.zip file. Extract that to a folder on your hard drive.
Note – the script assumes that you will use Windows authentication to connect to your SQL Server. The script will need to be adjusted if you need to use SQL Authentication.
Edit the “GeneratePermissions.ps1” file. Replace “DB1”, “DB2”, etc. with whatever database names you’ll be using, and uncomment any of the extra lines that you need. (There is likely a better way to do this in PowerShell, and I’m more than open to suggestions on how to build up this array more concisely.)
The script assumes that the database name matches the project name. If that’s not the case, adjust your DB/project names accordingly.
Open a PowerShell prompt in that folder after editing/saving the file.
Run the file using something like this commandline:
.\GeneratePermissions.ps1 -SQLInstance "localhost" -Environment DEV
“localhost” should be replaced with your server name.
“DEV” indicates that these scripts will be tied to a variable value of “DEV” when the project is published.
I’d recommend running this for every environment at one time if you can. It will make adding the files to the project much easier.
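
If you do want to generate everything in one pass, a simple loop over your environment names works; the environment values below are only examples:

# Generate security scripts for each environment in one pass.
# Adjust the environment names to match the values you plan to publish with.
foreach ($env in @('DEV', 'QA', 'STAGING', 'PROD', 'LOCAL')) {
    .\GeneratePermissions.ps1 -SQLInstance 'localhost' -Environment $env
}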
If you run for several environments, you’ll see a folder structure something like the following for each project (in this case for Adventureworks2012):
[Screenshot: generated folder structure for the AdventureWorks2012 project]
Inside your “SecurityAdditions” folder, you’ll see the following files and folders:
[Screenshot: files and folders inside the SecurityAdditions folder]
Each of these files will be used to add permissions for each environment. In this case, we have permissions set for our Development, QA, Local, Production, and Staging environments.
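
I won’t reproduce the generated scripts here, but each environment file ends up holding ordinary security DDL, roughly along these lines (the user, role, and schema names below are made up for illustration):

-- Illustrative only - your generated files will use your own users, roles, and schemas
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'MYDOMAIN\AppUser')
    CREATE USER [MYDOMAIN\AppUser] FOR LOGIN [MYDOMAIN\AppUser];

ALTER ROLE [db_datareader] ADD MEMBER [MYDOMAIN\AppUser];
GRANT EXECUTE ON SCHEMA::[dbo] TO [MYDOMAIN\AppUser];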
Now, let’s add the “SecurityAdditionsWrapper.sql” file from the files we extracted earlier into your “SecurityAdditions” folder. Feel free to edit it as needed to match the environment(s) you want to use for your database project. By default, it’s set to look for the above files plus a “Default” file to be a catch-all. Adjust that so all files/environments match the files you created and save it.
 
Add the files to your SSDT SQL Project
Copy your newly created folder(s) into the appropriate project folder. Open each project and you will notice that the files still don’t show up. Select the project file in Solution Explorer; the “Show All Files” option should now be enabled.
[Screenshot: the Show All Files option in Solution Explorer]
You’ll see a bunch of files that aren’t part of the project under a “Scripts” folder:
[Screenshot: files under the Scripts folder that are not yet included in the project]
We want to add them to the project. Right-click the “Scripts” folder and select the option to “Include in Project”. Click the Project file again and turn off the option to show all files.

  • We need to set the Build action for all of these files to “None” (i.e., not in build) or there will be errors when trying to build the project.

Unfortunately, we can’t just select the folder and exclude it from the build; we need to select the files themselves. You can Ctrl+click to select multiple files. Select all of the files, open the Properties window, and change the “Build Action” option from “Build” to “None”.
[Screenshot: Properties window with the Build Action set to None]
Create and set your Variable to change environments
Open the Project Properties window
[Screenshot: Project Properties window, SQLCMD Variables section]
In the SQLCMD Variables section, add a new variable called “DeployType”. Set its default value to “Local” for now, or to whatever environment you have a set of security scripts for.
[Screenshot: the DeployType SQLCMD variable with its default value]
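
The default value only matters when nothing overrides it. At publish time you can pass a different value per environment, either in a publish profile or on the command line. For example, with SqlPackage.exe (the server, database, and dacpac names here are placeholders):

SqlPackage.exe /Action:Publish /SourceFile:"AdventureWorks2012.dacpac" /TargetServerName:"QASERVER01" /TargetDatabaseName:"AdventureWorks2012" /Variables:DeployType=QA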
 
Adjust your Security Wrapper files
The SecurityAdditionsWrapper.sql file in the code set I’ve created comes with settings for “Dev”, “QA”, “Staging”, “Production”, “Local”, and “Default”. It’s likely that you will not use all of these in your environment. One drawback to using SQLCMD to stitch these files together is that if an expected file doesn’t exist, the script will start throwing errors. Edit the SecurityAdditionsWrapper.sql file to remove any of the configurations you will not use, or change the names to match the environment values you used earlier.
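
To give you an idea of what you’re editing: at its core, the wrapper is a set of SQLCMD include statements guarded by the DeployType value, something like the sketch below (the included file names are placeholders – keep whatever names the generator produced). Note that the :r includes are resolved when the script is built/parsed, not at execution time, which is exactly why a missing file throws errors even if its branch would never run:

IF ('$(DeployType)' = 'DEV')
  BEGIN
      PRINT 'Applying DEV security...'
      :r .\DEV\SecurityAdditionsDEV.sql
  END

IF ('$(DeployType)' = 'QA')
  BEGIN
      PRINT 'Applying QA security...'
      :r .\QA\SecurityAdditionsQA.sql
  END

-- ...repeat for each environment, plus the Default catch-all if you keep one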
 
Adjust your Post-Deploy script
Finally, we need to open your Post-Deploy script.
Add the following to your script:
:r .\SecurityAdditions\SecurityAdditionsWrapper.SQL
This will include your SecurityAdditionsWrapper file as part of your post-deploy process. See my earlier post on selectively running pre/post deploy scripts if you want finer control.

SSDT: Trick to Not Run Pre/Post SQL on Publish

We’ve run across the need to avoid running certain scripts when publishing a brand new copy of a database from a SQL or DB Project from time to time. Often, those pre/post deploy scripts are designed to be run only against an existing system and don’t make sense for a brand-new, empty database.

We’ve worked around this by using a SQLCMD variable called DeployType and setting that variable to “New” for any build operation that will be used only to create the initial database. The code snippet for our Post-Deploy script looks something like the following:

-- Place scripts here if they should only run for an existing/working DB
IF ('$(DeployType)' <> 'New')
  BEGIN --Run scripts
      PRINT 'Deploying scripts.'
  END

--Security - runs last
:r .\SecurityAdditions\SecurityAdditionsWrapper.sql

 

The above script will look at the DeployType variable. If it’s set to “New”, it will ignore all of your deploy scripts in that block. If it’s set to any other environment, it will run the appropriate scripts if any exist. You can use this to control which environments will use certain scripts as well.
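
For example, to run a particular script only when publishing to production, you could add a block like this to the post-deploy script (the included file name is hypothetical):

--Only run this script against PROD
IF ('$(DeployType)' = 'PROD')
  BEGIN
      :r .\PostDeployScripts\ProdOnlyDataFix.sql
  END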

Learning to work around DB Pro

I’ve been trying to get a multi-database solution working correctly with MS Visual Studio Team System for Database Professionals (aka DBPro or DataDude). Anyone who has tried to tie multiple co-dependent databases together within DB Pro has experienced some of the pain points in working with these projects. Hopefully these notes can help you to some extent.

Pain Point #1: Circular References

This is the largest challenge that we face in our environment. DB Pro doesn’t really understand that DB A can reference DB B, which can then reference DB A.  You can easily add a Database Reference to another project in your solution by right-clicking the “References” sub-folder and adding a new Database Reference.  However, once you’ve done that, you can’t add one from that project back to the original.  I understand you don’t want to get caught in an endless loop, but it would be nice to have some sort of maximum recursion level setting that would keep the tool from getting caught in one.

That being said, there is a way around this. You can generate a .DBSchema file for your existing database using the instructions from this site. Note that you may need to tweak the options a little bit. I think the “ModelType” parameter has been deprecated in GDR2.  Save that DB Schema file to some central folder and repeat for each database you want to reference.  Once done, go back into your projects and add a reference to each of those files. Because these are DBSchema files and not the actual database projects, DB Pro will recognize these and you’ll be able to add a reference as needed to each database.
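
If memory serves, the tool involved is vsdbcmd.exe with the Import action, something like the line below, but the exact switch names vary between releases, so trust the linked instructions over my recollection (the server and database names are placeholders):

vsdbcmd.exe /Action:Import /ConnectionString:"Data Source=MyServer;Initial Catalog=DatabaseA;Integrated Security=True" /ModelFile:DatabaseA.dbschema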

Remember that these schema files will not be kept up to date automatically.  As you build your DB Projects, you’ll need to update those schema files in some way.


Pain Point #2: System Objects

This isn’t a hard problem to work around and there are a lot of hints out there about this.  The easiest way to handle references to the system objects is by adding a reference to the included DBSchema files created when you installed DB Pro.  These can generally be found in %ProgramFiles%\Microsoft Visual Studio 9.0\VSTSDB\Extensions\SqlServer. There should be sub-folders for 2000, 2005, and 2008.  Simply include these references in your projects and you should be set.

This may not always help you if you reference another database referencing the system objects, though. I ran into this problem and had to find yet another workaround, which leads me to…

Pain Point #3: “Unresolved References”

When all else fails and you can’t actually get around the “Unresolved Reference” warnings (because of temp tables, a reference to another database that references the system objects, or some other reason), you have a final recourse: tweak the file properties to ignore certain error messages or warnings for that file.  It’s not a great solution because you could miss some valid warnings, but it can help you get past the warnings that keep popping up when you try to build or deploy.  To add an exception, find the file, select it, and edit its properties.  The last option is “Suppress Warnings”, and that will let you bypass the warnings that normally occur when DB Pro can’t figure out what you are trying to do.  This should really only be used for code that is definitely working in your environment, and the code should be reviewed carefully when enabling this option.

 

Finally, I don’t necessarily claim that any of these are best practices. We’re learning here and trying to figure out the best way to work around the limitations and quirks in DB Pro to get it working in our environment. I’m more than happy to learn new and better ways to do things so feel free to share your tips or to correct me where I’m wrong.