SQL Projects & Change Data Capture

I ran across this article from MSSQLTips.com discussing the use of CDC with SQL Projects. We don't use CDC in our current systems, so I'd never investigated this, but I would have expected better support out of the box. Fortunately, Ray Barley wrote up a good article on enabling CDC for the database and then using post-deploy scripts to enable CDC for tables.

Find the article here: www.mssqltips.com/sqlservertip/3316/how-to-enable-change-tracking-in-a-sql-server-database-project/
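
As a rough illustration of the approach (this is not Ray's exact script – the table, schema, and role names below are made up), a post-deploy script can call the standard CDC system procedures and stay idempotent by checking first:

-- Enable CDC for the database if it isn't already enabled
IF NOT EXISTS (SELECT 1 FROM sys.databases
               WHERE name = DB_NAME() AND is_cdc_enabled = 1)
    EXEC sys.sp_cdc_enable_db;

-- Enable CDC for a specific table (illustrative names)
IF NOT EXISTS (SELECT 1 FROM sys.tables
               WHERE object_id = OBJECT_ID('dbo.Orders') AND is_tracked_by_cdc = 1)
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Orders',
        @role_name     = NULL;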


SSDT Issue–Online Index Option Ignored

I came across this as we’d recently received a corporate directive that all indexes should be created with the ONLINE=ON option. That’s not too bad. Add that to all indexes as needed, publish the DB project, and we’re good to go. Except that we’re not.
Due to SSDT behavior that is “working as designed”, the ONLINE=ON option is not considered when SSDT compares indexes. On the positive side, that means an index created with the option won’t have it dropped. On the negative side, any index that is changed or added will not include the option.
If this is an issue for you, comment and vote on the related Connect ticket. If we can garner enough votes and interest, perhaps MS will hear us and reconsider whether the ONLINE=ON option should be part of the release script.
In the meantime, if you absolutely need this functionality, generate scripts for your releases and ensure that all index creates/alters include the option you want.
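
For reference, the option in question is just the WITH (ONLINE = ON) clause on the index DDL, so the generated release scripts need to end up looking something like this (index and table names are illustrative):

-- ONLINE = ON requires an edition that supports online index operations
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (ONLINE = ON);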

Edit: There’s been some discussion about this on the MSDN forums. Kevin Cunnane posted several responses pointing to a Codeplex example showing how this might be handled by compiling a DLL and referencing it through the SqlPackage command line. I have not had a chance to try that out yet, but I’m including the forum and Codeplex links for those who want to investigate further.

Forum post
Codeplex example


SSDT: Adding Logins for “Execute As”

We recently encountered a requirement to use “Execute As” syntax in some of our stored procedures. Of course, we have multiple environments that all have different permission definitions, which complicates things. In this case, the login itself is standard across all of our environments, but for the most part we don’t define logins within the project.
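
For context, a sketch of the kind of procedure involved (the procedure and table names are made up; the EXECUTE AS clause references a database user, which is why a user needs to exist in the project):

-- Runs under the security context of the 'MyLogin' database user
CREATE PROCEDURE dbo.DoPrivilegedWork
WITH EXECUTE AS 'MyLogin'
AS
BEGIN
    SELECT COUNT(*) AS RowsVisible FROM dbo.SensitiveTable;
END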

First problem: Add a new user called “MyLogin” to the project.

Solution: This was easy enough: I just added a new user to the project. However, I then ran into an error because the login for “MyLogin” did not exist in our master.dacpac reference.

 

Second Problem: Create a SQL Login reference within the master dacpac file that could be seen by all of our projects.

Solution: This proved to be a bit trickier. The model.xml file inside the default master.dacpac file contains many system-level objects. We would need to find the appropriate XML syntax for a new SQL Login element, add it to the model.xml file, and then update the hash for the dacpac file.

 

Extract the Model.xml file from our master.dacpac file

This can be easily done by finding your master.dacpac file that you referenced in your project and either renaming it to a .zip file or using a tool such as 7-zip to right-click the file and open it. Extract its “model.xml” file to some working folder.

Add a new SQL Login element to the model.xml file

Now that there’s a copy of model.xml to edit, open it in your favorite XML or text editor. I like Notepad++, but even plain Notepad will work. Navigate to the very bottom of the file; we want to add the new element just before the closing tags at the end of the file (for ease of use, not a requirement). Note the “GloballyScoped” annotation type in the example below: it treats the entity as something that is available outside of the current database.
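
A rough sketch of the element is below – the exact attribute layout can vary by SSDT/DacFx version, so compare it against the XML produced for a login added to a throwaway project before relying on it:

<Element Type="SqlLogin" Name="[MyLogin]">
  <Annotation Type="GloballyScoped" />
</Element>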




Make a new Dacpac file

Now that we’ve added the new login to our model.xml file, either add it back to the existing dacpac file by using something like 7-zip, or re-compress everything that was extracted from the old dacpac into a new zip file, then rename it to master.dacpac.

But what about the checksum?

All of the above works great, but the dacpac files provided by MS all have a checksum associated with them. If that checksum is off, it means that someone tampered with the file and it won’t work when you try to use it as a database reference. I will now give props to Gert Drapers (twitter) for writing up how to add objects to the master.dacpac file.

To summarize, you extract the model.xml, edit it, and re-add it to the dacpac, then run a small app Gert wrote called dacchksum.exe to update the checksum. Per Gert’s article, put that exe somewhere in your path (or in the folder you’re working in) and run something like this:

dacchksum.exe /i:Master.dacpac /u /v

This will change the checksum of your master.dacpac file so it will now work as a database reference.

 

Add your user to the project

Now that there’s a login for the user to reference, you can add the user to the project. Right-click the project, select “Add”, and then choose the “User” option. You’ll have a new User created which you can easily set up to reference the new login in your master dacpac.
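
The generated user file ends up as ordinary DDL along these lines (using the login name from the example above):

-- Database user mapped to the login we added to master.dacpac
CREATE USER [MyLogin] FOR LOGIN [MyLogin];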

Overall, not my favorite way to do these things, but if I know that a login will exist on the server I am more comfortable with adding a user to the project than I would be otherwise. You could still run into issues if this login doesn’t exist everywhere you want to use the project.

 

Final Thoughts

As Gert mentions, this trick is for adding any object to master. If you have a copy of sp_whoisactive or sp_blitz that you want to reference for some reason, you can add them to some one-off project, then copy that XML Element node into your master’s model.xml, regen the hash, and enjoy a new globally referenced object. It’s a useful trick, though one I hope not to have to use very often.

SSDT: Setting Different Permissions per Environment

One of the areas that SSDT doesn’t address adequately is permissions. Many shops have different users, logins, and permissions set up across their environments, but SSDT takes a “one size fits all” approach and assumes that you will have the exact same configuration in each environment. While that should be the case for your schema, permissions require some interesting and somewhat involved techniques to handle well in SSDT.
Attribution – I’m borrowing heavily from Jamie Thomson’s ( blog | twitter ) post
A strategy for managing security for different environments using the Database Development Tools in Visual Studio 2010
Jamie addressed this problem for the older DB Projects (Datadude, VSTSDB, DBPro, etc.). He came up with an excellent, if somewhat complex, solution to effectively handle pushing different permissions to different environments. This is my attempt to update this and distill it into a “how to” for SSDT and SQL Projects, but I freely admit that this is based on Jamie’s excellent work.
In general, the steps we’ll want to take are:

  1. Generate a set of SQL Logins that will be used on the server.
    1. These should already have been run on the servers or your local box – we will not use them within the SQL Project itself, though you can if you wish.
  2. Generate the files for users/roles/permissions for each database project and environment.
  3. Add the files to your SSDT SQL Project
  4. Create and set a variable for your environments in your SSDT SQL Project.
  5. Adjust your Security Wrapper file to handle your environments.
  6. Adjust your post-deploy script to call the Security Wrapper.

First, let’s look at what happens when you import Permissions by default into an SSDT SQL Project.
[screenshot]
As you can see from the screenshot, we have a “Permissions” file containing granted permissions, a “RoleMemberships” file that adds users to various roles, a “Sales” file for the Sales schema, and three files for each user – one for the user’s schema, one for its login, and one for the user itself. Confusing, but not necessarily a problem until you need a release where User1 does not and will not exist on the server, or where User2 belongs to a different domain than your production servers and cannot (and should not) be created there because there’s no trust relationship.
Let’s remove all of those files for objects other than Schemas from the project before we start. You’ll probably have to look at each file to verify before removing it from the project if you’re not sure which file is which. If you accidentally remove the files for the schemas, you’ll get errors and need to re-add a “Schema” to this security folder for each schema that is used.
 
Generate SQL Logins
This Microsoft KB article details the sp_help_revlogin stored procedure. Create this procedure on your source server, run it, and use the output to re-create those logins elsewhere. It scripts out the server logins (including their SIDs and password hashes), but not database users or permissions.
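Once the procedure exists in master on the source server, generating the script is a one-liner; the output is a set of CREATE LOGIN statements you can run on the target servers:

-- Run on the source server and copy the printed CREATE LOGIN statements to the target
EXEC master.dbo.sp_help_revlogin;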
 
Generate the Security Files through PowerShell
First, download the Powershell_GeneratePermissions.zip file. Extract that to a folder on your hard drive.
Note – the script assumes that you will use Windows authentication to connect to your SQL Server. The script will need to be adjusted if you need to use SQL Authentication.
Edit the “GeneratePermissions.ps1” file. Replace the “DB1”, “DB2”, etc with whatever database names you’ll be using. Also uncomment any of the extra lines that you’ll need in the file. (There is likely a better way to do this in Powershell and I’m more than open to some suggestions on how to build up this array in a more concise manner.)
The script assumes that the database name matches the project name. If that’s not the case, adjust your DB/project names accordingly.
Open a PowerShell prompt in that folder after editing/saving the file.
Run the file using something like this commandline:
.\GeneratePermissions.ps1 -SQLInstance "localhost" -Environment DEV
“localhost” should be replaced with your servername
DEV in this case indicates that this will be tied to a variable value of “DEV” when the project is published.
I’d recommend running this for every environment at one time if you can. It will make adding the files to the project much easier.
If you run for several environments, you’ll see a folder structure something like the following for each project (in this case for Adventureworks2012):
[screenshot]
Inside your “SecurityAdditions” folder, you’ll see the following files and folders:
[screenshot]
Each of these files will be used to add permissions for each environment. In this case, we have permissions set for our Development, QA, Local, Production, and Staging environments.
Now, let’s add the “SecurityAdditionsWrapper.sql” file from the files we extracted earlier into your “SecurityAdditions” folder. Feel free to edit it as needed to match the environment(s) you want to use for your database project. By default, it’s set to look for the above files plus a “Default” file to be a catch-all. Adjust that so all files/environments match the files you created and save it.
 
Add the files to your SSDT SQL Project
Copy your newly created folder(s) to the appropriate project. Open each project, and you will notice that the files still don’t show up. Click the project file. You should see a menu item that’s now enabled to “show all files”.
[screenshot]
You’ll see a bunch of files that aren’t part of the project under a “Scripts” folder:
[screenshot]
We want to add them to the project. Right-click the “Scripts” folder and select the option to “Include in Project”. Click the Project file again and turn off the option to show all files.

  • We need to set the Build Action on all of these files to “None” (not in build) or there will be errors when trying to build the project.

Unfortunately, we can’t just select the folder to exclude it from the build; we need to select the files themselves. You can Ctrl+click to select multiple files. Select all of the files, open the Properties tab, and change the Build Action from “Build” to “None”.
[screenshot]
Create and set your Variable to change environments
Open the Project Properties window
[screenshot]
We want to add a new SQLCMD variable called “DeployType”. Set its default value to “Local” for now, or to whatever environment for which you have a set of security scripts.
[screenshot]
 
Adjust your Security Wrapper files
The SecurityAdditionsWrapper.sql file in the code set I’ve created comes with settings for “Dev”, “QA”, “Staging”, “Production”, “Local”, and “Default”. It’s likely that you will not use all of these in your environment. One drawback to using SQLCMD to wrap all these files together is that if a file or set of files doesn’t exist, the script will throw errors because an expected file is missing. Edit the SecurityAdditionsWrapper.sql file to remove any of the configurations you will not use, or change the names to match the environment values you used earlier.
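As a rough sketch of the pattern inside the wrapper (the actual file in the download may differ, and the folder/file names below are illustrative): SQLCMD expands each :r include at parse time, and the IF blocks then decide at execution time which inlined section runs, so this works cleanly only if the included files avoid GO batch separators.

IF ('$(DeployType)' = 'DEV')
BEGIN
    PRINT 'Applying DEV security';
:r .\DEV\SecurityAdditionsDEV.sql
END

IF ('$(DeployType)' = 'QA')
BEGIN
    PRINT 'Applying QA security';
:r .\QA\SecurityAdditionsQA.sql
END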
 
Adjust your Post-Deploy script
Finally, we need to open your Post-Deploy script.
Add the following to your script:
:r .\SecurityAdditions\SecurityAdditionsWrapper.SQL
This will include your SecurityAdditionsWrapper file as part of your post-deploy process. See my earlier post on selectively running pre/post deploy scripts if you want finer control.

SSDT: Trick to Not Run Pre/Post SQL on Publish

From time to time we’ve run across the need to avoid running certain scripts when publishing a brand-new copy of a database from a SQL or DB project. Often, those pre/post-deploy scripts are designed to be run only against an existing system and don’t make sense for a brand-new, empty database.

We’ve worked around this by using a SQLCMD variable called DeployType and setting that variable to “New” for any build operation that will be used only to create the initial database. The code snippet for our Post-Deploy script looks something like the following:

-- Place scripts here if they should only run for an existing/working DB
IF ('$(DeployType)' <> 'New')
  BEGIN -- Run scripts
      PRINT 'Deploying scripts.';
  END

-- Security (runs last)
:r .\SecurityAdditions\SecurityAdditionsWrapper.sql

 

The above script will look at the DeployType variable. If it’s set to “New”, it will skip all of the deploy scripts in that block. If it’s set to any other value, it will run the appropriate scripts, if any exist. You can use this to control which environments will use certain scripts as well.
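
If you publish with SqlPackage instead of from Visual Studio, the SQLCMD variable can be passed on the command line (the dacpac, server, and database names here are placeholders):

SqlPackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" /TargetServerName:"localhost" /TargetDatabaseName:"MyDatabase" /v:DeployType=New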

SSDT, Publishing, and Referenced Databases

This caught me today in a series of databases that all work together. In order to resolve all of the cross-database dependencies, I added database references to the project. That takes care of all of the “Database.dbo.Object” references to other databases. The problem is that three-part references to the current database will also produce errors unless a reference is defined.

In my haste to get the projects working, I added a database reference to the current project. I would build my project and reference the generated *.dacpac files when publishing. The problem is that every time I built my project, the referenced dacpac file would overwrite the generated dacpac file.

Results from this problem – I noticed that my “DeployType” variable wasn’t referenced anywhere, which struck me as odd because I use it in my Pre and Post Deploy scripts. I also started noticing that my dacpac file in my build folder was out of date, even after a successful build.

Workaround – replace all instances of CurrentDatabaseName.dbo. with just dbo. in all SQL files in the project (see the example after the list below). This includes the various forms that a self-reference can take:

Databasename.dbo.
Databasename..
[Databasename].[dbo].
etc
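
For instance, a hedged before/after sketch (the database, procedure, and table names are made up):

-- Before: a three-part self-reference that won't resolve without a reference to the current database
CREATE PROCEDURE dbo.GetOrders
AS
    SELECT OrderID, OrderDate FROM SalesDB.dbo.Orders;

-- After: the two-part name resolves within the project with no self-reference needed
CREATE PROCEDURE dbo.GetOrders
AS
    SELECT OrderID, OrderDate FROM dbo.Orders;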

Once I replaced all of those references and rebuilt the project successfully, my publish actions started working again. My Deploy scripts were included. I didn’t get mysterious warnings about my variables not existing.

There may be a different way to work around self-referenced database objects within SSDT, but until I come across that, the workaround is to remove them from the code and not to just add a reference to the current database through a dacpac file.

SSDT: Tips, Tricks, and Gotchas

I wanted to add a short post to make sure I highlight some things that will trip people up or otherwise cause issues.

 

  • When setting trigger order for triggers on a table, you could run into an issue when publishing your database. The project will build successfully, but publishing throws a “Parameter cannot be null” error for parameter “Key”. This is a bug in the product as of at least SQL 2012 CU2; it has been logged internally at MS and there is a workaround. To work around the issue, do not set the trigger order in the table definition; instead, put that code in a post-deploy script (see the sketch after this list).
    • This is supposed to be fixed in a future release of SSDT so your experience may vary.
  • Importing encrypted objects may fail. If you want to store encrypted objects in your project, you’ll likely need the original source code in order to add them to the project.
  • Unlike the Schema Compare option in VS 2010, there is no way to set the default Schema Compare options nor a way to filter out “skipped” objects in the schema compare.
    • I’d love some ideas on how better to handle this. You may be able to save the compare within the project for future re-use, but I had little success with this in VS 2010 Projects so have been reluctant to try that route again.
  • If your database uses FILESTREAM, you will not be able to debug using the default user instance that SSDT provides. You will need to point your debugging instance to a SQL Server install that supports FILESTREAM. See this forum post for more details.
    • Set this in the Project Properties and point to an actual instance of SQL Server that supports the feature(s) you want to use.
  • SQL Projects will substitute CONVERT for CAST in computed columns at the time of this writing. They also push default constraints with extra parentheses around the default value. If you clean these up from the target server, be aware that on your next Publish action, you could end up with extra work being done to make your target database use those parentheses or the CONVERT function.
    • To work around this, do a SQL Schema compare after a release to find any areas where the schema on the server differs from that in the project. For instance you may see a DEFAULT (getdate()) in your server, but have DEFAULT getdate() in your project.  Add the parentheses to your project to avoid unnecessary changes.
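
For the trigger-order workaround in the first bullet, a minimal post-deploy sketch might look like the following (the trigger and table names are made up):

-- Set trigger firing order outside the table definition
EXEC sp_settriggerorder
    @triggername = N'dbo.trOrders_Audit',
    @order = N'First',
    @stmttype = N'INSERT';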

 

Do you have any tips to share? Add them to the comments below.

SSDT: SQL Project Snapshots

SSDT allows for snapshots to be taken of the project at any point. Just right-click the project name and select the option to “Snapshot Project”.

[screenshot]

This builds a dacpac file into a folder within the project called “Snapshots” with a default name format of Projectname_yyyymmdd_hh-mi-ss.dacpac.
This file contains a build of the project at the time the snapshot was taken, including all objects and scripts.
 
Uses (by no means a complete list)

  • Save a specific version of your project to use for release
  • Save this version of the project before making changes to the underlying project
  • Use as a source for schema compare
  • See the differences between snapshots and/or the current project through schema compare
  • Baseline your project
  • Roll back to this state
    • Schema Compare with the snapshot as the source and the Project as the target
    • Import into a new project with this as the source.
    • Import into current project, but be aware that this could easily produce a lot of duplicates

SSDT: Errors and Warnings

SSDT includes an Errors and Warnings window that is well worth your attention. Ideally, your project should have no errors or warnings.

[screenshot]

 

However, sometimes coding errors slip into your project, or you get warnings that an object can’t be found because it exists in another database. Sometimes a warning might appear because an object is missing completely, in this project or another one. In these cases, it’s well worth checking this window to find out where you may have some issues.

Warnings are not necessarily a problem. SSDT will bring possible issues to your attention, but warnings will not stop a project from building and publishing unless you have set the option to treat warnings as errors or unless there really is an underlying problem that causes an issue during the publish phase.

For example, if I modify the Person.Address table to add a new column in the code, but forget to add a comma, I’ll get an error something like this.

[screenshot]

If you double-click on the line, the editor should open the appropriate file and take you pretty close to your problematic line. Correct the problem, save the file, and move on to the next error.

 

Some common warnings/errors

  • 4151 – Unresolved database reference. This is often caused when an object in one database references an object in another. It can often be resolved by adding a database reference.
  • 71562 – Unresolved database reference warning.
  • 71502 – Another unresolved database reference warning.

How to globally suppress certain warnings

  • Right click the root of the project and select properties
  • Click on the “Build” tab
  • Enter the numeric portion of the codes, separated by commas. Remove the “SQL” prefix and any leading zeroes when entering the codes.

[screenshot]