SSDT: Publishing Your Project

Build the project

Before you can publish your project, it must build successfully.

Start by building your project. Right-click the project and select “Build”.


If the build is successful, the project can be published.


You may want to create a folder within your project to store saved Publish Profiles. These can be used later to easily publish the project to your servers.



Creating Publish Profiles

Right-click the project and select Publish. This will bring up the Publish Database dialog.

  • Choosing to publish or opening a saved publish profile will initiate a build of the project.


Choose your target database, set your advanced options (similar to the Schema Compare options), and use “Save Profile As” to save the profile to a location within your project. Selecting the “Add profile to project” option will create the publish profile in the root of the project. You may then want to move the file into the folder storing all of your publish profiles or, if you saved it without adding it to the project, enable “Show All Files” so you can include the file in the project.



Some options you may want to consider:

  • “Always re-create database” – drops and re-creates the target database on every publish. Any data in the database will be lost.
  • “Block incremental deployment if data loss might occur” – If any change could result in data loss, this option stops the script from running.
  • “DROP objects in target but not in project” – This will remove anything in the target database that doesn’t exist in the project. Useful if you want consistency, but leave it unchecked if the database may contain objects that were created directly and never made it into the project.

Under “Advanced Deployment Options”:

  • Allow Incompatible Platform – Useful if you may publish to a different version of SQL Server than the one specified in the project
  • Include transactional scripts – Will run the entire update operation as a transaction. If any one part fails, the transaction will roll back. If you have cross-database dependencies, selecting this option could result in no changes being published if you’re publishing to a new server. For a new publication, you may want to de-select this option to ensure a successful deploy of what can be published.
  • Script state checks – This option will ensure that the publish action will only work on the specified server and database.
  • Verify deployment – Checks the database and project before publishing to try to ensure there are no changes that will cause problems with the publication such as missing data for a foreign key.
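A publish profile is just an MSBuild-style XML file, so the options above end up as properties inside it. A minimal sketch, assuming a local target; the server and database names are placeholders, and the element names follow the SqlPackage deployment property names:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>Adventureworks2008</TargetDatabaseName>
    <TargetConnectionString>Data Source=localhost;Integrated Security=True</TargetConnectionString>
    <!-- Options discussed above -->
    <CreateNewDatabase>False</CreateNewDatabase>
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
    <DropObjectsNotInSource>False</DropObjectsNotInSource>
    <!-- Advanced deployment options -->
    <AllowIncompatiblePlatform>True</AllowIncompatiblePlatform>
    <IncludeTransactionalScripts>False</IncludeTransactionalScripts>
    <VerifyDeployment>True</VerifyDeployment>
  </PropertyGroup>
</Project>
```

Because the profile is plain XML, it diffs cleanly in source control, which is one more reason to keep the profiles inside the project.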


Using Publish Profiles

Once you’ve set up your publish profiles, you can easily use these to push changes to that server and database without needing to specify additional parameters. The easiest way to use them is to double-click the Publish Profile within the project and choose to either “Generate Script” or “Publish”.

Generate Script will generate a script for you to use to update the target at a later time (run in SQLCMD mode).

Publish will immediately attempt to push the changes to the target.
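If you chose Generate Script, the resulting file uses SQLCMD syntax, so run it in SQLCMD mode in SSMS or via sqlcmd.exe. A sketch, assuming a generated script named Adventureworks2008.publish.sql (the server, database, and file names are placeholders):

```shell
REM Run the generated publish script with sqlcmd.
REM -S = target server, -d = target database, -i = input script, -b = stop on error
sqlcmd -S localhost -d Adventureworks2008 -i .\Adventureworks2008.publish.sql -b
```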

You can also use these at a later point to push changes through the SQLPackage.exe command line.


SQLPackage

To publish your package from the command line, use something like the following:

sqlpackage /a:publish /sf:.\sql\Local\Adventureworks2008.dacpac /pr:.\Publish\Local.publish.xml

The above will:

  • Use the “Publish” Action
  • Use the Source File named Adventureworks2008.dacpac, built in the sql\Local folder
  • Use the publish profile named “Local.publish.xml” (defined to push to the local SQL Server)

You may want to add SQLPackage.exe to your path. By default it is installed in:

C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin

You can override quite a few of the default settings through various command line arguments. This includes source, target, and variables. You can find a full list of the command line arguments at the SQL Package reference online.
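For example, to point the same dacpac at a different server and database and override a SQLCMD variable, something like this should work (the server name and variable are assumptions for illustration):

```shell
REM /tsn and /tdn override the target server and database from the profile;
REM /v:name=value overrides a SQLCMD variable defined in the project.
sqlpackage /a:publish /sf:.\sql\Local\Adventureworks2008.dacpac ^
  /pr:.\Publish\Local.publish.xml ^
  /tsn:MyUatServer /tdn:Adventureworks2008 ^
  /v:Environment=UAT
```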


Jenkins Automation for CI

We use Jenkins at my current workplace and set up a Jenkins job to do the following (with thanks to Matthew Sneeden for the assistance):

  • Get the latest from our mainline repository
  • Build each SQLProj file.
    • Building the SLN file would also attempt to publish the database, so build each project individually.
    • msbuild .\Adventureworks.sqlproj /t:build /p:Configuration="Local"
      • This assumes that msbuild.exe is in your path.
    • Configuration is mostly to control the location of the dacpac file generated.
  • Run SQLPackage w/ a specified Publish Profile for the appropriate environment and using the newly built dacpac as the source.
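The steps above can be sketched as a single batch script that a Jenkins job might run. The paths, branch, and profile names here are assumptions for illustration:

```shell
REM Sketch of the Jenkins build steps described above.
git pull origin master

REM Build the project directly (not the .sln, which would also try to publish).
msbuild .\Adventureworks.sqlproj /t:build /p:Configuration="Local"

REM Publish the freshly built dacpac with the environment's publish profile.
sqlpackage /a:publish /sf:.\sql\Local\Adventureworks2008.dacpac /pr:.\Publish\Local.publish.xml
```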

We are currently investigating how we can use Snapshot files to better control releases to our UAT and Production environments. This series will be updated when that information is available.


11 thoughts on “SSDT: Publishing Your Project”

  1. This is exactly how we set up our Build Process Template to work. However, I noticed that SQLPackage.exe does not create a script that was used for the publishing. Before SSDT we would click on the Drop folder and we could review the script that was used during the publishing. But now I can't find that script. One step forward, one step back…


  2. You may want to try to use the sqlpackage command with “/action:script” and “/outputfile:.\yourscriptsgohere\scriptname.sql” instead of the “/a:publish” I mentioned above. I just tested that and SQLPackage created a SQL file for the release instead of automatically publishing the changes. Adjust for your preferred path for writing the release scripts.

    If you check out the link I posted for the “SQL Package reference online” you can see all of the parameters, though the description for /outputfile is slightly off. It indicates that the path is for XML files, but is used for whatever files would be written.
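    Putting that together, something like this should write the script to a file instead of publishing (the output path is just an example):

    ```shell
    REM /a:script generates the release script; /op sets where it is written.
    sqlpackage /a:script /sf:.\sql\Local\Adventureworks2008.dacpac ^
      /pr:.\Publish\Local.publish.xml ^
      /op:.\Scripts\Adventureworks2008.release.sql
    ```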


  3. Hi
    This is similar to what I do in my build server.
    Here is my (Jenkins) workflow (one of these per environment)
    1 git pull origin/master (or origin/test or origin/prod)
    2 msbuild the sln
    3 run tests
    4 run sqlpackage from dacpac generated in 2, outputs a .sql file
    5 run findstr to delete some nasty lines that come from a sqlpackage bug that keeps deleting user logins, even though I specified not to delete…
    6 run sqlcmd to target server/db using sql file from 4/5


  4. Interesting. You may want to double check which version of SSDT you're running. There's now an October 2013 release and it could fix that issue if you've explicitly told your Publish Profile to ignore Users.

    You may also want to check Jamie Thomson's Permissions article (which I linked from the one on this site). We use post-deploy scripts to populate users/permissions appropriately. In our case, we need to do this because the permissions are different in every environment.


  5. How does the Publish Profile model support team development with version control?

    For example, I would expect to be able to have a profile that is checked into source control that is used by the build process, but also a local developer profile that will deploy locally for that developer – and thus will be different for each developer. This one should not be checked into source control and it's reference managed by (possibly) the sqlproj.user file (which is also not checked in)?

    Is this supported?


  6. I created a “Local” publish profile that the developer could use that just points to localhost for the SQL Server name. There are several publish profiles created for servers in the project as well.

    I haven't tried to split out different publish profile files for different users. I like having them all as part of the project so if needed, someone can load the project and tweak the publish profile.

    For our developers, I wrote up a batch file that does a build/publish to their local machine for each or all databases so they can just get latest, run that batch file, and be up to date without needing to open the projects. They use that most of the time because it's more convenient and they generally don't care how it works, just that it does. 🙂
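    A rough sketch of what such a batch file could look like (the project and profile names are placeholders, and it assumes each project follows the same build-then-publish layout):

    ```shell
    @echo off
    REM Build and publish each database project to the developer's local instance.
    REM Add each project name to the list in the for loop.
    for %%P in (Adventureworks2008) do (
      msbuild .\%%P\%%P.sqlproj /t:build /p:Configuration="Local"
      sqlpackage /a:publish /sf:.\%%P\sql\Local\%%P.dacpac /pr:.\Publish\Local.publish.xml
    )
    ```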


  7. Though it is possible, it has to be a two-step process, and during each step the target database is compared with the source: once during script generation and once during deploy. This adds more time to the deploy process, especially with larger databases. This was not the case with its predecessor VSDBCMD, where both script generation and deploy could be accomplished in one go. I will be glad if this combined option is restored in a future release/update.


  8. That's a bit more tricky. Generally tables are rebuilt because there's a change that requires them to be rebuilt – different collation, compression level, primary key, column order, etc. The best way to avoid having them be rebuilt is to make sure everything is on the same baseline (often from your production system). That may mean you need to update the databases in your pre-production realms to match production and incur the cost of rebuilds there once. After that if you are wise in the way you alter your objects, you should minimize rebuilds. There may be times you need to do it, but it should be infrequent.

    I'd do a schema compare of your project against the target database to see what's different and what's prompting the rebuild. I saw a StackExchange post recently where someone had different compression options in the project and in his target database, which prompted a table rebuild each time (until the options matched, anyway).

    This is assuming that you've turned off the option to completely rebuild your database each time you publish, of course. If that's enabled, you'll see the database dropped, recreated, and rebuilt each time.


  9. I am currently investigating how to get a deployment report file created after/during the publishing process. It looks like this is a two-step process as well, and you are right, there should be a way to get a SQL script and report within one step – one comparison process… If anyone knows a way, I would appreciate your feedback.

