Visual Studio 2013 (VS) has finally delivered a great way to manage databases within the Software Development Lifecycle. While VS has offered a Database Project type for several years, to me it was never really a sustainable way to create, manage, source control, schema compare, and deploy database objects. As a data architect, I've always modeled and tested my objects in SQL Server Management Studio (SSMS) and then gone to VS or another source control tool to retroactively make changes reflecting my work. In this blog I'll break down the feature set of database development in VS and offer some experience tips so SQL database developers can start working in what really is a great new way. There are plenty of blogs and videos out there that walk through getting started with VS's database project, so I won't dive deep with screenshots and tutorials. Here is a great quick demo video, another focusing on schema comparisons, and if you prefer a screenshot web article, this one is quite thorough.

The image below is a nice visual of what 2013 offers. As I said, 2010 and 2012 may have noodled in these areas, but I find 2013 to be the most complete solution. The key is the VS interface working with the latest edition of SQL Server Data Tools (SSDT), which replaces BIDS from earlier versions. While SSDT debuted with SQL 2012 to handle SSIS, SSRS, and SSAS as well as the database project, it has an expanded feature set in 2014 which applies here.


Source: https://www.youtube.com/watch?v=kWKllVyozOg

The top row of "connected" features focuses on developers using VS with a SQL Server Object Explorer on the left, code in the center, and a solution explorer on the right – a familiar layout. You can create a database project from an existing database instance, a series of .sql scripts, or an existing data-tier application extract. I recommend importing your database in the "schema/object" structure, which will create .sql files in a local folder structure, making navigating around objects quick and easy. Double-clicking a table brings up the familiar designer and a DDL T-SQL window so users can edit in their interface of choice. Breaking away from SSMS is tough, but you can run all your queries inside VS (which itself is not new), and because IntelliSense and inline validation work off the project, it has effectively supplanted SSMS's features for me. Lastly, the schema comparison runs from Project to Database or Database to Database, showing easy-to-read differences in a bottom panel. The comparison has settings to allow for exclusions and checkboxes for picking which items to move to the target entity. The key here is that updates can only go to the right; there is no syncing back to the left.
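To give a feel for the "schema/object" import structure, here is a minimal sketch of what a single imported table file might look like; the database, folder path, and table are hypothetical, but the one-object-per-file layout is what the import produces.

```sql
-- Hypothetical file: MyDatabase\dbo\Tables\Customer.sql
-- Each object imported with the "schema/object" structure gets its own
-- CREATE script in a folder matching its schema and object type.
CREATE TABLE [dbo].[Customer]
(
    [CustomerId]   INT           NOT NULL IDENTITY(1, 1),
    [CustomerName] NVARCHAR(100) NOT NULL,
    [CreatedDate]  DATETIME2(0)  NOT NULL
        CONSTRAINT [DF_Customer_CreatedDate] DEFAULT (SYSUTCDATETIME()),
    CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([CustomerId])
);
```

Because each object lives in its own file, source control diffs map one-to-one to database objects, which is what makes the folder structure so easy to navigate.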

The second row of "project" features allows the project to be connected to a standalone local instance of the database. By default, installing SSDT with VS sets up a local SQL instance, so users no longer need a separate install of SQL Server Express/Developer. When the developer begins debugging, the project builds, performs a validation check, and deploys to this local database. It is important here to set your database platform, as different build requirements will be used. Errors are reported with the familiar red squiggly lines and clickable errors in the output window. This isolated development gives the data folks a sandbox to get everything working, again doing their query testing within VS, before "publishing" the project (again from VS's Solution Explorer) to a database in another environment.
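If you do want to peek at that sandbox from outside VS, you can connect to the LocalDB instance directly. As an assumption for illustration: VS 2013's SSDT typically names its LocalDB instance ProjectsV12, and the database name below (matching the project name) is hypothetical.

```bat
REM Assumption: SSDT's LocalDB instance is named ProjectsV12 (the VS 2013 default);
REM "MyDatabase" is a hypothetical project/database name.
sqlcmd -S "(localdb)\ProjectsV12" -d MyDatabase -Q "SELECT name FROM sys.tables;"
```

This is handy for a quick sanity check, though the point of the workflow is that you rarely need to leave VS at all.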

The third row of "deployment" features encompasses the customizable publish. The publish settings, which are extensive, can be saved as an XML file; the solution can include multiple publish profiles for quick access. The Data-tier Application framework (DACFX) is a DLL that also works with VS to dictate the way deployments are handled. You must register your database to utilize the DAC, which is a "logical database management entity that defines all of the SQL Server objects".[2] The output of this process is a physical artifact called a DAC package (DACPAC). You can apply this portable artifact to a database in any environment to update the schema objects. You can also "snapshot" the project, which creates a point-in-time DACPAC within the project for easy comparison or deployment; I use these for end-of-sprint artifacts in our Agile project. DACPACs can be extracted or deployed, or used to register, unregister, or upgrade a database. A DACPAC includes schema objects only; the equivalent package that also carries data is a BACPAC. A BACPAC is comparable to a database backup, but again, because we can include/exclude certain objects and settings, it's a more powerful way to move a database lifecycle change around. The entire purpose of the DAC concept within VS is to stop us from scripting out all our database changes and handing them to a DBA with long-winded instructions about execution order and servers. Moving your changes to upper environments just got considerably easier! The publish process still creates a single, easy-to-read .sql script which can be run separately, so those wanting to stick to the old way still have that option.
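Outside of VS, DACFX ships the SqlPackage.exe command-line tool, which covers the same extract/publish/script operations. A minimal sketch, with hypothetical server, database, and file names:

```bat
REM Hypothetical server/database/file names throughout.
REM Create a DACPAC from a live database:
SqlPackage.exe /Action:Extract /SourceServerName:DevServer /SourceDatabaseName:MyDatabase /TargetFile:MyDatabase.dacpac

REM Apply that portable artifact to a database in another environment:
SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:QAServer /TargetDatabaseName:MyDatabase

REM Or generate the single .sql upgrade script for those sticking to the old way:
SqlPackage.exe /Action:Script /SourceFile:MyDatabase.dacpac /TargetServerName:QAServer /TargetDatabaseName:MyDatabase /OutputPath:Upgrade.sql
```

This is also what makes the DACPAC useful in automated builds: the same artifact VS publishes can be promoted through environments from a script.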

The latest version of SSDT includes a fix for publishing which allows you to exclude permissions (users, logins, roles, etc.) from being dropped on the target. This is key because we do not want to manage these within the database project; they should differ across environments. It is critical that you get this version (v12.0.50318.0). Additionally, the latest round of updates is clearly Azure-focused, as Microsoft prepares us for our data living in the cloud rather than on a physical server on the network; Azure support in SSDT is now thoroughly fleshed out. Finally, I'm impressed with how seamless the VS integration with Git, SourceTree, and TFS is. Managing your local commits, submitting your changes, and publishing your branch can all be done from VS now. This is less "new," but it's another step toward seamless use of VS to manage database projects.
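Those publish settings end up as properties in the saved .publish.xml profile. A minimal sketch of what such a profile might contain – the file, server, and database names are hypothetical, and the property names here come from the DACFX deployment options:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical MyDatabase.QA.publish.xml -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>MyDatabase</TargetDatabaseName>
    <TargetConnectionString>Data Source=QAServer;Integrated Security=True</TargetConnectionString>
    <!-- Leave environment-specific security alone on the target: -->
    <DropPermissionsNotInSource>False</DropPermissionsNotInSource>
    <DropRoleMembersNotInRoles>False</DropRoleMembersNotInRoles>
    <BlockOnPossibleDataLoss>False</BlockOnPossibleDataLoss>
  </PropertyGroup>
</Project>
```

Keeping one profile per environment in the solution means the right settings are always a double-click away at publish time.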

After a month of use, I'd also like to offer some personal tips. First, add a reference to the master database to your project. This will allow you to make use of the sys.objects entities for any dynamic SQL you may be doing. Next, use the Pre- and Post-Deployment scripts to manage your reference/lookup data. Your project can have one of each script (set them as "not in build") and make use of the SQLCMD execute syntax (:r .\SomeSQL.sql). You can use MERGE statements, creating one script file per lookup table, which can create/modify/delete data with each deployment. By placing these scripts in the project, they're easy to manage; by executing them in the post-deploy script, it's easy to control execution order. Whenever the project is debugged or published, these scripts will execute at the end.

Additionally, I've consistently unchecked the "Block incremental deployment if data loss might occur" setting when publishing. When checked, a deployment will not put a new column in the middle of a table – the deployment will fail if the table has data. When unchecked, a deployment will drop FK constraints, move the data to a temp table, drop the table, recreate it with the new column order, move the data back, and then re-add the constraints. This is how most schema compare tools operate, and it's the preferred path in my mind. Finally, use the schema compare and project debugging regularly to test your changes and make sure everything that will get deployed is as you expect. Make the mental leap to do your work in the project and not directly on the database. It's clean, simple, and finally broke my ten-year-long refusal to develop anywhere but SSMS. I hope you found this information helpful and have great success using Visual Studio to manage your database projects.
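The post-deployment pattern described above might look like the following sketch; the table, column, and file names are hypothetical, but the shape – one :r include per lookup table, each pointing at an idempotent MERGE – is the idea.

```sql
-- Script.PostDeployment.sql: the one post-deploy script the project allows.
-- SQLCMD :r includes control execution order; paths are hypothetical.
:r .\ReferenceData\OrderStatus.sql

-- ReferenceData\OrderStatus.sql: one MERGE per lookup table, safe to
-- re-run on every debug or publish (create/modify/delete as needed).
MERGE INTO [dbo].[OrderStatus] AS target
USING (VALUES
    (1, N'Open'),
    (2, N'Shipped'),
    (3, N'Cancelled')
) AS source ([OrderStatusId], [StatusName])
    ON target.[OrderStatusId] = source.[OrderStatusId]
WHEN MATCHED AND target.[StatusName] <> source.[StatusName] THEN
    UPDATE SET [StatusName] = source.[StatusName]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([OrderStatusId], [StatusName])
    VALUES (source.[OrderStatusId], source.[StatusName])
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```

Because the VALUES list is the single source of truth for the lookup data, editing it and redeploying is all it takes to push a reference-data change through every environment.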