Main Options for Database DevOps

February 19, 2025    Development Database DevOps

I’ve recently started a new contract whose goal is to get their database structure into Git. This is laudable! 👏 (You have your code in Git, right?!?)

At Omnitech we have a long-running Teams chat about the differences between state-based and migration-based versioning of databases (we are MS SQL-centric thinkers). I’ve compiled the feedback and created two small demos for my own learning to share with you.

State Based

  • Visual Studio SQL Server Data Tools (SSDT) Database Projects

    • Make all changes in Visual Studio and check in the changes
    • Provides refactoring, validation at development time and Schema Management
    • Multiple profiles for different options
    • Can target specific SQL versions
    • Works with Azure Databases
    • Referenced databases can be worked around
  • Works with Git and Pipelines for CI/CD

    • .dacpac files are created via MSBuild that can be applied to databases
  • May run into interesting deployment and development issues

    • For small teams it has sometimes required ad hoc SQL for breaking schema changes to get the project back to a valid state. This could likely be avoided with more effort where audit compliance is more stringent
    • One experienced enough “pain” that he wouldn’t recommend this approach again (for larger products he would use migrations)
  • A more familiar approach for Developers

  • Good for a simple, single database maintained by Developers

  • You can see the full picture of what the database will be after changes are applied

  • Free!

  • https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/sql-database-projects?view=sql-server-ver16

  • SDK-style SQL Database Projects (General Availability in March 2025)

    • “The SDK-style SQL project introduces several advantages, especially for developers shipping their applications through CI/CD pipelines or working in cross-platform environments.”
    • “The database model validation of a SQL project build provides early verification of the SQL syntax used in the project, before code is checked in or deployed. Code analysis for antipatterns that impact database design and performance can be enabled as part of the project build and extended.”
    • https://techcommunity.microsoft.com/blog/azuresqlblog/the-microsoft-build-sql-project-sdk-is-now-generally-available/4392063
    • https://learn.microsoft.com/en-us/azure-data-studio/extensions/sql-database-project-extension-sdk-style-projects
    • As of July 2025, Visual Studio support still requires Visual Studio Preview; there is also a VS Code extension that can be used
    • # install SqlPackage CLI
      dotnet tool install -g Microsoft.SqlPackage
      
      # install Microsoft.Build.Sql.Templates
      dotnet new install Microsoft.Build.Sql.Templates
      
      dotnet new sqlproj -n SqlSdkExample
      
      # Create the .dacpac  
      dotnet build 
      
      # Deploy
      sqlpackage /Action:Publish /SourceFile:yourfile.dacpac /TargetConnectionString:{yourconnectionstring}
      
      # Generate the Script
      sqlpackage /Action:Script /SourceFile:bin\Debug\SqlSdkExample.dacpac /TargetConnectionString:"Server=(localdb)\MSSQLLocalDB;Initial Catalog=SDKStyle" /OutputPath:script
      
    • In the .sqlproj file, Microsoft.Data.Tools.Schema.Sql.Sql160DatabaseSchemaProvider is the SQL target platform; 160 corresponds to SQL Server 2022.
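For reference, the SDK-style project file stays tiny. A minimal sketch (the SDK version number here is a placeholder; check NuGet for the current Microsoft.Build.Sql release) showing where that target platform property lives:

```xml
<!-- SqlSdkExample.sqlproj: minimal SDK-style SQL project (sketch) -->
<Project Sdk="Microsoft.Build.Sql/1.0.0">
  <PropertyGroup>
    <Name>SqlSdkExample</Name>
    <!-- DSP = the SQL target platform; Sql160 = SQL Server 2022 -->
    <DSP>Microsoft.Data.Tools.Schema.Sql.Sql160DatabaseSchemaProvider</DSP>
  </PropertyGroup>
</Project>
```

Any .sql files next to the project are picked up by default, so there is no long item list to maintain the way legacy .sqlproj files required.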

Migration Based

  • Use DbUp
  • Transitions, not States
  • More control than SQL Database Projects: you dictate the scripts and can see the order in which they are applied
  • May need to look through migrations to determine what the final state will be
  • A more familiar approach for DBAs of creating migration scripts
  • Works with Git and Pipelines for CI/CD
    • Create and Update scripts with a comparison tool or manually
    • Add the script to Git with the correct name and folder
    • In the pipeline, the DbUp runner executes each script in succession
  • Free!
  • https://dbup.readthedocs.io/en/latest/philosophy-behind-dbup/
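DbUp itself is a .NET library (a few fluent lines of C# build the runner), but the core idea — run each script exactly once, in name order, recording what ran in a journal table (DbUp calls it SchemaVersions) — fits in a short language-agnostic sketch. This one uses Python and SQLite purely for illustration; the script names and SQL are made up:

```python
import sqlite3

def apply_migrations(conn, scripts):
    """DbUp-style runner sketch: apply each script once, in name order,
    journaling applied scripts so re-runs are no-ops."""
    conn.execute("CREATE TABLE IF NOT EXISTS SchemaVersions (ScriptName TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT ScriptName FROM SchemaVersions")}
    for name in sorted(scripts):          # order comes from the naming convention
        if name in applied:
            continue                      # transitions run exactly once
        conn.executescript(scripts[name])
        conn.execute("INSERT INTO SchemaVersions VALUES (?)", (name,))
    conn.commit()

scripts = {
    "001_CreateCustomer.sql": "CREATE TABLE Customer (Id INTEGER PRIMARY KEY, Name TEXT);",
    "002_AddEmail.sql": "ALTER TABLE Customer ADD COLUMN Email TEXT;",
}
conn = sqlite3.connect(":memory:")
apply_migrations(conn, scripts)
apply_migrations(conn, scripts)  # second run applies nothing
print([r[0] for r in conn.execute("SELECT ScriptName FROM SchemaVersions ORDER BY ScriptName")])
```

This is the "transitions, not states" model in miniature: the database's version is whatever the journal says has run.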

Another option is to use Entity Framework Core with Migrations. Here is a good video explaining the process.
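The EF Core loop looks like this from the CLI (a sketch; it assumes the dotnet-ef tool and a project with an existing DbContext, and the migration name is made up):

```shell
# install the EF Core CLI tool
dotnet tool install -g dotnet-ef

# add a migration capturing model changes since the last one
dotnet ef migrations add AddCustomerTable

# apply pending migrations to the target database
dotnet ef database update

# or generate an idempotent SQL script for pipeline/DBA review
dotnet ef migrations script --idempotent -o migrate.sql
```

The idempotent script option is handy for teams that want migration-style deployment but still need a reviewable SQL artifact in the pipeline.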

RedGate Flyway

  • You choose migration or state approach
    • You can see the schema model and get migration scripts at the same time
  • Created to help DBAs do DevOps
  • Has a yearly per user cost
  • Works with Git and pipelines
  • A lot of helpful features are included
  • https://www.red-gate.com/products/flyway/
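For comparison, Flyway's migration flow is also script-driven from the CLI. A sketch (server name, credentials, and folder layout are placeholder assumptions):

```shell
# versioned migrations live in ./sql as V1__create_tables.sql, V2__add_index.sql, ...
# show which migrations are pending against the target
flyway -url="jdbc:sqlserver://localhost;databaseName=Example" \
       -user=deploy -password=secret \
       -locations="filesystem:./sql" info

# apply pending migrations in version order
flyway -url="jdbc:sqlserver://localhost;databaseName=Example" \
       -user=deploy -password=secret \
       -locations="filesystem:./sql" migrate
```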

Which will you choose for your project?

Bonus - SDK Style ADO Pipeline Yaml

# https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/tutorials/create-deploy-sql-project?view=sql-server-ver17&pivots=sq1-visual-studio-sdk

trigger:
  - main

pr:
  - main

pool:
  vmImage: "ubuntu-latest"

variables:
  artifactName: "SqlSdkExample"
  ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
    isMain: "true"
  ${{ else }}:
    isMain: "false"
  sqlProj: "$(Build.SourcesDirectory)/Pipelines/SQL/SDK/SqlSdkExample/SqlSdkExample.sqlproj"
  sqlPackageVersion: "170.0.94"
  sqlPublishPac: "$(Build.ArtifactStagingDirectory)/${{ variables.artifactName }}/${{ variables.artifactName }}.dacpac"

stages:
  - stage: Build
    jobs:
      - job: Build
        displayName: "Build"
        steps:
          - task: UseDotNet@2
            inputs:
              packageType: "sdk"
              version: "8.x"
          - task: NuGetAuthenticate@1
            displayName: "NuGet Authenticate"
          - script: dotnet build ${{ variables.sqlProj }} -o $(Build.ArtifactStagingDirectory)
            displayName: Build
          
          - task: PublishBuildArtifacts@1
            displayName: "Publish Build Artifacts"
            condition: and(succeeded(), ${{ eq(variables.isMain, 'true') }})
            inputs:
              PathtoPublish: "$(Build.ArtifactStagingDirectory)"
              artifactName: "${{ variables.artifactName }}"

  - stage: PublishDev
    condition: and(succeeded(), ${{ eq(variables.isMain, 'true') }})
    dependsOn: Build
    variables:
      - group: Development-OnPremise
    jobs:
    - deployment: Deployment
      workspace:
        clean: all
      environment:
        name: 'Development-Database-OnPremise'
        resourceType: VirtualMachine
      strategy:
        runOnce:
          deploy:
            steps:
              - download: current
                artifact: ${{ variables.artifactName }}
              - checkout: none
              - task: DownloadPipelineArtifact@2
                inputs:
                  artifactName: ${{ variables.artifactName }}
                  targetPath: $(Build.ArtifactStagingDirectory)/${{ variables.artifactName }}

              - script: dotnet tool install --global Microsoft.SqlPackage --version ${{ variables.sqlPackageVersion }}
                displayName: "Install specific version of SqlPackage"

              - script: |
                    mkdir "$(Build.ArtifactStagingDirectory)/reports/"
                displayName: 'Create artifacts reports folder'

              # These commands could be used, but /Action:Publish creates the report and the script so we don't have to
              
              # The SqlPackage DeployReport action creates an XML report of the changes that would be made by a publish action. 
              # - script: sqlpackage /Action:DeployReport /SourceFile:"${{ variables.sqlPublishPac }}" /TargetConnectionString:"Server=$(DatabaseServer);Initial Catalog=Example" /OutputPath:"$(Build.ArtifactStagingDirectory)\reports\deployReport.xml"
              #   displayName: 'Generate Deploy Report'

              # The SqlPackage DriftReport action creates an XML report of the Schema changes that have been made to the registered database since it was last registered.
              # this step didn't pass in the build, or output a file in a local CLI??
              # - script: sqlpackage /Action:DriftReport /TargetConnectionString:"Server=$(DatabaseServer);Initial Catalog=Example" /OutputPath:"$(Build.ArtifactStagingDirectory)\reports\driftReport.xml"
              #   displayName: 'Generate Drift Report'

              # The SqlPackage Script action creates a T-SQL script of the incremental changes that would be applied to the target.
              #- script: sqlpackage /Action:Script /SourceFile:"${{ variables.sqlPublishPac }}" /TargetConnectionString:"Server=$(DatabaseServer);Initial Catalog=Example" /OutputPath:"$(Build.ArtifactStagingDirectory)\reports\driftReport.xml"
              #  displayName: 'Generate Script'
                
              # $(DatabaseServer) is from the Variable Group
              # /Action:Publish creates the report and the script
              - script: sqlpackage /Action:Publish /SourceFile:"${{ variables.sqlPublishPac }}" /TargetConnectionString:"Server=$(DatabaseServer);Initial Catalog=Example" /DeployReportPath:"$(Build.ArtifactStagingDirectory)\reports\deployReport.xml" /DeployScriptPath:"$(Build.ArtifactStagingDirectory)\reports\changeScript.sql"
                displayName: "Publish"
              
              - task: PublishPipelineArtifact@1
                inputs:
                  targetPath: "$(Build.ArtifactStagingDirectory)/reports/"
                  artifact: "Diagnostic Files"
                  publishLocation: "pipeline"

# sqlpackage /Action:Publish /SourceFile:bin/Debug/MyDatabaseProject.dacpac /TargetConnectionString:{yourconnectionstring}

Comparison from Microsoft Copilot

I asked Microsoft Copilot to compare the two approaches and it gave me a helpful breakdown.

📊 Decision Matrix: DbUp vs SSDT/Dacpac

| Criteria | DbUp (Migration-based) | SSDT/Dacpac (State-based) |
| --- | --- | --- |
| Change Tracking | Explicit, file-based migrations (e.g., 001_AddTable.sql) | Implicit, based on schema diff |
| Version Control | Easy to track individual changes | Harder to trace specific changes |
| Deployment Order | Controlled via script order | Determined by schema comparison |
| Delayed Releases | Easy to cherry-pick and delay specific scripts | Difficult; requires managing schema drift |
| Rollback Strategy | Manual (write down scripts) | Not built-in, but can be scripted |
| Tooling Simplicity | Lightweight, flexible | Heavier tooling (SSDT, Visual Studio) |
| Cross-Repo Coordination | Needs naming conventions to avoid conflicts | Centralized schema avoids duplication |
| Linked Server Support | Works well with raw SQL | Often problematic or unsupported |
| SSIS Integration | Can be co-located with SSIS in same repo | Works well if DB project is in same solution |
| Environment Drift Detection | Manual | Built-in drift detection |
| Learning Curve | Lower for SQL developers | Higher due to tooling and concepts |
| Approval Workflow | PR-based, script reviewed directly | PR-based, but schema diff is abstracted |
| Production Safety | High control, but requires discipline | Safer with drift detection and validation |
| Variables/Templating | Supported (e.g., tokens in scripts) | Limited or complex to manage |

🧭 When to Use Which?

| Scenario | Recommended Approach | Why |
| --- | --- | --- |
| Small, isolated DB changes tied to app features | DbUp | Easier to manage per-feature migrations |
| Large, centralized schema with multiple contributors | SSDT/Dacpac | Schema diffing helps avoid conflicts |
| Long-lived feature branches or delayed releases | DbUp | Cherry-picking scripts is easier |
| Projects with linked servers or raw SQL in SSIS | DbUp | More flexible and compatible |
| Projects already using SSDT successfully (e.g., BI team) | SSDT/Dacpac | Leverage existing tooling and experience |
| Need for drift detection or schema validation | SSDT/Dacpac | Built-in validation tools |
| Devs prefer SQL-first workflow | DbUp | More natural for SQL-centric teams |