Posts

Transactional SQL Server Replication Toolbox Scripts

Originally published on KendalVanDyke.com.

During the last few years I’ve worked extensively with transactional replication and have written a handful of scripts that have found a permanent home in my “useful scripts” toolbox. I’ve provided these scripts as downloads whenever I’ve presented about replication…but not everyone who has worked with replication has been to one of my presentations (or had access to the downloads afterwards) so I’m posting them in this Transactional SQL Server Replication Toolbox Scripts series.

The first script in my toolbox shows all of the articles and columns in each article for all transactional publications in a published database. It’s pretty straightforward – just execute the script in the published database on the publisher. Note that because it uses the FOR XML PATH directive it must be run on SQL 2005 or higher.

/********************************************************************************************* 
Transactional SQL Server Replication Toolbox Scripts: Show Articles and Columns for All Publications 

Description: 
   Shows articles and columns for each article for all transactional publications 

   (C) 2013, Kendal Van Dyke (mailto:kendal.vandyke@gmail.com) 

Version History: 
   v1.00 (2013-01-29) 

License: 
   This query is free to download and use for personal, educational, and internal 
   corporate purposes, provided that this header is preserved. Redistribution or sale 
   of this query, in whole or in part, is prohibited without the author's express 
   written consent. 

Note: 
   Execute this query in the published database on the PUBLISHER 

   Because this query uses FOR XML PATH('') it requires SQL 2005 or higher 
   
*********************************************************************************************/ 

SELECT 
   syspublications.name AS "Publication", 
   sysarticles.name AS "Article", 
   STUFF( 
       ( 
           SELECT ', ' + syscolumns.name AS [text()] 
           FROM sysarticlecolumns WITH (NOLOCK) 
               INNER JOIN syscolumns WITH (NOLOCK) ON sysarticlecolumns.colid = syscolumns.colorder 
           WHERE sysarticlecolumns.artid = sysarticles.artid 
               AND sysarticles.objid = syscolumns.id 
           ORDER BY syscolumns.colorder 
           FOR XML PATH('') 
       ), 1, 2, '' 
   ) AS "Columns"
FROM syspublications WITH (NOLOCK)
   INNER JOIN sysarticles WITH (NOLOCK) ON syspublications.pubid = sysarticles.pubid 
ORDER BY syspublications.name, sysarticles.name

 

About the Author

Microsoft SQL Server MVP & Principal Consultant

Kendal Van Dyke

UpSearch Alum Kendal Van Dyke is a database strategist, community advocate, public speaker and blogger. He is passionate about helping leaders use Microsoft's SQL Server to solve complex problems that protect, unlock and optimize data's value.

Since 1999, Kendal has specialized in SQL Server database management solutions and provided IT strategy consulting. Kendal excels at disaster recovery, high availability planning/implementation and debugging/troubleshooting mission critical SQL Server environments.

Kendal Van Dyke served the SQL Server community as Founder and President of MagicPass, the Orlando, FL based chapter of the Professional Association for SQL Server (PASS). In 2012, Kendal served as a member of the PASS Board of Directors.

Kendal remains active in the SQL Server community as a speaker and blogger. He teaches SQL Server enthusiast and technology leaders how to protect, unlock and optimize data’s value. Since 2008, Kendal has operated a SQL Server focused blog at http://www.kendalvandyke.com/.

Microsoft acknowledged Kendal for his support and outstanding contributions to the SQL Server community by awarding him Microsoft MVP (2011-15). Learn more about Kendal Van Dyke https://upsearch.com/kendal-van-dyke/.

About UpSearch


UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

SQL Server Replication Gotcha – Multiple Publications

Originally published on KendalVanDyke.com.

Here's another replication gotcha: the same article in multiple publications. When administering replication topologies it's common to group articles into publications based on the roles that subscribers fulfill. Often you'll have multiple subscriber roles and therefore multiple publications, and in some cases a subset of articles is common between them. There's nothing to prevent you from adding the same article to more than one publication, but I want to point out how this can potentially lead to major performance problems with replication.

Let’s start with a sample table:

CREATE TABLE [dbo].[ReplDemo]
    (
      [ReplDemoID] [int] IDENTITY(1, 1) NOT FOR REPLICATION
                         NOT NULL ,
      [SomeValue] [varchar](20) NOT NULL ,
      CONSTRAINT [PK_ReplDemo] PRIMARY KEY CLUSTERED ( [ReplDemoID] ASC )
        ON [PRIMARY]
    )
ON  [PRIMARY]
GO

Now let’s pretend that we need this table replicated to two subscribers which have different roles. We’ll create one publication for each role and add the table to both publications:

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'ReplDemo Publication A',
    @description = N'Publication to demonstrate behavior when same article is in multiple publications',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'ReplDemo Publication A',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO
-- Adding the transactional articles
EXEC sp_addarticle @publication = N'ReplDemo Publication A',
    @article = N'ReplDemo', @source_owner = N'dbo',
    @source_object = N'ReplDemo', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'ReplDemo', @destination_owner = N'dbo', @status = 8,
    @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboReplDemo]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboReplDemo]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboReplDemo]'
GO 

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'ReplDemo Publication B',
    @description = N'Publication to demonstrate behavior when same article is in multiple publications',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'ReplDemo Publication B',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO
-- Adding the transactional articles
EXEC sp_addarticle @publication = N'ReplDemo Publication B',
    @article = N'ReplDemo', @source_owner = N'dbo',
    @source_object = N'ReplDemo', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'ReplDemo', @destination_owner = N'dbo', @status = 8,
    @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboReplDemo]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboReplDemo]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboReplDemo]'
GO

After creating the publications we create our subscriptions, take & apply the snapshot, and we’re ready to start making changes so we execute this simple insert statement:

INSERT  INTO dbo.ReplDemo
        ( SomeValue )
VALUES  ( 'Test' )

Here's the million dollar question: how many times does this insert statement get added to the distribution database? To find out, we'll run the following statement on the distributor (after the log reader agent has done its work, of course):

SELECT  MSrepl_commands.xact_seqno ,
        MSrepl_commands.article_id ,
        MSrepl_commands.command_id ,
        MSsubscriptions.subscriber_id
FROM    distribution.dbo.MSrepl_commands AS [MSrepl_commands]
        INNER JOIN distribution.dbo.MSsubscriptions AS [MSsubscriptions] ON MSrepl_commands.publisher_database_id = MSsubscriptions.publisher_database_id
                                                              AND MSrepl_commands.article_id = MSsubscriptions.article_id
        INNER JOIN distribution.dbo.MSarticles AS [MSarticles] ON MSsubscriptions.publisher_id = MSarticles.publisher_id
                                                              AND MSsubscriptions.publication_id = MSarticles.publication_id
                                                              AND MSsubscriptions.article_id = MSarticles.article_id
WHERE   MSarticles.article = 'ReplDemo'
ORDER BY MSrepl_commands.xact_seqno ,
        MSrepl_commands.article_id ,
        MSrepl_commands.command_id

Here’s the output of the statement:

[Query results: the single insert command appears twice – one row per publication]

That’s one row for each publication the table article is included in. Now imagine that an update statement affects 100,000 rows in the table. In this example that would turn into 200,000 rows that will be inserted into the distribution database and need to be cleaned up at a later date. It’s not hard to see how this could lead to performance problems for tables that see a high volume of insert\update\delete activity.
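To gauge this duplication on your own distributor, the listing query above can be adapted into an aggregate. This is a sketch built from the same distribution tables and join conditions as the query above; high counts for the same article under multiple publication IDs indicate duplicated commands:

```sql
-- Sketch: count pending replicated commands per article and publication.
-- Same tables and joins as the listing query; run on the DISTRIBUTOR.
SELECT  MSarticles.article ,
        MSsubscriptions.publication_id ,
        COUNT(*) AS command_count
FROM    distribution.dbo.MSrepl_commands AS [MSrepl_commands]
        INNER JOIN distribution.dbo.MSsubscriptions AS [MSsubscriptions]
            ON MSrepl_commands.publisher_database_id = MSsubscriptions.publisher_database_id
           AND MSrepl_commands.article_id = MSsubscriptions.article_id
        INNER JOIN distribution.dbo.MSarticles AS [MSarticles]
            ON MSsubscriptions.publisher_id = MSarticles.publisher_id
           AND MSsubscriptions.publication_id = MSarticles.publication_id
           AND MSsubscriptions.article_id = MSarticles.article_id
GROUP BY MSarticles.article ,
        MSsubscriptions.publication_id
ORDER BY MSarticles.article
```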

Workarounds
Two workarounds for this behavior come to mind:

  1. Modify data using stored procedures, then replicate both their schema and execution. This won’t help for insert statements and is useless if you’re only updating\deleting a single row each time the procedure executes. This also assumes that all dependencies necessary for the stored procedure(s) to execute exist at the subscriber.
  2. Limit table articles to one publication per article. If you’re creating publications from scratch then place table articles that would otherwise be included in multiple publications into their own distinct publication. If you’re working with existing publications that already include the table article then subscribe only to the article(s) that you need rather than adding the article to another publication. (Subscribing to individual articles within a publication can get tricky – I’ll demonstrate how to do this in a future post)
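For the second workaround, the key is that sp_addsubscription's @article parameter defaults to 'all'; naming a single article subscribes to just that article. A minimal sketch only, assuming the publication created earlier (the subscriber server and database names here are hypothetical, and initialization details are what makes this tricky in practice):

```sql
-- Sketch: subscribe to just the ReplDemo article in an existing publication
-- instead of adding the article to a second publication.
-- SUBSCRIBER2 and ReplDemoSub are placeholder names for illustration.
EXEC sp_addsubscription
    @publication = N'ReplDemo Publication A',
    @article = N'ReplDemo',          -- default is 'all'; name one article here
    @subscriber = N'SUBSCRIBER2',
    @destination_db = N'ReplDemoSub',
    @subscription_type = N'Push',
    @sync_type = N'automatic'
GO
```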

 


SQL Server Replication Gotcha – Blank XML


Originally published on KendalVanDyke.com.

Here's another replication gotcha: blank XML. Transactional replication in SQL Server 2005\2008 handles the XML datatype just fine, with few exceptions – one in particular being when the XML value is blank. I’ll save the argument about whether a blank (or empty string, if you prefer) value is well-formed XML for another day, because the point is that SQL Server allows it. Consider the following table:

CREATE TABLE [dbo].[XMLReplTest]
    (
      [XMLReplTestID] [int] IDENTITY(1, 1) NOT FOR REPLICATION
                            NOT NULL ,
      [SomeXML] [xml] NOT NULL ,
      CONSTRAINT [PK_XMLReplTest] PRIMARY KEY CLUSTERED
        ( [XMLReplTestID] ASC ) ON [PRIMARY]
    )
ON  [PRIMARY]
GO

Execute the following statement and you’ll see that SQL Server handles it just fine:

INSERT  INTO dbo.XMLReplTest
        ( SomeXML )
VALUES  ( '' )

Now let’s add this table to a transactional replication publication:

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'XML Replication Test',
    @description = N'Sample publication to demonstrate blank XML gotcha',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'XML Replication Test',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO 

-- Adding the transactional articles
EXEC sp_addarticle @publication = N'XML Replication Test',
    @article = N'XMLReplTest', @source_owner = N'dbo',
    @source_object = N'XMLReplTest', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'XMLReplTest', @destination_owner = N'dbo',
    @status = 8, @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboXMLReplTest]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboXMLReplTest]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboXMLReplTest]'
GO

Assume we’ve created the publication, added a subscriber, taken & applied the snapshot, and we’re ready to start changing data. Let’s throw a monkey wrench into the works by executing the insert statement with the blank XML again and watch what happens to the log reader agent:

[Screenshot: Log Reader Agent error message]

That’s not a very nice error (or resolution)! I’ve been able to reproduce this behavior in SQL 2005 & 2008 but I have not tried it in 2008 R2. I’ve entered a Connect bug report, so hopefully this is fixed in a forthcoming cumulative update. In the meantime there is a simple workaround: add a check constraint. Since we’re working with the XML datatype, the only option for checking length with a scalar function is DATALENGTH. The DATALENGTH of a blank XML value is 5, so we want to check that any inserted or updated value is greater than 5:

ALTER TABLE dbo.XMLReplTest ADD CONSTRAINT
   CK_XMLReplTest_SomeXML CHECK (DATALENGTH(SomeXML) > 5)
GO
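To sanity-check the constraint, here is a sketch of re-running the inserts against the table above; with the check in place the blank value should be rejected at the publisher instead of breaking the log reader agent:

```sql
-- With CK_XMLReplTest_SomeXML in place, a blank value now fails at the
-- publisher with a check constraint violation (error 547) and never
-- reaches replication:
INSERT INTO dbo.XMLReplTest ( SomeXML )
VALUES ( '' )

-- A non-blank XML value (DATALENGTH > 5) still inserts and replicates:
INSERT INTO dbo.XMLReplTest ( SomeXML )
VALUES ( '<root />' )
```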

If you are affected by this behavior please consider taking a moment to go vote for it on Connect.

 


SQL Server Replication – Troubleshooting Transactional Replication

Kendal Van Dyke’s June 2010 article in SQL Server Pro magazine covers troubleshooting transactional replication and solves three common replication problems.

Continue Reading on SQLMag.com >>

 

 


SQL Server Replication Scripts: Show All Transactional Publications & Subscribers At Distributor

Originally published on KendalVanDyke.com.

Anybody who has talked with me about replication or heard me present about it knows that I recommend using a dedicated remote distributor for anything beyond light replication workloads. Unfortunately, neither SSMS nor Replication Monitor provides an easy “one view to rule them all” way at the distributor (or anywhere else) to show every transactional publication, its subscribers, and the articles they’re subscribed to. The only way to gather that information using SSMS is to script out each publication and visually parse the replication scripts. I manage hundreds of publications & subscriptions and that’s not a reasonable option for me, so I’ve written the following script to show me everything at once:

-- Show Transactional Publications and Subscriptions to articles at Distributor
-- Run this on the DISTRIBUTOR
-- Add a WHERE clause to limit results to one publisher\subscriber\publication\etc
SELECT  publishers.srvname AS [Publisher] ,
        publications.publisher_db AS [Publisher DB] ,
        publications.publication AS [Publication] ,
        subscribers.srvname AS [Subscriber] ,
        subscriptions.subscriber_db AS [Subscriber DB] ,
        articles.article AS [Article]
FROM    sys.sysservers AS publishers
        INNER JOIN distribution.dbo.MSarticles AS articles ON publishers.srvid = articles.publisher_id
        INNER JOIN distribution.dbo.MSpublications AS publications ON articles.publisher_id = publications.publisher_id
                                                              AND articles.publication_id = publications.publication_id
        INNER JOIN distribution.dbo.MSsubscriptions AS subscriptions ON articles.publisher_id = subscriptions.publisher_id
                                                              AND articles.publication_id = subscriptions.publication_id
                                                              AND articles.article_id = subscriptions.article_id
        INNER JOIN sys.sysservers AS subscribers ON subscriptions.subscriber_id = subscribers.srvid 

-- Limit results to subscriber
--WHERE   subscribers.srvname = '[Subscriber Server Name]' 

---- Limit results to publisher and publication
--WHERE   publishers.srvname = '[Publisher Server Name]'
--        AND publications.publication = '[Publication Name]' 

ORDER BY publishers.srvname ,
        subscribers.srvname ,
        publications.publication ,
        articles.article

This script also works for distributors running SQL 2000; just substitute master.dbo.sysservers in place of sys.sysservers.

 


SQL Server Replication Snapshot Errors

Originally published on KendalVanDyke.com.

I ran across an interesting SQL Server replication snapshot error recently that’s worth sharing. It happened while using a distributor running SQL 2008, a publisher running SQL 2005, and the published database set to 2000 (80) compatibility. When adding a new subscription (the version and compatibility of the subscriber are irrelevant), the snapshot agent failed with the following error (extra details omitted for readability):

Error messages: 
Source: Microsoft.SqlServer.Smo 


Message: Script failed for Table 'dbo.Template_HeaderFooter'. 


Message: Column HeaderFooter_Value in object Template_HeaderFooter contains type NVarCharMax, which is not supported in the target server version, SQL Server 2000. 

The distributor was recently upgraded from SQL 2005 where this wasn’t a problem. A quick search of Microsoft’s KB turned up nothing on the error. After some tinkering I was able to figure out a workaround: change the compatibility level of the published DB to 2005 (90). While this works, it’s less than ideal if your DB is already live because you may break code by changing the compatibility level.

Unfortunately I haven’t found any other workarounds to the problem, so if this is happening to you, your best bet is to pick a time when no one is using the DB, change the compatibility level, take your snapshot, then change the compatibility level back. Of course, an even better strategy is to work with your development teams to get the DB moved up to 2005 compatibility permanently.
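The compatibility-level dance can be scripted. A minimal sketch, assuming a SQL 2005 publisher (where sp_dbcmptlevel sets the compatibility level); 'PublishedDB' is a placeholder name:

```sql
-- Sketch: temporarily raise compatibility to 90 for the snapshot, then
-- restore it. Run during a quiet window when no one is using the DB.
EXEC sp_dbcmptlevel @dbname = N'PublishedDB', @new_cmptlevel = 90
GO
-- ...run the snapshot agent and wait for it to complete...
EXEC sp_dbcmptlevel @dbname = N'PublishedDB', @new_cmptlevel = 80
GO
```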

 


Launch SQL Server Replication Monitor Without SSMS

Originally published on KendalVanDyke.com.

I manage a lot of replication publications\subscriptions and when I get a latency or agent failure alert it always bugs me that I have to go through 5 steps to launch Replication Monitor: launch SSMS, go to my server in object explorer, expand the treeview, right click on the Replication node, and choose the menu option to launch Replication Monitor. To make things easier I did what I think Microsoft should have done in the first place and created a shortcut to the Replication Monitor executable in my start menu and on my desktop.

To create a shortcut for yourself, open Windows Explorer and navigate to C:\Program Files\Microsoft SQL Server\100\Tools\Binn if you’re running SSMS 2008, or C:\Program Files\Microsoft SQL Server\90\Tools\Binn if you’re running SSMS 2005. Right click on sqlmonitor.exe and choose Send To > Desktop (Create Shortcut). Now you’ve got a shortcut on your desktop that launches Replication Monitor with a simple double click, no SSMS required! (You’ll probably also want to rename the shortcut to something like “Replication Monitor”.)

Unfortunately the default icon doesn’t look great in anything bigger than 16×16 so I replaced it with a really nice (and free!) icon from iconspedia.com that looks much better:

Happy monitoring!

Visit: Activity Monitor Icon on iconspedia.com

 

About the Author

Microsoft SQL Server MVP & Principal Consultant

Kendal Van Dyke

UpSearch Alum Kendal Van Dyke is a database strategist, community advocate, public speaker and blogger. He is passionate about helping leaders use Microsoft's SQL Server to solve complex problems that protect, unlock and optimize data's value.

Since 1999, Kendal has specialized in SQL Server database management solutions and provided IT strategy consulting. Kendal excels at disaster recovery, high availability planning/implementation and debugging/troubleshooting mission critical SQL Server environments.

Kendal Van Dyke served the SQL Server community as Founder and President of MagicPass, the Orlando, FL based chapter of the Professional Association for SQL Server (PASS). In 2012, Kendal served as a member of the PASS Board of Directors.

Kendal remains active in the SQL Server community as a speaker and blogger. He teaches SQL Server enthusiasts and technology leaders how to protect, unlock and optimize data’s value. Since 2008, Kendal has operated a SQL Server focused blog at http://www.kendalvandyke.com/.

Microsoft acknowledged Kendal for his support and outstanding contributions to the SQL Server community by awarding him Microsoft MVP (2011-15). Learn more about Kendal Van Dyke at https://upsearch.com/kendal-van-dyke/.

About UpSearch


UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

SQL Server Replication – Hey I Said NOT To Replicate Delete Statements!

Originally published on KendalVanDyke.com.

You’ve been given the task to set up a new publication and one of the requirements is that delete statements should not be replicated. No problem, you know just what to do – set the delete delivery format for your articles in the publication properties to “Do not replicate DELETE statements”, like so:

Statement delivery options for all table articles

You create your publication, set up your subscribers, and go about your day…only to have Joe Salesguy come into your office the next day in a panicked frenzy yelling, “The data’s gone, the data’s gone! What did you do with the data?!” Not convinced he knows what he’s talking about, you pull up Management Studio, run a few queries, and much to your chagrin you discover that sure enough, the data’s gone. Someone issued delete statements on your publisher that replicated out to your subscribers, even though you know that you set the publication up properly. So what gives?

It turns out that there’s a nasty “feature” in the article properties window which affects all three delivery formats. When the option to set the properties of all table articles is selected, the underlying script that is generated omits the @ins_cmd, @upd_cmd, or @del_cmd parameters of sp_addarticle for whichever commands “Do not replicate ins/upd/del statements” was selected for. When those parameters are missing, SQL Server falls back to the default delivery formats, and what you end up with is statements that get replicated even though you didn’t want them to.

There are three ways to get around this problem:

  1. When setting up your publication change the properties of the article(s) that you do not want to replicate commands for individually, rather than setting the properties of all articles at once.
  2. When setting up your publication and setting the properties of all articles at once, instead of letting the UI create the publication, specify that you want to script the commands to a file. Open the file and add the @ins_cmd, @upd_cmd, and/or @del_cmd parameters with a value of “NONE” to each sp_addarticle call for the corresponding statements that you do not want delivered.
  3. If your publication already exists, call sp_changearticle for each article in your publication and supply the necessary parameters to change the delivery format for inserts, updates, and deletes. This option is a bit more intrusive, though, because it will force you to reinitialize your subscriptions.
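As a rough sketch of options 2 and 3, the adjusted calls might look like the following. The publication and article names here are hypothetical, and the @ins_cmd/@upd_cmd values shown are the default CALL/SCALL procedure formats, so verify them against your own scripted publication before using anything like this:

```sql
-- Option 2: when scripting the publication out to a file, make the
-- delivery-format parameters explicit so deletes are not replicated.
-- (Publication, article, and procedure names are examples only.)
EXEC sp_addarticle
    @publication = N'MyPublication',
    @article = N'MyTable',
    @source_owner = N'dbo',
    @source_object = N'MyTable',
    @type = N'logbased',
    @ins_cmd = N'CALL sp_MSins_dboMyTable',
    @upd_cmd = N'SCALL sp_MSupd_dboMyTable',
    @del_cmd = N'NONE';  -- do not replicate DELETE statements

-- Option 3: for an existing publication, change the article property.
-- Note this forces a reinitialization of the subscriptions.
EXEC sp_changearticle
    @publication = N'MyPublication',
    @article = N'MyTable',
    @property = N'del_cmd',
    @value = N'NONE',
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1;
```

Both calls run in the published database on the publisher; run sp_changearticle once per article whose delivery format you want to change.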

Fortunately this behavior has been corrected in SSMS 2008.

 


SQL Server Replication – Reasons To Change CommitBatchSize And CommitBatchThreshold

Originally published on KendalVanDyke.com.

In my last post I showed how CommitBatchSize and CommitBatchThreshold affect SQL Server replication. Now the question is: why would you want to change them? The simple answer is that usually you don’t need to – the defaults work just fine most of the time. But there are a few reasons to change CommitBatchSize and CommitBatchThreshold that you may consider:

Why you would lower the values

  • Your subscriber experiences a consistently high volume of activity and you want to minimize locking. Think SQL Servers sitting behind public facing web servers. Remember, replication delivers commands to subscribers in a transaction which cause row locks that can lead to blocking. Reducing the number of commands in each transaction will shorten the duration of the locks but be careful – there’s a fixed overhead to committing transactions so by lowering the values the tradeoff is that your subscribers will have to process more of them.
  • Your network between distributor and subscriber is slow and/or unreliable. Lowering the values will result in smaller transactions at the subscriber, and if a network failure occurs there will be fewer commands to roll back and re-apply.

Why you would raise the values

  • You want to increase replication throughput. One example is when you’re pushing changes to a publishing subscriber over a WAN connection and you don’t care about blocking at the subscriber. Raising the values means more commands are included in each transaction at the subscriber, and fewer transactions means less overhead. Microsoft suggests that “increasing the values twofold to tenfold improved performance by 5 percent for INSERT commands, 10-15 percent for UPDATE commands, and 30 percent for DELETE commands” (take this with a grain of salt though – it was written back in the SQL 2000 days). The thing to watch out for is that at some point system resources at the subscriber (e.g. disk I/O) diminish the benefits of increasing the values. Also consider that more commands per transaction means that any failure at the subscriber will take longer to roll back and re-apply.

How much you raise or lower the values depends on a number of factors including: hardware horsepower, bandwidth, and volume of changes being replicated. There’s no one good answer that applies to all scenarios. The best thing to do is change them a small amount at a time and observe the impact – positive or negative. Eventually you’ll find the sweet spot for your environment.
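If you'd rather script the change than edit an agent profile through the Replication Monitor UI, one approach is to create a custom Distribution Agent profile at the distributor and assign it to your agents. A minimal sketch follows; the profile name, description, and parameter values are examples only, not recommendations:

```sql
-- Run at the distributor.
-- Create a custom Distribution Agent profile (agent_type 3 = Distribution Agent)
-- that overrides the two batching parameters discussed in this post.
DECLARE @profile_id INT;

EXEC sp_add_agent_profile
    @profile_id = @profile_id OUTPUT,
    @profile_name = N'Smaller commit batches',
    @agent_type = 3,    -- Distribution Agent
    @profile_type = 1,  -- custom (non-system) profile
    @description = N'Lowered batch values to reduce lock duration at the subscriber';

-- Override only the parameters we care about; everything else inherits defaults.
EXEC sp_add_agent_parameter @profile_id, N'CommitBatchSize', N'50';
EXEC sp_add_agent_parameter @profile_id, N'CommitBatchThreshold', N'500';
```

After creating the profile, assign it to the distribution agent(s) and restart the agents so the new values take effect.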

 


How CommitBatchSize And CommitBatchThreshold Affect SQL Server Replication

Originally published on KendalVanDyke.com.

Note: This is a long post. I hope it’s worth your while to read.

A common suggestion for optimizing transactional replication performance is to adjust the values in your distribution agent profile for CommitBatchSize and CommitBatchThreshold. Unfortunately, what these two values really do isn’t documented very well anywhere. According to Books Online:

  • CommitBatchSize “Is the number of transactions to be issued to the Subscriber before a COMMIT statement is issued. The default is 100.”
  • CommitBatchThreshold “Is the number of replication commands to be issued to the Subscriber before a COMMIT statement is issued. The default is 1000.”

When you read this you might wonder what the difference is between a transaction and a command. After all, isn’t a command just an autocommit transaction? Does this mean that the commands in a transaction could get broken up into smaller transactions at the subscriber? Won’t that violate ACID? Microsoft went out of their way to make these two distinct values so each one has to influence delivery of commands in some way, right?

I went the cheap route to find out and posted to the forums at SQLServerCentral and to microsoft.public.sqlserver.replication. While I waited for an answer I set up some simple tests to try to figure it out for myself.

Before reading on, I want to test your replication mettle and give you a little quiz. If you want to cut to the chase and see if you’re right, jump to the conclusions at the end of this post to view the answers.

  1. Based on the defaults above, what happens if I issue 1,500 insert/update/delete statements which each affect a single row, all contained within an explicit transaction (e.g. BEGIN TRANSACTION, 1,500 updates, then COMMIT)?
  2. What if those 1,500 statements are not part of a transaction?
  3. What happens if I issue three update statements: the first affecting 600 rows, the second affecting 500 rows, and the third affecting 400 rows?

 

Setup

I started by creating a simple table and populating it with 1,000 rows (representative of a large and random sample set):

SET NOCOUNT ON 
GO 
CREATE TABLE TransReplTest ( 
    [ID] INT IDENTITY(1,1), 
    [SomeDate] DATETIME, 
    [SomeValue] VARCHAR(20), 
    CONSTRAINT [PK_TransReplTest] PRIMARY KEY CLUSTERED ( 
        [ID] 
    ) 
) 
GO 
INSERT INTO TransReplTest (SomeDate, SomeValue) 
SELECT GETDATE(), CONVERT(VARCHAR(20), RAND() * 1000) 
GO 1000

I created a publication, added a table article to it, and set up a subscription. I created a new distribution agent profile based on the default agent profile and changed only the values for CommitBatchSize and CommitBatchThreshold. Each time I changed their values I stopped and started the distribution agent to ensure I was using the new value. Finally, I set up a Profiler session capturing the following events on the subscriber: “RPC:Completed”, “SQL:BatchCompleted”, and “SQL:BatchStarting”. The sessions captured some extra noise, so for the sake of readability I saved them and filtered them for each test afterwards.

Test 1

To make things easier to understand I set the values low, like this:

CommitBatchSize: 3
CommitBatchThreshold: 5

First I ran this statement which randomly updates 10 rows, one at a time, all contained within a transaction:

BEGIN TRANSACTION

DECLARE @rownum TINYINT 
SET @rownum = 0

WHILE @rownum < 10 
BEGIN

    UPDATE TransReplTest 
    SET SomeDate = GETDATE() 
    WHERE ID IN ( 
        SELECT TOP 1 ID 
        FROM TransReplTest 
        ORDER BY NEWID() 
    )

    SET @rownum = @rownum + 1 
END

COMMIT

Profiler shows 1 transaction with 10 updates. CommitBatchThreshold didn’t appear to affect anything here.


When I issue the same statement without the explicit transaction Profiler shows 4 transactions of 3 updates, 3 updates, 3 updates, and 1 update. CommitBatchSize causes every 3 statements (each of which is an autocommit transaction) to be grouped together in a transaction on the subscriber.


Next I issued a single update statement affecting 10 rows:

UPDATE TransReplTest 
SET SomeDate = GETDATE() 
WHERE ID IN ( 
    SELECT TOP 10 ID 
    FROM TransReplTest 
    ORDER BY NEWID() 
)

Profiler shows 1 transaction with 10 updates at the subscriber. Again CommitBatchThreshold did not affect anything here.

 

Finally, I issued an update statement affecting 3 rows 3 times:

UPDATE TransReplTest 
SET SomeDate = GETDATE() 
WHERE ID IN ( 
    SELECT TOP 3 ID 
    FROM TransReplTest 
    ORDER BY NEWID() 
) 
GO 3

This time Profiler shows 2 transactions; the first contains 6 updates and the second contains 3 updates. Even though CommitBatchSize was set to 3, the number of rows affected by the first two statements exceeded CommitBatchThreshold (set to 5) and so the third statement was put into its own transaction.


Test 2

Let’s see what happens if we switch the values around and run through the same statements as Test 1:

CommitBatchSize: 5
CommitBatchThreshold: 3

Randomly update 10 rows in a single transaction. We still see 1 transaction with 10 updates.


Now the same statement without the transaction. Now we see 4 transactions with updates in batches of 4, 1, 4, and 1. This time CommitBatchThreshold set to 3 was the reason we saw the smaller batches. But wait – shouldn’t a CommitBatchThreshold of 3 mean we should see 3 transactions with 3 updates in each transaction and 1 transaction with 1 update?


Next, the single update statement affecting 10 rows. Profiler shows 1 transaction with 10 updates. CommitBatchThreshold made no difference here.


Finally, updating 3 rows 3 times. Profiler shows 1 transaction with 6 updates and 1 transaction with 3 updates. Just like in Test 1, even though CommitBatchThreshold was set to 3, the number of rows affected by the first two statements exceeded CommitBatchSize (set to 5) and so the third statement was put into its own transaction.


Conclusions

Based on the profiler traces, here’s what we can conclude:

  • Replication delivers changes to subscribers one row at a time; a single update statement which changes 1,000 rows at the publisher will result in 1,000 statements at the subscriber.
  • Transactions at the publisher are kept intact at the subscriber. If a million rows are affected inside a transaction – either explicit or autocommit – on the publisher you can count on a million statements in one explicit transaction to be delivered to the subscriber. We saw this in both the statement which updated 10 rows one at a time in an explicit transaction and the statement which updated 10 rows in one shot.
  • The lower of the two values for CommitBatchSize and CommitBatchThreshold is the primary driver for how statements are delivered to subscribers. When the number of transactions on the publisher reaches that lower value the distribution agent will apply them in a single explicit transaction at the subscriber. We saw this in the statement which updated 10 rows one at a time (not in a transaction).
  • The higher of the two values can cause commands to be delivered before the lower value is met. If the number of rows affected at the publisher reaches the higher value the distribution agent will immediately apply the changes in a single explicit transaction at the subscriber. We saw this in the statements which updated 3 rows 3 times.

OK, now that we know how CommitBatchSize and CommitBatchThreshold affect SQL Server replication, let’s look at the answers to the quiz:

  1. Based on the defaults above, what happens if I issue 1,500 insert/update/delete statements which each affect a single row, all contained within an explicit transaction (e.g. BEGIN TRANSACTION, 1,500 updates, then COMMIT)?
    Answer: Transactions at the publisher are honored at the subscriber so we’ll see one explicit transaction applied at the subscriber which contains 1,500 updates.
  2. What if those 1,500 statements are not part of a transaction?
    Answer: Because CommitBatchSize is set to 100 we’ll see 15 transactions at the subscriber, each containing 100 statements which affect one row per statement.
  3. What happens if I issue three update statements: the first affecting 600 rows, the second affecting 500 rows, and the third affecting 400 rows?
    Answer: Because CommitBatchThreshold is set to 1,000 we’ll see 2 transactions at the subscriber. The first transaction will contain 1,100 statements and the second transaction will contain 400 statements.

BTW, remember that one weird case where CommitBatchSize was set to 5, CommitBatchThreshold was set to 3, and when we updated 10 rows one at a time we saw 4 transactions with updates in batches of 4, 1, 4, and 1? I think that’s a bug in SQL Server. It looks like the distribution agent alternates between (CommitBatchThreshold + 1) and (CommitBatchThreshold – 2) commands placed into each transaction delivered to the subscriber. Since this only appears to happen when CommitBatchSize is higher than CommitBatchThreshold – and most people don’t change the values to work that way – this seems relatively insignificant, but it's still not the behavior that I expected to see.

 

P.S. – I did get a response to my Usenet posting from MVPs Hilary Cotter and Paul Ibison. You can read it here. Gopal Ashok, a program manager on the SQL Server replication team, confirmed that what I thought was a bug was indeed a bug. Yay me!

 
