SQL Server Replication – Reasons To Change CommitBatchSize And CommitBatchThreshold

Originally published on KendalVanDyke.com.

In my last post I showed how CommitBatchSize and CommitBatchThreshold affect SQL Server replication. Now the question is: why would you want to change them? The simple answer is that usually you don't need to – the defaults work just fine most of the time. But there are a few situations where changing CommitBatchSize and CommitBatchThreshold is worth considering:

Why you would lower the values

  • Your subscriber experiences a consistently high volume of activity and you want to minimize locking. Think SQL Servers sitting behind public-facing web servers. Remember, replication delivers commands to subscribers in a transaction, which causes row locks that can lead to blocking. Reducing the number of commands in each transaction will shorten the duration of the locks, but be careful – there's a fixed overhead to committing transactions, so the tradeoff of lowering the values is that your subscribers will have to process more of them.
  • Your network between the distributor and subscriber is slow and/or unreliable. Lowering the values results in smaller transactions at the subscriber, so if a network failure occurs there will be fewer commands to roll back and re-apply.
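Both values are Distribution Agent parameters (the defaults are CommitBatchSize = 100 and CommitBatchThreshold = 1000). One way to lower them for a single subscription is to append the parameters to the Distribution Agent's job step command. A hedged sketch – the server, database, and publication names below are placeholders, and the rest of your agent command line will differ:

```
distrib.exe -Publisher [PUBSRV] -PublisherDB [SalesDB] -Publication [SalesPub]
    -Distributor [DISTSRV] -Subscriber [WEBSRV] -SubscriberDB [SalesDB]
    -CommitBatchSize 10 -CommitBatchThreshold 100
```

In practice you would edit the "Run agent" job step of the Distribution Agent job in SQL Server Agent rather than invoke distrib.exe by hand, but the parameter syntax is the same.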

Why you would raise the values

  • You want to increase replication throughput. One example is when you're pushing changes to a publishing subscriber over a WAN connection and you don't care about blocking at the subscriber. Raising the values means more commands are included in each transaction at the subscriber, and fewer transactions means less overhead. Microsoft suggests that "increasing the values twofold to tenfold improved performance by 5 percent for INSERT commands, 10-15 percent for UPDATE commands, and 30 percent for DELETE commands" (take this with a grain of salt though – it was written back in the SQL 2000 days). The thing to watch out for is that at some point system resources at the subscriber (e.g. disk I/O) diminish the benefits of raising the values further. Also consider that more commands per transaction means that any failure at the subscriber will take longer to roll back and re-apply.
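If you prefer to raise the values for every agent that uses a given profile rather than one job at a time, the distributor exposes sp_update_agent_profile for editing profile settings. A sketch, assuming you have already created a custom Distribution Agent profile and looked up its profile_id (the value 5 below is a placeholder):

```sql
-- Run at the Distributor on the distribution database.
-- Raise both batch settings on a custom Distribution Agent profile;
-- agents pick up the new values the next time they start.
EXEC sp_update_agent_profile
    @profile_id = 5,                     -- placeholder: your custom profile's id
    @property   = 'CommitBatchSize',
    @value      = 1000;

EXEC sp_update_agent_profile
    @profile_id = 5,
    @property   = 'CommitBatchThreshold',
    @value      = 2000;
```

Using a profile keeps the setting centralized; the command-line parameter approach overrides the profile for a single agent.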

How much you raise or lower the values depends on a number of factors, including hardware horsepower, network bandwidth, and the volume of changes being replicated. There's no one good answer that applies to all scenarios. The best thing to do is change them a small amount at a time and observe the impact – positive or negative. Eventually you'll find the sweet spot for your environment.
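One way to observe the impact of each adjustment in transactional replication is to post a tracer token and compare latencies before and after the change. A sketch – the publication name is a placeholder:

```sql
-- Run at the Publisher on the publication database.
-- Post a tracer token into the replication stream and capture its id.
DECLARE @tokenId INT;
EXEC sp_posttracertoken
    @publication     = 'MyPublication',   -- placeholder publication name
    @tracer_token_id = @tokenId OUTPUT;

-- Once the token has flowed through, report how long it took to reach
-- the distributor and each subscriber.
EXEC sp_helptracertokenhistory
    @publication = 'MyPublication',
    @tracer_id   = @tokenId;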

 

About the Author

Microsoft SQL Server MVP & Principal Consultant

Kendal Van Dyke

UpSearch Alum Kendal Van Dyke is a database strategist, community advocate, public speaker and blogger. He is passionate about helping leaders use Microsoft's SQL Server to solve complex problems that protect, unlock and optimize data's value.

Since 1999, Kendal has specialized in SQL Server database management solutions and provided IT strategy consulting. Kendal excels at disaster recovery, high availability planning/implementation and debugging/troubleshooting mission critical SQL Server environments.

Kendal Van Dyke served the SQL Server community as Founder and President of MagicPass, the Orlando, FL based chapter of the Professional Association for SQL Server (PASS). In 2012, Kendal served as a member of the PASS Board of Directors.

Kendal remains active in the SQL Server community as a speaker and blogger. He teaches SQL Server enthusiasts and technology leaders how to protect, unlock and optimize data’s value. Since 2008, Kendal has operated a SQL Server focused blog at http://www.kendalvandyke.com/.

Microsoft acknowledged Kendal for his support and outstanding contributions to the SQL Server community by awarding him Microsoft MVP (2011-15). Learn more about Kendal Van Dyke https://upsearch.com/kendal-van-dyke/.

About UpSearch


UpSearch is a leading Microsoft Gold Partner for organizations who rely on Microsoft’s Data Platforms, and its mission is to enable every leader to unlock data’s full potential.
