
SQL Server Audit 102 – Reading Audit Output

Auditing doesn’t have to be scary.  SQL Server Audit 102 – Reading Audit Output is part of a blog series designed to help you audit database changes with SQL Server Audit.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization.  

SQL Server Audit 102 – Reading Audit Output

Originally published on ColleenMorrow.com.

In SQL Server Audit 101 – Creating Basic Audit, we went over the basics of creating a SQL Audit. Now obviously once you’ve gotten your audit in place, you’re going to want to look at the output once in a while, right? Right. So that’s what we’re going to go over today.

If you’re using the default file output for your audit, you have two options for reading your audit output: the log viewer and the fn_get_audit_file function.

Log Viewer

We briefly touched on using the log viewer last time, but in case you missed that post, you can view the audit logs by right-clicking on the Audit object and selecting View Audit Logs. The nice thing about the log viewer is that it’s convenient for taking a quick look at your most recent audit records, without having to know the exact path and file name of your current audit file. On the downside, you’re limited to the most recent 1000 records, so if you’ve got a busy system generating a lot of audit records, you might miss something. And you really can’t run reports or archive records using the Log Viewer, now can you? So, if you’re going to use audit files and do some serious auditing, you’ll want a more powerful tool.

fn_get_audit_file

Fortunately, we have that tool in the fn_get_audit_file function. The great thing about this function is that it allows us to treat the audit output file like a table; so we can search, filter and order our audit records like any other data. We can insert it into a table for archival and reporting purposes, and we can join it with other audit files to find trends in our audit data. And, unlike the Log Viewer, we’re not limited in the number of records we can view.
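For example, here's a minimal archival sketch. The dbo.AuditArchive table name is hypothetical, the file pattern matches the DDLAudit example used below, and the function is shown with the sys. schema prefix it's documented under:

-- Minimal archival sketch: persist audit records into a table for reporting.
-- dbo.AuditArchive is a hypothetical table name; adjust the file pattern to your audit.
IF OBJECT_ID('dbo.AuditArchive') IS NULL
    SELECT *
    INTO dbo.AuditArchive
    FROM sys.fn_get_audit_file('D:\SQL2012\Audits\DDLAudit*.sqlaudit', DEFAULT, DEFAULT);
ELSE
    INSERT INTO dbo.AuditArchive
    SELECT *
    FROM sys.fn_get_audit_file('D:\SQL2012\Audits\DDLAudit*.sqlaudit', DEFAULT, DEFAULT);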

Using fn_get_audit_file

The fn_get_audit_file function accepts 3 parameters:

  • file_pattern – The first parameter is the file pattern, which specifies the path and file name of the audit file(s) to be read. You have to specify a path and a file name, though the file name can be or include a wildcard. So, for example, acceptable values would be ‘d:\myAudits\MyAudit*.sqlaudit’ or ‘\\Myserver\d$\myAudits\*’. You can also specify a specific file name, if that’s the only one you want to read.
  • initial_file_name – The second parameter is the initial file name. Suppose there were multiple files in d:\myAudits that started with MyAudit*, but I didn’t want to process them all. I could use this parameter to tell SQL Server which file to start with, and it will read that file and the remaining files after it.
  • audit_record_offset – This last parameter is used in conjunction with the initial file name to tell SQL Server where in that initial file to start. This comes in handy when you’ve already processed some records in that initial file, and you just want to pick up where you left off.

Examples

Let’s look at some examples using the DDLAudit audit I created last week. We’ll start with a basic query, reading in all the records in all the files we’ve accumulated so far.

select * from fn_get_audit_file ('D:\SQL2012\Audits\DDLAudit*.sqlaudit', DEFAULT, DEFAULT)

In my case, I only get four records returned, but that’s ok for this demo. If I scroll over to the file_name and audit_file_offset columns, I can make a note of my last audit record so far.

[Screenshot: query results showing the file_name and audit_file_offset columns]

I’ll create and drop a table in AdventureWorks2012 to generate a couple of audit records.

Use AdventureWorks2012
GO
CREATE TABLE myAuditTest2 (col1 int);
GO
DROP TABLE myAuditTest2;
GO

Now, if I run that same basic query again, I’ll get the new audit records in addition to the old records I’ve already viewed. But, if I use the initial_file_name and audit_file_offset parameter to tell SQL Server where I left off last time, I’ll only get the new records.

select * from fn_get_audit_file
   ('D:\SQL2012\Audits\DDLAudit*.sqlaudit',
    'D:\SQL2012\Audits\DDLAudit_D50CF1AD-2927-44C7-AFD0-0C31D302CA35_0_129861627977120000.sqlaudit',
    5632)

[Screenshot: query results showing only the new audit records]

If we only wanted to see object creation records, and we wanted to know the owner of the database where the event took place, we could use the following:

select p.name, a.database_name, a.schema_name, a.object_name, a.statement
from fn_get_audit_file
   ('D:\SQL2012\Audits\DDLAudit*.sqlaudit',
    'D:\SQL2012\Audits\DDLAudit_D50CF1AD-2927-44C7-AFD0-0C31D302CA35_0_129861627977120000.sqlaudit',
    5632) a
join sys.databases d on a.database_name = d.name
join sys.server_principals p on p.sid = d.owner_sid
where action_id = 'CR'

What’s Next?

Now that we’ve covered the basics of creating an audit and reading its output, we can put this knowledge to use creating a solid auditing solution for our SQL Server instance. That’s what we’ll do next in SQL Server Audit 201 – Creating an Audit Solution.

SQL Server Audit Series

This blog series was designed to help you audit database changes.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization with SQL Server Audit.
  1. SQL Server Auditing – Getting Started
  2. SQL Server Audit 101 – Creating Basic Audit
  3. SQL Server Audit 102 – Reading Audit Output 
  4. SQL Server Audit 201 – Creating Audit Solution
  5. SQL Server Audit 301 – Using PowerShell to Manage Audits
  6. SQL Server Audit 302 – Deploying Audit Solution with PowerShell

About the Author

SQL Server Consultant

Colleen Morrow

UpSearch Alum Colleen Morrow is a database strategist, community advocate, author, blogger and public speaker. She is passionate about helping technology leaders use Microsoft's SQL Server to protect, optimize and unlock data's value.

Colleen has been working with relational databases for almost 20 years. Since 2000, Colleen has specialized in SQL Server and Oracle database management solutions. She excels at performance tuning, troubleshooting mission critical SQL Server environments, and training technology professionals and business users of all levels.

Since 2011, Colleen has maintained a SQL Server focused blog at http://colleenmorrow.com. She is an active member of the Ohio North SQL Server User Group, as well as a volunteer for the Professional Association for SQL Server (PASS). Colleen earned a Bachelor of Science in Computer and Information Systems from Cleveland State University.

Learn more about Colleen Morrow at https://upsearch.com/colleen-morrow/.

About UpSearch


UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

SQL Server Audit 101 – Creating Basic Audit

Auditing doesn’t have to be scary. SQL Server Audit 101 – Creating Basic Audit is part of a blog series designed to help you audit changes to your database by using SQL Server Audit.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization.  

SQL Server Audit 101 – Creating Basic Audit

Originally published on ColleenMorrow.com.

SQL Audit was introduced in SQL 2008, and for the first time auditing was treated as a “first-class” object in SQL Server, meaning it could be managed by DDL statements. It was built on the extended events framework and what made it really neat was that the event was recorded when the permission-check for that event occurred. What this meant to us as auditors was that the event would be recorded even if it didn’t really happen because the user didn’t have permissions. Why is this good? Well, suppose you’re auditing the execution of a stored procedure that modifies some sensitive data, like salary information. Wouldn’t it be nice to know not only who is executing that stored procedure, but who is trying to execute it?

One good thing about SQL Audit is that it executes asynchronously, which means it’s not going to hold up user processes. Unfortunately, that means it also can’t access certain information, like the network login or client associated with a session. So, going back to the salary procedure example, if a user is using a generic login to execute that procedure, you might have a hard time tracing it back to a real person.

Create the Server Audit

The first step in creating a SQL Audit is to create the audit object.

[Screenshot: Create Audit dialog in SSMS]

If you’re familiar with creating an audit in SQL Server 2008, you’ll notice a few changes in SQL 2012. The first is the “On Audit Log Failure” selection. In SQL 2008, this was only a checkbox to shut down the server on audit log failure. In SQL 2012, we now have options to continue (the equivalent of not checking the old checkbox), shut down (checking the old checkbox), or fail operation, which will fail any operation that should have been recorded but couldn’t. This is nice if you want to prevent audited activity from going unrecorded, but don’t want to impact everything.

We have the same options for output: a file, the Windows Application log, or the Windows Security log. Keep in mind that, if you want to write to the Security log, some configuration is required.

The next change we see is the option for maximum files or maximum rollover files. Maximum rollover files means that, when that number of files is reached, the oldest file will be overwritten. If you choose Maximum files, however, once that max is reached, subsequent writes fail.

But the change that got me most excited (at first) was the new Filter tab. My biggest beef with SQL Audit in SQL 2008 was the inability to filter out any unwanted activity or objects from the audit output. It made for a lot of clutter. But in SQL 2012, we now have the ability to enter a predicate to filter the audit on, e.g. “(database_name = ‘AdventureWorks2012’)”. This string is limited to 3000 characters.
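Because audits are first-class objects, everything the dialog does can also be scripted. Here’s a rough T-SQL sketch of an audit like the one above; the file path, size limits, and queue delay are illustrative values, not the dialog defaults:

-- Sketch of a server audit created with DDL instead of the SSMS dialog.
CREATE SERVER AUDIT DDLAudit
TO FILE ( FILEPATH = 'D:\SQL2012\Audits\', MAXSIZE = 100 MB, MAX_ROLLOVER_FILES = 10 )
WITH ( QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE )
WHERE database_name = 'AdventureWorks2012';   -- the new SQL 2012 filter predicate
GO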

[Screenshot: Create Audit dialog, Filter tab]

Create the Audit Specification

The audit object tells SQL Server where to write the audit records, and how to manage them, but doesn’t actually specify what events to audit. For that, we need to create an audit specification.

There are 2 types of audit specification: a server audit specification or a database audit specification. Generally speaking, a server audit specification is used to audit events that occur at the server level; things like database creations, logins, creating a linked server. A database audit specification will audit events that occur at the database level; things like executing a stored procedure, creating a user, dropping a table. There are, however, some audit groups and events that span both levels. You can, for example, audit the SCHEMA_OBJECT_CHANGE_GROUP at the server or the database level. If you do it at the database level, it will only audit DDL changes in that database. Auditing it at the server level, however, will track DDL changes in all databases. You can create server-level audit specifications in all editions of SQL Server; however, database audit specifications are only supported in Enterprise, Developer, and Evaluation editions.

Let’s say I want to audit DDL changes in the AdventureWorks2012 database. I can create a database audit specification or I can create a server audit specification and use the new filtering functionality to limit my audit output to only AdventureWorks2012 changes. Let’s do that. What’s the advantage? In this case, not much. But let’s say you have 100 databases on this server, and you want to audit all but 5. You could create database audit specs in 95 databases, or you could create one server audit spec and filter out the 5 databases you don’t want. Up to you.
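As a sketch, the DDL for that server-level specification might look like this (the specification name is mine; SCHEMA_OBJECT_CHANGE_GROUP is the action group that captures CREATE, ALTER, and DROP of schema objects):

-- Server audit specification attached to the DDLAudit audit sketched above.
CREATE SERVER AUDIT SPECIFICATION DDLAudit_ServerSpec
FOR SERVER AUDIT DDLAudit
    ADD (SCHEMA_OBJECT_CHANGE_GROUP);
GO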

[Screenshot: Create Server Audit Specification dialog]

Activating the Audit

Once I’ve created the audit and the audit specification, I’m almost ready to go. Before SQL Server will audit anything, I need to enable both the audit and the audit specification. I can do this by right-clicking on each and selecting “Enable” or I can do it using an ALTER statement.
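The ALTER route is a one-liner for each object; a sketch using the names from above:

-- Enable the audit and its specification (both start out disabled).
ALTER SERVER AUDIT DDLAudit WITH (STATE = ON);
GO
ALTER SERVER AUDIT SPECIFICATION DDLAudit_ServerSpec WITH (STATE = ON);
GO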

Test the Audit and the Filter

I have my AdventureWorks2012 database. That’s what I’m auditing. But I also have a NoAuditDB which I’m, obviously, not auditing. If I create a table in each database and check the Audit logs (which I do by right-clicking on the Audit and selecting “View Audit Logs”) I see only one entry, the one for AdventureWorks2012.
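The test itself is just a throwaway table in each database, something along these lines (the table name is mine):

-- Create a table in the audited database...
USE AdventureWorks2012;
GO
CREATE TABLE dbo.AuditFilterTest (col1 int);
GO
-- ...and one in the database that should be filtered out.
USE NoAuditDB;
GO
CREATE TABLE dbo.AuditFilterTest (col1 int);
GO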

[Screenshot: audit log showing only the AdventureWorks2012 entry]

About that Filter

That filtering feature seems pretty handy, but what if you have a number of databases/objects/logins/etc. that you want to include or exclude from your audit? Listing each one can become cumbersome to say the least. What if you had a table somewhere that contained all the objects to exclude from your audit, could you use a subquery in the Filter predicate? Unfortunately, no, SQL Audit doesn’t handle this. Bummer.

That’s ok, though. As we’ll see soon, there’s more than one way to skin a cat. In fact, there’s even more than one cat. SQL Server Audit 102 – Reading Audit Output is next.


SQL Server Auditing – Getting Started

Auditing doesn’t have to be scary. SQL Server Auditing – Getting Started is part of a blog series designed to help you audit changes to your database by using SQL Server Audit.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization.  

SQL Server Auditing – Getting Started

Originally published on ColleenMorrow.com.

In my last organization, one of my jobs was auditing our database environment. I had been tasked with this responsibility for several years, and it wasn’t always easy. In fact, I used to despise the entire process. Why? Because I wasn’t using the right tool for the job. I didn’t know what options were available to me. Granted, I started out in SQL Server 2000, where there weren’t a whole lot of choices to begin with.

Over the years, I’ve taken a particular interest in auditing options available in SQL Server, mainly with the goal of making that part of my job easier. True, I probably could have gone to my boss at any point and said “hey, we should get a third-party auditing tool.” But the fact is, I get a kick out of seeing just what I can do with each tool. How I can spy on (er, watch over) my users and developers. Making the most of the tools I already have at my disposal. And these days, our audits are a piece of cake.

Why Audit?

There are a number of reasons why you might need to implement auditing in SQL Server. Maybe your company is bringing in an outside firm to perform security audits. Or you might even be required by law to perform such auditing. From a development perspective, auditing DDL changes can supplement a change management system. Or it can help you answer the question, “what changed?” that will inevitably be directed at you when the poop hits the fan. Auditing can tell you who’s accessing that sensitive data, or help you figure out what a particular login is being used for.

What Can You Audit?

So what exactly can you audit in SQL Server? Just about anything. For example:

  • DDL changes: create, alter, drop, truncate
  • Logins: all logins, failed logins, logins by sysadmins
  • When Agent jobs are created, removed, or changed
  • Who is accessing sensitive data or procedures
  • Who is trying to access sensitive data or procedures
  • Changes to a user’s or login’s privileges
  • Server or database configuration changes
  • The use of deprecated features

The thing is, almost everything that happens in SQL Server generates an event. A user logging in generates an event, as does that user issuing a query. Any locks that occur while that query executes generate events. Any waits, any disk space allocations, any object creations: they all generate events. And if it generates an event, chances are good that you can audit it in some way.
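If you’re curious just how long that list is for the SQL Server Audit feature specifically, it exposes its catalog of auditable actions and action groups in a DMV; a quick way to browse it:

-- Browse every action and action group that SQL Server Audit can capture.
SELECT name, class_desc, parent_class_desc, configuration_level, containing_group_name
FROM sys.dm_audit_actions
ORDER BY parent_class_desc, name;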

What Are Your Requirements?

Ok, so you’ve decided to audit your SQL Server database. Or maybe you’ve been told to. Either way, the first thing you need to do is figure out your requirements. Taking time to plan out exactly what you need will save you tons of time and frustration later. Trust me.

What Do You Want to Audit? The most important thing you need to decide is exactly what events you want to audit. Is this a DDL audit? Is it a security audit? Some auditing tools are better suited to handle tracking specific events, so depending on what you need to record, this could rule out a solution completely.

What Data Do You Want to Collect? If you’re auditing logins, you’ll obviously want to know the login name, and most likely the date and time of the login. But do you also need to know the application? What about the network login of the user or the client’s hostname? If you’re auditing DDL changes, it would definitely be useful to capture the SQL statement issued.

Where Do You Want to Run This Audit? Are you only planning on auditing a single database on a single instance, or is this going to be a system-wide thing? If you’re planning on auditing many environments, you’ll want something that’s easy to implement and maintain; maybe something you can manage centrally. The SQL version and edition of the audit target also matter.

Where Do You Want the Output to Go? Should the output be written to a file or would you prefer a database table? If you’re auditing several databases/instances, should they all write to separate outputs or a single repository? Should DDL audit output commingle with security audit output? Who should have access to the output? And how long do you want to retain it?

How Will the Audit Output Be Processed? Will you be reporting on the output? Will you need to search the output for specific events? Will you need to compare or search output from various audits?

What’s Your Budget? SQL Server has several free built-in tools that you can use to audit your database, but there are also a number of third-party tools available. Of course, these products come at a price, and that price generally goes up in proportion to the number of systems you want to audit.

Additional Considerations

Once you’ve gotten your requirements firmed up, you can start using them to select an audit tool. In SQL Server, you have several options at your disposal, with each offering its own set of pros and cons. When picking out a solution, obviously you want to ensure it meets those requirements, but there are a few more things you’ll want to consider.

Will it Impact Normal Processing? Ideally you want a solution that will have minimal or no impact on the day-to-day performance of your SQL environment.

How Tamper-proof Is It? This is especially important when it comes to security-related auditing. You want to know that someone can’t mess with your audit to avoid having certain events recorded.

How Easy Is It to Implement/Maintain? The easier your audit process is to implement and use, the less painful auditing will be. Generally speaking, if you dread the whole audit process, you’re probably not using the right tool.

How Granular Is It? Can You Filter Out Certain Events or Objects? This is something that’s especially important to me. I do DDL auditing on a database where certain objects are routinely dropped and recreated by the application. I don’t care about those objects and I don’t want them in the audit report. I also don’t want things like index maintenance showing up. So the ability to exclude objects or events is something I look for.

Coming Up

Now that you know what you need, it’s time to start test driving some solutions. In the days (ok maybe weeks) to come, I’ll be discussing several of your options for auditing events in SQL Server. I’ll talk about how they work, what their pros and cons are, and hopefully introduce you to some new ideas for implementing and using them. Auditing is necessary, but it doesn’t have to be boring. Good stuff ahead, people. Check out SQL Server Audit 101 – Creating Basic Audit next.


Transactional SQL Server Replication Toolbox Scripts


Originally published on KendalVanDyke.com.

During the last few years I’ve worked extensively with transactional replication and have written a handful of scripts that have found a permanent home in my “useful scripts” toolbox. I’ve provided these scripts as downloads whenever I’ve presented about replication…but not everyone who has worked with replication has been to one of my presentations (or had access to the downloads afterwards) so I’m posting them in this Transactional SQL Server Replication Toolbox Scripts series.

The first script in my toolbox shows all of the articles and columns in each article for all transactional publications in a published database. It’s pretty straightforward – just execute the script in the published database on the publisher. Note that because it uses the FOR XML PATH directive it must be run on SQL 2005 or higher.

/********************************************************************************************* 
Transactional SQL Server Replication Toolbox Scripts: Show Articles and Columns for All Publications 

Description: 
   Shows articles and columns for each article for all transactional publications 

   (C) 2013, Kendal Van Dyke (mailto:kendal.vandyke@gmail.com) 

Version History: 
   v1.00 (2013-01-29) 

License: 
   This query is free to download and use for personal, educational, and internal 
   corporate purposes, provided that this header is preserved. Redistribution or sale 
   of this query, in whole or in part, is prohibited without the author's express 
   written consent. 

Note: 
   Execute this query in the published database on the PUBLISHER 

   Because this query uses FOR XML PATH('') it requires SQL 2005 or higher 
   
*********************************************************************************************/ 

SELECT 
   syspublications.name AS "Publication", 
   sysarticles.name AS "Article", 
   STUFF( 
       ( 
           SELECT ', ' + syscolumns.name AS [text()] 
           FROM sysarticlecolumns WITH (NOLOCK) 
               INNER JOIN syscolumns WITH (NOLOCK) ON sysarticlecolumns.colid = syscolumns.colorder 
           WHERE sysarticlecolumns.artid = sysarticles.artid 
               AND sysarticles.objid = syscolumns.id 
           ORDER BY syscolumns.colorder 
           FOR XML PATH('') 
       ), 1, 2, '' 
   ) AS "Columns"
FROM syspublications WITH (NOLOCK) 
   INNER JOIN sysarticles WITH (NOLOCK) ON syspublications.pubid = sysarticles.pubid 
ORDER BY syspublications.name, sysarticles.name

 

About the Author

Microsoft SQL Server MVP & Principal Consultant

Kendal Van Dyke

UpSearch Alum Kendal Van Dyke is a database strategist, community advocate, public speaker and blogger. He is passionate about helping leaders use Microsoft's SQL Server to solve complex problems that protect, unlock and optimize data's value.

Since 1999, Kendal has specialized in SQL Server database management solutions and provided IT strategy consulting. Kendal excels at disaster recovery, high availability planning/implementation and debugging/troubleshooting mission critical SQL Server environments.

Kendal Van Dyke served the SQL Server community as Founder and President of MagicPass, the Orlando, FL based chapter of the Professional Association for SQL Server (PASS). In 2012, Kendal served as a member of the PASS Board of Directors.

Kendal remains active in the SQL Server community as a speaker and blogger. He teaches SQL Server enthusiast and technology leaders how to protect, unlock and optimize data’s value. Since 2008, Kendal has operated a SQL Server focused blog at http://www.kendalvandyke.com/.

Microsoft acknowledged Kendal for his support and outstanding contributions to the SQL Server community by awarding him Microsoft MVP (2011-15). Learn more about Kendal Van Dyke at https://upsearch.com/kendal-van-dyke/.

About UpSearch


UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

SQL Server Replication Gotcha – Multiple Publications

Originally published on KendalVanDyke.com.

Here is another SQL Server replication gotcha: the same article in multiple publications. When administering replication topologies it’s common to group articles into publications based on the roles that subscribers fulfill. Often you’ll have multiple subscriber roles and therefore multiple publications, and in some cases a subset of articles is common between them. There’s nothing to prevent you from adding the same article to more than one publication, but I wanted to point out how this can potentially lead to major performance problems with replication.

Let’s start with a sample table:

CREATE TABLE [dbo].[ReplDemo]
    (
      [ReplDemoID] [int] IDENTITY(1, 1) NOT FOR REPLICATION
                         NOT NULL ,
      [SomeValue] [varchar](20) NOT NULL ,
      CONSTRAINT [PK_ReplDemo] PRIMARY KEY CLUSTERED ( [ReplDemoID] ASC )
        ON [PRIMARY]
    )
ON  [PRIMARY]
GO

Now let’s pretend that we need this table replicated to two subscribers which have different roles. We’ll create one publication for each role and add the table to both publications:

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'ReplDemo Publication A',
    @description = N'Publication to demonstrate behavior when same article is in multiple publications',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'ReplDemo Publication A',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO
-- Adding the transactional articles
EXEC sp_addarticle @publication = N'ReplDemo Publication A',
    @article = N'ReplDemo', @source_owner = N'dbo',
    @source_object = N'ReplDemo', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'ReplDemo', @destination_owner = N'dbo', @status = 8,
    @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboReplDemo]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboReplDemo]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboReplDemo]'
GO 

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'ReplDemo Publication B',
    @description = N'Publication to demonstrate behavior when same article is in multiple publications',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'ReplDemo Publication B',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO
-- Adding the transactional articles
EXEC sp_addarticle @publication = N'ReplDemo Publication B',
    @article = N'ReplDemo', @source_owner = N'dbo',
    @source_object = N'ReplDemo', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'ReplDemo', @destination_owner = N'dbo', @status = 8,
    @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboReplDemo]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboReplDemo]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboReplDemo]'
GO

After creating the publications we create our subscriptions, take & apply the snapshot, and we’re ready to start making changes so we execute this simple insert statement:

INSERT  INTO dbo.ReplDemo
        ( SomeValue )
VALUES  ( 'Test' )

Here’s the million dollar question: How many times does this insert statement get added to the distribution database? To find out we’ll run the following statement on the distributor (after the log reader agent has done its work, of course):

SELECT  MSrepl_commands.xact_seqno ,
        MSrepl_commands.article_id ,
        MSrepl_commands.command_id ,
        MSsubscriptions.subscriber_id
FROM    distribution.dbo.MSrepl_commands AS [MSrepl_commands]
        INNER JOIN distribution.dbo.MSsubscriptions AS [MSsubscriptions] ON MSrepl_commands.publisher_database_id = MSsubscriptions.publisher_database_id
                                                              AND MSrepl_commands.article_id = MSsubscriptions.article_id
        INNER JOIN distribution.dbo.MSarticles AS [MSarticles] ON MSsubscriptions.publisher_id = MSarticles.publisher_id
                                                              AND MSsubscriptions.publication_id = MSarticles.publication_id
                                                              AND MSsubscriptions.article_id = MSarticles.article_id
WHERE   MSarticles.article = 'ReplDemo'
ORDER BY MSrepl_commands.xact_seqno ,
        MSrepl_commands.article_id ,
        MSrepl_commands.command_id

Here’s the output of the statement:

[Screenshot: query results showing one row per publication the article belongs to]

That’s one row for each publication the table article is included in. Now imagine that an update statement affects 100,000 rows in the table. In this example that would turn into 200,000 rows that will be inserted into the distribution database and need to be cleaned up at a later date. It’s not hard to see how this could lead to performance problems for tables that see a high volume of insert\update\delete activity.

Workarounds
Two workarounds for this behavior come to mind:

  1. Modify data using stored procedures, then replicate both their schema and execution. This won’t help for insert statements and is useless if you’re only updating\deleting a single row each time the procedure executes. This also assumes that all dependencies necessary for the stored procedure(s) to execute exist at the subscriber.
  2. Limit table articles to one publication per article. If you’re creating publications from scratch then place table articles that would otherwise be included in multiple publications into their own distinct publication. If you’re working with existing publications that already include the table article then subscribe only to the article(s) that you need rather than adding the article to another publication, as shown in the sketch after this list. (Subscribing to individual articles within a publication can get tricky – I’ll demonstrate how to do this in a future post.)
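As a rough sketch of that second approach, sp_addsubscription accepts an @article parameter, so you can subscribe to a single article rather than the whole publication. Server, database, and subscriber names below are hypothetical:

-- Subscribe one subscriber to just the ReplDemo article of an existing publication.
-- Run at the publisher, in the published database. Names are illustrative only.
EXEC sp_addsubscription @publication = N'ReplDemo Publication A',
    @article = N'ReplDemo',                -- a single article instead of the default 'all'
    @subscriber = N'SUBSCRIBER2', @destination_db = N'ReplDemoRoleB',
    @subscription_type = N'Push', @sync_type = N'automatic'
GO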

 


SQL Server Replication Gotcha – Blank XML


Originally published on KendalVanDyke.com.

Here is another SQL Server replication gotcha: blank XML. Transactional replication in SQL Server 2005\2008 can handle the XML datatype just fine with few exceptions – one in particular being when the XML value is blank. I’ll save the argument about whether or not a blank (or empty string, if you prefer) value is well-formed XML for another day, because the point is that SQL Server allows it. Consider the following table:

CREATE TABLE [dbo].[XMLReplTest]
    (
      [XMLReplTestID] [int] IDENTITY(1, 1) NOT FOR REPLICATION
                            NOT NULL ,
      [SomeXML] [xml] NOT NULL ,
      CONSTRAINT [PK_XMLReplTest] PRIMARY KEY CLUSTERED
        ( [XMLReplTestID] ASC ) ON [PRIMARY]
    )
ON  [PRIMARY]
GO

Execute the following statement and you’ll see that SQL Server handles it just fine:

INSERT  INTO dbo.XMLReplTest
        ( SomeXML )
VALUES  ( '' )

Now let’s add this table to a transactional replication publication:

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'XML Replication Test',
    @description = N'Sample publication to demonstrate blank XML gotcha',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'XML Replication Test',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO 

-- Adding the transactional articles
EXEC sp_addarticle @publication = N'XML Replication Test',
    @article = N'XMLReplTest', @source_owner = N'dbo',
    @source_object = N'XMLReplTest', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'XMLReplTest', @destination_owner = N'dbo',
    @status = 8, @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboXMLReplTest]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboXMLReplTest]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboXMLReplTest]'
GO

Assume we’ve created the publication, added a subscriber, taken & applied the snapshot, and we’re ready to start changing data. Let’s throw a monkey wrench into the works by executing the insert statement with the blank XML again and watch what happens to the log reader agent:

[Screenshot: Log Reader Agent error]

That’s not a very nice error (or resolution)! I’ve been able to reproduce this behavior in SQL 2005 & 2008 but I have not tried it in 2008 R2. I’ve entered a Connect bug report, so hopefully this is fixed in a forthcoming cumulative update. In the meantime there is a simple workaround – add a check constraint. Since we’re working with the XML datatype, the only option for checking length with a scalar function is DATALENGTH. The DATALENGTH of a blank XML value is 5, so we want to check that any inserted or updated value is greater than 5:

ALTER TABLE dbo.XMLReplTest ADD CONSTRAINT
   CK_XMLReplTest_SomeXML CHECK (DATALENGTH(SomeXML) > 5)
GO

If you are affected by this behavior please consider taking a moment to go vote for it on Connect.

 


SQL Server Replication – Troubleshooting Transactional Replication

Kendal Van Dyke’s June 2010 article in SQL Server Pro magazine, SQL Server Replication – Troubleshooting Transactional Replication, walks through solving three common replication problems.

Continue Reading on SQLMag.com >>

 

 


SQL Server Replication Scripts: Show All Transactional Publications & Subscribers At Distributor

Originally published on KendalVanDyke.com.

Anybody who has talked with me about replication or heard me present about it knows that I recommend using a dedicated remote distributor for anything beyond light replication workloads. Unfortunately, neither SSMS nor Replication Monitor provides an easy “one view to rule them all” way at the distributor (or anywhere else) to show every transactional publication, every subscriber, and the articles they’re subscribed to. The only way to gather that information using SSMS is to script out each publication and visually parse the replication scripts. I manage hundreds of publications & subscriptions and that’s not a reasonable option for me, so I’ve written the following script to show me everything at once:

-- Show Transactional Publications and Subscriptions to articles at Distributor
-- Run this on the DISTRIBUTOR
-- Add a WHERE clause to limit results to one publisher\subscriber\publication\etc
SELECT  publishers.srvname AS [Publisher] ,
        publications.publisher_db AS [Publisher DB] ,
        publications.publication AS [Publication] ,
        subscribers.srvname AS [Subscriber] ,
        subscriptions.subscriber_db AS [Subscriber DB] ,
        articles.article AS [Article]
FROM    sys.sysservers AS publishers
        INNER JOIN distribution.dbo.MSarticles AS articles ON publishers.srvid = articles.publisher_id
        INNER JOIN distribution.dbo.MSpublications AS publications ON articles.publisher_id = publications.publisher_id
                                                              AND articles.publication_id = publications.publication_id
        INNER JOIN distribution.dbo.MSsubscriptions AS subscriptions ON articles.publisher_id = subscriptions.publisher_id
                                                              AND articles.publication_id = subscriptions.publication_id
                                                              AND articles.article_id = subscriptions.article_id
        INNER JOIN sys.sysservers AS subscribers ON subscriptions.subscriber_id = subscribers.srvid 

-- Limit results to subscriber
--WHERE   subscribers.srvname = '[Subscriber Server Name]' 

-- Limit results to publisher and publication
--WHERE   publishers.srvname = '[Publisher Server Name]'
--        AND publications.publication = '[Publication Name]' 

ORDER BY publishers.srvname ,
        subscribers.srvname ,
        publications.publication ,
        articles.article

This script also works for distributors running SQL 2000; just substitute master.dbo.sysservers in place of sys.sysservers.

 


SQL Server Replication Snapshot Errors

Originally published on KendalVanDyke.com.

I ran across an interesting SQL Server replication snapshot error recently that’s worth sharing. It happened while using a distributor running SQL 2008, a publisher running SQL 2005, and the published database set to 2000 (80) compatibility. When adding a new subscription (the version and compatibility of the subscriber are irrelevant), the snapshot agent failed with the following error (extra details omitted for readability):

Error messages: 
Source: Microsoft.SqlServer.Smo 
Message: Script failed for Table 'dbo.Template_HeaderFooter'. 
Message: Column HeaderFooter_Value in object Template_HeaderFooter contains type NVarCharMax, which is not supported in the target server version, SQL Server 2000. 

The distributor was recently upgraded from SQL 2005 where this wasn’t a problem. A quick search of Microsoft’s KB turned up nothing on the error. After some tinkering I was able to figure out a workaround: change the compatibility level of the published DB to 2005 (90). While this works, it’s less than ideal if your DB is already live because you may break code by changing the compatibility level.

Unfortunately, I haven’t found any other workarounds to the problem, so if this is happening to you, your best bet is to pick a time when no one is using the DB, change the compatibility level, take your snapshot, then change the compatibility level back. Of course, an even better strategy is to work with your development teams to get the DB moved up to 2005 compatibility permanently.
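On a SQL 2005 publisher, the compatibility level switch itself is just a system procedure call, so the whole workaround might look something like this (the database name is hypothetical):

-- Temporarily raise the published database to 90 (2005) compatibility...
EXEC sp_dbcmptlevel @dbname = N'PublishedDB', @new_cmptlevel = 90;
GO
-- ...run the Snapshot Agent and apply the snapshot, then set it back.
EXEC sp_dbcmptlevel @dbname = N'PublishedDB', @new_cmptlevel = 80;
GO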

 


SQL Server Replication – Reasons To Change CommitBatchSize And CommitBatchThreshold

Originally published on KendalVanDyke.com.

In my last post I showed how CommitBatchSize and CommitBatchThreshold affect SQL Server replication. Now the question is: why would you want to change them? The simple answer is that usually you don’t need to – the defaults work just fine most of the time. But there are a few reasons to change CommitBatchSize and CommitBatchThreshold you may want to consider:

Why you would lower the values

  • Your subscriber experiences a consistently high volume of activity and you want to minimize locking. Think SQL Servers sitting behind public-facing web servers. Remember, replication delivers commands to subscribers in transactions, which take row locks that can lead to blocking. Reducing the number of commands in each transaction will shorten the duration of the locks, but be careful – there’s a fixed overhead to committing transactions, so by lowering the values the tradeoff is that your subscribers will have to process more of them.
  • Your network between distributor and subscriber is slow and\or unreliable. Lowering the values will result in smaller transactions at the subscriber, and if a network failure occurs there will be a smaller number of commands to roll back and re-apply.

Why you would raise the values

  • You want to increase replication throughput. One example is when you’re pushing changes to a publishing subscriber over a WAN connection and you don’t care about blocking at the subscriber. Raising the values means more commands are included in each transaction at the subscriber and fewer transactions means less overhead. Microsoft suggests that “increasing the values twofold to tenfold improved performance by 5 percent for INSERT commands, 10-15 percent for UPDATE commands, and 30 percent for DELETE commands” (take this with a grain of salt though – it was written back in the SQL 2000 days). The thing to watch out for is that at some point system resources at the subscriber (e.g. disk I/O) minimize the benefits of increasing the values. Also consider that more commands per transaction means that any failure at the subscriber will take longer to rollback and re-apply.

How much you raise or lower the values depends on a number of factors including: hardware horsepower, bandwidth, and volume of changes being replicated. There’s no one good answer that applies to all scenarios. The best thing to do is change them a small amount at a time and observe the impact – positive or negative. Eventually you’ll find the sweet spot for your environment.
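If you want to see what you’re currently working with before tuning, the Distribution Agent profiles at the distributor hold both values (the defaults are CommitBatchSize 100 and CommitBatchThreshold 1000). A minimal read-only sketch, assuming you run it at the Distributor; the actual change is then typically made through a custom agent profile in Replication Monitor or by adding -CommitBatchSize and -CommitBatchThreshold to the agent’s command line:

-- List Distribution Agent profiles (@agent_type = 3), then inspect the parameters
-- (including CommitBatchSize / CommitBatchThreshold) of the profile you care about.
EXEC sp_help_agent_profile @agent_type = 3;
EXEC sp_help_agent_parameter @profile_id = 1;   -- assumption: the id returned above for your profile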

 
