Posts

SQL Server Replication Gotcha – Multiple Publications

Originally published on KendalVanDyke.com.

Here is another SQL Server replication gotcha: multiple publications. When administering replication topologies it’s common to group articles into publications based on the roles that subscribers fulfill. Often you’ll have multiple subscriber roles and therefore multiple publications, and in some cases a subset of articles is common between them. There’s nothing to prevent you from adding the same article to more than one publication, but I want to point out how this can lead to major performance problems with replication.

Let’s start with a sample table:

CREATE TABLE [dbo].[ReplDemo]
    (
      [ReplDemoID] [int] IDENTITY(1, 1) NOT FOR REPLICATION
                         NOT NULL ,
      [SomeValue] [varchar](20) NOT NULL ,
      CONSTRAINT [PK_ReplDemo] PRIMARY KEY CLUSTERED ( [ReplDemoID] ASC )
        ON [PRIMARY]
    )
ON  [PRIMARY]
GO

Now let’s pretend that we need this table replicated to two subscribers which have different roles. We’ll create one publication for each role and add the table to both publications:

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'ReplDemo Publication A',
    @description = N'Publication to demonstrate behavior when same article is in multiple publications',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'ReplDemo Publication A',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO
-- Adding the transactional articles
EXEC sp_addarticle @publication = N'ReplDemo Publication A',
    @article = N'ReplDemo', @source_owner = N'dbo',
    @source_object = N'ReplDemo', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'ReplDemo', @destination_owner = N'dbo', @status = 8,
    @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboReplDemo]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboReplDemo]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboReplDemo]'
GO 

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'ReplDemo Publication B',
    @description = N'Publication to demonstrate behavior when same article is in multiple publications',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'ReplDemo Publication B',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO
-- Adding the transactional articles
EXEC sp_addarticle @publication = N'ReplDemo Publication B',
    @article = N'ReplDemo', @source_owner = N'dbo',
    @source_object = N'ReplDemo', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'ReplDemo', @destination_owner = N'dbo', @status = 8,
    @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboReplDemo]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboReplDemo]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboReplDemo]'
GO

After creating the publications we create our subscriptions, take & apply the snapshot, and we’re ready to start making changes, so we execute this simple insert statement:

INSERT  INTO dbo.ReplDemo
        ( SomeValue )
VALUES  ( 'Test' )

Here’s the million dollar question: How many times does this insert statement get added to the distribution database? To find out we’ll run the following statement on the distributor (after the log reader agent has done its work, of course):

SELECT  MSrepl_commands.xact_seqno ,
        MSrepl_commands.article_id ,
        MSrepl_commands.command_id ,
        MSsubscriptions.subscriber_id
FROM    distribution.dbo.MSrepl_commands AS [MSrepl_commands]
        INNER JOIN distribution.dbo.MSsubscriptions AS [MSsubscriptions] ON MSrepl_commands.publisher_database_id = MSsubscriptions.publisher_database_id
                                                              AND MSrepl_commands.article_id = MSsubscriptions.article_id
        INNER JOIN distribution.dbo.MSarticles AS [MSarticles] ON MSsubscriptions.publisher_id = MSarticles.publisher_id
                                                              AND MSsubscriptions.publication_id = MSarticles.publication_id
                                                              AND MSsubscriptions.article_id = MSarticles.article_id
WHERE   MSarticles.article = 'ReplDemo'
ORDER BY MSrepl_commands.xact_seqno ,
        MSrepl_commands.article_id ,
        MSrepl_commands.command_id

Here’s the output of the statement:

Query Results

That’s one row for each publication the table article is included in. Now imagine that an update statement affects 100,000 rows in the table. In this example that would turn into 200,000 rows that will be inserted into the distribution database and need to be cleaned up at a later date. It’s not hard to see how this could lead to performance problems for tables that see a high volume of insert\update\delete activity.
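
If you want to gauge how much duplication is happening in your own topology, here’s a minimal sketch (run at the distributor, assuming the default distribution database name) that counts pending commands per article instance – remember that each publication an article belongs to gets its own article_id:

SELECT  MSrepl_commands.article_id ,
        COUNT(*) AS [command_count]
FROM    distribution.dbo.MSrepl_commands AS [MSrepl_commands]
GROUP BY MSrepl_commands.article_id
ORDER BY [command_count] DESC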

Workarounds
Two workarounds for this behavior come to mind:

  1. Modify data using stored procedures, then replicate both their schema and execution (a sketch follows this list). This won’t help for insert statements and is useless if you’re only updating\deleting a single row each time the procedure executes. This also assumes that all dependencies necessary for the stored procedure(s) to execute exist at the subscriber.
  2. Limit table articles to one publication per article. If you’re creating publications from scratch then place table articles that would otherwise be included in multiple publications into their own distinct publication. If you’re working with existing publications that already include the table article then subscribe only to the article(s) that you need rather than adding the article to another publication. (Subscribing to individual articles within a publication can get tricky – I’ll demonstrate how to do this in a future post)
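
For workaround #1, here’s a minimal sketch of publishing a stored procedure’s execution with sp_addarticle – the procedure name is hypothetical, and the publication is the one created above:

-- Replicate the EXEC call itself rather than the row changes it produces
EXEC sp_addarticle @publication = N'ReplDemo Publication A',
    @article = N'usp_ReplDemo_Update', @source_owner = N'dbo',
    @source_object = N'usp_ReplDemo_Update',
    @type = N'serializable proc exec'
GO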

 

About the Author

Microsoft SQL Server MVP & Principal Consultant

Kendal Van Dyke

UpSearch Alum Kendal Van Dyke is a database strategist, community advocate, public speaker and blogger. He is passionate about helping leaders use Microsoft's SQL Server to solve complex problems that protect, unlock and optimize data's value.

Since 1999, Kendal has specialized in SQL Server database management solutions and provided IT strategy consulting. Kendal excels at disaster recovery, high availability planning/implementation and debugging/troubleshooting mission critical SQL Server environments.

Kendal Van Dyke served the SQL Server community as Founder and President of MagicPass, the Orlando, FL based chapter of the Professional Association for SQL Server (PASS). In 2012, Kendal served as a member of the PASS Board of Directors.

Kendal remains active in the SQL Server community as a speaker and blogger. He teaches SQL Server enthusiasts and technology leaders how to protect, unlock and optimize data’s value. Since 2008, Kendal has operated a SQL Server focused blog at http://www.kendalvandyke.com/.

Microsoft acknowledged Kendal for his support and outstanding contributions to the SQL Server community by awarding him Microsoft MVP (2011-15). Learn more about Kendal Van Dyke at https://upsearch.com/kendal-van-dyke/.

About UpSearch


UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

SQL Server Replication Gotcha – Blank XML


Originally published on KendalVanDyke.com.

Here is another SQL Server replication gotcha: blank XML. Transactional replication in SQL Server 2005\2008 can handle the XML datatype just fine with few exceptions – one in particular being when the XML value is blank. I’ll save the argument about whether or not a blank (or empty string, if you prefer) value is well-formed XML for another day, because the point is that SQL Server allows it. Consider the following table:

CREATE TABLE [dbo].[XMLReplTest]
    (
      [XMLReplTestID] [int] IDENTITY(1, 1) NOT FOR REPLICATION
                            NOT NULL ,
      [SomeXML] [xml] NOT NULL ,
      CONSTRAINT [PK_XMLReplTest] PRIMARY KEY CLUSTERED
        ( [XMLReplTestID] ASC ) ON [PRIMARY]
    )
ON  [PRIMARY]
GO

Execute the following statement and you’ll see that SQL Server handles it just fine:

INSERT  INTO dbo.XMLReplTest
        ( SomeXML )
VALUES  ( '' )

Now let’s add this table to a transactional replication publication:

-- Adding the transactional publication
EXEC sp_addpublication @publication = N'XML Replication Test',
    @description = N'Sample publication to demonstrate blank XML gotcha',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO
EXEC sp_addpublication_snapshot @publication = N'XML Replication Test',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO 

-- Adding the transactional articles
EXEC sp_addarticle @publication = N'XML Replication Test',
    @article = N'XMLReplTest', @source_owner = N'dbo',
    @source_object = N'XMLReplTest', @type = N'logbased', @description = N'',
    @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x00000000080350DF,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'XMLReplTest', @destination_owner = N'dbo',
    @status = 8, @vertical_partition = N'false',
    @ins_cmd = N'CALL [dbo].[sp_MSins_dboXMLReplTest]',
    @del_cmd = N'CALL [dbo].[sp_MSdel_dboXMLReplTest]',
    @upd_cmd = N'SCALL [dbo].[sp_MSupd_dboXMLReplTest]'
GO

Assume we’ve created the publication, added a subscriber, taken & applied the snapshot, and we’re ready to start changing data. Let’s throw a monkey wrench into the works by executing the insert statement with the blank XML again and watch what happens to the log reader agent:

Log Reader Agent Error

That’s not a very nice error (or resolution)! I’ve been able to reproduce this behavior in SQL 2005 & 2008 but I have not tried it in 2008 R2. I’ve entered a Connect bug report so hopefully this is fixed in a forthcoming cumulative update. In the meantime there is a simple workaround – add a check constraint. Since we’re working with the XML datatype, the only option for checking length with a scalar function is DATALENGTH. The DATALENGTH of a blank XML value is 5, so we want to check that any inserted or updated value has a DATALENGTH greater than 5:

ALTER TABLE dbo.XMLReplTest ADD CONSTRAINT
   CK_XMLReplTest_SomeXML CHECK (DATALENGTH(SomeXML) > 5)
GO
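
A quick sanity check of the constraint – a minimal sketch, assuming any non-empty, well-formed fragment stores with a DATALENGTH above 5:

-- The blank value is now rejected with a check constraint violation
INSERT INTO dbo.XMLReplTest ( SomeXML ) VALUES ( '' )
GO
-- A non-empty value is still accepted
INSERT INTO dbo.XMLReplTest ( SomeXML ) VALUES ( '<root/>' )
GO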

If you are affected by this behavior please consider taking a moment to go vote for it on Connect.

 


SQL Server Replication Snapshot Errors

Originally published on KendalVanDyke.com.

I ran across an interesting SQL Server replication snapshot error recently that is worth sharing. It happened while using a distributor running SQL 2008, a publisher running SQL 2005, and the published database set to 2000 (80) compatibility. When adding a new subscription (version and compatibility of the subscriber are irrelevant) the snapshot agent failed with the following error (extra details omitted for readability):

Error messages:
Source: Microsoft.SqlServer.Smo
Message: Script failed for Table 'dbo.Template_HeaderFooter'.
Message: Column HeaderFooter_Value in object Template_HeaderFooter contains type NVarCharMax, which is not supported in the target server version, SQL Server 2000.

The distributor was recently upgraded from SQL 2005 where this wasn’t a problem. A quick search of Microsoft’s KB turned up nothing on the error. After some tinkering I was able to figure out a workaround: change the compatibility level of the published DB to 2005 (90). While this works, it’s less than ideal if your DB is already live because you may break code by changing the compatibility level.

Unfortunately I haven’t found any other workarounds to the problem, so if this is happening to you your best bet is to pick a time when no one is using the DB, change the compatibility level, take your snapshot, then change the compatibility level back. Of course an even better strategy is to work with your development teams to get the DB moved up to 2005 compatibility permanently.
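
Here’s a minimal sketch of that sequence on a SQL 2005 publisher – the database name is hypothetical:

-- Raise the published DB to 2005 (90) compatibility
EXEC sp_dbcmptlevel @dbname = N'PublishedDB', @new_cmptlevel = 90
GO
-- ... run the snapshot agent and apply the snapshot here ...
-- Then drop back to 2000 (80) compatibility if you must
EXEC sp_dbcmptlevel @dbname = N'PublishedDB', @new_cmptlevel = 80
GO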

 


Launch SQL Server Replication Monitor Without SSMS

Originally published on KendalVanDyke.com.

I manage a lot of replication publications\subscriptions and when I get a latency or agent failure alert it always bugs me that I have to go through 5 steps to launch Replication Monitor: launch SSMS, go to my server in object explorer, expand the treeview, right click on the Replication node, and choose the menu option to launch Replication Monitor. To make things easier I did what I think Microsoft should have done in the first place and created a shortcut to the Replication Monitor executable in my start menu and on my desktop.

To create a shortcut for yourself, open up Windows Explorer and navigate to C:\Program Files\Microsoft SQL Server\100\Tools\Binn if you’re running SSMS 2008 or C:\Program Files\Microsoft SQL Server\90\Tools\Binn if you’re running SSMS 2005. Right click on sqlmonitor.exe and choose Send To > Desktop (Create Shortcut). Now you’ve got a shortcut on your desktop that will launch Replication Monitor without SSMS with a simple double click! (You’ll probably also want to rename the shortcut to something like “Replication Monitor”)

Unfortunately the default icon doesn’t look great in anything bigger than 16×16 so I replaced it with a really nice (and free!) icon from iconspedia.com that looks much better:

Happy monitoring!

Visit: Activity Monitor Icon on iconspedia.com

 


SQL Server Replication – Reasons To Change CommitBatchSize And CommitBatchThreshold

Originally published on KendalVanDyke.com.

In my last post I showed how CommitBatchSize and CommitBatchThreshold affect SQL Server replication. Now the question is: why would you want to change them? The simple answer is that usually you don’t need to – the defaults work just fine most of the time. But there are a few reasons you may consider changing them:

Why you would lower the values

  • Your subscriber experiences a consistently high volume of activity and you want to minimize locking. Think SQL Servers sitting behind public-facing web servers. Remember, replication delivers commands to subscribers in transactions, which cause row locks that can lead to blocking. Reducing the number of commands in each transaction will shorten the duration of the locks, but be careful – there’s a fixed overhead to committing transactions, so by lowering the values the tradeoff is that your subscribers will have to process more of them.
  • Your network between distributor and subscriber is slow and\or unreliable. Lowering the values will result in smaller transactions at the subscriber, and if a network failure occurs there will be fewer commands to roll back and re-apply.

Why you would raise the values

  • You want to increase replication throughput. One example is when you’re pushing changes to a publishing subscriber over a WAN connection and you don’t care about blocking at the subscriber. Raising the values means more commands are included in each transaction at the subscriber, and fewer transactions means less overhead. Microsoft suggests that “increasing the values twofold to tenfold improved performance by 5 percent for INSERT commands, 10-15 percent for UPDATE commands, and 30 percent for DELETE commands” (take this with a grain of salt though – it was written back in the SQL 2000 days). The thing to watch out for is that at some point system resources at the subscriber (e.g. disk I/O) diminish the benefits of increasing the values. Also consider that more commands per transaction means that any failure at the subscriber will take longer to roll back and re-apply.

How much you raise or lower the values depends on a number of factors including: hardware horsepower, bandwidth, and volume of changes being replicated. There’s no one good answer that applies to all scenarios. The best thing to do is change them a small amount at a time and observe the impact – positive or negative. Eventually you’ll find the sweet spot for your environment.
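
Both values live in the distribution agent profile. Here’s a minimal sketch of adjusting them in a custom profile – the profile_id and parameter values are illustrative, so look yours up first:

USE msdb
GO
-- Find your custom distribution agent profile (agent_type 3 = distribution)
SELECT profile_id, profile_name
FROM dbo.MSagent_profiles
WHERE agent_type = 3
GO
-- Change the two parameters in that profile (values illustrative)
EXEC sp_change_agent_parameter @profile_id = 10,
    @parameter_name = N'CommitBatchSize', @parameter_value = N'200'
EXEC sp_change_agent_parameter @profile_id = 10,
    @parameter_name = N'CommitBatchThreshold', @parameter_value = N'2000'
GO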

 


Enterprise Audit with SQL Server Audit

 

About the Presentation


Audit. The very word strikes fear in the bravest of hearts. But as a DBA, the need to know who is doing what in your production databases is critical. In SQL Server 2008, Microsoft finally gave us a true auditing tool. But how does it work, what exactly can it track, and how can you handle its output?

In Enterprise Audit with SQL Server Audit, we cover SQL Server Audit from the ground up. We go from the basics of which events can be audited to a look at how SQL Server Audit works “under the covers”, and what that means for performance. While implementing server and database audits, we discuss audit granularity and filtering, as well as the pros and cons of Audit’s output options.

Whether it’s through the SQL Server Management Studio (SSMS) interface, via T-SQL, or using PowerShell and Server Management Objects (SMO), at the end of this session you’ll be able to deploy Audit across your SQL Server enterprise and manage its output, fearlessly.

Session Level: Intermediate

This session was presented at PASS Summit 2015.

Download the Presentation

Enterprise Audit with SQL Server Audit


Want to Learn More About SQL Server Audit?

If you'd like to learn more about how UpSearch can support your SQL Server audit initiative, visit SQL Server Audit or contact us today.

About the Author

SQL Server Consultant

Colleen Morrow

UpSearch Alum Colleen Morrow is a database strategist, community advocate, author, blogger and public speaker. She is passionate about helping technology leaders use Microsoft's SQL Server to protect, optimize and unlock data's value.

Colleen has been working with relational databases for almost 20 years. Since 2000, Colleen has specialized in SQL Server and Oracle database management solutions. She excels at performance tuning, troubleshooting mission critical SQL Server environments, and training technology professionals and business users of all levels.

Since 2011, Colleen has maintained a SQL Server focused blog at http://colleenmorrow.com. She is an active member of the Ohio North SQL Server User Group, as well as a volunteer for the Professional Association for SQL Server (PASS). Colleen earned a Bachelor of Science in Computer and Information Systems from Cleveland State University.

Learn more about Colleen Morrow at https://upsearch.com/colleen-morrow/.


How CommitBatchSize And CommitBatchThreshold Affect SQL Server Replication

Originally published on KendalVanDyke.com.

Note: This is a long post. I hope it’s worth your while to read.

A common suggestion for optimizing transactional replication performance is to adjust the values in your distribution agent profile for CommitBatchSize and CommitBatchThreshold. Unfortunately, what these two values really do isn’t documented very well anywhere. According to Books Online:

  • CommitBatchSize “Is the number of transactions to be issued to the Subscriber before a COMMIT statement is issued. The default is 100.”
  • CommitBatchThreshold “Is the number of replication commands to be issued to the Subscriber before a COMMIT statement is issued. The default is 1000.”

When you read this you might wonder what the difference is between a transaction and a command. After all, isn’t a command just an autocommit transaction? Does this mean that the commands in a transaction could get broken up into smaller transactions at the subscriber? Won’t that violate ACID? Microsoft went out of their way to make these two distinct values so each one has to influence delivery of commands in some way, right?

I went the cheap route to find out and posted to the forums at SQLServerCentral and to microsoft.public.sqlserver.replication. While I waited for an answer I set up some simple tests to try and figure it out for myself.

Before reading on, I want to test your replication mettle and give you a little quiz. If you want to cut to the chase and see if you’re right, jump to the conclusions at the end of this post to view the answers.

  1. Based on the defaults above, what happens if I issue 1,500 insert\update\delete statements which each affect a single row all contained within an explicit transaction (e.g. BEGIN TRANSACTION, 1,500 updates, then COMMIT)?
  2. What if those 1,500 statements are not part of a transaction?
  3. What happens if I issue three update statements: the first affecting 600 rows, the second affecting 500 rows, and the third affecting 400 rows?

 

Setup

I started by creating a simple table and populating it with 1,000 rows (representative of a large and random sample set):

SET NOCOUNT ON 
GO 
CREATE TABLE TransReplTest ( 
    [ID] INT IDENTITY(1,1), 
    [SomeDate] DATETIME, 
    [SomeValue] VARCHAR(20), 
    CONSTRAINT [PK_TransReplTest] PRIMARY KEY CLUSTERED ( 
        [ID] 
    ) 
) 
GO 
INSERT INTO TransReplTest (SomeDate, SomeValue) 
SELECT GETDATE(), CONVERT(VARCHAR(20), RAND() * 1000) 
GO 1000

I created a publication, added a table article to it, and set up a subscription. I created a new distribution agent profile based on the default agent profile and changed only the values for CommitBatchSize and CommitBatchThreshold. Each time I changed their values I stopped and started the distribution agent to ensure I was using the new value. Finally, I set up a profiler session capturing the following events on the subscriber: “RPC:Completed”, “SQL:BatchCompleted”, and “SQL:BatchStarting”. I captured some extra noise so for the sake of readability I saved my profiler sessions and filtered them for each test afterwards.

Test 1

To make things easier to understand I set the values low, like this:

CommitBatchSize: 3
CommitBatchThreshold: 5

First I ran this statement which randomly updates 10 rows, one at a time, all contained within a transaction:

BEGIN TRANSACTION

DECLARE @rownum TINYINT 
SET @rownum = 0

WHILE @rownum < 10 
BEGIN

    UPDATE TransReplTest 
    SET SomeDate = GETDATE() 
    WHERE ID IN ( 
        SELECT TOP 1 ID 
        FROM TransReplTest 
        ORDER BY NEWID() 
    )

    SET @rownum = @rownum + 1 
END

COMMIT

Profiler shows 1 transaction with 10 updates. CommitBatchThreshold didn’t appear to affect anything here.


When I issue the same statement without the explicit transaction Profiler shows 4 transactions of 3 updates, 3 updates, 3 updates, and 1 update. CommitBatchSize causes every 3 statements (each of which is an autocommit transaction) to be grouped together in a transaction on the subscriber.


Next I issued a single update statement affecting 10 rows:

UPDATE TransReplTest 
SET SomeDate = GETDATE() 
WHERE ID IN ( 
    SELECT TOP 10 ID 
    FROM TransReplTest 
    ORDER BY NEWID() 
)

Profiler shows 1 transaction with 10 updates at the subscriber. Again CommitBatchThreshold did not affect anything here.

 

Finally, I issued an update statement affecting 3 rows 3 times:

UPDATE TransReplTest 
SET SomeDate = GETDATE() 
WHERE ID IN ( 
    SELECT TOP 3 ID 
    FROM TransReplTest 
    ORDER BY NEWID() 
) 
GO 3

This time Profiler shows 2 transactions; the first contains 6 updates and the second contains 3 updates. Even though CommitBatchSize was set to 3, the number of rows affected by the first two statements exceeded CommitBatchThreshold (set to 5) and so the third statement was put into its own transaction.


Test 2

Let’s see what happens if we switch the values around and run through the same statements as Test 1:

CommitBatchSize: 5
CommitBatchThreshold: 3

Randomly update 10 rows in a single transaction. We still see 1 transaction with 10 updates.


Next, the same statement without the transaction. This time we see 4 transactions with updates in batches of 4, 1, 4, and 1 – CommitBatchThreshold set to 3 was the reason we saw the smaller batches. But wait – shouldn’t a CommitBatchThreshold of 3 mean we should see 3 transactions with 3 updates in each transaction and 1 transaction with 1 update?


Next, the single update statement affecting 10 rows. Profiler shows 1 transaction with 10 updates. CommitBatchThreshold made no difference here.


Finally, updating 3 rows 3 times. Profiler shows 1 transaction with 6 updates and 1 transaction with 3 updates. Just like in Test 1, even though CommitBatchThreshold was set to 3, the number of rows affected by the first two statements exceeded CommitBatchSize (set to 5) and so the third statement was put into its own transaction.


Conclusions

Based on the profiler traces, here’s what we can conclude:

  • Replication delivers changes to subscribers one row at a time; a single update statement which changes 1,000 rows at the publisher will result in 1,000 statements at the subscriber.
  • Transactions at the publisher are kept intact at the subscriber. If a million rows are affected inside a transaction – either explicit or autocommit – on the publisher you can count on a million statements in one explicit transaction to be delivered to the subscriber. We saw this in both the statement which updated 10 rows one at a time in an explicit transaction and the statement which updated 10 rows in one shot.
  • The lower of the two values for CommitBatchSize and CommitBatchThreshold is the primary driver for how statements are delivered to subscribers. When the number of transactions on the publisher reaches that lower value the distribution agent will apply them in a single explicit transaction at the subscriber. We saw this in the statement which updated 10 rows one at a time (not in a transaction).
  • The higher of the two values can cause commands to be delivered before the lower value is met. If the number of rows affected at the publisher reaches the higher value the distribution agent will immediately apply the changes in a single explicit transaction at the subscriber. We saw this in the statements which updated 3 rows 3 times.
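
If you want to see those row-by-row commands for yourself, sp_browsereplcmds will decode them – run it in the distribution database at the distributor (with no parameters it returns every pending command, so filter on a busy system):

USE distribution
GO
EXEC sp_browsereplcmds
GO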

OK, now that we know how CommitBatchSize and CommitBatchThreshold affect SQL Server replication, let’s look at the answers to the quiz:

  1. Based on the defaults above, what happens if I issue 1,500 insert\update\delete statements which each affect a single row all contained within an explicit transaction (e.g. BEGIN TRANSACTION, 1,500 updates, then COMMIT)?
    Answer: Transactions at the publisher are honored at the subscriber so we’ll see one explicit transaction applied at the subscriber which contains 1,500 updates.
  2. What if those 1,500 statements are not part of a transaction?
    Answer: Because CommitBatchSize is set to 100 we’ll see 15 transactions at the subscriber, each containing 100 statements which affect one row per statement.
  3. What happens if I issue three update statements: the first affecting 600 rows, the second affecting 500 rows, and the third affecting 400 rows?
    Answer: Because CommitBatchThreshold is set to 1,000 we’ll see 2 transactions at the subscriber. The first transaction will contain 1,100 statements and the second transaction will contain 400 statements.

BTW, remember that one weird case where CommitBatchSize was set to 5, CommitBatchThreshold was set to 3, and when we updated 10 rows one at a time we saw 4 transactions with updates in batches of 4, 1, 4, and 1? I think that’s a bug in SQL Server. It looks like the distribution agent alternates between (CommitBatchThreshold + 1) and (CommitBatchThreshold – 2) number of commands placed into each transaction delivered to the subscriber. Since this only appears to happen when CommitBatchSize is higher than CommitBatchThreshold – and most people don’t change the values to work that way – this seems relatively insignificant, but it’s still not the behavior that I expected to see.

 

P.S. – I did get a response to my usenet posting from MVPs Hilary Cotter and Paul Ibison. You can read it here. Gopal Ashok, a program manager in the SQL Replication team, confirmed that what I thought was a bug was indeed a bug. Yay me!

 


Enterprise-level SQL Server Audit on a Standard Edition Budget

 

About the Presentation

In SQL Server 2008, Microsoft finally introduced a first-class auditing tool for monitoring what’s going on in your databases. Unfortunately, that’s a SQL Server Enterprise Edition feature. What about enterprise-level audit on a Standard Edition budget? In this demo-heavy session, we’ll learn about auditing methods for the masses, including triggers, event notifications, SQL Trace, extended events, and yes, even SQL Server Audit.

Session Level: Intermediate

Download the Presentation

Enterprise-level SQL Server Audit on a Standard Edition Budget


Want to Learn More About SQL Server Audit?

If you'd like to learn more about how UpSearch can support your SQL Server audit initiative, visit SQL Server Audit or contact us today.


SQL Server Service Broker Basics Part Two

Service Broker is not a black box. SQL Server Service Broker Basics Part Two continues making sense of Service Broker and the components that support your messaging needs.

Continued from SQL Server Service Broker Basics Part One

Originally published on SQLBlog.com.

In the previous post, SQL Server Service Broker Basics Part One, I introduced the core components of Service Broker. In this post, I’d like to cover some of the “plumbing” – the components that allow communication between different servers running Service Broker.

Endpoints. There needs to be a channel for the communications coming in and out of the server, and in the IP world that channel exists in the form of a port. You define the port to be used by defining an Endpoint in the master database.

CREATE ENDPOINT IntEndpoint
STATE = STARTED
AS TCP ( LISTENER_PORT = 4022 )
FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS );
GO

Routes. To get from one place to another Service Broker routes need to be defined. You’ll need a route to the remote server defined in the database where your Service Broker application is running, and also one to the local server, and the latter needs to be defined in the msdb database. Defining a remote destination in your application database places the route information in sys.routes, but Service Broker always looks in msdb.sys.routes for any incoming messages to determine where they go.

USE AdventureWorks
GO

CREATE ROUTE DMZRoute 
AUTHORIZATION dbo 
WITH 
     SERVICE_NAME = N'//DMZSite/Sync/IntService',
     ADDRESS = N'TCP://SQLTBWS:4023'
GO

USE msdb;
GO

CREATE ROUTE IntRoute 
AUTHORIZATION dbo 
WITH 
     SERVICE_NAME = N'//IntSite/Sync/IntService',
     ADDRESS = N'LOCAL'
GO

One thing I hadn’t addressed in my last post was message security. Service Broker allows you to encrypt all messages, preventing network sniffers from discovering the data being sent. To enable this I created certificates at each site, and created a database user without a login to send and receive messages. Here’s the code I used to create the local user:

USE AdventureWorks
GO

CREATE MASTER KEY
       ENCRYPTION BY PASSWORD = N'<enter REALLY secure password string here>';
GO

CREATE USER IntUser WITHOUT LOGIN;
GO
CREATE CERTIFICATE IntCert 
     AUTHORIZATION IntUser
     WITH SUBJECT = 'Int Certificate',
          EXPIRY_DATE = N'12/31/2012';

BACKUP CERTIFICATE IntCert
  TO FILE = N'E:\Certs\IntCert.cer';
GO

I did the same thing at the destination site (called DMZSite), and to allow the DMZUser to send messages to my site I’ll create a local user from the certificate created at that site.

CREATE USER DMZUser WITHOUT LOGIN;

CREATE CERTIFICATE DMZCert
   AUTHORIZATION DMZUser
   FROM FILE = N'E:\Certs\DMZCert.cer';
GO

Remote Service Binding. Once the users are established and secure, the last component required is the Remote Service Binding. This binds a remote Service Broker service to our local one, defining the security credentials to be used in the conversations.

CREATE REMOTE SERVICE BINDING [DMZBinding] 
  AUTHORIZATION dbo 
  TO SERVICE N'//DMZSite/Sync/IntService'
  WITH USER = [DMZUser]
GO

Finally, we’ll grant the SEND permission to the DMZUser to allow the remote service to send messages to our site.

GRANT SEND
      ON SERVICE::[//IntSite/Sync/IntService]
      TO DMZUser;
GO

As I mentioned before, this set of objects makes up the “plumbing” that allows separate instances or servers to communicate with each other. In my next post we’ll talk about the automated activation process and walk through the steps of message handling.

Reprinted with author’s permission from SQLBlog.com.

>> Back to SQL Server Service Broker Basics Part One <<

 


About the Author

Microsoft SQL Server MVP and Practice Leader

Allen White

Allen White is an UpSearch Alum and Microsoft SQL Server MVP.  For over 30 years, Allen has specialized in developing applications that manage the movement of data and maximize data's usefulness. Allen excels at communicating highly technical information using language that results in increased client engagement and understanding, regardless of technical competency.

Allen has been working with relational database systems for over 20 years. He has architected database solutions in application areas like retail point-of-sale (POS), POS audit, loss prevention, logistics, school district information management, purchasing and asset inventory and runtime analytics. Allen thrives on providing comprehensive solutions to information management problems across a great variety of application environments.


SQL Server Service Broker Basics Part One

Service Broker is not a black box. SQL Server Service Broker Basics Part One makes sense of Service Broker and the components that support your messaging needs.

Originally published on SQLBlog.com.

I’m currently implementing a SQL Server Service Broker solution at a client site, and it’s been an interesting challenge, because there’s not a lot of information out there to help guide you through the process. Here I’d like to walk you through the basics.

Message Types. Service Broker sends messages asynchronously from one database to another. You can set it up to send messages between databases on a single server, between SQL Server instances on a Windows server, or between different physical servers, whether or not they’re in the same domain. Essentially Service Broker works at the database level; the rest is handled through routing, which I’ll address in another post.

The important thing to remember is that Service Broker sends and receives messages, and then your applications (or stored procedures) handle those messages in some way. It handles them asynchronously, so the sending side doesn’t have to wait for the receiving side to acknowledge the message, and it handles them sequentially, so the messages will always arrive in the order in which they’ve been sent.

Many of the examples you’ll see use message types like “REQUESTMESSAGE” and “REPLYMESSAGE”. To me this is a disservice, because it doesn’t help you see the different ways you can use Service Broker to solve your business problems. At my client site the message types indicate the content of the message, so the receiving side can use the type to determine the action to take when the message is received. Service Broker has a built-in acknowledgement process, so you don’t need to specifically acknowledge a message, unless the application needs it. As long as the communication channels are open, the message will be delivered.

CREATE MESSAGE TYPE [//AWSync/Sync/HumanResourcesEmployee]
AUTHORIZATION dbo
VALIDATION = WELL_FORMED_XML
GO

Contracts. Once you’ve defined the types of messages that can be sent, you need to define how they’ll be delivered. Contracts define which message types are allowed to be sent, and in which direction. This means that Service Broker is secure in that it won’t process any message types not defined in a contract, so rogue processes that attempt a type of SQL injection attack against Service Broker will fail.

CREATE CONTRACT [//AWSync/Sync/IntContract]
	AUTHORIZATION dbo
	( [//AWSync/Sync/HumanResourcesEmployee] SENT BY ANY,
	  [//AWSync/Sync/PersonContact] SENT BY ANY,
      [//AWSync/Sync/PurchasingVendor] SENT BY ANY )
GO

Queues. Once the contract is defined, you can define the queue on which the messages are sent and received. The queue also defines (if you want) an automated process that will handle the messages it receives. In your Transact-SQL code you retrieve messages from the queue in the same way you read data from a table – in fact, the queue behaves just like a table in your database.

CREATE QUEUE IntQueue
   WITH
   STATUS = ON,
   RETENTION = OFF
GO
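
The automated handling mentioned above is configured with the queue’s ACTIVATION clause. Here’s a minimal sketch – the handler procedure is hypothetical, and it would RECEIVE from the queue and act on each message:

CREATE QUEUE IntQueueActivated
   WITH
   STATUS = ON,
   RETENTION = OFF,
   ACTIVATION (
      STATUS = ON,
      PROCEDURE_NAME = dbo.usp_ProcessIntQueue,  -- hypothetical handler procedure
      MAX_QUEUE_READERS = 1,
      EXECUTE AS OWNER )
GO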

Services. The service is the glue which assigns the contract to the queue. It performs the work of actually sending the messages on the queue to their destination and receiving the messages coming from other senders.

CREATE SERVICE [//IntSite/Sync/IntService]
AUTHORIZATION IntUser
ON QUEUE IntQueue
([//AWSync/Sync/IntContract])
GO

Conversations. In its simplest form, the last thing we need is to send the message. We do that via a conversation, which is referred to in Service Broker as a DIALOG CONVERSATION or simply a DIALOG. You specify the source and destination service name, and a conversation handle (a GUID) is returned, then you SEND ON CONVERSATION using that conversation handle. The message body is usually in an XML form, and for security purposes should be encrypted.

DECLARE @InitDlgHandle UNIQUEIDENTIFIER;  -- conversation handle returned by BEGIN DIALOG
DECLARE @ChangeMsg XML;                   -- message body to send (built elsewhere)

BEGIN DIALOG @InitDlgHandle
   FROM SERVICE [//IntSite/Sync/IntService]
   TO SERVICE N'//ExtSite/Sync/IntService'
   ON CONTRACT [//AWSync/Sync/IntContract]
   WITH
	   ENCRYPTION = ON;

SEND ON CONVERSATION @InitDlgHandle
   MESSAGE TYPE [//AWSync/Sync/HumanResourcesEmployee]
   (@ChangeMsg);

Finally, you need to be able to receive the messages. Like I mentioned earlier, reading from a queue is like reading from a table, but there are some additional features in Transact-SQL to facilitate message handling. Specifically, there’s a special form of the WAITFOR command which will wait for either the arrival of a message, or timeout after a specified number of milliseconds.

DECLARE @ch UNIQUEIDENTIFIER;             -- conversation handle of the received message
DECLARE @service_name NVARCHAR(512);
DECLARE @service_contract_name NVARCHAR(256);
DECLARE @messagetypename NVARCHAR(256);
DECLARE @messagebody XML;

WAITFOR (
	RECEIVE TOP(1)
		@ch = conversation_handle,
		@service_name = service_name,
		@service_contract_name = service_contract_name,
		@messagetypename = message_type_name,
		@messagebody = CAST(message_body AS XML)
	FROM ExtQueue
), TIMEOUT 60000
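
One step not shown above: when both sides are finished with a conversation each should end it, which lets Service Broker clean up the conversation endpoints. A minimal sketch using the handle received above:

END CONVERSATION @ch
GO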

With these components you can set up messaging within a single instance of SQL Server. In my next post I’ll discuss the additional plumbing required to communicate between separate instances.

Reprinted with author’s permission from SQLBlog.com.

>> Continue Reading SQL Server Service Broker Basics Part Two <<

 
