SQL Server Audit 101 – Creating Basic Audit

Auditing doesn’t have to be scary. SQL Server Audit 101 – Creating Basic Audit is part of a blog series designed to help you audit changes to your database by using SQL Server Audit.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization.  

SQL Server Audit 101 – Creating Basic Audit

Originally published on ColleenMorrow.com.

SQL Audit was introduced in SQL Server 2008, and for the first time auditing was treated as a “first-class” object in SQL Server, meaning it could be managed with DDL statements. It was built on the Extended Events framework, and what made it really neat was that the event was recorded when the permission check for that event occurred. What this meant for us as auditors was that the event would be recorded even if the action never actually happened because the user lacked permissions. Why is this good? Well, suppose you’re auditing the execution of a stored procedure that modifies some sensitive data, like salary information. Wouldn’t it be nice to know not only who is executing that stored procedure, but who is trying to execute it?

One good thing about SQL Audit is that it executes asynchronously, which means it’s not going to hold up user processes. Unfortunately, that means it also can’t access certain information, like the network login or client associated with a session. So, going back to the salary procedure example, if a user is using a generic login to execute that procedure, you might have a hard time tracing it back to a real person.

Create the Server Audit

The first step in creating a SQL Audit is to create the audit object.

[Screenshot: Create Server Audit dialog]

If you’re familiar with creating an audit in SQL Server 2008, you’ll notice a few changes in SQL 2012. The first is the “On Audit Log Failure” selection. In SQL 2008, this was just a checkbox to shut down the server on audit log failure. In SQL 2012, we now have options to continue (the equivalent of not checking the old checkbox), shutdown (the equivalent of checking it), or fail operation, which will fail any operation that should have been recorded but couldn’t be. This is nice if you want to prevent audited activity from going unrecorded, but don’t want to impact everything.

We have the same options for output: a file, the Windows Application log, or the Windows Security log. Keep in mind that, if you want to write to the Security log, some configuration is required.

The next change we see is the option for maximum files or maximum rollover files. Maximum rollover files means that, when that number of files is reached, the oldest file will be overwritten. If you choose Maximum files, however, once that max is reached, subsequent writes fail.

But the change that got me most excited (at first) was the new Filter tab. My biggest beef with SQL Audit in SQL 2008 was the inability to filter unwanted activity or objects out of the audit output. It made for a lot of clutter. But in SQL 2012, we now have the ability to enter a predicate to filter the audit on, e.g. “(database_name = ‘AdventureWorks2012’)”. This string is limited to 3000 characters.
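Putting that dialog into T-SQL, a minimal sketch looks like the following. The audit name, file path, and size settings here are my own illustrative choices, not values from the screenshots:

USE master;
GO
CREATE SERVER AUDIT [Audit_DDL]
TO FILE (
    FILEPATH = 'C:\SQLAudit\',              -- hypothetical output folder
    MAXSIZE = 100 MB,
    MAX_ROLLOVER_FILES = 10                 -- oldest file is overwritten once the cap is hit
)
WITH (
    QUEUE_DELAY = 1000,                     -- asynchronous, as discussed above
    ON_FAILURE = FAIL_OPERATION             -- or CONTINUE / SHUTDOWN
)
WHERE database_name = 'AdventureWorks2012'; -- the new SQL 2012 filter predicate
GO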

[Screenshot: Create Server Audit dialog, Filter tab]

Create the Audit Specification

The audit object tells SQL Server where to write the audit records, and how to manage them, but doesn’t actually specify what events to audit. For that, we need to create an audit specification.

There are 2 types of audit specification: a server audit specification or a database audit specification. Generally speaking, a server audit specification is used to audit events that occur at the server level; things like database creations, logins, creating a linked server. A database audit specification will audit events that occur at the database level; things like executing a stored procedure, creating a user, dropping a table. There are, however, some audit groups and events that span both levels. You can, for example, audit the SCHEMA_OBJECT_CHANGE_GROUP at the server or the database level. If you do it at the database level, it will only audit DDL changes in that database. Auditing it at the server level, however, will track DDL changes in all databases. You can create Server level audit specifications in all editions of SQL Server, however database audit specifications are only supported in Enterprise, Developer, and Evaluation editions.
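For example, a database-level specification that captures DDL changes in a single database might look like the following sketch. It reuses the [Audit_DDL] audit created above; the specification name is my own:

USE AdventureWorks2012;
GO
CREATE DATABASE AUDIT SPECIFICATION [DbAuditSpec_DDL_AW2012]
FOR SERVER AUDIT [Audit_DDL]
ADD (SCHEMA_OBJECT_CHANGE_GROUP)  -- DDL changes in this database only
WITH (STATE = OFF);               -- we’ll enable everything in a moment
GO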

Let’s say I want to audit DDL changes in the AdventureWorks2012 database. I can create a database audit specification or I can create a server audit specification and use the new filtering functionality to limit my audit output to only AdventureWorks2012 changes. Let’s do that. What’s the advantage? In this case, not much. But let’s say you have 100 databases on this server, and you want to audit all but 5. You could create database audit specs in 95 databases, or you could create one server audit spec and filter out the 5 databases you don’t want. Up to you.
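Here’s a sketch of that server-level route; the audit’s WHERE clause from earlier is what limits the output to AdventureWorks2012:

USE master;
GO
CREATE SERVER AUDIT SPECIFICATION [SrvAuditSpec_DDL]
FOR SERVER AUDIT [Audit_DDL]
ADD (SCHEMA_OBJECT_CHANGE_GROUP)  -- DDL changes in all databases...
WITH (STATE = OFF);               -- ...filtered by the audit’s WHERE predicate
GO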

[Screenshot: Create Server Audit Specification dialog]

Activating the Audit

Once I’ve created the audit and the audit specification, I’m almost ready to go. Before SQL Server will audit anything, I need to enable both the audit and the audit specification. I can do this by right-clicking on each and selecting “Enable” or I can do it using an ALTER statement.
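Using ALTER statements, and assuming the names from the sketches above, that’s:

USE master;
GO
ALTER SERVER AUDIT [Audit_DDL] WITH (STATE = ON);
ALTER SERVER AUDIT SPECIFICATION [SrvAuditSpec_DDL] WITH (STATE = ON);
GO
-- If you went the database-level route instead:
USE AdventureWorks2012;
GO
ALTER DATABASE AUDIT SPECIFICATION [DbAuditSpec_DDL_AW2012] WITH (STATE = ON);
GO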

Test the Audit and the Filter

I have my AdventureWorks2012 database. That’s what I’m auditing. But I also have a NoAuditDB which I’m, obviously, not auditing. If I create a table in each database and check the Audit logs (which I do by right-clicking on the Audit and selecting “View Audit Logs”) I see only one entry, the one for AdventureWorks2012.
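To reproduce that test, create a throwaway table in each database, then read the audit output. Besides the log viewer, sys.fn_get_audit_file reads the files directly; the path below matches the hypothetical one used earlier:

CREATE TABLE AdventureWorks2012.dbo.AuditTest (id int);  -- should be recorded
CREATE TABLE NoAuditDB.dbo.AuditTest (id int);           -- should be filtered out
GO
SELECT event_time, server_principal_name, database_name, statement
FROM sys.fn_get_audit_file('C:\SQLAudit\*.sqlaudit', DEFAULT, DEFAULT);
GO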

[Screenshot: audit log showing only the AdventureWorks2012 entry]

About that Filter

That filtering feature seems pretty handy, but what if you have a number of databases/objects/logins/etc. that you want to include or exclude from your audit? Listing each one can become cumbersome to say the least. What if you had a table somewhere that contained all the objects to exclude from your audit, could you use a subquery in the Filter predicate? Unfortunately, no, SQL Audit doesn’t handle this. Bummer.

That’s ok, though. As we’ll see soon, there’s more than one way to skin a cat. In fact, there’s even more than one cat. SQL Server Audit 102 – Reading Audit Output is next.

SQL Server Audit Series

This blog series was designed to help you audit database changes.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization with SQL Server Audit.
  1. SQL Server Auditing – Getting Started
  2. SQL Server Audit 101 – Creating Basic Audit
  3. SQL Server Audit 102 – Reading Audit Output 
  4. SQL Server Audit 201 – Creating Audit Solution
  5. SQL Server Audit 301 – Using PowerShell to Manage Audits
  6. SQL Server Audit 302 – Deploying Audit Solution with PowerShell

About the Author

SQL Server Consultant

Colleen Morrow

UpSearch Alum Colleen Morrow is a database strategist, community advocate, author, blogger and public speaker. She is passionate about helping technology leaders use Microsoft's SQL Server to protect, optimize and unlock data's value.

Colleen has been working with relational databases for almost 20 years. Since 2000, Colleen has specialized in SQL Server and Oracle database management solutions. She excels at performance tuning, troubleshooting mission critical SQL Server environments, and training technology professionals and business users of all levels.

Since 2011, Colleen has maintained a SQL Server focused blog at http://colleenmorrow.com. She is an active member of the Ohio North SQL Server User Group, as well as a volunteer for the Professional Association for SQL Server (PASS). Colleen earned a Bachelor of Science in Computer and Information Systems from Cleveland State University.

Learn more about Colleen Morrow at https://upsearch.com/colleen-morrow/.

About UpSearch


UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

Microsoft Mothership Calls Kendal Van Dyke


In the journey of life, you meet people you just want to be around. Kendal Van Dyke is one of those people for me. That is why I was tinged with sadness when I learned the mothership called Kendal.

Kendal spent 3 ½ years as an integral member of UpSearch’s SQL Server Practice. He was so much more than a DBA. Kendal was, without a doubt, a SQL Server Professional whom clients, team members and the entire SQL Server community responded to, trusted and wanted to work with.

When we started working together, Kendal was a newer SQL Server MVP and a member of the PASS Board of Directors. He was, and remained, uber-devoted to the SQL Server community.

After all these years, I noticed a couple of patterns about Kendal. Most people, in their own way, said he inspired confidence. Kendal’s natural curiosity, insatiable desire to tinker and positive outlook were contagious. He was always an excellent team member admired by everyone. I will miss Kendal’s presence and charming personality.

As evidence of Kendal’s integrity, he made certain to leave UpSearch better than he found us. He even worked to ensure his successor was in place, which we will announce in the coming days.

Effective January 11, 2016, Kendal will be part of the Microsoft Premier Developer Support team.

We believe Kendal’s transition is also a strategic development in the UpSearch – Microsoft partnership. As a Microsoft Gold Partner, UpSearch calls upon Microsoft’s field support team to ensure our clients get the answers they need, when they need them. Kendal will now be part of the Microsoft support team UpSearch relies upon.

Microsoft is getting much more than a highly-respected SQL Server Professional. They are getting a great person. Although we are saddened by Kendal’s departure, we take pride that the mothership noticed Kendal.

Please take a moment and help us celebrate the next chapter in Kendal’s journey.

Shawn Upchurch
Founder & CEO

Farewell Our Friend

Colleen Morrow, SQL Server Consultant

“As I sat down to write this, it hit me that we only started working together a little over a year ago. It really does seem like much longer. You’ve been a trusted colleague and a friend. It has been a pleasure getting to know you, and a privilege collaborating with you over the past year. I wish you all the best.” ~Colleen Morrow

 

“I’m certainly glad to have had the experience of working with Kendal. He is a complete professional; personable, knowledgeable, and experienced. I wish him luck in his new endeavors, and look forward to seeing his future successes.” ~David Maxwell


Team Photo at SQLSaturday Cleveland 2015

Honorable Mention

Thank you to Kendal’s family – Cathy, Casey, Elizabeth, Erin and Emma Grace – for letting him play DBA with us! Kendal is fortunate to have such a beautiful and supportive family.


SQL Server Auditing – Getting Started

Auditing doesn’t have to be scary. SQL Server Auditing – Getting Started is part of a blog series designed to help you audit changes to your database by using SQL Server Audit.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization.  

SQL Server Auditing – Getting Started

Originally published on ColleenMorrow.com.

In my last organization, one of my jobs was auditing our database environment. I had been tasked with this responsibility for several years, and it wasn’t always easy. In fact, I used to despise the entire process. Why? Because I wasn’t using the right tool for the job. I didn’t know what options were available to me. Granted, I started out in SQL Server 2000, where there weren’t a whole lot of choices to begin with.

Over the years, I’ve taken a particular interest in the auditing options available in SQL Server, mainly with the goal of making that part of my job easier. True, I probably could have gone to my boss at any point and said “hey, we should get a third-party auditing tool.” But the fact is, I get a kick out of seeing just what I can do with each tool. How I can spy on (er, watch over) my users and developers. Making the most of the tools I already have at my disposal. And these days, our audits are a piece of cake.

Why Audit?

There are a number of reasons why you might need to implement auditing in SQL Server. Maybe your company is bringing in an outside firm to perform security audits. Or you might even be required by law to perform such auditing. From a development perspective, auditing DDL changes can supplement a change management system. Or it can help you answer the question, “what changed?” that will inevitably be directed at you when the poop hits the fan. Auditing can tell you who’s accessing that sensitive data, or help you figure out what a particular login is being used for.

What Can You Audit?

So what exactly can you audit in SQL Server? Just about anything. For example:

  • DDL changes: create, alter, drop, truncate
  • Logins: all logins, failed logins, logins by sysadmins
  • When Agent jobs are created, removed, or changed
  • Who is accessing sensitive data or procedures
  • Who is trying to access sensitive data or procedures
  • Changes to a user’s or login’s privileges
  • Server or database configuration changes
  • The use of deprecated features

The thing is, almost everything that happens in SQL Server generates an event. A user logging in generates an event, as does that user issuing a query. Any locks that occur while that query executes generate events. Any waits, any disk space allocations, any object creations: they all generate events. And if it generates an event, chances are good that you can audit it in some way.

What Are Your Requirements?

Ok, so you’ve decided to audit your SQL Server database. Or maybe you’ve been told to. Either way, the first thing you need to do is figure out your requirements. Taking time to plan out exactly what you need will save you tons of time and frustration later. Trust me.

What Do You Want to Audit? The most important thing you need to decide is exactly what events you want to audit. Is this a DDL audit? Is it a security audit? Some auditing tools are better suited to handle tracking specific events, so depending on what you need to record, this could rule out a solution completely.

What Data Do You Want to Collect? If you’re auditing logins, you’ll obviously want to know the login name, and most likely the date and time of the login. But do you also need to know the application? What about the network login of the user or the client’s hostname? If you’re auditing DDL changes, it would definitely be useful to capture the SQL statement issued.

Where Do You Want to Run This Audit? Are you only planning on auditing a single database on a single instance, or is this going to be a system-wide thing? If you’re planning on auditing many environments, you’ll want something that’s easy to implement and maintain; maybe something you can manage centrally. The SQL version and edition of the audit target also matter.

Where Do You Want the Output to Go? Should the output be written to a file or would you prefer a database table? If you’re auditing several databases/instances, should they all write to separate outputs or a single repository? Should DDL audit output commingle with security audit output? Who should have access to the output? And how long do you want to retain it?

How Will the Audit Output Be Processed? Will you be reporting on the output? Will you need to search the output for specific events? Will you need to compare or search output from various audits?

What’s Your Budget? SQL Server has several free built-in tools that you can use to audit your database, but there are also a number of third-party tools available. Of course, these products come at a price, and that price generally goes up in proportion to the number of systems you want to audit.

Additional Considerations

Once you’ve gotten your requirements firmed up, you can start using them to select an audit tool. In SQL Server, you have several options at your disposal, each offering its own set of pros and cons. When picking out a solution, obviously you want to ensure it meets those requirements, but there are a few more things you’ll want to consider.

Will it Impact Normal Processing? Ideally you want a solution that will have minimal or no impact on the day-to-day performance of your SQL environment.

How Tamper-proof Is It? This is especially important when it comes to security-related auditing. You want to know that someone can’t mess with your audit to avoid having certain events recorded.

How Easy Is It to Implement/Maintain? The easier your audit process is to implement and use, the less painful auditing will be. Generally speaking, if you dread the whole audit process, you’re probably not using the right tool.

How Granular Is It? Can You Filter Out Certain Events or Objects? This is something that’s especially important to me. I do DDL auditing on a database where certain objects are routinely dropped and recreated by the application. I don’t care about those objects and I don’t want them in the audit report. I also don’t want things like index maintenance showing up. So the ability to exclude objects or events is something I look for.

Coming Up

Now that you know what you need, it’s time to start test driving some solutions. In the days (ok maybe weeks) to come, I’ll be discussing several of your options for auditing events in SQL Server. I’ll talk about how they work, what their pros and cons are, and hopefully introduce you to some new ideas for implementing and using them. Auditing is necessary, but it doesn’t have to be boring. Good stuff ahead, people. Check out SQL Server Audit 101 – Creating Basic Audit next.


SQL Server Performance Tuning Script – Can My Databases Grow?

Originally published on DMMaxwell.WordPress.com.

SQL Server Performance Tuning Script – Can My Databases Grow?

Most of us try to proactively manage our available disk space as much as we can, but sometimes you can’t predict when a file will grow. We usually leave auto-grow on as a safeguard for this. One of the problems I’ve run into in the past is having a very large growth setting or a lack of disk space create a situation where there isn’t enough room on disk for a file to grow. There may be enough “free space” on the disk to satisfy whatever your monitoring threshold is, but maybe not enough for a growth or two.

Here’s a stored procedure you can use to check whether any of the databases on your server would fail to auto-grow. It requires Database Mail to be set up. Also note that for this, and most scripts I post, I have a separate administrative database that I put it in. Let me know if you have any feedback on this, or if you get any use out of it.

Thanks for reading.

-David.

USE [SQLAdmin];
GO

CREATE PROCEDURE AutogrowWarning
/*******************************************************************************
AUTHOR:
David M Maxwell

DATE:
Apr 24 2012

DESCRIPTION:
Checks to see if sufficient disk space is available for autogrow. If not, sends a warning.

DEPENDENCIES:
None

TEST EXEC:

EXEC dbo.AutogrowWarning
@dbaemail = 'david.maxwell@companyname.com'
GO

TO DO LIST:
1. Nothing at this time.

NOTES:
Please send any comments or suggestions to david.maxwell@companyname.com.

CHANGE HISTORY:
Date - Author - Change
Apr 24 2012 - David M Maxwell - Initial Version

*******************************************************************************/
@dbaemail varchar(500) = NULL -- DBA email address to receive warnings.
AS

SET NOCOUNT ON;

/* Test Parameters.  Run everything from here down to test. */
--DECLARE @dbaemail varchar(500), @msgquery nvarchar(4000)
--SELECT
--  @dbaemail = 'david.maxwell@companyname.com'

/* Parameter check */
IF @dbaemail IS NULL
BEGIN
RAISERROR('ERROR: Email recipient address must be provided.',16,1)
GOTO EXITPROC
END

/* Create temp tables for comparison. */
IF (SELECT object_id('tempdb.dbo.#DiskFree')) IS NOT NULL
DROP TABLE #DiskFree

CREATE TABLE #DiskFree (
DriveLetter char(1) NOT NULL,
FreePages int NOT NULL
);

IF (SELECT object_id('tempdb.dbo.#NextAutoGrow')) IS NOT NULL
DROP TABLE #NextAutoGrow

CREATE TABLE #NextAutoGrow (
DBName varchar(100) NOT NULL,
LogicalName varchar(100) NOT NULL,
FileType CHAR(10) NOT NULL,
DriveLetter char(1) NOT NULL,
Growth int NOT NULL
);

IF (SELECT object_id('tempdb.dbo.#AutoGrowFail')) IS NOT NULL
DROP TABLE #AutoGrowFail

CREATE TABLE #AutoGrowFail (
DBName varchar(100) NOT NULL,
LogicalName varchar(100) NOT NULL,
FileType CHAR(10) NOT NULL,
DriveLetter char(1) NOT NULL,
Growth int NOT NULL,
FreePages int NOT NULL
);

/* Create perm table, if it does not exist. */

IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = 'AutoGrowWarnings')
BEGIN
CREATE TABLE AutoGrowWarnings (
WarningMessage nvarchar(max)
)
END

TRUNCATE TABLE dbo.AutoGrowWarnings;

/* Get the amount of disk space free on each volume. */
INSERT INTO #DiskFree
EXEC xp_fixeddrives;

/* xp_fixeddrives reports free space in MB; convert to 8KB pages
for easier comparison, since master_files.size is in pages. */
UPDATE #DiskFree
SET FreePages = FreePages * 128;

/* Get the next autogrow for each DB per drive letter. */
INSERT INTO #NextAutoGrow
SELECT
db_name(database_id),
name,
type_desc,
substring(physical_name,1,1),
CASE is_percent_growth
WHEN 1 THEN (size * growth) / 100 /* growth is a percentage of the current size */
ELSE growth
END AS Growth
FROM master.sys.master_files
WHERE Growth > 0

/* Compare the next autogrow to the disk free and create
a list of files that autogrow would fail on, due to
insufficient disk space. */
INSERT INTO #AutoGrowFail
SELECT
NAG.DBName,
NAG.LogicalName,
NAG.FileType,
NAG.DriveLetter,
NAG.Growth,
DF.FreePages
FROM #NextAutoGrow NAG
INNER JOIN #DiskFree DF
ON NAG.DriveLetter = DF.DriveLetter
WHERE NAG.Growth > DF.FreePages;

/* If the list is not empty, email it to the DBA. */
IF (SELECT COUNT(*) FROM #AutoGrowFail) > 0
BEGIN

INSERT INTO dbo.AutoGrowWarnings
SELECT 'WARNING: As of ' + CONVERT(varchar(24),GETDATE(),120) + ',' + DBName + '.'
+ LogicalName + ' requires ' + CONVERT(nvarchar(24),growth)
+ ' pages of free space for autogrow and only ' + CONVERT(nvarchar(24),FreePages)
+ ' are available on drive ' + DriveLetter + '.'
FROM #AutoGrowFail

EXEC msdb.dbo.sp_send_dbmail
@recipients = @dbaemail,
@subject = 'WARNING: Possible Auto-Grow Failure',
@query = 'SELECT * FROM SQLAdmin.dbo.AutoGrowWarnings'
END

EXITPROC:
RETURN;
GO

About the Author

Data Platform Consultant

David Maxwell

UpSearch Alum David Maxwell is a database strategist, community evangelist and public speaker. He is passionate about helping community and business leaders get the most out of SQL Server.

Since 2000, David has served to protect, unlock and optimize data's value within such diverse environments as healthcare providers, insurance companies, manufacturers and financial institutions.  David has the unique ability to zero in on a complex challenge quickly, and provide a long-term solution that fits both the business needs and budget.

Since 2012, David has maintained a SQL Server focused blog at https://dmmaxwell.wordpress.com/. He is a frequent presenter at local and regional events for the Professional Association for SQL Server (PASS) and works with the Columbus, OH based PASS chapter.

In his free time, David is an avid musician who plays several instruments, as well as a lover of puzzle-based games.

Learn more about David Maxwell at https://upsearch.com/david-maxwell/.


SQL Server Performance Tuning – Two Approaches to Index Maintenance

Originally published on DMMaxwell.WordPress.com.

Though well-done index maintenance is usually an iterative process of trial and assessment, I have two default modes for how I set up index maintenance tasks for a given database. I refer to them as “Aggressive” and “Relaxed”.

When I set up index maintenance tasks, I set them up to run as often as I can. I’m using Ola Hallengren’s IndexOptimize script, and I usually set it to run nightly, with a time limit dependent on what else goes on within that particular system. For example, on one server I have some backups that kick off at 8 PM, and an ETL process that runs around midnight, so I set IndexOptimize to run at 6 PM, with a 90 minute time limit. That gives SQL Server a half hour to repopulate its buffer cache with data needed for queries, or to complete any running index tasks that spill over the time limit. This seems to be a good setting for this system. It’s slightly different elsewhere.

The difference between what I consider aggressive, and more relaxed, is how I set the thresholds for fragmentation and index size. For a more aggressive plan, I would set those thresholds lower. For a more relaxed plan, I would set them higher. Much depends on both the usage and the performance of the system, but as a baseline, this is a decent place to start.

Lower thresholds mean that the indexes are going to be maintained more often, but the average fragmentation levels will remain lower. Maintenance jobs will tend to average run times that are a bit longer and use more available RAM. Higher thresholds mean that indexes would be maintained less often, and fragmentation levels would average a bit higher. However, you’ll have shorter running maintenance tasks on average, and more available RAM left for caching query-related data.

Where to set these thresholds depends on the individual system, of course, but here’s what I use as a rough guideline. Keep in mind for the following thresholds that “page count” refers to the number of 8k pages contained in the index, so a page count of 1500 is around 12 MB of data.

Aggressive: Page Count = 1500, Defrag at 5%, Rebuild at 15%.
Relaxed: Page Count = 4000, Defrag at 15%, Rebuild at 30%.
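Translated into Ola Hallengren’s IndexOptimize parameters, the aggressive baseline looks roughly like this. A sketch only: it assumes the procedure is installed in your admin database and reuses the 90 minute window from the example above:

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 5,   -- reorganize ("defrag") at 5% fragmentation
    @FragmentationLevel2 = 15,  -- rebuild at 15% fragmentation
    @PageCountLevel = 1500,     -- ignore indexes smaller than 1500 pages (~12 MB)
    @TimeLimit = 5400;          -- stop starting new operations after 90 minutes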

So on a system that has lots of available RAM for cache, or smaller indexes, I may be more relaxed with my index maintenance, since many of the indexes will reside in cache and reading them from disk will be more rare. On a system that would have less RAM or larger indexes, I’ll want to be more aggressive with my index maintenance, since I’ll be reading data more often from disk than from cache. It will take more time and resources, but will pay off in better query performance.

On the topic of frequency, index maintenance should ideally be done as often as you can allow. It hurts nothing to do some maintenance on your databases as long as nobody is using them. However, if you’re trying to schedule index maintenance in a tight window in order to keep the database available to users, or run maintenance alongside other resource intensive jobs, then setting the thresholds as I describe may alleviate some contention between index maintenance and other processes. You will also find it more efficient than the “rebuild everything every time” approach.

Thanks for reading,
-David.


SQL Server Performance Tuning Script – Create Index Rebuild or Reorg Commands

Originally published on DMMaxwell.WordPress.com.

The scenario here is, you’ve just performed some massively fragmenting operation, like a shrink. (Don’t look at me like that – it happens, just don’t do it as regular maintenance) You know you need to get in and do some index maintenance. You could run your nightly index maintenance job, but in this case, you want to do this now, instead of waiting for off hours or running that job in the middle of the day. Furthermore, you want to see what indexes need maintenance and maybe run some of the commands independently. Here’s a quick script to make that easy for you. It outputs the table and index name, page count, and percent fragmentation, as well as a one line command to rebuild or defrag the index, depending on what thresholds you want to use for that.

/* 

  Quick script to create index defrag / rebuild commands.
  Useful for when you KNOW you have the work to do. 
  Not so useful for ongoing index maintenance, and should
  not be used for such.  

  The usual cautions about running this in production, 
  fitness for a particular purpose, or guarantee that this
  will fix anything apply. 

  DMMaxwell, October 2012.

*/

SELECT
   TableName = SCHEMA_NAME(t.schema_id) + '.' + OBJECT_NAME(ips.object_id)
  ,IndexName = ix.name 
  ,Pages = ips.page_count 
  ,Fragmentation = ips.avg_fragmentation_in_percent 
  ,ReindexCmd = 'ALTER INDEX [' + ix.name + '] ON [' + 
    SCHEMA_NAME(t.schema_id) + '].[' + OBJECT_NAME(ips.object_id) + '] ' + 
    CASE 
      WHEN ips.avg_fragmentation_in_percent > 15 THEN 'REBUILD; ' 
        /* BOL recommendation is 30 - being more aggressive. */
      WHEN ips.avg_fragmentation_in_percent > 5 THEN 'REORGANIZE; '  
        /* BOL recommendation I actually like. */
    END
FROM sys.dm_db_index_physical_stats(
  DB_ID('AdventureWorks'),  /* Your database name here. */
  NULL, /* Specify object ID or NULL for all of them.*/
  NULL, /* Specify index ID or NULL for all of them.*/
  NULL, /* Specify partition ID or NULL for all of them.*/
  'LIMITED' /* Going for speed, rather than depth of info. */
  ) ips
INNER JOIN sys.tables t
  ON ips.object_id = t.object_id
INNER JOIN sys.indexes ix 
  ON ips.object_id = ix.object_id
 AND ips.index_id = ix.index_id
WHERE ips.page_count > 500  
        /* Usually 1000 */
  AND ips.avg_fragmentation_in_percent > 5 
        /* Ignore fragmentation less than 5%. */
        /* These numbers vary, based on a multitude of factors. 
           I'm being a bit aggressive today. */
  AND ips.index_type_desc != 'HEAP' 
        /* Can't defragment a heap... */
GO

Thanks for reading.

-David.


SQL Server Performance Tuning Script – Backup Throughput

Originally published on DMMaxwell.WordPress.com.

Recently on one particular server, I noticed that backups were taking much longer than they should have. The databases or backups weren’t getting significantly larger, so I wanted to know if there was a problem with the backup storage, and to do that I wanted to look at backup throughput over time. Here’s the query I wrote to do that.

The query makes use of two tables in MSDB, namely [backupset] and [backupmediafamily]. From [backupset], we can get the database name, start and finish times, and the size of the backup. To get the file name, we need to go to [backupmediafamily], and join on the media_set_id. By calculating the difference in seconds between start and finish times, and converting the backup size from bytes to MB, we can get the backup throughput in MB/s.

Also, in this case, I’m filtering for backups that are either full or differential, and are taking longer than 10 minutes. This eliminates backups that only take a few seconds, since those will give artificially low throughput numbers. (If I only backed up 1 MB, and it took 1 s, then my throughput is 1 MB/s even if I could have done more.)

SQL Server Performance Tuning Script – Backup Throughput

SELECT
    bs.database_name AS DBName
    ,bs.backup_start_date AS DateStarted
    ,bs.backup_finish_date AS DateCompleted
    ,Duration = DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date)
    ,bs.backup_size / 1048576.0 AS DataSizeMB
    ,[MB/sec] = (bs.backup_size / 1048576.0) /
        CASE
            WHEN DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) > 0
                THEN DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date)
            ELSE 1
        END
    ,bmf.physical_device_name AS BackupFile
FROM msdb.dbo.backupset AS bs
    INNER JOIN msdb.dbo.backupmediafamily AS bmf
        ON bs.media_set_id = bmf.media_set_id
WHERE bs.type != 'L'
    AND DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) > 600;

That will give me the backup throughput for each Full or Differential backup running longer than 10 minutes that remains in the backup history. If you want to look at averages for all databases over time, you can group by parts of the datetimes. One way to do that would be like this:

WITH BackupThroughput AS (
    SELECT
        bs.backup_finish_date AS DateCompleted
        ,[MB/sec] = (bs.backup_size / 1048576.0) /
            CASE
                WHEN DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) > 0
                    THEN DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date)
                ELSE 1
            END
    FROM msdb.dbo.backupset AS bs
        INNER JOIN msdb.dbo.backupmediafamily AS bmf
            ON bs.media_set_id = bmf.media_set_id
    WHERE bs.type != 'L'
        AND DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) > 600
)
SELECT
    BackupDate = MIN(DateCompleted)
    ,AverageThroughput = AVG([MB/sec])
FROM BackupThroughput
GROUP BY
    DATEPART(YEAR, DateCompleted)
    ,DATEPART(MONTH, DateCompleted)
    ,DATEPART(DAY, DateCompleted)
ORDER BY
    DATEPART(YEAR, DateCompleted)
    ,DATEPART(MONTH, DateCompleted)
    ,DATEPART(DAY, DateCompleted);

With that information in hand, I can check to see if there was an obvious drop in backup throughput, and then investigate that.

Hope that helps.
-David.


SQL Server Performance Tuning – Shrink Is A Four Letter Word

Originally published on DMMaxwell.WordPress.com.

SQL Server Performance Tuning – Shrink Is A Four Letter Word

Recently, I’ve been noticing some large-scale log growth on some of my developer SQL servers, which has been causing them to run out of disk space. Though shrinking files is almost universally a bad idea, there is a time and a place for doing so provided the ramifications of shrinking are understood. Needing to shrink a log file indicates that either we’re operating on more data than we expected, or operating inefficiently. Either way, it’s almost always the result of unplanned circumstances. Under such circumstances, you can’t always use the option you want to. Sometimes you have to do something regrettable.

When *NOT* to shrink:

  • When the growth happens in a Data file. Shrinking is going to rob you of free space required for index maintenance, not to mention fragmenting the hell out of your existing indexes, making them near useless.
  • When the growth is in TempDB. Shrinking it can cause corruption. Either add TempDB files, or kill the transaction and recreate TempDB. You’ll need to restart SQL Server for that, and killing a transaction that big may cause a very long rollback. You’ve been warned.
  • As part of regular maintenance. If your log files are growing repeatedly, then they’re trying to get to the size they need to be in order to handle your workload. Why are you shrinking them? What do you have against them? They have dreams, too. Fix the workload, not the log files.

When to shrink:

I can only think of two situations off the top of my head:

  • Under an emergency where other workloads on the same disk are affected and there’s no additional disk elsewhere for another log file.
  • When a database is being archived / retired, and will be set to read-only. Even in that case, you wouldn’t just shrink the DB. See Retiring a Database.

Benefits to shrinking:

  • Free disk space.
  • That’s about it.

Caveats to shrinking:

  • Massive performance hit during re-growth, as all transactions must be suspended.
  • Massive performance hit due to file fragmentation on re-growth.
  • Massive performance hit due to internal fragmentation on shrink.
  • Massive perfor. You get the idea.

So really, we want to avoid shrinking log files wherever possible. The best way to avoid shrinking is to properly size files to begin with. Make sure you have enough room for your transactions to run. If you have large transactions or volumes of data to work with, one good way to avoid file growth is to batch large transactions into smaller groups, e.g. instead of inserting 5M rows, insert 100k rows at a time, 50 times. This can actually be faster than a single large insert, as you’re not stuck waiting for the log file to grow to accommodate 5M records.
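Here’s a minimal sketch of that batching pattern; the table and column names are made up for illustration:

-- Move rows in 100k chunks instead of one 5M-row insert, so the log
-- can be reused between batches rather than growing to hold it all.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (id, payload)
    SELECT TOP (100000) s.id, s.payload
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.id = s.id);

    SET @rows = @@ROWCOUNT;
END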

Some growth may still occur since, let’s face it, we have a lot of data to work with. What we want to avoid is large, recurring growth. Remember that when the log file must grow, all transactions are suspended. If you are only growing slightly, that may not be a problem, but when you have 100GB of growth, that’s going to stop you in your tracks for the duration of the growth.

If you must shrink a log file due to uncontrolled or unexpected growth, you can use the DBCC SHRINKFILE command, but with one caveat – I would recommend shrinking the file by half instead of to its smallest possible size. This way, future growth may be avoided while queries are tuned to use less log space. For example, if a log were to grow to 40GB, after averaging a size of 2GB, I would shrink the log to 20GB, like this:

USE <DatabaseName>;
DBCC SHRINKFILE(<file ID>,<target size>);

Example:

USE BigLogDB;
DBCC SHRINKFILE(2,20000);

2 is the ID of the log file for most databases. You can use sp_helpfile on a database to check that. 20000 is the target size, which is always in MB. Shrinking requires sysadmin or diskadmin permissions, which some developers do have. I can also assist you with these tasks as needed. Another advantage to this is that it gives us an idea of how large the production log files will need to be to accommodate new SQL code.
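If you’d rather not guess at the file ID, sys.database_files shows it directly:

USE BigLogDB;  -- the hypothetical database from the example above
SELECT file_id, name, type_desc, size * 8 / 1024 AS SizeMB
FROM sys.database_files
WHERE type_desc = 'LOG';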

Also, I would like to stress that shrinking is used as a temporary fix, not as a workaround, and *never* as part of regular maintenance. If you have a job on a server that is regularly shrinking log files, then you either need more disk space or smaller transactions. Regular shrinking is a sign that something bad is going on. Also, while shrinking can be considered on log files, it is almost never a good idea to shrink a data file. Avoid using SHRINKDATABASE.

Thanks for reading.
-David.


SQL Server Performance Tuning – Dealing With Disk Space Issues

Originally published on DMMaxwell.WordPress.com.

On a typical day a few months ago, I received an automated warning in my inbox that a particular server was low on disk space. This came along with the requisite “panic” emails I usually get from other areas who are also on that notification list. I responded that I was looking into it in order to keep the email volume down. 🙂

I mapped the volume in question to my local machine and fired up SpaceSniffer to get a quick read on what was taking up all the space. It was a dedicated SQL Server data file drive and the usual suspects were taking up the lion’s share of the space. (SpaceSniffer is a fantastic tool, by the way. You should check it out, if you haven’t already.)

Now, when it comes to SQL Server performance tuning – dealing with disk space issues, there are three basic options:

1. Do nothing. This should always be considered as an approach, though you may rarely use this as a solution. Sometimes you really don’t need to do anything.

2. Reduce the space required. You can relocate files, clean up un-needed data, etc. You can even shrink files if you understand what the consequences would be and have a plan for dealing with that.

3. Increase the space available. In this case, we were on a SAN drive that had available space left, so we could grow the drive to make more space available.

Another quick check showed that most of the data files were over 90% used, and none of them would be able to grow unless more space was made available. That eliminated option 1. There were no other files on that drive to clean up or delete, i.e., no databases to drop, so we couldn’t reduce the amount of space required for option 2. That left us with option 3: Grow the drive. This was an online operation handled off-hours by our SAN admin, and was successful.

After resolving any issue, it’s a good idea to go back and do some root-cause analysis. What caused the disk space issue? One of the first things I do when taking on a new client or job is to set up a monitoring job on every server that keeps a few months’ worth of history on the database files. Using the sys.dm_io_virtual_file_stats DMV, I take a snapshot of the disk activity for each data file, as well as its size on disk, every 15 minutes. This is then logged to a table in my administrative database on each server.
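The full procedure is linked below, but the heart of the snapshot is a query along these lines (a sketch; the output goes to whatever logging table you use):

-- Per-file I/O and size snapshot; schedule every 15 minutes via an Agent job.
SELECT
    GETDATE() AS CaptureTime,
    DB_NAME(vfs.database_id) AS DBName,
    mf.name AS LogicalName,
    mf.type_desc AS FileType,
    vfs.num_of_bytes_read,
    vfs.num_of_bytes_written,
    vfs.size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf
        ON vfs.database_id = mf.database_id
        AND vfs.file_id = mf.file_id;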

With the stats in place, I was able to put together a quick query to see which database had grown the most over the last few months. It turned out that one of the smaller databases had now become the second biggest database, all within the space of the last 8 weeks. Checking with the owner of the application, I found out a lot of new data had recently been imported, causing the database to grow by over 600%. After that, it didn’t take much more growth on the part of the other data files to push that disk over its threshold, and send an alert.

Here are the scripts I use, in case you’re interested.

  • DDL for FileIOStatsRaw table and vw_FileIOByHour view. – Created in the administrative DB. (In my case, [SQLAdmin].)
  • GetFileIOStats stored procedure – Run this as frequently as you want and output to the FileIOStatsRaw table. – Also created in the admin DB. Will keep 90 days by default, but call it how you want in an agent job.
  • Trend Query:
SELECT 
 DBName, 
 Yr, 
 Mo, 
 CalculatedGrowthInMBPerMonth = SUM(FileGrowthBytes) / 1024.0
FROM dbo.vw_FileIOByHour 
WHERE FileType = 'ROWS' 
GROUP BY DBName, Yr, Mo 
ORDER BY DBName, Yr, Mo;
GO

How do you monitor, measure and alert on disk space usage?

Thanks for reading.

-David.


Transactional SQL Server Replication Toolbox Scripts


Originally published on KendalVanDyke.com.

During the last few years I’ve worked extensively with transactional replication and have written a handful of scripts that have found a permanent home in my “useful scripts” toolbox. I’ve provided these scripts as downloads whenever I’ve presented about replication…but not everyone who has worked with replication has been to one of my presentations (or had access to the downloads afterwards) so I’m posting them in this Transactional SQL Server Replication Toolbox Scripts series.

The first script in my toolbox shows all of the articles and columns in each article for all transactional publications in a published database. It’s pretty straightforward – just execute the script in the published database on the publisher. Note that because it uses the FOR XML PATH directive it must be run on SQL 2005 or higher.

/********************************************************************************************* 
Transactional SQL Server Replication Toolbox Scripts: Show Articles and Columns for All Publications 

Description: 
   Shows articles and columns for each article for all transactional publications 

   (C) 2013, Kendal Van Dyke (mailto:kendal.vandyke@gmail.com) 

Version History: 
   v1.00 (2013-01-29) 

License: 
   This query is free to download and use for personal, educational, and internal 
   corporate purposes, provided that this header is preserved. Redistribution or sale 
   of this query, in whole or in part, is prohibited without the author's express 
   written consent. 

Note: 
   Execute this query in the published database on the PUBLISHER 

   Because this query uses FOR XML PATH('') it requires SQL 2005 or higher 
   
*********************************************************************************************/ 

SELECT 
   syspublications.name AS "Publication", 
   sysarticles.name AS "Article", 
   STUFF( 
       ( 
           SELECT ', ' + syscolumns.name AS [text()] 
           FROM sysarticlecolumns WITH (NOLOCK) 
               INNER JOIN syscolumns WITH (NOLOCK) ON sysarticlecolumns.colid = syscolumns.colorder 
           WHERE sysarticlecolumns.artid = sysarticles.artid 
               AND sysarticles.objid = syscolumns.id 
           ORDER BY syscolumns.colorder 
           FOR XML PATH('') 
       ), 1, 2, '' 
   ) AS "Columns" FROM syspublications WITH (NOLOCK) 
   INNER JOIN sysarticles WITH (NOLOCK) ON syspublications.pubid = sysarticles.pubid 
ORDER BY syspublications.name, sysarticles.name

 

About the Author

Microsoft SQL Server MVP & Principal Consultant

Kendal Van Dyke

UpSearch Alum Kendal Van Dyke is a database strategist, community advocate, public speaker and blogger. He is passionate about helping leaders use Microsoft's SQL Server to solve complex problems that protect, unlock and optimize data's value.

Since 1999, Kendal has specialized in SQL Server database management solutions and provided IT strategy consulting. Kendal excels at disaster recovery, high availability planning/implementation and debugging/troubleshooting mission critical SQL Server environments.

Kendal Van Dyke served the SQL Server community as Founder and President of MagicPass, the Orlando, FL based chapter of the Professional Association for SQL Server (PASS). In 2012, Kendal served as a member of the PASS Board of Directors.

Kendal remains active in the SQL Server community as a speaker and blogger. He teaches SQL Server enthusiast and technology leaders how to protect, unlock and optimize data’s value. Since 2008, Kendal has operated a SQL Server focused blog at http://www.kendalvandyke.com/.

Microsoft acknowledged Kendal for his support and outstanding contributions to the SQL Server community by awarding him Microsoft MVP (2011-15). Learn more about Kendal Van Dyke at https://upsearch.com/kendal-van-dyke/.
