SQL Server Service Broker Error Handling

Service Broker is not a black box. SQL Server Service Broker Error Handling is part of a blog series designed to support your messaging needs and make sense of Service Broker.

Originally published on ColleenMorrow.com.

This post is part of a series on this blog that will explore SQL Server Service Broker, a native messaging and queueing technology built into the SQL Server Database Engine.

In this installment, we discuss error handling in Service Broker applications as well as a noir-sounding concept called the poison message.

Poison Messages

At the end of last week’s post I raised a particular scenario. When receiving a message from the queue while inside a transaction, if we experience an error and the transaction rolls back, the message gets put back in the queue. And the next time we read from the queue, we’ll receive that same message again. So, what happens if we meet up with that same error again? And again? And again? Are you picking up what I’m layin’ down?

This type of situation, a message that can never be processed successfully, is known as a poison message. The name kind of makes it sound like there’s a problem with the message itself. And there might be. Perhaps the message format is wrong for what the receiving code was expecting. But maybe the problem is with the receiving code itself. Regardless of what causes the poison message, it has to be dealt with.

SQL Server has a built-in mechanism for handling poison messages. If a transaction that receives a message rolls back 5 times, SQL Server will disable the queue. So that means that all processing that depends on that queue will cease. Nice, huh? Because of this, it behooves you to make sure you include proper error handling in your message processing code. And how exactly you handle errors will depend on several factors:

  • Should the message processing be retried? If the error was a deadlock, then retrying is appropriate, because it should eventually succeed.
  • Should the whole transaction be rolled back or just part of it? You can use savepoints to rollback all logic except the receive, for instance.
  • Should the conversation be ended? In the case of a poison message, it’s common to commit the receipt of the message (to remove it from the queue) and end the conversation with an error. This notifies the sender service that something went wrong on the receiver side. However, if the sender doesn’t care about the success or failure of the receiver’s processing, you may choose to log the error and commit the transaction without ending the conversation.
  • What logic does the sender need to perform if the receiver gets an error? This is where things can get sticky. It’s relatively easy to code error handling on the receiver side. But what if there’s logic on the sender side that needs to be undone to complete the “rollback”? Now we need to include error handling in the receiving code that notifies the sending service of the error and we need receiving code on the sender service that will handle the error and perform any necessary logic.
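To make the savepoint and end-with-error options concrete, here's a minimal sketch (not from the original post; the processing procedure and error number are placeholders) of a receive that commits the RECEIVE itself while rolling back only the processing work, then ends the conversation with an error so the sender knows something went wrong:

```sql
DECLARE @handle UNIQUEIDENTIFIER, @body xml;

BEGIN TRANSACTION;

RECEIVE TOP(1)
    @handle = conversation_handle,
    @body   = cast(message_body as xml)
FROM IRSQueue;

--savepoint: rolling back to here does NOT undo the RECEIVE above
SAVE TRANSACTION ProcessingStart;

BEGIN TRY
    --message processing logic goes here
    EXEC usp_ProcessTaxForm @body;  --hypothetical processing procedure
END TRY
BEGIN CATCH
    --undo the processing but keep the receive, so the message doesn't return to the queue
    ROLLBACK TRANSACTION ProcessingStart;

    --notify the sender and close out the conversation
    END CONVERSATION @handle
        WITH ERROR = 50001 DESCRIPTION = N'Message processing failed';
END CATCH

COMMIT TRANSACTION;
```

Because the savepoint sits after the RECEIVE, the poison message is consumed either way, and the queue never racks up the five rollbacks that would disable it.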

Best Practices

Asynchronous applications can get pretty complex. I know you were probably hoping for some example code on proper error handling. But the thing is, so much is dependent on how your specific implementation has to function. What I can share are some best practices. Here are some of my recommendations, in no particular order:

  • Map it out before writing one bit of code – If you don’t have a clear picture of how your application logic flows, you simply won’t code an efficient and robust app. This should be a no-brainer, but even I’ve fallen victim to the urge to start coding before I’ve mapped out a clear picture of the logical flow. You’ve heard “measure twice, cut once”, well this is the developer’s version of that.
  • Do validation on the sending side – Does your receiver assume the message will be in xml format? Make sure you’re casting your message as xml at the sender. Does your receiver require a specific format? Consider using a specific xml schema. Performing as much validation as possible on the sender side not only helps prevent a lot of receiver errors, it also lessens the workload of the receiving service, which means better message processing performance.
  • Keep it simple – The less complex your receiving code, the less opportunity there is for errors. If this is a one-way application, you might even consider something as simple as receiving messages from the queue and inserting them into another table to await subsequent processing. But even with something this simple, always include TRY/CATCH blocks in your receiving procedure.
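And if a poison message does slip through and SQL Server disables the queue, you can confirm it and bring the queue back online once the underlying problem is fixed. A sketch, using the IRSQueue from this series:

```sql
--check whether the queue is still enabled
SELECT name, is_receive_enabled, is_enqueue_enabled
FROM sys.service_queues
WHERE name = N'IRSQueue';

--re-enable the queue after fixing the root cause
ALTER QUEUE dbo.IRSQueue WITH STATUS = ON;
```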

Conclusion

I’d love to hear from others who are using Service Broker. How do you handle errors in your application? Any tips or tricks to share? Leave your thoughts in the comments.

UpSearch

About the Author

SQL Server Consultant

Colleen Morrow

UpSearch Alum Colleen Morrow is a database strategist, community advocate, author, blogger and public speaker. She is passionate about helping technology leaders use Microsoft's SQL Server to protect, optimize and unlock data's value.

Colleen has been working with relational databases for almost 20 years. Since 2000, Colleen has specialized in SQL Server and Oracle database management solutions. She excels at performance tuning, troubleshooting mission critical SQL Server environments, and training technology professionals and business users of all levels.

Since 2011, Colleen has maintained a SQL Server focused blog at http://colleenmorrow.com. She is an active member of the Ohio North SQL Server User Group, as well as a volunteer for the Professional Association for SQL Server (PASS). Colleen earned a Bachelor of Science in Computer and Information Systems from Cleveland State University.

Learn more about Colleen Morrow at https://upsearch.com/colleen-morrow/.

About UpSearch


UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

SQL Server Service Broker Sending and Receiving

Service Broker is not a black box.  SQL Server Service Broker Sending and Receiving is part of a blog series designed to support your messaging needs and make sense of Service Broker.

Originally published on ColleenMorrow.com.

This post is part of a series on this blog that will explore SQL Server Service Broker, a native messaging and queueing technology built into the SQL Server Database Engine.

In this installment, we discuss sending and receiving messages.

Sending a Message

If you’ve been following along in this series, you’ll have created the necessary components of the service architecture, namely the message types, contract, queues, and services. You will also have started a conversation between your initiator and target services. You’re finally ready to start sending and receiving messages in your Service Broker app. Whoohoo! Let’s get to it!

To send a message in Service Broker, we use the SEND command. But first, we need to retrieve our conversation handle. Remember, all messages need to be sent as part of a conversation. If this message is unrelated to any previous messages, we might choose to start a new conversation. However, if this message *is* related to previous messages, we’ll want to reuse an existing conversation so that our messages are processed in the right order. More on that later.

For the sake of simplicity here, we’ll start a new conversation and send a message on that conversation handle.

DECLARE @InitDlgHandle UNIQUEIDENTIFIER;
DECLARE @TaxFormMessage NVARCHAR(1000);

BEGIN TRANSACTION;

--open a dialog between the initiator service and target service
BEGIN DIALOG @InitDlgHandle
FROM SERVICE [//SBDemo/Taxes/TaxpayerService]
TO SERVICE N'//SBDemo/Taxes/IRSService'
ON CONTRACT [//SBDemo/Taxes/TaxContract]
WITH ENCRYPTION = OFF;

--build the message
SELECT @TaxFormMessage =
N'<Form1040EZ>
<SSANumber>695256908</SSANumber>
<LastName>Erickson</LastName>
<FirstName>Gail</FirstName>
<MiddleName>A</MiddleName>
<BirthDate>1952-09-27</BirthDate>
<FilingStatus>M</FilingStatus>
<Wages>66662.00</Wages>
<FederalIncomeTax>12888.00</FederalIncomeTax>
<StateIncomeTax>2522.00</StateIncomeTax>
<SocialSecurityTax>5523.00</SocialSecurityTax>
</Form1040EZ>';

--send the message using the handle of the conversation started above
--specify the Request message, which can only be sent by the conversation initiator
SEND ON CONVERSATION @InitDlgHandle
MESSAGE TYPE [//SBDemo/Taxes/TaxFormMessage]
(@TaxFormMessage);

COMMIT TRANSACTION;

So what happens when we run this? When we begin a dialog, SQL Server creates a conversation endpoint in the initiator database. We can view that endpoint by querying the sys.conversation_endpoints table. It doesn’t try to communicate with the target service yet, though, so at this point SQL Server doesn’t know whether that target is online or whether it exists at all. In fact, if we look at sys.conversation_endpoints immediately after beginning the dialog, we’ll see the state_desc of the endpoint as “STARTED_OUTBOUND”. This means that the conversation has been started, but no messages have been sent yet.
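For example, a query along these lines (the column list is just a useful subset) shows the state of each endpoint in the current database:

```sql
--inspect conversation endpoints; expect STARTED_OUTBOUND before the first SEND
SELECT conversation_handle, state_desc, far_service
FROM sys.conversation_endpoints;
```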

When we issue the SEND command, what happens depends on how our architecture is configured. So far, our initiator and target services are both in the same database, so when we send a message SQL Server attempts to insert it directly into the target service’s queue. If for some reason SQL Server can’t write to the target queue (maybe it’s been disabled), then the message gets written to the sys.transmission_queue system table. SQL Server will continue to try to deliver the message until it succeeds. Once the message is successfully delivered to the target queue, the message is deleted from the transmission queue.
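You can watch for stuck messages by checking the transmission queue yourself; the transmission_status column holds the reason the most recent delivery attempt failed:

```sql
--messages awaiting delivery; an empty result means everything was delivered
SELECT conversation_handle, to_service_name, message_type_name, transmission_status
FROM sys.transmission_queue;
```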

Receiving Messages

Let’s assume our message was successfully delivered. In fact, let’s verify that it was. Remember that Service Broker queues are really just hidden tables, and we can select from them. So if we want to know whether our message reached the target queue, all we need to do is query the queue.

SELECT * FROM IRSQueue

However, note that SELECTing from the queue doesn’t remove messages from that queue. To pop a message out of the Service Broker queue we must RECEIVE it.

RECEIVE TOP(1) * FROM IRSQueue

Now, if we were to issue the statement above, the message and all of its related information would be returned to our results tab in SSMS. That’s not very useful, though, is it? That’s why, typically, we receive messages into variables, so that we can perform additional processing.

DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
DECLARE @RecvReqMsg xml;
DECLARE @RecvReqMsgName sysname;

BEGIN TRANSACTION;

RECEIVE TOP(1)
    @RecvReqDlgHandle = conversation_handle,
    @RecvReqMsg = cast(message_body as xml),
    @RecvReqMsgName = message_type_name
FROM IRSQueue;

IF @RecvReqMsgName = N'//SBDemo/Taxes/TaxFormMessage'
BEGIN
    DECLARE
        @SSANumber         varchar(9),
        @LastName          varchar(50),
        @FirstName         varchar(50),
        @MiddleName        varchar(50),
        @BirthDate         date,
        @FilingStatus      char(2),
        @Wages             decimal(12,2),
        @FederalIncomeTax  decimal(12,2),
        @StateIncomeTax    decimal(12,2),
        @SocialSecurityTax decimal(12,2),
        @Refund            decimal(12,2);

    --shred the XML message body into the variables
    SELECT
        @SSANumber         = c.value(N'(./SSANumber)[1]', N'varchar(9)'),
        @LastName          = c.value(N'(./LastName)[1]', N'varchar(50)'),
        @FirstName         = c.value(N'(./FirstName)[1]', N'varchar(50)'),
        @MiddleName        = c.value(N'(./MiddleName)[1]', N'varchar(50)'),
        @BirthDate         = c.value(N'(./BirthDate)[1]', N'date'),
        @FilingStatus      = c.value(N'(./FilingStatus)[1]', N'char(2)'),
        @Wages             = c.value(N'(./Wages)[1]', N'decimal(12,2)'),
        @FederalIncomeTax  = c.value(N'(./FederalIncomeTax)[1]', N'decimal(12,2)'),
        @StateIncomeTax    = c.value(N'(./StateIncomeTax)[1]', N'decimal(12,2)'),
        @SocialSecurityTax = c.value(N'(./SocialSecurityTax)[1]', N'decimal(12,2)')
    FROM @RecvReqMsg.nodes('Form1040EZ') T(c);

    EXEC CalculateRefund @FilingStatus, @Wages, @FederalIncomeTax, @Refund OUTPUT;

    DECLARE @ReplyMsg NVARCHAR(100);

    IF @Refund < 10000
    BEGIN
        SELECT @ReplyMsg = N'<Refund>' + cast(@Refund as varchar) + '</Refund>';

        SEND ON CONVERSATION @RecvReqDlgHandle
        MESSAGE TYPE [//SBDemo/Taxes/TreasuryCheckMessage]
        (@ReplyMsg);
    END
    ELSE
    BEGIN
        SELECT @ReplyMsg = N'<AuditNotice>Refund exceeds $10000: ' + cast(@Refund as varchar) + '</AuditNotice>';

        SEND ON CONVERSATION @RecvReqDlgHandle
        MESSAGE TYPE [//SBDemo/Taxes/AuditNotificationMessage]
        (@ReplyMsg);
    END

    END CONVERSATION @RecvReqDlgHandle;
END

COMMIT TRANSACTION;
GO

There are a few points I’d like to highlight in the code above. First, when we receive the message, we don’t just receive the body of the message, we’re also receiving the conversation_handle and the message_type_name. This is typical of Service Broker apps. Grabbing the conversation_handle allows us to send a reply message on the same conversation, which we actually do later in the code. Getting the message type allows us to add logic to process different message types in different ways, even though they’re coming in on the same queue.

Secondly, note that we need to cast the message_body as an xml variable. The message_body is stored as a varbinary(max), so we need to convert it to xml before we can shred it.

Finally, the RECEIVE statement is inside BEGIN and COMMIT transaction statements, so we’re receiving the message as part of the transaction. Therefore, if we hit an error, maybe in the CalculateRefund procedure, the entire transaction will roll back and the message will go back into the queue. And the next time we run the code, this message will be received and processed again. Depending on what caused the error, we may run into the same problem. Can you see where this might be an issue? More on that soon.
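One practical note: in a real application the transactional receive above is usually wrapped in WAITFOR, which blocks until a message arrives or a timeout expires instead of returning immediately when the queue is empty. A sketch, reusing the variables declared in the script above:

```sql
WAITFOR (
    RECEIVE TOP(1)
        @RecvReqDlgHandle = conversation_handle,
        @RecvReqMsg = cast(message_body as xml),
        @RecvReqMsgName = message_type_name
    FROM IRSQueue
), TIMEOUT 5000;   --give up after 5 seconds if the queue stays empty
```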

Conclusion

I hope you’re enjoying this series of posts as much as I am. We’ve only scratched the surface, so much more to come! If there’s a specific area of Service Broker you want to see addressed, please feel free to leave a suggestion in the comments. Next I will continue with SQL Server Service Broker Error Handling.


SQL Server Service Broker Conversation Architecture

Service Broker is not a black box.  SQL Server Service Broker Conversation Architecture is part of a blog series designed to support your messaging needs and make sense of Service Broker.

Originally published on ColleenMorrow.com.

This post is part of a series on this blog that will explore SQL Server Service Broker, a native messaging and queueing technology built into the SQL Server Database Engine.

In this installment, we introduce Service Broker’s conversation architecture.

Conversations

Conversations are part of everyday life. We have them with our partner, our coworkers, our friends, even total strangers. They can be short:

“Is anyone sitting here?”

“No.”

Or they can be very, very long. But they all have some things in common. They involve at least two people, one participant starts the dialog, they’re bi-directional, and messages are received in the order in which they are sent.

Like real-life conversations, Service Broker conversations are a reliable, bidirectional stream of messages exchanged between two participating services. A Service Broker conversation can be short-lived, a simple exchange of two messages, or it can span days, weeks, even years. There are two key elements of a Service Broker conversation that are important to note, however. The first is that messages are guaranteed to be delivered in order, and only once. Service Broker uses sequencing and acknowledgement mechanisms to ensure this. The second key is that conversations are persistent. Persistent across network interruptions. Across server restarts. In fact, conversations persist until they are explicitly ended.

In the world of Service Broker, you’ll sometimes see the term “conversation” used. Sometimes it’s “dialog”. Sometimes it’s even “dialog conversation”. Although “conversation” and “dialog” are distinct concepts in the greater world of messaging services, in the context of Service Broker they are interchangeable.

Initiators and Targets

Last week I introduced the Service Broker contract. Here’s the syntax again:

CREATE CONTRACT [//SBDemo/Taxes/TaxContract]
(
[//SBDemo/Taxes/TaxFormMessage] SENT BY INITIATOR,
[//SBDemo/Taxes/TreasuryCheckMessage] SENT BY TARGET,
[//SBDemo/Taxes/AuditNotificationMessage] SENT BY TARGET
);
GO

You may have noticed something about the syntax, specifically the references to INITIATOR and TARGET. In a Service Broker conversation, the initiator is the service that begins a dialog. The target is the service that accepts the dialog started by the initiator. It can be easy to take the roles of target and initiator and apply them to specific servers or instances. However, it’s important to understand that these roles are specific to a conversation, and may change for other conversations. Think about it this way: I can start a conversation with you in the morning. In that conversation I’m the initiator and you’re the target. But later in the day, you might approach me and start a conversation. In that conversation, you’re the initiator.

The same is true for Service Broker. Which service acts as the initiator and which is the target will depend on how you design your solution. It may well be that a service on ServerA always initiates conversations with a service on ServerB. But you may deploy a solution that works in a more bi-directional manner. Therefore you need to keep this in mind when defining contracts and assigning message types to each role.

Starting a Dialog

Before we can send a message between services, we need to create a dialog. Creating a dialog establishes the two participants in the dialog, what contract will be used (and therefore what message types are acceptable), and whether encryption will be used for extra security. We create a dialog using the BEGIN DIALOG command.

DECLARE @InitDlgHandle UNIQUEIDENTIFIER;

BEGIN DIALOG @InitDlgHandle
FROM SERVICE [//SBDemo/Taxes/TaxpayerService]
TO SERVICE N'//SBDemo/Taxes/IRSService'
ON CONTRACT [//SBDemo/Taxes/TaxContract]
WITH ENCRYPTION = OFF;

There are two parts of the BEGIN DIALOG command that are important. The first is that it requires a UNIQUEIDENTIFIER variable. When a dialog is created, it’s assigned a GUID called a conversation handle. That conversation handle is then used to tell SQL Server which messages are related as part of a single conversation. So we use that variable to grab the conversation handle. The other part of the command worth noting is how each service is referenced. The FROM service (initiator) is always going to be in the current database; however, the TO service (target) may be in the current database or it might reside in a separate instance altogether. For that reason, we specify the TO service as a string literal.

Conclusion

This has been a basic introduction to the conversation architecture. There are more complex aspects of conversations, like grouping and creating timers, which we’ll cover in future posts. Next time, however, we’ll walk through sending and receiving messages through Service Broker, as well as how to process incoming messages automatically in SQL Server Service Broker Sending and Receiving.


SQL Server Service Broker Service Architecture

Service Broker is not a black box. This SQL Server Service Broker Service Architecture is part of a blog series designed to support your messaging needs and make sense of Service Broker.

Originally published on ColleenMorrow.com.

This post is part of a series on this blog that will explore SQL Server Service Broker, a native messaging and queuing technology built into the SQL Server Database Engine.

In this installment, we introduce the basic Service Broker service architecture components.

Taxes

Before I jump into the technical details of the Service Broker architecture, I think it helps to have a real-world analogy of what Service Broker is and does. In the last installment, I used the example of ordering something from Amazon.com. This time, I’d like to use an analogy that’s somewhat timely: taxes.

Each year, we fill out that 1040 or 1040EZ form and we send it to the Internal Revenue Service. Maybe we eFile, maybe we mail it in, it doesn’t matter. That form is received by the IRS and goes into a queue, awaiting review. At some point, days, maybe weeks later, our tax return is processed. If all goes well, our return is approved and the IRS cuts us a check. That is a Service Broker application.

Message Type

The first Service Broker components we define in a new application are the message types. The message type defines the name and format of messages that will be exchanged between services. When we create a message type, we have the option of also applying validation, basically saying the message must adhere to a specific format. That validation format may be well-formed XML, XML conforming to a specific schema, an empty message, or no validation at all, in which case the message content could be anything. In our tax example we had two message types: the tax return form we submit and the check we get back. Each of these has a well-defined format with specific fields it must contain.

CREATE MESSAGE TYPE
[//SBDemo/Taxes/TaxFormMessage]
VALIDATION = WELL_FORMED_XML;

CREATE MESSAGE TYPE
[//SBDemo/Taxes/TreasuryCheckMessage]
VALIDATION = WELL_FORMED_XML;

CREATE MESSAGE TYPE
[//SBDemo/Taxes/AuditNotificationMessage]
VALIDATION = WELL_FORMED_XML;
GO

Contract

Once the message types have been defined, the next component we need to create is the contract. A Service Broker contract specifies which message types are allowed in a conversation and which participant can send which message type. In our tax example, the taxpayer sends the 1040 message type and the IRS sends the treasury check. The IRS, however, would never send the taxpayer a completed 1040 form and a taxpayer would never send a treasury check. Why is this important? By defining what message types can be sent by each participant in a Service Broker app, we’re helping the receiving participant identify when an unauthorized message is received, thereby making our Service Broker app more secure.

Note that a contract can specify more than one message type for any participant. For example, the IRS can also send an audit notice. And the taxpayer can also send other forms or, unfortunately, a personal check. The same holds true for a Service Broker contract.

CREATE CONTRACT [//SBDemo/Taxes/TaxContract]
(
[//SBDemo/Taxes/TaxFormMessage] SENT BY INITIATOR,
[//SBDemo/Taxes/TreasuryCheckMessage] SENT BY TARGET,
[//SBDemo/Taxes/AuditNotificationMessage] SENT BY TARGET
);
GO

So, message types and contracts are pretty straightforward, right? The next 2 components sometimes cause a little confusion. Let’s start with queues.

Queue

A Service Broker queue is, at its essence, a hidden table in SQL Server that stores messages until they’re processed. Each message in the queue is a row in that hidden table. But the cool thing about this hidden table is that it’s not totally hidden. You can SELECT from it, but you can’t INSERT, UPDATE, or DELETE against it directly. Like any other “real” table in the database, the queue is included in transactions, logging, database backups, database mirroring, etc. Each participant in a Service Broker application needs to have a queue to store received messages. In our tax analogy, I like to picture the queue as a cheap plastic inbox sitting on some sad desk in the IRS offices (complete with dreary fluorescent lighting, à la “Joe Versus the Volcano”). Our 1040 form will stay in that inbox until Tom Hanks processes it.

CREATE QUEUE TaxpayerQueue;
CREATE QUEUE IRSQueue;
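Because the queue behaves like a read-only table, you can query it but direct modifications are rejected. Something like this illustrates the difference:

```sql
SELECT * FROM IRSQueue;   --allowed: peek at queued messages without consuming them

DELETE FROM IRSQueue;     --fails: queues can't be modified with INSERT/UPDATE/DELETE
```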

Service

A Service Broker service is an addressable endpoint for conversations that bundles up a specific contract and queue. Huh? When we send our tax return form in, we don’t send it to a specific inbox on a desk, right? We send it to the Internal Revenue Service. And sending our return information to the Internal Revenue Service implies the adherence to the contract that was laid out earlier (we send in a specific form, they send us a check, etc.). And when we send that form to the IRS, the form automatically gets placed in that inbox (queue). Similarly, in a Service Broker application, we don’t send a message to a queue, we send it to a service. By sending a message to a specific service, we agree to use the message types defined in the contract on that service and SQL Server automatically places our message into the queue associated with that service.

CREATE SERVICE
[//SBDemo/Taxes/TaxpayerService]
ON QUEUE TaxpayerQueue
([//SBDemo/Taxes/TaxContract]);
GO

CREATE SERVICE
[//SBDemo/Taxes/IRSService]
ON QUEUE IRSQueue
([//SBDemo/Taxes/TaxContract]);
GO

Conclusion

That is a very basic introduction to the Service Broker architecture. Once we’ve created our message types, contract, queues, and services, we’re ready to start sending and receiving messages. More on that next time in SQL Server Service Broker Conversation Architecture!


SQL Server Service Broker Introduction

Service Broker is not a black box.  This SQL Server Service Broker Introduction is part of a blog series designed to support your messaging needs and make sense of Service Broker.

Originally published on ColleenMorrow.com.

SQL Server Service Broker is a messaging technology that’s baked right into the SQL Server engine. It was first introduced in SQL Server 2005, and if you ask me, is one of the most under-utilized, under-appreciated features in SQL Server.

What’s so great about Service Broker?

So why am I such a big fan of Service Broker and if it’s so great, why isn’t everybody using it? Let me start by telling you why I’m such a fan.

  • Asynchronous – The biggest benefit of Service Broker, or probably any messaging technology, is that it decouples long-running processing from the client application. A great example of what I mean by this is Amazon.com. When you place an order at Amazon, a whole series of backend processes are initiated. Your payment is processed, inventory is verified and updated, fulfillment and shipping are initiated, etc. All of which ultimately end with a box arriving on your doorstep. But the website doesn’t wait for all of that stuff to finish before it thanks you for your order. Your order information is added to a queue and the page returns. All that other stuff is handled asynchronously. And that’s what Service Broker lets you do in your database.
  • Guaranteed message delivery – Most of the time in a Service Broker application, you’ll be sending messages between servers. What happens if the network link between those servers goes down? Well, if you’re relying on synchronous connections or linked servers, you’d be in trouble. But with Service Broker, messages are queued until the communication link is back up and are then sent automatically. Guaranteed.
  • Guaranteed message sequencing – Similar to guaranteeing all messages will be delivered, Service Broker uses internal locking and sequencing mechanisms to also guarantee that all messages will be processed in the order in which they were sent.
  • Durability – Service broker messages and conversations are persistent across server restarts.
  • Easier application development – All of the perks I mentioned above are built right into Service Broker, which means you don’t need to worry about any of that when you develop your code.
  • Any Edition – Service Broker is available in any edition of SQL Server. Now how many features can you say that about?

Wow, Service Broker sounds awesome! Why isn’t everybody using it?

While there are some pretty cool benefits of using Service Broker, it hasn’t really seen widespread adoption, and I think there are a couple reasons for that. The first reason is that many people don’t know where they could use messaging technology like Service Broker. So here are some ideas to get you started.

  • Event Notifications – If you’ve been to my blog much, you’ll know I’m kind of into auditing. Event notifications are one of the ways I’ve accomplished SQL Server auditing in the past, and event notifications use Service Broker.
  • Asynchronous triggers – Any time you implement a trigger, you should make sure it is as lightweight as possible. Many times I’ve seen developers and DBAs try to cram too much work into a trigger and the application suffers as a result. If there is logic in your trigger that doesn’t need to be performed synchronously, as part of the current transaction, consider using Service Broker to offload some of that processing to an asynchronous process.
  • Data warehouse ETL – Whether it’s in conjunction with triggers or with technology like Change Data Capture (CDC), Service Broker is a great way to send data changes to your data warehouse.
  • Distributed server-side processing for client applications – I mentioned Amazon as an example of this earlier, but this isn’t confined to order entry systems. Think about what back-end processing you have in your environment that could be handled asynchronously.
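To make the asynchronous trigger idea above concrete, here’s a minimal sketch. The service, contract, and message type names are hypothetical, and all of those Service Broker objects (plus the queues behind them) would have to be created first; we’ll cover those components later in this series.

```sql
-- Hypothetical sketch of an asynchronous trigger: rather than doing the
-- heavy processing inline, the trigger just drops a message on a queue
-- for a background reader to process later.
CREATE TRIGGER dbo.Orders_QueueProcessing ON dbo.Orders
AFTER INSERT
AS
BEGIN
    DECLARE @dialog uniqueidentifier, @payload xml;

    -- Capture the inserted rows as the message body
    SELECT @payload = (SELECT OrderID FROM inserted FOR XML AUTO);

    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE [//OrderInitiatorService]
        TO SERVICE '//OrderProcessingService'
        ON CONTRACT [//OrderContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @dialog
        MESSAGE TYPE [//OrderMessage] (@payload);
END
GO
```

The trigger returns as soon as the message is queued; the actual processing happens later, on the receiving side.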

The other problem with Service Broker is that it’s not always easy to understand and implement. And it can be pretty easy to mess up. I liken it to a black box of sorts: messages go in and we hope they come out on the other side. But when they don’t, we don’t always know where to look.

In this series of blog posts, I’ll walk through the Service Broker architecture, how Service Broker works, how to troubleshoot when things go wrong (and don’t worry, they will), best practices, and more. Next time, we’ll begin with basic Service Broker components and the role that each one plays.

UpSearch

About the Author

SQL Server Consultant

Colleen Morrow

UpSearch Alum Colleen Morrow is a database strategist, community advocate, author, blogger and public speaker. She is passionate about helping technology leaders use Microsoft's SQL Server to protect, optimize and unlock data's value.

Colleen has been working with relational databases for almost 20 years. Since 2000, Colleen has specialized in SQL Server and Oracle database management solutions. She excels at performance tuning, troubleshooting mission critical SQL Server environments, and training technology professionals and business users of all levels.

Since 2011, Colleen has maintained a SQL Server focused blog at http://colleenmorrow.com. She is an active member of the Ohio North SQL Server User Group, as well as a volunteer for the Professional Association for SQL Server (PASS). Colleen earned a Bachelor of Science in Computer and Information Systems from Cleveland State University.

Learn more about Colleen Morrow at https://upsearch.com/colleen-morrow/.

About UpSearch

up-social-round

UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

SQL Server Audit Toolbox

Originally published on ColleenMorrow.com.

A consolidated repository of various scripts and tools I’ve blogged about.

 


 


 

Download the SQL Server Audit Scripts

 

Download - SQL Server Audit Toolbox


SQL Server Audit User-Defined Audit Events

Originally published on ColleenMorrow.com.

One thing I failed to touch on during my series on SQL Audit was the use of user-defined events in audits. This was brought to my attention in a comment by one of my readers. He was trying to make use of user-defined events and was having a problem getting the output to actually write to the audit file. Since I was writing some code to recreate the problem on my system anyway, I decided to post it here.

Why might you want to create a user-defined event audit in the first place? Well, we already know that we can use SQL Audit to audit access to certain objects. So let’s say we have a table with salary data. We can use the SCHEMA_OBJECT_ACCESS_GROUP to audit access to any object in that schema. We can also use SELECT, INSERT, UPDATE, and DELETE actions to audit those actions on specific objects. But let’s say even getting that granular will produce more audit output than we’d like. Suppose we only want to know when an employee’s salary is increased by more than 10%. We can’t do that with any of the canned actions. But we can do that with a custom event.

Configure the Audit

The first step is to configure the audit object. Once we’ve got that configured we create the audit specification. This can be either a server audit spec or a database audit spec, depending on your needs. Just make sure to add the USER_DEFINED_AUDIT_GROUP action. And don’t forget to enable both the server audit and the audit spec.

USE [master]
GO

CREATE SERVER AUDIT [TestingUserDefinedEvents]
TO FILE
(   FILEPATH = N'D:\SQL2012\Audits'
    ,MAXSIZE = 5 MB
    ,MAX_ROLLOVER_FILES = 5
    ,RESERVE_DISK_SPACE = OFF
)
WITH
(   QUEUE_DELAY = 1000
    ,ON_FAILURE = CONTINUE
)
GO
ALTER SERVER AUDIT [TestingUserDefinedEvents] WITH (STATE = ON);
GO

USE [AdventureWorks2012]
GO

CREATE DATABASE AUDIT SPECIFICATION [UserDefinedEvents]
FOR SERVER AUDIT [TestingUserDefinedEvents]
ADD (USER_DEFINED_AUDIT_GROUP)
WITH (STATE = ON)
GO

 

Writing to the Audit

To write to the audit log, we’ll use the sp_audit_write stored procedure. This built-in stored procedure accepts 3 parameters:

  • @user_defined_event_id is a smallint used to identify the event
  • @succeeded is a bit flag used to specify whether the action was successful or not
  • @user_defined_information is an nvarchar(4000) string describing the event

So, to test our audit, let’s run the following:

USE [AdventureWorks2012]
GO
EXEC sp_audit_write @user_defined_event_id = 27,
    @succeeded = 0,
    @user_defined_information = N'Testing a user defined event.';

If we check the audit log, we should see the event.

 

Putting it into Practice

Back to our original purpose, we wanted to know whenever an employee’s salary was increased more than 10%. To do this, we can create a trigger. (As I’ve mentioned many times in the past, I’m not a developer, so no fair picking on my trigger code.)

USE AdventureWorks2012
GO

-- Note: for simplicity this trigger assumes a single-row update;
-- a multi-row UPDATE would capture only one of the affected rows.
CREATE TRIGGER [HumanResources].[SalaryMonitor] ON [HumanResources].[EmployeePayHistory]
AFTER UPDATE
AS
DECLARE @oldrate money
      , @newrate money
      , @empid int
      , @msg nvarchar(4000)

SELECT @oldrate = d.Rate
FROM deleted d

SELECT @newrate = i.Rate, @empid = i.BusinessEntityID
FROM inserted i

IF @oldrate * 1.10 < @newrate
BEGIN
    SET @msg = 'Employee ' + CAST(@empid AS varchar(50)) + ' pay rate increased more than 10%'
    EXEC sp_audit_write @user_defined_event_id = 27,
        @succeeded = 1,
        @user_defined_information = @msg;
END
GO

Now let’s test the trigger by nearly doubling employee 4’s rate while increasing employee 8’s rate by only a small amount.

SELECT * FROM HumanResources.EmployeePayHistory WHERE BusinessEntityID = 4

UPDATE HumanResources.EmployeePayHistory SET Rate = 59.8462
WHERE BusinessEntityID = 4 AND RateChangeDate = '2006-01-15 00:00:00.000'

SELECT * FROM HumanResources.EmployeePayHistory WHERE BusinessEntityID = 8

UPDATE HumanResources.EmployeePayHistory SET Rate = 41.8654
WHERE BusinessEntityID = 8 AND RateChangeDate = '2003-01-30 00:00:00.000'

We should see an audit record for employee 4 in the output, and we do.

 

You might be wondering if you can configure an audit to capture only specific user-defined event IDs. Great question, and you can, by filtering on the user_defined_event_id field in the server audit.
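For example, something like this should restrict the audit we created above to event ID 27 only (note that an audit must be disabled before its filter can be changed):

```sql
-- Disable the audit, attach a predicate filter, then re-enable it.
ALTER SERVER AUDIT [TestingUserDefinedEvents] WITH (STATE = OFF);
GO
ALTER SERVER AUDIT [TestingUserDefinedEvents]
WHERE user_defined_event_id = 27;
GO
ALTER SERVER AUDIT [TestingUserDefinedEvents] WITH (STATE = ON);
GO
```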

So there you have it, another way to tailor SQL Audit a bit more to your specific needs. Happy auditing!

 


SQL Server Audit 302 Deploying Audit Solution with Powershell

Auditing doesn’t have to be scary. SQL Server Audit 302 – Deploying Audit Solution with Powershell is part of a blog series designed to help you audit database changes with SQL Server Audit. Contact us if you have any questions about how to select and implement the right auditing solution for your organization.

SQL Server Audit 302 – Deploying Audit Solution with Powershell

Originally published on ColleenMorrow.com.

This is it, ladies and gentlemen. One last post and I promise I won’t mention the word audit for at least a week.

Before we begin, let’s take a moment to recap what we’ve covered so far:

For the final installment of this series, we’re going to take everything we’ve learned and put it all together.

Premise

In case you haven’t been paying attention, I really don’t like pointing and clicking my way through a GUI to do, well, just about anything. For any task that needs to be done more than once, I’d much rather spend the time upfront automating it, in order to save time later. Besides, automating tasks is fun! So it only makes sense that I’d want to use the same approach to auditing.

In SQLAudit 201, we created a SQLAudit database to act as our central audit repository. That’s where we’re storing the info on our current audits, and importing data from our audit files into a table for permanent storage and reporting.

The other thing we’re going to use that SQLAudit database for is to hold our “master” database audit specifications. These “master” specs will be used as templates to deploy auditing to other databases/instances. How, you ask? By using PowerShell, of course!

How it Works

I’ve split the work into 2 scripts: one to deploy an audit, one to remove an audit. Both can be executed interactively or in batch mode.

DeployAudit.ps1

Parameters

  • Target Instance – where to deploy audit to
  • Target Database – database to deploy audit to; null for a server audit
  • Target Login – Login with permissions to manage audits
  • Target Password
  • Audit Specification – name of the master audit specification to be deployed
  • UpdateAudit – y/n flag to overwrite an existing audit object if found
  • UpdateSpec – y/n flag to overwrite existing audit specification if found
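In batch mode, an invocation might look something like this. (The parameter names below are illustrative, based on the list above; check the script itself for the exact spellings.)

```powershell
# Hypothetical batch-mode call: deploy the master DDL audit spec to a target database
.\DeployAudit.ps1 -TargetInstance 'SQL01' -TargetDatabase 'AdventureWorks2012' `
    -TargetLogin 'auditAdmin' -TargetPassword 'P@ssw0rd' `
    -AuditSpecification 'Master_Database_DDLAudit' -UpdateAudit 'n' -UpdateSpec 'n'
```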

If executed interactively, the script will prompt the user for the target instance and database, login and password. It will connect to the master instance and list the available audit specs in the SQLAudit database and any server audit specs starting with “master”. The user is then asked to specify an audit to deploy. The script verifies that the user specified a valid master audit specification and determines whether the selected spec is a database or server audit specification.

The script then connects to the target instance and determines whether the target audit object exists. If it doesn’t, the audit object is created and a record is inserted into the SQLAudit.dbo.AuditLocator table. If the audit does exist, the user is prompted to overwrite the definition. This is useful if you’ve modified the location or file options in the master audit object and want to propagate those changes. If the path to the audit file has changed, the AuditLocator table will be updated accordingly.

If a database audit specification was selected, the script will connect to the target database and check for a pre-existing spec. If one is found, the user is asked if the spec should be overwritten. If yes, the specification will be dropped and recreated. (There didn’t seem to be any sense in writing a separate function to update an audit specification.) Otherwise, the new specification is created. The same general logic is used for a server audit specification.

RemoveAudit.ps1

Parameters

  • Instance Name – name of the instance you want to stop auditing
  • Database Name – name of database being audited
  • Audit Specification Name – name of audit specification to remove.
  • Target Login
  • Target Password

If run interactively, the user is prompted for the target instance and database, login and password. The script connects to the target instance and lists all existing server audit specs and any database audit specs in the specified database. It then prompts the user for the audit spec to remove. The script verifies the selected spec name is valid and, if so, deletes that specification. If that was the last audit specification associated with that audit object, the audit object is also dropped and the record in AuditLocator is updated to mark the audit as inactive.

Download the Scripts

I’ve packaged up my scripts into one handy-dandy zip file for your downloading pleasure.

SQLAuditScripts.zip contains:

  • SQLAuditDatabase.sql – Script to create the SQLAudit database and its objects.
  • Library-ManageAudits.ps1 – Library of PowerShell functions to drop, create, and update audit objects and audit specs.
  • DeployAudit.ps1 – Powershell script for deploying an audit.
  • RemoveAudit.ps1 – Powershell script for removing an existing audit.

Download - SQL Server Audit 302 Deploying the Audit Solution with Powershell

Additional Tools You’ll Need

I’ve mentioned these before, in my Inventory scripts. If you haven’t already, you’ll need to download a couple of PowerShell functions.

Name: Write-DataTable.ps1

Author: Chad Miller

Loads data from a datatable object into SQL Server tables

Name: Out-DataTable.ps1

Author: Chad Miller

Formats input into a datatable object which can then be imported into SQL Server using Write-DataTable.

Sample Deployment

Let’s say we want to audit DDL (a common theme for me). Start by creating a Master_DDLAudit audit object on whatever instance holds the SQLAudit database. Use a file output, specifying a UNC path to our central audit folder. This is where all our audits will write to, so make sure it’s accessible by all your potential target instances. We’re not actually auditing with this master audit, so leave it disabled.

Next create an audit specification, Master_Database_DDLAudit, in the SQLAudit database. Specify the SCHEMA_OBJECT_CHANGE_GROUP action, and assign the spec to the Master_DDLAudit audit. Again, leave the specification disabled.
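In T-SQL, those master objects might look something like this (the UNC path is a placeholder for your central audit share):

```sql
-- Template objects only: the server audit is created disabled (the default)
-- and the specification is explicitly left off. Neither ever runs here.
USE [master];
GO
CREATE SERVER AUDIT [Master_DDLAudit]
TO FILE (FILEPATH = N'\\CentralServer\Audits\');
GO

USE [SQLAudit];
GO
CREATE DATABASE AUDIT SPECIFICATION [Master_Database_DDLAudit]
FOR SERVER AUDIT [Master_DDLAudit]
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = OFF);
GO
```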

Suppose we also want to create a security audit to monitor changes to logins and server-level permissions. These are instance-level events, so we need to create a server audit specification, rather than a database audit spec. I don’t want this output going to the same file as my DDL audit, so I’ll create a new audit object, Master_SecurityAudit. Then I’ll create a server audit spec called Master_Server_SecurityAudit.

Now if I want to deploy the DDL audit to my AdventureWorks2012 database, I simply run the DeployAudit script.

DeployAudit

If I check my AuditLocator table, I see the audit has been added as an active audit.

AuditLocator

To stop auditing DDL on AdventureWorks, I use the RemoveAudit script.

RemoveAudit

Since this was the only audit specification attached to this audit, a select on my AuditLocator shows that the audit is now marked as inactive. The audit object has also been removed from the instance.

AuditLocator2

Conclusion

That concludes this series on SQL Audit. I hope you’ve enjoyed it as much as I have and I really hope you’ve found it helpful for your own auditing needs. Please don’t hesitate to contact me with any questions or problems you run into with the scripts.

SQL Server Audit Series

This blog series was designed to help you audit database changes.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization with SQL Server Audit.
  1. SQL Server Auditing – Getting Started
  2. SQL Server Audit 101 – Creating Basic Audit
  3. SQL Server Audit 102 – Reading Audit Output 
  4. SQL Server Audit 201 – Creating Audit Solution
  5. SQL Server Audit 301 – Using PowerShell to Manage Audits
  6. SQL Server Audit 302 – Deploying Audit Solution with PowerShell

SQL Server Audit 301 Using Powershell to Manage Audits

Auditing doesn’t have to be scary. SQL Server Audit 301 – Using Powershell to Manage Audits is part of a blog series designed to help you audit database changes with SQL Server Audit. Contact us if you have any questions about how to select and implement the right auditing solution for your organization.

SQL Server Audit 301 – Using Powershell to Manage Audits

Originally published on ColleenMorrow.com.

Today we’re going to go over some very basic scripts to create, drop, and copy SQL Audit objects using Powershell and SMO. Managing SQL Audit objects via PowerShell is actually pretty simple, even for a newbie like me. And I’m proud to say that these might be the first PowerShell scripts I’ve written that were entirely my own and not based on someone else’s work. I might actually be learning something! Nah, probably not. :-)

Creating an Audit Object

The first step in implementing SQL Audit is to create the audit object, so that’s where we’ll start. Let’s look at the whole script and then break it down.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null

$instance = 'MyServer'
$auditName = $instance+"_TestAudit"
$auditDir = '\\MyServer\D$\Audits\'

$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') -argumentlist $instance

$newAudit = new-object Microsoft.SqlServer.Management.Smo.Audit($srv, "$auditName")
$newAudit.DestinationType = [Microsoft.SqlServer.Management.Smo.AuditDestinationType]::File
$newAudit.FilePath = $auditDir
$newAudit.MaximumRolloverFiles = 10
$newAudit.MaximumFileSize = 100
$newAudit.QueueDelay = 1000
$newAudit.Create()
$newAudit.Enable()

The first thing we’re doing is simply declaring some variables to hold our instance name, the name of the audit we want to create, and the folder where we want our audit file to be written. For re-usability, we could even make these into parameters, but I wanted to keep this simple. Next we create a new SMO connection to our instance with the command

$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') -argumentlist $instance

Once we’re connected to SQL Server, we can create a new audit class object and start assigning attribute values. Here, we’re setting the destination to a file, and the file path to our $auditDir variable. We set the maximum number of rollover files, the queue delay, etc. All of the available properties can be found here: http://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.audit.aspx, but everything I’m setting here should look very familiar if you’ve been following along in this series.

$newAudit = new-object Microsoft.SqlServer.Management.Smo.Audit($srv, "$auditName")
$newAudit.DestinationType = [Microsoft.SqlServer.Management.Smo.AuditDestinationType]::File
$newAudit.FilePath = $auditDir
$newAudit.MaximumRolloverFiles = 10
$newAudit.MaximumFileSize = 100
$newAudit.QueueDelay = 1000

And once we’ve got all of our properties set, we do the PowerShell equivalent of clicking OK: we create the audit and enable it.

$newAudit.Create()
$newAudit.Enable()

Want to make sure our audit was created? We can list all of our audit objects like this:

$instance = 'MyServer'
$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') -argumentlist $instance
foreach ($a in $srv.Audits)
{
   Write-Host $a.Name
}

Creating an Audit Specification

Once we’ve got our audit object created, we need to create the audit specification. For this example, we’ll create a database audit specification, rather than a server audit specification, though there are only slight differences between the two. In this particular audit spec, we’ll audit DDL changes to our AdventureWorks database and we’ll audit the execution of a stored procedure called usp_ChangeEmpSalary. As before, let’s see the whole script and break it down.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null

$instance = 'MyServer'
$database = 'AdventureWorks'
$auditName = $instance+"_TestAudit"
$specName = $instance+"_"+$database+"_AuditSpec"

$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') -argumentlist $instance
$db = New-Object Microsoft.SqlServer.Management.Smo.Database
$db = $srv.Databases.Item($database)

## Set audit spec properties
$AuditSpec = new-object Microsoft.SqlServer.Management.Smo.DatabaseAuditSpecification($db, $specName)
$AuditSpec.AuditName = "$auditName"

## Set audit actions
$SpecDetail = new-object Microsoft.SqlServer.Management.Smo.AuditSpecificationDetail("SchemaObjectChangeGroup")
$AuditSpec.AddAuditSpecificationDetail($SpecDetail)

$SpecDetail = new-object Microsoft.SqlServer.Management.Smo.AuditSpecificationDetail("EXECUTE","OBJECT","dbo","usp_ChangeEmpSalary","public")
$AuditSpec.AddAuditSpecificationDetail($SpecDetail)

## Create and enable audit spec
$AuditSpec.Create()
$AuditSpec.Enable()

Just like in the previous script we start out by defining our variables for instance, database, the audit we’re assigning this spec to, and the name of the new spec. Then we create our SMO objects for our server and database. Now we can get down to business. We start by creating a DatabaseAuditSpecification class object and setting its AuditName property to the name of the audit object we’re assigning this spec to.

## Set audit spec properties
$AuditSpec = new-object Microsoft.SqlServer.Management.Smo.DatabaseAuditSpecification($db, $specName)
$AuditSpec.AuditName = "$auditName"

Note that, if we wanted to create a server audit specification, we would use the ServerAuditSpecification class:

$AuditSpec = new-object Microsoft.SqlServer.Management.Smo.ServerAuditSpecification($srv, $specName)

Ok, so we’ve got our audit spec named and assigned, now we need to tell it what to audit. This is a two-step process in SMO, first we create an AuditSpecificationDetail object, and then we add it to the audit spec. I used these two actions to give you an idea of how to add an action group vs an individual action with more configuration options.

## Set audit actions
$SpecDetail = new-object Microsoft.SqlServer.Management.Smo.AuditSpecificationDetail("SchemaObjectChangeGroup")
$AuditSpec.AddAuditSpecificationDetail($SpecDetail)

$SpecDetail = new-object Microsoft.SqlServer.Management.Smo.AuditSpecificationDetail("EXECUTE","OBJECT","dbo","usp_ChangeEmpSalary","public")
$AuditSpec.AddAuditSpecificationDetail($SpecDetail)

And finally, we create the audit specification and enable it.

## Create and enable audit spec
$AuditSpec.Create()
$AuditSpec.Enable()

Dropping Audits and Audit Specifications

Dropping an audit or audit spec is even easier than creating one. You find the one that matches the name you’re looking for, you disable it and drop it. In fact, it’s so straightforward that I’m not going to step through this one. If you’ve been following along so far, you’ll see what I’m doing here.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null

$instance = 'MyServer'
$database = 'AdventureWorks'
$auditName = $instance+"_TestAudit"
$specName = $instance+"_"+$database+"_AuditSpec"

$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') -argumentlist $instance
$db = New-Object Microsoft.SqlServer.Management.Smo.Database
$db = $srv.Databases.Item($database)

## Delete the audit spec
$auditSpec = $db.DatabaseAuditSpecifications | where {$_.Name -match $specName}

if ( $auditSpec.Name -eq $null )
{
	Write-Host "Database Audit Spec $specName does not exist."
}
else
{
	$auditSpec.Disable()
	$auditSpec.Drop()
	Write-Host "Database Audit Spec $specName dropped."
}

## Delete the audit object
$delAudit = $srv.Audits | where {$_.Name -match $auditName}

if ( $delAudit.Name -eq $null )
{
	Write-Host "Audit $auditName does not exist."
}
else
{
	$delAudit.Disable()
	$delAudit.Drop()
	Write-Host "Audit $auditName dropped."
}

Copying an Audit Specification

The last activity I want to cover today is copying an audit specification from one database to another. This way we can define a “master” audit spec to use as a template for deployment to other databases/instances. Let’s imagine I have a database called SQLAudit where I’ve created such a template. And I want to copy that audit spec definition to AdventureWorks.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null

$instance = 'MyServer'
$masterDB = 'SQLAudit'
$masterSpec = "Master_DB_AuditSpec"

$targetDB = 'AdventureWorks'
$targetSpec = $instance+"_"+$targetDB+"_AuditSpec"

$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') -argumentlist $instance
$mdb = New-Object Microsoft.SqlServer.Management.Smo.Database
$mdb = $srv.Databases.Item($masterDB)

$mAuditSpec = $mdb.DatabaseAuditSpecifications | where {$_.Name -match $masterSpec}

if ( $mauditSpec.Name -eq $null )
{
	Write-Host "Master Database Audit Spec $masterSpec does not exist."
}
else
{

	$tdb = New-Object Microsoft.SqlServer.Management.Smo.Database
	$tdb = $srv.Databases.Item($targetDB)

	$newAuditSpec = new-object Microsoft.SqlServer.Management.Smo.DatabaseAuditSpecification($tdb, $targetSpec)
	$newAuditSpec.AuditName = $mauditSpec.AuditName
	$newAuditSpec.AddAuditSpecificationDetail($mauditSpec.EnumAuditSpecificationDetails())
	$newAuditSpec.Create()
	$newAuditSpec.Enable()

	Write-Host "Database Audit Spec $targetSpec created."

}

The only somewhat tricky part of copying an audit spec definition is that you can’t directly copy the actions. You need to use the EnumAuditSpecificationDetails function to assign the details from the original specification to the new one. Everything else is easy.

What’s Next?

Now that we’ve got the basics of managing audits using PowerShell under our belts, we’re going to use these skills to deploy the audit solution I talked about previously. That’s what we’ll do next in SQL Server Audit 302 – Deploying Audit Solution with PowerShell. Good stuff people!!

SQL Server Audit Series

This blog series was designed to help you audit database changes.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization with SQL Server Audit.
  1. SQL Server Auditing – Getting Started
  2. SQL Server Audit 101 – Creating Basic Audit
  3. SQL Server Audit 102 – Reading Audit Output 
  4. SQL Server Audit 201 – Creating Audit Solution
  5. SQL Server Audit 301 – Using PowerShell to Manage Audits
  6. SQL Server Audit 302 – Deploying Audit Solution with PowerShell

About the Author

SQL Server Consultant

Colleen Morrow

UpSearch Alum Colleen Morrow is a database strategist, community advocate, author, blogger and public speaker. She is passionate about helping technology leaders use Microsoft's SQL Server to protect, optimize and unlock data's value.

Colleen has been working with relational databases for almost 20 years. Since 2000, Colleen has specialized in SQL Server and Oracle database management solutions. She excels at performance tuning, troubleshooting mission critical SQL Server environments, and training technology professionals and business users of all levels.

Since 2011, Colleen has maintained a SQL Server focused blog at http://colleenmorrow.com. She is an active member of the Ohio North SQL Server User Group, as well as a volunteer for the Professional Association for SQL Server (PASS). Colleen earned a Bachelor of Science in Computer and Information Systems from Cleveland State University.

Learn more about Colleen Morrow at https://upsearch.com/colleen-morrow/.

About UpSearch


UpSearch is a company of data management and analytics experts who enable digital maturity with Microsoft’s technologies. Its mission is to enable every leader to unlock data’s full potential. UpSearch provides full lifecycle support for SQL Server, SQL Server in Azure (IaaS), Azure SQL DB (PaaS), Azure SQL DW (PaaS), Analytics Platform System (APS), and Power BI.

SQL Server Audit 201 Creating Audit Solution

Auditing doesn’t have to be scary. SQL Server Audit 201 – Creating Audit Solution is part of a blog series designed to help you audit database changes with SQL Server Audit.  Contact us if you have any questions about how to select and implement the right auditing solution for your organization.  

SQL Server Audit 201 – Creating Audit Solution

Originally published on ColleenMorrow.com.

Welcome back, folks! Now that we’ve covered the basics of creating a SQL Audit and viewing the output, let’s put the pieces together to form a complete solution. As with any implementation, the key to success lies in careful planning. Before we take off running, we need to take a moment to figure out exactly where we want to go.

My requirements

  • What do I want to audit? DDL changes, excluding specific objects and excluding index/statistics maintenance events.
  • Where do I want to run the audit? AdventureWorks, with the option to audit additional databases in the future.
  • Where do I want the output to go? All audit data should be stored in a central audit database for archiving and reporting.
  • How will the audit output be processed? I’ll be using SSRS to generate reports.

Design

Now, there are a few ways I could do this, depending on what version of SQL Server I’m using. If I’m running SQL 2008, unless I want to audit all my databases, I’m limited to creating a Database Audit Specification in AdventureWorks to track the DDL changes. I would also have no way to eliminate index maintenance events from the audit, so those would need to be processed out later. However, if I’m running SQL 2012, I have the option of creating a Server Audit Specification and using a filter on the audit to limit my auditing to AdventureWorks DDL events and exclude any index maintenance commands. Like this:

CREATE SERVER AUDIT [DDLAudit]
TO FILE
(	FILEPATH = N'D:\SQL2012\Audits\'
	,MAXSIZE = 100 MB
	,MAX_ROLLOVER_FILES = 10
	,RESERVE_DISK_SPACE = OFF
)
WITH
(	QUEUE_DELAY = 0
	,ON_FAILURE = CONTINUE
	,AUDIT_GUID = 'd50cf1ad-2927-44c7-afd0-0c31d302ca35'
)
WHERE ([database_name]='AdventureWorks' AND NOT [statement] like 'ALTER INDEX%REBUILD%' AND NOT [statement] like 'ALTER INDEX%REORGANIZE%')
GO

To maintain backward compatibility with SQL 2008, I’m going to avoid using the filter for this demo.

CREATE SERVER AUDIT [DDLAudit]
TO FILE
(	FILEPATH = N'D:\SQL2012\Audits\'
	,MAXSIZE = 100 MB
	,MAX_ROLLOVER_FILES = 10
	,RESERVE_DISK_SPACE = OFF
)
WITH
(	QUEUE_DELAY = 0
	,ON_FAILURE = CONTINUE
	,AUDIT_GUID = 'd50cf1ad-2927-44c7-afd0-0c31d302ca35'
)
GO
USE AdventureWorks
GO
CREATE DATABASE AUDIT SPECIFICATION [AdventureWorks_DB_DDLAudit]
FOR SERVER AUDIT [DDLAudit]
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = ON)
GO

Ok, so that takes care of my first two requirements. The next item on my wish list was to have my audit records stored in a centralized audit database. Right now, that isn’t happening; all my audit info is being written to .sqlaudit files on my D: drive. So my next order of business is to build a database repository to hold them.

I’ll create a centralized SQLAudit database. Ideally it would be isolated from whatever instance I’m auditing, but in this case it’s located on the same instance. In this audit database I’ll create a handful of tables:

  • AuditExclude – list of all objects I want excluded from my audit output.
  • AuditStage – the staging table for my audit records, before filtering; the contents of my audit file(s) will be pulled directly into this table.
  • AuditRecord – final storage for filtered/processed audit output.
  • AuditLocator – stores the current audit file and offset to be used as the starting point for the next run; by knowing where I left off, I won’t end up re-processing audit records.
  • AuditLoadLog – run log; this just keeps track of the number of records staged and retained from each run.
USE [SQLAudit]
GO

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[AuditRecord]') AND type in (N'U'))
DROP TABLE [dbo].[AuditRecord]
GO

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[AuditStage]') AND type in (N'U'))
DROP TABLE [dbo].[AuditStage]
GO

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[AuditLocator]') AND type in (N'U'))
DROP TABLE [dbo].[AuditLocator]
GO

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[AuditLoadLog]') AND type in (N'U'))
DROP TABLE [dbo].[AuditLoadLog]
GO

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[AuditExclude]') AND type in (N'U'))
DROP TABLE [dbo].[AuditExclude]
GO

CREATE TABLE [dbo].[AuditExclude](
	[InstanceName] [nvarchar](128) NULL,
	[DatabaseName] [varchar](50) NULL,
	[SchemaName] [sysname] NOT NULL,
	[ObjectName] [varchar](50) NULL,
	[ObjectType] [varchar](50) NULL,
	[Reason] [varchar](100) NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[AuditRecord](
	[audit_name] [varchar](128) NOT NULL,
	[event_time] [datetime2](7) NOT NULL,
	[sequence_number] [int] NOT NULL,
	[action_id] [varchar](4) NULL,
	[succeeded] [bit] NOT NULL,
	[permission_bitmask] [bigint] NOT NULL,
	[is_column_permission] [bit] NOT NULL,
	[session_id] [smallint] NOT NULL,
	[server_principal_id] [int] NOT NULL,
	[database_principal_id] [int] NOT NULL,
	[target_server_principal_id] [int] NOT NULL,
	[target_database_principal_id] [int] NOT NULL,
	[object_id] [int] NOT NULL,
	[class_type] [varchar](2) NULL,
	[session_server_principal_name] [nvarchar](128) NULL,
	[server_principal_name] [nvarchar](128) NULL,
	[server_principal_sid] [varbinary](85) NULL,
	[database_principal_name] [nvarchar](128) NULL,
	[target_server_principal_name] [nvarchar](128) NULL,
	[target_server_principal_sid] [varbinary](85) NULL,
	[target_database_principal_name] [nvarchar](128) NULL,
	[server_instance_name] [nvarchar](128) NULL,
	[database_name] [nvarchar](128) NULL,
	[schema_name] [nvarchar](128) NULL,
	[object_name] [nvarchar](128) NULL,
	[statement] [nvarchar](4000) NULL,
	[additional_information] [nvarchar](4000) NULL,
	[file_name] [nvarchar](260) NOT NULL,
	[audit_file_offset] [bigint] NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[AuditStage](
	[audit_name] [varchar](128) NOT NULL,
	[event_time] [datetime2](7) NOT NULL,
	[sequence_number] [int] NOT NULL,
	[action_id] [varchar](4) NULL,
	[succeeded] [bit] NOT NULL,
	[permission_bitmask] [bigint] NOT NULL,
	[is_column_permission] [bit] NOT NULL,
	[session_id] [smallint] NOT NULL,
	[server_principal_id] [int] NOT NULL,
	[database_principal_id] [int] NOT NULL,
	[target_server_principal_id] [int] NOT NULL,
	[target_database_principal_id] [int] NOT NULL,
	[object_id] [int] NOT NULL,
	[class_type] [varchar](2) NULL,
	[session_server_principal_name] [nvarchar](128) NULL,
	[server_principal_name] [nvarchar](128) NULL,
	[server_principal_sid] [varbinary](85) NULL,
	[database_principal_name] [nvarchar](128) NULL,
	[target_server_principal_name] [nvarchar](128) NULL,
	[target_server_principal_sid] [varbinary](85) NULL,
	[target_database_principal_name] [nvarchar](128) NULL,
	[server_instance_name] [nvarchar](128) NULL,
	[database_name] [nvarchar](128) NULL,
	[schema_name] [nvarchar](128) NULL,
	[object_name] [nvarchar](128) NULL,
	[statement] [nvarchar](4000) NULL,
	[additional_information] [nvarchar](4000) NULL,
	[file_name] [nvarchar](260) NOT NULL,
	[audit_file_offset] [bigint] NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[AuditLocator](
	[audit_name] [varchar](128) NULL,
	[file_name] [nvarchar](260) NOT NULL,
	[audit_file_offset] [bigint] NOT NULL,
	[file_pattern] [nvarchar](260) NULL,
	[locator_id] int identity(1,1) not null,
	[active] char(1) default 'Y'
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[AuditLoadLog](
	[audit_name] [varchar](128) NULL,
	[staged_count] [int] NOT NULL,
	[saved_count] [int] NOT NULL,
	[run_date] datetime DEFAULT getdate()
) ON [PRIMARY]
GO

Now, since I already have an audit in place that I’d like to load, I’ll seed the AuditLocator table with that information.

USE master
GO
declare @initfile nvarchar(260)
select @initfile = log_file_path + log_file_name from sys.server_file_audits where name = 'DDLAudit'
set @initfile = STUFF(@initfile, len(@initfile) - charindex('.', reverse(@initfile)), 1, '*')
Insert into SQLAudit.dbo.AuditLocator  (audit_name, file_name, audit_file_offset, file_pattern)
SELECT top 1 'DDLAudit', file_name, audit_file_offset, @initfile FROM fn_get_audit_file (@initfile, default,  default) order by event_time asc
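A quick sanity check that the seed row landed never hurts; an illustrative query against the table we just populated:

```sql
-- Confirm the starting file, offset, and wildcard pattern for the audit
SELECT audit_name, file_name, audit_file_offset, file_pattern, active
FROM SQLAudit.dbo.AuditLocator
WHERE audit_name = 'DDLAudit';
```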

Now that I’ve got someplace to store my audit data permanently, I need a way to get it out of the external file(s) and into my database. So I’ll also be creating a stored procedure to read my audit files, massage the data, and save it into my AuditRecord table. That would be the LoadAuditData procedure. Let’s take a look.

USE SQLAudit
GO
create procedure LoadAuditData
as
begin
	declare @audit varchar(128),
			@file nvarchar(260),
			@offset bigint,
			@pattern nvarchar(260),
			@staged int,
			@saved int

	set nocount on

	declare cAudits cursor for
		select audit_name, file_name, audit_file_offset, file_pattern
		from AuditLocator
		where active = 'Y'
	FOR UPDATE

The first thing I’m going to do is find all the active audits in my AuditLocator table and grab the audit file location, the current file, and the current offset. I decided that, if I discontinue or move an audit for some reason, rather than deleting it from the AuditLocator table, I would just mark it inactive. That way I’ll have a historical record for my reference.

open cAudits
	fetch cAudits into @audit, @file, @offset, @pattern
	while @@fetch_status = 0
	begin

		set @staged = 0
		set @saved = 0

		insert into AuditStage
		SELECT @audit, * FROM fn_get_audit_file (@pattern, @file,  @offset)

		set @staged = @@rowcount

		insert into AuditRecord
		SELECT * from AuditStage a
		WHERE NOT EXISTS (SELECT 1 FROM dbo.AuditExclude ae WHERE
				a.server_instance_name = ae.InstanceName and
				a.database_name = ae.DatabaseName and
				a.schema_name = ae.SchemaName and
				a.object_name = ae.ObjectName)
		and statement not like '%STATISTICS%'
		and statement NOT LIKE 'ALTER INDEX%REBUILD%'
		and statement NOT LIKE 'ALTER INDEX%REORGANIZE%'

		set @saved = @@rowcount

For each of my active audits, I read in all audit records, starting with the file and offset I pulled out of AuditLocator. All audit records are loaded into AuditStage, then I pull only the records I care about into AuditRecord. In this case, I’m eliminating any statistics or index maintenance statements, and anything dealing with my excluded objects.

		select top 1 @file = file_name, @offset = audit_file_offset from AuditStage order by event_time desc

		update AuditLocator set file_name = @file, audit_file_offset = @offset
		where current of cAudits

The next step is to grab the most current record from the AuditStage table, and that’s going to be my jumping off point for the next run. This was really the major purpose of the staging table. I could easily have loaded the audit records directly from my audit file(s) into AuditRecord, filtering at the same time. But what if none of the records met my criteria? I wouldn’t have any way to update my AuditLocator record and I’d end up re-processing the same records next time. This way, I know I processed this set of records and I know none of them met my criteria, so I can move on to the next set.

		insert into AuditLoadLog (audit_name, staged_count, saved_count) values (@audit, @staged, @saved)
		DELETE AuditStage
		fetch cAudits into @audit, @file, @offset, @pattern
	end
	close cAudits
	deallocate cAudits
end

Finally I log my record counts into the AuditLoadLog table and clear out my staging table for the next run.
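And retiring an audit, as described above, is just a flag flip rather than a delete; something like this (the audit name is illustrative):

```sql
-- Deactivate the locator row instead of deleting it, preserving
-- the file/offset history for reference
UPDATE SQLAudit.dbo.AuditLocator
SET active = 'N'
WHERE audit_name = 'DDLAudit';
```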

Reviewing my requirements

  • What do I want to audit? DDL changes, excluding specific objects and excluding index/statistics maintenance events.
    • Using the SCHEMA_OBJECT_CHANGE_GROUP I’ll record the DDL changes I’m interested in. I’m not filtering the audit, but I’m able to filter the output before it gets loaded into the AuditRecord table, using procedure logic and the AuditExclude table.
  • Where do I want to run the audit? AdventureWorks, with the option to audit additional databases in the future.
    • I’m using a database audit specification to audit only AdventureWorks right now. In the future, I could create additional audits and simply add their information to the AuditLocator table for processing. As long as my audit files are on an accessible network share, I’m good to go.
  • Where do I want the output to go? All audit data should be stored in a central audit database for archiving and reporting.
    • Got it. Right now that database is on the same local server, but it could easily be on a dedicated remote instance, away from prying eyes.
  • How will the audit output be processed? I’ll be using SSRS to generate reports.
    • Because I’m loading my data into a single database, creating SSRS reports to view audit data for a particular database, or to track changes across audits, will be a snap.
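As a starting point for those SSRS reports, a simple query against AuditRecord might look like this. This is just a sketch; the column names come from the table defined earlier, and the 7-day window is arbitrary:

```sql
-- Recent DDL changes across all audited databases, newest first
SELECT event_time,
       server_instance_name,
       database_name,
       [schema_name],
       [object_name],
       session_server_principal_name AS changed_by,
       [statement]
FROM SQLAudit.dbo.AuditRecord
WHERE event_time >= DATEADD(DAY, -7, GETDATE())
ORDER BY event_time DESC;
```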

Considerations

This audit solution has two things working against it: the lack of filtering in the audit and the use of rollover files. Let’s say I size my audit files really small and set a low limit for my maximum rollover files, say 3 files. I make a few DDL changes and they get logged to File 1. Then I kick off a big index maintenance or update stats job, and the audit records from that job fill up the rest of File 1, File 2, and File 3. Once File 3 is full, File 1 will be deleted and File 4 will be created. If I haven’t processed my audit files in a while, valid audit records in File 1 could be lost. So it’s important to size your files appropriately for the amount of audit data you expect, keep a good number of rollover files, and, most importantly, process your files often.
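One way to keep an eye on how close you are to a rollover is the sys.dm_server_audit_status DMV, which reports the current target file and its size for each enabled audit (audit_file_size is in bytes):

```sql
-- Current target file and its size, in KB, for the DDLAudit audit
SELECT name,
       status_desc,
       audit_file_path,
       audit_file_size / 1024 AS audit_file_size_kb
FROM sys.dm_server_audit_status
WHERE name = 'DDLAudit';
```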

What’s next?

What if I want to deploy SQLAudit to a number of instances/databases? I could point and click my way through it, but that’s asking for a typo or configuration mistake. I could script it in T-SQL, but that’s so last year. So how about we use PowerShell? In SQL Server Audit 301 – Using PowerShell to Manage Audits, we’ll go over ways to create, drop, and manage SQL Audit via PowerShell and we’ll use a master audit as a template for deploying audits to the instance/database of our choosing.
