Implementing a data warehouse: SSIS Connection Managers

Originally published on ColleenMorrow.com.

This post is part of a series on implementing a data warehouse with Microsoft SQL Server 2012.

One of the first items you’ll create when developing a new project in SQL Server Integration Services (SSIS) is a connection manager.  Connection managers allow control flow and data flow tasks within SSIS packages to access a variety of data sources and destinations.  Depending on the type of connection manager you create, you’ll be able to access relational database sources like SQL Server or Oracle, web services, external files, or FTP servers.  The connection manager you select not only determines which data store you can access, it can also determine what functionality is available to you, so choose wisely.

The following connection managers are built-in:

  • SQL Server
    • ADO.NET (preferred for stored procedures and parameterized queries; it’s easier to work with and supports a wider range of data types)
    • ODBC
    • OLE DB (deprecated, use ODBC instead)
    • SQL Server Compact Edition
  • SQL Server administrative tasks
    • SMO
  • File operations
    • File or Multiple File
    • Flat File or Multiple Flat File
    • FTP
  • Other
    • Analysis Services
    • DQS (Data Quality Services)
    • Excel
    • HTTP
    • MSMQ (Microsoft Message Queuing)
    • SMTP
    • WMI (Windows Management Instrumentation)

Once you’ve created a connection manager, you can reuse it as often as needed within the package/project, in a variety of roles.  For example, an ODBC connection to the AdventureWorks2012 database can be used as the source for a data flow task, a data flow destination, the connection for an Execute SQL Task, etc.  It’s not necessary to create a separate connection manager for each task.

32-bit or 64-bit

One of the most confusing aspects of developing SSIS projects for newbies is knowing when you’re running in 32-bit and when you’re running in 64-bit.  The SSDT development environment uses 32-bit providers, so you’ll only see providers for which a 32-bit version of the adapter is installed on the workstation you’re developing on.  When you execute the project outside of the development environment, the version of the adapter used is determined by the OS you’re running on.  As you start to develop projects in SSDT, you may notice a project property called Run64BitRuntime.  Don’t be confused: this property is only used at design time, not at runtime.

Scope

Within SSIS, there is a hierarchy of scope.  For example, a project can contain one or more packages.  Those packages can contain one or more connection managers.  Additionally, the project itself can contain one or more connection managers.  Where a connection manager is available is dependent on where it was created.  A package-scoped connection manager is created within the scope of a specific package and may only be accessed by tasks within that package.  Connection managers that are project-scoped, however, are available to tasks in any package within the project.  If you define both a project-scoped and a package-scoped connection manager with the same name, the package-scoped connection manager’s settings will take precedence within that package.

Summary

Connection managers are relatively simple, yet critical components in SSIS development. Familiarize yourself with the various types available and the pros and cons of each.  And bear in mind that third-party connection managers are also available.

About the Author

SQL Server Consultant

Colleen Morrow

Colleen Morrow is SQL Server Consultant at UpSearch.

Colleen is a database strategist, community advocate, author, blogger and public speaker. She is passionate about helping technology leaders use Microsoft’s SQL Server to protect, optimize and unlock data’s value.

Colleen has been working with relational databases for almost 20 years. Since 2000, Colleen has specialized in SQL Server and Oracle database management solutions. She excels at performance tuning, troubleshooting mission critical SQL Server environments, and training technology professionals and business users of all levels.

Since 2011, Colleen has maintained a SQL Server focused blog at http://colleenmorrow.com. She is an active member of the Ohio North SQL Server User Group, as well as a volunteer for the Professional Association for SQL Server (PASS). Colleen earned a Bachelor of Science in Computer and Information Systems from Cleveland State University.


Implementing a data warehouse: Introduction to SSDT

Originally published on ColleenMorrow.com.

This post is part of a series on implementing a data warehouse with Microsoft SQL Server 2012.

Once you’ve designed your fact tables and dimensions, decided how you’re going to partition and index your tables, and created your empty structures, it’s time for the fun stuff:  loading data.

There are 2 basic scenarios when it comes to moving data from one repository to another: simple and complex.  For simple data movements, data is read from the source and loaded into the destination as-is.  This can be as simple as an export/import process.  But for most data warehouse environments, a simple data movement isn’t going to cut it.  Most of the time, we need to massage or transform the data prior to loading it into the data warehouse.  This is a complex data movement, and is also known as ETL, or Extract-Transform-Load.

SQL Server Integration Services (SSIS) is Microsoft’s primary tool for developing and implementing the ETL process.  And SQL Server Data Tools (SSDT) is the development environment in SQL Server 2012 for the development and deployment of SSIS projects, as well as SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS) projects.  If you’ve worked with SSIS packages prior to SQL Server 2012, you’re probably familiar with the Business Intelligence Development Studio (BIDS), which was similar to the SSDT interface.

The first time you start up SSDT, you may see a window asking you to choose your default environment settings.  If you’ll primarily be developing SSIS packages, go ahead and choose Business Intelligence Settings.  This will customize your environment for developing business intelligence projects.  You can always change this setting later.  Once Visual Studio opens, click New Project…  Navigate to Installed Templates -> Integration Services, and click Integration Services Project.  Give your project a name and click OK.

An SSDT Tour

 

The SSDT interface

In the upper right-hand corner of the SSDT interface you’ll see the Solution Explorer.  There you should see the Solution and Project you just created, as well as all related files.  Right now, the only file in our project is the default Package.dtsx. Any additional files or project-level connection managers or parameters you create will be listed here as well.

Below the Solution Explorer, in the lower right-hand corner, you’ll see a Getting Started window, listing resources for sample SSIS projects, which can be helpful for getting familiar with the interface.  If you close the Getting Started tab, you’ll see the Properties pane.

Continuing clockwise, at the bottom of the interface you’ll see the Connection Managers pane, where you will define package-level connection managers.  Connection managers allow SSIS to access various data sources and destinations.  We’ll discuss connection managers in more depth next time.

On the left side of the interface you’ll see the SSIS Toolbox pane.  The items listed in the Toolbox are context-specific.  Right now, in the context of the control flow tab, you’ll find control flow tasks like the Execute SQL Task, Execute Process Task, FTP Task, etc.  Click on the Data Flow tab, and you’ll see a different set of items in the Toolbox, ETL tasks that are specific to the data flow.

Toolbox context

We should probably take a step back for a second and talk about control flow vs. data flow.  In SSIS, the control flow defines the order of operations in a package and the precedence constraints that determine when they execute.  The data flow describes the movement of data from the source, through any transformations, to the destination.  The data flow itself is actually a task within the control flow of the package.

Other uses for SSIS

Data movement is the most common use for SSIS, but keep in mind that SSIS is capable of much more, including

  • File system operations and FTP access – if the data you need to load is transported or provided in files, SSIS can be used to access those files
  • Invoking external processes – if, for example, an external service is required for your warehouse load process, the service can be invoked from within SSIS, making integration easier to maintain
  • Database administration tasks – maintenance plans are implemented as SSIS packages
  • OS operations and system state can be accessed via Windows Management Instrumentation (WMI)
  • Sending email
  • Processing SSAS objects

Summary

If you’re not already familiar with the SSDT environment, take some time to poke around.  Download some sample projects and get comfortable with the interface.  We’ll be spending a lot of time there in the next few weeks.


Implementing a data warehouse: Intro to Columnstore Indexes

Originally published on ColleenMorrow.com.

This post is part of a series on implementing a data warehouse with Microsoft SQL Server 2012.

SQL Server 2012 introduced a new way of storing nonclustered indexes, called the columnstore index, which has been shown to improve performance in a data warehouse environment many times over. How, you might ask? Well, the magic of the columnstore index lies in how the data is stored.

Rowstore structure

If you look at a traditional nonclustered index, you’ll see that the data is stored in a row-wise format. For example, let’s assume we have a Customers table with columns for CustomerNum, FirstName, LastName, and BirthDate. In a row-wise storage format, data pages contain rows of data, as illustrated below.

Rowstore storage structure
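For reference, a minimal version of the example table might look like the sketch below.  The data types are assumptions for illustration only, not taken from any particular database.

-- Illustrative only: the Customers table used in this example.
CREATE TABLE dbo.Customers
(
    CustomerNum INT           NOT NULL PRIMARY KEY,
    FirstName   NVARCHAR(50)  NOT NULL,
    LastName    NVARCHAR(50)  NOT NULL,
    BirthDate   DATE          NULL
);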

Let’s say I want to run the following query:

SELECT LastName FROM dbo.Customers

Even though I only want the LastName values, SQL Server still has to read all of the pages in their entirety into memory to satisfy my query (assuming I don’t have an index on last name).  That makes for a lot of wasted space in memory, holding information I don’t really need (first names, birth dates, etc.).  It also means a lot of extra IO operations if the data isn’t already in memory.

Columnstore structure

With columnstore indexes, data is stored in a columnar format, meaning pages contain the data from a single column, as shown below.

Columnstore storage structure

By storing data for a single column on a page, SQL Server can now read in only the data really needed by a particular query. If my query only needs LastName, SQL Server only needs to retrieve the pages for LastName. This leads to fewer IO operations and less wasted memory space. SQL Server also uses a special compression algorithm on columnstore indexes, leading to even greater IO savings.
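To make that concrete, here’s a sketch of creating a nonclustered columnstore index on the FactInternetSales table in AdventureWorksDW2012.  The index name is arbitrary and the column list is abbreviated for readability; in practice you’d usually include all of the fact table’s columns.

-- Sketch: nonclustered columnstore index on a fact table (SQL Server 2012 syntax).
CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactInternetSales
ON dbo.FactInternetSales
(
    ProductKey,
    OrderDateKey,
    CustomerKey,
    OrderQuantity,
    UnitPrice,
    SalesAmount
);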

Batch Mode Processing

But wait, there’s more! In addition to highly compressed columnstore indexes, SQL Server 2012 also introduced batch mode processing. In the data warehouse environment, we’re typically scanning and processing large sets of data. Working with individual rows would be very inefficient, so SQL Server can now choose to process data in batches, rather than one row at a time.

Not all operators can run in batch mode.  In SQL Server 2012, batch mode processing is available for filter, sequence project, scan, hash match (partial) aggregation, hash match (inner join), and hash table build operations.  You can check the Actual Execution Mode and Estimated Execution Mode properties of an operator in the actual execution plan to determine which mode SQL Server is using.  Keep in mind that batch processing is really only used when joining, filtering, or aggregating large data sets.
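As a rough illustration, an aggregation like the one below over a fact table with a columnstore index is the kind of query that may run in batch mode when the data set is large enough; check the Actual Execution Mode property of the scan and hash match operators in the actual plan to confirm.

-- Aggregate a fact table; with a columnstore index and enough data,
-- the scan and hash aggregate may run in batch mode.
SELECT d.CalendarYear,
       SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactInternetSales AS f
JOIN dbo.DimDate AS d
    ON f.OrderDateKey = d.DateKey
GROUP BY d.CalendarYear;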

The cost of columnstore indexes

Unfortunately, there’s no such thing as a free lunch.  SQL Server still needs to ultimately return rows of data, so it has to take the columnstore data and reconstruct the rows that satisfy a query.  That reconstruction takes time and CPU resources.  In SQL Server 2012, nonclustered columnstore indexes are also read-only, which makes their underlying tables read-only as well, so you won’t be using this in your OLTP databases.  (SQL Server 2014 introduced updateable clustered columnstore indexes, but we won’t go into those here.)  If your table is partitioned, the columnstore index must be partition aligned.
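Because of that read-only restriction, a common SQL Server 2012 load pattern (sketched below, reusing the index name from the earlier example) is to drop the columnstore index, load the new data, and then recreate the index; partition switching is another option for partitioned tables.

-- Sketch only: drop the columnstore index, load data, then recreate it.
DROP INDEX csi_FactInternetSales ON dbo.FactInternetSales;

-- ... perform the data warehouse load here ...

CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactInternetSales
ON dbo.FactInternetSales (ProductKey, OrderDateKey, CustomerKey, OrderQuantity, UnitPrice, SalesAmount);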

There can also only be one columnstore index on a table. A common recommendation for the data warehouse is to create a columnstore index on fact tables, putting all of the columns of the fact table in the columnstore index. Larger dimension tables can also benefit from columnstore indexes. Just keep in mind that you don’t want to use this feature on tables that are updated frequently.

Summary

That’s a brief introduction on columnstore indexes and their benefits, as well as their inherent costs. Next time we’ll look at how to implement columnstore indexes in our data warehouse.


 

Implementing a data warehouse: Fact tables

Originally published on ColleenMorrow.com.

This post is part of a series on implementing a data warehouse with Microsoft SQL Server 2012.

Today we’re exploring fact tables in more depth.  If you’ll recall, fact tables store the quantitative data that lies at the heart of the data warehouse.  Without fact tables, our data warehouse would really just be a bunch of mostly unrelated dimension tables, and how boring would that be?

Fact columns

The FactInternetSales fact table in the AdventureWorksDW2012 database

Just like dimension tables, fact tables have their own column types.  They are:

  • Foreign Keys – As in an OLTP database, foreign key columns reference primary key values in the dimension tables with which our fact table is associated.  In the FactInternetSales table, these would include ProductKey, CustomerKey, and OrderDateKey, among others.
  • Business Keys – These carry over the key values from the table we’re using as our primary source for this fact table.  They’re not strictly necessary, but they make it easy to compare and link back to the source data.  For example, the SalesOrderNumber and SalesOrderLineNumber columns form a composite business key in the FactInternetSales table.  They were pulled from the OrderDetails source table.
  • Lineage columns – Just as with dimension tables, these are strictly for auditing purposes.
  • Measures – These columns store measurements of interest associated with a business process.  I’m talking about columns like SalesAmount, OrderQuantity, etc.  They are the heart of our fact table and the values we’re going to aggregate for reporting.  Therefore, they’re usually of a numeric data type.  (A sketch of a fact table with these column types follows this list.)
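To tie these column types together, here’s a minimal fact table sketch.  The table name, the LoadDate lineage column, and the data types are illustrative assumptions rather than actual AdventureWorksDW2012 definitions; the referenced dimension tables simply follow the AdventureWorksDW2012 naming convention.

-- Illustrative sketch only: a fact table showing the column types described above.
CREATE TABLE dbo.FactSales
(
    -- Foreign keys to dimension tables
    ProductKey           INT           NOT NULL,
    CustomerKey          INT           NOT NULL,
    OrderDateKey         INT           NOT NULL,
    -- Business key carried over from the source system
    SalesOrderNumber     NVARCHAR(20)  NOT NULL,
    SalesOrderLineNumber TINYINT       NOT NULL,
    -- Measures
    OrderQuantity        SMALLINT      NOT NULL,
    UnitPrice            MONEY         NOT NULL,
    SalesAmount          MONEY         NOT NULL,
    -- Lineage column (for example, when the row was loaded)
    LoadDate             DATETIME2     NOT NULL,
    CONSTRAINT PK_FactSales PRIMARY KEY (SalesOrderNumber, SalesOrderLineNumber),
    CONSTRAINT FK_FactSales_Product  FOREIGN KEY (ProductKey)   REFERENCES dbo.DimProduct (ProductKey),
    CONSTRAINT FK_FactSales_Customer FOREIGN KEY (CustomerKey)  REFERENCES dbo.DimCustomer (CustomerKey),
    CONSTRAINT FK_FactSales_Date     FOREIGN KEY (OrderDateKey) REFERENCES dbo.DimDate (DateKey)
);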

Additivity

The primary benefit of a data warehouse is the ability to aggregate data for reporting, forecasting, etc.  We don’t often report row-level information; rather, we want to be able to slice and dice sales by quarter, by region, by product type, and so on.  So when we design our fact tables and start adding measures, we need to keep in mind how those measures will be aggregated, because how we aggregate a particular measure can depend on which dimension we’re looking at.

For some measures, it’s pretty straightforward.  When it comes to a measure like sales amount, we can sum it by date, by region, by product type.  If a measure can be summed across all dimensions, it’s referred to as an additive measure.

Sometimes, however, we can sum a measure across all dimensions except for time.  An example of this is a balance amount.  If Bob has $2000 in his checking account and Betty has $3000, then we can sum across the customer dimension and say that they have a sum of $5000.  However, if Bob buys a new laptop today and his checking account balance drops to $500, Bob obviously doesn’t have $2500 total.  We can’t sum the account balance across the time dimension.  We would need to do something like take the average instead, or simply use the last value.  Measures like this are called semi-additive measures.
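As a sketch of how that plays out in T-SQL, assume a hypothetical FactAccountBalance table with CustomerKey, DateKey, and BalanceAmount columns: summing across customers for a single date is fine, but across time you’d take each customer’s latest balance instead of a sum.

-- Additive across the customer dimension for one point in time.
SELECT SUM(BalanceAmount) AS TotalBalance
FROM dbo.FactAccountBalance
WHERE DateKey = 20121031;   -- assumes an integer YYYYMMDD date key

-- Semi-additive across time: use each customer's most recent balance, not a SUM over dates.
SELECT SUM(b.BalanceAmount) AS TotalCurrentBalance
FROM dbo.FactAccountBalance AS b
JOIN (SELECT CustomerKey, MAX(DateKey) AS LastDateKey
      FROM dbo.FactAccountBalance
      GROUP BY CustomerKey) AS latest
    ON b.CustomerKey = latest.CustomerKey
   AND b.DateKey = latest.LastDateKey;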

Finally, some measures can’t ever be summed.  These are called non-additive measures, and include measures like discount percentages and prices.

Summary

In this installment, we reviewed the different types of fact columns and the additivity of measures.  Next time, we’ll talk about implementing fact and dimension tables in the data warehouse.

 


 

Implementing a data warehouse: More about Dimensions

Originally published on ColleenMorrow.com.

This post is part of a series on implementing a data warehouse with Microsoft SQL Server 2012.

Today we’re going to review dimension tables in more depth.  As I stated last time, dimension tables store qualitative, or descriptive data.  For example, the DimCustomer table in the AdventureWorksDW2012 database stores information about each customer; data like first and last name, address, income, marital status, and number of children.  What DimCustomer does not store is any information regarding a customer’s previous purchases.

Dimension Columns

The DimCustomer dimension table in AdventureWorksDW2012

What may not be obvious right off the bat is that there are several types of columns in a dimension table.  And each of these column types has a purpose.

  • Keys – Just like they would in an OLTP environment, key columns are used to uniquely identify rows in the dimension.  In the DimCustomer table, this would be the CustomerKey column.
  • Name columns – Name columns provide a more human-readable way of identifying a row than the key value.  You probably wouldn’t refer to customer 12345; you would refer to the customer as Alison Raje.
  • Member properties – These are descriptive columns that are typically displayed on reports, but they’re not used for pivoting or analyzing data.  For example, in DimCustomer, member properties would be AddressLine1, Phone, EmailAddress, etc.
  • Lineage columns – These columns are strictly for auditing purposes.
  • Attributes – These are the columns you’ll use for pivoting data during analysis.  The thing to keep in mind with attributes is that you want a small number of distinct values.  This usually means grouping values into discrete buckets, a process known as discretizing.  Take an income column, for example: if you pivoted your data on actual income values, your result set would be far too wide to be useful, so income values are usually grouped into ranges, and those ranges become the attribute.  You may still store the actual income value in your dimension, but it becomes a member property.  (A brief discretization sketch follows this list.)
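Here’s a rough sketch of discretizing an income value into range attributes.  The bucket boundaries and the IncomeRange name are arbitrary examples, while YearlyIncome is an actual DimCustomer column in AdventureWorksDW2012.

-- Example: derive a discretized income-range attribute from the raw value.
SELECT CustomerKey,
       YearlyIncome,                            -- kept as a member property
       CASE
           WHEN YearlyIncome < 25000  THEN '0-25K'
           WHEN YearlyIncome < 50000  THEN '25K-50K'
           WHEN YearlyIncome < 100000 THEN '50K-100K'
           ELSE '100K+'
       END AS IncomeRange                       -- the attribute used for pivoting
FROM dbo.DimCustomer;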

Slowly Changing Dimensions

Keep a data warehouse around for long enough and you’re eventually going to have to update your dimension data.  Customers move, products are discontinued, employees are promoted. How you decide to handle those updates will have a big impact on whether you’re able to report on historical data.  Let’s go back to the DimCustomer table.  Customer 12345, aka Alison Raje, lives in Townsville, Queensland, Australia.  But suppose Alison was offered a job in Chicago and decided to move.  Alison’s row in DimCustomer needs to be updated to reflect her new address, but how?  This is known as a Slowly Changing Dimension (SCD), and there are a few options.

  • Type 1 SCD – In a Type 1 SCD you simply overwrite the history of an attribute.  If a customer moves from one city to another, the row for that customer is updated in the dimension and any record of that customer living anywhere else is eradicated.  The upside of this method is its simplicity.  The downside is it will wreak havoc on any historical reporting.  For instance, purchases that Alison made while living in Townsville will now be reported for Chicago.
  • Type 2 SCD – In a Type 2 SCD updates are handled by inserting a new row into the dimension.  This has its caveats.  First of all, a surrogate key must be used to uniquely identify each record.  Second, there needs to be a way to identify the current record.  This can be done by adding a flag column or by using “valid from” and “valid to” dates.
  • Type 3 SCD – In a Type 3 SCD, a limited amount of history is preserved by using additional historical columns.  For instance, you might have a CurrentCity column and a PreviousCity column.  Because the amount of history is restricted by the number of historical columns in the table design, Type 3 SCD has pretty limited usability.

The business users will have a big say in what type of SCD you implement in your data warehouse.  And keep in mind that Type 1 and Type 2 aren’t necessarily mutually exclusive.  Depending on the data, you may be ok with overwriting history on some columns, but need to maintain history for other columns.
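For concreteness, here’s a minimal Type 2 sketch for Alison’s move using “valid to” dates.  The City, ValidFrom, and ValidTo columns and the business key value are hypothetical additions for illustration; they’re not the actual AdventureWorksDW2012 DimCustomer schema.

-- Sketch of a Type 2 change: expire the current row, then insert a new one.
BEGIN TRANSACTION;

UPDATE dbo.DimCustomer
SET ValidTo = '20121101'
WHERE CustomerAlternateKey = 'AW00012345'   -- hypothetical business key value
  AND ValidTo IS NULL;                      -- NULL ValidTo marks the current row

INSERT INTO dbo.DimCustomer (CustomerAlternateKey, FirstName, LastName, City, ValidFrom, ValidTo)
VALUES ('AW00012345', 'Alison', 'Raje', 'Chicago', '20121101', NULL);

COMMIT TRANSACTION;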

Summary

Today we covered dimensions in more detail, including the different types of columns that make up a dimension, and how we can handle updates to dimension data.  Next time we’ll look at fact tables in more depth.

 


 

Implementing a data warehouse: Tables and Schemas

Originally published on ColleenMorrow.com.

The PASS Summit has come and gone, and besides attending sessions and networking, I also managed to pass 2 of my certification exams.  That leaves the much-dreaded 70-463 exam.  So it’s time to start preparing for that exam, and as I go along I’ll post here so you can study along with me.  Ready?

Exam 70-463 focuses on implementing a data warehouse in SQL Server.  Obviously this is a pretty big topic that goes way beyond even what the exam covers, but let’s set the foundation with some fundamentals.  Most of us are very familiar with OLTP database design.  It’s highly normalized and great for line of business (LOB) applications that are primarily using lots of small transactions of very short duration.  The problem with a normalized design is that it’s not so great for analytical reporting.  First of all, normalization means joining many tables which leads to poorer query performance and also makes for much more complex queries.  The other problem is that all of your LOB data reside in different databases, so comparing or aggregating data between different systems is difficult.  To get around this, we build a data warehouse where all of our disparate OLTP data can be merged for analysis and historical reporting.  What makes the data warehouse so great for this type of reporting is its fundamental design, and that’s what we’re going to cover in this post.

Tables

There are two main types of tables in the data warehouse environment: fact tables and dimension tables.

  • Fact – Fact tables store quantitative information for analysis or reporting.  An example of this would be a sales table like the FactInternetSales table in the AdventureWorksDW2012 database.  Every row in the FactInternetSales table records a sale that happened: Customer A purchased Product B on date C, and so on.  However, your typical sales fact table wouldn’t contain the actual customer information.  Take a look at FactInternetSales and you won’t see customer data.  Rather, it stores a foreign key reference to the customer data.  And that’s where dimension tables come in.
  • Dimension – A dimension table contains descriptive attributes that can be used to filter, group, and label facts in the fact table.  They provide context.  For example, the FactInternetSales table mentioned earlier would likely be joined to a Customers dimension table, as well as a Products dimension.  The more dimensions a fact table is associated with, the more ways you can slice the data.  This is referred to as dimensionality or granularity.  (A sample query joining a fact table to its dimensions follows this list.)
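As a quick illustration, the sample query below slices Internet sales by year and product against the standard AdventureWorksDW2012 tables; it assumes only the usual FactInternetSales, DimDate, and DimProduct columns.

-- Slice the fact table by two dimensions: date and product.
SELECT d.CalendarYear,
       p.EnglishProductName,
       SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactInternetSales AS f
JOIN dbo.DimDate    AS d ON f.OrderDateKey = d.DateKey
JOIN dbo.DimProduct AS p ON f.ProductKey = p.ProductKey
GROUP BY d.CalendarYear, p.EnglishProductName
ORDER BY d.CalendarYear, TotalSales DESC;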

Schemas

It’s the logical design or schema of the data warehouse, or how those fact and dimension tables are laid out and connected to each other, that makes the data warehouse so great for reporting.  And your data warehouse will typically use one of two schemas: the Star schema and the Snowflake schema.

  • Star schema – in a Star schema,  a central fact table is surrounded by a single layer of dimension tables.  One Star schema will cover one business area in the DW, but you can (and probably will) have multiple Star schemas in your DW.  These schemas should be connected by sharing dimension tables (shared dimensions).   If the various Star schemas in your DW don’t share dimension tables, you lose the connection between fact tables, making it difficult, if not impossible, to compare data between different fact tables.
Star schema example

  • Snowflake schema – a Snowflake schema will start out as a Star schema, but then some of the dimensions are normalized, creating the snowflake-like pattern.  An advantage of the Snowflake schema is that it can make hierarchies more apparent.  For example, a Product dimension might be normalized to join to a ProductSubCategory dimension, and that might in turn be joined to a ProductCategory dimension.  In general, Snowflake schemas are discouraged for performance (more joins) and maintainability reasons.  (A query sketch over such a snowflaked hierarchy follows the figure below.)
The Snowflake schema starts to add complexity
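As a sketch of querying that snowflaked product hierarchy in AdventureWorksDW2012 (DimProduct to DimProductSubcategory to DimProductCategory, assuming the standard column names), note the extra joins compared with the star query earlier:

-- Rolling sales up the snowflaked product hierarchy requires two extra joins.
SELECT pc.EnglishProductCategoryName,
       SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactInternetSales AS f
JOIN dbo.DimProduct            AS p  ON f.ProductKey = p.ProductKey
JOIN dbo.DimProductSubcategory AS ps ON p.ProductSubcategoryKey = ps.ProductSubcategoryKey
JOIN dbo.DimProductCategory    AS pc ON ps.ProductCategoryKey = pc.ProductCategoryKey
GROUP BY pc.EnglishProductCategoryName;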

Summary

Today we covered the two primary types of table found in the data warehouse, the fact table that holds your quantitative data and the dimension table that holds your qualitative, contextual data.  We also reviewed the two most common schema designs, the Star schema and the Snowflake schema.  Next time we’ll talk more about dimension tables, their columns, and how to handle updates.

 
