Handling and Tuning Really Large Analysis Services Cubes

Just trying to speed up our cubes at the moment and found this blog post really useful, thanks!

Pause, Resume and Run a Reporting Services SharePoint Shared Schedule

I’ve recently re-entered the world of SharePoint Reporting Services integration and needed a way to run a shared schedule from a SQL job.

The scenario is this – we have a job that runs in the morning to populate the data warehouse and then process the cubes. What I want to happen next is for a number of reports to run against the cubes, to warm up the SSAS cache and wake up Reporting Services ready for the business users.

So I have created the stored proc below to do just that – to make it a bit more useful I have also built in the ability to enable or disable a shared schedule. That way if the morning load fails we can programmatically disable the daily reports so they don’t go out with missing data.

Here’s the sproc….


USE ReportServer
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[usp_Control_SharedSchedules]
(
@Schedule varchar(200)
,@Action varchar(8)
)
AS
BEGIN
/*
============================================================
= usp_Control_SharedSchedules 'Runs After Cube Processing Complete', 'Enable'
============================================================
= Description
= ------------------
= Author : Dan Wakefield
= Created Date : 1 Aug 2014
=
= History
= ------------------
= Created to allow us to kick off a set of reports to warm the cache
= after the cubes have finished processing.
=
= Can also be used to disable a schedule if the load fails and enable it again after it succeeds.
=
= Input
= --------
= @Schedule = Name of shared schedule e.g. 'Daily 10am'
= @Action = What do you want to happen? Disable, Enable or Run
(enable means it will run next time it reaches its usual scheduled time, run means run now)
=
= Return
= ----------
= RS
============================================================
*/
SET DATEFIRST 1
SET NOCOUNT ON
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

/*
===============================================================================================
= Declarations
===============================================================================================
*/

DECLARE @job_name VARCHAR(128)

-- SSRS creates one SQL Agent job per shared schedule, named after the schedule's GUID
SELECT @job_name = CAST(ScheduleID AS VARCHAR(128))
FROM Schedule
WHERE Name = @Schedule

IF @job_name IS NULL
BEGIN
RAISERROR('Shared schedule ''%s'' was not found in dbo.Schedule', 16, 1, @Schedule)
RETURN
END

/*
===============================================================================================
= Processing Code
===============================================================================================
*/

IF @Action = 'Run'
BEGIN
-- Run a shared schedule
-- This will kick off all reports that are tied to the named shared schedule
EXEC msdb.dbo.sp_start_job @job_name
END

IF @Action = 'Disable'
BEGIN
-- Disable schedule
-- No reports tied to this schedule will run until re-enabled
-- (sp_update_job is the supported way to change a job, rather than updating msdb.dbo.sysjobs directly)
EXEC msdb.dbo.sp_update_job @job_name = @job_name, @enabled = 0
END

IF @Action = 'Enable'
BEGIN
-- Enable schedule
-- All reports tied to this schedule will run next time the scheduled time arrives
EXEC msdb.dbo.sp_update_job @job_name = @job_name, @enabled = 1
END

END
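To tie it back to the scenario above, the last step of the morning load job can then call the proc. The schedule names here are just the examples from the header comments – swap in your own:

```sql
-- After cube processing succeeds: warm the SSAS cache by running the reports now
EXEC ReportServer.dbo.usp_Control_SharedSchedules 'Runs After Cube Processing Complete', 'Run';

-- If the load fails: stop the daily reports going out with missing data
EXEC ReportServer.dbo.usp_Control_SharedSchedules 'Daily 10am', 'Disable';

-- Once it succeeds again
EXEC ReportServer.dbo.usp_Control_SharedSchedules 'Daily 10am', 'Enable';
```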

Comparing strings with trailing spaces

select top 1 * from database.dbo.table WHERE 1=0 -- returns no results

-- makes sense

select top 1 * from database.dbo.table WHERE 1=1 -- returns 1 row

-- yep

select top 1 * from database.dbo.table WHERE 'frog' = 'frog' -- returns 1 row

-- cool

select top 1 * from database.dbo.table WHERE 'frog' = 'frog  ' -- returns 1 row

-- eh!!?

Apparently it’s by design – http://support.microsoft.com/kb/316626

I didn’t know that!
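A quick sketch for when you need an exact match: the padding only happens in the `=` comparison, so LEN (which ignores trailing spaces), DATALENGTH (which doesn't) and LIKE (which doesn't pad) can all tell the two strings apart:

```sql
SELECT CASE WHEN 'frog' = 'frog  ' THEN 'equal' ELSE 'not equal' END;   -- equal (padded before comparing)

SELECT LEN('frog  ') AS LenResult,          -- 4 (ignores trailing spaces)
       DATALENGTH('frog  ') AS DataLength;  -- 6

SELECT CASE WHEN 'frog' = 'frog  '
             AND DATALENGTH('frog') = DATALENGTH('frog  ')
            THEN 'equal' ELSE 'not equal' END;                          -- not equal

-- LIKE treats trailing blanks in the pattern as significant
SELECT CASE WHEN 'frog' LIKE 'frog  ' THEN 'match' ELSE 'no match' END; -- no match
```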

Exporting user list from SharePoint 2010

Here’s the magic URL – Just replace the bits in square brackets with the appropriate values for your site

http://[YOURSITE]/_vti_bin/owssvr.dll?CS=109&Using=_layouts/query.iqy&List=[LISTID]&View=[VIEWID]&CacheControl=1

[YOURSITE] – This is the main root of your SharePoint url

[LISTID] and [VIEWID] – Get these by going to ‘Site Actions’ -> ‘Site Settings’ -> ‘People and Groups’.  Click on ‘Settings’ -> ‘List Settings’ then down at the bottom under ‘Views’ click on ‘List View’.  Now look in your URL and you will see ‘List=’ and ‘&View=’ – these are the GUIDs you need for LISTID and VIEWID.

Once you have constructed your URL in Notepad, paste it into Internet Explorer and hit enter. This should prompt you to open or save the results; click open and follow the instructions in Excel.

Changing object ID in SSAS BIDS

Developing an Analysis Services database in BIDS – you have created, say, a dimension based on a table in your DSV called Frog.

You now want to rename it (for reasons that are entirely your own!). Changing the Name is easy via the properties window, but changing the ID is a bit trickier.

The trick is to right click the object (in this case the Frog dimension) in BIDS and select ‘View Code’ – you can then edit the ID there in the first couple of lines.
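For reference, the top of the dimension's code looks something like this. It's an illustrative sketch only – the real ASSL file carries extra namespaces and many more elements:

```xml
<Dimension xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ID>Frog</ID>       <!-- the ID, editable here -->
  <Name>Frog</Name>   <!-- the Name, which the properties window changes -->
  <!-- ... rest of the dimension definition ... -->
</Dimension>
```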

Splitting delimited strings in SQL

This is an issue I come back to a couple of times a year and always forget how I did it last time, so here's the info…

I recently found a great blog that compares all the different methods of splitting delimited strings in SQL Server (there are lots!). Interestingly, it isn't always the same solution that is fastest; it depends on the number of elements in your strings…

http://www.sqlservercentral.com/articles/Tally+Table/72993/

As I'm never expecting more than 30 elements in my strings, I've settled on the XML method. I found a good example of it here: http://www.sqlservercentral.com/articles/XML/66932/

However this wasn't working for me, as I have special characters in my strings such as * and / which are illegal characters in XML. To get around this I had a chat with our DBA and he suggested wrapping each element in "<![CDATA[" to act, effectively, as a text qualifier. This works great; see the code below…


Since playing with this a bit, I ran into a problem with strange ASCII characters in my strings which were causing it to fail. The trick was to find out what the character was (in my case it was a ‘record separator’) and then remove it using REPLACE and CHAR() – mine was a CHAR(30).
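For example, on a made-up string containing a record separator:

```sql
-- CHAR(30) is the ASCII record separator; strip it out before splitting
SELECT REPLACE('abc' + CHAR(30) + ',def', CHAR(30), '') AS Cleaned;  -- 'abc,def'
```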

 

CREATE FUNCTION [dbo].[ufnSplitXml]
   (@List nvarchar(max), @Delim char(1) = ',')
RETURNS @Values TABLE (Item nvarchar(100)) AS

--  This function accepts a parameter @List containing the string to be split e.g. 'abc,def,ghi'
--  The default delimiter is comma but you can specify others if required.
--  Use a cross apply to join to this table valued function.
--  Note: the CDATA wrapping copes with XML-illegal characters, but will still
--  break if the string itself contains the sequence ']]>'.

  BEGIN

    DECLARE @xml XML

    -- Trim a trailing delimiter so we don't emit an empty final element
    IF RIGHT(@List, LEN(@Delim)) = @Delim
    BEGIN
          SET @List = LEFT(@List, LEN(@List) - LEN(@Delim))
    END

    -- Wrap each element in <t><![CDATA[ ... ]]></t> so special characters survive
    SET @xml = N'<t><![CDATA[' + REPLACE(@List, @Delim, ']]></t><t><![CDATA[') + ']]></t>'

    INSERT INTO @Values
    SELECT  r.value('.', 'varchar(MAX)') AS Item
    FROM  @xml.nodes('/t') AS records(r)

  RETURN
  END
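Usage looks like this, either calling the function directly or joining to it with CROSS APPLY (the Orders table and ProductCodes column are made-up names for illustration):

```sql
SELECT Item FROM dbo.ufnSplitXml('abc,def,ghi', ',');
-- Item
-- ----
-- abc
-- def
-- ghi

-- Splitting a delimited column per row (hypothetical table/column):
SELECT o.OrderId, s.Item
FROM dbo.Orders o
CROSS APPLY dbo.ufnSplitXml(o.ProductCodes, ',') s;
```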

Batching SQL Server Updates

We sometimes have to apply updates to very large tables of data, and it's usually wise to do this in batches.

Here's an example of an update being applied in 100,000 row batches. The RAISERROR command with the NOWAIT option puts a message in the Messages window as soon as each batch completes, rather than buffering output until the whole loop finishes.


DECLARE @i INT = 0,
        @j INT,
        @msg VARCHAR(100)

SELECT @j = MAX(HouseholdKey) FROM DimHousehold

WHILE @i <= @j
BEGIN
    BEGIN TRAN
        UPDATE dh
        SET dh.AvgDriveTime = fo.AvgDeliveryTime
        FROM DimHousehold dh
        INNER JOIN #OrderData fo
            ON dh.HouseholdKey = fo.HouseholdKey
            AND dh.DeliveryType = 'Delivery'
        WHERE dh.HouseholdKey BETWEEN @i AND @i + 99999
    COMMIT TRAN

    -- Allow the log to be reused between batches (simple recovery model)
    CHECKPOINT

    SELECT @msg = CAST(@i AS VARCHAR(20)) + ' - ' + CAST(@i + 99999 AS VARCHAR(20)) + ' household keys updated'
    RAISERROR(@msg, 1, 1) WITH NOWAIT
    SET @i = @i + 100000
END


Resources for BI Strategy

BeyeNETWORK – http://www.b-eye-network.com

http://www.b-eye-network.com/view/8137 (part 1) + http://www.b-eye-network.com/view/8360 (part 2)

The four best and worst practices in Business Intelligence

http://www.itbusinessedge.com/slideshows/show.aspx?c=83418

The Data Warehousing Institute (TDWI) – http://tdwi.org

http://tdwi.org/blogs/wayne-eckerson/2010/03/high-performance-teams.aspx

http://tdwi.org/whitepapers/2013/03/process-driven-bi-building-smarter-business-processes.aspx

http://tdwi.org/portals/agile-bi.aspx

Kimball – http://www.kimballgroup.com/

http://www.kimballgroup.com/data-warehouse-business-intelligence-resources/kimball-techniques/dw-bi-lifecycle-method/

http://www.kimballgroup.com/data-warehouse-business-intelligence-consulting/dw-bi-strategy-assessment/

Self Documenting Cubes

How cool is this!

 

http://www.purplefrogsystems.com/blog/2010/09/olap-cube-documentation-in-ssrs-part-2/