Monthly Archives: August 2012

Temp Table Scope

Temporary table scope is actually a little trickier than it might seem. First of all, Microsoft guidance is not particularly clear. Regarding local temp tables, BOL states: “[T]hey are visible only to the current connection for the user, and they are deleted when the user disconnects from the instance of SQL Server.”

Well, that’s not entirely accurate. “Connection” is misleading. Take a look at the following to see what I mean.

In the first run-through we’ll create a temp table the “normal” way; as you’ll see, everything works as expected.

-- If the temp table we intend to create already exists, drop it
IF OBJECT_ID(N'tempdb..#tbl') IS NOT NULL
    DROP TABLE #tbl;

-- Create a temp table
CREATE TABLE #tbl (ID INT);

-- Confirm the temp table was created
SELECT 'ObjectID' = OBJECT_ID(N'tempdb..#tbl');

-- Drop the temp table we created
IF OBJECT_ID(N'tempdb..#tbl') IS NOT NULL
    DROP TABLE #tbl;

-- Confirm the temp table was dropped
SELECT 'ObjectID' = OBJECT_ID(N'tempdb..#tbl');

So far so good. Now, let’s try the same thing again, but this time we’ll run the SQL from within an EXECUTE command.

-- Declare variables
DECLARE @nstrSQL NVARCHAR(MAX);

-- SQL to create the temp table and confirm it exists
SET @nstrSQL = N'CREATE TABLE #tbl (ID INT);
SELECT ''ObjectID'' = OBJECT_ID(N''tempdb..#tbl'');'

-- Create the temp table and confirm creation
EXECUTE sp_executesql @nstrSQL;

-- See if the temp table still exists
SELECT 'ObjectID' = OBJECT_ID(N'tempdb..#tbl');

What?! Within the EXECUTE statement context, the temp table was created and we confirmed that it existed in that context. However, outside the scope of the EXECUTE statement, the table is gone. Yet we are still in the same connection — which you can confirm by looking at the process id inside and outside of the EXECUTE statement context (simply add @@SPID to the SELECT statements).
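
A quick way to confirm it really is all one connection is to compare @@SPID inside and outside the dynamic SQL. A minimal sketch (both SELECTs return the same process id):

```sql
-- Show the process id outside and inside an EXECUTE context
SELECT 'Context' = 'outer', 'SPID' = @@SPID;
EXECUTE sp_executesql N'SELECT ''Context'' = ''inner'', ''SPID'' = @@SPID;';
```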

Thankfully, the global temp table resolves that issue, so it’s not too much of an inconvenience. Except…

With a global temp table you have to be a little more careful about overlapping executions. In other words, if multiple processes can run the same procedure at close to the same time, and that procedure creates a temporary table of the same name, then the processes can collide: one process may read or modify the table while another process is still using it, potentially causing data corruption, or worse.

Well, that’s easy enough to mitigate as well. We just need to make the temp table name effectively unique such that each process will have a temp table of a different name. And that is easy to do with a GUID. When we create the global temp table, we’ll simply add a GUID to the name. That is sufficient to solve the unique name problem, but it does add the complication of having to share the GUID with any other process that might need to use the temp table. But, if a temp table is necessary, then passing an additional variable for the table name is likely a small price to pay to ensure data quality.

-- Declare variables
DECLARE
    @nstrGUID NCHAR(36) = NEWID()
    , @nstrSQL NVARCHAR(MAX)

-- SQL to create the temp table and confirm it exists
SET @nstrSQL = N'CREATE TABLE [##tbl_' + @nstrGUID + '] (ID INT);
SELECT ''ObjectID'' = OBJECT_ID(N''tempdb..[##tbl_' + @nstrGUID + ']'');'

-- Create the temp table and confirm creation
EXECUTE sp_executesql @nstrSQL;

-- SQL to re-confirm the temp table exists
SET @nstrSQL = N'SELECT ''ObjectID'' = OBJECT_ID(N''tempdb..[##tbl_'
    + @nstrGUID + ']'');'

-- See if the temp table still exists
EXECUTE sp_executesql @nstrSQL;

Problem solved.


Who Started That Job?

While a SQL Server job is running, details on its status are limited. The job activity monitor will show which job step is currently being processed, but it is completely opaque as to whether a transaction in that step is executing or rolling back or just waiting.

Even simple information such as “who started this job” is not available until AFTER the job completes, when the user who called the job will be written into the message field of msdb.dbo.sysjobhistory.
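
As a sketch of that after-the-fact approach, a query along the following lines pulls the initiator out of the history message once the job has finished (the job name here is hypothetical):

```sql
-- After a job completes, the initiator appears in the history message,
-- e.g. "... The Job was invoked by User DOMAIN\someone. ..."
SELECT TOP (1)
    'JobName' = j.name
    , 'Message' = h.message
FROM msdb.dbo.sysjobs AS j
    INNER JOIN msdb.dbo.sysjobhistory AS h
        ON j.job_id = h.job_id
WHERE j.name = N'MyJob' -- hypothetical job name
    AND h.step_id = 0 -- the job outcome row
ORDER BY h.instance_id DESC;
```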

So what? Well, normally that is not really a problem… except when the job takes hours (or days!) to complete. Or what if it is a continuous task, such as a replication monitor? For whatever reason, the job might need to be stopped… but you might not know whom to ask unless you know who started the job in the first place. Of course, some jobs start with SQL Server or run on a schedule, but for those jobs there is rarely a question as to why the job is running.

Execution details are well and truly hidden during a job run, but, thankfully, at least the job initiator is not. And Microsoft even gave us a blueprint on how to find that information. Take a look at the definition of the stored procedure msdb.dbo.sp_get_composite_job_info to see how the system execution procedure xp_sqlagent_enum_jobs is used to return some very useful job status information.

The following example breaks out the relevant bits for our purpose.

-- Create a table to store job information
DECLARE @tbl TABLE (
    job_id UNIQUEIDENTIFIER NOT NULL
    , last_run_date INT NOT NULL
    , last_run_time INT NOT NULL
    , next_run_date INT NOT NULL
    , next_run_time INT NOT NULL
    , next_run_schedule_id INT NOT NULL
    , requested_to_run INT NOT NULL -- boolean
    , request_source INT NOT NULL
    , request_source_id SYSNAME COLLATE database_default NULL
    , running INT NOT NULL -- boolean
    , current_step INT NOT NULL
    , current_retry_attempt INT NOT NULL
    , job_state INT NOT NULL)

-- Declare and initialize variables
DECLARE
    @is_sysadmin INT = ISNULL(IS_SRVROLEMEMBER(N'sysadmin'), 0)
    , @job_owner SYSNAME = N'sa'
    /* To filter on a single job, uncomment the lines below */
    --, @job_id UNIQUEIDENTIFIER = (
    -- SELECT job_id
    -- FROM msdb.dbo.sysjobs
    -- WHERE name = N'<job_name SYSNAME>')

-- Select the job information
INSERT INTO @tbl
EXECUTE master.dbo.xp_sqlagent_enum_jobs
    @is_sysadmin
    , @job_owner
    /* To filter on a single job, uncomment the line below */
    --, @job_id

-- Select the running jobs that were started by a user
SELECT
    'JobName' = jobs.name
    , 'JobStartedBy' = t1.request_source_id
    , 'JobStarted' = acts.start_execution_date
    , 'CurrentTime' = GETDATE()
    , 'CurrentRunTimeSeconds' =
        DATEDIFF(SECOND, acts.start_execution_date, GETDATE())
    , 'CurrentRunTimeMinutes' =
        DATEDIFF(MINUTE, acts.start_execution_date, GETDATE())
    , 'CurrentRunTimeHours' =
        DATEDIFF(HOUR, acts.start_execution_date, GETDATE())
    , 'CurrentRunTimeDays' =
        DATEDIFF(DAY, acts.start_execution_date, GETDATE())
FROM
    @tbl AS t1
    INNER JOIN msdb.dbo.sysjobs AS jobs
        ON t1.job_id = jobs.job_id
    INNER JOIN msdb.dbo.sysjobactivity AS acts
        ON t1.job_id = acts.job_id
WHERE
    t1.running = 1 -- running jobs only
    AND t1.request_source = 4 -- started by a user

Please note that there is a small delay between when a job starts and when SQL Server populates information into sysjobactivity. Thus, if you are testing this with simple (read: quick-running) jobs, be sure to add a few WAITFOR DELAY commands in there to give SQL time to work. A delay of just a few seconds is sufficient.
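
For instance, a throwaway test job step need contain nothing more than a delay, which holds the job in a running state long enough to query:

```sql
-- A simple job step body for testing: keep the job running for thirty seconds
WAITFOR DELAY '00:00:30';
```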

SQL 2000 — Argh!

Dang you, SQL Server 2000! Yes, it was a big step forward from SQL Server 7, but jeez, there is a lot of functionality we now take for granted that only arrived with SQL Server 2005. Here’s another example, this time using the system functions in combination with metadata functions.

-- Set the context to a compatibility level 80 (SQL Server 2000) database
USE [db80];

-- Try using the DB_ID() function within a system table-valued function
SELECT *
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS ips

While the previous statement works fine with compatibility level 90 (SQL Server 2005) or greater, it fails for level 80. You’ll receive a syntax error around the “(“. And yet, both the DB_ID() function and the index physical stats function work by themselves at level 80.

-- Both the system function and the metadata function work
SELECT *
FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, NULL) AS ips
WHERE ips.database_id = DB_ID()

So, what’s the big deal? Well, if you’re in an environment that still supports SQL Server 2000 databases (maybe even side-by-side with newer databases on the same SQL server?), watch your step. Don’t expect to be able to develop for 2005 (not to mention 2008, R2, or 2012) and easily track down and fix the errors you get when running commands at the server level.
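
One way to watch your step is to check, and where appropriate raise, the compatibility level before relying on newer syntax. A minimal sketch, assuming SQL Server 2008 or later (on 2005, use sp_dbcmptlevel instead of the ALTER DATABASE option):

```sql
-- Check the compatibility level of every database on the instance
SELECT 'DatabaseName' = name, 'CompatibilityLevel' = compatibility_level
FROM sys.databases;

-- Raise a database to level 90 (SQL Server 2005); test thoroughly first!
ALTER DATABASE [db80] SET COMPATIBILITY_LEVEL = 90;
```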

Current Server Up Time

Upon occasion, you may need to know how long your SQL Server has been up and running. Conveniently, that information is readily available through a dynamic management view (DMV).

-- Select the SQL Server start time and calculate duration in various units
SELECT
    'UpSince' = sqlserver_start_time
    , 'CurrentTime' = GETDATE()
    , 'UpTimeDays' = DATEDIFF(DAY, sqlserver_start_time, GETDATE())
    , 'UpTimeHours' = DATEDIFF(HOUR, sqlserver_start_time, GETDATE())
    , 'UpTimeMinutes' = DATEDIFF(MINUTE, sqlserver_start_time, GETDATE())
    , 'UpTimeSeconds' = DATEDIFF(SECOND, sqlserver_start_time, GETDATE())
FROM sys.dm_os_sys_info

Of course, rather than up-time as such, what we are generally concerned with is when a server becomes unavailable. Given that, some sort of active alerting might be far more useful than up-time reporting. We would obviously prefer not to use SQL Server to notify users when it becomes unavailable… because if it is unavailable how can we rely on it to send an alert?! The best solution is to have a witness — a completely separate server that monitors and reports on SQL availability. (And then a witness to monitor the witness… and a witness to monitor the witness that is monitoring the witness… ;-)) Barring a dedicated witness, we could create an alert at the operating system level that alerts on the SQL Server service running on the OS. To do that:

  1. Create a new task in System Tools > Task Scheduler
  2. For the trigger, choose to begin the task on an event
  3. Choose custom event and enter the following XML, replacing <your server> with the name of your SQL server instance.
      <QueryList>
        <Query Id="0" Path="System">
          <Select Path="System">
            *[System[Provider[@Name='Service Control Manager'] and (EventID=7036)]]
            and
            *[EventData[(Data='SQL Server (<your server>)') and ((Data='stopped') or (Data='paused'))]]
          </Select>
        </Query>
      </QueryList>
  4. For the action, choose to email a suitable distribution group

The XML event definition specifies that we will look for event 7036 within the Service Control Manager log; that event fires whenever a service changes states. Within that event, we are filtering on data within the Event Data node that contains the SQL Server instance name and either the “stopped” or “paused” key words. More explicit filtering is, of course, possible… but it is limited to a subset of the XPath 1.0 syntax. (FYI, certain operators such as “contains” and “starts-with” are not available.)

Asynchronous Transactions

Business Case: An organization frequently creates databases ad hoc and wants them to be backed up immediately upon creation. This reduces risk by automating what would otherwise be another step for the database creator to remember to perform. Additionally, the existing maintenance actions include a full backup daily and a log backup every fifteen minutes. The challenge is that a log backup will fail if a full backup has not already been performed on the database. Given the frequency and ad-hoc nature of database creation, it is important to ensure a full backup is taken as soon as a database is created.

Environment: On the surface, this business case seems like a good match for a server-level trigger on the CREATE_DATABASE statement, with the trigger action performing the backup. However, that will not work because a backup operation is not permitted within a transaction containing other statements; triggers execute as part of the calling statement (because the calling transaction must be rolled back if the trigger action fails). Thus, we need to perform a second action as reliably as if it were part of a database transaction, but it cannot actually exist within the same literal transaction.
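
To see why, consider a sketch of the trigger we might naively write. The backup statement fails at run time because it executes inside the trigger's transaction, with an error along the lines of "Cannot perform a backup or restore operation within a transaction" (the backup path here is hypothetical):

```sql
-- Naive (non-working) approach: back up the new database from a DDL trigger
CREATE TRIGGER [trgCreateDatabase]
ON ALL SERVER
FOR CREATE_DATABASE
AS
BEGIN
    -- Extract the new database name from the event data
    DECLARE @sysDBName SYSNAME = EVENTDATA().value(
        N'(/EVENT_INSTANCE/DatabaseName)[1]', N'NVARCHAR(128)');
    -- Fails: BACKUP is not allowed inside the trigger's transaction
    BACKUP DATABASE @sysDBName
        TO DISK = N'C:\Backups\NewDatabase.bak'; -- hypothetical path
END
```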

Approach: To perform this sort of asynchronous transaction, we will use event notification and the service broker to perform a full backup whenever a database is created.

Solution: Let us start by creating a stored procedure that will read a yet-to-be-defined service broker queue whenever a message is placed in it. The procedure will consume the message, perform the database backup, and then end the service broker conversation.

You’ll notice that this stored procedure makes use of a procedure we created previously to perform our backup operations. It is available here.

Also note that this yet-to-be-defined queue will be created in the msdb database. We are creating it there because it roughly aligns with part of what msdb is intended to do (manage the SQL agent); also, because msdb is a system database, it is conceptually independent from the user databases whose initial backups this solution is intended to manage.

-- Create a procedure to consume service broker queued messages
CREATE PROCEDURE [dbo].[uspBrokerBackup]
AS
BEGIN
    -- Initialize variables for later use
    DECLARE
        @uiHandle UNIQUEIDENTIFIER
        , @strMsg NVARCHAR(MAX)
        , @sysMsgName SYSNAME
        , @sysDBName SYSNAME

    -- Consume a message from the queue, waiting no longer than one second
    WAITFOR (
        RECEIVE TOP(1)
            @uiHandle = [conversation_handle]
            , @strMsg = CAST(message_body AS NVARCHAR(MAX))
            , @sysMsgName = message_type_name
        FROM msdb.dbo.queBackupDB)
        , TIMEOUT 1000 -- in milliseconds

    -- Validate message type and contents
    IF @sysMsgName = N'http://schemas.microsoft.com/SQL/Notifications/EventNotification'
    AND CHARINDEX(N'<DatabaseName>', @strMsg) != 0
    AND CHARINDEX(N'</DatabaseName>', @strMsg) != 0
    BEGIN
        -- Extract the database name from the message body
        SET @sysDBName = SUBSTRING(@strMsg
            , CHARINDEX(N'<DatabaseName>', @strMsg)
                + LEN(N'<DatabaseName>')
            , CHARINDEX(N'</DatabaseName>', @strMsg)
                - CHARINDEX(N'<DatabaseName>', @strMsg)
                - LEN(N'<DatabaseName>'))

        -- Perform the full backup; click here for uspMxBackup
        EXECUTE master.dbo.uspMxBackup
            @strBackupType = 'FULL'
            , @sysServerName = NULL
            , @sysDatabaseName = @sysDBName

        -- End the service broker conversation
        END CONVERSATION @uiHandle;
    END
END

Now let’s set up the service broker. We only need to create a couple of the service broker objects; we’ll use some system objects for the rest. We’ll need a queue to store and process notifications of database creation, a service to receive notifications and put them in the queue, and an event notification to catch the create database actions and send them to the service. Don’t worry: it only sounds easy because it is. 🙂

-- If the service already exists, drop it
IF EXISTS (
    SELECT *
    FROM sys.services
    WHERE name = N'//SQLEXPRESS/InternalAction/BackupDB')
DROP SERVICE [//SQLEXPRESS/InternalAction/BackupDB]

-- If the queue already exists, drop it
IF EXISTS (
    SELECT *
    FROM sys.service_queues
    WHERE name = N'queBackupDB')
DROP QUEUE [queBackupDB]

-- Create the queue to store and process the notifications
CREATE QUEUE [queBackupDB]
    WITH ACTIVATION (
        PROCEDURE_NAME = [master].[dbo].[uspBrokerBackup]
        , MAX_QUEUE_READERS = 1
        , EXECUTE AS N'dbo')

-- Create the service to send notifications to the queue
CREATE SERVICE [//SQLEXPRESS/InternalAction/BackupDB]
ON QUEUE queBackupDB ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification])

-- If the event notification already exists, drop it
IF EXISTS (
    SELECT *
    FROM sys.server_event_notifications
    WHERE name = N'evntCreateDatabase')
DROP EVENT NOTIFICATION [evntCreateDatabase] ON SERVER

-- Get the msdb service broker id
DECLARE @uiBrokerID UNIQUEIDENTIFIER = (
    SELECT service_broker_guid
    FROM sys.databases
    WHERE name = N'msdb')

-- Build the SQL to create the event notification
DECLARE @strSQL NVARCHAR(MAX);
SET @strSQL = N'CREATE EVENT NOTIFICATION [evntCreateDatabase]
ON SERVER
FOR CREATE_DATABASE
TO SERVICE
    ''//SQLEXPRESS/InternalAction/BackupDB'' -- name of service broker service
    , ''' + CONVERT(CHAR(36), @uiBrokerID) + N''' -- service_broker_guid for msdb db

-- Create the event notification
EXECUTE sp_executesql @strSQL;

And, there you have it. When the create database event occurs, a notification message is sent via a service to a queue, which then uses a stored procedure to process each notification message. If useful in your environment, feel free to add defensive coding, such as ensuring notification messages are processed successfully, retrying if they are not, notifying an operator if any piece fails, etc. Enjoy!
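
As a starting point for that defensive coding, note that a service broker queue can be queried like a table. A minimal sketch for checking whether any notification messages are sitting unprocessed:

```sql
-- Peek at any unprocessed messages waiting in the queue
SELECT 'MessageType' = message_type_name
    , 'Message' = CAST(message_body AS NVARCHAR(MAX))
FROM msdb.dbo.queBackupDB;
```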