Decoupling a Trigger

Table-level triggers are generally used to enforce business rules in response to data manipulation language (DML) statements. Among the reasons they are so well suited to this is that they are implicitly wrapped into the DML transaction. Simply put: if the trigger fails, the DML fails.

In many cases, that is the desired behavior. However, what if you have an optional business rule? In other words, you’d like to take secondary action on certain data events, but, if the secondary action were to fail, you still want to keep the data change. A silly example of this could be if you wanted to send an email anytime someone named “Bob” registered at your website. While the email is desired, if it could not be sent for some reason, you’d still want “Bob” to be able to register. A trigger is a good candidate for this requirement because we want to be directly tied to the DML event of “Bob” registering… but we somehow need to decouple the trigger action from the DML transaction.

One way to do this is with SET XACT_ABORT, an execution-context setting that lets you specify that an error should not abort the transaction. The setting defaults to OFF in normal query execution, but within triggers the default is ON. Thus, simply setting it to OFF within the trigger will do the job.

One caveat, however: even with the setting OFF, if an error within the trigger is sufficiently severe, the DML transaction will still fail and roll back. Nonetheless, more innocuous errors can occur without causing the DML to fail.

Let’s take a look at this behavior in action.

-- if the example table exists, drop it
IF OBJECT_ID(N'tbl', N'U') IS NOT NULL DROP TABLE tbl;
GO

-- create the example table
CREATE TABLE tbl (
    ID INT IDENTITY(1,1) NOT NULL
    , Value VARCHAR(20) NOT NULL
    , CONSTRAINT PK_tbl PRIMARY KEY CLUSTERED (ID)
    );
GO

-- create a trigger on the example table
CREATE TRIGGER itrgTbl
ON tbl
FOR INSERT
AS
BEGIN
    -- turn off transaction aborting
    SET XACT_ABORT OFF;
    BEGIN TRY
        PRINT 'TRY';
        -- cause a failure
        DECLARE @i INT = 1/0;
    END TRY
    BEGIN CATCH
        PRINT 'CATCH!';
        -- capture the error values
        DECLARE
            @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE()
            , @ErrorSeverity INT = ERROR_SEVERITY()
            , @ErrorState INT = ERROR_STATE()
            ;
        -- raise the error
        IF @ErrorState > 0
            RAISERROR (
                @ErrorMessage
                , @ErrorSeverity
                , @ErrorState
                );
    END CATCH
END
GO

-- test the trigger by inserting a record
INSERT INTO tbl (Value) VALUES ('Hello, world.');

-- check the table contents to determine if the insert was successful
SELECT * FROM tbl;

When executing the code above, you will see that the divide-by-zero error is raised, but the record is committed to the table anyway. Try commenting out the SET XACT_ABORT OFF command and running it again. Without that command, the division error is escalated to a trigger execution error and the DML is blocked with message 3616:
Msg 3616, Level 16, State 1, Line 2
An error was raised during trigger execution. The batch has been aborted and the user transaction, if any, has been rolled back.
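As a counterpoint, here's a quick sketch of the same setting outside of a trigger, using a hypothetical throwaway table (tbl2) so the trigger above doesn't interfere. With XACT_ABORT ON, even an innocuous run-time error aborts the batch and rolls back the open transaction.

-- create a throwaway table (no trigger attached)
IF OBJECT_ID(N'tbl2', N'U') IS NOT NULL DROP TABLE tbl2;
CREATE TABLE tbl2 (Value VARCHAR(20) NOT NULL);
GO

-- turn on transaction aborting
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    INSERT INTO tbl2 (Value) VALUES ('Goodbye, world.');
    -- the divide-by-zero error aborts the batch here
    DECLARE @j INT = 1/0;
COMMIT TRANSACTION;
GO

-- the insert was rolled back; no rows are returned
SELECT * FROM tbl2;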

If you need to isolate your trigger code even more robustly from the DML event, you can also go fully asynchronous by using the trigger to send a message via Service Broker. But more on that in a future post!
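As a small teaser, here's a minimal sketch of what such a trigger body might contain, assuming the Service Broker queues, services, contract, and message type have already been set up (all of the //Example/* names below are hypothetical):

-- send an asynchronous message from within the trigger
DECLARE @dialog UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//Example/TriggerService]
    TO SERVICE '//Example/ProcessingService'
    ON CONTRACT [//Example/Contract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//Example/Message]
    (N'Bob registered');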

For more info on XACT_ABORT, read all about it on TechNet.

Test Compatibility Level

Business Case: An existing application is built on a legacy version of the SQL Server database engine (such as SQL Server 2000). To take advantage of modern capabilities, the application is to be upgraded to a more recent version of the database engine.

Environment: The application is business-critical and unscheduled downtime is not acceptable. The application must be tested to confirm reliability at the higher database version, but full-function application testing is unrealistic due to the complexity of the application.

Approach: To test the application database layer, we will create a script to collect and parse the existing SQL statements under the higher database version, capturing any errors that might be generated.

Solution: To demonstrate the approach, we will simply write and test a few SQL statements. To fully develop this solution, rather than running the test against our sample SQL table, we could run it against a filtered selection of sys.sql_modules — a system view that contains the SQL definition of virtually every programmable object (procedure, view, function, trigger, and so on) in the database.
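For instance, a starting query along these lines could feed the statements to be tested (the object-type filter is just one reasonable choice):

-- Pull module definitions to feed the test harness
SELECT o.name, m.[definition]
FROM sys.sql_modules AS m
JOIN sys.objects AS o
    ON o.[object_id] = m.[object_id]
WHERE o.[type] IN ('P', 'V', 'FN', 'IF', 'TF', 'TR');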

First, let’s set the database context to a database at the target version (compatibility level). We’ll also suppress statement record count reports.

-- Set the database context to a db at the target version
USE [db];
GO

-- Do not show statement record counts
SET NOCOUNT ON;
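If you need to confirm or change a database's compatibility level first, here's a quick sketch (level 100, i.e. SQL Server 2008, is just an example target):

-- Check the current compatibility level
SELECT [compatibility_level]
FROM sys.databases
WHERE database_id = DB_ID();

-- If needed, set the target level (100 = SQL Server 2008)
ALTER DATABASE [db] SET COMPATIBILITY_LEVEL = 100;
GO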

For demonstration purposes, we'll create a table variable to store the SQL statements to be tested. In the real world, a query against the [definition] field of sys.sql_modules, as sketched above, would be a good place to start. In this example we'll test three statements that are valid in SQL Server 2000 but invalid in SQL Server 2008.

-- Create the SQL input table
DECLARE @tblIn TABLE (
    ID INT IDENTITY(1,1) NOT NULL
    , SQLString NVARCHAR(MAX) NOT NULL
    )
;

-- Populate the SQL input table
INSERT INTO @tblIn (SQLString)
VALUES -- these statements are valid at level 80, invalid at level 100
    (N'SET')
    , (N'SELECT o.name FROM sys.objects AS o HOLDLOCK;')
    , (N'SELECT o.name FROM sys.objects AS o, sys.tables AS t WHERE o.[object_id] *= t.[object_id];')
;

Now we’ll create an output table to store the results of our syntax check. We’ll capture the details of our input as well as the level at which we are testing the statement and any errors that might result.

-- Create the SQL output table
DECLARE @tblOut TABLE (
    ID INT NOT NULL
    , SQLString NVARCHAR(MAX) NOT NULL
    , ValidationLevel TINYINT NOT NULL
    , ErrorNumber INT NOT NULL DEFAULT(-1)
    , ErrorMessage NVARCHAR(2000) NOT NULL DEFAULT(N'')
    )
;

Next we’ll begin the automated testing phase. We’ll create a cursor to scroll through the SQL statements to be tested. Start by creating variables to hold relevant information for the cursor, then initialize the cursor definition and iteration process.

-- Declare the cursor output variable(s)
DECLARE
    @intID INT
    , @nstrSQLOrig NVARCHAR(MAX)
    , @nstrSQL NVARCHAR(MAX)
    , @intCompatLevel TINYINT = (
            SELECT [compatibility_level]
            FROM sys.databases
            WHERE database_id = DB_ID()
        )
;

-- Declare and initialize the cursor
DECLARE crs CURSOR FAST_FORWARD
FOR
    -- Define the cursor recordset
    SELECT ID, SQLString
    FROM @tblIn
;

-- Open the cursor
OPEN crs;

-- Initialize the cursor output
FETCH NEXT FROM crs INTO @intID, @nstrSQL;

With the cursor now open, let's iterate through the records. For each SQL statement to be tested, we'll first capture the original statement. Then we'll modify it by prepending a command that turns the PARSEONLY setting on. That setting tells SQL Server to check the syntax of the statement, but not to compile or execute it. (If, for your situation, you'd like to test statement compilation as well, try setting NOEXEC on instead; see the sketch after the loop below.) Once parsed, we'll turn PARSEONLY off so that subsequent statements execute. And, by wrapping the whole thing in a TRY-CATCH block and a transaction, we can control the flow and error capture to meet our needs.

-- Iterate through the cursor records
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Capture the original SQL statement
    SET @nstrSQLOrig = @nstrSQL;
    -- Attempt to parse
    BEGIN TRY
        -- Start transaction
        BEGIN TRANSACTION
            -- Prepend statement with parseonly setting 
            SET @nstrSQL = N'SET PARSEONLY ON; ' + @nstrSQL;
            EXECUTE (@nstrSQL);
            -- Turn off parseonly
            SET PARSEONLY OFF;
            -- Capture statements and validation level
            INSERT INTO @tblOut (ID, SQLString, ValidationLevel)
            VALUES (
                @intID
                , @nstrSQLOrig
                , @intCompatLevel
                )
            ;
        -- Commit transaction
        COMMIT TRANSACTION;
    END TRY
    -- Parse failed; capture error
    BEGIN CATCH
        -- Roll back the transaction
        ROLLBACK TRANSACTION;
        -- Capture statement and error information
        INSERT INTO @tblOut (
            ID
            , SQLString
            , ValidationLevel
            , ErrorNumber
            , ErrorMessage
            )
        VALUES (
            @intID
            , @nstrSQLOrig
            , @intCompatLevel
            , ERROR_NUMBER()
            , ERROR_MESSAGE()
            )
        ;
    END CATCH
    -- Iterate the cursor output
    FETCH NEXT FROM crs INTO @intID, @nstrSQL;
END
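As mentioned above, if you'd rather test compilation as well, here is a sketch of the NOEXEC variation; swap these lines in for the PARSEONLY lines within the TRY block:

-- Variation: NOEXEC compiles the statement (validating object
-- references) without executing it
SET @nstrSQL = N'SET NOEXEC ON; ' + @nstrSQL;
EXECUTE (@nstrSQL);
-- Ensure execution resumes for subsequent statements
SET NOEXEC OFF;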

Finally, let’s close and deallocate the cursor and turn statement record counts back on.

-- Close cursor
CLOSE crs;

-- Release cursor from memory
DEALLOCATE crs;

-- Show statement counts
SET NOCOUNT OFF;

What about the results? Well, let’s take a look!

-- View validation report
SELECT * FROM @tblOut;

If the preceding script is run under SQL Server 2000 (compatibility level 80), you’ll see the following output.

[Screenshot: Test SQL at level 80 (valid syntax for SQL 2000)]

And, if run under SQL Server 2008 (compatibility level 100), you’ll see the following.

[Screenshot: Test SQL at level 100 (invalid syntax for SQL 2008)]

Does anyone have any suggestions on a more elegant way to accomplish this type of automated compatibility testing?

Transactions Really Do Work

Believe it or not, transactions really do work — even when you forget to think about them! In the example below, we’ll walk through how transaction rollback can undo an attempted error capture.

For this example, we will first create a table in which to store any errors we generate.

-- Create a table in which to store error messages
IF OBJECT_ID(N'tblLog') IS NOT NULL
    DROP TABLE tblLog;
CREATE TABLE tblLog (
    ID INT IDENTITY(1,1) NOT NULL
    , DateStamp DATETIME2(3) NOT NULL
    , ErrorNumber INT NOT NULL
    , ErrorMessage NVARCHAR(2048) NOT NULL
    )
;
ALTER TABLE tblLog
ADD CONSTRAINT DF_tblLog_DateStamp
DEFAULT (GETDATE()) FOR DateStamp
;
GO

With our error capture table created, let us now generate some errors! Specifically, we will produce an uncommittable transaction error so that we can eventually demonstrate how transaction rollback can affect error capture.

-- First, let's demonstrate capturing an uncommittable
-- transaction error
BEGIN TRY
    -- This statement fails because the table
    -- does not exist in the database
    DROP TABLE tblDoesNotExist;
END TRY
BEGIN CATCH
    -- Here is the catch where we capture the error
    INSERT INTO tblLog (ErrorNumber, ErrorMessage)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE());
END CATCH

Assuming tblDoesNotExist does not, in fact, exist in your database, the statement above will result in an entry in the error capture table, tblLog. Let’s take a look at the error:

-- Notice that we've captured the error
SELECT * FROM tblLog;

Yes, indeed, the error was captured, and it tells us that the table does not exist and thus cannot be dropped. So far, so good. This information is useful for troubleshooting the problem. Let’s clear the error log now to reset it for our next example.

-- Clear the log table
DELETE FROM tblLog;

In the following example, we will take the exact statement we ran above and put it inside a transaction that includes rollback logic. Using rollback logic of this type is very common and is good practice. In fact, more robust validation and tracking can be implemented — but that is superfluous to the current demonstration.

-- Now, let's see how error capture can be lost in a transaction
BEGIN TRY
    BEGIN TRANSACTION;
        -- Here is the code from above, but run
        -- within a transaction with rollback logic
        BEGIN TRY
            -- This makes the transaction uncommittable because
            -- the table does not exist in the database
            DROP TABLE tblDoesNotExist;
        END TRY
        BEGIN CATCH
            -- Here is the inner catch where we attempt to
            -- capture the error
            INSERT INTO tblLog (ErrorNumber, ErrorMessage)
            VALUES (ERROR_NUMBER(), ERROR_MESSAGE());
        END CATCH
    -- This commit will fail because the transaction is in
    -- an uncommittable state due to the attempted DROP TABLE
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- The catch is called; transaction must be rolled back
    ROLLBACK TRANSACTION;
    -- Capture the outer error
    INSERT INTO tblLog (ErrorNumber, ErrorMessage)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE());
END CATCH

At first glance, it might seem like we would expect only the same, single error in our log table. After all, the DROP TABLE command was in a TRY-CATCH block and so it was “handled”, right? And, if it was handled, then the outer CATCH block would not even execute. Right? Not exactly.

The DROP TABLE command will fail, and the error will be written to the log table. However, because that error makes the transaction uncommittable, the COMMIT TRANSACTION also fails and pushes execution to the outer CATCH block. And, in the outer CATCH block, the transaction is rolled back, which means the write of the inner error to the log does NOT persist! Let’s see what the log shows:

-- Notice that only the outer error is captured; the
-- inner error capture is rolled back in the outer catch
SELECT * FROM tblLog;

While I hope this example is easy to follow (as examples should be), I trust you can imagine how confusing things get if, rather than a single DROP TABLE command, there are nested procedures with their own transactions and TRY-CATCH blocks. At any point in that potentially complex web of execution, a statement might fail. And, despite our intention to capture the error at the point of failure, all of that error capture activity could get rolled back, with the end result being a single message saying the transaction was uncommittable. A message like that provides virtually no insight into which part of the transaction was uncommittable; it tells us only that something within it was.

What can be done about that? Pay close attention to where your transactions begin and end, and ensure you are not capturing errors where they will not be persisted!
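One pattern that meets that goal: capture the error details into variables first, roll back, and only then write to the log, so that the write lands outside the doomed transaction. A sketch of a rearranged outer CATCH block:

BEGIN CATCH
    -- Capture the error details BEFORE rolling back
    DECLARE
        @ErrNum INT = ERROR_NUMBER()
        , @ErrMsg NVARCHAR(2048) = ERROR_MESSAGE()
    ;
    -- Roll back only if a transaction is still open
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
    -- This write happens outside the transaction, so it persists
    INSERT INTO tblLog (ErrorNumber, ErrorMessage)
    VALUES (@ErrNum, @ErrMsg);
END CATCH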

Oh yes, and don’t forget to clean up after yourself! 🙂

-- Clean up
IF OBJECT_ID(N'tblLog') IS NOT NULL
    DROP TABLE tblLog;

Retry Execution (Even on Deadlock!)

This business case is closely related to another post: Reduce Deadlocks

Business Case: A multi-client application frequently updates data. Occasionally, transactions fail with resulting loss of information and application instability. The application team has requested a solution in the database layer.

Environment: It has been determined that deadlock on lock resources is the primary cause of the failed transactions.

Approach: While there are a handful of ways to help mitigate lock resource deadlocks, in this instance we will simply fortify our database code with a retry capability. In other words, if the first attempt at transaction execution fails, we will retry the transaction a couple more times. For deadlocks caused by frequent, relatively short-duration transactions, this fortification can often do the trick.

Demonstration Preparation: For demonstration purposes, first we will create two tables with a handful of records in each. We will then create update queries that, when run back to back in separate processes, will produce a deadlock situation. For this example we will assume our database is named “DB” and will be working within the “dbo” schema.

-- If the first table already exists, drop it
IF OBJECT_ID('DB.dbo.TableOne') IS NOT NULL
    DROP TABLE DB.dbo.TableOne
;

-- Create the first table
CREATE TABLE DB.dbo.TableOne (
    ID INT)
;

-- Populate the first table with data
INSERT INTO DB.dbo.TableOne
VALUES
    (1)
    , (2)
    , (3)
;

-- If the second table already exists, drop it
IF OBJECT_ID('DB.dbo.TableTwo') IS NOT NULL
    DROP TABLE DB.dbo.TableTwo
;

-- Create the second table
CREATE TABLE DB.dbo.TableTwo (
    ID INT)
;

-- Populate the second table with data
INSERT INTO DB.dbo.TableTwo
VALUES
    (10)
    , (11)
    , (12)
;

Now we will create two queries that, when run within a few seconds of each other in separate processes, will produce a locked resource deadlock.

-- This transaction should be run in one query window (process)
BEGIN TRANSACTION
    UPDATE DB.dbo.TableOne
    SET ID = 1
    ;
    -- Delay execution for three seconds, to give
    -- the second transaction time to start
    WAITFOR DELAY '00:00:03'
    ;
    UPDATE DB.dbo.TableTwo
    SET ID = 11
    ;
COMMIT TRANSACTION


-- This transaction should be run in a second query window (process)

BEGIN TRANSACTION
    UPDATE DB.dbo.TableTwo
    SET ID = 12
    ;
    -- Delay execution for three seconds, to give
    -- the first transaction time to create the
    -- deadlock
    WAITFOR DELAY '00:00:03'
    ;
    UPDATE DB.dbo.TableOne
    SET ID = 2
    ;
COMMIT TRANSACTION

When the two transactions above are run within three seconds of each other, in separate windows (processes), a deadlock is created and SQL Server chooses one of them (here, the second transaction) as the deadlock victim, canceling its execution.
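As an aside, you can influence which session SQL Server picks: a session running at lowered priority is preferred as the deadlock victim over sessions at normal priority.

-- Volunteer this session as the preferred deadlock victim
SET DEADLOCK_PRIORITY LOW;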

Solution: To fortify these transactions, we can build in a retry capability. For demonstration purposes, we will only fortify the second transaction; in the real world, all transactions could be fortified depending on business/application requirements.

Our fortification technique will include creating a few variables to capture any error message, count the transaction attempts, and specify the maximum number of attempts before allowing the transaction to fail. Then we will create a WHILE loop to iterate through the attempts, and wrap our transaction in a TRY-CATCH block to trap execution failures.

-- Declare and initialize variables
DECLARE @ErrorMessage NVARCHAR(2000) = '';
DECLARE @Iteration TINYINT = 0;
DECLARE @IterationLimit TINYINT = 2;

-- Begin the iteration via WHILE operation
WHILE
    -- Conditions for iteration
    @ErrorMessage IS NOT NULL --null indicates successful execution
    AND @Iteration < @IterationLimit -- do not exceed iteration limit
    BEGIN
        -- Increment the iteration counter
        SET @Iteration += 1;
        -- Attempt to execute the transaction
        BEGIN TRY
            BEGIN TRANSACTION
                UPDATE DB.dbo.TableTwo
                SET ID = 12
                ;
                -- Delay execution for three seconds, to give
                -- the first transaction time to create the
                -- deadlock
                WAITFOR DELAY '00:00:03'
                ;
                UPDATE DB.dbo.TableOne
                SET ID = 2
                ;
                -- On success, ERROR_MESSAGE() returns NULL, which
                -- ends the WHILE loop
                SET @ErrorMessage = ERROR_MESSAGE()
                ;
            COMMIT TRANSACTION
        END TRY
        -- Catch any execution errors
        BEGIN CATCH
            -- Capture the error message for reporting and retry
            SET @ErrorMessage = ERROR_MESSAGE();
            -- Notify if the iteration limit is reached
            IF @Iteration = @IterationLimit
                SELECT 'Iteration limit reached; last error: ' + @ErrorMessage
                ;
            -- Roll back the failed transaction
            ROLLBACK TRANSACTION
        END CATCH
    END
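One refinement worth considering: retry only when the failure actually is a deadlock (error number 1205) and give up immediately on anything else. A sketch of an adjusted CATCH block, reusing the variables above:

        -- Catch any execution errors
        BEGIN CATCH
            -- Capture the error details
            SET @ErrorMessage = ERROR_MESSAGE();
            -- Roll back the failed attempt
            ROLLBACK TRANSACTION;
            -- Only a deadlock (error 1205) warrants a retry
            IF ERROR_NUMBER() <> 1205 OR @Iteration = @IterationLimit
            BEGIN
                SELECT 'Giving up; last error: ' + @ErrorMessage;
                BREAK;
            END
        END CATCH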