
Automatic Index Tuning in SQL Server Using the Missing Index Dynamic Management Views

In many cases, a DBA may not have access to your deployed databases (if they are deployed to customer sites), may not be able to predict which indexes will help most in ad-hoc reporting databases, or simply may not have the time to hand-tune indexes on all of your databases. To help with index tuning, SQL Server 2005 introduced the missing index Dynamic Management Views (DMVs), but they only provide suggestions; there is no built-in facility to automate index tuning. However, with a small T-SQL script, you can put some of your databases on autopilot and hopefully provide a better experience for your users.

Once again, there is no replacement for an experienced DBA with knowledge of your data and usage patterns performing the tuning, but this script can help if you don't have a DBA available for your database, just want a little guidance on which indexes are needed, or have a large, unpredictable ad-hoc query workload. With the proper values for your database, it will add the most valuable missing indexes while capping the total number of indexes on each table by removing the least-used ones.

Keep in mind that when you create or drop an index on a table, all index suggestions for that table are cleared, so if you schedule this script to run on a regular basis, be sure that the time between runs is long enough to capture a representative sample of your typical workload. Also, keep the @maxIndexCreates and @maxIndexDrops values relatively low compared to @maxIndexesTotal, to avoid a situation where you constantly alternate between indexes by dropping ones that were recently used.

If you run this with @printOnly set to 0 (to automatically run the CREATEs and DROPs), review the output on occasion to make sure that the indexes it creates and drops make sense to you, so that you are not removing valuable indexes. I can't guarantee that it won't drop a lesser-used index that really is important to you, so I suggest running it with @printOnly set to 1 at least a few times to review what it will do and to confirm that the settings are appropriate for your environment. The script tunes the database that it runs in, so be sure to change to the database you intend to tune before running it. Finally, since choosing the CLUSTERED index for a table should be based on some knowledge of the data, this script deals only with NONCLUSTERED indexes.

-- Automated index tuning script, will maintain NONCLUSTERED indexes based on missing index DMVs.
-- Set options below and run in the database to be tuned to generate the script (@printOnly = 1)
-- or to actually create and drop the indexes (@printOnly = 0).  

-- This is intended to be used for databases with large numbers of ad-hoc queries or databases that
-- cannot be actively maintained by a DBA.  Use this at your own risk, and you should at least start
-- by using this with @printOnly set to 1 and manually running the script it generates.

-- The parameter values provided here are just examples; try different values to see what works
-- best in your database, as the ideal values will depend on your workload.
-- Keep in mind that this will not touch CLUSTERED indexes, and there may be further improvement
-- to be found by changing the clustered index on the table.

DECLARE @minUserSeeksOrScans INT = 10		-- Minimum user seeks + user scans to qualify for index to be created
DECLARE @minAvgTotalUserCost FLOAT = 2.5	-- Minimum average total user cost to qualify for index to be created
DECLARE @minAvgUserImpact FLOAT = 88.0		-- Minimum average user impact value to qualify for index to be created
DECLARE @maxIndexesTotal INT = 10			-- Maximum total number of NONCLUSTERED indexes per table
DECLARE @maxIndexCreates INT = 4			-- Maximum number of new indexes per table to create
DECLARE @maxIndexDrops INT = 2				-- Maximum number of existing indexes per table that may be removed if over @maxIndexesTotal
DECLARE @printOnly BIT = 1					-- Print the index commands only, do not actually create or drop them

-- Table variables for holding index information
DECLARE @newIndexes TABLE (schemaName SYSNAME, tableName SYSNAME, benefitRank INT, 
							createStatement NVARCHAR(MAX), tableObjectId INT)
DECLARE @oldIndexes TABLE (tableObjectId INT, leastUsedRank INT, dropStatement NVARCHAR(MAX))

SET NOCOUNT ON

PRINT '-------------------------------------------'
PRINT '-- Automatic index processing starting.  --'

IF @printOnly = 0
BEGIN
	PRINT '--  LIVE MODE: INDEXES WILL BE MODIFIED! --'
END

PRINT '-------------------------------------------'
PRINT ''
PRINT ''


-- Load the index suggestions and rank them by expected benefit
INSERT INTO @newIndexes (schemaName, tableName, benefitRank, createStatement, tableObjectId)
SELECT S.name AS schemaName, T.name AS tableName, 
	ROW_NUMBER() OVER (PARTITION BY S.name, T.name 
						ORDER BY MIGS.avg_total_user_cost * MIGS.avg_user_impact * (MIGS.user_seeks + MIGS.user_scans) DESC
					) AS benefitRank,
	'CREATE INDEX ' + QUOTENAME('IX_' + T.name + '_' 
		+ ISNULL(REPLACE(REPLACE(REPLACE(MID.equality_columns, '], [', '_'), ']', ''), '[', ''), '') 
		+ CASE WHEN equality_columns IS NOT NULL AND inequality_columns IS NOT NULL THEN '_' ELSE '' END 
		+ ISNULL(REPLACE(REPLACE(REPLACE(MID.inequality_columns, '], [', '_'), ']', ''), '[', ''), ''))
		+ ' ON ' + QUOTENAME(S.name) + '.' + QUOTENAME(T.name) 
		+ ' (' + ISNULL(equality_columns, '')
		+ CASE WHEN equality_columns IS NOT NULL AND inequality_columns IS NOT NULL THEN ', ' ELSE '' END 
		+ ISNULL(inequality_columns, '') + ')' AS createStatement,
		T.object_id AS tableObjectId
FROM sys.dm_db_missing_index_group_stats MIGS
	INNER JOIN sys.dm_db_missing_index_groups MIG ON MIG.index_group_handle = MIGS.group_handle
	INNER JOIN sys.dm_db_missing_index_details MID ON MID.index_handle = MIG.index_handle
	INNER JOIN sys.objects T ON T.object_id = MID.object_id  
	INNER JOIN sys.schemas S ON S.schema_id = T.schema_id
WHERE MIGS.user_seeks + MIGS.user_scans > @minUserSeeksOrScans
	AND MIGS.avg_total_user_cost > @minAvgTotalUserCost
	AND MIGS.avg_user_impact > @minAvgUserImpact
	AND MID.database_id = DB_ID()

-- Load the existing NONCLUSTERED indexes for all tables that had suggestions and rank them by least used
INSERT INTO @oldIndexes (tableObjectId, leastUsedRank, dropStatement)
SELECT T.object_id AS tableObjectId, 
	ROW_NUMBER() OVER (PARTITION BY S.name, T.name 
						ORDER BY ISNULL(user_seeks + user_scans + user_lookups, 0), 
							ISNULL(last_user_seek, last_user_scan)
					) AS leastUsedRank,
	'DROP INDEX ' + QUOTENAME(I.name) + ' ON ' + QUOTENAME(S.name) + '.' + QUOTENAME(T.name) AS dropStatement
FROM sys.indexes I
	INNER JOIN sys.objects T ON T.object_id = I.object_id
	INNER JOIN sys.schemas S ON S.schema_id = T.schema_id
	LEFT OUTER JOIN sys.dm_db_index_usage_stats IUS ON IUS.object_id = I.object_id AND IUS.index_id = I.index_id AND IUS.database_id = DB_ID()
WHERE I.type_desc = 'NONCLUSTERED'
	AND T.object_id IN (SELECT tableObjectId FROM @newIndexes)

-- Now loop through the tables, create up to @maxIndexCreates indexes, drop up to @maxIndexDrops indexes, but leave @maxIndexesTotal or fewer
DECLARE tableCur CURSOR FOR 
	SELECT MAX(schemaName), MAX(tableName), tableObjectId
	FROM @newIndexes
	GROUP BY tableObjectId
	ORDER BY MAX(schemaName), MAX(tableName)
	
OPEN tableCur

-- Variables used within the loop
DECLARE @schemaName SYSNAME
DECLARE @tableName SYSNAME
DECLARE @tableObjectId INT
DECLARE @existingIndexCount INT
DECLARE @numIndexesToCreate INT
DECLARE @statementToRun NVARCHAR(MAX)

FETCH NEXT FROM tableCur INTO @schemaName, @tableName, @tableObjectId

WHILE @@FETCH_STATUS = 0
BEGIN
	PRINT '-------------------------------------------------------------------------------------------------------'
	PRINT '-- Starting index processing for table: ' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@tableName)

	-- Get number of existing NONCLUSTERED indexes on this table
	SELECT @existingIndexCount = ISNULL((SELECT MAX(leastUsedRank) FROM @oldIndexes WHERE tableObjectId = @tableObjectId), 0)
	
	PRINT '--      ' + CAST(@existingIndexCount AS VARCHAR(100)) + ' existing NONCLUSTERED indexes found.'
	
	-- Create up to @maxIndexCreates indexes, unless that would leave more than @maxIndexesTotal indexes after others are dropped.
	-- The actual number created will be the lesser of @maxIndexCreates and (@maxIndexesTotal - @existingIndexCount + @maxIndexDrops).
	SELECT @numIndexesToCreate = CASE WHEN @maxIndexesTotal - @existingIndexCount + @maxIndexDrops < @maxIndexCreates
										THEN @maxIndexesTotal - @existingIndexCount + @maxIndexDrops
									ELSE @maxIndexCreates
								END
	
	-- Loop through the CREATE INDEX statements to print/run
	DECLARE createIndexCur CURSOR FOR
		SELECT createStatement
		FROM @newIndexes
		WHERE tableObjectId = @tableObjectId
			AND benefitRank <= @numIndexesToCreate
			
	OPEN createIndexCur
	
	FETCH NEXT FROM createIndexCur INTO @statementToRun
	
	WHILE @@FETCH_STATUS = 0
	BEGIN
		-- Print the CREATE INDEX statement
		PRINT @statementToRun
		
		IF @printOnly = 0
		BEGIN
			-- Create the index
			EXEC (@statementToRun)
		END
	
		FETCH NEXT FROM createIndexCur INTO @statementToRun
	END
	
	CLOSE createIndexCur
	DEALLOCATE createIndexCur
	
	-- Now drop up to @maxIndexDrops NONCLUSTERED indexes, if over the total.
	-- This is the lesser of @maxIndexDrops and how far above @maxIndexesTotal we now are for this table.
	-- Loop through the DROP INDEX statements to print/run
	DECLARE dropIndexCur CURSOR FOR
		SELECT dropStatement
		FROM @oldIndexes
		WHERE tableObjectId = @tableObjectId
			AND leastUsedRank <= @existingIndexCount + @numIndexesToCreate - @maxIndexesTotal
			AND leastUsedRank <= @maxIndexDrops
			
	OPEN dropIndexCur
	
	FETCH NEXT FROM dropIndexCur INTO @statementToRun
	
	WHILE @@FETCH_STATUS = 0
	BEGIN
		-- Print the DROP INDEX statement
		PRINT @statementToRun
		
		IF @printOnly = 0
		BEGIN
			-- Drop the index
			EXEC (@statementToRun)
		END
		
		FETCH NEXT FROM dropIndexCur INTO @statementToRun
	END
	
	CLOSE dropIndexCur
	DEALLOCATE dropIndexCur

	PRINT '-- Finished processing for table: ' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@tableName)
	PRINT '-------------------------------------------------------------------------------------------------------'
	PRINT ''

	FETCH NEXT FROM tableCur INTO @schemaName, @tableName, @tableObjectId
END

CLOSE tableCur
DEALLOCATE tableCur

PRINT ''
PRINT ''
PRINT '-------------------------------------------'
PRINT '-- Automatic index processing completed. --'
PRINT '-------------------------------------------'
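As an aside, the nested REPLACE calls that build each index name convert the DMV's bracketed column lists (for example, '[City], [StateProvinceID]') into an underscore-separated suffix. The transformation is equivalent to this Python sketch (the function name is mine, purely for illustration):

```python
def suggested_index_name(table, equality_columns, inequality_columns):
    """Mirror the T-SQL REPLACE/ISNULL logic used to name the new index."""
    def strip_brackets(cols):
        # '[City], [StateProvinceID]' -> 'City_StateProvinceID'; NULL -> ''
        if cols is None:
            return ''
        return cols.replace('], [', '_').replace(']', '').replace('[', '')

    # The separator is only added when both column lists are present
    sep = '_' if equality_columns is not None and inequality_columns is not None else ''
    return ('IX_' + table + '_'
            + strip_brackets(equality_columns)
            + sep
            + strip_brackets(inequality_columns))

print(suggested_index_name('Address', '[City], [StateProvinceID]', None))
# → IX_Address_City_StateProvinceID
```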

I ran a few queries against the AdventureWorks2008R2 database to get some missing index statistics populated, then fired off the script (although with lower thresholds for seeks/scans, cost, and impact just to get some results for this post), which produced this:

-------------------------------------------
-- Automatic index processing starting.  --
-------------------------------------------
 
 
-------------------------------------------------------------------------------------------------------
-- Starting index processing for table: [Person].[Address]
--      2 existing NONCLUSTERED indexes found.
CREATE INDEX [IX_Address_City] ON [Person].[Address] ([City])
-- Finished processing for table: [Person].[Address]
-------------------------------------------------------------------------------------------------------
 
-------------------------------------------------------------------------------------------------------
-- Starting index processing for table: [Person].[Person]
--      1 existing NONCLUSTERED indexes found.
CREATE INDEX [IX_Person_MiddleName] ON [Person].[Person] ([MiddleName])
-- Finished processing for table: [Person].[Person]
-------------------------------------------------------------------------------------------------------
 
 
 
-------------------------------------------
-- Automatic index processing completed. --
-------------------------------------------

Of course, this script won’t meet everyone’s index-tuning needs, but you could always add features like ignoring certain tables, changing the parameter values for particular tables, or using a particular naming convention to preserve certain indexes (I would suggest something like adding “_noDelete” to the end of the index name and ignoring those if you need this).
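For instance, the "_noDelete" convention could be implemented with a one-line filter in the query that loads @oldIndexes. This is just a sketch of that change, not part of the script above:

```sql
-- In the WHERE clause of the INSERT into @oldIndexes, skip protected indexes:
WHERE I.type_desc = 'NONCLUSTERED'
	AND I.name NOT LIKE '%\_noDelete' ESCAPE '\'	-- never drop indexes marked _noDelete
	AND T.object_id IN (SELECT tableObjectId FROM @newIndexes)
```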


Using Entity Framework 4.1 DbContext Change Tracking for Audit Logging

Many applications need to keep audit information on changes made to objects in the database. Traditionally, this has been done through log events, stored procedures that implement the logging, or archive/tombstone tables that store the old values before modification (hopefully enforced through stored procedures). With all of these, there is always a chance that a developer forgets to do this in a specific section of code, and that changes are made through the application without being logged correctly. With Entity Framework 4.1's DbContext API, it is fairly easy to implement more robust audit logging in your application.

The DbContext exposes the ChangeTracker property, which contains all of the changes made since the most recent SaveChanges() call. This property can be used to get information about the current and previous values of any entities in the DbContext object. By overriding the SaveChanges() method, we can implement audit logging any time changes are saved to the database. For the sake of this example, let's assume that we want to write a row to a database table that contains the entire contents of the new entity for inserts (there were no previous contents), and the entire previous contents for deletes (there are no current contents). For updates to existing records, a column-by-column audit is desired: a row exists in the audit table for each column that was changed, showing the before-and-after values. For example, suppose we have a database containing the following tables:

CREATE TABLE Items
	(
		ItemID UNIQUEIDENTIFIER NOT NULL,
		ItemDescription NVARCHAR(100) NOT NULL,
		CONSTRAINT PK_Items PRIMARY KEY NONCLUSTERED (ItemID)
	)
GO

CREATE TABLE ItemAdditionalAttributes
	(
		ItemAdditionalAttributeID UNIQUEIDENTIFIER NOT NULL,
		ItemID UNIQUEIDENTIFIER NOT NULL,
		AttributeName NVARCHAR(50) NOT NULL,
		AttributeValue NVARCHAR(500) NOT NULL,
		CONSTRAINT PK_ItemAdditionalAttributes PRIMARY KEY NONCLUSTERED (ItemAdditionalAttributeID)
	)
GO

ALTER TABLE ItemAdditionalAttributes ADD CONSTRAINT FK_ItemAdditionalAttributes_Items FOREIGN KEY (ItemID) REFERENCES dbo.Items (ItemID)
GO

We can then add an additional table for logging, similar to this one:

CREATE TABLE AuditLog
	(
		AuditLogID UNIQUEIDENTIFIER NOT NULL,
		UserID NVARCHAR(50) NOT NULL,
		EventDateUTC DATETIME NOT NULL,
		EventType CHAR(1) NOT NULL,
		TableName NVARCHAR(100) NOT NULL,
		RecordID NVARCHAR(100) NOT NULL,
		ColumnName NVARCHAR(100) NOT NULL,
		OriginalValue NVARCHAR(MAX) NULL,
		NewValue NVARCHAR(MAX) NULL,
		CONSTRAINT PK_AuditLog PRIMARY KEY NONCLUSTERED (AuditLogID)
	)
GO

Using EF Code-First, the model classes for these tables will look like this:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace EFChangeTrackingDemo.Models
{
    [Table("Items")]
    public class Item
    {
        [Key]
        public System.Guid ItemID { get; set; }

        [Required]
        [MaxLength(100)]
        public string ItemDescription { get; set; }

        public virtual ICollection<ItemAdditionalAttribute> ItemAdditionalAttributes { get; set; }

        public Item()
        {
            this.ItemAdditionalAttributes = new List<ItemAdditionalAttribute>();
        }
    }
}

 

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace EFChangeTrackingDemo.Models
{
    [Table("ItemAdditionalAttributes")]
    public class ItemAdditionalAttribute
    {
        [Key]
        public System.Guid ItemAdditionalAttributeID { get; set; }

        [Required]
        public System.Guid ItemID { get; set; }

        [Required]
        [MaxLength(50)]
        public string AttributeName { get; set; }

        [Required]
        [MaxLength(500)]
        public string AttributeValue { get; set; }

        [ForeignKey("ItemID")]
        public virtual Item Item { get; set; }
    }
}

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.ComponentModel.DataAnnotations;

namespace EFChangeTrackingDemo.Models
{
    [Table("AuditLog")]
    public class AuditLog
    {
        [Key]
        public Guid AuditLogID { get; set; }

        [Required]
        [MaxLength(50)]
        public string UserID { get; set; }

        [Required]
        public DateTime EventDateUTC { get; set; }

        [Required]
        [MaxLength(1)]
        public string EventType { get; set; }

        [Required]
        [MaxLength(100)]
        public string TableName { get; set; }

        [Required]
        [MaxLength(100)]
        public string RecordID { get; set; }

        [Required]
        [MaxLength(100)]
        public string ColumnName { get; set; }

        public string OriginalValue { get; set; }

        public string NewValue { get; set; }
    }
}

To capture the ID of the user making the change, we will override the SaveChanges() method in our DbContext class to throw an exception, forcing callers to use a new SaveChanges(string userId) overload instead. Inside SaveChanges(userId), we can obtain all of the changes made since the last save and add records to the AuditLog table for each of them. The ChangeTracker property of the DbContext class exposes all of the tracked entities, so pulling out the added, deleted, or modified ones is easy. Assuming a helper that returns the audit records for a single change, the SaveChanges() code will look like this:

        // This is overridden to prevent someone from calling SaveChanges without specifying the user making the change
        public override int SaveChanges()
        {
            throw new InvalidOperationException("User ID must be provided");
        }

        public int SaveChanges(string userId)
        {
            // Get all Added/Deleted/Modified entities (not Unmodified or Detached)
            foreach (var ent in this.ChangeTracker.Entries().Where(p => p.State == System.Data.EntityState.Added || p.State == System.Data.EntityState.Deleted || p.State == System.Data.EntityState.Modified))
            {
                // For each changed record, get the audit record entries and add them
                foreach(AuditLog x in GetAuditRecordsForChange(ent, userId))
                {
                    this.AuditLog.Add(x);
                }
            }

            // Call the original SaveChanges(), which will save both the changes made and the audit records
            return base.SaveChanges();
        }
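For completeness, these methods live in a context class that is not shown in full here. A minimal sketch might look like the following; the Items and AuditLog DbSet names and the DemoDBContext class name appear in the other snippets, while the ItemAdditionalAttributes DbSet is my assumption:

```csharp
using System.Data.Entity;

namespace EFChangeTrackingDemo.Models
{
    public class DemoDBContext : DbContext
    {
        public DbSet<Item> Items { get; set; }
        public DbSet<ItemAdditionalAttribute> ItemAdditionalAttributes { get; set; }
        public DbSet<AuditLog> AuditLog { get; set; }

        // The overridden SaveChanges() and the SaveChanges(string userId)
        // method shown above would be placed here.
    }
}
```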

In order to make the output of the entire contents of an entity easier to obtain, we can create an interface that has a Describe() method that can be placed on our model classes:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace EFChangeTrackingDemo.Models
{
    public interface IDescribableEntity
    {
        // Override this method to provide a description of the entity for audit purposes
        string Describe();
    }
}

With this in place, our entities will look something like this:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace EFChangeTrackingDemo.Models
{
    [Table("Items")]
    public class Item : IDescribableEntity
    {
        [Key]
        public System.Guid ItemID { get; set; }

        [Required]
        [MaxLength(100)]
        public string ItemDescription { get; set; }

        public virtual ICollection<ItemAdditionalAttribute> ItemAdditionalAttributes { get; set; }

        public Item()
        {
            this.ItemAdditionalAttributes = new List<ItemAdditionalAttribute>();
        }

        public string Describe()
        {
            return "{ ItemID : \"" + ItemID + "\", ItemDescription : \"" + ItemDescription + "\" }";
        }
    }
}

So now the only remaining item to implement is the method that creates AuditLog entries from a changed entity. To complete the AuditLog entries, we need the table name (if one was specified via the [Table()] attribute, otherwise we will use the entity class name), the primary key value, and either the contents of the entire entity (for inserts and deletes), or the before-and-after values of the changed columns. For the contents of the entire entity, if the entity implements the IDescribableEntity interface, we will call the Describe() method to get the content data, otherwise we will just use the ToString() method. For updates, we will just loop through all of the properties of the entity and add an AuditLog record to the output for each one where the original value is not equal to the new value. The GetAuditRecordsForChange() method will then look like this:

        private List<AuditLog> GetAuditRecordsForChange(DbEntityEntry dbEntry, string userId)
        {
            List<AuditLog> result = new List<AuditLog>();

            DateTime changeTime = DateTime.UtcNow;

            // Get the Table() attribute, if one exists
            TableAttribute tableAttr = dbEntry.Entity.GetType().GetCustomAttributes(typeof(TableAttribute), false).SingleOrDefault() as TableAttribute;

            // Get table name (if it has a Table attribute, use that, otherwise get the pluralized name)
            string tableName = tableAttr != null ? tableAttr.Name : dbEntry.Entity.GetType().Name;

            // Get primary key value (If you have more than one key column, this will need to be adjusted)
            string keyName = dbEntry.Entity.GetType().GetProperties().Single(p => p.GetCustomAttributes(typeof(KeyAttribute), false).Count() > 0).Name;

            if (dbEntry.State == System.Data.EntityState.Added)
            {
                // For Inserts, just add the whole record
                // If the entity implements IDescribableEntity, use the description from Describe(), otherwise use ToString()
                result.Add(new AuditLog()
                        {
                            AuditLogID = Guid.NewGuid(),
                            UserID = userId,
                            EventDateUTC = changeTime,
                            EventType = "A", // Added
                            TableName = tableName,
                            RecordID = dbEntry.CurrentValues.GetValue<object>(keyName).ToString(),  // Again, adjust this if you have a multi-column key
                            ColumnName = "*ALL",    // Or make it nullable, whatever you want
                            NewValue = (dbEntry.CurrentValues.ToObject() is IDescribableEntity) ? (dbEntry.CurrentValues.ToObject() as IDescribableEntity).Describe() : dbEntry.CurrentValues.ToObject().ToString()
                        }
                    );
            }
            else if(dbEntry.State == System.Data.EntityState.Deleted)
            {
                // Same with deletes, do the whole record, and use either the description from Describe() or ToString()
                result.Add(new AuditLog()
                        {
                            AuditLogID = Guid.NewGuid(),
                            UserID = userId,
                            EventDateUTC = changeTime,
                            EventType = "D", // Deleted
                            TableName = tableName,
                            RecordID = dbEntry.OriginalValues.GetValue<object>(keyName).ToString(),
                            ColumnName = "*ALL",
                            OriginalValue = (dbEntry.OriginalValues.ToObject() is IDescribableEntity) ? (dbEntry.OriginalValues.ToObject() as IDescribableEntity).Describe() : dbEntry.OriginalValues.ToObject().ToString()
                        }
                    );
            }
            else if (dbEntry.State == System.Data.EntityState.Modified)
            {
                foreach (string propertyName in dbEntry.OriginalValues.PropertyNames)
                {
                    // For updates, we only want to capture the columns that actually changed
                    if (!object.Equals(dbEntry.OriginalValues.GetValue<object>(propertyName), dbEntry.CurrentValues.GetValue<object>(propertyName)))
                    {
                        result.Add(new AuditLog()
                                {
                                    AuditLogID = Guid.NewGuid(),
                                    UserID = userId,
                                    EventDateUTC = changeTime,
                                    EventType = "M",    // Modified
                                    TableName = tableName,
                                    RecordID = dbEntry.OriginalValues.GetValue<object>(keyName).ToString(),
                                    ColumnName = propertyName,
                                    OriginalValue = dbEntry.OriginalValues.GetValue<object>(propertyName) == null ? null : dbEntry.OriginalValues.GetValue<object>(propertyName).ToString(),
                                    NewValue = dbEntry.CurrentValues.GetValue<object>(propertyName) == null ? null : dbEntry.CurrentValues.GetValue<object>(propertyName).ToString()
                                }
                            );
                    }
                }
            }
            // Otherwise, don't do anything, we don't care about Unchanged or Detached entities

            return result;
        }

I have used Code-First for this, and have avoided using the Fluent API for adding attributes to the model classes. You can use Model-First or Database-First and obtain the same results, but you will need to add the DbContext API code-generation item to your .edmx model and modify the templates to add at least the [Key] attribute to key columns. The Fluent API is a bit trickier: I’m not currently aware of a way to pull those attributes out of the entity, so you would need to find some other way to determine which column to output as the key value to duplicate this example.

To exercise this code, here is a simple example in an MVC 3 project:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using EFChangeTrackingDemo.Models;

namespace EFChangeTrackingDemo.Controllers
{
    public class HomeController : Controller
    {
        DemoDBContext db = new DemoDBContext();

        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";

            string userId = User.Identity.Name;

            Item newItem = new Item() { ItemID = Guid.NewGuid(), ItemDescription = "Test item" };

            db.Items.Add(newItem);
            db.SaveChanges(userId);

            newItem.ItemDescription = "Test 2";
            db.SaveChanges(userId);

            newItem.ItemAdditionalAttributes.Add(new ItemAdditionalAttribute() { ItemAdditionalAttributeID = Guid.NewGuid(), ItemID = newItem.ItemID, AttributeName = "Test Attribute", AttributeValue = "This is a test" });
            db.SaveChanges(userId);

            // Note that this will cascade delete down to remove the ItemAdditionalAttribute as well
            db.Items.Remove(newItem);
            db.SaveChanges(userId);

            return View();
        }

        public ActionResult About()
        {
            return View();
        }
    }
}

Your code will (hopefully) be more complex than this, but it's enough to demonstrate how our audit process works. When a user visits the home page of the application, the controller will create a new Item record, save it, change the description, save it again, add a child record in the ItemAdditionalAttributes table, save again, and delete the Item record, which will also cascade the delete to the record in the ItemAdditionalAttributes table. If we run this and examine the AuditLog table, we will find the following:

[Screenshot: the AuditLog table rows produced by the test run]

Note that the cascade delete captures the delete from both tables, and has records of all of the changes made between calls to SaveChanges(). Now that this is in place, any change made in the application that uses that DbContext class will automatically write audit information to the AuditLog table. Of course, in your own applications, you can do anything you want with the data, such as log it via your logging framework of choice, but with the new change tracking features in the DbContext API, you no longer have to do this manually in all of your data access code. By adding this code in one place, you can get audit coverage of your entire app. That is about as DRY as you can get. I have placed my sample project here for you to download.


Introduction

I have started this site as a place to share some of my ideas and observations on software development with the community.  I am currently working as a lead systems engineer/developer for a large global aviation services company, where the majority of my time is spent on C# ASP.NET and WPF applications running against SQL Server databases (with SSRS, SSIS, SSAS, and MDS all thrown in for good measure), although projects with Java, Oracle, Silverlight, SharePoint, or something completely different do come up regularly.  I suspect that my posts here will tend to follow a similar pattern, as most of the initial ideas I have written down are related to C# and web development.

My hope is that these posts will provide ideas for you to use in your own development work, and that through a bit of community involvement on this site, new ideas might emerge to make the implementation portion of development easier, so we can all spend more time solving new problems instead of re-implementing the same solutions to the same old ones.