VS2010 Schema Compare Order of Operations

Take the formulas 1 + 2 x 3 and (1 + 2) x 3; two completely different answers. In the case of Visual Studio Schema Compare, I would assume the order of operations for updating a database would be to first update tables, then update views (which can depend on the tables), and finally update stored procedures (which can depend on either the tables or the views). Instead, Schema Compare appears to attempt stored procedures first, and when it comes across a reference to a database object that does not yet exist, it dies out with an error.

Order of Operations

In this example, we have two tables which have been created and the corresponding Create and Update stored procedures which interact with the new tables. This is a pretty basic example of typical database schema updates and one I would think Schema Compare could figure out.

The current workaround I have found for this is to:

  1. Mark all Stored Procedures as Skip All.
  2. Run Write Updates.
  3. Refresh the compare.
  4. Mark all stored procedures as Create/Update All.
  5. Run Write Updates.

A few extra steps, but it seems to work every time. Hopefully this is something that Microsoft will correct or provide some information as to why the order of operations proceeds with stored procedures first.

VS2010 Schema Compare Crashing

VS2010 Schema Compare Crash

I’ve been using Visual Studio 2010 Schema Compare for deploying database changes from one environment to another (development to staging, staging to production), but I have run into an annoying bug that I would love some enlightenment on.

I am trying to compare a database on our staging SQL server to our production server and after selecting the two databases and clicking “Compare”, Visual Studio immediately crashes.

Problem signature:

  Problem Event Name: CLR20r3
  Problem Signature 01: devenv.exe
  Problem Signature 02: 10.0.30319.1
  Problem Signature 03: 4ba1fab3
  Problem Signature 04: Microsoft.Data.Schema.Sql
  Problem Signature 05: 10.0.0.0
  Problem Signature 06: 4ba1d812
  Problem Signature 07: 2f6c
  Problem Signature 08: 24
  Problem Signature 09: System.NullReferenceException
  OS Version: 6.1.7600.2.0.0.256.48
  Locale ID: 1033

Additional information about the problem:

  LCID: 1033

Read our privacy statement online:
http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409

If the online privacy statement is not available, please read our privacy statement offline:
C:\Windows\system32\en-US\erofflps.txt

What’s odd about this particular problem is that other databases on the staging server can be compared to their production counterparts without a problem. Additionally, this particular database could be backed up and restored to our development environment, and a schema compare could then be run between development and production.

If anyone has any tips on this, please leave a comment explaining. If you’re having this issue, note that backing up and restoring the database seems to work as a workaround. I plan on reporting this to Microsoft as well and will update the post if I find anything.

Qxtend Query Service, .NET, and Dexter

Work, a new house, a young daughter, and watching Dexter from the beginning don’t leave a lot of time for writing a software blog. Fortunately, tonight I finished the last episode (terrible ending) and it’s time to get back to writing!

To catch you up to the current season in my storyline: I’ve been working in the enterprise world, the land of three-letter acronyms (TLAs). I recently asked a few of my co-workers to help me come up with a short list to capture the type of work we’ve been doing; here’s what we came up with:

  • ERP – Enterprise Resource Planning
  • MES – Manufacturing Execution System
  • MRP – Material Requirements Planning
  • CRM – Customer Relationship Management
  • EAM – Enterprise Asset Management
  • PLM – Product Lifecycle Management
  • DHR – Device History Record
  • DHF – Design History File
  • MDR – Master Device Record
  • NCR – Non-Conformance Record
  • ECO – Engineering Change Order
  • BOM – Bill of Materials
  • BPR – Business Process Reengineering
  • ATP – Available To Promise
  • ISO – International Organization for Standardization
  • CNC – Computer Numerical Control
  • RMA – Return Merchandise Authorization
  • ROI – Return On Investment
  • CSR – Customer Service Representative
  • DNS – Domain Name Service
  • EFT – Electronic Funds Transfer
  • FTP – File Transfer Protocol
  • JIT – Just In Time
  • POS – Point of Sale
  • CAD – Computer Aided Design
  • CAE – Computer Aided Engineering
  • RFP – Request For Proposal
  • WIP – Work In Process

In addition to all of the TLAs, we’ve been working with a ton of new (and old) technology, which is something I thought I would post about. What’s interesting about working with many of these ERP-related systems is that if they are not SAP or Oracle, they don’t seem to get a lot of exposure. This makes working on these systems much harder, because information is no longer a Google (or Bing) search away. Not only is the technology older, but the way to access information about the technology is a throwback to the days of (gasp) reading manuals and asking real people.

The particular ERP system we use is called QAD, which uses a web-service-based interface called Qxtend to communicate with other systems. Trust me, more than once I tried searching for interfacing with QAD via Qxtend and .NET, but to no avail. Working with Qxtend as a .NET developer is very different from consuming typical web services, at least for us at this stage in our understanding. Instead of adding a service reference or using WCF or a similar framework, we had to do this an older, more manual way, which I will outline below. It does seem to work quite reliably, but there were several gotchas both in setting up the Query Service on the Qxtend side and in consuming it from .NET. All of the code on the .NET side can be used to connect to other SOAP-based web services, so if you aren’t using QAD, don’t worry – you can still get a bit out of this code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Net;
using System.Xml;

namespace QxtendSoapCall
{
    class Program
    {
        static void Main(string[] args)
        {
            //Prep information related to Qxtend 
            //(these values are made up for demo purposes)
            string url = "http://qad-qxtend/services/QDocWebService";
            string receiver = "qad_rcv";
            string domain = "abc";
            string version = "v1_1";
            string sourceApp = "abc";
            string profile = "GetCustomerReps";
            int maxRows = 0;  //unlimited

            //Set filter for query service (optional)
            //(these values are made up for demo purposes)
            string part = "ABC-123";
            string filter = string.Format("pt_part = {0}", part);

            //Load generic xml request template from external file
            //(create an empty QueryService QDOC request to begin and add placeholders)
            string templatePath = @"C:\XmlData\QueryService.xml";
            string requestXML = string.Empty;
            using (StreamReader streamReader = new StreamReader(templatePath))
            {
                requestXML = streamReader.ReadToEnd();
            }

            //Replace template values with values for this query service
            requestXML = requestXML.Replace("{receiver}", receiver);
            requestXML = requestXML.Replace("{domain}", domain);
            requestXML = requestXML.Replace("{version}", version);
            requestXML = requestXML.Replace("{sourceApplication}", sourceApp);
            requestXML = requestXML.Replace("{profile}", profile);
            requestXML = requestXML.Replace("{filter}", filter);
            requestXML = requestXML.Replace("{maxRows}", maxRows.ToString());

            //Clean up template
            requestXML = requestXML.Replace("\n", "").Replace("\r", "");

            //Prep service call variables for qxtend
            WebRequest request = null;
            WebResponse response = null;
            string xmlResponse = string.Empty;

            try
            {
                //Prepare web request
                request = WebRequest.Create(url) as HttpWebRequest;
                request.Method = "POST";  //post method
                request.ContentType = "text/xml";  //xml
                request.Headers.Add("soapaction", url);  //soapaction header
                request.Headers.Add("Synchronous", "Yes");  //synchronous
                request.Timeout = 30000;  //30 seconds timeout expiry

                //Encode xml string into byte array
                byte[] byteData = Encoding.UTF8.GetBytes(requestXML);
                request.ContentLength = byteData.Length;

                //Post byte array
                using (Stream postStream = request.GetRequestStream())
                {
                    postStream.Write(byteData, 0, byteData.Length);
                    postStream.Close();
                }

                //Get web response
                response = request.GetResponse() as HttpWebResponse;

                //Read the response stream into a string
                using (Stream stream = response.GetResponseStream())
                using (StreamReader reader = new StreamReader(stream))
                {
                    xmlResponse = reader.ReadToEnd();
                }
            catch (WebException webEx)
            {
                //TODO: Handle your web exceptions here
            }
            catch (Exception ex)
            {
                //TODO: Handle your general exceptions here
            }

            //Convert string to XmlDocument (or XDocument)
            XmlDocument xdoc = new XmlDocument();
            if (!string.IsNullOrEmpty(xmlResponse))
            {
                xdoc.LoadXml(xmlResponse);
            }

            //TODO: Do something with XML now that you have data from QAD
        }
    }
}

That’s it: now you’ve got a way to generically call the QAD Qxtend Query Service from .NET without needing an XSD or creating service references. This is fairly new to me, so if you see any bugs or better approaches, please leave a comment!
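Once the response string is loaded into the XmlDocument, pulling values out is ordinary XML navigation. The sketch below uses made-up element names (pt_part, pt_desc1 are QAD-style field names, but the response shape here is invented for the demo); a real Qxtend QDOC reply is wrapped in a SOAP envelope with namespaces, so you would register those with an XmlNamespaceManager before selecting nodes:

```csharp
using System;
using System.Xml;

class ResponseParsingDemo
{
    static void Main()
    {
        //A made-up fragment standing in for a real Qxtend QDOC reply
        string xmlResponse =
            "<response>" +
            "<row><pt_part>ABC-123</pt_part><pt_desc1>Widget</pt_desc1></row>" +
            "<row><pt_part>ABC-124</pt_part><pt_desc1>Gadget</pt_desc1></row>" +
            "</response>";

        XmlDocument xdoc = new XmlDocument();
        xdoc.LoadXml(xmlResponse);

        //Walk each result row and read the fields we care about
        foreach (XmlNode row in xdoc.SelectNodes("/response/row"))
        {
            string part = row.SelectSingleNode("pt_part").InnerText;
            string desc = row.SelectSingleNode("pt_desc1").InnerText;
            Console.WriteLine("{0}: {1}", part, desc);
        }
    }
}
```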

Cache Management Helper Class

As a follow-up to my post on creating a Web Session Management and Centralization Class, I thought I would also touch on one method of presentation-level caching. Caching, as we all know, stores a local version of some data closer to the actual requestor making the call, whether that is in the BLL, the presentation layer, or even page/control-specific code. Caching can be a significant boost to performance and is fairly easy to implement. Additionally, there are several types of caching, such as image, page, and custom. In this brief example, we’re going to explore a custom implementation of caching to minimize trips to the database.

When Would I Use This?

In my opinion, the ideal candidates for presentation-level caching are things like dropdown lists which are dynamically pulled from the database. I’ve been working on a project which utilizes several dropdowns throughout the application, each administered dynamically. This means that for each page, I may have several calls simply to set up the page and populate the dropdowns.

In this application, I’m calling a business logic layer which calls down to the data access layer and finally to the database. We can minimize the number of calls to the database by calling our WebCacheManager class instead of calling the business layer directly; the manager first checks whether the data exists in the local cache, or whether it needs to make a call to retrieve the data. If the call is made down to the business layer, the results are then stored in the local cache until either the cache expires based on an absolute date/time, or a dependent file (acting as a trigger) is modified.

The external file, whose path is stored in _cacheDependencyFilePath in the following code, is a text file which can be manually or programmatically modified to expire the cache without having to reset IIS or wait out the expiration time period. The following code sets up a helper class which could sit in the presentation layer (the web application) and be called to return cached data very quickly.
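Programmatically expiring the cache is then just a matter of touching that trigger file. A minimal sketch (the file name here matches the one used below, but the path is whatever your application actually points at):

```csharp
using System;
using System.IO;

class ExpireCacheDemo
{
    static void Main()
    {
        //Path to the same file the CacheDependency watches (demo path)
        string path = "CacheDependency.txt";

        //Any write bumps the file's last-modified timestamp, which causes
        //every cache entry holding a CacheDependency on it to be invalidated
        File.AppendAllText(path, "Expired at " + DateTime.Now + Environment.NewLine);

        Console.WriteLine(File.Exists(path));
    }
}
```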

WebCacheManager

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Common.Entities.Complaints;  //This is just a reference to my business entities project
using System.Web.Caching;
using System.IO;

namespace UI.ComplaintsWeb.Code
{
    public class WebCacheManager
    {
        /// <summary>
        /// A reference to the current context cache.
        /// </summary>
        private Cache _cache;

        /// <summary>
        /// A file path to the cache dependency file which triggers expiration of the cache.
        /// </summary>
        private string _cacheDependencyFilePath;

        /// <summary>
        /// The absolute date/time of cache expiration.
        /// </summary>
        private DateTime _expirationDate;

        public WebCacheManager()
        {
            _cache = HttpContext.Current.Cache;
            _cacheDependencyFilePath = Path.GetFullPath("../CacheDependency.txt");
            _expirationDate = new DateTime(DateTime.Now.Year, DateTime.Now.Month, DateTime.Now.Day, 23, 59, 59);
        }

        public InspectionType[] GetInspectionTypes()
        {
            InspectionType[] data;
            string cacheKey = "InspectionTypes";

            if (_cache[cacheKey] != null)
            {
                //Get data from cache
                data = _cache[cacheKey] as InspectionType[];
            }
            else
            {
                //Get data from database                
                data = WebHelper.ComplaintsLogic.GetInspectionTypes();

                //Assign to cache
                _cache.Add(cacheKey, data, new CacheDependency(_cacheDependencyFilePath), _expirationDate, System.Web.Caching.Cache.NoSlidingExpiration, CacheItemPriority.Normal, null);
            }

            return data;
        }        
    }
}

Fancy, How Do I Call It?

No sweat. Let’s imagine we’re binding to a DropDownList called “InspectionTypeList”. Instead of your typical business call (or direct DB call if you’re into that sort of thing, and you probably shouldn’t be, but that’s another topic), simply use syntax similar to the following:

var cacheManager = new WebCacheManager();
//Inspection Types
InspectionTypeList.DataSource = cacheManager.GetInspectionTypes();
InspectionTypeList.DataBind();

Not too much to it. If you have any questions on how the WebCacheManager is being used, or suggestions on improvements, please leave a comment below. Don’t be shy, you know you want to comment.
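One improvement worth considering: rather than writing a near-identical method per list, the cache-lookup pattern can be factored into a single generic helper. The sketch below swaps the ASP.NET cache for a plain dictionary just so it runs anywhere; in the real WebCacheManager you would keep the CacheDependency overload of Cache.Add shown above:

```csharp
using System;
using System.Collections.Generic;

class GetOrAddDemo
{
    //A stand-in for HttpContext.Current.Cache so the pattern runs anywhere
    static Dictionary<string, object> _cache = new Dictionary<string, object>();

    static T GetOrAdd<T>(string cacheKey, Func<T> loader) where T : class
    {
        object cached;
        if (_cache.TryGetValue(cacheKey, out cached))
        {
            return (T)cached;  //cache hit - skip the expensive call
        }

        //Cache miss - call down to the business layer and remember the result
        T data = loader();
        _cache[cacheKey] = data;
        return data;
    }

    static void Main()
    {
        int loads = 0;
        Func<string[]> loader = () => { loads++; return new[] { "Visual", "Functional" }; };

        GetOrAdd("InspectionTypes", loader);
        GetOrAdd("InspectionTypes", loader);  //second call is served from cache

        Console.WriteLine("Loader invoked {0} time(s)", loads);
    }
}
```

With this in place, each list method collapses to a one-liner, e.g. `GetOrAdd("InspectionTypes", () => WebHelper.ComplaintsLogic.GetInspectionTypes())`.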

A Generic Data Mapper for Simple Data

Recently, a buddy and I were working on a project and came across an almost abnormal number of simple data entities which were used primarily for populating dropdowns. You know the type of data: there is essentially an ID, a Name, and not much else.

We needed to do several things with these simple entities, namely populate dropdowns, but also associate them with other, more complex entities. In our architecture, we’re using a layered approach with manual mappers to translate retrieved data into business entities. There were several ways we could have propagated this data through the DAL, BLL, and presentation, but we chose to create individual entities to represent each of these lists of data. Our reasoning was to keep the code cleanly separated and provide for future classes that could take on additional responsibility if these entities became more complex.

For these simple entities, we didn’t want to write an individual mapper for each one (essentially a whole lot of cut and paste work); so we opted to create a generic mapper class which is constrained by a base class which all of the simple entities inherit from (SimpleData).

The SimpleData class is just that, simple, and looks like this:

The SimpleData Entity

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.Serialization;

namespace Common.Entities.General
{
    [DataContract()]
    public abstract class SimpleData
    {
        #region Constructors

        public SimpleData()
        {
        }

        public SimpleData(int id)
            : this(id, string.Empty, string.Empty)
        {
        }

        public SimpleData(string text, string value) : this(0, text, value)
        {            
        }

        public SimpleData(int id, string text, string value)
        {
            this.ID = id;
            this.DataText = text;
            this.DataValue = value;
        }

        #endregion

        #region Properties

        public DateTime Created { get; set; }
        public string DataText { get; set; }
        public string DataValue { get; set; }
        public int ID { get; set; }
        public DateTime? LastUpdated { get; set; }

        #endregion

        #region Methods

        public abstract SimpleData Create(int id, string text, string value);

        #endregion
    }
}

Now that we had a base class to inherit from, we needed to apply this to our simple entity to be mapped; here is one example of our entity inheriting SimpleData:

The Entity to be Mapped

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Common.Entities.General
{
    /// <summary>
    /// An arbitrary class implementing SimpleData.
    /// </summary>
    public class CallType : SimpleData
    {
        #region Constructors

        public CallType()
        {
        }

        public CallType(int id)
            : this(id, string.Empty, string.Empty)
        {
        }

        public CallType(string text, string value) : base(text, value)
        {            
        }

        public CallType(int id, string text, string value)
            : base(id, text, value)
        {
        }

        #endregion

        public override SimpleData Create(int id, string text, string value)
        {
            return new CallType(id, text, value);
        }
    }
}

Now that we had a consistent approach to marking up the simple entities, we could create a single generic mapper class called “SimpleMapper”. The class takes in a DataRow, the name of the ID field, the name of the textual (Name) field, and the name of the value field (specifically, if this is different from the ID).

The reason we have a few extra input parameters is so the SimpleMapper can handle these two scenarios (we’ll use our Call Type as the example):

  • Scenario 1: My data has an integer ID for the unique identifier of the database record and the name of the Call Type as the textual value. My database table could look like this:

    CallTypes Database Table
    ID    int
    Name  varchar(50)

  • Scenario 2: My data has an integer ID for the unique identifier of the database record, a name of the Call Type as the textual value, and a 3-digit textual code for the Call Type. My database table could look like this:

    CallTypes Database Table
    ID    int
    Name  varchar(50)
    Code  varchar(3)

By specifying which column should be returned as the DataValue field (either the ID, the code, or even the textual name itself) the class has a little more flexibility. Below is the SimpleMapper class in its entirety:

The SimpleMapper Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Common.Entities.General;
using System.Data;

namespace DataAccess.Full
{
    public class SimpleMapper<T> where T : SimpleData, new()
    {
        public static T Map(DataRow row, string idName, string textName, string valueName)
        {
            var creator = new T();
            int id = (int)row[idName];
            string text = (string)row[textName];

            string value = id.ToString();
            if (idName != valueName)
            {
                value = (string)row[valueName];
            }

            var obj = creator.Create(id, text, value);
            
            return (T)obj;
        }

        public static T[] Map(DataTable data, string idName, string textName, string valueName)
        {
            var entities = new List<T>();
            foreach (DataRow row in data.Rows)
            {
                entities.Add(Map(row, idName, textName, valueName));
            }

            return entities.ToArray();
        }
    }
}
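To tie it together, here’s roughly how the mapper gets called. This sketch assumes the SimpleData, CallType, and SimpleMapper classes above are in the project; the table contents are made up to match Scenario 2:

```csharp
using System;
using System.Data;

class SimpleMapperUsage
{
    static void Main()
    {
        //Build a DataTable shaped like the Scenario 2 CallTypes table
        var table = new DataTable("CallTypes");
        table.Columns.Add("ID", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Code", typeof(string));
        table.Rows.Add(1, "Support Call", "SUP");
        table.Rows.Add(2, "Sales Call", "SAL");

        //Scenario 1 style: the ID doubles as the DataValue
        CallType[] byId = SimpleMapper<CallType>.Map(table, "ID", "Name", "ID");

        //Scenario 2 style: the 3-digit code is the DataValue
        CallType[] byCode = SimpleMapper<CallType>.Map(table, "ID", "Name", "Code");

        Console.WriteLine("{0} / {1}", byId[0].DataValue, byCode[0].DataValue);
    }
}
```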

Implementing Data Access Interfaces and Stubbing

This article describes one approach on how to interface a data access layer to allow for multiple data access platforms including a stubbed data access layer. I’ll show how to link up a stubbed layer and an MS SQL layer which both utilize the same interface contract.

Sounds Great! What Are We Doing Again?

It sounds a bit more complicated than it really is. Sometimes, especially when you are working on a large project, or one that is critical (core to the business), you may want to give some additional consideration to your data access architecture. In enterprise applications, or in applications where you foresee wholesale changes to data access in the future, you may want to create an interface for any data access implementation to follow, in case you decide to switch from SQL Server to Oracle, for example.

Another good use is when you want to “stub” the data access layer. Stubbing is simply returning fake, hardcoded data that emulates what you would expect the real data access layer to return. At first, it sounds like stubbing doesn’t accomplish anything, but it serves several purposes: it allows you to split work between developers (or teams) across layers, letting one developer work on the business logic or presentation layer while another team works on the data access layer. Stubbing is also useful for unit tests when you do not want to actually build up/tear down the database and simply want to emulate what the database would return.

Not 100% convinced? Well, don’t feel alone; it took me a little while to accept that there was any real return on developing stubbed layers, but trust me, there is. Hopefully this will become clearer further down the article.

The Solution

First, we need to set up the solution. Our solution is going to use several projects to logically (and potentially physically) separate code into layers/tiers. The projects we will be working with are as follows:

  • P2.Business
  • P2.DataAccess
  • P2.DataAccess.MSSQL
  • P2.DataAccess.Stub
  • P2.Common.Entities

There are other projects within this solution, but for the purpose of this article they’re not referenced.

A Note About Business Entities

For the sake of brevity, I’m not going to go into the details of the business entities; simply note that when the following code references a data type from Common.Entities, this means there is a business entity created to frame the data in an object-oriented view.

For more information on business entities you can view the following posts: Common Reusable Business Entities Part 1, Common Business Entities Part 2, and Common Business Entities Part 3.

Creating the Data Access Interface

First, we need to create an interface (contract) that tells our real data access layers how we want them to interact. In this demonstration, I will be working within the “P2” solution and the projects listed above.

The project P2.DataAccess is going to house all of our interface knowledge; P2.DataAccess.MSSQL and P2.DataAccess.Stub will implement the interfaces within P2.DataAccess. We’ll create an interface for working with our “products” first called IProductData.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

using P2.Common.Entities.Ecommerce;

namespace P2.DataAccess
{
    public interface IProductData
    {
        bool SaveProduct(ref Product product);
        bool DeleteProduct(Product product);

        Product GetProductByID(int id);
        Product[] GetProducts();

        Product[] SearchProducts(string search);
        Product[] SearchProducts(bool posSearch, string search, bool posCategory, Category[] categories);
    }
}

This interface describes what the product data access code will be able to do, in this case, it will be able to save a product, delete a product, and retrieve products in a variety of ways. Each data access, regardless of platform, will need to implement this interface and address these concerns.

Creating the Stubbed Data Access Layer

Now, let’s create a “real” data access layer which implements this interface. Because the interface resides in a different project, make sure that the P2.DataAccess.Stub project references P2.DataAccess. Next, create a file called ProductData.cs as follows:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

using P2.Common.Entities.Ecommerce;
using P2.Common.Entities.General;

namespace P2.DataAccess.Stub
{
    public class ProductData : IProductData
    {
        #region IProductData Members

        public bool SaveProduct(ref Product product)
        {
            return true;
        }

        public bool DeleteProduct(Product product)
        {
            return true;
        }

        public Product GetProductByID(int id)
        {
            var product = new Product(id)
            {
                Name = "Cosmic Uamou",
                Summary = "A handpainted original from Tokyocube.",
                Description = "Tokyocube are very pleased to present a beautiful new custom series from Ayako Takagi called Cosmic Uamou. The series is made up of 7 hand-painted originals, each with its own unique colours and design. The series is based on the Buuts Buuts Uamou, and each figure is signed and exclusively available through Tokyocube.",
                Price = 12.50,
                Weight = .25,
                Width = 2,
                Height = 6,
                Length = 2,
                SKU = "CA001",
                LargeImage = new Image("1_l.jpg", "Cosmic Uamou", "Cosmic Uamou - Front"),
                MediumImage = new Image("1_m.jpg", "Cosmic Uamou", "Cosmic Uamou - Front"),
                Thumbnail = new Image("1_t.jpg", "Cosmic Uamou", "Cosmic Uamou - Front"),
            };

            return product;
        }

        public Product[] GetProducts()
        {
            Product[] products = new Product[3];

            products[0] = new Product(1)
            {
                Name = "Cosmic Uamou",
                Summary = "A handpainted original from Tokyocube.",
                Description = "Tokyocube are very pleased to present a beautiful new custom series from Ayako Takagi called Cosmic Uamou. The series is made up of 7 hand-painted originals, each with its own unique colours and design. The series is based on the Buuts Buuts Uamou, and each figure is signed and exclusively available through Tokyocube.",
                Price = 12.50,
                Weight = .25,
                Width = 2,
                Height = 6,
                Length = 2,
                SKU = "CA001",
                LargeImage = new Image("1_l.jpg", "Cosmic Uamou", "Cosmic Uamou - Front"),
                MediumImage = new Image("1_m.jpg", "Cosmic Uamou", "Cosmic Uamou - Front"),
                Thumbnail = new Image("1_t.jpg", "Cosmic Uamou", "Cosmic Uamou - Front"),
            };

            products[1] = new Product(2)
            {
                Name = "Mermaid",
                Summary = "Designed by Keiichi Tanaami.",
                Description = "Designed by Keiichi Tanaami, and uniquely produced in polystone, the Mermaid is a very special piece showing the potential of current production techniques with its superb detailing and immense quality. Produced to a run of just 200 worldwide, Keiichi Tanaami's Mermaid stands 19 cm tall.",
                Price = 10.50,
                Weight = .25,
                Width = 2,
                Height = 6,
                Length = 2,
                SKU = "CA002",
                LargeImage = new Image("2_l.jpg", "Mermaid", "Mermaid - Front"),
                MediumImage = new Image("2_m.jpg", "Mermaid", "Mermaid - Front"),
                Thumbnail = new Image("2_t.jpg", "Mermaid", "Mermaid - Front"),
            };

            products[2] = new Product(3)
            {
                Name = "Kaiju Blue",
                Summary = "Designed by Sun Guts.",
                Description = "Kaiju Blue release a new collection of art toys from Japanese designers Sun Guts, including variations of the Uraname and Suiko figures. All produced in Japan, and available as a limited run.",
                Price = 12.50,
                Weight = .25,
                Width = 2,
                Height = 6,
                Length = 2,
                SKU = "CA003",
                LargeImage = new Image("3_l.jpg", "Kaiju Blue", "Kaiju Blue - Front"),
                MediumImage = new Image("3_m.jpg", "Kaiju Blue", "Kaiju Blue - Front"),
                Thumbnail = new Image("3_t.jpg", "Kaiju Blue", "Kaiju Blue - Front"),
            };

            return products;
        }

        public Product[] SearchProducts(string search)
        {
            Product[] products = new Product[1];
            products[0] = new Product(1);

            return products;
        }

        public Product[] SearchProducts(bool posSearch, string search, bool posCategory, Category[] categories)
        {
            Product[] products = new Product[1];
            products[0] = new Product(1);

            return products;
        }

        #endregion
    }
}

There are a couple of things to notice in this code example. First, I am including the mapping of data to an entity in my data access layer; some people may do this in the business logic layer and return DataTables/DataSets or some other generic DTO (data transfer object) from the data layer. Because my objects have little, if any, business logic, they are essentially DTOs. Second, everything is hardcoded with actual values and there is no actual database interaction; this is what makes this a “stub”. The results are always the same and can be unit tested against with certainty of the expected result. Finally, on a non-technical note, my products all have pretty unique names and descriptions. I found these while looking at e-commerce websites and got a kick out of their products. Here are a few of their product images:

Article Intermission

tokyo cube ex. 1
tokyo cube ex. 2
tokyo cube ex. 3
Visit Tokyo Cube for more…
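Because the stub’s results never change, checks written against it are deterministic. Here is a hypothetical smoke test against the stubbed ProductData (it assumes the P2 projects above are referenced; a real test would use your unit testing framework’s assertions instead of Console output):

```csharp
using System;
using P2.Common.Entities.Ecommerce;
using P2.DataAccess;
using P2.DataAccess.Stub;

class StubSmokeTest
{
    static void Main()
    {
        //Program against the interface so the same check works for any implementation
        IProductData data = new ProductData();
        Product[] products = data.GetProducts();

        //The stub is hardcoded, so these expectations never change
        Console.WriteLine(products.Length == 3 && products[0].Name == "Cosmic Uamou"
            ? "Stub returns expected data"
            : "Unexpected stub data");
    }
}
```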

Creating the MSSQL Data Access Layer

Let’s now create the MSSQL layer. This project will also reference the main P2.DataAccess project and implement the same interface as before:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using P2.Common.Entities.Ecommerce;
using P2.Common.Utilities;
using System.Data;

namespace P2.DataAccess.MSSQL
{
    public class ProductData : IProductData
    {
        #region Mappers
        
        //TODO: Removed for brevity.

        #endregion

        #region IProductData Members

        public bool SaveProduct(ref Product product)
        {
            const string sql = "SaveProduct";
            var parameters = new SqlHelperParameter[]
            {
                new SqlHelperParameter("Product", product.ID),
                new SqlHelperParameter("Name", product.Name)
            };

            //Create product
            object id = null;
            var idColumn = "ProductID";
            var result = SqlHelper.ExecuteNonQuery(ConnectionManager.Databases.P2_Ecommerce, sql, parameters, ref id, idColumn);

            product.ID = Convert.ToInt32(id);
            return result;
        }

        public bool DeleteProduct(Product product)
        {
            const string sql = "DeleteProduct";
            var parameters = new SqlHelperParameter[]
            {
                new SqlHelperParameter("Product", product.ID),
            };

            var result = SqlHelper.ExecuteNonQuery(ConnectionManager.Databases.P2_Ecommerce, sql, parameters);

            return result;
        }

        public Product GetProductByID(int id)
        {
            const string sql = "GetProductByID";
            var parameters = new SqlHelperParameter[]
            {
                new SqlHelperParameter("ProductID", id)
            };

            DataRow row = null;
            var data = SqlHelper.RetrieveDataTable(ConnectionManager.Databases.P2_Ecommerce, sql, parameters);
            if (data.Rows.Count > 0)
            {
                row = data.Rows[0];
            }

            return MapDataToProduct(row);
        }

        public Product[] GetProducts()
        {
            const string sql = "GetProducts";

            var data = SqlHelper.RetrieveDataTable(ConnectionManager.Databases.P2_Ecommerce, sql);
            return MapDataToProducts(data);
        }

        public Product[] SearchProducts(string search)
        {
            const string sql = "SearchProducts";
            var parameters = new SqlHelperParameter[]
            {
                new SqlHelperParameter("Search", search)
            };

            var data = SqlHelper.RetrieveDataTable(ConnectionManager.Databases.P2_Ecommerce, sql, parameters);
            return MapDataToProducts(data);
        }

        public Product[] SearchProducts(bool posSearch, string search, bool posCategory, Category[] categories)
        {
            //TODO: Leaving out for brevity.
            throw new NotImplementedException();
        }

        #endregion
    }
}

Note: I like to use a SQL Helper class that I’ve written, so the code above does not look like typical ADO.NET. The actual data access code is less important than the concept: we’ve separated two versions of the data access layer, tied together by a common interface.
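For readers unfamiliar with helper classes like this, here is roughly what a call such as RetrieveDataTable wraps in plain ADO.NET. This is a simplified sketch for illustration, not the actual helper code; the method name and signature are assumptions:

```csharp
using System.Data;
using System.Data.SqlClient;

// Simplified sketch of what a helper like SqlHelper.RetrieveDataTable
// might wrap in plain ADO.NET (not the actual helper code).
public static class SqlHelperSketch
{
    public static DataTable RetrieveDataTable(string connectionString,
        string storedProcedure, params SqlParameter[] parameters)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(storedProcedure, connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            if (parameters != null)
                command.Parameters.AddRange(parameters);

            var table = new DataTable();
            using (var adapter = new SqlDataAdapter(command))
            {
                adapter.Fill(table); // Fill opens and closes the connection for us
            }
            return table;
        }
    }
}
```

The point of wrapping this boilerplate is that every data access method shrinks to a stored procedure name and a parameter array, as in the examples above.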

Now, Let’s Tie It All Together in the Business Layer

The final piece of the puzzle is the business layer, where we want to give it the power to decide which data layer to execute. There are several ways to do this, but one I have found really useful is specifying the desired data access implementation in the business logic constructor. We can do this in the following way:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

using P2.DataAccess;
using P2.Common.Entities.Ecommerce;

namespace P2.Business.Ecommerce
{
    public class ProductLogic
    {
        private IProductData _data;

        public ProductLogic() : this(new P2.DataAccess.Stub.ProductData())
        {
        }

        public ProductLogic(IProductData data)
        {
            _data = data;
        }

        public Product[] GetProducts()
        {
            return _data.GetProducts();
        }
    }
}

Notice that the constructor allows the caller to specify what type of Data Access to use. An example of this call could be:

var logic = new ProductLogic(new P2.DataAccess.Stub.ProductData());
var products = logic.GetProducts();

This call would get products from the data access stub. In a later post, I will demonstrate how to default to a data access layer from the parameterless constructor using reflection. Normal code would use the parameterless constructor and take the default, while unit tests can explicitly pass in the stub to avoid database build up/tear down.
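As a preview, here is one possible sketch of that reflection-based default. This is my own illustration under assumed type names, not the final implementation from the later post; in practice the type name would come from configuration rather than being hardcoded:

```csharp
using System;

// Minimal stand-ins so this sketch compiles on its own.
public interface IProductData { }
public class StubProductData : IProductData { }

public class ProductLogic
{
    private readonly IProductData _data;

    // The parameterless constructor resolves a default implementation
    // from a type name (which could come from configuration) via reflection.
    public ProductLogic() : this("StubProductData") { }

    public ProductLogic(string typeName)
        : this((IProductData)Activator.CreateInstance(Type.GetType(typeName, true))) { }

    public ProductLogic(IProductData data) { _data = data; }

    public IProductData Data { get { return _data; } }
}
```

The key design point is that the reflection lives only in the defaulting constructor; every other code path still receives an IProductData and never knows which concrete type it has.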

And there you have it, the business logic now has total control of calling down to the desired data access and the data access layer becomes platform independent by implementing a common interface. Hope this was helpful, happy coding!

A Tale of Two Architectures

At times, software development can be a complicated task, especially when developing enterprise-level systems with heavy load or critical functionality. Because of this, software architecture has become increasingly important and refined. There are now so many software architectural models that choosing the most appropriate one is a skill in itself.

In this article, I would like to demonstrate two fairly generic architectures that I prefer for enterprise applications which require a decent amount of complexity and the flexibility to be easily maintained and updated in the future. There are of course pros and cons to each of these models, which we will discuss later in this article.

Both architectures take into account the following requirements:

  • Logical separation by functional purpose.
  • Reusable code.
  • Flexibility for maintenance/upgrades.
  • Tiers that can be distributed across several servers.

Objects as Actors

Objects as Actors Architecture

First, to clarify what I mean by “actors”: an actor, as I am using the term, refers to the code that executes behavior. In the case of objects as actors, the behavior is derived from the objects (business entities). This architecture is a hybrid, borrowing concepts from several established architectural patterns such as Layered Architecture and OOAD.

As you can see, the layers are split into four major areas: data access, business logic, presentation, and common/crosscut. In this case, the business logic layer contains both business logic (such as validation, computations, etc.) and business entities. Business entities become the central point of code through which all interaction with the presentation layer flows.

Logic and entities are tightly coupled because they are packaged together in the same layer. This is closely aligned with object-oriented design, where everything is object-centric. Now, many of you may have read the words “tightly coupled” and been turned off; but choosing this architecture means deciding that if a business entity’s structure changes, the business logic will need to change as well. In most real-world cases this is true anyway, so the tight coupling is not as negative as you might expect.

Additionally, the architecture includes common assemblies that cut across all layers, such as Utilities (ex: extension methods), Communication (such as between layers on multiple servers), and Security.

The presentation layer could include any form of presentation platform such as Web, Windows, Mobile, Silverlight, etc. The primary point is that as much as possible, whichever presentation is in use, or if multiple presentations exist, all code interacts with a common point, the business entities.

Optionally, you can implement a services layer between the business and presentation layers to lean closer to a service-oriented architecture.

Presentation-Level Code Interaction Example

Let’s look at a short example of how you would interact with the entities from any layer above business logic. This code creates a “person” entity, assigns a first and last name in the constructor, and then saves the person to the database.

var person = new Person("John", "Smith");
person.Save();
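To make this concrete, here is a minimal sketch of what such a “thick” entity could look like. The internals of Save are placeholders of my own, since in a real system it would delegate to the data access layer:

```csharp
// Thick entity: the object itself carries its behavior (sketch only).
public class Person
{
    public int ID { get; private set; }
    public string FirstName { get; private set; }
    public string LastName { get; private set; }

    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    // In a real implementation this would call down to the data access
    // layer; here we just simulate receiving a database-generated key.
    public void Save()
    {
        ID = 1; // placeholder for the key returned by the data layer
    }
}
```

Notice that the caller never touches a data access class directly; the entity is the single point of interaction.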

Business Logic as Actors

Business Logic as Actors Architecture

The main difference between this architecture and “Objects as Actors” is that business entities are moved into the common/crosscut section and treated more as data structures (thin vs. thick entities).

Interaction with these business entities occurs through calls made directly to the business logic layer rather than to the entity. The advantages are that entities are lighter, changes to an entity’s structure do not necessarily force changes in the business tier, and the objects tend to work better with WCF.

The drawback is that entities no longer carry structured behavior defining what they can and cannot do. Instead, you use another class (in the business logic layer) to define and execute the actions of business entities – the business logic actor.

Presentation-Level Code Interaction Example

Here is the same “person” example using the business logic actor. The MembershipActor exposes the operations for various membership-related logic (such as saving a Person).

var person = new Person("John", "Smith");
var membershipActor = new MembershipActor();
membershipActor.SavePerson(person);
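And here is a minimal sketch of the thin entity and its actor side by side. Again, the persistence internals are placeholders of my own, standing in for a call down to the data access layer:

```csharp
// Thin entity: little more than a data structure (sketch only).
public class Person
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }
}

// The behavior lives in the business logic actor instead of the entity.
public class MembershipActor
{
    public void SavePerson(Person person)
    {
        // A real implementation would delegate to the data access layer;
        // here we just simulate a database-generated key.
        person.ID = 1;
    }
}
```

Compare this with the thick entity earlier in the article: the data and the behavior are the same, only the home of the behavior has moved.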

Additional Resources

While in the middle of writing this article, I came across the Application Architecture Guide 2.0 by Microsoft Patterns and Practices. You can download the complete guide from Codeplex here. Kudos to Microsoft for putting this out for free!

Closing Notes

Various points about architecture are debatable as to what is good or bad, mostly because it depends so heavily on the individual situation. However, I hope after reading this article you will leave your feedback on what you agree with, and more importantly, what you disagree with, in the above architectural models.

Happy coding!