Well, this was a perfect example of why theory should really come before practice.
In the first two articles about modeling the data for the ShareDove conference solution, I started from the premise that we model the data first. But I didn’t explain why. When I say model the data, I do not mean creating SharePoint lists and libraries. That comes later. I mean understanding the domain, and creating business entities based on that domain.
This approach has sparked some passionate discussions, both on social networks and inside my team, which go far deeper than the question of how we create business objects for a SharePoint-based solution, or whether we need to create custom objects at all. It is really about the essence of software architecture. SharePoint solutions are no exception there – why should they be?
The second question that was raised was whether validation methods should, or should not, be built into the business entities. Considering that this issue gains further complexity in SharePoint solutions, compared to standard .NET solutions, I was planning to devote a whole article to validation, but I will briefly touch on it now.
Yes, this article should have come before the previous two, but it is never too late.
Why business entities in the first place? Since Domain-Driven Design (DDD) was first presented in the mid-2000s, I have been a great advocate of it, even if it might, at first glance, mean more work for architects and developers. So, what is it all about?
The domain is our business, for which we create the software solution. In order to create good software, you need to know what that software is all about. For example, you cannot create software for organizing conferences unless you have a good understanding of how conferences are organized. You must understand the domain – the knowledge of conference organization. And who knows that better than the people who organize conferences? Business analysts, architects and developers do not have that domain knowledge. DDD is, in essence, an agreement that all the parties interested in the development of a software solution – process owners, business analysts, architects and developers – adopt a common, non-technical vocabulary, focused on the domain, to describe, to communicate, to exchange feedback, and to achieve mutual understanding.
Quite often, communication problems between different parties and development teams are due not only to misunderstanding the language of the domain, but also to the fact that the domain’s language is itself ambiguous. A good domain description uncovers these pitfalls and helps us to address them before they become real, hard-to-correct problems.
For more information on Domain-Driven Design, please take a look at a free "Domain-Driven Design" book here:
But what does this all have to do with the business entities?
Business entities are the first and most obvious representation of the domain, and the first step in describing it. They require in-depth analysis of the business domain. The goal in domain model design is to define business entities that represent real-world entities within the business domain. Somebody without knowledge of conference organization should be able to learn a lot about it just by looking at the business entities and the relations between them.
Business entities are represented in code through classes. This basically means that you start by creating Plain Old CLR Object (POCO) classes that model the domain of the application. And those are rich classes, built in accordance with OOP principles, which can contain behaviors (but not the business processes!), or at a minimum, validation.
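To make this concrete, here is a minimal sketch of what such a POCO business entity could look like. The Session class and its members are illustrative assumptions, not taken from the actual ShareDove solution:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A hypothetical business entity from the conference domain: a plain CLR
// class with no dependencies outside its own assembly.
public class Session
{
    private readonly List<int> ratings = new List<int>();

    public string Title { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }

    public IList<int> Ratings { get { return ratings; } }

    // Behavior on the entity (but no business processes): an aggregate
    // computed from the child collection.
    public double AverageRating
    {
        get { return ratings.Count == 0 ? 0.0 : ratings.Average(); }
    }

    // Basic validation: existence, length and range checks.
    public bool IsValid()
    {
        return !string.IsNullOrEmpty(Title)
            && Title.Trim().Length > 0
            && Title.Length <= 255
            && Start < End;
    }
}
```

Note that the entity knows nothing about SharePoint, SQL Server or any other data source – it only describes the domain.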
The project where the business entities are stored has no dependencies outside its own assembly. On the other hand, that project is referenced from the presentation layer, the business processes, and the data access layer. We are speaking about tight coupling here: it does not make any sense to have loose coupling between the classes in those layers and the business entities. Business entities are our common language; they are the means of exchanging information, and we need them everywhere. Even if business processes are loosely coupled with the data access layer, they are both tightly coupled to the business entities.
The following picture displays the dependencies between different classes and layers:
So, why shouldn’t we use Object-Relational Mapping (ORM) tools, like Linq to SharePoint (for SharePoint), or Entity Framework and NHibernate (for other data sources), to create business entities, and make our life easier?
Nothing stops you from doing that, and it might be the right thing to do in smaller-scale projects, where you don’t have the time and resources for all the DDD hassle.
But, since we are speaking here about enterprise applications based on SharePoint, let’s list just some of the reasons why you should not do that in enterprise scenarios:
- You don’t follow the domain language. Using ORM tools means that you already have a data source, and that you are creating your business entities upon it. This is not suitable for describing the domain, because the simple foreign keys used in the data source cannot represent all the complex relations between the entities. And the process is backwards: we should create our data structures based on the domain, not the other way round.
- Coupling. ORM classes are usually created inside the Data Access Layer, and reside together with the other DAL classes. But we want to reference our business entities from almost all of the classes in the presentation (UI), business, and data access layers, which would mean a direct dependency between the UI and DA layers – and we don’t want that. Sure, we can extract the entity classes from the ORM-generated code and move them to the business layer, where they naturally belong, but that means a lot of hassle.
- Speaking of automatic regeneration of business entity classes through ORM tools: it means that your business entities are unstable, and that you cannot count on them not being changed. This can be very, very dangerous when creating an enterprise application; you need to rely on the business entities as the core representation of the domain. Furthermore, adding behaviors and validation to business entities quickly becomes mission impossible with automatically generated classes. Sure, you can create partial classes and write your code somewhere else, but… you understand – it’s a hassle.
- Multiple data sources are a very common scenario in SharePoint-based enterprise applications. Lists are simply not data tables, and they are not suitable for storing huge amounts of data. It is very common that you store most of your data in SharePoint lists and libraries (where you need all of the fine SharePoint features like versioning, checking in/out, permission control, etc.), but put large, transactional data into SQL Server tables. And, as an additional spice, you could buy some external data (data as a service), for example from Windows Azure Marketplace – let’s say, stock-exchange data. So, you now have three parallel data sources: SharePoint lists, SQL Server tables, and a service. Try representing that through business entities automatically created by an ORM tool.
No, when designing enterprise applications, it is definitely recommended to create your own business entities. And you should do that in the first place, before you start designing your data sources. You will use the Data Access Layer to map your data, wherever it is stored, to the business entities, and do the necessary transformations and persistence tasks inside the DAL.
Does that mean that using ORM tools is not recommended? No, definitely not. I am a huge fan of Linq to SharePoint, but it is just one of the components of the DAL. All the query results are, in the end, mapped to the business entities. This might sound like unnecessary additional work, but that is actually not the case – it is done quite fast.
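As a sketch of how this mapping could look inside the DAL: the repository below queries a session list and translates each list item into a domain entity before it leaves the layer. In a real solution, ConferenceDataContext and SessionItem would be SPMetal-generated Linq to SharePoint classes; here they are stubbed, and all the names are assumptions, so the sample stands alone:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-ins for the SPMetal-generated types (stubbed for illustration).
public class SessionItem
{
    public string Title { get; set; }
    public string Track { get; set; }
    public DateTime? StartTime { get; set; }
    public DateTime? EndTime { get; set; }
}

public class ConferenceDataContext : IDisposable
{
    public ConferenceDataContext(string webUrl) { }
    public List<SessionItem> Sessions = new List<SessionItem>();
    public void Dispose() { }
}

// A minimal domain entity, stubbed here as well.
public class Session
{
    public string Title { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}

public class SessionRepository
{
    private readonly string webUrl;

    public SessionRepository(string webUrl)
    {
        this.webUrl = webUrl;
    }

    public IList<Session> GetSessionsByTrack(string track)
    {
        using (var context = new ConferenceDataContext(webUrl))
        {
            return (from item in context.Sessions
                    where item.Track == track
                    select item)
                   .ToList()          // execute the query
                   .Select(ToEntity)  // map list items to domain entities
                   .ToList();
        }
    }

    // The only place that knows both worlds: the generated list item
    // type and the business entity.
    public static Session ToEntity(SessionItem item)
    {
        return new Session
        {
            Title = item.Title,
            Start = item.StartTime ?? DateTime.MinValue,
            End = item.EndTime ?? DateTime.MinValue
        };
    }
}
```

The rest of the application sees only Session objects; whether they came from a SharePoint list, a SQL table or a service is the repository’s concern.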
Crossing the boundaries
In most cases, to pass data across physical boundaries such as AppDomain, process, and service interface boundaries, you must serialize that data.
We have tight coupling between the business entities and the three main layers, which is fine – we can push and pull our business entities back and forth, and we can use them in the model classes of the MVP pattern (UI layer) – but we cannot, or at least should not, push them through services. Since the business entities are OOP classes, which can contain numerous child objects and structures, they are very difficult to serialize, and simply too heavy to push across physical boundaries.
This is why we often transform these objects into lightweight objects suitable for serialization, and push those through the service. For example, if we want to push information about a conference session through the service, we don’t want to push all of the session ratings together with it, even if they belong to the Session business entity. It is simply too heavy, and this information is not required by the service consumer. But let’s say we want to know the average rating of the session. We will then create a lightweight session object with only those properties which are really necessary – and the average rating will be one of them. We will not carry the weight of each single rating through the service.
This is known as the Data Transfer Object (DTO) pattern. It is a design pattern used to package multiple data structures into a single structure for transfer across boundaries. Translator components are used alongside the DTOs, translating data between the business layer entities and the DTO objects.
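A minimal sketch of the pattern, with assumed type names: the rich Session entity stays inside the service boundary, while a flat, serializable SessionDto, carrying the pre-computed average rating instead of the whole ratings collection, crosses it:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;

// The rich business entity (abbreviated here so the sample stands alone).
public class Session
{
    public string Title { get; set; }
    public DateTime Start { get; set; }
    public List<int> Ratings = new List<int>();

    public double AverageRating
    {
        get { return Ratings.Count == 0 ? 0.0 : Ratings.Average(); }
    }
}

// The lightweight object that crosses the service boundary: flat,
// serializable, and carrying only what the consumer needs.
[DataContract]
public class SessionDto
{
    [DataMember] public string Title { get; set; }
    [DataMember] public DateTime Start { get; set; }
    [DataMember] public double AverageRating { get; set; }
}

// The translator component: maps between the entity and the DTO.
public static class SessionTranslator
{
    public static SessionDto ToDto(Session entity)
    {
        return new SessionDto
        {
            Title = entity.Title,
            Start = entity.Start,
            // The individual ratings stay behind; only the aggregate travels.
            AverageRating = entity.AverageRating
        };
    }
}
```

A corresponding translator in the other direction would rebuild, or look up, the full entity when data comes back in.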
This approach gives us another advantage: it makes it possible to change the internal implementation of the business layer, even the definition of the business entities (although this is not done that often – at least it should not be), without affecting the functionality of consumers on the other side of the service.
Validating Business Entities
Best practices tell us that we should validate our business entities inside the entity classes. That means each business entity class should check the values of its properties, and verify that they fulfill the basic rules, like existence, range and length.
There are various theories and examples all over the internet about the best way to do this. The worst ones are, in my opinion, those with hard-coded rules, where changing a rule means recompiling the solution. Yes, clearly, complex business rules need to be hard-coded, but there is no excuse for hard-coding basic validation rules. Even if you mask this hard-coding behind some nicely styled code, like property attributes, it is still ugly.
There are lots of implementations which include a so-called "rules engine", which relies on some configuration source, like an XML file, from which the rules are read.
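As an illustration, a very small rules engine could read basic rules from an XML source and apply them to property values. Both the XML schema and the class names below are assumptions, just to show the idea of keeping the rules out of the compiled code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// One basic validation rule, as read from configuration.
public class ValidationRule
{
    public string Property { get; set; }
    public bool Required { get; set; }
    public int MaxLength { get; set; }
}

public static class RulesEngine
{
    // Expected (assumed) configuration shape:
    // <rules entity="Session">
    //   <rule property="Title" required="true" maxLength="255" />
    // </rules>
    public static List<ValidationRule> LoadRules(XDocument config)
    {
        return (from r in config.Descendants("rule")
                select new ValidationRule
                {
                    Property = (string)r.Attribute("property"),
                    Required = (bool?)r.Attribute("required") ?? false,
                    MaxLength = (int?)r.Attribute("maxLength") ?? int.MaxValue
                }).ToList();
    }

    // Applies a single rule to a string value; changing the rule means
    // editing the XML, not recompiling the solution.
    public static bool Validate(string value, ValidationRule rule)
    {
        if (rule.Required && string.IsNullOrEmpty(value))
            return false;
        return value == null || value.Length <= rule.MaxLength;
    }
}
```

The entity classes would then ask the engine for the rules that apply to them, instead of carrying the rules in their own code.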
If we use SharePoint as a data store, it brings another level of complexity. As you know, you can set basic validation rules directly on the fields in SharePoint lists and libraries. If you now point out that we should not care about that, since the business entities, and not the data structures in the data source, are the heart of our model, you would be theoretically right – but it is still not a nice thing to do. Furthermore, if we take the rules set in the SharePoint field definitions into consideration, we will potentially have fewer exceptions in the data access layer, since we will not be "surprised" if someone changes the validation rules on the fields in the lists.
But this brings us promptly to two further issues:
- Business entities are not SPContext-aware, and they cannot, and should not, be directly mapped to SharePoint fields
- Validation rules on SharePoint fields are pretty rudimentary, and they can cover only the basic stuff. So they need to be combined with some other rules source through our rules engine. Furthermore, as we have seen, we cannot count on SharePoint being our only data source – if we have SQL Server and web services as additional data sources, things become even more interesting.
As I have mentioned before, the validation topic will be covered in one of the future posts in this series; I just wanted to touch on it briefly, since a lot of the questions I received about the previous articles were about validation, and why it wasn’t included there.