
Sunday, March 13, 2011

Source Code Comments Revisited

I have to admit I don't get much traffic on this blog - usually a few hits per day, according to Feedjit. That's why I was pleasantly surprised when I saw a steady stream of comments earlier today. Some of them were submitted here on blogger.com, but most went to the discussion on reddit.com. Since I can't possibly reply to every single comment, here is a summary response.

First of all, I'd like to thank everyone who agreed with my view of automated unit tests as a perfectly good substitute for source code comments (of course, unit tests are much, much more than a replacement for comments!). I also appreciate the folks who submitted constructive criticism and shared their personal rules of thumb for writing comments.


Some readers took offence at my use of terms like "cohesion", "coupling", and "cyclomatic complexity", calling them buzzwords. My friends, if these terms are just buzzwords to you, then you are clearly missing some important trends in our industry. If you are writing objects with hundreds of methods that pertain to dozens of different responsibilities, or if your thousand-line functions have so many if-else statements it makes you dizzy, then no amount of source code comments will hide one simple fact: your code is bad and is in dire need of refactoring.

One often-repeated misreading was that I am advocating against ALL source code comments. This is simply not true, and if you read the blog post through to the end, you will see a pretty funny example of a very legitimate comment one programmer left in his code.

Speaking of examples - I got a few comments calling them "stupid", "straw-man", and not real. Well, I don't even know what to say... There are just two examples in my blog post, and both are taken verbatim from the StackOverflow discussion called "What is the best comment in source code you have ever discovered".


To all the people who questioned my credibility, I am happy to say that I have been a professional programmer for over 16 years, have used more programming languages than I care to remember, and have been involved in the maintenance of large codebases. I do code reviews on a regular basis, and I have always enjoyed the respect of my colleagues. True, my current job title is no longer "programmer", and I've been coding in C# exclusively for the last several years, but make no mistake - I am no "dabbler" when it comes to software development.

Friday, December 17, 2010

Mobile Enterprise Application. Step 3: Registering For Push Notifications

This is the third post in a series. Previous posts:
 - Step 1: General Architecture
 - Step 2: Authentication

I briefly defined the Push Notification Service in the previous post. Here we will take a closer look at implementing notifications. PNS is a cloud-based infrastructure maintained by Microsoft; the phone application registers with it and is assigned a unique endpoint address. That address is sent to the server portion of our system, which uses it to send messages to the PNS infrastructure, which, in turn, forwards them to the phone. There are three different types of alerts that can be sent:

  1. Tile notifications. If an application is pinned to the start screen, its image (a.k.a. tile) can change in response to a tile notification. We can either change the entire image or just display a number on the default tile.
  2. Toast notifications. These are essentially short text messages that are briefly displayed on top of the phone screen. If the user touches the message, the associated application opens up.
  3. Raw notifications. While the previous two are used to communicate with the phone OS, this one is targeted at the phone application itself. Therefore, message contents and system response will be application-specific.
Check User Preferences
According to Microsoft certification requirements, users should be able to opt out of push notifications. The most straightforward approach is to store user preferences in the application settings.
public class AppSettings
{
    private readonly IsolatedStorageSettings _isolatedStore;

    public bool CanUsePNS
    {
        get
        {
            return GetValueOrDefault<bool>(Constants.Settings.CanUsePNSKey, Constants.Settings.CanUsePNSDefault);
        }
        set
        {
            AddOrUpdateValue(Constants.Settings.CanUsePNSKey, value);
            Save();
        }
    }

    public AppSettings()
    {
        _isolatedStore = IsolatedStorageSettings.ApplicationSettings;
    }

    // The two method bodies below were left empty in the original post;
    // these are the standard implementations from the Windows Phone samples.
    public bool AddOrUpdateValue(string key, Object value)
    {
        bool valueChanged = false;

        if (_isolatedStore.Contains(key))
        {
            if (!Object.Equals(_isolatedStore[key], value))
            {
                _isolatedStore[key] = value;
                valueChanged = true;
            }
        }
        else
        {
            _isolatedStore.Add(key, value);
            valueChanged = true;
        }

        return valueChanged;
    }

    public TValueType GetValueOrDefault<TValueType>(string key, TValueType defaultValue)
    {
        if (_isolatedStore.Contains(key))
        {
            return (TValueType)_isolatedStore[key];
        }

        return defaultValue;
    }

    public void Save()
    {
        _isolatedStore.Save();
    }
}


Register Push Channel
This needs to be done when the application starts and after the user has confirmed he or she wants to use PNS.
public void RegisterPushChannel()
{
    if (!(new AppSettings()).CanUsePNS) return;

    _httpChannel = HttpNotificationChannel.Find(Constants.ChannelName);

    if (null != _httpChannel)
    {
        SubscribeToChannelEvents();
        SubscribeToService();
        SubscribeToNotifications();
    }
    else
    {
        _httpChannel = new HttpNotificationChannel(Constants.ChannelName, Constants.Channels.Service);
        SubscribeToChannelEvents();
        _httpChannel.Open();
    }
}


Method SubscribeToChannelEvents simply adds application handlers to process various events raised by the HttpNotificationChannel object.

private static void SubscribeToChannelEvents()
{
    // Register for the ChannelUriUpdated event - occurs when the channel successfully opens
    _httpChannel.ChannelUriUpdated += new System.EventHandler<NotificationChannelUriEventArgs>(HttpChannelChannelUriUpdated);

    // Subscribe to raw notifications
    _httpChannel.HttpNotificationReceived += new System.EventHandler<HttpNotificationEventArgs>(HttpChannelHttpNotificationReceived);

    // General error handling for the push channel
    _httpChannel.ErrorOccurred += new System.EventHandler<NotificationChannelErrorEventArgs>(HttpChannelErrorOccurred);

    // Subscribe to toast notifications
    _httpChannel.ShellToastNotificationReceived += new System.EventHandler<NotificationEventArgs>(HttpChannelShellToastNotificationReceived);
}

Method SubscribeToNotifications binds channel notifications to Windows shell:
private static void SubscribeToNotifications()
{
    if (!_httpChannel.IsShellToastBound)
    {
        _httpChannel.BindToShellToast();
    }
    if (!_httpChannel.IsShellTileBound)
    {
        _httpChannel.BindToShellTile();
    }
}

Associate User and Channel URI
Method SubscribeToService needs to send the unique endpoint URI (which can be accessed via the _httpChannel.ChannelUri property) to the server portion of the system, e.g., by calling a web service. The service will associate the URI with the ID of the currently logged-in user. This is an important consideration: different users may be using the mobile application (or the same person may have different user accounts), but the channel URI will always be the same. If we do not properly associate the URI with the current user, he or she will be receiving another person's notifications. By the same rationale, when a user logs off from the application, a web service call needs to be made to disassociate her from the push notifications URI.
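
The post doesn't show SubscribeToService itself; here is a minimal sketch of the idea, where NotificationServiceClient, RegisterEndpointAsync, and UserId are hypothetical names standing in for your own service proxy and user context:

private static void SubscribeToService()
{
    // Hypothetical WCF proxy generated from the server's service contract
    var proxy = new NotificationServiceClient();

    // Associate this channel's endpoint URI with the currently logged-in user,
    // so the server knows where to deliver this user's notifications
    proxy.RegisterEndpointAsync(App.CurrentUser.UserId, _httpChannel.ChannelUri.ToString());
}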

Wednesday, November 17, 2010

Mobile Enterprise Application. Step 2: Authentication

This is the second post in the series. Previous post:
 - Step 1: General Architecture

Authentication - as in supplying proper user credentials - is an essential part of any enterprise application. Consumer apps and games usually allow unauthenticated users to run them, but enterprise systems have higher security requirements. So, your login page is likely to be set in WMAppManifest.xml as the default task:

    <Tasks>
      <DefaultTask Name ="_default"
            NavigationPage="/Views/LoginPage.xaml"/>
    </Tasks>

It's not difficult to put together a simple page with two text boxes and a button, then make a web service call to verify user credentials and return an object containing user context. However, there are a couple of things to consider.

Push Notifications Opt-In
Push Notification Service (PNS) is a powerful tool that allows your server application to initiate communication with the client even while the client isn't running. There is no equivalent functionality on the desktop or in web applications; it is one of the unique features of the mobile client. I'm sure any enterprise system could use PNS, and in my next post I will show how to implement it. Normally, you would register a user for push notifications as soon as he or she authenticates. However, according to the Windows Phone 7 Application Certification Requirements, the application must ask the user for explicit permission to receive toast notifications. Once the opt-in is obtained from the user, it can be saved in isolated storage settings. Below is a code snippet that checks the settings and redirects the user accordingly:
   NavigationService.Navigate(
      IsolatedStorageSettings.ApplicationSettings.Contains(Constants.Settings.CanUsePNS)
         ? new Uri(Constants.Urls.LandingPage, UriKind.Relative)
         : new Uri(Constants.Urls.PNSOptInPage, UriKind.Relative));


Session Management
Authentication usually implies a time-limited user session. Unlike ASP.NET, which is a server-side platform, Silverlight doesn't provide session management features out of the box. My recommended approach for a mobile application is to implement a dual session management mechanism along these lines:

  1. Client asks the user for a preferred session duration (not to exceed a predefined system limit) before login
  2. Client successfully authenticates, and the server returns a unique session token (a GUID, for instance)
  3. Client keeps session state in isolated storage (a sketch of that state follows the list)
  4. Every time the application is activated, it checks whether the session length has exceeded the timeout
  5. Every time the client makes a web service call, it includes the session token as a parameter. The server uses it to validate the session and optionally create an audit trail of user activity.
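
Here is a minimal sketch of the session state implied by steps 3 and 4; the SessionInfo class and its members are illustrative assumptions:

public class SessionInfo
{
    public Guid Token { get; set; }             // issued by the server at login (step 2)
    public DateTime StartedAtUtc { get; set; }  // recorded by the client at login
    public TimeSpan Duration { get; set; }      // user preference, capped by the system limit

    // Checked every time the application is activated (step 4)
    public bool IsExpired
    {
        get { return DateTime.UtcNow - StartedAtUtc > Duration; }
    }
}
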
Restoring Session State On Activation
When the application is running, session state is stored in a public static property of the App class (App.xaml.cs), for example:
   public static UserLogin CurrentUser { get; set; }
However, the value goes away when the application becomes inactive (again, this is unique behavior of mobile clients), and we need to restore it as part of reactivation:

private void Application_Activated(object sender, ActivatedEventArgs e)
{
    if (AppController.CurrentUser == null)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (store.FileExists(Constants.Files.UserLogin))
            {
                using (var file = new IsolatedStorageFileStream(Constants.Files.UserLogin, FileMode.Open, store))
                {
                    var serializer = new DataContractSerializer(typeof(UserLogin));
                    AppController.CurrentUser = (UserLogin)serializer.ReadObject(file);
                }
            }
        }
    }
}
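
The post doesn't show the counterpart, but for the code above to find anything in the store, the user login has to be persisted on deactivation; a sketch, assuming the same Constants.Files.UserLogin file:

private void Application_Deactivated(object sender, DeactivatedEventArgs e)
{
    if (AppController.CurrentUser == null) return;

    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        using (var file = new IsolatedStorageFileStream(Constants.Files.UserLogin, FileMode.Create, store))
        {
            var serializer = new DataContractSerializer(typeof(UserLogin));
            serializer.WriteObject(file, AppController.CurrentUser);
        }
    }
}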

(to be continued)

Monday, August 09, 2010

Should Software Development Be Regulated?

The question of government regulation of business is high on the agenda these days. Over the last couple of years we have witnessed some spectacular events, like the Great Recession of 2008 and Deepwater Horizon explosion (and subsequent Gulf of Mexico oil spill) of 2010. These have already become case studies of the importance of government regulation. New legislation on financial and healthcare reforms will significantly increase the role of government in those areas. So, here is my question: has the time finally come to regulate software development?

I realize a lot of people have a negative knee-jerk reaction to anything that might expand the role of government (I can almost hear them scream!). Personally, coming from a communist country, I tend to be fairly skeptical in this area: I've seen what happens when bureaucrats are given unchecked power over people's lives. But let's consider the matter objectively. After thinking about it for a while I came up with three different areas where regulation can bring positive change.

Professionalism
It never ceases to amaze me that you cannot work as a plumber without a plumbing license, but no license is required to write software. Mind you, obtaining a plumber's license is far from a formality: it requires four years of job training, and the applicant must pass a written exam. On the other hand, anyone can apply for a software engineer position: it is up to the hiring company whether or not to ask for evidence of some formal training. Some companies administer tests, or ask a bunch of technical questions during interview process, but there aren't any standards.

As a direct result, the ranks of software developers are full of people who picked up programming as a hobby or were attracted to it by higher salaries, but never learned the mathematical foundations of the discipline. I would argue that these people are more likely to use poor coding practices, steer clear of object-oriented programming, and never bother with design patterns. Note that I am not advocating the supremacy of college graduates; all I'm saying is that programming requires proper training.

By the way, a similar observation can be made about businesses. For example, a financial services company may own cars, but it is unlikely to have an in-house team of mechanics who fix them. And yet, the same exact company has no second thoughts about maintaining an in-house software development organization.

Quality Control
Given the role software is playing in our lives, it's hard to understand why people tolerate low-quality applications. Although there are many reasons for poor quality, the industry pretty much knows how to address this problem. It all starts with a solid design, of course: application architecture should be appropriate for the task. Developers should write automated unit tests and ensure good code coverage, and these tests should be executed as part of every build. Each application should have well-defined white box and black box test cases, and appropriate performance testing should be done before the system goes live.

However, good quality control can be expensive: for example, the time used to write unit tests is the time developers do not implement new functionality. Automated testing tools for QA can be very expensive, too. It's no surprise some businesses prefer to save money on quality, given the extraordinary tolerance consumers have towards buggy software. By enforcing standard QA processes, government regulators can make good reliable software a reality and make life easier for the end user.

Security
Over the last 15 years, as high-speed internet access became first widespread and then ubiquitous, software applications grew to rely more and more on connectivity. Sadly, this opened the floodgates for an entirely new class of problems: cyber attacks. Let me quote from an excellent book on the subject, Richard Clarke's "Cyber War":
These military and intelligence organizations are preparing the cyber battlefield with things called "logic bombs" and "trapdoors," placing virtual explosives in other countries in peacetime. Given the unique nature of cyber war, there may be incentives to go first. The most likely targets are civilian in nature. The speed at which thousands of targets can be hit, almost anywhere in the world, brings with it the prospect of highly volatile crises.
Of course, cyber attackers exploit security weaknesses in software, and of course the system is as secure as its weakest link. But how does software acquire these weaknesses in the first place? One reason is that people who develop it lack the knowledge and expertise to do proper threat modeling. And even if the application was developed with security in mind, has it ever been tested for security vulnerabilities? This is where government regulators could step in, making sure all software has been secured at an appropriate level.

In conclusion, I would like to acknowledge that regulation doesn't always work, and it is entirely possible that a bad appointee will turn the initiative completely upside down. After all, doesn't the Great Recession illustrate the inability of the SEC to control the derivatives market? And didn't the oil rig explosion shed light on mass incompetence and corruption at the MMS? But software has become such an important aspect of our civilization that we must at least begin a conversation.

Monday, August 02, 2010

VB or C#? A Personal Journey

Last time I checked the LinkedIn group .NET People, there were 435 posts in the "VB or C#?" discussion. That's strange, I said to myself. After ten years and four language iterations, are there really enough differences left to spark a debate? So I started reading...

Well, there were a couple of people who found genuine gaps (like XML literals in VB or yield keyword in C#). There were a couple of trolls, and a couple of people just having a good laugh ("I prefer C# over VB because I am an American!"). But the majority of comments were pure opinion. "Code is cleaner", "more readable", "I hate semicolons", "I love curly braces", "too verbose", "closest to plain English" were some of the statements repeated over and over. IMHO, this entire discussion sheds more light on the .NET development community than on programming languages themselves.

It's no secret that people come to software development by [at least] two separate routes. Some study Computer Science in college (even if it's not their major or they never graduate). They are probably taught programming courses in Java or C++, so C# comes naturally to this group. The second category of developers started out in a different line of work and discovered Office automation with VBA somewhere along the way. Or perhaps they learned VBScript in order to maintain their department's ASP page on the intranet. When .NET came along, this group made the transition to VB.NET.

Now, I'm not trying to argue which group has better programmers - I've seen extremely bright engineers without a CS degree, as well as some dim bulbs who turned out to have a Master of Science in CS. But it's common knowledge that C# was designed from the ground up as a managed object-oriented language, while VB.NET is essentially the outcome of multiple cosmetic surgeries performed on an aging body. The first change happened when the original BASIC - Beginner's All-purpose Symbolic Instruction Code - was updated to support structured programming. It acquired the "Visual" prefix, but didn't become fully object-oriented until its VB.NET incarnation. Nowadays, Microsoft works diligently to keep the language on par with C#, adding constructs like generics, lambda expressions, closures, and so on.

However, the efforts to modernize VB have little impact on most VB programmers, who probably just aren't familiar enough with contemporary design and programming patterns. So, it's no surprise they tend to get a little bit defensive...

Interestingly, I myself managed to travel both paths to software development. My college major was Applied Mathematics and Cybernetics, and I had plenty of instruction on typical CS subjects. We used Turbo Pascal in the classroom, and by the end of school I transitioned to Borland C++. Incidentally, Soviet Union imploded at about the same time, and in the chaos that followed, my aspirations to find a job in IT became laughable (people were lucky if they had any job at all - it was not unusual in those days for a doctor to work as a taxi driver). So, I ended up doing bookkeeping, accounting and then business planning for a big multinational corporation.

Before long, I was dabbling in Microsoft Access and creating automated databases and spreadsheets for my team. VB was easy and forgiving, and, more importantly, it was ubiquitous. When I finally managed to switch my career back to IT, I didn't feel comfortable with the latest C++ tools and frameworks, so I stuck with VBScript and VB6. When .NET was introduced, my first instinct was to transition to VB.NET. However, I decided that it was time to re-educate myself. I started reading about design patterns (which weren't even on the radar when I was in college), test-driven development, and extreme programming. I studied source code and tackled new classes of problems, like multi-threaded services development.

Eventually, I realized that C# was a better choice for me, made the switch, and never looked back. This was around 2005, when the gap between the two languages was fairly big. Five years later, it is almost gone. But like I said earlier, it's easier to update a compiler than to change people's mindset. Both VB and C# are here to stay; I'm just waiting for someone to port another of my college-era languages, Prolog, to the .NET framework...

Sunday, May 02, 2010

Code Generation For Auto-Implemented Properties

I recently ventured into one of the more obscure areas of the .NET framework - code generation. The project involved a rules engine manipulating properties of our internal domain objects. Long story short, I had to create a routine that converts our domain objects into .NET classes (derived from System.Workflow.Activity). These generated classes did not have much behavior - all methods were pushed to the base class - but they did carry so many properties that they in turn had to be grouped together into classes.

Writing code generation logic for a property turned out to be a lot of work: first, I had to add a declaration for a private backing field, then a property declaration, including code expressions for both the getter and the setter. Here's sample code similar to what I ended up with:


var myType = new CodeTypeDeclaration("Person");
 

var field = new CodeMemberField()
{
    Name = "_LastName",
    Type = new CodeTypeReference("System.String"),
    Attributes = MemberAttributes.Private
};
myType.Members.Add(field);
 
var prop = new CodeMemberProperty()
{
    Name = "LastName",
    Type = new CodeTypeReference("System.String"),
    Attributes = MemberAttributes.Public
};
prop.GetStatements.Add(
    new CodeMethodReturnStatement(
        new CodeFieldReferenceExpression(
            new CodeThisReferenceExpression(), "_LastName")));
prop.SetStatements.Add(
    new CodeAssignStatement(
        new CodeFieldReferenceExpression(
            new CodeThisReferenceExpression(), "_LastName"), 
        new CodePropertySetValueReferenceExpression()));
myType.Members.Add(prop);

And here is the code that was generated by the above fragment:


private string _LastName;
public string LastName
{
    get { return _LastName; }
    set { _LastName = value; }
}
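
For reference, turning the CodeDom graph into C# source is standard CodeDomProvider usage; a sketch:

var provider = CodeDomProvider.CreateProvider("CSharp");   // System.CodeDom.Compiler
var options = new CodeGeneratorOptions { BracingStyle = "C" };
using (var writer = new StringWriter())                    // System.IO
{
    provider.GenerateCodeFromType(myType, writer, options);
    Console.WriteLine(writer.ToString());
}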

Of course, C# 3.0 has introduced a much shorter way of declaring the same property: "public string LastName { get; set; }". This syntax is called "auto-implemented properties", and it puts the burden on the compiler to create a backing field and implement the getter and setter logic. Naturally, I wanted the generated classes to look cleaner, so, being the optimist that I am, I decided to change the code generation logic to create auto-implemented properties instead.

That proved to be a mistake: after a while I realized that the classes in the System.CodeDom namespace do not support generating auto-implemented properties. The best I could come up with was a hack using CodeSnippetTypeMember:


var snippet = new CodeSnippetTypeMember("public string LastName { get; set; }");
myType.Members.Add(snippet);

This solution is pretty far from ideal. It goes against the spirit of code generation because it allows me to target just one programming language, C#. Still, it is pragmatic. Hopefully, Microsoft can bring CodeDom up to date in a future release.

Monday, March 29, 2010

Passing Parameters To ClickOnce Applications

I was doing some research on ClickOnce deployment architecture and ran into an unexpected challenge: passing command line parameters. This post summarizes a couple of different approaches; hopefully, it will save someone a few hours of trial and error and frantic googling.

Use Query String

ClickOnce applications can be installed from a website, and they can also be invoked from a webpage. All that's needed is a hyperlink that references the .application file, e.g.: http://www.mywebsite.com/foobar/foobar.application. When the user clicks the link, Foobar will launch (actually, the same link will install Foobar on the user's machine). In this scenario, parameters can be passed to the application by adding a query string to the URL: http://www.mysite.com/foobar/foobar.application?param1=value1&param2=value2&... . Here's how to do it:

Step 1: Enable URL parameters in ClickOnce application

Open the project properties in Visual Studio and click the "Publish" tab. Then click the "Options" button, which brings up a dialog window. Select "Manifests" from the list, and make sure the checkbox that reads "Allow URL parameters to be passed to application" is checked.



Step 2: Add code to process parameters

In regular Windows applications written in C#, it is enough to declare static void Main(string[] args) in order to get the list of command line parameters in the args array. Unfortunately, this doesn't work with ClickOnce applications - the args array will be empty whether or not the URL had any parameters. In order to access them, we need to analyze the AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData property.


[STAThread]
static void Main()
{
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);

    var args = AppDomain.CurrentDomain.SetupInformation.ActivationArguments;
    var frm = new MainForm();

    if (args != null
        && args.ActivationData != null
        && args.ActivationData.Length > 0)
    {
        var url = new Uri(args.ActivationData[0], UriKind.Absolute);
        var parameters = HttpUtility.ParseQueryString(url.Query);
        // Process parameters here
    }

    Application.Run(frm);
}



Add File Type Association

Another interesting approach is to associate the application with a specific file extension. When a file with this extension is downloaded from a web page or opened in Windows Explorer, the application will be launched automatically and the file name will be passed to it. Prior to the SP1 release of .NET 3.5, the file type association had to be created programmatically (by creating subkeys for the desired extension and a shell\open\command using the Microsoft.Win32.Registry API). Visual Studio 2008 SP1 allows you to define the association declaratively. In the same Options dialog mentioned above, click "File Associations" and fill in the required information in the data grid.



Processing logic is very similar to the first scenario - we still need to analyze the AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData property. The only difference is that instead of calling HttpUtility.ParseQueryString, we need to extract the file name from it.
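
A sketch of that variant, following the same Main method structure as above:

var args = AppDomain.CurrentDomain.SetupInformation.ActivationArguments;
if (args != null && args.ActivationData != null && args.ActivationData.Length > 0)
{
    // With a file type association, ActivationData[0] contains the path
    // of the file that launched the application
    string fileName = args.ActivationData[0];
    // Open and process the file here
}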

Saturday, March 13, 2010

Continuous Integration + Continuous Improvement

A long time ago, in the early days of software development, one person could program an entire application. Gary Kildall wrote the CP/M operating system, and Wayne Ratliff wrote dBASE, one of the first database engines. Nowadays, a project can get started with one or two developers, but eventually ends up with many more (as management often seems to ignore Brooks's law about adding man-power to a late project).

Once you have several people working on the same codebase, integrating their changes can become a challenge. (I remember one project where two developers decided to work independently and did not attempt to integrate until the code-cutoff day. Sadly but unsurprisingly, the solution didn't build, and since there was no time to solve all the issues, they had to deliver two separate applications instead of one.) The solution, known as continuous integration, is to use a common source code repository and integrate frequently. The first part is obvious; the second may require some explanation.

Many software companies have Build engineers (or even teams of Build engineers), whose main job is to produce builds. Since "integrate" really means "get the latest code and build the system", it is theoretically possible to assign this task to Build engineers. However, I do not think it is such a great idea: first of all, the task is boring, and second, humans may have a problem with the "frequently" part. For some projects, it will be enough to run a nightly build, while others will prefer to integrate every time source code is committed to the repository. There is no way a human could be doing that! The best thing to do is automate the task, and there are commercial and open-source systems that can do the job.

A few months ago I installed one such application, an open-source product called CruiseControl.NET, on a virtual server we use as a build machine. It consists of a Windows service and an ASP.NET web application: the service runs integration tasks, and the web app provides a user interface for build status, logs, and reports. Naturally, it supports multiple projects, but project configuration has to be done the old-fashioned way, by manually editing XML in a couple of .config files. Another nice feature is a utility called CCTray; this is a little app that displays an icon in the taskbar. The icon uses traffic light colors to notify the user of their project's status.
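
To give a flavor of that configuration, a single project in ccnet.config looks roughly like this (a sketch from memory - element names may vary between CCNet versions, and all paths and URLs are placeholders):

<cruisecontrol>
  <project name="MyProject">
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/myproject/trunk</trunkUrl>
      <workingDirectory>C:\Builds\MyProject</workingDirectory>
    </sourcecontrol>
    <triggers>
      <!-- poll the repository every 60 seconds and build on change -->
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
        <projectFile>MyProject.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>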

CruiseControl.NET is a great application that I highly recommend, but there is one more concept I wanted to describe in this blog post: continuous improvement, or "Kaizen" in the original Japanese. Kaizen is a philosophy that helps to improve productivity and quality while reducing cost. It applies to many different areas - manufacturing, government, banking, healthcare - and its main ideas are individual empowerment and a continuous quality cycle.

I think we can successfully apply Kaizen ideas to software development. The most straightforward approach, in my opinion, is to encourage programmers to do three things:
  • Refactor code to design patterns,
  • Increase unit test code coverage,
  • Fix all bugs (not just those reported by users).
This of course means that our application's codebase will be continuously updated, and I know there are companies out there that will be really uncomfortable with such a prospect. However, when source code is sufficiently covered by unit tests, and all tests are executed as part of every build, I see no reason for concern.

Tuesday, July 14, 2009

Exposing EntLib to COM Clients

All of us (well, most of us) know and appreciate the benefits of Enterprise Library (EntLib) Application Blocks: they solve common problems, encapsulate best practices, implement design patterns. They are easy to use and not hard to extend. What's even better, they ensure consistency across different applications and development teams.

Sadly, all this goodness is only available to .NET code.

I do not have exact statistics, but anecdotal evidence suggests that even software companies firmly committed to the .NET platform still have 25-50% of their codebase in C++, VB 6, or some form of VBScript. The ratio will continue to shift, but legacy code is unlikely to disappear anytime soon (after all, mainframes and COBOL are still with us). And of course, all that code needs to be maintained.

Programmers are rarely enthusiastic about legacy code maintenance. Part of the reason is that such code is often the reverse of EntLib: it doesn't encapsulate best practices, doesn't use design patterns, and is difficult to use. Yet, we cannot afford to rewrite the whole thing and have to be content just patching holes. Therefore, I think many will welcome the possibility to somehow plug Application Blocks into their legacy code. Parts of EntLib aren't useful in the non-.NET world, of course, but things like logging, caching, and cryptography are perfect integration points.

In the remaining part of this post, I will describe how this can be done and will use Logging Application Block as an example.

COM Facade
There are many different ways to expose EntLib functionality to older applications, but COM Interop is probably the most efficient. The idea is to use a "facade" design pattern: create a .NET class that will be exposed to COM clients via Interop and pass through calls to EntLib.

I decided to derive this new facade from System.EnterpriseServices.ServicedComponent and host it in a COM+ server application. This way we can take advantage of a couple of very useful services provided by COM+ infrastructure.
[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
public class EntLibAdapter : ServicedComponent, IEntLibAdapter
Interface IEntLibAdapter contains a single method:
[ComVisible(true)]
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface IEntLibAdapter
{
    void WriteLog(ref ILogData log, string category, int priority);
}
Here, ILogData is the name of another interface that defines multiple properties collected for logging. Our COM clients will invoke the WriteLog method of the facade class, and the Logging Application Block will determine whether to log or not (based on category and priority) and which logging trace listener to use. This, however, requires that the Application Block be properly initialized.

Initialization
EntLib application blocks rely on XML configuration, which is usually stored in the web.config or exe.config file. In our scenario, the entry point is not a .NET application, so we cannot rely on the default behavior. Fortunately, EntLib supports multiple configuration sources: for example, it can consume a stand-alone file and read XML configuration from it. Here is how this is achieved for the Logging Application Block:
FileConfigurationSource configSrc = new FileConfigurationSource(fileName);
LogWriterFactory factory = new LogWriterFactory(configSrc);
this.LogWriter = factory.Create();
Because the facade class is a ServicedComponent, we can take advantage of the COM+ Activation service. When the object is created, COM+ will invoke the Construct method and pass a string that in our case will contain the full path to the configuration file:
[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
[ConstructionEnabled(true, Default = @"C:\EntLibAdapter.config")]
public class EntLibAdapter : ServicedComponent, IEntLibAdapter
{
    protected override void Construct(string s)
    {
        if (File.Exists(s))
        {
            FileConfigurationSource configSrc = new FileConfigurationSource(s);
            LogWriterFactory factory = new LogWriterFactory(configSrc);
            this.LogWriter = factory.Create();
        }
    }
}
Object Pooling
In a typical usage scenario, an EntLibAdapter object will be constructed, the WriteLog method invoked, and then the object will be destroyed. Notice that the initialization step is fairly expensive - it involves reading a file from disk, parsing XML, and building up objects. We can improve performance and increase overall system scalability by maintaining a pool of objects. That way, EntLibAdapter instances do not get destroyed after the client releases the reference - they simply go back to the pool. Object pooling is another built-in service in COM+, so all we need to do is mark our class to participate:
[
    ComVisible(true),
    ClassInterface(ClassInterfaceType.None),
    ConstructionEnabled(true, Default = @"C:\Nexsure\Installation\Dlls\EntLibAdapter.config"),
    EventTrackingEnabled(true),
    ObjectPooling(true, MinPoolSize = 5)
]
public class EntLibAdapter : ServicedComponent, IEntLibAdapter
Object pooling may not be the best approach if you plan to use flat file logging. In the example above, there will always be 5 instances in the pool, and each will create its own log file (4 of them will have a GUID-based file name). This should not be a problem if your primary target is a database or the Windows Event Log.

Deployment
Since EntLibAdapter is a ServicedComponent, it has to be strongly named and registered using RegSvcs.exe. In addition, EntLib assemblies that it depends on, such as Microsoft.Practices.EnterpriseLibrary.Logging.dll, need to be placed in the GAC.

Thursday, June 04, 2009

Design For Operations, Part III - MMC Integration

Back in 2006 I started writing about designing applications with the IT department in mind. I'm glad to say that my position hasn't changed since then - I still believe computer software should be friendly to its end users, who are either customers (mobile, desktop, and web apps) or IT engineers (web apps and services). Unfortunately, these two groups are not represented equally in the design process: the business analyst is the voice of the customer, but who is the voice of IT?

MMC Background
In my previous posts I covered such aspects as event logging, performance counters, and WMI integration. Today I wanted to discuss the Microsoft Management Console (MMC), which was originally developed for the Windows NT Option Pack. The idea was to create a common interface for managing IIS, Certificate Server, and Transaction Server, so that administrators would have fewer tools to learn. This was further extended in the next release, MMC 2.0 (included with Windows XP/Server 2003), when the concept of the snap-in was added. A snap-in was a COM in-process server that MMC would communicate with, thus allowing any third-party application to have a custom management screen within MMC. In order to develop custom snap-ins, you would need to be well-versed in C++ COM development, as there were 30-something interfaces that could be used. Development of snap-ins in managed code was not supported, although there were custom frameworks, for example, an open-source project called MMC.NET.

MMC 3.0, which was shipped with Vista and Server 2008 (but available for download for older platforms), includes a managed layer and thus natively supports snap-in development in any .NET language. Everyone but hard-core C++ programmers would agree that this can be done faster, with fewer lines of code and simplified maintenance. And even they will have to admit that the ability to use WinForms inside MMC is really cool.

Setting Up Solution
In order to develop our custom MMC snap-in using C# 2008, we will create a class library project. However, before you begin, there is one important step: executing %WINDIR%\System32\MMCPerf.exe from the command prompt. This will put the MMC assemblies in the GAC and NGEN them. After creating the new project, add a reference to Microsoft.ManagementConsole.dll by browsing to the following folder: \Program Files\Reference Assemblies\Microsoft\mmc\v3.0\. Unfortunately, the assembly doesn't appear in the .NET tab of the "Add Reference" dialog.

Writing Code
We will start by adding two classes to the project: one, derived from Microsoft.ManagementConsole.SnapInInstaller, will be used to register the custom snap-in with MMC, and another, derived from Microsoft.ManagementConsole.SnapIn, will serve as an entry point.

using System.ComponentModel;
using System.Security.Permissions;
using Microsoft.ManagementConsole;

[assembly: PermissionSetAttribute(SecurityAction.RequestMinimum, Unrestricted = true)]

namespace Microsoft.ManagementConsole.Samples
{
    [RunInstaller(true)]
    public class InstallUtilSupport : SnapInInstaller
    {
    }

    [SnapInSettings("{9627F1F3-A6D2-4cf8-90A2-10F85A7A4EE7}",
        DisplayName = "- Sample SnapIn",
        Vendor = "My Company",
        Description = "Shows FormView")]
    public class SelectionFormViewSnapIn : SnapIn
    {
    }
}
Note the attribute that decorates the SelectionFormViewSnapIn class. All snap-ins are defined in the Registry, so we have to provide a GUID, and also specify metadata that will be displayed in the catalog. We can leave the body of the installer empty, but the SnapIn class requires a constructor.

public SelectionFormViewSnapIn()
{
    // Create the root node.
    this.RootNode = new ScopeNode();
    this.RootNode.DisplayName = "Selection (FormView) Sample";

    // Create a form view for the root node.
    FormViewDescription fvd = new FormViewDescription();
    fvd.DisplayName = "Users (FormView)";
    fvd.ViewType = typeof(FormView);
    fvd.ControlType = typeof(SelectionControl);

    // Attach the view to the root node.
    this.RootNode.ViewDescriptions.Add(fvd);
    this.RootNode.ViewDescriptions.DefaultIndex = 0;
}
In the constructor, we are effectively defining the root node of the snap-in. In this example, we will be using a WinForms user control called SelectionControl. This is a regular UserControl that implements a special interface, Microsoft.ManagementConsole.IFormViewControl, which really only has a single method: void Initialize(FormView view). This is where we would put our initialization logic.
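
A minimal sketch of such a control (namespaces System.Windows.Forms and Microsoft.ManagementConsole are assumed; the actual UI is up to you):

public partial class SelectionControl : UserControl, IFormViewControl
{
    public SelectionControl()
    {
        InitializeComponent();
    }

    // Called by MMC after the control is created for the form view
    void IFormViewControl.Initialize(FormView view)
    {
        // Initialization logic goes here, e.g. populate the control with data
        // or keep a reference to the parent view for later use
    }
}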

Deployment
After a successful build of the solution, we will end up with a single DLL. Deploying it is very straightforward: all you need to do is execute InstallUtil.exe against it. Assuming you didn't skip the first step (running MMCPerf.exe), you should see no errors. Start MMC and choose "Add/Remove Snap-in" from the File menu, then click the "Add" button. You should see your custom snap-in appear in the list.

Friday, February 13, 2009

Using Web Client Software Factory With Mobile Web Forms

After I somewhat successfully solved the problem Visual Studio 2008 has with mobile web forms, I had to tackle another challenge. The website is built with the Web Client Software Factory, a powerful and flexible ASP.NET framework from the Microsoft Patterns & Practices team. WCSF, or, more specifically, CWAB (the composite web application block), provides dependency injection to the application. Unfortunately, the most recent release of WCSF doesn't include any support for mobile web forms. Of course, this release is almost one year old, and I know that the good people of P&P are planning a new release for 2010, so hopefully this will be addressed, but in the meantime here is the solution I came up with.

The WCSF guidance package includes a nice set of recipes that automate the creation of web forms, master pages, and user controls. The boilerplate code that is autogenerated for code-behind classes defines them as derived from the Microsoft.Practices.CompositeWeb.Web.UI.Page class instead of the standard System.Web.UI.Page:

public partial class MySummaryView : Microsoft.Practices.CompositeWeb.Web.UI.Page, IMySummaryView
{
}

The Page class overrides the OnPreInit method, and that is where dependency injection "magic" happens:

protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);
    Microsoft.Practices.CompositeWeb.WebClientApplication.BuildItemWithCurrentContext(this);
}
We can use a similar approach for mobile web forms. Of course, there is no class in WCSF that is derived from System.Web.UI.MobileControls.MobilePage, so we will have to create our own base class in App_Code (or in a shared class library):

namespace Microsoft.Practices.CompositeWeb.Web.UI
{
    public class MobilePage : System.Web.UI.MobileControls.MobilePage
    {
        protected override void OnPreInit(EventArgs e)
        {
            base.OnPreInit(e);
            Microsoft.Practices.CompositeWeb.WebClientApplication.BuildItemWithCurrentContext(this);
        }
    }
}
Now, as long as we derive our mobile web form from this class, we can declare a Presenter property and let CWAB inject it at runtime. There is one drawback, though: we still need to manually create the presenter class and view interface, something that the guidance package recipe used to do automatically. Unfortunately, I don't know GAT well enough to create my own recipe for mobile web forms, so the workaround I am using is this:
  1. Execute "Add page with presenter" recipe
  2. Modify .aspx file to register "mobile" tag prefix
  3. Modify code-behind file to change the base class

Friday, February 06, 2009

Mobile Web Forms in Visual Studio 2008

I recently discovered that Visual Studio 2008 dropped support for ASP.NET mobile. You can create ASP.NET websites, of course, but try adding a mobile web form or mobile user control - these item templates are no longer there.

Omar Khan wrote a post for the Visual Web Developer Team blog which describes a workaround, but it doesn't explain why this happened in the first place. One big problem with that workaround is that you can't download it due to a broken link.

I found that one way to solve the problem is to copy mobile item templates from Visual Studio 2005 (assuming you still have it installed) to a special folder where VS 2008 will look for user item templates. Here's the detailed how-to:

1) Find the mobile item templates in the Visual Studio 2005 installation folder.

2) Find the folder where VS 2008 looks for user item templates.

3) Copy MobileWebForm.zip, MobileWebUserControl.zip, and MobileWebConfig.zip to that folder.

4) Restart VS 2008. Mobile web items now appear in the "Add New Item" dialog under "My Templates".

One problem still remains: VS 2008 designer doesn't display mobile forms and controls. This isn't a major issue for me because I hardly ever use the designer.

Wednesday, November 12, 2008

Analytical Approach To Solving Programming Problems

In the six months that have passed since I last updated this blog, I've been working on various web application projects, learning a lot about ASP.NET Ajax and the Web Client Software Factory. Nevertheless, this post isn't about any particular technology. In my opinion, we software developers already have way too many technologies, frameworks, programming languages, and APIs available to us. It's a challenge just to keep up with all the new stuff that comes out. What I want to discuss instead are the benefits of an analytical approach to programming problems.

Here is a sample problem. Imagine there is a virus spreading through the cells of a very large two-dimensional matrix. We start with a relatively healthy matrix with only 10 random cells infected. The virus is spreading by infecting 4 adjacent cells every minute. For example, if "." represents a healthy cell, this is how the epidemic will progress:

Start

.........
.........
....0....
.........
.........


After first minute

.........
....1....
...101...
....1....
.........


After second minute

.........
....2....
...212...
..21012..
...212...
....2....
.........


Of course, the virus starts spreading from 10 different places on the surface, so depending on where these cells are, the time it takes to infect the entire matrix can vary. Our task is to find that time given the 10 initial locations.

It may be tempting to rely on the raw processing power of modern computers and concoct a solution that looks like this:

while (!matrix_is_fully_infected)
{
    infect_next_set_of_cells();
}

The model above simply recreates the behavior of the virus. The obvious drawback here is the sheer inefficiency of the algorithm: we end up scanning the entire matrix an unknown number of times. As the matrix size increases, the inefficiency becomes more evident. Still, this may be a valid approach in some cases where there is no easy analytical solution. Fortunately, our virus has a primitive DNA and yields itself to mathematical definition.

For simplicity, let's assume that we begin with a single infected cell with coordinates (a,b). The number of minutes it takes to infect an arbitrary cell (x,y) can be expressed with this simple formula: |a-x|+|b-y|. Now let's assume we had a second infected cell at the beginning: (c,d). We could use a similar formula to find out how many minutes it will need to infect our arbitrary cell (x,y): |c-x|+|d-y|.

Depending on whether (a,b) or (c,d) is located closer to (x,y), one of the above expressions will produce a smaller number of minutes. This will be the answer to the question "how long does it take to infect a single arbitrary cell". As we go from 2 infected cells to the original 10, we can write the answer as a function of (x,y):

min(|a_i - x| + |b_i - y|), where 1 <= i <= 10

Of course, our job is not done yet - the virus doesn't stop until all cells are infected. What we need to find out is how many minutes it will take to infect the last cell. Evidently, this will be the maximum time across the matrix, so our solution will be to take the maximum of the above function:

max( min(|a_i - x| + |b_i - y|) ), where x and y vary across the matrix dimensions

As you can easily see, the analytical approach provides a significant performance improvement - we now only need to scan the matrix once.
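
Here is the final formula as compact C#; the method name and seed representation are illustrative:

// xs[i], ys[i] are the coordinates of the i-th initially infected cell.
// Returns the number of minutes until the last cell is infected.
static int TimeToFullInfection(int width, int height, int[] xs, int[] ys)
{
    int worstCase = 0;
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            // Minutes until cell (x,y) is infected: distance to the nearest seed
            int nearest = int.MaxValue;
            for (int i = 0; i < xs.Length; i++)
            {
                int minutes = Math.Abs(xs[i] - x) + Math.Abs(ys[i] - y);
                if (minutes < nearest) nearest = minutes;
            }
            // The overall answer is the maximum over all cells
            if (nearest > worstCase) worstCase = nearest;
        }
    }
    return worstCase;
}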

Sunday, August 19, 2007

Mapping Framework

Data mapping is one of the key components of enterprise application integration. Whether we are in the realm of business partner integration built on top of secure file exchange, or integrating applications built on a common SOA platform, data mapping is always there.

From an application's point of view, mapping can be either inbound or outbound. Inbound mapping converts raw data (a positional or delimited flat file, or an XML document) into objects that are native to the application. Naturally, outbound mapping represents the reverse operation: transformation of native objects (or loosely typed datasets) into flat or XML data.

Although simple logic can be hard-coded inside the application, this approach doesn't scale well. It's a good idea to have a framework that will allow new mapping logic to be put in place with little or no custom coding.

In order to define a mapping, we need to specify mapping rules (which source data elements map to a destination element?) and, optionally, a transformation (what needs to be done with the source data elements in order to arrive at the destination element?). Here is how a single inbound mapping piece can be represented (fields are shown instead of properties for brevity):

public class InboundMappingPiece
{
    public string PropertyName;
    public MethodInfo MethodInfo;
    public int StartIndex;
    public int Length;
    public string XPath;
}

Let's review the fields. PropertyName designates the "destination": which property of a native object we are populating with this mapping piece. MethodInfo is a function pointer that implements transformation logic. Now the only thing missing is the source element. In order to map from a positional flat file, we need to know StartIndex and Length, while XML data is easily extracted using XPath queries.

public class OutboundMappingPiece
{
    public string[] SourceElements;
    public MethodInfo MethodInfo;
    public string DestinationElement;
    public int DestinationWidth;
    public char PadCharacter;
}

OutboundMappingPiece is built in a similar fashion. We've got an array of SourceElements (in case we wanted to use many-to-one mapping), and a MethodInfo pointer for transformation logic. If our destination is XML, we need to know the name of DestinationElement, otherwise DestinationWidth and PadCharacter allow us to generate flat file output.

By matching lists of mapping pieces with type names, we can declare the map as a whole. This is how a Mapper class might look:

public class Mapper
{
    private Dictionary<String, List<InboundMappingPiece>> _InboundMap;
    private Dictionary<String, List<OutboundMappingPiece>> _OutboundMap;

    // Outbound
    public string Transform(object obj) {...}
    public string TransformToXml(object obj) {...}

    // Inbound
    public T GetObject<T>(string rawData) where T : new() {...}
    public T GetObject<T>(XmlNode node) where T : new() {...}
}

Implementation of the inbound and outbound transformation methods boils down to iterating through the lists of mapping pieces and applying them to the source data. Of course, the devil is in the details, and there are lots of details to be considered: how to handle arrays, nullable types, nested types, and so on. Another interesting question is where to store the mapping configuration, but I will leave that until the next post.
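
To make the iteration idea concrete, here is a rough sketch of the flat-file overload of GetObject<T>, ignoring the hard cases just mentioned and assuming transformation methods are static and take the raw substring:

public T GetObject<T>(string rawData) where T : new()
{
    T result = new T();
    foreach (InboundMappingPiece piece in _InboundMap[typeof(T).FullName])
    {
        // Cut the source element out of the positional record
        string source = rawData.Substring(piece.StartIndex, piece.Length);

        // Apply optional transformation logic
        object value = piece.MethodInfo != null
            ? piece.MethodInfo.Invoke(null, new object[] { source })
            : source;

        typeof(T).GetProperty(piece.PropertyName).SetValue(result, value, null);
    }
    return result;
}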

Monday, February 19, 2007

Design For Operations (Part II)

Last time I made a mistake by including the words "Part I" in the title of my post. The plan was to write part II right away, and of course, it took me two months to get to it. I bet if I hadn't called it "Part I", I would have posted this essay before Christmas ;-)

So, my goal is to make the application more accessible to the people who actually have to maintain production servers. WMI is great for this particular purpose, because it is a standard mechanism for accessing many system components at run-time: disk drives, Windows services, message queues, etc.

1. Publishing to WMI

If I design my application as a WMI provider, it will be able to publish information to the WMI infrastructure built into the Windows system. Application status information is represented by instances and events. The Windows WMI infrastructure will ensure that any client with proper permissions can query this data, subscribe to events, etc. All I need to do is define the classes that will be published and decorate them with InstrumentationClassAttribute.

[InstrumentationClass(InstrumentationType.Instance)]
public class MyInstance
{
    public MyInstance() {}
    public string ProcessName;
    public string Description;
    public string ProcessType;
    public string Status;
}

[InstrumentationClass(InstrumentationType.Event)]
public class MyTerminationEvent
{
    public MyTerminationEvent() {}
    public string ProcessName;
    public string TerminationReason;
}

Publishing data is extremely simple:

using System.Management.Instrumentation;
...
MyInstance myInstance = new MyInstance();
MyTerminationEvent myEvent = new MyTerminationEvent();
// Set field values...
Instrumentation.Publish(myInstance);
Instrumentation.Fire(myEvent);

2. Registering WMI Namespace

Now let's take a step back. In order to make the above code functional, I need to register a WMI namespace for my application. This is done using the ManagementInstaller class, but first, I have to decorate the assembly with a special attribute:

[assembly: Instrumented(@"root\MyCompanyName\MyNamespace")]

ManagementInstaller is trivial: it just needs to be added to the Installers collection of my application's Installer class:

[RunInstaller(true)]
public partial class MyAppInstaller : Installer
{
    private ManagementInstaller managementInstaller;

    public MyAppInstaller()
    {
        InitializeComponent();

        managementInstaller = new ManagementInstaller();
        Installers.Add(managementInstaller);
    }
}

Now, after I build the application, I can register the WMI namespace simply by running the "installutil" command against the assembly name.

3. Developing WMI Client

Chances are, the operations team will ask me to write a WMI client for my provider. No problem: the .NET framework has all the tools to get to my application's published data. One approach is to write a query using a SQL-like syntax and execute it using the WqlObjectQuery class. Another relies on the ManagementClass object:

ObjectGetOptions options = new ObjectGetOptions(null, new TimeSpan(0, 0, 30), true);
ManagementClass wmiClass = new ManagementClass(WmiScope, "MyInstance", options);
ManagementObjectCollection wmiInstances = wmiClass.GetInstances();

In both cases, I will get back a collection of ManagementObject instances. Although I can extract all the data I want from a ManagementObject using field names (e.g., obj["ProcessName"]), I would rather have a strongly typed class to work with. It turns out there is a .NET tool called "mgmtclassgen" that does exactly that - generates a strongly typed class wrapper for any WMI instance type.

***

WMI is a complex subject, and I realize that I barely scratched the surface in this post. Still, there is enough information to get you started. Good luck!

Monday, December 11, 2006

Design For Operations (Part I)

When I design an enterprise application, I need to realize one simple truth: the system is going to spend just 15% (or less) of its life in the development environment. After that it moves to the gated community known as production, and the only people who are supposed to have access to production are system operations engineers, a.k.a. "IT guys". And of course, I cannot assume that the IT guys will become familiar with the intricacies of the application's design. If I do, I would be making a grave mistake which results in those dreaded 4:50 PM or 2:30 AM phone calls from the NOC.

So, I really need to design the system with operations in mind. It should be able to report its status and notify operations about any issues. I should allow operations to monitor my system with their usual tools, such as Event Viewer, Performance Monitor, management consoles, or MOM, instead of running SQL queries and reviewing XML configuration files. This means instrumenting my application with event logs, performance counters, and WMI objects and events.

Event Logging. Although a simple file log is a very convenient place for all sorts of debugging and profiling information, I can't really expect IT to dig through megabytes of text looking for error information. Instead, they should be able to get it from the Windows Event Viewer. So, I will create an instance of EventLogInstaller in the application's installer class and specify the Source and Log properties. I will also make sure to log all unhandled exceptions (see my previous post) using the EventLog.WriteEntry method.
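
A minimal sketch of such an installer (the source name "MyEnterpriseApp" is a placeholder):

using System.ComponentModel;
using System.Configuration.Install;
using System.Diagnostics;

[RunInstaller(true)]
public class MyAppEventLogInstaller : Installer
{
    public MyAppEventLogInstaller()
    {
        // Register the event source at install time, so run-time
        // EventLog.WriteEntry calls don't need elevated privileges
        EventLogInstaller logInstaller = new EventLogInstaller();
        logInstaller.Source = "MyEnterpriseApp";
        logInstaller.Log = "Application";
        Installers.Add(logInstaller);
    }
}

At run time, logging an error then becomes a one-liner: EventLog.WriteEntry("MyEnterpriseApp", ex.ToString(), EventLogEntryType.Error).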

Performance counters are invaluable tools for monitoring and profiling the system in production. They may also give early indication of system issues. Windows and the .NET Framework already contain dozens of performance counters, but custom counters can provide insight into my application's processing logic. So, exactly what kind of information should I expose via performance counters, and what kind of counters (instantaneous or average) should I use? There is no standard answer; it really depends on the nature of the system. There is a good introduction to the concept on MSDN. In order to register custom performance counters, I usually create a custom installer:

public class PerformanceCountersInstaller : Installer
{
    public const String CategoryName = "...";
    public const String CategoryHelp = "...";
    public const String CounterName = "...";
    public const String CounterHelp = "...";

    public override void Install(IDictionary state)
    {
        base.Install(state);
        Context.LogMessage("Installing performance counters...");
        SetupPerformanceCounters();
    }

    public override void Uninstall(IDictionary state)
    {
        Context.LogMessage("Uninstalling performance counters...");
        if (PerformanceCounterCategory.Exists(CategoryName))
            PerformanceCounterCategory.Delete(CategoryName);
        Context.LogMessage("Successfully uninstalled performance counters");
        base.Uninstall(state);
    }

    private void SetupPerformanceCounters()
    {
        try
        {
            // Remove a stale category left over from a previous install
            if (PerformanceCounterCategory.Exists(CategoryName))
                PerformanceCounterCategory.Delete(CategoryName);

            CounterCreationDataCollection CCDC = new CounterCreationDataCollection();

            // Create and add the counters
            CounterCreationData ccd = new CounterCreationData();
            ccd.CounterType = PerformanceCounterType.CounterDelta32;
            ccd.CounterName = CounterName;
            ccd.CounterHelp = CounterHelp;
            CCDC.Add(ccd);

            // Create the category
            PerformanceCounterCategory.Create(CategoryName,
                CategoryHelp,
                PerformanceCounterCategoryType.SingleInstance,
                CCDC);
            Context.LogMessage("Successfully installed performance counters");
        }
        catch (Exception ex)
        {
            Context.LogMessage(String.Concat("Could not install performance counters: ", ex.Message));
        }
    }
}
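
Once the category is registered, updating the counter from application code is straightforward. A sketch, referencing the names defined in the installer above:

using System.Diagnostics;
...
// false = we want a writable (read-write) counter instance
PerformanceCounter counter = new PerformanceCounter(
    PerformanceCountersInstaller.CategoryName,
    PerformanceCountersInstaller.CounterName,
    false);
counter.Increment();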

In the next post I will discuss using WMI to publish application status information.

Tuesday, November 28, 2006

Dealing With Exceptions

Although I don't have exact statistics, it certainly feels like most .NET developers don't know how to deal with exceptions. I often see code whose author assumed that nothing ever goes wrong and decided not to put in any kind of exception handling. Such "infantile" code is clearly not ready for the hard realities of life. On the other end of the spectrum, we've got programs that swallow all exceptions in an effort to make themselves bullet-proof. What these developers don't realize is that this actually makes their programs more vulnerable to security attacks. When such attacks destabilize the operating environment, a normal system would fail, but the "exception-swallower" carries on, making an ideal target for exploitation.

So, when do I actually need to catch exceptions? In essence, there are three distinct scenarios. The first is called handling. It's when I know what kind of exception to expect and, more importantly, how to recover from it. For example, my stored procedure may become a victim of a SQL Server deadlock. In managed code, this will result in a SqlException, which I should handle by retrying the transaction up to a pre-defined number of times (a sketch follows the next example). Another example is trying to read some configuration data from a file:

try
{
    configData = File.ReadAllText(configFilePath);
}
catch (FileNotFoundException)
{
    configData = DefaultConfigData;
}

As you can see, I am handling FileNotFoundException by force-feeding some default configuration data into the variable. It's important to emphasize that I didn't attempt to handle any other kind of exception that File.ReadAllText can throw. For instance, it may throw UnauthorizedAccessException or SecurityException, and I'd rather have those bubble to the top and, hopefully, force program termination.
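
And here is what the deadlock-handling scenario mentioned earlier might look like. This is only a sketch: the retry limit, the ExecuteTransaction helper, and the check for error number 1205 (SQL Server's "deadlock victim" error) are my assumptions:

const int MaxRetries = 3;  // pre-defined retry limit (assumed)

for (int attempt = 1; attempt <= MaxRetries; attempt++)
{
    try
    {
        ExecuteTransaction();  // hypothetical data access call
        break;                 // success - no need to retry
    }
    catch (SqlException ex)
    {
        // Retry only if we were chosen as a deadlock victim (error 1205)
        // and we still have attempts left; otherwise let it bubble up
        if (ex.Number != 1205 || attempt == MaxRetries)
            throw;
    }
}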

This brings us to the second scenario: unhandled exceptions. If an exception hasn't been handled anywhere in the call stack (which means there is either an unknown problem or a problem I don't know how to recover from), it should be caught at the top level and properly logged. Windows applications should display a generic error message to the user and shut down, web applications should redirect the user to a generic error page, and services can either shut down or terminate the failed thread.
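
In a Windows Forms application, for example, a last-chance handler might look like this (a sketch; the event source name is a placeholder):

AppDomain.CurrentDomain.UnhandledException +=
    delegate(object sender, UnhandledExceptionEventArgs e)
    {
        // Log the full details for IT, show the user a generic message
        EventLog.WriteEntry("MyEnterpriseApp", e.ExceptionObject.ToString(),
            EventLogEntryType.Error);
        MessageBox.Show("An unexpected error has occurred. The application will now close.");
        Environment.Exit(1);
    };

(A real Windows Forms application would also hook Application.ThreadException, which catches exceptions thrown on the UI thread.)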

The third scenario is called exception wrapping. The idea is to substitute a low-level exception object with a higher-level exception class containing additional information (only if you are absolutely positive that the original error is not sufficient). Wrapping is different from handling because there is no recovery: a new exception is thrown. In the example below, I am replacing SqlException with a ScriptException that adds the stored procedure name in an effort to facilitate debugging:

catch (SqlException ex)
{
    throw new ScriptException(storedProcName, ex);
}

Wrapping should be used with caution because it changes the call stack and makes debugging more difficult. It is imperative to assign the original exception object to the InnerException property of the new exception (in the example above, this is done via a constructor overload).
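
The custom exception class itself is not shown in this post; here is one possible shape, assuming only the stored procedure name needs to be carried along:

[Serializable]
public class ScriptException : Exception
{
    private readonly string _storedProcName;

    public string StoredProcName
    {
        get { return _storedProcName; }
    }

    // The constructor overload that preserves the original exception
    public ScriptException(string storedProcName, Exception inner)
        : base("Stored procedure '" + storedProcName + "' failed.", inner)
    {
        _storedProcName = storedProcName;
    }
}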

An interesting implication is that in order to handle exceptions, I need to know what exceptions a method can throw in the first place. The list of exceptions should really be part of the method signature. In fact, Java has the concept of checked exceptions and the corresponding "throws" syntax, while in .NET we have to rely on class documentation. If you are interested in a comparative analysis of the two approaches, read this interview with Anders Hejlsberg, the creator of C#.

Saturday, November 11, 2006

Recruitment By [Lucky] Numbers

In the past few years I have interviewed a lot of people for various software development positions. Finding the right employee is always a challenge (as Franco DiAddezio put it, recruitment is the equivalent of finding the perfect spouse after just one or two dates). Candidates can have plenty of work experience, and you can fairly easily confirm whether or not they really know the technologies advertised in their resume. But are technology skills alone sufficient? My personal opinion is that a good software engineer is defined by his or her analytical thinking and problem-solving abilities. Specific technologies, such as programming languages and APIs, can always be learned.

My own litmus test for identifying the right engineer is a small but elegant programming problem called the "Lucky Numbers" problem. I first heard of it years ago at university, and more recently on Mikhail Gustokashin's site dedicated to programming problems, where it is ranked "Very Easy" (follow the link only if you can read Russian). Here it is:
A 6-digit ticket number is considered "lucky" if the sum of its first 3 digits equals the sum of its last 3 digits. For example, "006123" and "511304" are both lucky, while "980357" isn't. Write an efficient algorithm to determine how many lucky numbers exist among all 6-digit numbers (from 000000 to 999999).

First, let's write an inefficient algorithm. We will iterate through all six-digit numbers and increment a counter whenever the sum of the first 3 digits equals the sum of the last 3:

for (int i = 0; i < 10; i++)
    for (int j = 0; j < 10; j++)
        for (int k = 0; k < 10; k++)
            for (int l = 0; l < 10; l++)
                for (int m = 0; m < 10; m++)
                    for (int n = 0; n < 10; n++)
                        if (i + j + k == l + m + n) luckyNumbersCount++;

This algorithm performs 1 million iterations, and it is the least I would expect from a candidate (amazingly, more than half failed to produce it). We can arrive at an efficient solution by carefully reading the problem. It doesn't ask us to produce all the "lucky numbers", only their quantity. Can we find it without generating the numbers? We know that the digit sums of both halves of a lucky number are equal. A sum of three digits can take values from 0 (0+0+0) to 27 (9+9+9). For each value, we need to find out how many combinations of digits produce it; e.g., "1" has 3 combinations: "001", "010", and "100". Evidently, there are 3 * 3 = 9 "lucky numbers" whose halves both sum to "1". So, here is an optimized algorithm that performs only 1028 iterations (1000 to tally the digit-sum combinations plus 28 to add up the squares):

int[] combinations = new int[28];
for (int i = 0; i < 10; i++)
    for (int j = 0; j < 10; j++)
        for (int k = 0; k < 10; k++)
            combinations[i + j + k]++;

int luckyNumbersCount = 0;
for (int i = 0; i < 28; i++)
    luckyNumbersCount += combinations[i] * combinations[i];
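
For the record, both the brute-force and the optimized versions arrive at the same answer: 55,252 lucky numbers.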

Friday, September 15, 2006

Business Objects and Value Objects

Encapsulation of data and behavior is one of the cornerstones of object-oriented programming. This basically means that a business object contains both the data and the methods that manipulate the data. In the example below, a CreditCard object contains a public method Authorize and a public property AuthorizationCode. It's important for the object to establish a proper public interface: the authorization code value is returned by the payment processor, and we wouldn't want clients to accidentally modify it. Therefore, I exposed a read-only property rather than a field.

public class CreditCard
{
    ...
    private string _AuthorizationCode;
    public string AuthorizationCode
    {
        get { return _AuthorizationCode; }
    }

    public bool Authorize(double amount) {...}
    ...
}


The object-oriented approach is great within a single application tier. When designing a distributed application, however, we need to take other factors into consideration. Suppose my application needs to display customer credit card data on a web page. Should I pass the CreditCard object to the web tier? Sure, it is possible, but the web tier only needs the credit card data, not the behavior. Frankly, I wouldn't want web tier code to accidentally call the Authorize() method. So, for the sake of security, we should somehow limit the objects. Performance is another concern, especially when passing objects between physical tiers: regardless of which remoting technology we use (web services, .NET Remoting, or COM+), large objects with lots of methods and properties may not be ideal.

A simple and elegant solution is to gather the essential business object data into a "value object". Individual data elements of the value object are exposed to the business object's clients via public properties. The revised CreditCard class below demonstrates this approach. Note how a constructor overload allows us to easily create a business object from a value object. We can extract the value object just as easily and send it to another application tier.

public class CreditCard
{
    ...
    private CreditCardInfo _CCInfo;
    public CreditCardInfo CCInfo
    {
        get { return _CCInfo; }
    }

    public CreditCard(CreditCardInfo info)
    {
        _CCInfo = info;
    }

    public string AuthorizationCode
    {
        get { return _CCInfo.AuthorizationCode; }
    }

    public bool Authorize(double amount) {...}
    ...
}

public class CreditCardInfo
{
    ...
    public string CardNumber;
    public DateTime ExpDate;
    public string AuthorizationCode;
    public DateTime? LastTransactionDate;
    ...
}
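
To illustrate the round trip, here is a hypothetical usage sketch (paymentService, GetCardInfo, and customerId are made up for the example):

// The web tier receives plain data from the middle tier...
CreditCardInfo info = paymentService.GetCardInfo(customerId);

// ...while the middle tier can rehydrate the full business object
// from the same value object and invoke its behavior
CreditCard card = new CreditCard(info);
if (card.Authorize(99.95))
    Console.WriteLine("Authorized: " + card.AuthorizationCode);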

Thursday, August 17, 2006

Coding Standards - Good, Bad, and Ugly

Are coding standards a good thing to have in a development organization? Most companies would say yes and cite a variety of reasons, among them ease of code maintenance and improved continuity (which is important in an industry with such a high turnover rate). In addition, good coding standards can help developers avoid common pitfalls. Code reuse, that holy grail of enterprise development, supposedly improves, too.

Yet the employees of the few companies I know that actually have a coding standards document are rarely excited about it. Usually the document is extremely large and unbelievably boring. In an effort to make it comprehensive, the authors put together lots of small rules, which makes the document feel like a programming textbook. Well, at least a textbook has a target audience, while the standards document contains a mixture of trivial, simple, moderate, and advanced items. Also, the rules in a textbook are supported by detailed explanations; a coding standards document tends to be vague or to omit the explanations altogether.

The ugly part begins when project managers and team leaders require their engineers to follow the coding standards to the letter. This immediately kills all creativity; people think more about compliance than about solutions. Dogmatism in such a dynamic profession as software engineering can only mean one thing: stagnation.

So, how do we get all the benefits of coding standards without the drawbacks? First, we need to recognize that software engineering is a creative profession, with an emphasis on both words. It's creative, so we shouldn't limit the spectrum of algorithms, technologies, and patterns available to solve the programming problem. And it's a profession, so we should treat engineers as professionals and assume that they don't need another textbook. Of course, there are plenty of bad programmers out there, but that's a subject for a different post.

The ideal standards document would concentrate on the specifics of the architecture adopted by the company: describe how the application layers are structured and what the common components are for logging, data access, exception handling, configuration management, and caching. Don't bother defining naming conventions for variables.