Friday, December 17, 2010

Mobile Enterprise Application. Step 3: Registering For Push Notifications

This is the third post in a series. Previous posts:
 - Step 1: General Architecture
 - Step 2: Authentication

I briefly defined the Push Notification Service in the previous post. Here we will take a closer look at implementing notifications. PNS is a cloud-based infrastructure maintained by Microsoft; the phone application registers with it and is assigned a unique endpoint address. That address is sent to the server portion of our system, which uses it to forward messages to the PNS infrastructure, which, in turn, delivers them to the phone. There are three different types of alerts that can be sent:

  1. Tile notifications. If an application is pinned to the start screen, its image (a.k.a. tile) can change in response to a tile notification. We can either replace the entire image or just display a number on the default tile.
  2. Toast notifications. These are essentially short text messages that are briefly displayed at the top of the phone screen. If the user touches the message, the associated application opens.
  3. Raw notifications. While the previous two are used to communicate with the phone OS, this one is targeted at the phone application itself. Therefore, message contents and system response are application-specific.
Check User Preferences
According to Microsoft certification requirements, users must be able to opt out of push notifications. The most straightforward approach is to store the user's preference in the application settings.
    public class AppSettings
    {
        private readonly IsolatedStorageSettings _isolatedStore;

        public bool CanUsePNS
        {
            get
            {
                return GetValueOrDefault<bool>(Constants.Settings.CanUsePNSKey, Constants.Settings.CanUsePNSDefault);
            }
            set
            {
                AddOrUpdateValue(Constants.Settings.CanUsePNSKey, value);
                Save();
            }
        }

        public AppSettings()
        {
            _isolatedStore = IsolatedStorageSettings.ApplicationSettings;
        }

        public bool AddOrUpdateValue(string key, Object value)
        {
            // Returns true if the underlying store was changed.
            if (_isolatedStore.Contains(key))
            {
                if (Equals(_isolatedStore[key], value)) return false;
                _isolatedStore[key] = value;
            }
            else
            {
                _isolatedStore.Add(key, value);
            }
            return true;
        }

        public TValueType GetValueOrDefault<TValueType>(string key, TValueType defaultValue)
        {
            // Falls back to the supplied default when the key is missing.
            return _isolatedStore.Contains(key)
                ? (TValueType)_isolatedStore[key]
                : defaultValue;
        }

        public void Save()
        {
            _isolatedStore.Save();
        }
    }


Register Push Channel
This needs to be done when the application starts, after the user has confirmed that he or she wants to use PNS.
        public void RegisterPushChannel()
        {
            if (!(new AppSettings()).CanUsePNS) return;

            _httpChannel = HttpNotificationChannel.Find(Constants.ChannelName);

            if (null != _httpChannel)
            {
                SubscribeToChannelEvents();
                SubscribeToService();
                SubscribeToNotifications();
            }
            else
            {
                _httpChannel = new HttpNotificationChannel(Constants.ChannelName, Constants.Channels.Service);
                SubscribeToChannelEvents();
                _httpChannel.Open();
            }
        }


Method SubscribeToChannelEvents simply adds application handlers to process various events raised by the HttpNotificationChannel object.

        private static void SubscribeToChannelEvents()
        {
            // ChannelUriUpdated occurs when the channel successfully opens
            _httpChannel.ChannelUriUpdated += new System.EventHandler<NotificationChannelUriEventArgs>(HttpChannelChannelUriUpdated);

            // Subscribe to raw notifications
            _httpChannel.HttpNotificationReceived += new System.EventHandler<HttpNotificationEventArgs>(HttpChannelHttpNotificationReceived);

            // General error handling for the push channel
            _httpChannel.ErrorOccurred += new System.EventHandler<NotificationChannelErrorEventArgs>(HttpChannelErrorOccurred);

            // Subscribe to toast notifications
            _httpChannel.ShellToastNotificationReceived += new System.EventHandler<NotificationEventArgs>(HttpChannelShellToastNotificationReceived);
        }

Method SubscribeToNotifications binds channel notifications to Windows shell:
        private static void SubscribeToNotifications()
        {
            if (!_httpChannel.IsShellToastBound)
            {
                _httpChannel.BindToShellToast();
            }
            if (!_httpChannel.IsShellTileBound)
            {
                _httpChannel.BindToShellTile();
            }
        }

Associate User and Channel URI
Method SubscribeToService needs to send the unique endpoint URI (available via the _httpChannel.ChannelUri property) to the server portion of the system, e.g., by calling a web service. The service will associate the URI with the ID of the currently logged-in user. This is an important consideration: different users may use the same mobile application (or the same person may have several user accounts), but the channel URI will always be the same. If we do not properly associate the URI with the current user, he or she will receive another person's notifications. By the same rationale, when the user logs off from the application, a web service call needs to be made to disassociate him or her from the push notification URI.
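A minimal sketch of such a SubscribeToService method, assuming a WCF service reference on the server side; the RegistrationServiceClient proxy, its RegisterChannel operation, and the UserId property are hypothetical names, not part of any real API:

```csharp
private static void SubscribeToService()
{
    // Hypothetical WCF proxy generated from the server's registration service.
    var client = new RegistrationServiceClient();

    client.RegisterChannelCompleted += (s, e) =>
    {
        if (e.Error != null)
        {
            // Registration failed; schedule a retry or notify the user.
        }
    };

    // Associate the channel URI with the currently logged-in user on the server.
    client.RegisterChannelAsync(
        AppController.CurrentUser.UserId,
        _httpChannel.ChannelUri.ToString());
}
```

A matching unregister call on logoff would pass the same user ID so the server can remove the association.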

Wednesday, November 17, 2010

Mobile Enterprise Application. Step 2: Authentication

This is the second post in the series. Previous post:
 - Step 1: General Architecture

Authentication - as in supplying proper user credentials - is an essential part of any enterprise application. Consumer apps and games usually allow unauthenticated users to run them, but enterprise systems have higher security requirements. So your login page is likely to be set in WMAppManifest.xml as the default task:

    <Tasks>
      <DefaultTask Name ="_default"
            NavigationPage="/Views/LoginPage.xaml"/>
    </Tasks>

It's not difficult to put together a simple page with two text boxes and a button, then make a web service call to verify user credentials and return an object containing user context. However, there are a couple of things to consider.

Push Notifications Opt-In
Push Notification Service (PNS) is a powerful tool that allows your server application to initiate communication with the client even while the client isn't running. There is no equivalent functionality on the desktop or in web applications; it is one of the unique features of the mobile client. I'm sure any enterprise system could use PNS, and in my next post I will show how to implement it. Normally, you would register a user for push notifications as soon as he or she authenticates. However, according to the Windows Phone 7 Application Certification Requirements, the application must ask the user for explicit permission to receive toast notifications. Once the opt-in is obtained from the user, it can be saved in isolated storage settings. Below is a code snippet that checks the settings and redirects the user accordingly:
   NavigationService.Navigate(
      IsolatedStorageSettings.ApplicationSettings.Contains(Constants.Settings.CanUsePNS)
         ? new Uri(Constants.Urls.LandingPage, UriKind.Relative)
         : new Uri(Constants.Urls.PNSOptInPage, UriKind.Relative));


Session Management
Authentication usually implies a time-limited user session. Unlike ASP.NET, which is a server-side platform, Silverlight doesn't provide session management features out of the box. My recommended approach for a mobile application is to implement a dual session management mechanism along these lines:

  1. Client asks the user for a preferred session duration (not to exceed a predefined system limit) before login
  2. Client successfully authenticates, and the server returns a unique session token (a GUID, for instance)
  3. Client keeps the session state in isolated storage
  4. Every time the application is activated, it checks whether the session length has exceeded the timeout
  5. Every time the client makes a web service call, it includes the session token as a parameter. The server uses it to validate the session and, optionally, to create an audit trail of user activity.
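The timeout check in step 4 could be driven by a small state object persisted in isolated storage. The class below is a sketch with made-up names, not code from the actual application:

```csharp
public class SessionState
{
    public Guid Token { get; set; }          // issued by the server at login (step 2)
    public DateTime StartedUtc { get; set; } // recorded when the session begins
    public TimeSpan Duration { get; set; }   // user's choice, capped by the system limit

    // True when the session has outlived the requested duration (step 4).
    public bool IsExpired
    {
        get { return DateTime.UtcNow - StartedUtc > Duration; }
    }
}
```

On activation, if IsExpired returns true, the application should discard the token and navigate back to the login page.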
Restoring Session State On Activation
When the application is running, session state is stored in a public static property of the App class (App.xaml.cs), for example:
   public static UserLogin CurrentUser { get; set; }
However, the value goes away when the application becomes inactive (again, this is unique behavior of mobile clients), and we need to restore it as part of reactivation:

private void Application_Activated(object sender, ActivatedEventArgs e)
{
    if (AppController.CurrentUser == null)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (store.FileExists(Constants.Files.UserLogin))
            {
                using (var file = new IsolatedStorageFileStream(Constants.Files.UserLogin, FileMode.Open, store))
                {
                    var serializer = new DataContractSerializer(typeof(UserLogin));
                    AppController.CurrentUser = (UserLogin)serializer.ReadObject(file);
                }
            }
        }
    }
}
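For completeness, the counterpart that persists the session state when the application is deactivated would look along these lines. The same UserLogin data contract is assumed; this handler is my sketch, not code from the original application:

```csharp
private void Application_Deactivated(object sender, DeactivatedEventArgs e)
{
    if (AppController.CurrentUser == null) return;

    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (var file = new IsolatedStorageFileStream(
        Constants.Files.UserLogin, FileMode.Create, store))
    {
        // Serialize the current user so Application_Activated can restore it.
        var serializer = new DataContractSerializer(typeof(UserLogin));
        serializer.WriteObject(file, AppController.CurrentUser);
    }
}
```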

(to be continued)

Wednesday, November 10, 2010

Mobile Enterprise Application. Step 1: General Architecture

I decided to follow up on my previous post and write a series of more practical articles to illustrate various decisions that are specific to mobile enterprise applications. Being a Microsoft platform developer, I am going to stick to Windows Phone 7 in this series. This first blog post discusses general architectural issues.

Choosing Client Technology
WinPhone 7 supports two development paradigms: Silverlight and XNA. The latter targets game developers, providing them with game loop, sprites, and stuff like that. Silverlight, on the other hand, is a subset of the traditional desktop UI platform (WPF). It has a rich set of controls plus the ability to do cool visual effects. Silverlight is a natural choice for a mobile enterprise application.

Relative Merits of MVVM
Model-View-ViewModel design pattern has a widespread adoption in the XAML world, including Silverlight. The question isn't whether or not MVVM can be used in WinPhone 7 applications (it can) but whether or not it should be used. On the negative side, MVVM makes your app larger and slower, which is no small thing on the mobile device. On the positive side, it provides the separation between user interface and logic which is extremely beneficial in two scenarios: unit testing and sharing projects with a designer.

Unit Testing
Generally, I am a big proponent of unit testing - I believe it is essential to building high-quality, maintainable programs. However, WinPhone 7 unit testing is a little wobbly at the moment. The test framework built into Visual Studio doesn't support phone applications, so we are supposed to download the Silverlight Unit Test Framework, which is included in the Silverlight Toolkit. The only problem is that the latest version of the toolkit supports Silverlight 4, but not WinPhone 7... I'm sure Microsoft will eventually sort everything out, but for the time being it's probably best not to concentrate on unit tests.

Sharing Projects With Designer
If your project team has a dedicated designer who actually works with the same project as developers, MVVM provides a nice clean separation of concerns. If, on the other hand, you do not have a designer, or the one you have prefers exchanging images and screenshots with you over email, MVVM doesn't really give you much of an edge.

Choosing Server Technology
We should try to shift as much logic as possible to the server. This makes sense from several points of view. First of all, mobile client doesn't offer a lot of processing power while server side can be very scalable. Second, if you support more than one client platform, it helps to have as little code to port as possible. Third, your business logic is your intellectual property, and it is definitely more secure on the server.

SOAP or REST?
When it comes to building web services, WCF allows a choice between SOAP and REST. At a high level, SOAP is really a remote procedure call mechanism that can support sophisticated logic and advanced security requirements. REST, on the other hand, operates with a simple set of HTTP verbs and is therefore better suited for simpler CRUD-type logic.
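To illustrate the REST side, a WCF contract for a simple read operation might look like this (the service name, the Customer type, and the URI template are invented for the example):

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface ICustomerService
{
    // Maps GET /customers/{id} to a simple CRUD-style read.
    [OperationContract]
    [WebGet(UriTemplate = "customers/{id}", ResponseFormat = WebMessageFormat.Json)]
    Customer GetCustomer(string id);
}
```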

Secure Communication
Mobile clients communicate with web services over 3G or WiFi networks, which makes them vulnerable to security attacks (open public WiFi networks are especially fertile ground for packet sniffing). It is therefore essential to encrypt all messages, not just those related to authentication. Using the SSL protocol for communication is a good start.

(to be continued)

Thursday, October 21, 2010

Mobile Clients of Business Applications

As of today, there are over 200,000 applications in the Apple AppStore targeting the iPhone, iPod Touch, or iPad. Android Market has over 100,000 applications. Windows Phone 7 will start selling in November, and there is already a rush to fill up its Marketplace. Now, the overwhelming majority of these applications are consumer-oriented: games, social networking, news, and much more (even books these days are often made into phone applications). Does that mean that business apps (sometimes called "line of business" apps) don't have a place on a mobile client?

Given the huge popularity and widespread adoption of modern smartphones, combined with changes in the workplace, the answer must be a resounding "no". However, creating a mobile client that really adds value to the system as a whole is not a trivial task and requires careful planning. In this post, I will try to highlight some of the challenges involved in designing a mobile client for a business application.

Challenge 1. Define a Reasonable Subset of Functionality
The typical enterprise app has a very large scope, which tends to grow as business users demand more and more features with each release. Trying to squeeze as much as possible into the mobile client is probably not a good idea. First of all, consider the form factor: will the user be able to accomplish the task with a tiny screen and a soft keyboard? Then take into account the specific needs of the mobile user - they are guaranteed to be different from those of someone comfortably sitting in front of a computer. For example, the latter may need the ability to sift through large sets of data with different search/sort criteria, while the former wants to see just the relevant data - and upfront.

Challenge 2. Mobile-Specific Features
Modern smartphones like iPhone or Windows Phone 7 have advanced hardware, and developers can incorporate such compelling features as multi-touch interface, camera, microphone, location services, and accelerometer. Naturally, applications can also have the ability to place phone calls and send email. These features may or may not be applicable to a specific business system. A mobile client for a customer-relationship management application is likely to take advantage of location services, phone and email, but it's difficult to imagine why it would need an accelerometer.

Challenge 3. Choose Appropriate Architecture
When it comes to choosing a specific technology, there are two main choices: develop the mobile client as a web application or as a native one. Although a web application is often a better choice for a desktop client, the situation is the opposite in the mobile world. Even if the business system already has a website, a special version will be required to support the small form factor of the mobile device. Moreover, the browsers included with mobile operating systems vary in their support for various features of HTML, JavaScript, and CSS. A lowest-common-denominator website will probably not satisfy the users. Another drawback is the inability to tap into mobile-specific features, such as location services or the camera.

Going native allows full control over the application UI, plus it enables access to hardware features, but it's hardly an easy task. The sheer number of available platforms is the main difficulty, with little or no code reuse between them. Android development should be done in Java, iPhone apps are written in Objective-C, and Windows Phone 7 relies on C#/.NET.

Challenge 4. Security and Session Management
Pretty much all enterprise systems deal with sensitive data one way or another, and mobile clients will most likely need to work with it, too. Mobile devices have a security disadvantage compared to office computers: they are easily lost or stolen. It is therefore important not to store anything sensitive on the device itself and to employ a two-way authentication mechanism. User sessions cannot be open-ended, and user activity within the application must be logged on the server for auditing purposes. Another potential vulnerability is the network: smartphones communicate over public Wi-Fi and 3G networks. This can be mitigated by using the SSL protocol.

Monday, August 09, 2010

Should Software Development Be Regulated?

The question of government regulation of business is high on the agenda these days. Over the last couple of years we have witnessed some spectacular events, like the Great Recession of 2008 and Deepwater Horizon explosion (and subsequent Gulf of Mexico oil spill) of 2010. These have already become case studies of the importance of government regulation. New legislation on financial and healthcare reforms will significantly increase the role of government in those areas. So, here is my question: has the time finally come to regulate software development?

I realize a lot of people have a negative knee-jerk reaction to anything that might expand the role of government (I can almost hear them scream!). Personally, coming from a communist country, I tend to be fairly skeptical in this area: I've seen what happens when bureaucrats are given unchecked power over people's lives. But let's consider the matter objectively. After thinking about it for a while I came up with three different areas where regulation can bring positive change.

Professionalism
It never ceases to amaze me that you cannot work as a plumber without a plumbing license, but no license is required to write software. Mind you, obtaining a plumber's license is far from a formality: it requires four years of job training, and the applicant must pass a written exam. On the other hand, anyone can apply for a software engineer position: it is up to the hiring company whether or not to ask for evidence of some formal training. Some companies administer tests, or ask a bunch of technical questions during interview process, but there aren't any standards.

As a direct result, the ranks of software developers are full of people who picked up programming as a hobby or were attracted to it by higher salaries, but never learned the mathematical foundations of the discipline. I would argue that these people are more likely to use poor coding practices, steer clear of object-oriented programming, and never bother with design patterns. Note that I am not advocating the supremacy of college graduates; all I'm saying is that programming requires proper training.

By the way, similar observation can be made about businesses. For example, a financial services company may own cars, but is unlikely to have an in-house team of mechanics who fix them. And yet, the same exact company doesn't have second thoughts about maintaining an in-house software development organization.

Quality Control
Given the role software is playing in our lives, it's hard to understand why people tolerate low-quality applications. Although there are many reasons for poor quality, the industry pretty much knows how to address this problem. It all starts with a solid design, of course: application architecture should be appropriate for the task. Developers should write automated unit tests and ensure good code coverage, and these tests should be executed as part of every build. Each application should have well-defined white box and black box test cases, and appropriate performance testing should be done before the system goes live.

However, good quality control can be expensive: for example, the time used to write unit tests is the time developers do not implement new functionality. Automated testing tools for QA can be very expensive, too. It's no surprise some businesses prefer to save money on quality, given the extraordinary tolerance consumers have towards buggy software. By enforcing standard QA processes, government regulators can make good reliable software a reality and make life easier for the end user.

Security
Over the last 15 years, as high-speed internet access became first widespread and then ubiquitous, software applications grew to rely more and more on connectivity. Sadly, this opened the floodgates for an entirely new class of problems: cyber attacks. Let me quote from an excellent book on the subject, Richard Clarke's "Cyber War":
These military and intelligence organizations are preparing the cyber battlefield with things called "logic bombs" and "trapdoors," placing virtual explosives in other countries in peacetime. Given the unique nature of cyber war, there may be incentives to go first. The most likely targets are civilian in nature. The speed at which thousands of targets can be hit, almost anywhere in the world, brings with it the prospect of highly volatile crises.
Of course, cyber attackers exploit security weaknesses in software, and of course the system is as secure as its weakest link. But how does software acquire these weaknesses in the first place? One reason is that people who develop it lack the knowledge and expertise to do proper threat modeling. And even if the application was developed with security in mind, has it ever been tested for security vulnerabilities? This is where government regulators could step in, making sure all software has been secured at an appropriate level.

In conclusion, I would like to acknowledge that regulation doesn't always work, and it is entirely possible that a bad appointee will turn the initiative completely upside down. After all, doesn't the Great Recession illustrate the inability of the SEC to control the derivatives market? And didn't the oil rig explosion shed light on mass incompetence and corruption at the MMS? But software has become such an important aspect of our civilization that we must at least begin a conversation.

Monday, August 02, 2010

VB or C#? A Personal Journey

Last time I checked LinkedIn group .NET People, there were 435 posts in the "VB or C#?" discussion. That's strange, I said to myself. After ten years and four language iterations are there enough differences to spark the debate? So I started reading...

Well, there were a couple of people who found genuine gaps (like XML literals in VB or yield keyword in C#). There were a couple of trolls, and a couple of people just having a good laugh ("I prefer C# over VB because I am an American!"). But the majority of comments were pure opinion. "Code is cleaner", "more readable", "I hate semicolons", "I love curly braces", "too verbose", "closest to plain English" were some of the statements repeated over and over. IMHO, this entire discussion sheds more light on the .NET development community than on programming languages themselves.

It's no secret that people come to software development using [at least] two separate routes. Some study Computer Science in college (even if it's not their major or they never graduate). They are probably taught programming courses in Java or C++, so C# comes naturally to this group. Second category of developers started out in a different line of work and discovered Office automation with VBA somewhere along the way. Or perhaps they learned VBScript in order to maintain their department's ASP page on the intranet. When .NET came along, this group made a transition to VB.NET.

Now, I'm not trying to argue which group has better programmers - I've seen extremely bright engineers without CS degree, as well as some dim bulbs who turned out to have a Master of Science in CS. But it's a common knowledge that C# was designed from the ground up as a managed object-oriented language, while VB.NET is essentially the outcome of multiple cosmetic surgeries made to an aging body. First change happened when original BASIC - Beginner's All-purpose Symbolic Instruction Code - was updated to support structural programming. It has acquired the "Visual" prefix, but didn't become fully object-oriented until its VB.NET incarnation. Nowadays, Microsoft works diligently to keep the language on par with C#, adding constructs like generics, lambda expressions, closures, and so on.

However, the efforts to modernize VB have little impact on most VB programmers, who probably just aren't familiar enough with contemporary design and programming patterns. So, it's no surprise they tend to get a little bit defensive...

Interestingly, I myself managed to travel both paths to software development. My college major was Applied Mathematics and Cybernetics, and I had plenty of instruction on typical CS subjects. We used Turbo Pascal in the classroom, and by the end of school I transitioned to Borland C++. Incidentally, Soviet Union imploded at about the same time, and in the chaos that followed, my aspirations to find a job in IT became laughable (people were lucky if they had any job at all - it was not unusual in those days for a doctor to work as a taxi driver). So, I ended up doing bookkeeping, accounting and then business planning for a big multinational corporation.

Before long, I was dabbling in Microsoft Access and creating automated databases and spreadsheets for my team. VB was easy and forgiving, and, more importantly, it was ubiquitous. When I finally managed to switch my career back to IT, I didn't feel comfortable with latest C++ tools and frameworks, so I stuck with VBScript and VB6. When .NET was introduced, my first instinct was to transition to VB.NET. However, I decided that it was time to re-educate myself. I started reading about design patterns (which weren't even on the radar when I was in college), test-driven development and extreme programming. I studied source code and tackled new classes of problems, like multi-threaded services development.

Eventually, I realized that C# was a better choice for me, made a switch, and never looked back. This was around 2005, when gap between the two languages was fairly big. Five years later, it is almost gone. But like I said earlier, it's easier to update a compiler than to change people's mindset. Both VB and C# are here to stay, I'm just waiting for someone to port another of my college-era languages, Prolog, to .NET framework...

Thursday, May 20, 2010

Dynamic Connection Strings With SSRS Data Processing Extension

It is my firm opinion that whoever came up with the names for various parts of SQL Server must be fired. "SQL Server Reporting Services", "SQL Server Analysis Services", "SQL Server Service Broker" do not exactly roll off the tongue. Just try using a few of these in a speech - you will immediately realize you need nicknames (SSRS sounds too much like USSR. Oh, well, I'll just call it Kevin). Anyway, this post wasn't supposed to be a rant, so let's move on.

SSRS supports ten data sources out of the box, including SQL Server (duh!), ODBC, OLE DB, and Oracle. Data Processing Extensions are usually recommended when you need to generate reports from non-standard data sources, for example, files in a proprietary format. You start by creating a .NET assembly with classes that implement a half-dozen or so interfaces defined in Microsoft.ReportingServices.Interfaces.dll. Once everything is working, you deploy the assembly to two separate locations: Visual Studio subfolder on a report developer's workstation, and a Reporting Services subfolder on a server. The process is well documented on MSDN and there is also a good article on The Code Project.

However, what if your database is a SQL Server, but you cannot rely on a static connection string? Suppose you maintain separate databases for your various customers and generate connection strings at runtime? Although SSRS allows us to use parameterized connection strings, sometimes this isn't an optimal solution, given the fact that those parameters are passed around openly inside URL. I found that Data Processing Extensions can be used very effectively in this scenario.

Rather than implementing all the interfaces required by a DPE from scratch, we will encapsulate the existing class SqlConnectionWrapper from the Microsoft.ReportingServices.DataExtensions namespace (it is marked as "sealed", so you can't subclass it):

using Microsoft.ReportingServices.DataExtensions;
using Microsoft.ReportingServices.DataProcessing;

public class MySqlConnection : IDbConnectionExtension
{
    private SqlConnectionWrapper _Connection;

    public MySqlConnection()
    {
        _Connection = new SqlConnectionWrapper();
    }
}
Right-click IDbConnectionExtension and choose "Implement Interface". This automatically stubs out the members of three more interfaces: IDbConnection, IDisposable, and IExtension. Most of the new methods and properties added to our class will be merely wrappers around the respective methods and properties of _Connection. For example:
    public string Password
    {
        set { _Connection.Password = value; }
    }

    public string UserName
    {
        set { _Connection.UserName = value; }
    }
Of course, you still need to add the implementation.

Property LocalizedName should return the string that you want report developers to see in the list of data sources (e.g., on the "Select Data Source" screen of the new report wizard).
    public string LocalizedName
    {
        get { return "Dynamic SQL Server Connection"; }
    }
Arguably the most important implementation is the ConnectionString setter. This is where you need to put your proprietary logic that dynamically generates a valid connection string. There are a couple of different approaches. If you don't need any additional information to generate the connection string, ignore the "value" and just call the necessary methods:
    public string ConnectionString
    {
        get
        {
            return _Connection.ConnectionString;
        }
        set
        {
            _Connection.ConnectionString = MyDataLayer.GenerateConnectionString();
        }
    }
If, on the other hand, your logic does require parameters, you will need to parse the ConnectionString value that the client code provided. The example below uses a regular expression to extract CustomerID from the value (and skips the property getter for brevity):
   1:      public string ConnectionString
   2:      {
   3:          set
   4:          {
   5:              Match m = Regex.Match(value, "CustomerID=([^;]+)", RegexOptions.IgnoreCase);
   6:              int custId = 0;
   7:              if (m.Success
   8:                 && int.TryParse(m.Groups[1].Captures[0].Value, out custId))
   9:              {
  10:                  _Connection.ConnectionString = MyDataLayer.GenerateConnectionString(custId);
  11:              }
  12:              else
  13:              {
  14:                  throw new ArgumentException("Valid CustomerID is missing");
  15:              }
  16:          }
  17:      }
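To sanity-check the parsing logic outside of Report Server, you can exercise the same regular expression in a small console harness. This is just a test sketch - the input string is made up, and in the real extension the matched value would be passed to your data layer:

```csharp
using System;
using System.Text.RegularExpressions;

class ConnectionStringParsingDemo
{
    static void Main()
    {
        // Same pattern as the ConnectionString setter above, shown standalone
        string value = "CustomerID=42;Something=Else";
        Match m = Regex.Match(value, "CustomerID=([^;]+)", RegexOptions.IgnoreCase);
        int custId;
        if (m.Success
            && int.TryParse(m.Groups[1].Captures[0].Value, out custId))
        {
            Console.WriteLine(custId);  // prints 42
        }
        else
        {
            Console.WriteLine("Valid CustomerID is missing");
        }
    }
}
```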
Now the only thing left to do is to deploy the assembly containing MySqlConnection to both the server and the developer workstation.

*** UPDATED 11/22/2011 ***

Bad news for those of you using SQL Server 2008 R2: the approach of encapsulating Microsoft.ReportingServices.DataExtensions.SqlConnectionWrapper that I outlined in this post no longer works. Apparently, the class accessibility was changed from "public" to "internal". It looks like the only option is to implement all the interfaces manually.

Tuesday, May 18, 2010

San Diego .NET User Group Presentation

Thanks to everyone who attended my presentation at San Diego .NET User Group. As promised, below are the links to slides I used and sample code we wrote.

Sunday, May 02, 2010

Code Generation For Auto-Implemented Properties

I recently ventured into one of the more obscure areas of the .NET Framework: code generation. The project involved a rules engine that manipulates properties of our internal domain objects. Long story short, I had to create a routine that converts our domain objects into .NET classes (derived from System.Workflow.Activity). These generated classes did not have much behavior - all methods were pushed to the base class - but they carried so many properties that the properties in turn had to be grouped into classes.

Writing code generation logic for a property turned out to be a lot of work: first I had to add a declaration for a private backing field, then a property declaration, including code expressions for both the getter and the setter. Here's sample code similar to what I ended up with:


var myType = new CodeTypeDeclaration("Person");
 

var field = new CodeMemberField()
{
    Name = "_LastName",
    Type = new CodeTypeReference("System.String"),
    Attributes = MemberAttributes.Private
};
myType.Members.Add(field);
 
var prop = new CodeMemberProperty()
{
    Name = "LastName",
    Type = new CodeTypeReference("System.String"),
    Attributes = MemberAttributes.Public
};
prop.GetStatements.Add(
    new CodeMethodReturnStatement(
        new CodeFieldReferenceExpression(
            new CodeThisReferenceExpression(), "_LastName")));
prop.SetStatements.Add(
    new CodeAssignStatement(
        new CodeFieldReferenceExpression(
            new CodeThisReferenceExpression(), "_LastName"), 
        new CodePropertySetValueReferenceExpression()));
myType.Members.Add(prop);

And here is the code that was generated by the above fragment:


private string _LastName;
public string LastName
{
    get { return _LastName; }
    set { _LastName = value; }
}

Of course, C# 3.0 introduced a much shorter way of declaring the same property: "public string LastName { get; set; }". This syntax is called "auto-implemented properties", and it puts the burden on the compiler to create a backing field and implement the getter and setter logic. Naturally, I wanted the generated classes to look cleaner, so, being the optimist that I am, I decided to change the code generation logic to emit auto-implemented properties instead.

That proved to be a mistake: after a while I realized that the classes in the System.CodeDom namespace do not support generating auto-implemented properties. The best I could come up with was a hack using CodeSnippetTypeMember:


var snippet = new CodeSnippetTypeMember("public string LastName { get; set; }");
myType.Members.Add(snippet);

This solution is far from ideal. It goes against the spirit of CodeDom because it ties the generator to a single programming language, C#. Still, it is pragmatic. Hopefully, Microsoft will bring CodeDom up to date in a future release.
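For completeness, here is how such a CodeDom tree - snippet members included - can be rendered to C# source with the standard CSharpCodeProvider. This is a minimal sketch; the type and property names are the same illustrative ones used above:

```csharp
using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class Generator
{
    static void Main()
    {
        var myType = new CodeTypeDeclaration("Person");

        // CodeSnippetTypeMember emits its text verbatim - hence the C#-only limitation
        myType.Members.Add(
            new CodeSnippetTypeMember("public string LastName { get; set; }"));

        // Render the tree to C# source
        var provider = new CSharpCodeProvider();
        var sw = new StringWriter();
        provider.GenerateCodeFromType(myType, sw, new CodeGeneratorOptions());
        Console.WriteLine(sw.ToString());
    }
}
```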

Monday, March 29, 2010

Passing Parameters To ClickOnce Applications

I was doing some research with ClickOnce deployment architecture and ran into an unexpected challenge: passing command line parameters. This post summarizes a couple of different approaches; hopefully it will save someone a few hours of trial and error and frantic googling.

Use Query String

ClickOnce applications can be installed from a website, and they can also be invoked from a webpage. All that's needed is a hyperlink that references the .application file, e.g.: http://www.mywebsite.com/foobar/foobar.application. When the user clicks the link, Foobar will launch (the same link will also install Foobar on the user's machine). In this scenario, parameters can be passed to the application by appending a query string to the URL: http://www.mysite.com/foobar/foobar.application?param1=value1&param2=value2&... . Here's how to do it:

Step 1: Enable URL parameters in ClickOnce application

Open project properties in Visual Studio and click "Publish" tab. Then click "Options" button, which brings up a dialog window. Select "Manifests" from the list, and make sure the checkbox that reads "Allow URL parameters to be passed to application" is checked.



Step 2: Add code to process parameters

In regular Windows applications written in C#, declaring static void Main(string[] args) is enough to get the list of command line parameters in the args array. Unfortunately, this doesn't work with ClickOnce applications - the args array will be empty whether or not the URL had any parameters. To access them, we need to examine the AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData property.


[STAThread]
static void Main()
{
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);

    var args = AppDomain.CurrentDomain.SetupInformation.ActivationArguments;
    var frm = new MainForm();
    if (args != null
        && args.ActivationData != null
        && args.ActivationData.Length > 0)
    {
        var url = new Uri(args.ActivationData[0], UriKind.Absolute);
        var parameters = HttpUtility.ParseQueryString(url.Query);
        // Process parameters here, e.g. string value1 = parameters["param1"];
    }

    Application.Run(frm);
}


Add File Type Association

Another interesting approach is to associate the application with a specific file extension. When a file with this extension is downloaded from a web page or opened in Windows Explorer, the application is launched automatically and the file name is passed to it. Prior to the SP1 release of .NET 3.5, the file type association had to be created programmatically (by creating subkeys for the desired extension and a shell\open\command entry using the Microsoft.Win32.Registry API). Visual Studio 2008 SP1 allows you to define the association declaratively. In the same Options dialog mentioned above, click "File Associations" and fill in the required information in the data grid.



Processing logic is very similar to the first scenario - we still need to examine the AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData property. The only difference is that instead of calling HttpUtility.ParseQueryString, we need to extract the file name from it.
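The extraction can be sketched as follows. This fragment slots into the same Main skeleton shown earlier; the .fbr extension and file path in the comment are hypothetical:

```csharp
var args = AppDomain.CurrentDomain.SetupInformation.ActivationArguments;
if (args != null
    && args.ActivationData != null
    && args.ActivationData.Length > 0)
{
    // ActivationData[0] holds the URI of the file that triggered the launch
    var uri = new Uri(args.ActivationData[0], UriKind.Absolute);
    string fileName = uri.LocalPath;  // e.g. C:\Downloads\report.fbr
    // Open and process the file here
}
```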

Saturday, March 13, 2010

Continuous Integration + Continuous Improvement

A long time ago, in the early days of software development, one person could program an entire application: Gary Kildall wrote the CP/M operating system, and Wayne Ratliff wrote dBASE, one of the first database engines. Nowadays a project can get started with one or two developers, but it eventually ends up with many more (as management often seems to ignore Brooks's law about adding manpower to a late project).

Once you have several people working on the same codebase, integrating their changes can become a challenge. (I remember one project where two developers decided to work independently and did not attempt to integrate until the code-cutoff day. Sadly but unsurprisingly, the solution didn't build, and since there was no time to resolve all the issues, they had to deliver two separate applications instead of one.) The solution, known as continuous integration, is to use a common source code repository and integrate frequently. The first part is obvious; the second may require some explanation.

Many software companies have build engineers (or even teams of them) whose main job is to produce builds. Since "integrate" really means "get the latest code and build the system", it is theoretically possible to assign this task to build engineers. However, I don't think that's a great idea: first, the task is boring, and second, humans may have a problem with the "frequently" part. For some projects it is enough to run a nightly build, while others will prefer to integrate every time source code is committed to the repository. There is no way a human could keep up with that! The best thing to do is automate the task, and there are both commercial and open-source systems that can do the job.

A few months ago I installed one such application, an open-source product called CruiseControl.NET, on a virtual server we use as a build machine. It consists of a Windows service and an ASP.NET web application: the service runs integration tasks, while the web app provides a user interface for build status, logs, and reports. Naturally, it supports multiple projects, but project configuration has to be done the old-fashioned way, by manually editing XML in a couple of .config files. Another nice feature is a utility called CCTray, a little app that displays an icon in the taskbar and uses traffic-light colors to notify the user of their project's status.
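To give a flavor of that manual XML editing, here is a simplified, hypothetical project entry for ccnet.config. The element names follow CruiseControl.NET conventions, but the project name, paths, URL, and interval are all made up:

```xml
<cruisecontrol>
  <project name="Foobar">
    <!-- check source control for modifications every 60 seconds -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/foobar/trunk</trunkUrl>
      <workingDirectory>C:\Builds\Foobar</workingDirectory>
    </sourcecontrol>
    <tasks>
      <msbuild>
        <projectFile>Foobar.sln</projectFile>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>
```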

CruiseControl.NET is a great application that I highly recommend, but there is one more concept I wanted to describe in this blog post: continuous improvement, or "Kaizen" in the original Japanese. Kaizen is a philosophy that helps improve productivity and quality while reducing cost. It applies to many areas - manufacturing, government, banking, healthcare - and its main ideas are individual empowerment and a continuous quality cycle.

I think we can successfully apply Kaizen ideas to software development. The most straightforward approach, in my opinion, is encouraging programmers to do three things:
  • Refactor code to design patterns,
  • Increase unit test code coverage,
  • Fix all bugs (not just those reported by users).
This of course means that our application's codebase will be continuously updated, and I know there are companies out there that will be really uncomfortable with such a prospect. However, when the source code is sufficiently covered by unit tests, and all tests are executed as part of every build, I see no reason for concern.

Saturday, January 30, 2010

Introducing Web Client Software Factory

Earlier today I gave a talk at SoCal Code Camp entitled "Introducing Web Client Software Factory". Thanks to everyone who attended! We ran out of time and didn't cover some of the less important features of WCSF. Perhaps in the future I will need to split this into two sessions.