Monday, November 30, 2009

What I Learned from PDC - Thick, Thin and Rich Clients

These three paradigms of application deployment each make a different trade-off between application reach and application experience.


Thick Clients

Thick clients have easy access to client resources (file system, local databases, USB ports, and peripherals). They are also characterized by running their code on the local CPU. Thick clients do not need to be connected to the internet to run, but they can be connected to resources on the internet. Using local resources and memory allows them to be ‘stateful’, meaning the application can easily remember the actions of the user as the user progresses. Behavior like dynamically enabling menus, progressing through a wizard dialog, or cut/copy & paste are examples of statefulness. Microsoft provides two technologies for creating thick client applications: WinForms (Win32 & GDI) and WPF.

Thin Clients

‘Thin client’ was the original name given to web-browser-based applications. In this paradigm all the code executes on the server, where it is designed to deliver information (HTML and JavaScript) to a web browser, which renders that information as a web page. Microsoft markets ASP.NET and ASP.NET MVC as platforms that deliver thin client solutions. These applications have achieved wide popularity because of their reach -- anyone with a browser and an internet connection can access this information -- regardless of hardware or operating system. Developing richness in these applications requires the use of Cascading Style Sheets (CSS), Ajax, jQuery, and JavaScript.

Browser-based applications are not really stateful on the client (with the exception of cookies and session tokens), so with each user interaction the local HTML, stored in the browser’s in-memory Document Object Model (DOM), is modified or refreshed dynamically. Ajax and jQuery are technologies used to modify the contents of the DOM (letting the browser refresh the screen after the change) and can be used to create some amazing GUIs that rival thick client applications. Thin clients have two weaknesses:
1. For security reasons, they are forced to run in the browser’s “sandbox”, which limits access to local system resources.
2. Building a rich GUI is difficult, and making that GUI testable is extremely difficult.
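As a minimal sketch of the Ajax/jQuery partial-update technique described above (the `renderOrders`/`updateSection` names and the plain-object stand-in for a DOM node are my own illustration, not any real API):

```javascript
// Sketch of an Ajax-style partial page update. In a real page, jQuery's
// $.get() would fetch data from the server and .html() would replace one
// section of the DOM; here the DOM node is a plain object so the flow
// can be shown without a browser.

// Build the HTML fragment that will replace one section of the page.
function renderOrders(orders) {
  return '<ul>' + orders.map(function (o) { return '<li>' + o + '</li>'; }).join('') + '</ul>';
}

// Equivalent in spirit to $('#orders').html(renderOrders(orders)):
// only this node changes; the rest of the page stays as it was.
function updateSection(node, orders) {
  node.innerHTML = renderOrders(orders);
}

var ordersNode = { innerHTML: '' }; // stand-in for document.getElementById('orders')
updateSection(ordersNode, ['Widget', 'Gadget']);
// ordersNode.innerHTML is now '<ul><li>Widget</li><li>Gadget</li></ul>'
```

The point is that only one node's contents are replaced, which is exactly why the rest of the page remains interactive during the update.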

Based on my conversations at PDC, the programmers who quickly adopt the Ajax/jQuery route to provide richness have typically spent years creating UIs by programming JavaScript or HTML in ASP.NET applications. These technologies fit naturally into their way of thinking and their existing knowledge and experience.

I have seen solutions that deploy thin client/server applications onto a single box to try to get the advantages of the thick client. This can be done by using a web server (like IIS) and works well for applications that do NOT need access to local machine resources. This approach has difficulty synchronizing with a data source on the web, and integrating with local resources from the browser’s “sandbox” eventually requires some sort of browser plug-in that the end user trusts and has granted permission to access those resources.

Rich Clients

Rich client applications, and Silverlight in particular, try to blend what is best about thick and thin clients. Microsoft accomplishes this by making Silverlight a 5 MB subset of the .NET platform, optimized to run on any OS (Windows, Mac, Linux) instead of targeting any particular browser. Silverlight is delivered to the browser as a plug-in (similar to Flash or Adobe Reader) and can use the browser’s resources and DOM.

Applications written using Silverlight can also be installed on the desktop and access local machine resources. So, like thick client solutions, rich clients run code locally and offer GUI features like drag/drop, animation, threading, and statefulness; they can run connected or disconnected and can access local resources. Like thin client solutions, rich clients are accessible via a web browser, are deliverable without a separate install, and can detect newer versions and update automatically. Silverlight has the added advantage of using the same programming environment as the server. This is VERY important.

As a developer, what I really like about Silverlight is that you can use the same code regardless of whether it is running in a browser or on the desktop. All the development can be done in WPF (XAML) and C#, so I do not need to be an expert in a lot of different authoring technologies. Having the common .NET platform on both the client and the server makes it easier for me to optimize the solution’s performance just by moving code between the client project and the server project. Silverlight is aware of where it is running, which makes it easier to add and remove application features at runtime by querying the environment dynamically instead of trusting that the user installed the correct version. Lastly, the entire solution, from GUI to database access, can have a built-in automated test suite to assure quality and performance.
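The runtime environment query can be sketched like this (in JavaScript for illustration; the feature names and the `env` flags are hypothetical, not Silverlight's actual API):

```javascript
// Sketch of "query the environment at runtime": the same module enables
// features based on where it finds itself running, instead of trusting
// that the user installed the correct version.
function availableFeatures(env) {
  var features = ['browse', 'edit'];               // always available
  if (env.installed) features.push('offlineSync'); // desktop install only
  if (env.online) features.push('liveUpdates');    // requires a connection
  return features;
}

// Same code, different behavior depending on where it runs:
availableFeatures({ installed: true, online: false });  // ['browse', 'edit', 'offlineSync']
availableFeatures({ installed: false, online: true });  // ['browse', 'edit', 'liveUpdates']
```

One code base ships everywhere, and features turn themselves on and off at runtime.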

Asynchronous vs. Synchronous Communication

Asynchronous and synchronous communications are shown in boxes on the diagram. Silverlight enforces an asynchronous communications model when talking to the server; that is to say, GUI interaction is not blocked while you are loading data or web pages. So while the user is performing tasks like navigation or mouse-over, Silverlight can be running calculations, doing analysis, loading data, or any other programming task in the background. Thin client solutions, including ASP.NET MVC, tend to run in a synchronous manner: the user performs some action, the request is passed to the server, the server sends a page of information back to the client, and the browser re-renders the page. Ajax and jQuery allow the application to send requests to the server that re-render sections of the page, but you cannot interact with those sections while they are ‘waiting’ for information. Microsoft introduced GUI controls that allow the Silverlight UI to behave synchronously by showing a progress bar and blocking user interaction. Blocking has its uses, but to me this feels like a step backward.
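To make the difference concrete, here is a tiny simulation of the asynchronous model (plain JavaScript with a hand-rolled queue standing in for the network and event loop; none of these names come from Silverlight or ASP.NET):

```javascript
// Simulate the asynchronous model: issue a request, hand over a
// callback, and keep handling UI events while the response is pending.
var log = [];
var pending = []; // stand-in for the network / event loop

function fetchAsync(onDone) {
  pending.push(function () { onDone('server data'); }); // response arrives later
}

fetchAsync(function (data) { log.push('loaded: ' + data); });
log.push('user clicks menu');  // UI interaction proceeds immediately
pending.shift()();             // event loop finally delivers the response

// log is ['user clicks menu', 'loaded: server data'] -- the user
// interacted before the data arrived; nothing was blocked.
```

In the synchronous model those two log entries would be reversed: the click could not happen until the response came back.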

The challenge is to design UIs that are still functional while waiting for data. Think of iTunes: while songs are downloading, iTunes does not prevent you from accessing all of its features. Building a good GUI that blends the menu / work-area paradigm of the desktop with the hyperlink / URL navigation of the web is very tricky. Users get so focused on details (colors and styles) that the things that make a GUI truly usable (layout, workflow, transitions, and statefulness) do not get detailed consideration until after deployment.

Wednesday, November 25, 2009

What I Learned from PDC - An Overview

I attended MIX07, PDC07, MIX08, PDC08, and MIX09 online, so I have been following the Microsoft technical narrative for the last three years; I attended PDC09 live. One of my goals was to reorganize that narrative, which is normally presented in a product/technology-centric way, into the components of a single application-centric model.

For the past decade, my customers have been looking for applications that offer the best of all experiences. Because their thinking was unconstrained by technology, they asked for features that were an amalgam of web, phone, and desktop applications: their ‘ideal’ application. I have been looking for a set of technologies that can deliver all of these features from a single code base. I could see Microsoft approaching such a solution, and at PDC09 many of the pieces came together. I created a diagram to help navigate the changing face of Microsoft technology in light of my customers’ Web/Desktop visions.


If you find this diagram hard to read, don't worry -- it will be broken down into readable pieces in the blog posts that follow.

The ‘ideal’ application in my customers’ eyes can run over the web, or run disconnected on a laptop, without changing the GUI or user experience. Being connected to the internet should only be necessary to read or share data, not a requirement to complete work. It should also be able to run on nearly any device with any screen size: a server, desktop, laptop, or phone. Software components must be agnostic of where they are running (client or server) and cannot duplicate logic. The software community describes this style of application as a Line of Business (LOB) app; I do not like the term, but it looks like it is getting a lot of traction on the web. Two archetypes for users of this LOB style of application are a traveling salesman and a soldier deployed in Afghanistan.

The diagram above maps the Microsoft technologies that support the LOB application. Not everything discussed at PDC09 has a place in this mapping, but I think you can get more out of the talks if you keep this diagram in mind while picking and choosing among them. The diagram shows, on the left side, the breadth and depth of an application and, on the right side, the technology and tooling available to construct, maintain, and deploy it. The functional layer (vertically down the center) was extracted from and reinforced by many of the PDC talks.

A well-designed end-to-end .NET application will account for all of these functional layers in each of the technology stacks. As developers (and users), we can see the value in a software architecture that includes GUI, Client/Server-side validation, Logic and Rules, Command/Navigation control, Data services, Networking, Object-Relational Mapping, and Persistence.

Vertically, the diagram shows a progression from End User to Application Hosting via a set of functional requirements (the Functional Layer). Horizontally, I have juxtaposed four technology stacks, each focusing on a different aspect of the software development process -- breadth, depth, platform, and tooling. Here are some working definitions:
• Breadth: who is the audience, how do I reach them, and how do they plan to access my solution?
• Depth: what tasks do my users perform and under what conditions, and how do I make the application easy to use?
• Platform: what technologies can I use to build a deployable, maintainable, extensible solution while making good use of the skills of my development staff?
• Tooling: what tools are available to assure that I am producing the solution efficiently and correctly?

My goal is to explore the subset of Microsoft technologies presented at PDC in the context of building a solution that can run thinly over the web or disconnected as a thick client, reusing as much code and as many common technologies and tools as possible. The result should be a road map of the essential Microsoft technologies you should be aware of in order to build a modern .NET client/server application.