Wednesday, April 14, 2010

Generic Property Merge Using Reflection

I have found this generic merge method to be very useful. Before you cut and paste it into your Silverlight application, let me explain why I created it.

    public virtual bool MergeValues(object oSource, bool bNotifyUI)
    {
        bool bSuccess = true;
        foreach (var oPropInfo in oSource.GetType().GetProperties())
        {
            if (!oPropInfo.CanRead || !oPropInfo.CanWrite || oPropInfo.Name == "Item")
                continue;

            var oResult = oPropInfo.GetValue(oSource, null);
            // Use Equals rather than != so boxed value types compare by value.
            if (!object.Equals(oResult, oPropInfo.GetValue(this, null)))
            {
                try
                {
                    oPropInfo.SetValue(this, oResult, null);
                    if (bNotifyUI)
                        NotifyOfPropertyChange(oPropInfo.Name);
                }
                catch
                {
                    bSuccess = false;
                }
            }
        }
        return bSuccess;
    }

Once Silverlight binds a property to a XAML element, your program will perform much better if you just change the value rather than rebind to a new object. I have not done any detailed timing tests, but comparing the application before and after using this code, you can "feel" that it is faster.

I know there is a performance cost when you access a property via reflection, but I am willing to pay that cost for a solution that is VERY reliable.
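As a sketch of how this might be called (the ViewModel, its CurrentPerson property, and the Person class here are hypothetical; MergeValues is assumed to live on a base class that also provides NotifyOfPropertyChange):

```csharp
// When fresh data arrives from the server, merge it into the instance
// that is already bound to the UI instead of swapping in a new object.
void OnPersonLoaded(Person oFresh)
{
    // true = raise PropertyChanged for each property that actually changed,
    // so bound XAML elements refresh in place without rebinding.
    this.CurrentPerson.MergeValues(oFresh, true);
}
```

Because only changed properties raise notifications, the UI does the minimum amount of refresh work.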

By defining this method as virtual, you are able to override it and build a version that performs better, for example:

    public override bool MergeValues(object oSource, bool bNotifyUI)
    {
        bool bSuccess = true;
        Person oPerson = oSource as Person;
        if (oPerson == null)
            return false;

        this.ID = oPerson.ID;
        this.FirstName = oPerson.FirstName;
        this.MiddleName = oPerson.MiddleName;
        this.LastName = oPerson.LastName;
        this.MostRecentFlight = oPerson.MostRecentFlight;
        return bSuccess;
    }

But this solution is not as reliable, and it requires me to change the code every time I modify the properties in this class.

Feel free to cut, paste and modify this code to meet your own coding objectives.

Sunday, March 21, 2010

NCAA Brackets in Silverlight

The developers at a company we were working with put out a challenge: can you write software to pick the brackets in the NCAA basketball tournament? The answer is you can, and six of us did. Our solutions ranged from random number generators to multivariate stepwise regressions. We did the multivariate stepwise regression.

I created a Silverlight application to display the results; that is, Debra designed the application and I did the software. The code is posted to CodePlex here, and you can see our picks at http://www.mysilverisland.com/NCAA.

The code is very object-oriented and uses a direct approach to pushing the data into the view. No MV-VM was necessary, but that was because of the nature of the solution. This application simply creates a report of our picks, and the picks are shoved into the brackets using a "plug and chug" technique of matching textbox names with IDs in a collection.

I think this code may be useful for other developers who need to manage a tournament.

You can get the source code on CodePlex:

http://ncaabasketball.codeplex.com/

Thursday, February 25, 2010

Another Way to Think about Code Reuse

Anyone who practices Object Oriented Programming has a lot of different ways to reuse code. The most common way is to use inheritance, but recently I have been reusing code in an unconventional way.

Whenever I create a class I follow a strict convention on naming properties. Here is an example:

    public class Movie
    {
        public string Title { get; set; }
        public List<People> Actors { get; set; }

        public DateTime ReleaseDate { get; set; }
        public double MoneyEarnedToDate { get; set; }

        public string Producer { get; set; }
        public string Genre { get; set; }
    }

CamelCase is the convention I use to define every property. I always use complete words and never use abbreviations. I do not use underscores in public properties. I stick to this convention because it lets me extract metadata from the class while it is in use, which can be used to simplify coding of the UI.

    // Requires: using System.Text.RegularExpressions;
    public class CustomGrid : DataGrid
    {
        public CustomGrid()
            : base()
        {
            AutoGenerateColumns = true;
            AutoGeneratingColumn += new EventHandler<DataGridAutoGeneratingColumnEventArgs>(OnAutoGeneratingColumn);
        }

        void OnAutoGeneratingColumn(object sender, DataGridAutoGeneratingColumnEventArgs e)
        {
            string sHeader = e.Column.Header.ToString();
            e.Column.Header = CamelToTitleCase(sHeader);
        }

        public string CamelToTitleCase(string Text)
        {
            if (string.IsNullOrEmpty(Text))
                return Text;

            Text = Text.Substring(0, 1).ToUpper() + Text.Substring(1);
            // Insert a space before each capital letter that is not at a word boundary.
            return Regex.Replace(Text, @"(\B[A-Z])", @" $1");
        }
    }

Via inheritance I can customize the standard Silverlight DataGrid to automatically generate columns for whatever class is bound to it. I sign up for the OnAutoGeneratingColumn event so I can rename the column headings. The function CamelToTitleCase will automatically insert spaces before each capital letter, so the property ReleaseDate becomes the header Release Date and the property MoneyEarnedToDate becomes Money Earned To Date.
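The same regex can be exercised standalone (a quick sketch; Regex comes from System.Text.RegularExpressions):

```csharp
// Insert a space before each capital letter that is not at a word boundary.
string h1 = Regex.Replace("ReleaseDate", @"(\B[A-Z])", @" $1");
// h1 is "Release Date"
string h2 = Regex.Replace("MoneyEarnedToDate", @"(\B[A-Z])", @" $1");
// h2 is "Money Earned To Date"
```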



So is using a coding convention really code reuse or is it more like asset repurposing? I have learned that having a convention makes my code easier to read and debug. It also leads to even more advanced techniques I will share in later blogs.

Monday, February 15, 2010

Know Your Parents and Grandparents

In programming Silverlight I have come across situations where a control or framework element needs to access information on its parent. The code to get information off your direct parent is simple:


    var oContent = this.Parent.ReadLocalValue(ContentControl.ContentProperty);

for example.

The real power of Silverlight is in the declarative nesting of XAML elements, and that means that the type of your direct parent may not always be known. To clarify, if you drop a control on a Page, then the Page object will be your parent. However if you then wrap your control in a ScrollViewer, the ScrollViewer becomes your parent and the Page becomes your grandparent.

To avoid all these issues I include the following code in most of my Silverlight projects:

    public virtual T ParentOfType<T>() where T : FrameworkElement
    {
        Type oType = typeof(T);
        var oParent = Parent as FrameworkElement;
        while (oParent != null)
        {
            // IsInstanceOfType also covers an exact type match.
            if (oType.IsInstanceOfType(oParent))
                return oParent as T;

            oParent = oParent.Parent as FrameworkElement;
        }
        return null;
    }

Recently I needed to access the Navigation Service that is part of a Silverlight Page object. Using the method above I wrote this code to access the service:

    void OnMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
        var oPage = ParentOfType<Page>();

        if (oPage != null)
            oPage.NavigationService.Navigate(new Uri(NavigateUri, UriKind.RelativeOrAbsolute));
    }

Now, regardless of where this control is placed in the XAML, the Navigation Service will execute correctly.

Monday, January 25, 2010

What I Learned at PDC - Microsoft Tools

Visual Studio 2010 (VS2010) is the latest version of the software development platform. It is the first version to be written entirely in .NET managed code; it uses MEF to manage extensibility and WPF to implement all the UI and code editing tools. Microsoft also offers a development environment for GUI and web designers called Expression Blend. Blend and VS2010 share the same project file format, so projects can be shared between developers and designers easily.

Developers, you should know that every demo at PDC was done using VS2010 and there are lots of sessions that dive into the new features. An existing but little-used feature (one that predates VS2010 by several releases) is T4 templates. These are code generation templates that developers can customize to programmatically create code. Scott Hanselman shows this technique at 20 minutes into his talk FT59: ASP.NET MVC 2: Ninjas Still on Fire Black Belt Tips.

Designers, VS2010 also includes a design surface to help developers create GUIs in WPF/XAML. It works well for simple GUI composition, but is not good for real creative work. Blend is the tool of choice for creating GUIs with animations or custom controls that use the Visual State Manager.

For non-developers, there are only 2 things you need to know about this tool:

1) VS2010 can generate code for multiple versions of the .NET Framework; this feature is called multi-targeting. Developers can port their projects to VS2010 without being forced to upgrade to a specific version of .NET. By simply flipping a compiler switch, you can retarget your application to take advantage of later versions of .NET. I have tried this on software I wrote for .NET 1.1 and it works great. It runs on .NET 3.5 and 4.0 without any issues.

2) In building VS2010, Microsoft’s main focus was developer productivity. I have found that I was immediately productive with VS2010 and took advantage of many of the new features, like a visual form builder for Silverlight applications, intuitively.

Bottom Line: developers should install and start working with VS2010 right away. Simple new features, like being able to open VS2010 windows on multiple separate monitors, will improve developer productivity or at least reduce programmer fatigue. Also check out CL09: How Microsoft Visual Studio 2010 Was Built with WPF 4 to see how Microsoft uses their own technology to build their own products. This talk also shows a tool named Snoop that lets you see all the layers of the WPF tree being rendered.

Quality Assurance, Testing, Performance and Source Code Control

These tools make the difference between hacking out code and developing software professionally. I have an interest in all these tools but they are not my primary focus, so here is a quick list of the sessions I attended or watched and my comments.

CL25: Become a Web Debugging Virtuoso with Fiddler: If you are building / testing / debugging web applications, Fiddler is a tool that lets you see the communication between your web browser and the server. This talk shows how that and other features of Fiddler can help you create web applications.

CL32: Developing Testable Silverlight Applications: This talk shows, in detail, how to build automated unit tests for Silverlight applications that can mimic user interaction. It also shows how you can write applications that are more testable by using the MV-VM pattern.

FT35: Microsoft Visual C# IDE Tips and Tricks: Learn some productivity tips that are new in VS2010, how to get a deeper insight into your code, and how to improve your speed to solution. DJ Park proves it by racing the clock to generate code, a must-see event 43 minutes in. Also check out CodeRush Xpress for C#. It is a free refactoring tool that lets you move around bodies of C# code while preserving the coding intent. Also check out the Architecture Explorer in VS2010, 13 minutes into the talk.

FT54: Power Tools for Debugging: Debugging is hard; VS2010 has some new debugging tools like IntelliTrace to make it easier. But this talk is mainly about research into future debugging tools. They show Holmes, a statistical debugger that can find correlations between code paths and test failures.

Wednesday, January 20, 2010

What I Learned at PDC - .NET Platform and Extensions (continued)

Composition Manager (MEF) and Construction Manager (Prism)

The Microsoft Patterns and Practices team has built software libraries to help large teams develop large applications. It turns out that these techniques also work well in applications of any size.

MEF stands for Managed Extensibility Framework. This is a technology designed to introduce extensibility points (like Office or web browser plug-ins) into your application. Developing with extensibility in mind lets you think of your application as a platform and can simplify your other development efforts. A well-componentized application can be extended after it is deployed by third parties that have special or advanced needs. This technique can foster an ecosystem around the original application, letting others open new markets for your original solution.

Glenn Block had a hand in creating MEF and Prism: FT24: Building Extensible Rich Internet Applications with the Managed Extensibility Framework. Listen to the first 5 minutes to understand the problem MEF is solving. Projects can be extended with MEF after initial development, but it is best to think about extension points as you go. Watch the demos first, then circle back to see how it is done. I love the demo he does 45 minutes into the talk; it shows how MEF can be used to control feature access and deliver functionality on demand. At 50 minutes in you can see a series of demos that have real-world application.

Extensibility can also be a competitive advantage. If you expose your application by exposing data services, you may be fostering competition from others who build their own client. Providing extensibility in your own client lets would-be competitors become partners by building plug-ins that use your data services through your client. A step-by-step demo on adding MEF to your application is done by Mike Taulty, whom I also had a chance to meet.

Prism is not an acronym for anything -- it is a collection of technology services and constructs that promote loose coupling of software components by introducing a series of "management" object classes. Prism is not really a Microsoft product, so it does not get the marketing push, but everyone who was discussing a rich client application talked about using Prism.

The best guide I have heard, hands down, on Prism is from my friend Eric Mork: Prism Development Guides and Videos. If you follow this link, also sign up for his podcast where he talks technology with many of the experts in the Silverlight community. The commanding feature of Prism is now unnecessary with the addition of ICommand in Silverlight.

Friday, January 15, 2010

What I Learned at PDC - .NET Platform and Extensions

In the late 90's, the general consensus was that the Microsoft programming model was losing ground to Java. At that time, every product that Microsoft offered was COM (Component Object Model) based, and every language Microsoft offered produced COM programs. The COM programming model generated machine instructions targeted at the native instruction set of the CPU.

Java, on the other hand, provided a virtual machine environment, so compiled Java code targeted the instruction set of the virtual machine (VM), which is an abstract CPU. Java was able to use its VM implementation to create the catch phrase "write once, run anywhere", because all that was needed to execute your Java program on any OS, platform or device was an implementation of the Java VM. There was no need to recompile your Java code to run it on a new target machine, and as a result there was no need to create install programs to set up the target machine. 'X-COPY deployment' became another catch phrase, because all you needed to do to install and run a Java program was copy the binaries to a folder on the target machine and say run!

In the early 2000s Microsoft released .NET, its own VM-based programming environment. At the time, Steve Ballmer described .NET as a corporate strategy, a programming environment and a marketing campaign. All programming languages implemented in .NET (VB, C#, C++) emit the same instruction set and target the VM. This makes it possible to easily create applications that mix languages. .NET also makes it possible to call into programs written in the older COM technology through a process called 'interop'.

Throughout this decade Microsoft has been improving the .NET platform. Two new technologies that emerged in .NET were WPF (Windows Presentation Foundation) and WCF (Windows Communication Foundation). These technologies are the core components used for GUI and networking.

So what is new and improved in the .NET platform? Everything, starting with .NET itself.

The latest version of .NET is .NET 4.0. It should be noted that many of the productivity frameworks come in the box as part of the .NET 4.0 platform, and have also been back-ported to .NET 3.5 and VS2008 SP1. Many are also open source.

Language and Modeling Tools
.NET Framework (POCO, LINQ, Rx, IDynamic): it all starts with improvements to the .NET Framework. POCO is an acronym for Plain Old CLR Objects (CLR is the Common Language Runtime), and although POCOs have been around since .NET 1.0, Microsoft now favors this type of data definition for data binding and transportation over the existing classes like DataSet, DataTable and DataRow. I think it is because POCOs work better with LINQ (Language Integrated Query).

LINQ is a fabulous technology (Wikipedia: Language Integrated Query) for lots of reasons. One of the most important, from a developer's point of view, is that it reduces the interaction with any .NET data source (SQL, XML, CSV...) to a single syntax.
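For a quick, self-contained illustration (the data here is made up), the same query syntax works over an in-memory array and over XML via LINQ to XML (System.Linq and System.Xml.Linq):

```csharp
// LINQ to Objects: query an in-memory array.
var aNames = new[] { "Ada", "Grace", "Alan" };
var aShort = from s in aNames
             where s.Length <= 4
             orderby s
             select s;                     // yields "Ada", "Alan"

// LINQ to XML: identical query syntax over an XML document.
var oDoc = XDocument.Parse("<people><p name='Ada' /><p name='Grace' /></people>");
var aXml = from p in oDoc.Descendants("p")
           select (string)p.Attribute("name");   // yields "Ada", "Grace"
```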

Rx, or the Reactive Extensions, was one of the surprises of PDC, and I did not even discover it until after the conference, online. The LINQ model is to pull data from a data source, whereas the Rx model reacts to changes in the data source and sends you a notification. The session VTL04: Rx: Reactive Extensions for .NET explains how it works. Erik Meijer leads the talk, and he always wears a tie-dyed shirt. Always.
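A minimal sketch of the pull vs. push distinction (Observable.Range and Subscribe come from the Rx libraries; this is illustrative, not a complete Rx tutorial):

```csharp
// Pull model (LINQ / IEnumerable): the consumer asks for each value.
IEnumerable<int> oPull = new[] { 1, 2, 3 };
foreach (int n in oPull)                      // we pull values out
    Console.WriteLine(n);

// Push model (Rx / IObservable): the source notifies the consumer.
IObservable<int> oPush = Observable.Range(1, 3);
oPush.Subscribe(n => Console.WriteLine(n));   // values are pushed to us
```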

IDynamic is the feature / interface I have been waiting for since the PDC08 demos. Dynamic languages defer type checking from compile time to runtime, and in many cases this is very useful. Popular languages like Ruby (on Rails) use this technique exclusively. This, in conjunction with the DLR (Dynamic Language Runtime), will make it possible to extend the syntax of any .NET language. It offers better interop with COM-based software, like Office and VB6. The DLR is also a major component for languages like IronRuby and IronPython. Mads Torgersen gives a solid talk on the development history: FT31: Dynamic Binding in C# 4. Luca Bolognese gives a talk that shows the technology in use: FT11: Future Directions for C# and Visual Basic.
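A minimal sketch of the C# 4 dynamic feature (ExpandoObject ships with the DLR support in .NET 4; the member names are illustrative):

```csharp
// Member lookup is deferred from compile time to runtime.
dynamic oBag = new System.Dynamic.ExpandoObject();
oBag.FirstName = "Ada";                         // members attached on the fly
oBag.Greet = (Func<string>)(() => "Hello, " + oBag.FirstName);
Console.WriteLine(oBag.Greet());                // resolved at runtime
```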

Adding dynamic characteristics to existing languages follows the theme of programming in a declarative style instead of an imperative style.

I will finish this topic in my next post on PDC, where I will talk about MEF and Prism...

Sunday, January 10, 2010

What I Learned from PDC - Server Technologies

The server is a provider of a set of services to support the client and the domain. The server is hosted on a machine that could be anywhere on the internet, or even on the same machine as the client. There are also a lot of applications (like Microsoft Office or video games) where the client (GUI) and server (services) are integrated into one common executable. Think of a server as software that provides a set of services to many clients. Cloud computing has changed the notion of a server as a physical machine on the internet -- that type of thinking makes it hard to grasp the ideas to come. Envision a server as a provider of a set of services -- authentication, billing, data retrieval and persistence, document management, 24/7 availability, scalability, compute power, social connection and interaction -- and you will begin to understand why the industry is calling Cloud Computing the next step in the industry.

Data Services: It is about bridging data across the web between clients and servers. Data Services package data into RESTful-style services that transport data to client applications via structured, parseable text feeds in Atom (XML) or JSON format. They also provide data access business logic that can be used to restrict data operations (CRUD). The foundation of all these services is Windows Communication Foundation (WCF), and from this technology branch all the data-related technologies like WCF Data Services, RIA Data Services, and OData.
Talking Points: Pablo Castro shows a JSON / jQuery demo (at 20 minutes in) that has the browser providing the data proxy; at 34 minutes in is a data service server construction demo: FT12: ADO.NET Data Services: What's new with the RESTful data services framework. You can also see, live, the limits of JavaScript debugging tools.

Data Abstraction: In modern .NET applications, nobody programs SQL statements directly against the database. Instead, developers use tools like Entity Framework, NHibernate or IdeaBlade to build an object-based data abstraction layer. The abstraction layer simplifies the code that provides data operations (CRUD) and protects your applications from changes in the underlying database table and column structures (the schema) that are necessary for physical storage and performance. This layer also provides metadata about the data source, which makes it possible to create tooling that generates code and documentation from an empty data source. I think that all modern .NET software should take advantage of a data abstraction layer.
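As a sketch of what this looks like in practice (the NorthwindEntities context and Customers set are hypothetical Entity Framework names), you query objects and the framework emits the SQL:

```csharp
// Hypothetical Entity Framework context; the query compiles against
// objects and is translated to SQL by the framework at runtime.
using (var oContext = new NorthwindEntities())
{
    var aLocal = from c in oContext.Customers
                 where c.City == "Seattle"
                 orderby c.CompanyName
                 select c;

    foreach (var oCustomer in aLocal)
        Console.WriteLine(oCustomer.CompanyName);
}
```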
Talking Points: FT10: Evolving ADO.NET Entity Framework in .NET 4 and Beyond. I love this talk; it is all demos that show how Microsoft is listening to developers and tuning the Entity Framework to address real world development issues. Don Box and Chris Anderson show in Data Programming and Modeling for the Microsoft .NET Developer that you can create a database from the model or from code. It is worth watching.
Data Storage: Microsoft's enterprise solution for relational data storage is SQL Server 2008. However, data can be stored and accessed in other data providers like XML or the file system. If your application has more than one database (maybe resulting from the need to run disconnected), synchronization is going to be one of your largest problems. Luckily, there is an app for that: the Microsoft Sync Framework.
Talking Points: Mark Scurrell’s talk SV23: Using the Microsoft Sync Framework to Connect Apps to the Cloud is loaded with demos that show database synchronization in action, and very little code is involved.

Isolated Storage vs. Client Resource Access
Thin clients use HTTP cookies to store information on the client. This small bit of text is stored somewhere on your local hard drive, deep in the file system. Browsers are known for running their applications in a sandbox, which promotes secure operation of untrusted programs. The sandbox constraint is also true for rich client plug-ins: they run in a sandbox and have no access to client resources until the user gives permission. Silverlight applications have access to a larger storage area (1 MB, expandable with the user's permission) referred to as isolated storage. The isolated storage area can be used to store configuration information and user preferences. It can also be used to cache local record sets, making it possible to work in a disconnected mode.
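A small sketch of the Silverlight isolated storage settings API (IsolatedStorageSettings lives in System.IO.IsolatedStorage; the key name is illustrative):

```csharp
// Persist a user preference across sessions in isolated storage.
var oSettings = IsolatedStorageSettings.ApplicationSettings;
oSettings["PreferredTheme"] = "Dark";    // "PreferredTheme" is an example key
oSettings.Save();

// Read it back later, even after the application restarts.
string sTheme;
if (oSettings.TryGetValue("PreferredTheme", out sTheme))
    Console.WriteLine(sTheme);
```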

A Silverlight application, with the user's permission, can be installed to the desktop from the browser with a single mouse click and run in full-trust, out-of-browser mode. With elevated trust, the Silverlight application can access all your client resources and function as a thick client. Some of the more common scenarios are accessing COM software components (Microsoft Office, VB6 programs) and reading from and writing to the local file system.
To really understand the ease of programming Out Of Browser (OOB) applications and the power of having the same code function as thin client and thick client, watch Joe Stegman’s talk CL20: Improving and Extending the Sandbox with Microsoft Silverlight 4.