Paul Ford: What is Code?

Both Phil Haack and John Gruber linked to this post today.

And with good reason. It's exquisite, sublime (in the OED sense of the word, not the IDE), fascinating and insightful in ways that only reading it will adequately communicate.

Paul Ford’s Magnum Opus – What is Code?

It should be required reading for anyone involved in anything to do with software.

PS – Took me 107 minutes (and two cups of tea) to read all 38,000 words.

Entity Framework Tip of the Day

Yesterday, I was doing some work with EF and ran into something unusual – my navigational properties weren't being populated.

I rechecked my model and the DB schema and everything was normal.

So it turns out that my entity object had just been inserted earlier in the procedure. This makes sense, because EF usually populates navigational properties only when you pull an entity directly from the DB.

So – either I find a solution or re-write my method to pull down the recently inserted entity again after insertion.

It turns out there is a single line of code that will fix everything:

Entities.Entry(item).Reference(c => c.NavigationProperty).Load();

This forces EF to load the property using the defined FK – and then using that navigational property works as expected. You’ll have to do this for each property that needs to be populated.
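As an aside: if the navigational property in question is a collection rather than a single reference, the analogous call uses Collection() instead (the property name here is made up for illustration):

// For a collection navigational property, use Collection() rather than Reference():
Entities.Entry(item).Collection(c => c.Children).Load();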

Hope someone finds this useful.

Microsoft Translate API Errors

Like any good dev, I have side projects. And like any good busy dev, I don't get enough time to finish them properly.

So this time of the year is always productive as I try to get stuff done before returning to work.

Today I needed to translate a whole bunch of resource strings into a bunch of other languages. I thought the Bing Translate API would be a good fit, so off I went to the Azure Marketplace to add the service to my Azure subscription.

And right away I started running into problems – it's nowhere to be found.

To cut a long story short – it seems to have been superseded by the Microsoft Translator API.

Now. Signing up and adding it to my existing subscription is simplicity itself.

And when you get to the dashboard for the service, you get something that looks like this:

[Screenshot: the Microsoft Translator service dashboard, including the link to download the C# proxy class]

The proxy class is nice and compact, and saves you the hassle of adding a full-blown service reference to your VS2013 project.

Then they give you this small code snippet to use the proxy class:

// The TranslatorContainer is the generated OData proxy; point it at the
// service root and authenticate with your account key ("accountKey" is
// the literal user name, the password is your actual key).
var client = new TranslatorContainer(new Uri("https://api.datamarket.azure.com/Data.ashx/Bing/MicrosoftTranslator/v1/"));
client.Credentials = new NetworkCredential("accountKey", "Insert Account Key");

// Translate "hello" into Dutch ("nl"); the last argument is the source
// language, which may be left null.
var marketData = client.Translate(
    "hello",
    "nl",
    null
).Execute();

(obviously you need to swap “Insert Account Key” with your actual account key)

One thing to note here is that this is using V1 of the API – and it's just for OData (though this doesn't preclude you from calling it from JavaScript). This API is fairly basic. There is a more powerful V2 API floating about that requires a bit more elaborate authentication.

(The difference between the two versions caught me out for a while.)

Now, when I went to use the proxy class right out of the box, I got an entity mismatch error:

"There is a type mismatch between the client and the service. Type 'Microsoft.Translation' is not an entity type, but the type in the response payload represents an entity type. Please ensure that types defined on the client match the data model of the service, or update the service reference on the client."

Now if you open Fiddler and enable HTTPS decryption, you'll see that the request actually succeeds. So the error is not in the call, or the authentication (as I initially thought), but in parsing the response.

And if we look at the error text, we see that it spells out exactly what the issue is – 'Microsoft.Translation' is not an entity type.

Of course, being the doofus that I occasionally am, I missed that completely at first.

Once I noticed that, it hit me – I simply need to tell the DataServiceContext that Microsoft.Translation is an entity type.

And how do you do that?

Enter the [DataServiceEntity] attribute.

So. Open up the proxy class you downloaded from the Azure dashboard page, decorate the Translation, Language and DetectedLanguage classes with [DataServiceEntity], and you're good to go.

Your code should look like this:

[DataServiceEntity]
public partial class Translation {
    
    private String _Text;
    
    public String Text {
        get {
            return this._Text;
        }
        set {
            this._Text = value;
        }
    }
}

[DataServiceEntity]
public partial class Language {
    
    private String _Code;
    
    public String Code {
        get {
            return this._Code;
        }
        set {
            this._Code = value;
        }
    }
}

[DataServiceEntity]
public partial class DetectedLanguage {
    
    private String _Code;
    
    public String Code {
        get {
            return this._Code;
        }
        set {
            this._Code = value;
        }
    }
}

And if anyone knows anyone at Microsoft, hopefully the proxy class file can be updated.

Entity Framework with Identity Insert

Entity Framework is great. I've used it since its first public beta. And with each release, it gets better and better.

Code First is awesome. Being able to generate a database from pure code was great. Then they added a hybrid version of this – you could generate Code First from an existing database. I remember screaming because I could have used that very feature the week before it got released – it would have saved me a week's work. 🙂

I recently wrote a program to bulk insert test data into a blank version of a production database.

The program used EF and a whole bunch of logic to check and verify data integrity before inserting data into the database.

To start with, I used AddRange(). The AddRange method does exactly what it says on the tin – it adds all the entities passed to it to the context, to be inserted into the database on SaveChanges().

However – before inserting anything, EF has an elaborate set of checks to go through to verify the inserted data against the model it has of the database. Chief among these checks is that of the primary keys. If a primary key is an integer Identity column, EF will not insert it, leaving it up to the database to assign a primary key.

But what if you want to force EF to insert your Primary Key? Generally speaking, we want this if we are importing multiple tables and wish to preserve the Foreign Key relationships between them. Which was exactly what I wanted.

So, how do we do this?

Step 1

In your Code First model (or your .edmx file for Database First) the primary keys should not have any attributes, or they should be set to [DatabaseGenerated(DatabaseGeneratedOption.Identity)].

All the primary keys that you want to force to be inserted should be changed to [DatabaseGenerated(DatabaseGeneratedOption.None)]. This ensures that EF will not try to change the values and will simply insert them with the rest of the data.
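For illustration, a Code First entity set up this way might look like the following (the class and its properties here are placeholders, not the actual import model):

// Placeholder entity: the Id is inserted as supplied, not generated.
public class TableName
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int Id { get; set; }

    public string Name { get; set; }
}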

If you go ahead and try to insert the data now, you'll get an error saying that Identity Insert needs to be enabled.

Step 2 A

So. We need to execute a SQL statement against the database. Basically, the SQL looks like this: SET IDENTITY_INSERT [dbo].[TableName] ON

So we’ll try this code:

context.Database.ExecuteSqlCommand(@"SET IDENTITY_INSERT [dbo].[TableName] ON");
context.TableName.AddRange(items);

context.SaveChanges();

context.Database.ExecuteSqlCommand(@"SET IDENTITY_INSERT [dbo].[TableName] OFF");

Now, if you try and run that, you’ll get the exact same Identity Insert error.

The code looks legit! Why is this happening?

Basically – Identity Insert is scope-limited. It only holds for the connection that set it, and EF opens and closes the connection for each command, so the setting has been reset by the time SaveChanges() runs. The scoping is mainly to prevent you forgetting to turn it back off after you're finished and causing chaos.

So the second IDENTITY_INSERT call is superfluous, but I include it for readability's sake.

Step 2 B

So we need a way of executing the EF AddRange() and SaveChanges() methods in the same transaction scope as our Identity Insert ON statement.

Fortunately there is a way. Enter the TransactionScope class.

using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
{
    using (var conn = new System.Data.SqlClient.SqlConnection(_connectionstring))
    {
        conn.Open();

        // Pass the open connection to the context. The second argument (false)
        // tells the context it does not own the connection.
        using (Context context = new Context(conn, false))
        {
            context.Database.ExecuteSqlCommand(@"SET IDENTITY_INSERT [dbo].[TableName] ON");
            context.TableName.AddRange(items);

            context.SaveChanges();

            context.Database.ExecuteSqlCommand(@"SET IDENTITY_INSERT [dbo].[TableName] OFF");
            scope.Complete();
        }
    }
}

So what are we doing here?

We first create a TransactionScope object in a using block. The using block ensures that the TransactionScope object will be disposed once the flow of control leaves the block.

In the body of that using block, we create another using block with a new SqlConnection object, passing it our connection string. The SqlConnection is automatically enlisted in the transaction.

We create our third using block and create a new instance of our EF database context, passing it our open SqlConnection object to use internally. Note that we also pass false as the second parameter to the context constructor. This tells EF that it does not own the SqlConnection object and ensures that it plays nicely with the TransactionScope.

Except for the last line of the using block, the rest of the code is the same as our earlier attempt. The last line calls scope.Complete().

This closes the scope and commits the transaction. We then exit all three using blocks.

At this point all the objects we've created have been disposed. We do this because we want to clean up objects we won't be using and free up resources. And also because using two EF contexts can be dangerous. Although it's not in the code snippet above, it's likely that you will have one EF context to do all the operations on the database that don't require this special treatment, along with the second context that we create to force the primary keys to be inserted.

So by disposing of the second one, we avoid any confusion between them. A quick look at Stack Overflow shows that this is more of an issue than you might want to believe.

The result of this code is that the data is successfully added to the table with its primary keys intact.

Success!!

(written mainly for my future self to rediscover this at an acute moment of crisis)

ASP.Net MVC Tip: URL Anchor Tags

This is one of those features which should be baked into MVC by default, but isn’t.

Actually, it is partially supported by MVC. This Stack Overflow answer details it:

@Html.ActionLink(
    "Link Text",           // linkText
    "Action",              // actionName
    "Controller",          // controllerName
    null,                  // protocol
    null,                  // hostName
    "fragment",            // fragment
    new { id = "123" },    // routeValues
    null                   // htmlAttributes
)

will produce (assuming default routes):

<a href="/Controller/Action/123#fragment">Link Text</a>

But what if you want the redirect returned by an ActionResult method to include the anchor fragment? There is no RedirectAnchorTagResult to return.

That same answer does some voodoo to pull it off:

public ActionResult Index()
{
    var url = UrlHelper.GenerateUrl(
        null,
        "Action",
        "Controller",
        null,
        null,
        "fragment",
        new RouteValueDictionary(new { id = "123" }),
        Url.RouteCollection,
        Url.RequestContext,
        false
    );
    return Redirect(url);
}

 

It is this that should be baked into MVC by default. That's an awful lot of complexity to expose just for the sake of a simple anchor tag. So, yes, a RedirectAnchorTagResult would be a nice addition.
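For what it's worth, here's a minimal sketch of what such a result could look like (the class and its name are mine, not part of MVC):

// Hypothetical ActionResult that redirects to an action plus a fragment.
public class RedirectToActionWithFragmentResult : ActionResult
{
    private readonly string _action, _controller, _fragment;
    private readonly object _routeValues;

    public RedirectToActionWithFragmentResult(string action, string controller, string fragment, object routeValues)
    {
        _action = action;
        _controller = controller;
        _fragment = fragment;
        _routeValues = routeValues;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // The same GenerateUrl voodoo as above, wrapped up once and for all.
        var url = UrlHelper.GenerateUrl(
            null, _action, _controller, null, null, _fragment,
            new RouteValueDictionary(_routeValues),
            RouteTable.Routes, context.RequestContext, false);

        context.HttpContext.Response.Redirect(url, false);
    }
}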

 

Thanks to Darin Dimitrov for providing the answer. Go ahead and upvote his answer.

Codeplex Projects of the Week

Two Codeplex projects have come in quite handy for me this week and I thought I'd pass them on.

 

WSUS 3.0 – WSUS Smart Approve

I installed Windows Server Update Services on my Windows Home Server 2011 this week.

As usual, I was in a bit over my head. I hit the Approve All Updates nuclear option. Knowledgeable sysadmins are possibly cringing or screaming at me (or both). By the time I figured out that approving updates puts them in the download queue, four and a half gigabytes were in the queue. Ordinarily this wouldn't be a problem. But my broadband has had issues and it's ridiculously slow (0.85 Mbps), and WSUS was eating up all the bandwidth.

So this was a problem. The solution was, of course, to approve and download only the needed updates. Amazingly, WSUS has no way of automatically approving needed updates (not even as a checkbox buried five or six option screens down, off by default).

So, Codeplex to the rescue!

WSUS Smart Approve does exactly what it says on the tin – automatically approving updates according to certain rules. One of those is, of course, to approve needed updates automatically.

 

MetaWeblogAPI – ooMetaWeblog

One of the nice things about the MetaWeblogAPI is that almost every blogging client supports it. One of those is, of course, Windows Live Writer 2011.

Using Windows Live Writer as a WYSIWYG editor would be convenient.

There are a number of options to implement a server-side MetaWeblogAPI endpoint. Scott Hanselman has one approach.

I used another approach – the Matlus.MetaWeblogAPI.

The corresponding Codeplex project is ooMetaWeblog.

 

Hope they prove helpful for somebody.

Why Silverlight Should Stay

Mary Jo Foley just published a post discussing the future of Silverlight.

I’m not a Silverlight Developer by any stretch of the imagination. I never played with it. Never touched it at all.

Then, for the Flying Shakes website, I had to have the control below in a web form. Naturally, I turned to Silverlight.

[Image: the Silverlight control from the Flying Shakes site]

 

With no knowledge or experience of Silverlight it took me 90 minutes from idea to working control.

And yes, I realise that I could write an HTML5 version of that now. But it would probably take much, much longer (don't nobody suggest Flash).

Silverlight is good, not just for the rich client experiences it allows us to build, but also because it's part and parcel of the tools we Visual Studio devs work with every day.

The flip side to this, of course, is the user perspective.

Here in the UK we have Sky satellite television. The reason why I like them so much is that they are fairly technology friendly. Besides streaming on the go (iPad, iPhone, etc), you can log on to their Sky Go website to stream on-demand or download and watch on your desktop offline.

This experience is delivered by, wait for it, Silverlight. The impressive part of this whole thing was the Sky Go Desktop Client. It's an offline Silverlight application, popping straight out of the browser and installing silently. It was downloading from my queue 10 seconds after I hit the download button.

Satisfied does not even begin to describe it.

HTML5 may be the bee's knees, but there is still a business case for keeping Silverlight around.

I’ll consider HTML 5 a contender when we have the same level of support and tooling for it as we have now for Silverlight.

LINQ + Google Charts + MVC : Pie Chart

A couple of weeks ago, I pointed out a LINQ snippet that was running against the Google Charts API.

This week, I’m back with my implementation of  the Pie Chart.

I’ve started with the Pie chart because its relatively simple and will lay the foundation for dealing with the complexities of the other charts.

Building Blocks

Since this is MVC, I created a new ChartsService class in the Services namespace of the project.

If you look at the snippets in the original ACM article, you’ll notice an extension method called “SeparatedBy”. So our first task is to create this extension method.

There are, of course, a number of ways to concatenate a list of strings with a separator. This being an exercise in LINQ, we are going to use the LINQ Aggregate method.

   
public static string SeparatedBy(this List<String> list, string separator)
{
    return list.Aggregate((current, next) => current + separator + next);
}

 

I’m sure that you’ll agree with me when i say that it’s a nice and clean approach to what could be a messy block of code.

However, that extension method will only concatenate the list of strings passed to it. Why is this a problem? Because we are going to want to concatenate lists of int objects as well, so that extension method will just not do. We could do some fancy work with generics, but the simplest thing to do is to supply an overloaded method that accepts lists of type int.

public static string SeparatedBy(this List<int> list, string separator)
{
    List<string> thelist = list.ConvertAll<string>(x => x.ToString());

    return thelist.Aggregate((current, next) => current + separator + next);
}

There is one additional difference between this and our original method – the call to ConvertAll. Rather than have a foreach loop that does the conversion, we simply supply an inline lambda function that gets executed against each item in the list and returns a list of the desired type. Again, very clean.
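And if you do want the fancy generic version, a sketch might look like this (assuming that ToString() on T produces the representation you want):

// Generic version: works for any T whose ToString() is meaningful.
public static string SeparatedBy<T>(this List<T> list, string separator)
{
    return list.ConvertAll(x => x.ToString()).Aggregate((current, next) => current + separator + next);
}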

 

So, armed with these extension methods, we can now declare our classes.

Data

Google Charts offers a wide range of functionality and many different kinds of charts. Each chart has a host of differing options and settings available to it. So when creating classes to represent these chart kinds, we have to bear in mind that each will have unique functionality not common to the other charts.

Charts are logically made up of a number of smaller components: bar charts have columns, pie charts have slices, and so on and so forth. So we'll represent these first.


public class Slice
{
    public int Value { get; private set; }
    public string Legend { get; private set; }

    public Slice(int value, string legend)
    {
        this.Value = value;
        this.Legend = legend;
    }
}

Let's look at the Pie class itself now.

Pie Class


public class Pie
{
    public List<Slice> Slices { get; set; }
    public string Title { get; private set; }
    public double Height { get; private set; }
    public double Width { get; private set; }

    public Pie(string title, double height, double width)
    {
        this.Slices = new List<Slice>();
        this.Title = title;
        this.Height = height;
        this.Width = width;
    }
}

We start by declaring a number of properties and a constructor. In the constructor we initialize our list of Slices. This is where we see a departure from the snippets of the ACM article. We do not pass the slices into the constructor. Of course, this is an issue of style over substance; there is no reason why we could not have generated the slices before creating the chart and then passed them in.


public string Compile()
{
    var tt = Title.Split(' ').ToList().SeparatedBy("+");
    var p = new List<string>(){
        this.Slices.Select(slice => slice.Legend).ToList().SeparatedBy("|"),
        this.Slices.Select(slice => slice.Value).ToList().SeparatedBy(",")};

    return string.Format(@"http://chart.googleapis.com/chart?cht=p&chtt={0}&chs={3}x{4}&chl={1}&chd=t:{2}", tt, p.ElementAt(0), p.ElementAt(1), this.Width, this.Height);
}

This is where all the important stuff happens.

Line 3 properly formats the title by putting a + sign between words to signify spaces. Note that we are using the extension method we wrote earlier.

Line 4 creates a new list of strings, each string being the comma- or |-delimited list of values and legends. Using a List gives us greater flexibility later on, when our implementation will handle multiple data series and legends.

Line 8 uses String.Format to merge all this data into a URL that we can use to call the Google Charts API. For an explanation of what each of these parameters means, see the Google Charts API Parameter List.
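To make that concrete: a 200x200 pie titled "My Chart", with slices Apples (30) and Oranges (70), would compile to a URL along these lines (illustrative, not captured from running code):

http://chart.googleapis.com/chart?cht=p&chtt=My+Chart&chs=200x200&chl=Apples|Oranges&chd=t:30,70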

 

ViewModel

Now, this being MVC, the way to display stuff is by using a ViewModel. So let's create one:


public class MonthlyReportsViewModel
{
    public MonthlyReports Details { get; set; }
    public Pie Chart { get; set; }
}

The one property we are interested in here is the Chart Property.

Now that we have our ViewModel, we have to populate it. So, in our ActionResult method:


var model = new MonthlyReportsViewModel();

Pie chart = new Pie("This is my chart", 200, 200);
chart.Slices.Add(new Slice(25, "Slice 1"));
chart.Slices.Add(new Slice(25, "Slice 2"));
chart.Slices.Add(new Slice(25, "Slice 3"));
chart.Slices.Add(new Slice(25, "Slice 4"));
model.Chart = chart;

return View(model);

 

View

In our View itself, we’re going to have to render out the chart. Since the call to Google Charts API will return an image, we can simply do the following:


<img src="@Model.Chart.Compile()" alt="" />

 

What one could do is to put the actual rendering code in a Helper Method and call the Helper Method from your view, like so:


@GoogleCharts.PieChart(Model.Chart)

That, of course, further abstracts the code. It has the advantage of being much cleaner and easier to reuse.
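A sketch of what such a helper could look like (a static method is my assumption; a Razor @helper would do just as well):

// Hypothetical helper: wraps the compiled chart URL in an img tag.
public static class GoogleCharts
{
    public static IHtmlString PieChart(Pie chart)
    {
        return new HtmlString(string.Format("<img src=\"{0}\" alt=\"\" />", chart.Compile()));
    }
}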

 

Conclusion

As you can see, using LINQ to abstract away the complexity of what your code is actually doing is not just the province of database code. One thing I've enjoyed about working with LINQ is how code always comes out looking fresh, clean and crisp. Having worked extensively with LINQ and lambda expressions, using foreach loops to process lists looks so much messier.

Next time, we’ll take a look at the somewhat more complicated Bar Chart. I’ll not cover every single possible piece of functionality, but I’ll cover the basics. All I want to show is how the foundation laid down today can easily translate over. My current implementation of bar charts is sufficient only for the limited functionality the app needs and nothing more. 

At some point in the future, I’d also like to implement Line charts.

 

Postscript

I must say that, apart from working with LINQ, it's been a very satisfying experience for me to implement a C# version of a web API.

There is GoogleChartsSharp on Codeplex that implements a whole lot more of the functionality of Google Charts. I did indeed use it for a while before implementing it on my own.

So it's been a satisfying experience for me to implement an API that allows me to work the way I want to work. Not only did I write something simpler and easier to work with, but I dropped a dependency in the process, and that made me happier than I think it's safe to admit.

Writing against something requires you, the writer, to pay extra attention to the small details. It requires you to think of the relationship between your code, the web API calls and the documentation that supports it. When one uses an already-baked implementation such as GoogleChartsSharp, it's like working with a giant black box: you have no idea what goes on inside, and you really don't want to know the finer details. But writing the API client yourself, you create a white box. And you HAVE to understand those finer details.

So while the LINQ is nothing special in and of itself, nothing earth-shattering or ground-breaking, it is the experience and the satisfaction gained from it that makes this a worthwhile post to write.

LINQ Goodness: Google Charts Edition

My day job (one of them, anyway) is to design*, run and maintain Flying Shakes.

If someone had told me when I started that 90% of the code I'd write would be for the administration side of things (and 87.653% of all stats are made up, but you get my drift), I'd never have believed it.

 

Anyway, to cut a long story short, it was in this context that I came across a fascinating article from the Association for Computing Machinery (and no, I had not heard of them before either). I came across it a month or so ago, but lost the link.

 

With a little bit of Google-fu, I’ve found it once again: The World According to LINQ.

While it's a fascinating article that appeals to the computer scientist in me (supposedly useless classes on in-depth database theory tend to do that), what caught my eye was the code sample right at the bottom for generating Google Charts URLs.

That sample is going to come in very handy for me, and I thought I'd share it with you.

Go ahead and read the article.

 

*If you see me ranting on Twitter or Google+ about CSS, this is probably why.

WCF Chat Update: Long Polling

Updating WCF chat continues slowly but surely. I have not made any commits yet, so you’ll have to wait to see the changes.

In addition to the changes to Authentication, there are changes to the callback mechanism that I originally wrote.

 

When I originally wrote the chat application, the callback was one of the first pieces of code that I wrote. The fact of the matter is, the only requirement for it was to work across the local network (or even simply between instances on the local machine). So when I wrote the Cloud version, suddenly callbacks had to work across NAT to let the application function across the internet.

 

Now, there are a number of possible design patterns that would allow the server to execute a callback on a remote client. 

 

The first is, indeed, the design pattern we use at the moment, where we actually have a callback. The enabler for this is found in WCF: the wsDualHttpBinding allows for dual HTTP connections – one in each direction. This allows you to invoke an operation on the client. However, in order for this to work, the WCF service needs to be instanced per session. So every new client session will spawn a new instance of the server, which means you are going to end up with dozens (or hundreds) of long-lived instances. The question here is scalability. Is this scalable?

It might seem somewhat arrogant to talk about scalability, but if you design with scalability in mind, you’re not going to end up dealing with it later.

 

The other is something that, while not new, really hit the big time after FriendFeed released its Tornado server. Tornado supports long-polling HTTP connections. The basic design pattern involves a client making a call (HTTP or otherwise) to a server. The server receives the connection and keeps it open until it has something to return. Some long-polled connections eventually time out, and this is the implicit signal for the client to immediately open another connection. Others, such as the FriendFeed implementation, keep them open indefinitely.
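Sketched in code, the client side of the pattern boils down to a loop like this (purely illustrative; the names are not from the actual chat client):

// Hypothetical long-polling loop: each call blocks server-side until
// there are messages to return, or until the poll times out.
while (running)
{
    try
    {
        var messages = proxy.GetMessages(sessionId);
        Dispatch(messages);
    }
    catch (TimeoutException)
    {
        // A timeout is the implicit signal to immediately poll again.
    }
}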

 

There are probably more that you can think of, but these are the two that I considered for the chat application. As with most things, the choice is between a Push model (where notifications are pushed to the client) and a Pull model (where information, notifications or otherwise, is pulled from the server by the client).

 

The fact of the matter is that I like both. Both are cool. And both are supported intrinsically by WCF – no coding voodoo to make things work.

Of the two, I've begun implementing the long-polling method. Although it's a radical departure, it allows the overall design of the server side to remain the same. The WCF server remains a single-instance service, and so the implementation remains the same.

The WCF stack is written in such a way that when you mark an OperationContract as using the Async Pattern, WCF will start the async operation and then go off and handle other requests until that method returns. The End method that receives the result then returns the data to the client. In other words, it's non-blocking.
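As a rough sketch, an operation marked this way might look like so (the names are hypothetical; see the MSDN post linked below for the real pattern):

[ServiceContract]
public interface IChatService
{
    // Begin/End pair marked with AsyncPattern: WCF frees the worker thread
    // while the poll waits, so held-open connections don't block other requests.
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetMessages(string sessionId, AsyncCallback callback, object state);

    string[] EndGetMessages(IAsyncResult result);
}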

I’ve not sorted out the exact specifics of implementation, but there will be changes to both the server and the client to accommodate  this. In saying that, I’m doing  a lot of simplification to the class structure. So hopefully what emerges from all these changes will be better than the current setup. Even I had to go back and follow the inheritance tree to figure things out.

These changes are happening in parallel with the changes to the authentication scheme.

 

So, while I’ve got no code, I leave you with this MSDN blog post on Async programming with WCF and this post that adapts it to long-polling specifically.