Note to self: Use OutputCache in MVC 3

I’m just reading ScottGu’s post on the MVC 3 release candidate.

What got me really thinking was the output cache attribute:

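The original post embeds a screenshot of the code here; reconstructed from Scott’s description below, it looks something like this (the data-access helper is my stand-in):

    public class ProductsController : Controller
    {
        // Cached for 3600 seconds, with a separate cache entry per category value.
        [OutputCache(Duration = 3600, VaryByParam = "category")]
        public ActionResult DailySpecials(string category)
        {
            // This body only runs on a cache miss; otherwise the
            // cached partial is served with no database access at all.
            var specials = GetSpecialsFor(category); // hypothetical data-access helper
            return PartialView("_DailySpecials", specials);
        }
    }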

Scott explains:

Notice how the DailySpecials method above has an [OutputCache] attribute on it.  This indicates that the partial content rendered by it should be cached (for 3600 seconds/1 hour).  We are also indicating that the cached content should automatically vary based on the category parameter.

If we have 10 categories of products, our DailySpecials method will end up caching 10 different lists of specials – and the appropriate specials list (computers or diapers) will be output depending upon what product category the user is browsing in.  Importantly: no database access or processing logic will happen if the partial content is served out of the output cache – which will reduce the load on our server and speed up the response time.

This new mechanism provides a pretty clean and easy way to add partial-page output caching to your applications.

So, with my Windows Azure Feedreader in mind, my note to self is as follows:

In a couple of weeks, when we get to the shared items RSS feed, we can use this very technique, varying by user ID rather than by category as in the example above.
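A minimal sketch of what I mean (the action name and helper are mine, purely illustrative):

    // Cached per user rather than per category.
    [OutputCache(Duration = 3600, VaryByParam = "userId")]
    public ActionResult SharedItems(string userId)
    {
        // On a cache miss we'd hit blob storage; on a hit we serve the cached XML.
        string rss = BuildSharedItemsFeed(userId); // hypothetical helper
        return Content(rss, "application/rss+xml");
    }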

I’m actually quite relieved, as I was wondering how we’d do the shared items RSS feed. These actions would, arguably, be highly trafficked, and reading directly from the blobs (which was my original plan) would be too bandwidth-intensive. Problem solved.

I’m very excited about MVC 3 as a whole, but when I see the direct application of features, I get even more excited.

In fact, when Razor first came out, I seriously considered using it for the Feedreader instead of the default view engine.

Windows Azure Feedreader Episode 7: User Subscriptions

Episode 7 is up. I somehow found time to do it over several days.

There are some audio issues, but they are minor. There is some humming noise for most of the show; I’ve yet to figure out where it came from. Apologies for that.


This week we:

  • clear up our HTML Views
  • implement user subscriptions

We aren’t finished with user subscriptions by any means. We need to modify our OPML handling code to take account of the logged-in user.


Enjoy:

Remember, you can head over to vimeo.com to see the show in all its HD glory.

Next week we’ll finish up the user subscriptions code in the OPML handling code.

And we’ll start our update code as well.

Windows Azure Feedreader Episode 6: Eat Your Vegetables

A strange title, no doubt, but I’ll explain in a moment.

Firstly, apologies for the delay. I’ve been busy with some other projects that couldn’t be delayed. And yes, I have been using Windows Azure Storage for that.

I’m doing some interesting work with Tropo, the cloud communications platform. It’s similar to Twilio. So, at some point I’ll do a screencast – my contribution to remedying the lack of documentation for Tropo’s C# library.

This week’s episode had the stated intention of testing the OPML upload and storage routine we wrote back in week 4.

We manage just that: reading in the contents of the OPML file, storing the blog metadata in Windows Azure tables and the posts in blob storage.

However, in getting there, we have to tackle a number of bugs. Truthfully, a few could have been avoided earlier – such as the fact that calling UrlPathEncode does not encode the ‘+’ character, so IIS7 freaks out when blob names containing it are used in URLs. Others I had no idea about – like the requirement that blob container and queue names be all lowercase.
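For the record, a minimal console sketch of the ‘+’ problem (the blob name is made up):

    using System;
    using System.Web; // reference System.Web.dll

    class UrlPathEncodeDemo
    {
        static void Main()
        {
            string blobName = "scott hanselman+computerzen.xml"; // hypothetical blob name

            // UrlPathEncode escapes the space but leaves the '+' untouched,
            // and IIS7's request filtering rejects '+' in a URL path by default.
            Console.WriteLine(HttpUtility.UrlPathEncode(blobName));
            // scott%20hanselman+computerzen.xml

            // One workaround: get rid of the offending character before the
            // name ever becomes part of a URL.
            Console.WriteLine(HttpUtility.UrlPathEncode(blobName.Replace("+", "-")));
            // scott%20hanselman-computerzen.xml
        }
    }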

Which brings me to why I’ve named this episode as such. Dave Winer wrote a brilliant post earlier this week about working with Open Formats. To quote him:

1. If it hurts when you do it, stop doing it.

2. Shut up and eat your vegetables.

3. Assume people have common sense.

Number 2 says you have to choose between being a person who hangs out on mail lists talking foul about people and their work, or plowing ahead and making software anyway even though you’re dealing with imperfection. If you’re serious about software, at some point you just do what you have to do. You accept that the format won’t do everything anyone could ever want it to do. Realize that’s how they were able to ship it, by deciding they had done enough and not pushing it any further.

So, this episode is about doing exactly that – shutting up about the bugs and getting the software to work.

The fact is that every episode so far has been about laying the foundations upon which we can build. The bugs solved in this episode mean fewer problems down the road. We have been immersed in the nitty-gritty details now, so that later we can build without having to worry whether our feed details are really being read and stored, whether our tables are being created correctly, or whether blobs are being stored in the right containers.

Enjoy (or not) my bug fixing:

Remember to head over to Vimeo.com to see the episode in all its HD glory.

A word about user authentication. While I don’t cover it in the video, I’ll be moving to Google Federated Authentication instead of Windows Live ID. So for next week, I’ll have the Windows Live stuff removed, and we’ll be using Google plus the Forms Authentication integration that the DotNetOpenAuth library provides.

Next week, the order of business is as follows:

  1. Clean up our view HTML
  2. ViewData doesn’t work in a couple of places – so we need to fix that
  3. Store user subscriptions
  4. Make a start on our update code

PS. I added a new pre-roll this week as an experiment. Hope you like it.

Coming to a realization

I was doing some work last week with Tropo, the cloud communications platform.

To start myself off, I did a quick IVR app – “Welcome to X, please press 1 for y….” etc.

That took a couple of days to get working properly. Literally, this is 5 lines of code with the Tropo C# API. And it took days.

But I got there in the end.

Meanwhile, the current screencast series is progressing well. Reviewing things, though, it does seem that it’s getting a little slow and tedious. Episode 6, hopefully out tomorrow, may well be the most tedious.

But it struck me that there was something in common with both projects. Both are going slowly, tackling problems one step at a time.

In contrast, there are other projects of mine that promised loads but never went anywhere, like my WHS2Smugmug add-in. The reason was simple – I tried to do too much too quickly.

That’s somewhat the trouble with being a software developer. They say that genius is like lightning across the brain. And when I get an idea, I tend to see the whole thing, the entire feature set. I see all the plumbing required to get these things working, and before I know it, I’m immersed in a thousand and one technical details about standards and formats and APIs – to paraphrase The Big Bang Theory, it’s chaos in my head.

But because I tend to do that too often, I bite off more than I can chew.

This screencast series has really slowed down the pace of development, forcing me to consider issues and problems one episode at a time. In a sense, I’m taking small bites and chewing them thoroughly.

Like writers and musicians, software developers need to find their own style, their own rhythm. I guess I’m still finding mine.

Windows Azure Feedreader: Choosing a Login System – Which would you choose?

Update: Here’s the related screencast episode.

As you may have noticed in the last episode (episode 4), writing the Feed Reader has got to the stage where we require user IDs.

Given the delicate nature of login credentials and the security precautions required, it’s much easier to hand off the details to Google Federated Login, or even Windows Live ID. These services simply give us back a token indicating who has logged in.

The previous version of the feed reader used Windows Live ID. It’s a very simple implementation, consisting of a single MVC controller and a small iframe containing the login button – elegantly simple. Since it’s MVC, there are no issues running it on Windows Azure. The reasons I picked it last time were a) its simplicity and b) the fact that it’s part of the Windows Azure ecosystem.

The alternative is to use Google Federated Login. This is a combination of OpenID and OAuth. The implementation is certainly much more involved, with a lot of back and forth with Google’s Servers.

[Diagram from Google’s documentation: the Federated Login (OpenID + OAuth) flow, summarized step by step below.]


  1. The web application asks the end user to log in by offering a set of log-in options, including using their Google account.
  2. The user selects the “Sign in with Google” option. See Designing a Login User Interface for more options.
  3. The web application sends a “discovery” request to Google to get information on the Google login authentication endpoint.
  4. Google returns an XRDS document, which contains the endpoint address.
  5. The web application sends a login authentication request to the Google endpoint address.
  6. This action redirects the user to a Google Federated Login page, either in the same browser window or in a popup window, and the user is asked to sign in.
  7. Once logged in, Google displays a confirmation page (redirect version / popup version) and notifies the user that a third-party application is requesting authentication. The page asks the user to confirm or reject linking their Google account login with the web application login. If the web application is using OpenID+OAuth, the user is then asked to approve access to a specified set of Google services. Both the login and user information sharing must be approved by the user for authentication to continue. The user does not have the option of approving one but not the other. Note: If the user is already logged into their Google account, or has previously approved automatic login for this web application, the login step or the approval step (or both) may be skipped.
  8. If the user approves the authentication, Google returns the user to the URL specified in the openid.return_to parameter of the original request. A Google-supplied identifier, which has no relationship to the user’s actual Google account name or password, is appended as the query parameter openid.claimed_id. If the request also included attribute exchange, additional user information may be appended. For OpenID+OAuth, an authorized OAuth request token is also returned.
  9. The web application uses the Google-supplied identifier to recognize the user and allow access to application features and data. For OpenID+OAuth, the web application uses the request token to continue the OAuth sequence and gain access to the user’s Google services. Note: OpenID authentication for Google Apps (hosted) accounts requires an additional discovery step. See OpenID API for Google Apps accounts.


As you can see, an involved process.

There is a C# library available called DotNetOpenAuth, and I’ll be investigating integrating it into MVC and using it in the Feed Reader.
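To give a flavour of that, here’s a hedged sketch of DotNetOpenAuth’s standard relying-party pattern in an MVC controller; it is not necessarily the shape the Feed Reader’s code will take:

    using System.Web.Mvc;
    using System.Web.Security;
    using DotNetOpenAuth.Messaging;
    using DotNetOpenAuth.OpenId;
    using DotNetOpenAuth.OpenId.RelyingParty;

    public class AuthController : Controller
    {
        private static readonly OpenIdRelyingParty OpenId = new OpenIdRelyingParty();

        public ActionResult LogOnWithGoogle()
        {
            var response = OpenId.GetResponse();
            if (response == null)
            {
                // Steps 3-6: discovery against Google's endpoint, then
                // redirect the user off to the Federated Login page.
                var request = OpenId.CreateRequest("https://www.google.com/accounts/o8/id");
                return request.RedirectingResponse.AsActionResult();
            }

            // Step 8: Google has sent the user back with a claimed identifier.
            if (response.Status == AuthenticationStatus.Authenticated)
            {
                FormsAuthentication.SetAuthCookie(response.ClaimedIdentifier, false);
                return RedirectToAction("Index", "Home");
            }
            return RedirectToAction("LogOn"); // cancelled or failed; hypothetical action
        }
    }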

There is one advantage of using Google Accounts, and that’s the fact that the Google Base Data API lets us import Google Reader subscriptions.

It may well be possible to allow the use of dual login systems. Certainly, sites like stackoverflow.com use this to great effect.

Why is choosing an external login system important?

Well, firstly, it’s one less username-and-password combination that has to be remembered.

Secondly, the security considerations become the onus of the authentication provider.

If we were to go with multiple authentication providers, I’d add a third reason: Not having an account with the chosen authentication provider is a source of frustration for users.

So, the question is, dear readers, which option would you choose?

  1. Google Federated Login
  2. Windows Live ID
  3. Both

Windows Azure Feed Reader Episode 4: The OPML Edition

As you’ve no doubt surmised from the title, this week’s episode deals almost entirely with the OPML reader and fitting it in with the rest of our code base.

If you remember, last week I showed a brand new version of the OPML reader code using LINQ and Extension Methods. This week, we begin by testing said code. Given that it’s never been tested before, bugs are virtually guaranteed. Hence we debug the code, make the appropriate changes and fold them back into our code base.

We then go on to create an upload page for the OPML file. We store the OPML file in a blob and drop a message in a special queue so that the file gets processed. We make the changes to WorkerRole.cs to pull that message off the queue, process the file correctly, and retrieve and store the feeds. If you’ve been following along, none of this code will be earth-shattering to you either.
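In outline, the two halves look something like this. It’s a sketch against the 1.x StorageClient API, with my own container/queue names and a hypothetical processing helper:

    using System.IO;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class OpmlIngest
    {
        // Web role side: store the uploaded OPML file and enqueue its blob name.
        public static void EnqueueUpload(CloudStorageAccount account, string fileName, Stream opml)
        {
            var container = account.CreateCloudBlobClient().GetContainerReference("opml");
            container.CreateIfNotExist(); // container names must be lowercase
            container.GetBlobReference(fileName).UploadFromStream(opml);

            var queue = account.CreateCloudQueueClient().GetQueueReference("opmlqueue");
            queue.CreateIfNotExist();
            queue.AddMessage(new CloudQueueMessage(fileName));
        }

        // Worker role side, called from WorkerRole.Run's polling loop.
        public static void ProcessNext(CloudStorageAccount account)
        {
            var queue = account.CreateCloudQueueClient().GetQueueReference("opmlqueue");
            var message = queue.GetMessage();
            if (message == null) return; // nothing to do this tick

            // Hypothetical: read the blob named by the message, parse the
            // OPML, fetch the feeds and store them.
            // ProcessOpmlBlob(account, message.AsString);

            queue.DeleteMessage(message); // only after successful processing
        }
    }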

The fact is that a) making the show any longer would bust my Vimeo Basic upload limit, and b) I couldn’t think of anything else that could be completed in ~10 minutes.

The good thing is that we’re back to our 45-minute-ish show time, after last week’s aberration.

Don’t forget, you can head over to Vimeo to see the show in all its HD glory: http://www.vimeo.com/14510034

After last week’s harsh lessons in webcasting, file backup and the difference between 480p and 720p when displaying code, this week’s show should go down perfectly well.

Enjoy.

Remember, the code lives at http://windowsazurefeeds.codeplex.com

PS. There’s some occasional interference in the sound. I’m wondering if the subwoofer is causing it while I record. Apologies.

Windows Azure Feed Reader Episode 3

Sorry for the lateness of this posting. Real life keeps getting in the way.

This week’s episode is a bit of a departure from the previous two. The original recording I did on Friday had absolutely no sound, so instead of re-doing everything, I give you a deep walkthrough of the code. Be that as it may, I did condense an hour’s worth of coding into a 20-minute segment – which is probably a good thing.

As I mentioned last week, this week we get our code to actually do stuff – like downloading, parsing and displaying a feed in the MVCFrontEnd.
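For the curious, the downloading-and-parsing core can be remarkably small thanks to System.ServiceModel.Syndication. A sketch, not the episode’s exact code:

    using System;
    using System.ServiceModel.Syndication; // reference System.ServiceModel.Web
    using System.Xml;

    class FeedDemo
    {
        static void Main()
        {
            using (var reader = XmlReader.Create("http://feeds.feedburner.com/ScottHanselman"))
            {
                // SyndicationFeed copes with both RSS 2.0 and Atom 1.0.
                var feed = SyndicationFeed.Load(reader);
                foreach (var item in feed.Items)
                    Console.WriteLine("{0} ({1:d})", item.Title.Text, item.PublishDate);
            }
        }
    }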

We get some housekeeping done as well – I re-wrote the OPML reader using LINQ and Extension Methods. We’ll test this next week.
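The heart of an OPML reader in LINQ to XML is just a query over the outline elements. Again, a sketch of the general shape rather than the episode’s exact code:

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    public class FeedInfo
    {
        public string Title { get; set; }
        public string FeedUrl { get; set; }
        public string SiteUrl { get; set; }
    }

    public static class OpmlReader
    {
        // OPML stores feeds as <outline> elements carrying xmlUrl,
        // title/text and htmlUrl attributes.
        public static IEnumerable<FeedInfo> Read(string path)
        {
            return from outline in XDocument.Load(path).Descendants("outline")
                   where outline.Attribute("xmlUrl") != null // skip folder outlines
                   select new FeedInfo
                   {
                       Title = (string)outline.Attribute("title")
                               ?? (string)outline.Attribute("text"),
                       FeedUrl = (string)outline.Attribute("xmlUrl"),
                       SiteUrl = (string)outline.Attribute("htmlUrl"),
                   };
        }
    }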

The final 20 minutes or so are a fine demonstration of voodoo troubleshooting (i.e. hit Run and see what breaks), but we get Scott Hanselman’s feed parsed and displayed. The View needs a bit of touching up to display the feed better, but be that as it may, it works.

Since we get a lot done this week, it’s rather longer – 1 hour and 9 minutes. I could probably edit out all the pregnant pauses. 🙂

Here’s the show:

Success! My 2nd HD attempt uploaded last night. Click here to see the HD on vimeo.com. Enjoy.

Remember, the code lives at http://windowsazurefeeds.codeplex.com

Windows Azure Feed Reader, Episode 2

A few days late (I meant to have this up on Tuesday). Sorry. But here is part 2 of my series on building a Feed Reader for the Windows Azure platform.

If you remember, last week we covered the basics, and we ended by saying that this week’s episode would be working with Windows Azure proper.

Well, this week we cover the following (a sketch of how the table pieces fit together follows the list):

  • Webroles (continued)
  • CloudQueueClient
  • CloudQueue
  • CloudBlobClient
  • CloudBlobContainer
  • CloudBlob
  • Windows Azure tables
  • LINQ (no PLINQ yet)
  • Lambdas (the very basics)
  • Extension Methods
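To give a flavour of how the table pieces fit together, here’s a minimal sketch against the 1.x StorageClient API. The entity and context names are mine, not necessarily what ends up in the repository:

    using System.Linq;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // A feed's metadata as a table entity (PartitionKey, RowKey and
    // Timestamp are inherited from TableServiceEntity).
    public class FeedEntity : TableServiceEntity
    {
        public string Title { get; set; }
        public string FeedUrl { get; set; }
    }

    // A DataServiceContext subclass exposing the table as an IQueryable,
    // so we can hit it with LINQ and lambdas.
    public class FeedContext : TableServiceContext
    {
        public FeedContext(string baseAddress, StorageCredentials credentials)
            : base(baseAddress, credentials) { }

        public IQueryable<FeedEntity> Feeds
        {
            get { return CreateQuery<FeedEntity>("Feeds"); }
        }
    }

Creating the table and adding a row is then just:

    var account = CloudStorageAccount.DevelopmentStorageAccount;
    account.CreateCloudTableClient().CreateTableIfNotExist("Feeds");

    var ctx = new FeedContext(account.TableEndpoint.AbsoluteUri, account.Credentials);
    ctx.AddObject("Feeds", new FeedEntity
    {
        PartitionKey = "feeds",
        RowKey = "hanselman",
        Title = "Scott Hanselman",
        FeedUrl = "http://feeds.feedburner.com/ScottHanselman"
    });
    ctx.SaveChanges();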

Now, I did do some unplanned stuff this week. I abstracted all the storage logic away into its own worker class, rather than having it in Fetchfeed itself as originally planned. This actually makes more sense than my original plan.

I’ve added name service classes for containers and queues as well, just so each class lives in its own file.
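For illustration, a name service can be as small as this (my constant names, not necessarily the repository’s); it also gives us one obvious home for Azure’s naming rules:

    // One home for the magic strings, one class per file.
    public static class ContainerNames
    {
        // Azure blob container names must be lowercase.
        public const string Feeds = "feeds";
        public const string Posts = "posts";
    }

    public static class QueueNames
    {
        // Queue names share the same lowercase restriction.
        public const string FetchFeed = "fetchfeed";
    }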

Like last week’s, this episode is warts and all. I’m slowly getting the hang of this screencasting thing, so I’m sure I’ll get better as time goes on. It’s forcing me to think things through a little more thoroughly as well.

Enjoy:

Next week we’ll start looking at the MVC project and hopefully get it to display a few feeds for us. We might even try to get the OPML reader up to scratch as well.

PS. This week’s show is higher-res than last time. Let me know if it’s better.