Windows Azure Feedreader: Choosing a Login System – Which would you choose?

Update: Here’s the related screencast episode.

As you may have noticed in the last episode (episode 4), writing the Feed Reader has got to the stage where we require UserIDs.

Given the delicate nature of login credentials and the security precautions required, it's much easier to hand off the details to Google Federated Login, or even Windows Live ID. These services simply give us a return token indicating who has logged in.

The previous version of the feed reader used Windows Live ID. It's an elegantly simple implementation: a single MVC controller, plus a small iFrame containing the login button. Since it's MVC, there are no issues running it on Windows Azure. The reasons I picked it last time were a) its simplicity and b) the fact that it's part of the Windows Azure ecosystem.
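For illustration, here is roughly the shape of that controller. This is purely a sketch: the WindowsLiveLogin helper is a stand-in for the helper class that ships with the Live ID Web Authentication samples, and its exact API here is an assumption rather than the Feed Reader's actual code.

using System.Web.Mvc;
using System.Web.Security;

public class LiveIdController : Controller
{
    // Live ID posts the sign-in token back to this action; the iFrame on the
    // page only hosts the "Sign in" button itself.
    [AcceptVerbs(HttpVerbs.Post)]
    [ValidateInput(false)]
    public ActionResult Handler()
    {
        // Hypothetical helper modelled on the Web Authentication SDK sample;
        // it reads the application ID and secret from config and decrypts the token.
        var wll = new WindowsLiveLogin(true);
        WindowsLiveLogin.User user = wll.ProcessLogin(Request.Form);

        if (user != null)
        {
            // user.Id is an opaque, application-specific identifier: our UserID.
            FormsAuthentication.SetAuthCookie(user.Id, false);
        }

        return RedirectToAction("Index", "Home");
    }
}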

The alternative is to use Google Federated Login, which is a combination of OpenID and OAuth. The implementation is certainly much more involved, with a lot of back and forth with Google's servers.

[Diagram: the OpenID/OAuth federated login flow between the web application and Google]

 

  1. The web application asks the end user to log in by offering a set of log-in options, including using their Google account.
  2. The user selects the “Sign in with Google” option. See Designing a Login User Interface for more options.
  3. The web application sends a “discovery” request to Google to get information on the Google login authentication endpoint.
  4. Google returns an XRDS document, which contains the endpoint address.
  5. The web application sends a login authentication request to the Google endpoint address.
  6. This action redirects the user to a Google Federated Login page, either in the same browser window or in a popup window, and the user is asked to sign in.
  7. Once logged in, Google displays a confirmation page (redirect version / popup version) and notifies the user that a third-party application is requesting authentication. The page asks the user to confirm or reject linking their Google account login with the web application login. If the web application is using OpenID+OAuth, the user is then asked to approve access to a specified set of Google services. Both the login and the sharing of user information must be approved by the user for authentication to continue; the user does not have the option of approving one but not the other. Note: if the user is already logged into their Google account, or has previously approved automatic login for this web application, the login step or the approval step (or both) may be skipped.
  8. If the user approves the authentication, Google returns the user to the URL specified in the openid.return_to parameter of the original request. A Google-supplied identifier, which has no relationship to the user’s actual Google account name or password, is appended as the query parameter openid.claimed_id. If the request also included attribute exchange, additional user information may be appended. For OpenID+OAuth, an authorized OAuth request token is also returned.
  9. The web application uses the Google-supplied identifier to recognize the user and allow access to application features and data. For OpenID+OAuth, the web application uses the request token to continue the OAuth sequence and gain access to the user's Google services. Note: OpenID authentication for Google Apps (hosted) accounts requires an additional discovery step. See OpenID API for Google Apps accounts.

 

As you can see, it's an involved process.

There is a C# library available called DotNetOpenAuth, and I'll be investigating how to integrate it into MVC and use it in the Feed Reader.
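As a first guess at what that integration might look like, here is a minimal MVC login action built on DotNetOpenAuth's relying-party support. The controller and action names are mine, not the Feed Reader's; only the DotNetOpenAuth types come from the library.

using System.Web.Mvc;
using System.Web.Security;
using DotNetOpenAuth.Messaging;
using DotNetOpenAuth.OpenId;
using DotNetOpenAuth.OpenId.RelyingParty;

public class AuthController : Controller
{
    private static readonly OpenIdRelyingParty OpenId = new OpenIdRelyingParty();

    // Google's OpenID endpoint, used for the discovery step in the flow above.
    private const string GoogleEndpoint = "https://www.google.com/accounts/o8/id";

    [ValidateInput(false)]
    public ActionResult LogOn()
    {
        var response = OpenId.GetResponse();
        if (response == null)
        {
            // No assertion yet: perform discovery and redirect the user to Google.
            var request = OpenId.CreateRequest(Identifier.Parse(GoogleEndpoint));
            return request.RedirectingResponse.AsActionResult();
        }

        if (response.Status == AuthenticationStatus.Authenticated)
        {
            // The claimed identifier is the opaque, Google-supplied identifier
            // returned in openid.claimed_id; it becomes our UserID.
            FormsAuthentication.SetAuthCookie(response.ClaimedIdentifier.ToString(), false);
            return RedirectToAction("Index", "Home");
        }

        // The user cancelled or the assertion failed verification.
        return new HttpUnauthorizedResult();
    }
}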

There is one advantage to using Google accounts: the Google Base Data API lets us import Google Reader subscriptions.

It may well be possible to allow the use of dual login systems. Certainly, sites like stackoverflow.com use this to great effect.

Why is choosing an external login system important?

Well, firstly, it's one less username and password combination that has to be remembered.

Secondly, the security considerations become the onus of the authentication provider.

If we were to go with multiple authentication providers, I’d add a third reason: Not having an account with the chosen authentication provider is a source of frustration for users.

So, the question is, dear readers, which option would you choose?

  1. Google Federated Login
  2. Windows Live ID
  3. Both

Windows Azure Feed Reader Episode 4: The OPML Edition

As you’ve no doubt surmised from the title, this week’s episode deals almost entirely with the OPML reader and fitting it in with the rest of our code base.

If you remember, last week I showed a brand new version of the OPML reader code using LINQ and extension methods. This week, we begin by testing said code. Given that it’s never been tested before, bugs are virtually guaranteed. Hence we debug the code, make the appropriate changes and fold them back into our code base.

We then go on to create an Upload page for the OPML file. We store the OPML file in a blob and drop a message in a dedicated queue to say that an OPML file has arrived. We make the changes to WorkerRole.cs to pull that message off the queue, process the file correctly, retrieve the feeds and store them. If you’ve been following along, none of this code will be earth-shattering to you either.
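A minimal sketch of that blob-plus-queue pattern is below, assuming the 1.x StorageClient library; the container, queue and class names are placeholders rather than the actual Feed Reader code.

using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class OpmlStore
{
    private readonly CloudStorageAccount account;

    public OpmlStore(CloudStorageAccount account)
    {
        this.account = account;
    }

    // Called from the Upload page: store the OPML file as a blob and
    // drop a message on the queue so the worker role knows about it.
    public void QueueOpmlUpload(string blobName, Stream opmlFile)
    {
        var blobClient = account.CreateCloudBlobClient();
        var container = blobClient.GetContainerReference("opmlfiles");
        container.CreateIfNotExist();
        container.GetBlobReference(blobName).UploadFromStream(opmlFile);

        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("opmlqueue");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(blobName));
    }

    // Called from WorkerRole.Run(): pull a message, fetch the blob, process it.
    public void ProcessNextOpmlMessage()
    {
        var queue = account.CreateCloudQueueClient().GetQueueReference("opmlqueue");
        var message = queue.GetMessage();
        if (message == null) return;

        var container = account.CreateCloudBlobClient().GetContainerReference("opmlfiles");
        string opml = container.GetBlobReference(message.AsString).DownloadText();

        // ... parse the OPML, extract the feed URLs, store the feeds ...

        queue.DeleteMessage(message);
    }
}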

The fact is that a) making the show any longer would bust my Vimeo Basic upload limit, and b) I couldn’t think of anything else to do that could be completed in ~10 minutes.

The good thing is that we’re back to our 45-minute-ish show time, after last week’s aberration.

Don’t forget, you can head over to Vimeo to see the show in all its HD glory: http://www.vimeo.com/14510034

After last week’s harsh lessons in web-casting, file backup and the difference between 480p and 720p when displaying code, this week’s show should go perfectly well.

Enjoy.

Remember, the code lives at http://windowsazurefeeds.codeplex.com

PS: There’s some occasional interference in the sound; I’m wondering if the subwoofer is causing it while I record. Apologies.

Windows Azure Feed Reader, Episode 2

A few days late (I meant to have this up on Tuesday), sorry, but here is part 2 of my series on building a Feed Reader for the Windows Azure platform.

If you remember, last week we covered the basics, and we ended by saying that this week’s episode would be working with Windows Azure proper.

Well, this week we cover the following (there’s a small code sketch after the list to give you a taste):

  • Webroles (continued)
  • CloudQueueClient
  • CloudQueue
  • CloudBlobClient
  • CloudBlobContainer
  • CloudBlob
  • Windows Azure tables
  • LINQ (no PLINQ yet)
  • Lambdas (the very basics)
  • Extension Methods
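To give a flavour of the table, LINQ and lambda items in that list, here is a simplified sketch using the 1.x StorageClient library; the Feed entity, row key scheme and table name are stand-ins, not the show’s actual code.

using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// A row in the Feeds table: partitioned by user, one row per subscribed feed.
public class FeedEntity : TableServiceEntity
{
    public FeedEntity() { }

    public FeedEntity(string userId, string feedUrl)
    {
        PartitionKey = userId;
        RowKey = feedUrl.GetHashCode().ToString(); // simplistic row key for the sketch
        FeedUrl = feedUrl;
    }

    public string FeedUrl { get; set; }
}

public static class FeedTable
{
    private const string TableName = "Feeds";

    public static void AddFeed(CloudStorageAccount account, FeedEntity feed)
    {
        var tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist(TableName);

        var context = tableClient.GetDataServiceContext();
        context.AddObject(TableName, feed);
        context.SaveChanges();
    }

    public static FeedEntity[] FeedsForUser(CloudStorageAccount account, string userId)
    {
        var context = account.CreateCloudTableClient().GetDataServiceContext();

        // LINQ over the table service: filter by PartitionKey with a lambda.
        return context.CreateQuery<FeedEntity>(TableName)
                      .Where(f => f.PartitionKey == userId)
                      .ToArray();
    }
}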

Now, I did do some unplanned stuff this week: I abstracted all the storage logic away into its own worker class. I originally planned to have this in Fetchfeed itself, but this actually makes more sense than my original plan.

I’ve added name service classes for Containers and Queues as well, just so each class lives in its own file.

Like last week’s, this episode is warts and all. I’m slowly getting the hang of this screencasting thing, so I’m sure I’ll get better as time goes on. It’s forcing me to think things through a little more thoroughly as well.

Enjoy:

Next week we’ll start looking at the MVC project and hopefully get it to display a few feeds for us. We might even try to get the OPML reader up to scratch as well.

PS: This week’s show is higher-res than last time. Let me know if it’s better.

Building a Feed Reader on Windows Azure – Screencast Part 1

As I announced last month, I’ll be screencasting my efforts on the Windows Azure Feed Reader.

Well, a couple of weeks ago I just went ahead and did the first episode. The delay between recording it and posting it is due to a week’s holiday and a further week and a half spent offline thanks to my ISP.

I’ve open-sourced the code at http://windowsazurefeeds.codeplex.com.

In this episode I cover:

  • the data access layer
  • the associated helper code
  • adding a skeleton Feed class
  • adding all the Windows Azure roles that we’ll be needing

I’ve tried not only to explain what I’m doing, but also why I’ve done things this way – with a view to how it will affect things down the road.

So here is the screencast:

Next time we’ll begin integrating things with our worker and web roles. We’ll be working with blobs and queues. And we’ll start chewing through some live data.

I’ll try keeping to a weekly schedule, but my schedule of late is anything but regular and predictable.

Enjoy.

Thoughts on Windows Azure Storage

I’m sitting here planning what to do with the rewrite of the Feedreader I wrote for university. Naturally, the storage solution is always going to be a contentious issue; the NoSQL versus SQL debate plays a big part in it.

But, for the most part, I used Windows Azure tables and blobs for their simplicity over SQL Azure. In saying that, SQL is not my favourite thing in the world, so make of that what you will. But also, for a demonstrator application, the use of something completely new played into my hands very well.

The rewrite is also meant to be a demonstrator application, so the Windows Azure storage is staying.

But not so fast, because Windows Azure Storage needs something. The way I used tables and blobs was essentially as a poor man’s object database. That meant there was a lot of legwork involved, not to mention tedious plumbing code. The fact is, I think this is the most logical use case for Windows Azure Storage: the metadata is stored in the tables and the objects themselves are in the blobs.

What I would like to see added, then, is the ability to formalize this relationship in code: some way of saying “hey, this value in this table row actually points to a blob in this container”, so that I can call “getBlob()” or something on a row and get the blob back. Now, to be clear, I don’t mean this to be a foreign key relationship. I don’t want to cascade updates and deletes. And I certainly don’t want my hands tied by making the blob attribute column (or whatever) mandatory.
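Just to make the idea concrete, here is roughly what the duct-taped extension-method version might look like today. The entity and its property names are hypothetical, and this assumes the 1.x StorageClient library.

using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity: the table row stores only the blob's location as metadata.
public class FeedItemEntity : TableServiceEntity
{
    public string BlobContainer { get; set; }   // e.g. "feeditems"
    public string BlobName { get; set; }        // null when no blob is attached
}

public static class BlobPointerExtensions
{
    // Resolve the (optional) blob this row points to. No cascading updates or
    // deletes, and no mandatory column: a row without a blob name just returns null.
    public static CloudBlob GetBlob(this FeedItemEntity entity, CloudBlobClient client)
    {
        if (string.IsNullOrEmpty(entity.BlobName))
        {
            return null;
        }

        var container = client.GetContainerReference(entity.BlobContainer);
        return container.GetBlobReference(entity.BlobName);
    }
}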

Now, this could be added right now, and in fact I am considering doing just that. But support for this in the Storage Client library would be nice. Whether back-end support is needed, or is in fact a good idea, is another question entirely, given the implications of making such a fundamental change on the back end. For example, say you’ve exposed a table via OData: how do you expose the blob as well? And given the nature of the use case, the fact that it is needed on a table-by-table basis makes it much easier to limit any such functionality to the Tools library only.

I can hear you asking why I want support in the Storage Client library if I can gin up and duct-tape together some extension methods. I’m a big proponent of the idea that anything we use to interact with software, be that actual applications or the libraries we use in our code, has a natural user interface. Think of it: the API methods that are exposed for us as developers to use are, in fact, a user interface. So I’m asking for a better user interface that supports this functionality without me having to do the legwork for it. And in delivering that support, it is perfectly possible, indeed likely, that the library code Microsoft ships will be more efficient than whatever code I could write.

My final report on the project did call out the Visual Studio Windows Azure Tools, mainly for not making my life easier. So I’m looking forward to the new version (1.3) and seeing how it goes, particularly with regard to storage.

Now, performance-wise, the version I wrote last time wasn’t exactly fast at retrieving data. I suspect this was due to a) my own code inefficiencies and b) the fact that my data (obviously) wasn’t optimally normalized. It’s also probable that better use of MVC ViewData (actually, I think that should be “better use of MVC, period”) and caching will improve things a lot as well.