Apple’s Sony Reader Problem.

As you will no doubt have heard already, Apple has rejected the Sony Reader app for iOS devices. This is apparently because all purchases must go through the official Apple-sanctioned in-app purchase mechanism, not any third-party method.

This no doubt is causing some executives at Amazon to consider the very real possibility that their highly acclaimed Kindle app for iOS might be pulled as well.

Now let’s stop the hysteria right the heck there. Apple is not stupid. Put your hand up if you bought an iPad thinking “The Sony Reader app is going to be awesome on the iPad”?? Anyone, anyone?? Bueller, Bueller??

Now, who bought an iPad thinking “there are both iBooks and the Kindle app on the iPad – I’ll be able to get any book I want”??? I’m betting that a whole lot more people, including me, thought of that.

The fact of the matter is that Apple is not going to get a whole lot of grief from die-hard Sony Reader fans for this. Just imagine the uproar if Amazon were forced to withdraw their Kindle app for iPad. The horror. I can imagine legions of angry people marching up Infinite Loop with torches and pitchforks.

In other words, the Kindle app is actually a net benefit for the iOS platform. It actually helps Apple sell iPads and iPhones and iPod Touches. I doubt they’d be in a hurry to kill what is a net benefit for them.

So provided that Amazon and, to a lesser extent, Barnes and Noble play their cards right, I don’t see the problem.

There is, of course, the separate issue of Apple wanting all purchases to go through their in-app purchase mechanism. On the face of it, it’s possible to see this as simple profiteering. I doubt it’s quite that simple.

However, what Apple needs to realise is that, almost by accident, they’ve turned the App Store into a vital piece of infrastructure – the Windows of the app store world. I’m not sure they’re quite prepared to undertake this role.

Google, on the other hand, set out to turn themselves into a vital piece of the web’s infrastructure. Whatever they may do wrong, they do have that goal very clearly in mind. Apple, not so much. They come across as being quite heavy-handed when things like this happen. If anyone should understand the power of perception, it should be Steve Jobs and Apple.

So, ultimately, I believe that Apple will do the pragmatic thing and lay off the heavy-handed moves in this space, but in the short term the perception of things may very well work against them.

Windows Azure Feedreader Episode 8

Apologies for the long delay between episodes. I do these in my spare time, and my spare time doesn’t always coincide with a quiet environment to record the screencast.

But suffice to say, I’m super excited to be back and doing these once again.

So, to it!!

This week we follow straight on from what we did in week 7, and modify our backend code to take account of user subscriptions when adding RSS feeds, both individually and from OPML files.
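
To make the subscription logic concrete, here’s a rough sketch of the idea (the type and method names here are hypothetical stand-ins – the real code lives in the episode’s changeset): a feed record is created only once, however many users add it, while each user gets their own subscription entry.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical types; the real names are in the episode's changeset.
public class Feed { public Guid Id = Guid.NewGuid(); public string Url; }
public class Subscription { public string UserId; public Guid FeedId; }

public class SubscriptionService
{
    private readonly List<Feed> feeds = new List<Feed>();
    private readonly List<Subscription> subscriptions = new List<Subscription>();

    public void AddFeed(string userId, string feedUrl)
    {
        // Create the feed record only once, no matter how many users add it...
        Feed feed = feeds.FirstOrDefault(f => f.Url == feedUrl);
        if (feed == null)
        {
            feed = new Feed { Url = feedUrl };
            feeds.Add(feed);
        }

        // ...but always record this particular user's subscription to it.
        if (!subscriptions.Any(s => s.UserId == userId && s.FeedId == feed.Id))
        {
            subscriptions.Add(new Subscription { UserId = userId, FeedId = feed.Id });
        }
    }
}
```

Importing an OPML file then just becomes a loop that calls the same method once per outline entry.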

Additionally, we lay the groundwork for next week’s episode, where we will be writing the code that will update the feeds every couple of hours.

And finally, and perhaps most importantly, this week we start using the Windows Azure SDK 1.3. So if you haven’t downloaded and installed it, now is the time.

There are some slight code modifications to take account of some breaking changes in 1.3; these are detailed in Steve Marx’s post on the subject.
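
To give a flavour of the kind of change involved (this is a sketch of one commonly cited 1.3 gotcha, not a substitute for Steve’s post): under the new full-IIS model, the web role’s RoleEntryPoint runs in a separate process from the web application itself, so storage configuration publishing that used to live in WebRole.OnStart has to happen somewhere the web app can actually see it, such as Global.asax:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // With SDK 1.3's full IIS, statics set in WebRole.OnStart are not
        // visible to the site, so the setting publisher is wired up here.
        CloudStorageAccount.SetConfigurationSettingPublisher((name, setter) =>
            setter(RoleEnvironment.GetConfigurationSettingValue(name)));
    }
}
```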

Finally, I’ve just realised that the episode contains the wrong changeset information. The correct changeset is 84039. (I’ll be correcting this slight oversight. Perfectionism demands it.)

So, enjoy the show:

Remember, you can head over to vimeo.com to see the show in all its HD glory.

As I said, next week we’ll be writing the update code that will automatically update all RSS feeds being tracked by the application.
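
If you want a preview of the shape of that code, the natural home for it is a worker role with a simple polling loop – something along these lines, where FeedUpdater is a hypothetical stand-in for the code we’ll actually write:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// Hypothetical helper: walks every feed the application tracks and
// fetches any new items into storage.
public static class FeedUpdater
{
    public static void UpdateAllFeeds() { /* fetch and store new items */ }
}

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            FeedUpdater.UpdateAllFeeds();

            // "Every couple of hours" - the exact interval is a design choice.
            Thread.Sleep(TimeSpan.FromHours(2));
        }
    }
}
```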

Windows Home Server: Delete All Backups Failure (or How to fix Backup Service Not Running)

I removed and then replaced a hard drive on my WHS earlier today. Since there are only 4 internal SATA ports and all my extra USB enclosures don’t work, I had to disconnect one of the existing drives so I could connect the new one. The reason for this was that, due to space constraints, the new drive had to go in first so that the Drive Removal wizard could migrate data to it.

Anyway, once I had everything sorted, the Backup Service refused to run. Even after I did everything suggested in the help files (clearing file conflicts, etc.), it still wouldn’t run. So, as a last resort, I went to delete all backups. This is what I got:

[Screenshot: the error dialog from “Delete All Backups”]

I can’t find anything on this anywhere. Any ideas?

I suspect this is some sort of deep problem somewhere in the bowels of WHS. If that’s the case, it would mean a re-install of WHS itself. And that’s something I’m very hesitant to do.

UPDATE: Thanks to alexh on the MediaSmart forums:

I’ve fixed it.
I had already followed the error log paper trail using the Start->Control Panel->Administrative Tools->Event Viewer tool which showed problems with a file on D:
Turns out that all the files in the folder : D:/folders/{00008086-058D-4C89-AB57-A7F909A47AB4} had been corrupted. The error "Device not connected" is a crap error message which actually means "invalid file handle error" caused by corrupted files / filesystem.
I removed the files in this folder then followed Method 5 of http://support.microsoft.com/kb/946339 "Backup Database Repair" and it replaced D:/folders/{00008086-058D-4C89-AB57-A7F909A47AB4}/Commit.dat and everything came back to life.

However, don’t do what I did and delete all the files – this will remove all of your backups. Just rename commit.dat and then run database repair.

On the one hand, I’m stoked to have the backup service back; on the other, I’m sad that so many backups are toast. Sigh.

Client Server Chat with WCF

Almost a year after I wrote this post promising to keep WCF Chat updated, I’m living up to that promise and updating my WCF Chat application over on Codeplex. The original release on Codeplex was actually just a zip file with the project in it. All things considered, it was a knee-jerk posting in the midst of the openFF effort to clone FriendFeed. Of course, the original, actual reason why I posted it is lost to history. And in the middle of all that hoohah, I never wrote an introduction to, or an explanation of, the codebase.

An Introduction

The WCF Chat application was actually a class assignment for a class that covered, among other things, WCF, REST and TCP. It’s actually interesting to see how that class has changed since I took it three years ago. This year, for example, it includes MVC. But I digress. The fact is that my submission did go above and beyond the requirements. And the reason for that is that once I wrote the basic logic, the more complicated stuff was easy. In other words: given enough layers of abstraction, anything is easy.

Having dusted off and worked with the code for a few hours, it’s rather amazing how easy a chat application is. Now, that statement should be taken in the context of the fact that WCF is doing most of the heavy lifting. So getting the client to ask for data is a relatively trivial task. The tricky bit is the need for a callback.

In this case, I use callbacks for Direct Messages and file transfers. Now, you are probably wondering why I went to the trouble, given that the sensible option is simply to poll the server. And it is a sensible option – TweetDeck, Seesmic and other Twitter clients all use polling. Basically, it was in the requirements that there should be two-way communication. There are a number of ways to implement this. One could, for example, host a WCF service on the client that the server can call back to. This did occur to me, but it’s a complex and heavy-handed approach, not to mention a resource-intensive one. WCF always gives me headaches, so I was only going to write one service. Instead, I wrote a TCP listener that receives pings from the server on a particular port.

That’s one peculiarity of this app. The other is the way the server is actually written. We have the WCF service implementation, and we have a second server class that the WCF service calls. There is a division of responsibility between the two: the service is always responsible for getting the data ready for transmission, and the server does the actual application logic.
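
In sketch form, the split looks something like this (the names are illustrative rather than the actual ones in the codebase): the service is a thin shim that shapes data for the wire, and everything interesting happens in the server class behind it.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

public class Message { public string Id; public string Text; }

// The server class: owns application state and logic.
public class ChatServer
{
    public static readonly ChatServer Instance = new ChatServer();
    private readonly List<Message> messages = new List<Message>();

    public IEnumerable<Message> MessagesSince(string sinceId)
    {
        // Real logic would filter by id; this stub just returns everything.
        return messages;
    }
}

[ServiceContract]
public interface IChatService
{
    [OperationContract]
    Message[] GetPublicStream(string sinceId);
}

// The WCF service: gets the data ready for transmission, nothing more.
public class ChatService : IChatService
{
    public Message[] GetPublicStream(string sinceId)
    {
        return ChatServer.Instance.MessagesSince(sinceId).ToArray();
    }
}
```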

The client is fairly straightforward. It uses the Krypton library from Component Factory – a great UI library that I use any time I have to write a Windows Forms UI. The actual UI code is rather like the Leaning Tower of Pisa: it’s damn fragile, basically because it relies on the component designer logic for the way controls are layered. So I haven’t touched it at all. In fact, I haven’t needed to. More on this later.

When you are looking at the client code, you’ll notice that for each type of message control there is a message-type base control. The reason for this is that I foolishly (and successfully) tried to use generic controls. In the current implementation, there is actually precious little code in the MessageBase control. The reason for this is mainly historical: there was originally a lot of code in there, mainly event code. In testing, I discovered that those events weren’t actually firing, for reasons beyond understanding, so they were moved from the inherited class to the inheriting classes. MessageBase is the generic control.

There are message-type base controls that inherit from the MessageBase control and pass in the type (Post, DM, File). Each of these is in turn inherited by the actual Message, DM or File control. The reason for this long inheritance tree is that the designer will not display generic controls. Rather, that was the case when I wrote it; I’ve yet to actually try it with the Visual Studio 2010 designer. As I said, I haven’t changed the client UI code or architecture at all.
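
For the curious, the workaround looks roughly like this (with hypothetical names): a non-generic shim class sits between the generic base and the concrete control, because the Windows Forms designer refuses to open a control whose direct base class is generic.

```csharp
using System.Windows.Forms;

public class PostData { /* hypothetical message payload */ }

// The generic base: shared layout and behaviour for all message controls.
public class MessageBase<T> : UserControl
{
    public T Data { get; set; }
}

// The non-generic intermediate that pins down the type parameter.
// Its only job is to keep the designer happy.
public class PostControlBase : MessageBase<PostData> { }

// The concrete control the designer actually opens and edits.
public class PostControl : PostControlBase { }
```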

The client has a background thread that runs a TCP listener to listen for callbacks from the server. It’s on a background thread so that it does not block the UI thread. I used a BackgroundWorker for this, rather than the newer Task functionality built into .Net 4 that we have today.
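
A minimal sketch of that listener (simplified – no shutdown or error handling, and the ping format is made up for illustration):

```csharp
using System;
using System.ComponentModel;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class CallbackListener
{
    private readonly BackgroundWorker worker = new BackgroundWorker();
    private readonly TcpListener listener;

    // Raised when the server pings us, e.g. to say a DM or file is waiting.
    public event Action<string> PingReceived;

    public CallbackListener(int port)
    {
        listener = new TcpListener(IPAddress.Any, port);
        worker.DoWork += Listen;
    }

    public void Start()
    {
        worker.RunWorkerAsync();
    }

    private void Listen(object sender, DoWorkEventArgs e)
    {
        listener.Start();
        while (true)
        {
            // Blocks, but on the worker thread, so the UI stays responsive.
            using (TcpClient client = listener.AcceptTcpClient())
            {
                var buffer = new byte[1024];
                int read = client.GetStream().Read(buffer, 0, buffer.Length);
                var handler = PingReceived;
                if (handler != null)
                {
                    handler(Encoding.UTF8.GetString(buffer, 0, read));
                }
            }
        }
    }
}
```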

Functionality

Basically, everyone sees all the public messages on a given server. There is no mechanism to follow specific people, or even to filter the stream by username. Archaic, I know, but I’m writing my chat application, not Twitter.

There are Direct Messages that can be directed at a specific user. Because the server issues callbacks for DMs, they appear almost instantly in the user’s stream.

You can send files to specific users as well. These files are stored on the server when you send them. The server will issue a callback to the recipient, and the file will be sent to them when their client responds to that callback. You can also forward a file you have received to another user. Files are private only by implication: they can be accessed by whoever knows of the file’s existence.

All of the above messages are persisted on the server. However, the act of forwarding is not persisted in any shape or form.

Also, you can set a status. This status is visible only when you are logged in. In fact, your username is only visible to others when you are logged in.

It should be noted that you have to use the Server Options panel to add and remove users.

Today’s Release

Today’s changes basically upgrade everything to .Net 4 and make sure it’s all compatible. Today’s release does not take advantage of anything new, other than some additional LINQ and extension methods. Taking advantage of the new stuff will require a careful think about how and where to use it. I’m not quite willing to sacrifice a working build for new code patterns that do the exact same thing.

The original server was actually just a console application. I took that original code and ported it to a Windows Service; there were trivial logic changes at most. The UI (i.e. the Options form) that was part of that console application has been moved into its own project.

I also ported the server code to a Windows Azure web role. And let me tell you something – it was even easier than I had anticipated. The XML file and the collections I stored the streams in are replaced with Windows Azure Tables for Users, Streams, DMs and Files. The files themselves are written to Windows Azure Blobs rather than being written out to disk.
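
As a sketch of what that port looks like with the 1.x StorageClient library (the entity shape and names are illustrative, not the actual schema):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative entity for the Users table.
public class UserEntity : TableServiceEntity
{
    public UserEntity() { }
    public UserEntity(string userName) : base("Users", userName) { }
    public string Status { get; set; }
}

public class ChatStorage
{
    private readonly CloudStorageAccount account =
        CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

    public void AddUser(UserEntity user)
    {
        var tables = account.CreateCloudTableClient();
        tables.CreateTableIfNotExist("Users");

        var context = tables.GetDataServiceContext();
        context.AddObject("Users", user);
        context.SaveChanges();
    }

    // Files go to blob storage instead of the local disk.
    public void SaveFile(string name, byte[] contents)
    {
        var blobs = account.CreateCloudBlobClient();
        var container = blobs.GetContainerReference("files");
        container.CreateIfNotExist();
        container.GetBlobReference(name).UploadByteArray(contents);
    }
}
```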

The web role as written is actually single-instance. The reason is that the collection that stores the active users (i.e. which users are active right now) is still an in-memory collection; I haven’t moved it over to Windows Azure Tables yet. You could fire up more than one instance of this role, but each of them would have a different list of active users. And because Windows Azure helpfully provides you with a load balancer, there’s no guarantee which instance is going to respond to the client. There is a reason why I haven’t moved that collection over to Windows Azure Tables: basically, I’m not happy with the idea. If Azure had some sort of caching tier – using Velocity or something – I could put the collection of objects in the cache and have all instances share it. As it stands, the table would be changing from minute to minute with additions, edits and deletions, and I don’t think Windows Azure Tables would keep up. I’m interested to know what you think of this.

I also added an Options application to talk to the Windows Azure web role, and I wrote a WCF web service in the web role to support this application.

The client is essentially the same as it has always been. There is the addition of a domain when you are logging in – this can point at either the cloud or the service-based server implementation. Since there is no default domain, the client will ask for one when you log in. Once you have provided one, you’ll have to restart the application.

There are installers for all the applications except for the Cloud project. The service installer will install both the service and the Options application.

Bear in mind that for the Options applications, there is no authentication or authorisation. If you run the app on a server with the ChatServer installed, or you point the CloudOptions app at the appropriate server, you are in control. This is a concern to me and will be fixed in a future release.

Future changes

I was tempted to write an HTTP POST server for all this – MVC makes it so easy to implement. There would be plenty of XML flying back and forth, and some of the WCF operations would require some high-wire gymnastics to implement as HTTP POST, but it’s possible. I might do this.

The one thing that I didn’t update was the version of the Krypton UI library I use. I’d very much like to use the latest version to write a new UI from scratch. Again, it’s a possibility.

The fact is that once you start thinking of implementing following à la Twitter, your database schema suddenly looks much more complicated. And since I’m not writing Twitter, I’ll pass.

If you have any suggestions on future changes, holler.

Final Words

Also, bear in mind that this is marked Alpha for a reason. If it eats your homework and scares your dog, it’s not my fault – I’m just some guy who writes code.

Finally, this code all works IN THEORY. I’ll be testing all these pieces thoroughly in the coming weeks.

Where you can get it

You can get it off Codeplex. The source code for all the above mentioned components is checked into SVN source control.

For this 1.1 Alpha release, you’ll find the setup files for each component in a separate downloadable zip file. The CloudServer code is included as-is, since no setup files are possible for Cloud projects.

VLC, GPL and the Apple App Store

Update: I wrote this post using the WordPress iPhone app, so now that I’m home I’ve corrected some formatting.

Today I read (see here) that the successful VLC iPhone app might be pulled from the App Store.

The reasoning behind this, apparently, is that the App Store Terms of Service breach the GPL, in that all apps are sold with DRM.

First of all, this is lunacy. After three years of trying to get it into the store, pulling it would cause an uproar. And after Apple’s successful weathering of the no-Flash controversy, that uproar is not going to get Apple to remove DRM.

Second, VLC is open source. So, open-source the app, or release a DRM-free version on the jailbreak app stores. Problem solved.

So my advice to the VLC team is to grin and bear it. Nobody said the world was perfect.

Sticking to the letter of the GPL may be wonderful for the open source diehards, but the rest of us seriously couldn’t care less.

Core Competencies and Cloud Computing

Wikipedia defines Core Competency as:

Core competencies are particular strengths relative to other organizations in the industry which provide the fundamental basis for the provision of added value. Core competencies are the collective learning in organizations, and involve how to coordinate diverse production skills and integrate multiple streams of technologies. It is communication, an involvement and a deep commitment to working across organizational boundaries.

 

So, what does this have to do with Cloud Computing?

I got thinking about the different providers of cloud computing environments. If you abstract away the specific feature set of each provider, what differences remain to set these providers apart from each other?

Now, I actually started thinking about this backwards. I asked myself why Microsoft’s Windows Azure couldn’t do a Google App Engine and offer free applications. I had to stop myself there and go off to Wikipedia to remind myself of the quotas that go along with an App Engine free application:

 

Hard limits

  Apps per developer: 10
  Time per request: 30 sec
  Blobstore size (total file size per app): 2 GB
  HTTP response size: 10 MB
  Datastore item size: 1 MB
  Application code size: 150 MB

Free quotas

  Emails per day: 2,000
  Bandwidth in per day: 1,000 MB
  Bandwidth out per day: 1,000 MB
  CPU time per day: 6.5 hours
  HTTP requests per day: 1,300,000*
  Datastore API calls per day: 10,000,000*
  Data stored: 1 GB
  URLFetch API calls per day: 657,084*

Now, the reason why I even asked this question was the fact that I got whacked with quite a bit of a bill for the original Windows Azure Feed Reader I wrote earlier this year. That was for my honours-year university project, so I couldn’t really complain. But looking at those quotas from Google, I could have done that project many times over for free.

This got me thinking. Why does Google offer that and not Microsoft? Both of these companies are industry giants, and both have boatloads of CPU cycles.

Now, Google, besides doing its best not to be evil, benefits when you use the web more. And how do they encourage that? They go off and create Google App Engine. Then they allow the average dev to write the app they want to write and run it. For free. Seriously, how many websites run on App Engine’s free offering?

Second, Google is a Python shop. Every time someone writes a new library or comes up with a novel approach to something, Google benefits. As Python use increases, some of that code is going to be contributed right back into the Python open source project. Google benefits again. Python development is a Google core competency.

Finally, Google is much maligned for its approach to software development: throw stuff against the wall and see what sticks. By giving the widest possible number of devs space to go crazy, more apps are going to take off.

So, those are all Google’s core competencies:

  1. Encouraging web use
  2. Python
  3. See what sticks

And those are perfectly reflected in App Engine.

Let’s contrast this with Microsoft.

Microsoft caters to those writing line-of-business applications. They don’t mess around. Their core competency, in other words, is other companies’ IT departments. Even when one looks outside the developer side of things, one sees that Microsoft Office and Windows are offered primarily to the enterprise customer. The consumer versions of said products aren’t worth the bits and bytes they take up on disk. Hence, Windows Azure is aimed squarely at companies who can pay for it, rather than at enthusiasts.

Secondly, Windows Azure uses the .Net Framework, another uniquely Microsoft core competency. With it, it leverages the C# language. Now, it is true that .Net is not limited to Windows, nor is Windows Azure a C#-only affair. However, anything that runs on Windows Azure leverages the CLR and the DLR – two pieces of technology that make .Net tick.

Finally, and somewhat relatedly, Microsoft has a huge install base of dedicated Visual Studio users. Microsoft has leveraged this by creating a comprehensive suite of Windows Azure tools.

Hopefully you can see where I’m going with this. Giving stuff away for free for enthusiasts to use is not a Microsoft core competency. Even with Visual Studio Express, there are limits – limits clearly defined by what enterprises would need. You need to pay through the nose for those.

So Microsoft’s core competencies are:

  1. Line of Business devs
  2. .Net, C# and the CLR/DLR
  3. Visual Studio

Now, back to what started this thought exercise – Google App Engine’s free offering. As you can see, it’s a uniquely Google core competency, not a Microsoft one.

Now, what core competencies does Amazon display in Amazon Web Services?

Quite simply, Amazon doesn’t care who you are or what you want to do; they will provide you with a solid service at a very affordable price and sell you all the extra services you can handle. Amazon does the same thing with everything else, so why not cloud computing? Actually, AWS is brilliantly cheap. Really. This is Amazon’s one great core competency, and they excel at it.

So, back to what started this thought exercise – a free option. Because of its core competencies, Google is uniquely positioned to offer one. And by thinking about it this way, Microsoft and Amazon’s lack of a similar offering becomes obvious.

Also, remember the cost of Windows Azure that I mentioned earlier. Google App Engine and its free option mean that university lecturers are choosing to teach their classes using Python and App Engine rather than C# and Windows Azure.

Remember what a core competency is. Wikipedia defines Core Competency as:

Core competencies are particular strengths relative to other organizations in the industry which provide the fundamental basis for the provision of added value. Core competencies are the collective learning in organizations, and involve how to coordinate diverse production skills and integrate multiple streams of technologies. It is communication, an involvement and a deep commitment to working across organizational boundaries.

I guess the question is: which offering makes the most of its parent company’s core competencies? And is this a good thing?

Windows Azure Feedreader Episode 5: User Authentication

Firstly, apologies for being late with this episode. Real life presented some challenges over the weekend.

This week’s episode focuses on the choice of user authentication systems. As I mentioned last week, there is a choice to be made between Google Federated Login and Windows Live ID.

So, this week, I implement both systems.

It should be noted that I only do the basics for Google Federated Login – that is, only the OpenID part of the process. We’ll leave OAuth till later.

If you read my earlier post, I was still deliberating over which to use. Having actually worked with the DotNetOpenAuth library in an application-centric manner, it does seem to be more appealing. Because it integrates nicely with Forms authentication, it lends itself to MVC. Also, because of this, having dual login systems isn’t going to be possible. So we have to choose one of them.
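
For a sense of why the Forms authentication integration is appealing, here’s roughly what the login action looks like with DotNetOpenAuth in MVC (a sketch following the library’s standard relying-party pattern, not the exact code from the screencast):

```csharp
using System.Web.Mvc;
using System.Web.Security;
using DotNetOpenAuth.Messaging;
using DotNetOpenAuth.OpenId;
using DotNetOpenAuth.OpenId.RelyingParty;

public class AccountController : Controller
{
    private static readonly OpenIdRelyingParty OpenId = new OpenIdRelyingParty();

    public ActionResult LogOn()
    {
        IAuthenticationResponse response = OpenId.GetResponse();
        if (response == null)
        {
            // First pass: redirect the user to Google's OpenID endpoint.
            return OpenId.CreateRequest("https://www.google.com/accounts/o8/id")
                         .RedirectingResponse.AsActionResult();
        }

        if (response.Status == AuthenticationStatus.Authenticated)
        {
            // The Google-supplied claimed identifier becomes our user key,
            // and ordinary Forms authentication takes over from here.
            FormsAuthentication.SetAuthCookie(response.ClaimedIdentifier, false);
            return RedirectToAction("Index", "Home");
        }

        return View("LogOnFailed");
    }
}
```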

So next week, we’ll be removing the code for the loser. As I said above, I’m leaning towards Google Federated Login.

So this week we cover the implementation of both login systems.

Here’s the show:

And remember, you’ll have to go to vimeo.com to see the full HD version.

I had planned to test the OPML code that we wrote last week instead, but we’ll do that in next week’s episode. As a bonus, we can then do it properly, with full integration with user information.

About next week’s episode: I’m busy all weekend, so the next episode may or may not appear on time next Tuesday.

Why Apple will not let Flash on iOS

I just left the following comment on Dave Winer’s blog. He was, once again, having a go at Apple over Flash. And this particular post was a response to Gruber’s response to his original post. I’ve lost you already, haven’t I?

Anyway, this is what I said:

I don’t have an iPad, so I don’t feel the lack of Flash as much.

In saying that, what Apple is counting on is that millions of Apple customers will convince web designers to dump Flash.

Adobe tried to get Flash running on iOS but Apple stopped them.

What we’re looking for here is some sort of compromise. Would Apple allow Adobe to deploy a completely custom Flash build on iOS, one that removes some UI headaches (such as the mouseovers that Steve always talks about)? Would web devs actually use such a thing (remembering that the whole premise of Flash is write once, run everywhere)?

What if the whole reason that Apple is doing this is to give HTML5 a running start?

So, if we are going to ask whether Apple is winning or losing, we need to define exactly what “winning” and “losing” actually are. Does Apple win when HTML5 becomes dominant? Does Apple win when Adobe shutters Flash? Does Apple win when iOS-only, Flash-less sites spring up everywhere?

Of course, for Adobe, they win when Apple lets Flash in any form onto the platform. Adobe even wins when Apple lets Adobe’s translation tool run.

What we can say for certain is that, thus far, the lack of Flash has not hurt Apple very much.

Later, it occurred to me that there could be another reason for Apple to leave Flash out of iOS.

Consider: most of the world’s web advertising is Flash-based. And without Flash, there is no way for people to view those adverts.

So, what does Apple come out with but their own advertising platform?

So, Apple just locked most of their competition out of the advertising space, giving their own platform a running start. This means that all those advertisers have to come to Apple (or AdMob, but that’s a footnote) to get their adverts some views.

Apple giveth and Apple taketh away (reverse that).

Also, when one thinks of Hulu and other sites that primarily use Flash as a delivery mechanism for content, not having that option means that delivery of said content to iOS users has to go through either the iTunes Store, or H.264 and HTML5.

So, keeping Flash off the iOS platform is central to Apple’s business interests. And, as I said in my comment above, Apple has yet to see significant backlash. Unless you are a geek or a web dev, you don’t say “I ain’t buying Apple till they support Flash”.

In fact, until this back and forth erupted between Winer and Gruber, I completely forgot there wasn’t Flash on iOS. Why was that? Because web designers and developers have been making their sites iOS-friendly for years.

Even if you take the view that Apple isn’t winning, it certainly isn’t losing either.

Windows Azure Feedreader: Choosing a Login System – Which would you choose?

Update: Here’s the related screencast episode.

As you may have noticed in the last episode (episode 4), writing the Feed Reader has got to the stage where we require user IDs.

Given the delicate nature of login credentials and the security precautions required, it’s much easier to hand off the details to Google Federated Login, or even Windows Live ID. These services simply give us a return token indicating who has logged in.

The previous version of the feed reader used Windows Live ID. It’s a very simple implementation, consisting of a single MVC controller and a small iframe containing the login button. It’s elegantly simple. Since it’s MVC, there are no issues running it on Windows Azure. The reasons I picked it last time were a) its simplicity and b) it’s part of the Windows Azure ecosystem.
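
For reference, this is roughly what that single controller amounts to. I’m assuming the WindowsLiveLogin helper class that ships with the Live ID Web Authentication SDK samples here; treat the details as a sketch rather than gospel:

```csharp
using System.Web.Mvc;
using System.Web.Security;
// WindowsLiveLogin is the sample helper class from the Windows Live ID
// Web Authentication SDK; its settings are read from web.config.

public class LiveIdController : Controller
{
    private static readonly WindowsLiveLogin Wll = new WindowsLiveLogin(true);

    // Live ID posts the authentication token back to this action.
    [HttpPost]
    public ActionResult WebAuth(string action)
    {
        if (action == "login")
        {
            WindowsLiveLogin.User user = Wll.ProcessLogin(Request.Form);
            if (user != null)
            {
                // The opaque Live ID identifier becomes our user key.
                FormsAuthentication.SetAuthCookie(user.Id, false);
            }
        }
        return RedirectToAction("Index", "Home");
    }
}
```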

The alternative is to use Google Federated Login. This is a combination of OpenID and OAuth. The implementation is certainly much more involved, with a lot of back and forth with Google’s Servers.

[Diagram: the Google Federated Login OpenID/OAuth flow, summarised below]

  1. The web application asks the end user to log in by offering a set of log-in options, including using their Google account.
  2. The user selects the “Sign in with Google” option. See Designing a Login User Interface for more options.
  3. The web application sends a “discovery” request to Google to get information on the Google login authentication endpoint.
  4. Google returns an XRDS document, which contains the endpoint address.
  5. The web application sends a login authentication request to the Google endpoint address.
  6. This action redirects the user to a Google Federated Login page, either in the same browser window or in a popup window, and the user is asked to sign in.
  7. Once logged in, Google displays a confirmation page (redirect version / popup version) and notifies the user that a third-party application is requesting authentication. The page asks the user to confirm or reject linking their Google account login with the web application login. If the web application is using OpenID+OAuth, the user is then asked to approve access to a specified set of Google services. Both the login and user information sharing must be approved by the user for authentication to continue. The user does not have the option of approving one but not the other. Note: If the user is already logged into their Google account, or has previously approved automatic login for this web application, the login step or the approval step (or both) may be skipped.
  8. If the user approves the authentication, Google returns the user to the URL specified in the openid.return_to parameter of the original request. A Google-supplied identifier, which has no relationship to the user’s actual Google account name or password, is appended as the query parameter openid.claimed_id. If the request also included attribute exchange, additional user information may be appended. For OpenID+OAuth, an authorized OAuth request token is also returned.
  9. The web application uses the Google-supplied identifier to recognize the user and allow access to application features and data. For OpenID+OAuth, the web application uses the request token to continue the OAuth sequence and gain access to the user’s Google services. Note: OpenID authentication for Google Apps (hosted) accounts requires an additional discovery step. See OpenID API for Google Apps accounts.

 

As you can see, an involved process.

There is a C# library available called DotNetOpenAuth, and I’ll be investigating its integration into MVC and its use in the Feed Reader.

There is one advantage to using Google Accounts, and that’s the fact that the Google Base Data API lets us import Google Reader subscriptions.

It may well be possible to allow the use of dual login systems. Certainly, sites like stackoverflow.com use this to great effect.

Why is choosing an external login system important?

Well, firstly, it’s one less username and password combination that has to be remembered.

Secondly, security considerations are the onus of the authentication provider.

If we were to go with multiple authentication providers, I’d add a third reason: Not having an account with the chosen authentication provider is a source of frustration for users.

So, the question is, dear readers, which option would you choose?

  1. Google Federated login
  2. Windows Live ID
  3. Both