What I Learned from buying an NFL Gamepass

I can’t recall a week where some content provider didn’t raise my hackles by geo-blocking content.

Every. Single. Week.

This week, it was the venerable BBC. Even though I pay the licence fee that funds the BBC (ergo, I’ve already paid for BBC content), I still can’t watch BBC iPlayer in Spain, where I’m currently on holiday. I can’t even stream the international version of BBC News 24.

Then the rest of the time, various US-based content providers block things such as John Oliver’s Last Week Tonight, The Daily Show, etc.

(others, like The Tonight Show with Jimmy Fallon, post short clips and excerpts – which leave you really wanting to watch the full show)

Now. I’m an NFL fan in general (and a Seahawks fan in particular). The BBC and Channel 4 don’t show the games anymore. And Sky (the satellite broadcaster) is inconvenient to stream to any device other than your set-top box.

(to be fair to Sky, I can stream their international news channel)

Does the NFL, with content worth billions, care where I am? No. I can stream every game live here in Spain with my Apple TV and the NFL Game Pass that I paid for in blood.

For that matter, does the Apple TV care where I am? No, I can stream all my iTunes content.

So. Why can’t all content be like this? I suspect it’s because content licensing is still in the dark ages of the 20th century – deals were inked before the full forces of the internet were unleashed, or perhaps inked after the fact, but without fully understanding the scope of the changes being wrought.

Is advertising part of this puzzle? There’s no reason for ad-supported content not to continue to be ad-supported no matter where it’s being shown – just show the relevant advertising for that geolocation. This is, after all, how it happens now in traditional media – when I’m watching The Big Bang Theory on my local UK TV station, I get local ads.

This might be a controversial statement to make, but I don’t mind ads, because I know that the ads are helping pay for the content. (What I do mind is Channel 4 showing me 4 minutes of pre-roll ads and 4 minutes of ads at the half.)

The point is that content is still not taking full advantage of the digital age – you can’t take full advantage of the digital age and the digital marketplace while trying to impose traditional restrictions and mores on that marketplace.

I’m not calling for content to be free (or for us to be fleeced for that content), but let us stream content here, there and everywhere. If the NFL can do it, anyone can.

The revolution will not only be televised, tweeted, blogged, and instagrammed, but also streamed.

I Like My Walled Garden

So Scoble, ye olde bastion of bleeding-edge technology enthusiasts, is switching to Android.

Um, who cares?

Unfortunately, rather a lot of people do. Why? Because of that bleeding-edge thing again… Matthew Ingram and Guy Kawasaki are already other high-profile switchers. Leo Laporte, in fact, uses both iOS and Android.

But I Like My Walled Garden. It works for me. The corollary to that is, of course, that what works for me will not necessarily work for others.

If it works for Matthew, Guy and Robert, great! But what works for them will not necessarily work for me. Or you, or your dog. So let’s just calm ourselves down a bit.

(though I doubt we’ll get the tech press/blogs to stop salivating over this story)

I’m not writing this to address complaints… People with far more time than me can do that. But there is something I want to say.

It struck me in writing this post that 90% of the time, we really do live in a world of software walled gardens. Microsoft for the OS and Office (and in my case, dev tools), Adobe for Creative Suite, and Apple for iTunes, iPhone, iPad etc.

Each of these walled gardens Just Works (Windows 8 is making this reality in the MS world). And I like that.

I’d like to argue the following proposition: Being inside a walled garden is preferable to being outside it.

Who wants to argue the other side of the proposition? Any takers?

Adobe Creative Cloud: Adobe Application Manager



Being a card-carrying member of the To-The-Cloud camp, I pay attention to anything that uses the cloud.

Creative Cloud is pure dead brilliant. Not to mention affordable. It’s definitely the way forward for Adobe.

However, my gripe is with the Adobe Application Manager. As lofty as the name sounds, mission control à la Adobe it’s not. There is definitely room for more functionality.

Now, being on the far end of a bad broadband line, I rely on Download Managers more often than not. Being able to reliably pause and restart downloads is key when you’ve got to ration bandwidth.

Firstly: do you see a pause button in that screenshot? Nope. There is only a cancel button. When you’re 60% of the way through a multi-gigabyte download, that’s the last thing you want to press. So a way to pause and restart downloads would be nice.

Secondly, having two computers means I want CS6 installed on both of them. The Application Manager, as far as I can tell, does not cache the installer files at all. And I’ve really gone looking for them. So it requires a separate download on the laptop. This does not please me. It’s a hassle. It’s very user-unfriendly. A fix would be nice – or at least THE OPTION of keeping the files.

If there is already a cache, a link to it would be helpful.

These are basic features that are lacking. And it’s disappointing that they’re missing.

But looking at the application as a whole, it is spartan – there is a certain lack of features. Yes, I’m sure we’re supposed to use the website for all the other management tasks.

But take, for example, installing language packs. Now this is not a problem for me. English is fine. Or even Pirate. But it’s a bit of a convoluted process switching language and getting it to download the correct language packs.

What about an auto-uploader to Creative Cloud storage? A Dropbox for designers, anyone?

So a little love and attention would be nice to complete the experience.

I suppose the point of my little rant here is that as great as CS6 and Creative Cloud are, the Application Manager is somewhat lacking in comparison.

Google Cookie Monster, Part Deux – Where Is The Debate??

I’m rather disappointed by this post on GigaOm, responding to Microsoft’s Dean Hachamovitch’s post doing a little Google privacy exposé of his own.

The GigaOm post, it seems, blames Microsoft for pulling Google up on its failure to adhere to an old standard, in this case the P3P standard. This standard is meant to allow a site to notify a browser, and hence the user, of what it will do with the information it collects from you. In other words, the standard is meant to solve the very problem that started Google Cookiegate.
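For the curious, P3P works through an HTTP response header carrying a “compact policy” – a string of short tokens summarising the site’s data practices. A hypothetical example (the token set here is illustrative only, not taken from either post or from any real site):

```http
HTTP/1.1 200 OK
Content-Type: text/html
P3P: CP="NOI DSP COR NID OUR BUS"
```

A browser that implements the standard (Internet Explorer, in this case) can read that CP string and restrict cookies from sites whose declared practices conflict with the user’s privacy settings.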

Instead of blaming Microsoft for jumping into the fray and touting its own browser, we should be pointing out not only that Internet Explorer is the only browser to implement this privacy-related standard, but also that Google has highlighted the need for it to be improved.

Why is it that no other browsers implement this standard? Why is there no movement for it to be improved? All those bloggers crying bloody murder over this Google Cookiegate should be urging the updating and wider adoption of this standard.

Far from being a damp squib, Hachamovitch’s post is an important contribution to the debate. It may be self-serving, but who else would have known of the existence of this standard if it wasn’t for that post?

Come on GigaOm – I expect better than blindly bashing any contribution Microsoft makes.

The Google Cookie Monster

So everyone is up in arms because Google is using some nefarious tricks to bypass browser cookie privacy provisions.


Ok, so Google isn’t living up to its “Don’t Be Evil” motto. Big deal.


However, let’s be clear here… when we use Google and all those free Google services, do you really think Google is providing all this out of the goodness of its heart?? Seriously??


Of course not. As the Facebook IPO demonstrates so well, our personal information is worth money to advertisers – and worth a lot of it too.


So, taken to its logical conclusion: when we use Google and all those free Google services, and Google puts tracking cookies on our machines, we effectively trade this personal information for the use of Google services. Good trade, right??


Now, what do you expect Google to do in this marketplace? Facebook and its personalised advertising are beating down Google’s door. Why do you think Google is fighting back with Google+ and including Google+ in its search results? (Remember the kerfuffle that caused? Well, for a short period of time, anyway.) Do you really expect them to sit on their hands when there’s a way of collecting even more data??


It takes a bigger excuse than putting a few noses out of joint to pass up making money.


There’s a legitimate argument to be made that Google should be honouring web standards like the P3P standard that’s at the centre of this latest kerfuffle. The web without standards is a bad place to be. But I’d argue that this is a natural occurrence in the evolution of standards. We’ll find a middle ground between the privacy needs of users and the need for advertisers to make money.


So, grow up techy people – the world outside Silicon Valley DOES NOT CARE!

PS – The US store chain Target uses almost exactly the same techniques, coupled with some nifty statistics, to tell when you’re pregnant.


PPS – Go watch This Week in Tech; Leo Laporte and co. have some excellent opinions on this.

Why Silverlight Should Stay

Mary Jo Foley just published a post discussing the future of Silverlight.

I’m not a Silverlight Developer by any stretch of the imagination. I never played with it. Never touched it at all.

Then, for the Flying Shakes website,  I had to have the control below in a web form. Naturally I turned to Silverlight.



With no knowledge or experience of Silverlight it took me 90 minutes from idea to working control.

And yes, I realise that I could write an HTML5 version of that now. But it would probably take much, much longer (don’t nobody suggest Flash).

Silverlight is good, not just for the rich client experiences it allows us to build, but also because it’s part and parcel of the tools we Visual Studio devs work with every day.

The flip side to this, of course, is the user perspective.

Here in the UK we have Sky satellite television. The reason why I like them so much is that they are fairly technology friendly. Besides streaming on the go (iPad, iPhone, etc), you can log on to their Sky Go website to stream on-demand or download and watch on your desktop offline.

This experience is delivered by, wait for it, Silverlight. The impressive part of this whole thing was the Sky Go desktop client. It’s an offline Silverlight application, popping straight out of the browser and installing silently. I was downloading from my queue 10 seconds after hitting the download button.

Satisfied does not even begin to describe it.

HTML5 may be the bee’s knees, but there is still a business case for keeping Silverlight around.

I’ll consider HTML5 a contender when we have the same level of support and tooling for it as we have now for Silverlight.

One of the things that makes Google+ so great

I’ve been thinking about this for a few weeks, and never really got a handle on how to articulate it.

Till today.


I was commenting on Scoble’s post. He says in the post that importing tweets into Google+ is a very bad idea.

Now, apart from being very pleased that he and I see eye to eye on this, I commented:

Agree with you +Robert Scoble No tweets in here.
Keeping Google + free of imported stuff has encouraged/forced people to post original material rather than reuse from twitter, facebook, friendfeed, flickr, etc.
It’s one of those things that makes Google+ interesting and different from all the other social networks around. It’s made Google+ a destination in and of itself, rather than simply a portal or an aggregator (like Friendfeed is)


This is part of, shall we say, a philosophy around which Google+ is built. A philosophy which, dare I say it, is socially engineering us.


For example, the fact that comments and posts are not limited to 140 characters and allow rich formatting actually encourages people to comment. And not just comment – to comment substantially.  That is why Google+ has such a great reputation (already) for interaction.


Google has taken a very different approach to the other social networks. It is attempting to fulfil a very different version of what a social network should be.

And so far, it’s succeeding.

Getting your News Manually Blows: A Reply

Last night Holden Page wrote a pretty good blog post entitled: Getting Your News Manually Blows (http://pagesaresocial.com/2011/04/05/getting-your-news-manually-blows/).

It was late, so I thought I’d reply now when I’m fully awake :).

Holden’s point was that when you use a service like my6thsense that auto-curates the news and presents this to you, it’s much easier than using Twitter (and by extension Google Reader etc) to get your news. Essentially, it’s easier to have the news pushed to you rather than to have to go off and pull it from various services.

Essentially, Holden is arguing for the curation model rather than the consumption model. This is in fact something that I’ve noticed myself. There is a consistent lack of content, even using Feedly and Twitter and Friendfeed to get my news.

Now, I got an iPad about 2 months ago. And promptly installed Flipboard. I mention this because the experience is as important as, if not more so than, the content in that experience. This rapidly warmed me to the iPad as a digital newspaper, complete with page turns and layout. It’s blatantly obvious, but the iPad’s ability to BECOME the app that’s running is a singular experience. As a dead-tree newspaper reader, I found that the fuss of a broadsheet just melts away on the iPad. Turning pages on a broadsheet can be a torturous experience; Flipboard showed me how that just melts away.

Thus I went off to seek actual newspaper apps to use.

Now, before you groan and call newspapers the dead-tree media of the past, sit down and think about it. Newspapers were the my6thsense of the dead-tree media (albeit with political persuasions, rivalries and narcissistic owners taking the place of algorithms). Decades’ worth of experience curating the news still produces some damn fine newspapers.

That’s a fact. Take the Times of London. I’ve been a Times Reader for years, even though it is a Tory paper. The iPad apps for the Times and the Sunday Times are incredibly good. They preserve the newspaper’s look and feel whilst incorporating video and some innovative layouts. I like it so much I signed up for the monthly subscription.

Here’s the scary bit: I open up The Times app and read it before I open up Feedly. No kidding. It’s curated news that covers a wide variety of topics succinctly. It’s quick and easy to browse through.

“Hang on a minute” I hear you say, “there’s no sharing or linking or liking or anything! It’s a Walled Garden”. And that’s the truly scary part: I don’t care.

Now I’m not at all suggesting that we abandon our feed readers and start reading digital newspapers. Quite the opposite. I’m saying that if we’re looking for sources of curated news, digital newspapers had better be one of those sources. Indeed, Feedly still gets used every day. It’s invaluable to me. (Today, for example, the Falcon 9 Heavy story is nowhere to be found in The Times.)

The other good newspaper app I found was the USA Today app. It’s nice and clean, with a simple interface that focuses on content. As a bonus, you can share articles to your heart’s content. Plus, it’s US-centric for the Americans among us 😉

I mentioned above that looking through my feeds for good content, or through Twitter and Friendfeed, was/is becoming a bit of a chore. I’m finding more and more time where I’m looking at absolutely nothing. I probably need to subscribe to better feeds or better people. I probably need to reorganise my feeds to better lay out content, and cull the boring ones. But here’s the thing: I don’t have the time to do all that.

As Apple would say, There’s an App For That!

WordPress Feature Request: A Universal Video Player

As any of you who have been following my screencasts know, I’ve been using Vimeo to host and share my videos.

However, Vimeo as it is is slightly restrictive: 500MB of uploads a week, and one HD video a week. For screencasts, HD is essential. So I’m limited to one video a week, if that.

What I’d love to do is upload my videos to Windows Azure blobs. Blob storage is completely format agnostic, so I could upload any format I wanted: it doesn’t have to be Silverlight Adaptive Streaming (or whatever the term is). The CDN capability of blobs is also very helpful.

Now, nothing is stopping me from moving away from WordPress and customising whatever blogging system I’d run to load videos from Windows Azure blobs. However, that’s a little more trouble than it’s worth right now.

(I’m actually thinking of eventually moving to Windows Azure utilising the extra small instance, but that’s a ways off)

So what I’d love is a video player that you can just point at a URL and have it play the video. It doesn’t have to be Silverlight- or Windows Media-specific. It could be H.264 files (an iffy proposition with Google yanking H.264 support from Chromium, to be sure).

So, in other words, WordPress can provide the player in whatever form it wants. I just provide the player with the URL of the file to be played.

I would think that the HTML5 spec would make this a fairly trivial undertaking.

In fact, now that I think of it, couldn’t one simply insert a tag in the HTML view of posts?
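To make the idea concrete: a minimal sketch of what such a tag could look like in the HTML view of a post, assuming the video is an H.264 MP4 sitting in a blob container (the URL below is a made-up placeholder, not a real account):

```html
<!-- Hypothetical example: point the browser's built-in player at a blob URL -->
<video controls width="640" height="360"
       src="https://example.blob.core.windows.net/videos/screencast.mp4">
  Your browser does not support the HTML5 video element.
</video>
```

The `controls` attribute gives you play/pause/seek for free, and the fallback text inside the tag is shown by browsers that don’t support HTML5 video.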

Thoughts on Windows Azure Storage

I’m sitting here planning what to do with the re-write of the feed reader I wrote for university. Naturally, the storage solution is always going to be a contentious issue. The NoSQL versus SQL debate plays a lot into it.

But, for the most part, I used Windows Azure tables and blobs for their simplicity over SQL Azure. In saying that, SQL is not my favourite thing in the world, so make of that what you will. But also, for a demonstrator application, the use of something completely new played into my hands very well.

The re-write is also meant to be a demonstrator application, so Windows Azure storage is staying.

But, not so fast, because Windows Azure Storage needs something. The way I used tables and blobs was essentially as a poor man’s object database. This meant that there was a lot of legwork involved, not to mention tedious plumbing code. The fact is, I think this is the most logical use case for Windows Azure Storage – wherein the metadata is stored in the tables and the objects themselves are in the blobs.

What I would like to be added, then, is the ability to formalize this relationship in code. Some way of saying “hey, this value in this TableRow actually points to a blob in this container”. So I can call “getBlob()”, or something, on a row and get the blob back. Now, to be clear, I don’t mean this to be a foreign key relationship. I don’t want to cascade updates and deletes. And I certainly don’t want my hands tied by making the blob attribute column (or whatever) mandatory.
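To sketch what I mean, here is a toy, in-memory version of the pattern. To be clear: none of this is the real Windows Azure Storage Client API – the names (`BlobStore`, `TableRow`, `get_blob`) are invented purely to illustrate the metadata-points-to-blob relationship, with the blob pointer optional rather than mandatory.

```python
# Toy sketch of the "metadata in tables, objects in blobs" pattern.
# All names here are hypothetical illustrations, not a real SDK.

class BlobStore:
    """Stands in for a blob container: maps blob names to raw bytes."""
    def __init__(self):
        self._blobs = {}

    def put(self, name, data):
        self._blobs[name] = data

    def get(self, name):
        return self._blobs[name]


class TableRow:
    """Stands in for a table entity whose properties may point at a blob."""
    def __init__(self, properties, blob_store, blob_property=None):
        self.properties = properties
        self._store = blob_store
        # blob_property is optional: rows without one are perfectly legal.
        self._blob_property = blob_property

    def get_blob(self):
        """Dereference the row's blob pointer, if it has one."""
        if self._blob_property is None:
            return None
        return self._store.get(self.properties[self._blob_property])


# Usage: a feed item's metadata lives in the "table", its body in a "blob".
store = BlobStore()
store.put("item-42-body", b"<html>full article text</html>")

row = TableRow(
    properties={"PartitionKey": "feed1", "RowKey": "42",
                "BlobName": "item-42-body"},
    blob_store=store,
    blob_property="BlobName",
)

print(row.get_blob())  # dereferences the pointer held in the metadata
```

Note that the dereference is a plain lookup, not a foreign key: deleting the row would leave the blob untouched, which is exactly the loose coupling described above.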

Now, this could be added right now, and in fact I am considering doing that. But support for this in the Storage Client library would be nice. Whether back-end support is needed, or in fact a good idea, is another question entirely – making such a fundamental change on the back end has wide implications. For example, say you’ve exposed a table via OData: how do you expose the blob as well? And given the nature of the use case, the fact that it is needed on a table-by-table basis makes it much easier to limit any such functionality to the tools library only.

I can hear you asking why I’m asking for support in the Storage Client library if I can gin up and duct-tape together some extension methods. I’m a big proponent of the idea that anything we interact with in software, be that actual applications or libraries we use in our code, has a natural user interface. Think of it: the API methods exposed for us as developers to use are in fact a user interface. So I’m asking for a better user interface that supports this functionality without me having to do the legwork for it. In delivering such support, it is perfectly possible, indeed likely, that the library code that Microsoft ships will be more efficient than whatever code I can write.

My final report on the project did call out VS Windows Azure Tools, mainly for not making my life easier. So I’m looking forward to the new version (1.3) and seeing how it goes, particularly with regard to storage.

Now, performance-wise, the version I wrote last wasn’t exactly fast at retrieving data. I suspect that this was due to a) my own code inefficiencies and b) the fact that my data wasn’t optimally (obviously) normalized. It’s also probable that better use of MVC ViewData (actually, I think that should be “better use of MVC, period”) and caching will improve things a lot as well.