Managing Feeds

I was just adding some new feeds to Feedly. While this is not in itself a statement of earth-shattering proportions, I did something I've never done before: I changed the title to reflect WHY I was subscribing to that feed.


Frasier Spiers is doing this really cool thing with iPads at a school in Greenock, Scotland (just up the road from me, as it turns out) called the iPad Project. So I changed the title from "Frasier Spiers" to include "The iPad Project". Now I can remember why I've subscribed to that feed.

Hopefully you can see where I'm going with this. I'd love to have some formal way to remind myself why I've subscribed to a particular feed. Some feeds will be self-explanatory, such as Scoble or Scott Hanselmann. But feeds from others who are less well known (or not known at all), such as Frasier, are a tad difficult to remember.

Not sure what form this may take, but it would make life an awful lot easier.

In closing, it strikes me that Twitter follows have much the same problem. But it's entirely the wrong medium for requiring explanations when you follow.

My RSSCloud Server: Thinking of doing some screencasts.

This year was my last at Uni (actually, I still have an exam to write, so the past tense isn't quite accurate). As is typical with Honours-year undergraduates, a final-year project was set.

If you are a regular reader of this blog, you'll probably know that what I picked was an RSSCloud server running on Windows Azure. However, as they say on the home shopping networks, THERE'S MORE! My project needed a little more body to it. So I added an online feed reader: in other words, a poor (dirt-simple) imitation of Google Reader.

Now, this app uses a number of technologies for which it would make a pretty cool demo project: Windows Azure itself (obviously), Windows Azure Tables and Blobs, LINQ, WCF, MVC 2 and so on. That includes it being a demonstrator of the RSSCloud specification itself.
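To give a flavour of the sort of thing the screencasts would cover, here's a minimal sketch of a feed subscription entity as it might be stored in Azure Tables. This is illustrative only (the class and property names are not the actual project code), assuming the Microsoft.WindowsAzure.StorageClient library that ships with the Azure tools:

using System;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative only: a feed subscription as it might live in Azure Tables.
public class FeedSubscription : TableServiceEntity
{
    // Parameterless constructor required by the table client for materialisation.
    public FeedSubscription() { }

    public FeedSubscription(string subscriberId, string feedUrl)
        : base(subscriberId, Uri.EscapeDataString(feedUrl))   // partition by subscriber, one row per feed
    {
        FeedUrl = feedUrl;
    }

    public string FeedUrl { get; set; }
    public string Title { get; set; }
    public DateTime LastChecked { get; set; }
}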

Although it's an academic submission, my lecturers are fine with me open-sourcing it.

Given the rise of .Net 4, and the experience points gained writing the first version, I feel that everyone would be better served by a rewrite. Not to mention the fact that it'll give me a chance to use the new Windows Azure Tools for Visual Studio.

As I rewrite it, I think a screencast series is in order. All the code will be checked in to CodePlex. This'll give everyone a chance to double-check my logic (I'm particularly interested in what Dave Winer thinks of my implementation of RSSCloud).

So, firstly: what do you think?

And secondly, does anyone know a good hosting provider? I'm not sure about YouTube, but Vimeo looks pretty good. If their limit is 500GB of upload space per week, it'll give me the chance to do one video each week, more or less.

I have all the software required to pull this off, so that's not a problem. I actually did a screencast of a live coding session in class for one of my lectures (writing an interpreter turns out to be pretty fun, actually).

I think this would be a pretty good contribution to the community as a whole.

Quotes of the Day

The first comes from David Weiss’s blog:

"Engineering is the art of modelling materials we do not wholly understand, into shapes we cannot precisely analyze so as to withstand forces we cannot properly assess, in such a way that the public has no reason to suspect the extent of our ignorance."

– Dr. AR Dykes, British Institution of Structural Engineers, 1976.

To echo what David said: If this doesn’t accurately describe software engineering, I don’t know what does.

 

The second comes from Jeff Atwood’s post on Coding Horror: The Vast And Endless Sea:

If you want to build a ship, don’t drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea.
– Antoine de Saint Exupéry

Seriously, I'd go and read the whole post. If I were teaching Introduction to Programming, this is the sort of quote I'd use on slide number one. Since that's what I'd be doing.

A word about Facebook privacy

I left this comment on Paul Bucheit's Friendfeed thread:

Paul: I get value out of having Twitter and FF completely public. That's not the issue. The issue here is that FB was originally sold as a private service. Another thing: you and I may have seen value in being completely public, but the only value to anyone in Grandma in Indiana being completely public belongs to the knitting accessories advertisers.

And I followed it up with this one:

And for the record, I don't have a FB account. In the old days, snail mail mostly guaranteed privacy for your communications by virtue of the fact that your communiqués were physically sealed by you. That, essentially, is the analogue version of FB pre privacy changes, albeit not at scale. In other words, privacy was implicit in the social convention of exchanging snail mail. With FB, people expected this social convention to extend, at slightly greater scale, to the online medium, since that's essentially how FB was marketed in the beginning. Now FB has single-handedly challenged the privacy implicit in this social convention, changing the implied default to a public one. That is the problem.

I think the last comment sums things up nicely. But do read that thread for a wide variety of opinions.

Web APIs have an Identity Problem (in response to and in support of @davewiner)

If you’ll remember, a while back I announced I was implementing RSSCloud on Windows Azure. By and large this is going well and I expect to have a demo up and running soon.

PS: What follows is based on an email I sent to my Honours year supervisor at university, and some of this will make it into my thesis too.

The RSSCloud API relies on HTTP POST for messages in and out, and I initially thought Windows Communication Foundation (WCF) was the way to go.

(bear with me, I’m using this to illustrate my point)

Up until now, WCF has been working. However, in order to actually test the RSSCloud code, I've had to write the WCF Operation Contracts as a REST-style service. It's still clearly HTTP POST, but it's not what's in the RSSCloud specification. Though arguably, it should be. Why do I say this? Developers should be able to write code they are comfortable with, whether that is REST or plain HTTP POST or SOAP or against a WSDL-generated API client.

Back to my little problem. So instead of:

[WebInvoke(Method = "POST", UriTemplate = "/ping?url={value}")]
[OperationContract]
String Ping(string value);

I had to use:

[WebInvoke(Method = "POST", UriTemplate = "/ping/url/{value}")]
[OperationContract]
String Ping(string value);

There is a subtle difference: the HTTP POST version uses the query string, whereas the REST version uses the URL itself to transmit the information.

Sending an HTTP POST to the first method (where the query string is of the form "?url={value}&port={value}&…") hangs the test client. The request is accepted, but it never returns; I can't even debug the method. Using a pure REST URL (the second method), things work perfectly.
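For context, the test client does something roughly like this (a sketch rather than the actual test code; the service address and variable names are illustrative):

using System;
using System.IO;
using System.Net;

static void SendPing(string feedUrl)
{
    // All of the data travels in the query string, as described above.
    var request = (HttpWebRequest)WebRequest.Create(
        "http://localhost:81/CloudService.svc/ping?url=" + Uri.EscapeDataString(feedUrl));
    request.Method = "POST";
    request.ContentLength = 0;

    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        // Against the WCF version of the service, this call never comes back.
        Console.WriteLine(reader.ReadToEnd());
    }
}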

In order for the project as a whole to conform to the project specification (by which I mean the interoperability of the program and its compliance with the HTTP POST methods defined in the RSSCloud specification), being able to accept these HTTP POSTs is paramount.

I spoke to one of my WCF-savvy lecturers. Basically he said that there were two ways of doing this: either stick to using REST, or encode the URL as part of the POST data. Neither of which solves the problem of sticking to the specification and using HTTP POST.
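For what it's worth, the second option would look something like this from the client's side (again a sketch; the service address is illustrative):

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

static void PingViaPostBody(string feedUrl)
{
    // Option two: move the url out of the query string and into the form-encoded POST body.
    // WCF is happy with this, but it isn't the shape of request the specification describes.
    using (var client = new WebClient())
    {
        var form = new NameValueCollection { { "url", feedUrl } };
        byte[] response = client.UploadValues("http://localhost:81/CloudService.svc/ping", "POST", form);
        Console.WriteLine(Encoding.UTF8.GetString(response));
    }
}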

So, I was digging around ASP.Net MVC 2 yesterday, building the page that will actually display the posts in a feed. I noticed that the controller actions that handle the request (i.e. the feed id to get) have an [HttpPost] attribute above them. I'd never really given that much thought until yesterday.

After my little chat, I had a hunch. Using MVC, I simply added a controller action like so:

[HttpPost]
public RedirectToRouteResult ThePing()
{
    string url = (string)Request.QueryString["url"];
    url = url.ToLower();
    // ...

And it worked flawlessly. After all my wrestling with WCF configurations and whatnot, I was actually quite shocked that it worked first time. One of the problems with working with new frameworks is that you keep discovering new things, but only long after you should've known about them.

So, to hit the ThePing method above, the URL is http://rsscloudproject/Cloud/ThePing?url=… (obviously this isn't deployed yet).

Why does this work?

The reason is quite simple: as I understand it, MVC exposes the Request object for you to use directly, while WCF hides it somewhere in the bowels of its inner workings. So, without getting a handle on the Request object, I can't force WCF to process the query string differently. Hence, WCF was the wrong choice of framework for this.
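To illustrate, the notification-request handler ends up looking something along these lines. This is a sketch rather than the final code: the parameter names come from the RSSCloud spec, but RegisterSubscriber is a hypothetical helper standing in for the real storage logic.

[HttpPost]
public ActionResult PleaseNotify()
{
    // MVC hands me the raw Request object, so reading the query string is trivial.
    string notifyProcedure = Request.QueryString["notifyProcedure"];
    string port            = Request.QueryString["port"];
    string path            = Request.QueryString["path"];
    string protocol        = Request.QueryString["protocol"];
    string url1            = Request.QueryString["url1"];

    // RegisterSubscriber is a hypothetical helper that stores the subscription (e.g. in Azure Tables).
    bool ok = RegisterSubscriber(notifyProcedure, port, path, protocol, url1);

    return Content(ok ? "Subscription registered" : "Subscription failed", "text/plain");
}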

So my code is now 100% in compliance with the HTTP POST methods defined in the RSSCloud specification.

Now, what does this mean for the WCF REST project?

I'm keeping it as part of the project. It gives a REST interface, and it gives a WSDL that developers can use to build against my service.

Not so much the case with REST, but I personally think that the concept of a WSDL is under-represented when it comes to web-based APIs. Adding these two additional interfaces to the RSSCloud specification will be one of my recommendations in the final report. I feel strongly that a web-based API needs to give developers as many alternative interfaces as possible. It's no fun when you know one way of doing things, but the API is only provided in another.

For example, I wish Smugmug provided a WSDL that I could point Visual Studio at and generate a client from.

Both of these situations illustrate a problem among web APIs.

I wrote a while back that Bill Buxton's Mix 10 keynote about designing natural user interfaces, interfaces that respect the abilities and skills acquired by the user, also applies to designers of APIs.

Bill gives this wonderful example of a violin. The violin itself may be worth millions of dollars (if I remember correctly, Joshua Bell paid $4.5 million for his Stradivarius). The bow of the violin for any first violinist in any symphony orchestra is never less than $10,000. Remember, these are musicians. They make a pittance. So as a proportion of income, it's a fortune. But it's worth it. Why? Because it's worthy of the skills that those musicians have acquired over decades.

Dave Winer today published a post about Facebook not providing XML API responses, and bemoaning that Twitter is going to do the same. Dave does not want JSON. He wants XML. Why? He feels comfortable with it, and he has the tools to work with it. Clearly the new API changes do not respect Dave Winer and the abilities he has acquired over decades.

I left the following comment:

I totally understand where you are coming from.

On the other hand, tools will always be insufficient. I don't think .Net, for example, has JSON support built in, either.

Technology moves so fast, as you say, that next week there will be something new and shiny. Developers find themselves in the curious position of having to write for today, but to prepare for next week's new thing; they have to find a way to straddle the fence between the two. Open source is not the complete answer to this problem (it's part of it, though).

  • So, API developers have the responsibility to provide for developers.
  • Tool developers (closed or open source) have a responsibility to provide the tools in a timely fashion.
  • And developers have the responsibility to have reasonable expectations for what they want supported.

This is a large and deep problem in the world of web APIs. They don't have to be Just JSON or Just XML or Just HTTP POST or Just XML-RPC or Just SOAP or Just WSDL. This collection of formats and standards can co-exist.

And co-exist they should. An API should be available to the widest possible cross-section of developers, to respect the individual skills that these developers have acquired over years and decades.

Because when you don’t do that, when you don’t respect that, you make people like Dave Winer stop coding against your API.

Help Needed: Silicon Image Sil 3512 SATALink Controller BIOS Flash

So, I installed a 2-port eSATA adaptor from LaCie last week and connected my brand spanking new 1.5TB drive to it.

This is a Windows Home Server system, if you must know. So disk activity is always high, both reading and writing.

Now the hard drive itself is perfectly fine (I’ve tested it on other computers using USB 2.0). The enclosure is perfectly fine (since I’ve tested that too).

This leads me to the issue I have with the controller.

This error message always preceded a crash:

“The device, \Device\Scsi\SI3112r1, did not respond within the timeout period.”

That error led me to this Microsoft KB article: http://support.microsoft.com/kb/154690/EN-US/

A quote:

The reason that drives tend to have these types of problems under heavy stress is often slow microprocessors. In a multitasking environment, the processor may not be fast enough to process all the I/O commands that come in nearly simultaneously.

Hmmmm… This certainly fits the bill, since, after much careful examination, it seems heavy reads cause this problem.

I've tried all the other stuff in the KB article except flashing the PCI card's BIOS.

Now this is where it gets interesting. The LaCie card uses the Silicon Image Sil 3512 SATALink Controller. This is what shows up in Windows Device Manager.

I’ve updated the driver to its latest version from Windows Update. But not the BIOS.

Now the download is simply a flash tool and a readme file that gives the following command-line instructions:

Procedures to run SiFlashTool.exe

· Open Windows command prompt

· Change to a directory where the SiFlashTool.exe and BIOS binary file are located.

· Run SiFlashTool to update the flash memory with BIOS binary code

The SiFlashTool.exe command line syntax is as follows:

SiFlashTool [/BusNum:xx /DevNum:xx] [/File:filespec] [/v]

Where:

BusNum / DevNum: These parameters specify the PCI bus and device number respectively of a Silicon Image storage controller. These parameters only need to be used if there is more than one Silicon Image storage controller in the system.

File: This parameter specifies the path and name of the BIOS image file to be programmed.

/V: This switch causes the program to only display the version number of a controller's BIOS. No BIOS image is programmed when this switch is used. The /File parameter is ignored if specified along with this switch. If /BusNum and/or /DevNum are specified, then only the BIOS versions of controllers at the specified PCI locations are displayed.

If I run it with /V, it tells me that BusNum is 05 and DevNum is 04.

Question one: what BIOS binary file are they talking about?

Question two: how am I supposed to include the BusNum and DevNum arguments?
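My best guess for question two, going by the readme, is something like the following, where the .bin filename is just a placeholder since question one still stands:

SiFlashTool /BusNum:05 /DevNum:04 /File:C:\SiFlash\3512.bin

Is that right?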

 

Many thanks for any help all the hardware and command-prompt gurus out there can give.

In defense of @friendfeed from @techcrunch's attack (@parislemon, I'm looking at you)

Allow me to repost the comment I made on this Techcrunch post, it being a blatant attack on Friendfeed.

One, we are not "pissed". At all. We'd only be up in arms if Facebook closed Friendfeed.

Two. If it's not news, why are you reporting it?

Three. It is news, because Friendfeed pioneered some of those wonderful features now known as Google Buzz.

Four. The last time Friendfeed had problems was October 29th, with some network issues. Ergo, it is NOT Twitter. At all.

Five. Even if it were Twitter, you never treated Twitter as harshly as you have treated Friendfeed in this post. Even during the Era of the Failwhale.

Six. You don’t like Friendfeed. We get it.

Seven. Here endeth the lesson.

Really. I’m not surprised.

The question I really want answered here is why people are leaving Friendfeed. I certainly can't think of a reason to. Even Scoble freely admits that Friendfeed has the superior feature set.

Facebook has a 200-million-strong userbase.

So, Mark Zuckerberg, turn them loose on Friendfeed please.

Scoble's Molecules of Information


Scoble's molecules of information post reminded me of something: blog posts are the original molecules of information. A blog post is a place to bring tweets, pics and YouTube videos together. Since blogging took off, we have gained a host of new tools to add to the Swiss army knife. We have Foursquare check-ins, for example; they provide an awful lot of context to location-sensitive tweets.

That's why I'm sharing this here rather than going straight to Friendfeed and Twitter.

I commented on Scoble's post:

Er, Scoble. You can tag tweets. It's called hashtags. What we DON'T have is the ability to search and mine that information.
Friendfeed has hashtags as well. And FF has a far more powerful search engine for all these little atoms of information.
Friendfeed is way ahead of you. They show you related items.
The future is here, it's just not evenly distributed yet.

To which Scoble replied (Disqus comments with replies are awesome, BTW):

Nice try. Hashtags are NOT tags. At least they aren’t anything like the tags that Flickr photos have. FriendFeed does NOT have tags. It has comments. Not the same again. Not even close. FriendFeed’s related items? They are to remove some duplication noise and that feature doesn’t work anywhere close to as well as a human curated system would. Try again.

To which I responded:

Robert, hashtags need a systematic engine for them to work as actual tags. Twitter should add this.

But nonetheless they provide a way of categorising tweets. Tweetdeck's tweet filtering works primarily due to hashtags. For events, for example, hashtags are brilliant.

Friendfeed's related items link may primarily be for noise reduction, but this functionality could be greatly extended. Comments are content as well, but quite often they provide context too. See Jesse's FF 3.0 posts on FF this morning for an example, where links between FF items are posted in the comments.

If this were extended to solidify the relationship between items, beyond simply showing the items linking to the same page, we'd have your information molecules.

The sum total of tweets, posts, videos, Foursquare check-ins, you name it, about something often ends up providing more context than any one single service or method can provide. Typically speaking, blog posts have filled this need for creating context, collating all this related information together in a single article. This tweet, that twitpic, this video. The first instance of an information molecule.

As noted above, we have already been manually adding in links between related content. Geolocation services have always created information molecules, combining tweets and Google Maps. In like manner, the services concerned need to solidify these methods for other types of information.

What do you think?

Why I Just Bought A Dell (instead of an iPad)


Even with all the iPad hysteria in yonder interwebs, there is one fact that differentiates the iPad from a true, bad-to-the-bone laptop: the need to sync.

This above all else cripples the iPad (at least when one considers it against the backdrop of the average laptop hardware spec). Think about it. How are you going to get all those wonderful iPhone apps you've bought over the past three years onto your brand spanking new iPad? You need to sync it. How are you going to get your music, TV shows and movies onto your iPad? You need to sync it. In fact, how are you going to get some swanky software update that Apple will surely release onto your iPad without syncing it?

I have that problem with my iPhones at the moment. The iTunes library that I sync the iPhones to got borked a few weeks back. Now I have to erase and re-sync BOTH iPhones with my partially rebuilt library (it's a bit of a hit-or-miss process). Until I do that, I can get stuff off the devices, but not sync stuff to them. Bit of a pain, no? It's going to be even worse with the iPad if I'm ever in this sticky situation with it.

Secondly, the iPad runs iPhone OS 3.2; the laptop runs Windows 7 Professional. Which gives me the greater freedom of applications? It depends. I have no qualms about the App Store. It's the type of application that is allowed on the iPad/iPhone that's the problem. Apple clearly prohibits running virtual machines, or any kind of just-in-time compilation, on the device in question. So how do I write code on the thing? (Writing code is useless if you can't compile in real time and debug.) A jailbreak is out of the question, and even then, Visual Studio is certainly not coming to a jailbroken iPad near you.

Thirdly, the hardware itself limits what kind of applications you can run. If Adobe produces a stripped-down version of Photoshop (likely: they already have a Photoshop iPhone app), Lightroom (possible; it depends on whether the SDK allows access to the SD and USB port adaptors) or Illustrator (after Apple demonstrated the drawing capabilities of the iPad, why not?), you can bet your bottom dollar that they are not going to be anywhere near as full-featured and powerful as their desktop (and laptop) counterparts. The hardware is Apple's very own custom silicon. The A4 system-on-a-chip, made by PA Semi for its parent company, runs at 1GHz. Not exactly world-class performance. And until we have industry-standard benchmarks, nobody can say for sure. Nevertheless, this nice Dell system runs an Intel® Core™2 T6670 (2.2GHz, 800MHz, 2MB). A nice speed improvement, if I do say so myself. The current consensus is that the iPad has about 1GB of RAM, compared to the 4GB in the Dell build.

Now, I do a lot of typing on my laptop, whether that's for code, for taking notes or for the occasional blog post. So the keyboard is a must for me. The iPad keyboard dock is an ingenious design, and would look good on just about any desktop (not to mention those nice display tables at the Apple Store). It goes a long way to answering those critics who, after three years of using their iPhone virtual keyboards, still like their tactile feedback (not to mention the much improved ergonomics of writing volumes on the keyboard dock rather than just on your lap; there must be some ergonomically minded lobby that would blame Apple for all the RSI around, right?). What I can't imagine is lugging the dock all the way to uni, setting it up, putting this tiny little iPad on it and then taking notes for three hours (mind you, after actually trying this I may change my mind, but that's months away). Equally, I can't imagine turning up to a business meeting armed with the keyboard dock and iPad; I'd be the laughing stock of any (Dell-dominated) conference table.

In saying that, the iPhone virtual keyboard has been very good to me. If one had to graph the spelling mistakes I (inadvertently) tweet, there is a continual improvement (a reverse hockey-stick graph, if you will). So I'm certainly not against the virtual keyboard on the iPad. How it will actually work, however, is another question altogether. I'm typing this on the last Dell laptop I bought, and the keys give me firm, reassuring feedback. Not to mention the almost soothing sound the keys make as I type, the sound of success (if I ain't typing, I ain't working).

Then there is the battery. Now, if Apple is to be believed, the iPad has 10 hours of battery life and a month of standby. No idea if that's 10 hours of general use, of video playback, of web browsing, of music playback, etc. Going by the iPhone's track record, I'm not so sure I'm always going to get 10 hours out of the thing. However, the 10 hours still far outlives the seven I had for two years with the current laptop's 9-cell Li-ion battery. And the two hours I've lived with for the past four months. And the zero hours that I've had for a week and a half now.

Now let's think of the gravy.

One, the laptop has no app store. On the minus side, this means that I have to source the applications I wish to run myself. Still, I have replacements for all the iPad's built-in applications. This, ironically enough, includes iBooks. It's called Kindle for PC. From Amazon. (Amazon's actions over the weekend are a subject for another post, but read this brilliant article by the author John Scalazi.) I have the full Creative Suite 3 from Adobe. I have Microsoft's Expression Studio 3. I have Visual Studio 2008 and 2010. I have SQL Server 2008. I have Office 2008 (soon to be 2010). I have a virtual Swiss army knife of utilities near and dear to my heart, for everything from screen capture to April Fools' jokes.

Two, webcam. This laptop build has an integrated webcam. The iPad does not. And yes, I've heard those rumors of the camera cavity in the iPad's frame. And yes, there is every possibility that el Steveo will pull a One More Thing on launch day and announce the addition of a camera. But here we deal with certainties and absolutes, not the obscure fantasies and wet dreams of fanboys. So we assume that there is no camera on the iPad version 1. Again assuming that the SDK allows the access, the appearance of a third-party webcam accessory is almost assured. But still, I have an integrated webcam here and now.

Third, 64-bit. This is a 64-bit processor with a 64-bit OS. Need I say more?

Fourth, DVD drive. For those movies I'd like to watch without going through the palaver of syncing them. The benefits of having a DVD drive handy are still very much apparent, even in this age of the cloud and the on-demand nature of downloading programs off the web (legitimately, of course). The iPad is completely dependent on the internet for its software and music, and there is iTunes syncing for anything else.

The one question mark here, which I will require an actual iPad to answer, is the screen. The Dell screen is anti-glare, and promises to be a significant improvement on the screen of my current laptop. The iPad screen is IPS and supposedly has a great viewing angle. According to Steve Jobs, that is. No one has had it in direct sunlight yet, so we've no idea how well it handles glare. The winner in this category will undoubtedly be Amazon's Kindle (that pesky company again).

So without further ado, here are the specs:

Base: Vostro 1520 Standard Base
Memory: 4096MB 800MHz Dual Channel DDR2 SDRAM (2x2GB)
Keyboard: Internal Keyboard, English (QWERTY)
Video Card: Integrated GMA X4500 HD Graphics
Hard Drive: 320GB (7,200rpm) Serial ATA Hard Drive with Free Fall Sensor
Operating System: English Genuine Windows® 7 Professional (64-bit)
Optical Drive: 8X DVD+/-RW Drive, including software for Windows 7
Wireless Networking: Dell Wireless 1397 Mini Card (802.11 b/g), European
Primary Battery: 6-cell 56WHr Lithium Ion battery
Processor: Intel® Core™2 T6670 (2.2GHz, 800MHz, 2MB)
Camera: Integrated 1.3MP Camera
Colour: Obsidian Black
LCD: 15.4 inch WXGA+ CCFL Anti-Glare Display