Getting your News Manually Blows: A Reply

Last night Holden Page wrote a pretty good blog post entitled: Getting Your News Manually Blows (http://pagesaresocial.com/2011/04/05/getting-your-news-manually-blows/).

It was late, so I thought I’d reply now when I’m fully awake :).

Holden’s point was that when you use a service like my6thsense that auto-curates the news and presents this to you, it’s much easier than using Twitter (and by extension Google Reader etc) to get your news. Essentially, it’s easier to have the news pushed to you rather than to have to go off and pull it from various services.

Essentially, Holden is arguing for the curation model rather than the consumption model. This is in fact something that I’ve noticed myself. There is a consistent lack of content, even using Feedly and Twitter and Friendfeed to get my news.

Now, I got an iPad about 2 months ago and promptly installed Flipboard. I mention this because the experience is as important as, if not more so than, the content within it. It rapidly warmed me to the iPad as a digital newspaper, complete with page turns and layout. It’s blatantly obvious by now, but the iPad’s ability to BECOME the app that’s running is a singular experience. As a dead tree newspaper reader, I know turning the pages of a broadsheet can be a torturous experience. Flipboard showed me how all that fuss just melts away on the iPad.

Thus I went off to seek actual newspaper apps to use.

Now before you groan and call newspapers dead tree media of the past, sit down and think about it. Newspapers were the my6thsense of dead tree media (albeit political persuasions, rivalries and narcissistic owners took the place of algorithms). Decades worth of experience curating the news still produces some damn fine newspapers.

That’s a fact. Take the Times of London. I’ve been a Times Reader for years, even though it is a Tory paper. The iPad apps for the Times and the Sunday Times are incredibly good. They preserve the newspaper’s look and feel whilst incorporating video and some innovative layouts. I like it so much I signed up for the monthly subscription.

Here’s the scary bit: I open up The Times app and read it before I open up Feedly. No kidding. It’s curated news that covers a wide variety of topics succinctly. It’s quick and easy to browse through.

“Hang on a minute” I hear you say, “there’s no sharing or linking or liking or anything! It’s a Walled Garden”. And that’s the truly scary part: I don’t care.

Now I’m not at all suggesting that we abandon our feed readers and start reading digital newspapers. Quite the opposite. I’m saying that if we’re looking for sources of curated news, digital newspapers had better be one of those sources. Indeed, Feedly still gets used every day. It’s invaluable to me. (Today, for example, the Falcon 9 Heavy story is nowhere to be found in The Times.)

The other good newspaper app I found was the USA Today app. It’s nice and clean, with a simple interface that focuses on content. As a bonus, you can share articles to your heart’s content. Plus, it’s US-centric for the Americans among us 😉

I mentioned above that looking through my feeds, or through Twitter and Friendfeed, for good content was/is becoming a bit of a chore. I’m finding more and more time where I’m looking at absolutely nothing. I probably need to subscribe to better feeds or better people. I probably need to reorganise my feeds to better lay out content, and cull the boring ones. But here’s the thing: I don’t have the time to do all that.

As Apple would say, There’s an App For That!

Flipboard Review

One of the first apps I installed on my 64GB Wi-Fi iPad was the Flipboard app. After all the hoopla I heard from Robert Scoble and others, I was eager to use the app myself.

Now, the fact of the matter is that the app is beautifully designed. Really, it is. A work of art. The page flips are gorgeously animated. The app start-up is brilliant, making me wonder each and every time I start it up what picture I’m going to get. The popout article pages are beautifully presented. Even the Twitter client aspect of the app is very nicely done.

However, there are times when I go through my feeds very leisurely and rather randomly, and Flipboard is sheer gold dust when it comes to this. Other times I’m more structured about my reading: I read the Tech folder first, then the Space folder, and finally the General folder. Only then, if I have time, do I randomly go through my other folders. This second way of reading my feeds is rather difficult on Flipboard. Moving between folders isn’t a smooth process and requires too much work.

When I’m going through my feeds quickly, I actually prefer Feedly on my iPhone.

Feedly for iPad is coming soon, and we’ll see what difference it makes in this area.

There is one thing I’ve noticed: read/unread items do not always translate back to Feedly on the desktop. So, in fact, I end up reading some items twice. Since I’ve no idea of how Flipboard works under the hood, I’m not sure where the lag is coming in, whether it’s a Flipboard, Google Reader, or Feedly issue. But it sure would be nice to have some sync going on.

That having been said, Flipboard is very much part of my routine. I rather enjoy reading it in the mornings over a cup of tea. It’s the first app I open. I really do think it’s a most marvellous application, and I’ll definitely be using it very often.

Managing Feeds

I was just adding some new feeds to Feedly. While this is not in itself a statement of earth-shattering proportions, I did something I’ve never done before: I changed the title to reflect WHY I was subscribing to that feed.


Fraser Speirs is doing this really cool thing with iPads at a school in Greenock, Scotland (just up the road from me, as it turns out) called the iPad Project. So I changed the title from “Fraser Speirs” to include “The iPad Project”. Now I can remember why I’ve subscribed to that feed.

Hopefully you can see where I’m driving with this. I’d love to have some formal way to remind myself why I’ve subscribed to a particular feed. Some feeds will be self-explanatory, such as Scoble or Scott Hanselman. But feeds from others less well known (or not known at all), such as Fraser, are a tad difficult to remember.

Not sure what form this may take, but it would make life an awful lot easier.

In closing, it strikes me that Twitter follows have much the same problem. But it’s entirely the wrong medium for requiring explanations when you follow.

My RSSCloud Server: Thinking of doing some screencasts.

This year was my last at Uni (actually, I still have an exam to write, so the past tense isn’t quite accurate). As is typical for Honours year undergraduates, a final year project was set.

If you are a regular reader of this blog, you’ll probably know that what I picked was an RSSCloud server running on Windows Azure. However, as they say on the home shopping networks, THERE’S MORE! My project needed a little more body to it, so I added an online feed reader; in other words, a poor (dirt-simple) imitation of Google Reader.

Now, this app uses a number of technologies that would make it a pretty cool demo project: Windows Azure itself (obviously), Windows Azure Tables and Blobs, LINQ, WCF, MVC2 and so on. It also serves as a demonstrator of the RSSCloud specification itself.

Although it’s an academic submission, my lecturers are fine with me open-sourcing it.

Given the rise of .NET 4, and the experience points gained writing the first version, I feel that everyone would be better served by a rewrite. Not to mention that it’ll give me a chance to use the new Windows Azure Tools for Visual Studio.

As I rewrite it, I think a screencast series is in order. All the code will be checked in to CodePlex. This’ll give everyone a chance to double-check my logic (I’m particularly interested in what Dave Winer thinks of my implementation of RSSCloud).

So, firstly, What do you think?

And secondly, does anyone know a good hosting provider? I don’t know about YouTube, but Vimeo looks pretty good. If their limit is 500GB per week of upload space, it’ll give me a chance to do one video each week, more or less.

I have all the software required to pull this off, so that’s not a problem. I actually did a screencast of a live coding session in class for one of my lectures (writing an interpreter turns out to be pretty fun, actually).

I think this would be a pretty good contribution to the community as a whole.

Quotes of the Day

The first comes from David Weiss’s blog:

"Engineering is the art of modelling materials we do not wholly understand, into shapes we cannot precisely analyze so as to withstand forces we cannot properly assess, in such a way that the public has no reason to suspect the extent of our ignorance."

– Dr. AR Dykes, British Institution of Structural Engineers, 1976.

To echo what David said: If this doesn’t accurately describe software engineering, I don’t know what does.


The second comes from Jeff Atwood’s post on Coding Horror: The Vast And Endless Sea:

If you want to build a ship, don’t drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea.
– Antoine de Saint ExupĆ©ry

Seriously, I’d go and read the whole post. If I were teaching Introduction to Programming, this is the sort of quote I’d use on slide number one. Since that’s what I’d be doing.

Running Windows Communication Foundation on Windows Azure

In a perfect world, these two Microsoft products would Just Work. Unfortunately, it does not quite work that way in practice.

In general, WCF can be run in a number of ways.

  • The first is to use IIS to host your service for you. IIS will take care of port mapping between internal and external ports.
  • The second is to self-host. This means that your process creates a ServiceHost, passing in the appropriate service class. Your process is responsible for error handling and termination of the WCF service. The service class itself can carry the service behaviour attributes, or you can put them in the app.config file. This also means that you have to ensure that the WCF service is listening on the correct port when you create it.

In Windows Azure terms, the first method would be a Web role, and the second method would be a worker role.

It should be noted that all of the above methods and code patterns are perfectly legitimate ways to create and host a WCF service. The question here is: which of them will run on Windows Azure?

Programming for Windows Azure is generally easy, so long as your application start-up code is correct. If there are any exceptions in your start-up code, the role will cycle between Initializing, Busy and Stopping. It’s enough to drive you mad. Therefore, if you choose to self-host, you need to be sure that your worker role code runs perfectly.

For all your Windows Azure applications, it’s also a good idea to create a log table using Windows Azure Tables to assist you in debugging your service.
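Something along these lines is what I have in mind. This is just a minimal sketch using the StorageClient library, not my project’s exact code; the entity shape and the table name are purely illustrative:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// A simple log entry: partitioned by source (e.g. "WCF"), unique row key per entry.
public class LogEntry : TableServiceEntity
{
    public LogEntry() { }

    public LogEntry(string source, string message, string level, DateTime time)
    {
        PartitionKey = source;
        RowKey = Guid.NewGuid().ToString();
        Message = message;
        Level = level;
        Time = time;
    }

    public string Message { get; set; }
    public string Level { get; set; }
    public DateTime Time { get; set; }
}

// Wraps table creation and inserts so logging is a one-liner from role code.
public class LogDataSource
{
    private readonly CloudTableClient tableClient;

    public LogDataSource(CloudStorageAccount account)
    {
        tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("LogTable");
    }

    public void Insert(string source, string message, string level, DateTime time)
    {
        var context = tableClient.GetDataServiceContext();
        context.AddObject("LogTable", new LogEntry(source, message, level, time));
        context.SaveChanges();
    }
}

With something like that in place, a single Insert call from a catch block is all it takes to see why a role is cycling (you’ll see it used that way in the self-hosting code further down).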

Now, before you begin any WCF work on Azure, you need to install a patch. WCF can exhibit some strange behaviour when deployed behind a load balancer. Since all Windows Azure service instances sit behind a load balancer by default, we need this patch. It is already installed on the instances in the cloud.

You can find the patch, as well as a thorough rundown of all the WCF issues on Azure, here. While I’m not attempting to tackle all of the problems (and indeed the solutions to them), I do aim to get a basic service working with as little fuss as possible.

IIS Hosting

It must be said that I tried all sorts of settings in the config file. It turns out that the one with almost no actual settings is the one that works. Basically, this forces us to rely on IIS to do the configuration for us. On one hand this is a good thing, saving us the hassle. On the other, there is some loss of control that the programmer in me does not like.

Firstly, let’s look at the config file. This section is vanilla, and is basically what you’ll get when you add the WCF Service Web Role via the Add New Role dialog.

<system.serviceModel>
  <services>
    <service name="WCFServiceWebRole1.Service1" behaviorConfiguration="WCFServiceWebRole1.Service1Behavior">
      <!-- Service Endpoints -->
      <endpoint address="" binding="basicHttpBinding" contract="WCFServiceWebRole1.Service1" />
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="WCFServiceWebRole1.Service1Behavior">
        <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
        <serviceMetadata httpGetEnabled="true"/>
        <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
        <serviceDebug includeExceptionDetailInFaults="false"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

If you deploy this file to the cloud, the role will initialise and start up correctly. However, at this point, the service is not aware of anything outwith the load balancer. In fact, if you deploy the WCF web role template to the cloud it will start up and run perfectly, albeit only behind the LB.

Now for the actual service code. This code is actually from a REST service I wrote. It basically provides an interface for the RSSCloud specification (sorry Dave), so we can talk to the RSSCloud server using HTTP POST, REST and any WSDL client.

[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class Service1
{
    // requestqueue, recieverqueue, pingqueue, source and the BlobNameServices,
    // Utils and insertTable helpers are all defined elsewhere in the project.

    // Note: WCF UriTemplate path variables must be strings, and every method
    // parameter must appear in the template (hence the /path/{path} segment
    // and the string port below).
    [WebInvoke(Method = "POST", UriTemplate = "pleasenotify/notifyProcedure/{notifyprocedure}/port/{port}/path/{path}/protocol/{protocol}/domain/{domain}/url/{url}"), OperationContract]
    public String RequestNotification(string notifyprocedure, string port, string path, string protocol, string domain, string url)
    {
        // This change to the storage model is intended to make scaling easier.
        // The subscribers will be stored per blog in individual blobs;
        // the table stores the blog ids, the blog urls and the last modified
        // date of the blob file.
        string name = BlobNameServices.generateBlobName(url);

        if (name == null)
        {
            name = BlobNameServices.CreateBlobEntry(url, source);
        }

        string hash = Utils.HashToBase64(domain + path + port + protocol + url);

        insertTable(name, hash, notifyprocedure, port, path, protocol, domain, url);

        requestqueue.AddMessage(new CloudQueueMessage(name));

        return "reply";
    }

    [WebInvoke(Method = "POST", UriTemplate = "recivenotify/url/{url}"), OperationContract]
    public String RecieveNotification(string url)
    {
        recieverqueue.AddMessage(new CloudQueueMessage(url));

        return "Recieved";
    }

    [WebInvoke(Method = "POST", UriTemplate = "ping/url/{value}"), OperationContract]
    public String Ping(string value)
    {
        // We have a potential inconsistency here: the url is passed as an
        // argument, whereas for notifications the blog id is passed instead.
        pingqueue.AddMessage(new CloudQueueMessage(value));

        String result = "You said: " + value;

        return result;
    }
}

I arrived at this code by taking the vanilla WCF web role, combining it with the WCF REST template, and replacing the template code with my own.

Amazingly enough, it works, up to a point. When you hit the .svc file, it points you to a WSDL BEHIND the load balancer, using the local instance’s address. Obviously, we can’t get to that. So we have to modify our service behaviour very slightly to take advantage of the patch mentioned above.

<behaviors>
  <serviceBehaviors>
    <behavior name="WCFServiceWebRole1.Service1Behavior">
      <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
      <serviceMetadata httpGetEnabled="true"/>
      <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
      <serviceDebug includeExceptionDetailInFaults="false"/>
      <useRequestHeadersForMetadataAddress>
        <defaultPorts>
          <add scheme="http" port="8080" />
        </defaultPorts>
      </useRequestHeadersForMetadataAddress>
    </behavior>
  </serviceBehaviors>
</behaviors>

If you hit the service now, the WSDL points to the correct address and you can get to it. So any WSDL client can now talk to your service.

You want to make sure that the port selected in the web role settings matches the port you have in the useRequestHeadersForMetadataAddress section, whether that is port 80 or something else. If you have an HTTPS endpoint, you need to add it there as well to get it to work.
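For instance, if you also had an HTTPS input endpoint, the section would grow to something like this (the 8443 here is purely illustrative; use whatever port your role actually defines):

<defaultPorts>
  <add scheme="http" port="8080" />
  <add scheme="https" port="8443" />
</defaultPorts>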

Self Hosting

Now, I’ve tried self-hosting. This is the idea.

string port = "";
try
{
    port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["WCFEndpoint"].IPEndpoint.Port.ToString();
}
catch
{
    // Fall back to the port configured in the role settings.
    port = "8000";
}

try
{
    using (ServiceHost svcHost = new ServiceHost(typeof(RssNotifyService), new Uri("http://northwind.cloudapp.net:" + port)))
    {
        svcHost.Open();

        while (true)
        {
            // Do something useful here; at minimum, sleep so the role
            // stays alive without spinning the CPU. The using block
            // disposes the host if we ever fall out of the loop.
            Thread.Sleep(10000);
        }
    }
}
catch (Exception ex)
{
    // Log the failure to the Azure log table so we can see why the role is cycling.
    CloudStorageAccount acc = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    LogDataSource log = new LogDataSource(acc);
    log.Insert("WCF", ex.Message, "Error", DateTime.Now);
}

For this to work, you need an HTTP input endpoint set in your role settings. In this case it was port 8000.
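For reference, the endpoint definition in ServiceDefinition.csdef looks something like this. This is a sketch; the endpoint name just has to match the one the code asks RoleEnvironment for:

<Endpoints>
  <InputEndpoint name="WCFEndpoint" protocol="http" port="8000" />
</Endpoints>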

While the above will work in the development fabric, it won’t in the cloud. But it demonstrates the idea.

At the end of the day, there is not very much to set up if you are writing a new WCF service with minimal custom configuration.

So, install the patch and watch that config file.

P.S. Many thanks to Steve Marx (Windows Azure Program Manager) for his help with this.

A word about Facebook privacy

I left this comment on Paul Buchheit’s Friendfeed thread:

Paul: I get value out of having Twitter and FF completely public. That’s not the issue. The issue here is that FB was originally sold as a private service. Another thing: you and I may see value in being completely public, but the only value in Grandma in Indiana being completely public goes to the knitting accessories advertisers.

And I followed it up with this one:

And for the record, I don’t have a FB account. In the old days, snail mail mostly guaranteed privacy for your communications by virtue of the fact that your communiqués were physically sealed by you. That is essentially the analogue version of FB pre privacy changes, albeit not at scale. In other words, privacy was implicit in the social convention of exchanging snail mail. With FB, people expected this social convention to extend, at slightly greater scale, to the online medium, since that’s essentially how FB was marketed in the beginning. Now FB has single-handedly challenged the privacy implicit in this social convention, changing the implied default to a public one. That is the problem.

I think the last comment sums things up nicely. But do read that thread for a wide variety of opinions.

Web APIs have an Identity Problem (In response to, and in support of, @davewiner)

If you’ll remember, a while back I announced I was implementing RSSCloud on Windows Azure. By and large this is going well and I expect to have a demo up and running soon.

P.S. What follows is based on an email I sent to my Honours year supervisor at university, and some of it will make it into my thesis too.

The RSSCloud API relies on HTTP POST for messages in and out. And I initially thought Windows Communication Foundation was the way to go.

(bear with me, I’m using this to illustrate my point)

Up until now, WCF has been working. However, in order to actually test the RSSCloud code, I’ve had to write the WCF Operation Contracts as a REST service. It’s clearly HTTP POST, but it’s not what’s in the RSSCloud specification. Though arguably, it should be. Why do I say this? Developers should be able to write code they are comfortable with, whether that is REST or POST or SOAP, or against a WSDL-generated API client.

Back to my little problem. So instead of:

[WebInvoke(Method = "POST", UriTemplate = "/ping?url={value}")]

        [OperationContract]

        String Ping(string value);

I had to use:

[WebInvoke(Method = "POST", UriTemplate = "/ping/url/{value}"/)]

        [OperationContract]

        String Ping(string value);

There is a subtle difference: the HTTP POST form uses the query string, whereas the REST form uses the URL path itself to transmit information.

Sending an HTTP POST to the first method (where the query string is of the form "?url={value}&port={value}&…..") hangs the test client. The request is accepted, but it never returns; I can’t even debug the method. Using a pure REST URL (the second method), things work perfectly.

In order for the project as a whole to conform to the project specification (by which I mean the interoperability of the program and its compliance with the HTTP POST methods defined in the RSSCloud specification), being able to accept plain HTTP POST is paramount.

I spoke to one of my WCF-savvy lecturers. Basically, he said there were two ways of doing this: either stick to using REST, or encode the URL as part of the POST data. Neither solves the problem of sticking to the specification and using HTTP POST.

So, I was digging around ASP.NET MVC 2 yesterday, building the page that will actually display the posts in a feed. I noticed that the controller actions that handle the request (i.e. the feed id to get) have an [HttpPost] attribute above them. I’d never really given that much thought until yesterday.

After my little chat, I had a hunch. Using MVC, I simply added a controller action like so:

[HttpPost]
public RedirectToRouteResult ThePing()
{
    string url = (string)Request.QueryString["url"];
    url = url.ToLower();
    ……..
}

And it worked flawlessly. After all my wrestling with WCF configurations and whatnot, I was actually quite shocked that it worked first time. One of the problems with working with new frameworks is that you keep discovering new things, but only long after you should’ve known them.

So, to hit the ThePing method above, the URL is http://rsscloudproject/Cloud/ThePing?url=….... (obviously this isn’t deployed yet).
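(As an aside, there’s no custom routing involved here. The stock MVC 2 route from Global.asax.cs is what maps /Cloud/ThePing to the ThePing action, assuming the controller is named CloudController:

routes.MapRoute(
    "Default",                                          // route name
    "{controller}/{action}/{id}",                       // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

Nothing custom at all.)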

Why does this work?

The reason is quite simple: as I understand it, MVC exposes the Request object for you to use directly, while WCF hides it somewhere in the bowels of its inner workings. Without a handle on the Request object, I can’t force WCF to process the query string differently. Hence, WCF was the wrong choice of framework for this.

So my code is now 100% in compliance with the HTTP POST methods defined in the RSSCloud specification.

Now, what does this mean for the WCF REST project?

I’m keeping it as part of the project. It gives a REST interface, and it gives a WSDL that developers can use to build against my service.

Not so much the case with REST, but I personally think the concept of a WSDL is under-represented when it comes to web-based APIs. Adding these two additional interfaces to the RSSCloud specification will be one of my recommendations in the final report. I feel strongly that a web-based API needs to give developers as many alternative interfaces as possible. It’s no fun when you know one way of doing things, but the API is only provided in another.

For example, I wish Smugmug provided a WSDL that I could point Visual Studio at to generate a client.

Both of these situations illustrate a problem among web APIs.

I wrote a while back that Bill Buxton’s Mix 10 keynote about designing natural user interfaces (interfaces that respect the abilities and the skills acquired by the user) also applies to designers of APIs.

Bill gives this wonderful example of a violin. The violin itself may be worth millions of dollars (if I remember correctly, Joshua Bell paid $4.5 million for his Stradivarius). The bow for any first violinist in any symphony orchestra is never less than $10,000. Remember, these are musicians; they make a pittance, so as a proportion of income it’s a fortune. But it’s worth it. Why? Because it’s worthy of the skills those musicians have acquired over decades.

Dave Winer today published a post about Facebook not providing XML API responses, and bemoaning that Twitter is going to do the same. Dave does not want JSON; he wants XML. Why? He feels comfortable with it, and he has the tools to work with it. Clearly the new API changes do not respect Dave Winer and the abilities he has acquired over decades.

I left the following comment:

I totally understand where you are coming from.

On the other hand, tools will always be insufficient. I don’t think .NET, for example, has JSON support built in, either.

Technology moves so fast, as you say, that next week there will be something new and shiny. Developers find themselves in the curious position of having to write for today, but prepare for next week’s new thing; they have to find a way to straddle that fence. Open source is not the complete answer to this problem (it’s part of it, tho).

  • So, API developers have the responsibility to provide for developers.
  • Tool developers (closed or open source) have a responsibility to provide the tools in a timely fashion.
  • And developers have the responsibility to have reasonable expectations for what they want supported.

This is a large and deep problem in the world of web APIs. They don’t have to be Just JSON or Just XML or Just HTTP POST or Just XML-RPC or Just SOAP or Just WSDL. This collection of formats and standards can co-exist.

And co-exist they should. An API should be available to the widest possible cross-section of developers, to respect the individual skills those developers have acquired over years and decades.

Because when you don’t do that, when you don’t respect that, you make people like Dave Winer stop coding against your API.

Help Needed: Silicon Image Sil 3512 SATALink Controller BIOS Flash

So, I installed a 2-port eSATA adaptor from LaCie last week and connected my brand spanking new 1.5TB drive to it.

This is a Windows Home Server system, if you must know. So disk activity is always high, both reading and writing.

Now the hard drive itself is perfectly fine (I’ve tested it on other computers using USB 2.0). The enclosure is perfectly fine (since I’ve tested that too).

This leads me to the issue I have with the controller.

This error message always preceded a crash:

“The device, \Device\Scsi\SI3112r1, did not respond within the timeout period.”

That error led me to this Microsoft KB article: http://support.microsoft.com/kb/154690/EN-US/

A quote:

The reason that drives tend to have these types of problems under heavy stress is often slow microprocessors. In a multitasking environment, the processor may not be fast enough to process all the I/O commands that come in nearly simultaneously.

Hmmmmm…… This certainly fits the bill since, after much careful examination, it seems heavy reads cause this problem.

I’ve tried all the other stuff in the KB article except flashing the PCI card’s BIOS.

Now this is where it gets interesting. The LaCie card uses the Silicon Image Sil 3512 SATALink Controller. This is what shows up in Windows Device Manager.

I’ve updated the driver to its latest version from Windows Update. But not the BIOS.

Now, the download is simply a flash tool and a readme file that gives the following command line instructions:

Procedures to run SiFlashTool.exe

  • Open a Windows command prompt.
  • Change to the directory where SiFlashTool.exe and the BIOS binary file are located.
  • Run SiFlashTool to update the flash memory with the BIOS binary code.

The SiFlashTool.exe command line syntax is as follows:

SiFlashTool [/BusNum:xx /DevNum:xx] [/File:filespec] [/v]

Where:

BusNum / DevNum: These parameters specify the PCI bus and device number respectively of a Silicon Image storage controller. They only need to be used if there is more than one Silicon Image storage controller in the system.

File: This parameter specifies the path and name of the BIOS image file to be programmed.

/V: This switch causes the program to only display the version number of a controller’s BIOS. No BIOS image is programmed when this switch is used. The /File parameter is ignored if specified along with this switch. If /BusNum and/or /DevNum are specified, then only the BIOS versions of controllers at the specified PCI locations are displayed.

If I run it with /V, it tells me that BusNum is 05 and DevNum is 04.

Question one: what BIOS binary file are they talking about?

Question two: how am I supposed to include the BusNum and DevNum arguments?


Many thanks for any help all the hardware and command prompt gurus out there can give.

In defense of @friendfeed from @techcrunch’s attack. (@parislemon I’m looking at you)

Allow me to repost the comment I made on this TechCrunch post, it being a blatant attack on Friendfeed.

One. We are not “pissed”. At all. We’d only be up in arms if Facebook closed Friendfeed.

Two. If it’s not news, why are you reporting it?

Three. It is news, because Friendfeed pioneered some of those wonderful features now known as Google Buzz.

Four. The last time Friendfeed had problems was October 29th, with some network problems. Ergo, it is NOT Twitter. At all.

Five. Even if it were Twitter, you never treated Twitter as harshly as you have treated Friendfeed in this post. Even during the Era of the Failwhale.

Six. You don’t like Friendfeed. We get it.

Seven. Here endeth the lesson.

Really. I’m not surprised.

The question I really want to answer here is why people are leaving Friendfeed. I certainly can’t think of a reason to. Even Scoble freely admits that Friendfeed has the superior feature set.

Facebook has a 200-million-strong user base.

So, Mark Zuckerberg, turn them loose on Friendfeed please.