(AdMob) Comment of the Day

Not from here, but rather from Kara Swisher's post on Apple barring AdMob.

David K makes this excellent point:

Really? I never realized how I was held hostage. I could swear that I am completely free to buy any smartphone I want if I don't like the iPhone. I wasn't aware that Apple would send its iPolice after me if I walked into a Verizon store tomorrow and picked up a Droid…

The argument in the preceding comments was basically that iAd is yet another instance of Apple's closed ecosystem.

By the same token, devs aren't held hostage with regard to their choice of ad provider. The language in the new ToS makes this clear. They just can't use AdMob.

Quite frankly, Jobs has every right to bar AdMob. To do anything else would be like Microsoft selling Lotus Notes in their stores. Not gonna happen.

Web APIs have an Identity Problem (In response to and in support of @davewiner)

If you’ll remember, a while back I announced I was implementing RSSCloud on Windows Azure. By and large this is going well and I expect to have a demo up and running soon.

PS: What follows is based on an email I sent to my Honours year supervisor at university, and some of it will make it into my thesis too.

The RSSCloud API relies on HTTP POST for messages in and out, and I initially thought Windows Communication Foundation (WCF) was the way to go.

(bear with me, I’m using this to illustrate my point)

Up until now, WCF has been working. However, in order to actually test the RSSCloud code, I've had to write the WCF Operation Contracts as a REST service. It's clearly HTTP POST, but it's not what the RSSCloud specification describes. Though arguably, it should be. Why do I say this? Developers should be able to write code they are comfortable with, whether that is REST, plain HTTP POST, SOAP, or a client generated from a WSDL.

Back to my little problem. So instead of:

[WebInvoke(Method = "POST", UriTemplate = "/ping?url={value}")]
[OperationContract]
String Ping(string value);

I had to use:

[WebInvoke(Method = "POST", UriTemplate = "/ping/url/{value}")]
[OperationContract]
String Ping(string value);

There is a subtle difference: the plain HTTP POST form uses the query string, whereas the REST-style template uses the URL path itself to transmit the information.

Sending an HTTP POST to the first method (where the query string is of the form "?url={value}&port={value}&…") hangs the test client. The request is accepted, but it never returns; I can't even debug the method. Using a pure REST URL (the second method), things work perfectly.

In order for the project as a whole to conform to the project specification (by which I mean the interoperability of the program and its compliance with the HTTP POST methods defined in the RSSCloud specification), being able to accept plain HTTP POST is paramount.

I spoke to one of my WCF-savvy lecturers. Basically, he said there were two ways of doing this: either stick to REST, or encode the URL as part of the POST data. Neither solves the problem of sticking to the specification and using HTTP POST.

So, I was digging around ASP.NET MVC 2 yesterday, building the page that will actually display the posts in a feed. I noticed that the controller actions that handle the request (i.e. the ID of the feed to get) have an [HttpPost] attribute above them. I'd never really given that much thought until yesterday.

After my little chat, I had a hunch. Using MVC, I simply added a controller action like so:

[HttpPost]
public RedirectToRouteResult ThePing()
{
    string url = (string)Request.QueryString["url"];
    url = url.ToLower();
    // ...

And it worked flawlessly. After all my wrestling with WCF configuration and whatnot, I was actually quite shocked that it worked the first time. One of the problems with working with new frameworks is that you keep discovering new things, but only long after you should've known them.

So, to hit the ThePing method above, the URL is http://rsscloudproject/Cloud/ThePing?url=… (obviously this isn't deployed yet).
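For what it's worth, here is a minimal client sketch of hitting that endpoint from C# (the feed URL is a placeholder, and the rsscloudproject host isn't deployed yet):

using System;
using System.Net;

class PingClientSketch
{
    static void Main()
    {
        // Placeholder feed URL; the rsscloudproject host above isn't deployed yet.
        string feedUrl = "http://example.com/rss.xml";

        using (var client = new WebClient())
        {
            // The notification is an HTTP POST with the feed URL carried in the
            // query string, which ThePing reads via Request.QueryString["url"].
            string address = "http://rsscloudproject/Cloud/ThePing?url="
                             + Uri.EscapeDataString(feedUrl);
            string response = client.UploadString(address, "POST", string.Empty);
            Console.WriteLine(response);
        }
    }
}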

Why does this work?

The reason is quite simple: as I understand it, MVC exposes the Request object for you to use directly, while WCF hides it somewhere in the bowels of its inner workings. So, without getting a handle on the Request object, I can't force WCF to process the query string differently. Hence, WCF was the wrong choice of framework for this.

So my code is now 100% compliant with the HTTP POST methods defined in the RSSCloud specification.

Now, what does this mean for the WCF REST project?

I'm keeping it as part of the project. It provides a REST interface, and it provides a WSDL that developers can use to build against my service.

REST is less affected, but I personally think the concept of a WSDL is under-represented when it comes to web-based APIs. Adding these two additional interfaces to the RSSCloud specification will be one of my recommendations in the final report. I feel strongly that a web-based API needs to give developers as many alternative interfaces as possible. It's no fun when you know one way of doing things but the API is only provided in another.

For example, I wish SmugMug provided a WSDL that I could point Visual Studio at and generate a client from.

Both of these situations illustrate a problem with web APIs.

I wrote a while back that Bill Buxton's Mix 10 keynote about designing natural user interfaces, interfaces that respect the abilities and skills acquired by the user, also applies to designers of APIs.

Bill gives this wonderful example of a violin. The violin itself may be worth millions of dollars (if I remember correctly, Joshua Bell paid $4.5 million for his Stradivarius). The bow, for any first violinist in any symphony orchestra, is never less than $10,000. Remember, these are musicians. They make a pittance. So as a proportion of income, it's a fortune. But it's worth it. Why? Because it's worthy of the skills that those musicians have acquired over decades.

Dave Winer today published a post about Facebook not providing XML API responses, and bemoaning that Twitter is going to do the same. Dave does not want JSON. He wants XML. Why? He feels comfortable with it, and he has the tools to work with it. Clearly the new API changes do not respect Dave Winer and the abilities he has acquired over decades.

I left the following comment:

I totally understand where you are coming from.

On the other hand, tools will always be insufficient. I don't think .NET, for example, has JSON support built in, either.

Technology moves so fast, as you say, that next week there will be something new and shiny. Developers find themselves in the curious position of having to write for today while preparing for next week's new thing; they have to find a way to straddle the fence between the two. Open source is not the complete answer to this problem (it's part of it, though).

  • So, API developers have the responsibility to provide for developers.
  • Tool developers (closed or open source) have a responsibility to provide the tools in a timely fashion.
  • And developers have the responsibility to have reasonable expectations for what they want supported.

This is a large and deep problem in the world of web APIs. They don't have to be Just JSON or Just XML or Just HTTP POST or Just XML-RPC or Just SOAP or Just WSDL. This collection of formats and standards can co-exist.

And co-exist they should. An API should be available to the widest possible cross-section of developers, to respect the individual skills that these developers have acquired over years and decades.

Because when you don’t do that, when you don’t respect that, you make people like Dave Winer stop coding against your API.
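To make that concrete, here is a minimal ASP.NET MVC sketch (the FeedItem type and the action are invented for illustration) of one endpoint serving both camps, handing back the same data as either XML or JSON depending on what the caller asks for:

using System;
using System.IO;
using System.Web.Mvc;
using System.Xml.Serialization;

public class FeedItem   // hypothetical payload type
{
    public string Title { get; set; }
    public string Link { get; set; }
}

public class FeedController : Controller
{
    // e.g. /Feed/Item/5?format=json or /Feed/Item/5?format=xml
    public ActionResult Item(int id, string format)
    {
        var item = new FeedItem { Title = "Example post", Link = "http://example.com/5" };

        // JSON for the developers who want JSON...
        if (string.Equals(format, "json", StringComparison.OrdinalIgnoreCase))
            return Json(item, JsonRequestBehavior.AllowGet);

        // ...and XML for the developers who don't.
        var serializer = new XmlSerializer(typeof(FeedItem));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, item);
            return Content(writer.ToString(), "text/xml");
        }
    }
}

Neither camp is forced to change its tools; the API meets both where they already are.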

Bill Buxton: Respect The Skills Acquired By Your Users

Take 30 minutes and watch his keynote appearance: http://live.visitmix.com/MIX10/Sessions/KEY02 (he's introduced at the 2:13 mark). It's worthwhile.

If you notice the title, you’ll see that it’s slightly different to what Bill Buxton actually said. Bill is a UI designer. UI design is the natural application of his design paradigm.

But it goes deeper than simple UI design.

You see, Bill's paradigm is that user interfaces must respect the skill that has been acquired by the user, since the acquiring of skill is one thing we all have in common.

Bill gives this wonderful example of a violin. The violin itself may be worth millions of dollars (if I remember correctly, Joshua Bell paid $4.5 million for his Stradivarius). The bow, for any first violinist in any symphony orchestra, is never less than $10,000. Remember, these are musicians. They make a pittance. So as a proportion of income, it's a fortune. But it's worth it. Why? Because it's worthy of the skills that those musicians have acquired over decades.

Bill was talking purely in terms of user interfaces. But the application of this paradigm is rather broad.

For instance, let's take developers.

Developers typically end up working with libraries and APIs. We rarely rewrite what Joel Spolsky affectionately calls duct tape code, like date comparisons and string builders. It's a brutally Darwinian process in which the libraries and APIs that are easiest to use rise to prominence.

Personally, when I write any code, whether it's an API used by some other code to do processing, or some plumbing for my UI, or even a class definition with functions and parameters, I always think of the way in which this code is going to be consumed, since the consumer typically dictates what it needs to get out of that API/function/class. At this point, simple abstraction takes over: how can I abstract away the processing code so that my consumer code is easier and cleaner? In this case the user interface is not pixels on a screen. No, it's functions and parameters. The consumer code is, and should be treated as, a fully fledged user of that code.
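As a minimal sketch of what I mean (IFeedReader and Post are invented names for illustration), the consuming code asks for exactly what it needs, and the parsing plumbing stays hidden behind the interface:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public class Post            // hypothetical consumer-facing type
{
    public string Title { get; set; }
    public string Link { get; set; }
}

// The "user interface" the consuming code sees: one intention-revealing call.
public interface IFeedReader
{
    IList<Post> GetPosts(string feedUrl);
}

public class RssFeedReader : IFeedReader
{
    public IList<Post> GetPosts(string feedUrl)
    {
        // All the XML plumbing lives here, out of the consumer's way.
        var doc = XDocument.Load(feedUrl);
        return doc.Descendants("item")
                  .Select(i => new Post
                  {
                      Title = (string)i.Element("title"),
                      Link = (string)i.Element("link")
                  })
                  .ToList();
    }
}

The calling code then reads like what it is actually doing: var posts = new RssFeedReader().GetPosts(feedUrl);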

The premise this blog post started off with was that user interfaces should respect the skill that has been acquired by the user. Of course, code has no appreciation of skill; whether the code is elegant or not has no meaning to the compiler. But you, as the programmer, are consuming the service, the function or the API. You have skill. A skilled programmer writes elegant code. He or she draws on a vast reserve of skill and talent even in the simplest of tasks.

My point is that when you write your API, when you write your function, when you define your class, you want to ask: "How can I help the programmer who programs against this service write elegant code? How can I write an interface that respects the skills they have acquired?"

Let's make some practical application of this:

Typically, I find web services frustrating, because I can't just point Visual Studio at a URL and say "this API lives here and I want to use it". WSDL files, where available, make this so much easier because Visual Studio will generate either a web reference or a service reference.

Why do I say this? For me as a programmer, it does not respect my skills to spend hours each day parsing SOAP or XML or JSON results when what I should actually be doing is writing program code. And yes, some of you will say that it takes all the fun out of life. But I want to be able to go off and write code. Program code. Not low-level plumbing, especially plumbing that should be automatically generated for me here in 2010.
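To illustrate, with entirely hypothetical generated names (WeatherServiceClient and GetForecast standing in for whatever Visual Studio produces from a given WSDL), this is the experience a service reference buys you: plain, typed method calls instead of hand-rolled SOAP.

// Hypothetical proxy generated by "Add Service Reference"; the class and
// operation names here are assumptions, not a real service.
var client = new WeatherServiceClient();
var forecast = client.GetForecast("Cape Town");   // a typed call, no manual parsing
Console.WriteLine(forecast.Summary);
client.Close();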

That’s one of the things that blew me away about Microsoft’s OData protocol. Here’s a URL and BOOM you have data and are ready to program. You don’t even need a WSDL (whether we need yet another data protocol to have this functionality is another question altogether). It respects the skills of the user, namely me, the programmer. It allows me to immediately get on with the business of practising my chosen craft.
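As a rough sketch of that "here's a URL, boom, you have data" experience, using the public OData sample service (assuming it is still reachable), even a plain WebClient will do; no WSDL, no generated proxy:

using System;
using System.Net;

class ODataSketch
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // $top, $filter, $orderby and friends are just URL conventions in OData,
            // so the data is one GET away.
            string atom = client.DownloadString(
                "http://services.odata.org/OData/OData.svc/Products?$top=3");
            Console.WriteLine(atom);
        }
    }
}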

It should be noted that I’m not arguing that we never get our hands dirty in the plumbing. Someone has got to do it. And it is essential training for anyone interested in programming, let alone web services.

In some ways writing an interface that respects the skills I have acquired is a meta-function. The better the interface/class/API is, the better my code is going to be: it’s writing code to write code.

Let’s take another aspect. PowerPoint presentations.

This morning's lecture borrowed a slide deck from TechEd 2009. It was about the BizTalk 2009 ESB Toolkit. The slides had no relevance to us as programmers. At all. No respect for the skills we have acquired over years of training and practice. Just a lot of SmartArt. (I'd argue that BizTalk as a whole shows little or no respect for our skills as programmers.) As a result I discovered what talented doodlers there are in my class, and that nobody snores.

But when was the last time you watched a Scott Hanselman presentation? He uses few or no slides. The majority of his talks involve coding in Visual Studio. Seriously, how much more respect can you have for the skills your audience has acquired? As a result, people sit up and pay attention (and that has absolutely nothing to do with Scott's various attempts at humour).

Want to see what I mean? See this video: Creating NerdDinner.com with Microsoft ASP.NET Model View Controller (MVC) or this one from Mix10:

BEYOND FILE | NEW COMPANY: FROM CHEESY SAMPLE TO SOCIAL PLATFORM

Talking of Scott Hanselman, have you seen his BabySmash WPF app? It’s written for babies. Babies smash your keyboard very much at random. The app takes this input and turns it into colourful animated shapes that move about the screen. Once again, the User Interface shows respect to the skills the user has (or in this case hasn’t) acquired.

Do not misconstrue this as me banging my drum about ease of use. The easiest point and click UI is the one Smith and Wesson developed many years ago. Yet that UI shows absolutely no respect to the skills developed by its users. And BabySmash may be easy to use, but it shows no respect for my skills as a developer.

As it has been said many times, software is hard. It will always be hard. There will always be challenges. But if we respect the skills of the code ninjas that come forth to complete those challenges, we all benefit.

Re: 7 reasons why the Windows 7 Phone is THE iPhone Killer

I thought I’d repost my comment on this fascinating post: 7 reasons why the Windows 7 Phone is THE iPhone Killer – read the post first.

I must say that I am seriously tempted to get a Windows Phone 7 phone. For all the above reasons.

As a developer, the major enticement is the fact that I can write my own apps for the phone for free.

Having an iPhone and an Apple TV, I'm pretty heavily invested in iTunes Store content. That is the big thing holding me back. If Microsoft could get their software to authenticate files with Apple's DRM servers, this would be the cherry on top. The media hub is certainly indicative of Microsoft embracing content irrespective of its origin.

Finally, I assume that the phone syncs with Microsoft's beautiful Zune software. iTunes as a software program is terrible, and the Zune software outdoes it six ways to Sunday. Again, another big plus.

All the above having been said, I'm wondering what Apple will do to respond to this. They clearly have a huge task ahead of them. Microsoft is cleverly tapping into the large install base of Windows and Xbox Live games, the large install base of Mesh, the huge install base of Visual Studio and Silverlight developers and, finally, the huge install base of Exchange servers. These are four constituencies for which Apple has no worthy alternative (unless one counts the pitiful Exchange support in the iPhone).

This is clearly Microsoft playing to its strengths and not its weaknesses. They are playing this by their own rules, on their own terms and on their own turf.

This is why competition works.

I’d thoroughly encourage everyone to go and watch the Mix 10 Keynote on-demand here: http://live.visitmix.com/MIX10/Sessions/KEY01

You'll see why I am so excited about this as a developer.

Finally, while I’m on the subject of Mix 10, go ahead and see UI designer Bill Buxton in the second half of the second keynote here for a truly inspiring speech: http://live.visitmix.com/MIX10/Sessions/KEY02 (he’s introduced at the 2:13 mark)

Help Needed: Silicon Image Sil 3512 SATALink Controller BIOS Flash

So, I installed a two-port eSATA adaptor from LaCie last week and connected my brand spanking new 1.5TB drive to it.

This is a Windows Home Server system, if you must know. So disk activity is always high, both reading and writing.

Now the hard drive itself is perfectly fine (I’ve tested it on other computers using USB 2.0). The enclosure is perfectly fine (since I’ve tested that too).

This leads me to the issue I have with the controller.

This error message always preceded a crash:

“The device, \Device\Scsi\SI3112r1, did not respond within the timeout period.”

That error led me to this Microsoft KB article: http://support.microsoft.com/kb/154690/EN-US/

A quote:

The reason that drives tend to have these types of problems under heavy stress is often slow microprocessors. In a multitasking environment, the processor may not be fast enough to process all the I/O commands that come in nearly simultaneously.

Hmmmmm… This certainly fits the bill, since, after much careful examination, it seems heavy reads cause this problem.

I’ve tried all the other stuff in the KB article except flashing the PCI cards’ BIOS.

Now this is where it gets interesting. The LaCie card uses the Silicon Image Sil 3512 SATALink Controller. This is what shows up in Windows Device Manager.

I’ve updated the driver to its latest version from Windows Update. But not the BIOS.

Now the download is simply a flash tool and a readme file that gives the following command line instructions:

Procedures to run SiFlashTool .exe

· Open Windows command prompt

· Change to a directory where the SiFlashTool .exe and BIOS binary file are located.

· Run SiFlashTool to update the flash memory with BIOS binary code

The SiFlashTool.exe command line syntax is as follows:

SiFlashTool [/BusNum:xx /DevNum:xx] [/File:filespec] [/v]

Where:

BusNum / DevNum: These parameters specify the PCI bus and device number respectively of a Silicon Image storage controller. These parameters only need to be used if there is more than one Silicon Image storage controller in the system.

File: This parameter specifies the path and name of the BIOS image file to be programmed.

/V: This switch causes the program to only display the version number of a controller’s BIOS. No BIOS image is programmed when this switch is used. The /File parameter is ignored if specified along with this switch. If /BusNum and/or /Devnum are specified, then only the BIOS versions of controller’s at the specified PCI locations are displayed.

If I run it with /V, it tells me that BusNum is 05 and DevNum is 04.

Question one: what BIOS binary file are they talking about?

Question two: how am I supposed to include the BusNum and DevNum arguments?
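For reference, based purely on the syntax quoted above and the /V output, I'd guess the invocation looks something like this (the BIOS .bin file name being exactly the missing piece from question one):

SiFlashTool /BusNum:05 /DevNum:04 /File:<path to the BIOS .bin file>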

 

Many thanks for any help all the hardware and command prompt gurus out there can give.

In defense of @friendfeed from @techcrunch’s attack. (@parislemon I’m looking at you)

Allow me to repost the comment I made on this Techcrunch post, it being a blatant attack on Friendfeed.

One. We are not "pissed". At all. We'd only be up in arms if Facebook closed Friendfeed.

Two. If it's not news, why are you reporting it?

Three. It is news, because Friendfeed pioneered some of those wonderful features now known as Google Buzz.

Four. The last time Friendfeed had problems was October 29th, with some network issues. Ergo, it is NOT Twitter. At all.

Five. Even if it were Twitter, you never treated Twitter as harshly as you treated Friendfeed in this post. Even during the Era of the Failwhale.

Six. You don’t like Friendfeed. We get it.

Seven. Here endeth the lesson.

Really. I’m not surprised.

The question I really want to answer here is why people are leaving Friendfeed. I certainly can't think of a reason. Even Scoble freely admits that Friendfeed has the superior feature set.

Facebook has a 200-million-strong userbase.

So, Mark Zuckerberg, turn them loose on Friendfeed please.

Scoble's Molecules of Information


Scoble's molecules of information post reminded me of something: blog posts are the original molecules of information. A blog post is a place to bring tweets, pics and YouTube videos together. Since blogging took off, we have gained a host of new tools to add to the army knife. We have Foursquare check-ins, for example; they provide an awful lot of context to location-sensitive tweets.

That's why I'm sharing this here rather than going straight to Friendfeed and Twitter.

I commented on Scoble's post:

Er, Scoble. You can tag tweets. It's called hashtags. What we DON'T have is the ability to search and mine that information.
Friendfeed has hashtags as well. And FF has a far more powerful search engine for all these little atoms of information.
Friendfeed is way ahead of you. They show you related items.
The future is here, it's just not evenly distributed yet.

To which Scoble replied (Disqus comments with replies are awesome, BTW):

Nice try. Hashtags are NOT tags. At least they aren’t anything like the tags that Flickr photos have. FriendFeed does NOT have tags. It has comments. Not the same again. Not even close. FriendFeed’s related items? They are to remove some duplication noise and that feature doesn’t work anywhere close to as well as a human curated system would. Try again.

To which I responded:

Robert, hashtags need a systematic engine behind them to work as actual tags. Twitter should add this.

But nonetheless they provide a way of categorising tweets. TweetDeck's tweet filtering works primarily thanks to hashtags. For events, for example, hashtags are brilliant.

Friendfeed's related items link may primarily be for noise reduction, but this functionality could be greatly extended. Comments are content as well, but quite often they provide context too. See Jesse's FF 3.0 posts on FF this morning for an example, where links between FF items are posted in the comments.

If this were extended to solidify the relationship between items beyond simply showing the items linking to the same page, we'd have your information molecules.

The sum total of tweets, posts, videos, Foursquare check-ins, you name it, about something often ends up providing more context than any one single service or method can provide. Typically, blog posts have filled this need for creating context, collating all this related information together in a single article. This tweet, that twitpic, this video. The first instance of an information molecule.

As noted above, we have already been manually adding in links between related content. Geolocation services have always created information molecules, combining tweets and Google Maps. In like manner, the services concerned need to solidify these methods for other types of information.
What do you think?

In response to an N900 review

There's a nice comparison of the iPhone versus the N900 here.

I’m not sold. So I thought I’d repost my comment here (read the post first):

Good review.
1. How does the N900 handle very Flash-heavy sites? Can you play Flash games, etc.?

2. How do N900 apps compare to iPhone ones? How is the fit and finish? Do UI designers aspire to the Apple-esque UI paradigm that has made iPhone apps so successful (and so user-friendly)? Is there the same range of apps that the iPhone App Store has? The ones that are completely off-the-wall brilliant?

3. I agree that Contacts needs to be updated soon, but I don't like the inclusion of every service's contact list. There are apps that will work with your contacts. And if you use Gmail Mobile Sync, you can manage your contacts online and have them synced to your phone.

4. I'd be very glad to be rid of iTunes. My iTunes library got borked and it's a pain to rebuild and re-sync, etc. Not the first time, either. However, I'm not sure moving to something that's even worse at syncing is a good idea. While there are no apps for Windows Media devices, there are certainly apps for Sonos and Apple TV/iTunes on the iPhone. If your home is all Apple devices, this is no problem.

5. I'm not sure I like the idea of all-in-one messaging. I typically like to keep the real and online worlds separate. Can you turn it off? Customise which services appear? Customise whose online updates appear?

6. Yes, the iPhone camera needs an upgrade. And yes, the sharing options are limited. But you completely ignore the role of apps here. There is a breathtaking range of apps that work with your photos, adding effects, cropping, panoramas, etc. Apps will share your photos on Twitter, Facebook, Posterous, etc.

Finally, I think we need to see what will be in iPhone 4. There will be a new camera no doubt.

Push notifications are an acceptable alternative to multitasking, and I'd take performance and battery life over real multitasking any day of the week.

And I argue that once app developers have figured out how to bring push to their social networking apps, we will see some amazing integration. But even now, there are loads of social apps in the store.

I’m not sure you’ve sold me on the N900.

@Arrington and the CrunchPad

The internet is awash with the news that the CrunchPad is dead. More accurately, dead on arrival.

I won't regurgitate all the original details, which you can find here.


This morning (or this afternoon, depending on where you are), Mike posted an update.

The letters attached make for interesting reading (even if they are long on legalese).

Originally I wrote a couple of long paragraphs before confusing even myself.  But I’ll quote Mike:

There is just no way to argue that TechCrunch is not the joint owner of all intellectual property of the CrunchPad, and outright owner of the CrunchPad trademark. The CEO of Fusion Garage has spent nearly six months this year working from Silicon Valley and our offices. Most of the Fusion Garage team has spent the last three months here working with our team on the project. And our key team members have spent time in Singapore working directly on the hardware and software that powers the device. Fusion Garage emails and their own blog, before it was deleted, acknowledge this. We have also spent considerable amounts of money creating the device, paying the vendor and other bills that Fusion Garage wasn’t able to.

What’s even more absurd is the idea that we somehow knew about Fusion Garage’s intentions to break off the partnership before a couple of days prior to the device launching. Until November 17 we had every reason to believe that Fusion Garage was our trusted ally in creating the CrunchPad. We received nearly daily emails confirming that everything was on track. Raising funding for the project was a goal but wouldn’t have been necessary for some time; besides, we had U.S. investors lined up and ready to put money into the venture. Fusion Garage admitted to us on November 18 that the news of them pulling out of the partnership was “out of the blue.”

There is quite simply no way we will allow this company to move forward on this project. The extent of their fraud is only now becoming clear to me. The audacity of their scheme is staggering. We believe that they engaged with us until the last possible moment to get press attention and access to our development resources and cash, and then walk away hoping that we’d do nothing.

Other Options

 

Disclaimer: What I’m about to do here is be incredibly naive and view the world for a moment the way a programmer does: neat, ordered and sensible.

I wonder what solutions there are to this mess (besides legal proceedings). One is to throw money at the problem. And no, I'm not suggesting Mike buys the company, or the rights.

It's interesting that Mike planned to have ChromeOS running on the CrunchPad at the launch. Although the CrunchPad predates the release of ChromeOS, it is the very epitome of the type of device the creators of ChromeOS envisioned it running on.

So I think that Google, indirectly, has a stake in the success of the CrunchPad.

This may seem unorthodox, but I suggest that Google should buy out FG. Google has the money, after all.

It's a win-win for everyone involved. Mike gets on with his CrunchPad. Google gets a poster child for ChromeOS (plus the ability to contribute significantly to the device software to make sure the Google experience is up to standard).

ChromiumOS is open source. The CrunchPad started out its short life as an open-sourced, crowdsourced project. I can't imagine a better match.

There is a market.

There is a market for that device, even with the iTablet looming on the horizon. Namely, me. I'm sitting on my couch right now as I type this, and a CrunchPad would be much easier than my Dell laptop.

Knowing Apple, the iTablet will be expensive (even if it's a contract device). The CrunchPad will be far cheaper (between $300 and $400, as far as I know).

Besides the price issue (and the little matter of a global recession), rumour, as well as logic, has it that Apple will impose an app approval process for the iTablet. And an App Store. The pros and cons of such a move are for another post, when we have more substantial information.

This stands in stark contrast with the CrunchPad.

Mike says that the CrunchPad can be hacked to run Windows 7 (that would be awesome) and ChromeOS (and, by extension, any Linux-based OS, including Android).

(Actually I think Mike should have a version with no OS preloaded)

I'd much rather buy a CrunchPad I can write my own apps for. And before anyone accuses me of hypocrisy (since I like the App Store), I will not tolerate an App Store for anything approaching a work machine.

And after all the problems developers are having with the App Store, I have no intention of writing Apps for the iPhone (Apple does have the chance to change this, mind you).

Not being able to write apps for my iPhone frustrates me to no end. There are too many roadblocks.

However, with the promise of the CrunchPad, I drool at the app possibilities. Being a totally open platform, the possibilities are endless. Whether one uses ChromeOS (more properly, ChromiumOS), Linux or Windows 7, the underlying hardware will be exposed for the developer to use.

Public Opinion is heavily in favour of the CrunchPad. Public Opinion is squarely behind Mike Arrington (yes, this includes me).

Hopefully it will live.

PS. For a fascinating discussion on the CrunchPad, listen to MacBreak Weekly 169: This Is What Happens Larry

Apple's App Store (or NoStore, the way things are going)

Apple's draconian App Store approval process (more like rejection process, currently) needs a shake-up. Here are a few suggestions to streamline the process.

  1. Reviewers need to have accountability. We have heard of one reviewer accepting an app, but another reviewer rejecting it. Reviewers should manage an account made up of a number of apps, ensuring that one reviewer handles an app throughout its lifecycle on the store.
  2. There should be two kinds of updates – bug fixes that need to be pushed out STAT and upgrades that add features. Splitting updates up like this is the equivalent of adding a car pool lane. Bug fixes go out immediately, but new features are still reviewed.
  3. This has been suggested before, but I’ll say it again: trusted developers should be given carte blanche.

Managing 100k apps on the store is NOT easy. Apple's tenacious grip on every single app is unsustainable. It has to give up some of that to make the App Store work.

To be clear, I love the App Store. I trust Apple that the apps I install aren't going to brick my phone, that hidden features aren't going to leave me embarrassed when others borrow the phone, and that apps will be well designed and thought out.

Apple is trying to preserve the design aesthetic and vision that Steve Jobs had. That is why Apple originally pushed developers to build web apps. And indeed, there are still some web apps around that I use frequently: the Google Reader iPhone page, the Friendfeed iPhone page, etc. Apple never intended things to turn out this way. The App Store mess mars the otherwise pristine reputation of the iPhone. It is a perpetual thorn in Steve Jobs' side.

I hope it gets sorted, soon.