Building the Back-End

If you've never read the Ask the Wizard blog, don't worry – I subscribed only yesterday. Although this is mainly a business post, focusing on how building the back end of a service first gives you more flexibility in the products that go live, I see a great many parallels with software development.

Much more simple to understand the FeedBurner example. We didn’t spend the first five months building those two services we rolled out in February. We spent the first five months building out the architecture for feed filtering and feed processing such that we could quickly deploy any new feed service we decided to build, and then we spent about a week building out those first two services. Yes, it was true that somebody could have built a competitor to what we launched in a weekend. However, we would be able to quickly iterate and innovate on top of our release, whereas the built-in-a-weekend competitor would have to keep building one-off services that would eventually either become untenable or require an incredibly long period of underlying architecture refactoring while we continued to innovate.

In software development, you can get right in there and do your bit. But that's all your software does. If you spend the time creating a framework that supports the code you later write, it's so much easier to add features and services, because the framework has already been built and all the really complex stuff goes on under the bonnet. I suppose it comes down to the adage: build it right the first time.
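As a toy sketch of that idea (in Python, with entirely hypothetical names), a small framework can own the plumbing – registration, dispatch, error handling – so that each new service is only a few lines, much like FeedBurner's week-long feature builds on five months of architecture:

```python
# A minimal sketch of the "build the framework first" idea.
# The framework owns the plumbing (registration, dispatch, error
# handling); each new "service" is then only a few lines.
# All names here are hypothetical.

class ServiceFramework:
    def __init__(self):
        self._services = {}

    def register(self, name):
        """Decorator that plugs a new service into the framework."""
        def wrap(func):
            self._services[name] = func
            return func
        return wrap

    def run(self, name, payload):
        """Common dispatch: lookup, call, and uniform error handling."""
        service = self._services.get(name)
        if service is None:
            return {"error": f"unknown service: {name}"}
        return {"result": service(payload)}

framework = ServiceFramework()

# With the framework in place, a new "feed service" is one short function.
@framework.register("word_count")
def word_count(feed_text):
    return len(feed_text.split())

@framework.register("shout")
def shout(feed_text):
    return feed_text.upper()

print(framework.run("word_count", "three word feed"))  # {'result': 3}
```

The built-in-a-weekend competitor, by contrast, would be hard-coding each service from scratch every time.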

Super Large Screens

I've seen not one, but two posts today about multi-monitor/super-large screens.

The first is from Scott Hanselman on multimonitor setups:

While I was at the Eleutian offices last week I was impressed at their commitment to the multi-monitor lifestyle. I'm all about the Third Monitor (in case you haven't heard, it's one better than just two monitors) as are others. If you value your time, you should think about getting the widest view possible.

The Dell 30-inch is amazing…they each had a Dell 30″ widescreen at 2560×1600 pixels, but they also had what appeared to be two 22″ widescreens, rotated and butted up against the 30″, so their horizontal working space was 1050+2560+1050=4660 pixels wide. Glorious. I turned them on to (I hope) RealtimeSoft's must-have Ultramon multimonitor tool. They were running x64, and Ultramon has a 64-bit version, so that was cool.

And Simon Brocklehurst points to this cool video of the most advanced multi-touch, super-large monitor setup I have ever seen:

The point is – we’ve been used to the desktop metaphor for user interfaces for a long time now, but still the “desktops” on our computers are incredibly small compared to our real, physical desktops. If someone gave you a desk in your work place that was 24 inches across, you wouldn’t be able to get much work done on it. And yet, a 24 inch LCD screen is seen as an extravagant luxury by many. Lots of companies give their employees 15 inch “computer desktops” to work on.

You can see his point. I have a 19″ myself that suits me most of the time. But sometimes it's just so small.

I'm wondering, considering Simon's point, why there is still a stigma attached to multi-monitor setups in the workplace. The cost can be outweighed many times over by the productivity benefits, which is actually an incentive for businesses. Space is a concern, but the setup Scott saw doesn't take up that much of it. So why? Perhaps it's slightly too hard for the bosses to believe that having an extra monitor or two to check your emails, or to keep a reference to what you're working on in front of you, is beneficial. It seems like a large outlay for little perceived return. Ha! Question answered. It's a crisis of perception.

As my PC is at home, I'm wondering whether the outlay for a new monitor is justified (given that productivity is not an issue)?

AJAX

I've not had that hard a look at AJAX, but this post shared by Robert Scoble caught my eye. The crux of the post is a look at a new AJAX-based platform from a company called Morfik. From what I read, it does push the boundaries of what can be done within the browser sandbox:

the crux of it is that Morfik uses 100% Ajax and renders in the native browser. Whereas all the other platforms use non-native browser plug-ins (like Flash) or render outside the browser. Adobe’s Apollo and Laszlo both largely output in Flash (a browser plug-in) and Microsoft’s WPF renders outside the browser.

Also, while Google is Java-heavy (no offence intended) for their UIs, Morfik:

allows developers to use high-level programming languages (which give the developer more power – e.g. BASIC, C#, Pascal) to create web apps. It does this by converting apps from high level language INTO Ajax code.

This is great news. As a VB/C# developer, it might just make ASP.Net 2.0 fade into the background as far as UI is concerned.

I’ll be watching.

Google Reader – Again

After 3 weeks of using Google Reader, I can honestly say that I’m not looking back.

The App is just so damn inteligent. The other day I read a shared Doc Searls post via Robert Scoble’s link blog. Not only was the post in the Link blog marked as read. But the same post in my seperate Doc Searls feed was marked read as well. Amazing. Its these small, hardly noticable, under publicised but highly valuble features that earn an app a loyal userbase. 
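I can only guess at the mechanism, but presumably Reader keys every item by a stable identifier (its link or GUID) and stores read state against that key rather than against the feed. A sketch of that guess, with hypothetical names:

```python
# A guess at how a reader might mark the same post read across feeds:
# store read state against a stable per-item identifier (e.g. the
# item's link/GUID), not against the subscription it arrived in.
# Purely illustrative; the real mechanism is not documented here.

class Reader:
    def __init__(self):
        self._read = set()  # identifiers of items already read

    def mark_read(self, item):
        self._read.add(item["link"])

    def is_read(self, item):
        return item["link"] in self._read

reader = Reader()
post = {"link": "http://example.com/doc-searls-post"}

# The same post appears in two subscriptions...
linkblog_copy = dict(post, feed="Scoble's link blog")
direct_copy = dict(post, feed="Doc Searls")

reader.mark_read(linkblog_copy)     # read it via the link blog
print(reader.is_read(direct_copy))  # True – the direct copy is read too
```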

We’ll put up with a little less readability in order to share items with other people, in order to see the information on multiple computers and platforms, and the ability to mash up the content with content from other services ala BlogLines, NewsGator, or Google Reader or other RSS aggregators.

Scoble

There is seriously very little to even begin complaining about with Google Reader.

Perhaps integrated blogging with your own blog (they don't even do Blogger!) in the form of a "Blog This" button. Google should seriously think about this. If they want to become the world's [Personalised] Homepage, they can do a lot more.

The only sour note, if you can call it that, is people who use partial-text feeds (forgive the mixed tenses – I'm worked up about this 🙂). It drives me insane. Perhaps I should just unsubscribe?

The good thing about the whole Google experience is that the page updates as if it's a thick-client application running locally – emails and all. I'd love to see the framework that makes it possible.

Yet another “The Network is the Computer” post

Jonathan Schwartz, CEO of Sun Microsystems, just posted something interesting, and something I'd never thought about.

Nowadays, server-side hardware is tending to focus on utilisation rather than sheer clock speed. I guess the point is to make more use of each single clock pulse: if you have 8 cores with 32 threads executing 32 instructions per clock pulse, it beats the hell out of a single core with a single thread executing one instruction per clock pulse. This is the basis of server virtualisation: because you can assign a different OS (never mind application) to each core, you effectively get 8 servers (in the case of the Niagara chip) for the price of one physical server. Not that you'd find 8 server OSs to run, but that's beside the point. All this fancy stuff is usually dedicated to servers (Intel Core Duos and Quads notwithstanding). And servers need to be networked. And you only have one physical network to use. Or do you?

That’s why we just introduced Project Neptune – a silicon project that marries the parallelism of the microprocessor (for Intel, AMD and SPARC systems), with the parallelism of the underlying operating system (Solaris, Linux or Windows), with parallelism in the network itself. Which in concert with some software magic (which goes by the name of the Crossbow project) allows enterprises to collapse cabling, ports, cards and spending – by bringing parallelism to basic network infrastructure (for geeks, you can take multiple TCP streams and allocate them to different processor threads, spreading out load and freeing up CPU’s/ports). Ports become a physical convenience, just like a server – what’s happening inside depends upon rules or policies set by the user/administrator to automate such decisions. Like I said, the network is the computer, and the computer’s virtualized, so why not the network?

It's simply too obvious to notice till it's pointed out. For each physical port attached to your machine, you can have one physical connection. Here the Sun engineers have turned that inside out, giving network engineers more bang for their buck (or is that more connections for their ports?).
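The core idea from the quote – spreading many TCP streams across processor threads – can be illustrated with a toy sketch. This is not how Neptune or Crossbow actually work internally (those details aren't public in the post); it just shows the hashing trick of pinning each stream to one worker so per-stream ordering survives while load spreads out:

```python
# Toy illustration of spreading network streams across workers:
# hash each stream to a worker, so no single CPU handles everything
# and each stream always lands on the same worker (preserving order).
# Hypothetical sketch, not Sun's actual implementation.

NUM_WORKERS = 4

def worker_for(stream_id: int) -> int:
    """Pick a worker for a stream; the same stream always maps to
    the same worker."""
    return stream_id % NUM_WORKERS

streams = range(16)
assignment = {}
for s in streams:
    assignment.setdefault(worker_for(s), []).append(s)

for w, owned in sorted(assignment.items()):
    print(f"worker {w} handles streams {owned}")
```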

It really is an elegant solution.

Coding Smaller

Jeff Atwood has a great post on this. It's essentially about the tendency of code to get larger and larger, ad infinitum. I agree. Code can become so large that it's unwieldy and difficult to work with. A case in point.

I'm writing a product management system, on and off as a hobby to fill in the hours. Until a few months ago, my data structures and my data-source code were in the same class. This gave me a problem: the data-source code was problematic, meaning that it brought everything else down with it. I separated the two (logically) separate entities, and you wouldn't believe how much better both classes now are, both to work with and to troubleshoot. So a little forward planning would have made me code smaller and better.
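The shape of that separation, sketched in Python with hypothetical names (my actual project is in VB/C#): the data structure knows nothing about storage, and the data source knows nothing about product logic, so a bug in one can't drag the other down.

```python
# A sketch of separating the data structure from the data source.
# Each class has one job, so each can be tested and fixed in isolation.
import json

class Product:
    """Pure data structure: just the fields and behaviour of a product."""
    def __init__(self, name, price):
        self.name = name
        self.price = price

    def discounted(self, fraction):
        return self.price * (1 - fraction)

class ProductSource:
    """Pure data access: loading records, with no product logic in it."""
    @staticmethod
    def from_json(text):
        raw = json.loads(text)
        return [Product(p["name"], p["price"]) for p in raw]

products = ProductSource.from_json('[{"name": "Widget", "price": 10.0}]')
print(products[0].discounted(0.25))  # 7.5
```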

Which brings me to Scott Hanselman’s post on this.

I think that pre-planning is part of it, but there’s only so much Big Design Up Front (BDUF) you can do. More and more, as I work on larger and larger projects (and product suites) I realize that refactoring effectively and often is as or more valuable as pre-planning.

So, simply resisting the tendency to add subroutines, modules, classes, etc. to fill the immediate need for functionality (or in my case, data) is not enough. Some sort of planning is needed, formal or not. Having a good idea of what a given section of code needs, and does not need, is of paramount importance. I add the "does not need" since I often find subroutines that have long since been made obsolete by a newer subroutine or requirement. It may be five or fifty lines of code, but unused subroutines waste time and space, and can lead to confusion when reading the code (code is part of the documentation).

Scott seems to have the same problem:

I ran this CQL query without the “TOP 10” qualifier on some code on one project and found 292 methods that weren’t being used.

292 methods? Unused? It’ll make me feel better next time I read my code.

And it serves to highlight the point. If it's not needed, get rid of it!
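Scott's version was a CQL query over .NET assemblies; a rough, best-effort equivalent for a single Python file, using only the standard library's `ast` module, might look like this. (It's an illustration of the idea only – it misses dynamic calls, method references, and so on.)

```python
# Find functions that are defined but never called in one source file.
# A crude analogue of a "find unused methods" query; real tools do
# much more, but the principle is the same.
import ast

SOURCE = """
def used():
    return 1

def unused():
    return 2

print(used())
"""

tree = ast.parse(SOURCE)
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
called = {n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}

print(sorted(defined - called))  # ['unused']
```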

Programming Languages: Thinking in Code

What precisely do we need out of a programming language? Steve Yegge has a list:

Here’s a short list of programming-language features that have become ad-hoc standards that everyone expects:

  1. Object-literal syntax for arrays and hashes
  2. Array slicing and other intelligent collection operators
  3. Perl 5 compatible regular expression literals
  4. Destructuring bind (e.g. x, y = returnTwoValues())
  5. Function literals and first-class, non-broken closures
  6. Standard OOP with classes, instances, interfaces, polymorphism, etc.
  7. Visibility quantifiers (public/private/protected)
  8. Iterators and generators
  9. List comprehensions
  10. Namespaces and packages
  11. Cross-platform GUI
  12. Operator overloading
  13. Keyword and rest parameters
  14. First-class parser and AST support
  15. Static typing and duck typing
  16. Type expressions and statically checkable semantics
  17. Solid string and collection libraries
  18. Strings and streams act like collections

Visual Studio missed the cross-platform bit (unless there's a way of writing Linux-compatible C++ that no one has told me about).
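For what it's worth, a good number of the items on Yegge's list can be demonstrated in a few lines of Python:

```python
# A handful of the list's features, shown in Python.

# 1. Object-literal syntax for arrays and hashes
langs = ["Python", "Ruby"]
versions = {"Python": 2.5}

# 2. Array slicing
first_two = [10, 20, 30, 40][:2]          # [10, 20]

# 4. Destructuring bind
def return_two_values():
    return 1, 2
x, y = return_two_values()

# 5. Function literals and first-class closures
def make_adder(n):
    return lambda m: m + n
add_five = make_adder(5)

# 9. List comprehensions
squares = [i * i for i in range(5)]       # [0, 1, 4, 9, 16]

print(first_two, (x, y), add_five(3), squares)
```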

A language is not simply a series of semantic rules that work together to produce meaningful output (written or spoken), but also the way we think. When I speak English, I think in English. When I'm speaking Italian, I think in Italian.

A programming language is the same. Programmers need to be able to think in a given language and also anticipate the reaction of the compiler. A well-thought-out subroutine is far better than one riddled with bad, though workable, code.

Thinking in code is important. (It's also a valid reason to say you're working.) When one thinks in code, the output becomes automatic. The trick is learning your chosen language(s) thoroughly enough.

Which brings me to the subject of switching languages. Do we want a new programming language to learn every 18–24 months? Can we even sustain that sort of learning curve?

At the end of the day, the Next Big Language (NBL, as Steve says) will have to be worth the effort to switch. Because choosing the right programming language is crucial to programmers – if you can't think in it….

There's a wonderful, related podcast here from Open Source Conversations on Scott Rosenberg's new book, Dreaming in Code.

Native UI

I happen to completely agree with Jeff Atwood.

I find myself tending towards using IE7 for precisely that reason: a native UI. While the ability to re-skin Firefox with any one of hundreds, if not thousands, of skins is attractive on paper, I find Firefox a bit "strange" after an extended IE7 session.

They are near enough the same in their abilities, so the UI differences stand out for that very reason. I agree with Jeff:

When two applications with rough feature parity compete, the application with the native UI will win. Every time. If you truly want to win the hearts and minds of your users, you go to the metal and take full advantage of the native UI.

But when it comes to day-to-day browsing, I’ll always pick native speed and native look and feel over the ability to install a dozen user extensions, or the ability to run on umpteen different platforms. Every single time.

Time to get The Mozilla Foundation to adopt the .Net Framework.

Vista

Over the past few days, I've seen a huge number of people finding my blog posts on Windows Vista. Truth be told, I've still got Beta 2 installed, though I don't use it very much. The reason is simple: I never got round to it.

With the launch of Vista and Office 2007, I got a nasty surprise – Office 2007 Beta 2 stopped working. In the most literal sense of the Word. I had to re-install Office 2003. I'm incensed at this, as it didn't even give me the opportunity to convert all my 2007-format documents and spreadsheets back to 2003 format. Wake up, guys. So what on earth am I supposed to do now?

On to Vista.

I think it's nothing less than pure brilliance. Stolen Mac OS X features or not, it's great. The central question that Vista begs us to ask is "What do we want out of an OS?". It seems Microsoft/Apple (depending on who stole what from whom) have asked themselves that and come out with an answer.

In the times that I've used Vista, I've never once failed to be impressed by some small but incredibly useful feature. The integrated search in the Start menu is amazing. The new layout of the programs is even better, avoiding huge cascading menus that can end up taking over the whole screen.

The Network Centre is extremely useful, allowing you to instantly deduce a problem. It interfaces well with my router (XP tells me the Internet gateway is on, even when it isn't).

The huge array of options to personalise your computer is extremely important. The need to create something that's distinctively you is found everywhere, from the organisation of your desk to the decoration of your room.

The Sidebar is extremely useful, as is the option to configure which monitor it appears on in a multi-monitor setup (Microsoft is acknowledging the increasing popularity of multi-monitor setups in a bid to boost productivity). I've heard that developing widgets is not every programmer's cup of tea or coffee.

The way the file system is displayed is important. The new look and feel is extremely different to XP's, mainly in being more user-friendly (while displaying more info) and giving the user a greater number of choices.

The parental controls are included out of the box and are integrated with the accounts and games aspects of Vista. While I have not actually tested this, it seems pretty good. This is essentially Microsoft serving notice of its intention to expand into this traditionally third-party domain.

The account profiles are interesting. The new range of restrictions that can be levelled on an account is extremely extensive. This should make life easier for plenty of network administrators.

The integrated Windows Defender is an intuitive idea. The main question is what advantages it offers over a third-party product (i.e. Norton or McAfee). If Microsoft say greater OS integration, then Microsoft open themselves up to a repeat of the EU Competition Commission debacle (only this time from those third-party developers as well). Microsoft need to ensure that all third-party developers have the opportunity to achieve the same OS integration as Microsoft's own offerings.

The Aero Glass interface needs no explanation, as it speaks for itself.

The irritating security popups become less irritating as time goes on, and seem to appear less frequently as well (did Microsoft allow it to remember preferences?).

Vista is a real RAM hog. On my machine, while doing nothing, it takes up a full 200MB more than XP running a full set of services (i.e. Norton's firewall, Ghost, etc.) and Visual Web Designer. I can't even get a DVD to play properly on Vista. Microsoft seems to have spotted this problem and allowed the use of memory keys as RAM ("ReadyBoost").

Vista is so large I'm probably missing a few things. Vista brings an entirely new .Net Framework for developers to work with (formally, Windows Presentation Foundation). I've yet to get round to using it, since I'm only now getting to the height of my .Net 2.0 programming powers. I should give it a try.

Finally, I think the number of Vista versions gives people more choices for their wallets. Coupled with the ability to upgrade when you need to, it's a huge plus for business procurement departments and people on a limited budget (half with this month's budget, half with next month's). The only thing missing here is the ability to download Vista from Microsoft (which would save shipping time and cost).

The only question left is when to buy Vista: now, with all the bugs that are sure to be found, or after the first Service Pack? It's a choice between two evils – contend with the bugs, or contend with the now-obsolete Windows XP. Which is the lesser?

Software Engineering

My lecture this morning was on project management – specifically, how it applies within the games industry. So I was pleasantly surprised to find an almost identical post over at Coding Horror.

The name "software engineering" is apt enough. Computer Science is about creating pretty little algorithms (don't get me wrong, I use BubbleSort all the time).
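Since BubbleSort got a mention, here it is in a few lines of Python – the classic textbook version, not something you'd ship:

```python
# The classic bubble sort: repeatedly swap adjacent out-of-order
# elements until a full pass makes no swaps. O(n^2), but tiny.
def bubble_sort(items):
    items = list(items)  # work on a copy, don't mutate the caller's list
    n = len(items)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```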

Software engineering is about getting a given piece of software to work, no matter what the code looks like. 

Jeff says:

But software projects truly aren’t like other engineering projects. I don’t say this out of a sense of entitlement, or out of some misguided attempt to obtain special treatment for software developers. I say it because the only kind of software we ever build is unproven, experimental software. Sam Guckenheimer explains:

To overcome the gap, you must recognize that software engineering is not like other engineering. When you build a bridge, road, or house, for example, you can safely study hundreds of very similar examples. Indeed, most of the time, economics dictate that you build the current one almost exactly like the last to take the risk out of the project.  

I agree. Let me explain.

It has "engineering" in the title for a reason. You don't need a fully qualified engineer to fix the gas boiler (though that's what they're called in the UK). You do need an engineer to build the world's longest rail tunnel. That's why it's engineering. That's the semantics.

Also, like engineers, we tweak things constantly. My lecturer gave the example of the motorway just down the road. They built it in a marsh. But the thing is, you can't build in a marsh. So they froze, yes froze, the ground with freon and built on top of that. That's engineering.

That's why it's like real, civil engineering.

As far as unproven, experimental software goes, I'd like to give an example. I get project management software, a trial version. I test it to see if I'd be willing to shell out for the full version. I don't like the program. So I take the basic idea ("keeping track of development schedules") and build a better project management tool, with all the clunky bits stripped out. Both programs will work and do the job of keeping track of development schedules. One will be better than the other because end-user input has been taken into account.

My point is this: most of what we as software developers do comes from the real world. Surely Pharaoh must have had project management in his time? The challenge is to create something better than the previous iteration. So we port proven tasks, in this case project management, to the computer, while still being ready to improve on the product. Engineers build a bridge once and have to wait till the next bridge comes along to apply what they learnt on the last one. We write software that evolves, yes evolves. Snapshots of the same bit of software, taken in the middle and at the end, will be completely unrecognisable. So in this sense, we do write experimental programs.

So, in response to Jeff: it depends on your point of view.