Google Reader – Again

After 3 weeks of using Google Reader, I can honestly say that I’m not looking back.

The app is just so damn intelligent. The other day I read a shared Doc Searls post via Robert Scoble’s link blog. Not only was the post in the link blog marked as read, but the same post in my separate Doc Searls feed was marked read as well. Amazing. It’s these small, hardly noticeable, under-publicised but highly valuable features that earn an app a loyal user base.
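A plausible mechanism (pure speculation on my part, and all names below are hypothetical): the aggregator keys read-state on each item’s GUID rather than on the feed it arrived through, so marking one copy read marks every copy. A minimal Python sketch of the idea:

```python
# Sketch: read-state keyed on an item's GUID, not on the feed it came from.
# This is a guess at the behaviour, not how Google Reader actually works.

class Aggregator:
    def __init__(self):
        self.read_guids = set()   # read-state shared across all feeds
        self.feeds = {}           # feed name -> list of (guid, title)

    def add_feed(self, name, items):
        self.feeds[name] = list(items)

    def mark_read(self, guid):
        self.read_guids.add(guid)  # one flag covers every copy of the item

    def unread(self, feed_name):
        return [title for guid, title in self.feeds[feed_name]
                if guid not in self.read_guids]

agg = Aggregator()
post = ("searls-2007-01-30", "Doc Searls: some post")
agg.add_feed("Scoble's link blog", [post])
agg.add_feed("Doc Searls", [post, ("searls-2007-01-29", "Another post")])

agg.mark_read("searls-2007-01-30")   # read it via the link blog...
print(agg.unread("Doc Searls"))      # ...and it's read in the direct feed too
```

Because the GUID travels with the item, the same post is recognised no matter which feed carried it in.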

We’ll put up with a little less readability in order to share items with other people, in order to see the information on multiple computers and platforms, and the ability to mash up the content with content from other services ala BlogLines, NewsGator, or Google Reader or other RSS aggregators.

Scoble

There is seriously very little to even begin complaining about in Google Reader.

Perhaps integrated blogging with your own blog (they don’t even do Blogger!) in the form of a “Blog This” button. Google should seriously think about this. If they want to become the world’s [Personalised] Homepage, they can do a lot more.

The only sour note, if you can call it that, is people who use partial-text feeds (forgive the mixed tenses – I’m worked up about this 🙂). It drives me insane. Perhaps I should just unsubscribe?

The good thing about the whole Google experience is that the page updates as if it’s a thick-client application running locally – emails and all. I’d love to see the framework that makes it possible.

Yet another “The Network is the Computer” post

Jonathan Schwartz, CEO of Sun Microsystems, just posted something interesting, and something I’d never thought about.

Nowadays, server-side hardware is tending to focus on utilisation rather than sheer clock speed. I guess the point is to make more use of each single clock pulse: if you have 8 cores with 32 threads executing 32 instructions per clock pulse, it beats the hell out of a single core with a single thread executing one instruction per clock pulse. This is known as server virtualisation, essentially because you can assign a different OS (never mind application) to each core, effectively getting 8 servers (in the case of the Niagara chip) for the price of one physical server. Not that you’d find 8 server OSes to run, but that’s beside the point. All this fancy stuff is usually dedicated to servers (Intel Core Duos and Quads notwithstanding). And servers need to be networked. And you only have one physical network to use. Or do you:

That’s why we just introduced Project Neptune – a silicon project that marries the parallelism of the microprocessor (for Intel, AMD and SPARC systems), with the parallelism of the underlying operating system (Solaris, Linux or Windows), with parallelism in the network itself. Which in concert with some software magic (which goes by the name of the Crossbow project) allows enterprises to collapse cabling, ports, cards and spending – by bringing parallelism to basic network infrastructure (for geeks, you can take multiple TCP streams and allocate them to different processor threads, spreading out load and freeing up CPU’s/ports). Ports become a physical convenience, just like a server – what’s happening inside depends upon rules or policies set by the user/administrator to automate such decisions. Like I said, the network is the computer, and the computer’s virtualized, so why not the network?

It’s simply too obvious to notice till it’s pointed out. For each physical port attached to your machine, you can have one physical connection. Here Sun’s engineers have turned that inside out, giving network engineers more bang for their buck (or is that more connections for their ports?).
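The geek aside in Schwartz’s quote – taking multiple TCP streams and allocating them to different processor threads – can be sketched (very loosely, in software, and nothing like the real Neptune silicon) with an ordinary thread pool: each stream is hashed to a worker, so every packet of a given stream is always serviced by the same thread, and the streams fan out across the cores.

```python
# Rough software analogy of spreading TCP streams across processor threads.
# "Packets" and "streams" here are simulated tuples, not real sockets.
import queue
import threading

NUM_WORKERS = 4   # stand-in for hardware threads/cores

queues = [queue.Queue() for _ in range(NUM_WORKERS)]
results = {}
lock = threading.Lock()

def worker(q):
    while True:
        item = q.get()
        if item is None:          # sentinel: shut this worker down
            break
        stream_id, payload = item
        with lock:
            results.setdefault(stream_id, []).append(payload)

threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
for t in threads:
    t.start()

# Hash each packet's stream id onto a worker, so all packets of one
# stream land on one thread and different streams spread across threads.
packets = [("stream-%d" % (i % 6), "pkt-%d" % i) for i in range(24)]
for stream_id, payload in packets:
    queues[hash(stream_id) % NUM_WORKERS].put((stream_id, payload))

for q in queues:
    q.put(None)
for t in threads:
    t.join()

print(len(results))   # 6 distinct streams, each fully processed
```

Pinning a stream to one worker also preserves per-stream packet ordering – the same reason the real thing hashes flows rather than round-robining individual packets.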

It really is an elegant solution.

Link Blog

I spent a few hours syncing my blogroll with Google Reader and did a little feed maintenance. So my blogroll is pretty much up to date.

I also started adding items to my link blog, provided by Google Reader. The RSS feed is at the top of the right-hand column. It’ll contain a few items that I think are worth sharing.

Isn’t technology wonderful? 🙂

Coding Smaller

Jeff Atwood has a great post on this. It’s essentially about the tendency of code to get larger and larger, ad infinitum. I agree. Code can become so large that it’s unwieldy and difficult to work with. A case in point.

I’m writing a product management system, on and off as a hobby to fill in the hours. Until a few months ago, my data structures and my data-source code were in the same class. This gave me a problem: the data-source code was problematic, meaning that it brought everything else down with it. I separated the two (logically) separate entities, and you wouldn’t believe how much better both classes now are, both to work with and to troubleshoot. So a little forward planning would have let me code smaller and better.
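The split looks something like this. My project is actually in .Net, but the principle is language-agnostic, so here’s a Python sketch (all class and field names hypothetical): the data structure knows nothing about storage, and the data-source code lives in its own class that can fail, change or be swapped without dragging the data structure down with it.

```python
# Sketch of separating the data structure from the data-source code.
from dataclasses import dataclass

@dataclass
class Product:
    """Pure data structure: no knowledge of where or how it is stored."""
    sku: str
    name: str
    price: float

class ProductStore:
    """All data-source code lives here; swap the backend (dict, file,
    database) without touching Product at all."""
    def __init__(self):
        self._rows = {}   # in-memory stand-in for a real backend

    def save(self, product: Product) -> None:
        self._rows[product.sku] = product

    def load(self, sku: str) -> Product:
        return self._rows[sku]

store = ProductStore()
store.save(Product("A-100", "Widget", 9.99))
print(store.load("A-100").name)   # -> Widget
```

When the store misbehaves, only `ProductStore` needs debugging; `Product` stays trivially correct – which is exactly the troubleshooting win described above.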

Which brings me to Scott Hanselman’s post on this.

I think that pre-planning is part of it, but there’s only so much Big Design Up Front (BDUF) you can do. More and more, as I work on larger and larger projects (and product suites) I realize that refactoring effectively and often is as or more valuable as pre-planning.

So, simply resisting the tendency to add subroutines, modules, classes, etc. to fill the immediate need for functionality (or in my case, data) is not enough. Some sort of planning is needed, formal or not. Having a good idea of what a given section of code needs and does not need is of paramount importance. I add the “does not need” since I often find subroutines that have long since been made obsolete by a newer subroutine or requirement. It may be five or fifty lines of code, but unused subroutines waste time and space, and can lead to confusion when reading the code (code is part of the documentation).

Scott seems to have the same problem:

I ran this CQL query without the “TOP 10” qualifier on some code on one project and found 292 methods that weren’t being used.

292 methods? Unused? It’ll make me feel better next time I read my code.

And it serves to highlight the point: if it’s not needed, get rid of it!
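Scott ran NDepend’s CQL over .Net code; the same kind of dead-method check can be roughed out in Python with the standard `ast` module – collect the names of defined functions, collect the names that are actually called, and report the difference. (It’s only a heuristic: it misses methods, dynamic dispatch and anything called via `getattr`.)

```python
# Rough unused-function finder: defined names minus called names.
import ast

SOURCE = """
def used(): return 1
def unused(): return 2
print(used())
"""

tree = ast.parse(SOURCE)

# Every top-level or nested function definition in the module.
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}

# Every plain-name call site (ignores method calls like obj.f()).
called = {n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}

print(sorted(defined - called))   # -> ['unused']
```

Pointed at a real codebase (reading each file instead of the inline `SOURCE` string), even this crude version turns up candidates for deletion.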

Programming Languages: Thinking in Code

What precisely do we need out of a programming language? Steve Yegge has a list:

Here’s a short list of programming-language features that have become ad-hoc standards that everyone expects:

  1. Object-literal syntax for arrays and hashes
  2. Array slicing and other intelligent collection operators
  3. Perl 5 compatible regular expression literals
  4. Destructuring bind (e.g. x, y = returnTwoValues())
  5. Function literals and first-class, non-broken closures
  6. Standard OOP with classes, instances, interfaces, polymorphism, etc.
  7. Visibility quantifiers (public/private/protected)
  8. Iterators and generators
  9. List comprehensions
  10. Namespaces and packages
  11. Cross-platform GUI
  12. Operator overloading
  13. Keyword and rest parameters
  14. First-class parser and AST support
  15. Static typing and duck typing
  16. Type expressions and statically checkable semantics
  17. Solid string and collection libraries
  18. Strings and streams act like collections

Visual Studio misses the cross-platform bit (unless there’s a way of writing Linux-readable C++ that no one has told me about).
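To make the list concrete, several of its items fit in a few lines of (for example) Python – object literals (1), array slicing (2), destructuring bind (4), first-class closures (5), generators (8) and list comprehensions (9):

```python
point = {"x": 3, "y": 4}             # 1: hash (object) literal
nums = [1, 2, 3, 4, 5]               # 1: array literal

mid = nums[1:4]                      # 2: array slicing

x, y = point["x"], point["y"]        # 4: destructuring bind

def make_adder(n):                   # 5: first-class closure over n
    return lambda v: v + n
add10 = make_adder(10)

def evens(seq):                      # 8: generator
    for v in seq:
        if v % 2 == 0:
            yield v

squares = [v * v for v in nums]      # 9: list comprehension

print(add10(x + y), mid, list(evens(nums)), squares)
```

Each of these is a one-liner here, which is exactly why programmers now expect them everywhere.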

A language is not simply a series of semantic rules that work together to produce meaningful output (written or spoken), but also the way we think. When I speak English, I think in English. When I’m speaking Italian, I think in Italian.

A programming language is the same. Programmers need to be able to think in a given language, and also anticipate the reaction of the compiler. A well-thought-out subroutine is far better than one riddled with bad, though workable, code.

Thinking in code is important. (It’s also a valid reason to say you’re working.) When one thinks in code, the output becomes automatic. The trick is learning your chosen language(s) thoroughly enough.

Which brings me to the subject of switching languages. Do we want a new programming language to learn every 18–24 months? Can we even sustain that sort of learning curve?

At the end of the day, the Next Big Language (NBL, as Steve says) will have to be worth the effort to switch. Because choosing the right programming language is crucial to programmers – if you can’t think in it….

A wonderful, related podcast here from OpenSource Conversations on Scott Rosenberg’s new book, Dreaming in Code:

Native UI

I happen to completely agree with Jeff Atwood.

I find myself tending towards using IE7 for precisely that reason: a native UI. While the ability to re-skin Firefox with any one of hundreds, if not thousands, of skins is attractive on paper, I find Firefox a bit “strange” after an extended IE7 session.

They are both much the same, with near enough the same abilities, and the UI differences show up for that very reason. I agree with Jeff:

When two applications with rough feature parity compete, the application with the native UI will win. Every time. If you truly want to win the hearts and minds of your users, you go to the metal and take full advantage of the native UI.

But when it comes to day-to-day browsing, I’ll always pick native speed and native look and feel over the ability to install a dozen user extensions, or the ability to run on umpteen different platforms. Every single time.

Time to get The Mozilla Foundation to adopt the .Net Framework.

Vista

Now, over the past few days, I’ve seen a huge number of people finding my blog posts on Windows Vista. Truth be told, I’ve still got Beta 2 installed, though I don’t use it very much. The reason is simple: I never got round to it.

With the launch of Vista and Office 2007, I got a nasty surprise – Office 2007 Beta 2 stopped working. In the most literal sense of the Word. I had to re-install Office 2003. I’m incensed at this, as it didn’t even give me the opportunity to convert all my 2007-format documents and spreadsheets back to 2003 format. Wake up, guys. So what on earth am I supposed to do now?

On to Vista.

I think it’s nothing less than pure brilliance. Stolen Mac OS X features or not, it’s great. The central question that Vista begs us to ask is “What do we want out of an OS?”. It seems Microsoft/Apple (depending on who stole what from whom) have asked themselves that and come up with an answer.

The times that I’ve used Vista, I’ve never once failed to be impressed by some small but incredibly useful feature. The integrated search in the Start menu is amazing. The new layout of the programs is even better, avoiding huge cascading menus that can end up taking over the whole screen.

The Network Centre is extremely useful, allowing you to instantly deduce a problem. It interfaces well with my router (XP tells me the Internet gateway is on, even when it isn’t).

The huge array of options to personalise your computer is extremely important. The need to create something that’s distinctively you is found everywhere, from the organisation of your desk to the decoration of your room.

The sidebar is extremely useful, as is the option to configure which monitor it appears on in a multi-monitor setup (Microsoft is acknowledging the increasing popularity of multi-monitor setups in a bid to boost productivity). I’ve heard that developing widgets is not every programmer’s cup of tea or coffee.

The way the file system is displayed is important. The new look and feel is extremely different from XP’s, mainly being more user-friendly (while displaying more info) and giving the user a great number of choices.

The parental controls are included out of the box and are integrated with the accounts and games aspects of Vista. While I have not actually tested them, they seem pretty good. This is essentially Microsoft serving notice of its intention to expand into this traditionally third-party domain.

The account profiles are interesting. The new range of restrictions that can be levelled on an account is extremely extensive. This should make life easier for plenty of network administrators.

The integrated Windows Defender is an intuitive idea. The main question is what advantages it offers over a third-party product (i.e. Norton or McAfee). If Microsoft say greater OS integration, then Microsoft open themselves up to a repeat of the EU Competition Commission debacle (only this time from those third-party developers as well). Microsoft need to ensure that all third-party developers have the opportunity to achieve the same OS integration as Microsoft’s own offerings.

The Aero Glass interface needs no explanation; it speaks for itself.

The irritating security popups become less irritating as time goes on, and seem to appear less frequently as well (did Microsoft allow it to remember preferences?).

Vista is a real RAM hog. On my machine, while doing nothing, it takes up a full 200MB more than XP running a full set of services (i.e. Norton’s firewall, Ghost, etc.) and Visual Web Designer. I can’t even get a DVD to play properly on Vista. Microsoft seems to have spotted this problem and allowed the use of memory keys as RAM (“ReadyBoost”).

Vista is so large I’m probably missing a few things. Vista brings an entirely new .Net Framework for developers to work with (including Windows Presentation Foundation). I’ve yet to get round to using it, since I’m only now getting to the height of my .Net version 2 programming powers. I should give it a try.

Finally, I think the number of Vista versions gives people more choices for their wallets. Coupled with the ability to upgrade when you need to, it’s a huge plus for business procurement departments and people on a limited budget (half with this month’s budget, half with next month’s). The only thing missing here is the ability to download Vista from Microsoft (saving shipping time and cost).

The only question left here is when to buy Vista: now, with all the bugs that are sure to be found, or after the first Service Pack. It’s a choice between contending with the bugs and contending with the now-obsolete Windows XP. Which is the lesser of two evils?