Coding Smaller

Jeff Atwood has a great post on this. It’s essentially about the tendency of code to get larger and larger, ad infinitum. I agree. Code can become so large that it’s unwieldy and difficult to work with. A case in point:

I’m writing a product management system, on and off, as a hobby to fill in the hours. Until a few months ago, my data structures and my data-source code were in the same class. This gave me a problem: the data-source code was problematic, meaning that it brought everything else down with it. I separated the two (logically) separate entities, and you wouldn’t believe how much better both classes now are, both to work with and to troubleshoot. So a little forward planning would have made my code smaller and better.
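A minimal sketch of that separation, with hypothetical names standing in for my actual classes (and an in-memory dict standing in for the real data source):

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Plain data structure: it knows nothing about where products are stored.
@dataclass
class Product:
    sku: str
    name: str
    price: float

# Separate data-source class: all the storage logic lives here, so a bug
# in persistence can no longer drag the data structures down with it.
class ProductRepository:
    def __init__(self) -> None:
        self._items: Dict[str, Product] = {}  # stand-in for a real database

    def save(self, product: Product) -> None:
        self._items[product.sku] = product

    def find(self, sku: str) -> Optional[Product]:
        return self._items.get(sku)
```

Either half can now be tested, or broken, on its own.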

Which brings me to Scott Hanselman’s post on this.

I think that pre-planning is part of it, but there’s only so much Big Design Up Front (BDUF) you can do. More and more, as I work on larger and larger projects (and product suites) I realize that refactoring effectively and often is as or more valuable as pre-planning.

So, simply resisting the tendency to add subroutines, modules, classes, etc. to fill the immediate need for functionality (or in my case, data) is not enough. Some sort of planning is needed, formal or not. Having a good idea of what a given section of code needs and does not need is of paramount importance. I add the “does not need” since I often find subroutines that have long since been made obsolete by a newer subroutine or requirement. It may be five or fifty lines of code, but unused subroutines waste time and space, and can lead to confusion when reading the code (code is part of the documentation).

Scott seems to have the same problem:

I ran this CQL query without the “TOP 10” qualifier on some code on one project and found 292 methods that weren’t being used.

292 methods? Unused? It’ll make me feel better next time I read my code.

And it serves to highlight the point. If it’s not needed, get rid of it!
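Scott used NDepend’s CQL for that query; as a rough Python equivalent, here’s a sketch that flags functions a file defines but never references by name. (The names are illustrative, and a real tool would have to cope with dynamic dispatch and reflection, which this misses.)

```python
import ast

def unused_functions(source: str):
    """List functions defined in `source` but never referenced by name."""
    tree = ast.parse(source)
    defined = {node.name for node in ast.walk(tree)
               if isinstance(node, ast.FunctionDef)}
    # Names that are read somewhere (calls, references)...
    referenced = {node.id for node in ast.walk(tree)
                  if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)}
    # ...plus attribute accesses, to catch method-style calls.
    referenced |= {node.attr for node in ast.walk(tree)
                   if isinstance(node, ast.Attribute)}
    return sorted(defined - referenced)
```

Run it over your own modules and brace yourself for your own 292.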

Programming Languages: Thinking in Code

What precisely do we need out of a programming language? Steve Yegge has a list:

Here’s a short list of programming-language features that have become ad-hoc standards that everyone expects:

  1. Object-literal syntax for arrays and hashes
  2. Array slicing and other intelligent collection operators
  3. Perl 5 compatible regular expression literals
  4. Destructuring bind (e.g. x, y = returnTwoValues())
  5. Function literals and first-class, non-broken closures
  6. Standard OOP with classes, instances, interfaces, polymorphism, etc.
  7. Visibility quantifiers (public/private/protected)
  8. Iterators and generators
  9. List comprehensions
  10. Namespaces and packages
  11. Cross-platform GUI
  12. Operator overloading
  13. Keyword and rest parameters
  14. First-class parser and AST support
  15. Static typing and duck typing
  16. Type expressions and statically checkable semantics
  17. Solid string and collection libraries
  18. Strings and streams act like collections
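Most of these are easiest to see in code. A quick sketch of a handful of them in Python, which ticks several of the boxes:

```python
# Object-literal syntax for arrays and hashes, plus slicing (items 1 and 2):
langs = ["python", "ruby", "javascript"]
ranks = {"python": 1, "ruby": 2}
first_two = langs[:2]

# Destructuring bind (item 4):
def return_two_values():
    return 10, 20
x, y = return_two_values()

# Function literals and first-class closures (item 5):
def make_counter():
    count = 0
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

# Iterators/generators and list comprehensions (items 8 and 9):
squares = [n * n for n in range(5)]
evens = (n for n in range(10) if n % 2 == 0)
```

Whether the Next Big Language ticks all eighteen is another matter.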

Visual Studio missed the cross-platform bit (unless there’s a way of writing Linux-readable C++ that no one has told me about).

A language is not simply a series of semantic rules that work together to produce meaningful output (written or spoken), but also the way we think. When I speak English, I think in English. When I’m speaking Italian, I think in Italian.

A programming language is the same. Programmers need to be able to think in a given language and also anticipate the reaction of the compiler. A well-thought-out subroutine is far better than one riddled with bad, though workable, code.

Thinking in code is important. (It’s also a valid reason to say you’re working.) When one thinks in code, the output becomes automatic. The trick is learning your chosen language(s) thoroughly enough.

Which brings me to the subject of switching languages. Do we want a new programming language to learn every 18-24 months? Can we even sustain that sort of learning curve?

At the end of the day, the Next Big Language (NBL, as Steve says) will have to be worth the effort to switch. Because choosing the right programming language is crucial to programmers – if you can’t think in it….

A wonderful, related podcast here from OpenSource Conversations on Scott Rosenberg’s new book, Dreaming in Code:

Native UI

I happen to completely agree with Jeff Atwood.

I find myself tending towards using IE7 for precisely that reason: a native UI. While the ability to re-skin Firefox with any one of hundreds, if not thousands, of skins is attractive on paper, I find Firefox a bit “strange” after an extended IE7 session.

They are both much the same, with near enough the same abilities, and the UI differences show up for that reason. I agree with Jeff:

When two applications with rough feature parity compete, the application with the native UI will win. Every time. If you truly want to win the hearts and minds of your users, you go to the metal and take full advantage of the native UI.

But when it comes to day-to-day browsing, I’ll always pick native speed and native look and feel over the ability to install a dozen user extensions, or the ability to run on umpteen different platforms. Every single time.

Time to get The Mozilla Foundation to adopt the .Net Framework.

Vista

Now over the past few days, I’ve seen a huge number of people finding my blog posts on Windows Vista. Truth be told, I’ve still got Beta 2 installed, though I don’t use it very much. The reason is simple: I never got round to it.

With the launch of Vista and Office 2007 I got a nasty surprise – Office 2007 Beta 2 stopped working. In the most literal sense of the Word. I had to re-install Office 2003. I’m incensed at this, as it didn’t even give me the opportunity to convert all my 2007-format documents and spreadsheets back to 2003 format. Wake up, guys. So what on earth am I supposed to do now?

On to Vista.

I think it’s nothing less than pure brilliance. Stolen Mac OS X features or not, it’s great. The central question that Vista begs us to ask is “What do we want out of an OS?” It seems Microsoft/Apple (depending on who stole what from whom) have asked themselves that and come out with an answer.

The times that I’ve used Vista, I’ve never once failed to be impressed by some small but incredibly useful feature. The integrated search in the Start menu is amazing. The new layout of the programs is even better, avoiding huge cascading menus that can end up taking up the whole screen.

The Network Centre is extremely useful, allowing you to instantly deduce the problem. It interfaces well with my router (XP tells me the Internet gateway is on, even when it isn’t).

The huge array of options to personalise your computer is extremely important. The need to create something that’s distinctively you is found everywhere, from the organisation of your desk to the decoration of your room.

The sidebar is extremely useful, as is the option to configure which monitor it appears on in a multi-monitor setup (Microsoft is acknowledging the increasing popularity of multi-monitor setups in a bid to boost productivity). I’ve heard that developing Widgets is not every programmer’s cup of tea or coffee.

The way the file system is displayed is important. The new look and feel is extremely different to XP’s, mainly in being more user-friendly (while displaying more info) and giving the user a great number of choices.

The parental controls are included out of the box and are integrated with the accounts and games aspects of Vista. While I have not actually tested this, it seems pretty good. This is essentially Microsoft serving notice of its intention to expand into this traditionally third-party domain.

The account profiles are interesting. The new range of restrictions that can be levelled on an account is extremely extensive. This should make life easier for plenty of network administrators.

The integrated Windows Defender is an intuitive idea. The main question is about what advantages it offers over a third-party product (i.e. Norton or McAfee). If Microsoft’s answer is greater OS integration, then Microsoft open themselves up to a repeat of the EU Competition Commission debacle (only this time from those third-party developers as well). Microsoft need to ensure that all third-party developers have the opportunity to achieve the same OS integration as Microsoft’s own offerings.

The Aero Glass interface needs no explanation, as it speaks for itself.

The irritating security popups become less irritating as time goes on, and seem to appear less frequently as well (did Microsoft allow it to remember preferences?).

Vista is a real RAM hog. On my machine, while doing nothing, it takes up a full 200MB more than XP running a full set of services (i.e. Norton’s firewall, Ghost, etc.) and Visual Web Designer. I can’t even get a DVD to play properly on Vista. Microsoft seems to have spotted this problem and allowed the use of memory keys as RAM (“ReadyBoost”).

Vista is so large I’m probably missing a few things. Vista brings an entirely new .Net Framework for developers to work with (including the Windows Presentation Foundation). I’ve yet to get round to using it, since I’m only now getting to the height of my .Net version two programming powers. I should give it a try.

Finally, I think the number of Vista versions gives people more choices for their wallets. Coupled with the ability to upgrade when you need to, it’s a huge plus for business procurement departments and people on a limited budget (half with this month’s budget, half with next month’s). The only thing missing here is the ability to download Vista from Microsoft (which would save shipping time and cost).

The only question left here is when to buy Vista. Now, with all the bugs that are sure to be found? Or after the first Service Pack? It’s a choice between two evils: contend with the bugs, or contend with the now-obsolete Windows XP. Which is the lesser of the two?

Google Reader

I’ve heard a lot about Google Reader, mainly from a certain Robert Scoble. And boy is it good. I’ve still not gotten round to using the actual Reader extensively. And the reason is that I use the Google Reader widget for the Google Personalised Homepage.

It’s brilliant. The widget allows you to read all of your blogs right there on the page without ever being sent to the originating URL. Almost brilliant enough to get me to learn how to use PHP (or whatever they use).

In the few times I’ve actually gone to the Reader page, I’ve never once failed to be impressed and say “I wonder if there’s a way to do that in ASP.Net”.

If the “Network is the Computer”, as Sun CEO Schwartz says, then Google have a monopoly.

ET iPhone Home

I’ve been wondering exactly what Apple have been thinking. Yep, another smartphone to look at. I’ve no doubt that people will look, due to the iPod halo around Apple. And I’ve no doubt when it comes to quality, owing to the reputation of his Steveness.

There are however a few disturbing things about it. Robert Scoble:

I was much more excited about the iPhone yesterday than I am today. Why? Cause reality is setting in. This thing is not as good as it seems. Paul Kedrosky has the details. He forgot a few things (he lists five):

6) Battery is only two hours up to five hours and is not replaceable (if you play video). UPDATE: sorry for getting that wrong, but tons of people, including some Mac journalists told me it’d only get two hours in video playback mode. Watch a video and your battery is dead. Now your cell phone is dead too. So, you won’t want to watch a video on a plane flight with this thing like you would with your iPod.
7) It’s Cingular only and GSM. That automatically keeps more than half of Americans from considering this and for the rest of the world? They are laughing about the iPhone now.
8) The camera sucks. It’s a 2megapixel device without flash, without zoom. Nokia’s newest cameras blow this one away.
9) No GPS. For a $600 device that really, really, really sucks.

Scoble is right about the Nokia phones being way better. I just got a Nokia N73, and it really rocks. It’s tons better than the iPhone, from what I’ve heard about it. The touchpad? Please. I’d rather have keys that you can actually touch than a touchpad, any day of the week.

Steve’s joy might be premature after Cisco sued Apple for trademark infringement. Yep, another blog to add to my blogroll. The issue is one of principle. Not that principle is a very common thing in business. It seems that what Apple did is the business version of a pie in the face, except that nobody is laughing. Can Apple shrug this one off? Possibly. Trademark cases can take quite a while.

One gets the impression that Apple is trying to cash in as much as possible on the popularity of the iPod. Cue European Competition Commission involvement and 600-million-euro fines.

Vista Licensing and Web 2.0

I was scrolling through my feeds and came across this post over at the One Man Shouting blog.

MSFN is reporting that all Vista Editions will be included on the same DVD, but that the discs will be color coded to indicate which version the consumer purchased.  The good news is that consumers will be able to upgrade to a higher version of Vista if they decide they need more features.

I’m thinking: perhaps Microsoft should go one better (or worse, you decide) and bill users according to the features not in their current licence that they use. So, if I don’t usually use, say, Media Centre, but suddenly need to use it one night when my friends come over, Microsoft could just bill my PayPal account for that time. So I would choose the features I want year-round access to, and anything extra gets billed (at a higher rate, obviously, to encourage people to buy an Ultimate licence). Sun Microsystems do something similar to this, I believe (Salesforce.com?).
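Purely as a thought experiment, the metering might look something like this sketch (every edition name, feature and price here is invented for illustration):

```python
# Hypothetical editions and the features each licence covers.
LICENSED_FEATURES = {
    "Home Basic": {"browser", "mail"},
    "Ultimate": {"browser", "mail", "media_centre", "fax"},
}
PREMIUM_RATE = 0.50  # made-up price per hour for out-of-licence features

class MeteredLicence:
    """Tracks use of features outside the licence, for later billing."""

    def __init__(self, edition: str) -> None:
        self.edition = edition
        self.extra_hours = {}

    def use(self, feature: str, hours: float) -> None:
        # Licensed features cost nothing extra; everything else goes on the tab.
        if feature not in LICENSED_FEATURES[self.edition]:
            self.extra_hours[feature] = self.extra_hours.get(feature, 0.0) + hours

    def bill(self) -> float:
        return round(sum(self.extra_hours.values()) * PREMIUM_RATE, 2)
```

A Home Basic user who borrows Media Centre for an evening gets a small charge; an Ultimate user never does.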

But then again, perhaps bothering people for their PayPal account details every time they open Media Centre or send a fax might just bring out the extremist side in Microsoft customers 😉.

Why I’m posting about a lame idea, I don’t know. Perhaps it’s just the novelty of it. Web 2.0, and the fact that most people are connected to the internet 24/7, are the reasons why these kinds of things are possible. The idea is, in essence, what Sun CEO Jonathan Schwartz calls “The Network is the Computer”: the idea that the existence of a network beyond our immediate hard drive increases the amount of things that we can do.

So, although Web 2.0 is a concept, it’s a powerful concept. We use new tools and technologies to turn what used to be a static web into an extension of an application. In other words, we can use web pages and services as if they were local applications running from a local hard drive.

I think I’m going to spring for Vista Ultimate myself (Media Player definitely included :) ).

Software Engineering

My lecture this morning was on project management, specifically how it applies within the games industry. So I was pleasantly surprised to find an almost identical post over at Coding Horror.

The name “software engineering” is apt enough. Computer science is about creating pretty little algorithms (don’t get me wrong, I use BubbleSort all the time).
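For the record, a minimal sketch of the pretty little algorithm in question, in Python (with the usual early-exit tweak that stops once a pass makes no swaps):

```python
def bubble_sort(items):
    """Classic bubble sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)  # work on a copy, leave the caller's list alone
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):  # the last i items are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a clean pass means we're done early
            break
    return items
```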

Software engineering is about getting a given piece of software to work, no matter what the code looks like. 

Jeff says:

But software projects truly aren’t like other engineering projects. I don’t say this out of a sense of entitlement, or out of some misguided attempt to obtain special treatment for software developers. I say it because the only kind of software we ever build is unproven, experimental software. Sam Guckenheimer explains:

To overcome the gap, you must recognize that software engineering is not like other engineering. When you build a bridge, road, or house, for example, you can safely study hundreds of very similar examples. Indeed, most of the time, economics dictate that you build the current one almost exactly like the last to take the risk out of the project.  

I agree. Let me explain.

It has “engineering” in the title for a reason. You don’t need a fully qualified engineer to fix the gas boiler (though that’s what they’re called in the UK). You do need an engineer to build the world’s longest rail tunnel. That’s why it’s engineering. That’s the semantics.

Also, like engineers, we tweak things constantly. My lecturer gave the example of the motorway just down the road. They built it in a marsh. But the thing is, you can’t build in a marsh. So they froze, yes froze, the ground with Freon and built on top of that. That’s engineering.

That’s why it’s like real, civil engineering.

As far as unproven, experimental software goes, I’d like to give an example. I get project management software, a trial version. I test it to see if I’d be willing to shell out for the full version. I don’t like the program. So I take the basic idea (“keeping track of development schedules”) and build a better project management tool, with all the clunky bits stripped out. Both programs will work and do the job of keeping track of development schedules. One will be better than the other because end-user input has been taken into account.

My point is this: most of what we as software developers do comes from the real world. Surely Pharaoh must have had project management in his time? The challenge is to create something better than the previous iteration. So we port proven tasks, in this case project management, to the computer, while still being ready to improve on the product. Engineers build a bridge once and have to wait till the next bridge comes along to apply what they learnt on the last one. We write software that evolves, yes, evolves. Snapshots of the same bit of software, taken in the middle and at the end, will be completely unrecognizable. So in this sense, we do write experimental programs.

So, in response to Jeff: it depends on your point of view.