Windows Home Server: Resource Deconfliction

As more and more Windows Home Server Add-ons are introduced, WHS becomes less like a piece of software (and hardware) and more like an appliance.

More and more Add-ons mean that we ask more and more of our systems. These demands mean that finite resources have to be allocated and shared with the WHS software itself.

DEMigrator.exe comes to mind (the magic behind folder duplication). Since DEMigrator does not actually have a front end (short of turning off folder duplication), it is impossible to pause or stop it when it’s running in favour of something more urgent. Granted, we could change our backup window, but this is not always convenient or possible.

What WHS needs is some way of managing resources on a much more granular level than process priorities. By that I mean that WHS makes a logical guess as to which process(es) need to run now and which processes are less immediate.

So if I use SageTV to record show x at time y and a defrag (or other processor-intensive program) is scheduled to run at the same time, we need resource deconfliction to kick in and sort it out. We can do this in one of two ways: either throttle back the processor-intensive process or reschedule it (if the drive isn’t very fragmented, a missed defrag pass won’t make much of a difference).
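The “throttle back” option can be sketched with nothing more than the Process class. This is a minimal illustration only: the process name ("defrag") is a stand-in for whatever is hogging the CPU, and a real deconflictor would remember and restore the original priority afterwards.

```csharp
using System;
using System.Diagnostics;

class Deconflictor
{
    // Drop every matching process to Idle priority so something more
    // urgent (a SageTV recording, say) gets the CPU first.
    // Returns the number of processes throttled.
    public static int ThrottleByName(string processName)
    {
        int throttled = 0;
        foreach (Process p in Process.GetProcessesByName(processName))
        {
            p.PriorityClass = ProcessPriorityClass.Idle;
            throttled++;
        }
        return throttled;
    }
}
```

Idle-priority processes only run when the CPU would otherwise be unoccupied, which is roughly the behaviour we want from a polite background defrag.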

Naturally, we can’t expect this souped-up task scheduler to be able to handle every occurrence of every program. This means that WHS would simply notify the offending process(es) of the situation, and it would be up to each program to implement a responsible and reasonable strategy to handle it.

If you’ve got a high-end system running WHS, this discussion isn’t very dramatic. But between backups, defrags, virus scans, DEMigrator, SageTV and others ad nauseam (even automatic Windows Update needs to be able to safely restart) jockeying for resources, something needs to manage this safely and well.

Essentially, this brings WHS closer to the headless system originally envisioned. It would save me a lot of remoting in every day.

Before we finish, let’s take a look at the specs for the WHS systems commercially available from HP and others to get an idea of exactly what resources are available.

The Microsoft minimum spec is a 1 GHz processor, 512 MB RAM and 1x 70 GB drive.

The recommended spec is a 64-bit compatible Intel Pentium 4, AMD x64 or newer with 512 MB RAM and two hard drives, with a 300 GB primary disk.

  • HP MediaSmart: AMD Sempron 3400+ 1.8 GHz 64-bit, 512 MB RAM, 2x 500 GB
  • Norco DS-520: Intel Celeron M 1 GHz, 1 GB RAM, 1x 250 GB
  • Piranha Home Server: Intel Celeron 430 (1.8 GHz, 512 KB, Conroe), 1 GB RAM, 2x 250 GB
  • T2-WHS-A3 Harmony Home Server: Intel Celeron 220 1.2 GHz, 512 MB RAM (1 GB optional), 1x 500 GB (1 TB/2 TB optional)
  • T7-HSA Harmony Home Server: VIA C7-M “Esther” 1.5 GHz, 512 MB RAM (1 GB optional), 1x 500 GB (1 TB optional)
  • My own homebuilt system (Dell PowerEdge SC440): Intel Celeron D 2.8 GHz, 2 GB RAM, 1x 160 GB, 1x 400 GB and 2x 750 GB

I think this is a pretty representative sample of the entire range. You can get reviews of these servers and others from the We Got Served hardware page.

NB: The extra possibilities of multi-core, 64-bit machines allowing true concurrent execution are mind-boggling.

Visual Studio Install Error 1935 (HRESULT: 0x80070BC9) Fix

As I wrote here last week, getting Visual Studio 2008 installed was a bit of a problem for me on my main desktop PC.

And I couldn’t find a fix anywhere. So, since the installation was successful on my laptop (they are nearly identical systems), I set about trying to find some difference between them.

What I came up with was that I’d had Visual Web Developer 2008 Express installed and had uninstalled it before my Visual Studio 2008 Pro install.

So in the finest tradition of Voodoo Troubleshooting I did the following:

  1. Mounted the Visual Studio Express Editions DVD image available from Microsoft here
  2. Installed Visual C# Express (it looks as if any edition will do)
  3. Restarted
  4. Uninstalled only Visual C# Express (the runtime prerequisites will also uninstall)
  5. Restarted
  6. Installed Visual Studio 2008 Professional

I’m not quite sure why this works. My theory is that it fixes the registry or the .NET install (see my earlier post for details).

Happy Coding 🙂 !

Installing Visual Studio 2008

UPDATE: I found a fix. See here.

Right. Let’s get this straight. I’m running Vista Business with Visual Studio 2005 Standard installed (and all the extras – SQL Server etc.).

The short version is that Visual Studio 2008 Professional refuses to install itself. It installed .NET 3.5, Document Explorer 2008 and the Web Authoring Component, and then quit at some point while installing Visual Studio itself.

Here’s the error log:

Microsoft Visual Studio 2008 Professional Edition – ENU: [2] ERROR: Error 1935. An error occurred during the installation of assembly 'Microsoft.VC90.DebugCRT,version="9.0.21022.8",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="x86",type="win32"'. Please refer to Help and Support for more information. HRESULT: 0x80070BC9.

I’ve no idea what is going on. If you Google the error code, you get this error for VS2005 (or SP1), .NET 2 or SQL Server. Searching by HRESULT points to this MSDN forum where the discussion is about a VS2008 compile problem.

Now here’s the thing. It installs perfectly on my laptop (also running Vista Business with Visual Studio 2005 installed with all the bells and whistles). So the download (and no, it’s a perfectly legal copy) is definitely not corrupted.

HELP!!!!!

Ever think of yourself as a walking server farm?

Now that’s the question I asked myself after reading this fascinating article about DNA as a programming language and cells as the computers.

[The Source code] Is here. This is not a joke. We can wonder about the license though. Maybe we should ask the walking product of this source: Craig Venter. The source can be viewed via a wonderful set of perl scripts called 'Ensembl'. The human genome is about 3 gigabases long, which boils down to 750 megabytes. Depressingly enough, this is only 2.8 Mozilla browsers.

DNA is not like C source but more like byte-compiled code for a virtual machine called ‘the nucleus’. It is very doubtful that there is a source to this byte compilation – what you see is all you get.

And people wonder about the value of reading huge numbers of feeds….

Via Scott Rosenberg (of Dreaming in Code fame).

Quote of the Day

I’m still here and posting will resume soon. In the mean while:

Amazingly, Adobe seems to have entirely missed the fact that the reason that the Flash video format has taken off is that it’s so fluid, versatile and remixable — not because they sucked up to some Hollysaurs and crippled their technology. – Cory Doctorow

Via Doc Searls

On Robotic Fish

I was reading this week’s New Scientist (the print edition, mind you) and this story about what the US Navy’s Office of Naval Research is doing caught my eye:

AGILE robotic fish that look like the real thing are being developed to act as government spies.

The article goes on to say that the fish will have cameras and communicate with each other using sonar.

To anyone who has read Michael Crichton’s Prey, this sounds suspiciously like a multi-agent system, albeit one that uses physical agents rather than computer-simulated ones.

Wikipedia has this to say:

The exact nature of the agents is a matter of some controversy. They are sometimes claimed to be autonomous. For example a household floor cleaning robot can be autonomous in that it is dependent on a human operator only to start it up. On the other hand, in practice, all agents are under active human supervision. Furthermore, the more important the activities of the agent are to humans, the more supervision that they receive. In fact, autonomy is seldom desired. Instead interdependent systems are needed.

[…]

MAS systems are also referred to as “self-organized systems” as they tend to find the best solution for their problems “without intervention”.

[…]

The main feature which is achieved when developing MAS systems, if they work, is flexibility, since a MAS system can be added to, modified and reconstructed, without the need for detailed rewriting of the application. These systems also tend to be rapidly self-recovering and failure proof, usually due to the heavy redundancy of components and the self managed features, referred to, above.

Although we’re not likely to see these become evolving, man-eating piranhas, it is something to keep an eye on (if you’ve read the book, you’ll know where I’m coming from).

And it demonstrates a physical application of this technology in that, while the agents are not strictly independent, they are not exactly predictable either.

At least, that is the way I understand it.

Microsoft-Yahoo

I was thinking about the merger this morning and it struck me that the model tech merger is… Adobe’s buyout of Macromedia for $3.4 billion.

Now it is nowhere near the size or complexity of the Microsoft merger offer. But the point is that both companies brought their software together to create the Creative Suite series.

I mean, think of it: different software, different programming, different programming culture, ethics and architecture.

Now, I currently have CS3 installed. It’s quite amazing how well Macromedia’s software (Flash, Freehand – now Illustrator) works with the rest of the suite. My point is that it works, not that it’s amazing (which it is).

In a similar vein, Yahoo and Microsoft are totally different companies.

The problem isn’t the technology. The different technology might be good; it’ll force Microsoft to take another look at Linux. The problem is the people, the culture. No matter how good your team may be, they’ll never turn out anything if they can’t work together.

Getting the two cultures to play nice is simple: phone up Adobe, ask who their consultancy company was for the 2005 merger, and hire them 🙂 . Or hire Jim Rohn, Anthony Robbins and Tim Berners-Lee. As teams are integrated, send them off on a team-building course or something.

Now, is $44 billion too little for Yahoo!? The board seems to think so. Forget the “only logical option” argument for a second here and think it through. As Kara Swisher put it:

Indeed, some think that if the company was managed more aggressively–and that has been a big if at Yahoo for far too long now–Yahoo shares could be trading closer to $30 a share.

And that makes $31 kind of a bargain.

It’s not such a leap of faith, in fact.

Many mid-level and senior Yahoo execs have told me that CEO Jerry Yang’s too-cautious approach has been the problem and that there was pressure building for a change

In fact, the more you think about it, the more it sounds as if Microsoft have jumped the gun looking (hoping?) for a quick deal.

Now for the de rigueur Scoble quote:

“Are they crazy?”

I said “probably, and arrogant too.” Then she wondered why they would do such a thing. I told her that I agreed with Philip Greenspun, who says that to reject this deal is lunacy. Since I know Yahoo’s board members aren’t lunatics, I figure there must be some other answer. I told Maryam “they are probably trying to see if the offer will go up.”

Yahoo! are playing a high-stakes poker game. The winners get to bask in all their glory for the next few decades and the losers look for new jobs.

If you are going to pay money for something, make sure that you get your money’s worth. If Microsoft think that they are going to get Yahoo on the cheap (relatively speaking, that is), they need to rethink their attitude. Yahoo is worth shelling out for, but is above being treated like a second-class citizen.

And if Microsoft have such an attitude with Yahoo!, any merging will be a disaster. Yahoo’s minds (Linux or not) will leave and the empire will suffer (sounds like Star Wars). Microsoft will be left with a rotting hulk that will drain money and resources for no observable gain. It’ll be like Alice in Wonderland, where she has to run faster and faster to stay where she is (the translation being that Yahoo will require more and more to stay the same).

So although this sounds like a 7 Habits lecture, Microsoft’s attitude will determine how this ends up.

In the words of the immortal Spider-Man:

With great power comes great responsibility

WHS and SmugMug – Architecting the Add-In

The structure of an Add-In is important to what we are trying to accomplish here.

I could put all my code in a Console Tab and have people get it working manually.

That would no doubt work, but it’s kind of lame and falls short of expectations.

So we want an application that reacts in real time to folder and file changes, but can also be scheduled (say for 3 am, after a defrag or backup session).

This requires a number of different application blocks, each an application in its own right.

To see how an Add-In works in code, the WHS Photo Uploader for Flickr is hosted on CodePlex. Unfortunately, it’s a totally different application for a totally different service. But I did get a good look at the Console and Settings tabs to see what the code would/should look like.

If you want to download and run it, I’d download the WiX installer XML toolset for Visual Studio 2005 to allow the Add-in’s installer to be recognised by VS and compiled.

So the lesson is that Console code is run from the Console (doh!), and any file monitoring code will only work while the Console is running.

So we’ll require a running service to monitor our gallery folders and record the events to our Master XML file. Hence this application will consist of XML code and our FileSystemWatcher object. It will need to use the Windows API to allow it to be installed and run as a service.
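A minimal sketch of that monitoring piece, using the .NET FileSystemWatcher class. The event format and class name here are my own illustrative choices; the real service would wrap this in a Windows service and flush the queue to the Master XML file.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Watches a gallery folder and queues up change events for later
// processing. The queue stands in for the Master XML file.
class FolderMonitor
{
    public readonly Queue<string> Events = new Queue<string>();
    private readonly FileSystemWatcher watcher;

    public FolderMonitor(string galleryPath)
    {
        watcher = new FileSystemWatcher(galleryPath);
        watcher.IncludeSubdirectories = true;

        // Record each event as "ChangeType|FullPath" so the uploader
        // knows what to do with which file.
        watcher.Created += (s, e) => Record(e);
        watcher.Changed += (s, e) => Record(e);
        watcher.Deleted += (s, e) => Record(e);

        watcher.EnableRaisingEvents = true;
    }

    private void Record(FileSystemEventArgs e)
    {
        lock (Events)
        {
            Events.Enqueue(e.ChangeType + "|" + e.FullPath);
        }
    }
}
```

Because the events are only queued, the service can batch them up and write them out to XML on its own schedule rather than hitting the disk on every change.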

Then, we want to schedule our uploads so that they don’t take over your broadband connection. We could use System.Threading to calculate the number of seconds remaining and have a timer call our upload method once the time is reached.
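That System.Threading route might look like the sketch below. The 3 am slot and the class name are illustrative assumptions, not part of the actual Add-in:

```csharp
using System;
using System.Threading;

class UploadScheduler
{
    // Compute how long to wait until the next occurrence of a given
    // time of day (rolling over to tomorrow if it has already passed).
    public static TimeSpan DelayUntil(DateTime now, TimeSpan timeOfDay)
    {
        DateTime next = now.Date + timeOfDay;
        if (next <= now)
        {
            next = next.AddDays(1); // today's slot already passed
        }
        return next - now;
    }

    // Fire the callback once after the computed delay; a period of
    // -1 ms tells the timer not to repeat.
    public static Timer ScheduleOnce(TimeSpan delay, TimerCallback upload)
    {
        return new Timer(upload, null, delay, TimeSpan.FromMilliseconds(-1));
    }
}
```

For example, `ScheduleOnce(UploadScheduler.DelayUntil(DateTime.Now, new TimeSpan(3, 0, 0)), StartUpload)` would fire a hypothetical `StartUpload` callback at the next 3 am.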

However, we want to keep things as simple as possible. This means using Windows’ Scheduled Tasks to call our uploader (an exe file). Scheduling tasks is for some reason not in the .NET Framework 2.0, so we’ll use a library to wrap all the C++ interop code. There is a CodeProject article here on it, with the library. This also means we can run tasks with a user’s username and password (i.e. “Administrator” and whatever the password is), so tasks will run, in theory, even when no one is logged in.

So we have three applications here, all rather trivial:

  • The Console Tab and Settings Tab, with code for usernames/passwords as well as rescheduling uploads and manually starting the uploader. I have the idea of getting album stats and displaying charts using the Google Charts API, but that’s for version 1.1.
  • The Service app that will take file and folder change events and write them to our Master Settings file. This runs independently of any user being logged in.
  • The Uploader that is scheduled to run at such and such a time to execute the events in our Settings file on the server using the SmugMug API. This also runs independently of any user being logged in.

Slowly but surely the Add-in progresses.

WHS and SmugMug – A Word About XML

XML is a markup language in the same family as HTML. This means you have nested elements in a structured document.

Its structure makes it ideal for reading and writing data quite easily and in a logical format. Since the elements all have names, the documents are usually human-readable. This is of course a disadvantage for a proprietary file format.

Reading and writing XML is quite easy with Visual Studio and the .NET Framework. You can use a class called XmlSerializer that will do it automatically, but it depends on you structuring your data for it. It is also great for recurring data. In the example below we are reading in multiple FileEvent objects.
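For comparison, here is what the XmlSerializer route could look like, assuming a FileEvent with just FilePath and EventType strings (the class shapes are illustrative, not the Add-in’s actual code):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// The recurring record we want to persist.
public class FileEvent
{
    public string FilePath;
    public string EventType;
}

public class EventStore
{
    // Serialize a whole list of events to an XML string in one call.
    public static string Save(List<FileEvent> events)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(List<FileEvent>));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, events);
            return writer.ToString();
        }
    }

    // Deserialize the list back out again.
    public static List<FileEvent> Load(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(List<FileEvent>));
        using (StringReader reader = new StringReader(xml))
        {
            return (List<FileEvent>)serializer.Deserialize(reader);
        }
    }
}
```

Note that XmlSerializer needs public members and a parameterless constructor, which is what “structuring your data for it” means in practice.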

The way I prefer doing it is using XmlReader and XmlWriter. Like so:

reader.Read();

for (int loops = 0; loops < count; loops++)
{
    FileEvent temp = new FileEvent();
    reader.ReadStartElement();                    // opening <FileEvent> tag
    temp.FilePath = reader.ReadElementString();   // first child element
    temp.EventType = reader.ReadElementString();  // second child element
    reader.ReadEndElement();                      // closing </FileEvent> tag
    mastersettings.Events.Enqueue(temp);
}

reader.Close();

Line for line, the read and write methods of your application mirror your final XML document exactly, unless you use a loop.

Once you know how to use XmlReader and XmlWriter, they are easy to work with. Essentially, you have to ensure that your elements open and close at the right points in the code so that you get a properly formatted XML document.

Second, make sure you are reading the right data type. I use strings, as they are quite easy to use, even for dates and such. Most of my errors come from that.

Third, make sure that your read and write methods read and write the exact same data at the exact same points (you don’t want to be reading a name when you expect a date, etc.).

Fourth, you need to use the WriteStartDocument method to start writing an XML file and WriteEndDocument to end one. Think of them as a huge node that encapsulates the rest of the document. Just remember: one document per file.
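Putting those rules together, the write-side mirror of the read loop might look like this (the element names and queue format are illustrative assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;

public class EventWriter
{
    // Write every queued event (each entry: { filePath, eventType })
    // out as an XML string, mirroring the read loop element for element.
    public static string Write(Queue<string[]> events)
    {
        StringWriter output = new StringWriter();
        XmlWriter writer = XmlWriter.Create(output);

        writer.WriteStartDocument();              // one document per file
        writer.WriteStartElement("Events");
        foreach (string[] ev in events)
        {
            writer.WriteStartElement("FileEvent");
            writer.WriteElementString("FilePath", ev[0]);
            writer.WriteElementString("EventType", ev[1]);
            writer.WriteEndElement();             // </FileEvent>
        }
        writer.WriteEndElement();                 // </Events>
        writer.WriteEndDocument();
        writer.Close();

        return output.ToString();
    }
}
```

Each WriteStartElement is paired with a WriteEndElement, which is exactly the open-and-close-at-the-right-points discipline described above.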

And that’s all there is to it.

More later on how I’m using XML as a data source.

Related Post: WHS and SmugMug – Keeping Track of Files