A Microsoft engineer could say “I like meat” and somebody would blog about how Microsoft just declared war on Buddhism
Alex Sexton on Paul Thurrott's blog post "The Last Version of Windows?"
Category: Microsoft
Google Cookie Monster, Part Deux – Where Is The Debate??
I'm rather disappointed by this post on GigaOm, responding to Microsoft's Dean Hachamovitch's post doing a little Google privacy exposé of his own.
The GigaOm post, it seems, blames Microsoft for pulling Google up on its failure to adhere to an old standard, in this case P3P. This standard is meant to allow a site to notify a browser, and hence the user, of what it will do with the information it collects from you. In other words, the standard is meant to solve the very problem that started Google Cookiegate.
Instead of blaming Microsoft for jumping into the fray and touting its own browser, we should be pointing out not only that Internet Explorer is the only browser to implement this privacy-related standard, but also that Google has highlighted the need for it to be improved.
Why is it that no other browsers implement this standard? Why is there no movement for it to be improved? All those bloggers crying bloody murder over Google Cookiegate should be urging the updating and wider adoption of this standard.
Far from being a damp squib, Hachamovitch's post is an important contribution to the debate. It may be self-serving, but who else would have known of the existence of this standard if it weren't for that post?
Come on GigaOm – I expect better than blindly bashing any contribution Microsoft makes.
BUILD Keynote–App Approval
Almost in passing, though it received big applause, Microsoft announced that the Windows App store will make its technical compliance tools available to app developers so that they can run them themselves and see the output.
This might not seem like a big deal, but it's a shot across the bow of the Apple App Store. Apple's App Store has had a terrible time over the years as high-profile Apple developers' angst over the labyrinthine and mystical app approval process has come to the fore.
Microsoft is determined to do things differently. Obviously corporate prestige motivates Microsoft to keep some form of control over what ends up in the app store; no company wants PR disasters featured in its store. On the other hand, Microsoft wants to move to Windows 8 and take its legions of Windows developers with it to the new Metro style apps. So, by de-mystifying the approval process, Microsoft has removed another stumbling block to developers selling applications through the Windows App Store. Microsoft can have its cake and eat it too.
(Pic from the BUILD keynote 1)
I know I’m feeling more optimistic about the approval process.
Windows Azure Block Blobs
In Windows Azure Blob Storage, not all blobs are created equal. Windows Azure has the notion of Page Blobs and Block Blobs. Each of these distinct blob types aims to solve a slightly different problem, and it's important to understand the difference.
To quote the documentation:
- Block blobs, which are optimized for streaming.
- Page blobs, which are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob.
About Block Blobs
Block blobs are comprised of blocks, each of which is identified by a block ID. You create or modify a block blob by uploading a set of blocks and committing them by their block IDs. If you are uploading a block blob that is no more than 64 MB in size, you can also upload it in its entirety with a single Put Blob operation.
When you upload a block to Windows Azure using the Put Block operation, it is associated with the specified block blob, but it does not become part of the blob until you call the Put Block List operation and include the block's ID. The block remains in an uncommitted state until it is specifically committed. Writing to a block blob is thus always a two-step process.
Each block can be a maximum of 4 MB in size. The maximum size for a block blob in version 2009-09-19 is 200 GB, or up to 50,000 blocks.
About Page Blobs
Page blobs are a collection of pages. A page is a range of data that is identified by its offset from the start of the blob.
To create a page blob, you initialize the page blob by calling Put Blob and specifying its maximum size. To add content to or update a page blob, you call the Put Page operation to modify a page or range of pages by specifying an offset and range. All pages must align to 512-byte page boundaries.
Unlike writes to block blobs, writes to page blobs happen in-place and are immediately committed to the blob.
The maximum size for a page blob is 1 TB. A page written to a page blob may be up to 1 TB in size.
So, before we determine what blob type we’re going to use, we need to determine what we’re using this particular blob for in the first place.
You'll notice the above extract is quite clear about what to use block blobs for: streaming; in other words, anything that we don't need random I/O access to. Page blobs, on the other hand, have a 512-byte page boundary that makes them perfect for random I/O access.
And yes, it's conceivable that you might need to host something like streaming video in a page blob. When you think about this stuff too much, you end up imagining situations where that might be necessary: situations where you are directly editing or reading very select portions of a file. If you're editing video, who wants to read in an entire 4MB block for one frame? You might laugh at the idea of actually needing to do this, but consider that the Rough Cut Editor is web-based and works primarily with web-based files. If you had to run it using blob storage as a backend, you'd need to use page blobs to fully realise the RCE's functionality.
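To make that concrete, here's a rough sketch (not taken from the RCE, and assuming a CloudBlobContainer called container already exists) of what a random, in-place write against a page blob looks like with the StorageClient library:

CloudPageBlob pageBlob = container.GetPageBlobReference("mypageblob");
pageBlob.Create(1024 * 1024); // the maximum size is fixed up front and must be a multiple of 512

byte[] page = new byte[512];  // writes must cover 512-byte aligned ranges
// ... fill the page with the bytes you want to overwrite ...
pageBlob.WritePages(new MemoryStream(page), 0); // written in-place at offset 0 and committed immediately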
So, enough day-dreaming. Time to move on.
Some groundwork
Now, in our block blob, each individual block can be a maximum of 4MB in size. Assuming we’re doing streaming video, 4MB is not going to cut it.
The Azure API provides the CloudBlockBlob class with several helper methods for managing our blocks. The methods we are interested in are:
- PutBlock()
- PutBlockList()
The PutBlock method takes a base-64 encoded string for the block ID, a stream object with the binary data for the block and an (optional) MD5 hash of the contents. It's important to note that the ID string MUST be base-64 encoded or else Windows Azure will not accept the block. For the MD5 hash, you can simply pass in null. This method should be called for each and every block that makes up your data stream.
PutBlockList is the final method that needs to be called. It takes a List&lt;string&gt; containing the ID of every block that you want to be part of this blob, and calling it commits all the blocks in the list. This means you could land up in a situation where you've called PutBlock for a block but not included its ID when you called PutBlockList, leaving you with an incomplete, corrupted file. All is not lost, though: uncommitted blocks are kept for a week, so if you know which blocks are missing you can simply call PutBlockList again with the complete list of block IDs.
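Put together, the two-step process looks something like this minimal sketch (assuming blob is a CloudBlockBlob and chunks is a List of byte arrays, each no larger than 4 MB; these names are just placeholders):

List<string> blockIds = new List<string>();
for (int i = 0; i < chunks.Count; i++)
{
    // Block IDs must be base-64 encoded, and every ID for a given blob must be the same length.
    string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(i));
    blob.PutBlock(blockId, new MemoryStream(chunks[i]), null); // step 1: upload, still uncommitted
    blockIds.Add(blockId);
}
blob.PutBlockList(blockIds); // step 2: commit the blocks, in this order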
There are a number of reasons why this is a smart approach. Normally I fall on the side of developer independence, the dev being free to do things as he likes without being hemmed in. In this case, though, being forced to upload data in small chunks brings a number of practical benefits, the big one being recovery from bad uploads: customers hate having to restart gigabyte-sized uploads from scratch.
Here be Dragons
The following example probably isn’t the best. I’m pretty sure someone will refactor and post a better algorithm.
Now there are a couple of things to note here. One being that I want to illustrate what happens at a lower level of abstraction than we usually work at, so that means no StreamReaders; we'll read the underlying bytes directly.
Secondly, not all Streams have the same capabilities. It's perfectly possible to come across a Stream object where you can't seek, or can't determine the length of the stream. So this is written to handle any data stream you can throw at it.
With that out of the way, let's start with some Windows Azure setup code.
StorageCredentialsAccountAndKey key = new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount acc = new CloudStorageAccount(key, true);
CloudBlobClient blobclient = acc.CreateCloudBlobClient();
CloudBlobContainer Videocontainer = blobclient.GetContainerReference("videos");
Videocontainer.CreateIfNotExist();
CloudBlockBlob blob = Videocontainer.GetBlockBlobReference("myblockblob");
Note how we’re using the CloudBlockBlob rather than the CloudBlob class.
In this example we’ll need our data to be read into a byte array right from the start. While I’m using data from a file here, the actual source doesn’t matter.
byte[] data = File.ReadAllBytes("videopath.mp4");
Now, to move data from our byte array into individual blocks, we need a few variables to help us.
int id = 0;
int byteslength = data.Length;
int bytesread = 0;
int index = 0;
List<string> blocklist = new List<string>();
- id will store a sequential number indicating the ID of the block
- byteslength is the length, in bytes, of our byte array
- bytesread keeps a running total of how many bytes we've already read and uploaded
- index is a copy of bytesread used to do some interim calculations in the body of the loop (it will probably end up being refactored out anyway)
- blocklist holds all our base-64 encoded block IDs
Now, on to the body of the algorithm. We're using a do loop here since this loop will always run at least once (assuming, for the sake of example, that all files are larger than our 1MB block boundary).
do
{
    byte[] buffer = new byte[1048576];
    int limit = index + 1048576;
    for (int loops = 0; index < limit; index++)
    {
        buffer[loops] = data[index];
        loops++;
    }
The idea here (hence the do loop) is to loop over our data array until less than 1MB remains.
Note how we're using a separate byte array to copy data into. This is the block data that we'll pass to PutBlock. Since we're not using StreamReaders, we have to do the copy byte by byte as we go along.
It is this bit of code that would be abstracted away were we using StreamReaders (or, more properly for this application, BinaryReaders).
Now, this is the important bit:
    bytesread = index;
    string blockIdBase64 = Convert.ToBase64String(System.BitConverter.GetBytes(id)); //1
    blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), null); //2
    blocklist.Add(blockIdBase64);
    id++;
} while (byteslength - bytesread > 1048576);
There are three things to note in the above code. Firstly, we’re taking the block ID and base-64 encoding it properly.
Secondly, note the call to PutBlock. We've wrapped the second byte array containing just our block data in a MemoryStream object (since that's what the PutBlock method expects) and we've passed in null rather than an MD5 hash of our block data.
Finally, note how we add the block ID to our blocklist variable. This ensures that the call to PutBlockList will include the IDs of all of our uploaded blocks.
So, by the time this do loop finally exits, we should be in a position to upload our final block. This final block will almost certainly be less than 1MB in size (barring the usual edge-case caveats), so our code needs a final change to cope with it.
int final = byteslength - bytesread;
byte[] finalbuffer = new byte[final];
for (int loops = 0; index < byteslength; index++)
{
    finalbuffer[loops] = data[index];
    loops++;
}
string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(id));
blob.PutBlock(blockId, new MemoryStream(finalbuffer, true), null);
blocklist.Add(blockId);
Finally, we make our call to PutBlockList, passing in our list of block IDs (in this example, the blocklist variable).
blob.PutBlockList(blocklist);
All our blocks are now committed. If you have the latest Windows Azure SDK (and I assume you do), the Server Explorer should allow you to see all your blobs and get their direct URLs. You can download the blob directly in the Server Explorer, or copy and paste the URL into your browser of choice.
Wrap up
Basically, what we've covered in this example is a quick way of breaking any binary data stream into individual blocks conforming to Windows Azure blob storage requirements, and uploading those blocks to Windows Azure. The neat thing is that supplying an MD5 hash lets Windows Azure check data integrity for you, and block IDs let Windows Azure take care of putting the data back together in the correct sequence.
Now, when I refactor this code for actual production, a couple of things are going to be different. I'll supply the MD5 hash. I'll upload blocks in parallel to take maximum advantage of upload bandwidth (this being the UK, there's not much upload bandwidth, but I'll take all I can get). And obviously, I'll use the full capability of stream readers to do the dirty work for me.
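As a taste of the first of those changes, here's a hedged sketch (reusing the buffer, blockIdBase64 and blob variables from the loop above) of supplying the MD5 hash instead of passing null; PutBlock expects the hash as a base-64 encoded string:

using (var md5 = System.Security.Cryptography.MD5.Create())
{
    // Hash of this block's data, base-64 encoded as the content MD5.
    string contentMd5 = Convert.ToBase64String(md5.ComputeHash(buffer));
    blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), contentMd5);
}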
Here's the full code:
StorageCredentialsAccountAndKey key = new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount acc = new CloudStorageAccount(key, true);
CloudBlobClient blobclient = acc.CreateCloudBlobClient();
CloudBlobContainer Videocontainer = blobclient.GetContainerReference("videos");
Videocontainer.CreateIfNotExist();
CloudBlockBlob blob = Videocontainer.GetBlockBlobReference("myblockblob");

byte[] data = File.ReadAllBytes("videopath.mp4");

int id = 0;
int byteslength = data.Length;
int bytesread = 0;
int index = 0;
List<string> blocklist = new List<string>();

do
{
    byte[] buffer = new byte[1048576];
    int limit = index + 1048576;
    for (int loops = 0; index < limit; index++)
    {
        buffer[loops] = data[index];
        loops++;
    }
    bytesread = index;
    string blockIdBase64 = Convert.ToBase64String(System.BitConverter.GetBytes(id));
    blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), null);
    blocklist.Add(blockIdBase64);
    id++;
} while (byteslength - bytesread > 1048576);

int final = byteslength - bytesread;
byte[] finalbuffer = new byte[final];
for (int loops = 0; index < byteslength; index++)
{
    finalbuffer[loops] = data[index];
    loops++;
}
string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(id));
blob.PutBlock(blockId, new MemoryStream(finalbuffer, true), null);
blocklist.Add(blockId);

blob.PutBlockList(blocklist);
Deploying your Database to SQL Azure (and using ASP.Net Membership with it)
It's been quite quiet around here on the blog, and the reason for that is that I got asked by a Herbalife distributor to put a little e-commerce site together (it's called Flying Shakes). So it's been a very busy few weeks here, and hopefully as things settle down we can get back to business as usual. I've badly neglected the blog and the screencast series.
I have a few instructive posts to write about this whole experience, as it presented a few unique challenges.
Now, I'm starting at the back end here, since deploying the database to SQL Azure was the difficult part of deployment. The reason for this is mainly the ASP.NET Membership database.
But we’ll start from the beginning. Now I’m assuming here that your database is complete and ready for deployment.
Step 0: Sign up for Windows Azure (if you haven't already) and provision a new database. Take note of the server's fully qualified DNS address and remember your username and password. You'll need them in a bit.
Step 1: Attach your database to your local SQL Server. Use SQL Server Management Studio to do that.
At this point we have our two databases and we need to transfer the schema and data from one to the other. To do that, we’ll use a helpful little Codeplex project called SQL Azure Migration Wizard. Download it and unzip the files.
Run the exe. I chose Analyse and Migrate:
Enter your Local SQL Server details:
Hit Next until it asks if you're ready to generate the SQL scripts. This is the screen you get after it has analysed the database and compiled the scripts.
Now you get the second login screen, which connects you to your newly created SQL Azure database.
This is the crucial bit. You have to replace SERVER with your server name in both the server name box and the Username box, replacing username with your username in the process. You need to have @SERVER after your username or the connection will fail.
Fill in the rest of your details and hit Connect. Press next and at the next screen you’ll be ready to execute the SQL scripts against your SQL Azure database.
And its that easy.
All you have to do now is change your connection string from the local DB to the one hosted on SQL Azure.
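For reference, a SQL Azure connection string takes this general shape (the server, database and credentials below are placeholders, not real values):

// Substitute your own server name, database name and credentials.
string connectionString =
    "Server=tcp:yourserver.database.windows.net;" +
    "Database=yourdatabase;" +
    "User ID=yourusername@yourserver;" +
    "Password=yourpassword;" +
    "Trusted_Connection=False;" +
    "Encrypt=True;";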
There is one last thing to do. When you first deploy your site and try to run it against SQL Azure, it won't work. The reason is that you have to set a firewall rule for your SQL Azure database by IP address range. You should receive an error message saying that IP address such-and-such is not authorised to access the database, so you just need to go ahead and set the appropriate rule in the Management Portal. You'll need to wait a while before those settings take effect.
And you should be good to go.
The ASP.Net Membership Database
In the normal course of events, you can set Visual Studio up to run the SQL scripts against whatever database you have specified in your connection string when you publish your project. However, there is a big gotcha: the SQL scripts that ship with Visual Studio will not run against SQL Azure, because SQL Azure supports only a restricted subset of SQL Server's features.
Even if you log into your SQL Azure database using SQL Management Studio, you'll see that your options are limited as to what you can do with the database from within SQL Management Studio. And if you try to run the scripts manually, they still won't run.
However, Microsoft has published a SQL Azure-friendly set of scripts for ASP.NET.
So we have two options: we can run the migration tool again and use the ASP.NET Membership database to transfer over schema and data, or we can run the scripts against the database.
For the sake of variety, I’ll go through the scripts and run them against the database.
- Open SQL Management Studio and log into your SQL Azure database.
- Go to File -> Open and navigate to the folder the new scripts are in.
- Open InstallCommon.sql and run it. You must run this before running any of the others.
- For ASP.NET Membership, run the scripts for Roles, Personalisation, Profile and Membership.
At this point I need to point out that bad things will happen if you try running your website now, even if your connection string has been changed.
ASP.NET will try to create a new .mdf file for the membership database, and you get this error:
An error occurred during the execution of the SQL file ‘InstallCommon.sql’. The SQL error number is 5123 and the SqlException message is: CREATE FILE encountered operating system error 5(failed to retrieve text for this error. Reason: 15105) while attempting to open or create the physical file ‘C:\USERS\ROBERTO\DOCUMENTS\VISUAL STUDIO 2010\PROJECTS\FLYINGSHAKESTORE\MVCMUSICSTORE\APP_DATA\ASPNETDB_TMP.MDF’. CREATE DATABASE failed. Some file names listed could not be created. Check related errors. Creating the ASPNETDB_74b63e50f61642dc8316048e24c7e499 database…
Now, the problem with all this is the machine.config file, where all of these default settings actually reside. Internally, it has a LocalSqlServer connection string, and by default the RoleManager will use it. Here's what it looks like:
<connectionStrings>
  <add name="LocalSqlServer"
       connectionString="data source=.\SQLEXPRESS;Integrated security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
       providerName="System.Data.SqlClient"/>
</connectionStrings>
<system.web>
  <processModel autoConfig="true"/>
  <httpHandlers/>
  <membership>
    <providers>
      <add name="AspNetSqlMembershipProvider"
           type="System.Web.Security.SqlMembershipProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
           connectionStringName="LocalSqlServer"
           enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true"
           applicationName="/" requiresUniqueEmail="false" passwordFormat="Hashed"
           maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7" minRequiredNonalphanumericCharacters="1"
           passwordAttemptWindow="10" passwordStrengthRegularExpression=""/>
    </providers>
  </membership>
  <profile>
    <providers>
      <add name="AspNetSqlProfileProvider" connectionStringName="LocalSqlServer" applicationName="/"
           type="System.Web.Profile.SqlProfileProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </providers>
  </profile>
  <roleManager>
    <providers>
      <add name="AspNetSqlRoleProvider" connectionStringName="LocalSqlServer" applicationName="/"
           type="System.Web.Security.SqlRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
      <add name="AspNetWindowsTokenRoleProvider" applicationName="/"
           type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </providers>
  </roleManager>
</system.web>
So, we have to overwrite those settings in our own web.config file like so:
<roleManager enabled="true" defaultProvider="AspNetSqlRoleProvider">
  <providers>
    <clear/>
    <add name="AspNetSqlRoleProvider" connectionStringName="..."
         type="System.Web.Security.SqlRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral...."/>
  </providers>
</roleManager>
<membership>
  <providers>
    <clear/>
    <add name="AspNetSqlMembershipProvider" connectionStringName="..."
         type="System.Web.Security.SqlMembershipProvider...."/>
  </providers>
</membership>
<profile>
  <providers>
    <clear/>
    <add name="AspNetSqlProfileProvider" connectionStringName="..."
         type="System.Web.Profile.SqlProfileProvider...."/>
  </providers>
</profile>
What we are doing here is simply replacing the connectionStringName attribute in each of these providers with our own connection string name. Before that, however, we put <clear/> in to dump the previous settings defined in machine.config and force it to use our modified settings.
That should allow the role manager to use our SQL Azure database instead of trying to attach its own. Things should run perfectly now.
Finally
The Azure Management Portal has a very nice UI for managing and editing your database. You can add and remove tables, columns, rows, etc. It's really good, and rather welcome; I thought I'd have to script every change and alteration.
Using MVC and jQuery to build a NewsTicker
Scott Guthrie tweeted a link to an article by Scott Mitchell, in which Scott wrote a handy news ticker in ASP.NET (and he wrote it in VB). I recommend going to read it before doing anything else.
It is a handy little example and a very good demonstration of jQuery in action. I actually first thought of using it in The Feedreader. Of course, if I wanted to do that I'd have to re-write it as an MVC application rather than an ASP.NET website.
This is a nice academic exercise in moving from ASP.NET to MVC.
So let's get cracking.
Groundwork
So. We begin by creating an empty MVC2 project.
Actually, we began when we read Scott's article. It's important to understand what Scott's trying to accomplish and how Scott's code works before we shamelessly copy it.
The first thing you'll want to do is make sure that Scott's ticker.js and jquery-1.4.4.min.js are in your MVC Scripts folder and included in the project. You'll also want to copy Scott's CSS files across; I put them in a folder called Styles and made sure it was included in the project.
Now, we need a Masterpage before we can create any Views, so add one in Views/Shared. We are shamelessly copying Scott's example in every detail, so go ahead and copy the markup in Scott's example and paste it into your masterpage. You'll want to change the script and CSS paths accordingly.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head id="Head1" runat="server"> <title>Untitled Page</title> <script type="text/javascript" src="../../Scripts/jquery-1.4.4.min.js"></script> <script type="text/javascript" src="../../Scripts/ticker.js"></script> <asp:ContentPlaceHolder id="head" runat="server"> </asp:ContentPlaceHolder> <link href="../../Styles/sinorcaish-screen.css" rel="stylesheet" type="text/css" /> <link href="../../Styles/CustomStyles.css" rel="stylesheet" type="text/css" /> <link href="../../Styles/NewsTicker.css" rel="stylesheet" type="text/css" /> </head> <body> <form id="form1" runat="server"> <!-- ======== Header ======== --> <div id="header"> <div class="left"> News Ticker Demo </div> <div class="right"> <%=DateTime.Now.ToShortDateString()%> </div> <div class="subheader"> <em>News me!</em> </div> </div> <!-- ======== Left Sidebar ======== --> <div id="sidebar"> <div> <ul> <li><%: Html.ActionLink("Simple Ticker Demo", "Simple","Home")%></li> <li><%: Html.ActionLink("Dynamic Ticker Demo", "Dynamic","Home")%></li> </ul> </div> </div> <!-- ======== Main Content ======== --> <div id="main"> <asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server"> </asp:ContentPlaceHolder> </div> <!-- ======== Footer ======== --> <div id="footer"> ASP.NET application designed by <a href="http://www.4guysfromrolla.com/ScottMitchell.shtml">Scott Mitchell</a>. Website design by <a href="mailto:J.Zaitseff@zap.org.au">John Zaitseff</a>, and available at <a href="http://www.opendesigns.org/preview/?template=1700">OpenDesigns.org</a>. </div> </form> </body> </html>
Remember to change the links in the sidebar to ActionLinks.
Controllers
The next thing we need to do is to write a Controller. Typically the first controller in any MVC application is the home controller.
So go ahead and create one, adding three ActionResult methods: Index, Simple and Dynamic.
public class HomeController : Controller
{
    //
    // GET: /Home/
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult Simple()
    {
        return View();
    }

    public ActionResult Dynamic()
    {
        return View();
    }
}
At this point, create a View for Index, copy the HTML from Default.aspx and stick it in the content control that's been created in the view.
Then add an empty view for the Simple action. Since this is straightforward HTML, we simply copy the contents of both ContentPlaceHolders in Simple.aspx into the two that have been created for us in the view.
<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <h2>Simple News Ticker Demo</h2> <p> This demo shows two simple news tickers. Each news ticker has the same hard-coded news items. The first one shows only the body of each news item, one at a time; the second one shows the headline, body, and published date of each news item and shows three at a time. </p> <h3>One Row News Ticker</h3> <div class="ticker stretched"> <ul id="latestNews1"> <li> <div class="body">Politician Joe Smith has assembled a news conference for this afternoon to apologize for some indiscretion he had. This is Mr. Smith's third such "apology press conference" this year.</div> </li> <li> <div class="body">Did you know that you can play the fun (and addictive!) board game Axis & Allies online? Head on over to <a target="_blank" href="http://gamesbyemail.com/Games/WW2">http://gamesbyemail.com/Games/WW2</a> and give it a whirl!</div> </li> <li> <div class="body">A recent study by some doctors somewhere showed a strong correlation between unhealthy eating and unheathly people. More studies are to be performed to verify these findings.</div> </li> <li> <div class="body">This just in - ASP.NET is awesome! jQuery is not so bad, either. In fact, most technologies are pretty darn cool. For more information, see <a href="http://www.4guysfromrolla.com/" target="_blank">4GuysFromRolla.com</a>.</div> </li> <li> <div class="body">Last night the local sports team won a convincing victory over their hated rivals. After the game there was much jubilation.</div> </li> <li> <div class="body">Visit my blog. Please. You can find it at <a href="http://scottonwriting.net/sowblog/">ScottOnWriting.NET</a>.</div> </li> </ul> </div> <h3>Three Rows News Ticker</h3> <div class="ticker threeRows medium"> <ul id="latestNews3"> <li> <div class="header">Politician schedules news conference</div> <div class="body">Politician Joe Smith has assembled a news conference for this afternoon to apologize for some indiscretion he had. This is Mr. Smith's third such "apology press conference" this year.</div> <div class="footer">Published @ 8:30 AM</div> </li> <li> <div class="header">Play Axis & Allies Online!</div> <div class="body">Did you know that you can play the fun (and addictive!) board game Axis & Allies online? Head on over to <a target="_blank" href="http://gamesbyemail.com/Games/WW2">http://gamesbyemail.com/Games/WW2</a> and give it a whirl!</div> <div class="footer">Published @ 8:38 AM</div> </li> <li> <div class="header">Study links unhealthy food to unhealthy people</div> <div class="body">A recent study by some doctors somewhere showed a strong correlation between unhealthy eating and unheathy people. More studies are to be performed to verify these findings.</div> <div class="footer">Published @ 9:00 AM</div> </li> <li> <div class="header">ASP.NET is awesome!</div> <div class="body">This just in - ASP.NET is awesome! jQuery is not so bad, either. In fact, most technologies are pretty darn cool. For more information, see <a href="http://www.4guysfromrolla.com/" target="_blank">4GuysFromRolla.com</a>.</div> <div class="footer">Published @ 9:09 AM</div> </li> <li> <div class="header">Local sports team wins</div> <div class="body">Last night the local sports team won a convincing victory over their hated rivals. After the game there was much jubilation.</div> <div class="footer">Published @ 9:35 AM</div> </li> <li> <div class="header">Read my blog</div> <div class="body">Please. 
You can find it at <a href="http://scottonwriting.net/sowblog/">ScottOnWriting.NET</a>.</div> <div class="footer">Published @ 10:30 AM</div> </li> </ul> </div> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="head" runat="server"> <script type="text/javascript"> $(document).ready(function () { startTicker('#latestNews1', 1, 5000); startTicker('#latestNews3', 3, 5000); }); </script> </asp:Content>
Note the JavaScript in the "head" ContentPlaceHolder. This fires as soon as the page has finished rendering. Scott has more details about it in his article.
Model
Now, this is the hard part.
Scott Mitchell's example used an ASP.NET ListView control. If you're writing ASP.NET code, a ListView is the easiest way to accomplish what we're trying to do. You'll notice as well that Scott passes the contents of SyndicationFeed.Items directly to the databound control. There is most probably a way of using Scott's code directly in an MVC view; however, for convenience we'll dispense with the ListView and iterate over the items ourselves.
There is also the issue of the formatting of the items. You'll notice that the ItemTemplate in Scott's code calls FormatSummary and FormatPubDate from the HTML. Because of the separation between code and HTML in MVC, we can't do that.
The solution to both of these "problems" is to do things ourselves. The M in MVC stands for Model, so we need a model before we go any further. The two pieces of data we need are contained in the Summary and PublishDate fields of the SyndicationItems. So this is what our model looks like:
public class Model
{
    public String Title { get; set; }
    public string Date { get; set; }
}
FormatSummary and FormatPubDate obviously need to be part of the HomeController class:
public static string FormatSummary(string summary)
{
    string header = "ScottOnWriting: ";
    // Remove the leading "ScottOnWriting: "
    if (summary.StartsWith(header))
    {
        return summary.Substring(header.Length);
    }
    return summary;
}

public static string FormatPubDate(DateTimeOffset pubDate)
{
    return pubDate.ToString("h:mm, MMM d");
}
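And, to tie the model to the controller, here's a rough sketch of how the Dynamic action could build its list of Model objects from the feed. The feed URL here is purely illustrative, and this assumes the Dynamic view is typed against IEnumerable<Model>:

// Requires references to System.ServiceModel.Syndication, System.Xml and System.Linq.
public ActionResult Dynamic()
{
    // Illustrative feed URL; point this at whatever RSS feed you want to pull items from.
    using (XmlReader reader = XmlReader.Create("http://twitter.com/statuses/user_timeline/ScottOnWriting.rss"))
    {
        SyndicationFeed feed = SyndicationFeed.Load(reader);
        List<Model> items = feed.Items.Select(item => new Model
        {
            Title = FormatSummary(item.Title.Text),
            Date = FormatPubDate(item.PublishDate)
        }).ToList();
        return View(items);
    }
}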
View
Now that the groundwork has been laid, we can write our actual HTML view. First we need to add our javascript function:
<asp:Content ID="Content2" ContentPlaceHolderID="head" runat="server"> <script type="text/javascript"> $(document).ready(function () { startTicker('#tweets', 2, 4500); }); </script> </asp:Content>
This JavaScript function will do nothing unless ticker.js exists in your Scripts folder. It will end up in the page header and will be executed as soon as the document has finished rendering. Also note that we are passing in 2 here; we could pass in any number we wanted. Scott explains more about this in his article.
Now, the original implementation using a ListView basically iterated over all the objects in the datasource and output a specific piece of HTML for each item in that collection. So we'll do the same.
<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <h2>Dynamic News Ticker Demo</h2> <p> This demo shows a ticker whose contents are populated dynamically. This particular example pulls the most recent tweets from <a href="http://twitter.com/ScottOnWriting">my Twitter account</a> and displays them in a ticker, showing two entries at a time. </p> <h3>@ScottOnWriting's Latest Tweets</h3> <div class="ticker twoRows medium"> <ul id="tweets"> <%foreach (var item in this.Model) { %> <li> <div class="header" style="font-weight: normal"> <%= item.Title.ToString()%> </div> <div class="footer">Tweeted @ <%: item.Date.ToString()%></div> </li> <%} %> </ul> </div> </asp:Content>
Note the careful placement of our foreach statement. Our list item (<li/>) and the HTML within it will be repeated on each pass over the loop body. Also, note how we are using var item in this.Model: our View knows exactly what datatype has been passed to it. In fact, our view is a strongly typed, generic view over our Model class, and we are using var here to avoid typing Model.Model over and over again.
So, in a nutshell, the above code does the exact same thing as a ListView Control.
Hit run and it should be working.
The Solution of Minimal Effort – Drive Extender and Windows Home Server Vail
Ever since Microsoft announced it was removing Drive Extender from the next version of Windows Home Server, there has been an echo chamber effect, with everyone saying the same thing: we don't like it, we want it back, WHS is dead without it.
The same goes for what Microsoft should do now: port DE v1 into Vail, or re-add DE v2 to Vail only.
So I won't go and repeat all that.
The fact of the matter is that WHS does not make nearly enough money to merit the full attention its DE woes deserve (the Diskeeper blog makes this point as well). I'm sure all manner of problems could have been solved were the full might of the Developer Division to descend on the WHS team like a deus ex machina. OK, maybe I'm being a little dramatic here. Nonetheless, my point stands: all problems can be solved with adequate resources (read: money). In fact it's practically the American way (I'm looking at you, Bernanke).
The reason why Xbox (a big leap, but bear with me) has flourished so much is that the team understands consumers. They understand what we, the consumers, want from them. Xbox went from being a niche product to a multi-billion-dollar arm of Microsoft, helped in no small part by the Halo franchise (again, an understanding of consumer wants and needs at work).
Windows Home Server is in a similar place at the moment. WHS v1 was perfect; perfect in a way that's difficult to describe. It was perfect enough for me to go out on a limb and buy a Dell server to run beta 1 on. The kind of perfect where you feel it in your bones: "this is it". (PS Microsoft: try to get a commission off Dell for that if you can.)
The fact is that WHS solved a number of difficulties at a stroke: backup and redundant protection against hard drive failure. As a result I no longer have nightmares (well, I do, but about fire burning down the house rather than hard drives and computers biting the dust, but that's another story).
The peace of mind that comes along with this is simply priceless, and there is no other way of getting that peace of mind with as little effort as setting up a WHS server. I'm not Microsoft; I don't have the hardware and legions of RAID experts to call on. So WHS is the only way (yes, there are alternatives, but I'm talking about the solution of minimal effort here).
So, Microsoft. Please. Give us our Drive Extender back. Whether you decide to use v1 or v2, whether Aurora and Breckenridge have it or not, add it back to Vail. You will have the appreciation and loyalty of a grateful bunch of people. This is an opportunity to pour water on some burning bridges (and maybe rebuild them with stone).
In the meantime, WHS users have started a petition. Vote here (I'm half tempted to call this Organising For Drive Extender, a pun on Obama's Organising for America).
Kinect Puppet Show
I think I’ll be buying me a Kinect one of these days….
Amazing eh!!
Core Competencies and Cloud Computing
Wikipedia defines Core Competency as:
Core competencies are particular strengths relative to other organizations in the industry which provide the fundamental basis for the provision of added value. Core competencies are the collective learning in organizations, and involve how to coordinate diverse production skills and integrate multiple streams of technologies. It is communication, an involvement and a deep commitment to working across organizational boundaries.
So, what does this have to do with Cloud Computing?
I got thinking about the different providers of cloud computing environments. If you abstract away the specific feature set of each provider, what differences remain that set these providers apart from each other?
Now, I actually started thinking about this backwards. I asked myself why Microsoft Windows Azure couldn't do a Google App Engine and offer free applications. I had to stop myself there and go off to Wikipedia to remind myself of the quotas that go along with an App Engine free application:
Hard limits

- Apps per developer: 10
- Time per request: 30 sec
- Blobstore size (total file size per app): 2 GB
- HTTP response size: 10 MB
- Datastore item size: 1 MB
- Application code size: 150 MB

Free quotas

- Emails per day: 2,000
- Bandwidth in per day: 1,000 MB
- Bandwidth out per day: 1,000 MB
- CPU time per day: 6.5 hours per day
- HTTP requests per day: 1,300,000*
- Datastore API calls per day: 10,000,000*
- Data stored: 1 GB
- URLFetch API calls per day: 657,084*
Now, the reason why I even asked this question was that I got whacked with quite a bit of a bill for the original Windows Azure Feed Reader I wrote earlier this year. That was for my honours year university project, so I couldn't really complain. But looking at those quotas from Google, I could have done that project many times over for free.
This got me thinking. Why does Google offer that and not Microsoft? Both of these companies are industry giants, and both have boatloads of CPU cycles.
Now, Google, besides doing its best not to be evil, benefits when you use the web more. And how do they encourage that? They go off and create Google App Engine, then allow the average dev to write the app they want to write and run it. For free. Seriously, how many websites run on App Engine's free offering?
Second, Google is a Python shop. Every time someone writes a new library or comes up with a novel approach to something, Google benefits. As Python use increases, some of that code is going to be contributed right back into the Python open source project. Google benefits again. Python development is a Google core competency.
Finally, Google is much maligned for its approach to software development: throw stuff against the wall and see what sticks. The more devs they give the space to go crazy, the more apps are going to take off.
So, those are Google's core competencies:
- Encouraging web use
- Python
- See what sticks
And those are perfectly reflected in App Engine.
Let's contrast this with Microsoft.
Microsoft caters to writing line-of-business applications. They don't mess around. Their core competency, in other words, is other companies' IT departments. Even when one looks outside the developer side of things, one sees that Microsoft Office and Windows are offered primarily to the enterprise customer. The consumer versions of said products aren't worth the bits and bytes they take up on disk. Hence, Windows Azure is aimed squarely at companies who can pay for it, rather than enthusiasts.
Secondly, Windows Azure uses the .NET Framework, another uniquely Microsoft core competency, and with it leverages the C# language. Now, it is true that .NET is not limited to Windows, nor is Windows Azure a C#-only affair. However, anything that runs on Windows Azure leverages the CLR and the DLR, two pieces of technology that make .NET tick.
Finally, and somewhat related, Microsoft has a huge install base of dedicated Visual Studio users. Microsoft has leveraged this by creating a comprehensive suite of Windows Azure Tools.
Hopefully you can see where I'm going with this. Giving stuff away for enthusiasts to use free of charge is not a Microsoft core competency. Even with Visual Studio Express there are limits, limits clearly defined by what enterprises would need; you need to pay through the nose for those.
So Microsoft's core competencies are:
- Line of Business devs
- .Net, C# and the CLR/DLR
- Visual Studio
Now, back to what started this thought exercise: Google App Engine's free offering. As you can see, it's a uniquely Google core competency, not a Microsoft one.
Now, what core competencies does Amazon display in Amazon Web Services?
Quite simply, Amazon doesn't care who you are or what you want to do; they will provide you with a solid service at a very affordable price and sell you all the extra services you can handle. Amazon does the same thing with everything else, so why not cloud computing? Actually, AWS is brilliantly cheap. Really. This is Amazon's one great core competency, and they excel at it.
So, back to what started this thought exercise: a free option. Because of its core competencies, Google is uniquely positioned to offer one, and thinking about it this way makes Microsoft's and Amazon's lack of a similar offering obvious.
Also, remember the cost of Windows Azure I mentioned: Google App Engine and its free option mean that university lecturers are choosing to teach their classes using Python and App Engine rather than C# and Windows Azure.
Remember what a core competency is. Wikipedia defines Core Competency as:
Core competencies are particular strengths relative to other organizations in the industry which provide the fundamental basis for the provision of added value. Core competencies are the collective learning in organizations, and involve how to coordinate diverse production skills and integrate multiple streams of technologies. It is communication, an involvement and a deep commitment to working across organizational boundaries.
I guess the question is: which offering makes the most of its parent company's core competencies? And is that a good thing?
WHS – Setting up a VPN server
I’m away on holiday next week and thought, like any good geek, I’d set up a VPN connection to my Windows Home Server.
The thing is that there are Add-Ins that will set up a VPN server for you.
However, by way of The MS Home Server Blog, there is a delightful little walkthrough that is remarkably simple.
This is for when the WHS console just won’t quite do it.
I configured my iPhone with the VPN details, and will probably just RDP in through it. I set up the laptop with it as well, so being able to work remotely now makes this little holiday a little less likely to be relaxing 🙂
I’ll let you know how it goes.