The iPhone 4S: A Rose by Any Other Name (a Response to Dan Gillmor)

Dan Gillmor thinks Apple made a bit of a blunder by calling it the iPhone 4S rather than the iPhone 5.

Sorry Dan, but I disagree completely.

On Google Plus I put the following argument forward:

I want to mention that Apple always thinks long term. They called it the 4S because:

  • It's convention: the minor versions (yes, this is minor, or we would have got a form factor change) always have an S appended to the name of the last major release.
  • Apple have something big on the horizon. They have big plans for the next major release of the iPhone. They want to reserve the "iPhone 5" name for that release.

The iPhone, the iPhone 3G and the iPhone 4 have all been major releases and have all sported form factor redesigns.

The iPhone 4S specs may seem to be major (in a parallel universe where all phone manufacturers but Apple went bankrupt in 2007), but they merely bring Apple to PARITY with Android.

The iPhone 5 is going to be the release that makes Android play some serious catchup.

Another important thing to note is that Apple would never ever call it the iPhone 5 just because people want it to be the iPhone 5. I'm sure it was Steve Jobs himself who said something along the lines of "People don't know what they want, they just think they do". People have no idea what they want out of an iPhone 5. People don't know what they want until Apple shows it off to them and they go, "Yeah, I want that".

Everyone is acting like Steve Jobs' influence has gone. Nonsense. I think even Leo Laporte said yesterday that he got the impression that the reality distortion field was no longer there. What other company spends 53 minutes repeating what it announced at its last press conference and then spends half an hour on a new iPhone and Siri? Oh yeah, and then announces as an afterthought, "Yeah, we're on Sprint". That's a Steve Jobs keynote if there ever was one, only without Steve. The only thing that changed was that we noticed it.

My bet is that it's the same reason Steve Jobs wasn't there: not major enough.

When there’s a One More Thing to announce, he’ll be there.

 

PS – Pardon the Shakespeare reference.

LINQ Goodness: Google Charts Edition

My day job (one of them, anyway) is to design*, run and maintain Flying Shakes.

If someone had told me when I started that 90% of the code (and 87.653% of all stats are made up, but you get my drift) I'd write would be for the administration side of things, I'd never have believed it.

 

Anyway, to cut a long story short, it was in this context that I came across a fascinating article from the Association for Computing Machinery (and no, I hadn't heard of them before either). I came across it a month or so ago, but lost the link.

 

With a little bit of Google-fu, I’ve found it once again: The World According to LINQ.

While it's a fascinating article that appeals to the Computer Scientist in me (supposedly useless classes on in-depth database theory tend to do that), what caught my eye was the code sample right at the bottom for generating Google Chart URLs.

That sample is going to come in very handy for me, and I thought I'd share it with you.
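
In case you haven't come across the Google Chart API before, the trick is that the whole chart is described in the URL's query string, which makes it a natural fit for a bit of LINQ string-wrangling. Here's a rough sketch of my own (not the article's sample) just to give you the flavour:

// requires System.Linq
var sales = new[]
{
    new { Day = "Mon", Total = 12 },
    new { Day = "Tue", Total = 7 },
    new { Day = "Wed", Total = 19 },
    new { Day = "Thu", Total = 4 },
    new { Day = "Fri", Total = 15 }
};

// chd is the data series, chl the labels, cht the chart type, chs the size in pixels
string data = string.Join(",", sales.Select(s => s.Total.ToString()));
string labels = string.Join("|", sales.Select(s => s.Day));

string url = string.Format(
    "https://chart.googleapis.com/chart?cht=p3&chs=400x150&chd=t:{0}&chl={1}",
    data, labels);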

Go ahead and read the article.

 

*If you see me ranting on Twitter or Google+ about CSS, this is probably why.

Some Interesting Code – your thoughts required

Without going into a long story, I found some interesting code here to convert anonymous types to any strongly typed, well, type.

 

// requires: using System; using System.Reflection;
public static object ToType<T>(this object obj, T type)
{
    //create an instance of the T type object:
    var tmp = Activator.CreateInstance(Type.GetType(type.ToString()));

    //loop through the properties of the object you want to convert:
    foreach (PropertyInfo pi in obj.GetType().GetProperties())
    {
        try
        {
            //get the value of the property and try
            //to assign it to the matching property of the T type object:
            tmp.GetType().GetProperty(pi.Name).SetValue(tmp, pi.GetValue(obj, null), null);
        }
        catch { }
    }

    //return the T type object:
    return tmp;
}

From this CodeProject article.
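
For context, here's roughly how it gets called. Product here is just a stand-in class of my own whose property names happen to match the anonymous type:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// somewhere else entirely:
var anon = new { Id = 7, Name = "Widget" };
Product typed = (Product)anon.ToType(new Product());
Console.WriteLine(typed.Name); // prints "Widget"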

Anyone have any thoughts on this? Is it good? Bad? Inefficient? Crap??

The Dell XPS 8300

If you follow me on Twitter, you'll know that my ancient Pentium 4-powered desktop died on Saturday. I had powered it off and unplugged it from the socket while I was away on holiday, so it died a peaceful death in its sleep. :) May it rest in peace forever more…

 

So I had to go off to Dell and spec a replacement PC.

I settled on an XPS 8300 with a nice new Intel i7-2600. It's the latest Sandy Bridge CPU: four cores clocked at 3.40GHz.

  • Base: Intel® Core™ i7-2600 Processor (3.40GHz, 8MB)
  • Memory: 8192MB Dual Channel DDR3 1333MHz [2×4096]
  • Video Card: 1GB AMD Radeon HD 6670
  • Hard Drive: 1.5TB (7,200rpm) Serial ATA
  • Operating System: English Genuine Windows® 7 Professional SP1 (64-bit)
  • Sound: Integrated 7.1 with THX® TruStudio

 

I’m quite looking forward to playing with this once it arrives. All that power…..

Once again I'll be able to run Visual Studio on the desktop, as well as Virtual PC and VirtualBox.

Not to mention the fact that Flight Simulator is going to rock on this machine. :)

Expect a review in about two weeks. :)

Windows Azure Block Blobs

In Windows Azure Blob Storage, not all blobs are created equal. Windows Azure has the notion of Page Blobs and Block Blobs. Each of these blob types aims to solve a slightly different problem, and it's important to understand the difference.

To quote the documentation:

  • Block blobs, which are optimized for streaming.
  • Page blobs, which are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob.

About Block Blobs

Block blobs are comprised of blocks, each of which is identified by a block ID. You create or modify a block blob by uploading a set of blocks and committing them by their block IDs. If you are uploading a block blob that is no more than 64 MB in size, you can also upload it in its entirety with a single Put Blob operation.

When you upload a block to Windows Azure using the Put Block operation, it is associated with the specified block blob, but it does not become part of the blob until you call the Put Block List operation and include the block's ID. The block remains in an uncommitted state until it is specifically committed. Writing to a block blob is thus always a two-step process.

Each block can be a maximum of 4 MB in size. The maximum size for a block blob in version 2009-09-19 is 200 GB, or up to 50,000 blocks.

About Page Blobs

Page blobs are a collection of pages. A page is a range of data that is identified by its offset from the start of the blob.

To create a page blob, you initialize the page blob by calling Put Blob and specifying its maximum size. To add content to or update a page blob, you call the Put Page operation to modify a page or range of pages by specifying an offset and range. All pages must align to 512-byte page boundaries.

Unlike writes to block blobs, writes to page blobs happen in-place and are immediately committed to the blob.

The maximum size for a page blob is 1 TB. A page written to a page blob may be up to 1 TB in size.

So, before we determine what blob type we’re going to use, we need to determine what we’re using this particular blob for in the first place.

You'll notice the above extract is quite clear about what to use block blobs for: streaming content, such as video. In other words, anything we don't need random I/O access to. Page blobs, on the other hand, have a 512-byte page boundary that makes them perfect for random I/O access.

And yes, it's conceivably possible for you to need to host something like streaming video as a page blob. When you think about this stuff too much, you end up imagining situations where that might be necessary: situations where you are directly editing or reading very select portions of a file. If you're editing video, who wants to read in an entire 4MB block for one frame? You might laugh at the idea of actually needing to do this, but consider that the Rough Cut Editor is web based and works primarily with web-based files. If you had to run that using Blob storage as a backend, you'd need page blobs to fully realise the RCE's functionality.

So, enough day-dreaming. Time to move on.

Some groundwork

Now, in our block blob, each individual block can be a maximum of 4MB in size. Assuming we're storing streaming video, a single 4MB block is not going to cut it; we need to split our data across many blocks.

The Azure API provides the CloudBlockBlob class with several helper methods for managing our blocks. The methods we are interested in are:

  • PutBlock()
  • PutBlockList()

The PutBlock method takes a base-64 encoded string for the block ID, a stream object with the binary data for the block, and an (optional) MD5 hash of the contents. It's important to note that the ID string MUST be base-64 encoded or else Windows Azure will not accept the block. For the MD5 hash, you can simply pass in null. This method should be called for each and every block that makes up your data stream.

PutBlockList is the final method that needs to be called. It takes a List<string> containing the ID of every block that you want to be part of this blob; calling it commits all the blocks contained in the list. This means you could end up in a situation where you've called PutBlock but not included that block's ID when you called PutBlockList, leaving you with an incomplete and corrupted file. Uncommitted blocks are kept for a week, so all is not lost if you know which blocks are missing: you simply call PutBlockList again, this time including the IDs of the missing blocks.

There are a number of reasons why this is a smart approach. Normally I fall on the side of developer independence, the dev being free to do things as he likes without being hemmed in. In this case, though, being forced to upload data in small chunks brings a number of practical benefits, the big one being recovery from bad uploads: customers hate having to restart gigabyte-sized uploads from scratch.

Here be Dragons

The following example probably isn’t the best. I’m pretty sure someone will refactor and post a better algorithm.

Now there are a couple of things to note here. One being that I want to illustrate what happens at a lower level of abstraction than we usually work at, so that means no StreamReaders; we'll read the underlying bytes directly.

Secondly, not all Streams have the same capabilities. It's perfectly possible to come across a Stream object that you can't seek, or whose length you can't determine. So this is written to handle any data stream you can throw at it.

With that out of the way, let's start with some Windows Azure setup code.

StorageCredentialsAccountAndKey key = new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount acc = new CloudStorageAccount(key, true);

CloudBlobClient blobclient = acc.CreateCloudBlobClient();
CloudBlobContainer Videocontainer = blobclient.GetContainerReference("videos");
Videocontainer.CreateIfNotExist();

CloudBlockBlob blob = Videocontainer.GetBlockBlobReference("myblockblob");

Note how we’re using the CloudBlockBlob rather than the CloudBlob class.

In this example we’ll need our data to be read into a byte array right from the start. While I’m using data from a file here, the actual source doesn’t matter.

byte[] data = File.ReadAllBytes("videopath.mp4");
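
As an aside, if your source is one of those awkward non-seekable Streams I mentioned, you can still get it into the same byte array by buffering it through a MemoryStream first. A quick sketch (assuming .NET 4's Stream.CopyTo and a Stream called source):

byte[] data;
using (var ms = new MemoryStream())
{
    source.CopyTo(ms); // works even when source has no Length and can't Seek
    data = ms.ToArray();
}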

Now, to move data from our byte array into individual blocks, we need a few variables to help us.

int id = 0;
int byteslength = data.Length;
int bytesread = 0;
int index = 0;
List<string> blocklist = new List<string>();
  • id stores a sequential number indicating the ID of the block
  • byteslength is the length, in bytes, of our byte array
  • bytesread keeps a running total of how many bytes we've already read and uploaded
  • index is a copy of bytesread used to do some interim calculations in the body of the loop (it will probably end up being refactored out anyway)
  • blocklist holds all our base-64 encoded block IDs

Now, on to the body of the algorithm. We're using a do loop here since it will always run at least once (assuming, for the sake of example, that all files are larger than our 1MB block size).

do
{
    byte[] buffer = new byte[1048576];
    int limit = index + 1048576;
    for (int loops = 0; index < limit; index++)
    {
        buffer[loops] = data[index];
        loops++;
    }

The idea here (that of using a do loop) is to loop over our data array until less than 1MB remains.

Note how we're using a separate byte array to copy data into. This is the block data that we'll pass to PutBlock. Since we're not using StreamReaders, we have to do the copy byte for byte as we go along.

It is this bit of code that would be abstracted away were we using StreamReaders (or, more properly for this application, BinaryReaders).

Now, this is the important bit:

    bytesread = index;
    string blockIdBase64 = Convert.ToBase64String(System.BitConverter.GetBytes(id)); //1

    blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), null); //2

    blocklist.Add(blockIdBase64);
    id++;
} while (byteslength - bytesread > 1048576);

There are three things to note in the above code. Firstly, we’re taking the block ID and base-64 encoding it properly.

Secondly, note the call to PutBlock. We've wrapped the byte array containing just our block data in a MemoryStream object (since that's what the PutBlock method expects), and we've passed in null rather than an MD5 hash of our block data.

Finally, note how we add the block ID to our blocklist variable. This ensures that the call to PutBlockList will include the IDs of all of our uploaded blocks.

So, by the time this do loop finally exits, we should be in a position to upload our final block. This final block will almost certainly be less than 1MB in size (barring the usual edge-case caveats), so our code needs a final bit to cope with it.

int final = byteslength - bytesread;
byte[] finalbuffer = new byte[final];
for (int loops = 0; index < byteslength; index++)
{
    finalbuffer[loops] = data[index];
    loops++;
}
string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(id));
blob.PutBlock(blockId, new MemoryStream(finalbuffer, true), null);
blocklist.Add(blockId);

Finally, we make our call to PutBlockList, passing in our List<string> (in this example, the blocklist variable).

blob.PutBlockList(blocklist);

All our blocks are now committed. If you have the latest Windows Azure SDK (and I assume you do), the Server Explorer should allow you to see all your blobs and get their direct URLs. You can download the blob directly in the Server Explorer, or copy and paste the URL into your browser of choice.

Wrap up

Basically, what we've covered in this example is a quick way of breaking down any binary data stream into individual blocks conforming to Windows Azure Blob storage requirements, and uploading those blocks to Windows Azure. The neat thing is that with this method, not only can the optional MD5 hash let Windows Azure check data integrity for you, but the block IDs let Windows Azure take care of putting the data back together in the correct sequence.

Now, when I refactor this code for actual production, a couple of things are going to be different. I'll do the MD5 hash. I'll upload blocks in parallel to take maximum advantage of upload bandwidth (this being the UK, there's not much upload bandwidth, but I'll take all I can get). And obviously, I'll use the full capability of stream readers to do the dirty work for me.
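
To give you an idea of where that refactoring is heading, here's a rough sketch of the parallel-upload-with-MD5 version. This is my own sketch, not production code; it reuses the data array and blob reference from above, and assumes System.Linq, System.Threading.Tasks and System.Security.Cryptography are in scope.

const int BlockSize = 1048576; // 1MB, as before

// give every block an ID up front so we can fan the uploads out
int blockCount = (int)Math.Ceiling((double)data.Length / BlockSize);
List<string> blockIds = Enumerable.Range(0, blockCount)
    .Select(i => Convert.ToBase64String(BitConverter.GetBytes(i)))
    .ToList();

Parallel.ForEach(Enumerable.Range(0, blockCount), i =>
{
    int offset = i * BlockSize;
    int length = Math.Min(BlockSize, data.Length - offset);

    byte[] buffer = new byte[length];
    Array.Copy(data, offset, buffer, 0, length);

    // this time we supply the MD5 hash so Windows Azure can verify each block
    string md5;
    using (var hasher = MD5.Create())
    {
        md5 = Convert.ToBase64String(hasher.ComputeHash(buffer));
    }

    blob.PutBlock(blockIds[i], new MemoryStream(buffer), md5);
});

// commit the blocks in their original order, regardless of upload order
blob.PutBlockList(blockIds);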

Here's the full code:

StorageCredentialsAccountAndKey key = new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount acc = new CloudStorageAccount(key, true);

CloudBlobClient blobclient = acc.CreateCloudBlobClient();
CloudBlobContainer Videocontainer = blobclient.GetContainerReference("videos");
Videocontainer.CreateIfNotExist();

CloudBlockBlob blob = Videocontainer.GetBlockBlobReference("myblockblob");

byte[] data = File.ReadAllBytes("videopath.mp4");

int id = 0;
int byteslength = data.Length;
int bytesread = 0;
int index = 0;
List<string> blocklist = new List<string>();

do
{
    byte[] buffer = new byte[1048576];
    int limit = index + 1048576;
    for (int loops = 0; index < limit; index++)
    {
        buffer[loops] = data[index];
        loops++;
    }
    bytesread = index;
    string blockIdBase64 = Convert.ToBase64String(System.BitConverter.GetBytes(id));

    blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), null);

    blocklist.Add(blockIdBase64);
    id++;
} while (byteslength - bytesread > 1048576);

int final = byteslength - bytesread;
byte[] finalbuffer = new byte[final];
for (int loops = 0; index < byteslength; index++)
{
    finalbuffer[loops] = data[index];
    loops++;
}
string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(id));
blob.PutBlock(blockId, new MemoryStream(finalbuffer, true), null);
blocklist.Add(blockId);
blob.PutBlockList(blocklist);

From the Department of MVC useful goodies (and sponsored by the department of Stackoverflow saved my butt)

I use the ipinfodb.com API in my app a lot to do geolocation. Well, as such things are wont to do, it went down for about an hour today. Curiously, the production site wasn't affected at all, but my dev work was. So, in panic mode, I needed to add a country selector so that the rest of the site would have the country information it needed. The reason there wasn't one already was a deliberate design choice, but I needed a backup plan for the next time ipinfodb went down.

As usual, Stack Overflow saved my butt (again). There's a great answer that explains the way to do this in MVC using Razor.

Rather than steal the guy's thunder, I'm just going to add one recommendation. In the body of the JavaScript function, add the following:

location.reload();

And the page refreshes, including any changes triggered by the selection.

Go on and read the answer here.

Sitemaps in ASP.Net MVC: Icing on the Cake

This is short, simple and sweet (forgive the pun). The reason I say that is that you have two options when doing a sitemap in MVC (actually, you have more, but whatever).

The first is using a library. There's an MVC sitemap provider on CodePlex that you can download and install. It involves some XML configuration and attributes on all the actions you want to include in your sitemap.

The fact is, I don’t have time to fiddle around with configurations. I just want a simple sitemap file with a handful of products, categories and one or two other links. If the site was larger and more complex I might consider it.

So, we come to the second, DIY way. Now, this is not entirely my idea. I just repurposed it to pull the correct URL parameters out of the db. The original code is found on Codeplex.

Firstly, we have to register a new route to www.example.com/sitemap.xml. Go to Global.asax and put the following in your RegisterRoutes() method. I put mine after the call to IgnoreRoute.

routes.MapRoute("Sitemap", "sitemap.xml", new { controller = "Home", action = "Sitemap", id = UrlParameter.Optional });

Now, you can use any default routing you want with this. As you can see above, the route is pointing to the Sitemap action of the Home controller.
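
For clarity, the whole RegisterRoutes method ends up looking something like this (a sketch based on the stock MVC 3 template; your own default route may well differ):

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    // the sitemap route goes in before the catch-all default route
    routes.MapRoute("Sitemap", "sitemap.xml", new { controller = "Home", action = "Sitemap", id = UrlParameter.Optional });

    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = UrlParameter.Optional });
}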

Then we have to actually populate our Action with some code.

protected string GetUrl(object routeValues)
{
    RouteValueDictionary values = new RouteValueDictionary(routeValues);
    RequestContext context = new RequestContext(HttpContext, RouteData);

    string url = RouteTable.Routes.GetVirtualPath(context, values).VirtualPath;

    return new Uri(Request.Url, url).AbsoluteUri;
}

[OutputCache(Duration = 3600)]
public ContentResult Sitemap()
{
    var categories = storeDB.Categories.Include("Products").Where(g => g.Id != 8); //some filtering of categories
    XNamespace xmlns = "http://www.sitemaps.org/schemas/sitemap/0.9";
    XElement root = new XElement(xmlns + "urlset");

    List<string> urlList = new List<string>();
    urlList.Add(GetUrl(new { controller = "Home", action = "Index" }));
    urlList.Add(GetUrl(new { controller = "Home", action = "Terms" }));
    urlList.Add(GetUrl(new { controller = "Home", action = "ShippingFAQ" }));
    urlList.Add(GetUrl(new { controller = "Home", action = "Testimonials" }));
    foreach (var item in categories)
    {
        urlList.Add(string.Format("{0}?{1}={2}", GetUrl(new { controller = "Store", action = "BrowseProducts" }), "category", item.Name));

        foreach (var product in item.Products)
        {
            urlList.Add(string.Format("{0}/{1}", GetUrl(new { controller = "Store", action = "ProductDetails" }), product.Id));
        }
    }

    foreach (var item in urlList)
    {
        // keep the url elements in the sitemap namespace,
        // otherwise they end up with an empty xmlns="" attribute
        root.Add(
            new XElement(xmlns + "url",
            new XElement(xmlns + "loc", item),
            new XElement(xmlns + "changefreq", "daily")));
    }

    using (MemoryStream ms = new MemoryStream())
    {
        using (StreamWriter writer = new StreamWriter(ms, Encoding.UTF8))
        {
            root.Save(writer);
        }

        return Content(Encoding.UTF8.GetString(ms.ToArray()), "text/xml", Encoding.UTF8);
    }
}

Essentially, we're just outputting an XML file with the correct format and structure. This gives us a file that looks like this:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://localhost:26641/</loc>
    <changefreq>daily</changefreq>
  </url>
  <url>
    <loc>http://localhost:26641/Home/Terms</loc>
    <changefreq>daily</changefreq>
  </url>
  <url>
    <loc>http://localhost:26641/Home/ShippingFAQ</loc>
    <changefreq>daily</changefreq>
  </url>

You get the idea.

The above code is using Entity Framework 4.1, so you can replace the line that declares "var categories" with whatever data source you have. And you'll have to reformat the URL strings to conform to your parameter format.
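
For instance, if you're not on Entity Framework at all, any enumerable of objects with a Name and a Products collection will do. A throwaway example with made-up data:

// anonymous types are enough here, since the loop only reads Name, Products and Id
var categories = new[]
{
    new { Name = "Shirts", Products = new[] { new { Id = 1 }, new { Id = 2 } } },
    new { Name = "Hoodies", Products = new[] { new { Id = 3 } } }
};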

Now I’m not suggesting this for any large MVC deployment. The code could get rather messy.

 

But for something simple, it works like a dream.

Getting your News Manually Blows: A Reply

Last night Holden Page wrote a pretty good blog post entitled: Getting Your News Manually Blows (http://pagesaresocial.com/2011/04/05/getting-your-news-manually-blows/).

It was late, so I thought I’d reply now when I’m fully awake :).

Holden's point was that when you use a service like my6thsense that auto-curates the news and presents it to you, it's much easier than using Twitter (and by extension Google Reader etc.) to get your news. Essentially, it's easier to have the news pushed to you than to have to go off and pull it from various services.

Essentially, Holden is arguing for the curation model rather than the pull-it-yourself model. This is in fact something that I've noticed myself: there is a consistent lack of content, even using Feedly, Twitter and FriendFeed to get my news.

Now, I got an iPad about two months ago and promptly installed Flipboard. I mention this because the experience is as important as, if not more so than, the content within that experience. It rapidly warmed me to the iPad as a digital newspaper, complete with page turns and layout. It's blatantly obvious by now, but the iPad's ability to BECOME the app that's running is a singular experience. As a dead-tree newspaper reader, I know turning the pages of a broadsheet can be a torturous experience; Flipboard showed me how all that fuss just melts away on the iPad.

Thus I went off to seek actual newspaper apps to use.

Now before you groan and call newspapers dead tree media of the past, sit down and think about it. Newspapers were the my6thsense of dead tree media (albeit with political persuasions, rivalries and narcissistic owners taking the place of algorithms). Decades' worth of experience curating the news still produces some damn fine newspapers.

That's a fact. Take the Times of London. I've been a Times reader for years, even though it is a Tory paper. The iPad apps for the Times and the Sunday Times are incredibly good. They preserve the newspaper's look and feel whilst incorporating video and some innovative layouts. I like them so much I signed up for the monthly subscription.

Here’s the scary bit: I open up The Times app and read it before I open up Feedly. No kidding. It’s curated news that covers a wide variety of topics succinctly. It’s quick and easy to browse through.

“Hang on a minute” I hear you say, “there’s no sharing or linking or liking or anything! It’s a Walled Garden”. And that’s the truly scary part: I don’t care.

Now I'm not at all suggesting that we abandon our feed readers and start reading digital newspapers. Quite the opposite. I'm saying that if we're looking for sources of curated news, digital newspapers had better be one of those sources. Indeed, Feedly still gets used every day. It's invaluable to me. (Today, for example, the Falcon 9 Heavy story is nowhere to be found in The Times.)

The other good newspaper app I found was the USA Today app. It's nice and clean, with a simple interface that focusses on content. As a bonus, you can share articles to your heart's content. Plus, it's US-centric for the Americans among us 😉

I mentioned above that looking for good content through my feeds, or through Twitter and FriendFeed, was/is becoming a bit of a chore. I'm finding more and more times where I'm looking at absolutely nothing. I probably need to subscribe to better feeds or follow better people. I probably need to reorganise my feeds to lay out content better and cull the boring ones. But here's the thing: I don't have the time to do all that.

As Apple would say, There’s an App For That!

Using SQL Azure with ELMAH

If you don’t know what ELMAH is, stop right now and go and read about it.

ELMAH (Error Logging Modules and Handlers) is an application-wide error logging facility that is completely pluggable. It can be dynamically added to a running ASP.NET web application, or even all ASP.NET web applications on a machine, without any need for re-compilation or re-deployment.

Then go and read Scott “TheHa” Hanselman’s post on it.

There is a NuGet package for it as well, to make things really super easy.

In fact, running NuGet, setting up SQL Azure and tweaking some config settings took me all of 20 minutes. No freaking kidding.

Now remember that this is being installed on an MVC site, so don’t let that put you off. Here we go:

Step One: Install from NuGet (making sure you have the latest build of NuGet in the process)

Step Two: Setup SQL Azure

Step Three: Configure web.config

 

And done. :)

 

So, let's go back and look at the details.

In step two, this is the SQL Azure script that the migration wizard spat out:

--~Changing index [dbo].[ELMAH_Error].PK_ELMAH_Error to a clustered index.  You may want to pick a different index to cluster on.
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_Error]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[ELMAH_Error](
	[ErrorId] [uniqueidentifier] NOT NULL,
	[Application] [nvarchar](60) NOT NULL,
	[Host] [nvarchar](50) NOT NULL,
	[Type] [nvarchar](100) NOT NULL,
	[Source] [nvarchar](60) NOT NULL,
	[Message] [nvarchar](500) NOT NULL,
	[User] [nvarchar](50) NOT NULL,
	[StatusCode] [int] NOT NULL,
	[TimeUtc] [datetime] NOT NULL,
	[Sequence] [int] IDENTITY(1,1) NOT NULL,
	[AllXml] [nvarchar](max) NOT NULL,
 CONSTRAINT [PK_ELMAH_Error] PRIMARY KEY CLUSTERED 
(
	[ErrorId] ASC
)WITH (STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF)
)
END

IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_Error]') AND name = N'IX_ELMAH_Error_App_Time_Seq')
CREATE NONCLUSTERED INDEX [IX_ELMAH_Error_App_Time_Seq] ON [dbo].[ELMAH_Error] 
(
	[Application] ASC,
	[TimeUtc] DESC,
	[Sequence] DESC
)WITH (STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
GO
IF NOT EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[DF_ELMAH_Error_ErrorId]') AND type = 'D')
BEGIN
ALTER TABLE [dbo].[ELMAH_Error] ADD  CONSTRAINT [DF_ELMAH_Error_ErrorId]  DEFAULT (newid()) FOR [ErrorId]
END

GO
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_GetErrorsXml]') AND type in (N'P', N'PC'))
BEGIN
EXEC dbo.sp_executesql @statement = N'
CREATE PROCEDURE [dbo].[ELMAH_GetErrorsXml]
(
    @Application NVARCHAR(60),
    @PageIndex INT = 0,
    @PageSize INT = 15,
    @TotalCount INT OUTPUT
)
AS 

    SET NOCOUNT ON

    DECLARE @FirstTimeUTC DATETIME
    DECLARE @FirstSequence INT
    DECLARE @StartRow INT
    DECLARE @StartRowIndex INT

    SELECT 
        @TotalCount = COUNT(1) 
    FROM 
        [ELMAH_Error]
    WHERE 
        [Application] = @Application

    -- Get the ID of the first error for the requested page

    SET @StartRowIndex = @PageIndex * @PageSize + 1

    IF @StartRowIndex <= @TotalCount
    BEGIN

        SET ROWCOUNT @StartRowIndex

        SELECT  
            @FirstTimeUTC = [TimeUtc],
            @FirstSequence = [Sequence]
        FROM 
            [ELMAH_Error]
        WHERE   
            [Application] = @Application
        ORDER BY 
            [TimeUtc] DESC, 
            [Sequence] DESC

    END
    ELSE
    BEGIN

        SET @PageSize = 0

    END

    -- Now set the row count to the requested page size and get
    -- all records below it for the pertaining application.

    SET ROWCOUNT @PageSize

    SELECT 
        errorId     = [ErrorId], 
        application = [Application],
        host        = [Host], 
        type        = [Type],
        source      = [Source],
        message     = [Message],
        [user]      = [User],
        statusCode  = [StatusCode], 
        time        = CONVERT(VARCHAR(50), [TimeUtc], 126) + ''Z''
    FROM 
        [ELMAH_Error] error
    WHERE
        [Application] = @Application
    AND
        [TimeUtc] <= @FirstTimeUTC
    AND 
        [Sequence] <= @FirstSequence
    ORDER BY
        [TimeUtc] DESC, 
        [Sequence] DESC
    FOR
        XML AUTO

' 
END
GO
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_GetErrorXml]') AND type in (N'P', N'PC'))
BEGIN
EXEC dbo.sp_executesql @statement = N'
CREATE PROCEDURE [dbo].[ELMAH_GetErrorXml]
(
    @Application NVARCHAR(60),
    @ErrorId UNIQUEIDENTIFIER
)
AS

    SET NOCOUNT ON

    SELECT 
        [AllXml]
    FROM 
        [ELMAH_Error]
    WHERE
        [ErrorId] = @ErrorId
    AND
        [Application] = @Application

' 
END
GO
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_LogError]') AND type in (N'P', N'PC'))
BEGIN
EXEC dbo.sp_executesql @statement = N'
CREATE PROCEDURE [dbo].[ELMAH_LogError]
(
    @ErrorId UNIQUEIDENTIFIER,
    @Application NVARCHAR(60),
    @Host NVARCHAR(30),
    @Type NVARCHAR(100),
    @Source NVARCHAR(60),
    @Message NVARCHAR(500),
    @User NVARCHAR(50),
    @AllXml NVARCHAR(MAX),
    @StatusCode INT,
    @TimeUtc DATETIME
)
AS

    SET NOCOUNT ON

    INSERT
    INTO
        [ELMAH_Error]
        (
            [ErrorId],
            [Application],
            [Host],
            [Type],
            [Source],
            [Message],
            [User],
            [AllXml],
            [StatusCode],
            [TimeUtc]
        )
    VALUES
        (
            @ErrorId,
            @Application,
            @Host,
            @Type,
            @Source,
            @Message,
            @User,
            @AllXml,
            @StatusCode,
            @TimeUtc
        )

' 
END
GO

Log in to SQL Management Studio and run that script against your chosen db and you’re good to go.

 

In step three, there are two main things you want to do.

The first is obviously setting up ELMAH to talk to the db. We do it like so:

<elmah>
    <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="ConnectionStringhere" />
    <security allowRemoteAccess="yes" />
  </elmah>
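
The connection string itself lives in the usual <connectionStrings> section. The server, database and credentials below are placeholders; yours come from the SQL Azure portal:

<connectionStrings>
  <add name="ConnectionStringhere"
       connectionString="Server=tcp:yourserver.database.windows.net;Database=YourElmahDb;User ID=youruser@yourserver;Password=yourpassword;Trusted_Connection=False;Encrypt=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>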

And the second is securing the actual elmah.axd page.

  <location path="elmah.axd">
    <system.web>
      <authorization>
        <allow roles="Administrator"/>
        <deny users="*"/>
      </authorization>
    </system.web>
  </location>

And we're done. Easiest thing I've ever done. :)

WordPress Feature Request: A Universal Video Player

As any of you who have been following my screencasts know, I've been using Vimeo to host and share my videos.

However, Vimeo as it stands is slightly restrictive: 500MB of uploads a week, and one HD video a week. For screencasts, HD is essential, so I'm limited to one video a week, if that.

Now what I'd love to do is upload my videos to Windows Azure blobs. Blob storage is completely format agnostic, so I could upload any format I wanted to: it doesn't have to be Silverlight Adaptive Streaming (or whatever the term is). The CDN capability of blobs is also very helpful.

Now, nothing is stopping me from moving away from WordPress and customising whatever blogging system I'd run instead to load videos from Windows Azure blobs. However, that's a little more trouble than it's worth right now.

(I’m actually thinking of eventually moving to Windows Azure utilising the extra small instance, but that’s a ways off)

So what I'd love is a video player that you can just point at a URL and it will play the video. It doesn't have to be Silverlight or Windows Media specific. It could play H.264 files (an iffy proposition with Google yanking H.264 support from Chromium, to be sure).

So in other words, WordPress can provide the player in whatever form it wants to. I just provide the player with the URL to the file to be played.

I would think that the HTML5 spec would make this a fairly trivial undertaking.

In fact, now that I think of it, couldn't one simply insert a <video> tag in the HTML view of a post?
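
Something along these lines is really all I'm asking for (the blob URL is made up, obviously):

<video src="http://myaccount.blob.core.windows.net/videos/screencast-episode-1.mp4"
       width="640" height="360" controls>
  Sorry, your browser doesn't support the HTML5 video tag.
</video>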