WCF Server Chat Update:

I just left this reply to a comment on my last post on this project:

Hi there,

No, I haven’t been able to make any changes since my last commit.

However, I have been taking a look at it over the past week, since there are issues with it. As you say, authentication and authorisation are among them.

I’ve been looking at the possibility of using Forms authentication. This brings ASP.Net Membership, Profiles and Roles to the table and these can be used.

This, of course, requires SQL Server as a back end, and it requires the use of SSL.

This requires significant changes to the code base to move from the current storage model (XML in the case of the Windows service, and Azure Tables in the case of the Windows Azure role). And since authorisation and authentication would now be handled separately, the WCF method calls that currently handle them are no longer required.

Also, since the codebase is effectively two separate projects, these changes need to be made twice.

The client will also need to be changed.

These changes do make a lot of sense, and I’m well along in implementing them. So look out for a post on them soon.

 

I thought I’d let everyone know that this is the direction I’m taking things in.

 

Life is busy, which is why I haven’t been able to update things as much as I’d like.

There is one other particular problem with the WCF Server Chat that I would have liked to have solved in my last commit.

The issue involves the server pinging other clients across NAT. Of course, this could be solved by moving to a pull model rather than a push notification model. While that would be easy, I want to take a good long look at getting push to work properly before trying any other models.
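
For reference, the push model in WCF typically means a duplex contract, where the server calls clients back over a callback channel. A minimal sketch (the interface names here are hypothetical; the attributes are standard WCF):

[ServiceContract(CallbackContract = typeof(IChatClientCallback))]
public interface IChatService
{
    [OperationContract]
    void SendMessage(string from, string message);
}

// No [ServiceContract] needed on the callback interface; the server obtains it via
// OperationContext.Current.GetCallbackChannel<IChatClientCallback>()
public interface IChatClientCallback
{
    [OperationContract(IsOneWay = true)]
    void MessageReceived(string from, string message); // the server pushes to the client
}

(With wsDualHttpBinding the server opens a separate connection back to the client, which is exactly what NAT breaks; net.tcp duplex reuses the connection the client opened, which is one possible way around the problem.)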

Push notification is where it’s at. So if anyone has any pointers on implementing it, please send them my way.

Windows Azure Block Blobs

In Windows Azure Blob Storage, not all blobs are created equal. Windows Azure has the notion of Page Blobs and Block Blobs. Each of these distinct blob types aims to solve a slightly different problem, and it’s important to understand the difference.

To quote the documentation:

  • Block blobs, which are optimized for streaming.
  • Page blobs, which are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob.

About Block Blobs

Block blobs are comprised of blocks, each of which is identified by a block ID. You create or modify a block blob by uploading a set of blocks and committing them by their block IDs. If you are uploading a block blob that is no more than 64 MB in size, you can also upload it in its entirety with a single Put Blob operation.

When you upload a block to Windows Azure using the Put Block operation, it is associated with the specified block blob, but it does not become part of the blob until you call the Put Block List operation and include the block’s ID. The block remains in an uncommitted state until it is specifically committed. Writing to a block blob is thus always a two-step process.

Each block can be a maximum of 4 MB in size. The maximum size for a block blob in version 2009-09-19 is 200 GB, or up to 50,000 blocks.
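
(The arithmetic checks out: 50,000 blocks × 4 MB per block = 200,000 MB, or roughly 200 GB.)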

About Page Blobs

Page blobs are a collection of pages. A page is a range of data that is identified by its offset from the start of the blob.

To create a page blob, you initialize the page blob by calling Put Blob and specifying its maximum size. To add content to or update a page blob, you call the Put Page operation to modify a page or range of pages by specifying an offset and range. All pages must align to 512-byte page boundaries.

Unlike writes to block blobs, writes to page blobs happen in-place and are immediately committed to the blob.

The maximum size for a page blob is 1 TB. A page written to a page blob may be up to 1 TB in size.
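
As an aside, writing to a page blob looks something like this with the StorageClient library (a sketch, assuming the v1.x CloudPageBlob API; the container and blob names are illustrative):

CloudPageBlob pageBlob = container.GetPageBlobReference("mypageblob");
pageBlob.Create(1024 * 1024); // fix the maximum size up front – 1MB here

byte[] page = new byte[512]; // exactly one 512-byte page
pageBlob.WritePages(new MemoryStream(page), 0); // offset 0; in-place and committed immediately

Note the contrast with block blobs: there is no separate commit step.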

So, before we determine what blob type we’re going to use, we need to determine what we’re using this particular blob for in the first place.

You’ll notice the above extract is quite clear about what to use block blobs for: streaming. In other words, anything that we don’t need random I/O access to. Page blobs, on the other hand, are written in 512-byte pages, which makes them perfect for random I/O access.

And yes, it’s conceivable that you might need to host something like streaming video as a page blob. When you think about this stuff too much, you end up imagining situations where that might be necessary. These would be situations where you are directly editing or reading very select portions of a file. If you’re editing video, who wants to read in an entire 4MB block for one frame of video? You might laugh at the idea of actually needing to do this, but consider that the Rough Cut Editor is web based and works primarily with web-hosted files. If you had to run it using Blob storage as a backend, you’d need to use page blobs to fully realise the RCE’s functionality.

So, enough day-dreaming. Time to move on.

Some groundwork

Now, in our block blob, each individual block can be a maximum of 4MB in size. Assuming we’re doing streaming video, a single 4MB block is not going to cut it, so we need to split our data across many blocks.

The Azure API provides the CloudBlockBlob class with several helper methods for managing our blocks. The methods we are interested in are:

  • PutBlock()
  • PutBlockList()

The PutBlock method takes a base-64 encoded string for the block ID, a stream object with the binary data for the block, and an (optional) MD5 hash of the contents. It’s important to note that the ID string MUST be base-64 encoded or else Windows Azure will not accept the block. For the MD5 hash, you can simply pass in null. This method should be called for each and every block that makes up your data stream.
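
In isolation, a single call looks something like this (same v1.x StorageClient API as the full example below; blockData stands in for your block’s bytes):

// Block IDs must be base-64 encoded, and all IDs within one blob must be the
// same length; encoding a fixed-width int satisfies both requirements.
string blockId = Convert.ToBase64String(BitConverter.GetBytes(0));
blob.PutBlock(blockId, new MemoryStream(blockData), null); // null = skip the MD5 integrity check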

PutBlockList is the final method that needs to be called. It takes a List<string> containing the ID of every block that you want to be part of this blob. Calling this method commits all the blocks contained in the list. This means you could land up in a situation where you’ve called PutBlock but left an ID out of the list you passed to PutBlockList, leaving you with an incomplete and corrupted file. All is not lost, though: uncommitted blocks are kept for a week, so if you know which blocks are missing, you simply upload them and call PutBlockList again with the complete list of IDs.
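
As a sketch of that recovery path (assuming the StorageClient’s DownloadBlockList method, which can enumerate uncommitted blocks; allBlockIds and GetBlockData are hypothetical):

// Ask Azure which blocks it is already holding for this blob
var uploaded = blob.DownloadBlockList(BlockListingFilter.Uncommitted)
                   .Select(b => b.Name)
                   .ToList();

// Re-upload only what's missing, then commit the complete list
foreach (string missingId in allBlockIds.Except(uploaded))
{
    blob.PutBlock(missingId, GetBlockData(missingId), null);
}
blob.PutBlockList(allBlockIds);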

There are a number of reasons why this is a smart approach. Normally, I fall on the side of developer independence – the dev being free to do things as he likes without being hemmed in. In this case, though, by being forced to upload data in small chunks, we realise a number of practical benefits. The big one is recovery from bad uploads – customers hate having to re-start gigabyte-sized uploads from scratch.

Here be Dragons

The following example probably isn’t the best. I’m pretty sure someone will refactor and post a better algorithm.

Now there are a couple of things to note here. One being that I want to illustrate what happens at a lower level of abstraction than we usually work at, so that means no StreamReaders – we’ll read the underlying bytes directly.

Secondly, not all Streams have the same capabilities. It’s perfectly possible to come across a Stream object that you can’t seek over, or whose length you can’t determine. So this is written with the aim of handling any data stream you can throw at it.

With that out of the way, let’s start with some Windows Azure setup code.

StorageCredentialsAccountAndKey key = new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount acc = new CloudStorageAccount(key, true);

CloudBlobClient blobclient = acc.CreateCloudBlobClient();
CloudBlobContainer Videocontainer = blobclient.GetContainerReference("videos");
Videocontainer.CreateIfNotExist();

CloudBlockBlob blob = Videocontainer.GetBlockBlobReference("myblockblob");

Note how we’re using the CloudBlockBlob rather than the CloudBlob class.

In this example we’ll need our data to be read into a byte array right from the start. While I’m using data from a file here, the actual source doesn’t matter.

byte[] data = File.ReadAllBytes("videopath.mp4");

Now, to move data from our byte array into individual blocks, we need a few variables to help us.

int id = 0;
int byteslength = data.Length;
int bytesread = 0;
int index = 0;
List<string> blocklist = new List<string>();
  • id stores a sequential number from which we generate each block ID
  • byteslength is the length, in bytes, of our byte array
  • bytesread keeps a running total of how many bytes we’ve already read and uploaded
  • index is a copy of bytesread used to do some interim calculations in the body of the loop (it will probably end up refactored out anyway)
  • blocklist holds all our base-64 encoded block IDs

Now, on to the body of the algorithm. We’re using a do loop here since this loop will always run at least once (assuming, for the sake of example, that all files are larger than our 1MB block boundary).

do
{
    byte[] buffer = new byte[1048576]; // a fresh 1MB block buffer
    int limit = index + 1048576;

    // Copy the next 1MB of data, byte by byte, into the block buffer
    for (int loops = 0; index < limit; index++)
    {
        buffer[loops] = data[index];
        loops++;
    }

The idea of using a do loop here is to keep looping over our data array until less than 1MB remains.

Note how we’re using a separate byte array to copy data into. This is the block data that we’ll pass to PutBlock. Since we’re not using StreamReaders, we have to do the copy byte by byte as we go along.

It is this bit of code that would be abstracted away were we using StreamReaders (or, more properly for this application, BinaryReaders).

Now, this is the important bit:

    bytesread = index;
    string blockIdBase64 = Convert.ToBase64String(System.BitConverter.GetBytes(id)); //1

    blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), null); //2

    blocklist.Add(blockIdBase64); //3
    id++;
} while (byteslength - bytesread > 1048576);

There are three things to note in the above code. Firstly, we’re taking the block ID and base-64 encoding it properly.

And secondly, note the call to PutBlock. We’ve wrapped the byte array containing our block data in a MemoryStream object (since that’s what the PutBlock method expects), and we’ve passed in null rather than an MD5 hash of our block data.

Finally, note how we add the block ID to our blocklist variable. This ensures that the call to PutBlockList will include the IDs of all of our uploaded blocks.

So, by the time this do loop finally exits, we should be in a position to upload our final block. This final block will almost certainly be less than 1MB in size (barring the usual edge-case caveats). Since this final block is less than 1MB, our code needs a final change to cope with it.

int final = byteslength - bytesread;
byte[] finalbuffer = new byte[final]; // sized to exactly the leftover byte count

for (int loops = 0; index < byteslength; index++)
{
    finalbuffer[loops] = data[index];
    loops++;
}

string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(id));
blob.PutBlock(blockId, new MemoryStream(finalbuffer, true), null);
blocklist.Add(blockId);

Finally, we make our call to PutBlockList, passing in our list of block IDs (in this example, the blocklist variable).

blob.PutBlockList(blocklist);

All our blocks are now committed. If you have the latest Windows Azure SDK (and I assume you do), the Server Explorer should allow you to see all your blobs and get their direct URLs. You can download the blob directly in the Server Explorer, or copy and paste the URL into your browser of choice.

Wrap up

Basically, what we’ve covered in this example is a quick way of breaking down any binary data stream into individual blocks conforming to Windows Azure Blob storage requirements, and uploading those blocks to Windows Azure. The neat thing here is that not only can an MD5 hash (if you supply one) let Windows Azure check data integrity for you, but the block IDs let Windows Azure take care of putting the data back together in the correct sequence.

Now, when I refactor this code for actual production, a couple of things are going to be different. I’ll do the MD5 hash. I’ll upload blocks in parallel to take maximum advantage of upload bandwidth (this being the UK, there’s not much upload bandwidth, but I’ll take all I can get). And obviously, I’ll use the full capability of StreamReaders to do the dirty work for me.
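
For the MD5 part, the change is small. Put Block accepts a base-64 encoded MD5 of the block’s contents, which Azure recomputes and compares on receipt; something like this inside the loop:

using (var md5 = System.Security.Cryptography.MD5.Create())
{
    string contentMD5 = Convert.ToBase64String(md5.ComputeHash(buffer)); // hash just this block
    blob.PutBlock(blockIdBase64, new MemoryStream(buffer), contentMD5);
}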

Here’s the full code:

StorageCredentialsAccountAndKey key = new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount acc = new CloudStorageAccount(key, true);

CloudBlobClient blobclient = acc.CreateCloudBlobClient();
CloudBlobContainer Videocontainer = blobclient.GetContainerReference("videos");
Videocontainer.CreateIfNotExist();

CloudBlockBlob blob = Videocontainer.GetBlockBlobReference("myblockblob");

byte[] data = File.ReadAllBytes("videopath.mp4");

int id = 0;
int byteslength = data.Length;
int bytesread = 0;
int index = 0;
List<string> blocklist = new List<string>();

do
{
    // Copy the next 1MB of data into a fresh block buffer
    byte[] buffer = new byte[1048576];
    int limit = index + 1048576;
    for (int loops = 0; index < limit; index++)
    {
        buffer[loops] = data[index];
        loops++;
    }
    bytesread = index;

    // Block IDs must be base-64 encoded (and all the same length)
    string blockIdBase64 = Convert.ToBase64String(System.BitConverter.GetBytes(id));

    blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), null);

    blocklist.Add(blockIdBase64);
    id++;
} while (byteslength - bytesread > 1048576);

// The final block is whatever is left over (less than 1MB)
int final = byteslength - bytesread;
byte[] finalbuffer = new byte[final];
for (int loops = 0; index < byteslength; index++)
{
    finalbuffer[loops] = data[index];
    loops++;
}
string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(id));
blob.PutBlock(blockId, new MemoryStream(finalbuffer, true), null);
blocklist.Add(blockId);

// Commit every uploaded block, in sequence
blob.PutBlockList(blocklist);

Using SQL Azure with ELMAH

If you don’t know what ELMAH is, stop right now and go and read about it.

ELMAH (Error Logging Modules and Handlers) is an application-wide error logging facility that is completely pluggable. It can be dynamically added to a running ASP.NET web application, or even all ASP.NET web applications on a machine, without any need for re-compilation or re-deployment.

Then go and read Scott “TheHa” Hanselman’s post on it.

There is a NuGet package for it as well, to make things really super easy.

In fact, running NuGet, setting up SQL Azure and tweaking some config settings took me all of 20 minutes. No freaking kidding.

Now remember that this is being installed on an MVC site, so don’t let that put you off. Here we go:

Step One: Install from NuGet (making sure you have the latest build of NuGet in the process)

Step Two: Setup SQL Azure

Step Three: Configure web.config

 

And done.

 

So, let’s go back and look at the details.

In step two, this is the SQL Azure script that the SQL Azure Migration Wizard spat out:

--~Changing index [dbo].[ELMAH_Error].PK_ELMAH_Error to a clustered index.  You may want to pick a different index to cluster on.
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_Error]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[ELMAH_Error](
	[ErrorId] [uniqueidentifier] NOT NULL,
	[Application] [nvarchar](60) NOT NULL,
	[Host] [nvarchar](50) NOT NULL,
	[Type] [nvarchar](100) NOT NULL,
	[Source] [nvarchar](60) NOT NULL,
	[Message] [nvarchar](500) NOT NULL,
	[User] [nvarchar](50) NOT NULL,
	[StatusCode] [int] NOT NULL,
	[TimeUtc] [datetime] NOT NULL,
	[Sequence] [int] IDENTITY(1,1) NOT NULL,
	[AllXml] [nvarchar](max) NOT NULL,
 CONSTRAINT [PK_ELMAH_Error] PRIMARY KEY CLUSTERED 
(
	[ErrorId] ASC
)WITH (STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF)
)
END

IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_Error]') AND name = N'IX_ELMAH_Error_App_Time_Seq')
CREATE NONCLUSTERED INDEX [IX_ELMAH_Error_App_Time_Seq] ON [dbo].[ELMAH_Error] 
(
	[Application] ASC,
	[TimeUtc] DESC,
	[Sequence] DESC
)WITH (STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
GO
IF NOT EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[DF_ELMAH_Error_ErrorId]') AND type = 'D')
BEGIN
ALTER TABLE [dbo].[ELMAH_Error] ADD  CONSTRAINT [DF_ELMAH_Error_ErrorId]  DEFAULT (newid()) FOR [ErrorId]
END

GO
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_GetErrorsXml]') AND type in (N'P', N'PC'))
BEGIN
EXEC dbo.sp_executesql @statement = N'
CREATE PROCEDURE [dbo].[ELMAH_GetErrorsXml]
(
    @Application NVARCHAR(60),
    @PageIndex INT = 0,
    @PageSize INT = 15,
    @TotalCount INT OUTPUT
)
AS 

    SET NOCOUNT ON

    DECLARE @FirstTimeUTC DATETIME
    DECLARE @FirstSequence INT
    DECLARE @StartRow INT
    DECLARE @StartRowIndex INT

    SELECT 
        @TotalCount = COUNT(1) 
    FROM 
        [ELMAH_Error]
    WHERE 
        [Application] = @Application

    -- Get the ID of the first error for the requested page

    SET @StartRowIndex = @PageIndex * @PageSize + 1

    IF @StartRowIndex <= @TotalCount
    BEGIN

        SET ROWCOUNT @StartRowIndex

        SELECT  
            @FirstTimeUTC = [TimeUtc],
            @FirstSequence = [Sequence]
        FROM 
            [ELMAH_Error]
        WHERE   
            [Application] = @Application
        ORDER BY 
            [TimeUtc] DESC, 
            [Sequence] DESC

    END
    ELSE
    BEGIN

        SET @PageSize = 0

    END

    -- Now set the row count to the requested page size and get
    -- all records below it for the pertaining application.

    SET ROWCOUNT @PageSize

    SELECT 
        errorId     = [ErrorId], 
        application = [Application],
        host        = [Host], 
        type        = [Type],
        source      = [Source],
        message     = [Message],
        [user]      = [User],
        statusCode  = [StatusCode], 
        time        = CONVERT(VARCHAR(50), [TimeUtc], 126) + ''Z''
    FROM 
        [ELMAH_Error] error
    WHERE
        [Application] = @Application
    AND
        [TimeUtc] <= @FirstTimeUTC
    AND 
        [Sequence] <= @FirstSequence
    ORDER BY
        [TimeUtc] DESC, 
        [Sequence] DESC
    FOR
        XML AUTO

' 
END
GO
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_GetErrorXml]') AND type in (N'P', N'PC'))
BEGIN
EXEC dbo.sp_executesql @statement = N'
CREATE PROCEDURE [dbo].[ELMAH_GetErrorXml]
(
    @Application NVARCHAR(60),
    @ErrorId UNIQUEIDENTIFIER
)
AS

    SET NOCOUNT ON

    SELECT 
        [AllXml]
    FROM 
        [ELMAH_Error]
    WHERE
        [ErrorId] = @ErrorId
    AND
        [Application] = @Application

' 
END
GO
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ELMAH_LogError]') AND type in (N'P', N'PC'))
BEGIN
EXEC dbo.sp_executesql @statement = N'
CREATE PROCEDURE [dbo].[ELMAH_LogError]
(
    @ErrorId UNIQUEIDENTIFIER,
    @Application NVARCHAR(60),
    @Host NVARCHAR(30),
    @Type NVARCHAR(100),
    @Source NVARCHAR(60),
    @Message NVARCHAR(500),
    @User NVARCHAR(50),
    @AllXml NVARCHAR(MAX),
    @StatusCode INT,
    @TimeUtc DATETIME
)
AS

    SET NOCOUNT ON

    INSERT
    INTO
        [ELMAH_Error]
        (
            [ErrorId],
            [Application],
            [Host],
            [Type],
            [Source],
            [Message],
            [User],
            [AllXml],
            [StatusCode],
            [TimeUtc]
        )
    VALUES
        (
            @ErrorId,
            @Application,
            @Host,
            @Type,
            @Source,
            @Message,
            @User,
            @AllXml,
            @StatusCode,
            @TimeUtc
        )

' 
END
GO

Log in to SQL Management Studio and run that script against your chosen db and you’re good to go.

 

In step 3, there are two main things you want to do.

The first is obviously setting up ELMAH to talk to the db. We do it like so:

<elmah>
  <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="ConnectionStringhere" />
  <security allowRemoteAccess="yes" />
</elmah>

And the second is securing the actual elmah.axd page.

  <location path="elmah.axd">
    <system.web>
      <authorization>
        <allow roles="Administrator"/>
        <deny users="*"/>
      </authorization>
    </system.web>
  </location>

(Authorization rules are evaluated top to bottom and the first match wins, so the allow for Administrators has to come before the blanket deny; denying only "?" would keep out anonymous visitors but leave the page open to any authenticated user.)

And we’re done. Easiest thing I’ve ever done.

Deploying your Database to SQL Azure (and using ASP.Net Membership with it)

It’s been quite quiet around here on the blog. The reason for that is that I was asked by a Herbalife distributor to put a little e-commerce site together (it’s called Flying Shakes). So it’s been a very busy few weeks here, and hopefully, as things settle down, we can get back to business as usual. I’ve badly neglected the blog and the screencast series.

I have a few instructive posts to write about this whole experience, as it presented a few unique challenges.

Now, I’m starting at the back end here, since deploying the database to SQL Azure was the difficult part of deployment. The reason for this is mainly the ASP.Net Membership database.

But we’ll start from the beginning. Now I’m assuming here that your database is complete and ready for deployment.

Step 0: Sign up for Windows Azure (if you haven’t already) and provision a new database. Take note of the server’s fully qualified DNS address and remember your username and password. You’ll need them in a bit.

Step 1: Attach your database to your local SQL Server. Use SQL Server Management Studio to do that.

At this point we have our two databases, and we need to transfer the schema and data from one to the other. To do that, we’ll use a helpful little CodePlex project called SQL Azure Migration Wizard. Download it and unzip the files.

Run the exe. I chose Analyse and Migrate.


Enter your local SQL Server details.


Hit Next until it asks if you’re ready to generate the SQL scripts. You get a confirmation screen once it has analysed the database and compiled the scripts.


Next you get the second login screen, which connects you to your newly created SQL Azure database.

This is the crucial bit. You have to replace SERVER with your server name in both the Server Name box and the Username box (replacing username with your own username in the process). You need to have @SERVER after your username or the connection will fail.


Fill in the rest of your details and hit Connect. Press Next, and at the next screen you’ll be ready to execute the SQL scripts against your SQL Azure database.

And it’s that easy.

All you have to do now is change your connection string from the local DB to the one hosted on SQL Azure.
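
For reference, a SQL Azure connection string has this general shape (the server name and credentials here are placeholders; note the username@SERVER form from the previous step):

<connectionStrings>
  <add name="MyDb"
       connectionString="Server=tcp:SERVER.database.windows.net;Database=mydatabase;User ID=username@SERVER;Password=...;Encrypt=True;"
       providerName="System.Data.SqlClient"/>
</connectionStrings>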

There is one last thing to do. When you first deploy your site and try to run it against SQL Azure, it won’t work. The reason is that you have to set a firewall rule for your SQL Azure database by IP address range. You should receive an error message saying that IP address such-and-such is not authorised to access the database. So you just need to go ahead and set the appropriate rule in the Management Portal. You’ll need to wait a while before those settings take effect.

And you should be good to go.

 

The ASP.Net Membership Database

In the normal course of events, you can set up Visual Studio to run the SQL scripts against whatever database you have specified in your connection string when you publish your project. However, there is a big gotcha: the SQL scripts that ship with Visual Studio will not run against SQL Azure, because SQL Azure supports only a subset of full SQL Server’s features.

Even if you log into your SQL Azure database using SQL Management Studio, you’ll see that your options are limited as to what you can do with the database from within Management Studio. And if you try to run the scripts manually, they still won’t run.

However, Microsoft has published a SQL Azure-friendly set of scripts for ASP.net.

So we have two options: we can run the migration tool again and use it to transfer the ASP.net Membership database’s schema and data, or we can run the scripts against the database.

For the sake of variety, I’ll go through the scripts and run them against the database.

  1. Open SQL Management Studio and log into your SQL Azure Database.
  2. Go File-> Open and navigate to the folder the new scripts are in.
  3. Open InstallCommon.sql and run it. You must run this before running any of the others.
  4. For ASP.net Membership run the scripts for Roles, Personalisation, Profile and Membership.

At this point I need to point out that bad things will happen if you try running your website now, even if your connection string has been changed.

ASP.net will try to create a new .mdf file for the membership database. You get this error:

An error occurred during the execution of the SQL file ‘InstallCommon.sql’. The SQL error number is 5123 and the SqlException message is: CREATE FILE encountered operating system error 5(failed to retrieve text for this error. Reason: 15105) while attempting to open or create the physical file ‘C:\USERS\ROBERTO\DOCUMENTS\VISUAL STUDIO 2010\PROJECTS\FLYINGSHAKESTORE\MVCMUSICSTORE\APP_DATA\ASPNETDB_TMP.MDF’. CREATE DATABASE failed. Some file names listed could not be created. Check related errors. Creating the ASPNETDB_74b63e50f61642dc8316048e24c7e499 database…

Now, the problem behind all this is the machine.config file, where all of these default settings actually reside. Internally, it has a LocalSqlServer connection string, and by default the RoleManager will use it. Here’s what it looks like:

<connectionStrings>
  <add name="LocalSqlServer"
       connectionString="data source=.\SQLEXPRESS;Integrated security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
       providerName="System.Data.SqlClient"/>
</connectionStrings>
<system.web>
  <processModel autoConfig="true"/>
  <httpHandlers/>
  <membership>
    <providers>
      <add name="AspNetSqlMembershipProvider"
           type="System.Web.Security.SqlMembershipProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
           connectionStringName="LocalSqlServer"
           enablePasswordRetrieval="false" enablePasswordReset="true"
           requiresQuestionAndAnswer="true" applicationName="/"
           requiresUniqueEmail="false" passwordFormat="Hashed"
           maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7"
           minRequiredNonalphanumericCharacters="1" passwordAttemptWindow="10"
           passwordStrengthRegularExpression=""/>
    </providers>
  </membership>
  <profile>
    <providers>
      <add name="AspNetSqlProfileProvider" connectionStringName="LocalSqlServer" applicationName="/"
           type="System.Web.Profile.SqlProfileProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </providers>
  </profile>
  <roleManager>
    <providers>
      <add name="AspNetSqlRoleProvider" connectionStringName="LocalSqlServer" applicationName="/"
           type="System.Web.Security.SqlRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
      <add name="AspNetWindowsTokenRoleProvider" applicationName="/"
           type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </providers>
  </roleManager>
</system.web>

So, we have to override those settings in our own web.config file, like so:

<roleManager enabled="true" defaultProvider="AspNetSqlRoleProvider">
  <providers>
    <clear/>
    <add name="AspNetSqlRoleProvider" connectionStringName="..."
         type="System.Web.Security.SqlRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral...."/>
  </providers>
</roleManager>
<membership>
  <providers>
    <clear/>
    <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider...." connectionStringName="..."/>
  </providers>
</membership>
<profile>
  <providers>
    <clear/>
    <add name="AspNetSqlProfileProvider" connectionStringName="..." type="System.Web.Profile.SqlProfileProvider...."/>
  </providers>
</profile>

Now, what we are doing here is simply replacing the connectionStringName attribute in each of these providers with our own connection string name. Before that, however, we put <clear/> in to dump the previous settings defined in the machine.config file and force it to use our modified settings.

That should allow the role manager to use our SQL Azure database instead of trying to attach its own. Things should run perfectly now.

Finally

The Azure Management Portal has a very nice UI for managing and editing your database. You can add and remove tables, columns, rows, etc. It’s really good, and rather welcome. I thought I’d have to script every change and alteration.

WordPress Feature Request: A Universal Video Player

As any of you who have been following my screencasts know, I’ve been using Vimeo to host and share my videos.

However, Vimeo as it is is slightly restrictive: 500MB of uploads a week, and one HD video a week. For screencasts, HD is essential. So I’m limited to one video a week, if that.

What I’d love to do is upload my videos to Windows Azure blobs. Blob storage is completely format agnostic, so I could upload any format I wanted to: it doesn’t have to be Silverlight Adaptive Streaming (or whatever the term is). The CDN capability of blob storage is also very helpful.

Now, nothing is stopping me from moving away from WordPress and customising whatever blogging system I’d run to load videos from Windows Azure blobs. However, that’s a little more trouble than it’s worth right now.

(I’m actually thinking of eventually moving to Windows Azure utilising the extra small instance, but that’s a ways off)

So what I’d love is a video player that you can just point at a URL and it will play the video. It doesn’t have to be Silverlight or Windows Media specific. It could play H.264 files (an iffy proposition with Google yanking H.264 support from Chromium, to be sure).

So in other words, WordPress can provide the player in whatever form it wants to. I just provide the player with the URL to the file to be played.

I would think that the HTML5 spec would make this a fairly trivial undertaking.

In fact, now that I think of it, couldn’t one simply insert a <video> tag in the HTML view of posts?
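
Something as simple as this, pointed at a blob URL, would do (a sketch; the account and file names are made up):

<video src="http://myaccount.blob.core.windows.net/videos/episode8.mp4" controls width="640">
  Your browser doesn’t support the video tag.
</video>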

Windows Azure Feedreader Episode 8

Apologies for the long delay between episodes. I do these in my spare time, and my spare time doesn’t always coincide with a quiet environment to record the screencast in.

But safe to say, I’m super excited to be back and doing these once again.

So, to it!!

This week we follow straight on from what we did in week 7 and modify our backend code to take account of user subscriptions when adding RSS feeds, including feeds from OPML files.

Additionally, we lay the groundwork for next week’s episode, where we will be writing the code that will update the feeds every couple of hours.

And finally, and perhaps most importantly, this week we start using the Windows Azure SDK 1.3. So if you haven’t downloaded and installed it, now is the time.

There are some slight code modifications to take account of some breaking changes in 1.3 that are detailed in Steve Marx’s post on the subject.

Finally, I’ve just realised that the episode contains the wrong changeset information. The correct changeset is 84039. (I’ll be correcting this slight oversight. Perfectionism demands it.)

So, enjoy the show:

Remember, you can head over to vimeo.com to see the show in all its HD glory.

As I said, next week we’ll be writing the update code that will automatically update all RSS feeds being tracked by the application.

Windows Azure Feedreader Episode 7: User Subscriptions

Episode 7 is up. I somehow found time to do it over several days.

There are some audio issues, but they are minor. There is some humming noise for most of the show; I’ve yet to figure out where it came from. Apologies for that.

 

This week we:

  • clear up our HTML Views
  • implement user subscriptions

We aren’t finished with user subscriptions by any means. We still need to modify our OPML handling code to take account of the logged-in user.

 

Enjoy:

Remember, you can head over to vimeo.com to see the show in all its HD glory.

Next week we’ll finish up the user subscriptions code in the OPML handling code.

And we’ll start our update code as well.

Windows Azure Feedreader Episode 6: Eat Your Vegetables

A strange title, no doubt, but I’ll explain in a moment.

Firstly, apologies for the delay. I’ve been busy with some other projects that couldn’t be delayed. And yes, I have been using Windows Azure Storage for that.

I’m doing some interesting work with Tropo, the cloud communications platform. It’s similar to Twilio. So, at some point I’ll do a screencast – my contribution to addressing the lack of documentation for Tropo’s C# library.

This week’s episode had the stated intention of testing the OPML upload and storage routine we wrote back in week 4.

We manage this, reading in the contents of the OPML file, storing the blog metadata in Windows Azure tables and the posts in Blob Storage.

However, in getting there, we have to tackle a number of bugs. Truthfully speaking, a few could have been avoided earlier – such as the fact that UrlPathEncode does not encode the ‘+’ character, so IIS7 freaks out when blob names containing it appear in URLs. Others I had no idea about – like the requirement that blob container and queue names be lowercase only.
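
For what it’s worth, the fixes boil down to something like this (hypothetical helpers; the lowercase rule is Azure’s, the ‘+’ issue is IIS7’s):

// Container and queue names must be all lowercase
static string SafeContainerName(string name)
{
    return name.ToLowerInvariant();
}

// UrlPathEncode leaves '+' untouched, so keep it out of blob names used in URLs
static string SafeBlobName(string name)
{
    return name.Replace('+', '-');
}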

Which brings me to why I’ve named this episode as I have. Dave Winer wrote a brilliant post earlier this week about working with open formats. To quote him:

1. If it hurts when you do it, stop doing it.

2. Shut up and eat your vegetables.

3. Assume people have common sense.

Number 2 says you have to choose between being a person who hangs out on mail lists talking foul about people and their work, or plowing ahead and making software anyway even though you’re dealing with imperfection. If you’re serious about software, at some point you just do what you have to do. You accept that the format won’t do everything anyone could ever want it to do. Realize that’s how they were able to ship it, by deciding they had done enough and not pushing it any further.

So, this episode is about doing exactly that – shutting up about the bugs and getting the software to work.

The fact is that every episode so far has been about laying the foundations upon which we can build. The bugs solved in this episode mean we have fewer problems down the road. We have been immersed in the nitty-gritty details; from here we can build without having to worry whether our feed details are really being read and stored, whether our tables are being created correctly, or whether blobs are being stored in the right containers.

Enjoy (or not) my bug fixing:

Remember to head over to Vimeo.com to see the episode in all its HD glory.

A word about user authentication. While I don’t cover it in the video, I’ll be moving to Google Federated Login over Windows Live ID. So for next week, I’ll have the Windows Live stuff removed, and we’ll be using Google and the Forms Authentication integration that the DotNetOpenAuth library provides.

Next week, the order of business is as follows:

  1. Clean up our view HTML
  2. ViewData doesn’t work in a couple of places – so we need to fix that
  3. Store user subscriptions.
  4. Make a start on our update code

PS. I added a new pre-roll this week as an experiment. Hope you like it.

Core Competencies and Cloud Computing

Wikipedia defines Core Competency as:

Core competencies are particular strengths relative to other organizations in the industry which provide the fundamental basis for the provision of added value. Core competencies are the collective learning in organizations, and involve how to coordinate diverse production skills and integrate multiple streams of technologies. It is communication, an involvement and a deep commitment to working across organizational boundaries.

 

So, what does this have to do with Cloud Computing?

I got thinking about the different providers of cloud computing environments: if you abstract away the specific feature set of each provider, what differences remain to set these providers apart from each other?

Now, I actually started thinking about this backwards. I asked myself why Microsoft Windows Azure couldn’t do a Google App Engine and offer free applications. I had to stop myself there and go off to Wikipedia to remind myself of the quotas that go along with an App Engine free application:

 

Hard limits

  • Apps per developer: 10
  • Time per request: 30 sec
  • Blobstore size (total file size per app): 2 GB
  • HTTP response size: 10 MB
  • Datastore item size: 1 MB
  • Application code size: 150 MB

Free quotas

  • Emails per day: 2,000
  • Bandwidth in per day: 1,000 MB
  • Bandwidth out per day: 1,000 MB
  • CPU time per day: 6.5 hours per day
  • HTTP requests per day: 1,300,000*
  • Datastore API calls per day: 10,000,000*
  • Data stored: 1 GB
  • URLFetch API calls per day: 657,084*

Now, the reason why I even asked this question was the fact that I got whacked with quite a bit of a bill for the original Windows Azure feed reader I wrote earlier this year. That was for my honours-year university project, so I couldn’t really complain. But looking at those quotas from Google, I could have done that project many times over for free.

This got me thinking. Why does Google offer that and not Microsoft? Both of these companies are industry giants, and both have boatloads of CPU cycles.

Now, Google, besides doing its best not to be evil, benefits when you use the web more. And how do they encourage that? They go off and create Google App Engine, and they allow the average dev to write the app they want to write and run it. For free. Seriously, how many websites run on App Engine’s free offering?

Second, Google is a Python shop. Every time someone writes a new library or comes up with a novel approach to something, Google benefits. As Python use increases, some of that code is going to be contributed right back into the Python open source project. Google benefits again. Python development is a Google core competency.

Finally, Google is much maligned for its approach to software development: throw stuff against the wall and see what sticks. By giving the widest possible number of devs space to go crazy, they maximise the number of apps that are going to take off.

So, those are all Google’s core competencies:

  1. Encouraging web use
  2. Python
  3. See what sticks

And those are perfectly reflected in App Engine.

Let’s contrast this with Microsoft.

Microsoft caters to writing line-of-business applications. They don’t mess around. Their core competency, in other words, is other companies’ IT departments. Even when one looks outside the developer side of things, one sees that Microsoft Office and Windows are offered primarily to the enterprise customer. The consumer versions of said products aren’t worth the bits and bytes they take up on disk. Hence, Windows Azure is aimed squarely at companies who can pay for it, rather than enthusiasts.

Secondly, Windows Azure uses the .Net Framework, another uniquely Microsoft core competency, and with it leverages the C# language. Now, it is true that .Net is not limited to Windows, nor is Windows Azure a C#-only affair. However, anything that runs on Windows Azure leverages the CLR and the DLR, two pieces of technology that make .Net tick.

Finally, and somewhat  related, Microsoft has a huge install base of dedicated Visual Studio users. Microsoft has leveraged this by creating a comprehensive suite of Windows Azure Tools.

Hopefully you can see where I’m going with this. Giving stuff away free for enthusiasts to use is not a Microsoft core competency. Even with Visual Studio Express, there are limits – limits clearly defined by what enterprises would need. You need to pay through the nose for those.

So Microsoft’s core competencies are:

  1. Line of Business devs
  2. .Net, C# and the CLR/DLR
  3. Visual Studio

Now, back to what started this thought exercise – Google App Engine’s free offering. As you can see, it’s a uniquely Google core competency, not a Microsoft one.

Now, what core competencies does Amazon display in Amazon Web Services?

Quite simply, Amazon doesn’t care who you are or what you want to do; they will provide you with a solid service at a very affordable price and sell you all the extra services you can handle. Amazon does the same thing with everything else, so why not cloud computing? Actually, AWS is brilliantly cheap. Really. This is Amazon’s one great core competency, and they excel at it.

So, back to what started this thought exercise – a free option. Because of its core competencies, Google is uniquely positioned to offer one. And thinking about it this way, the reasons for Microsoft’s and Amazon’s lack of a similar offering become obvious.

Also, remember that I mentioned the cost of Windows Azure earlier.

Google App Engine and its free option mean that university lecturers are choosing to teach their classes using Python and App Engine rather than C# and Windows Azure.

Remember what a core competency is. Wikipedia defines Core Competency as:

Core competencies are particular strengths relative to other organizations in the industry which provide the fundamental basis for the provision of added value. Core competencies are the collective learning in organizations, and involve how to coordinate diverse production skills and integrate multiple streams of technologies. It is communication, an involvement and a deep commitment to working across organizational boundaries.

I guess the question is: which offering makes the most of its parent company’s core competencies? And is this a good thing?

Windows Azure Feedreader Episode 5: User Authentication

Firstly, apologies for being late with  this episode. Real life presented some challenges over the weekend.

This week’s episode focuses on the choice of user authentication systems. As I mentioned last week, there is a choice to be made between Google Federated Login and Windows Live ID.

So, this week, I implement both systems.

It should be noted that I only do the basics of Google Federated Login – that is, only the OpenID part of the process. We’ll leave OAuth till later.

If you read my earlier post, I was still deliberating over which to use. Having actually worked with the DotNetOpenAuth library in an application-centric manner, it does seem to be more appealing. Because it integrates nicely with Forms authentication, it lends itself to MVC. Also, because of this, having dual login systems isn’t going to be possible, so we have to choose one of them.
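
To give a flavour of it, the OpenID leg with DotNetOpenAuth looks roughly like this (a sketch from memory, not production code; in MVC you’d return the redirect as an ActionResult rather than calling RedirectToProvider directly):

var openid = new OpenIdRelyingParty();
var response = openid.GetResponse();
if (response == null)
{
    // Leg 1: send the user off to Google's OpenID endpoint
    openid.CreateRequest("https://www.google.com/accounts/o8/id").RedirectToProvider();
}
else if (response.Status == AuthenticationStatus.Authenticated)
{
    // Leg 2: Google has vouched for the user; hand over to Forms authentication
    FormsAuthentication.SetAuthCookie(response.ClaimedIdentifier.ToString(), false);
}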

So next week, we’ll be removing the code for the loser. As I said above, I’m leaning toward Google Federated Login.

So this week we cover both implementations.

Here’s the show:

And remember, you’ll have to go to vimeo.com to see the full HD version.

I had planned to test the OPML code that we wrote last week instead, but we’ll do that in next week’s episode. As a bonus, we can do it properly, with full integration with user information.

About next week’s episode: I’m busy all weekend, so the next episode may or may not appear on time next Tuesday.