LINQ + Google Charts + MVC : Pie Chart

A couple of weeks ago, I pointed out a LINQ snippet that was running against the Google Charts API.

This week, I’m back with my implementation of the Pie Chart.

I’ve started with the Pie chart because it’s relatively simple and will lay the foundation for dealing with the complexities of the other charts.

Building Blocks

Since this is MVC, I created a new ChartsService class in the Services namespace of the project.

If you look at the snippets in the original ACM article, you’ll notice an extension method called “SeparatedBy”. So our first task is to create this extension method.

There are, of course, a number of ways to concatenate a list of strings with a separator. This being an exercise in LINQ, we are going to use the LINQ Aggregate method.

   
public static string SeparatedBy(this List<string> list, string separator)
{
    return list.Aggregate((current, next) => current + separator + next);
}

 

I’m sure that you’ll agree with me when I say that it’s a nice and clean approach to what could be a messy block of code.

However, that extension method will only concatenate a list of strings. Why is this a problem? Because we are going to want to concatenate lists of int values as well, so that extension method will just not do. We could do some fancy work with generics, but the simplest thing to do is to supply an overloaded method that accepts lists of type int.

public static string SeparatedBy(this List<int> list, string separator)
{
    List<string> thelist = list.ConvertAll<string>(x => x.ToString());

    return thelist.Aggregate((current, next) => current + separator + next);
}

There is one additional difference between this and our original method – the call to ConvertAll. Rather than have a foreach loop that does the conversion, we simply supply an inline lambda function that gets executed against each item in the list and returns a list of the desired type. Again, very clean.
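As an aside, the generic approach hinted at above could collapse both overloads into one method. This is just an illustrative sketch (it needs System.Linq and is not part of the project’s code):

public static string SeparatedBy<T>(this IEnumerable<T> list, string separator)
{
    // Convert each element to its string form, then fold the results together with the separator.
    return list.Select(item => item.ToString())
               .Aggregate((current, next) => current + separator + next);
}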

 

So, armed with these extension methods, we can now declare our classes.

Data

Google Charts offers a wide range of functionality and many different kinds of charts. Each chart has a host of differing options and settings available to it. So when creating classes to represent these chart kinds, we have to bear in mind that there will be unique functionality not common to the other charts.

Charts are logically made up of a number of smaller components: bar charts have columns, pie charts have slices, and so on and so forth. So we’ll represent these first.


public class Slice
{
    public int Value { get; private set; }
    public string Legend { get; private set; }

    public Slice(int value, string legend)
    {
        this.Value = value;
        this.Legend = legend;
    }
}

Let’s look at the Pie class itself now.

Pie Class


public List<Slice> Slices { get; set; }
public string Title { get; private set; }
public Double Height { get; private set; }
public Double Width { get; private set; }

public Pie(string title, double height, double width)
{
    this.Slices = new List<Slice>();
    this.Title = title;
    this.Height = height;
    this.Width = width;
}

We start by declaring a number of properties and a constructor. In the constructor we initialize our list of Slices. This is where we see a departure from the snippets of the ACM article: we do not pass the slices into the constructor. Of course, this is an issue of style over substance. There is no reason why we could not have generated the slices before creating the chart and then passed them in.


public string Compile()
{
    var tt = Title.Split(' ').ToList().SeparatedBy("+");
    var p = new List<string>(){
        this.Slices.Select(slice => slice.Legend).ToList().SeparatedBy("|"),
        this.Slices.Select(slice => slice.Value).ToList().SeparatedBy(",")};

    return string.Format(@"http://chart.googleapis.com/chart?cht=p&chtt={0}&chs={3}x{4}&chl={1}&chd=t:{2}", tt, p.ElementAt(0), p.ElementAt(1), this.Width, this.Height);
}

This is where all the important stuff happens.

Line 3 properly formats the title by putting a + sign to signify spaces between words. Note that we are using the extension method we wrote earlier.

Line 4 creates a new list of strings, each string being the comma- or pipe-delimited list of values and legends. Using a List gives us greater flexibility later on, when our implementation will handle multiple data series and legends.

Line 8 uses String.Format to merge all this data into a URL that we can use to call the Google Charts API. For an explanation of what each of these parameters means, see the Google Charts API Parameter List.
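To make that concrete, a two-slice pie titled “This is my chart” at 200x200 would compile to a URL along these lines (the slice names and values here are purely illustrative):

http://chart.googleapis.com/chart?cht=p&chtt=This+is+my+chart&chs=200x200&chl=Apples|Oranges&chd=t:25,75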

 

ViewModel

Now, this being MVC, the way to display stuff is by using a ViewModel. So let’s create one:


public class MonthlyReportsViewModel
{
    public MonthlyReports Details { get; set; }
    public Pie Chart { get; set; }
}

The one property we are interested in here is the Chart Property.

Now that we have our ViewModel, we have to populate it. So, in our ActionResult method:


Pie chart = new Pie("This is my chart", 200, 200);
chart.Slices.Add(new Slice(25, "Slice 1"));
chart.Slices.Add(new Slice(25, "Slice 2"));
chart.Slices.Add(new Slice(25, "Slice 3"));
chart.Slices.Add(new Slice(25, "Slice 4"));

var model = new MonthlyReportsViewModel();
model.Chart = chart;
return View(model);

 

View

In our View itself, we’re going to have to render out the chart. Since the call to Google Charts API will return an image, we can simply do the following:


<img src="@Model.Chart.Compile()" alt="" />

 

What one could do is put the actual rendering code in a helper method and call the helper method from your view, like so:


@GoogleCharts.PieChart(@Model.Chart)

That, of course, further abstracts the code. It does have the advantage of being much cleaner and easier to work with.
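As a sketch of what such a helper might look like – assuming Razor declarative helpers, with a GoogleCharts.cshtml file under App_Code (the file and names are illustrative, not part of the project):

@helper PieChart(Pie chart)
{
    <img src="@chart.Compile()" alt="" />
}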

 

Conclusion

As you can see, using LINQ to abstract away the complexity of what your code is actually doing is not just the province of database code. One thing I’ve enjoyed about working with LINQ is how the code always comes out looking fresh, clean and crisp. Having worked extensively with LINQ and lambda expressions, using foreach loops to process lists looks so much messier.

Next time, we’ll take a look at the somewhat more complicated Bar Chart. I’ll not cover every single possible piece of functionality, but I’ll cover the basics. All I want to show is how the foundation laid down today can easily translate over. My current implementation of bar charts is sufficient only for the limited functionality the app needs and nothing more. 

At some point in the future, I’d also like to implement Line charts.

 

Postscript

I must say that, apart from working with LINQ, it’s been a very satisfying experience for me to implement a C# version of a web API.

There is GoogleChartsSharp on Codeplex that implements a whole lot more of the functionality of Google Charts. I did indeed use it for a while before implementing my own.

So it’s been a satisfying experience for me to implement an API that allows me to work the way I want to work. Not only did I write something simpler and easier to work with, but I dropped a dependency in the process, and that made me happier than I think it’s safe to admit.

Writing against something requires you, the writer, to pay extra attention to the small details. It requires you to think about the relationship between your code, the web API calls and the documentation that supports it. When one uses an already-baked implementation such as GoogleChartSharp, it’s like working with a giant black box: you have no idea what goes on inside, and you really don’t want to know the finer details. But writing against the API yourself, you create a white box. And you HAVE to understand those finer details.

So while the LINQ is nothing special in and of itself, nothing earth-shattering or ground-breaking, it is the experience and the satisfaction gained from it that makes this a worthwhile post to write.

WCF Chat Update: Long Polling

Updating WCF chat continues slowly but surely. I have not made any commits yet, so you’ll have to wait to see the changes.

In addition to the changes to Authentication, there are changes to the callback mechanism that I originally wrote.

 

When I originally wrote the chat application, the callback was one of the first pieces of code that I wrote. The fact of the matter is, the only requirement for it was to work across the local network (or even simply between instances on the local machine). So when I wrote the Cloud version, suddenly callbacks had to work across NAT to let the application function across the internet.

 

Now, there are a number of possible design patterns that would allow the server to execute a callback on a remote client. 

 

The first is, indeed, the design pattern we use at the moment, where we actually have a callback. The enabler for this is found in WCF: the WSDualHttpBinding allows for dual HTTP connections – one in each direction. This allows you to invoke an operation on the client. However, in order for this to work, you need an instance of the WCF service per client session. Every new client session will spawn a new instance of the server. This means that you are going to end up with dozens (or hundreds, etc.) of long-lived instances. The question here is scalability. Is this scalable?
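For context, the duplex approach looks roughly like this in WCF. This is only a sketch with hypothetical contract names, not the chat application’s actual contracts:

[ServiceContract(CallbackContract = typeof(IChatCallback))]
public interface IChatService
{
    [OperationContract]
    void Subscribe(string userName);
}

public interface IChatCallback
{
    // Invoked by the server, over the second HTTP connection, on the client.
    [OperationContract(IsOneWay = true)]
    void NotifyNewMessage(string message);
}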

It might seem somewhat arrogant to talk about scalability, but if you design with scalability in mind, you’re not going to end up dealing with it later.

 

The other is something that, while not new, really hit the big time after FriendFeed released its Tornado server. Tornado supports long-polling HTTP connections. Long polling is not in and of itself new. The basic design pattern involves a client making a call (HTTP or otherwise) to a server. The server receives the connection and keeps it open until it has something to return. Some long-polled connections eventually time out, and this is the implicit signal for the client to immediately open another connection. Others, such as the FriendFeed implementation, keep them open indefinitely.
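From the client’s point of view, the pattern boils down to something like the following sketch (the proxy, message type and method names are hypothetical, not from the chat codebase):

while (keepPolling)
{
    try
    {
        // The server holds this call open until it has new messages to return.
        ChatMessage[] updates = proxy.GetUpdates(userName);
        DisplayMessages(updates);
    }
    catch (TimeoutException)
    {
        // Nothing arrived before the channel timed out; immediately poll again.
    }
}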

 

There are probably more that you can think of, but these are the two that I considered for the chat application. As with most things, the choice is between a push model (where notifications are pushed to the client) and a pull model (where information, notifications or otherwise, is pulled from the server by the client).

 

The fact of the matter is that I like both. Both are cool. And both are supported intrinsically by WCF – no coding voodoo to make things work.

Of the two, I’ve begun implementing the long-polling method. Although it’s a radical departure, it will allow the overall design of the server side to remain the same. The WCF server remains a single-instance service, and so the implementation remains the same.

The WCF stack is written in such a way that when you mark an OperationContract as using the async pattern, WCF will start the async operation and then go off and handle other requests until that method returns. The End method that receives the results then returns the data to the client. In other words, it’s non-blocking.
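The shape of such a contract is roughly as follows – a sketch with hypothetical names, showing the Begin/End pair that the AsyncPattern flag ties together:

[ServiceContract]
public interface IUpdateService
{
    // WCF treats these two methods as one logical, non-blocking operation.
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetUpdates(string userName, AsyncCallback callback, object state);

    // Called when the operation completes; its return value is what the client receives.
    string[] EndGetUpdates(IAsyncResult result);
}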

I’ve not sorted out the exact specifics of the implementation, but there will be changes to both the server and the client to accommodate this. In saying that, I’m doing a lot of simplification to the class structure. So hopefully what emerges from all these changes will be better than the current setup. Even I had to go back and follow the inheritance tree to figure things out.

These changes are happening in parallel with the changes to the authentication scheme.

 

So, while I’ve got no code, I leave you with this MSDN blog post on Async programming with WCF and this post that adapts it to long-polling specifically.

WCF Server Chat Update:

I just left this reply to a comment on my last post on this project:

Hi there,

No, I haven’t been able to make any changes since my last commit.

However, I have been taking a look at it over the past week, since there are issues with it. As you say, authentication and authorisation are among them.

I’ve been looking at the possibility of using Forms authentication. This brings ASP.Net Membership, Profiles and Roles to the table and these can be used.

This of course requires a SQL server as a back end. And requires the use of SSL.

This requires significant changes to the code base, to move from the current storage model (XML in the case of the Windows service, and Azure Tables in the case of the Windows Azure role). And since authorisation and authentication are now handled separately, those WCF method calls that currently handle this aren’t required.

Also, since the codebase is effectively two separate projects, these changes need to be made twice.

The client will also need to be changed.

These changes do make a lot of sense and I’m well along in implementing them. So look out for a post soon on them.

 

I thought I’d let everyone know that this is the direction I’m taking things in.

 

Life is busy, which is why I haven’t been able to update things as much as I’d like.

There is one other particular problem with the WCF Server chat that I would have liked to have solved in my last commit.

The issue involves the server pinging other clients across NAT. Of course, this could be solved by moving to a pull model rather than a push notification model. While that would be easy, I want to take a good long look at getting push to work properly before trying any other models.

Push notification is where it’s at. So if anyone has any pointers to implementing it, please send them my way.

Windows Azure Block Blobs

In Windows Azure Blob Storage, not all blobs are created equal. Windows Azure has the notion of Page Blobs and Block Blobs. Each of these distinct blob types aims to solve a slightly different problem, and it’s important to understand the difference.

To quote the documentation:

  • Block blobs, which are optimized for streaming.
  • Page blobs, which are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob.

About Block Blobs

Block blobs are comprised of blocks, each of which is identified by a block ID. You create or modify a block blob by uploading a set of blocks and committing them by their block IDs. If you are uploading a block blob that is no more than 64 MB in size, you can also upload it in its entirety with a single Put Blob operation.

When you upload a block to Windows Azure using the Put Block operation, it is associated with the specified block blob, but it does not become part of the blob until you call the Put Block List operation and include the block’s ID. The block remains in an uncommitted state until it is specifically committed. Writing to a block blob is thus always a two-step process.

Each block can be a maximum of 4 MB in size. The maximum size for a block blob in version 2009-09-19 is 200 GB, or up to 50,000 blocks.

About Page Blobs

Page blobs are a collection of pages. A page is a range of data that is identified by its offset from the start of the blob.

To create a page blob, you initialize the page blob by calling Put Blob and specifying its maximum size. To add content to or update a page blob, you call the Put Page operation to modify a page or range of pages by specifying an offset and range. All pages must align to 512-byte page boundaries.

Unlike writes to block blobs, writes to page blobs happen in-place and are immediately committed to the blob.

The maximum size for a page blob is 1 TB. A page written to a page blob may be up to 1 TB in size.

So, before we determine what blob type we’re going to use, we need to determine what we’re using this particular blob for in the first place.

You’ll notice the above extract is quite clear about what to use block blobs for: streaming video. In other words, anything that we don’t need random I/O access to. On the other hand, page blobs have a 512-byte page boundary that makes them perfect for random I/O access.

And yes, it’s conceivably possible for you to need to host something like streaming video as a page blob. When you think about this stuff too much, you end up imagining situations where that might be necessary. So, these would be situations where you are directly editing or reading very select portions of a file. If you’re editing video, who wants to read in an entire 4 MB block for one frame of video? You might laugh at the idea of actually needing to do this, but consider that the Rough Cut Editor is web based and works primarily with web-based files. If you had to run that using Blob storage as a backend, you’d need to use page blobs to fully realise the RCE’s functionality.

So, enough day-dreaming. Time to move on.

Some groundwork

Now, in our block blob, each individual block can be a maximum of 4 MB in size. Assuming we’re doing streaming video, a single 4 MB block is not going to cut it.

The Azure API provides the CloudBlockBlob class with several helper methods for managing our blocks. The methods we are interested in are:

  • PutBlock()
  • PutBlockList()

The PutBlock method takes a base-64 encoded string for the block ID, a stream object with the binary data for the block and an (optional) MD5 hash of the contents. It’s important to note that the ID string MUST be base-64 encoded or else Windows Azure will not accept the block. For the MD5 hash, you can simply pass in null. This method should be called for each and every block that makes up your data stream.

PutBlockList is the final method that needs to be called. It takes a List<string> containing the ID of every block that you want to be part of this blob. Calling this method commits all the blocks contained in the list. This means, then, that you could land up in a situation where you’ve called PutBlock but not included the ID when you called PutBlockList: you then end up with an incomplete and corrupted file. You have a week to commit uploaded blocks, though, so all is not lost if you know which blocks are missing. You simply call PutBlockList again with the IDs of the missing blocks included.

There are a number of reasons why this is a smart approach. Normally, I fall on the side of developer independence, the dev being free to do things as he likes without being hemmed in. In this case, by being forced to upload data in small chunks, we realise a number of practical benefits. The big one is recovery from bad uploads – customers hate having to restart gigabyte-sized uploads from scratch.

Here be Dragons

The following example probably isn’t the best. I’m pretty sure someone will refactor and post a better algorithm.

Now there are a couple of things to note here. One being that I want to illustrate what happens at a lower level of abstraction than we usually work at, so that means no StreamReaders – we’ll read the underlying bytes directly.

Secondly, not all Streams have the same capabilities. It’s perfectly possible to come across a Stream object where you can’t seek, or determine the length of the stream. So this is written to handle any data stream you can throw at it.

With that out of the way, let’s start with some Windows Azure setup code.

StorageCredentialsAccountAndKey key = new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount acc = new CloudStorageAccount(key, true);

CloudBlobClient blobclient = acc.CreateCloudBlobClient();
CloudBlobContainer Videocontainer = blobclient.GetContainerReference("videos");
Videocontainer.CreateIfNotExist();

CloudBlockBlob blob = Videocontainer.GetBlockBlobReference("myblockblob");

Note how we’re using the CloudBlockBlob rather than the CloudBlob class.

In this example we’ll need our data to be read into a byte array right from the start. While I’m using data from a file here, the actual source doesn’t matter.

byte[] data = File.ReadAllBytes("videopath.mp4");

Now, to move data from our byte array into individual blocks, we need a few variables to help us.

int id = 0;
int byteslength = data.Length;
int bytesread = 0;
int index = 0;
List<string> blocklist = new List<string>();
  • id will store a sequential number indicating the ID of the block
  • byteslength is the length, in bytes, of our byte array
  • bytesread keeps a running total of how many bytes we’ve already read and uploaded
  • index is a copy of bytesread used to do some interim calculations in the body of the loop (it will probably end up refactored out anyway)
  • blocklist holds all our base-64 encoded block IDs

Now, on to the body of the algorithm. We’re using a do loop here since this loop will always run at least once (assuming, for the sake of example, that all files are larger than our 1 MB block boundary).

do
            {
                byte[] buffer = new byte[1048576];
                int limit = index + 1048576;
                for (int loops = 0; index < limit; index++)
                {
                    buffer[loops] = data[index];
                    loops++;
                }

The idea (that of using a do loop) here being to loop over our data array until less than 1MB remains.

Note how we’re using a separate byte array to copy data into. This is the block data that we’ll pass to PutBlock. Since we’re not using StreamReaders, we have to do the copy byte by byte as we go along.

It is this bit of code that would be abstracted away were we using StreamReaders (or, more properly for this application, BinaryReaders).
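For comparison, here is a rough sketch of what that copy collapses to with a BinaryReader, assuming the source data were exposed as a Stream called stream rather than a byte array:

using (var reader = new BinaryReader(stream))
{
    // ReadBytes returns up to 1 MB; on the last read it simply returns whatever is left.
    byte[] buffer = reader.ReadBytes(1048576);
    // ... hand buffer to PutBlock exactly as before ...
}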

Now, this is the important bit:

                 bytesread = index;
                string blockIdBase64 = Convert.ToBase64String(System.BitConverter.GetBytes(id)); //1

                blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), null); //2

                blocklist.Add(blockIdBase64);
                id++;
            } while (byteslength - bytesread > 1048576);

There are three things to note in the above code. Firstly, we’re taking the block ID and base-64 encoding it properly.

And secondly, note the call to PutBlock. We’ve wrapped the second byte array containing just our block data in a MemoryStream object (since that’s what the PutBlock method expects) and we’ve passed in null rather than an MD5 hash of our block data.

Finally, note how we add the block ID to our blocklist variable. This will ensure that the call to PutBlockList includes the IDs of all of our uploaded blocks.

So, by the time this do loop finally exits, we should be in a position to upload our final block. This final block will almost certainly be less than 1 MB in size (barring the usual edge-case caveats). Since this final block is less than 1 MB, our code will need a final change to cope with it.

            int final = byteslength - bytesread;
            byte[] finalbuffer = new byte[final];
            for (int loops = 0; index < byteslength; index++)
            {
                finalbuffer[loops] = data[index];
                loops++;
            }
            string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(id));
            blob.PutBlock(blockId, new MemoryStream(finalbuffer, true), null);
            blocklist.Add(blockId);

Finally, we make our call to PutBlockList, passing in our list of IDs (in this example, the blocklist variable).

blob.PutBlockList(blocklist);

All our blocks are now committed. If you have the latest Windows Azure SDK (and I assume you do), the Server Explorer should allow you to see all your blobs and get their direct URLs. You can download the blob directly in the Server Explorer, or copy and paste the URL into your browser of choice.

Wrap up

Basically, what we’ve covered in this example is a quick way of breaking down any binary data stream into individual blocks conforming to Windows Azure Blob storage requirements, and uploading those blocks to Windows Azure. The neat thing here is that by using this method, not only does the (optional) MD5 hash let Windows Azure check data integrity for you, but the block IDs let Windows Azure take care of putting the data back together in the correct sequence.

Now, when I refactor this code for actual production, a couple of things are going to be different. I’ll do the MD5 hash. I’ll upload blocks in parallel to take maximum advantage of upload bandwidth (this being the UK, there’s not much upload bandwidth, but I’ll take all I can get). And obviously, I’ll use the full capability of stream readers to do the dirty work for me.
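As a rough sketch of what those two changes might look like – assuming the same blob and data variables as above, and .NET 4’s Parallel.ForEach – something along these lines would do:

// Break the data into 1 MB chunks up front, each paired with its base-64 encoded block ID.
var blocks = new List<KeyValuePair<string, byte[]>>();
for (int i = 0, offset = 0; offset < data.Length; i++, offset += 1048576)
{
    int size = Math.Min(1048576, data.Length - offset);
    byte[] chunk = new byte[size];
    Array.Copy(data, offset, chunk, 0, size);
    blocks.Add(new KeyValuePair<string, byte[]>(
        Convert.ToBase64String(BitConverter.GetBytes(i)), chunk));
}

// Upload the blocks in parallel, letting Windows Azure verify each one against its MD5 hash.
Parallel.ForEach(blocks, block =>
{
    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
        string hash = Convert.ToBase64String(md5.ComputeHash(block.Value));
        blob.PutBlock(block.Key, new MemoryStream(block.Value), hash);
    }
});

// Commit the blocks in their original order.
blob.PutBlockList(blocks.Select(b => b.Key));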

Here’s the full code:

StorageCredentialsAccountAndKey key = new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount acc = new CloudStorageAccount(key, true);

CloudBlobClient blobclient = acc.CreateCloudBlobClient();
CloudBlobContainer Videocontainer = blobclient.GetContainerReference("videos");
Videocontainer.CreateIfNotExist();

CloudBlockBlob blob = Videocontainer.GetBlockBlobReference("myblockblob");

byte[] data = File.ReadAllBytes("videopath.mp4");

int id = 0;
int byteslength = data.Length;
int bytesread = 0;
int index = 0;
List<string> blocklist = new List<string>();

do
            {
                byte[] buffer = new byte[1048576];
                int limit = index + 1048576;
                for (int loops = 0; index < limit; index++)
                {
                    buffer[loops] = data[index];
                    loops++;
                }
                bytesread = index;
                string blockIdBase64 = Convert.ToBase64String(System.BitConverter.GetBytes(id));

                blob.PutBlock(blockIdBase64, new MemoryStream(buffer, true), null);

                blocklist.Add(blockIdBase64);
                id++;
            } while (byteslength - bytesread > 1048576);

            int final = byteslength - bytesread;
            byte[] finalbuffer = new byte[final];
            for (int loops = 0; index < byteslength; index++)
            {
                finalbuffer[loops] = data[index];
                loops++;
            }
            string blockId = Convert.ToBase64String(System.BitConverter.GetBytes(id));
            blob.PutBlock(blockId, new MemoryStream(finalbuffer, true), null);
            blocklist.Add(blockId);
            blob.PutBlockList(blocklist);

Holiday Reading iList

While I don’t usually do this before going on holiday, this time I’m not taking any dead tree books with me at all.

Rather, I’m taking my trusty iPad with iBooks and Kindle for iPad installed. Since we’re flying Ryanair, with their stickiness for baggage weights and sizes, the weight saved has been substantial. Usually I take a couple of paperbacks and a hardcover or two, so my bag’s a lot lighter this time around.

So, that reading list again, split up between iBooks and Kindle.

iBooks:

1. The Void Trilogy – Peter F Hamilton
2. Pandora’s Star – Peter F Hamilton
3. Judas Unchained – Peter F Hamilton
4. Servants of the People – Andrew Rawnsley
5. Life and Death of the Party – Andrew Rawnsley
6. Red November – W. Craig Reed
7. The Hobbit – J.R.R Tolkien
8. Paypal API’s Up and Running – Micheal Balderas

Kindle

1. The Design of Everyday Things – Don Norman
2. Dreaming in Code – Scott Rosenberg

I think that’ll keep me busy for a week 🙂

As you can see the above list is heavily biased towards iBooks. The ability to buy a book off iBooks without even thinking about it is the probable reason. Amazon Kindle gives you too much pause for thought.

About the Paypal API book: yes, I’m sad. I do have a tendency to program while on holiday. If you’ll recall, I did some major re-architecting of my Client Server Chat project while I was in Spain in December. So goodness knows what I might do this time around.

I hear the Design of Everyday Things is a seminal work and every designer should read it. Jeff Atwood of the Coding Horror blog (and Stackoverflow, StackExchange etc) highly recommends it.

Dreaming in Code is the most readable book about programmers and programming I’ve ever read. Though I must say reading it elicits the same reaction as watching Dennis Nedry screw up Jurassic Park’s computer systems: a long, loud cry of NOOOOOOOOOOOOOOOOOOOO!

Slightly changing the subject away from books, I got the Camera Connection Kit for my iPad. It works like a Swiss watch. If I could get Visual Studio on my iPad, I’d leave the laptop at home.

I’ll see you all in a week (and a bit – one always needs a holiday from the holiday when you get back).

Sitemaps in ASP.Net MVC: Icing on the Cake

This is short, simple and sweet (forgive the pun). The reason why I say that is that you have two options when doing a sitemap in MVC (actually, you have more, but whatever).

The first is using a library. There’s an MVC sitemap provider on Codeplex that you can download and install. It involves some XML configuration and attributes on all the actions you want to include in your sitemap.

The fact is, I don’t have time to fiddle around with configurations. I just want a simple sitemap file with a handful of products, categories and one or two other links. If the site was larger and more complex I might consider it.

So, we come to the second, DIY way. Now, this is not entirely my idea. I just repurposed it to pull the correct URL parameters out of the db. The original code is found on Codeplex.

Firstly, we have to register a new route to www.example.com/sitemap.xml. Go to Global.asax and put the following in your RegisterRoutes() method. I put mine after the call to IgnoreRoute.

routes.MapRoute("Sitemap", "sitemap.xml", new { controller = "Home", action = "Sitemap", id = UrlParameter.Optional });

Now, you can use any default routing you want with this. As you can see above, the route is pointing to the Sitemap action of the Home controller.

Then we have to actually populate our Action with some code.

  protected string GetUrl(object routeValues)
        {
            RouteValueDictionary values = new RouteValueDictionary(routeValues);
            RequestContext context = new RequestContext(HttpContext, RouteData);

            string url = RouteTable.Routes.GetVirtualPath(context, values).VirtualPath;

            return new Uri(Request.Url, url).AbsoluteUri;
        }
        [OutputCache (Duration=3600)]
        public ContentResult Sitemap()
        {
            var categories = storeDB.Categories.Include("Products").Where(g => g.Id != 8); //some filtering of categories
            XNamespace xmlns = "http://www.sitemaps.org/schemas/sitemap/0.9";
            XElement root = new XElement(xmlns + "urlset");

            List<string> urlList = new List<string>();
            urlList.Add(GetUrl(new { controller = "Home", action = "Index" }));
            urlList.Add(GetUrl(new { controller = "Home", action = "Terms" }));
            urlList.Add(GetUrl(new { controller = "Home", action = "ShippingFAQ" }));
            urlList.Add(GetUrl(new { controller = "Home", action = "Testimonials" }));
            foreach (var item in categories)
            {
                urlList.Add(string.Format("{0}?{1}={2}",GetUrl(new { controller = "Store", action = "BrowseProducts"}),"category",item.Name));

                foreach (var product in item.Products)
                {
                    urlList.Add(string.Format("{0}/{1}", GetUrl(new { controller = "Store", action = "ProductDetails" }), product.Id));
                }
            }

            foreach (var item in urlList)
            {
                root.Add(
                new XElement("url", 
                new XElement("loc", item), 
                new XElement("changefreq", "daily")));
            }

            using (MemoryStream ms = new MemoryStream())
            {
                using (StreamWriter writer = new StreamWriter(ms, Encoding.UTF8))
                {
                    root.Save(writer);
                }

                return Content(Encoding.UTF8.GetString(ms.ToArray()), "text/xml", Encoding.UTF8);
            }
        }

Essentially, we’re just outputting an XML file with the correct format and structure. This gives us a file that looks like:

<?xml version="1.0" encoding="utf-8"?>

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">

  <url xmlns="">

    <loc>http://localhost:26641/</loc>

    <changefreq>daily</changefreq>

  </url>

  <url xmlns="">

    <loc>http://localhost:26641/Home/Terms</loc>

    <changefreq>daily</changefreq>

  </url>

  <url xmlns="">

    <loc>http://localhost:26641/Home/ShippingFAQ</loc>

    <changefreq>daily</changefreq>

  </url>

You get the idea.

The above code is using Entity Framework 4.1, so you can replace the line that declares “var categories” with whatever data source you have. And you’ll have to reformat the URL strings to conform to your parameter format.

Now I’m not suggesting this for any large MVC deployment. The code could get rather messy.

 

But for something simple, it works like a dream.

Deploying your Database to SQL Azure (and using ASP.Net Membership with it)

It’s been quite quiet around here on the blog. And the reason for that is the fact that I got asked by a Herbalife distributor to put a little e-commerce site together (it’s called Flying Shakes). So it’s been a very busy few weeks here, and hopefully as things settle down, we can get back to business as usual. I’ve badly neglected the blog and the screencast series.

I have a few instructive posts to write about this whole experience, as it presented a few unique challenges.

Now, I’m starting at the back end here, since deploying the database to SQL Azure was the difficult part of deployment. The reason for this is mainly due to the ASP.Net Membership database.

But we’ll start from the beginning. Now I’m assuming here that your database is complete and ready for deployment.

Step 0: Sign up for Windows Azure (if you haven’t already) and provision a new database. Take note of the server’s fully qualified DNS address and remember your username and password. You’ll need them in a bit.

Step 1: Attach your database to your local SQL Server. Use SQL Server Management Studio to do that.

At this point we have our two databases, and we need to transfer the schema and data from one to the other. To do that, we’ll use a helpful little Codeplex project called SQL Azure Migration Wizard. Download it and unzip the files.

Run the exe. I chose Analyse and Migrate.


 

Enter your local SQL Server details.


 


 

Hit Next until it asks if you’re ready to generate the SQL scripts. This is the screen you get after it has analysed the database and compiled the scripts.


Now you get the second login screen, which connects you to your newly created SQL Azure database.

This is the crucial bit. You have to replace SERVER with your server name in both the server name box and the Username box, replacing username with your username in the process. You need to have @SERVER after your username or the connection will fail.


Fill in the rest of your details and hit Connect. Press Next, and at the next screen you’ll be ready to execute the SQL scripts against your SQL Azure database.

And it’s that easy.

All you have to do now is go ahead and change your connection string from the local DB to the one hosted on SQL Azure.
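For illustration, a SQL Azure connection string takes roughly this form (server, database and credentials here are placeholders):

Server=tcp:YOURSERVER.database.windows.net;Database=YourDatabase;User ID=YourUser@YOURSERVER;Password=YourPassword;Trusted_Connection=False;Encrypt=True;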

There is one last thing to do. When you first deploy your site and try to run it against SQL Azure, it won’t work. The reason is that you have to set a firewall rule for your SQL Azure database by IP address range. You should receive an error message saying that IP address such-and-such is not authorised to access the database. So you just need to go ahead and set the appropriate rule in the Management Portal. You’ll need to wait a while before those settings take effect.

And you should be good to go.

 

The ASP.Net Membership Database

In the normal course of events, you can set up Visual Studio to run the SQL scripts against whatever database you have specified in your connection string when you publish your project. However, there is a big gotcha: the SQL scripts that ship with Visual Studio will not run against SQL Azure. This is because SQL Azure is restricted.

Even if you log into your SQL Azure database using SQL Management Studio, you’ll see that your options are limited as to what you can do with the database from within SQL Management Studio. And if you try to run the scripts manually, they still won’t run.

However, Microsoft has published a SQL Azure-friendly set of scripts for ASP.NET.

So we have two options: we can run the migration tool again, and use the ASP.NET Membership database to transfer over schema and data. Or we can run the scripts against the database.

For the sake of variety, I’ll go through the scripts and run them against the database.

  1. Open SQL Management Studio and log into your SQL Azure database.
  2. Go to File -> Open and navigate to the folder the new scripts are in.
  3. Open InstallCommon.sql and run it. You must run this before running any of the others.
  4. For ASP.NET Membership, run the scripts for Roles, Personalisation, Profile and Membership.

At this point I need to point out that bad things will happen if you try running your website now, even if your connection string has been changed.

ASP.net will try and create a new mdf file for the membership database. You get this error:

An error occurred during the execution of the SQL file ‘InstallCommon.sql’. The SQL error number is 5123 and the SqlException message is: CREATE FILE encountered operating system error 5(failed to retrieve text for this error. Reason: 15105) while attempting to open or create the physical file ‘C:\USERS\ROBERTO\DOCUMENTS\VISUAL STUDIO 2010\PROJECTS\FLYINGSHAKESTORE\MVCMUSICSTORE\APP_DATA\ASPNETDB_TMP.MDF’. CREATE DATABASE failed. Some file names listed could not be created. Check related errors. Creating the ASPNETDB_74b63e50f61642dc8316048e24c7e499 database…

Now, the problem with all this is the machine.config file, where all of these default settings actually reside. See, internally it has a LocalSqlServer connection string, and by default the RoleManager will use it because it’s the default setting. Here’s what it looks like:

<connectionStrings>		
<add name="LocalSqlServer" connectionString="data source=.\SQLEXPRESS;Integrated security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient"/>	</connectionStrings>
<system.web>		
<processModel autoConfig="true"/>	
<httpHandlers/>		
<membership>			
<providers>				
<add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="LocalSqlServer" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" applicationName="/" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7" minRequiredNonalphanumericCharacters="1" passwordAttemptWindow="10" passwordStrengthRegularExpression=""/>			</providers>		
</membership>
<profile>
<providers>	
<add name="AspNetSqlProfileProvider" connectionStringName="LocalSqlServer" applicationName="/" type="System.Web.Profile.SqlProfileProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>			</providers>
</profile>
<roleManager>
<providers>
<add name="AspNetSqlRoleProvider" connectionStringName="LocalSqlServer" applicationName="/" type="System.Web.Security.SqlRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>				<add name="AspNetWindowsTokenRoleProvider" applicationName="/" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
</providers>
</roleManager>
</system.web>

So, we have to overwrite those settings in our own web.config file like so:

<roleManager enabled="true" defaultProvider="AspNetSqlRoleProvider"> 
<providers>
<clear/>
<add name="AspNetSqlRoleProvider" connectionStringName="..." type="System.Web.Security.SqlRoleProvider, System.Web, Version=4.0.0.0, Culture=neutral...."/> 
</providers>
</roleManager>
<membership>
<providers> 
<clear/>
<add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider.... connectionStringName=""..../>        </providers> 
</membership>
<profile>
<providers>
<clear/>
<add name="AspNetSqlProfileProvider" connectionStringName="" type="System.Web.Profile.SqlProfileProvider..../>
</providers> 
</profile>

Now, what we are doing here is simply replacing the connectionStringName attribute in each of these providers with our own connection string name. Before that, however, we put “<clear/>” to dump the previous settings defined in the machine.config file and force it to use our modified settings.

That should allow the role manager to use our SQL Azure database instead of trying to attach its own. Things should run perfectly now.

Finally

The Azure Management Portal has a very nice UI for managing and editing your database. You can add and remove tables, columns, rows, etc. It’s really good. And rather welcome – I thought I’d have to script every change and alteration.

Windows Azure Feedreader Episode 8

Apologies for the long delay between episodes. I do these in my spare time, and my spare time doesn’t always coincide with a quiet environment to record the screencast in.

But safe to say, I’m super excited to be back and doing these once again.

So, to it!!

This week we follow straight on from what we did in week 7, and modify our backend code to take account of user subscriptions when adding RSS feeds and RSS feeds from OPML files.

Additionally, we lay the groundwork for next week’s episode, where we will be writing the code that will update the feeds every couple of hours.

And finally, and perhaps most importantly, this week we start using the Windows Azure SDK 1.3. So if you haven’t downloaded and installed it, now is the time.

There are some slight code modifications to take account of some breaking changes in 1.3 that are detailed in Steve Marx’s post on the subject.

Finally, I’ve just realised that the episode contains the wrong changeset information. The correct changeset is: 84039 (I’ll be correcting this slight oversight. Perfectionism demands it)

So, enjoy the show:

Remember, you can head over to vimeo.com to see the show in all its HD glory.

As I said, next week we’ll be writing the update code that will automatically update all RSS feeds being tracked by the application.

Client Server Chat with WCF

Almost a year after I wrote this post promising to keep WCF Chat updated, I’m living up to that promise and updating my WCF Chat application over on Codeplex. The original release on Codeplex is actually just a zip file with the project in it. All things considered, it was a knee-jerk posting in the midst of the openFF effort to clone Friendfeed. Of course, the original, actual reason why I posted it is lost to history. And in the middle of all that hoohah I never wrote an introduction to and an explanation of the codebase.

An Introduction

 

The WCF Chat application was actually a class assignment for a class that included, among other things, WCF, REST and TCP. It’s actually interesting to see how that class has changed since I took it three years ago. This year, for example, it includes MVC, but I digress. The fact is that my submission did go above and beyond the requirements. And the reason for that is that once I wrote the basic logic, the more complicated stuff was easy. In other words: given enough layers of abstraction, anything is easy.

Having dusted off and worked with the code for a few hours, it’s rather amazing how easy a chat application is. Now, that statement should be taken in the context of the fact that WCF is doing most of the heavy lifting. So getting the client to ask for data is a relatively trivial task. The tricky bit is the need for a callback.

In this case, I use callbacks for direct messages and file transfers. Now, you are probably wondering why I went through the trouble, given that the sensible option is simply to poll a server. And it is a sensible option: Tweetdeck, Seesmic and other Twitter clients all use polling. Basically, it was in the requirements that there should be two-way communication. There are a number of ways to implement this. One could, for example, have a WCF service on the client that the server can call back to. This did occur to me, but it’s a complex and heavy-handed approach, not to mention a resource-intensive one. WCF always gives me headaches, and so I was only going to do one service. So I wrote a TCP listener that receives pings from the server on a particular port.

That’s one peculiarity about this app. The other is the way the server is actually written. We have the WCF service implementation, and we have a second server class that the WCF service calls. There is a division of responsibility between the two: the service is always responsible for getting the data ready for transmission, and the server does the actual application logic.

The client is fairly straightforward. It uses the Krypton library from Component Factory. It’s a great UI library that I use any time I have to write a Windows Forms UI. The actual UI code is rather like the Leaning Tower of Pisa – it’s damn fragile, basically because it relies on the component designer logic for the way controls are layered. So I haven’t touched it at all. In fact, I haven’t needed to. More on this later.

When you are looking at the client code, you’ll realise that for each type of message control, there is a message-type base control. The reason for this is that I foolishly (and successfully) tried to use generic controls. In the current implementation, there is actually precious little code in the MessageBase control. The reason for this is mainly historical: there was originally a lot of code in there, mainly event code. In testing, I discovered that those events weren’t actually firing, for reasons beyond understanding. So they were moved from the inherited class to the inheriting class – that is, the generic control.

There are message-type base controls that inherit from the MessageBase control and pass in the type (Post, DM, File). These controls are in turn inherited by the actual Message, DM or File control. The reason for this long inheritance tree is that the designer will not display generic controls. Rather, that was the case; I’ve yet to actually try it with the Visual Studio 2010 designer. As I said, I haven’t changed the client UI code and architecture at all.

The client has a background thread that runs a TCP listener to listen for callbacks from the server. It’s on a background thread so that it does not block the UI thread. I used a BackgroundWorker for this rather than the newer Task functionality built into .NET 4 that we have today.
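As a rough sketch of that arrangement (the port number and the handling code are hypothetical, not the project’s actual values):

private void StartCallbackListener()
{
    var worker = new BackgroundWorker();
    worker.DoWork += (sender, e) =>
    {
        var listener = new TcpListener(IPAddress.Any, 8085);
        listener.Start();
        while (true)
        {
            // Blocks until the server pings us; the UI thread carries on regardless.
            using (TcpClient ping = listener.AcceptTcpClient())
            {
                // ... read the notification and fetch the new DM or file ...
            }
        }
    };
    worker.RunWorkerAsync();
}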

Functionality

 

Basically, everyone sees all the public messages on a given server. There is no mechanism to follow specific people or even to filter the stream by username. Archaic, I know, but I’m writing my chat application, not Twitter.

There are direct messages that can be directed to a specific user. Because the server issues callbacks for DMs, they appear almost instantly in the user’s stream.

You can send files to specific users as well. These files are stored on the server when you send them. The server will issue a callback to the user, and the file will be sent to them when the client responds to that callback. You can also forward a file you have received to another user. Files are private only by implication: they can be accessed by whoever is informed of the file’s existence.

All of the above messages are persisted on the server. However, forwarding messages is not persisted in any shape or form.

Also, you can set a status. This status is visible only when you are logged in. In fact, your username is only visible to others when you are logged in.

It should be noted that you have to use the Server Options panel to add and remove users.

Todays Release

 

Today’s changes basically upgrade everything to .NET 4 and make sure it’s compatible. Today’s release does not take advantage of anything new other than some additional LINQ and extension methods. Taking advantage of the new stuff will require a careful think about how and where to use it. I’m not quite willing to sacrifice a working build for new code patterns that do the exact same thing.

The original server was actually just a console application. I took that original code and ported it to a Windows service. Trivial logic changes were made at most. The UI (i.e. the Options form) that was part of that console application has been moved into its own project.

I also ported the server code to a Windows Azure web role. And let me tell you something – it was even easier than I had anticipated. The XML file and the collections I stored the streams in are replaced with Windows Azure Tables for Users, Streams, DMs and Files. The files themselves are written to Windows Azure Blobs rather than being written out to disk.

The web role as written is actually single instance. The reason is that the collection that stores the active users (i.e. which users are active right now) is still an in-memory collection. I haven’t moved it over to Windows Azure Tables yet. You could fire up more than one instance of this role, but each of them would have a different list of active users. And because Windows Azure helpfully provides you with a load balancer, there’s no guaranteeing which instance is going to respond to the client. There is a reason why I haven’t moved that collection over to Windows Azure Tables: basically, I’m not happy with it. If Azure had some sort of caching tier, using Velocity or something, I could instantiate a collection of objects in the cache and have all instances share that collection. The Windows Azure table would be changing from minute to minute with additions, edits and deletions, and I don’t think Windows Azure Tables would keep up. I’m interested to know what you think of this.

I also added an Options application to talk to the Windows Azure web role, and I wrote a WCF web service in the web role to support this application.

The client is essentially the same as it has always been. There is the addition of a domain when you are logging in – this could be for either cloud or service-based server implementations. Since there is no default domain, the client needs one when you are logging in, and will ask for one. Once you have provided one, you’ll have to restart the application.

There are installers for all the applications except for the Cloud project. The service installer will install both the service and the Options application.

Bear in mind that for the Options Applications, there is no authentication and authorisation. If you run the app on a server with the ChatServer installed, or you point the CloudOptions app at the appropriate server, you are in control. This is a concern to me and will be fixed in a future release.

Future changes

 

I was tempted to write an HTTP POST server for all this. MVC makes it so easy to implement. There would be plenty of XML flying back and forth. Some of the WCF operations would require some high-wire gymnastics to implement as HTTP POST, but it’s possible. I might do this.

The one thing that I didn’t update was the version of the Krypton UI library I use. I’d very much like to use the latest version to write a new UI from scratch. Again, it’s a possibility.

The fact is that once you start thinking of implementing following a la Twitter, your database schema suddenly looks much more complicated. And since I’m not writing Twitter, I’ll pass.

If you have any suggestions on future changes, holler.

Final Words

 


Also, bear in mind that this is marked Alpha for a reason. If it eats your homework and scares your dog, it’s not my fault – I’m just some guy who writes code.

Finally, this code all works IN THEORY. I’ll be testing all these pieces thoroughly in the coming weeks.

Where you can get it

 

You can get it off Codeplex. The source code for all the above mentioned components is checked into SVN source control.

For this 1.1 Alpha release, you’ll find the setup files for each component in a separate downloadable zip file. The CloudServer code is included as is, since no setup files are possible for Cloud projects.

Using MVC and jQuery to build a NewsTicker

Scott Guthrie tweeted a link to an article by Scott Mitchell. In it, Scott wrote a handy news ticker in ASP.NET. And he wrote it in VB. I recommend going to read it before doing anything else.

It is a handy little example and a very good demonstration of jQuery in action. I actually first thought of using it in the Feedreader. Of course, if I wanted to do that, I’d have to re-write it as an MVC application rather than an ASP.NET website.

This is a nice academic exercise in moving from ASP.NET to MVC.

So let’s get cracking.

Groundwork

So. We begin by creating an empty MVC2 project.

Actually, we began when we read Scott’s article. It’s important to understand what Scott’s trying to accomplish and how Scott’s code works before we shamelessly copy it.

The first thing you are going to want to do is make sure that Scott’s ticker.js and jquery-1.4.4.min.js are in your MVC Scripts folder and included in the project. Also, you’ll want to copy Scott’s css files across. I put them in a folder called Styles and made sure it was included in the project.

Now, we need a master page before we can create any Views, so add one in Views/Shared. We are shamelessly copying Scott’s example in every detail, so go ahead and copy the markup from Scott’s example and paste it into your master page. You’ll want to change the script and css paths accordingly.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head id="Head1" runat="server">
<title>Untitled Page</title>
<script type="text/javascript" src="../../Scripts/jquery-1.4.4.min.js"></script>
<script type="text/javascript" src="../../Scripts/ticker.js"></script>
<asp:ContentPlaceHolder id="head" runat="server">
</asp:ContentPlaceHolder>
<link href="../../Styles/sinorcaish-screen.css" rel="stylesheet" type="text/css" />
<link href="../../Styles/CustomStyles.css" rel="stylesheet" type="text/css" />
<link href="../../Styles/NewsTicker.css" rel="stylesheet" type="text/css" />
</head>
<body>
<form id="form1" runat="server">
<!-- ======== Header ======== -->
<div id="header">
<div class="left">
News Ticker Demo
</div>
<div class="right">
<%=DateTime.Now.ToShortDateString()%>
</div>
<div class="subheader">
<em>News me!</em>
</div>
</div>

<!-- ======== Left Sidebar ======== -->
<div id="sidebar">
<div>
<ul>
<li><%: Html.ActionLink("Simple Ticker Demo", "Simple","Home")%></li>
<li><%: Html.ActionLink("Dynamic Ticker Demo", "Dynamic","Home")%></li>
</ul>
</div>
</div>

<!-- ======== Main Content ======== -->
<div id="main">
<asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server">

</asp:ContentPlaceHolder>
</div>

<!-- ======== Footer ======== -->

<div id="footer">
ASP.NET application designed by <a href="http://www.4guysfromrolla.com/ScottMitchell.shtml">Scott Mitchell</a>.
Website design by <a href="mailto:J.Zaitseff@zap.org.au">John Zaitseff</a>, and available
at <a href="http://www.opendesigns.org/preview/?template=1700">OpenDesigns.org</a>.
</div>

</form>
</body>
</html>

Remember to change the links in the sidebar to ActionLinks.

Controllers

The next thing we need to do is to write a Controller. Typically the first controller in any MVC application is the home controller.

So go ahead and create one, adding three ActionResult methods: Index, Simple and Dynamic.

public class HomeController : Controller
{
//
// GET: /Home/

public ActionResult Index()
{
return View();
}
public ActionResult Simple()
{
return View();
}
public ActionResult Dynamic()
{
return View();
}
}

At this point, create a View for Index, copy the HTML from Default.aspx and stick it in the content control that’s been created in the view.
Then, add an empty view for the Simple action. Since this is straightforward HTML, we simply copy the contents of both ContentPlaceHolders in Simple.aspx into the two that have been created for us in the view.

<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">

<h2>Simple News Ticker Demo</h2>
<p>
This demo shows two simple news tickers. Each news ticker has the same hard-coded news items. The
first one shows only the body of each news item, one at a time; the second one shows the headline,
body, and published date of each news item and shows three at a time.
</p>

<h3>One Row News Ticker</h3>
<div class="ticker stretched">
<ul id="latestNews1">
<li>
<div class="body">Politician Joe Smith has assembled a news conference for this afternoon to apologize for some
indiscretion he had. This is Mr. Smith's third such "apology press conference" this year.</div>
</li>
<li>
<div class="body">Did you know that you can play the fun (and addictive!) board game
Axis &amp; Allies online? Head on over to <a target="_blank" href="http://gamesbyemail.com/Games/WW2">http://gamesbyemail.com/Games/WW2</a>
and give it a whirl!</div>
</li>
<li>
<div class="body">A recent study by some doctors somewhere showed a strong correlation between unhealthy eating
and unhealthy people. More studies are to be performed to verify these findings.</div>
</li>
<li>
<div class="body">This just in - ASP.NET is awesome! jQuery is not so bad, either. In fact, most technologies are pretty darn
cool. For more information, see <a href="http://www.4guysfromrolla.com/" target="_blank">4GuysFromRolla.com</a>.</div>
</li>
<li>
<div class="body">Last night the local sports team won a convincing victory over their hated rivals. After the game there was much
jubilation.</div>
</li>
<li>
<div class="body">Visit my blog. Please. You can find it at <a href="http://scottonwriting.net/sowblog/">ScottOnWriting.NET</a>.</div>
</li>
</ul>
</div>

<h3>Three Rows News Ticker</h3>
<div class="ticker threeRows medium">
<ul id="latestNews3">
<li>
<div class="header">Politician schedules news conference</div>
<div class="body">Politician Joe Smith has assembled a news conference for this afternoon to apologize for some
indiscretion he had. This is Mr. Smith's third such "apology press conference" this year.</div>
<div class="footer">Published @ 8:30 AM</div>
</li>
<li>
<div class="header">Play Axis &amp; Allies Online!</div>
<div class="body">Did you know that you can play the fun (and addictive!) board game
Axis &amp; Allies online? Head on over to <a target="_blank" href="http://gamesbyemail.com/Games/WW2">http://gamesbyemail.com/Games/WW2</a>
and give it a whirl!</div>
<div class="footer">Published @ 8:38 AM</div>
</li>
<li>
<div class="header">Study links unhealthy food to unhealthy people</div>
<div class="body">A recent study by some doctors somewhere showed a strong correlation between unhealthy eating
and unhealthy people. More studies are to be performed to verify these findings.</div>
<div class="footer">Published @ 9:00 AM</div>
</li>
<li>
<div class="header">ASP.NET is awesome!</div>
<div class="body">This just in - ASP.NET is awesome! jQuery is not so bad, either. In fact, most technologies are pretty darn
cool. For more information, see <a href="http://www.4guysfromrolla.com/" target="_blank">4GuysFromRolla.com</a>.</div>
<div class="footer">Published @ 9:09 AM</div>
</li>
<li>
<div class="header">Local sports team wins</div>
<div class="body">Last night the local sports team won a convincing victory over their hated rivals. After the game there was much
jubilation.</div>
<div class="footer">Published @ 9:35 AM</div>
</li>
<li>
<div class="header">Read my blog</div>
<div class="body">Please. You can find it at <a href="http://scottonwriting.net/sowblog/">ScottOnWriting.NET</a>.</div>
<div class="footer">Published @ 10:30 AM</div>
</li>
</ul>
</div>

</asp:Content>

<asp:Content ID="Content2" ContentPlaceHolderID="head" runat="server">
<script type="text/javascript">
$(document).ready(function () {
startTicker('#latestNews1', 1, 5000);
startTicker('#latestNews3', 3, 5000);
});
</script>
</asp:Content>

Note the JavaScript in the "head" ContentPlaceHolder. This fires as soon as the page has finished rendering. Scott has more details about it in his article.

Model

Now, this is the hard part.

Scott Mitchell's example used an ASP.NET ListView control. If you're writing Web Forms code, a ListView is the easiest way to accomplish what we're trying to do. You'll notice as well that Scott passes the contents of SyndicationFeed.Items directly to the data-bound control. There is most probably a way of using Scott's code directly in an MVC view, but for the purposes of convenience we'll dispense with the ListView and iterate over the items ourselves.

There is also the issue of formatting the items. You'll notice that the ItemTemplate in Scott's code calls FormatSummary and FormatPubDate from the HTML. Because of the separation between code and HTML in MVC, we can't do that.

The solution to both of these "problems" is to do things ourselves. The M in MVC stands for Model, so we need a model before we go any further. The two pieces of data we need are contained in the Summary and PublishDate fields of the SyndicationItem objects. So this is what our model looks like:

public class Model
{
    public string Title { get; set; }
    public string Date { get; set; }
}

FormatSummary and FormatPubDate obviously need to be part of the HomeController class:

public static string FormatSummary(string summary)
{
    string header = "ScottOnWriting: ";

    // Remove the leading "ScottOnWriting: "
    if (summary.StartsWith(header))
    {
        return summary.Substring(header.Length);
    }

    return summary;
}

public static string FormatPubDate(DateTimeOffset pubDate)
{
    return pubDate.ToString("h:mm, MMM d");
}
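One piece these snippets leave implicit is how the Dynamic action actually fills the model. Here is a minimal sketch of one way it might look, assuming the tweets are loaded from an RSS feed with SyndicationFeed; the feed URL and the List<Model> shape are my assumptions, not Scott's code. It would replace the empty Dynamic stub shown earlier, and needs System.Collections.Generic, System.Linq, System.ServiceModel.Syndication and System.Xml.

public ActionResult Dynamic()
{
    // Hypothetical feed URL - substitute whatever RSS feed you actually want to show.
    string feedUrl = "http://twitter.com/statuses/user_timeline/ScottOnWriting.rss";

    List<Model> items;
    using (XmlReader reader = XmlReader.Create(feedUrl))
    {
        SyndicationFeed feed = SyndicationFeed.Load(reader);

        // Project each SyndicationItem into our view model, applying the same
        // formatting helpers defined above. This reads the tweet text from the
        // Summary field, as described earlier; some feeds put it in Title instead.
        items = feed.Items
                    .Select(item => new Model
                    {
                        Title = FormatSummary(item.Summary.Text),
                        Date = FormatPubDate(item.PublishDate)
                    })
                    .ToList();
    }

    // The strongly typed Dynamic view will iterate over this list.
    return View(items);
}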

View

Now that the groundwork has been laid, we can write our actual view. First, we need to add our JavaScript:

<asp:Content ID="Content2" ContentPlaceHolderID="head" runat="server">
<script type="text/javascript">
$(document).ready(function () {
startTicker('#tweets', 2, 4500);
});
</script>
</asp:Content>

This JavaScript will do nothing unless ticker.js exists in your Scripts folder. It will end up in the page header and will be executed as soon as the document has finished rendering. Also note that we are passing 2 in here, which tells the ticker to show two items at a time; we could pass in any number we wanted. Scott explains more about this in his article.

Now, the fact is that the original implementation using a ListView simply iterated over all the objects in the data source and output a snippet of HTML for each item in that collection. So we'll do the same.

<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">

<h2>Dynamic News Ticker Demo</h2>
<p>
This demo shows a ticker whose contents are populated dynamically. This particular example pulls the most recent tweets from
<a href="http://twitter.com/ScottOnWriting">my Twitter account</a> and displays them in a ticker, showing two entries at a time.
</p>
<h3>@ScottOnWriting's Latest Tweets</h3>
<div class="ticker twoRows medium">
<ul id="tweets">
<%foreach (var item in this.Model)
{  %>
<li>
<div class="header" style="font-weight: normal">
<%= item.Title.ToString()%>
</div>
<div class="footer">Tweeted @ <%: item.Date.ToString()%></div>
</li>
<%} %>
</ul>
</div>
</asp:Content>

Note the careful placement of our foreach statement. Our list item (<li/>) and the HTML within it will be repeated on each pass over the loop body. Also note that we are using var item in this.Model: our View knows exactly what datatype has been passed to it, because it is a strongly typed (generic) view over a collection of Model objects, as sketched below. We use var here simply to avoid typing Model.Model over and over again.
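For that to compile, the view's @ Page directive has to declare the model type in its Inherits attribute. A sketch of what it might look like, assuming the master page is named Site.Master and the Model class lives in a NewsTickerDemo.Models namespace (both names are placeholders for whatever your project actually uses):

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    Inherits="System.Web.Mvc.ViewPage<System.Collections.Generic.List<NewsTickerDemo.Models.Model>>" %>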

 

So, in a nutshell, the above code does exactly the same thing as a ListView control.

Hit run and it should be working.