Paul Buchheit (formerly of FriendFeed, now at Facebook) taped an iPhone to his head and ran around Paris.
Enjoy:
I shared it mainly because it reminds me of my last trip to Paris.
Random Technology Musings
This is my attempt to kill two birds with one stone.
So, item number one is a new version of the Oil Spill application, with a handful of new features.
You can get the new release here: http://oilslickfeeds.codeplex.com/releases/view/47469
This may seem like little, but it's really a total rewrite.
Being able to remove panels that you do not want to watch makes the application more memory-efficient. There is more work that can be done in this area, but there is only so much you can do when streaming eight live feeds.
Item number two is the fact that I did a short, six-minute screencast discussing the innards of the application. Now, it may seem silly to do a screencast for something so trivial, but I wanted some practice. It turns out that it's not as easy as it sounds. So this video is my fourth attempt (which I'm still not 100% happy with, mainly because of the overlay).
I did this with Expression Encoder 3 Screen Capture. I usually use Community Clips for this sort of thing, but Expression Encoder is not half bad as a screen capture program. It will let you add an overlay from an external camera, in this case my laptop's integrated webcam. I'm not too sure about this for future screencasts, so do let me know what you think.
So here it is:
This year was my last at Uni (actually, I still have an exam to write, so the past tense isn't accurate). As is typical for Honours-year undergraduates, a final-year project was set.
If you are a regular reader of this blog, you'll probably know that what I picked was an RSSCloud server running on Windows Azure. However, as they say on the home shopping networks, THERE'S MORE! My project needed a little more body to it, so I added an online feed reader; in other words, a poor (dirt-simple) imitation of Google Reader.
Now, this app uses a number of technologies, which would make it a pretty cool demo project: Windows Azure itself (obviously), Windows Azure Tables and Blobs, LINQ, WCF, MVC2 and so on. This includes it being a demonstrator of the RSSCloud specification itself.
Although it's an academic submission, my lecturers are fine with me open-sourcing it.
Given the rise of .NET 4, and the experience points gained writing the first version, I feel that everyone would be better served by a rewrite. Not to mention the fact that it'll give me a chance to use the new Windows Azure Tools for Visual Studio.
As I rewrite it, I think a screencast series is in order. All the code will be checked in to CodePlex. This will give everyone a chance to double-check my logic (I'm particularly interested in what Dave Winer thinks of my implementation of RSSCloud).
So, firstly: what do you think?
And secondly, does anyone know a good hosting provider? I don't know about YouTube, but Vimeo looks pretty good. If their limit is 500GB of upload space per week, it'll give me a chance to do one video each week, more or less.
I have all the software required to pull this off, so that's not a problem. I actually did a screencast of a live coding session in class for one of my lectures (writing an interpreter turns out to be pretty fun, actually).
I think this would be a pretty good contribution to the community as a whole.
You know, I'm kind of glad I've yet to buy an iPad. The reason is the emergence of the iPhone 4.
I can just hear you thinking, "Roberto has well and truly lost it this time". But think about it. In terms of new technology, the iPad adds only a very little. Sure, it has multi-touch and all those, lest we forget, amazing applications. However, much of what I can now do on my laptop and iPhone I could do on an iPad. Hence, if one had to do a cost-benefit analysis, one would find that the large outlay for the iPad is disproportionate to the net benefit it would bring.
However, I’m not saying I’m not getting an iPad (next time Jeff Jarvis throws his away, he’s welcome to send it to me for “recycling”).
Now, Apple also has what can rightly be termed a mini-iPad: the iPhone 4. It should be said that the iPhone is now a stable platform. We have a core set of features which we will always expect from an iPhone. This means that the majority of the features I already have in my trusty iPhone 3G are in the new model.
The difference is that the iPhone 4 offers one large feature currently completely missing from my life: video. I don't have a Flip or other camcorder. My old Nokia N74 did have one, but it's nowhere near as good as the one in the iPhone 4. The iMovie app is yet more value added to a package that's irresistible. So the cost-benefit analysis would find that the outlay for one is proportionate to the net benefit: the addition of video (and iMovie).
And I'm basing this on just one hardware feature. There is a laundry list of new stuff to be found in the iPhone 4, not to mention the A4 CPU or the bump in battery life.
One word of caution here. When I got my iPhone, it got more and more valuable as I discovered apps and workflows that worked for me. And I still do discover things; that sense of child-like wonder is still there. The same will most certainly apply to the iPad.
A second postscript to add to this: as a budding amateur photographer, I see tremendous value in both of these devices. The iPad is perfect for showing off a portfolio or album. In the media-rich world we now live in, the ability to record video, even just in 720p from the iPhone 4, adds another dimension to my photography. It is a pity that Apple does not let these two devices work together.
Third postscript: Gizmodo ruined the iPhone 4 announcement. Glad they were banned from WWDC. Good riddance to bad rubbish.
Not from here, but rather from Kara Swisher's post on Apple barring AdMob.
David K makes this excellent point:
Really? I never realized how I was held hostage. I could swear that I am completely free to buy any smartphone I want if I don't like the iPhone. I wasn't aware that Apple would send its iPolice after me if I walked into a Verizon store tomorrow and picked up a Droid…
Basically, the argument in the preceding comments was that iAd is yet another instance of the closed ecosystem.
By the same token, devs aren't held hostage as regards their choice of ad provider. This is made clear by the language in the new ToS. They just can't use AdMob.
Quite frankly, Jobs has every right to bar AdMob. To do anything else would be like Microsoft selling Lotus Notes in their stores. Not gonna happen.
Like that. Took me 30 minutes.
Head over to http://oilslickfeeds.codeplex.com/ to get it.
Note: these are live streams from the ROVs monitoring the damaged riser. Please be aware that, as these are live streams, they may freeze or be unavailable from time to time.
(thanks to the Channel 4 TV guys for putting the original web page together)
Update 08/06/2010: There was a problem with the install files. I've uploaded a new version. Please do let me know if there are further problems.
Update 24/06/2010: Version 1.3 has been released. Please see the details here: https://rbonini.wordpress.com/2010/06/24/oil-spill-app-1-3-screencast/
The first comes from David Weiss’s blog:
"Engineering is the art of modelling materials we do not wholly understand, into shapes we cannot precisely analyze so as to withstand forces we cannot properly assess, in such a way that the public has no reason to suspect the extent of our ignorance."
– Dr. AR Dykes, British Institution of Structural Engineers, 1976.
To echo what David said: If this doesn’t accurately describe software engineering, I don’t know what does.
The second comes from Jeff Atwood’s post on Coding Horror: The Vast And Endless Sea:
If you want to build a ship, don’t drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea.
– Antoine de Saint Exupéry
Seriously, I'd go and read the whole post. If I were teaching Introduction to Programming, this is the sort of quote I'd use on slide number one. And that's exactly what I'd be doing.
In a perfect world, these two Microsoft products would Just Work. Unfortunately, it does not quite work that way in practice.
In general, WCF can be run in a number of ways: hosted in IIS, or self-hosted in a process you control.
In Windows Azure terms, the first method would be a web role, and the second method would be a worker role.
It should be noted that all of the above methods and code patterns are perfectly legitimate ways to create and host a WCF service. The question here is which of these ways will run on Windows Azure.
Programming for Windows Azure is generally an easy thing, so long as your application start-up code is correct. If there are any exceptions in your start-up code, the role will cycle between Initializing, Busy and Stopping. It's enough to drive you mad. Therefore, if you choose to self-host, you need to be sure that your worker role code runs perfectly.
For all your Windows Azure applications, it's also a good idea to create a log table using Windows Azure Tables to assist you in debugging your service.
Now, before you begin any WCF work on Azure, you need to install a patch. WCF can exhibit some strange behaviour when deployed behind a load balancer. Since all Windows Azure service instances are behind a load balancer by default, we need to install this patch. The same patch is already installed in the cloud.
You can find the patch, as well as a thorough run-down of all the WCF issues on Azure, here. While I'm not attempting to tackle all of the problems, and indeed the solutions to them, I do aim to get a basic service working with as little fuss as possible.
It must be said that I did try all sorts of settings in the config file. It turns out that the one with almost no actual settings is the one that works. Basically, this forces us to rely on IIS to do the configuration for us. On one hand this is a good thing, saving us the hassle. On the other, there is some loss of control that the programmer in me does not like.
Firstly, let's look at the config file. This section is vanilla, and basically what you'll get when you add the WCF Service Web Role via the Add New Role dialog.
<system.serviceModel>
  <services>
    <service name="WCFServiceWebRole1.Service1" behaviorConfiguration="WCFServiceWebRole1.Service1Behavior">
      <!-- Service Endpoints -->
      <endpoint address="" binding="basicHttpBinding" contract="WCFServiceWebRole1.Service1">
      </endpoint>
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="WCFServiceWebRole1.Service1Behavior">
        <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
        <serviceMetadata httpGetEnabled="true"/>
        <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
        <serviceDebug includeExceptionDetailInFaults="false"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
If you deploy this file to the cloud, the role will initialise and start up correctly; the vanilla WCF web role template will run perfectly. However, at this point the service is not aware of anything outwith the load balancer, so it only works behind the LB.
Now for the actual service code. This code is actually from a REST service I wrote. It basically provides an interface for the RSSCloud specification (sorry, Dave). So we can talk to the RSSCloud server using HTTP POST, REST or any WSDL client.
[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class Service1
{
[WebInvoke(Method = "POST", UriTemplate = "pleasenotify/notifyProcedure/{notifyprocedure}/port/{port}/path/{path}/protocol/{protocol}/domain/{domain}/url/{url}"), OperationContract]
public String RequestNotification(string notifyprocedure, int port, string path, string protocol, string domain, string url)
{
// this change to the storage model is intended to make scaling easier:
// the subscribers will be stored per blog in individual blobs, and
// the table will store the blog ids, the blog urls and the last-modified date of each blob file.
DateTime Timestamp = DateTime.Now;
XmlDocument thedoc = new XmlDocument();
string name = BlobNameServices.generateBlobName(url);
if (name == null)
{
name = BlobNameServices.CreateBlobEntry(url, source);
}
// Set the metadata into the blob
string hash = Utils.HashToBase64(domain + path + port + protocol + url);
insertTable(name, hash, notifyprocedure, port, path, protocol, domain, url);
requestqueue.AddMessage(new CloudQueueMessage(name));
return "reply";
}
[WebInvoke(Method = "POST", UriTemplate = "recivenotify/url/{url}"), OperationContract]
public String RecieveNotification(string url)
{
recieverqueue.AddMessage(new CloudQueueMessage(url));
return "Recieved";
}
[WebInvoke(Method = "POST", UriTemplate = "ping/url/{value}"), OperationContract]
public String Ping(string value)
{
char[] chars = value.ToCharArray();
//we have a potential inconsistency here - the url is passed as an argument,
//whereas for notifications the blog id is passed as an argument
pingqueue.AddMessage(new CloudQueueMessage(value));
String result = "You said: " + value;
return result;
}
}
I arrived at this code by taking the vanilla WCF web role, combining it with the WCF REST template, and replacing the template code with my own.
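As an aside, the Utils.HashToBase64 call in RequestNotification derives a stable subscriber key from the subscription fields. Here is a minimal sketch of that idea in Python; note that the choice of SHA-256 is my assumption, since the original Utils class does not say which hash it uses:

```python
import base64
import hashlib

def hash_to_base64(domain, path, port, protocol, url):
    # Mirror HashToBase64(domain + path + port + protocol + url):
    # concatenate the subscription fields and hash them into a short key.
    # SHA-256 is an assumption here, not taken from the original code.
    key = "{0}{1}{2}{3}{4}".format(domain, path, port, protocol, url)
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# The same subscription always yields the same key, so a duplicate
# notification request can be spotted with a simple table lookup.
a = hash_to_base64("example.com", "/notify", 80, "http-post", "http://myblog/rss")
b = hash_to_base64("example.com", "/notify", 80, "http-post", "http://myblog/rss")
assert a == b
```

The point of the hash is deduplication: two requests for the same subscription collapse to one table row.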
Amazingly enough, it will work, up to a point. When you hit the .svc file, it points you to a WSDL behind the load balancer, using the local instance's address. Obviously, we can't get to it. So we have to modify our configuration very slightly to take advantage of the patch mentioned above.
<behaviors>
  <serviceBehaviors>
    <behavior name="WCFServiceWebRole1.Service1Behavior">
      <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
      <serviceMetadata httpGetEnabled="true"/>
      <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
      <serviceDebug includeExceptionDetailInFaults="false"/>
      <useRequestHeadersForMetadataAddress>
        <defaultPorts>
          <add scheme="http" port="8080" />
        </defaultPorts>
      </useRequestHeadersForMetadataAddress>
    </behavior>
  </serviceBehaviors>
</behaviors>
If you hit the service now, the WSDL file points to the correct address and you can get to it. So any WSDL client can now talk to your service.
You want to make sure that the port selected in the web role settings matches the port you have in the defaultPorts section, whether that is port 80 or something else. If you have an HTTPS endpoint, you need to add it as well to get it to work.
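For reference, here is a sketch of what the matching input endpoint looks like in the ServiceDefinition.csdef file. The role and endpoint names below are illustrative assumptions, not taken from the project; the point is only that the port attribute must agree with the defaultPorts entry above:

```xml
<!-- Illustrative sketch: an HTTP input endpoint for the web role.
     "WCFServiceWebRole1" and "Endpoint1" are assumed names. -->
<WebRole name="WCFServiceWebRole1">
  <Endpoints>
    <InputEndpoint name="Endpoint1" protocol="http" port="8080" />
  </Endpoints>
</WebRole>
```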
Now, I've also tried self-hosting. This is the idea:
string port="";
try
{
port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["WCFEndpoint"].IPEndpoint.Port.ToString();
}
catch
{
port = "8000";
}
try
{
using (ServiceHost svcHost = new ServiceHost(typeof(RssNotifyService), new Uri("http://northwind.cloudapp.net:" + port)))
{
svcHost.Open();
while (true)
{
//do something to keep the role alive
}
svcHost.Close();
}
}
catch(Exception ex)
{
CloudStorageAccount acc = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
LogDataSource log = new LogDataSource(acc);
log.Insert("WCF",ex.Message,"Error",DateTime.Now);
}
For this to work, you need to have an HTTP input endpoint set in your web role settings. In this case it was port 8000.
While the above will work in the development fabric, it won’t in the cloud. But it demonstrates the idea.
At the end of the day, there is not very much to set up if you are writing a new WCF service with minimal custom configuration.
So, install the patch and watch that config file.
PS. Many thanks to Steve Marx (Windows Azure Program Manager) for his help with this.
I left this comment on Paul Buchheit's FriendFeed thread:
Paul: I get value out of having Twitter and FF completely public. That's not the issue. The issue here is that FB was originally sold as a private service. Another thing: you and I may have seen value out of being completely public, but the only value to anyone in Grandma from Indiana being completely public belongs to the knitting-accessories advertisers.
And followed it up with this one:
And for the record, I don't have a FB account. In the old days, snail mail mostly guaranteed privacy for your communications by virtue of the fact that your communiqués were physically sealed by you. That is essentially the analogue version of FB before the privacy changes, albeit not at scale. In other words, privacy was implicit in the social convention of exchanging snail mail. With FB, people expected this social convention to extend, at slightly greater scale, to the online medium, since that is essentially how FB was marketed in the beginning. Now FB has single-handedly challenged the privacy implicit in this social convention, changing the implied default to a public one. That is the problem.
I think the last comment sums things up nicely. But do read that thread for a wide variety of opinions.
If you’ll remember, a while back I announced I was implementing RSSCloud on Windows Azure. By and large this is going well and I expect to have a demo up and running soon.
PS. What follows below is based on an email I sent to my Honours year supervisor at university, and some of it will make it into my thesis too.
The RSSCloud API relies on HTTP POST for messages in and out, and I initially thought Windows Communication Foundation was the way to go.
(bear with me, I’m using this to illustrate my point)
Up until now, WCF has been working. However, in order to actually test the RSSCloud code, I've had to write the WCF operation contracts as a REST service. It's clearly HTTP POST, but it's not in the RSSCloud specification. Though, arguably, it should be. Why do I say this? Developers should be able to write code they are comfortable with, whether that is REST or POST or SOAP, or against a WSDL-generated API client.
Back to my little problem. So instead of:
[WebInvoke(Method = "POST", UriTemplate = "/ping?url={value}")]
[OperationContract]
String Ping(string value);
I had to use:
[WebInvoke(Method = "POST", UriTemplate = "/ping/url/{value}")]
[OperationContract]
String Ping(string value);
There is a subtle difference: plain HTTP POST uses the query string, whereas REST uses the URL path itself to transmit information.
Sending an HTTP POST to the first method (where the query string is of the form "?url={value}&port={value}&…") hangs the test client. The request is accepted, but it never returns; I can't even debug the method. Using a pure REST URL (the second method), things work perfectly.
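To make the difference concrete, here is a small Python sketch (illustrative only; the host and URLs are made up) showing where each style carries the parameter:

```python
from urllib.parse import parse_qs, unquote, urlsplit

# HTTP POST style per the RSSCloud spec: parameters travel in the query string.
post_style = "http://host/Service1.svc/ping?url=http://myblog/rss"
# REST style, which the WCF UriTemplate could actually dispatch:
# the parameter is a path segment (percent-encoded).
rest_style = "http://host/Service1.svc/ping/url/http%3A%2F%2Fmyblog%2Frss"

# Query-string form: the value lives after the '?'.
query = parse_qs(urlsplit(post_style).query)
assert query["url"][0] == "http://myblog/rss"

# REST form: the value is the last path segment.
segment = urlsplit(rest_style).path.split("/")[-1]
assert unquote(segment) == "http://myblog/rss"
```

Both forms carry the same value; they just put it in different parts of the URL, which is exactly what the two UriTemplates above disagree about.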
In order for the project as a whole to conform to the project specification (by which I mean the interoperability of the program and its compliance with the HTTP POST methods defined in the RSSCloud specification), being able to accept HTTP POST is paramount.
I spoke to one of my WCF-savvy lecturers. Basically, he said that there were two ways of doing this: either stick to REST, or encode the URL as part of the POST data. Neither of these solves the problem of sticking to the specification and using HTTP POST.
So, I was digging around ASP.NET MVC 2 yesterday. I was building the page that will actually display the posts in a feed. I noticed that the controller actions that handle the request (i.e. the feed id to get) have an [HttpPost] attribute above them. I'd never really given that much thought until yesterday.
After my little chat, I had a hunch. Using MVC, I simply added a controller action like so:
[HttpPost]
public RedirectToRouteResult ThePing()
{
string url = (string) Request.QueryString["url"];
url = url.ToLower();
……..
And it worked flawlessly. After all my wrestling with WCF configurations and whatnot, I was actually quite shocked that it worked first time. One of the problems with working with new frameworks is that you keep discovering new things, but only long after you should've known them.
So, to hit the ThePing method above, the url is http://rsscloudproject/Cloud/ThePing?url=….... (obviously this isn’t deployed yet)
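For illustration, a ping against that MVC action could be built like this in Python. This is a sketch: the address is the undeployed one from above and the feed URL is made up, so the snippet only constructs the request rather than sending it:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical address from the post; the project was not yet deployed.
base = "http://rsscloudproject/Cloud/ThePing"

# An RSSCloud ping carries the feed url in the query string, which is
# exactly what Request.QueryString["url"] reads on the MVC side.
query = urlencode({"url": "http://myblog/rss"})
ping = Request(base + "?" + query, data=b"", method="POST")

assert ping.get_method() == "POST"
assert ping.full_url == "http://rsscloudproject/Cloud/ThePing?url=http%3A%2F%2Fmyblog%2Frss"
```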
Why does this work?
The reason is quite simple: as I understand it, MVC exposes the Request object for you to use directly, while WCF hides it somewhere in the bowels of its inner workings. So, without getting a handle on the Request object, I can't force WCF to process the query string differently. Hence WCF was the wrong choice of framework for this.
So my code is now 100% in compliance with the HTTP POST methods defined in the RSSCloud specification.
Now, what does this mean for the WCF REST project?
I'm keeping it as part of the project. It gives a REST interface, and it gives a WSDL that developers can use to build against my service.
Not so much the case with REST, but I personally think that the concept of a WSDL is under-represented when it comes to web-based APIs. Adding these two additional interfaces to the RSSCloud specification will be one of my recommendations in the final report. I feel strongly that a web-based API needs to give developers as many alternative interfaces as possible. It's no fun when you know one way of doing things, but the API is only provided in another.
For example, I wish SmugMug provided a WSDL that I could point Visual Studio at to generate a client.
Both of these situations illustrate a problem among web APIs.
I wrote a while back that Bill Buxton's MIX10 keynote about designing natural user interfaces, interfaces that respect the abilities and the skills acquired by the user, also applies to designers of APIs.
Bill gives this wonderful example of a violin. The violin itself may be worth millions of dollars (if I remember correctly, Joshua Bell paid $4.5 million for his Stradivarius). The bow for any first violinist in any symphony orchestra is never less than $10,000. Remember, these are musicians; they make a pittance, so as a proportion of income it's a fortune. But it's worth it. Why? Because it's worthy of the skills that those musicians have acquired over decades.
Dave Winer today published a post about Facebook not providing XML API responses, bemoaning that Twitter is going to do the same. Dave does not want JSON; he wants XML. Why? He feels comfortable with it, and he has the tools to work with it. Clearly the new API changes do not respect Dave Winer and the abilities he has acquired over decades.
I left the following comment:
I totally understand where you are coming from.
On the other hand, tools will always be insufficient. I don't think .NET, for example, has JSON support built in, either.
Technology moves so fast, as you say, that next week there will be something new and shiny. Developers find themselves in the curious position of having to write for today but prepare for next week's new thing; they have to find a way to straddle the fence between the two. Open source is not the complete answer to this problem (it's part of it, though).
- So, API developers have a responsibility to provide for developers.
- Tool developers (closed or open source) have a responsibility to provide tools in a timely fashion.
- And developers have a responsibility to have reasonable expectations of what they want supported.
This is a large and deep problem in the world of web APIs. They don't have to be just JSON or just XML or just HTTP POST or just XML-RPC or just SOAP or just WSDL. This collection of formats and standards can co-exist.
And co-exist they should. An API should be available to the widest possible cross-section of developers, to respect the individual skills that these developers have acquired over years and decades.
Because when you don’t do that, when you don’t respect that, you make people like Dave Winer stop coding against your API.