Azure Developer Challenge – YouConf – My article for Challenge Two

Since I’m going to have to update my article for Challenge 3 (and hence remove a lot of the content from challenge two) I thought I’d post the whole contents of my article from challenge two (minus the daily progress reports) here for posterity. Here goes!



Building on Scott Hanselman’s excellent post regarding hosting a conference in the cloud, why not make the ability to host a conference and stream it available to everyone? We’re going to use the same principle as dotnetConf, but build on it so that anyone can create their own conference with speakers and presentations, then record and stream to a live audience.


When I visited the dotNetConf site and saw Scott Hanselman’s blog post about it, I thought that this could be useful to a much wider audience, particularly smaller groups such as .Net User groups who might like to record and stream their presentations. Having seen the Azure developer contest brief, I figured it would be a good chance to learn more about Azure, MVC, and the .Net technology stack. Hence my entry into this competition.

How will Azure Benefit me?

Azure will allow me to focus on what I do best – development – and not have to worry about the intricacies of infrastructure or hosting concerns. It provides a robust, scalable platform which should allow me to scale up my app/site as needed, and hopefully stay in control of costs through increased visibility. This will allow me to evolve the application rapidly and deploy to the cloud automatically using the built-in integration with TFS/Git. I’ve often found myself in situations where in order to release code I have to:

  1. Build and test it locally
  2. Prepare a release package
  3. Give it to someone else and prepare installation instructions, then sit with them as they do the deployment
  4. Discover that a file was missing from the release package and start the process all over again

With Azure and automated deployments I should be able to streamline the development process so I can make and deploy changes rapidly, with much less overhead than in the past.

Challenge Two – Build a website


For this challenge I’m going to implement the ideas discussed earlier, and try to get a fully functional site up & running in Azure. I’ll provide details of the discoveries I make, and issues encountered, in the sections below. Note that I’ll also be recording daily progress as I go, in the History section of this article. For more detail on some of the daily items I covered, please read that section as I’ll be referring to it in other parts of this article.

If you’d like to skip ahead and view the end-product, you can do so on the live site.
You can also view all source code, including history, at my GitHub repository.


For this challenge, the application has a number of initial goals:

  • Allow users to create conferences, including both presentation and speaker details
  • Give them a nice SEO-friendly url that they can direct their audience to, so they can view conference and session details before the conference begins
  • Provide attractive pages for audiences to view conference details
  • Provide an embedded Google Hangout video feed on either the conference page, or an auxiliary page, so users can view presentations in realtime (and also view the relevant YouTube videos once the conference has finished).
  • When users are viewing a conference live, ensure they always have the most up-to-date feed url by using SignalR to push updates directly to their browser
  • Allow users to chat and interact with each other, and also the presenter, via a chat window on the same page as the live video feed
  • Implement some basic responsive design features (although not to the point of perfection as it takes a long time, and I have to do that in challenge 5!)
  • Technical – Allow me to push changes directly to Azure from my source-control repository without having to prepare a release package, with the additional option of deploying from my local machine if needed
  • Technical – Implement error logging with logs stored in some form of persistent storage
  • Financial – Try and minimize hosting costs by reducing the amount of outbound data where possible, and only scaling up when necessary
  • Plus one more little secret, which you’ll have to read to the end of this section to find out….

Let’s get started!

Creating the website

The first thing I needed was a website. I opened up Visual Studio 2012 and followed along with a tutorial on how to build an MVC4 website, naming my project/solution YouConf. Note that since I’m not using SQL for this part of the competition, I left the membership parts out (by commenting out the entire AccountController class so it doesn’t try to initialize the membership database). Whilst this means that users won’t be able to register, they will still be able to create and edit conferences; it’s just that all conferences will be publicly available for editing. More detail on this is in my daily progress report.

Once I had it building locally, the next step was to get it into Azure. To do this, I went to the Azure Management Portal, selected the Web Sites node, and hit the New button. I wanted the url to start with YouConf, so I entered youconf in the url field, and selected West US as the region since it’s closest to me (I’m in New Zealand!) as per the screenshot below:


Once I’d hit the Create site button I had a new site up & running just like that!

Next up I wanted to deploy to it, which required me to download the publish profile and import it into Visual Studio. To do so, I clicked on my YouConf site in the Azure Management Portal, then selected the Download the publish profile link. This opened up a window with the publish profile details, which I saved locally.

I then right-clicked on my YouConf web project in Visual Studio, and hit Publish. In the Publish dialog, I selected Import publish profile, and selected the .publishsettings file I’d saved earlier. I validated the connection using the button, chose Web Deploy as the publishing option, hit Next, and in the Settings section chose Release as the Configuration. I hit Next again, then hit Publish, and after about a minute I was able to browse my site in Azure. Now wasn’t that easy?!


Source Control Integration

Next up was getting source-control in place so that it would deploy automatically to Azure. I chose to use Git, mainly because I haven’t used it before and thought this would be a good opportunity to learn about it. I also wanted to be able to have a publicly-available source repository available for anyone to view, and having seen GitHub being used for this by others, thought I’d give it a go. Make no mistake, I love TFS, and use it on every other project, but for this I really wanted to push myself (although Azure makes it so easy that this wasn’t quite the case as you’ll see).

In order to get this working, I downloaded the Git explorer and set up a local youconf repository. I committed my changes locally, then synced my local changes to GitHub using the explorer. My Git repository is publicly available if you’d like to see the code.

Rather than pushing local changes directly to Azure, I wanted them first to go to GitHub so they’d be visible to anyone else who might want to have a poke around. To accomplish this I followed the steps in this article under the heading “Deploy files from a repository web site like BitBucket, CodePlex, Dropbox, GitHub, or Mercurial“.

*IMPORTANT* After publishing my changes to GitHub I realised that I’d included all of my publish profile files as well, which contained some sensitive Azure settings (not good). To remove them, I did a quick search and found a helpful article on removing sensitive data from a repository. The fix boiled down to removing the files from Git’s index (and from history) and committing the result.
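The exact commands I ran in the Git shell didn’t survive the move to this page, but the fix would have looked something like this sketch (the folder path is illustrative, not necessarily the layout in my repository):

```shell
# Stop tracking the publish profile folder, but keep the files on disk
git rm -r --cached Properties/PublishProfiles

# Commit the removal so the files disappear from the tip of the branch.
# Note: git filter-branch (or similar) is needed if the secrets must
# also be scrubbed from earlier commits before pushing.
git commit -m "Remove publish profiles from source control"
```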


I also added an entry to my .gitignore file so that I wouldn’t accidentally check in anything in the publish profile folder again:
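The entry itself was lost in the move to this page, but it would have looked something like the following (the patterns are a guess at the usual publish-profile file locations):

```
# Don't commit Web Deploy publish profiles or downloaded publish settings
*.publishsettings
PublishProfiles/
```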


After fixing those, I clicked on my website in the Azure portal, clicked the link under Integrate Source Control, and followed the steps, selecting my youconf repository in GitHub. About 20 seconds later – voila! – my site had been deployed to Azure from GitHub. Seriously, how easy was that?!! It took next to no time, and left me set to focus on development, as I’d set out to do from the beginning.

Building the application

From here on, most of my time was spent on building the functionality of the web app, which as I mentioned earlier was an MVC 4 web application. I started building some basic forms for creating/editing/deleting a conference, and was faced with my next challenge – where to store data? I wanted persistent storage with fast access, and an easy api to use. Since SQL wasn’t available (till challenge 3), Azure Table Storage seemed like the logical option. See this daily progress update for more on why I chose this.

Azure Table Storage, so many options….

As per this daily progress update, I got set up and read about Partition and Row Keys, finding one article in particular very helpful. There are plenty of tutorials available about Azure Table storage, which is handy, and I created a table by following one of them.

Azure allows you to use the storage emulator when developing locally, and then update your settings for Azure so that your app will use Azure Table storage when deployed to the cloud. I added the following line to my appsettings in web.config to tell Azure to use the development storage account locally:

<add key="StorageConnectionString" value="UseDevelopmentStorage=true" /> 

I created a YouConfDataContext class (see my GitHub repository) and accessed this connection string using the following code:

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
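For context, the GetTable helper used by YouConfDataContext in the code further down would look something like this (a sketch based on the Azure Storage 2.0 client library, not necessarily the exact code in my repository):

```csharp
private CloudTable GetTable(string tableName)
{
    //Resolve the account from config, so the same code hits the storage
    //emulator locally and real table storage when deployed to the cloud
    var storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting("StorageConnectionString"));
    var tableClient = storageAccount.CreateCloudTableClient();
    var table = tableClient.GetTableReference(tableName);
    //Make sure the table exists before we try to use it
    table.CreateIfNotExists();
    return table;
}
```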

Things seemed to be going well, but once I tried to save a conference I soon realized that I didn’t understand table storage quite as well as I’d thought! Basically I planned to store each conference, including speakers and presentations, as a single table entity, so that I could store/retrieve each conference in one go (as you would in a document oriented database). I started out writing code for the Conference class as below:

public class Conference
{
    public Conference()
    {
        Presentations = new List<Presentation>();
        Speakers = new List<Speaker>();
    }

    public string HashTag { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public IList<Presentation> Presentations { get; set; }
    public IList<Speaker> Speakers { get; set; }
    public string Abstract { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndTime { get; set; }
    public string TimeZone { get; set; }
}

When I tried to save one of these I ran into a bit of a roadblock though… Unfortunately you can only store primitive properties on a table entity, not child collections or complex child objects. DOH! So, how could I work around this? I found a number of options:

  • Store each object type as a separate entity, e.g. Conference, Speaker, and Presentation each get their own rows in the table. I wasn’t too keen on this as it seemed like more work than it was worth. Plus it seemed far more efficient to retrieve the whole conference in one hit rather than having to retrieve each entity separately and then combine them in the UI.
  • FatEntities – this looked very thorough, although I don’t think it was up to date with the latest Azure Table storage api
  • Lucifure – this also looked like it wasn’t up to date with the latest Azure Table storage api
  • Use an object deriving from TableEntity, with a single property containing the Conference serialized as a JSON string. In the end I chose this option as it was easy to implement and allowed me to store the whole conference in a single table row. I used JSON.Net as it’s already included in the default MVC4 project, and allows me to serialize/deserialize in one line.

Some sample code from my YouConfDataContext.cs class for doing Inserts/Updates is below:

public void UpsertConference(Conference conference)
{
    //Wrap the conference in our custom AzureTableEntity
    var table = GetTable("Conferences");
    var entity = new AzureTableEntity()
    {
        PartitionKey = "Conferences",
        RowKey = conference.HashTag,
        Entity = JsonConvert.SerializeObject(conference)
    };
    //Insert or update the conference
    TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
    table.Execute(upsertOperation);
}

where AzureTableEntity is just a wrapper class for a Table Entity:

public class AzureTableEntity : TableEntity
{
    public string Entity { get; set; }
}
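Reading a conference back out is the mirror image of the upsert above. My retrieval code looks something like this sketch (the method shape is mine, using the same GetTable helper the upsert relies on):

```csharp
public Conference GetConference(string hashTag)
{
    var table = GetTable("Conferences");
    //Point query by partition key + row key (i.e. the conference hashtag)
    var retrieve = TableOperation.Retrieve<AzureTableEntity>("Conferences", hashTag);
    var entity = table.Execute(retrieve).Result as AzureTableEntity;
    //Deserialize the JSON payload back into a full Conference object,
    //including its child speakers and presentations
    return entity == null
        ? null
        : JsonConvert.DeserializeObject<Conference>(entity.Entity);
}
```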

An advantage of this approach is that it makes it easy to visualize conference data as well. To view my data in the Azure storage emulator, I downloaded the wonderful Azure Storage Explorer and viewed my Conferences table as shown below (note that I can see each conference serialized as JSON easily):


So now I had my data being stored in Azure Table Storage locally, how could I get it working when deployed to the cloud? I just had to set up a storage account and update my Azure cloud settings accordingly.

I created a storage account named youconf, then copied the primary access key. I then went to the websites section, selected my youconf site, clicked Configure, then added my StorageConnectionString to the app settings section with the following value:

DefaultEndpointsProtocol=https;AccountName=youconf;AccountKey=[Mylongaccountkey]

Now when I deployed to Azure I could save data to table storage in the cloud.

Note that I ran into an issue when updating a conference’s hashtag, as this is also used for the rowkey in Azure Table storage, and in order to make an update I first had to delete the existing record, then insert the new one (with the new hashtag/rowkey). See this daily progress report for more details.
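I haven’t reproduced the exact code here, but the workaround amounts to something like the following sketch (the method name is illustrative; UpsertConference is the method shown earlier):

```csharp
public void UpdateConferenceHashTag(Conference conference, string oldHashTag)
{
    var table = GetTable("Conferences");
    //The rowkey can't be changed in place, so fetch and delete the old record...
    var retrieve = TableOperation.Retrieve<AzureTableEntity>("Conferences", oldHashTag);
    var existing = table.Execute(retrieve).Result as AzureTableEntity;
    if (existing != null)
    {
        table.Execute(TableOperation.Delete(existing));
    }
    //...then insert the conference again under its new hashtag/rowkey
    UpsertConference(conference);
}
```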

Site Features

As mentioned earlier, most of my time was spent on working with MVC and finding/fixing issues with the site as they arose, rather than having any issues with Azure itself. The following section outlines some of the application highlights, and how they address the goals described in the introduction. Note that I’ve created an example conference – WgtnDeveloperConf2013 – which contains an example presentation from dotNetConf to illustrate how the video and chat feeds work. Please feel free to add your own conference/speakers/presentations and give it a run-through.

Viewing Conferences – for participants

The conference listing page lists available conferences, and allows users to drill into the conference/speaker/presentation details if they wish to. It also provides users with an SEO-friendly url for their conference, based on their chosen conference hashtag. In order to achieve this I had to add a custom route for conferences which automatically routed the request to the ConferenceController when applicable, and also a route constraint to ensure that this didn’t break other controller routes. The code for adding my custom route is below (from the /App_Start/RouteConfig.cs file, abbreviated):

public static void RegisterRoutes(RouteCollection routes)
{
    routes.MapRoute(
        name: "ConferenceFriendlyUrl",
        url: "{hashTag}/{action}",
        defaults: new { controller = "Conference", action = "Details" },
        constraints: new { hashTag = new IsNotAControllerNameConstraint() }
    );
}

and the end result is a clean, hashtag-based url for each conference.
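I haven’t shown IsNotAControllerNameConstraint itself; a minimal sketch of the idea would look like this (the controller list is illustrative, not the actual list in my project):

```csharp
public class IsNotAControllerNameConstraint : IRouteConstraint
{
    //Reserved names that must keep routing to their own controllers
    private static readonly string[] ControllerNames = { "Home", "Conference", "Account", "Help" };

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
        RouteValueDictionary values, RouteDirection routeDirection)
    {
        var candidate = values[parameterName] as string;
        //Only treat the url segment as a conference hashtag if it
        //doesn't clash with an existing controller name
        return !ControllerNames.Contains(candidate, StringComparer.OrdinalIgnoreCase);
    }
}
```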


Easy to use conference management/maintenance screens

I used a number of techniques to help make it easier for those running conferences to maintain them. For example:

  • Inline tooltips using the jQuery Tools Tooltip functionality
  • jQuery Date/time picker for easy date/time selection (see daily progress report for detail)
  • Help and FAQ pages
  • Inline validation, including a dynamic lookup on the conference creation page to show whether a conference hashtag is available or not
  • A right-hand sidebar containing tips for end-users

See the conference edit page on the live site for an example of an edit form.

Embedded videos and Twitter Chat

Both of these involved obtaining code from Google/Twitter which created an embedded widget on the conference live page, based on the hangout id/twitter widget id associated with the conference. The dotNetConf site uses Jabbr for chat, however, I thought that I’d try and go for something that allowed for chat to be on the same page as the video feed. One of the commenters on my article suggested Twitter, which seemed like a good choice as it’s already so widely used. In the next stage I might also look at using SignalR for this if time permits.

The image below shows an example of a page with embedded video and chat (note that I used the hangout id for one of the dotNetConf videos for demonstration, and had to shrink the screenshot to fit into the CodeProject window):


Keeping the live video feed url up to date with SignalR

SignalR is a great tool for providing realtime updates to clients, and the Jabbr chat site provides a great example of how to harness this technology. For the live conference page I used SignalR in a similar way to dotNetConf to ensure that if a conference presenter updated the Google Hangout id for the conference, viewers would be provided with the updated url without having to refresh their page.

To install SignalR, I installed the Microsoft.AspNet.SignalR NuGet package (Install-Package Microsoft.AspNet.SignalR in the Package Manager Console).


I then set about building a SignalR hub and client. My main issue was how to push a notification to my SignalR hub from the ConferenceController. To give some context, here’s my YouConfHub class:

public class YouConfHub : Hub
{
    public Task UpdateConferenceVideoUrl(string conferenceHashTag, string url)
    {
        //Only update the clients for the specific conference
        return Clients.Group(conferenceHashTag).updateConferenceVideoUrl(url);
    }

    public Task Join(string conferenceHashTag)
    {
        return Groups.Add(Context.ConnectionId, conferenceHashTag);
    }
}

and my client javascript code:

<script src="//"></script>
    <script>$.signalR || document.write('<scr' + 'ipt src="~/scripts/jquery.signalr-1.0.1.min.js"></sc' + 'ript>');</script>
    <script src="~/signalr/hubs" type="text/javascript"></script>
    <script type="text/javascript">
        $(function () {
            $.connection.hub.logging = true;

            var youConfHub = $.connection.youConfHub;

            youConfHub.client.updateConferenceVideoUrl = function (hangoutId) {
                $("#video iframe").attr("src", "https://www.youtube.com/embed/" + hangoutId + "?autoplay=1");
            };

            //Once connected, join the group for the current conference
            //(conferenceHashTag is rendered into the page by the view)
            var joinGroup = function () {
                youConfHub.server.join(conferenceHashTag);
            };

            $.connection.hub.start().done(joinGroup);

            //If the connection drops, wait five seconds and try to reconnect
            $.connection.hub.disconnected(function () {
                setTimeout(function () {
                    $.connection.hub.start().done(joinGroup);
                }, 5000);
            });
        });
    </script>

See the UpdateConferenceVideoUrl method in the Hub? I wanted to call that from my ConferenceController when a user updated the conference hangout id/url, and thought I could do so by getting an instance of the Hub, then calling the method on it. E.g.

var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
context.UpdateConferenceVideoUrl("[conference hashtag]", "[new hangout id]");

Sadly, it turns out that you can’t actually call methods on the hub from outside the hub pipeline :( You can, however, call methods on the hub’s clients and groups. So, in my conference controller’s edit method, I was able to notify all clients for the specific conference that they should update their url, as follows:

if (existingConference.HangoutId != conference.HangoutId)
{
    //User has changed the conference hangout id, so notify any listeners/viewers
    //out there if they're watching (e.g. during the live conference streaming)
    var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
    context.Clients.Group(conference.HashTag).updateConferenceVideoUrl(conference.HangoutId);
}

Not too bad in the end eh?

Responsive design – Basic features

Responsive design is all the rage these days, and fair enough too given the proliferation of web-enabled devices out there. I won’t spend too long on this, except to say I’ve implemented a number of specific styles using media queries to make the bulk of the site look good across desktop, tablet, and mobile device resolutions. There’s a huge amount of information out there about responsive design, and I found articles by the Filament Group and Smashing Magazine very helpful in both understanding and fixing some of the issues. An example of one of my media queries, for widths below 760px (mobiles or small tablets), is below:

/*   Mobile Styles   */
@media only screen and (max-width: 760px) {
    .main-content aside.sidebar, .main-content .content {
        float: none;
        width: auto;
    }
    .main-content .content {
        padding-right: 0;
    }
}
I’ve included a screenshot below to show the homepage on a mobile device. It looks good, but there’s still work to do for future challenges….


Financial – reducing outbound traffic and scaling up only when necessary

For Azure websites, you’re only charged for outbound traffic, hence it makes sense both financially, and for usability, to reduce the amount of bandwidth your site consumes. I used a number of techniques to achieve this:

  • CSS and Javascript bundling/minification using the System.Web.Optimization framework provided by MVC
  • Using CDN-hosted javascript libraries where possible

For example, in the code below I try to load the jQuery library from the Microsoft Ajax CDN if possible, but if it’s not available, fallback to a local copy, which has already been minified to reduce bandwidth:

 <script src=""></script>
    <script>window.jQuery || document.write('<scr' + 'ipt src="@Scripts.Url("~/bundles/jquery")"></sc' + 'ript>');</script> 

I do the same for other CSS/Javascript too – see my code on GitHub for examples.
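On the bundling side, this is the standard System.Web.Optimization setup that ships with MVC4; my BundleConfig would contain something along these lines (the bundle contents are illustrative):

```csharp
public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        //Scripts in a bundle are combined and minified into a single request
        bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
            "~/Scripts/jquery-{version}.js"));

        //Stylesheets get the same treatment
        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        //Minification only kicks in when compilation debug="false",
        //or when this is set explicitly
        BundleTable.EnableOptimizations = true;
    }
}
```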


Error logging

Being able to log and retrieve exceptions for further analysis is key for any application, and it’s easy to get good quality logging set up in Azure, along with persistent storage of the logs for detailed analysis.

I’ve written quite a large article up on how I implemented logging in this daily progress report, so please see it for further technical details. In brief, I used Elmah for logging errors, with a custom logger that persisted errors to Azure Table storage. This means I can view my logs both on the server, and locally using Azure Storage explorer. Awesome!
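For reference, wiring a custom logger into Elmah is just a web.config registration; with a custom table-storage logger it would look something like the fragment below (the type name is illustrative, not the actual class name in my project):

```xml
<elmah>
  <errorLog type="YouConf.Infrastructure.Logging.TableStorageErrorLog, YouConf" />
</elmah>
```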

… and the little secret – a blog!

Since about day 3 I’d been thinking of moving my daily progress posts into a separate blog, as there’s enough information to make some of them worth an entire entry. I was also aware that one of the competition points for this section was around what we do with our other 9 websites. So I figured I’d see if it really was as easy to set up a blog as they make out.

As with logging, the bulk of the implementation details are included in this daily progress report. I managed to get the blog up & running without too much fuss, but thought I’d better not move all my content there, as it would mean having to cross-post content, and possibly make it harder to assess my article if content was in different places. Here’s a screenshot:


In Conclusion

It’s been quite an adventure this far, but I think I’ve managed to complete what I set out to achieve for challenge two, namely getting the site up & running in Azure with source control integration, and delivering the main features it was required to. I’ve used table storage both in the emulator and the cloud, and become much more familiar with the Azure platform as a whole. I’ve also gone through the process of setting up a blog, which was even easier than I thought it would be.

Finally – where are my tests? You may have noticed a distinct lack of unit tests, which I’m ashamed to say is at least partially intentional. Thus far my api has been changing so often that I felt adding tests would slow me down more than it was worth. I know this would drive TDD purists insane, but in my experience it’s sometimes helpful to wait till one’s api is more stable before adding tests, particularly when it comes to testing controllers. In addition to this, I’m going to be swapping out my table-based data access layer for SQL in challenge 3, so things are likely to change a lot more throughout the application. I will, however, at least add tests for my controllers at the start of challenge 3, so that I can verify I haven’t broken anything once I start adding SQL membership etc.

So what’s next?

Future challenges

For future challenges, there are a number of additional features to focus on:

  1. Adding membership and registration, so users can manage their own conferences privately. This is reliant on having SQL available, which ties into challenge 3 nicely
  2. Adding unit and integration tests, particularly for the controllers
  3. Adding the ability to upload Speaker photos and store them in BLOB storage
  4. Add SSL to secure the registration and authentication process
  5. Adding live feeds of the slides, possibly using SlideShare
  6. Doing further testing to ensure the site is fully responsive across desktop, tablet, and mobile devices, which will be the focus for challenge 5
  7. Add the ability to setup reminders either using vcards or SMS so a user can register interest in a session, then be reminded when it’s coming up.
  8. Perform further security hardening, such as removing parameter-tampering and possible XSS issues associated with MVC model-binding


Part one: I’ve registered for the free Azure offering of up to 10 websites, and just realised how generous the offer really is. Up to 10 websites!!! Hopefully we won’t need all of those, but you never know….

*I’ll try and post daily project updates, but if there are no entries for a given day, I either didn’t find time to work on the project, or was so caught-up in working on the project that I forgot to post an update.

