Azure Developer Challenge – YouConf – Day 13 (May 11)

Carried on updating my article and tidying up my code to fix all those little things, such as removing extraneous files that were no longer necessary. Also added an Easter egg for the spot challenge!! Script as below:

$(function () {
    //And for the second spot prize - make the Code Project man go wild!
    if (window.location.search.indexOf("crazymaneasteregg") > 0) {
        $("img")
        .attr("id", "crazyman")
        .attr("src", "http://www.codeproject.com/images/bob.png")
        .css("width", "278px")
        .css("height", "384px")
        .css("position", "absolute")
        .css("top", "0")
        .css("left", $(window).width() / 2 - 278 / 2)
        .css("display", "none")
        .appendTo("body");

        $("#crazyman")
        .animate({
            width: 'toggle',
            height: 'toggle'
        }, {
            duration: 5000,
            specialEasing: {
                width: 'linear',
                height: 'easeOutBounce'
            },
            complete: function () {
                $("#crazyman").hide("explode", 3000, function () {
                    $("<div style='position: absolute; top: 0; left: 0; width: 100%; height: 100%; text-align: center;background-color:#fff;font-size:50px;'>Easter egg time!</div>")
                    .appendTo("body")
                    .slideDown(1500)
                    .slideUp(1500, function () {
                        $("#crazyman").remove();
                        $(this).remove();
                    });
                });
            }
        });
    }
});

Azure Developer Challenge – YouConf – Day 12 (May 10)

Today I spent most of my time writing up the final article content for challenge two. I also implemented the SignalR functionality for keeping the live video URL up to date, as below.

SignalR

Keeping the live video feed url up to date with SignalR

SignalR is a great tool for providing real-time updates to clients, and the Jabbr chat site provides a great example of how to harness this technology. For the live conference page I used SignalR in a similar way to dotNetConf, to ensure that if a conference presenter updated the Google Hangout id for the conference, viewers would be given the updated URL without having to refresh their page.

To install SignalR, I added the SignalR NuGet package as below:

NugetSignalRClient

I then set about building a SignalR hub and client. My main issue came with how to push the notification to my SignalR hub from the Conference Controller. To give some context, here’s my YouConfHub class:

public class YouConfHub : Hub
{
    public Task UpdateConferenceVideoUrl(string conferenceHashTag, string url)
    {
        //Only update the clients for the specific conference
        return Clients.Group(conferenceHashTag).updateConferenceVideoUrl(url);
    }
    public Task Join(string conferenceHashTag)
    {
        return Groups.Add(Context.ConnectionId, conferenceHashTag);
    }
}  

and my client javascript code:

<script type="text/javascript" src="http://www.codeproject.com/ajax.aspnetcdn.com/ajax/signalr/jquery.signalr-.0.1.min.js"></script>
<script type="text/javascript">
// <![CDATA[$.signalR || document.write('<scr' + 'ipt src="~/scripts/jquery.signalr-1.0.1.min.js")></sc' + 'ript>');
// ]]>
</script>
<script type="text/javascript" src="~/signalr/hubs"></script>
<script type="text/javascript">
        $(function () {
            $.connection.hub.logging = true;
            var youConfHub = $.connection.youConfHub;
            youConfHub.client.updateConferenceVideoUrl = function (hangoutId) {
                $("#video iframe").attr("src",
                   "http://youtube.com/embed/" + hangoutId + "?autoplay=1");
            };
            var joinGroup = function () {
                youConfHub.server.join("@Model.HashTag");
            }
            //Once connected, join the group for the current conference.
            $.connection.hub.start(function () {
                joinGroup();
            });
            $.connection.hub.disconnected(function () {
                setTimeout(function () {
                    $.connection.hub.start();
                }, 5000);
            });
        });

   </script>
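
One piece of plumbing the snippets above rely on but don't show is the hub route registration. For SignalR 1.x this is typically done at application start so that the generated proxy at ~/signalr/hubs resolves; a minimal sketch, assuming a standard Global.asax.cs (the NuGet package documentation covers the exact steps):

using System.Web;
using System.Web.Routing;
using Microsoft.AspNet.SignalR; // MapHubs extension method (SignalR 1.x)

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Register the SignalR hub route before the standard MVC routes
        // so that /signalr/hubs and the hub endpoint resolve correctly.
        RouteTable.Routes.MapHubs();

        // ...the usual MVC registration (areas, filters, routes, bundles) follows here.
    }
}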

See the UpdateConferenceVideoUrl method in the Hub? I wanted to call that from my ConferenceController when a user updated the conference hangout id/url, and thought I could do so by getting an instance of the Hub, then calling the method on it. E.g.

var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
context.UpdateConferenceVideoUrl("[conference hashtag]", "[new hangout id]");

Sadly, it turns out that you can't actually call methods on the hub from outside the hub pipeline 😦 You can, however, call methods on the hub's clients and groups. So, in my conference controller's edit method, I was able to use the following code to notify all clients for the specific conference that they should update their URL:

if (existingConference.HangoutId != conference.HangoutId)
{
    //User has changed the conference hangout id, so notify any listeners/viewers
    // out there if they're watching (e.g. during the live conference streaming)
    var context = GlobalHost.ConnectionManager.GetHubContext<YouConfHub>();
    context.Clients.Group(conference.HashTag).updateConferenceVideoUrl(conference.HangoutId);
}

Not too bad in the end eh?

Article progress

My article is now almost complete, with just a few touchups required. I’ll probably spend the next day or two tidying up the site’s css, javascript etc and making sure I haven’t missed anything!

Azure Developer Challenge – YouConf – Day 11 (May 9th)

Quite a bit to report on today….

Setting up a Drupal blog website

Since about day 3 I'd been thinking of moving the posts on my daily progress into a separate blog, as there's enough information to make some of them worth an entire entry. I was also aware that one of the competition points for this section was around what we do with our other 9 websites. So I figured I'd see if it really was as easy to set up a blog as they made out in http://www.windowsazure.com/en-us/develop/php/tutorials/website-from-gallery/.

That tutorial, and a few others I found, set up WordPress blogs, so I thought why not try a different one to make things a bit more interesting. In the end I went with Drupal, as an old workmate of mine used to rave about it, and used the WordPress tutorial above as a guide. Here's what I did:

  1. Selected the Web Sites node in the Azure management screen and clicked New
  2. Selected Compute > Web site > From gallery
  3. Selected Acquia Drupal 7 (Note that later I realized there were specific blog applications, so if doing this again I would use one of those…)
    DrupalSelectFromGallery
  4. Chose the URL for my blog – youconfblog – and opted to create a new MySQL database.
    DrupalConfigureUrl
  5. Followed the rest of the prompts and provided my email address etc, and let it complete. I was then able to browse to my vanilla Drupal installation at http://youconfblog.azurewebsites.net/
  6. I then wanted to install a blog theme, and I found a nice looking one at http://drupal.org/project/responsive_blog
  7. To install it on my site, I found the .tar.gz url for the installation package – http://ftp.drupal.org/files/projects/responsive_blog-7.x-1.6.tar.gz – and in the admin section of my Drupal site, selected Appearance from the top menu, then Install new theme.
  8. I provided the url to the responsive blog package, and then let Drupal do its thing and complete the installation.
  9. I then configured the theme by going to the Settings page for the theme, and added my own YouConfBlog logo, and disabled the slideshow on the homepage.
    DrupalConfigureTheme

And now I have a nice themed Drupal site! http://youconfblog.azurewebsites.net

I then added a couple of blog entries for day one & two, by copying & pasting the html code from my CodeProject article into the blog entry.

What, wait a minute, aren’t we supposed to avoid duplication?

After getting my second day's progress blogpost into my Drupal site, I realized that if I were to copy & paste all the articles:

  1. It could take a while
  2. I’d have to do the same in future for all my other daily progress updates
  3. If I changed one, I’d have to update the other
  4. I wouldn’t be keeping in line with the CodeProject terms, which discourage you from posting content from CodeProject elsewhere
  5. I might make it harder for the judges to assess my article, as now they'd have to look in two places

In light of the above, I left my two initial blog posts intact, and decided that for now I’ll only post updates in my CodeProject article, since the goal of setting up the blog was to see if it really was as easy as others had made out (whilst learning along the way), which indeed it was. I’ll leave the blog in place though, as it deserves to be part of my entry for challenge two as one of the other 9 websites.

Error Logging

Usually one of the first things I do when creating a project is to set up error logging. Sometimes it's to a text file, sometimes to XML, sometimes to a database, depending on the application requirements. My favourite logging framework for .NET web apps is Elmah, as it takes care of catching unhandled exceptions and logging them to a local directory right out of the box. It has an extension for MVC too, which is awesome.

Elmah allows you to specify the route URL you'd like to use for viewing errors in your web.config. It also allows you to restrict access to the log viewer page if needed, using an authorization filter so you can specify which user roles should have access. At this stage I haven't implemented membership, and so can't restrict access via roles, so I'm going to leave remote access to the logs off (which it is by default). For part 3, when I implement membership, I'll update this. Note that for any production application I'd never leave the error log page open to the public, as it would give away far too much to anyone who happens to come snooping.

Right – to set up Elmah logging I did the following:

  1. Opened the NuGet Package Manager for my YouConf project in Visual Studio, and searched for Elmah as below
    ElmahNuget
  2. Selected the Elmah.Mvc package and installed it. This added an <elmah> section to my web.config, and also some app settings for configuring Elmah.
  3. Opened up my web.config and (using the ol’ security through obfuscation mantra) updated my elmah.mvc.route appsetting value to be a long complicated url – superdupersecretlogdirectorythatwillbeprotectedonceweimplementregistrationwithSqlandsimplemembership
  4. Fired up the local debugger and navigated to http://localhost:60539/superdupersecretlogdirectorythatwillbeprotectedonceweimplementregistrationwithSqlandsimplemembership
  5. Voila – we have an error viewer!
    ElmahLogViewer
  6. Now if I trigger an error by going to a dodgy url e.g. http://localhost:60539/// I should see an error appear in my list.
    ElmahLogViewerError
  7. And voila – there it is!

Logging to persistent storage

By default Elmah logs exceptions in-memory, which is great when you're developing, but not so good when you deploy to another environment and want to store your errors so you can analyze them later. So, how do we set up persistent storage?

In the past I've used a local XML file, which is really easy to configure in Elmah by adding the following line to the <elmah> section of your web.config:

<elmah>
  <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" />
</elmah>

This is fine if you’re working on a single server, or can log to a SAN or similar and then aggregate your log files for analysis. However, in our case we’re deploying to Azure, which means there are no guarantees that our site will stay on a single server for its whole lifetime. Not to mention that the site will be cleared each time we redeploy, along with any local log files. So what can we do?

One option is to set up Local Storage in our Azure instance. This gives us access to persistent storage that will not be affected by things like web role recycles or redeployments. To use this, we would need to:

  1. Set up local storage as per the following article (http://msdn.microsoft.com/en-us/library/windowsazure/ee758708.aspx)
  2. Configure our error logger to use this directory instead of App_Data (see the sketch after this list).
  3. Sit back and relax
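
For completeness, here's a rough sketch of what step 2 could look like. Note that "ElmahLogs" is a hypothetical local resource name that would need to be declared in the service definition, so treat this as an illustration rather than working YouConf code:

using Microsoft.WindowsAzure.ServiceRuntime;

public static class LocalStoragePaths
{
    public static string ErrorLogDirectory()
    {
        // "ElmahLogs" is a hypothetical local storage resource declared in the .csdef
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetLocalResource("ElmahLogs").RootPath;
        }
        // Fall back to App_Data when running outside the Azure role environment
        // (e.g. IIS Express during local development)
        return System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data");
    }
}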

The above solution would work fine, however, since I’m already using Azure Table storage, I thought why not use it for storing errors as well? After some googling I came upon the following package for using table storage with Elmah, but upon downloading the code realized it wasn’t up-to-date with the Azure Storage v2 SDK. It was easy to modify though, with the end result being the class below.

namespace YouConf.Infrastructure.Logging
{
    /// <summary>
	/// Based on http://www.wadewegner.com/2011/08/using-elmah-in-windows-azure-with-table-storage/
    /// Updated for Azure Storage v2 SDK
    /// </summary>
	public class TableErrorLog : ErrorLog
    {
        private string connectionString;
        public const string TableName = "Errors";
        private CloudTableClient GetTableClient()
        {
            // Retrieve the storage account from the connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
               CloudConfigurationManager.GetSetting("StorageConnectionString"));
            // Create the table client.
            return storageAccount.CreateCloudTableClient();
        }
        private CloudTable GetTable(string tableName)
        {
            var tableClient = GetTableClient();
            return tableClient.GetTableReference(tableName);
        }
        public override ErrorLogEntry GetError(string id)
        {
            var table = GetTable(TableName);
            TableQuery<ErrorEntity> query = new TableQuery<ErrorEntity>();
            TableOperation retrieveOperation = TableOperation.Retrieve<ErrorEntity>(TableName, id);
            TableResult retrievedResult = table.Execute(retrieveOperation);
            if (retrievedResult.Result == null)
            {
                return null;
            }
            return new ErrorLogEntry(this, id,
              ErrorXml.DecodeString(((ErrorEntity)retrievedResult.Result).SerializedError));
        }
        public override int GetErrors(int pageIndex, int pageSize, IList errorEntryList)
        {
            var count = 0;
            var table = GetTable(TableName);
            TableQuery<ErrorEntity> query = new TableQuery<ErrorEntity>()
            .Where(TableQuery.GenerateFilterCondition(
              "PartitionKey", QueryComparisons.Equal, TableName))
            .Take((pageIndex + 1) * pageSize);
            //NOTE: Ideally we'd use a continuation token
            // for paging, as currently we're retrieving all errors back
            //then paging in-memory. Running out of time though
            // so have to leave it as-is for now (which is how it was originally)
            var errors = table.ExecuteQuery(query)
                .Skip(pageIndex * pageSize);
            foreach (var error in errors)
            {
                errorEntryList.Add(new ErrorLogEntry(this, error.RowKey,
                    ErrorXml.DecodeString(error.SerializedError)));
                count += 1;
            }
            return count;
        }
        public override string Log(Error error)
        {
            var entity = new ErrorEntity(error);
            var table = GetTable(TableName);
            TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
            table.Execute(upsertOperation);
            return entity.RowKey;
        }
        public TableErrorLog(IDictionary config)
        {
            Initialize();
        }
        public TableErrorLog(string connectionString)
        {
            this.connectionString = connectionString;
            Initialize();
        }
        void Initialize()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
               CloudConfigurationManager.GetSetting("StorageConnectionString"));
            var tableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = tableClient.GetTableReference("Errors");
            table.CreateIfNotExists();
        }
    }
    public class ErrorEntity : TableEntity
    {
        public string SerializedError { get; set; }
        public ErrorEntity() { }
        public ErrorEntity(Error error)
            : base(TableErrorLog.TableName,
              (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("d19"))
        {
            PartitionKey = TableErrorLog.TableName;
            RowKey = (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks).ToString("d19");
            this.SerializedError = ErrorXml.EncodeString(error);
        }
    }
} 

This will log all errors to the Errors table in Azure table storage, and also take care of reading them back out again.
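
As the comment in GetErrors admits, the paging above happens in memory. If I get time later, a continuation-token approach would look something like the sketch below (reusing the table, query, pageIndex and pageSize variables from that method – this is an illustration, not code I've deployed):

// Pull segments until we have enough rows for the requested page,
// rather than materializing the whole table
TableContinuationToken token = null;
var fetched = new List<ErrorEntity>();
do
{
    var segment = table.ExecuteQuerySegmented(query, token);
    fetched.AddRange(segment.Results);
    token = segment.ContinuationToken;
} while (token != null && fetched.Count < (pageIndex + 1) * pageSize);

var page = fetched.Skip(pageIndex * pageSize).Take(pageSize);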

I also had to update my web.config to use the new logger class as follows:

<elmah>
    <errorLog type="YouConf.Infrastructure.Logging.TableErrorLog, YouConf" />
</elmah>

Now if I generate an error I'll still see it on the Elmah log view page, but I can also see it in my table storage. I'm using dev storage locally, so I can fire up the wonderful Azure Storage Explorer and view my Errors table as shown below:

AzureStorageExplorerErrors

and also on-screen:

ElmahLogViewerWithTableStorage

Lovely!

Azure Developer Challenge – YouConf – Day 7 – 10 (May 5 – 8)

Spent most of my time doing CSS and UI enhancements, and making the homepage look pretty. I tend to struggle with CSS and making things look beautiful at the best of times, particularly when I start to run into cross-browser issues. However, I think that I’ve come up with something that looks quite nice now – check it out at http://youconf.azurewebsites.net/

A few things I found helpful along the way…

jQuery buttons – make your buttons and links look pretty

jQuery UI comes with a button widget, which “Enhances standard form elements like buttons, inputs and anchors to themeable buttons with appropriate hover and active styles.” It makes them look quite nice, and since I already had jQuery UI included in the project (it comes bundled with the MVC4 Internet web application template) I thought I’d use it. One line of javascript was all that was needed:

<br />$("#main-content input[type=submit], #main-content a:not(.no-button), #main-content button")<br />.button();<br />

Note that I've scoped it to only include items within the main-content element to improve selector performance. The before and after are shown below:

jQueryButton

Nice Icons

On the subject of buttons, it's often nice to have icons for various buttons, not to mention in either your header or logo. I found a couple of sites that provide free icons released under the Creative Commons Attribution licence, and so I used a few of them (and included the relevant link-back in my site footer). The sites were:

Find Icons – http://findicons.com/
Icon Archive – http://www.iconarchive.com/

I also found a very cool logo generator at http://cooltext.com/, which I used to generate the text in the YouConf logo.

Social links – Twitter, Google Plus, Facebook

It's fairly easy to include links for the three big boys above since they provide code that you can embed either via iframe or JavaScript. Unfortunately they seem to take quite a long time to load, which can result in flickering of the icons as one populates after the other. To hide this I hacked away and ended up hiding the section containing the buttons until 5 seconds after the DOM had loaded. E.g.

setTimeout(function () {
    $("#social").show();
}, 5000);

I’m sure there’s a better way to do this, but I’m not sure I have time to find out just yet! Thanks to http://www.noamdesign.com/3-ways-to-integrate-social-media/ for the idea anyhow…

Azure + Source Control = a match made in heaven

Isn't it nice when things just work? All this time that I've been stressing away fixing bugs and getting my site looking nice, I haven't had a single issue with Git publishing to Azure. I simply check in my changes to my local repository as I complete features, and try to sync to GitHub a few times a day. Each time I sync to GitHub my changes are automatically pushed to my Azure website, usually within a few minutes. I've been able to focus on building my website and not fret over versioning or deployment issues. Phew!

Windows Azure Developer Challenge – YouConf – Day 6 (May 4)

More CSS and UI tidy-up. Things are starting to look better now – have a look at the live site to see it coming together.

JSON Serialization

When adding the functionality to delete a speaker, I ran into an issue where I would delete the speaker, but they would not be removed from the actual presentation. Here’s a snippet of the code from the Presentation class:

...
[Display(Name="Speaker/s")]
public IList<Speaker> Speakers { get; set; } 
... 

Now in the Delete method of my speaker controller, I have code like this to delete the speaker:

...
//Remove the speaker
conference.Speakers.Remove(currentSpeaker);
//Also remove them from any presentations...
foreach (var presentation in conference.Presentations)
{
    var speaker = presentation.Speakers.FirstOrDefault(x => x.Id == currentSpeaker.Id);
    presentation.Speakers.Remove(speaker);
}
YouConfDataContext.UpsertConference(conferenceHashTag, conference);
return RedirectToAction("Details", "Conference", new { hashTag = conferenceHashTag }); 
... 
 

Note the line which says presentation.Speakers.Remove(speaker)… with my default setup this wasn't actually removing the speaker, because by default JSON.Net doesn't preserve object references when serializing (remember that we're serializing the entire conference when we save it to table storage, then deserializing it on the way back out). This means that the speaker object I retrieved on the line beforehand is not actually the same instance as the one in the presentation.Speakers collection.

Initially I was going to override Equals on the Speaker class to have it compare them by Id, but then I did some googling and found that, sure enough, others had already run into this problem. It turns out JSON.Net (written by the coding fiend aka James Newton-King, who also happens to be in Wellington, NZ) already handles this situation and allows you to preserve object references! See http://johnnycode.com/2012/04/10/serializing-circular-references-with-json-net-and-entity-framework/ for more. Basically I just had to specify the right option when serializing the conference before saving, in my UpsertConference method:

var entity = new AzureTableEntity()
{
    PartitionKey = "Conferences",
    RowKey = conference.HashTag,
    //When serializing we want to make sure that object references are preserved
    Entity = JsonConvert.SerializeObject(conference,
      new JsonSerializerSettings { PreserveReferencesHandling = PreserveReferencesHandling.Objects })
};
TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
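
To illustrate why this fixes the Remove call – this is just a hedged round-trip sketch using the same Conference/Speaker types, not code from the site – with PreserveReferencesHandling enabled JSON.Net writes $id/$ref metadata, so after deserialization the speaker inside a presentation is once again the same instance as the one in conference.Speakers:

var settings = new JsonSerializerSettings
{
    PreserveReferencesHandling = PreserveReferencesHandling.Objects
};
var json = JsonConvert.SerializeObject(conference, settings);
var roundTripped = JsonConvert.DeserializeObject<Conference>(json, settings);

var speaker = roundTripped.Speakers.First();
// True when references are preserved; false with the default settings,
// which is exactly why the original Remove call was silently failing
var sameInstance = ReferenceEquals(
    speaker,
    roundTripped.Presentations.First().Speakers.First(s => s.Id == speaker.Id));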
 

Setting the width of textareas in MVC

Remember earlier how I said I could make MVC automatically render a textarea for a property by simply decorating the property with the [DataType(DataType.MultilineText)] attribute? Well, what if I want to specify the height/width of the textarea? CSS to the rescue!

The framework automatically adds the multi-line class to any textareas that it renders using the default editor template, which means I was able to add a style for this class and achieve the desired result. E.g.

.multi-line { height:15em; width:40em; }     

Windows Azure Developer Challenge – Day Four (May 2nd)

Currently working on the input screens for conferences and speakers. I really love the MVC framework, both how easy it is to use for common scenarios such as validation, and also how easy it is to extend through ModelBinders, DisplayTemplates etc. Some cool things I’ve discovered:

Display Templates/Editor Templates

Each conference has a TimeZoneId, such as (UTC-04:00) Atlantic Time (Canada). This is stored as a string property on the Conference, e.g.

public class Conference
{
    public string TimeZoneId { get; set; }
    ...
}

The advantage of just storing this as a string rather than a TimeZoneInfo is that I don’t need to write a custom modelbinder or custom validator as it’s just a plain old string, so the framework can take care of binding and validating it when it’s a mandatory field etc.

When adding/editing a conference I want to be able to display a dropdown list of all timezones, and have this automatically bound to the conference. To achieve this, I used code from http://romikoderbynew.com/2012/03/12/working-with-time-zones-in-asp-net-mvc/ and omitted the custom ModelBinder as I didn't need it. I created a new editor template named TimeZone in /Views/Shared/EditorTemplates, and also in /Views/Shared/DisplayTemplates, as follows:

@* Thanks to http://romikoderbynew.com/2012/03/12/working-with-time-zones-in-asp-net-mvc/*@
@model string
@{
    var timeZoneList = TimeZoneInfo
        .GetSystemTimeZones()
        .Select(t => new SelectListItem
        {
            Text = t.DisplayName,
            Value = t.Id,
            Selected = Model != null && t.Id == Model
        });
}
@Html.DropDownListFor(model => model, timeZoneList)
@Html.ValidationMessageFor(model => model)

This will handle displaying a dropdown with all timezones; however, I needed to tell the framework that when rendering the TimeZoneId property on a Conference it should use this template… and it turned out to be really easy! I just had to add a UIHint to the TimeZoneId property and it automagically wired it up. E.g.

[Required]
[UIHint("TimeZone"), Display(Name = "Time Zone")]
public string TimeZoneId { get; set; }

And that’s it! Now when I call .DisplayFor or .EditorFor in my views for the TimeZoneId property it automatically renders this template. In the view it looks like this:

<div class="editor-label">@Html.LabelFor(model => model.TimeZoneId)</div>
<div class="editor-field">@Html.EditorFor(model => model.TimeZoneId)
 @Html.ValidationMessageFor(model => model.TimeZoneId)</div>

and on-screen:

5TimeZoneDropdownList

BOOM!!!

Validation

Well that turned out to be as easy as adding the right attributes to the properties I wanted to validate. You’ll see above I added the [Required] attribute to the TimeZoneId property, which ensures a user has to enter it. I also added the [Display] attribute with a more user-friendly property name.
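
Server-side, those attributes are enforced during model binding, so the controller just needs to check ModelState before saving. The snippet below is a hedged sketch of that pattern rather than the exact YouConf action:

[HttpPost]
public ActionResult Create(Conference conference)
{
    if (!ModelState.IsValid)
    {
        // Redisplay the form with validation messages
        // (e.g. "The Time Zone field is required.")
        return View(conference);
    }
    YouConfDataContext.UpsertConference(conference.HashTag, conference);
    return RedirectToAction("Details", new { hashTag = conference.HashTag });
}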

Azure Table storage issues when updating a conference

When storing conferences, I used “Conferences” as the PartitionKey, and the conference HashTag as the RowKey, as each conference should have a unique HashTag. My UpsertConference code is as follows:

public void UpsertConference(Conference conference)
{
    //Wrap the conference in our custom AzureTableEntity
    var table = GetTable("Conferences");
    var entity = new AzureTableEntity()
    {
        PartitionKey = "Conferences",
        RowKey = conference.HashTag,
        Entity = JsonConvert.SerializeObject(conference)
    };
    TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
    // Insert or update the conference
    table.Execute(upsertOperation);
}

Unfortunately this means that if I were to update a conference's HashTag, a new record would be inserted as the .InsertOrReplace code thinks it's a completely new entry. To work around this, I had to find the old conference record first using the old HashTag, delete it, then insert the conference again with the new HashTag. It feels a bit clunky, especially since it's not wrapped in a transaction or batch (see the batch sketch after the code below), but as I mention in my comments, this is something I'll be refactoring to use SQL Server in Part 3 of the competition, so I'm not stressing too much over it at the moment. The updated code is as follows:

public void DeleteConference(string hashTag)
{
    var table = GetTable("Conferences");
    TableQuery<AzureTableEntity> query = new TableQuery<AzureTableEntity>();
    TableOperation retrieveOperation =
      TableOperation.Retrieve<AzureTableEntity>("Conferences", hashTag);
    TableResult retrievedResult = table.Execute(retrieveOperation);
    if (retrievedResult.Result != null)
    {
        TableOperation deleteOperation = TableOperation.Delete((AzureTableEntity)retrievedResult.Result);
        // Execute the operation.
        table.Execute(deleteOperation);
    }
}
/// <summary>
/// Inserts or updates a conference
/// </summary>
/// <param name="hashTag">The hashTag of the existing conference
/// (for updates) or the hashTag of the new conference (for inserts)</param>
/// <param name="conference">The conference itself</param>
public void UpsertConference(string hashTag, Conference conference)
{
    //Wrap the conference in our custom AzureTableEntity
    var table = GetTable("Conferences");
    //We're using the HashTag as the RowKey, so if it gets changed
    // we have to remove the existing record and insert a new one
    //Yes I know that if the code fails after the deletion we could be left
    // with no conference.... Maybe look at doing this in a batch operation instead?
    //Once I move this over to SQL for part 3 we can wrap it in a transaction
    if (hashTag != conference.HashTag)
    {
        DeleteConference(hashTag);
    }
    var entity = new AzureTableEntity()
    {
        PartitionKey = "Conferences",
        RowKey = conference.HashTag,
        Entity = JsonConvert.SerializeObject(conference)
    };
    TableOperation upsertOperation = TableOperation.InsertOrReplace(entity);
    // Insert or update the conference
    table.Execute(upsertOperation);
}
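
On the batch idea mentioned in the comments above: both the old and new records live in the "Conferences" partition, so the delete and insert could in principle be combined into a single entity group transaction. A hedged sketch of how that might look inside UpsertConference when the HashTag has changed (not what's currently deployed):

//Look up the existing record, same as DeleteConference does
TableOperation retrieveOperation =
  TableOperation.Retrieve<AzureTableEntity>("Conferences", hashTag);
var existing = (AzureTableEntity)table.Execute(retrieveOperation).Result;

var batch = new TableBatchOperation();
if (existing != null)
{
    batch.Delete(existing);
}
batch.InsertOrReplace(entity);
//A batch against a single partition executes atomically, so we can't end up
//with the old record deleted but the new one never written
table.ExecuteBatch(batch);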

CRUD

I've found it fairly easy to perform simple CRUD operations using Table storage thus far, with the minor issue relating to updating an entity's RowKey. While developing locally I used development storage by setting my web.config storage connection string as follows: <add key="StorageConnectionString" value="UseDevelopmentStorage=true" />. In order to get this working in the cloud I just had to set up a storage account and update my Azure cloud settings as per http://www.windowsazure.com/en-us/develop/net/how-to-guides/table-services/

I created a storage account named youconf, then copied the primary access key. I then went to the Web Sites section, selected my youconf site, clicked Configure, then added my StorageConnectionString to the app settings section with the following value:

DefaultEndpointsProtocol=https;AccountName=youconf;AccountKey=[Mylongaccountkey] 
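
For reference, the GetTable helper that the conference data-access code above keeps calling isn't shown in this post (the TableErrorLog class has its own version). A minimal sketch of what the data context's helper might look like, reading the same StorageConnectionString setting:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

private CloudTable GetTable(string tableName)
{
    //CloudConfigurationManager checks the cloud service configuration first and
    //falls back to appSettings, so the same code works with UseDevelopmentStorage=true
    //locally and the real storage account in Azure
    var storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting("StorageConnectionString"));
    var tableClient = storageAccount.CreateCloudTableClient();
    var table = tableClient.GetTableReference(tableName);
    table.CreateIfNotExists();
    return table;
}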

Date/time and TimeZone fun

I’ve had to do a bit more work than expected with the date/times, given that when a conference is created, the creator can select a start/end date/time, and also a timezone. The same goes for a Presentation, which has a start date/time, duration, and timezone.

Initially I was going to store them in local format, along with the timezone Id (as they appear to be stored in dotNetConf, from reading Scott's blog post). However, after doing some reading on the subject of storing date/time information, I gathered that it's best to store datetimes in UTC, then convert them into either the user's timezone, or your chosen timezone (such as the event timezone), as close to the UI as possible. This allows for easier comparisons in server-side code, and also makes it easy to order conferences and presentations by date/time, e.g.

@foreach (var presentation in Model.Presentations.OrderBy(x => x.StartTime))

http://stackoverflow.com/questions/2532729/daylight-saving-time-and-timezone-best-practices seems to be an article that I keep coming back to whenever I do anything involving datetimes and different timezones, and I read it once again to re-familiarise myself with how to go about things.

So, a user enters the datetime in their chosen timezone, selects the timezone from a dropdown list, and hits Submit. In order to store the date in UTC I need code such as this in the controller, or possibly in a ModelBinder (I haven't tried using a custom ModelBinder yet though):

var conferenceTimeZone = TimeZoneInfo.FindSystemTimeZoneById(conference.TimeZoneId);
conference.StartDate = TimeZoneInfo.ConvertTimeToUtc(conference.StartDate, conferenceTimeZone);
conference.EndDate = TimeZoneInfo.ConvertTimeToUtc(conference.EndDate, conferenceTimeZone);

… then to render it back out again in the local timezone I created a custom editor template called LocalDateTime.cshtml. Note that I also add a date class to the input field, so that I can identify any date fields using jQuery when wiring up a datetime picker (more on that later).

@model DateTime
@{
    var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById((string)ViewBag.TimeZoneId);
    var localDateTime = Model.UtcToLocal(localTimeZone);
}
@Html.TextBox("", localDateTime.ToString(),
  new { @class = "date",
  @Value = localDateTime.ToString("yyyy-MM-dd HH:mm") })

.. and to use this template, I can either decorate the relevant properties on my Conference/Presentation classes with a UIHint, or specify the editor template directly from another view. For example, here’s some of the code from /Views/Conference/Edit.cshtml:

@Html.LabelFor(model => model.StartDate)
@Html.EditorFor(model => model.StartDate, "LocalDateTime",
  new { TimeZoneId = Model.TimeZoneId }) @Html.ValidationMessageFor(model => model.StartDate)

Note the 2nd parameter, which specifies the editor template that I want to use. I also pass in the TimeZoneId of the conference as a parameter to the LocalDateTime editor template.
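
The LocalDateTime template also calls Model.UtcToLocal(localTimeZone), which is a small extension method I haven't shown in this post. A sketch of what it might look like, assuming the stored value is UTC:

public static class DateTimeExtensions
{
    public static DateTime UtcToLocal(this DateTime utcDateTime, TimeZoneInfo timeZone)
    {
        //Make sure the kind is Utc so ConvertTimeFromUtc doesn't throw
        var utc = DateTime.SpecifyKind(utcDateTime, DateTimeKind.Utc);
        return TimeZoneInfo.ConvertTimeFromUtc(utc, timeZone);
    }
}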

The UI – How to display date/times?

I was investigating how best to render date/times, and was initially looking at using dual input boxes, with one holding the date, and one holding the time, as per yet another of Scott’s articles. However, after getting partway through implementing that, I discovered an amazing jQuery datetimepicker plugin at http://trentrichardson.com/examples/timepicker/ which extends the existing jQuery datepicker.

By using that I was able to get away with having a single input box containing both the date AND time, along with a nice picker to help users. It really is cool, and only takes a single line of code to add:

$(function () {
    $(".date").datetimepicker({ dateFormat: 'yy-mm-dd' });
});

… and the resulting UI looks pretty good to me!

6DateTimePicker