Ray Ozzie opened the keynotes today with a general introduction to MIX07. The overview focused a lot on the release of Silverlight, Microsoft’s Flash competitor. It was an interesting talk, but mostly uneventful.

After that, Scott Guthrie came up and that’s when the really good stuff started. Lots of great announcements:

  • Silverlight is out, released for download just a little bit ago.
  • Silverlight comes with a cross-platform .NET framework that runs in the browser. With that comes a lot of interesting things:
    • Initial support is for Firefox, IE, and Safari. Yes, it runs on Mac.
    • You can now write client-side code on any Silverlight-enabled browser in any .NET language you like.
    • Client-side code in .NET has HTML DOM access including all of the browser components (status bar, etc.) and runs thousands of times faster than JavaScript.
    • There are robust data services including LINQ and caching built-in.
  • There’s a new service called Silverlight Streaming that lets you upload your Silverlight application and assets, up to 4GB, and Microsoft will host it for free. That’s a huge bandwidth-saver for folks wanting to use Silverlight to stream video, etc.
  • New Visual Studio (Orcas) feature - Core CLR Remote Cross-Platform Debugging. You can debug a Silverlight application at runtime while it executes in a browser on another machine, including remote debugging against a Safari instance on a Mac. This is huge. Guthrie demonstrated one of these sessions, intercepting events and changing values on the fly in the debug session, with those changes reflected in real time in the target session. Very, very cool.
  • Silverlight projects seem to work like other Visual Studio projects, including the ability to “Add Web Reference” and have your Silverlight applications call web services.
  • If you have a web application project in Visual Studio, you can put it in the same solution as your Silverlight app and then select “Add Silverlight Link” on the web application. When you build your web app, the Silverlight app automatically rebuilds and deploys.
  • The dynamic language support in .NET is growing. They’ve got support for Python and JavaScript and are adding official support for Ruby via IronRuby. They’ll be releasing that source just like IronPython.
  • This dynamic language support has an additional meaning - you can write your Silverlight apps in any of those languages as well. And, again, they’ll run cross-platform. Huge.
  • It installs in like three seconds. The demo showed a user experience for someone coming to a Silverlight app and not having the plugin installed. From the point where the user clicks the “Install” link to the point where the app is running was about three seconds. It was super fast.
  • After Summer 07 they’ll be adding even better mobile support. It has pretty good support now (also demonstrated) but I guess they’re adding more.

There seems to be a big focus on delivering video with Silverlight. Most of the demos they showed involved integrating video. It does a lot more than that, and I can envision a lot of cool XAML-based apps I could write, but there’s a huge video push, going so far as to have Netflix come in and demonstrate an application where you can watch movies on demand.

The Silverlight community site is at http://www.silverlight.net. Check it out.

The session on AJAX patterns was very cool. In one demo application (a photo album application), six specific patterns were addressed, along with a bit about how to implement each one.

Pattern - Script to Enable Interactivity

Sort of a no-brainer - using script to enable interactive elements is the basis of a rich application. In this particular pattern, though, it was more about making it easy to script what you’re looking to do. ASP.NET AJAX offers a lot of shortcuts to help you do that scripting.

This pattern also addressed the notion of separating script from behavior. ASP.NET AJAX introduces the notion of “extender controls” that allow you to use server controls to modify the behavior of controls in the page. An example was shown where some existing markup got modified by adding an extender - a server control registering script to modify HTML on the client side. It’s a great way to do the separation.
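
To make that separation concrete, here’s a rough, hand-rolled sketch of wiring behavior up from script instead of inline event attributes. The $get and $addHandler shortcuts are from the ASP.NET AJAX client library; the element ID and the hover behavior are made up for illustration, and this isn’t the extender control API itself - just the client-side idea behind it.

    // Assumes the Microsoft AJAX client library is on the page (e.g., loaded
    // by a ScriptManager). The "photoThumbnail" element ID is hypothetical.
    function pageLoad() {
        var thumbnail = $get("photoThumbnail"); // shorthand for document.getElementById
        if (thumbnail) {
            // Attach behavior from script rather than via inline onmouseover/onmouseout
            // attributes baked into the markup.
            $addHandler(thumbnail, "mouseover", function () {
                thumbnail.style.borderColor = "#ffcc00";
            });
            $addHandler(thumbnail, "mouseout", function () {
                thumbnail.style.borderColor = "";
            });
        }
    }

An extender control generates roughly this kind of script for you on the server side, which is what keeps the markup clean.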

Pattern - Logical Navigation

AJAX applications have typically lost the ability to use the back/forward buttons and the ability to bookmark a page. ASP.NET Futures contains a “History” control that allows you to enable your AJAX elements to support state, sort of like ViewState, but on the URL. Modifying the page contents modifies the browser URL and, thus, enables logical navigation and bookmarking. As long as your scripts store enough history state to be able to recreate a logical view, this looks like a great way to overcome some shortcomings in AJAX.
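
The History control does this for you, but the underlying idea is easy to sketch in plain JavaScript: stash enough state in the URL fragment to recreate the view, and restore it when the page loads. This is just the general technique, not the Futures History control API, and the album/page state and showAlbumPage function are hypothetical.

    // Minimal sketch of hash-based "logical navigation."
    function saveNavigationState(album, page) {
        // Changing location.hash adds a browser history entry and makes the
        // current view bookmarkable without reloading the page.
        window.location.hash = "album=" + encodeURIComponent(album) + "&page=" + page;
    }

    function restoreNavigationState() {
        var hash = window.location.hash.replace(/^#/, "");
        if (!hash) { return; }
        var state = {};
        var pairs = hash.split("&");
        for (var i = 0; i < pairs.length; i++) {
            var parts = pairs[i].split("=");
            state[parts[0]] = decodeURIComponent(parts[1] || "");
        }
        showAlbumPage(state.album, parseInt(state.page, 10) || 1);
    }

    // Hypothetical re-render function for the photo album demo.
    function showAlbumPage(album, page) {
        document.title = (album || "All photos") + " - page " + page;
    }

    // Restore state on initial load; detecting hash changes after load takes
    // polling in browsers of this era, which is omitted here for brevity.
    window.onload = restoreNavigationState;

Call saveNavigationState whenever the view changes (say, when the user pages through the album) and the back button and bookmarks start behaving again.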

Pattern - Update Indicators

Notifying a user of what changed when an AJAX request finishes is helpful so they can see the results of an action. The UpdateAnimation control in ASP.NET AJAX is one way to do that - it performs AJAX updates in an animated fashion so the movement draws the user’s eye to the change. There is a prototype UpdateIndicator control that scrolls the page to the location of the change and does a highlight animation on the change; this isn’t in ASP.NET AJAX now but will hopefully be in the future.
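
For flavor, here’s a rough sketch of the “scroll to the change and highlight it” effect in plain script. This is not the prototype UpdateIndicator control, just the idea; the element ID is whatever you happened to update.

    // Scroll the changed element into view and flash a highlight on it.
    function indicateUpdate(elementId) {
        var element = document.getElementById(elementId);
        if (!element) { return; }

        element.scrollIntoView(true);

        // Simple highlight "animation": flash a background color, then put the
        // original color back after a moment.
        var originalBackground = element.style.backgroundColor;
        element.style.backgroundColor = "#ffff99";
        window.setTimeout(function () {
            element.style.backgroundColor = originalBackground;
        }, 1500);
    }

You would call something like this when your async postback completes - for example, from the endRequest event of the Sys.WebForms.PageRequestManager if you’re using UpdatePanels.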

Pattern - Smart Data Access

Possibly a poorly-named pattern, but the idea is that you should use HTML properly such that external services like search engine crawlers or programmatic site map generators can correctly access/index the content you post. Use tags in the correct semantic sense (e.g., if it’s not a header, don’t put it in <h1 /> tags). Also, keep in mind the way you display pages in non-scripted environments, such as in a search engine crawler or when the user has script disabled. Your content should look good either way.

Pattern - Mashups (Using External Services)

There’s a lot of data out there, and a lot of services providing added value. Make use of them where you can. The example shown was a call to Flickr to get images and data.

What was interesting about the discussion of this pattern was less the “what” than the “how.” Browsers don’t let script make cross-domain requests (the same-origin policy), so you have one of two options for getting third-party data into your application.

You can use a server-side proxy where you create a proxy on your site that requests the third-party data. Your application then talks to your proxy to get the data. This is a good general-purpose solution and allows you to take advantage of things like caching calls on your site and gives you the ability to manipulate the data before passing it to the client (possibly optimizing it). The downside is that it does use up your server’s bandwidth.
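
As a sketch of the proxy approach, the snippet below calls a same-origin endpoint with XMLHttpRequest. The /FlickrProxy.ashx endpoint and its tags parameter are hypothetical - the proxy lives on your own server and makes the actual call out to the remote service, caching or trimming the data as it sees fit.

    // Ask our own server-side proxy for third-party data.
    function loadPhotosViaProxy(tags, onSuccess) {
        var request = window.XMLHttpRequest
            ? new XMLHttpRequest()
            : new ActiveXObject("Microsoft.XMLHTTP"); // older IE fallback
        request.open("GET", "/FlickrProxy.ashx?tags=" + encodeURIComponent(tags), true);
        request.onreadystatechange = function () {
            if (request.readyState === 4 && request.status === 200) {
                // eval is shown for brevity; a real JSON parser is safer.
                onSuccess(eval("(" + request.responseText + ")"));
            }
        };
        request.send(null);
    }

Because the browser only ever talks to your domain, the same-origin policy never comes into play - at the cost of your bandwidth, as noted.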

The other option is JSONP: you add a script reference to your page that requests data in JSON format from a third-party service, and when that data comes back it gets passed to a callback you specify. ASP.NET AJAX supports this by letting you specify your own executor for an AJAX call, so the result of the call gets passed to your callback.
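
Here are the bare-bones JSONP mechanics, independent of ASP.NET AJAX: inject a script tag pointing at the remote service and let it wrap its JSON response in the callback you name. The endpoint URL, its callback parameter name, and the shape of the returned data are all made up here - real services document their own.

    // The third-party service invokes this function, passing the data as a
    // plain JavaScript object (the "items" property is hypothetical).
    function handlePhotos(data) {
        alert("Got " + data.items.length + " photos");
    }

    function loadPhotosViaJsonp() {
        var script = document.createElement("script");
        script.type = "text/javascript";
        script.src = "http://example.com/photos?format=json&callback=handlePhotos";
        document.getElementsByTagName("head")[0].appendChild(script);
    }

The obvious trade-off versus the proxy: no server bandwidth used, but you’re trusting the third party’s script to run in your page.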

More Resources

  • ASP.NET AJAX
  • AJAX Patterns
  • Yahoo! Design Pattern Library

The conference technically starts tomorrow, but I’m in town a day early to get settled so I can be there, bright-eyed and bushy-tailed. Or at least bright-eyed.

There was a mashup session that ran from 4:00p to 8:00p alongside registration, but by the time I got here, got registered, got to the room, and got something to eat… well, I also got a little tired and didn’t really feel like throwing down with the mad technical skillz. Instead, I thought it would be prudent to take it easy - it’s been a long day, and I do want to be ready to pay attention and learn in some of the great sessions planned.

My schedule looks like this:

Monday, April 30

  • 9:30a - General Session
  • 1:30p - Building Rich Web Experiences using Silverlight and JavaScript for Developers
  • 3:00p - Using Visual Studio “Orcas” to Design and Develop Rich AJAX Enabled Web Sites
  • 4:30p - AJAX Patterns with ASP.NET

Tuesday, May 1

  • 8:30a - Front-Ending the Web with Microsoft Office
  • 10:15a - Designing with AJAX: Yahoo! Pattern Library
  • 11:45a - Developing ASP.NET AJAX Controls with Silverlight
  • 2:15p - Go Deep with AJAX
  • 4:00p - General Session

Wednesday, May 2

  • 8:30a - How to Make AJAX Applications Scream on the Client
  • 10:00a - Windows Presentation Foundation for Developers (Part 1 of 2)
  • 11:30a - Windows Presentation Foundation for Developers (Part 2 of 2)

Interestingly, this isn’t the schedule as I originally planned it on Friday. Even up to the last minute the times, places, and topics are changing. I don’t know if this is the set of classes I’ll actually be in or not, but we’ll see.

Getting into the spirit of things, I’ve joined Facebook and Twitter since those seem to be ways folks are supposed to coordinate things. I’m not super taken with either one, but then, I’m not a big “social networker.” I’ll withhold judgment for now.

gaming, xbox

So about eight months ago I had to send my Xbox 360 in for repair and they sent me back a refurbished console. Due to the crazy, crappy DRM scheme they have on the content you get from Xbox Live Marketplace (which includes Xbox Live Arcade games), that meant I had to jump through a bunch of hoops to get the games on my system to work correctly again.

Well, I just got my Xbox 360 back from my recent bout with the Red Ring of Death and guess what - they sent me another refurb.

Which, of course, means I get to go through the hoops a second time. That’s right - I get to create a second dummy Xbox Live Silver membership (because I can’t use the dummy account I created last time around), have them refund me points to that account, and then use that account to re-purchase everything. Again.

Net result is that I spent like an hour last night taking inventory of all of the Xbox Live Arcade games we’ve purchased, figuring out which account we originally bought them with, and determining the price for each game as listed in the Xbox Live Marketplace.

I then called Xbox Live Support and after explaining the situation to one of the representatives, he mentioned that I should just be able to go in with the account I purchased the games with, hit Xbox Live Arcade, and select the “re-download” option (without deleting the game from the hard drive first) and it should authorize the new console.

That doesn’t work.

The call got escalated to the supervisor, who spent time going through my account and my wife’s account and adding up all of the things we’ve purchased. Problem there is that their history only goes back one year, so they don’t actually have a visible record of what you purchased beyond that… so they argue with you when you tell them, say, that you bought one of the Xbox Live Gold packages at a retail outlet over a year ago (because you’ve renewed since then) and it came with a copy of Bankshot Billiards 2, and yes, you’d like to have that re-authorized on the console as well.

After all of that, they still came up with a different number of points that they owe me than I did. You know why? Because they use the number of points you originally spent on the game as a guide, not today’s prices. And prices have gone up, so now the game you paid 400 points for six months ago costs 800 points if you want to buy it today but they only want to give you the 400 points you originally paid. Obviously, that causes a little contention on the phone, but the best the supervisor can do is put a note in there that mentions your concern because…

…there’s a guy named Eric whose job it is, apparently, to call all of the people that this happens to and hash out the whole “Points After Repair” thing (yes, they have an actual name for it, which sort of tells you something). I get to argue with Eric about the difference in what they think they owe me and what they actually owe me, and that discussion will happen in “approximately five business days.”

And there it sits. A couple of hours of work and phone calls later and I’m hanging on for Eric to call me and give me points so I can re-purchase and re-download the games I already own so my console works like it should again. Awesome.

subtext, blog, xml

I’ve been looking for a while to migrate off this infernal pMachine blog engine I’m on. The major problem is how to migrate my data to the new platform. Enter BlogML.

BlogML is an XML format for the contents of a blog. You can read about it and download it on the CodePlex BlogML site. They’re currently at version 2.0, which implies there was a 1.0 somewhere along the line that I missed.

Anyway, the general idea is that you can export blog contents in BlogML from one blog engine and import into another blog engine, effectively migrating your content. Thus began my journey down the BlogML road.

If you download BlogML from the site it comes with an XSD schema for BlogML, a sample BlogML export file, a .NET API, and a schema validator.

I didn’t use the .NET API because pMachine is in PHP and all of the routines for extracting data are already in PHP, so I wrote my pMachine BlogML exporter in - wait for it - PHP. As such, I can’t really lend any commentary to the quality of the API’s functionality. That said, a quick perusal of the source shows that there are almost no comments and the rest looks a lot like generated, XmlSerializer-style code.

The schema validator is a pretty basic .NET application that can validate any XML against any schema - you select the schema and the XML files manually and it just runs validation. This actually makes it troublesome to use; you’d think the schema would be embedded by default. If you have some other schema validation tool, feel free to ignore the one that comes with BlogML.

The real meat of BlogML is the schema. That’s where the value of BlogML is - in defining the standard format for the blog contents.

The overall format of things seems to have been thought out pretty well. The schema accounts for posts, comments and trackbacks on each post, categories, attachments, and authors. I was pretty easily able to map the blog contents of pMachine into the requisite structure for BlogML.

There are three downsides with the schema:

First, the schema could really stand to be cleaned up. This may not be obvious if you’re editing the thing in a straight text editor, but when you throw it into something like XMLSpy, you can see the issues. Things could be made simpler by better use of common base types that get extended. There are odd things like an empty, hanging element sequence in one of the types. Generally speaking, a good tidy-up might make it a lot easier to use, because…

Second, the documentation is super duper light. I think there are like 10 lines of documentation in the schema, tops, and there’s nothing outside the schema that explains it, either. Without going back and forth between the schema and the sample document, I’d have no idea what exactly was supposed to be where, what the format of things needed to be, etc.

Third, and admittedly this may be more pMachine-specific, there’s no notion of distinguishing between a “trackback” and a “pingback.” There’s only a “trackback” entity in the schema, so if your blog supports the notion of a “pingback,” you will lose the differentiation when you export.

Anyway, I planned on importing my blog into Subtext, so I set up a test site on my development machine, ran the export on my pMachine blog (through a utility I wrote; I’m going to do some fine-tuning and release it for all you stranded pMachine users) and did the import. This is where I started noticing the real shortcomings in BlogML proper. These fall into two categories:

Shortcoming 1: Links

If you’ve had a blog for any length of time, you’ve got posts that link to other posts. That works great if your link format doesn’t change. If I’m moving from pMachine to Subtext, though, I don’t want to have to keep my old PHP blog around (hence “moving”), and, if possible, I’d like to have any intra-site links get updated. There doesn’t seem to be any notion in BlogML of pre-defining a “new link mapping” (like being able to say “for this post here, its new link will be here”) so import engines can convert content on the fly. There’s also no notion of a response from an import engine to be able to say “Here’s the old post ID, here’s the new one” so you can write your own redirection setup (which you will have to do, regardless of whether you update the links inside the posts).

I think there needs to be a little more with respect to link and post ID handling. BlogML might be great for defining the contents of a blog from an export standpoint, but it doesn’t really help from an import standpoint. Maybe a second schema for old-ID-to-new-ID mapping (or even old-ID-to-new-post-URL) that blog import engines could return when they finish importing would address the mapping issue. As it stands, I’m going to be doing some manual calculation and post-import work.
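
Just to illustrate the kind of post-import work I mean, here’s a quick sketch of rewriting intra-site links in post content given an old-URL-to-new-URL map. Nothing here is part of BlogML or Subtext - the map and the URL patterns are made up, and in practice this would be a one-off migration script.

    // Hypothetical old-to-new URL map, built by hand (or from whatever mapping
    // an import engine could report back if such a schema existed).
    var linkMap = {
        "/pmachine/comments.php?id=123": "/archive/2007/04/30/some-post.aspx",
        "/pmachine/comments.php?id=456": "/archive/2006/11/12/another-post.aspx"
    };

    // Rewrite any intra-site links in a post body that appear in the map.
    function rewriteIntraSiteLinks(postHtml) {
        for (var oldUrl in linkMap) {
            if (linkMap.hasOwnProperty(oldUrl)) {
                // Replace every occurrence of the old URL with the new one.
                postHtml = postHtml.split(oldUrl).join(linkMap[oldUrl]);
            }
        }
        return postHtml;
    }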

Shortcoming 2: Non-Text Content

If you’ve got images or downloads or other non-text content on your blog posts, it’s most likely stored in some proprietary folder hierarchy for the blog engine you’re on… and if you’re moving, you won’t be having that hierarchy anymore, will you? That means you’ve got to move not only the text content, but the rest of the content, into the new blog engine.

There is a notion of attachments in BlogML, but it’s not clear that it solves the issue. You can apparently even embed “attachments” for each entry as a base64-encoded entity right in the BlogML. It’s unclear, however, how this attachment relates back to the entry and, further, unclear how the BlogML import will handle it. This could probably be remedied with some documentation, but like I said, there really isn’t any.

This sort of leaves you with one of two options: You can leave the non-text content where it is and leave the proprietary folder structure in place… or you can move the non-text content and process all of the links in all of your posts to point to the new location. One way is less work but also less clean; the other is cleaner but a lot of work. Lose-lose.

Anyway, the end result of my working with BlogML: I like the idea and I’ll be using it as a part of a fairly complex multi-step process to migrate off pMachine. That said, I think it’s got a long way to go for widespread use.