ASP.NET Ninjas On Fire Black Belt Tips

Demo-heavy Haack talk on ASP.NET MVC:

  • CSRF
  • Unit Testing
  • Model Binders
  • Concurrency
  • Expression Helpers
  • Custom Scaffolding
  • AJAX Grid
  • Route Debugger

The first demo started with Haack writing a bank site - a topic close to my heart - and it covers CSRF protection, which is also interesting.

The [Authorize] attribute on a controller means anyone accessing the controller method needs to be authenticated. Cool.

OK, so the demo is showing a cross-site request forgery on a POST request. You apply a [ValidateAntiForgeryToken] attribute on the controller action and in the form you put a hidden form field with a random value associated with your session using the Html.AntiForgeryToken method. This appears to me to be the MVC answer to ViewStateUserKey and ViewState MAC checking. If the POST is made without the token, an exception is thrown. I was talking to Eilon Lipton at the attendee party a couple of nights back and confirmed that only POST requests can be protected. The problem there is that if the browser is insecure and allows the attacker to create a cross-domain GET to retrieve the form and inspect the results of that GET, then it can grab the anti-forgery token, add it to the POST, and it will succeed. (This is the same case with ViewState MAC checking in web forms.) A full CSRF protection mechanism covers every request, not just select ones. I’ll have to see if I can get that pushed through into MVC. (That would be a pretty compelling solution to get us to switch away from web forms/MVP.)

Next demo is how to do a controller action unit test. I got this one. Should be using Isolator for mocking, though. :) He showed some good patterns for folks who are unfamiliar with them - TDD, dependency injection, the repository pattern… valuable stuff to get the community thinking about. Might have been just a liiiittle too fast for some of the folks new to those patterns, though.

Next demo is model binding. The [Bind] attribute lets you specify which fields posted to the controller action should be used when populating the action’s parameters. I think more time should have been spent on this because model binding is actually pretty interesting. (Maybe I missed this in the latter half of yesterday’s talk.)

Concurrency. That is, two people editing the same record through the web interface at the same time. The tip here used a timestamp in the database via the “rowversion” data type, with the “Update Check” value set to “true” on that column. When you submit an update to the record, it checks whether the row version you’re sending in is different from the one on the actual record in the database. If they’re different, you know the record has changed since you started editing and you throw an exception; if they’re the same, you’re good to go.
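The check itself is simple. Here's a minimal in-memory sketch in JavaScript - purely illustrative, since in the demo LINQ to SQL generates the equivalent WHERE clause for you; updateRecord is a made-up name:

```javascript
// Each row carries a version; an update must present the version it read.
function updateRecord(table, id, changes, expectedVersion) {
  const row = table.get(id);
  if (row.version !== expectedVersion) {
    // Someone else saved first - same idea as a ChangeConflictException.
    throw new Error('ChangeConflict: record modified since you loaded it');
  }
  Object.assign(row, changes);
  row.version += 1; // the database bumps rowversion automatically on update
  return row;
}
```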

He’s using stuff from the “Microsoft.Web.Mvc” assembly - the MVC Futures assembly - which isn’t part of the RTM that was announced this week. Not sure I’d be demoing stuff that doesn’t ship… but I understand. Now I’m curious to see what’s in the Futures assembly besides the base64 encoding method he’s showing. (Futures is hard to find on CodePlex. Look for the MVC “source” release - you’ll find it there.)

One of the most confusing things about the [HandleError] attribute is that it follows the same semantics as the customErrors section in web.config - with the default RemoteOnly mode, you’ll never see it fire on localhost. If you want to see the [HandleError] attribute work locally, you need to set web.config accordingly.
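Concretely, seeing [HandleError] fire on localhost means turning custom errors on. A minimal web.config fragment (my addition, not from the demo):

```xml
<!-- In web.config, under <system.web>. The default mode, RemoteOnly,
     shows the error view only to remote clients - never on localhost. -->
<customErrors mode="On" />
```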

MVC Futures has “expression-based helpers” to render controls based on your model using lambdas. Instead of: <%= Html.TextBox("Title", null, new {width=80}) %> you can use: <%= Html.TextBoxFor(m => m.Title, new {width=80}) %> Nice because of the strong typing.

In order to move from string-based to expression-based binding, you need to override the T4 templates that generate the default views. Putting your overrides in your project in a CodeTemplates/AddController or CodeTemplates/AddView folder will get the project to override the defaults for that project. You’ll need to remember to remove the custom tool from the .tt templates or it will try to generate output for them. You can even add your own custom .tt templates in there so when you do File -> New Controller or whatever it will show up in the dialogs.

If you’re doing a lot of T4 editing, the Clarius VisualT4 editor looks nice. It adds syntax highlighting for T4 into Visual Studio. Not sure I’d have included that in the demo, though, since it’s not what the lay-user is going to see.

“Validation in ASP.NET MVC is a little tricky because we don’t have built-in support for DataAnnotations.” There’s an example on CodePlex for this. I’ve played a bit with DataAnnotations and I’m not overly won over. You have to add a partial class to “extend” your data object, put the [MetadataType] attribute on it pointing at a “buddy class,” then create that buddy class with properties matching the names of the data object properties you want to annotate. Something like this:

[MetadataType(typeof(Question.Metadata))]
public partial class Question
{
  private class Metadata
  {
    [Required]
    [StringLength(10, ErrorMessage="Too long.")]
    public string Title { get; set; }
  }
}

(This is how Dynamic Data does it.) Apparently there’s some way coming out where you can specify that metadata through XML rather than attributes. I think I’ll be more interested when that comes out.

Nice tip here, instead of specifying an error message in your annotation, you can specify a resource. That’s key, since we have to localize everything.

[MetadataType(typeof(Question.Metadata))]
public partial class Question
{
  private class Metadata
  {
    [Required]
    [StringLength(10,
                  ErrorMessageResourceType=typeof(Resources),
                  ErrorMessageResourceName="TitleVerboseError")]
    public string Title { get; set; }
  }
}

Finally, a demo that shows something more complicated around validation. Now to see a demo where the validation parameters aren’t static…

Route debugging. Haack has posted a nice route debugger that puts up a page that shows the various routes in the table and which route was matched based on the incoming URL. Very helpful if you’re having a tough time figuring out why you’re not getting to the controller action you think you should be getting to.

We skipped the demo for the jQuery AJAX grid. He’ll show that in an open space later if you want to see it.

There’s a Little Scripter in All of Us

This is Rob Conery’s challenge to the audience to embrace their inner scripter and move away from the “architecture astronauts.”

First point is the acronyms we get into with ASP.NET. TDD, DRY, KISS, etc. Can we break the rules that ASP.NET generally leads us to? “Not everything is an enterprise app.” Hmm. This is going to be a little interesting for me since I’d actually like to see MORE focus on enterprise app development in ASP.NET. It’s like ASP.NET is hovering in this limbo area where it’s not fully set for enterprise development, but it’s also more than tiny scripting sorts of apps need. Makes me wonder if it’s trying to be too much. Jack of all trades, master of none.

Lots of apologies for the demo. “I’m on a Mac and the tech here doesn’t like it. The CSS on the demo doesn’t like a 1024 x 768 resolution so it looks bad on the screen.” As an audience member I don’t care, I just want to see it working and looking good.

He mentions that he jammed together a truckload of reeeeeally bad JavaScript code to get the MVC Storefront to work. “If I showed you that code, you’d probably throw up. Do I care?” Hmmm. This is getting harder for me to swallow. “Success as a metric” only works if you don’t have to go back and maintain the app, fix bugs, or add features. Oh, or if your team never changes. Just because it works doesn’t mean it’s right.

Oh, there’s another apology. “OpenID should be showing up down there… but I don’t have network connectivity.” Demo FAIL. With all the stuff not working, it’s really not convincing me that the rapid scripter approach to things is the way to go.

Bit of a backtrack - “I’m not giving up on architecture.” Showed some data access stuff - repository pattern, state pattern. Okay… and then we get to see the massive amount of inline script in the view. Wow. My head a-splode.

Here’s the point, I think: He showed this application he downloaded that had like 20 assemblies and when it didn’t work… it was so complex it was impossible to troubleshoot. The architecture might have been great, but it’s not something you could just download and get going. With a flatter application you might have a less “correct” architecture, but it might also be easier to get up and running and in front of the eyes of your users. That, I will buy. Granted, you have to take it with a grain of salt - if you’re making a massive distributed system that has certain scalability and deployment requirements, yeah, it’s going to be complex. On the other hand, if you’re just “making a web site,” you might not need all that. He kind of took it from one far end of the spectrum to the other (which made it a hard sell to me) but I get the idea.

Crap. Battery’s dying. Time to plug in.

Building Microsoft Silverlight Controls

I’ve not done a lot of Silverlight work so seeing this stuff come together is good. The lecture is in the form of building a shopping site using Silverlight. I got here a little late (was eating lunch) and the topic is setting up styles in a separate XAML file (StaticResources). Sort of like CSS for XAML. Good.

The clipboard manager the presenter is using is kind of cool. Curious what it is. Looks like WPF.

So, new styling stuff in Silverlight 3 - “BasedOn” styles, so you can basically “derive and override” styling. Also, “merged dictionaries” so you can define styles that are compilations of multiple styles. (Not sure I described that last one well. There was no demo and it was skimmed over.)

Skinning works with custom controls but not user controls or panels. The reason for this is that custom control visuals are in a <ControlTemplate> in XAML and all of the control logic is in code - good separation. User controls, I’m gathering, are more tightly coupled.

“Parts and States Model” - Make it easy to skin your control by separating logic and visuals and defining an explicit control contract. It’s a recommended pattern but is not enforced. “Parts” are named elements (x:Name) in a template that the code manipulates in some way. “States” are a way for you to define the way a control should look in the “mouseover” state or the “pressed” state. You define these with <VisualState> elements. Not all controls have states. “Transitions” are the visual look your control goes through as it moves between states and are defined with a <VisualTransition> element. “State groups” are sets of mutually exclusive states and are defined in <VisualStateGroup> elements. (I’m gathering that the demo here will show this all in action.)

The demo is making a validated text box. Styling of the textbox is done using {TemplateBinding} markup so if someone sets various properties on the text box they can change the style. Another “part” of the text box is the place where the text goes and… oh, she moved too fast. Somehow, naming that element “ContentElement” with the x:Name attribute made the text magically show up in the text box. We saw a VisualState setup where mousing over an element on the text box would enlarge it - a little star that grows to twice its original size in the mouseover state. Using VisualTransitions, she animated the transition between the two states so it looked nice and smooth.

The default binding for a text box is, apparently, that whenever the user tabs away, that’s when the “onchanged” event happens. In Silverlight 3 they let you set the binding to be explicit (it will never automatically happen) and then you can add a KeyUp event handler that lets you do the binding every time a key is pressed. Nice. (Seems a little roundabout, but I’m gathering this is a big improvement from Silverlight 2.)

Out of the box, Silverlight 3 will have good, standard-looking validation UI. TextBox, CheckBox, RadioButton, ComboBox, ListBox, PasswordBox. Good. I think we’re fighting validation right now in one of our projects.

I haven’t used Blend a lot before, but I have used Photoshop, Illustrator, AutoCAD, and 3DSMax. Those are listed in order of UI complexity (my opinion based on my experiences with them). Blend seems to fall somewhere between Illustrator and AutoCAD. The demo of hooking up states in Blend is interesting, but… well, not really straightforward. If someone grabbed me right after this there’s no way I could repeat it.

“The coolest and least interesting demo” for people who have used Silverlight 2 - They’ve enabled the ability to change the style of elements at runtime. I’m gathering that wasn’t possible in previous versions. The demo looked basically like a demo that uses JS to change CSS on some HTML at runtime. Glad Silverlight can do… uh… the same thing DHTML has been able to do for years.

Next demo is creation of a custom control showing the control’s contract (attributes that define the various states the control can be in) and the way you programmatically track the control’s state. The default style for your control should be in “generic.xaml”, which needs to go in the Themes folder of your control assembly as an embedded resource. The custom control created was a five-star “rating” control like you’d see on Netflix or Amazon. Cool.

A lot of the way this seems to work is reminiscent of trying to deliver packaged user controls. The markup (ASCX in user controls, XAML for these Silverlight controls) may or may not have all of the controls they should because the designer may or may not have included them all, so you have to check to see if the nested controls even exist before acting on them.

Just about time for the final session of the day.

Building High-Performance Web Applications and Sites

The tips here should help in all web browsers, not just IE, but the specific stats will be for IE (since the talk is given by an IE team member).

In the top 100 sites online (I don’t know what those are), IE spends only 16% of its time in script; in AJAX-heavy web sites, that only increases to 33%. Most time is spent in layout and rendering.

CSS performance.

  • Minimize included styles. Unused styles increase download size and rendering time because failures (CSS selectors that don’t point to anything) cost time.
  • Simplify selectors. Complex selectors are slow. Where possible, use class or ID selectors. Use a child selector (ul > li) instead of a descendant selector (ul li). Don’t use RTL and LTR styles. Minimizing included styles makes this easier.
  • Don’t use expressions. They’re non-standard and they get constantly evaluated.
  • Minimize page re-layouts. Basically, as the site is dynamically updating or the user’s working on things, you want to minimize the amount of things that update. The example here was a page that dynamically builds itself and inserts advertisements as they load… and things jump all over the place. When those sorts of changes happen, the browser has to re-layout the page. A better approach for this would be to have placeholders where the ads are so the page doesn’t re-layout - content just gets inserted and things don’t jump around.

Optimizing JavaScript symbol resolution… Lookups are done by scope - local, intermediate, global - or by prototype - instance, object prototype, DOM. If you can optimize these lookups, your script will run faster. One example showed the difference between using the “var” keyword to declare a local scope variable and forgetting the keyword - if you forget the keyword, the variable isn’t local so the lookups get longer. Another example was showing repeated access of an element’s innerHTML property - rather than doing a bunch of sets on the property, calculate the total value you’re going to set at the end and access innerHTML once. Yet a third example showed a function that got called in a loop - every time it runs, the symbol gets resolved. Making a local scope variable function pointer and resolving the symbol once is better.
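Roughly what those three fixes look like - my own reconstruction of the examples, not the presenter's code:

```javascript
// 1. Forgetting 'var' silently creates a global, which also means every
//    lookup walks the whole scope chain. Declare locals explicitly.
function sumTo(n) {
  var total = 0; // 'total = 0' without var would leak to global scope
  for (var i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// 2. Don't hammer a slow property (like element.innerHTML) in a loop -
//    build the value locally and assign once at the end.
function renderList(items) {
  var html = '';
  for (var i = 0; i < items.length; i++) {
    html += '<li>' + items[i] + '</li>';
  }
  return '<ul>' + html + '</ul>'; // assign this to innerHTML once
}

// 3. Resolve a nested function reference once, outside the loop, instead
//    of re-resolving the whole chain on every iteration.
function sqrtAll(values) {
  var sqrt = Math.sqrt; // one lookup instead of values.length lookups
  return values.map(sqrt);
}
```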

Of course, you only want to do this sort of optimization when you need to, but how do you know if you need to? There are various JS profilers out there, and the presenter showed the one in IE8 which is pretty sweet and easy to use. I haven’t gotten so far into JS that I needed to profile, but it’s nice to know this sort of thing is out there. Anyway, the interesting point of this part of the demo was showing that optimizing some of the lookup chains (in these simple examples) reduced some execution times from, say, 400ms to 200ms. I guess VS2010 will have this built in.

JavaScript Coding Inefficiencies.

  • Parsing JSON. You do an AJAX call, get some JSON text back, and need to turn it into an object. How do you do it? With “eval()” it’s slow and pretty insecure. With a third-party parsing library it’s slower but more secure. The ideal solution is to use the native JSON parsing methods - JSON.parse(), JSON.stringify(), and toJSON() on the Date/Number/String/Boolean prototypes. This is in IE8 and Firefox 3.5.
  • The switch statement. In a compiled language, the compiler does some optimization around switch/case statements. Apparently in JavaScript, that optimization doesn’t happen - it turns into huge if/else if blocks. A better way to go is to make a lookup table surrounded by a try/catch block where the catch block is the default operation. Definitely want to run that through the profiler to see if it’s worth it.
  • Property access methods. Instead of getProperty() and setProperty(value) methods (which make for clean code), just access the property backing store directly. Skip the function call and the added symbol resolution.
  • Minimize DOM interaction. As mentioned above, the DOM is the last place that’s looked to resolve symbols. The less you have to do that, the better. (DOM performance has improved, apparently, in IE8.)
  • Smart use of DOM methods. For example, use nextSibling rather than nodes[i] when iterating through a node list - these accessors are optimized to be fast. The querySelectorAll method, new in IE8, is optimized for getting elements by CSS selectors and can be faster than getElementById or iterating through the whole DOM to find groups of elements.
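A quick sketch of the first two bullets. Hedged: the try/catch lookup-table trick is as the presenter described it, and as noted you'd want to profile before adopting it.

```javascript
// Native JSON parsing - fast and safe, unlike eval().
var order = JSON.parse('{"id": 42, "items": ["book", "pen"]}');

// Lookup table in place of a big switch/case. The catch block plays
// the role of the 'default' branch.
var handlers = {
  add: function (a, b) { return a + b; },
  mul: function (a, b) { return a * b; }
};

function dispatch(op, a, b) {
  try {
    return handlers[op](a, b); // throws if 'op' isn't in the table
  } catch (e) {
    return 0; // the 'default:' case
  }
}
```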

Through all of this, though, optimize only when needed and consider code maintainability when you do optimize. You don’t just want to blindly implement this stuff.

HTTP Performance. This is a lot of that YSlow stuff you’re already familiar with.

  • Use HTTP compression. Whenever you get a request that says it allows gzip, you can gzip the response. You only want to do this on text or other uncompressed things, though - you don’t want to compress something like a JPEG that’s already compressed. If you do, in some cases, the download to the client might actually get bigger and you’ve wasted both client and server cycles in compressing/decompressing that JPEG.
  • Scaling images. Don’t use the width/height attributes on an image to scale it down - actually scale the image file.
  • File linking. Rather than having a bunch of JS or CSS files, link them all together into a single CSS and a single JS file. You’ll still get client-side caching, but you’ll reduce the number of requests/responses going on.
  • CSS sprites instead of several images. Say you have a bunch of buttons on a toolbar. You could have a bunch of images - one image per button - or you could have one composite image and use DIVs and CSS to show the appropriate portion of the composite image on each button.
  • Repeat visits. Use conditional requests (If-Modified-Since) and the Expires header in responses so the browser knows when it can serve the item out of local cache.
  • Script blocking. When a browser hits a <script> tag the browser stops because it doesn’t know if it’s going to change the page or not. Where you can, put the <script> at the bottom of the body so it’s loaded last. This is improved in IE8, but it’s still there.

IE8 has increased the connections-per-domain from two to six by default. No more registry hacking to get that to work.

Tools

  • Fiddler - inspects network traffic.
  • neXpert - plugin for Fiddler to aid performance testing.

And that’s all, folks. Battery’s dead and the conference is over. Time to fly home!

Keynote

Bill Buxton introduced the keynote today, which is about the release of Internet Explorer 8. The intro video, once again, was awesome. I think every web meme in existence showed up in this thing. (Not as good as ScottGu wrestling a bear yesterday, but pretty funny nonetheless.)

The first speaker is Dean Hachamovitch, GM of IE. He has some interesting points. We need a browser that “just works” for people who want to browse. Something that’s secure and stable. We also need a browser that works well for developers (uh… Firefox?). That’s what, apparently, IE8 is supposed to be. Available today, you can go download the final release version of Internet Explorer 8.

Some interesting statistics presented and the way they dealt with them in making IE8: 80% of user navigations are the user going back to a page they’ve already visited, and 70% of people have more than one search provider installed. To address those, the search box returns results from your history as you type, making those pages easier to get back to. They also added easy buttons at the bottom of the search results box to toggle search providers on and off.

Oh, surprise: when a browser crashes, users don’t care why it crashed, they just don’t want to be interrupted. Not sure what genius figured that one out. The historic problem is you might be doing a bunch of stuff and if the browser crashes, you lose everything. To answer that, they did the thing Chrome did where each tab runs in its own process so if one crashes, the rest don’t. That took long enough.

Some of the performance statistics they’re showing are nice. Comparable to Firefox 3, nice and fast. Faster than Chrome. I’ll have to see if that plays out in more day-to-day scenarios.

Some of the little security stuff they did is nice. The top level domain is highlighted in the address bar so it’s easier to see. Say you went to “http://www.paypal.badguy.com/foo/bar/baz” - it’s not obvious that you’re not on Paypal… but if you highlight the top level domain, it is: http://www.paypal.**badguy.com**/foo/bar/baz. Oh, and built-in clickjacking prevention, that’s cool.

The standards compliance stuff is compelling… but the side effect of showing that IE8 is really standards compliant is that it shows the other browsers might not be quite as compliant, so you’re still going to be dealing with cross-browser formatting problems. It’ll be more compelling when all of the browsers get behind standards compliance like this.

Web slices look like an interesting developer technology. They’re these little HTML snippets that run in a tiny gadget-style window in IE8 so the user doesn’t have to open a whole tab and log in. I can see some interesting potential use cases in some of our projects - let you get your account balances, for example, without having to go to your banking site. Examples they showed on this were the ability to check your Yahoo! mail or look at traffic reports in little web slice windows. Sounds pretty easy to implement, too - just add a few tags around existing content.

Accelerators also look pretty interesting. Context-sensitive functionality like the ability to highlight some text and send it in Gmail, or select an address on a page and get a map. That content, like the slices, shows up in a little gadget-style window. I wonder if it would be interesting to people to be able to, say, highlight a biller’s name and have an accelerator to start a payment to that biller.

He’s making a big point about the fact that “they’re going to listen to the users” in the future. Interesting. I mean, we all know they didn’t listen to us before, but dwelling on it shows they really heard that this was an issue. Let’s hope it sticks.

Next speaker is Deborah Adler, a designer who revolutionized the way pharmaceuticals get packaged and labeled. Not a techie by any means - not even someone who interacts with the tech world. She started out by trying to solve a problem for her grandmother and ended up solving a problem for the world. The problem was that her grandmother mistook her grandfather’s Alzheimer’s medication and took the wrong one. Same medicine, similar names… but different doses. Problems.

Other problems she saw were things like people chewing pills that shouldn’t be chewed because the warning about that got hidden or obscured among a lot of other text on the bottle. Apparently like 60% of Americans make mistakes in taking their medication because of difficulty in reading or understanding the instructions. Showing us the issues made it very clear - poor coloration, far too much text that is difficult to read in tiny print, extra pages of difficult text to accompany the prescription bottle… I’ve seen it myself. It’s not clear. Tiny, poorly printed labels that make sense to the pharmacy but not to the end user.

Her solution - a revised label - is really good. It still has all of the information on it, but formatted in a much clearer manner where the information you need immediately (what the medicine is, how to take it) is prominent and the less important things (the phone number for the pharmacy) are less prominent. Labels get color-coded on a per-person and per-medication basis so my prescription for something will have a different color label than your prescription for the same thing - so I won’t accidentally take your meds. The bottle is reshaped to be flat on the back and slightly round on the front so you don’t have to rotate the bottle 360 degrees to read the information. Warnings go in bold, clear print on the back of the bottle. And a huge improvement - the label will actually get a red X that shows up on the front when the drug has expired so you know not to take it. Automatically. (Like time-release ink.) Standardized warning icons that are clear and easy to understand.

She tried to get it pushed through at the federal level but, while the FDA liked the idea, each state has its own pharmacy board so they couldn’t do it. In the end, Target took it and they’re using the idea now. It’s now called the “ClearRx” system.

This is really cool - it’s designed specifically for good human interaction. Granted, there were challenges in getting it out there (there are 23 different variations in the label to accommodate the different states’ regulatory requirements) but it’s a huge improvement from the crappy orange hard-to-read bottles.

Gonna have to ask Jenn if she’s seen this at the VA. The Surgeon General really likes it.

Makes me wonder what major changes we can make online to help people this much. Working in online banking, I’m sure there’s a lot of improvement we could make to clarify what people are looking at and make online banking easier and more compelling.

Wireframes That Work

Presented by a representative from a company called Cynergy that does contract RIA design, primarily in Adobe Flex. They list Bank of America as a customer, which is interesting to me.

Interesting point one - good design does not necessarily equate with good user experience. The example here was a house in Germany that won Time Magazine design of the year. It looks great… but the people living there aren’t having such a great time. Great design, great look, not great UX.

So here’s a new xDD acronym for you: Purpose-Driven Design. This seems to be the idea that you need to design your experience with the end purpose of the app in mind. Tailoring the experience to the user, the user’s needs, and the overall aim of the application.

Interesting idea that came up (that I happen to agree with) - don’t wait for the users to come back and complain about the experience before you start fixing the problem. Anticipate the issues and fix them up front. How often have you been on a project where you clap some UI on something that you know isn’t awesome but that’s what the stakeholders asked for… only to hear that it’s not the greatest and it needs to be redone?

Everyone comes into the design process with some baggage - tunnel vision (thinking you’re limited by technology, or “this is how we’ve always done it”), changing minds (or not making any decision)… In a purposeful design scenario you have to step back from that and look at the problem. Watch the customer do their work. Look at the pain points. Look at the problem you’re trying to solve. Solve it without that baggage.

A tip presented: Turn off your computers when doing high-level design. Use a whiteboard. Use a pencil and paper. Computers are great productivity tools, but how many times do you check your email, get interrupted by IM, get sidetracked… and it’s true. I think about how I work and I totally get all of that information coming in all the time. (The computer will obviously have its place in the process, but try doing some of the brainstorming without it.)

And a note on process: Don’t be so rigid in process that it hurts the development effort or the flow of ideas. Hmmm. That’s definitely something I’ll have to take back to work with me next week.

From the presentation:

  • “It hasn’t been hard to make things look interesting or cool. Usefulness and joy can be elusive.”
  • “Design like an architect, refine like a sculptor.”
  • “Don’t be a usability nazi.” (This has to do with the idea of getting too caught up in process and letter-of-the-law usability guidelines like the Jakob Nielsen things like minimizing number of clicks and such. Solving the problem in the best way might break some of those guidelines but will actually provide a better experience.)
  • “In software, the desired goal is often a disruptive solution in the marketplace. Know that this may require a disruptive process.” This is definitely one I want to take back to work with me.

For my projects, I know I have opinions about how we should be doing things. I’m going to have to stop and think now - am I looking at it with my baggage-goggles on? Or am I really solving the problem? I know our UX folks are doing a great job at researching peoples’ needs, and I’ve seen the personality profiles and such that they’ve come up with… but one of the questions I have now that I didn’t think of before - have we talked to people who don’t do online banking and figured out why? Are we solving the problem for only existing users or are we solving it for everyone? How do we solve the problem in such a way that we can increase our user base instead of just retaining the existing folks?

Lunchtime - Microsoft Surface

Got to play a bit with a Microsoft Surface during lunch. It’s sort of hard to really understand the coolness of the tactile experience without actually doing it. The videos and demos you see are neat, but when you actually use it, it makes a lot more sense.

One of the apps they had was a CD player where you set the disc on the table and it [somehow] looks at the case, figures out what the CD is, and starts playing the music from it. And, of course, you’ve seen the demos where the person sets their phone down and starts working with the pictures on it.

What if you could set your wallet on the table and see your account information? See your balances and such for your various accounts and credit cards? Want to pay your credit card bill? Drag a payment from your checking account over to your credit card account. Work with your electronic balances as easily as you work with cash, adding an easy to understand, tactile experience to your online banking. Might be interesting. Now if I can just convince work that I need to get a Surface… you know, for development purposes.

Securing Web Applications

I admittedly got here a few minutes late because I couldn’t find the room, but coming in… it looks like a better title for this would be “How We Improved Security in IE8.” Not quite what I expected. We’ll see.

Oh, yeah, uh… looking at the description - “Learn how to take advantage of browser security improvements to help protect your web applications and visitors.” Might have to go see what other presentations are out there. Recent projects have taught me that the security department won’t let us trust security to the browser - we have to control it all entirely at the server level. So…

Choosing Between ASP.NET Web Forms and MVC

This session is to help you determine what’s better for you - standard ASP.NET web forms or the new ASP.NET MVC framework. The demo shown here is two applications that have identical user interfaces, do exactly the same thing, but one’s web forms and the other’s MVC. Comparing apples to apples, so to speak.

Interesting bit when describing the way the demos were put together - a guy asked why there weren’t any themes used (.skin files, etc.) for the demo and all the styling was done in CSS. The answer - no web controls in MVC, so it doesn’t make sense to use .skin files. Interesting because I’m curious why it wouldn’t work if you were using ASPX as the view engine. Thinking what they meant wasn’t “you can’t use them” so much as “we chose not to.”

The presenter (Rachel Appel) seems to be dwelling on the URL format that MVC routing gives you. She brings up the querystring vs. nice routed URLs… but you can use routing with web forms. I’ve done it. Not sure the URL format is a selling point one way or the other. (Actually, later she mentions that routing will work with both, though up to that point she pretty well omitted it and sold the routed URLs hard when talking about MVC.)

She also seems to be talking about using web forms but NOT using the MVP pattern to separate the code out of the codebehind and into a separately testable class. I think that’s missing here. She brings up a lot about separation of concerns, but you can get some pretty good SoC with MVP.

I think the best part here and the most obvious thing that never gets said: With MVC you get full control over everything… but there’s a corresponding increase in effort to get results out of the box. You don’t get anything for free. Sort of the Spider-Man “with great power comes great responsibility.” Kudos to Appel for saying it. It’s true, and no one ever really mentions that.

Another thing she said that never gets said: when showing a <% foreach %> loop building a table, she mentioned how this is reminiscent of classic ASP. Absolutely. What she doesn’t mention is that the next logical step of creating lots of pages with tables is to create a block of logic that you can call and pass data into so you don’t have to write the <% foreach %> on every page with every table. Isn’t that… server controls?

Really this solidifies my thoughts that the best way to go is a sort of middle ground: web forms using MVP, taking advantage of the routing (which shipped separately from MVC, by the way), and having all of that third-party control support and the richness of web forms while also getting your separation of concerns goodness.

Granted, I very well could be convinced otherwise when MVC 2.0 ships, whenever that is. I was talking to Eilon Lipton on the MVC team last night about some of my concerns that never seem to be shown in the MVC demos. Complex input validation and localization. Can it be done? Sure, but it’s not really a great story. Again, with all that control, you get a lot more manual wireup and, in some cases, no help at all. Apparently some of these more complex scenarios are on the list of things to address. Looking forward to seeing that.

File -> New Company: NerdDinner.com

In this one, Hanselman is showing how to easily create a reasonably rich application, his example being a dinner scheduling application. Technologies used include LINQ to SQL and MVC. The data is getting abstracted away with the repository pattern. A very good demo of how you can really rapidly get something going here. Also a good overview of how MVC comes together. Probably a little more useful for the folks who haven’t messed with MVC, but good to see it all come together.

You know how you say a word so many times you forget what it means and it sounds like gibberish? The word “dinner” has been worn out for me now. Dinner dinner dinner dinner dinner. Yup. Meaningless.

New favorite site: sadtrombone.com. (Yes, you can find anything on the web.)

ASP.NET MVC - America’s Next Top Model View Controller Framework

This is an introduction to MVC given by Phil Haack. File -> New Project demo including a walkthrough of the project structure. How controllers get set up, that sort of thing.

I think this should probably have been given on day one to give the people a foundation on which to build over the course of the next two days.

Connecting Applications Across Networks with Microsoft .NET Services

This is an intro to the Microsoft .NET Service Bus, which looks interesting, particularly since we’re doing a lot of WCF in one of my current projects. Clemens Vasters is the presenter on this one.

Lots of interesting features here. For example, they’re working on a feature where you’ll still be able to connect to your service endpoint even if the port is blocked by the firewall. Sounds sort of like the way Google Talk will use port 80 instead of the standard Jabber port 5222 if it’s blocked. No real details but, still, on the horizon.

Another interesting thing - if the service bus detects that a client and the service it’s talking to are, say, in the same subnet, the bus will upgrade the connection to get the client talking directly to the service. There’s an event you can listen to that will tell you when that happens. (I’m pretty sure I’m understanding that right, but I admittedly came in a little late.) You can also set connections to be reliable so if a connection breaks it’ll automatically be re-established.

They have a queuing behavior where you can send messages into a queue and the service will pull messages off the queue and respond to them. This is set up through a policy in the service registry. He made a big deal of saying this isn’t, say, MSMQ queuing, but I’m not really sure how specifically it differs. The behavior seems to be the same, but with some REST sort of semantics based on HTTP verbs (like a “GET” on the queue will read a message but leave it there rather than dequeue it).

Something else interesting - if you want to see what’s subscribed to a certain message set, you can do a GET on a router subscriptions feed and get an ATOM document back with the list of all subscriptions. Do a POST to create a new subscription, DELETE to unsubscribe… all RESTful semantics around that subscription endpoint.
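That subscription lifecycle might look something like this from .NET code - a rough sketch only, since I didn’t catch the actual service bus address format or the exact ATOM entry shape (the endpoint URL here is made up):

```csharp
// Sketch of the RESTful subscription semantics described above, using plain
// HttpWebRequest. The endpoint URL and payload format are hypothetical -
// the real .NET Service Bus addresses and ATOM entries may differ.
using System;
using System.IO;
using System.Net;

class SubscriptionSketch
{
    const string RouterSubscriptions =
        "https://example.servicebus.windows.net/myrouter/subscriptions/";

    // GET the subscriptions feed - returns an ATOM document listing subscribers.
    static string ListSubscriptions()
    {
        var request = (HttpWebRequest)WebRequest.Create(RouterSubscriptions);
        request.Method = "GET";
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd(); // ATOM feed of current subscriptions
        }
    }

    // POST an ATOM entry to create a new subscription.
    static void Subscribe(string atomEntry)
    {
        var request = (HttpWebRequest)WebRequest.Create(RouterSubscriptions);
        request.Method = "POST";
        request.ContentType = "application/atom+xml";
        using (var writer = new StreamWriter(request.GetRequestStream()))
        {
            writer.Write(atomEntry);
        }
        request.GetResponse().Close();
    }

    // DELETE the subscription's own URI to unsubscribe.
    static void Unsubscribe(string subscriptionUri)
    {
        var request = (HttpWebRequest)WebRequest.Create(subscriptionUri);
        request.Method = "DELETE";
        request.GetResponse().Close();
    }
}
```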

Good demo just sort of solidified it for me, though. Sort of like a chat app. Two Silverlight applications subscribing to a service on the bus listen for messages. Someone enters some text, submits it to the service. The service turns around and sends a message to the subscribers - the listening chat clients. Both chat clients get the text that was submitted. Basically Twitter. Got it. I see what’s going on now. (Oh, hey, the demo’s called “Text140!” I get it!) Was feeling a little out of sorts for a bit, not really knowing what I was looking at. Messages, at least in the demo, all take the form of ATOM entries.

OK. I get it. REST + ATOM + pub/sub + cloud = Microsoft.ServiceBus. Basically. Nice. Unfortunately, with the cloud portion, I don’t think we’ll be able to use it for the project I’m on (banks + cloud isn’t gonna happen) but I can see that it could be very useful in other scenarios. Twitter competitor? :) (Didn’t realize it was an Azure service until pretty late in the game. Again, probably from being late to the show here.)

Keynote

After some decent beats spun on stage by a DJ, Bill Buxton came on stage to talk about design. Very engaging speaker. Started out talking about designers through history. I had no idea that one guy was responsible for the Hoover, Shell Oil, and Coca Cola logos. The theme: “Return on Experience.” The idea that you’re not just selling a product - an object - you’re selling an experience. I’ve heard this idea before and the example given was Starbucks - they don’t really sell a $5 cup of coffee, but people pay for it because they want the experience.

Interesting idea presented - we can sketch objects with relative ease, we can (with more effort) sketch more complex user interfaces… but we can’t sketch experience. Or, at least, not easily. We need to get to a spot where it is easy. Given the right tools, we might get there. Simple example - post-it notes to add more dimension to a UI design than if you had just one piece of flat paper, and adding arrows between the various states on the post-its can get you to a state diagram, but that’s not enough.

“If you don’t have as much detail in the transitions as you do in the states, you’re going to get it wrong.” – Bill Buxton

And there’s where the gap is. Without taking the transitions into account, the timings, we can’t see the whole picture. The experience is more than just some mockups.

We then got to see some of the design examples Buxton is talking about - the design, the experience. The Zune and a new “arc mouse” that looks kind of neat. Hmm.

The big question: How can we have a unified way to deliver rich interaction techniques over the web without having to do it multiple times? (That is, develop the app once and have the same rich experience across all channels.) Good question, that. I’d love to see that answer.

THE GU.

ScottGu’s intro video is possibly the best. intro. video. evar.

Talk topic: Standards-based web, media, and RIA.

Expression Web 3 - Standards based web authoring, multi-language targeting, SFTP support, CSS diagnostics, and a new feature called “SuperPreview” that allows you to preview your web site in a broad range of browsers and figure out how to fix issues. SuperPreview is really cool. You can see previews from all the browsers on the system or browsers hosted in the cloud(!) right in the designer… and you can do “onion-skin” overlays so you can see the differences in how things render. A demo was given to show this and it’s hot. You can then use this to diagnose layout problems. Think about that - you can test IE6, IE7, and IE8 all on the same box. A free beta of SuperPreview as a standalone application will be made available starting today (download here).

ASP.NET MVC 1.0 - I expected this one as, I’m sure, did we all. RTM for this is shipping today. (Congrats, Phil!)

ASP.NET 4 and VS2010 - There are a lot of improvements in ASP.NET 4.0, particularly in web forms, giving you more control over your markup, and they’re integrating the new distributed caching platform. VS will get a big JavaScript tooling support update, lots of code-focused improvements, and SharePoint support will be built right in. You’ll be able to create different web.config files for your site so you can have different config files for debug vs. release - long overdue. I’ll have to check out the sessions on this. Very cool looking.

Web Server Extensions - They’re adding 8 extension updates for IIS7 starting today including database administration through the IIS admin tool, an application request routing proxy, a secure FTP server, and a few others.

Microsoft Web Platform Installer - Shipping version 2 of this today. It’s a single download you can put on your system that will install the latest web stack for developer boxes and production servers.

Windows Web App Gallery - They’re launching a new application gallery that lets you easily locate and install .NET-based web applications. (Yay, Subtext! You’re in the gallery!) The cool bit here is it integrates with the Microsoft Web Platform Installer so you can select from these apps and install not only the web stack, but the applications. This is awesome. (Shame on you for not telling us, Phil!)

Commerce Server 2009 - Released, got about a sentence-worth of lip service. That’s about how much interest I have in it, too.

Azure Services Platform - They’re adding features: FastCGI/PHP support, .NET full trust, and SQL relational database support (so you can LINQ to SQL to the cloud).

BizSpark - Atwood and Spolsky got up to talk about StackOverflow as an illustration of success with the BizSpark startup assistance program. I’ve always found StackOverflow to be of questionable value (in many cases it could be replaced by Google) but I’m happy for their success. They’re using ASP.NET MVC and the two features they brought up were routing (which can be used with web forms, too) and performance, which… well, I haven’t seen any comparison stats on MVC vs. web forms. Either way, good for them.

Silverlight - By the numbers: 350 million Silverlight installs, 300K designers, 200+ sites. They’re releasing the new Virtual Earth SDK and the WorldWide Telescope (so you can look at the stars on any platform through SL). Netflix is using Silverlight to stream their movies over the net. Dude from Netflix got up to talk about the cross-platform awesomeness of Silverlight. Interesting part of this was the installers - 12% of people would come in and just not do the installer for the player, and 8% would fail the installer and never come back. Streaming through Silverlight gets Netflix out of the installer business. (Of course, they also pimped the DRM “PlayReady” scheme in SL.) Silverlight 3 will have GPU hardware acceleration, new codec support (H.264, AAC, MPEG-4), raw bitstream API (you can write your own codecs), and improved logging for media analytics (for monetization). Perspective 3D, bitmap and pixel API, pixel shader effects, and Deep Zoom improvements.

Lots of improvements in IIS media services… which I suppose would be more interesting to me if I wasn’t in the online banking field. The demo of adaptive streaming was very interesting and I look forward to some of that making it to the Xbox 360 Netflix streaming app. No more buffering! Interesting stat - Beijing Olympics coverage delivered 3.4 petabytes of video content on NBC.com. Maybe my Home Server needs to hold a petabyte.

Oh, just saw a tweet come through - should be some Silverlight 3 bits and a book available after the keynote.

Some application development improvements - deep linking, navigation, and SEO (so a search engine can index your Silverlight content(?)), ClearType support, multi-touch support, 100+ controls available, and library caching support (several apps can use a common library and the client only downloads it once). Nice demo of some of the new features in a Rolling Stone archive. (They’re also releasing a Playboy archive with 54 free issues.)

Expression Blend 3 - “SketchFlow” (which helps you prototype faster and work more in the way Bill Buxton talked about), Photoshop and Illustrator import, source code control support, and Intellisense for XAML/C#/VB. SketchFlow demo was good - I think it’ll save our UX folks a truckload of time in prototyping and getting feedback (as long as the feedback cycle doesn’t turn into one of those “50 people in a room watching one person mark up the prototype” sorts of things). The big thing I see here is the “sample data” feature - you can have XML sample data that your designers can use while making their prototype. This will be HUGE for us.

Silverlight 3 will have richer data validation controls and templates, Eclipse developer support, data-binding improvements, and multi-tier REST support. The validation thing is key - we’re looking at that on a project in our group right now.

SAP NetWeaver will start supporting Silverlight in a future release.

Silverlight 3 will include support for running applications outside the browser on both Windows and Mac. That’s going to be great. Write little companion applications for your web sites or whatever, write it all in Silverlight, and everything just works. (Adobe AIR style, right?) Built-in auto-update support, offline-aware app support (there’s an event you can handle when network status changes), and integration with the underlying OS (like the multi-touch feature mentioned).
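For the offline-aware bit, here’s my guess at what the hookup looks like, assuming the desktop-style System.Net.NetworkInformation API carries over to Silverlight 3 as described (worth verifying against the actual beta):

```csharp
// A minimal sketch of offline awareness, assuming Silverlight 3 exposes
// System.Net.NetworkInformation: listen for address changes and re-check
// availability so the app can switch between online and offline behavior.
using System.Net.NetworkInformation;

public class ConnectivityMonitor
{
    public bool IsOnline { get; private set; }

    public ConnectivityMonitor()
    {
        IsOnline = NetworkInterface.GetIsNetworkAvailable();
        NetworkChange.NetworkAddressChanged += (sender, e) =>
        {
            // Re-check whenever the network status changes.
            IsOnline = NetworkInterface.GetIsNetworkAvailable();
            // A real app would toggle UI or queue work for later sync here.
        };
    }
}
```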

There’s one beta of Silverlight 3 planned and it ships today. RTM will ship later this year. Oh, and did I mention the download is 40K smaller than Silverlight 2’s? Huge. Thinking for our project we should skip Silverlight 2 and go straight to 3.

ASP.NET 4.0

New Control.ViewStateMode property allows you to totally shut down ViewState and selectively enable it. Differs from EnableViewState = false, but I’m not totally sure how. Seems to be more effective.
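My guess at the difference (an assumption on my part, not something stated in the session): ViewStateMode="Disabled" on a parent can be overridden by individual child controls, while EnableViewState="false" kills view state for the whole subtree with no opt-back-in. Something like:

```aspx
<%-- Sketch: shut view state off at the page level, then opt one
     control back in. With EnableViewState="false" this opt-in
     presumably wouldn't work - hence the new property. --%>
<%@ Page Language="C#" ViewStateMode="Disabled" %>

<asp:Label ID="HeaderLabel" runat="server" Text="Static - no view state." />
<asp:TextBox ID="StatefulBox" runat="server" ViewStateMode="Enabled" />
```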

New ClientIDMode property so you can take control of your control IDs. You can keep the old way of doing things, or you can set it to “Static” and have server controls actually have the IDs you set - no mangling. And you can make that the default! About freaking time.
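If the property ships as ClientIDMode (my assumption on the final name), making static IDs the site-wide default would presumably be a one-line web.config change, with per-control overrides in markup:

```xml
<!-- Sketch: unmangled control IDs as the site-wide default. The property
     name (ClientIDMode) and "Static" value are my assumptions about what
     ships in ASP.NET 4. -->
<system.web>
  <pages clientIDMode="Static" />
</system.web>
```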

Routing - this is the same as in MVC. There’s a PageRouteHandler there… but I don’t know if that’s new or if I just don’t remember it. They added an expression builder to build routes in markup. The Page class now exposes RouteData, so we don’t need our own custom base page class for that anymore.
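A sketch of what web forms routing would look like in Global.asax, based on the PageRouteHandler mentioned above (the route name, URL pattern, and page path are made up for illustration):

```csharp
// Sketch: registering a web forms route via System.Web.Routing.
// The page reads the value back through Page.RouteData.Values["id"].
using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Map /products/42 to ~/Products.aspx without a querystring.
        RouteTable.Routes.Add(
            "ProductRoute",
            new Route("products/{id}", new PageRouteHandler("~/Products.aspx")));
    }
}
```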

New Page.Description and Page.Keywords properties will add the correct meta tags to your page. They support localization, too, using standard ASP.NET techniques.

New Response.RedirectPermanent method uses a 301 permanent redirect code instead of the usual 302 temporary redirect.
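Taken together, the meta properties and the permanent redirect might be used like this - property names as presented in the session, so they could shift before RTM, and the paths are made up:

```csharp
// Sketch using the new ASP.NET 4 page helpers as presented in the session.
// Sets the meta tags, then 301s an old URL so crawlers update their index
// instead of getting the usual 302 from Response.Redirect.
protected void Page_Load(object sender, EventArgs e)
{
    Page.Description = "Session notes from the conference.";
    Page.Keywords = "ASP.NET, MVC, Silverlight";

    if (Request.RawUrl.EndsWith("/old-notes.aspx", StringComparison.OrdinalIgnoreCase))
    {
        Response.RedirectPermanent("~/notes.aspx");
    }
}
```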

Informal poll of the room showed a lot of folks target XHTML Strict, but more target XHTML Transitional. Lots of folks interested in Section 508 standards, which is good.

The QueryExtender control makes it easier to create search pages by interacting with IQueryable data sources. Hook it up to a textbox (or whatever) and have that interact with a LinqDataSource (or whatever).
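A markup sketch of how I understand the wiring works - the control names and data context are made up, and the exact expression elements are my assumption from the demo:

```aspx
<%-- Sketch: filter a LinqDataSource from a search textbox via QueryExtender.
     StoreDataContext and the field names are hypothetical. --%>
<asp:TextBox ID="SearchBox" runat="server" />
<asp:LinqDataSource ID="ProductsSource" runat="server"
    ContextTypeName="StoreDataContext" TableName="Products" />
<asp:QueryExtender ID="ProductFilter" runat="server"
    TargetControlID="ProductsSource">
    <asp:SearchExpression SearchType="StartsWith" DataFields="Name">
        <asp:ControlParameter ControlID="SearchBox" />
    </asp:SearchExpression>
</asp:QueryExtender>
<asp:GridView ID="ProductsGrid" runat="server" DataSourceID="ProductsSource" />
```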

Core improvements - Cache has been updated to use the provider model and be extensible. Browser capabilities extensibility allows you to create custom browserCaps providers. Out of process session state can now optionally be compressed.

ASP.NET AJAX is all new. Compiled templates, controls, databinding, and cross-browser compatibility. All client side. All entirely disconnected from ASP.NET - the demo was an HTML page using a client-side dataview control. HOT.

Microsoft is going to start offering full product support for jQuery - a new thing for MS, to offer tech support for open source. They’re calling it “Best Effort” support - they will troubleshoot with you and, if you find a defect, they’ll create a patch to submit to jQuery just like a responsible community member. Good for them, good for us.

MVC was mentioned but not covered because there are sessions dedicated to it.

Dynamic Data… Not seeing anything new here.

Out of Browser Experience for Silverlight 3

This is pretty cool. Silverlight 3 allows you to run a Silverlight app outside the browser so you can run the same app in the browser and standalone. By setting an option in your app manifest, you can enable a user to install the app. Setup experience is basically two checkboxes - add a shortcut to desktop and/or add a shortcut to start menu - and uninstall is a right-click away. Nice.

Really cool way to dogfood Silverlight 3 and a great demo - the entire presentation was done in a lightweight PowerPoint analog entirely written in Silverlight 3. You’d never know it wasn’t PowerPoint except the transitions between slides were all 3D. Very cool.

Looks very easy to write an app that runs both in and out of the browser. I can envision, for my project, a little online banking widget that sits on your desktop and lets you view your balances or do small money movement operations without launching the browser. Could be a neat proof-of-concept.

The auto-update feature is pretty slick, too. It’s optimized for “instant on,” so if a new version is found, it’s downloaded in the background and the new version takes effect the next time the app is run. It’s done without any notice, so it’s recommended that your app display some sort of message when it’s updated to let folks know. Sort of odd that it doesn’t say anything out of the box, but it’s still slick. You do need to keep all your code in the XAP file to get this to work smoothly.
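Since the runtime stays silent, the “tell the user” recommendation presumably looks something like this, assuming the CheckAndDownloadUpdateAsync API shown in the session (the message text and placement are obviously up to you):

```csharp
// Sketch of the recommended notification pattern for silent auto-update,
// assuming Silverlight 3's Application.CheckAndDownloadUpdateAsync. The new
// version only takes effect on the next launch, so all we can do is notify.
public partial class App : Application
{
    public App()
    {
        Startup += (s, e) =>
        {
            CheckAndDownloadUpdateCompleted += (sender, args) =>
            {
                if (args.UpdateAvailable)
                {
                    // Let folks know the background update happened.
                    MessageBox.Show(
                        "An update was downloaded and will run the next time you start the app.");
                }
            };
            CheckAndDownloadUpdateAsync();
        };
    }
}
```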

There’s still IsolatedStorage for Silverlight, but the quota’s increased from 1MB to 25MB. On the other hand, they have new open and save file dialog boxes that allow you to interact with the real filesystem.

Offline apps are basically pulled off by a little launching application hosting a browser, creating a simple HTML page, and loading your Silverlight app. Very interesting. Technically the app is actually still running in a browser. Looks like you have to be careful if you’re using Silverlight to write to the DOM, though - there’s no DOM in an offline app, so you won’t see those things working.

Data-Driven Apps in Silverlight

Primarily a demo using LINQ to SQL to show the old .NET 1.1 “IBuySpy” demo site reimagined in Silverlight. I haven’t done a ton in Silverlight so this was pretty good to see. Only downside is the simplicity of the demo (LINQ to SQL) didn’t help me in figuring out how to do larger-scale apps that use services for data access or use the MVVM pattern. I didn’t stay for the whole thing because there was another presentation starting (a “mini session”) that showed various WPF and Silverlight apps and I wanted to see some examples of stuff out there that aren’t necessarily apps from giant companies (like the Netflix player or NBC and the Olympics) but also weren’t really quick demos.

How’d They Do That?

Hanselman showed a demo of an app my friend John did for Adidas using WPF and Dynamic Data. Good stuff. Really showing quick concept-to-production work using some of the new tools out.


I’ve got some work to do this morning, but come noon, I’m off to the MIX09 conference. I don’t travel much, and rarely alone, but I do love me some Las Vegas, so I’m looking forward to it. There are a lot of folks I’m looking forward to meeting up with when I get there that I’ve corresponded with online a lot. Should be good times. I will, of course, tweet and blog the experience.


I found that being able to right-click a .sln file and open it as Administrator in Visual Studio is a huge help because I always open solutions by finding the solution file and opening from there, not opening Visual Studio first. Anyway, based on the Elevation Power Toys stuff, I wrote an “Elevate VS Solution” Power Toy that lets you right-click a .sln file and open it as Administrator.

Download the zip file, extract the .inf file, right-click, install. Standard disclaimers (“works on my box!”) apply.

[Download ElevateVSSolution.zip]