Monday, December 23, 2013

MVVM light and Model Validation

I have been using the MVVM Light Toolkit for a project recently. It is a great toolkit, but it is missing a couple of things, and Laurent Bugnion does a good job trying to cover those holes. One of the things the toolkit does not support is validation. The good news is there is a great CodePlex project out there called Fluent Validation that makes this pretty easy to add and really powerful. My objective was to add validation to my model so I could call “IsValid” on the model itself (similar to the MVC attribute approach). Fluent Validation has you create a new class file that holds your validation rules for a given model. This is the approach I took to enable each model to have an “IsValid” property and an “Errors” property that returns the validation errors.

First I setup my ValidationFactory:

public class ValidatorFactory : FluentValidation.ValidatorFactoryBase
{
    public override FluentValidation.IValidator CreateInstance(Type validatorType)
    {
        // Resolve the requested validator type from MVVM Light's SimpleIoc container.
        return SimpleIoc.Default.GetInstance(validatorType) as FluentValidation.IValidator;
    }
}

Next I used the SimpleIoc object that comes with MVVM Light to register my validator class in the ViewModelLocator:

// Register Validator
SimpleIoc.Default.Register<IValidator<Car>, CarValidator>();

You can set up your CarValidator pretty easily by following the examples on the Fluent Validation site, so I will not go through all that. The next part was figuring out how to give all my models the functionality I needed without adding custom code to each one. I wanted a base model class to handle this for every model built on top of it. Here is what I came up with.
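For reference, a CarValidator could look something like this (a minimal sketch; the Car properties and rules here are my own examples, not from the actual project):

using FluentValidation;

public class CarValidator : AbstractValidator<Car>
{
    public CarValidator()
    {
        // Example rules only; the real validator would mirror your Car model.
        RuleFor(c => c.Make).NotEmpty().WithMessage("Make is required.");
        RuleFor(c => c.Year).InclusiveBetween(1900, 2100);
    }
}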

Create a BaseModel class that all Models will inherit from.

   1: public abstract class ModelBase
   2: {
   3:     private IValidator _validator;
   4:     private IValidator Validator
   5:     {
   6:         get
   7:         {
   8:             if (_validator == null)
   9:             {
  10:                 var valFactory = new ValidatorFactory();
  11:                 _validator = valFactory.GetValidator(this.GetType().UnderlyingSystemType);
  12:             }
  13:             return _validator;
  14:         }
  15:     }
  16:  
  17:     public bool IsValid
  18:     {
  19:         get
  20:         {
  21:             return Validator.Validate(this).IsValid;
  22:         }
  23:     }
  24:  
  25:     public IList<ValidationFailure> ValidationErrors
  26:     {
  27:         get
  28:         {
  29:             return Validator.Validate(this).Errors;
  30:         }
  31:     }
  32: }

The key here is the “Validator” property. This does all the work for all models. Each model will now have an “IsValid” property and a “ValidationErrors” property. These two properties use the private “Validator” property to get the type of validator they need and pass in the object to validate. On line 11 you will see that the ValidatorFactory.GetValidator method is called. The base ValidatorFactoryBase does most of this work for us. The key is that on line 11 we use the UnderlyingSystemType of the object, in this case the “Car” class. The factory then uses SimpleIoc to find the right validator for that class. Now on lines 21 and 29 I just call the methods I want on the Validator property. The “Validate” method requires the object to validate to be passed in. This is also easily abstracted away by just passing “this” (the current model built on top of ModelBase) into the call.

Just like that, I can now do the following in my ViewModel (which holds a Car property for my new Car model object).

   1: private void SaveCar()
   2: {
   3:     if (this.Car.IsValid)
   4:     {
   5:         _CarProvider.SaveCar(this.Car);
   6:  
   7:         _NavService.NavigateTo(ViewModelLocator.MyProfilePage);
   8:     }
   9:     else
  10:         base.DisplayValidationErrors(this.Car.ValidationErrors);
  11:  
  12: }

On line 3 I simply call “IsValid” on the model I have bound to the view. You could also wire that up to a command’s “CanExecute” method, as shown below. Hats off to the Fluent Validation team; it is a slick module, easy to plug into and very powerful.
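For example, with MVVM Light’s RelayCommand the same check can drive CanExecute (a small sketch; SaveCarCommand and the view model constructor name are just example names):

public RelayCommand SaveCarCommand { get; private set; }

public CarViewModel()
{
    // The save button stays disabled until the bound Car model validates.
    SaveCarCommand = new RelayCommand(SaveCar, () => this.Car != null && this.Car.IsValid);
}

Keep in mind you would still need to call SaveCarCommand.RaiseCanExecuteChanged() when the Car changes so the button state refreshes.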

 

NOTE: If you are curious about the “NavigationService”, check out this blog. It does a great job enabling the standard Windows Phone NavigationService in an MVVM architecture.

Sunday, October 20, 2013

Visual Studio Code Review

I have finally been able to work with Visual Studio and TFS code reviews. There were a couple of things that threw me off that I wanted to write about. If you want a good quick overview of what this is, check out the Channel9 video “Using Code Review to Improve Quality.” There are a few things to note about this feature that no one calls out clearly:

1- You must have TFS to enable this

2- You must have at least Visual Studio 2012 Premium

If you have those two things you can start doing code reviews this way. It is a great feature, I think. Sadly it will be underutilized by a lot of development shops for a few reasons. First, I think few will realize the feature is there, and even if they do, fewer will spend the time to define their processes for it. Second, I think a lot of development shops have only a Professional license, don’t use TFS as their backend, or if they do, don’t use its full capability and treat it as source control only.

With all that said, if you are using it or thinking of using it, here are some things I have found out.

Inline code comments

[Image: inline code comment]

You can provide comments for the entire changeset or a whole document, or you can highlight a line of code and comment on just that line. However, you can only do inline code comments on code files. This means no CSS files, no CSHTML files, etc. Not a huge deal, but a bit of a disappointment.

Closing and Sending Comments

[Image: Send Comments and Close Review options]

Once you have provided comments you can send them. It is important to note, though, that this is not the same as closing out your code review (approve or reject). In the picture above you will notice that the “Send Comments” button is grayed out because I have already sent my comments, while the “Close Review” item is still active. This lets you provide feedback on the code without closing out the review, so you can have some back and forth with the developer to fine-tune the code before you approve it.

To Check-In or leave outstanding

With code reviews you can request a review and then either leave your code checked out until the review is done, or check your code in. I am sure everyone has different preferences here. I prefer to have developers check in their code, for a few reasons. The main one is that code reviews normally don’t happen in a timely manner, and I don’t want additional coding or testing slowed down waiting for them. As long as the developer goes back and checks the status of the code review, any requested changes can be made then. This is a good reason to have TFS send emails when a code review task item’s status changes; it helps make sure developers know when comments have been added or a review has been completed, whether it was declined or approved.

You can request a review in a couple of ways.

1- Go to “My Work” and under “In Progress” select “Request Review”. This is not bad, but it means the developer has to remember to go there and request the review before checking in.

[Image: My Work screen for requesting a review]

2- Request review while checking in. This can be done by selecting the “Actions” link.

[Image: Pending Changes screen for requesting a review]

Code reviews are only good if they happen. I am sure there are lots of opinions out there on how to make code reviews happen, and I don’t think there is one answer to this problem. TFS does not support (out of the box) a check-in policy that says you must have had, or at least requested, a code review. Personally I am not a big fan of that approach anyway, since I don’t think every check-in needs a code review. This is where SDLC process creation and management comes into play. If your team is not active in creating, managing, enforcing and refining its SDLC rules, this feature will not be as beneficial as it could be.

Wednesday, October 16, 2013

Web API – Pass multiple simple parameters on a POST/PUT

I have been working a lot lately with Web API (v1). For the most part I really like it, but there is one thing I found out that I really hate: Web API (v1) does not let you POST multiple simple type parameters.

Here are the links I found that let me get through this.

WebAPI Multiple Put/Post parameters

Passing multiple simple POST Values to ASP.NET Web API

This was a little tricky to find, and most people think Web API cannot do this. By default it cannot, but this extension has worked great for me. So I wanted to make sure I could find it again and make it a little easier for others to find as well.
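For context, Web API (v1) only binds a single [FromBody] parameter per request, which is why this fails out of the box. If you would rather not pull in an extension, the usual fallback is to wrap the simple values in a small request model; here is a sketch with hypothetical names (not the approach from the links above):

// Hypothetical request model wrapping two simple values.
public class RenameRequest
{
    public int Id { get; set; }
    public string NewName { get; set; }
}

public class CarsController : ApiController
{
    // POST api/cars/rename with a body like {"Id":5,"NewName":"Roadster"}
    [HttpPost]
    public HttpResponseMessage Rename(RenameRequest request)
    {
        // work with request.Id and request.NewName here
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}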

Tuesday, October 15, 2013

Calculate Screen Size in Inches based on height and width in millimeters

This will be short and simple, but it is some math that hopefully will be helpful. I have been doing a lot of mobile device detection work as of late. Recently I was able to get the physical width and height of a device in millimeters. This was great, but what I really needed was the diagonal screen size in inches. Here is the math for it.

- Get the width in inches: width in millimeters times 0.0393700787

- Get the height in inches: height in millimeters times 0.0393700787

- Square both values and add them: (width in inches)^2 + (height in inches)^2

- Take the square root of that sum: this is the diagonal screen size in inches

That is it.
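For convenience, here is the same math as a small C# helper (just a sketch; the class and method names are mine):

public static class ScreenSize
{
    private const double MillimetersPerInch = 25.4; // 1 / 25.4 = 0.0393700787...

    public static double DiagonalInches(double widthMm, double heightMm)
    {
        double widthIn = widthMm / MillimetersPerInch;
        double heightIn = heightMm / MillimetersPerInch;

        // Pythagoras: the diagonal is the square root of the sum of the squares.
        return Math.Sqrt(widthIn * widthIn + heightIn * heightIn);
    }
}

For example, a 62 x 110 mm screen works out to roughly a 5 inch diagonal.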

You can see it working on my test site

Monday, October 14, 2013

Web API, JSON, AJAX, CORS, Chrome 500 error and Authentication–Oh My!

I have been working on a project that is using .NET 4.0 and MVC 4 with Web API. Web API is great and provides a lot of functionality easily. We have needed to call our Web API from another domain though, which introduces Cross Origin Resource Sharing (CORS) issues. Web API in 4.0 does not really support this, so it has been causing us issues, although you can configure IIS to work with it pretty easily. Later versions support it better. See the following links:

Web API VS 2013 (.Net 4.5 and later)

http://aspnetwebstack.codeplex.com/

Enabling Cross-Origin Requests in ASP.NET Web API

ASP.NET Web API: CORS support and Attribute Based Routing Improvements

However, if you are using 4.0 that is not much help to you. We have had to push through a few issues on this front. If you are testing with IE8 you will not see them, as it does not care about CORS issues. If it is working in IE but not in Chrome or Firefox, here are some issues we have been working through:

 

Some calls return data, others do not

By default Web API returns both XML and JSON formatted objects. If you are doing a browser-based request (especially in Chrome) the default request asks for XML back. Look at what type of objects you are returning: if you are returning a complex type that holds another complex type, this can cause an issue, because the embedded complex types may need serialization defined for them. Or you can just remove the XML formatter, which forces the response to be JSON instead of XML.

In the WebAPIConfig.cs Register method:

//Remove XML formatting 
config.Formatters.Remove(config.Formatters.XmlFormatter);
 

Some calls never get to the API

If you are seeing your calls in a cancelled state (in the Chrome inspector or Fiddler), you are having CORS issues. The browser sees that you are trying to request data from the client side from a different domain. Keep in mind that “domain” here really means a different website, so just because you have test1.mysite.com and test2.mysite.com (same domain, different subdomains) it does not mean you are good if those two sites are hosted on different web servers or as different sites in IIS. Here are some sites with solutions on what you need to set up to make this work.

Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

jQuery, CORS, JSON (without padding) and authentication issues

Using CORS to access ASP.NET services across domains

http://www.w3.org/TR/access-control/#origin-request-header

HTTP Cookies and Ajax requests over HTTPS
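On the server side, since .NET 4.0 has no built-in CORS support, one approach along the lines of those links is a Web API message handler that stamps the headers onto the response. This is a minimal sketch only; the handler name and allowed origin are my own examples, and preflight OPTIONS requests still need the IIS configuration mentioned above or extra handling here.

public class CorsHandler : DelegatingHandler
{
    // With credentials you must echo a single explicit origin; "*" is not allowed.
    private const string AllowedOrigin = "https://test2.mysite.com";

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return base.SendAsync(request, cancellationToken).ContinueWith(task =>
        {
            HttpResponseMessage response = task.Result;
            if (request.Headers.Contains("Origin"))
            {
                response.Headers.Add("Access-Control-Allow-Origin", AllowedOrigin);
                response.Headers.Add("Access-Control-Allow-Credentials", "true");
            }
            return response;
        }, cancellationToken);
    }
}

// Registered in the WebAPIConfig.cs Register method:
// config.MessageHandlers.Add(new CorsHandler());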

Here is an example AJAX call I set up that works.

$.ajax({
  type: "POST",                             // jQuery 1.9+ also accepts "method"
  url: form.attr('action'),
  data: form.serialize(),
  xhrFields: { withCredentials: true },     // keep cookies/credentials attached cross-origin
  cache: false,
  async: true,
  contentType: "application/x-www-form-urlencoded",
  success: function (response) {
    console.log(response);
  },
  error: function (XMLHttpResponse, textStatus, errorThrown) {
    console.log(XMLHttpResponse);
    console.log(textStatus);
    console.log(errorThrown);
  }
});

 

Passing Credentials and Cookies

Notice that I have xhrFields: {withCredentials: true}. This is a must-have if you are doing an AJAX call that requires authentication; it makes sure any cookies created stay attached to the request.

We had to do authentication with Novell Access Manager (NetIQ Access Manager) and it was a serious pain. NetIQ really struggles to allow CORS calls; in fact I would say that it does not. Now, I am no NAM expert and we were working with another firm who managed that, so if you know how to make it support CORS let me know. We ended up having to make it so CORS did not come into play.

There seems to be some confusion about the origin request header; you will even see it in some posts. Some people say that if you need to allow multiple origins you should put them in one header separated by commas. Others say to add multiple headers. Here is the key: if your request includes credentials, you can only have one origin and no wildcards.

Per the w3 specs:

“If the resource supports credentials add a single Access-Control-Allow-Origin header, with the value of the Origin header as value, and add a single Access-Control-Allow-Credentials header with the case-sensitive string "true" as value.”

Thursday, September 12, 2013

Clearing Indexes on Sitecore Content Delivery Servers

For anyone building a website on Sitecore, you have probably come across this issue. In even a basic production deployment of Sitecore you have a content management box and a content delivery box. When content is updated on the management servers you want your content delivery box’s indexes to be updated. Some people think you need to create your own custom process to move content management indexes out to the content delivery servers. They think this because the content management box detects that the indexes need to be updated and runs the process to update its own indexes, but there is nothing on the content delivery boxes to do this. Well, this is just not true. To be honest, if it were true Sitecore would have a serious shortcoming if they expected this process to be created manually every time by everyone.

Alex Shyba has written a great post with additional details on how all this works. Of course there are also the Sitecore docs you can reference.

The fact is, the web.config file or your own include configs (the best approach is the App_Config/Include folder) have values that can be set to tell the content delivery servers to check at a certain interval whether their indexes need to be updated. To do this there are a couple of key config values you need to set.

<setting name="Indexing.UpdateInterval" value="00:05:00"/>

This setting tells the servers how often to check for index updates. This does not mean the index will always update this often; it only means it will check to see if it needs to update. If you want details on what it is checking and how it knows the index is dirty, please see Alex’s blog.

There is another key setting you have to have as well to make this work on the content delivery servers.

<setting name="Indexing.ServerSpecificProperties" value="true"/>

If this value is not set, or set to false, on the content delivery servers your indexes will never update. Again see Alex’s blog for details on why.

This may or may not light things up for you. There is one more key setting in the web.config you have to enable. If you read Alex’s blog he mentions the event queues (but never this value). By default event queues are disabled in the web.config. If they are disabled, the notification about indexes will never make its way to the content delivery servers.

<setting name="EnableEventQueues" value="true" />

Make sure this setting is also true. Once you have all of these set, the content delivery servers should start updating on the defined interval, on their own, whenever an index event is queued. Don’t make your UpdateInterval too short as it could start causing performance issues (if it is set to 00:00:00 it is disabled).

Tuesday, April 30, 2013

Sitecore Azure – Getting up and running

I am a fan of Windows Azure and Sitecore, so I figured it was time to see what Sitecore Azure was all about. This post is just about first impressions and lessons learned while getting started.

There are a few must-haves if you are going to work with Sitecore Azure. First you have to get a license file. This is a manual process where you email Sitecore with your Sitecore license info and your Azure subscription info (assuming you’re provisioning your Azure environments yourself). It takes about 24 hours to get the environment file back. Secondly, make sure you have the version of the Azure SDK they note in the documentation. In my case it was version 1.7 (the documentation gives you a link as well). I figured this was a soft requirement so I just downloaded the latest Azure SDK (1.7 was two versions behind); however, this did not work and caused deployments to error out (it also gives you an “environment file not found” error in the Sitecore Azure screen).

Once you have all that, you can go into Sitecore, click Sitecore and then Sitecore Azure, and really start working with it. I first wanted to play with having a local CM (content management) server and a cloud-based CD (content delivery) server. So I selected an Azure location and clicked “add delivery farm.” This packaged the current base Sitecore site up and deployed it to Azure.

What is set up for you:

- Azure web role (staging only; no production role is created or started)

- Azure SQL databases (core and web). Both are set up as Business edition 150GB (I downgraded to Web 1GB for the lower price point and things seem to be working fine still). A UID and PWD are generated for you and the connectionstrings.config file is updated.

- Azure Blob, Table and Queue storage. The following blobs and tables are created for you (no queues are created):


Blob containers:

- CacheClusterConfigs – contains ConfigBlob (an XML file with ConfigEntries nodes which contain CDATA values), ConfigSchemaVersionBlob (a text file with just a version number), InitCompleteBlob (an XML file with an empty ConfigInit node) and InitStartedBlob (an XML file with an empty ConfigInit node)

- Sitecore-auto-deploy – currently empty

- WAD* – the standard Azure WAD blobs

 

Tables:

- WAD* – the standard Azure debug and logging tables

- $Metrics* – the standard Azure metrics tables

 

- Sitecore publishing target to your Azure web database

What is NOT set up for you (I was pretty disappointed to see these pieces are missed and left to the user to flesh out):

- GAC-level assemblies that are not deployed by default. These are assemblies that are in the GAC locally but are not in the GAC on the Azure hosts (or are different versions there), so they must be copied to the bin folder so they get deployed:

- System.Web.WebPages.Razor

- Microsoft.Web.Infrastructure

- System.web.webpages.deployment

- System.Web.Helpers (1.0)

- System.Web.MVC (3.0)

- System.Web.WebPages (1.0)

Once you have copied over the assemblies that are not set up for you, you can deploy to your Azure environment and hit the site without an error. You will still be missing some images because the “/sitecore” directory is not copied over, and this is where the default images sit.

As of today Sitecore says this is for Sitecore 6.5 and Azure 2.0. I, however, did all this with Sitecore 6.6 rev. 130214 and Azure 2.0 rev. 120731 and things seemed to work fine (minus what I noted above, and I doubt that is because of 6.6 instead of 6.5).

 

Shutting Azure down

After this simple test of setting up a content delivery server in Azure, I thought I would see what the shutdown process is like. In the Sitecore Azure management window you can click on your node and choose suspend. This is pretty easy and does what you would expect: it suspends your web role. Now that the role is suspended, let’s delete the delivery instance. In the Sitecore Azure management window the instance looks like it is deleted, but if I look at my Azure account the cloud service, databases and storage accounts are still there. To get rid of these you need to delete them manually via the Azure web UI. It also does not clean up any of the environments under the “/sitecore/system/settings/azure/environments” node in Sitecore. Not a big deal, but disappointing that it does not clean itself up better.

Monday, March 25, 2013

Create Web Forms for Marketers Custom Save Action and Edit Screen

I was recently working on a project where I needed to create a custom save action for my Web Forms for Marketers module. I needed the save action to push the data to Salesforce, and I also needed a custom edit screen so the author could set up some configuration values the action needed. Here are the details of what I did, starting with the save action.

Save Action

The first thing you need to do is create a new class that implements the “ISaveAction” interface (Sitecore.Form.Submit.ISaveAction) and its Execute method.

 
public class SalesforceSaveAction : ISaveAction
{
    // These are populated from the custom edit screen described later in this post.
    public string FormKey { get; set; }
    public string FieldsToSend { get; set; }
 
    void ISaveAction.Execute(ID formid, AdaptedResultList fields, params object[] data)
    {
        // Code to execute here
    }
}

That is really all you need; now it all becomes custom code and configuration. To configure the save action to show up you need to go to Modules –> WFM –> Settings –> Save action (master DB). Right-click on Save Actions and select new save action. Give the item the name of the new save action you want to see in the list. In the Assembly field put the name of the assembly your code lives in, and in the Class field put the name of the class including its namespace. Don’t pay attention to the “Editor” field just yet; we will come back to that.

[Image: the new save action item with its Assembly, Class and Editor fields]

Once you have this step done you can now set up your save action via the form designer. Select the new action in the drop down and click add.

[Image: adding the new save action in the form designer]

That is it, you now have a new custom save action. You can use the AdaptedResultList parameter to access all the fields on the form and work with them. If all you want to do is work with the form values, you are set. However, if you also need to allow the editor to provide some information to the action, you need to create an edit screen as well.
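As a rough sketch of what working with those fields inside Execute can look like (the FieldName and Value property names are as I recall them and may differ slightly between WFFM versions):

void ISaveAction.Execute(ID formid, AdaptedResultList fields, params object[] data)
{
    foreach (AdaptedControlResult field in fields)
    {
        // Each submitted form field comes through with its name and posted value.
        string name = field.FieldName;
        string value = field.Value;

        // ... map the name/value pair to whatever you are pushing the data to
    }
}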

Edit Screen

Creating and configuring the edit screen will make the “edit” button on the right enabled when you select your custom save action. There are two parts to doing this: first you create the Sheer UI, and then you create the code that the Sheer UI calls. I started building this out from Sitecore’s “My First Sheer UI Application” example but found it only somewhat helpful; since I am not really creating an application, it did not really address what I wanted to do. The XML section was helpful in understanding what controls I can declare in the XML to create the UI. The first step is really just creating the XML file that holds the markup that makes up your UI. I created a “SalesforceEditor.xml” file defined like this:

 
 
<?xml version="1.0" encoding="utf-8" ?>
<control xmlns="http://schemas.sitecore.net/Visual-Studio-Intellisense"
  xmlns:def="Definition"
  xmlns:asp="http://www.sitecore.net/microsoft/webcontrols">
  <SendToWebServiceSettings>
    <FormDialog ID="Dialog" Icon="Applications/32x32/gear.png"
      Header="Salesforce Web-to-Lead"
      Text="Define the OID for your Salesfoce form and define your mappings.">
      <CodeBeside Type="Web.Common.SalesforceWTLEditor,Web.Common"/>
      <GridPanel Class="scfContent" ID="MainGrid" Columns="2" Rows="5"
        Margin="20 25 25 15" Width="100%" Border="1">
 
        <Literal Text="Form Key:" GridPanel.Align="left" GridPanel.Column="0" GridPanel.Row="1"/>
        <Edit ID="FormKey" GridPanel.Column="1" GridPanel.Row="1"></Edit>
                
      </GridPanel>
    </FormDialog>
  </SendToWebServiceSettings>
</control>

All the fields are standard Sheer UI controls. The key element to note is “CodeBeside”. This points the UI to the code-beside class that will execute against this markup. For our simple example we are just trying to create an edit screen that looks like the image below, which gives the editor a simple place to enter the associated Salesforce key for each form that is generated.

[Image: the Salesforce Web-to-Lead edit dialog]

Once we have the XML file we need to tell Sitecore about it. This happens in a few places.

First you need to tell Sitecore about the layout you just created (the XML file). In the Sitecore “core” database, under Sitecore –> Layout –> Layouts –> Layouts, right-click and add a new item from the template “/sitecore/templates/System/Layout/Xml layout.” Name it what you want and set the path to the actual physical location of your XML file.

Second, in the Sitecore “core” database you need to set up your dialog: under Sitecore –> Content –> Applications –> Dialogs, right-click and add an application. Per the picture below, give it a name and an icon; it can be whatever you want. Once this is done, go to the “Presentation” ribbon and select the “Details” icon in the “Layout” section. Here you select “Edit” and choose the Sitecore layout you created in the previous step.

[Images: the new dialog application item and its layout details]

At this point you are done with the Sitecore configuration except for one thing. Earlier I told you to ignore one field on the custom save action item; here is where it becomes meaningful. In the first image above there is a field called “Editor.” By putting “control:SendToWebServiceSettings” in it we connect the edit button of the custom save action to our new Sheer UI edit screen. NOTE: “SendToWebServiceSettings” is the name of the main node in the XML file.

Now the last thing: you need to create the code-beside class that does all the processing of the form.

Create a class that inherits from DialogForm

 
class SalesforceWTLEditor : DialogForm
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (!Context.ClientPage.IsEvent)
        {
            // Execute your edit page onload code
        }
    }
 
    protected override void OnOK(object sender, EventArgs args)
    {
        // PersistentFormKeyValue and selectedFields come from the rest of the class (not shown here).
        SheerResponse.SetDialogValue(
            ParametersUtil.NameValueCollectionToXml(new NameValueCollection()
            {
                { "FormKey", PersistentFormKeyValue },
                { "FieldsToSend", string.Join(",", selectedFields) }
            }));
 
        base.OnOK(sender, args);
    }
}

The OnLoad method, as you would expect, fires when the form is opened. OnOK fires when the user clicks the OK button at the bottom of the form. I don’t have all my code here, but I have listed the key parts. In the code-beside you can access all the control IDs from your XML; for example, I can use MainGrid.Controls to get a list of all controls inside the GridPanel defined in my XML above. The SheerResponse call is the key to setting information in the edit UI that can be accessed by your custom save action. You will note I am setting a name/value collection for “FormKey” and “FieldsToSend”; the keys and the values are just strings. If you take a look at the custom save action class at the top of the post, you will see it has two public properties, “FormKey” and “FieldsToSend”. The SheerResponse.SetDialogValue method pushes my name/value collection into those properties, so when my save action fires I can access FieldsToSend or FormKey and get the string value from the collection, as sketched below.
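To close the loop, here is a rough sketch of the save action using those populated properties when it runs (the actual Salesforce call is omitted, and as before the WFFM property names may vary by version):

void ISaveAction.Execute(ID formid, AdaptedResultList fields, params object[] data)
{
    // WFFM fills these from the name/value XML the dialog returned via SetDialogValue.
    string salesforceKey = this.FormKey;
    string[] fieldsToSend = (this.FieldsToSend ?? string.Empty).Split(',');

    foreach (AdaptedControlResult field in fields)
    {
        if (Array.IndexOf(fieldsToSend, field.FieldName) >= 0)
        {
            // post field.Value to Salesforce, keyed by salesforceKey
        }
    }
}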