Archive for the ‘Microsoft’ Category

Hey Flickr, Where Did My Statistics Go? The CouchBase Connection. Part IV

We interrupt this series to take a side trip concerning application logging.  The series begins here. NLog is an excellent open source logging project available from NuGet and other sources.  The sample code for this blog post can be found HERE. Although this is a kitchen sink implementation (log to files, event logs, database, SMTP, whatever) I will be using it as a simple way to log text information to files.  Once you have created a Visual Studio project, open Tools / NuGet Package Manager / Package Manager Console.  From here you can add NLog to your project with the command:

PM> Install-Package NLog

This will install NLog, modify your project and add a project reference for NLog.  Although NLog targets and rules can be managed programmatically, I normally use the configuration file:

NLog.Config

You can set this up using the Package Manager Console with the command:

PM> Install-Package NLog.Config

Configuration File Setup

The NLog config file is then modified to define “targets” and “rules”.  The former defines where log entries are written and the latter defines which log levels are written to which targets.  A file-based target section might look like:

<targets>

  <target name="debugfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Debug.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

  <target name="logfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Info.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

  <target name="Warnfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Warn.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

  <target name="Errorfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Error.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

  <target name="Fatalfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Fatal.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

</targets>

where name is the symbolic name of the target and xsi:type="File" defines this as a file target.  If you are controlling the layout of the log entry yourself, set layout to "${message}".  Given that we are using xsi:type of File, we use fileName to set the physical location of the log file.  The value of fileName can even be changed programmatically at runtime; a brief sketch follows.
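For reference, here is a minimal sketch of that runtime change, assuming the target names from the config above and using the standard NLog configuration API:

using System;
using NLog;
using NLog.Targets;

// Minimal sketch: point the "logfile" target at a new file at runtime.
// The target name must match the name= attribute in NLog.config.
var fileTarget = LogManager.Configuration.FindTargetByName("logfile") as FileTarget;
if (fileTarget != null)
{
    // FileName is an NLog Layout; a plain string converts implicitly.
    fileTarget.FileName = "C:/temp/DLR.Flickr/Info-" + DateTime.Now.ToString("yyyyMMdd") + ".txt";
    LogManager.ReconfigExistingLoggers();
}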

NLog defines five log levels:  Debug, Info, Warn, Error and Fatal.  These levels are defined in an enum, and the names have no special significance except as you define them.  The Rules section of the config file defines which log levels are written to which targets.  A given level can be written to zero or more targets.  My Rules section typically looks like:

<rules>

  <logger name="*" minlevel="Debug" maxlevel="Debug" writeTo="debugfile" />
  <logger name="*" minlevel="Info"  maxlevel="Info"  writeTo="logfile" />
  <logger name="*" minlevel="Warn"  maxlevel="Warn"  writeTo="Warnfile" />
  <logger name="*" minlevel="Error" maxlevel="Error" writeTo="Errorfile" />
  <logger name="*" minlevel="Fatal" maxlevel="Fatal" writeTo="Fatalfile" />

</rules>

More complex rules are possible.  For example, the following pair writes Error entries to Errorfile and both Error and Fatal entries to Fatalfile:

  <logger name="*" minlevel="Error" maxlevel="Error" writeTo="Errorfile" />

  <logger name="*" minlevel="Error" maxlevel="Fatal" writeTo="Fatalfile" />

NLog initialization at runtime is very simple.  Typically you can use a single line like:

using NLog;

static Logger _LogEngine = LogManager.GetLogger("Log Name");

This needs to be called only once.

The simplest NLog log call (given the definition layout="${message}") would look like:

_LogEngine.Log(NLog.LogLevel.Info, "Info Message");

We can extend this quite simply.  I have a single-class extension of NLog on GitHub; you can find it here.  Specifically, I have provided wrapper methods for each NLog.LogLevel and support for exception stack dumps.  Include this file in your project (after installing NLog and NLog.Config) and then you can write:

using DLR.Util;

namespace DLR.CCDB.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            string _CorrelationID = System.Guid.NewGuid().ToString();
            CCDB cbase = new CCDB { CorrelationID = _CorrelationID };
            cbase.Client = CouchbaseManager.Instance;

            NLS.Info(_CorrelationID, "Hello, CouchBase");

            try
            {
                throw new ApplicationException("My Exception");
            }
            catch (Exception x)
            {
                NLS.Error(_CorrelationID, "Error", x.Message);
                //OR
                NLS.Error(_CorrelationID, "Error", x);
            }
        }
    }
}

_CorrelationID is supported here so that in multiuser situations (like Web API) we can identify which messages were written by which task.  In a console app this is not strictly necessary.  The call to NLS.Info results in an output log line like:

DLR|20140909-152031037|2f8f89ce-51de-4269-9ae0-9313ad2a0243|Hello, CouchBase|

where:

  • DLR is the log engine name (more than one engine can write to a given log file);
  • 20140909-152031037 is the terse timestamp, of the form YYYYMMDD-HHMMSSmmm;
  • 2f8f89ce-51de-4269-9ae0-9313ad2a0243 is the correlation ID; and
  • Hello, CouchBase is our text message.

My call:

NLS.Error(_CorrelationID, "Error", x);

would result in a log line like:

DLR|20140909-152544801|46e656cd-4e17-4285-a5f3-e1484dad2995|Error|Error Data. Message: [My Exception]Stack Trace: DLR.CCDB.ConsoleApp.Program.Main(String[] args)|

where:

  • Error is my message;
  • Error Data. Message: [My Exception] is the Message in the ApplicationException; and
  • Stack Trace: DLR.CCDB.ConsoleApp.Program.Main(String[] args) is the stack dump.

NLS will handle nested exceptions and stack dumps, but we are only showing a single un-nested exception in this example; a sketch of the nested case follows.
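As a quick sketch of the nested case (continuing the Main example above; NLS is assumed to walk the InnerException chain when handed the outer exception):

try
{
    try
    {
        throw new InvalidOperationException("Inner failure");
    }
    catch (Exception inner)
    {
        // Wrap the original failure in an outer exception.
        throw new ApplicationException("Outer failure", inner);
    }
}
catch (Exception x)
{
    // One call; the wrapper is assumed to dump both exceptions' messages and stacks.
    NLS.Error(_CorrelationID, "Nested error", x);
}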

OK! That’s it for this post.  We will, hopefully, return to CouchBase and the Flickr API in the next post.

Posted 2014/09/09 by Cloud2013 in GitHub, Microsoft, NLog, NuGet


Visual Studio 2013: Your License will expire in 2147483647 days.

[Screenshot: Visual Studio 2013 dialog reporting “Your license will expire in 2147483647 days.”]

Wikipedia helpfully explains:

The number 2,147,483,647 (two billion one hundred forty-seven million four hundred eighty-three thousand six hundred forty-seven) is the eighth Mersenne prime, equal to 2³¹ − 1. It is one of only four known double Mersenne primes…The number 2,147,483,647 may have remained the largest known prime until 1867.

The number 2,147,483,647 is also the maximum value for a 32-bit signed integer in computing. It is therefore the maximum value for variables declared as int in many programming languages running on popular computers, and the maximum possible score, money etc. for many video games. The appearance of the number often reflects an error, overflow condition, or missing value.

The data type time_t, used on operating systems such as Unix, is a 32-bit signed integer counting the number of seconds since the start of the Unix epoch (midnight UTC of 1 January 1970).[9] The latest time that can be represented this way is 03:14:07 UTC on Tuesday, 19 January 2038 (corresponding to 2,147,483,647 seconds since the start of the epoch), so that systems using a 32-bit time_t type are susceptible to the Year 2038 problem.
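As an aside, the arithmetic is easy to demonstrate; a hedged guess at the bug is that the countdown was stored in a 32-bit int that was set to, or overflowed to, its maximum value:

using System;

class IntMaxDemo
{
    static void Main()
    {
        int maxDays = int.MaxValue;         // 2147483647, i.e. 2^31 - 1
        Console.WriteLine(maxDays);         // 2147483647
        unchecked
        {
            // One more day and the counter silently wraps negative.
            Console.WriteLine(maxDays + 1); // -2147483648
        }
    }
}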


REST, WEB API And CORS

Introduction

Cross Domain AJAX calls (CORS) on desktop browsers require special processing on both the server side and in the way we call AJAX from within the browser. A general overview of CORS can be found here.  ASP.NET Web API allows a couple of fairly straightforward ways to implement REST HTTP endpoints with CORS support.  Using the current release build of Web API we can code our CORS handlers directly, or if you want to use the nightly builds of Web API you can use an attribute approach.  This post will concentrate on how to write CORS handlers directly, since this is the approach I have in test right now and it allows you more flexibility in implementation and improved debugging options.  I will be concentrating on implementation details and assume you have read the background material linked above before we start.  I will also be looking at the browser-side implementation of the CORS call and some issues with IE browsers (IE 9 in particular).  We are testing with Windows Server 2012 and are using Firefox, Chrome and IE as our test browsers.

Voice from the future: Brock Allen’s great work on CORS, his CORS attribute support, has now been incorporated into Web API 2.  See here and here for details.

So What’s the Problem?

The W3C defines special procedures required if a browser is going to make an AJAX call to a server which is not in the domain of the page making the call (hence Cross Domain).  To enable CORS the server must implement CORS and the browser must make the AJAX call following some conventions.  In the Web API framework CORS can be implemented at the method or site level.  We will focus on site-level CORS in this post.  The Web API pipeline allows us to hook in message handlers at several places.  The canonical CORS handler, given by the links listed above, looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Security.Claims;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Tracing;

public class CorsHandler : DelegatingHandler
{
    const string AccessControlRequestMethod = "Access-Control-Request-Method";
    const string AccessControlRequestHeaders = "Access-Control-Request-Headers";
    const string AccessControlAllowOrigin = "Access-Control-Allow-Origin";
    const string AccessControlAllowMethods = "Access-Control-Allow-Methods";
    const string AccessControlAllowHeaders = "Access-Control-Allow-Headers";

    protected override Task<HttpResponseMessage> SendAsync( HttpRequestMessage request, CancellationToken cancellationToken )
    {
        const string Origin = "Origin";
        bool isCorsRequest = request.Headers.Contains( Origin );
        bool isPreflightRequest = request.Method == HttpMethod.Options;
        if ( isCorsRequest )
        {
            //HTTP CORS OPTIONS
            if ( isPreflightRequest )
            {
                HttpResponseMessage response = new HttpResponseMessage( HttpStatusCode.OK );
                response.Headers.Add( AccessControlAllowOrigin, request.Headers.GetValues( Origin ).First( ) );

                string accessControlRequestMethod = request.Headers.GetValues( AccessControlRequestMethod ).FirstOrDefault( );
                if ( accessControlRequestMethod != null )
                {
                    response.Headers.Add( AccessControlAllowMethods, accessControlRequestMethod );
                }

                string requestedHeaders = string.Join( ", ", request.Headers.GetValues( AccessControlRequestHeaders ) );
                if ( !string.IsNullOrEmpty( requestedHeaders ) )
                {
                    response.Headers.Add( AccessControlAllowHeaders, requestedHeaders + ", AICJWT" );
                }

                TaskCompletionSource<HttpResponseMessage> tcs = new TaskCompletionSource<HttpResponseMessage>( );
                tcs.SetResult( response );
                return tcs.Task;
            }
            else
            {
                //HTTP CORS GET
                return base.SendAsync( request, cancellationToken ).ContinueWith<HttpResponseMessage>( t =>
                {
                    HttpResponseMessage resp = t.Result;
                    resp.Headers.Add( AccessControlAllowOrigin, request.Headers.GetValues( Origin ).First( ) );
                    return resp;
                } );
            }
        }
        else
        {
            //NOT A CORS CALL
            return base.SendAsync( request, cancellationToken );
        }
    }
}

Let’s break this down from the simplest part first.  We create a class derived from DelegatingHandler (since we are implementing at the site level).  We hook this handler into the system within the framework-generated class WebApiConfig as:

public static class WebApiConfig
{
    public static void Register( HttpConfiguration config )
    {
        //your route code here

        config.MessageHandlers.Add( new WebAPI.Handler.CorsHandler( ) );

        //other handlers are included here.
    }
}

If you have other classes based on DelegatingHandler, the order in which they are added in WebApiConfig matters: requests flow through the handlers in the order added, and responses unwind in reverse.

In the simplest case where we are not making a CORS call we can simply return the handler without action as:

return base.SendAsync( request, cancellationToken );

When the CORS call is made by the browser, the caller should include the standard HTTP header Origin with a value of the calling page’s domain.  The canonical code assumes this and uses the presence of this header to detect a CORS call; hence the code:

const string Origin = "Origin";
bool isCorsRequest = request.Headers.Contains( Origin );

If the CORS call is not an OPTIONS call (which the canonical code calls preflight) we see the code:

return base.SendAsync( request, cancellationToken ).ContinueWith<HttpResponseMessage>( t =>
{
HttpResponseMessage resp = t.Result;
resp.Headers.Add( AccessControlAllowOrigin, request.Headers.GetValues( Origin ).First( ) );
return resp;
} );

Here the code returns a required header for the Browser: Access-Control-Allow-Origin with the value taken from the Origin Header of the caller.

We could, if we chose to, have set the value to the wild card value ( * ), but this openness may make your system administrator nervous.  Notice here that nothing in the W3C specification restricts what other headers the sender can include in the CORS call.  However certain browsers (IE) and certain Javascript packages (jQuery) restrict the call to standard HTTP request headers.  In our implementation this gave us some problems, but more on this later.  The browser code (User-Agent), not the user code, will refuse to accept the return if the Access-Control-Allow-Origin header is missing or does not contain either the wild card or the calling page’s domain.

So What is the Rest of the Handler Code Doing?

Following this document from Mozilla.org, the browser making the call may make an optional CORS OPTIONS call (see here for HTTP verbs if this one is new to you).  This preflight call (as the canonical code names it) asks the server for details about what may be in the CORS request when it is actually made.  Following the Mozilla explanation, here is what needs to happen:

    • 1a. The User-Agent, rather than doing the call directly, asks the server (the API) for permission to do the request. It does so with the following headers:
      • Access-Control-Request-Headers contains the headers the User-Agent wants to use.
      • Access-Control-Request-Method contains the method the User-Agent wants to use.
    • 1b. The API answers what is authorized:
      • Access-Control-Allow-Origin: the origin that’s accepted. Can be * or the domain name.
      • Access-Control-Allow-Methods: a list of allowed methods. This can be cached. Note that the request asks permission for one method and the
        server should return a list of accepted methods.
      • Access-Control-Allow-Headers: a list of allowed headers, for all of the methods, since this can be cached as well.

In the canonical code given above here is what happens in the CORS OPTIONS call:

//( 0 ) create a response object
HttpResponseMessage response = new HttpResponseMessage( HttpStatusCode.OK );

//( 1 ) build the value string for the Access-Control-Allow-Origin header from the Origin header value of the request
response.Headers.Add( AccessControlAllowOrigin, request.Headers.GetValues( Origin ).First( ) );

//( 2 ) build the value string for the Access-Control-Allow-Methods header from the Access-Control-Request-Method value of the request
string accessControlRequestMethod = request.Headers.GetValues( AccessControlRequestMethod ).FirstOrDefault( );
if ( accessControlRequestMethod != null )
{
    response.Headers.Add( AccessControlAllowMethods, accessControlRequestMethod );
}

//( 3 ) build the value string for the Access-Control-Allow-Headers header from the Access-Control-Request-Headers values of the request
string requestedHeaders = string.Join( ", ", request.Headers.GetValues( AccessControlRequestHeaders ) );
if ( !string.IsNullOrEmpty( requestedHeaders ) )
{
    response.Headers.Add( AccessControlAllowHeaders, requestedHeaders );
}

//( 4 ) interrupt the pipeline and return the response object to the caller
TaskCompletionSource<HttpResponseMessage> tcs = new TaskCompletionSource<HttpResponseMessage>( );
tcs.SetResult( response );
return tcs.Task;

Please note that we can put whatever we need into the header values.  For example, if we wanted to limit CORS calls to GET requests only we could replace ( 2 ) with the simple:

response.Headers.Add( AccessControlAllowMethods, "GET" );

To allow a specific domain only to make the CORS call we could replace ( 1 ) with:

response.Headers.Add( AccessControlAllowOrigin, "www.special.com" );
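Neither replacement validates anything by itself; the browser does the enforcing.  If you want the server to be defensive, a sketch like the following (the domain list is hypothetical) echoes the Origin only when it is whitelisted; it would slot into the preflight branch of CorsHandler:

// Hypothetical whitelist; only echo origins we trust.
string[] allowedOrigins = { "http://www.special.com", "https://www.special.com" };

string origin = request.Headers.GetValues( Origin ).First( );
if ( allowedOrigins.Contains( origin, StringComparer.OrdinalIgnoreCase ) )
{
    response.Headers.Add( AccessControlAllowOrigin, origin );
}
// else: omit the header entirely and the user-agent will reject the response.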

In our case we wanted to allow a specific non-standard header into the CORS request.  We called this header AICJWT, so we expanded the key line in ( 3 ) to be:

response.Headers.Add( AccessControlAllowHeaders, requestedHeaders + ", AICJWT" );

The reason we added it explicitly here is due to problems in both jQuery and in IE.  Please note again that the CORS OPTIONS call is optional.  At this point in our development we were using the awesome async Framework 4.5 object System.Net.Http.HttpClient.  This is a great object and very useful during development, BUT there is no User-Agent (browser-side code) involved.
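A sketch of that development-time testing (the endpoint URL and token reuse the sample values from the browser code below; the Origin domain is hypothetical): HttpClient lets you hand-roll both the preflight and the actual CORS GET:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class CorsSmokeTest
{
    static async Task RunAsync()
    {
        using (var client = new HttpClient())
        {
            // Simulate the preflight the browser would send.
            var preflight = new HttpRequestMessage(HttpMethod.Options, "http://whereever/api/Concert?Year=1980");
            preflight.Headers.Add("Origin", "http://caller.example.com");
            preflight.Headers.Add("Access-Control-Request-Method", "GET");
            preflight.Headers.Add("Access-Control-Request-Headers", "AICJWT");
            var preflightResponse = await client.SendAsync(preflight);

            IEnumerable<string> allowed;
            if (preflightResponse.Headers.TryGetValues("Access-Control-Allow-Origin", out allowed))
            {
                Console.WriteLine(string.Join(",", allowed));
            }

            // Then the actual CORS GET carrying the custom header.
            var get = new HttpRequestMessage(HttpMethod.Get, "http://whereever/api/Concert?Year=1980");
            get.Headers.Add("Origin", "http://caller.example.com");
            get.Headers.Add("AICJWT", "myspecialstuff");
            var response = await client.SendAsync(get);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}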

The Trouble Begins: Browser Side Code

All seemed swell, till the JavaScript coders tried to call into the system.  jQuery forces a CORS OPTIONS call when it detects a cross-domain AJAX call.  For reasons which remain unclear, jQuery does not include custom headers in the OPTIONS request.  Some people think this is in the W3C spec for CORS but I don’t see it there, do you?  Some folks out there indicate that the user-agent is forcing the OPTIONS request, but this is not true.  If we use a direct AJAX call, not using jQuery, we can make our own CORS OPTIONS request or skip the OPTIONS call completely.  Here is the code to make the call using JavaScript in IE:

function callHttpReq() {
    var invocation = new XMLHttpRequest();
    var url = 'http://whereever/api/Concert?Year=1980';
    var body = '';
    var token = "myspecialstuff";

    function callOtherDomain() {
        if (invocation) {
            invocation.open('GET', url, true);
            invocation.setRequestHeader('AICJWT', token);
            invocation.setRequestHeader('Accept', 'application/json');
            invocation.onreadystatechange = httpHandler;
            invocation.send(body);
        }
    }
    callOtherDomain();

    function httpHandler() {
        if (invocation.readyState == 4) {
            var jsonObj = $.parseJSON(invocation.responseText);
            if (jsonObj.length > 0) {
                var htmlStr = "<ul>";
                $.each(jsonObj, function (i, row) {
                    htmlStr += "<li>" + row.Date + '----' + " " + row.Venue + "</li>";
                });
                htmlStr += "</ul>";
                $("#responeBody").append(htmlStr);
            }
        }
    }
}

Note we are skipping jQuery because we require a custom header in our use case.  This step is not necessary if you are NOT using custom headers in the CORS call.  Note also that if you are not using jQuery you may need a different AJAX object than IE’s XMLHttpRequest in other browsers.  If you can use jQuery there is a massive amount of documentation about how to make AJAX calls, and jQuery will handle CORS and the differences between the IE and other AJAX objects automatically.

IE Blues

OK, all is good, but when we test with IE 8 or 9 we get back the data from the CORS GET, BUT the user also gets the dialog box:

[Screenshot: IE cross-domain security warning dialog]

Microsoft tells us the user can suppress this in IE8 and IE9 by following this procedure:

You can check your Security Zone level as well as change this setting by going to Tools, Internet Options, and click Security tab. Your Security Zone level will be listed on this screen, by default this is set to Medium-high. To change the setting for your message click Custom Level , locate the Access data sources across domains under the Miscellaneous section and change the setting from Prompt to a desired setting.

[Screenshot: IE Internet Options Security tab, custom level: “Access data sources across domains” setting]

We do not have this problem in Chrome or Firefox. Live Free or Die.

One Last Server Side Issue

During our testing, using Windows Server 2012, we ran into one additional problem.  Our CORS OPTIONS calls were not getting to our site but were being intercepted by an HTTP module prior to the site delegating handler.  Without getting into it too deeply, we needed to modify the web.config for our CORS site to disable WebDAV (don’t ask) and allow OPTIONS for the ExtensionlessUrlHandler.  See here for details.  As far as we know this is a pure Windows Server 2012 issue.

SQL Server As An Object Store – Not the Best but…

     In a prior post I discussed using SQL Server as a tool to store C# objects.  There are two downsides to this approach: the need to manage object size and the need to optimize retrieval times.  The basic approach is to use a key/value column layout to store the data.  We first define a column to hold the object key.  Then we need to define any other columns which will be used to aid in retrieval of the object (let’s call this metadata).  In my case I am primarily interested in retrieving objects based on key and a date range.  To optimize this we used an integer column for the id and made a primary key on this column.  I store the date not as the SQL Server datetime type, which is too complex and too slow for our purposes; the simple and typical solution is to use a fixed-width character format (char(8)) and store the date as an ISO string in the form YYYYMMDD.  This is simple and sorts in the expected manner without fuss or muss.  In SQL Server, additional indexes can be created on the metadata if needed.

     Having defined both the key and metadata, we are left with the question of how to define the column to hold the XML serialized form of the C# object.  Since we don’t know the maximum size of the XML string derived from the objects, we need something which will flex as the size grows.  Currently SQL Server has a maximum size for a declared variable-length string of 8,000 characters.  If all of our objects will be 8,000 characters or less we could define the object column as varchar(8000).  If the object size could grow beyond this maximum we will need to use what SQL Server calls varchar(max).  In this case, at this time, varchar(max) can support up to 2,147,483,647 characters (2³¹ − 1 if you are checking).  This is good and will serve for most objects.

     We now turn to an additional optimization.  Looking at the (rather poorly written) SQL Server documentation on varchar(max) we find that if the string to be stored (our object serialized as an XML string) is 8,000 bytes or less it can be stored ‘in line’ with the data row.  If the size is greater than 8,000 the data will be stored ‘elsewhere’ and a pointer to the data is stored in the column.  When this happens a SELECT statement will return pointers to the data, and only when we directly access the column in our code is the data string retrieved.  This indirect access can be costly.  Note: to be sure that strings of 8,000 characters or less are stored ‘in line’ you must set a TABLE option on the SQL table containing your varchar(max) column.  The format of this T-SQL command is:

sp_tableoption N'YOURTABLENAMEHERE', 'Large Value Types Out Of Row', 'OFF'

If this option is ‘ON’, even strings equal to or smaller than 8,000 characters will be stored ‘elsewhere’.  Got it? Good.

     We can make one other generalized optimization.  From Framework 2.0 on, GZIP has been part of the base system underneath C#.  We can compress the XML string to be stored into GZIP format at a small CPU cost.  The GZIP code is compact and simple but very specific.  The basic program flow is:

Object => XML String => byte array => GZIP compression => byte array => Base64 Encoded String

Note that GZIP works on byte arrays, not strings.  We need a string to push to SQL Server; the only portable, safe conversion from a compressed byte array to a string is to call the Framework function to create a Base64 encoded string.  Retrieval reverses this process:

Base64 Encoded String => byte array => GZIP decompression => byte array => XML string

I am seeing compressions like these, at a CPU time cost of less than 10 ms on my desktop:

XML String Size | Compressed Bytes | Base64 Encoded String
6,603           | 964              | 1,288
25,414          | 2,532            | 3,376
42,495          | 4,075            | 5,436
116,422         | 8,500            | 11,336

The maximum string that GZIP will compress is 4GB, so we will hit the SQL Server limits long before the GZIP limit.

Here is the code (condensed from many examples on the web).

Standard Libraries:

using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Text;

Compression:

public static byte[] Compress( byte[] bytes )
{
    using ( MemoryStream ms = new MemoryStream( ) )
    {
        using ( GZipStream gs = new GZipStream( ms, CompressionMode.Compress, true ) )
        {
            gs.Write( bytes, 0, bytes.Length );
        }
        ms.Position = 0L;
        return ToByteArray( ms );
    }
}

public static byte[] ToByteArray( Stream stream )
{
    int count = 0;
    List<byte> result = new List<byte>( );
    try
    {
        byte[] buffer = new byte[0x20000];
        int bytes = 0;
        while ( ( bytes = stream.Read( buffer, 0, 0x20000 ) ) > 0 )
        {
            count += bytes;
            for ( int i = 0; i < bytes; i++ )
            {
                result.Add( buffer[i] );
            }
        }
    }
    catch ( Exception )
    {
        //log error and re-throw; a bare throw preserves the stack trace
        throw;
    }
    return result.ToArray( );
}

Decompression:

public static byte[] DeCompress( byte[] bytes )
{
    byte[] result;
    using ( MemoryStream ms = new MemoryStream( ) )
    {
        ms.Write( bytes, 0, bytes.Length );
        using ( GZipStream gs = new GZipStream( ms, CompressionMode.Decompress, true ) )
        {
            ms.Position = 0L;
            result = ToByteArray( gs );
        }
    }
    return result;
}

Wrapper Methods:

/// <summary>
/// GZip Compression Wrapper
/// </summary>
/// <param name="sin">regular string (not Base64 Encoded)</param>
/// <returns>Base64 Encoded Compressed String</returns>
public static string Compress( string sin )
{
    byte[] cBytes = Compress( Encoding.Unicode.GetBytes( sin ) );
    return Convert.ToBase64String( cBytes );
}

/// <summary>
/// GZip DeCompression Wrapper
/// </summary>
/// <param name="sin64">Base64 Encoded Compressed String</param>
/// <returns>UnCompressed Regular String</returns>
public static string DeCompress( string sin64 )
{
    byte[] cBytes2 = Convert.FromBase64String( sin64 );
    byte[] ucBytes = DeCompress( cBytes2 );
    return Encoding.Unicode.GetString( ucBytes );
}


The bottom line?  SQL Server will work as a key/value object store but:

( 0 ) A lot of hand tooling is required;
( 1 ) It is not blazingly fast for larger objects (larger than 8,000 bytes);
( 2 ) There are size limits which must be watched closely; and
( 3 ) A lot of translation is involved between the raw object and what is ultimately stored.

None of these issues are fatal, but it makes one wonder if CouchDB or some other key/value store wouldn’t be a whole lot easier to work with.

QCON 2011 San Francisco and Occupy California

Let me say right off that I do not pay for my own ticket to QCON; my boss picks up the tab.  I love QCON.  It is definitely not MIX.  I go there to see what is happening in the world which is NOT Oracle and NOT Microsoft.  That’s the same reason I read their online zine: InfoQ.  QCON always provides a look at what is current and recent in the open stack world.  This year we looked closely at REST, Mobile development, Web API and NOSQL.  As it did last year, QCON provided a nice look at what is open and emerging.  Big metal will always be with us, but the desktop is looking very weak during the next few years while Mobile devices of all kinds and makers are exploding.  The biggest fallout is that while HTML5 is only slowly emerging on desktops already in place, all new Mobile devices (which is to say most new systems) will be fully HTML5 compliant.  Not only that, but with the exception of Windows Phones, the rendering engine for all mobile devices is based on WebKit.  What this means for those of us in the cubes is that worrying about how to bridge to pre-HTML5 browsers with HTML5 code is a non-issue.  Mobile development is HTML5 development.  The big metal end of the supply chain is being segmented into Web API servers (which service JSON XHR2 data calls) and the NOSQL engines which serve the Web API farms.  Remember, a native mobile app ideally has pre-loaded all of its pages; its interactions are solely over JSON XHR2 for data (be it documents, data or HTML fragments).  The traditional JSP or ASPX web server is not really in play with native mobile apps and has an increasingly small role to play in “native like” or browser-based mobile apps.  Let’s move on.

“IPad Light by cloud2013”

Speaking of moving on: there is an occupation going on in this country.  I visited occupation sites in San Francisco, UCal Berkeley and Berkeley “Tent City”.  These are all very active and inspiring Occupy sites.  Now if we can only get to Occupy Silicon Valley!

I attended the REST in Practice tutorial this year and it was very nice.  The authors were well informed and the agenda comprehensive.  I personally like the Richardson maturity model but think that people are not facing up to the fact that level three is rarely achieved in practice, and the rules of web semantics necessary to interoperate at level 3 are almost non-existent.  Remember, the original REST model is client/server.  The basic model is a finite state machine, and the browser (and the user) are in this model required to be dumb as fish.  Whether Javascript is a strong enough model and late-binding semantics can be made clear enough to pull off level three is really an open question which no one has an answer to.  If we forget about interoperability (except for OAuth) things start to fall into place, but we thought OPENNESS was important to REST.

Workshop: REST In Practice by the Authors: Ian Robinson & Jim Webber

Why REST? The claims:

· Scalable

· Fault Tolerant

· Recoverable

· Secure

· Loosely coupled

Questions / Comment:

Do we agree with these goals?

Does REST achieve them?

Are there other ways to achieve the same goals?

REST design is important for serving AJAX requests and AJAX requests are becoming central to Mobile device development, as opposed to intra-corporate communication. See Web API section below.

Occupy Market Street (San Francisco)            

The new basic Document for REST: Richardson Maturity Model (with DLR modifications)

Level 0:

One URI endpoint

One HTTP method [Get]

SOAP, RPC

Level 1:

Multiple URI,

One HTTP Method [Get]

Century Level HTTP Codes (200,300,400,500)

Level 2:

Multiple URI,

Multiple HTTP Methods

Fine Grain HTTP Codes (“Any code below 500 is not an error, it’s an event”)

URI Templates

Media Format Negotiation (Accept request-header)

Headers become major players in the interaction between client and server

Level 3:  The Semantic Web

Level 2 plus

Links and Forms Tags (Hypermedia as the engine of state)

Plus emergent semantics

<shop xmlns="http://schemas.restbucks.com/shop"
      xmlns:rb="http://relations.restbucks.com/">
  <items>
    <item>…</item>
    <item>…</item>
  </items>
  <link rel="self" href="http://restbucks.com/quotes/1234" type="application/restbucks+xml"/>
  <link rel="rb:order-form" href="http://restbucks.com/order-forms/1234" type="application/restbucks+xml"/>
</shop>


Think of the browser (user) as a finite state machine where the workflow is driven by link tags which direct the client as to which states it may transition to and the URI associated with each state transition.

The classic design paper on applied REST architecture is here: How To GET a Cup Of Coffee. Moving beyond level 1 requires fine grain usage of HTTP Status Codes, Link tags, the change headers and media type negotiation. Media formats beyond POX and JSON are required to use level 3 efficiently (OData and ATOM.PUB for example).

Dude, where’s my two phase commit?  Not supported directly; use the change headers (If-Modified-Since, If-None-Match, ETag) or architectural redesign (redefine resources or workflow), as sketched below.  The strategic choice is the design of the finite state machine and the definition of resource granularity.
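A sketch of the conditional-update alternative (the URL reuses the restbucks.com example above; the resource body is hypothetical), using ETags for optimistic concurrency instead of a transaction:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class EtagUpdateSketch
{
    static async Task UpdateAsync()
    {
        using (var client = new HttpClient())
        {
            // GET the current representation plus its ETag.
            var getResponse = await client.GetAsync("http://restbucks.com/quotes/1234");
            EntityTagHeaderValue etag = getResponse.Headers.ETag;

            // Conditional PUT: only succeeds if the resource is unchanged.
            var put = new HttpRequestMessage(HttpMethod.Put, "http://restbucks.com/quotes/1234");
            put.Headers.IfMatch.Add(etag);
            put.Content = new StringContent("<quote>…</quote>");

            var putResponse = await client.SendAsync(put);
            if (putResponse.StatusCode == HttpStatusCode.PreconditionFailed) // 412
            {
                // Someone changed the resource first: re-GET and retry.
            }
        }
    }
}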

[Slide from REST in Practice]

Architectural Choices:

The Bad Old Days: One resource many, many ‘verbs’.

The Happy Future: Many, many resources, few verbs.

The Hand Cuff Era: Few Resources, Few verbs.

The Greater Verbs:

GET: Retrieve a representation of a resource

POST: Create a new resource (Server sets the key)

PUT: Create new resource (Client sets the key); ( or Update an existing resource ?)

DELETE: Delete an existing resource

Comment: The proper use of PUT vs. POST is still subject to controversy and indicates (to me) that level 3 is still not well defined.

Typically they say POST to create a blog entry and PUT to append a comment to a blog.  In CouchDB we POST to create a document and PUT to add a revision (not a delta) and get back a new version number.  The difference here is how the resource is being defined, which is an architectural choice.


The Lesser Verbs:

OPTIONS: See which verbs a resource understands

HEAD: Return only the header (no response body)

PATCH: Does not exist in HTML5. This would be a delta Verb but no one could agree on a specification for the content.  Microsoft did some early work on this with their XML Diffgram but no one else followed suit.

Security

Authentication (in order of increased security)

Basic Auth

Basic Auth + SSL

Digest

WSSE Authentication (ATOM uses this)

Message Security:

Message Level Encrypt (WS-SEC)

For the Microsoft coders I highly recommend RESTful .NET (WCF for REST, Framework 3.5) by Jon Flanders.

There are significant advantages to building your RESTful services using .Net.  Here is a comparison table to get you oriented:

DLR’s Cross Reference:

#  | Web Service Standard | REST Service                      | WCF for REST (Framework 3.5)
1  | TCP/IP + others      | TCP/IP                            | TCP/IP
2  | SOAP Wrapper         | HTTP                              | HTTP
3  | SOAP Headers         | HTTP Headers                      | HTTP Headers
4  | WS*Security          | Basic Auth/SSL                    | Basic Auth/SSL or WS*Security
5  | Early Binding        | Late Binding                      | Late Binding
6  | XSD                  | WADL                              | XSD, WADL
7  | XML                  | Media Negotiation                 | Media Negotiation
8  | SOAP FAULTS          | HTTP Response Codes               | HTTP Response Codes
9  | Single Endpoint      | Multiple Endpoints, URI Templates | Multiple Endpoints, URI Templates
10 | Client Proxy         | Custom                            | auto-generated Javascript proxy


The REST of the Week

Wednesday is more or less vendor day at QCON and the sessions are a step down from the tutorials, but the session quality picked up again on Thursday and Friday.  XXX XXXX, who gave an excellent tutorial last year, gave an informative talk on ‘good code’.  The Mobile Development and HTML5 tracks were well attended and quite informative.  The field is wide open, with many supporting systems being free to the developer (support will cost you extra), and the choices are broad: from browser ‘responsive design’ applications to native-appearing applications to native apps (and someone threw in “hybrid app” into the mix).  The Mobile panel of IBM Dojo, jQuery.Mobile and Sencha was hot.  I am new (to say the least) to Mobile development but here are my (somewhat) random notes on these sessions:

MOBILE Development is HTML5 Development

HTML5 is the stack. Phone and Tablet applications use WebKit based rendering engines and HTML5 conformant browsers only (Windows Phone 7 is the exception here). HTML5 has its own new security concerns ( New Security Concerns)

The major application development approaches are:

· Browser Applications;

· Native like Applications;

· Hybrid Applications; and

· Native Applications.

Browser applications may emulate the screens seen on the parallel desktop browser versions on the front end, but in practice the major players (Facebook, YouTube, Gmail) make substantial modifications to at least the non-visual parts of the Mobile experience, making extensive use of local storage and the HTML5 manifest standard for performance and to allow for a reasonable offline experience.  Browser applications fall under the guidelines of Responsive Design (aka Adaptive Design) and tend to be used when content will appear similarly between desktop and Mobile devices.

“Native like” applications use:

· The Browser in full screen Mode with no browser ‘chrome’; and

· Widgets are created using CSS, JS and HTML5 which simulate the ‘look and feel’ of a native application;

· No Access to Native Functionality (GPS, Camera, etc)

· Tend to use, but do not require, the HTML5 manifest and local storage (strongly encouraged).

A Native application is still an HTML5 application with the following characteristics:

· All JS Libraries, CSS and HTML are packaged and pre-loaded using a vendor specific MSI/Setup package;

· AJAX type calls for data are allowed;

· Access to Native Widgets and/or Widgets are created using CSS, JS and HTML5

· Access to Native Functionality (GPS, Camera, etc)

· Standard HTTP GET or POST are NOT allowed

A Hybrid Application is a “Native Like” Application” placed within a wrapper which allows access to device hardware and software (like the camera) via a special JavaScript interface and, with additional special coding, can be packaged within a MSI/Setup and distributed as a pure Native application.

AJAX calls are made via XHR2 (aka XMLHttpRequest Level 2), which among other things relaxes the single-domain requirement of XHR and adds the Blob and File interfaces.

The following major vendors offer free libraries and IDE for development:

Native Apps: PhoneGap, Appcelerator

Native App Like: Sencha, PhoneGap, IBM Dojo

Browser App: JQuery.Mobile

PhoneGap does NOT require replacement of the Sencha, jQuery.Mobile, or Dojo.Mobile libraries.

PhoneGap allows JavaScript to call PhoneGap JavaScript libraries which abstract access to device hardware (camera, GPS, etc).

Sencha does not require replacement of the jQuery.Mobile or Dojo.Mobile libraries.

Although it is theoretically possible to create “Native like” applications with only jQuery.Mobile, this is NOT encouraged.

Local Storage

This is a major area of performance efforts and is still very much open in terms of how best to approach the problem:

The major elements are:

App Cache (for pre-fetch. and Native App Approach)

DOM Storage (aka Web Storage)

IndexedDB (vs. Web SQL)

File API (this is really part of XHR2)

Storing Large Amounts of Data Locally

If you are looking to store many Megabytes – or more, beware that there are limits in place, which are handled in different ways depending on the browser and the particular API we’re talking about. In most cases, there is a magic number of 5MB. For Application Cache and the various offline stores, there will be no problem if your domain stores under 5MB. When you go above that, various things can happen: (a) it won’t work; (b) the browser will request the user for more space; (c) the browser will check for special configuration (as with the “unlimited_storage” permission in the Chrome extension manifest).

IndexedDB:


Web SQL Database is a web page API for storing data in databases that can be queried using a variant of SQL.

Storage Non-Support as of two weeks ago.

          | IE         | Chrome    | Safari     | Firefox    | iOS        | BBX [RIM]  | Android
IndexedDB | Supported  | Supported | No Support | Supported  | No Support | No Support | No Support
WEB SQL   | No Support | Supported | Supported  | No Support | Supported  | Supported  | Supported


Doing HTML5 on non-HTML5 browsers: if you are doing responsive design and need to work with desktop and Mobile using the same code base, look at jQuery.Mobile, Dojo, and Modernizr (strong Microsoft support for this JavaScript library).

WEB API

What is it?  Just a name for breaking out the AJAX servers from the web server.  This is an expansion of REST into just serving data for XHR.  It is a helpful way to specialize our design discussions by separating serving pages (with MVC or whatever) from serving data calls from the web page.  Except for security, the two can be architecturally separated.

Web APIs Technology Stack


Look familiar?  Looks like our old web server stack to me.

NOSQL

The CAP Theorem  (and Here)

  • Consistency: (all nodes have the same data at the same time)
  • Availability: (every request receives a response – no timeouts, offline)
  • Partition tolerance: (the system continues to operate despite arbitrary message loss)

Pick Any Two

If some of the data you are serving can tolerate Eventual Consistency then NOSQL is much faster.

If you need two phase commit, either use a SQL database OR redefine your resource to eliminate the need for the 2Phase Commit.

NoSQL databases come in two basic flavors:

Key/Value: These are popular with content management and where response time must be minimal.  In general you define which B-tree indexes you want before the fact.  There are no on-the-fly joins or projections.  MongoDB and CouchDB are typical leaders in this area.

Column Map: This is what Google calls Big Table. This is better for delivering groups of records based on criteria which may be defined ‘on the fly’. Cassandra is the leader in this group.

Web Sockets:

Sad to say, this is still not standardized and preliminary support libraries are still a little rough.  Things do not seem to have moved along much since the Microsoft sessions I attended at MIX 11.

Photos: All Photos by Cloud2013
