Archive for the ‘Web’ Category

Hey Flickr, Where Did My Statistics Go? The CouchBase Connection. Part IV

We interrupt this series to take a side trip concerning application logging.  The series begins here.  NLog is an excellent open source logging project available from NuGet and other sources.  The sample code for this blog post can be found HERE.  Although NLog is a kitchen sink implementation (it can log to files, event logs, databases, SMTP and more), I will be using it simply to log text information to files.  Once you have created a Visual Studio project, open Tools / NuGet Package Manager / Package Manager Console.  From here you can add NLog to your project with the command:

PM> Install-Package NLog

This will install NLog, modify your project and add a project reference for NLog.  Although NLog targets and rules can be managed programmatically, I normally use the configuration file:

NLog.Config

You can set this up using the Package Manager Console with the command:

PM> Install-Package NLog.Config

Configuration File Setup

The NLog config file is then modified to define "targets" and "rules".  The former defines where log entries are written and the latter defines which log levels are written to which targets.  A file-based target section might look like:

<targets>

  <target name="debugfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Debug.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

  <target name="logfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Info.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

  <target name="Warnfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Warn.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

  <target name="Errorfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Error.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

  <target name="Fatalfile" xsi:type="File" layout="${message}" fileName="C:/temp/DLR.Flickr/Fatal.txt" archiveNumbering="Rolling" archiveEvery="Day" maxArchiveFiles="7" ConcurrentWrites="true"/>

</targets>

where name is the symbolic name of the target and xsi:type defines this as a file target.  If you are controlling the layout of the log entry yourself, set layout to "${message}".  Since xsi:type is File, we can use fileName to set the physical location of the log file.  The value of fileName can even be changed programmatically at runtime.
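For example, a minimal sketch of retargeting the Info log file at runtime might look like this (the target name "logfile" matches the configuration above; the new file name is just for illustration):

using NLog;
using NLog.Targets;

// Point the "logfile" target at a different file and make existing loggers pick up the change.
var config = LogManager.Configuration;
var fileTarget = config.FindTargetByName("logfile") as FileTarget;
if (fileTarget != null)
{
    fileTarget.FileName = "C:/temp/DLR.Flickr/Info-" + System.DateTime.Now.ToString("yyyyMMdd") + ".txt";
    LogManager.ReconfigExistingLoggers();
}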

NLog defines five log levels: Debug, Info, Warn, Error and Fatal.  These levels are defined in an enum and the names have no special significance except as you define them.  The Rules section of the config file defines which log levels are written to which targets; a given level can be written to zero to many targets.  My Rules section typically looks like:

<rules>

  <logger name="*" minlevel="Debug" maxlevel="Debug" writeTo="debugfile" />

  <logger name="*" minlevel="Info" maxlevel="Info" writeTo="logfile" />

  <logger name="*" minlevel="Warn" maxlevel="Warn" writeTo="Warnfile" />

  <logger name="*" minlevel="Error" maxlevel="Error" writeTo="Errorfile" />

  <logger name="*" minlevel="Fatal" maxlevel="Fatal" writeTo="Fatalfile" />

</rules>

More complex rules like the following are possible:

  <logger name="*" minlevel="Error" maxlevel="Error" writeTo="Errorfile" />

  <logger name="*" minlevel="Error" maxlevel="Fatal" writeTo="Fatalfile" />

NLog initialization at runtime is very simple.  Typically you can use a single line like:

using NLog;

static Logger _LogEngine = LogManager.GetLogger("Log Name");

This need only be called once.

The simplest NLog log call (given the definition layout="${message}") would look like:

_LogEngine.Log(NLog.LogLevel.Info, "Info Message");

We can extend this quite simply.  I have a single-class extension of NLog on GitHub; you can find it here.  Specifically, it provides wrapper methods for each NLog.LogLevel and support for exception stack dumps.  Include this file in your project (after installing NLog and NLog.Config) and then you can write:

using System;
using DLR.Util;

namespace DLR.CCDB.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            string _CorrelationID = System.Guid.NewGuid().ToString();

            CCDB cbase = new CCDB { CorrelationID = _CorrelationID };
            cbase.Client = CouchbaseManager.Instance;

            NLS.Info(_CorrelationID, "Hello, CouchBase");

            try
            {
                throw new ApplicationException("My Exception");
            }
            catch (Exception x)
            {
                NLS.Error(_CorrelationID, "Error", x.Message);
                // OR
                NLS.Error(_CorrelationID, "Error", x);
            }
        }
    }
}

_CorrelationID is supported here so that in multiuser situations (like Web API) we can identify which messages were written by which task.  In a console app this is not strictly necessary.  The call to NLS.Info results in an output log line like:

DLR|20140909-152031037|2f8f89ce-51de-4269-9ae0-9313ad2a0243|Hello, CouchBase|

where:

  • DLR is the log engine name (more than one engine can write to a given log file);
  • 20140909-152031037 is the terse timestamp of the form YYYYMMDD-HHMMSSmmm;
  • 2f8f89ce-51de-4269-9ae0-9313ad2a0243 is the _CorrelationID we passed in; and
  • Hello, CouchBase is our text message.

My call:

NLS.Error(_CorrelationID, "Error", x);

would result in a log line like:

DLR|20140909-152544801|46e656cd-4e17-4285-a5f3-e1484dad2995|Error|Error Data. Message: [My Exception]Stack Trace:  DLR.CCDB.ConsoleApp.Program.Main(String[] args)|

where Error is my message;

Error Data. Message: [My Exception] is the Message in ApplicationException; and

Stack Trace:  DLR.CCDB.ConsoleApp.Program.Main(String[] args)| is the stack dump.

NLS will handle nested exceptions and stack dumps but we are only showing a single un-nested exception in this example.

OK! That's it for this post.  We will, hopefully, return to CouchBase and the Flickr API in the next post.

Posted 2014/09/09 by Cloud2013 in GitHub, Microsoft, NLog, NuGet


My (Virtual) Year On Tour With the Grateful Dead

 

From Tape to The Internet

Crimson flames tied through my ears
Rollin’ high and mighty traps
Pounced with fire on flaming roads
Using ideas as my maps
“We’ll meet on edges, soon,” said I
Proud ’neath heated brow
Ah, but I was so much older then
I’m younger than that now

My Back Pages by Bob Dylan

I first heard the Grateful Dead live in Albuquerque in 1971.  Like many, this was a seminal experience for me, changing my understanding of the meaning of performance and of Rock and Roll.  That overweight, black-clad Prankster with a halo of unmanageable black hair playing guitar was clearly the center of the band and the performance.  I was young and was unaware that this was Captain Trips.  Captain America was more like it.  Nor was I aware of the mythical aura that was even then growing around the band and Jerry and their fans.  Like many, my first experience of the Dead was prepared only by my youth and The Bear's purple haze of the night.  In the intervening years I was more attracted to Frank Zappa, who produced a consistent recording experience that the Dead never would achieve.  After the (limited) success of the Grateful Dead Movie in capturing what the Dead were, the world moved on.  But the band played on.  Perhaps it is better that way.  The early taping of Dead shows from the sound boards (thank you Owsley) and later by dedicated deadhead tapers left us with a rich vein of music and magic in the over 3,000 individual live performances available in one form or another.  Beginning in the last decade of the last century, the Dead organization began to issue live sound board recordings from this corpus.

Dick’s Picks and Me

Half-wracked prejudice leaped forth
“Rip down all hate,” I screamed
Lies that life is black and white
Spoke from my skull. I dreamed
Romantic facts of musketeers
Foundationed deep, somehow
Ah, but I was so much older then
I’m younger than that now

While a great resource for those of us not conversant in bit torrent, these releases were frequently expensive and (to my ears) over produced, which subtracted from the raw energy of the original sound board tapes.  I am lucky to have access to a great music store with plentiful numbers of used Grateful Dead CDs.  If you are in Maine visit a local Bull Moose Music store.  I was lucky to be able to pick up many live Dead concert CDs at a reasonable price.  Thank you Bull Moose.  Although the official Dead releases enhanced my life, there are some problems:

I) The list prices are quite high (try to get used copies)

II) Selections for the Dick's Picks series seem to be primarily based on best complete shows (The Dead were often hit and miss in the same night, and limiting yourself to the best complete show skips a lot of great music.  This problem has been reduced by the newer Road Trips series and specialty releases like Ladies and Gentlemen… the Grateful Dead, which cooks down the best of a four night stand at the Fillmore, 1971).

III) The processing of the raw tapes, IMHO, sometimes cooks the life out of some of the releases.

IV) Some of the specialty releases seem to be picked more for the historical importance than the quality of the performance (Closing the Winterland, for example).

Having said all that, if you can get the official releases used, some of them are great.

A Short Divergence in Our Story

Girls’ faces formed the forward path
From phony jealousy
To memorizing politics
Of ancient history
Flung down by corpse evangelists
Unthought of, though, somehow
Ah, but I was so much older then
I’m younger than that now

I started to cook down my copies of the official live releases into playlist CDs (favorites of 1974, Dark Star releases, etc.).  Then I had open heart surgery, caught a post operative wound infection and almost died.  Stephen Jay Gould wrote someplace that the greatest species in evolution are bacteria.  They are everywhere.  There are more bacteria in your body than body cells.  And I was in the three month war between the bugs and myself (to be honest I had massive antibiotic infusions on my side).  My day was composed of pain medicine, James Joyce's Ulysses, The Bible, and my CDs of live Dark Star performances.  Let's just say that Dark Star and the Gospel of Mark were more significant than Tramadol in my recovery.  Rehab consisted of countless hours of treadmill work.  That, and an MP3 player packed with Scarlet Begonia and Fire on the Mountain.

BTW: Tom Constanten said somewhere that they didn’t play Dark Star, it was always going on,  they just joined in.  Although T.C. recommends ‘any East Coast Dark Star’ my favorites are early West Coast versions.

The Internet Archive Connection

In a soldier’s stance, I aimed my hand
At the mongrel dogs who teach
Fearing not that I’d become my enemy
In the instant that I preach
My pathway led by confusion boats
Mutiny from stern to bow
Ah, but I was so much older then
I’m younger than that now

The Internet Archive, in early 2000, began collecting, digitizing and making available for re-distribution the large body of Grateful Dead concerts made by independent tapers and the sound board recordings (SB) which were in circulation.  By policy, SB recordings are available for playing on the web site and non-SB recordings are available for downloading.  There are multiple recordings available for most shows and these vary in quality from commercially releasable to barely audible.  There are over 8,000 individual recordings of about 1,900 shows.  About 1,000 of these are SB.  While vast, the Internet Archive is not the most accessible site.  Like most people I started with the feature of the Grateful Dead collection called:

Click Me:

 Grateful Dead Shows on This Day In History 

(If you have never been there – try the link right now).

For 08-30 (today while I am writing this) the Internet Archive will display 30 recordings (for shows of this date in 1985, 1983, 1981, 1980, 1970 and 1969).  There are limited sort options of these results.  Selecting a given recording brings one to a new web page containing an online player and (if the show is not an SB) download options.  I was hooked on the musical possibilities but trapped  by the limited user interface of the Internet Archive.   I wanted more.  Much more…

Hacking The Internet Archive

A self-ordained professor’s tongue Too serious to fool 
Spouted out that liberty
Is just equality in school
“Equality,” I spoke the word
As if a wedding vow
Ah, but I was so much older then
I’m younger than that now

My goal was to have the ability to listen to ALL of the Dead's concerts, but using only the best recordings, and to be able to move through the collection using a better user interface which would allow me to decide where and when to go to any individual date.  My goal was to spend a year and at least sample all 1,900 concerts and listen completely to all SB concert recordings.  I decided to complete this project in 12 months.  To do this I would first need to wrestle the Internet Archive (IA) to its knees.  Little did I know that this would take me on a programming journey involving three programming languages (Ruby, Javascript and C#), two data specifications (XML and JSON), two database engines (couchdb and SQL Server) as well as understanding the (somewhat loosely documented) search engine of IA, and more….  Readers interested in the technical details should see my series of postings on Ruby on Rails and CouchDB.  Part 5 has the details of how to hack the Internet Archive to get at the data for the Grateful Dead recordings on IA.  Thus armed with the complete dataset from the Internet Archive of Grateful Dead recordings and a new front end, I was ready to begin my listening project.  There are over 8,000 recordings of over 2,000 concerts on the Internet Archive.  My first cut on the recordings is to use an algorithm to select ONE recording for each recording date for review.  This is a very simple selection based on the first of:

  • Was processed by Charlie Miller (IMHO the BEST processor of Grateful Dead Tapes)
  • Is a Sound Board Recording
  • Is  a Matrix Recording
  • Is the most recently posted tape for a given date.

Does this process miss some gems?  Undoubtedly, but it did give me 2,000 tapes to review rather than 8,000.  With these criteria in place, my local copy of the IA database and my own UI for IA, I started listening in July, 2011.  I did not attempt to listen to all 2,000 recordings completely.  If a recording was of poor quality or the band was out of tune or Jerry was 'uninspired' I abandoned the tape after brief samples of my favorite tunes.  In the end I reviewed about 1,000 concerts in thirteen months (I finished during the 'days between' period, August 1 to August 9).  I ended up with about 475 concerts on my personal playlist of 'greatest concerts'.  Along the way I wrote several reviews on this blog of concerts which I thought were particularly of note, and compiled a hyperlinked list of shows by year (the series starts here) and hyperlinks to Dark Star concerts and Scarlet Begonia –> Fire on the Mountain concerts.  All of these blogs contain links to jump right into the concert within the Internet Archive (but you still need to use the IA music player however).  Do I have a favorite sequence of songs, a favorite concert, a favorite era?  Yes.  Am I going to tell you?  No.  Dig in, visit the Internet Archive and start listening.  It could save your life.

 

Days Between Grateful Dead

and there were days
and there were days I know
when all we ever wanted
was to learn and love and grow
Once we grew into our shoes
we told them where to go
walked halfway around the world
on promise of the glow
stood upon a mountain top

walked barefoot in the snow
gave the best we had to give
how much we’ll never know we’ll never know

Days Between by Garcia and Hunter

 

 

 

 

Was It Worth The Trip?

 

Yes!

 

                                                               To Bear and Captain Trips, we say Thank You and Rest In Peace.

All photos by cloud2013 except Bear and Captain Trips Credit: Rosi McGee 

 

PS: Stupid Grateful Dead Statistics From the Internet Archive Database

Top 12 Most Played By Era (excluding Space and Drums):

Title 1967-1971 1972-1978 1979-1990 1991-1995
Althea     *  
Big River   *    
Brown Eyed Women   *    
Casey Jones *      
Cassidy     *  
China Cat Sunflower *      
Corrina       *
Crazy Fingers       *
Cryptical Envelopment *      
Cumberland Blues *      
Dark Star *      
Deal   *    
El Paso   *    
Estimated Prophet     *  
Eyes Of The World       *
Good Lovin *      
Hard to Handle *      
I Know You Rider *   *  
Jack Straw   *    
Lazy River Road       *
Little Red Rooster     *  
Looks Like Rain     *  
Me and My Uncle *      
Mexicali Blues   *    
         
Not Fade Away * * * *
Playing In The Band   * *  
Sugar Magnolia   * * *
Sugaree   *    
Tennessee Jed   *    
Terrapin Station       *
The Other One     *  
Throwing Stones       *
Truckin   * *  
Turn On Your Lovelight *      
Uncle Johns Band *     *
Wang Dang Doodle       *
Way To Go Home       *
Wharf Rat     *  
When I Paint My Masterpiece       *

Internet Archive:  All Recordings and Sound Board Recordings


Concert Length


Song Counts By Year (Dark Star, Playin' in the Band and Scarlet Begonia –> Fire On The Mountain)


QCON 2011 San Francisco and Occupy California

Let me say right off that I do not pay for my own ticket to QCON; my boss picks up the tab.  I love QCON.  It is definitely not MIX.  I go there to see what is happening in the world which is NOT Oracle and NOT Microsoft.  That's the same reason I read their online zine: InfoQ.  QCon always provides a look at what is current and recent in the open stack world.  This year we looked closely at REST, Mobile development, Web API and NOSQL.  As it did last year, QCON provided a nice look at what is open and emerging.  Big metal will always be with us, but the desktop is looking very weak during the next few years while mobile devices of all kinds and makers are exploding.  The biggest fallout is that while HTML5 is only slowly emerging on desktops in place, all new mobile devices (which is to say most new systems) will be fully HTML5 compliant.  Not only that, but with the exception of Windows Phones, the rendering engine for all mobile devices is based on WebKit.  What this means for those of us in the cubes is that worrying about how to bridge to pre-HTML5 browsers with HTML5 code is a non-issue.  Mobile development is HTML5 development.  The big metal end of the supply chain is being segmented into Web API servers (which service JSON XHR2 data calls) and the NOSQL engines which serve the Web API farms.  Remember a native mobile app ideally has pre-loaded all of its pages; its interactions are solely over JSON XHR2 for data (be it documents, data or HTML fragments).  The traditional JSP or ASPX web server is not really in play with native mobile apps and has an increasingly small role to play in "native like" or browser based mobile apps.  Let's move on.

“IPad Light by cloud2013”

Speaking of moving on: There is an occupation going on in this country.  I visited occupation sites in San Francisco, UC Berkeley and the Berkeley "Tent City".  These are all very active and inspiring occupy sites.  Now if we can only get to Occupy Silicon Valley!

I attended the REST in Practice tutorial this year and it was very nice.  The authors were well informed and the agenda comprehensive.  I personally like the Richardson maturity model but think that people are not facing up to the fact that level three is rarely achieved in practice and the rules of web semantics necessary to interoperate at level 3 are almost non-existent.  Remember the original REST model is client/server.  The basic model is a finite state machine and the browser (and the user) are in this model required to be dumb as fish.  Whether Javascript is a strong enough model and late binding semantics can be made clear enough to pull off level three is really an open question which no one has an answer to.  If we forget about interoperability (except for OAuth) things start to fall into place, but we thought OPENNESS was important to REST.

Workshop: REST In Practice by the Authors: Ian Robinson & Jim Webber

Why REST? The claims:

· Scalable

· Fault Tolerant

· Recoverable

· Secure

· Loosely coupled

Questions / Comment:

Do we agree with these goals?

Does REST achieve them?

Are there other ways to achieve the same goals?

REST design is important for serving AJAX requests and AJAX requests are becoming central to Mobile device development, as opposed to intra-corporate communication. See Web API section below.

Occupy Market Street (San Francisco)            

The new basic Document for REST: Richardson Maturity Model (with DLR modifications)

Level 0:

One URI endpoint

One HTTP method [Get]

SOAP, RPC

Level 1:

Multiple URI,

One HTTP Method [Get]

Century Level HTTP Codes (200,300,400,500)

Level 2:

Multiple URI,

Multiple HTTP Methods

Fine Grain HTTP Codes (“Any code below 500 is not an error, it’s an event”)

URI Templates

Media Format Negotiation (Accept request-header)

Headers become major players in the interaction between client and server

Level 3:  The Semantic Web

Level 2 plus

Links and Forms Tags (Hypermedia as the engine of state)

Plus emergent semantics

<shop xmlns="http://schemas.restbucks.com/shop"
      xmlns:rb="http://relations.restbucks.com/">

  <items>

    <item>…</item>

    <item>…</item>

  </items>

  <link rel="self" href="http://restbucks.com/quotes/1234" type="application/restbucks+xml"/>

  <link rel="rb:order-form" href="http://restbucks.com/order-forms/1234" type="application/restbucks+xml"/>

</shop>


Think of the browser (user) as a finite state machine where the workflow is driven by link tags which direct the client as to which states it may transition to and the URI associated with each state transition.

The classic design paper on applied REST architecture is here: How To GET a Cup Of Coffee. Moving beyond level 1 requires fine grain usage of HTTP Status Codes, Link tags, the change headers and media type negotiation. Media formats beyond POX and JSON are required to use level 3 efficiently (OData and ATOM.PUB for example).

Dude, where's my two phase commit?  Not supported directly; use the change headers (If-Modified-Since, If-None-Match, If-Match and ETag) or architectural redesign (redefine resources or workflow).  The strategic choice is the design of the finite state machine and defining resource granularity.
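Roughly, the optimistic flavor of this looks like the sketch below (the URL and media type are made up for illustration): read the resource and remember its ETag, then make the update conditional on it with If-Match.

using System;
using System.IO;
using System.Net;

class ConditionalUpdateSketch
{
    static void Main()
    {
        // GET the resource and remember the version we saw.
        var get = (HttpWebRequest)WebRequest.Create("http://restbucks.com/orders/1234");
        string etag;
        using (var response = (HttpWebResponse)get.GetResponse())
        {
            etag = response.Headers[HttpResponseHeader.ETag];
        }

        // PUT the changed representation back, conditional on that version.
        var put = (HttpWebRequest)WebRequest.Create("http://restbucks.com/orders/1234");
        put.Method = "PUT";
        put.ContentType = "application/restbucks+xml";
        put.Headers[HttpRequestHeader.IfMatch] = etag;
        using (var writer = new StreamWriter(put.GetRequestStream()))
        {
            writer.Write("<order>...</order>");
        }

        try
        {
            using (var response = (HttpWebResponse)put.GetResponse())
            {
                Console.WriteLine((int)response.StatusCode);   // 200/204: our update went through
            }
        }
        catch (WebException ex)
        {
            var failed = (HttpWebResponse)ex.Response;
            Console.WriteLine((int)failed.StatusCode);          // 412: someone changed it first
        }
    }
}

If the ETag we read still matches what the server holds, the PUT wins; a 412 tells us to re-GET and re-apply our change.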


(Slide from Rest in Practice)

Architectural Choices:

The Bad Old Days: One resource many, many ‘verbs’.

The Happy Future: Many, many resources, few verbs.

The Hand Cuff Era: Few Resources, Few verbs.

The Greater Verbs:

GET: Retrieve a representation of a resource

POST: Create a new resource (Server sets the key)

PUT: Create new resource (Client sets the key); ( or Update an existing resource ?)

DELETE: Delete an existing resource

Comment: The proper use of PUT vs. POST is still subject to controversy and indicates (to me) that level 3 is still not well defined.

Typically they say POST to create a blog entry and PUT to append a comment to a blog.  In CouchDB we POST to create a document and PUT to add a revision (not a delta) and get back a new version number.  The difference here is how the resource is being defined, which is an architectural choice.
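A rough sketch of that CouchDB flavor (the database name blogdb and the document contents are illustrative): POST lets the server assign the key; PUT names the document, must carry the _rev being revised, and each write comes back with a new revision.

using System;
using System.IO;
using System.Net;
using System.Text;

class CouchDbVerbSketch
{
    static string Send(string method, string url, string json)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = method;
        if (json != null)
        {
            request.ContentType = "application/json";
            byte[] body = Encoding.UTF8.GetBytes(json);
            using (var stream = request.GetRequestStream())
                stream.Write(body, 0, body.Length);
        }
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }

    static void Main()
    {
        // POST: create a document, server assigns the key; returns {"ok":true,"id":"...","rev":"1-..."}
        Console.WriteLine(Send("POST", "http://localhost:5984/blogdb",
            "{\"title\":\"my blog entry\"}"));

        // PUT: the client names the resource and supplies the _rev it is revising;
        // the whole document (not a delta) is sent and a new revision comes back.
        Console.WriteLine(Send("PUT", "http://localhost:5984/blogdb/entry-1",
            "{\"_rev\":\"1-abc123\",\"title\":\"my blog entry\",\"comments\":1}"));
    }
}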


The Lesser Verbs:

OPTIONS: See which verbs a resource understands

HEAD: Return only the header (no response body)

PATCH: Does not exist in HTML5. This would be a delta Verb but no one could agree on a specification for the content.  Microsoft did some early work on this with their XML Diffgram but no one else followed suit.

Security

Authentication (in order of increased security)

Basic Auth

Basic Auth + SSL

Digest

WSSE Authentication (ATOM uses this)

Message Security:

Message Level Encrypt (WS-SEC)

For the Microsoft coders I highly recommend

RESTful .NET (WCF for REST, Framework 3.5) by Jon Flanders

There are significant advantages to building your RESTful services using .Net.  Here is a comparison table to get you oriented:

DLR’s Cross Reference:
   | Web Service Standard | REST Service | WCF For REST (Framework 3.5)
 1 | TCP/IP + others | TCP/IP | TCP/IP
 2 | SOAP Wrapper | HTTP | HTTP
 3 | SOAP Headers | HTTP Headers | HTTP Headers
 4 | WS*Security | Basic Auth/SSL | Basic Auth/SSL or WS*Security
 5 | Early Binding | Late Binding | Late Binding
 6 | XSD | WADL | XSD, WADL
 7 | XML | Media Negotiation | Media Negotiation
 8 | SOAP FAULTS | HTTP Response Codes | HTTP Response Codes
 9 | Single Endpoint | Multiple Endpoints, URI Templates | Multiple Endpoints, URI Templates
10 | Client Proxy | Custom | Auto-generated Javascript proxy


The REST of the Week

Wednesday is more or less vendor day at QCON and the sessions are a step down from the tutorials, but the session quality picked up again on Thursday and Friday.  XXX XXXX, who gave an excellent tutorial last year, gave an informative talk on 'good code'.  The Mobile Development and HTML5 tracks were well attended and quite informative.  The field is wide open, with many supporting systems being free to the developer (support will cost you extra), and the choices are broad: from browser 'responsive design' applications to native appearing applications to native apps (and someone threw in "hybrid app" into the mix).  The Mobile panel of IBM DOJO, JQuery.Mobile and Sencha was hot.  I am new (to say the least) to Mobile development but here are my (somewhat) random notes on these sessions:

MOBILE Development is HTML5 Development

HTML5 is the stack. Phone and Tablet applications use WebKit based rendering engines and HTML5 conformant browsers only (Windows Phone 7 is the exception here). HTML5 has its own new security concerns ( New Security Concerns)

The major application development approaches are:

· Browser Applications;

· Native like Applications;

· Hybrid Applications; and

· Native Applications.

Browser applications may emulate the screens seen on the parallel desk top browser versions on the front end but in practice the major players (Facebook, YouTube, Gmail) make substantial modifications to at least the non-visual parts of the Mobile experience making extensive use of local storage and the HTML5 manifest standard for performance and to allow for a reasonable off line experience. Browser applications fall under the guidelines of Responsive Design (aka adaptive Design) and tend to be used when content will appear similarly between desktop and Mobile devices.

“Native like” applications use:

· The Browser in full screen Mode with no browser ‘chrome’; and

· Widgets are created using CSS, JS and HTML5 which simulate the ‘look and feel’ of a native application;

· No Access to Native Functionality (GPS, Camera, etc)

· Tend to use, but do not require, the HTML5 manifest or local storage (its use is strongly encouraged).

A Native application is still an HTML5 application with the following characteristics:

· All JS Libraries, CSS and HTML are packaged and pre-loaded using a vendor specific MSI/Setup package;

· AJAX type calls for data are allowed;

· Access to Native Widgets and/or Widgets are created using CSS, JS and HTML5

· Access to Native Functionality (GPS, Camera, etc)

· Standard HTTP GET or POST are NOT allowed

A Hybrid Application is a "Native Like" application placed within a wrapper which allows access to device hardware and software (like the camera) via a special JavaScript interface and, with additional special coding, can be packaged within an MSI/Setup and distributed as a pure Native application.

AJAX calls are made via XHR2 (aka XMLHttpRequest Level 2), which among other things relaxes the single domain requirement of XHR and adds processing of Blob and File interfaces.

The following major vendors offer free libraries and IDE for development:

Native Apps: PhoneGap, Appcelerator

Native App Like: Sencha, PhoneGap, IBM Dojo

Browser App: JQuery.Mobile

PhoneGap does NOT require replacement of the Sencha, JQuery.Mobile or Dojo.Mobile libraries.

PhoneGap allows JavaScript to call PhoneGap JavaScript libraries which abstract access to device hardware (camera, GPS, etc).

Sencha does not require replacement of the JQuery.Mobile or Dojo.Mobile libraries.

Although it is theoretically possible to create "Native like" applications with only JQuery.Mobile, this is NOT encouraged.

Local Storage

This is a major area of performance efforts and is still very much open in terms of how best to approach the problem:

The major elements are:

App Cache (for pre-fetch. and Native App Approach)

DOM Storage (aka Web Storage)

IndexedDB (vs. Web SQL)

File API (this is really part of XHR2)

Storing Large Amounts of Data Locally

If you are looking to store many Megabytes – or more, beware that there are limits in place, which are handled in different ways depending on the browser and the particular API we’re talking about. In most cases, there is a magic number of 5MB. For Application Cache and the various offline stores, there will be no problem if your domain stores under 5MB. When you go above that, various things can happen: (a) it won’t work; (b) the browser will request the user for more space; (c) the browser will check for special configuration (as with the “unlimited_storage” permission in the Chrome extension manifest).

IndexedDB:


Web SQL Database is a web page API for storing data in databases that can be queried using a variant of SQL.

Storage Non-Support as of two weeks ago.

          | IE         | Chrome    | Safari     | Firefox    | iOS        | BBX [RIM]  | Android
IndexedDB | Supported  | Supported | No Support | Supported  | No Support | No Support | No Support
WEB SQL   | No Support | Supported | Supported  | No Support | Supported  | Supported  | Supported


Doing HTML5 on non-HTML5 browsers: If you are doing responsive design and need to work with desktop and mobile using the same code base: JQuery.Mobile, DOJO and Modernizr (strong Microsoft support for this JavaScript library).

WEB API

What is it? Just a name for breaking out the AJAX servers from the web server. This is an expansion of REST into just serving data for XHR. It is a helpful way to specialize our design discussions by separating serving pages (with MVC or whatever) from serving data calls from the web page. Except for security the two can be architecturally separated.

Web APIs Technology Stack


Look familiar?  Looks like our old web server stack to me.

NOSQL

The CAP Theorem  (and Here)

  • Consistency: (all nodes have the same data at the same time)
  • Availability: (every request receives a response – no timeouts, offline)
  • Partition tolerance: (the system continues to operate despite arbitrary message loss)

Pick Any Two

If some of the data you are serving can tolerate Eventual Consistency then NOSQL is much faster.

If you need two phase commit, either use a SQL database OR redefine your resource to eliminate the need for the 2Phase Commit.

NoSQL databases come in two basic flavors:

Key/Value: These are popular with content management and where response time must be minimal.  In general you define what btrees you want to use before the fact.  There are no on-the-fly joins or projections.  MongoDB and CouchDB are typical leaders in this area.

Column Map: This is what Google calls Big Table. This is better for delivering groups of records based on criteria which may be defined ‘on the fly’. Cassandra is the leader in this group.

Web Sockets:

Sad to say this is still not standardized and preliminary support libraries are still a little rough.  Things do not seem to have moved along much since the Microsoft sessions I attended at MIX 11.

Photos: All Photos by Cloud2013

Microsoft MVC 3 and CouchDB – Low Level Get Calls

I have written elsewhere on couchdb on Windows and using Ruby on Rails to interface to this system.  These posts can be found here:

Part 0 – REST, Ruby On Rails, CouchDB and Me

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB

Part 6 – Getting The Data Into And Out Of CouchDB

Part 7 – JQUERY,JPlayer and HTML5

In my work life I am in a Microsoft shop, which for us means Microsoft servers for the back end and (mostly) pure HTML/AJAX front ends.  We are transitioning towards using Microsoft MVC 3 to provide HTTP end points for our AJAX calls.  Here are some notes from my POC work in this area.  My couch data consists of documents describing Grateful Dead concerts stored on that great site, the Internet Archive; if you have never visited the Internet Archive, please do so.  I reverse engineered the meta data of IA's extensive collection of Dead concerts (over 2,000 concert recordings).  Visit the Grateful Dead Archive Home at the Internet Archive here.

CouchDB Documents and Views

I stored the meta data into a local couchdb (running on Windows XP).  The basic document I am storing is a master-detail set for the 'best' recording for each Dead concert.  The master part of the document contains the date, venue and other data of the concert, and the detail set is an array of meta data on each song performed during the concert.  As is traditional with couchdb, the documents are represented as JSON strings.  Here is what the document for the UR recording (1965-11-01) found on the IA looks like:

{
  "_id": "1965-11-01",
  "_rev": "1-6ea272d20d7fc80e51c1ba53a5101ac1",
  "mx": false,
  "pubdate": "2009-03-14",
  "sb": true,
  "venue": "various",
  "tracks": [
    {
      "uri": "http://www.archive.org/download/gd1965-11-01.sbd.bershaw.5417.sbeok.shnf/Acid4_01_vbr.mp3",
      "track": "01",
      "title": "Speed Limit",
      "time": "09:48"
    },
    {
      "uri": "http://www.archive.org/download/gd1965-11-01.sbd.bershaw.5417.sbeok.shnf/Acid4_02_vbr.mp3",
      "track": "02",
      "title": "Neil Cassidy Raps",
      "time": "02:19"
    }
  ]
}

Couchdb allows the creation of views, which are binary trees with user-defined keys and user-defined subsets of the document data.  If one wanted to return the venue and the tracks for each concert for a given month and day (across all years), the view created in couchdb would look like:

"MonthDay": {
  "map": "function(doc){ emit(doc._id.substr(5,2)+doc._id.substr(8,2), [doc.venue, doc.IAKey, doc.tracks]) }"
}

This view allows us to use an HTTP GET to pass in a month-day key (e.g. "1101") and get back (as a JSON array) the following (an example request is shown after the list):

the date (MMDD: doc._id.substr(5,2)+doc._id.substr(8,2));

the venue (doc.venue);

the AI URI of the concert (doc.IAKey); and

an array of track data (doc.tracks)
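Assuming the database is named deadbase and the view above is saved in a design document called concerts (both names here are illustrative), the GET would look something like:

http://localhost:5984/deadbase/_design/concerts/_view/MonthDay?key="1101"

Note that CouchDB expects the key as a JSON value, so the string key is quoted (URL-encoded as %221101%22 on the wire).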

MVC URL Routing Maps

Although we could call couchdb directly from the browser, we normally work through a gateway system for security, so we will build a shim to sit between the browser and couchdb.  This allows us to flow the authentication / authorization stack separately from couchdb's security system.  In MS MVC we can create a new HTTP endpoint for AJAX calls (our shim) in a very simple manner.  Let's create an endpoint which will look like:

http://{our server path}/DeadBase/MonthDay/{month}/{day}

where

http://{our server path}/DeadBase/MonthDay/11/01

would request month:11 and day:01 concerts.  In MVC we can declare this routing as:

routes.MapRoute(
    "MyMonthDay",
    "{controller}/{action}/{month}/{day}",
    new { controller = "DeadBase", action = "RestMonthDay" });

Done.  Interestingly in MVC 3 this route definition will accept either the form:

http://{our server path}/DeadBase/MonthDay/{month}/{day}; or

http://{our server path}/DeadBase/MonthDay?month="??"&day="??"

In the second form,  parameter order does not matter, but case does; quotation marks are optional and need to be dealt with internally by the action method.

Either of these calls will resolve to the same controller and method.

MVC Controller and Method Handler

We now need to create the shim which will be the target for the Http Endpoint.  In C# this looks like:

public class DeadBaseController : Controller
{
    public string RestMonthDay(string month, string day)
    {
        //our shim code goes here
    }
}

We are able to use string as our return type because we will be calling couchdb, which returns a string of JSON by default.  As a side note, if we wanted to use MVC 3 to return JSON from a native C# object, our controller method takes a different form:

public JsonResult GetStateList()
{
    List<ListItem> list = new List<ListItem>() {
        new ListItem() { Value = "1", Text = "VA" },
        new ListItem() { Value = "2", Text = "MD" },
        new ListItem() { Value = "3", Text = "DC" } };
    return this.Json(list);
}

Our AJAX call from the browser does not need to know any of these details.  Here is one way to code the call in JavaScript using JQuery:

var url = urlBase + "?" + args;

$.ajax({
    url: url,
    dataType: 'json',
    success: okCallBack,
    error: nookCallBack
});

function okCallBack(data) {
    gdData = data;
    //do something useful here
}

function nookCallBack(xhr, ajaxOptions, errorThrown) {
    alert("ErrorText:" + errorThrown + " " + "Error Code:" + xhr.status);
}

From Handler to CouchDB in C#

Here is the flow of the generic C# code to go from the handler to CouchDB and back; a sketch of the call chain follows.

  • Clean the parameters and pass the call to a generic CouchDB GET caller;
  • Format the view name and parameters into CouchDB query form and pass them to the low-level CouchDB caller; and
  • Use classic Framework HTTP code to make the HTTP GET and return the results as a string back up the call stack.
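A minimal sketch of that call chain (the helper names, the database name deadbase and the design document path are illustrative assumptions, not the exact production code):

using System;
using System.IO;
using System.Net;
using System.Web;

public static class CouchDbGateway
{
    // Step 1: clean the incoming route parameters and delegate to the generic view caller.
    public static string GetMonthDay(string month, string day)
    {
        month = (month ?? "").Trim('"', ' ').PadLeft(2, '0');
        day = (day ?? "").Trim('"', ' ').PadLeft(2, '0');
        return GetView("MonthDay", month + day);
    }

    // Step 2: format the view name and key into a CouchDB view query.
    public static string GetView(string viewName, string key)
    {
        // CouchDB expects the key as a JSON value, so a string key is quoted.
        string url = "http://localhost:5984/deadbase/_design/concerts/_view/" + viewName +
                     "?key=" + HttpUtility.UrlEncode("\"" + key + "\"");
        return HttpGet(url);
    }

    // Step 3: classic Framework HTTP GET returning the raw JSON string back up the call stack.
    public static string HttpGet(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}

With something like this in place, the RestMonthDay action above simply returns CouchDbGateway.GetMonthDay(month, day).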

We could (and did) take our browser code from the Ruby on Rails project above and, with minimal changes, call our MVC shim.

Simple clean and fun.

Occupy your mind

Ruby On Rails, CouchDB and Me – Part 7 – JQUERY,JPlayer and HTML5

Part 0 – REST, Ruby On Rails, CouchDB and Me

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB  

Part 6 – Getting The Data Into And Out Of CouchDB

Part 7 – JQUERY,JPlayer and HTML5

We have two missions in the current post:

  • Getting Our browser side chops together: Using Javascript, DHTML, CSS Level 3, JQUERY against our JSON feed
  • Using JQuery UI and a JQuery UI Plugin: JPlayer to play songs from Internet Archive based on our JSON feed

Recall that our JSON feed provides concert and track data for concerts performed on an arbitrary date. The top level data of the feed can be visualized like this:


The field total_rows refers to the total number of records in the database, not the number of rows in this feed. The field offset indicates the entry point in the b-tree of the view used for this feed.  Beats me why these would be useful to the calling program!  Following this 'header' data we have each concert listed in key order. The offsets and values are:

  • 0: Venue
  • 1: IAKey
  • 2: Array of Track Data

We can visualize the expanded  track data array as:

Within each offset of the array we have the fields:

  • uri     – The pointer into IA for the mp3 file
  • track – order in the concert of this track
  • title   – track name
  • time  – Track length as MM:SS

We clearly could iterate through these fields and list the concerts and tracks statically on the web page using standard ROR tools, but let's be more dynamic.  Let's first display the concert dates and venues and then display the tracks for a concert when the user clicks on a concert, without a round trip to the server (and ROR).

Do You Do JAVASCRIPT?

Someone once said that Javascript is the only language that people use without knowing how. Don't be one of those people.  The cleanest approach to learning Javascript is Crockford's Javascript: The Good Parts – simple, clean Javascript fun.  (Steal This Book Here)  Read this even if you 'know' Javascript.  If you don't like to read, try the movie:

JQUERY: It Puts the Dynamic into DHTML.

JQuery is my favorite Javascript library.  Not necessarily the best or the most common.  Just my favorite.  JQuery accomplishes two goals very well:

  • Eliminating (or at least simplifying) DHTML coding differences between all main stream browsers (and most non-mainstream ones);
  • Simplifying and abstracting the operations necessary to drive DHTML via Javascript.

The design of JQUERY leverages the CSS 3 selector syntax so you will need to understand modern CSS selectors.

DHTML was first introduced as a Microsoft extension.  Netscape (remember Netscape?) soon followed with a similar, but not identical, DHTML API of its own.  Further, each of these browsers also tended to render certain edge cases differently.  And the CSS Level 3 Selectors and HTML5 specifications were coming down the pike.  Both CSS3 and HTML5 are now a reality on Chrome, Firefox and Safari and (some day, real soon) on IE9.  What to do?  John Resig had an idea and the idea was called JQUERY.  The BASIC idea is to use the CSS Level 3 selectors to select sets of HTML tags and then to perform actions on those tags using a common API which would mask the differences between browsers (and differences between versions of browsers).  Along the way JQUERY attempts to provide features not available in some browsers as long as those features would appear in the (then emergent) HTML5 specification.  Learning JQUERY is difficult only because the API is abstract and there is no BEST text on JQUERY.  Here is how John explains JQUERY:

OK, So Let's See Some Code Already!

Iterating The JSON Object In Javascript And Display Using JQUERY

Please refer to our prior post for a description of how the JSON object is delivered to the page via the Rails mark up in our rb file.  Basically we had a single line:

gdData=<%=  @parm  %> ;

Let’s work with this data to display the structure on the browser screen.

We start with two EMPTY HTML tags on our page:

<div id="concertdiv"></div>

<ul id="track"></ul>

We can iterate this object  using javascript as:

ConcertList2(gdData);

where ConcertList2 is defined as:

function ConcertList2(o){
    var iaURL = "http://www.archive.org/details/";
    for (var ndx = o["rows"].length - 1; ndx != -1; ndx--) {
        var cdate = o["rows"][ndx].id;
        var venue = o["rows"][ndx]["value"][0];
        var itemID = ndx.toString();
        var uri = iaURL + o["rows"][ndx]["value"][1];
        var href = "<a href='" + uri + "' target='_BLANK'> - IA -</a>";
        var className = 'normal';
        if (ndx == 0) {
            className = 'hilite';
        }
        var item = "<p id='" + itemID + "' class='concert " + className + "'>" + cdate + ' - ' + venue + href + '</p>';
        $('#concertdiv').after(item);
    }
}

The javascript variable “item” for a given concert would contain a string of HTML:

<p id='0' class='concert normal'>1969-08-16 - Woodstock Music <a href='http://www.archive.org/details/gd1969-08-16.sbd.gmbfixed.95918.flac16' target='_BLANK'> - IA -</a></p>

Note that this tag contains two classes: 'concert' and 'normal'.

The JQuery code line:

$('#concertdiv').after(item);

consists of a selector:

#concertdiv

an action verb:

after

and an argument:

item

The selector uses CSS 3 syntax so it selects the SET of all tags with the ID of ‘concertdiv’.  In our page this is a set of one item.

Iterating through our JSON object will post-pend our items after the tag associated with concertdiv

The results look like this on the browser page:

1969-08-16 – Woodstock Music – IA –

1980-08-16 – Mississippi River Festival – IA –

1981-08-16 – MacArthur Court – University of Oregon – IA –

1987-08-16 – Town Park – IA –

1991-08-16 – Shoreline Amphitheatre – IA –

Simple, no?

We can iterate and display the tracks as:

TrackList(gdData,ndx);

where TrackList is defined as:

function TrackList(o, ndx){
    $('li').remove();
    var ndx1 = 0;
    for (ndx1 = o["rows"][ndx]["value"][2].length - 1; ndx1 != -1; ndx1--) {
        var title = o["rows"][ndx]["value"][2][ndx1].title;
        var time = o["rows"][ndx]["value"][2][ndx1].time;
        var track = o["rows"][ndx]["value"][2][ndx1].track;
        var uri = o["rows"][ndx]["value"][2][ndx1].uri;
        var item = "<li>" + track + ' ' + time + ' ' + title + '</li>';
        $('#track').after(item);
    }
}

In this case our ‘item’ variable contains a simple HTML string like:

<li>01 03:08 Stage Announcements, Introduction</li>

The results on the browser page for a given concert will look like this:

  • 01 03:08 Stage Announcements, Introduction
  • 02 02:04 Saint Stephen >
  • 03 02:42 Mama Tried >
  • 04 00:38 High Time false start
  • 05 10:28 Stage Banter. Technical Difficulties
  • 06 19:05 Dark Star >
  • 07 06:10 High Time
  • 08 38:32 Turn On Your Lovelight
  • 09 01:52 Applause, Stage Announcements

We can bind these two display routines together with two simple Javascript functions so that when we click on a concert name the page will refresh the track list without a visit back to the web server.  First we will use JQUERY to BIND a function to the click event of the concert class:

function bindClick(){
    $('.concert').click(function() {
        removehilite();
        $(this).toggleClass('hilite', true);
        TrackList(gdData, $(this).attr('id'));
    });
}

This bound function uses the pre-defined function removehilite to swap the highlight classes:

function removehilite(){
    $('.concert').toggleClass('hilite', false);
    $('.concert').toggleClass('normal', true);
}

and a simple inline CSS definition:

<style>
.normal {color:#0B559B;}
.hilite {color:#FF0000;}
</style>

    We pull this all together into a simple driver as:

gdData = <%= @parm %>;
$(document).ready(function(){
    ConcertList2(gdData);
    bindClick();
    TrackList(gdData, "0");
});

Got it?  Good.  Now let's use a JQUERY UI plug-in to allow us to play concerts from our browser page.

    JQUERY UI and JQUERY UI Widgets

As useful as JQUERY is for dynamic web pages, let's go further and use the JQuery UI system and the UI widget JPlayer to allow us to play the mp3 files which reside on the Internet Archive.  JQuery UI is a system built on top of JQUERY to allow the systematic development of UI widgets which page developers can deploy and which minimize unwanted interactions between widgets.  Further, the JQuery UI system (and widgets developed within that system) can use a systematic set of theme classes whose color scheme can be generated with a nice tool called ThemeRoller.  I will not have a lot to say about these products in general (except to say they are free and work great) and you will need to visit the links noted in this paragraph to learn more about these tools.

    HTML5 Audio Tag

HTML5 has introduced a new tag to allow playing audio without using a plug-in.  There are some issues still being worked out, since there is NOT common agreement yet about whether the standard should universally support MP4 or OGG files.  Currently MP3 is supported by all browsers which support HTML5.  Nominally the new tag looks like this:

<audio controls="controls">
  <source src="horse.mp3" type="audio/mp3" />
  Your browser does not support the audio element.
</audio>

Note that the line after the "source" tag is what is rendered if your browser does NOT support HTML5.  If we replace this line with appropriate code to support a plug-in like Flash, we have a control which will play well in both HTML5 and HTML4 environments.  We could develop our own solution, but I have been working with JPlayer, a very nice JQUERY UI widget, and will use that for this post.  I like this widget because JPlayer:

    • Is a JQuery widget
    • Works with JQuery Themeroller
    • Has a very active user community
    • Displays graphics and video as well as audio tracks

    I developed my final browser page in this series using a modified version of  the  ‘demo 2’ code example which is downloaded along with JPlayer.  Here is the plan:

Display the concert list the same way as above (with a few extras for visual appeal).  Prepare the track list in a way similar to that used above, but modified to put it in a form from which JPlayer can both display the track list for us and load it into the player (more on this below).  We are going to modify the RoR rb file but not the underlying RoR code.  We will let the browser do the work.  I follow this strategy since our next phase of the project will allow the user to select the date for which concert data is to be displayed and played using AJAX calls in a RESTful manner (more on this next time) rather than round tripping to the server when we want to load a new date (or date range).

    What changes?

    Two new Javascript files:  one for JPlayer and one to handle preparing the track list for JPlayer to consume; and a reference to the themeroller prepared CSS file:

<link href="/skin/jplayer.blue.monday.css" rel="stylesheet" type="text/css" />

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>

<script type="text/javascript" src="/javascripts/jquery.jplayer.min.js"></script>

<script type="text/javascript" src="/javascripts/Playlist.js"></script>

Playlist.js is the new file I am contributing to the mix; the others are JQuery and JPlayer Javascript files.

Using Themeroller styles I can generate a completely different style for the page and only change the link reference to the CSS page to change how the page appears.  There are NO embedded style elements on the page.

    Rather than calling the TrackList method I am going to call a new method makePlayList when a concert is selected. This code  looks like this:

function makePlayList(selected){
    var ndx1 = 0;
    var tList = new Array();
    for (ndx1 = gdData["rows"][selected]["value"][2].length - 1; ndx1 != -1; ndx1--) {
        tList[ndx1] = buildTrack(
            gdData["rows"][selected]["value"][2][ndx1].track,
            gdData["rows"][selected]["value"][2][ndx1].time,
            gdData["rows"][selected]["value"][2][ndx1].title,
            gdData["rows"][selected]["value"][2][ndx1].uri,
            gdData["rows"][selected].id.substring(0, 4));
    }
    return tList;
}

    In turn, buildTrack looks like:

var buildTrack = function(num, time, title, ref, cYear){
    var dwnldicon = 'pic/download.png';
    var nameFMT = "$0     $1     $2<img src='$4'>";
    var track = new Object();
    var name = nameFMT.replace("$0", num);
    name = name.replace("$1", time);
    name = name.replace("$2", title);
    name = name.replace("$3", ref);
    name = name.replace("$4", dwnldicon);
    track.name = name;
    track.mp3 = ref;
    track.poster = "/pic/" + cYear + ".png";
    return track;
};

    All of which is returned to JPlayer as:

    mediaPlaylist.playlist = makePlayList(selected);

    Our core Javascript code now looks like:

var mediaPlaylist = null;

$(document).ready(function(){

    ConcertList2(gdData);  // displays the concert list at the top of the page

    bindClick();  // binds a click event on a concert to loading a new playlist into JPlayer

    mediaPlaylist = new Playlist("1", makePlayList(0),  // jump start with the first concert item
    {
        ready: function() {
            mediaPlaylist.displayPlaylist();  // show the playlist
            mediaPlaylist.playlistInit(false);
        },
        ended: function() {
            mediaPlaylist.playlistNext();
        },
        swfPath: "javascripts",   // jplayer option
        solution: "flash, html",  // jplayer option
        supplied: "mp3"           // jplayer option
    });

});

    Most of our HTML tags are stolen directly from the JPlayer ‘demo 2’ code and mostly deal with setting up the player controls (play, pause, stop, next, etc).

    OK.  The new browser page looks like this (in two Parts):

    Concert Listing Section:


I am using icons for hyperlinks to the Internet Archive Grateful Dead Collection and smaller icons to link to all recordings for a given date.  The bottom half of the screen contains the JPlayer and its user controls as well as a user-selectable track list:

The image is associated with the track selected (there is a JPlayer bug with the images: if the same image is associated with two successive tracks the second picture will not be displayed – they are working on this).  I use the selected concert text (in this case "Madison Square Garden: 1987-09-16") as a hyperlink to the page containing the concert recording on the Internet Archive.

    These screen captures are from Chrome (Safari and FireFox look the same).  On IE 8 (and lower) HTML5 is not supported and the player reverts to Flash.  The track list on IE8 is not as pretty and is no longer selectable (although the player controls still work):

The Sad IE 8 Track Display

    What more can I say?

REST, Ruby On Rails, CouchDB and Me – Part 5 Getting The Data Ready for CouchDB

Part 0 – REST, Ruby On Rails, CouchDB and Me

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB

Part 6 – Getting The Data Into And Out Of CouchDB

Part 7 – JQUERY,JPlayer and HTML5

The Internet Archive and the Grateful Dead

The Internet Archive (IA) is a 501(c)(3) non-profit corporation dedicated to disseminating public domain digital artifacts.  This includes such items as books, videos of all kinds (TV shows, shorts and feature films) and audio recordings of all kinds (musical and spoken word).  One of their most popular projects is a truly huge collection of Grateful Dead concert recordings.  One of the most visited pages of the Internet Archive's web site is the "Grateful Dead Shows on This Day In History" page, which lists and allows playback of any Grateful Dead concerts they have in their collection for whatever day you visit the site.  Did I say that IA has a large number of Grateful Dead concert recordings?  There are recordings of around 2,000 separate concert dates.  Any given concert may be represented by multiple recordings from a variety of sources: soundboard recordings and audience recordings using professional and amateur equipment.  Original media ranges from cassette tapes to 7 inch reel to reel and digital media.  If you are going to be working with the IA Grateful Dead collection please review the FAQ on the collection's policy notes as well as the special notes here.

IA uses a very sophisticated data repository of meta data and an advanced query engine to allow retrieving both the meta data and the recordings.  Meta data can be retrieved directly using the "advanced search" engine.  On the day I started this post I visited IA and used the "Grateful Dead Shows on This Day In History" query.  The query returned data on 8 concerts (and 25 recordings of those 8 concerts).  A partial page image is given below:


Clicking on any of these entries moves us to a second screen in order to play the concert recording.  A screen shot of the playback screen looks like this:


Looking closer at the second screen we see the music player:


Can we design a faster, simpler and better looking interface into the Grateful Dead Archive?  Can couchDB help us?  The first question will be addressed in a later post.  This current post will look at how couchDB can help us achieve a faster, more efficient information system.  IA does a super job of serving up the music files on demand – there is no reason to duplicate their storage system.  However, IA is fairly slow to serve up meta data (such as the results of the "Grateful Dead Shows on This Day In History" query).  Abstracting the IA metadata into a CouchDB database will allow us to serve up the meta data much faster than the IA query system.

Getting Data Into CouchDB

Our basic plan for using RUBY to get data from IA and into couchdb consists of:

  1. Prepare a URL query request to get the basic recording meta data (not the track meta data) – an example query URL is sketched after this list;
  2. Submit A GET request to IA using the URL query;
  3. Parse the XML returned to get at the individual Concert meta data fields;
  4. Select the BEST recording for any given concert (more on this below);
  5. Prepare a URL to request track listing XML file based on the IA Primary Key of the selected concert recording;
  6. Submit a GET request to IA;
  7. Parse the XML returned to get at the individual track meta data fields;
  8. Create a ruby object which can be safely serialized to JSON;
  9. Serialize the object to a JSON string (i.e. a JSON document);
  10. Do a POST request to insert a JSON document for each concert into the couchdb database;
  11. Create couchDB views to allow optimized data retrieval; and
  12. Create a couchDB view to optimize retrieval of recordings for all years for an arbitrary Month and Day (this duplicates the data provided by the “Grateful Dead Shows on This Day In History” selection in the Internet Archive).
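To make step 1 concrete, here is a minimal sketch of building the query URL.  The parameter list mirrors the request URL shown later in this post; the method name is my own and not from the original code:

require 'rest-open-uri'   # used in the later steps; not needed just to build the string

# Step 1 in isolation: build the advanced-search URL for a date pattern
# such as '19??-08-04' or '1968-??-??'.
def search_uri(date_pattern)
  "http://www.archive.org/advancedsearch.php?q=collection%3AGratefulDead+date%3A#{date_pattern}" \
  "&fl%5B%5D=avg_rating&fl%5B%5D=date&fl%5B%5D=description&fl%5B%5D=downloads&fl%5B%5D=format" \
  "&fl%5B%5D=identifier&fl%5B%5D=publicdate&fl%5B%5D=subject&fl%5B%5D=title" \
  "&sort%5B%5D=date+desc&rows=2000&page=1&callback=callback&output=xml"
end

puts search_uri('19??-08-04')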

Note we are not accessing nor storing the actual music files.  Before discussing how this plays out in practice, let’s define our JSON couchDB document.  We will cover items one through eight in this post.  We turn to items nine through twelve in the next post.

CouchDB Document Schema

CouchDB databases start with documents as the basic unit.  Typically a couchdb based application will have one database holding one or more variant document types.  There will be one or more design documents which provide views, show functions and map functions as necessary to facilitate the application.  We will use a single document type which represents an abstract of the meta data contained in IA for an individual recording (we are going to select the one ‘best’ recording per concert).  Our couchdb database will hold one document per concert.  The tracks (actually the track meta data) will be stored as an array within the concert document.  We will populate the couchdb database in a single background session, pulling meta data (NOT THE MUSIC FILES) from IA, and we will include the IA publication date in the document so we can update our database when (if) new recordings are added to the Grateful Dead collection on IA.

Here are the document fields which we will use:

Field | Notes | Typical Value
_id | couchdb primary key.  We will use a natural key: a string representation of the concert date. | 1969-07-04
_rev | revision number provided by couchDB | 1-6ea272d20d7fc80e51c1ba53a5101ac1
IAKey | Internet Archive key for this recording | gd1965-11-01.sbd.bershaw.5417.sbeok.shnf
pubdate | Internet Archive date when the recording was published to the web | 2009-03-14
venue | where the concert took place | Fillmore East Ballroom
description | free text describing the concert – provided by the uploader | Neal Cassady & The Warlocks 1965 1. Speed Limit studio recording/Prankster production tape circa late 1965
cm | boolean – recording mixed by Charlie Miller – used to select the ‘best’ recording | true or false
sb | boolean – recording was made from a soundboard – used to select the ‘best’ recording | true or false
mx | boolean – a matrix style recording – used to select the ‘best’ recording | true or false
tracks | an array of meta data for each track of the recording | see below

Each track in the tracks array formally looks like:

Field | Notes | Typical Value
IAKey | The Internet Archive key for this track.  This key is unique within a given recording (see the IAKey above) | gd1965-11-01.sbd.bershaw.5417.sbeok.shnf/Acid4_01_vbr
track | track number | 02
title | the song title | Cold Rain and Snow
time | the length of the track in minutes and seconds | 09:48

Let’s call everything except the tracks our BASE data and the track data our TRACK data.

We insert documents into the database (using an HTTP POST) as JSON, so a typical document looks like this in JSON format:

{
  "_id": "1966-07-30",
  "IAKey": "gd1966-07-30.sbd.GEMS.94631.flac16",
  "pubdate": "2008-09-22",
  "venue": "P.N.E. Garden Auditorium",
  "description": "Set 1 Standing On The Corner I Know You Rider Next Time You See Me",
  "cm": false,
  "sb": true,
  "mx": false,
  "tracks": [
    {
      "IAKey": "gd1966-07-30.sbd.GEMS.94631.flac16/gd1966-07-30.d1t01_vbr",
      "track": "01",
      "title": "Standing On The Corner",
      "time": "03:46"
    },
    {
      "IAKey": "gd1966-07-30.sbd.GEMS.94631.flac16/gd1966-07-30.d1t02_vbr",
      "track": "02",
      "title": "I Know You Rider",
      "time": "03:18"
    },
    {
      "IAKey": "gd1966-07-30.sbd.GEMS.94631.flac16/gd1966-07-30.d1t03_vbr",
      "track": "03",
      "title": "Next Time You See Me",
      "time": "04:00"
    }
  ]
}
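Once such a document is stored, it can be read back directly by its natural key with a simple GET.  A minimal Ruby sketch, assuming the database is named deadbase as it is later in this series, and using the gems introduced below:

require 'rubygems'
require 'json'
require 'rest-open-uri'

# GET one concert document back by its natural key (_id).
doc = JSON.parse(open('http://127.0.0.1:5984/deadbase/1966-07-30').read)
puts doc['venue']           # => "P.N.E. Garden Auditorium"
puts doc['tracks'].length   # => 3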


Hacking The Internet Archive: Getting Data From IA and Into CouchDB:

Here is the URL to obtain the page for “Grateful Dead Shows on This Day In History”:

http://www.archive.org/search.php?query=collection:GratefulDead%20date:19??-08-04&sort=-/metadata/date

This is a simple GET request with a query for the IA “collection” of Grateful Dead items, filtered on the date string 19??-08-04 and sorted descending by the concert date.  This GET returns an HTML page.  This type of interface is known as an HTTP RPC interface.  RPC (Remote Procedure Call) interfaces are not pure REST interfaces but they are somewhat RESTful in that they allow us to make a data request using a late bound, loosely coupled HTTP call.  See here and here for more theoretical background on RPC calls.  IA provides an “Advanced Search” function which will allow us to return data for an arbitrarily complex query in one of several data formats other than HTML.  We selected XML as the format for our work here.  XML is the traditional format for HTTP RPC but other formats may be better for certain applications.  Unfortunately IA does not directly document the format of the RPC data request, but they do provide a QBE (query by example) page to build the request.  The page looks like this:

image

Using this screen we can compose an HTTP RPC request which will mimic the URL produced by “Grateful Dead Shows on This Day In History”, and with a little brain effort and experimentation we can understand how to compose requests without using the QBE screen.  By feeding the RPC request query back into advanced search and selecting XML as the output format, as shown here:

clip_image001

we produce an example of the HTTP RPC request which will return our desired data in our desired format.  Thus we generate an HTML-encoded RPC request like:

@uri="http://www.archive.org/advancedsearch.php?q=collection%3AGratefulDead+date%3A#{_dateString}&fl%5B%5D=avg_rating&fl%5B%5D=date&fl%5B%5D=description&fl%5B%5D=downloads&fl%5B%5D=format&fl%5B%5D=identifier&fl%5B%5D=publicdate&fl%5B%5D=subject&fl%5B%5D=title&sort%5B%5D=date+desc&sort%5B%5D=&sort%5B%5D=&rows=2000&page=1&callback=callback&output=xml"

where we replace #{_dateString} with a date string like 19??-08-08, which returns Grateful Dead recording data for every year of the last century with a recording made on 08-08.  Of course, to get one year’s worth of data we could use a date string like 1968-??-??, and it is a simple extension of the query language to replace the singular date request (date%3A#{_dateString}) with a date range.

The XML output returned to the caller looks like:

clip_image001[4]

In a more graphic format the output looks like:

clip_image001[6]

Within Ruby we will need to make the HTTP GET request with the desired date range, transform the body of the response into an XML document and use XPATH to parse the XML and retrieve the meta data values for each recording (see below).  There is NOTHING inherently wrong with this RPC interface.  It is flexible and allows us to select only the data fields we are interested in and return data only for the dates we wish.  RUBY has no truly native way to consume either JSON or XML, so the XML format of the data is as good as any other, and numerous tools exist in RUBY to manipulate XML data.  I wish RUBY had a more native interface for JSON but it does not.
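As a tiny, self-contained illustration of the approach, here is REXML plus an XPath expression pulling a field value out of a made-up fragment shaped like the IA advanced search response (the XML string here is hypothetical, not real IA output):

require 'rexml/document'

# A made-up fragment in the same shape as the IA advanced-search response.
xml = '<response><result><doc><str name="title">1966-07-30 - P.N.E. Garden Auditorium</str></doc></result></response>'
doc = REXML::Document.new(xml)

# XPath: find every doc node, then the str element whose name attribute is "title".
doc.elements.each('response/result/doc') do |d|
  d.elements.each('str[@name="title"]') { |e| puts e.text }
end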

At this point, we do not have meta data about the individual tracks in a given recording.  It turns out that we can get this data, but not through an HTTP RPC request.  It turns out, dear reader, that if we have the IAKey for the recording we can obtain an XML file with track meta data by making the following call:

http://www.archive.org/download/{IAKEY}/{IAKEY}_files.xml.

This file contains assorted XML data; it varies according to which formats IA makes available for the individual tracks (served via an HTTP redirect).  This is not an RPC call, so we are far from a RESTful interface here.  We do not have control over the fields or the range of the data included in this call.  It is all or nothing.  But at least the XML format is simple to manipulate.  With the IAKey in hand for an individual recording, and making some reasonable guesses, we can parse the XML file of track data and compose the TRACKS array for our couchDB document using XPATH.  A single entry for the high bit rate mp3 track recording looks like:

<file name="gd89-08-04d2t01_vbr.mp3" source="derivative">
<creator>Grateful Dead</creator>
<title>Tuning</title>
<album>1989-08-04 – Cal Expo Amphitheatre</album>
<track>13</track>
<bitrate>194</bitrate>
<length>00:32</length>
<format>VBR MP3</format>
<original>gd89-08-04d2t01.flac</original>
<md5>91723df9ad8926180b855a0557fd9871</md5>
<mtime>1210562971</mtime>
<size>794943</size>
<crc32>2fb41312</crc32>
<sha1>80a1a78f909fedf2221fb281a11f89a250717e1d</sha1>
</file>

Note that we have the IAKey for the track (gd89-08-04d2t01 ) as part of the name attribute.
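To tie this together, here is a hedged sketch of fetching the _files.xml document for a recording and deriving the per-track IAKey from the name attribute.  The helper names are mine, not from the original code; the full per-track XPath loop appears later in this post:

require 'rest-open-uri'
require 'rexml/document'

# Build the track-listing URL from a recording's IAKey and fetch it as an XML document.
def fetch_files_xml(ia_key)
  files_uri = "http://www.archive.org/download/#{ia_key}/#{ia_key}_files.xml"
  xml_string = ''
  open(files_uri) { |response| response.each_line { |line| xml_string << line } }
  REXML::Document.new(xml_string)
end

# Derive the track portion of the IAKey from a <file> element's name attribute
# by dropping the ".mp3" extension, e.g.
# "gd89-08-04d2t01_vbr.mp3" => "<recording IAKey>/gd89-08-04d2t01_vbr"
def track_ia_key(recording_key, file_name)
  "#{recording_key}/#{File.basename(file_name, '.mp3')}"
end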


Using a background Ruby Process to Read the Data

The following RUBY GEMS are required to complete this step:

rest-open-uri : This GEM extends open-uri to support the POST, PUT and DELETE HTTP commands.

json : This GEM handles serialization and de-serialization of a limited subset of RUBY into JSON strings.

From the standard RUBY library we will also be using

rexml : This library creates XML documents from XML strings and supports XPATH, which we will use to read the XML documents from IA.

Our first step is to get the data via HTTP and parse the XML file returned to find the individual recordings.  There are (in most cases) multiple recordings per concert (per date) and we want to retain for the database only the “best” one.

In pseudo Ruby code:

require 'rest-open-uri'
require 'rexml/document'

def initialize(_dateString)
  #HTTP GET, create a string of the response body and transform the string into an XML node tree
  #mind the screen wrap and html encoding:
  @uri="http://www.archive.org/advancedsearch.php?q=collection%3AGratefulDead+date%3A#{_dateString}&fl%5B%5D=avg_rating&fl%5B%5D=date&fl%5B%5D=description&fl%5B%5D=downloads&fl%5B%5D=format&fl%5B%5D=identifier&fl%5B%5D=publicdate&fl%5B%5D=subject&fl%5B%5D=title&sort%5B%5D=date+desc&sort%5B%5D=&sort%5B%5D=&rows=2000&page=1&callback=callback&output=xml"

  xmlString=''
  open(@uri) do |x|                 #build a representation of the response body as a string
    x.each_line do |y|
      xmlString=xmlString+y
    end
    if xmlString==''
      puts 'No String Returned From Internet Archive'
      exit                          #quit is not a Ruby method; exit the script instead
    end
  end
  @IAXMLDocument= REXML::Document.new(xmlString)  #turn the string into an XML document
end #initialize

Now we need  to loop through the XML document and pull out each ‘doc’ section using XPATH and read each doc section for the meta data for that recording.

#use XPATH and find each response/result/doc node and yield

def get_recordings(document)
  document.elements.each('response/result/doc') do |doc|
    yield doc
  end
end

#get the XML document and yield

def get_record(xmldoc)
  get_recordings(xmldoc) do |doc|
    yield doc
  end
end

#general purpose XPATH method to extract element.text (the meta data values) for arbitrary XPATH expressions

def extract_ElmText(doc,xpath)
  doc.elements.each(xpath) { |element| return element.text }
end

def worker(xmldoc)
  #main loop
  _docCount=0
  get_recordings(xmldoc) do |doc|
    _docCount+=1
    _date=extract_ElmText(doc,'date[@name="date"]')[0..9]
    _description=extract_ElmText(doc,"str[@name='description']")
    _title=extract_ElmText(doc,"str[@name='title']")
    _title=clean_Title(_title)
    _keylist=pull_keys(doc)
    _pubdate=extract_ElmText(doc,'date[@name="publicdate"]')[0..9]  #there is a bug here, corrected by the lines below

    if (_pubdate.length==0)
      _pubdate='1999-01-01'
      puts "No Publication Date: #{_date} #{_title}"
    end
    _uri=extract_ElmText(doc,'str[@name="identifier"]')

    #make a RUBY class object to hold one recording
    #(_tracklist is filled in later from the {IAKEY}_files.xml data - not shown in this snippet)
    _record=GDRecording.new _date, _description, _tracklist, _title, _keylist, _pubdate, _uri

    #save the recording class objects in an array
    @list[@list.count]=_record
  end
end #worker

In this code the ‘worker’ method calls the helper methods to:

0) do the HTTP GET to make the RPC request and read the response body one line at a time;

1) transform the lines into a single string and convert (REXML::Document.new) the string into an XML document for processing by XPATH;

2) loop through the doc nodes of the XML tree and extract the values of the meta data fields;

3) pass the meta data values to a RUBY class (GDRecording) which holds the meta data for later processing (a minimal sketch of such a class is given just below); and

4) finally, temporarily store the recordings in an array for the next processing step.
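The GDRecording class itself is not listed in this post.  Here is a minimal sketch of what such a holder class might look like; the constructor arguments follow the call in the worker method above, but the way the cm/sb/mx flags are derived from the IA keyword list is my assumption, not code from the original:

# A minimal sketch of a holder class for one recording's metadata.
# The keyword-derived flags (cm, sb, mx) are an assumption about how the
# original class inspects the IA 'subject' keywords.
class GDRecording
  attr_accessor :date, :description, :tracks, :title, :pubdate, :uri
  attr_accessor :cm, :sb, :mx

  def initialize(date, description, tracks, title, keylist, pubdate, uri)
    @date, @description, @tracks = date, description, tracks
    @title, @pubdate, @uri = title, pubdate, uri
    keys = Array(keylist).join(' ').downcase
    @cm = keys.include?('charlie miller')   # mixed by Charlie Miller
    @sb = keys.include?('soundboard')       # soundboard source
    @mx = keys.include?('matrix')           # matrix style recording
  end
end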

Note that these routines work whether the query returns a single day (with multiple recordings), multiple days or even the whole dataset!  What is essential is that we process the file as N ‘doc’ sub trees (which represent single recordings) and have the recording date (at least) to group our data and extract the ‘best’ recording within each date group.

Our next step will be to group the recordings by day (i.e. concert) and provide our own filter to select a single ‘best’ recording for each concert.

Shake and Bake:  Finding A ‘Best’ Recording.


What is the best Grateful Dead concert?  Why, the first one I went to, of course.  Just ask any Deadhead and you will probably get the same answer.  But what is the best recording of any given GD concert?  My approach is very simple.

  • Most recent posted recordings are better than older recordings. (least important criteria)
  • Soundboard recordings are better than audience recordings.
  • Matrix recordings are even better.
  • Recordings mixed by Charlie Miller are best of all. (most important criteria)

Well, these are MY criteria.  Whatever criteria you choose, as long as they are hierarchical you can code the selection in a very trivial manner.  If we have a field in each recording for the concert date and a field for each selection criterion (we derive these from the keywords field in IA), we sort the recordings by date and then by each of the criteria from most important (Charlie Miller in my case) to least important (date posted), and then select the first recording in sort order within each date group.  In Ruby the sort of our list of recordings is trivial to code and easy to manipulate (add new criteria or change the priority of criteria).  The sort statement looks like this:

@list.sort! { |a,b| (a.date+a.cm.to_s+a.sb.to_s+a.mx.to_s+a.pubdate) <=> (b.date+b.cm.to_s+b.sb.to_s+b.mx.to_s+b.pubdate) }
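To make the comparison concrete, here is the composite string the sort block builds for each recording; the field values below are hypothetical, purely for illustration:

# Illustrative only: the composite key compared for one recording
# (made-up values for a Charlie Miller soundboard).
date, cm, sb, mx, pubdate = '1977-05-08', true, true, false, '2009-03-14'
key = date + cm.to_s + sb.to_s + mx.to_s + pubdate
puts key   # => "1977-05-08truetruefalse2009-03-14"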

Once sorted we create a list of best recordings as:

def newSelect
  _dateGroup=nil
  _list=Array.new
  if @list==nil or @list.count==0
    puts 'No Recordings.'
    return
  end
  @list.each do |rec|
    if _dateGroup!=rec.date
      if _dateGroup!=nil
        @selectList[@selectList.count]=_list[0]   #keep the 'best' recording of the previous date group
      end
      _dateGroup=rec.date
      _list=Array.new
    end
    _list[_list.count]=rec
  end
  if _dateGroup!=nil
    @selectList[@selectList.count]=_list[0]       #do not forget the last date group
  end
end

Note that this code is not only simple but is independent of the selection criteria we are using.

Now that we have a list of the recordings we are interested in, we can get the XML file of track meta data by taking the IAKey discussed above, making a simple GET call and parsing the XML file for the meta data of each track.  Much of this code duplicates the XML code presented above, so we will not reproduce all of it here except to show a short section which uses a slightly different XPATH syntax:

open(filesURI) do |x| x.each_line do |y| xmlString=xmlString+y end end
REXML::Document.new(xmlString).elements.each('files') do |doc|
  doc.elements.each('file') do |file_elm|
    obj=nil
    title=nil
    trackString=nil
    lengthString=nil
    obj=file_elm.attributes["name"]
    file_elm.elements.each('title')  { |element| title=element.text }
    file_elm.elements.each('track')  { |element| trackString=element.text }
    file_elm.elements.each('length') { |element| lengthString=element.text }

    #{omitted code}

  end
end

Okay, now we have a (hash) list of recording meta data, each item of which contains a (hash) list of track meta data for that recording.  In our next post we will leave this unRESTful world behind and move into the RESTful world of couchDB when we:

  • Serialize the object to a JSON string (i.e. a JSON document);
  • Do POST requests to insert  a JSON document for each concert into the couchdb database;
  • Create couchDB views to allow optimized data retrieval; and
  • Create a couchDB view to optimize retrieval of recordings for all years for an arbitrary Month and Day (this duplicates the data provided by the “Grateful Dead Shows on This Day In History” selection in the Internet Archive).


REST, Ruby On Rails, CouchDB and Me – Part 4 – CURL on Windows And Ruby POST   Leave a comment

Part 0 – REST, Ruby On Rails, CouchDB and Me

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB

Part 6 – Getting The Data Into And Out Of CouchDB

Part 7 – JQUERY,JPlayer and HTML5

In The Post:

  • CURL and Couchdb
  • Documents Design and Otherwise
  • Posting Documents to couchDB Using Ruby

If you are like me you have spent some time with the free ebook CouchDB: The Definitive Guide.  If you are a Windows user you may have run into some problems with the examples given in the chapter on “Design Documents”.  Specifically, they don’t work ‘out of the box’.  The examples in that chapter show us how to create a database, create and upload a design document, and post a document to the database.  These examples use CURL in a command shell.


Since we are running Windows, we first need to install CURL on our system and set the system path to include the CURL executable.  We can get a Windows version here.  Use the version labeled DOS, Win32-MSVC or Win64 depending on your system.  We assume here that couchDB has been installed successfully on your system.  Now open a ‘command prompt’.  If you must have a UNIX type shell you will need to install Cygwin or some other UNIX emulator for Windows.  If you are using the Aptana IDE like me you need to create an “external command” to open a command shell within Aptana.  This figure illustrates the setup within the Aptana IDE to do this:

image

In the command shell you can create a couchdb database using CURL.  Couchdb is RESTful, so we use a PUT command for all actions which CREATE a resource, and a database is one such resource.  The format of the command is:

curl -X PUT http://{couchdb}/{yourdatabasename}

I want to create a database named deadbase, so on my system this command and response look like:

C:\Documents and Settings\dredfield\My Documents\Aptana Studio Workspace\couchDB01

>curl -X PUT http://127.0.0.1:5984/deadbase

{"ok":true}

where "{"ok":true}" is the response body of the HTTP response to my PUT command.  Confirm your work by starting a browser and navigating to the Futon user interface of your couchdb installation.  On my system this URL is:

http://127.0.0.1:5984/_utils/index.html

you should see something like this:

image

CURL and Documents

OK, now let’s make a design document for this database and PUT that document to the new database.  With slight modifications to the example given in CouchDB: The Definitive Guide, my first cut at a design document looks like this:

{
  "_id" : "_design/example",
  "views" : {
    "View00" : {
      "map" : "function(doc){emit(doc._id, doc.date)}"
    }
  }
}

This is a JSON formatted document.  Initial syntax checking is up to you.  Basically, couchDB will accept anything within the outer brackets whether or not it is formatted as usable JSON.  We have several options for checking syntax.  There are free online syntax checkers like JSONLint.  The interface to JSONLint looks like:

clip_image001

An installable open source JSON checker and visualizing tool, JSON View is available here.  JSON View’s output looks like:

clip_image001[10]

Now that we know our syntax is correct (if not the logic of the design document – more on this in the next installment) we can PUT this document to our database.  We can have more than one design document in a given database.  The name (id) of this document is “_design/example”, where “_design” tells couchdb that this is indeed a design document and its name is “example”.  My document is named mydesign.json on my file system.  The CURL command to PUT it into the database looks like:

curl -X PUT http://127.0.0.1:5984/deadbase/_design/example -d @mydesign.json

couchdb will respond:

{"ok":true,"id":"_design/example","rev":"1-45f081a3f681b28ce7a0bdc5db216e74"}

Note here that this is NOT the syntax shown in CouchDB: The Definitive Guide.  The syntax there will not work in a Windows shell (i.e. command prompt).  Even when you have a syntactically correct JSON document and the correct format of the PUT statement, on Windows you may receive an error message from CURL complaining about UTF8 errors within the document, and the PUT will fail.  The problem here is that the Windows file system supports several encoding schemes and various Windows programs save documents with different default encodings.  If you are using Notepad.exe to create your files, be sure to save the files in ANSI format.

 
Check your work using the Futon interface: locate the “_design/example” document in deadbase:

clip_image001[12]

Double click on the document:

clip_image001[16]

Note that “views” is a “Field” within the document.  Select the “Source” tab  and take a look inside the document:

clip_image002[4]

Now let’s POST a document into the database.  Since we have not defined any validation functions we can push anything into the database – even documents which consist of just “{}”.  CouchDB defines only one innate restriction:

If a document defines the id field (“_id”) then the value of _id must not conflict with an existing value of the ID field of ANY other document in the database.

If the document does not define an ID field, couchDB will generate an ID (as a UUID) and apply it to the document.  You can also supply your own ID values: either generate your own value (Ruby can generate a GUID for you) or request a UUID from couchdb with a GET command.  See this page for more information.  In the sample program I am developing for this series I will be using a ‘natural key’ – that is, a key whose value has an actual meaning (a Social Security number is a natural key, for example, but please never use one).  If you try to POST a document with a duplicate key you will get back a 409 status code for the error.
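As a minimal sketch of requesting a UUID from couchdb, assuming CouchDB’s standard /_uuids endpoint and the gems introduced later in this post:

require 'rubygems'
require 'json'
require 'rest-open-uri'

# GET a server-generated UUID; the response body is JSON like {"uuids":["..."]}.
uuid = JSON.parse(open('http://127.0.0.1:5984/_uuids').read)['uuids'][0]
puts uuid   # e.g. "6e1295ed6c29495e54cc05947f18c8af"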

The document I will be using in the next post looks like this:

{
  "_id" : "1972-07-22",
  "IAKey" : "gd1972-07-22.sbd.miller.94112.sbeok.flac16",
  "description" : "Set 1 Bertha Me And My Uncle You Win Again Jack Straw Bird Song Beat It On Down The Line Sugaree Black Throated …",
  "pubdate" : "2008-08-15",
  "sb" : true,
  "cm" : true,
  "mx" : false,
  "venue" : "Paramount Northwest Theatre"
}

If I save this document as ConcertRecord.json I can use CURL to POST this document as:

curl -H "Content-Type: application/json" -X POST http://127.0.0.1:5984/deadbase/ -d @ConcertRecord.json

and couchdb will reply with an HTTP status 200 and a response body of:

{"ok":true,"id":"1972-07-22","rev":"1-01a182f329c40ba3bab4b13695d0a098"}

In couchDB Futon this document looks like:

clip_image001[20]

Note that the order of the fields is set by couchDB, not by the order in the document as loaded.

Ruby At Last

OK, enough of the command shell; let’s do some couchDB work using RUBY.  I am going to access couchDB from a fairly low level within Ruby in these posts.  There are several ActiveRecord-style GEMs which will interface with couchDB, but my focus here is on: (1) speed of access and (2) transferability of knowledge between Ruby access and direct Javascript/browser access to couchDB.

Here’s the minimum of what we need to POST a document to a couchdb database using RUBY.

The GEMs we need:

JSON : This will always load the Ruby based version of the JSON module.  If you want the C based module you will need to have the Ruby/Windows DevKit installed on your system.  For our purposes the Ruby based version is fine.

REST-OPEN-URI : This extends open-uri, using the net/http and uri libraries, to cover all of the REST verbs (GET, POST, PUT and DELETE).  This is a very light install and is only lightly documented.
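Once the gems are installed, the requires at the top of the script might look like this (a sketch; the explicit rubygems require is only needed on the older Ruby installs current when this series was written):

require 'rubygems'       # needed on Ruby 1.8-era installs; harmless otherwise
require 'json'
require 'rest-open-uri'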

Here is the basic plan:

Assume we have a RUBY object (call it “rec”) which includes, among other things, the fields we want to POST into the deadbase as a document like the one developed above.  We first need to convert the fields into a JSON string and then POST the JSON string into the deadbase.  The JSON GEM is used to achieve the first goal and REST-Open-URI is used to accomplish the second.

JSON Strings:

The JSON GEM will only serialize Ruby base types (strings, numbers, booleans and Hash objects).  The JSON GEM is quite limited in that it will not serialize a Ruby object derived from the base RUBY Object class into a JSON string, even if that object consists only of base types and Hash objects.  Although you may extend JSON, we chose not to do so.  Rather, we will create a simple Hash object and populate it manually via Ruby code with the fields we want to use for a document.  Simply, this could look like:

def makeJSON(rec)
  thing=Hash.new()             #we know that JSON can serialize this type of object
  thing["_id"]=rec.date
  thing["IAKey"]=rec.uri
  thing["description"]=rec.description
  thing["venue"]=rec.title
  thing["pubdate"]=rec.pubdate
  thing["cm"]=rec.cm
  thing["sb"]=rec.sb
  thing["mx"]=rec.mx
  return JSON.generate(thing)  #this returns a JSON string
end
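As a usage sketch, here is makeJSON applied to a stand-in recording object; the Struct below is hypothetical (the real code uses the GDRecording class from the previous post), with values borrowed from the 1972-07-22 example above:

require 'rubygems'
require 'json'

# Hypothetical stand-in for one recording's metadata.
Rec = Struct.new(:date, :uri, :description, :title, :pubdate, :cm, :sb, :mx)
rec = Rec.new('1972-07-22', 'gd1972-07-22.sbd.miller.94112.sbeok.flac16',
              'Set 1 Bertha Me And My Uncle ...', 'Paramount Northwest Theatre',
              '2008-08-15', true, true, false)

puts makeJSON(rec)
# => {"_id":"1972-07-22","IAKey":"gd1972-07-22.sbd.miller.94112.sbeok.flac16",...}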

REST-OPEN-URI:

Our POST routine takes the output from makeJSON and POSTs the JSON string to the deadbase.  In simple form this routine looks like:

def PostRecording(jsonString)
  uri="http://127.0.0.1:5984/deadbase/"   #this is our database
  begin
    responseBody=open(uri, :method => :post, :body => jsonString,
                      "Content-Type" => "application/json").read
    puts 'POST Response Success: ' + responseBody
  rescue OpenURI::HTTPError => the_error
    puts 'Post Response Error: ' + the_error.io.status[0]
  end
end

The key line is, of course:

responseBody=open(uri, :method => :post, :body => jsonString, "Content-Type" => "application/json").read

If we ran this line as:

responseBody=open(uri, :method => :post, :body => jsonString).read

we would get back an HTTP error status complaining about the media type.  That’s because the default “Content-Type” for a POST is the HTML form encoding (“application/x-www-form-urlencoded”), which is what a web browser sends when it POSTs a form.  We are far from a browser here and our “Content-Type” needs to be “application/json”.  The way to add headers to the POST is to provide one or more key/value pairs with the desired header information.  Hence:

"Content-Type" => "application/json"

and the correct Ruby line is:

responseBody=open(uri, :method => :post, :body => jsonString, "Content-Type" => "application/json").read

We need to wrap the POST command in an exception block, where the line:

rescue OpenURI::HTTPError => the_error

is only executed IF the HTTP response status is > 399.  You can then make a more fine grained response to the error condition.  Specifically, if the_error.io.status[0] == '409' (open-uri reports status codes as strings) you have attempted to POST the same document twice (at least two documents with the same ID).
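A hedged sketch of that finer-grained handling, as a variant of the PostRecording routine above (it assumes rest-open-uri is loaded and that open-uri reports the status as a string array such as ["409", "Conflict"]):

require 'rubygems'
require 'rest-open-uri'

def PostRecordingChecked(jsonString)
  uri="http://127.0.0.1:5984/deadbase/"
  begin
    responseBody=open(uri, :method => :post, :body => jsonString,
                      "Content-Type" => "application/json").read
    puts 'POST Response Success: ' + responseBody
  rescue OpenURI::HTTPError => the_error
    status = the_error.io.status[0]
    if status == '409'
      puts 'Duplicate _id - a document with this key already exists, skipping'
    else
      puts 'Post Response Error: ' + status
    end
  end
end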

That looks like a wrap for now.


Posted 2011/07/22 by Cloud2013 in Aptana, couchdb, REST, Ruby

