Archive for the ‘REST’ Category

QCON 2011 San Francisco and Occupy California

Let me say right off that I do not pay for my own ticket to QCON; my boss picks up the tab.  I love QCON.  It is definitely not MIX. I go there to see what is happening in the world which is NOT Oracle and NOT Microsoft.  That’s the same reason I read their online zine: InfoQ.   QCon always provides a look at what is current and recent in the open stack world.  This year we looked closely at REST, Mobile development, Web API and NOSQL. As it did last year, QCON provided a nice look at what is open and emerging.  Big metal will always be with us, but the desktop is looking very weak for the next few years while Mobile devices of all kinds and makers are exploding.  The biggest fallout is that while HTML5 is only slowly emerging on desktops, all new Mobile devices (which is to say most new systems) will be fully HTML5 compliant.  Not only that, but with the exception of Windows Phones, the rendering engine for all mobile devices is based on WebKit.  What this means for those of us in the cubes is that worrying about how to bridge to pre-HTML5 browsers with HTML5 code is a non-issue.  Mobile development is HTML5 development.  The big metal end of the supply chain is being segmented into Web API servers (which service JSON XHR2 data calls) and the NOSQL engines which serve the Web API farms.  Remember: a native mobile app ideally has pre-loaded all of its pages; its interactions are solely over JSON XHR2 for data (be it documents, data or HTML fragments).  The traditional JSP or ASPX web server is not really in play with native mobile apps and has an increasingly small role to play in “native like” or browser based mobile apps.  Let’s move on.

“IPad Light by cloud2013”

Speaking of moving on: there is an occupation going on in this country.  I visited occupation sites in San Francisco, UCal Berkeley and the Berkeley “Tent City”.  These are all very active and inspiring occupy sites.  Now if we can only get to Occupy Silicon Valley!

I attended the REST in Practice tutorial this year and it was very good.  The authors were well informed and the agenda comprehensive.  I personally like the Richardson maturity model, but I think people are not facing up to the fact that level three is rarely achieved in practice, and the rules of web semantics necessary to interoperate at level 3 are almost non-existent. Remember, the original REST model is client/server.  The basic model is a finite state machine, and the browser (and the user) are in this model required to be dumb as fish.  Whether Javascript is a strong enough model, and whether late binding semantics can be made clear enough to pull off level three, are open questions which no one has an answer to.  If we forget about interoperability (except for OAuth) things start to fall into place, but we thought OPENNESS was important to REST.

Workshop: REST In Practice by the Authors: Ian Robinson & Jim Webber

Why REST? The claims:

· Scalable

· Fault Tolerant

· Recoverable

· Secure

· Loosely coupled

Questions / Comments:

Do we agree with these goals?

Does REST achieve them?

Are there other ways to achieve the same goals?

REST design is important for serving AJAX requests, and AJAX requests are becoming central to Mobile device development (as opposed to intra-corporate communication). See the Web API section below.

Occupy Market Street (San Francisco)            

The new basic Document for REST: Richardson Maturity Model (with DLR modifications)

Level 0:

One URI endpoint

One HTTP method [Get]

SOAP, RPC

Level 1:

Multiple URI,

One HTTP Method [Get]

Century Level HTTP Codes (200,300,400,500)

Level 2:

Multiple URI,

Multiple HTTP Methods

Fine Grain HTTP Codes (“Any code below 500 is not an error, it’s an event”)

URI Templates

Media Format Negotiation (Accept request-header)

Headers become major players in the interaction between client and server

Level 3:  The Semantic Web

Level 2 plus

Links and Forms Tags (Hypermedia as the engine of state)

Plus emergent semantics

<shop xmlns="http://schemas.restbucks.com/shop"
      xmlns:rb="http://relations.restbucks.com/">
  <items>
    <item>…</item>
    <item>…</item>
  </items>
  <link rel="self" href="http://restbucks.com/quotes/1234" type="application/restbucks+xml"/>
  <link rel="rb:order-form" href="http://restbucks.com/order-forms/1234" type="application/restbucks+xml"/>
</shop>


Think of the browser (user) as a finite state machine where the workflow is driven by link tags, which tell the client which states it may transition to and the URI associated with each state transition.
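A level-3 client drives that workflow from the links themselves rather than from hard-coded URIs. Here is a minimal JavaScript sketch of the idea, assuming the link tags from the restbucks example have already been parsed into plain objects (the response shape below is hypothetical):

```javascript
// Hypothetical sketch: a level-3 client scans the links in the last response
// for a relation it understands, instead of hard-coding the next URI.
// The objects below mirror the <link> tags in the restbucks example.

function nextTransition(links, rel) {
  // Return the link offering the requested state transition, if any.
  for (var i = 0; i < links.length; i++) {
    if (links[i].rel === rel) return links[i];
  }
  return null; // the server did not offer that transition from this state
}

var response = {
  links: [
    { rel: "self", href: "http://restbucks.com/quotes/1234", type: "application/restbucks+xml" },
    { rel: "rb:order-form", href: "http://restbucks.com/order-forms/1234", type: "application/restbucks+xml" }
  ]
};

var link = nextTransition(response.links, "rb:order-form");
```

If the link is absent, the transition is simply not available from the current state, which is exactly how the server steers the client through the workflow.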

The classic design paper on applied REST architecture is here: How To GET a Cup Of Coffee. Moving beyond level 1 requires fine-grained usage of HTTP status codes, link tags, the change headers and media type negotiation. Media formats beyond POX and JSON are required to use level 3 efficiently (OData and AtomPub, for example).

Dude, where’s my two-phase commit? Not supported directly; use the change headers (If-Modified-Since, If-None-Match, ETag) or architectural redesign (redefine resources or workflow). The strategic choices are the design of the finite state machine and the definition of resource granularity.
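A common substitute for the missing two-phase commit is the lost-update check: the client echoes the ETag it last saw in an If-Match header, and the server refuses stale updates with 412 Precondition Failed. A minimal sketch, with hypothetical names and a toy version-counter ETag:

```javascript
// Sketch (hypothetical names): reject updates whose If-Match header no longer
// matches the resource's current ETag, forcing the client to re-GET and retry.

function applyUpdate(resource, ifMatch, newBody) {
  if (ifMatch !== resource.etag) {
    return { status: 412 }; // Precondition Failed: someone else changed it
  }
  resource.version += 1;
  resource.etag = '"v' + resource.version + '"'; // toy ETag for illustration
  resource.body = newBody;
  return { status: 200, etag: resource.etag };
}

var order = { version: 1, etag: '"v1"', body: "latte" };
```

No distributed lock is held; a losing writer just sees the 412 and starts over from a fresh representation.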

clip_image002

(Slide from Rest in Practice)

Architectural Choices:

The Bad Old Days: One resource many, many ‘verbs’.

The Happy Future: Many, many resources, few verbs.

The Hand Cuff Era: Few Resources, Few verbs.

The Greater Verbs:

GET: Retrieve a representation of a resource

POST: Create a new resource (server sets the key)

PUT: Create a new resource (client sets the key); or update an existing resource?

DELETE: Delete an existing resource

Comment: The proper use of PUT vs. POST is still subject to controversy and indicates (to me) that level 3 is still not well defined.

Typically they say POST to create a blog entry and PUT to append a comment to a blog. In CouchDB we POST to create a document and PUT to add a revision (not a delta), getting back a new version number. The difference lies in how the resource is being defined, which is an architectural choice.
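The distinction can be sketched as two request shapes (the CouchDB-style URIs below are illustrative only):

```javascript
// Sketch of the two creation styles. With POST the server picks the key and
// reports it back (in CouchDB, in the response body / Location header); with
// PUT the client has already chosen the key and puts the document at that URI.

function createWithPost(collectionUri, doc) {
  return { method: "POST", uri: collectionUri, body: doc };            // server sets key
}

function createWithPut(collectionUri, key, doc) {
  return { method: "PUT", uri: collectionUri + "/" + key, body: doc }; // client sets key
}

var byPost = createWithPost("http://localhost:5984/deadbase", { venue: "various" });
var byPut  = createWithPut("http://localhost:5984/deadbase", "1965-11-01", { venue: "various" });
```

Choosing PUT here only works because the concert date is a natural key the client already knows; without one, POST is the safer default.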


The Lesser Verbs:

OPTIONS: See which verbs a resource understands

HEAD: Return only the header (no response body)

PATCH: A proposed delta verb that never made it into the standard; no one could agree on a specification for the content.  Microsoft did some early work on this with their XML Diffgram, but no one else followed suit.

Security

Authentication (in order of increased security)

Basic Auth

Basic Auth + SSL

Digest

WSSE Authentication (ATOM uses this)

Message Security:

Message Level Encrypt (WS-SEC)

For the Microsoft coders I highly recommend

RESTful .NET (WCF for REST, Framework 3.5) by Jon Flanders

There are significant advantages to building your RESTful services using .Net.  Here is a comparison table to get you oriented:

DLR’s Cross Reference:

Web Service Standard | REST Service | WCF for REST (Framework 3.5)
TCP/IP + others | TCP/IP | TCP/IP
SOAP Wrapper | HTTP | HTTP
SOAP Headers | HTTP Headers | HTTP Headers
WS*Security | Basic Auth/SSL | Basic Auth/SSL or WS*Security
Early Binding | Late Binding | Late Binding
XSD | WADL | XSD, WADL
XML | Media Negotiation | Media Negotiation
SOAP FAULTS | HTTP Response Codes | HTTP Response Codes
Single Endpoint | Multiple Endpoints, URI Templates | Multiple Endpoints, URI Templates
Client Proxy | Custom | Auto-generated JavaScript proxy


The REST of the Week

Wednesday is more or less vendor day at QCON and the sessions are a step down from the tutorials, but the session quality picked up again on Thursday and Friday.  XXX XXXX, who gave an excellent tutorial last year, gave an informative talk on ‘good code’.  The Mobile Development and HTML5 tracks were well attended and quite informative.  The field is wide open, with many supporting systems being free to the developer (support will cost you extra), and the choices are broad: from browser ‘responsive design’ applications to native-appearing applications to native apps (and someone threw “hybrid apps” into the mix).  The Mobile panel of IBM Dojo, JQuery.Mobile and Sencha was hot.  I am new (to say the least) to Mobile development but here are my (somewhat) random notes on these sessions:

MOBILE Development is HTML5 Development

HTML5 is the stack. Phone and tablet applications use WebKit based rendering engines and HTML5 conformant browsers only (Windows Phone 7 is the exception here). HTML5 has its own new security concerns (New Security Concerns).

Four major application development approaches are:

· Browser Applications;

· Native like Applications;

· Hybrid Applications; and

· Native Applications.

Browser applications may emulate the screens seen on the parallel desktop browser versions on the front end, but in practice the major players (Facebook, YouTube, Gmail) make substantial modifications to at least the non-visual parts of the Mobile experience, making extensive use of local storage and the HTML5 manifest standard for performance and to allow for a reasonable offline experience. Browser applications fall under the guidelines of Responsive Design (aka Adaptive Design) and tend to be used when content will appear similarly on desktop and Mobile devices.

“Native like” applications use:

· The browser in full screen mode with no browser ‘chrome’;

· Widgets created using CSS, JS and HTML5 which simulate the ‘look and feel’ of a native application;

· No access to native functionality (GPS, camera, etc.); and

· Tend to use, but do not require, the HTML5 manifest and local storage (their use is strongly encouraged).

A Native application is still an HTML5 application with the following characteristics:

· All JS Libraries, CSS and HTML are packaged and pre-loaded using a vendor specific MSI/Setup package;

· AJAX type calls for data are allowed;

· Access to native widgets, and/or widgets created using CSS, JS and HTML5;

· Access to Native Functionality (GPS, Camera, etc)

· Standard HTTP GET or POST are NOT allowed

A Hybrid Application is a “Native Like” application placed within a wrapper which allows access to device hardware and software (like the camera) via a special JavaScript interface and, with additional special coding, can be packaged within an MSI/Setup and distributed as a pure Native application.

AJAX calls are made via XHR2 (aka XMLHttpRequest Level 2), which among other things relaxes the single-domain requirement of XHR and adds processing of Blob and File interfaces.
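The cross-domain relaxation in XHR2 only works when the server opts in via CORS response headers; the browser still blocks any cross-origin response the server does not vouch for. A hypothetical sketch of the server-side decision:

```javascript
// Hypothetical sketch of the server-side CORS check behind XHR2 cross-origin
// calls: the server echoes the origin back only if it is on a whitelist;
// without the header, the browser discards the cross-origin response.

function corsHeaders(requestOrigin, allowedOrigins) {
  if (allowedOrigins.indexOf(requestOrigin) === -1) {
    return {}; // no CORS header: the browser will block the response
  }
  return { "Access-Control-Allow-Origin": requestOrigin };
}

var allowed = ["http://m.example.com"]; // illustrative whitelist
```

So "relaxed" does not mean "open": the single-domain rule is replaced by an explicit server-side grant.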

The following major vendors offer free libraries and IDEs for development:

Native Apps: PhoneGap, Appcelerator

Native App Like: Sencha, PhoneGap, IBM Dojo

Browser App: JQuery.Mobile

PhoneGap does NOT require replacement of the Sencha, JQuery.Mobile or Dojo.Mobile libraries.

PhoneGap allows JavaScript to call PhoneGap JavaScript libraries which abstract access to device hardware (camera, GPS, etc).

Sencha does not require replacement of the JQuery.Mobile or Dojo.Mobile libraries.

Although it is theoretically possible to create “Native like” applications with only JQuery.Mobile, this is NOT encouraged.

Local Storage

This is a major area of performance efforts and is still very much open in terms of how best to approach the problem:

The major elements are:

App Cache (for pre-fetch and the Native App approach)

DOM Storage (aka Web Storage)

IndexedDB (vs. Web SQL)

File API (this is really part of XHR2)

Storing Large Amounts of Data Locally

If you are looking to store many megabytes or more, beware that there are limits in place, which are handled in different ways depending on the browser and the particular API in question. In most cases, there is a magic number of 5MB. For Application Cache and the various offline stores, there will be no problem if your domain stores under 5MB. When you go above that, various things can happen: (a) it won’t work; (b) the browser will prompt the user for more space; (c) the browser will check for special configuration (as with the “unlimited_storage” permission in the Chrome extension manifest).
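A defensive write helper makes those failure modes explicit in code. This sketch injects the storage object so it also runs outside a browser (in a real page you would pass window.localStorage); the tiny-quota stand-in is purely illustrative:

```javascript
// Sketch: DOM storage writes can fail once the quota is hit, so wrap setItem
// and report failure instead of letting the exception escape.

function trySave(storage, key, value) {
  try {
    storage.setItem(key, value);
    return true;
  } catch (e) {        // e.g. QuotaExceededError in a real browser
    return false;
  }
}

// Toy stand-in for localStorage with a tiny character quota, for illustration:
function makeTinyStorage(quotaChars) {
  var used = 0, data = {};
  return {
    setItem: function (k, v) {
      if (used + v.length > quotaChars) throw new Error("QuotaExceededError");
      used += v.length;
      data[k] = v;
    },
    getItem: function (k) { return data[k]; }
  };
}
```

On a false return the application can fall back to fetching from the network, pruning old entries, or asking the user for more space.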

IndexedDB:

clip_image002

Web SQL Database is a web page API for storing data in databases that can be queried using a variant of SQL.

Browser storage support as of two weeks ago:

API | IE | Chrome | Safari | Firefox | iOS | BBX [RIM] | Android
IndexedDB | Supported | Supported | No Support | Supported | No Support | No Support | No Support
WEB SQL | No Support | Supported | Supported | No Support | Supported | Supported | Supported


Doing HTML5 on non-HTML5 browsers: if you are doing responsive design and need to work with desktop and Mobile using the same code base, look at JQuery.Mobile, DOJO and Modernizr (there is strong Microsoft support for this last JavaScript library).

WEB API

What is it? Just a name for breaking out the AJAX servers from the web server. This is an expansion of REST into just serving data for XHR. It is a helpful way to specialize our design discussions by separating serving pages (with MVC or whatever) from serving data calls from the web page. Except for security the two can be architecturally separated.

Web APIs Technology Stack

clip_image002[6]

Look familiar? Looks like our old web server stack to me.

NOSQL

The CAP Theorem  (and Here)

  • Consistency: (all nodes have the same data at the same time)
  • Availability: (every request receives a response – no timeouts, offline)
  • Partition tolerance: (the system continues to operate despite arbitrary message loss)

Pick Any Two

If some of the data you are serving can tolerate eventual consistency, then NOSQL is much faster.

If you need two phase commit, either use a SQL database OR redefine your resource to eliminate the need for the 2Phase Commit.

NoSQL databases come in two basic flavors:

Key/Value: These are popular for content management and where response time must be minimal. In general you define which B-trees you want to use before the fact. There are no on-the-fly joins or projections. MongoDB and CouchDB are typical leaders in this area.

Column Map: This is what Google calls BigTable. It is better for delivering groups of records based on criteria which may be defined ‘on the fly’. Cassandra is the leader in this group.

Web Sockets:

Sad to say, this is still not standardized, and preliminary support libraries are still a little rough.  Things do not seem to have moved along much since the Microsoft sessions I attended at MIX 11.

Photos: All Photos by Cloud2013

Microsoft MVC 3 and CouchDB – Low Level Get Calls

I have written elsewhere about couchdb on Windows and using Ruby on Rails to interface to this system.  These posts can be found here:

Part 0 – REST, Ruby On Rails, CouchDB and Me

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 – CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB

Part 6 – Getting The Data Into And Out Of CouchDB

Part 7 – JQUERY, JPlayer and HTML5

In my work life I work in a Microsoft shop, which for us means Microsoft servers for the back end and (mostly) pure HTML/AJAX frontends.  We are transitioning towards using Microsoft MVC 3 to provide HTTP end points for our AJAX calls.  Here are some notes from my POC work in this area.  My couch data consists of documents describing Grateful Dead concerts stored on that great site, the Internet Archive; if you have never visited the Internet Archive, please do so.  I reverse engineered the meta data of IA’s extensive collection of Dead concerts (over 2,000 concert recordings).  Visit the Grateful Dead Archive Home at the Internet Archive here.

CouchDB Documents and Views

I stored the meta data into a local couchdb (running on Windows XP).  The basic document I am storing is a master/detail set for the ‘best’ recording of each Dead concert.  The master part of the document contains the date, venue and other data about the concert, and the detail set is an array of meta data on each song performed during the concert.  As is traditional with couchdb, the documents are represented as JSON strings.  Here is what the document for the UR recording (1965-11-01) found on the IA looks like:

{
  "_id": "1965-11-01",
  "_rev": "1-6ea272d20d7fc80e51c1ba53a5101ac1",
  "mx": false,
  "pubdate": "2009-03-14",
  "sb": true,
  "venue": "various",
  "tracks": [
    {
      "uri": "http://www.archive.org/download/gd1965-11-01.sbd.bershaw.5417.sbeok.shnf/Acid4_01_vbr.mp3",
      "track": "01",
      "title": "Speed Limit",
      "time": "09:48"
    },
    {
      "uri": "http://www.archive.org/download/gd1965-11-01.sbd.bershaw.5417.sbeok.shnf/Acid4_02_vbr.mp3",
      "track": "02",
      "title": "Neil Cassidy Raps",
      "time": "02:19"
    }
  ]
}

Couchdb allows the creation of views, which are binary trees with user defined keys and user defined subsets of the document data.  If one wanted to return the venue and the tracks of each concert for a given month and day (across all years), the view created in couchdb would look like:

"MonthDay": {
  "map": "function(doc){emit(doc._id.substr(5,2)+doc._id.substr(8,2),[doc.venue, doc.IAKey, doc.tracks])}"
}

This view allows us to use an HTTP GET to pass in a month-day key (e.g. "1101") and get back (as a JSON array), for each matching concert:

the month-day key (MMDD: doc._id.substr(5,2)+doc._id.substr(8,2));

the venue (doc.venue);

the IA URI of the concert (doc.IAKey); and

an array of track data (doc.tracks).
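Because the map function is plain JavaScript, we can exercise it outside CouchDB by supplying our own emit(). A sketch with a cut-down version of the 1965-11-01 document:

```javascript
// The MonthDay map function, run outside CouchDB with a stub emit() that
// collects rows the way the view index would.

var rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

function monthDayMap(doc) {
  // "1965-11-01".substr(5,2) is "11", .substr(8,2) is "01" -> key "1101"
  emit(doc._id.substr(5, 2) + doc._id.substr(8, 2), [doc.venue, doc.IAKey, doc.tracks]);
}

var doc = {
  _id: "1965-11-01",
  venue: "various",
  IAKey: "gd1965-11-01.sbd.bershaw.5417.sbeok.shnf",
  tracks: [{ track: "01", title: "Speed Limit", time: "09:48" }]
};

monthDayMap(doc);
```

A GET against the view with key="1101" would return this row, alongside rows for every other year's November 1st concert.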

MVC URL Routing Maps

Although we could call couchdb directly from the browser, we normally work through a gateway system for security, so we will build a shim to sit between the browser and couchdb.  This allows us to flow the authentication / authorization stack separately from couchdb’s security system.  In MS MVC we can create a new HTTP endpoint for AJAX calls (our shim) in a very simple manner. Let’s create an endpoint which will look like:

http://{our server path}/DeadBase/MonthDay/{month}/{day}

where

http://{our server path}/DeadBase/MonthDay/11/01

would request month 11 and day 01 concerts.  In MVC we can declare this routing as:

routes.MapRoute(
    "MyMonthDay",
    "{controller}/{action}/{month}/{day}",
    new { controller = "DeadBase", action = "RestMonthDay" });

Done.  Interestingly, in MVC 3 this route definition will accept either the form:

http://{our server path}/DeadBase/MonthDay/{month}/{day} ; or

http://{our server path}/DeadBase/MonthDay?month="??"&day="??"

In the second form, parameter order does not matter, but case does; quotation marks are optional and need to be dealt with internally by the action method.

Either of these calls will resolve to the same controller and method.
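That internal clean-up can be as simple as stripping the optional quotation marks before using the bound value (a hypothetical helper, shown in JavaScript for brevity; the MVC shim would do the same in C#):

```javascript
// Hypothetical helper: the query-string form may deliver month="11" or
// month=11, so strip any surrounding quotation marks before using the value.

function cleanParam(raw) {
  if (raw == null) return raw;          // missing parameter: leave as-is
  return raw.replace(/^"+|"+$/g, "");   // drop leading/trailing quote marks
}
```

After this, '"11"' and '11' both resolve to the same key for the couchdb view lookup.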

MVC Controller and Method Handler

We now need to create the shim which will be the target for the Http Endpoint.  In C# this looks like:

public class DeadBaseController : Controller
{
    public string RestMonthDay(string month, string day)
    {
        // our shim code goes here
    }
}

We are able to use string as our return type because we will be calling couchdb, which returns a string form of JSON by default.  As a side note, if we wanted to use MVC 3 to return JSON from a native C# object, our controller method would take a different form:

public JsonResult GetStateList()
{
    List<ListItem> list = new List<ListItem>() {
        new ListItem() { Value = "1", Text = "VA" },
        new ListItem() { Value = "2", Text = "MD" },
        new ListItem() { Value = "3", Text = "DC" } };
    return this.Json(list);
}

Our AJAX call from the browser does not need to know any of these details.  Here is one way to code the call in JavaScript using JQuery:

var url = urlBase + "?" + args;

$.ajax({
    url: url,
    dataType: 'json',
    success: okCallBack,
    error: nookCallBack
});

function okCallBack(data) {
    gdData = data;
    // do something useful here
}

function nookCallBack(xhr, ajaxOptions, errorThrown) {
    alert("ErrorText:" + errorThrown + " " + "Error Code:" + xhr.status);
}

From Handler to CouchDB in C#

Here is the rest of the generic C# code to go from the Handler to CouchDB and back.

Clean the parameters and pass the call to a generic couchDB GET caller:

image

Format the view name and parameter into couchdb format  and pass to the low level couchDB caller:

image

Classic Framework HTTP code to make the HTTP GET and return the results as a string back up the call stack:

image

We could (and did) take our Browser code from the Ruby on Rails project above and with minimum changes call our MVC shim.

Simple, clean and fun.

Occupy your mind

REST, Ruby On Rails, CouchDB and Me – Part 5 Getting The Data Ready for CouchDB

Part 0 – REST, Ruby On Rails, CouchDB and Me

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 – CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB

Part 6 – Getting The Data Into And Out Of CouchDB

Part 7 – JQUERY, JPlayer and HTML5

The Internet Archive and the Grateful Dead

The Internet Archive (IA) is a 501(c)(3) non-profit corporation dedicated to the dissemination of public domain digital artifacts.  This includes such items as books, videos of all kinds (TV shows, shorts and feature films) and audio recordings of all kinds (musical and spoken word).    One of their most popular projects is a truly huge collection of Grateful Dead concert recordings.  One of the most visited pages of the Internet Archive’s web site is the “Grateful Dead Shows on This Day In History” page, which lists, and allows playback of, any Grateful Dead concerts they have in their collection for whatever day you visit the site.  Did I say that IA has a large number of Grateful Dead concert recordings?   There are recordings of around 2,000 separate concert dates.  Any given concert may be represented by multiple recordings from a variety of sources: soundboards and audience recordings using professional and amateur equipment.  Original media ranges from cassette tapes to 7 inch reel-to-reel and digital media.  If you are going to be working with the IA Grateful Dead collection, please review the FAQ on the collection’s policy notes as well as the special notes here.

IA uses a very sophisticated data repository of meta data and an advanced query engine to allow retrieving both the meta data and the recordings.  Meta data can be retrieved directly using the “advanced search” engine.  On the day I started this post I visited IA and used “Grateful Dead Shows on This Day In History”.  The query returned data on 8 concerts (and 25 recordings of those 8 concerts).  A partial page image is given below:

image

Clicking on any of these entries moves us to a second screen in order to play the concert recording.  A screen shot of the playback screen looks like this:

image

Looking closer at the second screen we see the music player:

image

Can we design a faster, simpler and better looking interface into the Grateful Dead Archive?  Can couchDB help us?  The first question will be addressed in a later post. This current post will look at how couchDB can help us achieve a faster, more efficient information system.  IA does a super job of serving up the music files on demand – there is no reason to duplicate their storage system.   However, IA is fairly slow to serve up meta data (such as the results of the “Grateful Dead Shows on This Day In History” query).  Abstracting the IA metadata into a CouchDB database will allow us to serve up the meta data much faster than the IA query system.

Getting Data Into CouchDB

Our basic plan for using RUBY to get data from IA and into couchdb consists of:

  1. Prepare a URL query request to get the basic recording meta data (not the track meta data);
  2. Submit a GET request to IA using the URL query;
  3. Parse the XML returned to get at the individual concert meta data fields;
  4. Select the BEST recording for any given concert (more on this below);
  5. Prepare a URL to request the track listing XML file, based on the IA primary key of the selected concert recording;
  6. Submit a GET request to IA;
  7. Parse the XML returned to get at the individual track meta data fields;
  8. Create a ruby object which can be safely serialized to JSON;
  9. Serialize the object to a JSON string (i.e. a JSON document);
  10. Do a POST request to insert a JSON document for each concert into the couchdb database;
  11. Create couchDB views to allow optimized data retrieval; and
  12. Create a couchDB view to optimize retrieval of recordings across all years for an arbitrary month and day (this duplicates the data provided by the “Grateful Dead Shows on This Day In History” selection in the Internet Archive).

Note we are not accessing or storing the actual music files.  Before discussing how this plays out in practice, let’s define our JSON couchDB document.  We will cover items one through eight in this post and turn to items nine through twelve in the next post.

CouchDB Document Schema

CouchDB databases start with documents as the basic unit.  Typically a couchdb based application will have one database holding one or more variant document types.  There will be one or more design documents which provide multiple views, show functions and map functions as necessary to facilitate the application.  We will use a single document type which represents an abstract of the meta data contained in IA for individual recordings (we are going to select the one ‘best’ recording per concert).  Our couchdb database will hold one document per concert.   The tracks (actually the track meta data) will be stored as an array within the concert document.  We will populate the couchdb database in a single background session pulling meta data (NOT THE MUSIC FILES) from IA, and we will include the IA publication date in the document so we can update our database when (if) new recordings are added to the Grateful Dead collection on IA.

Here are the document fields which we will use:

Field | Notes | Typical Values
_id | couchdb primary key.  We will use a natural key: a string representation of the concert date. | 1969-07-04
_rev | revision number provided by couchDB | 1-6ea272d20d7fc80e51c1ba53a5101ac1
IAKey | Internet Archive key for this recording | gd1965-11-01.sbd.bershaw.5417.sbeok.shnf
pubdate | Internet Archive date when the recording was published to the web | 2009-03-14
venue | where the concert took place | Fillmore East Ballroom
description | free text describing the concert – provided by the uploader | Neal Cassady & The Warlocks 1965 1. Speed Limit studio recording/Prankster production tape circa late 1965
cm | boolean – recording by Charlie Miller – used to select the ‘best’ recording | true or false
sb | boolean – recording was made from a soundboard – used to select the ‘best’ recording | true or false
mx | boolean – a matrix style recording – used to select the ‘best’ recording | true or false
tracks | an array of meta data for each track of the recording | see below

Each track in the tracks array formally looks like:

Field | Notes | Typical value
IAKey | The Internet Archive key for this track.  This key is unique within a given recording (see the IAKey above). | gd1965-11-01.sbd.bershaw.5417.sbeok.shnf/Acid4_01_vbr
track | track number | 02
title | the song title | Cold Rain and Snow
time | the length of the track in minutes and seconds | 09:48

Let’s call everything except the tracks our BASE data and the track data our TRACK data.

We insert documents into the database (using an HTTP POST) as JSON, so a typical document would look like this in JSON format:

{
  "_id": "1966-07-30",
  "IAKey": "gd1966-07-30.sbd.GEMS.94631.flac16",
  "pubdate": "2008-09-22",
  "venue": "P.N.E. Garden Auditorium",
  "description": "Set 1 Standing On The Corner I Know You Rider Next Time You See Me",
  "cm": false,
  "sb": true,
  "mx": false,
  "tracks": [
    {
      "IAKey": "gd1966-07-30.sbd.GEMS.94631.flac16/gd1966-07-30.d1t01_vbr",
      "track": "01",
      "title": "Standing On The Corner",
      "time": "03:46"
    },
    {
      "IAKey": "gd1966-07-30.sbd.GEMS.94631.flac16/gd1966-07-30.d1t02_vbr",
      "track": "02",
      "title": "I Know You Rider",
      "time": "03:18"
    },
    {
      "IAKey": "gd1966-07-30.sbd.GEMS.94631.flac16/gd1966-07-30.d1t03_vbr",
      "track": "03",
      "title": "Next Time You See Me",
      "time": "04:00"
    }
  ]
}


Hacking The Internet Archive: Getting Data From IA and Into CouchDB:

Here is the URL to obtain the page for “Grateful Dead Shows on This Day In History”:

http://www.archive.org/search.php?query=collection:GratefulDead%20date:19??-08-04&sort=-/metadata/date

This is a simple GET request with a query against the IA “collection” of Grateful Dead items, filtered on the date string 19??-08-04 and sorted descending by concert date.  This GET returns an HTML page.  This type of interface is known as an HTTP RPC interface.  RPC (Remote Procedure Call) interfaces are not pure REST interfaces, but they are somewhat RESTful in that they allow us to make a data request using a late bound, loosely coupled HTTP call.  See here and here for more theoretical background on RPC calls.  IA provides an “Advanced Search” function which will allow us to return data for an arbitrarily complex query in one of several different data formats other than HTML.  We selected XML as the format for our work here.  XML is the traditional format for HTTP RPC, but other formats may be better for certain applications.  Unfortunately IA does not directly document the format of the RPC data request, but they do provide a QBE page to build the request.  The page looks like this:

image

Using this screen we can compose an HTTP RPC request which will mimic the URL produced by “Grateful Dead Shows on This Day In History”, and with a little brain effort and experimentation we can understand how to compose requests without using the QBE screen.  We feed the RPC request query back into advanced search and select XML as the output format, as shown here:

clip_image001

This produces an example of the HTTP RPC request which will return our desired data in our desired format.  Thus we generate a URL-encoded RPC request like:

@uri="http://www.archive.org/advancedsearch.php?q=collection%3AGratefulDead+date%3A#{_dateString}&fl%5B%5D=avg_rating&fl%5B%5D=date&fl%5B%5D=description&fl%5B%5D=downloads&fl%5B%5D=format&fl%5B%5D=identifier&fl%5B%5D=publicdate&fl%5B%5D=subject&fl%5B%5D=title&sort%5B%5D=date+desc&sort%5B%5D=&sort%5B%5D=&rows=2000&page=1&callback=callback&output=xml"

where we replace #{_dateString} with a date string like 19??-08-08.  Of course, to get one year’s worth of data we could use a date string like 1968-??-??.  It is a simple extension of the query language to replace the singular date request date%3A#{_dateString} with a date range.

The request above returns Grateful Dead recording data for all years of the last century recorded on 08-08.  The XML output returned to the caller looks like:

clip_image001[4]

In a more graphic format the output looks like:

clip_image001[6]

Within Ruby we will need to make the HTTP GET request with a desired date range, transform the body of the returned request into an XML document and use XPATH to parse the XML and retrieve the meta data values for each recording (see below).  There is NOTHING inherently wrong with this RPC interface.  It is flexible and allows us to select only the data fields we are interested in and return data only for the dates we wish.  RUBY supports native consumption of neither JSON nor XML, so the XML format of the data is as good as any other, and numerous tools exist in RUBY to manipulate XML data.  I wish RUBY had a more native interface for JSON, but it does not.

At this point, we do not have meta data about the individual tracks in a given recording.  It turns out that we can get this data, but not through an HTTP RPC request.  It turns out, dear reader, that if we have the IAKey for the recording we can obtain an XML file with track meta data by making the following call:

http://www.archive.org/download/{IAKEY}/{IAKEY}_files.xml.

This file contains assorted XML data; its contents vary by which formats IA makes available for the individual tracks via a 302 (HTTP redirect).  This is not an RPC call, so we are far from a RESTful interface here.  We do not have control over the fields or range of the data included in this call.  It is all or nothing.  But at least the XML format is simple to manipulate.  With the IAKey in hand for an individual recording, and making some reasonable guesses, we can parse the XML file of track data and compose the TRACKS array for our couchDB document using XPATH. A single entry for the high bit rate mp3 track recording looks like:

<file name="gd89-08-04d2t01_vbr.mp3" source="derivative">
<creator>Grateful Dead</creator>
<title>Tuning</title>
<album>1989-08-04 - Cal Expo Amphitheatre</album>
<track>13</track>
<bitrate>194</bitrate>
<length>00:32</length>
<format>VBR MP3</format>
<original>gd89-08-04d2t01.flac</original>
<md5>91723df9ad8926180b855a0557fd9871</md5>
<mtime>1210562971</mtime>
<size>794943</size>
<crc32>2fb41312</crc32>
<sha1>80a1a78f909fedf2221fb281a11f89a250717e1d</sha1>
</file>

Note that we have the IAKey for the track (gd89-08-04d2t01) as part of the name attribute.
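A quick REXML sketch of pulling the track meta data out of one such <file> element. The suffix-stripping rule (`_vbr.mp3`) is an assumption that holds for the high bit rate mp3 entries shown above:

```ruby
require 'rexml/document'

# A trimmed copy of the <file> element returned by the _files.xml call.
xml = <<XML
<file name="gd89-08-04d2t01_vbr.mp3" source="derivative">
  <title>Tuning</title>
  <track>13</track>
  <length>00:32</length>
  <format>VBR MP3</format>
</file>
XML

file_elm  = REXML::Document.new(xml).root
name      = file_elm.attributes['name']        # "gd89-08-04d2t01_vbr.mp3"
track_key = name.sub(/_vbr\.mp3\z/, '')        # recover the per-track IAKey
title     = file_elm.elements['title'].text
track     = file_elm.elements['track'].text
```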

Garcia

Using a background Ruby Process to Read the Data

The following RUBY GEMS are required to complete this step:

rest-open-uri : This GEM extends open-uri to support the POST, PUT and DELETE HTTP commands

json : This GEM handles serialization and de-serialization of a limited subset of RUBY into JSON strings.

From the standard RUBY library we will also be using

rexml : This library creates XML documents from XML strings and supports XPATH, which we will use to read the XML documents from IA

Our first step is to get the data via HTTP and parse the XML file returned to find individual recordings.  There are (in most cases) multiple recordings per concert (per date) and we want to retain for the database only the "best".

In pseudo Ruby code:

require 'rest-open-uri'
require 'rexml/document'

def initialize(_dateString)
  #HTTP GET, build the response body as a string and transform the string into an XML node tree
  #mind the screen wrap and html encoding:
  @uri="http://www.archive.org/advancedsearch.php?q=collection%3AGratefulDead+date%3A#{_dateString}&fl%5B%5D=avg_rating&fl%5B%5D=date&fl%5B%5D=description&fl%5B%5D=downloads&fl%5B%5D=format&fl%5B%5D=identifier&fl%5B%5D=publicdate&fl%5B%5D=subject&fl%5B%5D=title&sort%5B%5D=date+desc&sort%5B%5D=&sort%5B%5D=&rows=2000&page=1&callback=callback&output=xml"

  xmlString=''
  open(@uri) do |x|      #build a representation of the response body as a string
    x.each_line do |y|
      xmlString=xmlString+y
    end
  end
  if xmlString==''
    puts 'No String Returned From Internet Archive'
    exit
  end
  @IAXMLDocument=REXML::Document.new(xmlString)  #turn the string into an XML document
end

Now we need to loop through the XML document, pull out each 'doc' section using XPATH, and read each doc section for the meta data for that recording.

#use XPATH to find each response/result/doc node and yield

def get_recordings(document)
  document.elements.each('response/result/doc') do |doc|
    yield doc
  end
end

#get the XML document and yield

def get_record(xmldoc)
  get_recordings(xmldoc) do |doc|
    yield doc
  end
end

#general purpose XPATH method to extract element.text (the metadata values) for arbitrary XPATH expressions

def extract_ElmText(doc,xpath)
  doc.elements.each(xpath) { |element| return element.text }
end

def worker(xmldoc)
  #main loop
  _docCount=0
  get_recordings(xmldoc) do |doc|
    _docCount+=1
    _date=extract_ElmText(doc,'date[@name="date"]')[0..9]
    _description=extract_ElmText(doc,"str[@name='description']")
    _title=extract_ElmText(doc,"str[@name='title']")
    _title=clean_Title(_title)
    _keylist=pull_keys(doc)
    _pubdate=extract_ElmText(doc,'date[@name="publicdate"]')[0..9]  #there is a bug here, corrected by the lines below

    if (_pubdate.length==0)
      _pubdate='1999-01-01'
      puts "No Publication Date: #{_date} #{_title}"
    end
    _uri=extract_ElmText(doc,'str[@name="identifier"]')
    _tracklist=nil  #filled in later from the track files XML

    #make a RUBY class object to hold one recording
    _record=GDRecording.new _date, _description, _tracklist, _title, _keylist, _pubdate, _uri

    #save the recording class objects in an array
    @list[@list.count]=_record
  end
end

In this code the 'worker' method calls the helper methods to:

1) do the HTTP GET to make the RPC request and read the response body one line at a time;

2) transform the lines into a single string and convert (REXML::Document.new) the string into an XML document for processing by XPATH;

3) loop through the doc nodes of the XML tree and extract the values of the meta data fields;

4) pass the meta data values to a RUBY class (GDRecording) which holds them for later processing; and

5) temporarily store the recordings in an array for the next processing step.

Note that these routines work whether the query returns a single day (with multiple recordings), multiple days, or even the whole dataset!  What is essential is that we process the file as N 'doc' sub trees (which represent single recordings) and have the recording date (at least) to group our data and extract the 'best' recording within each date group.

Our next step will be group the recordings by day (i.e. concert) and provide our own filter to select a single ‘best’ recording for each concert.

Shake and Bake:  Finding A ‘Best’ Recording.

best

What is the best Grateful Dead concert?  Why, the first one I went to, of course.  Just ask any Deadhead and you will probably get the same answer.  But what is the best recording of any given GD concert? My approach is very simple.

  • Most recent posted recordings are better than older recordings. (least important criteria)
  • Soundboard recordings are better than audience recordings.
  • Matrix recordings are even better.
  • Recordings mixed by Charlie Miller are best of all. (most important criteria)

Well, these are MY criteria.  Whatever criteria you choose, as long as they are hierarchical you can code the selection in a very trivial manner.  If we have a field in each recording for the concert date and a field for each selection criterion (we derive these from the keywords field in IA), we sort the recordings by date and then by each of the criteria from most important (Charlie Miller in my case) to least important (date posted), and then select the first recording in sort order within each date group. In Ruby the sort of our list of recordings is trivial to code and easy to manipulate (add new criteria or change the priority of criteria). The sort statement looks like this:

@list.sort! { |a,b| (a.date+a.cm.to_s+a.sb.to_s+a.mx.to_s+a.pubdate ) <=> (b.date+b.cm.to_s+b.sb.to_s+b.mx.to_s+b.pubdate )   }
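To see why the composite string key works, here is a minimal sketch with hypothetical data. It assumes the criteria flags have been encoded as strings that sort "best first" ('0' = yes, '1' = no); the actual flag encoding in the blog's GDRecording class may differ:

```ruby
# date + cm + sb + mx + pubdate concatenated into one sortable string key.
Recording = Struct.new(:date, :cm, :sb, :mx, :pubdate)

list = [
  Recording.new('1972-07-22', '1', '0', '1', '2001-01-01'),  # soundboard only
  Recording.new('1972-07-22', '0', '0', '1', '2008-08-15'),  # Charlie Miller mix
  Recording.new('1972-07-26', '1', '1', '0', '2005-05-05'),  # matrix
]

# Same shape as the sort statement above: within each date, the Charlie
# Miller recording ('0') sorts ahead of the plain soundboard ('1').
list.sort! { |a, b| (a.date + a.cm + a.sb + a.mx + a.pubdate) <=> (b.date + b.cm + b.sb + b.mx + b.pubdate) }
```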

Once sorted we create a list of best recordings as:

def newSelect
  _dateGroup=nil
  _list=Array.new
  if @list==nil or @list.count==0
    puts 'No Recordings.'
    return
  end
  @list.each do |rec|
    if _dateGroup!=rec.date
      if _dateGroup!=nil
        @selectList[@selectList.count]=_list[0]
      end
      _dateGroup=rec.date
      _list=Array.new
    end
    _list[_list.count]=rec
  end
  if _dateGroup!=nil
    @selectList[@selectList.count]=_list[0]
  end
end

Note that this code is not only simple but independent of the selection criteria we are using.
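For comparison, the same selection can be written more idiomatically with Enumerable#group_by. This is a sketch: it assumes the list is already sorted as above, and uses plain hashes in place of the recording objects:

```ruby
# Take the first (i.e. best, given the sort order) recording in each date group.
def new_select(list)
  list.group_by { |rec| rec[:date] }.map { |_date, recs| recs.first }
end

sorted = [
  { date: '1972-07-22', id: 'a' },  # best for this date (sorted first)
  { date: '1972-07-22', id: 'b' },
  { date: '1972-07-26', id: 'c' },
]
best = new_select(sorted)
```

Because Ruby hashes preserve insertion order, group_by keeps the date groups in sorted order as well.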

Now that we have a list of the recordings we are interested in, we can get the XML file of track meta data by using the IAKey discussed above, making a simple GET call, and parsing the XML file for the meta data of each track.  Much of the code duplicates the XML code presented above, so we reproduce only a short section which uses a slightly different XPATH syntax:

open(filesURI) do |x| x.each_line do |y| xmlString=xmlString+y end end
REXML::Document.new(xmlString).elements.each('files') do |doc|
  doc.elements.each('file') { |file_elm|
    obj=nil
    title=nil
    trackString=nil
    lengthString=nil
    obj=file_elm.attributes["name"]
    file_elm.elements.each('title') { |element| title=element.text }
    file_elm.elements.each('track') { |element| trackString=element.text }
    file_elm.elements.each('length') { |element| lengthString=element.text }

    # {omitted code}
  }
end

Okay, now we have a (hash) list of recording meta data, each item of which contains a (hash) list of track meta data for that recording.  In our next post we will leave this unRESTful world behind and move into the RESTful world of couchDB when we:

  • Serialize the object to a JSON string (i.e. a JSON document);
  • Do POST requests to insert  a JSON document for each concert into the couchdb database;
  • Create couchDB views to allow optimized data retrieval; and
  • Create a couchDB view to optimize retrieval of recordings for all years for an arbitrary Month and Day (this duplicates the data provided by the “Grateful Dead Shows on This Day In History” selection in the Internet Archive).

cat on fancy couch

REST, Ruby On Rails, CouchDB and Me – Part 4 – CURL on Windows And Ruby POST   Leave a comment

Part 0 – REST, Ruby On Rails, CouchDB and Me

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB

Part 6 – Getting The Data Into And Out Of CouchDB

Part 7 – JQUERY,JPlayer and HTML5

In This Post:

  • CURL and Couchdb
  • Documents Design and Otherwise
  • Posting Documents to couchDB Using Ruby

If you are like me, you have spent some time with the free ebook: CouchDB The Definitive Guide.  If you are a Windows user you may have run into some problems with the examples given in the chapter on “Design Documents”.  Specifically, they don’t work ‘out of the box’.  The examples in that chapter show us how to create a database, to create and post a design document, and to post a document to the database.  These examples use CURL in a command shell.

OmniVortex

Since we are running Windows, first we need to install CURL on our system and set the system path to include the CURL executable. We can get a Windows version here.  Use the version labeled DOS, Win32-MSVC or Win64 depending on your system. We assume here that couchDB has been installed successfully on your system. Now open a ‘command prompt’ on your system.  If you must have a UNIX type shell you need to install CYGWIN or some other UNIX emulator for Windows.  If you are using the Aptana IDE like me, you need to create an “external command” to open a command shell within Aptana.  This figure illustrates the setup within the Aptana IDE to do this:

[screenshot: Aptana external command setup]

In the command shell you can create a couchdb database using CURL.  Couchdb is RESTful, so we use a PUT command for all actions which CREATE a resource, of which a database is one example.  The format of the command is:

curl -X PUT http://{couchdb}/{yourdatabasename}

I want to create a database named deadbase, so on my system this command and response look like:

C:\Documents and Settings\dredfield\My Documents\Aptana Studio Workspace\couchDB01

>curl -X PUT http://127.0.0.1:5984/deadbase

{"ok":true}

where {"ok":true} is the response body of the HTTP response to my PUT command.  Confirm your work by starting a browser and navigating to the Futon user interface of your couchdb installation.  On my system this url is:

http://127.0.0.1:5984/_utils/index.html

you should see something like this:

[screenshot: Futon showing the new deadbase database]

CURL and Documents

OK, now let’s make a design document for this database and PUT that document to the new database.  With slight modifications to the example given in CouchDB The Definitive Guide, my first cut at a design document looks like this:

{
  "_id" : "_design/example",
  "views" : {
    "View00" : {
      "map" : "function(doc){emit(doc._id, doc.date)}"
    }
  }
}

This is a JSON formatted document.  Initial syntax checking is up to you.  Basically, couchDB will accept anything within the outer brackets, whether or not it is formatted as usable JSON.  We have several options for checking syntax.  There are free online syntax checkers like JSONLint.  The interface to JSONLint looks like:

[screenshot: JSONLint]

An installable open source JSON checker and visualizing tool, JSON View, is available here.  JSON View’s output looks like:

[screenshot: JSON View output]
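Instead of an online checker, the json GEM can do the same syntax check locally. A small sketch (the helper name is my own):

```ruby
require 'json'

# Returns true if the text parses as JSON, false otherwise.
def valid_json?(text)
  JSON.parse(text)
  true
rescue JSON::ParserError
  false
end
```

Usage: `valid_json?(File.read('mydesign.json'))` before attempting the PUT.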

Now that we know our syntax is correct (if not the logic of the design document – more on this in the next installment) we can PUT this document to our database.  We can have more than one design document in a given database.  The name (id) of this document is "_design/example", where "_design" tells couchdb this is indeed a design document and its name is "example".   My document is named mydesign.json on my file system.  The CURL command to PUT this into the database looks like:

curl -X PUT http://127.0.0.1:5984/deadbase/_design/example -d @mydesign.json

couchdb will respond:

{"ok":true,"id":"_design/example","rev":"1-45f081a3f681b28ce7a0bdc5db216e74"}

Note that this is NOT the syntax shown in CouchDB The Definitive Guide.  The syntax there will not work in a Windows shell (i.e. command prompt).  Even when you have a syntactically correct JSON document and the correct format of the PUT statement on Windows, you may receive an error message from CURL complaining about UTF8 errors within the document and have a PUT failure.  The problem here is that the Windows file system supports several encoding schemes and various Windows programs save documents with different default encodings.  If you are using Notepad.exe to create your files, be sure to save the files in ANSI format.

 
Check your work using the FUTON interface: locate the “_design/example” document in deadbase

[screenshot: Futon document list showing _design/example]

Double click on the document:

[screenshot: the design document, Fields view]

Note that “views” is a “Field” within the document.  Select the “Source” tab  and take a look inside the document:

[screenshot: the design document, Source tab]

Now let’s POST a document into the database.  Since we have not defined any validation fields, we can push anything into the database, even documents which consist of just “{}”.  CouchDB defines only one innate restriction:

If a document defines the id field (“_id”) then the value of _id must not conflict with an existing value of the ID field of ANY other document in the database.

If the document does not define an ID field, couchDB will generate an ID (as a UUID) and apply it to the document.  You can supply your own ID values: you can either generate your own value (Ruby can generate a GUID for you) or you can request a GUID from couchdb with a GET command.  See this page for more information.  In the sample program I will be developing for this series I will be using a ‘natural key’ – that is, a key whose value has an actual meaning (a Social Security number is such a natural key, for example, but please never use this).  If you try to POST a document and use a duplicate key you will get back a 409 status code for the error.
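If you do want to mint your own GUID-style IDs in Ruby rather than asking couchdb for one, the standard library can do it (a sketch; SecureRandom ships with Ruby's standard library):

```ruby
require 'securerandom'

# Generate a client-side UUID suitable for use as a couchdb _id.
doc_id = SecureRandom.uuid   # e.g. "f81d4fae-7dec-41d0-a765-00a0c91e6bf6"
```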

The document I will be using in the next post looks like this:

{

"_id" : "1972-07-22",

"IAKey" : "gd1972-07-22.sbd.miller.94112.sbeok.flac16",

"description" : "Set 1 Bertha Me And My Uncle You Win Again Jack Straw Bird Song Beat It On Down The Line Sugaree Black Throated …

"pubdate": "2008-08-15",

"sb": true,

"cm": true,

"mx": false,

"venue": "Paramount Northwest Theatre"

}

If I save this document as ConcertRecord.json I can use CURL to POST this document as:

curl -H "Content-Type: application/json" -X POST http://127.0.0.1:5984/deadbase/ -d @ConcertRecord.json

and couchdb will reply with an HTTP status 200 and a response body of:

{"ok":true,"id":"1972-07-22","rev":"1-01a182f329c40ba3bab4b13695d0a098"}
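The response body is itself JSON, so from Ruby you can parse it and hold on to the revision for later updates. A sketch using the json GEM on the literal response above:

```ruby
require 'json'

# The body couchdb returned from the POST.
response_body = '{"ok":true,"id":"1972-07-22","rev":"1-01a182f329c40ba3bab4b13695d0a098"}'

reply = JSON.parse(response_body)
rev   = reply['rev'] if reply['ok']   # needed if we ever update this document
```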

In couchDB Futon this document looks like:

[screenshot: the concert document in Futon]

Note that the order of the fields is set by couchDB, not the order in the first loaded document.

Ruby At Last

OK, enough of the command shell; let’s do some couchDB work using RUBY.  I am going to access couchDB from a fairly low level within Ruby in these posts.  There are several ActiveRecord type GEMS which will interface with couchDB, but my focus here will be on: (1) speed of access and (2) transferability of knowledge between Ruby access and direct Javascript/Browser access to couchDB.

Here’s a minimum of what we need to POST a document to a couchdb using RUBY.

The GEMS we need:

JSON : This will always load the Ruby based version of the JSON module.  If you want to have ‘pure’ JSON (i.e. a C based module) you will need to have the Ruby/Windows DevKit installed on your system.  For our purposes the ‘pure’ version is not necessary.

REST-OPEN-URI:  This extends open-uri by using the net/http and uri standard libraries to cover all of the REST verbs (GET, POST, PUT and DELETE).  This is a very light install and is only lightly documented.

Here is the basic plan:

Assume we have a RUBY object (call it “rec”) which includes, among other things, the fields we want to POST into the deadbase as a document like the one developed above.  We first need to convert the fields into a JSON string and then POST the JSON string into the deadbase.  The JSON GEM is used to achieve the first goal and REST-Open-URI is used to accomplish the second.

JSON Strings:

The JSON GEM will only serialize Ruby base types (strings, numbers, bools and HASH objects).  The JSON GEM is quite limited in that it will not serialize a Ruby object derived from the base RUBY object into a JSON string, even if that object consists only of base types and Hash objects.  Although you may extend JSON, we did not choose to do so. Rather, we will create a simple Hash object and populate it manually via Ruby code with the fields we want to use for a document. Simply, this could look like:

def makeJSON(rec)
  thing=Hash.new()  #we know that JSON can serialize this type of object
  thing["_id"]=rec.date
  thing["IAKey"]=rec.uri
  thing["description"]=rec.description
  thing["venue"]=rec.title
  thing["pubdate"]=rec.pubdate
  thing["cm"]=rec.cm
  thing["sb"]=rec.sb
  thing["mx"]=rec.mx
  return JSON.generate(thing)  #this returns a JSON String
end
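To sanity check the serialization we can round trip a sample record. This is a sketch: the Rec Struct stands in for the real GDRecording class, and the field values echo the document shown earlier:

```ruby
require 'json'

# A stand-in for GDRecording with the fields makeJSON expects.
Rec = Struct.new(:date, :uri, :description, :title, :pubdate, :cm, :sb, :mx)

def make_json(rec)
  thing = Hash.new   # JSON can serialize a plain Hash
  thing['_id']         = rec.date
  thing['IAKey']       = rec.uri
  thing['description'] = rec.description
  thing['venue']       = rec.title
  thing['pubdate']     = rec.pubdate
  thing['cm'] = rec.cm
  thing['sb'] = rec.sb
  thing['mx'] = rec.mx
  JSON.generate(thing)
end

json = make_json(Rec.new('1972-07-22', 'gd1972-07-22.sbd.miller.94112.sbeok.flac16',
                         'Set 1 Bertha ...', 'Paramount Northwest Theatre',
                         '2008-08-15', true, true, false))
```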

REST-OPEN-URI:

Our POST routine will use the output from makeJSON and POST the JSON string to the deadbase.  In simple form this routine looks like:

def PostRecording(jsonString)
  uri="http://127.0.0.1:5984/deadbase/"   #this is our database
  begin
    responseBody=open(uri, :method=> :post, :body => jsonString, "Content-Type" => "application/json").read
    puts 'POST Response Success: ' + responseBody
  rescue OpenURI::HTTPError => the_error
    puts 'Post Response Error: ' + the_error.io.status[0]
  end
end

The key line is, of course:

responseBody=open(uri, :method=> :post, :body => jsonString, "Content-Type" => "application/json").read

If we ran this line as:

responseBody=open(uri, :method=> :post, :body => jsonString).read

we would get an HTTP status code for “Invalid Media Type”.  That’s because the default “Content-Type” for POST commands is “application/x-www-form-urlencoded”, which is the typical format of an HTML “form” submitted via POST from a web browser.  We are far from a browser here and our “Content-Type” needs to be “application/json”.  The way to add headers to the POST is to provide one or more key/value pairs with the desired header information.  Hence:

"Content-Type" => "application/json"

and the correct Ruby line is:

responseBody=open(uri, :method=> :post, :body => jsonString, "Content-Type" => "application/json").read

We need to wrap the POST command in an exception block where the line:

rescue OpenURI::HTTPError => the_error

is only executed IF the HTTP response status is > 399.  You can then do more fine grained responses to the error condition.  Specifically, if the_error.io.status[0]=="409" you have attempted to POST the same document twice (at least two documents with the same ID).

That looks like a wrap for now.


Posted 2011/07/22 by Cloud2013 in Aptana, couchdb, REST, Ruby


REST, Ruby On Rails, CouchDB and Me – Part 3 – CouchDB Up and Running on Windows   Leave a comment

Part 0 – REST, Ruby On Rails, CouchDB and Me

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB

Part 6 – Getting The Data Into And Out Of CouchDB

Part 7 – JQUERY,JPlayer and HTML5
THIS JUST IN:

CouchDB 1.1.0 For Windows Released.

From the CouchDB listserver:

On 9 June 2011 21:25, Rob wrote:
>  Hi guys
> I know its not officially supported but I am just wondering if anyone out
> there is working on a Windows release of the latest CouchDB. Otherwise does
> anyone know if it is possible to upgrade an existing 1.0.2 setup to 1.1.0?
>
> Thanks a lot
> Rob
>
Hi Rob,
https://github.com/dch/couchdb/downloads is what you want; I'd use the
R14B03 based release
https://github.com/downloads/dch/couchdb/setup-couchdb-1.1.0+COUCHDB-1152_otp_R14B03.exe

I tweet each new build from @davecottlehuber #couchdb #windows incl
interim ones. More than anything else the windows build needs a wider
group of testers, particularly around load/performance, and feedback
on features/changes that people want.

I am working on getting trunk to build on windows with lots of help
from others on @dev, but currently that's broken.

Any questions feel free to email to the list, or ask me (dch) on
irc://freenode.net/#couchdb.

A+

————————————————————————

THIS INFORMATION SUPERSEDES THE WINDOWS BINARY INSTALLER LISTED BELOW.

BTW: CouchDB mailing lists are maintained here.

—————————————————————-

Installation On Windows

The Apache organization is controlling the development and distribution of CouchDB.  I installed CouchDB on an XP system using a binary Windows installer. The Bible of CouchDB, “CouchDB The Definitive Guide”, is excellent and is available for free online.  The Windows binary installer for Couchdb is referenced from the Apache Couchdb Wiki Page here. The Naills Blog discusses basic Windows installation.  If you change the default CouchDB port, you are responsible for remembering it.   CouchDB will be running as a Windows Service. Set the Startup type to Automatic and the Log on to “Local System”.  Note what version you have just installed: CouchDB is undergoing rapid development and there was a new sub version drop on June 6th.  There have been assorted reports of Couchdb ‘hanging’ or ‘dying’ during certain operations on Windows systems.  I have not been able to replicate these, but you should (as with all Windows services) review and set as appropriate the automatic recovery options on the “Recovery” tab of the service.  You do not need to install from source and compile.

[screenshots: CouchDB General, Log On, and Recovery service property pages]

Learning By Fooling Around

Ok.  All installed? Let’s check the system out.  BE SURE YOU HAVE REVIEWED THE FIRST FOUR CHAPTERS of CouchDB The Definitive Guide.  Load up your favorite browser (we like Chrome ourselves) and enter the url:

http://127.0.0.1:5984/

you should get back the (JSON) Text:

{"couchdb":"Welcome","version":"1.0.2"}

Now try:

http://127.0.0.1:5984/_utils/

(this is the Futon frontend to couchDB).  You should get back something like:

[screenshot: Futon overview page]

Note you can add a new database from this screen and then instantly add an (unstructured) document to the database.

clip_image001[6]

Where Futon has, helpfully, given you a GUID ID for the document.  ID is the ONLY required field.  We will be discussing keys and GUIDs in CouchDB in a later posting.  Now switch to the Fields view and add a new field and value.

clip_image001[8]

Save The Document and drop back to the database level:

clip_image001[10]

Note we have picked up a rev (revision) field. All keys in couchdb are really compound keys: the primary key (aka ID) and the revision number (aka rev). The ID refers to the collection of ALL revisions of that document; ID plus REV refers to a particular document.

If you got this far give yourself a big hug.  Now you should turn back to chapter four and work your way through the examples.   You can use cURL to play along with Chapter 4: The Core API in the Definitive Guide.  You must do these exercises and understand how HTTP headers are used to modify the effects of the basic RESTful HTTP command set for CouchDB.   If it makes you feel even more manly you can use Fiddler in place of cURL.  A Windows version of cURL is available here [select download wizard, curl executable, Windows/Win32], and Fiddler is available here.  We assume you are able to follow Chapter 4 on your own.  Note cURL makes use of an extensive set of command line switches.  These are used with little comment in Chapter 4, so you may want to have this MANPage available to decode the switches as you move through the chapter.


Posted 2011/06/07 by Cloud2013 in couchdb, REST, Web


REST, Ruby On Rails, CouchDB and Me – Part 0   4 comments

Part 0 – This Post

Part 1 – Ruby, The Command Line Version

Part 2 – Aptana IDE For Ruby

Part 3 – CouchDB Up and Running on Windows

Part 4 – CouchDB, Curl and RUBY

Part 5 – Getting The Data Ready for CouchDB

Part 7 – JQUERY,JPlayer and HTML5

This is the first in a small series of blog posts concerning what is often known as an Open Source Stack for Web Development.  My interests here are in WEB 2.0 development.  We will be working towards using JQUERY and CouchDB to develop REST(ful) Web sites.  I will start with a focus on Ruby and Ruby on Rails (aka RadRails).   However, I will be using free and open source products not on UNIX but on Windows based systems.  Windows runs on fairly cheap hardware (compared to Apple’s UNIX based platforms) and Windows is much more accessible to new developers than Linux systems.  I will be working here with Windows XP and Windows Vista, but you can play along on Linux and MAC x86.  To get started, let’s get together some software and supporting documentation and begin.

Ruby

Version 1.8.7 is the current stable release, but since we are interested in using CouchDB I have installed the required ‘edge’ version 1.9.2.  Either of these versions will support Ruby on Rails development. Ruby installations are available from several sites. Until you know exactly what you are doing, stick with distributions from RubyInstaller.org, since these are the baseline Ruby builds which everyone tests against first.  As you get deeper into it (or if things get a little more sticky) you can try an integrated open stack like the one offered by BitNami.  At some point you may need to install the DevKit.  This is not as heavy as it sounds (but you will need to work with your system paths to get everything in place).  You can get the DevKit here and installation instructions are here.   Note:  the devkit download is now a Windows MSI file but the instructions are still based on a ZIP file extraction; it all works, trust me.  Don’t install either the BitNami Stack or the DevKit until you know you need it.   If you are completely new to Ruby, a nice introduction can be found in Why’s Poignant Guide To Ruby.

Ruby On Rails

There are a lot of different options for developing Ruby on Rails applications.  If I were not an IDE freak I could use a simple text editor (I often use the non-open Primal Script), but being new to Ruby on Rails I wanted the support of an IDE.  For this I selected the Aptana RadRails Development Environment.  This free development environment is based on the open Eclipse editor.  I downloaded the base Aptana IDE (Studio 2) and then added RadRails as a plug in.  These are available in Linux and Mac x86 installers in addition to Windows.  You could install only RadRails, but then you would have a crippled version of the Aptana product.  We will be noting Ruby on Rails training materials as we move along.

CouchDB

Although we will be using mySQL or SQLite for some of our development, our real content manager will be the NOSQL couchDB.  This is our real goal in the project: testing the viability of a REST(ful), HTTP addressable NOSQL database.  This project is from the Apache organization and is available for Linux, MAC x86 and Windows.  It runs as a service on the Windows OS.  The Windows installer is available here.  There is an excellent open source online couchDB book available here.  We will be using CouchRest for our Ruby on Rails work with couchdb.  CouchRest is available as a Ruby GEM and can be installed on Linux, MAC x86 and Windows versions of Ruby.

JQUERY

Web 2.0 is not possible without modern, sophisticated JavaScript libraries.  One of the best of these is JQUERY, and not surprisingly couchDB ships with a powerful jquery library (couchdb.js) to facilitate browser side manipulation of couchdb data.  For browser work you should be pulling your JQuery from a CDN host.  For use within a Ruby on Rails project you will need to add JQUERY support to your Aptana IDE.  JQuery works with all modern browsers (and even some which are not).

JQuery 2009 & AJAX Experience 2009   Leave a comment


John Resig
Photo By dlr2008 On FlickR.com

JQuery Conference 2009

This conference took place at the Microsoft Research Center in Cambridge Massachusetts on September 12 & 13, 2009. This was a small but intense gathering of JQuery and JQuery UI core developers and assorted developers working on extensions to JQuery (aka plug-ins). There were about 300 participants most of whom were intensely involved with JQuery. Your humble scribe was one of the least informed participants. JQuery has never looked stronger, the development team is cohesive and motivated, and community support is extensive. John Resig’s numbers indicate that something in excess of 35% of all public sites running JavaScript are using JQuery. The library wars are over. Let the framework wars begin. The conference agenda and speakers list and slides can be found by starting: here.

JQuery core is stable at release 1.3.2, but 1.3.3 is coming “real soon now.” The 1.3 versions have focused on optimization, simplification and (proactive) HTML5 support. John’s talk about recent changes to JQuery internals can be found here. Next year will see JQuery core and its open source licenses transferred to the Software Freedom Law Center. Projects for the next year (think version 1.4) include moving JQuery/UI infrastructure into JQuery proper. Significant work has been done within JQuery core to streamline and simplify plug-in development via the Widget Factory ($.widget(…)) (thanks to Scott González for this excellent presentation). For the hard core, Paul Irish gave an excellent presentation on JQuery Anti-Patterns. This was bookended by Yehuda Katz’s excellent Best Practices presentation. Aaron Quint was among the Rubyists who are advancing the state of the JQuery art. His Sammy.js project attempts to use JQuery to create a browser side MVC/REST framework. John Nunemaker is also working in this basic area and his presentation can be found here.


The Jupiter Room


Photo By dlr2008 On FlickR.com

The walk away component demonstrated at the conference was the work done by Filament Group employees, Todd Parker and Scott Jehl who developed and maintain the new ThemeRoller CSS generator for use with JQuery/UI components. Outstanding Work!

This year’s conference was sold out at 300 participants and was a mind blowing experience. Two days of the sort of “deep dive” Microsoft presenters can only dream of. All this, plus great food and a T-shirt, for one hundred bucks American. We won’t see this sort of thing until the next big thing comes along. Look for the following event during 2010: one virtual (online) developer conference (similar to JQuery 2009 but without the food) and three ‘bigger’ user conferences (London, Los Angles and Boston). Splendid!

The AJAX Experience 2009

This conference took place at the Boston Airport Hilton on September 14 – 16, 2009. What an ugly hotel. Isolated, bad restaurant, overpriced breakfast, cold design, the hotel truly sucks. The conference itself was much better. If at JQuery 2009 we saw significant evidence of what the web will be in the next two years, The AJAX Experience showed us some of what will not happen:

  • The HTML5 specification will be released too late to matter.
  • ES5 will not change the world, or JavaScript.
  • Browser vendors will implement HTML5 using different APIs and approaches.
  • Conflicts between security and JavaScript will NOT be resolved anytime soon.
  • JSON is not going away, but XML is.
  • JavaScript is not going to be changed in any fundamental way.
  • Page refreshes are not going away either.

The AJAX Experience is a developer driven conference and uniquely includes presenters from both the standards community (W3C, ES5) and major players (Google, Facebook, Yahoo and Microsoft).


Douglas Crockford


Photo by dlr2008 on Flickr.com

The conference is well worth attending to take the pulse of the AJAX/REST world from a developer perspective, though it is a conference which does not have much of an edge to it. To be fair, this is a conference with a large number of big faces present, and the structure of the conference is oriented towards making those faces easily available for one-on-ones. And for that we thank the folks at The AJAX Experience.

On the plus side, AJAX and REST continue to hold an enduring fascination for web developers looking for the edge. One of the best-attended sessions was Facebook’s overview of how AJAX, careful architectural design, client-side caching and REST principles can be brought together into a workable whole. The presentation slides can be found here. This is an important presentation and should be viewed by anyone who wants to ‘go all the way’ with AJAX/REST. If nothing else, it shows how little native support there is in the current set of browsers for this effort, and how clumsy the basic design of JavaScript is when this advanced work is attempted. Please understand, dear reader, that the best AJAX/REST web site would have only ONE page load; all other data exchange and UI work is done on this canvas. The Facebook folks found that, after years of effort, they are still forced into unnecessary page loads as the only way to clean up events, timers, namespace debris and memory leaks.
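The client-side caching piece of that architecture can be sketched in a few lines: every data request on the single page funnels through one cache, so repeat navigations never fetch the same resource twice. This is my own minimal illustration, not Facebook's code; the transport function is injected (in a browser it would wrap XMLHttpRequest), and here a stub stands in so the sketch is self-contained.

```javascript
// Minimal client-side cache for a one-page-load AJAX/REST app.
function makeCache(transport) {
  var store = {};
  return function get(url, callback) {
    if (Object.prototype.hasOwnProperty.call(store, url)) {
      callback(store[url]);      // served from cache, no network trip
      return;
    }
    transport(url, function (response) {
      store[url] = response;     // remember it for the next navigation
      callback(response);
    });
  };
}

// Stub transport that counts how often the "network" is touched.
var hits = 0;
var get = makeCache(function (url, done) {
  hits += 1;
  done({ url: url, body: "payload for " + url });
});

get("/profile/42", function (r) { console.log(r.body); });
get("/profile/42", function (r) { console.log(r.body); }); // cached
console.log(hits); // 1
```

Real systems add invalidation and memory limits on top of this, which is exactly where the cleanup problems (stale entries, leaked closures) that force page refreshes start to creep in.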

Security

For me the most interesting, and the most depressing presentation was the much anticipated panel discussion: Secure Mashups: Getting to Safe Web Plug-ins

The panelists were Marcel Laverdet (Facebook, FBJS), Mike Samuel (Google, Caja), Scott Isaacs (Microsoft, the open-source Web Sandbox) and Douglas Crockford (ADsafe), and the discussion was spirited. The goal here is to develop a method by which a web site (think Facebook, for example) can accept browser widgets (think ads or user gadgets) from multiple sources and assure “secure cooperation” between all the widgets and with the vendor’s (the page owner’s) HTML and JavaScript code. Although there are nuances between the different approaches and differences in scope, each of these attempts at mash-up ‘security’ follows the same pattern:

  • Clean incoming HTML of “evil parts”
  • Clean incoming JavaScript of “evil parts”
  • Encapsulate the remaining JavaScript code in wrapper functions which ultimately call methods in the vendor’s security library
The security library’s purpose is to prevent, at run time, manipulation of HTML belonging to other parts of the system and to prevent JavaScript exploits. Each solution provides its own definition of what the “evil parts” are, what constitutes ‘secure’ behavior and what the limits of the security library are. None of the solutions currently supports JavaScript libraries, although Google and Microsoft made noises suggesting they will attempt to bring third-party libraries (read jQuery here) into the tent. There was a lot of discussion around this point; my notebook shows that John Resig’s name was mentioned by panelists eight times during this discussion. The overall goals of the projected solutions vary from forcing no code changes (Web Sandbox) to forcing complete rewrites (ADsafe alone requires safe widgets to be written exclusively with the ADsafe library). All four projects are in early beta.
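The wrap-and-mediate step of that shared pattern can be sketched as follows. This is my own toy illustration, not any vendor's actual API: the host never hands the widget the real document, only a narrow facade exposing whitelisted operations on the widget's own subtree, and the widget source runs inside a wrapper function. The static cleaning/rewriting of the source that real systems like Caja perform is deliberately omitted here, so this sketch alone is not secure.

```javascript
// Toy sketch: run third-party widget code against a restricted facade
// ("security library") instead of the real DOM. A plain object stands
// in for the widget's host node.
function runWidget(widgetSource, hostNode) {
  // The facade exposes only whitelisted operations on the widget's
  // own subtree; nothing else is reachable through it.
  var facade = {
    setText: function (text) { hostNode.text = String(text); },
    getText: function () { return hostNode.text; }
  };
  // Wrapper function: the widget sees the facade under the name "dom".
  // Real systems would first rewrite/verify widgetSource so it cannot
  // reach globals; that step is omitted in this sketch.
  var wrapped = new Function("dom", widgetSource);
  wrapped(facade);
  return hostNode;
}

// A well-behaved widget reaches its node only through the facade.
var node = { text: "" };
runWidget("dom.setText('hello from the sandbox');", node);
console.log(node.text); // "hello from the sandbox"
```

The hard part, and the reason four separate projects exist, is everything this sketch skips: deciding which source constructs are "evil", rewriting them safely, and keeping the facade airtight at run time.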
Significantly, there were no presentations which addressed Secure Browser OS projects like Microsoft Gazelle or Google’s Chrome OS.

PS: On the UI side of the house, Bill Scott gave a delightful presentation on UI methods. For UI designers his presentations (and books) are not to be missed.

JQuery And AJAX Experience 2009 – The Movie:


