Reading data from Xero API with Classic ASP

It is possible to read data from the Xero API using Classic ASP, but there’s very little online to help you, other than a couple of cryptic hints, including this post, which was the start of my adventure. The only problem is that it refers to a Git repo from which the files have vanished.

Before you proceed, please note that you must have the Chilkat ActiveX COM component installed – which I’d heartily recommend anyway. It’s the Swiss Army knife of Classic ASP development; I’ve used Chilkat for everything from reading email from Gmail accounts, to reading/writing CSV files, to encrypting/decrypting data.

Let’s walk through the steps.

Create a public/private key pair

You’re going to need a public/private key pair, and Xero has good instructions on creating these. I’m on a Mac so I can use OpenSSL. You can skip the creation of the pfx file, as we only need the cer and pem files:

openssl genrsa -out privatekey.pem 1024
openssl req -new -x509 -key privatekey.pem -out publickey.cer -days 1825

You’ll need to place the PEM file somewhere that can be read by your ASP code, so it’ll need to be in a folder on your server. The cer file is the one you’ll need when registering your application with Xero.

Register a Xero Application

I’m working with a Private Application in Xero. The first step is to register your application – note I already had a Xero account, but Xero does give you the ability to create a test account.

As part of registering your application you will upload your cer public key file. Once the application is registered you’ll see a Consumer Key value – grab that, you will need it later. Ignore the Consumer Secret; we’re not using it.

Call the Xero API

Here’s the code I use to obtain a list of all the invoices in my Xero account. I’ve tried to add full comments to explain what is happening.

My starting point for my adventure was a post in the Xero Community, ‘Has anyone managed to get the API working using Classic ASP?’. And my thanks to Ben Snape who, even though his ASP solution had vanished from GitHub, still pointed me in the right direction – I’ve borrowed his encoding and timestamp functions.

Quick explanation of a couple of things:

  • The nonce value is a one-time number – basically you just need to send a unique value with each request. I’ve gone with a simple random number.
  • The timestamp is a standard epoch number – just remember to shift your local time to UTC before generating it. I’m 11 hours ahead of UTC at the moment on my local machine, so you’ll need to adjust to suit your server. (My production servers are on UTC; I’ll tweak my code eventually to cope with the fact that it’s UTC on production but not on my local machine – see the sketch after the code below.)
  • It is correct that you use the consumer key for both the oauth_consumer_key and oauth_token parameters.
<%
Dim consumerKey,url,respText,params,crypt,baseSignature,pkey,success,rsa,signature,pkeyXml,hexSig
dim objXMLHTTP

' URL to get a list of my invoices
url="https://api.xero.com/api.xro/2.0/Invoices"
' your consumer key provided by Xero when you register your application
consumerKey = "[Your Xero consumer key]"

' we need a random number (or a unique number) for every call - that's the nonce value
Randomize
' build the parameters

params=""
params = params & "oauth_consumer_key="&consumerKey
params = params & "&oauth_nonce="&int((9999999-99999+1)*rnd+99999)
params = params & "&oauth_signature_method=RSA-SHA1"
params = params & "&oauth_timestamp="&makeTimeStamp()
params = params & "&oauth_token="&consumerKey

' this is the string we will make a signature from
baseSignature = "GET&"&encode(url)&"&"&encode(params)
response.write baseSignature&"<hr>"

' Load the private key from the RSA PEM file
' Using example from https://www.example-code.com/asp/rsa_signWithPEM.asp 
set pkey = Server.CreateObject("Chilkat_9_5_0.PrivateKey")
success = pkey.LoadPemFile(server.MapPath("keys/privatekey.pem"))
If (success <> 1) Then
Response.Write "<pre>" & Server.HTMLEncode(pkey.LastErrorText) & "</pre>"
Response.End
End If
' Get the private key in XML format
pkeyXml = pkey.GetXml()

set rsa = Server.CreateObject("Chilkat_9_5_0.Rsa")
success = rsa.UnlockComponent("[Your unlock code]") 'important, need to unlock
' Import the private key into the RSA component
success = rsa.ImportPrivateKey(pkeyXml)
If (success <> 1) Then
Response.Write "<pre>" & Server.HTMLEncode( rsa.LastErrorText) & "</pre>"
Response.End
End If
' we want a base64 output NOT hex
rsa.EncodingMode = "base64"
rsa.LittleEndian = 0
signature = rsa.SignStringENC(baseSignature,"sha-1")
' SignStringENC returns an empty string on failure
If (signature = "") Then
Response.Write "<pre>" & Server.HTMLEncode(rsa.LastErrorText) & "</pre>"
Response.End
End If
response.write "signature="&signature&"<Hr>"

' add the signature to the parameters
params = params & "&oauth_signature="&encode(signature)
response.write "params="&params&"<Hr>"

' add params to the url
url=url&"?"&params
response.write url&"<hr>"

' fire this at Xero
set objXMLHTTP = Server.CreateObject("Msxml2.ServerXMLHTTP.6.0")
objXMLHTTP.Open "GET", url, False
objXMLHTTP.SetRequestHeader "Content-Type","application/json"
objXMLHTTP.Send() 
respText=objXMLHTTP.ResponseText
response.write respText&"<hr>"
set objXMLHTTP=nothing

Function makeTimeStamp()
Dim dteFrom : dteFrom = "01/01/1970 00:00:00 AM"
Dim dteNow : dteNow = Now()
dteNow = DateAdd("n", -(11*60), dteNow) ' shift back 11 hours to UTC
makeTimeStamp = DateDiff("s", dteFrom, dteNow)
End Function

Public Function encode(s)
if s <> "" then
Dim strTmpVal : strTmpVal = s
Dim strRetVal : strRetVal = ""
Dim intAsc : intAsc = 0
Dim strHex : strHex = ""

Dim i, strChr : For i = 1 To Len(strTmpVal)
strChr = Mid(strTmpVal, i, 1)

If InStr(1, "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_.~", strChr) = 0 Then
intAsc = Asc(strChr)

If intAsc < 32 Or intAsc > 126 Then
' NOTE: encodeURIComponent is not a built-in VBScript function - you'll
' need to supply a helper that UTF-8 percent-encodes the character,
' or ensure your data stays within printable ASCII
strHex = encodeURIComponent(strChr)
Else
strHex = "%" & Hex(intAsc)
End If

strRetVal = strRetVal & strHex
Else
strRetVal = strRetVal & strChr
End If
Next

encode = strRetVal
else
encode = ""
end if
End Function

%>
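
As an aside, one way to remove the hard-coded UTC shift in makeTimeStamp would be to let Chilkat calculate the epoch time for you. This is just an untested sketch, assuming Chilkat’s CkDateTime class behaves as documented:

<%
' Untested sketch: ask Chilkat's CkDateTime for the Unix epoch time
' directly, instead of shifting local time by a fixed offset
Dim dt, epochUtc
Set dt = Server.CreateObject("Chilkat_9_5_0.CkDateTime")
dt.SetFromCurrentSystemTime
' pass 0 to request UTC rather than local time
epochUtc = dt.GetAsUnixTime(0)
%>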

WordPress on Azure is really slow

I’ve recently set up a couple of WordPress blogs on Azure using the standard WordPress package from the Azure marketplace. Both times the sites have been really slow. The problem appears to be configuring the database in a different region from the web app. This is really easy to do, because there’s no MySQL available on Australian Azure, so you are told to pick the closest region, which is SE Asia. Yet by default I always add web apps onto Australian nodes. I thus wind up with the web app in Australia and the DB in Asia.

I experimented by spinning up a new WordPress site with both the web app and the database on the SE Asia node – a significant increase in speed.

If I have both ethernet and wifi enabled on my Apple Mac, which connection is used?

I’ve long pondered this – if I have my MacBook connected both to ethernet and WiFi, which internet connection is it actually using?

In general, sitting at my desk, I want my Mac on ethernet, because speed-wise I see anything up to 100Mb down our cable connection. But I like having the WiFi on, because then my iPhone and iPad sync in the background and I don’t have to remember to turn the WiFi on and off as I move to and fro. However, the connection speed over WiFi is slower, and I tend to see disconnects from my company’s VPNs, although I could probably solve that with a bit of fiddling with the cable router configuration.

Finally I’ve tracked down the answer, and it’s simple.

Open System Preferences > Network. You’ll see a list of your connections on the left hand side. The active ones are green.

[Screenshot: System Preferences > Network, with the active connections showing green]

The answer is, your Mac will use the connections in order – so if Ethernet is active and top of the list, that’s the connection used.

By default the order looks good to me – but you can change it. Click the little ‘gear’ icon at the bottom of the list; there’s an option to change the order.

[Screenshot: the gear menu, showing the option to change the service order]
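
If you prefer the command line, the same information is available via networksetup – a quick sketch (service names vary by machine, and I believe the reorder command expects the complete list of services):

networksetup -listnetworkserviceorder
networksetup -ordernetworkservices "Ethernet" "Wi-Fi" "Bluetooth PAN"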

Moving my blog from TypePad to WordPress

I’ve had a fun time the last couple of days moving this blog from its home of the last 5 or 6 years at TypePad, to WordPress.com. I made the decision for a number of reasons:

  • I’ve now used WordPress extensively over the past couple of years, and grown to love the software for its ease of use, configurability and flexibility. As an open source product, there is a vast community of contributors, an army of plugins and a battalion of really helpful users hanging out in forums. In contrast TypePad is a proprietary, closed ecosystem.
  • TypePad costs money. I’ve been paying them a subscription every month. WordPress.com is free, at least until I finalise the move by shifting my http://www.eedle.com domain, which I’ll sort out in the next week or two. It’s not that I begrudge paying money – completely the opposite, given my line of work. But with TypePad I just didn’t feel I was seeing a significant innovation path. The software, at least in terms of the features I use, really didn’t feel like it had moved forward in leaps and bounds. By paying money to WordPress.com I’m supporting the development of a community-oriented product.
  • TypePad’s design templates are restricted and boring. I never really found a template I liked. I lack the design skills to create one myself, and I’m not up for paying money to have one custom-made. WordPress.com gives me a great set of templates, including the one I’m using now, which I think is a much more contemporary presentation than TypePad’s.
  • Finally, I actually have several other blogs on my WordPress.com account, for various businesses and ventures in which I am involved. My personal blog was the only thing I hosted at TypePad. For convenience it makes sense to keep everything together.

The actual move was pretty painless for those interested in the mechanics.

  • I exported my blog using the Export option in TypePad
  • I imported the blog (posts, comments etc) into WordPress.com using the specific TypePad import option. The only hitch with the import I’ve posted about in the forums – just an incorrect file extension (it should have been .xml, not .txt), although it would have been nice if the WordPress import page had validated this.
  • I had some problems with images. Some of the images were automatically scraped by WordPress – i.e. copied over from TypePad. Some were not. I’m not completely sure why; I think it’s something to do with the way TypePad hosts and names image files. They don’t look like image file URLs, so perhaps WordPress didn’t recognise them as such. I wound up downloading a couple of dozen images by hand (Save As…) from TypePad and manually editing the posts in WordPress to update the images.
  • The text of the posts and comments came across without any drama, although my ‘About’ page from TypePad turned up as a post in WordPress instead of a page. Pretty minor – just a quick copy/paste.
  • All the Categories, and allocation of Posts to Categories arrived fine.

My one concern was for post URLs. On TypePad they all had an /eedle directory, because I didn’t actually have a domain in place – the blog was just at http://eedle.typepad.com. I really should know better, but I never got around to setting a domain up. Which meant I was faced with a bunch of URLs in Google pointing to eedle.typepad.com – not much help if all the posts have moved over to WordPress.com.

So the solution was to bite the bullet. I’ve pointed my www.eedle.com domain to TypePad, so that blog is now responding to that domain. I’ve never really used the domain for web pages – it’s mostly so I can have a cool email address like david@eedle.com.

I’ve submitted the URL to Google. I’ll use Google Webmaster Tools to tell Google to drop eedle.typepad.com URLs from the index.

We’ll let Google do its thing for a few days, by which time hopefully all my pages in the index will be under http://www.eedle.com. Then I’ll move the domain over to the WordPress blog.

I know, I know, really what I should do is organise a 301 redirect on the TypePad URLs. (For the uninitiated this is an instruction to the search engines that the URL has changed).
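
For the record, on a platform where you control the code, a 301 is trivial – in Classic ASP, for example, it looks something like this (the target URL is obviously made up):

<%
' permanently redirect the old URL to its new home
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "http://www.eedle.com/new-post-url"
Response.End
%>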

Two reasons why I didn’t bother:

  • I actually cannot figure out a way to do this with TypePad that is not a completely dodgy hack. Another fail for the software;
  • It doesn’t matter much. I don’t have a lot of traffic, I don’t have millions of inbound links. If someone is searching in the future, they’ll still find my pages.

SQL Server and MySQL: “Commands out of sync; you can’t run this command now”

Scenario is:

1. You have a SQL Server db

2. You have a MySQL db

3. The SQL Server db has a Linked Server connection to MySQL, and you are using the MySQL Connector/ODBC

4. You try an INSERT or UPDATE statement in SQL Server against the MySQL db using OPENQUERY

5. You receive the error "Commands out of sync; you can't run this command now"
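
For context, the failing statement looks something like this (the linked server, table and column names here are hypothetical):

-- Hypothetical example: update a MySQL table through a linked
-- server called MYSQL_LINK using OPENQUERY
UPDATE OPENQUERY(MYSQL_LINK, 'SELECT id, customerName FROM customers WHERE id = 42')
SET customerName = 'New name'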

To solve it, uncheck the Forward Only Cursor option in the ODBC options.

[Screenshot: MySQL Connector/ODBC options dialog with the forward-only cursor option unchecked]

How much do you love IE CSS problems!

Oh how I love working with IE browsers. Sure, IE6 is pretty much dead and buried – wasn't that a fun period in browser evolution – but now we grapple with IE7, IE8 and IE9. And the good folks at Microsoft have at least now been kind enough to add the F12 Developer Tools window, where we can emulate back versions.

All this was thrown into stark highlight last week when I came across an interesting quirk: someone running IE7, viewing an SSL-secured page containing a set of input fields, styled using CSS that included a file called 'ie-css3.htc' from http://fetchak.com/ie-css3/.

The problem was he couldn't even move cursor focus onto those fields, let alone type anything into the boxes. Everything was completely fine in every other browser, version and platform.

Turns out the author was prescient:

You will probably run into issues with z-index, especially if embedding one IE-CSS3 enabled element inside of another. There are two simple workarounds:

    Set the z-index of the IE-CSS3 element to a number larger than its surrounding elements.

We did just that – changed the z-index of the input field class to 9999 instead of 0 – and all was good.

It's the little quirks that make me laugh.

Calculate distance between web site visitor IP and list of locations

I’ve been having fun this morning writing code to calculate the distance between a web site visitor’s location, and a number of fixed locations, using the visitor’s IP number.

I have a list of events in different geographic locations. The idea is that when someone arrives at the web site we show them information about the event that is closest to them.

I’ve used a couple of tools to make this work.

First up I found the GeoLiteCity database from MaxMind. This is a free database that can give you the longitude and latitude of an IP number. They say they cover the whole of the IPv4 address space (and a fair chunk of IPv6).

MaxMind does have a web service available (paid) but you can download the GeoLiteCity database and query it directly – they have a good set of example APIs. I’m using the COM object on a Windows server.

The database is a binary file that you reference via a file path. They do supply a CSV to import into SQL, but they make the point that the binary is seriously faster, so unless you need additional access to the data it’s not worth the loss of speed. If you use the COM object, also take note of their instruction to rename the database file from GeoLiteCity.dat to GeoIPCity.dat. I have the file sitting just above the www root folder so it’s not visible to the world.

Finding the site visitor’s latitude and longitude is a doddle:

Dim geoip,visitorLat,visitorLong
' create the MaxMind COM object and point it at the folder holding GeoIPCity.dat
set geoip = Server.CreateObject("GeoIPCOMEx.GeoIPEx")
geoip.set_db_path("D:\inetpub\GeoIPDb\")
' look up the visitor's IP and read off the coordinates
geoip.find_by_addr(request.ServerVariables("REMOTE_ADDR"))
visitorLat=geoip.latitude
visitorLong=geoip.longitude


I then hunted around online and found the longitude and latitude of the cities where my events are occurring – we’re not looking for to-the-metre resolution here, just a quick guesstimate of which event and city is closest. When I get some more time I’ll sit down and figure out the coordinates of the actual venue for each event.

So now I need a way to calculate the distance between the visitor’s location and each of the events. Along the way I found this really useful page from Moveable Type, ‘Calculate distance, bearing and more between Latitude/Longitude points’. It’s a great introduction to all the maths needed.

But another Google search found a forum post with a SQL user-defined function that calculates the distance between two coordinates, and it works well.

CREATE FUNCTION dbo.UDF_Distance
(
@FromLat float, @FromLong float, @ToLat float, @ToLong float
)
RETURNS FLOAT
AS
BEGIN
-------------------------------------------------------------------------------------------------
-- Function Name	: UDF_Distance
-- Description		: Returns distance in KM between two points on earth
-- Created by		: David Eedle
-- Created on		: 24 Jan 2011
-- Source:			: http://www.dbnewsgroups.net/sqlprogramming/t19131-sql-distance-between-latitude-longitude.aspx
-------------------------------------------------------------------------------------------------
declare @X float
SET @X =
Sin(Radians(@FromLat))
* Sin(Radians(@ToLat))
+ Cos(Radians(@FromLat))
* Cos(Radians(@ToLat))
* Cos(Radians(@ToLong)-Radians(@FromLong))
SET @X = Acos(@X)
RETURN 1.852 * 60.0 * Degrees(@X)
END
GO
/*
select dbo.UDF_Distance (41.63,-87.73,41.7,-88.07)
go
*/

I used the Moveable Type page to double-check the results, and the function yields numbers within a few kilometres of MT’s answers over distances of 12,000km. (The difference is most likely because this formula treats the Earth as a perfect sphere – 60 nautical miles per degree of arc, 1.852km per nautical mile – whereas more sophisticated methods model it as an ellipsoid.) If I was a maths whiz I’d probably now sit down for a day and figure out the inconsistency, but I’m not trying to send a rocket to the moon, so we’ll leave that for the experts.

Finally it was just a question of writing a quick query that pulls out the coordinates of the events, along with a calculation of the distance between the visitor and each event. It’s built as a string in ASP, hence the "&visitorLat&" concatenation, and runs along the lines of:

SELECT cityId,cityName,cityState,cityDate,
CAST((select dbo.UDF_Distance ("&visitorLat&","&visitorLong&",V_ListCities.cityLat,V_ListCities.cityLong)) AS int) distanceKM
FROM V_ListCities ORDER BY (select dbo.UDF_Distance ("&visitorLat&","&visitorLong&",V_ListCities.cityLat,V_ListCities.cityLong))
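
To tie it together, here’s roughly how that string is built and executed from the ASP side – a sketch only, assuming an open ADO connection called conn and the V_ListCities view used above:

Dim sqlStr,rsEvents
' build the SQL string, splicing in the visitor's coordinates
' (these come from MaxMind, not from user input, so string-splicing is tolerable here)
sqlStr = "SELECT TOP 1 cityId,cityName,cityState,cityDate," & _
"CAST(dbo.UDF_Distance(" & visitorLat & "," & visitorLong & ",cityLat,cityLong) AS int) AS distanceKM " & _
"FROM V_ListCities " & _
"ORDER BY dbo.UDF_Distance(" & visitorLat & "," & visitorLong & ",cityLat,cityLong)"
set rsEvents = conn.execute(sqlStr)
if not rsEvents.EOF then
response.write "Your nearest event is " & rsEvents("cityName") & ", about " & rsEvents("distanceKM") & " km away."
end if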

ASP: Reading multiple recordsets from a stored procedure

Reading a recordset from a stored procedure is easy – essentially we just treat the procedure call as the SQL statement, for example something like:

sqlStr="USP_MyProc 1,2,3"
set rs=Server.CreateObject("adodb.Recordset")
rs.cursorlocation=aduseclient
rs.cachesize=1
rs.open sqlStr, conn, adOpenForwardOnly,adLockReadOnly

But how about reading more than one recordset from a procedure? Perhaps you have a circumstance where you need a couple of resultsets within the same code block. You could have more than one procedure and open each recordset in turn (or perhaps use the same procedure and control which resultset is returned using the procedure’s parameters).

Instead, why not return all the required recordsets from one procedure, using one call to the database? It can be done: a procedure can return multiple recordsets, and each can be read in turn from your ASP code.

The trick is NextRecordset. You may already have been using this without quite realising it – for example, when inserting a new record that generates a unique identity value, we usually need to grab that value, so we do something like this:

sqlStr="INSERT INTO tblMyTable (myField) VALUES (myValue);"
Set rs = conn.execute (sqlStr & "select SCOPE_IDENTITY()").nextrecordset

If you look carefully you’ll see we’re actually running two queries – first the INSERT, then a second query to grab the identity. You can do the same with a procedure and multiple recordsets. For example, let’s say you have a procedure that returns two recordsets; you just cycle through them with NextRecordset:

sqlStr="USP_MyProc 1,2,3"
set rs=Server.CreateObject("adodb.Recordset")
rs.cursorlocation=aduseclient
rs.cachesize=1
rs.open sqlStr, conn, adOpenForwardOnly,adLockReadOnly
set rs2 = rs.NextRecordSet

Now you have the first results in rs and the second batch of results in rs2.
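
If you want something to test against, a two-recordset procedure is nothing more exotic than a procedure containing two SELECT statements. Here’s a hypothetical sketch (the table and column names are made up):

CREATE PROCEDURE USP_MyProc
@p1 int, @p2 int, @p3 int
AS
BEGIN
-- stops row-count messages coming back as extra (empty) resultsets
SET NOCOUNT ON
-- first recordset: lands in rs
SELECT uniqueId, myField FROM tblMyTable WHERE parentId = @p1
-- second recordset: lands in rs2 via NextRecordset
SELECT uniqueId, otherField FROM tblMyOtherTable WHERE parentId = @p2
END
GO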

ASP Timeout expired error: Microsoft OLE DB Provider for SQL Server error ‘80040e31’

I was working on an ASP page today that included an AJAX call triggering various SQL updates, amongst which was a call to a stored procedure. The page started to error with:

Microsoft OLE DB Provider for SQL Server error ‘80040e31’
Timeout expired

The error line was the call to the stored procedure. After a bit of fiddling around I traced the issue to the ADO CommandTimeout value. By default this is 30 seconds.

There are, I guess, 3 timeout values of interest on an ASP page:

1. Session timeout

How long the user’s browser session will last. On IIS this defaults to 20 minutes.

2. Script timeout

How long a page can take to complete. On IIS this defaults to 90 seconds.

3. Command timeout

How long a single command sent to the database can take to complete. The ADO default is 30 seconds. (ADO also has a separate ConnectionTimeout, 15 seconds by default, which only governs how long establishing the connection itself can take.)
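
To make the distinction concrete, here’s a quick sketch of where each ADO timeout is set (the connection string is up to you):

Dim conn
Set conn = Server.CreateObject("ADODB.Connection")
conn.ConnectionTimeout = 15 ' seconds allowed to establish the connection (set before Open)
conn.Open "[your connection string]"
conn.CommandTimeout = 300 ' seconds allowed for each command run through this connection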

The stored procedure was taking more than 30 seconds, hence the error. I solved the problem by increasing the CommandTimeout value for the particular call to the stored procedure, eg:

Dim cmdC
Set cmdC = Server.CreateObject("ADODB.Command")
with cmdC
.ActiveConnection = conn
.CommandText = "[procedure name]"
.CommandType =adCmdStoredProc
.CommandTimeout = 3000
.Execute
end with

The value is in seconds, so the 3000 above gives the procedure up to 50 minutes – 300 would be 5 minutes. You can read more about CommandTimeout here.

Of course, there’s a whole other discussion about why we have a stored procedure taking so long to run as part of an AJAX call. We’ll probably shift it off to a scheduled job or something eventually, but we needed a quick fix for the interim.

Looping through records in SQL Server Stored Procedure

Someone who’s pretty good at programming on the front end, but not so crash hot on the back end (e.g. SQL Server), asked me the other day about looping through recordsets in a SQL Server stored procedure.

It is of course possible to use a cursor and FETCH, but most SQL programmers frown on cursors in stored procedures. Cursors involve grabbing a bunch of records and locking them, and there can be a substantial performance hit. They should really only be used as a last resort – for example, where you don’t have a good enough unique key value on your rows (although there are usually ways around that one as well).

The easiest way I know to loop through records is to employ a table variable; here’s a simple example:

-- Create a table variable to hold the records to be updated
DECLARE @uniqueId int
DECLARE @TEMP TABLE (uniqueId int)
-- Insert into the table variable a list of the records to be updated
INSERT INTO @TEMP (uniqueId)
SELECT uniqueId FROM myTable
-- Start looping through the records
WHILE EXISTS (SELECT * FROM @TEMP)
BEGIN
-- Grab the first record out
SELECT Top 1 @uniqueId = uniqueId FROM @TEMP
PRINT 'Working on @uniqueId = ' + CAST(@uniqueId as varchar(100))
-- Perform some update on the record
UPDATE myTable SET myField = 'something or other' WHERE uniqueId = @uniqueId
-- Drop the record so we can move onto the next one
DELETE FROM @TEMP WHERE uniqueId = @uniqueId
END