Configure FTPS on IIS7 for SSL file transfers

I’ll presume you already have an SSL certificate installed on your IIS web server and thus have your web site running at https://www.domain.com. This post is about how to enable SSL for your FTP connections to the server.

Set the SSL cert for your FTP server and site

  • Go to IIS Manager
  • Click the Server name at the top of the left tree and open the FTP SSL Settings icon
  • In the SSL Certificate drop-down, select your certificate
  • Check Require SSL connections
  • Expand the list of Sites in the left tree and select your FTP site
  • Open the FTP SSL Settings icon
  • In the SSL Certificate drop-down, select your certificate
  • Check Require SSL connections

Yes I know we’ve done this in two places, but it does appear to be necessary.

Set the Bindings for the FTP Site

  • Go to IIS Manager
  • Expand the list of Sites in the left tree and select your FTP site
  • Over on the right, click Bindings…
  • Click the Add button and set Type = FTP, IP Address = All Unassigned, Port = 21 and Host Name = www.domain.com (as in whatever your domain is)
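If you prefer the command line, I believe the same binding can be added with appcmd – the site name and domain here are placeholders for your own:

%windir%\system32\inetsrv\appcmd.exe set site /site.name:"My FTP Site" /+bindings.[protocol='ftp',bindingInformation='*:21:www.domain.com']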

The following ports work is needed because you have to specify and open the data channel ports.

Configure Firewall Ports

  • Go to IIS Manager
  • Click the Server name at the top of the left tree and open the FTP Firewall Support icon
  • The Data Channel Port Range may already be set; if not, set it to 7000-7003
  • Set the External IP Address of Firewall to the public IP of your server
  • Go to Windows Firewall
  • Add a new Inbound Rule, named something like FTPS for IIS
  • On the Protocols and Ports tab select Protocol Type = TCP, and Local Port = Specific Ports, with the range 7000-7003
  • Leave Remote Port = All Ports
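If you would rather script the rule than click through the wizard, a netsh command along these lines should be equivalent (the rule names are just labels, and you may also need a rule for the control channel on port 21 if it is not already open):

netsh advfirewall firewall add rule name="FTPS for IIS" dir=in action=allow protocol=TCP localport=7000-7003
netsh advfirewall firewall add rule name="FTPS control channel" dir=in action=allow protocol=TCP localport=21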

This allowed me to FTP into my server with FTPS selected. Don’t forget that because you have set the Bindings for the FTP site to your domain, the user name in your FTP program needs to be the domain name and user name separated by a pipe, for example www.domain.com|userName.

 

An item with the same key has already been added (SQL Report Services)

Screen Shot 2014-02-26 at 7.42.34 AM

 

I was creating a new report in SQL Reporting Services. I pasted in the SQL query for the report, and up popped the message “An item with the same key has already been added”.

It turns out I had two columns with the same name – which, as always with these error messages, begs the question: why not just say so?
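The sort of query that triggers it looks something like this (the table and column names are made up) – two result columns both called CustomerName. Give one of them its own alias and the error goes away:

-- This upsets Report Services: both columns come back named CustomerName
SELECT c.CustomerName, a.CustomerName
FROM Customers c
INNER JOIN ArchivedCustomers a ON a.CustomerId = c.CustomerId

-- This works: the second column gets a unique alias
SELECT c.CustomerName, a.CustomerName AS ArchivedCustomerName
FROM Customers c
INNER JOIN ArchivedCustomers a ON a.CustomerId = c.CustomerId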

Copy Azure SQL Database to local machine (SQL 2008 R2)

You would have thought in these days of point and click a simple task like copying a database from one place to another would be a walk in the park. So when I wanted to take a copy of a SQL database from Azure and move it to my local machine, I thought it surely would be straightforward. It’s not. Here’s what I came up with:

In Azure make sure you have a Storage account set up, with a container inside. My container is called ‘files’.

export1

In Azure go to your database. Click the Export button down the bottom.

export2

A new window will pop up. The file name will probably be pre-filled. Select your blob storage account (if you only have one then it’s already selected). Select a container (mine is ‘files’). Enter your SQL Server login name and password – these are the credentials you set up when you added the database. Click OK and the export should start – you’ll see the status down the bottom.

export3

Once the export is done, you can click to view your Storage container, and you should see the export file there.

export4

The trick now is to get the file from Azure to your local machine. I use Azure Storage Explorer, which is a free download. Download the file from your container.

export5

If you have SQL Server 2012, then you are home free, because you can import the BACPAC file in SQL Management Studio: right-click the Databases folder under a server in Object Explorer and choose “Import Data-tier Application”.

If like me you are on SQL Server 2008 R2, you need some help.

Create a new blank database on your local machine.

Go to http://msdn.microsoft.com/en-us/jj650014 and install the SQL Server Data Tools for Visual Studio 2010. After installing, go and find SqlPackage.exe – I located it in C:\Program Files\Microsoft SQL Server\110\DAC\bin. Open up a DOS prompt and use the following command:

SqlPackage.exe /a:Import /sf:c:\Desktop\FILENAME.bacpac /tdn:DATABASENAME /tsn:SQLSERVERNAME
  • FILENAME.bacpac is the file you downloaded from Azure, wherever it is located on your machine.
  • DATABASENAME is the name of the new blank database you created a minute ago.
  • SQLSERVERNAME is the name of the SQL server on your local machine.
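To make that concrete, a filled-in version of the command might look like this (the file, database and server names here are made up – substitute your own):

SqlPackage.exe /a:Import /sf:C:\Users\Dave\Desktop\exportedDb.bacpac /tdn:MyAzureCopy /tsn:MYLAPTOP\SQLEXPRESS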

I followed these steps, albeit for a fairly small file, and it worked perfectly.

Puzzle for the day: why one particular Glyphicon Halflings icon is acting weird

calendar-icon

Today’s conundrum: why does one particular Glyphicon Halflings icon display differently between browsers? I have several other icons, and they are all correctly displaying in black and white. Yet this one icon – calendar – on the same page, with the same code, displays differently across my browsers.

On FireFox and Safari it displays in colour:

Screen Shot 2014-01-29 at 9.03.15 AM

On IE it displays in black and white:

Screen Shot 2014-01-29 at 9.04.37 AM

On Chrome it doesn’t display at all:

Screen Shot 2014-01-29 at 9.04.12 AM

The markup is straightforward:

Screen Shot 2014-01-29 at 9.11.55 AM
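For reference, the standard Bootstrap 3 way of showing that icon is something like this:

<span class="glyphicon glyphicon-calendar"></span>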

I finally tracked down this answer on Stack Overflow:

There is most likely an issue with inclusion of Glyphicons on BootstrapCDN 3.0.0 version. When 3.0 was released the glyphicons were in a separate repo, and then combined into the Bootstrap baseline again later around 3.0.1.

I updated the CSS CDN include from:

<link href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css" rel="stylesheet">

to:

<link href="//netdna.bootstrapcdn.com/bootstrap/3.0.3/css/bootstrap.min.css" rel="stylesheet">

and my calendar icon is now correctly displaying as black and white across all browsers.

Now Internet Explorer won’t even confess to being Internet Explorer

liar

The bizarre world of Internet Explorer never ceases to amaze me. Today’s revelation – Internet Explorer 11 doesn’t even like to admit that it’s Internet Explorer.

When a web browser visits a web site, it identifies itself to the web site with a string of information that contains various elements that say to the web site “hey, I’m here, this is the type of web browser I am, my version, operating system”. It’s known as the user-agent string.

In the past the user-agent string for Internet Explorer would look something like this:

Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)

The ‘MSIE’ stands for Microsoft Internet Explorer, so pretty easy to work out which browser is visiting your web site.

Today I was wondering why I could not teach some web site code to recognise it was being visited by IE. A little research turned up this page of explanation from Microsoft, and here is what the IE 11 user-agent string now looks like:

Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko

What’s missing? The ‘MSIE’ of course!

If I was using Firefox, the string would say FIREFOX, like this:

MOZILLA/5.0 (MACINTOSH; INTEL MAC OS X 10.8; RV:26.0) GECKO/20100101 FIREFOX/26.0

Safari would say ‘SAFARI’, and Chrome would… well, now there’s the rub, because Chrome says CHROME as well as SAFARI:

MOZILLA/5.0 (MACINTOSH; INTEL MAC OS X 10_8_5) APPLEWEBKIT/537.36 (KHTML, LIKE GECKO) CHROME/32.0.1700.77 SAFARI/537.36
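So if, like me, you want your code to recognise IE 11, the practical answer seems to be to check for the Trident token as well as MSIE. A rough Classic ASP sketch (the variable names are mine):

' Rough check for Internet Explorer, including IE 11 (which only sends the Trident token)
ua = UCase(Request.ServerVariables("HTTP_USER_AGENT"))
isInternetExplorer = (InStr(ua, "MSIE") > 0) Or (InStr(ua, "TRIDENT/") > 0)
If isInternetExplorer Then
    ' IE-specific handling goes here
End If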

Remind me again why I got into this web development malarky?

Hackers with a sense of humour, or at least a love of Chinese food

Screen Shot 2014-01-16 at 3.51.46 PM

A client asked me to check into a page on their web site that captures name and address registrations from users. The page is available to the public and does not include a CAPTCHA. The form entries are saved into a database table, and there were many entries clearly submitted by a bot over a period of time. Many of them had the address “3137 Laguna Street, San Francisco”, which presumably was just a piece of junk – the main prize was adding a bunch of links off to dodgy web sites in a description field.

Out of idle curiosity I Googled the address. It’s a Chinese restaurant!

I bet the Syrian Electronic Army never leaves restaurant recommendations on the sites they attack 🙂

nvarchar(max) doesn’t work on Azure SQL with Classic ASP

stress

This is one of those completely obtuse issues that one hits from time to time – nvarchar(max) fields behave oddly on Microsoft Azure SQL – yet it all works fine on SQL Server 2008 on my local machine.

I battled this for an hour before finally, and simultaneously, figuring out that trying to return data from an nvarchar(max) column via Classic ASP on Azure yields a blank value (or weird binary garbage), and stumbling across someone else who has had exactly the same problem recently.

It’s fine to have a table with nvarchar(max) in Azure. I can push data into it, I can see the data via SSMS, but returning via ASP – nada.

I’m returning the data via a UDF, so I altered the table definition returned from the function so the column is nvarchar(4000). My function grabs the data from the (max) column and sticks it into the output column without trouble, and the function returns values fine – I don’t even need a CAST.
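As a rough sketch of what I mean (the table, column and function names here are invented), the function reads from the nvarchar(max) column but declares its return column as nvarchar(4000):

CREATE FUNCTION dbo.GetDescription (@ItemId int)
RETURNS @result TABLE (Description nvarchar(4000))
AS
BEGIN
    -- Items.Description is nvarchar(max) in the base table; returning it
    -- as nvarchar(4000) is what makes it readable from Classic ASP on Azure
    INSERT INTO @result (Description)
    SELECT Description FROM dbo.Items WHERE ItemId = @ItemId;
    RETURN;
END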

It’s frustrating because I expect these columns to hold large blocks of text, and now I’m size restricted.

I haven’t had time to explore whether messing with the connection strings might do the trick.

Create Twitter-like REST API endpoints with IIS URL rewrite

Twitter’s REST API uses endpoints that look like this:

http://www.domain.com/[object]/method.json

In addition, for GET requests you can append parameters like this:

http://www.domain.com/[object]/[method].json?param1=a&param2=b

So, for example, a call to the Twitter REST API might look like this:

https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=davideedle&count=2

I wanted to try and reproduce this format with a REST API I’ve been playing with, so I spent some time figuring out how to configure IIS URL rewrite to handle these URLs and deliver the parameters to an ASP page.

You’ll need IIS URL Rewrite installed; in IIS Manager you should see the URL Rewrite icon when you click on a website, for example:

Screen Shot 2013-09-23 at 11.30.31 AM

If the icon is not there, download and install the URL Rewrite extension from the Microsoft website, then open URL Rewrite and add a new Inbound Rule.

Here’s the regular expression pattern I came up with:

^rest/v1/([^/]*)/([^/]*)\.json$

With the Rewrite URL of:

rest/v1/?objectTypeName={R:1}&objectMethod={R:2}

This means that a URL like this:

http://www.domain.com/rest/v1/myObject/myMethod.json?param1=a&param2=b

Will pass the parameters into my ASP page like this:

objectTypeName = myObject
objectMethod = myMethod
param1 = a
param2 = b

You need to make sure you have the “Append query string” option checked for the rule so the additional parameters (e.g. param1 and param2) are passed through the rewrite.
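For reference, the same rule expressed in web.config looks roughly like this (the rule name is arbitrary):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="REST API" stopProcessing="true">
        <match url="^rest/v1/([^/]*)/([^/]*)\.json$" />
        <action type="Rewrite" url="rest/v1/?objectTypeName={R:1}&amp;objectMethod={R:2}" appendQueryString="true" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>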

If Kim Kardashian’s baby with Kanye West is called North West I’d improve my search engine rankings


My random thought for the day was, if I stuff my blog post title with lots of interesting words and terms, just how many people would turn up. Sadly (perhaps promisingly) I did then have to go and check several key facts:

  • How to spell Kim Kardashian
  • Who her boyfriend is today
  • The name of Kim Kardashian’s baby (who calls their child North West? All that pops into my head is North by Northwest, a really rather good film from the late 1950s directed by Alfred Hitchcock).

Finally I had a quick look at the Google Trends site which gives you an idea of what lots of people are searching for at the moment. Today it turns out the top of the hit parade is:

  • NFL
  • Julius Thomas (had to look him up as well)
  • Anthony Weiner (will that man never give up?)

Which means I probably should write something like how I went to watch an NFL game with Anthony Weiner and Julius Thomas scored a goal/basket/point. Yes, just realised Thomas plays basketball and the NFL is American Football, but hey, who cares when you’re just a search engine traffic moll.

 

 

Time Machine: The problem may be temporary. Try again later to back up. If the problem persists, use Disk Utility to repair your backup disk

timemachine_logo

Yesterday Time Machine on my MacBook started to fail with the message:

“The problem may be temporary. Try again later to back up. If the problem persists, use Disk Utility to repair your backup disk”

It kept erroring after each hourly backup attempt. So I tried the obvious: unmounted the external HDD, turned it off and on, remounted it, ran the backup again – no dice.

I ran Disk Utility to repair the disk, no dice.

I then found this forum post, which led me to the absolutely fantastic Pondini site (the link on the forum post is old – you need to go here), and smacked my head because I’m an idiot: I already have the Time Machine Buddy widget on my Desktop, it’s just been so long that I had forgotten.

That showed me:

“Starting standard backup
Backing up to: /Volumes/2TB/Backups.backupdb
Waiting for index to be ready (101)
2.52 GB required (including padding), 1.18 TB available
Indexing a file failed. Returned 1 for: /Library/Spotlight, /Volumes/2TB/Backups.backupdb/Apple’s MacBook Pro (2)/2013-09-03-071009.inProgress/C981E713-FC42-4A7D-BDD5-C83709DF0D5B/Macintosh HD/Library/Spotlight
Aborting backup”

This shows that Time Machine has been choking on a Spotlight search file, which in turn led me to notice that Spotlight seemed to be taking forever to complete an index (you can tell Spotlight is indexing because there’ll be a tiny black dot in the circle of the Spotlight magnifying glass at the top right of your screen).

Screen Shot 2013-09-04 at 11.01.28 AM

In fact, the time remaining at one point said “About 11 MONTHS”. Not encouraging. I know there’s a way to force Spotlight to reindex your Mac:

  • Open System Preferences
  • Click Spotlight icon
  • Click the Privacy button
  • Add your HDD to the list “Prevent Spotlight from searching these locations”
  • Close System Preferences
  • Re-open, click Spotlight icon, click Privacy button, remove your HDD from the list, close System Preferences

This should kick off a fresh index; it’s much easier than a method I figured out a while ago where you manually delete the Spotlight index file (at one stage my Spotlight decided it didn’t want to index my local hard drive, just my external drives).
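If you are comfortable in Terminal, there is also a command-line way to do the same thing – it erases the Spotlight index for the boot volume and kicks off a rebuild (the System Preferences route above is the one I actually used):

sudo mdutil -E /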

So far so good. Time Machine successfully completed its last backup. Spotlight has been sporadically re-indexing, but appears to be returning correct search results when I try a few tests.