Aiming for elegance, one thought at a time

Javascript Form Validation

Posted: May 23rd, 2010 | Author: | Filed under: IT | No Comments »

This is an incredibly common problem, and there’s really no reason to reinvent the wheel. That said, yesterday I thought of (what I think is) a rather neat way of solving it. So I’ve deliberately not looked in depth at how others have approached it, and instead made a quick sketch of my own solution. When I do look, I think I’ll start with the jQuery Validation Plugin.

In short, this model is inspired by jQuery’s ability to chain queries together, to give succinct, powerful syntax. There are two essential things that I want to be able to do:

  1. Check if a condition is present
  2. Assert that something should be so

My aim is to be able to do this with code like the following:

vl.ifEqual("detailrequired","Full")
  .assertNotEmpty("If full details are required,"
                  + "contributions must be entered",
                  "contributions");

But let’s start at the beginning. Form validation: there are many levels. Let me enumerate some:

  1. Required fields
  2. Fields having the correct format
  3. Inter-field validation (eg a start date falling before an end date)

In this sketch, I’m particularly thinking of the third level of form validation. In the example of a start and end date, all I want to do is check that the start date falls before the end date, and if it doesn’t, raise an error. I can do that with this code (error handling removed for brevity):

var startdate = Date.parse(document.getElementById("startdate").value);
var enddate = Date.parse(document.getElementById("enddate").value);
if (startdate > enddate) {
    alert("Start date must fall before the end date");
}
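
With the error handling added back in, even this small check grows. A rough sketch only:

var startdate = Date.parse(document.getElementById("startdate").value);
var enddate = Date.parse(document.getElementById("enddate").value);
// Date.parse returns NaN for input it can't make sense of
if (isNaN(startdate) || isNaN(enddate)) {
    alert("Please enter valid start and end dates");
} else if (startdate > enddate) {
    alert("Start date must fall before the end date");
}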

Simple enough, but rather wordy – particularly once you start handling errors and unexpected input, as the sketch above shows. A more complex example might be to check the value of one field and, depending on its value, apply a particular rule. For example, if a field detailsrequired equals Full, then a number of other fields are required. This could be achieved with this Javascript:

if (document.getElementById("detailsrequired").value=="Full") {
    if (document.getElementById("contribututions").value=="") {
        alert("If full details are required, "
               + "contributions must be entered");
    }
}

These individual samples aren’t very complex, but once you add code to handle exceptions, they blow out a fair bit. It’s all quite verbose, too. To achieve the syntax outlined above, where I can chain the checks and assertions together, I need to create an object that returns a reference to itself when you call any of its methods – the same way jQuery works.

In the following code excerpt, I create two functions, and then bind them to a function object, ValidationLibrary. First, a word on the function object: it has a single property, ‘check’, that is set by its single parameter. This property is used by the function assertNotEmpty: if this.check is true, the assertion is applied; if not, nothing happens.

The ifEqual function takes two parameters and compares them. If they are equal, it returns the parent object unaltered. On the other hand, if they are unequal, it returns a new ValidationLibrary with check set to false – thus disabling any subsequent assertions.

function ifEqual(a, b) {
    if (a!=b) {
        return new ValidationLibrary(false);
    }
    return this;
}

function assertNotEmpty(msg, a) {
    if (this.check) {
        if (a=="") {
            this.error(msg);
        }
    }
    return this;
}

function ValidationLibrary(check) {
    this.check = check;
    this.assertNotEmpty = assertNotEmpty;
    this.ifEqual = ifEqual;
    // simplest possible reporting; the real library would do more here
    this.error = function(msg) { alert(msg); };
}

With this framework, the example above becomes:

var vl = new ValidationLibrary(true);
vl.ifEqual(f("detailsrequired"),"Full")
  .assertNotEmpty("If full details only are required, "
                    + " contribution must be provided",
                  f("contribution"));

Note: I’ve created a function f(id) that returns the value of a form field, given a particular ID. If the form field contains a date, a Date object will be returned. If the form field contains a number, a Number object will be returned. Otherwise, a string will be returned.
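
For completeness, here’s a minimal sketch of what f might look like. The type-detection heuristics are illustrative only – the real version is in ValidateLibrary.js:

function f(id) {
    var value = document.getElementById(id).value;
    // crude heuristic: a value containing a separator that parses
    // as a date is treated as a date
    var ms = Date.parse(value);
    if (value.indexOf("/") != -1 && !isNaN(ms)) {
        return new Date(ms);
    }
    // anything wholly numeric is treated as a number
    if (value != "" && !isNaN(Number(value))) {
        return Number(value);
    }
    return value;
}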

With this framework, there’s a lot less boilerplate to get the same effect, and behind the scenes there is (in theory, at least) a lot more error checking.

As above, this is only a rough sketch to capture the idea. I’m still to go and look at how other people have solved this same problem. And to be usable, a lot more would need to be done on this framework: the way assertions are reported, in particular, would need a lot of work, and even a cursory glance shows that the jQuery Validation plugin does a much better job of simple validations like making a field mandatory.

That said, the main idea I wanted to jot down was a way of handling more complex validations in an elegant way: I’m curious to see what other options are already in use.

In the meantime, please feel free to check out the rough demo and peruse the full javascript files, ValidateLibrary.js and FormValidations.js.


Installing Oracle Instant Client (and connecting to Oracle from Excel)

Posted: May 1st, 2010 | Author: | Filed under: IT | 2 Comments »

‘Instant’ probably overstates it somewhat, but the Oracle Instant Client does let you connect to an Oracle database in a reasonably snappy way. It’s pretty straightforward too, but there are a few hoops to jump through. Here’s how I got it working.

Installing Instant Client

  1. Download the Instant Client. You’ll need the ‘basic’ or ‘basiclite’ packages, and you’ll probably want one of the add-ons, like the jdbc driver (for Java) or the odbc driver. I grabbed the odbc driver, because I’m going to connect via Excel. You’ll need to sign up to the Oracle website to access the downloads. I’ve been a member for a while, and it seems pretty harmless – no spam that I’ve noticed. Once you’ve downloaded the packages, unzip them to the same directory.
  2. The current packages ship without some necessary DLLs, as detailed on the OTN Discussion Forum. The missing DLLs are MFC71.dll, msvcr71.dll and MFC71ENU.dll. I believe they’re part of the Visual Studio install, and I had them on my PC, but I needed to drag them into the install directory. If you don’t have them, you can google them (if you’re feeling lucky). Update: looks like they’ve updated the packages, and you shouldn’t need to track down these dlls any longer.
  3. Place this directory where you want it and run odbc_install.exe. The install adds some registry settings to register the odbc driver, and points to the driver in the directory you’re using.
  4. Create an environment variable called TNS_ADMIN. The value should be the path to the directory that contains tnsnames.ora, which lets the Oracle driver know what servers are available. Managing tnsnames.ora can be frustrating, especially for the uninitiated (that is, me), and in a subsequent post, I’ll detail how to connect without tnsnames.ora.
  5. You’ll also need to create an NLS_LANG environment variable. Oracle recommends you set this in the registry, but the instant client doesn’t create the registry structures needed. You could create them, but it’s easier to create the environment variable (there’s an example after this list). Oracle provides a list of possible values.
  6. You can now connect to your Oracle DB using Microsoft Query.
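
For example, both variables can be set from a command prompt. I’m assuming here that you unzipped the client to C:\oracle\instantclient_11_2 and placed tnsnames.ora in the same directory; the NLS_LANG value shown is just one common choice – check Oracle’s list for the right one for you:

setx TNS_ADMIN "C:\oracle\instantclient_11_2"
setx NLS_LANG "AMERICAN_AMERICA.WE8MSWIN1252"

Note that setx only affects programs started after it runs, so restart Excel (or anything else that needs the variables) afterwards.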

Update: The promised post on connecting using VBA is on its way! In the meantime, to connect with Microsoft Query, there are a few things to be aware of.

Connecting with Microsoft Query

Firstly, your TNS_ADMIN environment variable must point to a directory containing a valid tnsnames.ora – or this won’t work. If you’re connecting to Oracle Express Edition, your tnsnames.ora will look like this:

XE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = XE)
    )
  )

If you’re connecting to another Oracle database, you’ll need to find the appropriate tnsnames.ora. It might be under a path like C:/oracle/network/admin/. Without TNS_ADMIN pointing to a directory with a valid tnsnames.ora, you won’t be able to connect using this method.

There are two ways of setting up the connection. The first is directly through Microsoft Excel; the second is through the ODBC Data Source Administrator. The latter is probably the better way, but I’ll look at setting it up through Excel first.

New connection through Excel

  1. In Excel, start Microsoft Query. In Office 2007, go to the Data ribbon and click on Get External Data -> From Other Sources -> Microsoft Query.
    [Screenshot: the Choose Data Source dialogue shown in Excel when you choose to import external data using Microsoft Query]
  2. Leave <New Data Source> selected and click OK. The Create New Data Source dialogue will be displayed. Enter a name for the data source, and the driver drop-down becomes enabled. Select Oracle in instantclient_11_2 (or similar).
    [Screenshot: the Create New Data Source window in Microsoft Query]
  3. Click Connect. The service name must match a valid service name in the tnsnames.ora. If you’re using the Oracle Express example above (and have installed Oracle Express with default settings), this will be XE. The username and password will be whatever you or the sysadmin set.
  4. Click OK. If you cannot connect at this point, but you can connect to the database by other means, it most likely means that your tnsnames.ora is wrong or that TNS_ADMIN is not pointing to the right directory (note that if you change the environment variable, you’ll need to restart Excel for the change to take effect).
  5. All being well, you will now be able to select a default table (if you choose to) and use Microsoft Query as you normally would.

Congratulations! You’re now connected to Oracle using Microsoft Query!

Creating the connection in ODBC Data Source Administrator

It’s often easier and more convenient to set up the new data source through the ODBC Data Source Administrator. This way, the new data source will be available whenever you want to use it, rather than needing to recreate it every time.

  1. Open the ODBC Data Source Administrator. It’s in the Control Panel. Under 64-bit Windows 7 you’ll need to choose the appropriate version of odbcad32.exe – under either system32 or SysWOW64 – depending on whether you’re setting up a connection for 32-bit or 64-bit applications.
  2. Click Add. The Create New Data Source window appears. Choose the Oracle in instantclient_11_2 driver and click OK.
  3. The Oracle ODBC Driver Configuration page will open. This page gives you far more options, and is more intelligent, than the equivalent when you create the connection in Excel. The TNS Service Names drop-down box will populate with the databases specified in tnsnames.ora: if no options appear, then either your tnsnames.ora file is invalid, or TNS_ADMIN is not set correctly. Again, if you change TNS_ADMIN, you’ll need to restart the ODBC Data Source Administrator for the change to take effect.
  4. Click ‘Test Connection’. You’ll be prompted to enter a password, and all being well, you’ll get a dialogue confirming that the connection succeeded.
  5. Click OK in the Data Source Configuration dialogue, and open Microsoft Excel. The new Data Source will appear when you open Microsoft Query.
  6. Click OK and you can use the connection in Microsoft Query as usual.

Still to come…

So that’s two different ways to connect to Oracle in Excel using Microsoft Query. As soon as I have time, I’ll be posting a sample workbook and instructions on how to connect to Oracle using VBA instead of Microsoft Query, which is especially handy if you want to distribute the workbook.


MySQL, parameterized queries, PHP Data Objects and stored procedure out parameters

Posted: April 4th, 2010 | Author: | Filed under: IT | 1 Comment »

For the first time in a decade, I’m doing some PHP development. That’s scary in itself. It’s the usual thing: connect to a database, get some data, serve up a page. Standard CRUD. I’ve elected not to use a framework, because this is a bit of an experimental project and I’m not sure what I need – which makes the choice of frameworks difficult.

So I’m doing the database connection myself. No big deal, but I was surprised to find that the traditional way to handle dynamic queries in PHP is by building your own query string. Naturally, this means you need to protect against SQL injection attacks yourself. Now, perhaps this is my own fault for not using a framework, but I really don’t want to roll my own SQL injection protection. Thankfully, there’s PHP Data Objects (PDO), which provides parameterized queries – something that pretty much comes standard in every other language on the planet (including VBA, of all places). Technically, PDO is standard in the PHP install as well, but I get the impression it hasn’t traditionally seen much use.

The syntax will be familiar to anyone who’s used parameterized queries before:


<?php

// configuration

$dbhost     = "localhost";
$dbname     = "notes";
$dbuser     = "root";
$dbpass     = "password";

// database connection

$conn = new PDO("mysql:host=$dbhost;dbname=$dbname",$dbuser,$dbpass);

$subject = $_POST['subject'];
$object = $_POST['object'];
// query
$sql = "CALL SetContent(?,?)";
$q = $conn->prepare($sql);
$q->execute(array($subject,$object));

$sql = "SELECT object from threestore where subject = ?";

$q = $conn->prepare($sql);
$q->execute(array($subject));
$object = $q->fetchColumn();

?>

This is a simplified example, with all non-essential code removed. It writes something to a database and then reads it back straight away: useful, no?

You’ll note that I don’t use a stored procedure to retrieve the object: that’s because MySQL version 5 doesn’t support out parameters properly, as detailed in this bug. The patch is scheduled for version 6, and given that the production release is at 5.1, it’s going to be quite a wait. There are a few different ways of working around the bug, but I wasn’t that attached to using stored procs at this stage.
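
For the record, the workaround I’ve seen most often is to pass a MySQL session variable as the OUT argument, then select it back. A sketch only – GetContent here is a hypothetical procedure with an OUT parameter:

// call the procedure, capturing the OUT parameter in a session variable
$conn->query("CALL GetContent('mysubject', @result)");
// read the session variable back on the same connection
$row = $conn->query("SELECT @result")->fetch(PDO::FETCH_NUM);
$object = $row[0];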


These are some of my favourite things

Posted: April 2nd, 2010 | Author: | Filed under: IT | No Comments »

Over the past year, I’ve spent a lot of time extracting and manipulating data in Oracle databases. Powerful things, them. These are some of the small-ticket, but kinda cool, features that I’ve found useful – the type of thing that doesn’t make the sales brochures, but can save time when you need it.

wm_concat
wm_concat is an unsupported string aggregate function, so it’s not often mentioned. In a grouping query, wm_concat will concatenate up to 255 (I believe) string values, and return a comma-separated list. I used wm_concat when I had a table of operations that could be linked to multiple errors, and I wanted a summary of the most common combinations. You can achieve the same thing with a user-defined aggregate function, but it’s nice that it’s just built in (unless the DBAs have disabled it).
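
For example, against a hypothetical operation_errors table with one row per operation/error pair:

SELECT operation,
       wm_concat(error_code) AS error_codes
FROM operation_errors
GROUP BY operation;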

xmlelement, xmlforest, xmlagg
Need to get xml out of your database? Sure you do. Yes, you could write something in whatever language, or better yet, use a case tool to autogen that code, but it’s pretty neat to get it straight from the DB. xmlelement, predictably, takes some parameters and makes an xml element. xmlforest returns a whole bunch of elements. xmlagg is an aggregation function that wraps a number of rows up together. You can combine these three functions (plus others) to build some very complex xml. The downside: you get a query that’s really not pretty. These functions are part of the SQL/XML standard, which seems to have pretty much languished since 2003. Anyone using this in a production environment?
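
For instance, using the classic emp demo table, something like this produces a single <employees> document with one <employee> element per row:

SELECT xmlelement("employees",
         xmlagg(
           xmlelement("employee",
             xmlforest(e.empno AS "id", e.ename AS "name"))))
FROM emp e;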

case statements in sql
Case statements within SQL queries are ugly (they break with the SQL paradigm – but maybe it’s the SQL that’s ugly, and the case statement just brings that home?), but they sure are useful. They can be easier to understand than decode() or some of the more creative hacks combining sign() and other functions in ways that were never intended. So I guess it’s not all bad.
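
As a quick illustration, against a hypothetical orders table these two expressions are equivalent:

SELECT decode(status, 'O', 'Open', 'C', 'Closed', 'Unknown') FROM orders;

SELECT CASE status WHEN 'O' THEN 'Open'
                   WHEN 'C' THEN 'Closed'
                   ELSE 'Unknown'
       END
FROM orders;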

Wish I’d read this first

Posted: April 2nd, 2010 | Author: | Filed under: IT | No Comments »

I’ve been partially responsible for the creation of a new XML format for use at work. We’ve been working on it for around six months. It does the job – but it sure is ugly. A lot of that is because we didn’t have a nice set of standards to begin with. I wish I’d known about Google’s XML Document Format Style Guide six months ago.

The major things that leap out at me:

  • Consistency. This is really grinding on me at the moment. Parts of the format are camelCase, parts are all lowercase, others just random. In parts we use venetian blind design, in other places Russian doll (there’s a sketch of the difference after this list).
  • The build versus design argument: we built part, and borrowed a large chunk at the eleventh hour (a big part of the reason behind the inconsistency).
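
For anyone unfamiliar with the jargon, here’s a minimal sketch of the two schema styles, using a made-up client element. Venetian blind declares named global types with local elements:

<xs:complexType name="ClientType">
  <xs:sequence>
    <xs:element name="name" type="xs:string"/>
  </xs:sequence>
</xs:complexType>
<xs:element name="client" type="ClientType"/>

Russian doll nests everything inline under a single global element:

<xs:element name="client">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>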

I’m not completely sold that reusing an existing format would have been better. There are existing formats out there to deal with the type of data we’re using (financial services client/account details). In this case, though, we face some unique constraints.

The schema is directly exposed to users at two levels: firstly, through automatically generated forms in the off-the-shelf package we’re implementing; and secondly, behind the scenes, to the BAs who support that package. The schema that we stole parts of was clearly not designed for this type of exposure. It focussed on machine-readability over human readability. We needed to do a lot of work to clean it up.


Don’t avoid rework

Posted: January 18th, 2010 | Author: | Filed under: IT | No Comments »

I’ve learnt an important lesson over the last few weeks. Don’t avoid rework – make it easy to do instead.

A few months ago, we were working on the foundations for the project I’m on. We knew that if we got the foundations wrong, the potential rework would be time consuming and expensive. Needless to say, we wanted to avoid that, and so we started doing some analysis to make sure we did it right. All fair enough.

But the fear of getting it wrong led to analysis paralysis. In the end, we ran out of time. We’d only got through one tenth of the scope when we needed to deliver. For the rest, we had to guess, and we got it wrong anyway. We went through the expensive and time consuming rework that we were trying to avoid.

It was only after that experience that we sat down and thought: does this rework really need to be time consuming and expensive? It turns out the answer is no. With a couple of hours’ work, we were able to write a script that did the bulk of the heavy lifting. It’s still a little bit manual, and if we wanted to, we could certainly make substantial further improvements.

Already, though, we can feel the fear of rework lifting. We’ve now got the confidence to decide, and act, without wasting time chasing an elusive perfection.


Test driven design

Posted: November 1st, 2009 | Author: | Filed under: IT | No Comments »

I don’t know quite what it is, but something about test-driven development (TDD) appeals to me. Perhaps it strikes a chord with my fundamental belief that machines should do the work so that people have time to think. Or perhaps it’s because TDD appeals to my anal nature. Whatever the case, I like any opportunity to automate things, and although I’ve never done any TDD, it seems to be an absolutely brilliant way to spend one’s days.

The only downside is that I’m on an integration project at the moment, so the opportunity for TDD is limited, right? Well, it might be a wee bit harder, but we shouldn’t let that stand in the way. ThoughtWorks have a whitepaper (written by Gregor Hohpe and Wendy Istvanick) that talks about their approach to TDD in enterprise integration projects.

It lays out really clearly all the component pieces needed to overcome the challenges of creating automated tests for enterprise integration solutions, and gives some pretty good advice on designing for testability – which is probably not on our radar at the moment.

Might just drop this on the test analyst’s desk come Monday.


Multiple monitors

Posted: August 9th, 2009 | Author: | Filed under: IT | No Comments »

Anyone who has used dual monitors (or a very large monitor) will know that the extra screen real estate makes work more productive and less frustrating. However, it can be difficult to convince your boss – who works exclusively on their laptop’s 11 inch screen – that it’s worth the cost.

So I put together a business case.

First step: is there a benefit? From my own experience, yes there is. Any time I’m doing any sort of serious work at home, be it research, coding, or writing a blog post, I set my laptop up at my desk with a second monitor. It’s a much faster, more pleasant way of working with multiple windows. I might have reference material open in one window, and may be writing a post in another, for example. A quick google reinforces that there is ample anecdotal evidence that multiple monitors are a smarter way to work. I needed something more than that, though. I needed an empirical study.

Enter Productivity and multi-screen displays. According to this NEC-Mitsubishi study, “Respondents got on task quicker, did the work faster, and got more of the work done with fewer errors in multi-screen configurations than with a single screen. They were 6 percent quicker to task, 7 percent faster on task, generated 10 percent more production, were 16 percent faster in production, had 33 percent fewer errors, and were 18 percent faster in errorless production. Multi-screens were seen as 29 percent more effective for tasks, 24 percent more comfortable to use in tasks, 17 percent easier to learn, 32 per cent faster to productive work, 19 percent easier for recovery from mistakes, 45 percent easier for task tracking, 28 percent easier in task focus, and 38 percent easier to move around sources of information.” Admittedly, one study is not sufficient cause for certainty, but one study combined with strong anecdotal evidence and a logical theory to explain the results is very compelling.

For my business case, I’m most interested in the 10% gain in productivity. Where I work, it’s reasonable to assume that at least half of these productivity gains will come in the form of chargeable work. This is because I’m currently less than 100% chargeable (a large percentage of my work is ‘business as usual’), and we’ve got a large amount of chargeable work on the horizon that we just don’t have the resources to take on. This makes it really easy to demonstrate that getting multiple monitors will generate more benefits than it costs. For others, who might already be at 100% chargeability – or who don’t do any chargeable work – this argument won’t apply. In those cases, it will be more difficult – but not impossible – to demonstrate that multiple monitors generate value.

So, finally, to the figures (download the Excel spreadsheet). Assuming a 10% productivity gain, I’ll have time to do an extra 24 days work per year. If half of that is chargeable, that’s an additional 12 days work. At my chargeout rate, that equates to $AUD6,600 per year in additional revenue. The cost of dual monitors is $AUD250 for a dual-monitor video card, and $240 per year to lease the monitor. Over three years, the total benefit per employee with dual-monitors works out to be $AUD18,830.
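
For anyone checking the arithmetic (the daily rate of $550 is implied by the figures above; I’ve ignored discounting):

Extra chargeable work:  12 days/year x $550/day = $6,600/year
Three-year benefit:     3 x $6,600 = $19,800
Three-year cost:        $250 + (3 x $240) = $970
Net benefit:            $19,800 - $970 = $18,830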

That’s pretty clear cut.

Update: (13 Sep 2009) The business case was accepted and I now have two monitors on my desk.


Phoenix

Posted: July 31st, 2009 | Author: | Filed under: IT | No Comments »

Almost everything we come into contact with is designed to be thrown away. Toasters, couches, computers, buildings, public transport systems – almost everything we use will eventually end up in landfill. When we design things, we normally don’t think about what will happen once it’s served its purpose. If we do, it’s only to plan how we can manage getting rid of it. Often we don’t even do that. Most of us are guilty of hoarding some useless thing or other, simply because we never thought about what we’d do with it once we were done with it.

IT systems are no exception.

A few years ago I read a fascinating book called Cradle to Cradle: Remaking the Way We Make Things by McDonough and Braungart. The normal approach to design – the one we experience every day – is cradle to grave design. We design a product or service to be created, used, maintained, and then thrown away. McDonough and Braungart opened my eyes to a different type of design. Designing things in a way that allows them to be reborn when they’ve reached the end of their current life.

For physical objects, this means designing things to be either recyclable or biodegradable. There are countless examples of what can be done. Phones that simply pop apart when heated above a certain temperature, making it economical to recycle their component parts. Square carpet ‘tiles’, instead of rolls of carpet, that can be replaced individually when they wear, and again be recycled into new carpet (your office, if it’s been fitted out recently, probably has these). The ‘renting out’ and re-capture, rather than sale, of industrial chemicals.

Why do these things? Because we’re running out of landfill and resources, and so ultimately we have no choice. But perhaps more relevantly, because it’s often cheaper. In the long term, it’s cheaper to design products and services that create more resources than they use. To design products and services that will be the foundations that tomorrow’s products are built upon, and the fertile soil that tomorrow’s services grow within.

Does this have any lessons for IT management? We habitually manage our systems and services on a cradle to grave basis. Indeed, IT management frameworks such as ITIL build in the assumption that systems will be decommissioned – thrown in the bin. We make decisions on the basis that, at some point, the system is going to be replaced. We take shortcuts when developing new processes or capabilities, because we know we won’t have to support them forever. Often, we avoid making necessary changes because a new system is perpetually just around the corner.

Would we make different decisions if we were managing IT cradle to cradle? What would this mean in practice? Stay tuned.
