Aiming for elegance, one thought at a time

MySQL, parameterized queries, PHP Data Objects and stored procedure out parameters

Posted: April 4th, 2010 | Author: | Filed under: IT | 1 Comment »

For the first time in a decade, I’m doing some PHP development. That’s scary in itself. The usual thing: connect to a database, get some data, serve up a page. The usual CRUD. I’ve elected not to use a framework because this is a bit of an experimental project and I’m not sure what I need – which makes the choice of frameworks difficult.

So I’m doing the database connection myself. No big deal, but I was surprised to find that the traditional way to handle dynamic queries in PHP is by building your own query string. Naturally, this means you need to protect against SQL injection attacks yourself. Now, perhaps this is my own fault for not using a framework, but I really don’t want to roll my own SQL injection protection. Thankfully, there’s PHP Data Objects (PDO), which provides parameterized queries – a feature that comes standard in pretty much every other language on the planet (including VBA, of all places). Technically, PDO ships with the standard PHP install as well, but I get the impression it hasn’t seen much traditional use.

The syntax will be familiar to anyone who’s used parameterized queries before:


<?php
// configuration
$dbhost = "localhost";
$dbname = "notes";
$dbuser = "root";
$dbpass = "password";

// database connection; throw exceptions on database errors
$conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// user input - safe to pass along because it's bound as parameters below
$subject = $_POST['subject'];
$object  = $_POST['object'];

// write via a stored procedure, with placeholders for both parameters
$sql = "CALL SetContent(?,?)";
$q = $conn->prepare($sql);
$q->execute(array($subject, $object));

// read it back with a plain parameterized query (no stored procedure - see below)
$sql = "SELECT object FROM threestore WHERE subject = ?";
$q = $conn->prepare($sql);
$q->execute(array($subject));
$object = $q->fetchColumn();
?>

This is a simplified example, with all non-essential code removed. It writes something to a database and then reads it back straight away: useful, no?

You’ll note that I don’t use a stored procedure to retrieve the object: that’s because MySQL version 5 doesn’t support out parameters properly, as detailed in this bug. The patch is scheduled for version 6, and given that the production release is at 5.1, it’s going to be quite a wait. There are a few different ways of working around the bug, but I wasn’t that attached to using stored procs at this stage.
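For the record, the usual workaround goes through MySQL session variables: pass a @variable as the OUT argument, then select it back in a second statement. A minimal sketch, assuming a hypothetical GetContent procedure with an OUT parameter:

-- assumes a hypothetical procedure: GetContent(IN subj VARCHAR(255), OUT obj VARCHAR(255))
CALL GetContent('some-subject', @obj);
-- the OUT value lands in the session variable; read it back in a second statement
SELECT @obj;

From PDO that means two statements – an exec() for the CALL, then a query() for the SELECT – which takes some of the shine off using stored procs in the first place.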


These are some of my favourite things

Posted: April 2nd, 2010 | Author: | Filed under: IT | No Comments »

Over the past year, I’ve spent a lot of time extracting and manipulating data in Oracle databases. Powerful things, them. These are some of the small-ticket, but kinda cool, features that I’ve found useful – the type of thing that doesn’t make the sales brochures, but can save time when you need it.

wm_concat
wm_concat is an unsupported string aggregate function, so it’s not often mentioned. In a grouping query, wm_concat will concatenate up to 255 (I believe) string values and return a comma-separated list. I used wm_concat when I had a table of operations that could be linked to multiple errors, and I wanted a summary of the most common combinations. You can achieve the same thing with a user-defined aggregate function, but it’s nice that it’s just built in (unless the DBAs have disabled it.)
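A sketch of the usage (the table and column names here are made up for illustration):

-- one row per operation, with all its linked error codes rolled into one string
SELECT operation_id,
       wm_concat(error_code) AS error_codes
FROM   operation_errors
GROUP  BY operation_id;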

xmlelement, xmlforest, xmlagg
Need to get xml out of your database? Sure you do. Yes, you could write something in whatever language, or better yet, use a case tool to autogen that code, but it’s pretty neat to get it straight from the DB. xmlelement, predictably, takes some parameters and makes an xml element. xmlforest returns a whole bunch of elements. xmlagg is an aggregation function to wrap a number of rows up together. You can combine these three functions (plus there are others) and build some very complex xml. The downside: you get a query that’s really not pretty. These functions are part of the SQL/XML standard, which seems to have pretty much languished since 2003. Anyone using this in a production environment?
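For instance – again with made-up table and column names – one element per row, aggregated under a single root:

-- produces <accounts><account><name>...</name><balance>...</balance></account>...</accounts>
SELECT xmlelement("accounts",
         xmlagg(
           xmlelement("account",
             xmlforest(a.account_name AS "name",
                       a.balance      AS "balance")))) AS accounts_xml
FROM   accounts a;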

case statements in sql
Case statements within SQL queries are ugly (they break with the SQL paradigm – but maybe it’s SQL that’s ugly, and the case statement just brings that home?) but they sure are useful. They can be easier to understand than decode() or some of the more creative hacks combining sign() and other functions in ways that were never intended. So I guess it’s not all bad.
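Side by side, on another made-up table – the case version against the decode()/sign() version it replaces:

-- the readable version
SELECT account_id,
       CASE
         WHEN balance < 0 THEN 'overdrawn'
         WHEN balance = 0 THEN 'empty'
         ELSE 'in credit'
       END AS status
FROM   accounts;

-- the decode()/sign() equivalent
SELECT account_id,
       decode(sign(balance), -1, 'overdrawn', 0, 'empty', 'in credit') AS status
FROM   accounts;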

Wish I’d read this first

Posted: April 2nd, 2010 | Author: | Filed under: IT | No Comments »

I’ve been partially responsible for the creation of a new XML format for use at work. We’ve been working on it for around six months. It does the job – but it sure is ugly. A lot of that is because we didn’t have a nice set of standards to begin with. I wish I’d known about Google’s XML Document Format Style Guide six months ago.

The major things that leap out at me:

  • Consistency. This is really grinding on me at the moment. Parts of the format are camelCase, parts are all lowercase, others just random. In parts we use venetian blind design, other places Russian doll.
  • The build versus design argument: we built part, and borrowed a large chunk at the eleventh hour (a big part of the reason behind the inconsistency.)

I’m not completely sold that reusing an existing format would have been better. There are existing formats out there to deal with the type of data we’re using (financial services client/account details.) In this case, though, we face some unique constraints.

The schema is directly exposed to users at two levels: firstly, through automatically generated forms from the off-the-shelf package we’re implementing; and secondly, behind the scenes, to the BAs who support that package. The schema we stole parts of was clearly not designed for this type of exposure. It focussed on machine-readability over human readability. We needed to do a lot of work to clean it up.


Alternative carbon policy

Posted: February 10th, 2010 | Author: | Filed under: Reflection | No Comments »

Every now and then – just for kicks – I like to play armchair senior public servant and imagine alternative policy. I’ve struck on what I think is a pretty good alternative carbon policy.

Now, not being an actual senior public servant, I don’t have access to Treasury advice and am thus liable to get some terminology wrong. I’m not sure if this is an emissions trading scheme, or if it’s a carbon tax. It is the bastard love child of carbon trading and the GST.

It seems there are three criteria that a good carbon policy has to fit:

  1. It needs to work in some reasonably intuitive way, and do so efficiently.
  2. It must attend to the real or imagined threat to “trade-exposed” companies.
  3. It must not harm Aussie battlers.

Now, Rudd’s policy initially hit points 1 and 3, but completely missed point 2. In order to fight back on point 2, Rudd watered the policy down. Now it misses point 1.

Abbott’s policy really doesn’t address any of the points above. It’s targeted at an alternate policy problem: how to win elections.

My alternative – let’s call it a Wholesale Carbon Levy (WCL – need something snappier?) – works this way. “Wholesale” carbon emitters – coal miners, oil wells, importers, farmers, etc – buy a certain number of carbon permits. That’s the carbon trading part. No mystery there.

“Wholesale” carbon emitters then pass that cost on to their customers (retail carbon emitters – power plants, petrol stations, financial services companies), but they pass it on as a separate item on any invoice or receipt. Using existing GST infrastructure (software etc), retail carbon emitters are able to pass this forward to their customers, and so on. That’s the GST part.

So far, so bad. The twist is that the ATO will allow a company to tax-deduct any WCL amount on any goods and services sent overseas, until such time as a binding global agreement is reached.

This addresses the three points above:

  1. Only so many carbon permits are available. Therefore, carbon emissions are reduced. Same as any ETS.
  2. Trade-exposed companies are reimbursed for any WCL amount, and so business has little cause for complaint (assuming GST infrastructure can be used easily.)
  3. Naturally, part of the additional revenue from the WCL would be directed to income tax cuts for Aussie battlers. As with every policy, weak points can be hidden behind tax cuts.

I’d very much appreciate any criticism the internet can muster. Suggested starting points: the stunning lack of originality (references please), or how it will cripple Australian farmers.


Don’t avoid rework

Posted: January 18th, 2010 | Author: | Filed under: IT | No Comments »

I’ve learnt an important lesson over the last few weeks. Don’t avoid rework – make it easy to do instead.

A few months ago, we were working on the foundations for the project I’m on. We knew that if we got the foundations wrong, the potential rework would be time consuming and expensive. Needless to say, we wanted to avoid that, and so we started doing some analysis to make sure we did it right. All fair enough.

But the fear of getting it wrong led to analysis paralysis. In the end, we ran out of time. We’d only got through one tenth of the scope when we needed to deliver. For the rest, we had to guess, and we got it wrong anyway. We went through the expensive and time consuming rework that we were trying to avoid.

It was only after that experience that we sat down and thought: does this rework really need to be time consuming and expensive? It turns out, the answer is no. With a couple of hours’ work, we were able to write a script that did the bulk of the heavy lifting. It’s still a little bit manual, and if we wanted to, we could certainly make substantial additional improvements.

Already, though, we can feel the fear of rework lifting. We’ve now got the confidence to decide, and act, without wasting time chasing an elusive perfection.


Movember

Posted: November 9th, 2009 | Author: | Filed under: Charity | 1 Comment »

This year I’m participating (for the first time) in Movember! This involves growing a hideous moustache to raise awareness for prostate cancer and other men’s health issues. Obligatory photos to follow. Anyone who is willing to support me ‘sight unseen’ can go to my donation page. I know that readers of this blog are very generous, and won’t hesitate to dig deep for a great cause!

Update: Day 9 – how time flies! I know I only posted this morning, but still…

[Webcam snapshot, 9 November 2009]

Forgive the shocking image quality. A David Niven moustache has been requested. Unless there’s a higher bidder, I think that’s how I’ll have to go!


Oxfam Trailwalker

Posted: November 4th, 2009 | Author: | Filed under: Charity | 1 Comment »

Anyone want to do the Oxfam Trailwalker? Seems I’ve been somewhat tardy getting a team together… the places available for those willing to commit to raising $1000+ filled up within a day. That means the minimum fundraising to enter is now $3000. Anyone game?


Test driven design

Posted: November 1st, 2009 | Author: | Filed under: IT | No Comments »

I don’t know quite what it is, but something about test-driven development (TDD) appeals to me. Perhaps it strikes a chord with my fundamental belief that machines should do the work so that people have time to think. Or perhaps it’s because TDD appeals to my anal nature. Whatever the case is, I like any opportunity to automate things, and although I’ve never done any, TDD seems to be an absolutely brilliant way to spend one’s days.

The only downside is – I’m on an integration project at the moment, and so the opportunity for TDD is limited, right? Well, it might be a wee bit harder, but we shouldn’t let that stand in the way. ThoughtWorks have a whitepaper (written by Gregor Hohpe and Wendy Istvanick) that talks about their approach to TDD in enterprise integration projects.

It lays out clearly all the component pieces needed to overcome the challenges of creating automated tests for enterprise integration solutions, and gives some pretty good advice on designing for testability – which is probably not on our radar at the moment.

Might just drop this on the test analyst’s desk come Monday.


Multiple monitors

Posted: August 9th, 2009 | Author: | Filed under: IT | No Comments »

Anyone who has used dual monitors (or a very large monitor) will know that the extra screen real estate makes work easier and more productive. However, it can be difficult to convince your boss – who works exclusively on their laptop’s 11-inch screen – that it’s worth the cost.

So I put together a business case.

First step: is there a benefit? From my own experience, yes there is. Any time I’m doing any sort of serious work at home, be it research, coding, or writing a blog post, I set my laptop up at my desk with a second monitor. It’s a much faster, more pleasant way of working with multiple windows. I might have reference material open in one window, and may be writing a post in another, for example. A quick google reinforces that there is ample anecdotal evidence that multiple monitors are a smarter way to work. I needed something more than that, though. I needed an empirical study.

Enter Productivity and multi-screen displays. According to this NEC-Mitsubishi study, “Respondents got on task quicker, did the work faster, and got more of the work done with fewer errors in multi-screen configurations than with a single screen. They were 6 percent quicker to task, 7 percent faster on task, generated 10 percent more production, were 16 percent faster in production, had 33 percent fewer errors, and were 18 percent faster in errorless production. Multi-screens were seen as 29 percent more effective for tasks, 24 percent more comfortable to use in tasks, 17 percent easier to learn, 32 per cent faster to productive work, 19 percent easier for recovery from mistakes, 45 percent easier for task tracking, 28 percent easier in task focus, and 38 percent easier to move around sources of information.” Admittedly, one study is not sufficient cause for certainty, but one study combined with strong anecdotal evidence and a logical theory to explain the results is very compelling.

For my business case, I’m most interested in the 10% gain in productivity. Where I work, it’s reasonable to assume that at least half of these productivity gains will come in the form of chargeable work. This is because I’m currently less than 100% chargeable (a large percentage of my work is ‘business as usual’), and we’ve got a large amount of chargeable work on the horizon that we just don’t have the resources to take on. This makes it really easy to demonstrate that getting multiple monitors will generate more benefits than it costs. For others, who might already be at 100% chargeability – or who don’t do any chargeable work – this argument won’t apply. In those cases, it will be more difficult – but not impossible – to demonstrate that multiple monitors generate value.

So, finally, to the figures (download the Excel spreadsheet). Assuming a 10% productivity gain, I’ll have time to do an extra 24 days work per year. If half of that is chargeable, that’s an additional 12 days work. At my chargeout rate, that equates to $AUD6,600 per year in additional revenue. The cost of dual monitors is $AUD250 for a dual-monitor video card, and $240 per year to lease the monitor. Over three years, the total benefit per employee with dual-monitors works out to be $AUD18,830.
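To spell out the arithmetic (the 240 working days per year and the $550 daily chargeout rate are back-figured from the numbers above, not quoted from anywhere):

240 days × 10% productivity gain       = 24 extra days per year
24 days × 50% chargeable               = 12 chargeable days per year
12 days × $550 per day                 = $6,600 additional revenue per year
Benefit over three years: 3 × $6,600   = $19,800
Cost over three years: $250 + 3 × $240 = $970
Net benefit: $19,800 − $970            = $18,830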

That’s pretty clear cut.

Update: (13 Sep 2009) The business case was accepted and I now have two monitors on my desk.


Phoenix

Posted: July 31st, 2009 | Author: | Filed under: IT | No Comments »

Almost everything we come into contact with is designed to be thrown away. Toasters, couches, computers, buildings, public transport systems – almost everything we use will eventually end up in landfill. When we design things, we normally don’t think about what will happen once it’s served its purpose. If we do, it’s only to plan how we can manage getting rid of it. Often we don’t even do that. Most of us are guilty of hoarding some useless thing or other, simply because we never thought about what we’d do with it once we were done with it.

IT systems are no exception.

A few years ago I read a fascinating book called Cradle to Cradle: Remaking the Way We Make Things by McDonough and Braungart. The normal approach to design – the one we experience every day – is cradle to grave design. We design a product or service to be created, used, maintained, and then thrown away. McDonough and Braungart opened my eyes to a different type of design: designing things in a way that allows them to be reborn when they’ve reached the end of their current life.

For physical objects, this means designing things to be either recyclable or biodegradable. There are countless examples of what can be done. Phones that simply pop apart when heated above a certain temperature, making it economical to recycle their component parts. Square carpet ‘tiles’, instead of rolls of carpet, that can be replaced individually when they wear, and again be recycled into new carpet (your office, if it’s been fitted out recently, probably has these.) The ‘renting out’ and re-capture, rather than sale, of industrial chemicals.

Why do these things? Because we’re running out of landfill and resources, and so ultimately we have no choice. But perhaps more relevantly, because it’s often cheaper. In the long term, it’s cheaper to design products and services that create more resources than they use. To design products and services that will be the foundations that tomorrow’s products are built upon, and the fertile soil that tomorrow’s services grow within.

Does this have any lessons for IT management? We habitually manage our systems and services on a cradle to grave basis. Indeed, IT management frameworks such as ITIL build in the assumption that systems will be decommissioned – thrown in the bin. We make decisions on the basis that, at some point, the system is going to be replaced. We take shortcuts when developing new processes or capabilities, because we know we won’t have to support them forever. Often, we avoid making necessary changes because a new system is perpetually just around the corner.

Would we make different decisions if we were managing IT cradle to cradle? What would this mean in practice? Stay tuned.
