Archive for the ‘risk’ Category

In from the cold

Monday, 25th November, 2013


So after a few months out I am back.

The cycle is fairly well established now – I do a contract, get frustrated, take a break, start looking for the next contract.

What is the frustration?

  • If I work in an IT department it’s their complete determination to do anything except deliver working software to the people who need it (and pay for it)
  • If I work in a business role it’s (a) much less frustrating, (b) more rewarding, (c) a bit of a niggle about not getting access to the best tools for the job.

Most recent contracts have been in IT departments.

I have had a great break over the summer, and have been doing some teaching at a local college, but now it’s time to start the long painful search for a new contract.

The process was never fun, and it gets even less so every time. Clients with unrealistic skill set expectations (30 years .NET 4.5, 100 years Excel 2013, 50 years Linux kernel debugging, etc.), and crashing pay rates (seemingly 60% of last year, which was 80% of the year before). Agents with even less knowledge of the business, the market or even IT. Too many alarm words: “prince2”, “visio” – I even saw “waterfall” a few times last week!

Then there’s the death of Excel as a client-side target and the rise of its pale and pathetic arch-nemesis the browser, and all the bullshit time wasting that represents. But having devs write thousands of lines of javascript to replicate 1-click actions in Excel sure cuts down the spreadsheet error rate.

So anyway I am brushing up my jQuery and Ajax skillz ready to bluff my way into that Useful Spreaddie to Pointless Web App migration project coming to a company near you soon. :-)



Irony disconnect

Friday, 29th June, 2012

Still keeping an eye out for that elusive challenging role on reasonable terms…

Although I am keen to stay in Energy or commodity trading, I have also strayed into applying for bank type roles because of my financial services background.

It’s pretty ironic to see their 50-page-plus recruitment bullshit about trustworthiness and creditworthiness.

Bank trust:

Banks mis-selling complex derivatives.

Banks manipulating LIBOR.

Bank creditworthiness:

UK banks bailout.

Euro banks bailout.

and don’t get me started on ‘must have experience of testing’:

Bank testing.

Probably best if I steer clear of banks really; I would hate to develop some of the traits they seem to reward.

The disconnect between the way some of these banks perceive themselves, versus the way these malpractice investigations demonstrate them to be is, I find, amusingly ironic.







Academic and commercial spreadsheet errors

Thursday, 22nd December, 2011

[I just posted this on Eusprig – but I suspect it is too long to hold the interest in a list post]

I think there is a total chasm between
a. academic researchers whose main spreadsheet experience is the classic ‘student grades’ thing and
b. business spreadsheet jockeys who are in spreadsheets all day everyday.

Group a think several hundred formulas is big; group b think several thousand is small.
Group a think most commercial spreadsheets have material errors; group b rarely see any error effect.
Group a think group b are over confident; group b think group a are inexperienced.

Within Eusprig I think we need to find a way to reconcile and explain these two completely opposed views of apparently the same thing. Otherwise neither side will ever gain any credibility from the other.

Personally I don’t believe many commercial spreadsheets have material errors, because most commercial spreadsheets are immaterial. They are a small piece of a bigger effort.

Yes, I have seen spreadsheets wrong by millions, or 10+%, or whatever you want to call materiality. But did it change anything? No, not ever.

In a billion dollar, multi year, deal evaluation model, a multi million formula error can be dwarfed by inflation or interest rate assumptions. But whatever, if the price comes in at 1 billion and the client only wants to pay 900 million, then the whole analysis, errors and all, is largely irrelevant. Now the question is ‘are we prepared to take the risk that we can deliver this and survive for 900m?’ or slightly more cynically ‘will they ever tie cost overruns back to me and take back my bonus?’

In my experience spreadsheets are normally one of many inputs to important decisions, any inputs out of tune with the majority are either reviewed for credibility or rejected.

So I agree that most spreadsheets have defects, and I agree that very few lead to an erroneous outcome. And I agree that this is the Human element of spreadsheet interaction, ignored in much academic research. I also believe that the big issue is wasted time and effort, around ineffective spreadsheet use, not error impact.

Maybe we need some more holistic research that covers the whole person/spreadsheet system (in a commercial setting) rather than the spreadsheet in isolation.

I would highlight that in my experience, when a spreadsheet changes hands (for holiday cover, a job role change or whatever), there is a huge spike in wasted time, risk of nonsense outputs, and external support requests.

What is your experience? Have you also found that the complete information system that includes these potentially erroneous spreadsheets is usually somewhat self-healing? (and self-learning – ‘x in reporting is useless, I now ignore everything they send me’)


Eusprig 2011

Friday, 17th June, 2011

This year’s spreadsheet risk and quality extravaganza is almost upon us.

It is just under a month away, in mid July.

You can book here.

I am not presenting this year, as I thought I would let someone else have a turn speaking (and of course I missed the submission deadline).

In fact I probably won’t be attending as I’m not sure where I will be working/holidaying then.

I would be expecting a good talk from Patrick as we worked together this year on a few spreadsheet related projects. Indeed he came face to face with the source of several of my formula horrors from previous years!

Oh, looks like he is not presenting this year, but on the bright side there is some more original research on the power (or not) of range names, amongst other interesting papers.

Here is the (current) draft outline schedule.

Are you going?



pure joy

Friday, 3rd June, 2011

I loooove this!

I might even apply for the job (it might be the only chance to get tickets…)



Evil spreaddie fingered in RSA hack

Monday, 4th April, 2011

Dunno if you have been following the recent SecurID hack at RSA?

They fessed up then went quiet for a few weeks so a few people assumed the worst.

(If you don’t know what SecurID is: it’s a little token (about 10mm by 30mm) that generates a new 6 digit number every minute. That number is synched with a login server to ensure only people with the right physical token can log in.)
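SecurID’s actual algorithm is proprietary, but the general idea – hash a shared secret with the current time window, then truncate to a few digits – can be sketched with a generic time-based OTP in the style of RFC 6238 (a stand-in for illustration, not RSA’s real scheme):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=60, digits=6, now=None):
    # Generic time-based one-time password (RFC 6238 style).
    # SecurID's real algorithm is proprietary; this just shows the principle.
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both token and server compute the same code from the shared secret and the current time window, so the server can verify a code without the secret ever crossing the wire – which is exactly why stealing the seed database is so damaging.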

Anyway the latest news is that an Excel workbook was infected with a targeted, malicious flash swf containing a zero day.

It does appear to be a very clever attack; the spreadsheet had such an interesting name that one of the targets pulled it from the junk folder and opened it, running the Flash. I didn’t see anywhere whether the workbook had any VBA in it or not.

One important point though: it was a Flash vulnerability they exploited; Excel was merely the delivery mechanism. No Excel vuln was used, just its ability to act as a container.

I didn’t see how they were discovered either, but it sounds like the attackers pretty much got most of what they were after.

I wonder how many other orgs have been hit by this sort of attack, and either haven’t discovered it yet or haven’t admitted it in public?

Got any good links?



Test harnesses in production

Tuesday, 30th November, 2010

You wouldn’t believe how the cold dark winter evenings simply fly by here (well, at least I managed to get home, even though 4″ (100mm for the metricians) of snow fell this afty – the first November neige in Geneva since 1980, allegedly). And it’s still dumping it down – might have to throw a sickie tomorrow and go sledging with the kids.

Anyway the big argument today was test code.

Should you or should you not put your VBA test code into production?

Should you strip down your project to the absolute minimum clean prod only code?

Or should you leave in the code you used to test your production code? (assuming you are one of the 3 VBA devs worldwide who bothers to test of course)

My vote is to leave it in, and even though I was in the minority at school today that doesn’t mean I’m wrong, yet.

If your test code is crappy and distracting then yep take it out, no probs (in fact take out all the crappy distracting code), but why I think decent test harness code should be left in:

  1. It helps show what the code is meant to do
  2. It helps when you are trying to fix something later in production at a user’s desk.
  3. It shows that someone did some testing sometime
  4. If you make significant edits to your code you should retest; if that edit was removing what you thought was the test code, how will you check you haven’t broken something? Add a test harness???

What do you think? What am I missing that these clean code freaks can see? Remember I’m not for a minute suggesting leaving in a load of random junk scattered throughout a project. I’m thinking of separate modules, or at least sections, with a bunch of meaningful tests that exercise the main functionality of the system in a controlled way.
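To make the idea concrete, here’s a minimal sketch (in Python rather than VBA, and with hypothetical function names – in VBA this would be a separate module of test Subs shipped in the workbook):

```python
# A sketch of shipping the test harness with production code. Names
# (net_price, run_self_tests) are illustrative, not from the post.

def net_price(gross, discount_rate):
    """Production code: apply a fractional discount to a gross price."""
    if not 0 <= discount_rate <= 1:
        raise ValueError("discount_rate must be between 0 and 1")
    return round(gross * (1 - discount_rate), 2)

def run_self_tests():
    """The shipped test harness: it documents what the code is meant to do,
    proves testing happened, and can be re-run at a user's desk after any
    later edit (points 1-4 above)."""
    assert net_price(100.0, 0.25) == 75.0      # basic discount
    assert net_price(0.0, 0.5) == 0.0          # zero-price edge case
    try:
        net_price(100.0, 1.5)                  # invalid rate must raise
    except ValueError:
        pass
    else:
        raise AssertionError("out-of-range discount should raise")
    return "all self-tests passed"
```

The harness lives alongside the code it tests, in its own clearly-marked routine, rather than scattered through the production logic.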

What do you do?



European Spreadsheet Risk Group 2010

Friday, 23rd July, 2010

We had another excellent Eusprig Conference last week – congrats to the organisers.

Lots of interesting discussions both within the sessions and outside, and of course in the pub.

Some highlights:

Sumwise is a new spreadsheet-alike product that allows more structured models, runs in the browser locally or remotely. It looked really good, I can see a whole class of problems that it fixes very elegantly.

EASA presented on their tech for publishing spreadsheets to web servers for browser based usage scenarios. Having built a few of these Excel-runners myself (I still have the scars) I appreciate what’s involved. I liked the way it would work with any spreadsheet and is not as picky as Excel Services (2007 anyway, 2010 is more accommodating).

ClusterSeven were talking about new value they are discovering for clients by tracking cell changes over time. They are able to build up not just validation trends, but also business, pricing, economic etc trends. I suspect converting all that unstructured tat scattered across the average spreadsheet forest into mineable information is more valuable, and a better sales story than the hunt for ‘potential’ errors, or mischief.

Dean Buckner from the FSA described their current views on data risk, and its close relation spreadsheet risk/end user apps. I always enjoy the clarity with which Dean explains what the FSA care about and how those things should be addressed. For example, sometimes just a written policy is fine; for other areas the FSA want clear practical evidence.

There was some interest in trying to create a generally agreed set of best practices, with caveats as required. I’m not sure if this is something Eusprig will officially endorse/sanction, but I think it’s something they must do if they want to maintain credibility. You can’t spend 10 years saying ‘what about the spreadsheets?’ and then offer nothing to help.

I was disappointed to miss some of the academic papers, which ran on a different track. I am not a fan of the Eusprig 2-track approach. I don’t think there are enough people interested in this area to divide further, and I think the current conf length (1.5 days) could be extended by 3 hours to allow for the academic stuff, perhaps on the Friday afternoon.

So instead of hearing the evidence of how names can impair less experienced users we had a half hour slot about why a certain modelling company use names extensively. This was a little long on hyperbole and a little short on fact/evidence for me. And it unfortunately failed to address the scenarios that make many experienced commercial devs wary of names in the real world.

My favourite (repeatable) quote of the event actually came just after
Ralph Baxter, the CEO of ClusterSeven, had explained some of their new features/use cases to me as we ascended a lift in the tube system. As we got to street level some bloke turned round and said…
Drum roll please…
“Ralph, that’s the best elevator pitch I have ever heard”
That bloke turned out to be Mel Glass from EASA; we all then spent the next hour discussing the harsh reality of corporate spreadsheet use. (And some of the opportunities around at the moment)

One of the people pushing for some generally approved spreadsheet techniques was Morten Siersted from F1F9. Of course we will never all agree about the minutiae (note the interminable named ranges debate). But it has to be better to have reviewed a well thought out approach and decided where you will adopt it and where you won’t, and the supporting reasons.
FAST is one of these well thought out approaches, and it’s free/open source, non-commercial etc etc. And unlike some of the others, FAST stands on its own: there are no chargeable tools required to implement or test it.

It’s here.

I’m not sure where the best place is to discuss it, but I do think we should discuss it. I’ll maybe do a more in depth post in the next week and we can discuss it there, or if FAST put up a discussion blog post that would be even better.

I’m not sure which is the most contentious, climate change or spreadsheet modelling/developer standards?

We’ll see I guess.

Did you go to Eusprig? what did you think?



ps I managed to use the é and the è on my Swiss keyboard today.

Eusprig 2010

Tuesday, 22nd June, 2010

Pack yer party frocks, it’s nearly time for Eusprig’s annual conference.

In my opinion this is a must attend event for anyone serious about professional spreadsheeting. It should also be required for anyone contemplating spreadsheet management/migration/quality/control projects. Why make all your own mistakes when you can invest 2 days, a couple of hundred quid and learn from others, bypassing a heap of pain (and cost) for your own organisation/project?

There are two key aspects

  1. The technical conference content, including several schools of thought on best practice (not sure whether to bring my cornering kit, or my gum shield and boxing gloves), spreadsheet quality project post mortems, and lots of original research. And not just wishy washy stuff – proper, well designed experiments that really focus in on key issues. I am particularly looking forward to the slot demonstrating that names make spreadsheets harder to understand. I’ve had plenty of rows on that very topic.
  2. Social/networking – where else do you get the opportunity to buy drinks for big cheeses at the FSA, and big cheeses from the spreadsheet audit/management software vendors, and student bludgers? At least that’s how I remember it.

Full conference details (including dates: Thur/Fri 15th – 16th July, and venue: Greenwich Uni London, England, and agenda etc) are here.

I’m giving a keynote first thing Friday (which seems a bit harsh!).

Why not combine it with the Excel Dev Conf for the full spreadsheet-oholic effect?

Are you going to Eusprig??

If not why not? (serious question) – what would need to change for you to attend?



non error error

Wednesday, 2nd June, 2010

How come this = 0 and not #REF! or something?


=SUM(G8,#REF!) = #REF! as expected

=SUMIF(E7:E12,#REF!,F7:F12) = 0 too.

All in 2007 (and 2010); I don’t have a 2003 to check just now – is it the same? (Just the SUMIF obviously, you dinosaurs don’t have the luxury of SUMIFS ;-))

OpenOffice returns the #REF! I would expect.

Would you expect a formula to return an error if one of its required arguments is an error?

In fairness, if there are #REF!s in the data it matches them and returns the total, so in a literal sense it ‘works’, but I’m not sure it’s what I would expect. What about you?
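The behaviour can be modelled with a toy sketch – an assumption about Excel’s semantics, not its actual implementation: SUMIF seems to treat an error in the criteria argument as just another value to match, rather than propagating it the way SUM does.

```python
# Toy model of the SUMIF behaviour described above (an assumption about
# Excel's semantics, not its implementation).

REF_ERROR = "#REF!"  # stand-in for Excel's #REF! error value

def sumif(criteria_range, criteria, sum_range):
    # Sum the entries of sum_range whose matching criteria cell
    # equals the criteria - even when that criteria is an error value.
    return sum(s for c, s in zip(criteria_range, sum_range) if c == criteria)

# No #REF! in the criteria range: nothing matches, so the result is 0,
# mirroring =SUMIF(E7:E12,#REF!,F7:F12) = 0.
# If the data does contain #REF!s, they are matched and their values summed.
```

Under this reading the criteria argument is never evaluated as a formula result that could propagate, which would explain the 0 instead of the #REF! that SUM (and OpenOffice) return.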