Silicon Valley based mainly…any signs here of a slowdown?
I don’t know about the trendy startup scene, but consulting seems as boomy as ever. I’ve recently departed from that scene, but looking from the outside there are certainly no signs of a slowdown. If anything, wages seem like they might finally be creeping upwards, after seeming to hold strangely stagnant in the face of huge demand and little supply for a long time.
My feeling is that we have far fewer of the dot com style, revenue-less and business model-less companies in Ireland than they do in Silicon Valley, so we’re far less exposed.
Just looking around at the moment, it’s a mixed bag in Dublin. Before Christmas most of the companies I dealt with were talking about expansion, and I was only aware of one company that was in trouble - they had said that if they didn’t get another round of funding they’d be gone in January. They didn’t get the funding, but they still seem to be hanging on. The others have mainly stalled: they’re interviewing people but not taking very many on, and they seem to be trying to create full teams before they move.

The real crisis is in senior people - people with 15-plus years’ experience who can bind teams together and understand the delivery process. I think this is a result of the hole that was created in the early 2000s when the tech industry was flat and nobody was interested in working in it. It’s also hard to get senior people with recent development experience because career progression in the tech industry is so skewed towards managerial positions - i.e. you won’t get a pay increase until you are a manager, so you have to give up development. Getting a six-figure salary as a developer is virtually impossible, so many good developers move into management very early and development skills get lost.

The contract market had dried up but seems to be flowing again - I’m not sure why. It’s possible that more mature companies were looking for permanent staff, which might be an indicator that the startup scene was struggling (maybe it still is), or the more mature companies have found they can’t get permanent people and are now willing to consider contractors.
I’m not sure if all the above indicates a turning of the tide or if it’s just a slow start to the year. I don’t think things are slowing down but they’re not galloping forward.
The Trendy Startup scene seems to be a complete mess. We’ve gone from no support for tiny startups 10 years ago to absolutely throwing incubator space and zero-stage funds at anyone with phrases like “We’re the Uber for —” in their business plan. Then they spend all their time going to hipster events with other startups and writing reports on how well they’re doing, rather than actually doing much. It’s a total circle-jerk.
A lot of the shit that comes out of the incubators is shocking. The companies that are doing well are either much bigger non-Irish firms that have opened development houses here for tax purposes, or Irish companies that have ignored the “trendy startup” circuit completely and actually spend their time executing.
Insourcing does seem to be in vogue. I’ve been working for over a year now with a US conglomerate in the top 50 of the Fortune 500 that had outsourced 100% of its dev work since the dotcom era. The rationale for bringing it back in was that they had lost nearly all internal knowledge of how their own systems worked, and the contract houses knew it and were taking advantage. Headcount is nearly 100 now, and they are opening similar centres in APAC and the Americas. Approximately 50% are from outside Ireland, hailing from Spain, Italy, Portugal, Greece, India, China and Brazil. We still haven’t taken 35% of the work off the contract houses.
My impression was that it was one very talented, dedicated, disciplined, bright, high-level individual in BOI largely responsible for this. Since moved on to new pastures. Anyway, my understanding of the nature of EAI system development before reading your comment was that you must have someone like that at the top. It is not something that you are likely to achieve by throwing new hires at it. Btw, I suspect the BOI individual, who developed his whole career within BOI, would be under some contractual constraint not to go on to other banks and do the exact same for them. (Borne out by his new position).
I would be interested to hear your opinion on this, because as someone relatively new to software development, I really struggle to conceive of how development like this would be achieved once you go beyond a very small, close-knit team. Would the broad knowledge required of business issues and technical aspects be available at all? For example, I suspect you would need fairly deep knowledge of the data model of each individual system, probably proprietary to each individual bank, plus real insight into how users of those systems actually use them - and that’s well before you even get onto the technicalities (and art) of something like Sun Java CAPS or whatever the equivalent is in banking.
Or how would you actually do a project like this? Excuse my ignorance. I really know little of the world of banking, or even of software development in a large corporate environment. I’m only extrapolating from how I would personally work for smaller organisations.
The BOI individual was firstly a fantastic engineer, particularly in having broad knowledge (within his sector), and secondly a manager/director, although he must have excelled at that too. I know the ‘buy-in’ from the top stuff is non-trivial. But I have done some small-scale integration (using Camel) myself, and the most exigent issues I came across were mostly oriented around middle-manager and user issues (the board-level need/commitment is either there or it is not - typically any board director, even without any technical knowledge, easily conceives of the holy grail of having all their information “integrated” and will do anything asked of them to further it).

I may be wrong, but I have an intuition that such frameworks as you point to mostly go out the window, remaining only secondarily in the background to orient the other activities that by necessity come to the foreground. Actually, I found that in my own projects the development, in conjunction with dealing with stakeholders, became mostly intuitive when dealing with issues like those raised in your diagram. The mind boggles at how you would scale that for an organisation of the complexity of BOI. I suppose it is done by constraining complexity at the requirements-analysis stage, but nearly always, once the development gets underway and the detail reveals itself, proliferating complexity happens anyway. Well, I have never managed to preempt it, no matter my efforts.

Anyway, as I said, I am only extrapolating from my own projects. I’m only speaking as an analyst/development grunt, and not even a very experienced one. But I’m interested to see if anyone on here has any insights about what they did in BOI and whether the other banks would be able to emulate it.
It’s 99% politics and blind luck. Technical ability has little to do with it. When it comes to managing teams and product divisions, technical ability only factors in when the upper end of the management chain needs to call bullshit on what they’re being told from lower down the chain. That’s about it. And even when you are up near the top of the chain, your ability to change the outcome is nearly always totally dependent on the people even higher than you. Even when you’re VP Eng, your ability to change and fix what is seriously wrong with a project is totally dependent on politics. One’s ability to persuade others that the change has to be made, given the vagaries of CEOs, other VPs, and investors, is a real uphill battle even when things are obviously spectacularly wrong. It’s all politics.
So the fact that some guy got something worthwhile done at BOI was pure chance. His probability of success at his next gig is actually less than the usual 30%, due to second-project syndrome: people thinking they have some special insight, whereas in most cases they were just lucky. Now, if they have three or four successes in a row, then maybe they do have something special going on. But that’s about a once-a-decade phenomenon, in the whole world. If someone sustains a better than 3-out-of-10 batting average, then they have worked out what gives real-world dev projects a chance of shipping something.
One number that really has not changed over the decades is the failure rate of dev teams: around 70%. In other words, only 30% of dev teams ship anything usable after some indeterminate period of time. In my end of the business, of the 30% that ship, maybe half - another 15% - are pure junk, either technically (full of bugs, unusable interface, features don’t work) or businesswise (revenue is nil or low, less than dev costs, bad ROI). In enterprise, being a bureaucracy, it’s easier to hide failures, but based on what I’ve seen the numbers are about the same or even worse. At least with commercial software you have immediate feedback if you have shipped crap, as no one buys it. In enterprise the customers, usually other employees, have few options to show their unhappiness. Which is why, in my experience, the absolute worst software I have ever seen has all been internally deployed in big companies.
With big projects the rule for success is to break very big projects into lots of much smaller, independent projects, each with a single well-defined goal (closed feature set) and a closed-end duration, and with limited interdependency, if any. Full decoupling is best. And if a dev team is failing, shut it down fast: all failing teams fail from early on, and it will never get better. So at the end you will have at least 30% of your original functional spec implemented and shipped. The other 70% can be attacked next time around. Small spiral projects get stuff done.
With the traditional big-project approach, your probability of having nothing at the end is at least 70%. And this will be after two or three times the original time estimate and budget. In the other 30% of projects, the “successful” ones, the probability of all the original spec features working correctly and as promised is nil. In most cases the “success” only happens when a lot of the original spec is dumped. The percentage of usable features in the shipped code is basically a normal curve peaking between 30% and 50% of the features in the original spec. So in other words, the big-project approach gives you about a 20% (maybe) probability of getting the same working feature set (usually less) as what you would get with the small-project approach. Every time.
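The arithmetic above can be sketched as a back-of-envelope expected-value comparison. The 30% success rate and the 30-50% usable-feature range are the figures quoted in this thread; the ten-way split of the spec is an assumption purely for illustration:

```python
# Back-of-envelope comparison using the failure rates quoted above:
# ~70% of dev teams ship nothing usable, so a 30% success rate per project.

BIG_PROJECT_SUCCESS = 0.30   # probability any one project ships at all
FEATURES_IF_SHIPPED = 0.40   # midpoint of the 30-50% usable-feature range

# Big-bang: one roll of the dice for the whole spec.
big_bang_expected = BIG_PROJECT_SUCCESS * FEATURES_IF_SHIPPED

# Small-project approach: split the spec into 10 decoupled sub-projects
# (an assumed number), each independently subject to the same 30% success
# rate; each success delivers its whole slice, failures are shut down
# early and their scope retried next time around.
SUB_PROJECTS = 10
SLICE = 1.0 / SUB_PROJECTS
incremental_expected = SUB_PROJECTS * BIG_PROJECT_SUCCESS * SLICE

print(f"big-bang expected feature coverage:      {big_bang_expected:.0%}")
print(f"incremental expected feature coverage:   {incremental_expected:.0%}")
```

Under these assumptions the big-bang route yields an expected 12% of the spec shipped and working, versus roughly 30% per pass for the small-project route - broadly the gap the post describes.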
That’s why pretty much every story you read about a government IT project is a fiasco. Government only does Big Team. It’s even worse in the private world, but you hear about it less often - only when it is catastrophic enough to hurt the company’s financials, with public companies, or to cause some regulatory snafu. Like with banks.
Given all that, the one sign of good management in a company is that the more experienced tech people have not become deeply cynical about what they are working on. Black humour is a given in these situations, but when the people are bitter and cynical the probability of any kind of successful project is usually nil. Best to move on to somewhere where at least the tech people enjoy what they are doing, and have a sporting chance of actually shipping something.
Governance, governance and some re-engineering. I work with financial institutions on this and many other data and system integration topics. The biggest blocker is that ambition cannot be matched by reality.
If you come bottom-up, there are too many disparate schemas representing customer. Sure you can match off fields etc, but what about when the fundamental structure is different?
I worked with one retail bank who are trying to build an ODS, starting with customer (banks always start integration projects with 360 customer view, or account opening - their two most difficult non-regulatory problems - and quit when these don’t work). The ODS project immediately ran into difficulties because some entities within the group store contact preferences at account level, others on the individual, and others a mix. A customer might have disparate contact preferences across different accounts or product lines. So which is the valid view of the customer? Ok, we need to bring in account-customer relationships and tie it off of that.
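The contact-preference clash described above can be made concrete with a tiny sketch. All of the record shapes and field names here are invented for illustration - the point is only that two source systems storing the “same” preference at different levels of the data model give the ODS no single valid answer:

```python
# Hypothetical sketch of the schema clash: one source system stores
# contact preferences per account, another per customer. All names
# here are invented for illustration.

# System A: preferences hang off the account
system_a_accounts = [
    {"account_id": "A-1", "customer_id": "C-42", "contact_pref": "email"},
    {"account_id": "A-2", "customer_id": "C-42", "contact_pref": "post"},
]

# System B: a single preference per customer
system_b_customers = {"C-42": {"contact_pref": "phone"}}

def customer_preferences(customer_id):
    """Collect every preference recorded anywhere for one customer."""
    prefs = {rec["contact_pref"] for rec in system_a_accounts
             if rec["customer_id"] == customer_id}
    if customer_id in system_b_customers:
        prefs.add(system_b_customers[customer_id]["contact_pref"])
    return prefs

# Three mutually inconsistent answers for the same person - which is
# the "valid" view? The ODS cannot decide without an account-customer
# relationship model plus a precedence rule layered on top of it.
print(customer_preferences("C-42"))  # {'email', 'post', 'phone'} in some order
```

This is the point at which “ok, we need to bring in account-customer relationships” gets said, and the bottom-up complexity starts compounding.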
Stop. This is bottom-up building of complexity, where transformation is actually needed at a product / line-of-business level to clean up the data with some targeted Know Your Customer activities - typically only performed on a demand-driven basis when the customer requests a new product, and possibly only then stored against that line of business, because they have their own bespoke account opening and activation systems. So it’s a chicken-and-egg situation for the single view of the customer.
So banks might try top-down. This allows a governed approach to deciding what the single view of the customer should look like (either as a retrieved message on the bus, or stored with summary and KPIs in the data warehouse). You get buy-in on what it should look like, and you sponsor a single line of business (consumer lending or credit cards, for instance) to build a service or populate a piece of the warehouse to satisfy this. But then the transformation rules become quite complex, and the solution ends up biased towards the LOB that got there first. As soon as Mortgages and Savings come on board, they disagree with the solution and build their own. The ambition cannot be matched by reality.
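The LOB-bias failure mode above can be sketched in a few lines. Everything here is invented for illustration: a “canonical” customer mapping that was agreed top-down, but whose shape quietly mirrors the lending system that implemented it first, so the next LOB’s records don’t fit:

```python
# Illustrative sketch (all record shapes and field names invented) of how
# a "canonical" customer model drifts toward whichever LOB builds it first.

def to_canonical_customer(lob_record):
    """Map a line-of-business record into the agreed canonical view.
    Note the shape assumes consumer lending's keys and concepts."""
    return {
        "customer_id": lob_record["cust_no"],    # lending's key name
        "risk_grade": lob_record["risk_grade"],  # a lending-only concept
        "contact_pref": lob_record.get("contact_pref", "email"),
    }

# The sponsoring LOB's records fit perfectly, of course.
lending_record = {"cust_no": "C-42", "risk_grade": "B2", "contact_pref": "post"}
print(to_canonical_customer(lending_record))

# A mortgages record is keyed and structured differently, so the mapping
# breaks - and Mortgages writes its own transformation, forking the
# "single" view of the customer.
mortgage_record = {"borrower_ref": "M-9001", "ltv": 0.72}
try:
    to_canonical_customer(mortgage_record)
except KeyError as missing:
    print(f"mortgage record doesn't fit the canonical model: {missing}")
```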
And so we get to strong governance. Bank LOBs tend to have their own budget, they may not even be using a common tool set to design common systems and interfaces, let alone have shared services and systems. You actually need an architectural mandate to block projects from continuing on the same path, and that’s not easy because you’re going to take a lot of short-medium term pain in time to market, which won’t fly with individual VPs/Directors/whatever title means decision maker. One bank I’ve worked with has seven CIOs.
The further away a bank is from having a solution, the more fractured you know the governance and management structure is, and the less likely it is you will ever be successful. I’ve seen banks attempt SOA, microservices/REST and whatever you’re having yourself to build common services and data, when they really shouldn’t be touching anything until they do a core banking transformation. Now they all want to know how to be agile. They keep using different tools and techniques in an attempt to continue layering services instead of transforming the core, which they’re afraid of because they fired all their mainframe developers, rely heavily on contractors, and have moved key system operations offshore.
Some requirements like PSD2 (payments directive) will force banks to build limited common interfaces, but these won’t really solve the issues under the hood, they’ll bash through the point to point integrations and transformations required to meet the directive, and no more.
In short, it’s a mess, but it’s very lucrative if you want to be a contractor and bang your head off the wall trying to direct individual projects in the right direction.
Not surprised - they are planning to spend close to €1 billion over the next four years on their new IT system.
The Bank of Ireland system implementation being referred to is called, with biblical appropriateness, Project Omega.
It involves the replacement of their aged, in-house developed core banking and bookkeeping system. It is simply an installation of Temenos’ T24 “bank in a box” product, together with a bunch of products from other vendors.
The publicly stated budget is €500 million. In reality this will be higher. Cap Gemini are the service provider. Because the Bank has outsourced most of its IT functions to third parties - IBM for ITO, Accenture for projects and change, and HCL for resourcing - there are few Bank resources to work on projects such as this. The budget will increase because of the costs these third parties impose on such a project. The total project cost will be closer to €1 billion.
I certainly would not have selected Cap Gemini for a project of this size or complexity. Their track record does not justify it.
The Omega design process has been in train for 18 months. All they have delivered so far is a bunch of exotic PowerPoint presentations. It is not a particularly complex or spectacularly intelligent design. It does not evince some magnificent and wondrous brain power. It is not a design. It is the implementation of existing packaged products.
There are also significant gaps in the design, such as data architecture.
The Omega target landscape involves products from more than 10 other vendors, with poorly defined boundaries. This is needed because of deficiencies in the T24 product suite. The integration and interoperation of these products is not well defined or elaborated in the design. For example, T24 is very poor at AML/CFT/CDD and fraud, and relies explicitly on third-party products.
Cap Gemini are forcing the agile snake oil down the throats of anyone involved in Omega. They are not even following a methodology such as SAFe, which might actually be useful for a large programme such as this.
This is Bank of Ireland’s second attempt at a bank-in-a-box implementation. The previous attempt, over 10 years ago, was called ALNOVA (fondly known as ALNEVER by those involved). The work was done by Accenture using a Spanish product they had acquired. It is only in use in the Bank of Ireland joint venture with the UK Post Office. It was abandoned for Ireland after expenditure of hundreds of millions.
Large IT projects such as these are very risky. They impose great stress on the organisation through demands for resources, impact on budgets, and the organisational change they bring with them. Organisations, like people and society, can only accommodate so much change. When the project starts to fail, or run over schedule or budget, what happens is that the organisation narrows its focus to the single large project, and other initiatives are stopped or deferred. The organisation starts to exhibit all the characteristics of groupthink.
All this has to happen while the Bank proceeds with other projects: MiFID II, IFRS 9 (both of which have hard deadlines), PSD2 and 4AMLD/FTR as well as operating as normal and maintaining existing systems and other smaller projects.
When you are spending this scale of money on a system implementation, common sense should tell you to consider some creative alternatives: buy a smaller new bank with good, scalable IT systems and reverse yourself into it, or buy an entire software company and dedicate its development team solely to your project.
Temenos is a relatively small company with annual turnover of just over USD600 million. Over the years, they have expanded the functionality of a fairly basic product through the acquisition of a large number of small companies whose products they have generally poorly integrated.
And just in case you thought all was well with the other large Irish bank, AIB, well its IT systems and processes are in even worse shape. AIB also tried to replace their core banking systems some time ago with disastrous outcomes.
AIB sued Oracle (rte.ie/news/2011/0131/aib.html) for the failed implementation of their iFlex bank-in-a-box product.
AIB selected iFlex as their new banking solution for both commercial and retail banking. The commercial implementation programme was called Pentagon and it went reasonably well.
The retail implementation was called ACORN and was a complete disaster.
The reason AIB selected the iFlex product is that it was forced on them by the ego of Steven Meadows, who has since left - independent.ie/business/iris … 64027.html. His departure is the reason why the ACORN programme was finally cancelled.
Meadows worked in Citigroup, which developed the original product and spun it off to a separate company called iFlex, which was then acquired by Oracle - en.wikipedia.org/wiki/Iflex.
There is loads of material on Meadows and AIB trumpeting this stupid decision:
Even when ACORN was falling apart, Meadows insisted on pushing it through. No dissent was tolerated. The programme ate and spat out staff.
They recently completed IT outsourcing to Infosys and Wipro. Their procurement function even won an award for this. Meanwhile, IT staff are leaving in large numbers, with consequent pressures on IT systems and their operation and support.
So both large Irish banks that provide banking services to over 80% of the country are involved in risky IT initiatives. That has to represent a major operational risk. But don’t worry. We have the best little regulator in the world overseeing this and making sure nothing can possibly go wrong. Paddy is safe. No RBS-like systems failure on the horizon here.
Fantastic information. Thank you Chicken, Jammy and JMC. Appreciate the distilling of your experiences. I suppose what we’re really talking about is the rough banking equivalent of a 1990’s style SAP/Oracle ERP implementation. And I was likely wrong about the accomplishment of my acquaintance (well I had misread the comment so as to understand the development as practically fait accompli.).
Related to ChickenParmentier’s post on core banking systems (possibly better in the banking thread).
That thing about core banking platforms, Oscar D Torson
medium.com/@odtorson/that-thing … .g3b8qasut
Great read thanks.
Maybe there are a few nuggets of intelligence in there, but they’re outweighed by horse shite:
It’s true that in the COBOL-to-OO/Java transition era there was a lot of unrealistic crap being peddled about the latest fad of “intergalactic distributed objects”, but that was only the latest in a never-ending line of IT industry hype. I never came across a system that failed because of a lack of “compute and communications resources”. (I personally met Orfali and Edwards and discussed their book with them at the time.) Intergalactic objects didn’t fail because the internet wasn’t robust enough to support super-generic distributed ‘Customer’ and ‘Order’ objects. No, projects failed for the same reason they always had, always do, and always will: the (mis)management of organisational complexity and requirements specificity, and the inability to bridge the gap between the business and the techies.

Whether it was knowledge-based systems in the 80s, client/server and objects in the 90s, service-oriented architecture in the noughties or microservices today, the IT industry has always dealt in silver bullets and snake oil at the expense of hapless customers with real business needs. Indeed, not only IT, but the consulting industry in general is populated by sharks ready to separate gullible people from their money.
Part of the mess of existing systems is the failure to fully offload, and a chunk of that is down to capacity/throughput failures: project X takes 10% of the load off; project X uses way more resources than planned; project X costs more to run than the mainframe it is supposed to replace; project X is quietly shelved at just 10%. Five years later, there are two legacy systems… rinse and repeat for 35 years…