IT & Automation across business

Inspired by how many coders there are around here from the COBOL thread:

How do my fellow coders feel about how long it will take for “the cloud” to take hold? We were warned it would do everything from replacing IT support to taking over other aspects of IT such as business intelligence reporting, while providing better mobile data collection etc etc etc.

I think it’s just happening more slowly than predicted, but if an organisation decides to properly move to streamlined systems the change can be sudden, and quite a few people end up sitting at their desks wondering how they are supposed to fill their day now… There are so many jobs where I work that I just can’t see being around in 10-15 years’ time. If we moved to a new system and dropped our various half-automated, half-manual systems for a single cloud system, those jobs would be gone in a year or two.

In good economic conditions these people would find work elsewhere or be redeployed. In times like these, with what looks like stagnation for some time to come, this just adds to structural unemployment.

Agreed. Companies like Fund Recs, AQmetrics and Corlytics are on the right path. The whole fund reconciliation, settlement and breaks process which fills so many jobs is going to be automated. I worked on one side of it 10 years ago. The amount of wasted, Masters-educated labor doing these jobs was crazy back then.

I think it’s not so much a cloud thing as an AI and automation thing.

Scope has been there for a long time to streamline and automate many workflows, though with the need for a bit of process improvement and standardisation at the front end. I think that as AI improves, we get to the point where, for more and more workflows, the computer can just come in and do the messy job without the tidy-up/standardisation step.

In the long run it’s the right way to do it; however, due to the shape of the economy and capital ownership, it may be an ugly, painful transition.

The buzzwords may change but the technology remains the same. As do the pitfalls and inevitable disasters.

Remember when the current “AI” technology was just called Bayesian data analysis and stochastic data modelling? Because that’s all it is. Same old stuff in a brand new shiny buzzword.

Or remember when “Cloud Computing” was called Grid Computing? Which itself was really not much more than a rebranding of Thin Client which itself was just a rebranding of clustered time-sharing. Which they had been doing back in the days of the S/370.

I remember one day, it would have been around 2011, someone excitedly showing me how they were able to offload a big compile from their laptop onto an AWS instance, and a 12 min compile could be done in about 15 secs. This was going to be the future. This was a huge breakthrough. It was fun watching the expression on their face when I told them about the first Apple II I ever used. In 1978. It was running Forth. And was networked to a wall of DG minicomputers. Which themselves were networked to an even bigger wall of beefier DG minis about 5 miles away.

The Apple II ran software that did serious data analysis, but all the number crunching was seamlessly handed off to the two levels of minis. The locals did the data prep and compression, which was then sent up to the big guys, who crunched it and sent it back to the local minis, who sent the nice numbers to the Apple II to display. Now the interesting thing about the setup is that all layers were essentially running the one same program. Written in Forth. The source code on the Apple II was pretty much the same as the source code running on the big DG minis. Only the i/o code was different. You could do that sort of stuff in Forth. The Apple II was not just a dumb graphics terminal. It could have done the number crunching itself; it just would have taken many weeks for each data set. So the data got seamlessly shunted off to the component platform that could do it fastest. And shunted back when it was processed.
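In modern terms the shape of that setup was roughly the following. This is only a sketch, and every name in it is invented for illustration: the crunch routine is identical on every tier, and only the i/o around it decides whether the work is done locally or shipped off to the bigger machine.

```python
# Minimal sketch (all names invented) of "same code, different i/o":
# crunch() is the same source on every tier; only the transport around it differs.

def crunch(samples):
    """The number-crunching proper; identical wherever it runs."""
    return sum(x * x for x in samples)

def run_local(samples):
    return crunch(samples)                 # small machine does it itself, slowly

def run_offloaded(samples, send, receive):
    send(samples)                          # ship the prepped data up the wire...
    return receive()                       # ...and get the crunched numbers back

def serve(receive, send):
    """The other end of the wire: the big machine runs the very same crunch()."""
    send(crunch(receive()))
```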

Now the most interesting thing about this particular setup was not that it was very clean and elegant, which it was, but that it really was not considered any kind of big deal at the time. I don’t think anyone thought it was novel enough to write up in a paper, for example. Although if you dig through the literature of the time you will find equivalent working systems described.

So my reaction in 2011 to the AWS compile demo was, well that’s nice, but show me something that actually is new and revolutionary. Because I’d sure like to see it. It’s getting mighty boring around here the last decade or so.

So in the long run the whole Cloud Computing fad will follow the usual cycle of huge promises, unsuitable implementations, and disappointing results. With it eventually finding a home in some small niche market. Just like every other over-hyped Next Big Thing in the past. It’s usually the unhyped things that cause the real revolutions in the long run. The dinosaurs may make all the noise but one should always watch the small mammals in the undergrowth. Because that is where the future always is.

So far I’ve only seen four small mammals - the Altair 8800, the 128K Mac, Mosaic, and Android. Always on the lookout for more small mammals.

Deep neural networks are where the real magic is happening these days. They will change everything. No one knows how they work in detail but they work exceptionally well as nonlinear function approximators (send in an input → black-box magic (neural net) → desirable output).
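To make “nonlinear function approximator” concrete, here is a minimal toy sketch: a one-hidden-layer net nudged by gradient descent until its black-box output matches a desired output. The target (sin(x)), layer sizes, learning rate and step count are all arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x)                                   # the "desirable output" to mimic

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)                    # input -> hidden nonlinearity
    pred = h @ W2 + b2                          # hidden -> output
    err = pred - y
    # backpropagate the squared-error gradient and nudge the weights
    dW2 = h.T @ err / len(x);  db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = x.T @ dh / len(x);   db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(float(np.mean(err**2)))                   # mean squared error; should end up small
```

Scaled up by many orders of magnitude, with GPUs and a lot of engineering on top, that is the basic trick behind the systems the big firms are deploying.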

All the big tech firms are pouring money into this area, and neural networks are now powering Google’s natural language processing, search, AlphaGo, etc. The ability to rent scalable GPU nodes from AWS significantly reduces the barrier to entry.

Fundamentally the question is… can your job be replaced by a nonlinear function approximator?

Do you take an input (think driving a car: visual input, tactile input, text, etc.) and then perform a predictable operation on that input?

If so, a neural net will be coming for your job.

Until I see evidence to the contrary, I’m gonna agree with jmc. Back in the early 80s AI and expert systems were all the rage. What’s changed now? Even if deep neural networks do offer anything beyond the hype … biological agents still have two advantages over them:

  • Four billion years of mistakes being punished mercilessly by evolution;
  • The ability to be sued.

As for the cloud … it was somewhere around the end of the 80s iirc that Sun started using the phrase “the network is the computer” in their PR. Nothing new under the Sun (pun intended).

It’s easy to replace the millions of years of evolution: that’s exactly how neural networks are trained to optimise a particular behaviour. Things that have changed since the 80s:

  1. Computing power
  2. Large datasets
  3. Big money and institutional support (how much did Mobileye sell for…)
  4. New algorithms

The cloud has created massive efficiencies. How many companies simply upload to AWS and forget about it? I remember the days when small businesses kept a Windows NT server going in the backroom. It’s not all hype; look at AWS’s profits.

biological agents can be sued, hmmmm?
interesting!
(scurries away to code AI-cloud-based bot to sue biological agents)

You do know how neural nets work? Or knowledge representation? Or how about knowledge inference and reasoning? Or solution set resolution and convergence? Neural nets were first touted as the next big thing at the end of the 1980s, when the 1980s expert systems approach to AI ran out of steam. Due to it not actually working. Neural nets never did deliver back then. And there has been little meaningful improvement since. Because when push comes to shove they have never been much more than weighted decision trees with some half-assed dynamic feedback mechanism. Despite all the very fancy equations. Just because you can throw a 1GHz async-clock FPGA with 2^n cells at it now, rather than a 10MHz single-cell ASIC back then, does not change the basic real-world result. The stuff will never live up to the hype. Not only have they not worked out a viable solution to the problem yet, they still don’t actually understand what the problem really is.
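For reference, the simplest form of that “weighted something plus a feedback mechanism” is the 1950s-era perceptron rule. A toy sketch only; the data, learning rate and epoch count below are arbitrary:

```python
def perceptron_train(samples, labels, lr=0.1, epochs=50):
    """Learn a weighted-sum threshold rule by nudging weights on each error."""
    w = [0.0] * len(samples[0]); b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            feedback = target - out                      # the "dynamic feedback" bit
            w = [wi + lr * feedback * xi for wi, xi in zip(w, x)]
            b += lr * feedback
    return w, b

# Learns a linearly separable rule such as OR:
w, b = perceptron_train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 1, 1, 1])
```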

There tends to be a 15/20 year cycle with this sort of bullshit. Because that’s usually how long it takes the cultural memory of an industry to forget that it did not work last time either.

I’m no neural network expert but I keep an eye on trends and this is a big one. Facebook, Google, Baidu are all on board and spending money. Nvidia stock is going north. AlphaGo was not possible in the 80s. I’ve read a few of the latest papers and I think the ingredients are there for something revolutionary. Here’s the best overview I could find:

deeplearningweekly.com/blog/ … -in-review

:open_mouth: :open_mouth:

Can’t see that putting too many people out of a job :angry:

Haha, what a cool blog that came from. Very interesting!

lewisandquark.tumblr.com

That is not AI; it is straight brute-force statistical analysis / functional analysis. Exactly what we have been doing in image processing filters and DSP/audio pattern matchers for at least 20 years. In fact 30 years if you add stuff like audio-to-sheet-music technology and basic edge detection and feature extraction. It’s pretty much the same math and the same data flow.
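To put the “same math” point in concrete terms: an edge-detection filter is a convolution with a fixed kernel, the same multiply-and-accumulate a convolutional net layer performs, except the net’s kernel weights are learned rather than hand-picked. A minimal sketch; the Sobel kernel and the toy image are standard textbook examples, nothing from the projects mentioned above:

```python
import numpy as np

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)       # classic hand-picked edge kernel

def convolve2d(image, kernel):
    """Plain sliding-window convolution: the same arithmetic a conv layer does."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8)); image[:, 4:] = 1.0        # a hard vertical edge
edges = convolve2d(image, sobel_x)                  # strong response where the window crosses the edge
```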

The only almost-breakthrough I’ve seen in AI since the 1960s was in the late 90s, when they were stumbling towards an almost usable methodology for conceptual modelling / conceptual reasoning and knowledge representation using ontological models and ontological analysis. They lined pretty much everything up for the big jump and then, nothing. They veered off into semantic web fluff because that was an easy problem to deal with, whereas the really interesting one, the fundamental one, the one that is key to a true representation of intelligence in software, leads into the whole area of cognitive modelling of cognitive processes and states. Which was way, way beyond the pay grade of any of the guys working on software ontologies and related areas. Now some people did realize that the answer was somewhere in applying the approach of Charles Peirce and his group to the problem. But try getting long-term funding for research using that particular approach.

So the one really good idea of the last 50 years in AI has withered and died. I’m sure it will be rediscovered in a few decades and someone might make it work then. Then you might see some actual real intelligence. Not a pastiche of savant mimicry, which is all they have produced so far.

So that whole area of meaningful research in AI pretty much spluttered to a halt about 10 years ago, and after they got bored with the semantic web the whole herd of researchers barreled off into big data and data mining, because that was an easy problem that had been solved decades ago. And advertising companies like Google and Facebook were willing to pay big money to anyone who could get sellable data out of their exabytes of data.

And that is the current AI bubble in a nutshell. Very old statistical analysis and pattern-matching data mining repackaged as AI. It is mainly used at the moment so that online advertising companies can produce fancy metrics to hoodwink advertisers into not noticing that 95%-plus of their ad spend is a total waste of money.

So a very very lucrative technology.

But then, an awful lot of human “intelligence” is also just pattern matching. This is not a zero or one game, as you well know. Rather it’s a question of what tasks can be handled by advanced, fast but inherently limited AI, and what will remain in the human realm for the foreseeable future. We already have a lot of “AI” which we don’t call that, because what it does is “just”… whatever. Pattern matching. Following a flowchart.

It’s going to take away a lot of jobs regardless (and could even if all progress stopped today - there are a lot of overhanging jobs which could be done away with tomorrow with a bit of investment in current technology).

No cognitive process is just base pattern matching. To put it simply, it’s model, match, refine, repeat. And it’s the model bit they haven’t even begun to crack yet. What they are doing at the moment is little different from a pure stochastic model of a physical process. The fact that the stochastic model might produce useful results has absolutely no relationship with what is actually going on in the underlying physical process. Nor does it provide any understanding or insight into what is actually going on in the physical process.

Very large numbers of essentially clerical jobs will be displaced. Quite a few in the protected professions. But this is mainly due to very un-AI software. Like simple databases, rule engines and workflow automation. Basically all non-clinical pharmacists are a waste of space. They can be replaced by a terminal. As could about 80% of non-clinical, non-specialist doctors. Pretty much all non-advocacy lawyers are easily replaceable were it not for the fact that they have a stranglehold on the political system. So the laws will be rewritten to keep them in work. But I see a big drop in billable work volume, especially commercial, once the customers realize just how little value most lawyers actually add. And after the guys writing the software survive the inevitable deluge of lawsuits.

Given the track record of finance I expect a lot of mid-level and junior positions to disappear, once upper management realises it can get the same rate of returns (usually terrible) from some full-workflow software package or other.

But I don’t see dentists automated out of a job anytime soon. And given just how good the administrative class have been at creating non-jobs for themselves and their children, I expect a massive expansion of regulation and nanny statism until the demographic bulge has worked its way through. Private-sector paper pushing will contract enormously. Public-sector paper pushing will expand enormously. Until the money runs out.

Any job that physically requires doing something will probably be safe. Any job that is basically just moving paper around is probably only safe if the segment involved has serious political leverage. Otherwise it will probably go sooner or later.

But saying all that, how many people here remember double-entry book-keeping? In physical ledger books? And what a typical accounting office looked like when everything was done on paper? Now the funny thing is that when book-keeping was computerized the number of employees in a typical accounting office did not change that much. They just ended up doing different work. And although the physical work was greatly reduced, the amount of information now readily available meant that all the employees were kept busy producing information and reports that could not have been easily or quickly produced with the old physical record system. So despite computerization reducing the traditional data entry / reconciliation workload to almost zero, the new workload expanded to fill the available workday.
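For anyone who never saw the physical ledgers, the whole of double entry is that every transaction posts equal debits and credits, so the books always balance. A toy sketch of the idea only; the account names are invented and this is nothing like real accounting software:

```python
from collections import defaultdict

ledger = defaultdict(float)   # debit-positive, credit-negative convention

def post(debits, credits):
    """debits/credits are {account: amount}; the two sides of the entry must be equal."""
    assert abs(sum(debits.values()) - sum(credits.values())) < 1e-9, "unbalanced entry"
    for account, amount in debits.items():
        ledger[account] += amount
    for account, amount in credits.items():
        ledger[account] -= amount

# A sale on credit: debit accounts receivable, credit sales.
post({"accounts receivable": 100.0}, {"sales": 100.0})
# Customer pays: debit cash, credit accounts receivable.
post({"cash": 100.0}, {"accounts receivable": 100.0})

assert abs(sum(ledger.values())) < 1e-9              # the trial balance nets to zero
```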

Maybe there is a technology-improvement corollary to Parkinson’s Law regarding time available and work done.

Even though some sectors may be buggy-whipped out of existence, I suspect a lot of job roles, perhaps most, will end up over time going through the same process that accounting has gone through over the last 70 years. For a start, I cannot see any plausible AI software replacing accounts receivable in any possible future universe. Now accounts payable, that might be a different matter…

Although maybe we might end up with an escalating series of accounts receivable nagbots continuously calling the accounts payable excusebots. Threatening them with suebots if the overdue 30-day payable is not paid by end of day. Now that would be an interesting extension of robocallers.

Don’t recall claiming it would…

I deliberately didn’t use the term AI. I used the technical term non-linear function approximator in order to limit the discussion to that particular technology. I have no time for people who believe in the ‘singularity’ or other such AI nonsense. The question again is: can your job be replaced by a function approximator? Not all can… but a lot more can than you might think.

Here is an example of the state of the art:

github.com/junyanz/CycleGAN

As Richard Hamming would say: you’re stepping on each other’s toes.

I used the generic term AI because that’s where the neural net stuff was lumped back in the early 90’s, when its proponents first started making grandiose claims for it. And I remember them using non-linear weighting functions back then too. I did not take them very seriously at the time because I had seen exactly the same type of functions used to “tune” stochastic differential equations in astrophysical models, and had it explained to me by an old timer how this was used to hide a multitude of sins in the underlying model. And by the mid 90’s the whole hype and brouhaha over neural nets had pretty much dissipated.

Funny how it’s following the twenty-year cycle.

This is the sort of stuff they were going on about twenty-five years ago.

springer.com/us/book/9783540198390

and there were a couple of these published every year back then. And that was just the conferences.

I remember when Stanford University had the best new academic bookstore in the world. Back then the AI section was about seven floor-to-ceiling wide bookcases, and at least some of them were devoted to academic books about neural nets. By the mid 90’s it was down to about two or three shelves, and in ’98 Stanford sold the bookshop to a franchise chain; within five years it just sold tee-shirts and was a bad branch of Barnes and Noble. I still really miss that bookshop. I spent so much time there in the 80’s and 90’s that I can still recite from memory which subject was in every row on every floor.

And the answer to the question is no, my job is perfectly safe. I seem to have spent a big chunk of the last few decades cleaning up the messes left behind by people with CompSci PhDs, the type of people who make grandiose claims regarding their technology. Like the neural net people.

As for CycleGAN, the demo video looks like just a typical automated matchmover with some fancy-ish (maybe) rotoscoping effects. I suspect I would not be too far wrong in thinking that what works in low res will not scale to DCI 4K, for example, without very serious artifacts. Or work with really busy backgrounds or with multiple foreground actors. It would not be the first time either. It’s almost a decade since I’ve done any serious broadcast-video compositing / matchmover work, and almost two decades since I did the ILM-related rotoscoping work, but pushing framebuffers around, and what’s involved, really doesn’t change. Just the horsepower available to do it. At least from what I read in Cinefex.

One of my better hacks was getting one pixel result per clock, 4 component / 32 bit / 8 bit alpha, out of a rotoscoping compositing engine back in 1995 with only three integer processor units available. Well, there was this FPU on the CPU not doing anything, and if you asked it nicely you got one op result per clock. It was the fastest desktop compositing software without hardware acceleration available for a few years. The guy whose code I rewrote had a PhD. And his code could barely do 2 frames per sec with a 10% update region. Mine could chug along happily at 50fps with 60% update regions. But that was because I did not have a PhD. Because academically elegant code is code that does not work.
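For context on what a compositing engine actually does per pixel, the core of it is the classic “over” operator on RGBA framebuffers. This is a toy floating-point sketch only; the 1995 engine did the equivalent arithmetic in fixed point, one pixel per clock, and the colour values below are made up for illustration:

```python
import numpy as np

def over(fg, bg):
    """Composite premultiplied-alpha fg over bg; both are float RGBA arrays in [0, 1]."""
    alpha_fg = fg[..., 3:4]
    return fg + bg * (1.0 - alpha_fg)            # C_out = C_fg + C_bg * (1 - a_fg)

fg = np.zeros((4, 4, 4)); fg[..., 0] = 0.5; fg[..., 3] = 0.5   # half-transparent red (premultiplied)
bg = np.zeros((4, 4, 4)); bg[..., 2] = 1.0; bg[..., 3] = 1.0   # opaque blue
result = over(fg, bg)                                           # purple-ish, fully opaque
```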

So never, ever believe people with PhDs making grandiose or extravagant claims about their software technology. Because it’s never true. A paper does not a product make.